Bias in AI can be seen every day
I was writing an introduction for an essay the other day, one of the many assignments my teachers gave right before our short break commenced. This particular assignment asked us to find a piece of media separate from the literature the essay was about and draw similarities or differences between the two as a lead-in to the essay's focal point: our thesis. I chose Seven Percent of Ro Devereux by Ellen O'Clover, a recent release that has earned a place as one of my favorite novels. While writing the book's description, I went to note that protagonist Ro has an estranged mother.
Google first introduced Smart Compose in 2018 as a Gmail feature and later brought it to Google Docs. Almost like a writing assistant, Smart Compose relies on machine learning models trained on data from millions of users' writing to suggest the next few words as you type. The practical advantages of Smart Compose are debatable; I rarely see suggestions myself, and when I do, I don't find them incredibly helpful. I don't think it would hurt anyone to write that extra word or those three additional letters themselves. Regardless, when I was mentioning Ro's estranged mother, Smart Compose fired up after I'd finished typing 'estranged' and gave me a suggestion in its classic light gray font: father.
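To see how a model like this can absorb such an association, consider a deliberately simplified sketch: a bigram suggester that recommends whichever word most often followed the previous one in its training text. This is my own toy illustration, not Google's actual system (which uses far larger neural networks), but the principle of learning associations from user data is the same:

```python
# A minimal sketch (not Google's real model) of how a next-word
# suggester trained on user text inherits the biases of that text.
from collections import Counter, defaultdict

def train_bigram_model(corpus_sentences):
    """Count which words follow each word in the training text."""
    following = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def suggest(model, prev_word):
    """Suggest the single most frequent follower, like an autocomplete hint."""
    counts = model.get(prev_word.lower())
    return counts.most_common(1)[0][0] if counts else None

# If most writers in the corpus pair "estranged" with "father",
# the model suggests "father" regardless of the writer's intent.
corpus = [
    "Ro has an estranged father",
    "he reconciled with his estranged father",
    "she visited her estranged mother",
]
model = train_bigram_model(corpus)
print(suggest(model, "estranged"))  # -> "father" (2 occurrences vs 1)
```

Run on text where writers overwhelmingly pair 'estranged' with 'father', the suggester will offer 'father' no matter what the current writer means to say.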
In a 2022 study, researchers found that adult children are roughly four times as likely to be estranged from their fathers (26% of self-reported responses) as from their mothers (6%). And although such estrangement often ends, estrangement from mothers tends to conclude at a higher rate than estrangement from fathers: 81% compared to 69%. Given these statistics, it would not be unreasonable for a human to assume that if a child has an estranged parent, it is probably their father. When AI/ML technologies mimic this assumption, however, the crucial concept of human bias emerges.
In relative terms, this example of bias is harmless. But when such bias morphs into discriminating against women of color in the early stages of job applications, or into portraying high-intensity, prestigious work as done only by men, the situation becomes more serious. To me, the most worrying aspect of such bias is the feedback loop it creates. For instance, if people unconsciously associate CEOs with men, and most of the images Google returns for 'CEO' show men (bias within the image-ranking system), they will keep believing that CEOs are men, possibly even more strongly than before.
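That loop can be made concrete with a small simulation. The sketch below is my own illustration, not a model of any real ranking system: a "rich-get-richer" ranker over-represents the majority impression, and each round of exposure nudges viewers' beliefs toward what they were shown.

```python
# A toy simulation (my own illustration, not any real ranking algorithm)
# of a bias feedback loop: results over-represent the majority impression,
# and exposure to those results strengthens that impression.

def amplified_share(belief, gamma=2.0):
    """Rich-get-richer ranking: the majority class is shown
    disproportionately often (gamma > 1 exaggerates the majority)."""
    male = belief ** gamma
    female = (1 - belief) ** gamma
    return male / (male + female)

belief = 0.6  # 60% of users start out picturing a CEO as a man
for round_num in range(1, 11):
    shown = amplified_share(belief)   # what the image results display
    belief += 0.5 * (shown - belief)  # beliefs drift toward what is shown
    print(f"round {round_num}: {belief:.0%} now associate 'CEO' with men")
```

Starting from 60%, the simulated belief climbs round after round toward 100%: neither the users nor the ranker intends harm, but each reinforces the other.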
Instead of worrying about a world takeover led by artificial intelligence, we should worry about how humans can be the core reason for many of AI's mistakes. We should focus on keeping faulty human thought patterns out of AI, and treat AI technology as a resource rather than a companion. We should invest in bias-prevention technology (much of which can itself be powered by AI) to monitor both online platforms and physical spaces, because the last thing we need is a tool with great past, present, and future promise being molded into something that perpetuates manmade social beliefs.
Research: