Can Bias in Artificial Intelligence Be Eliminated?

2024-05-20

Artificial intelligence (AI) built on mountains of potentially biased information poses a real risk of automating discrimination. But can these machines be re-educated?

For some, the issue is extremely urgent. In the ChatGPT era, AI will increasingly make decisions for healthcare providers, bank lenders, and lawyers, drawing on material sourced from the internet.

As a result, an AI's underlying intelligence is only as good as the world it comes from: a world that can be full of wit, wisdom, and usefulness, but also of hatred, prejudice, and rage.

"This is dangerous because people are accepting and relying on AI software," said Joshua Weaver, director of the Texas Opportunity and Justice Incubator at a legal consulting firm.





"We may fall into a feedback loop: our own biases in ourselves and in our culture influence the biases in AI, creating a reinforcing cycle," he said.




Ensuring that the technology accurately reflects human diversity is not merely a political choice; getting it wrong can carry real legal and commercial consequences.

Other uses of AI, such as facial recognition, have already landed companies in trouble with regulators over discrimination.

The US pharmacy chain Rite Aid, for example, came under investigation by the Federal Trade Commission after its in-store cameras falsely flagged consumers, particularly women and people of color, as shoplifters.

Experts worry that ChatGPT-style generative AI, which can produce a semblance of human-level reasoning in seconds, opens up new opportunities for things to go wrong.

The AI giants are well aware of the problem, fearing that their models could go astray, or reflect an overly Western outlook, when their user base spans the globe.

"We receive queries from Indonesia or the United States," said Google CEO Sundar Pichai, explaining why requests for images of doctors or lawyers would strive to reflect racial diversity.

But such considerations can reach absurd extremes and fuel angry accusations of excessive political correctness.

That is what happened when Google's Gemini image generator, asked to depict German soldiers from World War II, absurdly included a Black man and an Asian woman in the results.

"Thinking that technology can solve bias is already starting down the wrong path," warned Sasha Luccioni, a research scientist at Hugging Face.

Generative AI, she said, essentially comes down to whether the output "meets the user's expectations," and that judgment is largely subjective.

Jayden Ziegler, product lead at Alembic Technologies, cautioned that the massive models underlying ChatGPT "cannot reason about what is and is not bias, so they cannot do anything about it."

At least for now, it falls to humans to ensure that AI-generated content is appropriate and meets their expectations.

But given the hype around AI, that is no easy task.

There are approximately 600,000 AI or machine learning models available on the Hugging Face platform.

"A new model emerges every few weeks, and we are working hard to evaluate and document biases or misconduct," said Luccioni.





One method being developed, called algorithmic de-biasing, would allow engineers to excise specific content without disrupting the rest of the model.
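
For a sense of what such surgery can look like, one published technique, the embedding de-biasing of Bolukbasi et al. (2016), removes a learned "bias direction" from a model's word vectors by linear projection while leaving the rest of their geometry intact. The sketch below is a minimal illustration of that idea, assuming NumPy and a random stand-in for the learned axis; it is not the specific method the article's sources are developing.

```python
# Minimal sketch of embedding-level de-biasing by linear projection
# (in the spirit of Bolukbasi et al., 2016). Illustrative only: the
# "bias axis" here is random noise standing in for a learned direction.
import numpy as np

def debias(embeddings: np.ndarray, bias_axis: np.ndarray) -> np.ndarray:
    """Remove the component of each embedding that lies along the bias axis."""
    d = bias_axis / np.linalg.norm(bias_axis)        # unit-length bias direction
    return embeddings - np.outer(embeddings @ d, d)  # subtract the projection

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 300))  # five hypothetical 300-dimensional embeddings
axis = rng.normal(size=300)      # stand-in for, e.g., a he/she difference vector
clean = debias(emb, axis)

# After projection, every embedding is orthogonal to the bias direction.
print(np.allclose(clean @ (axis / np.linalg.norm(axis)), 0.0))  # True
```

The appeal is that the offending component can be subtracted surgically; the open question, echoed below, is whether bias in a large model really lives along a few removable directions.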

However, there is serious doubt about whether this method is truly effective.

Ram Sriharsha, CTO of Pinecone, suggests another approach: "encouraging" models to move in the right direction by "fine-tuning" them, rewarding correct responses and penalizing incorrect ones.
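
In spirit, that reward signal resembles reinforcement learning from human feedback (RLHF). The toy sketch below, in plain Python with a hypothetical two-response "policy" and made-up labels, applies a REINFORCE-style update that raises the probability of answers marked good and lowers the rest; production systems apply the same idea across billions of parameters.

```python
# Toy sketch of reward-driven fine-tuning (RLHF in miniature).
# A two-action softmax "policy" is nudged toward the response a
# hypothetical human labeled good (+1) and away from the bad one (-1).
import math
import random

logits = {"biased_reply": 0.0, "neutral_reply": 0.0}     # toy policy weights
rewards = {"biased_reply": -1.0, "neutral_reply": 1.0}   # hypothetical labels
LR = 0.5

def policy(logits):
    """Softmax over the two candidate responses."""
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

for _ in range(200):
    probs = policy(logits)
    action = random.choices(list(probs), weights=list(probs.values()))[0]
    reward = rewards[action]
    # REINFORCE update: d log p(action) / d logit_k = 1{k == action} - p_k
    for k in logits:
        grad = (1.0 if k == action else 0.0) - probs[k]
        logits[k] += LR * reward * grad

print(policy(logits))  # "neutral_reply" now carries almost all the probability
```

The key design choice is that humans, not the model, define what counts as a good answer, which is exactly the subjectivity Luccioni points to.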

Pinecone specializes in retrieval-augmented generation (RAG), a technique that lets a model pull information from fixed, trusted sources and ground its answers in that material.
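
In outline, the pattern works as in the sketch below: a deliberately tiny keyword retriever (a stand-in for a dense vector index of the kind Pinecone sells) pulls the best-matching passage from a curated corpus and prepends it to the prompt, so the model answers from vetted text. The corpus, names, and stubbed-out LLM call are all hypothetical.

```python
# Minimal sketch of the RAG pattern: retrieve the most relevant passage
# from a small trusted corpus, then ground the generation prompt in it.
# The word-overlap retriever is a toy stand-in for a dense vector index.
import math
import re
from collections import Counter

CORPUS = [
    "Loan decisions must not use race or gender as inputs.",
    "Applicants may appeal a denied loan within 30 days.",
]

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the corpus passage most similar to the query."""
    qv = vectorize(query)
    return max(CORPUS, key=lambda doc: cosine(qv, vectorize(doc)))

def build_prompt(query: str) -> str:
    context = retrieve(query)  # grounding step: answer from vetted text
    return f"Context: {context}\nQuestion: {query}\nAnswer:"
    # A real system would send this prompt to an LLM.

print(build_prompt("May lenders consider race in loan decisions?"))
```

Because the answer is grounded in the retrieved passage rather than in whatever the training data happened to contain, curating that corpus becomes a practical lever against biased output.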

For Weaver at the Texas Opportunity and Justice Incubator, these "noble" attempts to correct bias are "a projection of our hopes and dreams for a better version of the future."

But, he said, bias "is also part of what it means to be human, and so it is built into AI as well."