Hidden Risks in Ubiquitous Artificial Intelligence

2023-11-23

Internal turmoil at OpenAI, triggered by the board's dismissal of CEO Sam Altman on November 17, 2023, has drawn attention to the safety of artificial intelligence and the rapid development of artificial general intelligence (AGI). AGI is generally defined as AI that can match human-level performance across a wide range of tasks.


The OpenAI board stated that Altman was dismissed for a lack of candor, but speculation has focused largely on a rift between Altman and board members who worried that OpenAI's rapid commercial growth was crowding out the company's focus on the catastrophic risks posed by AGI.


OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superhuman capabilities and the need to safeguard the technology against misuse or catastrophe. But AGI and its attendant risks remain speculative for now. Meanwhile, task-specific AI is very real and already pervasive in daily life.


AI Everywhere


AI occupies a significant place in many people's daily lives, from the facial recognition that unlocks your phone to the voice recognition that powers your digital assistant. It also plays roles you may not notice, such as shaping your social media feeds and online shopping experiences, guiding what videos you watch, and matching drivers and riders in ride-sharing services.


AI also influences your life in ways you may not fully realize. If you're applying for a job, many employers use AI in the hiring process, and your boss may use it to identify employees who are likely to quit. If you're applying for a loan, your bank likely uses AI to help decide whether to approve it. If you're receiving medical treatment, your healthcare provider may use AI to assess your medical images. And if you know someone caught up in the criminal justice system, AI may play a significant role in determining the course of their life.

Algorithmic Harm


Many AI systems carry the potential for bias. Machine learning methods, for example, rely on a form of inductive reasoning: they generalize from patterns in their training data. One machine learning-based resume screening tool was found to be biased against women because its training data reflected past hiring practices, in which most of the resumes had been submitted by men.
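

To make this mechanism concrete, here is a minimal, hypothetical sketch. The data, feature names, and numbers are invented for illustration and describe no real hiring system; the point is only to show how a model trained on historically skewed hiring labels can score otherwise identical candidates differently, even when the protected attribute itself is withheld from the model.

```python
# Hypothetical illustration of training-data bias; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two applicant groups; group 1 was historically hired far less often.
group = rng.integers(0, 2, size=n)            # 0 = majority, 1 = minority
skill = rng.normal(0, 1, size=n)              # true job-relevant signal
proxy = group + rng.normal(0, 0.3, size=n)    # feature correlated with group (e.g., a hobby or school)

# Historical hiring labels: driven by skill, but penalizing group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, size=n)) > 0

# Train WITHOUT the protected attribute -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Equally skilled candidates who differ only in the proxy feature
# receive different scores, because the proxy carries the historical bias.
candidate_majority = np.array([[0.5, 0.0]])
candidate_minority = np.array([[0.5, 1.0]])
print(model.predict_proba(candidate_majority)[0, 1])
print(model.predict_proba(candidate_minority)[0, 1])
```

The takeaway from this sketch is that removing the sensitive attribute does not remove the bias: a correlated proxy feature carries it into the model's scores.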


Predictive tools used in fields from health care to child welfare can embed biases, such as racial bias, that produce unequal risk assessments for different groups in society. Even though the law prohibits discrimination on the basis of attributes such as race and gender, researchers have found that equally risky Black and Latinx borrowers pay significantly higher interest rates on loans securitized by government-sponsored enterprises and on Federal Housing Administration-insured loans than White borrowers do.


Bias can also arise when decision-makers use an algorithm in ways its designers did not intend. In one well-known example, a neural network learned to associate asthma with a lower risk of dying from pneumonia. The association was real in the historical data: asthma patients with pneumonia typically receive more aggressive treatment, which lowers their risk of death relative to the overall population. But if the network's output were used to decide who gets a hospital bed, people with asthma admitted for pneumonia would be dangerously deprioritized.
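

The asthma pitfall can be reproduced with a toy simulation. Everything below is invented for illustration, but it shows how outcomes that already reflect aggressive treatment can make a genuine risk factor look protective to a model.

```python
# Toy illustration (invented numbers) of the asthma/pneumonia pitfall.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

asthma = rng.integers(0, 2, size=n)
severity = rng.normal(0, 1, size=n)

# In the historical data, asthma patients were treated much more aggressively.
aggressive_care = (asthma == 1) | (severity > 1.5)

# Mortality rises with severity and asthma, but drops sharply with aggressive care.
risk = severity + 0.8 * asthma - 2.0 * aggressive_care
died = risk + rng.normal(0, 1, size=n) > 0.5

# The model only sees patient features and outcomes, not the treatment decisions.
model = LogisticRegression().fit(np.column_stack([asthma, severity]), died)
print("coefficient on asthma:", model.coef_[0][0])
# The coefficient comes out negative: the model predicts LOWER risk for asthma
# patients, a dangerous conclusion if the score is used to ration hospital beds.
```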


Algorithmic bias can also stem from complex social feedback loops. In recidivism prediction, for example, the goal is to predict which convicted individuals are likely to commit another crime. But the data available to train such algorithms actually records who is likely to be rearrested, and arrest rates reflect policing patterns, which the algorithm's own predictions can in turn influence.
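

A small, hypothetical simulation makes this proxy-label problem visible. All rates below are invented; the point is that when the training label is rearrest rather than reoffending, groups with identical behavior can appear to pose very different risks simply because they are policed differently.

```python
# Hypothetical simulation of the rearrest-vs-reoffense proxy-label problem.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

group = rng.integers(0, 2, size=n)        # 1 = more heavily policed group
reoffended = rng.random(n) < 0.30         # identical true reoffense rate in both groups

# Whether an offense leads to rearrest depends on policing intensity.
p_arrest = np.where(group == 1, 0.8, 0.4)
rearrested = reoffended & (rng.random(n) < p_arrest)

# Any model trained on rearrest labels will learn these differing base rates,
# even though the underlying behavior is the same in both groups.
for g in (0, 1):
    print(f"group {g}: true reoffense rate {reoffended[group == g].mean():.2f}, "
          f"observed rearrest rate {rearrested[group == g].mean():.2f}")
```

A model trained on these labels would treat the gap in rearrest rates as a gap in risk, and its predictions could then direct more policing toward the same group, reinforcing the loop.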


The Here and Now of AI Safety


Large language models like GPT-3, which underlies ChatGPT, and multimodal models like GPT-4 may be steps toward artificial general intelligence, but they are also algorithms that people increasingly use in school, at work, and in daily life. It is crucial to consider the biases that their widespread use can produce.


For example, these models can reproduce negative stereotypes about gender, race, and religion, as well as biases against underrepresented minority groups and people with disabilities. And as these models demonstrate abilities that surpass humans on tests like the bar exam, I believe they demand stricter scrutiny to ensure that AI-augmented work meets standards of transparency, accuracy, and accountability, and that stakeholders have the power to enforce those standards.