Salesforce Ethicist Labels Debating Doomsday AI as "Waste of Time"

2024-04-02

Over the past year, you may have found yourself in discussions about the prospect of super-intelligent AI systems replacing human workers, or about AI bringing on a world that resembles a science-fiction doomsday scenario.


Paula Goldman disagrees with these views. As Salesforce's Chief Ethical and Humane Use Officer, she believes that such discussions are a "waste of time".


In her view, AI systems are already in the co-pilot stage and will only get better over time. "With every advancement in AI, we are constantly improving our safeguards and addressing new risks," she said.


Goldman states that the field of AI ethics is not a recent development; it has been evolving for decades. Issues such as accuracy, bias, and ethical considerations have long been studied and addressed. "In fact, our current understanding and solutions are built upon past research and development. While predicting the future of AI is challenging, our focus should be on continuously improving our safeguards to effectively manage potential risks and ensure we have the capability to handle even the most advanced AI technologies," she added.


Strong Supporter of the EU Artificial Intelligence Act


As artificial intelligence becomes more prevalent, its ethical implications are receiving increasing attention, prompting governments and legislators around the world to enact strict usage policies.


The European Union (EU) has become the first jurisdiction in the world to introduce an artificial intelligence act, which aims to regulate this technology. Many legislators in the EU believe that without regulation, artificial intelligence could pose potential harm to society.


While jurisdictions around the world, including India, are taking different approaches to regulating artificial intelligence, Goldman expresses strong support for the EU Artificial Intelligence Act.


"While discussions about regulating AI models are ongoing, what really matters is regulating the outcomes and associated risks of artificial intelligence. This is the general approach taken by the EU Artificial Intelligence Act," Goldman said.


The act categorizes artificial intelligence systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk. It focuses on identifying and controlling the highest-risk outcomes, such as fairness in job applications or loan approvals, decisions that have significant impacts on people's lives.
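To make the tiered structure concrete, the categorization described above can be sketched as a simple lookup. This is purely illustrative: the actual act enumerates categories in its annexes rather than through any lookup table, and the example use cases and `tier_for` helper below are assumptions for the sketch, not part of the law's text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the EU Artificial Intelligence Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # strict obligations (e.g. hiring, credit decisions)
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "resume screening": RiskTier.HIGH,
    "loan approval": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up a use case's tier; unlisted cases default to minimal risk here."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
```

The point of the tiered design is that obligations scale with the tier: the higher the risk of the outcome, the heavier the regulatory burden.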


"Furthermore, the EU applies standards not only to the individuals creating the models but also to the applications built on these models, the data used, and the companies utilizing these products. This comprehensive, layered approach is often overlooked but crucial for effective regulation," she added.


However, regulation also brings concerns about hindering innovation, stifling creativity, and slowing down the pace of technological progress.


Nevertheless, Goldman believes that AI regulation is urgently needed. "These regulations should be developed through democratic processes and involve multiple stakeholders. I am proud of the efforts made in this regard and emphasize the importance of these regulations beyond individual companies," she said.


Humanity at the Helm


At the TrailblazerDX conference, Goldman and her colleagues, including Salesforce's Chief Scientist Silvio Savarese, emphasized the importance of building trust in artificial intelligence among consumers.


Savarese even stated that the inability to establish consumer trust "could lead to the next AI winter". Goldman and her colleagues also highlighted the urgent need for transparency and accountability in artificial intelligence systems to foster consumer trust and prevent potential setbacks in the adoption of AI.


"At Salesforce, we believe that trustworthy artificial intelligence needs to be human-guided. Instead of requiring human intervention in every AI interaction, we are designing robust control mechanisms for the entire system, enabling humans to oversee the outcomes of artificial intelligence and focus on high-judgment tasks that require their attention," Goldman said.


Salesforce's approach has always been to empower customers with control, recognizing that they are best suited to understand their brand voice, customer expectations, and policies.


"For example, through the Trust Layer, we plan to allow customers to adjust toxicity detection thresholds according to their needs. Similarly, with features like Retrieve Augmented Generation (RAG), customers can fine-tune the level of creativity they want to see in AI-generated responses," she said.


"Furthermore, incidents involving AI ethics highlight the importance of government intervention in establishing regulatory frameworks, as these issues may vary by region and culture. Therefore, government regulation of artificial intelligence is considered crucial," she added.


Balancing Safeguards is a Delicate Task


As the company rolls out its artificial intelligence products, it also works with customers to ensure the products are used ethically and responsibly and to mitigate risks.


"At Salesforce, we release our products when we believe they are ready and responsible, but we also remain humble, recognizing that technology is constantly evolving. While we subject our products to rigorous internal testing from multiple perspectives, the real issues and areas for improvement are often discovered during pilot phases with customers," she said.


According to Goldman, this iterative process ensures that products meet specific standards before release, and the company continues to work with customers to learn and improve the products.


"Finding the balance between confidence in our products and ongoing openness is essential."