Laying the Foundation of Trust in the Generative AI Era with Salesforce

2024-03-15

Harvard Business Review recently published an article titled "Why is it so difficult to adopt generative AI," pointing out that companies of all sizes, from large enterprises to small businesses, struggle to integrate artificial intelligence (including generative AI, traditional rule-based algorithms, and machine learning) into their operations.

One of the main reasons is that companies are wary of using a technology they do not fully understand. It is therefore crucial for them to establish trust in artificial intelligence among their customers.

At TrailblazerDX, Salesforce's developer conference, one of the major announcements was the release of Einstein 1 Studio. Another key theme of the conference, however, was the importance of building trust in artificial intelligence for businesses.

Multiple Salesforce spokespeople emphasized the importance of cultivating customer trust in artificial intelligence, especially as new AI features are introduced to the platform.

In fact, despite the numerous advantages of large language models, companies still approach them with caution. This hesitation stems from concerns about the risks of models hallucinating, occasionally providing inaccurate answers, and leaking sensitive customer or business data.

Salesforce's Chief Scientist Silvio Savarese went so far as to say that a failure to establish trust among customers "could lead to the next AI winter."

Building Trust in Artificial Intelligence

To enhance the reliability of large language models, Salesforce introduced the Trust Layer last year.

This security intermediary sits between users and large language models, masking personally identifiable information (PII), monitoring outputs for toxicity, ensuring data privacy, enforcing zero retention of user data, and prohibiting the use of that data for further model training.

Over time, the customer relationship management company has continued to add components that strengthen the Trust Layer. For example, it uses a second large language model to detect signs of toxicity, bias, or potentially offensive or harmful content in outputs.
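To make the idea concrete, here is a minimal conceptual sketch in Python of how such a guard layer could work, with PII masked on the way in and a second model screening the response on the way out. The function names, regex patterns, keyword list, and threshold are illustrative assumptions for the sketch, not Salesforce's actual Trust Layer implementation.

```python
import re

# Illustrative regexes for two common PII patterns (far from exhaustive).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before the prompt leaves the platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def toxicity_score(text: str) -> float:
    """Stand-in for a second model that screens outputs for toxic, biased, or harmful content."""
    flagged = ("hate", "violence")  # placeholder keyword list, not a real classifier
    return 1.0 if any(word in text.lower() for word in flagged) else 0.0

def guarded_completion(prompt: str, call_llm) -> str:
    """Mask PII going in, screen the response coming out, and retain nothing."""
    response = call_llm(mask_pii(prompt))   # the model never sees raw PII
    if toxicity_score(response) > 0.5:      # threshold chosen arbitrarily for the sketch
        return "[response withheld for review]"
    return response                         # neither prompt nor response is stored or used for training

# Example: the email address is masked before the (stubbed) model ever sees it.
print(guarded_completion("Email jane@example.com about renewal", lambda p: f"echo: {p}"))
```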

Savarese said the team is currently developing a component focused on confidence generation, which assesses how certain the AI is about a specific output it produces.

"Confidence indicators can be used to consider whether human involvement is required for further verification or evaluation, which may require three to four rounds of review," Savarese said.

He also mentioned that another key component being developed for the Trust Layer is interpretability. Once an output is generated, the goal is to clarify how it was produced, explain the decision-making process, and trace the steps that led to each specific output.

Is it Effective?

Despite significant efforts to eliminate hallucinations in large language models, complete success remains out of reach. Companies like Salesforce have instead adopted various approaches to mitigate or limit their extent.

Although establishing customer trust in artificial intelligence is crucial, people still doubt how effective these efforts are. Muralidhar Krishnaprasad, Executive Vice President of Software Engineering at Salesforce, said there was considerable fear and uncertainty about large language models at the beginning of last year because people were unfamiliar with them.

"However, this concern has diminished, especially as stakeholders believe that our Trust Layer provides assurance by combining technology with user data," Krishnaprasad said.

"Over the past year, the effectiveness of the Trust Layer has been recognized, and people have gained confidence, ensuring a sense of security for further innovation based on technology," he added.

Certain regulated industries may still have concerns, however, particularly those operating under cautious government regulation. But according to Krishnaprasad, even Salesforce's public sector clients are showing strong interest in artificial intelligence.

Protecting Data Security

Last year, the San Francisco-based company officially launched Data Cloud after announcing it at the previous year's Dreamforce event.

Data Cloud allows Salesforce customers to centralize all their data in one place and leverage that unified data, with the help of artificial intelligence, for richer customer insights, more personalized interactions, and seamless integration across the Salesforce platform.

As customers migrate their data to the cloud, Salesforce must ensure that the foundation beneath its large language models is robust, preventing potential data leaks and reducing the risk of heavily biased model outputs.

Savarese did vouch for the security of bringing customer data into the Salesforce cloud. However, much as with hallucinations in large language models, one cannot completely rule out the possibility of the AI running into problems in certain situations.

"Salesforce's business is based on the fact that we properly safeguard customer data," he said.

Salesforce also allows customers to use their own data lakes. "In this case, we introduced a metadata layer. We do not host the customer's data; instead, we establish a critical layer that artificial intelligence needs in order to operate, so the cloud sits on top of the customer's data lake," Savarese explained.

Furthermore, Savarese emphasized that customer data is strictly excluded from the model training process. The data fed into the model is neither retained nor used for further training.

"This framework is crucial not only for our proprietary models but also for third-party vendor models, such as OpenAI's GPT models or Anthropic's Claude series," he added.

Einstein 1 Studio

With the launch of Salesforce's new artificial intelligence product, Einstein 1 Studio, building trust in artificial intelligence has become even more important. Einstein 1 Studio consists of three components. The first is Copilot Builder, which allows developers to create custom AI actions that perform specific business tasks.

"It enables developers to create operations or reference existing operations that may exist in flows, APEX, etc., and then register them with the Copilot so that the Copilot knows what tasks developers can perform. Additionally, the Copilot Builder includes debugging tools that provide insights into the correctness of plans," Krishnaprasad said.

The second component is Prompt Builder, which lets users build and activate custom prompts within their workflows.

"The Prompt Builder allows you to create prompts and integrate them with data from CRM or Data Cloud. It automatically triggers large language models, retrieves results, and enables the use of prompts throughout the platform," explained Krishnaprasad.

Lastly, the third component of Einstein 1 Studio is Model Builder, where developers can build or import various artificial intelligence models. Krishnaprasad explained that Model Builder itself has three parts. The first is predictive modeling, where users can create their own predictive models or import existing ones from platforms like AWS SageMaker.

"Additionally, users can leverage pre-built models or introduce their own large language models for fine-tuning and customization to be more widely used across the stack. This integrates seamlessly with the capabilities of the Copilot and Prompt Builder, providing a comprehensive toolkit for optimizing prediction channels," Krishnaprasad added.

These features are impressive and give Salesforce an edge in the customer relationship management market, a view shared by Gaurav Kheterpal, Chief Technology Officer of MTX Group and a Salesforce Trailblazer.