Why Amazon Q Deserves Another Chance

2023-12-06

At its re:Invent conference, AWS introduced Amazon Q, a generative AI chatbot built specifically for enterprise needs and billed as safer and more reliable than OpenAI's ChatGPT. Contrary to those claims, Amazon Q has drawn attention for less flattering reasons. Just three days after the announcement, employees were already raising concerns about the chatbot's accuracy and privacy: Q was reported to be "experiencing severe hallucinations" and to have leaked sensitive data, including the location of AWS data centers, internal discount programs, and unreleased features. Amazon quickly issued a statement: "We appreciate all the feedback we have received and will continue to refine Q as it transitions from a preview product to being generally available."

The Case of Amazon Q

As highlighted at re:Invent, what sets Amazon Q apart is that employees can use it to perform tasks in popular systems such as Jira, Salesforce, ServiceNow, and Zendesk. For example, an employee can ask Amazon Q to open a ticket in Jira or create a case in Salesforce.

Notably, Amazon Q is not yet generally available, yet the criticism has already arrived. As a preview release, it is expected to go through substantial revision. "Companies need to realize that preventing an LLM (large language model) from hallucinating is very challenging. The best they can do is keep it under control within a certain range. What OpenAI has done with GPT-4 is a difficult feat that others may not easily replicate," said Nektarios Kalogridis, founder and CEO of DeepTrading AI, voicing his concerns about Amazon Q.

Moreover, hallucinations cannot be attributed directly to Amazon Q, because it can work with any model available on Amazon Bedrock. AWS' model library includes Meta's Llama 2 and Anthropic's Claude 2. The company says that customers using Q typically choose the model that works best for them, connect to that model through the Bedrock API, have it learn from their data, policies, and workflows, and then deploy Amazon Q. If hallucinations occur, then, the cause may lie with one of those underlying models.

ChatGPT has its own problems with leaking sensitive information. Recently, when asked to repeat the word "poem" indefinitely, it disclosed private and sensitive data. That has not stopped companies from using it. Like Amazon Q, OpenAI's ChatGPT Enterprise is not yet broadly available: Brad Lightcap, COO of OpenAI, revealed in a recent interview that "thousands" of companies are on the waiting list for the tool. As of November, 92% of Fortune 500 companies were using ChatGPT, up sharply from 80% in August.

Enterprise Chatbots Are the Future

Despite the concerns raised, Amazon Q brings significant benefits. Like ChatGPT Enterprise, Amazon Q lets customers connect their business data, information, and systems, so it can pull everything together and provide tailored assistance that helps employees solve problems, generate content, and take actions relevant to their business. These capabilities are built on retrieval-augmented generation (RAG), which retrieves data relevant to a question or task and supplies it to the LLM as context.
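To make that pattern concrete, here is a minimal RAG sketch in Python. It assumes the `openai` Python SDK (v1+) with an API key in the environment, and it uses a tiny in-memory corpus with naive keyword scoring in place of a real vector store; the documents and helper names are invented for illustration, and any Bedrock- or Cohere-hosted model could play the same role as the final LLM call.

```python
# Minimal RAG loop: retrieve relevant snippets, then pass them to an LLM as context.
# Assumes the `openai` Python SDK (v1+) and OPENAI_API_KEY in the environment.
# The in-memory "corpus" and keyword scoring stand in for a real vector store.
from openai import OpenAI

corpus = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise customers can open support tickets directly from Jira.",
    "The on-call rotation is documented in the internal runbook.",
]

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    # Retrieved snippets are injected into the prompt as grounding context.
    context = "\n".join(retrieve(question, corpus))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```

In a production system the keyword scorer would be replaced by an embedding search over the enterprise's own documents, and that retrieval step is exactly where sensitive data can slip into the model's context, which is why RAG cuts both ways.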
RAG, however, brings the risk of data leaks, as Amazon Q's case shows. Ethan Mollick, a professor at the Wharton School, has noted that RAG has both advantages and disadvantages: "I've said it many times: building customer-service bots that give an LLM RAG access to data is not as simple as it seems. In fact, this is precisely the weakness of current LLMs: the risk of hallucinations and data leaks."

OpenAI introduced the Assistants API at DevDay, which includes a feature called Retrieval that is essentially RAG. It augments the assistant with knowledge from outside the model, such as proprietary domain data, product information, or files provided by users.

Beyond OpenAI and AWS, Cohere is also working with enterprises to integrate generative AI capabilities, and it was among the companies that recognized RAG early as a way to reduce hallucinations and keep chatbots up to date. In September, Cohere launched a chat API with RAG. With it, developers can combine user input, data sources, and model outputs to build powerful product experiences; a minimal sketch of such a call appears at the end of this article.

Despite the ongoing concerns about hallucinations and data leaks, enterprises are not about to give up on generative AI chatbots, because the technology is bound to improve over time. This is just the beginning.
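As a concrete illustration of the API-level RAG support discussed above, here is a minimal sketch of a grounded call to Cohere's chat endpoint. It assumes the `cohere` Python SDK and an API key in the environment; the document titles and snippets are invented for illustration.

```python
# Sketch of a RAG-grounded call to Cohere's Chat API.
# Assumes the `cohere` Python SDK and COHERE_API_KEY in the environment;
# the documents below are invented examples of enterprise data.
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

response = co.chat(
    message="What is our refund policy for enterprise customers?",
    documents=[  # the model grounds its answer in these snippets and can cite them
        {"title": "Billing FAQ", "snippet": "Refunds are processed within 5 business days of approval."},
        {"title": "Enterprise terms", "snippet": "Enterprise customers may request pro-rated refunds."},
    ],
)

print(response.text)       # grounded answer
print(response.citations)  # spans of the answer linked back to the documents
```

The returned citations are what let a product tie each claim in the answer back to a source document, which is one way vendors try to keep RAG-based assistants from hallucinating unchecked.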