Salesforce Releases Open Source "Large Action Model" xLAM Series
Salesforce recently announced the official open-source release of its internally developed "Large Action Model" (xLAM) series. The company claims these models achieve higher accuracy at lower cost than existing large language models on the market.
In addition, Salesforce has introduced xGen-Sales, a proprietary model designed specifically for autonomous sales tasks and intended to enhance the Agentforce platform. Agentforce is a Salesforce platform that lets users design AI agents capable of interacting with customers.
The xLAM series was developed by Salesforce's AI research department to simplify the creation of AI agents that execute actions rather than merely generate content, while reducing model complexity. Salesforce states that the xLAM models are smaller and more streamlined than traditional large language models and perform better at tool use. They focus on carrying out specific tasks rather than on complex dialogue, summarization, or open-ended generation.
At the core of the xLAM family is the ultra-compact xLAM-1B model, jokingly nicknamed the "Tiny Giant" by the research team. Despite containing only 1 billion parameters, it outperforms larger models such as OpenAI's GPT-3.5 and Anthropic PBC's Claude on tool-usage and reasoning tasks.
The compact design of xLAM-1B allows it to run on mobile devices such as smartphones and tablets, supporting automations such as querying data from a weather application. Salesforce has released xLAM-1B and three larger versions (xLAM-7B, xLAM-8x7B, and xLAM-8x22B) on the Hugging Face open-source platform for developers and enterprise users to try out.
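As a rough illustration of the tool-usage setup such models target, the sketch below defines a weather-lookup tool and assembles a tool-augmented prompt for it. The tool schema style, the prompt template, and the Hugging Face model ID shown in the comments are assumptions based on common conventions, not details taken from this article; the exact format an xLAM checkpoint expects may differ.

```python
import json

# Hypothetical weather-lookup tool in a common function-calling schema style
# (an illustrative assumption, not a format documented in the article).
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}

def build_prompt(user_query: str, tools: list) -> str:
    """Assemble a simple tool-augmented prompt: the available tool
    definitions followed by the user's request. Only a sketch."""
    return (
        "Available tools:\n"
        + json.dumps(tools, indent=2)
        + f"\n\nUser: {user_query}\nAssistant:"
    )

prompt = build_prompt("What's the weather in Paris?", [WEATHER_TOOL])
print(prompt)  # the tool definition is embedded ahead of the user query

# To actually query a checkpoint, one might use the transformers library, e.g.:
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Salesforce/xLAM-1b-fc-r")   # assumed ID
#   model = AutoModelForCausalLM.from_pretrained("Salesforce/xLAM-1b-fc-r")
```

In a real agent loop, the model's response would be parsed for a tool call (here, `get_weather` with a `city` argument), the tool executed, and its result fed back to the model.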
One of the main challenges the research team faced in developing these action-oriented models was training data. The xLAM-1B model in particular required specific, highly tailored data to stay compact. Because action-oriented, tool-using AI models are still in their early stages, the team turned to synthetic data generation to fill data gaps and keep the model both compact and efficient.
Salesforce's move reflects a broader industry rethinking of the trend toward ever-larger generative AI models. While models like Google's PaLM and GPT-3 have hundreds of billions of parameters, they bring challenges in deployment cost, management, and latency. The success of xLAM-1B demonstrates that, through specialized design, smaller models can match the performance of much larger ones.