NVIDIA has introduced three new NIM microservices: small, independent services designed to run as part of larger applications, aimed at giving enterprises greater control and security when deploying AI agents.
The first service focuses on content safety, ensuring AI agents do not produce harmful or biased outputs. The second keeps conversations within approved topic boundaries. The third detects jailbreak attempts, in which users try to bypass an agent's software restrictions.
These three services join NVIDIA's existing NeMo Guardrails, an open-source toolkit and microservices suite designed to help businesses add safeguards to their AI applications.
According to official statements, by employing multiple lightweight and specialized models as safety guardrails, developers can address potential vulnerabilities that arise when relying solely on more general global policies and protections. A "one-size-fits-all" approach cannot adequately safeguard and manage complex agent-based AI workflows.
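For context, NeMo Guardrails lets developers wire such checks together declaratively in a configuration file. The sketch below illustrates what combining several specialized safety rails might look like; the flow and model names are illustrative assumptions based on the toolkit's general configuration format, not details taken from this announcement:

```yaml
# config.yml - hypothetical NeMo Guardrails setup combining several
# small, specialized safety models as input rails (names are assumptions).
rails:
  input:
    flows:
      - content safety check input $model=content_safety  # screen for harmful or biased prompts
      - topic safety check input $model=topic_control     # keep conversation within approved topics
      - jailbreak detection model                         # flag attempts to bypass restrictions
```

Each flow delegates to its own lightweight model, which reflects the layered, specialized approach described above rather than a single global policy.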
It appears that AI companies are realizing that enterprise adoption of AI agents is more challenging than initially anticipated. Although figures like Salesforce CEO Marc Benioff predict that over a billion agents will be running on the Salesforce platform alone within 12 months, the reality may prove less optimistic.
A recent Deloitte study indicates that approximately 25% of businesses are already using AI agents or plan to start using them by 2025. The report also forecasts that about half of all companies will adopt AI agents by 2027. This suggests that while there is significant interest in AI agents, the pace of adoption does not match the rapid innovation in the AI sector.
NVIDIA likely hopes that initiatives like these will make AI agents appear safer and more mature, shedding some of their experimental reputation. Only time will tell whether these efforts achieve their intended impact.