Endor Labs Introduces AI Model Scoring Tool

2024-10-17

Endor Labs recently unveiled a new feature called "Endor Scores for AI Models," designed to help developers select open-source AI models on the Hugging Face platform more safely and efficiently. Hugging Face is a widely used platform for sharing large language models (LLMs), machine learning models, and other open-source AI models and datasets.

As demand for ready-made AI models grows, more developers are turning to platforms like Hugging Face. This trend mirrors the early adoption of open-source software (OSS); however, the complexity and potential risks of AI models introduce new challenges for AI governance. Endor Labs' new feature supports developers during model selection with a straightforward, intuitive scoring system.

Varun Badhwar, co-founder and CEO of Endor Labs, stated, "Our mission has consistently been to safeguard everything that code depends on, and AI models are undoubtedly the next crucial area in this endeavor. Today, every organization is experimenting with AI models, whether to power specific applications or to build entire AI-driven businesses. Therefore, security must keep pace, and now is a rare opportunity to mitigate potential risks and high maintenance costs from the outset."

George Apostolopoulos, Endor Labs' founding engineer, also expressed his approval of the new feature, saying, "Almost everyone is now experimenting with AI models. Some teams are building entirely new AI-based businesses, while others are looking to add the 'AI-powered' label to their products. One thing is certain: your developers are actively utilizing AI models. However, this convenience comes with risks. The current landscape resembles the 'Wild West,' where individuals often opt for models that meet their immediate needs while overlooking potential vulnerabilities. Therefore, we need a reliable scoring system to help developers make informed decisions."

Endor Labs' AI model scoring tool evaluates models along four dimensions: security, popularity, quality, and activity, while accounting for key risk areas such as security vulnerabilities, legal and licensing issues, and operational risks. To generate its scores, the tool applies 50 predefined checks to AI models on Hugging Face and weighs factors such as the number of maintainers, corporate sponsorship, release frequency, and known vulnerabilities.
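To make the four dimensions concrete, here is a minimal sketch of how per-dimension scores might be combined into a single figure. The weights, the 0-10 scale, and the class and function names are assumptions for illustration only; Endor Labs has not published its exact formula.

```python
# Illustrative only: a naive composite score across the four dimensions the
# article names (security, popularity, quality, activity). The weights and
# the 0-10 scale are assumptions, not Endor Labs' actual methodology.
from dataclasses import dataclass


@dataclass
class DimensionScores:
    security: float    # e.g. informed by known vulnerabilities
    popularity: float  # e.g. downloads and likes
    quality: float     # e.g. documentation, corporate sponsorship
    activity: float    # e.g. release frequency, number of maintainers


def composite_score(s: DimensionScores,
                    weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Weighted average on a 0-10 scale; weights are illustrative."""
    w_sec, w_pop, w_qual, w_act = weights
    return round(
        w_sec * s.security
        + w_pop * s.popularity
        + w_qual * s.quality
        + w_act * s.activity,
        2,
    )


# Example: a popular, well-maintained model with one known vulnerability.
print(composite_score(DimensionScores(security=6.5, popularity=9.0,
                                      quality=8.0, activity=7.5)))
```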

A standout feature of Endor Scores is its user-friendly search and ranking functionality. Developers do not need to know the specific model names; they can simply enter general queries such as "Which models can I use for sentiment classification?" or "What are the most popular models from Meta?" The tool then provides clear scores and rankings, enabling developers to quickly identify the most suitable options.
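The query-then-rank workflow can be approximated against Hugging Face's public index. The sketch below uses the huggingface_hub client to search for sentiment-classification models and orders them with a simple popularity heuristic; the heuristic is an assumption for illustration and is not how Endor Scores actually evaluates or ranks models.

```python
# Illustrative sketch: search Hugging Face for candidate models and rank
# them with a naive heuristic. Requires `pip install huggingface_hub`.
from huggingface_hub import HfApi

api = HfApi()

# Roughly "Which models can I use for sentiment classification?"
candidates = api.list_models(search="sentiment", sort="downloads",
                             direction=-1, limit=10)


def naive_rank(model) -> float:
    # Popularity-only heuristic (assumed); a real evaluation would also
    # weigh security, quality, and activity signals.
    return (model.downloads or 0) + 1000 * (model.likes or 0)


for m in sorted(candidates, key=naive_rank, reverse=True)[:5]:
    print(f"{m.id}: downloads={m.downloads}, likes={m.likes}")
```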

Apostolopoulos emphasized, "Your teams are asking about AI every day as they look for models that can accelerate innovation. By evaluating open-source AI models with Endor Labs, you can ensure that the models you use are both reliable and secure, giving your business greater flexibility and a competitive edge."

Endor Labs' new feature gives developers a practical tool for selecting open-source AI models. As AI technology continues to evolve and spread, this scoring system is well positioned to play a growing role in supporting the healthy and rapid development of AI.