Endor Labs Helps Organizations Identify and Select Secure Open Source Artificial Intelligence Models

October 2024 by Marc Jacob

Endor Labs announced Endor Scores for AI Models, a unique capability that makes it easier than ever for companies to identify the most secure open source AI models currently available on Hugging Face, the popular platform that enables developers to share Large Language Models (LLMs), machine learning models, and other open source AI models and datasets. As with Endor Labs' market-leading release DroidGPT, an AI that helps developers select better open source software, Endor Scores for AI Models uses 50 out-of-the-box metrics to score all available AI models for security, popularity, quality, and activity. The release represents a major step forward in AI governance by enabling developers to start clean with AI models, a goal that has so far proved elusive.
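Endor Labs has not published the formula behind these scores, but the general pattern of rolling many raw metrics up into a few category scores can be illustrated with a short, purely hypothetical sketch (every metric name and value below is invented for illustration):

```python
# Purely hypothetical sketch: Endor Labs has not published its scoring
# formula. This only illustrates rolling normalized metrics up into
# per-category scores on a 0-10 scale.

# Invented example metrics, each already normalized to the 0.0-1.0 range.
METRICS = {
    "security":   {"ships_safetensors": 1.0, "no_pickle_imports": 0.8},
    "popularity": {"downloads_norm": 0.9, "likes_norm": 0.7},
    "quality":    {"has_model_card": 1.0, "has_eval_results": 0.5},
    "activity":   {"recent_commits_norm": 0.6, "issue_response_norm": 0.4},
}

def category_score(metrics: dict[str, float]) -> float:
    """Average a category's normalized metrics and scale to 0-10."""
    return round(10 * sum(metrics.values()) / len(metrics), 1)

print({category: category_score(m) for category, m in METRICS.items()})
# -> {'security': 9.0, 'popularity': 8.0, 'quality': 7.5, 'activity': 5.0}
```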

The AI landscape is the Wild West of technology development: it's unquestionably exciting and offers virtually unlimited potential for constant advancement. And while training each AI model to meet individual needs can be resource-intensive, platforms like Hugging Face offer a massive repository of ready-made options. That convenience comes with complications, however: those AI models could contain exploitable attack vectors, or rely on other models that expose the network to even greater risks.

In many ways, this aspect of AI development mirrors the growth of open source software (OSS): in both worlds, there's a wealth of options accompanied by often-hidden risks. In the case of OSS, every software package can bring dozens of indirect or 'transitive' dependencies, which is where most vulnerabilities reside. Similarly, Hugging Face offers a vast repository of open source, ready-made AI models, and developers focused on creating differentiated features can use the best of these to speed their own work. Again, there are serious risks involved, and Endor Scores for AI Models mitigates them by providing scores that let developers start with the most secure and appropriate options for their specific needs.
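The mechanics of that risk are easy to demonstrate. Here is a minimal sketch, with an entirely invented dependency graph, of the breadth-first walk that surfaces everything a single direct choice pulls in; the same pattern applies whether the nodes are OSS packages or models fine-tuned from other models:

```python
from collections import deque

# Hypothetical dependency graph: the packages you pick are 'direct'
# dependencies; everything reachable beneath them is 'transitive'.
DEPS = {
    "my-app":          ["web-framework", "http-client"],
    "web-framework":   ["template-engine", "serializer"],
    "http-client":     ["tls-lib"],
    "template-engine": [],
    "serializer":      ["tls-lib"],
    "tls-lib":         [],
}

def reachable(root: str) -> set[str]:
    """Breadth-first walk collecting every package reachable from root."""
    seen, queue = set(), deque(DEPS.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(DEPS.get(pkg, []))
    return seen

direct = set(DEPS["my-app"])
print("direct:    ", direct)                        # the 2 packages you chose
print("transitive:", reachable("my-app") - direct)  # the 3 you inherited
```

A vulnerability in any node of that graph, not just the two direct picks, is a vulnerability in the application.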

Those risks are both wide-ranging and dangerous. For example, pre-trained AI models from Hugging Face can harbor serious vulnerabilities, such as malicious code in files shipped with the model or hidden within model 'weights.' When these models are integrated into an organization's infrastructure, they can introduce major threats. There are also operational risks: AI models are frequently derived from other existing models, creating a complex dependency graph that can be hard to manage and secure.
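One well-documented instance of the malicious-code risk is that pickle-serialized model files (the default format of many PyTorch checkpoints) can execute arbitrary Python at load time. As a hedged sketch, the standard-library pickletools module can inspect a file's opcode stream without executing it; this is a coarse heuristic, not a substitute for a real scanner:

```python
import pickletools

# GLOBAL/STACK_GLOBAL/REDUCE are the opcodes that let a pickle call into
# arbitrary Python (the mechanism behind __reduce__-based payloads).
# Legitimate checkpoints also use them, so hits need human review.
SUSPECT_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(path: str) -> list[tuple[str, object]]:
    """List the (opcode, argument) pairs that can trigger code execution,
    by statically walking the opcode stream (the file is never unpickled)."""
    with open(path, "rb") as f:
        data = f.read()
    return [(op.name, arg)
            for op, arg, _pos in pickletools.genops(data)
            if op.name in SUSPECT_OPCODES]

# Usage on a raw pickle stream (hypothetical filename; a modern PyTorch
# zip-format checkpoint would need its inner data.pkl extracted first):
# for name, arg in scan_pickle("pytorch_model.bin"):
#     print(name, arg)
```

This class of risk is one reason the ecosystem has been moving toward the safetensors format, which stores only tensor data and cannot carry executable payloads.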

There are also licensing obstacles: any failure to adhere to a model's pre-set terms can spark legal headaches. Organizations need to be aware of the intellectual property (IP) and copyright terms of every AI model they use.
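On Hugging Face, a model's declared license is exposed as model-card metadata, so a first-pass inventory (a starting point, not a substitute for legal review) can be automated with the public huggingface_hub client; a minimal sketch:

```python
from huggingface_hub import model_info  # pip install huggingface_hub

# Read the license tag declared in a model's card metadata. A declared tag
# is only a starting point: derived models can carry additional terms.
info = model_info("bert-base-uncased")
license_tag = info.card_data.get("license") if info.card_data else None
print(license_tag)  # -> "apache-2.0"
```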

Endor Scores for AI Models simplifies the task of identifying the best and most secure AI models from the array of options available on Hugging Face. Developers don't even need to know the names of particular models; the process of finding the best options can start with questions such as: What models can I use to classify sentiments? What are the most popular models from Meta? What is a popular model for voice on Hugging Face? In response, Endor Scores for AI Models offers crisp scores that rank both the positive and negative aspects of each model, and developers can then select the most appropriate models for their particular needs.
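Endor's question-driven interface is proprietary, but the kind of query behind those questions can be approximated with the public huggingface_hub client. The snippets below are plain Hub searches sorted by downloads as a rough popularity proxy; they do not produce Endor Scores:

```python
from huggingface_hub import HfApi  # pip install huggingface_hub

api = HfApi()

# "What models can I use to classify sentiments?" -> filter by task.
for model in api.list_models(task="text-classification",
                             sort="downloads", limit=5):
    print(model.id, model.downloads, model.likes)

# "What are the most popular models from Meta?" -> filter by author
# ("facebook" is one of Meta's organizations on the Hub).
for model in api.list_models(author="facebook", sort="downloads", limit=5):
    print(model.id, model.downloads)
```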

Endor Scores for AI Models is available now for existing customers.

