Rechercher
Contactez-nous Suivez-nous sur Twitter En francais English Language
 











Freely subscribe to our NEWSLETTER

Newsletter FR

Newsletter EN

Vulnérabilités

Unsubscribe

Bugcrowd Launches AI Bias Assessment Offering for LLM Applications

April 2024 by Marc Jacob

Bugcrowd announced the availability of AI Bias Assessments as part of its AI Safety and Security Solutions portfolio on the Bugcrowd Platform. AI Bias Assessment taps the power of the crowd to help enterprises and government agencies adopt Large Language Model (LLM) applications safely, efficiently, and confidently.

LLM applications run on algorithmic models that are trained on huge sets of data. Even when that training data is curated by humans, which it often is not, the application can easily reflect “data bias” caused by stereotypes, prejudices, exclusionary language, and a range of other possible biases from the training data. Such biases can lead the model to behave in potentially unintended and harmful ways, adding considerable risk and unpredictability to LLM adoption.
Some examples of potential flaws include Representation Bias (disproportionate representation or omission of certain groups in the training data), Pre-Existing Bias (biases stemming from historical or societal prejudices present in the training data), and Algorithmic Processing Bias (biases introduced through the processing and interpretation of data by AI algorithms).
The public sector is urgently affected by this growing risk. As of March 2024, the US Government mandated its agencies to conform with AI safety guidelines – including the detection of data bias. That mandate extends to Federal contractors later in 2024.
This problem requires a new approach to security because traditional security scanners and penetration tests are unable to detect such bias. Bugcrowd AI Bias Assessments are private, reward-for-results engagements on the Bugcrowd Platform that activate trusted, third-party security researchers (aka a “crowd”) to identify and prioritize data bias flaws in LLM applications. Participants are paid based on the successful demonstration of impact, with more impactful findings earning higher payments.

The Bugcrowd Platform’s industry-first, AI-driven approach to researcher sourcing and activation, known as CrowdMatchTM, allows it to build and optimize crowds with virtually any skill set, to meet virtually any risk reduction goal, including security testing and beyond.

For over a decade, Bugcrowd’s unique "skills-as-a-service" approach to security has consistently uncovered more high-impact vulnerabilities than traditional methods. Our customer base, which numbers nearly 1,000, has benefited from this approach, which also provides a clearer line of sight to ROI. With unmatched flexibility and access to a decade of vulnerability intelligence data, the Bugcrowd Platform has evolved over time to reflect the changing nature of the attack surface – including the adoption of mobile infra, hybrid work, APIs, crypto, cloud workloads, and now AI. In 2023 alone, customers found almost 23,000 high-impact vulnerabilities using the Bugcrowd Platform, helping to prevent potential breach-related costs of up to $100 billion.


See previous articles

    

See next articles












Your podcast Here

New, you can have your Podcast here. Contact us for more information ask:
Marc Brami
Phone: +33 1 40 92 05 55
Mail: ipsimp@free.fr

All new podcasts