Twitter unveiled its first-ever AI bug bounty program earlier this month. The challenge: identify bias in its image-cropping algorithm. After Twitter's META team found evidence of bias in an analysis earlier this year, the company removed the image-cropping algorithm from public use.
The winners will be chosen using a methodology developed by Twitter’s Machine Learning Ethics, Transparency, and Accountability (META) team. The first-place award of $3,500 will be announced during this year’s DEF CON AI Village.
Faulty AI systems
AI-based businesses typically operate their systems as black boxes, shielding the findings those systems generate from critical scrutiny. This has left stakeholders worried about the lack of transparency and about corporations’ failure to explain the critical details of how their AI-based solutions work.
Bug bounties, which compensate hackers for identifying flaws in software code before bad actors can exploit them, are becoming indispensable in the realm of security. Similar schemes could help businesses evaluate the interpretability of the results their AI systems produce.
Thus, bug bounty programs not only build confidence in a company by surfacing flaws but also help in evaluating the firm’s cybersecurity division.
AI Project Bug Bounties
Professionals from Google Brain, Intel, OpenAI, and leading research laboratories in the United States and Europe collaborated last year to create a toolset for putting AI ethical concepts into practice. Their report proposed numerous measures, including compensating developers for uncovering bias in AI, much like bug bounties in cybersecurity software.
The paper stated, “If companies were more open earlier in the development process about possible faults, and if users were able to raise (and be compensated for raising) concerns about AI to institutions, users might report them directly instead of seeking recourse in the court of public opinion.”
According to Andrew Cormack, the chief regulatory officer at Jisc, bug bounty schemes, which compensate researchers for finding security weaknesses, marked a watershed moment in how the tech industry addresses vulnerabilities. Today, suppliers and security professionals are proactive in their approach to security as a matter of course.
While bounty contests are not a substitute for systematic analysis and testing, they can allow organizations to subject their technologies to tests they would not have explored otherwise.
Popular AI Bias Bug Bounty Schemes
- Logically’s Bug Bounty Program: Logically has been collaborating with security professionals to keep customers safe from malicious networks and mobile apps.
- The Mozilla Security Bug Bounty Program is intended to encourage security research in Mozilla software and to reward those who help make the internet safer.
- The Algorithmic Justice League’s CRASH initiative, which stands for Community Reporting of Algorithmic System Harms, brings key stakeholders together for tool discovery, scoping, and iterative prototyping, with the goal of making AI systems more accountable and safe.
- ‘The Internet Bug Bounty’ is hosted by HackerOne, a “hacker-based” security testing platform. This initiative pays hackers who discover security flaws in some of the internet’s most critical applications. The initiative is funded by GitHub, Facebook, HackerOne, Microsoft, and the Ford Foundation and is overseen by a panel of volunteers chosen from the cybersecurity community.
- Bugcrowd is a crowdsourced security platform that uses analytics, automated security procedures, and human expertise to identify and patch significant vulnerabilities. Bugcrowd announced a $30 million Series D financing round in April 2020. It has worked with a diverse range of customers, including Atlassian, Tesla, Square, Mastercard, and Fitbit, and its researchers evaluate platforms for major technology companies and retailers such as eBay and Amazon.