The number of recorded occurrences involving artificial intelligence has increased tenfold since 2016, a span of less than a decade, following the rise of large language models and the digital tools built on them, according to independent monitoring data.
OpenAI, Tesla, Google, and Meta account for the highest number of AI-related incidents in the open repository of AIAAIC, an initiative that tracks failures and promotes transparency in AI systems, algorithms, and automation.
Learn more about the AIAAIC
The AIAAIC manifesto highlights the need for genuine transparency and openness in artificial intelligence systems, algorithms, and automation. The organization argues that technological power cannot be monopolized by corporate or governmental interests but must translate into transparency, accountability, and democratic participation, ensuring that innovation goes hand in hand with ethics and human rights.
The organization warns that while the industry focuses on existential and catastrophic risks, concrete harms continue to be ignored. With new legislation in progress, the debate is shifting, but impact classifications remain limited, confusing, and inflexible.
If you want to access the repository of AI-related incidents and issues cataloged by AIAAIC, click here.
The first case recorded by AIAAIC dates back to 2008, when two scientific publishers retracted 120 articles after the French researcher Cyril Labbé discovered they had been generated by SCIGen. Created in 2005, the program randomly combines words to produce texts that mimic academic papers.
AIAAIC classifies events as either issues or incidents.
- Issues are public concerns about an AI system, even without evidence of harm. They may involve ethics, environmental impact, or inadequate governance, and if unmanaged, can lead to incidents.
- Incidents are sudden, public events that cause disruption, loss, or crisis, such as AI failures, data leaks, or unethical developer behavior. Most AIAAIC records fall into this category.
In 2024, incidents declined to 187 from the previous year, while issues surged to 188, the highest number ever recorded; together, the two categories totaled 375 occurrences, ten times more than in 2016.
This increase coincided with the rise in AI model releases by companies, researchers, and other groups, peaking in 2023.
Most of these incidents occurred in the United States. AIAAIC considers the geographical origin of the system, incident, issue, or data.
For example, if a U.S. company develops a model that generates celebrity deepfakes, the organization will register the event as occurring in the U.S., regardless of its global impact.
As of February 20, 2025, Brazil had appeared in 24 instances. The first was in 2018, when ViaQuatro, one of São Paulo's metro concessionaires, was legally required to stop using facial recognition technology for emotion analysis.
Another classification tracked by AIAAIC is the sector, meaning the industry (including government and nonprofit organizations) most affected by each event.
In Brazil, recorded events include media and entertainment, politics, transportation, retail, and even religion—such as when a priest used a drone to carry the Eucharist to the altar during a Mass. The congregation reportedly supported the act, but when shared on Facebook, religious conservatives complained that it was “inappropriate,” “scandalous,” and a “desecration.”
Overall, most incidents and issues involve the media and entertainment sector, with 288 records as of February 20, 2025. Many of these cases involve copyright disputes, such as a 2024 incident in which Netflix used AI to recreate the voice of Gabby Petito, a murder victim, in a documentary series about her death.
Although there are only 25 different sectors, AIAAIC has recorded over 1,000 AI systems involved in incidents or issues in its database.
All sectors cataloged by AIAAIC
The most well-known systems, such as those developed by big tech companies like Google, Meta, and OpenAI, lead in recorded incidents. However, 826 other individual or lesser-known systems are also cataloged, including proprietary and locally distributed models.
Besides the big tech companies, Tesla, the electric car maker controlled by billionaire Elon Musk, also ranks high on this list, with 63 records of incidents in which its AI-driven car systems caused problems, including fatal accidents.
AIAAIC classifies AI-related incidents into different issue groups—translated by Núcleo as “controversy reasons” for better understanding. These groups reflect concerns about the technological system, its governance, or misuse by third parties.
The categories of “accuracy and reliability” and “safety,” with 361 and 268 records, respectively, top the list of incidents recorded by AIAAIC.
Among the documented cases are failures in Tesla's autonomous driving system. AIAAIC has recorded at least 20 incidents where this system malfunctioned, leading to fatal accidents.
In the safety category, incidents also include AI-powered robots creating non-consensual pornography or child exploitation material. One example is Núcleo's report on Telegram bots, which appears in the AIAAIC repository.
List of all “controversy groups”
The definitions of each group are as follows:
- Accountability: Cases involving the failure of system creators, developers, or implementers to respond to inquiries, complaints, or appeals, or to allow third parties to audit, investigate, and meaningfully assess the system’s human and technological behavior and its outcomes.
- Accuracy and Reliability: Cases involving the degree to which a system achieves its stated objectives with precision and its ability to operate consistently and reliably without deterioration, failures, or malfunctions.
- Bias and Discrimination: Cases where a system produces systematically unfair or biased outcomes due to governance failures, poor data quality, or incorrect assumptions in the machine learning process.
- Employment: Cases involving the improper, unethical, or illegal use of AI in the workplace, as well as the development and deployment of such systems for employment-related purposes.
- Disinformation: Cases involving information or data that mislead users—whether accidentally or intentionally.
- Privacy: Cases concerning the protection of personal privacy for users of AI systems, algorithms, and automation, as well as individuals whose data has been or is being used to train these systems.
- Safety: Cases involving the physical and psychological safety of users and others impacted by a system and its governance.
- Transparency: Cases involving the extent to which a system is understandable to users, policymakers, and other stakeholders.
How we did this
We accessed AIAAIC’s public incident repository and used this R script to analyze the data, which we then visualized in interactive charts using the Flourish tool. OpenAI’s GPT-4 and GPT-4.5 models were used to assist in translating the category names, refining the syntax of the analysis code, and translating the original article from Portuguese.
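To illustrate the kind of aggregation behind figures such as the 187 incidents and 188 issues counted for 2024, here is a minimal R sketch that tallies repository records by year and type. The file name (aiaaic_export.csv) and the column names (occurred, type) are assumptions made for this example; the actual export of the AIAAIC spreadsheet, and our script, may use different names.

```r
# Minimal sketch, not our full script: count AIAAIC records by year and type.
# Assumes the repository has been exported to "aiaaic_export.csv" with a
# `type` column (Incident/Issue) and an `occurred` column holding the year;
# the real export may use different column names.
library(readr)
library(dplyr)

records <- read_csv("aiaaic_export.csv")

# Occurrences per year, split into incidents and issues
per_year <- records |>
  count(occurred, type, name = "n") |>
  arrange(occurred, type)

# Yearly totals, used for comparisons such as "ten times more than in 2016"
totals <- per_year |>
  group_by(occurred) |>
  summarise(total = sum(n), .groups = "drop")

print(per_year)
print(totals)
```

The resulting tables can then be exported as CSV and loaded into Flourish to build the interactive charts.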
Whenever we use AI tools, we conduct human review, verify the information, ensure the content meets our ethical standards, and confirm that such usage is necessary. This approach allows us to analyze large volumes of data more quickly and objectively.
You can give your feedback about our use of AI by clicking here.