MIT Researchers Develop AI Risk Database

The Freely Accessible Database Categorizes and Catalogs AI Risks for Policymakers and Industry Stakeholders

As recently exemplified by the European Union’s AI Act and California’s SB 1047, policymakers have grappled with defining the specific risks that AI regulations should address. In an effort to provide clearer guidance for these lawmakers, as well as for industry and academic stakeholders, researchers at MIT have developed an AI “risk repository,” a database that categorizes and catalogs various AI risks.

“This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible, and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” said Peter Slattery, a researcher at MIT’s FutureTech group and lead on the AI risk repository project. “We created it now because we needed it for our project, and had realized that many others needed it, too.”

The AI risk repository includes over 700 risks, organized by causal factors such as intentionality, by domains such as discrimination, and by subdomains such as disinformation and cyberattacks. According to Slattery, the repository grew out of a desire to understand the overlaps and gaps in current AI safety research. While other risk frameworks exist, Slattery notes that each addresses only a fraction of the risks identified in the repository, a gap that could have significant implications for AI development, application, and policy creation.
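
As a rough illustration of how a taxonomy like this might be represented programmatically, the sketch below models a single repository entry in Python. The field names and example values are hypothetical and do not reflect the repository’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One catalogued AI risk (illustrative schema, not the repository's real format)."""
    description: str
    intentionality: str   # causal factor, e.g. "intentional" or "unintentional"
    domain: str           # e.g. "Discrimination"
    subdomain: str        # e.g. "Disinformation"
    source: str           # the document the risk was extracted from

# A toy repository; the real one contains 700+ entries drawn from the literature.
repository = [
    RiskEntry(
        description="AI-generated content used to spread false narratives",
        intentionality="intentional",
        domain="Misinformation",
        subdomain="Disinformation",
        source="Example et al. (2024)",  # hypothetical citation
    ),
]

# Grouping entries by subdomain makes overlaps and gaps easy to inspect.
by_subdomain: dict[str, list[RiskEntry]] = {}
for entry in repository:
    by_subdomain.setdefault(entry.subdomain, []).append(entry)
```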

“People may assume there is a consensus on AI risks, but our findings suggest otherwise,” Slattery emphasized. “We found that the average framework mentioned just 34% of the 23 risk subdomains we identified, and nearly a quarter covered less than 20%. No document or overview mentioned all 23 risk subdomains, and the most comprehensive covered only 70%. When the literature is this fragmented, we shouldn’t assume that we are all on the same page about these risks.”

To compile the repository, MIT researchers collaborated with colleagues from the University of Queensland, the nonprofit Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence. Together, they combed through academic databases, gathering thousands of documents related to AI risk evaluations.

Their analysis revealed that certain risks were more frequently mentioned in third-party frameworks than others. For instance, over 70% of the frameworks discussed AI’s privacy and security implications, while only 44% addressed misinformation. Additionally, while more than 50% covered potential discrimination and misrepresentation perpetuated by AI, only 12% mentioned the “pollution of the information ecosystem,” referring to the increasing prevalence of AI-generated spam.
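
The coverage figures above can be reproduced with a simple tally. The sketch below assumes each framework has been reduced to the set of risk subdomains it mentions; the framework names and membership data here are invented for illustration, not taken from the study.

```python
# Hypothetical coverage analysis: what share of reviewed frameworks
# mention each risk subdomain. All data below is made up.
frameworks = {
    "Framework A": {"privacy_security", "misinformation", "discrimination"},
    "Framework B": {"privacy_security", "discrimination"},
    "Framework C": {"privacy_security"},
}

all_subdomains = set().union(*frameworks.values())
n_frameworks = len(frameworks)

for subdomain in sorted(all_subdomains):
    covered = sum(subdomain in mentioned for mentioned in frameworks.values())
    print(f"{subdomain}: mentioned by {covered / n_frameworks:.0%} of frameworks")
```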

“A takeaway for researchers and policymakers, and anyone working with risks, is that this database could provide a foundation to build on when doing more specific work,” Slattery explained. “Before this, people like us had two choices. They could invest significant time to review the scattered literature to develop a comprehensive overview, or they could use a limited number of existing frameworks, which might miss relevant risks. Now they have a more comprehensive database, so our repository will hopefully save time and increase oversight.”

The question remains, however, whether the AI risk repository will be widely utilized. Currently, AI regulation is a patchwork of approaches with differing objectives. If this repository had existed earlier, would it have made a difference in shaping regulations? That remains uncertain.

Another consideration is whether merely agreeing on the risks posed by AI is enough to drive effective regulation. Many AI safety evaluations face considerable limitations, and while the repository is a significant resource, it may not resolve all challenges.

Nonetheless, the MIT researchers are determined to make an impact. Neil Thompson, head of the FutureTech lab, shared that the next phase of research will involve using the repository to assess how effectively different AI risks are being managed.

“Our repository will help us in the next step of our research, when we will be evaluating how well different risks are being addressed,” Thompson said. “We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”