Will the major Generative AI vendors allow an academic investigation of their security?

Computerworld.com reported that “More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems, the lack of which has led to concerns about basic protections. The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models, which they said is hampering safety measures that could help protect the public.” The March 5, 2024 article, entitled “Researchers, legal experts want AI firms to open up for safety checks” (https://www.computerworld.com/article/3714180/researchers-legal-experts-want-ai-firms-to-open-up-for-safety-checks.html), included these comments:

The letter, and a study behind it, was created with the help of nearly two dozen professors and researchers who called for a legal “safe harbor” for independent evaluation of genAI products.

The letter was sent to companies including OpenAI, Anthropic, Google, Meta, and Midjourney, and asks them to allow researchers to investigate their products to ensure consumers are protected from bias, alleged copyright infringement, and non-consensual intimate imagery.

How do you think this GenAI investigation will go?
