Last summer, more than 2,000 people (including this reporter) gathered at a convention center in Las Vegas for one of the world’s biggest hacking conferences. Most of them were there to do one thing: try to break the artificial intelligence chatbots developed by some of the biggest tech companies out there.

The exercise tested generative AI models from eight companies: OpenAI, Anthropic, Meta, Google, Hugging Face, Nvidia, Stability AI, and Cohere. With those companies’ participation, as well as the blessing of the White House, the goal was to test the chatbots’ potential for real-world harm while in a safe environment, through an exercise known in the security world as “red teaming.”

While red teaming usually takes place behind closed doors at companies, labs, or top-secret government facilities, the organizers of last year’s exercise at the DEF CON hacking conference said opening it up to the general public provides two major advantages. First, it offers a greater diversity of participants and perspectives engaging with the chatbots than the smaller, handpicked teams at the companies building them. Second, public red teaming creates a more realistic picture of how people might engage with these chatbots in the real world and cause accidental or inadvertent harms.

Those potential harms were in abundant evidence at DEF CON, according to an analysis of the results published Wednesday by one of the event’s main organizers, the AI safety nonprofit Humane Intelligence, in collaboration with researchers from participating tech firms Google and Cohere.