It’s called red-teaming.
As Big Law ratchets up for the generative AI economy, some top firms are pouring resources into the business of stress-testing artificial intelligence models for Corporate America, essentially smashing attorneys and data scientists together so they can make sure the machines don’t do anything that would get companies into legal trouble.
“What we’re doing is building both automated and human attacks, where we go to the large language model and, effectively, try to get it to violate legal standards,” said Danny Tobey, the AI and data analytics chair at DLA Piper.
As companies roll out chatbots and …