As concerns mount about the risks AI poses to society, a human-first approach has emerged as an important way to keep these systems in check. That approach, called red-teaming, relies on teams of people who poke and prod AI systems to make them misbehave, revealing vulnerabilities that developers can then try to address. Red-teaming comes in many flavors, ranging from organically formed communities on social media to officially sanctioned government events to internal corporate efforts. Just recently, OpenAI announced a call to hire contract red-teamers the company can summon as needed.