A Radical Plan to Make AI Good, Not Evil

It’s easy to freak out about more advanced artificial intelligence—and much more difficult to know what to do about it. Anthropic, a startup founded in 2021 by a group of researchers who left OpenAI, says it has a plan. 

Anthropic is working on AI models similar to the one used to power OpenAI’s ChatGPT. But the startup announced today that its own chatbot, Claude, has a set of ethical principles built in that define what it should consider right and wrong, which Anthropic calls the bot’s “constitution.” 
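The article doesn't spell out the mechanics, but the general idea of steering a chatbot with written principles can be illustrated as a critique-and-revise loop. The sketch below is purely hypothetical: the `generate` function, the principle wording, and the prompts are stand-ins, not Anthropic's actual constitution or pipeline.

```python
# Illustrative sketch only: how written principles might steer a chatbot's
# answers via a critique-and-revise loop. Everything here (the generate()
# placeholder, the principles, the prompts) is assumed for illustration.

CONSTITUTION = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response that is honest and acknowledges uncertainty.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to any chat model; not a real API."""
    raise NotImplementedError

def constitutional_revision(user_request: str) -> str:
    """Draft an answer, then critique and rewrite it against each principle."""
    draft = generate(user_request)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response violates the principle."
        )
        draft = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it follows the principle."
        )
    return draft
```

In the research Anthropic has published on this approach, self-critique of this kind is used to produce revised responses that the model is then trained on, rather than being run on every user query.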

Jared Kaplan, a cofounder of Anthropic, says the design feature shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI. “We’re very concerned, but we also try to remain pragmatic,” he says. 

Anthropic’s approach doesn’t instill hard rules in an AI that it cannot break. But…
