As AI has evolved from a collection of research projects to include a handful of titanic, industry-powering models like GPT-3, the field needs to evolve as well. Or so thinks Dario Amodei, former VP of research at OpenAI, who left the company a few months ago to start a new venture. Anthropic, as it's called, was founded with his sister Daniela, and aims to create "large-scale AI systems that are steerable, interpretable, and robust."
The challenge the Amodei siblings are tackling is simply that these AI models, while incredibly powerful, are not well understood. GPT-3, which he worked on, is an astonishingly versatile language system that can produce remarkably convincing text in practically any style, on practically any topic.
But say you had it produce rhyming couplets in the style of Shakespeare and Pope. How does it do it? What is it "thinking"? Which knob would you turn, which dial would you twist, to make it more nostalgic, less romantic, or to limit its diction and vocabulary in specific ways? There are certainly parameters to change here and there, but no one really knows exactly how this extremely convincing language sausage is being made.
It's one thing not to know when an AI model is producing poetry, quite another when the model is watching a department store for suspicious behavior, or fetching legal precedents for a judge about to pass down a sentence. The general rule today is: the more powerful the system, the harder it is to interpret its workings. That's not exactly a good trend.
"Today's large, general systems can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues," reads the company's self-description. "For now, we're primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit."
The goal, it seems, is to integrate safety principles into the existing priority system of AI development, which generally favors efficiency and power. As in any other industry, it's easier and more effective to build something in from the beginning than to bolt it on at the end. Attempting to pick apart and understand some of the biggest models out there may prove to be more work than building them in the first place. Anthropic seems intent on starting fresh.
"Anthropic's goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people," said Dario Amodei, CEO of the new venture, in the announcement of its $124 million in funding.
That funding, by the way, is about as star-studded as you might expect. It was led by Skype co-founder Jaan Tallinn, and included James McClave, Dustin Moskovitz, Eric Schmidt, and the Center for Emerging Risk Research, among others.
The company is a public benefit corporation, and the plan for now, as the limited information on its site suggests, is to remain heads-down on researching these fundamental questions of how to make large models more tractable and interpretable. We can expect more information later this year, perhaps, as the mission and team coalesce and initial results come in.
The name, incidentally, appears to derive from the "anthropic principle," the notion that intelligent life is possible in the universe because... well, here we are. Perhaps the idea is that intelligence is inevitable under the right conditions, and the company aims to create those conditions.