Intro
Hi. This is the first of three little video essays about how AI currently isn’t on the right path, and is set to make some of the biggest problems of the 20th century even worse.
“Alignment” is one of the words people use for trying to fix this problem. There’s a lot of talk about “aligning AI with human values”, or “aligning AI with human flourishing”.
(examples)
So, what do these people mean by human values? What do they mean by human flourishing?
Mostly, these terms go undefined. But I believe taking them seriously is actually where to start in addressing these concerns.
I'm Joe Edelman. I'm known for XYZ. So I'm the guy to clear this up. I'll be assisted in this series by Ellie Hain, Joel Lehman, Oliver Klingefjord, and Ivan Vendrov.
Contents
This is a video essay in three parts.
- In this chapter, I cover this thing about human flourishing, or human values, and why it's so important to AI alignment. I'll try to get us clear, together, on how we have to define flourishing in order to deploy AI the right way. Flourishing is an intensely personal topic, because each person has their own knowledge and vocabulary for what it means to live well. But flourishing is also a global topic, relevant to how markets function, what business metrics lead to what outcomes, etc.
- The next chapter, which is linked in the description, is about the kind of AI that works with the kind of flourishing I'll cover here. I call that "Wise AI". In that chapter, I'll define what I mean by wisdom, which is closely related to flourishing, and then give several demos of working Wise AIs. That's where I'll be assisted by Joel, Oliver, and Ivan, all of them ML researchers. That chapter doesn't really make sense without this one, so don't skip ahead!
- Finally, in the third chapter, I'll cover what else needs to go right, besides Wise AI, for AI to have a good impact on society. I'll cover changes in geopolitics and finance, changes in public policy and perception, and finally changes in the relationships of the major AI labs to one another and to the world. Ellie Hain will help me tell those stories. It's when we put Wise AI together with these other changes that we can address the whole problem of introducing AI to society well. I call this whole plan "Full Stack Alignment", and the goal of this short series is to get that across.
Hope that makes sense. Let's go ahead with Chapter One.
It Matters How We Talk of Flourishing
So. In this first part of the talk, I want to show something that might sound super-abstract. But I'll try to show it in a concrete way.
The thing I want to show is, it matters how we talk about flourishing.
How we conceptualize what we want out of life, what vocabulary we use for talking about it, makes a huge difference.
This is where we have to start.
Many of you probably know that "AI Alignment" is a field, and that people in this field use vague terms like "aligning AI with human values" or "aligning AI with human flourishing."