The best overview of our values-based AI work is our paper on arXiv here:
The best intro to our work in economics is to read, in order: our blogpost draft *Market Intermediaries: A Post-AGI Market Alignment Vision*; then Chapter 1 of Joe’s talk *Rebuilding Society on Meaning* (a bit out of date); and then this paper draft: https://github.com/jxe/vpm/blob/master/vpm.pdf
The best intro to our organizational plans and social vision is Ellie and Oliver’s interview with David Shapiro https://www.youtube.com/watch?v=bC2pQ78o754
Our work at MAI is woven from five academic strands.
From my paper “Values, Preferences, Meaningful Choice”:
In his 1938 paper introducing revealed preference, @Samuelson1938 warned:
> I should like to state my personal opinion that nothing said here... touches upon at any point the problem of welfare economics, except in the sense of revealing the confusion in the traditional theory of these distinct subjects.
Similar sentiments followed in @Arrow1951, @Sen1977, @Anderson2001, and elsewhere. As an informational basis for welfare, optimality, social choice, and so on, revealed preference has been much critiqued.
A rich literature covers how revealed preferences---which, when summed up, are called engagement metrics---lead us astray. You can often get people to choose something without serving their real interests: you can misinform them, or leverage their misplaced hopes.
Or, you can make it so people need your thing for what was once possible without it: they need your car to get to work, your social media account to find a job, your dress to socialize with their friends, etc.
More broadly, you can manufacture social circumstances where people choose your thing to "keep up with the Joneses", to signal allegiance with their tribe, or because they've lost the ability to coordinate a real solution.[See discussions of the prisoner's dilemma in e.g., @Sen1973; @Anderson2001]
In some cases, the person will know their choice doesn't express their true interests---that they are bowing to external pressure, caught in the system, or setting aside their goals to conform to a social rule.[See @Sen1977; @Anderson1993 on 'commitment'] In other cases, their options have been limited or biased behind their backs.
Specifically regarding social choice, economist Amartya Sen writes eloquently about how information on people’s preferences—even their stated preferences across a menu of options—isn’t enough for good social choices. Even richer information—about people's wellbeing or utility in different hypothetical states—isn’t good enough.
Preference-based systems like markets, voting, and recommenders don’t carry all the “demand” needed for good social choices.
Many of the failures of markets to provide for what we value (community, adventure, meaningful work, personal growth, ...) arise because markets act on the preferences supposedly revealed by our choices, rather than delivering what’s actually important to us.