The Meaning Alignment Institute will run experiments using LLMs to build market alternatives that avoid the market failures behind many AI risks, better fulfill deep human preferences, and could serve as decentralized alternatives to UBI. Aims: AI safety, human flourishing, resilient societies, and a blueprint for meaningful post-AGI economies.

Funding requested: $195K

Problem

Many AI risks are driven by markets misaligned with human flourishing:

There are markets for things that are bad for us, such as AI arms races among nations and labs, the market for AI girlfriends and other hyper-stimulating, isolating distractions, and markets for political manipulation and destabilization.

There are markets that displace us entirely, such as markets that replace ethically concerned human workers with AI workers that put business objectives first, or, in the extreme, markets that displace all human labor, leaving humans to lead meaningless lives as mere consumers while AGI-generated wealth concentrates among a few providers.

We can summarize these as failures of markets to put human values and meaning on a par with (what should be) instrumental goals like engagement, ROI, or the efficient use of resources.

There are three common responses to these problems with markets. Each centralizes power:

Pledges, where labs and companies voluntarily promise to hold back from what the market rewards.

Regulations, where governments constrain what can be bought and sold.

Redistributions, such as UBI, where AGI-generated wealth is taxed and reallocated.

Not only do these approaches centralize power, they also don't actually re-align markets: markets continue to pull the wrong way, patched by pledges, regulations, or redistributions.

Our Solution

We believe that powerful AI can be used to deeply align markets with what's actually good, eliminating both classes of market failure described above.

The idea is that buyers buy through an intermediary; sellers are then paid by the intermediary according to the ‘goodness’ they produce for buyers, rather than according to the price buyers paid. In other words, such a market intermediary uses non-market data about good outcomes (in our trial study, this is data about meaning collected through LLM interviews) to broker connections between providers and consumers.

This lets a market of many providers and consumers flourish, without misalignment.
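
To make the brokering mechanic concrete, here is a deliberately simplified sketch in Python. Every name in it (`Transaction`, `interview_buyer`, `settle`) is a hypothetical illustration rather than our implementation: it assumes buyers pre-pay into a pool held by the intermediary, and that an LLM interview can score the goodness of each exchange on a 0-to-1 scale.

```python
# Hypothetical sketch of the intermediary's settlement loop.
# Assumptions: buyers pre-pay into a pool; an LLM interview yields a
# goodness score in [0, 1] per transaction; sellers are paid out of the
# pool in proportion to the total goodness they produced.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Transaction:
    buyer_id: str
    seller_id: str
    amount: float  # what the buyer paid into the intermediary's pool


def interview_buyer(tx: Transaction) -> float:
    """Stand-in for an LLM interview asking the buyer what the purchase
    actually did for them, scored against a meaning rubric.
    Returns a goodness score in [0, 1]; a fixed placeholder is used here."""
    return 0.5


def settle(transactions: list[Transaction]) -> dict[str, float]:
    """Pay each seller a share of the pooled funds proportional to the
    total goodness they produced, not the prices they charged."""
    pool = sum(tx.amount for tx in transactions)

    # Aggregate interview-derived goodness per seller.
    goodness: dict[str, float] = defaultdict(float)
    for tx in transactions:
        goodness[tx.seller_id] += interview_buyer(tx)

    total_goodness = sum(goodness.values())
    if total_goodness == 0:
        return {seller: 0.0 for seller in goodness}

    # Payout is proportional to goodness produced, decoupled from price.
    return {s: pool * g / total_goodness for s, g in goodness.items()}
```

The key design choice is in the last line: a seller's revenue depends only on the goodness their buyers report, so sellers compete on produced goodness rather than on price, volume, or engagement.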

The closest pre-AI example of this is health insurance. In some health insurance schemes, hospitals are paid according to their success in maintaining or improving the health of a population, rather than per procedure performed. Market incentives can thus be aligned with a human-beneficial outcome.

We believe these intermediaries are much easier to build at scale with modern AI, and can be deployed in new market segments where a type of ‘goodness’ can be specified. For instance: