About this document

<aside> 👉 This is a work in progress, not a public statement.

Please tell us:

  1. Which principles are you fully on board with?
  2. Which are communicated unclearly, or need more evidence? </aside>

11 Principles for a Values-Centric Future

  1. All discussions of AI, alignment, tech, and society dance around one question: What “good society” should we aim for? Equivalently: what is sacred about human life, that we should coordinate around?

  2. Civilization evolves as humanity finds different answers to this question. Whether we think the answer is “God’s will”, “humanity’s freedom”, or “equal rights”, we create institutions and new forms of social organization around the answer: churches, markets, democracies, syndicates, DEI departments, and so on.

  3. We are now at a transition point. Such transitions come periodically, as our civilization strains against the limits of one kind of social order and grasps for another. They cannot be avoided or paused; they must be navigated carefully and with a clear vision for a good future. Each transition asks of us a new, clearer vision of what is sacred about life and what a good society looks like.

  4. AI is a catalyst for a new way of thinking and doing. The more complex and powerful a society becomes, the more finely-adjusted direction we must give to its power. As humanity approaches the age of AGI, whatever we choose to maximize will be infinitely magnified. We must reckon with the question: what is sacred about life, which potentially infinite powers should be in service of?

  5. Today, people have different answers to this question: technological progress, inclusion and diversity, collective intelligence, decentralization, health and longevity, and so on.

  6. Today, thanks to recent advances in philosophy and LLMs, and to a general population with a growing interest in wellbeing, we can now clearly elicit the experiences people consider sacred. We can separate the container of meaning from its source, and get at what people really, actually want behind their preferences, ideologies, goals, and norms. There’s something very special about this.

  7. This has a very important advantage for coordination: when talking at this deeper level, people agree much more about what’s wise and meaningful. There’s more commensurability. For example (…). We also found that those experiences are, in fact, quite universal, and there is a lot of agreement. We all share the same reality, after all!

  8. We believe optimizing for people’s sources of meaning can help us create the institutions we need to flourish in a post-AI world. Designing new institutions and AI around what people actually find meaningful, rather than around abstract ideologies, is a great way to ensure flourishing in the age of AGI. With this, we could create:

    1. New political systems based on maps of shared values can transcend ideological deadlocks⁶. Social progress is stalled in many areas because the powerful have made idiotic religions out of political parties and cause areas. With a shared understanding of values, we can redefine our political identities and emphasize our individual quests for meaning and sacredness over externally imposed labels like red/blue, or oppressor/oppressed.⁷

      1. We have already done this with a topic as contentious as abortion!
    2. Policy can have much clearer goals. As AI spreads through society, which human jobs and roles should be preserved? The most meaningful! Which human relationships should be preserved? The meaningful ones! Which economic arrangements should be built around human life? Those that support meaningful lives!

      A shared map of what’s meaningful to people can make all of this concrete, and give us techniques for measuring our progress towards broadly-supported meaningful lives.

      This suggestion is common sense. If there is to be a division of labor between humans and AIs, shouldn’t humans be able to stick with what’s meaningful, rather than getting the economic leftovers? What other criteria would make sense?

    3. Markets can be reformed. Many modern systems involve harnessing markets to serve non-market concerns: modern healthcare tries to make markets serve health outcomes; livable cities try to harness development to serve the flourishing of neighborhoods. We believe a rich understanding of what’s meaningful can put markets in their place, leading to economies that go beyond consumption, atomization, and extraction, and center on meaningful lives and a strong social fabric. As a concrete version of this, we envision collectively-owned LLMs that allocate resources according to what those in the collective find meaningful¹.

    4. Finally, these maps of wisdom³ can also be used to create wise AIs. The major AI labs are currently racing towards superintelligence. But intelligence is not enough to create a good future. For this, we need wise systems: models that understand and develop their values in collaboration with humanity. A map of humanity’s values offers a way to develop such systems. These Wise AI models can help us find win-wins in situations where intelligence alone could not.

  9. Wise AI is just a starting point. To sustain this transformation, it must be integrated into a reformation of our existing systems of collective action. Imagine we just had Wise AIs: market dynamics would push against them, towards systems optimized for existing business incentives. Geopolitics, too, would push towards ruthless, military AIs, not wise ones.

    So, we must upgrade models, markets, and governments together — to something more values-driven, participatory, convergent, and wise — if we want to make it through.

  10. In other words, we need “full-stack alignment”. That’s good news though!

    Whatever specific dangers the x-risk community points to, addressing them requires coherent collective action, and coherent collective action requires a positive vision.

    Focusing on this (big, positive) shift can produce good policy outcomes and good product directions. It’s a far better frame than focusing on x-risk and dangers. You cannot create a good future by only focusing on how to prevent bad ones, because:

    (1) Good policy ideas come from positive visions of how things could be mutually beneficial if put in the right relationships. Without such visions, we get regulatory capture, black markets, and warring factions.

    (2) Focusing on risk narrows people’s thinking. Fear-driven people create fear-driven political responses, not generative ecosystems.

    (3) More generally, fear divides us, whereas hope and positive visions can unite us.

  11. So, let us be driven by love, not fear. We recognize the high stakes of the current moment. But we see the situation as an invitation to deepen our connection and commitment to the sacredness in life, and use this renewed connection as the guide for the path forward.

    The need for full-stack alignment is daunting, but also exciting. Let us take the stakes into account, and use this cultural moment to build beautiful things.

    We’re deeply committed to the sacredness of life. We believe being alive in this universe is the highest gift, and we’re committed to honoring the things that make that experience worthwhile — our drives for intimacy and connection, to expand frontiers and conquer unknowns, to create and to understand, to live in integrity with who we are. We believe these drives for life (our values and sources of meaning) are the fundamental pieces of human flourishing.

Sincerely,


Signatories: JE, EH, OK,

^ add yourselves