Understanding How AI Will Impact Society Long-Term
An overview of AOI's approach to alignment research
The rapid emergence of advanced AI offers an unprecedented opportunity for widespread human flourishing, but our current systems do not put us on that path. As AI tools and companies move toward artificial general intelligence, it’s becoming more urgent to understand and improve the sociotechnical systems that ground and shape AI. Questions of AI governance, ethics, and law come to the forefront. It’s also imperative that we examine our own humanity: if we want AI to reflect and operate with sound human objectives and to enable human flourishing, we need to understand what has been holding us back, with or without AI.
At the AI Objectives Institute, we’re exploring what institutions and coordination strategies we can and should adopt in light of ongoing AI progress. We’re asking: what social technologies are necessary to align AI progress with human gain? What new social technologies does AI progress make possible, and how can we connect the two? For context, here are some additional areas we’re exploring:
What are the misalignments that already permeate society and cause our collective intelligence to do harm?
What are the coordination failures that exist in our society today, and how can we scale cooperation to empower each and every individual’s pursuits?
What current failure points in our institutions and economy are most likely to be exacerbated by technical progress?
How are the biases and prejudices reinforced by our current social institutions going to change as AI develops, and how can AI help us overcome our individual and collective blindspots?
How can we use AI toward collective intent alignment, enhancing public deliberation by identifying agreements and common ground, disagreements and cruxes?
Avoiding bad outcomes from advanced AI and aiding human flourishing require more than technical methods for aligning an AI to human values: we must also learn to align the sociotechnical structures that build and use AI. The institutional background and information landscape within which AI systems develop will critically determine AI’s future. Our hope at the AI Objectives Institute is that by leveraging current AI to improve our sociotechnical world, we can create positive feedback loops between AI alignment and collective agency.
Where We’ve Been… Where We’re Headed
In the early 2000s, we were ready for the miracles of real-time connectivity to bring new joys into our lives and new forms of democracy to our societies. We didn’t expect a freefall into fake news, echo chambers, online bullying, mental health feedback loops, and a plethora of privacy threats and scandals. From our news consumption to our wellbeing, our finances, and our national security, social media has infiltrated our lives at every turn.
The impact of self-improving AI systems in the coming years will be far more drastic than what we’ve experienced with social media. Our capacity for social agency will be fundamentally transformed, with consequences for individual wellbeing, our ability to organize collectively, and the reliability of our institutions. The AI Objectives Institute (AOI) was founded to help steer this transformation and its many feedback loops toward good ends.
Our late founder, Peter Eckersley, started AOI to build a community around a series of projects with a common theme: defining goals that contribute to human flourishing, and translating them into patterns that can guide AI systems. We’re moving forward to honor Peter’s legacy and vision and to explore how society will be affected by AI developments.
At AOI, we believe that the ways in which human systems will fail at managing advanced AI will not be wholly unexpected: they will take the form of familiar institutional, financial, and environmental failures, which we have experienced over the last decade at unprecedented rates. At the core of every existential risk is the risk that we fail to collaborate effectively, even when each of us has everything to lose. Let’s learn to coordinate in service of a future that will be better for us all.