Shaping AI for the People: A Blueprint for the Future
Andrei Mihai
Artificial intelligence is no longer a futuristic promise; it is here, and it seems to be embedded almost everywhere. Few technologies have spread so quickly, and few have split opinion so sharply. To some, AI is the dawn of a new golden age; others see a ticking time bomb. This tension between possibility and risk was also visible in a live poll conducted with the audience at the 12th Heidelberg Laureate Forum this year, where “deepfakes and misinformation” was chosen as the most important AI challenge of the next 10 years, followed by concerns about ethics and privacy.
Beneath all this tension is one key question: How do we make sure AI works for people, not against them?
Jeff Dean (Chief Scientist, Google DeepMind and Google Research; ACM Prize in Computing – 2012) and David Patterson (ACM A.M. Turing Award – 2017) also asked themselves that question. The two gathered expert advice from fields ranging from science to policy and law. They spoke with experts in AI, as well as with the likes of Nobel Laureate John Jumper and former US President Barack Obama. In a Spark Session at the 2025 Heidelberg Laureate Forum, they presented some of their conclusions.

Four Moonshots
The main conclusion was neither a rosy “AI will save us all,” nor a warning about rogue superintelligence. Instead, the two laureates laid out a practical approach, arguing that how AI is applied in jobs, education, healthcare, and even democracy will set the trajectory for billions of lives. They wanted to steer the research community with concrete goals, much like the “moonshot” of the Space Race.
The result, a project called “Shaping AI”, was born out of a “shared frustration over the polarized discourse on AI, which has devolved into a standoff between accelerationists and doomers.”
“Rather than simply predict what the impact of AI will be given a laissez-faire approach, our goal is to propose what the impact could be given directed efforts to maximize the upsides and minimize the downsides,” the project’s About page reads. A summary is also presented in an arXiv paper.
At the HLF Spark session on Tuesday morning, Dean and Patterson took the stage together to present some of their findings. They started with what they hope AI can actually deliver. “We should set concrete goals to have a positive societal benefit,” says Patterson.
They could not limit themselves to one “moonshot,” however. They landed on four:
- Functional Civic Discourse by 2030;
- AI for Healthcare;
- A Century of Progress in a Single Decade;
- Workforce Re-skilling.
The last of these is perhaps the most straightforward to address. If AI displaces workers, it must also help them rebound. Patterson calls for an “AI rapid upskilling prize,” a system that helps low-wage workers retrain into middle-class jobs within six months. In fact, he explained how AI could actually help rebuild the middle class. Yet, people’s concerns about their jobs are not unfounded.
AI is far from the first technology set to reshape the workforce, but the scale at which it is happening is striking. The impact is also geography-dependent. In developed countries like the US, the worry is lawyers or coders being displaced. In sub-Saharan Africa, for example, the crisis is the opposite: There are not enough trained professionals. In such regions, AI could be transformative, just as mobile phones once leapfrogged landlines. An AI “health aide” in a nurse’s pocket might literally save lives where doctors are scarce. Ultimately, if directed wisely, AI could expand employment by boosting productivity in sectors where demand is boundless, like education, healthcare, software, or research. But if left unchecked, it could hollow out industries with fixed ceilings.
The main, immediate focus, Dean points out, is to remove the drudgery from current tasks. The recommended approach is to aim AI at increasing human productivity rather than replacing labor.
“AI focused on human productivity is better than labor replacement,” the laureate says. AI can increase human employability, but safeguards are needed for when it veers off course. The first objective should be to “remove drudgery from current tasks and only then move to new AI innovation,” Dean continued.
Can AI Help Democracy?
Whereas the impact of AI on jobs is at least partly predictable, its effect on our civic discourse and democratic societies is far harder to assess.
We are seeing today how social networks and AI-fuelled operations often amplify division and spread misinformation; we also see AI used for surveillance in some contexts. The small HLF survey echoes similar concerns from broader civil society. But could AI also be used to repair some of the broken machinery of our society?
Experiments highlighted by Dean and Patterson in the paper show AI can sometimes play a positive role, moderating conversations, surfacing shared values, and even countering conspiracy theories. Patterson cites one familiar case: a friend who engaged in conspiracy theories argued with AI until their arguments simply ran dry. Later on, that friend showed less attachment to their false beliefs.
“Though many are rightly worried about the prospect of artificial intelligence being used to spread misinformation or polarize online communications, our findings indicate it may also be useful for promoting respect, understanding, and democratic reciprocity,” notes one study cited in the paper.
Sceptical? So are Dean and Patterson. After all, good science rests on a healthy dose of scepticism. But they argue it is a research question worth funding. If AI can help societies pull back from polarization, it could be one of the greatest public goods of the 21st century.
In terms of healthcare, we are already seeing substantial benefits, but the two laureates emphasize the importance of starting with the basics: help nurses, physician assistants, and overworked clinicians cut paperwork and triage faster. Then move toward systems that catch misdiagnoses, which are still strikingly common.
The Stakes Are High
The impact of AI in society is almost guaranteed to be transformative. AI could accelerate scientific discovery by a factor of ten, compressing a century of breakthroughs into a single decade. It could double GDP growth in countries like the United States, lifting millions out of poverty and rebuilding the middle class. It could give overburdened teachers and doctors tools that free them from paperwork and allow them to focus on the human parts of their jobs.
But the risks are just as profound. Poorly directed, AI could concentrate wealth and power and create large pockets of long-term unemployment.
Patterson and Dean stress that technology will not magically align itself with human values. They argue that with intention, coordination, and the right incentives, AI could lead to global prosperity, but this is not guaranteed.
“Artificial Intelligence (AI), like any transformative technology, has the potential to be a double-edged sword, leading either toward significant advancements or detrimental outcomes for society as a whole. As is often the case when it comes to widely-used technologies in market economies (e.g., cars and semiconductor chips), commercial interest tends to be the predominant guiding factor,” the paper reads.
“The AI community is at risk of becoming polarized to either take a laissez-faire attitude toward AI development, or to call for government overregulation. Between these two poles we argue for the community of AI practitioners to consciously and proactively work for the common good.”
At the 2025 Heidelberg Laureate Forum, AI was rightfully highlighted as one of the most consequential technologies of our time. Yet, as Patterson and Dean emphasize, AI does not come with a fixed set of outcomes. The way it is developed, deployed, and governed in the next few years will affect the lives of billions. It was a healthy and important reminder of the societal impact of research, and a reminder that while it is easy to fall into extremes, a balanced approach typically yields the best results.
“It can be as big a mistake to ignore potential gains as it is to ignore risks,” their paper concludes.
The post Shaping AI for the People: A Blueprint for the Future originally appeared on the HLFF SciLogs blog.