Slow AI: A Quiet Revolution

Benjamin Skuse

Though half the world is blissfully unaware, AI – particularly the generative type based on large language models (LLMs) – is reshaping the jobs market at a terrifying pace. Driven by corporate competition, the urgent scaling and advancement of AI by the likes of OpenAI, Google, Meta and others is already seeing many traditionally human tasks handed over to machines. This is even happening within these mega-corporations themselves: around 30% of Microsoft and Google code is now written by AI, according to their respective CEOs, which is likely why these companies have been laying off software engineers by the bucketload. But a quiet revolution is starting to take root, offering a brighter – albeit slower – vision of the future.

A Slow Beginning

Back in the distant past – 2020 – whispers of a more measured way of approaching AI were already being heard in obscure pockets of the internet. In a blog post for the Ada Lovelace Institute (an independent UK research institute with a mission to ensure that data and AI work for people and society), Professor Jeremy Crampton of Newcastle University wrote a short manifesto for slow AI based on three fundamental principles: Think. Resist. Act Local. In more detail, he called for everyone to think about whether AI is the best solution for a given situation, resist the rhetoric around AI and the values that frame it, and ensure that AI is place-based, meaning co-produced with local communities without exploiting them as data subjects.

Not long after, in December 2020, Timnit Gebru, a technical co-lead of Google’s Ethical Artificial Intelligence Team, was fired for submitting a paper on the dangers of LLMs without Google’s prior internal review. Gebru was already a prominent scholar in the AI ethics community, and her dismissal made headlines around the world, propelling her into the public limelight.

From this position of influence, she co-authored perhaps the most well-known paper in the field, and the only paper I have ever seen to feature an emoji in the title: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” ‘Stochastic Parrots’ was an explicit call to action for researchers to slow down, “take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks?”  

Timnit Gebru at The Web Conference 2019. Credit: Victor Grigas (CC-BY-SA-4.0).

Getting Organised

Soon after, Gebru founded a new interdisciplinary and community-based organisation, the Distributed AI Research Institute (DAIR), to help answer these questions. DAIR’s work is grounded in conducting AI research in a more thoughtful, slow way, and in ensuring that AI is produced and deployed through deliberate processes that include diverse perspectives, rather than purely to serve opaque corporate agendas.

Similar independent, international, interdisciplinary grassroots movements have formed outside the bubble of experts developing AI tools. Dutch designer Nadia Piet, for example, began chatting with fellow designers and artists on Slack in 2018 about new modes of thinking and talking about AI. Fast-forward to today and these conversations have morphed into AIxDESIGN, a global community of 8000+ people dedicated to conducting critical AI design research. In 2024, AIxDESIGN began exploring the slow AI concept in depth, inviting AI researchers, designers, technologists and artists to examine current practices and perspectives, and to imagine and set about implementing alternatives that embed mindfulness, care and community into AI development and deployment.

Even more recently, in July 2025, poet Professor Sam Illingworth (Edinburgh Napier University, UK) started a weekly prompt series on Substack simply named Slow AI. By providing a way of experimenting with AI that reclaims human creative agency and explores different ways of using the technology, this understated series is yet another small step towards shaping AI for equitable human benefit.

Why Slow AI Matters

Why is all of this important, and why does it matter to mathematicians and computer scientists? Ask an LLM whether AI is coming for these jobs and it will regurgitate the narrative that “highly skilled academics like computer scientists and mathematicians will experience job transformation and creation rather than mass displacement”.

This might prove true. But if a machine can discover new mathematics and find solutions to long-unsolved problems, the argument that mathematicians will be free to perform “a higher type of mathematics”, as Terence Tao and others profess, seems fanciful for all but the very best, such as Tao himself. What will lesser mathematicians do?

And what about computer scientists? Though it is widely touted online that computer scientists will not be replaced any time soon – particularly those who build, control and advance AI – there are signs that the job market is already becoming more competitive. And if AI is developed to perform high-level tasks such as reinventing its own architecture to be more energy efficient, or designing and validating secure safety-critical software – neither of which is beyond the realms of possibility – where will computer scientists fit in?

Even assuming these rather gloomy scenarios fail to come to pass, unbidden, unregulated AI advancement has the potential to disrupt society, including the lives of mathematicians and computer scientists, in unpredictable and potentially significant ways. For example, only this July, xAI’s chatbot Grok began to spew antisemitic Hitler-praising bile across social media. Is this just the visible part of an iceberg of bias hidden below the surface of popular LLMs we all use on a regular basis? How is this influencing our opinions and decisions?

Professor Ruha Benjamin of Princeton University has literally written the book on how bias is infused into modern technology. She has exposed how seemingly neutral technologies, such as corporate AI hiring systems and the digital surveillance and facial recognition tools used by law enforcement, often reflect and amplify the biases already present in society. Though these technologies can be calibrated carefully and deployed thoughtfully to remove prejudicial decision-making, Benjamin and other thinkers in this space argue that the rapid pace at which AI is infiltrating society has sidelined such ethical considerations and made incorporating them an uphill battle.

AI-generated image of a handsome Asian man.
To the prompt ‘Draw a handsome man,’ it took four clicks of the Redo button before Google Gemini produced an image that did not depict a white man.

Taking Responsibility, Taking Action

Slow AI is not an anti-AI concept, but rather a call for those developing, using and overseeing AI to do so deliberately and responsibly so that it is fair, transparent and beneficial for all. A slow AI approach can help minimise job losses and economic inequality by ensuring that AI skills are broadly shared across society and that AI tools are developed to assist humans rather than replace them. It can also mean ethical considerations are embedded across AI development, from choosing diverse development teams to involving ethicists in data collection, model training, deployment and monitoring.

Ultimately, for this approach to succeed, it requires action and commitment from governments and technology companies to shape the technology’s future responsibly. However, as part of the first generation to routinely incorporate AI into our daily work practices and lives, individually we can influence those actions by engaging with AI in a more mindful way. For example, if you want to get the most comprehensive answer to a particular question in your field of study, is an LLM prompt, a Google search, a trip to the library or a conversation with an expert your best bet? Which is most likely to provide you with a full and satisfying answer? Have you considered the environmental impact of each in your decision? By taking responsibility at an individual level, we can each contribute to shaping a healthier relationship with the technology more broadly.

And for those entering or already enjoying a career in one of the companies or research groups developing AI tools – some of whom will no doubt be attending the 12th Heidelberg Laureate Forum – there is an even greater opportunity. They can pause, reflect on and counteract biases before these become embedded in the technologies’ structures. They are in a position of power to address the environmental impact of the technologies being developed right now. And they can shape how users interact with AI tools, moving from treating them as quick and efficient ‘stochastic parrots’ to engaging with them as machines that capture our imagination and attention, helping us navigate challenging questions and bring about profoundly important benefits to society at large.

The post Slow AI: A Quiet Revolution originally appeared on the HLFF SciLogs blog.