Is AI Becoming a Scientific Collaborator, More Than a Tool?

Andrei Mihai

At the 12th Heidelberg Laureate Forum, artificial intelligence (AI) was at the center of heated discussions. In the second part of the Hot Topic session on “The Machine-Learning Revolution in Mathematics and Science,” the panel explored the various ways in which AI can impact research, as well as potential pitfalls and downsides. On stage sat a physicist who wrangles petabytes of data, an AI pioneer who taught machines to outthink world champions, and applied researchers testing the limits of deep learning.

The discussion was less about hype and more about reality: how AI is already changing the way we do science, and where it might lead us next.

A Thought Partner and an Analyst

The Hot Topic panel discussion (Part II) at the 12th Heidelberg Laureate Forum. © HLFF / Kreutzer

When people talk about AI, the usual headlines are either overly optimistic or doom and gloom. Reading most newspapers, you could think AI will either remake our society for the better or lead to rogue algorithms and machines stealing our agency. But the panel members saw AI more as a collaborator than a replacement.

Kyle Cranmer, a physicist at the University of Wisconsin, envisions “using AI as more like a thought partner or an agent for inspiration.” He sees AI less as a tool for discovering or proving theorems and more for providing ideas and laying the groundwork.

Thea Klæboe Årrestad, a particle physicist at CERN, works on AI for the Large Hadron Collider (LHC). She echoed that sentiment, but added that machine learning also helps physicists make sense of vast amounts of data. The LHC experiments face an immense data volume and severe physical limitations on data readout, which makes AI essential for filtering and processing.

Due to power and hardware constraints (if too many readout chips are used, the detector can become obscured), the systems physically cannot read out all of the data. The AI’s role is to perform real-time data reduction: filtering the massive stream down to a small fraction of the total data volume so that it can be stored and analyzed practically.

“At CERN we generate 40,000 exabytes of data every year and we need to reduce that to 0.02%. For that, we use … real-time machine learning to filter that data,” Årrestad explained.
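To make that filtering concrete, here is a minimal, hypothetical sketch of the idea, not CERN’s actual trigger software (which runs compiled models on custom hardware under hard latency budgets). A cheap scoring function is calibrated so that only a tiny fraction of events crosses the threshold and gets stored; everything else is discarded forever. All names and the stand-in “model” are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

KEEP_FRACTION = 0.0002   # keep ~0.02% of events, as in the quote above
N_EVENTS = 1_000_000     # stand-in for the event stream
BATCH = 10_000

def score_event(batch):
    """Stand-in for a fast learned classifier: here, just summed 'energy'.
    In a real trigger this would be a small neural network compiled to
    meet a fixed per-event latency budget."""
    return batch.sum(axis=1)

# Calibration phase: pick a threshold on a held-out sample so that
# roughly KEEP_FRACTION of events pass it.
calibration = rng.exponential(1.0, size=(100_000, 8))
threshold = np.quantile(score_event(calibration), 1.0 - KEEP_FRACTION)

# "Online" phase: stream events through in batches, keeping only those
# whose score exceeds the threshold. Rejected events are gone for good,
# which is why choosing what to keep matters so much.
kept = 0
for _ in range(N_EVENTS // BATCH):
    events = rng.exponential(1.0, size=(BATCH, 8))
    kept += int((score_event(events) > threshold).sum())

print(f"kept {kept} of {N_EVENTS:,} events "
      f"({kept / N_EVENTS:.4%}, target {KEEP_FRACTION:.4%})")
```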

The main challenge is, of course, figuring out which data to throw out and which data to keep. The existing algorithms can still be improved, but the implementation of deep learning has already resulted in a substantial increase in sensitivity.

“We’re doing analyses we could never have dreamt of doing with the amount of data we have purely because of deep learning,” Årrestad added.

Beyond the Data Deluge: The Age of Experience

David Silver, principal research scientist at Google DeepMind and a professor at University College London, has led research on reinforcement learning with the “Alpha” systems (like AlphaGo, AlphaZero and AlphaStar), and was awarded the 2019 ACM Prize in Computing for breakthrough advances in computer game-playing. Speaking on the panel, Silver said he foresees a new age for AI, something he calls “The Age of Experience.”

The Hot Topic (Part II) at the 12th Heidelberg Laureate Forum. © HLFF / Kreutzer

Until now, the tools that humanity has built have been, well, tools. They were used for some purpose or produced some output. In this new paradigm, AI can learn from experience to solve challenging problems, with the ultimate goal of developing profound, generally capable intelligences that are able to discover things that go beyond humans. This idea relies on the machine’s ability to learn autonomously through interaction: the system must be allowed to try things, explore, make mistakes, learn from those mistakes, and get better.

The classic example is AlphaZero, a DeepMind system that was given only the rules of chess, with no human games or strategies to learn from. It was simply left to play an enormous number of games against itself. Not only did it develop superhuman chess-playing ability, but it did so far more efficiently than when constrained by human strategies.
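As a toy illustration of learning purely from self-play, here is a minimal tabular Monte Carlo learner for tic-tac-toe. It is deliberately nothing like AlphaZero itself, which pairs deep neural networks with tree search; the point is only that the agent starts knowing just the rules and improves by playing against itself.

```python
import random
from collections import defaultdict

# Tabular self-play for tic-tac-toe: the agent knows only the rules
# (legal moves and what counts as a win) and learns everything else
# from games against itself.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

Q = defaultdict(float)   # (board, move) -> estimated value for the mover
N = defaultdict(int)     # visit counts, for incremental averaging
EPSILON = 0.1            # exploration rate: the freedom to make mistakes

def choose(board, moves):
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda m: Q[(board, m)])  # exploit

def self_play_game():
    board, player, history = (0,) * 9, 1, []
    while True:
        moves = [i for i in range(9) if board[i] == 0]
        if not moves:
            return 0, history                # draw
        m = choose(board, moves)
        history.append((board, m, player))
        board = board[:m] + (player,) + board[m + 1:]
        if winner(board):
            return player, history           # the mover just won
        player = -player

for _ in range(100_000):
    z, history = self_play_game()
    for board, move, mover in history:
        key = (board, move)
        N[key] += 1
        target = 0.0 if z == 0 else (1.0 if z == mover else -1.0)
        Q[key] += (target - Q[key]) / N[key]  # running mean of outcomes

print("learned values for", len(Q), "state-action pairs")
```

After enough self-play games, greedy play with respect to Q should approach optimal tic-tac-toe, without the system ever having seen a human game.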

This is not a distant future but a shift that is already well underway: in real experiments, AI systems learn by doing, make small mistakes, and come back stronger – much like human scientists themselves. Silver expects this approach to become increasingly common in research groups around the world.

“We saw earlier the poll with the audience using a great range of these [AI tools]. Now imagine that you have a system which can interact through some unified environment like a terminal or a GUI that allows the agent to access any of these tools and sequence them in any way it wants in order to achieve some meaningful measurable goal.”

Cranmer sees another benefit in using AI this way: it fosters inter- and cross-disciplinary collaboration. Deep learning can solve specific, concrete problems (like approximating intractable likelihood functions or efficiently evaluating large integrals) that recur across a huge number of scientific applications, and that shared toolbox helps build bridges between disciplines.

“To me, that’s one of the things that I find really exciting about AI for science is that it’s leading to this cross-pollination of ideas that I’ve basically never really seen … the idea that you know a theoretical chemist and a person that does nuclear matter particle physics came up with simultaneously the same idea.

“Now all of these people are talking to each other and even using the same software package to do it and like I’ve never seen that kind of interaction between very different domains.”
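One concrete instance of the shared problem Cranmer mentioned above, approximating an intractable likelihood, is the classifier-based “likelihood ratio trick” used across simulation-based inference. The sketch below is illustrative only (it assumes scikit-learn and uses two Gaussians as stand-in “simulators,” so the estimate can be checked against the exact answer); real applications replace them with expensive physics or chemistry simulations whose likelihoods cannot be written down.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 50_000

# Samples from two "simulators" (here just Gaussians so we can verify).
x0 = rng.normal(0.0, 1.0, size=(n, 1))   # simulator at parameter theta_0
x1 = rng.normal(0.5, 1.0, size=(n, 1))   # simulator at parameter theta_1

X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train a classifier to tell the two sample sets apart. If s(x) is its
# predicted probability of class 1, then s / (1 - s) estimates the
# likelihood ratio p(x | theta_1) / p(x | theta_0), even when neither
# likelihood can be evaluated directly.
clf = LogisticRegression().fit(X, y)

x_test = np.array([[0.0], [0.5], [1.0]])
s = clf.predict_proba(x_test)[:, 1]
ratio_est = s / (1.0 - s)

# For these unit-variance Gaussians the exact ratio is exp(0.5*x - 0.125).
ratio_true = np.exp(0.5 * x_test[:, 0] - 0.125)
print("estimated ratio:", np.round(ratio_est, 3))
print("exact ratio:    ", np.round(ratio_true, 3))
```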

Paleolithic Emotions, Medieval Institutions, Godlike Technology

As noted above, the panelists emphasized that for AI to advance toward a true intelligence able to “discover things that go beyond humans,” it must be given the freedom to learn through interaction – to explore, make mistakes, and improve. But what happens when the system gets it wrong?

“We should also acknowledge that we don’t have all the answers yet. It had some great successes and some applications where it works really well but there are also lots of questions … There are some real deep questions that still need to be answered,” says Silver.

The laureate stressed that this exploration should happen within strict boundaries, in places where the system can make small mistakes without severe consequences. He advocated that researchers should not be afraid to allow small mistakes to happen, as the system can then learn to generalize from those small errors to avoid making costlier, larger mistakes. But it’s essential that these mistakes take place in an environment where they are affordable.

Even in a scientific, experimental setup, such mistakes can be costly. In the case of CERN, Årrestad mentions, one mistake could melt an essential pipe or cable. “You really don’t want to do that mistake,” she says.

While the panel focused specifically on scientific discovery, this concern echoes larger-scale worries from the real world, where AI is increasingly used in all manner of applications. Maia Fraser, Associate Professor at the University of Ottawa, says we can keep our optimism while also guarding against severe mistakes.

“I was just going to add that we can be excited about the possibilities while also being aware of the potential downsides. We can do both at the same time: be motivated by the excitement and extra careful in the way things are deployed.” She referenced a famous quote by the renowned biologist Edward O. Wilson, who said that our biggest problem is that we have “paleolithic emotions, medieval institutions, and godlike technology.” But Fraser says we have all the tools required to act responsibly.

“We have the capacity to navigate a course of action. We can get the benefits of the exciting stuff without falling into some sort of pit.”

AI is one in a long lineage of tools. But unlike past instruments, it does not serve a single, fixed purpose. It joins our reasoning: it helps us pose new questions, sees patterns we cannot, and sometimes reasons in ways we cannot fully understand. If the researchers on the panel are right, we are standing at the edge of a golden age – an era in which discovery accelerates not because machines replace us, but because they stand beside us.
