From privacy to CO2 emissions: the implications of information technology

Andrei Mihai

Information technology seems to be playing an ever-growing role in our society. But while the positive effects are clear to see, the potential adverse effects are not always as obvious. In a #HLF21 dialogue that included laureates Shafrira Goldwasser, David Patterson, Joseph Sifakis, and Efim Zelmanov, as well as moderator Tarek Besold, the implications of technology were put in the spotlight.

Goldwasser’s first thought was that we should understand just how big an impact machine learning and information technology already have on our lives. This is not an issue for future generations to grapple with – it’s a problem for now.

“The availability of large data and machine learning has altered the face of every aspect of our life,” Goldwasser says. “Infrastructure, finance, medicine, the way we shop, the way we are treated by doctors, the way we receive loans, and so forth. Courts are also using data more and more.”

The grasp that technology has on our society runs even deeper, Goldwasser argues. Even our basic set of values is being shaped and controlled by algorithms – essentially, we’re trusting the power of statistical prediction to define important parts of our society. The problem is that we’re not entirely sure how this power meshes with our idea of how society should run. For instance, we don’t know whether these algorithms are fair, whether they foster inequality, or whether their outcomes are equitable. This also causes a power shift: whoever has more data has more power, says Goldwasser, and privacy is often a mere afterthought.

For David Patterson, the environmental impact of machine learning is a concern. “I think all of us are concerned about climate change and what it’s gonna mean,” he comments, adding that the tremendous excitement about machine learning has recently been dampened by concerns about the emissions produced by training large models. However, Patterson believes these concerns are overblown and that this particular environmental impact has been greatly exaggerated.

But one environmental impact that hasn’t been exaggerated, the laureates agree, is that of Bitcoin. Bitcoin uses more energy than the whole country of Argentina – and its energy consumption continues to climb. This has a lot to do with the way Bitcoin itself is designed, as other cryptocurrencies don’t use nearly as much energy. As Goldwasser explains, Bitcoin rests on the technical premise of proof of work, in which miners race to solve computationally expensive hashing puzzles, whereas other protocols reach consensus through far less energy-intensive mechanisms such as proof of stake. Patterson agrees and mentions that whoever invented Bitcoin either didn’t have the technical background to design a better protocol or simply didn’t care.

The CO2 emissions of Bitcoin in 2019. The figure has only gone up since, though quantifying emissions exactly remains challenging.
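To make the proof-of-work premise concrete, here is a minimal sketch in Python. This is not Bitcoin’s actual code (Bitcoin hashes 80-byte block headers with double SHA-256 against a dynamically adjusted target), but it shows the same kind of brute-force search that makes mining so energy-hungry:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(block_data + nonce) begins
    with `difficulty` zero hex digits; a simplified stand-in for
    Bitcoin's real target check."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# With 4 leading zeros, roughly 16**4 (~65,000) hash attempts are
# expected before a valid nonce is found; each extra zero multiplies
# the expected work by 16, and the real Bitcoin network performs a
# vastly larger search around the clock.
print(mine("example block", difficulty=4))
```

Checking a solution takes a single hash, but finding one takes an enormous number of attempts; that asymmetry is the source of both Bitcoin’s security and its energy bill.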

Privacy

The discussion then moved to one of the thorniest matters around information technology and machine learning: privacy.

Firstly, who is responsible for ensuring citizens’ privacy in the age of machine learning and the internet? Both Sifakis and Zelmanov agree it’s governments.

“Governments and institutions, they define the rules of the game, they are responsible,” says Sifakis. “As long as governments, agencies, institutions, they don’t enforce some rules, it is a problem.”

Sifakis also mentions the case of autonomous vehicles, where many producers are allowed to operate on a mechanism of self-certification – essentially letting the companies themselves decide whether their systems are good enough. This is a slippery slope, Sifakis continues. “We should not be permissive […] there should be limits.”

Zelmanov and Patterson also point out that privacy means different things in different places, and that countries have wildly contrasting ideas about what is acceptable. The panel mentions China, for instance, where the government has, until recently, shown little regard for individual privacy. Does this mean that other countries should give up on privacy as a lost cause? Absolutely not, says Goldwasser.

“My point is, I think if we focus our attention on realizing what the problems are and defining them, then we can address them. To say that with privacy, the genie is out of the bottle and we can’t do anything – I cannot disagree more,” she comments.

Ultimately, governments will always play catch-up as they try to regulate the technologies companies deploy, the panel agrees. But that’s not necessarily a bad thing because, Patterson argues, we’ve seen that governments do have the regulatory mechanisms to enforce positive change when it comes to technology.

The dialogue ended with a discussion on work and technology. During the pandemic, we’ve all seen just how quickly the way we work can change, and thankfully, we have the means to enable a large part of the population to work from home. Even before the pandemic, machine learning benefitted from digital work, especially in the field of data labeling – the process of applying informative labels to raw data (images, text, videos, etc.). But in the context of extreme poverty in some parts of the world, this has led to the creation of “digital sweatshops,” where people can be overworked and underpaid.
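To picture what this labeling work actually produces, here is a minimal sketch in Python; the records and label names are invented purely for illustration, but the pattern – a human attaching an annotation to each raw item – is the manual step the panel is describing:

```python
# A hypothetical labeling workflow; file names and labels are
# invented for this example, not drawn from any real dataset.
raw_items = [
    {"id": 1, "image": "street_001.jpg"},
    {"id": 2, "image": "street_002.jpg"},
]

def label_item(item: dict, annotation: str) -> dict:
    """Pair a raw record with a human-supplied annotation."""
    return {**item, "label": annotation}

# Each annotation below stands in for a judgment a human worker
# makes by hand, one item at a time.
dataset = [
    label_item(raw_items[0], "pedestrian"),
    label_item(raw_items[1], "cyclist"),
]
print(dataset)
```

The code itself is trivial; the cost lies in the human judgment behind each annotation, repeated across millions of items, which is exactly where the labor concerns arise.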

This is not a new concept – it’s just the medium that’s different. The panel seems to agree that it’s not the technology that’s causing issues, but rather the social context, so that’s where the problem should be addressed first.

For better or for worse, machine learning is still dependent on manual human work, and will likely continue to be so for the foreseeable future. “To be against the idea of human beings labeling data would be a pretty strong statement,” Patterson emphasizes. Ultimately, perhaps this is the most important lesson: technology often seems like it’s creating new challenges and problems – but oftentimes, those problems are actually old problems in a new form.

It’s up to our society and its democratic system of checks and balances to ensure that information technology is used for the progress and benefit of society. As was discussed in a previous laureate dialogue, algorithms are like recipes: they are neither good nor bad. They are a tool, and it’s up to us to use them properly.
