
Problematic heroism in tech and the much-needed plot twist

And just like that, another hero was born. Geoffrey Hinton’s announcement was framed as heroic by the headline in the New York Times: “‘The godfather of AI’ leaves Google and warns of dangers ahead”. In the interview, Hinton justified building this technology that he now deems dangerous at Google by saying that the company was a “proper steward” of the technology until Microsoft challenged Google’s core business by augmenting its Bing search engine with AI. But in calling Google a “proper steward,” Hinton seemed to have forgotten that this same company had fired Timnit Gebru for pointing out the dangers of the large language models that power ChatGPT.

We’ve seen this pattern before: a man profits from his work, suddenly grows a conscience, and is recognized as a hero. Jan Koum did exactly that when he built WhatsApp, telling users that their data was safe while saving metadata and uploading phone numbers from users’ contact lists to the company’s servers. He justified it by saying he would never sell the company, only to eventually sell out to Facebook for $19 billion. But he was hailed as a hero after he eventually quit Facebook and donated $50M to Signal (an open-source messaging tool built with a more deliberate approach to preserving privacy, and what WhatsApp should have been in the first place).

The accidental villain

Don’t get me wrong, it’s great that these leaders eventually changed their stance – it beats the alternative. But if only these “heroes” had had that conscience all along, they would have made a different contribution to their fields (for example, perhaps Hinton could have built AI and training models that are deliberate about creating equitable outcomes, and Koum could have built an app with a genuine commitment to protecting privacy by not storing user data). Rather than praising them as heroes for growing a conscience after profiting, that earlier lack of conscience warrants seeing such individuals, at the very least, as accidental villains.

Movies tend to distort how we perceive villains. We expect the villain to have evil intentions and wring his hands as he contemplates mass destruction by mounting a conveniently helpful countdown on a nuclear bomb. Rarely does any technologist set out to create destruction deliberately – they stumble into it through their indifference to what happens to society as a result of their work. It turns out that hand-wringing villains are rare; accidental villains are ubiquitous.

Is it accidental if there’s a clear pattern?

But when you see a clear and well-established pattern of behavior, it raises the question: At what point should we think of villainy as a deliberate choice rather than accidental? These technologists are luminaries in their fields. Hinton, a pioneer in deep learning, is surely capable of deep thinking and of understanding all along why this technology was dangerous. He chose to see Google as a “proper steward” even when Gebru was fired, and chose to be indifferent as to whether this technology would create inequitable outcomes until now. We have to recognize that indifference is a choice.

The pattern of excuses

There’s a repeating pattern of justifications which reinforces the idea that indifference is a deliberate choice. I’m sure you’ve heard these excuses from people in the tech industry:

  1. “If I don’t do this, someone else will”: This is the most common justification, one that Hinton also admits to. Let’s set aside what others will do. What’s important is what YOU choose to do and how you choose to affect society through your work.
  2. “Human progress happens through solving hard problems”: The reality is that technological progress doesn’t lift all boats, so this justification is about satisfying intellectual pursuits without regard for what the work will do to society. Oppenheimer, who led the development of the atomic bomb, said: “When you see something that’s technically sweet, you go ahead and do it.”
  3. “Technology is neutral”: In other words, it’s “just a platform” – apparently it always is. It’s easy to justify any technical work by saying that the technology itself is neutral. And if it causes harm to society in any way, it’s because of “bad actors” who used it in a way it wasn’t intended. This excuse has the added benefit that it allows technologists and their companies to frame themselves as heroes for generously fixing issues that arise on their platform because of bad actors.
  4. “Regulations are the key”: Leaders like Sundar Pichai and Sam Altman often use the need for regulations as an excuse to defer responsibility. While regulations are important, the optimism for imminent regulations is astounding considering this would require politicians to understand bleeding-edge technology and pass regulations swiftly to prevent unintended consequences. In case you’re wondering, yes, they expect a panacea of regulations to be enacted by the same kind of politicians who think that the internet is made of a series of tubes and whose questions for Mark Zuckerberg in the congressional hearing offered endless fodder to late night comedy shows.
  5. “Consumers can vote with their dollars”: This justification is the most disingenuous of all. In reality, faced with monopolies, consumers have no choice. But couldn’t one argue that people have democratic power? If people are tired of monopolies and want to curb them, they can vote for a party that will do better at regulating and enforcing regulations, right? Luigi Zingales, a professor at the University of Chicago Booth School of Business, points out that large companies are comparable to sovereign states in terms of resources: “These large corporations had private security forces that rivaled the best secret services, public relations offices that dwarfed a US presidential campaign headquarters, more lawyers than the US Justice Department, and enough money to capture (through campaign donations, lobbying, and even explicit bribes) a majority of the elected representatives.” With companies continuing to amass resources, consumers’ power to create change with their votes (whether democratic or monetary) is diminishing by the day.

When you observe such a clear pattern of excuses for building technologies and products with an indifference to how they affect society, it’s increasingly hard to think of these tech luminaries as accidental villains – it feels very deliberate. It’s not accidental to be indifferent – it’s a choice.

Who then are the heroes?

The plot twist in real life is that unlike in Marvel movies, what makes a hero isn’t swooping in to save the world – that’s a tall order for any mortal. And anyone proclaiming that they’re saving the world is only displaying unbridled and unjustified hubris – that’s just a different type of villain that we can talk about another day.

The real heroes are people like you who make small choices in your everyday work. You may think that you’re “just” a product manager, designer, or software developer, but I’m the side character who reminds you (the protagonist) that your talent has worth and companies need you – this means you have power. And with great power comes great responsibility.

How you can take on the mantle of a hero

So here’s the plot twist that I urge you to live out:

You’re the hero who rises from your desk realizing the power you hold: that you can vote with your labor for the world that you want to create. You realize that you’re a hero when you make the effort to understand how inequitable outcomes affect minorities who seem invisible to others, and when you feel the pain of fellow humans instead of being indifferent to it. You realize what makes you a hero is not your productivity on tasks assigned to you but your thoughtfulness about the choices you make in your work, grounded in your vision for the change you want to bring to society and the world. You find heroism in honest introspection: asking how your current work is contributing to digital pollution and what decisions you want to make instead.

Every hero needs a role model. My hope is that you don’t regard Geoffrey Hinton, someone who chose not to sign a letter warning about the dangers of AI signed by over 1000 other AI researchers because he didn’t “want to publicly criticize Google or other companies until he had quit his job,” as a role model. Instead, I hope you recognize someone like Timnit Gebru as a role model, someone who was true to her values and spoke up while she was still working at Google in an effort to get the company to change its approach when she saw the danger in large language models.

My impassioned plea for you is that you take on the mantle of a hero by choosing to not be indifferent and by making courageous and intentional decisions.
