Why we must acknowledge who benefits from AI and who is harmed

In his pro-AI campaign, Reid Hoffman, billionaire entrepreneur and investor, talks about the benefits AI will bring. But he also acknowledges that there will be downsides: “It’s not to say that there won’t be some harms in some areas,” he said. “The question is, could we learn and iterate to a much better state?”

His stance reminded me of a scene in the movie Shrek where Lord Farquaad says with bravura to his subjects, “Some of you may die, but that’s a sacrifice I’m willing to make.”

“Some of you may die, but that’s a sacrifice I’m willing to make.” - Lord Farquaad, Shrek

At least Lord Farquaad is direct about who might come to harm: his subjects. In the tech industry, I’ve observed that we often talk about the dangers of AI in abstract terms. Take Hoffman’s “some harms in some areas,” or the words of Sam Altman, CEO of OpenAI, at a congressional hearing: “the benefits of the tools we have deployed so far vastly outweigh the risks.” Leaders in the tech industry avoid acknowledging whom AI will harm.

Some have conjectured that many AI experts keep working on the technology despite knowing there’s a real possibility of creating a catastrophe for humanity because they “feel that they have a responsibility to usher this new form of intelligence into the world.” While such reasoning makes a great backdrop for a sci-fi novel, I believe the answer in the real world is far simpler. It just requires asking the right question: “Who benefits and who is harmed?” Let’s start with the latter.


Who is harmed by AI?

The answer to this in the short term is very clear: marginalized communities and people who are less well-off.

One of the most imminent dangers of ChatGPT is the disinformation it’s capable of generating. Disinformation tends to disproportionately affect marginalized communities, including immigrants, people of color, and LGBTQ+ people, because it’s easy to fear-monger and further political agendas by exploiting a population’s existing biases against these groups. Until now, the content for disinformation campaigns spread through social media had to be created by humans. ChatGPT does away with that human effort.

The large language models (LLMs) that ChatGPT is built on were trained on text mired in hate speech and racial, ethnic, and gender bias. If you ask ChatGPT to generate text that reflects societal biases, after the initial disclaimers and with just a couple of nudges, it produces biased text practically on command.

Blog posts created with these models can attract a readership quickly because the content sounds plausible, natural, and easy to read. Liam Porr, a student at the University of California, Berkeley, proved this by creating several blog posts using GPT-3, one of which even rose to the top of Hacker News. The results of this small endeavor are monumentally scary: Porr proudly announced that his GPT-3-generated blog drew 26,000 visitors in two weeks.

Porr’s experiment is even scarier when you consider that the posts he generated were banal; the top post was titled “Feeling unproductive? Maybe you should stop overthinking.” Imagine the audience size that could be reached and the polarization that could be created by incendiary posts, which spread more readily on social media.

AI is also likely to contribute to increased polarization through job loss. Goldman Sachs has predicted that AI could automate 18% of work worldwide. But these predictions only feel real and imminent when you read anecdotal examples of people whose well-paid jobs have already been replaced. An article in the Washington Post offers the example of Eric Fein, who ran a content-writing business that provided a comfortable life for him and his family. In the span of a few months, his work vaporized as clients decided to use ChatGPT, admitting that while it didn’t create great content, it was good enough.

Studies have shown that automation has been steadily driving people toward lower-paying jobs, but AI will take this to a new level by eroding jobs that have traditionally required college degrees.

Whether you’re dealing with disinformation, polarization in society, or loss of income, the more you belong to communities with power, and the more wealth your family has amassed across generations, the more immunity you have from these harms.

Now that we’ve talked about who will be harmed by AI in the short term, we should ask the next question.


Who benefits from AI?

The answer to this too is clear: those who are able to invest in building AI through their money or their labor. OpenAI, the company behind ChatGPT, was recently valued at $29 billion, while a four-week-old startup founded by alums of DeepMind and Meta raised $113 million at a $260 million valuation!

It’s important to note that most of the people who stand to gain from the windfall of AI typically fall in the top 10% income bracket. They are generally not from marginalized communities, so they aren’t directly affected by disinformation, and their work is unlikely to be automated by AI for a long time.

For most people working on AI, the gains from AI seem very real, while the dangers don’t affect anyone in their immediate social circles. When the possibility of harm from AI falls on people you don’t identify with, the risks don’t seem so bad. This is why it’s easy to talk about the harm from AI in abstract terms, e.g. “some harms in some areas.”


Justifying some harms in some areas

It’s also easy to justify this early harm by saying that we’ll iterate to fix issues as they come up. This was the rationale Altman offered for engaging in an arms race to be first to market with ChatGPT. But while tech leaders can claim that iteration will address harms, there’s historical evidence that this approach doesn’t work.

Facebook caused harm by eroding democracy in several countries and helped incite violence in India, Myanmar, and Ethiopia, but the company hasn’t been able to tackle these issues effectively. The example of Facebook/Meta shows that, contrary to Hoffman’s claim, iterations don’t create a fundamentally different product or a much better state.

For most organizations, the business case for generating content and user engagement is obvious because it brings in revenue, but the business case for fixing issues that harm marginalized communities is weak because it’s viewed as an expense. As a result, the idea that AI companies will invest in fixing issues that create “some harms in some areas” sounds about as convincing as Meta investing in moderating hate speech in other languages.

We have to acknowledge that when we speak about the harm from AI in such abstract terms and say we’ll fix issues through iterations, we’re being indifferent to the plight of many communities, and indifference is itself a choice.

Each of us is affecting people’s lives through our products. Like doctors, we’re prescribing our products to solve their problems and that requires taking the Hippocratic Oath of Product.

The Hippocratic Oath of Product means that casual acceptance of “some harms in some areas” is fundamentally not okay. We must instead strive to create equitable outcomes for society.

But equitable outcomes rarely, if ever, happen accidentally. Technology typically replicates or exacerbates societal biases, and we need to be wary of anyone exalting any technology as the purveyor of utopia. Most importantly, iterating on technology doesn’t lead to fundamentally better outcomes.


Taking responsibility and creating equitable outcomes

Creating equitable outcomes requires being vision-driven. It means thinking about the change you want to bring about in the world and systematically translating that into a strategy, your priorities, and how you measure success. This is where Radical Product Thinking (RPT) comes in: it’s a methodology for being vision-driven:

1. Vision:

You have to start with clarity on the problem you want to solve and the outcomes you want to create for society in order to build a product to bring that about. You can craft a Radical Vision statement to help you and your team define this for your product.

2. Strategy:

Once you have this clarity, you can bake equity into every element of your product strategy. A good product strategy requires asking the following RDCL (pronounced “radical”) questions, and you can consider equity at each step:

  • Real pain points: Who are the personas your product will engage, and what’s the pain that brings them to your product? This is your opportunity to think about those who might otherwise be marginalized.
  • Design: What’s the solution for each of those pains? Think about whether your solutions are inclusive. Remember the phrase “Intent is not impact”: you’ll need to test whether the impact of your solutions is truly inclusive, regardless of your intent.
  • Capabilities: Are your IP strategy and underlying technology inherently biased, or perpetuating a colonialist model of extracting gains? Are your business partnerships a win-win?
  • Logistics: Is your business model extractive, or does it preserve systemic oppression? Does it create additional costs or hurdles for marginalized communities?

3. Prioritization:

Even the right vision and strategy can quickly become irrelevant if your decision-making consistently prioritizes short-term gains over progress toward your vision. The RPT approach to prioritization helps you keep track of how often you’re taking on Vision Debt and how often you’re Investing in the Vision.

4. Hypothesis-driven execution and measurement:

Your product is only successful if it’s creating the change you intended. You can write hypotheses for each element of your strategy and define how you’ll measure whether it’s working or you need to course-correct.

Creating “some harms in some areas” is a choice, just like the choice Lord Farquaad makes for his subjects. Instead, we can choose to be vision-driven and create equitable outcomes for society.
