Am (A)I My Brother's Keeper?
Forget the nightmare scenarios about killer robots and AI that self-drives us off a cliff because we were mean to it. The AI question worrying me is whether humanity is up to the coming challenge.

I recorded a podcast on AI for Lehigh's "GO Getters" series last week. The episode will be out this summer, and I'll share the link then. But here I want to expand on what I felt was my most important message from that conversation: what worries me most about AI.
I have begun to sort reactions to AI broadly into three categories:
The skeptic: we are overhyping this tech
The techno-optimist: this is going to make everything better
The doomsayer: this is going to destroy humanity
I see the second and third groups as two ends of a continuum, and I believe that thinking about AI as a binary future invites us to treat it as a yes/no decision even though we have no control over its deployment. The skeptic sits in the middle, but skeptics risk underestimating the profound changes that are coming by drawing conclusions about the future from the early stages of public AI products.
And so for my own worries about AI, I keep coming back to something Ezra Klein wrote back in March:
But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book “High Weirdness,” Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” That is the world we’re building. I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static.
Klein is describing how this technology will disrupt things that feel established, and in ways we will not anticipate, because predicting a specific future even 3-4 years out is proving difficult with this technology in the real world. That includes industries, ways of thinking, and institutions. It will challenge our ideas about what it means to be human, and about the value of the work of human hands in a world where mundane, monotonous labor is performed by digital ones.
Just in the past month, IBM said it plans to replace 8,000 jobs with AI; think non-customer-service tasks like HR paperwork and accounting. The fast-food chain Wendy’s is experimenting with AI for the drive-thru. Ryan Broderick had a good thread about that story that you should read, but I’ll include the kicker:
What I’m saying here may sound like an alarmist view of the technology itself, but I’d caution against reading it that way. This is not a moral panic about technology; my concern is specifically about how we react to this technology, and the possibility that we react in destructive ways.
This brings me to the point I made during the recording of last week's podcast. If we acknowledge that technology is often implemented in ways that are inevitable and can have transformative effects we cannot anticipate, then we understand that in any scenario with a widely adopted new technology, the birth and death of things we love are on the table. AI will dismantle some institutions and industries we cherish because scarce things become abundant. It will improve processes that are not functioning well, create entirely new categories of business and jobs, challenge our capacity to think ethically, and profoundly shape society. This is a tale of technology, but not solely of this technology. AI isn’t special in that regard.
My worry is that we are not prepared, because we have spent years intentionally avoiding preparation. The public response to COVID-19 was a test run that exposed the decay in our civic life, where we struggle with the concept of doing things for the common good. That debate has been ongoing for a while in the U.S., with the perennial argument over whether government assistance for those in need is a handout or a hand up, and how that should be squared with the desire to pay less in taxes. How much should we be obligated to help, in other words? We observe individual acts of kindness and charity every day, and we have institutions such as nonprofits and religious organizations that assist people, but they can only operate at a certain scale.
The scale of the coming near-term AI disruption is hard to get one’s head around. I left the newspaper industry in 2004, before the layoffs accelerated and the advertising crash hit. Journalism has lost about 80% of its workforce since 2000, all because a few digital innovations gutted ad support for news media. But I don’t bring this up to talk about business. I watched as hundreds of colleagues, all dyed-in-the-wool journalists who believed strongly in the mission of news, struggled with their post-layoff identity. A few found new jobs, but as the cuts accelerated, there was often no newsroom landing spot for them. They had to find a whole new career while mourning the loss of a vocation tied so closely to their identity. Many of them struggled. A tragic subset developed severe depression or substance abuse problems. The loss was profound, and coping was hard.
That’s just the news industry. What’s coming for society is of a scale we cannot fathom: the same sense of loss, applied to identities such as “financial analyst” or “factory worker” or “marketer.” Take my news industry experience and multiply it across, well, everything. Some steady, reliable professions that exist today, professions that give us pride in our work and a sense of purpose, will not exist at some point because AI makes those jobs obsolete. Others will keep their jobs but find them changed beyond recognition, and they’ll be the lucky ones.
I am optimistic that this by itself is not doomsday, because humans are adaptable and new opportunities will come out of this moment. But in the near term, as the change accelerates, there will be a need for care and help. We will have to figure out how to retrain people and help them see a different vision for the future, and we will have to figure out who pays for it, all while waking up to a society that changed slowly, then all at once.

This is partly about basic things such as income and providing for the lower levels of Maslow’s hierarchy of needs. But there are larger questions about identity and our sense of place in the social fabric when entire vocations can be wiped out at a scale and speed previously unimaginable, things higher up Maslow’s pyramid that matter a great deal to our happiness and ability to function. There are questions about who wins and who loses in an AI society where all of us are subject to it but only a few control it; questions of wealth, opportunity, and justice with troubling possible outcomes if left unchecked. As William Gibson put it, "The future is already here; it's just not evenly distributed."
The most optimistic scenario is Gene Roddenberry’s: that being freed from the struggle for basic survival unleashes human growth and innovation that capture our true potential. It is the type of future we would actively court if we could get it together. The other end is the doomsday scenario; I shouldn’t have to describe that one, because it’s already out there and looks a lot like a Terminator movie. What we will probably get, though, is a future somewhere between the most optimistic and most pessimistic scenarios, one that challenges us to build in a better direction from our initial landing spot.
I don't worry about the tech. Tech is tech. I worry that we don't have a common agreement that we are our brother's keeper just at the moment when we really need to be our brother's keeper. That's not an AI problem; that is a human problem.
There are important discussions leaders need to be having about AI deployment and regulation. We need to make sure the technology is deployed and run ethically and does not create new industry gods. But there are smaller ways we need to prepare now, and one of them is looking around and realizing we are all in the same boat when it comes to the coming transformation. The scale of need we will experience in even the mildest case of creative destruction will be more than small-scale efforts can handle on their own.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Mastodon or Twitter.