Journalism prepared me to work with AI
I've long argued to prospective students that journalism is the most flexible degree you can seek, one that can prepare you for diverse careers. That's only become more true with artificial intelligence.
You may have seen some concerning debate in media culture. People are flocking to an information source that is widely available, and due to its overwhelming market dominance this information source has the potential to dominate and skew the way people see the news. Worse, while this information source provides a lot of useful information to those who interact with it, sometimes the answers are wrong. Like, really wrong. It messes up a lot. Sometimes its product reflects biases at the point of production. Other times it’s susceptible to bad sources that are the building blocks of the narratives it distributes. Even worse(er), it often lacks the self-reflection to question whether it’s correct.
But look, I’m not here to talk about how some people dislike CNN. I’m here to talk about Bing’s chat search and ChatGPT.
I kid, I kid.
But the sometimes-hate toward CNN is a good entry point to thinking about why people are wringing hands about artificial intelligence that uses large language models and generative text to provide answers to questions. After all, the process of journalism could roughly be described as similar to what ChatGPT does. It’s not a perfect analogy, but it’s a useful model for our purposes for all its promise and its potential downfalls.
Journalism provides a great framework for understanding encounters with ChatGPT because ChatGPT's output, like journalism itself, is narrative cobbled together from data. Functionally, we speak the same language. The reporter gathers information from research and interviews, but they don’t publish lists of facts. They create narrative because it’s easier for an audience to grasp and remember a story than long lists of facts that lack a coherent thread tying them together. Similarly, AI chatbots aren’t serving up lots of links and telling you to have at it. Unlike Google’s search engine, AI bots are surveying those lists of links and trying to give you a story. I should note the process is very different on the back end (chatbots are using predictive text, telling you what it thinks you need to hear), but at a bird’s-eye view we’re talking about how facts become stories. Twinsies!
This isn’t new. Google search is useful, but look at what the younger generations are using to find information. Wikipedia for starters, but what is the top search engine teens are using right now to get information they want to know more about? YouTube, with TikTok nipping at its heels. The kids like visual narrative, and they’re going to places that have a lot of rampant, human-made mis/dis-information.
◆ ◆ ◆ ◆ ◆
Journalism education has long pointed to an informal, ongoing conversation between Walter Lippmann and John Dewey about information and democracy as a way of understanding the role news plays in civic life. Known as “The Lippmann-Dewey Debate” in my field, this ongoing conversation was spurred in part by Lippmann’s 1922 publication of Public Opinion, a seminal book that argued in part that democracy was structurally troubled because humans can’t help but apply stereotypical thinking to news, thinking built on superficial experience with words, places, people, and ideas they haven’t personally encountered. Humans, he argued, relied on media to learn about things not directly experienced, but the knowledge gained would only reflect shreds of the truth about those things. And so the sum of a person’s knowledge is socially constructed by some combination of small personal experiences and large amounts of stereotypical thinking about information built from an incomplete media diet. Worse, he said we don’t realize we’re operating based on stereotypes. He wasn’t speaking of stereotypes in the way we talk about them today (biases against certain people groups) but rather as deep feelings about topics built on superficial, second-hand engagement with those topics. Like how I say “India” and a lot of people picture the Taj Mahal in their head.
Lippmann argued then that for democracy to work it had to find a way to harness expert opinion, and subsequently journalism needs to focus on telling us the truth about the facts based on those expert views. Through such a process, he hoped we could become more aware of our stereotypes and move past them. It was an argument for a democracy built on social science processes.
But I said it was a debate, and that’s because Dewey had different views of the situation. Dewey largely conceded Lippmann’s points about human failings, but he ultimately landed on the sense that these flaws didn’t matter. Lippmann, in his view, was too focused on the outcomes and the results, obsessing over the idea that we are not always going to get to the “right” democratic outcome. Instead, Dewey’s main thrust was that the process matters. Yes, the decisions we make are important and have consequences, but the way we arrive at those decisions is also important, and in the long run more important, because it sustains the institutions of democracy itself. That we have systems and support them matters, and teaching systematic ways of engaging with information means that we can self-correct without turning every decision into the end of the world for the view that loses.
I reference Lippmann-Dewey because its take on information and journalism roughly mirrors the moment we are having in public sentiment about AI. Two weeks ago, Monmouth University released poll results showing that only 9% of Americans think some good will come from AI. A lot of the worry is related to outputs—bad results, AI trying to break up marriages, people using it to cheat in school, etc.
I’ve sat in meetings with fellow academics, both online and offline, and have heard similar laments: this is going to break things in weird ways we can’t predict, and people aren’t entirely convinced that what we will end up with will be better. This often feels to me like relived trauma from my younger days as a reporter, when I listened to newspaper publishers and executives try to will away the future by stressing the centrality of their own product in the face of a new technology that would eventually swallow their business model whole (the internet, in this case). I didn’t fall out of love with journalism because of those mistakes, and it didn’t deter me from going into journalism education despite the headwinds my chosen industry is facing. Instead I have approached my role as an educator with an unwavering faith in the methods of journalism and what it can do for us in the name of shared truth and understanding.
◆ ◆ ◆ ◆ ◆
Granted. But we are also at the infant stage of a technology that is going to be unleashed on us regardless of whether we are ready. So as with most things, I’m a Dewey person. It’s the process that matters, and if we have better process we can imagine better outcomes. Journalism can help with that.
Journalism is not a particularly new discipline in the academy. The world’s first journalism school opened at the University of Missouri in 1908, on the heels of perhaps the greatest self-made crisis in the history of news: the Yellow Journalism era of the late 19th century. During that pivotal time, the public was being fed inaccurate and sensational news. It was obscenely profitable, but it was bad for democracy and ultimately bad for the industry because it eroded trust in the product. Is this sounding familiar?
What emerged from the Yellow Journalism era was the Missouri model, adopted shortly afterward at Columbia and then at schools all over the country: journalism as a discipline built on the search for truth using social science and humanistic tools. Observation, inquiry, interviews, the testing of priors using data and facts.
Journalism is not at its best as a standalone degree, and in fact it’s arguable you’re doing it wrong if you’re not attaching it to something. Here at Lehigh, we require our students to either minor or double major in something outside media studies. Journalism plus, in other words. Journalism plus history. Journalism plus political science. Journalism plus gender studies. Journalism plus sociology.
Not journalism as part of the liberal arts, but journalism at the intersection of liberal arts inquiry.
We want good technical writers trained in critical thinking and accuracy, but we also want them to be broadminded and think across fields. We want them to be interdisciplinary, and in aggregate we want diverse points of view in the newsroom. This kind of orientation gives us tools for a curious approach to the answers we encounter, and it allows us to interrogate information more thoroughly as we encounter it. Open, curious minds engaged with the wider world beyond your own field are a gateway to better questions. It’s the heart of what makes journalism vital even as the industry is changing, and I believe it’s at the heart of this moment we’re having with AI unleashed on society.
In gathering information to construct narratives, journalists have to ask questions and critically engage with what they encounter. Some details you encounter are immaterial to the story, or wrong, or coming from biased sources. Breaking news: sometimes people lie to you. You have to interrogate all of this before, during, and after you’ve finished writing. It’s a never-ending process, even after you’ve published. Journalism is the art of asking questions with an eye on the idea that the truth about the facts is rarely final.
What are my sources?
How good are those sources?
What questions do I have?
What follow-up questions do I have after hearing the answers to my questions?
I understand the impulse to worry about the product when it comes to AI. But like the newspaper publishers I came to see as terribly deluded, we can’t stop this technology. It’s here, with or without our consent. Our engagement should be on how to adapt and use it at its best, to think about how good process can get us to good product. A focus on the how also gives us a lens to examine the product in the aftermath. Content is never the endpoint. We read, critique, analyze, and debate all kinds of information we encounter in the news. Why should our engagement with the news look any different with generative text? Thus, AI’s degree of wrongness allows more opportunity for critique, and to the degree that the building block sources are transparent and knowable we have the basis for evaluation. Dewey believed that by focusing on process we gain the tools we need to learn and grow.
So, of course, ChatGPT can be wrong. It shouldn’t surprise us that facts spun into stories would be open to some poking and prodding. Have you read human writing? Just last week, the NYT published a godawful opinion piece about COVID and masking that completely blew it when it came to understanding the scientific study that was the foundation for a casual tossing-aside of masking efficacy. Humans can be bad at this too. We all don’t know what we don’t know, and we have blind spots. Every one of us. The democratic stake in good education is to create a process by which we get to a discernible and shared version of the truth, and so the mission remains no different than it was before AI. Teach deep inquiry, teach people how to ask good questions, teach them how to dialogue, and teach them how to think critically about what they hear.
Inquiry that looks a lot like a reporter’s methodology is at the heart of good AI exploration, because the skill of the future is the same as the skills of the past: asking good questions in the service of improvement, progress, and innovation. Humans thrive when we are taking on big questions, but focusing on outputs is like focusing on answers without tracing back the lineage of questions that led to breakthroughs. It’s true in science, it’s true in journalism, and it will be true with AI.
Humans socially construct reality based on a set of facts we ingest, and then we turn it into narrative. One of the problems with that is that the breadth and depth of our information have a big impact; they become the source material by which we learn to see the world. Our worldviews are flawed and limited to our source material by design. But as I’m playing with generative text tools, the process is no different. I am essentially asking it to socially construct reality much as humans do. It’s serving up answers based on the inputs given to it. The upside is that it has a limitless bounty of text from which to draw, much more than humans could process in real time, and with time it could be capable of much more complexity than humans could produce with any kind of speed. But it’s still an answer that will need examination. No answers are perfect. Source material and guardrails around that material matter.
This journalistic orientation to taking information and turning it outward for consumption among audiences vast or niche is broadly applicable to other fields. Scientific reports, financial reports, white papers, activist efforts that turn data into persuasion—any way in which humans create stories from bits of source text becomes an entry point for us to think about AI chatbots and, more broadly, a way to explore the benefits of inquiry in the service of directing artificial intelligence and creating a tool for our use. We should explore with caution, of course, much in the same way we should with any information process we encounter. But by being intentional about building frameworks, we can construct systematic ways to ask questions about the good and bad of the answers we get from AI.
But beyond an obsession with outputs, if the goal is human understanding and not just getting to know a new computer friend a bit better, the payoff can be even bigger. Just like editing taught me to be a better writer, interrogating GPT's sourcing to hunt for blind spots and implicit biases can help us learn about our own, making us better users of tools and creations in service of progress toward a better world. The whole goal of thinking and writing in the academy and the broader world is not the mere letters and words; it’s understanding, in the service of humanity.
I recognize this methodology because it’s how I was trained as a journalist, and how I tried to operate given the immense power I had to shape people’s perception of reality based on what I’d report. It’s the heart of a journalism education based in the liberal arts.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Twitter, Mastodon, or Post.