Who's afraid of ChatGPT?
OpenAI's new generative text chat tool has gone viral. People who teach writing are afraid, but they shouldn't be.
As you may have heard, there's a new AI tool on the block called ChatGPT. Developed by OpenAI, this language model has the ability to suggest possible completions for written text, making it a potentially useful tool for writers. However, some educators have raised concerns about the impact of ChatGPT on students' writing abilities. In this post, we'll explore these concerns and consider whether they are warranted. Whether you're an educator or simply someone who is interested in the latest developments in AI, this is a topic that is worth keeping an eye on. Stay tuned for more!
OK, a confession: I didn’t write that intro. You might have even discerned that by the end of the paragraph. “Stay tuned for more!” doesn’t quite meet the moment or the style of my newsletter. Certainly doesn’t sound like me. There wasn’t even one Star Wars reference. FAKE NEWS!!!1!!!11!!
I am here to write about the future of writing, though. We’ve all watched OpenAI’s ChatGPT go viral this week, but I thought it’d be best to lead with an example so you can see what this AI chatbot, premised on having a conversation in the search for answers to a question, can do. In this case, here’s the prompt I gave ChatGPT:
Write the intro to a Substack newsletter post about why educators are worried about ChatGPT hurting their discipline. One paragraph, in the voice of a tech hipster, for an audience of educators and people interested in writing and/or education.
ChatGPT operates in a corner of AI built on predictive text (if you want a really good primer on it, check out the latest Hard Fork podcast from Casey Newton and Kevin Roose; I can’t recommend it enough for curious folks).
Consider all the information publicly available on the web. If you want to know something, the way we’ve solved that problem over the past 25ish years is through search engines, which have become well-trained and very good at serving up links to sources based on keywords and questions. Google in that sense teaches you how to find answers in a sea of information rather than providing them.
ChatGPT is not Google. Consider the same premise as the Google scenario. Instead of links, ChatGPT serves up original text to answer your question based on what it has learned from its training data, much of it gathered by crawling the web. It’s mining billions of pages of material and trying to give you the best answer it can, which is more difficult than it might seem. It not only has to discern the answer from that whole corpus of material; it also has to start with some understanding of what the question is asking. My prompt above is pretty straightforward, but the AI still has to socially construct understandings of “tech hipster” and the audience constraints I threw at it when generating an answer. If I remove those constraints, I get a different answer because some factors or details in the reply become more or less relevant. It’s similar to how you’d explain a concept differently to an educated audience vs. a room full of kindergarteners. The question, the audience, and the data are all part of the answer you get.
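If you want to poke at that constraint effect outside the chat window, here’s a minimal sketch using OpenAI’s Python client that asks the same question with and without the audience constraints. To be clear, this isn’t how I generated the examples in this post (those came straight from the chat interface), and the model name and client setup here are assumptions on my part.

```python
# Minimal sketch: the same question asked with and without audience
# constraints tends to produce noticeably different answers.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

base = "Explain why educators are worried about ChatGPT hurting their discipline."
constrained = (
    base
    + " One paragraph, in the voice of a tech hipster, for an audience of educators."
)

print(ask(base))
print("---")
print(ask(constrained))
```

Run it a few times and compare: the constrained version tends to shift tone and which details get emphasized, which is the point about audience shaping the answer.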
Predictably, this is going to make us think about what we currently do in the creative arts and writing. The Atlantic has already declared that “The College Essay Is Dead,” and that seems to be the buzz this week. I tend to find a lot of the concern oversimplified, but I do think it’s useful. Any time a new technology enters the arena, we get new tools. New tools should make us rethink old ways; there’s a reason farmers don’t dig in the dirt with their hands.
An example of where my head is at:
Haylie Buysse led all scorers with 23 points, making three of her team’s 3-pointers. Marley Harmer had her hands on everything with eight rebounds, five assists and three steals. Addi VanNess was second on her team with eight points, adding four steals.
This was from a basketball game recap story in the Press-Republican, based in Plattsburgh, NY. Gonna be honest here: I didn’t really try hard to find a representative example. I typed “led the team with 23 points” into a Google News search and grabbed the first good one I could find from a long list of results. Why? Because there was a long list of results. This is boilerplate writing, very common in sports gamers, economic coverage, weather reporting, and other types of news that take a lot of data and attempt to turn it into a narrative; over time these have developed a commonly used format that looks the same across individual stories. This type of writing fills print pages and populates websites, and I wrote a lot of it as a young reporter. Games have high scorers and key standouts. We are trained to find and report those things, and the writing has become formulaic.
I had an eye-opening moment as a young reporter in the fall of 1998, when I was working my first news job at The Daily Democrat in Woodland, CA. Part of my job was covering UC Davis football games, which meant interacting routinely with the Sports Information Director’s staff, who were tasked with getting me access to players, hooking me up with stats before, during, and after games, and making sure I had all the data I needed to write well. One day while I was waiting around for a press conference, an assistant SID motioned for me to look at his laptop. He showed me how they generated their press releases nearly instantly. They had a Microsoft Excel spreadsheet set up for entering the box score, linked to a Word document whose fields pulled in that data via a macro and “wrote the story.” All they had to do was add a couple of quotes and they had a press release in minutes. Applied to the boilerplate language above, for example, the Word doc looked like this:
<field> led all scorers with <field> points, making <field> of her team’s 3-pointers. <field> had her hands on everything with <field> rebounds, <field> assists and <field> steals. <field> was second on her team with <field> points, adding <field> steals.
In that example, each time you see <field>, the Word doc is set up to search the Excel file for a specific piece of data. The first one would look up the highest scorer, then fill in the name attached to that total. When I saw that, I realized the language in their press releases was always the same; the differing data totals just made it feel different. I hadn’t noticed. This was 1998, before bots and AI-driven text. It was possible with just two common Microsoft Office products, but even to young, inexperienced, first-job Jeremy it felt like a future we weren’t thinking enough about.
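To make the mechanics concrete, here’s a minimal sketch of the same template-filling idea in Python rather than an Excel-to-Word macro link. To be clear, this is not how the SID staff built theirs; the data structure is my own, and any stat not in the recap above is made up for illustration.

```python
# A rough sketch of the same template-filling idea (not the SID staff's
# actual Excel/Word setup). Stats not quoted in the recap are invented.

box_score = [
    {"name": "Haylie Buysse", "points": 23, "threes": 3, "steals": 0},
    {"name": "Marley Harmer", "points": 6, "threes": 0, "steals": 3},
    {"name": "Addi VanNess", "points": 8, "threes": 0, "steals": 4},
]

# Sort by points to find the leading and second-leading scorers,
# the same lookup the Word fields were doing against the Excel sheet.
by_points = sorted(box_score, key=lambda p: p["points"], reverse=True)
leader, second = by_points[0], by_points[1]

story = (
    f"{leader['name']} led all scorers with {leader['points']} points, "
    f"making {leader['threes']} of her team's 3-pointers. "
    f"{second['name']} was second on her team with {second['points']} points, "
    f"adding {second['steals']} steals."
)
print(story)
```

Swap in a different box score and the same sentences come out with different names and numbers, which is exactly why the copy felt fresh even though the language never changed.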
Now that’s PR. Fine, boo the dark side if you must. But a lot of newswriting is boilerplate, more of it than we often realize. And that creates openings for AI to spit out the formulaic copy reporters have been socialized to produce for decades.
The Los Angeles Times has been using generative text since 2014 to report on earthquakes in the region. Yahoo Sports has figured out how to write “game stories” about fantasy sports leagues using technology from Automated Insights. The Associated Press has been using bots to write economic stories and cover games since 2015. These are the types of stories an editor on a news desk would typically write up on the fly based on some data they receive; in the case of the AP sports stories, Division III games that are not important enough to send a human reporter to cover. Can a human write a better game story off a box score than a computer? Seems like a no. Most readers aren’t even aware this is happening unless it’s pointed out to them.
My point in all this is that it’s useless to cast AI writing as a fear of the future, because a version of that future has been here for nearly a decade. And humans have been producing stories like those a lot longer in ways that look more like automation than creative endeavors powered by the imagination of mortals.
Still, ChatGPT is a new evolution of this space. It isn’t using template technology to pull in data like I witnessed in 1998. It’s built on interaction: you give it a cold query and it has to imagine an answer based on its interpretation of data rather than the data itself. In other words, the answer might not live anywhere on the web because the exact question hasn’t been asked, but it’s still able to spin up an original answer (or “content,” by extension) based on extrapolation.
I find that’s where ChatGPT can be the most fun: trying to harness its creative ability to build answers to questions that haven’t been asked much, or at all. For example, take two prompts I gave it:
The McLuhan prompt was roughly what I’d ask on an exam in my introductory course here at Lehigh. The answer is solid and boilerplate, roughly one that would earn a B. It lacks nuance and explanation, particularly in the last two sentences. It’s a good starting point, but would be the equivalent of studying from incomplete notes.
The candy corn question and answer are pure imagination. Yes, there is Candy Corn Discourse on Twitter every year, visible to an AI. But asking it to invoke a fairly particular speaking style in crafting the argument and—crucially—a style the audience would recognize is what makes this answer interesting. It’s not the data itself, it’s the style and presentation.
My point is there is some difference between substance and style, and the uncomfortable truth is that a lot of media writing needs to have both. But here’s the thing: the candy corn answer is entertaining and perhaps interesting, but it’s also not something you can do much with. Rhetorically, there’s not much to debate. The McLuhan answer, though, can very much be picked apart. It’s the more boring snippet of text, but it has places where you can challenge it, poke at it, expand on it, and so forth. We can improve the answer to the first prompt, in other words. The second prompt answer just kind of hangs out there, entertaining but probably unmemorable.
My theory on a lot of this generative AI text right now is that its usefulness (learning, thinking, growing from what you read) is correlated with our ability to critique it and ask it follow-up questions. Things that have finality don’t expand our minds much, and they don’t stick with us if there’s nothing to do with them. Box scores are box scores. I can’t question whether a player scored more or fewer points. It’s data, and it lends itself to boilerplate copy lacking imagination.
That McLuhan answer, though, we could debate. It has inspired one of my planned exercises for my senior seminar this spring: have students ask a question of ChatGPT, then paste the answer into a Google Doc they can annotate with critique. To grade it, perhaps, as I would, by pointing out weaknesses in the writing. What is too generalized? What is jargony in ways that obscure understanding? What is incomplete?
The model I’m thinking about is actor vs. director. A lot of media content roles are understood from the perspective of the creator—the writer, the on-screen speaker, and the producer who cuts it together. But directors are different. Their job is to get the most out of the talent. They aren’t the performer, but they know how to push and ask questions of performers so that we can maximize the performance.
If you want a different analogy, it’s writer vs. editor. Editors poke, prod, push, ask questions, apply skepticism and generally work to shape a writer’s work so that it’s the best it can be. The editor role has long existed, of course, but it’s a hidden force in the content we eventually consume. The name on the story is how we think of that content even if we don’t see all the ways it’s been shaped by others. But you can’t consistently get stellar, thought-provoking writing without both roles.
Generative text on something like ChatGPT is forcing us to rethink what content is, and what humans should be spending time on. I am not particularly afraid of a world where we hand off boilerplate writing to a bot. Most of that writing was done routinely and uncritically anyhow, and this frees up humans to work on stories that require intuition, experience, and insights an AI won’t have. Focus on what matters.
But I’m also not worried, because AI text is going to make us better question-askers. I’ve long told students that becoming a copy editor made me a better writer, because I was fixing mistakes in other people’s writing that I eventually realized were in my own. It made me ask questions of the text and push reporters to write more clearly. And in the case of something like ChatGPT, learning how to ask questions and follow-up questions is a useful skill for the next generation. I want my students to realize there is a lot of value in the editor role, not just the writer role, as we approach a future with much more automation in the creative space. We still need to be asking questions about what we get.
One of my favorite things to do with ChatGPT is to try to trick it into doing what I want. ChatGPT routinely tells me it can’t answer my question because it’s not that type of AI. Here’s an example I worked up last night when my 6th grader sat down with me and we tried to make ChatGPT generate a recipe based on a decidedly bad mix of ingredients:
That answer from ChatGPT is not uncommon. As a function of its constraints, it will sometimes flat-out refuse to answer my question because, in all of the text it has consumed, it determined that no one in their right mind would make a dessert with those ingredients. But we didn’t give up. We changed the nature of the prompt and asked it to imagine:
Basically, we said, “Fine, use your imagination.” And it did what we asked.
Just a small change, from asking it to do something in the real world to doing it in a fictional world, was enough to alter its context and open up ChatGPT’s imagination. Now to be quite clear, this recipe sounds atrocious. But it tried because I figured out how to make it try.
A lot of the lament about ChatGPT and the future of writing is probably about bots replacing humans to write copy that has little thinking value. I can understand that, in the sense that these are real jobs and people need careers.
But I look at this landscape and see opportunity. The market is shifting toward the ability to ask questions to and about these emerging technologies. Not asking once and being satisfied with the answer, but over and over again to shape and prod what it produces into something useful and interesting, like a potter shaping clay. In that way, whether the words spring from a human mind or a generative AI doesn’t matter so much as the guide shaping what we see in public.
My colleague Craig Gordon, a Lehigh alum who has taught in our department for a few years now, has taken this approach (you should subscribe to his team’s Don’t Count Us Out Yet! Substack on all things tech and the future). His method is to teach our students that harnessing the power of AI starts with the ability to ask good questions. They spend a lot of time on this in class. There is a qualitative research methods approach to all of this, and it requires good training.
This is the point a lot of the hand-wringers seem to be missing. Learning to write well is a great life skill, but the goal of student writing is being able to explain what you know. Reading, thinking, synthesis, output. Generative text AI alters that process, sure, but not as much as we might think. Asking questions brings us to a new place: ChatGPT and its relatives can generate an answer, but how do we know that answer is correct? How do we know it’s good? How do we know it isn’t drawing on plagiarized source material, misinformation, or harmful stereotypes rooted in history? Being able to answer those questions is why we have students writing in the first place. They don’t have to generate the initial crack at an answer to think and to show us what they know.
It’s a skill I learned well as a reporter, and then as a journalism educator, where I inadvertently stumbled onto the Socratic method before adopting it as a teaching style: learning is often in asking the follow-up question, not the initial question. ChatGPT’s ability to synthesize billions of pages on the web and give us a starting-place answer is not the death of a form or an industry. Those answers could be incorrect, or rooted in bias. They might actually be pretty decent. Either way, they should start conversations with the humans interrogating them at the research and prewriting stage, not be the definitive copy that gets turned in for a class or published somewhere. If we treat generative text that way, we might be on to something transformational in education and media. It’s a huge opportunity to spend our brainpower pursuing novel questions of substance and importance.
It’s the jumping-off point to new ways to engage our students and the public in the real goal of writing: the search for truth and real answers. What we need, then, is to teach imagination so that we ask better questions. That is a future I welcome.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Twitter, Mastodon, or Post.