Chatbots, education and the Star Trek computer
The sky is falling, some say. But if the lines of code that make up an AI really are the thing that will ruin education, maybe it's the education model that needs rethinking.
"AI writes the script, with no time to skip
No need for a pen, it's all done in a den
A world of text, where language is vexed
The end of writer's block, as we know it"
- ChatGPT, channeling R.E.M.'s “It's the End of the World as We Know It”
Begun, the chatbot wars have. First came ChatGPT late last year, which set Twitter ablaze with screenshots of us talking to our new best computer friend. This week we got a lot more. Within hours of each other, Microsoft and Google made dueling announcements about their next steps for bringing generative AI text to the public.
If this is new to you, I’d suggest you read my previous post on ChatGPT first. It explains more about the underlying technology than I want to get into here, and it grounds you a bit more in how I’m thinking about this technology right now.
This post is some extended thoughts about education and generative AI, but as they say, “First, the news ….”
Google has reportedly been in Code Red mode since ChatGPT grabbed the public imagination and attention. Chatbots have been around for a while, so the release of yet another one could understandably be seen as no big deal, until it wasn’t. Kevin Roose wrote a great story about GPT’s rollout for the New York Times last week, noting the app quickly got to 5 million daily active users (DAU). That pace of growth blows past anything we saw in the social platform era.
On Monday, Google announced that it was launching Bard, its own generative text chatbot built on LaMDA, its large language model AI. As I wrote in that previous post linked above, GPT is a different way of answering questions; it uses predictive text to give you answers rather than doing what Google Classic does (pointing you to sites that have answers). If you need a mental model, a Google search is the librarian to ChatGPT’s professor at the lectern answering student questions. Bard is only available to select beta testers for now, but expect a broader rollout soon because the company can’t afford to wait long.
The reason for the urgency? Microsoft announced hours later that it is relaunching Bing, combining the search engine with a GPT component. Microsoft is a big investor in GPT, so a synthesis with its search engine was always a natural play. There are a couple of critical add-ons in the Bing integration that really improve on what GPT does (and doesn’t do) well. First, search results will provide answers but also give you a link to the source each answer was generated from. It’s a Works Cited list, something GPT doesn’t currently give you. That’s a major omission given how hard it is to tell whether GPT’s answers are correct (a major critique of GPT is that it’s far too confident in its wrong answers). Second, the new Bing will be integrated with Microsoft’s Edge browser, and that is really interesting. With a click, the chat technology can help you make sense of whatever page you’re reading. Imagine what this could do for news comprehension as this tech gets better:
“We’ve updated the Edge browser with new AI capabilities and a new look, and we’ve added two new functionalities: Chat and compose. With the Edge Sidebar, you can ask for a summary of a lengthy financial report to get the key takeaways – and then use the chat function to ask for a comparison to a competing company’s financials and automatically put it in a table. You can also ask Edge to help you compose content, such as a LinkedIn post, by giving it a few prompts to get you started. After that, you can ask it to help you update the tone, format and length of the post. Edge can understand the web page you’re on and adapts accordingly.”
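That source-citing behavior is worth dwelling on, because it suggests how these search-plus-chatbot hybrids likely work: run a conventional search first, then have the model answer from the retrieved pages and cite them. Here’s a minimal sketch of that pattern in Python; the search() and llm() functions are hypothetical stand-ins, not Microsoft’s actual API:

```python
# A sketch of the "answers with receipts" pattern: retrieve sources with
# classic search, then ask the model to answer *from those sources* and
# cite them. search() and llm() are hypothetical placeholders.

def search(query: str) -> list[dict]:
    # Placeholder for a conventional search index; returns ranked pages.
    return [
        {"url": "https://example.com/page1", "text": "Excerpt from page 1 ..."},
        {"url": "https://example.com/page2", "text": "Excerpt from page 2 ..."},
    ]

def llm(prompt: str) -> str:
    # Placeholder for a call to a large language model.
    return "An answer built from the sources, with citations like [1] and [2]."

def answer_with_citations(question: str) -> str:
    pages = search(question)  # Step 1: classic search supplies the sources.
    numbered = "\n\n".join(
        f"[{i}] {p['url']}\n{p['text']}" for i, p in enumerate(pages, start=1)
    )
    # Step 2: the model is told to stay inside the sources and cite them,
    # which is what produces the Works Cited list GPT alone lacks.
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite a source like [1] after each claim.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )
    return llm(prompt)

print(answer_with_citations("What did the company report last quarter?"))
```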
We are still, unbelievably, at the beginning. Baidu, the behemoth that dominates search in China, is jumping in with its own chatbot. Facebook, Apple, Amazon, and other tech companies are bound to follow with some version of this technology in ways that make sense for how we use their products. If generative AI chatbots feel like they’re everywhere now, wait six months.
Being back on campus after my research leave last fall has been interesting. I spent the last six weeks of my sabbatical trying to experiment with and understand the possibilities and limits of this new wave of AI tools in text, images, and video (I wrote about DALL-E 2 last summer). But being among faculty colleagues and students has been different. The talk is everywhere. People are aware. A few highlights:
Several students told me professors addressed GPT on the first day of class and told them they had tools that could detect whether GPT wrote a text (fact check: ehhhhhh)
A faculty colleague expressed what I’d call outright disdain for the technology, saying it was going to make us all dumber.
I spoke with a colleague who teaches writing-heavy courses in the social sciences, and they were really excited by GPT’s capability to explain difficult texts in ways that will help make discussions more productive.
Students are skeptical, somewhat. One I spoke with compared it to “borrowing sketchy notes for exam studying from a classmate who skips 25% of classes.” I love Gen Z’s sense of humor.
As a fan of dystopian literature, TV, and film, I’m drawn like a moth to a flame to the AI WILL RUIN EDUCATION genre of moral panics, as if education itself has been this static thing that has never undergone revision. But AI will not ruin education. It’s the argument for education.
As I wrote previously, the main skill for the next generation is how to ask questions and critique what AI gives you. Consider this story about computer science students being asked to critique AI outputs, for example. How do you critique if you rely on AI for all your answers? You cannot critique AI using AI answers as your foundation. In seminar classes, my typical style is to ask questions, listen to answers, and then offer some subtle critique or add-on information before asking more questions. This lets me harness my expertise in the service of more questions, the goal being to help students arrive at better answers. But none of that is possible if I’m a know-nothing professor. Asking intelligent, meaningful questions that get at better answers requires depth and education. We’re asking the wrong question if we’re worried students will use GPT to cut corners, because to use it effectively students are going to have to know some things.
I think of a world where GPT, Bard, or Bing tools are deployed. They’re out there, but we still have a story to write, one we can’t write with a chatbot. That story is about how we use those tools and what those tools are used for. The world is full of products that probably aren’t great for us in terms of public health, stress levels, or other types of harm. In a free-market society, tools are routinely handed to us without any training or guidance in thoughtful use. An education model that tries to shield our students from an AI tool’s existence is a fool’s errand, and counterproductive. Education is the key to unleashing what these tools can do, not the endangered species. It’s the vehicle we have for making sure our interaction with large language model chatbots is thoughtful and meaningful.
My education agenda for an AI future?
Critical thinking
Critique
How to ask good questions
How to think through the ethics of these tools
Assessment of what these tools don’t do well (yet)
You can have these discussions in any discipline.
Here’s an example of how I envision our future interactions with AI, which shapes how I teach question-asking as a marketable skill. I think of interacting with GPT much like how crew members interact with the ship’s computer on Star Trek. It’s honestly the main metaphor on my mind right now when I play with AI tools. The sequence that starts 10 seconds in is a good illustration:
The scene starts with a query, asking for a search of the term “Darmok,” and what the computer yields back is akin to a bulleted Google search result. But notice what happens at 0:33. Counselor Troi asks follow-up questions whose answers don’t exist as text in a database, but rather require the computer to reason over the database using filtering and crosschecking. Then at 0:37 Commander Data hears that answer and runs a query on the audio that inspired the search. It yields the term “Tenagra,” which leads to another search, and then to a request that the computer synthesize multiple searches. All of this is done by computer, all directed by humans using their critical thinking capacities.
The video above is a supercut of similar scenes in TNG, and worth the watch … if you replace Computer with ChatGPT, it’s not hard to see the possibilities. This also is a great episode of Star Trek TNG because it’s all about an alien race that speaks in myth and metaphor, and about the struggle to communicate when your languages are different. But within that episode, this scene plays out that narrative struggle in concrete terms. These Enterprise crew members have become adept at harnessing the computer’s knowledge base and quick search power by asking better questions.
We don’t watch Star Trek and think those crew members are somehow intellectually inferior to their ancestors because they let computers do the thinking for them. The future envisioned in this show is one where humans have figured out how to harness AI for good purposes while not surrendering the director role. In the most optimistic outcome, one that tracks with Gene Roddenberry’s vision of the future, technology such as the Enterprise’s computer frees humans up to be their best, most exploring, and curious selves. And to do this well, you have to be educated. It takes education and training to know things in ways that make you ask better questions.
We can’t run away from this anyhow. AI is coming for the academy in unpredictable ways, and I’ve sat through years of godawful media conferences where print media folks tut-tutted all the digital people about how they were making audiences dumber. Having a conversation about outputs is useful, but disparaging the underlying tech when you have no say in its deployment anyhow is asking to be cut out of shaping what the unstoppable future looks like.
One example I’m thinking about: the pandemic forced professors to pivot classes into nontraditional formats such as video lectures. One of the things I consistently had to grapple with was that the shift in modality produced a sudden, jarring inequity across learning styles. That inequity has always existed, but it becomes pronounced in a hard pivot, when memorizers and heavy note-takers deal with the fallout in different ways than visual learners and thinkers.
GPT has been interesting that way. We already know by now that you can enter complicated text and ask for a Cliffs Notes-style summary. But GPT’s ability to customize adds a layer of accessibility your father’s Cliffs Notes never had. For example, you can ask it to recast its summary using models and metaphors, which allows visual learners to grasp the information in different ways. Here’s one I did for blockchain, a technology that truly makes my head hurt:
It’s just one example. Follow-up questions could simplify those answers even more. But it’s a new avenue for education, one that centers education’s role in thinking and exploring in service of what the real goal was all along: deeper knowledge and better understanding. And because we can create custom explainers based on learning styles, we’re adding accessibility options to the pile if we think it through well.
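For what it’s worth, this kind of tailored explainer is as easy to script as it is to type into a chat window; the customization is all in the prompt. Here’s a minimal sketch, with llm() again a hypothetical stand-in for whatever model API you use:

```python
# Sketch of learning-style customization: same topic, different framing.
# llm() is a hypothetical stand-in for any language model API call.

def llm(prompt: str) -> str:
    return "(the model's explanation would appear here)"

def explain(topic: str, framing: str) -> str:
    # The only thing that changes per learner is the framing instruction.
    prompt = (
        f"Explain {topic} to a college student in under 200 words. "
        f"Frame the explanation as {framing}."
    )
    return llm(prompt)

# One topic, three framings for three kinds of learners.
print(explain("blockchain", "a metaphor about a shared, tamper-proof class notebook"))
print(explain("blockchain", "a numbered step-by-step walkthrough of one transaction"))
print(explain("blockchain", "a description of a diagram the reader can sketch"))
```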
I’m not a techno-utopian, but I’m more bullish on the possibilities of generative AI than I am worried about the drawbacks. Most of the drawbacks seem steeped in This Is The Way We’ve Always Done It thinking and in assessment strategies built in part on our more limited ways of teaching and learning. But we can retrofit our education processes to take a thoughtful, critical approach to the answers we get from these tools. The hopeful result is a more prepared, if you’ll pardon the Trek dad joke, Next Generation.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Twitter, Mastodon, or Post.