The AI "expert" gold rush
Or: How to spot a fake trying to scam you out of some sweet, sweet consulting dollars
Hey, did you know I’m on Substack Notes now? Check me out there. It’s a Twitter-like space where I’ll share short thoughts about things I’m writing and point to other good stuff to read.

Short post today with some observations on an unsurprising trend I’m seeing with AI: everyone is talking about it, and a lot of them are speaking with unjustified confidence.
Ezra Klein wrote a great piece last month on AI for the New York Times. You definitely ought to read it, as he’s one of the few in big media tackling the social challenges of an AI world. He noted that we are talking a lot about AI changing everything, but we are speaking with a kind of assurance that isn’t warranted given the infancy of the technologies grabbing our attention:
But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book “High Weirdness,” Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” That is the world we’re building.
danah boyd [sic] zeroes in on this a bit further in her most recent Substack post, outlining the extremes between “zomg this is going to ruin everything! #terminator” thinking and “wow this is overhyped af” thinking, and noting how unsatisfying the whole dichotomy can be. The future isn’t knowable, she said, but there is a qualitative difference between pure speculation and an educated guess.
The key to understanding how technologies shape futures is to grapple holistically with how a disruption rearranges the landscape. One tool is probabilistic thinking. Given the initial context, the human fabric, and the arrangement of people and institutions, a disruption shifts the probabilities of different possible futures in different ways. Some futures become easier to obtain (the greasing of wheels) while some become harder (the addition of friction). This is what makes new technologies fascinating. They help open up and close off different possible futures.
All this to say: when it comes to AI and society, the future cannot and should not be spoken of with any kind of assurance. But that shouldn’t stop us from informed speculation about outcomes should we choose a given path. Not because any prediction will prove right or wrong, but because by envisioning pitfalls and downsides, we can construct guardrails and imagine safeguards that help us deploy AI technologies more carefully.
But look, assurance is what people want. It’s where the consulting dollars are, and where there’s an opening to skate to where the puck is going as people chase influence and rewards. I recently served as a programming chair for an academic conference, and in going through pitches in February, it struck me how many of them mentioned AI. Mind you, many of them were doing just that: mentioning AI as a way to work a sexy topic du jour into what were often pedestrian pitches. For 20-plus years I’ve seen this hype cycle in conference programming with blogging, then social networks, then visual networks, then code, and now AI. These things have a pretty predictable path.
I get the reason it happens. Being a first-mover in a new trend can work quite well for you if you get ahead of it and get noticed. I owe a lot of my career success in academia to being one of the first in Ph.D. school to study self-publishing on social networks. Game recognize game.
The corporate world is scrambling to understand how a ChatGPT or DALL-E will affect business, and most companies probably don’t have anyone internally who can speak with authority on it. So it’s an area ripe for someone to come in and fake it until they make it, offering lots of buzzwords and clever talk without any substance. And how could they have substance? I’ve been teaching AI in class for three months now, and when someone asked me about my pedagogy (said in a Grey Poupon voice) I broke out laughing. I am making this up as I go! Informed plans grounded in principles based on past success, sure, but it’s still a speculation-fest. Hell, at least 25% of what I’ve tried in classes has not worked. And there’s a lot of classroom value in playing to fail, but it won’t fit on a glossy consulting brochure.
Anyone who is selling a method as best practice, or the correct way, or giving you their 8 Tips for Effective AI Use is selling snake oil. The tech is too new, and we don’t know shit about what this stuff does or what it will all look like next year.
So there’s a gold rush moment happening here, but let’s not confuse that with the work we legitimately can be doing right now to understand these tools and prepare for a future we can’t predict. The grifters will abate as real expertise and case studies enter the market, but none of that is ready for prime time yet. In the meantime, there’s a vacuum for people to gobble up the consulting dollars corporations and organizations are flinging at whoever will take them.
I’ve given five public talks on AI in the three months since returning from leave. All different audiences, all different methods. I have done one about class assignments that worked for me, but I have tried to stay on-message about the big picture and keep things conceptual for now. The future of these technologies is unwritten, but I’m convinced that inquiry and dialogue will lead us down the best path as we learn more about what AI can do.
To that end, a couple of observations I keep sharing regardless of the talk:
Students are looking to us to describe and understand a future we cannot know.
Check out this slide of student comments after we spent time in class with an image generator:
While some profs are overly worried about ChatGPT cheating, our students are asking more existential questions about the value of their education. We should be listening to them. I have good answers for them most days, but they are smart enough to sense the transformative nature of this tech and possibly even feel threatened by it. They’re grasping for ideas about what life will look like and how they will adapt. Human history is full of examples of how we adapted well and poorly. We shouldn’t run from that; it’s always instructive to take the history of technology and society and use it as a framework for thinking about what’s next.
Liberal arts inquiry gives us the best chance to adapt as the tools evolve and change
Another slide, since I’m making you look at my PowerPoints:
As I’ve said before, learning to use AI well will be about asking questions, and inquiry as a methodology is a great framework to adapt as these technologies scale and change. But for the larger social-fabric problem, it’ll be even more important to ask questions about it all: the inputs, the outputs, the process, the libraries fed to these AI tools, the guardrails, and the biases that govern them. All of it. We can critique the deployment of AI right now without knowing where this is headed.
But we cannot do any of this without education. Good questions come from rigorous thinking, grounded in being well-read and open to ideas while not sacrificing our ability to think critically. The liberal arts style of education, based on open inquiry and curiosity, is more vital than ever.
The sky is falling, say some of my colleagues, but I’m convinced these emerging tools are an argument for education. One that will look very different because of these transformative technologies, sure, but we are going to need thoughtful people to use them.
Notice with those two observations that we don’t have to know the future. Our ability to adapt is built on the same things previous moments of technology and social change were built on. If you’re looking for someone to guide you in how your organization should be thinking through AI, look for someone talking about the bigger picture, grounded in experimentation, ethics, and critical thinking. If they’re coming with an out-of-the-box proprietary strategy, they’re selling nothing.
We still can prepare for this future with play, experimentation, and thinking grounded in the liberal arts, but it’s still an unsettled landscape with an unknowable future.
Jeremy Littau is an associate professor of journalism and communication at Lehigh University. Find him on Mastodon or Twitter.