Months ago, I wrote that ChatGPT was the AI of the week. Seems I may have mildly undersold the hold it was going to have. If my math is right, it’s now been around for more than one week.
Like it or not, I’m coming to grips with the idea that ChatGPT is more than a quickly passing Twitter trend. I know it’s getting serious, because we all just got the email at work, issuing guidelines for using it. The message was along the lines of, “We ask those of you with security clearance to please not give the nuke launch codes to ChatGPT, or Bard, or BonziBuddy, or Allen Iverson, or any AI, because we don’t entirely trust any of these services.”
The company guidelines - surprisingly - didn’t outright ban the use of ChatGPT. The implication is that we’re allowed to use it on the job. It would seem that someone who makes these decisions for the company believes ChatGPT is a useful productivity tool. That’s notable, in and of itself.
As a refresher, ChatGPT is what is technically known as a “Large Language Model,” or “LLM”. An LLM, simply put, is a body-positive program that can demonstrate complex conversational skills, as it leverages a vast database of information, all the while looking fabulous - absolutely slaying - in bold, plus-size fashions. While there are a number of LLMs out there (you may have heard of “Bard” or “Lizzo”), the most relevant LLM of the moment is definitely ChatGPT.
You hear wild things about it. People say it’s going to replace jobs, and change the whole educational system. There are even wilder claims: some fear it will grow to replace human artists, others claim that it’s channeling demons, still others think it may be on the brink of becoming sentient. There are even people out there who think ChatGPT is going to make “Bing” become a popular website.
If nothing else, it is a testament to what a technological feat ChatGPT is, that it can make people’s imaginations run wild like that. It’s enough of a technological advance that it is truly baffling a lot of people. For most of recorded history, whenever there’s a real big breakthrough, somebody calls it a sinister threat to the economy, and somebody else calls it witchcraft. That’s a rite of passage, and a box that’s officially been checked now by ChatGPT.
I’m impressed and entertained by it, but definitely don’t perceive it as some sort of objectively evil or negative thing (aside from its partnership with Microsoft). Part of my lack of fear of it comes from the old adage, “familiarity breeds contempt.” On a very simple level, I think I’ve got an intuitive understanding of how some of this works. I’m no programmer (I don’t know how to code) and other than what I’ve heard via podcast over the last year, I don’t have the firmest grip on the mechanics of LLMs. What I do have, however, is my writing insights - especially with all this writing I’ve been doing in 2023.
LLMs Through a Writer’s Lens
I tend to be my own worst critic, and one of the things that my ferocious inner critic will tell me is that everything that I write is derivative and predictable. In other words, I’ve long suspected that a machine could do an unflattering, accurate imitation of my work.
There’s a *partial* truth to this thought that resides in the back of my head, that I’m so predictable they could program a computer to counterfeit my writing. That thought, while unhelpful in motivating me to write, is helpful in letting me get an idea of how ChatGPT works (other than all the demons it channels, of course). I know I have patterns, and wherever there are patterns, there’s an opportunity for a machine to learn and mimic. So let’s get into how I see it working.
These AI programs scrape the internet - years and years of words upon words - and have, impressively, learned patterns. They use those patterns to come up with their own “original” writing. Now suppose one scraped everything I specifically wrote. It somehow goes back and gets old letters to pen pals, and book reports from college, to go along with all my emails and various blog posts.
That would be a starting point. Now further suppose there were a couple of George G experts, also feeding it little hints and prompts to train it. How would they go about training it and tailoring Chat GeorgePT, and how would it end up working?
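For the curious, the “learn the patterns, then generate” idea can be sketched with a toy word-level Markov chain. This is emphatically not how ChatGPT actually works (real LLMs use neural networks, not word-pair counts), and the training corpus below is invented for illustration - but it’s the same spirit: count what tends to follow what, then spit out “original” text built from those patterns.

```python
from collections import defaultdict, Counter

def train(text):
    """Count which word follows which - the 'patterns' in the corpus."""
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=8):
    """Greedily emit the most common follower - crude 'original' writing."""
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# A made-up, suspiciously on-brand training corpus.
corpus = (
    "the chromebook is really great and the chromebook is really cheap "
    "and the battery is really weak"
)
model = train(corpus)
print(generate(model, "the"))  # e.g. "the chromebook is really ..."
```

Scale the corpus up from one sentence to the whole internet, and swap the word-pair counter for a few hundred billion neural-network parameters, and you have the rough shape of the thing.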
Inputs
You’re going to need to give Chat GeorgePT some inputs. What are those? Well, they’re the inputs I received.
Start with when I really started to read. Do you know what literature I pored over, in the 8-12 age range? Various things, but one overwhelming producer of words that I consumed was Bill Watterson. I read every Calvin & Hobbes cartoon ever printed, and a majority of them I read more times than I can count. Possibly in the hundreds.
Calvin’s got a huge vocabulary. And he makes odd choices with it. It will be a bunch of casual, one syllable words, and then he’ll drop “transmogrify”. So a lot of my vocabulary choices come from Calvin.
Next, you’d have to give the AI the complete works of Dave Barry. Dave Barry is a syndicated humor columnist who has, for decades, had a weekly article in newspapers, such as the paper our family read. Furthermore, my brother had a book called “The World According to Dave Barry” and it had years and years of Dave Barry articles, compressed into one gigantic book that was larger than any Bible I owned. I read this book repeatedly, at a very impressionable age. And I consciously remember almost none of it - maybe I can tell you what one or two of the articles were about. Disturbingly, most of the other 700 pages are neatly stored in my subconscious.
I think Dave Barry influences the basic structure with which I write. I’ll have some sort of a topic. Maybe I’ll have a thesis or conclusion, but not necessarily. There’ll be some sort of running joke. I’ll mix facts with pure fiction and rely on the reader’s sense of humor to differentiate the two, with no clues offered. I’ll use the word “weasel”. And so on.
After Watterson and Barry, I read almost everything from Bill Simmons, from the day he started at ESPN, to the day he sort of Homer-Simpson-fades-into-shrubbery got absorbed into podcasting. I was reading Simmons at an older age, and thus he probably influenced me less than Watterson and Barry, but there are some similarities in the conversational style, timely selection of topics, and slightly prickly, unsolicited combativeness.
So back to programming Chat GeorgePT. You’d dump in the complete works of Calvin & Hobbes, Dave Barry, and Bill Simmons. You’d add a couple of other custom bits, like tell the AI to over-use the words “really”, “so”, “like”, and “way”. You’d train it to ace the SAT and ACT grammar sections as of 1998 (thanks mom!). And you’d definitely make it oddly obsessed with Chromebooks.
Would You Enjoy Chat GeorgePT?
This chatbot we’ve programmed could do a highly insulting imitation of me. It would use my words, and maybe eventually even be able to have my general writing structure. And that’s about it. It would be like an orangutan at the zoo, seeing the patrons and literally aping them. Impressive, insulting, sometimes hilarious. But not useful as a replacement for me.
There’s a good real world example to further illustrate this. Last week, I did a review of a Chromebook, without ever having touched the actual Chromebook. I read and watched a bunch of other reviews, and then made my own review, based 100% on having scrubbed the internet for what others already wrote.
That’s a very clean parallel to what ChatGPT does. It scrubs the internet, and aggregates what it finds into an intelligent, quasi-original thing. So on this Chromebook review, it could attempt the same thing (well eventually - as of now, it doesn’t scrub the web in real-time, so there’d be a long delay before it knew about this particular Chromebook - but it’ll get there).
ChatGPT could come up with the pros and cons of the Chromebook, just like I did. It could tell you that there was a battery-life controversy, and that the thing was a little too expensive for some. It could have my writing “voice” to some extent, with the word choices and structure.
Aside from that, it would be hollow, and boring. I don’t see how it could choose to pit The Verge vs Chrome Unboxed. It wouldn’t swerve and make momma-so-fat jokes about strangers on Twitter. It would fail to point out how another reviewer loved the Chromebook so much that he took it to bed with him.
And most importantly, it couldn’t have “my” opinions. And those are the most valuable opinions, because mine are the correct ones. An LLM can scrub the internet for all the reviews of a particular Chromebook, just like me, and arrive at a conclusion. But it’s going to have to be either some sort of consensus opinion, a random opinion, or else ALL possible opinions. And none of that is as valuable as *my* opinion. IMHO.
I read reviews from experts who totally disagreed, and I picked a side. I don’t see how that can ever be achieved with programming and computations.
Summary
In conclusion, weasels.
That might be how Chat GeorgePT v27 will wrap up its fake George G article, but I have more to add. ChatGPT has not gone away. Many still say it’s just getting started. I don’t personally see the practical use-case for it (which others do see), but I wouldn’t go nearly so far as to say it’s not there. I’m always interested in hearing about it, and will keep consuming podcasts and articles on the topic.
I absolutely don’t believe the technology itself is evil. Sure, it can be steered by the wrong people, but purely as a concept, I think it’s progress and it’s cool. Neither am I in the least bit threatened by it, as a creative force.
That’s because I’m hilarious, and I’m right, and I just don’t see how the technology we’re seeing now could ever replicate those qualities.
Other writers should be threatened by it though, because they are bad writers. A machine actually can replicate plenty of the banal material humans are publishing. As for myself, I’m talented, and I know it, so I’ve got no reason to be concerned. And that is the vague Simmons influence I mentioned.