Creative pursuits will not die because Silicon Valley has created a cool new AI
ChatGPT, Midjourney, and the exaggerated ‘death’ of creative pursuits like writing
Source: AI Art created by StarryAI
If you were unaware, writing, just like art, is now a ‘solved’ problem.
Or at least, that’s what the headlines in the past month would have you believe, following the public release of ChatGPT and earlier tools like GPT-3 and Midjourney.
In case you don’t know, ChatGPT is widely considered the best chatbot in the world, providing answers to questions, giving website visitors 24/7 on-demand support, and, most frighteningly, writing essays, articles, and poetry.
This, combined with Midjourney, which is said to eventually “put artists and Designers out of work,” makes it seem like I’m exactly the person who should be afraid. As a Senior UX Designer, Design Writer, and data visualization enthusiast, I’ve dug into the research behind these tools and seen a lot of different reactions from my communities.
Right now, there are three main reactions to it:
Here’s the cool programming stuff you can do with it, and what a considerable leap Artificial Intelligence (AI) is making
Here’s how you can make money with ChatGPT (i.e., clickbait).
Here’s why ChatGPT is the death of writing, essays, poetry, etc.
In truth, after spending several weeks digging into it, I’ve come to a somewhat controversial conclusion: ChatGPT is yet another Silicon Valley product that aims to ‘revolutionize the world’ but has a limited niche and use.
Who am I to make this judgment?
In case you’ve stumbled upon this article without context: I’m a Senior UX Designer and Design Writer with a fascination for Data Visualization, Big Data, and AI. I did my Master’s work on technology-aided surgeries, including the use of gestural telestration and annotation in laparoscopic surgery.
I have AI-generated art hanging in my office, and I use AI/NLP-based voice dictation software (Nuance) to write many of my articles and book first drafts. In other words, I’m as big a proponent of AI-based technology as anyone, and I’d probably be among its first adopters if I saw the potential.
But I probably won’t use ChatGPT that much, and I don’t think it will kill writing. Why? Because of three things:
Augmentation blows AI-led thinking out of the water (based on 60 years of research)
We don’t trust computers with high-pressure decisions
Editing and refining AI-led writing sucks worse than just writing
Augmentation blows AI-led thinking out of the water (based on 60 years of research)
Around 60 years ago, Paul Fitts came up with a list of functions outlining the respective strengths of humans and machines.
Source: https://link.springer.com/article/10.1007/s10111-011-0188-1
This list has persisted to this day through decades of research, and it’s something more people need to be aware of, because people are expecting ChatGPT to perform functions at which it’s weaker than humans.
Many people believe that ChatGPT shouldn’t just take over low-level tasks and heavy lifting; they believe it can also replace high-level thinking. But doing that doesn’t just spit in the face of decades of research: it spits in the face of the many businesses already using AI to make money through augmentation.
Augmentation is the process of a human using AI as a tool (or assistant), with each tackling the aspects of a problem they’re strongest at. In the book Working with AI, two MIT management experts present case studies of companies using human-machine collaboration to do everything from personalizing clothing selections to creating a telemedicine platform with a chat-based interface.
In each of these examples, the process follows the same model: the AI roots through all the potential data and quickly narrows it down to its best picks, but the human makes the final judgment call.
This is the ‘tried and true’ approach to working with AI, and you see it in nearly every domain today: airline pilots use autopilot (but can take the controls at any time), medical personnel get order recommendations (but still make the final decision), and so on.
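To make that pattern concrete, here’s a minimal sketch in Python of what augmentation looks like in code. The scoring function and candidates are hypothetical stand-ins for illustration, not any specific company’s system: the AI narrows the field, and the human makes the call.

```python
# A minimal sketch of the augmentation pattern: the AI narrows a large
# pool of candidates down to its best picks, and a human makes the
# final judgment call. The scoring function is a hypothetical stand-in.
def ai_score(candidate: str) -> float:
    """Placeholder for a model that scores how promising a candidate is."""
    return len(candidate) % 7 / 7.0  # dummy heuristic, for illustration only

def augmented_decision(candidates: list[str], top_k: int = 3) -> str:
    # AI strength: root through all the data and narrow it down quickly.
    shortlist = sorted(candidates, key=ai_score, reverse=True)[:top_k]

    # Human strength: the final judgment call stays with a person.
    for i, option in enumerate(shortlist, 1):
        print(f"{i}. {option}")
    choice = int(input("Pick the best option (number): "))
    return shortlist[choice - 1]

if __name__ == "__main__":
    pool = ["option A", "option Bb", "option Ccc", "option Dddd", "option Eeeee"]
    print("Final pick:", augmented_decision(pool))
```

Trivial as it is, the shape is the point: the machine does the filtering it’s fast at, and the person keeps the decision it’s trusted with.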
This is even prevalent in the field of Design. Michal Malewicz, a Designer with over 20 years of UX experience, highlights several ways you can make use of AI-generated content, including:
Using color schemes and patterns from AI-generated art in your Design
Using ChatGPT to provide text prompts for buttons
Using ChatGPT to create UX writing instead of Lorem Ipsum
and more
I highly recommend his video on AI if you’re curious about more.
AI-generated art on the left, a refined design on the right
This is how I believe AIs like ChatGPT and Midjourney will be used. And it’s not just my opinion: decades of research and plenty of current businesses back this approach of playing to each party’s strengths.
More importantly, are AI companies going to invest in the areas where computers are weakest, such as shelling out a ton of money to build a framework for making high-level decisions? Probably not.
But “that’s the way it’s always been” is a crappy argument, even if it’s been true for 60 years. So let’s raise the next point: we do not currently have the infrastructure, laws, or capability to support AI-led thinking. To understand why, let’s look at why we don’t trust AI-led thinking.
We do not trust AI-led thinking, nor should we
Let’s say you use ChatGPT to create a website, taking the code it generates and using it to set up a B2B enterprise software company. However, something goes wrong: one of the orders gets mishandled due to a glitch in the code. One of your customers is now out 3 million dollars, and they sue you as a result.
Who’s to blame in this scenario? If you copied code from ChatGPT, you might want to deflect blame to its creators, saying you just used their code. They might deflect right back, saying you can’t sue an AI. You’d probably be held responsible in the end (although I am not a lawyer), but it’s a mess of a scenario that spells out part of why AI-led thinking runs into issues.
AI-led thinking can work well in 90% of scenarios, but the remaining 10% can be so catastrophic that it’s worth having a human on board to make the judgment call. This is why airline pilots still sit in their seats when using autopilot.
AI-based thinking doesn’t have the precision to answer these difficult questions, and that comes down to how these systems are created. The simplest way to explain a specific AI (one built for predictive modeling) is that it’s built on a data set split in two: one set for ‘training,’ used to build the predictive model and see if it works, and another for ‘testing’ that model on new data to evaluate its accuracy.
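If that sounds abstract, here’s a minimal sketch of that split in Python using scikit-learn. The dataset and model are generic placeholders for illustration; they’re not what ChatGPT itself is built on, but the train/test pattern is the same.

```python
# A minimal sketch of the train/test split described above, using
# scikit-learn. The dataset and model are placeholders, not anything
# ChatGPT itself uses.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Split the data in two: one part to build the model, one to test it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)          # 'training': build the model

predictions = model.predict(X_test)  # 'testing': evaluate on unseen data
print(f"Accuracy: {accuracy_score(y_test, predictions):.0%}")
```

Whatever accuracy that prints is exactly the problem: it’s a statistical batting average, not a guarantee for any individual prediction.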
That’s all well and good when the stakes are harmless, like Google searches, but imagine this is the model an AI follows to prescribe and dispense medication. Would you trust a kiosk with a 78% chance of giving you the proper medication, knowing the potential side effects (heart palpitations, vomiting, insomnia, etc.) if it’s wrong?
Or would you trust a doctor to make that choice for you?
But let’s take a step back to ChatGPT: having it think for you and generate an essay seems completely low-stakes. So why wouldn’t you trust it? Well, how about the fact that it might get you banned from Google search results, hurt your SEO, and cause your readers to lose trust in you?
ChatGPT writes essays and articles in an impersonal, academic style. Not only is this unsuitable for blogs or SEO articles, but people can also tell when something’s off. The people who claim you can use it to create ten blog articles in an hour (and generate $$$) probably aren’t writers at all. After reading a few paragraphs of the first article, most readers can tell that something’s off and that the author doesn’t sound human.
As a result, they’re less likely to continue reading and may ignore your site as a whole, which is catastrophic if you’re trying to generate money from a blog or website. Reader trust is hard to build in the first place, and if you lose it by creating low-effort content and articles, they’ll go elsewhere.
That’s not even mentioning that Google might ban AI-written content. The reason is simple: the more low-effort, AI-written content gets displayed, the worse Google search becomes. If users stop using Google because it surfaces crappy AI-generated results, guess what happens to everyone’s AI-generated content? It disappears from the search results.
But all of this seems to depend on you publishing what ChatGPT gives you unmodified. So what’s to stop you from taking what it’s written and changing it a little? The same basic behavior that gets cheaters caught on tests.
Editing AI-generated content sucks worse than writing it yourself
“You can copy my work, just change your answers a little.” That’s the cry of middle schoolers sharing test answers right before they get caught.
It should be simple: change a few words here and a few numbers there, and your work should be indistinguishable from everyone else’s. Except people keep getting caught. Why? The answer is two-fold: there are more tools than ever to check for plagiarism, and it’s genuinely hard to edit something into your own work.
With a whole suite of tools available to check for plagiarism, ‘changing a few things around’ usually results in a 90% plagiarism score instead of a 100% one. The difference doesn’t matter: you’re still mostly plagiarizing, and you’re still going to get caught.
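To see why small edits barely move the needle, here’s a toy sketch of the kind of overlap measure such tools rely on. Real plagiarism detectors are far more sophisticated, but the intuition holds: swap a couple of words, and much of the text still matches.

```python
# Toy illustration of why 'changing a few words' barely helps: a simple
# 3-gram Jaccard similarity. Real detectors are far more sophisticated,
# but the intuition is the same.
def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

original = ("the quick brown fox jumps over the lazy dog "
            "while the farmer watches from the old wooden fence")
edited   = ("the quick brown fox leaps over the lazy dog "
            "while the farmer watches from the old wooden gate")

# Two word swaps still leave most of the 3-grams identical.
print(f"Similarity after light edits: {similarity(original, edited):.0%}")
```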
So what if you make more substantial changes? Can’t you change enough to ensure you don’t get caught? Well, yes, but you might just be making it worse. For example, imagine you have zero writing experience but are using ChatGPT to pump out blog articles for some cash. What words do you change, and what ideas do you refine, to make the blog your own (and avoid getting banned by Google)?
You probably have no idea. There’s a reason editing is a high-paying skill and people have full-time jobs as editors. You might tweak things enough that they’re no longer flagged as AI-written content, but end up with writing so crappy that no one wants to read it. Or you might not change things enough, and Google bans it.
In most cases, it will be easier to write it yourself (and learn to write better in the process). But, at the very least, ChatGPT can help you get started.
ChatGPT is giving an introductory voice to the voiceless
If you suck at writing or coding, ChatGPT is a significant first step. It’s essentially an AI companion (or virtual assistant) that can offer educational advice, give you ideas, and handle the basic low-level tasks that AI is good at.
But that doesn’t mean it will kill creative pursuits or do anything more. One thing I’ve come to realize from using these AI programs is the importance of voice. Writers can use the exact same words in different voices to create different emotions, feelings, and ideas in their work.
So if you think ChatGPT can take the place of years of coding or writing experience, you’re in for a rude awakening. It’s not a shortcut to success and expertise in those fields: relying on it too much might get you banned from websites (or quickly cost you your readers).
Instead, ChatGPT (and other AIs) can do the tedious, low-level steps you often hate, allowing you to concentrate on what matters: high-level thinking and big ideas.
So don’t let AIs think for you: they’re bad at it, and it will only end badly. Instead, use them as tools for growth and expertise through augmentation.
Kai Wong is a Senior UX Designer and a top Design Writer on Medium. His new book, Data-informed UX Design, explains small changes you can make regarding data to improve your UX Design process.