AI and the Age of Content
The Stage is Set
Have you noticed how prevalent the word “content” has become in recent years? Streaming services are now “content platforms.” YouTubers are now “content creators.” The word content has come to mean just about any form of online material, including websites, articles, blogs, videos, TV shows, podcasts and even movies. And as the printed world merges with the digital, the lines have become even more blurred. Books are content, cinema is content, the news is content. The original definition of content is incredibly general; it literally means “the collection of things that are held or included in something.” The specific has become general. We live in an age when entertainment, information, commerce and social interaction have coalesced into a single entity with the singular purpose of occupying our attention, however briefly.
Enter AI.
The Scale of the Problem
It seems like you can’t open an app, internet browser or computer program without being hit over the head with new AI tools. It does feel like AI has gone from hypothetical to mainstream incredibly quickly, but what I’m not sure people realize is just how mainstream it has become. Have you noticed lately that when you Google a question (or Bing it, or use whatever other search engine you prefer), you get tons of articles that seem very generalized and in the end don’t actually tell you anything you don’t already know? Yep. AI.
Social media platforms have been seeing this for a while, with AI bots spewing nonsense in post comments. But now discussion forums like Reddit and Quora, the sites people specifically went to in order to talk with other humans, have also been infected with AI bots posing as humans and trying to answer users’ questions.
Why are people using AI in this way?
So five years ago, anyone who wanted to start a website (whether for their business or a personal or hobby site) was encouraged to write blogs. The reason behind this (in very basic terms) was that writing blogs meant your website had more text, and more text meant more keywords for a search engine to comb through and find during internet searches. The term “search engine optimization” has since become commonplace, and entrepreneurs and digital marketers like me do our darndest to cram as many keywords and search terms into websites as we can to help them appear in more internet searches. But writing blogs takes time, effort and a certain level of patience that lots of people don’t have (as does reading them, thanks for that by the way). That’s why, in the very recent past, there were lots of people earning extra money as freelance blog writers, and if you had a website yourself, your inbox was full of spam from Indian blog writers offering to write quality content for your website.
Then AI arrived.
Suddenly, blog posts could be generated with a simple one-sentence prompt and a few mouse clicks. So why waste an afternoon writing one really good article when AI can write a few dozen mediocre ones in the same span of time? The problem is that AI doesn’t actually know anything.
As a wise Jedi once said: “The ability to speak does not make you intelligent.”
(Apologies for the nerdy reference to the prequel trilogy.)
Anything written by AI generally sounds competent (at least when compared to what the average person can write), but it doesn’t contain any specific knowledge or experience that’s worth your time to read. You might ask why people bother to create hundreds and thousands of AI articles and splatter them across the internet. What is the point of bringing people to your site if not to sell them something? Well, the answer is that they are selling something, just not to the site visitors. Ever notice how these generalized sites you’re forced to wade through in search of an answer to what you thought was a simple question are just buried in ads? Turns out, selling advertising space on a high-performing website is a good way to make passive income. And your site will perform even better if you pay Google or other search engines to boost it in the search rankings.
This isn’t to say that people didn’t sell ads on their human-generated articles, but when humans had to write their own blogs, the content at least had to have some value in order to encourage people to view it. Human-generated content usually has some redeeming feature that makes it interesting, whether it’s a humorous outlook, a unique real-world experience or a high level of expertise. AI-generated content has none of these. The value of AI-generated content is that it can be generated quickly and cheaply (often for free), and all the human has to do is post it, sell ads on it and sit back and watch the nickels and dimes roll in. It doesn’t matter if an AI article is rubbish and doesn’t attract that many viewers. If you have hundreds upon hundreds of them, the website visits, and therefore the ad revenue, will eventually start to add up. It’s the classic example of quantity over quality; the problem is that the quantity is so overwhelming that the quality content is becoming genuinely hard to find.
But it doesn’t stop there. Online marketplaces (especially sites like Amazon) are being flooded with books written by AI. AI image generators have become readily accessible to the general public, and an alarming number of people don’t realize that they’re looking at fake images. AI voiceovers have become standard practice in the advertising world, and photorealistic AI-generated video is coming closer to fruition with each passing day.
Now I don’t know about you, but I find this incredibly annoying, and more than a little disconcerting. It’s one thing not being able to find helpful information or answers to specific questions; what’s far more concerning is that AI, when confronted with a question or prompt that it isn’t specifically trained on, will just make stuff up. As I said before, AI doesn’t actually know anything. AI generates content by combining, compiling and rearranging the data it’s been fed, and it can’t tell whether the information it ingested as part of its training is accurate or biased. The capacity for misinformation is staggering. This isn’t to say that humans aren’t capable of spewing BS across the internet (in fact, we’re quite good at it), but the advent of AI-generated prose means that we can now do it even more quickly and with even less effort.
It Was Inevitable
As with every science fiction depiction of AI going rogue and taking over the world, this was all a disaster of our own making. The emergence of AI was inevitable in a culture that is in the throes of content addiction. I won’t claim to be immune to it. In fact, I’m pretty sure that my generation is the one that started it. I can’t pinpoint an exact turning point for when this all began (there are other long-winded blog posts that can do that better than I can), but it was somewhere around the internet revolution.
My best guess is that this turning point was around the time that print stopped being the primary medium of information and we realized that we could instantly answer any question with a simple search. Information could now be monetized in ways that weren’t possible before. Information was now “content,” and as screen addiction became a reality for millions of people, we moved past consuming information and just started consuming…anything we could find. Cue the rise of the blog, the social media influencer and YouTube.
And thus our content addiction began.
Somewhere around the time we started endlessly scrolling through Facebook and following infinitely-deep YouTube rabbit holes, we stopped allowing ourselves to be bored. We never really realized we were filling our every waking moment with endless content, but we were: listening to podcasts while we drive, watching YouTube while we cook, binge-watching TV in the evenings. We are constantly consuming content, to the point that millions of people’s daily routines now include hopping onto YouTube to watch other people go through their own daily routines. Content has become so prevalent in our lives that we’re forgoing real-world experiences to watch videos of other people having them. And then watching videos of other people reacting to those videos.
This isn’t art. It’s just content. (But that’s a topic for another blog…)
With the demand for content at an all time high, the development of content-generation software was inevitable. Entrepreneurial bloggers put it to work flooding the internet with articles that don’t actually say anything. Tech giants have added it to their search and chat functions in order to collect more detailed data about their users that they can then package and sell to advertisers. Website hosting platforms have made AI tools readily available to all their users in a bid to attract new business, compounding the problem even further.
The Point? Yes, I should probably get around to that…
So why am I telling you this? Well, aside from the fact that I’m hoping this blog post will contribute to the SEO of my website, I would like to offer a suggestion. I think a lot of people underestimate the power of “voting” with their wallets and clicks. The reason AI has become so prevalent is its potential for replacing costly human labor (like me). The fact that it has become easily available during a time of economic strain has only accelerated this process. As our lives become even more packed with “content,” I would like to encourage you to turn your attention to the content, products and businesses that value things created by humans. Tune out the AI noise to find and support the creators and small businesses who aren’t bowing to the social and economic pressure to utilize AI. That means reading books written by real people, subscribing to reputable sites that don’t use AI to generate articles, buying art directly from the artists, and supporting small businesses that aren’t using AI to cut corners.
Spot the Difference
It pains me to say it, but I think the general population won’t really notice or care that they are consuming AI-generated content. If you believe (like I do) that most people live life on autopilot and care more about convenience than anything else, then the outlook is pretty grim. Part of the issue is that AI is difficult for lots of people to spot. Unless someone is trying to sell AI services, they don’t usually announce the fact that their material was generated by AI, so AI content is rarely labeled as such. In fact, it often references a human author (or just invents one). The larger problem is the issue of digital literacy in general. AI imagery and text are difficult to spot unless you’re really paying attention.
There are AI detectors out there…which are themselves run by machine learning or AI (true irony is rare), but lots of people don’t know or care enough to utilize them. I’m not sure how effective they are… zerogpt.com said there’s a slight chance that portions of this article were written by ChatGPT, so there’s that… I suppose if AI really could recognize AI, then it could become self-aware, and in movies that’s when the missiles start flying…
While I won’t claim to be able to spot AI every time, as someone who is sensitive to this issue, I’ve realized that there are certain telltale signs that can help you identify AI-generated articles. For one thing, AI typically doesn’t generate typpos. I recently read an article on a widely publicized site that was riddled with grammatical errors (one of my first jobs was as a tutor in my university’s writing center, and this article was so bad I wanted to take a red pen to my screen), but I remember telling myself, “Well, at least that means a real person sat down and wrote this article. Even if they did it poorly.”
For me, it’s all in the tone. AI-generated text has a sort of sterile, semiformal tone. An AI that has been trained to write articles will sound like every generic information article you’ve ever read. It won’t use oddball metaphors or creative language that only a human would think of. The big tell is that it more than likely won’t include any highly specific, detailed information or anecdotes. It’s all very generalized. It’s all very flat.
AI images are (for me at least) easier to spot than AI text. A good way to think of the difference between AI images and “real” images is that computer-generated images haven’t passed through the lens of a camera. They don’t look grainy, they don’t look overexposed or underexposed, they’re not blurry from motion. Because of this, AI-generated images often look painterly, or hyper-real. A bit too perfect. Of course, AI-generated images also suffer from the fact that AI doesn’t know what things look like. It’s making its best guess, one pixel at a time, and sometimes the results are hilariously wrong. Lines of detail bleeding into each other are another telltale sign that an image is fake.
While realistic AI-generated video isn’t quite here yet, it’s not far off. In this context, the capacity for misinformation is far more concerning. Deepfakes (AI video recreations of a person’s likeness) began as a technology for replacing actors in movies (either because they had died or for flashback sequences where a younger version of a character is required), but they have become a global security concern, as malicious use of the technology could be extremely damaging. If you want to learn more, here’s a link to a video from one of my favorite YouTube channels with more information about how deepfakes work and how to spot them: https://www.youtube.com/watch?v=KqlW1TczgsA&t=46s
Congratulations! You’ve Made it to the Conclusion
I know it’s not always possible or economically viable to forgo AI completely, but the only way to make sure that humans continue to make great things is if we are willing to consume them over AI-generated alternatives. That means placing passion above convenience, uniqueness above accessibility and value above cheapness.
That isn’t to say we should boycott AI entirely, and I really don’t think we could if we wanted to. A lot of AI tools are here to stay, and some of them are actually helpful. I’d be lying if I said that I don’t sometimes use AI-powered editing tools to enhance images or clean up audio. My issue isn’t with AI tools, though; my issue is with AI-generated content that is pushed directly to consumers to make a quick buck. As for the sci-fi-fueled phobia that AI is going to become self-aware and destroy the world, I’ll be more concerned when my laptop figures out how to self-update (seriously, I don’t know why I bothered turning on auto updates; my MacBook has NEVER been able to update itself when it says it’s going to).
A quote I came across recently while researching this blog post was, “Why would I want to read something someone else couldn’t be bothered to write?” I’m not sure who originally said it, but it sums up my feelings on the matter quite well. The books and films and art that I love have value to me because I know that they represent the hard work, emotions and imagination of one person or a group of people. AI can’t create something that has feeling because it doesn’t have emotions. It has no sense of passion because it doesn’t know what tasks it’s carrying out. It can’t create something realistic because it has no understanding of reality. And no matter how smart artificial intelligence may become, it will never overcome the fact that intelligence does not equal imagination.
Cheers,
Mary
In case you couldn’t already tell, this article was written by a HUMAN BEING, not an AI bot.