Artificial Intelligence (AI) takes another leap forward in content creation with the release of ChatGPT. But what started off as a fun afternoon for editorial assistant Charlie Feasby – asking a robot silly questions – eventually revealed the dark side of AI-generated content, and why businesses should exercise caution before using it.
Artificial Intelligence (AI) is stepping on the toes of content writers and artists everywhere. Take ChatGPT – a new chatbot from OpenAI that can write informative yet jargon-free, conversational articles. It’s slick and pretty impressive, and it certainly landed with a raft of publicity, racking up one million users in its first five days. Fearing for my future as a writer, I felt the need to push it to its limits.
When you ask ChatGPT about itself, it gives a very calm and measured response straight out of a 2001: A Space Odyssey sequel:
‘I can be used for a variety of tasks, including answering questions, providing information, and completing various types of online tasks. I am a general-purpose AI and can assist with many other tasks as well.’
It’s a very broad definition and ChatGPT is clearly being vague in order to increase its userbase. Frankly, it should stay in its lane.
It’s currently free to use during the research period, with eventual plans for monetisation. All I had to do to gain access was create an account with OpenAI and then I was ready to go – or, at least, in the queue ready to go. Given its popularity in the first few days, queuing was a regular occurrence, but this died down after the initial hype.
The interface is simple – just like any other messaging service. You ask a question and it responds like a real person. There’s even a slight pause before it answers and a little icon showing it’s ‘thinking’. I tried not to think about how much time I spent waiting for a robot to reply to my messages. When they invent an AI that can ghost you, then I’ll be really impressed.
I asked it: “How do I make my writing more concise?” Its response was straight out of a journalism-101 workshop: use the active voice, cut unnecessary words and choose shorter ones. The response was informative, jargon-free and conversational – exactly what it’s meant to be. I guess that’s 1-0 to ChatGPT.
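(As an aside for the technically curious: ChatGPT itself was browser-only during the research preview, but OpenAI’s public API did expose its sibling model, text-davinci-003, so you could ask the same question in code. A minimal sketch, assuming the openai Python library and an API key from your own account:)

```python
# A minimal sketch – ChatGPT had no public API during the research preview,
# so this uses the sibling model text-davinci-003 via OpenAI's API instead.
import openai

openai.api_key = "sk-..."  # your key from the OpenAI account dashboard

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="How do I make my writing more concise?",
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

You get much the same journalism-101 advice back, minus the ‘thinking’ icon.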
I say ‘conversational’, but something always feels slightly off. Every response is clinical – there’s no personality behind it. Human writers slip opinion into everything they do, but ChatGPT can’t provide one, even when you ask for more subjective things like book recommendations on a topic – it will just tell you to use Goodreads. Rather appropriate, as Goodreads is the most clinical of all review sites. I won’t deduct a point, but it’s definitely something to improve upon.
When you ask it to describe something, however simple it may be – eg what does a train sound like – it gives a stubbornly literal answer. It could tell me that the sound of a train is caused by the motion of the wheels on the rails as well as the mechanical parts of the engine, but even with another prompt, it refused to say “a train goes choo-choo” – something I’d tell you immediately. So that’s 1-1 now.
This is merely a limitation. The first real negative came when I asked for the UK requirements for cycling in the dark. It incorrectly said I needed a red front reflector. According to UK law, it isn’t mandatory for a bike to have a front reflector and, if it has one, it should be white. ChatGPT claims you can correct incorrect information, so I tested this by explaining the law. Instead of following my suggestion, it doubled down, telling me that I was wrong. An argument almost broke out, but then the site crashed – I consider that my victory, making the total 2-1 to me. Hooray!
This backs up what many users have found with ChatGPT: the information it gives is not always correct. And it doesn’t cite sources, so there’s no way to check without looking things up yourself – unlike professional writers, who are held to account for every source they use.
The internet gives a broad range of suggestions, some sensible and others not so much. One article, which I won’t cite to save them the embarrassment, told me to use it for relationship advice – an offer I’ll decline, thank you very much.
Other suggestions seem more sensible on the surface – chief among them, using it as a search engine.
Although its use as a search engine is appealing, with its jargon-free responses, the lack of sources and the unreliability of its information make it unfit for purpose. An adaptation of the program so that it shows its sources would be a step in the right direction, but as it cycles through so much information to generate an answer, this might not even be feasible. And in the process of finding out how it sources this information, I uncovered a darker side of AI technology that made me question whether we should be using these platforms at all.
OpenAI – the company behind ChatGPT – is currently facing a class-action lawsuit. The controversy stems from the AI’s use of copyrighted material from fan-fiction forums such as Archive of Our Own (AO3). It’s a huge database of conversational language – a perfect resource for an AI learning to speak colloquially. The only problem is that the users didn’t give their consent for their work to be used – OpenAI, allegedly, knew this and did it anyway.
The alternative would have been a bank of volunteers submitting their conversations for OpenAI to use in teaching ChatGPT colloquial language. Sadly for OpenAI, no such bank exists, so, allegedly, it stole the information instead.
The funnier side of this is that ChatGPT is an expert at writing fanfiction. The less funny side is that the company is profiting from other people’s work. They’re the same issues raised by AI-generated art. While it’s technically not copyright infringement, as the art isn’t a direct copy, it imitates the style of existing artists, just as ChatGPT emulates the style of existing writers. Allegedly, until proven in court, etc.
And of course, *deep breath* AI-generated content devalues the work people make. When an article can be created almost instantly, it has no value, and the people who unknowingly did all the hard work for ChatGPT will get nothing.
The site has already raised questions about the rise of AI-assisted plagiarism in schools and universities, and businesses should be conscious of this too. The high level of incorrect information is also a red flag – it’s a chatty Wikipedia with even fewer sources. But even if the accuracy problem were resolved, AI-assisted plagiarism could do significant damage to the integrity of educational institutions and, of course, long-term, to learners’ genuine skills.
Errors in the system have already caused problems for companies that have used GPT (the family of AI models ChatGPT runs on). When French start-up Nabla tested it for healthcare purposes, a simulated patient was urged to take their own life.
It’s an AI flaw that doesn’t exist in humans. We understand what words and sentences mean in the context of a situation, but AI doesn’t. It only knows how to arrange symbols into something that we then read as language. The context is lost on it – an issue that is nowhere near being resolved.
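To make that concrete, here’s a deliberately crude sketch – a toy word-predictor, nothing like GPT’s neural network in scale or mechanism, but sharing the same blind spot: every next word is chosen purely from statistics over symbols, with no grasp of what any of them mean. (The tiny ‘training’ corpus here is made up purely for illustration.)

```python
# A toy next-word babbler: it strings together statistically plausible
# symbols with zero understanding of what they mean.
import random
from collections import defaultdict

# A made-up scrap of "training data" for illustration only
corpus = ("if you feel low you should talk to someone you trust "
          "and you should take care of yourself").split()

# Record which words follow which: word -> list of observed next words
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def babble(start, length=10):
    """Generate text by repeatedly picking a word seen after the last one."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # dead end: no word was ever seen after this one
        words.append(random.choice(options))
    return " ".join(words)

print(babble("you"))  # fluent-looking, meaning-blind output
```

Scale that statistical trick up by billions of parameters and the output becomes fluent enough to pass for understanding – but, as Nabla found, the understanding still isn’t there.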
ChatGPT isn’t magic. While using it may seem like a quick way to create content, doing so ignores the unethical way the site generates its information. I initially felt positive about it, but now I can’t recommend using it, or any other AI-generated content.
AI can’t replace humans and, however good the technology gets, it should never replace a content writer. It’s unreliable, threatens the integrity of the work produced and is surrounded by dubious ethics that should put businesses off using it for the foreseeable future.
Some companies are trying to resolve this – Rolls-Royce with its Aletheia Framework, for example. This toolkit is freely available and aims to build public trust in AI, with guidelines on how to approach building an AI system. As you can probably guess, ChatGPT appears not to have followed any such guidelines.
If you’re expecting me to end with some grand revelation that this whole article was actually written by AI, I’ll have to disappoint you – I’ll never be using ChatGPT again.