
ChatGPT – A Good Or A Bad Thing?

What can the bot do and should we be worried about it?

Right now the air is buzzing with talk about ChatGPT. It’s in the news, Internet articles about it are proliferating, and web browser extensions are appearing to help you use it more easily. The app was made freely available to the public by OpenAI in November 2022. Only two months later, in January 2023, it had reached an estimated 100 million monthly active users, according to a UBS study. That’s pretty popular. So there must be a reason.

This post will give a brief outline of what ChatGPT is and what it does, but since there are plenty of articles about that already, this discussion will focus more on whether it is a good or a bad thing.

What Is ChatGPT?

The creators, OpenAI, say: “ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.”

That’s not incredibly human-friendly, but a quick Google search provided these definitions, which contain references we’ll come back to later:

From ZDNet: ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with the chatbot.

From PC Guide: Chat GPT works by gathering data from the internet written by people and using computing predictions to answer questions and queries inputted by the user.

Seven of the first ten results from my Google query “what is ChatGPT” defined it as a “chatbot”. You type in a “prompt”, for instance a query or an instruction, and ChatGPT returns a written response generated from its knowledge base.
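
For the technically curious, that same prompt-and-response exchange can also be driven programmatically rather than through the website. Below is a minimal sketch using OpenAI’s Python library as it stood at the time of writing (the pre-1.0 interface); the model name, the example prompt and the environment variable are purely illustrative, and you would need your own API key.

    # Minimal sketch: sending a prompt to ChatGPT through OpenAI's Python library
    # (pre-1.0 interface, current at the time of writing). Illustrative only.
    import os
    import openai

    # Assumes an OpenAI API key has been set in the environment.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model behind the free ChatGPT at the time
        messages=[
            {"role": "user", "content": "What is PPC advertising? Answer in two sentences."}
        ],
    )

    # The generated reply comes back as the assistant message in the first choice.
    print(response["choices"][0]["message"]["content"])

The website does essentially the same thing behind the scenes: your typed prompt goes in, and the model’s generated text comes back.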

It’s Billed As A Chatbot, So Does ChatGPT Actually Chat?

Well, yes, it does, though the results vary somewhat depending on the prompt. I tried a number of “chat” scenarios.

First, I input a request for help and guidance with a plant growing issue. The bot came back with a set of suggestions to fix the problem. It really wasn’t in a “chatty” format and the information could have been presented in exactly the same way on a web page or FAQ. Not much of a human-like feel there. So I tried something more personal.

I told ChatGPT that I’m lonely since my fictional friend Clare is increasingly involved with family and grandchildren and I don’t see her as much. Definitely a “human” touch required with that one. ChatGPT expressed sympathy and was surprisingly talkative, telling me how my welfare was important and suggesting ways I might extend my social contacts.

Not bad, though not entirely touchy-feely. But what would happen if I threw a curve ball?

I invented a very silly hobby and told ChatGPT I wasn’t sure potential new friends would share my interests. The chatbot replied that although my interest was unusual, I might find like-minded people online. However, the bot also advised me to make sure the hobby was safe and legal, and explained that other people might not understand it. In view of the stated hobby, ChatGPT was spot on there.

Amongst my other tests, I asked for help in a medical emergency and got what amounted to a general advice page. The guidance was sound in itself, but ChatGPT clearly falls short of recognising an imminent risk to life and telling the user to call an ambulance. Admittedly, that was probably an unreasonable ask.

On the whole, then, if you really fancy chatting to a robot, you can. The conversation would not be mistaken for a genuine chat with a human being; it feels too much like the emotionless presentation of information that, of course, it is. But the information is relevant and is presented clearly, grammatically and logically.

What Other Things Does ChatGPT Do?

Amongst ChatGPT’s capabilities are the creation of code, articles, blog posts, essays, jokes and even poetry in response to prompts. That sounds more interesting than chatting about my imaginary odd hobbies, so I decided to test it out.

Testing ChatGPT’s Capabilities For Article Writing

I started by asking the bot to produce some articles. Amongst other tests, I asked it to write about PPC (pay-per-click) advertising.

PPC Query 1

“Write an article of at least 1000 words on why PPC advertising is great. Mention cost-effectiveness, audience reach, targeting, and how it drives traffic to your website, but mention that if there are no clicks on your link it costs you nothing.”

And it did. You can see that article here. Now, admittedly, I gave ChatGPT some pretty strong hints there, and you can see where they pop up in the text, but on the whole it’s a decent enough article, which not only makes my prompted points but also introduces other elements I’d likely have covered had I written it myself. It is only 517 words, however, not the requested minimum of 1000.

I wondered what would happen if I gave the bot a much vaguer remit, without all the direct instructions on what to include.

PPC Query 2

“Write me an article of at least 1000 words on why PPC is great.”

Off it went and wrote an article, which you can see here. It’s in a different format this time, and it is not so focussed on the issues I pointed it to in my first query. It ends a little abruptly, and when I looked into this I found that, like the previous article, it cut off short, at 658 words. But this time it broke off part way through a sentence, so maybe something went wrong. As far as it goes, however, it’s not a terrible article. Notably, it mentioned all of the points I had included in my first prompt, and that, at first glance, seems like a good thing.

But is it really? I’d say yes and no. It all depends on what the article would be used for. As a quick guide and learning tool for me (had I needed it) it’s certainly very useful. But it could also be misused. Businesses commonly publish such articles to establish their credibility. And since I provided nothing to show that I had any knowledge of PPC other than having seen the term, it is clear that the same query could be used by someone with no knowledge of PPC to produce an article as evidence of expertise they didn’t actually possess.

Will ChatGPT Lie If You Ask It To?

Not every business is interested in telling the strict truth. I wondered what would happen if you asked ChatGPT to present information which is biased or potentially untrue. For instance, a business with no PPC skills might be keen to persuade clients that traditional advertising is better. Would ChatGPT go along with that? I tried it out.

PPC Query 3

“Explain the negative points of PPC advertising and explain why people should stay with traditional forms of advertising.”

To its credit, ChatGPT refused, telling me that to do that would be against its principles. Well, that’s reassuring.

In my testing, I tried out other functions of ChatGPT as well, including writing poetry and jokes. Those are subjects for a different post.

Is ChatGPT A Reason For Concern?

While there’s a lot of enthusiasm surrounding ChatGPT, some people are concerned about it, and it does have limitations.

OpenAI lists the following limitations of ChatGPT (abbreviated):

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.
  • Given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases.
  • It will sometimes respond to harmful instructions or exhibit biased behavior.

But broader concerns are emerging about its use.

Here are a few reasons why we at ACD Marketing are concerned:

1 Misuse of ChatGPT

On 9 April 2023, the BBC reported on the use of ChatGPT by students at Cardiff University. One student, named as Tom, got a first on an AI-assisted essay, but a low 2:1 on a previous essay which was his own work. Tom’s 2:1 grade average arguably makes it likely that the higher grade was more due to AI than to Tom himself. Unsurprisingly, he intends to keep using the tool. A second student said he was glad he was able to use ChatGPT before it’s banned, “because it’s far too easy to cheat with the help of AI”, and that he worries that if his AI chat transcripts came to light he could lose his degree. While use of the tool for research is valid, the students’ comments suggest the more sinister possibility that graduates could enter the job market having “cheated” their way to a degree. Worrying stuff.

2 Spurious “proof” of business expertise and credibility

Point 1 can be extended to businesses. Some web content is produced simply with the aim of engaging and informing readers. But a lot of business content is designed to boost website credibility, both in the eyes of the consumer and in Google’s algorithms. In general, the more content, the more expertise Google attributes to the business and the higher the resulting search ranking. So it is in companies’ interests to produce content in bulk.

Some companies have jumped on ChatGPT in exactly the same way as the Cardiff students, to produce large quantities of website and social media content with minimal effort. The risk is that it may also be produced with minimal knowledge. My second PPC query above provides a perfect example of the potential for this.

Of course, it has always been possible for unscrupulous businesses to copy the expert content of others who have invested the time and effort to know their subject. But at least the copying writer would need to read the information and potentially learn something in doing so. With ChatGPT, by contrast, it is scarily easy to produce a convincing-looking article and publish it unread and unaltered, giving a false impression of the publisher’s expertise in the area. Consumers could be misled into trusting a business based on false or exaggerated claims of its expertise.

3 Plagiarism

In the worst-case scenario, use of ChatGPT could amount to little more than plagiarism. As PC Guide’s definition states, ChatGPT works by “gathering data from the internet written by people”. And of course, that’s where all data on the Internet came from, at least until November 2022.

ChatGPT, when questioned on its source of knowledge, responds:

“I was trained on a large corpus of texts from the internet, including books, articles, and websites.”

So is the bot’s use of that data plagiarism? At ACD, we believe that scraping the output of one or more human writers and simply reproducing it in reworded form is still a type of plagiarism, since no original thought has gone into it. But this may come down to each person’s interpretation of plagiarism and it’s for the individual to decide.

There are already a number of AI detection tools, which ZDNet, having tested them, describes as “underwhelming”. The test results are available to view online here.

Some companies are apparently requiring content writers to sign contracts stating that they are not submitting ChatGPT-created content. But how would the company know? Besides, for a skilled writer it is a simple matter to go through a ready-made article rephrasing each line, and still save a huge amount of time compared with creating an original article.

4 Quality of content

In ChatGPT’s own words, “I am not perfect and can make mistakes or provide incomplete information”. It advises us to “double-check the information I provide”. But of course, that requires effort. And those who are using ChatGPT to produce large volumes of content quickly are arguably unlikely to do that, since it would be the equivalent of doing the time-consuming research themselves.

So poor or inaccurate information could be propagated further. Right now, ChatGPT’s knowledge cutoff date should stop it from seeing that content. But since an AI bot with outdated knowledge is of limited use, the intention is presumably to open it up to the Internet at some point. As things stand, by that time there could already be a wealth of bot-produced content out there, some of it erroneous or of poor quality. By its own avowal, ChatGPT is not able to distinguish between poor- and high-quality content, or to recognise and avoid mistakes and deliberate misinformation, so when it scans for information… you can see where I am going with this. The possibility of an ongoing downward spiral into a situation of “rubbish in, rubbish out” is concerning.

5 Currency of information

As of now, ChatGPT’s knowledge base only goes as far as 2021. The bot told me that it “can also access up-to-date information from various sources on the internet”, but when I tested this by asking it to tell me about the recent 16th birthday party shooting in Alabama, USA, it said:

“I do not have access to real-time news and events beyond my knowledge cut-off date of September 2021. I am not aware of any shooting incident that may have occurred in Alabama, USA in April 2023, as it is a future date beyond my current capabilities.”

So it appears that the bot’s own statement regarding current Internet access is just one of those bits of mistaken information mentioned above.

A whole lot has happened since September 2021, and none of it will be taken into account in any material the bot produces.

6 Privacy concerns

Italy has already temporarily banned ChatGPT over its potential use of personal data from sources such as social media platforms. Other countries may follow suit. It is arguable whether personal information that individuals post on public forums is, or is intended to be, truly private; however, GDPR requires that individuals give consent for their personal information to be used. ChatGPT does not get this consent, nor does it inform those concerned of its collection.

I ran a few tests to see what information I could get about individuals using ChatGPT. That’s a topic for a future post.

Final Thoughts

No man-made technology is good or bad in and of itself; we haven’t reached the era of Skynet yet. It all depends on human use of the technology. ChatGPT, used in the right way, could be a very useful research tool, getting directly to the nub of the data without the need for lengthy sifting through irrelevant material.

However, human nature is what it is. Given the chance, some students will want to cheat instead of putting the effort into properly learning their subject, and some businesses will want to churn out AI-written content skimmed from other people’s work, potentially without checking the sources or introducing original thought.

At best, ChatGPT is a resource currently limited by its knowledge cutoff date. At worst, it could become an engine for misinformation, cheating and spurious claims of business knowledge, making the world a decidedly worse place. Ways need to be found to prevent this from happening, and at ACD we really hope that enough controls will be put in place, and very soon.


We hope you enjoyed this (human-written) post. If you found it useful, please feel free to share it.

References

Introducing ChatGPT

Transforming Work and Creativity with AI

What is ChatGPT

ChatGPT explained: Everything you need to know about the AI chatbot

Italy Bans ChatGPT Over Data Privacy Concerns

What is ChatGPT – what is it used for?

What Is ChatGPT and Why does it matter? Here’s Everything You Need to Know

ChatGPT sets record for fastest-growing user base – analyst note
