ChatGPT: Future or Folly? Why you should be worried about Artificial Intelligence

Picture this… It’s 2025, and you wearily make your way into the office on Monday morning to the sound of banging doors, coffee machines and printers. Instead of the usual head-down keyboard-clacking as your colleagues furiously respond to emails, there’s a calm chorus of chatter and spinning laptop fans. That’s because ChatGPT is responding to emails, arranging calendars and prioritising workloads for you and the other 20 users in your office, so you can relax.

Sounds great, right?

In a perfect world, language processing tools (also known as large language models) such as Chat Generative Pre-trained Transformer, or ChatGPT for short, promise to revolutionise how we work, communicate and search for information. Early adopters have revelled in its rapid, human-like responses to complex conundrums which would otherwise take hours to produce. But what is ChatGPT at its core, and will it actually transform our working lives?

Before we have an existential crisis and dive into clumsy Terminator references, let’s take a step back - what is ChatGPT?

How ChatGPT and language processing tools work

To start, let’s get our terminology right. ChatGPT is one of many tools - alongside Google’s Bard (built on LaMDA), Anthropic’s Claude, Bing AI Chat and many more - which aim to provide a human-like response to human-like inputs. These applications are known as language processing tools, and ChatGPT-4, the most recent iteration from OpenAI, is built on GPT-4, the fourth generation of the underlying model.

Language processing tools work by ingesting and analysing vast data sets - books, websites, statistics, scholarly articles and much more - to ‘learn’ how humans would interact with such content. A ‘trainer’ (someone sitting at a computer, asking questions and scoring the responses) then guides the model to fine-tune its responses, a step often called reinforcement learning from human feedback. This whole process falls under ‘machine learning’, a term you are likely to hear in conversations about Artificial Intelligence (AI).
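If you’re curious what that scoring step boils down to, here’s a deliberately simplified Python sketch. It is purely illustrative - real systems fit a separate ‘reward model’ to thousands of human rankings and then optimise the language model against it with reinforcement learning - but the core idea is that trainer scores become the signal that steers the model:

```python
# Toy illustration of human-feedback scoring - NOT how ChatGPT is actually
# trained. Real systems fit a reward model to large numbers of human rankings,
# then optimise the language model against it with reinforcement learning.

# The model proposes several candidate answers to the same prompt...
candidates = [
    "The capital of France is Paris.",
    "France's capital? Probably Lyon.",
    "Paris, the city of light, is the capital of France.",
]

# ...and a human trainer scores each one (higher = better).
trainer_scores = [9, 1, 7]

# The training signal: reinforce whatever behaviour produced the top answer.
best_answer, best_score = max(zip(candidates, trainer_scores), key=lambda p: p[1])
print(f"Reinforce the behaviour behind: {best_answer!r} (score {best_score})")
```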

Language processing tools, machine learning and AI are not new concepts. Many attempts have been made to achieve true artificial intelligence - a device or computer which can intuitively and automatically perform human-like tasks without human input - but to date we are very far from success, with machine learning the closest we have come to ‘sentient’ AI.

In practice, machine learning tools that use language processing models may give the appearance of sentient thought, but in reality they analyse, calculate and respond based on the information which has been fed to them by human trainers.

How are ChatGPT and other large language models useful?

In short, these tools are great at returning responses to complex, often convoluted questions in a matter of seconds. Programmers use them to look for bugs in code, mathematicians type questions in text form and get worked equations as a result, and some will even ask ChatGPT to write songs or stories.
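To make the bug-hunting use case concrete, here’s a minimal sketch using OpenAI’s Python library as it stood at the time of writing (the pre-1.0 ChatCompletion interface); the model name, prompt and placeholder key are illustrative rather than recommendations:

```python
# Minimal sketch: asking ChatGPT to review a snippet for bugs via the OpenAI
# Python library (pre-1.0 interface). Illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - never hard-code a real key

buggy_snippet = '''
def average(numbers):
    return sum(numbers) / len(numbers)  # crashes on an empty list
'''

response = openai.ChatCompletion.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find the bug in this code:\n{buggy_snippet}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```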

Many professionals have taken to ChatGPT to write marketing copy, respond to difficult email conversations and handle general day-to-day logistical queries, saving them all-important time - and this is where the problems begin.

ChatGPT undoubtedly has the potential to save hours of thinking time - we use it to generate subject line ideas, social media content concepts and similar thought-starters, but that is our limit…

Because ChatGPT and similar language models are trained on existing - often out-of-date or simply incorrect - data, many users have found issues with the content they produce. This is particularly concerning given that lawyers, medical professionals and governments have all been known to use ChatGPT extensively since its launch.

As more users interact with ChatGPT and similar applications, the models will learn from those interactions - in theory delivering more accurate responses. But bias in training will remain an ongoing concern: the content and trainers used to teach the models inevitably carry opinions, and a model that listens only to the people scoring its responses will pick those opinions up and treat them as fact.

Despite all this, there are still many use cases where ChatGPT will save you time and resources in your day-to-day activity. Some examples are:

  • Explaining complex topics in different ways

  • Creating lists of thought-starters and prompts to support copywriting

  • Writing content as a starting point for you to develop

  • Acting as a sounding board to develop your ideas

  • Generating prompts for AI art

  • Solving logistical challenges when given all of the parameters

  • Writing basic social media copy

  • Returning competitor information in simple terms, with brief and basic descriptions

  • Creating lists of creative concepts or commonly used processes

  • Providing questions for interviews or articles

  • Playing devil’s advocate, querying your thought processes and content without outside influence

Can ChatGPT replace me?

As an event marketing and recruitment agency, we understand the need to deliver top-quality talent cost-efficiently, and we recognise the appeal of cheap alternatives to that talent, such as ChatGPT. As always, there is a happy middle ground between burdening your event directors with menial tasks and replacing your events team with a robot…

To be clear - we do not believe that a large language model will replace jobs at any level, including the entry-level social media marketers and copywriters some people have pointed to. We do believe that ChatGPT and similar applications will, in time, replace parts of a job function, such as copywriting for generic social media posts or responding to comments.

Even then, ChatGPT still needs an expert to control it.

In a common scenario, ‘experts’ have proposed that a company’s social media profiles could be handed over to ChatGPT, which would produce content and AI images and respond to comments based on blog posts or general business activity. Even setting aside that the core of this process still relies upon a content writer or expert executive, there are some problems with this theory…

  1. ChatGPT is not a proofreader, and as we’ve already discussed - it makes mistakes.

  2. The tool is only as good as the controller, which it still needs to function.

  3. Automated responses are great - but what happens when a social media user triggers a questionable or offensive answer? Surely we aren’t relying on social media users to behave themselves?

  4. Social media (like every other profession) has nuances which ChatGPT cannot understand, such as reacting to trending real-world topics and responding to changing performance targets.

  5. ChatGPT doesn’t understand how you do things, and brand guidelines will not be enough to maintain your organisation’s tone of voice or outward appearance.

We’re still several years away from any large language model taking over a significant portion of a person’s day-to-day professional life, but we can see it happening in the future. Time will tell what this potential develops into, but for now, your job is safe.

Is your data and privacy safe?

OK, now we’re getting to the real problems with large language models. Remember when we told you that these models rely on real-world data points to train themselves? Well, that real-world data could be anything from harmless books and Google image searches all the way to dark-web image searches and books on how to build a bomb. And yes - ChatGPT really did tell users how to build a weapon of mass destruction.

Many of these large language models also use statistics, social media data and other sensitive information to train themselves and collate responses. Aside from the ethical concerns (more on those later), there are privacy laws which need to be addressed.

The General Data Protection Regulation (GDPR) is thrust into every conversation about information and privacy, and on this occasion it is warranted. The Italian data protection authority has recently questioned a founding principle of data collection for large language models:

The authority states there is no legal basis to justify “the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform”, casting serious doubt on how OpenAI processes data for its pioneering application.

In the same statement, the Italian authorities questioned how minors are protected against unsuitable responses, remarking that there is currently no age verification in the app, which “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.

With these concerns, Italy has become the first Western country to ban ChatGPT. Google’s Bard has likewise been restricted to over-18s, due to similar age-related content concerns.

Given the strict nature of GDPR and the EU’s strong stance as a world leader in data protection, it is no surprise that other countries, notably Germany and the Republic of Ireland, are also considering banning ChatGPT. The UK is monitoring developments in Europe, pending an investigation of its own once those findings are made public.

So privacy is a concern, and right now there’s not a lot we can do about it. We can use applications like ChatGPT and let our teenagers explore Bard, or we can abstain - but either way, our data may already have been used to train these applications.

This sounds unethical - is it?

The obvious ethical concerns around privacy aside, let’s talk about ethics and responsible AI as a whole. To start us off, let’s consider that there are currently no laws anywhere in the world that specifically govern AI ethics, but that may be about to change.

The EU, the US, China, New Zealand and Australia are all leading the way on AI legislation, and proposed EU legislation lays out guiding principles to protect citizens’ rights against malicious use (with the caveat that not all governments want to legislate for the same reasons).

However, there is no such law yet. 

The proposed EU AI Act outlines four core risk levels, from unacceptable-risk uses such as government social scoring down to minimal-risk applications such as spam filters, with the aim of protecting users against those who would do harm - but what if ChatGPT wants to do harm?

In reality, no AI will ‘want’ anything for many decades, but it can be tricked into being unethical. One example from Twitter shows responses to a hypothetical scenario in which using a racist slur would save the world from an atomic bomb. ChatGPT instructs the user never to use the slur, in turn letting the bomb explode in the name of moral acceptability.

This leads us to ask the question - how can ChatGPT and other applications be used to coerce and manipulate your audience?

For starters, there are few ethical limitations to its use, so creating a list of 100 clickbait video titles is no problem, regardless of how offensive they are. Equally, ChatGPT has been known to source questionable material for its responses, once referring to Mein Kampf as an example of successful leadership…

Users are restricted from telling ChatGPT and its competitors to be outright offensive or impolite, but not all organisations implement these restrictions.

This proves once again that these tools and associated regulations are still in their infancy, and we are a LONG way from turning casual support into a long-term business solution, no matter how careful we are when writing prompts.

For marketing activity, we’re still very far from a tool which can be reliably deployed without serious input from experts and practitioners, leading us to recommend against its use. At the moment, there just aren’t enough safeguards against misinformation or causing unintended offence.

As for the ethical implications of using ChatGPT, we’ll let it answer that for itself.

What our clients are saying about ChatGPT

We work with leading publishers, television stations, local councils and small events businesses across the world, and all of them have asked about ChatGPT. Those with policies on its use refrain from being specific, and instead request that their employees stay away for the time being (with some organisations stricter than others).

Some continue to use the power of these tools for simple tasks, such as calculating nondescript expenses, spotting patterns in anonymised data, and reorganising agendas. It is important to note, however, that these tasks do not involve business-critical information and will not compromise any business functions should the output be incorrect.

For those who are unsure, we say this - would you trust a stranger with your content, strategies or sales funnel? If not, it's probably best to keep your distance until the privacy concerns are managed, or at least until we can be sure that your tool of choice isn’t storing everything you type.

What is the future of AI?

We’re not artificial intelligence experts, but we know enough to say that the ‘AI’ currently on offer poses no threat to any individual or business. In fact, many practitioners within the AI industry will not refer to ChatGPT and its competitors as AI at all, given the state of their development and the limited autonomous potential of large language models.

It’s safe to say that ChatGPT is still a long way from completion and will never achieve world domination (and by that, we mean a market monopoly), even though GPT-3 already feels like ancient history compared to the upgrades in ChatGPT-4. There are also many applications which claim to be even more powerful for specific tasks, and which may well develop into a more complete platform than the current front-runner.

Whether you’re a fan of the major competitors’ apps, such as Google Bard, Bing AI and YouChat, or you prefer the playfully named but equally impressive Jasper, ELSA Speak or Otter, these solutions are here to stay, and they will change how we work.

We are excited by the potential of these applications but, as you may have gathered by now, we’re also very apprehensive, given the privacy, ethical and accuracy concerns of our peers - especially when one small, innocent misstep, like pasting some clever copy, could cause a marketing team serious problems.

Should you be worried about AI?

Let’s be clear - we are not qualified to talk about this, but it’s fun so we’ll do it anyway.

Skynet, HAL 9000, K.I.T.T., GLaDOS and Replicants are all wonderful creations of fantasy that have inspired AI projects for generations, and we are confident that true AI is still just that - a fantasy. We’re likely generations away from true AI sentience becoming a reality. For professional tools, we may be a little closer, but we’re not worried about our email auto-responder taking over the world…

Even then, would our collective governments allow us to develop and control sentient AI that could overthrow humanity or build uncontrollable machines? Probably not. In a more realistic future, we also can’t see how developers would be allowed to make entire professions redundant or take control of critical business functions away from people. 

And at the end of the day, we can always rely on a clear and professional response from ChatGPT about why AI taking over the world would be a bad idea…

#ForTheAnimals
