
'Don't look into the light!'



So, AI and large language transformer models. Who’s terrified? Who’s excited? And who’s just watching and waiting as a second mover, holding out for safer, wider, and more stable adoption with actual proof of value?


Here at CDS, we’re watching, waiting, and playing behind the scenes to assess the real value of these models, particularly in the context of the large, complex, and regulated public organisations we typically serve. And the two biggest challenges for our clients and for us right now are probably these: cost and risk.

But, come on, they are clever, aren’t they? You can’t help but be impressed by the speed and accuracy of the responses these models create, particularly if you’ve had any exposure to ‘AI’ tools over the last few years and remember how narrow their applications were, and how many of them had to be orchestrated together to get anything constructive done.

So, as a starter for ten, I thought I would write this blog and get some initial thoughts down after the craziest few months of technology frenzy I can remember.

For source material, I decided to go to the source... I have my own ideas and impressions, but the tools themselves seemed the ideal place to start. What would they come up with if I put them in my shoes and gave them some of the questions at the top of my mind?

Here’s the prompt I used with two of the biggest and most famous models, Google’s Bard and OpenAI’s ChatGPT:

'Imagine you are a marketing and innovation director for a professional services agency, helping your clients understand the risks and opportunities of using large language transformer models like Bard or GPT4. Write an 800 word blog entitled 'Don't go into the light' which encourages clients to understand these technologies thoroughly before investing in them and to use the right tools to solve the right problems, without immediately going to the most sophisticated and expensive models, just because they are new.'
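(As an aside for the more technically minded: you don’t have to use the chat interface to run an experiment like this. Below is a minimal sketch of sending the same prompt through OpenAI’s Python library as it stood at the time of writing. The API key placeholder, model choice, and token limit are illustrative assumptions, not recommendations.)

```python
# Minimal sketch: sending the blog prompt to OpenAI's chat API using
# the openai Python library (v0.x, current at the time of writing).
# The key placeholder, model name, and token limit are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # substitute your own key

PROMPT = (
    "Imagine you are a marketing and innovation director for a "
    "professional services agency..."  # the full prompt quoted above
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the ChatGPT model; "gpt-4" if you have access
    messages=[{"role": "user", "content": PROMPT}],
    max_tokens=1200,        # rough headroom for an ~800-word reply
)

print(response.choices[0].message.content)
```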

Firstly, here is the response from Google’s Bard:

Don't Go Into the Light

In recent years, there has been a lot of hype surrounding large language transformer models like Bard and GPT4. These models are capable of generating text, translating languages, writing different kinds of creative content, and answering your questions in an informative way. However, it is important to understand the risks and opportunities of using these technologies before investing in them.

One of the biggest risks associated with large language transformer models is that they can be biased. This is because they are trained on massive datasets of text and code, which may contain biases. For example, a language model that is trained on a dataset of news articles may be biased towards certain political viewpoints. It is important to be aware of these biases and to take steps to mitigate them.

Another risk associated with large language transformer models is that they can be used to generate harmful content. For example, a language model could be used to generate fake news articles or to create phishing emails. It is important to use these models responsibly and to be aware of the potential for misuse.

Despite the risks, there are also many opportunities associated with large language transformer models. These models can be used to improve productivity, automate tasks, and create new products and services. For example, a language model could be used to generate marketing copy, to translate documents, or to write code.

It is important to use the right tools to solve the right problems. Not all problems require the use of a large language transformer model. In some cases, a simpler tool may be more effective. For example, if you need to translate a simple document, you may not need to use a language model. There are many online translation services that can do a good job for a fraction of the cost of a large language transformer model.

It is also important to remember that large language transformer models are still under development. They are not perfect and they can make mistakes. It is important to be aware of these limitations and to use these models with caution.

In conclusion, it is important to understand the risks and opportunities of using large language transformer models before investing in them. These models can be powerful tools, but they should be used responsibly.

Here are some additional tips for using large language transformer models responsibly:

  • Be aware of the potential for bias and take steps to mitigate it.
  • Use the models for good and not for harm.
  • Use the right tools to solve the right problems.
  • Be aware of the limitations of the models and use them with caution.

By following these tips, you can help ensure that you are using large language transformer models in a responsible and effective way.

[Screenshot: Bard’s response]

And secondly, here is ChatGPT’s response:

As the marketing and innovation director of a professional services agency, I have seen first-hand the excitement surrounding the latest advancements in large language transformer models like Bard or GPT4. With their ability to generate human-like language and analyse vast amounts of data, it's no wonder that many businesses are eager to incorporate these technologies into their operations.

However, before jumping into the latest and greatest, it's important for clients to understand the risks and opportunities of using such models and to use the right tools to solve the right problems.

First and foremost, it's crucial to note that implementing large language transformer models is not a one-size-fits-all solution. While these models can be incredibly useful in certain contexts, they may not be the best fit for every problem or industry. In fact, in some cases, they may actually do more harm than good. For example, if a business relies too heavily on these models without understanding their limitations, they could end up generating biased or inaccurate outputs, which could ultimately damage their reputation and credibility.

Therefore, it's important for clients to thoroughly evaluate their specific needs and consider whether large language transformer models are the appropriate solution. This may involve consulting with experts in the field or conducting thorough research to determine the best approach.

Additionally, clients should be aware of the potential risks associated with using large language transformer models. For example, these models can be vulnerable to adversarial attacks, in which an attacker deliberately inputs data designed to mislead or manipulate the model. This can have serious implications in fields like finance or healthcare, where inaccurate predictions or recommendations could have severe consequences.

To mitigate these risks, businesses must invest in robust security measures and take a proactive approach to monitoring their models for potential vulnerabilities. They should also consider implementing additional checks and balances, such as incorporating human oversight or utilising multiple models to verify results.

Another factor to consider is the cost and complexity of implementing large language transformer models. While these models can provide significant value, they often require a substantial investment of time, resources, and expertise to set up and maintain. This may not be feasible for all businesses, particularly smaller ones without access to specialised talent or extensive funding.

As such, clients should carefully consider the return on investment and ensure that they have the necessary resources to support and maintain the technology. They should also consider alternative solutions, such as smaller-scale models or specialised tools, that may be more cost-effective and appropriate for their needs.

Finally, it's important for businesses to approach large language transformer models with a healthy dose of scepticism and caution. While these technologies can be incredibly powerful and transformative, they are not infallible. They are only as good as the data they are trained on, and even the most sophisticated models can produce inaccurate or biased outputs.

To avoid these pitfalls, clients should always validate their models with real-world data and feedback, and continuously monitor and adjust them as needed. They should also prioritise transparency and ethical considerations, ensuring that their models are transparently and appropriately trained and that they do not perpetuate biases or discrimination.

In summary, while large language transformer models like Bard or GPT4 may be the latest and greatest in technology, they are not a panacea for all business challenges. Before investing in these technologies, clients must carefully evaluate their specific needs and risks, and consider alternative solutions that may be more appropriate and cost-effective. Additionally, they must take a proactive approach to monitoring and securing their models, and prioritise transparency and ethical considerations to avoid potential pitfalls. By following these guidelines, businesses can harness the power of large language transformer models to drive innovation and growth, without falling prey to their potential risks and limitations.

[Screenshot: ChatGPT’s response]

I’ll let you be the judge of the quality of these responses, but ChatGPT seems to have the edge – not just in content, but in tone of voice and writing style, which to my ear is significantly more engaging and sophisticated.

And just to put a cherry on top of the blog, I trained resemble.ai on 30 seconds of my voice and had it narrate the whole piece – but far from sounding like me, it sounded like a bored American robot… ah well…

One final thought before I sign off. There’s one further reason to write this piece in the middle of April 2023, and that’s to have a record of how naïve and unsophisticated we were in our approach to these tools. When we look back in 6, 12 or 18 months’ time and see how far they’ve come, what will we find?

Get in touch with Dom Baker, our Head of Innovation, or with me to dig into where we see the risks and value in these tools.