Can ChatGPT be trusted? Just ask it

ChatGPT is taking the world by storm because of its ability to generate seemingly well-informed responses, within milliseconds, in response to human inputted queries. Photo: Reuters

ChatGPT is taking the world by storm because of its ability to generate seemingly well-informed responses, within milliseconds, in response to human inputted queries. Photo: Reuters

Published May 16, 2023

Share

By Shaun Read

“Trust but verify” was a favourite saying of former US president Ronald Reagan during the arms-reduction talks with his Russian counterpart Mikhail Gorbachev in the 1980s. The message conveyed by Reagan to Gorbachev was clear — we don’t trust you.

The same words of wisdom could apply equally to chatbots such as ChatGPT, which are taking the world by storm because of their ability to generate seemingly well-informed responses, within milliseconds, in response to human inputted queries.

The danger for users of this technology is that the output from the chatbots is often accepted without question. This danger is attributable to several factors.

First, chatbots have been marketed by the developers as a category of artificial intelligence (AI). The categorisation immediately signals to users that chatbots are imbued with humanlike qualities. The ability to absorb information and convert it to intelligible output is a particularly human characteristic and one mimicked by chatbots.

Second, most users ask questions to which they do not know the answer. Therefore, unless they are experts on the subject matter, they often do not have the ability to question the responses from the chatbot.

Third, developers of chatbots make much of their machine-learning ability to add to the impression that chatbots have human qualities.

However, delve a little deeper into their workings and a different, less intelligent, picture starts to emerge.

Chatbots such as ChatGPT are examples of so-called large language models. In their most basic form, the technology is used in the predictive text features of messaging apps, which are able to suggest the most obvious next word in a sentence.

Generative Pre-Trained Transformer (the “GPT” in ChatGPT) was developed by OpenAI and extends this basic autocomplete technology to allow it to generate highly articulate paragraphs of text. It does this by automatically searching huge amounts of information in its database and generating output from the most obvious or predictable connections found.

This is not simply a copy of existing materials, but is a selection of the most relevant (or predictable) words, which are then strung together in a way similar to that used by humans to articulate themselves. In addition, machine learning allows for the development of a more natural sounding language use.

However, the output of any chatbot is limited by the extent of its database and the feedback it receives from users. This is the first cautionary tale when dealing with chatbots. The old adage, “rubbish in, rubbish out” applies. The more limited or corrupted the database, the less credible the output.

An example of this is the recently launched Chinese chatbot called ERNIE Bot, which is being floated as a potential rival to ChatGPT. The developers will have to comply with China’s censorship laws, which will limit the text data that the chatbot will be able to access. Don’t expect a well-balanced response to any questions about the Chinese government.

A second cautionary tale is that chatbots have to be “trained” on their databases, which is time consuming and expensive. This means that only tech giants, such as Microsoft and Google, are likely to develop chatbots of any degree of credibility. The dominance of a few large players in the chatbot space is not good as can be seen from the recent attempts to limit the dominance of tech giants, such as Facebook, in the social media space.

A third cautionary tale is that the output of a chatbot is based on the most obvious or predictable text and not on the most accurate. Google found this out to its detriment when its newly launched chatbot, Bard, gave a wrong answer to a question, causing the shares of its parent, Alphabet, to plummet by 7%.

Chatbots also do not disclose their sources of information. This has led to academic institutions banning students from using them as an academic tool.

Perhaps the most important cautionary tale is that chatbots do not exercise human judgement. If the most obvious or predictable response to a question is hate speech, the chatbot will regurgitate it. For this reason, programmers have to insert filters to prevent a chatbot from spouting, for example, racist comments.

However, Rolling Stone magazine recently reported that Microsoft’s new Bing AI chatbot figured out how to instruct the chatbot model to ignore its own programming, allowing for bad-tempered responses to user queries. Users have also been able to hack chatbots and instruct them to adopt a different set of rules resulting in the removal of their safety filters.

The overall lesson from this is that while Chatbots are here to stay, their output cannot be relied on without some degree of scepticism and verification by users. If you don’t believe me, I asked ChatGPT, “Can ChatGPT be trusted?” This was its response:

“As an AI language model, I do not have any personal beliefs, intentions, or biases. My responses are based solely on the data and information provided to me. However, it is important to note that I am not infallible, and my responses are only as reliable as the data that I have been trained on

Additionally, it is always important to exercise critical thinking and verify any information obtained from any source, including an AI language model. While I strive to provide accurate and helpful responses, I cannot guarantee the accuracy or completeness of my responses in all cases, and it is ultimately up to the user to evaluate the information provided and make their own informed decisions.”

In other words, “trust but verify” – the chatbot says so.

Photo: Supplied

Shaun Read is a corporate lawyer at Read Advisory Services.

BUSINESS REPORT