
What is ChatGPT, and is it safe to use?

ChatGPT puts the power of artificial intelligence in our pockets, but how exactly does it work, and can you trust what it tells you?
Using ChatGPT on a smartphone

ChatGPT has stolen headlines since it was launched for public use in late 2022. The tool, which is an Artificial Intelligence (AI)-powered chatbot, can convincingly write almost anything based on a limited brief. 

The impact on consumers could be huge, with improved search engines, customer service and even product recommendations just some of the potential future uses of this nascent, yet remarkably advanced technology. However, while its responses are convincing at first glance, the inaccuracy of its ‘confidently incorrect’ responses has highlighted significant concerns about how this type of tool might be used and abused. 

With the UK’s competition regulator, the Competition and Markets Authority (CMA), announcing a review of the AI market, the pace at which these tools are developing is clearly under scrutiny. Read on to find out more about ChatGPT and similar tools, how they work, and whether you should be concerned. 


Tech tips you can trust – get our free Tech newsletter for advice, news, deals and stuff the manuals don’t tell you.


What is ChatGPT and how does it work?

ChatGPT was developed by the AI firm OpenAI, in which Microsoft is a major investor alongside several other investment firms.

It’s a chatbot that responds to almost any prompt, be it a question or command, in convincingly legible prose. GPT stands for Generative Pre-trained Transformer, which means it’s a tool that can generate responses based on what it’s already learned. There is a free version you can use as long as the service isn’t too busy, as well as a paid-for subscription.

ChatGPT isn’t the only chatbot that works in this way, but it’s the one that’s gained the most attention in recent months. 

Where does ChatGPT get its information?

ChatGPT uses a collection of Large Language Models (LLMs), which are numbered according to how advanced they are – the free web version currently uses GPT-3.5. These models are trained on all sorts of sources, including the web, books, social media and more. The resulting language dataset comprises hundreds of billions of words. 

The free version of ChatGPT is based on data collection that finished in early 2022, so it does not 'know' anything about the world after that time. There’s also a premium version, ChatGPT Plus (which costs $20 a month), that gives access to the more advanced GPT-4 model. 

A ChatGPT-like tool is also available to people who use Microsoft’s Bing search engine. Its responses are based on GPT-4’s pre-trained model, combined with more up-to-the-minute information about the world right now, and include clickable citations. Google has a similar chat-based search tool called Bard, which most people can now sign up to use.

Stay secure: Read our guide on how to protect your smart home from hackers

Differences between GPT-4 and ChatGPT

ChatGPT Plus uses GPT-4, which is simply a more advanced model than the GPT-3.5 that powers the free version of ChatGPT. It was trained on a much larger data set that also included images.

It’s better at inferring meaning from complicated commands, and it can combine visual and text inputs, producing a response based on both together.

One popular example is sketching a website, taking a photo of the sketch and then feeding that into GPT-4, asking it to create the code for the website based on the sketch alone. This is a step forward from the still-very-impressive skill of creating a website from a text-based brief, or fixing programming code. 

The company also claims that GPT-4 ‘hallucinates significantly less’ and is ‘40% more likely’ than GPT-3.5 to produce a factual response. While this should result in fewer answers that are outright false or irrelevant to the question posed, it doesn’t mean you can simply trust everything it says. The company also says it has made the product ‘safer’, making it less likely to provide answers on banned topics.

Can I trust the information I get from ChatGPT?

In short, no. But in the same way that you might use different sources to kickstart a research project or to better understand what people are saying about a topic, ChatGPT and similar tools can help get you started and surface information you weren’t aware of. The main thing is not to use a chatbot as your primary source of information, but instead to take the answers it gives you and pursue them until you have found the real facts. 

It’s important to understand how ChatGPT comes up with its answers. Crudely put, ChatGPT is very good at placing one word after another. It can do this because it has 'learned' so much from the massive data-gathering exercises that form the basis of the model that powers it. As such, it does not 'know' anything at all; all it can do is put one word after another in a way that makes sense.
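For the curious, the idea of 'placing one word after another' can be illustrated with a deliberately crude toy sketch. This is not how ChatGPT is actually built – real LLMs use neural networks with billions of parameters, not simple word counts – but the principle of predicting the next word from the words that came before is the same:

```python
from collections import Counter, defaultdict

# Toy 'language model': count which word tends to follow which
# in a tiny training text, then generate by repeatedly picking
# the most common next word. It has no knowledge or facts -
# only statistics about word order.
training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break  # we've never seen this word followed by anything
        out.append(counts.most_common(1)[0][0])  # most likely next word
    return " ".join(out)

print(generate("the"))  # fluent-looking, but nothing is 'known'
```

The output reads like plausible English purely because those word sequences were common in the training text – which is also why such a system can sound confident while being completely wrong.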

It’s often accurate, but it can equally write utter nonsense. Its responses have been dubbed ‘confidently wrong’ by many, because the tone ChatGPT uses leaves no room for doubt, even when it’s talking rubbish. These tools also have a tendency to 'hallucinate', stating things that are patently false, such as claiming that the current year is 2022 or that it loves the user.

It does not know how to communicate a level of confidence in what it has written, and if you attempt to probe how it knows what it has told you – for instance by asking for a list of citations – it will simply produce a list of things that look like citations but may not be real references at all. 

Using a chatbot

Also keep in mind that the sources used to train its language model include social media platforms such as Reddit and Twitter. While these are fantastic resources for getting to grips with how people speak to each other conversationally, a portion of online content written by people is false, misleading or even harmful. Since some of what ChatGPT produces is based on what appears on these and similar platforms, this should give you some idea of the level of trust to give it. 

Other GPT-powered tools merit different levels of trust. For example, Bing’s chat tool provides citations and links to the sources of the facts it presents. That’s not to say the sources themselves are accurate, and citations can still be entirely ‘hallucinated’ (see above), as testing of Bing’s system has demonstrated.

Stay safe online with our up-to-the-minute advice on scams

What does ChatGPT do with the information I enter into it?

Ask ChatGPT this question and it will tell you that no information about what you enter is stored. This is indeed OpenAI’s policy, but the UK’s National Cyber Security Centre (NCSC) still advises that you should not enter sensitive information (such as personal details or company intellectual property) into chatbots, or perform queries that could be problematic if made public (for example, sharing your secrets and asking ChatGPT to solve a personal dilemma). 

As generative AI tools become more prevalent (see below) and are used by companies for specific customer-service purposes, the data you enter could be stored under the terms and conditions of the companies you’re communicating with. All well and good, but since GPT-powered chatbots can misunderstand, there is always a chance that the data these companies hold about you is incorrect, which could be a breach of the General Data Protection Regulation (GDPR). 

How and where are ChatGPT and other generative AI being used?

Anyone can use ChatGPT for themselves - visit the website, sign up and start experimenting. Just bear in mind the limitations given above on how it works and can be used. ChatGPT is a language model, so it can’t generate art or images like some AI engines. However, it can in theory process images and make recommendations. For example, it could scan an image of what’s left in your fridge and then recommend a recipe for dinner. 

As covered above, the highest profile public use of ChatGPT so far is in Microsoft’s Bing search engine and Edge browser. This follows Microsoft’s multi-billion dollar investment in the technology. Bing Chat enables you to ask questions and look up terms, just as you would with normal web search. It will - in theory - give a more human and useful response. 

Various companies are experimenting with how to use ChatGPT in their services - for example, giving more personalised recommendations on retail websites. The bulk of uses, though, are behind the scenes, often using the tool to process vast amounts of data to do anything from improving efficiency to combating fraud. 

There are practical ways you can use it, too. Something ChatGPT does very well is explain things in clear and simple terms, so if you’re looking to get a message across, it can be a great way to do this – one York student, for example, had a parking fine overturned after using ChatGPT to lay out the details of their appeal. You could also use it as a starting point for broaching a tricky issue, such as a dispute with a neighbour. Just remember that the information ChatGPT generates should be treated as ‘research’, or as a guide, and in most cases should still be checked and validated before it is shared or used.

On the negative side, there are various ways in which this technology can be used to create convincing content that is fake, misleading or even used for scams. It’s still early days, but evidence has already been uncovered of huge numbers of ‘content farms’ using entirely AI-generated content to lure people to their sites and gain as much revenue as possible from advertising. Content farms have always existed, of course, but without real human intervention it is now possible to produce convincing text on an enormous scale at very low cost. 

ChatGPT alternatives

If you’re looking for a helpful AI-powered tool there are various specialised products that could help with your daily work tasks. Below are a few interesting examples:

  • Otter.ai: This tool transcribes meetings, and also claims to produce summaries of them for you to refer back to. Available at otter.ai, free for up to 300 minutes of meetings per month, with each meeting lasting a maximum of 30 minutes.
  • Bing Chat: Microsoft’s search engine was infused with AI earlier this year, and provides easily digestible search results, with citations, based on your queries. Available for free in Microsoft Edge apps.
  • Midjourney: This tool creates distinctive images from text prompts, ranging from the hyper-realistic to the downright bizarre and unnerving. You’ll need a Discord account for this. If you have one, you can go to midjourney.com and click the ‘join the beta’ button.
  • Google Bard: Most people can now sign up to Google’s chat/search tool for free. While it is made by Google, it doesn’t bear much resemblance to the search engine and, unlike Bing Chat, it doesn’t cite its sources. Available on the Google Bard website.

Is there any regulation for AI like ChatGPT?

There is no specific regulation for generative AI tools like ChatGPT, but other laws could well be applied to the responses it produces. 

For example, if ChatGPT generates text that is largely similar to a copyrighted source, it could be in breach of copyright law. In addition, there are already examples of ChatGPT defaming individuals, stating they were involved in crimes they did not commit. This could result in legal action for libel. 

There are undoubtedly many untested cases of where a chatbot could be in breach of the law. For example, if a retailer were to employ the services of a chatbot to provide recommendations for a washing machine with certain features, and then that machine did not have those features, there could be a consumer law case for the retailer to answer. 

AI like ChatGPT has to obey data protection rules on the use and storage of personal data. Indeed, Italy’s data protection regulator was quick off the mark to ban ChatGPT on data protection grounds, to prevent personal data being stored and then shared back to other users. The ban was subsequently lifted after OpenAI “addressed or clarified” the issues that the regulator had raised.