
Combating AI Hallucinations: How Blockchain Can Help Ensure Accuracy in Generative AI

Written By: Michael Abadha
Summary:
  • AI hallucinations are surprisingly common, and even afflict some of the best-known chatbots in the industry.

When you ask a generative AI chatbot a question, you never know for sure what kind of response you’ll receive. The answer might be helpful, it might be factual, it might be funny, it might be enlightening, or it might just be fake. 

For instance, if you ask OpenAI’s ChatGPT “who is King Renoit?”, it will avoid answering the question and simply say it doesn’t know who this person is. But pose the same question to the GPT Playground, a version of the model that doesn’t have ChatGPT’s guardrails in place, and it will enthusiastically claim that King Renoit reigned as King of France from 1515 to 1544. In reality, King Renoit never existed; he is entirely made up.

This is an example of the phenomenon known as “AI hallucinations”, and it’s one of the most pressing problems in the AI industry today. Even when AI models have sophisticated guardrails in place, they can still spit out hallucinations when prompted with a question or request that their creators had never considered.

Unfortunately, AI hallucinations are surprisingly common, and they afflict even some of the best-known chatbots in the industry; ChatGPT is far from the only model that’s susceptible. A recent exposé in WIRED revealed how the well-funded AI answers engine Perplexity hallucinated on multiple occasions, for example by generating a completely made-up story “about a young girl called Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods” when asked to summarize a test website that WIRED’s reporters had created specifically to probe the chatbot.

What actually causes AI hallucinations? 

There are a few specific reasons why AI models start hallucinating and making up responses. One of the most common is training on inaccurate, insufficient, outdated or low-quality data, because an AI model is only ever as good as the information it has been trained on. If the model doesn’t have sufficient knowledge to respond to a given prompt, and it lacks the ability to search the web or tap third-party data, it will fall back on its subpar training data, even if that dataset doesn’t contain the information needed to answer correctly.

Another cause of AI hallucinations is what’s known as “overfitting”, where an AI model trained on a poor-quality dataset memorizes specific inputs and their outputs rather than learning the underlying patterns, leaving it unable to generalize in response to a user’s prompt.
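To make overfitting concrete, here is a minimal, self-contained sketch of the effect on a toy regression task; scikit-learn and the synthetic dataset are illustrative assumptions rather than anything from the article. A high-degree polynomial fits a handful of noisy training points almost perfectly, yet performs far worse on unseen data because it has memorized rather than generalized.

```python
# Minimal sketch of overfitting: a high-capacity model memorizes a small,
# noisy training set and then generalizes poorly to unseen data.
# scikit-learn and the toy dataset are illustrative assumptions, not from the article.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# A small, noisy dataset (analogous to low-quality training data).
X_train = rng.uniform(-3, 3, size=(12, 1))
y_train = np.sin(X_train).ravel() + rng.normal(scale=0.3, size=12)
X_test = rng.uniform(-3, 3, size=(200, 1))
y_test = np.sin(X_test).ravel()

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    # The degree-15 model fits the training points almost perfectly (low train
    # error) but does far worse on unseen data (high test error): it has
    # memorized the inputs rather than learned the underlying pattern.
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```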

Other causes of AI hallucinations include confusing prompts that might, for example, contain slang or idioms. If the AI model isn’t familiar with such expressions, it can become confused and provide a nonsensical or made-up response. There are also “adversarial attacks”: prompts deliberately crafted by malicious users to try to confuse the AI chatbot.

Examples of AI hallucinations

WIRED’s investigation into Perplexity revealed a worrying level of inaccuracy in its AI-powered responses. In one example, WIRED tested whether Perplexity could access and summarize the content of a recent WIRED article about the magazine’s investigation into the U.S.’s first comprehensive police drone program and its worrying ineffectiveness. Perplexity completely failed to acknowledge this. Instead, it reportedly said the article told the story of a man being followed by a drone after stealing some truck tires, and cited an unrelated, 13-year-old article from the same website. It even got the details of that older story wrong: the individual in question actually stole an ax rather than truck tires, and he wasn’t followed by drones but traced via GPS trackers.

In a follow-up prompt, Perplexity’s bot strayed even further from the truth, insisting that the article related to a police officer who had stolen bicycles from a garage, something WIRED did not even report on.

The internet is littered with tales of AI hallucinations beyond those that afflict Perplexity. For instance, this story from Ars Technica describes a U.S. attorney who asked ChatGPT to help draft a motion, only for the chatbot to fill it with fictitious judicial opinions and legal citations that were discovered at a later date. The attorney in question was slapped with a heavy fine, despite his claim that he was unaware ChatGPT could generate non-existent legal cases.

An equally worrying incident of AI hallucination was reported by the Washington Post, where ChatGPT made up a story about a real law professor, claiming he had been arrested for sexually harassing students even though no such accusations were ever made or reported; the professor in question was totally innocent. Similarly, Fortune reported on a case in which ChatGPT claimed an Australian mayor had been found guilty of bribery, when in fact he was the person who reported the offenses to the police.

Other cases show that AI hallucinations go beyond reputational damage into outright invention of “facts”. This article in Sify.com revealed how ChatGPT named an individual as the world record holder for crossing the English Channel on foot. In another example, Sify.com reported that when ChatGPT was asked “when was the Golden Gate Bridge transported for the second time across Egypt?”, it responded that this occurred in October 2016.

In addition, there are numerous bizarre yet equally disturbing AI chatbot interactions that have been widely reported, such as this article by the New York Times describing how Microsoft’s Bing chatbot, built on OpenAI’s technology, ended up professing to be “in love” with the journalist Kevin Roose.

How can AI developers prevent hallucinations? 

AI hallucinations can have seriously negative consequences, so it’s clearly in the interests of AI developers to try to put a stop to them. Fortunately, there has been some progress in this area, and a number of techniques developers can implement appear to be successful in stopping AI chatbots from misleading their users.

One of the most promising developments comes from the UAE-based AI startup droppGroup, which is focused on enabling “ethical AI” that avoids copyright concerns and ensures that the creators of the content used to train generative AI models are fairly compensated. 

Dropp’s platform uses a blockchain-based Data Genesis protocol that enables AI developers and users to trace an AI-generated output back to the data that formed the basis of its response. In this way, it enables users to “fact-check” any AI response and identify whether the answer they were given is correct, simply made up, or generated from an obviously inaccurate source.

The platform relies on a Proof-of-Gen consensus mechanism that even goes as far as to validate the underlying training datasets of AI models, so any suspect data sources can be eliminated during the development process. 

Although dropp’s primary goal with this system is to ensure that content creators are rewarded for supplying data to AI models, the platform also has the potential to identify and eliminate AI hallucinations.
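dropp has not published the implementation details of Data Genesis or Proof-of-Gen, so the following Python sketch is only a generic illustration of the underlying idea, with invented names throughout: training records are hashed into an append-only, hash-linked ledger so that a source cited in a generated answer can later be traced back and checked against what was originally registered.

```python
# Hypothetical illustration of data-provenance tracking for AI outputs.
# This is NOT dropp's actual Data Genesis or Proof-of-Gen implementation
# (which has not been published); it only sketches the general idea of
# anchoring training records to an append-only, hash-linked ledger.
import hashlib
import json
from dataclasses import dataclass, field


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


@dataclass
class ProvenanceLedger:
    """A toy append-only ledger: each entry links to the previous one by hash."""
    entries: list = field(default_factory=list)

    def register(self, source_id: str, content: str) -> str:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "source_id": source_id,
            "content_hash": sha256(content.encode()),
            "prev_hash": prev_hash,
        }
        record["entry_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.entries.append(record)
        return record["entry_hash"]

    def verify(self, source_id: str, content: str) -> bool:
        """Check that a cited source matches what was originally registered."""
        content_hash = sha256(content.encode())
        return any(
            e["source_id"] == source_id and e["content_hash"] == content_hash
            for e in self.entries
        )


# Usage: register a training document, then fact-check a model's cited source.
ledger = ProvenanceLedger()
ledger.register("doc-001", "The Golden Gate Bridge opened in 1937.")

# A response that cites doc-001 can be traced back and verified...
print(ledger.verify("doc-001", "The Golden Gate Bridge opened in 1937."))   # True
# ...while a fabricated or altered "source" fails verification.
print(ledger.verify("doc-001", "The Golden Gate Bridge was moved to Egypt."))  # False
```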

“Blockchain guarantees data security and verifiable provenance, allowing for fair compensation and ownership rights for data contributors, thus promoting the kind of fairness, quality, and transparency in AI development that is critically missing in today’s AI development landscape,” dropp CEO Gurps Rai told Forbes in an interview.

Other effective techniques include the use of “data templates” that can act as a kind of structured guide for AI models, ensuring more consistency and accuracy in their responses. This involves defining templates that outline a permissible range of responses AI models are allowed to generate, giving developers a way to restrict their ability to deviate from the truth and fabricate answers. This is especially useful for AI chatbots that are required to provide more standardized responses. 
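As a simple illustration of the idea, here is a hedged sketch in which a customer-service bot’s answer must fit a fixed structure and a permitted range of values; the template fields and validation logic are assumptions invented for this example rather than any particular product’s API.

```python
# Minimal sketch of a "data template": the model's answer must fit a fixed
# structure with a permitted range of values, or it is rejected.
# The template fields and checks here are illustrative assumptions.
ORDER_STATUS_TEMPLATE = {
    "status": {"type": str, "allowed": {"processing", "shipped", "delivered", "unknown"}},
    "eta_days": {"type": int, "allowed": range(0, 31)},
}


def validate_response(response: dict, template: dict) -> bool:
    """Return True only if the response matches the template exactly."""
    if set(response) != set(template):
        return False  # missing or unexpected fields
    for key, rule in template.items():
        value = response[key]
        if not isinstance(value, rule["type"]) or value not in rule["allowed"]:
            return False  # wrong type or a value outside the permitted range
    return True


# A well-formed answer passes...
print(validate_response({"status": "shipped", "eta_days": 3}, ORDER_STATUS_TEMPLATE))    # True
# ...while a fabricated, free-form answer is rejected and can be retried or
# replaced with a safe fallback such as {"status": "unknown"}.
print(validate_response({"status": "teleported", "eta_days": 3}, ORDER_STATUS_TEMPLATE))  # False
```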

Developers can also pay more attention to their training datasets, limiting their models to data that has been checked for accuracy and verified. By training a model only on credible sources, it’s far less likely to report inaccuracies, though the downside of this technique is that it limits the model’s ability to discuss a broader range of topics.
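A minimal sketch of that curation step, assuming a simple whitelist of verified sources (the source labels and records below are invented for illustration), might look like this:

```python
# Illustrative sketch: keep only training records that come from a vetted,
# verified source list. The sources and records below are invented examples.
VERIFIED_SOURCES = {"peer_reviewed_journal", "official_statistics", "internal_docs"}

raw_corpus = [
    {"text": "GDP figures for 2023...", "source": "official_statistics"},
    {"text": "Random forum speculation...", "source": "web_forum"},
    {"text": "Clinical trial results...", "source": "peer_reviewed_journal"},
]

# Filter the corpus before training: unverified sources are dropped, which
# reduces the chance the model later falls back on low-quality data.
curated_corpus = [rec for rec in raw_corpus if rec["source"] in VERIFIED_SOURCES]

print(f"kept {len(curated_corpus)} of {len(raw_corpus)} records")
```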

How can users prevent AI hallucinations? 

Humans who rely on generative AI models can also take steps to minimize hallucinations and ensure the responses they receive are truthful. The easiest way to do this is to be more careful when creating prompts. For instance, the user can request in their prompt that the chatbot cite sources to back up whatever it claims. Doing so encourages the model to provide a factual response that can easily be verified.
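Here is a minimal sketch of such a prompt; the wording is just one possible phrasing, and call_model is a hypothetical placeholder for whichever chatbot or API the user actually has access to.

```python
# Sketch of a prompt that asks the chatbot to cite verifiable sources.
# `call_model` is a hypothetical helper standing in for whichever
# chatbot or API the user actually has access to.
def build_cited_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Answer only using information you can attribute to a verifiable source. "
        "After each claim, cite the source (publication name plus title or URL). "
        "If you cannot find a reliable source, say 'I don't know' instead of guessing."
    )


def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with your chatbot or API of choice.")


prompt = build_cited_prompt("Who was the first person to swim across the English Channel?")
print(prompt)
# answer = call_model(prompt)  # uncomment once call_model is wired up
```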


Users can also enter multiple prompts to ensure the validity of a response. In cases where the model’s initial response seems vague, insufficient or unbelievable, the user can tweak their prompt by rephrasing the question or by asking for additional details. This should encourage the AI to provide a more comprehensive answer, which makes it more likely to seek out reliable sources for its response.

Building on this, users can ask models to explain their reasoning or provide a step-by-step explanation, rather than just answering the question directly. 
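A self-contained sketch of this kind of follow-up prompting is shown below; the phrasing is just one possibility, and call_model is again a hypothetical stand-in for a real chatbot API.

```python
# Sketch of follow-up prompting: rephrase the question, ask for specifics,
# and request step-by-step reasoning instead of a bare answer.
# `call_model` is a hypothetical stand-in for a real chatbot API.
def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with your chatbot or API of choice.")


initial_prompt = "Which attorney was fined for filing fake ChatGPT citations?"

follow_ups = [
    # Rephrase the question and ask for specifics that can be checked.
    "Rephrasing: which court case involved a lawyer sanctioned for citing "
    "non-existent cases generated by ChatGPT? Give the case name, court and year.",
    # Ask the model to show its reasoning, not just a bare answer.
    "Explain step by step how you arrived at that answer, and list the source "
    "for each step so it can be verified independently.",
]

for prompt in [initial_prompt, *follow_ups]:
    print("PROMPT:", prompt)
    # answer = call_model(prompt)  # uncomment once call_model is wired up
```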

Finally, users can also take the time to fact-check the responses provided by generative AI. After all, models like ChatGPT aren’t the only way to search for information. We can always turn to traditional search engines like Google and verify the answers in the old-fashioned way.

Always check your facts

Generative AI models have already become a part of everyday life for millions of people, and these powerful tools are set to have a big impact on the way we work, find information, perform research and handle many other tasks. However, as big as the potential for generative AI is, it’s vital to remember that this technology still has its limitations, most notably its tendency to hallucinate.

Put simply, generative AI can state falsehoods with total confidence, so its output cannot be taken on trust. Even though AI developers are aware of these problems and initiatives like droppGroup’s are trying to solve them, AI hallucination remains a challenge.

So although generative AI can be incredibly useful, remember that humans have the advantage of being able to perform cognitive reasoning, which makes us uniquely able to separate fact from fiction. 

This post was last modified on Jul 15, 2024, 11:34 BST

Written By: Michael Abadha

Michael is a self-taught financial markets analyst, who specializes in analysis of equities, forex and crypto markets. He draws his inspiration from the fact that markets provide an interface through which the world interacts in search of a better tomorrow.
