Generative AI

The American writer Richard Powers’ latest novel, Playground, delves into the complexities of artificial intelligence, mirroring our current fascination with chatbots built on large language models, such as ChatGPT and Gemini. One passage from the novel, in which a character expresses scepticism about artificial intelligence (AI) and its overreliance on information processing, has sparked an intriguing discussion.

When this passage was shared on social media, a user noted that ChatGPT had wrongly attributed the quote to Dave Eggers’ The Circle. This revelation led to a deeper exploration of these AI models’ capabilities and limitations, writes James Gleick in the New York Review of Books.

‘Hallucinations’

Upon further investigation, it became clear that these language models, despite their impressive ability to generate human-like text, are prone to errors. They may provide incorrect information, a phenomenon known as “hallucination”. This occurs because the models learn patterns and correlations from the text they are trained on rather than developing true understanding.

The training data spans books, articles, blogs, and tweets, but the models do not store those texts in full. Instead, they retain statistical relationships between words and phrases. As the science-fiction writer Ted Chiang has aptly observed, the process resembles lossy digital compression: original detail is sacrificed for efficiency, and the result is a blur of plausibility without precision.
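To make that idea concrete, here is a minimal, hypothetical sketch in Python. It is not how ChatGPT or Gemini are actually built (they use neural networks trained at vastly greater scale); it only illustrates the principle that a model can keep word-pair statistics from its training text, discard the text itself, and still generate fluent-sounding output.

```python
import random
from collections import defaultdict

# Toy "training" text. A real model would see billions of words.
training_text = (
    "the model retains statistical relationships between words "
    "the model does not store complete texts "
    "the model generates plausible text from learned patterns"
)

# Count how often each word follows another (a simple bigram table).
# After this step, only the counts are kept, not the original sentences.
follower_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def generate(start_word, length=8):
    """Generate text by repeatedly picking a statistically likely next word."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = follower_counts.get(word)
        if not followers:
            break
        # Pick the next word in proportion to how often it followed this one.
        candidates = list(followers.keys())
        weights = list(followers.values())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

The output reads plausibly because it follows the learned word statistics, yet nothing in the process consults, or even remembers, the source text it came from.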

How AI responds to questions

When you ask a question, the AI doesn’t search a database for facts. Instead, it constructs a response that sounds plausible. If the training data includes similar phrases linked to certain authors or themes, the AI might “guess” an answer—but guessing is all it’s doing, says Gleick.
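A hedged illustration of that guessing, in the same toy Python style as above: the author names and association scores below are invented for the example. The point is structural; the model returns whichever answer is most strongly associated with the wording of the question, and no step checks whether that answer is true.

```python
# Hypothetical scores: how strongly the phrasing of a quote resembles text
# associated with each author in the toy "training" data.
# There is no lookup of the real source and no fact-checking step.
association_scores = {
    "Richard Powers": 0.41,
    "Dave Eggers": 0.47,  # slightly higher, so the model "guesses" this author
    "Ted Chiang": 0.12,
}

def attribute_quote(scores):
    """Return the most plausible-sounding attribution, right or wrong."""
    return max(scores, key=scores.get)

print(attribute_quote(association_scores))  # prints a confident, possibly wrong answer
```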

The AI’s apparent accuracy comes only from training on human-created content that happened to be truthful.

Many users trust these AI models implicitly, citing them as authoritative sources. This is a grave mistake. AI chatbots simulate expertise but cannot verify facts. As a result, they can amplify misinformation, making them ideal tools for disinformation campaigns.

AI models are not oracles

It’s crucial to remember that these models are sophisticated mimics, not omniscient oracles.

Artificial intelligence holds extraordinary promise, but it has limitations. As we integrate AI tools into our daily lives, we must approach them with scepticism, remembering that they are statistical pattern-matchers, not sources of truth.

The line between plausible and accurate matters now more than ever. The future of AI depends on whether we demand tools that prioritise veracity over mere verisimilitude. Until then, we must remain vigilant.

As AI technology continues to advance, it is essential to maintain a healthy scepticism and to critically evaluate the information these tools provide. By understanding the limitations of these models, we can harness their potential while mitigating the risks associated with their misuse.