
OpenAI scientist: hallucinations are an inherent characteristic of large models, not a defect

2024-05-23

Xin Zhiyuan reports

Editors: Peach, Run

[Xin Zhiyuan Guide] Large models are "dream machines"! Hallucination is an innate characteristic of LLMs, not a defect. This unique take from OpenAI scientist Andrej Karpathy has sparked heated discussion in the AI community.

Hallucination has long been a well-worn topic with LLMs.

But this morning, OpenAI scientist Andrej Karpathy's explanation of large-model hallucination was startling enough to spark a heated discussion.

In Karpathy's view:

In a sense, the entire job of a large language model is to create hallucinations: the large model is a "dream machine."

Another line of Karpathy's is regarded as a classic by many. He argues that the opposite extreme of the large model is the search engine.

"The large model is 100% dreaming, so it has a hallucination problem. The search engine does not dream at all, so it has a creativity problem."

In short, LLMs themselves have no "hallucination problem." Hallucination is not a bug but the defining feature of LLMs; only LLM assistants have a hallucination problem.

Jim Fan, a senior scientist at Nvidia, shared his view: "The fascinating thing is that the best LLMs can decide when to dream and when not to by switching into 'tool use mode.' Web search is one such tool. An LLM can dynamically adjust its 'dream %' hyperparameter. GPT-4 tries to do this, but it is far from perfect."
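The idea Jim Fan describes can be caricatured in a few lines of code. The sketch below is purely illustrative and assumes nothing about how GPT-4 actually routes queries: real systems use learned routing, not keyword lists, and the names here (`dream_fraction`, `route`, `FACTUAL_CUES`) are invented for this example.

```python
# Toy illustration: a hypothetical assistant that decides when to "dream"
# (free-form generation) and when to switch into tool-use mode (e.g. web
# search). The "dream %" knob is stood in for by a crude keyword heuristic.

FACTUAL_CUES = ("who", "when", "what year", "how many", "price", "latest")

def dream_fraction(query: str) -> float:
    """Crude stand-in for a learned 'dream %' hyperparameter:
    factual-sounding queries get a low value, open-ended ones a high value."""
    q = query.lower()
    return 0.1 if any(cue in q for cue in FACTUAL_CUES) else 0.9

def route(query: str) -> str:
    """Pick a mode: ground the answer with a tool, or let the model dream."""
    return "web_search" if dream_fraction(query) < 0.5 else "generate"

print(route("When was GPT-4 released?"))   # -> web_search
print(route("Write a poem about dreams"))  # -> generate
```

A real assistant would make this decision with the model itself (e.g. via tool-calling), but the shape of the trade-off is the same: low "dream %" for factual lookups, high for creative requests.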

Subbarao Kambhampati, a professor at Arizona State University, also replied to Karpathy:

LLMs hallucinate all the time; it is just that sometimes their hallucinations happen to coincide with your reality.

Whether a questioner can bring those hallucinations in line with their own reality depends largely on the questioner's own ability to scrutinize the generated content.

On this understanding, he argues that all attempts to anthropomorphize the capabilities of LLMs are mere wishful thinking, and that imposing anthropomorphic concepts such as thinking, reasoning, and self-criticism on LLMs is futile.

Once the nature of LLM capability is recognized, humans should treat it as a "corrective device that supplements human cognition," not a potential tool to replace human intelligence.

Of course, a discussion like this is never without Elon Musk, who chimed in: "Life is just a dream."

It feels like his next sentence would be that we are all just living in a Matrix-style simulation 😂😂.

Karpathy: LLMs have no "hallucination problem"; LLM assistants do

Hallucination has long been a criticism leveled at large models. What exactly does Karpathy think of it?

We direct these "dreams" with prompts. The prompt starts the dream, and the large language model, drawing on a fuzzy recollection of its training documents, usually steers the dream in a valuable direction.

Only when these dreams wander into territory deemed factually incorrect do we call them "hallucinations." That looks like a bug, but it is really just the LLM doing what it always does.

Consider the opposite extreme: a search engine. Given an input prompt, it returns the most similar "training document" in its database, word for word. You could say this search engine has a "creativity problem": it will never produce a new response.

"The large model is 100% dreaming, so it has a hallucination problem. The search engine does not dream at all, so it has a creativity problem."

That said, I realize what people actually mean: they do not want an LLM assistant (a product like ChatGPT) to hallucinate. An LLM assistant is far more complex than the bare language model, even though the language model sits at its core.

There are many ways to mitigate hallucination in AI systems: retrieval-augmented generation (RAG), which anchors the dream more firmly in real data, is probably the most common. Others, such as inconsistencies across multiple samples, reflection, chains of verification, decoding uncertainty from activations, and tool use, are active and interesting research areas.
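To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-generate pattern. It is an assumption-laden toy: the retriever is bag-of-words overlap (production systems use dense vector embeddings), the documents are invented, and the helper names (`retrieve`, `build_prompt`) are made up for illustration.

```python
# Minimal sketch of retrieval-augmented generation (RAG): before the model
# "dreams", retrieve real documents and put them into the prompt so the dream
# is anchored to data rather than fuzzy training-time memory.

def overlap(a: str, b: str) -> int:
    """Count shared lowercase words between two strings (toy relevance score)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    return sorted(docs, key=lambda d: overlap(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Star Wars is a science fiction film released in 1977.",
    "The Godfather is a crime film released in 1972.",
]
print(build_prompt("What kind of film is Star Wars?", docs))
```

The grounded prompt would then be sent to the language model; the key point is that the "dream" now starts from retrieved facts instead of a blank page.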

In short, nitpicking aside, LLMs themselves have no "hallucination problem." Hallucination is not a defect; it is the LLM's greatest feature. It is the LLM assistant that truly needs its hallucination problem solved, and we should get on with solving it.

LLMs are dream machines: stop the wishful "anthropomorphizing"

Subbarao Kambhampati, an AI scientist and professor at Arizona State University, summarized his research in a long post on X.

He argues that generating divergent cognition (including hallucination) is the essential capability of LLMs, so we should not hold overly idealized expectations of them.

Link: https://twitter.com/rao2z/status/1718714731052384262

In his view, humans should see LLMs as a powerful cognitive "simulator" rather than a substitute for human intelligence.

An LLM is essentially an astonishingly large bank of external, non-veridical memory that, used properly, can serve as a powerful cognitive "simulator" for humans.

For humans, the key to realizing the value of LLMs is using them effectively, not continually deceiving ourselves with anthropomorphic framing along the way.

The biggest illusion about LLMs is that we keep confusing them with human intelligence and try to apply anthropomorphic concepts such as thinking, reasoning, and self-criticism to them.

This anthropomorphizing is quite futile and, as many studies have shown, can even be counterproductive and misleading.

On the other hand, if we drop the single-minded goal of "building a human-level AI system out of LLMs," we need not complain every day about how bad autoregressive LLMs are (as Professor LeCun does).

An LLM is a "simulator" that supplements cognition very effectively; it does not inherently contain human intelligence.

LLMs are far better than humans at certain things, such as rapid generalization and summarization.

But at many other things, such as planning, reasoning, and self-criticism, they are far worse than humans.

What humanity really needs may be:

1. Take full advantage of LLMs, for example by adding humans or other reasoning tools to the LLM product architecture to amplify LLMs' strengths.

2. To the extent that human-level intelligence is still the holy grail worth pursuing, keep research approaches open rather than merely stacking compute and scaling up autoregressive architectures.

Where does large-model hallucination come from?

Some time ago, an organization called Vectara published a large-model hallucination leaderboard on GitHub.

The results showed GPT-4 to be the best at summarizing short documents, while Google's two PaLM models sat at the very bottom.

Specifically, GPT-4 scored 97.0% accuracy, a 3.0% hallucination rate, and a 100.0% answer rate; Palm-Chat 2 scored 72.8% accuracy, a 27.2% hallucination rate, and an 88.8% answer rate.
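Note how the numbers fit together: accuracy and hallucination rate are complements (97.0 + 3.0 = 100, 72.8 + 27.2 = 100), while the answer rate counts refusals separately. The sketch below shows this arithmetic on invented toy data; it is only an assumption about the metric definitions, not Vectara's actual evaluation code (which uses a judge model to label each summary as factually consistent or not).

```python
# Sketch of Vectara-style leaderboard arithmetic. Each model summarizes a set
# of documents; each result is (answered, consistent). Accuracy is measured
# over answered documents only, and hallucination rate is its complement.

def scoreboard(results: list[tuple[bool, bool]]) -> dict[str, float]:
    """results: one (answered, consistent) pair per document."""
    answered = [consistent for did_answer, consistent in results if did_answer]
    acc = 100 * sum(answered) / len(answered)
    return {
        "answer_rate": 100 * len(answered) / len(results),
        "accuracy": acc,
        "hallucination_rate": 100 - acc,
    }

# Toy data: 10 documents, 9 answered, 1 of the 9 answers inconsistent.
toy = [(True, True)] * 8 + [(True, False)] + [(False, False)]
print(scoreboard(toy))  # answer_rate 90.0, accuracy ~88.9, hallucination ~11.1
```

Measuring accuracy only over answered documents is what lets a cautious model trade answer rate against hallucination rate, which is one reason such leaderboards draw criticism.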

However, as soon as the list came out, it was questioned by many people in the industry.

John Schulman, OpenAI co-founder and researcher, discussed the hallucination problem in a lecture titled "RL and Truthfulness: Towards TruthGPT."

According to Schulman, hallucinations can be roughly divided into two types:

- Wrong guesses: the model simply guesses incorrectly.

- Pattern completion behavior: the language model cannot express its own uncertainty, cannot question a premise in the prompt, or continues earlier mistakes.

A language model represents, inside its own network, a knowledge graph containing the facts from its training data, so "fine-tuning" can be understood as learning a function that operates on that knowledge graph and outputs token predictions.

For example, a fine-tuning dataset might contain the question "What kind of movie is Star Wars?" and the answer "science fiction."

If this information is already in the original training data, that is, already part of the knowledge graph, the model does not learn new information; it learns a behavior: outputting the correct answer. This kind of fine-tuning is known as "behavioral cloning."

But if the answer is not part of the original training data (and not part of the knowledge graph), the network learns to answer anyway, even though it does not know the answer.

Fine-tuning on answers that are factually correct but absent from the knowledge graph thus teaches the network to fabricate answers, that is, to "hallucinate." Conversely, training with incorrect answers teaches the network to withhold information.
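Schulman's argument can be caricatured with a toy model. The sketch below treats the network's "knowledge graph" as a set of (subject, relation) facts absorbed during pretraining and classifies what each fine-tuning example teaches. Everything here (the facts, the `lesson` helper, the labels) is invented for illustration; it is a model of the argument, not of any real training pipeline.

```python
# Toy model of Schulman's point: a fine-tuning example whose answer the network
# already "knows" reinforces the behavior of stating known facts (behavioral
# cloning); a correct answer the network does NOT know instead rewards
# producing a confident answer without knowledge, i.e. teaches hallucination.

knowledge_graph = {
    ("Star Wars", "genre"): "science fiction",
    ("The Godfather", "genre"): "crime",
}

def lesson(subject: str, relation: str, target_answer: str) -> str:
    """Classify what supervising on (subject, relation) -> target_answer teaches."""
    known = knowledge_graph.get((subject, relation))
    if known == target_answer:
        return "behavioral cloning"      # reinforces stating a known fact
    return "incentive to hallucinate"    # rewards answering without knowledge

print(lesson("Star Wars", "genre", "science fiction"))  # behavioral cloning
print(lesson("Dune", "genre", "science fiction"))       # incentive to hallucinate
```

The second call shows the failure mode: the label is factually correct, but because the fact is absent from the toy knowledge graph, training on it rewards confident fabrication.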

Netizens weigh in

"LLM is a dream machine." What a wonderful sentence. If we humans also live in a dream constructed by creators in higher dimensions, could LLMs one day recursively create dream machines of their own?

Another netizen shared what happened when he asked ChatGPT to attach a confidence score to its reply: ChatGPT gave it 90% and explained why:

The confidence I provided at the end of the reply is an estimate, reflecting the extent to which I think the reply is consistent with your inquiry based on the context and information I have. It is not calculated through precise algorithms or formulas, but based on the complexity of the problem, the clarity of the information provided, and my subjective assessment of the accuracy and relevance of the information available.

In fact, the discussion of hallucination is at bottom a discussion of "truth," and perhaps there is no absolutely objective "truth."

It is not always necessary to search for facts when the required data is already on hand. The challenge for LLMs, however, is their learning process, which typically involves "lossy" data compression. Solving this problem, that is, reducing data loss, could not only ease the hallucination problem but also shrink the need for massive training datasets.

In fact, our ultimate goal is not merely to build more LLMs but to achieve real intelligence. LLMs offer a glimpse of it through language understanding, but their fundamental flaw is that the learning process damages data integrity. An ideal architecture would preserve the data through learning, stay faithful to the original information, and still let the model develop and improve its intelligence. I suspect such an architecture might involve replicating data rather than compressing it.

Every LLM is an unreliable narrator, and by the nature of its architecture this is irreversible.

Do you agree with Karpathy's explanation of large-model hallucination?

Reference:

https://twitter.com/karpathy/status/1733299213503787018

https://twitter.com/DrJimFan/status/1733308471523627089

This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).
