GPT-4 is under fire from netizens again for being "too lazy."
A user wanted to build an Android app that interacts with the OpenAI API in real time, so they sent GPT-4 a link to a method example and asked it to write the code in Kotlin.
Unexpectedly, no matter how long the back-and-forth went on, GPT-4 never produced complete, working code.
Instead, it kept explaining "what should be done."
This so annoyed the user that they tweeted: "Code it could write two weeks ago, it can no longer write."
At that, even more netizens erupted:
Finally, someone's looking into it.
One after another, people said they had run into the same problem:
According to these users, the behavior seems to date from GPT-4's big update on November 6.
OpenAI employees have since responded, saying the problem has been reported to the team.
Just the code, the complete code!
No wonder the user "lost it." After they sent GPT-4 the method example link and asked it to write the code in Kotlin,
GPT-4 responded like this, listing seven steps, all of them explaining "what to do":
Only at the very end did it produce any code, and even that was just a bare-bones "template":
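For context, the "complete code" being asked for is roughly a single call to OpenAI's public chat-completions endpoint. The user's actual request was Kotlin for Android; as a language-neutral sketch (not the user's code, and assuming the standard `https://api.openai.com/v1/chat/completions` endpoint with an `OPENAI_API_KEY` environment variable), a minimal complete request looks like this:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4") -> urllib.request.Request:
    """Assemble the HTTP request for a single-turn chat completion."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

def complete(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

The point of the complaint is that the whole working call fits in a couple of dozen lines, yet GPT-4 kept stopping at the prose description.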
The user was patient at first, telling it: "No need to explain, just give me the code — the complete code, code that runs 100% correctly":
GPT-4 nonetheless launched into more explanation and examples:
Exasperated, the user cut it off and stressed once more: "Don't explain. Give me the code":
This time GPT-4 seemed to get it — it tweaked the earlier template slightly and sent it back:
And that was the scene that left the user no choice but to post their complaint.
Reacting to GPT-4's replies, netizens wailed: what have they done to you? Sorry you've been nerfed.
GPT-4, for its part, looks entirely innocent 🥺.
Among the chorus of complaints, some even said they had stopped using ChatGPT altogether.
@kache (yacine), CEO of the AI image editor dingboard, had posted a complaint the day before that drew 157,000 views:
For the past week and a half, I've been hand-writing "childish" code because GPT-4 won't follow instructions.
Coincidentally, counting back "a week and a half" from that post lines up with the big update Altman announced.
kache (yacine) also posted an emotional plea — "Please give me back the old GPT-4":
Another netizen replied, "I feel you":
It used to make good guesses; now it gives me ten reasons why it can't.
Last week, the rate at which I yelled "just f**king do it!" into the chat box hit an all-time high.
For a while, GPT-4's "laziness" became the target of a collective "crusade" by netizens.
Ethan Mollick, a professor at Wharton, couldn't stand by any longer and ran his own tests — and the results seemed to bear the complaints out.
He repeated a series of analyses he had previously done with Code Interpreter.
GPT-4 knew what to do, but kept needing to be pushed to "get the job done." What used to be a single step splintered into many, some of them odd.
Mollick was left speechless.
So what happened to GPT-4? The cause is still unknown, though netizens have their theories.
Even after passing his feedback to OpenAI employees, Ethan Mollick remained rigorous, arguing that even this isn't enough to prove GPT-4 is getting dumber, and speculating it might be a temporary problem caused by system overload:
If you hit this problem on a phone (a mobile device), it may be because the mobile version's system prompt tells ChatGPT to generate shorter, more concise answers.
My tests were run on the web version.
There was also discussion on Reddit, where one post argued that "the new GPT-4 isn't lazy — we're just using it wrong":
The post points out that after the major update on November 6 of this month, the base version no longer ships with a custom prompt, so GPT-4 has no predefined "path" to guide its behavior.
That makes it very general-purpose, but by default its output is also somewhat "directionless."
One suggested fix is to use the custom GPTs feature introduced in the update and set up a dedicated GPT for each job.
Netizens also chimed in with tricks of their own:
One game-changer with the new GPT-4 is how much code it will explain at once. It can help to say explicitly, "Please write this test out in its entirety."
It also helps to state plainly, "Do not rewrite code that has already been written" — this saves tokens and lets the model focus on generating new output.
I've also found that adding a "step-by-step" prompt makes it put some planning text up front, which helps ground the subsequent output in context.
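The tricks above can also be baked into the request itself via a system message rather than repeated in every chat. A minimal sketch (the instruction wording is illustrative, not an official recipe; the message format follows OpenAI's chat API):

```python
def build_messages(task: str, existing_code: str = "") -> list[dict]:
    """Combine the shared prompting tricks into one system message,
    then append any existing code and the actual task."""
    system = (
        "Write all requested code out in its entirety. "       # trick 1: full output
        "Do not rewrite code that has already been written. "  # trick 2: save tokens
        "Think step by step and briefly plan before coding."   # trick 3: planning text
    )
    messages = [{"role": "system", "content": system}]
    if existing_code:
        # Provide prior code as context so the model extends rather than repeats it.
        messages.append({"role": "user", "content": f"Existing code:\n{existing_code}"})
    messages.append({"role": "user", "content": task})
    return messages
```

The resulting list drops straight into the `messages` field of a chat-completions request.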
Still, some netizens said that no matter what they do, it leaves "to-do items" behind:
One went so far as to say that GPT-4 now seems to have Alzheimer's:
OpenAI implies the new GPT-4 is great at following instructions, but that's just not the case.
I've used GPT-3 from the start, then 3.5, then 4, and I've never seen this degree of Alzheimer's.
OpenAI employees have also responded to the storm of complaints.
At first, netizens were asked to provide specific examples, with the promise that the team would investigate and could very likely fix the problems in the next model iteration.
That remark only drew more bug reports.
So will depue responded once again:
Thanks for the feedback — every example here helps us fix this faster. I've just forwarded them to the team and will share any follow-up news.
It looks like we'll have to wait a while longer for an official fix. Have you run into anything similar lately?
This article comes from the WeChat official account QbitAI (ID: QbitAI), author: Xifeng.