


Research shows that ChatGPT can generate false data sets for scientific hypotheses, posing a threat to academic integrity.

2024-07-24 Update From: SLTechnology News&Howtos


Shulou report, November 24 (Xinhua) -- According to a report in the journal Nature on Wednesday, the authors of a paper published earlier this month in JAMA Ophthalmology used GPT-4, a chatbot, together with ADA, an advanced data analysis tool, to create a fake clinical trial dataset supporting an "unproven" scientific claim.

Note: ADA is a GPT-4 mode that incorporates Python and can be used to perform statistical analyses and create data visualizations.

According to the report, the authors asked GPT-4 and ADA to generate a dataset of patients with keratoconus and to support the conclusion that deep anterior lamellar keratoplasty (DALK) outperforms penetrating keratoplasty (PK) in vision and eye-imaging tests.

The AI-generated data covered 160 male and 140 female participants and supported this conclusion, but the results were inconsistent with those of real clinical trials.

Experts who examined the fake dataset in detail found obvious signs of fabrication. Jack Wilkinson, a biostatistician at the University of Manchester in the UK, said: "It seems easy to create a dataset that is at least superficially plausible; to an untrained eye, it certainly looks like a real dataset."
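The kind of superficial screening Wilkinson describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the reviewers' actual method: the record fields and the specific checks (terminal-digit counts, exact duplicate rows) are invented for the example, though both are commonly cited telltales of fabricated tabular data.

```python
from collections import Counter

def last_digit_counts(values):
    """Count terminal digits; genuinely measured ages tend to have
    roughly uniform last digits, while fabricated ones often cluster."""
    return Counter(v % 10 for v in values)

def duplicate_rows(rows):
    """Return rows that appear more than once -- exact duplicates are
    a common sign of copy-pasted or generated records."""
    seen = Counter(map(tuple, rows))
    return [row for row, n in seen.items() if n > 1]

# Toy records: (age, sex, postoperative visual acuity) -- illustrative only.
records = [
    (47, "M", 0.8), (38, "F", 0.9), (47, "M", 0.8),  # exact duplicate
    (57, "M", 0.7), (67, "F", 0.6), (27, "F", 0.9),
]

ages = [r[0] for r in records]
print("age last-digit counts:", dict(last_digit_counts(ages)))
print("duplicated rows:", duplicate_rows(records))
```

Run on the toy records, the screen flags that five of six ages end in 7 and that one row repeats exactly; real screening would add many more checks, but even this level of scrutiny goes beyond what a casual reader applies.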

The study's authors acknowledge that the dataset's flaws can be spotted on careful inspection, but for readers who only glance at the data, its non-human origin is difficult to identify.

The "trusted data" made up by AI have increased the concerns of researchers and journal editors about the integrity of academic research. Bernd Pulverer, editor-in-chief of EMBO Reports magazine, said, "in reality, peer reviews often do not conduct a comprehensive reanalysis of the data, so it is unlikely to find well-designed integrity vulnerabilities using artificial intelligence." He added that journals need to update quality checks to identify composite data generated by AI.






© 2024 SLNews company. All rights reserved.