2024-09-09 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)12/24 Report--
Xin Zhiyuan reports
Editor: run alan
[Xin Zhiyuan Guide] 2023, the breakout year of large models, is drawing to a close. Looking ahead, Bill Gates, Fei-Fei Li, Andrew Ng, and others have offered their own predictions for the development of artificial intelligence in 2024.
2023 can be said to be the spring of artificial intelligence.
In the past year, ChatGPT has become a household name
Throughout the year, we were amazed again and again by what AI could do, and AI became a staple of conversation in our spare time.
During the year, generative AI made significant progress, allowing artificial intelligence startups to attract a lot of capital.
Artificial intelligence leaders began to discuss the possibility of AGI, and policy makers began to take artificial intelligence regulation seriously.
But in the eyes of AI and tech industry leaders, the AI wave may only be in its infancy, and each year to come may bring even bigger waves.
Bill Gates, Fei-Fei Li, Andrew Ng, and others have recently shared their views on where AI is headed.
They all talked about looking forward to larger multimodal models, more exciting new features, and more conversations about how we use and regulate the technology.
Bill Gates: two predictions, one lesson, five questions Bill Gates published a ten-thousand-word article on his official blog, describing 2023 as the beginning of a new era.
https://www.gatesnotes.com/The-Year-Ahead-2024?WT.mc_id=20231218210000_TYA-2024_MED-ST_&WT.tsrc=MEDST#ALChapter2
The blog post again opens with his work at the Gates Foundation, discussing far-reaching changes that have taken place, or will take place, around the world.
In response to the development of AI technology, he said:
If I have to make a prediction, in a high-income country like the United States, I guess we are 18 to 24 months away from the widespread use of artificial intelligence by the general public.
In African countries, I expect to see a similar level of use in about three years. There is still a gap, but the lag is much shorter than we have seen with other innovations.
Bill Gates believes that AI, as the most far-reaching innovative technology in the world, will completely sweep the world within three years.
Gates said in his blog post that 2023 was the first time he used artificial intelligence at work for "serious reasons".
Compared with previous years, the world now has a better understanding of what AI can do on its own and what tasks it can assist with.
But for most people, there is still a long way to go for AI to play its full role in work scenarios.
Based on his own experience and observations, he says one of the most important lessons for the industry is that a product must fit the people who use it.
He gives a simple example: Pakistanis usually send each other voice messages rather than text messages or e-mails, so it makes sense to build an application that relies on voice commands instead of typing long queries.
From the perspective of what concerns him most, Gates raised five questions, hoping artificial intelligence can play a major role in these areas:
- Can artificial intelligence fight antibiotic resistance?
- Can artificial intelligence create a personalized tutor for every student?
- Can artificial intelligence help treat high-risk pregnancies?
- Can artificial intelligence help people assess their risk of contracting HIV?
- Can artificial intelligence make medical information easier for every health worker to access?
If we invest wisely now, artificial intelligence can make the world fairer. It can reduce, or even eliminate, the lag between when innovations reach the rich world and when they reach the poor world.
Andrew Ng (Wu Enda): LLMs can understand the world, and bad regulation is worse than none Andrew Ng said in a recent interview with the Financial Times that apocalyptic theories about artificial intelligence are ridiculous, and that AI regulation will hinder the development of the technology itself.
In his view, the current regulatory measures related to artificial intelligence have little effect on preventing some problems. Such ineffective regulation will not have any positive benefits except that it will hinder technological progress.
Therefore, in his view, rather than impose low-quality regulation, it would be better not to regulate at all.
He cited the recent example of the US government asking big technology companies to commit themselves to adding "watermarks" to AI-generated content to deal with problems such as false information.
In his view, some companies have stopped watermarking text content since making those voluntary commitments at the White House. He therefore believes that voluntary commitment, as a regulatory approach, has failed.
On the other hand, if regulators transplant this ineffective regulation to issues such as "regulating open source AI", it is likely to completely stifle the development of open source and lead to the monopoly of large technology companies.
If AI regulation remains at its current level, it serves no real regulatory purpose.
Andrew Ng reiterated that he actually wants governments to step up and craft good regulation, rather than the bad regulatory proposals now on the table. He is not advocating a hands-off approach; but between poor regulation and no regulation, he would rather have no regulation.
Andrew Ng also said in the interview that LLMs now contain the beginnings of a world model.
"From the scientific evidence I have seen, AI models can indeed build world models. If an AI has a world model, then I tend to believe it does understand the world, though that is my own interpretation of the word 'understanding'.
If you have a model of the world, you understand how the world works and can predict how it will evolve under different scenarios. There is scientific evidence that LLMs, after training on large amounts of data, can indeed build a world model."
Fei-Fei Li and Stanford HAI release seven predictions The challenge to knowledge workers: Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and others predict that AI companies will deliver products that genuinely affect productivity.
Knowledge workers will be affected like never before, including creative workers, lawyers, and finance professionals.
In the past 30 years, these people have been largely unaffected by the computer revolution.
We should embrace the changes artificial intelligence brings, letting it improve our work and enable us to do new things we could not do before.
James Landay, a professor at Stanford University's School of Engineering, and others believe that we will see new large-scale multimodal models, especially in video generation.
So we must also be more vigilant about serious deepfakes.
Consumers need to be aware of this, and so does the public at large.
We will see companies such as OpenAI and more startups release the next larger model.
We will still see plenty of debate over "Is this AGI? What is AGI?" But we do not need to worry about AI taking over the world; that is all hype.
What we should really worry about is the harm happening now: misinformation and deepfakes.
GPU shortage Stanford University professor Russ Altman and others expressed concern about the global GPU shortage.
Big companies are trying to bring AI capabilities in-house, while GPU makers such as Nvidia are already running at full capacity.
GPU compute, that is, AI compute, represents competitiveness in a new era, for companies and even for countries.
The competition for GPU will also put tremendous pressure on innovators to come up with hardware solutions that are cheaper, easier to manufacture and use.
Stanford University and many other research institutions are studying current low-power alternatives to GPU.
There is still a long way to go to achieve large-scale commercial use, but in order to democratize artificial intelligence technology, we must continue to move forward.
More useful agents Peter Norvig, a distinguished education fellow at Stanford HAI, believes that agents will rise in the coming year, with AI able to connect to other services and solve real problems.
2023 was the year of being able to chat with AI; people's relationship with AI was just an exchange of inputs and outputs.
By 2024, we will see agents able to do work for humans: making reservations, planning trips, and so on.
In addition, we will move towards multimedia.
So far, people have paid great attention to the language model, then the image model. After that, we will also have enough processing power to develop video models, which will be very interesting.
The data we train on now is highly purposeful: people write down what they find interesting and important in pages and paragraphs, and people use cameras to record what is happening.
But some video cameras can run 24 hours a day, capturing whatever happens without any purposeful filtering.
AI models have never had this kind of data before, and it will give them a better understanding of everything.
Hope for regulation Fei-Fei Li, co-director of Stanford HAI, says artificial intelligence policy will be worth watching in 2024.
Our policy should ensure that students and researchers have access to AI resources, data and tools to provide more opportunities for artificial intelligence development.
In addition, we need to develop and use artificial intelligence safely, securely, and reliably.
Therefore, the policy should not only focus on cultivating a dynamic artificial intelligence ecosystem, but also on the use and management of artificial intelligence technology.
We need relevant legislation and executive orders, and the relevant public sector should receive more investment.
Ask questions and offer solutions Ge Wang, a senior researcher at Stanford HAI, hopes we will have enough funding to study what life, community, education, and society can gain from artificial intelligence.
More and more generative AI technology will be embedded in how we work, play, and communicate.
We need to set aside time and space for ourselves to think about what is allowed and where we should limit ourselves.
As early as February this year, the academic publisher Springer issued a statement saying that large language models may be used in drafting articles but may not be credited as co-authors on any publication. The reason they cited was accountability, which is very important.
That is, put a position on the record, explain the reasoning behind it, acknowledge that it reflects the current understanding, and note that the policy may be refined later.
Institutions and organizations must adopt this perspective and strive to put it in writing in 2024.
Companies will face complex regulations. In addition to this year's EU Artificial Intelligence Act, California and Colorado will pass regulations by mid-2024 addressing automated decision-making in the context of consumer privacy, according to Jennifer King, a privacy and data policy researcher at Stanford University.
Although these regulations are limited to AI systems that are trained on or collect personal information, both give consumers the choice of whether to allow certain systems to use AI and their personal data.
Companies will have to start thinking about what it means when customers exercise their rights, especially collectively.
For example, consider a large company that uses AI to assist recruitment: what happens if hundreds of applicants opt out of AI? Do their resumes have to be reviewed manually? Would that make a difference? Can humans do better? We are just beginning to work through these questions.
Reference:
https://x.com/StanfordHAI/status/1736778609808036101?s=20
https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3
https://www.businessinsider.com/bill-gates-ai-radically-transform-jobs-healthcare-education-2023-12
This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).