Sam Altman, appearing as a guest at the World Economic Forum, made several interesting comments.
The 2024 World Economic Forum in Davos took place between January 15 and 19, and its influential participants also focused on the expansion of artificial intelligence: the organization behind the event has previously named generative AI services, and the disinformation campaigns they help amplify, as a primary source of danger in the near future.
It is no wonder, then, that Sam Altman, who as CEO of OpenAI, the company behind the hugely successful ChatGPT text generator, has great influence on the industry's development, was also invited to the event. In interviews held at the forum, the CEO said several interesting things, for example about nuclear energy.
According to Reuters, answering a question from Bloomberg, Altman pointed out that the infrastructure driving artificial intelligence services has an enormous energy demand, so in his view the future of the technology is only secure if humanity achieves a breakthrough in climate-friendly energy sources, among which he singled out cheap solar power and nuclear energy.
The CEO also revealed that this is why he is actively investing in nuclear fusion technologies, a move that will surely attract criticism from opponents of nuclear power plants.
So OpenAI is betting on nuclear power, but that wasn't the CEO's only noteworthy statement. As Gizmodo's summary shows, Altman believes artificial intelligence will not take our jobs: people will "go on living their lives" and continue doing their "human things." In his opinion, we always find new things to do, though even amid his great optimism he admitted that no one knows what the future will bring. In any case, Altman's words seem to contradict the International Monetary Fund's prediction that AI will have a huge impact on jobs.
In addition to all this, Altman also touched on the fact that although The New York Times is currently suing OpenAI over the use of the paper's articles, the company doesn't actually need the copyrighted content of large media outlets to train its generative models. He also rejected the suggestion that AI-driven disinformation could pose a threat to the 2024 US presidential election.
In other words, the head of OpenAI is optimistic about AI in every imaginable respect, but only the future will tell whether his optimism is well-founded.