ChatGPT is an increasingly popular chatbot built on the GPT-3 language model developed by OpenAI, which it uses to generate responses to user input. The creators' overriding goal was to make the bot imitate human communication as convincingly as possible. ChatGPT launched as a prototype just over a month ago, and that was enough to shake up almost the entire tech world.
ChatGPT has also turned out to be a tool that creates quite a few problems in the tech world. It turns out that it can turn even a layman into a hacker, or at the very least hand one ready-made malicious code.
Because of its capabilities, ChatGPT has already been blocked on all devices and networks belonging to New York City's public schools (it was too good at writing homework). Even Google is wary of the tool. According to The New York Times, Google believes that ChatGPT could eventually replace the classic search engine and become serious competition. Sundar Pichai, CEO of Google's parent company Alphabet, even attended several Google AI strategy meetings and directed his teams to focus their efforts on addressing the ChatGPT threat. But that is not the end of the alarming news about the new bot. It turns out to be just as proficient at writing malware.
How do we know? The cybersecurity company Check Point demonstrated it: its researchers searched hacking forums for ChatGPT-generated pieces of malicious code, and they found them. As TechSpot reports, hackers used the bot to prepare malware that searches for specific files, packs them into a .zip archive, and then sends them to a selected FTP server. ChatGPT reportedly helped cybercriminals prepare several other malicious tools as well. Once again, the old observation applies: technology can help as much as it can harm, and it all depends on who is using it. The biggest fear is that the bot's versatility will be exploited for nefarious purposes by people who could not write such code on their own. In other words, it could contribute to a rise in cybercrime.
Source: TechSpot