Behind dazzling new technologies, cottage-industry methods and exploitation often hide in the shadows. Companies like Meta need staff to manually review user-reported content. Artificial intelligence models also need support when their training data must be scrubbed of material that could distort the AI's way of "thinking."
The creators of ChatGPT decided to rid their creation of negative behavior at low cost. They used the services of Kenyan workers who were paid less than $2 per hour to process graphic content. From a product-development perspective this was key work, as ChatGPT's predecessor had trouble keeping racist content, among other things, out of its response repertoire.
OpenAI has needed external workers since November 2021, when thousands of text snippets from various corners of the internet, including its darkest ones, were sent to the servers of the outsourcing company Sama (which also works for Meta and Google). The Kenyan staff was tasked with labeling textual material containing descriptions of child sexual abuse, murder, suicide, torture, self-harm and incest, so that it could serve as a source of negative training examples for the AI. Wages ranged from $1.32 to $2 per hour (close to the real minimum wage in Kenya). Judging by the current state of the ChatGPT model, the effort has been successful.
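The article does not describe OpenAI's internal pipeline, but the labeling work it reports maps onto a standard supervised-learning pattern: human annotators assign category labels to text snippets, and a classifier trained on those labels can then flag similar content automatically. Below is a minimal sketch of that pattern; the dataset, category names, and scikit-learn model choice are all illustrative assumptions, not details from the report.

```python
# Minimal sketch of the labeling-to-classifier pattern described above.
# All data, category names, and model choices are illustrative assumptions,
# not OpenAI's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human annotators assign each text snippet a category label
# (harmless placeholders stand in for the real graphic material).
texts = [
    "a cheerful description of a picnic in the park",
    "a placeholder standing in for a violent passage",
    "a recipe for vegetable soup with fresh herbs",
    "a placeholder standing in for self-harm content",
]
labels = ["safe", "violence", "safe", "self_harm"]

# TF-IDF features plus logistic regression: a simple supervised
# classifier trained on the human-labeled examples.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Once trained, the classifier can flag new text automatically, so that
# harmful material can be filtered before or after model generation.
print(clf.predict(["a placeholder resembling the violent passage"]))
```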
However, the rate itself was not the main problem. The materials analyzed were indeed a distillation of the worst human impulses. Employees were forced to work with drastic content for many hours, which left many of them with mental health problems. As Time points out, access to psychological support was lacking. The system of rewards and bonuses also raised doubts, as it singled out the fastest workers (a practice that is usually unobjectionable, but here led to rapid burnout). What is more, in February 2022 OpenAI commissioned the collection of image material with even more drastic content (supposedly unrelated to the ChatGPT project). This resulted in the premature termination of the contract, since the mere possession of some of this material, such as child sexual abuse imagery, is unlawful. Before Sama finally ended the cooperation, OpenAI received 1,400 files of sensitive content, sorted into various categories.
These press reports will certainly not help OpenAI raise funds. The use of such a low-paid workforce, combined with psychological exploitation, is simply repugnant when set against the sums the organization obtains from investors. The situation calls for clarification, especially in light of reports of ongoing talks with Microsoft about financing on the order of $10 billion. Time's report casts OpenAI in a dystopian light: behind the brilliant invention stand largely overlooked people whose painstaking work has gone unacknowledged. It also shows how far artificial intelligence remains from independence in the learning process. The Kenyan story reminds us that the AI "mind" still relies on exhausting human labor at the keyboard.
Source: Time