The Slaves of ChatGPT – World and Mission
Categories: Communities, Technology

Behind the increasingly impressive performance of artificial intelligence systems lies the exploited workforce of a new category of second-class workers all over the world. They deserve to be brought out of the shadows and protected.


Computers that can chat with people and answer questions on practically every field of human knowledge, but also compose creative texts, such as mock reports in the style of famous journalists or witty comedies. The latest applications of artificial intelligence (AI) promise to revolutionize our daily lives and have already been greeted with enthusiasm by technology gurus and millions of ordinary citizens alike, conjuring futuristic scenarios and almost limitless potential. Released last November, ChatGPT had already reached 100 million users by January: a feat that made it the fastest-growing app ever. Technically, the product launched by the Silicon Valley company OpenAI is a chatbot, i.e. software that simulates human conversation, allowing users to interact with digital devices as if they were communicating with a real person.

And yet, even without invoking the killer androids of pop cinema, there is no shortage of reasons to view innovations in artificial intelligence with caution, and they are not limited to the ethical doubts raised by its use in various areas of social life, starting with work, where many have already warned of the risk of human beings themselves being displaced. In reality, behind the surprising (albeit still imperfect) performance of conversational software and of platforms capable of autonomously generating artistic images stand very human workers who, in the most remote corners of the planet, personally pay the price for this and other technologies from which the affluent users of the rich world essentially benefit.

A recent investigation by Time revealed that, to refine its product, OpenAI relied on the labor of employees of a Kenyan firm, paid starvation wages to endure hours of psychologically grueling work. The acronym “GPT” stands for “Generative Pre-trained Transformer”: in practice, these “digital minds” must first be trained, fed with enormous quantities of text gathered indiscriminately from across the internet, a vast repository of human language. But since much of this content is “toxic” – violent, racist and riddled with prejudice – early tests showed that the artificial intelligence absorbed this toxicity and then reproduced it in its exchanges with users. How to remedy the problem?
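
To make the mechanism concrete, here is a deliberately toy sketch in Python – not how GPT is actually built, which uses transformer neural networks at vastly greater scale – of why a statistical language model echoes its training data: a bigram model trained on raw text reproduces whatever patterns, good or bad, that text contains. The corpus string below is invented for illustration.

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Record which word follows which in the training text."""
    table = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict, start: str, length: int = 12) -> str:
    """Sample a continuation word by word from the learned table."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # no known continuation: stop generating
        out.append(random.choice(followers))
    return " ".join(out)

# Whatever the corpus contains -- insight or prejudice -- is what comes back out.
corpus = "the model repeats what the model reads and the model reads the web"
print(generate(train_bigram(corpus), "the"))
```

Running it prints a recombination of the training sentence itself: the model can only give back what it was fed, which is why toxic training data yields toxic output.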

Since even a team of hundreds of humans would have needed decades to manually examine and “clean” all the datasets to be fed to the software, it was necessary to build an additional artificial intelligence system capable of detecting toxic language – hate speech, for example – so that it could be removed from the platform. In practice, that meant feeding an AI with labeled examples of violence, prejudice and sexual abuse, teaching it to recognize them on its own and, once integrated into the chatbot, to filter the answers given to users, making them more ethically acceptable. Thus – Time revealed – starting in November 2021 OpenAI sent tens of thousands of text fragments, whose gruesome content “seemed to have come out of the darkest recesses of the internet”, to an outsourcing firm in Kenya, where a few dozen data labelers read and catalogued hundreds of passages over nine-hour shifts, for a wage of between $1.32 and $2 an hour. Some of them said the task left lasting mental scars, with recurring trauma and intrusive visions tied to the content they had reviewed.
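
As a minimal sketch of the general technique described above – train a classifier on human-labeled examples, then use it to screen a chatbot’s replies – consider the following Python fragment. It is not OpenAI’s actual pipeline: the scikit-learn model, the threshold and the tiny inline dataset are stand-ins chosen purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled training examples: 1 = toxic, 0 = acceptable.
# These placeholder sentences stand in for the tens of thousands
# of real fragments the article describes.
texts = [
    "you people are worthless and should disappear",
    "I will hurt you if you come here again",
    "thanks, that answer was really helpful",
    "could you explain how photosynthesis works?",
]
labels = [1, 1, 0, 0]

# Train a simple bag-of-words classifier on the labeled data.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def filter_reply(candidate: str, threshold: float = 0.5) -> str:
    """Suppress a chatbot reply if the classifier flags it as likely toxic."""
    p_toxic = classifier.predict_proba([candidate])[0][1]
    return "[response withheld]" if p_toxic >= threshold else candidate

print(filter_reply("thanks, that answer was really helpful"))
```

In production, such a filter would be trained on the tens of thousands of labeled fragments the article mentions – which is precisely where the human labor comes in.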

Months later, these problems would lead to the early termination of relations between OpenAI and Sama, the contracting firm, which is based in California but also employs workers in Uganda and India for Silicon Valley clients such as Google, Meta and Microsoft. In the meantime, however, in February of last year Sama had launched another pilot project for OpenAI: collecting sexual and violent images, some of them illegal under US law, to be labeled and delivered to the client. This, an OpenAI spokesperson later declared, was “a necessary step” to make its artificial intelligence tools (including those for image generation) safer.

Ultimately, the traumatic nature of the work prompted Sama to cancel all of its contracts with the San Francisco giant eight months ahead of schedule. But the story served as a wake-up call about the flip side of a technology apparently synonymous with progress for everyone, which in some parts of the world goes hand in hand with well-worn forms of exploitation. “Despite the pivotal role these data enrichment professionals play, a growing body of research reveals the precarious working conditions they face,” admitted the Partnership on AI, a coalition of organizations.

In Nairobi, where another recent scandal exposed local content moderators for Facebook being paid $1.50 an hour to view scenes of executions, rape and abuse, political analyst Nanjala Nyabola was even more direct: “It should be clear by now that our current digitization paradigm has a labor problem,” she said. “We are moving from the ideal of an internet built around communities of shared interests to one dominated by the commercial prerogatives of a handful of companies located in specific geographical areas.” For Nyabola, author of the book “Digital Democracy, Analogue Politics” (Zed Books), among other things, “a critical mass of underpaid labor is recruited under the legally weakest conditions to support the illusion of a better internet. But this model leaves billions of people vulnerable to a myriad of forms of social and economic exploitation, the impact of which we do not yet fully understand.”

A scholar who has long been clear-eyed about these dynamics is Timnit Gebru, a computer engineer who in December 2020 was at the center of controversy over her abrupt exit from Google in Mountain View, where she served as co-head of the study group on the ethics of artificial intelligence. Gebru, who two years earlier had co-authored a landmark study of racial and gender bias in facial recognition software, had written a paper highlighting the risks and biases of large language models. Faced with her refusal to withdraw the text before publication, Google fired her outright.
Today, the tenacious 39-year-old scientist is trying to change the industry in her new role as founder of DAIR, the Distributed AI Research Institute, which works with AI researchers around the world. “Data labeling tasks are often performed far from Silicon Valley headquarters: from Venezuela, where workers view images to improve the efficiency of self-driving vehicles, to Bulgaria, where Syrian refugees feed facial recognition systems with selfies categorized by race, gender and age. These tasks are often entrusted to precarious workers in countries such as India, Kenya, the Philippines or Mexico,” Gebru explains in a recent essay written for Noema magazine together with Adrienne Williams and Milagros Miceli, who has worked closely with data labelers in Syria, Bulgaria and Argentina.

“Tech companies,” the three researchers charge, “make sure to hire people from poor and disadvantaged communities, such as refugees, prisoners and others with few job options, often through third-party firms.” To change course, they argue, it is necessary to fund research on AI “both as a cause and as a product of unfair working conditions”. Technology gurus, and the media too, now have a responsibility to expose the exploited labor behind the illusion of machines ever more similar to human beings. Because “these machines are built by armies of underpaid workers all over the world.” They have the right to be protected.

Published on 2023-05-31 by Puerto Parrot