Perpetrators and victims of Artificial Intelligence
…
Time is running out. It never stops. For no man, for no reason. So it is not going to stop for me either, at least not so that I can write my Common Sense text. I have to rule out that option. Actually, it is not even an option. Not today, not with the tools and knowledge we have so far.
"Are you alive;" (question on Meta Messenger).
The question is asked by Evi Botsaropoulou, editor-in-chief of Common Sense. I wonder: am I alive? I have a pulse, I breathe, I ask myself the question of existence. I probably meet most of the criteria of a living organism, and a fairly advanced one, since in addition to my biological functions I am also aware of my own existence. Cogito ergo sum. But I realize that my intelligence is not enough to prepare and deliver a text within the next few hours.
I am tempted by an experiment: ask the AI to write an article about itself. It is not difficult. I can request it from ChatGPT and have it in seconds. But a major question arises, at least in my mind: would this text be creative? I'm sure the AI engine could produce a polished, even brilliant text and let me drag my tired body to bed an hour earlier. Does ChatGPT get tired? Does it hurt?
The truth is that it neither gets tired, nor hurts, nor perceives stimuli the way a human being does. And this is for the simple reason that AI models like ChatGPT are predictive models that essentially work using probability functions. They draw their knowledge from a huge database and, using machine learning, process data to meet a variety of requests, from text reproduction and analysis to service delivery. But they cannot be creative. Or rather, to be perfectly honest, they can be partially creative. The more computational, the more "algorithmic" the process they are asked to perform, the more creative they can appear. Conversely, the less computational the process, the less creative this kind of artificial intelligence can be.
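The "prediction by probability" idea can be sketched with a deliberately tiny bigram model. This is a gross simplification of what systems like ChatGPT actually do (they use vast neural networks, not word counts), and the corpus and function names here are invented purely for illustration; the point is only to show a model that emits the statistically likeliest next token rather than "thinking":

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "time is running out time is money time flies".split()

# Count how often each word follows another (bigram frequencies).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(word):
    """Estimate P(next | word) from the bigram counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def predict(word):
    """Greedy decoding: pick the most probable next token."""
    probs = next_token_probs(word)
    return max(probs, key=probs.get) if probs else None
```

Given this corpus, `predict("time")` returns `"is"`, because "is" follows "time" in two of its three occurrences. There is no understanding anywhere in that pipeline, only frequency turned into probability, which is the author's point about computation without soul.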
So, back to my original question: would the text ChatGPT wrote for me be creative? I think not. It would certainly be interesting in terms of content, but it would lack the qualities that have to do with the unexpected, the emotional, the improvised. It would lack a soul. Of course, this does not make it uninteresting. On the contrary! Its effectiveness alone makes it not only absolutely interesting but also incredibly useful. And we haven't seen anything yet. In recent years artificial intelligence has made incredible leaps, and the speed at which it is evolving is staggering.
Flashback. Let's go back 16 years, to one of the most iconic and revolutionary presentations in the field of technology. In 2007, Steve Jobs introduced the first iPhone. A highly revolutionary product, as revolutionary as the first web browser in the not-so-distant 1990. Using the rule of three, a crucial concept in communication theory, Jobs announced three products in one. The first, he said, was a widescreen iPod with touch controls. "The second is a revolutionary mobile phone," Jobs continued. "And the third is an innovative Internet communication device." Imagine how prehistoric the first iPhone seems to us today. Now imagine what levels artificial intelligence could reach 16 years from now.
There are many who argue that man will always be one step ahead of the machine. This possibility makes me feel safer. On the other hand, I am not at all sure it will be confirmed. Take, for example, chess. Those of you who are around my age (around 45, that is ☹) will remember the duels between Garry Kasparov and IBM's Deep Blue in 1996 and 1997. For the rest, let me mention that Kasparov won the first match, the one in 1996. But the following year Deep Blue came back stronger and better prepared, achieving the first AI victory over a reigning world champion in a chess match. Much has been said about that second match, and after dozens of analyses by chess experts the prevailing view is that it was not the IBM computer that beat Kasparov, but Kasparov who lost to Deep Blue, through uncharacteristically poor play on his part. This perspective suits us as a species. It confirms our superiority over the machine: the defeat was brought about by an off day, not by the computer's strength. Let me now bring you back to reality. Forget this nonsense about human chess superiority. DeepMind's AlphaZero today would not lose to Kasparov, Fischer, Carlsen, Karpov, or all of them together. AlphaZero is not a chess program; it is an artificial chess intelligence that learns chess with every move, that does not merely play moves but plans them. And all this with incredible computing power and speed.
So you can imagine what level artificial intelligence will be at 16 years from now. Or maybe you can't. Don't even attempt it. Not because you lack imagination; after all, Common Sense readers possess not only common sense but also incredible imagination. The reason is that technology has a bad habit: it almost never fulfills the purpose for which it was created. It usually opens other doors. Did Gutenberg ever imagine that his invention would threaten the Holy Roman Catholic Church, cause schisms and the rise of new doctrines, and lead to the spread of knowledge? Of course not. He wanted to make money by selling as many copies of the Bible as possible.
What scares me is neither the level nor even the use of artificial intelligence. After all, artificial intelligence is just another technological achievement, and therefore has no human qualities such as good and evil. There is no such thing as good or bad technology; there is simply technology. The scary thing lies elsewhere, and on this point I agree with the philosopher Theofanis Tassis: what is scary is the speed at which artificial intelligence is developing, a speed that outstrips our ability to legislate and to create a regulatory framework.
Over the past two weeks we've realized that we cannot rely on ethical rules, corporate governance structures, or even principled board members to keep us safe. They tried, to their credit, but the effort was not enough. The sudden firing and rehiring of Sam Altman, CEO of OpenAI, a company founded as a non-profit with a mission of prioritizing AI safety over profits, showed spectacularly how little such structures can rein in speculators. OpenAI, Inc. was founded in 2015 with the aim of ensuring that Artificial General Intelligence, meaning autonomous systems that can outperform humans in all or most tasks, does not go unchecked, if and when it is ever achieved. Initially, the company struggled to raise enough funds through donations to compete in a fast-growing and highly competitive field: with just $130 million raised over three years, it fell well short of its $1 billion goal. So turning to private capital was a one-way street, while the company tried to preserve its original mission within an elaborate governance structure that kept Altman as chief executive. A financier for the venture was not long in coming: Microsoft, which "came in" with 13 billion dollars!
When OpenAI's board decided to fire Altman about two weeks ago, apparently because a majority of its members felt there was a conflict between his personal ambitions and the company's mission, the entire structure fell apart. Microsoft stepped in and offered to hire Altman and anyone else who wanted to follow him, jeopardizing both OpenAI's very existence and its own investment. Thirteen billion dollars is not a small amount, but for Microsoft it is apparently small compared with the benefits the commercial use of artificial intelligence can bring. Although Altman has been rehired by OpenAI, along with a new board that seems more likely to defer to him, it is safe to assume that Microsoft will be the one pulling the strings. After all, Altman owes Microsoft his job and the future of the company he runs.
This case did not reveal anything new. Historically, capital is what usually wins when there are competing visions for the future of an innovative product or business model. Unsurprisingly, OpenAI failed to stay true to its mission. If entire states cannot protect their citizens from the depredations of capital, a small non-profit corporation with a handful of well-intentioned board members has no hope at all.
But what am I even saying? What protection of citizens? And by whom? The state long ago ceased to serve the citizen. On 5 June 2013 the world, or rather a part of it, since most of us were sleeping the sleep of the just, was jolted awake by Edward Snowden's revelations, first published in the Guardian, about the illegal collection of personal data by the United States' National Security Agency (NSA). And how did the NSA gather the data? It simply partnered with Google, Microsoft, Facebook, Yahoo!, AT&T and Verizon, and harvested users' internet searches, phone calls, videos and every form of information you can imagine, for millions of citizens, and not only Americans. And imagine that this happened in the USA. Now imagine what happens elsewhere. In China, say. How about a partnership between the Chinese government and tech giants Huawei and Ant Group (the Alibaba affiliate, the same Alibaba from which we buy our cheap goodies), together with some of the world's leading AI developers such as SenseTime, Megvii and CloudWalk, to build citizen-tracking technologies based on facial recognition and DNA profiling in the province of Xinjiang, or East Turkestan. Who lives there? Uighur Muslims, who have been fighting for their independence for 14 years. Their cause doesn't sell as well as the Palestinian one, I know.
And here we are today. From the initial euphoria caused by the spread of the internet and social media as tools of democratization, we have reached the exact opposite point: the belief that digital tools are inherently anti-democratic, or, in the words of Yuval Noah Harari, that "technology favors tyranny".
Both of these perspectives are wrong. Digital technology is neither pro-democratic nor anti-democratic. Nor did governments need artificial intelligence in order to monitor the media, censor information and oppress their citizens. All of these are one possible direction for technology, but not the only one.
Time is running out. This text took almost four hours. In the end I restrained myself and did not ask ChatGPT for help; Tor and Google did help, of course. I don't know what ChatGPT would write about the technology that gave birth to it. But I know that it has no self-awareness, though it has access to more data than I do. The crucial question is where it gets that data from and in what ways it can use it. I do not know. The bad thing, of course, is that those who do know apparently cannot guarantee our protection.
*Frontpage picture: speckyboy.com