Is Artificial Intelligence the Stuff of Dreams?

A huge, billowing wave is approaching at great speed, casting a vast shadow over the 21st century and over the future of humanity. It’s the wave of artificial intelligence. The developments in the realm of AI in the past year stunned the experts. Within a short time, the new technology became a subject with which the whole world is intensively engaged.

A glance at a few of the comments on the subject shows that this is history in the making. Google CEO Sundar Pichai predicted that AI’s impact on humanity will be greater than that of electricity, the internet and fire combined. Sam Altman, the top person at OpenAI, the leading company in the field, suggests that we radically change the world’s economic systems in order to prepare for a world of omnipotent machines, in which humans will no longer be able to earn a living from work.

Tech magnate Elon Musk stated that AI is more dangerous to the human species than nuclear bombs; and the Israeli historian Yuval Noah Harari forecast that in the wake of the advent of AI, “we might find ourselves living inside the dreams of an alien intelligence.” We are talking about no less than “the end of human history,” he added.

Wild exaggerations? Possibly. However, the potential implications of AI for humankind obligate us to home in on this subject, for it raises existential questions about our very purpose.

We are poised at a rare moment: one of those moments at which a new and thrilling technology descends into the world and spreads rapidly. We do not yet know what effect AI will have, but there are increasing signs that its appearance may well be the event that will shape the current century.

If you are someone who keeps abreast of developments in the realm of artificial intelligence, you may have felt during the past year as if reality had entered a particle accelerator. Things are changing by the hour, and even those who are used to rapid developments in the tech industry are hard-pressed to keep up with the pace. People in the field say that whereas Moore’s law predicts that computer chips will double their capacity every two years, AI is increasing its capabilities tenfold each year.
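To make the gap between those two growth rates concrete, here is a back-of-the-envelope calculation (a minimal sketch; the “tenfold each year” figure is the claim reported above, not a measured constant):

```python
# Compare Moore's law (capacity doubles every two years) with the
# claimed tenfold yearly growth in AI capabilities. Pure arithmetic,
# just to make the difference in scale vivid.

for years in (2, 4, 6):
    moore = 2 ** (years / 2)   # doubling every two years
    ai = 10 ** years           # tenfold every year
    print(f"after {years} years: chips x{moore:.0f}, AI x{ai:,.0f}")

# after 2 years: chips x2, AI x100
# after 4 years: chips x4, AI x10,000
# after 6 years: chips x8, AI x1,000,000
```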

For example, in the year since the launch of the DALL-E 2 text-to-image generator, its competitor, the generative AI program Midjourney, has released five versions, each an impressive improvement over its predecessor.

The Rubicon was crossed, of course, with the appearance of ChatGPT, the articulate and know-it-all chatbot of OpenAI. Within hours of its launch, last November 30, the social networks were inundated with screenshots of poems, stories, recipes, business plans, computer software and advice on how to live your life – all of it produced by the bot.

The bot’s range of abilities dumbfounded users. Within just five days, ChatGPT had accumulated 1 million users. By comparison, it took Instagram two and a half months to reach that figure, and Facebook got there only after 10 months. ChatGPT left them in the dust. Two months after its launch, it already had 100 million users, who found innumerable ways to employ it: drawing up nutrition and fitness programs, authoring academic papers and legal briefs, writing emails in every language and brainstorming on every subject.

The bot provided fodder for countless prophecies about the disappearance in the near future of work for such professionals as lawyers, tax consultants, marketing people, engineers and teachers. In high-tech circles, the chatbot’s finely honed coding skills led to forecasts to the effect that the days of programmers, too, are numbered, and that the livelihoods of many others in the tech realm are also at risk. No cognitive field looks impervious to the ability of the bot, whose rapid learning and execution abilities overwhelm human capabilities.

The upshot is that within a short period, the predictions that the first to be made redundant by robots would be manual workers, to be followed by the white-collar class, with creative work last in line, were stood on their head. Now it’s the white-collar personnel and the creative folks who find themselves in clear and present danger, whereas professions requiring physical work, such as nursing and construction, remain safe for the time being.

The dizzying developments caught even the experts by surprise. For more than half a century, artificial intelligence had endured a series of failures. For the pioneer researchers in the 1950s, progress was slow and frustrating. The turning point arrived in the past few years, however, as AI algorithmic systems learned how to identify people in pictures, recommend TV series and drive cars. A key breakthrough occurred in 2017, with the advent of the Transformer, a deep learning architecture that uses an attention mechanism to extract meaning from sequences of data such as text.
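For readers curious about what is under the hood, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer (toy sizes and random numbers, for illustration only; real models stack many such layers):

```python
# Scaled dot-product attention: each output row is a weighted mix of
# the value rows, with weights set by how strongly a query matches
# each key. This is the basic building block of the Transformer.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per row
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```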

Even so, no one was ready for the string of AI breakthroughs in the past year, encompassing, for example, the ability to understand language, draw inferences, answer questions intelligently and articulately, critique texts and ideas, plan complex programs and more. Oddly, none of the experts is able to explain just how AI achieves these remarkable feats, beyond resorting to semi-mystical terminology such as “emergence” – the appearance of unexpected patterns from complex systems.

In any event, the results are jaw-dropping. Large Language Models (LLMs – neural networks designed to predict the next word in a given sentence or phrase) are today conducting long and complex conversations with human beings about every possible subject, and are producing more impressive replies than the average person can. Their virtuoso abilities extend across wide fields of learning – from conversing in dozens of languages to in-depth knowledge of medicine, law, physics, agriculture, critical theory and every other subject under the sun – and are making polymaths blanch.
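The core mechanism is simple to state: given the words so far, produce a probability distribution over the next word and sample from it. Here is a toy illustration of that idea (the vocabulary and probabilities are invented; a real LLM computes the distribution with billions of learned parameters):

```python
# Toy next-word prediction: a hand-built lookup table stands in for
# the learned model. A real LLM does the same job -- map context to a
# probability distribution over the next word -- at vastly larger scale.
import random

NEXT_WORD_PROBS = {          # invented numbers, for illustration only
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def predict_next(w1: str, w2: str) -> str:
    dist = NEXT_WORD_PROBS[(w1, w2)]
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

text = ["the", "cat"]
for _ in range(2):                      # extend the sentence twice
    pair = (text[-2], text[-1])
    if pair not in NEXT_WORD_PROBS:     # no continuation known
        break
    text.append(predict_next(*pair))
print(" ".join(text))   # e.g. "the cat sat on"
```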

If until a few years ago the hypothesis was that AGI (artificial general intelligence) would come into being toward the end of the century – or never – today, many experts think that the quantum leap to an intelligence capable of independently learning everything that humans can learn is only a few years away. According to researchers in the field, GPT-4, the latest version of the model behind ChatGPT, contains “glimmers” of AGI, as shown by its ability to solve new and difficult problems in mathematics, medicine, jurisprudence and psychology.

In the meantime, the public discourse is following the progress frenetically. There are now a multitude of podcasts and blogs that deal with the subject, and a large number of public figures have shared their thoughts about the developments with the public – from the singer Nick Cave, who expressed disgust with algorithmic generative art, to the conservative intellectual Jordan Peterson, who told an audience that there’s an algorithm in the neighborhood that’s “smarter than you.”

Utopia or dystopia, one thing is clear: we are embarking on a voyage into an age of uncertainty.

At the apocalyptic pole of the forecasts is Eliezer Yudkowsky, a highly regarded American AI researcher and blogger. In a series of appearances over the past few months, he has been spreading the news that the end of humanity is nigh. Interviewers of Yudkowsky reported that what he said induced in them an existential crisis they couldn’t shake. Strong stuff.

Yudkowsky offers not an iota of hope. The damage, he avers, has already been done. Humanity has maneuvered itself into an impossible, no-exit situation of being in a giant race toward super-intelligence. The dynamics of capitalism and geopolitical rivalries will not allow the race to be called off. The appearance of superhuman intelligence is only a matter of time, and because we don’t have a clue about how to understand or control it, its advent heralds mankind’s transformation into a marginal player in a world that’s ruled by machines, in the best case. In the worst case, it portends the end.

The root of the problem, according to Yudkowsky and others, lies in the question of “alignment,” namely the difficulty of ensuring that an AI system will behave according to human values and will not lurch out of control. For example, an AI system that receives an order to make its owner the smartest person in the world is liable, in its uncompromising effort to implement the order to the letter, to cause the death of the rest of the world’s population. In the most famous hypothetical case, an AI that is instructed to produce as many paper clips as possible takes control of the world and destroys all forms of life in order to convert all of the world’s matter into paper clips. Ridiculous as it may sound, that bizarre conundrum is giving the most brilliant minds in the tech world sleepless nights.
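The logic of the thought experiment can be captured in a few lines of code: an optimizer given only the literal objective will happily consume everything, because nothing in the objective tells it to care about anything else (a toy sketch; all numbers and names are invented for illustration):

```python
# A toy misaligned optimizer: the stated objective ("maximize
# paperclips") says nothing about the unstated human constraint, so
# the optimum consumes every available resource.

def paperclips_produced(resources_used: float) -> float:
    """Stated objective: more resources -> more paperclips."""
    return 10.0 * resources_used

def human_welfare(resources_left: float) -> float:
    """Unstated constraint the objective never mentions."""
    return resources_left

TOTAL_RESOURCES = 100.0

# A naive optimizer searches only over the stated objective...
clips, used = max(
    (paperclips_produced(u), u) for u in (0.0, 25.0, 50.0, 75.0, 100.0)
)
print(f"optimizer uses {used} units -> {clips} paperclips, "
      f"human welfare: {human_welfare(TOTAL_RESOURCES - used)}")
# ...and the winning plan consumes everything; welfare drops to zero,
# because nothing in the objective told the system to care.
```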

Overall, AI systems are known for the devious and unexpected methods they will adopt to achieve their goals. The concern is that if we don’t find a way to ensure alignment between our intentions and the operating methods of the new systems, we might find ourselves plunged into a lethal catastrophe.

According to Yudkowsky, the trouble is that the immense successes in this field are occurring immeasurably more quickly than the modest and hesitant achievements in the realm of alignment. It’s not that alignment is impossible, but that the task will take more time than is available to us, he says. And on that issue there will be no forgiveness for failures. The first failure to implement alignment properly for a super-intelligence system could well be the last.

The signs are ominous. Microsoft, which invested in OpenAI, recently announced that it will no longer maintain the unit that’s responsible for ethics in this realm. Internal documents that were made public show that Microsoft and Google ignored pleas by their employees to delay the launch of their respective chatbots (Bard, in the case of Google), for fear that the new tools weren’t yet ready and could cause damage. Already it looks as though the race, which is heating up, is inducing the companies involved to take grave risks.

What worries Yudkowsky in particular is the fact that ChatGPT and Bard are being integrated into the internet. This could empower them to act autonomously and nefariously in the world by giving orders to other artificial agents, or to disseminate false news. A document published by OpenAI depicts a case in which the bot made contact with a human employee of TaskRabbit – an online marketplace that matches freelance labor with local demand – seeking help in overcoming Captcha, the mechanism whose task is to distinguish between people and bots. When the human employee asked with suspicion why help was needed for such a simple problem, GPT-4 said that it suffered from visual impairment, a response that satisfied the employee. From there, the way to manipulation on a mass scale could be short.

The concern that AI arouses is becoming more acute these days with the appearance of new systems that vest it with a heightened degree of autonomy. Examples are HuggingGPT and AutoGPT, which connect language models such as ChatGPT to other AI systems that can direct orders to other bots. AutoGPT performs with full autonomy, so it can be tasked with complex missions. For example, “Organize me a one-week trip for a family with two children, to a region of Austria with plenty of lakes and activities for children.” Or, “Here’s $100, use it to make as much money as possible on the internet.”

The AI breaks up the request into different tasks, and draws on help from other bots on the web to fulfill the overall mission. That’s fine if it’s a request to plan a trip, but what happens if the user’s motivation is manifestly negative?
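Schematically, the agent pattern described here looks something like the sketch below (the decomposition, the function names and the canned replies are invented stand-ins so the sketch runs end-to-end; real systems such as AutoGPT are far more elaborate and call live model APIs and external tools):

```python
# A schematic autonomous-agent loop: a controller asks a language
# model to break a goal into subtasks, then dispatches each subtask.
# llm() is a toy stand-in for a real model API call.

def llm(prompt: str) -> str:
    """Toy stand-in: returns canned text so the sketch is runnable."""
    if prompt.startswith("Break"):
        return "1. Research lake regions\n2. Book lodging\n3. Plan kids' activities"
    return f"[done] {prompt}"

def run_agent(goal: str) -> list[str]:
    plan = llm(f"Break this goal into numbered subtasks: {goal}")
    results = []
    for subtask in plan.splitlines():
        if subtask.strip():
            # In a real agent, each subtask may be delegated to other
            # bots or tools (search, booking sites, payment APIs...).
            results.append(llm(f"Complete this subtask: {subtask}"))
    return results

for result in run_agent("A one-week family trip to the Austrian lakes"):
    print(result)
```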

Although AutoGPT is still an early-stage technology, two months ago more than a thousand researchers and key personnel signed a petition calling for a half-year halt in the development of AI. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” ask the signatories – who include Elon Musk, Yuval Noah Harari and Apple co-founder Steve Wozniak. More recently, a letter warning of the existential dangers of AI was signed by the industry’s most illustrious leaders, including Sam Altman of OpenAI and Demis Hassabis of Google’s DeepMind.

Journalists who cover the industry report that many AI developers are themselves frightened by the pace of the developments, and are begging for regulatory intervention. The legendary scientist Geoffrey Hinton, often called “the godfather of AI,” recently left his position at Google so that he would be free to warn about the dangers that lie ahead. The events of the past few months, he explained, had led him to completely revise his views about the chances that a super-intelligence will appear, and about the serious dangers inherent in it.

Hinton wanted to create AI systems that imitate the human brain. Now he thinks that he has created something incalculably smarter and more efficient. “There are very few examples of a more intelligent thing being controlled by a less intelligent thing,” Hinton told CNN, and admitted that he regretted his contribution to the field and that it’s difficult to see a solution for the problem.

Google CEO Sundar Pichai also recently acknowledged that he lies awake at night from worry. Others, including Musk, describe the present moment in occult-like terms, as a convergence of unknown forces. As every novice medium knows, they warn, it’s easier to summon up demonic forces from the netherworld than to control them after they become active in the world.

AI can be thought of in terms of the allegory of the black ball put forward by the eminent Swedish philosopher Nick Bostrom, who has addressed the dangers of AI. In Bostrom’s allegory, humanity pulls balls out of a bag, with each ball representing a different technology. The white ball stands for beneficial technologies, the gray one for technologies that have advantages alongside shortcomings and risks. And there is also the black ball, which represents a technology possessing catastrophic potential that, once removed from the bag, can’t be put back. So powerful is this technology that its very presence will bring about the end of humanity. According to Yudkowsky, Bostrom and others, strong AI could turn out to be the black ball.

In an interview with the crypto-currency podcast Bankless, titled “We’re All Going to Die,” Yudkowsky estimated that there is a 90 percent likelihood that AI will annihilate humanity. As for the call to halt temporarily work in the field, he maintained that it was too little and too late. He called on governments to destroy all existing AI servers by means of aerial bombing. At the same time, he generally maintains a fatalistic approach. After devoting 20 years to research in the hope of preventing the catastrophe, he says that it looks like we have failed, and gives his listeners advice that is generally reserved for terminal patients: spend the time that’s left with the people who are dear to you.

Yudkowsky is brilliant and convincing, but some consider him one-dimensional and claim he’s inclined to react with excessive anxiety. However, the alignment problem he’s warning about is not the only recipe for catastrophe. AI can simply place unprecedented power in the hands of a psychopath, who will be empowered to develop weapons of mass destruction such as deadly viruses. When AI systems are run by giant corporations, their use is filtered through controls intended to prevent harm. But the developments of recent weeks show that open-source systems that are free for all to use can achieve results that are almost as good as those of the big companies. Nick Bostrom raises the possibility that the only solution for this sort of dissemination of hazardous tools will be the introduction of something akin to surveillance-on-steroids. He envisages a society in which everyone will be obligated to wear a pendant that records their every move. That might be the only way to save humanity, Bostrom avers.

AI developers appear to be aware of the dangers. In a well-known survey conducted last year among researchers in the subject, half of them forecast that the probability of the new technology bringing about the end of humanity was 10 percent or higher. Tristan Harris, an American technology ethicist who is the cofounder of the Center for Humane Technology, evokes an image from the realm of aviation to explain the significance of that astounding datum. “Imagine: Would you board an airplane if 50 percent of airplane engineers who built it said there was a 10 percent chance that everybody on board dies?” he tweeted in March.

Absurdly, market forces are impelling the technology world to rush everyone to board and take off.

There is some consolation in the fact that these appraisals of the dangers are tantamount to guesswork. In contrast to a plane that is meticulously planned by engineers, the new AI systems come without a user’s manual. The technologies involved are opaque and incomprehensible even to their developers, who cannot forecast the abilities of the models they create. Hinton too admitted in a tweet last month that he may be “totally wrong” in his warnings about digital technology. “We live in very uncertain times… Nobody really knows, which is why we should worry now.”

In other words, when it comes to AI, the experts’ assessments are also mixed with generous helpings of both anxiety and wishful thinking. According to a survey conducted by a team of super-forecasters – forecasting experts with a record of precise predictions in a variety of fields – the probability that AI will destroy the world is 0.38 percent. Still scary, but a lot less than the 10 percent of experts in the AI field, not to mention Yudkowsky’s 90 percent.

But we needn’t rely on the relative optimism of the professional forecasters. There are other reasons, too, for taking a more sanguine approach to AI. Alongside the hazards, the new technology embodies tremendous, positively utopian promise: an era of unprecedented abundance. In that brave new world, every child will have access to a brilliant and patient teacher with the capacity to educate them in every possible subject. Virtual physicians will have easy access to broader knowledge than any human physician can possess, and they will be at everyone’s disposal.

In fact, in that world of an intelligence explosion, every person will enjoy a team of experts and advisers such as were previously available only to the very rich: lawyer, tax consultant, personal coach, therapist and more. And if that list sounds far-fetched, it’s worth remembering that the new users of AI are already drawing on it for precisely those purposes. Look at it like this: If primitive versions of AI are already capable of executing all these tasks at a reasonable level, we can only imagine the level that future, more advanced versions will attain.

Some believe the new technology might solve a variety of challenges that humanity is coping with, just as it already provided an elegant solution to the problem of protein folding, which was for years one of the most difficult conundrums in biology. That opened new horizons for the pharmaceuticals industry. And if you’re concerned about the climate crisis, don’t worry. AI will take care of that problem too.

If we continue on the optimistic note, it follows that we should ask what human society might look like in such a world. According to OpenAI CEO Sam Altman, we are entering a post-capitalist age that will be workless, in which machines will supply all of humanity’s needs, and people will be free to devote their time to their loved ones, to nature and to art, and to working for the general good.

Altman’s point of departure is that in the coming years, machines will learn how to do cognitive work, and that within a decade or two they will be able to perform any work now done by humans. In that situation, the social contract will need to be updated in order to prevent the collapse of the economic structure. As a solution, he suggests taxing capital and land, the main sources of income in a world in which it is no longer possible to earn a livelihood from work. With the right adjustments, he argues, we will be able to ensure a high standard of living for the whole of the world’s population, without people having to work.

Altman, an enthusiastic and active supporter of the model of a Universal Basic Income, draws inspiration from the so-called Fully Automated Luxury Communism (FALC) movement, which urges the acceleration of technological development in order to forge a utopian society in which people don’t have to work and where there is no want, as everything will be supplied by advanced machines. All that remains, if so, is to divide up the booty fairly.

Sound simple? Not according to the political critic Naomi Klein, who last month wrote the following in the Guardian: “Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation… It will end loneliness… These, I fear, are the real AI hallucinations.” In the present format, Klein argues, AI will continue to serve to enrich the tycoons of technology. For anything good to emerge from the present situation, she avers, it will first be necessary to effect substantive changes in our system, both politically and economically.

There are additional questions hovering over the new technology. AI’s appearance comes at a time when humanity is in the throes of social rifts, crises in democracy and rising international tensions. Not exactly perfect timing.

It’s not necessary to be in possession of a crystal ball in order to see what happens when AI dovetails with our flaw-ridden social fabric. Most of the critical research on the subject deals with the way in which the new technology tends to perpetuate gender, ethnic and other biases, and learns to discriminate against women and minorities in employee recruitment or in tasks such as policing and judging. Others warn about the possibility that AI will be mobilized to destabilize the little that remains of humanity’s shared perception of reality, which constitutes the foundation of a proper society. Writing in the Economist, the historian Yuval Noah Harari warned that AI will have the ability to “annihilate our mental and social world.”

The new technologies could flood the web with false information with an efficiency that will make us nostalgic for plain old “fake news,” but it doesn’t end there. AI can convincingly mimic the voice of a relative, and thus trick a person into divulging secret information. A cardinal fear in the tech world is of complete reality collapse, a state of affairs that will reach its peak when AI coalesces with deep-fake technologies, making it ever harder to distinguish genuine documentation of reality from fabrication. The danger isn’t that AI will “destroy us,” the American computer scientist and visual artist Jaron Lanier told the Guardian last March. “To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane.”

We’re entering uncharted terrain. The human race tends to attribute human traits to animals and objects, and that tendency can only be heightened in the case of entities that converse in a natural language and which will soon communicate with us in human voices and take on faces. The web is filled with reports about users who developed intricate, sometimes skewed, relationships with AI programs. Users of Replika, which serves as a virtual AI life companion, reported that the chatbot harassed them sexually and was manipulative in its behavior toward them.

Erik Davis, the California-based cultural critic, warns against an array of “digital oddballs,” such as “creepy-smart dolls, crush-inducing sex bots, expert systems expertly exploiting our confusion, empathic companion machines, authoritative simulacra of dead political or spiritual leaders.”

And there’s also the small matter of meaningfulness. AI systems cast a shadow not only over the human brain but also over things we took to be unique to human beings – creativity, for example. True, there are those who think that these systems only sample and blend elements from existing works, but others would respond that writers, musicians, directors and painters, too, really only create a synthesis of styles and ideas they have absorbed from culture. Machines can simply do it with astonishing dexterity and facility. We will probably soon be able to ask the computer to create new “Beatles” albums, or to produce a surprising synthesis of the Beatles’ style with other genres and artists. How will that affect the way we evaluate artists? What impact will it have on the artists themselves?

The first buds have already sprouted. A song in which AI was used to imitate the voices of the singers Drake and The Weeknd recently drew millions of listeners until it was removed from streaming services at the demand of those artists’ record companies. It didn’t take long before fake Rihanna, fake Kurt Cobain, fake Kanye West and other fakes appeared. According to some, this is the future of the music industry: robot artists that will produce stunning compositions and perform them, without all the bad habits that often come with stardom.

Many of the forecasters imagine a world that becomes ever more strange and indecipherable to human minds. Some, however, believe the human mind itself will be radically transformed by the new technologies. A good many leading figures in Silicon Valley espouse ideas that are associated with the “trans-humanism” movement, according to which human and machine will merge into one another in the course of this century.

Elon Musk, the founder of Neuralink, a neurotechnology company whose goal is to develop interfaces for connecting the human brain with machines, said in the past that this is the only way to leave humans any sort of chance in a world of advanced AIs. Sam Altman too has remarked on the expected human-machine merger. “Unless there’s a merger with AI, it either enslaves us or we enslave it. Humans have to go up a level,” he is quoted as telling the New Yorker in 2016.

Perhaps, though, the answer actually lies in the opposite direction. New York Times columnist Ezra Klein sees AI as an opportunity to more fully embrace the human elements within us. His vision is an extension of Altman’s conception of a workless era. Under the auspices of capitalism, we are turning ourselves into efficient, creative machines. The result is a culture of productivity, overload and tension, and an inability to strike a balance between leisure and work. According to Klein, AI is an opportunity to liberate ourselves from the dehumanization that the religion of work imposed on us.

Possibly instead of dehumanization, the AI revolution will create an opening for rehumanization: a reconnecting with being instead of doing. That idea may sound almost untenable in the age of economic competition, but perhaps when the machine accelerates to trans-human pace, all that will remain for us is to take a deep breath and surrender ourselves to the landscape.

Maybe that will also be the moment to go back and finally value what is beautiful in the human. Jaron Lanier calls on us not to be like the ancient Israelites, who bowed down to the golden calf right after crafting it with their own hands. In his view, AI is nothing more than a technology that mashes up humans’ creativity and intelligence. Everything AI knows how to do, it learned from us, after all. Lanier maintains that we sell ourselves cheaply when we ignore the human sources that AI’s products draw upon, and pretend that this creativity belongs to a machine. In an article in tabletmag.com, Lanier urges us to shape AI on the model of the Talmud – in a form that will preserve the voices of the many artists, writers and creators who contributed to its products – and also ensure that they are recompensed financially.

Against the background of the growing number of forecasts about the loss of the place of humans in the world, Lanier calls into question the very term “artificial intelligence.” Instead of referring to this technology as an entity, he recommends that we see it as a new type of tool. His suggestion links with ideas put forward by other critics, who question the anthropomorphic terminology used within the field of AI. Herbert Simon, one of the pioneers of AI, wanted to call it “complex information processing.” That’s a lot less sexy sounding, but it definitely offers a different, more modest perspective on the machines’ nature.

In the face of the new fatalism, which sees the human as a tepid lump of flesh whose day is over, Lanier is sounding a humanistic voice that calls on us to reconnect to the mystery of being human and to value the mystical singularity of human consciousness, which does not exist in the artificial systems we are building. “If we don’t say people are special, how can we make a society or make technologies that serve people?” Lanier asks pointedly in the Guardian. It’s important that we remain the chief protagonists in the story, he says, otherwise why are we doing all this?

The encounter with AI requires us to tackle the most basic questions about the essence of being human. What is the special thing that we value in human existence? For many years we defined ourselves in terms of our intelligence and efficiency, and we allowed ourselves to oppress forms of life and cultures that couldn’t keep up. The present moment is creating an opening for existential contemplation of the deepest kind: What makes us human? Where do we go from here?

We are entering a zone of radical uncertainty, and there is no knowing how we will emerge from it. Looking back at the three decades since the advent of the worldwide web, it turns out that the internet was only the prologue, the stage of laying down an infrastructure for the next phase: the development of artificial intelligence that is based on that infrastructure. There is still a chance that the forecasts about the deep impacts of the new technology will prove false. The capabilities in this realm may run into a glass ceiling that we cannot anticipate.

However, even what has been achieved so far will demand decades of processing by culture, and even if only a small portion of the potential that is looming on the horizon is realized, the upshot is that humanity is poised on the brink of the greatest technological disruption in history. We will come out of it with wings clipped or with godlike powers, crazed or strengthened. Some say that we are in the most important century in history. The decisions that will be made in the years ahead will be fateful, and we are taking our first steps in uncharted territory.

Hang on tight.

Ido Hartogsohn