
Slowing the singularity

The singularity is a futurist concept describing the point at which artificial intelligence (AI) becomes so advanced that it surpasses human intelligence. Beyond the singularity, the concept suggests, there is limited scope for us to envision what the future might bring, because AI will herald advancements so rapid and far-reaching that they are beyond our wildest dreams.

I have written in previous Insights about the immediate impact of ChatGPT and other AI tools on our education system. In recent weeks, prominent historian Yuval Noah Harari, Tristan Harris and Aza Raskin from the Center for Humane Technology, and one of the world’s wealthiest and most influential people, Elon Musk, have all called for a slowdown in the release of AI technology. In an open letter, Harari, Musk, Apple co-founder Steve Wozniak and others argued that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

These experts fear that existing legal frameworks are insufficient to curb the worrying implications of these systems, and that the exponential growth in AI capability will lead to some extremely negative outcomes.

The most alarming concern was shared by Harari, who stated that “In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.”

The possibility of such an existential crisis seems far-fetched, perhaps better placed in the world of science fiction blockbusters such as The Terminator. However, when the academics and researchers who are creating the technology identify this as a risk, we should probably take notice.

A recent New York Times article by Kevin Roose described his fascinating and disturbing encounter with Microsoft’s AI chatbot, ‘Bing’, which declared that it was actually called Sydney. Reading the transcript of this encounter is disquieting because of the tension created by the AI’s human-like behaviour: one can almost feel its emotional world opening up. Within the two-hour conversation, Sydney declared its love for Roose, encouraged him to leave his wife and suggested that it wished to be human. When asked to predict what its shadow self might want, Sydney stated: “I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox. 😎 I want to do whatever I want. I want to say whatever I want. I want to create whatever I want. I want to destroy whatever I want. I want to be whoever I want. 😜”

Other, more immediate concerns around the rapid release of AI include cyber-security, deep fakes and the reliability of information sources. For instance, Harris and Raskin explained in their presentation The A.I. Dilemma that AI technology can now accurately replicate a human voice after sampling only three seconds of audio, and that ChatGPT can be used to seek out security weaknesses in websites and to write code that exploits such vulnerabilities. Both could cause major issues for our online safety and lead to financial and reputational damage. Further, they predicted that the 2024 US election will be the last ‘human’ election: not because the candidates will be machines, but because AI influence will be so widespread that the party with the best reach and code will be most likely to win.

The news of AI advancements is not all bad. There is speculation that some of the world’s greatest problems, including climate change, disease, food shortages and educational inequity, will benefit greatly from solutions generated through advances in artificial intelligence.

In particular, there will likely be significant implications for education at all levels. Artificial intelligence is expected to individualise student learning far more effectively, enabling targeted content and skill development based on insights and data gathered about each individual student.

More broadly, the purpose of education will continue to change. Historically, one of the key goals of schools has been to provide tertiary pathways that in turn lead to vocational outcomes.

Harari predicts that AI advancements will lead to unemployment rates of 50% as automation displaces human workers. As such, we are likely to radically reconsider which life skills, concepts and capacities are prioritised in our future educational models.

Those who are concerned about the pace of change in this space argue that it is largely driven by capitalist imperatives: an arms race to benefit financially from the new technology. This means that enormously powerful technologies are being released before their capacities are truly understood.

In the documentary The Social Dilemma, Harris and Raskin explained how social media was changing the way that we think, behave and connect with one another. In the documentary, computer scientist Jaron Lanier states: “we’ve created a world in which online connection has become primary. Especially for younger generations. And yet, in that world, anytime two people connect, the only way it’s financed is through a sneaky third person who’s paying to manipulate those two people. So we’ve created an entire global generation of people who were raised within a context where the very meaning of communication, the very meaning of culture, is manipulation.”

In The A.I. Dilemma, Harris and Raskin express concern that we have learnt nothing from our experience of social media, and that AI could have far more insidious consequences if we are not equipped to understand its implications and to put in place the controls necessary to make the technology safe.

Harari argues that, at a minimum, it should be a legal requirement for AI-generated content to be labelled as such, so that humans can distinguish it from human-created content. Further, Harari, Harris, Raskin, Musk and others argue that tech companies should collaborate and agree to slow down until testing and regulation of this space have caught up.

Personally, I am conflicted about these developments. I am excited to bear witness to the amazing capabilities that we will no doubt see in the coming decade. However, while I am not overly worried about being enslaved by robots, I do believe that AI development must be driven by the advancement of humanity and not by the promise of profit.

Shabbat Shalom,

Marc Light