There are many lessons we can take from history about how to approach industry challenges, technology, environmental blunders, and even innovation, but movies do not typically fall into that category. Star Wars, Star Trek and Back to the Future may come readily to mind when we talk about science fiction, but the most relevant to our time is arguably The Terminator, with its futuristic storyline involving Artificial Intelligence (AI)! More about the actual movie later, but first let's talk about AI itself and its connection to ChatGPT, which has raised the bar in our present-day understanding of what the technology is capable of doing.

AI can be friend or foe–and even both at the same time without you knowing. That's the unprecedented level of sophistication AI brings to the virtual table. It's a view shared by Elon Musk and the late Stephen Hawking, who both recognized the threat of a no-holds-barred pursuit of AI. There's no doubt about the good AI can deliver, but too often we focus on only one side of the equation and blind ourselves to the other. This time around, the stakes are too high for such a catastrophic mistake–there's no escaping the event horizon of the AI black hole once it's crossed. The problem is that we won't know when we've crossed it. There is no flashing red light or alarm that will be triggered, as at a nuclear power plant when above-normal radiation levels are detected. We likely won't know that we have gone too far in enriching AI until we already have!

Paradoxically, history has shown that we learn our greatest lessons in hindsight–we go too far, and then, as we pick up the pieces, we put measures in place to avoid a repeat. The rapidly spreading and disruptive COVID-19 pandemic was one of the most recent lessons in how vulnerable we are to viruses and how ill-equipped we are to handle anything of that nature–surely we'll be ready next time. The 9/11 attacks showed how vulnerable our airlines were to terrorists and how planes could be deployed to create catastrophe and mayhem. We learned that lesson, and measures are now in place to reduce the potential for recurrence. The concern with AI going awry due to a misjudgment is that there is no turning back–no redemption! Once AI goes rogue for whatever reason–willful deployment or honest mistake–it cannot be reined in, and we don't get a second chance. In a sense, now is our second chance–and there is a movie with a major lesson to behold!

The Terminator is a 1984 action film directed by James Cameron, starring Arnold Schwarzenegger as the Terminator, a cyborg assassin sent back in time from 2029 to 1984 to kill Sarah Connor, whose unborn son will one day save mankind from extinction by Skynet, a hostile artificial intelligence in a post-apocalyptic future (source: Wikipedia). Alarm bells should be ringing by now. OpenAI CEO Sam Altman was quoted in The New York Times saying that AI's "benefits for humankind could be 'so unbelievably good that it's hard for me to even imagine.' (He has also said that in a worst-case scenario, A.I. could kill us all.)"

What we should learn is that there are times when we need to step back before stepping forward. In doing so, we need to set ethical and technical boundaries within which AI is developed responsibly, so that it helps us solve current problems–not create new, bigger ones.

Undoubtedly, a digital, automated world has numerous advantages over the more analogue, labour-intensive one we currently live in, but that antiquated structure contains many safeguards that have become inherent by design. Safety is still a human-driven effort, and while a machine could be programmed to make safety decisions, our human abilities allow us to make pre-emptive judgment calls that can avert a situation long before a machine would process the same environmental conditions. Full digitalization, therefore, does not automatically mean a better and safer world. Furthermore, with 8 billion people on the planet, there are plenty of people available to do work. The thrust should not be on using AI to replace the blue-collar manual labour force, but on augmenting it and completing tasks that are highly risky or for which skilled personnel are in short supply relative to demand. There is very little guidance available on how government funding is being streamlined and focused on what is really needed to support growth and development into the future. A pure interest in digitalization and innovation, without a clear vision of goals and targets, blatantly ignores the lessons of the past!

The combustion and steam engines of the past solved the problems of mobility and labour shortage and were welcomed as the innovations and disruptors of their times, but they came with a heavy price–one that cannot be measured in dollars and cents. They created environmental problems that we are still grappling with more than a century later. AI has the potential to leapfrog that into unimaginable territory–The Terminator, a movie made 40 years ago, forewarns us of this.

AI presents tremendous opportunities for humankind, as Altman noted, but the threat is equally real on several fronts–even extinction! The goal is not to limit AI, but to set the ground rules so that the climate change challenge is not usurped by an even more significant existential threat–one humankind would surely lose!

Support responsible and sustainable AI development and deployment.

Categories: Visioning
