01 September 2020
Author: Professor Anthony Elliott
Smart algorithms that manage large swathes of enterprise operations, chatbots that converse with customers, and even robotic guides for surgery – they’re all part of the world of artificial intelligence (AI). But while there’s a public fascination with AI assistants and self-driving cars, very few people understand how AI is changing the world.
Here, Professor Anthony Elliott, Dean of External Engagement and Executive Director of the Hawke EU Jean Monnet Centre of Excellence at UniSA Justice & Society, shares his views on how AI is impacting the business world.
Artificial intelligence (AI) is fast changing the global economy. Like electricity, it’s invisible – a general-purpose technology that works its magic behind the scenes. The contours and consequences of AI remain elusive to us – we can’t see it in action, but we somehow experience its impact. And, like other general-purpose technologies, AI is becoming ubiquitous – everywhere and nowhere at once, it is both omnipresent and unnoticed.
But what is the real impact of AI on business, and what role should it play post-pandemic?
The impact of AI on our economy is an issue on which techno-pessimists and techno-optimists fundamentally disagree. On the one hand, the pessimists argue that technology will create far more problems for humanity than it solves. It will take our jobs, intrude on our privacy and possibly escape our control. It could imperil our way of life. As the late Stephen Hawking warned: “AI could spell the end of the human race”.
The pessimists argue that technology will become a self-perpetuating machine, with AI rendering humans increasingly obsolete. Reducing our reliance on AI is the only way to resolve what could become an existential crisis. Futuristic novels and movies perpetuate this view with their dystopian narratives.
Techno-optimists, on the other hand, believe technology will improve the world in unimaginable ways and at a scale incomprehensible to us today. It will do this by learning from itself in an exponential trajectory of mutual benefit to individuals and the economy. AI is poised to radically extend economic prosperity.
Right now, AI presents a conundrum. While its influence and penetration are growing, there has been no clear evidence of improvements in productivity across the developed world.
In 1987, Nobel prize-winning economist Robert Solow noted that the impact of the computer age was seen everywhere – except in national productivity statistics. His observation remained true for several years before some sectors in the economy surged, following the Internet revolution as well as breakthroughs in mobile telephony.
In a recent paper published in McKinsey Quarterly, Mekala Krishnan, Jan Mischke and Jaana Remes contend that the gap between technology adoption and the productivity boom is being experienced again in high-tech nations. In their survey of global corporations, less than a third of core operations were automated or digitised, and less than a third of products and services were digitised. So, while AI promises to deliver a huge shift in economic output and productivity growth, the benefits of the digital revolution have not yet materialised at scale.
Other industry assessments, however, insist that AI and automation will unleash unparalleled economic disruption, helping to fuel global growth as productivity and consumption soar. PricewaterhouseCoopers, for example, has estimated that AI could add around $16 trillion to the world economy by 2030. To put that into a geopolitical context, that’s more than the current combined output of China and India.
Against the backdrop of the digital revolution, in which companies are spending billions to develop deep learning technology, AI is transforming manufacturing and service-based industries.
Take, for example, the creation of microscopic robotic devices the size of a single cell (syncells), which might eventually be used to monitor conditions inside an oil or gas pipeline, or to search out disease while floating through the bloodstream. Their potential is astonishing.
AI also underlies the breakthroughs of Israeli start-up, 3DSignals, which uses sensors to track the sounds made by machinery with an algorithm able to alert managers to potential breakdowns or malfunctions before they happen. Another example is the innovations of Doxol, an AI system of drones and robots that monitors every stage of a construction process and can alert managers to any potential problems.
Business and enterprise look distinctly different in the light of artificial intelligence, and today’s exponential rise in innovation is only amplified if we connect these developments to biotechnology, nanotechnology and information science. But, if AI today is automating cybersecurity, helping programmers to be more productive, and automatically making business decisions, how might such transformations be affected by COVID-19, arguably the greatest crisis of the 21st century?
There’s no doubt that COVID-19 has threatened the very structure of world affairs and, in particular, business and enterprise. In an astonishingly short space of time during 2020, COVID-19 brought the world's factories to a standstill and severely disrupted global supply chains.
Thankfully, globalization is not only about moving manufactured goods around the world, but moving ideas, information and data too. And, amid the terrifying new socio-economic threats arising from COVID-19, we have also witnessed a surge of digital information that has turbocharged virtual networks and the flow of AI technologies into everything from healthcare to business. Again, thankfully, such hi-tech interconnectivity has proven immune to quarantine.
The world has recently seen unparalleled international research cooperation in the fight against coronavirus, and these global efforts have substantially involved AI and related new technologies.
Consider, for example, the COVID-19 High Performance Computing Consortium, a US partnership involving government, industry, and academia to provide access to the world’s most powerful supercomputers in support of coronavirus research. Involving Google, IBM, Amazon, Microsoft and NASA, the Consortium shared some 30 supercomputer systems with the world’s scientific community, enabling research scientists to run millions of simulations and identify factors that might make a targeted molecule a good candidate to defeat the deadly coronavirus.
AI technologies like Natural Language Processing (NLP) are helping researchers tackle COVID-19, and related breakthroughs in computer vision technologies have been deployed to detect early warning signals such as clusters of coronavirus symptoms in new places.
The next steps in the fight against COVID-19 are clearly being led by AI technologies, so much so that our health systems might well be on the verge of a transformation as fundamental as those already impacting business and enterprise across the globe.
Professor Anthony Elliott is Dean of External Engagement and Executive Director of the Hawke EU Jean Monnet Centre of Excellence at the University of South Australia. He is a Fellow of the Academy of the Social Sciences in Australia, Senior Member of King’s College, Cambridge, and Super-Global Professor of Sociology (Visiting) at Keio University, Japan. Professor Elliott’s recent book, The Culture of AI: Everyday Life and the Digital Revolution, is published by Routledge.