Joanne Fischlin (firstname.lastname@example.org) is head of corporate, external & legal affairs at Microsoft Gulf, Dubai, UAE.
A few years back, we entered what Klaus Schwab, executive chairman of the World Economic Forum, first called the fourth industrial revolution. Like previous industrial revolutions, it has changed the dynamics of our society, creating new opportunities, retiring others, and giving rise to great innovation. Yet one thing is significantly different today: the sheer ubiquity of technology in our lives and the speed of change. It took 38 years for the radio to reach 50 million users, 13 years for the television to reach that number, and less than a year for Facebook to do the same. The pace of innovation and its adoption by companies and consumers is frankly mind-blowing.
As legal, compliance, and business ethics professionals, we are still getting our heads around what artificial intelligence (AI) is, how it works, the risks associated with such a technology, and what safeguards we should be thinking about now that it has entered our workplaces, not to mention our personal lives. What this means is that the companies we work for are no longer just active in their sector; they are all slowly becoming technology companies themselves, which brings yet another set of challenges. The pace of AI’s innovation and its proximity to human intelligence affect us at personal and societal levels and require us to think broadly about these issues, well beyond a simple checklist.
In this article, we explore how AI and its adoption by businesses across all sectors bring an additional set of challenges that legal and compliance professionals will need to assess, mitigate, and monitor to ensure fair, transparent, accountable, and explainable use of AI in their organization.
The promise of AI
“In a sense, artificial intelligence will be the ultimate tool because it will help us build all possible tools” - K. Eric Drexler
There is no universally agreed-upon definition of AI across the tech sector. One helpful way to think about it comes from David Heiner, former Vice President and General Counsel at Microsoft: “AI is a computer system that can learn from experience by discerning patterns in data fed to it and thereby make decisions.” Accenture describes it as “a constellation of many different technologies working together to enable machines to sense, comprehend, act and learn with human-like levels of intelligence.”
Whichever definition one prefers, the promise of AI is that the knowledge gained from applying analytics and machine learning to the wealth of available data will enhance any decision-making process with additional intelligence, leading to better outcomes. “Today’s AI technology can already save thousands of lives and improve the performance of many systems across all sectors. For example, in healthcare, AI can reduce hospital readmission, enhance the quality of care for managing chronic disorders, and catch preventable errors in hospitals (the third leading cause of death in the US) by recognising anomalies in best clinical practices.”
In this context, will the future give birth to a new field called “AI law”? Today’s AI law feels a lot like privacy law did in 1998. We’re not yet walking into conferences and meeting people who introduce themselves as AI lawyers. By 2038, it’s safe to assume that the situation will be different, but currently, we are evolving in a world where complexity is rising by the day and regulatory frameworks are often absent or ill-adapted. In an ideal world, we’d get the necessary clarity and then move forward, but as we all know, that’s very unlikely to ever happen. Our businesses and customers, especially in these times of crisis, need the technology to help them better compete, save costs, and be more productive. Being ahead of the game entails uncertainty, complexity, and speed. And we get to deal with all of that.
AI has developed in fits and starts since the late 1950s, but three recent technological advances have provided the launchpad from which AI has taken flight. First, computing power finally advanced to the level required to perform the massive number of calculations needed. Second, cloud computing made large amounts of this power and storage capacity available to people and organizations without the need to make large capital investments in massive amounts of hardware. And finally, the explosion of digital data made it possible to build massively larger data sets to train AI-based systems.
The ability of a computer to learn from data and experience and make decisions rests on two fundamental technological capabilities: perception and cognition. Perception is the ability of computers to sense what is happening in the world the way humans do, through sight and sound. Vision and speech recognition have long been holy grails for researchers in computer science, but cognition (the ability of a computer to reason and learn) is what is critical in making artificial intelligence effective. Thanks to computer-based multilayer neural networks and deep learning, which connect computational units (referred to as neurons) and feed them huge amounts of relevant data until they recognize a pattern, AI can now reason and draw the insights that businesses need to remain in the game.
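To make the idea of “feeding data until a pattern is recognized” concrete, here is a minimal, purely illustrative sketch: a single artificial “neuron” that learns the logical AND pattern from four labeled examples using the classic perceptron learning rule. This is the author’s concept in miniature, not any specific product or system; the function names and learning rate are this example’s own assumptions.

```python
# Illustrative sketch only: one "neuron" learning a pattern from data,
# the basic building block behind the multilayer networks described above.

def step(x):
    """Threshold activation: the neuron fires (1) if the weighted input clears 0."""
    return 1 if x > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge the weights toward the correct answers."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            pred = step(w0 * a + w1 * b + bias)
            error = target - pred          # how wrong was the neuron?
            w0 += lr * error * a           # adjust each weight in proportion
            w1 += lr * error * b
            bias += lr * error
    return w0, w1, bias

# Training data: the AND pattern the neuron must discover for itself.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train(data)
predictions = [step(w0 * a + w1 * b + bias) for (a, b), _ in data]
print(predictions)  # → [0, 0, 0, 1]: the neuron has learned the pattern
```

Real deep learning systems stack millions of such units in layers and train them on vastly larger data sets, but the principle of adjusting internal weights in response to errors is the same.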
So, AI has the power to augment human ingenuity, help save lives, save the planet, and much more, but can it be managed like any other piece of software? What should we be thinking about when our organizations are looking at adopting AI?
First, I think it is reasonable to say that no one has all the answers when it comes to the risks and benefits of AI, so a fair amount of humility is a very good starting point. Then, having a reasonable understanding of what AI is and what ecosystem it needs to thrive is a good baseline to anchor one’s thinking.