Artificial intelligence and the fog of innovation: a deep-dive on governance and the liability of autonomous systems


Alan Turing, in his famous 1950 paper, “Computing Machinery and Intelligence,” wrote, “we can only see a short distance ahead, but we can see plenty there that needs to be done.” [1: A.M. Turing, Computing Machinery and Intelligence (1950).] This sentiment, expressed nearly 70 years ago in the context of whether machines can think, reflects the current momentum of recent technological breakthroughs to endow machines with the ability to make intelligent decisions: the concept of Artificial Intelligence (AI). While the notion of AI is not novel, it has recently become a driving force in industry because of compounded advances in the availability of big data, in machine learning approaches and algorithms, and in powerful computing mechanisms. [2: Executive Office of the President, National Science and Technology Council, Committee on Technology, Preparing for the Future of Artificial Intelligence at 6 (Oct. 2016) (discussing big data, improved machine learning approaches and algorithms, and more powerful computers as three factors that began driving progress and enthusiasm for AI around 2010).] More importantly, these technological breakthroughs have provided tangible demonstrations of how AI can be infused into nearly every domain to address society’s greatest challenges.

Even with these advancements, however, the exploration of AI remains in its infancy as society seeks to understand and overcome the technological, social, and legal challenges posed by computer systems endowed with human characteristics and abilities.

Turing’s sentiments about progress in AI are not unique to technological development; rather, they stand as a modern summation of the legal and social thinking that continues to be demanded by society’s pursuit of scientific means to augment the human experience.

This paper is designed to further the discussion of AI governance and, specifically, the role of liability as an indirect form of regulation. Part I examines the technological foundation of AI, as well as the promises and perils it holds, as a precursor to understanding the encompassing issues of law and policy. Part II explores the technological, legal, and social barriers to AI governance, including how governance issues are compounded by the blending of AI with other technological domains, such as privacy, big data, and cybersecurity. In light of these challenges, it is likely that judicial decisions surrounding tort liability will be a driving force in shaping the AI landscape. Lastly, Part III analyzes the competency of traditional liability regimes to remedy harms caused by AI systems. Of the traditional tort regimes, strict liability is the most readily harmonized with emerging AI technologies. However, as the technology pushes toward greater autonomy in effectuating action, legal principles of agency become too attenuated to apply, and allocating the costs of harm becomes more complex. Absent a new approach to law and policy, it is unlikely that current liability rules will satisfy the expectations of the judiciary and the public as the underlying technologies develop. Considering these challenges, law and policy directed toward AI will likely require society to accept solutions that may support conflicting values but are beneficial to humanity overall.


A. AI, Machine Learning, and Algorithms: A Technical Foundation

In recent years, AI has been thrust to the vanguard of technical development as nation states, private industries, and researchers seek to understand and exploit its potential. [3: Louis Columbus, McKinsey’s State of Machine Learning and AI, 2017, Forbes (July 9, 2017) (“Tech giants including Baidu and Google spent between $20B to $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions.”).] Despite its prominence in the global technological realm, there is no universally accepted definition of AI. In a broad sense, AI constitutes a computerized system that can rationally solve complex problems or act appropriately to achieve an objective. [4: Executive Office of the President, National Science and Technology Council, Committee on Technology, Preparing for the Future of Artificial Intelligence at 6 (Oct. 2016), /preparing_for_the_future_of_ai.pdf (“Others define AI as a system capable of rationally solving complex problems or taking appropriate actions to achieve its goal in whatever real-world circumstances it encounters.”).] Some experts narrow the scope of AI using taxonomies that reflect the function, capabilities, or problem space of the system. For example, venture capitalist Frank Chen categorizes the problem space of AI into five general groups: logical reasoning, knowledge representation, planning and navigation, natural language processing, and perception. The difficulty in defining what actually constitutes AI stems from the expansive range of problems AI is expected to conquer and from the underlying performance of the algorithms that fuel AI development. Because the problems AI addresses shift fluidly between routine data processing by algorithmic systems and machine learning that requires intelligent computer programs, a problem is often viewed as requiring AI while it remains unsolved, yet as routine data processing once it is answered.
While the definition of AI may be fluid and inexact, at its core is the pursuit of AI applications that can systematically produce intelligent behavior.