A Holistic Approach to Developing an Innovation-Friendly and Human-Centric AI Society

IIC - International Review of Intellectual Property and Competition Law, Sep 2017

Axel Walz


IIC – International Review of Intellectual Property and Competition Law, November 2017, Volume 48, Issue 7, pp. 757–759. Editorial. First Online: 18 September 2017.

Artificial intelligence (AI) and robots have moved far beyond the confines of utopian novels and science fiction movies. AI and AI-driven robots have already become an integral part of our daily lives, as seen for example in automated chat bots used for customer service and in voice assistants such as Alexa, Cortana, Siri and Watson, which handle requests on smartphones and computers. How powerful AI can be has been demonstrated by the much-discussed victories of AI-driven bots over human champions in games such as chess, Jeopardy, Go and Dota 2. But AI is also on the rise in more critical applications, including automated driving, surveillance of public spaces, managing investment portfolios, conducting complex surgeries and rendering care services in hospitals and nursing homes.

1 The Need for a Risk–Benefit Analysis

Notwithstanding the possible beneficial applications and the prospect of resolving, or at least mitigating, critical issues such as the shortage of skilled workers in an aging society, the rise of AI is also viewed from a highly critical perspective. The potential loss of human interaction, with its unknown sociological and psychological effects, the loss of jobs, with likewise unknown economic and eventual sociological and psychological impacts, and the fear of a potentially selfish and dangerous artificial general intelligence have become matters of increasing public concern. Even leading innovators such as Elon Musk and Bill Gates, as well as the astrophysicist Stephen Hawking, have vigorously and repeatedly warned against potential dangers resulting from the use of AI.
The question of the extent to which such warnings and fears are justified requires in-depth research involving experts from all technical and non-technical fields, including ethics, sociology and law. Because the implications of using AI as a new technology are unknown, and considering the principle of prevention as the basis of European consumer protection policy, one might advocate a more careful approach, calling for extensive testing of this new technology rather than the immediate implementation of AI in all kinds of devices and service concepts. This applies all the more because what happens, for example, in a deep learning environment is considered a black box even by the developers of these technologies themselves. On the downside, such an approach would certainly tend to delay AI-related innovation and might in some cases prevent AI from being implemented where the potential dangers or unwanted side effects appear unacceptable. On the upside, however, a more preventive approach would form the basis for a more sustainable development and use of AI in a manner that is ultimately beneficial for humanity. This approach has, for instance, already been adopted with regard to medicinal products, which may only be placed on the market subject to complex marketing authorisation procedures. Similarly, medical devices must undergo a conformity assessment before they may be used in practice. The same applies to many other products, including cars, planes and nuclear power plants.

2 Is AI in Need of New Regulation?

Whether new marketing authorisation requirements or other regulatory means are necessary to ensure that AI remains under control and does not cause disproportionately harmful or otherwise unwanted effects can only be determined within a complex process of discussing and weighing potential harmful and disadvantageous effects. The complexity of that process results from the fact that the risks associated with AI are not limited to immediate threats to human life or health.
Rather, on a psychological and consequently sociological level, AI may result in fundamental changes in methods of social interaction and may ultimately even affect what is considered formative for human self-perception. For this reason, the idea of market failure as a precondition for new regulation should, in the present scenario, not be interpreted from an economic point of view alone. Market failure should instead be understood broadly, in the sense that regulation may be required where a free market, which normally regulates itself through supply and demand, is not in a position to bring about goods and services that are ultimately beneficial to humanity. What is beneficial for humanity, and which aspects will have to be taken into account in this regard, are probably the most difficult questions to be asked, in particular because the same principles that may demand regulation in order to protect human life, health and dignity at the same time seek to avoid an overly paternalistic approach. In view of the potentially fundamental impact that AI may have on human society and its basic values, as incorporated in particular in fundamental rights, only a holistic approach can ensure that all aspects of a human-centric society are taken into account. Further, concerning the market failure doctrine, the need for regulation of AI should also be examined from a normative perspective, by determining the framework for a society that is human-centric but at the same time innovative. Accordingly, policymakers and scholars in particular are called upon to engage in an interdisciplinary debate on how to build a human value-based AI society that we would want ourselves and our children to live in in the future.

3 A Competition-Oriented Approach

Regulation, though, is not the only possible approach to retaining control over AI, and in many cases it may not even be the best one.
It is important in this regard to bear in mind the nature of AI: the term as such only describes the phenotype of different technologies that may be used in devices we consider intelligent. AI may involve various technical approaches, such as machine learning, deep learning or cognitive computing, each of which is based on specific algorithms. There is, however, not one AI and not one machine or deep learning algorithm; rather, AI can be constructed in different ways, with different algorithms as its building blocks. Competition between different technologies can ensure a variety of products and services supported by different AI algorithms with different features, strengths and weaknesses, from which customers can choose. At the same time, a diversity of algorithms used in AI applications would help to avoid an information and value bias. Dominance by individual companies owning the respective technologies could thereby be avoided and kept under control. Hence, the appropriate application of competition law, in addition to any eventual regulation, is one of the tools for creating and maintaining both diversity of and control over AI-related technologies. In addition, in order to incentivize AI developers to create their own diverse algorithms as a basis for AI applications, a well-balanced intellectual property rights regime must be in place. In view of AI being primarily built upon software, IP researchers should critically review whether the balance between incentivizing developers by granting IP protection and maintaining free competition is still appropriate.
In Germany, the Federal Patent Court has repeatedly rejected AI-related patent applications.[1] Whether the patent granting practice as such is appropriate is a complex question, all the more so because a corresponding legal assessment must not be limited to a review of patent law but needs to consider other means of protection, including copyright law, as well as the free-market-inspired need to create diverse and multi-technological AI systems. Still, the legal community concerned should take up this challenge because, in addition to a well-balanced regulatory framework, a well-balanced IP and competition law system can add significant value to the goal of creating an innovative and human-centric AI society.

[1] German Federal Patent Court, Case Nos. 17 W (pat) 37/12; 17 W (pat) 4/10; 17 W (pat) 101/07.

© Max Planck Institute for Innovation and Competition, Munich 2017

Axel Walz, PhD (Dr. iur.), Senior Research Fellow, Max Planck Institute for Innovation and Competition, Munich, Germany


Axel Walz, "A Holistic Approach to Developing an Innovation-Friendly and Human-Centric AI Society", IIC – International Review of Intellectual Property and Competition Law 48 (2017), 757–759, DOI: 10.1007/s40319-017-0636-4