A tech lawyer’s perspective on AI

From a tech lawyer’s perspective, AI definitely is

  • … one of the most thrilling developments in technology: AI has the potential not only to change, but to redefine large parts of how we live, interact and do business;
  • … obviously one of the biggest buzzwords in today’s discussions and communications: It seems there is hardly any new tech product that does not come with some kind of (alleged) AI (or at least “sophisticated algorithms”);
  • … an often misinterpreted and sometimes ambiguous term: When discussing AI, part of the audience often thinks of “Strong AI” (with machines taking total control over mankind), while others are simply discussing “Weak AI” and the specific tasks that may be addressed by such systems;
  • … a great field for legal problem spotters: Let’s not stop here and let’s face and solve the challenges!

Various layers of discussion on “The Laws of AI”

Of course, new technologies always pose questions for the law and force society to make decisions about how to deal with new situations. This is true for artificial intelligence, but it is nothing new as such:

  • The industrial revolution in the 19th century already produced a large number of new areas of law, including factory law, commercial law, industrial law and energy law.
  • With the increasing spread of the automobile at the beginning of the 20th century, numerous issues had to be resolved (including strict liability), which led to specific regulations all over the world.
  • During the last 25 years, legislators saw a need for action in many internet-related areas, e.g. regulations for technical infrastructures, the allocation of existing and the establishment of new IP rights, and liability regulations.

Taking a high-level view of the current legal discussions related to “The Laws of AI”, there seem to be different layers of topics:

  • Layer 1 (social considerations): As with every new technology that has the power to introduce far-reaching changes and disruptions to our day-to-day lives and to the public, it is important to openly discuss and evaluate the possible benefits and downsides for society and individuals, and to define the basic guidelines.
  • Layer 2 (legislative considerations): Based on these basic guidelines, various questions have to be solved by legislators and regulators to help new technologies fit into the overall legal and regulatory framework. Challenges in this context may differ for each field of use (e.g. autonomous driving poses challenges quite different from those presented by AI-based asset trading). When trying to face and solve these challenges, it is obviously not always easy to keep pace with technological developments.
  • Layer 3 (judicial considerations): Applying existing laws to new technologies can be a challenge. The good news is that courts established general rules for the interpretation of laws long ago, and these rules can help (and in the past have helped) in this process. The remaining challenge is that proper judgements require a full understanding of the underlying facts, and presenting new technologies to the courts and educating judges in this context can often be a difficult task.

In the discussions at all three levels it is not always easy to separate public interests from the particular interests of specific groups, as the analysis is unfortunately not always forward-looking. With new technologies challenging existing business models, it seems that parts of the discussions around the “legal problems” of AI are mainly focused on defending one’s own existing business models. Horse-carriage drivers may have had their own view on the invention of the automobile; you may (easily) find similar examples in today’s discussions around AI.

In addition, it seems that discussions concerning “The Laws of AI” sometimes remain very much focused on particular details of the facts. An example is the sometimes very narrow discussion of “the moral dilemmas of self-driving cars” (i.e. the autonomous vehicle has to choose between two evils, e.g. killing two passengers or five pedestrians). Whether and how this dilemma can be solved is certainly an interesting legal question, but the general discussion on self-driving cars should not ignore the possible overall benefits to traffic safety, nor the fact that even humans in the same situation may not be able to solve the dilemma.

Example areas of “The Laws of AI”

Current discussions about AI include the following areas:

  • Regulation of new technologies (e.g. autonomous systems);
  • Rights to data arising from the use of artificial intelligence;
  • IP rights in AI-developed inventions;
  • Protection and handling of personal data;
  • AI-based smart contracting;
  • Allocation of liability.

There is far more out there. Let’s start solving!