
Response of the Tachlith Institute and the Federmann Center to the Principles of Policy, Regulation, and Ethics in Artificial Intelligence

By Dr. Tal Mimran and Dr. Elad Gil

January 1, 2023

To

The Ministry of Innovation, Science and Technology

The Ministry of Justice – Legal Counsel and Legislative Affairs Department

  

Dear Sirs,

 

We are pleased to submit the position of Tachlith – The Institute for Israeli Policy, and the Federmann Cyber Security Research Center at the Hebrew University, in response to the open call distributed on October 30, 2022, regarding the Paper on the Principles of Policy, Regulation, and Ethics in Artificial Intelligence (“Policy Paper”). We congratulate you on the comprehensive and professional work reflected in the document. Our response focuses on two main topics that we believe merit further in-depth study as the regulatory plan set out in the Policy Paper is advanced, as follows.

 

The difficulties arising from the choice not to adopt broad, economy-wide regulation should be addressed by formulating a more finely tuned regulatory policy, one that includes a national regulator and a uniform thinking methodology that would apply to all sectoral regulators.

 

The choice reflected in the Policy Paper not to adopt broad, economy-wide regulation (along the lines of the legislation advanced in Europe) constitutes a policy that gives precedence to the interests of technological innovation and entrepreneurship in the field of Artificial Intelligence (AI) over other values. This is a rational and understandable choice, but one cannot deny that it amplifies various societal risks inherent in the incautious implementation of AI and could lead to an unclear, incoherent policy across areas that overlap with one another. The Policy Paper aims to solve this problem by formulating a “regulatory policy”, but beyond offering a menu of tools (soft regulation, harmonization with international practice, etc.) from which each sectoral regulator may choose at its own discretion, this policy provides no clear-cut guidelines and does not guarantee the advancement of uniform policy targets at the national level.

 

We believe that in order to bridge this gap, two complementary steps should be adopted.

 

1. Institutionalizing a National Regulator whose role would be to guide the sectoral regulators and create harmony among them (similar to the role of the Israel National Cyber Directorate in regulating cyber defense).

2. Adopting a uniform methodology for examining the legal and regulatory response to the challenges of AI in the various sectors.       

  

In this context, it must be emphasized that a tighter, more centralized regulatory policy can make more room for the concept of placing humans at the center, and would in practice also promote innovation and prevent barriers in international markets. Within this balance, human rights do not hold back innovation; rather, they walk with it hand in hand. A stricter arrangement would be especially important for increasing human rights compliance among small corporations and startup companies, which in many cases do not adopt a voluntary commitment to human rights, for example under the United Nations’ Guiding Principles on Business and Human Rights. [1]

 

Promoting accountability of developers and operators of AI systems that impact fundamental rights

 

Making do with soft regulatory tools and an ethical commitment with respect to AI systems that can fundamentally impact human rights could create a democratic deficit, stemming from the absence of sufficient legal liability of AI operators towards users. This would harm a central target the Policy Paper sets for itself: placing humans at the center. Accordingly, we propose regulating the accountability of AI applications that explicitly impact fundamental rights, and we illustrate our stance through the test case of using AI to filter content on social media. This example suggests that, at times, adopting a deregulatory approach at the preliminary stages can entrench a problematic reality on the ground that later regulation may find difficult to undo.

 

We shall elaborate below.  

 

A. Tightening the regulatory policy: institutionalizing a National Regulator and adopting a uniform methodology for examining the regulatory and legal response


1. A central policy decision arising from the Policy Paper is the choice not to include, at this stage, a proposal to promote overarching legislation that would broadly regulate the development and use of AI tools. Inherent in this decision is a determination to accord precedence to the values of encouraging innovation and entrepreneurship, which are of supreme importance to the national interest of promoting the technological sector, while consciously assuming other risks: a lack of clarity and uniformity, and the real possibility of regulatory overlap alongside “black holes” that receive no coverage despite the risks arising from AI activity and development.

 

2. Rather than promoting comprehensive, binding regulation of AI operators, the Paper encourages self-implementation and internalization of the principles by AI operators and users, while clarifying the need to internalize these principles over time. Concurrently, the Paper proposes that the regulation promoted be aligned with international regulation, to prevent the erection of regulatory barriers unique to Israel. [2]

 

3. The response the Policy Paper proposes to the aforementioned risks includes the adoption of a regulatory policy, intended to serve as an umbrella framework for promoting a uniform, economy-wide policy, and a recommendation to establish a “governmental information and coordination focal point for AI regulation”. We believe these solutions do not provide a sufficient response to the aforementioned risks. The regulatory policy described in Chapter 6.1 is overgeneralized and does not create a framework for a uniform policy. In practice, it is an eclectic menu of proposals and tools (among them, attaching importance to harmony with global regulation, adopting a risk-management approach, calling for the implementation of soft regulation, and encouraging experimentation and public involvement) which different regulators are expected to read, interpret, and prioritize in different ways.

 

4. This shortcoming is particularly marked with respect to general-purpose AI systems, which are not expected to receive a uniform response from sectoral regulators, or at times any response at all, even though this is an area undergoing accelerated development that could have significant implications for safety and rights. [3] The proposed response, in the form of a governmental information and coordination focal point (Chapter 6.3), is in its current format expected to have limited impact, because the Policy Paper does not describe binding legal powers and authorities to be granted to this body when established. It is hard to imagine that a governmental body whose role is to advise and make information available would function as a body with teeth, capable of imposing a uniform policy and harmonizing regulation among sectoral regulators, each of which has a narrow point of view confined to its own sector. The likely outcome of this state of affairs is under-regulation with respect to certain applications, and overlapping, perhaps even conflicting and superfluous, regulation with respect to others.

 

5. This situation raises difficulties not only with respect to the development of sectoral regulation that is not synchronized with an overall national policy, but also with respect to the development of different branches of law. The AI industry’s development, and the wide distribution of its use, are expected to bring to the fore challenges to the interpretation and adaptation of the extant law in various branches: torts, consumer protection, competition, the prohibition against discrimination, and more. In the absence of a uniform legal policy, directed by an entity that examines these questions at an economy-wide level, there is a danger that different sectors will adopt conflicting interpretations and implement a non-uniform legal policy. The solutions proposed in the Policy Paper do not provide a response to this problem.

 

6. Against this background, we recommend considering two complementary solutions as part of developing the regulatory policy.

 

7.  Institution of a National Regulator


7.1 Instead of a focal point for information with limited, non-binding powers and authorities, we recommend instituting a National Regulator that would fulfill two primary functions. First, the National Regulator would serve as the “regulator of regulators”. Its role would be to guarantee harmonization and collective effort by all sectors of the economy toward realizing national policy targets in the area of AI. Within this framework, the Regulator would have binding authority to prescribe a uniform regulatory policy with respect to cross-industry AI applications, issue guidelines and interpretive commentary to sectoral regulators, and guarantee coordination in order to maintain uniformity in industry regulation. Second, the supreme regulator would develop the regulatory policy and advise the Government on the steps needed for its continued growth and advancement. Inter alia, the supreme regulator would assume responsibility for leading policy in the areas currently covered by chapters 6.4 – 6.7 of the Policy Paper.


7.2 Instituting a supreme regulator constitutes a desirable middle path between the alternative of broad, economy-wide regulation and the alternative of sectoral regulation, making it possible to enjoy a substantial part of the advantages of each model. The supreme regulator would act from an institutional position that allows it to formulate a uniform, economy-wide national policy, to serve as a body for the adoption and professional filtering of Israeli and international standards, and to guarantee a suitable level of flexibility among sectoral regulators, in a manner that does not conflict with national policy.

 

7.3 A similar National Regulator model, acting as the entity that regulates sectoral regulation, was adopted in Israel in the draft legislative memorandum for the Cyber Defense and National Cyber Apparatus Law (5778 – 2018). There are indeed substantive differences between national cyber defense and AI development and use. However, both concern a topic with broad national implications which, given the existence of sectoral challenges and the desire to maintain flexibility and variety between sectors, requires an overall national policy to guarantee uniformity and coherence.

 

7.4 We propose that the National Regulator have the authority to guide the regulatory authorities in realizing the national regulatory policy and to resolve interpretive legal issues concerning the extant law. Additionally, those who believe they have been harmed by the decisions of sectoral regulators would be able to turn to the National Regulator for reexamination of decisions concerning regulatory policy, and the Regulator’s decisions would have binding status.


8.  Implementation of a uniform methodology to handle regulatory and legal challenges


8.1 The expanded development and use of AI is anticipated to have a disruptive impact on markets and companies and, as a by-product, on the branches of law that regulate their activity. A substantial part of these influences will be cross-sectoral, and they should receive a harmonized evaluation and response. As stated, we do not believe this requires broad, economy-wide regulation (which would take a significant toll on local industry’s ability to realize its full potential). However, the Policy Paper does not currently provide the regulatory and legal community with methodological tools for handling the anticipated challenges uniformly, coherently, and rationally.

 

8.2 For example, occupational structures will undergo dramatic change with the appearance of advanced technologies, which in turn will generate new safety risks. These developments will impact the financial, medical, and academic-educational sectors, among others. Although the concrete impact may vary from sector to sector, the regulatory and legal challenges in the fields of employment law, occupational safety and hygiene law, tort law, and others are cross-sectoral and require a coherent response.

 

8.3 In order to create a shared language and coherent positions among the various regulators and legal advisors, as well as between the various ministries and within the Ministry of Justice, it is critical to implement a uniform thinking methodology for confronting the shared problems. With the help of such a methodology, it will be possible to realize a regulatory and legal policy that relies on a uniform organizing concept and coherence, instead of isolated islands of regulatory and legal responses, which will naturally be non-uniform and sometimes even contradictory.

 

8.4 We believe this topic should be under the responsibility of the Legal Counsel and Legislative Affairs Department at the Ministry of Justice, which should guide the remaining government units. In this paper, we wish to highlight a methodology published by the scholars Crootof and Ard in the Harvard Journal of Law & Technology in 2021. [4] Their article proposes an analytical framework for evaluating a technology’s disruptive impact on the affected branches of law and on the regulatory approaches and regimes that govern it, a framework that can be implemented with respect to AI applications. The proposed framework is innovative, and has become central to the research dialogue, because it does not assume that new technology automatically generates a need to update the law. Instead, it proposes to focus on the legal, normative, or institutional uncertainty the technology creates, and through that lens to determine the correct meta-regulatory approach and how the law and regulatory regime must respond.

 

8.5  Following are the main points of this methodology:

 

Diagram 1. Crootof – Ard Methodology for Evaluating the Legal Implications of Disruptive Technologies

 

8.6 The methodology comprises three stages:

 

8.6.1 In the first stage, regulatory and legal players are required to classify the type of legal uncertainty created by the technology among three categories: application uncertainty, normative uncertainty, and institutional uncertainty. Consider, for example, the installation of medical diagnosis machinery in healthcare clinics, intended to improve the speed and quality of the analysis of patients’ imagery and data. This could generate different levels of uncertainty (leading to over-regulation or to gaps in regulatory coverage) with respect to the application of the regulation of medical accessories and machinery: application uncertainty asks whether the extant regulation applies to AI-based machinery; normative uncertainty addresses whether applying the extant law would lead to a result that complies with the regulation’s purpose; and institutional uncertainty examines whether the relevant regulatory bodies (the Ministry of Health, the Privacy Protection Authority, etc.) have the institutional competence and legal authority to answer the first two questions.

 

8.6.2 In the second stage, assuming there is some kind of legal uncertainty that warrants a legal response, the regulatory and legal players are required to determine the appropriate initial meta-regulatory response. One possibility, which would usually be the default, is a permissive approach based on the presumption that the technology can be used without restrictive regulation, or under the extant regulation, until additional information emerges regarding the external costs it could cause. However, in certain cases where the known risk proves significant, the regulator might prefer to impose restrictions, or even an absolute prohibition on use of the technology, already at this stage. The initial response will usually be made under conditions of incomplete information, yet in most cases a normative analysis of the nature of the legal uncertainty will enable a rational response. [5]

8.6.3 In the third stage, the comprehensive response to the legal uncertainty is formulated, and legal/regulatory equilibrium is restored. The solution will usually fall within one or more of three categories: “extending” the extant law to apply to the new technology, developing new legal rules, or developing a new legal/regulatory regime.

 

8.7 We believe that a broad, economy-wide application of the Crootof – Ard methodology would enable the rational and cautious development of the law, with awareness of the needed balance between the values affected by AI-based technology, in a manner enabling coherent and uniform application across the various branches of the economy and the various legal and regulatory players.

 

9. Adoption of the above two recommendations should lead to a regulatory policy that remains flexible, yet is more centralized than the one proposed in the Policy Paper. Alongside the advantages described thus far, we believe such a policy would also help Israeli businesses enter international markets, especially the European market.


10.  The Danger of Non-Compliance with the European Standard


10.1 This argument is well demonstrated by the draft regulation proposed by the European Union, which prefers a broad, meta-regulatory approach graded according to the technology’s risk level. The European legislation imposes obligations with respect to AI systems classified as high-risk, including on manufacturers, importers, distributors, users, and third parties. [6] Among the obligations defined are human oversight, quality control, record-keeping, explainability, and more. Thus, for example, a reporting obligation is proposed for malfunctions and deficiencies in system functioning [7], and the need to enforce EU laws on AI systems is clarified. [8]

 

10.2 The Council of Europe is also promoting an attempt, at the inter-state level, to formulate a binding legal instrument for AI regulation. This document focuses on the protection of human rights, democracy, and the rule of law [9], while according special attention to vulnerable populations such as minors. [10] While consideration for vulnerable populations is reflected in other governmental regulatory guidelines [11], this was not done in the Policy Paper.

 

10.3  Implementation of the solution of a National Regulator and cautious, centralized development of the law using a uniform methodology could prevent the regulation applicable in Israel from falling behind the development of law in Europe and prevent a situation like the compliance crisis that occurred in privacy law.

 

10.4 One can assume that businesses in states that respond well and prepare in advance for EU compliance requirements will be in a better starting position to enter the European market. As we know, recognition of legislative compliance makes it easier, from a regulatory standpoint, to receive information about European data subjects. Granting this status is critical for many Israeli companies and carries great economic importance. We believe a National Regulator would know how to respond broadly and effectively to developments in Europe and would allow the Israeli market to be well prepared to enter Europe. One must remember that the case law of the European Union’s judiciary has, over the years, required expanded human rights protection as part of compliance recognition – a policy that a National Regulator could promote broadly. [12]

 

B.  Promoting Accountability of Developers and Operators of AI Systems that Impact Fundamental Rights 


11. The Policy Paper aims to promote accountability as a central ethical value: “AI developers, operators, or users must demonstrate responsibility for its proper functioning, and upholding other ethical principles in their activity, inter alia with attention to accepted risk management conceptions and the available technological possibilities.” [13] Throughout the Paper, accountability is described as one of AI’s most meaningful challenges, and accordingly desirable practices are proposed, such as explainability and human involvement.

 

12. The Paper includes a number of proposals to confront this challenge through soft regulation and ethics, and an effort was clearly made to avoid over-regulation that could constitute a barrier to innovation and technological entrepreneurship. We believe this is, on the whole, a worthwhile approach. However, deviation from it could be justified under two conditions.


13. The first condition is when AI use has an immediate and significant impact on human rights. The second is when there is concern that avoiding regulation at an early stage would lead to a reality in which later regulation would find it hard to provide a solution for the protection of rights. It is important to consider that sometimes, “the social consequences of a technology cannot be predicted early in the life of a technology. By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult” (emphasis ours). [14]

 

14. AI use has a deep impact on the democratic society we live in – whether with respect to the storage of user information, influence over public opinion, or influence and control over the main platforms of expression. AI is, of course, also involved in decision-making processes that could lead to violations of fundamental rights. The combination of soft regulation and the fundamental impact AI has on our lives could generate a democratic deficit in which, despite the great impact on the lives of users and citizens, AI operators do not bear sufficient legal liability.

 

To demonstrate our stance, we take the question of AI-based content filtering on social media as a test case.


15.  Preventing a democratic deficit through accountability – a test case 


15.1 The initial meta-regulatory approach to content-filtering practices on platforms was a permissive, deregulatory approach. It is well known that in the United States, federal legislation – Section 230 of the Communications Decency Act (CDA) – applies, granting platforms broad immunity from liability for content uploaded by users, while upholding their right to filter content without bearing liability. Over time, the large platforms have developed algorithmic tools that do a large part of the work of monitoring and filtering harmful content. In practice, these tools concurrently mold the scope of billions of users’ rights to expression, privacy, and safety on the web.

         

15.2 In recent years, the deregulatory approach has begun to crack, and its harms are surfacing. In the United States, we are witnessing the beginnings of state legislation aimed at limiting platforms’ power. Recently, a new social media law was passed in the State of Texas [15], which prohibits large social media networks from censoring, blocking, removing, or minimizing exposure to posts based on political stances. Beyond this prohibition, the law requires the provision of information about the removal of content and suspension of accounts, twice a year. [16] An additional interesting case, soon to be heard, is the Gonzalez v. Google affair [17] – a claim against Google, Facebook, and Twitter for allowing, and algorithmically promoting, posts in support of ISIS. The Supreme Court must now decide whether the platforms’ aforementioned traditional immunity stands. We are thus at a critical point, which could mark a curtailing of the immunity granted to social media platforms in the United States. [18]

 

15.3 Significant developments are also occurring in Europe. The Digital Services Act (DSA) imposes on platforms a series of regulatory obligations meant to guarantee the suitable prevention and handling of harms to users. For example, under the DSA, platforms are obligated to publish information regarding their content removal policy, including detailed breakdowns of the large amounts of speech removed. This increased transparency equips citizens with more information about how a social media network operates and grants the public more insight into the networks’ influence in the country. This joins the arrangements under the General Data Protection Regulation (GDPR), which imposes obligations on digital content platforms, particularly with respect to their use of AI. Thus, for example, broad arrangements are in place with respect to the right to be forgotten, and the ordering of search results in search engines and autocomplete mechanisms have become the object of rights.


15.4 The awakening regarding the importance of promoting the accountability of AI operators, particularly in the context of content filtering, has not skipped over Israel. Thus, for example, in defamation claims, liability has been attributed to search engines for the activity of autocomplete algorithms. [19] The judgment in the Savir v. Bar Noy case set forth, in particular, that when the tortfeasor does not cooperate or refuses to remove an erroneous publication, the search engine must act to remove it where the search result is erroneous. [20] The judgment’s later implementation in the Barbi case indicates the beginnings of case law recognizing a limited “notice and takedown” mechanism with respect to Google, under tort law. [21]

 

15.5 The important point with respect to all these regulatory developments is that they constitute a delayed response – too late, it would seem. The user procedures and arrangements under which content platforms act were molded, in Israel and worldwide, in an unregulated environment, and scholars believe this created a series of social ills that current regulation will have difficulty uprooting. The platform industry blossomed into a significant part of the digital-age economy, and the immense powers involved make it hard to develop and adopt regulation that will effect real change. The AI tools that mold and realize user rights sprang up in a reality of distorted platform incentive mechanisms, apparent in the spread of harmful content on the one hand and the over-censorship of users without due process on the other.

                

16. We believe that this unfolding of events can teach us an important lesson in the present context: AI tools that are operated by private actors could have a significant and immediate impact on human rights. Late regulation of these tools might not suffice. For this reason, in examining the regulatory regime that should apply in Israel, it would be appropriate for the regulator to consider the more immediate development and application of a regulatory tool to promote the accountability of operators of AI systems of this kind.

 

17. Preventing a “democratic deficit” with respect to systems that impact human rights would advance a central objective the Policy Paper sets for itself: placing humans at the center. In addition to improving the protection of AI users’ rights, this would increase public trust in AI systems. By comparison, the AI regulation framework currently being promoted in the European Union takes broad societal considerations into account, in contrast to the sectoral approach proposed in the Policy Paper.

 

In conclusion, the regulatory regime to be adopted in Israel is expected to substantially impact the development of the industry and the societal uses of this technology alike. We believe the conclusions adopted in the Policy Paper constitute a positive development in the growth of beneficial regulation. In this paper, we endeavored to elucidate two main areas that require additional attention to the possible implications of the chosen policy. We also sought to underline modifications that would guarantee that, alongside the promotion of entrepreneurship and innovation, Israel develops a coherent and uniform regulatory environment that knows how to guarantee the protection of human rights.

 



We would be happy to elaborate on any matter concerning the open call and this position paper, as well as to discuss our position with you. We are at your service for any questions.


Dr. Elad Gil

Senior fellow and head of research at the Tachlith Institute for Israeli Policy.


Dr. Tal Mimran

Head of the “Social Pact for the Digital Age” at the Tachlith Institute; researcher and lecturer on international law and cybersecurity.


Note – For references cited in this article, see the original Hebrew text.



