We are observing increasing efforts to regulate the use of AI systems and deepfake-produced content across jurisdictions. A recent draft law (the “Proposal”) was submitted to the Turkish Parliament, proposing amendments to several existing laws. The Preamble of the Proposal, which sets out its general reasoning, stands out from other legislative proposals with its direct references:
“One should not forget that if AI is not handled properly, it may pose risks that outweigh its potential benefits. Technology entrepreneurs such as Facebook CEO Mark Zuckerberg emphasize that while AI may save lives, it may also be exploited malevolently; and, therefore should be handled carefully. On the other hand, Tesla and SpaceX CEO Elon Musk considers AI to be one of the greatest existential threats facing humanity and argues that regulators must take a proactive stance.”
The Proposal has faint echoes of the EU AI Act in relation to the governance and accountability of AI systems. However, whereas the EU AI Act is predominantly a product safety law built on a risk-based approach, the Proposal is limited in scope, predominantly addressing deepfakes, violations of personal rights and cyber security risks attributable to AI systems.
I. The Internet Law and the Penal Code
The Proposal introduces the following definition of ‘AI systems’ into the Internet Law:
“Any software, model, algorithm, or programming entity that performs specific tasks by processing data with limited or no human intervention, that autonomously or semi-autonomously produces outputs, makes decisions, offers recommendations or operates by means of machine learning, deep learning, artificial neural networks, algorithms or similar technological means.”
a. Catalogue Crimes
The current Article 8/1 of the Internet Law contains an exhaustive list of criminal offences (e.g., sexual exploitation of minors, incitement to suicide) (hereinafter, the “Catalogue Crimes”) against which a content removal or access blocking order may be issued. The Proposal aims to incorporate several additional categories of criminal offences into the Internet Law:
- The offence of insult, as already defined in Article 125 of the Turkish Penal Code.
- The Proposal also introduces a new subcategory of insult to be inserted into the Turkish Penal Code, as follows: “Any user that directs an AI system to commit an act that constitutes a crime under the Turkish Penal Code shall be treated as the perpetrator of that crime and punished accordingly. The penalty shall be increased by half for developers whose design or training of the system enables the commission of such offences.” Oddly, the envisaged subcategory is general in nature and does not specifically concern the act of insult.
- The offence of threat, as already defined in Article 28 of the Turkish Penal Code.
- Crimes against humanity, as already defined in Article 77 of the Turkish Penal Code.
The Proposal adds that the Turkish Penal Code shall apply to the social network providers on which the artificial intelligence operates in cases where the Catalogue Crimes are committed. The reasoning provided for this provision emphasizes the intention to extend the criminal liability of social network providers.
b. Unlawful outputs generated by AI systems
According to the Proposal, access blocking and content removal measures against AI system-generated outputs that violate personal rights, endanger public security, or have been fabricated with deepfake technology must be implemented within six hours. The Proposal notes that content providers and AI system developers shall be jointly liable for this obligation. Remarkably, the notion of ‘AI system developers’ is not defined in either the Internet Law or the Proposal, and there is no reference to, or distinction between, the notions of ‘AI provider’ and ‘AI deployer’ as in the EU AI Act.
The Proposal also introduces a new article into the Internet Law that specifically governs deepfakes. The proposed article implicitly defines deepfakes as “the fabricated generation of visual, auditory or textual content by means of AI Systems”. The Proposal requires that such content be labeled with a clear, comprehensible, and indelible statement indicating that it has been artificially produced, namely the label “Generated by AI”.
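For illustration only, the sketch below shows one way a provider might attach the disclosure contemplated by the Proposal to generated text: a visible label prepended to the output, mirrored in machine-readable metadata. The function and field names are our own assumptions; the Proposal itself prescribes only that the statement be clear, comprehensible, and indelible.

```python
# Illustrative sketch only; function and field names are assumptions,
# not requirements taken from the Proposal beyond the example label text.
from dataclasses import dataclass, field
from datetime import datetime, timezone

AI_DISCLOSURE_LABEL = "Generated by AI"  # example label cited in the Proposal

@dataclass
class LabeledOutput:
    body: str                                     # text shown to the end user, label included
    metadata: dict = field(default_factory=dict)  # machine-readable record of the disclosure

def label_generated_text(raw_output: str) -> LabeledOutput:
    """Prepend a visible disclosure to generated content and log it in metadata."""
    labeled_body = f"[{AI_DISCLOSURE_LABEL}]\n{raw_output}"
    metadata = {
        "ai_generated": True,
        "disclosure_label": AI_DISCLOSURE_LABEL,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return LabeledOutput(body=labeled_body, metadata=metadata)

if __name__ == "__main__":
    out = label_generated_text("Sample synthetic paragraph.")
    print(out.body)
    print(out.metadata)
```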
The Proposal sets forth that content providers and/or developers who fail to comply with this labeling obligation shall be subject to an administrative fine of 500,000 to 5,000,000 Turkish liras (approx. 10,220 to 102,250 Euros), depending on the nature of the content. Additionally, in cases of systematic and intentional breach, an access blocking order shall be issued against the content provider. Moreover, if the content disrupts public order, harms personal rights or aims at political manipulation, access to the content shall be immediately blocked and, if deemed necessary, a criminal complaint shall also be filed. While there is no definition of what constitutes ‘political manipulation’, the Information and Communication Technologies Authority (“BTK”) is granted the authority to carry out inspections, use technical monitoring tools, and issue guidelines within the scope of the proposed amendment.
II. The Electronic Communications Law
The Proposal introduces a seemingly arbitrary authority for the BTK by assigning it a new duty within the Electronic Communications Law. If the bill passes, the BTK will be authorized to issue an ‘urgent’ access blocking order against AI-generated content that threatens public order or election security. Those who act contrary to this obligation will be subject to an administrative fine of up to 10,000,000 Turkish liras (approx. 204,500 Euros). The proposed amendment is vague; for example, it does not elaborate on whether the content provider or the social network provider will be liable for the fine.
It is worth noting that Article 8/A of the Internet Law, currently in force, already grants the BTK President the authority to issue a content removal and/or access blocking order for the purposes of protecting national security, public order and public health, and preventing crime. However, this power applies only in exceptional circumstances that require prompt action and upon the request of the ministries concerned with national security, public order and public health. In that case, the BTK President is obligated to submit the decision to judicial review within 24 hours. If the judge does not announce a decision within 48 hours, the order automatically lapses.
III. The Cyber Security Law
The Proposal aims to insert a new section into the provision of the Cyber Security Law that governs the duties of those who provide services, collect or process data, or engage in similar activities via information systems.
The Proposal provides that ‘service providers in AI systems’ (yet again, not clearly defined in the Proposal or the Cyber Security Law) must (i) ensure the transparency and auditability of training datasets, (ii) establish content verification mechanisms to prevent the production of false and manipulative information, (iii) implement algorithmic controls to reduce hallucination risk, (iv) develop human-approved oversight mechanisms for high-risk applications, and (v) run cybersecurity vulnerability tests at regular intervals.
The Proposal further states that service providers who fail to implement the measures above shall be subject to administrative fines of up to 5,000,000 Turkish liras (approx. 102,250 Euros); in cases of serious breaches threatening public order, a temporary suspension of operations may also be imposed.
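Purely as an illustration of the five obligations listed above, a provider might track them internally as a simple checklist. The sketch below is a hypothetical structure of our own design; neither the Proposal nor the Cyber Security Law prescribes any particular format or tooling, and the example evidence entries are invented.

```python
# Hypothetical internal checklist mirroring the five obligations in the Proposal;
# the structure, field names and example evidence strings are our own assumptions.
from dataclasses import dataclass

@dataclass
class ComplianceItem:
    obligation: str     # obligation as summarized from the Proposal
    in_place: bool      # whether the provider considers it implemented
    evidence: str = ""  # internal reference to supporting documentation

def outstanding(items: list[ComplianceItem]) -> list[str]:
    """Return the obligations that are not yet evidenced as implemented."""
    return [item.obligation for item in items if not item.in_place]

checklist = [
    ComplianceItem("Transparency and auditability of training datasets", True, "dataset documentation v2"),
    ComplianceItem("Content verification against false or manipulative outputs", False),
    ComplianceItem("Algorithmic controls reducing hallucination risk", True, "grounding pipeline notes"),
    ComplianceItem("Human oversight for high-risk applications", False),
    ComplianceItem("Cybersecurity vulnerability tests at regular intervals", True, "latest penetration test report"),
]

if __name__ == "__main__":
    print("Outstanding obligations:", outstanding(checklist))
```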
IV. The Law on the Protection of Personal Data
The Proposal incorporates a new subparagraph into the data security provision of the Law on the Protection of Personal Data, as follows: “Datasets used in AI applications must comply with the principles of data anonymization, non-discrimination and lawful processing. The use of discriminatory datasets constitutes a breach of data security.”
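As a purely illustrative sketch of the anonymization principle referenced in this subparagraph, the snippet below replaces direct identifiers in training records with salted hashes before use. Strictly speaking this is pseudonymization rather than full anonymization, and the column names and salting scheme are our own assumptions rather than anything specified in the Proposal.

```python
# Illustrative pseudonymization step; the column name ("national_id") and the
# salting scheme are hypothetical and not taken from the Proposal.
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def clean_records(records: list[dict], id_fields: tuple[str, ...], salt: str) -> list[dict]:
    """Return copies of the records with direct identifiers pseudonymized."""
    cleaned = []
    for row in records:
        row = dict(row)  # copy to avoid mutating the caller's data
        for name in id_fields:
            if name in row:
                row[name] = pseudonymize(str(row[name]), salt)
        cleaned.append(row)
    return cleaned

if __name__ == "__main__":
    sample = [{"national_id": "12345678901", "age": 34, "outcome": "approved"}]
    print(clean_records(sample, id_fields=("national_id",), salt="example-salt"))
```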
V. Conclusion
The Proposal remains broadly framed and open to arbitrary interpretation due to certain definitional inconsistencies and deficiencies. However, its progress in the Turkish Parliament is worth following.
Should you require any references cited in this article or wish to discuss its contents further, please feel free to contact us.