The Personal Data Protection Authority (“Authority”) published the Guide on Generative Artificial Intelligence and the Protection of Personal Data[1] (“Guide”) on November 24, 2025.
The Guide highlights the aspects that distinguish “Generative Artificial Intelligence” (“GenAI”) systems from conventional artificial intelligence systems and the new risks they may pose under personal data protection legislation, and clarifies critical concepts not previously defined by the Authority, such as deep learning, deepfake, and artificial neural networks.
The Guide defines GenAI systems as systems trained on large-scale datasets that can generate content in various formats, such as text, images, video, audio, or software code, in response to a user-entered prompt or command. Unlike conventional artificial intelligence systems, which solve specific problems using predefined datasets, GenAI systems can generate original outputs not found in the input dataset and produce entirely new content, and their flexible, versatile structure allows them to perform multiple functions.
The Guide notes that GenAI systems, which have reached a widespread user base today, are used as virtual assistants in customer service, in the healthcare sector to analyze patient records and prepare personalized treatment plans, in the education sector to create personalized education programs, and in the advertising sector for audience analysis and campaign planning, as well as to provide automation in sectors such as art, software, and law.
Alongside these benefits, the Guide also outlines the risks GenAI systems pose. Examples include erroneous and inconsistent outputs known as “hallucinations,” biased outputs, and manipulative content containing fake visual and audio material created using deepfake technology. In the context of personal data protection, significant security issues can arise from the use of these systems to create phishing emails and fake identities that put user data at risk, from personal data contained in the large datasets used to train these systems appearing in the outputs presented to users, and from users sharing such data in their own inputs.
The Guide recommends that the nature of each data processing activity and the actual roles of the parties be taken into account when identifying the roles of “data controller” and “data processor.” Indeed, numerous natural or legal persons may assume responsibility at different stages of the lifecycle of GenAI systems, and actors such as “developers” or “deployers” may fall outside the scope of these roles; such issues therefore require a specific assessment for each processing activity. In this assessment, determining who makes the fundamental decisions about which categories of data will be processed and from which sources the data will be obtained is considered an important factor in establishing the roles.
The Guide refers to the general principles set forth in the Personal Data Protection Law No. 6698 (“Law”) that must be complied with when processing personal data, and addresses the measures that can be taken during the development, training, and deployment phases of GenAI systems to ensure compliance with these principles. In line with these principles, it recommends that oversight and control mechanisms be established for users regarding the operation of GenAI systems and their data processing activities, and that adequate retention and destruction policies be developed. As to the processing conditions under the Law, the Guide emphasizes that simply informing users that a GenAI system is being used is not sufficient to obtain the explicit consent of data subjects; rather, information must be provided about the type of system being used, the purpose of processing, the nature of the data that will result from the processing, and the visibility of that data to third parties.
To ensure transparency, the Guide recommends that information notices and privacy policies be presented in interfaces that are easily accessible to users, and notes that in systems where users interact directly with GenAI systems, such as chatbots, users should be informed that they are communicating with a GenAI system. It also recommends that the necessary level of transparency be provided regarding the datasets used in the training of GenAI systems. Where personal data is obtained from publicly available sources through automated means and direct disclosure to data subjects is technically impossible, the Guide emphasizes that publicly accessible information notices would be useful.
The Guide highlights the practical difficulties that prevent individuals from exercising their rights, particularly the right to object. It emphasizes that, where automated decision-making based on GenAI systems produces discriminatory or unethical outcomes, users should be able to object both to the outcome of the decision and to the basis on which it was made. It also stresses the need to establish accessible mechanisms that provide transparency and accountability for users across the stages in which personal data is processed, such as training, fine-tuning, and output generation. Data controllers are particularly urged to keep records of their data processing activities, using methods such as data mapping or data labeling, so that data subjects can exercise their rights.
To ensure the security of personal data, the Guide emphasizes the privacy by design and privacy by default approaches, recommending that GenAI systems prioritize data protection not only during the usage phase but at every stage from design onwards, and that, by default, only the necessary data be processed without user intervention. It also recommends conducting data protection impact assessments to identify and manage risks, performing red team testing to reveal unknown risks and detect them early, and carrying out regular software updates.
[1] https://www.kvkk.gov.tr/SharedFolderServer/CMSFiles/MTY5MjNmNmIwZWY3YTE.pdf