The Turkish Data Protection Authority (“DPA”) has published a document titled “Use of Generative AI Tools in the Workplace” (the “Guidance”) on its website, setting out its observations on the key risk areas associated with the use of generative AI tools in business processes, as well as governance approaches that organizations may consider in managing such use.
The Guidance indicates that, as generative AI tools become increasingly integrated into employees’ day-to-day workflows, their use should be assessed not only in terms of efficiency and speed, but also from the perspectives of personal data protection, information security, trade secret protection, decision quality, and corporate governance.
Shadow AI: The Growing Risk of Uncontrolled Use within Organizations
One of the key concepts highlighted in the Guidance is “Shadow AI.” This term refers to the use of generative AI tools by employees in business processes without the organization’s knowledge or control.
The DPA notes that such uncontrolled use may give rise to risks particularly in relation to accountability, the protection of trade secrets, and information security, including personal data.
This approach suggests that companies should focus not only on whether AI is being used, but also on which tools are used, by whom, for what purposes, and what categories of data are involved.
Risks Relating to Personal Data, Trade Secrets and Sensitive Information
The DPA notes that the uncontrolled use of generative AI tools may give rise to significant risks not only in relation to personal data, but also with respect to trade secrets, intellectual property rights and other sensitive corporate information. The Guidance emphasizes that sharing materials such as source code, product designs, business strategies, internal correspondence, human resources data and customer files with external AI tools may weaken organizational control over such information.
With regard to personal data, the DPA makes clear that data processing activities carried out through generative AI systems fall within the scope of Law No. 6698, the Turkish Personal Data Protection Law. In this context, the sharing of personal data by employees through prompts may give rise to risks such as unauthorized processing, access by third parties or use for purposes beyond the original intent.
Accuracy and Human Oversight Are as Important as Speed and Efficiency
The DPA also draws attention to the risk of excessive reliance on outputs generated by generative AI systems. This phenomenon, referred to in the literature as “automation bias,” may lead users to accept content produced by automated systems as accurate without sufficiently evaluating it.
In addition, the Guidance notes that the ability of generative AI systems to produce content that appears convincing but is factually incorrect (“hallucinations”) should also be taken into account in organizational decision-making processes. Accordingly, the DPA emphasizes that AI-generated outputs should not serve as the direct basis for final decisions, but rather be treated as supportive tools subject to human review and assessment.
Key Compliance Considerations for Organizations
The Guidance also outlines certain governance approaches that organizations may adopt in relation to the use of generative AI tools in the workplace. In particular, it highlights the importance of the following measures for managing related risks:
- Establishing clear internal policies or guidance on the use of generative AI tools,
- Defining what types of data may be provided as input to such tools,
- Reviewing confidentiality and security safeguards for both corporate and personal data,
- Ensuring that generated outputs are assessed under human oversight,
- Conducting awareness and training activities for employees.
Implementation and Compliance Perspective
The Guidance makes clear that, for companies, the use of generative AI is no longer merely a tool for efficiency, but has also become a compliance matter that should be addressed within the frameworks of data protection, information security and corporate governance. Accordingly, it is important for companies to clarify what types of data employees may share with external generative AI tools, to establish clear boundaries with respect to personal data, trade secrets and other sensitive corporate information, and to strengthen access controls, human oversight and internal policy mechanisms. In this respect, the Guidance reflects an approach that seeks not to prohibit the use of generative AI in the workplace altogether, but rather to place it within a transparent, controlled and responsible framework.