ARTICLE
6 March 2026

The Turkish DPA Published A Guideline On The Use Of Generative AI Tools In The Workplace

Fidanci & Esin Partners


Recent Development

On 5 March 2026, the Turkish Personal Data Protection Authority (the "DPA") published its guideline titled "Use of Generative Artificial Intelligence Tools in the Workplace" (the "Guideline"). The Guideline addresses the key risks related to employees' use of third-party, publicly accessible generative artificial intelligence ("GAI") tools in business processes, and aims to raise awareness among companies, institutions and organizations and to promote informed use.

Although the Guideline is not binding, it constitutes an important reference source in terms of the DPA's expectations and assessment criteria in this field.

What Does the Guideline Cover?

The Phenomenon of Shadow AI and Its Risks

The main focus of the Guideline is the phenomenon referred to as "Shadow AI": employees' incorporation of GAI tools into business processes without the organization's knowledge, approval or corporate oversight. The Guideline links this practice to the earlier concept of "Shadow IT", but underlines that GAI entails separate and additional risks owing to its data processing capacity and its direct impact on decision-making mechanisms.

In this framework, the Guideline highlights the following risks:

  • difficulties in ensuring accountability and auditability for GAI outputs that remain outside corporate oversight mechanisms;
  • decision quality risks stemming from hallucinations and biased outputs;
  • intellectual property risks arising from the sharing of source code, business strategies and trade secrets with third-party tools;
  • corporate reputational losses resulting from the use of content whose reliability has not been verified;
  • cybersecurity threats occurring through unmanaged integrations; and
  • the risk of unlawful processing of personal data and unauthorized access.

Emphasis under Law No. 6698

The Guideline expressly states that the Personal Data Protection Law No. 6698 (the "Law") applies, irrespective of the technology used, in all cases where personal data is processed, and that data processing activities carried out through GAI systems also fall within this scope. In this respect, the DPA recommends that the Guideline be evaluated together with its previously published "Generative Artificial Intelligence and Personal Data Protection Guideline (in 15 Questions)".

Points to Be Considered

The Guideline recommends a corporate approach based on guidance, balance and awareness rather than prohibition, and highlights the following points:

  • Establishing a clear corporate policy or guidance framework that sets out which GAI tools may be used for which purposes and under which conditions, which types of information may be shared through such tools, and the principles governing risk management.
  • Raising employees' awareness so that they do not share corporately sensitive information or personal data with GAI tools, and encouraging the use of anonymized and generalized wording, as far as possible, in interactions with such tools.
  • Considering the risk of "automation bias" arising from excessive reliance on GAI outputs and assessing the generated outputs under human oversight as supporting elements, rather than using them as the sole basis for final decisions.
  • Assessing data security and access control mechanisms—where necessary including role-based restrictions—based on the principle that employees should access only those tools designated by the organization and whose terms of use have been defined.
  • Sharing policies regarding the use of GAI tools within the organization, ensuring that employees can easily access these documents, and maintaining regular information and training activities.

Conclusion

Although the Guideline does not impose binding obligations on data-processing organizations, it sets out the DPA's assessment approach and expectations in this area. Organizations where employees widely use third-party GAI tools in business processes are advised to review their existing corporate policies in line with this Guideline, establish a clear framework on data processing and information security covering the use of GAI, and effectively communicate this framework to employees.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

