7 November 2025

The Use Of AI In The Workplace

Liedekerke

Founded in 1965 by lawyers committed to legal excellence, Liedekerke is an independent law firm recognised for its leadership and with an international reputation built upon unchallenged expertise.

A premium Belgian business law firm for 60 years with offices in Antwerp, Brussels, London, Kinshasa and Kigali, our firm is dedicated to providing a world-class service by consistently delivering the finest assistance and guidance.

The firm has a strong advisory practice based on sector expertise and an in-depth knowledge of Belgian and European law. As an essential complement to its advisory activities, it represents clients in complex litigation before national, European and international courts, both judicial and arbitral, including the Court of Cassation, the Council of State and the Belgian Constitutional Court.


Earlier this year, our firm's Employment & Benefits and IP/IT/Data teams jointly hosted "The Future of Work: Tech Revolution", a roundtable lunch session with clients to explore the implications for employers of the implementation of the EU AI Act and, more broadly, the challenges and opportunities around the use of AI in the workplace. The conversation was lively and insightful, with participants sharing perspectives on legal, practical, and organisational aspects of AI adoption.

As we prepare for the second roundtable session, we took the time to look back at the first one and put together a few key takeaways and considerations that employers should keep in mind as they consider implementing AI technologies in their business.

1. AI in the workplace is about more than just the EU AI Act

While the EU AI Act is an important new piece of legislation, it does not exist in a vacuum. Employers introducing AI systems in the workplace must also navigate existing legal frameworks.

In Belgium, for instance, in addition to anti-discrimination and wellbeing at work legislation, several national collective bargaining agreements (CBAs) are directly relevant:

  • CBA no. 39 on the introduction of new technologies, requiring observance of a works council information and consultation process before implementing a new technology (such as an AI tool),
  • CBA no. 81 on the monitoring of the use of email and internet, imposing transparency and employee information and consultation obligations when monitoring employees' use of electronic communication and the internet at work, and
  • CBA no. 9 on the obligation to inform and consult the works council on measures that may have an impact on employment, working conditions, etc.

These rules already shape how technology can be implemented, how employees should be consulted, and the limits of workplace monitoring.

At the European level, other pieces of legislation should also be borne in mind due to their significant overlap with the AI Act. These include the GDPR1, which remains a cornerstone given that many AI tools involve the processing of personal data, and the Data Act2, which became applicable very recently, on 12 September 2025. The latter is expected to become increasingly relevant in practice, as many AI tools rely on large volumes of data, including user- or device-generated data.

In practice, compliance with the AI Act should be seen as part of a broader legal puzzle that combines employment law, data protection law, and emerging AI-specific obligations.

Moreover, AI in the workplace is not just a legal or compliance matter. It touches on HR, IT, data protection, ethics, and even corporate culture. The discussion made clear that employers need a multidisciplinary approach, involving different internal stakeholders early in the process. This ensures that legal compliance is matched with technical feasibility and employee acceptance.

2. Some AI Act obligations are already in force

The AI Act is not something to "worry about later". In reality, some provisions are already binding. The following obligations have applied since 2 February 2025:

  • the prohibition of certain AI practices deemed to pose an unacceptable risk, and
  • the requirement to provide adequate training to staff using AI systems.

Moreover, a new set of provisions became applicable on 2 August 2025, including the rules on general-purpose AI (GPAI) models and on penalties.

Other obligations will gradually become applicable over the coming months, but employers should already start mapping their AI use cases and ensuring that their current practices comply with the rules that apply today. Appointing a person responsible for AI, or an AI Officer, within the organisation may also be envisaged to manage the AI implementation process effectively.

3. The line between "deployer" and "provider" is thinner than it seems

Many employers consider themselves merely "deployers" of AI tools supplied by third parties. However, the AI Act makes it clear that in certain scenarios, such as when an employer customises, adapts, or integrates AI software, they may be treated as a provider, triggering a heavier set of obligations.

This shift can happen unintentionally, without the employer even realising it. Employers should therefore carefully assess how they deploy AI tools and seek legal guidance before making modifications that could change their regulatory status.

4. Governance and accountability are key

Beyond the legal framework, successful AI implementation depends on strong internal governance structures, with documentation and transparency being key. This includes:

  • clear policies on AI use in the workplace;
  • defined accountability (who is responsible if something goes wrong?);
  • documentation of AI-related decisions, including why a certain tool was chosen, how that tool works, how employees were informed and what safeguards were put in place;
  • transparency towards employees and other stakeholders, which also includes the regular review of relevant data protection policies; and
  • building AI literacy among staff, so that employees understand the opportunities, risks, and limits of AI systems they work with.

Employers who invest in governance now will be better placed to respond to regulatory changes and to avoid reputational risks later. Proper documentation also fosters trust and transparency among employees and other stakeholders.

5. Mind the confidentiality of data in AI prompts

A key concern raised during the roundtable was the risk of inadvertently disclosing confidential or personal data when using generative AI tools. Many systems process prompts externally, which means that sensitive business information, trade secrets, or employee data entered into them could be stored or reused outside the employer's control.

Employers should therefore:

  • establish clear internal policies on what type of data can and cannot be used in AI prompts;
  • train employees on the risks of sharing confidential information; and
  • carefully select AI service providers offering sufficient guarantees regarding confidentiality and data protection, in particular with respect to the use of data for training AI systems, transfers of data abroad, and disclosures to third parties.

Taking these measures is essential not only for compliance with laws like the GDPR, but also for safeguarding business-critical information and maintaining employee and client trust.

Conclusion

The AI Act is a landmark piece of legislation, but as our roundtable showed, it is only one piece of a much bigger picture. Employers should take a holistic approach, looking not just at the EU AI Act itself but also at the many existing rules that apply to technology in the workplace. By acting early, clarifying roles, safeguarding confidentiality, and building strong governance, employers can not only ensure compliance but also create a framework where AI can deliver real value for both business and employees.

Footnotes

1 Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.

2 Regulation (EU) 2023/2854 of 13 December 2023 on harmonised rules on fair access to and use of data.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
