The use of artificial intelligence (AI) tools such as ChatGPT, Gemini and others is becoming commonplace across industries – often without employers’ knowledge. Employees use AI to draft texts, perform analyses, work with data or write code. Without clear rules in place, however, such use of AI can pose significant legal, security and reputational risks. As this trend cannot realistically be stopped, it is important to understand how to minimize the risks associated with it.
Typical issues include the input of sensitive information or personal data into external AI tools, the risk that such data will leak, uncertainty about the authorship of outputs and the inability to guarantee their exclusivity, potential violations of licence terms, and liability for erroneous or misleading results generated by AI tools. Such conduct may result not only in breaches of obligations under the GDPR and the leakage of trade secrets or other confidential information, but also in breaches of contracts with clients and a loss of trust among business partners. This, in turn, can lead to reputational harm as well as significant financial damage and sanctions.
We therefore recommend that employers and others maintain appropriate control over the use of AI tools. Employers already have an obligation to ensure a sufficient level of AI literacy among the people who work with these tools. At the same time, the European Regulation on Artificial Intelligence (the so-called AI Act[1]), which sets general rules for the use of AI tools, is approaching full effect. Many of the associated risks, however, can be prevented already today.
The basic – and often most effective – step is a properly designed internal directive that clearly defines:
- permitted and prohibited uses of AI;
- permitted and prohibited AI tools;
- rules for working with sensitive information, personal data and AI outputs; and
- employee responsibilities.
Well-designed internal rules are an essential basis for reaping the benefits of AI. They help the employer reduce legal, security and reputational risks. At the same time, compliance with the obligations arising from the AI Act can be demonstrated through an internal directive and regular training. Don’t leave this until the last minute.
If you are interested in further information or assistance with setting up internal rules for the safe use of AI, do not hesitate to contact us. We will be happy to help.
–
[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance)
—
If you have any questions about this topic, please reach out to your contact person in our office or Martin Jonek and Michal Zahradník.
This document is a general communication and should not be regarded as legal advice on any specific matter.

