ChatGPT in the Workplace

What happens when employees use AI before any policy exists.

In most companies, ChatGPT is already in use. Not as a strategic decision, not as the result of an evaluation process, but as an individual reflex. Someone in marketing rephrases copy. Someone in sales has it draft meeting summaries. Someone in legal checks a clause. Nobody coordinated this. It just happened.

This is not the exception. It is the default in nearly every organization past a certain size. Usage does not begin with a pilot project, with IT approval, or with a board decision. It begins with a browser tab.


Most leaders suspect it is happening. Most organizations still have no rules for it. That creates a situation no company would tolerate in any other area: a tool that nearly everyone uses, but nobody is responsible for.


What is actually happening

When employees use ChatGPT, they usually do not act out of carelessness or laziness. They do it because the tool provides a tangible benefit. A first draft in thirty seconds instead of an hour. A summary of a fifty-page document in three paragraphs. A brainstorming result that serves as a starting point for a concept.


The use cases that gain traction in practice are remarkably uniform across industries and company sizes: writing or rewriting text, drafting emails, preparing presentations, structuring spreadsheets, accelerating research. These are not exotic applications. They are everyday work situations where AI reduces friction.


The problem is not the usage itself. The problem is that it takes place in a space the organization does not see, does not understand, and cannot control. There is no overview of who uses which tool for what purpose. There is no distinction between use cases that are harmless and those that are not. And there is no process that clarifies what happens with the data entered along the way.


Why employees do not ask

The obvious question is: why don't employees talk to their managers about this? The answer is less about trust and more about structure.


In many organizations, there is simply no point of contact for this topic. It is unclear whether responsibility lies with IT, the legal department, the data protection officer, or the employee's own manager. And even if there were someone to ask, the question of whether you may use a free online tool for your own writing does not feel like something that requires a formal process.


There is also an effect well known from shadow IT research. People do not bypass official structures out of resistance. They do it because official structures have no answer to their need. As long as a company does not offer a clearly defined framework within which AI can be used, employees will find their own way. The barrier today is zero: no download, no installation, no license request. A browser tab is enough.


Surveys consistently show the same pattern: a significant share of office workers use generative AI at work. Many of them say they do not disclose this to their employer. Not out of ill intent, but because there is no occasion to do so.


What happens to the data

Most employees use the free version of ChatGPT or inexpensive personal accounts. That means inputs flow into a system operated by a US-based company whose terms of service change regularly and which, on consumer plans, reserves the right to use inputs to improve its models.


In practice, this means that client data, internal strategy documents, contract drafts, personnel records, product information, or even informal meeting notes can end up in a system over which the company has no control. There is no data processing agreement. There is no guarantee that the data will not be used for training. And there is no reliable way to delete it after the fact.


For companies subject to the GDPR, this is a concrete legal risk. For companies in regulated industries such as financial services, healthcare, or energy, it can be existential. But even outside of regulation: if you do not know which data leaves your organization, you cannot assess the risk you are taking.


Why bans do not solve the problem

The first reaction in many organizations is a ban. Samsung, Apple, JPMorgan, and others have restricted or blocked employee access to ChatGPT at various points. In most cases, the trigger was the same concern: sensitive data ending up in the system.


Bans are understandable. But they do not solve the underlying problem. First, because they are hard to enforce: ChatGPT runs in the browser, on personal devices, over mobile networks. Second, because they do not end usage; they only make it invisible. And third, because they ignore the productive side of the technology.


Companies that ban AI entirely do not just lose efficiency. They also lose the ability to actively shape how AI is used. The alternative to a ban is not chaos. The alternative is a framework that enables usage without endangering the organization.


Why this becomes organizationally relevant

As long as individuals use AI for their personal work preparation, the impact remains limited. But that state does not last long. AI-generated content flows into proposals, reports, decision memos, client communications. At that point, the entire organization works with outputs whose origin is unclear.


This is not only about data privacy. It is about the quality of work. Who reviewed the text? On what basis was the analysis created? Are the numbers accurate, the summary correct, the recommendation sound? Where does the phrasing in the client email come from?


These questions are rarely asked as long as no policy exists that requires them. But the absence of a policy does not mean there is no risk. It means the risk remains invisible.


Since February 2025, the EU AI Act has also required organizations that deploy AI systems to ensure a sufficient level of AI literacy among the people who operate and use them. This does not only concern the IT department. It concerns every person who opens ChatGPT during their workday.


The questions that arise

Organizations that take this topic seriously face a series of questions that are not technical, but organizational.


Who decides which AI tools are permitted and which are not? Is there an overview of what is actually being used? Which data may be entered into external systems, and which may not? What happens when an AI-generated result is flawed and causes harm? Who bears responsibility: the person who used the tool, the department that failed to set rules, or the leadership team that did not address the topic?


And finally: how do you ensure that employees use AI competently and responsibly when there are neither training programs nor defined quality standards?


These are not theoretical questions. They arise directly from the current state of affairs in nearly every organization.


From individual usage to organizational framing

The transition from individual AI usage to an organizational approach is not a technical migration. It is a shift in perspective. Instead of asking which tool is best, the question becomes how AI can be embedded into existing workflows, responsibilities, and quality standards.


This does not require a comprehensive AI strategy that takes six months of consulting. It requires three things to start: visibility into what is currently happening. Clarity on what is permitted and what is not. And someone who is responsible for the topic.


That sounds modest. But in practice, this is exactly what most organizations lack. And without this foundation, every subsequent measure remains ineffective.


Not a technology problem

The common misconception about this topic is that it is an IT problem. In reality, it is an organizational problem. IT can block or enable tools. But the question of how AI changes work, who is responsible for it, and which quality standards apply is not something IT can answer. That is a task for business units, leadership, HR, legal, and communications.


The companies that handle this best are not those with the best technology. They are those that understood earliest that the question is not whether AI is being used, but under what conditions.


There are platforms, such as PANTA OS, that address exactly this challenge: operationalizing AI across the organization. They create structure around access, usage, quality, and traceability where individual decisions dominate today.