AI, Law, and Digital Sovereignty: Interview with Dr. Possard

In this interview with Dr. Marlon Possard, we provide a brief overview of the most pressing legal questions around the use of AI and cloud services by European organizations. Dr. Possard is an Assistant Professor at the University of Applied Sciences Campus Vienna (HCW) and at the Institute for Digital Transformation and Artificial Intelligence (Faculty of Law) at the Sigmund Freud Private University Vienna and Berlin (SFU). Since 2024, he has led the Department of Ethics of Artificial Intelligence at SFU Vienna and Berlin.

Interview with Professor Dr. Marlon Possard

Professor Possard, you have focused for years on the intersection of law, ethics, and artificial intelligence. Looking at current developments, what has changed most in how companies and public administrations handle AI systems over the past two to three years?

Dr. Possard: The most noticeable change is the speed and naturalness with which AI systems are being deployed. A few years ago, AI was still a strategic, future-oriented topic; today, it is an operational tool used in everyday work and personal life. However, clear governance structures are sometimes lacking. At the same time, the focus has shifted significantly: from the question “Are we allowed to do this?” to “How quickly can we deploy it productively?” This shift brings opportunities but also significant legal and ethical risks if reflection and control do not keep pace.

Many organizations now use cloud services with minimal barriers, often for sensitive data. How aware do you think organizations are of the legal and ethical implications when internal or personal data is transferred to large cloud providers, especially from the U.S.?

Dr. Possard: Awareness is very unevenly distributed. Legal and data protection departments are usually aware of the risks, but other departments often are not. Cloud services are perceived as neutral infrastructure rather than a legally complex data-processing environment. Especially with U.S. providers, it is often underestimated that technical, legal, and geopolitical frameworks are all relevant. Ethics is still frequently seen as a “nice-to-have.” Yet one critical point is often forgotten: it is ultimately about trust, power dynamics, and loss of control. None of these aspects should be underestimated.

Let’s talk specifically about data protection: What legal risks arise when companies or authorities send personal data to cloud services outside the EU? Where do you see the biggest challenges regarding the interplay of GDPR, international data transfers, and cloud-based AI services, for example from the U.S.? The U.S. CLOUD Act seems to create a difficult situation for European organizations trying to use U.S. cloud services in a GDPR-compliant way.

Dr. Possard: The main risk lies in the lack of enforceability of European fundamental rights. The GDPR requires an essentially equivalent level of protection, which many third countries cannot guarantee. The U.S. CLOUD Act exacerbates this because U.S. companies may be obliged to hand over data to authorities without informing European data subjects or providing effective legal remedies. Legally, this creates a gray area, sometimes papered over by standard contractual clauses but structurally unresolved. The problem: legal uncertainty remains, especially for public institutions.

Few people realize that free versions of AI services such as ChatGPT or Gemini use chat inputs to further train their systems. What are the legal consequences? What could happen to a company if employees enter sensitive data into external cloud services and it ends up training the provider’s next large language model (LLM)?

Dr. Possard: From a legal perspective, this is highly problematic. Once employees enter personal or confidential data, this typically constitutes unlawful data sharing. Companies may face data protection violations or liability issues. Critically, such data is usually irreversibly integrated into training processes, making deletion under the GDPR effectively impossible. This violates fundamental principles such as purpose limitation and data minimization.

In our opinion, this is exactly why companies have to provide viable and compliant alternatives to their employees. What concrete legal advantages do you see in keeping data and AI systems within the company’s own IT environment rather than transferring them to external cloud providers?

Dr. Possard: The greatest advantage is control. Operating systems locally or in a well-regulated internal environment allows organizations to genuinely fulfill data protection obligations—not just formally, but in practice. Access controls, deletion concepts, purpose limitation, and transparency can be implemented much more effectively. Moreover, the risk of international data transfers is eliminated. From a legal standpoint, this reduces complexity and ultimately increases legal certainty.

Governance and traceability are also important. Operating AI systems locally simplifies compliance with requirements such as access logging, audits, or implementing the AI Act. What legal benefits does this offer?

Dr. Possard: Local systems facilitate both auditability and accountability. For the AI Act, this is crucial: documentation, risk assessment, logging, and human oversight can only be effectively implemented if you have insight into system logic and data flows. This is often not the case with external black-box systems. Governance is not an optional add-on; it is a prerequisite for legally compliant AI use.
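
To illustrate what this auditability can look like in practice, here is a minimal sketch of an audit-logging wrapper around a locally hosted model. It assumes an OpenAI-compatible HTTP endpoint on local infrastructure (as exposed by many on-premises inference servers); the endpoint URL, model name, and log fields are illustrative, not a prescribed AI Act format.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

import requests  # pip install requests

# Append-only audit log: who asked what, when, and which model answered.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # illustrative
MODEL_NAME = "local-llm"  # illustrative


def audited_query(user_id: str, prompt: str) -> str:
    """Send a prompt to the locally hosted model and write an audit record."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={"model": MODEL_NAME,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    answer = response.json()["choices"][0]["message"]["content"]

    # Log hashes rather than raw content, so the audit trail itself
    # does not become a second store of sensitive data.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": MODEL_NAME,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "answer_sha256": hashlib.sha256(answer.encode()).hexdigest(),
    }))
    return answer
```

Because the model runs in-house, such a log can record every data flow end to end, which is precisely the traceability that external black-box systems cannot offer.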

From an ethical perspective, why is it problematic to hand customer data to external AI platforms—even if it is legally permissible? How important is the question “Where is my data going?” for the trust of citizens, patients, or customers in digital services?

Dr. Possard: I am often asked where ethics begins in practice. My answer: Ethics begins where the law ends. Even if something is formally permissible, the question remains whether it is legitimate. People entrust organizations with their data, expecting responsible handling. That trust is undermined when data is shared with third parties without necessity, especially when data subjects have no control or transparency. The question “Where is my data going?” is ultimately a matter of trust—and it must not be underestimated or squandered.

If organizations operate AI on their own infrastructure, they retain a high degree of control over data flows and models. Would you say this approach can also be read as an ethical signal—that is, “We take your privacy and informational self-determination seriously”?

Dr. Possard: Yes, absolutely. It signals that privacy and informational self-determination are taken seriously. It expresses responsibility and respect toward data subjects. Organizations show that they do not pursue efficiency gains alone but also uphold societal values. In times of increasing digitalization and rapid AI development, this is a strong and necessary signal.

The term “digital sovereignty” is particularly popular in the public sector. How do you define digital sovereignty, and why is it dangerous for states, administrations, and educational institutions to become too dependent on a few large cloud providers or external IT service providers?

Dr. Possard: In academic discussions, the term is used in multiple ways. There is no single definition; its meaning always depends on context. For me, digital sovereignty means the ability to make autonomous decisions about digital infrastructures, data, and technologies. Dependence on a few large providers creates structural power asymmetries. Those who control the technology also influence processes, standards, and ultimately political leeway. For states and public administrations, this is particularly dangerous because democratic control and long-term stability are at stake.

You work closely with public administrations and policymakers. What structural risks do you see for democracy and the rule of law if critical infrastructures—including AI systems—are largely provided by private, often non-European platforms?

Dr. Possard: In short: the risk lies in the gradual privatization of core state functions. If decision logics, data flows, and technical standards are outside democratic control, transparency and accountability suffer. Democracy relies on traceability and responsibility, both of which are undermined when central infrastructures are no longer controllable. At this intersection, the ethical dimension of these processes becomes particularly evident.

Since the EU AI Act came into force, many organizations have been wondering what it concretely means for their AI use. In your view, what are the most important new requirements for companies and authorities operating their own AI systems, for example regarding risk classification, documentation, and transparency obligations?

Dr. Possard: The AI Act primarily requires structure: clear risk classification, technical and organizational documentation, transparency toward users, and defined responsibilities. High-risk systems (Art. 6 AI Act) must be explainable, auditable, and subject to human oversight. Some AI systems are prohibited entirely (Art. 5 AI Act). For many organizations, this is an adjustment, but also an opportunity to design AI responsibly from the start.

Often underestimated are the competency requirements under Art. 4 AI Act, which oblige providers and deployers to ensure that staff handling AI have sufficient expertise. This encompasses not only technical knowledge but a combination of technical, legal, and ethical understanding: employees must identify risks, critically assess results, and avoid misuse. For organizations, this means that AI use without accompanying training, awareness, and governance measures will no longer be permissible. AI thus becomes not just an IT task but an organizational and leadership responsibility. Those deploying AI must ensure that the humans involved are competent enough to genuinely take responsibility—not just formally. Ethics plays a central role here.
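
To make this structural requirement tangible, the sketch below shows one way an entry in an internal AI system register could look in code. The fields mirror the themes Dr. Possard names (risk class, documentation, oversight, Art. 4 competence); they are our illustration, not an official AI Act template.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskClass(Enum):
    # Simplified mirror of the AI Act's risk tiers
    PROHIBITED = "prohibited"  # Art. 5: banned practices
    HIGH_RISK = "high_risk"    # Art. 6: strict obligations
    LIMITED = "limited"        # transparency duties
    MINIMAL = "minimal"        # no specific obligations


@dataclass
class AISystemRecord:
    """One entry in an internal AI system register (illustrative fields)."""
    name: str
    purpose: str                # documented, specific purpose
    risk_class: RiskClass
    responsible_owner: str      # an accountable person, not just a team
    human_oversight: str        # how a human can review or intervene
    trained_users: list[str] = field(default_factory=list)  # Art. 4 literacy


# Example entry for a hypothetical internal system:
record = AISystemRecord(
    name="contract-summarizer",
    purpose="Summarize incoming supplier contracts for the legal team",
    risk_class=RiskClass.LIMITED,
    responsible_owner="Head of Legal",
    human_oversight="A lawyer reviews every summary before it is used",
    trained_users=["legal-team"],
)
```

Even such a simple register forces an organization to answer the questions the AI Act asks: what the system is for, how risky it is, who is accountable, and whether the people using it have been trained.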

Finally, a more fundamental question: AI can massively automate processes, especially when systems are deeply integrated into internal workflows. From an ethical perspective, where do you draw the line between meaningful automation and the point at which human judgment and responsibility must not be “automated away”?

Dr. Possard: The line is where decisions can no longer be questioned and responsibility becomes diffuse. AI may support, prioritize, and analyze. That is positive, especially for improving efficiency. But AI must not replace moral or legal responsibility. In sensitive areas, humans must remain capable of decision-making and accountability. The AI Act explicitly emphasizes this; the keyword here is “human in the loop.” Automation may enhance efficiency, but it must not remove responsibility. Ultimately, automation should relieve, not disempower.

 

Dr. Marlon Possard is an Assistant Professor, AI Competence Expert, and Certified Digital Legal Expert. He works at the Department of Administration, Economics, Security, and Politics and at the Research Center for Administrative Sciences (RCAS) at the University of Applied Sciences Campus Vienna (HCW), as well as at the Institute for Digital Transformation and Artificial Intelligence (Faculty of Law) at the Sigmund Freud Private University Vienna and Berlin (SFU). Since 2024, Dr. Possard has led the Department of Ethics of Artificial Intelligence at SFU Vienna and Berlin. He is currently pursuing his habilitation in the field of constitutive legal questions and serves, among other roles, as a visiting researcher at Harvard University in Cambridge, Massachusetts, USA. He is the author of more than 140 contributions and publications on law, public administration, and ethics. In October 2025, he was awarded the title of honorary senator by the Austrian Security Day (ÖST) in cooperation with the Vienna Chamber of Commerce (WKO) and the Association of Austrian Security Companies (VSÖ).

The interview was conducted by Philipp Schardax.
