
Artificial intelligence in business


Who is liable for wrong decisions?

Artificial Intelligence (AI) is no longer a topic of the future – it has become a reality in everyday business operations. Whether in customer communication, recruitment, or automated data analysis, AI systems are making decisions that were once the domain of humans. But what happens when these decisions are wrong? Who is responsible when an AI system discriminates, makes faulty diagnoses, or causes financial damage?

AI systems are increasingly taking on tasks with significant legal implications. They analyze job applications, assess credit risks, manage supply chains, and even make medical recommendations. In doing so, they often act autonomously, learn from data, and evolve – with the goal of being more efficient and objective than humans. Yet this very autonomy raises legal questions: Is AI merely a tool, or a “quasi-independent” actor? And if it is the latter, who is liable for its decisions?

Under current law, AI has no legal personality: it can hold neither rights nor obligations. Responsibility therefore still lies with humans – more precisely, with the company that deploys the AI. In practice, this means the company is liable for damages caused by the use of AI, for example under the principles of product liability, tort law (§ 823 of the German Civil Code, BGB), or contractual obligations. Several parties may share that responsibility: the software developer (e.g., in the case of faulty programming), the user within the company (e.g., due to incorrect configuration), or the responsible decision-makers, if they rely on AI despite foreseeable risks.

The EU AI Act is the world’s first comprehensive legal framework governing the use of artificial intelligence. Its goal is to minimize risks and define clear responsibilities. High-risk AI systems – such as those used in law enforcement or HR – will be subject to strict requirements. The AI Act imposes transparency obligations on AI providers, requires risk assessments prior to deployment, and provides for substantial penalties for violations. For companies, this means ensuring not only the technical but also the legal compliance of their AI systems.

AI can accelerate processes, improve decisions, and reduce costs – but it does not relieve anyone of legal responsibility. Those who use AI today must be prepared to answer for its consequences tomorrow. It is therefore crucial to link legal, technical, and ethical considerations from the outset. Only then can the potential of AI be harnessed without falling into legal pitfalls.

Legal expertise is essential when integrating AI systems into your business.

This article was created with the support of artificial intelligence – combining carefully crafted prompts, legal expertise, and automated text generation. While it does not constitute legal advice and the author assumes no liability for its content, the legal questions it raises are very real. As AI-generated content becomes more prevalent, the boundaries of authorship, accountability, and liability are being tested. Legislators and courts will increasingly need to address the question: When machines contribute to the creation of content or decisions, where does human responsibility begin – and where does it end?
