The Source of Inspiration for Artificial Intelligence Systems: Is Choosing One’s Own Core Platform Service Always the Right Solution?
An increasing number of large companies are making use of artificial intelligence systems. The reason is straightforward: to maximize profits with minimal effort by leveraging the potential of new technologies.
For this reason, in recent years the European Union has adopted multiple legislative measures aimed at regulating the phenomenon of emerging technologies operating within the internal market. This need became particularly pressing following the Covid-19 pandemic, when an ever-growing number of users increased their reliance on technological tools, including for everyday activities.
By way of example, reference can be made to virtual assistants, cloud services, and many other tools identified as “core platform services,” all of which are subject to obligations and measures primarily established at the European level.
The European Union’s objective is, above all, to harmonize domestic legislation with European Regulations. It is within this framework that the most recent legislative acts have been adopted: the Digital Markets Act (Regulation (EU) 2022/1925), the Digital Services Act (Regulation (EU) 2022/2065), and the Artificial Intelligence Act (Regulation (EU) 2024/1689).
The Digital Markets Act is designed to ensure fairness, transparency, and openness in digital markets dominated by large technology platforms. It seeks to limit anti-competitive conduct by so-called “gatekeepers,” namely platforms which, due to their size and market power, are capable of exerting a significant impact on businesses and consumers.
The Digital Services Act aims specifically to regulate digital services by imposing enhanced oversight on providers whose services have a systemic impact on the online environment, including with a view to protecting minors accessing online services. The Artificial Intelligence Act, on the other hand, introduces transparency obligations calibrated to a risk-based classification of AI systems, governing the provision of AI-based services.
In particular, the Artificial Intelligence Act sets out detailed obligations relating to the use of artificial intelligence systems. This is clearly reflected in Recital 106, which provides that “Providers placing general-purpose AI models on the Union market should ensure compliance with the relevant obligations of this Regulation,” taking into account that, as stated in Recital 105, “General-purpose AI models, in particular large generative AI models capable of generating text, images and other content, present unique innovation opportunities, but also challenges for artists, authors and other creators, and for the way in which their creative content is created, distributed, used and consumed.”
The primary obligation incumbent upon digital platforms, especially those classified as “high risk,” is to ensure an adequate level of transparency and to implement a quality management system in compliance with Article 17. In this respect, providers of high-risk AI systems must establish a quality management system covering, inter alia:
- the techniques, procedures and systematic actions to be used for the design, design control and design verification of the system;
- the examination, testing and validation procedures to be carried out before, during and after the development of the system;
- systems and procedures for data management, including acquisition, collection, analysis, labelling, storage, filtering, extraction, aggregation, retention and any other data-related operation performed prior to and for the purpose of placing high-risk AI systems on the market or putting them into service;
- the establishment, implementation and maintenance of a post-market monitoring system.
Failure to comply with these obligations may expose companies to liability, including liability arising from breaches of other legal frameworks, such as competition law.
This is what occurred in the case of Google, which allegedly used content from videos circulating on YouTube to train its artificial intelligence systems without adequately compensating the creators of that content.
Accordingly, the European Commission has launched an antitrust investigation into Google to determine whether the company used content from online publishers and videos uploaded to YouTube to train its generative artificial intelligence services (“AI Overviews” and “AI Mode”) without offering appropriate remuneration.
The legal basis for the investigation is an alleged abuse of dominant position, in violation of Article 102 TFEU and Article 54 of the EEA Agreement. From the perspective of web publishers, the European Union is scrutinizing tools such as AI Overviews and AI Mode: the responses generated by these tools are allegedly based on editorial content (indeed linked alongside the AI-generated answers) for which publishers received no fair compensation. Moreover, Google allegedly drew on such content without giving creators the opportunity to withhold consent, a concern compounded by the reduction in traffic from Google Search that these tools cause.
The Commission clarified that the investigation will focus on three key aspects:
- the nature of the conditions imposed on publishers and content creators for the use of their materials;
- the existence of effective mechanisms allowing rights holders to oppose such use without suffering negative consequences in terms of visibility or access to Google’s services;
- the possible existence of discriminatory treatment between Google and its competitors regarding access to the content necessary to train AI models.
The investigation initiated by the European Commission marks the official starting point of the regulatory oversight path undertaken by European institutions with respect to the practices of major technology platforms. It is the second formal proceeding opened against Google in relation to the training of generative AI systems: the first focused on the search engine’s anti-spam policy and the alleged discriminatory treatment of news websites and publishers in search results, which purportedly penalized the hosting of third-party content on authoritative domains lacking adequate editorial supervision.
Should the investigation confirm a breach of antitrust rules by Google, the company could face a fine of up to 10% of its annual global turnover, the maximum penalty provided under EU rules for abuse of dominant position, as specified in Regulation (EC) No 1/2003. Considering that the Alphabet group’s overall revenue reached approximately USD 307 billion in 2023, the fine could theoretically exceed USD 30 billion, which would rank among the highest penalties ever imposed by the European Commission.
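As a back-of-the-envelope check of the figures above, the 10% turnover cap can be computed directly. This is a minimal illustrative sketch, not a legal computation: the function name is ours, and the USD 307 billion figure is the rounded 2023 Alphabet revenue cited in the text.

```python
def max_antitrust_fine(annual_global_turnover: float, cap_rate: float = 0.10) -> float:
    """Theoretical maximum fine: cap_rate (10% under Regulation (EC) No 1/2003)
    multiplied by the undertaking's total worldwide annual turnover."""
    return cap_rate * annual_global_turnover

alphabet_2023_revenue_usd = 307e9  # approx. USD 307 billion (2023, as cited above)
print(max_antitrust_fine(alphabet_2023_revenue_usd))  # approx. USD 30.7 billion
```

The cap is a ceiling, not a tariff: the actual fine, if any, would be set by the Commission within that limit based on the gravity and duration of the infringement.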
Therefore, a factual scenario involving violations carried out through artificial intelligence systems may trigger the entire body of existing regulatory legislation. As in this case, the use of artificial intelligence systems may give rise to breaches of competition law, namely the rules designed to safeguard fair and contestable markets.
In particular, the recent wave of European legislative measures has generated significant compliance challenges, especially considering that certain services targeted for regulation fall within the scope of all three Regulations.
Such circumstances require a comprehensive assessment grounded in the awareness that we are facing a form of European regulatory hypertrophy, which makes it difficult to determine which rules apply in specific cases. Many companies operate in digital markets with substantial turnover and/or user bases (thereby triggering the application of the Digital Markets Act); they frequently provide services classifiable according to risk (triggering the Digital Services Act); and they often rely on artificial intelligence systems (triggering the Artificial Intelligence Act).
Is it truly appropriate to place regulatory frameworks governing the same sector within separate legislative acts? Or does this approach increase the margin of error and, consequently, the risk of sanctions?