Any sufficiently powerful technology eventually faces the question: who controls it and under what terms? The steam engine required technical standards and labor laws. The automobile brought about rules of the road, insurance, and driver licensing. Aviation led to international agreements on airspace. The internet necessitated norms regarding platform liability, personal data protection, and digital property.
Artificial intelligence is no exception. It is already embedded in banking, medical diagnostics, hiring systems, criminal justice, and military applications. Like the technologies that preceded it, it raises questions for which existing legal systems have no ready-made answers: Who bears responsibility for an algorithm's decision? How can one contest a conclusion reached by a model? Whose data was used for training, and under what conditions?
Regulation is society's attempt to formulate answers to these questions before their absence begins to cause real harm. It is this logic, rather than political preferences or a fear of progress, that lies at the core of current regulatory activity surrounding AI.
Three Levels of Rules: Standards, Corporate Policies, and Laws
When people talk about AI regulation, they often mean only government legislation. However, the real picture is more complex: rules are formed at several levels that exist in parallel, sometimes complementing each other and sometimes coming into conflict.
Technical standards are the least visible but fundamentally important level. These are documents developed by specialized organizations: international committees for standardization, industry associations, and academic consortia. They describe how a system must be designed to be considered reliable, reproducible, and verifiable. Standards do not have legal force on their own, but both corporate policies and legislation rely on them. In fact, they set the technical language without which any legal regulation would be meaningless: one cannot demand "algorithmic transparency" without defining exactly what that entails.
Corporate policies represent the second level. Companies developing and deploying AI form their own rules: principles of acceptable use, internal ethics committees, and restrictions on where and how the technology may be applied. These rules often emerge faster than government legislation – simply because businesses cannot afford to wait for legislators to untangle the technical details. Corporate policies are more flexible and can be promptly revised, but they remain voluntary and primarily reflect the interests of the organization itself rather than society as a whole.
Government legislation is the most formalized level. These are laws, directives, and administrative acts that establish mandatory requirements with enforcement mechanisms. This is where liability is fixed and the rights of citizens and the duties of organizations are defined. Legislation is inherently slower: drafting a law requires reconciling many interests, gathering expertise, and making political decisions. This is simultaneously its weakness – it always lags behind technology – and its strength: it is through the law that rules are made mandatory for everyone, not just for those willing to follow them voluntarily.
There is no rigid hierarchy between these three levels. In practice, they interact: technical standards form the basis of legislative requirements; laws incentivize companies to develop internal policies; and corporate practices, in turn, influence how standards are formulated. Regulation is not a single document, but an ecosystem of interconnected rules.
Innovation and Control: Tension, Not Opposites
In public discourse, regulation is often presented as a threat to innovation: "if you introduce rules, you slow down development." This is a simplification that ignores the other side of the coin.
A lack of predictable rules also stunts growth – just in a different way. Companies that do not know what requirements will be placed upon them tomorrow are forced either to hedge against every conceivable demand or to take on risks they cannot price. Investors are more cautious about putting money into an industry where the legal environment is opaque. Users trust systems less if they do not understand who is responsible in the event of an error.
From this perspective, well-thought-out regulation is not a barrier to innovation, but a condition for its sustainability. Aviation did not become less advanced because planes undergo mandatory certification and pilots are licensed. On the contrary, it was the existence of uniform safety standards that allowed the aviation industry to scale globally.
At the same time, the tension between the speed of technological progress and the speed of rule-making is real and unavoidable. A legislative cycle takes years, while technology changes in months. Any law written for a specific version of a technology risks becoming obsolete before it even takes effect.
This is why one of the key discussions in the regulatory community today concerns how laws should be built: around specific technical requirements or around general principles and goals. Specific requirements are easier to verify, but they quickly lose relevance. A principle-based approach – "the system must be safe and explainable" – stays relevant longer but requires far more interpretive work in each specific case.
Why AI Is Harder to Describe Legally Than Other Technologies
Most technologies that states have regulated in the past were tangible or at least clearly delineated: a specific device, process, or product. A car can be described, measured, and certified. A drug can be tested in clinical trials.
AI as an object of regulation is structured differently. Firstly, it is not a single object, but a vast class of heterogeneous technologies: statistical forecasting models, computer vision systems, generative neural networks, and ranking algorithms. They are all grouped under a common term, even though they function differently and carry different risks. A universal "AI Law" will inevitably prove either too broad to be applicable or too narrow to cover the diversity of systems.
Secondly, the same model can be used in completely different contexts – from movie recommendations to creditworthiness assessments. The risks in these cases are incomparable. This raises the question: should we regulate the technology itself or its field of application? Most modern regulatory approaches lean toward the latter, focusing on high-risk areas: healthcare, criminal justice, and critical infrastructure management.
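To make the context-based approach concrete, here is a minimal sketch in Python. The domains, risk tiers, and compliance duties are purely illustrative – they loosely echo risk-based frameworks such as the EU AI Act, not a real legal taxonomy – and the point is only that the same model triggers different obligations depending on where it is deployed.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"   # e.g. entertainment recommendations
    HIGH = "high"         # e.g. credit scoring, medical triage

# Hypothetical mapping from deployment context to regulatory tier.
# The system being classified could be the exact same model in each case.
CONTEXT_TIERS = {
    "movie_recommendation": RiskTier.MINIMAL,
    "credit_scoring": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
}

def obligations_for(context: str) -> list[str]:
    """Return illustrative compliance duties for a deployment context."""
    tier = CONTEXT_TIERS.get(context, RiskTier.MINIMAL)
    if tier is RiskTier.HIGH:
        return ["conformity assessment", "human oversight", "audit logging"]
    return ["transparency notice"]

print(obligations_for("movie_recommendation"))  # light-touch duties
print(obligations_for("credit_scoring"))        # heavy duties, same model class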
Thirdly, AI systems based on machine learning do not have rigidly fixed behavior. A model retrained on different data, or fine-tuned, may produce different results for the same inputs. This complicates any form of certification: the "tested and approved" principle works differently here than it does for a physical device.
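A minimal sketch illustrates why. It assumes scikit-learn and uses synthetic data; the model class and the numbers are hypothetical, but the effect is general: an identical training pipeline behaves differently once the data shifts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def train_and_score(X, y, applicant):
    """Fit the same model class on a given dataset and score one input."""
    model = LogisticRegression().fit(X, y)
    return model.predict_proba([applicant])[0, 1]  # probability of class 1

# Dataset A and a shifted dataset B: same pipeline, different populations.
X_a = rng.normal(0.0, 1.0, size=(500, 3))
y_a = (X_a.sum(axis=1) > 0).astype(int)

X_b = rng.normal(0.3, 1.0, size=(500, 3))    # input distribution shifts
y_b = (X_b.sum(axis=1) > 0.5).astype(int)    # decision boundary shifts

applicant = [0.1, 0.2, 0.1]
print(f"trained on A: {train_and_score(X_a, y_a, applicant):.2f}")
print(f"trained on B: {train_and_score(X_b, y_b, applicant):.2f}")
```

A certificate issued for the first model says little about the second, even though not a line of code changed – only the data did.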
Fourthly, fundamental questions of liability arise. If an algorithm made a decision that caused harm, who should be held responsible – the model developer, the company that deployed it, or the organization that used it in a specific situation? The chain of participants is long, and a clear legal answer regarding the distribution of responsibility does not yet exist in most jurisdictions.
Challenges and Future of AI Regulatory Frameworks
Regulation as the Structuring of Uncertainty
It would be a mistake to expect that AI regulation will ever reach a completed state – a point where all questions are resolved and rules are finally fixed. This did not happen with the internet, which has been regulated for several decades and still generates new legal conflicts. It is unlikely to happen with AI either.
A more realistic description is that regulation is a continuous process of adapting rules to a changing technological reality. Legal frameworks inevitably lag behind – this is not a defect of the system, but a consequence of its nature. Laws are written on the basis of experience already accumulated, whereas technologies keep producing situations that no one has yet encountered.
It is important to understand that this lag does not mean an absence of rules. Even in the absence of specific AI laws, there are general norms of civil liability, personal data protection, anti-discrimination legislation, and consumer protection – and these are applicable to actions involving AI systems right now. Special regulation does not create rules from scratch; it clarifies and adapts existing norms to new conditions.
The international dimension adds another layer of complexity. Technologies do not recognize state borders: a model developed in one country can be used in another and trained on data from a third. This makes international coordination – at least regarding minimum common standards – not just desirable, but necessary. Attempts at such coordination are already underway at the level of international organizations, though the pace of this work significantly lags behind the speed of technological development itself.
Finally, it is worth noting: regulation is not just about restriction. It is also about creating a predictable environment in which technologies can evolve and market participants can make decisions while understanding their rights and obligations. The insurance industry did not shrink because states mandated certain reserve standards. Pharmaceuticals did not stop developing drugs after the introduction of mandatory trials. Regulation sets the "rules of the game" – it does not cancel the game itself.
Where rules are absent or opaque, risks are assumed either by companies (which can afford such uncertainty) or by end-users (who usually cannot). It is this asymmetry, rather than an abstract fear of technology, that makes regulation practically necessary.
AI regulation is not an attempt to stop progress, nor is it a guarantee that everything will go perfectly. It is a gradual, contradictory, and endless process of building rules in response to how technologies change opportunities, risks, and the distribution of power in society. To understand this process is to see it for what it is: complex, slow, but nonetheless essential.