AI: Frontiers, Risks, and the Future

Where AI Development is Headed: Trends, Limits, and Open Questions

A concluding look at AI development trends: scaling, multimodality, infrastructuralization, and the open questions that lack ready-made answers.

Scaling Strategy in AI Development and Training

The Current Stage: Growth as the Dominant Strategy

Over the last decade, the development of artificial intelligence systems has been largely driven by a single principle: more data, more computing resources, and larger models. This approach has yielded tangible results. Models trained on massive corpora of text, images, and other data demonstrate capabilities that seemed technically out of reach just a few years ago – from coherent natural language dialogue to generating images from text descriptions.

Scaling remains one of the central strategies. Increasing the number of parameters, training data volume, and computing power continues to improve performance across a range of tasks. At the same time, the research community is increasingly noting signs of a "diminishing returns" effect: the growth in capabilities is becoming less proportional to the resources invested. This does not mean the strategy has exhausted itself, but it does force us to consider whether scaling alone is sufficient or if architectural and methodological shifts are required.
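The diminishing-returns pattern is often summarized with empirical power laws relating model size to loss. The sketch below is purely illustrative: the constants `N_C` and `ALPHA` are hypothetical values chosen to show the shape of the curve, not measurements from any real model family.

```python
# Illustrative power-law scaling curve: loss ~ (N_c / N) ** alpha.
# N_C and ALPHA are hypothetical constants for demonstration only;
# they do not describe any actual trained model.

N_C = 8.8e13   # hypothetical "critical" parameter count
ALPHA = 0.076  # hypothetical scaling exponent

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

# Each 10x increase in parameters buys a smaller absolute improvement:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Under any power law of this form, every tenfold increase in parameters improves the loss by a smaller absolute amount than the previous tenfold increase did, which is exactly the "diminishing returns" shape described above.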

In parallel, multimodality is evolving – the ability of systems to work simultaneously with text, images, sound, and other data types. This is more than just a technical expansion: multimodal systems are changing the very form of human-computer interaction, bringing it closer to how humans perceive and process information in everyday life. A request no longer requires precise textual phrasing – it can be shown, spoken, or indicated through context.

Impact of AI on Human-Computer Interaction

Changing the Form of Interaction

The technical capabilities of systems are only one side of the story. No less significant are the changes in how people interact with tools that incorporate AI components.

Interfaces are becoming less formal. While interacting with computing systems previously required strict knowledge of command syntax or query structure, language models allow for the use of everyday speech. This lowers the barrier to entry and expands the user base, but simultaneously creates an "illusion of understanding" where none exists. The system responds coherently, which is perceived as a sign of meaningful thought, even though the generation mechanism involves neither an understanding of meaning nor the presence of intent.

A redistribution of cognitive functions is also taking place. Some tasks that once required human attention, formulation, and judgment are being delegated to systems: rough synthesis, searching through large datasets, and generating options. Humans are increasingly finding themselves in the role of the one who evaluates the result rather than creates it from scratch. This doesn't make the human role less significant, but it changes its character. Where the ability to formulate was once required, the ability to critically evaluate is now paramount.

The normalization of AI tools in daily life is progressing gradually but steadily. Algorithmic systems are already embedded in search, navigation, recommendation services, and productivity apps. The next stage is not the emergence of new capabilities, but their seamless weaving into habitual processes. The function becomes part of the environment, ceases to be perceived as a separate technology, and becomes a "background condition" of work.

The Role of AI as Infrastructure and Environment

Infrastructuralization: From Tool to Environment

One of the key shifts in AI development is the transition from niche applications to an infrastructural role. Systems are ceasing to be separate, specialized tools and are becoming part of the general technological environment upon which other services and processes rely.

This change has several consequences. First, dependence on the functioning of this infrastructure is growing: failures, errors, or limitations in base systems begin to spread widely. Second, the visibility of the technology for the end-user decreases: they interact with an interface without realizing exactly which components are active inside. Third, a new type of dependency is forming – not on a specific product, but on an entire class of systems.

Infrastructuralization also means that issues of reliability, predictability, and control over systems become not only technical but also organizational, legal, and social. When an algorithm influences a credit decision, a medical diagnosis, or a workflow, it is no longer a matter of technical optimization, but a matter of responsibility and governance. Regulation and ethical frameworks, which the professional community has discussed for a long time, are gaining practical urgency precisely because the presence of AI in infrastructure is becoming less and less optional.

Fundamental Limitations of Modern AI Systems

Limits and Uncertainty

In tandem with the growth of capabilities, the fundamental limitations of current approaches are becoming clearer.

Modern systems are trained to predict patterns in data. They do not form causal models of the world, do not maintain context beyond the current interaction, and are incapable of goal-setting in the sense that humans are. These limitations are not accidental – they stem from the very nature of the methods underlying the current generation of systems. Improving data quality or increasing the number of parameters does not fundamentally eliminate them.
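The claim that such systems predict patterns rather than model causes can be made concrete with a toy example. The tiny corpus and code below are illustrative assumptions, not a description of how any production model works, but the principle scales: the predictor reproduces co-occurrence statistics without any representation of why the words co-occur.

```python
# Toy next-word predictor built from bigram counts. It reproduces
# patterns observed in its training text but holds no model of why
# those words follow one another: pattern prediction, not causation.
from collections import Counter, defaultdict

corpus = "the rain falls the ground gets wet the rain falls".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "<unk>"

print(predict("rain"))  # -> falls (observed frequency, not physics)
```

The predictor "knows" that "wet" tends to follow "ground gets" only as a statistical regularity; nothing in the counts encodes that rain causes wetness, which is the distinction the paragraph above draws.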

Research directions that could overcome these limitations exist, but none have reached a stage of stable results comparable to the achievements of deep learning over the last decade. Reasoning, planning, and handling novel situations without relying on training examples remain areas of active research without an obvious consensus on exactly how to move forward.

The uncertainty of the path ahead is further amplified by the fact that AI development is not a purely technical process. It depends on economic feasibility, on which tasks society deems a priority, on the regulatory climate, and on cultural norms that determine what level of machine involvement in decision-making is considered acceptable. Different social contexts will shape different trajectories of development and application – and this is not a flaw of the system, but a natural consequence of embedding technology into the living social fabric.

Paradigm shifts are also possible. The history of science and technology shows that dominant approaches are replaced not because they exhaust themselves completely, but because new methods allow for solving tasks that the old ones solved poorly or not at all. The probability that the current architectural paradigm will remain unchanged for the next twenty years is low, but predicting what will replace it and when is currently impossible.

Future Challenges and Ethical Questions of AI Integration

Summary and Open Questions

Having moved from basic concepts – what an algorithm is, how machine learning differs from programming, why generation does not equal understanding – to the applied and social dimensions of the technology, we can make one steady observation: artificial intelligence today is simultaneously a tool, an infrastructure, and an environment.

As a tool, it expands human capabilities in specific tasks – where speed, scale, and the processing of large volumes of data are important. As an infrastructure, it becomes the invisible foundation upon which other systems and processes are built. As an environment, it begins to shape the conditions in which thinking, decision-making, and human interaction occur.

AI development is a social process as much as a technological one. Society adapts to new cognitive capabilities, redistributes functions, and develops norms and institutions. This process moves slower than technological change, and it is precisely this "speed gap" that creates most of the tensions accompanying the spread of AI systems.

There are no final answers here – and this is not an evasion of conclusions, but an honest acknowledgment of the state of the field.

Several questions remain fundamentally open. Where is the line between the useful automation of cognitive labor and the loss of skills vital to human development? How do we establish responsibility for decisions made with the involvement of systems that are not subjects in themselves? What should the role of AI be in areas where the cost of error is high and verification possibilities are limited? How will society deal with the growing asymmetry between those who shape the technological environment and those who exist within it?

A mature stance toward AI development is not the certainty that everything will be "good" or "bad", nor the ability to precisely predict the next step. It is a readiness to work under conditions of uncertainty: to rely on an understanding of how systems are built, to soberly assess their capabilities and limits, to participate in shaping the norms of their use, and to maintain the ability to ask questions for which there are no ready-made answers yet.

It is precisely this skill – to think about the tool without dissolving into it or distancing oneself from it – that remains something technology cannot provide on its own.
