The piece by Manlio De Domenico in "Complexity Thoughts" offers a critical, well-grounded reflection on the current state of Artificial Intelligence, stepping away from the usual media noise. The author, a specialist in complex systems, argues that the public debate is excessively polarized between blind technological optimism and catastrophic existential fear, which keeps us from seeing the real and immediate risks.
The central point of the analysis is the distinction between technical efficiency and systemic resilience. While the industry focuses on building ever more powerful and faster AI models, De Domenico warns of the "fragility" these systems can introduce into society. From a scientific perspective, he argues that we should not be concerned only with what AI can do, but with how it interacts with human infrastructures, potentially creating critical breaking points if it is not integrated with robustness in mind.
His very pertinent analysis follows.
[Source, with infographics and links to scientific articles]
That matters, but I think it is not the deepest issue: the problem is not only whether one company “bends” and another does not. The deeper problem is that institutions are increasingly willing to delegate strategically consequential functions to systems that are statistical, opaque, fallible and controlled by very few private actors. This is a systems-design problem.
To be clear, I am not arguing that current AI systems are useless, since they are often very useful for uncountable tasks. I am arguing something narrower and more important: public discourse keeps treating performance as if it were understanding, prediction as if it were judgment and scale as if it were intelligence.
Even the scientific language here is far less settled than the marketing of your favorite AI gurus suggests: I am not the first to report this, and it is disheartening to be continuously exposed, nearly everywhere, to such a sloppy narrative.
A recent Nature Machine Intelligence editorial noted that terms such as intelligence, embodiment and consciousness are widely used but "notoriously difficult to define". Melanie Mitchell, another authoritative scientific voice on this matter, made a related point in Science: debates about AGI are often debates about incompatible concepts of intelligence itself, not merely about engineering timelines.
This is why I remain mostly unconvinced by the familiar apocalyptic narrative according to which AI, by itself, will soon become the agent of our destruction. In a previous post I discussed this in detail:
Therefore, instead of a technological singularity, I would bet on reaching an evolutionary stage in which our society becomes increasingly dependent on a technology that is ultimately unsustainable. This will force us to either simplify our systems or face the potential collapse of our society.
The more plausible risk is more mundane and, for that reason, more dangerous: we are embedding these systems into tightly coupled social, administrative, military and informational infrastructures without first asking what happens when they fail, drift, are gamed or become too central to replace. Three mechanisms matter here.
1. We are confusing prediction with judgment
What is commonly called "AI" today is, in most high-visibility deployments, a family of large language models trained to predict likely continuations of sequences. The transformer architecture that made this leap possible was explicitly designed as a sequence model built around attention mechanisms1. That does not make the systems trivial: on the contrary, it explains why they can be astonishingly capable. But it does mean that we should be precise about what they are doing2 before we let institutions present them as substitutes for strategic reasoning. Let's not conflate a statistical analysis with a cognitive function.
This distinction matters most in high-stakes settings: a system can be extremely effective at generating plausible outputs and still be unreliable in the way that matters for command, surveillance, intelligence triage or decision support in conflict. Anthropic itself justified its refusal in part by arguing that current systems are not reliable enough for fully autonomous weapons. Whatever one thinks of the company, that specific point is hard to dismiss as ideological grandstanding: it is an (honest) engineering claim about failure under uncertainty.
In other words, the relevant question is not whether these systems can impress us, but whether they deserve delegation: those are very different thresholds.
2. Centralization turns technical dependence into political dependence
Once a technology becomes infrastructural, governance follows architecture.
If a handful of firms supply the models, the cloud, the interfaces and the update pipeline through which critical institutions operate, then technical concentration becomes a form of political power. Whoever controls access, model behavior, service continuity, pricing and quality of service gains leverage over organizations that can no longer function without that stack.
The same logic applies to AI, and perhaps even more strongly: if strategic analysis, public administration, defense workflows or parts of scientific knowledge production begin to depend on a few model providers, then those providers do not merely sell tools, since they become gatekeepers of cognition at scale. That should trouble democracies regardless of one’s view of any single company.
This is the same systems argument I made in a previous post about conflict propagation:
The real question is often not who is right or wrong in a given episode, but how structure and process determine whether shocks remain local or spread across tightly coupled systems. AI centralization should be read in exactly those terms.
Once a strategic function becomes too dependent on a narrow technological layer, errors, pressures or discretionary choices no longer stay local: they propagate.
3. Hyper-centralization violates the basic logic of resilient systems
From biology to infrastructure, resilience rarely comes from one perfect component: it comes from diversity, redundancy, modularity and hierarchy.
Living systems do not survive because they are optimized around a single response to a single threat. They survive because they preserve multiple partially overlapping ways of functioning. In biology this is often described through modular organization and degeneracy: different components can support similar functions, which makes adaptation possible under shock.
Tightly coupled socio-technical systems become vulnerable when efficiency is purchased at the expense of diversity and replaceability.
This is not just a metaphor imported from biology for rhetorical effect: it is a general principle of complex systems. The classic work on interdependent networks showed how failures can cascade across coupled layers: robustness and resilience depend on architecture, not on optimism.
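As a toy illustration of this architectural point (a minimal sketch, not the interdependent-networks model itself, with hypothetical service and vendor names), consider how a single shared dependency lets one local fault propagate everywhere, while redundant providers contain it:

```python
def cascade(providers, initial_failures):
    """Propagate failures through a dependency graph.

    `providers` maps each component to the set of interchangeable providers
    it can run on; a component fails only when *all* of its providers have
    failed (an empty set means no external dependency).
    """
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, provs in providers.items():
            if node not in failed and provs and provs <= failed:
                failed.add(node)
                changed = True
    return failed

# Monoculture: three critical services all depend on one vendor layer.
mono = {"payments": {"vendorA"}, "flights": {"vendorA"},
        "hospitals": {"vendorA"}, "vendorA": set()}

# Diversity: each service can fall back to a second, independent vendor.
diverse = {"payments": {"vendorA", "vendorB"}, "flights": {"vendorA", "vendorB"},
           "hospitals": {"vendorA", "vendorB"}, "vendorA": set(), "vendorB": set()}

print(cascade(mono, {"vendorA"}))     # the failure cascades to every service
print(cascade(diverse, {"vendorA"}))  # the failure stays local to vendorA
```

The contrast between the two runs is the whole argument in miniature: the monoculture is more "efficient" (one vendor relationship instead of two) and strictly more fragile.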
We have seen this repeatedly. For instance, the CrowdStrike incident showed how a fault in one widely deployed digital layer could propagate rapidly across transportation, finance, communications and resource-management systems. That event was a consequence of digital monoculture and interdependence, not an isolated software bug.
A different but related example comes from crypto. Even systems built under the banner of decentralization can undergo emergent centralization under stress. The figures below make this visible:
Hype, transaction overload and panic coupled financial activity, online attention and social behavior in ways that produced fragility rather than resilience. Centralization, in practice, can emerge from the dynamics even when it was not the design ideal. That is exactly why concentration risk should be treated as a systems property, not only as a market-structure statistic.
I think this is the point that much of the current AI debate misses: the danger is not only in bad outputs, it is in correlated dependence.
The practical implication
It is neither "ban AI" nor the equally lazy alternative of "accelerate and adapt": the relevant priority is to prevent monoculture in critical functions.
That means at least three things:
- keep humans meaningfully responsible for strategic decisions;
- avoid single-vendor dependence in essential public and military workflows;
- design institutions around diversity of tools, auditability, fallback pathways and graceful degradation rather than maximum short-term efficiency.
Of course, the goal is not to stop innovation: it is to stop confusing centralization with progress. Resilient societies are not built by finding one system to trust with everything; they are built by ensuring that no single system, and no single company, becomes too central to fail. “Too big to fail” is not a sign of strength in a healthy society; instead it is evidence that fragility has already been designed into the system.
1 The word sounds cognitive, almost psychological, and that is precisely why it should be handled carefully. In humans, attention is tied to perception, goals, context and conscious or unconscious selection. In transformers, “attention” is simply
a mathematical mechanism for weighting relations among elements of an input sequence so that some tokens contribute more than others to the computation of the next token. The name is useful inside machine learning, but misleading in public discourse. It does not mean the system is focusing, understanding or attending in any human-like sense: it means that a function is assigning weights in a high-dimensional space to improve a statistical prediction.
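To make the contrast concrete, here is a minimal pure-Python sketch of scaled dot-product attention for a single query (toy two-dimensional vectors chosen only for illustration): it is nothing but weighted averaging.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability, then normalize to sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Scores each key by its dot product with the query (scaled by sqrt(d)),
    turns the scores into weights via softmax, and returns the weighted
    average of the value vectors together with the weights themselves.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

# The query aligns with the first key, so the first value dominates the mix.
out, w = attention(query=[1.0, 0.0],
                   keys=[[1.0, 0.0], [0.0, 1.0]],
                   values=[[10.0, 0.0], [0.0, 10.0]])
```

Nothing in this computation "focuses" on anything: it assigns normalized weights and averages. That is the entire content of the term.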
2 The simplest way to explain a language model is to start from a Markov chain. In a Markov chain, the next state depends on the current state according to transition probabilities. For text, a toy version would say: after “good”, the word “morning” may be more likely than “screwdriver”. This is a bit crude, of course, because natural language depends on much longer context.
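The toy version above can be sketched directly as a bigram Markov chain (the tiny corpus is invented purely for illustration):

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Estimate transition probabilities P(next word | current word)."""
    # Count, for each word, how often each other word follows it.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    # Normalize the counts of each row into probabilities.
    return {cur: {w: c / sum(nxts.values()) for w, c in nxts.items()}
            for cur, nxts in counts.items()}

corpus = ["good morning everyone",
          "good morning team",
          "good screwdriver"]
model = train_bigram(corpus)
# After "good": P(morning) = 2/3, P(screwdriver) = 1/3.
```

The model knows nothing beyond the immediately preceding word, which is exactly the limitation the next paragraph addresses.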
A large language model generalizes that idea: instead of using only the immediately previous token, it converts many tokens into numerical representations in a high-dimensional space, then uses attention mechanisms to estimate which parts of the context matter most for predicting the next token. The training objective remains predictive: given context, assign probability to the next token. In practice, repeating this at scale yields systems that can summarize, translate, code and maintain long-range coherence surprisingly well.
That is the source of their “power”: it is not, by itself, proof of understanding, judgment or intelligence in any settled scientific sense. This fact is so clear to the vast majority of computer scientists that my comment about this matter deserves no more than a footnote.
The gap between “very good at prediction” and “deserves strategic delegation” is precisely the gap that public discourse keeps trying to erase.