
When Logos Meets Dao: A Philosophical Genealogy of the Ideas Behind Today’s AI

Philosophy, institutions, economics, and politics: West and East as two alphabets of the same modernity.


Re-reading Plato and Confucius with an “AI” at my side doesn’t produce definitive answers. It produces better questions: what we treat as true, what we call just, and which parts of the world—markets, states, platforms—we decide to automate. From the sixth century BCE to 2026: a dual timeline for navigating between ideas and power.

By Andrea Viliotti - Framework GDE


Livo, February 10, 2026


Logos and Dao AI governance

For some time now, when I open a classic—Plato or Confucius, Augustine or Nāgārjuna—I keep an “AI” nearby. I don’t use it to write summaries for me. I use it the way you use a ruler when you draw. It doesn’t decide in my place, but it makes proportions visible that the naked eye tends to miss. It shows where an argument leaps, where a metaphor carries its weight and where it collapses. And, above all, it forces me to separate what a system can compute from what a society must deliberate.


That distinction, which sounds abstract, has become political and economic. In recent years—and in 2026 more than ever—the adoption of AI systems has braided itself with regulation, investment, technological sovereignty, labor, and security. It is no accident that different institutions have begun to codify principles and processes: from the U.S. approach to AI risk management to multilateral recommendations on ethics and rights, all the way to Europe’s legal framework and China’s measures on generative AI services. 34,32,33,37,39,40,42


The point is that today’s competition is not only about “who has the best model.” It is about what idea of truth, responsibility, and power gets deposited inside infrastructure. This is where the West/East contrast becomes useful—not as an exotic postcard, but as an archive of solutions (and mistakes) tested over centuries, in different languages. 49,51


My thesis, if I have to compress it into one sentence, is this: the digital age does not erase the differences between the two philosophical stories—it makes them measurable. An algorithmic lens, precisely because it is poor in wisdom, forces us to name what we used to leave implicit. And it sends us back to the original knot: what do we treat as true; what do we treat as just; who gets to decide; and which decisions should never be automated.

The “AI lens” in five moves (reading, not prophecy)

1) It breaks a text into units (words, sentences, n-grams) and measures frequencies and co-occurrences.

2) It looks for regularities: recurring definitions, chains of inference, oppositions (true/false; being/becoming; individual/community).

3) It compares contexts: how the same concept shifts across eras and languages.

4) It flags anomalies: logical jumps, ambiguities, metaphors that stand in for an argument.

5) It returns “candidates”: reading hypotheses, not conclusions. Judgment remains human (a minimal code sketch of these moves follows below).
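To make these moves concrete, here is a minimal Python sketch, written for illustration only: the function names (tokenize, ngram_counts, cooccurrences, concept_neighbors), the window size, and the two toy excerpts are my own assumptions, not real translations and not a tool this essay actually relies on. It covers moves 1 through 3 and returns move 5's "candidates"; judging them remains a human task.

```python
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Move 1: break a text into lowercase word units."""
    return re.findall(r"[a-z]+", text.lower())


def ngram_counts(tokens: list[str], n: int = 2) -> Counter:
    """Move 1 (continued): frequency of n-grams (bigrams by default)."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def cooccurrences(tokens: list[str], window: int = 4) -> Counter:
    """Move 2: count word pairs that appear within a small sliding window."""
    pairs = Counter()
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if other != word:
                pairs[tuple(sorted((word, other)))] += 1
    return pairs


def concept_neighbors(corpora: dict[str, str], concept: str, top: int = 5) -> dict[str, list[str]]:
    """Moves 3-5: for each corpus, list the words most often found near a concept.
    These are reading candidates, not conclusions; judgment remains human."""
    result = {}
    for name, text in corpora.items():
        neighbors = Counter()
        for (a, b), count in cooccurrences(tokenize(text)).items():
            if concept in (a, b):
                neighbors[b if a == concept else a] += count
        result[name] = [word for word, _ in neighbors.most_common(top)]
    return result


if __name__ == "__main__":
    # Toy stand-in phrases, not real translations of Aristotle or the Analects.
    corpora = {
        "west_excerpt": "all human beings by nature desire to know and to know is to grasp causes",
        "east_excerpt": "to know the way one must walk the way and practice what one knows with joy",
    }
    print(ngram_counts(tokenize(corpora["west_excerpt"])).most_common(3))
    print(concept_neighbors(corpora, concept="know"))
```

Running it prints the most frequent bigrams of the first excerpt and, for each excerpt, the words that cluster around "know": a crude but honest picture of what a purely statistical lens can and cannot see.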

Two alphabets of truth

If I had to describe the West’s vocation with a single word, I would choose logos. Not because reason is absent elsewhere, but because here—starting in Greece—truth often presents itself as an argument: something you can lay out, contest, defend. Aristotle opens the Metaphysics with a line that is already a program: “All human beings by nature desire to know.” Knowledge, in this tradition, is frequently a conquest: a move from the sensible to the conceptual, from the particular to the universal. 10,11


Across a large part of Asia, truth tends to appear as a path. It is not owned; it is walked. In the Analects, Confucius ties learning to practice until it becomes a habit of character: learning and practicing what one has learned is itself a kind of joy. In the Daodejing, the Dao escapes any final definition: “The Dao that can be spoken is not the enduring Dao.” And in Buddhist traditions the question is often less “what is true?” than “what liberates from suffering?” 1,2,3,4,6,7


An AI lens, when it places these texts side by side, does not “see” spirituality or secularization. It sees structures. It notices, for example, how the West likes to turn questions into well-formed problems, while many Eastern lineages tolerate the co-presence of levels—logical, experiential, ethical—that do not reduce neatly to one another. But the lens also spots the error in our most convenient caricature: the East is not only harmony; the West is not only conflict. Both have produced skepticism and dogmatism, asceticisms and techniques, holy wars and practices of compassion.


That is why the West/East contrast works only if we treat it as a dual timeline: two lines that draw close and drift apart, influence one another, translate badly and then better. It is not a duel between civilizations; it is a history of imperfect translations. Today, with AI, translation has also become economic infrastructure: models trained on global corpora, technical standards, public policy, industrial supply chains, education systems. 52,49

Dual West/East timeline (origins → 2026)

Dual timeline (not to scale: historical compression to avoid crowding in the contemporary segment): selected philosophical, institutional, and technological nodes. Note: BCE dates are reported in conventional form and, for some figures, approximate (sources: academic encyclopedias and institutional documents). Sources: 1,2,3,6,8,10,23,24,12,13,15,17,18,20,39,40,42,55


Figure 1. Dual West/East timeline. Source: author synthesis based on the numbered references above.

When ideas become institutions: metaphysics applied

A philosophy never stays only in books. Sooner or later it settles into a grammar of decisions: law, schooling, administration, markets, liturgy. This is where ideas stop being only discourse and become procedures. In a sense, every institution is a social algorithm: it takes particular cases and classifies them, decides, archives, sanctions.

Rome is an early laboratory. When I read the major syntheses on Roman law, I am struck by how early the West learned to imagine order as a system of categories: persons, things, obligations; and then exceptions, interpretations, hierarchies. It is a kind of normative engineering that makes empire possible but also prepares—centuries later—the modern vocabulary of rights. 26,27


In parallel, in China the Confucian tradition develops a theory of order that passes through ritual and relation: stability does not arise from a contract among abstract individuals, but from the quality of bonds and the virtue cultivated inside institutions. Power, here, is legitimate if it educates—if it turns society into a permanent school. When, many centuries later, China enters an era of economic modernization, it does so carrying this idea of administration as pedagogy. 1,2,28,29


Medieval Europe adds another ingredient: the idea that truth has a revealed dimension, yet remains discussable with rational tools. Augustine and, later, Thomas Aquinas work on a delicate balance between faith and reason—never perfectly stable, but intellectually productive. Universities, commentaries, disputations take shape: a kind of pre-modern “machine learning”—not automated, but cumulative—in which knowledge grows through variation and correction. 23,24,25


Meanwhile, across Asia, other techniques of attention mature. Buddhism, in its many schools, brings the mind to the center: consciousness is not a monolith; it is a process. With Nāgārjuna, for example, a critique of essences becomes a critique of language's claims to absoluteness. In Japan, centuries later, the dialogue between Buddhism and European philosophy will give rise to the Kyoto School: an explicit attempt to translate the West without being absorbed by it. 6,7,8,9,22

The “AI lens” as a map of questions (ontology, epistemology, ethics, politics)

Conceptual schema: AI is used as a lens to formulate comparable questions across traditions and arenas (not as a final authority).

Figure 2. The AI lens as a question map. Source: author schema (conceptual).

Science, capital, empires: when reason becomes a machine

The West’s decisive passage into modernity is not only philosophical; it is technical and economic. When Descartes proposes suspending belief in anything that can be doubted, he is not only talking about knowledge; he is describing a replicable method. A mind, for him, should work like a laboratory: isolate variables, break problems down, rebuild. The metaphor of the “machine” enters Europe’s vocabulary and soon becomes infrastructure: applied science, productive organization, the fiscal state. 12


This is where Italy enters the story not only as a place of ideas, but as a hinge between ideas and markets. The Renaissance also means cities, bookkeeping, manufacturing, commercial law, banks. It is a Mediterranean version of a phenomenon that will later become Atlantic. When, much later, the United States builds pragmatism—the philosophy that measures truth by effects—it finds a question already waiting: what is an idea for, if it does not change the world?


Kant, at the heart of the Enlightenment, attempts the most ambitious operation of all: to delimit the field of reason in order to save its force. “Sapere aude” becomes the slogan of a culture that wants emancipation. Yet in the same movement a paradox is born that we now recognize inside AI systems: the more we formalize, the more we risk confusing the map with the territory. Reason, when it turns into procedure, can produce freedom; it can also produce blind bureaucracy. 13


In the nineteenth century, Hegel and Marx shift the axis further. History is not only a theater of ideas; it is a process. Marx distills the point with a single stroke: “The philosophers have only interpreted the world… the point is to change it.” From here on, European philosophy cannot pretend to be neutral with respect to labor, capital, and technique. And economics—in Europe and, especially, in the United States—takes seriously the idea that productive organization is metaphysics in action. 14,15


In Japan, the nineteenth century is also the start of an accelerated translation. The Meiji Restoration opens a modernization cycle that does not simply copy the West; it selects, reorganizes, grafts. The Japanese state builds archives, schools, infrastructure: a way of turning knowledge into public policy that we can still recognize today in institutional guidance on AI use—from public administration to business procurement. 30,31,43,44


China takes a different path: first the colonial shock and the crisis of empire, then revolution, then—starting in 1978—reform that redefines the relationship between planning and markets. Here the philosophical question does not vanish; it changes form. What does “truth” mean when growth becomes a criterion of legitimacy? And what does “order” mean when the digital network enters every transaction? 28,29


The twentieth century: language, power, technique

The twentieth century discovers—forcefully—that the problem is not only reality, but the language with which we describe it. Wittgenstein says it almost brutally: “The limits of my language mean the limits of my world.” It sounds unexpectedly contemporary: an AI model, after all, has a world as large as its data and representations. When we change language, we change what we can see—and what we can govern. 17


In the United States, pragmatism puts the criterion of effects at the center. In an era of industry, large organizations, mass markets, this approach becomes almost a national culture: truth is not an icon; it is a test. It pairs naturally with technological innovation and with a capitalism able to turn science into products. In 2026, that inheritance is visible in the way the U.S. alternates acceleration and governance, often via standards and frameworks more than via a single unified code. 16,34,36


In Europe, by contrast, the wound of wars and totalitarianisms makes a political question unavoidable: what can reason do when it becomes apparatus? Heidegger—and then many others—redirect attention to experience: the human being is not only a subject who observes, but a being-in-the-world. Foucault, in a different vocabulary, shows how knowledge and power co-produce one another: institutions do not merely apply a truth; they manufacture it. For the European Union, this lesson eventually becomes a constructive obsession: designing markets and rights together. 19,18,37,39


Italy, inside this story, is a case study. It is the homeland of a humanism that celebrates the human being—and also a laboratory of fragile modernity: a late nation-state, uneven industrialization, reconstruction, European integration. When we debate AI in Italy today—in public administration, in firms, in research—we are still negotiating that old balance between creativity and rule, between craft and institution. 45,46


In East Asia, the twentieth century is also a head-on confrontation with the West. Japan produces a rare philosophical experiment: the Kyoto School, which uses European conceptual tools to think Buddhist categories such as “nothingness” and non-duality. It is a useful reminder: translation is not imitation; it is transformation. And the same logic holds for technology today: importing a model does not automatically import its governance. 22,20


China, in the same century, moves through revolution, planning, opening. From the lens’s point of view, what shifts are the criteria of legitimation: from tradition, to revolution, to economic and social performance. When, in 2023, targeted measures are issued for generative AI services, that braid between innovation and control becomes visible: an idea of order that does not match the Western one, but is not therefore “irrational”—it is a different applied metaphysics. 40,41


2012–2026: AI as infrastructure—and governance as philosophy in action

If one date matters on the technical side, it is 2012: the year a deep neural network (AlexNet) shows, on a now-famous benchmark, how the combination of data, compute, and the right architecture can shift performance. In the years that follow, attention moves from images to language: the Transformer (2017) and then the family of large language models make practical what for decades had remained a promise—fluid interaction with text and knowledge. 55,56,57


The consequence is not only technological; it is economic. When a model becomes an interface—when it can “speak” and “write”—it enters production as a new form of capital. But the lens sends me back to the classics: every technology that promises efficiency also redraws the boundaries of responsibility. Who answers for an automated decision? What does “competence” mean when part of cognitive work becomes delegable?


The European Union has chosen an answer consistent with its own history: turning the question into law. The AI Act is built as a risk-based regulation, with progressive deadlines up to full applicability in 2026 (with some obligations kicking in earlier). It is the European idea—shaped by the twentieth century—that innovation is not neutral and that markets and rights must be designed together. 37,39,38


The United States oscillates between two impulses: a culture of rapid innovation and a growing demand for governance. Executive Order 14110 (2023) articulated an agenda of “safe, secure, and trustworthy” AI; in 2025 a new executive order revoked it, shifting emphasis toward removing barriers to innovation and strengthening leadership. In parallel, institutional tools such as NIST’s AI risk management framework remain: a characteristically American approach built around standards and distributed accountability. 35,36,34


China, by contrast, frames governance as an extension of informational order. In 2023 it publishes interim measures for generative AI services; in 2025 it opens consultation on rules for “personified” or human-like interactive AI services. Here the lens sees a deep continuity: administrative tradition as a device of cohesion and control, now applied to the digital sphere. 40,42


Japan tries a middle path: guidelines and recommendations for responsible use, both in the private sector and in public administration. This is not only a technical choice; it is a cultural reflex. After two centuries of translations—from Meiji modernization to twentieth-century philosophical syntheses—Japan often favors adaptive instruments that steer behavior without claiming to exhaust it in a single code. 43,44,30


Italy, in 2024–2026, moves within the European architecture but with its own constraints: highly uneven public administrations, a dense fabric of SMEs and industrial districts, cultural heritage that calls for protection, and a school system that must be updated. Hence the national strategy for 2024–2026 and, in 2025, a framework law that sets principles and delegations on AI, explicitly reaffirming human decision-making and vigilance over risk. 45,46,47,48


Globally, geoeconomic fragmentation—value chains, controls on critical technologies, security—forms the backdrop for all of this. Multilateral institutions debate ethics and principles (UNESCO, OECD) while also tracking productivity and the distribution of gains (IMF, BIS). Here the lens becomes practical: every time an actor invokes “innovation” or “security,” it is making a philosophical choice about what counts—and about who should bear the cost of uncertainty. 51,32,33,54,49


What to watch in 2026: five tensions between ideas, markets, and institutions

If I had to turn this journey—from origins to 2026—into an operational agenda, I would choose five tensions. They are philosophical tensions, but they have immediate consequences for investment, work, trade, and security.


1) Formalization vs. context. The West has a long history of definitions and boundaries; many Eastern traditions remind us that meaning depends on context. In AI, this tension is concrete: rigid rules vs. adaptive systems; compliance vs. continuous learning.

2) Individual rights vs. relational duties. Europe speaks the language of rights, China often that of order, Japan that of harmony, the United States that of opportunity and distributed responsibility. These are not caricatures; they are different genealogies now colliding inside platforms and supply chains. 32,33,37,36,40,43,45


3) Security vs. openness. The vocabulary of security—national, cyber, social—grows everywhere. Yet the history of ideas teaches that every closure also produces blind spots: less exchange means less critique and less cross-checking. In 2026 the question is not whether to add controls, but where—and with what accountability. 51,49


4) Productivity vs. distribution. Economic institutions watch productivity promises closely, but also the risk of concentration. Here Marx is not a monument; he is a reminder: whenever a technology raises productivity, some benefit more than others, and someone loses bargaining power. 15


5) Automation vs. deliberation. For states and firms, the temptation is to treat AI as a decision shortcut. But the shared lesson of both traditions runs the other way: what matters most—public truth, justice, dignity—is not produced by calculation alone; it requires judgment. 34,32,48

Minimal glossary (2026) for reading AI governance documents

• Risk: probability × impact; in regulation it is often categorized by use context (a toy numeric sketch follows this glossary).

• Foundation model: a model trained on broad data, reusable across multiple tasks.

• High-risk system: a use case that can affect rights and safety (definitions vary by jurisdiction).

• Alignment: techniques and processes to ensure model outputs respect human goals and constraints.

• Auditability: the ability to reconstruct data, choices, accountability, and controls across a system’s lifecycle.

• Governance: rules, incentives, controls, and responsibilities that turn a technology into an institution. 34,37,40
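As a toy illustration of the first entry, the sketch below treats risk as probability × impact and maps the score to coarse bands. The thresholds and band names are hypothetical choices of mine; real frameworks such as the AI Act categorize by use context, not by a single numeric score.

```python
def risk_score(probability: float, impact: float) -> float:
    """Expected-harm style score: probability (0-1) times impact (0-10)."""
    return probability * impact


def risk_band(score: float) -> str:
    """Map a score to a coarse band; cut-offs are illustrative only."""
    if score >= 6.0:
        return "high"
    if score >= 2.0:
        return "limited"
    return "minimal"


print(risk_band(risk_score(0.8, 9)))   # 7.2 -> "high"
print(risk_band(risk_score(0.1, 3)))   # 0.3 -> "minimal"
```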


Throughline — The question that returns

If I line up two and a half millennia of philosophy, the West gives me one obsession: making explicit what is implicit; turning intuitions into arguments; building institutions that can withstand dissent. The East gives me another obsession: remembering that no definition exhausts experience; that every rule must contend with context; that the most dangerous power is the power that does not know it is power. AI, used as a lens, does not resolve this tension. It makes it impossible to ignore. And perhaps that is already a lot: in 2026 the choice is not between West and East, but between unconscious automation and conscious deliberation. 10,1,6,34,32


Sources and references (academic and institutional)

1.      Stanford Encyclopedia of Philosophy — Confucius (2023 (rev.)). Stanford University.

2.      Internet Encyclopedia of Philosophy — Confucius (n.d.). University of Tennessee at Martin (IEP).

3.      Stanford Encyclopedia of Philosophy — Daoism (2025 (rev.)). Stanford University.

4.      Internet Encyclopedia of Philosophy — Daoist Philosophy (n.d.). University of Tennessee at Martin (IEP).

5.      Internet Encyclopedia of Philosophy — Laozi (n.d.). University of Tennessee at Martin (IEP).

6.      Stanford Encyclopedia of Philosophy — Buddha (2011). Stanford University.

7.      Internet Encyclopedia of Philosophy — Buddha (n.d.). University of Tennessee at Martin (IEP).

8.      Stanford Encyclopedia of Philosophy — Nāgārjuna (2022 (rev.)). Stanford University.

9.      Internet Encyclopedia of Philosophy — Nāgārjuna (n.d.). University of Tennessee at Martin (IEP).

10.  Stanford Encyclopedia of Philosophy — Aristotle (2025 (rev.)). Stanford University.

11.  Internet Encyclopedia of Philosophy — Aristotle (n.d.). University of Tennessee at Martin (IEP).

12.  Stanford Encyclopedia of Philosophy — René Descartes (2020 (rev.)). Stanford University.

13.  Stanford Encyclopedia of Philosophy — Immanuel Kant (2023 (rev.)). Stanford University.

14.  Stanford Encyclopedia of Philosophy — Georg Wilhelm Friedrich Hegel (2020 (rev.)). Stanford University.

15.  Stanford Encyclopedia of Philosophy — Karl Marx (2020 (rev.)). Stanford University.

16.  Stanford Encyclopedia of Philosophy — Pragmatism (2021 (rev.)). Stanford University.

17.  Stanford Encyclopedia of Philosophy — Ludwig Wittgenstein (2024 (rev.)). Stanford University.

18.  Stanford Encyclopedia of Philosophy — Michel Foucault (2023 (rev.)). Stanford University.

19.  Stanford Encyclopedia of Philosophy — Martin Heidegger (2025 (rev.)). Stanford University.

20.  Stanford Encyclopedia of Philosophy — Japanese Zen Buddhist Philosophy (2006 (rev. 2019 archive)). Stanford University.

21.  Internet Encyclopedia of Philosophy — Huineng (n.d.). University of Tennessee at Martin (IEP).

22.  Stanford Encyclopedia of Philosophy — The Kyoto School (2023 (rev.)). Stanford University.

23.  Stanford Encyclopedia of Philosophy — Augustine of Hippo (2019). Stanford University.

24.  Stanford Encyclopedia of Philosophy — Thomas Aquinas (2022). Stanford University.

25.  Internet Encyclopedia of Philosophy — Thomas Aquinas (n.d.). University of Tennessee at Martin (IEP).

26.  Columbia Law School Scholarship Archive — Roman Law and Economics, Vol. 1: Institutions and Organizations (2020). Columbia Law School.

27.  Cambridge University Press (Journal of Roman Archaeology) — Review: “Law in the Roman Provinces” (Oxford Studies in Roman Society and Law) (2022). Cambridge University Press.

28.  World Bank — China's reform experience to date (1992 (doc. record)). World Bank.

29.  International Monetary Fund — China: Overview of Reforms (Chapter) (1993 (eLibrary)). IMF.

30.  National Archives of Japan — Our Holdings: Kobunroku (Meiji period records, 1868–1885) (n.d.). National Archives of Japan.

31.  National Diet Library, Japan — Kaleidoscope of Books: The Dawn of Modern Japanese Architecture (post-1868 context) (n.d.). National Diet Library.

32.  UNESCO — Recommendation on the Ethics of Artificial Intelligence (2021). UNESCO.

33.  OECD — OECD AI Principles (2019). OECD.

34.  NIST — Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023). U.S. Department of Commerce.

35.  Federal Register (USA) — Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023). U.S. Federal Register.

36.  The White House — Executive Order: Removing Barriers to American Leadership in Artificial Intelligence (2025). Executive Office of the President (USA).

37.  EUR-Lex — Regulation (EU) 2024/1689 (AI Act) (2024). European Union.

38.  European Commission — AI Act enters into force (2024). European Commission.

39.  European Union – Shaping Europe’s digital future — AI Act: application timeline (incl. 2 Aug 2026 full applicability) (n.d.). European Union.

40.  Cyberspace Administration of China (CAC) — Interim Measures for the Management of Generative AI Services (2023). CAC.

41.  OECD STIP — Policy initiative: China's Interim Measures for Generative AI Services (n.d.). OECD.

42.  Cyberspace Administration of China (CAC) — Draft measures on management of “personified” / human-like interactive AI services (consultation) (2025). CAC.

43.  Ministry of Economy, Trade and Industry (Japan) — AI Guidelines for Business (Appendix Ver 1.1) (2024). METI (Japan).

44.  Digital Agency (Japan) — Guideline for Japanese Governments' Procurements and Utilizations of Generative AI Systems (2025). Digital Agency (Japan).

45.  AgID (Agenzia per l’Italia Digitale) — Strategia italiana per l’Intelligenza Artificiale 2024–2026 (PDF) (2024). AgID / Governo italiano.

46.  Dipartimento per la trasformazione digitale (Italia) — Pubblicato il documento completo della Strategia Italiana per l'IA 2024–2026 (2024). Governo italiano.

47.  Gazzetta Ufficiale della Repubblica Italiana — Legge 23 settembre 2025, n. 132 – Disposizioni e deleghe al Governo in materia di intelligenza artificiale (2025). Gazzetta Ufficiale.

48.  Normattiva — Legge 23 settembre 2025, n. 132 (testo vigente) (2025). Istituto Poligrafico e Zecca dello Stato.

49.  Bank for International Settlements — Annual Economic Report 2024, Chapter III: Artificial intelligence and the economy (2024). BIS.

50.  Bank for International Settlements — Annual Economic Report 2025 (PDF) (2025). BIS.

51.  International Monetary Fund — Geoeconomic Fragmentation and the Future of Multilateralism (Staff Discussion Note SDN/2023/001) (2023). IMF.

52.  World Bank — World Development Report 2021: Data for Better Lives (2021). World Bank.

53.  International Monetary Fund — World Economic Outlook, October 2025 (2025). IMF.

54.  International Monetary Fund — Blog (Jan 19, 2026): Global economy and AI productivity promises (2026). IMF.

55.  A. Krizhevsky, I. Sutskever, G. E. Hinton — ImageNet Classification with Deep Convolutional Neural Networks (NeurIPS 2012) (2012). NeurIPS.

56.  A. Vaswani et al. — Attention Is All You Need (2017). arXiv/NeurIPS.

57.  T. B. Brown et al. — Language Models are Few-Shot Learners (2020). arXiv/NeurIPS.
