
Global Scenarios and Chinese Strategies for General Artificial Intelligence

Andrea Viliotti

The issue brief “Chinese Critiques of Large Language Models: Finding the Path to General Artificial Intelligence,” authored by Wm. C. Hannas, Huey-Meei Chang, and Maximilian Riesenhuber at the Center for Security and Emerging Technology (CSET), explores the stance of major Chinese institutions and researchers on large language models. The study highlights China’s interest in a diversified development of artificial intelligence, aimed at achieving true General Artificial Intelligence (GAI), while noting the potential limitations of large-scale language models on their own. From national strategies to approaches inspired by the workings of the human brain, it depicts a landscape in which a “human-centered” focus and social value play pivotal roles.


Strategic Insights for Entrepreneurs, Executives, and Technicians on General Artificial Intelligence

Entrepreneurs are encouraged to carefully assess the return on investment of solutions based on Large Language Models (LLMs, AI systems trained on vast bodies of text to predict or generate language). This is particularly relevant in a context where General Artificial Intelligence is becoming an increasingly important target. Although LLMs produce advanced textual outputs and responses, they can falter on complex tasks, raising the risk of misallocated resources if businesses fail to consider alternative avenues. According to the data presented, Western investments in LLM-related systems can amount to tens of billions of dollars. Chinese perspectives, however, counsel caution, arguing that overreliance on scaling language models might impede the pursuit of a true General AI. It may therefore pay to keep an open mind, explore brain-inspired research projects, and invest in platforms that incorporate physical sensors, robots, and modular neural networks.


Executives can seize the opportunity to review how they direct internal organizational resources with an eye toward integrating General Artificial Intelligence. While LLMs are frequently employed in text analysis systems or chatbots, the study notes that they can generate imprecise results and ‘hallucinations,’ or fabricated responses. For this reason, continuous monitoring of AI projects and clearer corporate goals become essential, so that leaders can pivot swiftly to alternative paradigms if large-scale language models plateau in performance.


Technicians face the challenge of augmenting LLM-based platforms with specialized reasoning modules, biologically inspired algorithms (for instance, spiking neural networks, which replicate the spike-based signaling of neurons), and episodic memory mechanisms akin to those in the human brain. This “brain-inspired” direction focuses on an explicit “reasoning engine” capable of logical abstraction and decision-making. Such project strategies may help overcome the current hurdles in processing non-textual data, understanding nuanced semantic contexts, and preventing inconsistent outputs.
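To make the spiking-neuron idea concrete, the toy Python sketch below implements a single leaky integrate-and-fire unit, a common textbook abstraction of spike-based signaling (this is an illustrative sketch, not code from the study): the membrane potential leaks each step, accumulates input, and emits a discrete spike when it crosses a threshold.

```python
def lif_spikes(currents, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    `leak` each step, accumulates the input current, and emits a spike (1)
    when it crosses `threshold`, after which it resets to zero."""
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically, because
# charge accumulates faster than it leaks away.
print(lif_spikes([0.4] * 10))
```

Information is carried in the timing of the discrete spikes rather than in continuous activations, which is the property brain-inspired researchers hope to exploit for efficiency.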

General Artificial Intelligence

Chinese Diversification and Expanding Pathways Toward General Artificial Intelligence

A key finding from the study is the emphasis placed by Chinese strategic guidelines on General AI. There is deliberate intent to avoid a “monoculture” reliant solely on LLMs, which—despite their potential—are not necessarily sufficient for human-like competencies. China’s top institutional leaders, according to the authors, see investment in models like GPT or Claude (large-scale systems trained on massive text corpora) as worthwhile but also incomplete.


Instead, China promotes a diversified approach that includes brain-inspired neural networks, hybrid models combining symbolic rules with statistical approaches, and techniques involving physical sensors for “embodied” learning (in which an AI agent interacts with the real or simulated environment). National and municipal policies encourage systems that combine data and real-world interaction, all aimed at achieving robust cognitive autonomy—an ability to learn beyond textual boundaries.


Institutions like the Beijing Institute for General Artificial Intelligence and the Beijing Academy of Artificial Intelligence are central to this effort. From inception, they have prioritized going beyond the mere scaling of language models. Researchers in “brain-inspired AI” advocate replicating the complexity of the human nervous system, holding that purely transformer-based networks, even when enriched by massive datasets, cannot fully achieve human-like comprehension, abstraction, and creativity. According to statements cited in the study, these models excel at predicting text but lag in advanced logic and mathematical reasoning.


Economic and strategic factors underlie this approach. Overexpanding LLMs might become a technological cul-de-sac if the expected benefits for General AI never materialize. Moreover, the study argues that the West has allocated so much capital to scaling language models that it may be overlooking alternative technologies. In Europe and the United States, mainstream narratives often spotlight large-scale product launches, overshadowing smaller research initiatives that could be more promising for achieving GAI.

Another motivation is the intent to embed a distinct set of “values” into AI systems. Official discourses in China suggest that a purely statistical intelligence is not enough; machines must be aligned with specific social or governmental objectives. From a research perspective, this means building models that “understand” human sensibilities and align themselves with the population’s needs and the State’s requirements, in order to maintain control and avoid social or political risks.


The government actively supports research in technologies considered complementary to LLMs, such as humanoid robotics, spiking neural networks (which replicate the spike-based communication in biological neurons to boost efficiency and accuracy), and continuous learning approaches (algorithms that adapt and learn from real-world inputs over time). When systems learn from concrete, real-world stimuli, they transcend the limitation of processing only text-based data. This opens the door to solutions that connect the meaning of words to real-world objects or events, a concept often referred to as “grounding.”

For example, a domestic robot equipped with spiking neural networks and continuous learning can come to treat a plastic cup differently from a glass one by interacting with both physically, handling each object according to its properties, avoiding damage, and adapting to new scenarios. Such embodied learning can address many common weaknesses of large text-only models.
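The grounding idea can be caricatured in a few lines of Python. The sketch below is purely illustrative (the class, numbers, and feedback rule are invented, not from the study): the grip force associated with a material label starts from a sensed-stiffness guess and is nudged by physical feedback, so the symbol "plastic" ends up tied to interaction history rather than to text alone.

```python
class GroundedGripper:
    """Toy sketch of grounding: the meaning of a material label is tied
    to sensed stiffness and interaction feedback, not to descriptions."""

    def __init__(self):
        self.force = {}  # learned grip force per material label

    def grasp(self, material, stiffness):
        # first encounter: guess a force proportional to sensed stiffness
        return self.force.get(material, 0.5 * stiffness)

    def feedback(self, material, stiffness, slipped=False, cracked=False, lr=0.2):
        # continuous learning: adjust the stored force after each grasp
        f = self.grasp(material, stiffness)
        if slipped:          # grip too weak: increase force
            f *= 1 + lr
        if cracked:          # grip too strong: decrease force
            f *= 1 - lr
        self.force[material] = f

g = GroundedGripper()
g.feedback("plastic", stiffness=2.0, cracked=True)   # plastic cup cracked
g.feedback("glass", stiffness=8.0, slipped=True)     # glass cup slipped
print(g.grasp("plastic", 2.0), g.grasp("glass", 8.0))
```

After a few interactions the two labels map to different handling policies, which no amount of text-only training would force on the system.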


Challenges of Language Models and Evolving Strategies for General Artificial Intelligence

An ongoing international debate questions whether large language models can truly achieve general-purpose intelligence. One aspect concerns the “hallucinations” that LLMs occasionally generate—incorrect or fabricated responses due to purely statistical correlations learned from vast text corpora. In some cases, increasing a model’s parameter count can exacerbate specific distortions. Researchers in both the West and China, such as Xu Bo from the Chinese Academy of Sciences and Tang Jie at Tsinghua University, concur that LLMs alone may not provide advanced reasoning.


In Western contexts, techniques such as chain-of-thought (a step-by-step reasoning approach) or external plugins have been introduced to address specific deficiencies, such as arithmetic operations. Nonetheless, the study emphasizes that these “patches” cannot fix the underlying lack of a logical engine capable of discerning truth from fiction.
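To make the "external plugin" idea concrete, here is a minimal Python sketch (an invented illustration, not the brief's code, and the CALC tag is a made-up convention): arithmetic spans that a model marks as CALC(...) in its draft output are handed to an exact evaluator, while everything else is left untouched.

```python
import ast
import operator as op
import re

# Only basic arithmetic is allowed; anything else is rejected.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(expr):
    """Exactly evaluate +, -, *, / over numeric literals via the AST."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def resolve_calls(draft):
    """Replace each CALC(...) span in a model draft with the exact result."""
    return re.sub(r"CALC\(([^)]*)\)",
                  lambda m: str(safe_eval(m.group(1))), draft)

print(resolve_calls("The invoice total is CALC(137*4+99) euros."))
```

The patch guarantees the arithmetic inside the call, but not the reasoning that decides when and what to compute, which is precisely the study's point about such fixes.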


The document notes that private industry in the United States and Europe dedicated tens of billions of dollars to AI and generative technologies in 2023. Certain scholars warn that if LLMs fail to evolve into truly human-level cognition, the result could be specialized infrastructure and skill sets that are difficult to repurpose for other sectors or applications. Diversification at the design phase can mitigate this risk: pursuing brain-inspired architectures (based on biological principles aimed at better adaptability) or hybrid systems, which integrate symbolic modules (logical rules) with statistical ones (neural networks).


For instance, a hybrid system could combine the image-recognition power of neural networks with a rule-based system for interpreting traffic signals. In autonomous driving, the AI would not only detect a stop sign visually but also logically apply traffic rules. This yields more robust performance and less dependence on a single technology.
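The traffic-light example can be sketched in a few lines of Python (a toy illustration: the classifier is a hard-coded stand-in for a real vision network, and the rules and thresholds are invented):

```python
def mock_classifier(percept):
    """Stand-in for a trained vision network: returns (label, confidence).
    In a real system this would be a neural model; here labels are pre-baked."""
    return percept["label"], percept["confidence"]

# Symbolic layer: explicit, auditable traffic rules keyed by percept label.
RULES = {
    "stop_sign":   lambda speed: "halt",
    "yield_sign":  lambda speed: "give_way",
    "green_light": lambda speed: "proceed" if speed <= 50 else "slow_then_proceed",
}

def decide(percept, speed_kmh, min_confidence=0.8):
    label, conf = mock_classifier(percept)
    if conf < min_confidence:      # symbolic safety rule: distrust weak percepts
        return "slow_and_reassess"
    rule = RULES.get(label)
    return rule(speed_kmh) if rule else "slow_and_reassess"

print(decide({"label": "stop_sign", "confidence": 0.97}, speed_kmh=40))
```

The division of labor is the point: the statistical component handles perception, while the symbolic layer keeps the decision logic explicit, inspectable, and easy to amend when regulations change.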


Energy consumption and hardware considerations also stand out when pursuing General Artificial Intelligence. The latest LLMs demand enormous computing resources, using substantial energy. The study draws parallels with how the Chinese government strategically pushed innovation in photovoltaics, eventually producing 75% of solar panels worldwide. Similarly, while the West focuses on the sheer scale of language models, China invests in new chip designs, neural architectures, and training systems that reduce energy consumption. This perspective has significant ramifications for managers, who must balance raw computational power with both economic and environmental sustainability.


Another major consideration is controlling the outputs of LLMs. Methods based on guardrails or post-hoc filtering may be brittle when faced with the countless combinations of words an LLM can produce. Hence, a prominent idea in China is to embed principles and values into the underlying AI architecture itself, ensuring alignment with social and institutional objectives from the ground up.
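A short Python sketch shows why post-hoc filtering is brittle (the blocklist phrase is invented purely for illustration): the filter catches only the literal wording it was given, and a trivial rephrasing slips through.

```python
# Naive post-hoc guardrail: block outputs containing known bad phrases.
BLOCKLIST = {"disable the safety valve"}

def post_hoc_filter(output):
    """Return the output unchanged unless it contains a blocklisted phrase."""
    text = output.lower()
    return "[blocked]" if any(p in text for p in BLOCKLIST) else output

print(post_hoc_filter("Step 1: disable the safety valve."))   # caught
print(post_hoc_filter("Step 1: turn off the safety valve."))  # rephrasing slips through
```

Because an LLM can express the same content in countless surface forms, pattern-level guardrails invite a cat-and-mouse game; building constraints into the architecture or training objective, as proposed in the Chinese literature, aims to avoid that game altogether.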


Observing the global scene, we can discern two contrasting tendencies: one in the West, where the emphasis is on scaling model size, and another in China, where large models are complemented by biologically inspired techniques. These different strategies will shape future AI investments, define required skill sets, and influence the international technological race in the coming years.


Alternative Experiments, Global Perspectives, and Strategic Implications for General Artificial Intelligence

The study highlights how an advanced Chinese agenda brings large language models into contact with brain-inspired solutions and “embodied” systems (AIs that interact with their environment to collect information from vision, touch, sound, and so on). Scholars such as Xu Bo and Zeng Yi propose that the human style of learning, based on direct experience and the synaptic interplay of biological neurons, should guide AI design. Accordingly, spiking neural networks (which mimic the timing of neuron spikes) might achieve greater computational efficiency and fewer errors than purely statistical approaches.


Simultaneously, hardware research in China explores photonic chips (components that transmit and process data using photons rather than electrons) to reduce latency and energy consumption. Yet the interest goes beyond hardware alone: Chinese scientific literature frequently advocates an AI model that explicitly incorporates ethical and cultural objectives. This is intended to give researchers and institutional policymakers a tighter grip on technologies and direct them toward broader sociopolitical goals.


In contrast to large Western technology firms (e.g., OpenAI and Google), which predominantly invest in ever-larger LLMs, Chinese researchers argue that a genuinely general AI cannot rely solely on “predicting the next word.” It must include specialized reasoning modules, sensory interaction, and embedded ethical or cultural values. The study also acknowledges that some Western experts share these concerns, warning that scaling alone does not guarantee a fully functional reasoning engine with the capacity for truth checking or nuanced language comprehension.


From a strategic perspective, these developments mirror China’s successes in areas like solar power and electric mobility, where long-range planning led to robust industrial ecosystems. Applied to AI, this approach unites academia, government, and industry through targeted funding for neurally inspired architectures, spiking networks, or hybrid symbolic-and-neural systems. Major institutions, such as the Beijing Institute for General Artificial Intelligence, house various specialized research groups examining everything from paralinguistic signals to advanced deep learning frameworks that incorporate symbolic blocks.


An example is a virtual assistant designed with integrated sensory and symbolic components that can interpret subtle communicative cues—like sarcasm or irony—by learning from diverse conversational data. If someone says “Great job!” ironically after a mishap, a sensor-equipped, hybrid AI might detect the negative context from facial expressions or situational cues. By intertwining robust language understanding with efficient hardware and built-in ethical considerations, China’s strategy aims to foster AI that is both computationally powerful and socially aware.


Meanwhile, the West continues to expand LLM-based systems, driven largely by corporate funding. Critics caution that this one-track focus may undermine alternative research paths. China’s initiatives, combining brain-inspired platforms, sensor networks, and ethical frameworks, are attracting substantial public investment and institutional support. As global competition intensifies, the ability to embed broader sociocultural visions within AI design could prove as influential as technology-driven performance metrics.


Applications and Innovation for Achieving General Artificial Intelligence

In its concluding sections, the study underscores how companies and research centers intent on building scalable AI with human-like cognitive capacities may find it worthwhile to augment language-model approaches with symbolic reasoning, reinforcement learning (training agents through trial and error guided by reward signals), and neuro-inspired elements (neural networks built to simulate biological processes). Several Chinese initiatives integrate visual recognition, tactile sensing, and LLM components, allowing AI systems not only to describe the environment but also to physically interact with it.


One laboratory demonstration showed a robotic arm, guided by a biologically inspired network, assembling small devices while learning iteratively from mistakes—revealing greater adaptability than a standard LLM approach. Though still at an experimental stage, this signals that integrating cognitive models and embedding real-world interaction may enhance AI’s overall comprehension.
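A minimal trial-and-error loop gives the flavor of such iterative learning (a toy epsilon-greedy bandit, not the cited demonstration; the "insertion angles" and reward scheme are invented): the agent discovers purely from reward which of three candidate angles reliably assembles a part.

```python
import random

def learn_best_angle(reward_fn, n_trials=500, eps=0.1, seed=0):
    """Epsilon-greedy bandit: mostly exploit the best-looking angle,
    occasionally explore, and update value estimates incrementally."""
    rng = random.Random(seed)
    q = [0.0, 0.0, 0.0]   # estimated success value per candidate angle
    n = [0, 0, 0]         # how often each angle has been tried
    for _ in range(n_trials):
        a = rng.randrange(3) if rng.random() < eps else q.index(max(q))
        r = reward_fn(a)              # 1.0 on successful assembly, else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]     # incremental mean of observed rewards
    return q.index(max(q))

# Deterministic toy world: only angle 2 ever succeeds.
best = learn_best_angle(lambda a: 1.0 if a == 2 else 0.0)
print(best)
```

No description of the task is ever given to the agent; competence emerges from interaction alone, which is the contrast the study draws with text-only models.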


Another case involves dialogue platforms endowed with an internal “value model” for sensitive domains like healthcare or law. Rather than patching the AI after every misstep, the aim is to embed guiding principles from inception. Done properly, this would lead to safer and more predictable applications, mitigating risks related to misinformation or harmful content. However, this multidisciplinary approach demands collaboration among neuroscientists, cognitive psychologists, ethicists, and hardware engineers. China appears poised to coordinate these diverse competencies with consistent state-backed funding, while in the West, development remains largely dependent on private capital often driven by short-term returns.


For Western executives and policymakers, the stakes involve not only technological leadership but also the social and economic ramifications of AI potentially transforming entire market sectors. The combined approach—where massive text-based solutions merge with advanced cognition and environmental interaction—encourages businesses to redesign internal skill sets. Specialists in optimizing LLM parameters are needed alongside teams designing sensor- and rules-based modules. Future business models might thus emerge in hybrid form, combining the strengths of large-scale statistical correlation with cognitively advanced reasoning and knowledge representation.


Conclusions

The study, in measured and concrete terms, warns that current trends in scaling up language models will not automatically produce human-like intelligence. The massive investments in generative AI in the West may fail to lead toward true General Artificial Intelligence, all while crowding out potentially more promising alternatives. From a strategic standpoint, the implications for companies of all sizes are significant. Diversification appears not to be an academic luxury but a practical necessity for ensuring real returns on AI initiatives.


Incorporating biologically inspired elements, combining symbolic modules with neural ones, and coupling AI with specialized hardware can yield a competitive advantage. Compared to existing technologies, these approaches could lead to a deeper ability to “understand” the world, handling informational uncertainty more effectively. Policymakers also have the option of boosting public research and supporting collaborations among universities, research institutions, and companies. As shown by China’s experience, centralized government direction can encourage a variety of approaches—from biologically inspired networks to cognitive robotics—alongside large-scale language model development.


For entrepreneurs and managers, this translates into scouting market segments that may emerge from AI applications equipped with reasoning capabilities closer to human thought processes, enabling more fluid interaction between digital systems and real-world stakeholders. Ultimately, the study notes that no single path to GAI has won global consensus, leaving room for imaginative projects that go beyond LLM-based solutions alone. Investment in hybrid models, grounded learning, and value-driven architecture may, in the long run, surpass current mainstream standards. For businesses and executives, the main takeaway is to resist letting the hype around LLMs channel all resources into one area—AI’s scope is far broader, and the time to adopt a forward-looking strategy is now.


