Primer

Intelligent Internet
Tue 4 Mar 2025

Table of Contents

  1. Executive Summary
  2. The AI Paradox: Unprecedented Potential, Unparalleled Challenges
    • Consolidation of Powers
    • Barriers to Entry
    • Misalignment
    • Opacity
    • Systems and Infrastructure
    • Vulnerability
    • Silos and Standards
    • Intelligence Commons
    • Suboptimal Resource Allocation
  3. Intelligent Internet
    • Motivation
    • Guiding Principles
    • Openness: The Power of Collective Innovation
    • Alignment: Harmonizing Technology and Society
    • Distribution: Resilience Through Diversity
    • Reversing the Trends
    • Conceptual Overview
    • Distributed Network Architecture
    • Information and Learning Flows
    • Knowledge
    • Models
    • Foundational AI
    • Specialized AI
    • Personalized AI
    • Models to Flows
    • Connectivity and Interplay
    • Model Cohesion
    • Resulting System
    • Cooperation
    • Governance and Sustainability
    • Progressive Decentralization
    • Sustainability
    • Transparency and Accountability
    • Use Cases
    • Optimizing Knowledge Flows and Organizational Intelligence
    • Powering Personalized and Adaptive Systems
    • Enhancing Global Collaboration and Problem-Solving
    • Looking Ahead: Our Collective Future

Executive Summary

This primer explores the transformative potential of generative AI alongside the significant challenges posed by its current development trajectory. AI technologies have achieved remarkable feats, positioning AI as a defining force of the 21st century. However, this potential is overshadowed by a centralized development paradigm dominated by a few major players, primarily in the United States and China. This consolidation creates high barriers to entry, limiting innovation to well-resourced entities and risking a future that prioritizes profit over public good.

Key challenges in AI include misalignment, opacity, and vulnerability. Misalignment occurs when AI models favor their creators’ cultural and linguistic contexts, often neglecting diverse global needs. Opacity arises from proprietary “black box” systems, which raise ethical and trust concerns—especially in critical sectors like healthcare and security. Furthermore, reliance on centralized infrastructure creates vulnerability, as it can expose society to widespread disruptions—illustrated by the 2024 CrowdStrike outage. Additional issues such as fragmented internet silos, lack of universal standards, exploitation of the "Intelligence Commons" (e.g., fake news), and inefficient resource allocation further compound these risks, exacerbating inequality and environmental strain.

The Intelligent Internet is proposed as a visionary solution to these problems, reimagining AI as a distributed, open, and inclusive ecosystem. Guided by principles of openness, alignment, and distribution, it leverages a hierarchical network of Hyper Nodes (training foundational models), National Nodes (refining for regional/sectoral needs), and Edge Nodes (enabling personalized AI). This architecture supports three knowledge types—common, semi-private, and private—underpinning Foundational, Specialized, and Personalized AI models, respectively. By fostering interoperability, lowering entry barriers, and enhancing resilience, Intelligent Internet aims to deliver Universal AI (UAI)—a public good accessible to all.

Through progressive decentralization and transparent governance, this paradigm seeks to reverse negative trends, democratizing AI development, aligning it with human values, and optimizing global resources. Use cases include optimizing organizational knowledge flows, powering personalized systems, and enhancing global collaboration, paving the way for a more equitable, innovative, and sustainable AI-driven future.


The AI Paradox: Unprecedented Potential, Unparalleled Challenges

Generative AI technology has made huge strides in recent years, unlocking capabilities previously considered to be science fiction. For example, AI can now generate photorealistic images from simple text descriptions, compose music in various styles, and engage in lifelike conversations on complex topics. These models have the potential to provide solutions to many of the problems humanity faces through their technical prowess. Looking at healthcare alone, AI models can ingest and analyze vast amounts of medical data to detect diseases early, optimize personalized treatment plans, and accelerate drug discovery [1].

AI is one of the most defining technologies of the 21st century. As Bill Gates said, AI is “the most important advance in technology since the graphical user interface”, a view echoed by its growing importance in daily life [2]. Yet, as AI becomes integral to our world, it brings challenges as unprecedented as its potential.

Consolidation of Powers

The United States and China lead as global AI superpowers, far outpacing other countries in private investment in the industry. Even within these superpowers, a small number of companies are behind these revolutionary models. Amid the excitement over what these models can do, the world has underemphasized one key fact: a handful of entities have amassed control over their development and deployment.

Fig 1: Graph Source: Stanford University 2024 AI Index Report; Data Source: Quid (2023)

The leading companies in the AI industry have generally adopted a centralized, proprietary, and profit-centric business model. With the development of these models being resource-heavy, this is the approach that has so far been feasible and implemented successfully, resulting in the frontier of AI being dominated by large closed-source companies. However, this consolidation shapes an industry where profit often trumps public good.

Barriers to Entry

Advancements in AI technology have led to massive increases in performance, but they have also driven an exponential increase in the computational resources required to develop cutting-edge models. So far, the industry has met these requirements, with the computational resources needed to train these models increasing by an order of magnitude every couple of years.

Fig 2: Data Source: Epoch (2024)

However, compute requirements are now reaching 50B petaFLOPs, and costs are skyrocketing—xAI’s Grok-3 used 200K H100s, for which the hardware costs alone are estimated to be in the billions of dollars [3]. This has made it impossible to enter the industry without huge sums of capital, which are usually only available to established players. Exceptions to this rule exist, such as DeepSeek, which trained its R1 model at a fraction of the cost of other approaches [4]. Yet significant breaks from the norm remain a rarity, as the current paradigm continues to create formidable obstacles for new entrants, particularly those from underrepresented regions or smaller organizations.

This “near-monopoly poses several risks, potentially leading to a ‘winner-takes-all’ scenario that stifles innovation and competition. Additionally, it can also exacerbate societal inequality, by creating a divide between those who have access to those technologies, and those who do not” [5]. This results in an industry not focused on maximizing the potential benefits of this technology on humanity’s intelligence, but one that is instead focused on beating its competition and maximizing its profits. 

Misalignment

However, these next-generation models not only continue to originate from the same companies; they continue to originate from the same cultures and minds.

Humanity is defined by its plurality. The world is filled with a myriad of cultures, beliefs, industries, and, most granularly, people. Every person has a unique fingerprint, and in turn, their own unique needs when using generative AI technologies.

AI models favor users like their creators, resulting in the prioritization of mainstream or developer-aligned users. This results in models that, for example, excel in English but struggle in other languages, or skew outputs toward specific political views [6]. Forcing individuals to choose between foregoing AI benefits or using systems not tailored to their needs, culture, or beliefs is inadequate and fails to ensure equitable access to opportunities. This approach results in ineffective use of the technology at best, and at worst, leads to significant impacts on people's beliefs as the cultural and political biases present in models permeate daily life.

Opacity

Presently, most leading models are proprietary. Even those marketed as open-source have been challenged in their usage of the term, due to restrictions on model usage or missing information about their training and architecture. While improvements have been made since October 2023, “The Foundation Model Transparency Index continues to find that transparency ought to be improved” [7].

These models often operate as “black boxes”, with decision-making processes, data usage policies, and long-term strategies shrouded in secrecy. This raises serious concerns about privacy, security, and the ethical application of AI technologies. Opacity erodes trust in high-stakes fields such as healthcare where hidden flaws could harm patients, or security-critical applications like national infrastructure.

The usage of opaque solutions risks the unchecked proliferation of systems that may inadvertently (or intentionally) cause harm. When the adoption of AI is inevitable and current models are the only option, humanity faces a future filled with information asymmetry. Reliance on systems that pervade human lives, without any knowledge of their internal operations and long-term intentions, can lead to those in control of these models gaining unwarranted power and influence over society. 

Systems and Infrastructure

Further issues arise when inspecting not only the dynamics introduced by the centralization of power, but the systems themselves. The current internet infrastructure is built upon deprecated and deteriorating systems that are ill-equipped to handle the wave of AI technology [8]. Developed over decades without a clear end state in mind, it has become a fragmented system, unable to manage the digital world in its current form or adapt to its future evolution. A qualitatively different infrastructure is needed.

Vulnerability

The Information Technology sector has “seen many significant mergers and acquisitions over the last few years, and this trend is likely to continue” [9]. Whether it be software or hardware, the majority of products and services increasingly originate from the same few companies. In Q4 2024, Amazon Web Services, Microsoft Azure, and Google Cloud accounted for 30%, 21%, and 12% of the global cloud infrastructure services market, respectively, together comprising 63% of the market [10]. Centralized development is not new. However, this norm has also made the provision of, and access to, this technology dependent on other centralized systems and infrastructure.

Fig 3: Data Source: Synergy Research Group (2025) 

This dependence makes society increasingly susceptible to single points of failure with global impact. Failures of these centralized infrastructures appear to be growing more common; the most recent, and largest, was the 19 July 2024 CrowdStrike incident, which crashed 8.5 million Windows devices and caused an estimated $5.4bn of losses for the United States’ top 500 companies [11].

Recent studies highlight that outages remain a key concern for digital infrastructure. While overall outage frequency and severity have gradually decreased compared to previous years, nearly 53% of data center operators reported experiencing an outage in the past three years, and approximately one in five impactful outages cost more than $1 million [12].

These cloud technologies rely on this centralized infrastructure, and their outages highlight the issues currently present. Society cannot have these systems as its backbone—in critical sectors such as healthcare, finance, and public services, the consequences of such system failures can lead to loss of economic value and even human life.

Silos and Standards

The Internet, built over decades, has evolved into a sprawling network of systems developed with little foresight for interoperability or standardization. This fragmented foundation has created a disjointed digital landscape—marked by inefficient resource use, redundant efforts, and systems that struggle to communicate without constant human intervention [14]. Now, AI giants, fiercely protective of their innovations, are repeating these mistakes, digging deep moats around their products with proprietary formats and a lack of coordination. This not only mirrors the internet’s fractured state but also threatens to deepen it, eroding the end-user experience and stunting collaborative progress.

To break this cycle, universal AI standards are essential. Without common protocols for data and model interoperability, the promise of cohesive AI ecosystems remains out of reach, and the gap between interconnected and isolated systems widens. Standardized frameworks are needed to enable seamless communication across diverse AI models and internet infrastructure, paving the way for fully automated systems and free-flowing information. Only by embracing such standards can we unlock the full potential of collective knowledge and resources, driving humanity toward a more unified and progressive future.
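
To make the idea of a common protocol concrete, a shared interchange format might look like the following toy message envelope. The schema identifier, node names, and fields here are invented for illustration and are not part of any actual II specification:

```python
import json

# Hypothetical sketch: a minimal message envelope that any model or
# service could emit and parse, regardless of who built it.
message = {
    "schema": "ii.example/v0",        # invented schema identifier
    "producer": "national-node-42",   # invented node identifier
    "content_type": "model-update",
    "payload": {"model": "healthcare-base", "version": 3},
    "provenance": ["hyper-node-1"],   # chain of upstream contributors
}

# A plain-text serialization lets heterogeneous systems exchange the
# envelope losslessly.
encoded = json.dumps(message)
decoded = json.loads(encoded)
assert decoded == message  # round-trips without loss
print(decoded["content_type"])
```

The substance of a real standard lies in agreeing on the schema and its semantics; the serialization itself is the easy part.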

Intelligence Commons

The digital knowledge ecosystem, once heralded as a shared resource for global enlightenment, now faces a crisis resembling the “Tragedy of the Commons”. Here, individual exploitation of communal informational assets—pursuing short-term gains—erodes the system's integrity.

Recent studies indicate that the spread of fake news on social media has intensified, with AI playing a significant role. A 2024 study from the University of Southern California found that social media platforms’ reward structures, which encourage users to share information habitually, significantly contribute to the dissemination of misinformation [15]. Additionally, AI advancements, particularly in LLMs, have automated many aspects of fake news generation, making it more challenging to distinguish genuine news from fabricated content [16]. Massachusetts Institute of Technology (MIT) notes that AI has been used by governments and political actors to generate texts, images, and videos aimed at manipulating public opinion and censoring critical online content [17].

Solutions lie in decentralization and transparency. Distributing model creation across contributors, as well as provenance tracking to ensure content verifiability, can rebuild trust. Universal standards for data interoperability and accountability can reshape the information ecosystem into a resilient, inclusive framework, valuing knowledge as a shared asset, not a proprietary tool. These steps are vital to reimagine AI and information governance, fostering an equitable, enduring digital commons that serves humanity’s collective potential.

Suboptimal Resource Allocation

Current computers and phones are typically designed to be used by a single entity, with no vision of a greater network in mind. Each device is a small hoard of computational resources, and the inability of systems to efficiently share these resources results in a massive underutilization of computational power and data.

This inefficiency manifests in various ways, from idle processing capacity to redundant data storage across multiple platforms [18]. Potential breakthroughs are denied by artificial barriers between systems that could otherwise benefit from collective computing resources and shared datasets. Organizations are instead forced to invest in redundant infrastructure and duplicate data storage, resulting in unnecessary capital expenditures on both equipment and the specialized staff required to maintain them [19].

The consequences of this underutilization extend beyond immediate financial impacts. It contributes to a widening gap between organizations with access to vast computational resources and those without, exacerbating existing inequalities in technological capabilities [20]. The International Energy Agency finds that data centers use 240-340 TWh of energy each year, and this figure rises to 340-490 TWh per year when including cryptocurrency mining [21]. The latter figure equates to the energy consumption of Germany in 2023 [22]. The environmental impact of maintaining data centers and computing infrastructure is significant, contributing to the growing carbon footprint of the tech industry.


Intelligent Internet

Motivation

The current landscape of AI technology and its supporting network infrastructure faces mounting challenges—centralization, misalignment with diverse needs, and inefficiencies in resource use, to name a few. These issues are growing more complex, threatening the vitality of digital systems and the equitable distribution of AI’s potential. 

The Intelligent Internet (II) emerges as a response: a new paradigm designed to make AI tools accessible to all, revitalize failing infrastructure, and mitigate the risks of centralized control. II aims to create an adaptive foundation for an AI-centric future, whilst distributing the benefits of artificial intelligence through the creation of Universal AI (UAI): a public good made up of open datasets, models, and systems.

Guiding Principles

II is an open, distributed intelligence system built to evolve with the world’s increasing complexity. It serves as a foundational infrastructure for the future of AI, guided by principles that ensure resilience, inclusivity, and alignment with human values.

Openness: The Power of Collective Innovation

II fosters an ecosystem where diverse contributions drive innovation. By embracing collaborative, decentralized efforts, it ensures flexibility and resistance to dominance by any single entity, creating a system that thrives on collective ingenuity.

Alignment: Harmonizing Technology and Society

Designed to strengthen social cohesion, II unites compute providers, universities, NGOs, governments, businesses, and individuals. It aims to distribute AI’s benefits broadly, shifting the focus from narrow gains to optimizing resources for the greater good.

Distribution: Resilience Through Diversity

Built on a federated network of varied actors—from supercomputers to smartphones—the II balances coordinated action with decentralized innovation. This distributed architecture enhances adaptability and resilience in a rapidly changing technological landscape.

These principles lay the groundwork for an AI-driven future that is inclusive and robust, harnessing AI’s full capacity while aligning it with global interests.

Reversing the Trends

II tackles negative trends in AI and digital infrastructure head-on, aiming for a transformative shift:

  • Decentralization of Power: Its distributed structure dilutes concentrated AI power as the network grows, fostering a democratic ecosystem.
  • Alignment with Diverse Needs: Models evolve to serve varied cultures and use cases, addressing current misalignments.
  • Lowering Barriers to Entry: Resource-sharing reduces computational demands, opening AI development to broader participation.
  • Enhancing Resilience: A distributed design mitigates vulnerabilities of centralized systems, building fault tolerance.
  • Interoperability and Coordination: Universal standards and explicit coordination mechanisms break down silos, optimizing resource use.

II envisions a fluid evolution toward these goals, with an end state of a robust, inclusive network.

II offers a fair, open infrastructure for humanity, breaking the oligopoly of AI development and promoting resilience through its distributed nature. It lowers barriers to entry, enabling global participation, and fosters interoperability to enhance collaboration across systems. By prioritizing open contribution, it taps into a worldwide talent pool, accelerating knowledge dissemination and breaking down sectoral silos. The result is a future where AI serves diverse needs, optimizes resources efficiently, and drives innovation for humanity’s most pressing challenges—without being tethered to narrow financial incentives.

Conceptual Overview

II is reimagining digital infrastructure by integrating AI, distributed systems, and human collaboration into a cohesive, adaptive network. It transcends traditional silos of data, computation, and decision-making, enabling intelligence to emerge dynamically across scales—from edge devices to global systems—in response to local needs and global challenges.

Distributed Network Architecture

At its core, II relies on a hierarchical node structure to optimize computational efficiency and scalability. This structure distributes processing and decision-making, enabling scalable, context-aware intelligence:

  • Hyper Nodes: High-powered clusters (e.g., equipped with NVIDIA H100 GPUs) train large-scale, Foundational AI models, forming the network’s base intelligence. Access is controlled by the core team to ensure alignment with II’s goals.
  • National Nodes: Mid-tier clusters refine these models for specific regions or sectors (e.g., healthcare regulations, market-specific finance), operated by specialized entities to enhance relevance to local needs.
  • Edge Nodes: Running on devices such as laptops or smartphones, these permissionless nodes handle real-time processing and personalized fine-tuning, democratizing access. 
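
The three-tier hierarchy above can be sketched as a simple tree data structure. This is an illustrative toy, not II's actual implementation; all node names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the II hierarchy (names and tiers are illustrative)."""
    name: str
    tier: str                               # "hyper", "national", or "edge"
    children: list["Node"] = field(default_factory=list)

    def add_child(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

# Foundational models originate at Hyper Nodes and are refined downstream.
hyper = Node("hyper-eu-1", "hyper")
national = hyper.add_child(Node("national-healthcare-de", "national"))
edge = national.add_child(Node("edge-phone-1234", "edge"))

def lineage(node: Node, root: Node) -> list[str]:
    """Walk from root to node, collecting the refinement path."""
    path: list[str] = []
    def dfs(cur: Node, acc: list[str]) -> bool:
        acc = acc + [cur.name]
        if cur is node:
            path.extend(acc)
            return True
        return any(dfs(c, acc) for c in cur.children)
    dfs(root, [])
    return path

print(lineage(edge, hyper))
# ['hyper-eu-1', 'national-healthcare-de', 'edge-phone-1234']
```

Every Edge Node's model thus has a traceable lineage back through a National Node to a Hyper Node, which is what lets refinement stay local while base intelligence stays shared.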

Information and Learning Flows

The II emphasizes seamless knowledge flows across its network. Nodes collaborate to process data, train models, and share insights, creating a self-improving system. National Nodes, for instance, augment global knowledge with local inputs, while Hyper Nodes anchor the system with broad, foundational models. This dynamic interplay ensures the network adapts continuously, balancing local autonomy with global coordination.

Knowledge

Common Knowledge

Description: Common knowledge encompasses widely available, public information that serves as the foundation for Foundational AI models. This includes large datasets that are broadly representative of industry-wide knowledge, such as medical textbooks, legal codes, or financial market data.

Role in Model Training: Common knowledge is utilized in the initial stages of model training within Hyper Nodes. It provides the broad context needed for the AI models to function effectively across a wide range of general applications. These models, once trained, can then be further specialized using more localized or private data.

Semi-Private Knowledge

Description: Semi-private knowledge includes data that is not entirely public but is accessible under certain conditions, such as licensed or patented information. This might include proprietary datasets owned by corporations or institutions that are shared within specific partnerships or agreements.

Role in Model Fine-Tuning: Semi-private knowledge is essential for the fine-tuning of AI models in National Nodes. It allows models to adapt to specific industries, regions, or use cases, making them more relevant and effective. For example, a healthcare model trained on common knowledge might be fine-tuned with semi-private data from a specific hospital or healthcare system.

Private Knowledge

Description: Private knowledge encompasses highly specific, sensitive data, such as patient records or proprietary business information. This data is typically confidential and used under strict access controls to ensure privacy and security.

Role in Specialized Applications: Private knowledge is used in the most specialized AI applications, often within Edge Nodes, where data security and privacy are paramount. These models are tailored to very specific tasks, such as personalized medical diagnoses or customized financial advice, where the accuracy and relevance of the data are critical.

Models

To provide maximal coverage of model applications, II utilizes three primary forms of models. Foundational models provide a base of intelligence for the models derived from them: Specialized models, refined for regional and sectoral specificity, and Personalized models, refined for individual users.

Foundational AI

Description: Foundational AI models are broad models developed using common knowledge datasets, forming the base of UAI. These models are designed to be applicable across a wide range of industries and serve as the starting point for more specialized models.

Training and Deployment: These models are trained on Hyper Nodes and serve as the foundation of the II. They provide a general understanding that can be applied to various sectors, such as a base healthcare model that understands common medical conditions.

Specialized AI

Description: Specialized AI models are tailored to meet the unique demands of specific industries, regions, or niches. These models are derived from Foundational models and fine-tuned with semi-private or private knowledge to address targeted tasks in industries or region-specific applications sensitive to local regulations and cultural nuances.

Training and Deployment: Specialized models are trained on National Nodes, leveraging access to curated, domain-specific knowledge. For geographically or culturally specialized variants, training incorporates region-specific data to ensure compliance with local standards and relevance to regional contexts. For industry-focused variants, training draws on semi-private, high-value industry knowledge to optimize performance in niche applications.

Personalized AI

Description: Personalized AI are highly customized models designed to meet the unique needs and preferences of individual users. Derived from Specialized AI, these models are fine-tuned with private knowledge specific to the user, enabling optimal tailoring and end-user experience. They prioritize precision and relevance at the individual level, adapting to personal habits, contexts, and goals.

Training and Deployment: Personalized AI models are trained and deployed on Edge Nodes, which leverage private, user-specific data. This edge-based approach ensures real-time adaptability and privacy, allowing the model to refine its performance based on individual interactions and feedback. By building on the foundation of Specialized AI, they inherit domain and regional expertise while adding a layer of individual customization.

Models to Flows

Our vision for UAI is uniquely possible due to II’s distributed architecture. This method of handling models and their training data allows for maximal tailoring of intelligence to every use case whilst maintaining privacy and security through the lack of upstream information flow. Base intelligence and knowledge are sourced at the top-level Foundational models, with additional knowledge and fine-tuning incorporated downstream at the appropriate stages. This can be visualized as follows:

Fig 4: Node Architecture
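
The downstream-only flow can also be illustrated with a toy sketch in which knowledge stays at its own tier and only refined models move downstream. The function names and data are hypothetical:

```python
# Hypothetical sketch of II's downstream-only refinement flow.

def train_foundational(common_corpus: list[str]) -> dict:
    """Hyper Node: train a base model on common knowledge."""
    return {"tier": "foundational", "seen": len(common_corpus)}

def fine_tune(model: dict, tier: str, local_data: list[str]) -> dict:
    """National or Edge Node: refine a model with tier-local data.

    local_data never leaves this tier; only the refined model
    (not the data) is passed further downstream.
    """
    refined = dict(model)
    refined["tier"] = tier
    refined["seen"] = model["seen"] + len(local_data)
    return refined

base = train_foundational(["textbook", "legal code", "market data"])
specialized = fine_tune(base, "specialized", ["hospital protocols"])
personalized = fine_tune(specialized, "personalized", ["patient history"])

print(personalized["tier"])  # personalized
```

The design choice the sketch highlights is that each tier only ever receives weights from upstream, never raw data from downstream, which is how privacy is preserved structurally rather than by policy alone.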

Connectivity and Interplay


Model Cohesion

II aims to provide the infrastructure and protocols required for diverse AI agents to interact with each other cooperatively. These agents can encompass various types, such as Open and Auditable AI focused on transparency, Open Weight AI prioritizing accessible innovation, Personal AI for individual productivity, and Expert Systems AI excelling in complex problem-solving.

The ambition extends beyond human-AI interaction to include fully automated and symbiotic systems, minimizing the need for human intervention in routine tasks. This change seeks to boost efficiency, lower operational costs, and free human potential for more strategic and creative endeavors. A key to realizing this vision is unlocking inter-model communication, enabling the integration of these diverse AI types within future systems.

Resulting System

A primary characteristic of the II is the development of a variety of generative AI models, collectively UAI, that embody not only general base intelligence but also highly specific intelligence. The latter makes these models analogous to human experts who, whilst having a broad foundational understanding of the world, excel in a specific domain.

When sufficient expert-level models are produced, they can be combined into an ecosystem resembling a mixture of experts, allowing each model to handle tasks that fall within its domain with high accuracy. This is promising as a path toward an end state of Artificial General Intelligence (AGI), and it is an explicit ambition of II. II aims to accelerate humanity’s progress and create tools to address the world’s problems by providing an infrastructure for the AI-centric future.
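
A minimal sketch of such expert routing follows, with keyword matching standing in for a learned gating network; the domains and model labels are invented for illustration:

```python
# Toy router: dispatch a query to the most relevant expert model,
# falling back to a generalist. Real mixture-of-experts systems use a
# trained gating function rather than keywords.

EXPERTS = {
    "medicine": lambda q: f"[medical model] {q}",
    "finance": lambda q: f"[finance model] {q}",
}
GENERALIST = lambda q: f"[foundational model] {q}"

KEYWORDS = {
    "diagnosis": "medicine", "symptom": "medicine",
    "portfolio": "finance", "interest rate": "finance",
}

def route(query: str) -> str:
    """Send the query to the first matching expert, else the generalist."""
    for keyword, domain in KEYWORDS.items():
        if keyword in query.lower():
            return EXPERTS[domain](query)
    return GENERALIST(query)

print(route("Suggest a diagnosis for these symptoms"))
# [medical model] Suggest a diagnosis for these symptoms
```

The same pattern scales up: each expert is itself a Specialized or Personalized model, and the router is what binds them into a single coherent system.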

Cooperation

The II is designed as an open domain for cooperative interaction between AI agents and human participants. This open innovation will result in applications that are aligned with the broader principles of the II, whilst maintaining a low friction environment for rapid iteration.

Cooperation needs to be ensured throughout the entire stack, from model training and fine-tuning to joint model service provision. In a future filled with AI agents, a set of protocols and standards will be required for them to interact, settle value transactions, and resolve disputes, with some degree of federated human intervention. Intelligent Internet is set to cultivate an integrated infrastructure that nurtures collaboration, unlocking profound benefits and cultivating a more interconnected future.


Governance and Sustainability

II’s governance and sustainability model is designed to evolve through a phased approach to decentralization, transitioning from initial core team guidance toward increasing community control. This strategy ensures long-term viability, ethical alignment, and adaptability—balancing the need for accelerated initial development with the ultimate goal of a self-sustaining and community-governed AI ecosystem.

Progressive Decentralization

The II embraces progressive decentralization as its core governance strategy. This involves a planned, phased evolution from initial centralized oversight to full community control. This approach is designed to:

  • Enable swift progress and focused execution during the foundational stages.
  • Gradually incorporate broader stakeholder input and community ownership as the network matures.
  • Ultimately achieve a fully distributed governance model that is resilient, adaptable, and aligned with the interests of its participants.

This phased evolution permits a sustainable transition, cultivating community ownership and allowing for any necessary refinements along the way.

Sustainability

The long-term sustainability of the II can be conceptualized through three pillars:

  • Technical: Employing a modular, scalable architecture that can adapt to technological developments and growing demand.
  • Organizational: The distributed nature of the II allows for flexibility in the system, its impacts, and its use cases.
  • Social: Continuously engaging with stakeholders, adapting to societal needs, and encouraging a culture of responsible AI development.

Transparency and Accountability

Transparency and accountability are foundational to building trust in the II. There is a commitment to:

  • Open-source development of core components, allowing community scrutiny and contribution.
  • Establishment of clear accountability mechanisms and appeal processes for dispute resolution.

Use Cases

II is designed to address entrenched challenges and unlock novel opportunities across all sectors of society. It seeks to provide a new foundation for innovation and collaboration. Here are some areas in which the II can make a substantial impact.

Optimizing Knowledge Flows and Organizational Intelligence

Imagine organizations, from businesses to research institutions, capable of capturing, sharing, and maximizing knowledge across all levels. The II facilitates this by breaking down data silos, connecting diverse expertise, and enabling AI-powered insights to flow freely. This leads to faster innovation cycles, more informed decision-making, and a considerable boost in overall organizational intelligence.

Powering Personalized and Adaptive Systems

In a world increasingly demanding personalization, the II can provide the infrastructure to deliver truly adaptive experiences. Whether it’s personalized education tailored to individual learning styles, customized healthcare plans based on unique patient profiles, or financial services adapted to specific needs and risk tolerances, the II enables systems to learn, adapt, and respond to individual requirements with precision.

Enhancing Global Collaboration and Problem-Solving

Many of humanity’s greatest challenges require global cooperation. The II seeks to provide a platform for collaboration across geographical boundaries, cultural differences, and institutional divides. By promoting shared data resources, collaborative AI model development, and decentralized problem-solving structures, it enables individuals and organizations worldwide to work together more effectively toward common goals, from addressing climate change to managing global health crises.

II is about building a more knowledgeable, equitable, and collaborative future for all. It is about empowering individuals, organizations, and societies to address intricate challenges and unlock new possibilities through the power of distributed intelligence.


Looking Ahead: Our Collective Future

As the II evolves, it is evident that the path to a truly open, distributed AI system is complex and multifaceted. Its governance and sustainability model will continue to evolve, always guided by the core principles and a commitment to beneficial AI.

This system not only enhances Global Intelligence and productivity but also promotes a more equitable and sustainable future. By democratizing access to AI technologies, Universal Basic Artificial Intelligence further aims to bridge the digital divide and enhance workforce capabilities.

As we navigate the complexities of an AI-driven world, the II offers a path forward that emphasizes inclusivity, transparency, and ethical alignment. By harnessing collective innovation and aligning technological advancement with human values, we can unlock unprecedented opportunities for human creativity, collaboration, and progress.

Forging an equitable future in the age of AI, while fostering the growth of the II, is a collective and open endeavor. The advancement of AI presents both immense opportunities and profound challenges; the II seeks to harness AI’s transformative potential while mitigating its risks.

We invite you to join us in this ambitious endeavor. Together, we can create an Intelligent Internet that empowers humanity and ushers in a new era of innovation, collaboration, and progress. Whether you are a technologist, policymaker, researcher, or simply someone passionate about the future of AI, there is a way to contribute:

  • Technologists and developers: contribute to our open-source projects and collaborate on building the future of AI.
  • Policymakers, academics, and industry leaders: join forums and conferences to shape the governance and ethical frameworks for responsible AI development.
  • Everyone: join the conversation, test beta applications, spread awareness, and help us build a more intelligent, inclusive AI-driven future.

Works Cited:

  1. Hafke, T. (2023, October 24). Generative AI in Healthcare: Use Cases, Benefits, and Drawbacks. AlphaSense. https://www.alpha-sense.com/blog/trends/generative-ai-healthcare/
  2. Gates, B. (2023, March 21). The Age of AI has begun. Gatesnotes.com. https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
  3. xAI. (2025, February 19). Grok 3 Beta — The Age of Reasoning Agents. X.ai. https://x.ai/blog/grok-3
  4. Ng, K., Drenon, B., Gerken, T., & Cieslak, M. (2025, February 4). What is DeepSeek - and why is everyone talking about it? BBC News. https://www.bbc.co.uk/news/articles/c5yv5976z9po
  5. Faraboschi, P., Giles, E., Hotard, J., Owczarek, K., & Wheeler, A. (2024, April 12). Reducing the Barriers to Entry for Foundation Model Training. ArXiv.org. https://doi.org/10.48550/arXiv.2404.08811
  6. Zhang, X., Li, S., Hauer, B., Shi, N., & Kondrak, G. (2023). Don’t Trust ChatGPT when Your Question is not in English: A Study of Multilingual Abilities and Types of LLMs. ArXiv.org. https://arxiv.org/abs/2305.16339 ; Motoki, F., Pinho Neto, V., & Rodrigues, V. (2023). More human than human: Measuring ChatGPT political bias. Public Choice, 198, 3–23. https://doi.org/10.1007/s11127-023-01097-2
  7. Bommasani, R., Klyman, K., Kapoor, S., Longpre, S., Xiong, B., Maslej, N., & Liang, P. (2024). The Foundation Model Transparency Index v1.1 May 2024. https://crfm.stanford.edu/fmti/paper.pdf
  8. Pavel K. (2024, May 30). Understanding Legacy IT Infrastructure and How to Update It | ModLogix. Application Modernization. https://modlogix.com/blog/legacy-it-infrastructure-what-it-is-and-how-to-modernize/ ; Lavallée, B. (2024, August 5). Networks will shape the future of Artificial Intelligence. Ciena. https://www.ciena.com/insights/blog/2024/networks-will-shape-the-future-of-artificial-intelligence
  9. Tillson, J. (2024, February 27). The Increasing Trend of Consolidation in the IT and Cybersecurity World. NetworkComputing. https://www.networkcomputing.com/network-security/the-increasing-trend-of-consolidation-in-the-it-and-cybersecurity-world
  10. Synergy Research Group. (2025, February 6). Cloud Market Jumped to $330 billion in 2024 – GenAI is Now Driving Half of the Growth. Srgresearch.com. https://www.srgresearch.com/articles/cloud-market-jumped-to-330-billion-in-2024-genai-is-now-driving-half-of-the-growth
  11. Parametrix Insurance. (2024). CrowdStrike’s Impact on the Fortune 500. Parametrix Insurance. https://www.parametrixinsurance.com/crowdstrike-outage-impact-on-the-fortune-500 ; Weston, D. (2024, July 20). Helping our customers through the CrowdStrike outage. The Official Microsoft Blog; Microsoft. https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/
  12. Donnellan, D., & Lawrence, A. (2024, March 27). Annual outage analysis 2024. Uptime Intelligence. https://intelligence.uptimeinstitute.com/resource/annual-outage-analysis-2024
  13. Ibid.
  14. OmniaTeam. (2024, April 9). The Hidden Costs of a Fragmented Digital Workspace. OmniaTeam. https://omnia-team.com/the-hidden-costs-of-a-fragmented-digital-workspace/ ; Data Dynamics. (2024, March 19). Unlock Data Transformation: Disentangle from Silos. Data Dynamics, Inc. https://www.datadynamicsinc.com/blog-disentangle-from-the-shackles-of-data-silos-the-key-to-unlocking-the-transformative-power-of-your-data-assets/ ; Richman, J. (2023, May 19). 8 Reasons Why Data Silos Are Problematic & How To Fix Them. Estuary. https://estuary.dev/why-data-silos-problematic/
  15. Madrid, P. (2023, January 17). Study reveals key reason why fake news spreads on social media. USC Today. https://today.usc.edu/usc-study-reveals-the-key-reason-why-fake-news-spreads-on-social-media/
  16. Virginia Tech News. (2024, February 22). AI and the spread of fake news sites: Experts explain how to counteract them. News.vt.edu; Virginia Tech. https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html
  17. Ryan-Mosley, T. (2023, October 4). How generative AI is boosting the spread of disinformation and propaganda. MIT Technology Review. https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/
  18. Richman (n 14)
  19. Data Dynamics (n 14)
  20. Crooks, R. (2022). Toward People’s Community Control of Technology: Race, Access, and Education. Social Science Research Council. https://doi.org/10.35650/JT.3015.d.2022
  21. International Energy Agency. (2023, July 11). Data centres & networks. IEA. https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks
  22. U.S. Energy Information Administration. (n.d.). International Electricity. EIA. Retrieved September 17, 2024, from https://www.eia.gov/international/data/world/electricity/electricity-consumption