The Overlooked Connection: Internet Governance and Artificial Intelligence

When the AI Mirage Sidelines Internet Governance

For five days in Kyoto, Japan, Internet leaders, academics, researchers, and experts met for the 18th annual meeting of the Internet Governance Forum (IGF). The event’s theme, “Shaping Digital Trust for the Internet We Want,” introduced a variety of subcategories, from Digital Divides and Inclusion to Human Rights and Freedoms. Most of these subcategories served their purpose, framing conversations around essential topics. The category of “AI and Emerging Technologies,” however, left too much room for the fluff that has been contaminating conversation on AI. By not explicitly tying AI to internet governance (e.g., “The Internet We Want as More AI Users Proliferate”), panels and fireside chats on topics such as the use of AI in alternative dispute resolution, or how AI compares to the Internet as a technology, left listeners confused and distracted. These conversations are meaningful when discussing how AI develops and is used, but the IGF and its theme should have remained strict in their function: actively applying internet governance to AI. Otherwise, these conversations become lost within platforms that garner significant attention and, eventually, sponsorship for change. Several moments during IGF 2023, and in government spaces beyond it, showcase how AI governance gets drowned out.

On the first day of the IGF, parliamentarians spoke about their role in shaping a trustworthy internet, discussing how agile government measures can keep up with constantly developing internet technology. Latifa Al-Abdulkarim, Ph.D., stated that efficient, successful regulation requires an emphasis on “iterative, multistakeholder, agile, innovative regulation that considers the economic dimension which can build trust within users.” Brando Benifei pointed out the tensions between stakeholders in delivering legislation that promotes global connection: imbalances of interest and power, and the need to examine every use case to prevent further issues. While these points are vital bases for Internet governance, AI was mentioned only lightly, as a technology rather than as an avenue within Internet usage.

On Day 3, the Policy Network on Artificial Intelligence (PNAI) presented and discussed its report, “Strengthening multistakeholder approach to global AI governance, protecting the environment and human rights in the era of generative AI,” which focused on three key thematic areas: global AI governance interoperability; AI, gender, and race; and AI’s impact on the environment. Sarayu Natarajan, Ph.D., pointed out that, regarding generative AI and mis/disinformation, the report provides a framework for addressing the spread of such information without discussing it explicitly. Dr. Li, however, argued that generative AI needs space for innovation and that regulation may come too early. The conversation thoroughly explored the multifaceted nature of generative AI but gave little, if any, attention to its relationship with Internet governance.

This roundabout persists in conversations about AI and Internet governance outside of IGF 2023. AI has been searched for and spoken about at an extraordinary rate since ChatGPT and similar products were introduced to the public. According to Google Trends, interest in Artificial Intelligence rose after November 2022, peaking in April 2023. The implications of AI for society and its pillars have been touched upon, from the workplace to the practice of war. AI is a behemoth and a pygmy in its own right because it can be used in so many areas of life, including everyday tasks. As researchers and other professionals continue to think about AI, they are faced with how to govern it. Ultimately, the prevailing discourse on the intersection of AI and internet governance falls short of the in-depth conversations needed to fully incorporate the multifaceted implications of AI technologies within the digital landscape.

The United States government has issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” However, the order touches on the Internet in only two instances. First, it mandates the tracking and recording of information on how foreign individuals or entities access and administer Internet services, in order to verify account ownership. This includes collecting data on the IP addresses used and their timestamps. Foreign resellers of U.S. IaaS products must also implement measures restricting unauthorized third-party access to this information, in accordance with the order’s guidelines and applicable laws.

Second, the order entrusts the Secretary of Commerce, with guidance from the Assistant Secretary of Commerce for Communications and Information and in consultation with the Secretary of State, to take specific actions within 270 days of the order’s issuance. These actions are intended to address and manage the risks and benefits associated with AI development and usage, and may involve formulating regulations or policies that balance promoting innovation against mitigating security threats related to AI models. However, the document lacks a detailed explanation of how these policies will be developed and implemented.

Similarly, several of the European Parliament’s press releases on AI governance policy use vague language about the Internet. For example, its publication on the Data Act does not elaborate on any Internet governance or regulatory measures, such as net neutrality, data traffic management, or ensuring equal access to online resources. A comprehensive Internet governance framework should address these aspects to maintain an open and fair digital environment. While the Data Act addresses how AI tools should extract user information and other data, it ignores how AI tools use the Internet.

Why We Forget About Internet Governance and AI 

The intersection of AI governance and Internet governance necessitates oversight for various reasons, one of which is the rapid pace of AI development and its far-reaching implications for the digital landscape. The discourse around these implications, often described as “AI Hype,” evolved from broader discussions and observations within the field, arising from fluctuations in public perception, media coverage, and the gap between the potential and the actual capabilities of AI technologies. Research on how such conversations affect the development of AI has already been published in fields like education and medicine.

Additionally, the development of AI in the private sector comes with intellectual differences among researchers, developers, and stakeholders. These surface in issues like OpenAI’s ouster and reinstatement of Sam Altman, arguments over distinguishing AI from machine learning, deep learning, and other technologies, the impact of shortages of AI developers and technology-adjacent experts alongside layoffs, and the idea of abandoning AI altogether. AI Hype tends to narrow discussions toward immediate concerns like technical advancements and ethical dilemmas, inadvertently overshadowing the broader context of AI’s intersection with Internet governance. The hype accentuates the complexity of AI, making it harder to integrate discussions about its relationship with internet governance, an equally intricate and multidimensional field. Moreover, as AI proliferates across sectors beyond the Internet, the intense focus on specific applications diverts attention from its broader implications within Internet governance. Policy and regulatory frameworks, struggling to keep pace with AI advancements, prioritize immediate ethical and societal concerns, leaving the integration of internet governance considerations for later stages. This fervent spotlight on AI also fosters separate, isolated discussions, forming silos between conversations about AI and those centered on internet governance, and hindering the exploration of their interconnectedness.

Fueling this hype, content moderation clogs up discussions of Internet governance and AI rather than becoming part of a general discussion that could create better frameworks to protect users. PNAI’s report references content moderation on social media platforms performed by human beings rather than AI, which raises labor-cost issues and cross-border ethical concerns for the Global South. Additionally, differences between state legislation governing political and social discourse on the Internet turn the conversation away from AI and Internet governance and toward how AI can be used for platform governance and content regulation.

In addition to content moderation, the extraction of data and user-created content also takes up space within this conversation. Web crawlers, found behind tools like ChatGPT and Bard, navigate websites, following hyperlinks to discover and analyze content, and then store this information for indexing. Topics such as the reliability or biases of web crawlers spawn further questions about how they are used across the Internet, rather than how they should be standardized within these popular and invasive tools. Establishing standardized frameworks for web crawlers would not only address rising data privacy concerns but could also foster a more transparent and accountable digital landscape. Beyond AI Hype, advocating for such standardization urges stakeholders and government bodies to prioritize discussions and enact policies that responsibly manage the use of web crawlers within AI-driven technologies.
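One existing, if voluntary, standardization mechanism is the Robots Exclusion Protocol (robots.txt), which sites already use to signal which paths crawlers may fetch. The sketch below, using Python’s standard-library `urllib.robotparser`, shows how a standards-respecting crawler would consult such rules before collecting content; the robots.txt text and the `example.com` URLs are hypothetical, though “GPTBot” is the user-agent token OpenAI documents for its crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site might publish: it bars an AI crawler
# ("GPTBot") from /private/ while leaving the site open to everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /private/

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks each URL against these rules before fetching.
print(rp.can_fetch("GPTBot", "https://example.com/private/data.html"))  # False
print(rp.can_fetch("GPTBot", "https://example.com/blog/post.html"))     # True
print(rp.can_fetch("SomeOtherBot", "https://example.com/private/data.html"))  # True
```

The catch, and the reason the essay’s call for standardization matters, is that nothing in the protocol forces compliance: a crawler that never performs this check faces no technical barrier, only a normative one.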

Can existing organizations handle a vehicle constantly off-roading into other concerns on the Internet? During the PNAI discussion, Li stated that the policy community has no organization for generative AI comparable to the Internet Engineering Task Force (IETF). The IETF creates voluntary standards that guide the development of the Internet, but it does not make policy. While the IETF has conducted research on AI, that work is not specific to the effects that AI tools on the Internet may have, perhaps because, for the IETF, AI is folded into topics like the Internet of Things. AI becomes all-encompassing and then swallows itself whole in such topics. The International Telecommunication Union (ITU) also develops standards to improve information and communication technologies, and it hosts a subset of activities and groups concentrating on AI, from natural disaster management to health. While each organization may touch upon the Internet, neither offers a home that usefully separates the combined question of AI and Internet governance from the rest.

Overall, the prevailing discourse on AI and internet governance often lacks the depth needed to fully comprehend the multifaceted implications of AI technologies within the digital landscape. Addressing this gap necessitates a concerted effort to holistically integrate AI’s governance within the broader Internet governance framework, ensuring a balanced and inclusive digital future.

A Way Forward: Key Aspects of AI and Internet Governance to Be Addressed 

New topics and concerns eternally bud from the Internet and AI, but the point of this essay is that it is not impossible to bring the two under proper governance. The task may sound daunting, but the Internet governance timeline stretches back to the 1980s. What is needed is a framework that organizes the concerns between AI and Internet governance, keeping researchers and policymakers on track and committed to Internet governance as it pertains to AI. Given how multidisciplinary AI on the Internet proves to be, a broad approach to such a framework is necessary.

Key Aspects:

  1. Ethical Guidelines and Principles to Follow:
    1. Transparency: Ensuring AI systems on the Internet are transparent in their decision-making processes and can explain their actions.
    2. Bias Mitigation: Addressing biases in AI to ensure fair treatment across diverse populations in any state.
    3. Accountability and Responsibility: Clearly defining responsibilities for the actions and decisions made by AI systems, their developers, and stakeholders.
  2. Data Governance:
    1. Establish legal frameworks that outline how AI systems can collect, process, and store personal data.
      1. Including but not limited to data protection laws, data localization, encryption and security measures, and explicit consent.
    2. Guidelines for collecting, storing, and processing data, emphasizing user consent and data minimization.
      1. Including but not limited to data minimization and purpose limitation, data quality and integrity, user empowerment and control, and accountability and governance.
  3. AI Privacy and Security:
    1. Cybersecurity Standards: Ensuring AI systems are protected against cyber threats and vulnerabilities. 
    2. Safety Protocols: Establishing safety measures to prevent AI systems from causing harm or risks to individuals or society.
  4. Ethical AI Research and Development:
    1. Guidelines promoting ethical AI research and development encompass practices ensuring informed consent, fair and unbiased algorithms, human-centric design, societal benefit considerations, accountability measures, safety protocols, and continuous monitoring to align AI innovations with ethical principles and positive societal impact.
    2. Encouraging research in AI while considering ethical implications and societal impact.
  5. Regulation, Compliance, and Policy:
    1. Regulatory oversight and compliance in AI encompass establishing specialized laws, standards, and oversight bodies to monitor adherence to ethical guidelines, ensuring that AI development, deployment, and usage align with legal regulations, ethical considerations, and evolving technological landscapes.
    2. Role of Governments and Private Actors: Governments establish legal frameworks and oversight, while private entities contribute through compliance, ethical expertise, and innovation support. Collaborative efforts between both ensure effective regulation, ethical implementation, and innovation in tandem with the evolving AI landscape.
    3. Challenges in AI regulation to look for: Adapting regulations to the rapid pace of technological evolution presents enforcement and adaptation challenges. Diverse global regulations create hurdles in harmonization, compounded by ethical ambiguities and stakeholder alignment complexities, impacting enforcement and compliance efforts.
  6. Education and Awareness
    1. Public Education Initiatives: These programs inform diverse audiences about AI’s applications, ethical considerations, and societal impact through tailored workshops, seminars, and collaborative sessions, fostering informed policymaking and public awareness.
    2. Inclusive Technical Training and Continuous Learning: These initiatives focus on teaching technical skills for AI work while emphasizing ongoing learning opportunities. They ensure inclusivity for individuals from diverse backgrounds, fostering a capable and diverse AI community with an understanding of ethical practices.
  7. International Collaboration and Standards:
    1. Global Cooperation for Common Standards: Nations collaborate to establish universal guidelines ensuring ethical and responsible AI development. 
    2. Harmonization of Regulations: Efforts are made to align AI policies across borders, streamlining ethical practices and legal frameworks to facilitate consistent deployment globally.
  8. AI and Critical Sectors:
    1. Sector-specific Regulations: Tailored policies for critical sectors like healthcare, finance, and transportation to ensure safe and ethical use of AI.
  9. Continuous Monitoring and Adaptation:
    1. Ethics Review Boards: Establishing bodies to assess and review the ethical implications of AI applications. 
    2. Adaptive Governance: Frameworks for ongoing evaluation and adaptation of policies as AI technology evolves.

This comprehensive framework aims to balance innovation and regulation, ensuring that AI technologies developed and utilized within the internet sphere adhere to ethical principles, protect user rights, and contribute positively to society.

Many of these points may seem familiar from other discussions of AI or Internet governance. However, when aspects such as critical sectors or regulation come up, the natural tendency is to drown out AI’s regulation in the digital landscape. Instead, a more disciplined, multidisciplinary approach to collaboration and awareness is needed, one that sticks to the themes of creating a safe, reliable, accessible, and supportive Internet in which AI can flourish.

The IGF, like many other significant collaborative events, has the opportunity to mold future discussions on Internet governance and AI and to keep academics, stakeholders, journalists, and others in this conversation on track. This approach creates an inclusive environment beyond self-driving cars going rogue or calls for companies to halt AI production; it involves conversations that will eventually be crucial in shaping ethical, legal, and technical policies. Despite the event’s aim to establish a trustworthy internet, exploration of AI’s intrinsic relationship with internet governance remained peripheral, with panels often skimming the surface of AI as a technology rather than delving into its profound implications within the digital landscape. While glimpses of their interconnectedness emerged sporadically, such as agile government measures in internet regulation, the comprehensive integration of AI within internet governance discussions fell short. The intense focus on immediate concerns, including content moderation and evolving AI development, diverted attention from holistic conversations about their intersection. Shifting this discourse requires a structured framework encompassing ethical guidelines, data governance, privacy and security measures, regulatory compliance, education, international collaboration, and sector-specific regulations, all integrated within the context of internet governance to foster AI’s ethical and sustainable place in our digital future. Collaborative platforms like the IGF can shape that direction by converging academics, stakeholders, and policymakers toward a balanced and inclusive dialogue.

ABOUT THE AUTHOR
Angie Orejuela
Angie is a policy and research analyst based in NYC and a fellow at the Internet Law and Policy Foundry. She holds a Master’s degree in International Relations and Diplomacy from Seton Hall University. As she advances her academic career, she aims to research and analyze digital innovation and how it affects global politics and foreign policy.