Nick Clegg, the former President of Global Affairs at Meta and ex-UK Deputy Prime Minister, is stepping into the AI arena with a measured perspective, eschewing both alarmist doomsaying and uncritical boosterism. His recent appointments to the boards of Nscale, a British data center firm, and Efekta, an education technology startup, signal a strategic pivot towards the practical applications of artificial intelligence. Clegg’s nuanced stance on AI, particularly his skepticism towards the often-hyped concept of "superintelligence," comes as he emerges from a period of relative quiet following his departure from Meta in January 2025, shortly before Donald Trump’s return to the White House.
Clegg’s entry into the AI sector is marked by his belief in the transformative potential of the technology, especially within the realm of education. He views AI as a powerful tool capable of eliminating certain inefficiencies and personalizing learning experiences, a stark contrast to the more existential anxieties surrounding AI’s future. His decision to join Efekta, a spinout of the Swiss education giant EF Education First, underscores this commitment. Efekta’s core offering is an AI-powered teaching assistant designed to adapt to individual student abilities and provide detailed progress reports to educators. The innovation aims to deliver the kind of personalized instruction that has long eluded traditional classroom settings. The platform currently serves approximately four million students, with a significant presence in Latin America and Southeast Asia. Clegg is expected to draw on his extensive experience in both political and technological spheres to guide Efekta’s global expansion.
The AI Spectrum: Rejecting Hype, Embracing Pragmatism
In a candid conversation at EF Education First’s West London office, Clegg articulated his dissatisfaction with the prevailing discourse surrounding artificial intelligence. "I somewhat disregard both kinds of hype," he stated, referring to the extreme ends of the AI debate. "Saying that AI is going to destroy life as we know it by next Tuesday is as much hype as saying it’s the most powerful thing to have happened to the human being since the invention of fire." This aversion to hyperbole stems from his observation that such claims are often propagated by individuals or entities with vested interests in overstating the capabilities of their inventions.
Clegg attributes the volatile nature of AI discussions to the technology’s dual characteristics: its immense versatility and its inherent limitations. "It is exceptionally powerful for certain things—like coding—and exceptionally useless for many others," he explained. This paradox, he believes, makes it challenging for society to engage in a clear-eyed conversation about AI’s true potential and its practical boundaries. He also touched upon the human tendency to anthropomorphize AI, a phenomenon he views as a natural but ultimately mistaken way of processing our interactions with artificial systems.
Revolutionizing Education: AI’s Role in the Classroom
Clegg expressed strong conviction in the positive impact AI will have on education. "I’m completely convinced that immersive, online teaching can have very considerable benefits to pupils," he asserted. He highlighted the long-standing aspiration of educators to personalize learning, acknowledging the inherent difficulties in providing individualized attention within large classroom settings. AI, in his view, offers a groundbreaking solution: "the secret sauce that AI provides is that it really allows for adaptive, interactive personalization."
The choice of Efekta was driven by its focus on addressing educational disparities in underserved markets. "Its focus is on very big, underserved markets in Latin America and Southeast Asia, and so on," Clegg noted. "There are chronic teacher shortages across those parts of the world." He sees Efekta’s AI teacher as a democratizing force, offering students in remote regions the same quality of responsive educational interaction as those in more privileged urban areas.
However, Clegg also addressed concerns about potential drawbacks, such as students becoming overly reliant on AI for fundamental skill development. Drawing a parallel to the introduction of calculators, he argued that while AI will undoubtedly alter educational practices, the net effect is likely to be positive: calculators changed how arithmetic was taught without preventing students from developing the underlying skills. "Bad teachers will use it badly, and good teachers will use it very well—as they did whiteboards and calculators," he remarked.
The potential for emotional dependency on AI agents, particularly among vulnerable populations, is a significant concern for Clegg. He advocates for a "very precautionary approach" to the deployment of agentic AIs for young people, suggesting clear age-gating mechanisms. He cited Australia’s social media ban for under-16s as an example, though he pointed out the practical challenges in enforcing such age restrictions without compromising privacy. "My view for a long time has been that the only way to do that is through the choke points of iOS and Android, at an [app store] level," he proposed.
Despite these risks, Clegg remains confident that Efekta’s current AI offerings do not pose a threat of undue emotional influence. He differentiated these tools from more "agentic AIs" that might develop complex relationships with users, emphasizing that Efekta’s technology operates within a "teacher-controlled experience."
The Power Paradox: Concentration in Silicon Valley and Beyond
Clegg’s tenure at Meta provided him with a unique vantage point on the rapid evolution of AI and the intense pursuit of artificial general intelligence (AGI) and superintelligence. He expressed skepticism about the nebulous definitions and claims surrounding these advanced AI concepts. "If you ask three people at the same organization what superintelligence is, you’ll get three different answers," he observed. He believes that the pressure to appear at the forefront of AGI and superintelligence research is often driven by the need to attract top talent and funding in Silicon Valley.
A central theme in Clegg’s analysis is the "power paradox" inherent in AI development. While these technologies empower individuals, they simultaneously concentrate immense power in the hands of a select few entities, primarily in Silicon Valley and, increasingly, in China’s tech sector. The astronomical costs associated with developing and maintaining large language models (LLMs) are exacerbating this trend, leading to an oligopolistic market structure. Clegg anticipates a future "shakeout" as the immense financial demands of AI infrastructure become unsustainable for many players.
Governance and Content Moderation: Lessons from Meta
Clegg also reflected on his efforts to address the concentration of power within Meta, specifically through the establishment of the Facebook Oversight Board. He defended the board’s effectiveness, citing its ability to make binding content decisions that Meta has been compelled to implement. He acknowledged that while it may not be the "Supreme Court" some envisioned, it serves as a crucial final recourse for complex content moderation decisions, balancing free expression with platform responsibility.
A point of disappointment for Clegg is the lack of broader adoption of the Oversight Board model by other tech platforms. He attributes this partly to a shift in the US attitude towards content moderation, particularly in the post-Musk takeover era at Twitter. He criticized what he described as the "infantile tendency for the MAGA crowd to call any content moderation an act of censorship," viewing this as a "ludicrous distortion of the truth" that fetishizes the term "censorship" for political gain.
Regarding Meta’s evolving content moderation strategies, including the shift from independent fact-checkers to crowdsourced moderation, Clegg acknowledged the changes but maintained a pragmatic view. He argued that crowdsourcing misinformation identification can be effective at scale, and cautioned against romanticizing independent fact-checkers, noting that their reach is limited and that they can be perceived as ideologically biased by significant segments of the population.
Navigating the Geopolitical AI Landscape: EU Regulation and Open Source
Clegg expressed strong criticism of the European Union’s approach to AI regulation, particularly the AI Act. He described it as a "ludicrous act of self-harm" and a "classic, textbook example of how not to regulate." His primary concern is that the legislation was drafted before the widespread adoption of technologies like ChatGPT, leaving it ill-equipped to address the complexities of AI development. He argued that holding foundational model developers responsible for downstream applications is unworkable and detrimental to European entrepreneurs seeking to build globally competitive companies.
"It infuriates me, because the same people will pontificate about asserting European sovereignty and making sure that we’re not all dependent on American and Chinese technology. It’s about the worst way to guarantee our sovereignty," he stated, highlighting the irony of the EU’s regulatory approach potentially hindering its own technological independence.
In contrast to stringent, top-down regulation, Clegg has become a vocal advocate for open-source AI development. He believes this approach is the most effective means of democratizing access to AI technologies and preventing the consolidation of power in the hands of a few proprietary model providers. He pointed to China, the world’s largest autocracy, as an unexpected leader in facilitating democratized access through open-sourcing, whether by design or by coincidence. This advocacy for open access and pragmatic application positions Nick Clegg as a significant voice shaping the future discourse and development of artificial intelligence.