It was a few months before the young RSP’s sweeping election victory in Nepal this year. At the end of his presentation of the party’s legislative vision, a prominent leader concluded by lamenting that his fellow candidates had not used AI to generate their own presentations. Indeed, one of his six priorities was to promote the use of AI in governance, though he did not specify any policy benefits of doing so.
The viral rise of large-language-model artificial intelligence tools like ChatGPT, beginning in November 2022, was undoubtedly a major milestone in the history of technology. Unfortunately, the impacts of this technology on democracy, education, the labor market, and good governance range from very helpful to shockingly harmful. The LLM tools that policy makers are using (which also include Claude, Gemini, Copilot, and DeepSeek) are based on the stunningly simple function of locating and predicting human-like patterns of words. Powered by vast training data and enormous processing power, LLM tools generate plausible texts, support learning and research, decode handwritten documents, and interact persuasively in oral form (not to mention AI’s expanding capacities in image generation, predictive modeling, robotic movement, real-time translation, and more).
The popular fascination with LLM tools, however, masks the challenges they pose to society. These range from empowering bad actors to create and spread disinformation, including deepfake videos, at machine scale, to contaminating the broader information landscape with synthetic content, eroding public trust in expertise and authoritative sources, and destabilizing what counts as evidence in legal and policy contexts. Other challenges include large-scale intellectual property theft and copyright violation; privacy violations through data harvesting; extraction and redirection of natural resources; exploitation of physical and emotional labor in countries with fragile economies; amplification and perpetuation of bias and discrimination, including through unjust policing and state abuse of power; psychological manipulation and targeted persuasion by bad actors and by companies seeking to increase social media engagement and AI adoption; and a ramping up of junk science and scholarship.
Understanding the perils of AI while harnessing its potential in policy is the pathway forward for a smarter new generation of public leaders.
Monkey and the Ladder
Politicians do not need help generating more content. As policy makers, they instead need to gather adequate data and evidence, consult experts, seek out public input, and learn lessons from history (including technological history). Powerful AI applications that can generate impressive speeches will make a difference to policy making only if the users are deeply knowledgeable about the subject, highly skilled in using the tool, provide it with adequate data, do not bypass expert consultation, and so on. Without these conditions, if the tools are used only to generate speeches that widen the gap between promises and outcomes, the new technology will become the proverbial ladder for monkeys.
Banana market growing increasingly popular in the east
To match its technological savvy with technological literacy, the new generation must take a step back and educate itself about the documented harms of social media over the past two decades. As lawsuits in the United States are starting to expose, including a case in which Facebook’s parent company Meta was found liable for social harms caused by its products, Western societies with some clout over social media companies are starting to hold them accountable. Some of the harms these companies have caused include deliberately engineering addictive features (such as infinite scroll, autoplay, and dopamine-loop notifications) to maximize youth engagement despite internal research showing serious harm. Research has documented mental health crises among adolescents at alarming scale: depression, anxiety, eating disorders, and more. In recent months, over two thousand individual product liability claims have been consolidated in US federal court, with cases proceeding against Meta, TikTok, Snapchat, YouTube, and others.
Researchers studying the harms of AI have clearly shown that AI is built on deficient and defective data about the global South. AI companies are developing systems that further marginalize local knowledge traditions; displace and exploit workers in fragile economies; generate biased or discriminatory outputs in facial recognition, hiring, credit, and law enforcement that disproportionately harm minority groups; enable surveillance of the public; manipulate or enable political discourses that disrupt social harmony; and sell or support digital products whose market value depends on attention driven by anger, deepfakes, and disinformation.
When a politician in Kathmandu uses an LLM to draft policy, the tool draws on a knowledge base shaped overwhelmingly by the realities and worldviews of Western societies: their material and economic issues, social conditions, and cultural values. Frontier AI systems owned by private companies in Silicon Valley are also built on the extraction of information and emotion from around the world, so sensitive private and public information should not be fed to AI tools. An LLM can produce fluent, confident prose about any country’s agriculture, its governance structure, its public health system, or its energy policy. But policy makers who do not have enough field data, community and stakeholder input, expert consultation, or personal experience and exposure will surrender policy imagination to the faraway designers of an information system that has no accountability to the people its “ideas” affect.
Barbecue in a Forest Fire
Policy makers who care about environmental and social justice must also learn about the shocking extractive foundations of AI, which are well documented in books such as Kate Crawford’s Atlas of AI and Karen Hao’s Empire of AI. Training and serving large models requires intensive mining of rare-earth metals, enormous amounts of water and energy, and vast quantities of human labor (workers often experience extreme psychological harm while labeling or removing problematic content). AI systems hoover up the intellectual and creative output of artists, writers, journalists, scholars, and everyday users worldwide, typically without consent, compensation, or attribution. Countries that are net exporters of all this “raw material” and net importers of the finished AI product are participating in an asymmetric exchange with deep systemic resemblances to earlier forms of colonial extraction. AI is arriving with far greater capabilities for resource extraction and far more insidious embedding into critical national and social infrastructure. Celebrating its adoption while remaining silent about these supply-chain realities is like enjoying barbecue at the edge of a forest fire.
Perhaps the least visible but most corrosive risk of uncritical AI adoption is what it does to the scientific and scholarly knowledge ecosystem, which is the basis of informed policy making. On the one hand, generative AI is lowering the quality of education for future leaders and knowledge workers by letting them bypass the experience of developing the capacity to think critically, evaluate evidence, construct informed arguments, and engage responsibly with the world. On the other hand, GenAI tools are also undermining knowledge making by letting researchers skip effortful engagement with the research process: slow and careful reading, grappling with difficulty, synthesizing conflicting sources, drafting and revising, and engaging in rigorous peer review.
Societies where democratic institutions are not well established, or where norms are not as strong, may be hit hardest by the cognitive deskilling that GenAI is documented to cause. In fact, AI companies argue that AI tools can help users skip the bottom layers of Bloom’s taxonomy of educational objectives (remembering, understanding, and applying knowledge), promising to help users analyze and evaluate issues, saving them time to “create” instead. That promise is absurd: a lawyer who does not know or remember enough terms and concepts, laws and cases, frameworks and approaches (but instead keeps reading from a printout or glancing at a phone) cannot possibly argue a case effectively in court. Policy makers who “generate” policies without an in-depth understanding of issues and contexts, people and perspectives, past experiences and failures cannot be effective leaders.
AI systems are already corrupting the knowledge commons on which informed leadership depends, generating texts that mimic peer-reviewed research, complete with fabricated citations supporting plausible-sounding but hollow claims. That shared knowledge ecosystem, on which democratic deliberation, public health, legal systems, and evidence-based governance all depend, is degraded each time a plausible but false claim enters it.
Eating the Banana
None of the above, however, means that policy makers should avoid or ban AI. Here are a few specific things they could do to expand the benefits while shrinking the harms:
Policy makers must invest in AI literacy, educating themselves adequately about how AI tools work, how to use them effectively, and especially how not to use them poorly or dangerously. It is not enough to learn how to write better prompts; it is necessary to learn how to evaluate and judge AI’s behavior and benefits, outputs and actions, costs and impacts. They must become both capable in their domain’s knowledge and skills and able to use AI in ethically and socially responsible ways. Even reading just the two books mentioned above could be a great way to start minimizing harms and maximizing benefits.
Mandate AI literacy training for all civil servants and elected officials: go far beyond prompt-writing workshops into sound education on how LLMs work, where they fail, whose interests they serve, and what documented harms they have caused. Require disclosure whenever AI tools are used in drafting legislation, policy documents, or public communications, so that citizens and experts can evaluate what human judgment actually went into a policy. Establish independent technical advisory bodies (drawing on local researchers, civil society, and domain experts) to audit AI-assisted policy proposals before adoption; they can help ensure that field data, stakeholder input, and contextual knowledge have not been bypassed.
Protect public and government data from AI extraction by enacting clear data sovereignty policies that restrict what information private AI companies can harvest from national digital infrastructure, public records, and citizen interactions. Create procurement standards that prohibit government agencies from using AI tools that cannot demonstrate accountability, transparency, and compliance with local legal and ethical standards. Regulate the use of AI in law enforcement, hiring, credit, and public services, where biased outputs disproportionately harm already-marginalized communities, and require algorithmic impact assessments before deployment. Develop regional cooperation to negotiate collectively with Silicon Valley companies, building shared standards for data protection, revenue sharing, and accountability.
In education and research (whose quality is foundational to good governance), protect and fund the slow, effortful processes that AI shortcuts threaten: primary research, peer review, field work, and expert deliberation. Support investigative journalism and civil society organizations that can monitor AI harms, hold companies accountable, and keep the public accurately informed.
To be excited by the mere power of AI to generate fancy speeches, as the RSP leader in my anecdote above seemed to be, is like throwing away the banana inside and eating the peel (to give a new-generation twist to an old Nepali proverb). To harness the genuine affordances of AI technology, a new generation of public leaders must use it to gather and analyze data, connect and engage people, strengthen accountability, and deepen (not replace) their own understanding of the complex realities they are entrusted to lead.