Throughout history, people have often been both the creators and the victims of their own ambitions. Time and again, whether in literature, in history, or in the world around us, we see inventions, ideas, and technologies escape our control, defy expectations, and sometimes cause harm.
Mary Shelley’s 1818 Gothic novel, “Frankenstein; or, The Modern Prometheus”, provides a timeless warning. It tells the tale of Victor Frankenstein, who brings a creature to life, only to find himself completely unprepared for what comes next.
Rejected by his creator, the creature is left to navigate a world that fears and shuns him. Victor refuses to guide or care for his creation, and when he destroys the female companion he had promised, that betrayal deepens the monster’s anger. The result is devastating. The creature murders Victor’s brother, his friend, and his fiancée, leaving his creator consumed by grief and obsession. Victor chases the monster to the ends of the earth, but soon perishes. Shelley’s story is more than Gothic horror; it is a lesson in responsibility: creation without accountability can spiral into destruction.
History gives us another cautionary tale. J. Robert Oppenheimer, “the Father of the Atomic Bomb”, helped build one of humanity’s most destructive inventions. The bombs ended World War II but killed an estimated 200,000 people in Hiroshima and Nagasaki. Oppenheimer later reflected with deep regret, quoting the Bhagavad Gita: “Now I am become Death, the destroyer of worlds”.
Like Frankenstein, Oppenheimer learned that innovation carries weighty consequences. Both stories show that when humans wield immense power without foresight or ethical grounding, the results can be catastrophic.
Today, we see a new version of this story unfolding. Artificial Intelligence (AI) is reshaping our world in ways even its creators could not have fully anticipated. Originally developed to improve efficiency, healthcare, communication, and knowledge-sharing, AI now touches nearly every aspect of our lives. It has enormous potential but also the ability to disrupt society, politics, and daily life if left unchecked.
The political implications of AI are already clear. In January 2024, voters in New Hampshire received robocalls featuring a voice mimicking President Biden, instructing them not to vote in the state’s primary. The calls were produced with AI voice-cloning software from ElevenLabs, and the incident triggered federal investigations and new regulations from the FCC. The problem is not confined to the United States. A 2025 report by the International Panel on the Information Environment found that 80 percent of countries holding competitive elections in 2024 experienced AI-related incidents, many designed to mislead voters or manipulate public opinion. Some attacks specifically targeted female candidates with vile AI-generated content.
AI also affects society beyond politics. By determining what information people see online, algorithms can shape opinions, reinforce biases, and influence worldviews. In some of the darkest corners of the internet, AI has been weaponised to produce sexualized deepfake content of children. In the first half of 2025, the UK’s Internet Watch Foundation verified over 1,200 AI-generated videos depicting child abuse, a sharp rise from just two cases in the same period in 2024. These disturbing developments highlight how technology, if left unchecked, can be exploited for truly horrifying purposes.
For countries like Guyana, these challenges present both a risk and a rare opportunity. Being at the beginning of the AI journey means Guyana can learn from the experiences of nations further along, avoiding costly mistakes. To protect its citizens and harness AI responsibly, Guyana must take a multi-pronged approach.
First, strong legal frameworks are essential. Existing laws in Guyana may not yet address the unique challenges posed by AI including deepfakes and misinformation. The government should prioritise legislation that sets clear boundaries for AI use, protects citizens’ rights, and holds creators and operators accountable for misuse.
Second, ethics and transparency must be embedded in AI adoption. AI should not operate as a black box whose decisions are invisible to regulators or citizens. Transparency in how AI systems work, especially in critical areas such as healthcare, banking, and government services, must be made a priority. This will build trust and prevent abuses.
Third, digital literacy and public education are crucial. Citizens must be equipped to identify misinformation and manipulation online. Social media platforms are flooded with content designed to mislead or influence behaviour, and without the skills to discern fact from fiction, people can easily fall prey to manipulation.
Just recently, a gas station was targeted by an act of terrorism that claimed the life of a child and injured several others. A choppy video recording of the suspect was shared on social media, and AI was used to predict the facial appearance of the bomber. Soon after, the suspect was apprehended by police, and controversy followed as many used the AI-generated image to dispute the police’s photo of the terrorist. The Guyana Police Force and other agencies were forced to spend an inordinate amount of time explaining that the AI image was not an actual photograph of the bomber.
Fourth, capacity building and workforce development are key. Guyana should invest in training a new generation of data scientists, engineers, and regulators who can develop, manage, and oversee AI systems responsibly. By building human capacity now, the country ensures that AI technologies are implemented safely, monitored effectively, and updated as needed.
Fifth, responsible innovation must be incentivised. AI has enormous potential to improve lives, from better healthcare and education to smart infrastructure and public services. The government can encourage ethical innovation through grants, research funding, and recognition programmes for AI solutions that prioritise social good. At the same time, strict oversight must prevent harmful applications or experiments that put citizens at risk.
With the Government eyeing the rollout of a “Digital Guyana” by integrating AI technologies into healthcare, education, crime-fighting and infrastructural development, data protection and privacy safeguards are necessary.
Guyana must enforce laws that protect citizens’ data, regulate how it is collected and used, and ensure that individuals have the right to know when their data is being processed by AI.
Indeed, AI can be a tool that empowers citizens, strengthens governance, and drives development, or it can become a source of harm if left unchecked. The responsibility lies with policymakers, educators, technologists, and society at large to ensure that AI serves the people rather than threatens them.
It is clear that creation brings power, but power brings responsibility. As Guyana stands at the crossroads of technology and society, the country has a rare opportunity to lead with vision, ethics, and courage. If it succeeds, it will not only protect its citizens from the dangers of AI but also show the world how a small nation can embrace innovation responsibly, harnessing the true power of technology while safeguarding human rights and safety.