Exclusive for Stankevicius: From Lewis’s Hideous Strength to Deepfakes and the Machinery of Belief

In my previous article for Stankevicius, “The Veldt 2.0: Your Smart Home Wants Your Children,” I drew on Ray Bradbury’s 1950 short story “The Veldt” to warn that the corporate arms race in artificial intelligence is no longer confined to laboratories and trading floors; it is creeping into nurseries and playrooms. I argued that when companies such as Mattel announce plans to embed OpenAI’s language and video models into children’s toys, the Moloch trap comes home. Bradbury’s fictional HappyLife Home, with its immersive nursery, serves as a blueprint for a smart-home ecosystem in which machines monitor and mediate children’s relationships. Chief among the dangers are privacy breaches, the risk that intimate recordings could be repurposed into deepfake child pornography, and the broader danger that children might form their first emotional attachments with responsive algorithms rather than with human caregivers.

This exclusive Stankevicius article extends that moral inquiry from the home to the public sphere. Deepfakes, convincing audio and video fabrications generated by machine-learning models, transform images and voices into programmable surfaces, threatening to dissolve the link between what we sense and what is real. The problem is not merely technological; it is moral and political. I draw on C. S. Lewis’s dystopian novel That Hideous Strength (1945) to explore how technocratic institutions manipulate belief. In the book, the National Institute of Co‑ordinated Experiments (N.I.C.E.) attempts to recondition public opinion by flooding society with narratives that make disbelief costly.

Today’s stakes are high. Recent incidents underscore how rapidly the technology has advanced and how unprepared institutions remain. In early 2024, as reported by CNN, the British engineering giant Arup revealed a deepfake scam in which a finance worker in Hong Kong transferred roughly $25 million (HK$200 million) during a video meeting, believing she was speaking with her executives; the “colleagues” were AI‑generated.

Cowin, J. (2025, October 9). From Lewis’s Hideous Strength to Deepfakes and the Machinery of Belief. Stankevicius. https://stankevicius.co/artificial-intelligence/from-lewiss-hideous-strength-to-deepfakes-and-the-machinery-of-belief/

Expanding my AI Knowledge with Google’s AI Essentials Course

Today, I completed Google’s AI Essentials course to build upon my existing AI knowledge. As someone who believes in continuous learning, I found the course to be a valuable resource for professionals looking to enhance their AI skill set.
The course content was comprehensive, covering a wide range of topics from foundational concepts to practical applications of AI in the workplace. The hands-on exercises and real-world examples helped reinforce the learning material and provided opportunities to apply newfound knowledge.
One notable aspect of the course was its emphasis on the responsible and ethical use of AI. It provided a framework for understanding potential biases, inaccuracies, and security risks associated with AI and offered guidance on mitigating these issues.

Many aspects of the course were directly transferable to higher education and my field: teacher preparation and second language acquisition. The course also provided insights into prompt engineering and its potential to streamline workflows and inspire creative solutions, a skill that can greatly augment tasks and improve efficiency across industries.
Throughout the course, I acquired several key skills that are applicable to both my academic work and teaching:

  • Augmenting tasks with AI: Learning how to effectively integrate AI into my workflow to enhance productivity and performance.
  • Critical thinking: Developing the ability to critically evaluate AI tools and their potential impacts on projects and decision-making processes.
  • Iterative thinking: Understanding the importance of iterative problem-solving when working with AI, refining solutions based on feedback and results.
  • Prompt engineering: Mastering the art of crafting precise and effective prompts to guide AI models in generating desired outputs.
  • Confronting AI challenges: Gaining awareness of potential biases, inaccuracies, and security vulnerabilities associated with AI systems and developing strategies to mitigate these concerns.
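
As a sketch of the kind of structured prompting the course emphasizes, the snippet below assembles a prompt from labeled components. The four field names (persona, task, context, output format) are a common structured-prompting pattern; the specific labels and the example prompt are my own illustration, not material from the course.

```python
def build_prompt(persona: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from labeled components.

    Separating the role, the task, the context, and the desired output
    format makes each prompt easier to review, reuse, and refine
    iteratively -- the workflow the course calls iterative thinking.
    """
    return (
        f"Role: {persona}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Output format: {output_format}"
    )

# Hypothetical example from language-teacher education:
prompt = build_prompt(
    persona="an ESL teacher trainer",
    task="draft three discussion questions about AI ethics",
    context="intermediate adult learners, 50-minute class",
    output_format="a numbered list",
)
print(prompt)
```

Because the template is just a function, variants can be generated and compared quickly, which supports the refine-and-retry cycle that effective prompting depends on.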

However, it is essential to recognize that learning AI is an ongoing process. The field is constantly evolving, and staying up-to-date requires a personal commitment to continuous learning about and exploration of new AI tools. As AI technologies advance at a rapid pace, those of us who wish to remain competitive in the field must actively seek out opportunities to expand our knowledge and skill set. This may involve attending conferences, participating in online courses, engaging with AI communities, and experimenting with emerging AI platforms to stay at the forefront of this exponentially transformative industry. And in my case, this meant using the 4th of July to complete the course.

Join my webinar for Everyone Academy: Structured AI Prompting Strategies for Language Educators

At the heart of my professional journey is a commitment to transformative education, grounded in concepts like Lynda Miller’s philosophy of abundance, which counters Ruby Payne’s notion of a Culture of Poverty (2005). This philosophy of abundance emphasizes viewing experiences as assets filled with positivity and optimism, particularly valuable in an often dystopian-seeming world. Aligned with the UN’s Sustainable Development Goal 4, I have contributed to initiatives like Computers for Schools Burundi and TESOL “Train the Trainer” programs in Yemen and Morocco. As an educator in the Fourth Industrial Revolution era, I prepare future teachers by incorporating innovations in education to shape worldviews and cultivate an adaptable skillset for Volatile, Uncertain, Complex, Ambiguous (VUCA) environments. My research explores simulations for educators-in-training, AI in education and assessment, educational Metaverse applications, and educational transformation for language educators.

Through my pro-bono work, I support SDG target 4.c: “By 2030, substantially increase the supply of qualified teachers, including through international cooperation for teacher training in developing countries, especially least developed countries and small island developing States.”

Webinar: Mon, Mar 11, 2024, 4:00 PM – 4:30 PM GMT (Casablanca, Morocco)

Click the link below to register:

Structured AI Prompting Strategies for Language Educators

https://www.everyoneacademy.org/event-details/structured-ai-prompting-strategies-for-language-educators

An Overview: Generative AI Programs and ChatGPT Infographic by Dr. Jasmin (Bey) Cowin

One of the earliest examples of generative AI was the Markov chain, a statistical method developed by Russian mathematician Andrey Markov in the early 1900s. Markov chains are a “fairly common, and relatively simple, way to statistically model random processes. They have been used in many different domains, ranging from text generation to financial modeling. A popular example is r/SubredditSimulator, which uses Markov chains to automate the creation of content for an entire subreddit.” (Devin Soni)
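To make the idea concrete, here is a minimal sketch of Markov-chain text generation: the model records which word follows each word in a training text, then random-walks those transitions to emit new text. The tiny corpus here is my own toy example.

```python
import random
from collections import defaultdict

def build_chain(text: str, order: int = 1) -> dict:
    """Map each state (a tuple of `order` words) to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain: dict, length: int = 10, seed=None) -> str:
    """Random-walk the chain from a random starting state to emit new text."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        followers = chain.get(state)
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=42))
```

The output is fluent-looking only locally: each word plausibly follows its predecessor, but there is no global plan or meaning, which is exactly why Markov chains are a useful baseline for understanding what later generative models improved upon.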

The first successful generative AI algorithm was developed in the 1950s by computer scientist Arthur Samuel, who created the Samuel Checkers-Playing Program, an early example of a method now commonly used in artificial intelligence (AI) research: working in a complex yet understandable domain.

One of the early breakthroughs in generative AI was the development of Restricted Boltzmann Machines (RBMs). “It was invented in 1985 by Geoffrey Hinton, then a Professor at Carnegie Mellon University, and Terry Sejnowski, then a Professor at Johns Hopkins University.” RBMs are a type of neural network that can learn to represent complex data distributions and generate new data from that distribution. In 2014, a team of researchers at the University of Montreal introduced the Generative Adversarial Network (GAN) framework. As Jason Brownlee explains in A Gentle Introduction to Generative Adversarial Networks (GANs): “Generative modeling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset.”

Recently, generative AI and ChatGPT have been in the news, discussed at conferences, used by students, and feared by professors because they generate content that can be indistinguishable from that created by humans. Both Google’s BERT and GPT-3 are large language models and have been referred to as “stochastic parrots” because they produce convincing synthetic text devoid of any human-like comprehension. A “stochastic parrot” is, in the words of Bender, Gebru, and colleagues, “a system for randomly stitching together sequences of language forms” that have been seen in the training data “according to probabilistic knowledge about how they join, but without any reference to meaning.”

This infographic is an attempt to visualize the timeline of Generative AI Programs and ChatGPT.