An Overview: Generative AI Programs and ChatGPT Infographic by Dr. Jasmin (Bey) Cowin

One of the earliest examples of generative AI was the Markov chain, a statistical method developed by the Russian mathematician Andrey Markov in the early 1900s. As Devin Soni explains, Markov chains are a "fairly common, and relatively simple, way to statistically model random processes. They have been used in many different domains, ranging from text generation to financial modeling. A popular example is r/SubredditSimulator, which uses Markov chains to automate the creation of content for an entire subreddit."
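
To make the mechanism concrete, here is a minimal sketch of word-level Markov-chain text generation in Python. The tiny corpus and the helper names build_chain and generate are illustrative assumptions for this example, not part of any particular library.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word (or word tuple) to the words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Walk the chain, sampling each next word from its observed successors."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        successors = chain.get(tuple(output[-len(key):]))
        if not successors:
            break
        output.append(random.choice(successors))
    return " ".join(output)

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog and the dog saw the cat")
print(generate(build_chain(corpus)))
```

Each generated word is sampled only from the words that followed the current word in the training text, which is the kind of probabilistic stitching that projects like r/SubredditSimulator scale up to whole posts.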

Another early milestone came in the 1950s, when computer scientist Arthur Samuel created the Samuel Checkers-Playing Program, one of the first successful self-learning programs and an early example of a method now commonly used in artificial intelligence (AI) research: working in a domain that is complex yet understandable.

One of the early breakthroughs in generative AI was the development of Restricted Boltzmann Machines (RBMs). "It was invented in 1985 by Geoffrey Hinton, then a Professor at Carnegie Mellon University, and Terry Sejnowski, then a Professor at Johns Hopkins University." RBMs are a type of neural network that can learn to represent complex data distributions and generate new data based on those distributions. In 2014, Ian Goodfellow and colleagues at the Université de Montréal introduced the Generative Adversarial Network (GAN) framework. As Jason Brownlee writes in A Gentle Introduction to Generative Adversarial Networks (GANs), "Generative modeling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset."
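
To illustrate the adversarial idea, the following is a toy sketch (assuming PyTorch is installed) in which a small generator learns to mimic samples from a one-dimensional Gaussian while a discriminator learns to tell real samples from generated ones. The network sizes, hyperparameters, and target distribution here are invented for illustration and do not reproduce the original 2014 architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # Samples from the "true" data distribution: a Gaussian with mean 4, std 1.25.
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator maps random noise to candidate samples; discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train D to label real samples as 1 and generated samples as 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train G to fool D: generated samples should be scored as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

sample = G(torch.randn(1000, 8))
print(f"generated mean={sample.mean():.2f}, std={sample.std():.2f} (target: 4.00, 1.25)")
```

After training, the generator's outputs should approximate the target distribution, even though it never sees real data directly; it learns only from the discriminator's feedback, which is the core of the adversarial framework.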

Recently, generative AI and ChatGPT have been in the news, discussed at conferences, used by students, and feared by professors because they can generate content that is indistinguishable from content created by humans. Both Google's BERT and OpenAI's GPT-3 are large language models, and both have been referred to as "stochastic parrots" because they produce convincing synthetic text devoid of any human-like comprehension. A "stochastic parrot" is, in the words of Bender, Gebru, and colleagues, "a system for randomly stitching together sequences of language forms" that have been seen in the training data "according to probabilistic knowledge about how they join, but without any reference to meaning."
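
The "probabilistic stitching" in that description can be reduced to a toy example: given invented probabilities for which word follows the prompt "the cat", a sampler picks a continuation purely by weight. The vocabulary and probabilities below are made up for illustration and are not drawn from any real model.

```python
import random

# Invented next-word distribution for the prompt "the cat".
next_word_probs = {"sat": 0.5, "ran": 0.3, "meowed": 0.2}

# Sample the next word in proportion to its probability: no understanding,
# only probabilistic knowledge about how words tend to join.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print("the cat", random.choices(words, weights=weights, k=1)[0])
```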

This infographic is an attempt to visualize the timeline of Generative AI Programs and ChatGPT.