When my English class was surveyed on what troubled them the most about the future, the top thing on the list was artificial intelligence (AI). With the introduction of generative AI into our everyday lives, students have begun to realize how it could affect their futures. While using ChatGPT to write an essay may seem harmless, our fears about AI taking over the world become more realistic as AI seemingly grows more powerful by the day. So, is AI a tool that can be used to build our future, or could it be the very thing that destroys it?
How Does AI Work?
Artificial intelligence systems work by applying algorithms to data. First, massive amounts of data are collected and fed to algorithms or mathematical models, which use the information to detect patterns and make predictions from those patterns. This process is known as training. Once an algorithm is trained, it is deployed to handle various applications, where it is continuously trained to adapt to new data and complete more complex operations.
The primary algorithmic approach to building AI is machine learning. Machine learning uses statistical techniques to help an algorithm “learn” how to get progressively better at a task. It uses historical data as an input to predict new outputs. Machine learning consists of both supervised learning (where the expected output for the input is known thanks to labeled data sets) and unsupervised learning (where the expected outputs are unknown due to the use of unlabeled data sets).
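To make supervised learning concrete, here is a minimal sketch in Python: a model "learns" from labeled examples (inputs paired with known outputs) by repeatedly nudging its parameters to reduce its prediction error. The data and learning rate here are made up for illustration; real training sets are vastly larger.

```python
# A minimal sketch of supervised learning: fit a line y = m*x + b to
# labeled data with gradient descent. Each training step moves the
# parameters slightly in the direction that shrinks the squared error.

def train(data, lr=0.01, steps=2000):
    """Learn slope m and intercept b from (input, known output) pairs."""
    m, b = 0.0, 0.0
    for _ in range(steps):
        # Average gradient of the squared error over all labeled examples.
        grad_m = sum(2 * (m * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (m * x + b - y) for x, y in data) / len(data)
        m -= lr * grad_m  # step against the gradient
        b -= lr * grad_b
    return m, b

# Labeled data set: each input x comes with its expected output y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
m, b = train(data)
print(round(m, 2), round(b, 2))  # learned values land close to 2 and 1
```

Unsupervised learning would start from the same kind of data but without the known outputs, leaving the algorithm to find structure (such as clusters) on its own.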
Machine learning is typically done using neural networks, a series of algorithms that process data by mimicking the structure of the human brain. These networks consist of layers of interconnected “neurons” that process information and pass it along to other neurons.
By adjusting the strength of connections between these neurons, the network can learn to recognize complex patterns within data, make predictions based on new inputs, and even learn from its mistakes. This makes neural networks incredibly adept at recognizing images, understanding human speech, and translating words between languages. These types of AI models can also be incredibly helpful in tasks like detecting tumors from X-rays.
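The idea of neurons passing information through layers can be sketched in a few lines of Python. The weights below are hand-picked for illustration (training would normally find them automatically), but they show how stacking neurons lets a network represent a pattern, XOR, that no single neuron can.

```python
import math

# One "neuron": a weighted sum of its inputs passed through an activation.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the result to (0, 1)

# A tiny two-layer network: two hidden neurons feed one output neuron.
def network(inputs):
    h1 = neuron(inputs, [20, 20], -10)    # fires if either input is 1 (OR)
    h2 = neuron(inputs, [-20, -20], 30)   # fires unless both inputs are 1 (NAND)
    return neuron([h1, h2], [20, 20], -30)  # fires only if both hidden neurons fire

# With these hand-picked connection strengths the network computes XOR.
print(round(network([0, 0])), round(network([0, 1])),
      round(network([1, 0])), round(network([1, 1])))  # prints: 0 1 1 0
```

Adjusting those connection strengths from data, rather than by hand, is exactly what training does.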
ChatGPT, however, is a form of generative AI, which creates new data instead of only making predictions. An early example of generative AI is a much simpler model known as a Markov chain. In machine learning, Markov models have long been used for next-word prediction tasks, like the autocomplete function in an email program. In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a few previous words.
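A first-order Markov model of text is simple enough to fit in a few lines of Python. "Training" just counts which word follows which in a corpus; "generating" samples a next word in proportion to those counts. The toy corpus below is invented for illustration.

```python
import random
from collections import defaultdict

# A minimal first-order Markov model: the next word depends only on the
# current word. Training is nothing more than counting what followed what.
def train(text):
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def predict(model, word):
    # Sample a next word in proportion to how often it followed `word`.
    return random.choice(model[word])

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(predict(model, "the"))  # randomly "cat" or "mat"
```

Because "cat" follows "the" twice in the corpus and "mat" only once, the model predicts "cat" twice as often, which is the same frequency-based logic behind a simple autocomplete.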
The base models underlying ChatGPT and similar systems work in much the same way as a Markov model, but with one big difference: ChatGPT is far larger and more complex, with billions of parameters, and it has been trained on an enormous amount of data, in this case much of the publicly available text on the internet. While bigger datasets are one catalyst that led to the generative AI boom, a variety of major research advances also led to the evolution of more complex deep-learning architectures.
In 2014, a machine-learning architecture known as a generative adversarial network (GAN) was proposed by researchers at the University of Montreal. GANs use two models that work in tandem: One learns to generate a target output (like an image) and the other learns to discriminate true data from the generator’s output.
Diffusion models were introduced a year later by researchers at Stanford University and the University of California at Berkeley. By repeatedly refining their output, these models learn to generate new data samples that resemble samples in a training dataset, and have been used to generate realistic-looking images.
In 2017, researchers at Google introduced the transformer architecture, which has been used to develop large language models like those that power ChatGPT. A transformer encodes each word in a collection of text as a token, or unit of data, and then generates an attention map, which captures each token’s relationships with all the other tokens. This attention map helps the transformer understand context when it generates new text.
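The attention map itself can be sketched in plain Python: score every token against every other token, then normalize each row of scores into weights that sum to 1. The two-number vectors below are made up for illustration; real transformers use hundreds of learned dimensions per token.

```python
import math

# Each token is represented by a small vector (invented here for illustration).
tokens = ["the", "cat", "sat"]
vectors = {"the": [1.0, 0.0], "cat": [0.0, 1.0], "sat": [0.5, 0.5]}

def softmax(scores):
    # Turn raw scores into positive weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    return [e / sum(exps) for e in exps]

# The attention map: for each token, a normalized score against every token,
# measuring how strongly the pair is related.
attention = {}
for query in tokens:
    scores = [sum(a * b for a, b in zip(vectors[query], vectors[key]))
              for key in tokens]
    attention[query] = softmax(scores)

for token, row in attention.items():
    print(token, [round(w, 2) for w in row])
```

Each row of the map tells the model how much every other token matters when interpreting that token, which is how the transformer keeps track of context.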
The Effects of AI
Understanding the algorithms underlying AI and how they work helps us weigh whether the technology is more helpful than harmful.
AI can raise many red flags, starting with the possibility of worker displacement now that AI chatbots are used in call centers. In addition, generative AI can inherit and proliferate biases that exist in training data, or amplify hate speech and false statements. The models can also plagiarize or generate content that looks like it was produced by a specific human creator, raising potential copyright issues.
On the flip side, Devavrat Shah, a computer science professor at MIT, proposes that generative AI could empower artists, who could use generative tools to help them create content they might not otherwise have the means to produce. In the future, he sees generative AI changing the economics of many disciplines.
He believes that AI could bridge the communication gap between humans and machines.
“The highest value they have, in my mind, is to become this terrific interface to machines that are human friendly. Previously, humans had to talk to machines in the language of machines to make things happen. Now, this interface has figured out how to talk to both humans and machines,” Shah said when interviewed for an MIT article with Adam Zewe.
Phillip Isola, an associate professor of electrical engineering at MIT, sees one promising future direction for generative AI: its use for fabrication. Instead of having a model make an image of a chair, perhaps it could generate a plan for a chair that could be produced. He also sees future uses for generative AI systems in developing intelligent AI agents.
“There are differences in how these models work and how we think the human brain works, but I think there are also similarities. We can think and dream in our heads, to come up with interesting ideas or plans, and I think generative AI is one tool that will empower agents to do that,” Isola said to Adam Zewe.
Artificial intelligence doesn’t seem so scary once the underlying algorithms are understood. AI is simply making predictions from the data it is fed, and generative AI, like ChatGPT, creates new data from other data. AI can have both positive and negative ramifications, and the possibilities are endless. Ultimately, it is up to us to determine whether it will help or hurt our future.