Starting Line | 2023
Understanding artificial intelligence: use it, don’t abuse it.

Ryan Davis | Student Life
This year’s freshman class will have the unique opportunity of being the first to go through college with artificial intelligence programs at their fingertips for all four years. As the school year begins, AI is only going to increase its presence in academic conversations. It’s important — not just for the integrity of our education, but for the sake of our future — that we continue these conversations with an informed understanding of AI: its history, how it works, and ways to use it without abusing it.
So, what is AI, other than the one thing that everybody’s talking about and that nobody fully understands? Why are worries increasing across the globe about AI redefining sentience, stifling creativity, and fueling false information?
The discourse on AI dates as far back as the 1940s. Alan Turing helped design the Bombe, an electromechanical machine used to crack the German Enigma code and help the Allies win World War II. He understood that sifting through the astronomical number of possible Enigma settings each day was only feasible with a machine that could do something no human was capable of.
While Turing’s wartime work changed the world, his theoretical contributions are less remembered, at least by those not actively studying computability theory. In his 1950 paper “Computing Machinery and Intelligence,” Turing proposed a hypothetical measure of intelligence in machines. The Turing test assesses how well a machine can imitate a human in conversation. While no machine has convincingly passed this test, what was a thought experiment to Turing appears to be approaching actualization.
ChatGPT is not the first artificial intelligence chatbot to exist. In 1966, there was ELIZA; in 1972, there was PARRY; in 2001, there was SmarterChild. These are just a few, and let’s not forget about our good friends Siri, Google Now, and Alexa. Like Turing’s codebreaking machine, AI chatbots sift through enormous spaces of possibilities; but instead of rotor settings, they are trained on massive text datasets and use statistical models to predict, word by word, the next most likely word to complete a human-like sentence.
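The idea of predicting the next most likely word can be sketched with a toy bigram model. This is a drastic simplification, not how ChatGPT or other large language models are actually implemented, and the corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; real chatbots train on vastly larger text datasets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Modern chatbots replace these simple counts with neural networks holding billions of parameters, but the core task is the same: given the words so far, guess what comes next.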
As James Bridle explains in their book “Ways of Being,” machines are not the only form of non-human intelligence. Bridle uses corporations as a prime example. A corporation can own land, have a bank account, abide by laws, and act in the interest of its own system, yet it exists outside of any one human being.
In fact, Turing machines (abstract mathematical models of computation, essentially idealized, very basic computers) can be found in many parts of the world beyond electronic devices, including social systems and networks of the natural world. ChatGPT, like every other intelligent computer or network, is computationally nothing more than a bunch of very powerful Turing machines.
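What a Turing machine actually is can be shown in a few lines: a tape of symbols, a read/write head, a current state, and a table of rules. The machine below, a made-up example for illustration, flips every bit on its tape and halts when it reaches a blank:

```python
# A minimal Turing machine: tape, head, state, and a transition table.
# The "flip" program inverts every bit, then halts at the blank symbol "_".

def run_turing_machine(tape):
    rules = {
        # (state, symbol) -> (symbol to write, head movement, next state)
        ("flip", "0"): ("1", 1, "flip"),
        ("flip", "1"): ("0", 1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    cells = list(tape) + ["_"]
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run_turing_machine("0110"))  # prints "1001"
```

Anything a modern computer can compute, a sufficiently elaborate table of rules like this one can compute too; the difference between this toy and ChatGPT is one of scale, not of kind.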
Turing machines are far from sentient. Anyone who has read “Klara and the Sun” or watched “M3GAN” may fear this looming question of AI becoming self-aware. And while theorists may argue that our human brains also work in a series of algorithmic, DNA-encoded networks not entirely dissimilar to AI, sentience is not a realistic concern in scientific conversations. ChatGPT is just another form of computer mimicry. Its frequent mistakes wouldn’t allow it to pass a Turing test. And even if it did pass, AI, operating on computers, offers just one way of viewing the world.
Computers themselves see everything as data; at bottom, everything they process is encoded in ones and zeroes. They do not, in themselves, create answers. Rather, they offer alternative ways of addressing complex problems. Theoretically, AI sentience, and questions of AI rights, could become a concern years into the future, given that these systems are built to mimic humans, and that horizon is far enough out that almost anything is on the table. For now, though, that’s left for the science fiction writers to imagine. Currently, AI exists as nothing more than computational machinery.
The large cultural shift that emerged with the release of ChatGPT is arguably more telling of the fascination and fear that humans have with intelligent machines than it is of the machines’ self-sufficient capabilities.
Ethical questions about the future and the meaning of intelligence seem to have seeped into every living person’s mind in some capacity. People around the world have been using AI for a wide range of purposes. ChatGPT’s standard model alone can create computer code in various languages; write believable letters, songs, and poems; and help brainstorm solutions to different questions and prompts. Its more complex models have higher-functioning capabilities.
AI is growing in the visual world as well, with programs like DALL·E and Adobe Firefly. In addition to conversations about models being trained on artists’ work scraped from the internet without permission, the very concept of creativity has also been called into question.
People are typically satisfied when their fireplace works better than rubbing their hands together really fast, or when a plane works better than walking, or even when they have to select the easiest mode so that the computer doesn’t beat them in every game of online chess. But when a machine starts to exhibit signs of creativity — producing paintings and pictures and prose — most people aren’t too sure how to feel about it.
But is AI redefining creativity? Or is it simply shining a light on the same concerns artists have faced for centuries?
One way of addressing this uneasiness of AI art is by comparing it to a similar dilemma faced generations ago, posed by the development of the photographic camera. The camera, at first, seemed to take away artistic credibility; the process of creating was being done entirely by a machine. Charles Baudelaire wrote in 1859, “If photography is allowed to supplement art in some of its functions, it will soon supplant or corrupt it altogether, thanks to the stupidity of the multitude which is its natural ally.”
However, with artists like Henry Peach Robinson and, eventually, Alfred Stieglitz and Dorothea Lange, photography became increasingly respected as an art.
Perhaps AI is like the camera. It is not something that is here to replace artists, but rather a tool that can be used in new ways — maybe even in ways we can’t yet fully imagine. The current landscape of AI images is oversaturated with meaningless, computer-generated work. Maybe AI artists will emerge in the same way professional photographers emerged in the past.
An alternative, or possibly additional, way to address the concern of AI art is with the hope that physical, hand-crafted work will continue to be valued. Even with photographs, much of what culture deems valuable has to do with the quality of tools, the concept, the effort, the printing, the detail, and the craftsmanship that go into the process of creating and showcasing the work. History proved Baudelaire’s anti-modernity sentiments wrong, with artists in the generations after him, and even today, finding fame and success as analogue artists.
It can be reasonably expected that anything completed with quality and care — drawings, paintings, photo-collages, animations, and, in some cases, AI-generated work, among many other creations — will continue to be recognized. While the tools may be changing, the values which traditionally define “good” art are not changing with them.
While sentience and the death of creativity are not yet valid concerns, there are certainly things to be feared about the future of AI. Computers, like every other invention, are created to help human beings. But, like every other invention, they can also be abused. It requires unrealistic optimism to think that everyone will use AI as a tool to assist human creativity and solve difficult problems for the betterment of humanity.
Cheating, deception, war, and oppression are unfortunately also human qualities, ones which have overtaken imaginative inventions very quickly in the past; there’s a reason World War I emerged right after the Second Industrial Revolution. AI can be used by governments for serious harm, and as it becomes embedded in web browsers and search, it can obscure where reliable information comes from. The creators of ChatGPT have themselves called for a governing body designated to restrict and oversee the use of AI.
No human or machine is capable of changing the past or knowing the future. But what we can do is know the past and change the future. We can approach technological development not with fear and hysteria, but with an informed knowledge of its context, history, and potential for both crippling damage and unimaginable change.
Technology, when overused, has consequences; too much social media can lead to phone addiction, and too much reliance on a camera can ruin the experience of being present. Too much reliance on AI can lead to an expectation for answers and a lack of the intellectual curiosity and struggle necessary to learn and grow.
This year’s freshman class will be entering college with AI tools more powerful than any class before it has had. Abusing this opportunity will lead to a worse educational experience. Even if a professor is fooled — which is unlikely, considering that ChatGPT writes essays at the level of a third-grader — the student’s own development of knowledge will be diminished. And art students who rely on AI rather than using it as a form of assistance will deprive themselves of their own creativity and style.
AI is not just an art supply, chat assistant, and propaganda machine; it is a classmate, learning and growing alongside us as students. How we use it and train it will define the device that it becomes and the society that we become. The future of AI can be marvelous if we recognize it as an experiment and a tool — one that can boost productivity and provide insight into our ongoing definitions of intelligence. On the other hand, we can suppress our values and allow AI to overtake honesty and morality. The future is not a matter of robots destroying humans but of whether or not we humans choose to destroy ourselves.