“Education is not the filling of a pail, but the lighting of a fire.”
The quotation above is most often attributed to poet and Nobel laureate William Butler Yeats as the essence of education. Before any more educators or graduating seniors use it in their lectures or speeches, a caveat:
Scores of online searches link the quote to Yeats, but there is no definitive proof that Yeats ever said it. The Oxford Dictionary of Quotations, which has plenty of Yeats citations, has no pail, no fire. Neither does the American equivalent, Bartlett’s Familiar Quotations.
The earliest book to contain the alleged Yeats quotation was published approximately 50 years after Yeats’ death in 1939. Suffice it to say, despite his greatness, Yeats didn’t do much speaking or writing from beyond the grave.
The closest approximation to the quote comes from Greek philosopher Plutarch (AD 46-119) in his essay “On Listening,” first translated from the Greek into English in 1927. It reads: “For the correct analogy, the mind is not a vessel that needs filling, but wood that needs igniting — no more — then it motivates one towards originality and instills the desire for truth.”
We note this as a warning to those whose principal research source is Wikipedia and favorite speechwriting method involves ChatGPT. To quote vaunted New York Herald Tribune sportswriter Walter Wellesley “Red” Smith: “Writing is easy. All you have to do is sit down at the typewriter, cut open a vein, and bleed.”
We unabashedly acknowledge that our earliest years as a sportswriter were spent emulating Smith’s captivating, albeit clichéd, style, without resorting to his satirical extreme. We came to know, at last, that sportswriting, which covers all the news bases (current events, politics, cultural trends and lifestyle), is at once the most emotional, descriptive and memorable writing (stirring in the final scores) in all of journalism.
It was with disdain and chagrin, therefore, that we first learned in 2016 that chatbots from Satisfi Labs, Facebook Messenger and Sapiens were touting AI-generated sports stories and passing them off to the public. A blow to the heart, that was.
We were sure that “Red” was spinning in his grave. There were no flowery, flowing phrases, but the stats and the bare-bones play-by-play were there; good enough for reading once, maybe, but not good enough to clip and keep.
In the years since, we welcomed Siri into our homes to answer our every question, learned to create deepfake photos using generative adversarial networks, and watched as a robot was sent into space. More AI text-writing programs surfaced, and in 2022 ChatGPT — a natural language processing chatbot — was released. By October 2023, Forbes Advisor reported that 60% of educators in both high schools and colleges were using AI in their classrooms: in educational games, learning platforms, grading and feedback systems, lesson planning, test creation, student support and tutoring.
While it is too early to measure the blowback, if any, from this ascension, we should take care not to succumb too much, too fast.
Overwhelmed by a digital world
We can accept digital technology at home and in the workplace, to a point. We appreciate the advances in photography and can accept digital imagery and graphics as legitimate art forms; robotics in manufacturing; the convenience and peace of mind of home-based advances in protection and security. But, and this is a big one, we have difficulty assimilating an AI format into the classroom. We — whose great-grandparents, grandparents and parents were teachers, as were we — balk at the AI invasion. It’s hallowed ground.
Generative AI — writing tools like ChatGPT and Gemini that can create whole essays from a simple prompt — goes against our grain as a relatively new frontier, especially in classrooms. Some may think it’s novel to have a program deliver an entire piece of writing based on simple questions or requests. And while its ability to create truly original content isn’t that sophisticated just yet, it should still be an educational concern.
Here are a few other concerns:
Abuses — Deepfakes are pictures, videos, and audio files that look or sound like someone we know (a celebrity, a political figure, or even a family member). This can also include nude images or pornographic videos generated with AI and without a person’s consent. Kids are already using so-called “nudifying” apps to generate nude images of classmates.
Biases and misinformation — AI can only learn from its sources, so it takes on the biases, misinformation, and problematic content of the original material. And if the team of developers isn’t representative, it’s almost guaranteed that implicit bias will be woven into the framework of the tool, as facial recognition has illustrated.
Environmental impacts — Generative AI requires an enormous amount of energy and other resources, including fresh water as a cooling mechanism. In an age when we already have many climate concerns, the growing use of AI only adds to these issues.
Ethics — Because AI tools scrape content from a wide variety of sources, the material produced is a mixture of many other people’s work, and there’s often no consistent or complete credit for creators. Plus, data privacy is a murky and multi-layered issue when it comes to generative AI.
And finally, plagiarism — one of the biggest worries for teachers. Already, students are handing in AI-generated essays as their own.
Over time, AI is only going to get more powerful and further woven into everyday life, probably in more ways than we can currently expect, both positive and negative.
Another problem is that AI doesn’t possess genuine emotions or the ability to truly understand and respond to human feelings. The use of AI could diminish a significant portion of human interaction in the learning process, resulting in a loss of social skills and interpersonal development. To solve this, teachers should allocate more effort to identifying and responding to the emotional needs of their students. They can provide the support, encouragement, and guidance that will motivate students in a way that AI cannot.
AI technologies will surely create more jobs, such as AI specialists, data analysts, and software engineers. Roles that require human traits such as creativity, critical thinking, empathy, and complex problem-solving skills will remain in high demand. Furthermore, new professional classes are on the horizon, including strategic decision-makers, innovative thinkers, AI ethicists, and AI trainers. These roles will be pivotal in navigating the ethical, social, and strategic challenges of AI applications.
Final thoughts
There are reservations. As devotees of science-fiction literature, we can envision a time when these AI creations are encapsulated in a pseudo-human form to do the things that humans are loath to do. At a rudimentary level, some of those things are already being done. One such vision was set to print by Herbert Goldstone in a six-page short story, “Virtuoso,” published in 1953. In the story, set in the study of a classical pianist, the Maestro instructs his personal robot, Rollo, in how to play the piano. True to form, the robot assimilates the mechanics of music in a day, and by evening it asks whether it can continue to play while the head of the house goes to bed. In the middle of the night, the Maestro awakens to see the robot playing Beethoven’s “Appassionata” sans music. “Playing? He was creating it, breathing it, drawing it through silver flame. Time became meaningless, suspended in mid-air.”
The Maestro goes back to bed, planning a recital and a tour, with his robot as the star.
In the morning, however, the Maestro is crestfallen to hear that his robot will never approach the piano again. Rollo explains that while he was playing, he saw the Maestro cry; and it was inherent in his programming to never cause harm to any human.
To translate the notes to sound, for him, was easy. But, “this ... music is not for robots. It is for man. To me, it is easy, yes ... it was not meant to be easy.”
A point to ponder, n’est-ce pas?