This past June, over one hundred educational professionals from schools across Connecticut packed the Hopkins Dining Hall, eager to gain a better understanding of how developments in generative artificial intelligence (AI) would impact the future of education. The one-day conference, created through a partnership between Hopkins and the Connecticut Association of Independent Schools (CAIS), featured an eye-opening exploration into the enormous scope of this technology.
Not lost on anyone in attendance was the fact that just seven months prior to the conference, the term ChatGPT—with GPT standing for Generative Pre-trained Transformer—was not on most people’s radars. When the California firm OpenAI made the tool readily available to the mainstream in November 2022, its release was met with a mix of excitement and fear over its potential to disrupt many industries, including education. While many lauded the tool for its potential to make life easier for anyone needing to write or design content from scratch, some experts were quick to downplay its credibility and to point out ethical concerns.
Hopkins School Takes Notice
Following the keynote address of the June conference, led by educator and AI specialist Dr. Alec Couros, many voiced concerns about how this technology would affect learning outside of the classroom, such as homework assignments or essays. While some schools immediately banned ChatGPT out of fear that it could lead to plagiarism that is too difficult to detect, Couros encouraged attendees to consider the ways it could enhance the learning experience for students and also serve as a way for teachers to design and shape lesson plans.
Head of School Matt Glendinning, who devoted the February edition of his newsletter Stepping Up to exploring these important issues, shared a similar sentiment to Couros when discussing the idea of banning ChatGPT altogether.
“Such strategies seem doomed to failure and are not worth our efforts,” wrote Glendinning. “Instead, we need to be proactive in finding ways to incorporate AI into our educational methods with the goal of enhancing student learning.”
While Glendinning was supportive of ChatGPT, he was also quick to implement safeguards to ensure that students were using it for the right reasons. He said the first step the School had taken was to communicate to students that turning in AI-written material would constitute a violation of its academic integrity policy, and it later devoted assembly time to emphasizing this point.
Taking a Deeper Look
Glendinning suggested that the best way to have a deeper discussion around ChatGPT was for everyone to first better understand its full capabilities. With this in mind, he convened a working group, featuring several teachers, to consider ways the tool could be used in the classroom, and later worked with CAIS to help create and host the June conference.
Many Hopkins teachers were immediately intrigued. Science teacher Jennifer Stauffer says she’s fascinated not only by the possibilities but also by the rapid improvements already seen since the launch.
“I’m trying to imagine what human teaching and learning will look like in the not-too-distant future and I’m nervous—but admittedly excited—that the way I create and implement lesson plans and engage with my students will look different from what I’m doing now, in good and better ways,” Stauffer said.
As examples, Stauffer pointed to generative AI’s ability to provide 24/7 tutoring, serve as a debate partner, and offer brainstorming and editing help on papers as promising ways to enhance the learning experience. Stauffer, however, was also quick to note the potential pitfalls, including academic dishonesty and bias in the models.
“The field and issues are wide and deep and the goal line seems to be constantly changing—it really is too much for any one person to get a handle on. Fortunately, we have each other,” Stauffer said.
On the student side of things, this past summer, Hopkins offered an in-person, pre-college, project-based artificial intelligence program through Inspirit AI, an AI education program developed and taught by Stanford and MIT graduates. The program ran for 10 sessions over 11 days. During the sessions, participating students from grades 8–12 learned the fundamental concepts of AI and discovered how AI is used to build ChatGPT and generative AI, fight the COVID-19 pandemic, power self-driving cars, and more. Students also learned to program AI using Python, discussed ethics and bias within AI, and completed a group project applying AI to a discipline such as healthcare, astronomy, or finance.
Teaching AI and Machine Learning with Robotics
Hopkins physics and robotics science teacher Lynn Connelly is bringing AI and Machine Learning to her classroom starting this school year. Drawing on her electrical engineering background, Connelly has designed a new Spring Term science elective focused on integrating advanced robotics projects with AI and Machine Learning. In this class, students will work with two robotic dogs from the company Unitree and teach them how to run, walk, climb stairs, and navigate their environment autonomously using control algorithms combined with reinforcement learning. Using pattern recognition and deep neural networks, students will also teach the dogs to detect and recognize dog breeds through computer vision. The robotics program at Hopkins has grown significantly over the past seven years, and this new advanced class will greatly enhance what robotics and AI offer to all students.
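The “recognizing patterns” idea at the heart of such a class can be illustrated with the simplest possible learner: a single artificial neuron (a perceptron) that learns to tell two clusters of points apart from examples. The sketch below is purely illustrative—it is not drawn from the course materials, and the data and function names are invented for this toy example.

```python
import random

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a line separating two clusters of 2D points.

    This is pattern recognition at its most basic: one neuron
    adjusting its weights each time it misclassifies an example.
    """
    w = [0.0, 0.0]  # weights, one per input coordinate
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            # Nudge the weights toward the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, point):
    """Classify a point with the learned weights: 0 or 1."""
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Two easily separable "patterns": points near (0, 0) vs. near (5, 5)
random.seed(0)
samples = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(20)]
samples += [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(20)]
labels = [0] * 20 + [1] * 20

w, b = train_perceptron(samples, labels)
```

Deep neural networks of the kind used for breed recognition stack many such units in layers, but the training loop above—predict, measure the error, nudge the weights—is the same basic recipe.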
“AI is really all about teaching computers how to recognize patterns that we, as humans, very easily do every day. I want my students to learn college-level robotics technologies and combine this with AI concepts and capabilities to prepare them for their future studies in science, robotics, and/or engineering,” said Connelly, who is currently working on her Master’s Degree in Robotics Engineering through the University of Michigan with a focus on AI and Machine Learning.
Just the Beginning
While attendees of the CAIS conference at Hopkins were there to learn how to potentially integrate, apply, and manage AI in their work, interactive workshops gave them an opportunity to have critical discussions about the pros and cons of reimagining traditional pedagogy and potentially altering the way students experience learning.
To close out the conference, Glendinning asked if there were any major takeaways from the event. Many voiced that they were inspired not only to continue learning but to share what insights they gained with colleagues.
Stauffer, who attended the conference, said the biggest challenges for educators, students, and other stakeholders in the educational community (such as advisers and parents), will be to “become knowledgeable around generative AI usage and ethics with a goal of eventual AI literacy,” as well as to “determine how to leverage AI in ways that complement and transform (rather than compete with) best teaching and learning practices.”
Although none of the major ethical issues associated with AI were solved at the conference on the Hill, one thing was clear: everyone in the room felt they were standing at the starting line of an entirely new road ahead.