The San Juan Daily Star

At this school, computer science class now includes critiquing chatbots


Marisa Shuman with students at the Young Women’s Leadership School of the Bronx, in New York, Jan. 19, 2023. Shuman generated a lesson plan using ChatGPT, the new chatbot that can create clear prose using artificial intelligence, to examine its potential usefulness and pitfalls and to get her students to evaluate its effectiveness and think critically about AI.

By Natasha Singer


Marisa Shuman’s computer science class at the Young Women’s Leadership School of the Bronx in New York City began as usual on a recent January morning.


Just after 11:30, energetic 11th and 12th graders bounded into the classroom, settled down at communal study tables and pulled out their laptops. Then they turned to the front of the room, eyeing a whiteboard where Shuman had posted a question on wearable technology, the topic of that day’s class.


For the first time in her decadelong teaching career, Shuman had not written any of the lesson plan. She had generated the class material using ChatGPT, a new chatbot that relies on artificial intelligence to deliver written responses to questions in clear prose. Shuman was using the algorithm-generated lesson to examine the chatbot’s potential usefulness and pitfalls with her students.


“I don’t care if you learn anything about wearable technology today,” Shuman said to her students. “We are evaluating ChatGPT. Your goal is to identify whether the lesson is effective or ineffective.”


Across the United States, universities and school districts are scrambling to get a handle on new chatbots that can generate humanlike texts and images. But while many are rushing to ban ChatGPT to try to prevent its use as a cheating aid, teachers such as Shuman are leveraging the innovations to spur more critical classroom thinking. They are encouraging their students to question the hype around rapidly evolving AI tools and consider the technologies’ potential side effects.


The aim, these educators say, is to train the next generation of technology creators and consumers in “critical computing.” That is an analytical approach in which understanding how to critique computer algorithms is as important as — or more important than — knowing how to program computers.


New York City Public Schools, the nation’s largest district, serving about 900,000 students, is training a cohort of computer science teachers to help their students identify AI biases and potential risks. Lessons include discussions on defective facial recognition algorithms that can be much more accurate in identifying white faces than darker-skinned faces.


In Illinois, Florida, New York and Virginia, some middle school science and humanities teachers are using an AI literacy curriculum developed by researchers at the Scheller Teacher Education Program at the Massachusetts Institute of Technology. One lesson asks students to consider the ethics of powerful AI systems, known as “generative adversarial networks,” that can be used to produce fake media content, such as realistic videos in which well-known politicians mouth phrases they never actually said.


With generative AI technologies proliferating, educators and researchers say understanding such computer algorithms is a crucial skill that students will need to navigate daily life and participate in civics and society.


To observe how some educators are encouraging their students to scrutinize AI technologies, I recently spent two days visiting classes at the Young Women’s Leadership School of the Bronx, a public middle and high school for girls that is at the forefront of this trend.


The hulking, beige-brick school specializes in math, science and technology. It serves nearly 550 students, most of them Latina or Black.


It is by no means a typical public school. Teachers are encouraged to help their students become, as the school’s website puts it, “innovative” young women with the skills to complete college and “influence public attitudes, policies and laws to create a more socially just society.” The school also has an enviable four-year high school graduation rate of 98%, significantly higher than the average for New York City high schools.


As part of Shuman’s lesson, the 11th and 12th graders read news articles about how ChatGPT could be both useful and error-prone. They also read social media posts about how the chatbot could be prompted to generate texts promoting hate and violence.


But the students could not try ChatGPT in class themselves. The school district has blocked it over concerns that it could be used for cheating. So, the students asked Shuman to use the chatbot to create a lesson for the class as an experiment.


Shuman spent hours at home prompting the system to generate a lesson on wearable technology such as smartwatches. In response to her specific requests, ChatGPT produced a remarkably detailed 30-minute lesson plan — complete with a warmup discussion, readings on wearable technology, in-class exercises and a wrap-up discussion.


As the class period began, Shuman asked the students to spend 20 minutes following the scripted lesson, as if it were a real class on wearable technology. Then they would analyze ChatGPT’s effectiveness as a simulated teacher.


Huddled in small groups, students read aloud information the bot had generated on the conveniences, health benefits, brand names and market value of smartwatches and fitness trackers. There were groans as students read out ChatGPT’s anodyne sentences — “Examples of smart glasses include Google Glass Enterprise 2” — that they said sounded like marketing copy or rave product reviews.


“It reminded me of fourth grade,” said Jayda Arias, 18. “It was very bland.”


The class found the lesson stultifying compared with those by Shuman, a charismatic teacher who creates course materials for her specific students, asks them provocative questions and comes up with relevant, real-world examples on the fly.


“The only effective part of this lesson is that it’s straightforward,” Alexania Echevarria, 17, said of the ChatGPT material.


“ChatGPT seems to love wearable technology,” noted Alia Goddess Burke, 17, another student. “It’s biased!”


Shuman was offering a lesson that went beyond learning to identify AI bias. She was using ChatGPT to give her pupils a message that AI was not inevitable and that the young women had the insights to challenge it.


“Should your teachers be using ChatGPT?” Shuman asked toward the end of the lesson.


The students’ answer was a resounding “No!” At least for now.
