Weekly Reflection #3

One of the most interesting and concerning things about generative AI tools like ChatGPT is that they can sound confident and correct while presenting false or made-up information. I have an example of a time I experienced this firsthand:
Last year, I was in an English Literature class and had to write an essay about a novel we had read for the course. I read the novel twice over the course of the class, so I was quite familiar with it before starting my essay. Midway through the essay, I needed an example of a character acting a certain way to strengthen one of my points. I had taken many notes on the novel, but none covered this specific situation. I asked ChatGPT for an example of that character acting that way in the novel, and what it told me was completely false. It referred to the character as another character's wife when she was really that character's mother, and the whole scene it described was not in the novel. When I told ChatGPT that its answer was untrue, it apologized and gave me a different answer that was also false. I repeated this several times, with the same result each time.

I took this class as an adult with a bachelor's degree and experience thinking critically about sources. For a student in middle or elementary school, it would be easy to take everything GenAI says as fact. It is important to teach students discernment with all types of people and media, but especially with AI.