While it is relatively simple to create a pros and cons list for using generative AI as a tool from the student's point of view, there is a more nuanced discussion to be had surrounding generative tools. Remember, too, that chatbots are just ONE of the AI tools that are out there. There are many other ways that AI is being used, and students are exploring them. We encourage you to try them out for yourself as you have these discussions.
This section is meant to provide you with ideas and prompts to promote discussion in your classroom.
Does generative AI help to close the education gap that students from disparate socioeconomic backgrounds might have coming into a college classroom? Some make the argument that academics are leery of ChatGPT because of its "ability" to provide free educational information and to "remove the middle man," as it were. As a leery academic myself, I think this is a valuable discussion to have.
How do you (the student) think ChatGPT and other generative AI tools help you learn?
Do you think it is ethical to use image-generating AI that has been trained on other artists' work without their permission? Midjourney and other image-based generative tools are facing lawsuits and congressional action for using thousands of artists' works to train their models without permission, compensation, or attribution.
Is using AI plagiarism? Does using ChatGPT or generative AI violate academic integrity? What is the difference between the two? Many students understand academic integrity policies as being primarily about cheating and plagiarism, but how does the idea of a chatbot complicate that understanding?
Where does the information from ChatGPT and other generative AI come from? Can it be verified? Is it trustworthy? A chatbot can only replicate and produce text based on the information its algorithm is trained on. In practice, this means AI is most often trained on the cheapest material available; in the case of the internet, that means it CANNOT access pay-walled sources like high-end journalism and has no access to databases of scholarly journals and other academic sources. If the model can only learn from non-academic, non-journalistic, free resources (forums, blogs, etc.), how accurate or educational can its responses be?
What potential bias can responses from ChatGPT contain? How does it develop these biases? Bias within the learning algorithms of ChatGPT and other bots has been well documented. Check out the recommended reading section for more information.
Have you noticed how every company now advertises that it is using AI? Or that services that explicitly are not AI are called AI by journalists and others online? Does this create confusion? Has the word "technology" simply become synonymous with AI?
