
Online Learning and Digital Literacy

Welcome to the Generative AI Usage Guide for Students

In this guide, we’ll cover the basics of Generative Artificial Intelligence, how it’s used at our college, ethical considerations, and guidelines for using GenAI.

 

Note. Source: [Gen AI: LinkedIn], n.d.

What is Generative AI? 

Generative AI (GenAI) is a type of artificial intelligence that creates new content, such as text, images, and music, based on the data it has learned from. It uses neural networks, which are computer models inspired by the human brain, to learn patterns and make predictions. 

GenAI is trained on large datasets, and the quality and variety of this data are crucial for its performance—the better the data, the better the AI. It follows algorithms to generate new content, with machine learning models, especially deep learning models, at its core. Examples of GenAI include language models like ChatGPT, image generation tools like DALL-E, and design platforms like Canva. 
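
To make "learning patterns and predicting new content" a little more concrete, here is a minimal, optional sketch of a language model generating text. It assumes the Hugging Face transformers library and the small GPT-2 model, which are used purely for illustration and are not tools referenced elsewhere in this guide.

    # Minimal sketch: a pre-trained language model predicting likely next words.
    # Assumes the Hugging Face "transformers" library and the GPT-2 model,
    # both chosen only to illustrate how generative text models work.
    from transformers import pipeline, set_seed

    set_seed(42)  # make the sampled output repeatable
    generator = pipeline("text-generation", model="gpt2")  # load a pre-trained model

    # The model continues the prompt with words it judges likely,
    # based on patterns learned from its training data.
    result = generator("Generative AI can help students", max_new_tokens=25)
    print(result[0]["generated_text"])

The point of the sketch is simply that the model produces new text by predicting what is statistically likely to come next, which is also why its output can be fluent yet wrong.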

Generative AI at NorQuest College

We believe GenAI can enhance learning and creativity. Our college’s policy on using GenAI is available here: NorQuest Generative AI Policy

The policy aims to provide guidelines for the use of Generative AI by students, staff, and faculty, and applies to all forms of Generative AI, including text generation, image generation, and other AI-driven content creation tools. Users must adhere to ethical principles, ensuring that AI-generated content does not mislead, discriminate, or violate privacy.

Our Position at NorQuest: 

Generative Artificial Intelligence (GenAI) tools such as ChatGPT, Microsoft Copilot, and similar technologies are impacting the workplace and the learning environment. As educators, we need to help prepare students to critically engage with these technologies, including the appropriate and ethical use of GenAI tools. GenAI might not be appropriate in every context. That’s why it’s always important for instructors to discuss expectations with their class and for students to ask before using a GenAI system to assist them in completing their coursework.

Instructor Resources & Syllabus Statement*

* This resource is accessible only to NorQuest employees.

Ethical Use and Academic Integrity with Generative AI

Note. Image generated by the prompt "Warhol painting of avocado toast," by OpenAI, DALL-E 2, 2024.

We take academic integrity seriously. As technology evolves, we must consider how GenAI affects academic integrity. Academic integrity is the ideal standard of academic behavior; when students cheat or plagiarize their work, this is called academic misconduct. Learn more about academic misconduct.

For more information on AI and what to be aware of when using it, take this short course: AI and Mis-/Disinformation

Consider these ethical guidelines:

  • Review Your Syllabus: Check your course syllabus for any specific rules or guidelines related to using AI tools. Your instructors may provide instructions on acceptable usage.
  • Ask for Clarification: If you’re unsure about using GenAI for a particular assignment or project, always seek clarification from your instructor. They can provide guidance on whether it’s acceptable.  
  • Plagiarism: Attribute AI-generated content properly to avoid plagiarism.
  • Misrepresentation: Don’t present AI-generated work as entirely your own.
  • Bias Awareness: Be critical of potential biases in AI outputs.
  • Transparency: Clearly indicate when content is AI-assisted.
  • Privacy: Use AI tools in compliance with privacy regulations.

Benefits of GenAI

We’re exploring ways to use GenAI in coursework. Some of the ways GenAI is being used, or may be used, in coursework include:

  • Writing Assistance: GenAI tools help students generate ideas and improve writing by giving feedback.
  • Creative Projects: Tools like DALL-E let art and design students create digital artworks from descriptions.
  • Research Support: AI tools can help summarize research articles and generate citations.
  • Instant Feedback: GenAI provides quick feedback on assignments.
  • Data Analysis: It helps analyze large datasets in fields like data science and economics.
  • Assistive Technologies: GenAI can create tools to support students with disabilities, including speech-to-text applications, real-time captioning, and personalized learning aids.
  • Multilingual Support: AI can also help by translating educational materials into multiple languages. This promotes inclusivity and supports a diverse student body.

Limitations of Generative AI Use

Image of an iceberg: visible above the water are statistical/computational biases; below the water are human biases; deeper still are systemic biases.

Note. Source: (Hanacek, n.d.).

While GenAI is powerful, it has some limitations, especially in academic settings:

  • Inaccurate Outputs: GenAI can sometimes produce information that is entirely fabricated or incorrect, known as "hallucination," which can lead to misinformation. For example, when asked about a historical event, GenAI might provide a detailed but entirely incorrect narrative, such as claiming a fictitious battle occurred between historical figures who never met.
  • Bias in Data: GenAI’s quality depends on the training data; biases in data can lead to flawed information. For example, if trained on biased hiring data, GenAI might suggest predominantly male candidates for a tech job, reflecting existing gender biases in the industry.
  • Privacy: Anything uploaded into a Generative AI system could potentially be made public by the software. This means you should not upload proprietary, confidential, or personal information.
  • Hindrance to Skill Development: Over-reliance on GenAI can hinder personal, academic and work-related skill growth. For example, a college student uses GenAI to complete their calculus homework, copying answers without understanding the steps. During exams, they struggle because they haven't developed the necessary problem-solving skills, leading to poor performance and gaps in their education.
  • Lack of Common Sense: GenAI may generate technically correct but impractical responses because it lacks common-sense reasoning. For example, when asked how to cook a meal in 20 minutes, GenAI will not know what level of cooking skill the user has, what equipment they have available, or what ingredients they can afford.
  • Limited Contextual Understanding: It struggles with context, missing nuances and intended tones in conversations. For example, when asked how much to tip at a restaurant, GenAI may not know the quality of the meal, the quality of the service, or the local custom for tipping.
  • Emotional Intelligence Deficiency: It simulates emotions but doesn’t truly understand them. For example, when asked for advice on what questions to ask an employer in an interview, GenAI cannot predict how the employer will behave or what responses they will give during the interview that may make those questions inappropriate.

These limitations don’t make GenAI ineffective; they simply highlight that you, as the user, must review and think critically about the answers (or outputs) the technology gives you.

Before Using Generative AI

Before using Generative AI for your coursework, follow these steps:

Guiding Questions for Students

Did you ask your instructor about using GenAI to make sure it is approved?

Advice: It’s essential to consult your instructor to understand if using GenAI is permissible for your specific assignment or project. Each course may have different guidelines regarding AI usage. Using GenAI without approval could be considered academic misconduct or plagiarism, as it may involve presenting AI-generated work as your own without proper attribution.

Verify that the Generative AI tool you intend to use is approved by your instructor. Approved tools will adhere to college policies and meet ethical standards, ensuring they are suitable for academic use.

Are you clearly indicating that your content is AI-assisted?

Advice: Clearly state when and where you have used GenAI tools to generate or assist in creating content. Transparency is crucial to maintaining academic integrity. Learn how to cite GenAI in the MLA Citation Guide.

Have you checked the data sources and potential biases of the AI tool?

Advice: Data sources are the various origins of the data used to train, validate, and test AI models. The quality and diversity of these sources significantly affect a generative AI model’s performance and versatility. Investigate the data sources of the GenAI tool so you are aware of any biases; this understanding helps you critically assess outputs and ensures fair, unbiased use of AI-generated content. To check data sources, you may need to review the AI model’s documentation, cross-check references and citations against other sources (such as research papers), and examine the tool’s data collection methods.

Are you over-relying on GenAI tools?

Advice: Use GenAI tools to enhance your learning, not replace it. Over-reliance on AI can hinder your skill development and critical thinking abilities. It could also lead to academic integrity issues (plagiarism and cheating).

Have you considered the privacy implications of using GenAI tools?

Advice: Be mindful of privacy regulations and ensure that the use of GenAI tools does not violate any privacy norms, particularly when handling sensitive or personal information.

Glossary for Generative AI Usage Guide

Generative AI (GenAI): A type of artificial intelligence that creates new content, such as text, images, and music, based on the data it has learned from.

Neural Networks: Computer models inspired by the human brain that help machines learn patterns and make predictions based on data.

Training Data: The large sets of data (text, images, audio, etc.) that GenAI learns from to perform its tasks. The quality and diversity of this data affect how well the AI works.

Algorithms: The set of rules that guide how GenAI learns and creates new content. Examples include Backpropagation and Gradient Descent.
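
As a rough, optional illustration of what "gradient descent" means here: it is a procedure that repeatedly nudges a model’s values in the direction that reduces its error. The tiny sketch below uses a made-up, one-number error function purely for illustration; real GenAI models apply the same idea to billions of parameters.

    # Minimal sketch of gradient descent on a single, made-up parameter.
    def error(x):
        return (x - 3) ** 2          # error is smallest when x == 3

    def error_gradient(x):
        return 2 * (x - 3)           # slope of the error curve at x

    x = 0.0                          # initial guess
    learning_rate = 0.1              # size of each adjustment step

    for step in range(50):
        x -= learning_rate * error_gradient(x)   # step "downhill"

    print(round(x, 3))               # ends up very close to 3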

Models: The core of Generative AI, these are deep learning systems trained on large datasets to recognize patterns and generate new content.

Language Models: AI tools like ChatGPT that generate text based on the input they receive.

Canva: An online tool used for designing and creating graphics.

Academic Integrity: The ideal standard of academic behavior, ensuring honesty and fairness in academic work.

Academic Misconduct: Violation of academic integrity, including cheating and plagiarism.

Plagiarism: The act of presenting someone else’s work or ideas as your own without proper attribution.

Bias: Prejudice in AI outputs that occurs when the training data used has certain leanings or lacks diversity.

Hallucination (in AI): When AI generates information that is entirely fabricated or incorrect.

Transparency: Clearly indicating when content is AI-assisted to maintain honesty and integrity.

Privacy Regulations: Rules that protect personal information from being misused.

Data Sources: The origins of data used to train, validate, and test AI models. The quality and diversity of these sources are crucial for the AI's performance.

Assistive Technologies: Tools created by GenAI to support individuals with disabilities, such as speech-to-text applications and real-time captioning.

Multilingual Support: AI’s ability to translate educational materials into multiple languages, promoting inclusivity.

Ethical Principles: Guidelines ensuring that AI-generated content is not misleading, discriminatory, or in violation of privacy norms.

Critical Thinking: The ability to think clearly and rationally, understanding the logical connection between ideas.

Syllabus: An outline of the subjects in a course of study or teaching.

Guiding Questions: Questions meant to help students make informed decisions about using GenAI in their coursework.

Data Analysis: The use of AI to help analyze large datasets in various fields.

References

[Gen AI: LinkedIn]. (n.d.). [Image]. LinkedIn. Retrieved July 17, 2024, from https://media.licdn.com/dms/image/D5622AQGEJgoedYDrKw/feedshare-shrink_800/0/1719394058640?e=1722470400&v=beta&t=4x4P3RjYBymIaJ6Ri7BGJ3LtHrlVbdcAUpoJxzK0qlk 

Hanacek, N. (n.d.). AI bias iceberg [Image]. NIST. https://www.nist.gov/image/ai-bias-iceberg

OpenAI. (2023). DALL-E 2 [Text-to-image model]. https://labs.openai.com

YourHub4Tech. (2023). ChatGPT vs. Bard vs. Claude 2 vs. Perplexity [Image]. Medium. https://miro.medium.com/v2/resize:fit:720/format:webp/1*0G9yNwgBK_QBOR0ZTNWzyA.png

 

Acknowledgements

Note: This document was created by Nasif Hossain and the Emerging Technologies group at NorQuest College with the assistance of OpenAI's ChatGPT (version 4) and inspiration from the University of Alberta’s resource “Teaching in the Context of AI” and the article Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education.