Generative AI is a type of artificial intelligence that can learn from and mimic large amounts of data to create content such as text, images, music, videos, code, and more, based on inputs or prompts.
Examples of Generative AI include Microsoft's Copilot, OpenAI's ChatGPT, and Google's Gemini. Generative AI can also be embedded in services and tools such as Otter.ai, Grammarly, library databases, and more.
When exploring and using Generative AI tools, there are important considerations to keep in mind, including adherence to and awareness of the following:
Students are responsible for reading and following The Chicago School's University Community Norms and Standards policies, including the Academic Ethics, Integrity, and Responsibility Policy, which outlines clear and specific guidelines for students and faculty on the ethical use of artificial intelligence. The policy is posted in the Student Handbook in the Student Rights & Responsibilities section.
In general, students are expected to use AI tools responsibly and to be transparent about their use. Students should disclose, and when applicable properly attribute, the use of AI tools in learning, research, and scholarship activities.
Students should consult with their instructors and their syllabi to clarify expectations regarding the use of AI tools in a given course. Allegations of unauthorized use of AI will be treated similarly to allegations of unauthorized assistance (cheating) or plagiarism.
Avoid entering confidential or proprietary work, or your own or others' personal information, into publicly available AI tools. The information could be used to train the large language model and could be inadvertently conveyed to others.
Be sure to follow applicable state and federal laws, including, but not limited to, the Family Educational Rights and Privacy Act (FERPA), the Health Insurance Portability and Accountability Act (HIPAA), and the Gramm-Leach-Bliley Act (GLBA).
If using an AI tool, students, faculty, and staff are encouraged to use the enterprise version of Microsoft 365 Copilot. Copilot chat does NOT remember the history of what you have previously asked, and any data that you submit is not available to Microsoft and is not used to train the model. Copilot also does not have access to your Microsoft 365 data such as your email, OneDrive files, Teams data, etc. This version of Copilot enables faculty, staff, and students to experiment with a more secure generative AI tool compared to consumer versions such as ChatGPT, Google Bard, Bing Chat, and similar tools.
Note: To ensure these security and privacy protections are enabled, you need to access Copilot with your Chicago School credentials.
Use of AI comes with a number of ethical considerations, including:
Academic integrity refers to maintaining a standard of honest and ethical behavior in all types of academic work. In a rapidly changing AI landscape, understanding the limitations and potentials of generative AI is essential in fostering academic integrity and effective learning.
For more on how to use Generative AI with integrity in your writing and school work, see:
Generative AI models learn from vast amounts of data, which can be biased or contain existing societal prejudices. If these biases are not adequately addressed during the training process, AI-generated content may perpetuate and reinforce discriminatory or unfair practices. Even if specific biased resources are excluded from the model, the overall training material could underrepresent different groups and perspectives. This can have negative consequences, such as reinforcing stereotypes or excluding marginalized perspectives. Generative AI tools such as ChatGPT have been documented producing output that is socio-politically biased and that occasionally contains sexist, racist, or otherwise offensive information.
AI tools have been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead. Referred to as "deep fakes," these materials can be utilized to subvert democratic processes and are thus particularly dangerous.
Generative AI tools have the potential to create text, images, and other content that may infringe upon intellectual property rights. Some generative AI tools were developed using content scraped from the internet. There is a lack of transparency about what exactly is included in some models, but there are allegations that they include pirated content, such as books, images not licensed by their creators for reuse, and web content that was intended to support its creators through advertising revenue.
Most Generative AI tools collect and store data about users. The ways different AI systems use data or information entered by users are also not always transparent.
Example: The ChatGPT privacy policy states that this data can be shared with third-party vendors, law enforcement, affiliates, and other users. While you can request to have your ChatGPT account deleted, the prompts that you input into ChatGPT cannot be deleted.
The training and use of generative AI requires very large amounts of computing power, which has huge implications for greenhouse gas emissions and climate change. There are also environmental costs associated with storing the outputs created. Read: Generative AI’s environmental impact to learn more.
Some worker communities involved in developing AI tools have been exploited. These employees, often referred to as "invisible workers" or "ghost workers," range from those who annotate and label training data to those who test and refine the algorithms and models, among other tasks.
Generative AI has the potential to significantly amplify existing inequalities in society and contribute to the digital divide. This can arise from the creation or exacerbation of disparities in access to resources, tools, skills and opportunities. Those who can afford access to the premium AI tools and services will have an advantage over those who can’t.
The following outlines several known risks and limitations associated with the use of Generative AI tools. This list is not exhaustive; as the technology continues to evolve, additional concerns may emerge or become better understood.
Generative AI tools are prone to producing incorrect or entirely fabricated content. This includes false citations, misrepresented publications, inaccurate biographical details, and other types of information commonly used in academic research. Importantly, these inaccuracies are often presented with a tone of authority, making them difficult to detect without careful scrutiny.
For more information on how to fact-check Generative AI outputs, including using techniques such as lateral reading, see the Library's Guide on AI Literacy.
While AI tools can generate coherent responses, they may struggle with understanding the broader context or intent behind a question or request. They primarily rely on the patterns in the training data and may not possess real-world knowledge or common sense reasoning. This can lead to inaccurate or irrelevant responses in certain situations.
While AI tools can automate the process of searching for relevant literature, their search capabilities may not be as comprehensive or flexible as those of human researchers. AI tools typically rely on predefined algorithms or databases, which may not cover all possible sources or alternative perspectives. Human researchers often employ a more diverse and creative approach to literature search.
The automation of tasks requiring judgment and discernment can diminish individual agency and critical thinking. Overreliance on AI-generated content may also introduce or perpetuate biases in decision-making, often without detection. In an educational context, excessive use of AI-generated content might limit students' opportunities for active engagement, exploration, and scholarship.