Artificial Intelligence (AI) Tools and Resources

Guide to generative AI concepts & tools, considerations for use, and more.

Benefits of Using AI

In academic and other contexts, generative AI can help with tasks such as:

  • Brainstorming ideas
  • Generating keywords for searching in library databases
  • Explaining information in ways that are easy to understand
  • Summarizing and outlining
  • Translating text into different languages (though it is not fully fluent in every language)
  • Helping write or debug computing code
  • Creating study schedules and time management plans
  • Accessibility, by reading out text or providing explanations in alternative formats
  • Answering questions (be sure to fact-check results!)

Please note: This list is not exhaustive; rather, it is intended to suggest some potential benefits of using generative AI.

Potential Limitations of Using AI

Some limitations or drawbacks of using generative AI include:

  • Prone to Errors

Generative AI tools like ChatGPT sometimes generate fictitious information presented as factual or accurate. This can include citations, publications, biographical details, and other information commonly used in research and academic papers. These fabrications are sometimes referred to as hallucinations or confabulations.

  • Limited Context Comprehension

While AI tools can generate coherent responses, they may struggle to understand the broader context or intent behind a question or request. They rely primarily on patterns in their training data and may not possess real-world knowledge or common-sense reasoning, which can lead to inaccurate or irrelevant responses in certain situations. See the "Creating Effective Prompts" tab for ways to mitigate this limitation.

  • Limited Search Capabilities

While AI tools can automate the process of searching for relevant literature, their search capabilities may not be as comprehensive or flexible as those of human researchers. AI tools typically rely on predefined algorithms or databases, which may not cover all possible sources or alternative perspectives. Human researchers often employ a more diverse and creative approach to literature search.

  • Currency

Many generative AI tools (including the free version of ChatGPT) are trained on data with cutoff dates, so their answers may be out of date or exclude current information and events. In some cases, the data cutoff date is not made clear to the user.

  • Human Interaction and Creativity

Over-reliance on generative AI tools in education could potentially diminish the importance of human interaction and creativity. Education is not solely about information delivery but also about fostering critical thinking, collaboration, and personal expression. Excessive use of AI-generated content might limit students' opportunities for active engagement, exploration, and scholarship as conversation.

Note: In addition to the known limitations outlined here, generative AI may be prone to problems that have not yet been discovered or are not fully understood.

Ethical Considerations & Criticisms

While generative AI tools offer numerous benefits, their use raises a range of ethical considerations, including the potential for bias, misinformation, academic integrity concerns, data privacy issues, possible copyright infringement, inequitable access to technology, and more.

  • Academic Integrity

Academic integrity refers to maintaining a standard of honest and ethical behavior in all types of academic work. In a rapidly changing AI landscape, understanding the limitations and potentials of generative AI is essential in fostering academic integrity and effective learning.

See The Chicago School Office of Writing and Learning webpage on Using AI with Integrity for more information on maintaining academic integrity while using generative AI tools.

  • Bias

Generative AI models learn from vast amounts of data, which can be biased or contain existing societal prejudices. If these biases are not adequately addressed during training, AI-generated content may perpetuate and reinforce discriminatory or unfair practices. Even if specific biased sources are excluded from a model, the overall training material may still underrepresent certain groups and perspectives. This can have negative consequences, such as reinforcing stereotypes or excluding marginalized perspectives. Generative AI tools like ChatGPT have been documented producing output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive content.

  • Misinformation

AI tools have been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead. Referred to as "deepfakes," these materials can be used to subvert democratic processes and are therefore particularly dangerous.

  • Intellectual Property and Copyright

Generative AI tools have the potential to create text, images, and other content that may infringe upon intellectual property rights.

Some generative AI tools were developed using content gathered from the internet. There is a lack of transparency about exactly what is included in some models, but there are allegations that they include pirated content, such as books, images not licensed by their creators for reuse, and web content that was intended to support its creators through advertising revenue.

  • Privacy

Most generative AI tools collect and store data about their users. The ways different AI systems use the data or information users enter are also not always transparent.

Examples:

  • The ChatGPT privacy policy states that this data can be shared with third-party vendors, law enforcement, affiliates, and other users.

  • While you can request to have your ChatGPT account deleted, the prompts that you input into ChatGPT cannot be deleted.

  • Environmental Concerns

Training and using generative AI requires very large amounts of computing power, which has significant implications for greenhouse gas emissions and climate change. There are also environmental costs associated with storing the outputs created.

Read: Measuring the environmental impacts of artificial intelligence compute and applications

  • Labor Concerns

As noted in Time magazine's article "150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting," some worker communities involved in developing AI tools have been exploited. These employees, often referred to as "invisible workers" or "ghost workers," range from those who train, annotate, or label data to those who test and refine the algorithms or models, among other tasks.

  • Digital Equity

Generative AI has the potential to significantly amplify existing inequalities in society and contribute to the digital divide. This can arise from the creation or exacerbation of disparities in access to resources, tools, skills, and opportunities. Those who can afford access to premium AI tools and services will have an advantage over those who can't.