In academic and other contexts, generative AI can help with tasks such as:
Please note: This list is not exhaustive; it is intended to suggest some potential benefits of using generative AI.
Some limitations or drawbacks of using generative AI may include:
Sometimes, generative AI tools like ChatGPT may generate fictitious information presented as factual or accurate. This can include citations, publications, biographical information, and other information commonly used in research and academic papers. These fabrications are sometimes referred to as hallucinations or confabulations.
While AI tools can generate coherent responses, they may struggle to understand the broader context or intent behind a question or request. They rely primarily on patterns in their training data and may not possess real-world knowledge or common-sense reasoning. This can lead to inaccurate or irrelevant responses in certain situations. See the tab on "Creating Effective Prompts" for strategies to mitigate this limitation.
While AI tools can automate the process of searching for relevant literature, their search capabilities may not be as comprehensive or flexible as those of human researchers. AI tools typically rely on predefined algorithms or databases, which may not cover all possible sources or alternative perspectives. Human researchers often employ a more diverse and creative approach to literature search.
Many generative AI tools (including the free version of ChatGPT) are trained on data with cutoff dates, resulting in answers that may not be up-to-date, or exclude current information and events. In some cases, the data cutoff date is not made clear to the user.
Over-reliance on generative AI tools in education could potentially diminish the importance of human interaction and creativity. Education is not solely about information delivery but also about fostering critical thinking, collaboration, and personal expression. Excessive use of AI-generated content might limit students' opportunities for active engagement, exploration, and scholarship as conversation.
Note: In addition to many of the known limitations outlined here, generative AI may be prone to problems yet to be discovered or not fully understood.
While generative AI tools bring numerous benefits, using them raises a range of ethical considerations, including potential bias, misinformation, academic integrity concerns, data privacy, possible copyright infringement, inequitable access to technology, and more.
Academic integrity refers to maintaining a standard of honest and ethical behavior in all types of academic work. In a rapidly changing AI landscape, understanding the limitations and potentials of generative AI is essential in fostering academic integrity and effective learning.
See The Chicago School Office of Writing and Learning webpage on Using AI with Integrity for more information on academic integrity while using Generative AI tools.
Generative AI models learn from vast amounts of data, which can be biased or contain existing societal prejudices. If these biases are not adequately addressed during the training process, AI-generated content may perpetuate and reinforce discriminatory or unfair practices. Even if specific biased resources are excluded from a model, the overall training material may underrepresent certain groups and perspectives. This can have negative consequences, such as reinforcing stereotypes or excluding marginalized perspectives. Generative AI tools like ChatGPT have been documented producing output that is socio-politically biased, occasionally even sexist, racist, or otherwise offensive.
AI tools have been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead. Referred to as "deep fakes," these materials can be utilized to subvert democratic processes and are thus particularly dangerous.
Generative AI tools have the potential to create text, images, and other content that may infringe upon intellectual property rights.
Development of some generative AI tools relied on content scraped from the internet. There is a lack of transparency about what exactly is included in some models, but there are allegations that they contain pirated content, such as books, images not licensed by their creators for reuse, and internet content that was intended to support its creators through advertising revenue.
Most generative AI tools collect and store data about users. The ways different AI systems use the data or information users input are also not always transparent.
Examples:
The ChatGPT privacy policy states that this data can be shared with third-party vendors, law enforcement, affiliates, and other users.
While you can request to have your ChatGPT account deleted, the prompts that you input into ChatGPT cannot be deleted.
The training and use of generative AI require very large amounts of computing power, which has significant implications for greenhouse gas emissions and climate change. There are also environmental costs associated with storing the outputs created.
Read: Measuring the environmental impacts of artificial intelligence compute and applications
As noted in Time magazine's article "150 African Workers for ChatGPT, TikTok and Facebook Vote to Unionize at Landmark Nairobi Meeting," some worker communities involved in developing AI tools have been exploited. These employees, often described as "invisible workers" or "ghost workers," range from those who train, annotate, or label the data to those who enhance and test the algorithms and models, among other tasks.
Generative AI has the potential to significantly amplify existing inequalities in society and contribute to the digital divide. This can arise from the creation or exacerbation of disparities in access to resources, tools, skills, and opportunities. Those who can afford access to premium AI tools and services will have an advantage over those who cannot.