
ChatGPT: this generative AI tool is useful for producing text and can assist with topic and outline development, research organization, and writing improvement.
Consensus: a free AI-powered academic search engine that finds papers in the Semantic Scholar database and synthesizes information related to your topic from the retrieved items.
Copilot: this Microsoft AI tool can help with basic writing assistance, synthesizing information, and task management.
Elicit: similar to Consensus, Elicit’s free version provides limited searching of the Semantic Scholar database, including one-sentence summaries of retrieved articles (a sketch of querying this database directly appears after this list).
Gemini: this Google AI tool can help with basic writing tasks, synthesizing information, and organizing your work.
Google NotebookLM: this Google AI research assistant, powered by Gemini, allows users to upload and find sources, brainstorm ideas, and generate outlines for research topics.
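Both Consensus and Elicit search the Semantic Scholar database, which also offers a free public API. As a rough illustration of what those tools do behind the scenes, the Python sketch below queries that database directly. The topic string is only an example, and the endpoint and field names should be checked against the current Semantic Scholar API documentation before relying on this.

```python
import requests

# A minimal sketch of searching the public Semantic Scholar Graph API,
# the same database that tools like Consensus and Elicit draw on.
URL = "https://api.semanticscholar.org/graph/v1/paper/search"

params = {
    "query": "information literacy and generative AI",  # example topic only
    "fields": "title,year,abstract",  # which metadata fields to return
    "limit": 5,  # number of results
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()

# The API returns a JSON object whose "data" key holds the matching papers.
for paper in response.json().get("data", []):
    print(paper.get("year"), "-", paper.get("title"))
```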
Before relying on an AI tool to support your work, take time to consider the questions and answers below.
Artificial intelligence (AI) tools like ChatGPT and other generative technologies are changing the way people interact with, create, and evaluate information. These AI tools can be used to brainstorm ideas and topics, summarize sources, generate citations, and more. But they also raise important questions about accuracy, transparency, authorship, and ethical use.
Academic uses of AI tools include assistance with brainstorming topics, summarizing sources, organizing research, and improving your writing.
Understanding the role of AI in today’s information environment is essential for becoming a thoughtful, informed researcher. This page will help you navigate that landscape and become a responsible user of AI. Remember that it can be a helpful tool but that it should NEVER be a replacement for your voice or mind.
On this page, you'll find guidance on what AI tools are and how they work, how AI-generated content differs from traditional information sources, the limitations of these tools, and when and how to acknowledge their use.
IMPORTANT: The Q&A subsection below is illustrative of the types of content you can generate in ChatGPT; the answers were copied over from chats that librarians had with ChatGPT on May 21, 2025. After ChatGPT generated the content based on our detailed prompts, revisions were made to better capture the librarians' voices and perspectives.
Artificial Intelligence (AI) tools are software applications designed to perform tasks that typically require human intelligence. These tasks include understanding natural language, recognizing patterns, generating content, and making decisions. Examples of AI tools include chatbots, language models, image recognition systems, and recommendation engines.
When a user provides information to an AI tool—such as typing a question or uploading data—the tool processes this input through complex algorithms and models. Many AI tools, especially those based on machine learning, analyze patterns in large datasets they were trained on to generate responses or perform actions. For instance, language models interpret the user's text by breaking it down into components like words or phrases and then predicting the most relevant or coherent output based on their training.
It is important to understand that AI tools do not “know” information like a human does; instead, they rely on statistical associations and patterns learned from vast amounts of data. Because of this, their responses may sometimes lack context or accuracy and should be used thoughtfully as a complement—not a substitute—to critical thinking and verified information sources.
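To make the idea of "statistical associations" concrete, here is a deliberately tiny Python sketch: a bigram word predictor that counts which word follows which in a toy corpus and then predicts the most frequent continuation. Real language models are enormously more sophisticated; everything in this sketch (the corpus, the function names) is our own simplified illustration, not code from any actual AI tool.

```python
from collections import Counter, defaultdict

# Toy "training corpus": the model only ever learns from this text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` during training."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"  # the model has no pattern to draw on
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # 'cat': the most frequent continuation
print(predict_next("sat"))  # 'on'
```

Notice that the predictor never "knows" anything about cats or mats; it only repeats the statistical patterns in its training data, which is why its output can be fluent yet wrong when the data is limited or skewed.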
| Aspect | AI-Generated Content | Traditional Information Sources |
|---|---|---|
| Authorship | Created by AI algorithms without direct human authorship | Written by experts, researchers, or journalists |
| Review Process | Typically lacks formal peer review or editorial checks | Undergoes editorial or peer review to ensure accuracy and credibility |
| Source of Information | Synthesized from patterns in training data, often without clear citations | Based on original research, primary data, or vetted facts |
| Consistency | Can produce inconsistent or contextually inaccurate content | Generally reliable and fact-checked |
| Purpose | Often used for drafting, summarizing, or generating ideas quickly | Intended to inform, educate, or report with verified accuracy |
AI tools can provide useful and efficient content, but they have important limitations that users should be aware of. One major limitation is bias: AI systems learn from large datasets that may contain biased, outdated, or unrepresentative information, which can lead to biased or unfair outputs. Additionally, AI lacks true contextual understanding and may misinterpret nuances, sarcasm, or cultural context, resulting in inaccurate or inappropriate responses. AI-generated content can also contain errors or “hallucinations” — plausible-sounding but factually incorrect or misleading information.
Another limitation is the lack of source transparency; AI typically does not provide clear citations or references, making it difficult to verify the credibility or origin of its outputs. Ethical and privacy concerns also arise, as the use of AI involves issues around data privacy, consent, and the potential misuse of generated content. Finally, it’s important to remember that AI tools cannot replace human judgment, critical thinking, or ethical decision-making, all of which are essential in scholarly and professional work.
Because of these limitations, it is crucial to critically evaluate AI-generated content and cross-check it against reliable, authoritative sources.
AI tools are not considered scholarly sources because they do not produce original, peer-reviewed research or verified expert analysis. Instead, AI generates content by analyzing and synthesizing existing information from a variety of sources, often without clear citations or transparency about where the information comes from. Scholarly sources undergo rigorous evaluation by experts through peer review, ensuring the accuracy, reliability, and credibility of the information presented. In contrast, AI outputs may contain errors, biases, or unverified claims and lack the accountability and scrutiny expected of academic publications. Therefore, while AI tools can assist with research, they should not be cited as authoritative scholarly sources themselves.
Content created by AI tools is not considered a primary source because it does not present original, firsthand evidence or direct documentation of an event, experiment, or phenomenon. Primary sources are original materials created at the time of an event or by someone with direct experience, such as research data, interviews, official records, or creative works. AI content, on the other hand, is a secondary or tertiary interpretation based on patterns learned from a large dataset, not an original account or direct evidence.
People should approach content from AI tools with caution. While AI can generate useful summaries and ideas quickly, it can also produce errors, outdated information, or biased content. Because AI tools do not provide clear sources or original research, their outputs lack the transparency and accountability required for scholarly work. Therefore, it is generally not recommended to cite AI-generated content as authoritative evidence in academic or professional contexts. Instead, users should verify any information from AI tools by consulting credible, primary, or scholarly sources before trusting or citing it.
Yes, if people use AI tools to generate content—such as an outline, a sentence, or ideas—they should acknowledge that assistance. Citing AI tools helps maintain transparency about the sources and methods used in creating the work. It also gives proper credit to the technology that contributed to the process and allows readers or instructors to understand how the content was developed. While AI itself isn’t considered a scholarly source, recognizing its role promotes academic honesty and ethical use of emerging technologies.
If you’re the one generating the bulk of the content and you’re using an AI tool like ChatGPT primarily for suggestions, edits, or improved clarity, that assistance generally does not require formal citation. Think of it like asking a peer or tutor for feedback: unless the AI is creating substantive content or original ideas that you directly incorporate, you’re still the primary author.
However, it’s good practice to be transparent if your institution or publisher requires it, or if the AI’s role was significant enough to influence the final work. Some instructors or publishers may want disclosure about AI assistance even if it’s just editorial. So, check any specific guidelines you’re working with.
In summary:
Minor help (suggestions, edits): Usually no citation needed.
Substantial content creation by AI: Citation or acknowledgment recommended.
When in doubt: Disclose use for transparency.
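If you do need to cite or acknowledge an AI tool, follow the citation style required by your instructor or publisher. As one illustration, APA style treats the tool's developer as the author and the tool itself as a large language model; a ChatGPT entry might look roughly like the following (the date and version here are placeholders drawn from APA's published example, so verify the format against the latest APA guidance):

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat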
It’s important to think of AI as a tool because, like any tool, it can enhance human abilities but doesn’t replace critical thinking or expertise. AI assists by processing large amounts of information quickly, generating ideas, or helping with routine tasks. However, it depends on human guidance to interpret results, check for accuracy, and make informed decisions. Viewing AI as a tool encourages responsible use—leveraging its strengths while recognizing its limitations—rather than blindly trusting its outputs. This mindset helps maintain the integrity and quality of research and information use.