Trustworthiness/Reliability:
While many of these tools draw on vast amounts of reliable data, their black-box nature can make it difficult to verify the sources of the information they generate. When researchers cannot verify the sources of a tool's output, or the data it was trained on, it becomes difficult to rely on the accuracy of that output.
Generative AI is NOT a research database:
While these tools can be used for discovery, they are not a scholarly literature database nor are they a search engine. In some cases these products will have been trained using data from scholarly resources, but unlike a discovery platform which will point you directly to the source literature, generative AI tools are designed to "generate" their own content or output.
Privacy, Bias, and Other Limitations:
While some products and tools are open source, others are commercial and commonly capture information about their users. Biases in these products derive from the data sets used to train them: generative AI tools commonly reproduce biases inherent in their training data, which can perpetuate both harm and misinformation.
Generative AI tools continue to evolve and change, with new products continually being created and introduced. We can anticipate that some existing tools and functionalities will be discontinued, while others will develop new enhancements and versions.
As different generative AI products are trained on different large language models, the data ranges and accuracy of the included information will vary from tool to tool. When searching for timely or current information, an AI tool will only be as good as the dates of coverage included in the data set it was trained on.
AI tools for research can help you to discover new sources for your literature review or research assignment. These tools will synthesize information from large databases of scholarly output with the aim of finding the most relevant articles and saving researchers' time. As with our research databases or any other search tool, however, it's important not to rely on one tool for all of your research, as you will risk missing important information on your topic of interest.
NOTE: The resources described in the table represent an incomplete list of tools specifically geared towards exploring and synthesizing research. As generative AI becomes more integrated in online search tools, even the very early stages of research and topic development could incorporate AI. If you have any questions about using these tools for your research, please contact a librarian.
AI-Powered Research Tools

NAME | WHAT IT DOES | UNDERLYING DATA | IS IT FREE? | MORE INFORMATION
---|---|---|---|---
Connected Papers | Like Research Rabbit (see below), Connected Papers focuses on the relationships between research papers to find similar research. You can also use Connected Papers to get a visual overview of an academic field. | Semantic Scholar database | Free with paid subscriptions available. | Connected Papers - About |
Consensus | Consensus uses large language models (LLMs) to help researchers find and synthesize answers to research questions, focusing on the scholarly authors' findings and claims in each paper. | Semantic Scholar database | Free with paid subscriptions available. | Consensus FAQs |
Elicit | Elicit uses LLMs to find papers relevant to your topic by searching through papers and citations and extracting and synthesizing key information. | Semantic Scholar database | Free with paid subscriptions available. | Elicit FAQs |
Keenious | Keenious is a recommendation tool for academic articles and topics based on papers you upload. | OpenAlex | Free with paid subscriptions available. | Keenious Help File |
Research Rabbit | Research Rabbit is a citation-based mapping tool that focuses on the relationships between research works. It uses visualizations to help researchers find similar papers and other researchers in their field. | OpenAlex, Semantic Scholar, and other databases | Research Rabbit is currently free. | Research Rabbit FAQs |
scite | scite has a suite of products that help researchers develop their topics, find papers, and search citations in context (describing whether the article provides supporting or contrasting evidence). | Many different sources (an incomplete list can be found on this page) | No. See pricing information. | scite FAQs; how scite works |
Scholarcy | Scholarcy summarizes key points and claims of articles into 'summary cards' that researchers can read, share, and annotate when compiling research on a given topic. | Scholarcy only uses research papers uploaded or linked by the researcher themselves. It works as a way to help you read and summarize your research, but is not a search engine. | Free with paid subscriptions available. | Scholarcy FAQs |
Semantic Scholar | Semantic Scholar (which supplies underlying data for many of the other tools on this list) provides brief summaries ('TLDR's) of the main objectives and results of papers. | Semantic Scholar database | Semantic Scholar is currently free. | Semantic Scholar FAQs |
Undermind | An AI research assistant that works with you to refine your research question and find relevant papers. | Semantic Scholar database | Free with paid subscriptions available. | Undermind FAQs (scroll down for FAQs) |
Table from Georgetown University
This content is evolving and subject to change. Last updated: December 20, 2024
There remains significant legal uncertainty with the use of Generative AI tools and copyright. This is an evolving area and our understanding will develop as new policies, regulations, and case law become settled.
If you would like to use GenAI tools for content generation, consider the following before doing so:
Generative AI tools have introduced new challenges in academic integrity, particularly related to plagiarism.
Plagiarism is typically defined as presenting someone else's work or ideas as one's own. While a generative AI tool might not qualify as a "someone," using text generated by an AI tool without citation is still considered plagiarism, because the work is still not the researcher's own. Individual policies for using and crediting GenAI tools may vary from class to class, so checking the syllabus and getting clear guidance from the professor is important.
A note about plagiarism detection tools:
A number of AI detection tools are currently available to publishers and institutions, but there are concerns about low rates of accuracy and false accusations. Because generative AI tools do not generate large amounts of text word-for-word from existing works, it can be difficult for automated tools to detect plagiarism. Georgetown does not currently use the AI detection feature of its plagiarism detection tool, Turnitin.
Another area of academic integrity affected by GenAI tools is false citations.
Providing false citations in research, whether intentional or unintentional, violates Malone's Academic Integrity** policy. GenAI tools such as ChatGPT have been known to generate false citations, and even when a citation points to a real paper, the tool's description of that paper's content may still be inaccurate.
** Malone Catalog - Section on “Integrity - Academic”
There are also multiple privacy concerns associated with the use of generative AI tools. The most prominent issues revolve around breaches of personal or sensitive data and the risk of re-identification. Most AI-powered language models, including ChatGPT, require users to input large amounts of data in order to be trained and to generate new information effectively. As a result, personal or sensitive user-submitted data can become part of the material used to further train the AI without the user's explicit consent. Moreover, some generative AI policies even permit developers to profit from this personal or sensitive information by selling it to third parties. Even when a user enters no clearly identifying personal information, using the system carries a risk of re-identification, as the submitted data may contain patterns that allow the generated information to be linked back to an individual or entity.
Given these issues, extensive downloading of Library materials to build AI training corpora is prohibited. Additionally, some Library content providers prohibit any amount of their content being used with AI tools.
This work is licensed under a Creative Commons Attribution NonCommercial 4.0 International License.