
Library Research Support

Using GenAI in your research

We support the responsible use of AI to help build a sustainable research community. As researchers shift from data-driven to AI-driven methods, it is important to strike a balance, weighing the benefits of AI tools against their risks. Take some time to read SHU's AI policy to understand more about artificial intelligence and how it can create human-like text, images, videos and other content using models trained on existing data. Guided by prompt engineering, generative AI tools can produce many types of new output, including text, images, audio, video, computer code and datasets. These tools rely on machine learning: they automatically learn to recognise patterns in data, and those patterns let the underlying algorithms respond to new inputs in useful ways.


Generative AI tools create content—like text, images, or code—based on the prompts you type in. At Sheffield Hallam, we see these tools as both transformative and disruptive, meaning they’re changing the way people work and learn. Because of this, they’ll likely become a big part of many academic and professional settings. Instead of restricting their use, we aim to help staff and students use Generative AI confidently, responsibly, and transparently. Watch the ChatGPT and Academic Integrity video for more guidance.

[Infographic: different types of output data]

The University of Birmingham has created an Evaluative Framework for AI Tools to help researchers make informed choices when deciding which AI tools to use in their work. They have also put together AI Tools Licensing Review Guidance to help researchers avoid unclear or risky licence terms. In addition, their Quick Review Checklist is open for everyone to use. It highlights important things to consider when using AI tools: their main features, intellectual property (IP) rights, effects on commercialisation, rules for sharing outputs, accessibility standards (WCAG), and more.


Explore the University of Birmingham's AI review resources linked below:

Generative AI in research

Use our checklist to learn more about the risks associated with GenAI

We’ve put together a checklist of useful questions to help you take a responsible and ethical approach when using Generative AI (GenAI) tools in your research. In the PDF document below, you’ll find our recommendations for each stage of your research project. We also discuss how to use this checklist in our Open Research podcast, and you can read the transcript in the Word document linked just below the image.

[Infographic: our GenAI checklist for researchers to follow an ethical approach]

Recommendations for your input data

  • When using GenAI in your research, remember to be specific with your prompts. Better input means better output! (See the sketch below this list for an illustration.)
  • Never enter confidential or personally identifiable information into an unsecured system.
  • Think about the risks if unprocessed research data could be accessed by unknown third parties and used without restriction.
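As a minimal illustration of the first two points above, here is a hedged Python sketch combining a specific prompt with a naive redaction step. The redact helper and its patterns are purely illustrative assumptions, not a SHU-approved anonymisation tool; real research data needs a proper data-protection review before it goes anywhere near a public AI service.

    import re

    def redact(text: str) -> str:
        """Naively mask common identifiers before text goes to a public AI tool.
        Illustrative only: real anonymisation needs far more than two regexes."""
        text = re.sub(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", "[EMAIL]", text)  # email addresses
        text = re.sub(r"\b0\d{2,4}[ -]?\d{6,8}\b", "[PHONE]", text)      # UK-style phone numbers
        return text

    # A specific prompt states the task, the focus and the required output format.
    prompt = (
        "Summarise the interview excerpt below in three bullet points, "
        "focusing on attitudes to data sharing.\n\nExcerpt: "
        + redact("Contact Dr Smith at j.smith@example.ac.uk or 0114 2255555 to follow up.")
    )
    print(prompt)  # review exactly what would be submitted before pasting it anywhere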

Our SHU policy confirms that public AI services (e.g. ChatGPT Plus) typically retain the data you input for further training and may use it in responses to future requests by any user. To read more about data protection and AI, please visit the SharePoint site from Governance, Legal and Sector Regulation.

[Infographic: recommendations for your input data]

Recommendations when using GenAI

Remember that GenAI models are trained on data that reflects human bias. Using GenAI well requires a certain skillset: pay attention to new features, tools and terms and conditions (T&Cs) as we navigate a rapidly changing technological environment. Responsible use of GenAI also means being open and ethical.

[Infographic: recommendations for data modelling]

Recommendations for your output data

  • Ensure transparency in your research. It must be clear which content was generated using AI tools: which tools, how and when. Reproducibility is central to research integrity, so even when AI plays only a supporting role, you still need to acknowledge it.
  • When citing AI, provide the name of the tools you used, how you used them, and the date of use (see the example below this list).
  • In terms of authorship, AI tools cannot take responsibility for the submitted work, so they cannot be listed as authors or co-authors when you disseminate your research findings.
  • Provide that human layer: review, interpret and take ownership of everything an AI tool helps you produce.
  • Double-check your final output for hallucinations (when GenAI tools provide incorrect or made-up information) and bias (GenAI tools are trained on data from the internet, which is not always fair or balanced) before publishing. Verify it against known information sources.
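For illustration, an acknowledgement along these lines would cover the tool, the use and the date (the tool, task and date below are hypothetical): "Microsoft Copilot was used on 3 June 2025 to draft summaries of published literature; all AI-generated text was checked against the original sources and edited by the authors."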

[Infographic: recommendations for your output data]

Public AI systems - pros and cons

Public AI services are tools that anyone on the internet can use — some are free, while others are paid, like ChatGPT Plus. However, it’s important to know that these services often save the information you enter and may use it to train their systems. This means your data could influence or appear in responses given to other users in the future. 


Why is it safer to use private AI systems?

Private AI services are only available to certain groups of users. This setup helps keep your data safe, as it isn’t shared with people outside the organisation. At our University, Microsoft Copilot is an approved Private Generative AI tool. To use it safely, make sure you’re logged in with your University account and that the chat window shows the Enterprise Data Protection indicator.