
Library Research Support

Generative AI in research

Using GenAI in your research

We support responsible use of AI that enables a sustainable research ecosystem. When choosing to move from data-driven to AI-driven workflows, researchers should follow a balanced approach in which both the advantages and the risks of AI systems are taken into consideration. Get familiar with our SHU policy to learn more. Generative AI is artificial intelligence capable of generating human-like text, images, or other media, using models trained on existing data. Generative AI tools create new outputs in different forms: text, images, audio, video, software code and datasets. The underlying machine learning technology recognises patterns autonomously in training data, and algorithms are trained to match the respective input (Guideline for dealing with Artificial Intelligence).

different forms of GenAI outputs

Recommendations for your input data

When using GenAI in your research, please remember to be specific when writing prompts. Better input means better output! Do not put any confidential information, or information that can identify individuals, into an insecure system. With any unprocessed research data, you should consider the risks and consequences of unknown third parties being able to access it and use it for unrestricted purposes. Our SHU policy confirms that public AI services (e.g. ChatGPT Plus) typically retain input data for further training and may use it in responses to future requests by any user. To read more about data protection and AI, please visit the SharePoint site from Governance, Legal and Sector Regulation.

our recommendations for your input data

Recommendations when using GenAI

Remember that GenAI models are trained on data containing human bias. Using GenAI requires a certain skillset - pay attention to new features, tools and T&Cs as we navigate our way through a rapidly changing technological environment. Responsible use of GenAI also means being open and ethical.

our recommendations for your GenAI model

Recommendations for your output data

Ensure transparency in your research. It must be clear which content was generated using AI tools (which tools, how, and when). Reproducibility is always central to research integrity. Even when AI plays only a supporting role, you still need to acknowledge it. When citing AI, please provide the names of the tools you used, how you used them, and the dates of use. In terms of authorship, AI tools cannot take responsibility for the submitted work, therefore they cannot be listed as authors or co-authors when disseminating your research findings. Provide that human layer to make your research successful. Before publishing, double-check your final output for hallucinations (when GenAI tools provide incorrect or made-up information) and bias (GenAI tools are trained on data from the internet, which is not always fair or balanced). Verify it against known, reliable information sources.

our recommendations for your output data

Public AI systems - pros and cons

Generative AI tools generate content from user-entered prompts. At Sheffield Hallam, we believe these tools are transformative and disruptive, which is why they are likely to feature in many academic and professional workplaces. Rather than prohibit their use, we want to support staff and students in using them effectively, ethically, and transparently. Watch the ChatGPT and Academic Integrity video for more guidance.

Public AI services are available to users on the public internet, whether free or paid for (e.g. ChatGPT Plus). These services typically retain input data for further training and may use it in responses to future requests by any user. To comply with the General Data Protection Regulation, the Data Protection Act, and other applicable laws and regulations, please check our Staff Guidance and Help SharePoint page. Do not upload any personal or confidential data to public AI services, and complete a Data Protection Impact Assessment (DPIA) and an Equality Impact Assessment (EIA) where required. Also, please consider the potential impact of AI on all stakeholders and users.

our advice when using public AI systems

Why is it safer to use private AI systems?

Private AI services are restricted to particular groups of users, which prevents data from being shared with users outside the organisation, making them safer to use. Microsoft Copilot is an approved private generative AI system for our University users, but they must be logged in with University credentials and see the Enterprise Data Protection indicator in the chat window.

our advice when using private AI systems

Use our checklist to learn more about the risks associated with GenAI

We have created a checklist with a set of questions to help you follow a responsible and ethical approach when engaging with GenAI tools in your research. In the PDF document attached below you will find our recommendations for each stage of your research project. We also discuss the use of this checklist in our Open Research podcast - listen to our episode called "Use GenAI responsibly in your research".