Library Research Support

Generative AI in research

We support the responsible use of AI that enables a sustainable research ecosystem. When choosing to move from data-driven to AI-driven workflows, researchers should take a balanced approach that weighs both the advantages and the risks of AI systems. According to our SHU policy, artificial intelligence is capable of generating human-like text, images, or other media using models trained on existing data. Guided by prompt engineering, generative AI tools create new outputs in different forms: text, images, audio, video, software code and datasets. Machine learning technology is based on patterns recognised autonomously from training data, with algorithms trained to match the respective input (Guideline for dealing with Artificial Intelligence).

[Image: different forms of AI outputs]

When using generative AI in your research, remember to be specific when writing prompts: better input means better output. Do not put confidential information, or information that could identify individuals, into an insecure system. With any unprocessed research data, consider the risks and consequences of unknown third parties being able to access it and use it for unrestricted purposes. Our SHU policy confirms that public AI services (e.g. ChatGPT Plus) typically retain input data for further training and may use it in responses to future requests by any user. To read more about data protection and AI, please visit the Governance, Legal and Sector Regulation SharePoint site.

[Image: considerations for input data]

Also, remember that AI models are trained on data that carries human bias. Using AI requires a particular skillset: pay attention to new features and tools as we navigate a rapidly changing technological environment. Responsible use of AI also means being open and ethical.

[Image: recommendations for your AI data curation process]

Ensure transparency in your research. It must be clear which content was generated using AI tools (which tools, how, and when). Reproducibility is central to research integrity, so even when AI plays only a supporting role in your work, you still need to acknowledge its use. When citing AI, please provide the names of the tools you used, how you used them, and the dates of use. In terms of authorship, AI tools cannot take responsibility for the submitted work, so they cannot be listed as authors or co-authors when disseminating your research findings. Provide that human layer to make your research successful. Before publishing, double-check your final output for hallucinations (when generative AI tools provide incorrect or made-up information) and bias (generative AI tools are trained on data from the internet, which is not always fair or balanced). Verify it against known, reliable information sources.

[Image: recommendations for your AI output]

Generative AI tools generate content from user-entered prompts. At Sheffield Hallam, we believe these tools are transformative and disruptive, which is why they are likely to feature in many academic and professional workplaces. Rather than prohibiting their use, we want to support staff and students in using them effectively, ethically, and transparently. Watch the ChatGPT and Academic Integrity video for more guidance.

Public and private AI services:

[Image: public and private AI systems]

[Image: public AI systems]

Public AI services are available to users on the public internet, whether free or paid for (e.g. ChatGPT Plus). These services typically retain input data for further training and may use it in responses to future requests by any user. To comply with the General Data Protection Regulation, the Data Protection Act, and other applicable laws and regulations, please check our Staff Guidance and Help SharePoint page. Do not upload any personal or confidential data to public AI services, and complete a Data Protection Impact Assessment (DPIA) and an Equality Impact Assessment (EIA) where required. Also, please consider the potential impact of AI on all stakeholders and users.

[Image: private AI systems]

Private AI services are restricted to particular groups of users, which protects data from being shared with users outside the organisation, and so they are safer to use. Microsoft Copilot is an approved private generative AI system for our University users, but users must be logged in with University credentials and should see the Enterprise Data Protection indicator in the chat window.