Use Generative AI services safely

Gartner describes Generative AI as a capability that can “learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but doesn’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs”. ChatGPT is a generative AI service developed by OpenAI, based on a large language model (LLM); it uses deep learning techniques to generate a variety of content, including human-like text responses to a wide range of prompts.

At a glance

 Ensure a third-party security assessment has been completed before using any generative AI service. Suppliers that have already been assessed are listed on the Third Party Register (SSO required).

 Check the relevant agreements about third-party use of information supplied to the service.

 Only supply information classified up to a level for which the service has been assessed as suitable, and take particular care when processing personal data, as many of these services are not appropriate for this type of use.

 
AT OXFORD

Risk to confidentiality 

The main information security risk to the University in using generative AI cloud services is loss of confidentiality. The University has an information classification scheme with three levels of confidentiality: Public, Internal and Confidential.

Information classified as Public does not carry a confidentiality risk. Information classified as Internal or Confidential carries risk. Before any processing of Internal or Confidential information using generative AI services, the following steps must be taken to mitigate risk.  

1. As with all service providers holding or processing University information, information supplied to the tool in the form of questions or other artefacts is typically stored by the third-party service provider and is subject to threats from cyber criminals and other malicious actors, such as hostile nation states. Therefore, all cloud-based generative AI tools should be subject to a security risk assessment before being used. The Information Security GRC Team has a Third Party Security Assessment (TPSA) tool to help complete an assessment. It is generally not possible to complete a full assessment for free and open-source services, and in such cases they should not be used for Confidential information.

2. Information provided to generative AI services may be accessible by the service provider, its partners and sub-contractors, and is likely to be used in some way, such as to train AI models. This is particularly likely when the service is free to use. Check service agreements for conditions on usage and ownership; if these are not explicitly set out in an agreement, the service carries an unknown risk to confidentiality. If any personal data is processed using generative AI, and you do not opt out of the use of that data by the third party, this secondary processing must be considered in your data protection by design work. In particular, you must ensure that you have been transparent with those whose data may be fed into the generative AI model and alert them to any secondary processing that may occur.

Data Integrity 

It is important to check generated output, particularly if it is used to produce code or other sensitive output, as it may be false or misleading. One potential cyber risk is "poisoning" of AI training data to manipulate the behaviour of the model and cause it to produce malicious output. This is an emerging threat; we will continue to watch this and other evolving threats and update our advice accordingly.
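For code in particular, a few automated checks can catch obvious problems before a human review. The sketch below is a minimal illustration in Python: the generated string is a hypothetical stand-in for real model output, and the checks simply confirm the code parses and flag constructs worth a closer look. It is not a substitute for testing and human review.

```python
import ast

def basic_checks(source: str) -> list[str]:
    """Return a list of concerns found in AI-generated Python source."""
    concerns = []
    # 1. Does it even parse? Generated code is sometimes syntactically invalid.
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"does not parse: {exc}"]
    # 2. Flag imports and calls that warrant review before the code is run.
    suspect = {"os", "subprocess", "socket", "urllib", "requests"}
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name for alias in node.names]
            module = getattr(node, "module", None)  # set for "from X import Y"
            for name in names + ([module] if module else []):
                if name.split(".")[0] in suspect:
                    concerns.append(f"imports {name}: review before executing")
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"eval", "exec"}:
                concerns.append(f"calls {node.func.id}(): review before executing")
    return concerns

if __name__ == "__main__":
    generated = "import subprocess\nsubprocess.run(['ls'])\n"  # stand-in for model output
    for concern in basic_checks(generated):
        print("WARNING:", concern)
```

Checks like these only catch the obvious cases; nothing here detects subtly wrong logic, which is exactly what a poisoned or unreliable model is likely to produce.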

University-licensed generative AI tools

Two major generative AI tools, ChatGPT Edu and Copilot for Microsoft 365, are available for licensing by University departments and colleges via the University's AI and Machine Learning Competency Centre. The Information Security GRC Team has specific guidance on the confidentiality risks of using these tools under a Competency Centre licence, set out below. Our general guidance on the potential unreliability of outputs from generative AI tools, and the consequent data integrity risk, continues to apply to tools licensed via the Competency Centre.

ChatGPT Edu

ChatGPT Edu licensed via the Competency Centre has been approved by the Information Security team for processing Confidential University data. University data processed by ChatGPT Edu under a Competency Centre licence will not be used to train the AI model. However, any processing of personal data using ChatGPT Edu should still be discussed with the Information Compliance team.

Copilot for Microsoft 365

Copilot for Microsoft 365 has read access to all information accessible to a licensed user via their University Microsoft 365 account. Access permissions are often poorly managed, and this can go unnoticed; Copilot's ability to comb through large volumes of information increases the risk to the confidentiality of University data. Departments and colleges purchasing a Copilot for Microsoft 365 licence should review access permissions and issue advice on permissions management before implementing Copilot; a minimal scripted starting point for such a review is sketched below.
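Departments wanting to script part of that review could start from something like the following sketch. It is illustrative only: it assumes an Azure app registration with Files.Read.All application permission, and the tenant, client, secret and user values are hypothetical placeholders. It lists the top-level items in one user's OneDrive via Microsoft Graph and flags anything shared through anonymous or organisation-wide links.

```python
import msal
import requests

# Placeholders: substitute your own tenant, app registration and target user.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"
USER = "someone@example.ox.ac.uk"

# Acquire an app-only token for Microsoft Graph.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}
graph = "https://graph.microsoft.com/v1.0"

# Walk the top level of the user's OneDrive and flag broad sharing links.
# (A real review would also paginate and recurse into folders.)
items = requests.get(f"{graph}/users/{USER}/drive/root/children", headers=headers).json()
for item in items.get("value", []):
    perms = requests.get(
        f"{graph}/users/{USER}/drive/items/{item['id']}/permissions", headers=headers
    ).json()
    for perm in perms.get("value", []):
        scope = perm.get("link", {}).get("scope")
        if scope in ("anonymous", "organization"):
            print(f"REVIEW: '{item['name']}' is shared via a {scope} link")
```

A full review would also cover SharePoint sites, and many departments may prefer the reporting built into the Microsoft 365 admin tooling; the point is to surface over-shared content before Copilot makes it trivially discoverable.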

Use of generative AI to launch cyber attacks

Aside from the use of generative AI to process University data, generative AI services can also be used by criminal groups and other cyber attackers, for example to develop malware, write convincing phishing emails or create deepfake videos. Awareness is key to preventing this type of attack, as is adherence to the University's information security policy and its underpinning baseline security standard, to ensure a good level of security.

Further support 

Please contact the Information Security GRC Team (grc@infosec.ox.ac.uk) for support on information security issues.

If you intend to provide personal data when using generative AI, seek advice from your local information governance lead or the Information Compliance team (information.compliance@admin.ox.ac.uk).

THE BASICS

Working with third parties

Before you entrust the University's data or information to any partner or supplier, you need to be sure they can and will keep it safe from attack.

In order to ensure that third-party partners and suppliers meet the standards of information security required by the University and your division, department or faculty, you must:

  1. Maintain an up-to-date record of all third parties that access, store or process University information on behalf of your division, department or faculty
  2. Ensure that, for all new agreements with third parties, due diligence is exercised around information security and that contractual arrangements are adequate
  3. Ensure that the information security arrangements contained in existing agreements are reviewed and remain adequate
  4. Monitor the compliance of third parties against your information security requirements and contractual arrangements

STUDENTS

Please note that artificial intelligence (AI) can only be used within assessments where specific prior authorisation has been given, or when technology that uses AI has been agreed as a reasonable adjustment for a student’s disability (such as voice recognition software for transcriptions, or spelling and grammar checkers).

To find out more about AI and plagiarism, visit the Plagiarism page on the Oxford Students website. This is not an Information Security requirement but a requirement from the Proctors.
