Use Generative AI services such as ChatGPT safely

Gartner describes Generative AI as a capability that can “learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but doesn’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs”. ChatGPT is a generative AI service developed by OpenAI and based on a large language model (LLM); it uses deep learning techniques to generate a variety of content, including human-like text responses to a wide range of prompts.

At a glance

 Ensure a third-party security assessment has been completed before using any generative AI service. Suppliers that have already been assessed are listed on the Third Party Register (SSO required).

 Check the relevant agreements about third-party use of information supplied to the service.

 Only supply information classified up to a level for which the service has been assessed as suitable, and take particular care when processing personal data, as many of these services are not appropriate for this type of use.

AT OXFORD

Risk to confidentiality

The main information security risk to the University in using generative AI cloud services is a loss of confidentiality. The University has an information classification scheme with three levels of confidentiality: Public, Internal and Confidential.

Information classified as Public does not carry a confidentiality risk. Information classified as Internal or Confidential does carry risk, and we do not recommend using generative AI services with Confidential information. Information classified as Internal can be used provided the following steps are taken to mitigate risk.
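
Before turning to those steps, the sketch below illustrates the classification rule in Python. It is purely illustrative, not a University tool: the level names mirror the scheme above, but the enum and function are hypothetical.

    from enum import Enum

    class Classification(Enum):
        """The three confidentiality levels in the University's scheme."""
        PUBLIC = "public"
        INTERNAL = "internal"
        CONFIDENTIAL = "confidential"

    def may_send_to_generative_ai(level: Classification) -> bool:
        # Public carries no confidentiality risk; Internal is acceptable only
        # with the mitigations described in the numbered steps below.
        # Confidential information should not be sent to these services.
        return level in (Classification.PUBLIC, Classification.INTERNAL)

    assert may_send_to_generative_ai(Classification.INTERNAL)
    assert not may_send_to_generative_ai(Classification.CONFIDENTIAL)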


1. As with all service providers holding or processing university information, information supplied to the tool in the form of questions or other artefacts is typically stored by the third-party service provider and is subject to threats from cyber criminals and other malicious actors, such as hostile nation states. All cloud-based generative AI tools should therefore be subject to a security risk assessment before being used. The Information Security GRC Team has a Third Party Security Assessment (TPSA) tool to help complete an assessment. It is generally not possible to complete a full assessment for free and open-source services; in such cases they should not be used for confidential information. In the case of OpenAI, a TPSA has been successfully completed.


2. Information provided to generative AI services may be accessible to the service provider, its partners and sub-contractors, and is likely to be used in some way, particularly when the service is free to use. Check service agreements for conditions on usage and ownership; if these are not explicitly set out in an agreement, the service carries an unknown risk to confidentiality. In the case of OpenAI, if you use the Application Programming Interface (API), no data submitted to or generated by the API is used to train AI models. If you use ChatGPT, the default is that data you provide may be used in this way, but you can opt out (under Data Controls) to reduce risk. If any personal data is processed using ChatGPT and you do not opt out of the use of that data by the third party, this secondary processing must be considered in your data protection by design work. In particular, you must ensure that you have been transparent with those whose data may be fed into the generative AI model and alert them to any secondary processing that may occur.
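
As an illustration of the API route just described, the sketch below uses OpenAI's official Python package (the v1 client interface). The model name is only an example, and OpenAI's data-usage terms should always be checked against its current documentation.

    import os
    from openai import OpenAI

    # The client reads OPENAI_API_KEY from the environment by default;
    # it is passed explicitly here for clarity.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # Per OpenAI's stated policy, data submitted via the API is not used to
    # train its models. Even so, include only Public or Internal information
    # in prompts, and no unprotected personal data.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "user",
             "content": "Summarise the key points of this draft agenda: ..."},
        ],
    )
    print(response.choices[0].message.content)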


3. It is always important to check the generated output, particularly when it is used to produce code or other sensitive material. One potential threat is poisoning of AI training data to manipulate the behaviour of the model and cause it to produce malicious output. This is an emerging threat; we will continue to watch this and other evolving threats and update our advice accordingly.
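
For example, when adopting model-generated code, treat it as untrusted: review it by eye and exercise it against inputs you control before use. The function and checks below are entirely hypothetical stand-ins for generated output.

    import re

    def generated_slugify(title: str) -> str:
        # Imagine this body was produced by a generative AI assistant.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # Verify behaviour on known inputs, including edge cases, before
    # trusting the generated function in real work.
    assert generated_slugify("Hello, World!") == "hello-world"
    assert generated_slugify("  Multiple   spaces ") == "multiple-spaces"
    assert generated_slugify("") == ""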


Use of Generative AI to launch cyber attacks

There has been much discussion about the use of generative AI services by criminal groups and other cyber attackers, for example to develop malware, write convincing phishing emails or create deepfake videos. From the University's perspective, malicious use of AI services does not introduce a specific new threat, and there are no recommendations beyond adhering to the existing information security policy and underpinning baseline standard to ensure a good level of security.

Further support

Please contact the Information Security GRC team at grc@infosec.ox.ac.uk for support on information security issues.

If you intend to provide personal data when using Generative AI, you should seek advice from your local information governance lead or the Information Compliance team at information.compliance@admin.ox.ac.uk.

THE BASICS

Working with third parties

Before you entrust the University's data or information to any partner or supplier, you need to be sure they can and will keep it safe from attack.

In order to ensure that third-party partners and suppliers meet the standards of information security required by the University and your division, department or faculty, you must:

  1. Maintain an up-to-date record of all third parties that access, store or process University information on behalf of your division, department or faculty
  2. Ensure that, for all new agreements with third parties, due diligence is exercised around information security and that contractual arrangements are adequate
  3. Ensure that information security arrangements contained in existing agreements are reviewed and are adequate
  4. Monitor the compliance of third parties against your information security requirements and contractual arrangements

STUDENTS

Please note that artificial intelligence (AI) can only be used within assessments where specific prior authorisation has been given, or when technology that uses AI has been agreed as a reasonable adjustment for a student’s disability (such as voice recognition software for transcriptions, or spelling and grammar checkers).

To find out more about AI and plagiarism, visit the Plagiarism page on the Oxford Students website. This is not an Information Security requirement but a requirement from the Proctors.
