Risk to confidentiality
The main information security risk to the university in using generative AI cloud services is a loss of confidentiality. The university has an information classification scheme with three levels of confidentiality: Public, Internal and Confidential.
Information classified as Public carries no confidentiality risk. Information classified as Internal or Confidential does carry risk: we do not recommend using generative AI services with Confidential information, and Internal information may be used only if the following steps are taken to mitigate risk.
1. As with all service providers holding or processing university information, information supplied to the tool in the form of questions or other artefacts is typically stored by the third-party service provider and is subject to threats from cyber criminals and other malicious actors, such as hostile nation states. All cloud-based generative AI tools should therefore undergo a security risk assessment before being used. The Information Security GRC Team has a TPSA tool to help complete an assessment. It is generally not possible to complete a full assessment for free or open-source services; in such cases they should not be used for confidential information. In the case of OpenAI, a TPSA has been successfully completed.
2. Information provided to generative AI services may be accessible to the service provider, its partners and sub-contractors, and is likely to be used in some way, particularly when the service is free to use. Check service agreements for conditions on usage and ownership; if these are not explicitly set out in an agreement, the service carries an unknown risk to confidentiality. In the case of OpenAI, if you use the Application Programming Interface (API), no data submitted to or generated by the API is used to train AI models. If you use ChatGPT, the default is that data you provide may be used in this way, but you can opt out (under Data Controls) to reduce risk. If any personal data is processed using ChatGPT and you fail to opt out of the use of that data by the third party, this secondary processing must be considered in the data protection by design work. In particular, you must ensure that you have been transparent with those whose data may be fed into the generative AI model and alert them to any secondary processing that may occur.
3. Always check the generated output, particularly when it is used to produce code or other sensitive material. One potential threat is poisoning of AI training data to manipulate the behaviour of the model and cause it to produce malicious output. This is an emerging threat; we will continue to watch this and other evolving threats and update our advice accordingly.
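As an illustration of the API route described in point 2, the sketch below builds a request to OpenAI's chat completions REST endpoint using only the Python standard library. The endpoint URL is real, but the model name and API key shown are placeholders for illustration, and the request is constructed rather than sent:

```python
import json
import urllib.request

# OpenAI chat completions REST endpoint (API traffic, per OpenAI's
# policy, is not used to train models by default, unlike the ChatGPT
# consumer app where you must opt out under Data Controls).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an API request.

    Only supply information classified as Public or Internal,
    never Confidential.
    """
    payload = {
        "model": "gpt-4o-mini",  # assumed model name for illustration
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # placeholder key below
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-placeholder", "Summarise this public press release.")
print(req.full_url)
```

Sending the request would be done with urllib.request.urlopen(req) once a genuine API key is supplied; keeping construction separate from sending makes it easier to review exactly what information leaves the university before it is transmitted.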
Use of Generative AI to launch cyber attacks
There has been much discussion of the use of generative AI services by criminal groups and other cyber attackers, for example to develop malware, write convincing phishing emails or create deepfake videos. From the university's perspective, malicious use of AI services does not introduce a specific new threat, and there are no recommendations beyond adhering to the existing information security policy and underpinning baseline standard to ensure a good level of security.
Please contact the Information Security GRC Team (firstname.lastname@example.org) for support on information security issues.
If you intend to provide personal data when using generative AI, you should seek advice from your local information governance lead or the Information Compliance team (email@example.com).