The purpose of this research project is to address the socio-technical cybersecurity risks of operationalising machine learning models. Our objectives are:
(i) to understand the cognitive processes and user behaviours that impact the security of machine learning operations (MLOps); (ii) to model these processes and behaviours so that we can securely deliver human-machine teaming in MLOps; (iii) to understand how to defend MLOps against subversion as well as how to attack the MLOps of adversaries.
The project will generate new knowledge in the areas of computer security and human-computer interaction by using a transdisciplinary research approach that brings together social and behavioural science, computer science, and data science. The outputs from the research will be models of the behavioural risks to machine learning operations; a tool for facilitating experiments to manage the risks of human-machine teaming; and novel algorithms that can be used to both defend and attack machine learning operations. The benefit arising from the research will be increased trust in the operationalisation of machine learning models.