PhD studentship in Computer Science - Dynamic Validation of AI Systems in Digital Twins: A Real-Time Safety Framework for Critical Infrastructure Resilience

Location: Newcastle upon Tyne, England
Deadline: 15 Feb 2026

Award Summary

100% of fees covered and a minimum tax-free annual living allowance of £20,780 (2025/26 UKRI rate). Additional project costs will also be provided.

Overview

Research Project

Critical infrastructure systems, such as power grids, transportation networks, and water treatment plants, increasingly rely on AI-driven decision-making for efficiency and autonomy. However, these systems face unique safety challenges. Real-world conditions, including weather, cyberattacks, and equipment degradation, are unpredictable, causing AI behaviors to deviate from lab-tested performance.

Current digital twin technologies focus on predictive maintenance and optimization but lack frameworks to continuously verify AI safety in operational contexts. This project aims to develop a dynamic validation framework for AI systems using high-fidelity digital twins, enabling real-time stress-testing under simulated edge cases like cyber-physical attacks and sensor failures.

The Research Challenges

A complex interplay of factors makes it difficult to ensure the resilience of AI in critical infrastructure. The main challenges are:

a) Dynamic environments: Real-world conditions are unpredictable. Environmental factors and cyberattacks can cause AI behaviors to deviate significantly from their lab-tested performance metrics.

b) Cascading failures: Errors in AI decisions can propagate across interconnected infrastructure, leading to catastrophic outcomes. A single failure point can trigger widespread system collapse in grids or transport networks.

c) Regulatory lag: Existing safety certifications, such as ISO standards, lack methods to validate AI systems in real-time, adaptive scenarios. There is a disconnect between static regulation and dynamic operational risks.

d) Verification Gaps: Current digital twin technologies lack the frameworks necessary to continuously verify AI safety in operational contexts, leaving systems vulnerable to unpredicted behaviors.

The proposed framework addresses these challenges by defining resilience metrics that quantify AI safety, focusing on robustness, recoverability, and ethical compliance. To bridge digital twin simulations with physical systems, the project will deploy real-time monitoring tools that enable preemptive risk mitigation. It will also embed regulatory rules, such as those of the EU AI Act, into digital twins to audit AI alignment with fairness and transparency standards.

Methodology

The PhD will be divided into three work packages (WPs):

WP1 - Digital Twin Development: This WP focuses on building physics-informed digital twins for high-impact use cases: Smart Grids (simulating load-balancing under stress) and Autonomous Transportation (modeling traffic systems during cyberattacks). This includes creating an Adversarial Scenario Library to curate edge cases like data poisoning and hardware malfunctions.

WP2 - Dynamic Validation Framework: This WP utilizes formal methods, specifically probabilistic model checking, to provide safety guarantees and verify AI behavior against constraints (e.g., "prevent cascading blackouts"). It also incorporates Human-in-the-Loop testing to integrate operator feedback and refine AI actions during crisis simulations.
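To illustrate the kind of check WP2 describes, the sketch below computes a bounded reachability probability in a toy discrete-time Markov chain of a grid. This is the core calculation behind a bounded-reachability PCTL query such as P=? [ F<=k "blackout" ] in a probabilistic model checker like PRISM. The states, transition probabilities, and function names here are illustrative assumptions, not part of the project.

```python
# Toy grid model: states and their one-step transition probabilities.
# "blackout" is an absorbing failure state; the numbers are made up.
TRANSITIONS = {
    "nominal":  {"nominal": 0.90, "stressed": 0.10},
    "stressed": {"nominal": 0.60, "stressed": 0.35, "blackout": 0.05},
    "blackout": {"blackout": 1.0},
}

def reach_probability(start: str, unsafe: str, k: int) -> float:
    """P(reach `unsafe` within k steps from `start`), by backward value iteration."""
    # p[s] = probability of reaching `unsafe` within the steps considered so far
    p = {s: (1.0 if s == unsafe else 0.0) for s in TRANSITIONS}
    for _ in range(k):
        p = {
            s: 1.0 if s == unsafe
            else sum(prob * p[t] for t, prob in TRANSITIONS[s].items())
            for s in TRANSITIONS
        }
    return p[start]

if __name__ == "__main__":
    # A safety constraint like "prevent cascading blackouts" becomes a
    # numeric check of this probability against a required bound.
    prob = reach_probability("nominal", "blackout", k=24)
    print(f"P(blackout within 24 steps) = {prob:.4f}")
```

A model checker generalises this idea to much richer models and properties, but the validation loop in WP2 reduces to comparing such computed probabilities against declared safety constraints.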

WP3 - Real-Time Resilience Integration: This WP develops a Runtime Assurance Layer by deploying lightweight anomaly detection algorithms, such as autoencoders, to flag unsafe AI decisions. It also involves the development of an Ethical Compliance Engine to embed regulatory rules into the digital twin environment.
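As a sketch of the runtime assurance idea in WP3: an anomaly detector scores each system state, and an AI decision taken in a state whose score exceeds a threshold calibrated on nominal data is replaced by a conservative fallback. A trained autoencoder would supply the reconstruction; here a per-feature mean model stands in so the example is self-contained, and all names (fit_mean_model, assure, the sensor values) are hypothetical.

```python
import statistics

def fit_mean_model(nominal_data):
    """Stand-in for autoencoder training: remember per-feature means."""
    n_features = len(nominal_data[0])
    return [statistics.mean(row[i] for row in nominal_data) for i in range(n_features)]

def reconstruction_error(model, x):
    """Mean squared error between an input and its 'reconstruction'."""
    return sum((xi - mi) ** 2 for xi, mi in zip(x, model)) / len(x)

def calibrate_threshold(model, nominal_data, quantile=0.99):
    """Threshold = empirical quantile of reconstruction errors on nominal data."""
    errors = sorted(reconstruction_error(model, x) for x in nominal_data)
    return errors[min(int(quantile * len(errors)), len(errors) - 1)]

def assure(model, threshold, state, proposed_action, fallback_action):
    """Pass the AI's action through only if the state looks nominal."""
    if reconstruction_error(model, state) > threshold:
        return fallback_action  # flagged as anomalous: revert to safe control
    return proposed_action

# Illustrative nominal grid readings: [load, frequency_deviation]
nominal = [[0.5, 0.01], [0.55, 0.00], [0.48, 0.02], [0.52, 0.01], [0.50, 0.00]]
model = fit_mean_model(nominal)
threshold = calibrate_threshold(model, nominal)

print(assure(model, threshold, [0.51, 0.01], "redistribute", "shed_load_safely"))
print(assure(model, threshold, [3.0, 0.9], "redistribute", "shed_load_safely"))
```

Here the first state is close to the nominal data and the proposed action passes through, while the second, far-off-nominal state triggers the fallback; a deployed layer would use a genuine autoencoder and a vetted safe controller in place of these stand-ins.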

Project Timeline

Year 1 (Month 1-12): WP1 and associated training to obtain core skills in digital twin modeling and adversarial scenario generation. Development of the Adversarial Scenario Library.

Year 2 (Month 13-24): Implementation of WP2. Application of probabilistic model checking and initial human-in-the-loop testing cycles.

Year 3 (Month 25-36): Design and implementation of the analytical tools for WP3, specifically the Runtime Assurance Layer and Ethical Compliance Engine.

Year 4 (Month 37-42): Final validation of the framework across defined use cases (Smart Grids and Transportation) and writing final thesis chapters.

Supervision Environment

Extensive training will be provided on physics-informed digital twin development and critical infrastructure simulation. Training on formal verification methods (probabilistic model checking) and AI safety compliance (EU AI Act standards) will also be provided.

Student Applicant Skills/Background

The applicant should have a solid background in computer science or systems engineering. Knowledge of AI/ML algorithms and simulation environments is highly advantageous. A keen interest in critical infrastructure resilience, cyber-physical systems, and AI safety ethics is essential to align with the focus of this research. Additionally, candidates should demonstrate analytical thinking regarding safety certifications and regulatory compliance.

Number Of Awards

1

Start Date

1 October 2026

Award Duration

4 years

Application Closing Date

15 February 2026

Sponsor

EPSRC

Supervisors

Dr Yinhao Li, Dr Dev Jha, Dr Charith Perera

Eligibility Criteria

We are adopting a contextual admissions process. This means we will consider other key competencies and experience alongside your academic qualifications. An example can be found here.

A minimum 2:1 Honours degree, or international equivalent, in a subject relevant to the proposed PhD project is our standard entry requirement; however, we also place value on prior experience, enthusiasm for research, and the ability to think and work independently. Excellent analytical skills and strong verbal and written communication skills are also essential. A Masters qualification may not be required if you have a minimum 2:1 degree or can evidence alternative experience in a work or research-based project. If you have alternative qualifications or experience, please contact us to discuss flexibilities and request an exemption.

Applicants whose first language is not English require an IELTS score of 6.5 overall, with a minimum of 5.5 in all sub-skills. International applicants may require an ATAS (Academic Technology Approval Scheme) clearance certificate prior to obtaining their visa and to study on this programme.

How To Apply

Please read and complete this document as your Personal Statement and upload it with your application. Applications which do not include this document will not be considered. Further details can be found here.

You must apply through the University’s Apply to Newcastle Portal.

Once registered, select ‘Create a Postgraduate Application’.

Use ‘Course Search’ to identify your programme of study:  

- search for the ‘Course Title’ using the programme code: 8050F
- select PhD Computer Science (full time) as the programme of study

You will then need to provide the following information in the ‘Further Details’ section:  

- a ‘Personal Statement’ (this is a mandatory field) – use this template.
- the studentship code DLA2631 in the ‘Studentship/Partnership Reference’ field.
- when prompted for how you are providing your research proposal, select ‘Write Proposal’. You should then type in the title of the research project from this advert. You do not need to upload a research proposal.

You must submit one application per studentship; you cannot apply for multiple studentships on one application.

Contact Details

Dr Yinhao Li

You can also contact: doctoral.awards@ncl.ac.uk for independent advice on your application.


