The Future of AI:
Responsible and Trustworthy 

Publications

In the past few years, we have published papers on diagnosing AI systems at software engineering and security conferences and journals, e.g., ICSE, USENIX Security, CAV, TSE, and FSE.

These papers cover key aspects of trustworthy AI, including robustness, fairness, security, and explainability. In addition, we have won two ACM SIGSOFT Distinguished Paper Awards (ICSE 2018, ICSE 2020) and one ACM SIGSOFT Research Highlight (2020).

Robustness

Certified Robust Accuracy of Neural Networks Are Bounded due to Bayes Errors, CAV, Jul 2024

QuoTe: Quality-oriented Testing for Deep Learning Systems, TOSEM, Feb 2023

Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems, IJCIP, Sep 2021

Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing, ICSE, May 2019

Towards Optimal Concolic Testing, ICSE, May 2018

Fairness

TestSGD: Interpretable Testing of Neural Networks Against Subtle Group Discrimination, TOSEM, Apr 2023

Adaptive Fairness Improvement based Causality Analysis, FSE, Nov 2022

Probabilistic Verification of Neural Networks Against Group Fairness, FM, Nov 2021

Automatic Fairness Testing of Neural Classifiers Through Adversarial Sampling, TSE, Aug 2021

White-box Fairness Testing through Adversarial Sampling, ICSE, Jun 2020

Security

Neural Network Semantic Backdoor Detection and Mitigation: A Causality-Based Approach, USENIX Security, Aug 2024

Verifying Neural Networks Against Backdoor Attacks, CAV, Aug 2022

Causality-based Neural Network Repair, ICSE, Jul 2022

Explainability

Semantic-based Neural Network Repair, ISSTA, Jul 2023

Which Neural Network Makes More Explainable Decisions? An Approach towards Measuring Explainability, ASE-J, Nov 2022

ExAIs: Executable AI Semantics, ICSE, Jul 2022

Towards Interpreting Recurrent Neural Network through Probabilistic Abstraction, ASE, Sep 2020

Resources

bottom of page