
The Future of AI: Responsible and Trustworthy

Publications

In the past few years, we have published papers on AI diagnosis at conferences and in journals in software engineering and security, e.g., ICSE, S&P, CAV, TSE, and FSE.

These papers cover key aspects of trustworthy AI, including robustness, fairness, security, and explainability. In addition, we won two ACM SIGSOFT Distinguished Paper Awards (ICSE 2018, ICSE 2020) and one ACM SIGSOFT Research Highlight (2020).

Robustness

QuoTe: Quality-oriented Testing for Deep Learning Systems, TOSEM, Feb 2023

Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems, IJCIP, Sep 2021

RobOT: Robustness-Oriented Testing for Deep Learning Systems, ICSE, May 2021

Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing, ICSE, May 2019

Towards Optimal Concolic Testing, ICSE, May 2018

Fairness

FairRec: Fairness Testing for Deep Recommender Systems, ISSTA, Jul 2023


TestSGD: Interpretable Testing of Neural Networks Against Subtle Group Discrimination, TOSEM, Apr 2023

Adaptive Fairness Improvement based Causality Analysis, FSE, Nov 2022

Automatic Fairness Testing of Neural Classifiers Through Adversarial Sampling, TSE, Aug 2021

White-box Fairness Testing through Adversarial Sampling, ICSE, Jun 2020

Security

Verifying Neural Networks Against Backdoor Attacks, CAV, Aug 2022

Causality-based Neural Network Repair, ICSE, Jul 2022

Explainability

Semantic-based Neural Network Repair, ISSTA, Jul 2023


Which Neural Network Makes More Explainable Decisions? An Approach towards Measuring Explainability, ASE-J, Nov 2022

ExAIs: Executable AI Semantics, ICSE, Jul 2022

Towards Interpreting Recurrent Neural Network through Probabilistic Abstraction, ASE, Sep 2020

Copyright

Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models, S&P, May 2022

