The Future of AI:
Responsible and Trustworthy

Publications

Over the past few years, we have published many papers on AI testing in flagship international conferences and journals in software engineering and security, including ICSE, S&P, CAV, TSE, and FSE.

These papers cover a broad range of topics in trustworthy AI, including robustness, fairness, security, and interpretability. In addition, we have received two ACM SIGSOFT Distinguished Paper Awards (ICSE 2018, ICSE 2020) and an ACM SIGSOFT Research Highlights Award (2020).

Robustness

QuoTe: Quality-oriented Testing for Deep Learning Systems, TOSEM, February 2023

Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems, IJCIP, September 2021

RobOT: Robustness-Oriented Testing for Deep Learning Systems, ICSE, May 2021

Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing, ICSE, May 2019

Towards Optimal Concolic Testing, ICSE, May 2018

Fairness

FairRec: Fairness Testing for Deep Recommender Systems, ISSTA, July 2023

TestSGD: Interpretable Testing of Neural Networks Against Subtle Group Discrimination, TOSEM, April 2023

Adaptive Fairness Improvement Based on Causality Analysis, FSE, November 2022

Automatic Fairness Testing of Neural Classifiers Through Adversarial Sampling, TSE, August 2021

White-box Fairness Testing through Adversarial Sampling, ICSE, June 2020

Security

Verifying Neural Networks Against Backdoor Attacks, CAV, August 2022

Causality-based Neural Network Repair, ICSE, July 2022

Interpretability

Semantic-based Neural Network Repair, ISSTA, July 2023

Which Neural Network Makes More Explainable Decisions? An Approach towards Measuring Explainability, ASE-J, November 2022

ExAIs: Executable AI Semantics, ICSE, July 2022

Towards Interpreting Recurrent Neural Network through Probabilistic Abstraction, ASE, September 2020

Copyright

Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models, S&P, May 2022
