Quantifying the robustness of neural networks or verifying their safety
properties against input uncertainties or adversarial attacks has become an
important research area in learning-enabled systems. Most results concentrate
on the worst-case scenario, in which the input of the neural network is
perturbed within a norm-bounded uncertainty set. In this paper, we consider a
probabilistic setting in which the uncertainty is random with known first two
moments. In this context, we discuss two relevant problems: (i) probabilistic
safety verification, in which the goal is to find an upper bound on the
probability of violating a safety specification; and (ii) confidence ellipsoid
estimation, in which, given a confidence ellipsoid for the input of the neural
network, the goal is to compute a confidence ellipsoid for the output. Due to
the presence of nonlinear activation functions, these two problems are very
difficult to solve exactly. To simplify the analysis, our main idea is to
abstract the nonlinear activation functions by a combination of affine and
quadratic constraints that they impose on their input-output pairs. We then
show that the safety of the abstracted network, which is sufficient for the
safety of the original network, can be analyzed using semidefinite
programming. We illustrate the performance of our approach with numerical
experiments.
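
To give a concrete sense of the abstraction step (a minimal sketch of the kind
of constraints involved; the paper's exact constraint classes may differ), the
ReLU activation y = max(x, 0) satisfies exactly the affine and quadratic
constraints

    y >= 0,    y >= x,    y * (y - x) = 0,

and, more generally, any activation phi whose slope is restricted to the
interval [alpha, beta] satisfies the incremental quadratic constraint

    (phi(x1) - phi(x2) - alpha*(x1 - x2)) * (phi(x1) - phi(x2) - beta*(x1 - x2)) <= 0.

Relaxing each activation to the set of all input-output pairs satisfying such
constraints is what makes the safety question amenable to semidefinite
programming.

On the probabilistic side, the classical one-sided Chebyshev (Cantelli)
inequality already bounds the violation probability of a linear safety
specification using only the first two moments; the paper's semidefinite
programs can be viewed as extending this style of moment-based reasoning
through the nonlinear layers. A short Python illustration of the linear case
(the function name and toy numbers below are ours, not from the paper):

    import numpy as np

    def cantelli_violation_bound(mu, Sigma, a, b):
        # Upper bound on P(a^T x > b) over all distributions of x with
        # mean mu and covariance Sigma (one-sided Chebyshev / Cantelli).
        m = a @ mu            # mean of a^T x
        s2 = a @ Sigma @ a    # variance of a^T x
        t = b - m             # margin to the safety boundary
        if t <= 0:
            return 1.0        # mean already violates: no nontrivial bound
        return s2 / (s2 + t ** 2)

    # Toy usage: 2-D random input, safety specification x1 + x2 <= 2.
    mu = np.array([0.0, 0.0])
    Sigma = np.array([[0.5, 0.1],
                      [0.1, 0.3]])
    a = np.array([1.0, 1.0])
    print(cantelli_violation_bound(mu, Sigma, a, b=2.0))  # -> 0.2
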
@article{fazlyab2019probabilistic,
author = {Fazlyab, Mahyar and Morari, Manfred and Pappas, George J.},
keywords = {deep-learning probability robustness uncertainty},
note = {cite arxiv:1910.04249},
title = {Probabilistic Verification and Reachability Analysis of Neural Networks
via Semidefinite Programming},
url = {http://arxiv.org/abs/1910.04249},
year = 2019
}