May 01, 2019

Since the pre-processed data F is independent of the state x, the attack vector estimate Â(t) ∈ S_s can be obtained from F first (in this paper, by using the gradient descent algorithm), and then the state vector can be reconstructed from Ỹ through

(6) x̂(t − τ + 1) = Õ(Ỹ(t) − Â(t))

where Õ = (OᵀO)⁻¹Oᵀ is the pseudo-inverse of O.

Everyone knows about gradient descent. But do you really know how it works? Have you already implemented the algorithm yourself? If you need a reminder, this article explains in simple terms why we need it, how it works, and even shows you a Python implementation!

Solving with projected gradient descent

Since we are trying to maximize the loss when creating an adversarial example, we repeatedly move in the direction of the positive gradient. Since we also need to ensure that δ ∈ Δ, we also project back into this set after each step, a process known as projected gradient descent (PGD):

δ := Proj_Δ(δ + α ∇_δ Loss(h_θ(x + δ), y))

Inspired by DDN, we propose an efficient projected gradient descent for ensemble adversarial attack, which improves the search direction and step size over the ensemble of models. Our method won first place in the IJCAI19 Targeted Adversarial Attack competition.
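The state reconstruction in (6) can be sketched with NumPy. The observability matrix O, measurement Ỹ(t), and attack estimate Â(t) below are small illustrative placeholders (not values from the paper); the only requirement is that O has full column rank so that (OᵀO)⁻¹ exists:

```python
import numpy as np

def reconstruct_state(O, Y_tilde, A_hat):
    """Reconstruct the state via the pseudo-inverse of O:
    x_hat = (O^T O)^{-1} O^T (Y_tilde - A_hat), as in eq. (6)."""
    O_pinv = np.linalg.inv(O.T @ O) @ O.T  # pseudo-inverse (O^T O)^{-1} O^T
    return O_pinv @ (Y_tilde - A_hat)

# Illustrative example with a full-column-rank O (placeholder values).
O = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
A = np.array([0.5, 0.0, -0.5])      # hypothetical attack vector
Y = O @ x_true + A                  # attacked measurements
x_hat = reconstruct_state(O, Y, A)  # recovers x_true when A is estimated exactly
```

In practice one would call `np.linalg.pinv(O)` directly, which computes the same least-squares pseudo-inverse more robustly.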
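As a refresher, a minimal gradient descent loop on a toy quadratic (my own example, not the article's) looks like this: at each step, move opposite the gradient of the objective.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, n_steps=100):
    """Minimize an objective by repeatedly stepping against its gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - lr * grad(x)  # x_{k+1} = x_k - lr * grad f(x_k)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

With this learning rate the iterates contract toward the minimizer x = 3 by a constant factor each step, so 100 steps are far more than enough on this example.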
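The PGD update — ascend the loss in δ, then project δ back into the allowed set Δ — can be sketched for the common case where Δ is an ℓ∞ ball of radius ε, so projection is elementwise clipping. The gradient function, toy loss, and the values of ε and α here are stand-ins, not the competition setup:

```python
import numpy as np

def pgd_attack(grad_loss, x, eps=0.1, alpha=0.02, n_steps=40):
    """Projected gradient descent on an l-infinity ball:
    delta := Proj_Delta(delta + alpha * grad_delta Loss(x + delta))."""
    delta = np.zeros_like(x)
    for _ in range(n_steps):
        delta = delta + alpha * grad_loss(x + delta)  # ascend the loss
        delta = np.clip(delta, -eps, eps)             # project back into Delta
    return delta

# Toy "loss": 0.5 * ||z - t||^2 for a target t, so grad = z - t.
# Maximizing it pushes x + delta away from t until clipping binds.
t = np.array([-1.0, -1.0])
x = np.array([0.0, 0.0])
delta = pgd_attack(lambda z: z - t, x)
```

Many implementations step with `np.sign(grad_loss(...))` instead of the raw gradient (the FGSM-style variant), which decouples the step size from the gradient's magnitude.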