Adversarial Machine Learning, 1st Edition, by Anthony Joseph, Blaine Nelson, Benjamin Rubinstein, and J. D. Tygar
Product details:
ISBN 10: 1108327079
ISBN 13: 9781108327077
Authors: Anthony Joseph, Blaine Nelson, Benjamin Rubinstein, J. D. Tygar
Table of Contents:
Part I Overview of Adversarial Machine Learning
1 Introduction
1.1 Motivation
1.2 A Principled Approach to Secure Learning
1.3 Chronology of Secure Learning
1.4 Overview
2 Background and Notation
2.1 Basic Notation
2.2 Statistical Machine Learning
2.2.1 Data
2.2.2 Hypothesis Space
2.2.3 The Learning Model
2.2.4 Supervised Learning
2.2.5 Other Learning Paradigms
3 A Framework for Secure Learning
3.1 Analyzing the Phases of Learning
3.2 Security Analysis
3.2.1 Security Goals
3.2.2 Threat Model
3.2.3 Discussion of Machine Learning Applications in Security
3.3 Framework
3.3.1 Taxonomy
3.3.2 The Adversarial Learning Game
3.3.3 Characteristics of Adversarial Capabilities
3.3.4 Attacks
3.3.5 Defenses
3.4 Exploratory Attacks
3.4.1 The Exploratory Game
3.4.2 Exploratory Integrity Attacks
3.4.3 Exploratory Availability Attacks
3.4.4 Defending against Exploratory Attacks
3.5 Causative Attacks
3.5.1 The Causative Game
3.5.2 Causative Integrity Attacks
3.5.3 Causative Availability Attacks
3.5.4 Defending against Causative Attacks
3.6 Repeated Learning Games
3.6.1 Repeated Learning Games in Security
3.7 Privacy-Preserving Learning
3.7.1 Differential Privacy
3.7.2 Exploratory and Causative Privacy Attacks
3.7.3 Utility despite Randomness
Part II Causative Attacks on Machine Learning
4 Attacking a Hypersphere Learner
4.1 Causative Attacks on Hypersphere Detectors
4.1.1 Learning Assumptions
4.1.2 Attacker Assumptions
4.1.3 Analytic Methodology
4.2 Hypersphere Attack Description
4.2.1 Displacing the Centroid
4.2.2 Formal Description of the Attack
4.2.3 Characteristics of Attack Sequences
4.3 Optimal Unconstrained Attacks
4.3.1 Optimal Unconstrained Attack: Stacking Blocks
4.4 Imposing Time Constraints on the Attack
4.4.1 Stacking Blocks of Variable Mass
4.4.2 An Alternate Formulation
4.4.3 The Optimal Relaxed Solution
4.5 Attacks against Retraining with Data Replacement
4.5.1 Average-out and Random-out Replacement Policy
4.5.2 Nearest-out Replacement Policy
4.6 Constrained Attackers
4.6.1 Greedy Optimal Attacks
4.6.2 Attacks with Mixed Data
4.6.3 Extensions
4.7 Summary
5 Availability Attack Case Study: SpamBayes
5.1 The SpamBayes Spam Filter
5.1.1 SpamBayes’ Training Algorithm
5.1.2 SpamBayes’ Predictions
5.1.3 SpamBayes’ Model
5.2 Threat Model for SpamBayes
5.2.1 Attacker Goals
5.2.2 Attacker Knowledge
5.2.3 Training Model
5.2.4 The Contamination Assumption
5.3 Causative Attacks against SpamBayes’ Learner
5.3.1 Causative Availability Attacks
5.3.2 Causative Integrity Attacks—Pseudospam
5.4 The Reject on Negative Impact (RONI) Defense
5.5 Experiments with SpamBayes
5.5.1 Experimental Method
5.5.2 Dictionary Attack Results
5.5.3 Focused Attack Results
5.5.4 Pseudospam Attack Experiments
5.5.5 RONI Results
5.6 Summary
6 Integrity Attack Case Study: PCA Detector
6.1 PCA Method for Detecting Traffic Anomalies
6.1.1 Traffic Matrices and Volume Anomalies
6.1.2 Subspace Method for Anomaly Detection
6.2 Corrupting the PCA Subspace
6.2.1 The Threat Model
6.2.2 Uninformed Chaff Selection
6.2.3 Locally Informed Chaff Selection
6.2.4 Globally Informed Chaff Selection
6.2.5 Boiling Frog Attacks
6.3 Corruption-Resilient Detectors
6.3.1 Intuition
6.3.2 PCA-GRID
6.3.3 Robust Laplace Threshold
6.4 Empirical Evaluation
6.4.1 Setup
6.4.2 Identifying Vulnerable Flows
6.4.3 Evaluation of Attacks
6.4.4 Evaluation of ANTIDOTE
6.4.5 Empirical Evaluation of the Boiling Frog Poisoning Attack
6.5 Summary
Part III Exploratory Attacks on Machine Learning
7 Privacy-Preserving Mechanisms for SVM Learning
7.1 Privacy Breach Case Studies
7.1.1 Massachusetts State Employees Health Records
7.1.2 AOL Search Query Logs
7.1.3 The Netflix Prize
7.1.4 Deanonymizing Twitter Pseudonyms
7.1.5 Genome-Wide Association Studies
7.1.6 Ad Microtargeting
7.1.7 Lessons Learned
7.2 Problem Setting: Privacy-Preserving Learning
7.2.1 Differential Privacy
7.2.2 Utility
7.2.3 Historical Research Directions in Differential Privacy
7.3 Support Vector Machines: A Brief Primer
7.3.1 Translation-Invariant Kernels
7.3.2 Algorithmic Stability
7.4 Differential Privacy by Output Perturbation
7.5 Differential Privacy by Objective Perturbation
7.6 Infinite-Dimensional Feature Spaces
7.7 Bounds on Optimal Differential Privacy
7.7.1 Upper Bounds
7.7.2 Lower Bounds
7.8 Summary
8 Near-Optimal Evasion of Classifiers
8.1 Characterizing Near-Optimal Evasion
8.1.1 Adversarial Cost
8.1.2 Near-Optimal Evasion
8.1.3 Search Terminology
8.1.4 Multiplicative vs. Additive Optimality
8.1.5 The Family of Convex-Inducing Classifiers
8.2 Evasion of Convex Classes for ℓ1 Costs
8.2.1 ε-IMAC Search for a Convex X_f^+
8.2.2 ε-IMAC Learning for a Convex X_f^−
8.3 Evasion for General ℓp Costs
8.3.1 Convex Positive Set
8.3.2 Convex Negative Set
8.4 Summary
8.4.1 Open Problems in Near-Optimal Evasion
8.4.2 Alternative Evasion Criteria
8.4.3 Real-World Evasion
Part IV Future Directions in Adversarial Machine Learning
9 Adversarial Machine Learning Challenges
9.1 Discussion and Open Problems
9.1.1 Unexplored Components of the Adversarial Game
9.1.2 Development of Defensive Technologies
9.2 Review of Open Problems
9.3 Concluding Remarks
Part V Appendixes
Appendix A: Background for Learning and Hyper-Geometry
Appendix B: Full Proofs for Hypersphere Attacks
Appendix C: Analysis of SpamBayes
Appendix D: Full Proofs for Near-Optimal Evasion
Glossary
References
Index