Introduction to One-class Support Vector Machines

Traditionally, many classification problems address the two-class or multi-class situation: the goal of the machine learning application is to distinguish test data between a number of classes, using training data. But what if you only have data of one class, and the goal is to test new data and find out whether it is similar to the training data or not?
A method for this task, which has gained much popularity over the last two decades, is the One-Class Support Vector Machine. This (quite lengthy) blog post gives an introduction to this technique and shows the two main approaches.


Just one class?

First, look at our problem situation: we would like to determine whether (new) test data belongs to a specific class, determined by our training data, or does not. Why would we want this? Imagine a factory setting: heavy machinery under constant surveillance by some advanced system. The task of the controlling system is to determine when something goes wrong: the products are below quality, the machine produces strange vibrations, or the temperature rises. It is relatively easy to gather training data of situations that are OK; it is just the normal production situation. On the other hand, collecting example data of a faulty system state can be rather expensive, or simply impossible. Even if a faulty system state could be simulated, there is no way to guarantee that all possible faulty states are simulated and would thus be recognized in a traditional two-class problem.

To cope with this problem, one-class classification problems (and solutions) are introduced. By providing only the normal training data, an algorithm creates a (representational) model of this data. If newly encountered data is too different from this model, according to some measure, it is labeled as out-of-class. We will look at the application of Support Vector Machines to this one-class problem.


Basic concepts of Support Vector Machines

Let us first take a look at the traditional two-class support vector machine. Consider a data set $\Omega = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$: points $x_i \in \mathbb{R}^d$ in a (for instance two-dimensional) space, where $x_i$ is the $i$-th input data point and $y_i \in \{-1, 1\}$ is the $i$-th output pattern, indicating the class membership.

A very nice property of SVMs is that they can create a non-linear decision boundary by projecting the data through a non-linear function $\phi$ to a space of higher dimension. This means that data points which cannot be separated by a straight line in their original space $I$ are "lifted" to a feature space $F$ where there can be a "straight" hyperplane that separates the data points of one class from the other. When that hyperplane is projected back to the input space $I$, it has the form of a non-linear curve. The following video illustrates this process: the blue dots (in the white circle) cannot be linearly separated from the red dots. By using a polynomial kernel for the projection (more on that later), all the dots are lifted into the third dimension, in which a hyperplane can be used for separation. When the intersection of that plane with the data is projected back to the two-dimensional space, a circular boundary arises.
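
To make that lifting step concrete, here is a small illustrative Matlab sketch (not the script used for the video), assuming the degree-2 polynomial feature map $\phi(x_1, x_2) = (x_1^2,\, x_2^2,\, \sqrt{2}\,x_1 x_2)$: two concentric rings cannot be separated by a line in 2-D, but after the lifting a plane of the form $z_1 + z_2 = c$ separates them, and projected back that plane is exactly the circle $x_1^2 + x_2^2 = c$.

% Lift 2-D points explicitly with the quadratic feature map
% phi(x1, x2) = (x1^2, x2^2, sqrt(2)*x1*x2), i.e. a degree-2 polynomial kernel.
rng(1);
n = 100;
r = [0.5*rand(n,1); 1.5 + 0.5*rand(n,1)];   % radii: inner ring (class 1), outer ring (class 2)
t = 2*pi*rand(2*n,1);                       % angles
X = [r.*cos(t), r.*sin(t)];                 % 2-D input space: not linearly separable

Phi = [X(:,1).^2, X(:,2).^2, sqrt(2)*X(:,1).*X(:,2)];   % explicit projection to 3-D

% In feature space the rings differ in x1^2 + x2^2 = Phi(:,1) + Phi(:,2),
% so a plane separates them; in the input space this plane is a circle.
scatter3(Phi(:,1), Phi(:,2), Phi(:,3), 10, [ones(n,1); 2*ones(n,1)], 'filled');
xlabel('x_1^2'); ylabel('x_2^2'); zlabel('sqrt(2) x_1 x_2');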

The hyperplane is represented by the equation $w^T x + b = 0$, with $w \in F$ and $b \in \mathbb{R}$. The constructed hyperplane determines the margin between the classes: all the data points of class $-1$ are on one side, and all the data points of class $1$ are on the other. The distance from the closest point of each class to the hyperplane is equal; the constructed hyperplane thus maximizes the margin ("separating power") between the classes. To prevent the SVM classifier from over-fitting to noisy data (or, in other words, to create a soft margin), slack variables $\xi_i$ are introduced that allow some data points to lie within the margin, and the constant $C > 0$ determines the trade-off between maximizing the margin and the number of training data points within that margin (and thus the training errors). The objective of the SVM classifier is the following minimization formulation:

$$
\begin{aligned}
\min_{w,\, b,\, \xi_i} \quad & \frac{\|w\|^2}{2} + C \sum_{i=1}^{n} \xi_i \\
\text{subject to:} \quad & y_i \left( w^T \phi(x_i) + b \right) \ge 1 - \xi_i \quad \text{for all } i = 1, \ldots, n \\
& \xi_i \ge 0 \quad \text{for all } i = 1, \ldots, n
\end{aligned}
$$

When this minimization problem is solved with quadratic programming using Lagrange multipliers, it gets really interesting. The decision function (classification rule) for a data point $x$ then becomes:

$$ f(x) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i y_i K(x, x_i) + b \right) $$

Here the $\alpha_i$ are the Lagrange multipliers; every data point with $\alpha_i > 0$ is weighted in the decision function and thus "supports" the machine, hence the name Support Vector Machine. Since SVMs are considered to be sparse, there will be relatively few Lagrange multipliers with a non-zero value.
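
As a minimal sketch of how this decision function is evaluated in practice, consider the following Matlab snippet; the training points, labels, multipliers, bias and kernel width are made-up (hypothetical) values standing in for the output of the solved quadratic program, and the Gaussian kernel used here is introduced in the next section.

% Evaluate f(x) = sgn( sum_i alpha_i * y_i * K(x, x_i) + b ) for a new point x.
Xtr   = [0 0; 1 1; 2 2; 3 3];     % training points (hypothetical)
y     = [-1; -1; 1; 1];           % their class labels
alpha = [0.7; 0; 0; 0.7];         % Lagrange multipliers: most are zero (sparsity)
b     = 0.1;                      % bias term (hypothetical)
sigma = 1.0;                      % kernel width
K     = @(u, v) exp(-sum((u - v).^2) / (2*sigma^2));   % Gaussian kernel

x  = [2.5 2.5];                   % new point to classify
sv = find(alpha > 1e-8);          % only the support vectors contribute to the sum
f  = b;
for i = sv'
    f = f + alpha(i) * y(i) * K(x, Xtr(i, :));
end
label = sign(f)                   % +1 or -1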


Kernel Function

The function $K(x, x_i) = \phi(x)^T \phi(x_i)$ is known as the kernel function. Since the outcome of the decision function only relies on the dot-products of the vectors in the feature space $F$ (i.e., on all the pairwise dot-products of the vectors), it is not necessary to perform an explicit projection to that space (as was done in the above video). As long as a function $K$ yields the same result, it can be used instead. This is known as the kernel trick, and it is what gives SVMs their great power with non-linearly separable data points: the feature space $F$ can be of unlimited dimension, and thus the hyperplane separating the data can be very complex. In our calculations, though, we avoid that complexity.

Popular choices for the kernel function are the linear, polynomial, and sigmoidal kernels, but the most popular is the Gaussian Radial Basis Function:

$$ K(x, x') = \exp\left( -\frac{\|x - x'\|^2}{2\sigma^2} \right) $$

where $\sigma \in \mathbb{R}$ is a kernel parameter and $\|x - x'\|$ is the dissimilarity measure.
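
A minimal Matlab sketch of this kernel, plus the kernel matrix of a small made-up data set, could look as follows; the value of $\sigma$ is chosen arbitrarily here.

% Gaussian RBF kernel between two points; sigma controls the kernel width.
rbf = @(u, v, sigma) exp(-sum((u - v).^2) / (2*sigma^2));

sigma = 1.0;
X = randn(5, 2);                  % five random 2-D points
n = size(X, 1);
Kmat = zeros(n);                  % kernel (Gram) matrix: Kmat(i,j) = K(x_i, x_j)
for i = 1:n
    for j = 1:n
        Kmat(i, j) = rbf(X(i, :), X(j, :), sigma);
    end
end
% Kmat is symmetric, has ones on its diagonal, and all entries lie in (0, 1].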

With this set of formulas and concepts we are able to classify a set of data points into two classes with a non-linear decision function. But we are interested in the case of a single class of data. Roughly, there are two different approaches, which we will discuss in the next two sections.


One-Class SVM according to Schölkopf

The Support Vector Method for Novelty Detection by Schölkopf et al. basically separates all the data points from the origin (in the feature space $F$) and maximizes the distance from this hyperplane to the origin. This results in a binary function which captures regions of the input space where the probability density of the data lives. Thus the function returns $+1$ in a "small" region (capturing the training data points) and $-1$ elsewhere.

The quadratic programming minimization problem is slightly different from the one stated above, but the similarity is still clear:

$$
\begin{aligned}
\min_{w,\, \xi_i,\, \rho} \quad & \frac{1}{2}\|w\|^2 + \frac{1}{\nu n} \sum_{i=1}^{n} \xi_i - \rho \\
\text{subject to:} \quad & (w \cdot \phi(x_i)) \ge \rho - \xi_i \quad \text{for all } i = 1, \ldots, n \\
& \xi_i \ge 0 \quad \text{for all } i = 1, \ldots, n
\end{aligned}
$$

In the previous formulation the parameter $C$ decided the smoothness. In this formulation it is the parameter $\nu$ that characterizes the solution:

it sets an upper bound on the fraction of outliers (training examples regarded as out-of-class), and

it is a lower bound on the fraction of training examples used as support vectors.

Due to the importance of this parameter, this approach is often referred to as $\nu$-SVM.

Again, by using Lagrange techniques and a kernel function for the dot-product calculations, the decision function becomes:

$$ f(x) = \operatorname{sgn}\left( (w \cdot \phi(x)) - \rho \right) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i K(x, x_i) - \rho \right) $$
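
As a minimal Matlab sketch of this one-class decision rule, with hypothetical support vectors, multipliers and offset standing in for the solution of the optimization problem above:

% Evaluate f(x) = sgn( sum_i alpha_i * K(x, x_i) - rho ) for a new point x.
SVs   = [0.1 0.2; -0.3 0.1; 0.2 -0.2];   % support vectors (hypothetical)
alpha = [0.4; 0.3; 0.3];                 % their multipliers (they sum to 1)
rho   = 0.6;                             % offset of the hyperplane (hypothetical)
sigma = 1.0;                             % kernel width
K     = @(u, v) exp(-sum((u - v).^2) / (2*sigma^2));

x = [0.0 0.1];                           % new point to test
f = -rho;
for i = 1:size(SVs, 1)
    f = f + alpha(i) * K(x, SVs(i, :));
end
in_class = sign(f)                       % +1: inside the captured region, -1: outlier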

This method thus creates a hyperplane, characterized by $w$ and $\rho$, which has maximal distance from the origin in the feature space $F$ and separates all the data points from the origin. Another method is to create a circumscribing hypersphere around the data in feature space. The following section shows that approach.


One-Class SVM according to Tax and Duin

The method of Support Vector Data Description (SVDD) by Tax and Duin takes a spherical, instead of planar, approach. The algorithm obtains a spherical boundary, in feature space, around the data. The volume of this hypersphere is minimized, to minimize the effect of incorporating outliers in the solution.

The resulting hypersphere is characterized by a center $a$ and a radius $R > 0$, the distance from the center to (any support vector on) the boundary, of which the volume $R^2$ is minimized. The center $a$ is a linear combination of the support vectors (the training data points for which the Lagrange multiplier is non-zero). Just as in the traditional formulation, one could require all the distances from the data points $x_i$ to the center to be strictly less than $R$, but to create a soft margin, slack variables $\xi_i$ with penalty parameter $C$ are used again. The minimization problem then becomes:

$$
\begin{aligned}
\min_{R,\, a} \quad & R^2 + C \sum_{i=1}^{n} \xi_i \\
\text{subject to:} \quad & \|x_i - a\|^2 \le R^2 + \xi_i \quad \text{for all } i = 1, \ldots, n \\
& \xi_i \ge 0 \quad \text{for all } i = 1, \ldots, n
\end{aligned}
$$

After solving this by introducing Lagrange multipliers $\alpha_i$, a new data point $z$ can be tested to be in or out of class. It is considered in-class when its distance to the center is smaller than or equal to the radius. Using the Gaussian kernel as a distance function over two data points, this test becomes:

$$ \sum_{i=1}^{n} \alpha_i \exp\left( -\frac{\|z - x_i\|^2}{\sigma^2} \right) \ge -\frac{R^2}{2} + C_R $$

where $C_R$ is a constant that depends only on the support vectors and their multipliers.
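
As a minimal Matlab sketch of this acceptance test, evaluated directly as the feature-space distance from $z$ to the center $a$, with hypothetical support vectors, multipliers and radius standing in for a solved SVDD problem:

% Accept z when ||phi(z) - a||^2 <= R^2, with the Gaussian kernel used above.
SVs   = [0.0 0.0; 1.0 0.0; 0.0 1.0];   % support vectors (hypothetical)
alpha = [0.5; 0.25; 0.25];             % their multipliers (they sum to 1)
R     = 0.8;                           % radius found by the optimization (hypothetical)
sigma = 1.0;
K     = @(u, v) exp(-sum((u - v).^2) / (sigma^2));

z    = [0.3 0.3];                      % new point to test
nsv  = size(SVs, 1);
Ksv  = zeros(nsv);                     % kernel matrix of the support vectors
kz   = zeros(nsv, 1);                  % kernel values between z and each support vector
for i = 1:nsv
    kz(i) = K(z, SVs(i, :));
    for j = 1:nsv
        Ksv(i, j) = K(SVs(i, :), SVs(j, :));
    end
end
% ||phi(z) - a||^2 = K(z,z) - 2*sum_i alpha_i*K(z,x_i) + sum_ij alpha_i*alpha_j*K(x_i,x_j)
dist2    = 1 - 2*(alpha'*kz) + alpha'*Ksv*alpha;   % K(z,z) = 1 for the Gaussian kernel
in_class = (dist2 <= R^2)              % true: accepted, false: outlier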

You can see the similarity between the traditional two-class method and the algorithms by Schölkopf and by Tax and Duin. So much for the theoretical fundamentals of Support Vector Machines; let's take a very quick look at some applications of this method.


Applications (in Matlab)

A very good and widely used library for SVM classification is LibSVM, which can also be used from Matlab. Out of the box it supports one-class SVM following the method of Schölkopf. Also available, among the LibSVM tools, is a method for SVDD, following the algorithm of Tax and Duin.
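
As a rough sketch of what that looks like (assuming the LibSVM Matlab interface, i.e. the compiled svmtrain and svmpredict MEX files, is on your path; the data and parameter values below are made up for illustration):

% '-s 2' selects one-class SVM, '-t 2' the RBF kernel, '-n' is nu and
% '-g' is gamma, which corresponds to 1/(2*sigma^2) in the notation above.
Xtrain = randn(200, 3);                         % hypothetical "normal" training data
Xtest  = [randn(10, 3); 5 + randn(5, 3)];       % some normal and some deviant test points

% A label vector is required by the interface, but it is not used by one-class SVM.
model = svmtrain(ones(size(Xtrain, 1), 1), Xtrain, '-s 2 -t 2 -n 0.1 -g 0.5');

% Predictions are +1 (in-class) or -1 (out-of-class).
pred_train = svmpredict(ones(size(Xtrain, 1), 1), Xtrain, model);
pred_test  = svmpredict(ones(size(Xtest, 1), 1),  Xtest,  model);

% nu upper-bounds the fraction of training outliers and lower-bounds the
% fraction of support vectors, which can be checked on the training set:
frac_outliers = mean(pred_train == -1)          % should be at most roughly 0.1
frac_sv       = model.totalSV / size(Xtrain, 1) % should be at least roughly 0.1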

To give a nice visual clarification of how the kernel mapping (to the feature space $F$) works, I created a small Matlab script that lets you create two data sets of red and blue dots (note: this simulates a two-class example). After clicking, you can inspect the data after it has been projected to three-dimensional space. The data will then result in a shape like the following image.


Application to change detection

As a conclusion to this post I will sketch the perspective from which I am using one-class SVMs in my current research for my master's thesis (which is performed at the Dutch research company Dobots). My goal is to detect change points in time series data, which is also known as novelty detection. One-class SVMs have already been applied to novelty detection for time series data. I will apply them specifically to accelerometer data, collected by smartphone sensors. My theory is that when the change points in the time series are explicitly discovered, representing changes in the activity performed by the user, the classification algorithms should perform better. In a follow-up post I will probably take a closer look at an algorithm for novelty detection using one-class Support Vector Machines.


Update: GitHub repository

Currently I am using the SVDD method by Tax and Duin to implement change detection and temporal segmentation for accelerometer data. I am using the Matlab dd_tools package, created by Tax, for the incremental version of SVDD. You can use my implementation and fork it from the oc_svm GitHub repository. Most of the functions are documented, but the code is under heavy development and thus the precise workings change from time to time. I am planning to write a good readme; in the meantime, if you are interested, I advise you to look at the apply_inc_svdd.m file, which creates the SVM classifier and extracts properties from the constructed model.