Among these, the neural network–based SVM is one of the most powerful and efficient feed-forward networks used for classification and regression problems. It can be applied to both linearly and nonlinearly separable data. It is basically a binary classifier in which a nonlinear mapping transforms the original training data into a higher-dimensional space. In this new space, the data of one class are separated from those of the other class by a decision boundary (i.e., a hyperplane). The hyperplane is determined by the support vectors, which are training tuples (Beale, Demuth, & Hagan, 1996). The structure of the SVM classifier is presented in Figure 13.8.
It is explained as follows:
Let $D_M$ be a set of $M$ labeled data points in an $N$-dimensional hyperspace:
$$D_M = \{(y_1, a_1), \ldots, (y_M, a_M)\} \in (Y \times A)^M \quad (13.1)$$
where $y_i \in Y$, $Y$ being the input space, and $a_i \in A$, $A = \{-1, +1\}$.
The task is formulated as designing a classifier $\psi: Y \to A$, such that the label $a$ is predicted from the input $y$.
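As a concrete illustration of the mapping $\psi: Y \to A$ (the chapter does not prescribe a library or dataset; scikit-learn and the toy data below are assumptions), the following minimal Python sketch fits a linear SVM on points labeled in $\{-1, +1\}$ and inspects the support vectors that determine the hyperplane:

```python
# Minimal sketch (illustrative; not the chapter's implementation).
# Fits psi: Y -> A = {-1, +1} with a linear SVM and inspects the
# support vectors (training tuples) that determine the hyperplane.
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data: two clusters labeled -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)),
               rng.normal(+2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

clf = SVC(kernel="linear")  # linear kernel: hyperplane in the input space
clf.fit(X, y)

print(clf.support_vectors_)       # training tuples defining the hyperplane
print(clf.predict([[0.5, 0.5]]))  # predicted label in {-1, +1}
```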
$Y$ can be transformed into an equivalent or higher-dimensional feature space to make the data linearly separable. The problem of finding a nonlinear decision boundary in $Y$ is thereby mapped to that of finding an optimal hyperplane separating the two classes.
In the transformed feature space, the hyperplane can be parameterized by the pair $(z, c)$, such that
$$\sum_{i=1}^{Q} z_i \varphi_i(y) + c = 0. \quad (13.2)$$
It is not required to compute the mapping function $\varphi(\cdot)$ explicitly, since only inner products of mapped points are needed, and these can be obtained through a kernel function:
$$\langle \varphi(y_i), \varphi(y_j) \rangle = K(y_i, y_j). \quad (13.3)$$
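A hedged sketch of Eq. (13.3): for the RBF kernel, the Gram matrix of values $K(y_i, y_j) = \exp(-\gamma \lVert y_i - y_j \rVert^2)$ stands in for the inner products $\langle \varphi(y_i), \varphi(y_j) \rangle$ without ever forming $\varphi$. The value of $\gamma$ and the helper name `rbf_gram` are illustrative, not from the chapter.

```python
import numpy as np

def rbf_gram(Y, gamma=0.5):
    """Gram matrix K with K[i, j] = exp(-gamma * ||y_i - y_j||^2).

    Plays the role of <phi(y_i), phi(y_j)> in Eq. (13.3) without
    computing the mapping phi(.) explicitly.
    """
    sq = np.sum(Y ** 2, axis=1)                     # ||y_i||^2 per row
    d2 = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T  # pairwise squared distances
    return np.exp(-gamma * np.maximum(d2, 0.0))     # clip tiny negatives

Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_gram(Y)
print(K)  # symmetric, ones on the diagonal: K(y_i, y_i) = 1
```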
In the proposed SVM, the radial basis function (RBF) is used as the kernel. Patterns that are not linearly separable in the input space become linearly separable after the transformation, as the sketch below illustrates. The structure of the SVM is presented in Figure 13.9.
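To illustrate this claim, the following sketch (again assuming scikit-learn, which the chapter does not name) trains an RBF-kernel SVM on concentric-circle data that no hyperplane in the input space can separate; the implicit transformation makes the classes separable.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the 2-D input space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

clf = SVC(kernel="rbf", gamma=2.0)  # RBF kernel, as in the proposed SVM
clf.fit(X, y)

# High training accuracy indicates the transformed patterns are separable.
print(clf.score(X, y))
```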