A deep neural network (DNN) is an advanced machine-learning model in which multiple layers are present between the input and output layers. The difference between a simple neural network and a DNN is presented in Figure 13.12. This model is one of the most popular artificial neural networks and can be used for different disease classifications. The proposed DNN-based model is designed in two stages. In the first stage, the model automatically learns features from the input dataset. After the feature-learning procedure is complete, a fully connected multilayer perceptron classifies the learned features in the second stage. The feature-learning stage contains a feature-identifier module that includes the convolutional and pooling layers. The feature map from the previous layer is convolved with the convolutional filter (kernel) in the convolutional layer. After the convolution, the result passes through an activation function to produce the activation map for the next layer. Meanwhile, the pooling (subsampling) layer progressively reduces the size of the activation map (Figure 13.12).
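The two stages described above can be illustrated with a minimal NumPy forward pass: a convolution-plus-pooling feature learner followed by a fully connected softmax classifier. All weights and the input segment here are random placeholders, not the chapter's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: feature learning (convolution + pooling), illustrative only ---
def conv_relu(x, w, b):
    """Valid 1-D convolution with one filter, followed by ReLU."""
    M = len(w)
    out = np.array([b + np.dot(w, x[i:i + M]) for i in range(len(x) - M + 1)])
    return np.maximum(0.0, out)

def max_pool(a, size=2):
    """Max pooling over non-overlapping windows; trailing remainder dropped."""
    n = len(a) // size
    return a[:n * size].reshape(n, size).max(axis=1)

# --- Stage 2: fully connected classifier on the learned features ---
def mlp_classify(features, W, b):
    """One dense layer with a softmax output (class probabilities)."""
    scores = W @ features + b
    e = np.exp(scores - scores.max())      # shift for numerical stability
    return e / e.sum()

x = rng.normal(size=16)                    # hypothetical 16-sample ECG segment
feat = max_pool(conv_relu(x, w=rng.normal(size=3), b=0.0))   # 7 features
probs = mlp_classify(feat, W=rng.normal(size=(2, len(feat))), b=np.zeros(2))
print(probs)                               # two-class probabilities summing to 1
```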
In the convolutional layer, the activation map from the previous layer is convolved with a convolutional filter (or kernel), a bias is added, and the result is fed to the activation function to produce the activation map for the following layer. The output of this layer can be calculated as

$$C_{i,a}^{j} = \sigma\Big(d_a + \sum_{m=1}^{M} w_m^{a}\, x_{i+m-1}^{0,a}\Big), \qquad (13.4)$$

where $x_i^0 = (x_1, x_2, x_3, \ldots, x_m)$ is the input vector; $m$ is the total number of electrocardiograph (ECG) segments; $j$ is the layer index; and $d_a$ is the bias of the $a$th feature map. Also, $\sigma$ is the activation function, $M$ is the filter size, and $w_m^a$ is the weight for the $m$th filter index.
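Equation (13.4) can be sketched directly in NumPy for a single filter. The filter weights, bias, and input values below are made-up examples, and tanh stands in for the unspecified activation σ.

```python
import numpy as np

def conv1d_output(x, w, d, sigma=np.tanh):
    """Eq. (13.4) for one filter a: C_i = sigma(d + sum_m w_m * x_{i+m-1}).

    x : 1-D input segment, w : filter weights (length M),
    d : scalar bias for this feature map, sigma : activation function.
    """
    M = len(w)
    out_len = len(x) - M + 1              # "valid" convolution, no padding
    C = np.empty(out_len)
    for i in range(out_len):
        C[i] = sigma(d + np.dot(w, x[i:i + M]))
    return C

x = np.array([0.1, 0.5, -0.2, 0.8, 0.3])  # hypothetical input values
w = np.array([1.0, -1.0, 0.5])            # hypothetical filter weights
print(conv1d_output(x, w, d=0.0))         # activation map of length 5 - 3 + 1 = 3
```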
The pooling layer is one of the building blocks of a DNN; it gradually reduces the size of the activation map to decrease the number of parameters and the computation time of the network. This layer operates on every feature map separately. Max pooling is one of the most common pooling layers used in DNNs. The output of a max pooling layer is obtained by taking the maximum activation over each non-overlapping section of the input (Mohapatra & Mohanty, 2019; Mohapatra, Srivastava, & Mohanty, 2019; Wu et al., 2018).
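Max pooling over non-overlapping sections can be sketched as follows; the input activation map below is an arbitrary example.

```python
import numpy as np

def max_pool1d(a, pool_size):
    """Max pooling over non-overlapping windows of an activation map."""
    n = len(a) // pool_size                   # trailing remainder is dropped
    return a[:n * pool_size].reshape(n, pool_size).max(axis=1)

a = np.array([0.2, 0.9, -0.1, 0.4, 0.7, 0.3])
print(max_pool1d(a, 2))   # maximum of each pair: 0.9, 0.4, 0.7
```

A length-6 map pooled with window 2 shrinks to length 3, which is exactly how the layer cuts the parameter count of the layers that follow.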
An activation function maps a set of inputs to an output and is generally employed after every convolutional layer. It is a nonlinear transfer function applied to the input data; the transformed output is then transmitted to the next layer as its input. Generally, two types of activation functions are used in DNNs: (i) the rectified linear unit (ReLU) and (ii) softmax.
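Both activation functions mentioned above have simple closed forms and can be sketched in a few lines; the input vector is an arbitrary example.

```python
import numpy as np

def relu(z):
    """ReLU: elementwise max(0, z), used after convolutional layers."""
    return np.maximum(0.0, z)

def softmax(z):
    """Softmax: converts final-layer scores into class probabilities."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-1.0, 0.5, 2.0])
print(relu(z))      # negative entries clipped to 0: 0.0, 0.5, 2.0
print(softmax(z))   # nonnegative probabilities summing to 1
```

ReLU keeps the hidden layers nonlinear while remaining cheap to compute, and softmax at the output turns raw scores into a probability distribution over the disease classes.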