The Generalisation of the Recursive Deterministic Perceptron
The 2006 IEEE International Joint Conference on Neural Network Proceedings
The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. This model is capable of solving any two-class classification problem, as opposed to the single-layer perceptron, which can only solve classification problems involving linearly separable sets (two classes X and Y of IR^d are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of IR^d delimited by this hyperplane). For all classification problems, the construction of an RDP is done automatically, and convergence is always guaranteed. Three methods for constructing RDP neural networks exist: Batch, Incremental, and Modular. The Batch method has been extensively tested; however, the Incremental and Modular methods have not been tested before. Contrary to the Batch method, the complexity of these two methods is not NP-complete. A study of the three methods is presented. This study highlights the main advantages and disadvantages of each method by comparing the level of generalisation obtained when building RDP neural networks with each of the three methods. The networks were trained and tested using the following standard benchmark classification datasets: IRIS and SOYBEAN.
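To make the linear-separability criterion above concrete, the following is a minimal sketch (not the paper's RDP construction) that uses the classical perceptron learning rule to test whether two labelled point sets in IR^d admit a separating hyperplane. The function name, the epoch budget, and the toy AND/XOR datasets are illustrative assumptions; the perceptron convergence theorem guarantees termination on separable data, while a `False` result after the epoch budget is only a heuristic indication of non-separability.

```python
def linearly_separable(points, labels, max_epochs=1000):
    """Heuristic separability test via the perceptron learning rule.

    points: list of d-dimensional tuples; labels: +1 / -1 per point.
    Returns True if a separating hyperplane w.x + b = 0 is found
    (guaranteed for linearly separable data by the perceptron
    convergence theorem), False if no error-free epoch occurs
    within max_epochs (heuristic evidence of non-separability).
    """
    d = len(points[0])
    w = [0.0] * d   # hyperplane normal vector
    b = 0.0         # hyperplane offset
    for _ in range(max_epochs):
        errors = 0
        for x, y in zip(points, labels):
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:  # misclassified (or lying on the hyperplane)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:
            return True  # every point strictly on its correct side
    return False

# AND is linearly separable; XOR is the classic non-separable case.
AND = ([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, -1, -1, +1])
XOR = ([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, +1, +1, -1])
```

Called as `linearly_separable(*AND)` and `linearly_separable(*XOR)`, this returns `True` and `False` respectively; it is precisely the XOR-type case that motivates the RDP's recursive construction, which the single-layer perceptron cannot solve.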