**Title:** Error measures based on winner-takes-all and the performance of multilayer perceptron classifier

**Authors:** Santos, Michel M. dos; Santos, Wellington P. dos

**Abstract:** The squared error is a measure commonly employed for training neural networks. Alternative objective functions, tailored to classification tasks, may be relevant for training neural network classifiers. This paper discusses why some types of objective function used in training neural networks for classification have shown better performance than the usual mean squared error (MSE). The study builds on the concept of winner-takes-all (WTA). For a trained network, it is demonstrated through a variance inequality that applying the smooth WTA criterion (softmax) to the output estimates of a multilayer perceptron (MLP) classifier tends to reduce the variance of those estimates. However, as the softmax approaches the abrupt WTA criterion, this inequality no longer guarantees a variance reduction. Conversely, for a network still to be trained, error measures based on composing MSE with WTA are defined to serve as the objective function of a particle swarm optimizer training an MLP on logic-gate classification problems. Experimental results indicate that the steeper the WTA criterion, the faster a solution is reached for the AND gate; moreover, the objective functions based on a steeper WTA were superior to plain MSE. Interestingly, in the case of the XOR problem, a solution was reached more often with plain MSE, whereas most WTA-based objective functions failed to find a solution within the maximum number of evaluations.
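The abstract's central construction, composing MSE with a smooth WTA criterion whose steepness can be tuned toward the abrupt (argmax) limit, can be sketched as follows. This is an illustrative reading only, not the paper's implementation: the steepness parameter `beta`, the function names, and the toy output/target values are all assumptions for demonstration.

```python
import numpy as np

def soft_wta(y, beta=1.0):
    """Smooth winner-takes-all: a softmax with steepness beta.
    As beta grows, the output approaches the abrupt one-hot WTA."""
    z = beta * (y - y.max())   # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def wta_mse(outputs, targets, beta=1.0):
    """Illustrative objective composing MSE with smooth WTA:
    apply soft WTA to each output row, then take the mean squared error."""
    wta = np.apply_along_axis(soft_wta, 1, outputs, beta)
    return np.mean((wta - targets) ** 2)

# Toy two-class classifier outputs whose winners already match the targets
outputs = np.array([[0.2, 0.8],
                    [0.9, 0.1]])
targets = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

print(wta_mse(outputs, targets, beta=1.0))   # gentle WTA: nonzero error
print(wta_mse(outputs, targets, beta=50.0))  # near-abrupt WTA: error close to 0
```

When the winning output already agrees with the target class, a steeper `beta` pushes the composed error toward zero even if the raw outputs are far from 0/1, which gives one intuition for why steep WTA criteria can speed up convergence on easy problems like the AND gate.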

**Keywords:**

**Pages:** 6

**DOI:** 10.21528/CBIC2013-252

**Article PDF:** bricsccicbic2013_submission_252.pdf

**BibTeX file:** bricsccicbic2013_submission_252.bib