Abstract: A novel neural-network-based method for constructing optimized prototypes for nearest-neighbor classifiers is proposed. From an effective classification-oriented error function comprising a class-classification component and a class-separation component, the corresponding prototype and feature-weight update rules are derived. The proposed method has several distinguishing properties. First, not only prototypes but also feature weights are constructed during the optimization process. Second, when an input sample x is classified incorrectly, several prototypes outside the genuine class of x are updated, rather than only one. Third, the method intrinsically weights the learning contribution of each training sample, allowing substantial learning from constructive samples and only limited learning from outliers. Experiments show that this method outperforms LVQ2 and other previous approaches.
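To make the idea concrete, the following is a minimal illustrative sketch (not the paper's exact update rules, whose error function and learning-rate schedule are not given here) of an LVQ2-style step with feature weights: on a misclassification, the nearest genuine-class prototype is pulled toward the sample while the k nearest rival prototypes are pushed away, mirroring the property that several non-genuine prototypes are updated. The function and parameter names are assumptions for illustration.

```python
import numpy as np

def weighted_dist(x, p, w):
    """Feature-weighted squared Euclidean distance."""
    return np.sum(w * (x - p) ** 2)

def prototype_update(x, label, prototypes, proto_labels, w, lr=0.1, k=2):
    """One illustrative update step, assuming an LVQ2-like rule.

    If x is misclassified under the weighted distance, pull the nearest
    prototype of the true class toward x and push the k nearest rival
    prototypes away from x. Modifies `prototypes` in place.
    """
    d = np.array([weighted_dist(x, p, w) for p in prototypes])
    nearest = int(np.argmin(d))
    if proto_labels[nearest] == label:
        return  # correctly classified: no update in this sketch

    # pull the closest genuine-class prototype toward x
    own = [i for i, c in enumerate(proto_labels) if c == label]
    i_own = own[int(np.argmin(d[own]))]
    prototypes[i_own] += lr * w * (x - prototypes[i_own])

    # push the k nearest rival prototypes away from x
    rivals = [i for i, c in enumerate(proto_labels) if c != label]
    for i in sorted(rivals, key=lambda i: d[i])[:k]:
        prototypes[i] -= lr * w * (x - prototypes[i])
```

In a full implementation the feature weights w would also receive a gradient update derived from the same error function; here they are held fixed for clarity.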