Classification with multiple prototypes

J.C. Bezdek, T.R. Reichherzer, G. Lim, Y. Attikiouzel
Proceedings of IEEE 5th International Fuzzy Systems Conference
Abstract: We compare learning vector quantization, fuzzy learning vector quantization, and a deterministic scheme called the dog-rabbit (DR) model for generation of multiple prototypes from labeled data for classifier design. We also compare these three models to three other methods: a clumping method due to C.L. Chang; our modification of C.L. Chang's method; and a derivative of the batch fuzzy c-means algorithm due to Yen and C.W. Chang. All six methods are superior to the labeled subsample means.
The nearest prototype (1-np) classifier: given any c prototypes $V = \{v_j \in \Re^p : 1 \le j \le c\}$, one $v_j$ per class, and any dissimilarity measure $\delta$ on $\Re^p$, for any $z \in \Re^p$: decide $z \in$ class $i \Leftrightarrow \delta(z, v_i) \le \delta(z, v_j)$ for all $j$ (3). Ties in (3) are arbitrarily resolved. The crisp 1-np design can be implemented using [12] ... that his method finds c = 14 prototypes that replace Iris and preserve a zero resubstitution error rate. The prototypes were not listed in [11]. We modified Chang's approach in two ways (cf. Appendix). First, instead of using ...
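As a concrete illustration (not from the paper), here is a minimal Python sketch of the 1-np rule with the labeled subsample means as the per-class prototypes, the baseline the six methods are compared against. It assumes NumPy and uses Euclidean distance for the dissimilarity $\delta$; the function names and the toy two-class data are hypothetical.

```python
import numpy as np

def subsample_mean_prototypes(X, y):
    """Baseline prototypes: one per class, the mean of that class's labeled samples."""
    labels = np.unique(y)
    return labels, np.array([X[y == k].mean(axis=0) for k in labels])

def one_np_classify(z, prototypes, labels):
    """Nearest-prototype (1-np) rule: assign z to the class of the closest prototype.
    Euclidean distance is just one choice of dissimilarity; ties are resolved
    arbitrarily (argmin returns the first minimizer)."""
    d = np.linalg.norm(prototypes - z, axis=1)
    return labels[np.argmin(d)]

# Toy example: two Gaussian blobs, one prototype per class
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
labels, V = subsample_mean_prototypes(X, y)
pred = np.array([one_np_classify(z, V, labels) for z in X])
print("resubstitution error rate:", np.mean(pred != y))
```

With more than one prototype per class (as the LVQ, FLVQ, DR, and Chang-style methods produce), the same rule applies; each prototype simply carries its class label and the winning prototype's label is assigned to z.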
doi:10.1109/fuzzy.1996.551812