In this paper, we present a comparison between two methods of feature extraction: the first is the Krawtchouk invariant moments (KIM); the second is the Zernike invariant moments (ZIM). These moments are used for printed Arabic character recognition under different conditions: translated, rotated, resized, and noisy. In the pre-processing phase we use the thresholding technique. In the learning-classification phase we use the multi-layer perceptron (MLP), a neural network based on supervised learning. The simulation results we obtained demonstrate that KIM is more robust than ZIM for this recognition task.
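To make the moment-based pipeline concrete, the following Python sketch computes rotation-invariant Zernike moment features with the mahotas library and trains a multi-layer perceptron on them with scikit-learn. It is an illustrative sketch, not the implementation used in the paper: the radius, moment degree, and hidden-layer size are assumed values, and the Krawtchouk moments would require a separate implementation that is not shown here.

import numpy as np
import mahotas
from sklearn.neural_network import MLPClassifier

def zernike_features(binary_image, radius=32, degree=8):
    # Magnitudes of Zernike moments up to the given degree (rotation-invariant).
    return mahotas.features.zernike_moments(binary_image, radius, degree=degree)

def train_mlp(images, labels):
    # images: thresholded (binary) character images; labels: character classes.
    X = np.array([zernike_features(img) for img in images])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)  # assumed topology
    return clf.fit(X, labels)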
In this paper, we present a comparison between two methods for learning-classification: the first is the Kohonen network, or Self-Organizing Map (SOM), which is characterized by unsupervised learning; the second is the Support Vector Machine (SVM), which is based on supervised learning. These techniques are used for the recognition of handwritten Latin numerals extracted from the MNIST database. In the pre-processing phase we use the thresholding, centering, and skeletonization techniques, and in the feature extraction phase we use the zoning method. The simulation results demonstrate that the SVM is more robust than the SOM method in the recognition of handwritten Latin numerals.
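The zoning step can be illustrated with the short Python sketch below, which is not the paper's code: each 28x28 MNIST image is split into a grid of zones, the foreground-pixel density of each zone is used as a feature, and an SVM is trained on the resulting vectors. The full thresholding/centering/skeletonization chain is reduced here to a simple fixed threshold, and the 4x4 grid, RBF kernel, and C value are assumptions.

import numpy as np
from sklearn.svm import SVC

def zoning_features(binary_image, zones=(4, 4)):
    # Fraction of foreground pixels in each cell of a zones[0] x zones[1] grid.
    h, w = binary_image.shape
    zh, zw = h // zones[0], w // zones[1]
    return np.array([binary_image[i*zh:(i+1)*zh, j*zw:(j+1)*zw].mean()
                     for i in range(zones[0]) for j in range(zones[1])])

def train_svm(images, labels):
    # images: 28x28 grayscale MNIST digits; labels: digit classes 0-9.
    binary = (np.asarray(images) > 127).astype(float)  # crude fixed threshold
    X = np.array([zoning_features(img) for img in binary])
    return SVC(kernel='rbf', C=10.0).fit(X, labels)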
In this paper, we present a comparison between two methods of learning-classification: the first is the Hidden Markov Model (HMM), which is based on unsupervised learning; the second is the Support Vector Machine (SVM), which is based on supervised learning. These techniques are used for printed Latin numeral recognition under different conditions: rotated, resized, and noisy. In the pre-processing phase we use the thresholding technique, and for feature extraction we use the Hu invariant moments (HIM). The simulation results demonstrate that the SVM is more robust than the HMM technique in printed Latin numeral recognition.
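The Hu-moment features feeding the SVM branch of this comparison can be sketched in Python as follows: the seven Hu invariant moments are computed with OpenCV and log-scaled before classification. This is an illustrative sketch under assumed parameters (RBF kernel, default regularization), not the authors' implementation, and the HMM branch of the comparison is not reproduced here.

import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(binary_image):
    # Seven Hu moments, invariant to translation, scale, and rotation.
    hu = cv2.HuMoments(cv2.moments(binary_image)).flatten()
    # Log scaling keeps the widely differing magnitudes comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train_svm(images, labels):
    # images: thresholded printed-digit images (uint8); labels: digit classes.
    X = np.array([hu_features(img) for img in images])
    return SVC(kernel='rbf').fit(X, labels)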