Classifying Vehicle Types from Video Streams for Traffic Flow Analysis Systems

Imran B. Mu'azam, Nor Fatihah Ismail, Salama A. Mostafa, Zirawani Baharum, Taufik Gusman, Dewi Nasien
2022 JOIV: International Journal on Informatics Visualization  
This paper proposes a vehicle type classification model from video streams for improving Traffic Flow Analysis (TFA) systems. A Video Content-based Vehicles Classification (VC-VC) model is used to support traffic signal control optimization via online identification of vehicle types. The VC-VC model extends several methods to extract TFA parameters, including background image processing, object detection, object size measurement, region-of-interest attention, object clash or overlap handling, and object tracking. The VC-VC model undergoes four main processing phases: preprocessing, segmentation, classification, and tracking. The main video and image processing methods are the Gaussian function, active contour, bilateral filter, and Kalman filter. The model is evaluated by comparing its classification output against the ground truth. Four formulas are applied in this project to evaluate the VC-VC model's performance: error, average error, accuracy, and precision. The valid classifications are counted to show the overall results. The VC-VC model detects and classifies vehicles accurately. For three tested videos, it achieves a high average classification accuracy of 85.94%. The precision of the classification across the three tested videos is 92.87%. The results show that video 1 and video 3 produce more accurate vehicle classification than video 2, because video 2 has a more difficult camera position and recording angle and more challenging scenarios than the other two. The results also show that it is difficult to classify vehicles based on object size measures alone, because the apparent object size varies with the camera altitude and zoom setting, and this variation affects the accuracy of vehicle classification.
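The abstract names four count-based evaluation measures (error, average error, accuracy, and precision) but does not reproduce their formulas. The sketch below is a minimal illustration using standard count-based definitions, assuming per-video counts of correctly classified vehicles, total detections, and ground-truth vehicles; the paper's exact formulas may differ.

```python
def video_metrics(correct, detected, ground_truth):
    """Per-video evaluation against ground truth (assumed definitions).

    correct      -- vehicles classified into the correct type
    detected     -- total vehicles the model detected and classified
    ground_truth -- vehicles present according to manual annotation
    """
    # Relative counting error between detections and ground truth.
    error = abs(detected - ground_truth) / ground_truth
    # Accuracy: correct classifications over all ground-truth vehicles.
    accuracy = correct / ground_truth
    # Precision: correct classifications over all detections made.
    precision = correct / detected
    return error, accuracy, precision


def average_error(errors):
    """Average error across the tested videos."""
    return sum(errors) / len(errors)
```

For example, a video with 100 ground-truth vehicles, 93 detections, and 86 correct classifications would yield an error of 0.07, an accuracy of 0.86, and a precision of about 0.925; averaging the per-video errors gives the overall average error.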
doi:10.30630/joiv.6.1.739 fatcat:zjqx25jcqnahrkygskvgkkkrvm