Driver Assisting Feature for Collision Avoidance, Sign and Traffic Signal Detection

K.R.K.S. Anirudh
2021 International Journal for Research in Applied Science and Engineering Technology  
Theme of the project: solving problems faced in day-to-day life and making life easier. The problem: a driver who is tired after driving continuously may become distracted and cause an accident; many accidents are taking place, causing several deaths. The team's approach to solving the problem: Sign and traffic-signal detection: using a Raspberry Pi camera with image-processing techniques, road signs and traffic signals are detected, and the motors of the vehicle are actuated accordingly. All decisions are taken by the Raspberry Pi through image processing, making it a semi-controlled device. Obstacle avoidance: ultrasonic sensors connected to an Arduino board measure the distance to an obstacle; with the help of the sensor the vehicle stops at a certain distance, so accidents can be avoided. Object detection also takes place, so that objects can be avoided without collision and accidents are prevented.

I. INTRODUCTION

Robotics is a branch of science and engineering that draws on mechanical, electrical, and other disciplines, and robots are now used in a wide variety of activities. In India in particular, road violations are increasing day by day, and many accidents occur due to lack of information and driver inattention. To avoid this, we propose driver-assistance features that help the driver in several ways. Traffic-light detection and recognition are essential for autonomous driving in urban areas. A camera-based algorithm for real-time traffic-light detection and recognition was developed, designed primarily for private cars. While reliable traffic-light recognition algorithms exist, most of them are designed for a fixed camera location, and their performance on autonomous vehicles under real-world conditions is still limited. Other methods achieve higher accuracy on private vehicles, but they cannot operate normally without the help of a high-precision map. The image-processing flow can be divided into three steps: preprocessing, detection, and recognition. First, the red-green-blue (RGB) color space is converted to hue-saturation-value (HSV) as the primary representation for preprocessing. In the detection phase, a color-thresholding method is used for initial filtering, while prior information about likely light positions is used to narrow the scan region and quickly establish candidate areas.
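The RGB-to-HSV conversion and color gating described above can be sketched in a few lines of Python using only the standard library. The hue breakpoints below (red near 0°/360°, yellow near 60°, green near 120°) and the saturation/value floors are illustrative assumptions, not thresholds taken from the paper.

```python
import colorsys

def classify_light_pixel(r, g, b, s_min=0.5, v_min=0.5):
    """Classify an 8-bit RGB pixel as 'red', 'yellow', 'green', or None.

    Pixels with low saturation or low value are rejected first, since a
    lit traffic-light lamp is both strongly colored and bright.
    The hue bands are illustrative assumptions, not tuned values.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < s_min or v < v_min:
        return None  # too gray or too dark to be a lit lamp
    deg = h * 360.0  # hue in degrees
    if deg < 20 or deg > 340:   # red wraps around 0 degrees
        return "red"
    if 40 <= deg <= 80:
        return "yellow"
    if 90 <= deg <= 160:
        return "green"
    return None
```

Running this per pixel (and zeroing everything that returns None) implements the "restrict the image to these colors" idea; a real pipeline would vectorize the same thresholds over the whole HSV image.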
The proposed system was tested in our private car. With voting schemes, the proposal could provide sufficient accuracy for private vehicles in urban areas. Traffic-light awareness plays an important role in traffic control and collision avoidance. Crashes at signalized intersections are among the most common types of car crash, exceeded only by rear-end collisions. A traffic-light monitoring system can alert drivers who have missed a change in an oncoming light, or trigger an automatic response from the vehicle itself. Traffic lights show one of three colors: red, yellow, or green. The basic idea of color thresholding is to restrict the image to the regions where these colors appear; every pixel that is not red, yellow, or green is set to zero (black). To properly isolate the colors of interest, a few important factors must be handled: the choice of color space, how the thresholds are determined, and the variation in lighting. Normal images are represented in the RGB color space. However, RGB mixes color and intensity information across all of its channels, which makes the RGB format sensitive to changes in light. If our goal is to find traffic lights, the method cannot depend on the lighting conditions (e.g. sunny, rainy, cloudy), so the thresholds must select the right colors regardless. To combat this, many approaches switch to color spaces that separate chroma (the color information) from luma (the image intensity). Notable examples well represented in the literature are HSV, HSL, CIELab, and YCbCr. A second idea exploits the fact that traffic lights appear much brighter than their surroundings: the image is converted to grayscale and a white top-hat filter is applied, which highlights areas that are much brighter than their surroundings. This way of finding lit lamps is robust to varying illumination because the structuring element (kernel) is applied locally, which means that uneven background lighting is not a problem.
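The white top-hat filter mentioned above is the original image minus its morphological opening (an erosion followed by a dilation with a flat structuring element). A minimal pure-Python sketch, assuming a small 3×3 kernel and a list-of-lists grayscale image, is:

```python
def _morph(img, k, op):
    """Apply a flat k-by-k morphological operation (op = min erodes,
    op = max dilates), clamping windows at the image border."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = op(
                img[ny][nx]
                for ny in range(max(0, y - r), min(h, y + r + 1))
                for nx in range(max(0, x - r), min(w, x + r + 1))
            )
    return out

def white_top_hat(img, k=3):
    """Original minus opening: keeps only bright spots smaller than
    the kernel, suppressing the (possibly uneven) background."""
    opened = _morph(_morph(img, k, min), k, max)
    return [[p - o for p, o in zip(row, orow)]
            for row, orow in zip(img, opened)]
```

Because the opening removes any bright blob smaller than the kernel, a lit lamp survives the subtraction while the smooth background cancels out, which is why the filter tolerates uneven lighting.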
We can start by cropping away the lower part of the image, because the traffic lights will always appear in the upper part of the frame. Every camera setup is different, but for the Bosch dataset a safe starting value is 45%.
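That cropping step amounts to a simple row slice; the sketch below assumes the 45% figure means the fraction of rows kept from the top of the frame, which the text does not state explicitly.

```python
def crop_upper_rows(image_rows, keep_fraction=0.45):
    """Keep only the top keep_fraction of the image, where image_rows
    is a list of pixel rows with row 0 at the top of the frame."""
    n_keep = max(1, int(len(image_rows) * keep_fraction))
    return image_rows[:n_keep]
```

Discarding the road surface this way cuts the search area by more than half before any color thresholding runs.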
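For the obstacle-avoidance component described in the abstract, the ultrasonic distance calculation reduces to timing the echo pulse. The sketch below assumes an HC-SR04-style sensor and a 30 cm stopping threshold; neither the sensor model nor the threshold is specified in the paper.

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # approximate, in air at about 20 degrees C

def echo_to_distance_cm(echo_pulse_s):
    """Convert the echo pulse width (seconds) to distance (cm).
    The pulse covers the round trip to the obstacle, so halve it."""
    return echo_pulse_s * SPEED_OF_SOUND_CM_PER_S / 2

def should_stop(distance_cm, threshold_cm=30.0):
    """Stop the motors once the obstacle is within the threshold."""
    return distance_cm <= threshold_cm
```

On the actual robot this logic would run on the Arduino, which times the echo pin and signals the motor driver; the arithmetic is identical.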
doi:10.22214/ijraset.2021.34775