Safety-focused Advanced Driver Assistance Systems (ADAS) and the rise of semi- and fully autonomous vehicles continue to drive demand for camera-based systems. However, many factors must be considered to deliver a successful vision-based automotive system.
While fully autonomous vehicles are grabbing headlines, mass-market availability and adoption remain some way off. However, more and more driver aids are being implemented in our cars, each moving us a step forward in the evolution from semi-autonomous to fully autonomous vehicles.
The goal is clear: safer roads. The challenge is to converge and implement systems that are more vigilant and less error-prone than humans, and that perform 24/7/365. While modern cars incorporate a myriad of sensors, these are often task-specific and cannot offer the all-encompassing awareness of the human eye; for that we need camera-based vision systems. These cameras must allow the car to read street signs, sense obstructions and detect other hazards in real time.
However, one technology advancement is hampering another. As vehicle lighting, street-sign backlighting and variable message signs move to LED technology for energy efficiency and longevity, flicker becomes a problem. Humans don't notice it, but to an electronic eye the display is pulsing on and off many times a second.
In practical terms, as the camera passes a sign, the flicker from LED lighting causes light and dark areas, often leading to missing sections or artifacts. This makes the sign impossible to read accurately using image processing software.
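The interaction between a pulsed LED and a camera's exposure window can be sketched with a toy model. The numbers below (90 Hz PWM, 50 % duty cycle) are illustrative assumptions, not values from the article: a short exposure captures all or none of the LED's light depending purely on when it starts, which is exactly how the light/dark banding and "missing" sign segments arise, while an exposure spanning several PWM periods always averages out to roughly the duty cycle.

```python
import numpy as np

def led_capture(exposure_s, pwm_hz=90.0, duty=0.5, phase=0.0, steps=10000):
    """Fraction of an LED's light seen during one exposure window.

    Models the LED as a square wave, on for `duty` of each PWM period.
    (Illustrative model only; real signs use various PWM frequencies.)
    """
    t = np.linspace(phase, phase + exposure_s, steps, endpoint=False)
    on = (t * pwm_hz) % 1.0 < duty   # LED on/off state at each instant
    return on.mean()                  # average brightness captured

# A long exposure spanning several PWM periods sees ~the duty cycle...
long_exp = led_capture(exposure_s=0.050)
# ...but a short exposure depends entirely on its start phase:
bright = led_capture(exposure_s=0.001, phase=0.000)  # lands in on-phase
dark = led_capture(exposure_s=0.001, phase=0.006)    # lands in off-phase
```

Two frames taken milliseconds apart can thus render the same sign fully lit or fully dark, which is the artifact LFM techniques are designed to remove.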
That’s why the latest image sensor technologies are designed to eliminate high-frequency LED flicker from traffic signs and vehicle LED lighting. ON Semiconductor’s LFM (LED Flicker Mitigation) technology, for example, uses HDR (High Dynamic Range) principles to capture four images at different exposures and overlay them. This delivers more than 120 dB of dynamic range and allows detail to be extracted from a scene in any lighting condition.
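The multi-exposure principle can be illustrated with a generic HDR merge. To be clear, this is a hedged sketch of textbook exposure fusion, not ON Semiconductor's proprietary LFM pipeline: each frame is normalised by its exposure time, and a weighting favours well-exposed pixels, so values that clip in the long exposures are recovered from the short ones.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge captures of one scene into a single radiance estimate.

    frames: list of 8-bit captures (same scene, different exposures).
    Pixels near mid-grey get the highest weight; clipped or very dark
    pixels contribute least. (Generic sketch, assumed weighting.)
    """
    frames = np.asarray(frames, dtype=float)
    times = np.asarray(exposure_times, dtype=float).reshape(-1, 1)
    w = 1.0 - np.abs(frames / 255.0 - 0.5) * 2.0   # mid-range weighting
    w = np.clip(w, 1e-4, None)                      # avoid zero weights
    return (w * frames / times).sum(axis=0) / w.sum(axis=0)

# Four captures of two pixels: pixel 0 is dim, pixel 1 is bright and
# saturates (clips to 255) in the two longest exposures.
times = [1.0, 4.0, 16.0, 64.0]
frames = [[1, 40], [4, 160], [16, 255], [64, 255]]
hdr = merge_exposures(frames, times)  # recovers ~1.0 and ~40.0
```

Even though the bright pixel clips in half the captures, the merge recovers its true radiance from the short exposures, while the dim pixel benefits from the long ones, extending dynamic range well beyond any single frame.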
Assessing information in various lighting conditions is fundamental to many safety systems, including in-cabin driver and passenger monitoring. Here, techniques such as ‘global shutter’ pixel design allow the sensor to produce clear, low-noise images in both low-light and bright scenes. This performance enables, for example, the eye-tracking and gesture-detection functionality required in next-generation automotive in-cabin systems.
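The benefit of a global shutter for moving subjects can be shown with a toy model (the grid size and motion speed are illustrative assumptions). A rolling shutter reads rows one after another, so a moving edge lands in a different column on each row and comes out slanted; a global shutter samples every row at the same instant, keeping the edge straight:

```python
import numpy as np

def capture(edge_x0, speed_px_per_row, rows=8, cols=12, rolling=True):
    """Capture a moving vertical edge with a rolling or global shutter.

    With rolling=True each row is read out one time-step later than the
    previous one, so the edge position drifts row by row (skew).
    With rolling=False all rows sample the scene at t = 0. (Toy model.)
    """
    img = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        t = r if rolling else 0                 # per-row readout delay
        x = int(edge_x0 + speed_px_per_row * t)  # edge position at time t
        img[r, :min(x, cols)] = 1                # pixels left of edge lit
    return img

rolling_img = capture(edge_x0=2, speed_px_per_row=1)                # slanted
global_img = capture(edge_x0=2, speed_px_per_row=1, rolling=False)  # straight
```

For in-cabin eye and gesture tracking, where heads and hands are constantly moving, avoiding this skew is one reason global-shutter pixel designs are preferred.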
Beyond operating across the widest possible range of ambient lighting conditions, there is another challenge vehicle vision designers must address if their systems are to truly replace, and improve on, the human eye and brain: field of view. The field of view can be extended in a number of ways, including the use of fish-eye lenses. However, the underlying image processing system (which manages the fusion of data from sensors, cameras, the driver and other sources) must be able to correct for the distortions such optics introduce. Alternatively, multiple-camera systems may be the optimal solution for the safety-critical systems designed to protect vehicle occupants and other road users.
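Fish-eye correction of the kind described above can be sketched for a single, idealised lens model. The equidistant projection (r = f·θ) and the focal length used here are assumptions for illustration; real systems use calibrated polynomial distortion models, such as those in OpenCV's `cv2.fisheye` module. The idea is that points far from the image centre are pushed outward to straighten lines the wide-angle lens bent inward:

```python
import numpy as np

def undistort_points(pts, f):
    """Map fish-eye (equidistant model) image points to rectilinear ones.

    Assumes r_distorted = f * theta, whereas a rectilinear lens gives
    r_undistorted = f * tan(theta). Points are (x, y) offsets in pixels
    from the image centre. (Sketch only; f and the model are assumed.)
    """
    pts = np.asarray(pts, dtype=float)
    r_d = np.linalg.norm(pts, axis=-1)            # distorted radius (px)
    theta = r_d / f                                # ray incidence angle
    r_u = f * np.tan(theta)                        # rectilinear radius
    scale = np.where(r_d > 0, r_u / np.maximum(r_d, 1e-9), 1.0)
    return pts * scale[..., None]

# The image centre is unchanged; a point one radian off-axis moves out:
centre = undistort_points([[0.0, 0.0]], f=300.0)
edge = undistort_points([[300.0, 0.0]], f=300.0)   # theta = 1 rad
```

In a full pipeline this per-point mapping drives an image remap, and the same correction must be applied consistently before any fused data from multiple cameras can be compared geometrically.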