This article explains how a 360-degree camera system mounted on a vehicle assists drivers when parking and navigating narrow streets: how it captures footage, corrects the distortions caused by the lens and the camera's position, and presents the driver with a safe view that is close to the real world.
A variety of devices exist to help drivers park or maneuver through narrow streets. One of the most notable is a device that combines footage from cameras mounted on the front, back, and sides of the vehicle into a 360° view of the vehicle's surroundings and presents it on a monitor inside the car. This gives the driver a bird's-eye view, as if looking down from above, helping them drive and park safely. Let's take a look at how this view is produced and presented to the driver.
First, a checkerboard-like grid is laid out on the ground around the vehicle and filmed with the cameras. The cameras used in these devices are usually equipped with wide-angle lenses to provide a large field of view, which reduces blind spots and helps drivers see better. However, footage from a wide-angle lens is distorted by the inherent curvature light undergoes as it passes through the lens: the image bulges at the center and becomes increasingly curved toward the edges. This is called lens-induced image distortion. The characteristics of the camera itself that determine this distortion are called internal variables (intrinsic parameters) and are expressed as distortion coefficients. If the internal variables are known accurately, a distortion model can be set up to compensate for the distortion.
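To make this concrete, here is a minimal sketch of distortion correction, assuming the OpenCV library in Python; the camera matrix and distortion coefficients are hypothetical placeholders for values that would, in practice, come from calibrating against the checkerboard grid (for example with cv2.calibrateCamera), and the file names are invented for illustration.

```python
import cv2
import numpy as np

# Internal variables (intrinsic parameters): focal lengths and optical center.
# Hypothetical values standing in for the result of calibration.
camera_matrix = np.array([[400.0,   0.0, 640.0],
                          [  0.0, 400.0, 360.0],
                          [  0.0,   0.0,   1.0]])

# Distortion coefficients in OpenCV order (k1, k2, p1, p2, k3) -- also hypothetical.
dist_coeffs = np.array([-0.30, 0.09, 0.0, 0.0, -0.01])

# One frame from the wide-angle camera (hypothetical file name).
frame = cv2.imread("front_camera_frame.png")

# Apply the distortion model in reverse to straighten the image.
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("front_camera_undistorted.png", undistorted)
```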
The process of correcting for distortion is quite sophisticated. Distortion in the footage captured by the camera must be minimized so that what the driver sees matches the real world as closely as possible. Distortion correction algorithms are used to do this, and along with the characteristics of the lens, the position and angle of the camera on the vehicle also play an important role. The factors describing the tilt of the vehicle-mounted camera, which also distorts the footage, are called external variables (extrinsic parameters). By comparing the captured video with the real-world grid, the angle by which the grid appears rotated in the video, or the camera's tilt, can be determined from changes in the grid's position. Based on this, the external variables can be used to compensate for the distortion.
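As an illustration of how the tilt can be recovered from the grid, the sketch below again assumes OpenCV; the grid-corner coordinates are hypothetical, and cv2.solvePnP stands in for whatever estimation method a particular device actually uses.

```python
import cv2
import numpy as np

# Known 3D positions of four grid corners on the ground (metres, Z = 0 plane).
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)

# Where those same corners appear in the undistorted camera image (pixels, hypothetical).
image_points = np.array([[512.0, 600.0],
                         [768.0, 598.0],
                         [820.0, 700.0],
                         [460.0, 703.0]], dtype=np.float64)

# Intrinsic parameters from the previous step (hypothetical values).
camera_matrix = np.array([[400.0, 0.0, 640.0],
                          [0.0, 400.0, 360.0],
                          [0.0, 0.0, 1.0]])

# solvePnP returns the rotation (camera tilt) and translation that map
# grid coordinates into camera coordinates: the external variables.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
rotation_matrix, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix describing the tilt
print("camera tilt (rotation matrix):\n", rotation_matrix)
print("camera position offset (translation):\n", tvec.ravel())
```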
Once the distortion has been corrected, the next step is perspective transformation: the points in the three-dimensional real world that correspond to points in the video are estimated, and from these correspondences an image free of perspective effects is obtained. Normally, when a camera projects the three-dimensional real world onto a two-dimensional image, objects of the same size appear smaller the farther they are from the camera. In a top-down image, however, the size of an object should not change with distance, so it is important to remove this perspective effect.
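The following small numerical sketch, assuming an ideal pinhole camera with a hypothetical focal length, shows the effect being described: the same one-metre object spans fewer and fewer pixels as it moves away from the camera, which is exactly what the top-down view must undo.

```python
f = 400.0             # focal length in pixels (hypothetical)
object_width_m = 1.0  # a 1-metre-wide object on the ground

for distance_m in (2.0, 5.0, 10.0):
    # Pinhole projection: image size = focal length * real size / distance.
    width_px = f * object_width_m / distance_m
    print(f"at {distance_m:4.1f} m the object spans {width_px:6.1f} px")

# In the corrected bird's-eye image, the same object should span the same
# number of pixels regardless of its distance from the camera.
```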
If the positions of a few points in the video obtained through perspective transformation and their corresponding points on the real-world grid are known, the correspondence between all the points in the video and the points on the grid can be described using an imaginary coordinate system. Using this correspondence, the points in the image can be placed on a plane so that the shape of the grid and the relative sizes of its cells match the real world, resulting in a two-dimensional image. By stitching together the images from each direction in this way, the driver sees a 360° view on the monitor, as if looking down on the vehicle from above.
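As a rough sketch of that mapping, again assuming OpenCV, the four point correspondences below (hypothetical values) define a homography from the undistorted camera image to an imaginary top-down coordinate system; warping each camera's image with its own homography and compositing the results onto one canvas produces the overhead view.

```python
import cv2
import numpy as np

# Four grid corners in the undistorted camera image (pixels, hypothetical).
src_points = np.float32([[512, 600], [768, 598], [820, 700], [460, 703]])

# The same corners in the bird's-eye coordinate system, where one grid
# cell is drawn as a 100 x 100 pixel square regardless of distance.
dst_points = np.float32([[300, 300], [400, 300], [400, 400], [300, 400]])

# Homography describing the correspondence between image and ground plane.
H = cv2.getPerspectiveTransform(src_points, dst_points)

frame = cv2.imread("front_camera_undistorted.png")          # hypothetical file name
top_down = cv2.warpPerspective(frame, H, (800, 800))        # this camera's top-down patch

# Repeating this for the rear and side cameras and compositing the patches
# onto one canvas yields the 360-degree overhead view shown to the driver.
canvas = np.zeros((800, 800, 3), dtype=np.uint8)
mask = top_down.sum(axis=2) > 0
canvas[mask] = top_down[mask]
cv2.imwrite("birds_eye_view.png", canvas)
```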
The technology behind this process is complex and precise, but the results are extremely helpful to the driver. Especially in tight parking spaces or complex road conditions, these devices play an important role in ensuring driver safety. Advances in this technology are greatly improving the safety and convenience of driving and will be an important foundation for the development of autonomous vehicles in the future.