Creating low cost and high performance “Eyes” for autonomous vehicles and drones
One of the key aspects of the promising AI industry, represented by autonomous vehicles, drones, and robots, is improving the ability to perceive the environment and movement around a vehicle. Such a system should understand the overall 3D structure of its surroundings while recognizing nearby objects and its own motion, so that it can localize itself within that environment. Accordingly, robust research has been carried out on “computer vision,” a technology for identifying the surrounding environment from images provided by a vision sensor. Although conventional cameras are the most commonly used vision sensors, they have technical limitations. To overcome these limitations, Professor Kuk-Jin Yoon and his team have developed driving environment recognition algorithms using new types of vision sensors. They have conducted an in-depth study of omni-directional camera-based vision technology and developed an event camera-based technology that obtains robust visual information under dynamic lighting and motion conditions. Many relevant industry corporations have also taken a keen interest in their new approach.
Maximizing the strengths of 360-degree and Neuromorphic cameras
Prof. Yoon and his team have developed a 360-degree omnidirectional perception technology for autonomous driving that can perceive all directions. The technology can be attached to mobility systems, helping them understand the surrounding 3D environment, including people and cars, and identify possible driving paths. A 360-degree camera has a wider viewing angle than conventional cameras, so it can acquire more visual information with fewer cameras. However, because the captured images are modeled on a sphere, they become distorted when represented as rectangular planar images. Against this backdrop, the research team developed a new representation method based on a regular icosahedron, enabling various perception algorithms to be implemented on 360-degree images.
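The two ideas in this paragraph can be made concrete with a small illustrative sketch (not the team's actual method): a rectangular planar, i.e. equirectangular, layout stretches pixels near the poles of the image sphere by 1/cos(latitude), whereas subdividing a regular icosahedron yields a near-uniform triangular sampling of the sphere.

```python
import math

def equirect_stretch(lat_deg):
    """Horizontal stretch factor of an equirectangular (rectangular planar)
    projection at a given latitude on the image sphere.
    Near the equator the stretch is ~1x; toward the poles it diverges,
    which is the distortion described in the text."""
    return 1.0 / math.cos(math.radians(lat_deg))

def icosphere_faces(subdivisions):
    """A regular icosahedron has 20 triangular faces; each subdivision
    splits every triangle into 4, giving 20 * 4**n near-equal-area
    cells -- a far more uniform sampling of the sphere than a
    rectangular pixel grid."""
    return 20 * 4 ** subdivisions

for lat in (0, 30, 60, 80):
    print(f"latitude {lat:2d} deg: stretch = {equirect_stretch(lat):.2f}x")
for n in range(4):
    print(f"{n} subdivisions: {icosphere_faces(n)} faces")
```

The contrast this sketch illustrates: equirectangular distortion grows without bound toward the poles, while the subdivided icosahedron keeps cell sizes nearly uniform everywhere, which is why perception algorithms behave more consistently on it.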
The team has also developed a method of generating high-resolution images from neuromorphic cameras called “event cameras.” An event camera is a sensor that asynchronously detects brightness changes at individual pixels, enabling low-latency, high-speed capture. Because it operates stably even under dynamic lighting changes, it can be used in various applications such as autonomous vehicles and drones. However, the sensor's resolution is relatively low, and its unconventional data format requires new algorithms. To address these limitations, the team developed a deep learning technology that uses neural networks to generate high-resolution images from low-resolution event data. The proposed network not only reconstructs high-quality images from event data, but also upscales the low-resolution event data to high resolution, ultimately generating high-quality, high-resolution images.
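To see why event data needs new algorithms, consider its format: instead of frames, the sensor emits an asynchronous stream of (x, y, timestamp, polarity) events, one per brightness change. A minimal toy sketch (not the team's network) shows the simplest way such a stream is turned into an image-like tensor that a neural network can consume:

```python
import numpy as np

# Toy event stream: one tuple (x, y, timestamp, polarity) per brightness
# change, fired asynchronously -- there is no fixed frame rate.
events = [
    (2, 3, 0.001, +1),
    (2, 3, 0.002, +1),
    (5, 1, 0.004, -1),
    (2, 3, 0.009, -1),
]

def accumulate_events(events, height, width):
    """Accumulate event polarities into a 2D frame. Summing polarities
    per pixel is the simplest bridge between the asynchronous event
    format and the dense tensors that deep networks expect."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x] += p  # +1 for brightness increase, -1 for decrease
    return frame

frame = accumulate_events(events, 8, 8)
print(frame[3, 2])  # net polarity at pixel (2, 3)
```

Because events are sparse and carry no absolute intensity, reconstructing a full grayscale image (let alone a higher-resolution one) from such accumulations is an ill-posed problem, which is where the learned network described above comes in.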
A comprehensive vision recognition technology comparable to the human eye
Prof. Yoon said, “The 360-degree camera has many advantages over existing cameras, but it also has disadvantages. We developed an algorithm that obtains good results and compensates for the shortcomings.” He continued, “Event cameras also have many advantages in dynamic environments,
so they can be used in autonomous vehicles, drones, robots, and military operations. We will continue our study to widen the usage of event cameras.” The results of their research were published in IEEE/CVF CVPR, an international conference on computer vision and pattern recognition, as well as in IEEE
TPAMI, an international academic journal for pattern analysis and machine intelligence.
Prof. Kuk-Jin Yoon
2020 KI Annual Report