Background
Comprehending the surrounding environment is a key requirement for automated and connected vehicles, which must be able to sense their surroundings under varied conditions and with a comprehensive field of view. Cooperative perception (CP) overcomes perception bottlenecks such as physical occlusion and limited sensing range by fusing information from spatially separated entities, such as other vehicles and roadside units. Fusion methods can be broadly classified as early fusion, intermediate fusion, and late fusion.
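To make the distinction concrete, the sketch below contrasts two ends of the fusion spectrum on toy data. It is an illustrative assumption, not part of the UCR technology: `early_fusion` shares raw sensor points before any processing (high bandwidth, maximum information), while `late_fusion` shares only final detections and merges near-duplicates (low bandwidth, less information). Intermediate fusion, which the ACP framework builds on, would instead exchange learned features between these two stages.

```python
import numpy as np

# Toy readings from two connected vehicles (hypothetical data).
# Each row is a detected point: (x, y, confidence).
ego_points = np.array([[1.0, 2.0, 0.9], [3.0, 1.0, 0.8]])
remote_points = np.array([[3.1, 1.1, 0.7], [5.0, 4.0, 0.95]])

def early_fusion(a, b):
    """Share raw data: concatenate point clouds before any processing."""
    return np.vstack([a, b])

def late_fusion(dets_a, dets_b, radius=0.5):
    """Share final detections: merge results, keeping the higher-confidence
    one when two detections fall within `radius` of each other."""
    merged = list(dets_a)
    for d in dets_b:
        dists = [np.hypot(d[0] - m[0], d[1] - m[1]) for m in merged]
        if dists and min(dists) < radius:
            i = int(np.argmin(dists))
            if d[2] > merged[i][2]:
                merged[i] = d
        else:
            merged.append(d)
    return np.array(merged)

print(early_fusion(ego_points, remote_points).shape)  # (4, 3): all raw points
print(late_fusion(ego_points, remote_points).shape)   # (3, 3): duplicates merged
```

Here the two nearby detections (around x≈3, y≈1) collapse into one in late fusion, showing why it transmits less but can also lose detail that raw-data sharing would preserve.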
Technology
Researchers at UCR have developed a novel, adaptive cooperative perception (ACP) framework that improves perception capabilities in connected and autonomous vehicles, particularly when communication bandwidth is limited. The ACP framework features a pillar attention encoder and an adaptive feature filtering mechanism, which together ensure that crucial information is shared even when communication bandwidth is constrained.
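The general idea of adaptive feature filtering can be sketched as follows. This is a minimal, hypothetical illustration, not the patented UCR method: each intermediate feature vector (e.g. one per pillar) is assigned an importance score, and only the highest-scoring vectors that fit the current byte budget are transmitted, so the most crucial information survives when bandwidth shrinks.

```python
import numpy as np

def adaptive_feature_filter(features, scores, budget_bytes, bytes_per_value=4):
    """Keep only the most important feature vectors that fit a byte budget.

    features: (N, D) array of intermediate features (e.g. pillar features).
    scores:   (N,) importance score per vector (hypothetical; could come
              from an attention head or a learned gating network).
    Returns the selected features and their original indices, so a receiver
    could place them back on the spatial grid.
    """
    n, d = features.shape
    per_vector = d * bytes_per_value
    k = min(n, budget_bytes // per_vector)   # how many vectors fit the budget
    keep = np.argsort(scores)[::-1][:k]      # highest-scoring first
    return features[keep], np.sort(keep)

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64)).astype(np.float32)
scores = rng.random(100)

# Full bandwidth: everything fits; constrained: only the top vectors are sent.
sent_full, _ = adaptive_feature_filter(feats, scores, budget_bytes=100 * 64 * 4)
sent_low, idx = adaptive_feature_filter(feats, scores, budget_bytes=10 * 64 * 4)
print(sent_full.shape, sent_low.shape)  # (100, 64) (10, 64)
```

The key design point this sketch captures is that the sender adapts what it shares to the available bandwidth rather than using a fixed message size, which is the behavior the ACP framework's filtering mechanism is described as providing.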
Conceptual illustration of the ACP framework. The top shows state-of-the-art CP methods; the bottom shows the current method, which can support more adaptive cooperative conditions.
Schematic of the ACP framework
Pillar Attention Encoder process
Please review all inventions by the Transportation Systems Research team at UCR.
Please visit the Transportation Systems Research group's website to learn more about their research.
Patent Pending
3-D object detection, automated vehicles, autonomous vehicles, autonomous driving, autonomous navigation, C2V, connected vehicles, cooperative perception, V2X