The webinar will begin shortly - TERAKI
The webinar will begin shortly…
IF YOU HAVE NOT SIGNED IN WITH YOUR FULL NAME & COMPANY, PLEASE DO SO. You can "rename" yourself in the "Participants" tab. If you do not include your name and company, you may be ejected from the webinar!
Welcome to Webinar Wednesday!
HOUSEKEEPING:
• Make sure you SIGN IN with your FULL NAME & COMPANY, as we may need it to unmute you and bring you into the discussion (if you do not include your name and company, you may be ejected from the call!)
• You can "rename" yourself in the "Participants" tab
• Questions can be asked by either:
  ➢ Clicking the "Participants" tab and then "Raise hand"
  ➢ Clicking the "Chat" tab and using the text box to ask a question
  ➢ You can direct this privately to me or to everyone
• The PRESENTATION PDF and RECORDING of this webinar will be available for download to members from the IWPC research library at www.iwpc.org
IWPC Webinars & 2021 "Virtual" Workshops
May 19: V-WORKSHOP: In Search of Optimum Automotive Sensors
May 26: 3.5 GHz Beam Steering Antenna Design and Test Results
June 9: V-WORKSHOP: End-to-End Network Slicing
June 16: iNEMI: 5G Materials Characterization Challenges and Opportunities
June 23: V-WORKSHOP: 5G Orchestration and Automation
June 30: How Can Network Cabling Protect Mission-Critical Public Safety Communications?
July 15: V-WORKSHOP: Exploring the 6G Vision
July 21: Improving the Integrity of Public Safety and Private Network Wireless Connectivity through Disruptive Technology
July 28: 5G Millimeter Wave: A Paradigm Shift in System Engineering, DPD Implementation and Customer Value
Aug 11: V-WORKSHOP: CBRS and Private Networks
Aug 18: Building The 5G Network Of The Future
Welcome!
Intelligent Edge Processing To Achieve Better Radar-Fed AI-Models
Speaker: Geert-Jan van Nunen, Chief Commercial Officer, Teraki
IWPC Members Only
Automotive RADAR market – set for strong growth
[Growth projection chart; figures shown: 400M; automotive RADAR units 55M → 223M (4x); L4+ cars 5.5M → 40M (7x)]
Drivers:
• Complementary to camera
• Lidars too expensive
• Tesla
• Radars getting more …
Automotive Radar – Applications
L4 cars typically have:
o 4-6 normal Radars
o 1 long range Radar
L2+ Radar applications: Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Blind Spot Detection (BSD), Forward Collision Warning System, Intelligent Park Assist; plus ADAS and L4 applications (short range RADAR and long range RADAR).
RADAR has a strong role to play in the development of safety systems and autonomous driving
Need for high-resolution Radar & ML: detect, localize & classify
Low Angular Resolution: 4Rx/2Tx antenna array | High Angular Resolution: 32Rx/2Tx antenna array
Why higher resolution Radar?
1. Enhances the sensing capability of automotive radar in dense and complicated urban environments.
2. Provides high resolution in terms of points per frame in the point cloud, for better object detection, localization and classification.
3. Enables the high-performance perception and sensor fusion models required for AV level L4+.
For safe L2 models, more accurate information is needed. This can only be delivered by higher resolution (see the resolution sketch below).
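As a rough illustration of why the array size matters, the sketch below estimates boresight angular resolution for the two arrays named above. It assumes a uniform linear MIMO virtual array with half-wavelength spacing, a textbook rule of thumb rather than a statement about any specific radar product:

```python
import numpy as np

def angular_resolution_deg(n_rx: int, n_tx: int) -> float:
    """Rule-of-thumb boresight angular resolution of a MIMO radar.

    Assumes a uniform linear virtual array of n_rx * n_tx elements with
    lambda/2 spacing: theta_res ~ lambda / (N * d) = 2 / N radians.
    """
    n_virtual = n_rx * n_tx
    return float(np.degrees(2.0 / n_virtual))

print(f"4Rx/2Tx : {angular_resolution_deg(4, 2):.1f} deg")   # ~14.3 deg
print(f"32Rx/2Tx: {angular_resolution_deg(32, 2):.1f} deg")  # ~1.8 deg
```

Under this assumption, the 8x larger virtual array buys roughly 8x finer angular resolution, which is what turns a sparse point cloud into one dense enough for localization and classification.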
Zoomed-in: for safe L2 models, more accurate information is needed. This can only be delivered by higher resolution.
High-res radar comes with severe data challenges
High-res radars are very data intensive: a factor of 50 to 100 more data *). This leads to severe challenges in:
- Application latency
- Hardware costs
- Power consumption
*) Based on the configuration of the previous slide; this can go up to a factor of 1,000 when comparing older radar with imaging radar.
The data cube in the Range-Doppler domain of modern high-res radars easily exceeds hundreds of thousands of points; the entire 4D cube can be up to millions of points. This creates a bandwidth bottleneck for data transmission and application latency. Good data-reduction strategies are necessary to address the data size challenges (costs, latency, power).
Teraki intelligent pre-processing overcomes the high-res radar data size problem
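To make the "hundreds of thousands to millions of points" claim concrete, here is a minimal sizing sketch. The cube dimensions, sample format and frame rate are illustrative assumptions, not the configuration from the previous slide:

```python
# Hypothetical high-res radar cube for illustration:
# 512 range bins x 256 Doppler bins x 64 virtual channels,
# complex samples (32-bit I + 32-bit Q), 20 frames per second.
range_bins, doppler_bins, channels = 512, 256, 64
bytes_per_sample = 8
frames_per_second = 20

points = range_bins * doppler_bins * channels
rate_bytes = points * bytes_per_sample * frames_per_second
print(f"{points / 1e6:.1f} M points per frame")     # ~8.4 M points
print(f"{rate_bytes / 1e9:.2f} GB/s raw data rate")  # ~1.34 GB/s
```

Even this modest assumed configuration lands in the millions of points per frame, which is why raw transport off the sensor quickly becomes the bottleneck.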
Data reduction ML-based
How to reduce data without losing accuracy?
Radar processing scheme: Digitalization (raw signal) → Pre-processing & early fusion → Application & decisions
Pipeline: Raw ADC → Teraki pre-processing → range FFT → Range Profile → doppler FFT → Range Doppler → 3D data cube → Target Detection (CFAR) → Direction of Arrival (DoA)
Teraki pre-processing algorithm:
- Learns the distribution of the clutter and background noise in range-doppler maps.
- Reduces/suppresses noise components significantly stronger than those of targets of interest.
Introducing Teraki in the conventional FMCW Radar Signal Processing Pipeline (a sketch of the conventional stages follows below)
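For orientation, here is a minimal sketch of the conventional stages of this pipeline (range FFT, Doppler FFT, cell-averaging CFAR); the Teraki pre-processing step itself is proprietary and not reproduced. Windowing, calibration and DoA estimation are omitted:

```python
import numpy as np

def range_doppler_map(adc: np.ndarray) -> np.ndarray:
    """Conventional FMCW processing for one channel.

    adc has shape (n_chirps, n_samples): range FFT along samples,
    Doppler FFT across chirps; returns the power map.
    """
    rng = np.fft.fft(adc, axis=1)                       # range profiles
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), 0)    # Doppler spectrum
    return np.abs(rd) ** 2

def ca_cfar_1d(power: np.ndarray, guard: int = 2, train: int = 8,
               scale: float = 5.0) -> np.ndarray:
    """Cell-averaging CFAR along one axis: a cell is a detection if its
    power exceeds scale * mean(surrounding training cells)."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        lead = power[i - guard - train : i - guard]
        trail = power[i + guard + 1 : i + guard + 1 + train]
        hits[i] = power[i] > scale * np.mean(np.concatenate([lead, trail]))
    return hits
```

The learned pre-processing sits before the FFTs, so everything downstream (CFAR, DoA, the 3D cube) operates on a smaller, denoised signal.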
SNR improvement with Teraki pre-processing
SNR gain by applying Teraki pre-processing:
➢ 31% SNR improvement on target
➢ at 90% data reduction
➢ pre-processing time of 1.5 ms for 32-bit data points
[Figure: range-doppler map before and after processing; Teraki computes the signal's power for target identification (car)]
Teraki software improves the signal-to-noise ratio by 31% in a pre-processing time of 1.5 ms for 32-bit data points
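One common way to quantify such a before/after gain on a range-Doppler map is peak target power over mean power of the remaining cells; a minimal sketch, not necessarily the exact metric behind the 31% figure:

```python
import numpy as np

def target_snr_db(rd_map: np.ndarray, target_idx: tuple) -> float:
    """SNR of one target cell in a range-Doppler power map:
    target power over the mean power of all other (noise) cells, in dB."""
    noise = rd_map.astype(float).copy()
    noise[target_idx] = np.nan                     # exclude the target cell
    return 10.0 * np.log10(rd_map[target_idx] / np.nanmean(noise))

# Usage: target_snr_db(map_after, (r, d)) - target_snr_db(map_before, (r, d))
# gives the SNR gain of the pre-processing for that target.
```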
Radar target noise removal
[Figure: signal strength (dBFS) around the object, before and after pre-processing]
Unprocessed signals have more noise around the target, making it difficult to identify the object. Teraki intelligent pre-processing reduces the noise while preserving the amplitude of the object, and keeps or improves the target-to-noise level while compressing the radar data cube.
Teraki extracts the maximum of information while allowing the use of low-powered hardware
Radar Detection ROI-based
Sensor Fusion – Radar ROI detector
Step 1: Fast ROI detection
- Lightweight, unsupervised ROI detection to get coarse ROIs, with a high false-positive rate.
Step 2: Refined ROIs using ML
- 3D features are computed for each proposed ROI from the 3D radar cube.
- ROIs are classified as true positives or false positives, keeping only the TRUE ROIs coming from road users (pedestrians, cars, etc.) → increasing the accuracy.
- The true ROIs can then be classified as pedestrian, car, static object, etc.
(A minimal sketch of the two steps follows below.)
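A minimal sketch of the two-step idea, assuming CFAR hits over the range-Doppler plane as input to step 1; the clustering rule and the per-ROI features are illustrative, not Teraki's actual feature set:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def coarse_rois(cfar_hits: np.ndarray, min_cells: int = 3) -> list:
    """Step 1: cluster neighbouring CFAR detections into coarse ROIs
    (cheap, unsupervised, deliberately permissive)."""
    labels, _ = label(cfar_hits)
    return [s for s in find_objects(labels) if cfar_hits[s].sum() >= min_cells]

def roi_features(cube: np.ndarray, roi: tuple) -> np.ndarray:
    """Step 2 input: simple features per proposed ROI, taken from the
    3D radar cube (the ROI slices index its range/Doppler axes)."""
    patch = cube[roi]
    return np.array([patch.mean(), patch.max(), patch.var(), patch.size])

# A binary classifier (e.g. gradient boosting) trained on these features
# then separates TRUE ROIs (road users) from false positives, and a second
# classifier assigns the class (pedestrian, car, static object, ...).
```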
Teraki ROI detector accuracy – radar only
Step 1: Lightweight (unsupervised) ROI detection: 0.56 F1 score (radar only)
Step 2: Refined (ML-based) ROI detection: 0.83 F1 score *) (radar only)
*) Note: obtained with limited training; with more training the F1 scores will improve.
ROIs correspond to road users, i.e. cars, trucks or pedestrians.
Teraki refined ROI detection on radar-only data has an F1 score of 0.83
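For reference, the F1 score is the harmonic mean of precision and recall; the precision/recall pair below is made up purely to show the arithmetic, not measured Teraki data:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f"{f1(0.85, 0.81):.2f}")  # ~0.83 with these illustrative inputs
```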
Sensor Fusion On low-powered hardware
Independent ROI detection using two different sensor types
Camera strengths: classification, detecting visual texture.
Radar strengths: robust to bad weather conditions, accurate distance.
Image-based object detector:
• Accurate bounding box
• Accurate object class
Radar-based object detector:
• Range information
• Velocity information
• Additional 3D features (intensity, cross section, object shape, variance, etc.)
Teraki "ROI" data strategy applied to two different signal types
Teraki hybrid sensor fusion processed on a single core
Radar path: raw radar data (3D cube: azimuth x range x velocity) → lightweight ROI detection → ROI classification → radar object (azimuth, range, velocity, class, additional features)
Camera path: raw camera data → joint detection / classification (image object detector) → camera object (object class, pixel coordinates, confidence level, additional features)
Sensor fusion: radar object + camera object → fused object (3D spatial coordinates, velocity, object class, object confidence)
Runs on 1 ARM core (R52, A72) or Aurix™ (TC4).
The hybrid sensor fusion approach improves accuracy and can be processed on a single ARM core / Aurix™.
ML-based object detection & sensor fusion
Fusion step: radar objects are projected onto the image, and image objects and radar objects are matched. Example with 2 cars and 3 pedestrians walking side by side.
Teraki integrates detections from Radar and Camera to achieve the best detection accuracy (a matching sketch follows below).
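A minimal sketch of such a projection-and-matching step, assuming calibrated camera intrinsics K and radar-to-camera extrinsics (R, t); the pixel-distance cost and the threshold are illustrative choices, not Teraki's published method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def project_radar_to_image(points_3d: np.ndarray, K: np.ndarray,
                           R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project radar object centroids (N, 3) into pixel coordinates (N, 2)."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # radar frame -> camera frame
    uv = K @ cam                              # pinhole projection
    return (uv[:2] / uv[2]).T

def match_objects(radar_px: np.ndarray, image_px: np.ndarray,
                  max_dist: float = 50.0) -> list:
    """Hungarian matching on pixel distance between projected radar
    objects and image bounding-box centers."""
    cost = np.linalg.norm(radar_px[:, None] - image_px[None, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_dist]
```

Each matched pair can then carry the camera's class and bounding box together with the radar's range and radial velocity, which is exactly the fused object of the previous slide.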
Radar and Camera Sensor Fusion in action - city
The radar correctly detects an object behind the trees (early detection). The camera later correctly identifies the class of that object ("car").
Radar and Camera Sensor Fusion in action - highway
Highway scenario: different environment, higher speed. Blue = cars, green = trucks. All objects are detected and classified; all road users can be identified, along with their corresponding object class, range from the ego vehicle, and radial velocity.
Latency Low for real-time applications
Sensor fusion latency
CAR → CLOUD pipeline: TOI inference → ROI inference → Encoding → Transmission → Sensor fusion unit + Decoding
• Camera 1920x1024 (2D): ROI inference 5 ms, encoding 12.9 ms; transmission up to 40x faster with ROI
• Radar 128x128x12 (3D): ROI inference 10.5 ms; transmission 10x faster (without ROI)
• Sensor fusion unit + decoding: Lidar/Camera 8 ms (ARM A72, single core); Radar/Camera 3.2 ms (ARM R52, single core)
Latencies in ms; camera + radar + IMU data.
Reference performances per SoC: 3D: ARM A72 1-core CPU; 2D: Nvidia Jetson Nano GPU; Radar: 2 GHz Intel CPU, 1 core.
Raw vs reduced: 2D camera: 40x faster transmission (10x codec, 4x ROI); 3D radar: 10x faster transmission.
Teraki accelerates the new L2+ functions with high precision and low latency on series-production hardware.
Benefits Summary
Value proposition: accurate sensor fusion on production hardware
With more accurate machine learning: customers can continuously train and update pre-processing and their models in the drone. SNR is improved by 31%, and L2+ AI-model accuracies are increased by 20-30%.
With 10x more efficient CPU utilization: up to 10x more efficient utilization of available computing power resources, additional to AI-chip optimization. Up to 6x more efficient RAM utilisation on top of AI-chip optimization.
With lower costs: production-grade hardware can do the job. Customers thereby save on hardware costs (chipsets, bandwidth) when designing production cars, and lower the production BoM.
With quicker applications: customers' L2+ models achieve lower latencies. 60% of application latency is data pre-processing and 40% is ML; Teraki reduces this 60% by a factor of 10x.
Improving accuracy of radar-driven AI-models at lower latencies on low-powered hardware.
Thank you!
For more details, contact us at: info@teraki.com
Q&A
For more details, contact us at: info@teraki.com
IWPC Webinars & 2021 "Virtual" Workshops
May 19: V-WORKSHOP: In Search of Optimum Automotive Sensors
May 26: 3.5 GHz Beam Steering Antenna Design and Test Results
June 9: V-WORKSHOP: End-to-End Network Slicing
June 16: iNEMI: 5G Materials Characterization Challenges and Opportunities
June 23: V-WORKSHOP: 5G Orchestration and Automation
June 30: How Can Network Cabling Protect Mission-Critical Public Safety Communications?
July 15: V-WORKSHOP: Exploring the 6G Vision
July 21: Improving the Integrity of Public Safety and Private Network Wireless Connectivity through Disruptive Technology
July 28: 5G Millimeter Wave: A Paradigm Shift in System Engineering, DPD Implementation and Customer Value
Aug 11: V-WORKSHOP: CBRS and Private Networks
Aug 18: Building The 5G Network Of The Future