Radar Vision

While radar technology has been around for a long time, it is only through recent developments in mmWave FMCW (frequency-modulated continuous-wave) radar that true “high” resolution Radar Vision has become achievable. The RadarIQ-M1 sensor uses the latest 60GHz mmWave technology to make radar vision possible like never before.

mmWave radar works by emitting rapid bursts of radio waves from the sensor and listening for the reflections. These reflections can be processed to produce “Radar Vision” images, allowing the world to be seen by radar.
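To get a feel for the numbers involved, the short Python sketch below works through the standard FMCW range-resolution formula. The 4 GHz sweep bandwidth is an assumed, illustrative value, not the RadarIQ-M1's actual chirp configuration.

    # Back-of-the-envelope FMCW resolution figures.
    C = 3e8  # speed of light, m/s

    # Assumed values: a 4 GHz frequency sweep in the 60-64 GHz band.
    # These are illustrative, not the RadarIQ-M1's chirp settings.
    bandwidth_hz = 4e9
    carrier_hz = 60e9

    # Range resolution of an FMCW radar is c / (2 * B): the wider the
    # frequency sweep, the closer together two reflections can sit and
    # still be told apart.
    range_resolution_m = C / (2 * bandwidth_hz)
    wavelength_m = C / carrier_hz  # ~5 mm, hence "millimetre-wave"

    print(f"Range resolution: {range_resolution_m * 100:.2f} cm")  # 3.75 cm
    print(f"Wavelength: {wavelength_m * 1000:.1f} mm")             # 5.0 mm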

There are four main ways of working with radar vision data:

  1. Heatmaps
  2. Point Clouds
  3. Tracked Objects
  4. Sensor Fusion

Heatmaps

Heatmaps are the most primitive way of visualizing radar data. That said, they are very valuable for understanding what the radar sensor can "see", because this is not always intuitive. Heatmaps are usually plotted with distance on one axis and either the angle or the x-coordinate on the other (depending on whether the data is viewed in the polar or the Cartesian coordinate system). The strength of each reflection is represented by the color.

[Image: Polar vs Cartesian heatmaps]
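As a rough illustration of the two views, the Python sketch below resamples a polar (range, angle) heatmap onto a Cartesian x-y grid. The intensity matrix is random stand-in data and the ±60° field of view is an assumption, so treat this as a sketch of the coordinate conversion rather than real sensor output.

    import numpy as np

    # Resample a polar (range, angle) heatmap onto a Cartesian x-y grid
    # by nearest-cell lookup.
    n_range, n_angle = 64, 32
    max_range_m = 10.0
    angles = np.radians(np.linspace(-60, 60, n_angle))  # assumed FoV

    heatmap_polar = np.random.rand(n_range, n_angle)  # stand-in intensities

    # Build the Cartesian grid and find each pixel's distance and angle.
    xs = np.linspace(-max_range_m, max_range_m, 128)
    ys = np.linspace(0.0, max_range_m, 64)
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)          # distance of each pixel from the sensor
    theta = np.arctan2(X, Y)    # angle off boresight (y points out of sensor)

    # Map each pixel back to the nearest polar cell.
    r_idx = np.clip((r / max_range_m * (n_range - 1)).astype(int),
                    0, n_range - 1)
    a_idx = np.clip(((theta - angles[0]) / (angles[-1] - angles[0])
                     * (n_angle - 1)).astype(int), 0, n_angle - 1)

    heatmap_cartesian = heatmap_polar[r_idx, a_idx]
    # Blank out pixels outside the sensor's range or field of view.
    heatmap_cartesian[(r > max_range_m) | (np.abs(theta) > angles[-1])] = 0.0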

Point Clouds

Radar heatmaps contain copious amounts of data, covering both the reflections from objects and all the empty space in between. For most applications only the reflections are of interest, so the radar data can be consolidated down to a series of points, where each point represents a single reflection. Point clouds are much faster to process than raw heatmap images. The RadarIQ-M1 sensor outputs a sparse point cloud, where each point includes the 3D position of the reflection (x, y, z coordinates) as well as its instantaneous velocity and reflection intensity.

[Image: Point cloud]
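In code, a point like this might be represented as a small record. The field names and the intensity threshold below are illustrative; they are not the RadarIQ SDK's actual message layout.

    from dataclasses import dataclass

    # Illustrative per-point record matching the fields described above.
    @dataclass
    class RadarPoint:
        x: float          # metres, left/right of the sensor
        y: float          # metres, distance out from the sensor
        z: float          # metres, above/below the sensor
        velocity: float   # m/s, instantaneous radial velocity
        intensity: int    # relative reflection strength

    def strong_reflections(points, min_intensity=20):
        """Keep only points whose reflection is strong enough to trust.

        The threshold of 20 is an arbitrary example value.
        """
        return [p for p in points if p.intensity >= min_intensity]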

Tracked Objects

The third way of viewing radar vision data is to apply higher-level processing to the point cloud to form individually tracked objects. This involves applying some form of clustering to the point cloud data to identify groups of points which likely come from one object. Additional tracking algorithms, such as Kalman filters, can be layered on top of the clustering to improve performance.

Object tracking is difficult because there are many variables to account for, and many situations can throw off the tracking (such as two people moving so close to each other that they appear as one).

[Image: Object tracking]
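A minimal sketch of the clustering step is shown below, using DBSCAN as one common choice of clustering algorithm; the eps and min_samples values are illustrative and would need tuning for a real scene.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_points(xyz: np.ndarray, eps=0.3, min_samples=4):
        """Group nearby point-cloud returns into candidate objects.

        xyz: (N, 3) array of point positions in metres.
        Returns a list of (centroid, point_indices) pairs, one per
        cluster. Points DBSCAN labels -1 are noise and are dropped.
        """
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xyz)
        clusters = []
        for label in set(labels) - {-1}:
            idx = np.flatnonzero(labels == label)
            clusters.append((xyz[idx].mean(axis=0), idx))
        return clusters

A per-object Kalman filter could then track each centroid from frame to frame, smoothing jitter and bridging momentary dropouts.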

Sensor Fusion

One of the most common applications of radar vision data is sensor fusion, where data from two or more sensors is combined into a single representation of the world. Every sensor has its strengths and weaknesses, and vision sensors are no exception. Cameras are great at capturing high-resolution data but provide little in the way of depth information and struggle under difficult lighting conditions. Radar, on the other hand, has low resolution compared to a camera but is extremely robust and works under almost any conditions.

[Image: Sensor fusion]

Sensor fusion with radar typically tries to achieve three things:

  1. To add depth information.
  2. To increase the level of confidence in detections made with other sensors.
  3. To 'fill the gaps' when primary sensors struggle.

There are many approaches to sensor fusion, using either heatmaps, point clouds, or tracked objects as the radar data source. Complex algorithms are needed to make sensor fusion work well. The key things these algorithms must do are:

  • Align the sensors’ fields of view so that objects detected by each sensing technology show up in the same position (a minimal alignment sketch follows this list).
  • Determine which detections correspond to objects that require action.
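To make the alignment step concrete, here is a minimal sketch that projects radar points into camera pixel coordinates using a simple pinhole camera model. The camera intrinsics and the radar-to-camera transform are placeholder values; in a real system both would come from calibration, which would also handle any axis-convention differences between the two sensors.

    import numpy as np

    # Placeholder pinhole intrinsics: fx, fy = 800 px, centre (320, 240).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Placeholder radar-to-camera transform: axes aligned, radar mounted
    # 5 cm below the camera. Real values come from extrinsic calibration.
    R = np.eye(3)
    t = np.array([0.0, -0.05, 0.0])

    def radar_to_pixels(points_xyz: np.ndarray):
        """Project radar points into camera pixel coordinates.

        points_xyz: (N, 3) points already in the camera axis convention
        (z pointing forward out of the lens). Returns (N, 2) pixel
        coordinates plus per-point depth in metres, so detections from
        the camera can be given a distance.
        """
        cam = points_xyz @ R.T + t        # move points into camera frame
        depth = cam[:, 2]
        uv = cam @ K.T                    # apply intrinsics
        uv = uv[:, :2] / depth[:, None]   # perspective divide
        return uv, depth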
