Aaron Heuckroth (Nesciosquid)

Analyzing mathematical constraints for beam steering

I want to do this:

In that simulation, a device scans a beam across a 3D space, where it hits sensors that are used to calibrate the beam’s position and orientation so that it can be pointed at specific objects. As I move towards implementing a real-life version, I wanted to nail down some of the constraints that will determine how well the system can perform in terms of scanning speed and resolution.

Skip to a section:

  1. Field of View
  2. Scanning Speed
  3. Beam Width
  4. Resolution and Scanning Time
  5. Light Intensity

Field of View

Our beam-steering device has the ability to redirect an incident beam up to some half-angle θ. In the case of the device I’ll be working with, I’ve been told that this is around 30 degrees in any direction – up, down, left, or right.

We can model the field of view of the device as a cone with a half-angle of θ. At a given distance d, we can use trig to compute the width w of this field of view as

w = 2 · d · tan(θ)

For example, if our device has a half-angle of 30 degrees and is positioned 10 feet away from a surface, we can estimate that the field of view will be approximately 2 · 10 · tan(30°) ≈ 11.5 feet wide.
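As a quick check on that arithmetic, here’s a minimal sketch of the field-of-view calculation (the function name and units are my own, not from the device’s documentation):

```python
import math

def fov_width(distance, half_angle_deg):
    """Width of the conical field of view at a given distance: w = 2 * d * tan(theta)."""
    return 2 * distance * math.tan(math.radians(half_angle_deg))

# 30-degree half-angle, surface 10 feet away
print(fov_width(10, 30))  # ~11.5 feet
```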

Scanning Speed

The next factor to consider is the speed at which the device can scan. For our device, the time it takes the beam to travel from one end of its range to the other, t_sweep, is about 12 milliseconds. In order for a sensor to detect the presence of a beam pointed at it, the microcontroller must sample the output of the sensor while the beam is still shining upon the sensor’s surface. This means that the sampling rate of the microcontroller determines how long the beam must shine on a sensor in order to successfully register a hit.

Let’s assume we want the beam to sweep as fast as possible. That means that, in order to ensure that any sensor within the field of view will be illuminated long enough to see the beam as it scans across, the width of the beam must be equal to or larger than the distance traveled by the beam over the course of one sample. (This assumes that the center of the beam passes directly over the sensor, but it’s just an estimate!)

This concept is shown in the animation below. The blue circle is twice as wide as the red circle. The red and blue numbers represent how long each circle overlaps the black spot in the center as they travel across it. You can see that the blue circle overlaps the dot roughly twice as long as the red one does. You can find the code for this demonstration here.
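Here’s a minimal sketch of that dwell-time estimate, assuming a constant sweep speed and the beam’s center passing directly over a point sensor (the function and the numbers are my own illustration):

```python
def dwell_time(beam_width, fov_width, sweep_time):
    """How long a point sensor stays illuminated as the beam sweeps past it,
    assuming a constant sweep speed and the beam center crossing the sensor."""
    sweep_speed = fov_width / sweep_time   # feet per second
    return beam_width / sweep_speed        # seconds the sensor stays lit

# 11.5 ft field of view swept in 12 ms; a beam as wide as the ground covered
# during one 100-microsecond sample (11.5 ft / 120 ~ 0.096 ft) stays on the
# sensor just long enough to guarantee one Arduino analog read.
print(dwell_time(11.5 / 120, 11.5, 0.012))  # 0.0001 s, i.e. 100 us
```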

Beam Width

Instead of computing the ideal beam width at a particular distance, we can determine what proportion of the field of view must be occupied by the beam at any time to ensure that sensors will be tripped. We know that the beam travels the full width of the field of view in 12 milliseconds. Assuming a sampling time t_sample of 100 microseconds (standard for Arduino analog reads), the sample time is 120x smaller than our sweep time t_sweep. That means that, independent of the distance d, the beam width must be greater than or equal to one 120th of the field of view width.

From earlier, d was 10 feet and w was 11.5 feet. That means the beam width must be at least 11.5 / 120 ≈ 0.096 feet, or about 1.15 inches.

In general, this tells us that we can compute the minimum beam half-angle φ based only on θ, t_sweep, and t_sample:

φ = arctan( (t_sample / t_sweep) · tan(θ) )

This gives us a φ of arctan( (1/120) · tan(30°) ) ≈ 0.276 degrees.

This also tells us that we can further reduce the minimum beam width by decreasing t_sample. It looks like there are some tricks to reduce Arduino analog reads down to about 16 microseconds. Plugging that in for t_sample, we get a minimum φ of 0.0441 degrees and a minimum beam width of 0.184 inches at 10 feet.
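Here’s a sketch of that calculation in general form, using my own names (theta for the steering half-angle, phi for the beam half-angle):

```python
import math

def min_beam_half_angle(theta_deg, t_sweep, t_sample):
    """Smallest beam half-angle (degrees) whose width covers the distance the
    beam travels during one sensor sample, independent of target distance."""
    fraction = t_sample / t_sweep  # e.g. 1/120 for 100 us samples and a 12 ms sweep
    return math.degrees(math.atan(fraction * math.tan(math.radians(theta_deg))))

def min_beam_width_inches(distance_ft, theta_deg, t_sweep, t_sample):
    """Corresponding minimum beam width at a given distance, in inches."""
    phi_rad = math.radians(min_beam_half_angle(theta_deg, t_sweep, t_sample))
    return 2 * distance_ft * math.tan(phi_rad) * 12

print(min_beam_half_angle(30, 0.012, 100e-6))        # ~0.276 degrees
print(min_beam_width_inches(10, 30, 0.012, 100e-6))  # ~1.15 inches
print(min_beam_half_angle(30, 0.012, 16e-6))         # ~0.044 degrees
print(min_beam_width_inches(10, 30, 0.012, 16e-6))   # ~0.18 inches
```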

Resolution and Scanning Time

Since our beam width determines the resolution of our scan, increasing the sampling frequency of our sensors lets us perform higher-resolution scans at faster speeds.

There’s a catch, though – the smaller we make the beam, the longer the scan will take. The smaller the beam width is, the more rows we have to scan to cover the entire space. Assuming that the FOV is square (which it isn’t) and that the beam is 750x smaller than the FOV width, a full scan could take 750 × 12 ms = 9,000 milliseconds. That’s way too long!
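A back-of-the-envelope version of that estimate, under the same simplifying assumptions (square FOV, one full-width sweep per row):

```python
def raster_scan_time(t_sweep, rows):
    """Total time for a raster scan that sweeps the full field of view once per row."""
    return t_sweep * rows

# With 16-microsecond samples, the minimum beam is 1/750th of the FOV width,
# so covering a square FOV takes 750 rows of 12 ms each.
rows = round(0.012 / 16e-6)           # 750
print(raster_scan_time(0.012, rows))  # 9.0 seconds
```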

Thankfully, our device gives us dynamic control over the beam width, and we’re not required to scan the laser over the entire field of view for every scan. The trick will be to tune the beam width for faster or slower scans, and to develop a scanning algorithm that maximizes resolution while minimizing overall scan time. We’ll also have to account for the fact that the cross-sections of the FOV and the beam are both approximately circular: these calculations will need to be adjusted for partial overlap between the beam and the sensor, and, if we take a row-scanning approach, for the reduced portion of the circular FOV that lies within a square projection.
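As one illustration of that tradeoff, here’s a rough sketch of a coarse-then-fine scan (the two-pass strategy and all of the numbers here are hypothetical, not a decided design):

```python
def raster_scan_time(t_sweep, rows):
    """Total time for a raster scan with one full-width sweep per row."""
    return t_sweep * rows

# Hypothetical pass 1: widen the beam to 1/20th of the FOV, so only 20 rows.
coarse_pass = raster_scan_time(0.012, 20)

# Hypothetical pass 2: shrink the beam back to full resolution (750 rows),
# but only rescan the ~5% of rows flagged by the coarse pass.
fine_pass = raster_scan_time(0.012, round(750 * 0.05))

print(coarse_pass + fine_pass)  # ~0.7 s instead of 9 s for a full fine scan
```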

Light Intensity

Matching the power of our laser to the sensitivity of our sensors is going to be tricky. As d gets larger, the same amount of light is spread over a larger area upon impacting a surface. This means that the intensity of the light hitting our sensors decreases dramatically with the distance to the target, d, since the area of the beam’s cross-section increases with the square of its radius, and we’re treating light intensity as energy per unit area.
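To get a feel for the falloff, here’s a sketch that treats the beam as a cone of uniform brightness (the 5 mW laser power is a placeholder, not our actual hardware):

```python
import math

def beam_intensity(power_mw, distance_ft, beam_half_angle_deg):
    """Average intensity (mW per square foot) over the beam's cross-section,
    treating the beam as a cone of uniform brightness."""
    radius = distance_ft * math.tan(math.radians(beam_half_angle_deg))
    return power_mw / (math.pi * radius ** 2)

# Placeholder 5 mW laser with the 0.276-degree beam half-angle from above
for d in (5, 10, 20):
    print(d, round(beam_intensity(5, d, 0.276)))
# Doubling the distance roughly quarters the intensity (inverse-square falloff).
```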

To overcome this, we’ll either have to make sure our sensors are sensitive enough to pick up small changes in light intensity, or we’ll have to use a stronger laser. I’m going to have to consult with experts to ensure that our solution works at functional ranges without posing a danger to users, since any laser being scanned across an indoor space needs to be eye-safe.