Exploring Hidden Potential: HPWREN Cameras Beyond Basic Viewing

14 July 2024


HPWREN cameras, known for providing stunning vistas and time-lapse videos, offer a treasure trove of data with applications that extend far beyond simple observation. From enhancing situational awareness during emergencies to enabling early fire detection with AI, these cameras are powerful tools for a diverse range of purposes. This article explores some of the innovative ways researchers and developers are utilizing them, showcasing the potential of such cameras to transform various fields.

It should be noted that these tools are often prototypes rather than production-level implementations, intended to explore their usefulness, for example, in support of firefighting activities. Again, feedback is encouraged, including to explore other ideas about new and advanced uses of the HPWREN camera system.

Fixed Field of View cameras, such as the ones HPWREN predominantly uses, are ideal for detecting certain emergencies as they happen in the surrounding environment, especially when the location of an event cannot be predetermined, e.g., for detecting a fire ignition location and its azimuth relative to the camera. An example of an event with a more predictable location is the flooding of a river.

At more than 35 HPWREN locations, typically on mountain tops in Southern California, four cameras are installed, each with at least a 90 degree field of view. The cameras typically point in the cardinal directions, with some overlap between adjacent views. In addition, those camera sets have two separate imager chips, one for color images and the other for monochrome, which are about 75 mm apart, center to center. The advantage of the monochrome imager is that it is sensitive to near-infrared light, while also being much more light sensitive in general, which is particularly interesting for events during nighttime. As a result, an HPWREN camera site often includes eight Fixed FoV cameras: the four color ones and the four monochrome ones. In addition, ALERTCalifornia often deploys one or two pan/tilt/zoom cameras at the same locations, as part of their California-wide network of PTZ cameras for environment monitoring, similar to the HPWREN objectives.

In most cases the cameras are queried for a new image once per minute all year long, although in some cases the interval has been shortened to an image every 10 seconds. The cameras are able to provide images at a much higher rate, for example during a fire, which in turn requires more storage and bandwidth.
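As a rough illustration of that trade-off, the sketch below estimates per-camera storage needs for different polling intervals. The assumed average JPEG size of 1 MB is a placeholder that varies with resolution, compression, and scene content:

    # Rough per-camera storage/bandwidth estimate; the ~1 MB average
    # JPEG size is an assumption and varies with scene and compression.
    SECONDS_PER_DAY = 24 * 60 * 60
    IMAGE_SIZE_MB = 1.0  # assumed average JPEG size

    for interval_s in (60, 10, 1):
        images_per_day = SECONDS_PER_DAY // interval_s
        gb_per_day = images_per_day * IMAGE_SIZE_MB / 1024
        print(f"{interval_s:>2}s interval: {images_per_day:>5} images/day, "
              f"~{gb_per_day:.1f} GB/day per camera")

At a one-minute interval this works out to roughly 1.4 GB per camera per day under the stated assumption, and about 84 GB per day at one image per second.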

HPWREN produces open data and makes it available for various uses, including by emergency management and response agencies, researchers, environmental groups, and the general public.

1. Semitransparent real-time data layered above a baseline image

HPWREN cameras are typically deployed on mountain tops to observe the surrounding environment, especially for emergency situations, such as fires and floods. The areas of interest may be fairly far away from the camera location. As a consequence, obstructions like fog, clouds and fire plumes, as well as it simply being dark at night, can seriously impact the ability to determine specific locations within an image. The reduction in visibility may range from a limited view to a severe obstruction.

While not an all-encompassing solution, some of these issues can be addressed by overlaying the real-time image as a semi-transparent layer above an older baseline image for comparison. The examples below illustrate this point for a fire situation. The rows show different points in time, and the three columns include the original baseline image on the left with no overlays, a real-time overlay with medium transparency in the middle, and one with high transparency on the right. Where the real-time data is used as an overlay, the fire plume becomes transparent and the baseline visibility of locations is restored.
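A minimal sketch of the underlying blending, using the Pillow library: the file names are placeholders, the two images must have identical dimensions, and the alpha values correspond to the 25 and 70 percent overlay opacities shown below.

    from PIL import Image

    # Placeholder file names for a baseline and a real-time HPWREN image;
    # both must have the same pixel dimensions for blending.
    baseline = Image.open("baseline.jpg").convert("RGB")
    realtime = Image.open("realtime.jpg").convert("RGB")

    # Image.blend(a, b, alpha) computes a*(1-alpha) + b*alpha per pixel,
    # i.e. alpha is the opacity of the real-time layer above the baseline.
    for alpha in (0.25, 0.70):
        blended = Image.blend(baseline, realtime, alpha)
        blended.save(f"overlay_{int(alpha * 100)}pct.jpg")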

Clicking on an image will expand it to a much larger version:

[Image columns: Original | Opaque overlay: 25 percent | Opaque overlay: 70 percent]

In addition, time series of images can be turned into videos:

[Video columns: Original video | Opaque overlay at 25 percent | Opaque overlay at 70 percent]

This can also be particularly helpful at night, when most locations shown in an image may not be obvious.

A not yet complete implementation of this for real-time data (some baseline images are missing, and non-3072x2048 pixel image sizes are not handled well) is available on the operational HPWREN cameras web page: click "details" next to an image set, then scroll down to "Semitransparent real-time image overlay." Note that if the camera has shifted since the last baseline image was taken, the two images will not align.

2. Interactive Images Flow Interface (IIFI)

The Interactive Images Flow Interface is an alternative to the 3-hour videos per camera which HPWREN has provided on its website for many years. It prompts a browser to download the complete set of original-size JPEG images to the user's machine for a user-defined time period, for example from 30 minutes ago until the newest image, or the 3rd of this month between 8 AM and 8 PM. The images are displayed while they load; after that, the image shown from the time series reflects the horizontal location of the mouse within the window. When the mouse is moved horizontally, the time series moves with it, with the left side of the window representing the first image of the series and the right side the last. For fine-grained, image-by-image moves, the keyboard arrow keys can be used. Note that a large time period can easily download thousands of images and more than a gigabyte of data to the user's machine, so the recommendation is to start with short time periods.

The principal benefit of this, as compared to MP4 video, is that the images are downloaded as JPEG chunks onto the user's machine, where they can be processed in response to user interactions much more quickly than by re-downloading them from the web server. This is particularly important if a user wants to step forward and backward in a time series very quickly while displaying each and every frame, as compared to typical video applications, which often step backwards on key frames only, skipping many frames. Since there is no interframe compression, this requires significant bandwidth and significant memory on the user machine, as compared to a compressed video. An individual time series is currently limited to a 24 hour period, largely to keep the amount of network traffic for downloaded images within a reasonable scope. The default is to display the last 30 minutes of a selected camera. A six hour download can consist of hundreds of JPEG images and may exceed 100 MB.
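The core interaction can be sketched as a simple mapping from mouse position to frame index. The following is an illustrative reimplementation of the behavior described above, not the actual IIFI code:

    def frame_for_mouse_x(mouse_x: int, window_width: int, n_images: int) -> int:
        """Map a horizontal mouse position to an index into the time series.

        The left window edge selects the first image, the right edge the last.
        """
        if window_width <= 0 or n_images <= 0:
            return 0
        index = int(mouse_x / window_width * n_images)
        return max(0, min(index, n_images - 1))  # clamp to a valid index

    def step_frame(current: int, delta: int, n_images: int) -> int:
        """Arrow-key style single-frame step (delta of -1 or +1), clamped."""
        return max(0, min(current + delta, n_images - 1))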

IIFI can be found at http://hpwren.ucsd.edu/IIFI/

3. 360 degree surround views around HPWREN mountain-top sites

With the four 90+ degree cameras covering the complete viewshed around a mountain top, viewing the images side by side shows the full 360 degrees, including the overlapping areas across adjacent images, for both the color and the monochrome cameras:


Two, three or all four of those images can then be used for multi-camera images or multi-camera videos. Examples of such multi-camera time-lapse videos have been included on an HPWREN YouTube page for the last 10 years or so.

The example side-by-side collage below is derived from those individual images. No lens correction has been done, but an effort was made to remove the overlap areas. That choice was made because the objective was to have functional and usable results, and not necessarily to produce the prettiest pictures. This can be shown on a computer screen, or, since the aspect ratio is the needed 2:1, it can also be displayed on a VR/AR device for surround viewing of a 360 degree image or video. A Quest 3 was used for testing.
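A minimal sketch of such a collage, again using Pillow: the file names are placeholders for the four cardinal-direction images, the per-side crop that removes the overlap areas is an assumed value that would need tuning per site, and no lens correction is attempted.

    from PIL import Image

    # Placeholder file names for the four cardinal-direction images.
    paths = ["north.jpg", "east.jpg", "south.jpg", "west.jpg"]
    OVERLAP_PX = 60  # assumed per-side overlap crop; tune per site

    tiles = []
    for path in paths:
        img = Image.open(path)
        w, h = img.size
        # Crop the assumed overlap region from the left and right edges.
        tiles.append(img.crop((OVERLAP_PX, 0, w - OVERLAP_PX, h)))

    collage = Image.new("RGB", (sum(t.width for t in tiles), tiles[0].height))
    x = 0
    for t in tiles:
        collage.paste(t, (x, 0))
        x += t.width

    # Surround viewers expect a 2:1 canvas; resize if the cropped
    # widths do not add up to exactly twice the height.
    if collage.width != 2 * collage.height:
        collage = collage.resize((2 * collage.height, collage.height))
    collage.save("surround_360.jpg")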


As a meaningful prototype example, the four images per measurement interval from a small set of HPWREN cameras are currently being collated into 360 degree images in real time, and can be accessed via an HPWREN web page. This currently experimental page will likely change over time.

4. 3D stereo and hyperstereo image aggregation

In addition, both the color and monochrome images can be used together to create a limited 3D effect. It appears that the effect can be amplified by rolling the right-eye image by 15 pixels or so to the right along the X axis, to get somewhat of a hyperstereo effect. Real hyperstereo requires a longer baseline between the two lenses. While we have not directly increased the roughly 75 mm interaxial base by, for example, installing two cameras of a single set pointed in the same direction on different sides of a tower or even on separate towers, some of the 3D effects can be exaggerated, as described, by shifting one of the images by a number of pixels. The following image looks very much like the previous one. However, zooming to the left edge of the image will show the 15 pixel shift.
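The roll itself can be done in a couple of lines with NumPy; the file name is a placeholder, and 15 pixels is the value mentioned above.

    import numpy as np
    from PIL import Image

    # Placeholder file name for the image that will serve as the right eye.
    right_eye = np.asarray(Image.open("right_eye.jpg"))

    # Roll 15 pixels to the right along the X (width) axis; pixels leaving
    # the right edge wrap around and reappear at the left edge.
    shifted = np.roll(right_eye, 15, axis=1)
    Image.fromarray(shifted).save("right_eye_shifted.jpg")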



For a 3D VR visualization, the previous image can be used for the right eye, which is the same position as it would be on the camera, and the non-rolled greyscale one, seen in the lower half of the image, for the left eye. This works best on a canvas with a 1:1 aspect ratio to accommodate the two 180 by 90 degree left and right eye images with their 2:1 aspect ratio. For a JPEG image to include the views of both eyes, the component pictures can either be stacked vertically (top-bottom, as seen in the next image, while using the 1:1 aspect ratio) or placed horizontally side by side, as seen in the one after.
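A sketch of both packings with Pillow: the file names are placeholders, both eye images are assumed to have the same 2:1 dimensions, and which eye goes on top (or left) depends on the convention the viewer software expects.

    from PIL import Image

    # Placeholder file names for the two eye images (same 2:1 size).
    left = Image.open("left_eye.jpg")
    right = Image.open("right_eye_shifted.jpg")
    w, h = left.size

    # Top-bottom packing: stacking two 2:1 images yields the 1:1 canvas.
    top_bottom = Image.new("RGB", (w, 2 * h))
    top_bottom.paste(left, (0, 0))
    top_bottom.paste(right, (0, h))
    top_bottom.save("stereo_top_bottom.jpg")

    # Side-by-side packing: the same frames next to each other (4:1).
    side_by_side = Image.new("RGB", (2 * w, h))
    side_by_side.paste(left, (0, 0))
    side_by_side.paste(right, (w, 0))
    side_by_side.save("stereo_side_by_side.jpg")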




For a comparison between the two methods, unshifted as well as 15-pixel right-rolled/shifted video versions are available.

Note that this mixture of individual color and monochrome images, each with its own chip sensitivity and exposure time settings, is not always very pleasant on the eyes, so please use this with care.

If one or more of the component images is grey, for now just try to reload the page.

5. Fire Ignition images Library (FIgLib)

A significant area of interest for several years has been to enable and facilitate activities that utilize HPWREN images for early fire detection, as, for example, described in the 2019 Image Processing Experiments for Fire Plume Detection via Fixed HPWREN Cameras summary. Projects such as AI For Mankind and Open Climate Tech have long shown substantial progress, often being able to identify a fire plume within its first few image frames of visibility. Those image processing experiments resulted in the HPWREN Fire Ignition images Library for neural network training (FIgLib), which provides hundreds of image sequences of fire ignitions. Another collaboration, which included WIFIRE and SAGE, in conjunction with the Argonne National Laboratory, has been working on labeling the FIgLib data sets with both boxes and contours, so the data would be directly usable for neural network training. The labeling itself was done by ANL, and demonstration videos, "How Fires Start: Fire ignition and early progression labels" and "FIgLib & SmokeyNet: Real-Time Wildfire Smoke Detection," are referenced below.

[Videos: "How Fires Start: Fire ignition and early progression labels" | "FIgLib & SmokeyNet: Real-Time Wildfire Smoke Detection"]

FIgLib is available at https://www.hpwren.ucsd.edu/FIgLib although that web page needs some updating.