(The official version of this article was made available in April, 2022. What follows is a slightly edited version of the original text.)

Method to Add Historical Cloud Data to a Solar Path Display

Timothy D. Greer (Summarizing some development done in connection with the 2021 Call for Code. Also contributing were Walter Church, Jacob Gagnon, and John Hollenbeck.)

The original impetus for this idea was to decide whether a garden location should be considered "full sun", "partial sun", or "shade". One approach was to capture photographs at known directions and angles (e.g., due south, tilted 45 degrees from vertical) and use that information, plus latitude and longitude, to calculate where the sun would appear in the image at any time during the growing season. By also finding the horizon in the image, this approach makes it possible to calculate how much of the solar path is obscured and to use that ratio to evaluate the location. Because the position of the sun varies considerably both through the day and through the year, displaying the entire range of possible positions in a single image can help answer the original question. In addition to gardening applications, such a display might be used in evaluating real estate (e.g., Will this room be too sunny in the summer?) or in considering modifications (What if we add an awning here?).

Some existing cell phone apps come close to providing this functionality. Although in some of these apps the precise edge of the path is vague, modifying the display format would be straightforward. However, no current application accounts for the varying cloud cover during the growing season.

A system or method is needed that can find the sky in an image and then superimpose the sun's path and some indication of its effectiveness as modified by likely weather.

The proposed solution is a system that obtains historical cloud cover information for a nearby location, paints the sun's path on the image, and varies the brightness of each painted pixel according to the historical cloud cover for that date and time. Unless the location is subject to great variation (e.g., a monsoon climate), the brightness variation can be subtle. To compensate, the system applies histogram equalization to the brightness values to improve the visualization.

In addition to the visual display, a useful function is to calculate the fraction of direct solar radiation the location can actually be expected to receive. Without weather data, this can be obtained from the portion of the sky through which the sun passes. With weather data for each point, instead of counting each pixel as one, the system can weight each pixel by one (1) minus the cloudiness fraction to obtain a weighted average, which is a more appropriate number for agricultural use. The system can apply additional weightings, for example to reflect the larger plant leaf area later in the season.

Given the local latitude and longitude, the camera pointing direction (altitude and azimuth), and the camera's field-of-view specifications (the height and width of the image in degrees, for example), astronomical algorithms are available for determining whether a pixel represents a point the sun will occupy at some time during the year and, if so, on what date(s) and at what time(s) the sun will be there.
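As a concrete illustration, the following sketch maps a sun direction (altitude and azimuth) to image coordinates under a simple linear camera model. The model, the function name, and the parameters are assumptions made for this illustration; a real lens needs a proper projection, and the sun's altitude and azimuth for a given date and time would come from a standard solar position algorithm, which is not shown here.

    # Minimal sketch: map a sun direction (degrees) to a pixel, assuming a simple
    # linear (equirectangular) camera model. Illustrative only; a real lens needs
    # a proper projection model.
    def sun_to_pixel(sun_alt, sun_az,
                     cam_alt, cam_az,   # where the camera is pointed (degrees)
                     fov_h, fov_w,      # field of view: height, width (degrees)
                     img_h, img_w):     # image size in pixels
        """Return (row, col) for the sun direction, or None if outside the frame."""
        d_alt = sun_alt - cam_alt
        d_az = (sun_az - cam_az + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if abs(d_alt) > fov_h / 2 or abs(d_az) > fov_w / 2:
            return None  # the sun direction falls outside the image
        # Linear mapping: the image center is the optical axis; altitude increases upward.
        row = int(round((0.5 - d_alt / fov_h) * (img_h - 1)))
        col = int(round((0.5 + d_az / fov_w) * (img_w - 1)))
        return row, col

    # Example: a camera facing due south (azimuth 180), tilted 45 degrees up,
    # with a 60 x 80 degree field of view and a 1200 x 1600 pixel image.
    print(sun_to_pixel(sun_alt=50, sun_az=170,
                       cam_alt=45, cam_az=180,
                       fov_h=60, fov_w=80,
                       img_h=1200, img_w=1600))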

As an optional first step, the system also determines which pixels in the image represent sky (using the crude approach of "the sky is that bright region at the top of the picture") and ignores anything that is not sky.
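A minimal sketch of that crude sky test follows, assuming the image has already been reduced to a grayscale brightness array; the threshold value is an arbitrary placeholder.

    import numpy as np

    # For each column, count pixels as sky from the top row down until the
    # brightness first drops below a threshold (the horizon or an obstruction).
    def sky_mask(gray, threshold=160):
        """gray: 2-D uint8 array of pixel brightness. Returns a boolean mask."""
        h, w = gray.shape
        mask = np.zeros((h, w), dtype=bool)
        for col in range(w):
            for row in range(h):
                if gray[row, col] >= threshold:
                    mask[row, col] = True
                else:
                    break
        return mask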

To avoid re-working this calculation, the system can create a work image in which every pixel intensity is zero except for those on the solar path (and in the sky), which are tagged with a non-zero intensity. The weather weighting also requires historical cloud cover data for each date and time. NOAA [1] has such data, recorded hourly, sufficient to build a table for most locations in the United States. Several years of data can be averaged so that the table gives the fraction of the sky covered by clouds at each hour throughout the year.
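One way such a table might be built is sketched below with pandas, under the assumption that the hourly observations have already been exported to a CSV file with a 'timestamp' column and a 'cloud_fraction' column holding values between 0 and 1. Actual NOAA products (e.g., Local Climatological Data) use different field names and sky-cover codes that would need converting first.

    import pandas as pd

    # Average several years of hourly cloud-cover observations into a
    # (day-of-year, hour) climatology table.
    def cloud_climatology(csv_path):
        df = pd.read_csv(csv_path, parse_dates=['timestamp'])
        df['doy'] = df['timestamp'].dt.dayofyear
        df['hour'] = df['timestamp'].dt.hour
        # Mean over all years on record for each (day-of-year, hour) slot.
        return df.groupby(['doy', 'hour'])['cloud_fraction'].mean()

    # table.loc[(172, 14)] would then give the mean cloud fraction for
    # day-of-year 172 (around June 21) at 14:00.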

Having determined which pixels in the image are on the solar path (and in the sky, if that additional step is used), the system calculates all the corresponding dates and times. It collects the historical cloud cover data for that range of dates and times and determines the maximum and minimum cloud cover within it, giving the range of cloud cover fractions. The system splits that range into equally spaced bins (e.g., 256 bins) and obtains a histogram of the cloud cover. From this histogram, the system builds a histogram-equalized [2] array to use as a look-up table from a specific cloud cover fraction to a pixel value.
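A sketch of that equalization step using NumPy follows; the 256-bin count matches the example above, and the helper names are illustrative.

    import numpy as np

    # Build a 256-entry lookup table mapping a cloud-cover fraction to a display
    # brightness, so that small differences in typical cloudiness still produce
    # visible contrast.
    def equalized_lut(cloud_fractions, bins=256):
        """cloud_fractions: 1-D array of values observed along the solar path."""
        lo, hi = cloud_fractions.min(), cloud_fractions.max()
        hist, _ = np.histogram(cloud_fractions, bins=bins, range=(lo, hi))
        cdf = hist.cumsum()
        # Scale the cumulative distribution to 0..255 brightness values.
        lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
        return lut, lo, hi

    def brightness_for(cloud_fraction, lut, lo, hi):
        """Look up the equalized brightness for one cloud-cover fraction."""
        idx = int((cloud_fraction - lo) / (hi - lo + 1e-12) * (len(lut) - 1))
        return int(lut[idx])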

Finally, the system modifies the original image as follows (a code sketch of steps 1 through 3 appears after the list):

  1. For each tagged pixel, replaces the original red/green/blue values with 0/n/0, where n is determined by calculating the date/time corresponding to the pixel and looking up the cloud coverage for that particular date/time (interpolating as necessary).
  2. Uses the resulting fraction to look up the brightness value in the histogram-equalized array. Depending on the range of dates considered (e.g., one might choose only spring, or both spring and summer) and the resolution of the image, more than one date/time may correspond to a given pixel. In that case, the system can average the results, although averaging does smooth the output.
  3. The result is a green band across the sky, with the brightness of the green indicating the probability that the sun will actually shine. Optionally,
    1. A user can select another base color for the band.
    2. The solution can define a method that retains the red and green elements and uses the blue element for modification. (This approach is particularly useful if the entire solar path on the image is being displayed, not just that part in the sky.)
  4. Renders each pixel's red/green/modified-if-tagged blue values to obtain a pixel that retains some of the original image while also displaying solar radiance information. Many variations of this last display method are possible, for example making all red/green/blue values the same (converting the pixel to monochrome), halving the intensity, and then adding half the green value from the originally described method.
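Putting steps 1 through 3 together, the painting pass might be sketched as follows. The 'tagged' structure (pixel coordinates mapped to the cloud-cover fractions for every date/time at which the sun occupies that pixel), the lookup table from the equalization sketch above, and the inversion that makes clearer skies paint brighter are all assumptions made for this illustration.

    import numpy as np

    # Paint the solar path as a green band whose brightness follows the
    # histogram-equalized cloud-cover lookup table.
    def paint_solar_path(image, tagged, lut, lo, hi):
        """image: H x W x 3 uint8 RGB array. Returns a painted copy."""
        out = image.copy()
        for (row, col), fractions in tagged.items():
            # Average when several date/times map to the same pixel (step 2).
            mean_cloud = float(np.mean(fractions))
            idx = int((mean_cloud - lo) / (hi - lo + 1e-12) * (len(lut) - 1))
            # Invert so that clearer skies (lower cloud cover) paint brighter green.
            n = 255 - int(lut[idx])
            out[row, col] = (0, n, 0)   # step 1: replace red/green/blue with 0/n/0
        return out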

The system can calculate an appropriate number for comparing locations as follows (a code sketch follows the list):

  1. The ratio of the number of tagged sky pixels to the number of tagged pixels computed without the sky restriction gives the fraction of possible sunniness relative to a location with no obstructions.
  2. If the numerator is instead the sum of the tagged sky pixels, each weighted by (1 - cloudiness fraction), the result is a value that can reasonably be compared across locations with different weather profiles.
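Both numbers might be computed as sketched below, assuming boolean masks for the solar path (with and without the sky restriction) and a per-pixel mean cloud-cover fraction; the array names are illustrative.

    import numpy as np

    # path_mask: every pixel on the solar path, ignoring the sky test.
    # sky: pixels judged to be sky.
    # cloud: mean cloud-cover fraction per pixel (only path pixels are used).
    def sunniness(path_mask, sky, cloud):
        total_path = path_mask.sum()              # denominator: unobstructed path
        visible = path_mask & sky                 # path pixels actually in the sky
        unweighted = visible.sum() / total_path                 # step 1
        weighted = (1.0 - cloud)[visible].sum() / total_path    # step 2
        return unweighted, weighted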

References

  1. https://www.noaa.gov/
  2. https://en.wikipedia.org/wiki/Histogram_equalization


The information provided and views expressed on this site are my own and do not represent those of the IBM Corporation.



 
