Clouds tickle our imagination with their ever-changing shapes and inspire fear with rolling thunderheads that announce coming storms. However, they play a bigger role in controlling global climate than most of us realize. Scientists are learning more and more about the architecture and life cycles of clouds, and this growing knowledge could lead to a breakthrough in understanding global temperature shifts.

Clouds are major drivers of climate because they cover over 50 percent of the earth’s surface, acting as middlemen between our planet and outer space. They cool the earth by reflecting sunlight, but they also warm it by trapping infrared radiation.

Given their complex effects, clouds are important for climate models—but simulating realistic clouds in global models, with their turbulent flows and multiple phases of water, is too computationally expensive. To solve this problem, scientists must reduce the intricate structure of a cloud to a few numbers, such as the amount of rain produced. This formidable task is a hot research topic, and to tackle it scientists need a better understanding of cloud formation and evolution.

A big challenge in studying clouds is their relative inaccessibility to observation. However, using digital cameras and modern supercomputers, scientists can learn much about how these ethereal forms originate as well as how they flow, change shape, and eventually decay.

Recently, Dr. David Romps, assistant professor of earth and planetary science at UC Berkeley, and his team revamped a technology called stereophotogrammetry to track clouds over their life cycles. Stereophotogrammetry combines optics with image processing to generate three-dimensional observations of objects from two-dimensional images. In this technique, two digital cameras spaced one kilometer apart take pictures of a scene. Like our eyes, the cameras capture the same scene, but one camera’s image is slightly shifted relative to the other’s. Along with the cameras’ parameters (such as positions, orientations, and the lenses’ focal lengths), this shift is used to calculate the depth of points in the scene, making it possible to generate 3D images.
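The core idea can be sketched with a simplified model of two side-by-side, identically oriented cameras: the farther away a point is, the smaller its pixel shift (disparity) between the two images. The function and numbers below are illustrative assumptions, not the team's actual setup.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate the distance to a scene point from its pixel shift
    between two parallel cameras separated by baseline_m meters.
    focal_px is the lens focal length expressed in pixels."""
    return focal_px * baseline_m / disparity_px

# Example (made-up values): a 3000-pixel focal length, a 1 km baseline
# (as in the article), and a 600-pixel shift put the point 5 km away.
z = depth_from_disparity(3000.0, 1000.0, 600.0)  # -> 5000.0 meters
```

Real stereo rigs are rarely perfectly parallel, which is why accurately estimating the cameras' positions and orientations (the calibration described next) matters so much.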

Standard stereophotogrammetry relies on fixed landmarks, such as buildings or trees, to accurately calculate the cameras’ parameters, a process known as calibrating the cameras. But because standard stereophotogrammetry needs landmarks, it can’t be used to study clouds over oceans, which have very different properties from clouds over land and cover much more of the earth’s surface. Enter Romps and his collaborators, who gave stereophotogrammetry a much-needed makeover and developed a landmark-free calibration procedure for gathering data over oceans.

In creating this new procedure, a major challenge was accurately estimating the cameras’ orientations. Without fixed landmarks, Romps turned to the two objects that are constant in every ocean scene: the sun and the horizon. The cameras’ orientations can be computed by comparing the locations of the sun and the horizon in an image to the physical coordinates of these objects at the time the picture was taken. This procedure required Romps to devise formulas to calculate the coordinates of the sun and the horizon based on the time of day, the earth’s curvature, and the camera’s height. However, even after developing a new calibration procedure, Romps’ team still had to take on the grueling process of reconstructing the clouds in 3D.
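One ingredient of a horizon-based calibration is knowing where the horizon should be: because the earth curves away, the horizon sits slightly below true horizontal, by an angle that depends on the camera's height. A textbook geometric approximation (ignoring atmospheric refraction, and not the team's exact formulas) is dip ≈ √(2h/R):

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean radius of the earth in meters

def horizon_dip_deg(camera_height_m):
    """Approximate angle (in degrees) by which the sea horizon appears
    below true horizontal for a camera at the given height, using the
    small-angle formula dip = sqrt(2 * height / earth_radius)."""
    return math.degrees(math.sqrt(2.0 * camera_height_m / EARTH_RADIUS_M))

# A camera mounted 10 m above sea level sees the horizon dip by
# roughly a tenth of a degree.
dip = horizon_dip_deg(10.0)
```

Small as it is, an angle like this matters when a fraction of a degree of camera tilt translates into large position errors for clouds kilometers away.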

To generate a 3D image of one scene, scientists place the two camera images side by side. First, in one image, they hand-select and mark feature points, such as those that outline a cloud. Then they find these feature points in the second image. Finally, they calculate the 3D position of each feature point using its location in the images. Putting all the points together reconstructs the 3D scene.
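The final step, turning a matched pair of image locations into one 3D point, is a classic triangulation problem: each camera defines a viewing ray toward the feature, and the point lies (approximately) where the two rays cross. A minimal sketch of one standard approach, the midpoint method, is below; this is a generic textbook technique, not necessarily the algorithm the team used.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Given two camera centers (c1, c2) and the viewing-ray directions
    (d1, d2) toward the same feature, return the midpoint of the shortest
    segment between the two rays, a simple estimate of the 3D point."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Find ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)                  # 3x2 system
    (t1, t2), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return 0.5 * (p1 + p2)

# Two cameras 1 km apart both sight a cloud feature at (500, 2000, 3000) m.
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([1000.0, 0.0, 0.0])
target = np.array([500.0, 2000.0, 3000.0])
point = triangulate_midpoint(c1, target - c1, c2, target - c2)
```

With perfect rays the two lines intersect exactly; with noisy measurements the midpoint splits the difference, which is why calibration accuracy feeds directly into reconstruction accuracy.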

This manual process is taxing and time-consuming: it took Romps and Project Scientist Rusen Oktem two hours to complete one scene. This led Oktem to develop new algorithms that automatically identify feature points in one image and find their matches in the sister image. This technology sped up the reconstruction dramatically. “With this algorithm, we can take all of these images, throw them into the supercomputer, and in under a day, get out 35 million feature points that matched,” says Romps. “I calculated that it would take a human 20 years without sleeping to do the same thing manually.”

With this new data, Romps hopes to answer several questions about clouds, including how their life cycles unfold. Clouds don’t simply die; they linger, and lingering clouds tend to trap more heat. As Romps and other scientists build a deeper understanding of cloud evolution, climate models will improve, and more accurate predictions of global temperatures can be made. So the next time you look up and admire our white fluffy friends, remember that they play a significant role in our planet’s future.

– Dharshi Devendran is a postdoctoral researcher in the Computational Research Division at LBNL.