The Chronicle of Higher Education
From the issue dated November 24, 2000


http://chronicle.com/weekly/v47/i13/13a05101.htm

Weather Forecasting: A Model Effort

'Superensemble' technique combines strongest elements of several predictions

By FLORENCE OLSEN

Tallahassee, Fla.

Everyone knows weather forecasts are wrong a fair amount of the time. Now one of academe's most respected climate researchers is turning some of those mistakes into a resource to improve the predictions of computerized weather models.

The researcher is T.N. Krishnamurti -- "Krish" to his friends here at Florida State University. Mr. Krishnamurti, a professor of meteorology, is the creator of one of the two dozen or so computer programs worldwide for modeling and predicting weather and climate change.

More recently, he has become a champion of a new weather-prediction technique known as superensemble forecasting, in which a computer program essentially merges the predictions of different computerized weather models, taking from each the elements on which it has the best track record. That's where Mr. Krishnamurti's database of proven mistakes comes in -- he uses it to evaluate each model's strengths and weaknesses. He hopes the superensemble technique will lead to more objective and reliable long-range forecasts.

It was a superensemble forecast that earned Mr. Krishnamurti his 15 minutes of network-television fame last year: His technique correctly predicted the path of Hurricane Floyd four days before the storm made landfall near Wilmington, N.C. Forecasters at the National Hurricane Center in Miami had predicted that the storm would move farther west -- an erroneous forecast that triggered a massive evacuation from the coastal communities of Florida, Georgia, and South Carolina.

This hurricane season has been less eventful for Mr. Krishnamurti. As he clicks to open up a view of his own virtual-climate system, he describes the immense data feeds that generate the bright yellow, orange, and dark-red contours displayed on his computer screen. "We're taking data from the Arctic to the Antarctic, and from the surface of the Earth all the way up to the stratosphere and above," Mr. Krishnamurti says, stretching his arms toward the ceiling of his office. "We take everything," including data from five U.S. military satellites.

Global models like Mr. Krishnamurti's are complex software programs made up of mathematical equations representing the laws of atmospheric physics -- insofar as they are known. To complicate matters, natural feedback processes give weather its bedeviling trait of "nonlinearity": The weather continually reacts to itself, with unpredictable results. Computer models make their predictions by solving nonlinear equations of fluid dynamics.
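To see that nonlinearity in miniature, consider the three-variable convection system that Edward Lorenz published in 1963, a standard classroom stand-in for the vastly larger equation sets that global models solve. In the illustrative Python sketch below -- a toy, not any real forecast model -- two runs that begin almost identically drift apart until they no longer resemble each other:

    # A minimal sketch of nonlinearity: Lorenz's 1963 three-variable
    # convection model, integrated with a simple Euler step. This is a toy
    # system, far simpler than the fluid-dynamics equations global models
    # solve, but it shows the same sensitivity to initial conditions.

    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Advance the Lorenz system by one Euler time step."""
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.0 + 1e-8)   # identical except in the eighth decimal

    for step in range(1, 6001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 2000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"step {step}: separation {gap:.6f}")
    # The separation grows by orders of magnitude: a vanishingly small
    # error in the starting data eventually dominates the forecast.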

The three-dimensional computer models are among the few tools that scientists have for studying how the world's climate is changing and getting warmer. Although models have detected the warming trend, scientists say the models haven't answered the more controversial question of whether human activities are the cause. Only a limited number of global climate models exist throughout the world -- some say 20, others say 30 or more -- and no two are alike. But all of the models require, Mr. Krishnamurti says, "an immense amount of computing."

Ever since the first computer modeling experiments of the 1950's, meteorologists have used the ever-greater capacity of computers to add more realism, or resolution, to their models. "There's a lot of pride involved in getting a model to represent accurately what's going on," says Christian D. Kummerow, an associate professor of atmospheric sciences at Colorado State University.

Before his meteorology lab began acquiring multiprocessor computers about five years ago, Mr. Krishnamurti and his graduate students ran large-scale weather and climate simulations on supercomputers owned by the Energy and Defense Departments and the National Center for Atmospheric Research, a research facility in Boulder, Colo., operated by a consortium of North American universities. Mr. Krishnamurti's picture hangs on a wall at NCAR. "He's always been their No. 1 customer," says C. Eric Williford, a Ph.D. candidate in meteorology at Florida State who has been a coauthor of some of Mr. Krishnamurti's papers.

So it comes as no surprise to his Florida State colleagues that Mr. Krishnamurti will be the first researcher to run a large-scale model on the university's new $8-million supercomputer. When it is fully installed next year, the terascale computer -- an International Business Machines RS/6000 SP2 with 680 processors -- will have a peak capacity of 2.5 trillion calculations per second. In addition to his position in the meteorology department, Mr. Krishnamurti is program director for computational climate dynamics in the university's new School of Computational Science and Information Technology, whose faculty members and graduate students have full access to the supercomputer.

Mr. Krishnamurti's passion is his climate model, which he has spent more than two decades perfecting and which he calls the Florida State University Global Spectral Model. Over the years, his graduate students have contributed 45 or more dissertations to the collective work.

For climate researchers like Mr. Krishnamurti, the modeling problem can be maddening and beautiful at the same time. The researchers readily acknowledge numerous ways that computer models can go wrong in making both long- and short-range forecasts. The models themselves may make faulty assumptions about the processes that cause the weather and climate to change. The data fed into the models is often erroneous. The models and the data may not be completely tuned for efficient computation. "We are dealing in the world of error," Mr. Williford says.

"In theory, if you had a perfect model with infinite resolution, and you had infinitely good initial data, then you'd get a perfect forecast," says John P. Boyd, a professor of atmospheric, oceanic, and space sciences at the University of Michigan at Ann Arbor. In reality, Mr. Boyd says, weather and climate models rely on equations to describe the relationships between what are merely average values for atmospheric properties within imaginary "grid boxes" superimposed over the real atmosphere. "There are necessarily errors made in doing this," he says.

Indeed, weather and climate are far too complex for any one model to get every forecast right. But some models are better than others, depending on the forecast locations and conditions, Mr. Krishnamurti says. Much of his research has involved painstakingly identifying the systematic forecast errors produced by his own and other global models, then storing them in a huge "errors database" that has grown to 10 million statistics.

Mr. Krishnamurti collects these statistics to improve his ability to make forecasts using models. To assess how realistic their hurricane forecasts are, for instance, Mr. Krishnamurti's research group measures the differences between their model's forecast of a particular hurricane's path and the actual path of that hurricane. "That's hindcasting," he says.
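The bookkeeping behind a track hindcast is straightforward: measure, forecast hour by forecast hour, how far the predicted storm center sits from the observed one. The Python sketch below uses the standard haversine formula for great-circle distance; the positions in it are invented for illustration, not drawn from any real storm:

    import math

    # Hypothetical hindcast sketch: score a model's forecast track for one
    # storm against the observed positions. The coordinates are invented
    # for illustration; they are not Floyd's or any real storm's track.

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine distance in kilometers between two lat/lon points."""
        radius = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
        return 2 * radius * math.asin(math.sqrt(a))

    # (forecast hour, forecast position, observed position)
    track = [
        (24, (24.0, -74.0), (24.3, -73.6)),
        (48, (27.5, -77.0), (28.1, -76.2)),
        (72, (31.0, -79.5), (32.0, -78.3)),
    ]

    for hour, (flat, flon), (olat, olon) in track:
        err = great_circle_km(flat, flon, olat, olon)
        print(f"{hour}-hour position error: {err:.0f} km")
    # Accumulated over many storms and many models, errors like these are
    # the raw material of an "errors database."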

Recent efforts to produce more accurate model forecasts have led meteorologists to devise a variety of new forecasting techniques, and ensemble forecasting is prominent among them. It "has a lot of potential for doing good," Michigan's Mr. Boyd says. The technique involves running the same forecast model, say, 10 different times, starting each time with slightly different data about the initial weather conditions. In a variation on the technique, researchers run 10 different forecast models with the same initial data to measure the extent to which the forecasts agree.

"If all the forecasts are very similar, it means you can trust that forecast more," says Zoltan Toth, leader of the global prediction group of the General Sciences Corporation at the National Weather Service's Environmental Modeling Center, in Camp Springs, Md. But if the forecasts differ a great deal, it means "on that particular day you cannot give a really precise forecast."

A year ago, Mr. Krishnamurti incorporated into his research what he describes as an enhanced ensemble technique, which he calls "multimodel superensemble" forecasting. His method, which differs from the way other meteorologists do ensemble forecasting, uses multiple-regression techniques to "post-process" and analyze the results of an ensemble of different models' forecasts.

Other meteorologists, among them Mr. Kummerow, at Colorado State, agree that Mr. Krishnamurti has developed a potentially useful technique. Mr. Krishnamurti creates his multimodel superensemble forecast by using statistical techniques and the data on the models' past performances to correct what he says are the collective forecast biases of the 11 global model forecasts that make up a superensemble forecast.
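Stripped to its statistical core, the approach resembles the hypothetical Python sketch below, which uses synthetic numbers rather than real model output: regression weights, including a bias-correcting constant term, are fitted over a past "training" period so that a weighted blend of several models' forecasts best matches what was actually observed, and the weights are then applied to new forecasts:

    import numpy as np

    # Hypothetical superensemble sketch: fit least-squares weights that map
    # three synthetic "model" forecasts onto the observed truth during a
    # training period, then combine new forecasts with those weights.
    # All numbers are invented; none come from the real global models.

    rng = np.random.default_rng(0)
    truth = rng.normal(25.0, 3.0, size=200)      # observed quantity, 200 cases

    # Three made-up models, each with its own systematic bias and noise level:
    forecasts = np.column_stack([
        truth + 2.0 + rng.normal(0, 1.0, 200),   # warm-biased but fairly sharp
        truth - 1.5 + rng.normal(0, 2.0, 200),   # cool-biased and noisier
        truth + rng.normal(0, 3.0, 200),         # unbiased but noisiest
    ])

    train, test = slice(0, 150), slice(150, 200)

    # Multiple regression on the training period:
    # truth ~ w0 + w1*m1 + w2*m2 + w3*m3
    X_train = np.column_stack([np.ones(150), forecasts[train]])
    weights, *_ = np.linalg.lstsq(X_train, truth[train], rcond=None)

    # Apply the learned weights to the held-out forecasts:
    X_test = np.column_stack([np.ones(50), forecasts[test]])
    combined = X_test @ weights

    def rmse(f):
        return float(np.sqrt(np.mean((f - truth[test]) ** 2)))

    for i in range(3):
        print(f"model {i + 1} RMSE: {rmse(forecasts[test, i]):.2f}")
    print(f"superensemble RMSE: {rmse(combined):.2f}")
    # The bias-corrected blend typically beats every individual member --
    # the pattern the Science paper reports for storm tracks and intensity.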

"One model will have better data and a better way of representing a lake and its physics, another model could be better at representing a mountain, so the superensemble pulls the best from all of them," Mr. Krishnamurti says. "The superensemble is the collective wisdom of everybody, so to speak."

In a paper published in Science last year, Mr. Krishnamurti and his colleagues describe how their multimodel superensemble forecast made smaller errors in hindcasting the path and intensity of dozens of tropical storms in 1997 and 1998 than did any one of the model forecasts individually. "The superensemble outperforms all models," Mr. Krishnamurti says without hesitation.

Each computer model adds unique and valuable information, Mr. Williford adds. The Florida State global model, for instance, "has been tuned to the tropics," he says. The researchers are now studying whether a superensemble can have an optimal number of forecasts. They think, for instance, that each forecast has unique strengths and that removing any one of them weakens the ensemble forecast. Using statistical weighting techniques, the researchers try to minimize the effects of known flaws in a particular model's forecasting skills.

Researchers from Mr. Krishnamurti's group say they have also programmed their computer model to simulate about 95 percent of the precipitation that can be detected by global satellites, which further improved their forecasting accuracy. For a time, the best they could do was 30 percent. When a global model simulates how certain initial weather conditions will change during a period of, say, 24 hours, even tiny errors in describing atmospheric conditions at the beginning of the model run can grow out of control, "contaminating the forecast," Mr. Williford says.

However many models are involved, each forecast the researchers add to the superensemble demands more computing power. It takes three hours for the Florida State global model, running on a small supercomputer with four I.B.M. Power3 processors, to churn out a six-day forecast. Add more forecasts, and it takes longer.

The researchers are still analyzing the results of their experiments during this year's hurricane season, which was much less busy than last year's, when Floyd disrupted many lives. And they are sending superensemble forecasts twice a day to the National Hurricane Center in Miami for its forecasters to evaluate.

Having always worked with limited computing resources, weather researchers generally say they are used to accepting lower thresholds of resolution, or realism, in their models, as a tradeoff for faster computations. As Mr. Williford puts it, "Better computational capacity is only going to help this process."

In the future, says Mr. Krishnamurti, meteorologists working on faster computers may be able to apply the superensemble concept to high-resolution, regional models for more objective and accurate forecasts of the path and intensity of hurricanes.

"This is coming, but it's maybe going to be another five years," he says.

Section: Information Technology
Page: A51



Copyright © 2000 by The Chronicle of Higher Education