Transitions Newsletter

Issue 6 | Autumn 2014

Lead Story

Evaluating WRF performance over time

Contributed by Jamie Wolff
As modifications and additions are made to WRF code and released to the community, users often ask, “Is WRF really improving?”
Time series plot of 2 m temperature (C) bias across the CONUS domain over the warm season for WRF versions 3.4 (green), 3.4.1 (blue), 3.5 (red), 3.5.1 (orange), and 3.6 (purple). Median values of the distribution are plotted with 99% confidence intervals. The gray boxes around forecast hours 30 and 42 correspond to the times shown in the next figure.

This is a hard question to answer, largely because “WRF” means something different to each user, each with a specific model configuration for their application. With the numerous options available in WRF, it is difficult to test all possible combinations, and the resulting improvements and/or degradations of the system may differ for each particular configuration. Prior to a release, the WRF code is run through a large number of regression tests to ensure it successfully runs with a wide variety of options; however, extensive testing of forecast skill is not widely addressed. In addition, code enhancements or additions meant to improve one aspect of the forecast may have an inadvertent negative impact on another.

In an effort to provide unbiased information regarding the progression of WRF code through time, the DTC has tested one particular configuration of the Advanced Research WRF (ARW) dynamic core for several releases of WRF (versions 3.4, 3.4.1, 3.5, 3.5.1, and 3.6). For each test, the end-to-end modeling system components were the same: WPS, WRF, the Unified Post Processor (UPP), and the Model Evaluation Tools (MET). Testing was conducted over two three-month periods (a warm season during July-September 2011 and a cool season during January-March 2012), effectively capturing model performance over a variety of weather regimes. To isolate the impacts of the WRF model code itself, 48-h cold start forecasts were initialized every 36 h over a 15-km North American domain.

The particular physics suite used in these tests is the Air Force Weather Agency (AFWA) operational configuration, which includes WSM5 (microphysics), Dudhia/RRTM (short/long wave radiation), M-O (surface layer), Noah (land surface model), YSU (planetary boundary layer), and KF (cumulus). To highlight the differences in forecast performance with model progression, objective model verification statistics were produced for surface and upper-air temperature, dew point temperature, and wind speed over the full CONUS domain and 14 sub-regions across the U.S.

Examples of the results (in this case, 2 m temperature bias) are shown in the figures. A consistent cold bias is seen for most lead times during the warm season for all versions (figure on page 1). While there was a significant degradation in performance during the overnight hours with versions 3.4.1 and newer, a significant improvement is noted for the most recent version (3.6). Examining the distribution of 2 m temperature bias spatially by observation site (figure below), it is clear that for the 30-hour forecast lead time (valid at 06 UTC), v3.6 is noticeably colder over the eastern CONUS. However, for the 42-hour forecast lead time (valid at 18 UTC), v3.4 is significantly colder across much of the CONUS. For the full suite of verification results, please visit the WRF Version Testing website at www.dtcenter.org/eval/meso_mod/version_tne

The four-panel figure shows average 2 m temperature (C) bias by observation station over the warm season for WRF versions 3.4 (left) and 3.6 (right) at forecast hours 30 (top) and 42 (bottom).
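To illustrate the kind of statistic plotted in these figures (a median bias with a 99% confidence interval), the short sketch below computes a bootstrap confidence interval for the median forecast-minus-observation error. This is a hypothetical stand-in, not the actual MET computation: the function name, the sample data, and the bootstrap approach are illustrative assumptions, and MET uses its own pairing and confidence-interval methods.

```python
import numpy as np

def median_bias_ci(forecast, observed, confidence=0.99, n_boot=2000, seed=0):
    """Median forecast-minus-observation bias with a bootstrap CI.

    Illustrative only: a stand-in for the verification statistics a
    tool like MET produces, not its actual implementation.
    """
    errors = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
    rng = np.random.default_rng(seed)
    # Resample the error distribution and take the median of each resample.
    boot = rng.choice(errors, size=(n_boot, errors.size), replace=True)
    medians = np.median(boot, axis=1)
    alpha = 1.0 - confidence
    lo, hi = np.quantile(medians, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(np.median(errors)), float(lo), float(hi)

# Hypothetical 2 m temperatures (deg C) at a few stations for one lead time.
fcst = np.array([12.1, 14.3, 10.8, 13.5, 11.9, 15.0, 12.7, 13.1])
obs = np.array([13.0, 15.1, 11.5, 14.6, 12.8, 15.9, 13.2, 14.0])
median, lo, hi = median_bias_ci(fcst, obs)
# A negative median with an interval excluding zero indicates a cold bias.
```

Repeating such a computation per lead time and per model version yields the kind of time series with confidence intervals shown in the first figure.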



Director's Corner

Kevin Kelleher

Kevin Kelleher, NOAA GSD
Contributed by Kevin Kelleher

During my first 15 months as the ESRL Global Systems Division Director, I have learned about the DTC and its role in the modeling community. The DTC has made remarkable gains in supporting the WRF model within the community, which has contributed to the model's great success and usage both nationally and internationally. I believe the DTC is unique in how it is funded and operated as a joint effort between NCAR and NOAA, along with partners from the Air Force. It is my observation that there is a significant effort to develop global models at resolutions traditionally associated with mesoscale/regional models. Therefore, it is a good time for the mission of the DTC to be reviewed and possibly updated so that the DTC has a viable and robust future, should global models eventually begin to replace mesoscale/regional models within NOAA NCEP operations, for example. At GSD, we have recently reorganized in response to these changes in the modeling community.

All of our modeling efforts are now under a single branch, the Earth Modeling Branch, led by Dr. Stan Benjamin. Nearly all of the GSD DTC efforts now fall within this Branch, which includes researchers working on modeling scales from the storm scale through the global scale. Having convenient access to such a wide range of talented personnel should benefit DTC tasking in the future. In addition, in my role as a member of the DTC Management Board, I have begun to work closely with NWS NCEP and EMC and Dr. Bill Kuo to improve alignment of current NCEP operational needs with the DTC mission, capabilities, and services.



Who's Who

Tara Jensen

An old management adage says that if you need a new task done NOW, ask someone who is already busy. Although she probably cringes to hear it, this applies well to Tara, with one exception: she would offer to take it on first. Her tireless approach to work follows directly from a commitment to accept new challenges in new places, which in practice has led to twists and turns along the way. Her graduate work at Colorado State University, where she completed a Master's degree on aerosol/cloud interactions in marine stratus, was interrupted by a stint as a flight and support scientist in private industry, followed by taking part in every level (forecasting, modeling, data management, ground control, flight scientist, and management) of weather modification field programs in the United Arab Emirates and Wyoming. Much of her field work was in airplanes, to which she credits having a strong stomach.

Tara left NCAR briefly to participate in wind energy research in St. Paul, Minnesota, but quickly returned to NCAR to join the DTC with a focus on verification-related work, specifically with the MET package. She has been involved with extensive real-time and retrospective verification for the Hydrometeorological Testbed (HMT) and the Hazardous Weather Testbed (HWT), most recently as the verification task lead. Her most enjoyable DTC experience, she says, has been teaching at workshops and tutorials, and her work on HWT verification, particularly on interpreting results, has been especially satisfying.

Keeping up with Tara's work life is easy, since much of the verification-related activity at the DTC (MET tutorials, workshops, AMS meetings, and prediction exercises) has her fingerprint on it. Keeping track of her outside-of-work life is a different story. She continues her busy lifestyle by participating in her nine-year-old daughter's school events, sports, and other activities, all of which somehow seem to take on a life of their own!



Bridges to Operations


Did You Know?

Researchers from the DTC plan to provide numerical model runs from a preliminary version of the North American Rapid Refresh Ensemble system (Pre-NARRE) to the Hydrometeorological Testbed of the Weather Prediction Center (HMT/WPC) during their current Winter Exercise. The DTC Ensemble Task will run the ensemble system (most likely on the NOAA hjet computing system) and post-process some of the results for HMT/WPC. Members of the ensemble (eight in total) will be produced from both WRF/RUC and NMMB dynamical cores, and will include different combinations of microphysical, planetary boundary layer, surface physics, convective parameterization, and initial and boundary condition options (as in the chart below). Although the WPC will evaluate the runs on the CONUS domain, the computational domain will be set to the larger existing RAP domain, at 13 km resolution out to 24-48 h, depending on computing resources. One anticipated outcome of the experiment will be an opportunity to compare NARRE forecasts with parallel runs from the Environmental Modeling Center's (EMC) operational Short-Range Ensemble Forecast system (SREF), which will be provided by EMC. In addition, results from the experiment will be used to extend previous assessments of NARRE performance to wintertime regimes.

Contributed by Isidora Jankov and Ed Tollerud.

Configuration of the preliminary version of the North American Rapid Refresh Ensemble system (Pre-NARRE) provided to the Hydrometeorological Testbed of the Weather Prediction Center (HMT/WPC)



Community Connections

The 2014 GSI Community Tutorial

GSI Review Committee Meeting
Contributed by Hui Shao

The DTC hosted the 5th Community Gridpoint Statistical Interpolation (GSI) Tutorial on July 14-16 of this summer at the NCAR Foothills Laboratory in Boulder, Colorado. One of several outreach events sponsored recently by the DTC, this tutorial was held in collaboration with other major GSI development teams from around the United States. With an ultimate goal of providing operational capabilities to the research community, this series of tutorials has become a primary training resource whereby both operational and research users can gain knowledge essential to running and further developing GSI.

The tutorial this year was a three-day event that included both invited lectures and practical hands-on sessions relevant to GSI. Within the program were lectures designed to cover both fundamental (e.g., compilation, execution, and diagnostics) and advanced (pre-processing, radiance and radar data assimilation, hybrid techniques, and GSI infrastructure) topics.

Lecturers and practical session instructors were invited from major GSI development/support teams, including NCEP/EMC, NASA/GMAO, NOAA/ESRL, and NCAR/MMM, along with DTC members from NOAA/ESRL and NCAR/RAL. The principal guest speaker from the university community this year was Dr. Milija Zupanski from Colorado State University. Attended by 41 students from U.S. and international agencies and universities, the tutorial easily reached maximum capacity.

Tutorial presentations and lectures are posted at http://www.dtcenter.org/com-GSI/users/docs/index.php. For more information about the GSI system itself and its community support, please visit: http://www.dtcenter.org/com-GSI/users/index.php.

On July 17, after the Community Gridpoint Statistical Interpolation (GSI) Tutorial (summarized above), the GSI Review Committee also met at the NCAR Foothills Laboratory in Boulder. Established in 2010, this committee continues to coordinate GSI development from both the operational and research communities, and is also responsible for reviewing and approving GSI code updates. During a general review of ongoing GSI development efforts and future plans for GSI, the committee specifically discussed potential community support of the NOAA Ensemble Kalman Filter (EnKF) system, which is currently part of the GSI-based hybrid ensemble-variational system of the NOAA Global Forecast System (GFS) and a potential candidate for other operational applications. The decision was then made to establish code management for this EnKF system that follows the existing GSI code management protocol. As a consequence, the GSI Review Committee effectively becomes a joint review committee for both GSI and EnKF, and new membership (NOAA/ESRL, and the University of Maryland as a deputy member) was approved to represent the EnKF development effort. This new DA review committee thus includes members from NCEP/EMC, NOAA/ESRL, NASA/GMAO, NESDIS, AFWA, NCAR, the University of Maryland, and the DTC.



Announcement

News from the DTC

Announcements, Publications and More


VISITOR PROJECT AWARDS

Paul Roebber (Univ. of Wisconsin - Milwaukee): Demonstration Project: Development of a Large Member Ensemble Forecast System for Heavy Rainfall using Evolutionary Programming.

WORKSHOPS, TUTORIALS, EVENTS

Physics Workshop

The DTC and NOAA will convene a workshop entitled ‘Parameterization of moist processes for next-generation numerical weather prediction models’ on January 27-29, 2015, at the NOAA Center for Weather and Climate Prediction in College Park, MD.

The workshop is intended to stimulate the development of moist process parameterizations for the Next-Generation Global Prediction System (NGGPS) and similar global models at scales and resolutions ranging from synoptic to convection-permitting. The theme of the workshop is also highly relevant to current and future generations of regional and mesoscale models. Observational, modeling, and theoretical perspectives will be addressed. Details will be available at http://www.dtcenter.org/events/workshops15/moist_phys/

PUBLICATIONS

Wolff, J., M. Harrold, T. Fowler, J. Halley Gotway, L. Nance, and B. Brown, 2014: Beyond the basics: Evaluating model-based precipitation forecasts using traditional, spatial, and object-based methods. Weather and Forecasting. http://journals.ametsoc.org/doi/pdf/10.1175/WAF-D-13-00135.1

Bernardet, L., et al.: Community Support and Transition of Research to Operations for the Hurricane Weather Research and Forecast (HWRF) Model. Bulletin of the American Meteorological Society. http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-13-00093.1



PROUD Awards

HOWARD SOH, Software Engineer III, NSF NCAR RAL & DTC

Howard Soh is a Software Engineer III with the NSF NCAR Research Applications Laboratory who contributes to the development and support of the Model Evaluation Tools (MET) statistical component of the METplus system at the DTC.

Howard splits his time between the National Security Applications Program (NSAP) and the Joint Numerical Testbed (JNT), but his contributions to METplus far exceed expectations for that allocation. He is a fully engaged and active member of the METplus team and is a leader in several critical aspects of MET: SonarQube static code analysis, Python embedding, NetCDF APIs, point observation data ingest, and adding support for unstructured grids. Howard also monitors the METplus Discussions forum, where users submit questions for help, and provides his expertise when needed. He often answers the questions via email or Slack well before the person monitoring Discussions for that day has even read the question!

The DTC is proud to recognize Howard for consistently demonstrating excellent technical ability, initiative, communication, and leadership skills with all of his team members. He is not only a talented software engineer on the METplus team, but is also eager to lead new development tasks, takes a proactive approach in supporting customers, is friendly and approachable, and is always happy to help the team.

Thank you, Howard, for all the work you do for your DTC colleagues, partners, and customers!
