Transitions Newsletter

Issue 12 | Autumn 2016

Lead Story

NOAA Selects GFDL’s Dynamical Core

In August 2014, numerical weather prediction modelers attended a workshop to discuss dynamical core requirements and attributes for the NGGPS, and developed a battery of tests to be conducted in three phases over 18 months. Six existing dynamical cores were identified as potential candidates for NGGPS.

During Phase 1, a team of evaluators ran benchmarks to assess each core's performance, both meteorological and computational, and its stability. The performance benchmark measured the speed of each candidate model at the resolution currently run in National Centers for Environmental Prediction (NCEP) operations, and at a much higher resolution expected to be run operationally within 10 years. They also evaluated the ability of the models to scale across many tens of thousands of processor cores.

Assessment of the test outcomes from Phase 1 resulted in the recommendation to reduce the candidate pool to two cores, NCAR’s Model for Prediction Across Scales (MPAS) and GFDL’s Finite-Volume on a Cubed Sphere (FV3), prior to Phase 2.

In Phase 2, the team evaluated the two remaining candidates on meteorological performance using both idealized physics and the operational GFS physics package. Using initial conditions from operational analyses produced by NCEP’s Global Data Assimilation System (GDAS), each dynamical core ran retrospective forecasts covering the entire 2015 calendar year at the current operational 13 km horizontal resolution. In addition, two cases, Hurricane Sandy in October 2012 and the May 18-20, 2013 tornado outbreak in the Great Plains, were run with enhanced resolution (approximately 3 km) over North America. The team assessed the ability of the dynamical cores to predict severe convection without a deep convective parameterization, using operational initial conditions and high-resolution orography.

The results of Phase 2 tests showed that GFDL’s FV3 satisfied all the criteria, had a high level of readiness for operational implementation, and was computationally highly efficient. As a result, the panel of experts recommended to NOAA leadership that FV3 become the atmospheric dynamical core of the NGGPS. NOAA announced the selection of FV3 on July 27, 2016.

Phase 3 of the project, now getting underway, will involve integrating the FV3 dynamical core with the rest of the operational global forecast system, including the data assimilation and post-processing systems. See results at https://www.weather.gov/sti/stimodeling_nggps_implementation_atmdynamics.

Contributed by Jeff Whitaker.

Hindcast of the 2008 hurricane season, simulated by the FV3-powered GFDL model at 13 km resolution.

NGGPS Dynamical Core: Phase 1 Evaluation Criteria

  • Simulate important atmospheric dynamical phenomena, such as baroclinic and orographic waves, and simple moist convection
  • Restart execution and produce bit-reproducible results on the same hardware, with the same processor layout (using the same executable with the same model configuration)
  • High computational performance (8.5 minutes per forecast day) and scalability to NWS operational CPU processor counts needed to run the 13 km and higher resolutions expected by 2020
  • Extensible, well-documented software that is performance portable
  • Execution and stability at high horizontal resolution (3 km or less) with realistic physics and orography
  • Evaluate level of grid imprinting for idealized atmospheric flows
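
The restart criterion above has a mechanical interpretation: a run that is stopped and restarted must produce output that is byte-for-byte identical to an uninterrupted run on the same hardware, with the same executable and processor layout. A minimal sketch of such a check (file names are hypothetical, and comparing cryptographic digests is just one convenient way to test bit-identity):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical output files: one from an uninterrupted run, one from a run
# stopped midway and restarted with the same executable and configuration.
# Bit-reproducibility requires the digests to match exactly:
# assert sha256_of("forecast_full.nc") == sha256_of("forecast_restart.nc")
```

Matching digests imply matching bytes, so this test is stricter than comparing fields to within round-off.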

Phase 2 Evaluation Criteria

  • Plan for relaxing the shallow atmosphere approximation (deep atmosphere dynamics) to support tropospheric and space-weather requirements.
  • Accurate conservation of mass, tracers, total energy, and entropy, which are of particular importance for weather and climate applications
  • Robust model solutions under a wide range of realistic atmospheric initial conditions, including strong hurricanes, sudden stratospheric warmings, and intense upper-level fronts with associated strong jet-stream wind speeds using a common (GFS) physics package
  • Computational performance and scalability of dynamical cores with GFS physics
  • Demonstrated variable resolution and/or nesting capabilities, including physically realistic simulations of convection in the high-resolution region
  • Stable, conservative long integrations with realistic climate statistics
  • Code adaptable to the NOAA Environmental Modeling System (NEMS) / Earth System Modeling Framework (ESMF)
  • Detailed dycore (dynamical core) documentation, including documentation of the vertical grid, numerical filters, time-integration scheme, and variable resolution and/or nesting capabilities.
  • Performance in cycled data assimilation tests to uncover issues that might arise when cold-started from another assimilation system
  • Implementation plan including costs

 


Director's Corner

Paula Davidson

Contributed by Paula Davidson

NOAA’s testbeds and proving grounds (NOAA TBPG) are an important link between research advances and applications, and especially NOAA operations. Some are long-recognized, like the Developmental Testbed Center (DTC), while others have been chartered more recently. With the 2015 launch of the Arctic Testbed in Alaska, twelve NOAA TBPG follow execution and governance guidelines to be formally recognized by NOAA. These facilities foster and host competitively-selected, collaborative transition testing projects to meet NOAA mission needs. Projects are supported through dedicated or in-kind facility support, and programmatic resources both internal and external to NOAA. Charters and additional information on NOAA TBPG, as well as summaries of recent coordination activities and workshops, are posted at the web portal. See www.testbeds.noaa.gov.

Along with adopting systematic guidelines for function, execution, and governance of NOAA TBPG, in 2011 NOAA instituted formal coordination among the TBPG, to better leverage progress across the spectrum of testing, and provide a consistent voice and advocacy for programs and practices involving the TBPG. The coordination committee hosts annual workshops featuring collaborative testing on high-value mission needs, fosters practices consistent with rigorous, transparent testing and increased communication of test results, and provides a forum to advance program initiatives in transitions of research to operations and of operations to research.

NOAA’s TBPG conduct transition testing to demonstrate the degree of readiness of advanced research capabilities for operations/applications. Over the past two years, these facilities completed more than 200 transition tests, demonstrating readiness for NOAA operations for more than 70 candidate capabilities. More than half have already been deployed. Beyond the simple transition statistics, NOAA TBPG have generated a wealth of progress in developing science capabilities for use by NOAA and its partners through more engaged partnerships among researchers, developers, operational scientists, and end-user communities. Incorporating appropriate operational systems and practices in development and testing is a key factor in speeding the integration of new capabilities into service and operations.

DTC, in collaboration with public and private-sector partners, plays an increasingly important role in NOAA transitions of advanced environmental modeling capabilities to operations, and with rigorous testing to evaluate performance and potential readiness for NOAA operations. Readiness criteria include capability-specific metrics for objective and subjective performance, utility, reliability and software engineering/production protocols. DTC facilitates R&D partners’ use of NOAA’s current and developmental community modeling codes in related research, leading to additional evaluation and incorporation of partner-generated innovations in NOAA’s operational models.

NOAA programs that have recently supported projects conducted at NOAA TBPG, and especially at DTC, include the Next Generation Global Prediction System (NGGPS), Collaborative Science and Technology Applied Research Program, Climate Program Office, the US Weather Research Program, and the Hurricane Forecast Improvement Program. Under NGGPS auspices, the DTC added a new unit for testing prototypes for NOAA’s next global prediction system. DTC’s contributions to the success of NGGPS will be the foundation for improved forecasts in critical mission areas such as high-impact severe/extreme weather in the 0-3 day time frame, in the 6-10 day time frame, and for weeks 3-4. As chair of NOAA’s TBPG coordinating committee, I am excited about the tremendous opportunity and capability that the DTC brings to these efforts to enhance NOAA’s science-based services.

 


Who's Who

Kathryn Newman

As a junior Atmospheric Science major at the University of North Dakota (UND), Kathryn Newman organized 25 weather labs for hundreds of aviation students who were required to take Meteorology (ATSC 110). The multi-tasking and organizational skills she developed come in handy in her role as DTC Hurricane Task Lead, where she oversees the transition of Hurricane Task research to EMC. “It’s challenging to keep track of all the moving parts in modeling research and getting them into operations,” she says, “but it is cool to be involved at this level--to know what goes into the models.” She also serves on the Model Evaluation Team and the Data Assimilation Team.

Kathryn has been with NCAR since 2009. She led the development of a functionally-similar operational environment for the Air Force Weather Agency to determine an appropriate initial configuration for their impending Gridpoint Statistical Interpolation data assimilation system for operations. She is proud to have wrapped up that project and published the results.

She earned her B.S. and M.S. from UND. Her Master’s work involved ground validation of satellite products for atmospheric radiation. She also worked on MM5 and WRF particle dispersion applications for the Army High Performance Computing Research Center.

Kathryn grew up in Anoka, Minnesota, where she remembers trying to get her Halloween costume over her snowsuit. In college, she braved the cold with other students to stand in line to get the best seats. UND posted signs: “Stand at your own risk,” with a thermometer nearby for bragging rights. She and her husband road-trip to Omaha, Colorado Springs, or even Minneapolis to catch college hockey games in their spare time.

Kathryn wanted to be a veterinarian when she was younger because she loved animals. Though life took her in a different direction, she likes to hike Chautauqua trails with Lucy, her beagle. She says she and her husband like to do typical “Colorado stuff”: ski, hike, and visit craft breweries. Avery Brewing Co. is currently at the top of her list.

 


Visitors

Object-based Verification Methods

Visitors: Jason Otkin, Chris Rozoff, and Sarah Griffin

As visitors to the DTC in 2015, Jason Otkin, Chris Rozoff, and Sarah Griffin explored using object-based verification methods to assess the accuracy of cloud forecasts from the experimental High Resolution Rapid Refresh (HRRR) model. Though the forecast accuracy could be assessed using traditional statistics such as root mean square error or bias, additional information about errors in the spatial distribution of the cloud field could be obtained by using more sophisticated object-based verification methods.

The primary objective of their visit to the DTC was to learn to use the Model Evaluation Tools’ (MET) Method for Object-Based Diagnostic Evaluation (MODE). Once they learned how MODE defines single objects and clusters of objects, they could use MODE output of individual objects and matched pairs to assess the forecast accuracy.

The team also wanted to develop innovative methods using MODE output to provide new insights. For example, they were able to calculate and compare how well certain characteristics of the forecast cloud object, such as its size and location, match those of the observed cloud object.

One outcome of their DTC visit was the development of the MODE Skill Score (MSS). The MSS uses the interest values generated by MODE, which characterize how closely the forecast and observed objects match each other, along with the size of the observed object, to portray the MODE output as a single number.
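
As a rough sketch of the idea (not necessarily the authors' exact formulation): weighting each observed object's best-match interest value by the object's size and normalizing yields a single summary score, with unmatched observed objects contributing an interest of zero. The function and its inputs below are illustrative assumptions:

```python
def mode_skill_score(objects):
    """Size-weighted mean of MODE interest values over observed objects.

    `objects` is a list of (area, interest) pairs, one per observed cloud
    object: `area` is the object's size (e.g., in grid points) and
    `interest` is the MODE interest value of its best-matching forecast
    object (0.0 if the object is unmatched). Returns a score in [0, 1].
    """
    total_area = sum(area for area, _ in objects)
    if total_area == 0:
        return 0.0
    return sum(area * interest for area, interest in objects) / total_area

# Example: two matched objects and one unmatched observed object.
score = mode_skill_score([(500, 0.92), (120, 0.70), (60, 0.0)])  # ≈ 0.8
```

Weighting by observed-object size keeps a few large, well-forecast cloud shields from being outvoted by many small, poorly matched fragments.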

For their project, they assessed the 1-h experimental HRRR forecast accuracy of cloud objects occurring in the upper troposphere, where satellite infrared brightness temperatures are most sensitive. They used simulated Geostationary Operational Environmental Satellite (GOES) 10.7μm brightness temperatures generated for each HRRR forecast cycle, and compared them to the corresponding GOES observations. Forecast statistics were compiled during August 2015 and January 2016 to account for potential differences in cloud characteristics between the warm and cool seasons.

Overall, the higher interest value scores during August indicate that the sizes of the forecast objects more closely match those of the observed objects, and that the spatial displacement between their centers of mass is smaller. They also found that smaller cloud objects have less predictability than larger objects, and that the size of the 1-h HRRR forecast cloud objects is generally predicted more accurately than their location.

The researchers hope this knowledge helps HRRR model developers identify reasons why a particular forecast hour or time period is more accurate than another. It could also help diagnose problems with the forecast cloud field to make forecasts more accurate.

Otkin, Rozoff, and Griffin were visiting from the University of Wisconsin-Madison Space Science and Engineering Center and Cooperative Institute for Meteorological Satellite Studies. They were hosted by Jamie Wolff of NCAR. The DTC visitor project allowed the team to discuss methods, insights, and results face-to-face. The team feels this project scratched the surface of how to use satellite observations and object-based verification methods to assess forecast accuracy, and that the door is open for future collaboration.

Contributed by Jason Otkin, Sarah Griffin, and Chris Rozoff.

 


Did you know?

Did you know there are suggested topics for Visitor Projects that receive special consideration?

  • Advance the forecast skill of the DTC-supported HWRF modeling system through improved physics and/or initialization

  • Advance the analysis capability of the DTC-supported Gridpoint Statistical Interpolation and/or the NOAA Ensemble Kalman Filter (EnKF) Data Assimilation systems through development, testing, and evaluation of advanced data assimilation techniques and the addition of new data types or measurements

  • Transition innovations in atmospheric physical parameterizations to NOAA’s Next-Generation Global Prediction System (NGGPS)

  • Add new capabilities to the Model Evaluation Tools

For more information and to apply, go to http://www.dtcenter.org/visitors/

 


PROUD Awards

HOWARD SOH, Software Engineer III, NSF NCAR RAL & DTC

Howard Soh is a Software Engineer III with the NSF NCAR Research Applications Laboratory who contributes to the development and support of the Model Evaluation Tools (MET) statistical component of the METplus system at the DTC.

Howard splits his time between the National Security Applications Program (NSAP) and the Joint Numerical Testbed (JNT), but his contributions to METplus far exceed expectations for that allocation. He is a fully engaged and active member of the METplus team and is a leader in several critical aspects of MET: SonarQube static code analysis, Python embedding, NetCDF APIs, point observation data ingest, and adding support for unstructured grids. Howard also monitors the METplus Discussions forum, where users submit questions for help, and provides his expertise when needed. He often answers the questions via email or Slack well before the person monitoring Discussions for that day has even read the question!

The DTC is proud to recognize Howard for consistently demonstrating excellent technical ability, initiative, communication, and leadership skills with all of his team members. He is not only a talented software engineer on the METplus team, but is also eager to lead new development tasks, takes a proactive approach in supporting customers, is friendly and approachable, and is always happy to help the team.

Thank you, Howard, for all the work you do for your DTC colleagues, partners, and customers!