In the First-Person Science series, scientists describe how they made significant discoveries over years of research. Jacqueline Chen is a senior scientist at the Combustion Research Facility at Sandia National Laboratories.
Conducting extreme-scale computational science at some of the world’s largest computing facilities has been a wonderful ride.
My work over the last 25 years has focused on using leadership-class computers at the Department of Energy’s user facilities to model the flames that result from turbulent combustion. This is the combustion that happens in engines burning fuels such as natural gas, biofuels, or gasoline.
Understanding these interactions is critically important to developing cars and trucks that use less fuel and produce lower emissions. Finding ways to increase vehicle engine efficiency is an important path to energy security and to reducing both greenhouse gas emissions and the small particulates that are harmful to human health. Studying these processes is also helpful for creating more efficient, low-carbon gas turbines for power generation that would use hydrogen-rich fuels like blends of ammonia, hydrogen, nitrogen, and air.
My work draws heavily on computational fluid dynamics, a type of research that uses computing models and data analysis to understand how fluids move. I started on this topic during my PhD at Stanford University, where there was a concentration of people working on it. At the same time, NASA Ames Research Center was hosting some of the world’s largest supercomputers. There was this burgeoning supercomputing capability together with some very good researchers, all focused on applying computational fluid dynamics at the largest computational scales.
From there, I moved to Sandia Labs’ Combustion Research Facility in Livermore, California. Since it was established in the 1970s during the oil crisis, the Combustion Research Facility has developed a world-wide reputation as being the place to go for combustion research. There are more than 100 scientists, engineers, and technicians, all driven towards a common purpose. There are also more than 120 visitors each year coming from various universities, labs, and institutes worldwide. They add greatly to the intellectual vitality of the place.
From its early days, the research there emphasized not only fluid dynamics but chemically reacting flows. These are the complex, multi-step chemical reactions that happen in fluids. My collaborators and I merged the two so we could examine turbulence-chemistry interactions in practical systems relevant to our daily lives, like cars, airplanes, and electricity production.
My work helped the lab establish a high-accuracy simulation capability known as direct numerical simulation (DNS). DNS creates a computer model of how fluids move by numerically resolving the full range of turbulence scales. I was interested in developing a high-performance computing DNS capability that would allow me and other researchers to probe the intricacies of turbulence-chemistry interaction at very small scales (on the order of microns, one millionth of a meter).
You can think of DNS as a numerical microscope that scientists can use to zoom in to a very small volume. It’s very useful for understanding how combustion flames and turbulence – the chaotic straining and wrinkling of those flames – interact.
Modeling chemical reactions requires enormously fine resolution, both in space and time. You need to be able to zoom in to the smallest spaces over very short time periods. That translates to billions of grid points if you want to represent the flow fully. At the same time, to capture some of the comparatively slow chemical processes, you have to be able to simulate out to milliseconds or tens of milliseconds. So you have reactions that take a relatively long time, which you need to track with very small steps. Turbulence competing with chemistry requires even finer resolution, especially as flames are subjected to turbulent motions in less predictable ways. For example, turbulence can end up quenching flames because the chemical reactions can’t occur fast enough.
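As a rough illustration of why those resolution demands explode, the spatial and temporal requirements can be tallied in a back-of-envelope way. Every number below is an assumed, illustrative value, not one taken from the simulations described here:

```python
# Back-of-envelope DNS cost estimate (illustrative, assumed values only).

domain_cm = 1.0    # assumed cubic domain edge length, in centimeters
cell_um = 2.0      # assumed grid spacing, in microns (micron-scale, per the text)

cells_per_edge = domain_cm * 1e4 / cell_um   # 1 cm = 10,000 microns
total_cells = cells_per_edge ** 3            # full 3D mesh

sim_time_ms = 10.0  # simulate tens of milliseconds to reach slow chemistry
dt_ns = 5.0         # assumed time step, in nanoseconds, set by the fast reactions
steps = sim_time_ms * 1e6 / dt_ns            # 1 ms = 1,000,000 ns

print(f"{total_cells:.2e} cells, {steps:.2e} time steps")
```

Even with these modest assumed numbers, the cell count lands in the hundreds of billions and the step count in the millions, which is why resolving fast chemistry and slow processes together demands leadership-class machines.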
In addition, there’s very little information at the individual scales, either large or small. The insights are in the interactions between the scales, where you can see the reactions occurring. This is especially important in combustion flames, which have lots of different reactions and movements happening at the same time.
DNS is a unique method. It models interactions from about a quarter of the scale of an internal combustion engine all the way down to the tiny scales where heat is lost. It helps us understand these fine details of chemistry and turbulence at length and time scales that are often prohibitive to measure.
The research has evolved over more than 30 years. DNS of turbulent combustion has tracked the evolution of high-performance computing, whose capability has doubled roughly every two years.
Back in the 1990s, we could zoom in to maybe the order of a millimeter. In a 1996 paper, we looked at how the flame’s temperature, shape, and the movement of heat and mass interacted. The technology was so limited that we had to simplify a 40-step chemical reaction mechanism down to four global reactions. Although that allowed us to describe the flame’s structure fairly well, we couldn’t show the relationship between the structure and the underlying chemical reactions or the turbulence of the flame. Even with these limitations, the DNS showed how certain chemical species that are crucial to the cascade of reactions in a flame, and that diffuse quickly, can affect the overall burning rate when the flame is coupled with unsteady flow that creates highly curved flames.
In the 2000s, we moved from 2D models to 3D ones, a large step forward. By 2007, we could run a DNS of a 3D flame with a much more detailed description of the chemical reactions. We followed 13 chemical species (the distinct atoms and molecules taking part in the chemical process) through 73 reactions of methane burning. We could also incorporate turbulence into our models, allowing us to watch the flame get more wrinkled over time.
By 2009, we could zoom in even more. DNS lays a computational mesh over a region containing turbulence and flames. The mesh we made had nearly a billion points, with cells one-tenth the width of a human hair. These models required computers that could do more than a trillion calculations per second. At the Oak Ridge Leadership Computing Facility, a DOE user facility, we ran our model for 15 days on 10,000 processors at the same time.
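Those mesh figures can be checked with quick arithmetic. Taking a human hair as roughly 100 microns wide (a common approximation) and assuming a cubic mesh for simplicity:

```python
# Rough consistency check of the mesh figures; the hair width and the
# cubic-mesh shape are assumptions for illustration.
hair_um = 100.0                      # approximate width of a human hair, microns
cell_um = hair_um / 10.0             # one-tenth of a hair: 10-micron cells

points_per_edge = 1000               # assumed cubic mesh
total_points = points_per_edge ** 3  # about a billion points, as in the text

edge_cm = points_per_edge * cell_um / 1e4   # 10,000 microns per centimeter
print(total_points, edge_cm)         # a billion points spanning about 1 cm
```

A billion 10-micron cells works out to a domain about a centimeter across, consistent with the centimeter-scale simulations described below.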
Now, 30 years from when I started, we can encompass entire centimeters of turbulence interaction with flames. You can zoom in and out of them as needed. You can probe them in exquisite detail, measuring all of the different variables, including temperature, species, and velocity, in three dimensions over time. It’s not possible to do that experimentally.
It’s amazing how high-performance computing has made it possible to model more and more complex fuel chemistry. By refactoring our codes for new supercomputers, we’ve been able to home in on much more complex chemistry-turbulence interactions. The lion’s share of those variables belong to chemical species, upwards of hundreds of chemical species. As leadership-class computers have grown, we’ve grown our problem sizes and complexity.
We started with the simplest combustion system, hydrogen-oxygen. That forms the underpinning of all hydrocarbon combustion systems. From there, we progressed to investigating methane-oxygen systems, like in natural gas engines or turbines.
Modern supercomputers allow us to look at far more complicated fuels. We can include chemical reactions that accurately describe complex fuels like biofuels and surrogates for gasoline and diesel. Some of these large biodiesel fuels require upwards of thousands of species and tens of thousands of elementary reaction steps. At this point in time, we’re up to 20 billion cells with 116 chemical species, simulating a reference fuel for a gasoline blend.
We can also see ignition over a wider range of temperatures than before. In 2017, we started being able to look at low-temperature ignition followed by intermediate-temperature ignition, then high-temperature ignition. That’s important because that’s the sequence of processes you follow in a diesel engine. The diesel ignition process controls the combustion phasing or ignition timing in an engine, as well as the combustion rate and emissions that follow. Soot generation is highly dependent on the initial degree of mixing of the cold liquid fuel with the heated ambient air. Understanding the full process is critically important to developing clean, efficient engines.
With that study, we were very surprised to find that turbulence has such a big effect on multi-stage auto ignition. When we saw the simulation results, we thought, “Wow, this is unusual.” We did not expect to find that these low-temperature chemical reaction waves, the so-called ‘cool flames’, formed during low-temperature ignition would have such a big effect on high-temperature ignition. It was a technical surprise for us. But a very impactful one in terms of engine design. If you’re trying to develop an engine, the flame would ignite in a different physical region and mixture composition than you would have thought. That would affect the combustion phasing of your engine.
Combining chemical complexity and fluid mechanics is allowing our models to mimic conditions that span an enormous range of both temperatures and pressures. Our models are now more faithful to the true physics. You can’t represent true turbulence physics in 2D. You couldn’t do multi-stage ignition. Now we can.
In the last five years, we’ve been able to map out the incredibly complex process of burning in a diesel combustion engine. In 2018, we conducted the first fully-resolved 3D DNS of a turbulent diesel flame at thermo-chemical conditions similar to real life. We identified how there are two stages of ignition, moving from low temperatures to high temperatures. Each one occurs at specific times and in specific regions in a turbulent shear flow, which we couldn’t map out before then.
Now, in addition to our work for DOE’s Office of Science, we’re working with the Vehicle Technologies Office, part of DOE’s Office of Energy Efficiency and Renewable Energy. We’re part of a multi-laboratory partnership with the automotive engine manufacturers and independent software vendors. It will help these companies develop more predictive models to design fuel-efficient, clean, and cost-effective engines. Through this partnership, my postdocs and I are responsible for providing DNS data to examine issues with spark ignition and early flame growth under different engine operating conditions.
We anticipate that, in the end, industry will have much-improved, more targeted models to solve their problems.
The advances in computing make this work possible but are also challenging in their own way. Computers change out every two to three years. It’s not possible for an application scientist who may be writing a couple million lines of code to update that code every couple of years.
We could only do this because of the close collaboration across multiple disciplines. Using a concept called application co-design, application scientists interact very closely with computer scientists. They’re much more familiar with the software stack and operating system than I am. Together, we make sure the DNS codes perform extremely well on these exotic computer architectures. We work as a very close-knit team to find numerical methods and programming models that allow us to move our software to new machines without changing the underlying sequential code that scientists are most familiar with.
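The idea of leaving the sequential numerical code untouched while the execution backend changes underneath it can be sketched generically. The article doesn’t name the programming model, so everything below, including the toy diffusion kernel and the backend names, is purely illustrative, not the actual Sandia DNS software:

```python
# Sketch: one numerical kernel, swappable execution backends (illustrative).

def diffuse_step(u, alpha):
    """The 'sequential' kernel scientists maintain: one explicit
    1D diffusion update, written once and never rewritten per machine."""
    return [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)]

def run_serial(kernel, u, alpha):
    # Reference backend: apply the kernel, keep boundary values fixed.
    return [u[0]] + kernel(u, alpha) + [u[-1]]

def run_accelerated(kernel, u, alpha):
    # Stand-in for a machine-specific backend; a real port would map the
    # same unchanged kernel onto GPUs or many-core nodes.
    return run_serial(kernel, u, alpha)

BACKENDS = {"serial": run_serial, "accelerated": run_accelerated}

u = [0.0, 0.0, 1.0, 0.0, 0.0]
result = BACKENDS["serial"](diffuse_step, u, alpha=0.1)
print(result)  # boundaries kept, interior smoothed by the kernel
```

The point of the pattern is that porting to a new machine means writing a new entry in the backend table, not touching the physics in the kernel.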
Having people in different disciplines also allows us to address a lot of crosscutting issues. There are tools that a computer scientist might bring to bear on a fluids problem that we combustion scientists wouldn’t necessarily read about in our scientific literature. For example, visualization computer scientists from UC Davis brought methods for volume and particle visualization combined with data analytics for identifying and tracking flow or combustion features that I would have never otherwise found on my own.
Similarly, working with applied mathematicians has enabled us to simulate turbulent combustion at high pressure, where there are huge disparities in length scales between turbulence and flames. By adaptively refining the mesh where the flame zone is located at a given time, I can concentrate fine meshes only where they are needed. That allows us to avoid wasting computational resources where nothing interesting is happening. Applied mathematicians are also helping solve the problem of the high dimensionality of chemistry (the hundreds of species and reactions alluded to earlier) that must be transported in DNS. They develop reduced-order models based on intrinsic low-dimensional manifolds of the species and temperature. To capture the intermittency of turbulence-chemistry interactions, we need on-the-fly reduced-order modeling and sensitivity analysis. Hence, we need the combined efforts of both computer scientists and applied mathematicians to orchestrate such complex simulation and data science workflows at extreme scale.
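The adaptive-refinement idea can be sketched with a 1D toy: flag for refinement only the cells where the solution changes sharply, such as across a flame front, and leave the rest coarse. Real AMR libraries work on hierarchical block-structured meshes; this gradient-flagging toy is only an assumption-laden illustration:

```python
# Toy 1D adaptive-refinement flagging (illustrative, not a real AMR code).

def flag_for_refinement(values, threshold):
    """Mark cell interfaces whose jump exceeds the threshold;
    only those regions would receive a finer mesh."""
    return [abs(values[i + 1] - values[i]) > threshold
            for i in range(len(values) - 1)]

# A crude 'flame front': temperature jumps sharply between cells 2 and 4.
temperature = [300, 300, 300, 1200, 2100, 2100, 2100]
flags = flag_for_refinement(temperature, threshold=500)
print(flags)  # only the interfaces across the front are flagged
```

The fine mesh then follows the flagged region as the flame moves, which is how computational effort stays concentrated in the flame zone.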
Understanding the language to communicate with computer scientists and mathematicians has been a big challenge. That includes appreciating their difficulties and jointly coming up with a solution or at least a process of how to come up with a solution. It takes a while to get familiar with the lingo and the way of doing things in a different community. We meet weekly to go over stumbling blocks or help each other make the next move. It’s critical to have that communication, with everybody trying to understand and see things from the other discipline’s side.
You don’t learn a different field overnight. You have to ask the same dumb questions over and over again. Eventually you get it. Having postdocs and students who have no built-in boundaries or pre-conceived notions is helpful. They soak all this stuff up like a sponge.
I’ve gained so much from interacting with collaborators from all over the world. I think the people I get to work with are the most rewarding aspect of the work and why I’m still in this business.
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit www.energy.gov/science.