Thoughts on modelling the ecogeomorphic behavior of the Jornada

 

Vince Gutschick     15-18 Nov. 03

 

It was a treat to talk science for 2.5 hours yesterday, and to think in depth about how to describe the Jornada vegetation and landforms at a variety of spatial scales.  I've gathered thoughts from our discussions and added ideas that have come to me since.  I hope these are useful.

 

What are the descriptors of the system?

 

In any dynamic model, particularly one described by differential equations in time, the basic parts are:

 

State variables: these describe the current status/response of the system. These differ according to the fineness of scale.  For example, at large scales, these might include percent veg. cover by several species (or functional groups - evergreen shrubs, deciduous shrubs, ...), soil water content, etc.  At fine scales, the state variables might even include individual plants....

Initial conditions: values of the state variables at the beginning of the simulation. 

Parameters: Fixed traits of the system.   In the short term, soil type and slope/aspect, physiological traits of plant species, etc. are parameters.   Once we start looking at geomorphic changes, slope/aspect can change, becoming state variables, for example.

Boundary conditions: relations among state variables that are continuously enforced - e.g., that the flux of water becomes zero at a finite depth of soil, or, as a dynamic case, that the air blowing across the boundary of our system has its windspeed, temperature, and humidity specified as a function of time from real weather data or (for future weather) from GCM simulations...or that solar fluxes are specified similarly.

   When the boundary conditions are dynamic, such as weather variables are, we may split them off and rename them as driving or forcing variables.  Whatever we call them, they enter the math the same way.

"Equations of motion:" How state variables respond to each other and to forcing variables.  Obvious examples are how water infiltrates soil, how soil erodes under wind or water flows, and how plants respond to light, temperature, and so on (photosynthesis, growth, reproduction but also damage).  

  We can separate these into conservation equations and constitutive equations.  Conservation equations must be satisfied at all times - e.g., water is neither created nor destroyed, only moved among compartments (precipitation inputs, soil water, overland flows as runoff and runon, water evaporated into the air, water incorporated into plants).  Constitutive equations are specific to the players such as plants - how does the photosynthetic rate of Larrea respond to current light, temperature, and so on and to the acclimation state of the plant, or how does soil particle movement respond to the velocity of water moving over it.
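As a concrete toy illustration of this anatomy, here is a one-compartment soil-water sketch; every numerical value (capacity, ET rate, the rain series) is hypothetical, chosen only to show where each piece of the anatomy lives in code:

```python
# Toy bucket model of soil water, illustrating the model anatomy above.
# State variable: soil water W (mm).  Parameters: capacity, et_rate (hypothetical).
# Forcing (driving) variable: daily rain.

def step(W, rain, dt=1.0, capacity=100.0, et_rate=0.02):
    """Advance soil water W one time step of dt days."""
    et = et_rate * W                      # constitutive equation: ET proportional to stored water
    W_new = W + (rain - et) * dt          # conservation equation: water balance
    runoff = max(0.0, W_new - capacity)   # constitutive rule: water beyond capacity runs off
    return W_new - runoff, runoff

W = 50.0                                  # initial condition
for rain in [0.0, 30.0, 0.0, 80.0, 0.0]: # forcing: daily rain (mm)
    W, runoff = step(W, rain)
```

The boundary condition here is only the crude rule that water above capacity leaves as runoff; in a real model it would be a flux condition at depth, driven by measured weather.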

 

OK, how do we set up a description?

 

First, it depends on the scale at which we are working and on the questions we want to ask.  Models can be used in several distinct ways.  I detail these in Appendix 1 (for those who are interested).  To summarize here, they may be described so:

 

1. Predicting behavior, with the assumption that we know the system and its drivers well enough to write the process descriptions accurately.  This works in physics at times, or in biology rarely - more often, we make better and better models, after initial attempts fail and - if we are wise - the failures show us where we need better descriptions.  In this case, we're really pursuing a second or third type of modelling effort:

 

2. Synthesis of our knowledge of the system, to seek "emergent properties."  Some compelling examples exist.  Models have been devised to ask if relatively simple combinations of rate processes in bulk solution and on membranes can generate a "biological clock," with properties of entrainment and the ability to be reset in phase.   Other models took the information from studies of the tiny (20-neuron) flight-control system of the honeybee and asked if it could actually direct complex behavior in insect flight, including obstacle avoidance, prey recognition, and capture.  (Answers: yes, and yes.)  For the Jornada, we have to think hard about what knowledge we want to synthesize - not everything ever studied on the Jornada (from soil erosion to yucca/moth interactions to LIDAR soundings of veg. height to...), nor everything ever studied about the plant species out there at any site.

 

3. Generating hypotheses and/or delimiting experimental studies. A reasonably accurate model will indicate if certain processes are important on a chosen scale of space and time....and also how many more are relatively unimportant.  A model can be especially useful, thus, in planning experiments.  It can weed out the thousands of experiments that are unlikely to produce significant results.  For example, models of water-use efficiency that I developed about 15 years ago indicate that WUE might be improved minimally by plant breeding (for changes in specific leaf area [SLA] and in stomatal control as reflected in leaf-internal CO2 levels [Ci])....but that a judicious choice of SLA and [Ci] could improve yield with very little penalty in WUE.  Similarly, another model predicted that soybean yields could improve 8% if leaf chlorophyll were reduced, by allowing light sharing to lower leaves with little penalty in high-light photosynthesis; the model-derived hypothesis was tested by John Hesketh at the Univ. of Illinois, who found 8% gains.  (Breeders were reluctant to look for true-breeding mutants; only heterozygotes of low Chl were known in soybean at the time.)

 

4. Inverse modelling: taking results and asking what caused them, to put it most simply.  "Simple" cases include measuring light penetration through vegetation and asking how much vegetation above (as leaf area or leaf area index) caused this pattern.  With much more complexity, people like Greg Asner ask how the reflected radiation from a landscape, at a variety of wavelengths and directions, tells us what types and amounts of vegetation are down there.  Inverse models tend to be "ill-conditioned" (inferred quantities are extremely sensitive to small errors in the data), so I don't see us doing this to understand changes in vegetation and geomorphology on the Jornada!
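The light-penetration case makes the ill-conditioning concrete.  Under the Beer-Lambert law, transmittance tau = exp(-k*LAI), so LAI = -ln(tau)/k; with an assumed extinction coefficient k and illustrative numbers, a tiny absolute sensor error swings the inferred LAI widely for a dense canopy:

```python
import math

def lai_from_transmittance(tau, k=0.5):
    """Invert Beer-Lambert light penetration, tau = exp(-k * LAI), for LAI."""
    return -math.log(tau) / k

# Forward problem: dense canopy, LAI = 6 gives tau = exp(-3), about 5% light getting through
tau_true = math.exp(-0.5 * 6.0)

# Inverse problem with a small absolute sensor error of +/- 0.01 in tau:
lai_low = lai_from_transmittance(tau_true + 0.01)   # roughly 5.6
lai_high = lai_from_transmittance(tau_true - 0.01)  # roughly 6.4
# An error of 1% of incident light moves the inferred LAI by nearly a
# full unit - the inferred quantity amplifies small errors in the data.
```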

 

What are the results we want from analysis of the model outputs?

 

“End products:” do we predict the real course of events?  This would be tough to verify, given that many changes we expect at the “important” large scales take decades to develop.  The changes we would see at smaller scales are of interest, but perhaps not unique (other models have been tested), though necessary to show our model validity at all scales.

 

Response patterns: It’s not just the final results (final state variables) that are useful products of models.  There are patterns of response in the state variables.  For example, in some mechanical or chemical models, the values of all the state variables fall into “eigenstates,” basically fixed combinations of state variables.  The total response is the sum of (eigenstate “i”)*(a function of time only for eigenstate “i”).  You solve once for the eigenstates and then get the state variables at any time with just a time response.

            It won’t fall out this simply for the Jornada models, which are, above all, nonlinear, so the eigenstates can’t be defined. However, we have this intuition that there are patterns, in which the system stays near one stable combination of state variables (a “state”, really, more like an eigenstate, with nearly fixed values of each state variable, not an arbitrary combination).  In nonlinear dynamics, these patterns have various names, such as attractors (not necessarily a single state; it can be “orbits” or trajectories around some center). 

            The “states” (really, special states, for lack of a better term) may not be easy to guess for the Jornada, though we have our suspicions (as in the teeter-totter model, in which they are described only qualitatively).  However, if our final model is good, the special states will fall out of the results.  I would take this as a requirement of the model, and as a considerable part of its value at the same time.
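For contrast, here is the eigenstate decomposition in the linear case where it does apply: a small linear system dx/dt = A x is solved once for its eigenstates, and the state at any later time is just the sum of eigenstates, each scaled by its own time factor (the matrix below is an arbitrary illustrative choice, two coupled decaying pools):

```python
import numpy as np

# Linear system dx/dt = A x (illustrative 2x2 matrix, stable and symmetric)
A = np.array([[-1.0,  0.5],
              [ 0.5, -1.0]])

lam, V = np.linalg.eig(A)        # eigenvalues and eigenvectors: the "eigenstates"
x0 = np.array([1.0, 0.0])        # initial condition
c = np.linalg.solve(V, x0)       # project the initial state onto the eigenstates

def x(t):
    """State at time t: each eigenstate evolves independently as exp(lambda_i * t)."""
    return V @ (c * np.exp(lam * t))
```

Nonlinearity breaks exactly this superposition, which is why the special states of a Jornada model have to fall out of full simulation runs rather than a one-time eigen-analysis.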

           

We know from the start that we want to work at multiple scales of space (and time).  It is in the coupling across scales that we lack understanding and comprehensive experimental verification.  We put out ideas on this in the site review.

 

  How do we do this/what do we gain from doing this?

 

  How do we decide scale separations?  Various criteria may apply, and we must decide:

     1. New scales arise at natural scale transitions, where new phenomena emerge. An example in modelling fluxes to and from vegetation (water loss, CO2 uptake) is at the transition from plant to whole stand.  We can’t take atmospheric conditions such as air temperature and humidity as being independent givens any more.  They are affected by the heat and water fluxes from the set of plants, which are impeded by the canopy boundary layer (limited mixing zone). 

     2. New scales arise when new time scales naturally arise (maybe).   Sheetflow of water crosses a 1-m plot in a few seconds, but a playa in minutes to hours.   This does not necessarily give us any natural breaks, if time simply scales with distance.  A break in time scale occurs only if we look at different phenomena…so it looks like we default to criterion 1 again.

     3. (In view of nonlinearities and stochasticities:) New spatial  scales arise where aggregating interferes with accuracy - that is, when variations become large enough across the patch.  For example, in general circulation models of weather and climate, grid cells often cover fairly homogeneous areas (over oceans, or sections of the Great Plains)...but also are large enough that Mount Whitney and Death Valley are in the same cell.  Clearly, processes of cloud condensation, solar radiation interception, etc. differ greatly between these two places.  Thus, one needs submodels run on finer grids that model the variation accurately and then are incorporated into the larger grid, not as dynamic processes (we don't save any computational effort that way) but as "parametrizations" - rescaling of responses that are defined only at the large scale.

Thus, we no longer use any “real” vegetated surface, but an “effective” surface with rescaled properties  - an albedo that isn’t simply the average over all the pieces in the cell, a soil moisture release curve that is similarly not a simple arithmetic average over all the pieces, etc.  The only way to decide what the rescaled responses are is to run the model at the small scale and see how the large cell responds to drivers (solar radiation, etc.) defined on the large scale.  This is the same as saying we don’t follow individual molecules or valence electrons in modelling chemical reactions with ordinary chemical kinetics.
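A toy version of that rescaling, with a made-up exponential evaporation response standing in for any nonlinear point process: running one big cell at the mean temperature gets the flux wrong, and the "effective" (parametrized) cell temperature must instead be fitted from the fine-scale runs.  The response function and all the numbers are purely illustrative:

```python
import math

# Hypothetical nonlinear point response: potential evaporation rising
# steeply with temperature (illustrative form only, not a real scheme).
def evap(T):
    return 0.1 * math.exp(0.06 * T)   # mm/day at temperature T (deg C)

# Fine scale: half the cell is cold high ground, half is hot valley floor.
temps = [5.0, 35.0]
true_flux = sum(evap(T) for T in temps) / len(temps)    # run the fine scale

naive_flux = evap(sum(temps) / len(temps))              # one big cell at mean T: too low

# The "parametrized" effective temperature reproduces the fine-scale flux:
T_eff = math.log(true_flux / 0.1) / 0.06                # about 26 C, not the 20 C mean
```

The only way to find T_eff was to run the small scale and invert the large-scale response, which is exactly the point of the parametrization step described above.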

    

  Problems already noted in our discussion on Friday:

     1. Spatial scales at which to change models vary with the location on the Jornada - e.g., the alluvial fan slopes have the berms and banding, but mesquite dunes have, of course, dunes, on a different (smaller) scale of about 10 m vs. 150 m for berms.     That's OK - the grids can be uneven in modern modelling methods.

     2. The phenomena that distinguish scales might differ by location - wind erosion, vs. arroyo cutting by water, vs.  transitions in soil infiltration rates.  If so, then the sub-grid models are different at different locations.  This is OK, if there’s an unambiguous classification of every small area into one type or another.  Transition zones become a problem, and these are in good part our focus.   What do we do?  I don’t have an immediate answer.

 

How do we validate models running on multiple scales? 

    How do we validate models on the fine scale, where there are many more cells than we can go out and sample experimentally?

  The only way to sample a large number of places is with remote sensing, after it has been ground-truthed at a sufficient number of sites.   Can we do this on the Jornada?  For some variables, such as vegetation cover or energy flux, for which suitable sensors exist, we might do OK.  We won’t be able to detect animals, plant propagules, etc.; we’ll have to infer a lot.   We need to think a lot about verification, in general.

 

Do we need stochastic modelling?

   This can be a buzz-word.  What does it mean?

  All models include stochastic variation - e.g., leaf-to-leaf variation in photosynthetic performance, plant-to-plant variation in reproductive allocation, patch-to-patch variation in soil water infiltration.  We typically model the mean behavior and hope (or sometimes test) that the variations don't shift the mean.

  We might want to explicitly model the variations under certain conditions:

    1. If the patterns of variation are of the essence – that is, our models predict a great variation, and perhaps in patterns that are regular, or random, or of other character.

    2. If the variations shift the mean value.  Nonlinear interactions do this, linear ones don’t.  As an example, I include a plot from one of my models, of how plant transpiration, E, varies with two parameters of plant physiology, namely, the Ball-Berry slope, m (how wide the stomata open in response to opportunities for photosynthesis and good water-use efficiency) and the photosynthetic capacity, Vc,max (amount of  Rubisco enzyme, in essence). 

[Plot omitted: modelled transpiration E as a function of the Ball-Berry slope m and of Vc,max]
The variation of transpiration with Vc,max is very nearly linear, so that variations in Vc,max matter little; only the mean value need be measured, or modelled.  The variation of E with the slope m is a bit nonlinear at high m and low Vc,max, but not markedly so.  This saves on experimental design and on modelling efforts.
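The linear-vs-nonlinear criterion in point 2 is just Jensen's inequality, and it is easy to check numerically.  Both response functions below are purely illustrative stand-ins for a trait response:

```python
import random

random.seed(42)

def linear(x):
    return 2.0 * x + 1.0      # linear response: variation cannot shift the mean

def nonlinear(x):
    return x ** 2             # curved response: variation does shift the mean

# Stochastic input: mean 10, standard deviation 3 (hypothetical trait variation)
xs = [random.gauss(10.0, 3.0) for _ in range(100_000)]
mean_x = sum(xs) / len(xs)

mean_lin = sum(linear(x) for x in xs) / len(xs)   # matches linear(mean_x), up to rounding
mean_non = sum(nonlinear(x) for x in xs) / len(xs)
# mean_non exceeds nonlinear(mean_x) by roughly the variance (about 9 here):
# only where such nonlinearity matters must the variation itself be modelled.
```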

 

Back to basics: what questions do we want to answer? / What hypotheses do we have?

 

We have some intuition, both on the unperturbed behavior of the system and on the effects of management.  Examples include that continuing shrub encroachment is partly due to rising CO2, that overgrazing contributed to pushing much (not yet all) of the Jornada into a new special state of shrub dominance, and that water redirection on small scales can help reestablish grasses.  Some of these have proven particularly difficult to test, much less to resolve conclusively.  In any case, the model(s) should help, by taking the processes we do understand and predicting behaviors to test, including emergent behaviors that surpass our intuition.  Why build a model if it won't tell us anything new?

 

Do we have any starts on the models and the descriptions of the dynamics that we could build into integrated multi-scale models?

 

  Teeter-totter model (Schlesinger et al.)

    Classic desertification models that emphasize abiotic consequences of changes in vegetation (changes in albedo, heat transfer, mesoscale weather patterns: Charney; Eagleson; Lyons; ...)

    Patch models of colonization, death, dispersal (Reynolds et al.; Peters et al.)

    Leaf- to plant- to stand-scale models of physiology: photosynthesis, water use, nutrient acquisition and use, growth, etc. (Gutschick; Huenneke and Miller; Cunningham et al.; Reynolds and Ogle) - including responses to future conditions such as high CO2 (BassiriRad et al.)

    NPP models based on rain use and WUE (Huenneke; Gutschick)

    SEBAL models of water and energy transfers at the surface (in part)

    Hydrology models, incl. geomorphic processes and runon-runoff (Parsons and Wainwright)

    Aeolian models (Gillette)

    Nutrient dynamics models, in part (small scale)

  

 

What parts are we missing?

     Berm and banding models (but there are dune models; adaptable?)

     Mesoscale circulation models applied to the Jornada

     Large-scale models of hydrology, incl. subsurface flows

     Predictive (vs. descriptive) models for biodiversity affecting NPP

     Management models (altered landforms, reseeding, shrub removal, ...)

    

Some conclusions: where might we go from here?

 

Above all, we need to keep talking to find out what processes we do know how to describe and which ones we don't.  We need a focus modelling group, just as we need focus groups for well-honed experiments.  I think we need to ramp up some heavy-duty modelling before heavy-duty experiments, so we do the most informative experiments. Do we need new expertise?  I think we can collaborate outside for a few things, such as mesoscale climate modelling.  We need a common viewpoint, even to get terminology straight - not as a party line, but as a center to work from.  We should, and it looks like we will, meet often to answer the hard questions, some of which I've tried to phrase here.