Complexity vs Simplicity in Geophysics

In our book Mathematical GeoEnergy, several geophysical processes are modeled, from conventional tides to ENSO. Each model fits the data by applying a concise physics-derived algorithm; the key is the algorithm's conciseness, not necessarily its subjective intuitiveness.

I’ve followed Gell-Mann’s work on complexity over the years, so I will try applying his qualitative effective-complexity approach to characterize the simplicity of the geophysics models described in the book and on this blog.

From Deacon, Information_Complexity_Depth.pdf

Here’s a breakdown, from least complex to most complex:

1. Say we are doing tidal analysis by fitting a model to a historical sea-level height (SLH) tidal gauge time-series. That’s essentially an effective complexity of 1 because it just involves fitting amplitudes and phases from known lunisolar sinusoidal tidal cycles.

Figure 1: Conventional tidal analysis by fitting amplitudes and phases of known tidal periods

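To show just how little algorithm a complexity-1 fit needs, here is a minimal harmonic-fitting sketch. The constituent set, periods, and amplitudes are illustrative placeholders (two long-period constituents, not the full lunisolar catalog), and the projection trick assumes a record spanning many periods:

```python
import math

# Two long-period tidal constituents (periods in days, approximate):
CONSTITUENTS = {"Mf": 13.66, "Mm": 27.55}   # lunar fortnightly and monthly

def fit_constituents(t, y, periods):
    """Recover amplitude and phase of each known tidal period by projecting
    the record onto cos/sin at that frequency. Valid when the record spans
    many periods, so the sinusoids are nearly orthogonal over the window."""
    n = len(t)
    fitted = {}
    for name, T in periods.items():
        w = 2 * math.pi / T
        a = 2 / n * sum(yi * math.cos(w * ti) for ti, yi in zip(t, y))
        b = 2 / n * sum(yi * math.sin(w * ti) for ti, yi in zip(t, y))
        fitted[name] = (math.hypot(a, b), math.atan2(-b, a))  # amplitude, phase
    return fitted

# Synthetic sea-level record built from the two known cycles
t = [0.25 * i for i in range(40000)]              # 10,000 days at 6-hour sampling
y = [1.5 * math.cos(2 * math.pi * ti / 13.66 + 0.4)
     + 0.8 * math.cos(2 * math.pi * ti / 27.55 - 1.1) for ti in t]

fitted = fit_constituents(t, y, CONSTITUENTS)     # recovers the amplitudes/phases
```

The entire "model" is a handful of known frequencies plus one amplitude and one phase each, which is the point of calling it an effective complexity of 1.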

2. The same effective complexity of 1 applies for the differential length-of-day (dLOD) time-series, as it involves straightforward additive tidal cycles.

Figure 2: Long-period tidal cycles map directly to dLOD.
The long-term modulation shown follows the 18.6-year nodal cycle.

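The 18.6-year modulation in the caption is still purely additive, which is why the dLOD model stays at complexity 1: an amplitude-modulated fortnightly carrier is exactly the sum of three sinusoids. A quick numerical check (the amplitude and modulation depth are illustrative, the periods approximate):

```python
import math

W_C = 2 * math.pi / 13.661    # fortnightly tidal carrier (Mf), in days
W_N = 2 * math.pi / 6798.38   # 18.6-year lunar nodal cycle, in days
A, M = 1.0, 0.3               # carrier amplitude, modulation depth (illustrative)

def modulated(t):
    """Fortnightly carrier amplitude-modulated by the nodal cycle."""
    return A * (1 + M * math.cos(W_N * t)) * math.cos(W_C * t)

def three_lines(t):
    """The same signal written as a purely additive sum of three sinusoids."""
    return (A * math.cos(W_C * t)
            + A * M / 2 * math.cos((W_C + W_N) * t)
            + A * M / 2 * math.cos((W_C - W_N) * t))

# the two forms agree to machine precision at every t
```

So the nodal "modulation" is just two extra sidebands in the additive fit, not a new mechanism.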

3. The Chandler wobble model developed in Chapter 13 has an effective complexity of 2 because it takes a single monthly tidal forcing and multiplies it by a semi-annual nodal impulse (one for each nodal cycle pass). That is just a bit more complex than #1 or #2, but the complexity may already be too great for geophysicists to accept, as the consensus instead argues for a stochastic forcing stimulating a resonance.

Figure 3: Chandler wobble model consisting of a lunisolar tidal forcing.
The beat frequency is the annual (solar) nodal cycle against the lunar nodal cycle.

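The beat arithmetic behind Figure 3 fits in a few lines. This is an illustrative back-of-the-envelope check, not the book's full derivation: it uses the fortnightly nodal tide (half the 27.2122-day draconic month) and the fact that a semi-annual impulse train has spectral lines at even multiples of one cycle per year:

```python
YEAR = 365.242          # days
FORTNIGHTLY = 13.6061   # draconic fortnightly tide, half the draconic month

f_tidal = YEAR / FORTNIGHTLY            # ~26.84 cycles per year
# the nearest even-harmonic line of the semi-annual impulse train
nearest_line = round(f_tidal / 2) * 2   # 26 cycles per year
beat = abs(f_tidal - nearest_line)      # ~0.84 cycles per year
period_days = YEAR / beat
print(period_days)                      # ~433 days, the Chandler wobble period
```

A two-factor model (one tidal line, one impulse train) lands on the observed 433-day wobble, which is what effective complexity 2 means here.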

4. The QBO model described in Chapter 11 is also estimated at an effective complexity of 2, as it is impulse-modulated by nearly the same mechanism as the Chandler wobble of #3. But instead of the bandpass filter used for the Chandler wobble, the QBO model applies an integrating filter to create a more square-wave-like time-series. Again, this is too complex for consensus atmospheric physics to accept.

Figure 4: The QBO model has a fundamental cycle of ~2.33 years.
This is an aliasing of the lunar nodal cycle against the annual cycle.

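The aliasing arithmetic for the caption's ~2.33-year cycle is again a few lines. This is an illustrative check (the full constituent bookkeeping is in Chapter 11): fold the draconic month against an annual impulse and read off the slow residual:

```python
YEAR = 365.242
DRACONIC_MONTH = 27.2122   # lunar draconic (nodal) month, days

f_tidal = YEAR / DRACONIC_MONTH     # ~13.42 cycles per year
alias = f_tidal - round(f_tidal)    # annual impulse folds it to ~0.42 cycles/yr
period_years = 1 / abs(alias)
print(period_years)                 # ~2.37 years, near the observed ~28-month QBO
```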

5. The ENSO model described in Chapter 12 has an effective complexity of 3 because it adds the nonlinear Laplace’s Tidal Equation (LTE) modulation to the square-wave-like fit of #4 (QBO), with the forcing calibrated against the tidal model of #2 (dLOD). Of course, this additional level of physics “complexity” is certain to be above the heads of ocean scientists and climate scientists, who are still scratching their heads over #3 and #4.

Figure 5: Higher complexity of the ENSO model due to nonlinear modulation of the LTE solution.


The ENSO model is complex due to the non-linearity of the solution. The cyclic tidal factors can create harmonics both from the inverse-cubic gravitational pull and from the LTE solution; together with the annual impulse modulation, this creates an additional nasty aliasing that requires painstaking analysis to unravel.
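A minimal sketch of the LTE modulation form, which is the source of those harmonics: the output is a sinusoidal function of the forcing level itself rather than of time. The amplitudes, wavenumbers, phases, and the toy forcing below are placeholders, not fitted values from the book:

```python
import math

def lte_modulate(forcing, terms):
    """Apply an LTE-style modulation: output = sum_k a_k * sin(k_w * F(t) + phi_k),
    a sinusoid of the forcing amplitude F(t), not of time.
    `terms` is a list of (amplitude, wavenumber, phase) placeholder triples."""
    return [sum(a * math.sin(kw * f + phi) for a, kw, phi in terms)
            for f in forcing]

# toy square-wave-like input standing in for the QBO-style integrated forcing
forcing = [math.copysign(1.0, math.sin(2 * math.pi * t / 2.37))
           for t in range(100)]
out = lte_modulate(forcing, [(1.0, 1.0, 0.0), (0.5, 3.0, 0.0)])
```

Because the sinusoid wraps the forcing, even a clean two-level input generates a comb of harmonics, which is exactly why the ENSO fit is harder to disentangle than the QBO fit.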

By comparison, most GCMs of climate behaviors have effective complexities much greater than this because, as Gell-Mann defined it, the shortest algorithmic description would require pages and pages of text to express. Perhaps climate scientists prefer the massive additional complexity of a GCM over the intuition required to build up complexity incrementally.

Since this post started with a Gell-Mann citation, we may as well end with one:

“Battles of new ideas against conventional wisdom are common in science, aren’t they?”

“It’s very interesting how these certain negative principles get embedded in science sometimes. Most challenges to scientific orthodoxy are wrong. A lot of them are crank. But it happens from time to time that a challenge to scientific orthodoxy is actually right. And the people who make that challenge face a terrible situation. Getting heard, getting believed, getting taken seriously and so on. And I’ve lived through a lot of those, some of them with my own work, but also with other people’s very important work. Let’s take continental drift, for example. American geologists were absolutely convinced, almost all of them, that continental drift was rubbish. The reason is that the mechanisms that were put forward for it were unsatisfactory. But that’s no reason to disregard a phenomenon. Because the theories people have put forward about the phenomenon are unsatisfactory, that doesn’t mean the phenomenon doesn’t exist. But that’s what most American geologists did until finally their noses were rubbed in continental drift in 1962, ’63 and so on when they found the stripes in the mid-ocean, and so it was perfectly clear that there had to be continental drift, and it was associated then with a model that people could believe, namely plate tectonics. But the phenomenon was still there. It was there before plate tectonics. The fact that they hadn’t found the mechanism didn’t mean the phenomenon wasn’t there. Continental drift was actually real. And evidence was accumulating for it. At Caltech the physicists imported Teddy Bullard to talk about his work and Patrick Blackett to talk about his work, these had to do with paleoclimate evidence for continental drift and paleomagnetism evidence for continental drift. And as that evidence accumulated, the American geologists voted more and more strongly for the idea that continental drift didn’t exist. The more the evidence was there, the less they believed it. 
Finally in 1962 and 1963 they had to accept it and they accepted it along with a successful model presented by plate tectonics….”

https://scienceblogs.com/pontiff/2009/09/16/gell-mann-on-conventional-wisd

With all that, progress is being made in Earth geophysics by looking at other planets. My high-school and college classmate Dr. Alex Konopliv of NASA JPL has led the first research team to detect the Chandler wobble on another planet (in this case Mars); see “Detection of the Chandler Wobble of Mars From Orbiting Spacecraft”, Geophysical Research Letters (2020).

In the body of the article, a suggestion is made as to the source of the forcing for the Martian Chandler wobble. The Martian moon Phobos is quite small and orbits the planet in ~7 hours, so it may not have the impact that Earth’s Moon has on our Chandler wobble.

The wobble is small, about 10 cm on average.

Since a Mars year is 687 Earth days, only the 3rd harmonic (229 days) is close to the measured wobble period of 206.9 days. For the Earth, it’s quite simple how the nodal lunar cycle interferes with the annual cycle to line up exactly with the Earth’s 433-day Chandler wobble (see Figure 3 above), creating that wobble as a forced response; there is nothing like that on Mars, whose wobble may instead be a natural response.
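The harmonic comparison is simple arithmetic, using only the two numbers quoted above:

```python
MARS_YEAR = 687.0    # Martian year in Earth days
MEASURED = 206.9     # reported Martian Chandler wobble period, days

harmonics = {n: MARS_YEAR / n for n in (1, 2, 3, 4)}  # 687, 343.5, 229, 171.75
closest = min(harmonics, key=lambda n: abs(harmonics[n] - MEASURED))
mismatch = abs(harmonics[closest] - MEASURED) / MEASURED
print(closest, harmonics[closest], round(100 * mismatch, 1))  # 3 229.0 10.7
```

An ~11% mismatch is nothing like the exact lineup seen on Earth, which is what suggests the Martian wobble is not an annually forced response.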

7 thoughts on “Complexity vs Simplicity in Geophysics”

  1. There’s an interesting recent paper at Nature Climate Change

    Benestad, R., Sillmann, J., Thorarinsdottir, T. et al., "New vigour involving statisticians to overcome ensemble fatigue". Nature Clim Change 7, 697–703 (2017). https://doi.org/10.1038/nclimate3393

    It describes the role of ensembles of climate models, very much in line with things like Bayesian model averaging or boosted regression trees or random forests for regression, that is, ensemble methods. (See Zhi-Hua Zhou, Ensemble Methods, 2012, Taylor & Francis, for instance.)

    I’m raising this matter on this blog post for discussion because it seems these may be the antithesis of the direction suggested in the post and it may be useful to critique them for contrast.

    What I did not appreciate until I read this paper is that ensembles of climate models sometimes include incomplete models. I don’t mean models that are less capable of simulating some features of the climate system than others; that’s true of nearly any two arbitrarily chosen models. I mean that some models are “experts” at a small piece of the climate system, or even one observable, and they have nothing, or even wrong things, to say about the other pieces. This is closer to the approach used in some statistical learning approaches (also called “machine learning”, “ML”, or even “AI”, even if I disagree these are AI approaches). There the base or primitive learners may get something right, but they are bad for another part of the space; yet if enough of these are harnessed, even if they are trained randomly, performance can arise when answers are produced by a combination of their outputs.

    One critique of this approach is its lack of perspicuity: why does the ensemble produce the answer it does? One can examine the models that respond to a set of inputs and their weightings, but insight then depends upon understanding all the models. Instead, understanding might arise by testing the ensemble on a series of standardized graded examples and seeing what answers it produces. Even better, one might use a simpler, non-parametric, but more transparent statistical method or algorithm and train it on the outputs of the ensemble for a limited set of initial and boundary conditions. Here the outputs of the ensemble are the responses and the initial-with-boundary conditions are the predictors. The idea is that exploration of the algorithm’s responses across varying conditions then gives physical insights and is the basis for understanding what the ensemble “is saying”.
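    As a toy version of that last suggestion, suppose we want the most transparent surrogate possible for a one-predictor case: a least-squares line mapping a boundary condition to the ensemble's response, whose two coefficients can then be inspected directly. The predictor and response values below are made up purely for illustration:

    ```python
    def fit_linear_surrogate(x, y):
        """Ordinary least-squares line y ~ a + b*x, used as a transparent
        surrogate: x is a boundary-condition predictor, y the ensemble response."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        a = my - b * mx
        return a, b

    # hypothetical ensemble runs: forcing boundary condition -> mean response
    forcing_levels = [2.6, 4.5, 6.0, 8.5]       # illustrative predictor values
    ensemble_response = [1.8, 2.7, 3.2, 4.3]    # made-up ensemble-mean outputs
    a, b = fit_linear_surrogate(forcing_levels, ensemble_response)
    ```

    The slope `b` is then a single interpretable number summarizing what the opaque ensemble "is saying" about sensitivity to that boundary condition.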


  2. I always thought the ensemble was there because the modelers were solving a statistical mechanics problem, where an ensemble is needed to generate all the different trajectories. But that doesn’t work for a phenomenon that is essentially a single standing-wave mode.

    Then you need to assume that they are doing an ensemble because they are trying to achieve a “wisdom of crowds” experiment to narrow down on a most likely model composite, which is perhaps what you are getting at.

