I am a physical oceanographer who knows nothing about the Chandler wobble, is only slightly familiar with the QBO, but is a longtime expert on ENSO.

To be blunt, trying to shoehorn ENSO into a periodic tidal framework stretches reality to fit someone’s preconceived theory. Only the most motivated reasoning can believe this.… (more stuff)

I am sorry to have wasted an hour on this.

Billy Kessler, NOAA/PMEL, Seattle

Interactive comment on Earth Syst. Dynam. Discuss., https://doi.org/10.5194/esd-2020-74, 2020.

Billy also wrote this on his web site (emphasis mine):

4. An idea for a science fair project. Requested by a parent.

Here’s an idea. This experiment is similar to what actual scientists are doing right now.

The project is to construct some forecast models of El Niño’s development over the next few months. We don’t know what it will do. Will it get more intense?, weaken?, remain strong?, and if so for how long? These are the subject of much debate in the scientific community right now, and many efforts are under way to predict and understand it.

The models would be forecasts made using several assumptions, and the main result would be graphs showing how the forecasts compared with actual evolving conditions.

One model would be called “persistence”. That is, whatever conditions are occurring now, they will continue. Surprisingly, persistence is often a hard-to-beat forecast, and weather forecasters score themselves on how much better than persistence thay can do. A second model is continuation of the trend. That is, if the sea surface temperature (SST) is warming up it will continue to warm at the same rate. Obviously that can’t go on forever but in many ways a trend is a good indicator of future trends. A third model is random changes. Get a random number generator (or pick numbers out of a hat). Each day or week, use the random numbers to predict what the change of SST will be (scale the numbers to keep it reasonable). Those are three simple models that can be used to project forward from current conditions. Essentially that’s what weather forecast models do, just more sophisticatedly (see question 13).(sp)

Maybe you can think of some other ways to make forecasts (if you get something that works, send it in!)

Choose a few buoys from our network in different regions of the tropical Pacific (for example, on the equator, off the equator, in the east, and the west). Get the data from our web page (click for detailed instructions to get this data). Make and graph predictions for each buoy chosen for a month or two ahead, then collect observations as they come in (the data files are updated daily). Graph the observations against the three predictions. My guess is that each model would be successful in some regions for some periods of time. Other extensions would be to compare forecasts beginning at different times. Perhaps a forecast begun with September comditions is good for 3 months, but one begun in December is only good for one month. Etc.(sp)

Another simple project is to determine how significant an effect El Niño has on your local region. Do this by gathering an assortment of local weather time series from your region (monthly rainfall, temperature, etc) (available at the web pages of the National Weather Service). Then get an index of El Niño like the Southern Oscillation Index (see Question 17 for a description and graphic, and download the values at NOAA’s Climate Prediction Center. The specific data links are: values for 1951-today and 1882-1950. Note that the SOI monthly values are very jumpy and must be smoothed by a 5-month running mean). Compare the turns of the El Niño/La Niña cycle with changes in your local weather; this could either be through a listing of El Niño/La Niña years and good/bad local weather, or by correlation of the two time series (send me e-mail for how to do correlation). You will probably find out that some aspects of your local weather are related to the El Niño/La Niña cycle and some are not. Also that some strong El Niño or La Niña years make a difference but some do not. This reflects the fact that, far from the center of action in the tropical Pacific, El Niño is only one of many influences on weather.

If your are pretty good at math and computer programming (at least 8th-grade math), then I have a more advanced project that you can find here.(sp)

FAQ from http://faculty.washington.edu/kessler/occasionally-asked-questions.html#q4

Just an example of “your” typical Trumpian science. “Thay” certainly are sophisticatedly

Something I learned early on in my research career is that complicated frequency spectra can be generated from simple repeating structures. Consider the spatial frequency spectra produced as a diffraction pattern from a crystal lattice. Below is a reflected electron diffraction pattern of a hexagonally reconstructed surface of a silicon (Si) single crystal with a lead (Pb) adlayer (**(a)** and **(b)** are different alignments of the beam direction with respect to the lattice). Suffice to say, there is enough information in the patterns to be able to reverse engineer the structure of the surface, as in **(c)**.

Now consider the ENSO pattern. At first glance, neither the time-series signal nor the Fourier power spectrum appears to be produced by anything periodically regular. Even so, let’s assume that the underlying pattern is tidally regular, comprised of the expected fortnightly 13.66-day tropical cycle and the monthly 27.55-day anomalistic cycle, synchronized by an annual impulse. Then the forcing power spectrum of *f(t)* looks like the **RED** trace on the left side of the figure below, *F(ω)*. Clearly that is not enough of a frequency spectrum (a few delta spikes) to make up the empirically calculated Fourier spectrum of the ENSO data, which comprises ~40 intricately placed peaks between 0 and 1 cycles/year.

Yet, if we modulate that with a Laplace’s Tidal Equation solution functional *g(f(t))* that has a *G(ω)* as in the yellow inset above — a cyclic modulation of amplitudes — the rest of the spectral peaks fill in.

So essentially what this is suggesting is that a few tidal factors modulated by two sinusoids produce enough spectral detail to easily account for the ~40 peaks in the ENSO power spectrum. It can do this because a modulating sinusoid is an efficient harmonics and cross-harmonics generator, as the Taylor series of a sinusoid contains an effectively infinite number of power terms.
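This harmonic-generation effect is easy to demonstrate numerically. Below is a minimal sketch (the two forcing frequencies and the modulation strength are made-up illustrative values, not the fitted ENSO parameters): a two-tone forcing run through an LTE-style sinusoidal modulation *g(x)* = sin(*k·x*) multiplies the number of spectral peaks.

```python
import numpy as np

# Two-tone "tidal" forcing; frequencies are chosen to sit exactly on FFT bins
# (multiples of 1/512 cycles/year) so peak counting is clean.
t = np.arange(0, 512, 1/12)                 # monthly sampling over 512 years
fA, fB = 108/512, 177/512                   # hypothetical forcing frequencies, cycles/year
forcing = np.sin(2*np.pi*fA*t) + 0.5*np.sin(2*np.pi*fB*t)

def count_peaks(signal, thresh=0.01):
    """Count local maxima in the power spectrum above a fraction of the peak."""
    spec = np.abs(np.fft.rfft(signal))**2
    spec /= spec.max()
    mid = spec[1:-1]
    return int(((mid > spec[:-2]) & (mid > spec[2:]) & (mid > thresh)).sum())

n_forcing = count_peaks(forcing)                 # just the two input tones
n_modulated = count_peaks(np.sin(3.0*forcing))   # after LTE-style modulation g(x) = sin(3x)
print(n_forcing, n_modulated)                    # many more peaks after modulation
```

The modulated count is large because, per the Jacobi–Anger expansion, sin(*a*·sin *θ₁* + *b*·sin *θ₂*) spreads energy over every integer combination *m·fA* + *n·fB*.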

To see this process in action, consider the following three figures, each featuring a slider that allows one to get an intuitive feel for how the LTE modulation adds richness via harmonics in the power spectra.

1. Start with a mild LTE modulation and begin to increase it, as in the figure below. A few harmonics emerge as satellites surrounding the forcing harmonics in RED.

2. Next, increase the LTE modulation so that it models the slower sinusoid — more harmonics emerge.

3. Then add the faster sinusoid to fully populate the empirically observed ENSO spectral peaks (and match the time series).

It appears as if by magic, but this is the power of non-linear harmonic generation. Note that the peak labeled AB, among others, is derived from the original A and B as complicated satellite cross-terms, which can be accounted for by expanding all of the terms in the Taylor series of the sinusoids. This can be done with some difficulty, or left as is when doing the fit via solver software.

To complete the circle, it’s likely that being exposed to mind-blowing Fourier series early on makes Fourier analysis of climate data less intimidating, as one can apply all the tricks-of-the-trade, which, alas, are considered routine in other disciplines.

**Individual charts**

https://imagizer.imageshack.com/img922/7013/VRro0m.png

I don’t do that kind of stuff and don’t think I ever will.

If this comes out of a human mind, then that same information can be fed into a knowledgebase and either a backward or forward-chained inference engine could make similar assertions.

And that explains why I don’t do it — a machine should be able to do it better.

What makes an explanation good enough? by Santa Fe Institute

- Konopliv, Alex S., et al. “Detection of the Chandler Wobble of Mars From Orbiting Spacecraft.” *Geophysical Research Letters* 47.21 (2020): e2020GL090568.

What’s also predictable is that the JPL team probably has a better handle on what causes the wobble on Mars than we have on what causes the Chandler wobble (CW) here on Earth. Such is the case when comparing a fresh model against a stale model based on an early consensus that becomes hard to shake with the passage of time.

Of course, we have our own parsimoniously plausible model of the Earth’s Chandler wobble (described in Chapter 13), that only gets further substantiated over time.

The latest refinement to the geoenergy model is the isolation of Chandler wobble spectral peaks related to the asymmetry of the northern node lunar torque relative to the southern node lunar torque.

In the figure below, the main Chandler wobble frequency is indicated by the upward **GREEN** arrow in the CW power spectrum. The frequency of 0.843/year is predicted by the aliasing of an annual impulse with the fortnightly draconic/nodal lunar tidal cycle, providing a sharp twice-annual torquing (but variable due to aliasing) and thus sustaining the polar axis cyclic wobble. Consider also that if the southern node torque is equal to the northern node torque, then spectral peaks at 0.157/year and 1.843/year will not emerge in the calculated Fourier terms. Thus the symmetric-forcing model spectrum in **RED** does not reveal the additional satellite terms, even though these satellite terms do occur in the data, indicated by the pair of downward **GREEN** arrows pointing to peaks in the **BLUE** curve.
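The aliasing arithmetic behind these numbers can be checked on the back of an envelope. This is bookkeeping only, not the full model; the lunar constants are standard astronomical values:

```python
# Back-of-envelope check of the Chandler wobble aliasing: an annual impulse
# sampling the fortnightly draconic cycle folds its ~26.84 cycles/year down
# into the sub-annual band.
year_days = 365.2422
draconic_month = 27.2122            # draconic (nodal) lunar month, in days
fortnightly = draconic_month / 2    # ~13.61 days

f_draconic = year_days / fortnightly    # ~26.84 cycles/year
f_aliased = f_draconic % 1.0            # folded against the 1/year impulse

chandler_period = year_days / f_aliased
print(round(f_aliased, 3), round(chandler_period, 1))  # ~0.84 cycles/yr, ~433 days

# An asymmetric semi-annual impulse adds sidebands offset by 1 cycle/year:
satellites = (1.0 - f_aliased, 1.0 + f_aliased)        # ~0.157 and ~1.843
```

The aliased value lands within a day or two of the observed ~433-day Chandler period, and the two sidebands land on the 0.157/year and 1.843/year satellite peaks discussed above.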

Although these are at low power levels, they are the only peaks emerging above the unity intensity level away from the main Chandler spectral peak, so they are important from the perspective of a foundational understanding of the physical mechanism behind the wobble. And sure enough, by introducing an asymmetry in the relative northern-node to southern-node impulse, the two additional satellite peaks emerge clearly from the background and *precisely* align with the empirical observations. So look again in the chart below at the asymmetric semi-annual impulse in the inset, and note how much it raises the intensity of the modeled satellite peaks to match the level of the observed peaks (pointed at by the downward **GREEN** arrows).

**Summary**: For the two impulse variations (symmetric vs. asymmetric), the summed intensities are equal, so the asymmetry has no effect on the primary spectral peak, but it does cause the emergence of the additional satellite terms, further substantiating the lunar torquing model as a parsimonious mechanism to explain the empirical observations. And since the northern hemisphere has differing characteristics from the southern hemisphere, such an asymmetry-related feature *should* plausibly emerge from the empirical observations. A remaining puzzle is to assign the spring and fall impulses to the corresponding northern vs. southern nodal swings — since there is always a lag phase associated with a forced response, it is not yet perfectly clear that the fall impulse is associated with the northern swing (larger impulse due to a larger land mass) and the spring impulse with the southern swing.

See my latest submission to the ESD Ideas issue for an ongoing critical discussion on the CW: ESDD – ESD Ideas: Long-period tidal forcing in geophysics – application to ENSO, QBO, and Chandler wobble (copernicus.org). The modeling software is available from GitHub.

And a reminder:

In Chapter 11, we described the model of QBO generated by modulating the draconic (or nodal) lunar forcing with a hemispherical annual impulse that reinforces that effect. This generates the following predicted frequency response peaks:

The 2nd, 3rd, and 4th peaks listed (at 2.423, 1.423, and 0.423) are readily observed in the power spectra of the QBO time-series. When the spectra are averaged over each of the time series, the precisely matched peaks emerge more cleanly above the red noise envelope — see the bottom panel in the figure below (click to expand).

The inset shows what these harmonics provide — essentially the jagged stairstep structure of the semi-annual impulse lag integrated against the draconic modulation.

It is important to note that these harmonics are not the traditional harmonics of a high-Q resonance behavior, where the higher orders are integral multiples of the fundamental frequency — in this case at 0.423 cycles/year. Instead, they are a clear substantiation of a forced response that maintains the frequency spectrum of the input stimulus, thus excluding the possibility that the QBO behavior is a natural resonance phenomenon. At best, there may be a 2nd-order response that selectively amplifies parts of the frequency spectrum.
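The distinction can be made concrete with the same aliasing arithmetic used for the Chandler wobble (bookkeeping only, using the standard draconic month): the model's peaks are spaced exactly 1 cycle/year apart, whereas resonance harmonics would sit at integer multiples of the 0.423 fundamental.

```python
# Aliased-forcing peaks vs. resonance harmonics (arithmetic sketch only).
year_days = 365.2422
draconic_month = 27.2122                 # draconic (nodal) lunar month, in days

f_draconic = year_days / draconic_month  # ~13.42 cycles/year
f0 = f_draconic % 1.0                    # aliased against the annual impulse, ~0.423

aliased_peaks = [f0 + n for n in range(3)]         # spaced exactly 1 cycle/yr apart
resonance_harmonics = [f0 * n for n in (1, 2, 3)]  # what a high-Q resonance would give

print([round(p, 3) for p in aliased_peaks])        # ~0.423, 1.423, 2.423
print([round(p, 3) for p in resonance_harmonics])  # ~0.423, 0.844, 1.266
```

The observed QBO peaks at 0.423, 1.423, and 2.423 cycles/year match the first list, not the second — the signature of a forced response rather than a resonance.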

See my latest submission to the ESD Ideas issue: ESDD – ESD Ideas: Long-period tidal forcing in geophysics – application to ENSO, QBO, and Chandler wobble (copernicus.org)

I presented at the 2018 AGU Fall Meeting on the topic of cross-validation. From those early results, I updated a fitted model comparison between the Pacific Ocean’s ENSO time-series and the Atlantic Ocean’s AMO time-series. The premise is that the tidal forcing is essentially the same in the two oceans, but that the standing-wave configuration differs. So the approach is to maintain a common-mode forcing in the two basins while only adjusting the Laplace’s tidal equation (LTE) modulation.

For those unfamiliar with these completely orthogonal time series: the thought that one can avoid overfitting one such dataset — let alone two sets simultaneously — is nearly unheard of (Michael Mann doesn’t even think that the AMO is a real oscillation, based on reading his latest research article, “Absence of internal multidecadal and interdecadal oscillations in climate model simulations”).

This is the latest product (click to expand)

Read this backwards from **H** to **A**.

**H** = The two tidal forcing inputs for ENSO and AMO — these differ really only by scale and a slight offset

**G** = The constituent tidal forcing spectrum comparison of the two — primarily the expected main constituents of the **Mf **fortnightly tide and the **Mm **monthly tide (and the **Mt **composite of **Mf** × **Mm**), amplified by an annual impulse train which creates a repeating Brillouin zone in frequency space.

**E&F** = The LTE modulation for AMO, essentially comprised of one strong high-wavenumber modulation as shown in **F**

**C&D** = The LTE modulation for ENSO, a strong low-wavenumber modulation that follows the El Niño/La Niña cycles, plus a faster modulation

**B** = The AMO fitted model modulating **H** with **E**

**A** = The ENSO fitted model modulating the other **H** with **C**

Ordinarily, it would take eons’ worth of machine-learning compute time to determine this non-linear mapping, but with knowledge of how to solve Navier-Stokes, it becomes a tractable problem.

Now, with that said, what does this have to do with cross-validation? By fitting only to the ENSO time-series, the model produced does indeed have many degrees of freedom (DOF), based on the number of tidal constituents shown in **G**. Yet, by constraining the AMO fit to require essentially the same constituent tidal forcing as for ENSO, the number of additional DOF introduced is minimal — note the strong spike value in **F**.

Since the parsimony of a model fit is judged by information criteria such as the number of DOF — exactly the metric used to characterize order in the previous post — it is reasonable to assume that fitting a waveform as complex as **B** with only the additional information of **F** cross-validates the underlying common-mode model according to any information-criteria metric.
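To illustrate how such a criterion penalizes added DOF, here is a sketch using the Akaike information criterion (AIC). The residuals and parameter counts below are invented for the demonstration, not taken from the actual fits:

```python
import numpy as np

def aic(residuals, k):
    """Akaike information criterion for a least-squares fit with k parameters."""
    r = np.asarray(residuals)
    n = r.size
    return n * np.log(np.sum(r**2) / n) + 2 * k

rng = np.random.default_rng(0)
resid = rng.normal(0, 0.1, 500)   # hypothetical residuals, same for both candidate fits

# Same fit quality, different parameter counts: reusing the ENSO tidal
# constituents for the AMO fit (only a few extra LTE DOF) is preferred
# over re-tuning every constituent independently.
print(aic(resid, k=5) < aic(resid, k=30))  # → True
```

The point is only that, at equal residual error, the model that introduces fewer new degrees of freedom always wins under an information criterion.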

For further guidance, this is an informative article on model selection with regard to complexity — “A Primer for Model Selection: The Decisive Role of Model Complexity”

*excerpt*:

For the LTE formulation along the equator, the analytical solution reduces to *g(f(t))*, where *g(x)* is a periodic function. Without knowing what *g(x)* is, we can use the frequency-domain entropy, or spectral entropy, of the Fourier series mapping an estimated forcing amplitude *x* = *f(t)* to a measured climate index time series such as ENSO. The frequency-domain entropy is the sum (or integral) over reciprocal space of the Shannon entropy term −*I(f)*·ln *I(f)*, with the spectral intensity *I(f)* normalized over the frequency range.

This measures the entropy or degree of disorder of the mapping. So to maximize the degree of order, we minimize this entropy value.
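A minimal sketch of the spectral-entropy calculation (generic signals, not the ENSO fit itself): a single coherent spectral spike gives low entropy, broadband noise gives high entropy, so minimizing this metric drives the fit toward an ordered spectrum.

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum: -sum p*ln(p)."""
    psd = np.abs(np.fft.rfft(x))**2
    psd = psd[1:]                 # drop the DC bin
    p = psd / psd.sum()           # normalize to a probability distribution
    p = p[p > 0]                  # avoid log(0)
    return float(-(p * np.log(p)).sum())

t = np.arange(4096) / 64.0
ordered = np.sin(2*np.pi*1.0*t)          # one clean spectral spike (highly ordered)
rng = np.random.default_rng(1)
disordered = rng.normal(size=t.size)     # broadband noise (maximally disordered)

print(spectral_entropy(ordered) < spectral_entropy(disordered))  # → True
```

In the fitting context, the candidate forcing *f(t)* that minimizes this scalar is the one whose mapping to the measured index is most spectrally ordered.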

This calculated entropy is a single scalar metric that eliminates the need to evaluate various cyclic *g(x)* patterns to achieve the best fit. Instead, it points to a highly ordered spectrum (top panel in the above figure), whose delta spikes can then be reverse engineered to deduce the primary frequency components arising from the LTE modulation factor *g(x)*.

The approach works particularly well once the spectral spikes begin to emerge from the background. In terms of a physical picture, what is actually emerging are the principal standing-wave solutions for particular wavenumbers. One can see this in the LTE modulation spectrum below, where there is a spike at a wavenumber of 1.5 and another at around 10 in panel **A** (isolating the sine spectrum and cosine spectrum separately instead of taking the quadrature of the two to give the spectral intensity). This is then reverse engineered as a fit to the actual LTE modulation *g(x)* in panel **B**. Panel **D** is the tidal forcing *x* = *f(t)* that minimized the Shannon entropy, thus creating the final fit *g(f(t))* in panel **C** when the LTE modulation is applied to the forcing.

The approach does work, which is quite a boon to the efficiency of iterative fitting towards a solution, reducing the number of DOF involved in the calculation. Prior to this, a guess for the LTE modulation was required and the iterative fit would need to evolve towards the optimal modulation periods. In other words, either approach works, but the entropy approach may provide a quicker and more efficient path to discovering the underlying standing-wave order.

I will eventually add this to the LTE fitting software distro available on GitHub. The approach may also be applicable to other measures of entropy, such as Tsallis, Rényi, multi-scale, and perhaps bispectral entropy; those will be added alongside the conventional Shannon entropy measure as needed.


**Introduction**

From predictions of individual thunderstorms to projections of long-term global change, knowing the degree to which Earth system phenomena across a range of spatial and temporal scales are practicably predictable is vitally important to society. Past research in Earth System Predictability (ESP) led to profound insights that have benefited society by facilitating improved predictions and projections. However, as there is an increasing effort to accelerate progress (e.g., to improve prediction skill over a wider range of temporal and spatial scales and for a broader set of phenomena), it is increasingly important to understand and characterize predictability opportunities and limits. Improved predictions better inform societal resilience to extreme events (e.g., droughts and floods, heat waves, wildfires, and coastal inundation), resulting in greater safety and socioeconomic benefits. Such prediction needs are currently only partially met and are likely to grow in the future. Yet, given the complexity of the Earth system, in some cases we still do not have a clear understanding of whether or under which conditions underpinning processes and phenomena are predictable and why. A better understanding of ESP opportunities and limits is important to identify what Federal investments can be made and what policies are most effective to harness inherent Earth system predictability for improved predictions.

They outline these primary goals:

- Goal 1: Advance foundational understanding and theory for an improved knowledge of Earth system predictability of practical utility.
- Goal 2: Reduce gaps in the observations-based characterization of conditions, processes, and phenomena crucial for understanding and using Earth system predictability.
- Goal 3: Accelerate the exploration and effective use of inherent Earth system predictability through advanced modeling.
- Cross-Cutting Goal 1: Leverage emerging new hardware and software technologies for Earth system predictability R&D.
- Cross-Cutting Goal 2: Optimize coordination of resources and collaboration among agencies and departments to accelerate progress.
- Cross-Cutting Goal 3: Expand partnerships across disciplines and with entities external to the Federal Government to accelerate progress.
- Cross-Cutting Goal 4: Include, inspire, and train the next generation of interdisciplinary scientists who can advance knowledge and use of Earth system predictability.

Essentially the idea is to get it done with whatever means are available, including applying machine learning/artificial intelligence. The problem is that they wish to *“train the next generation of interdisciplinary scientists who can advance knowledge and use of Earth system predictability”*. Yet, *interdisciplinary *scientists are not normally employed in climate science and earth science research. How many of these scientists have done materials science, condensed-matter physics, electrical, optics, controlled laboratory experimentation, mechanical, fluid, software engineering, statistics, signal processing, virtual simulations, applied math, AI, quantum and statistical mechanics as prerequisites to beginning study? It can be argued that all the tricks of these trades are required to make headway and to produce the next breakthrough.

The simple idea is that tidal forces play a bigger role in geophysical behaviors than previously thought, thus helping to explain phenomena that have frustrated scientists for decades.

The idea is simple but the non-linear math (see figure above for ENSO) requires cracking to discover the underlying patterns.

The rationale for the ESD Ideas section in the EGU Earth System Dynamics journal is to get discussion going on innovative and novel ideas. So even though this model is worked out comprehensively in Mathematical Geoenergy, it hasn’t gotten much publicity.

I’ve followed Gell-Mann’s work on complexity over the years, and so will try applying his qualitative *effective complexity* approach to characterize the simplicity of the geophysics models described in the book and on this blog.

Here’s a breakdown from least complex to most complex

**1.** Say we are doing tidal analysis by fitting a model to a historical sea-level height (SLH) tidal gauge time-series. That’s essentially an effective complexity of **1** because it just involves fitting amplitudes and phases from known lunisolar sinusoidal tidal cycles.


**2.** The same effective complexity of **1** applies for the differential length-of-day (dLOD) time-series, as it involves straightforward additive tidal cycles.


**3.** The Chandler wobble model developed in Chapter 13 has an effective complexity of **2** because it takes a single monthly tidal forcing and it multiplies it by a semi-annual nodal impulse (one for each nodal cycle pass). Just a bit more complex than **#1** or **#2** but the complexity already may be too great for geophysicists to accept, as the consensus instead argues for a stochastic forcing stimulating a resonance.


**4.** The QBO model described in Chapter 11 is also estimated at an effective complexity of **2**, as it is impulse-modulated by nearly the same mechanism as for the Chandler wobble of **#3**. But instead of a bandpass filter for the Chandler wobble, the QBO model applies an integrating filter to create more of a square-wave-like time-series. Again, this is too complex for consensus atmospheric physics to accept.


**5.** The ENSO model described in Chapter 12 has an effective complexity of **3** because it adds the nonlinear Laplace’s Tidal Equation (LTE) modulation to the square-wave-like fit of **#4** (QBO), tempered by being calibrated against the tidal forcing model of **#2** (dLOD). Of course this additional level of physics “complexity” is certain to be above the heads of ocean scientists and climate scientists, who are still scratching their heads over **#3** and **#4**.


The ENSO model is complex due to the non-linearity of the solution. The cyclic tidal factors can create harmonics from both the inverse cubic gravitational pull and from the LTE solution, and together with the annual impulse modulation creates an additional nasty aliasing that requires painstaking analysis to reveal.
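To ground the low end of this complexity scale, item **1** above (harmonic tidal analysis of a gauge record) can be sketched in a few lines: once the constituent periods are fixed by lunisolar theory, fitting amplitudes and phases is plain linear least squares over cosine/sine coefficient pairs. The synthetic gauge record and the four constituents used here are illustrative, not a real station:

```python
import numpy as np

# Periods in hours for a few principal tidal constituents (M2, S2, K1, O1).
periods = {"M2": 12.4206, "S2": 12.0, "K1": 23.9345, "O1": 25.8193}

t = np.arange(0, 24*60, 1.0)               # hourly samples over 60 days
rng = np.random.default_rng(2)
truth = (1.2*np.cos(2*np.pi*t/periods["M2"] - 0.7)
         + 0.4*np.cos(2*np.pi*t/periods["K1"]))
slh = truth + rng.normal(0, 0.05, t.size)  # synthetic sea-level-height record

# Design matrix: one [cos, sin] column pair per constituent
cols = []
for T in periods.values():
    w = 2*np.pi/T
    cols += [np.cos(w*t), np.sin(w*t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, slh, rcond=None)

# Amplitude per constituent from its cos/sin coefficient pair
amps = {name: np.hypot(coef[2*i], coef[2*i+1]) for i, name in enumerate(periods)}
print({k: round(float(v), 2) for k, v in amps.items()})
```

The fit recovers the planted M2 and K1 amplitudes and near-zero amplitudes for the absent constituents — an effective complexity of 1, with no modulation or impulse products needed.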

By comparison, most GCMs of climate behaviors have effective complexities much greater than this because — as Gell-Mann defined it — the shortest algorithmic description would require pages and pages of text to express. To climate scientists, perhaps the massive additional complexity of a GCM is preferable to the intuition required for enabling incremental complexity.

Since this post started with a Gell-Mann citation, may as well stick one here at the end:

“Battles of new ideas against conventional wisdom are common in science, aren’t they?”

“It’s very interesting how these certain negative principles get embedded in science sometimes. Most challenges to scientific orthodoxy are wrong. A lot of them are crank. But it happens from time to time that a challenge to scientific orthodoxy is actually right. And the people who make that challenge face a terrible situation. Getting heard, getting believed, getting taken seriously and so on. And I’ve lived through a lot of those, some of them with my own work, but also with other people’s very important work. Let’s take continental drift, for example. American geologists were absolutely convinced, almost all of them, that continental drift was rubbish. The reason is that the mechanisms that were put forward for it were unsatisfactory. But that’s no reason to disregard a phenomenon. Because the theories people have put forward about the phenomenon are unsatisfactory, that doesn’t mean the phenomenon doesn’t exist. But that’s what most American geologists did until finally their noses were rubbed in continental drift in 1962, ’63 and so on when they found the stripes in the mid-ocean, and so it was perfectly clear that there had to be continental drift, and it was associated then with a model that people could believe, namely plate tectonics. But the phenomenon was still there. It was there before plate tectonics. The fact that they hadn’t found the mechanism didn’t mean the phenomenon wasn’t there. Continental drift was actually real. And evidence was accumulating for it. At Caltech the physicists imported Teddy Bullard to talk about his work and Patrick Blackett to talk about his work, these had to do with paleoclimate evidence for continental drift and paleomagnetism evidence for continental drift. And as that evidence accumulated, the American geologists voted more and more strongly for the idea that continental drift didn’t exist.

The more the evidence was there, the less they believed it. Finally in 1962 and 1963 they had to accept it, and they accepted it along with a successful model presented by plate tectonics….”

https://scienceblogs.com/pontiff/2009/09/16/gell-mann-on-conventional-wisd

With all that, progress is being made in earth geophysics by looking at other planets. My high-school & college classmate Dr. Alex Konopliv of NASA JPL has led the first research team to detect the Chandler wobble on another planet (in this case, Mars); see “Detection of the Chandler Wobble of Mars From Orbiting Spacecraft,” *Geophysical Research Letters* (2020).

In the body of the article, a suggestion is made as to the source of the forcing for the Martian Chandler wobble. The Martian moon Phobos is quite small and orbits the planet in ~7 hours, so it may not have the impact that the Earth’s Moon has on our Chandler wobble.

The wobble is small, about 10 cm on average.

Since a **Mars year** is 687 Earth days, only the 3rd harmonic (229 days) is close to the measured wobble of 206.9 days. With the Earth, it’s quite simple how the nodal lunar cycle interferes with the annual cycle to line up exactly with the Earth’s 433-day Chandler wobble (see Figure 3 up-thread in this post), creating that wobble as a *forced* response, but nothing like that on Mars, which may be a