“Wobbling” Moon trending on Twitter

Twitter trending topic

This NASA press release has received mainstream news attention.

The 18.6-year nodal cycle will generate higher tides that will exacerbate the effects of sea-level rise due to climate change.

Yahoo news item:

https://news.yahoo.com/lunar-orbit-apos-wobble-apos-173042717.html

So this is more-or-less a known behavior, but hopefully it raises awareness of the other work relating lunar forcing to ENSO, QBO, and the Chandler wobble.

Cited paper

Thompson, P.R., Widlansky, M.J., Hamlington, B.D. et al. Rapid increases and extreme months in projections of United States high-tide flooding. Nat. Clim. Chang. 11, 584–590 (2021). https://doi.org/10.1038/s41558-021-01077-8

Inverting non-autonomous functions

This is an algorithm based on minimum entropy (i.e. negative entropy) considerations which is essentially an offshoot of this paper Entropic Complexity Measured in Context Switching.

The objective is to apply negative entropy to find an optimal solution to a deterministically ordered pattern. To start, let us contrast the behavior of autonomous vs non-autonomous differential equations. One way to think about the distinction is that the transfer function for the non-autonomous case depends only on the presenting input. Thus, it acts like an op-amp with infinite bandwidth: below saturation it gives perfectly linear amplification, so that (as shown on the graph to the right) the x-axis input produces an amplified y-axis output as long as the input stays within reasonable limits.
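To make the op-amp analogy concrete, here is a minimal sketch of a memoryless transfer function that amplifies linearly below saturation and clips above it (the gain and saturation values are arbitrary placeholders):

```python
import numpy as np

def transfer(x, gain=10.0, v_sat=5.0):
    """Memoryless amplifier: linear below saturation, clipped above.
    The output depends only on the presenting input, not on any stored state."""
    return np.clip(gain * x, -v_sat, v_sat)

x_in = np.linspace(-1.0, 1.0, 201)   # x-axis input sweep
y_out = transfer(x_in)               # amplified y-axis output
# For |x| < v_sat/gain = 0.5 the response is perfectly linear with slope = gain.
```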


Low #DOF ENSO Model

Given two models of a physical behavior, the “better” model has the highest correlation (or lowest error) against the data and the lowest number of degrees of freedom (#DOF) in terms of tunable parameters. This ratio CC/#DOF of correlation coefficient over DOF is routinely used in automated symbolic regression algorithms and for scoring online programming contests. The trade-off between a good error metric and a low complexity score is often characterized as a Pareto frontier.
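As a rough illustration of that kind of scoring (a hypothetical ratio metric plus a Pareto filter, not the exact formula any particular contest uses):

```python
import numpy as np

def cc(model, data):
    """Correlation coefficient between model output and data."""
    return np.corrcoef(model, data)[0, 1]

def score(model, data, n_dof):
    """Hypothetical ratio metric: correlation coefficient per tunable parameter."""
    return cc(model, data) / n_dof

def pareto_front(candidates):
    """Keep the non-dominated models; candidates = [(name, cc_value, n_dof), ...]."""
    front = []
    for name, c, k in candidates:
        dominated = any((c2 >= c) and (k2 <= k) and ((c2 > c) or (k2 < k))
                        for _, c2, k2 in candidates)
        if not dominated:
            front.append((name, c, k))
    return front

# illustrative candidates: (name, correlation, number of tunable parameters)
print(pareto_front([("fourier-50", 0.95, 150), ("lte", 0.90, 12), ("naive", 0.40, 3)]))
```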

So for modeling ENSO, the challenge is to fit the quasi-periodic NINO34 time-series with a minimal number of tunable parameters. For a 140-year fitting interval (1880-2020), a naive Fourier series fit could easily take 50-100 sine waves of varying frequency, amplitude, and phase to match a low-pass filtered version of the data (any high-frequency components may take many more). However, that is a horribly complex model and obviously prone to over-fitting. Clearly we need to apply some physics to reduce the #DOF.

Since we know that ENSO is essentially the response of equatorial fluid dynamics to a tidal forcing, all that is needed is the gravitational potential along the equator. The paper by Na [1] provides software for computing the orbital dynamics of the moon (i.e. lunar ephemerides) and a 1st-order approximation for the tidal potential:
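For reference, here is a minimal sketch of the standard second-degree tidal potential, which I am assuming is the familiar form that the 1st-order approximation reduces to:

```python
import numpy as np

G_SI = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_MOON = 7.342e22     # lunar mass [kg]
A_EARTH = 6.371e6     # Earth radius [m]

def tidal_potential(r, psi):
    """Second-degree tidal potential at the Earth's surface:
    V2 = (G*M*a^2 / r^3) * (3*cos(psi)^2 - 1) / 2,
    with r the Earth-Moon distance [m] and psi the lunar zenith angle [rad]."""
    return (G_SI * M_MOON * A_EARTH**2 / r**3) * (3.0 * np.cos(psi)**2 - 1.0) / 2.0
```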

The software contains well over 100 sinusoidal terms (each consisting of an amplitude, frequency, and phase) to internally model the lunar orbit precisely. Thus, that many DOF are removed, with a corresponding huge reduction in complexity score for any reasonable fit. So instead of a huge set of factors to manipulate (as with many detailed harmonic tidal analyses), what one is given is a range (r = R) and a declination (ψ = delta) time-series. These are combined in a manner following the figure from Na shown above, essentially adjusting the amplitudes of R and delta while introducing an additional tangential or tractional projection of delta (sin instead of cos). The latter is important, as described on NOAA’s tide-producing forces page.
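A hedged sketch of that combination, with the inverse-cube range dependence and the radial vs. tractional projections of the declination left as tunable amplitudes (this follows the qualitative description above, not Na's actual code):

```python
import numpy as np

def equatorial_forcing(R, delta, a_rad=1.0, a_trac=1.0, R0=385000.0):
    """Combine the lunar range R [km] and declination delta [rad] time-series
    into a single forcing series.  The (R0/R)^3 factor carries the inverse-cube
    distance dependence; a_rad weights the radial (cos-based) projection and
    a_trac weights the tangential/tractional (sin-based) projection.  Both
    weights are left as tunable amplitudes, as described in the text."""
    scale = (R0 / np.asarray(R))**3
    radial = scale * (3.0 * np.cos(delta)**2 - 1.0) / 2.0
    tractional = scale * np.sin(delta) * np.cos(delta)      # ~ sin(2*delta)/2
    return a_rad * radial + a_trac * tractional
```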

Although I roughly calibrated this earlier [2] via NASA’s HORIZONS ephemerides page (input parameters shown on the right), the Na software offers greater flexibility in use. The two calculations give essentially identical outputs, providing independent verification that the numbers are as expected.

As this post is already getting too long, here is the result of a Laplace’s Tidal Equation fit (adding a few more DOF), demonstrating that the limited #DOF prevents over-fitting on a short training interval while still cross-validating outside of that band.

or this

This low-complexity and high-accuracy solution would win ANY competition, including the competition for best seasonal prediction with a measly prize of 15,000 Swiss francs [3]. A good ENSO model is worth billions of $$ given the amount it would save in agricultural planning and its potential for mitigating human suffering by predicting the timing of climate extremes.

REFERENCES

[1] Na, S.-H. Chapter 19 – Prediction of Earth tide. in Basics of Computational Geophysics (eds. Samui, P., Dixon, B. & Tien Bui, D.) 351–372 (Elsevier, 2021). doi:10.1016/B978-0-12-820513-6.00022-9.

[2] Pukite, P.R. et al “Ephemeris calibration of Laplace’s tidal equation model for ENSO” AGU Fall Meeting, 2018. doi:10.1002/essoar.10500568.1

[3] 1 CHF ~ $1 so 15K = chump change.


Added: High resolution power spectra of ENSO forcing
see link

Review: Modeling of ocean equatorial currents in the phase of El Niño and La Niña

https://www.sciencedirect.com/science/article/abs/pii/S037702652100018X#!

The equatorial zone acts as a waveguide. Their listed highlights take advantage of the fact that the Coriolis effect vanishes (or cancels) at the equator.
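As a reminder of why the waveguide arises, the Coriolis parameter vanishes at the equator and grows only linearly away from it (the equatorial beta-plane approximation):

$$ f = 2\Omega\sin\varphi \;\approx\; \beta y \quad (\varphi \to 0), \qquad \beta = \frac{2\Omega}{a_E} $$

where Ω is Earth's rotation rate, φ the latitude, y the distance north of the equator, and a_E the Earth's radius. Since f changes sign across the equator, wave modes are trapped in a narrow equatorial band, which is what makes the zone act as a waveguide.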

This is a critical assertion: if, as shown in Mathematical Geoenergy, the Chandler wobble (a nutational oscillation) is forced by tides, then transitively so is El Nino. So when the authors state that the behavior is a consequence of both nutation and a gravity influence, it is really the gravitational influence of the moon and sun (and slightly Jupiter) that is the root cause.

The article has several equations claimed to have analytical solutions, but the generated PDF apparently has not rendered the markup correctly: many “+” signs are missing from the equations. I have seen this issue before when generating PDF pages from a markup doc, and assume that is what is happening here. Presumably the hard-copy version is OK, so I may have to go to the library to retrieve it, or perhaps ask the authors for a copy.

main author:

Sergey А. Arsen’yev

Dept. of Earth and Planetary Physics of Schmidt’s Institute of the Earth’s Physics, Russian Academy of Sciences, 10 Bolshaya Gruzinskaya, Moscow, 123995, Russia

Arsy7@mail.ru

Nonlinear Generation of Power Spectrum: ENSO

Something I learned early on in my research career is that complicated frequency spectra can be generated from simple repeating structures. Consider the spatial frequency spectra produced as a diffraction pattern from a crystal lattice. Below is a reflected electron diffraction pattern of a hexagonally reconstructed surface of a silicon (Si) single crystal with a lead (Pb) adlayer ((a) and (b) are different alignments of the beam direction with respect to the lattice). Suffice to say, there is enough information in the patterns to reverse-engineer the structure of the surface, as shown in (c).


Now consider the ENSO pattern. At first glance, neither the time-series signal nor its Fourier power spectrum appears to be produced by anything periodically regular. Even so, let’s assume that the underlying pattern is tidally regular, comprised of the expected tropical fortnightly 13.66-day cycle and the monthly 27.55-day anomalistic cycle, synchronized by an annual impulse. Then the power spectrum of the forcing f(t) looks like the RED trace on the left side of the figure below, F(ω). Clearly a few delta spikes are not enough of a frequency spectrum to make up the empirically calculated Fourier spectrum of the ENSO data, which comprises ~40 intricately placed peaks between 0 and 1 cycles/year (in BLUE).
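A minimal sketch of that forcing construction, under the stated assumptions of a 13.66-day and a 27.55-day cycle gated by an annual impulse (the impulse shape and relative amplitudes are placeholders):

```python
import numpy as np

dt = 1.0 / 365.25                      # one-day step, in years
t = np.arange(0.0, 140.0, dt)          # ~140 years of daily samples

# long-period tidal cycles (periods converted from days to years)
fortnightly = np.sin(2 * np.pi * t / (13.66 / 365.25))
anomalistic = np.sin(2 * np.pi * t / (27.55 / 365.25))

# annual impulse train (placeholder: one narrow spike per year)
impulse = (np.mod(t, 1.0) < dt).astype(float)

f_t = impulse * (fortnightly + 0.7 * anomalistic)          # forcing f(t)

F = np.abs(np.fft.rfft(f_t))**2                            # power spectrum F(omega)
freqs = np.fft.rfftfreq(len(t), d=dt)                      # in cycles/year
# The band below 1 cycle/year contains only a handful of aliased spikes -- the RED trace.
```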


Yet, if we modulate that with a Laplace’s Tidal Equation solution functional g(f(t)) that has a G(ω) as in the yellow inset above (a cyclic modulation of amplitudes where g(x) is described by two distinct sine-waves), then the complete ENSO spectrum is fleshed out in BLACK in the figure above. The effective g(x) is shown in the figure below, where a slower modulation is superimposed over a faster modulation.

So essentially what this suggests is that a few tidal factors modulated by two sinusoids produce enough spectral detail to easily account for the ~40 peaks in the ENSO power spectrum. It can do this because a modulating sinusoid is an efficient generator of harmonics and cross-harmonics, as the Taylor series of a sinusoid contains an effectively infinite number of power terms.
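To see the harmonic generation numerically, here is a hedged sketch that applies a two-sinusoid g(x) to a forcing of the type above; the wavenumbers and amplitudes are arbitrary illustrations, not the fitted values:

```python
import numpy as np

dt = 1.0 / 365.25
t = np.arange(0.0, 140.0, dt)                              # daily samples, in years
impulse = (np.mod(t, 1.0) < dt).astype(float)              # annual impulse train
f_t = impulse * (np.sin(2 * np.pi * t / (13.66 / 365.25))        # fortnightly
                 + 0.7 * np.sin(2 * np.pi * t / (27.55 / 365.25)))  # anomalistic

def lte_modulation(x, k_slow=2.0, k_fast=18.0, a_slow=1.0, a_fast=0.5):
    """g(x) built from two distinct sine-waves of the forcing level.
    The wavenumbers and amplitudes here are illustrative, not fitted values."""
    return a_slow * np.sin(k_slow * x) + a_fast * np.sin(k_fast * x)

g_t = lte_modulation(f_t)
G = np.abs(np.fft.rfft(g_t))**2                            # modulated spectrum G(omega)
freqs = np.fft.rfftfreq(len(t), d=dt)                      # in cycles/year
# Since sin(k*f) = k*f - (k*f)^3/6 + ..., every line in F(omega) spawns
# harmonics and cross-harmonics, filling out the low-frequency band.
```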

To see this process in action, consider the following three figures, each featuring a slider that lets one get an intuitive feel for how the LTE modulation adds richness via harmonics in the power spectra.

1. Start with a mild LTE modulation and gradually increase it, as in the figure below. A few harmonics begin to emerge as satellites surrounding the forcing harmonics in RED.

Drag the slider right for less modulation and to the left for more modulation.

2. Next, increase the LTE modulation so that it models the slower sinusoid; more harmonics emerge.

3. Then add the faster sinusoid to fully populate the empirically observed ENSO spectral peaks (and match the time series).

It appears as if by magic, but this is the power of non-linear harmonic generation. Note that the peak labeled AB, among others, is derived from the original A and B peaks as a complicated satellite cross-term, which can be accounted for by expanding all of the terms in the Taylor series of the sinusoids. This can be done with some difficulty, or simply left implicit when doing the fit via solver software.

To complete the circle, it’s likely that being exposed to mind-blowing Fourier series early on makes Fourier analysis of climate data less intimidating, as one can apply all the tricks-of-the-trade, which, alas, are considered routine in other disciplines.


Individual charts

https://imagizer.imageshack.com/img922/7013/VRro0m.png




 


Modern Meteorology

This is what amounts to forecast meteorology: a seasoned weatherman will look at the current charts and try to interpret them based on patterns that have been associated with previous observations. They appear to rely on equal parts memory recall, pattern matching, and the art of the bluff.

I don’t do that kind of stuff and don’t think I ever will.

If this comes out of a human mind, then that same information can be fed into a knowledgebase, and either a backward- or forward-chained inference engine could make similar assertions.
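For instance, a toy forward-chaining loop over if-then rules (hypothetical rules, not tied to any real forecasting knowledgebase) reproduces that kind of pattern-to-assertion mapping:

```python
# Toy forward-chaining inference: hypothetical weather rules, purely illustrative.
rules = [
    ({"blocking_high", "summer"}, "heat_wave_likely"),
    ({"heat_wave_likely", "dry_soil"}, "elevated_drought_risk"),
]
facts = {"blocking_high", "summer", "dry_soil"}

changed = True
while changed:                          # keep firing rules until no new facts appear
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['blocking_high', 'dry_soil', 'elevated_drought_risk', 'heat_wave_likely', 'summer']
```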

And that explains why I don’t do it — a machine should be able to do it better.


What makes an explanation good enough? by Santa Fe Institute

Plausibility and parsimony are the 2P’s of building a convincing argument for a scientific model. First, one has to convince someone that the model is built on a plausible premise, and second, that the premise is simpler and more parsimonious with the data than any other model offered. As physics models are not provable in the sense of math models, the model that wins is always based on a comparison against all other models. Annotated from DOI: 10.1016/j.tics.2020.09.013

note: https://andthentheresphysics.wordpress.com/2018/01/13/a-little-bit-of-sociology-of-science/#comment-110013

Chandler Wobble Forcing

Amazing finding from Alex’s group at NASA JPL

What’s also predictable is that the JPL team probably has a better handle on what causes the wobble on Mars than we have on what causes the Chandler wobble (CW) here on Earth. Such is the case when comparing a fresh model against a stale one based on an early consensus that becomes hard to shake with the passage of time.

Of course, we have our own parsimoniously plausible model of the Earth’s Chandler wobble (described in Chapter 13), that only gets further substantiated over time.

The latest refinement to the geoenergy model is the isolation of Chandler wobble spectral peaks related to the asymmetry of the northern node lunar torque relative to the southern node lunar torque.
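As a back-of-the-envelope consistency check (my arithmetic, assuming the Chapter 13 mechanism in which the fortnightly draconic cycle is aliased by the yearly impulse cadence), the aliased frequency lands right on the observed Chandler wobble period:

```python
year = 365.2422                        # tropical year [days]
fortnightly_draconic = 27.2122 / 2.0   # half the draconic month ~ 13.606 days

f = year / fortnightly_draconic        # ~26.84 cycles/year
alias = f % 1.0                        # fold against the yearly impulse sampling
print(alias, year / alias)             # ~0.844 cycles/year  ->  ~433 days
```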


QBO Aliased Harmonics

In Chapter 12, we described the model of QBO generated by modulating the draconic (or nodal) lunar forcing with a hemispherical annual impulse that reinforces that effect. This generates the following predicted frequency response peaks:

From section 11.1.1 Harmonics

The 2nd, 3rd, and 4th peaks listed (at 2.423, 1.423, and 0.423 cycles/year) are readily observed in the power spectra of the QBO time-series. When the spectra are averaged over each of the time series, the precisely matched peaks emerge more cleanly above the red-noise envelope; see the bottom panel in the figure below.
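Those peak positions follow from a short aliasing calculation (my arithmetic, using a 27.2122-day draconic month and a 365.2422-day year):

```python
year = 365.2422                # tropical year [days]
draconic = 27.2122             # draconic (nodal) month [days]

f = year / draconic            # ~13.42 cycles/year
# An annual impulse repeats the forcing spectrum at every integer cycle/year,
# so the observable aliases sit at f - n for integer n:
print([round(f - n, 3) for n in (11, 12, 13)])
# [2.422, 1.422, 0.422]  -- compare the listed peaks at 2.423, 1.423, 0.423
```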

Power spectra of QBO time-series — the average is calculated by normalizing the peaks at 0.423/year.
Each set of peaks is separated by a 1/year interval.

The inset shows what these harmonics provide — essentially the jagged stairstep structure of the semi-annual impulse lag integrated against the draconic modulation.

It is important to note that these harmonics are not the traditional harmonics of a high-Q resonance behavior, where the higher orders are integral multiples of the fundamental frequency (in this case at 0.423 cycles/year). Instead, they are clear substantiation of a forcing response that maintains the frequency spectrum of the input stimulus, thus excluding the possibility that the QBO behavior is a natural resonance phenomenon. At best, there may be a 2nd-order response that selectively amplifies parts of the frequency spectrum.

See my latest submission to the ESD Ideas issue : ESDD – ESD Ideas: Long-period tidal forcing in geophysics – application to ENSO, QBO, and Chandler wobble (copernicus.org)

Overfitting+Cross-Validation: ENSO→AMO

I presented at the 2018 AGU Fall meeting on the topic of cross-validation. From those early results, I updated a fitted model comparison between the Pacific ocean’s ENSO time-series and the Atlantic Ocean’s AMO time-series. The premise is that the tidal forcing is essentially the same in the two oceans, but that the standing-wave configuration differs. So the approach is to maintain a common-mode forcing in the two basins while only adjusting the Laplace’s tidal equation (LTE) modulation.

If you aren’t familiar with these completely orthogonal time series, the thought that one can avoid overfitting one data set, let alone two sets simultaneously, is nearly unheard of (Michael Mann doesn’t even think that the AMO is a real oscillation, judging from his latest research article, “Absence of internal multidecadal and interdecadal oscillations in climate model simulations“).

This is the latest product:

Read this backwards from H to A.

H = The two tidal forcing inputs for ENSO and AMO, which differ really only by scale and a slight offset

G = The constituent tidal forcing spectrum comparison of the two — primarily the expected main constituents of the Mf fortnightly tide and the Mm monthly tide (and the Mt composite of Mf × Mm), amplified by an annual impulse train which creates a repeating Brillouin zone in frequency space.

E&F = The LTE modulation for AMO, essentially comprised of one strong high-wavenumber modulation as shown in F

C&D = The LTE modulation for ENSO, a strong low-wavenumber modulation that follows the El Nino/La Nina cycles plus a faster modulation

B = The AMO fitted model modulating H with E

A = The ENSO fitted model modulating the other H with C

Ordinarily, determining this non-linear mapping would take eons’ worth of machine-learning compute time, but with knowledge of how to solve Navier-Stokes, it becomes a tractable problem.

Now, with that said, what does this have to do with cross-validation? The model produced by fitting only to the ENSO time-series does indeed have many degrees of freedom (DOF), based on the number of tidal constituents shown in G. Yet, by constraining the AMO fit to require essentially the same constituent tidal forcing as ENSO, the number of additional DOF introduced is minimal; note the single strong spike value in F.

Since the parsimony of a model fit is judged by information criteria such as the number of DOF (exactly the metric used to characterize order in the previous post), it is reasonable to conclude that fitting a waveform as complex as B with only the additional information of F cross-validates the underlying common-mode model according to any information-criteria metric.
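One way to make that argument quantitative is an information-criterion comparison; this is a sketch using the standard AIC formula with hypothetical DOF bookkeeping, not the exact criterion used above:

```python
import numpy as np

def aic(residuals, n_dof):
    """Akaike information criterion for a least-squares fit:
    AIC = n * ln(RSS / n) + 2 * k, where k counts the tunable parameters."""
    r = np.asarray(residuals)
    n = len(r)
    return n * np.log(np.sum(r**2) / n) + 2 * n_dof

# Hypothetical DOF bookkeeping: the joint fit re-uses the ENSO tidal
# constituents for AMO and only adds the extra LTE modulation parameters,
# so its penalty term grows by dof_lte_extra rather than a full dof_amo:
#   aic(np.concatenate([resid_enso, resid_amo]), dof_common + dof_lte_extra)
# versus
#   aic(resid_enso, dof_common) + aic(resid_amo, dof_amo)
```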

For further guidance, this is an informative article on model selection with regard to complexity: “A Primer for Model Selection: The Decisive Role of Model Complexity”

excerpt: