This topic will gain steam in the coming years. The following paper achieves quite a good cross-validation for SOI, shown in the figure below.

Xiaoqun, C. et al. ENSO prediction based on Long Short-Term Memory (LSTM). IOP Conference Series: Materials Science and Engineering, 799, 012035 (2020).

The x-axis appears to be in months and likely starts in 1979, so it captures the 2016 El Nino (an El Nino registers as negative SOI). I still have no idea how the neural net arrived at the fit, other than it being able to discern the cyclic behavior from the historical waveform between 1979 and 2010. From the article itself, it appears that neither do the authors.

For the tidal forcing that contributes to length-of-day (LOD) variations [1], only a few factors contribute to a plurality of the variation. These are indicated below by the highlighted circles, where the V_{0}/g amplitude is greatest. The first is the nodal 18.6 year cycle, indicated by the N’ = 1 Doodson argument. The second is the 27.55 day “Mm” anomalistic cycle which is a combination of the perigean 8.85 year cycle (p = -1 Doodson argument) mixed with the 27.32 day tropical cycle (s=1 Doodson argument). The third and strongest is twice the tropical cycle (therefore s=2) nicknamed “Mf”.
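As a quick arithmetic check (my own, using standard astronomical constants rather than values from [1]), both combination periods can be reproduced from the underlying lunar cycles:

```python
# Quick arithmetic check (mine, not from [1]): the long-period "Mm" and
# "Mf" constituents follow directly from the underlying lunar cycles.
tropical_month = 27.321582           # days (s)
perigee_cycle = 8.8475 * 365.2422    # days (p, the 8.85 year cycle)

# Mm (27.55 day anomalistic cycle): frequency combination s - p
mm_period = 1.0 / (1.0 / tropical_month - 1.0 / perigee_cycle)
# Mf (fortnightly): twice the tropical frequency, i.e. s = 2
mf_period = tropical_month / 2.0

print(f"Mm = {mm_period:.2f} days, Mf = {mf_period:.2f} days")
```

The Mm period lands on 27.55 days and Mf on 13.66 days, matching the Doodson-argument description above.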

These three factors also combine as the primary input forcing to the ENSO model. Yet, even though these are the strongest factors, the combinatorial terms derived from multiplying the main harmonics are vital for generating a quality fit (both for dLOD and even more so for ENSO). What I have done in the past is apply the recommended mix of first- and second-order factors that appear in the dLOD spectra as the ENSO forcing.

Yet there is another approach that makes no assumption about which 2nd-order factors are strongest. In this case, one simply expands the primary factors as a combinatorial expansion of cross-terms to the 4th level; this generates a broad mix of monthly, fortnightly, 9-day, and weekly harmonic cycles. A nested algorithm to generate the 35 constituent terms is:

Counter := 1;
for J in Constituents'Range loop
   for K in Constituents'First .. J loop
      for L in Constituents'First .. K loop
         for M in Constituents'First .. L loop
            Tf := Tf + Coefficients (Counter) * Fundamental (J) *
                  Fundamental (K) * Fundamental (L) * Fundamental (M);
            Counter := Counter + 1;
         end loop;
      end loop;
   end loop;
end loop;

This algorithm requires the three fundamental terms plus one unity term to capture most of the cross-terms shown in Table 3 above (the annual cross-terms come for free, as they are generated by the model’s annual impulse). The result is a coefficients array that can be included in the LTE search software.
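For reference, here is my own transcription of the same expansion into Python (a sketch, not the actual LTE software); itertools enumerates the same multisets of constituents that the nested Ada loops do:

```python
from itertools import combinations_with_replacement

# Sketch (my transcription, not the LTE software): coefficient-weighted
# sum over every product of 4 constituents chosen with repetition.
# The constituents are the three fundamental tidal terms plus unity.
def tidal_forcing(fundamental, coefficients, order=4):
    tf = 0.0
    idx_sets = combinations_with_replacement(range(len(fundamental)), order)
    for coeff, idx in zip(coefficients, idx_sets):
        prod = 1.0
        for i in idx:
            prod *= fundamental[i]   # product of the chosen constituents
        tf += coeff * prod
    return tf

# With 4 constituents at the 4th level there are 35 cross-terms:
n_terms = sum(1 for _ in combinations_with_replacement(range(4), 4))
print(n_terms)  # 35
```

The enumeration order differs from the Ada version (non-decreasing index tuples rather than J ≥ K ≥ L ≥ M), but the set of 35 products is identical.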

What is missing from the list are the evection terms corresponding to the 31.812 day (Msm) and 27.093 day cycles. These are retrograde to the prograde 27.55 day anomalistic cycle, so they would need an additional 8.848 year perigee cycle, bringing the count of fundamental terms from 3 to 4.
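As a check on that claim (my own arithmetic, using standard astronomical constants), both evection-related periods do emerge as frequency combinations once the perigee cycle is included:

```python
# My arithmetic check (not from the post): the two evection-related
# periods arise as Doodson-style frequency combinations once the
# 8.8475-year perigee cycle (p) is added as a fourth fundamental.
s = 1.0 / 27.321582            # tropical month frequency (cycles/day)
h = 1.0 / 365.2422             # solar (annual) frequency
p = 1.0 / (8.8475 * 365.2422)  # perigee cycle frequency

msm_period = 1.0 / (s - 2*h + p)   # Msm constituent
retro_period = 1.0 / (s + p)       # retrograde combination
print(f"Msm = {msm_period:.3f} days")     # ~31.812
print(f"retro = {retro_period:.3f} days") # ~27.093
```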

The difference made by adding an extra level of harmonics, which brings the combinatorial total from 35 to 126, is not very apparent in the time series (below), as it simply adds shape to the main fortnightly tropical cycle.
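The term counts follow the multiset (stars-and-bars) formula. My reading, which is an assumption on my part, is that 35 corresponds to 4 constituents (three fundamentals plus unity) taken 4 at a time with repetition, while 126 corresponds to 5 constituents (adding the perigee term) taken 5 at a time:

```python
from math import comb

def n_cross_terms(n_constituents, order):
    # Number of multisets of size `order` drawn from `n_constituents`
    # items: C(n + k - 1, k), the stars-and-bars count.
    return comb(n_constituents + order - 1, order)

print(n_cross_terms(4, 4))  # 35
print(n_cross_terms(5, 5))  # 126
```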

Yet it has a significant effect on the ENSO fit, approaching a CC of 0.95 (see the inset at right for the scatter correlation). Note that the forcing frequency spectrum in the middle-right inset still shows predominantly tropical fortnightly peaks at 0.26/yr and 0.74/yr.

These extra harmonics also help in matching the much busier SOI time-series. Click on the chart below to inspect how the higher-K wavenumbers may be the origin of what is thought to be noise in the SOI measurements.

Is this a case of overfitting? Try the following cross-validation on orthogonal intervals, and note how tightly the model matches the data over the training intervals, without degrading too much in the outer validation region.
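For readers who want to try the idea, here is a minimal sketch (my own illustration, not the LTE fitting software) of cross-validation on orthogonal intervals: fit on one interval, then score the correlation coefficient on its complement. A least-squares sinusoid stands in for the tidal model:

```python
import numpy as np

def cc(a, b):
    # Pearson correlation coefficient between two series
    return np.corrcoef(a, b)[0, 1]

def orthogonal_cv(t, data, fit_model, train_mask):
    # Fit on the training interval; score CC on both the training
    # interval and its complement (the validation region).
    model = fit_model(t[train_mask], data[train_mask])
    pred = model(t)
    return (cc(pred[train_mask], data[train_mask]),
            cc(pred[~train_mask], data[~train_mask]))

# Toy data: a noisy sinusoid standing in for an index time-series
t = np.linspace(0, 40, 500)
rng = np.random.default_rng(0)
data = np.sin(2 * np.pi * t / 3.8) + 0.1 * rng.normal(size=t.size)

def fit_sine(tt, dd):
    # Least-squares fit of a fixed-period sinusoid (the stand-in model)
    A = np.column_stack([np.sin(2 * np.pi * tt / 3.8),
                         np.cos(2 * np.pi * tt / 3.8)])
    coef, *_ = np.linalg.lstsq(A, dd, rcond=None)
    return lambda x: (coef[0] * np.sin(2 * np.pi * x / 3.8)
                      + coef[1] * np.cos(2 * np.pi * x / 3.8))

train_mask = t < 20.0   # train on the first half, validate on the rest
cc_train, cc_valid = orthogonal_cv(t, data, fit_sine, train_mask)
print(round(cc_train, 2), round(cc_valid, 2))
```

A model that is genuinely capturing structure, rather than overfitting, keeps a high CC in the validation region, which is the behavior claimed above.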

I will likely add this combinatorial expansion approach to the LTE fitting software on GitHub soon, but thought to checkpoint the interim progress on the blog. In the end, the likely modeling mix will be a combination of the geophysical calibration to the known dLOD response together with a refined collection of these 2nd-order combinatorial tidal constituents. The rationale for why certain terms are important should eventually become clearer as well.

References

[1] Ray, R.D. and Erofeeva, S.Y., 2014. Long-period tidal variations in the length of day. Journal of Geophysical Research: Solid Earth, 119(2), pp.1498-1509.

In Chapter 12 of the book we model — via LTE — the canonical El Nino Southern Oscillation (ENSO) behavior, fitting to closely-correlated indices such as NINO3.4 and SOI. Another El Nino index was identified circa 2007 that is not closely correlated to the well-known ENSO indices. This index, referred to as El Nino Modoki, appears to have more of a Pacific Ocean centered dipole shape with a bulge flanked by two wing lobes, cycling again as an erratic standing-wave.

If in fact Modoki differs from the conventional ENSO only by a different standing-wave wavenumber configuration, then it should be straightforward to model as an LTE variation of ENSO. The figure below is the model fitted to the El Nino Modoki Index (EMI) (data from JAMSTEC). This doubles as a cross-validation, since post-1940 values were used for training and pre-1940 values served as a validation test.

The LTE modulation has a higher fundamental wavenumber component than ENSO (plus a weaker factor closer to a zero wavenumber, i.e. some limited LTE modulation as is found with the QBO model).

The input tidal forcing is close to that used for ENSO but appears to lead it by one year. The same strength ordering of tidal factors occurs, but with the next higher harmonic (7-day) of the tropical fortnightly 13.66 day tide slightly stronger for EMI than for ENSO.

The model fit is essentially a perturbation of ENSO, so it did not take long to optimize with the Laplace’s Tidal Equation modeling software. I was prompted to run the optimization after finding a paper yesterday on using machine learning to model El Nino Modoki [1].

It’s clear that what needs to be done is a complete spatio-temporal model fit across the equatorial Pacific, which will be amazing as it will account for the complete mix of spatial standing-wave modes. Maybe in a couple of years the climate science establishment will catch up.

In Chapter 11 of the book Mathematical GeoEnergy, we model the QBO of equatorial stratospheric winds, but only touch on the related cycle at even higher altitudes, the semi-annual oscillation (SAO). The figure at the top of a recent post geometrically explains the difference between SAO and QBO — the basic idea is that the SAO follows the solar tide and not the lunar tide because of a lower atmospheric density at higher altitudes. Thus, the heat-based solar tide overrides the gravitational lunar+solar tide and the resulting oscillation is primarily a harmonic of the annual cycle.

Our book Mathematical Geoenergy presents a number of novel approaches that each deserve a research paper on their own. Here is the list, ordered roughly by importance (IMHO):

Laplace’s Tidal Equation Analytic Solution. (Ch 11, 12) A solution of a Navier-Stokes variant along the equator. Laplace’s Tidal Equations are a simplified version of Navier-Stokes, and the equatorial topology allows an exact closed-form analytic solution. This could qualify for the Clay Institute Millennium Prize if the practical implications are considered, but it’s a lower-dimensional solution than a complete 3-D Navier-Stokes formulation requires.

Model of El Nino/Southern Oscillation (ENSO). (Ch 12) A tidally forced model of the equatorial Pacific’s thermocline sloshing (the ENSO dipole) which assumes a strong annual interaction. Not surprisingly this uses the Laplace’s Tidal Equation solution described above, otherwise the tidal pattern connection would have been discovered long ago.

Model of Quasi-Biennial Oscillation (QBO). (Ch 11) A model of the equatorial stratospheric winds, which cycle by reversing direction every ~28 months. This incorporates the idea of amplified cycling of the sun and moon nodal declination pattern on the atmosphere’s tidal response.

Origin of the Chandler Wobble. (Ch 13) An explanation for the ~433 day cycle of the Earth’s Chandler wobble. Finding this is a fairly obvious consequence of modeling the QBO.

The Oil Shock Model. (Ch 5) A data flow model of oil extraction and production which allows for perturbations. We are seeing this in action with the recession caused by oil supply perturbations due to the Corona Virus pandemic.

The Dispersive Discovery Model. (Ch 4) A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.

Ornstein-Uhlenbeck Diffusion Model (Ch 6) Applying Ornstein-Uhlenbeck diffusion to describe the decline and asymptotic limiting flow from volumes such as occur in fracked shale oil reservoirs.

The Reservoir Size Dispersive Aggregation Model. (Ch 4) A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.

Origin of Tropical Instability Waves (TIW). (Ch 12) As the ENSO model was developed, a higher harmonic component was found which matches TIW.

Characterization of Battery Charging and Discharging. (Ch 18) Simplified expressions for modeling Li-ion battery charging and discharging profiles by applying dispersion on the diffusion equation, which reflects the disorder within the ion matrix.

Anomalous Behavior in Dispersive Transport explained. (Ch 18) Photovoltaic (PV) material made from disordered and amorphous semiconductor material shows poor photoresponse characteristics. Solution to simple entropic dispersion relations or the more general Fokker-Planck leads to good agreement with the data over orders of magnitude in current and response times.

Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials. (Ch 20) The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.

Wind Energy Analysis. (Ch 11) Universality of wind energy probability distribution by applying maximum entropy to the mean energy observed. Data from Canada and Germany. Found a universal BesselK distribution which improves on the conventional Rayleigh distribution.

Terrain Slope Distribution Analysis. (Ch 16) Explanation and derivation of the topographic slope distribution across the USA. This uses mean energy and maximum entropy principle.

Thermal Entropic Dispersion Analysis. (Ch 14) Solving the Fokker-Planck equation or Fourier’s Law for thermal diffusion in a disordered environment. A subtle effect, but the result is a simplified expression not involving the transcendental erf function. Useful in ocean heat content (OHC) studies.

The Maximum Entropy Principle and the Entropic Dispersion Framework. (Ch 10) The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.

Solving the Reserve Growth “enigma”. (Ch 6) An application of dispersive discovery on a localized level which models the hyperbolic reserve growth characteristics observed.

Shocklets. (Ch 7) A kernel approach to characterizing production from individual oil fields.

Reserve Growth, Creaming Curve, and Size Distribution Linearization. (Ch 6) An obvious linearization of this family of curves, related to Hubbert Linearization but more useful since it stems from first principles.

The Hubbert Peak Logistic Curve explained. (Ch 7) The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.

Laplace Transform Analysis of Dispersive Discovery. (Ch 7) Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.

Gompertz Decline Model. (Ch 7) Exponentially increasing extraction rates lead to steep production decline.

The Dynamics of Atmospheric CO2 buildup and Extrapolation. (Ch 9) Convolving a fat-tailed CO2 residence time impulse response function with a fossil-fuel emissions stimulus. This shows the long latency of CO2 buildup very straightforwardly.

Reliability Analysis and Understanding the “Bathtub Curve”. (Ch 19) Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.

The Overshoot Point (TOP) and the Oil Production Plateau. (Ch 8) How increases in extraction rate can maintain production levels.

Lake Size Distribution. (Ch 15) Analogous to explaining reservoir size distribution, uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur (by comparison).

The Quandary of Infinite Reserves due to Fat-Tail Statistics. (Ch 9) Demonstrated that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.

Oil Recovery Factor Model. (Ch 6) A model of oil recovery which takes into account reservoir size.

Network Transit Time Statistics. (Ch 21) Dispersion in TCP/IP transport rates leads to the measured fat-tails in round-trip time statistics on loaded networks.

Particle and Crystal Growth Statistics. (Ch 20) Detailed model of ice crystal size distribution in high-altitude cirrus clouds.

Rainfall Amount Dispersion. (Ch 15) Explanation of rainfall variation based on dispersion in rate of cloud build-up along with dispersion in critical size.

Earthquake Magnitude Distribution. (Ch 13) Distribution of earthquake magnitudes based on dispersion of energy buildup and critical threshold.

IceBox Earth Setpoint Calculation. (Ch 17) Simple model for determining the earth’s setpoint temperature extremes — current and low-CO2 icebox earth.

Global Temperature Multiple Linear Regression Model (Ch 17) The global surface temperature records show variability that is largely due to the GHG rise along with fluctuating changes due to ocean dipoles such as ENSO (via the SOI measure and also AAM) and sporadic volcanic eruptions impacting the atmospheric aerosol concentrations.

GPS Acquisition Time Analysis. (Ch 21) Engineering analysis of GPS cold-start acquisition times. Using Maximum Entropy in EMI clutter statistics.

1/f Noise Model. (Ch 21) Deriving a random noise spectrum from maximum entropy statistics.

Stochastic Aquatic Waves (Ch 12) Maximum Entropy Analysis of wave height distribution of surface gravity waves.

The Stochastic Model of Popcorn Popping. (Appx C) The novel explanation of why popcorn popping follows the same bell-shaped curve of the Hubbert Peak in oil production. Can use this to model epidemics, etc.

Dispersion Analysis of Human Transportation Statistics. (Appx C) Alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.

A ridiculous paper on the uncertainty of climate models is under post-publication review at pubpeer.com

What drives me more nuts is why everyone is trying to correct what a blithering idiot (P. Frank) is advancing instead of just solving the differential equations and modeling the climate variability. Does everyone think we will actually make any progress by correcting the poor sod’s freshman homework assignment?

Instead, let’s get going and finish off the tidal model of ENSO. That will do more than anything else to quash the endless discussion over how much natural climate variability is acceptable to be able to discern an AGW trend.

The modeled QBO cycle is directly related to the nodal (draconic) lunar cycle, physically aliased against the annual cycle. The empirical cycle period is best estimated by tracking the peak acceleration of the QBO velocity time-series, as this acceleration (the 1st derivative of the velocity) shows a sharp peak. This value should asymptotically approach a 2.368 year period over the long term. Since the main QBO repository recently provided an additional acceleration peak from the past month, now is as good a time as any to analyze the cumulative data.

The new data-point provides a longer period, which compensates for some recent shorter periods, such that the cumulative mean lies right on the asymptotic line. The jitter observed is explainable in terms of the model, as acceleration peaks are prone to aligning close to the annual impulse. But the accumulated mean period remains aligned to the draconic cycle aliased against this annual impulse. As more data points come in over the coming decades, the mean should vary less and less from the asymptotic value.
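The asymptotic value itself is simple to verify. This is my own arithmetic, consistent with the 2.368 year figure quoted above: the draconic month, sampled once per year, aliases down to a multi-year cycle.

```python
# Arithmetic check of the asymptotic period: the draconic month,
# sampled by an annual impulse, aliases to a residual frequency of
# roughly 0.42 cycles/year, i.e. a period of about 2.37 years.
draconic_month = 27.212221   # days (lunar nodal/draconic month)
year = 365.2422              # days (tropical year)

f = year / draconic_month    # draconic cycles per year, ~13.42
residual = f - round(f)      # cycles/yr left over after annual aliasing
aliased_period = 1.0 / abs(residual)
print(f"aliased period = {aliased_period:.2f} years")
```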

The fit to QBO using all the data except the last available data point is shown below. Extrapolating beyond the green arrow, we should see an uptick according to the red waveform.

Adding the recent data-point, the blue waveform does follow the model.

There was a flurry of recent discussion on the QBO anomaly of 2016 (shown as a split peak above), which implied that perhaps the QBO would be permanently disrupted from its long-standing pattern. A more plausible explanation may be that the QBO pattern was not wandering from an assumed perfectly cyclic path, but instead is following a predictable yet jittery track: a combination of the (physically-aliased) annual impulse-synchronized draconic cycle together with a sensitivity to variations in the draconic cycle itself. The latter calibration is shown below, based on a NASA ephemeris.

This is the QBO spectral decomposition, showing signal strength centered on the fundamental aliased draconic value, both for the data and for the model.

The main scientist behind the consensus QBO model, Prof. Richard Lindzen, was recently introduced here as being “considered the most distinguished living climate scientist on the planet”. In his presentation criticizing AGW science [1], Lindzen claimed that the climate oscillates due to a steady uniform force, much like a violin oscillates when the steady force of a bow is drawn across its strings. An analogy perhaps better suited to reality is that the violin is being played like a drum: resonance is more of a decoration to the beat itself.

[1] Professor Richard Lindzen slammed conventional global warming thinking warming as ‘nonsense’ in a lecture for the Global Warming Policy Foundation on Monday. ‘An implausible conjecture backed by false evidence and repeated incessantly … is used to promote the overturn of industrial civilization,’ he said in London. — GWPF

The Madden-Julian Oscillation (MJO) is a climate index that captures tropical variability at a finer resolution (i.e. intra-annual) than the (inter-annual) ENSO index over approximately the same geographic region. Since much of the MJO variability is observed as 30 to 60 day cycles (and these are traveling waves, not standing waves), providing MJO data as a monthly time-series will filter out the fast cycles. Still, it is interesting to analyze the monthly MJO data and compare/contrast it to ENSO. As a disclaimer, it is known that inter-annual variability of the MJO is partly linked to ENSO, and the following will clearly show that connection.

This is the fit of MJO (longitude index #1) using the ENSO model as a starting point (either the NINO34 or SOI works equally well).

The constituent temporal forcing factors for MJO and ENSO align precisely.

This is not surprising because the monthly filtered MJO does show the same El Nino peaks at 1983, 1998, and 2016 as the ENSO time-series. The only difference is in the LTE spatial modulation applied during the fitting process, whereby the MJO has a stronger high-wavenumber factor than the ENSO time series.

This is the SOI fit over the same 1980+ interval as MJO, with a correlation of almost 0.6.

The Arctic Oscillation (AO) dipole has behavior that is correlated with the North Atlantic Oscillation (NAO) dipole. We can see this in two ways. First, and most straightforwardly, the correlation coefficient between the AO and NAO time-series is above 0.6.

Secondly, we can use the NAO model from the last post and refit its parameters to the AO data (data also here), but spanning an orthogonal interval. Then we can compare the constituent lunisolar factors for NAO and AO for correlation, and further discover that this also doubles as an effective cross-validation of the underlying LTE model (as the intervals are orthogonal).

The top panel is a model fit for AO between 1900-1950, and below it is a model fit for NAO from 1950 to the present. The lower panel shows the correlation over a common interval (left) and the correlation of the constituent lunisolar factors across the orthogonal intervals (right).

Only the anomalistic factor shows an imperfect correlation, and even that remains quite high.