The forcing spectrum looks like this, with the aliased draconic (27.212d) factor circled:
For QBO, we remove all the lunar factors except for the draconic, as this is the only declination factor with the same spherical group symmetry as the semi-annual solar declination.
And after modifying the annual (ENSO spring-barrier) impulse into a semi-annual impulse with equal and opposite excursions, the resultant model matches well (to first order) the QBO time series.
Although the alignment isn’t perfect, there are indications in the structure that the fit has a deeper significance. For example, note how many of the shoulders in the structure align, as highlighted below in yellow.
The peaks and valleys do wander about a bit, which may result from sensitivity to the semi-annual impulse and from the monthly resolution of the data. The chart below is a detailed fit of the QBO using data with a much finer daily resolution. As you can see, slight changes in the seasonal timing of the semi-annual pulse are needed to individually align the 70 and 30 hPa QBO time-series data.
The underlying forcing of the ENSO model shows both an 18-year Saros cycle (an eclipse alignment cycle of all the tidal periods) and a 6-year anomalistic/draconic interference cycle. This modulation of the main anomalistic cycle appears in both the underlying daily and monthly profiles, shown below before applying an annual impulse. The 6-year cycle is clearly evident, as it aligns with the x-axis gridlines at 1880, 1886, 1892, 1898, etc.
Daily profile above, monthly next, both reveal Saros cycle
The 6-year cycle in the LOD is not aligned as strictly as the tidal model and it tends to wander, but it seems a more plausible and parsimonious explanation of the modulation than for example in this paper (where the 6-year LOD cycle is “similarly detected in the variations of C22 and S22, the degree-2 order-2 Stokes coefficients of the Earth’s gravitational field”).
Cross-validation confidence improves as the number of mutually agreeing alignments increases. Given that controlled experiments are impossible to perform, this category of analysis is the best way to validate geophysical models.
The underlying structure of the solution shouldn’t be surprising, since as with Mach-Zehnder, it’s fundamentally related to a path integral formulation known from mathematical physics. As derived via quantum mechanics (originally by Feynman), one temporally integrates an energy Hamiltonian over a path allowing the wave function to interfere with itself over all possible wavenumber (k) and spatial states (x).
Because of the imaginary value i in the exponential, the result is a sinusoidal modulation of some (potentially complicated) function. Of course, the collective behavior of the ocean is not a quantum mechanical result applied to fluid dynamics, yet the topology of the equatorial waveguide can drive it to appear as one; see the breakthrough paper “Topological Origin of Equatorial Waves” for a rationale. (The curvature of the spherical earth can also provide a sinusoidal basis due to a trigonometric projection of tidal forces, but this is rather weak, not extending far beyond a first-order term in the Taylor series.)
Moreover, the rather strong interference may have a physical interpretation beyond the derived mathematical interpretation. In the past, I have described the modulation as wave breaking, in that the maximum excursions of the inner function f(t) are folded non-linearly into itself via the limiting sinusoidal wrapper. This is shown in the figure below for progressively increasing modulation.
In the figure above, I added an extra dimension (roughly implying a toroidal waveguide) which allows one to visualize the wave breaking, which otherwise would show as a progressively more rapid up-and-down oscillation in one dimension.
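The folding can also be sketched numerically. In the 1-D view, increasing the modulation depth a in sin(a·f(t)) packs more and more zero crossings into the same interval; the inner function here is just an illustrative f(t) = sin(t), not the fitted ENSO forcing:

```python
import numpy as np

def zero_crossings(y):
    """Count sign changes, a proxy for how often the wave folds over."""
    s = np.sign(y)
    s = s[s != 0]          # ignore exact zeros
    return int(np.sum(s[1:] != s[:-1]))

t = np.linspace(0.0, 2.0 * np.pi, 20000)
f = np.sin(t)              # illustrative inner forcing f(t)

# progressively increasing LTE modulation depth a in sin(a*f(t))
for a in (1, 5, 20):
    print(a, zero_crossings(np.sin(a * f)))
```

The crossing count grows with the modulation depth, which in one dimension shows up as the progressively more rapid up-and-down oscillation described above.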
Perhaps coincidentally (or perhaps not) this kind of sinusoidal modulation also occurs in heuristic models of the double-gyre structure that often appears in fluid mechanics. In the excerpt below, note the sin(f(t)) formulation.
The interesting characteristic of the structure lies in the more rapid cyclic variations near the edge of the gyre, which can be seen in an animation (Jupyter notebook code here).
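For reference, the heuristic double-gyre model in that literature uses exactly this sin(f(t)) construction: a stream function ψ(x, y, t) = A sin(π f(x, t)) sin(π y) on the domain [0, 2] × [0, 1]. A minimal sketch is below; the parameter values (A = 0.1, ε = 0.25, ω = 2π/10) are conventional illustrative choices, not values taken from the excerpt:

```python
import numpy as np

# Heuristic double-gyre stream function; parameter values are
# conventional illustrative choices, not fitted to any data here.
A, eps, omega = 0.1, 0.25, 2 * np.pi / 10

def f(x, t):
    a = eps * np.sin(omega * t)
    b = 1 - 2 * eps * np.sin(omega * t)
    return a * x**2 + b * x

def velocity(x, y, t):
    """(u, v) from psi = A*sin(pi*f(x,t))*sin(pi*y) on [0,2] x [0,1]."""
    dfdx = 2 * eps * np.sin(omega * t) * x + (1 - 2 * eps * np.sin(omega * t))
    u = -np.pi * A * np.sin(np.pi * f(x, t)) * np.cos(np.pi * y)  # u = -dpsi/dy
    v = np.pi * A * np.cos(np.pi * f(x, t)) * np.sin(np.pi * y) * dfdx  # v = dpsi/dx
    return u, v
```

The sin(πf) and sin(πy) factors vanish on the boundary, so there is no flow through the domain walls, and the time-varying f(x, t) is what makes the gyre pair breathe back and forth.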
Whether the equivalent of a double-gyre is occurring via the model of the LTE 1-D equatorial waveguide is not clear, but the evidence of double-gyre wavetrains (Lagrangian coherent structures, Kelvin–Helmholtz instabilities), occurring along the equatorial Pacific is abundantly clear through the appearance of tropical instability (TIW) wavetrains.
These so-called coherent structures may be difficult to isolate for the time being, especially if they involve subtle interfaces such as thermocline boundaries:
Mercator analysis does show higher levels of waveguide modulation, so perhaps this will be better discriminated over time (see figure below, with the long-wavelength ENSO dipole superimposed along with the faster TIW wavenumbers in dashed line, and the double-gyre pairing in green + dark purple), and something akin to a 1-D gyre structure will become a valid description of what’s happening along the thermocline. In other words, the wave-breaking modulation due to the LTE solution is essentially the same as the vortex gyre mapped into a 1-D waveguide.
Sea-level height has several scales. At the daily scale it represents the well-known lunisolar tidal cycle. At a multi-decadal, long-term scale it represents behaviors such as global warming. In between these two scales is what often appears to be noisy fluctuations to the untrained eye. Yet it’s fairly well-accepted [1] that much of this fluctuation is due to the side-effects of alternating La Nina and El Nino cycles (aka ENSO, the El Nino Southern Oscillation), as represented by measures such as NINO34 and SOI.
To see how startlingly aligned this mapping is, consider the SLH readings from Ft. Denison in Sydney Harbor. The interval from 1980 to 2012 is shown below, along with a fit used recently to model ENSO.
I chose a shorter interval to somewhat isolate the trend from a secular sea-level rise due to AGW. The last point is 2012 because tide gauge data collection ended then.
As cross-validation, this fit is extrapolated backwards to show how it matches the historic SOI cycles.
Much of the fine structure aligns well, indicating that intrinsically the dynamics behind sea-level height at this scale are due to ENSO changes, associated with the inverted barometer effect. The SOI is essentially the pressure differential between Darwin and Tahiti, so the prevailing atmospheric pressure occurring during varying ENSO conditions follows the rising or lowering Sydney Harbor sea level in a synchronized fashion. The change is about 1 cm for a 1 mbar change in pressure, so with the SOI extremes showing a 14 mbar variation at the Darwin location, this accounts for a 14 cm change in sea level, roughly matching that shown in the first chart. Note that being a differential measurement, SOI does not suffer from long-term secular changes in trend.
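The arithmetic above is a direct application of the inverted-barometer response (roughly 1 cm of sea-level change per 1 mbar of pressure change):

```python
# Inverted barometer response: sea level rises about 1 cm for each
# 1 mbar (hPa) drop in local atmospheric pressure.
CM_PER_MBAR = 1.0

def slh_change_cm(pressure_change_mbar):
    # negative sign: rising pressure depresses the sea surface
    return -CM_PER_MBAR * pressure_change_mbar

# the 14 mbar SOI-extreme swing at Darwin maps to ~14 cm of SLH change
print(abs(slh_change_cm(14)))  # 14.0
```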
Yet, the unsaid implication in all this is that not only are the daily variations in SLH due to lunar and solar cyclic tidal forces, but so are these monthly to decadal variations. The longstanding impediment is that oceanographers have not been able to solve Laplace’s Tidal Equations in a form that reflects the non-linear character of the ocean’s response to the long-period lunisolar forcing. Once that’s been analytically demonstrated, we can observe that both SLH and ENSO share essentially identical lunisolar forcing (see chart below), arising from that same common-mode linked mechanism.
Many geographically located tidal gauge readings are available from the Permanent Service for Mean Sea Level (PSMSL) repository so I can imagine much can be done to improve the characterization of ENSO via SLH readings.
REFERENCES
[1] F. Zou, R. Tenzer, H. S. Fok, G. Meng and Q. Zhao, “The Sea-Level Changes in Hong Kong From Tide-Gauge Records and Remote Sensing Observations Over the Last Seven Decades,” in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 6777-6791, 2021, doi: 10.1109/JSTARS.2021.3087263.
Revisiting earlier modeling of the North Atlantic Oscillation (NAO) and Arctic Oscillation (AO) indices with the benefit of updated analysis approaches such as negative entropy. These two indices in particular are intimidating because to the untrained eye they appear to be more noise than anything deterministically periodic. Whereas ENSO has periods that range from 3 to 7 years, both NAO and AO show rapid cycling often on a faster-than-annual pace. The trial ansatz in this case is to adopt a semi-annual forcing pattern and synchronize that to long-period lunar factors, fitted to a Laplace’s Tidal Equation (LTE) model.
Start with candidate forcing time-series as shown below, with a mix of semi-annual and annual impulses modulating the primarily synodic/tropical lunar factor. The two diverge slightly at earlier dates (starting at 1880) but the NAO and AO instrumental data only begins at the year 1950, so the two are tightly correlated over the range of interest.
The intensity spectrum for the semi-annual zone is shown below, with the aliased tropical factors at 27.32 and 13.66 days standing out.
The NAO and AO pattern is not really that different, and once a strong LTE modulation is found for one index, it also works for the other. As shown below, the lowest modulation is sharply delineated, yet more rapid than that for ENSO, indicating a high-wavenumber standing wave mode in the upper latitudes.
The model fit for NAO (data source) is excellent as shown below. The training interval only extended to 2016, so the dotted lines provide an extrapolated fit to the most recent NAO data.
Likewise for the AO (data source), the fit is also excellent, as shown below. There is virtually no difference in the lowest LTE modulation frequency between NAO and AO, but the higher/more rapid LTE modulations need to be tuned for each individual index. In both cases, the extrapolations beyond the year 2016 are very encouraging (though not perfect) cross-validating predictions. The LTE modulation is so strong that it is also structurally sensitive to the exact forcing.
Both NAO and AO time-series appear very busy and noisy, yet there is very likely a strong underlying order due to the fundamental 27.32/13.66 day tropical forcing modulating the semi-annual impulse, with the 18.6/9.3 year and 8.85/4.42 year cycles providing the expected longer-range lunar variability. This is also consistent with the critical semi-annual impulses that impact the QBO and Chandler wobble periodicity, with the caveat that the group symmetry of the global QBO and Chandler wobble forcings requires those to be draconic/nodal factors and not the geographically isolated sidereal/tropical factor required of the North Atlantic.
This really is a highly resolved model, potentially useful at a finer resolution than monthly, and it will only improve over time.
(As a sidenote, this is a much better attempt at matching a lunar forcing to AO and jet-stream dynamics than the approach Clive Best tried a few years ago. He gave it a shot, but without knowledge of the required non-linear LTE modulation he wasn’t able to achieve a high correlation, reaching at best a 2.4% Spearman correlation coefficient for AO in his Figure 4, whereas the models in this GeoenergyMath post extend beyond 80% for the interval 1950 to 2016!)
Climate scientists as a general rule don’t understand crystallography deeply (I do). They also don’t understand cryptography (that, I don’t understand deeply either). Yet, as the last post indicated, knowledge of these two scientific domains is essential to decoding dipoles such as the El Nino Southern Oscillation (ENSO). Crystallography is basically an exercise in signal processing, where one analyzes electron & x-ray diffraction patterns to decode structure at the atomic level. It’s mathematical and unforgiving for anyone accustomed to working only in real space, as diffraction transforms the 3-D world into a reciprocal space where the dimensions are inverted and common intuition fails.
Cryptography in its common use applies a key to enable a user to decode a scrambled data stream according to the instruction pattern embedded within the key. If diffraction-based crystallography required a complex unknown key to decode from reciprocal space, it would seem hopeless, but that’s exactly what we are dealing with when trying to decipher climate dipole time-series: we don’t know what the decoding key is. If that’s the case, no wonder climate science has never made any progress in modeling ENSO, as it’s an existentially difficult problem.
The breakthrough is in identifying that an analytical solution to Laplace’s tidal equations (LTE) provides a crystallography+cryptography analog in which we can make some headway. The challenge is in identifying the decoding key (an unknown forcing) that would make the reciprocal-space inversion process (required for LTE demodulation) straightforward.
According to the LTE model, the forcing has to be a combination of tidal factors mixed with a seasonal cycle (stages 1 & 2 in the figure above) that would enable the last stage (Fourier series a la diffraction inversion) to be matched to empirical observations of a climate dipole such as ENSO.
The forcing key used in an ENSO model was described in the last post as a predominately Mm-based lunar tidal factorization as shown below, leading to an excellent match to the NINO34 time series after a minimally-complex LTE modulation is applied.
In diffraction terms, the LTE transform from the forcing time series (upper panel) to the ENSO intensity (lower panel) produces a wave interference relationship.
Critics might say, and justifiably so, that this is potentially an over-fit to achieve that good a model-to-data correlation. There are too many degrees of freedom (DOF) in a tidal factorization, which would allow a spuriously good fit depending on the computational effort applied (see Reference 1 at the end of this post).
Yet, if the forcing key used in the ENSO model was reused as is in fitting an independent climate dipole, such as the AMO, and this same key required little effort in modeling AMO, then the over-fitting criticism is invalidated. What’s left to perform is finding a distinct low-DOF LTE modulation to match the AMO time-series as shown below.
This is an example of a common-mode cross-validation of an LTE model that I originally suggested in an AGU paper from 2018. Invalidating this kind of analysis is exceedingly difficult as it requires one to show that the erratic cycling of AMO can be randomly created by a few DOF. In fact, a few DOFs of sinusoidal factors to reproduce the dozens of AMO peaks and valleys shown is virtually impossible to achieve. I leave it to others to debunk via an independent analysis.
addendum: LTE modulation comparisons, essentially the wavenumber of the diffraction signal:
This is the forcing power spectrum showing the principal Mm tidal factor term at period 3.9 years, with nearly identical spectral profiles for both ENSO and AMO.
According to the precepts of cryptography, decoding becomes straightforward once one knows the key. Similarly, nature often closely guards its secrets, and until the key is known, for example as with DNA, climate scientists will continue to flounder.
References
Chao, B. F., & Chung, C. H. (2019). On Estimating the Cross Correlation and Least Squares Fit of One Data Set to Another With Time Shift. Earth and Space Science, 6, 1409–1415. https://doi.org/10.1029/2018EA000548 “For example, two time series with predominant linear trends (very low DOF) can have a very high ρ (positive or negative), which can hardly be construed as an evidence for meaningful physical relationship. Similarly, two smooth time series with merely a few undulations of similar timescale (hence low DOF) can easily have a high apparent ρ just by fortuity especially if a time shift is allowed. On the other hand, two very “erratic” or, say, white time series (hence high DOF) can prove to be significantly correlated even though their apparent ρ value is only moderate. The key parameter of relevance here is the DOF: A relatively high ρ for low DOF may be less significant than a relatively low ρ at high DOF and vice versa.“
The research category is topological considerations of Laplace’s Tidal Equations (LTE = a shallow-water simplification of Navier-Stokes) applied to the equatorial thermocline. The following citations provide an evolutionary understanding that I have developed via presentations and publications over the last 6 years (working backwards).
Given that I have worked on this topic persistently over this span of time, I have gained considerable insight into how straightforward it has become to generate relatively decent fits to climate dipoles such as ENSO. Paradoxically, this is both good and bad. It’s good because the model’s recipe is simply described algorithmically and scores well on plausibility and parsimony, largely because it’s a straightforward non-linear extension of a conventional tidal analysis model. However, that non-linearity opens up the possibility of many similar model fits that are equally good yet difficult to discriminate between. So it’s bad in the sense that I can come to an impasse in selecting the “best” model.
This is oversimplifying a bit but the framing issue is if you knew the answer was 72, but have a hard time determining whether the question being posed was one of 2×36, 3×24, 4×18, 6×12, 8×9, 9×8, 12×6, 18×4, 24×3, or 36×2. Each gives the right answer, but potentially not the right mechanism. This is a fundamental problem with non-linear analysis.
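To make the analogy concrete, the ordered factor pairs of 72 (excluding the trivial 1×72) can be enumerated directly:

```python
def factor_pairs(n, smallest=2):
    """Ordered factor pairs (a, b) with a*b == n, excluding trivial 1*n."""
    return [(a, n // a) for a in range(smallest, n // smallest + 1)
            if n % a == 0]

pairs = factor_pairs(72)
print(pairs)   # ten candidate "mechanisms", all giving the same answer
```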
A conventional tidal analysis by itself is just a few fundamental tidal factors (exactly 4) but made devastatingly accurate by the introduction of 2nd-order harmonics and cross-harmonics. All these harmonics are generated by non-linear effects but the frequency spectrum is so clean and distinct for a sea-level-height (SLH) time-series that the equivalent solution to k × F = 72 is essentially a scaling identification problem where the k is the scale factor for the corresponding cyclic tidal factors F.
Yet, by applying the non-linear LTE solution to the problem of modeling ENSO, we quickly discover that the algorithm is a wickedly effective harmonics and cross-harmonics generator. Any number of combinations of harmonics can develop an adequate fit depending on the variable LTE modulation applied. So it could be a small LTE modulation mixed with a wide assortment of tidal factors (the 2×36 case) or it could be a large LTE modulation mixed with a minimum of tidal factors (the 18×4 case). Or it could be something in between (e.g. the 8×9 case). This is all a result of the sine-of-a-sine non-linearity of the LTE formulation, related to the Mach-Zehnder modulation used in optical cryptography applications. The latter bit is the hint that things may not be unambiguously decoded given the fact that M-Z has been discovered to be nature’s own built-in encryption device.
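The harmonic-generating character of the sine-of-a-sine non-linearity is easy to demonstrate: pass a two-tone forcing through sin(m·f(t)) and combination frequencies appear in the output spectrum that are absent from the input. The tone bins and modulation depth below are arbitrary illustrative choices, not values from the ENSO model:

```python
import numpy as np

N = 4096
n = np.arange(N)
f1, f2 = 50, 81                       # two input tone frequencies (FFT bins)
x = np.sin(2 * np.pi * f1 * n / N) + np.sin(2 * np.pi * f2 * n / N)
y = np.sin(2.0 * x)                   # sine-of-a-sine modulation

X = np.abs(np.fft.rfft(x)) / N        # input spectrum
Y = np.abs(np.fft.rfft(y)) / N        # output spectrum

# a cross-harmonic at f1 + 2*f2 exists only in the modulated output
k = f1 + 2 * f2
print(X[k], Y[k])
```

The input spectrum has energy only at the two tones, while the output is littered with harmonics and cross-harmonics, which is exactly why so many tidal-factor combinations can produce an adequate fit.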
However, there remains lots of light at the end of this tunnel, as I have also discovered that the tidal factor spread is likely largely governed by a single lunar tidal constituent: the 27.55 day anomalistic Mm cycle interfering with an annual impulse. That’s essentially 2 of the 4 tidal factors, with the other 2 lunar factors providing a 2nd-order correction. For the longest time I had been focused on the 13.66 day tropical Mf cycle, as that also led to a decent fit over the years, specifically since the first beat harmonic of the Mf cycle with the annual impulse is 3.8 years while that of the Mm cycle is 3.9 years. These two terms are close enough that they only go out of phase after ~130 years, which is the extent of the ENSO time-series. Only when you try to simplify a model fit by iterating over the space of factor combinations will you discover the difference between 3.8 and 3.9.
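The 3.8 vs 3.9 year distinction follows from simple aliasing arithmetic: an annual impulse sampling a fast lunar cycle leaves a slow beat at the fractional number of lunar cycles per year. A sketch using the standard lunar periods (the computed out-of-phase time lands near ~120 years, the same order as the span of the ENSO record):

```python
YEAR = 365.2422                # tropical year, days

def aliased_period_years(lunar_period_days):
    """Slow beat left after an annual impulse samples a fast lunar cycle."""
    cycles_per_year = YEAR / lunar_period_days
    frac = abs(cycles_per_year - round(cycles_per_year))
    return 1.0 / frac

mm = aliased_period_years(27.5546)     # anomalistic (Mm) month -> ~3.9 yr
mf = aliased_period_years(13.6608)     # tropical fortnightly (Mf) -> ~3.8 yr
beat = 1.0 / abs(1.0 / mf - 1.0 / mm)  # time for the two to drift out of phase
print(round(mm, 2), round(mf, 2), round(beat))
```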
In terms of geophysics, the Mf factor is a tractional tidal forcing operating parallel to the ocean’s surface influenced by the moon’s latitudinal declination, while the Mm factor is a largely perpendicular gravitational forcing influenced by the perigean cycle of the Moon-to-Earth distance. The latter may be the “right mechanism” as each can give close to the “right answer”.
So the gist of the fitting observations is that far fewer harmonic factors are required for a decent Mm-based model than for an Mf-based model. This comes slightly at the expense of a stronger LTE modulation, but the parsimony of an Mm-based model can’t be beat, as I will show via the following analysis charts…
This is a good model fit based on a slightly modified Mm-based factorization, with a sample-and-hold placed on a strong annual impulse.
The comparison of the modified Mm tidal factorization to the pure Mm is below (the reason the 27.55 day periodicity doesn’t appear is because of the monthly aliasing used in plotting).
The slight pattern on top of the pure signal is due to a 6-year beat of the Mm period with the 27.212 day lunar draconic period, which marks the time between the moon’s nodal crossings. This is the strongest factor of the ascension cycle described in the solar and lunar ephemeris published recently by Sung-Ho Na. As highlighted above by the numbered cycles, ~20 occur in the span of 120 years.
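That 6-year beat follows directly from the two nearly equal monthly periods:

```python
YEAR = 365.2422          # days
DRACONIC = 27.2122       # days between successive lunar nodal crossings
ANOMALISTIC = 27.5546    # perigee-to-perigee (Mm) period, days

# interference (beat) period of the two nearly equal monthly cycles
beat_days = 1.0 / (1.0 / DRACONIC - 1.0 / ANOMALISTIC)
beat_years = beat_days / YEAR
print(round(beat_years, 2))   # ~6.0, i.e. ~20 cycles in 120 years
```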
Below is an expanded look showing how slight a correction is applied.
The integrated forcing after the annual impulse is shown below. The sample-and-hold integration exaggerates low-frequency differences so the distinction between the pure Mm forcing and Mm+harmonics is more apparent. The 6-year periodicity is obscured by longer term variations.
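A minimal sketch of the sample-and-hold integration, with illustrative (not fitted) impulse timing: an annual impulse samples the tidal forcing, the value is held for the year, and the running sum builds up the low-frequency response.

```python
import numpy as np

def sample_hold_integrate(forcing, impulse_every):
    """Sample the forcing at each impulse, hold the value, then integrate."""
    held = np.empty_like(forcing)
    latest = 0.0
    for i, f in enumerate(forcing):
        if i % impulse_every == 0:    # annual impulse fires
            latest = f
        held[i] = latest
    return held, np.cumsum(held)

t = np.arange(10 * 365)                          # 10 years of daily steps
forcing = np.sin(2 * np.pi * t / 27.5546)        # Mm-like tidal cycle
held, integrated = sample_hold_integrate(forcing, 365)
```

Because the held value persists for a full year before being integrated, small year-to-year differences in the sampled phase accumulate, which is why this construction exaggerates low-frequency differences between forcings.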
The log-scaled power spectrum of the integrated tidal forcing is shown below. Note the overwhelmingly strong peak near the 0.25/year frequency (3.9 year cycle). The rest of the peaks are readily matched to periodicities in the Na ephemerides [1] according to their strength.
The LTE modulation is quite strong for this factorization. As shown below, the forcing levels need to sinusoidally fold several times over to match the observed ENSO behavior. See the recent post Inverting non-autonomous functions for a recipe to aid in iterating for the underlying LTE modulation.
The parsimony of this model can’t be emphasized enough. It’s equivalent to the agreement of a conventional tidal forcing analysis to a SLH time-series in that only a single lunar tidal factor accounts for a majority of the modulation. Only the challenge of finding the correct LTE modulation stands in the way of producing an unambiguously correct model for the underlying ENSO behavioral dynamics.
References
[1] Sung-Ho Na, Chapter 19 – Prediction of Earth tide, Editor(s): Pijush Samui, Barnali Dixon, Dieu Tien Bui, Basics of Computational Geophysics, Elsevier, 2021, Pages 351-372, ISBN 9780128205136, https://doi.org/10.1016/B978-0-12-820513-6.00022-9. (note: the ephemerides for the Earth-Moon-Sun system matches closely the online NASA JPL ephemerides generator available at https://ssd.jpl.nasa.gov/horizons, but this paper is more useful in that it algorithmically states the contributions of the various tidal factors in the source code supplied. Source code also available at https://github.com/pukpr/GeoEnergyMath/tree/master/src)
This 2-D heat map, from Jialin Lin’s research group at The Ohio State University, shows the eastward propagation of the ocean subsurface wave leading to the switch from La Niña to El Niño.
Jialin Lin, associate professor of geography, has spent the last two decades tackling those challenges, and in the past two years, he’s had breakthroughs in answering two of forecasting’s most pernicious questions: predicting the shift between El Niño and La Niña and predicting which hurricanes will rapidly intensify.
Now, he’s turning his attention to creating more accurate models predicting global warming and its impacts, leading an international team of 40 climate experts to create a new book identifying the highest-priority research questions for the next 30-50 years.
… still to be published
Lin set out to create a model that could accurately identify ENSO shifts by testing — and subsequently ruling out — all the theories and possibilities earlier researchers had proposed. Then, Lin realized current models only considered surface temperatures, and he decided to dive deeper.
He downloaded 140 years of deep-ocean temperature data, analyzed them and made a breakthrough discovery.
“After 20 years of research, I finally found that the shift was caused by an ocean wave 100 to 200 meters down in the deep ocean,” said Lin, whose research was published in a Nature journal. “The propagation of this wave from the western Pacific to the eastern Pacific generates the switch from La Niña to El Niño.”
The wave repeatedly appeared two years before an El Niño event developed, but Lin went one step further to explain what generated the wave and discovered it was caused by the moon’s tidal gravitational force.
“The tidal force is even easier to predict,” Lin said. “That will widen the possibility for an even longer lead of prediction. Now you can predict not only for two years before, but 10 years before.”
Essentially, the idea is that these subsurface waves can in no way be caused by surface winds, as the latter are only observed later (likely as an after-effect of the sub-surface thermocline nearing the surface and thus modifying the atmospheric pressure gradient). This counters the long-standing belief that ENSO transitions occur as a result of prevailing wind shifts.
The other part of the article, concerning correlations with hurricane intensification, is also interesting.
Following up from this post, there is a recent sequence of articles in the AGU journal Water Resources Research under the heading: “Debates: Does Information Theory Provide a New Paradigm for Earth Science?”
Anticipating many of these ideas, our book Mathematical Geoenergy provides plenty of examples and derivations (with many centered on the ideas of Maximum Entropy).
Here is an excerpt from the “Emerging concepts” entry, which indirectly addresses negative entropy:
“While dynamical system theories have a long history in mathematics and physics and diverse applications to the hydrological sciences (e.g., Sangoyomi et al., 1996; Sivakumar, 2000; Rodriguez-Iturbe et al., 1989, 1991), their treatment of information has remained probabilistic akin to what is done in classical thermodynamics and statistics. In fact, the dynamical system theories treated entropy production as exponential uncertainty growth associated with stochastic perturbation of a deterministic system along unstable directions (where neighboring states grow exponentially apart), a notion linked to deterministic chaos. Therefore, while the kinematic geometry of a system was deemed deterministic, entropy (and information) remained inherently probabilistic. This led to the misconception that entropy could only exist in stochastically perturbed systems but not in deterministic systems without such perturbations, thereby violating the physical thermodynamic fact that entropy is being produced in nature irrespective of how we model it.
In that sense, classical dynamical system theories and their treatments of entropy and information were essentially the same as those in classical statistical mechanics. Therefore, the vast literature on dynamical systems, including applications to the Earth sciences, was never able to address information in ways going beyond the classical probabilistic paradigm.”
That is, there are likely many earth system behaviors that are highly ordered, but the complexity and non-linearity of their mechanisms makes them appear stochastic or chaotic (high positive entropy), when in reality they follow a complicated deterministic model (negative entropy). We just aren’t looking hard enough to discover the underlying patterns in most of these behaviors.
“Science and data compression have the same objective: discovery of patterns in (observed) data, in order to describe them in a compact form. In the case of science, we call this process of compression “explaining observed data.” The proposed or resulting compact form is often referred to as “hypothesis,” “theory,” or “law,” which can then be used to predict new observations. There is a strong parallel between the scientific method and the theory behind data compression. The field of algorithmic information theory (AIT) defines the complexity of data as its information content. This is formalized as the size (file length in bits) of its minimal description in the form of the shortest computer program that can produce the data. Although complexity can have many different meanings in different contexts (Gell-Mann, 1995), the AIT definition is particularly useful for quantifying parsimony of models and its role in science. “
Parsimony of models is a measure of negative entropy
Yes, it's not so much that the moon itself wobbles but that the moon's orbit appears to wobble up and down with respect to the earth's equatorial plane.
So this is more-or-less a known behavior, but hopefully it raises awareness to the other work relating lunar forcing to ENSO, QBO, and the Chandler wobble.
Whut's this all about? Tidal forces are known to create ocean tides (obvious of course) but less well known to control El Nino and other geophysical and climate behaviors. https://t.co/HZSRrbFXV9
Thompson, P.R., Widlansky, M.J., Hamlington, B.D. et al. Rapid increases and extreme months in projections of United States high-tide flooding. Nat. Clim. Chang.11, 584–590 (2021). https://doi.org/10.1038/s41558-021-01077-8