This is a straightforward validation of the forcing used in the lunar-driven ENSO model.
The paper by Chao et al. provides a comprehensive spectral analysis of the Earth's length-of-day (LOD) variations using both a wavelet analysis and a power spectrum analysis. The wavelet analysis provides insight into the richness of the LOD cyclic variations (cf. the Chao ref 6 in a recent post):
Both the wavelet and the power spectrum (below) show the 6-year Fourier component that appears in the ENSO model as a mixed tidal forcing.
The original premise is that the change in LOD, via the equivalent change in angular momentum, imparts a forcing on the Pacific ocean thermocline as per a reduced-gravity model. Calculating a spectral analysis of the best-fit ENSO model forcing, note that all of the model peaks (in RED) match those found by Chao et al. in their ΔLOD analysis:
There are additional peaks not found by Chao, but those are reduced in magnitude, as can be inferred from the log (i.e., dB) scale. If these actually exist in the Chao spectrum, they may be buried in the background noise. Also, the missing Sa and Ssa peaks are the seasonal LOD variations, which the model accounts for separately, since most ENSO data sets are filtered to remove the seasonal signal.
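For anyone wanting to reproduce this kind of comparison, here is a minimal sketch of a dB-scale power spectrum for a monthly-sampled forcing series (the two-tone synthetic input is purely illustrative, standing in for the actual model forcing):

```python
import numpy as np

def power_spectrum_db(x, dt=1.0 / 12):
    """Return (frequencies in cycles/year, power in dB relative to the peak).

    x  : uniformly sampled forcing series
    dt : sample spacing in years (monthly data -> 1/12)
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # remove DC so the peak is a real line
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    db = 10.0 * np.log10(power / power.max() + 1e-12)
    return freqs, db

# Illustrative input: a 6-year line plus a weaker annual line
t = np.arange(0, 100, 1.0 / 12)             # 100 years of monthly samples
x = np.sin(2 * np.pi * t / 6) + 0.3 * np.sin(2 * np.pi * t)
freqs, db = power_spectrum_db(x)
peak = freqs[np.argmax(db)]                 # dominant peak near 1/6 cycles/yr
```

The dB normalization to the tallest peak is what lets the weaker satellite peaks be read off directly as attenuation relative to the primary constituents.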
The tidal constituents shown above in the Chao power spectra are defined in the following Doodson table. Chao et al. likely cannot discriminate the tropical values from the draconic and anomalistic values, since they are so close; the ENSO model, on the other hand, needs these values precisely. Each of the primary Mm, Mf, Mtm, and Mqm factors and the satellite Msm, Msf, Mstm, and Msqm factors aligns with the first 4 harmonics of the mixed nonlinear ENSO model, with the 2nd-order satellites arising from the anomalistic correction. This is an excellent validation test because this particular LOD power spectrum has not been used previously in the ENSO model fitting process. If the peaks did not match up, the original premise for LOD forcing would need to be reconsidered.
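As a concrete check on the Doodson table, the long-period constituent periods can be derived from the mean lunar angular rates. This sketch uses standard month lengths and the usual Doodson-style combinations (the 4s − 2p combination for Mqm is my assumption for the quarter-monthly term):

```python
# Mean angular rates (degrees/day) from standard orbital periods
TROPICAL_MONTH = 27.321582          # days; lunar return to the same ecliptic longitude
PERIGEE_PERIOD = 8.8504 * 365.25    # days; lunar perigee precession cycle (~8.85 yr)

s = 360.0 / TROPICAL_MONTH          # rate of mean lunar longitude
p = 360.0 / PERIGEE_PERIOD          # rate of the lunar perigee

# Long-period constituents as linear combinations of s and p
rates = {
    "Mm":  s - p,          # lunar monthly (anomalistic month)
    "Mf":  2 * s,          # lunar fortnightly (half tropical month)
    "Mtm": 3 * s - p,      # lunar termensual
    "Mqm": 4 * s - 2 * p,  # quarter-monthly (assumed combination)
}
periods = {name: 360.0 / rate for name, rate in rates.items()}
# periods["Mm"] ~ 27.555 d, periods["Mf"] ~ 13.661 d, periods["Mtm"] ~ 9.133 d
```

Note how Mm comes out at the anomalistic month while Mf is tied to the tropical month, which is exactly the near-degeneracy that makes the constituents hard to separate in a short LOD record.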
 B. F. Chao, W. Chung, Z. Shih, and Y. Hsieh, “Earth’s rotation variations: a wavelet analysis,” Terra Nova, vol. 26, no. 4, pp. 260–264, 2014.
 A. Capotondi, “El Niño–Southern Oscillation ocean dynamics: Simulation by coupled general circulation models,” Climate Dynamics: Why Does Climate Vary?, pp. 105–122, 2013.
 D. D. McCarthy (ed.), IERS Conventions (1996), IERS Technical Note No. 21.
21 thoughts on “ENSO tidal forcing validated by LOD data”
Another view. The forcing factors congregate around the main long-period M-series tidal terms. Upper panel is log scale and lower is linear.
Have you looked at other ENSO indexes to see if the same analysis holds true and/or reveals interesting differences?
I’ve been playing around with temperature data trying to see what I find for natural variability – much like what you did with CSALT. Log(CO2) is obviously the biggie; once that’s subtracted there’s only +/- a few tenths left to explain. So then I subtracted the NINO34 data. I did it again substituting CTI for NINO34 and it did have some interesting differences.
Kevin, that’s an excellent one to evaluate, thanks.
The first thing I would try is a sliding correlation coefficient between CTI and NINO34, to see how it compares against SOI.
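Something like this would do it (a sketch with pandas and synthetic stand-in series; the real CTI and NINO34 columns would replace the fake data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 600                                    # 50 years of monthly values
base = rng.standard_normal(n).cumsum()     # shared low-frequency signal
cti = pd.Series(base + 0.2 * rng.standard_normal(n))      # stand-in for CTI
nino34 = pd.Series(base + 0.2 * rng.standard_normal(n))   # stand-in for NINO34

# 10-year (120-month) sliding correlation coefficient
window = 120
sliding_cc = cti.rolling(window, center=True).corr(nino34)
# sliding_cc is NaN near the edges; dips in the interior flag
# intervals where the two indexes diverge
```

Plotting `sliding_cc` against time makes it easy to spot intervals (like the WWII years mentioned below) where the indexes decouple.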
Here’s a graph of HADCRUT – CO2 – TSI – LOD – ENSO using both the NINO34 dataset and the CTI dataset. Standard deviations are virtually the same. NINO34 yields a result 2/1000 of a degree better.
The immediate observation is the spike during WWII, which is at least partly due to differing temperature calibrations between military and commercial ships.
I have that chart which shows where other discrepancies are
Yes, I was familiar with the WWII instrument problem/change. Do you know if there’s an explanation for the 1956 negative spike?
I just went and substituted BEST for HADCRUT – standard deviations a few thousandths worse than with HADCRUT:
NINO34 corrected = 0.111045
CTI corrected = 0.116576
The 1956 spike is a mystery to me
I took the model output from the spreadsheet you sent me and substituted it for the CTI dataset. Here are the results:
For this application, ENSO adjustment of the GMST record, the model output works as well as the NINO34 dataset to within 4 ten-thousandths of a degree and actually slightly better than the CTI.
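The comparison being run here is essentially: regress the ENSO index out of the temperature residual and compare the standard deviation of what is left. A minimal numpy sketch with synthetic stand-ins (the real HADCRUT residual and NINO34/CTI/model series would replace the fake data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1600                                    # monthly samples
enso_index = rng.standard_normal(n)         # stand-in for NINO34, CTI, or model output
# stand-in for the GMST residual after CO2/TSI/LOD are removed
temps = 0.08 * enso_index + 0.05 * rng.standard_normal(n)

# Least-squares fit of the index (plus an offset) to the residual
A = np.column_stack([enso_index, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
corrected = temps - A @ coef

before, after = temps.std(), corrected.std()
# after < before: the index explains part of the remaining variability
```

Running the same regression with each candidate index and comparing the `after` values is what produces the few-thousandths-of-a-degree differences quoted above.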
Interesting. There might also be an AMO factor, but it’s tricky because it has to be detrended
Pingback: Identification of Lunar Parameters and Noise | context/Earth
This may be of interest: “What Earth’s climate system and topological insulators have in common”, describing the paper “Topological origin of equatorial waves” by Pierre Delplace, J. B. Marston, and Antoine Venaille; Science, 05 Oct 2017. DOI: 10.1126/science.aan8819
Could be what I solved for last year — reducing the Coriolis forces at the equator.
I simplified much more than what they did.
It reminded me of something you wrote about Kelvin waves on the Azimuth Forum a couple of years ago.
Indeed, John posted an anti-vortex article last year at Azimuth
I made a comment at the time:
“These curl equations are fascinating and are of course endemic in applications from electromagnetics to fluid dynamics. Perhaps there is some overlap with the model of the QBO equatorial winds that we are working on at John’s Azimuth Forum …http://contextearth.com/2016/09/23/compact-qbo-derviation/#comment-199906 “
A few years ago, I referenced this paper by Marston, who did the new research, which was a call-to-arms to solving climate problems: https://physics.aps.org/articles/v4/20
This paper by Vallis is a good inspiration to look at simplifying the physics before doing CFD http://contextearth.com/2016/09/03/geophysical-fluid-dynamics-first-and-then-cfd/
As we would expect the various ENSO indexes are highly correlated with each other except for one outlier in each group (grouped by coverage years).
It’s not wholly unexpected that training on one dataset yields good results when comparing the model output against the other SOI Indexes. But to achieve the best results it’s important not to Solve for Max, but an arbitrary value below the expected Max. This prevents overfitting to both the training dataset and the training interval.
So, just in case anyone was wondering, the model is not sensitive to the choice of ENSO Index used for training or comparison.
” But to achieve the best results it’s important not to Solve for Max, but an arbitrary value below the expected Max.”
That’s a really good insight Kevin. What I often find is that the interim cross-validation passes through what looks like a really good out-of-band match before it starts to over-fit and go completely uncorrelated. I saw that originally with an early ENSO fit, and that’s what has provided the incentive to keep trying to find the minimal set of parameters. What to choose for this interim max is the question, and the lower it is, the greater the number of states the solution could exist in. The temptation is always to wait it out to make sure that it can jump out of a local minimum and let it find the best fit.
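A sketch of the “don’t Solve for Max” idea: among candidate fits, accept the one whose training score lands closest to a cap set below the achievable maximum, rather than the outright best training score. The function names here are illustrative, not from the actual fitting code:

```python
def solve_to_target(candidates, train_score, target=0.7):
    """Return the (candidate, score) pair whose training score is
    closest to `target` without exceeding it, instead of the
    maximum-score candidate (which tends to overfit the training
    interval)."""
    best = None
    for cand in candidates:
        s = train_score(cand)
        if s <= target and (best is None or s > best[1]):
            best = (cand, s)
    return best

# Toy illustration: a lookup table of training correlations per candidate
scores = {"a": 0.50, "b": 0.65, "c": 0.80, "d": 0.95}
pick = solve_to_target(scores, scores.get, target=0.7)
# Capping at 0.7 selects candidate "b" (0.65), not the 0.95 over-fit
```

Lowering `target` enlarges the set of admissible solutions, which is the trade-off described above: more candidate states to sort through, but less chance of locking onto an over-fit one.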
Two images should have accompanied the above comment
Kevin, Nice idea including the SOI_noise time series in the mix, as that provides the control to partially reject over-fitting. Typically, a randomly-generated red noise series is created as the test, but this seems even better.
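For anyone wanting to generate the conventional control, here is a minimal AR(1) red-noise surrogate (the lag-1 coefficient of 0.9 is just a representative choice):

```python
import numpy as np

def red_noise(n, phi=0.9, seed=0):
    """AR(1) 'red noise': x[t] = phi * x[t-1] + white noise."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

surrogate = red_noise(5000)
# Feeding `surrogate` through the same fitting pipeline gives a null
# baseline: whatever "fit" it achieves is a measure of over-fitting.
```

The SOI_noise series serves the same purpose but is arguably the stronger test, since it carries the actual noise character of the index rather than an idealized AR(1) spectrum.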
The SOI_noaa series cc is puzzling.
The cross-validation tests look good too, and I assume that you didn’t fit on the training interval to the max?
Pingback: Reverse Engineering the Moon’s Orbit from ENSO Behavior | context/Earth
Pingback: Approximating the ENSO Forcing Potential | context/Earth