Granted, Dunkerton says dumb stuff on Twitter, but his highly cited research is also off-base. That's only my opinion, but it matters because recent papers in atmospheric science continue to cite his ideas as primary, if not authoritative. For example, in a recently published paper, "The Gravity Wave Activity during Two Recent QBO Disruptions Revealed by U.S. High-Resolution Radiosonde Data", citations 1 & 12 both refer to Dunkerton, and specifically to his belief that the QBO period is a property of the atmospheric medium itself.
This Dunkerton theory is straightforward to debunk, since the length of the cycle directly above the QBO layer is semi-annual, and thus set not by a property of the medium but by the semi-annual nodal forcing frequency. If we make the obvious connection to the other nodal forcing (that of the moon), then we find the QBO period is fixed at 28 months. I have been highlighting this connection to the authors of new QBO papers under community review, often with some subsequent feedback, such as here: https://doi.org/10.5194/acp-2022-792-CC1 . Though not yet visible in the comments, I received some personal correspondence showing that the authors under peer review are taking the idea seriously and attempting to duplicate the calculations. They have been methodical in their approach, asking for clarification and further instructions where they couldn't follow the formulation. They know about the GitHub software, so hopefully that will be of some help.
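For anyone wanting to check the arithmetic behind that 28-month figure, here is a back-of-the-envelope sketch (my own, assuming the draconic/nodal lunar month is the relevant lunar period):

```python
# Aliasing the draconic (nodal) lunar month against the annual cycle.
# With yearly sampling, a cycle of f cycles/year aliases to f - round(f).
year_days = 365.2422          # tropical year, in days
draconic_days = 27.21222      # draconic (nodal) lunar month, in days

f = year_days / draconic_days          # ~13.42 cycles per year
alias = f - round(f)                   # residual frequency after annual aliasing
period_months = 12 / abs(alias)        # aliased period, in months

print(round(period_months, 1))         # ~28.4 months
```

The ~28.4-month alias lands close to the nominally observed QBO period, which is the point of the nodal-forcing argument.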
In contrast, Dunkerton also knows about my approach but responds in an inscrutable (if not condescending) way. Makes you wonder if scientists such as Dunkerton and Lindzen are bitter and taking out their frustrations via the media. Based on their doggedness, they may in fact be intentionally trying to impede progress in climate science by taking contrarian stances. In my experience, the top scientists in other research disciplines don’t act this way. YMMV
"Nonlinear aspects play a major role in the understanding of fluid flows. The distinctive fact that in nonlinear problems cause and effect are not proportional opens up the possibility that a small variation in an input quantity causes a considerable change in the response of the system. Often this type of complication causes nonlinear problems to elude exact treatment."
https://doi.org/10.1029/2012JC007879
From my experience, if it is relatively easy to generate a fit to data via a nonlinear model, then it may also be easy to diverge from the fit with a small structural perturbation, or to come up with an alternative fit with a different set of parameters. This makes it difficult to establish an iron-clad cross-validation.
This doesn't mean we don't keep trying. Applying the dLOD calibration approach to an applied forcing, we can model ENSO via the NINO34 climate index across the available data range (in YELLOW) in the figure below (parameters here).
The lower right box is a modulo-2π reduction of the tidal forcing as an input to the sinusoidal LTE modulation, using the decline rate (per month) as the divisor. Why this works so well per month, in contrast to per year (where an annual cycle would make sense), is not clear. It is also fascinating in that this is a form of amplitude aliasing, analogous to the frequency aliasing that also applies a modulo-2π folding reduction to tidal periods below the monthly Nyquist sampling criterion. There may be a time-amplitude duality or Lagrangian particle-relabeling in operation that has at its core the trivial solutions of the Navier-Stokes or Euler differential equations when all segments of forcing are flat or have a linear slope. Trivial in the sense that when a forcing is flat or has a first-order slope, the second derivatives due to divergence in the differential equations vanish (quasi-static). This means that only the discontinuities, which occur concurrently with the annual ENSO predictability barrier, need to be treated carefully (the modulo-2π folding could be a topological Berry phase jump?). Yet, if these transitions are enhanced by metastable interface instabilities, as during thermocline turn-over, then the differential equation conditions could be transiently relaxed via a vanishing density difference. Much happens during a turn-over, but it doesn't last long, perhaps indicating a geometric phase. M.V. Berry also discusses phase changes in the context of amphidromic tidal singularities here.
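The folding step itself is easy to verify in isolation (a minimal sketch of the identity, not tied to the actual model code): since the sine is 2π-periodic, reducing the scaled forcing modulo 2π leaves the LTE modulation output unchanged:

```python
import numpy as np

# sin(k*F) is invariant when k*F is reduced modulo 2*pi,
# so the tidal forcing can be "folded" before the sinusoidal modulation.
rng = np.random.default_rng(0)
k = 15.0                        # hypothetical LTE wavenumber
F = rng.uniform(-5, 5, 1000)    # stand-in for a tidal forcing series

direct = np.sin(k * F)
folded = np.sin(np.mod(k * F, 2 * np.pi))

print(np.allclose(direct, folded))  # True: folding loses no information
```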
Suffice it to say that the topological properties of reduced-dimension volumes and of interfaces remain mysterious. The main takeaway is that a working NINO34-fitted ENSO model is produced, and if not here, then somewhere else a machine-learning algorithm will discover it.
The key next step is to apply the same tidal forcing to an AMO model, taking care not to change the tidal factors enough to produce a highly sensitive nonlinear response in the LTE model. So we retain an excluded interval from training (in YELLOW below) and only adjust the LTE parameters for the region surrounding this zone during the fitting process (parameters here).
The cross-validation agreement is breathtakingly good in the excluded (out-of-band) training interval. There is zero cross-correlation between the NINO34 and AMO time series to begin with, so this likely reveals the true emergent characteristics of a tidally forced mechanism.
As usual, all the introductory work is covered in Mathematical Geoenergy.
A community peer-review comment contributed to a recent QBO article is here, and the PDF is here. The same question applies to QBO as to ENSO or AMO: is it possible to predict future behavior? Is the QBO model less sensitive to input, since the nonlinear aspect is weaker?
An exact solution for equatorially trapped waves
Adrian Constantin, Journal of Geophysical Research, Vol. 117, C05029, doi:10.1029/2012JC007879, 2012
These are trochoidal waves.
Even within the context of gravity waves explored in the references mentioned above, a vertical wall is not allowable. This drawback is of special relevance in a geophysical context since [cf. Fedorov and Brown, 2009] the Equator works like a natural boundary and equatorially trapped waves, eastward propagating and symmetric about the Equator, are known to exist. By the 1980s, the scientific community came to realize that these waves are one of the key factors in explaining the El Niño phenomenon (see also the discussion in Cushman-Roisin and Beckers [2011]).
modulo-2π and Berry phase
The input forcing is calibrated to the differential length-of-day (LOD) with a correlation coefficient of 0.9997, and only a few terms are required to capture the standing-wave modes corresponding to the ENSO dipole.
So which curve below is the time-series data of atmospheric pressure at Darwin and which is the Laplace’s Tidal Equation (LTE) model calibrated from dLOD measurements?
As a bonus, the couple of years outside of the training interval are extrapolated from the model. This shouldn't be hard for climate scientists… or is it still too difficult?
If that isn't enough to discriminate between the two, the power spectra of the LTE mapping for model and for data are shown below. These identify a couple of the lower-frequency modulations as strong peaks, plus a few weaker higher-harmonic peaks that sharpen the model's detail. This shows that the data's behavior possesses a high degree of order not apparent in the time series.
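For readers who want to reproduce this kind of spectral comparison, a minimal periodogram sketch looks like the following (with a synthetic stand-in series, not the actual model output):

```python
import numpy as np

# Periodogram of a monthly series: pick out the strong low-frequency
# modulation peak from a toy signal with a slow and a fast component.
n = 1024
t = np.arange(n)                                       # months
x = np.sin(2*np.pi*t/64) + 0.3*np.sin(2*np.pi*t/8)     # toy modulation signal

spec = np.abs(np.fft.rfft(x - x.mean()))**2
freqs = np.fft.rfftfreq(n, d=1.0)                      # cycles per month

peak = freqs[np.argmax(spec)]
print(round(1/peak, 1))                                # dominant period = 64.0 months
```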
Poll on Twitter =>
Why isn't the Tahiti time series included, since that would provide additional signal discrimination via a differential measurement, as one should be the complement of the other? It should accentuate the signal and remove noise (and any common-mode behavior) if Darwin and Tahiti are perfect anti-nodes for all standing-wave modes. However, it appears that only the main ENSO standing-wave mode is balanced between the two.
In that case, the Darwin set alone works well.
An interesting Nature paper, "Seasonal overturn and stratification changes drive deep-water warming in one of Earth's largest lakes", focuses on Lake Michigan.
Note the strong impulse at the thermocline that occurs on an annual basis, coinciding with an overturning event (panel B below).
This is likely the same instability that occurs along the equatorial Pacific ocean thermocline as the differences in density become smaller, and the gravitational tidal force at that moment provides an impulse to slosh the interface, leading to ENSO events.
A simple premise, yet barely considered, except in Mathematical Geoenergy, chapter 12.
In a thesis, Hydrodynamic modelling of Lake Ontario, it was mentioned that "displacement of water masses leads to rhythmic oscillations in the entire lake. These long waves or seiches have wavelengths of the same order of magnitude as the basin dimensions. Seiches are reflected at the lake boundaries and combine into standing wave patterns on the thermocline [18]". The cited book on limnology is instructive.
Available on Google Books, it is likely a better description of thermocline dynamics than you will find in an oceanography textbook.
This is a powerful technique on its own, as it is used frequently (and depended on) in machine-learning models to eliminate poorly performing trials. But it gains even more importance when new data for validation will take years to collect. In particular, consider the arduous process of collecting fresh data for the El Nino Southern Oscillation, which will take decades to accumulate sufficient statistical significance for validation.
So, what’s necessary in the short term is substantiation of a model’s potential validity. Nothing else will work as a substitute, as controlled experiments are not possible for domains as large as the Earth’s climate. Cross-validation remains the best bet.
As a practical aside, CV is not for the faint-of-heart, since anyone doing cross-validation will get accused of cheating (via what they apparently refer to as researcher Degrees Of Freedom). Well, of course the "unexplored" data is there for anyone to see, so everyone is in the same boat when it comes to avoiding tainting the results or priming the pump, so to speak. Yet this paranoia is strong enough that critics may use the rDOF excuse to completely ignore cross-validation results (see the link above for an example of me trying to get any interest in cross-validation of a Chandler wobble model; taint goes both ways apparently, in this case as prejudice, i.e. biased pre-judging of the modeler's intent).
An effective yet non-controversial example is cross-validation of a delta Length-of-Day (dLOD) model. The LOD data is from Paris IERS and is transformed into an acceleration by taking the differential, thus dLOD. A cross-validation can easily be generated by taking any interval in the time series, fitting that to an appropriate model of the geophysics, and then extrapolating over the rest of the length (which stretches from 1962 to the current day). To do that, first we need to select the physical factors that will act to modify the angular momentum of the Earth’s rotation — these factors are simply the tidal torques as generated by the moon and sun, tabulated by R.D. Ray in “Long-period tidal variations in the length of day” (see Table 3, column labeled V_{0}/g for forcing values, with amplitude scaled by frequency since this is a differential LOD).
The strongest 30 tidal factors are arranged as a Fourier series and then optimally fit using a multiple linear regression (MLR) algorithm (source code). The results (raw fit here) are shown below for 3 orthogonal training intervals, each approximately 20 years long (1962-1980, 1980-2000, and 2000-present).
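The regression step can be sketched in a few lines (a toy version with a handful of illustrative long-period tidal constituents, not Ray's full Table 3): each constituent contributes a sin/cos column pair, and the amplitudes and phases fall out of one least-squares solve.

```python
import numpy as np

# Fit a Fourier series of known tidal periods to a dLOD-like series by
# linear least squares (one sin/cos column pair per constituent).
rng = np.random.default_rng(1)
t = np.arange(0.0, 7300.0)                   # ~20 years of daily samples
periods = np.array([13.66, 27.55, 182.62])   # illustrative tidal periods, days

# synthetic "dLOD" built from those periods plus measurement noise
true_amps = np.array([0.3, 0.2, 0.35])
y = sum(a * np.sin(2*np.pi*t/p + 0.5) for a, p in zip(true_amps, periods))
y += 0.05 * rng.standard_normal(t.size)

cols = []
for p in periods:
    cols += [np.sin(2*np.pi*t/p), np.cos(2*np.pi*t/p)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

amps = np.hypot(coef[0::2], coef[1::2])      # recovered amplitudes
print(np.round(amps, 2))                     # ≈ [0.3, 0.2, 0.35]
```

The real fit works the same way, just with 30 constituents and the measured dLOD series in place of the synthetic one.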
It appears that Ray may have evaluated his predicted forcing values against similar LOD data, as the cross-validation agreement is very good across the board. He writes:
“The only realistic test of this new tidal LOD model is to examine how well it removes tidal energy from real LOD measurements. To test that we use the SPACE2008 time series of Earth rotation variations produced by Ratcliff and Gross [2010] from various types of space-geodetic measurements. Their method employs a specially designed Kalman filter to combine disparate types of measurements and to produce a time series with daily sampling interval; see also Gross et al. [1998]. After computing and subtracting the new LOD tidal model from the SPACE2008 data, we examine the residual spectrum for the possible presence of peaks at known tidal frequencies.”
This is an excellent example of effective cross-validation as the model is essentially stationary across the entire interval, indicating that the tidal factors represent the actual torque controlling the fastest cycling in the Earth’s LOD. Yes, it is possible that Ray applied his “researcher degrees of freedom” to further calibrate his tabulated tidal factors against this data, but it doesn’t detract from the excellent stationarity of the model itself. So as with a conventional tidal analysis, it doesn’t matter if a tidal model is calibrated via historical data cross-validation or from future data, as the foundational model has withstood the test of time as well as being internally self-consistent.
With that as a blueprint, we now enter the realm of non-linear model fitting a la ENSO. The relevant steps are reviewed in a recent blog post, which starts from the same set of tidal factors calibrated from dLOD data as described above. Keep in mind that allowing a researcher at least a few degrees of freedom to experiment with is the same as allowing them insight and educated guesses. Research would never advance without allowing flexibility when treading into unknown waters. So, the insight is to seed the initial model fitting with a few nonlinear modulation factors that represent the possible standing-wave modes of ENSO — one low-frequency modulation, and a high-frequency modulation set as a 7 & 14x harmonic of the fundamental, representing Tropical Instability Waves. The result of fitting the LTE model (using the GEM software) is shown below with the excluded-from-training intervals shown. Since these intervals were considered pristine from the point of view of the randomly mutating fitting process, any correlation between the model and the data within these intervals should be considered significant.
Click on the image's link to magnify and get a sense of how well the model works in the highlighted validation intervals. There are essentially the 3 standing-wave nodes (lower left) and the dLOD starting tidal factors (upper right) that are gradually varied to arrive at the final fit. It's not perfect, but enough of the peaks and valleys align that not much additional fitting is needed to model the data with a high correlation coefficient across the entire time series. As an example, including the pre-1880 ENSO data (which is somewhat iffy apart from the late-1870s El Nino peak) generates this fit:
This "from scratch" cross-validation differs from the alternate approach of fitting to the entire interval and then excluding a portion before refitting, which is more susceptible to bias (even though it can also show the effects of over-fitting, as demonstrated in this experiment).
The difficulty in this from-scratch process, in contrast to the properties of the underlying dLOD model, is that the non-linear transformation required by LTE is much more structurally sensitive than the linear transformation of pure harmonic tidal analysis. For example, harmonic tidal analysis is essentially:
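(The equation appeared as an image in the original post; reconstructing from the surrounding text, with $k$ a constant factor and $F(t)$ the tidal forcing, the linear form is:)

```latex
f(t) = k \cdot F(t)
```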
so changes in k or F(t) reflect as a linear scaling in the output of f(t).
Whereas with the non-linear LTE model
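(again reconstructing the missing equation from context, with $\varphi$ an arbitrary phase:)

```latex
f(t) = \sin\!\left( k \cdot F(t) + \varphi \right)
```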
so that changes in k or F(t) can cause f(t) to swing wildly in both the positive and negative directions.
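A toy illustration of that sensitivity (my own sketch, with made-up numbers): for a large wavenumber, even a 1% perturbation of k can flip the sign of the output, whereas the linear mapping would change by exactly 1%.

```python
import numpy as np

# Structural sensitivity of the sinusoidal LTE mapping: for a large
# wavenumber k, a 1% perturbation of k can flip the sign of sin(k*F).
F = 2.0            # a sample forcing value
k = 100.0          # hypothetical large LTE wavenumber

out = np.sin(k * F)                    # sin(200)
out_perturbed = np.sin(1.01 * k * F)   # sin(202)

print(np.sign(out), np.sign(out_perturbed))   # prints: -1.0 1.0
```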
The bottom line is that the cross-validation results can't be denied, but other researchers need to be involved in the process to improve the model enough to make it production quality. The LTE fitting software on GitHub is fast (it takes just a few minutes for the model to start aligning), and with more CPU cores (say, 128) it would appeal to scientific computing enthusiasts. As I designed the software to use all available cores, the speed-up would be nearly linear in the number of cores. A full fit could then take only minutes, making rapid-turnaround experimentation feasible.
A highly esteemed climate scientist, Isaac Held, even participated and voiced his opinion on how feasible that would be. Eventually the forum decided to concentrate on the topic of modeling El Nino cycles, starting out with a burst of enthusiasm. The independent track I took on the forum was relatively idiosyncratic, yet I thought it held promise, and I eventually published the model in the monograph Mathematical Geoenergy (AGU/Wiley) in late 2018. The forum is nearly dead now, but there is a recent thread on "Physicists predict Earth will become a chaotic world". Have we learned nothing after 10 years?
My model assumes that El Nino/La Nina cycles are not chaotic or random, which is probably still considered blasphemous. In contrast to what's in the monograph, the model has been simplified, and a feasible solution can be mapped to data within minutes. The basic idea remains the same, explained in 3 parts.
The fitting process lets all the parameters vary slightly, so I use the equivalent of a gradient descent algorithm to guide the solution. The impulse month is seeded along with starting guesses for the two slowest wavenumbers. Another MLR algorithm is embedded to estimate the amplitudes and phases required.
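The embedded MLR step can be sketched as follows (a toy version with assumed wavenumbers and a stand-in forcing, not the actual GEM code): given a candidate forcing F(t) and trial wavenumbers, the amplitudes and phases are linear parameters and fall out of a direct least-squares solve, leaving only the wavenumbers to the outer gradient-descent search.

```python
import numpy as np

# Given a forcing F(t) and trial LTE wavenumbers, solve amplitudes and
# phases directly via least squares on sin(k*F) / cos(k*F) columns.
rng = np.random.default_rng(2)
F = np.cumsum(rng.standard_normal(2000)) * 0.05   # stand-in forcing series
wavenumbers = [1.5, 10.5]                          # trial slow + fast modes

# synthetic target built from those modes (amplitudes 0.8 and 0.2)
target = 0.8*np.sin(1.5*F + 0.3) + 0.2*np.sin(10.5*F - 1.0)

A = np.column_stack([g(k*F) for k in wavenumbers for g in (np.sin, np.cos)])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

amps = np.hypot(coef[0::2], coef[1::2])
print(np.round(amps, 2))          # recovers [0.8, 0.2]
```

This inner/outer split is what keeps the search tractable: the nonlinear optimizer only has to explore a handful of wavenumbers, not the amplitudes and phases.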
The multi-processing software is at https://github.com/pukpr/GeoEnergyMath/wiki
Recently it has taken mere minutes to arrive at a viable model fit to the ENSO data (the ENS ONI monthly data, 1850 – Jul 2022), starting with the initially calibrated dLOD factors. Each of the tidal factors is modified slightly, but the correlation coefficient against the starting dLOD calibration is still 0.99.
Even with that, the only way to make a convincing argument is to apply cross-validation during the fitting process. A training interval is used during the fitting and the model is extrapolated as a check once the training error is minimized. Even though the model is structurally sensitive, it does not show wild over-fitting errors. This is explainable as only a handful of degrees of freedom are available.
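The holdout procedure itself is simple to express (schematic only, with a linear amplitude/phase fit of a fixed-period sinusoid standing in for the LTE fitter):

```python
import numpy as np

# Schematic cross-validation: fit on a training interval, then score the
# extrapolation on the untouched remainder of a synthetic series.
rng = np.random.default_rng(3)
t = np.arange(1200)                       # monthly samples, 100 years
series = np.sin(2*np.pi*t/28.4) + 0.1*rng.standard_normal(t.size)

split = 800                               # train on the first two-thirds
train_t, test_t = t[:split], t[split:]

# stand-in "model": fix the known period, fit amplitude/phase linearly
A = np.column_stack([np.sin(2*np.pi*train_t/28.4),
                     np.cos(2*np.pi*train_t/28.4)])
coef, *_ = np.linalg.lstsq(A, series[:split], rcond=None)

pred = (coef[0]*np.sin(2*np.pi*test_t/28.4)
        + coef[1]*np.cos(2*np.pi*test_t/28.4))
cc = np.corrcoef(pred, series[split:])[0, 1]
print(cc > 0.9)                           # a stationary signal extrapolates well
```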
So this demonstrates that the behavior is stationary and definitely not chaotic, only obscured by the non-linear modulation applied to the tidally forced waveform.
Amazing number of harmonics
https://www.sciencedirect.com/science/article/pii/B9780128215128000177
The most important mechanism for turbulence production in equatorial parallel shear flows is the inflectional instability, which operates at local maxima of the mean shear profile (Smyth and Carpenter, 2019). In the presence of stable stratification, inflectional instability is damped, but it may yet grow, provided that the minimum value of Ri is less than critical. In this case, the process is termed Kelvin–Helmholtz (KH) instability.
The (nearly) common forcing
with the applied LTE of a 180° phase difference
leads to adequately fitted models to the respective time series
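The 180° relation is just an algebraic identity of the sinusoidal modulation (a one-line check with stand-in values, not taken from the fitted models):

```python
import numpy as np

# A 180-degree phase shift in the LTE modulation negates the output:
# sin(k*F + pi) == -sin(k*F), giving an Atlantic/Pacific anti-phase.
F = np.linspace(-3, 3, 500)   # stand-in common forcing
k = 4.0                       # hypothetical shared wavenumber

pacific = np.sin(k * F)
atlantic = np.sin(k * F + np.pi)

print(np.allclose(atlantic, -pacific))  # True
```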
The fact that the fundamental (and 7th harmonic) are aligned between ENSO and AMO strongly suggests that the standing-wave wavenumbers are not governed by the basin geometry but are more of a global characteristic that remains coherent across the land masses. The Atlantic basin is narrower than the Pacific, so intuitively one might have predicted unique wavenumbers that would fit within the bounding coastlines, but this is perhaps not the case.
Instead, the LTE modulation wraps around the earth and produces an anti-phase relationship in keeping with the approximately 180° longitudinal difference between the Atlantic and Pacific.
Any additional phase shift ϕ can also easily produce the anomalously large multidecadal variations in the AMO due to the biasing properties of the sinusoidal LTE modulation.
It is just a matter of time until machine-learning algorithms start discovering these patterns. But, alas, they may not know how to deal with the findings.