The term "digital twin" is relatively new, but the essential idea has been around for decades. In the past, a digital twin would have been called a virtual simulation (VS) of a specific system, encoded in a programming language. For a system that had already been built, the virtual simulation emulated all the behaviors and characteristics of that system, only operated on a computer, with any necessary interactive controls and displays provided on a console, either real or virtual. A widely known example of a VS is the flight simulator, which in historical terms was the industrial forerunner of today's virtual reality. A virtual simulation could also be used during the design of a system, with the finished digital twin providing a blueprint for the actual synthesis of the end product. This approach has likewise been practiced for decades, both in the electronics industry via logic synthesis of integrated circuits from a hardware description language, and with physical products via 3D printing from CAD models.
Azimuth Project
The Azimuth Project discussion forum has been erased from existence, so all of its interesting contributions are now inaccessible from web links and Google searches. This is especially unfortunate given that it was one of only a few open discussion forums on climate science.

In the spirit of continuity, a spin-off discussion forum exists on the remnants of the azimuth-project GitHub organization.
Gist Evaluation
The Gist site on GitHub makes it easy to comment on posts. For example, images of charts can be pasted into the discussion area, and snippets of code can be added and updated, which is useful for neural net evaluation. The following is a link to an initial Gist area for evaluating LTE models.
Sub(Surface)Stack
I signed up for a SubStack account a while ago and published two articles on it (SubSurface) in the last week.
- https://pukite.substack.com/p/machine-learning-validates-the-enso
- https://pukite.substack.com/p/machine-learning-validates-the-amo
The SubStack authoring interface has good math equation mark-up, convenient graphics embedding, and an excellent footnoting system. On first pass, it only lacks control over font color.
The articles are focused on applying neural network cross-validation to ENSO and AMO modeling, as suggested previously. I haven't completely explored the configuration space, but one aspect that may be becoming clear is the value of wavelet neural networks (WNN) for time-series analysis. The WNN approach seems much more amenable to extracting sinusoidal modulation of the input-to-output mapping, trained on a rather short interval and then cross-validated out-of-band. The Mexican hat wavelet (2nd derivative of a Gaussian) as an activation function in particular locks in quickly to an LTE modulation that took longer to find with the custom search software I have developed at GitHub. I think the reason for the efficiency is that it's optimizing to a Taylor series expansion of the input terms, a classic nonlinear expansion that NNs excel at.
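For reference, the activation function itself is simple to write down. This is a minimal NumPy sketch of the Mexican hat wavelet (not the actual WNN code used in the runs below), normalized so it peaks at 1:

```python
import numpy as np

def mexican_hat(x):
    """Mexican hat wavelet: the (sign-flipped) 2nd derivative of a Gaussian,
    normalized so that the peak value at x = 0 is 1."""
    x = np.asarray(x, dtype=float)
    return (1.0 - x**2) * np.exp(-(x**2) / 2.0)

# The activation peaks at the origin, crosses zero at x = +/-1,
# and has negative side lobes that decay quickly
xs = np.linspace(-4.0, 4.0, 9)
ys = mexican_hat(xs)
```

The localized oscillatory shape is what makes it plausible as a building block for sinusoidal modulations.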
The following training run, using the Mexican hat activation and the Adam optimizer, is an eye-opener: it achieved an admirable fit within a minute of computation.
The GREEN on BLUE is training on NINO4 data over two end-point intervals, with the RED cross-validation over the out-of-band region. The correlation coefficient is 0.34, which is impressive considering the nature of the waveform. Clearly there is similarity.
Moreover, if we compare the model fit to data via the WNN against the LTE harmonics approach, we can also see where the two fare equally poorly. Below, in the outer frame, is the NINO4 LTE fit, with the YELLOW arrow pointing downward at a discrepancy (a peak in the data not resolved in the fit). In comparison, the yellow-bordered inset shows the same discrepancy on the WNN training run. So the fingerprints essentially match, with no coaching.
The neural net chain is somewhat deep at 6 layers, but I think this is needed to expand to the higher-order terms of the Taylor series. In the directed graph below, L01 is the input tidal forcing and L02 is the time axis (with an initial very low weighting).
The fit also appears temporally stationary across the entire time-span, so the WNN's temporal contribution appears minimal.
In a previous fit, the horizontal striations (indicating the modulation factor at a given forcing level) matched those of the LTE model, providing further evidence that the WNN was mapping to an optimal modulation.

The other Sub(Surface)Stack article is on the AMO, which also reveals promising results. This is a video of the training in action:
Controlled Experiments
Sorry to have to point this out, but it's not my fault that geophysicists and climatologists can't perform controlled experiments to test out various hypotheses. It's not their fault either. Nature made gravitational forces so weak and planetary objects so massive that no one can scale the effects down to laboratory size for a carefully controlled experiment. One can always create roughly equivalent emulations, such as the magnetic field experiment described in the previous blog post, and validate a hypothesized behavior as a controlled lab experiment. Yet I suspect that this would not get sufficient buy-in, as it's not considered the actual real thing.
And that's the dilemma. By the same token that analog emulators will not be trusted by geophysicists and climatologists, scientists from other disciplines will remain skeptical of untestable claims made by earth scientists. If nothing definitive comes out of a thought experiment that can't be reproduced by others in a lab, they remain suspicious, as per their education and training.
It should therefore work both ways. As featured in the previous blog post, the model of the Chandler wobble forced by lunar torque needs to be treated fairly: either clearly debunked or considered as an alternative to the hazy consensus. ChatGPT remains open about the model, not the least bit swayed by colleagues or tribal bias. As the value of the Chandler wobble predicted by the lunar nodal model (432.7 days) is so close to the cited value of 433 days, the bottom line is that it should be difficult to ignore.
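One way to see where a 432.7-day figure can come from is via aliasing arithmetic (my reconstruction of the reasoning, not necessarily the exact derivation used): the draconic (nodal) fortnightly cycle, sampled by a semi-annual impulse, folds down to a slow beat.

```python
# Aliasing sketch for the lunar nodal model of the Chandler wobble.
# The draconic (nodal) month is 27.2122 days; its fortnightly half-cycle
# is 13.6061 days. A semi-annual (2 cycles/year) impulse folds that fast
# frequency modulo 2 cycles/year, leaving only a slow beat.
year_days = 365.2422
draconic_fortnight = 27.2122 / 2.0

freq_per_year = year_days / draconic_fortnight   # ~26.84 cycles per year
aliased = freq_per_year % 2.0                    # fold against semi-annual harmonics
chandler_days = year_days / aliased              # ~432.7 days vs the cited ~433
```

The result lands within a fraction of a day of the cited Chandler wobble period.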

There are other indicators in the observational data to further substantiate this; see Chandler Wobble Forcing. It also makes sense in the context of the annual wobble.
As it stands, the lack of an experiment means a more equal footing for the alternatives, as they are all under equal amounts of suspicion.
The same goes for the QBO. No controlled experiment is possible to test the consensus QBO models, despite the claim that the Plumb and McEwan experiment does just that. Sorry, but that experiment is not even close to the topology of a rotating sphere with a radial gravitational force operating on a gas, and it never predicted the QBO period. In contrast, the value of the QBO period predicted by the lunar nodal model (28.4 months) is also too close to the cited value of 28 to 29 months to ignore. This also makes sense in the context of the semi-annual oscillation (SAO) located above the QBO.
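The corresponding aliasing arithmetic for the QBO (again my reconstruction, as a sketch): the draconic month, sampled by the annual cycle, folds down to a beat of roughly 2.37 years.

```python
# Aliasing sketch for the lunar nodal model of the QBO.
# The draconic month of 27.2122 days, sampled by the annual cycle,
# folds modulo 1 cycle/year, leaving a slow beat.
year_days = 365.2422
draconic_month = 27.2122

freq_per_year = year_days / draconic_month   # ~13.42 cycles per year
aliased = freq_per_year % 1.0                # ~0.42 cycles per year
qbo_months = 12.0 / aliased                  # ~28.4 months vs the cited 28-29
```

Again the beat period falls right in the cited range.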

Both the Chandler wobble and the QBO have the symmetry of a global wavenumber = 0 phenomenon, so only nodal cycles are allowed, both lunar and solar.
Next, ENSO. As with LOD modeling, this is not a wavenumber = 0 symmetry, as it must correspond to the longitude of a specific region. No controlled experiment is possible to test the currently accepted models, premised as they are on triggering by wind shifts (an iffy cause vs. effect in any case). The mean value of the ENSO period predicted by the tidal LOD-calibrated model (3.80 years, modulated by 18.6 years) is too close to the cited value of 3.8 years, backed by ~200 years of paleo and direct measurement, to ignore.

doi:10.1007/978-1-4020-4411-3_172
In BLUE below is the LOD-calibrated tidal forcing, with linear amplification.

In BLUE again below is a nonlinear modulation of the tidal forcing according to the Laplace's Tidal Equation (LTE) solution, trained on an early historical interval. This is something that a neural network should be able to do, as it excels at fitting nonlinear mappings that have a simple (i.e., low-complexity) encoding; in this case it may be able to construct a Taylor series expansion of a sinusoidal modulating function.
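In code form, this kind of amplitude-domain modulation can be sketched as below. The forcing components, amplitudes, and LTE wavenumbers here are hypothetical placeholders, not fitted values:

```python
import numpy as np

def lte_modulation(forcing, amps, wavenums, phases):
    """Sinusoidal modulation of the forcing *level*, in the style of an
    LTE solution: out(t) = sum_k a_k * sin(alpha_k * F(t) + phi_k).
    Note that the forcing amplitude, not time, is the sinusoid argument."""
    f = np.asarray(forcing, dtype=float)
    out = np.zeros_like(f)
    for a, alpha, phi in zip(amps, wavenums, phases):
        out += a * np.sin(alpha * f + phi)
    return out

# Hypothetical example: a slow two-component tidal-like forcing modulated
# by one low and one high LTE wavenumber (placeholder values)
t = np.linspace(0.0, 100.0, 2001)
forcing = np.sin(2 * np.pi * t / 18.6) + 0.3 * np.sin(2 * np.pi * t / 4.4)
model = lte_modulation(forcing, amps=[1.0, 0.4], wavenums=[2.0, 15.0], phases=[0.0, 0.5])
```

The high-wavenumber term is what turns a smooth forcing into the erratic-looking waveform characteristic of ENSO indices.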

The neural network's ability to accurately represent a behavior is explained as a simplicity bias, a confounding aspect of machine learning tools such as ChatGPT and neural networks. The YouTube video below explains the counter-intuitive notion of how a NN with a deep set of possibilities tends to find the simplest solution, and does so without over-fitting the final mapping.
Deep neural networks are thus claimed to have a built-in Occam's razor propensity, finding the most parsimonious input-output mappings when applied to training data. This is spot on with what I am doing with the LTE mapping, except that I bypass the NN with a nonlinear sinusoidal modulation optimally fit to training data by a random search function.
I am tempted to try a NN on the ENSO training set as an experiment and see what it finds.
April 2, 2023
“I am tempted to try a NN on the ENSO training set as an experiment and see what it finds.”
Limits of Predictability?
A decade-old research article on modeling equatorial waves includes this introductory passage:

“Nonlinear aspects plays a major role in the understanding of fluid flows. The distinctive fact that in nonlinear problems cause and effect are not proportional opens up the possibility that a small variation in an input quantity causes a considerable change in the response of the system. Often this type of complication causes nonlinear problems to elude exact treatment. “
https://doi.org/10.1029/2012JC007879
From my experience, if it is relatively easy to generate a fit to data via a nonlinear model, then it may also be easy to diverge from that fit with a small structural perturbation, or to come up with an alternative fit with a different set of parameters. This makes it difficult to establish an iron-clad cross-validation.
That doesn't mean we stop trying. Applying the dLOD calibration approach to an applied forcing, we can model ENSO via the NINO34 climate index across the available data range (in YELLOW) in the figure below (parameters here).

The lower-right box is a modulo-2π reduction of the tidal forcing as an input to the sinusoidal LTE modulation, using the decline rate (per month) as the divisor. Why this works so well per month, in contrast to per year (where an annual cycle would make sense), is not clear. It is also fascinating in that this is a form of amplitude aliasing, analogous to the frequency aliasing that applies a modulo-2π folding reduction to the tidal periods below the monthly Nyquist sampling criterion.

There may be a time-amplitude duality or Lagrangian particle-relabeling in operation that has at its core the trivial solutions of the Navier-Stokes or Euler differential equations when all segments of the forcing are flat or have a linear slope; trivial in the sense that when a forcing is flat or has a 1st-order slope, the 2nd derivatives due to divergence in the differential equations vanish (quasi-static). This means that only the discontinuities, which occur concurrently with the annual ENSO predictability barrier, need to be treated carefully (could the modulo-2π folding be a topological Berry phase jump?). Yet if these transitions are enhanced by metastable interface instabilities, as during a thermocline turn-over, then the differential equation conditions could be transiently relaxed via a vanishing density difference. Much happens during a turn-over, but it doesn't last long, perhaps indicating a geometric phase. M. V. Berry also discusses phase changes in the context of amphidromic tidal singularities here.
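A minimal sketch of the folding operation itself, with the divisor entering as a placeholder scale factor (not a fitted per-month decline rate):

```python
import numpy as np

def fold_forcing(forcing, scale):
    """Modulo-2*pi reduction of a scaled forcing amplitude, wrapped into
    [-pi, pi). This is amplitude folding, analogous to the modulo folding
    that frequency aliasing applies to sub-Nyquist tidal periods."""
    x = np.asarray(forcing, dtype=float) * scale
    return np.mod(x + np.pi, 2.0 * np.pi) - np.pi

# Hypothetical example: a wide-ranging forcing folds into a bounded band
folded = fold_forcing(np.linspace(-10.0, 10.0, 101), scale=1.0)
```

However large the scaled forcing excursion, the folded value stays within one 2π band, which is what makes it usable as the argument of the LTE sinusoids.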
Suffice it to say that the topological properties of reduced-dimension volumes and interfaces remain mysterious. The main takeaway is that a working NINO34-fitted ENSO model is produced, and if not here, then somewhere else a machine-learning algorithm will discover it.
The key next step is to apply the same tidal forcing to an AMO model, taking care not to change the tidal factors enough to produce a highly sensitive nonlinear response in the LTE model. So we retain an excluded interval from training (in YELLOW below) and only adjust the LTE parameters for the region surrounding this zone during the fitting process (parameters here).

The cross-validation agreement is breathtakingly good in the excluded (out-of-band) training interval. There is zero cross-correlation between the NINO34 and AMO time-series to begin with, so this is likely revealing the true emergent characteristics of a tidally forced mechanism.

As usual, all the introductory work is covered in Mathematical Geoenergy:
- https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch11 (wind)
- https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12 (wave)
A community peer review contributed to a recent QBO article is here (PDF here). The same question applies to the QBO as to ENSO or the AMO: is it possible to predict future behavior? Is the QBO model less sensitive to input, since its nonlinear aspect is weaker?
Added several weeks later: the monograph "Introduction to Geophysical Fluid Dynamics: Physical and Numerical Aspects" is available as a PDF. Ignoring higher-order time derivatives is key to solving LTE.


Note the citation to Billy Kessler.

Gerstner waves
An exact solution for equatorially trapped waves
Nonlinear aspects plays a major role in the understanding of fluid flows. The distinctive fact that in nonlinear problems cause and effect are not proportional opens up the possibility that a small variation in an input quantity causes a considerable change in the response of the system. Often this type of complication causes nonlinear problems to elude exact treatment. A good illustration of this feature is the fact that there is only one known explicit exact solution of the (nonlinear) governing equations for periodic two-dimensional traveling gravity water waves. This solution was first found in a homogeneous fluid by Gerstner
Adrian Constantin, JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 117, C05029, doi:10.1029/2012JC007879, 2012
These are trochoidal waves.

Even within the context of gravity waves explored in the references mentioned above, a vertical wall is not allowable. This drawback is of special relevance in a geophysical context since [cf. Fedorov and Brown, 2009] the Equator works like a natural boundary and equatorially trapped waves, eastward propagating and symmetric about the Equator, are known to exist. By the 1980s, the scientific community came to realize that these waves are one of the key factors in explaining the El Niño phenomenon (see also the discussion in Cushman-Roisin and Beckers [2011]).
modulo-2π and Berry phase
Darwin
It turns out that the Darwin location of the Southern Oscillation Index (SOI) dipole is brilliantly easy to behaviorally model on its own.

The input forcing is calibrated to the differential length-of-day (LOD) with a correlation coefficient of 0.9997, and only a few terms are required to capture the standing-wave modes corresponding to the ENSO dipole.
So which curve below is the time-series data of atmospheric pressure at Darwin and which is the Laplace’s Tidal Equation (LTE) model calibrated from dLOD measurements?
- (bottom, red) = ?
- (top, blue) = ??

As a bonus, the couple of years outside of the training interval are extrapolated from the model. This shouldn't be hard for climate scientists … or is it still too difficult?
If that isn't enough to discriminate between the two, the power spectra of the LTE mapping to model and to data are shown below. These identify a couple of the lower-frequency modulations as strong peaks, plus a few weaker higher-harmonic peaks that sharpen the model's detail. This shows that the data's behavior possesses a high degree of order not apparent in the time series.

Poll on Twitter =>
Why isn't the Tahiti time-series included, since it would provide additional signal discrimination via a differential measurement, one series being the complement of the other? If Darwin and Tahiti were perfect anti-nodes for all standing-wave modes, the difference would accentuate the signal and remove noise (and any common-mode behavior). However, it appears that only the main ENSO standing-wave mode is balanced between the two locations.
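The differential idea can be demonstrated on synthetic series (hypothetical periods and noise levels, purely illustrative): if Tahiti were a perfect anti-node of Darwin, the difference would double the dipole signal while cancelling any common-mode component.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
dipole = np.sin(2 * np.pi * t / 47.0)        # stand-in for the ENSO dipole mode
common = 0.5 * rng.standard_normal(t.size)   # common-mode component at both sites

darwin = dipole + common
tahiti = -dipole + common                    # a perfect anti-node of Darwin

soi_like = tahiti - darwin                   # common mode cancels, dipole doubles
```

When only the main mode is balanced, the cancellation is partial, which is consistent with the Darwin series alone working well.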

In that case, the Darwin set alone works well.
Limnology 101
I doubt many climate scientists have taken a class in limnology, the study of freshwater lakes; I took one as an elective science course in college. They have likely missed the insight of thinking about the thermocline and how, in dimictic upper-latitude lakes, the entire lake overturns twice a year as an imbalance of densities, caused by differential heating or cooling, triggers a buoyancy instability.
An interesting Nature paper, "Seasonal overturn and stratification changes drive deep-water warming in one of Earth's largest lakes", focuses on Lake Michigan:

Cross-validation
Cross-validation is essentially the ability to predict the characteristics of an unexplored region based on a model of an explored region. The explored region is often used as a training interval to test or validate model applicability on the unexplored interval. If some fraction of the expected characteristics appears in the unexplored region when the model is extrapolated to that interval, some degree of validation is granted to the model.
This is a powerful technique on its own, used frequently (and depended on) in machine-learning models to eliminate poorly performing trials. But it gains even more importance when new data for validation would take years to collect. In particular, consider the arduous process of collecting fresh data on the El Niño Southern Oscillation, which will take decades to generate sufficient statistical significance for validation.
So, what’s necessary in the short term is substantiation of a model’s potential validity. Nothing else will work as a substitute, as controlled experiments are not possible for domains as large as the Earth’s climate. Cross-validation remains the best bet.
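The out-of-band procedure can be sketched in a few lines: fit on a training interval, extrapolate into the excluded interval, and score only the held-out region with a correlation coefficient. The model here is a hypothetical stand-in (data plus noise), just to show the mechanics:

```python
import numpy as np

def holdout_correlation(model, data, holdout):
    """Pearson correlation between model and data restricted to a held-out
    (out-of-band) slice that was excluded from training."""
    m = np.asarray(model, dtype=float)[holdout]
    d = np.asarray(data, dtype=float)[holdout]
    return float(np.corrcoef(m, d)[0, 1])

# Hypothetical example: a model that tracks the data up to additive noise
t = np.arange(500)
data = np.sin(2 * np.pi * t / 37.0)
model = data + 0.1 * np.random.default_rng(1).standard_normal(t.size)

holdout = slice(350, 500)   # interval excluded from the (notional) training
r = holdout_correlation(model, data, holdout)
```

Scoring only the excluded slice is the point: agreement there, rather than on the training interval, is what grants a model some degree of validation.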