This is the citation and abstract for Chapter 4 of the Elsevier volume "A Journey Through Tides":

Sophie Ward, David Bowers, Mattias Green, Sophie-Berenice Wilmes, "Chapter 4 – Why is there a tide?", in: Mattias Green, João C. Duarte (Eds.), A Journey Through Tides, Elsevier, 2023, pp. 81-113, ISBN 9780323908511, https://doi.org/10.1016/B978-0-323-90851-1.00001-7 (https://www.sciencedirect.com/science/article/pii/B9780323908511000017)

Abstract: Tides are created by the gravitational pull of the Moon and Sun on the ocean. More exactly, it is the variation in these forces that creates tides. The Earth and Moon are held in orbit by their mutual gravitational attraction. The Moon's gravity is exactly right at the center of the Earth, but it is a little too strong in the Earth hemisphere facing the Moon and a little too weak in the opposite hemisphere. These discrepancies make the tide generating force. As the Earth spins, the ocean experiences an oscillating force which creates long tide waves: the crest of the wave is the high tide and the trough the low tide. In the deep ocean, the amplitude of the tide wave is small, but on the continental shelf, the wave is amplified by resonance, making the large tidal range we see at some coasts.

Keywords: Tides; Tide generating force; Cotidal charts; Tidal dynamics; Tidal dissipation

The domain experts selected to answer this question assert this:

“While this is not an exhaustive list of why the tide is important, it is important to note here that perhaps the most physically far-reaching influence of the tide, long-term, is on the change in day length.”

The day-length impact is straightforward to understand for a rotating solid body, since total angular momentum is conserved between the Earth, the smaller Moon, and the much larger Sun. This is essentially a linear perturbation that slightly changes the Earth's rotational period, causing the length of day (LOD) to cycle. But what is the equivalent for the Earth's oceans, which are not pinned to the solid surface beneath them?
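To make the solid-body bookkeeping concrete, here is a minimal Python sketch of the angular-momentum argument. The moment of inertia, spin rate, and the ~2 ms/century tidal LOD trend are round textbook values assumed for illustration only, not numbers from the chapter:

```python
# Illustrative back-of-envelope only: round textbook values assumed.
I_EARTH = 8.0e37      # Earth's moment of inertia, kg m^2 (approx.)
OMEGA = 7.292e-5      # Earth's spin rate, rad/s
DAY = 86400.0         # seconds per day

def spin_momentum(I=I_EARTH, omega=OMEGA):
    """Spin angular momentum L = I * omega."""
    return I * omega

def lod_increase(dL, I=I_EARTH, omega=OMEGA, day=DAY):
    """Day-length change (s) for a spin angular-momentum loss dL.

    From L = I*omega and T = 2*pi/omega: dT/T = -domega/omega = dL/L,
    so losing spin momentum lengthens the day proportionally.
    """
    return day * dL / spin_momentum(I, omega)

# Invert the oft-quoted ~2 ms/century tidal LOD trend to get the
# implied spin angular momentum handed to the lunar orbit per century.
dT = 2.0e-3                              # s per century (assumed)
dL = dT / DAY * spin_momentum()          # kg m^2/s per century
print(f"L_spin ~ {spin_momentum():.2e} kg m^2/s, dL/century ~ {dL:.2e}")
```

The fractional day-length change equals the fractional angular-momentum loss, which is why the solid-body case reduces to simple bookkeeping; the ocean, free to slosh relative to the crust, does not.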

“Internal waves are another form of gravity wave which occur within the water body on internal interfaces, for example, when the interface between water masses of different densities is disturbed.”

The term "digital twin" is a relatively new coinage, but the essential idea has been around for decades. In the past, a digital twin would have been called a virtual simulation of a specific system, encoded via a programming language. For a system that had already been built, the virtual simulation emulated all the behaviors and characteristics of that system, only operated on a computer, with any necessary interactive controls and displays provided on a console, either real or virtual. A widely known example of a virtual simulation is the flight simulator, which in historical terms was the industrial forerunner of today's virtual reality. A virtual simulation could also be used during the design of a system, with the finished digital twin providing a blueprint for the actual synthesis of the end product. This approach, too, has been practiced for decades, both in the electronics industry via logic synthesis of integrated circuits from a hardware description language, and with physical products via 3D printing from CAD models.

Sorry to have to point this out, but it's not my fault that geophysicists and climatologists can't perform controlled experiments to test their hypotheses. It's not their fault either. It's nature's decision to make gravitational forces so weak and planetary objects so massive that no one can scale the effect down to laboratory size for a carefully controlled experiment. One can always create roughly equivalent emulations, such as the magnetic-field experiment described in the previous blog post, and use one to validate a hypothesized behavior as a controlled lab experiment. Yet I suspect that this would not get sufficient buy-in, as it's not considered the actual real thing.

And that's the dilemma. By the same token that analog emulators will not be trusted by geophysicists and climatologists, scientists from other disciplines will remain skeptical of untestable claims made by earth scientists. If nothing definitive comes out of a thought experiment that others can't reproduce in a lab, they remain suspicious, as per their education and training.

It should therefore work both ways. As featured in the previous blog post, the model of the Chandler wobble forced by lunar torque needs to be treated fairly: either clearly debunked or considered as an alternative to the hazy consensus. ChatGPT remains open about the model, not the least bit swayed by colleagues or tribal bias. Since the Chandler wobble period predicted by the lunar nodal model (432.7 days) is so close to the cited value of 433 days, the bottom line is that it should be difficult to ignore.

There are other indicators in the observational data to further substantiate this, see Chandler Wobble Forcing. It also makes sense in the context of the annual wobble.

As it stands, the lack of an experiment means a more equal footing for the alternatives, as they are all under equal amounts of suspicion.

The same goes for the QBO. No controlled experiment is possible to test the consensus QBO models, despite the claim that the Plumb and McEwan experiment does just that. Sorry, but that experiment is not even close to the topology of a rotating sphere with a radial gravitational force operating on a gas, and it never predicted the QBO period. In contrast, the QBO period predicted by the lunar nodal model (28.4 months) is also too close to the cited value of 28 to 29 months to ignore. This also makes sense in the context of the semi-annual oscillation (SAO) located above the QBO.
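The 28.4-month figure follows from simple aliasing arithmetic. A hedged sketch, assuming the standard draconic-month value and an annual impulse as the sampler:

```python
# Aliasing sketch: a lunar cycle impulsed once per year folds down to
# a multi-year repeat period.  Standard astronomical values assumed.
YEAR = 365.2422          # tropical year, days
DRACONIC = 27.2122       # draconic (nodal) month, days

def annual_alias_years(period_days, year=YEAR):
    """Repeat period, in years, of a short cycle sampled annually.

    The cycle completes year/period_days revolutions per year; annual
    sampling only sees the fractional excess over the nearest whole
    number, which accumulates into a slow repeat cycle.
    """
    cycles_per_year = year / period_days
    frac = abs(cycles_per_year - round(cycles_per_year))
    return 1.0 / frac

print(round(annual_alias_years(DRACONIC) * 12, 1))   # ~28.4 months
```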

Both the Chandler wobble and the QBO have the symmetry of a global wavenumber-0 phenomenon, so only nodal cycles, both lunar and solar, are allowed.

Next, ENSO. As with LOD modeling, this is not a wavenumber-0 symmetry, as it must correspond to the longitude of a specific region. No controlled experiment is possible to test the currently accepted models, premised as they are on triggering by wind shifts (an iffy cause versus effect in any case). The mean ENSO period predicted by the tidal LOD-calibrated model (3.80 years, modulated by 18.6 years) is too close to the cited value of 3.8 years, backed by roughly 200 years of paleo and direct measurements, to ignore.

In BLUE below is the LOD-calibrated tidal forcing, with linear amplification.

In BLUE again below is a non-linear modulation of the tidal forcing according to the Laplace's Tidal Equation (LTE) solution, trained on an early historical interval. This is something that a neural network should be able to do, as it excels at fitting non-linear mappings that have a simple (i.e., low-complexity) encoding; in this case it may be able to construct a Taylor-series expansion of a sinusoidal modulating function.

The neural network's ability to accurately represent a behavior is explained as a simplicity bias, a confounding aspect of machine-learning tools such as ChatGPT and neural networks. The YouTube video below explains the counter-intuitive notion of how a NN with a deep set of possibilities tends to find the simplest solution, and does so without over-fitting the final mapping.

Thus, deep neural networks are claimed to have a built-in Occam's-razor propensity, finding the most parsimonious input-output mappings when applied to training data. This is spot-on with what I am doing with the LTE mapping, except that I bypass the NN with a nonlinear sinusoidal modulation optimally fit to training data by a random-search function.
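As a toy illustration of that fitting step, a sinusoidal modulation sin(k·f + φ) of a forcing can be recovered by blind random search. Everything here is synthetic (the forcing, the "true" parameters, and the deliberately naive search), so it is a sketch of the idea rather than the actual fitting code:

```python
import math
import random

# Toy stand-in for the LTE fitting step: recover a sinusoidal
# modulation sin(k*f + phi) of a forcing by blind random search.
# Forcing, target, and "true" parameters are all synthetic.
random.seed(1)
t = [0.1 * i for i in range(200)]
forcing = [math.sin(0.5 * x) + 0.3 * math.sin(1.7 * x) for x in t]
K_TRUE, PHI_TRUE = 4.0, 0.8
target = [math.sin(K_TRUE * f + PHI_TRUE) for f in forcing]

def rmse(k, phi):
    """Root-mean-square error of a candidate modulation."""
    return math.sqrt(sum((math.sin(k * f + phi) - y) ** 2
                         for f, y in zip(forcing, target)) / len(t))

best_err, best_k, best_phi = float("inf"), 0.0, 0.0
for _ in range(4000):                        # blind random search
    k = random.uniform(0.0, 6.0)
    phi = random.uniform(-math.pi, math.pi)
    err = rmse(k, phi)
    if err < best_err:
        best_err, best_k, best_phi = err, k, phi

print(f"best rmse={best_err:.3f}  k={best_k:.2f}  phi={best_phi:.2f}")
```

The point of the toy is that a low-complexity sinusoidal mapping has few enough parameters that even an unguided search finds it, which is the "parsimonious mapping" property attributed to deep networks above.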

I am tempted to try a NN on the ENSO training set as an experiment and see what it finds.

April 2, 2023

“I am tempted to try a NN on the ENSO training set as an experiment and see what it finds.”

“Nonlinear aspects plays a major role in the understanding of fluid flows. The distinctive fact that in nonlinear problems cause and effect are not proportional opens up the possibility that a small variation in an input quantity causes a considerable change in the response of the system. Often this type of complication causes nonlinear problems to elude exact treatment.”

This doesn't mean we don't keep trying. Applying the dLOD calibration approach to an applied forcing, we can model ENSO via the NINO34 climate index across the available data range (in YELLOW) in the figure below (parameters here).

The lower-right box is a modulo-2π reduction of the tidal forcing used as input to the sinusoidal LTE modulation, with the decline rate (per month) as the divisor. Why this works so well per month, in contrast to per year (where an annual cycle would make sense), is not clear. It is also fascinating in that this is a form of amplitude aliasing, analogous to the frequency aliasing that applies the same modulo-2π folding reduction to tidal periods shorter than the monthly Nyquist sampling criterion.

There may be a time-amplitude duality, or a Lagrangian particle-relabeling, in operation that has at its core the trivial solutions of the Navier-Stokes or Euler differential equations when all segments of the forcing are flat or have a linear slope. Trivial in the sense that when a forcing is flat or has a first-order slope, the second derivatives due to divergence in the differential equations vanish (quasi-static). This means that only the discontinuities, which occur concurrently with the annual ENSO predictability barrier, need to be treated carefully (the modulo-2π folding could be a topological Berry phase jump?).

Yet, if these transitions are enhanced by metastable interface instabilities, as during a thermocline turn-over, then the differential-equation conditions could be transiently relaxed via a vanishing density difference. Much happens during a turn-over, but it doesn't last long, perhaps indicating a geometric phase. M.V. Berry also discusses phase changes in the context of amphidromic tidal singularities here.
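For concreteness, here is a minimal sketch of the folding step as described above. The scale divisor and the modulation parameters k and phi are placeholders, not fitted values from the model:

```python
import math

# Sketch of the modulo-2*pi amplitude "folding": wrap the scaled tidal
# forcing into [0, 2*pi) before applying the sinusoidal modulation.
# The scale divisor, k, and phi here are placeholders, not fitted values.
TWO_PI = 2.0 * math.pi

def fold(value, scale):
    """Wrap value/scale into the interval [0, 2*pi)."""
    return (value / scale) % TWO_PI

def lte_output(forcing, scale, k=1.0, phi=0.0):
    """Sinusoidal modulation applied to the folded forcing."""
    return [math.sin(k * fold(f, scale) + phi) for f in forcing]

# A steadily growing forcing becomes a repeating sawtooth after
# folding, so the modulated output is periodic even though the raw
# input never repeats -- the amplitude analogue of frequency aliasing.
ramp = [0.05 * i for i in range(200)]
out = lte_output(ramp, scale=1.0)
print(min(out), max(out))
```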

Suffice it to say that the topological properties of reduced-dimension volumes and interfaces remain mysterious. The main takeaway is that a working NINO34-fitted ENSO model is produced, and if not here, then somewhere else a machine-learning algorithm will discover it.

The key next step is to apply the same tidal forcing to an AMO model, taking care not to change the tidal factors enough to produce a highly sensitive nonlinear response in the LTE model. So we retain an interval excluded from training (in YELLOW below) and adjust only the LTE parameters for the region surrounding this zone during the fitting process (parameters here).

The cross-validation agreement is breathtakingly good in the excluded (out-of-band) training interval. There is zero cross-correlation between the NINO34 and AMO time series to begin with, so this likely reveals the true emergent characteristics of a tidally forced mechanism.

As usual, all the introductory work is covered in Mathematical GeoEnergy.

A community peer review contributed to a recent QBO article is here, and the PDF is here. The same question applies to QBO as to ENSO or AMO: is it possible to predict future behavior? Is the QBO model less sensitive to input because the nonlinear aspect is weaker?

It turns out that the Darwin location of the Southern Oscillation Index (SOI) dipole is brilliantly easy to behaviorally model on its own.

The input forcing is calibrated to the differential length-of-day (LOD) with a correlation coefficient of 0.9997, and only a few terms are required to capture the standing-wave modes corresponding to the ENSO dipole.
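The 0.9997 quoted above is a plain Pearson correlation coefficient. A self-contained sketch of that metric, using synthetic stand-in series rather than the actual forcing or dLOD data:

```python
import math

# Pearson correlation coefficient, the calibration metric quoted above.
# The series below are synthetic stand-ins, not real dLOD data.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# A "forcing" and a nearly proportional "dLOD" with a little noise
# correlate at better than 0.999, illustrating what a 0.9997 fit means.
t = [0.05 * i for i in range(400)]
forcing = [math.sin(x) + 0.5 * math.sin(2.3 * x) for x in t]
dlod = [1.02 * f + 0.01 * math.sin(9.7 * x) for f, x in zip(forcing, t)]
print(round(pearson(forcing, dlod), 4))
```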

As a bonus, the couple of years outside of the training interval are extrapolated from the model. This shouldn't be hard for climate scientists … or is it still too difficult?

If that isn't enough to discriminate between the two, the power spectra of the LTE mapping for the model and for the data are shown below. This identifies a couple of the lower-frequency modulations as strong peaks, along with a few weaker higher-harmonic peaks that sharpen the model's detail. It shows that the data's behavior possesses a high degree of order not apparent in the time series.

Poll on Twitter =>

Why isn't the Tahiti time series included, since it should be the complement of Darwin and would therefore provide additional signal discrimination via a differential measurement? If Darwin and Tahiti were perfect anti-nodes for all standing-wave modes, differencing would accentuate the signal and remove noise (and any common-mode behavior). However, it appears that only the main ENSO standing-wave mode is balanced between the two.

In that case, the Darwin set alone works well. Mastodon

Cross-validation is essentially the ability to predict the characteristics of an unexplored region based on a model of an explored region. The explored region is often used as a training interval to test or validate model applicability on the unexplored interval. If some fraction of the expected characteristics appears in the unexplored region when the model is extrapolated to that interval, some degree of validation is granted to the model.

This is a powerful technique on its own, used frequently (and depended on) in machine-learning models to eliminate poorly performing trials. But it gains even more importance when new data for validation would take years to collect. In particular, consider the arduous process of collecting fresh data for the El Niño Southern Oscillation, which will take decades to reach sufficient statistical significance for validation.

So, what’s necessary in the short term is substantiation of a model’s potential validity. Nothing else will work as a substitute, as controlled experiments are not possible for domains as large as the Earth’s climate. Cross-validation remains the best bet.
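The train/extrapolate/score pattern can be sketched in a few lines. The data and the one-parameter "model" below are synthetic stand-ins chosen only to show the mechanics, not any of the geophysical models discussed here:

```python
import math
import random

# Synthetic illustration of cross-validation: fit a one-parameter
# "model" on an explored interval, then extrapolate it into the
# excluded interval and score the agreement there.
random.seed(0)
t = [0.1 * i for i in range(600)]
data = [math.sin(0.9 * x) + random.gauss(0.0, 0.1) for x in t]

train = slice(0, 400)     # explored (training) interval
test = slice(400, 600)    # unexplored (validation) interval

def sse(freq, sl):
    """Sum of squared residuals of sin(freq*t) over an interval."""
    return sum((math.sin(freq * x) - d) ** 2
               for x, d in zip(t[sl], data[sl]))

# Fit: scan candidate frequencies using the training interval only.
best_f = min((0.01 * f for f in range(50, 150)), key=lambda f: sse(f, train))

# Validate: extrapolate into the excluded interval and score there.
test_mse = sse(best_f, test) / (test.stop - test.start)
print(f"fitted freq={best_f:.2f}, out-of-sample MSE={test_mse:.3f}")
```

If the out-of-sample error stays near the noise floor, the extrapolation grants the model some degree of validation; if it blows up, the training fit was an accident of the explored interval.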

In an earlier post, the observation was that ENSO models may not be unique, owing to the numerous possibilities provided by nonlinear math. This was supported by the fact that a tidal-forcing model based on the Mf (13.66-day) tidal factor worked as well as one based on the Mm (27.55-day) factor. That was not surprising, considering that aliasing against an annual impulse gives similar repeat cycles: 3.8 years versus 3.9 years. But I have also observed that mixing the two in a linear fashion did not improve the fit much at all, as the difference creates a long interference cycle that isn't observed in the ENSO time-series data. Thinking in terms of the nonlinear modulation required, however, it may be that the two factors can be combined after the LTE solution is applied.
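The 3.8- versus 3.9-year figures, and the long interference cycle a linear mix would create, both fall out of the annual-aliasing arithmetic. Standard tidal-period values are assumed:

```python
# Annual-aliasing arithmetic for the two lunar factors (standard
# values in days), plus the interference cycle a linear mix creates.
YEAR = 365.2422   # tropical year, days

def annual_alias_years(period_days, year=YEAR):
    """Repeat period (years) of a tidal cycle sampled by an annual impulse."""
    cycles = year / period_days
    return 1.0 / abs(cycles - round(cycles))

mf = annual_alias_years(13.6608)        # fortnightly Mf -> ~3.8 years
mm = annual_alias_years(27.5546)        # monthly Mm     -> ~3.9 years
beat = 1.0 / abs(1.0 / mf - 1.0 / mm)   # linear-mix interference, ~120 yr
print(round(mf, 2), round(mm, 2), round(beat, 1))
```

The century-scale beat between the two aliased cycles is the "long interference cycle" that the ENSO record does not show, which argues against a simple linear combination.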

The forcing spectrum looks like this, with the aliased draconic (27.212 d) factor circled:

For QBO, we remove all the lunar factors except for the draconic, as this is the only declination factor with the same spherical group symmetry as the semi-annual solar declination.

And after modifying the annual (ENSO spring-barrier) impulse into a semi-annual impulse with equal and opposite excursions, the resultant model matches well (to first order) the QBO time series.

Although the alignment isn't perfect, there are indications in the structure that the fit has a deeper significance. For example, note how many of the shoulders in the structure align, as highlighted below in yellow.

The peaks and valleys do wander about a bit, which might be a result of the sensitivity to the semi-annual impulse and the fact that this is only monthly resolution. The chart below is a detailed fit of the QBO using data with a much finer, daily resolution. As you can see, slight changes in the seasonal timing of the semi-annual pulse are needed to individually align the 70 and 30 hPa QBO time-series data.

The underlying forcing of the ENSO model shows both an 18-year Saros cycle (an eclipse-alignment cycle of all the tidal periods) and a 6-year anomalistic/draconic interference cycle. This modulation of the main anomalistic cycle appears in both the underlying daily and monthly profiles, shown below before applying an annual impulse. The 6-year cycle is clearly evident, as it aligns with the x-axis gridlines at 1880, 1886, 1892, 1898, and so on.
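The 6-year figure is consistent with the beat period between the anomalistic and draconic months. Standard values are assumed here, purely to check the arithmetic:

```python
# Beat (interference) period between the anomalistic and draconic
# months, using standard values in days.
YEAR = 365.2422
ANOMALISTIC = 27.5546   # perigee-to-perigee month, days
DRACONIC = 27.2122      # node-to-node month, days

def beat_period_years(p1, p2, year=YEAR):
    """Interference period of two nearby cycles, converted to years."""
    return (1.0 / abs(1.0 / p1 - 1.0 / p2)) / year

print(round(beat_period_years(ANOMALISTIC, DRACONIC), 2))  # close to 6 years
```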

The 6-year cycle in the LOD is not aligned as strictly as in the tidal model and tends to wander, but it seems a more plausible and parsimonious explanation of the modulation than, for example, the one in this paper (where the 6-year LOD cycle is "similarly detected in the variations of C22 and S22, the degree-2 order-2 Stokes coefficients of the Earth's gravitational field").

Cross-validation confidence improves as the number of mutually agreeing alignments increases. Given that controlled experiments are impossible to perform, this category of analysis is the best way to validate geophysical models.

In our book Mathematical GeoEnergy, several geophysical processes are modeled, from conventional tides to ENSO. Each model fits the data by applying a concise, physics-derived algorithm; the key is the algorithm's conciseness, not necessarily its subjective intuitiveness.

I've followed Gell-Mann's work on complexity over the years, and so will try applying his qualitative effective-complexity approach to characterize the simplicity of the geophysics models described in the book and on this blog.

Here's a breakdown, from least complex to most complex: