This NASA press release has received mainstream news attention.

The 18.6-year lunar nodal cycle will generate higher tides that will amplify the sea-level rise due to climate change.

Yahoo news item:

https://news.yahoo.com/lunar-orbit-apos-wobble-apos-173042717.html

So this is more or less a known behavior, but hopefully it raises awareness of the other work relating lunar forcing to ENSO, QBO, and the Chandler wobble.

**Cited paper**

Thompson, P.R., Widlansky, M.J., Hamlington, B.D. *et al.* Rapid increases and extreme months in projections of United States high-tide flooding. *Nat. Clim. Chang.* **11**, 584–590 (2021). https://doi.org/10.1038/s41558-021-01077-8

The following is recent research on mobility dispersion, in contrast to something I blogged about years ago.

https://phys.org/news/2021-05-mobility-reveals-universal-law-cities.html

This is an algorithm based on minimum entropy (i.e. negative entropy) considerations, essentially an offshoot of the paper *Entropic Complexity Measured in Context Switching*.

The objective is to apply negative entropy to find an optimal solution to a deterministically ordered pattern. To start, let us contrast the behavior of autonomous vs. non-autonomous differential equations. One way to think about the distinction is that the transfer function for a non-autonomous system depends only on the present input. Thus, it acts like an op-amp with infinite bandwidth. Below saturation it gives perfectly linear amplification, so that, as shown on the graph to the right, the x-axis input produces an amplified y-axis output as long as the input is within reasonable limits.

In contrast, for an autonomous formulation, the amplification depends on prior values, so it requires a time-domain convolution or a frequency-domain transfer function. The spectral response chart to the right is the classic representation of the frequency response of a linear 2nd-order differential equation. This is the autonomous class of differential equation, where the evolution of the response is invariant to the starting time. For a non-autonomous behavior, by contrast, the time-varying aspects essentially control the output and act, in a sense, to *reset* the system continuously.

There are many non-autonomous formulations that aren’t linear, for example a companding transfer that takes the square root of the input (used for compressing the dynamic range of a signal). This gradually saturates with increasing absolute value of the input, as shown below.
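As a toy numerical sketch of such a compander (my own illustration, not from any cited code):

```python
import numpy as np

def compand(x):
    """Square-root companding: a memoryless (non-autonomous) transfer that
    compresses dynamic range while preserving the sign of the input."""
    return np.sign(x) * np.sqrt(np.abs(x))

x = np.linspace(-4.0, 4.0, 801)
y = compand(x)
# The incremental gain dy/dx shrinks as |x| grows -- a gradual saturation.
```

Plotting `y` against `x` reproduces the gradually saturating curve described above.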

What does this have to do with entropy? Consider that a non-autonomous transfer function can become even more elaborate in terms of a mapping pattern and thus possess a definable amount of underlying order. Yet that order or pattern may be difficult to discern without adequate information, which is where the concepts of entropy metrics such as Shannon entropy come in.

As an example, what if the non-autonomous transfer function itself is peculiar, such as a potentially complex sinusoidal modulation of unknown frequency and phase? This occurs in Mach-Zehnder modulation or in our Laplace’s Tidal Equation formulation. The effect is to distort the input enough to essentially fold the amplitude at certain points, as shown in the chart to the right. Note that the input is not the time value, but some other level or amplitude associated with the system. The output may be a positive amplification for a certain level but will then reverse (i.e. *fold* or *break*) and become negative as the level is increased. This can cycle repeatedly for increasing input amplitude levels. If the modulation is strong enough, the output will be unrecognizable from the input.
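A minimal sketch of this folding behavior, assuming the modulation is a plain sinusoid of the input level (the gain `k` and `phase` are hypothetical parameters):

```python
import numpy as np

def folded_transfer(level, k=3.0, phase=0.0):
    """Sinusoidal (Mach-Zehnder-like) modulation of an amplitude level:
    nearly linear for small levels, but the output folds (the slope
    reverses sign) each half-cycle of the modulation as the level grows."""
    return np.sin(k * level + phase)

levels = np.linspace(0.0, 2.0 * np.pi, 1000)
out = folded_transfer(levels)
```

For small levels the transfer amplifies almost linearly; past the first fold near `level = pi / (2 k)` the slope reverses, and the pattern repeats with increasing level.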

The difficulty is that if we have little knowledge of the input forcing or the modulation, we will not be able to decode anything. But with a measure such as negative Shannon entropy, we can see how far we can get with limited information.

So consider this output waveform, which we are told is due to Mach-Zehnder modulation of an unknown input:

All we know is that there may be a basis forcing consisting of a couple of sinusoids, and that there is an obvious (but unknown) non-autonomous complex modulation generating the above waveform.

The idea is that we test out various combinations of sinusoidal parameters and then maximize the negative Shannon entropy of the *power spectrum* of the transfer from input to output (see the link I first mentioned at the top of this post). We can do this by calculating a discrete Fourier transform or an FFT of the input-to-output mapping (remember that the input is not time but an *input* level) and multiplying by the complex conjugate to get the power spectrum. For a perfectly linear amplification, as in the first example, the spectrum is essentially a delta function at a frequency of zero, indicating maximum order and a maximum in negative Shannon entropy. For a single sinusoidal frequency modulation, the power spectrum would be a delta function *shifted* to the frequency of the modulation; again this is a maximally ordered amplification, with a maximum in negative Shannon entropy. Yet, in practical terms, perhaps something such as a Rényi or Tsallis entropy measure would work even better than Shannon entropy. The Tsallis entropy, in fact, is close to describing a mean-square variance of a signal, in that it exaggerates clusters or strong excursions relative to a constant background.

So this is what I have found works quite well: I essentially maximize the normalized mean-squared variance of the power spectrum.
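A sketch of that metric using NumPy's FFT; the function and the test signals are my own illustration of the idea, not the actual analysis code:

```python
import numpy as np

def spectral_order_metric(mapping):
    """Negative-entropy proxy: the normalized mean-square variance of the
    power spectrum of an input-to-output mapping (the input is a level,
    not time).  An ordered mapping concentrates spectral power in a few
    delta-like spikes and scores high; a disordered mapping spreads power
    flatly and scores low."""
    spectrum = np.fft.rfft(mapping)
    power = (spectrum * np.conj(spectrum)).real
    p = power / power.sum()                 # normalize to unit total power
    return np.mean(p ** 2) / np.mean(p) ** 2

levels = np.arange(512) / 512.0
ordered = np.sin(2 * np.pi * 20 * levels)       # single-frequency modulation
rng = np.random.default_rng(0)
disordered = rng.standard_normal(levels.size)   # flat, featureless spectrum
```

Searching over candidate forcing parameters then amounts to maximizing this single scalar.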

The result of a search algorithm over input sinusoidal factors to maximize the power-spectrum variance of the unknown time series shown in **Figure 1** is this power spectrum

which arises from this optimal input forcing

Note that this is not the transfer modulation, which we still need to extract from the power spectrum.

As a result, this negative entropy algorithm is able to deconstruct or decode a Mach-Zehnder modulation of two sinusoidal factors that encodes an input forcing of another pair of sinusoidal factors. So essentially we are able to find 4 unknown factors (or 8 if both amplitude and phase are included) by searching on only 2 factors (or 4 if amplitude and phase are included). How is that possible? It’s actually not a free lunch: the power spectrum calculation is essentially testing all possible modulations in parallel, and the negative entropy calculation keeps track of the frequency components that maximize the delta functions in the spectrum, i.e. the mean-square variance weights large excursions more heavily than a flat, highly random background would.
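To make the "search on 2 factors" idea concrete, here is a hedged sketch with made-up frequencies: a two-sinusoid forcing passed through an unknown sinusoidal modulation, recovered by grid-searching only the forcing frequencies and scoring the level-to-output mapping:

```python
import numpy as np

def order_metric(power):
    """Normalized mean-square variance of a power spectrum (high = ordered)."""
    p = power / power.sum()
    return np.mean(p ** 2) / np.mean(p) ** 2

def score(candidate, observed):
    """Sort the observed output by the candidate forcing level and score the
    power spectrum of the resulting level-to-output mapping.  Only a correct
    candidate forcing produces a clean, ordered mapping."""
    mapped = observed[np.argsort(candidate)]
    spectrum = np.fft.rfft(mapped - mapped.mean())
    return order_metric((spectrum * np.conj(spectrum)).real)

t = np.linspace(0.0, 100.0, 4000)
w1, w2 = 0.7, 1.9                      # hypothetical forcing frequencies
forcing = np.sin(w1 * t) + np.sin(w2 * t)
observed = np.sin(4.0 * forcing)       # unknown sinusoidal (folding) modulation

# Grid-search only the two forcing frequencies; the modulation itself is
# never searched -- the power spectrum tests all modulations in parallel.
best = max(((score(np.sin(a * t) + np.sin(b * t), observed), a, b)
            for a in np.arange(0.5, 1.05, 0.1)
            for b in np.arange(1.5, 2.25, 0.1)),
           key=lambda s: s[0])
```

With these synthetic numbers, the grid point matching the true forcing frequencies scores far above every scrambled alternative, without ever fitting the modulation directly.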

From Fig. 2 in our paper, the schematic to the right gives the general idea. For negative entropy we are looking for the upper spectrum, not the lower, which is a maximum entropy picture of a *disordered* system.

This works well for certain applications. It may even work better in a search algorithm than a pure RMS minimization that fits the 4 sinusoidal factors directly against the output (the naive brute-force approach), as it may not fall into local minima as easily. I believe working with the power spectrum immediately broadens the input search parameter space.

Yet, there is more. One can also monitor the output spectrum for possible harmonics of a spectral peak. Harmonics are indicators of even further order: if the value of a harmonically related spectral peak (an integer multiple of the fundamental) is added to the primary (fundamental) peak, the combination gets a negative entropy boost against the background when squared.
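A sketch of how that harmonic boost might be folded into the metric; the folding rule and the test spectra below are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def harmonic_boosted_metric(power, n_harmonics=3, thresh=0.0):
    """Fold power at integer multiples of each significant bin back into
    that bin before scoring, so harmonically aligned peaks reinforce
    their fundamental and earn an extra negative-entropy boost when the
    normalized spectrum is squared."""
    power = np.asarray(power, dtype=float)
    boosted = power.copy()
    for k in range(1, len(power)):
        if power[k] > thresh:
            for m in range(2, n_harmonics + 1):
                if m * k < len(power):
                    boosted[k] += power[m * k]
    p = boosted / boosted.sum()
    return np.mean(p ** 2) / np.mean(p) ** 2

# Equal-strength peaks that line up harmonically ...
harmonic = np.zeros(128); harmonic[[10, 20, 30]] = 1.0
# ... versus equal-strength peaks at unrelated bins
unrelated = np.zeros(128); unrelated[[10, 23, 37]] = 1.0
```

Without the boost both spectra score identically; with it, the harmonically aligned spectrum scores higher, rewarding the extra order.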

I use this approach in optimizing LTE formulations to model ENSO and it does work to zero in on a best fit, especially if the model is close to begin with.

So for modeling ENSO, the challenge is to fit the quasi-periodic NINO34 time series with a minimal number of *tunable* parameters. For a 140-year fitting interval (1880-2020), a naive Fourier series fit could easily take 50-100 sine waves of varying frequencies, amplitudes, and phases to match a low-pass-filtered version of the data (any high-frequency components may take many more). However, that is a horribly complex model and obviously prone to over-fitting. Clearly we need to apply some physics to reduce the number of degrees of freedom (DOF).

Since ENSO is essentially equatorial fluid dynamics responding to a tidal forcing, all that is needed is the gravitational potential along the equator. The paper by Na [1] provides software for computing the orbital dynamics of the moon (i.e. lunar ephemerides) and a 1st-order approximation for the tidal potential:

The software contains well over 100 sinusoidal terms (each consisting of amplitude, frequency, and phase) to internally model the lunar orbit precisely. Thus, that many DOF are removed, with a correspondingly huge reduction in complexity score for any reasonable fit. So instead of a huge set of factors to manipulate (as with many detailed harmonic tidal analyses), what one is given is a range (r = **R**) and a declination (ψ = **delta**) time series. These are combined in a manner following the figure from Na shown above, essentially adjusting the amplitudes of **R** and **delta** while introducing an additional *tangential* or *tractional* projection of delta (*sin* instead of *cos*). The latter is important, as described on NOAA’s tide-producing forces page.
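For illustration, a minimal sketch of how **R** and **delta** might be combined, assuming the standard 1st-order tide-generating terms; the amplitudes `a` and `b` stand in for the adjustable parameters, and the input series here are synthetic (this is not the Na code):

```python
import numpy as np

def equatorial_forcing(r, psi, a=1.0, b=0.5):
    """Combine lunar range r (normalized to its mean) and declination psi
    (radians) into a scalar equatorial forcing.  The radial term is the
    standard 1st-order tide-generating potential; the sin*cos term is the
    tangential (tractional) projection.  Amplitudes a and b are tunable."""
    radial = (3.0 * np.cos(psi) ** 2 - 1.0) / r ** 3
    tractional = np.sin(psi) * np.cos(psi) / r ** 3
    return a * radial + b * tractional

# Illustrative stand-in series (real values come from the ephemerides)
t = np.linspace(0.0, 365.0, 1000)                # days
r = 1.0 + 0.055 * np.cos(2 * np.pi * t / 27.55)  # anomalistic-month range
psi = 0.41 * np.sin(2 * np.pi * t / 27.32)       # tropical-month declination
f = equatorial_forcing(r, psi)
```

Only `a` and `b` (plus any further LTE parameters) remain to be fitted; the ephemerides fix everything else.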

Although I roughly calibrated this earlier [2] via NASA’s HORIZONS ephemerides page (input parameters shown on the right), the Na software allows greater flexibility in use. The two calculations give essentially identical outputs, providing independent verification that the numbers are as expected.

As this post is already getting too long, here is the result of a Laplace’s Tidal Equation fit (adding a few more DOF), demonstrating that the limited number of DOF prevents over-fitting on a short training interval while cross-validating outside of it.

or this

This low-complexity and high-accuracy solution would win ANY competition, including the competition for best seasonal prediction with its measly prize of 15,000 Swiss francs [3]. A good ENSO model is worth billions of dollars, given the amount it would save in agricultural planning and its potential for mitigating human suffering by predicting the timing of climate extremes.

**REFERENCES**

[1] Na, S.-H. Chapter 19 – Prediction of Earth tide. in *Basics of Computational Geophysics* (eds. Samui, P., Dixon, B. & Tien Bui, D.) 351–372 (Elsevier, 2021). doi:10.1016/B978-0-12-820513-6.00022-9.

[2] Pukite, P.R. et al., “Ephemeris calibration of Laplace’s tidal equation model for ENSO,” AGU Fall Meeting, 2018. doi:10.1002/essoar.10500568.1

[3] 1 CHF ~ $1 so 15K = chump change.


Back to EGU abstract and presentation

**Addendum:** After this presentation was submitted, a ground-breaking paper by a group at the University of Paris came online. Their paper, **“On the Shoulders of Laplace,”** covers much the same ground as the EGU presentation linked above.

- F. Lopes, J.L. Le Mouël, V. Courtillot, D. Gibert, “On the shoulders of Laplace,” *Physics of the Earth and Planetary Interiors*, 2021, 106693, ISSN 0031-9201, https://doi.org/10.1016/j.pepi.2021.106693.

Their main thesis is that Pierre-Simon Laplace in 1799 correctly theorized that the wobble in the Earth’s rotation is due to the moon and sun, as described in the treatise *Traité de Mécanique Céleste* (Treatise of Celestial Mechanics).

*Excerpts from the paper “On the shoulders of Laplace”*

Moreover Lopes *et al* claim that this celestial gravitational forcing carries over to controlling cyclic climate indices, following Laplace’s mathematical formulation (now known as Laplace’s Tidal Equations) for describing oceanic tides.

This view also aligns with the way we model climate indices such as ENSO and QBO via a solution to Laplace’s Tidal Equations, as described in the linked EGU presentation above.


The equatorial zone acts as a waveguide. As highlights, they list the following bullet points, taking advantage of the fact that the Coriolis effect vanishes at the equator.

This is a critical assertion: since, as shown in *Mathematical Geoenergy*, the Chandler wobble (a nutational oscillation) is forced by tides, then transitively so is El Niño. So when the authors state that the consequence is of both nutation *and* a gravity influence, it is actually the gravitational influence of the moon and sun (and slightly Jupiter) that is the root cause.

The article has several equations claiming analytical solutions, but the generated PDF has apparently not rendered the markup correctly; many “+” signs are missing from the equations. I have seen this issue before when generating PDF pages from a markup document, and assume that is what happened here. The hard-copy version is presumably fine, so I may have to go to the library to retrieve it, or perhaps ask the authors for a copy.

main author:

Sergey A. Arsen’yev

Dept. of Earth and Planetary Physics of Schmidt’s Institute of the Earth’s Physics, Russian Academy of Sciences, 10 Bolshaya Gruzinskaya, Moscow, 123995, Russia

I am a physical oceanographer who knows nothing about the Chandler wobble, is only slightly familiar with the QBO, but is a longtime expert on ENSO.

To be blunt, trying to shoehorn ENSO into a periodic tidal framework stretches reality to fit someone’s preconceived theory. Only the most motivated reasoning can believe this.… (more stuff)

I am sorry to have wasted an hour on this.

Billy Kessler, NOAA/PMEL, Seattle

Interactive comment on Earth Syst. Dynam. Discuss., https://doi.org/10.5194/esd-2020-74, 2020.

Billy also wrote this on his web site (emphasis mine):

4. An idea for a science fair project. Requested by a parent.

Here’s an idea. This experiment is similar to what actual scientists are doing right now.

The project is to construct some forecast models of El Niño’s development over the next few months. We don’t know what it will do. Will it get more intense? Weaken? Remain strong, and if so for how long? These questions are the subject of much debate in the scientific community right now, and many efforts are under way to predict and understand it. The models would be forecasts made using several assumptions, and the main result would be graphs showing how the forecasts compared with actual evolving conditions.

One model would be called “persistence”: whatever conditions are occurring now will continue. Surprisingly, persistence is often a hard-to-beat forecast, and weather forecasters score themselves on how much better than persistence they can do. A second model is continuation of the trend: if the sea surface temperature (SST) is warming up, it will continue to warm at the same rate. Obviously that can’t go on forever, but in many ways a trend is a good indicator of future trends. A third model is random changes: get a random number generator (or pick numbers out of a hat), and each day or week use the random numbers to predict what the change of SST will be (scale the numbers to keep it reasonable). Those are three simple models that can be used to project forward from current conditions. Essentially that’s what weather forecast models do, just more sophisticatedly (see question 13). Maybe you can think of some other ways to make forecasts (if you get something that works, send it in!).

Choose a few buoys from our network in different regions of the tropical Pacific (for example, on the equator, off the equator, in the east, and the west). Get the data from our web page (click for detailed instructions to get this data). Make and graph predictions for each buoy chosen for a month or two ahead, then collect observations as they come in (the data files are updated daily). Graph the observations against the three predictions. My guess is that each model would be successful in some regions for some periods of time. Other extensions would be to compare forecasts beginning at different times. Perhaps a forecast begun with September conditions is good for 3 months, but one begun in December is only good for one month. Etc.

Another simple project is to determine how significant an effect El Niño has on your local region. Do this by gathering an assortment of local weather time series from your region (monthly rainfall, temperature, etc., available at the web pages of the National Weather Service). Then get an index of El Niño like the Southern Oscillation Index (see Question 17 for a description and graphic, and download the values at NOAA’s Climate Prediction Center; the specific data links are: values for 1951-today and 1882-1950. Note that the SOI monthly values are very jumpy and must be smoothed by a 5-month running mean). Compare the turns of the El Niño/La Niña cycle with changes in your local weather; this could be either through a listing of El Niño/La Niña years and good/bad local weather, or by correlation of the two time series (send me e-mail for how to do correlation). You will probably find that some aspects of your local weather are related to the El Niño/La Niña cycle and some are not, and also that some strong El Niño or La Niña years make a difference but some do not. This reflects the fact that, far from the center of action in the tropical Pacific, El Niño is only one of many influences on weather.

If you are pretty good at math and computer programming (at least 8th-grade math), then I have a more advanced project that you can find here.

FAQ from http://faculty.washington.edu/kessler/occasionally-asked-questions.html#q4
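The three simple forecast models Kessler describes (persistence, trend, and random changes) can be sketched in a few lines; the SST series below is synthetic stand-in data, not real buoy observations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical weekly SST series for one buoy (synthetic stand-in data)
sst = 26.0 + np.cumsum(rng.normal(0.0, 0.1, 52))

history, horizon = sst[:40], 12        # 40 weeks to start from, 12 to forecast
obs = sst[40:40 + horizon]             # held-out "incoming" observations

# 1. Persistence: whatever conditions occur now will continue.
persistence = np.full(horizon, history[-1])

# 2. Trend: extrapolate the recent rate of change.
rate = (history[-1] - history[-5]) / 4.0        # per-week change over ~1 month
trend = history[-1] + rate * np.arange(1, horizon + 1)

# 3. Random changes: accumulate random weekly steps of plausible size.
random_walk = history[-1] + np.cumsum(rng.normal(0.0, 0.1, horizon))

# Score each forecast against the observations with an RMS error.
rms = {name: float(np.sqrt(np.mean((fc - obs) ** 2)))
       for name, fc in [("persistence", persistence),
                        ("trend", trend),
                        ("random", random_walk)]}
```

As Kessler guesses, which model "wins" depends on the region and the period; the point is the comparison itself.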


Something I learned early on in my research career is that complicated frequency spectra can be generated from simple repeating structures. Consider the spatial frequency spectra produced as diffraction patterns from a crystal lattice. Below is a reflected electron diffraction pattern of a hexagonally reconstructed surface of a silicon (Si) single crystal with a lead (Pb) adlayer (**(a)** and **(b)** are different alignments of the beam direction with respect to the lattice). Suffice it to say, there is enough information in the patterns to reverse-engineer the structure of the surface, as in **(c)**.

Now consider the ENSO pattern. At first glance, neither the time-series signal nor the Fourier power spectrum appears to be produced by anything periodically regular. Even so, let’s assume that the underlying pattern is tidally regular, comprising the expected fortnightly 13.66-day tropical/synodic cycle and the monthly 27.55-day anomalistic cycle, synchronized by an annual impulse. Then the power spectrum of the forcing *f(t)* looks like the **RED** trace on the left side of the figure below, *F(ω)*. Clearly that is not enough of a frequency spectrum (a few delta spikes) to make up the empirically calculated Fourier series of the ENSO data, which comprises ~40 intricately placed peaks between 0 and 1 cycles/year.

Yet, if we modulate that with a Laplace’s Tidal Equation solution functional *g(f(t))* that has a *G(ω)* as in the yellow inset above, a cyclic modulation of amplitudes where

So essentially what this suggests is that a few tidal factors modulated by two sinusoids produce enough spectral detail to easily account for the ~40 peaks in the ENSO power spectrum. It can do this because a modulating sinusoid is an efficient generator of harmonics and cross-harmonics, as the Taylor series of a sinusoid contains an effectively infinite number of power terms.
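A quick numerical check of this harmonic-generation claim, with illustrative stand-in frequencies rather than the actual tidal values:

```python
import numpy as np

t = np.arange(0.0, 200.0, 0.05)
# Two-sinusoid stand-in for the tidal forcing f(t)
f = np.sin(0.31 * t) + 0.6 * np.sin(0.47 * t)
# LTE-style modulation g(f(t)): a sinusoid of the forcing level, whose
# Taylor expansion supplies an effectively infinite set of power terms
g = np.sin(3.0 * f)

def count_peaks(x, frac=0.01):
    """Count spectral bins holding more than `frac` of the total power."""
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = p / p.sum()
    return int(np.sum(p > frac))
```

The forcing spectrum has only two significant peaks, while the modulated output populates many harmonic and cross-harmonic peaks at combinations n·ω₁ ± m·ω₂.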

To see this process in action, consider the following three figures, each of which features a slider that allows one to get an intuitive feel for how the LTE modulation adds richness via harmonics to the power spectrum.

1. Start with a mild LTE modulation and increase it, as in the figure below. A few harmonics begin to emerge as satellites surrounding the forcing harmonics in RED.

2. Next, increase the LTE modulation so that it models the slower sinusoid; more harmonics emerge.

3. Then add the faster sinusoid to fully populate the empirically observed ENSO spectral peaks (and match the time series).

It appears as if by magic, but this is the power of non-linear harmonic generation. Note that the peak labeled AB, among others, is derived from the original A and B as complicated satellite cross-terms, which can be accounted for by expanding all of the terms in the Taylor series of the sinusoids. This can be done with some difficulty, or left as-is when doing the fit via solver software.

To complete the circle, it’s likely that being exposed to mind-blowing Fourier series early on makes Fourier analysis of climate data less intimidating, as one can apply all the tricks-of-the-trade, which, alas, are considered routine in other disciplines.

**Individual charts**

https://imagizer.imageshack.com/img922/7013/VRro0m.png

I don’t do that kind of stuff and don’t think I ever will.

If this comes out of a human mind, then that same information can be fed into a knowledge base, and either a backward- or forward-chained inference engine could make similar assertions.

And that explains why I don’t do it — a machine should be able to do it better.

What makes an explanation good enough? by Santa Fe Institute
