Moonfall and glacially slow geophysics advances

This blog is late to the game in commenting on the physics of the Hollywood film Moonfall, but does that really matter? Geophysics research and glacially slow progress seem synonymous at this point. On social media, unless one jumps on the event of the day within an hour, it's considered forgotten. However, difficult problems aren't unraveled quickly, and that's what we have when we consider the Moon's influence on the Earth's geophysics. Yes, tides are easy to understand, but any other impact of the Moon is considered warily, perhaps over the course of decades, not as part of the daily news & entertainment cycle.

What if the Moon were closer?

My premise: The movie Moonfall is a purer climate-science-fiction film than Don't Look Up. Discuss.


Understanding is Lacking

Regarding the gravity waves concentrically emanating from the Tonga explosion

“It’s really unique. We have never seen anything like this in the data before,” says Lars Hoffmann, an atmospheric scientist at the Jülich Supercomputing Centre in Germany.

https://www.nature.com/articles/d41586-022-00127-1

and

“That’s what’s really puzzling us,” says Corwin Wright, an atmospheric physicist at the University of Bath, UK. “It must have something to do with the physics of what’s going on, but we don’t know what yet.”

https://www.nature.com/articles/d41586-022-00127-1
Hunga-Tonga-Hunga Ha’apai Eruption as seen by AIRS.

The discovery was prompted by a tweet sent to Wright on 15 January from Scott Osprey, a climate scientist at the University of Oxford, UK, who asked: “Wow, I wonder how big the atmospheric gravity waves are from this eruption?!” Osprey says that the eruption might have been unique in causing these waves because it happened very quickly relative to other eruptions. “This event seems to have been over in minutes, but it was explosive and it’s that impulse that is likely to kick off some strong gravity waves,” he says. The eruption might have lasted moments, but the impacts could be long-lasting. Gravity waves can interfere with a cyclical reversal of wind direction in the tropics, Osprey says, and this could affect weather patterns as far away as Europe. “We’ll be looking very carefully at how that evolves,” he says.

https://www.nature.com/articles/d41586-022-00127-1

This (“cyclical reversal of wind direction in the tropics”) refers to the QBO, and we will see whether it has an impact in the coming months. Hint: the QBO model from the last post essentially treats gravity waves arising from the tidal forcing as the driver of the cycle. Also, watch the LOD.

Perhaps what is lacking is the application of a simple scientific law: for every action there is a reaction. Always start from that, and also consider that an object in motion tends to stay in motion. Is the lack of observed first-order Coriolis effects part of why the scientists are mystified? Given how this force varies with latitude, the concentric rings might have been expected to be distorted according to spherical harmonics.

The harmonics generator of the ocean

The research category is topological considerations of Laplace's Tidal Equations (LTE, a shallow-water simplification of Navier-Stokes) applied to the equatorial thermocline. The following citations trace the evolution of the understanding I have developed via presentations and publications over the last 6 years (working backwards):

“Nonlinear long-period tidal forcing with application to ENSO, QBO, and Chandler wobble”, EGU General Assembly Conference Abstracts, 2021, EGU21-10515
ui.adsabs.harvard.edu/abs/2021EGUGA..2310515P/abstract

“Nonlinear Differential Equations with External Forcing”, ICLR 2020 Workshop DeepDiffEq 
https://openreview.net/forum?id=XqOseg0L9Q

“Mathematical Geoenergy: Discovery, Depletion, and Renewal”, John Wiley & Sons, 2019, chapter 12: “Wave Energy” 
https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12

“Ephemeris calibration of Laplace’s tidal equation model for ENSO”, AGU Fall Meeting 2018, 
https://www.essoar.org/doi/abs/10.1002/essoar.10500568.1

“Biennial-Aligned Lunisolar-Forcing of ENSO: Implications for Simplified Climate Models”,
AGU Fall Meeting 2017, https://www.essoar.org/doi/abs/10.1002/essoar.b1c62a3df907a1fa.b18572c23dc245c9.1

“Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO”, AGU Fall Meeting Abstracts 2016, OS11B-04, 
ui.adsabs.harvard.edu/abs/2016AGUFMOS11B..04P/abstract

Given that I have worked on this topic persistently over this span of time, I have gained considerable insight into how straightforward it has become to generate relatively decent fits to climate dipoles such as ENSO. Paradoxically, this is both good and bad. It's good because the model's recipe is simple to describe algorithmically and scores well on plausibility and parsimony, largely because it's a straightforward non-linear extension of a conventional tidal analysis model. However, that non-linearity opens up the possibility of many similar model fits that are equally good yet difficult to discriminate between. So it's bad in the sense that I can come to an impasse in selecting the "best" model.

This is oversimplifying a bit, but the framing issue is this: you know the answer is 72, yet you have a hard time determining whether the question being posed was 2×36, 3×24, 4×18, 6×12, 8×9, 9×8, 12×6, 18×4, 24×3, or 36×2. Each gives the right answer, but potentially not the right mechanism. This is a fundamental problem with non-linear analysis.

A conventional tidal analysis by itself uses just a few fundamental tidal factors (exactly 4), made devastatingly accurate by the introduction of 2nd-order harmonics and cross-harmonics. All these harmonics are generated by non-linear effects, but the frequency spectrum of a sea-level-height (SLH) time-series is so clean and distinct that the equivalent of solving k × F = 72 is essentially a scaling identification problem, where k is the scale factor for the corresponding cyclic tidal factors F.

Yet, by applying the non-linear LTE solution to the problem of modeling ENSO, we quickly discover that the algorithm is a wickedly effective harmonics and cross-harmonics generator. Any number of combinations of harmonics can produce an adequate fit depending on the variable LTE modulation applied. It could be a small LTE modulation mixed with a wide assortment of tidal factors (the 2×36 case), a large LTE modulation mixed with a minimum of tidal factors (the 18×4 case), or something in between (e.g. the 8×9 case). This is all a result of the sine-of-a-sine non-linearity of the LTE formulation, related to the Mach-Zehnder modulation used in optical cryptography applications. That connection is the hint that the result may not be unambiguously decodable, given that M-Z has been discovered to be nature's own built-in encryption device.
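
To make the harmonics-generator claim concrete, here is a minimal numerical sketch of a sine-of-a-sine modulation applied to a toy two-frequency forcing. The periods and modulation strengths below are arbitrary illustrations, not the fitted ENSO values:

```python
import numpy as np

# Toy two-frequency forcing over a 4096-sample window (13 and 37 cycles per window,
# chosen only so the FFT lines are clean; nothing here is a fitted tidal value).
N = 4096
t = np.arange(N)
forcing = np.sin(2*np.pi*13*t/N) + 0.5*np.sin(2*np.pi*37*t/N)

def lte_modulate(f, A):
    """Sine-of-a-sine modulation: larger A folds the forcing over more times."""
    return np.sin(A * f)

for A in (0.5, 2.0, 8.0):
    spectrum = np.abs(np.fft.rfft(lte_modulate(forcing, A)))
    # count spectral lines rising above 5% of the strongest peak
    n_lines = int(np.sum(spectrum > 0.05 * spectrum.max()))
    print(f"modulation strength A = {A:3.1f} -> {n_lines} significant lines")
```

A weak modulation leaves essentially the two input lines, while a strong modulation scatters power across dozens of intermodulation products, which is why many different tidal-factor/LTE combinations can reproduce a similar-looking spectrum.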

However, there remains plenty of light at the end of this tunnel, as I have also discovered that the tidal factor spread is likely governed largely by a single lunar tidal constituent: the 27.55-day anomalistic Mm cycle interfering with an annual impulse. That's essentially 2 of the 4 tidal factors, with the other 2 lunar factors providing a 2nd-order correction. For the longest time I had been focused on the 13.66-day tropical Mf cycle, as that also led to a decent fit over the years, specifically because the first beat harmonic of the Mf cycle with the annual impulse is 3.8 years while that of the Mm cycle is 3.9 years. These two terms are close enough that they only go out of phase after ~130 years, which is the extent of the ENSO time-series. Only when you try to simplify a model fit by iterating over the space of factor combinations will you discover the difference between 3.8 and 3.9.
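
The 3.8-versus-3.9-year distinction comes straight from the aliasing arithmetic. A quick check using standard lunar-month values, with the once-per-year sampling as the only assumption:

```python
# Aliasing the lunar cycles against a once-per-year impulse
year = 365.2422            # tropical year, days
Mm   = 27.5546             # anomalistic month, days
Mf   = 13.6608             # tropical fortnight, days

def aliased_period_yr(p_days):
    """Period (years) of the residual cycle left after sampling once per year."""
    cpy = year / p_days                     # cycles per year
    return 1.0 / abs(cpy - round(cpy))

tMm, tMf = aliased_period_yr(Mm), aliased_period_yr(Mf)
print(f"Mm aliases to {tMm:.2f} yr, Mf aliases to {tMf:.2f} yr")   # ~3.9 vs ~3.8
print(f"the two aliases drift apart over ~{1/abs(1/tMf - 1/tMm):.0f} yr")
```

The divergence time, on the order of 120 to 130 years, is comparable to the length of the instrumental ENSO record, which is why the two candidates are so hard to tell apart.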

In terms of geophysics, the Mf factor is a tractional tidal forcing operating parallel to the ocean's surface, influenced by the Moon's latitudinal declination, while the Mm factor is a largely perpendicular gravitational forcing influenced by the perigean cycle of the Moon-to-Earth distance. The latter may be the "right mechanism", even though either can give close to the "right answer".

So the gist of the fitting observations is that far fewer harmonic factors are required for a decent Mm-based model than for an Mf-based model. This comes slightly at the expense of a stronger LTE modulation, but the parsimony of an Mm-based model can't be beat, as I will show via the following analysis charts…

This is a good model fit based on a slightly modified Mm-based factorization, with a sample-and-hold placed on a strong annual impulse

The comparison of the modified Mm tidal factorization to the pure Mm is below (the 27.55-day periodicity doesn't appear because of the monthly aliasing used in plotting).

The slight pattern on top of the pure signal is due to a 6-year beat of the Mm period with the 27.212-day lunar draconic cycle, which marks the time between the Moon's equatorial nodal crossings. This is the strongest factor of the ascension cycle described in the solar and lunar ephemeris recently published by Sung-Ho Na [1]. As highlighted above by the numbered cycles, ~20 of these beats occur in the span of 120 years.
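
For reference, the 6-year figure is just the beat period of the two standard month lengths, with no fitting involved:

```python
# Beat of the draconic (27.2122 d) and anomalistic (27.5546 d) months
beat_days = 1.0 / (1/27.2122 - 1/27.5546)
print(f"{beat_days:.0f} days ~ {beat_days/365.2422:.1f} years")   # ~2190 d ~ 6.0 yr
```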

Below, an expanded look showing how slight a correction is applied

The integrated forcing after the annual impulse is shown below. The sample-and-hold integration exaggerates low-frequency differences so the distinction between the pure Mm forcing and Mm+harmonics is more apparent. The 6-year periodicity is obscured by longer term variations.
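
For readers wondering what the sample-and-hold step amounts to, here is a toy reading of it in a few lines of numpy; the constants are placeholders, not the model's fitted values:

```python
import numpy as np

# Sample the Mm forcing once per (tropical) year, then hold and accumulate it.
years = np.arange(120)                          # 120 annual impulses
t_days = years * 365.2422
samples = np.cos(2*np.pi*t_days/27.5546)        # Mm forcing caught by each impulse

held_and_integrated = np.cumsum(samples)        # each yearly sample is held and summed

# The accumulated series varies on the ~3.9-year aliased time scale and slower,
# even though the underlying forcing repeats every 27.55 days; the summation
# emphasizes the slowest components of the sampled forcing.
print(held_and_integrated[:12].round(2))
```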

The log-scaled power spectrum of the integrated tidal forcing is shown below. Note the overwhelmingly strong peak near the 0.25/year frequency (the 3.9-year cycle). The rest of the peaks are readily matched to periodicities in the Na ephemerides [1] according to their strength.

The LTE modulation is quite strong for this factorization. As shown below, the forcing levels need to sinusoidally fold several times over to match the observed ENSO behavior. See the recent post Inverting non-autonomous functions for a recipe to aid in iterating for the underlying LTE modulation.

The parsimony of this model can't be emphasized enough. It's equivalent to the agreement of a conventional tidal forcing analysis with an SLH time-series, in that only a single lunar tidal factor accounts for the majority of the modulation. Only the challenge of finding the correct LTE modulation stands in the way of producing an unambiguously correct model for the underlying ENSO behavioral dynamics.

References

[1] Sung-Ho Na, "Prediction of Earth tide", Chapter 19 in Basics of Computational Geophysics, eds. Pijush Samui, Barnali Dixon, Dieu Tien Bui, Elsevier, 2021, pp. 351-372, ISBN 9780128205136,
https://doi.org/10.1016/B978-0-12-820513-6.00022-9. (Note: the ephemerides for the Earth-Moon-Sun system closely match the online NASA JPL ephemerides generator at https://ssd.jpl.nasa.gov/horizons, but this paper is more useful in that it states the contributions of the various tidal factors algorithmically in the supplied source code. Source code also available at https://github.com/pukpr/GeoEnergyMath/tree/master/src)


Information Theory in Earth Science: Been there, done that

Following up from this post, there is a recent sequence of articles in the AGU journal Water Resources Research under the heading: “Debates: Does Information Theory Provide a New Paradigm for Earth Science?”

Since our book Mathematical Geoenergy anticipated many of these ideas, you can find plenty of examples and derivations there, with many centered on the ideas of Maximum Entropy.

Here is an excerpt from the “Emerging concepts” entry, which indirectly addresses negative entropy:

“While dynamical system theories have a long history in mathematics and physics and diverse applications to the hydrological sciences (e.g., Sangoyomi et al., 1996; Sivakumar, 2000; Rodriguez-Iturbe et al., 1989, 1991), their treatment of information has remained probabilistic akin to what is done in classical thermodynamics and statistics. In fact, the dynamical system theories treated entropy production as exponential uncertainty growth associated with stochastic perturbation of a deterministic system along unstable directions (where neighboring states grow exponentially apart), a notion linked to deterministic chaos. Therefore, while the kinematic geometry of a system was deemed deterministic, entropy (and information) remained inherently probabilistic. This led to the misconception that entropy could only exist in stochastically perturbed systems but not in deterministic systems without such perturbations, thereby violating the physical thermodynamic fact that entropy is being produced in nature irrespective of how we model it.

In that sense, classical dynamical system theories and their treatments of entropy and information were essentially the same as those in classical statistical mechanics. Therefore, the vast literature on dynamical systems, including applications to the Earth sciences, was never able to address information in ways going beyond the classical probabilistic paradigm.”

That is, there are likely many earth system behaviors that are highly ordered, but the complexity and non-linearity of their mechanisms make them appear stochastic or chaotic (high positive entropy), when the reality is that they are just following a complicated deterministic model (negative entropy). We just aren't looking hard enough to discover the underlying patterns in most of this stuff.

An excerpt from the Occam's Razor entry, which lifts from my citation of Gell-Mann:

“Science and data compression have the same objective: discovery of patterns in (observed) data, in order to describe them in a compact form. In the case of science, we call this process of compression “explaining observed data.” The proposed or resulting compact form is often referred to as “hypothesis,” “theory,” or “law,” which can then be used to predict new observations. There is a strong parallel between the scientific method and the theory behind data compression. The field of algorithmic information theory (AIT) defines the complexity of data as its information content. This is formalized as the size (file length in bits) of its minimal description in the form of the shortest computer program that can produce the data. Although complexity can have many different meanings in different contexts (Gell-Mann, 1995), the AIT definition is particularly useful for quantifying parsimony of models and its role in science. “

Parsimony of models is a measure of negative entropy

Odd cycles in Length-of-Day (LOD) variations

Two papers on the analysis of >1 year periods in the LOD time series measured since 1962.

The consistency of interdecadal changes in the Earth’s rotation variations

On the ~ 7 year periodic signal in length of day from a frequency domain stepwise regression method

These cycles may be related to aliased tidal periods with the annual cycle, as in modeling ENSO.


A paper describing new satellite measurements for precision LOD determination:

“BeiDou satellite radiation force models for precise orbit determination and geodetic applications”, from TechRxiv

Note the detail on the 13.6 day fortnightly tidal period

Nonlinear long-period tidal forcing with application to ENSO, QBO, and Chandler wobble

Model fitting process for ENSO

Back to EGU abstract and presentation


Addendum: After this presentation was submitted, a ground-breaking paper by a group at the University of Paris came on-line. Their paper, “On the Shoulders of Laplace” covers much the same ground as the EGU presentation linked above.

Their main thesis is that Pierre-Simon Laplace in 1799 correctly theorized that the wobble in the Earth’s rotation is due to the moon and sun, described in the treatise “Traité de Mécanique Céleste (Treatise of Celestial Mechanics)“.


Excerpts from the paper “On the shoulders of Laplace”

Moreover, Lopes et al. claim that this celestial gravitational forcing carries over to controlling cyclic climate indices, following Laplace's mathematical formulation (now known as Laplace's Tidal Equations) for describing oceanic tides.

Excerpt from the paper “On the shoulders of Laplace”

This view also aligns with the way we model climate indices such as ENSO and QBO via a solution to Laplace’s Tidal Equations, as described in the linked EGU presentation above.


ESD Ideas article for review

Get a Copernicus login and comment for peer-review

The simple idea is that tidal forces play a bigger role in geophysical behaviors than previously thought, thus helping to explain phenomena that have frustrated scientists for decades.

The idea is simple but the non-linear math (see figure above for ENSO) requires cracking to discover the underlying patterns.

The rationale for the ESD Ideas section in the EGU Earth System Dynamics journal is to get discussion going on innovative and novel ideas. So even though this model is worked out comprehensively in Mathematical Geoenergy, it hasn’t gotten much publicity.

Gravitational Pull

In Chapter 12 of the book, we provide an empirical gravitational forcing term that can be applied to the Laplace's Tidal Equation (LTE) solution for modeling ENSO. The inverse-square law is modified to an inverse-cube law to take into account the differential pull from opposite sides of the Earth.

excerpt from Mathematical Geoenergy (Wiley/2018)

The two main terms are the monthly anomalistic (Mm) cycle and the fortnightly tropical/draconic pair (Mf, Mf' with an 18.6-year nodal modulation). Due to the inverse-cube gravitational pull in the denominator of F(t), faster harmonic periods are also created: the 9-day (Mt) term arises from the monthly/fortnightly cross-term and the weekly (Mq) term from the fortnightly crossed against itself. It's amazing how few terms are needed to create a canonical fit to a tidally forced ENSO model.
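
A rough sketch of how the cubic term manufactures those faster harmonics; only the standard monthly and fortnightly periods are assumed, and the amplitudes are arbitrary:

```python
import numpy as np

days = np.arange(40*365, dtype=float)          # ~40 years of daily samples
Mm, Mf = 27.5546, 13.6608                      # anomalistic and fortnightly periods, days
f = 1.0 + 0.3*np.cos(2*np.pi*days/Mm) + 0.2*np.cos(2*np.pi*days/Mf)

# Cubing the two-term forcing (mimicking the inverse-cube differential pull)
# generates sum-frequency products: the monthly x fortnightly cross-term near
# 9.1 days and the fortnightly-squared term near 6.8 days.
spec = np.abs(np.fft.rfft(f**3 - np.mean(f**3)))
periods = 1.0 / np.fft.rfftfreq(len(days), d=1.0)[1:]      # days per cycle

for lo, hi, label in [(8.5, 9.5, "monthly x fortnightly"), (6.5, 7.2, "fortnightly x itself")]:
    band = (periods > lo) & (periods < hi)
    peak = periods[band][np.argmax(spec[1:][band])]
    print(f"{label}: strongest line near {peak:.2f} days")
```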

The recipe for the model is shown in the chart below (click to magnify), following sequentially steps (A) through (G) :

(A) Long-period fortnightly and anomalistic tidal terms as F(t) forcing
(B) The Fourier spectrum of F(t) revealing higher frequency cross terms
(C) An annual impulse modulates the forcing, reinforcing the amplitude
(D) The impulse is integrated producing a lagged quasi-periodic input
(E) Resulting Fourier spectrum is complex due to annual cycle aliasing
(F) Oceanic response is a Laplace’s Tidal Equation (LTE) modulation
(G) Final step is to fit the LTE modulation to match the ENSO time-series (a schematic code sketch of these steps follows)
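
As a schematic of steps (A) through (G), the sketch below strings the stages together in code. Every constant is a placeholder chosen for illustration, and step (G), the actual fit against an ENSO index, is only indicated rather than performed:

```python
import numpy as np

days = np.arange(140*365)                                  # ~140 years, daily steps

# (A) long-period anomalistic + fortnightly forcing F(t)
F = 0.3*np.cos(2*np.pi*days/27.5546) + 0.2*np.cos(2*np.pi*days/13.6608)

# (B) the spectrum of F (and its nonlinear terms) contains the faster cross terms

# (C) annual impulse modulating the forcing
impulse = (days % 365 == 0)                                # crude once-per-year impulse
forced = F * impulse

# (D) integrate the impulsed forcing (a leaky integrator stands in for the lag)
g = np.zeros(len(days))
leak = 1.0 - 1.0/(4*365)                                   # placeholder ~4-year relaxation
for i in range(1, len(days)):
    g[i] = leak*g[i-1] + forced[i]

# (E) the spectrum of g is now dominated by annually aliased lines

# (F) LTE modulation: fold the integrated forcing through a sine
A = 5.0                                                    # placeholder folding strength
model = np.sin(A * g/np.max(np.abs(g)))

# (G) in the real procedure, A (plus further LTE wavenumbers and the tidal
# amplitudes) would be adjusted to maximize correlation with an ENSO index
print(model[::365][:10].round(2))
```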

The tidal forcing is constrained by the known effects of the lunisolar gravitational torque on the Earth's length-of-day (LOD) variations. An essentially identical set of monthly, fortnightly, 9-day, and weekly terms is required for both a solid-body LOD model fit and a fluid-volume ENSO model fit.

Fitting tidal terms to the dLOD/dt data is complicated only by the aliasing of the annual cycle, which makes factors such as the weekly 7.095-day and 6.83-day cycles difficult to distinguish.
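
One way to see the difficulty, assuming the same once-per-year aliasing used for the ENSO forcing (an illustrative calculation, not taken from the papers):

```python
year = 365.2422
for period in (7.095, 6.83):                    # the two weekly candidates, in days
    cpy = year / period                         # cycles per year
    residual = abs(cpy - round(cpy))            # what survives once-per-year aliasing
    print(f"{period} d -> aliases to ~{1/residual:.2f} yr")
# both collapse to roughly a 2.1-year aliased cycle, hence the ambiguity
```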

If we apply the same tidal terms as the forcing for matching the dLOD data, we can use the fit below as a perturbed ENSO tidal forcing. There is not a lot of difference here; the weekly harmonics are higher in magnitude.

Modified initial calibration of lunar terms for fitting ENSO

So the only real unknown in this process is guessing the LTE modulation of steps (F) and (G). That's what differentiates the inertial response of a spinning solid such as the Earth's core and mantle from the response of a rotating liquid volume such as the equatorial Pacific ocean. The former is essentially linear, but the latter is non-linear, making it an infinitely harder problem to solve, since there are infinitely many non-linear transformations one can choose to apply. The only reason that I stumbled across this particular LTE modulation is that it comes directly from a clever solution of Laplace's tidal equations.

for full derivation see Mathematical Geoenergy (Wiley/2018)

Mathematical Geoenergy

Our book Mathematical Geoenergy presents a number of novel approaches that each deserve a research paper on their own. Here is the list, ordered roughly by importance (IMHO):

  1. Laplace’s Tidal Equation Analytic Solution.
    (Ch 11, 12) A solution of a Navier-Stokes variant along the equator. Laplace's Tidal Equations are a simplified version of Navier-Stokes, and the equatorial topology allows an exact closed-form analytic solution. This could qualify for the Clay Institute Millennium Prize if the practical implications are considered, but it's a lower-dimensional solution than a complete 3-D Navier-Stokes formulation requires.
  2. Model of El Nino/Southern Oscillation (ENSO).
    (Ch 12) A tidally forced model of the equatorial Pacific’s thermocline sloshing (the ENSO dipole) which assumes a strong annual interaction. Not surprisingly this uses the Laplace’s Tidal Equation solution described above, otherwise the tidal pattern connection would have been discovered long ago.
  3. Model of Quasi-Biennial Oscillation (QBO).
    (Ch 11) A model of the equatorial stratospheric winds, which reverse direction roughly every 28 months. This incorporates the idea of amplified cycling of the sun and moon nodal declination pattern on the atmosphere's tidal response.
  4. Origin of the Chandler Wobble.
    (Ch 13) An explanation for the ~433 day cycle of the Earth’s Chandler wobble. Finding this is a fairly obvious consequence of modeling the QBO.
  5. The Oil Shock Model.
    (Ch 5) A data flow model of oil extraction and production which allows for perturbations. We are seeing this in action with the recession caused by oil supply perturbations due to the coronavirus pandemic.
  6. The Dispersive Discovery Model.
    (Ch 4) A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.
  7. Ornstein-Uhlenbeck Diffusion Model
    (Ch 6) Applying Ornstein-Uhlenbeck diffusion to describe the decline and asymptotic limiting flow from volumes such as occur in fracked shale oil reservoirs.
  8. The Reservoir Size Dispersive Aggregation Model.
    (Ch 4) A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.
  9. Origin of Tropical Instability Waves (TIW).
    (Ch 12) As the ENSO model was developed, a higher harmonic component was found which matches TIW.
  10. Characterization of Battery Charging and Discharging.
    (Ch 18) Simplified expressions for modeling Li-ion battery charging and discharging profiles by applying dispersion on the diffusion equation, which reflects the disorder within the ion matrix.
  11. Anomalous Behavior in Dispersive Transport explained.
    (Ch 18) Photovoltaic (PV) material made from disordered and amorphous semiconductor material shows poor photoresponse characteristics. Solution to simple entropic dispersion relations or the more general Fokker-Planck leads to good agreement with the data over orders of magnitude in current and response times.
  12. Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials.
    (Ch 20) The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.
  13. Wind Energy Analysis.
    (Ch 11) Universality of wind energy probability distribution by applying maximum entropy to the mean energy observed. Data from Canada and Germany. Found a universal BesselK distribution which improves on the conventional Rayleigh distribution.
  14. Terrain Slope Distribution Analysis.
    (Ch 16) Explanation and derivation of the topographic slope distribution across the USA. This uses mean energy and maximum entropy principle.
  15. Thermal Entropic Dispersion Analysis.
    (Ch 14) Solving the Fokker-Planck equation or Fourier's Law for thermal diffusion in a disordered environment. A subtle effect, but the result is a simplified expression not involving complex erf transcendental functions. Useful in ocean heat content (OHC) studies.
  16. The Maximum Entropy Principle and the Entropic Dispersion Framework.
    (Ch 10) The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.
  17. Solving the Reserve Growth “enigma”.
    (Ch 6) An application of dispersive discovery on a localized level which models the hyperbolic reserve growth characteristics observed.
  18. Shocklets.
    (Ch 7) A kernel approach to characterizing production from individual oil fields.
  19. Reserve Growth, Creaming Curve, and Size Distribution Linearization.
    (Ch 6) An obvious linearization of this family of curves, related to Hubbert Linearization but more useful since it stems from first principles.
  20. The Hubbert Peak Logistic Curve explained.
    (Ch 7) The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.
  21. Laplace Transform Analysis of Dispersive Discovery.
    (Ch 7) Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.
  22. Gompertz Decline Model.
    (Ch 7) Exponentially increasing extraction rates lead to steep production decline.
  23. The Dynamics of Atmospheric CO2 buildup and Extrapolation.
    (Ch 9) Convolving a fat-tailed CO2 residence time impulse response function with a fossil-fuel emissions stimulus. This shows the long latency of CO2 buildup very straightforwardly.
  24. Reliability Analysis and Understanding the “Bathtub Curve”.
    (Ch 19) Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.
  25. The Overshoot Point (TOP) and the Oil Production Plateau.
    (Ch 8) How increases in extraction rate can maintain production levels.
  26. Lake Size Distribution.
    (Ch 15) Analogous to explaining reservoir size distribution, uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur (by comparison).
  27. The Quandary of Infinite Reserves due to Fat-Tail Statistics.
    (Ch 9) Demonstrated that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.
  28. Oil Recovery Factor Model.
    (Ch 6) A model of oil recovery which takes into account reservoir size.
  29. Network Transit Time Statistics.
    (Ch 21) Dispersion in TCP/IP transport rates leads to the measured fat-tails in round-trip time statistics on loaded networks.
  30. Particle and Crystal Growth Statistics.
    (Ch 20) Detailed model of ice crystal size distribution in high-altitude cirrus clouds.
  31. Rainfall Amount Dispersion.
    (Ch 15) Explanation of rainfall variation based on dispersion in rate of cloud build-up along with dispersion in critical size.
  32. Earthquake Magnitude Distribution.
    (Ch 13) Distribution of earthquake magnitudes based on dispersion of energy buildup and critical threshold.
  33. IceBox Earth Setpoint Calculation.
    (Ch 17) Simple model for determining the earth’s setpoint temperature extremes — current and low-CO2 icebox earth.
  34. Global Temperature Multiple Linear Regression Model
    (Ch 17) The global surface temperature records show variability that is largely due to the GHG rise along with fluctuating changes due to ocean dipoles such as ENSO (via the SOI measure and also AAM) and sporadic volcanic eruptions impacting the atmospheric aerosol concentrations.
  35. GPS Acquisition Time Analysis.
    (Ch 21) Engineering analysis of GPS cold-start acquisition times. Using Maximum Entropy in EMI clutter statistics.
  36. 1/f Noise Model
    (Ch 21) Deriving a random noise spectrum from maximum entropy statistics.
  37. Stochastic Aquatic Waves
    (Ch 12) Maximum Entropy Analysis of wave height distribution of surface gravity waves.
  38. The Stochastic Model of Popcorn Popping.
    (Appx C) The novel explanation of why popcorn popping follows the same bell-shaped curve of the Hubbert Peak in oil production. Can use this to model epidemics, etc.
  39. Dispersion Analysis of Human Transportation Statistics.
    (Appx C) Alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.

 

Asymptotic QBO Period

The modeled QBO cycle is directly related to the nodal (draconic) lunar cycle physically aliased against the annual cycle. The empirical cycle period is best estimated by tracking the peak acceleration of the QBO velocity time-series, as this acceleration (the 1st derivative of the velocity) shows a sharp peak. This value should asymptotically approach a 2.368-year period over the long term. Since the recent data from the main QBO repository provides an additional acceleration peak from the past month, now is as good a time as any to analyze the cumulative data.
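
The 2.368-year asymptote follows from aliasing the draconic month against the annual cycle; a quick check with standard values:

```python
year = 365.2422                        # tropical year, days
draconic = 27.2122                     # draconic (nodal) month, days

cpy = year / draconic                  # ~13.42 draconic months per year
aliased_period = 1.0 / abs(cpy - round(cpy))
print(f"annually aliased draconic period ~ {aliased_period:.2f} yr")   # ~2.37
```

which matches the quoted asymptote to within the precision of the month and year lengths used.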



The new data-point provides a longer period that compensates for some recent shorter periods, such that the cumulative mean lies right on the asymptotic line. The observed jitter is explainable in terms of the model, as acceleration peaks are more prone to align close to an annual impulse. But the accumulated mean period is still aligned to the draconic aliasing with this annual impulse. As more data points come in over the coming decades, the mean should vary less and less from the asymptotic value.

The fit to QBO using all the data save for the last available data point is shown below.  Extrapolating beyond the green arrow, we should see an uptick according to the red waveform.



Adding the recent data-point, the blue waveform does follow the model.



There was a flurry of recent discussion on the QBO anomaly of 2016 (shown as a split peak above), which implied that perhaps the QBO would be permanently disrupted from its long-standing pattern. A more plausible explanation may be that the QBO pattern was not simply wandering from its assumed perfectly cyclic path, but instead is following a predictable yet jittery track: a combination of the (physically aliased) annual-impulse-synchronized draconic cycle together with a sensitivity to variations in the draconic cycle itself. The latter calibration is shown below, based on the NASA ephemeris.



This is the QBO spectral decomposition, showing signal strength centered on the fundamental aliased draconic value, both for the data and for the model.


Prof. Richard Lindzen, the main scientist behind the consensus QBO model, has recently been introduced here as being “considered the most distinguished living climate scientist on the planet”. In his presentation criticizing AGW science [1], Lindzen claimed that the climate oscillates due to a steady uniform force, much like a violin oscillates when the steady force of a bow is drawn across its strings. An analogy perhaps better suited to reality is that the violin is being played like a drum. Resonance is more of a decoration to the beat itself.

[1] Professor Richard Lindzen slammed conventional global warming thinking warming as ‘nonsense’ in a lecture for the Global Warming Policy Foundation on Monday. ‘An implausible conjecture backed by false evidence and repeated incessantly … is used to promote the overturn of industrial civilization,’ he said in London. — GWPF