NAO and the Median Filter

a random quote

“LLMs run a median filter on the corpus”

The North Atlantic Oscillation (NAO) time-series has always been intimidating to analyze. Outwardly it looks like white noise, yet enough scientists refer to extended positive- or negative-leaning intervals that there must be some underlying autocorrelation embedded in it. Reducing the white noise and extracting the signal requires a clever filtering trick: the filter should preserve the edges of the waveform while suppressing the noise level. The 5-point median filter does exactly that with minimal complexity, as the algorithm is simply expressed. It leaves edges and steep slopes alone because the median naturally falls on the slope, which is just what we want in order to retain the underlying signal.
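For anyone wanting to try this, a minimal sketch of a 5-point median filter in Python (NumPy assumed; the short nao array here is made-up demo data, not the actual index):

```python
import numpy as np

def median5(x):
    """Apply a 5-point sliding median to a 1-D series.
    Edges are handled by reflecting the series so the output
    has the same length as the input."""
    xp = np.pad(x, 2, mode="reflect")
    # stack the 5 shifted copies and take the median across them
    windows = np.stack([xp[i:i + len(x)] for i in range(5)])
    return np.median(windows, axis=0)

# Example: suppress a single-month spike while keeping a step edge
nao = np.array([0.1, -0.2, 3.0, 0.0, 0.1, 1.0, 1.1, 0.9, 1.0, 1.2])
print(median5(nao))  # the isolated 3.0 spike is removed; the step up to ~1 survives
```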

Once applied, the NAO time-series still appears erratic, yet the problematic monthly extremes are reduced, with the median filter suppressing many of them completely. If we then use the LTE model to fit the filtered NAO time-series (starting from the annual-impulsed tidal forcing used for the AMO), a clear correlation emerges. The standing-wave modes are of course all high, in contrast to the single low mode for the AMO, so higher-frequency cycling is expected, yet the fit is surprisingly good given the number of peaks and valleys it must traverse over the 140+ year historical interval.

Non-autonomous mathematical models

Non-autonomous mathematical formulations differ from autonomous ones in that their governing equations explicitly depend on time or another external variable. In natural systems, certain behaviors or processes are better modeled with non-autonomous formulations because they are influenced by external, time-dependent factors. Some examples of natural behaviors that qualify as non-autonomous include:

1. Seasonal Climate Variation:
Climate patterns, such as temperature changes or monsoon cycles, are influenced by external factors like the Earth’s orbit, axial tilt, and solar radiation, all of which vary over time. These changes make non-autonomous systems suitable for modeling long-term climate behavior.

2. Tidal Forces:
Tidal movements are driven by the gravitational pull of the Moon and the Sun, which vary as the positions of these celestial bodies change relative to Earth. Tidal equations thus have time-dependent forcing terms, making them non-autonomous.

3. Biological Rhythms:
Circadian rhythms in living organisms, which regulate daily cycles such as sleep and feeding, are influenced by the 24-hour light-dark cycle. These external light variations necessitate non-autonomous models.

4. Astronomical Geophysical Cycles:
Systems like the Chandler wobble (the irregular movement of Earth’s rotation axis) or the Quasi-Biennial Oscillation (QBO) in the equatorial stratosphere are influenced by periodic external factors, such as lunar cycles, making them non-autonomous. This also includes systems where lunar or Draconic cycles interact with annual cycles in non-linear ways, as explored in studies of Earth’s rotational dynamics.

5. Oceanographic Dynamical Phenomena:
Non-autonomous formulations are needed to model phenomena such as El Niño, which is influenced by complex interactions between atmospheric and oceanic conditions, themselves driven by seasonal and longer-term climatic variations.

6. Planetary Motion in a Varying Gravitational Field:
In astrophysical systems where a planet moves in the gravitational field of other bodies, such as a multi-body problem where external forces vary in time, non-autonomous dynamics become essential to account for these influences.

In contrast, autonomous systems are self-contained: their behavior depends only on their internal state variables, independent of any external time-varying influence. Non-autonomous systems therefore often better capture the complexity and variability introduced by time-dependent external factors.
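To make the distinction concrete, a minimal generic pair (the symbols are illustrative placeholders, not tied to any particular index):

$$\dot{x} = f(x) \quad \text{(autonomous, e.g. radioactive decay: } \dot{N} = -\lambda N)$$

$$\dot{x} = f(x, t) \quad \text{(non-autonomous, e.g. a tidally forced oscillator: } \ddot{x} + \omega_0^2 x = A\sin(\omega t))$$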

However, many still want to find connections to autonomous formulations, as these often coincide with resonant conditions or a natural damping rate.

Autonomous mathematical formulations are characterized by the fact that their governing equations do not explicitly depend on time or other external variables (they can be implicit via time derivatives though). These systems evolve based solely on their internal state variables. Many natural behaviors can be modeled using autonomous systems when external influences are either negligible or can be ignored. Here are some examples of natural behaviors that qualify as autonomous:

1. Radioactive Decay:
The decay of radioactive isotopes is governed by an internal process where the rate of decay depends only on the amount of the substance present at a given moment. The decay equation does not depend on time explicitly, making it an autonomous system.

2. Epidemiological Models (without external intervention):
Simplified models of disease spread, such as the SIR (Susceptible-Infected-Recovered) model, can be autonomous if no external factors (like seasonal effects or interventions) are considered. The evolution of the system depends only on the current number of susceptible, infected, and recovered individuals.

3. Predator-Prey Dynamics (Lotka-Volterra Model):
In the absence of external influences like seasonal changes or human intervention, predator-prey relationships, such as those described by the Lotka-Volterra equations, can be modeled as autonomous systems. The population changes depend solely on the interaction between predators and prey.

4. Chemical Reactions (closed systems):
In a closed system with no external input or removal of substances, the kinetics of chemical reactions can be modeled as autonomous. The rate of reaction depends only on the concentrations of reactants and products at any given time.

5. Newtonian Mechanics of Isolated Systems:
For an isolated mechanical system (e.g., a simple pendulum or two-body orbital system), the equations of motion can be autonomous. The system evolves based solely on the internal energy and forces within the system, without any external time-dependent influences. This relates to general oscillatory systems or harmonic oscillators — the simple harmonic oscillator (such as a mass on a spring) can be modeled autonomously if no external time-varying forces are acting on the system. The system’s behavior depends only on its position and velocity at any point in time. In the classical gravitational two-body problem in celestial mechanics, where two bodies interact only through their mutual gravitational attraction, the motion can be described autonomously. The positions and velocities of the two bodies determine their future motion, independent of any external time-dependent factors.

6. Thermodynamics of Isolated Systems:
In an isolated thermodynamic system, where there is no exchange of energy or matter with the surroundings, the internal state (e.g., pressure, temperature, volume) evolves autonomously based on the system’s internal conditions.

These examples illustrate systems where internal dynamics govern the evolution, and time or external influences do not explicitly appear in the equations. However, in many real-world cases external factors come into play, making non-autonomous formulations more appropriate for capturing the full complexity of natural behaviors. A pendulum that is periodically synchronized, for example a child pushed on a swing set, may be formulated either as a forced response within an autonomous set of equations or as a non-autonomous description if the swing pusher carefully guides the cycle.

This is where the distinctions between autonomous vs non-autonomous and forced vs natural responses should be elaborated.

Understanding the Structure of the General Solution

In the case of a forced linear second-order dynamical system, the general solution to the system is typically the sum of two components:

Homogeneous (natural) solution: This is the solution to the system when there is no external forcing (i.e., the forcing term is zero).

Particular solution: This is the solution driven by the external forcing.

The homogeneous solution depends only on the internal properties of the system (such as natural frequency, damping, etc.) and is the solution when F(t) = 0.

The particular solution is directly related to the forcing function F(t), which can be time-dependent in the case of a non-autonomous system.
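Written out for a generic damped, forced linear second-order system (the symbols m, c, k here are generic mass, damping, and stiffness parameters, not specific to any climate model):

$$m\ddot{x} + c\dot{x} + kx = F(t), \qquad x(t) = x_h(t) + x_p(t)$$

where $x_h$ solves the homogeneous equation with $F(t)=0$ and decays at the natural damping rate, while for a periodic forcing $F(t) = A\sin(\omega t)$ the particular solution takes the form $x_p(t) = B\sin(\omega t + \phi)$, oscillating at the forcing frequency rather than the natural frequency.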

So let’s  consider the autonomous vs non-autonomous context.

Autonomous System: In an autonomous system, even though the system is subject to forcing, the forcing term does not explicitly depend on time but rather on internal state variables (such as x or dx/dt). Here, the particular solution would also be state-dependent and would not explicitly involve time as an independent variable.

Non-Autonomous System: In a non-autonomous system, the forcing term explicitly depends on time, such as F(t) = A sin(w t). This external time-dependent forcing drives the particular solution. While the homogeneous solution remains autonomous (since it’s based on the system’s internal properties), the particular solution reflects the non-autonomous nature of the system.

The key insight concerns the non-autonomous particular solution. Even though a system’s response can include components from the homogeneous solution (which are autonomous in nature), the particular solution in a non-autonomous system will be time-dependent and follow the time-dependence of the external forcing.

So consider the transition from autonomous to non-autonomous: when you introduce a periodic forcing function F(t), the particular solution becomes non-autonomous, even though the overall system response still includes the autonomous homogeneous solution. This results in the system being classified as non-autonomous, as the particular solution carries the time-dependent behavior, despite the autonomous structure of the homogeneous solution.

Summary: A forced response in a linear second-order system can include both autonomous and non-autonomous components. Even though the homogeneous solution remains autonomous, the particular solution introduces non-autonomous characteristics when the forcing term depends explicitly on time. In non-autonomous systems, the forcing introduces time dependence in the particular solution, making the overall system non-autonomous, even though part of the response (the homogeneous solution) is autonomous.
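A minimal numerical sketch of that summary, assuming nothing more than a generic damped oscillator with arbitrary parameter values: the homogeneous transient dies away and the long-time response locks onto the forcing period.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.3, 4.0          # arbitrary mass, damping, stiffness
A, w = 1.0, 1.0                  # forcing amplitude and frequency (rad/s)

def rhs(t, y):
    x, v = y
    return [v, (A * np.sin(w * t) - c * v - k * x) / m]

sol = solve_ivp(rhs, (0, 200), [1.0, 0.0], max_step=0.05)

# After the transient (homogeneous part) decays, the response repeats with
# the forcing period 2*pi/w, not the natural period 2*pi/sqrt(k/m).
tail = sol.y[0][sol.t > 150]
print(f"steady-state amplitude ≈ {tail.max():.3f}")
```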

Order overrides chaos

Dimensionality reduction of chaos by feedbacks and periodic forcing is a source of natural climate change, by P. Salmon, Climate Dynamics (2024)

Bottom line is that a forcing will tend to reduce chaos by creating a pattern to follow, thus the terminology of “forced response”. This has implications for climate prediction. The first few sentences of the abstract set the stage:

The role of chaos in the climate system has been dismissed as high dimensional turbulence and noise, with minimal impact on long-term climate change. However theory and experiment show that chaotic systems can be reduced or “controlled” from high to low dimensionality by periodic forcings and internal feedbacks. High dimensional chaos is somewhat featureless. Conversely low dimensional borderline chaos generates pattern such as oscillation, and is more widespread in climate than is generally recognised. Thus, oceanic oscillations such as the Pacific Decadal and Atlantic Multidecadal Oscillations are generated by dimensionality reduction under the effect of known feedbacks. Annual periodic forcing entrains the El Niño Southern Oscillation.

In Chapters 11 and 12 of Pukite, P., Coyne, D., & Challou, D. (2019), Mathematical Geoenergy, John Wiley & Sons, I cited forcing as a chaos reducer:

It is well known that a periodic forcing can reduce the erratic fluctuations and uncertainty of a near‐chaotic response function (Osipov et al., 2007; Wang, Yang, Zhou, 2013).

But that’s just a motivator. Tides are the key, acting primarily on the subsurface thermocline. Salmon’s figure comparing the AMO to Barents Sea subsurface temperature substantiates the link between two separated regions by something more than a nebulous “teleconnection”.

Likely every ocean index has a common-mode mechanism. The tidal forcing by itself comes close to providing an external synchronizing source, but requires what I refer to as an LTE modulation to zero in on the exact forced response. Read the previous blog post to get a feel for how this works:

As Salmon notes, it’s known at some level that an annual/seasonal impulse is entraining or synchronizing ENSO, and likely the PDO and AMO as well. The top guns at NASA JPL point out that the main lunisolar terms are at monthly, 206-day, annual, 3-year, and 6-year periods, and this is what is used to model the forcing; see the following two charts.

Now note how the middle panel in each of the following modeled climate indices does not change markedly. The most challenging aspect is the inherent structural sensitivity of the manifold1 mapping involved in the LTE modulation. As the Darwin fit shows, the cross-validation is better than it may appear, as the out-of-band interval does not take much of a nudge to become synchronized with the data. Note also that the multidecadal nature of an index such as AMO may be ephemeral: the yellow cross-validation band does show valleys in what appears to be a longer multidecadal trend, capturing the long-period variations in the tides when modulated by an annual (biennial in this case) impulse.

Model config repo: https://gist.github.com/pukpr/3a3566b601a54da2724df9c29159ce16?permalink_comment_id=5108154#gistcomment-5108154


1 The term manifold has an interesting etymology. Phonetically it is close to “many fold”, which is precisely what’s happening here: the LTE modulation can fold the forcing input over many times, in proportion to the mode of the standing wave produced. A higher standing-wave mode will thus have “many folds”, in contrast to the lowest standing-wave mode. At the limit, the QBO, with an ostensibly wavenumber=0 mode, will have no folds and will be to first order a pass-through linear amplification of the forcing, though likely with higher modes mixed in to give the time-series some character.
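As a toy illustration of the folding (assuming the sinusoidal-of-forcing form sin(k·F(t)) for the LTE modulation, and a made-up slowly varying F rather than the actual tidal series):

```python
import numpy as np

t = np.linspace(0, 10, 2001)
F = np.sin(2 * np.pi * t / 10)           # stand-in slowly varying forcing

for k in (1, 20):                        # low vs high LTE wavenumber
    y = np.sin(k * F)                    # the "folded" modulation
    crossings = np.sum(np.diff(np.sign(y)) != 0)
    print(f"k={k:2d}: {crossings} zero crossings over the interval")
```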

Common forcing for ocean indices

In Mathematical Geoenergy, Chapter 12, a biennially-impulsed lunar forcing is suggested as a mechanism to drive ENSO. The current thinking is that this lunar forcing should be common across all the oceanic indices, including AMO for the Atlantic, IOD for the Indian, and PDO for the non-equatorial north Pacific. The global temperature extreme of the last year had too many simultaneous concurrences among the indices for this not to be taken seriously.

NINO34

PDO

AMO

IOD-East

IOD-West

Each one of these uses a nearly identical annual-impulsed tidal forcing (shown as the middle green panel in each), with a 5-year window providing a cross-validation interval. Many cross-validation possibilities are available, since the tidal factors are held essentially invariant across all the climate indices.

The approach follows three steps, as shown below.

The first step is to generate the long-period tidal forcing. I go into an explanation of the tidal factors selected in a Real Climate comment here.

Then apply the lagged response of an annual impulse, in this case alternating in sign every other year, which generates the middle panel in the flow chart schematic (and the middle panel in the indexed models above).

Finally, the Laplace’s Tidal Equation (LTE) modulation is applied, with the lower right corner inset showing the variation among indices. This is where the variability occurs — the best approach is to pick a slow fundamental modulation and generate only integer harmonics of this fundamental. So, what happens is that different harmonics are emphasized depending on the oceanic index chosen, corresponding to the waveguide structure of the ocean basin and what standing waves are maximally resonant or amplified.
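A minimal sketch of this three-step flow, with made-up tidal constituents, a placeholder impulse-response kernel, and placeholder LTE wavenumbers; the calibrated factors live in the linked GIST, so this only shows the order of operations:

```python
import numpy as np

t = np.arange(0, 140, 1 / 12)                      # ~140 years, monthly steps

# Step 1: long-period tidal forcing (illustrative constituents only)
forcing = (0.6 * np.sin(2 * np.pi * t / 18.6)      # nodal period, years
           + 0.4 * np.sin(2 * np.pi * t / 8.85)    # apsidal period, years
           + 0.3 * np.sin(2 * np.pi * t / 4.53))   # perigee/nodal beat, years

# Step 2: annual impulse, alternating sign every other year (biennial),
# convolved with a simple lagged (exponential) response
impulse = np.zeros_like(t)
idx = np.arange(0, len(t), 12)                     # one impulse per year
impulse[idx] = np.where((idx // 12) % 2 == 0, 1.0, -1.0)
kernel = np.exp(-np.arange(0, 36) / 12.0)          # placeholder lag response
F = np.convolve(forcing * impulse, kernel)[:len(t)]

# Step 3: LTE modulation -- integer harmonics of a slow fundamental wavenumber
k0 = 2.0                                           # placeholder fundamental
model = sum(a * np.sin(n * k0 * F + p)
            for n, a, p in [(1, 1.0, 0.0), (3, 0.5, 0.4), (7, 0.3, -1.1)])
```

Different indices would emphasize different harmonics (the n, a, p triples), while the forcing F stays common.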

Note that for a dipole behavior such as ENSO, the LTE modulations will be mirror inverses at the maximally extreme locations, in this case Darwin and Tahiti.

A machine learning application is free to scrape the following GIST GitHub site for model fitting artifacts.

https://gist.github.com/pukpr/3a3566b601a54da2724df9c29159ce16

Another analysis involved a recursively cycled fit between AMO and PDO: the optimizer fit AMO for 2.5 minutes and then PDO for 2.5 minutes, cycling 50 times. This created a common forcing with an optimally shared fit, with the forcing baselined to PDO.

PDO

AMO

NINO34

IOD-East

IOD-West

Darwin

Tahiti

The table above shows the LTE modulation factors for the Darwin and Tahiti model fits. The highlighted blocks show the phase of the modulation, which should differ by π radians for a perfect dipole and the higher harmonics associated with it. (The K0 wavenumber=0 term has no phase, just a sign.) For the shared modes 1, 45, 23, 36, 18, 39, and 44, the average phase difference is 3.09 radians, close to π (and K0 switches sign).

1.23-(-1.72) = 2.95 
1.47-(-2.05) = 3.52
-2.89-(0.166) = -3.056 
-0.367-(-2.58) = 2.213 
1.59-(-2.175) = 3.765 
0.27 - (-2.84) = 3.11 
-1.87 -1.14 = -3.01 

Average of magnitudes: (2.95+3.52+3.056+2.213+3.765+3.11+3.01)/7 = 3.0891

Contrast this with the IOD East/West dipole. Only the K0 (wavenumber=0) term shows a reversal in sign; the other LTE modulation phases are within 1 radian of each other, indicating much less of a dipole behavior in those terms. It’s possible that these sites don’t span a true dipole, either by nature or because of the siting of the measurements.

Cross-validating a large interval span on PDO

using CC (correlation coefficient) as the fitting metric

using the DTW metric, which pulls out more of the annual/semi-annual signal

adding a 3rd harmonic

Complement of the fitting interval. Note that the spectral composition maintains the same harmonics, indicating that the mapped structure is stationary in the sense that the tidal pattern is not changing and the LTE modulation is largely fixed.

This is the resolved tidal forcing, at a finer resolution than the annual impulse sampling used in the models above.

Below one can see the primary 27.5545-day lunar anomalistic cycle, mixed with the draconic 27.2122-day (13.6061-day half) cycle to create the 6/3-year modulation, and the 206-day perigee-syzygy cycle (or 412-day full cycle, since the 206-day figure includes the antipodal full-moon or new-moon orientation).
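Those beat periods follow directly from the cited monthly cycles; a quick check (the 29.5306-day synodic month is the standard value, used here for the perigee-syzygy beat):

```python
anomalistic, draconic, synodic = 27.5545, 27.2122, 29.5306  # days

beat = lambda a, b: 1 / abs(1 / a - 1 / b)
print(beat(anomalistic, draconic) / 365.25)   # ~6.0 years (anomalistic-draconic beat)
print(beat(anomalistic, synodic))             # ~411.8 days; half-cycle ~206 days
```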


Fundy-mental (continued)

I’m looking at side-band variants of the lunisolar orbital forcing because that’s where the data is empirically taking us. I had originally proposed solving Laplace’s Tidal Equations (LTE) using a novel analytical derivation published several years ago (see Mathematical Geoenergy, Wiley/AGU, 2019). The takeaway from the math results, given that the LTEs form the primitive basis of the shallow-water approximation to oceanic fluid dynamics used in GCMs, was that my solution involves a specific type of non-linear modulation or amplification of the input tidal forcing. This isn’t the typical diurnal/semi-diurnal tidal forcing, however; because of the slower inertial response of the ocean volume, the targeted tidal cycles are the longer-period monthly and annual ones. Moreover, since very few climate scientists are proficient in signal processing and all the details of aliasing and side-bands, this aspect has remained hidden (again, thank Richard Lindzen for opening the book on tidal influences and then slamming it shut for decades).

Continue reading

Bay of Fundy subbands

The recent total solar eclipse revived lots of thought about Earth’s ecliptic plane. In terms of forcing, having the Moon temporarily in the ecliptic plane while also blocking the Sun is not only a rare and (to some people) exciting event, it’s also an extreme regime with respect to the Earth, as the combined lunisolar reinforcement is maximized.

In fact this is not just any tidal forcing; rather, it’s in the class of tidal forcing that has been overlooked over time in preference to the conventional diurnal tides. Many of those who tracked the eclipse as it traced a path from Texas to Nova Scotia may have noted that the Moon covers lots of ground in a day, but that’s mainly because of the Earth’s rotation. Removing that rotation to isolate the mean orbital path is tricky, and that longer time-span is where long-period tidal effects and inertial motion can build up and show extremes in sea-level change. Consider the 4.53-year extreme tidal cycle observed at the Bay of Fundy in Nova Scotia (see Desplanque et al). This is predicted if the long-period lunar perigee anomaly (the 27.554-day anomalistic month and its 8.85-year apsidal precession return cycle) amplifies the long-period lunar ecliptic nodal cycle, as every 9.3 years the lunar path intersects the ecliptic plane, one crossing ascending and the other descending, with the Moon’s gravitational pull directly aligned with the Sun’s. The predicted frequencies are 1/8.85 ± 2/18.6 = 1/4.53 & 1/182, the latter identified by Keeling in 2000. The other oft-mentioned tidal extreme is at 18.6 years, which Desplanque identifies as the other long-period extreme at the Bay of Fundy, and which NASA also flagged as an extreme nuisance tide via a press release and a spate of “Moon wobble” news articles three years ago.
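Spelled out as frequencies in cycles per year, the two combinations are:

$$\frac{1}{8.85} + \frac{2}{18.6} \approx 0.1130 + 0.1075 = 0.2205 \approx \frac{1}{4.53}, \qquad \frac{1}{8.85} - \frac{2}{18.6} \approx 0.0055 \approx \frac{1}{182}$$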

What I find troubling is that I can’t find a scholarly citation where the 4.53 year extreme tidal cycle is explained in this way. It’s only reported as an empirical observation by Desplanque in several articles studying the Bay of Fundy tides. 

Continue reading

Proof for allowed modes of an ideal QBO

In formal mathematical terms of geometry/topology/homotopy/homology, let’s try proving that a wavenumber=0 cycle of east/west motion inside an equatorial toroidal waveguide can only be forced by the z-component of an (x, y, z) vector, where x and y lie in the equatorial plane.

To address this question, let’s dissect the components involved and prove within the constraints of geometry, topology, homotopy, and homology, focusing on valid mathematical principles.

Continue reading

Are the QBO disruptions anomalous?

Based on the previous post on applying Dynamic Time Warping as a metric for LTE modeling of oceanic indices, it makes sense to apply the same metric to the QBO model of atmospheric winds. A characteristic of QBO data is the sharp transitions of the wind reversals. As described previously, DTW allows a fit to adjust the alignment between model and data without incurring the over-fitting penalty that a conventional correlation coefficient often leads to.
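For reference, a bare-bones dynamic-programming DTW distance in Python; this is the textbook formulation, not the specific implementation used in the fits, and it omits windowing and slope constraints:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two series with the same shape but slightly shifted transitions
x = np.sin(np.linspace(0, 6.28, 100))
y = np.sin(np.linspace(0, 6.28, 100) - 0.3)
print(dtw_distance(x, y))   # stays small despite the misalignment
```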

Continue reading

Dynamic Time Warping

It is useful to note that the majority of the posts written for this blog are in support of the mathematical analysis formulated in Mathematical Geoenergy (Wiley/AGU, 2018). As new data becomes available and new techniques for model fitting & parameter estimation, aka inverse modeling (predominantly from the machine learning community), are suggested, an iterative process of validation keeps the GeoEnergyMath models robust and accurately reflective of the underlying observed behaviors. This should of course be done in conjunction with submitting significant findings to the research literature pipeline. However, as publication is pricey, my goal is to make the cross-validation so obvious that I can get an invitation for a review paper, with submission costs waived. Perhaps this post will be the deal-maker, certainly not the deal-breaker, but you can be the judge.

Continue reading

Mechanical Vibration Sidebanding

Similar to the LTE modulation in Chapter 12, modulation of a periodic impulse can lead to complex waveforms.

Prompt: Find behaviors that are periodic impulse trains modulated by another periodic process, leading to a waveform with strong subbands.

ChatGPT: The description you provided suggests a phenomenon where a periodic impulse train, a series of regular, sharp signals, is modulated by another periodic process, leading to a composite waveform characterized by pronounced subbands. This can be observed in various fields, including signal processing, telecommunications, and even in natural systems. Here are a few examples where such behaviors might be observed:

  1. Frequency Modulation (FM) in Telecommunications: In FM, the frequency of a carrier wave is varied in accordance with the amplitude of a modulating signal. If the modulating signal is a periodic impulse train, the resulting FM signal will exhibit a spectrum with sidebands at multiples of the modulating frequency. These sidebands can be viewed as subbands in the spectrum, resulting from the modulation process.
  2. Amplitude Modulation (AM) in Radio Broadcasting: In AM, the amplitude of a carrier signal is varied in proportion to that of a modulating signal, which could be a periodic impulse train. This modulation results in a waveform where the carrier frequency is accompanied by sidebands (subbands) at frequencies equal to the sum and difference of the carrier frequency and the modulating frequencies.
  3. Biological Rhythms: In biological systems, certain rhythms or signals can modulate others. For example, the circadian rhythm (a roughly 24-hour cycle) can modulate other physiological processes, leading to patterns that have periodic qualities modulated by the circadian rhythm, resulting in complex waveform behaviors with distinguishable sub-patterns or subbands.
  4. Vibration Analysis in Mechanical Systems: In mechanical systems, a periodic impulse (such as that from a rotating imbalance) can modulate the vibration signatures of other components of the system. This can lead to complex vibration signatures that contain subbands corresponding to the modulating frequencies.
  5. Electroencephalography (EEG) in Neuroscience: In EEG signals, certain brain wave patterns can be modulated by underlying rhythmic processes, such as the sleep-wake cycle. This can result in EEG waveforms where the primary rhythms are modulated, leading to complex patterns with distinct frequency bands.
  6. Optical Communications: In optical fiber communications, a technique known as Wavelength Division Multiplexing (WDM) can be seen as modulating multiple light signals (each a periodic process) onto a single fiber. The combined signal exhibits a spectrum with distinct subbands, each corresponding to one of the modulated signals.

These examples illustrate the diversity of systems and contexts in which periodic impulse trains modulated by another periodic process can lead to complex waveforms with pronounced subbands, each reflecting the interplay between the modulating signals and the base periodic impulses.

The illustration above showcases vibration analysis in mechanical systems, depicting a complex waveform resulting from the modulation of a periodic impulse train by another periodic process. It includes a spectral analysis graph, highlighting the pronounced subbands within the waveform, and a representation of a mechanical system, such as a rotating shaft with an imbalance, along with the corresponding vibration signal being analyzed. This image is designed to be detailed and technical, suitable for educational purposes in the field of engineering.

More ChatGPT: The spectral domain representation below illustrates the result of a periodic impulse train, such as that from a rotating imbalance, being modulated by another vibrating periodic process. In this simulation, the periodic impulse train has a frequency of 50 Hz, and it is modulated by a lower frequency signal at 2 Hz.

The Fourier Transform of the modulated signal reveals pronounced subbands around the base frequency of the impulse train and its harmonics. These subbands are a direct consequence of the modulation process, where the amplitude of the impulse train is modulated by the lower frequency signal, leading to the creation of sidebands in the frequency domain. This graphical representation captures the complex interplay between the periodic impulse train and the modulating signal, resulting in a composite waveform characterized by these pronounced subbands.
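A minimal reproduction of that kind of simulation in Python (the 50 Hz impulse train and 2 Hz modulator follow the description above; the sampling rate, duration, and modulation depth are arbitrary choices):

```python
import numpy as np

fs, T = 2000, 4.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

impulses = np.zeros_like(t)
impulses[::fs // 50] = 1.0              # 50 Hz periodic impulse train
modulated = impulses * (1 + 0.8 * np.sin(2 * np.pi * 2 * t))   # 2 Hz AM

spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Peaks appear at 50 Hz and its harmonics, each flanked by sidebands at +/- 2 Hz
for f0 in (48, 50, 52, 98, 100, 102):
    i = np.argmin(np.abs(freqs - f0))
    print(f"{f0:3d} Hz : {spectrum[i]:.1f}")
```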

This is a good recent citation : He, G., Ding, K., Li, W. and Li, Y. (2017), Frequency response model and mechanism for wind turbine planetary gear train vibration analysis. IET Renewable Power Generation, 11: 425-432. https://doi.org/10.1049/iet-rpg.2016.0236

Abstract: Frequency components of complicated asymmetric modulation sidebands, existing in the vibration of the healthy planetary gear train, are prone to be erratically diagnosed as fault characteristics, which leads to difficulties in fault diagnosis of wind turbine planetary gearbox. The factors affecting the modulation sideband, i.e. the periodical time-varying transmission path and meshing force direction, are analysed. Considering both the meshing vibrations of the planet–ring and planet–sun gear pairs, a mathematical model was developed to analyse the planetary gear train’s vibration response. Simulation and experiments were conducted, and the mechanism of vibration modulation sidebands was revealed. The modulation sideband is not caused by the meshing vibration itself, but by the testing method that sensors are fixed on the ring gear or gearbox casing. The frequency components and amplitudes of the sidebands are determined by the tooth number of the ring gear and sun gear, the number of planet gears and their initial assembling phases. The asymmetric modulation sideband is mainly caused by the phase difference of the initial planets’ assembling phase.
