The term digital twin is a relatively recent coinage, but the essential idea has been around for decades. In the past, a digital twin would have been called a virtual simulation of a specific system, encoded via a programming language. For a system that had already been built, the virtual simulation emulated all the behaviors and characteristics of that system, only operating on a computer, with any necessary interactive controls and displays provided on a console, either real or virtual. A widely known example of a virtual simulation is the flight simulator, which in historical terms was the industrial forerunner of today’s virtual reality. A virtual simulation can also be used during the design of a system, with the finished digital twin providing a blueprint for the actual synthesis of the end product. This approach has likewise been practiced for decades, both in the electronics industry via logic synthesis of integrated circuits from a hardware description language, and with physical products via 3D printing from CAD models.
Climate
Azimuth Project
The Azimuth Project discussion forum has been erased from existence, so all of its interesting contributions are now inaccessible from web links and Google searches. This is especially unfortunate given that it was one of only a few open discussion forums on climate.

In the spirit of continuity, a spin-off discussion forum exists on the remnants of the azimuth-project GitHub organization.
Intersectionality and No Regrets
The issue is Intersectionality, and the solution is a No Regrets Strategy: whatever choice you make, you won’t regret it, because at least some of the bases are covered.

After filling in the blanks of the Venn diagram:

Quick YouTube explainer of No Regrets
It is also interesting to ask GPT-4 what No Regrets means in the context of the earth, i.e. ContextEarth.com:
What does the “No Regrets Strategy” mean in terms of mitigating climate change, fossil fuel depletion, and air pollution?

The “No Regrets Strategy” in the context of climate change mitigation, fossil fuel depletion, and air pollution refers to a set of policies, measures, and actions that provide multiple benefits, regardless of the uncertainties associated with the future development of these issues. These measures are seen as advantageous and worthwhile, even if the worst predictions of climate change, fossil fuel depletion, or air pollution do not materialize.
The No Regrets Strategy typically includes:
- Energy efficiency improvements: By improving energy efficiency, we can reduce energy consumption, decrease greenhouse gas emissions, reduce dependence on fossil fuels, and lower air pollution. These improvements have economic benefits, as they lead to cost savings for businesses and consumers.
- Renewable energy development: Investing in renewable energy sources, such as solar, wind, and hydro, helps to diversify energy supply, reduce greenhouse gas emissions, and decrease reliance on fossil fuels. This transition also has the potential to create new jobs and stimulate economic growth.
- Afforestation and reforestation: Planting trees and restoring forests can help to sequester carbon dioxide, enhance biodiversity, and improve air and water quality. This strategy also helps to protect ecosystems and provides valuable recreational spaces for communities.
- Sustainable transportation: Encouraging public transportation, walking, cycling, and the use of electric vehicles reduces fossil fuel consumption, greenhouse gas emissions, and air pollution. These measures can also help to alleviate traffic congestion and improve public health.
- Waste reduction and recycling: Reducing waste generation and promoting recycling conserves resources, reduces greenhouse gas emissions, and lowers air and water pollution. These actions can also create jobs and generate economic benefits.
By implementing these no-regrets measures, societies can mitigate climate change, reduce fossil fuel dependence, and improve air quality, while also realizing economic, social, and environmental benefits.
The bible of the No Regrets Strategy is Mathematical Geoenergy.
Mathematical Geoenergy

Our book Mathematical Geoenergy presents a number of novel approaches that each deserve a research paper on their own. Here is the list, ordered roughly by importance (IMHO):
- Laplace’s Tidal Equation Analytic Solution (Ch 11, 12): A solution of a Navier-Stokes variant along the equator. Laplace’s Tidal Equations are a simplified version of Navier-Stokes, and the equatorial topology allows an exact closed-form analytic solution. This could qualify for the Clay Institute Millennium Prize if the practical implications are considered, but it is a lower-dimensional solution than a complete 3-D Navier-Stokes formulation requires.
- Model of El Nino/Southern Oscillation (ENSO) (Ch 12): A tidally forced model of the equatorial Pacific’s thermocline sloshing (the ENSO dipole) which assumes a strong annual interaction. Not surprisingly, this uses the Laplace’s Tidal Equation solution described above; otherwise the tidal pattern connection would have been discovered long ago.
- Model of Quasi-Biennial Oscillation (QBO) (Ch 11): A model of the equatorial stratospheric winds, which cycle by reversing direction every ~28 months. This incorporates the idea of amplified cycling of the sun and moon nodal declination pattern on the atmosphere’s tidal response.
- Origin of the Chandler Wobble (Ch 13): An explanation for the ~433-day cycle of the Earth’s Chandler wobble. Finding this is a fairly obvious consequence of modeling the QBO.
- The Oil Shock Model (Ch 5): A data-flow model of oil extraction and production which allows for perturbations. We are seeing this in action with the recession caused by oil supply perturbations due to the coronavirus pandemic.
- The Dispersive Discovery Model (Ch 4): A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.
- Ornstein-Uhlenbeck Diffusion Model (Ch 6): Applying Ornstein-Uhlenbeck diffusion to describe the decline and asymptotic limiting flow from volumes such as occur in fracked shale oil reservoirs.
- The Reservoir Size Dispersive Aggregation Model (Ch 4): A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.
- Origin of Tropical Instability Waves (TIW) (Ch 12): As the ENSO model was developed, a higher harmonic component was found which matches TIW.
- Characterization of Battery Charging and Discharging (Ch 18): Simplified expressions for modeling Li-ion battery charging and discharging profiles by applying dispersion on the diffusion equation, which reflects the disorder within the ion matrix.
- Anomalous Behavior in Dispersive Transport explained (Ch 18): Photovoltaic (PV) material made from disordered and amorphous semiconductors shows poor photoresponse characteristics. Solving the simple entropic dispersion relations, or the more general Fokker-Planck equation, leads to good agreement with the data over orders of magnitude in current and response times.
- Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials (Ch 20): The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.
- Wind Energy Analysis (Ch 11): Universality of the wind energy probability distribution, obtained by applying maximum entropy to the observed mean energy. Data from Canada and Germany. Found a universal BesselK distribution which improves on the conventional Rayleigh distribution.
- Terrain Slope Distribution Analysis (Ch 16): Explanation and derivation of the topographic slope distribution across the USA. This uses the mean energy and the maximum entropy principle.
- Thermal Entropic Dispersion Analysis (Ch 14): Solving the Fokker-Planck equation or Fourier’s Law for thermal diffusion in a disordered environment. A subtle effect, but the result is a simplified expression not involving the transcendental erf function. Useful in ocean heat content (OHC) studies.
- The Maximum Entropy Principle and the Entropic Dispersion Framework (Ch 10): The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.
- Solving the Reserve Growth “enigma” (Ch 6): An application of dispersive discovery at a localized level which models the hyperbolic reserve growth characteristics observed.
- Shocklets (Ch 7): A kernel approach to characterizing production from individual oil fields.
- Reserve Growth, Creaming Curve, and Size Distribution Linearization (Ch 6): An obvious linearization of this family of curves, related to Hubbert Linearization but more useful since it stems from first principles.
- The Hubbert Peak Logistic Curve explained (Ch 7): The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.
- Laplace Transform Analysis of Dispersive Discovery (Ch 7): Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.
- Gompertz Decline Model (Ch 7): Exponentially increasing extraction rates lead to steep production decline.
- The Dynamics of Atmospheric CO2 Buildup and Extrapolation (Ch 9): Convolving a fat-tailed CO2 residence-time impulse response function with a fossil-fuel emissions stimulus. This shows the long latency of the CO2 buildup very straightforwardly (see the convolution sketch following this list).
- Reliability Analysis and Understanding the “Bathtub Curve” (Ch 19): Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.
- The Overshoot Point (TOP) and the Oil Production Plateau (Ch 8): How increases in extraction rate can maintain production levels.
- Lake Size Distribution (Ch 15): Analogous to explaining reservoir size distribution; uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur (by comparison).
- The Quandary of Infinite Reserves due to Fat-Tail Statistics (Ch 9): Demonstrates that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.
- Oil Recovery Factor Model (Ch 6): A model of oil recovery which takes reservoir size into account.
- Network Transit Time Statistics (Ch 21): Dispersion in TCP/IP transport rates leads to the measured fat tails in round-trip time statistics on loaded networks.
- Particle and Crystal Growth Statistics (Ch 20): Detailed model of ice crystal size distribution in high-altitude cirrus clouds.
- Rainfall Amount Dispersion (Ch 15): Explanation of rainfall variation based on dispersion in the rate of cloud build-up along with dispersion in critical size.
- Earthquake Magnitude Distribution (Ch 13): Distribution of earthquake magnitudes based on dispersion of energy buildup and a critical threshold.
- IceBox Earth Setpoint Calculation (Ch 17): Simple model for determining the earth’s setpoint temperature extremes (current and low-CO2 icebox earth).
- Global Temperature Multiple Linear Regression Model (Ch 17): The global surface temperature records show variability that is largely due to the GHG rise, along with fluctuating changes due to ocean dipoles such as ENSO (via the SOI measure and also AAM) and sporadic volcanic eruptions impacting the atmospheric aerosol concentrations.
- GPS Acquisition Time Analysis (Ch 21): Engineering analysis of GPS cold-start acquisition times, using Maximum Entropy on EMI clutter statistics.
- 1/f Noise Model (Ch 21): Deriving a random noise spectrum from maximum entropy statistics.
- Stochastic Aquatic Waves (Ch 12): Maximum Entropy analysis of the wave-height distribution of surface gravity waves.
- The Stochastic Model of Popcorn Popping (Appx C): The novel explanation of why popcorn popping follows the same bell-shaped curve as the Hubbert Peak in oil production. This can be used to model epidemics, etc.
- Dispersion Analysis of Human Transportation Statistics (Appx C): An alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.
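As a concrete illustration of how compact some of these models are, the atmospheric CO2 item above boils down to a single convolution. Below is a minimal Python sketch of that operation; the kernel shape, growth rate, and scaling constants are illustrative placeholders, not the calibrated values from the book.

```python
import numpy as np

# Placeholder fossil-fuel emissions stimulus (GtC/yr), growing exponentially
years = np.arange(1850, 2101)
emissions = 0.2 * np.exp(0.02 * (years - 1850))

# Fat-tailed CO2 adjustment kernel: a large fraction of each year's pulse
# lingers for a long time (hyperbolic decay with an assumed 40-year scale)
t = np.arange(len(years))
kernel = 0.5 / (1.0 + t / 40.0)

# Convolve the stimulus with the impulse response and keep the causal part
response_gtc = np.convolve(emissions, kernel)[:len(years)]

# Roughly 2.13 GtC of airborne carbon corresponds to 1 ppm of atmospheric CO2
delta_ppm = response_gtc / 2.13
print(f"Illustrative CO2 increment by {years[-1]}: {delta_ppm[-1]:.0f} ppm above baseline")
```

The long latency falls directly out of the fat tail of the kernel: even after the stimulus is reduced, the accumulated response relaxes very slowly.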
AO
The Arctic Oscillation (AO) dipole behaves in a manner correlated with the North Atlantic Oscillation (NAO) dipole. We can see this in two ways. First, and most straightforwardly, the correlation coefficient between the AO and NAO time-series is above 0.6.
Secondly, we can use the model of the NAO from the last post and refit the parameters to the AO data (data also here), but spanning an orthogonal interval. Then we can compare the constituent lunisolar factors for NAO and AO for correlation, and further discover that this also doubles as an effective cross-validation for the underlying LTE model (as the intervals are orthogonal).

Top panel is a model fit for AO between 1900-1950, and below that is a model fit for NAO between 1950-present. The lower pane is the correlation for a common interval (left) and for the constituent lunisolar factors for the orthogonal interval (right)
Only the anomalistic factor shows a less-than-perfect correlation, and even that remains quite high.
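For reference, the first check is a one-liner once the two monthly index series are loaded and aligned; a minimal sketch follows, with hypothetical CSV filenames and column names standing in for the NOAA index downloads.

```python
import numpy as np
import pandas as pd

# Hypothetical files with 'date' and 'value' columns -- substitute the actual
# monthly AO and NAO index downloads
ao = pd.read_csv("ao_monthly.csv", parse_dates=["date"], index_col="date")["value"]
nao = pd.read_csv("nao_monthly.csv", parse_dates=["date"], index_col="date")["value"]

# Restrict both series to their common months before correlating
common = ao.index.intersection(nao.index)
r = np.corrcoef(ao.loc[common], nao.loc[common])[0, 1]
print(f"AO vs NAO correlation coefficient: {r:.2f}")  # above 0.6 per the text
```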
NAO
The challenge of validating models of climate oscillations such as ENSO and QBO rests primarily in our inability to perform controlled experiments. Because of this shortcoming, we can either (1) predict future behavior and validate via the wait-and-see process, or (2) creatively apply techniques such as cross-validation on currently available data. The first is a non-starter, because it is pointless to wait decades for validation results to confirm a model when it is entirely possible to do something today via the second approach.
There are a variety of ways to perform model cross-validation on measured data.
In its original and conventional formulation, cross-validation works by checking one interval of time-series against another, typically by training on one interval and then validating on an orthogonal interval.
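A schematic of that conventional procedure is sketched below. The harmonic least-squares fit is only a stand-in for the actual nonlinear LTE fitting used in these posts; the point is the mechanics of calibrating on one interval and scoring on the orthogonal one.

```python
import numpy as np

def design_matrix(t, periods):
    # Columns: constant plus sine/cosine pairs for each fixed period
    cols = [np.ones_like(t)]
    for p in periods:
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    return np.column_stack(cols)

def cross_validate(t, y, split, periods):
    """Fit on t < split, validate on the orthogonal interval t >= split."""
    train, test = t < split, t >= split
    coef, *_ = np.linalg.lstsq(design_matrix(t[train], periods), y[train], rcond=None)
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return (corr(design_matrix(t[train], periods) @ coef, y[train]),   # in-sample
            corr(design_matrix(t[test], periods) @ coef, y[test]))     # out-of-sample
```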
Another way to cross-validate is to compare two sets of time-series data collected on behaviors that are potentially related. For example, in the case of ocean tidal data collected across spatially separated geographic regions, the sea-level-height (SLH) time-series will not necessarily be correlated, but the underlying lunar and solar forcing factors will be closely aligned, give or take a phase factor. This is intuitively understandable, since the two locations share a common-mode signal forcing due to the gravitational pull of the moon and sun, with the differences in response due to the geographic location and the local spatial topology and boundary conditions. For tides, this is the consensus understanding, and tidal prediction algorithms have stood the test of time.
In the previous post, cross-validation on distinct data sets was evaluated assuming common-mode lunisolar forcing. One cross-validation was done between the ENSO time-series and the AMO time-series. Another cross-validation was performed for ENSO against PDO. The underlying common-mode lunisolar forcings were highly correlated as shown in the featured figure. The LTE spatial wave-number weightings were the primary discriminator for the model fit. This model is described in detail in the book Mathematical Geoenergy to be published at the end of the year by Wiley.
Another common-mode cross-validation possible is between ENSO and QBO, but in this case it is primarily in the Draconic nodal lunar factor — the cyclic forcing that appears to govern the regular oscillations of QBO. Below is the Draconic constituent comparison for QBO and the ENSO.

The QBO and ENSO models only show a common-mode correlated response with respect to the Draconic forcing. The Draconic forcing drives the quasi-periodicity of the QBO cycles, as can be seen in the lower right panel, with a small training window.
This cross-correlation technique can be extended to what appears to be an extremely erratic measure, the North Atlantic Oscillation (NAO).

NAO from NOAA https://www.ncdc.noaa.gov/teleconnections/nao/
Like the SOI measure for ENSO, the NAO is originally derived from a pressure dipole measured at two separate locations — but in this case north of the equator. Given the high frequency of the oscillations, a good assumption is that the spatial wavenumber factors are much higher than those required to fit ENSO. And that was the case, as evidenced by the figure below.

ENSO vs NAO cross-validation
Both SOI and NAO are noisy time-series, with NAO appearing especially noisy, yet the lunisolar constituent forcings are highly synchronized, as shown by the correlations in the lower pane. In particular, summing the Anomalistic and Solar constituent factors together improves the correlation markedly, because each of those has an influence on the other via the lunar-solar mutual gravitational attraction. The iterative fitting process adjusts each of the factors independently, yet the net result compensates for the counteracting amplitudes, so the net common-mode factor is essentially the same for ENSO and NAO (see the lower-right correlation labelled Anomalistic+Solar).
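That particular check can be expressed as a small helper, assuming the fitted constituent time series have been exported from the two model runs onto a common time grid (the argument names are hypothetical):

```python
import numpy as np

def constituent_check(anom_a, solar_a, anom_b, solar_b):
    """Correlate constituents individually and after summing.

    Inputs are the Anomalistic and Solar forcing series exported from two
    independent fits (e.g. ENSO and NAO) sampled on the same time grid.
    """
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    return {
        "anomalistic":       corr(anom_a, anom_b),
        "solar":             corr(solar_a, solar_b),
        "anomalistic+solar": corr(anom_a + solar_a, anom_b + solar_b),
    }
```

Per the discussion above, the summed entry should come out markedly higher than either constituent alone, since the fit trades amplitude between the two while preserving their sum.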
Since the NAO has high-frequency components, we can also perform a conventional cross-validation across orthogonal intervals. The validation interval below is for the years between 1960 and 1990, and even though the training intervals were aggressively over-fit, the correlation between the model and data is still visible in those 30 years.

NAO model fit with validation spanning 1960 to 1990
Compared with the time spent modeling ENSO, the effort that went into fitting the NAO was only a fraction of that. This is largely because the temporal lunisolar forcing only needed to be tweaked to match the other climate indices, and the iteration over the topological spatial factors converges quickly.
Many more cross-validation opportunities are available for the NAO, since different flavors of NAO indices exist corresponding to different Atlantic locations, spanning back to the 1800s.
ENSO, AMO, PDO and common-mode mechanisms
The basis of the ENSO model is the forcing derived from the long-period cyclic lunisolar gravitational pull of the moon and sun. There is some thought that ENSO shows teleconnections to other oceanic behaviors. The primary oceanic dipoles are ENSO and AMO for the Pacific and Atlantic. There is also the PDO for the mid-northern-latitude of the Pacific, which has a pattern distinct from ENSO. So the question is: Are these connected through interactions or do they possibly share a common-mode mechanism through the same lunisolar forcing mechanism?
Based on tidal behaviors, it is known that the gravitational pull varies geographically, so it is understandable that ENSO, AMO, and PDO demonstrate distinct time-series signatures. In checking this, you will find that the correlation coefficient between any two of these series is essentially zero, regardless of applied leads or lags. Yet the underlying component factors (the lunar Draconic, lunar Anomalistic, and modified solar terms) may emerge with only slight variations in shape and differences only in relative amplitude. This is straightforward to test by fitting the basic ENSO model to AMO and PDO while allowing the parameters to vary.
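The near-zero correlation claim is easy to verify with a lead/lag scan over any pair of the indices; a minimal sketch, assuming equal-length, aligned monthly arrays:

```python
import numpy as np

def lag_scan(x, y, max_lag=120):
    """Pearson correlation of two aligned monthly series over +/- max_lag months."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# e.g. max(lag_scan(enso, amo).items(), key=lambda kv: abs(kv[1]))
# stays close to zero for ENSO vs AMO or ENSO vs PDO, per the text
```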
The following figure is the result of fitting the model to ENSO, AMO, and PDO and then comparing the constituent factors.
First, note that the same parametric model fits each of the time series arguably well. The Draconic factor underlying both the ENSO and AMO models is almost perfectly aligned, as indicated by the red-starred graph, with excursions showing a correlation coefficient (CC) above 0.99. All of the remaining CCs are in fact above 0.6.
The upshot of this analysis is two-fold. First, consider how difficult it is to fit any one of these time series to a minimal set of periodically forced signals. Second, the underlying signals are not that different in character; only their combination, via a Laplace’s tidal equation weighting, is what couples them together through a common-mode mechanism. Thus, the teleconnection between these oceanic indices is likely an underlying common lunisolar tidal forcing, just as one would suspect from conventional tidal analysis.
An obvious clue from tidal data
One of the interesting traits of climate science is the way it gives away obvious clues. Consider this recent paper by Iz:
Iz, H Bâki. “The Effect of Regional Sea Level Atmospheric Pressure on Sea Level Variations at Globally Distributed Tide Gauge Stations with Long Records.” Journal of Geodetic Science 8, no. 1 (n.d.): 55–71.

Raw data for NYC station (from Iz, cited above)
The same clue appears in the transformed data shown in the histogram below, where I believe the waviness in the lines is compensated by fitting to long-period tidal signal factors (such as the 18.6-year and 9.3-year periods).
The first temptation is to attribute the pattern to a measurement artifact. These are monthly readings, and there are 12 separate discrete values identified, so that connection seems causal. The author says:
“It was shown that random component of regional atmospheric pressure tends to cluster at monthly intervals. The clusters are likely to be caused by the intraannual seasonal atmospheric temperature changes, which may also act as random beats in generating sub-harmonics observed in sea level changes as another mechanism.”
“At any fixed location, the sea level record is a function of time, involving periodic components as well as continuous random fluctuations. The periodic motion is mostly due to the gravitational effects of the sun-earth-moon system as well as because of solar radiation upon the atmosphere and the ocean as discussed before. Sometimes the random fluctuations are of meteorological origin and reflect the effect of ’weather’ upon the sea surface but reflect also the inverse barometric effect of atmospheric pressure at sea level.”
“Stations closer to the equator are also exposed to yearly periodic variations but with smaller amplitudes. Large adjusted R2 values show that the models explain most of the variations in atmospheric pressure observed at the sea level at the corresponding stations. For those stations closer to the equator, the amplitudes of the annual and semiannual changes are considerably smaller and overwhelmed by random excursions. Stations in Europe experience similar regional variations because of their proximities to each other”
The ENSO Forcing Potential – Cheaper, Faster, and Better
Following up on the last post on the ENSO forcing, this note elaborates on the math. The tidal gravitational forcing function used follows an inverse power-law dependence, where a(t) is the anomalistic lunar distance and d(t) is the draconic or nodal perturbation to the distance.
Note the prime indicating that the forcing applied is the derivative of the conventional inverse squared Newtonian attraction. This generates an inverse cubic formulation corresponding to the consensus analysis describing a differential tidal force:
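The formula itself does not survive in this excerpt, but from the description a plausible reconstruction is the time derivative of an inverse-square attraction in the perturbed lunar distance, which produces the inverse-cube differential form with the chain-rule terms in the numerator:

$$
F'(t) \;=\; \frac{d}{dt}\!\left[\frac{k}{\bigl(a(t)+d(t)\bigr)^{2}}\right]
\;=\; \frac{-\,2k\,\bigl(a'(t)+d'(t)\bigr)}{\bigl(a(t)+d(t)\bigr)^{3}},
$$

where $k$ is a proportionality constant absorbed into the fit.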
For a combination of monthly and fortnightly sinusoidal terms for a(t) and d(t) (suitably modified for nonlinear nodal and perigean corrections due to the synodic/tropical cycle), the search routine rapidly converges to an optimal ENSO fit. It does this more quickly than harmonic analysis, which requires at least double the unknowns for the additional higher-order factors needed to capture the tidally forced response waveform. One of the keys is to collect the chain-rule terms a'(t) and d'(t) in the numerator; without these, the necessary mixed terms which multiply the anomalistic and draconic signals do not emerge strongly.
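A sketch of how such a forcing series might be assembled numerically from the reconstructed expression above. The anomalistic (~27.55-day) and draconic (~27.21-day) periods are standard lunar values; all amplitudes, the mean distance, and the scale factor are placeholders that the fitting routine would calibrate, and the nonlinear synodic/tropical corrections are omitted for brevity.

```python
import numpy as np

ANOMALISTIC = 27.5546   # perigee-to-perigee lunar month, days
DRACONIC    = 27.2122   # node-to-node lunar month, days

def tidal_forcing(t_days, r0=10.0, amp_a=(1.0, 0.3), amp_d=(0.8, 0.25), k=1.0):
    """Derivative-of-inverse-square forcing built from monthly + fortnightly terms."""
    wa, wd = 2 * np.pi / ANOMALISTIC, 2 * np.pi / DRACONIC
    # a(t): anomalistic lunar distance (mean plus monthly and fortnightly terms)
    a = r0 + amp_a[0] * np.sin(wa * t_days) + amp_a[1] * np.sin(2 * wa * t_days)
    # d(t): draconic (nodal) perturbation to the distance
    d = amp_d[0] * np.sin(wd * t_days) + amp_d[1] * np.sin(2 * wd * t_days)
    # Chain-rule terms a'(t), d'(t): these supply the mixed anomalistic/draconic products
    ap = amp_a[0] * wa * np.cos(wa * t_days) + amp_a[1] * 2 * wa * np.cos(2 * wa * t_days)
    dp = amp_d[0] * wd * np.cos(wd * t_days) + amp_d[1] * 2 * wd * np.cos(2 * wd * t_days)
    # F'(t) = d/dt [ k / (a + d)^2 ] = -2 k (a' + d') / (a + d)^3
    return -2 * k * (ap + dp) / (a + d) ** 3
```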
As before, a strictly biennial modulation needs to be applied to this forcing to capture the measured ENSO dynamics — this is a period-doubling pattern observed in hydrodynamic systems with a strong fundamental (in this case annual) and is climatologically explained by a persistent year-to-year regenerative feedback in the SLP and SST anomalies.
Here is the model fit for training from 1880-1980, with the extrapolated test region post-1980 showing a good correlation.
The geophysics is now canonically formulated, providing (1) a simpler and more concise expression, leading to (2) a more efficient computational solution, (3) less possibility of over-fitting, and (4) ultimately generating a much better correlation. Alternatively, stated in modeling terms, the resultant information metric is improved by reducing the complexity and improving the correlation — the vaunted cheaper, faster, and better solution. Or, in other words: get the physics right, and all else follows.
ENSO model for predicting El Nino and La Nina events
—Can capture vast majority of #ElNino and #LaNina events post-1950 by training only on pre-1950 data, with 1 lunar calibrating interval 🌛 pic.twitter.com/10PAirJAYD
— Paul Pukite (@WHUT) August 17, 2017
Applying the ENSO model to predict El Nino and La Nina events is automatic. There are no adjustable parameters apart from the calibrated tidal forcing amplitudes and phases used in the process of fitting over the training interval. Therefore the cross-validated interval from 1950 to present is untainted during the fitting process and so can be used as a completely independent and unbiased test.
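A sketch of how the event-capture claim can be scored on the held-out interval: classify each month of both the observed index and the model output as El Nino, La Nina, or neutral by thresholding, then tabulate the hit rate. The ±0.5 cutoff and the use of simple monthly values are placeholder conventions; the actual scoring would follow whatever event definition the target index uses.

```python
import numpy as np

def event_hit_rate(observed, modeled, threshold=0.5):
    """Fraction of observed El Nino / La Nina months that the model also flags.

    observed, modeled: aligned index arrays restricted to the validation
    interval (post-1950 here); threshold: placeholder event cutoff.
    """
    classify = lambda x: np.where(x > threshold, 1, np.where(x < -threshold, -1, 0))
    obs, mod = classify(np.asarray(observed)), classify(np.asarray(modeled))
    events = obs != 0                       # months that are El Nino or La Nina
    return (mod[events] == obs[events]).mean()
```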