Intersectionality and No Regrets

The issue is Intersectionality and the solution is a No Regrets Strategy: whichever outcome materializes, you don’t regret the choices you make because at least some of the bases are covered.

After filling in the blanks of the Venn diagram:

Quick YT explainer of No Regrets

It’s interesting to ask GPT-4 what No Regrets means in the context of the earth, i.e. ContextEarth.com:

What does the “No Regrets Strategy” mean in terms of mitigating climate change, fossil fuel depletion, and air pollution?

The “No Regrets Strategy” in the context of climate change mitigation, fossil fuel depletion, and air pollution refers to a set of policies, measures, and actions that provide multiple benefits, regardless of the uncertainties associated with the future development of these issues. These measures are seen as advantageous and worthwhile, even if the worst predictions of climate change, fossil fuel depletion, or air pollution do not materialize.

The No Regrets Strategy typically includes:

  1. Energy efficiency improvements: By improving energy efficiency, we can reduce energy consumption, decrease greenhouse gas emissions, reduce dependence on fossil fuels, and lower air pollution. These improvements have economic benefits, as they lead to cost savings for businesses and consumers.
  2. Renewable energy development: Investing in renewable energy sources, such as solar, wind, and hydro, helps to diversify energy supply, reduce greenhouse gas emissions, and decrease reliance on fossil fuels. This transition also has the potential to create new jobs and stimulate economic growth.
  3. Afforestation and reforestation: Planting trees and restoring forests can help to sequester carbon dioxide, enhance biodiversity, and improve air and water quality. This strategy also helps to protect ecosystems and provides valuable recreational spaces for communities.
  4. Sustainable transportation: Encouraging public transportation, walking, cycling, and the use of electric vehicles reduces fossil fuel consumption, greenhouse gas emissions, and air pollution. These measures can also help to alleviate traffic congestion and improve public health.
  5. Waste reduction and recycling: Reducing waste generation and promoting recycling conserves resources, reduces greenhouse gas emissions, and lowers air and water pollution. These actions can also create jobs and generate economic benefits.

By implementing these no-regrets measures, societies can mitigate climate change, reduce fossil fuel dependence, and improve air quality, while also realizing economic, social, and environmental benefits.


The bible of the No Regrets Strategy is Mathematical Geoenergy

Mathematical Geoenergy

Our book Mathematical Geoenergy presents a number of novel approaches that each deserve a research paper on their own. Here is the list, ordered roughly by importance (IMHO):

  1. Laplace’s Tidal Equation Analytic Solution.
    (Ch 11, 12) A solution of a Navier-Stokes variant along the equator. Laplace’s Tidal Equations are a simplified version of Navier-Stokes, and the equatorial topology allows an exact closed-form analytic solution. This could qualify for the Clay Institute Millennium Prize if the practical implications are considered, but it’s a lower-dimensional solution than a complete 3-D Navier-Stokes formulation requires.
  2. Model of El Nino/Southern Oscillation (ENSO).
    (Ch 12) A tidally forced model of the equatorial Pacific’s thermocline sloshing (the ENSO dipole) which assumes a strong annual interaction. Not surprisingly, this uses the Laplace’s Tidal Equation solution described above; otherwise the tidal pattern connection would have been discovered long ago.
  3. Model of Quasi-Biennial Oscillation (QBO).
    (Ch 11) A model of the equatorial stratospheric winds, which cycle through reversals in direction with a period of ~28 months. This incorporates the idea of amplified cycling of the sun and moon nodal declination pattern on the atmosphere’s tidal response.
  4. Origin of the Chandler Wobble.
    (Ch 13) An explanation for the ~433 day cycle of the Earth’s Chandler wobble. Finding this is a fairly obvious consequence of modeling the QBO.
  5. The Oil Shock Model.
    (Ch 5) A data flow model of oil extraction and production which allows for perturbations. We are seeing this in action with the recession caused by oil supply perturbations due to the coronavirus pandemic.
  6. The Dispersive Discovery Model.
    (Ch 4) A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.
  7. Ornstein-Uhlenbeck Diffusion Model.
    (Ch 6) Applying Ornstein-Uhlenbeck diffusion to describe the decline and asymptotic limiting of flow from finite volumes, such as those in fracked shale oil reservoirs.
  8. The Reservoir Size Dispersive Aggregation Model.
    (Ch 4) A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.
  9. Origin of Tropical Instability Waves (TIW).
    (Ch 12) As the ENSO model was developed, a higher harmonic component was found which matches the TIW.
  10. Characterization of Battery Charging and Discharging.
    (Ch 18) Simplified expressions for modeling Li-ion battery charging and discharging profiles by applying dispersion on the diffusion equation, which reflects the disorder within the ion matrix.
  11. Anomalous Behavior in Dispersive Transport explained.
    (Ch 18) Photovoltaic (PV) material made from disordered and amorphous semiconductors shows poor photoresponse characteristics. A solution using simple entropic dispersion relations, or the more general Fokker-Planck equation, leads to good agreement with the data over orders of magnitude in current and response times.
  12. Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials.
    (Ch 20) The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.
  13. Wind Energy Analysis.
    (Ch 11) Universality of the wind energy probability distribution, found by applying maximum entropy to the observed mean energy. Using data from Canada and Germany, a universal BesselK distribution was found which improves on the conventional Rayleigh distribution.
  14. Terrain Slope Distribution Analysis.
    (Ch 16) Explanation and derivation of the topographic slope distribution across the USA. This uses mean energy and maximum entropy principle.
  15. Thermal Entropic Dispersion Analysis.
    (Ch 14) Solving the Fokker-Planck equation or Fourier’s Law for thermal diffusion in a disordered environment. A subtle effect, but the result is a simplified expression that avoids the transcendental erf functions. Useful in ocean heat content (OHC) studies.
  16. The Maximum Entropy Principle and the Entropic Dispersion Framework.
    (Ch 10) The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.
  17. Solving the Reserve Growth “enigma”.
    (Ch 6) An application of dispersive discovery on a localized level which models the hyperbolic reserve growth characteristics observed.
  18. Shocklets.
    (Ch 7) A kernel approach to characterizing production from individual oil fields.
  19. Reserve Growth, Creaming Curve, and Size Distribution Linearization.
    (Ch 6) An obvious linearization of this family of curves, related to Hubbert Linearization but more useful since it stems from first principles.
  20. The Hubbert Peak Logistic Curve explained.
    (Ch 7) The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.
  21. Laplace Transform Analysis of Dispersive Discovery.
    (Ch 7) Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.
  22. Gompertz Decline Model.
    (Ch 7) Exponentially increasing extraction rates lead to steep production decline.
  23. The Dynamics of Atmospheric CO2 buildup and Extrapolation.
    (Ch 9) Convolving a fat-tailed CO2 residence-time impulse response function with a fossil-fuel emissions stimulus. This shows the long latency of CO2 buildup very straightforwardly (a minimal sketch follows this list).
  24. Reliability Analysis and Understanding the “Bathtub Curve”.
    (Ch 19) Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.
  25. The Overshoot Point (TOP) and the Oil Production Plateau.
    (Ch 8) How increases in extraction rate can maintain production levels.
  26. Lake Size Distribution.
    (Ch 15) Analogous to explaining reservoir size distribution, uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur (by comparison).
  27. The Quandary of Infinite Reserves due to Fat-Tail Statistics.
    (Ch 9) Demonstrated that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.
  28. Oil Recovery Factor Model.
    (Ch 6) A model of oil recovery which takes into account reservoir size.
  29. Network Transit Time Statistics.
    (Ch 21) Dispersion in TCP/IP transport rates leads to the measured fat-tails in round-trip time statistics on loaded networks.
  30. Particle and Crystal Growth Statistics.
    (Ch 20) Detailed model of ice crystal size distribution in high-altitude cirrus clouds.
  31. Rainfall Amount Dispersion.
    (Ch 15) Explanation of rainfall variation based on dispersion in rate of cloud build-up along with dispersion in critical size.
  32. Earthquake Magnitude Distribution.
    (Ch 13) Distribution of earthquake magnitudes based on dispersion of energy buildup and critical threshold.
  33. IceBox Earth Setpoint Calculation.
    (Ch 17) Simple model for determining the earth’s setpoint temperature extremes — current and low-CO2 icebox earth.
  34. Global Temperature Multiple Linear Regression Model.
    (Ch 17) The global surface temperature records show variability that is largely due to the rise in GHGs, along with fluctuations due to ocean dipoles such as ENSO (via the SOI measure and also AAM) and sporadic volcanic eruptions that impact atmospheric aerosol concentrations.
  35. GPS Acquisition Time Analysis.
    (Ch 21) Engineering analysis of GPS cold-start acquisition times. Using Maximum Entropy in EMI clutter statistics.
  36. 1/f Noise Model.
    (Ch 21) Deriving a random noise spectrum from maximum entropy statistics.
  37. Stochastic Aquatic Waves.
    (Ch 12) Maximum Entropy Analysis of wave height distribution of surface gravity waves.
  38. The Stochastic Model of Popcorn Popping.
    (Appx C) A novel explanation of why popcorn popping follows the same bell-shaped curve as the Hubbert Peak in oil production. This can also be used to model epidemics, etc.
  39. Dispersion Analysis of Human Transportation Statistics.
    (Appx C) Alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.
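
To make one of these concrete, item 23 above boils down to a convolution. The sketch below is a minimal illustration only: the fat-tailed kernel shape and the exponential emissions series are stand-ins rather than the expressions derived in Chapter 9, and the 0.47 ppm-per-GtC factor is just the standard airborne conversion.

```python
# Minimal sketch of item 23: convolve a fat-tailed CO2 adjustment
# (impulse response) function with a fossil-fuel emissions stimulus.
# The kernel and emissions below are illustrative stand-ins, not the
# expressions derived in the book.
import numpy as np

dt = 1.0                                   # time step (years)
t = np.arange(0, 300, dt)                  # years since the start of emissions

# Synthetic emissions stimulus: roughly exponential growth (GtC/yr)
emissions = 0.1 * np.exp(t / 50.0)

# Hypothetical fat-tailed impulse response: a pulse of emitted CO2
# decays slowly and non-exponentially (a long residence tail)
tau = 30.0
response = 1.0 / (1.0 + np.sqrt(t / tau))

# Convolution gives the airborne excess; ~0.47 ppm per GtC emitted
airborne_ppm = 0.47 * np.convolve(emissions, response)[: len(t)] * dt

print(f"Illustrative excess CO2 after {t[-1]:.0f} yr: {airborne_ppm[-1]:.0f} ppm")
```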

 

The 6-year oscillation in Length-of-Day

A somewhat hidden cyclic variation in the length-of-day (LOD) of the earth’s rotation, with a period of between 6 and 7 years, was first reported in Ref [1] and analyzed in Ref [2]. Later studies [3,4,5] refined this period to closer to 6 years.

Change in detected LOD follows a ~6-yr cycle, from Ref [3]
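
A quick way to expose this cycle is to band-pass the LOD record around the 5–7 year band. The sketch below does exactly that, but with a synthetic stand-in series; a real analysis would use something like the IERS excess-LOD tables, and the pass band and filter order here are my choices, not values from the papers cited.

```python
# Sketch: isolating the ~6-yr component of a length-of-day (LOD) series
# with a zero-phase band-pass filter. `lod_ms` below is a synthetic
# stand-in for a real monthly excess-LOD series (in milliseconds).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo_period_yr, hi_period_yr, order=3):
    """Zero-phase Butterworth band-pass; fs in samples per year."""
    nyq = 0.5 * fs
    lo = (1.0 / hi_period_yr) / nyq      # low cutoff frequency (1/yr)
    hi = (1.0 / lo_period_yr) / nyq      # high cutoff frequency (1/yr)
    b, a = butter(order, [lo, hi], btype="band")
    return filtfilt(b, a, x)

# Synthetic example: a 6.2-yr cycle buried in a trend plus noise
fs = 12.0                                # monthly sampling
t = np.arange(0, 60, 1.0 / fs)           # 60 years
lod_ms = 0.1 * np.sin(2 * np.pi * t / 6.2) + 0.02 * t + 0.05 * np.random.randn(t.size)

six_yr_component = bandpass(lod_ms, fs, lo_period_yr=5.0, hi_period_yr=7.0)
```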

It’s well known that the moon’s gravitational pull contributes to changes in LOD [6]. The set of lunar cycles applied as a forcing to the ENSO model is calibrated against the LOD record.

The QBO anomaly of 2016 revisited

Remember the concern over the QBO anomaly/disruption during 2016?

Quite a few papers were written on the topic:

  1. Newman, P. A., et al. “The anomalous change in the QBO in 2015–2016.” Geophysical Research Letters 43.16 (2016): 8791–8797.
     Also: Newman, P. A., et al. “The Anomalous Change in the QBO in 2015–16.” AGU Fall Meeting Abstracts. 2016.
  2. Randel, W. J., and M. Park. “Anomalous QBO Behavior in 2016 Observed in Tropical Stratospheric Temperatures and Ozone.” AGU Fall Meeting Abstracts. 2016.
  3. Dunkerton, Timothy J. “The quasi-biennial oscillation of 2015–2016: Hiccup or death spiral?” Geophysical Research Letters 43.19 (2016).
  4. Tweedy, O., et al. “Analysis of Trace Gases Response on the Anomalous Change in the QBO in 2015–2016.” AGU Fall Meeting Abstracts. 2016.
  5. Osprey, Scott M., et al. “An unexpected disruption of the atmospheric quasi-biennial oscillation.” Science 353.6306 (2016): 1424–1427.
According to the lunar forcing model of QBO, which was also presented at AGU last year, the peak in acceleration should have occurred at the time pointed to by the BLACK downward arrow in the figure below. This was in April of this year. The GREEN is the QBO 30 hPa acceleration data and the RED is the QBO model.

Note that the training region for the model is highlighted in YELLOW and is in the interval from 1978 to 1990. This was well in the past, yet it was able to pinpoint the sharp peak 27 years later.

The disruption in 2015-2016, shown shaded in black, may have been a temporary forcing stimulus. You can see that it clearly flipped the polarity with respect to the model. This provokes a transient response in the DiffEq solution, which eventually dies off.
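
To illustrate why a transient stimulus behaves this way, here is a toy model, not the QBO formulation itself: a lightly damped oscillator locked to a steady periodic forcing is hit with a brief pulse. The pulse can flip the response for a while, but the transient decays and the solution re-locks to the phase of the persistent forcing. All numbers below are arbitrary choices for illustration.

```python
# Toy illustration (not the QBO model): a lightly damped oscillator
# locked to a steady ~28-month forcing receives a brief disruptive pulse.
# The response is perturbed temporarily, then the transient decays and
# the solution re-locks to the phase of the underlying forcing.
import numpy as np
from scipy.integrate import solve_ivp

w0, zeta = 2 * np.pi / 2.33, 0.05            # ~2.33-yr (28-month) natural period, light damping

def forcing(t):
    steady = np.sin(2 * np.pi * t / 2.33)                  # persistent tidal-like forcing
    pulse = -4.0 * np.exp(-0.5 * ((t - 36.0) / 0.3) ** 2)  # brief disruption near t = 36 yr
    return steady + pulse

def rhs(t, y):
    x, v = y                                 # displacement and velocity
    return [v, forcing(t) - 2 * zeta * w0 * v - w0**2 * x]

sol = solve_ivp(rhs, (0, 60), [0.0, 0.0], t_eval=np.linspace(0, 60, 3000), max_step=0.01)
# sol.y[0] shows the disturbance around t ~ 36 followed by recovery in phase
```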


The bottom line is that the climate scientists who pointed out the anomaly were correct that it was indeed a disruption, though not necessarily because they understood why it occurred, only that it didn’t fit a past pattern. It was good observational science, and so the papers were appropriate for publishing. However, if you look at the QBO model against the data, you will see many similar temporary disruptions in the historical record. So it was definitely not some cataclysmic event, as some had suggested. I think most scientists took a less hysterical view and simply pointed out that the reversal in stratospheric winds was unusual.

I like to use this next figure as an example of how this may occur (found in the comment from last year). A local hurricane will temporarily impact the tidal displacement via a sea swell. You can see that in the middle of the trace below. On both sides of this spike, the tidal model is still in phase, so the stimulus is indeed transient while the underlying forcing remains invariant. For QBO, instead of a hurricane, the disruption could be caused by an SSW (sudden stratospheric warming) event. It could also be a lunar forcing pulse not captured in the model. That’s probably worth more research.

As the QBO is still on a 28-month alignment, that means the external stimulus — as with ENSO, likely the lunar tidal force — is providing the boundary-condition synchronization.

The Hawkmoth Effect

In contrast to the well-known Butterfly Effect, there is another scientific modeling limitation known as the Hawkmoth Effect. Instead of simulation results being sensitive to initial conditions, which is the Butterfly Effect, the Hawkmoth Effect refers to sensitivity to model structure. It’s a more subtle argument for explaining why climate behavioral modeling is difficult to get right, and it is named after the hawkmoth because hawkmoths are “better camouflaged and less photogenic than butterflies”.

Not everyone agrees that this is a real effect; it may simply reveal shortcomings in our ability to correctly model the behavior under study. If you have the wrong model, or the wrong parameters for the model, of course it may diverge from the data rather sharply.

In the context of the ENSO model, we already provided parameters for two orthogonal intervals of the data.  Since there is some noise in the ENSO data — perfectly illustrated by the fact that SOI and NINO34 only have a correlation coefficient of 0.79 — it is difficult to determine how much of the parameter differences are due to over-fitting of that noise.

In the figure below, the middle panel shows the difference between the SOI and NINO34 data, with yellow showing where the main discrepancies or uncertainties in the true ENSO value lie. Above and below are the model fits for the earlier (1880-1950, shaded with a yellow background) and later (1950-2016) training intervals. In certain cases, a poorer model fit may be ascribed to uncertainty in the ENSO measurement, such as near ~1909, ~1932, and ~1948, where the dotted red lines align with trained and/or tested model regions. The question mark at 1985 is a curiosity, as the SOI remains neutral while the model fits the more La Nina-like conditions of NINO34.

There is certainly nothing related to the Butterfly Effect in any of this, since the ENSO model is not forced by initial conditions but by the guiding influence of the lunisolar cycles. So we are left to determine whether the slight divergence we see is due to non-stationary variation of the model parameters over time, or to missing some other vital structural model parameter. In other words, the Hawkmoth Effect is our only concern.

In the model shown below, we employ significant over-fitting of the model parameters. The ENSO model has only two forcing parameters — the Draconic (D) and Anomalistic (A) lunar periods — but, as in conventional ocean tidal analysis, to make accurate predictions many more of the nonlinear harmonics need to be considered [see Footnote 1]. So we start with A and D and then create all combinations up to order 5, resulting in the set [A, D, AD, A², D², A²D, AD², A³, D³, A²D², A³D, AD³, A⁴, D⁴, A²D³, A³D², A⁴D, AD⁴, A⁵, D⁵].
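
As a concrete illustration of the bookkeeping only, the sketch below enumerates every cross term A^i·D^j with total order 1 through 5, reproducing the 20-term set above. The sum/difference frequencies printed are just the extreme composite frequencies each product term expands into; in the actual fit, the amplitude and phase of each term are left as free parameters.

```python
# Sketch of the harmonic bookkeeping: starting from the Draconic (D) and
# Anomalistic (A) lunar periods, enumerate every cross term A^i * D^j up
# to total order 5. Each product term expands into composite frequencies
# up to i*f_A + j*f_D and down to |i*f_A - j*f_D|.
from itertools import product

P_D, P_A = 27.2122, 27.5545          # draconic and anomalistic months (days)
f_D, f_A = 1.0 / P_D, 1.0 / P_A      # cycles per day

terms = []
for i, j in product(range(6), repeat=2):
    if 1 <= i + j <= 5:
        label = f"A^{i} D^{j}" if i and j else (f"A^{i}" if i else f"D^{j}")
        terms.append((label, i * f_A + j * f_D, abs(i * f_A - j * f_D)))

print(len(terms), "terms")           # 20 composite terms, matching the set above
for label, f_sum, f_diff in terms:
    print(f"{label:8s} sum={f_sum:.5f}/day  diff={f_diff:.5f}/day")
```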

This looks like it has the potential for all the negative consequences of massive over-fitting, such as rapid divergence in amplitude outside the training interval, yet the results don’t show this at all. Harmonics in general will not cause a divergence, because they remain in phase with the fundamental frequencies both inside and outside the training interval. Besides that, the higher-order harmonics have a diminishing impact, so this set is apparently about right to create an excellent correlation outside the training interval. The two other important constraints in the fit are (1) the characteristic frequency modulation of the anomalistic period due to the synodic period (shown in the middle left inset) and (2) the calibrated lunar forcing based on LOD measurements (shown in the lower panel).
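
For constraint (1), the generic form is a carrier at the anomalistic period whose phase (and hence instantaneous frequency) is modulated at the synodic rate. The sketch below shows only that generic form; the modulation index and phase offsets are free parameters, and the exact expression used in the ENSO model may differ.

```python
# Generic form of constraint (1): an anomalistic-period carrier with its
# phase modulated at the synodic period. Phase modulation by beta*sin(w_S*t)
# is equivalent to modulating the instantaneous frequency at the synodic rate.
import numpy as np

P_A, P_S = 27.5545, 29.5306          # anomalistic and synodic months (days)
w_A, w_S = 2 * np.pi / P_A, 2 * np.pi / P_S

def anomalistic_fm(t_days, amp=1.0, beta=0.5, phi=0.0, psi=0.0):
    """Anomalistic forcing with its phase modulated at the synodic rate."""
    return amp * np.sin(w_A * t_days + beta * np.sin(w_S * t_days + psi) + phi)

t = np.arange(0, 5 * 365.25, 1.0)    # five years, daily sampling
modulated_forcing = anomalistic_fm(t)
```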

The resulting correlation of model to data is 0.75 inside the training interval (1880-1980) and 0.69 in the test interval (1980-2016). So this gets close to the best agreement we can expect, given that the correlation between SOI and NINO34 only reaches 0.79. Read this post for the structural model parameter variations for a reduced harmonic set, to order 3 only.

Welcome to the stage of ENSO analysis where getting the rest of the details correct provides only marginal benefit; yet, as with tidal analysis and eclipse models, those details still matter for fine-tuning predictions.

Footnote:

  1. For conventional tidal analysis, hundreds of resulting terms are the norm, so that commercial tidal prediction programs allow an unlimited number of components.


Switching between two models

ENSO+QBO Elevator Pitch

Most papers on climate science take pages and pages of exposition before they try to make any kind of point. The excessive verbiage exists to rationalize their limited understanding of the physics, typically by explaining how complex it all is.

Conversely, think how easy it is to explain sunrise and sunset. From a deterministic point of view [1] and from our understanding of a rotating earth and an illuminating sun, it’s trivial to explain that a sunrise and a sunset will each happen once per day. That and perhaps another sentence would be all that is necessary to write a research paper on the topic … if it weren’t already common knowledge. Any padding would add nothing to the basic understanding. For example, going further and explaining why the earth rotates amounts to answering the wrong question. Thus the topic is essentially an elevator pitch.

If sunset/sunrise is too elementary an example, one could explain ocean tides. This is a bit more advanced because the causal connection is not visible to the eye. Yet all that is needed here is to explain the pull of gravity and the orbital rate of the moon with respect to the earth, and of the earth with respect to the sun. A precise correlation between the lunisolar cycles and the tidal record is then applied to verify causality. One could add another paragraph to explain how mixed tidal effects occur, but that should be enough for an expository paper.
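
The verification step for tides amounts to regressing a sea-level record onto the known lunisolar constituents and checking how much of the signal they explain. Here is a minimal sketch using only the principal lunar (M2) and solar (S2) semidiurnal constituents, with a synthetic gauge record standing in for real data:

```python
# Sketch of the tidal-verification step: regress a sea-level record onto
# the two principal semidiurnal constituents (lunar M2 and solar S2) and
# check the correlation of the fit. `sea_level` is a synthetic stand-in
# for a real tide-gauge series.
import numpy as np

M2_H, S2_H = 12.4206012, 12.0                 # constituent periods (hours)
t = np.arange(0, 24 * 60, 1.0)                # 60 days of hourly samples

# Synthetic gauge record: mostly M2, some S2, plus noise
sea_level = (1.0 * np.cos(2 * np.pi * t / M2_H + 0.4)
             + 0.3 * np.cos(2 * np.pi * t / S2_H + 1.1)
             + 0.1 * np.random.randn(t.size))

# Design matrix of in-phase/quadrature terms for each constituent
X = np.column_stack([np.cos(2 * np.pi * t / M2_H), np.sin(2 * np.pi * t / M2_H),
                     np.cos(2 * np.pi * t / S2_H), np.sin(2 * np.pi * t / S2_H)])
coeffs, *_ = np.linalg.lstsq(X, sea_level, rcond=None)
fit = X @ coeffs

r = np.corrcoef(fit, sea_level)[0, 1]
print(f"correlation of 2-constituent fit with record: {r:.3f}")
```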

We could also be at such a point in our understanding with respect to ENSO and QBO. Most of the past exposition was lengthy because the causal factors could not be easily isolated or were rationalized as random or chaotic. Yet, if we take as a premise that the behavior was governed by the same orbital factors as what governs the ocean tides, we can make quick work of an explanation.


The Lunar Geophysical Connection

The conjecture out of NASA JPL is that the moon has an impact on the climate greater than is currently understood:

Claire Perigaud (Caltech/JPL)

Has this research gone anywhere? It looks as if it has gone to this spin-off.
According to the current consensus, variability in wind is what contributes to forcing for behaviors such as the El Nino/Southern Oscillation (ENSO).
OK, but what forces the wind? No one can answer that apart from saying wind variability is just a part of the dynamic climate system. And so we are led to believe that a wind burst will trigger an ENSO event, and that the ENSO event will then create a disruptive climate transient much larger than the original wind stimulus. And that’s all due to positive feedback of some sort.
I am only paraphrasing the current consensus.
A much more plausible and parsimonious explanation lies with external lunar forcing reinforced by seasonal cycles.


Lindzen doth protest too much

Incredible that Richard Lindzen was quoted as saying this:

Richard Lindzen, the Alfred P. Sloan Professor of Meteorology at MIT and a member of the National Academy of Sciences who has long questioned climate change orthodoxy, is skeptical that a sunnier outlook is upon us.

“I actually doubt that,” he said. Even if some of the roughly $2.5 billion in taxpayer dollars currently spent on climate research across 13 different federal agencies now shifts to scientists less invested in the calamitous narrative, Lindzen believes groupthink has so corrupted the field that funding should be sharply curtailed rather than redirected.

“They should probably cut the funding by 80 to 90 percent until the field cleans up,” he said. “Climate science has been set back two generations, and they have destroyed its intellectual foundations.”

Consider the psychological projection aspect of what Lindzen is asserting. The particularly galling part is this:

“Climate science has been set back two generations, and they have destroyed its intellectual foundations.”

It may actually be Lindzen who has set back generations of atmospheric science research with his deeply flawed model of the quasi-biennial oscillation of equatorial stratospheric winds — see my recent QBO presentation for this month’s AGU meeting. He missed a very simple derivation that he could easily have worked out back in the 1960s, one that could have set a nice “intellectual foundation” for the next 40+ years. Instead he has essentially “corrupted the field” of atmospheric sciences on a problem that could have been solved with the right application of Laplace’s tidal equations — equations known since 1776!

The “groupthink” that Lindzen set in motion on the causes behind QBO is still present in the current research papers, with many scientists trying to explain the main QBO cycle of 28 months via a relationship to an average pressure. See for example this paper I reviewed earlier this year.

To top it all off, he was probably within an eyelash of figuring out the nature of the forcing, given that at one point he actually considered the real physics.

Alas, all those millions in taxpayer funds that Lindzen presumably received over the years didn’t help, and he has been reduced to whining over what other climate scientists may receive in funding as he enters retirement.

Methinks it’s usually the case that the one that “doth protest too much” is the guilty party.

Added: here is a weird graphic of Lindzen I found on the cliscep blog. The guy missed the simple while focusing on the complex.


From climate scientist Dessler

 

ENSO redux

I’ve been getting push-back on the ENSO sloshing model that I have devised over the last year. The push-back revolves mainly around my reluctance to use it for projection, as in immediately. I know all the pitfalls of forecasting — the main one being that if you initially make a wrong prediction, even with the usual caveats, you essentially don’t get a second chance. The other problem with forecasting is that it is not timely; in other words, one has to wait around for years to prove the validity of a model. Who has time for that? 🙂

Yet there are ways around forecasting into the future. One of them involves using prior data as a training interval and then using other data in the timeline (out-of-band data) as a check.

I will give an example of using SOI training data from 1880–1913 (400 months of data points) to predict the SOI profile up to 1980 (a further 800 months of data points). We know, and other researchers [1] have confirmed, that ENSO undergoes a transition around 1980, which obviously can’t be forecast. Other than that, this is a very aggressive training set, which relies on ancient historical data that some consider not of the highest quality. The results are encouraging, to say the least.
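
Here is a minimal sketch of that training/validation bookkeeping. The model below is a placeholder least-squares fit of a few trial sinusoids, not the actual sloshing model, and `soi` is assumed to be a monthly SOI array starting in 1880 that is loaded elsewhere.

```python
# Generic train/validate workflow: fit on the first 400 months of SOI
# (1880-1913), then score the fit against the withheld months up to 1980
# (1200 months total). The "model" is a placeholder harmonic regression.
import numpy as np

def harmonic_design(t, periods_months):
    """Columns of cos/sin terms at each trial period."""
    cols = []
    for P in periods_months:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    return np.column_stack(cols)

def train_and_validate(soi, periods_months, n_train=400, n_total=1200):
    t = np.arange(n_total)
    X = harmonic_design(t, periods_months)
    coeffs, *_ = np.linalg.lstsq(X[:n_train], soi[:n_train], rcond=None)
    model = X @ coeffs
    r_train = np.corrcoef(model[:n_train], soi[:n_train])[0, 1]
    r_test = np.corrcoef(model[n_train:n_total], soi[n_train:n_total])[0, 1]
    return model, r_train, r_test

# e.g. model, r_in, r_out = train_and_validate(soi, periods_months=[28.0, 44.0, 70.0])
# (the trial periods above are arbitrary placeholders, not the lunar forcing)
```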
