
This took me all of a few minutes with Claude Code, given this prompt:
create a convolution-based algorithm of the Oil Shock Model described in Mathematical GeoEnergy (Wiley, 2019). Add a plotting routine and keep hooks for parameterizing and comparing against empirical oil data.

This is the result, running from the command line:



```
Fitting OSM to synthetic data …
  k_fallow=0.2482 k_build=0.2463 k_maturation=0.0599
  cost = 7.8022
```
This is the Python code:

```python
"""Oil Shock Model (OSM) — convolution-based implementation.

Reference: Pukite, Coyne & Challou, "Mathematical GeoEnergy" (Wiley, 2019).

Core idea
---------
Oil production P(t) is the convolution of a discovery rate D(t) with a
composite transfer function h(t) that represents the multi-stage pipeline
from discovery to extraction:

    P(t) = D(t) * h_1(t) * h_2(t) * ... * h_n(t)

Each stage i is modelled as a first-order (exponential) process:

    h_i(t) = k_i * exp(-k_i * t),  t >= 0

so the composite h(t) is a hypo-exponential (Erlang if all k_i are equal).

"Shocks" enter through abrupt changes in the discovery / booking rate.
"""
from __future__ import annotations

import warnings
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Sequence, Tuple

import numpy as np
from scipy import optimize
from scipy.signal import fftconvolve
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec


# ---------------------------------------------------------------------------
# Parameter container
# ---------------------------------------------------------------------------

@dataclass
class OSMParams:
    """All tuneable parameters for the Oil Shock Model.

    Stage rates
    -----------
    k_fallow     : rate constant for the fallow / booking stage [1/yr]
    k_build      : rate constant for the build / development stage [1/yr]
    k_maturation : rate constant for the maturation / production stage [1/yr]
    Extra stages can be added via `extra_stages` (list of additional k values).

    Discovery model
    ---------------
    Uses a sum of Gaussians so multiple discovery waves (e.g. Middle-East,
    North-Sea, deep-water) can be represented. Each entry in
    `discovery_pulses` is a dict with keys:
        peak_year — centre of the Gaussian [calendar year]
        amplitude — peak discovery rate [Gb/yr]
        width     — half-width (sigma) [yr]

    Time grid
    ---------
    t_start, t_end, dt [calendar years]

    Shocks
    ------
    List of (year, scale_factor) tuples. A shock multiplies the discovery
    rate by scale_factor from that year onward (or use a ramp — see
    apply_shocks).
    """

    # --- pipeline stage rate constants (1/yr) ---
    k_fallow: float = 0.3
    k_build: float = 0.2
    k_maturation: float = 0.07

    # --- additional pipeline stages (optional) ---
    extra_stages: List[float] = field(default_factory=list)

    # --- discovery model ---
    discovery_pulses: List[Dict] = field(default_factory=lambda: [
        dict(peak_year=1960.0, amplitude=20.0, width=12.0),
    ])

    # --- time grid ---
    t_start: float = 1900.0
    t_end: float = 2100.0
    dt: float = 0.5  # yr

    # --- shocks: list of (year, scale_factor) ---
    shocks: List[Tuple[float, float]] = field(default_factory=list)

    def all_stage_rates(self) -> List[float]:
        return [self.k_fallow, self.k_build, self.k_maturation] + list(self.extra_stages)


# ---------------------------------------------------------------------------
# Core model
# ---------------------------------------------------------------------------

class OilShockModel:
    """Convolution-based Oil Shock Model."""

    def __init__(self, params: OSMParams):
        self.params = params
        self._build_time_grid()
        self._run()

    # ------------------------------------------------------------------
    # Setup
    # ------------------------------------------------------------------
    def _build_time_grid(self) -> None:
        p = self.params
        self.t = np.arange(p.t_start, p.t_end + p.dt * 0.5, p.dt)
        self.n = len(self.t)
        # Impulse-response time axis (starts at 0)
        self.tau = np.arange(0, self.n) * p.dt

    # ------------------------------------------------------------------
    # Discovery rate
    # ------------------------------------------------------------------
    def discovery_rate(self) -> np.ndarray:
        """Build the discovery rate D(t) from Gaussian pulses + shocks."""
        D = np.zeros(self.n)
        for pulse in self.params.discovery_pulses:
            D += pulse["amplitude"] * np.exp(
                -0.5 * ((self.t - pulse["peak_year"]) / pulse["width"]) ** 2
            )
        D = np.clip(D, 0.0, None)
        D = self._apply_shocks(D)
        return D

    def _apply_shocks(self, D: np.ndarray) -> np.ndarray:
        """Multiply discovery rate by a step-function shock at each shock year."""
        D = D.copy()
        for shock_year, scale in self.params.shocks:
            idx = np.searchsorted(self.t, shock_year)
            D[idx:] *= scale
        return D

    # ------------------------------------------------------------------
    # Transfer function (impulse response)
    # ------------------------------------------------------------------
    def stage_impulse_response(self, k: float) -> np.ndarray:
        """Single exponential stage: h(tau) = k*exp(-k*tau)."""
        return k * np.exp(-k * self.tau) * self.params.dt  # dt for Riemann sum

    def composite_impulse_response(self) -> np.ndarray:
        """h(tau) = h_1 * h_2 * ... * h_n (sequential convolution of stages)."""
        rates = self.params.all_stage_rates()
        if not rates:
            raise ValueError("Need at least one pipeline stage.")
        h = self.stage_impulse_response(rates[0])
        for k in rates[1:]:
            h_stage = self.stage_impulse_response(k)
            # Full convolution, then truncate to original length
            h = fftconvolve(h, h_stage)[:self.n]
        # Normalise so total weight = 1 (conservation of oil)
        total = h.sum()
        if total > 0:
            h /= total
        return h

    # ------------------------------------------------------------------
    # Production
    # ------------------------------------------------------------------
    def production(self) -> np.ndarray:
        """P(t) = D(t) * h(t) (discrete convolution, same-length output)."""
        D = self.discovery_rate()
        h = self.composite_impulse_response()
        # Use 'full' mode then keep causal part of length n
        P = fftconvolve(D, h, mode="full")[:self.n]
        return np.clip(P, 0.0, None)

    # ------------------------------------------------------------------
    # Cumulative production
    # ------------------------------------------------------------------
    def cumulative_production(self) -> np.ndarray:
        return np.cumsum(self.production()) * self.params.dt

    # ------------------------------------------------------------------
    # Run / cache results
    # ------------------------------------------------------------------
    def _run(self) -> None:
        self.D = self.discovery_rate()
        self.h = self.composite_impulse_response()
        self.P = self.production()
        self.CP = self.cumulative_production()


# ---------------------------------------------------------------------------
# Fitting helpers
# ---------------------------------------------------------------------------

def pack_params(params: OSMParams) -> np.ndarray:
    """Flatten stage rates + discovery amplitudes into a 1-D array for optimisation."""
    rates = params.all_stage_rates()
    amplitudes = [p["amplitude"] for p in params.discovery_pulses]
    return np.array(rates + amplitudes, dtype=float)


def unpack_params(x: np.ndarray, template: OSMParams) -> OSMParams:
    """Reconstruct an OSMParams from a flat array produced by pack_params."""
    import copy
    p = copy.deepcopy(template)
    n_stages = 3 + len(template.extra_stages)
    p.k_fallow, p.k_build, p.k_maturation = x[0], x[1], x[2]
    p.extra_stages = list(x[3:n_stages])
    for i, pulse in enumerate(p.discovery_pulses):
        pulse["amplitude"] = x[n_stages + i]
    return p


def fit_to_empirical(
    t_data: np.ndarray,
    P_data: np.ndarray,
    template: OSMParams,
    bounds_lo: Optional[np.ndarray] = None,
    bounds_hi: Optional[np.ndarray] = None,
) -> Tuple[OSMParams, optimize.OptimizeResult]:
    """Least-squares fit of OSM to empirical production data.

    Parameters
    ----------
    t_data   : calendar years of observations
    P_data   : observed production rates (same units as OSMParams amplitudes)
    template : starting-point OSMParams (also defines grid, shocks, etc.)

    Returns
    -------
    best_params : fitted OSMParams
    result      : scipy OptimizeResult
    """
    x0 = pack_params(template)
    n_vars = len(x0)
    if bounds_lo is None:
        bounds_lo = np.full(n_vars, 1e-4)
    if bounds_hi is None:
        bounds_hi = np.full(n_vars, 1e4)

    def residuals(x: np.ndarray) -> np.ndarray:
        try:
            p = unpack_params(np.abs(x), template)
            model = OilShockModel(p)
            P_model = np.interp(t_data, model.t, model.P)
            return P_model - P_data
        except Exception:
            return np.full_like(P_data, 1e9)

    result = optimize.least_squares(
        residuals, x0,
        bounds=(bounds_lo, bounds_hi),
        method="trf",
        verbose=0,
    )
    best_params = unpack_params(result.x, template)
    return best_params, result


# ---------------------------------------------------------------------------
# Plotting
# ---------------------------------------------------------------------------

def plot_model(
    model: OilShockModel,
    empirical: Optional[Tuple[np.ndarray, np.ndarray]] = None,
    empirical_label: str = "Empirical",
    title: str = "Oil Shock Model",
    show_discovery: bool = True,
    show_impulse: bool = True,
    show_cumulative: bool = True,
    figsize: Tuple[float, float] = (12, 9),
    save_path: Optional[str] = None,
) -> plt.Figure:
    """Comprehensive 4-panel plot of OSM results.

    Parameters
    ----------
    model     : fitted / run OilShockModel instance
    empirical : optional (t_emp, P_emp) arrays for overlay
    """
    n_rows = 1 + int(show_discovery) + int(show_impulse) + int(show_cumulative)
    fig = plt.figure(figsize=figsize, constrained_layout=True)
    fig.suptitle(title, fontsize=14, fontweight="bold")
    gs = gridspec.GridSpec(n_rows, 1, figure=fig)
    axes: List[plt.Axes] = [fig.add_subplot(gs[i]) for i in range(n_rows)]
    ax_iter = iter(axes)

    # --- Production rate ---
    ax = next(ax_iter)
    ax.plot(model.t, model.P, color="steelblue", lw=2, label="OSM production")
    if empirical is not None:
        t_emp, P_emp = empirical
        ax.scatter(t_emp, P_emp, color="tomato", s=18, zorder=5,
                   label=empirical_label, alpha=0.8)
    _mark_shocks(ax, model.params.shocks)
    ax.set_ylabel("Production rate [Gb/yr]")
    ax.set_title("Production rate")
    ax.legend(fontsize=9)
    ax.grid(True, alpha=0.3)

    # --- Discovery rate ---
    if show_discovery:
        ax = next(ax_iter)
        ax.fill_between(model.t, model.D, alpha=0.4, color="goldenrod",
                        label="Discovery D(t)")
        ax.plot(model.t, model.D, color="goldenrod", lw=1.5)
        _mark_shocks(ax, model.params.shocks)
        ax.set_ylabel("Discovery rate [Gb/yr]")
        ax.set_title("Discovery / booking rate")
        ax.legend(fontsize=9)
        ax.grid(True, alpha=0.3)

    # --- Impulse response ---
    if show_impulse:
        ax = next(ax_iter)
        ax.plot(model.tau, model.h / model.params.dt, color="mediumpurple",
                lw=2, label="Composite h(tau)")
        for i, k in enumerate(model.params.all_stage_rates()):
            h_stage = k * np.exp(-k * model.tau)
            ax.plot(model.tau, h_stage, lw=1, ls="--", alpha=0.6,
                    label=f"Stage {i+1}: k={k:.3f}")
        ax.set_xlabel("Lag tau [yr]")
        ax.set_ylabel("h(tau) [1/yr]")
        ax.set_title("Pipeline transfer function (impulse response)")
        ax.set_xlim(0, min(80, model.tau[-1]))
        ax.legend(fontsize=8)
        ax.grid(True, alpha=0.3)

    # --- Cumulative ---
    if show_cumulative:
        ax = next(ax_iter)
        ax.plot(model.t, model.CP, color="seagreen", lw=2,
                label="Cumulative production")
        ax.set_xlabel("Year")
        ax.set_ylabel("Cumulative [Gb]")
        ax.set_title("Cumulative production")
        ax.legend(fontsize=9)
        ax.grid(True, alpha=0.3)

    # shared x-axis label cleanup
    for ax in axes[:-1]:
        ax.set_xlabel("")

    if save_path:
        fig.savefig(save_path, dpi=150, bbox_inches="tight")
        print(f"Figure saved to {save_path}")
    return fig


def _mark_shocks(ax: plt.Axes, shocks: List[Tuple[float, float]]) -> None:
    for year, scale in shocks:
        ax.axvline(year, color="red", ls=":", lw=1.2, alpha=0.7)
        ax.text(year + 0.3, ax.get_ylim()[1] * 0.92, f"x{scale:.2f}",
                color="red", fontsize=7, va="top")


def plot_sensitivity(
    base_params: OSMParams,
    param_name: str,
    values: Sequence[float],
    figsize: Tuple[float, float] = (10, 5),
    save_path: Optional[str] = None,
) -> plt.Figure:
    """Overlay production curves for a range of one parameter.

    Parameters
    ----------
    base_params : template OSMParams
    param_name  : attribute name on OSMParams (e.g. 'k_maturation')
    values      : sequence of values to sweep
    """
    import copy
    fig, ax = plt.subplots(figsize=figsize, constrained_layout=True)
    cmap = plt.cm.viridis
    colors = [cmap(i / max(len(values) - 1, 1)) for i in range(len(values))]
    for val, color in zip(values, colors):
        p = copy.deepcopy(base_params)
        setattr(p, param_name, val)
        m = OilShockModel(p)
        ax.plot(m.t, m.P, color=color, lw=1.8, label=f"{param_name}={val:.3g}")
    ax.set_xlabel("Year")
    ax.set_ylabel("Production rate [Gb/yr]")
    ax.set_title(f"Sensitivity: {param_name}")
    ax.legend(fontsize=8, ncol=2)
    ax.grid(True, alpha=0.3)
    sm = plt.cm.ScalarMappable(cmap=cmap,
                               norm=plt.Normalize(vmin=min(values), vmax=max(values)))
    sm.set_array([])
    fig.colorbar(sm, ax=ax, label=param_name)
    if save_path:
        fig.savefig(save_path, dpi=150, bbox_inches="tight")
    return fig


# ---------------------------------------------------------------------------
# Demo / example
# ---------------------------------------------------------------------------

def demo() -> None:
    """Run a self-contained demonstration of the OSM."""
    # --- 1. Base model with three discovery waves ---
    params = OSMParams(
        k_fallow=0.30,
        k_build=0.15,
        k_maturation=0.06,
        discovery_pulses=[
            dict(peak_year=1955, amplitude=22.0, width=12),  # conventional peak
            dict(peak_year=1975, amplitude=10.0, width=8),   # second wave
            dict(peak_year=2010, amplitude=6.0, width=7),    # deep-water / tight oil
        ],
        shocks=[
            (1973, 0.55),  # OPEC embargo
            (1979, 0.75),  # Iranian revolution
        ],
        t_start=1900,
        t_end=2080,
        dt=0.5,
    )
    model = OilShockModel(params)

    # --- 2. Synthetic "empirical" data (model + noise) for demonstration ---
    rng = np.random.default_rng(42)
    t_emp = np.arange(1960, 2025, 2.0)
    P_true = np.interp(t_emp, model.t, model.P)
    P_emp = np.clip(P_true * (1 + rng.normal(0, 0.08, size=t_emp.shape)), 0, None)

    # --- 3. Plot base model vs synthetic data ---
    fig = plot_model(
        model,
        empirical=(t_emp, P_emp),
        empirical_label="Synthetic data",
        title="Oil Shock Model — demonstration",
    )
    plt.show()

    # --- 4. Sensitivity sweep on k_maturation ---
    fig2 = plot_sensitivity(
        params,
        param_name="k_maturation",
        values=[0.03, 0.05, 0.07, 0.10, 0.15],
    )
    plt.show()

    # --- 5. Quick fit to the synthetic data ---
    print("\nFitting OSM to synthetic data …")
    best_params, result = fit_to_empirical(t_emp, P_emp, params)
    best_model = OilShockModel(best_params)
    print(f"  k_fallow={best_params.k_fallow:.4f} "
          f"k_build={best_params.k_build:.4f} "
          f"k_maturation={best_params.k_maturation:.4f}")
    print(f"  cost = {result.cost:.4f}")
    fig3 = plot_model(
        best_model,
        empirical=(t_emp, P_emp),
        empirical_label="Synthetic data",
        title="OSM — fitted parameters",
    )
    plt.show()


if __name__ == "__main__":
    demo()
```
The behavior of complex systems, particularly in fluid dynamics, is traditionally described by high-dimensional systems of equations such as the Navier-Stokes equations. While practical as they stand, these models can obscure the simpler underlying mechanisms at play. Notably, ocean modeling already has dimensionality reduction built in, for example through Laplace's Tidal Equations (LTE), a reduced-order formulation of the Navier-Stokes equations. Furthermore, the topological containment of phenomena such as ENSO and QBO within the equatorial toroid, and the ability to reduce LTE still further in this confined topology (as described in the context of our text Mathematical Geoenergy), underscore the inherently low-dimensional nature of dominant geophysical processes. The concept of hidden latent manifolds posits that the true, observed dynamics of a system do not occupy the entire high-dimensional phase space, but instead evolve on a much lower-dimensional geometric structure, a manifold layer, where the system's effective degrees of freedom reside. This may also help explain the seeming paradox of the inverse energy cascade, whereby order in fluid structures is maintained even as the waves grow progressively larger, because nonlinear interactions accumulate energy transferred from smaller scales.
Discovering these latent structures from noisy, observational data is the central challenge in state-of-the-art fluid dynamics. Enter the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm, pioneered by Brunton et al. SINDy is an equation-discovery framework designed to identify a sparse set of nonlinear terms that describe the evolution of the system on this low-dimensional manifold. Instead of testing all possible combinations of basis functions, SINDy uses a penalized regression technique (such as LASSO) to enforce sparsity, effectively winnowing the possibilities down to the most parsimonious, yet physically meaningful, governing differential equations. The result is a simple, interpretable model that captures the essential physics: the fingerprint of the latent manifold. SINDy is not a difficult algorithm to apply, as a decent Python library is available for use, and I have evaluated it as described here.
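To make the mechanics concrete, here is a minimal, library-free sketch of the SINDy idea using sequentially thresholded least squares (a common stand-in for LASSO-style sparsification) on a toy linear oscillator. The toy system, candidate library, and threshold are my illustrative choices, not anything from the analyses discussed here:

```python
import numpy as np

# Simulate a toy system: dx/dt = -0.1x + 2y, dy/dt = -2x - 0.1y
dt, n = 0.01, 5000
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
X = np.zeros((n, 2))
X[0] = [2.0, 0.0]
for i in range(n - 1):
    X[i + 1] = X[i] + dt * (A @ X[i])      # forward-Euler integration

dXdt = np.gradient(X, dt, axis=0)          # numerical derivatives

# Candidate library of basis functions: [1, x, y, x^2, xy, y^2]
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones(n), x, y, x**2, x*y, y**2])

# Sequentially thresholded least squares: regress, zero out small
# coefficients, then refit on the surviving terms
Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(Xi) < 0.05              # sparsity threshold (tunable)
    Xi[small] = 0.0
    for j in range(2):
        big = ~small[:, j]
        Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)[0]

# Recovered rows ≈ [0, -0.1, 2, 0, 0, 0] and [0, -2, -0.1, 0, 0, 0]
print(np.round(Xi.T, 3))
```

The point is the winnowing step: the quadratic candidates are driven to zero, leaving only the sparse linear terms that actually govern the dynamics.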
Applying this methodology to Earth system dynamics, particularly the seemingly noisy, erratic, and perhaps chaotic time series of sea-level variation and climate index variability, reveals profound simplicity beneath the complexity. The high-dimensional output of climate models or raw observations can be projected onto a model framework driven by remarkably few physical processes. Specifically, as shown in analysis targeting the structure of these time series, the dynamics can be cross-validated by the interaction of two fundamental drivers: a forced gravitational tide and an annual impulse.
The presence of the forced gravitational tide accounts for the regular, high-frequency, and predictable components of the dynamics. The annual impulse, meanwhile, serves as the seasonal forcing function, representing the integrated effect of large-scale thermal and atmospheric cycles that reset annually. The success of this sparse, two-component model, in which the interaction of these two elements is sufficient to capture the observed dynamics, serves as the ultimate validation of the latent manifold concept. The gravitational tides together with the integrated annual impulse are the discovered, low-dimensional degrees of freedom. The ability of their coupled solution to cross-validate against the observed, high-fidelity dynamics confirms that the complex, high-dimensional reality of sea-level and climate variability emerges from this simple, sparse, and interpretable set of latent governing principles. This provides a powerful, physics-constrained approach to prediction and understanding, moving beyond descriptive models toward true dynamical discovery.
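A minimal sketch of that two-driver construction, under stated assumptions: a narrow Gaussian annual impulse multiplies a long-period tidal signal, the product is integrated with a decay kernel to give a lagged forcing, and a sine-style LTE modulation is applied at the end. All constants (constituent mix, impulse timing and width, decay time, modulation gain) are illustrative, not the calibrated values behind the linked models:

```python
import numpy as np

t = np.arange(0, 40, 1/365.25)                   # 40 years, daily steps [yr]

# Driver 1: forced gravitational tide (two long-period terms, periods in yr)
tide = (np.sin(2*np.pi*t / (13.661/365.25))      # fortnightly Mf constituent
        + 0.5*np.sin(2*np.pi*t / 18.613))        # 18.6-yr nodal cycle

# Driver 2: annual impulse (narrow seasonal spike, timing/width assumed)
impulse = np.exp(-0.5 * ((t % 1.0 - 0.45) / 0.05) ** 2)

# Integrated (lagged) forcing: convolve the product with a decay kernel
kernel = np.exp(-np.arange(0, 3, 1/365.25) / 1.0)  # ~1-yr decay, 3-yr support
forcing = np.convolve(impulse * tide, kernel)[:len(t)]

# LTE-style modulation of the forcing (assumed sinusoidal form)
index = np.sin(2.0 * forcing / forcing.std())
```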
An entire set of cross-validated models is available for evaluation here: https://pukpr.github.io/examples/mlr/.
This is a mix of climate indices (the first 20) and numbered coastal sea-level stations obtained from https://psmsl.org/
https://pukpr.github.io/examples/map_index.html
Crucially, this analysis does not use the SINDy algorithm but a much more basic predecessor, multiple linear regression (MLR), which I anticipate adapting to SINDy as the model is further refined. Part of the rationale for doing this is to maintain a deep understanding of the mathematics, as well as to provide cross-checking and thus avoid the perils of over-fitting, which is the bane of neural network models.
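As a hedged sketch of what that MLR step could look like: build a design matrix of annual-impulse-modulated tidal terms, fit by ordinary least squares on a training interval, and check the correlation on a held-out interval. The constituent periods, impulse shape, and synthetic target below are illustrative assumptions, not the actual setup behind the linked models:

```python
import numpy as np

t = np.arange(0, 60, 1/365.25)          # 60 years, daily steps [yr]

# Long-period tidal constituents, in years (representative values)
periods = [13.661/365.25, 27.555/365.25, 18.613]   # Mf, Mm, nodal

# Annual impulse: a narrow spike once per year (timing/width assumed)
impulse = np.exp(-0.5 * ((t % 1.0 - 0.45) / 0.05) ** 2)

# Design matrix: impulse-modulated sin/cos pairs for each constituent,
# cumulatively summed to mimic a lagged (integrated) ocean response
cols = []
for p in periods:
    for f in (np.sin, np.cos):
        cols.append(np.cumsum(impulse * f(2 * np.pi * t / p)))
Theta = np.column_stack(cols)
Theta -= Theta.mean(axis=0)

# Synthetic "observed" index: a known mix of two columns plus noise
rng = np.random.default_rng(1)
y = 1.5 * Theta[:, 0] - 0.8 * Theta[:, 3] + rng.normal(0, 0.3, len(t))

# Fit on the first half, cross-validate on the second half
half = len(t) // 2
coef = np.linalg.lstsq(Theta[:half], y[:half], rcond=None)[0]
cc = np.corrcoef(Theta[half:] @ coef, y[half:])[0, 1]
print(f"out-of-sample correlation: {cc:.3f}")
```

The train/test split is what guards against the over-fitting concern mentioned above: a model that merely memorizes the training interval will not cross-validate on the held-out interval.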
Also read this intro-level post on tidal modeling, which may form the fundamental foundation for the latent manifold: https://pukpr.github.io/examples/warne_intro.html. The coastal station at Warnemünde in Germany along the Baltic Sea provided a long unbroken interval of sea-level readings, which was used to calibrate the hidden latent manifold that in turn served as a starting point for all the other models. Not every model works as well as the majority (see Pensacola for a sea-level site, and IOD or TNA for climate indices), but these are equally valuable for understanding limitations (and for providing a sanity check against an accidental degeneracy in the model-fitting process). The use of SINDy in the future will provide additional functionality, such as regularization that will find an optimal common-mode latent layer.
A 20-year anniversary post at RealClimate.org reminded me that I've been blogging for 20 years and 6 months on topics of fossil fuel depletion and climate change. The starting point was a BlogSpot blog I created in May 2004, where the first post set the stage:
Click on the above to go to the complete archives (almost daily posts) until I transitioned to WordPress and what became the present blog. After 2011, my blogging pace slowed considerably as I started to write in more technical terms. Eventually the most interesting and novel posts were filtered down to a set that became the contents of Mathematical Geoenergy: Discovery, Depletion, and Renewal, published in late 2018/early 2019 by Wiley with an AGU imprint.
The arc that my BlogSpot/WordPress blogging activity followed occupies somewhat of a mirror universe to that of RealClimate. I initially started out with an oil-depletion focus, and as I incrementally came to understand the massive inertia that our FF-dependent society had developed, the climate science aspect was placed into a different perspective and context. After realizing that CO2 did not like to sequester, it became obvious that not much could be done to mitigate the impact of gradually increasing GHG levels, and that it would evolve into a slow-moving train wreck. That's part of the reason why I focused more on research into natural climate variability. In contrast, RealClimate (and all the other climate blogs) continued to concentrate on man-made climate change. At this point, my climate fluid-dynamics understanding is at some alternate-reality level (see the last post): still very interesting but lacking any critical acceptance (no debunking either, which keeps it alive and potentially valid).
The oil depletion aspect more-or-less spun off into the PeakOilBarrel.com blog [*] maintained by my co-author Dennis Coyne. That’s like watching a slow-moving train wreck as well, but Dennis does an excellent job of keeping the suspense up with all the details in the technical modeling. Most of the predictions regarding peak oil that we published in 2018 are panning out.
As a parting thought, the RealClimate hindsight post touched on how AI will impact information flow going forward. Having worked on AI knowledge bases for environmental modeling during the LLM-precursor stage, circa 2010-2013, I can attest that it will only get better. At the time, we were under the impression that knowledge used for modeling should be semantically correct and unambiguous (with potentially a formal representation and organization; see figure below), and so we developed approaches for that here and here (long report form).

As it turned out, lack of correctness is just a stage, and AI users/customers are satisfied to get close enough for many tasks. Eventually, the LLM robots will gradually clean up the sources of knowledge and converge toward semantic correctness. The same will happen with climate models, as machine learning by the big guns at Google, NVIDIA, and Huawei will eventually discover what we have found on this blog over the course of 20+ years.
Note:
[*] In some ways the PeakOilBarrel.com blog is a continuation of the shuttered TheOilDrum.com blog, which closed shop in 2013 for mysterious reasons.
In formal mathematical terms of geometry/topology/homotopy/homology, let's try proving that a wavenumber=0 cycle in the east/west direction inside an equatorial toroidal-shaped waveguide can only be forced by the z-component of an (x,y,z) vector, where x,y lies in the equatorial plane.
To address this question, let's dissect the components involved and prove it within the constraints of geometry, topology, homotopy, and homology, focusing on valid mathematical principles.
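As a sketch of the core symmetry argument (my shorthand, not the full homotopy/homology treatment): a wavenumber-0 mode is azimuthally uniform, so it couples only to the azimuthal average of the applied force. Parameterize the toroidal waveguide by the azimuthal angle φ; the local in-plane unit vectors rotate with φ while the z-direction does not, so:

```latex
% A force fixed in the equatorial plane projects onto the waveguide with
% wavenumber-1 (cos/sin) dependence and averages to zero around the loop,
% while the z-component is rotation-invariant and survives:
\frac{1}{2\pi}\oint_0^{2\pi}\left(F_x\cos\phi + F_y\sin\phi\right)\,d\phi = 0,
\qquad
\frac{1}{2\pi}\oint_0^{2\pi} F_z\,d\phi = F_z
```

Only the rotation-invariant z-component survives the wavenumber-0 projection, which is the claim to be established rigorously.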
Based on the previous post on applying Dynamic Time Warping as a metric for LTE modeling of oceanic indices, it makes sense to apply the metric to the QBO model of atmospheric winds. A characteristic of QBO data is the sharp transitions of the wind reversals. As described previously, DTW allows a fit to adjust the alignment between model and data without incurring the potential over-fitting penalty that a conventional correlation coefficient will often lead to.
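For reference, a minimal DTW distance is easy to write down. This is a hedged sketch of the metric being discussed, not the exact implementation used in the fits; the band width and absolute-difference cost are standard but arbitrary choices:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray, window: int = 24) -> float:
    """DTW distance with a Sakoe-Chiba band of +/- `window` samples."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - window), min(m, i + window)
        for j in range(lo, hi + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment paths
            D[i, j] = cost + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    return D[n, m]

# Example: a sharp square-wave "QBO-like" reversal vs. a slightly lagged
# model; DTW penalizes the misalignment far less than RMS error does.
t = np.linspace(0, 10, 240)
data = np.sign(np.sin(2 * np.pi * t / 2.4))
model = np.sign(np.sin(2 * np.pi * (t - 0.15) / 2.4))
print(f"DTW: {dtw_distance(data, model):.2f}, "
      f"RMS: {np.sqrt(np.mean((data - model)**2)):.2f}")
```

This illustrates why DTW suits the sharp QBO transitions: a small timing offset at a reversal produces a huge squared error but only a small warping cost.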
The issue is Intersectionality and the solution is a No Regrets Strategy, which means that you don't regret the choice you make because at least some of the bases are covered.

After filling in the blanks of the Venn diagram:

Quick YT explainer of No Regrets
Interesting to ask GPT-4 what No Regrets means in the context of the earth, i.e. GeoEnergyMath.com
What does the “No Regrets Strategy” mean in terms of mitigating climate change, fossil fuel depletion, and air pollution?

The “No Regrets Strategy” in the context of climate change mitigation, fossil fuel depletion, and air pollution refers to a set of policies, measures, and actions that provide multiple benefits, regardless of the uncertainties associated with the future development of these issues. These measures are seen as advantageous and worthwhile, even if the worst predictions of climate change, fossil fuel depletion, or air pollution do not materialize.
The No Regrets Strategy typically includes:
By implementing these no-regrets measures, societies can mitigate climate change, reduce fossil fuel dependence, and improve air quality, while also realizing economic, social, and environmental benefits.
The bible of the No Regrets Strategy is Mathematical GeoEnergy
PostScript: I don't know if this was the intent, but this cartoon sums up the No Regrets strategy perfectly

I don't immediately trust the research published by highly cited atmospheric scientists. By my count, many of them seem more keen on presenting their personal views than on advancing the field. Off the top of my head, Richard Lindzen, Murry Salby, Roy Spencer, Tim Dunkerton, Roger Pielke, Cliff Mass, and Judith Curry are all highly cited but come across as political and/or religious zealots. One guy on the list, Dunkerton, is also a racist, who happened to make the Washington Post twice: "Physicist ousted from research post after sending offensive tweet to Hispanic meteorologist" and "Atmospheric scientist loses honor, membership over ethics violation". Awful stuff, and he hasn't stopped spouting off on Twitter.

Granted, Dunkerton says dumb stuff on Twitter, but his highly cited research is also off-base. That's IMO only, because recent papers by others in the field of atmospheric science continue to cite his ideas as primary, if not authoritative. For example, in the recently published paper "The Gravity Wave Activity during Two Recent QBO Disruptions Revealed by U.S. High-Resolution Radiosonde Data", citations 1 & 12 both refer to Dunkerton, and specifically to his belief that the QBO period is a property of the atmospheric medium itself.

It is straightforward to debunk this Dunkerton theory, since the cycle directly above the QBO layer is semi-annual and thus not a property of the medium but of the semi-annual nodal forcing frequency. If we make the obvious connection to the other nodal forcing — that of the moon — then we find the QBO period is fixed to 28 months. I have been highlighting this connection to the authors of new QBO papers under community review, often with some subsequent feedback provided, such as here: https://doi.org/10.5194/acp-2022-792-CC1. Though not visible yet in the comments, I received some personal correspondence showing that the authors under peer review are taking the idea seriously and attempting to duplicate the calculations. They seem to be methodical in their approach, asking for clarification and further instructions where they couldn't follow the formulation. They know about the GitHub software, so hopefully that will be of some help.
In contrast, Dunkerton also knows about my approach but responds in an inscrutable (if not condescending) way. Makes you wonder if scientists such as Dunkerton and Lindzen are bitter and taking out their frustrations via the media. Based on their doggedness, they may in fact be intentionally trying to impede progress in climate science by taking contrarian stances. In my experience, the top scientists in other research disciplines don’t act this way. YMMV
More activity related to my review comment https://doi.org/10.5194/acp-2022-792-CC1
To review: this came after I commented earlier this year on a Copernicus open-science research article on atmospheric cycles and QBO (with proposed links to sunspots and ENSO) that was undergoing review, suggesting the authors consider analyses I had presented and published 4 years ago, and prior to that as well.
Thought that was that and was happy to see that the authors indicated they would revise the manuscript and perhaps advance understanding. But then several days ago, the editor interceded and essentially demanded that the authors not cite my research work. Apparently, the authors were influenced by the editor’s instructions, as they immediately removed my cite and replaced it with a citation to a review article that the editor preferred. The discussion on the article was then closed with no way for me to rebut.
This was all after I had spent several hours working with the primary author as they worked to replicate my analysis, sending emails back and forth several times. The editor claimed that my contribution was "a new idea that has not been published in a recognized journal and received peer review". This is not the case, as I said above; the Google Scholar citations were all ignored.
Also see this post I contributed to the Peak Oil Barrel blog : https://peakoilbarrel.com/predicting-stratospheric-winds/
Koonin, who was chief scientist for BP, barely touches the elephant in the room (significant global oil depletion) in his anti-climate-science diatribe "Unsettled: What Climate Science Tells Us, What It Doesn't, and Why It Matters". Checking Google Books for references to oil, the book started out promising; I thought he would discuss why renewable energy is important, independent of any climate change considerations:

Instead he discusses cooking oil, spread over several pages, in reference to a Richard Feynman parable on deceptive advertising.



On page 33, a mention of crude (not cooking) oil, although the context is missing, perhaps referring to methane concentrations?

But then back to cooking oil! Twice!


After some 200 pages, there are a few factual statements on supply and demand for fossil fuels and the difficulty of carbon capture.


That's it. The book's index only points to page 243 as relevant to oil, which is consistent with Google Books' search.

The book is a smokescreen, with the intention of smearing climate science so as to avoid discussing the obvious No Regrets strategy for addressing rapidly declining oil reserves. There is no discussion of this on Rogan's podcast with Koonin either. Oil companies do not want this discussed so they can continue to squeeze investment $$$ to find the meager remaining reserves.

Koonin's book is a bait and switch, putting the emphasis on the least existential crisis. Today we are globally using over 35 billion barrels of oil per year but discovering less than 5 billion per year. That's equivalent to having an annual income of $5,000 while spending as if you earn $35,000 — not close to sustainable once your savings run out.

In Chapter 12, we described the model of QBO generated by modulating the draconic (or nodal) lunar forcing with a hemispherical annual impulse that reinforces that effect. This generates the following predicted frequency response peaks:

The 2nd, 3rd, and 4th peaks listed (at 2.423, 1.423, and 0.423 cycles/year) are readily observed in the power spectra of the QBO time-series. When the spectra are averaged over each of the time series, the precisely matched peaks emerge more cleanly above the red noise envelope — see the bottom panel in the figure below (click to expand).
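The aliasing arithmetic behind those peaks is easy to verify: an annual impulse sampling the draconic lunar cycle folds its frequency into low-frequency aliases spaced 1 cycle/year apart. A quick check (the third decimal depends on the exact astronomical constants used):

```python
draconic_days = 27.2122                   # draconic (nodal) lunar month [days]
year_days = 365.2422                      # tropical year [days]

f_draconic = year_days / draconic_days    # ≈ 13.422 cycles/yr
aliases = [round(f_draconic - k, 3) for k in (11, 12, 13)]
print(aliases)                            # ≈ [2.422, 1.422, 0.422]

# The lowest alias sets the quasi-biennial period
print(f"implied QBO period ≈ {12 / (f_draconic - 13):.1f} months")  # ≈ 28.4
```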

The inset shows what these harmonics provide — essentially the jagged stairstep structure of the semi-annual impulse lag integrated against the draconic modulation.
It is important to note that these harmonics are not the traditional harmonics of a high-Q resonance behavior, where the higher orders are integral multiples of the fundamental frequency (in this case 0.423 cycles/year). Instead, they are clear substantiation of a forcing response that maintains the frequency spectrum of the input stimulus, thus excluding the possibility that the QBO behavior is a natural resonance phenomenon. At best, there may be a 2nd-order response that selectively amplifies parts of the frequency spectrum.
See my latest submission to the ESD Ideas issue: ESDD – ESD Ideas: Long-period tidal forcing in geophysics – application to ENSO, QBO, and Chandler wobble (copernicus.org)