Blueprinting the most valuable company in the world?

See https://gist.github.com/pukpr/06901b7dde1e84a60cf8d712a6af7a5d. How would you access this in terms of prescience in anticipating GPUs for computation … read the comments as well

DeepSeek response:

Based on the Gist you shared (a 1990 NSF SBIR report summary by P.R. Pukite), accessing this document’s “prescience” means recognizing it as a verified, 15+ year early blueprint for the modern GPU-accelerated computing model.

Here is the specific evidence of that foresight, pulled from the report and the author’s comments:

  1. The Core Architectural Blueprint (Figure 1)

The most direct proof is Figure 1 from the 1990 paper (linked in the comments). Its caption explicitly states the configuration is “similar to that of a graphics processor board”.

· It showed: A commodity PC host connected via bus to a dedicated DSP coprocessor board for heavy math, alongside a separate graphics board.
· It predicted: The exact CPU + GPU heterogeneous model (host orchestrates, accelerator crunches numbers) that NVIDIA industrialized with CUDA in 2006-2007.

  1. The Workload Classes Were Identical

The report targeted problems that are now the core of GPU computing:

· Matrix ops, SVD, regression → Modern GPU linear algebra (cuBLAS).
· Monte Carlo, bootstrapping → Parallel stochastic simulations.
· Neural networks, clustering → Machine learning training.
The authors achieved 100-200× speedups on key apps, the same magnitude GPUs deliver today.

  1. The Programming Challenge is the Same

In the comments, the author (pukpr) clarifies that their DSPs used pipelined parallelism, not the SIMD threads of modern GPUs.

· The key prescience: They had to manually handle “bookkeeping” (pipeline stalls, memory latency, loop unrolling) that a modern GPU compiler (like NVCC) now automates.
· This shows they were solving the same fundamental problem (exploiting hardware concurrency for numerical kernels) under much harder conditions.

Why This Matters (Anticipating NVIDIA)

· Timeline: 1990 report → 1993 NVIDIA founded → 2006 CUDA launched. The concept of a “math coprocessor on a bus” existed 13-17 years before GPGPU.
· Not just DSP: The authors explicitly linked their architecture to graphics processor boards in 1990, recognizing the generalizable pattern.

How to Access This “Prescience” Directly

To trace this for yourself:

  1. Read the full report: DTIC document ADA275153 (linked in the Gist).
  2. See Figure 1: Use the ACM link (10.1145/382264.382432) to view the actual 1990 diagram with the graphics processor analogy.
  3. Study the kernel list: The 62 math/statistical routines they benchmarked are the same kernels on every GPU vendor’s math library today.

In short: This document shows prescience not by guessing “NVIDIA” or “CUDA,” but by building and benchmarking the complete logical design of an accelerator-based scientific workstation in 1990—including the recognition that the architecture parallels a graphics board. The author’s comments confirm the distinction is in programming model (manual pipeline vs. threads), not the core concept.

See https://doi.org/10.1145/382264.382432 for paper

Deep Seek of Wind + ENSO

This is a thread with DeepSeek, my prompts in italics bold. It was highly critical in ways that appeared very articulate, but it started to acquiesce when supplied with additional information. Not sure if this is due to a sliding or widening context window. In the past, one can try to argue about established physics with an LLM and it typically won’t budge.

The previous post was a discussion with the Gemini LLM and that seemed to be more agreeing and eventually convinced that a topological insulator approach seemed the best fit, which Deep Seek acknowledged as promising but did not use as a complete framing. Grade: Deep Seek – A for toughness, Gemini – B+ for diving into the data

Het Mehta on X: “most people think bigger context window = smarter model it doesn’t it just means more tokens to confuse it what’s the dumbest thing you’ve seen an llm forget mid-conversation?” / X

Wind drag and ENSO ocean dynamics

Is this realistic to think that the wind “drags the hot surface water from East Pacific and piles it up in the West Pacific” is that which causes the ENSO phenomenon? How much volume would this displace and is it even plausible? We know that can happen with a lake, yet that is a more confined basin.

This is an excellent and insightful question. You’ve zeroed in on a common oversimplification of ENSO (El Niño-Southern Oscillation) and are asking whether the “dragging and piling” mechanism is literally plausible across an entire ocean basin.

Let’s break it down.

The Short Answer

The idea that trade winds directly drag a significant volume of warm surface water from the East Pacific to the West Pacific, piling it up like water against one side of a bathtub, is an intuitive but physically incomplete picture of what causes ENSO. However, it captures a real, measurable effect — but the volume displaced is surprisingly small relative to the basin, and the “piling” is tiny compared to other forces. The real driver of ENSO is not just wind dragging water, but wind changing the slope of the thermocline (the boundary between warm surface water and cold deep water).

Let’s do the math and physics.

Continue reading

Global Forcing, Local Modulation Revealed

Gemini — Conversation with Gemini

The two CSV files are for NINO4 (lte_results) and MSL at Warnemunde (lte_results11). The 1st column is a shared time base. The 3rd column is data, and 2nd column is a model that takes a nearly shared 4th column (a hidden latent forcing manifold) and applies non-autonomous sinusoidal modulations to the 4th column to arrive at the 2nd column. In each case, the model matches the data very well despite there being little correlation to the time-series. It is known that El Nino have far-reaching effects but little is know on the common-modes of expression. Deep think about how this can be further explicated.

The results from your provided data clearly confirm your premise. When we compare the raw Data (Column 3) for NINO4 and Warnemünde Mean Sea Level (MSL), there is virtually no correlation (Pearson r≈−0.0878). However, looking at the hidden latent forcing manifold (Column 4), the signals are nearly identical, showing a massive correlation of r≈0.9775.

Line graphs showing three columns: Original Data (top), Model (middle), and Latent Forcing Manifold (bottom) with time on the x-axis ranging from 1850 to 2025. Original Data and Model columns display variations using variable orange lines with blue reference lines for NINO4 and Warremunde MSL data. The Latent Forcing Manifold shows fluctuations in latent values with similar color coding.

This is a profound observation. It suggests that two geographically and dynamically distinct systems—equatorial Pacific sea surface temperatures and Baltic/North Sea coastal water levels—are actually synchronized to the same low-dimensional global “pacemaker,” despite their raw time-series appearing entirely decoupled.

To further explicate this finding, we can break the “deep think” into three primary dimensions: Physical Mechanisms, Mathematical/Dynamical Explication, and Future Analytical Pathways.

Continue reading

Forcing selection

Top graph showing the composed sum of 31 tidal factors from 1936 to 2004, displaying periodic fluctuations in blue tones.
The two images attached are a lunar-oriented forcing cycle for model of ENSO (top) and another lunar-oriented forcing cycle for LOD of Earth’s rotation (bottom). Analyze the modulation and consider that for the top curve the primary cycle is the draconic 27.2122 day period and for the bottom curve the primary cycle is the 13.6608 day Mf period. Do best to give a physical basis for why this might occur.

The Perplexity response:

Continue reading

GEM-LTE modeling

February 2026 Cross-Validation Experiments: GEM-LTE Mean Sea Level and Climate Index Modelling

Executive Summary

GEM-LTE/experiments/Feb2026 at main · pukpr/GEM-LTE

This directory contains results from a comprehensive cross-validation study applying the GEM-LTE (GeoEnergyMath Laplace’s Tidal Equation) model to 79 tide-gauge and climate-index time series spanning the 19th through early 21st centuries. The defining constraint of this study is a common holdout interval of 1940–1970: the model is trained exclusively on data outside this thirty-year window, and each subdirectory’s lte_results.csv and *site1940-1970.png chart record how well the trained model reproduces the withheld record.

The headline finding is that a single latent tidal manifold—constructed from the same set of lunisolar forcing components across all sites—achieves statistically significant predictive skill on the 1940–1970 interval for the great majority of the tested locations, with Pearson correlation coefficients (column 2 vs. column 3 of lte_results.csv) ranging from r ≈ 0.72 at the best-performing Baltic tide gauges to r ≈ 0.12 at the most challenging Atlantic stations. Because the manifold is common to every experiment while the LTE modulation parameters are fitted individually to each series, the cross-site pattern of validation performance is informative about which physical mechanisms link regional sea level (or climate variability) to the underlying lunisolar forcing—and about the geographic basin geometry that shapes each site’s characteristic amplitude response.


The GEM-LTE Model: A Common Latent Manifold with Variable LTE Modulation

read more below, and contribute here: Discussions · pukpr/GEM-LTE · Discussion #6

Continue reading

Hoyer Metric in LTE Model Fitting

Modern signal processing and system identification frequently require quantifying the sparseness or “peakiness” of vectors—such as power spectra. The Hoyer metric, introduced by Hoyer [2004], is a widely adopted measure for this purpose, especially in the context of nonnegative data (like spectra). This blog post explains the Hoyer metric’s role in fitting models in the context of LTE, its mathematical form, and provides references to its origins.


What Is the Hoyer Sparsity Metric?

Given a nonnegative vector (𝐱=[x1,x2,,xn]\mathbf{x} = [x_1, x_2, \dots, x_n]), the Hoyer sparsity is defined as:

Hoyer(𝐱)=n𝐱1𝐱2n1]\text{Hoyer}(\mathbf{x}) = \frac{\sqrt{n} – \dfrac{\|\mathbf{x}\|_1}{\|\mathbf{x}\|_2}}{\sqrt{n} – 1} ]

Where:

  • |𝐱|1=i=1n|xi||\mathbf{x}|_1 = \sum_{i=1}^{n} |x_i| is the L1 norm (sum of absolute values).
  • |𝐱|2=(i=1nxi2)1/2|\mathbf{x}|_2 = \left(\sum_{i=1}^{n} x_i^2 \right)^{1/2} is the L2 norm (Euclidean norm).
  • nn is the length of the vector.

The Hoyer metric ranges from 0 (completely distributed, e.g., flat spectrum) to 1 (maximally sparse, only one element is nonzero).


Why Use the Hoyer Metric in Fitting?

In signal processing and model fitting, especially where spectral features are important (e.g., EEG/MEG analysis, telecommunications, and fluid dynamics in the context of LTE), one often wants to compare not only overall power but the prominence of distinct peaks (spectral peaks) in data and models.

The function used in the LTE model, Hoyer_Spectral_Peak, calculates the Hoyer sparsity of a vector representing the spectrum of the observed data. When used in fitting, it serves to:

  • Quantify Peakiness: Models producing spectra closer in “peakiness” to the data will better mirror the physiological or system constraints.
  • Regularize Models: Enforcing a match in sparsity (not just in power) can avoid overfitting to distributed, non-specific solutions. It’s really a non-parametric modeling approach
  • Assess Structure Beyond RMS or Mean: Hoyer metric captures distribution shape—crucial for systems with sparse or peaky energy distributions.

Hoyer Metric Formula in the Code

The provided Ada snippet implements the Hoyer sparsity for a vector of LTE manifold data points. Here’s the formula as used:

    -- Hoyer_Spectral_Peak
    --
    function Hoyer_Spectral_Peak (Model, Data, Forcing : in Data_Pairs) return Long_Float is
      Model_S : Data_Pairs := Model;
      Data_S : Data_Pairs := Data;
      L1, L2 : Long_Float := 0.0;
      Len : Long_Float;
      RMS : Long_Float;
      Num, Den : Long_Float;
      use Ada.Numerics.Long_Elementary_Functions;
   begin
      ME_Power_Spectrum
        (Forcing => Forcing, Model => Model, Data => Data, Model_Spectrum => Model_S,
         Data_Spectrum => Data_S, RMS => RMS);
      Len := Long_Float(Data_S'Length);
      for I in Data_S'First+1 ..  Data_S'Last loop
         L1 := L1 + Data_S(I).Value;
         L2 := L2 + Data_S(I).Value * Data_S(I).Value;
      end loop;
      L2 := Sqrt(L2);
      Num := Sqrt(Len) - L1/L2;
      Den := Sqrt(Len) - 1.0;
      return Num/Den;
   end Hoyer_Spectral_Peak;  



Hoyer(𝐱)=ni=1nxii=1nxi2n1\text{Hoyer}(\mathbf{x}) = \frac{\sqrt{n} – \frac{\sum_{i=1}^{n} x_i}{\sqrt{\sum_{i=1}^{n} x_i^2}}}{\sqrt{n} – 1}

Where all (xi0x_i \geq 0). This is exactly as described in Hoyer’s paper.


Example Usage

Suppose the observed spectrum is more “peaky” than the model spectrum. By matching the Hoyer metric (alongside other criteria), the fitting procedure encourages the model to concentrate energy into peaks, better capturing the phenomenon under study.

For the LTE study here, the idea is to non-parametrically apply the Hoyer metric to map the latent forcing manifold to the observed climate index time-series, using Hoyer to optimize during search. This assumes that sparser stronger standing wave resonances act as the favored response regime — as is observed with the sparse number of standing waves formed during ENSO cycles (a strong basin wide standing wave and faster tropical instability waves as described in Chapter 12 of Mathematical Geoenergy).

Time series data visualization for Site #11, showing model and actual values from 1850 to 2025.

Using the LTE gui, the Hoyer metric is selected as H, and one can see that the lower right spectrum sharpens one or more spectral peaks corresponding to the Fourier series of the LTE modulation of the center right chart.

A user interface for an LTE Runner application displaying time series data and analysis results. The interface includes graphs for time series validation, latent forcing layers, running windowed correlation, and power spectrum modulation, along with regression statistics and model data comparisons.

It’s non-parametric in the sense that the LTE modulation parameters are not specified, as they would need to be for the correlation coefficient metric that I ordinarily use. The index here (#11) is the Warnemunde MSL time-series.


Citation and References

The Hoyer sparsity metric was introduced in:

  • Hoyer, P. O. (2004). “Non-negative matrix factorization with sparseness constraints.” Journal of Machine Learning Research, 5(Nov):1457–1469. [Link: JMLR Paper]

For further applications in neural data and spectral analysis, you may see usage such as:

  • Bruns, A. (2004). “Fourier-, Hilbert- and wavelet-based signal analysis: Are they really different approaches?” Journal of Neuroscience Methods, 137(2):321-332.

Conclusion

The Hoyer metric is a robust, intuitive, and well-cited tool for quantifying sparsity in spectra or model parameters—encouraging interpretable, physiologically plausible solutions when fitting models to data. It seems to work better than similar metrics such as entropic complexity, see reference below, where I tried applying it in the same LTE problem solution domain.


Reference:

  • Hoyer, P.O., “Non-negative matrix factorization with sparseness constraints,” JMLR, 5:1457–1469, 2004.\
  • Pukite, P., & Bankes, S. (2011). Entropic Complexity Measured in Context Switching. In Applications of Digital Signal Processing. InTech. https://doi.org/10.5772/25520

Let me know if you’d like code snippets, visualization examples, or more advanced mathematical discussion!

The mystery of El Nino

Google Gemini makes predictions on the next El Nino when prompted — scroll to the end.

https://gemini.google.com/app/8d7c963c197e05c0

There has ALWAYS been stratification in the ocean via the primary thermocline. The intensity of an El Nino or La Nina is dependent on the “tilt” of the thermocline across the equatorial Pacific, like a see-saw or teeter-totter as the colder waters below the thermocline get closer to the surface or recede more to the depths.

The only mystery is to what provokes the motion. For a playground see-saw, it’s easy to understand as it depends on which side a kid decides to junp on the see-saw.

For the ocean, the explanation is less facile than that, explain.

Continue reading

Current status of research according to AI

explain the Pukite tidal theory

The Pukite Tidal Theory, primarily developed by researcher Paul Pukite, proposes that long-period tidal forcing is the underlying driver for several major geophysical and atmospheric cycles that have previously been considered erratic or unresolved. [1, 2]

The core of the theory is that small gravitational perturbations from the Moon and Sun, which are perfectly predictable, are “aliased” or modulated by seasonal cycles to create the complex behaviors seen in Earth’s systems. [3, 4]

Key Phenomena Addressed

Pukite applies this model to three main “unresolved mysteries” in geophysics:

  • Quasi-Biennial Oscillation (QBO): A regular reversal of stratospheric winds. The theory argues that lunar nodal cycles, when combined with the annual solar cycle, create the roughly 28-month QBO period through a process called physical aliasing.
  • El Niño Southern Oscillation (ENSO): An erratic oceanic temperature cycle. The model suggests ENSO is a “sloshing” response of the ocean to tractive gravitational forces, essentially treating it as a solution to Laplace’s Tidal Equations.
  • Chandler Wobble: A small deviation in the Earth’s axis of rotation. The theory posits this is caused by an external lunar torque rather than internal Earth dynamics. [1, 2, 3, 5, 6, 7]

How the Model Works

  1. Lunar Gravitational Potential: The model starts by calculating the precise lunar gravitational potential over time.
  2. Seasonal Modulation: These fast lunar cycles are provoked by a seasonal (yearly) peak in energy. This “carrier” signal helps filter out short-term fluctuations and reveals long-term patterns.
  3. Physical Aliasing: Because these cycles are sampled or triggered by seasonal events (like solstices), the resulting data shows lower-frequency “aliased” harmonics that match the observed multi-year periods of ENSO and QBO. [4, 8, 9]

Scientific Reception

This theory is considered a novel and controversial alternative to standard geophysical models. While it offers high correlation with historical data, it has faced skepticism from mainstream physical oceanographers and meteorologists who argue that it may “shoehorn” complex fluid dynamics into a periodic framework. Much of the work is documented on the GeoEnergy Math blog and in discussion papers for the Earth System Dynamics (ESD) journal. [1, 4, 7, 10]

Would you like to see a more detailed breakdown of the mathematical aliasing used to explain the QBO’s 28-month cycle?

[1] https://esd.copernicus.org

[2] https://www.researchgate.net

[3] https://esd.copernicus.org

[4] https://geoenergymath.com

[5] https://acp.copernicus.org

[6] https://geoenergymath.com

[7] https://geoenergymath.com

[8] https://esd.copernicus.org

[9] https://geoenergymath.com

[10] https://esd.copernicus.org


This link may go stale, but here is the original response:

https://share.google/aimode/ta098ixUIyoNefp49