This is an update to an analysis of Minnesota lake ice-out dates, as described on this blog years ago: Ice Out
The trend has been that dates have been creeping earlier, corresponding to warmer winters.
This is all automated, pulled from JSON data residing on a Minnesota DNR server. I hadn’t looked at it for a while: the original client query assumed the JSON records were returned in strict order, but at some point the response changed to an arbitrary order, and only recently have I updated the code to handle that. The approach is the same as before: collect lake data grouped by common latitudes and run a least-squares regression on each set. The software is described here and available here, based on a larger AI project described here.
There are seven anomalous data points1 that indicate ice-out dates prior to January. These may in fact be faulty data, but they are kept in place because they won’t change the slopes too much.
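The grouping-and-regression step can be sketched as follows. The records below are synthetic and the field names are illustrative stand-ins for the DNR JSON schema, not the actual server response:

```python
import numpy as np
from collections import defaultdict

# Sketch of the per-latitude trend analysis: group ice-out events into
# one-degree latitude bands, then least-squares fit day-of-year vs. year.
# These records are synthetic; the real script pulls JSON from the DNR server.
records = [
    {"lat": 45.3, "year": 1990, "day": 110},
    {"lat": 45.7, "year": 2000, "day": 108},
    {"lat": 45.5, "year": 2010, "day": 105},
    {"lat": 47.2, "year": 1990, "day": 125},
    {"lat": 47.8, "year": 2000, "day": 121},
    {"lat": 47.4, "year": 2010, "day": 120},
]

bands = defaultdict(list)
for r in records:
    bands[int(r["lat"])].append((r["year"], r["day"]))  # floor to whole degree

slopes = {}
for lat, pts in sorted(bands.items()):
    years, days = zip(*pts)
    slope, _ = np.polyfit(years, days, 1)   # trend in days per year
    slopes[lat] = slope

print(slopes)   # negative slope = ice-out creeping earlier at that latitude
```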
Summary

| Latitude | 2013 slope (days/yr) | 2026 slope (days/yr) |
|----------|----------------------|----------------------|
| 43 N | -0.066 | -0.1042 |
| 44 N | -0.047 | -0.08595 |
| 45 N | -0.068 | -0.10873 |
| 46 N | -0.0377 | -0.0416 |
| 47 N | -0.0943 | -0.04138 |
| 48 N | -0.1995 | -0.0835 |
The overall average is around -0.08 days per year, which amounts to ice-out arriving about 8 days earlier over 100 years.
The last “year without a winter” in Minnesota was 1877–1878, which corresponded to a huge global El Niño. One can perhaps see this in the 44 N and 45 N plots as early outliers, but the data was sparse back then. More obvious is the short winter of 2023–2024, when many lakes never froze; one close to my place was really only solid for a stretch in December. News stories on this are easy to find, such as the following:
2024: The Brainerd Jaycees Ice Fishing Extravaganza on Gull Lake—one of the world's largest—was canceled for the first time in its 34-year history because the ice was too thin to support the event.
Footnotes
The following lakes showed anomalous ice-out dates, assumed to actually be late in the previous year rather than late in the current year, the latter being physically impossible:
- Lake Cotton @ 46.88259 N (11/23/2010 [day -39.0])
- Lake Leek (Trowbridge) @ 46.68309 N (12/07/2021 [day -25.0])
- Lake Little Wabana @ 47.40002 N (12/08/2021 [day -24.0])
- Lake Star @ 45.06337 N (11/20/2022 [day -42.0])
- Lake Lewis @ 45.7479 N (11/28/2022 [day -34.0])
- Lake Unnamed @ 44.81299 N (11/25/2023 [day -37.0])
- Lake Unnamed @ 44.81299 N (11/26/2024 [day -36.0]) ↩︎
The behavior of complex systems, particularly in fluid dynamics, is traditionally described by high-dimensional systems of equations such as the Navier-Stokes equations. While useful for practical applications as is, these models can obscure the simplified underlying mechanisms at play. Notably, ocean modeling already has dimensionality reduction built in, for example through Laplace’s Tidal Equations (LTE), a reduced-order formulation of the Navier-Stokes equations. Furthermore, the topological containment of phenomena such as ENSO and QBO within the equatorial toroid, and the ability to further reduce LTE in this confined topology (as described in the context of our text Mathematical Geoenergy), underscore the inherently low-dimensional nature of the dominant geophysical processes. The concept of hidden latent manifolds posits that the true, observed dynamics of a system do not occupy the entire high-dimensional phase space, but rather evolve on a much lower-dimensional geometric structure (a manifold layer) where the system’s effective degrees of freedom reside. This may also help explain the seeming paradox of the inverse energy cascade, whereby order in fluid structures is maintained even as the waves grow progressively larger, with nonlinear interactions accumulating energy transferred from smaller scales.
Discovering these latent structures from noisy observational data is the central challenge in state-of-the-art fluid dynamics. Enter the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm, pioneered by Brunton et al. SINDy is an equation-discovery framework designed to identify a sparse set of nonlinear terms that describe the evolution of the system on this low-dimensional manifold. Instead of testing all possible combinations of basis functions, SINDy uses a penalized regression technique (such as LASSO) to enforce sparsity, effectively winnowing the possibilities down to the most parsimonious, yet physically meaningful, governing differential equations. The result is a simple, interpretable model that captures the essential physics: the fingerprint of the latent manifold. SINDy is not a difficult algorithm to apply, as a decent Python library is available, and I have evaluated it as described here.
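As a minimal sketch of the core idea (hand-rolled NumPy, not the PySINDy library itself), the sequential thresholded least squares step that enforces sparsity fits a candidate library and repeatedly zeroes small coefficients. The toy system dx/dt = -2x is my own example:

```python
import numpy as np

# Minimal sketch of SINDy's sequential thresholded least squares (STLSQ),
# the sparsity-promoting regression at the core of the algorithm.
# Toy 1-D system with known sparse dynamics: dx/dt = -2*x
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)          # sampled states
dx = -2.0 * x                        # exact derivatives (no noise, for clarity)

# Candidate library of basis functions: [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(Theta, dx, threshold=0.1, iters=10):
    """Sequentially zero out small coefficients, then refit the survivors."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

xi = stlsq(Theta, dx)
print(xi)   # only the x term survives, with coefficient -2
```

With noisy data and numerical differentiation the thresholding does real work; here the clean data keeps the sketch transparent.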
Applying this methodology to Earth system dynamics, particularly the seemingly noisy, erratic, and perhaps chaotic time series of sea-level variation and climate index variability, reveals profound simplicity beneath the complexity. The high-dimensional output of climate models or raw observations can be projected onto a model framework driven by remarkably few physical processes. Specifically, as shown in analysis targeting the structure of these time series, the dynamics can be cross-validated by the interaction of two fundamental drivers: a forced gravitational tide and an annual impulse.
The presence of the forced gravitational tide accounts for the regular, high-frequency, and predictable components of the dynamics. The annual impulse, meanwhile, serves as the seasonal forcing function, representing the integrated effect of large-scale thermal and atmospheric cycles that reset annually. The success of this sparse, two-component model—where the interaction of these two elements is sufficient to capture the observed dynamics—serves as the ultimate validation of the latent manifold concept. The gravitational tides with the integrated annual impulse are the discovered, low-dimensional degrees of freedom, and the ability of their coupled solution to successfully cross-validate to the observed, high-fidelity dynamics confirms that the complex, high-dimensional reality of sea-level and climate variability emerges from this simple, sparse, and interpretable set of latent governing principles. This provides a powerful, physics-constrained approach to prediction and understanding, moving beyond descriptive models toward true dynamical discovery.
Crucially, this analysis does not use the SINDy algorithm, but a much more basic predecessor, a multiple linear regression (MLR) algorithm, which I anticipate adapting to SINDy as the model is further refined. Part of the rationale for doing this is to maintain a deep understanding of the mathematics, as well as to provide cross-checking and thus avoid the perils of over-fitting, which is the bane of neural network models.
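A minimal sketch of what such an MLR fit looks like, using synthetic data and illustrative stand-in frequencies (not the calibrated tidal values): the regressors are tidal harmonics gated by a sharp annual impulse, and the fit is ordinary least squares.

```python
import numpy as np

# Sketch: multiple linear regression with tidal harmonics gated by a sharp
# annual impulse. The frequencies below are illustrative stand-ins.
t = np.arange(0, 100, 1/12)                      # 100 years, monthly samples
impulse = np.exp(-0.5 * ((t % 1.0) / 0.05)**2)   # sharp once-a-year pulse

freqs = [2.37, 3.9]                              # stand-in cycles/year
cols = [np.ones_like(t)]
for f in freqs:
    cols.append(impulse * np.sin(2*np.pi*f*t))
    cols.append(impulse * np.cos(2*np.pi*f*t))
X = np.column_stack(cols)

# Synthetic "observed" series built from the same two drivers plus noise
rng = np.random.default_rng(1)
y = 0.8*X[:, 1] - 0.3*X[:, 4] + 0.05*rng.standard_normal(t.size)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # the MLR step
print(beta.round(2))                             # close to [0, 0.8, 0, 0, -0.3]
```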
Also read this intro-level piece on tidal modeling, which may form the fundamental foundation for the latent manifold: https://pukpr.github.io/examples/warne_intro.html. The coastal station at Warnemünde in Germany along the Baltic Sea provided a long, unbroken interval of sea-level readings, which was used to calibrate the hidden latent manifold that in turn served as a starting point for all the other models. Not every model works as well as the majority (see Pensacola for a sea-level site, and IOD or TNA for climate indices), but these are equally valuable for understanding limitations and providing a sanity check against an accidental degeneracy in the model-fitting process. The use of SINDy in the future will provide additional functionality, such as regularization that can find an optimal common-mode latent layer.
The last set of cross-validation results was based on holding out the interval 0.6–0.8, i.e. training on t<0.6 and t>0.8 of the data (which extends from t=0.0 to t=1.0 normalized). This post considers holding out 0.3–0.6 instead: a narrower training interval and a correspondingly wider test interval.
Each fitted model result shows the cross-validation outcome on held-out data: training only on the intervals outside of 0.6–0.8 (i.e. on t<0.6 and t>0.8, with the data normalized to extend from t=0.0 to t=1.0). The best results are for time series with 100 years or more of monthly data, so the held-out span is typically 20 years. There is no selection-bias trickery here: this is a collection of independent sites, and nothing in the MLR fitting process is specific to an individual time series. In the following, the collection of results starts with the Stockholm site in Sweden; keep in mind that the dashed line in the charts indicates the test or validation interval.
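The hold-out scheme itself is simple to express. A sketch, assuming a uniformly sampled, normalized time axis:

```python
import numpy as np

# Sketch of the hold-out scheme: normalize time to [0, 1], fit only on
# samples outside the held-out window, then score on the window itself.
def holdout_masks(n, lo=0.6, hi=0.8):
    t = np.linspace(0.0, 1.0, n)          # normalized time axis
    test = (t >= lo) & (t <= hi)          # held-out validation window
    train = ~test                         # everything else is training
    return train, test

train, test = holdout_masks(1200)         # ~100 years of monthly readings
print(train.sum(), test.sum())            # roughly an 80/20 train/test split
```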
I was recently in Stockholm, and this photo points toward the location of the measurement station, about 4000 feet away, indicated by the marker on the right below:
“New research shows the natural variability in climate data can cause AI models to struggle at predicting local temperature and rainfall.” … “While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches. The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators. Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.”
Machine learning and other AI approaches such as symbolic regression will eventually figure out that natural climate variability can be emulated using multiple linear regression (MLR) with cross-validation (CV), which is an outgrowth or extension of linear pattern scaling (LPS).
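For reference, linear pattern scaling in its simplest form is just a per-location regression against global-mean temperature. A toy sketch with synthetic data (the pattern coefficient 1.5 is my own invented value):

```python
import numpy as np

# Linear pattern scaling (LPS) sketch: regress a local variable against
# global-mean temperature, one slope and intercept per location.
rng = np.random.default_rng(2)
years = np.arange(1950, 2050)
gmt = 0.02*(years - 1950) + 0.1*rng.standard_normal(years.size)  # global-mean anomaly

# Synthetic local series with a true pattern coefficient of 1.5
local = 1.5*gmt + 0.2*rng.standard_normal(years.size)

slope, intercept = np.polyfit(gmt, local, 1)
print(round(slope, 2))   # recovers a value close to 1.5
```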
“When this was initially created on 9/1/2025, there were 3000 CV results on time series that averaged around 100 years (~1200 monthly readings/set), so over 3 million data points.”
In this NINO34 (ENSO) model, the test CV interval is shown as a dashed region.
I developed this GitHub model repository to make it easy to compare many different data sets, much better than using an image repository such as ImageShack.
There are about 130 sea-level height monitoring stations among the sites, which is relevant considering how much impact natural climate variation such as ENSO has on monthly mean SLH measurements. See this paper: Observing ENSO-modulated tides from space
“In this paper, we successfully quantify the influences of ENSO on tides from multi-satellite altimeters through a revised harmonic analysis (RHA) model which directly builds ENSO forcing into the basis functions of CHA. To eliminate mathematical artifacts caused by over-fitting, Lasso regularization is applied in the RHA model to replace widely-used ordinary least squares.”
“If so, do you have an explanation why the diurnal tides do not move the thermocline, whereas tides with longer periods do?”
The character of ENSO is that it shifts by varying amounts on an annual basis. Like any thermocline interface, it reaches its greatest metastability at a specific time of the year. I’m not making anything up here: the frequency spectrum of ENSO (pick any index: NINO4, NINO34, NINO3) shows a well-defined mirror symmetry about the value 0.5/yr. Given that incontrovertible observation, something is mixing with the annual impulse, and the only plausible candidate is a tidal force. So the average force of the tides at this point is the important factor to consider. Given a very sharp annual impulse, the near-daily tides alias against the monthly tides; that’s all part of the mathematics of orbital cycles. So just pick the monthly tides, as they are convenient to deal with and a more plausible match to a longer inertial push.
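The aliasing arithmetic is easy to verify. An annual impulse effectively samples a tidal cycle once per year, so only the fractional part of its frequency (in cycles per year) survives, folded into 0–0.5 cycles/yr, which is exactly where the mirror symmetry sits. A quick check using the standard lunar month lengths (my choice of periods, for illustration):

```python
# Aliasing of monthly lunar tidal cycles against a once-per-year impulse:
# annual sampling keeps only the fractional part of the tidal frequency
# (in cycles/year), folded into the band [0, 0.5] cycles/year.
TROPICAL_YEAR = 365.2422          # days

def annual_alias(period_days):
    f = TROPICAL_YEAR / period_days        # tidal frequency, cycles/year
    a = f % 1.0                            # alias after annual sampling
    return min(a, 1.0 - a)                 # fold about 0.5 cycles/year

for name, period in [("anomalistic month", 27.5545),
                     ("tropical month", 27.3216),
                     ("draconic month", 27.2122)]:
    a = annual_alias(period)
    print(f"{name}: alias {a:.4f} cycles/yr, period {1/a:.2f} yr")
```

The anomalistic month, for example, aliases to a multi-year period, which is the kind of slow cycle relevant to ENSO-scale behavior.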
Sunspots are not a candidate here.
Some say wind is a candidate. It can’t be, because wind actually lags the thermocline motion.
So the deal is: I can input the above as a prompt to ChatGPT and see how it responds.
I made one change to the script (multiplying each tidal factor by its frequency to indicate its inertial potential, see the ## comment)
At the end of the Gist, I placed a representative power spectrum for the actual NINO4 and NINO34 data sets showing where the spectral peaks match. They all match. More positions match if you consider a biennial modulation as well.
Now, you might be saying: yes, but this is all ChatGPT and I am likely coercing the output. Nothing of the sort. Like I said, I did the original work years ago, and it was formally published as Mathematical Geoenergy (Wiley, 2018), long before LLMs such as ChatGPT came into prominence. ChatGPT is simply recreating the logical explanation that I had previously published, applying known signal processing techniques that are generic across all scientific and engineering domains and presenting what one would expect to observe.
In this case, it carries none of the baggage of climate science in terms of “you can’t do that, because that’s not the way things are done here”. ChatGPT doesn’t care about that prior baggage — it does the analysis the way that the research literature is pointing and how the calculation is statistically done across domains when confronted with the premise of an annual impulse combined with a tidal modulation. And it nailed it in 2025, just as I nailed it in 2018.
Someone on Twitter suggested that tidal models are not understood: “The tides connection to the moon should be revised.” Unrolled thread after the “Read more” break.
The machine learning community needs to be let loose on this data. The tidal forcing is a latent or hidden layer that ML models such as SINDy, KAN, and others could easily handle. I don't use ML other than to fit the models with known tidal factors: ManMachineLearning so far.
TANSTAAFL: there ain’t no such thing as a free lunch … but there’s always crumbs for the taking.
Machine learning won’t necessarily make a complete discovery by uncovering some ground-breaking pattern in isolation, but will more likely turn up a fragment, clue, or signature that could lead somewhere. I always remind myself that there are infinitely many more nonlinear formulations than linear ones potentially lurking in nature, yet humans are poorly equipped to solve most nonlinear relationships. ML has started to look at the tip of the nonlinear iceberg, and humans have to be alert when it uncovers a crumb. Recall that pattern recognition and signal processing are well-established disciplines in their own right, yet consider the situation of searching for patterns that are hiding in the data but unknown in structure. That’s often all we are looking for: some foothold to start from.
A climate teleconnection is understood as one behavior impacting another; for example, NINOx => AMO, meaning the Pacific ocean ENSO impacting the Atlantic ocean AMO via a remote (i.e. tele-) connection. On the other hand, a common-mode behavior results from a shared underlying cause driving each response in a uniquely parameterized fashion; for example, NINOx = g(F(t), {n1, n2, n3, ...}) and AMO = g(F(t), {a1, a2, a3, ...}), where the n's are a set of constant parameters for NINOx and the a's are for AMO.
In this formulation, F(t) is a forcing and g() is a transformation. Perhaps the best example of a common-mode response to a forcing is the regional tidal response in local sea-level height (SLH). Obviously, the lunisolar forcing is a common mode across different regions, and subtle variations in the parametric responses are required to model SLH uniquely. Once the parameters are known, one can make practical predictions (subject to recalibration as necessary).
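A toy sketch of the common-mode formulation, with an illustrative forcing and transformation of my own choosing (not the published model): one shared F(t), two responses that differ only in their parameter sets.

```python
import numpy as np

# Common-mode sketch: one shared forcing F(t), two responses that differ
# only in their parameter sets {n1, n2, n3} vs {a1, a2, a3}.
# F and g below are illustrative choices, not the actual fitted model.
def F(t):
    # shared forcing: a tidal-like term gated by an annual pulse
    return np.sin(2*np.pi*3.9*t) * np.exp(-0.5*((t % 1.0)/0.1)**2)

def g(t, params):
    amp, lag, damp = params              # unique parameters per index
    return amp * F(t - lag) * np.exp(-damp*(t % 1.0))

t = np.arange(0, 50, 1/12)
nino = g(t, (1.0, 0.00, 0.1))            # NINOx with its own {n1, n2, n3}
amo  = g(t, (0.4, 0.15, 0.3))            # AMO with its own {a1, a2, a3}
```

The point of the structure is that the forcing is fixed once; only the small parameter sets are fit per index, which is what makes practical prediction (and recalibration) tractable.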