Tidal Gauge Differential

A climate science breakthrough will likely come not from some massive computation but from a novel formulation that exposes a fundamental pattern (perhaps discovered by deep mining during a machine learning exercise). Over 10 years ago, I wrote a blog post on how one can extract the ENSO signal by applying simple signal processing to a sea-level height (SLH) tidal time series, in this case from the gauge at Fort Denison in Sydney Harbour.

The formulation/trick is to take the difference between each SLH reading and the reading from 2 years (24 months) prior, as described here.
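A minimal sketch of that differencing trick in Python, using a synthetic monthly series in place of the actual Fort Denison record (the component periods, amplitudes, and noise level below are illustrative assumptions, not the real data):

```python
import numpy as np

# Synthetic stand-in for a monthly SLH tide-gauge series (illustrative only).
rng = np.random.default_rng(0)
months = np.arange(600)                                 # 50 years, monthly
annual = 0.20 * np.sin(2 * np.pi * months / 12)         # seasonal cycle
enso_like = 0.10 * np.sin(2 * np.pi * months / 43)      # slow ENSO-scale mode
slh = annual + enso_like + 0.02 * rng.standard_normal(months.size)

# The trick: subtract the reading from 24 months prior. Any component whose
# period divides 24 months (the annual cycle in particular) cancels exactly,
# while slower ENSO-scale variability survives the differencing.
lag = 24
diff = slh[lag:] - slh[:-lag]

print(np.std(annual[lag:] - annual[:-lag]))       # annual cycle cancels (~0)
print(np.std(enso_like[lag:] - enso_like[:-lag])) # slow mode survives
```

The point is only that a lag equal to a whole number of annual cycles acts as a comb filter that nulls the seasonal term while passing the slower signal.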

Check the recent blog post Lunar Torque Controls All for context on how this fits into the unified model.

The rationale for this 24-month difference is likely related to the sloshing of the ocean triggered on an annual basis. I think this is a pattern that any ML exercise would find with very little effort. After all, it didn’t take me that long to find it. But the point is that the ML configuration has to be open and flexible enough to search for, generate, and test the same formulation. IOW, it may not find it if the configuration, perhaps focused on computationally massive PDEs, is too narrow. That was my comment on an RC post about applying machine learning to climate science; see the following link and subsequent quote:

Nick McGreivy commented on the following passage:

“ML-based parameterizations have to work well for thousands of years of simulations, and thus need to be very stable (no random glitches or periodic blow-ups) (harder than you might think). Bias corrections based on historical observations might not generalize correctly in the future.”

This same issue arises when using ML to simulate PDEs. The solution is to analytically calculate what the stability condition(s) is (are), and then, at each timestep, add some numerical diffusion that nudges the solution toward satisfying those conditions. I imagine this same technique could be used for ML-based parametrizations.
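A toy illustration of that nudging idea (my own example, not the scheme from the comment thread; the equation, grid, and coefficients are assumptions): a centered-difference update for 1D advection is unconditionally unstable, but adding numerical diffusion at the strength suggested by a von Neumann stability analysis recovers the stable Lax-Friedrichs scheme.

```python
import numpy as np

# Toy model: u_t + c u_x = 0 on a periodic grid. Naive centered differencing
# is unconditionally unstable; adding an explicit diffusion term with
# coefficient nu = dx^2 / (2 dt) yields the Lax-Friedrichs scheme, which is
# stable here because the CFL number c*dt/dx = 0.5 <= 1.
n, c, dx, dt = 200, 1.0, 1.0, 0.5
x = np.arange(n) * dx
u0 = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)   # Gaussian pulse

def step(u, nu):
    """One explicit timestep: centered advection plus diffusion of strength nu."""
    adv = -c * dt * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    dif = nu * dt * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + adv + dif

u_raw, u_stab = u0.copy(), u0.copy()
for _ in range(2000):
    u_raw = step(u_raw, nu=0.0)                  # no diffusion: blows up
    u_stab = step(u_stab, nu=dx**2 / (2 * dt))   # nudged onto the stability condition

print(np.max(np.abs(u_raw)))    # enormous: the unstabilized run has diverged
print(np.max(np.abs(u_stab)))   # bounded by the initial pulse amplitude
```

The diffusion coefficient isn't tuned by trial and error; it comes directly from the analytically derived stability condition, which is the point of the comment above.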