Raising the Bar on ENSO Model Validation

I have been using the Azimuth Project Forum as a sounding board for the ENSO Model [1,2,3,4,5,6,7,8]. The audience there is very science-savvy, so they are not easily convinced of the worth of any particular finding (or whether it is correct in the first place). They also tend to prefer pure math, because math can remain sufficiently detached from the muddy world of applied physics that one avoids being labeled “right” or “wrong”. With math one can always come up with a formulation that exists on its own terms, separate from any practical application.

So trying to convince those folks of the validity of the ENSO model is difficult at best.

Recently the advice has been to do statistical validation on the model. One participant recommended that I try an experimental approach:

“I still don’t have the spare cycles to address this fully, but given that one of the two terms of an AIC or BIC is the log likelihood and there is not a closed form representation of the likelihood in this case, I’d probably explore either the empirical likelihood work of Art Owen and his students, for one thing as packaged in the emplik R package, or possibly Approximate Bayesian Computation (ABC).”

I am not going to go to the trouble of “exploring” some unaccepted statistical validation procedure, when I am having enough of a challenge defending the ENSO model physics. What am I supposed to do — defend someone else’s empirical statistical research in addition to defending my own work? No thanks.

It always seems to be about #RaisingTheBar, seeing what someone will do to defend their results.

Fair enough.

So in this post I will show an overwhelming piece of evidence that the modeling work is on the right track.

This involves using coral proxy data that has been calibrated against modern-day (1880-1978) ENSO records. The calibration of the proxy data to ENSO indices is very good, with correlation coefficients hovering around 0.8 and higher.

One set of proxy data is called the Unified ENSO Proxy (UEP), which is an aggregate of a number of research efforts.  What this gives us is an out-of-band time series that extends from 1650 to 1880, a span of 230 years that we can use to validate the ENSO model previously tuned for the time span 1880 to 1980.

If the back-extrapolated fit has a correlation coefficient close to 0.4 or above, the result is extremely unlikely to be due to chance alone. Since many researchers think that ENSO is a red-noise process, randomness alone would give a correlation coefficient for a sample run anywhere between -0.2 and 0.2; in other words, no phase coherence over an interval that didn't overlap the fitting interval.
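To make that null hypothesis concrete, here is a minimal sketch of the red-noise baseline. It assumes a simple AR(1) process; the lag-one coefficient of 0.7 is a hypothetical choice, and the null spread widens as that coefficient approaches 1:

    import numpy as np

    rng = np.random.default_rng(42)

    def red_noise(n, phi=0.7):
        """Generate an AR(1) 'red noise' series of length n."""
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + rng.standard_normal()
        return x

    n_years = 230                    # length of the out-of-band interval
    target = red_noise(n_years)      # stand-in for the proxy record

    # Correlate many independent red-noise runs against the fixed target and
    # report the central 95% of the null distribution of r.
    r = [np.corrcoef(red_noise(n_years), target)[0, 1] for _ in range(2000)]
    print(np.percentile(r, [2.5, 97.5]))

For settings like these, an out-of-band correlation of 0.4 or more falls well outside the null spread.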

Yet, with a non-random deterministic model, the sinusoidal variation should extrapolate backwards, maintaining a coherent phase relationship that depends on how precisely the underlying model frequencies match the physical cycles of tides and wobbles. And as you can see from the multiple-regression model below, the agreement is beyond promising.

[Figure: UEP validation, showing the multiple-regression ENSO model fit against the Unified ENSO Proxy over 1650 to 1980]

The correlation coefficient is very high over the training interval, almost reaching 0.8, and it is about 0.53 over the entire interval. Remember, the model is fit over a region that is less than a third of the entire interval. A two-year temperature spike following the massive Laki volcanic eruption of 1783 was removed.
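For anyone who wants to reproduce the flavor of this back-extrapolation test, here is a minimal sketch. The periods, the annual resolution, and the data file name are placeholder assumptions for illustration, not the actual model ingredients:

    import numpy as np

    # Hypothetical forcing periods in years (placeholders, not the model's values).
    periods = [2.33, 2.90, 8.85, 18.6]

    def design(t):
        """Intercept plus sine/cosine columns for each candidate period."""
        cols = [np.ones_like(t)]
        for p in periods:
            cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
        return np.column_stack(cols)

    t = np.arange(1650, 1978, dtype=float)   # yearly UEP-style time axis
    proxy = np.loadtxt("uep.txt")            # hypothetical file: one value per year

    train = t >= 1880                        # fit only on the modern interval
    coef, *_ = np.linalg.lstsq(design(t[train]), proxy[train], rcond=None)

    model = design(t) @ coef                 # back-extrapolate over the full record
    r_train = np.corrcoef(model[train], proxy[train])[0, 1]
    r_all = np.corrcoef(model, proxy)[0, 1]
    print(f"training r = {r_train:.2f}, full-interval r = {r_all:.2f}")

The key point is that the coefficients are estimated only on the training mask, while the correlation over the full record measures out-of-band phase coherence.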

The bar has been raised for others.


7 thoughts on “Raising the Bar on ENSO Model Validation”

  1. http://stattrek.com/online-calculator/binomial.aspx

    On the validation side, the sign of the excursion matches 143 out of 230 times (i.e., for each year from 1650 to 1880). Using the binomial distribution with P=0.5 (see the calculator linked above), this number of matches or greater should occur with probability 0.00008 strictly due to chance.

    On the training side (which doesn’t count), the sign of the excursion matches 70 out of 97 times (each year from 1880 to 1977). This number of matches could occur by chance with probability 0.000007. Even though that probability is smaller than 0.00008, the training side is prone to overfitting, so the number is meaningless there. (Both tail probabilities can be checked with the scipy.stats.binom sketch after the reference below.)

    However, on the validation side, the probability is significant because it is an out-of-band blind test. The pattern in the underlying oscillations is very likely deterministic, as also shown by

    [1] H. Astudillo, R. Abarca-del-Rio, and F. Borotto, “Long-term non-linear predictability of ENSO events over the 20th century,” arXiv preprint arXiv:1506.04066, 2015.
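
    A quick sanity check on both binomial tail probabilities (a sketch using scipy.stats.binom; sf(k - 1) gives P(X >= k) under the fair-coin null):

      from scipy.stats import binom

      # Probability of at least k sign matches out of n if each match
      # were a fair coin flip (p = 0.5).
      print(binom.sf(143 - 1, 230, 0.5))   # validation interval, 143 of 230
      print(binom.sf(70 - 1, 97, 0.5))     # training interval, 70 of 97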


  2. Yeah, this looks like the way to go, rather than abstruse statistical tests. Everybody gets the binomial, and the p values pretty much correctly reflect the eyeball match of the curves.


  3. Another thing that’s been kicking around in my head is the ENSO phase adjustment of ~1980, and how we explain that in physical terms. And I’m wondering if (a) lunar forcing of sloshing “works” because the physical size of the Pacific basin has a resonant frequency that happens to be pretty close to lunar frequencies; and (b) even close resonant frequencies eventually reach a point where the resonance breaks down and must be re-established.

    I have no idea how to test this idea.
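
    One rough way to start on (a) is a back-of-envelope estimate of the basin’s gravest sloshing period, T = 2L/c. The numbers below are generic textbook values, not a claim about the actual model:

      # Gravest "sloshing" (seiche-like) period of a basin: T = 2L / c.
      L = 17.0e6       # equatorial Pacific width in meters (rough)
      c = 2.8          # first-baroclinic Kelvin-wave speed in m/s (typical)
      T_days = 2 * L / c / 86400
      print(T_days)    # on the order of a hundred days for these numbers

    Comparing that estimate against the lunar periods and their aliases with the annual cycle would be one way to judge whether a near-resonance is plausible.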


    • Good ideas, and this part is open-ended because there are no constraints on what can cause the behavior. With periodic forcing you at least know that it has a long-running impact, but here it is just one event.

      I think it may be a combination of a stimulated resonance and a metastability of the underlying standing wave. The forced response is the strongest, yet any perturbation in the forcing will set up a transient resonance that will die down. The metastability lies in the fact that the standing wave can do a phase reversal while maintaining the forcing. Lots of degrees of freedom to play around with here. I stuck to a phase reversal for now because that is just a sign change.


  4. What do the model results look like through 2020? 2040? I think it is more of a test to compare the model predictions to data that have not been collected yet as of the time of parameter estimation. I am impressed by the fit to past “out of training” data, but I still think it best to actually try to predict the future and see how accurate the prediction turns out.


    [Response: How exactly can I show how good the model is in predicting the future, when the future has not yet arrived? You really have to think before asking these kinds of questions, otherwise it looks like you are groping for flaws.

    The ENSO model is not quite as mature as the QBO model, which is precisely calibrated http://contextearth.com/2015/10/22/pukites-model-of-the-quasi-biennial-oscillation/
    ]


  5. Hi Paul,

    This model looks pretty good to me. What are the periods that have the highest RMS in your ENSO model?

    I believe people would like to see what the model predicts forward in time (say the next 10 years, from 2016 to 2025). You can put it up on your blog, and when 2026 arrives we can see how it has done. Another potential test would be to use different training intervals (maybe take the total record including proxies and divide it into 3 intervals) and see if the coefficients of the model remain fairly consistent when the training intervals are changed; a rough sketch of that check follows below. You could also use the entire history and see how that compares with the shorter intervals (which could be halves, to create less work for you). For a future prediction I would probably use all the data I had, if the training intervals showed consistency; if not, just use 1880 to 2015 (as the data is probably better over that period), or even 1950 to 2015, as the early SOI data is not very good (or so I have read).
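
    A minimal sketch of that interval-consistency check, using the same kind of sinusoidal regression as the sketch in the post body; the periods and the data file are hypothetical stand-ins for the combined record:

      import numpy as np

      periods = [2.33, 2.90]                    # placeholder periods in years
      t = np.arange(1650, 2016, dtype=float)
      series = np.loadtxt("enso_combined.txt")  # hypothetical: one value per year

      def basis(tt):
          """Intercept plus sine/cosine columns for each candidate period."""
          cols = [np.ones_like(tt)]
          for p in periods:
              cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
          return np.column_stack(cols)

      # Fit the same basis on each half and compare the fitted coefficients;
      # roughly consistent coefficients across halves support a stationary model.
      half = len(t) // 2
      for name, sl in [("first half", slice(0, half)), ("second half", slice(half, None))]:
          coef, *_ = np.linalg.lstsq(basis(t[sl]), series[sl], rcond=None)
          print(name, np.round(coef, 3))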


    • Dennis,

      The two highest and most consistent are the 2.33 and 2.9 year periods, with 2.33 the strongest.

      I am going to keep going back to the historical records. Plotting the future holds no interest for me anymore, and I have learned my lesson from peak oil that you only get one chance to make a prediction.

