# CW

Now that we have strong evidence that the AMO and PDO follow the biennially modulated lunar forcing found for ENSO, we can try modeling the Chandler wobble (CW) in detail. Most geophysicists argue that the Chandler wobble frequency is a resonant mode with a high Q-factor, and that random perturbations drive the wobble into its characteristic oscillation. This then interferes with the yearly wobble, generating the CW beat pattern.

But it has not really been clearly established that the measured CW period is a resonant frequency. I have a detailed rationale for a lunar forcing of CW in this post, and Robert Grumbine of NOAA has a related view here.

The key to applying a lunar forcing is to multiply it by an extremely regular seasonal pulse, which introduces enough of a non-linearity to create a physically aliased modulation of the lunar monthly signal (similar to what is done for ENSO, QBO, AMO, and PDO).
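As a sketch of this mechanism (the pulse width, pulse timing, and use of the draconic period here are illustrative assumptions, not the model's fitted parameters), the modulated forcing is just the product of a fast lunar cycle and a narrow pulse repeating at the same calendar date each year:

```python
import numpy as np

# Illustrative sketch: a fast lunar cycle multiplied by a sharp, repeatable
# seasonal pulse. The product contains slow "aliased" beat components near the
# difference between the lunar frequency and integer multiples of 1 cycle/yr.
t = np.arange(0, 50, 1/365.25)              # time in years, daily steps
lunar = np.sin(2*np.pi*t*365.25/27.2122)    # draconic-month cycle (27.2122 d)
# Narrow Gaussian pulse recurring at the same fraction of each year (assumed
# here to peak in November, i.e. ~0.85 of the way through the year)
pulse = np.exp(-0.5*((t - np.floor(t) - 0.85)/0.02)**2)
forcing = lunar * pulse                     # modulated (aliased) forcing

# Spectrum of the forcing, in cycles per year
spec = np.abs(np.fft.rfft(forcing))
freqs = np.fft.rfftfreq(len(t), d=1/365.25)
```

The non-linearity is entirely in the multiplication: neither factor alone has low-frequency content, but their product does.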

The idea is to start the strong seasonal pulse in 1937, as a wavelet analysis indicates that the period was highly variable up to that point. This forcing is then the input to a differential equation with a resonant frequency centered around 400 days:

This allows the 365-day yearly signal and any frequency components around the ~434-day Chandler wobble frequency to pass through.

Another key step is to take the derivative of the Chandler wobble signal with respect to time, turning it into a centripetal acceleration term that is dimensionally equivalent to a forcing. This also removes any long-term trend, making the signal easier to fit.
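A minimal sketch of this step, using a synthetic stand-in for the measured wobble component (the ~433-day period and linear drift are assumptions for illustration):

```python
import numpy as np

# Differentiating the wobble series converts displacement to a rate term and
# turns a slow linear drift into a constant offset, which is trivially removed.
t = np.linspace(1960, 2016, 2000)                 # years
trend = 0.01*(t - 1960)                           # slow drift (assumed)
x = np.sin(2*np.pi*t*365.25/433.0) + trend        # ~433-day wobble + trend
dxdt = np.gradient(x, t)                          # numerical time derivative

# The constant-slope trend is now a constant in dx/dt; subtracting the mean
# isolates the oscillatory part that gets fit against the forcing.
osc = dxdt - np.mean(dxdt)
```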

This is the result, with the model trained over the interval from 1960 to the present.

The reasons for considering the lunar forcing at all are that (1) it should be there on physical grounds, and (2) the alignment with the lunar draconic fortnightly period is too strong to ignore. Using anything other than a 27.2122/2-day forcing period will degrade the fit post-1960 (the interval with the cleanest signal, due to improved wobble measurements).
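The aliasing arithmetic behind this alignment can be checked in a few lines (a sketch, assuming an annual sampling pulse and a tropical-year length of 365.2422 days): the fortnightly draconic frequency folded against one cycle per year lands very close to the observed Chandler wobble period.

```python
# Fold the fortnightly draconic forcing frequency against annual sampling.
year = 365.2422            # tropical year, days
draconic = 27.2122         # draconic month, days
f = year / (draconic / 2)  # forcing frequency, cycles per year (~26.844)
aliased_cpy = f - int(f)   # fractional part = aliased frequency, cycles/yr
aliased_days = year / aliased_cpy  # ~432.7 days, near the CW period
```

Any other forcing period shifts this folded frequency away from the CW band, which is why the fit degrades.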

But what causes the seasonal pulse modulation to kick in only after 1936? The lunar signal is always present, so the alternative is that the seasonal pulse modulation was slightly weaker, or not aligned at precisely the same time each year; either of these would reduce the regularity of the forced signal. There could have been a disruption of this regular impulse at some time prior to 1936, or something could have activated it starting in 1936.

Biological mechanisms may be sensitive to the position of this seasonal impulse. For example, phytoplankton chlorophyll levels spike at the same time as the impulse used in the model, around November.

The premise is that something such as high-latitude sunlight triggers a rapid bloom each year on the same calendar date.

North American salmon catches became locked into an odd-year modulation sometime after 1930 [1]. In the figure below, the red squares are odd years and the blue are even.

Something may have changed around that year, leading to a regular signal that biological populations lock in to.

## References

[1] Irvine, J. R., et al. “Increasing Dominance of Odd-Year Returning Pink Salmon.” Transactions of the American Fisheries Society 143.4 (2014): 939-956.
http://www.tandfonline.com/doi/full/10.1080/00028487.2014.889747

## 7 thoughts on “CW”

1. Paul, if your CW spreadsheet is using ‘Rel Time’ as per the ENSO Model spreadsheet, you may want to make a change to ‘Rel Time.’ I’ve already done this on the ENSO Model spreadsheet because the way it’s currently calculated gives a misleading idea of accuracy.

The problem lies in the number of days in a month. Currently the spreadsheet treats each month as if it has the same number of days, but of course they don’t. Here are the errors, in days, for a single leap year period with January as month 1 of relative time.

```
Month   Equal    Actual   Error
  #     (days)   (days)   (days)
  01     30.44     31.0   -0.56
  02     60.88     59.0    1.88
  03     91.31     90.0    1.31
  04    121.75    120.0    1.75
  05    152.19    151.0    1.19
  06    182.63    181.0    1.63
  07    213.06    212.0    1.06
  08    243.50    243.0    0.50
  09    273.94    273.0    0.94
  10    304.38    304.0    0.38
  11    334.81    334.0    0.81
  12    365.25    365.0    0.25
  13    395.69    396.0   -0.31
  14    426.13    424.0    2.13
  15    456.56    455.0    1.56
  16    487.00    485.0    2.00
  17    517.44    516.0    1.44
  18    547.88    546.0    1.88
  19    578.31    577.0    1.31
  20    608.75    608.0    0.75
  21    639.19    638.0    1.19
  22    669.63    669.0    0.63
  23    700.06    699.0    1.06
  24    730.50    730.0    0.50
  25    760.94    761.0   -0.06
  26    791.38    789.0    2.38
  27    821.81    820.0    1.81
  28    852.25    850.0    2.25
  29    882.69    881.0    1.69
  30    913.13    911.0    2.13
  31    943.56    942.0    1.56
  32    974.00    973.0    1.00
  33   1004.44   1003.0    1.44
  34   1034.88   1034.0    0.88
  35   1065.31   1064.0    1.31
  36   1095.75   1095.0    0.75
  37   1126.19   1126.0    0.19
  38   1156.63   1155.0    1.63
  39   1187.06   1186.0    1.06
  40   1217.50   1216.0    1.50
  41   1247.94   1247.0    0.94
  42   1278.38   1277.0    1.38
  43   1308.81   1308.0    0.81
  44   1339.25   1339.0    0.25
  45   1369.69   1369.0    0.69
  46   1400.13   1400.0    0.13
  47   1430.56   1430.0    0.56
  48   1461.00   1461.0    0.00
```
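This error table can be reproduced with the standard `calendar` module. The start year 1881 below is an assumption chosen so the leap year falls in the fourth year of the 48-month cycle, matching the rows above; the commenter's actual spreadsheet start may differ.

```python
import calendar

# Cumulative days assuming equal 365.25/12-day months vs. actual calendar
# days over one 48-month (single-leap-year) cycle starting January 1881.
start_year = 1881
errors, actual = [], 0
for m in range(1, 49):
    year = start_year + (m - 1) // 12
    month = (m - 1) % 12 + 1
    actual += calendar.monthrange(year, month)[1]  # days in this month
    equal = m * 365.25 / 12                        # equal-month assumption
    errors.append(equal - actual)
```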

For a short period like this the average error is fairly large (0.27%); over the full time record of 1880 to 2016 the average error is 0.0034%, with a StDev of 0.063%. I just finished implementing a change in ‘Rel Time’ on my ENSO spreadsheet this morning. It doesn’t result in any drastic changes, but where we want that 2nd or 3rd decimal place of accuracy and precision it is necessary.


• Excellent, I have noticed this shows up in the difference between using 365, 365.25, and 365.2419 days in the calendar year. The first does not work at all for lunar aliasing calculations and should never be used. The second works in a pinch, as it gives the extra leap day every four years. The third is the best because it follows the full leap-year algorithm:

```
if (year is not divisible by 4) then (it is a common year)
else if (year is not divisible by 100) then (it is a leap year)
else if (year is not divisible by 400) then (it is a common year)
else (it is a leap year)
```

Yet since the data spans just over 100 years, it's not clear which exact value to use to approximate this variation in the length of a calendar year.
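A quick way to see the ambiguity (a sketch; the 1880–2016 span is taken from the comment above): the mean year length over that record under the full leap-year rule is neither 365.25 nor the 400-year Gregorian mean of 365.2425, because 1900 is a skipped leap year.

```python
def is_leap(year):
    # Full Gregorian leap-year rule
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Mean calendar-year length over the 1880-2016 record
years = range(1880, 2017)
mean_len = sum(366 if is_leap(y) else 365 for y in years) / len(years)
```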


• Yes, 1900 is the year that causes the trouble. I took the quick and dirty way out and wrote one formula for 1880 through 1896, another for 1897 through 1900, and a third for 1901 until the present. The first and third use 365.25. The middle one, covering the four-year span containing the non-leap year 1900, uses a straight 365.

I could have put it all in one formula using nested if statements, but they become so difficult for others to read and understand that I try to avoid them as much as possible.

Here’s a graph of the ‘Rel Time’ errors due to assuming an equal number of days in each month. As you can see, the error drops off quickly; by 1920 the errors are under 0.01%.


2. Long, lengthy comment written at 5 am detailing my approach to calculating Rel Time uncertainties and a prospective Monte Carlo scheme. Eaten upon submission because I forgot to log in before hitting ‘Post Comment’.

Now I have to try and remember everything I had planned 🙂

I do have some preliminary results based on 5 or 6 model runs. These uncertainties are based on the RSS of all the individual months that make up the calculation for the time interval in question. While this is debatably accurate, it doesn’t tell us anything about the sensitivity of the individual components (ip, drac, anom), whether Solver is being driven into a different solution space, or what Solver’s uncertainty is. Hence the need for an MC approach.

```
Relative Time Error
1.732%   1880 - 1900   + Bias √
1.734%   1880 - 1920   + Bias √
0.216%   1885 - 1935   - Bias
0.215%   1885 - 1930   - Bias
0.097%   1900 - 1940
0.459%   1st half      - Bias √
0.041%   2nd half      - Bias ?
```

I include the SIGN of the bias because the uncertainties are not symmetric. For large uncertainties we can expect the model result to be skewed high or low, and my results confirmed that. Any starting date of 1885 or greater essentially has no bias when compared against the other uncertainties involved. I.e., yes, there’s bias, but it’s lost in the noise. I have a check next to 1st half, but I would bet that if I ran it 100 times it would come out close to 50-50.
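A simplified variant of the kind of check behind these bias numbers (a sketch only; the table's percentages come from the commenter's spreadsheet, and this helper just reports the mean signed error in days for an interval under the equal-month assumption):

```python
import calendar

def rel_time_error(y0, y1):
    """Mean signed error (days) of equal-length months vs. actual calendar
    days, accumulated month by month over [y0, y1). The sign of the result
    indicates the direction of the bias for that interval."""
    actual, errs, m = 0, [], 0
    for y in range(y0, y1):
        for mo in range(1, 13):
            m += 1
            actual += calendar.monthrange(y, mo)[1]
            errs.append(m * 365.25 / 12 - actual)
    return sum(errs) / len(errs)

bias = rel_time_error(1880, 1900)  # sign shows which way the skew runs
```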


• Upon rereading, I realize I wasn’t clear: the Relative Time Error above is for the old ‘Rel Time’ formula.

If you’re moving away from that scheme, then I’ll concentrate on a Monte Carlo that gives us current errors and uncertainties, and let the old scheme die a quiet death 🙂

That cuts the amount of data in the MC in half.
