Expansion of atmosphere and ocean

This is a short tutorial, together with some observational evidence, explaining how the atmosphere and ocean are expanding measurably in the face of global warming.

Ocean thermal expansion

The ocean absorbs heat per unit area according to its heat capacity:

$$ \Delta E = C_p \cdot \Delta T \cdot {Depth}$$

The linear coefficient of thermal expansion is assumed constant over a temperature range. Multiplying this over a depth:

$$ \Delta Z = \epsilon \cdot \Delta T \cdot Depth $$
But now we can substitute the total energy gained from the first equation:
$$ \Delta Z = \epsilon \frac{\Delta E}{C_p} $$
Assume that the linear coefficient of thermal expansion is 0.000207 per °C, and the volumetric heat capacity is 4,000,000 J/m^3/°C.

If an excess forcing of 0.6 W/m^2 occurs over one year (see the OHC model), then the increase in the level of the ocean is

0.000207  * 0.6*(365*24*60*60) / 4,000,000 = 0.98 mm

This is called the steric sea-level rise, and it is just one component of the sea-level rise over time (the others have to do with melting ice). From the figure below, one can see that the thermal rise is close to 1mm/year over the last decade.
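
For anyone who wants to check the arithmetic, a few lines of Python reproduce the number above; the constants are the values quoted in the text.

```python
# Back-of-envelope steric (thermal) sea-level rise for the numbers quoted above.
alpha = 0.000207                 # linear thermal expansion coefficient, 1/degC
cp = 4.0e6                       # volumetric heat capacity, J/(m^3 degC)
forcing = 0.6                    # excess forcing, W/m^2
seconds_per_year = 365 * 24 * 60 * 60

delta_E = forcing * seconds_per_year     # energy absorbed per unit area in one year, J/m^2
delta_Z = alpha * delta_E / cp           # thermal expansion, m
print(round(delta_Z * 1000, 2), "mm per year")   # ~0.98
```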

The red line shows the thermal expansion from the ocean heating. Thermal rise is 1 mm/year over the 3 mm/year total sea-level rise.

Atmosphere thermal expansion

The trick here is to infer the atmospheric expansion by looking at an equal-pressure point as a function of altitude (a geopotential height isobar) and determining how much that altitude has increased over time.  I was able to find one piece of data from The Weather Channel’s senior meteorologist Stu Ostro.

The geopotential height anomaly is shown below for the 500 mb (1/2 atmosphere) level.

Geopotential height anomaly @ 500 mb plotted alongside global temperature anomaly.

In absolute terms it is charted as follows:

Global average geopotential height @ 500 mb plotted alongside global temperature.
http://icons-ak.wxug.com/graphics/earthweek/geopotential-height-and-air-temperature.png

To understand how the altitude has changed, consider the classical barometric formula:
$$ P(H) = P_0 e^{-mgH/RT} $$

We take the altitude at which the pressure has dropped to half of the standard sea-level value of 1 atmosphere:

$$ P(H)/P_0 = 0.5 = e^{-mgH/RT} $$

or
$$ H = RT/mg \cdot ln(2) $$
Assuming the average molecular weight of the atmospheric gas constituents does not change, the change in altitude (H) should be related to the change in temperature (T) by:
$$ \Delta H = R \Delta T / mg \cdot ln(2) $$
or
$$ \frac{ \Delta H}{\Delta T} =  \frac{R}{mg} ln(2) $$
For R = 8314 J/(kmol·K), m = 29 kg/kmol, and g = 9.8 m/s^2, the slope should be 29.25 · ln(2) = 20.3 m/°C.
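
As a quick check of that theoretical slope, using the constants just quoted:

```python
# Theoretical sensitivity of the 500 mb height to temperature: dH/dT = (R/(m*g)) * ln(2)
import math

R = 8314.0      # J/(kmol K)
m = 29.0        # kg/kmol
g = 9.8         # m/s^2
print(round(R / (m * g) * math.log(2.0), 1), "m per degC")   # ~20.3
```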

From the linear regression agreement between the two, we get a value of 25.7 m/°C.

Linear regression between the geopotential height change and temperature change

Why is this geopotential height change about 26% higher than the theoretical value, considering that the height should track the temperature according to the barometric formula?

If we use the polytropic approximation (equation 1 in the barometric formula), the altitudinal difference between the low-temperature and high-temperature 500 mb pressure values remains the same as when we use the classic exponentially damped barometric formula:

If we apply the polytropic barometric formula instead of the exponential, we still show a real height change that is higher than theoretically predicted by ~25% at the 500 mb isobar.

This discrepancy could be due to measurement error, as the readings are taken by weather balloons and the accuracy could have drifted over the years.

It is also possible that the composition of the atmosphere could have changed slightly at altitude. What happens if the moisture increased slightly? This shouldn’t make much difference.

Most likely is the possibility that the baseline sea-level pressure has changed, which shifted the 500 mbar point artificially. See http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-3-2-4.html.

“10.3.2.4 Sea Level Pressure and Atmospheric Circulation

As a basic component of the mean atmospheric circulations and weather patterns, projections of the mean sea level pressure for the medium scenario A1B are considered. Seasonal mean changes for DJF and JJA are shown in Figure 10.9 (matching results in Wang and Swail, 2006b). Sea level pressure differences show decreases at high latitudes in both seasons in both hemispheres. The compensating increases are predominantly over the mid-latitude and subtropical ocean regions, extending across South America, Australia and southern Asia in JJA, and the Mediterranean in DJF. Many of these increases are consistent across the models. This pattern of change, discussed further in Section 10.3.5.3, has been linked to an expansion of the Hadley Circulation and a poleward shift of the mid-latitude storm tracks (Yin, 2005). This helps explain, in part, the increases in precipitation at high latitudes and decreases in the subtropics and parts of the mid-latitudes. Further analysis of the regional details of these changes is given in Chapter 11. The pattern of pressure change implies increased westerly flows across the western parts of the continents. These contribute to increases in mean precipitation (Figure 10.9) and increased precipitation intensity (Meehl et al., 2005a). “

I will have to figure out the mean sea-level pressure change over this time period to verify this hypothesis.

The following recent research article looks into the shifts in geopotential height over shorter time durations:

[1]
Y. Y. Hafez and M. Almazroui, “The Role Played by Blocking Systems over Europe in Abnormal Weather over Kingdom of Saudi Arabia in Summer 2010,” Advances in Meteorology, vol. 2013, p. 20, 2013.

 

                   ADDED  7/8/2013                

One possibility for the larger-than-expected altitude change is that the average lapse rate has changed slightly.  We can use the lapse rate variation of the barometric formula, and perturb the lapse rate, L, around its average value:

$$ P(H) = P(0) (1- \frac{LH}{T_0})^{\frac{gm}{LR}} $$

Granted, the error bars on this calculation are significant but we can see how subtle the effect is.

Sea-level pressure, P(0) = 1013.25 mb
Gas constant,  R = 8.31446 J/K/mol
Earth’s gravity, g = 9.807 m/s^2
Avg molecular weight, m = 0.02896 kg/mol

In 1960, the temperature was about 14.65°C and 500 mb altitude was 5647 m.
In 2010, the temperature was about 15.4°C and 500 mb altitude was 5667 m.

All we need to do is invert the P(H) formula for each pair of values, modifying L slightly.

If we select a lapse rate, L, for 1960 of 0.0051°C/m, we calculate H = 5647.9m for P(H)=500 mb.
If we select a lapse rate, L, for 2010 of 0.0050°C/m, we calculate H = 5668.4m for P(H)=500 mb.

The difference in the two altitudes for a change in L of -0.0001°C/m is 20.5m, about what the geopotential height chart shows.

If we leave the L at 0.005C/m for both 1960 and 2010, the difference of the 500mb altitudes is only 14.7m.
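
For anyone who wants to reproduce the inversion, here is a minimal sketch; the constants are the ones listed above and the two lapse rates are the perturbed values just discussed.

```python
# Invert the lapse-rate (polytropic) barometric formula P = P0*(1 - L*H/T0)**(g*m/(L*R))
# for the altitude H of the 500 mb isobar.
def height_at_pressure(P, T0, L, P0=1013.25, R=8.31446, g=9.807, m=0.02896):
    return (T0 / L) * (1.0 - (P / P0) ** (L * R / (g * m)))

h_1960 = height_at_pressure(500.0, T0=273.15 + 14.65, L=0.0051)
h_2010 = height_at_pressure(500.0, T0=273.15 + 15.40, L=0.0050)
print(round(h_1960, 1), round(h_2010, 1), round(h_2010 - h_1960, 1))   # ~5648 m, ~5668 m, ~20 m
```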

To the extent that we can trust the numbers on the charts from Ostro, the change in geopotential height suggests a feedback effect in the lapse rate due to global warming.  The lapse rate is decreasing over time, which implies that the heat capacity of the atmosphere is increasing (likely due to higher specific humidity), thereby buffering changes in temperature with altitude.

This means that a given temperature increase at a particular altitude (where the CO2 IR window can achieve a radiative balance) will be reflected as a scale-modified temperature at sea level.

In 1960, the temperature drop from sea level to the 500 mb altitude is 0.0051°C/m * 5647.9m = 28.8°C
In 2010, the temperature drop from sea level to the 500 mb altitude is 0.0050°C/m * 5668.4m = 28.34°C

The difference at sea level from the chart is 15.4°C - 14.65°C = 0.75°C, whereas the difference at the 500 mb altitude, assuming the modified lapse rate, is 0.75°C - 0.46°C = 0.29°C.  If the lapse rate didn’t change, the sea-level difference would be preserved unchanged at a constant-pressure isobar at altitude.

For implications in the interpretation, see page 24 of National Research Council Panel on Climate Change Feedbacks (2003), Understanding Climate Change Feedbacks, National Academies Press, ISBN 978-0-309-09072-8.  They caution that the measurements require some precision, otherwise the errors can multiply due to the differences between two large numbers.

Characterization of Battery Charging and Discharging

I had the good fortune of taking a week long Society of Automotive Engineers (SAE) Academy class on hybrid/electric vehicles. The take-home message behind HEV and EV technology is to remember that a quality battery plus optimizing the management of battery cycling remain the keys to success.   That is not surprising —  we all know that gasoline has long been “king”, and since current battery technology has nowhere near the energy density of gasoline, the battery has turned into a “diva”.  In other words, it will perform as long as it is in charge (so to speak) and the battery is well maintained.

I can report that much class time was devoted to the electrochemistry of Lithium-ion batteries. Lithium is an ideal elemental material due to its position in the upper left-hand corner of the periodic table — in other words a very lightweight material with a potentially high energy density.

What was surprising to me was the sparseness of detailed characterization of the material properties. One instructor stated that the lack of measured diffusion properties in battery cell specifications was a pet peeve of his.  Having these properties at hand allows a design engineer to better model the charging and discharging characteristics of the battery, and thus perhaps to develop better battery management schemes.  Coming from the semiconductor world, it is almost unheard of to do design without adequate device characteristics such as mobility and diffusivity.

From my perspective, this is not necessarily bad. Any time I see an anomalous behavior or missing piece, it opens the possibility I can fill a modeling niche.

Introduction

Modern rechargeable battery technology still relies on the principles of electro-chemistry and a reversible process, which hasn’t changed in fundamental terms since the first lead-acid battery came to market in the early 1900’s. What has changed is the combination of materials that make a low-cost, lightweight, and energy-efficient battery which will serve the needs of demanding applications such as electric and hybrid-electric vehicles (EV/HEV).

As energy efficient operation is dependent on the properties of the materials being combined, it is well understood that characterizing the materials is important to advancing the state-of-the-art (and in increasing EV acceptance).

Of vital importance is the characterization of diffusion in the electrode materials, as that is the rate-limiting factor in determining the absolute charging and discharging speed of the material-specific battery technology. Unfortunately, because of the competitive nature of battery producers, many of the characteristics are well-guarded and treated as trade secrets. For example, it is very rare to find diffusion coefficient characteristics on commercial battery specification sheets, even though this kind of information is vital for optimizing battery management schemes [7].

In comparison to the relatively simple diffusional mechanisms of silicon oxide growth, the engineered structure of a well-designed battery cell presents a significant constraint on the diffusional behavior. In Figure 1 below we show a schematic of a single lithium-ion cell and the storage particles that charge and discharge. The disordered nature of the storage particles in Figure 2 is often described by what is referred to as a tortuosity measure.

Figure 1: Exaggerated three-dimensional view of a lithium-ion battery cell and the direction of current flow during charging and discharging
Figure 2 : Realistic view of the heavily disordered nature of the LiFePO4 storage particles [1].

The constraint on the diffusion is that it is limited in scale to the radius of the storage particle. The length scale is limited essentially to the values L to Lmax shown in Figure 3 below.

Figure 3 : Diffusion of ions takes place through the radial shell of the LiFePO4 spherical particle [1]. During the discharge phase, the ions need to migrate outward through the shell and through the SEI barrier before reaching the electrolyte. At this point they can contribute to current flow.

The size of the particles also varies as shown in Figure 4 below.  The two Lithium-ion materials under consideration, LiFePO4 and LiFeSO4F, have different materials properties but are structurally very similar (matrixed particles of mixed size) so that we can use a common analysis approach.  This essentially allows us to apply uncertainty in the diffusion coefficient and uncertainties in the particle size to establish a common diffusional behavior formulation.

Figure 4 : Particle size distribution of FeSO4F spherical granules [2]. The variation in lengths and material diffusivities opens the possibility of applying uncertainty quantification to a model of diffusive growth.

Dispersive Diffusion Analysis of Discharging

The diffusion of ions through the volume of a spherical particle does have similarity to classical regimes such as the diffusion of silicon through silicon dioxide.  That process in fact leads to the familiar Fick’s law of diffusion, whereby the growing layer of oxide follows a parabolic growth law (in fact this is a square root with time, but was named parabolic by the semiconductor technology industry for historical reasons, see for example here).

The model that we can use for Li+ diffusion derives from the classic solution to the Fokker-Planck equation of continuity (neglecting any field driven drift).
$$ \frac{\partial C}{\partial t} - D \nabla^2 C=0 $$ where C is a concentration and D is the diffusion coefficient.  Ignoring the spherical orientation, we can just assume a solution along a one-dimensional radially outward axis, x:

$$ \large C(t,x|D) = \frac{1}{\sqrt{4 \pi D t}} e^{-x^2/{4 D t}} $$
This kernel is conditioned on the diffusion coefficient. Since we do not know the variance of the diffusivity, we can apply a maximum entropy distribution across D and marginalize over it.
$$ \large p_d(D) = \frac{1}{D_0} e^{-D/D_0} $$
This simplifies the representation to the following workable formulation.
$$ \large C(t,x) = \frac{1}{2 \sqrt{D_0 t}} e^{-x/{\sqrt{D_0 t}}} $$
We now have what is called a kernel solution (i.e. Green’s function) that we can apply to specific sets of initial conditions and forcing functions, the latter solved via convolution.
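
As a sanity check on that marginalization, the average of the Gaussian kernel over the exponential spread in D can be integrated numerically and compared against the closed form; the values of D0, t and x below are arbitrary dimensionless test numbers.

```python
# Numerical check: averaging the Gaussian kernel over an exponential (maximum entropy)
# spread in diffusivity D reproduces the double-sided exponential kernel quoted above.
import numpy as np
from scipy.integrate import quad

D0, t, x = 1.0, 1.0, 0.5          # arbitrary dimensionless test values

def integrand(D):
    gaussian = np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)
    prior = np.exp(-D / D0) / D0
    return gaussian * prior

numeric, _ = quad(integrand, 0.0, np.inf)
closed_form = np.exp(-x / np.sqrt(D0 * t)) / (2.0 * np.sqrt(D0 * t))
print(numeric, closed_form)       # the two values agree to numerical precision
```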

Fully Charged Initial Conditions
Assume the spherical particle is uniformly distributed with a charge density C(0, x) at time t=0.

Discharging Model
For every point along the dimensions of the particle of size L, we calculate the time it takes to diffuse to the outer edge, where it can enter the electrolytic medium.  This is simply an integral of the C(t, x) term for all points starting from x’ = d to L, where d is the inner core radius.
$$  \large C(t) = \int_{d}^L C(t,L-x) dx $$
this integrates straightforwardly to this concise representation:
$$ \large C(t) = C_0 \frac{ 1 - e^{-(L-d)/{\sqrt{D_0 t}}} } {L - d} $$
The voltage of the cell is essentially the amount of charge available, so as this charge depletes, the voltage decreases in proportion.
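
A minimal evaluation of this discharge profile is sketched below; C0, L, d and D0 are placeholder values, not the parameters fitted to the data in [1].

```python
# Dispersive-diffusion discharge profile C(t) = C0*(1 - exp(-(L-d)/sqrt(D0*t)))/(L-d).
# All parameter values here are illustrative placeholders.
import numpy as np

def remaining_charge(t, C0=1.0, L=1.0, d=0.0, D0=1.0):
    t = np.asarray(t, dtype=float)
    return C0 * (1.0 - np.exp(-(L - d) / np.sqrt(D0 * t))) / (L - d)

t = np.logspace(-2, 3, 6)        # dimensionless times
print(np.round(remaining_charge(t), 4))   # fast initial drop, long slow tail
```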

We can test the model on two data sets corresponding to a LiFePO4 cell [1] and a LiFeSO4F cell [2]. Figure 5 below shows the model fit for LiFePO4 as the red dotted line, which should be compared against the solid black line labelled 1. The other curves labelled 2, 3, 4, 5 are alternative diffusional model approximations applied by Churikov et al. that clearly do not work as well as the dispersive diffusion formulation derived above.
 

Figure 5 : Discharge profile of LiFePO4 battery cell [1], with the red dotted line showing the parameterized dispersive diffusion model.  The curves labelled 1 through 5 show alternative models that the authors applied to fit the data. Only the dispersive diffusion model duplicates the fast drop-off and long-time scale decline.

Figure 6 below shows the fit to the voltage characteristics of a LiFeSO4F cell, drawn as a red dotted line above the light gray data points. In this case the diffusional model by Delacourt, shown in solid black, is well outside acceptable agreement.

Figure 6 : Discharge profile of LiFeSO4F battery cell [2], with the red dotted line showing the parameterized dispersive diffusion model.  The black curve shows the model that the authors applied to fit the data. 

The question is why this simple formulation works so well. As with many similar cases of characterizing disordered material, the fundamentally derived solution needs to be adjusted to take into account the uncertainty in the parameter space.  However, this step is not routinely performed, and adding modeling details (see [4]) to try to make up for a poor fit works only as a cosmetic heuristic.   In contrast, by performing the uncertainty quantification, as we did with the diffusion coefficient, the first-order solution works surprisingly well with no need for additional detail.

Constant Current Discharge
Instead of assuming that the particle size is L, we can say that the L is an average and apply the same maximum entropy spread in values.
$$  \large C(t) = \int_{0}^{\infty} C(t,x) \frac{1}{L} e^{-x/L} dx $$
this integrates straightforwardly to this concise representation:
$$ \large C(t) = C_0 \frac{1}{ L + {\sqrt{D_0 t}} } $$
The reason we do this is to allow us to relate the change in charge to a current. In this case, to get the current we need to differentiate the charge with respect to time.
$$ \large I(t) = \frac{dC(t)}{dt}  $$
This differentiates to the following expression:
$$ \large I(t) = - \frac{C_0 \sqrt{D_0}}{ (L + {\sqrt{D_0 t}})^2} \frac{1}{2 \sqrt{t}} $$
But note that we can insert C(t) back into the expression; in terms of the magnitude of the discharge current:
$$ \large I(t) = \frac{C(t) \sqrt{D_0}}{(L + {\sqrt{D_0 t}}) 2 \sqrt{t}} $$
Finally, if I(t) is held constant we can set it to a value I_constant (the constant factor of \(\sqrt{D_0}\) is absorbed into the proportionality coefficient). Then the charge has the following profile
$$ C(t) = C(0) - k_c I_{constant} (L + \sqrt{D_0 t}) \cdot 2 \sqrt{t} $$
or as a voltage decline
$$ V(t) = V(0) - k_v I_{constant} (L + \sqrt{D_0 t}) \cdot 2 \sqrt{t} $$
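
A sketch of that voltage profile for a few current settings is shown below; V0, k, L, D0 and the current values are illustrative placeholders, not quantities fitted to the data of [6].

```python
# Constant-current voltage decline V(t) = V0 - k*I*(L + sqrt(D0*t))*2*sqrt(t).
# All parameter values here are illustrative placeholders.
import numpy as np

def voltage(t, I_constant, V0=3.4, k=0.02, L=1.0, D0=1.0):
    t = np.asarray(t, dtype=float)
    return V0 - k * I_constant * (L + np.sqrt(D0 * t)) * 2.0 * np.sqrt(t)

t = np.linspace(0.0, 4.0, 5)
for I in (0.5, 1.0, 2.0):                  # a family of constant-current discharge curves
    print(I, np.round(voltage(t, I), 3))
```
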
For a set of constant current values, we can compare this formulation against experimental data for LiFePO4 (shown as gray open circles) shown in Figure 7 below. A slight constant current offset (which may arise from unspecified shunting and/or series elements) was required to allow for the curves to align proportionally.  Even with that, it is clear that the dispersive diffusion formulation works better than the conventional model (solid black lines) except where the discharge is nearing completion.

Figure 7 : Constant current discharge profile [6]. Superimposed as dotted lines are the set of model fits which use the current value as a fixed parameter.

We can also model battery charging but the lack of information on the charging profile makes the discharge behavior a simpler study.

Related Diffusion Topics

References

[1]
A. Churikov, A. Ivanishchev, I. Ivanishcheva, V. Sycheva, N. Khasanova, and E. Antipov, “Determination of lithium diffusion coefficient in LiFePO4 electrode by galvanostatic and potentiostatic intermittent titration techniques,” Electrochimica Acta, vol. 55, no. 8, pp. 2939–2950, 2010.

[2]
C. Delacourt, M. Ati, and J. Tarascon, “Measurement of Lithium Diffusion Coefficient in LiyFeSO4F,” Journal of The Electrochemical Society, vol. 158, no. 6, pp. A741–A749, 2011.

[3]
M. Park, X. Zhang, M. Chung, G. B. Less, and A. M. Sastry, “A review of conduction phenomena in Li-ion batteries,” Journal of Power Sources, vol. 195, no. 24, pp. 7904–7929, Dec. 2010.

[4]
J. Christensen and J. Newman, “A mathematical model for the lithium-ion negative electrode solid electrolyte interphase,” Journal of The Electrochemical Society, vol. 151, no. 11, pp. A1977–A1988, 2004.

[5]
Q. Wang, H. Li, X. Huang, and L. Chen, “Determination of chemical diffusion coefficient of lithium ion in graphitized mesocarbon microbeads with potential relaxation technique,” Journal of The Electrochemical Society, vol. 148, no. 7, pp. A737–A741, 2001.

[6]
P. M. Gomadam, J. W. Weidner, R. A. Dougal, and R. E. White, “Mathematical modeling of lithium-ion and nickel battery systems,” Journal of Power Sources, vol. 110, no. 2, pp. 267–284, 2002.

[7]
“Hybrid and Electric Vehicle Engineering Academy.” [Online]. Available: http://training.sae.org/academies/acad06/. [Accessed: 09-Jun-2013].

The homework problem to end all homework problems

This is a problem that has driven anyone that has studied climate science up the wall.

Premise: Venus has an adiabatic index γ (gamma) and a temperature lapse rate λ (lambda). Earth also has an adiabatic index and temperature lapse rate.  These have been measured, and for the Earth a standard atmospheric profile has been established. The general relationship is based on thermodynamic principles but the shape of the profile diverges from simple applications of adiabatic principles.  In other words, a heuristic is applied to allow it to match the empirical observations, both for Venus and Earth. See this link for more background

Assigned Problem: Derive the adiabatic index and lapse rate for both planets, Venus and Earth, using only the planetary gravitational constant, the molar composition of atmospheric constituents, and any laws of physics that you can apply.  The answer has to be right on the mark with respect to the empirically-established standards.

Caveat: Reminder that this is a tough nut to crack.

Solution:  The approach to use is concise but somewhat twisty.  We work along two paths, the initial path uses basic physics and equations of continuity;  while the subsequent path ties the loose ends together using thermodynamic relationships which result in the familiar barometric formula and lapse rate formula.  The initial assumption that we make is to start with a sphere that forms a continuum from the origin; this forms the basis of a polytrope, a useful abstraction to infer the generic properties of planetary objects.

An abstracted planetary atmosphere

The atmosphere has a density ρ that decreases outward from the origin. The basic laws we work with are the following:

Mass Conservation

$$ \frac{dm(r)}{dr} = 4 \pi r^2 \rho $$

Hydrostatic Equilibrium

$$ \frac{dP(r)}{dr} = – \rho g = – \frac{Gm(r)}{r^2}\rho $$

To convert to purely thermodynamic terms, we first integrate the hydrostatic equilibrium relationship over the volume of the sphere
$$ \int_0^R \frac{dP(r)}{dr} 4 \pi r^3 dr = 4 \pi R^3 P(R) – \int_0^R 12 P(r) \pi r^2 dr $$
on the right side we have integrated by parts, and eliminate the first term as P(R) goes to zero (note: upon review, the zeroing of P(R) is an approximation if we do not let R extend to the deep pressure vacuum of space, as we recover the differential form later — right now we just assume P(R) decreases much faster than R^3 increases ). We then reduce the second term using the mass conservation relationship, while recovering the gravitational part:
$$ - 3 \int_0^M\frac{P}{\rho}dm = -\int_0^R 4 \pi r^3 \frac{G m(r)}{r^2} \rho \, dr $$
again we apply the mass conservation
$$ – 3 \int_0^M\frac{P}{\rho}dm = -\int_0^M  \frac{G m(r)}{r} dm $$
The right-hand side is simply the total gravitational potential energy Ω, while the left side reduces to a pressure-volume relationship:
$$ - 3 \int_0^V P dV = \Omega$$
This becomes a variation of the Virial Theorem relating internal energy to potential energy.

Now we bring in the thermodynamic relationships, starting with the ideal gas law with its three independent variables. 

Ideal Gas Law

$$ PV = nRT $$

Gibbs Free Energy

$$ E = U – TS + PV $$

Specific Heat (in terms of molecular degrees of freedom)

$$ c_p = c_v + R = (N/2 + 1) R $$

On this path, we make the assertion that the Gibbs free energy will be minimized with respect to perturbations. i.e. a variational approach.

$$ dE = 0 = dU – d(TS) + d(PV) = dU – TdS – SdT + PdV + VdP $$

Noting that the system is closed with respect to entropy changes (an adiabatic or isentropic process) and substituting the ideal gas law featuring a molar gas constant for the last term.

$$ 0 = dU – SdT + PdV + VdP = dU – SdT + PdV + R_n dT$$

At constant pressure (dP=0) the temperature terms reduce to the specific heat at constant pressure:

$$ – S dT + nR dT = (c_v +R_n) dT = c_p dT $$

Rewriting the equation

$$ 0 = dU + c_p dT + P dV $$

Now we can recover the differential virial relationship derived earlier:

$$ – 3 P dV = d \Omega $$

and replace the unknown PdV term

$$ 0 = dU + c_p dT – d \Omega / 3 $$

but dU is the same potential energy term as dΩ, so

$$ 0 = 2/3 d \Omega+ c_p dT $$

Linearizing the potential gravitational energy change with respect to radius

$$ 0 = \frac{2 m g}{3} dr + c_p dT $$

Rearranging this term we have derived the lapse formula

$$ \frac{dT}{dr} = – \frac{mg}{3/2 c_p} $$

Reducing this in terms of the ideal gas constant and molecular degrees of freedom N

$$ \frac{dT}{dr} = – \frac{mg}{3/2 (N/2+1) R_n} $$

We still need to derive the adiabatic index, by coupling the lapse rate formula back to the hydrostatic equilibrium formulation.

Recall that the perfect adiabatic relationship (the Poisson’s equation result describing the potential temperature) does not adequately describe a standard atmosphere — being 50% off in lapse rate —  and so we must use a more general polytropic process approach.

Combining the Mass Conservation with the Hydrostatic Equilibrium:

$$ \frac{1}{r^2} \frac{d}{dr} (\frac{r^2}{\rho} \frac{dP}{dr}) = -4 \pi G \rho $$

if we make the substitution
$$ \rho = \rho_c \theta^n $$
where n is the polytropic index.  In terms of pressure via the ideal gas law
$$ P = P_c \theta^{n+1} $$
if we scale r as the dimensionless ξ :

$$ \frac{1}{\xi^2} \frac{d}{d\xi} (\xi^2 \frac{d\theta}{d\xi}) = - \theta^n $$

This formulation is known as the Lane-Emden equation and is notable for resolving to a polytropic term. A solution for n=5 is
$$ \theta = ({1 + \xi^2/3})^{-1/2} $$
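
A quick symbolic check confirms that this profile satisfies the n = 5 equation:

```python
# Verify that theta = (1 + xi**2/3)**(-1/2) solves the n = 5 Lane-Emden equation.
import sympy as sp

xi = sp.symbols('xi', positive=True)
theta = (1 + xi**2 / 3) ** sp.Rational(-1, 2)
residual = sp.diff(xi**2 * sp.diff(theta, xi), xi) / xi**2 + theta**5
print(sp.simplify(residual))   # prints 0
```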

We now have a link to the polytropic process equation
$$ P V^\gamma = {constant} $$
and
$$ P^{1-\gamma} T^{\gamma} = {constant} $$
or
$$ P = P_0 (\frac{T}{T_0})^{\frac{\gamma}{\gamma-1}} $$
Tying together the loose ends, we take our lapse rate gradient
$$ \frac{dT}{dr} = - \frac{mg}{3/2 (N/2+1) R} $$
and convert that into an altitude profile, where r = z
$$ T = T_0 (1 – \frac{z}{f z_0}) $$
where
$$ z_0 = \frac{R T_0}{m g} $$
and
$$ f = 3/2 (1 + N/2) $$
and the temperature gradient, aka lapse rate
$$ \lambda=   \frac{m g}{ 3/2 (1 + N/2) R } $$
To generate a polytropic process equation from this, we merely have to raise the linear temperature profile to a power, so that we recreate the power-law version of the barometric formula:
$$  P = P_0 (1 – \frac{z}{f z_0})^f $$
which essentially reduces to Poisson’s equation on substitution:
$$ P = P_0 (T/T_0)^f $$
where the equivalent adiabatic exponent is
$$ f =  \frac{\gamma}{\gamma-1} $$

Now we have the lapse rate, the barometric formula, and Poisson’s equation, all derived based only on the gravitational acceleration g, the gas constant R, the average molar molecular weight of the atmospheric constituents m, and the average degrees of freedom N.

Answer: Now we want to check the results against the observed values for the two planets

Parameters

Object   Main Gas   N   m (g/mol)   g (m/s^2)
Earth    N2, O2     5   28.96       9.807
Venus    CO2        6   43.44       8.87

Results

Object   Lapse Rate (derived)   Lapse Rate (observed)   f (derived)   f (observed)
Earth    6.506 C/km             6.5 C/km                21/4          5.25
Venus    7.72 C/km              7.72 C/km               6             6
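
The derived columns follow directly from the expressions above; a short script with the constants from the Parameters table reproduces them:

```python
# Reproduce the table: lapse rate = m*g/(1.5*(1 + N/2)*R) and f = 1.5*(1 + N/2).
R = 8.314                                   # J/(mol K)
planets = {"Earth": (5, 0.02896, 9.807),    # N, m (kg/mol), g (m/s^2)
           "Venus": (6, 0.04344, 8.87)}

for name, (N, m, g) in planets.items():
    f = 1.5 * (1 + N / 2)
    lapse = 1000.0 * m * g / (f * R)        # C per km
    print(name, round(lapse, 3), "C/km, f =", f)
# Earth 6.507 C/km, f = 5.25;  Venus 7.724 C/km, f = 6.0
```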

All the numbers are spot on with respect to the empirical data recorded for both Earth and Venus, with supporting figures available here.

——-

The rough derivation that I previously posted to explain the empirical data was not very satisfying in its thoroughness.  The more comprehensive derivation in this post serves to clear up the mystery behind the deviation from the adiabatic derivation.  The key seems to be correctly accounting for the internal energy necessary to maintain the gravitational hydrostatic equilibrium. Since the polytropic expansion describes a process, the actual atmosphere can accommodate these constraints (while minimizing Gibbs free energy under constant entropy conditions) by selecting the appropriate polytropic index.   The mystery of the profile seems not so mysterious anymore.

Criticisms welcome as I have not run across anything like this derivation to explain the Earth’s standard atmosphere profile nor the stable Venus data (not to mention the less stable Martian atmosphere).  The other big outer planets filled with hydrogen are still an issue, as they seem to follow the conventional adiabatic profile, according to the few charts I have access to.  The moon of Saturn, Titan, is an exception as it has a nitrogen atmosphere with methane as a greenhouse gas.

BTW, this post is definitely not dedicated to Ferenc Miskolczi. Please shoot me if I ever drift in that direction. It’s a tough slog laying everything out methodically but worthwhile in the long run.

                      Added                     
Added Fig 1 : Lapse Rate on Earth versus Latitude. From

D. J. Lorenz and E. T. DeWeaver, “Tropopause height and zonal wind response to global warming in the IPCC scenario integrations,” Journal of Geophysical Research: Atmospheres (1984–2012), vol. 112, no. D10, 2007.

Added Fig 2 : Lapse Rate on Earth versus Latitude. The average was calculated by integrating
with effective cross-sectional area weighting of (sin(Latitude+2.5)-sin(Latitude-2.5)) . Adapted from

J. P. Syvitski, S. D. Peckham, R. Hilberman, and T. Mulder, “Predicting the terrestrial flux of sediment to the global ocean: a planetary perspective,” Sedimentary Geology, vol. 162, no. 1, pp. 5–24, 2003.

Added Fig 3: This study also suggests an average lapse rate of 6.1C/km over the northern hemisphere.


I. Mokhov and M. Akperov, “Tropospheric lapse rate and its relation to surface temperature from reanalysis data,” Izvestiya, Atmospheric and Oceanic Physics, vol. 42, no. 4, pp. 430–438, 2006.

Since I posted this derivation, I received feedback from several other blogs which I attached as comments below this post.  In the original post I concluded by saying that I was satisfied with my alternate derivation, but after receiving the feedback, there is still the nagging issue of why the Venus lapse rate profile can be so linear in the lower atmosphere even though we know that the heat capacity of CO2 varies with temperature (particularly in the high temperature range of greater than 500 Kelvin).

If  we go back and look at the hydrostatic relation derived earlier, we see an interesting identity:
$$ – 3 \int_0^M\frac{P}{\rho}dm = -\int_0^M  \frac{G M}{r} dm $$
If we equate the integrands (removing the integral over dm from both sides)
$$ 3 \frac{P}{\rho} = \frac{G M}{r} $$
and then realize that the left-hand side is just the Ideal Gas law
$$ 3RT/m = \frac{G M}{r} $$
This equates the internal (thermal) energy to the gravitational potential energy.
If we take the derivative with respect to r, or altitude:
$$ 3R \frac{dT}{dr} = – \frac{G M m}{r^2} $$
The right side is just the gravitational force on an average particle. So we essentially can derive a lapse rate directly:
$$  \frac{dT}{dr} = – \frac{g m}{3 R} $$
This will generate a linear lapse rate profile of temperature that decreases with increasing altitude. Note however that this does not depend on the specific heat of the constituent atmospheric molecules. That is not surprising since it only uses the Ideal Gas law, with no application of the variational Gibbs Free Energy approach used earlier.

What this gives us is a universal lapse rate that does not depend on the specific heat capacity of the constituent gases, only the mean molar molecular weight, m.   This is of course an interesting turn of events in that it could explain the highly linear lapse  profile of Venus.  However, plugging in numbers for the gravity of Venus and the mean molecular weight (CO2 plus trace gases), we get a lapse rate that is precisely twice that which is observed.

The “obvious” temptation is to suggest that halving this derived hydrodynamic lapse rate positions it as the mean of the full gradient and an isothermal profile (i.e. a slope of zero).
$$  \frac{dT}{dr} = – \frac{g m}{6 R} $$
The rationale for this is that most planetary atmospheres are not in any kind of equilibrium with respect to energy flow and are constantly swinging between an insolating phase during daylight hours and an outward radiating phase at night.   The uncertainty essentially describes fluctuations between an atmosphere that is isothermal (little change of temperature with altitude, producing a MaxEnt outcome in the distribution of pressures and leading to the classic barometric formula) and one that is isentropic (no heat is exchanged with the surroundings, but the temperature can vary as rapid convection occurs).

In keeping with Bayesian decision making, the uncertainty is reflected by an equal weighting between the isothermal limit (zero lapse rate gradient) and the isentropic limit (the adiabatic derivation shown).  This puts the mean lapse rate at half the isentropic value. For Earth, the value of g*m/3R is 11.4 C/km.  Half of this value is 5.7 C/km, which is closer to the actual mean value than the US Standard Atmosphere figure of 6.5 C/km.

J. Levine, The Photochemistry of Atmospheres. Elsevier Science, 1985.
“The value chosen for the convective adjustment also influences the calculated surface temperature. In lower latitudes, the actual temperature decrease with height approximates the moist adiabatic rate. Convection transports H2O to higher elevations where condensation occurs, releasing latent heat to the atmosphere; this lapse rate, although variable, has an average annual value of 5.7 K/km in the troposphere. In mid and high latitudes, the actual lapse rates are more stable; the vertical temperature profile is controlled by eddies that are driven by horizontal temperature gradients and by topography. These so-called baroclinic processes produce an average lapse rate of 5.2 K/km – It is interesting to note that most radiative convective models have used a lapse rate of 6.5 K/km – which was based on data sets extending back to 1933. We know now that a better hemispherical annual lapse rate is closer to 5.2 K/km, although there may be significant seasonal variations.”

BTW, the following references are very interesting presentations on the polytropic approach.


References

[1]
“Polytropes.” [Online]. Available: http://mintaka.sdsu.edu/GF/explain/thermal/polytropes.html. [Accessed: 19-May-2013].

[2]
B. Davies, “Stars Lecture.” [Online]. Available: http://www.ast.cam.ac.uk/~bdavies/Stars2 . [Accessed: 28-May-2013].

                      Even More Recent Research                    

A number of Chinese academics [3,4] are attacking the polytropic atmosphere problem from an angle that I hinted at in the original Standard Atmosphere Model and Uncertainty in Entropy post.    The gist of their approach is to assume that the atmosphere is not under thermodynamic equilibrium (which it isn’t, as it continuously exchanges heat with the sun and outer space in a stationary steady-state) and therefore use some ideas of non-extensive thermodynamics.  Specifically they invoke Tsallis entropy and a generalized Maxwell-Boltzmann distribution to model the behavioral move toward an equilibrium.  This is all in the context of self-gravitational systems, which is the theme of this post.  Why I think it is intriguing, is that they seem to tie the entropy considerations together with the polytropic process and arrive at some very simple relations (at least they appear somewhat simple to me).

In the non-extensive entropy approach, the original Maxwell-Boltzmann (MB) exponential velocity distribution is replaced with the Tsallis-derived generalized distribution — which looks like the following power-law equation:

$$ f_q(v)=n_q B_q (\frac{m}{2 \pi k T})^{3/2} (1-(1-q) \frac{m v^2}{2 k T})^{\frac{1}{1-q}}$$

The so-called q-factor is a non-extensivity parameter which indicates how much the distribution deviates from MB statistics. As q approaches 1, the expression gradually transforms into the familiar MB exponentially damped v^2 profile.
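
To see that limit numerically, one can evaluate just the q-dependent factor (the normalization constants n_q and B_q are omitted here):

```python
# The q-generalized factor (1 - (1-q)*x)**(1/(1-q)) approaches the Maxwell-Boltzmann
# exponential exp(-x) as q -> 1, where x stands in for m*v^2/(2*k*T).
import numpy as np

def q_factor(x, q):
    return (1.0 - (1.0 - q) * x) ** (1.0 / (1.0 - q))

x = np.linspace(0.0, 3.0, 4)
for q in (0.8, 0.95, 0.999):
    print(q, np.round(q_factor(x, q), 4))
print("q->1:", np.round(np.exp(-x), 4))
```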

When q is slightly less than 1, all the thermodynamic gas equations change slightly in character.  In particular, the scientist Du postulated that the lapse rate follows the familiar linear profile, but scaled by the (1-q) factor:

$$ \frac{dT}{dr} = \frac{(1-q)g m}{R} $$

Note that this again has no dependence on the specific heat of the constituent gases, and only assumes an average molecular weight.  If q=7/6 or Q = 1-q = -1/6, we can model the f=6 lapse rate curve that we fit to earlier.

There is nothing special about the value of f=6 other than the claim that this polytropic exponent is on the borderline for maintaining a self-gravitational system [5].

Note that as q approaches unity, the thermodynamic equilibrium value, the lapse rate goes to zero, which is of course the maximum entropy condition of uniform temperature.

The Tsallis entropy approach is suspiciously close to solving the problem of the polytropic standard atmosphere. Read Zheng’s paper for their take [3] and also Plastino [6]. 

The cut-off in the polytropic distribution (5) is an example of what is known, within the field of non extensive thermostatistics, as “Tsallis cut-off prescription”, which affects the q-maximum entropy distributions when q < 1. In the case of stellar polytropic distributions this cut-off arises naturally, and has a clear physical meaning. The cut-off corresponds, for each value of the radial coordinate r, to the corresponding gravitational escape velocity.

This has implications for the derivation of the homework problem that we solved at the top of this post, where we eliminated one term of the integration-by-parts solution. Obviously, the generalized MB formulation does have a limit to the velocity of a gas particle in comparison to the classical MB view. The tail in the statistics is actually cut-off as velocities greater than a certain value are not allowed, depending on the value of q.  As q approaches unity, the velocities allowed (i.e. escape velocity) approach infinity.

As Plastino states [6]:

Polytropic distributions happen to exhibit the form of q-MaxEnt distributions, that is, they constitute distribution functions in the (x,v) space that maximize the entropic functional Sq under the natural constraints imposed by the conservation of mass and energy.

The enduring question is does this describe our atmosphere adequately enough? Zheng and company certainly open it up to another interpretation.

[3]
Y. Zheng, W. Luo, Q. Li, and J. Li, “The polytropic index and adiabatic limit: Another interpretation to the convection stability criterion,” EPL (Europhysics Letters), vol. 102, no. 1, p. 10007, 2013.

[4]
Z. Liu, L. Guo, and J. Du, “Nonextensivity and the q-distribution of a relativistic gas under an external electromagnetic field,” Chinese Science Bulletin, vol. 56, no. 34, pp. 3689–3692, Dec. 2011.

[5]
M. V. Medvedev and G. Rybicki, “The Structure of Self-gravitating Polytropic Systems with n around 5,” The Astrophysical Journal, vol. 555, no. 2, p. 863, 2001.

[6]
A. Plastino, “Sq entropy and selfgravitating systems,” europhysics news, vol. 36, no. 6, pp. 208–210, 2005.

Airborne fraction of CO2 explained by sequestering model

As acknowledgement of the atmospheric levels of CO2 reaching 400 PPM, this post is meant to clear up one important misconception (suggested prerequisite reading on fat-tail CO2 sequestration here and the significance of the fat-tail here).

A recently active skeptic meme is that the amount of CO2 as an airborne fraction is decreasing over time.

“If we look at the data since Mauna Loa started, we see that the percentage of the CO2 emitted by humans that “remains” in the atmosphere has averaged around half, but that it has diminished over time, by around 1% per decade.
Over the 30 year period 1959-1989 it was around 55%; over the following 20+ years it was just over 50%.
Why is this?”

What the befuddled fellow is talking about are the charts being shown below. These are being shown without much context and no supporting documentation, which puts the burden on the climate scientists to explain. Note that the airborne fraction does seem to decrease slightly over the past 50 years, even though the carbon emissions are increasing.

This obviously needs some explaining.  The following figure illustrates what the CO2 sequestration model actually does.

Figure 1:  Model airborne fraction of CO2 against actual data

On the left is the data plotted together with the model of the yearly fraction not sequestered out. The model is less noisy than the data but it does clearly decline as well.  No big surprise as this is a response function, and responses are known to vary depending on the temporal profile of the input and the fat-tail in the adjustment time impulse response function.

On the right is the model with the incorporation of a temperature-dependent outgassed fraction. In this case the model is more noisy than the data, as it includes outgassing of CO2 depending on the global temperature for that year. Since the temperature is noisy, the CO2 fraction picks up all of that noise.  Still, the airborne fraction shows a small yet perceptible decline, and the model matches the data well, especially in recent years where the temperature fluctuations are reduced.

Amazing that over 50 years, the mean fraction has not varied much from 55%. That has a lot to do with the math of diffusional physics. Essentially a random walk moving into and out of sequestering sites is a 50/50 proposition. That’s the way to intuit the behavior, but the math really does the heavy lifting in predicting the fraction sequestered out.
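
The bookkeeping behind a curve like the one in Figure 1 is essentially a convolution. A stripped-down sketch is below; the ~2%/yr emissions growth and the diffusion-like (1/sqrt(t)) impulse response are illustrative assumptions, not the fitted inputs used for the actual model.

```python
# Sketch of the airborne-fraction bookkeeping: convolve an emissions series with a
# fat-tailed impulse response and divide by cumulative emissions.
import numpy as np

years = np.arange(1960, 2011)
emissions = 1.02 ** (years - years[0])                 # assumed ~2%/yr growth, arbitrary units
response = 1.0 / np.sqrt(np.arange(len(years)) + 1.0)  # assumed fraction of a pulse still airborne

airborne = np.convolve(emissions, response)[:len(years)]
fraction = airborne / np.cumsum(emissions)
print(np.round(fraction[::10], 2))                     # slowly declining airborne fraction
```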

It looks like the theory matches the data once again. The skeptics provide a knee-jerk view that this behavior is not well understood, but not having done the analysis themselves, they lose out — the skeptic meme is simply one of further propagating fear, uncertainty, and doubt (FUD) without concern for the underlying science.

Proportional Land/Sea Global Warming Model

I found an interesting global temperature regression exercise that may lead to some insight with respect to understanding ocean heat content.  From a previous post we estimated that about half the excess heat produced over the oceans is getting stored (sequestered) in the ocean depths.  The intriguing premise is that we can potentially substantiate the flow of heat by comparing the relative growth of global temperatures against the land-only temperatures and the ocean-surface temperatures (i.e. the sea-surface temperatures, known as SST).

The elementary kriging approximation is that the global temperature anomaly (TG) is a proportional mix of the ocean temperature (To) with the land temperature (Tl ):
$$ T_G = p_o T_o + p_l T_l $$
where
$$ 1 = p_o + p_l $$
with the approximate fraction of earth’s coverage by the ocean equal to 0.7 (and the land therefore 0.3).

We use the Hadley Centre data sets:

global: hadcrut4
land: crutem4vgl
ocean: hadsst2gl

Shifting the baseline anomaly by at most 0.1C, we come up with the following fit, shown in Figure 1.  The composed temperature lines up very closely to the reported global temperature, with the red areas peeking out where the agreement is not perfect.

Figure 1 : Global temperature anomaly is straightforwardly recreated by assuming that the global temperature is composed of a proportion of ocean and land area.  The fit works very well apart from a few points (years centered around 1948) that may be traced to systemic errors or discrepancies in the database versions.

That by itself is only somewhat interesting, as it merely confirms that the Hadley center researchers know how to do first order proportional mapping (aka kriging). The regression agreement is shown in Figure 2.

Figure 2 : The composed temperature maps nearly one-to-one to the actual global temperature.

The other identity we need to consider involves the fraction of the heat sunk by the ocean. Since the land has essentially no heat sink, while the ocean has a fraction f that acts as a heat sink, we can assert:
$$ T_o = f T_l $$
where we determined previously that f is about 0.5.

We can plot the linear regression between the two below.

Figure 3 : Linear regression between Land and SST temperature is more noisy.

As this fit is fairly noisy, we can try to reduce the variance by plotting against the global mean temperature in a multiple regression fit.  We take the first equation and replace each of the component temperatures with their fractional equivalents.
$$ T_G =  p_o  f T_l + p_l T_l $$
$$ T_G =  p_o T_o + p_l \frac{T_o}{f}  $$
and then apply these as equal weightings
$$ T_G = 1/2 ( p_o f T_l + p_l T_l ) +  1/2 ( p_o T_o + p_l \frac{T_o}{f} )$$

rearranging terms
$$ T_G = 1/2 ( f p_o + p_l  ) T_l +  1/2 ( p_o + \frac{p_l}{f} ) T_o $$
If we apply a multiple regression of the global temperature data against a linear combination of the ocean and temperature data we get the following tabulated results:

            Coefficients   Standard Error   t Stat     P-value    Lower 95%    Upper 95%
Intercept   0.02225179     0.003884172      5.728838   4.97E-08   0.0145802    0.02992339
Land        0.29922517     0.015995888      18.70638   6.56E-42   0.26763182   0.33081852
Ocean       0.65796939     0.024637656      26.70584   1.86E-60   0.60930775   0.70663103

With this information, we can solve for f and the land ocean split.
$$ 1/2 ( f p_o + p_l  ) = 0.299 $$
$$ 1/2 ( p_o + \frac{p_l}{f} ) = 0.658 $$
Given two linear equations and two unknowns, we get f = 0.46 and ocean fraction of 73.5%.
The solution is also shown as the open circles in Figure 1.
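
For the record, the pair of identities can be solved numerically from the rounded regression coefficients; this lands within a percent or two of the quoted f ≈ 0.46 and ocean fraction ≈ 73.5%.

```python
# Solve 0.5*(f*po + pl) = 0.299 and 0.5*(po + pl/f) = 0.658 with pl = 1 - po.
from scipy.optimize import fsolve

def equations(v):
    f, po = v
    pl = 1.0 - po
    return [0.5 * (f * po + pl) - 0.299,    # Land coefficient
            0.5 * (po + pl / f) - 0.658]    # Ocean coefficient

f, po = fsolve(equations, [0.5, 0.7])
print(round(f, 2), round(po, 3))            # roughly f ~ 0.45, po ~ 0.74
```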

We originally asserted that 1/2 the heat is entering the ocean, and substantiated this with a value of 0.46.  We can also compare the generally agreed upon value of 71% of the surface water coverage with the value of 73.5% determined here.  Given the confidence interval uncertainty in the coefficients as shown in the table above, we see that this simple analysis substantiates our original premise.

This is also a subtle effect and can easily be misinterpreted as arising from just the proportional warming of ocean and land. However something has to create the temperature imbalance between the ocean and land, and the fact that the coefficients of proportionality shown in the table come out fairly close to 0.3 and 0.7 (those of land and ocean) is just a coincidence in the math. If some value markedly different from 0.5 for heat sinking was involved, then the ratios would differ more obviously.

Climate science, like other science disciplines, consists of an array of interlocking parts that need to fit together. If these don’t fit, our model will lose its predictive power.  In the case of this model, we can help verify that the excess heat is entering the ocean, suppressing the global temperature to about 2/3 of the land-only temperature rise.   This will continue for as long as the ocean keeps acting as a heat sink, a condition that should persist well into the future.   All we can really say is that global temperature anomalies have the potential for increasing by 3/2, or by 50%, from the current readings when they eventually and asymptotically reach a near-equilibrium steady-state.

                                    UPDATE                                   

Using the Eureqa curve fitting software, a linear combination of data sets provides the low complexity fit.

Figure 4: The linear combination of SST and Land with an offset gives a Pareto frontier optimum

                                   It’s the ocean heat content, stupid                                   

June 22, 2013. This paper has applicability to the proportional land/sea warming model

M. Watanabe, Y. Kamae, M. Yoshimori, A. Oka, M. Sato, M. Ishii, T. Mochizuki, and M. Kimoto, “Strengthening of ocean heat uptake efficiency associated with the recent climate hiatus,” Geophysical Research Letters, 2013.

The research results claim that the ocean has been adjusting its heat uptake in the last few years as a result of transient changes in the large-scale hydrodynamics.  This has the effect of suppressing the warming in terms of temperature, although the heat uptake from the AGW forcing still exists. So the implication is that what is lacking in a temperature rise is made up for by the heat sinking of the ocean  (also see the “missing heat” issue studied by Trenberth).

The ocean heat uptake efficiency measure of Watanabe is related to the ratio f between ocean and land temperature defined at the top of this post. The idea, similar to the aim of the Japanese research study, is to see if we can detect changes in f over the last few years.

To do this we need to take great care with the numbers. Instead of using the WoodForTrees data, I used the CRU data directly.  The sets were CRUTEM4 (land), HadCRUT4 (global), and HadSST3 (sea).  The composed set looks like the following chart for a value of f = 0.5, which is the nominal fraction assumed for the original proportional land/sea analysis.

The composed temperature lies on top of the HadCRUT4 global temperature

If we look at the error residual between the HadCRUT4 global temperature and the fractionally composed model, we get the following chart.  Note that as an absolute error, the value is obviously decreasing over time, likely attributed to better and more accurate record keeping with current temperature measurement techniques.

The absolute error decreases with more recent records.
(The last data point is 2012, which often undergoes corrections for the next update.)

The high resolution and low error in recent years indicates that perhaps we can try to more accurately fit the fraction f.   So essentially, we want to zero out the error by solving the proportional land/sea warming model for a continuously varying value of f.
$$ 0 = T_G - 1/2 ( f p_o + p_l  ) T_l -  1/2 ( p_o + \frac{p_l}{f} ) T_o $$
This turns into a quadratic equation for f, which we can solve by the quadratic formula. The set of values calculated by minimizing the error is shown below.  Note that the average remains around f = 0.5, but it shows a distinct decreasing trend in recent years.
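
A sketch of that year-by-year inversion is below; the areal fractions are the nominal 0.71/0.29 split and the temperature triplet is a made-up consistent example, not an actual CRU data point.

```python
# Solve the proportional model for f via the quadratic formula.  Multiplying the
# identity through by 2f gives: (po*Tl)*f**2 + (pl*Tl + po*To - 2*TG)*f + pl*To = 0.
import numpy as np

po, pl = 0.71, 0.29

def solve_f(TG, Tl, To):
    a = po * Tl
    b = pl * Tl + po * To - 2.0 * TG
    c = pl * To
    roots = np.roots([a, b, c])
    return roots[np.argmin(np.abs(roots - 0.5))].real   # keep the root nearest the nominal f ~ 0.5

print(round(solve_f(TG=0.516, Tl=0.80, To=0.40), 3))     # ~0.5 for this constructed example
```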

The fraction ratio of ocean to land temperature appears to be decreasing in recent years, leading to an apparent flattening in global temperature rise. Lower values of f cause the global temperature signal to appear cooler for a given AGW forcing.

If this is a real trend (as opposed to some type of accumulating systemic error or noise) it is telling us that more of the heat is accumulating in the ocean, consistent with the claims of Watanabe et al.   It is possible that the fraction is actually decreasing from a past value of around 0.6 to a current value of 0.4.  Although this is a subtle effect in terms of the fit (probably the not most robust metric one can imagine), it has significant effect in terms of the global surface temperature signal.

This is seen if we deconstruct the proportional model in terms of the land temperature alone, assuming the ocean/land areal split of 0.71/0.29:

$$ T_G =  (0.71 \cdot f + 0.29 ) T_l $$
Note that with a slowly increasing land temperature signal Tl , the declining f can compensate for this value and actually cause the global temperature value TG to flatten.

To take an example, reducing the value of f from 0.6 to 0.4 causes the global temperature to decline from 0.716*Tl to 0.574*Tl.  If the land temperature is held constant, the global temperature will decline, while if the land temperature rises by 25%, the global temperature rise will look flat.

Contour plot showing optimal values of f.  This is a log plot and higher negative values indicate low error

That is exactly what Watanabe et al are claiming.  Moreover, they assert that this decline can’t remain in place for the long term, and eventually the ocean hydrodynamics will stabilize or even reverse, with a concomitant rebound in global temperature.

To review, the essential premise of the proportional land/ocean model is:

  1. The land surface reaches the steady-state temperature quickly
  2. The ocean sinks excess heat, thus moderating the sea surface temperature rise.
  3. The fractional ratio of ocean temperature to land temperature is given by f.
  4. The global surface temperature is determined as combination of land and sea surface temperatures prorated according to the land/sea areal split.

From this set of premises, we can algebraically estimate the amount of ocean heat sinking from global temperature records as gleaned from the Climatic Research Unit.

(also see I. Held’s blog post on this topic [2])

References

[1]
D. Dommenget, “The ocean’s role in continental climate variability and change,” Journal of Climate, vol. 22, no. 18, pp. 4939–4952, 2009.

“The land–sea warming ratio in the ECHAM–HadISST holds also for the warming trend over the most recent decades, despite the fact that no anthropogenic radiative forcings are included in the simulations. The temperature trends during the past decades as observed and in the (ensemble mean) model response (Fig. 4) are roughly consistent with each other, which indicates that much of the land warming is a response to the warming of the oceans. The simulated land warming, however, is weaker than that observed in many regions, with an average land–sea warming ratio of 1.6, amounting to about 75% of the observed ratio of 2.1 .”

[2]
I. Held, “38. NH-SH differential warming and TCR « Isaac Held’s Blog.” [Online]. Available: http://www.gfdl.noaa.gov/blog/isaac-held/2013/06/14/38-nh-sh-differential-warming-and-tcr/#more-5774. [Accessed: 25-Jun-2013].

Greenland Red Noise

Tamino analyzed a paper by Chylek [1] who claimed to be able to extract a periodic signal out of Greenland ice core data (Dye3).

Tamino’s post is comprehensive, but I wanted to check to see if I could extract any signal from the data myself.

The data is in the upper right of the following figure, and the FFT-based power spectral density (PSD) is in the lower left. 

FFT PSD lower left, Data upper right

The PSD is nondescript with a red noise envelope shown by the solid line. Red noise is the embodiment of the Ornstein-Uhlenbeck process.  This red noise shows very short correlation, at most a lag of one or two time units, as shown in the autocorrelation below.

Autocorrelation function, centered at 1/2 time series length

Hard pressed to find any real signal in such a time series.  A simulation of Ornstein-Uhlenbeck red noise is shown to the left below, with a slice of the Greenland data to the right.

Ornstein-Uhlenbeck red noise time series (left) and slice of data (right)
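
For anyone wanting to generate a comparable trace, a short Euler-Maruyama sketch of an Ornstein-Uhlenbeck process is below; the theta, sigma and sample-count values are arbitrary choices, not parameters estimated from the ice-core data.

```python
# Euler-Maruyama simulation of an Ornstein-Uhlenbeck (red noise) process.
import numpy as np

rng = np.random.default_rng(0)
n, dt, theta, sigma = 500, 1.0, 0.5, 1.0

x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# the correlation dies off within a lag or two, much like the ice-core series
print(np.round(np.corrcoef(x[:-1], x[1:])[0, 1], 2))
```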

Chylek claims that he can see 20-year AMO (Atlantic Multidecadal Oscillation) cycles in the data. Like Tamino, I see nothing but garden-variety noise.

References

[1]
P. Chylek, C. Folland, L. Frankcombe, H. Dijkstra, G. Lesins, and M. Dubey, “Greenland ice core evidence for spatial and temporal variability of the Atlantic Multidecadal Oscillation,” Geophysical Research Letters, vol. 39, no. 9, 2012.

 

 

Filtering Sea Level Rise

Measuring global sea-level rise is not for the faint-of-heart.  The levels are averaged over the earth’s oceans and likely are processed to remove tidal and other effects.

At one of the climate blogs, a gang of climate change skeptics tried to convince their fellow skeptics that the sea level rise had turned the tide (so to speak) and started to decrease. The skeptic Rob (Ringo) Starkey kindly provided the data and pointed to a recent dip in the data:

http://sealevel.colorado.edu/files/2013_rel3/sl_ns_global.txt

BDD smelled something fishy and pointed out that these are at best seasonal dips and labelled it rubbish, and reasoned that they were likely trying to save face over some other wrong they had committed.  What I usually find is that the more that the fake skeptics howl about something, the more likely that there are creepy crawly things they are trying to hide under the rock. Well, in this case the sea-level rise data that Ringo pointed us to shows a clear 2 month oscillation in the time series. Sorry Ringo, “steven”, and Captain Tuna, easy enough to put a 2-month notch filter in the time series data and just look at how much smoother it gets.
 

Figure 1 : Upper left inset is a FFT of the detrended sea-level rise data shown in the lower right inset. A clear two month cycle shows up in the FFT and by eyeballing any yearly interval.  After performing a simple 2 month notch filter on the data and adding back in the trend, the red curve shows significant noise reduction.
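
A minimal version of such a notch filter is sketched below. The ~10-day sample cadence (three samples per 30 days) is an assumption about the altimetry series, and the synthetic 60-day sine is only there to demonstrate the cancellation; it is not the actual data.

```python
# Two-point notch filter: average each sample with the one ~30 days (a half period)
# earlier so that a 60-day oscillation cancels.
import numpy as np

def notch_60day(series, lag=3):              # lag=3 assumes ~10-day sampling
    out = np.asarray(series, dtype=float).copy()
    out[lag:] = 0.5 * (out[lag:] + out[:-lag])
    return out

t = np.arange(0.0, 600.0, 10.0)              # days
wave = np.sin(2 * np.pi * t / 60.0)          # 2-month oscillation
print(round(np.std(wave), 2), round(np.std(notch_60day(wave)[3:]), 2))   # ~0.71 -> ~0.0
```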

The oscillation can possibly be identified as a 2-month meridional current fluctuation in the Pacific, described on the Woods Hole “Air-Sea Interaction in the Eastern Tropical Pacific ITCZ/Cold Tongue Complex” page and by a member of the same team in this PhD thesis [1]:

“The period of the oscillations can be seen to be about two months from October 1997 to June 1998, and the oscillations rapidly intensify during the first few months of 1998. The amplitude of the oscillations approximately doubles between December 1997 and February 1998, and it doubles again between February and April 1998. At its peak strength in April 1998, the signal is associated with nearly a 7 cm peak-to-peak change in the thickness of the upper 110 m of the water column.”

One can see the peaks in late 1997 and early 1998 in Figure 1, enlarged and highlighted below.

Figure 2: Identification of water column thickness increase

The main point is that these fluctuations are oscillatory and do not contribute to a persistent rise in sea level. Like tides, this phenomenon is more akin to the sloshing of water in a bucket.  (This particular two-month cycle is not a tide, as lunar tides such as neap and spring tides have monthly or biweekly periods.)

References

[1]
J. Farrar, “Air-sea interaction at contrasting sites in the Eastern Tropical Pacific : mesoscale variability and atmospheric convection at 10°N,” PhD Thesis, MIT, Woods Hole, 2007.
http://dspace.mit.edu/handle/1721.1/39009

                                           EDIT                                                 

I noticed that the sea level research group at the University of Colorado already applies a 60-day (2-month) smoothing filter when they display the data (see figure below). This is similar to the 2-point notch filter I use, which pairs up data points so that they are out of phase at the 30-day mark.  A general smoothing filter accomplishes the same thing but also filters out shorter periods in the data set.

http://sealevel.colorado.edu/content/2013rel3-global-mean-sea-level-time-series-seasonal-signals-removed

The Rise and Decline of UK Crude Oil

This is an update to projections of UK North Sea oil production made in 2005 and 2008.

The following Figure 1 is a screenshot of an interactive oil shock model session, which is part of a larger environmental modeling framework. 

Figure 1:  Shock Model for UK North Sea Oil. The oil production output is the profile with two humps, with model and data shown together. Extraction rate applied to the extrapolated reserves gives the production.

The discovery curve follows a dispersive profile, which allows us to project future availability along a declining tail.  The extraction perturbations show a temporary glitch corresponding to the Piper Alpha platform disaster, but otherwise indicate an extraction rate of around 6% of reserves over the years.  This rate isn't expected to move upward much (contrary to my previous projection) because any gains from technological advances will be offset by fields that are tougher to extract from.
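As a rough sketch of the bookkeeping behind that statement (not the interactive framework itself; the discovery numbers and the 6% rate below are placeholders), the shock model treats each year's production as a fixed fraction of the remaining reserves, with reserves fed by the lagged discovery curve:

def shock_model(discoveries, extraction_rate=0.06):
    # discoveries: volume added to reserves each year (after the maturation lag)
    # extraction_rate: fraction of remaining reserves produced per year
    reserves = 0.0
    production = []
    for added in discoveries:
        reserves += added
        output = extraction_rate * reserves
        reserves -= output
        production.append(output)
    return production

# Example with a made-up discovery profile (billions of barrels per year):
# print(shock_model([0.5, 2.0, 4.0, 3.0, 1.0, 0.5, 0.2]))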

Also included in the interactive session is an ability to pull up individual fields (all part of a semantic web server), shown in Figure 2.  The “Forties” oil field is one of the earliest discoveries and commenced production at the start of the UK North Sea era.

Figure 2:  Individual plot of a UK platform, the “Forties” oil field. The downward glitch at the end is a -1 value indicating production values are not yet available for that year.

The UK Prime Ministers during this period (see Figure 3) likely benefited politically from the oil production bonanza, starting with Margaret Thatcher's leadership of the Conservative Party in 1975.

Figure 3: PM Margaret Thatcher was in the right place at the right time. Leader of the Conservative Party from 1975 and Prime Minister from 1979, she presided over the huge buildup in oil production, which was temporarily halted by the Piper Alpha explosion in 1988.

Piper Alpha production halted for several years, and production at other platforms was pulled down as better maintenance policies were instituted. This had the unintended benefit of extending the UK production plateau before the eventual decline set in.

Figure 4: Piper Alpha oil production halted for several years

As I have always said, having good numbers for production data allows us to apply sophisticated analyses such as the oil shock model to project and predict future trends. The UK government has done a fine job of making the data publicly available, enabling the kind of analysis I documented in The Oil Conundrum book.

                                            Updates                                         

Feedback from others
http://www.theoildrum.com/node/9946#comment-958176
http://transportblog.co.nz/2013/04/12/oil-dependancy-and-the-wealth-of-nations/

Sources
http://www.worldreview.info/content/uk-north-sea-oil-and-gas-resurgence-activity

Ornstein-Uhlenbeck Diffusion

The Ornstein-Uhlenbeck correction to diffusion can be used in applications ranging from modeling Bakken oil production to modeling CO2 sequestration.

Due to its origins as a random walk process, a pure diffusion model of particles will show unbounded excursions given a long enough time duration. This is characterized by the unbounded Fickian growth law, in which the excursion extent grows as √t for a pure random walk with a single diffusivity.

In practice, the physical environment of a particle may prevent unbounded excursions. The environment may impose limits on the extent of motion, or it may place some form of drag on the particle's hopping rate the further it moves away from its mean starting value.

The rationale for this limiting process within a producing hydrofractured reservoir may arise from a barrier to diffusion beyond a certain range.

Figure 1: The Ornstein-Uhlenbeck process suppresses the diffusional flow by limiting the extent at which the mobile solute can travel, thus generating a constrained asymptote below that of drag-free diffusion.

Similarly, a sequestration barrier may exist preventing CO2 from permanently exiting the active carbon cycle; this makes the fat temporal tail even fatter.

Figure 2 : Impulse Response of the sequestering of Carbon Dioxide to a normalized stimulus. The solid blue curve represents the generally accepted model, while the dashed and dotted curves represent the dispersive diffusion model with O-U correction. The correction flattens out the tail.

We can use the Ornstein-Uhlenbeck process to model mathematically how this pure random walk becomes bounded. The Ornstein-Uhlenbeck process has its origins in the modeling of Brownian motion with a special “reversion to the mean” property in motion excursions (specifically, it was first formulated to describe Brownian motion in the presence of drag on particle velocities, extending Einstein’s work). The following expression shows the stationary marginal probability given a stochastic differential equation

$$ dX = -aX \, dt + dW $$

which models a drag on an excursion [1]:

$$ dP(X_{t+s} = x \mid X_s = 0) = \frac{1}{\sqrt{2 \pi \tau}} e^{-\frac{x^2}{2 \tau}} dx $$
where
$$ \tau = \frac{1 - e^{-2 a t}}{2 a} $$

The O-U correction is straightforward to apply to our dispersive growth formulation; we only need to apply a non-linear transformation to the time scale,

$$ t \xrightarrow{\text{O-U}} \tau $$

and then apply this to a dispersive growth term such as the following:

$$ x_{\tau}(t) = \frac{D \tau(t) \cdot \frac{D \tau(t)}{x_0}}{1 + \frac{D \tau(t)}{x_0}} \qquad \text{where} \qquad \tau(t) = \frac{1 - e^{-2 a t}}{2 a} $$

This has the equivalent effect of slowing down time at an exponential rate. Since τ(t) saturates at 1/(2a), the slowdown eventually outpaces what the Fickian growth law can sustain, so the diffusional growth extent approaches an asymptotic limit.

Figure 3: The reversion to the mean process of the Ornstein-Uhlenbeck process will limit the growth of a diffusional process
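A quick numerical illustration of that time transformation, with placeholder values for D and the drag a: τ(t) saturates at 1/(2a), so the Fickian extent √(Dτ) levels off instead of growing without bound.

import math

D = 1.0     # diffusivity (placeholder units)
a = 0.05    # drag (reversion-to-the-mean) coefficient, placeholder

for t in [1, 10, 50, 100, 500, 1000]:
    tau = (1.0 - math.exp(-2.0 * a * t)) / (2.0 * a)
    print(t, tau, math.sqrt(D * tau))   # tau -> 1/(2a) = 10, so sqrt(D*tau) levels off near 3.16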

A physical picture is that of an attractor or potential well which “tugs” on the random walker to bring it back to the mean state (see Figure 4 below).

Figure 4: Representation of an Ornstein-Uhlenbeck random walk process. The hopping rate works like a potential well, with greater resistance to hopping the further the excursion strays from the mean.

Algorithmic Code

The following snippet sets up an Ornstein-Uhlenbeck random walk model with a reversion-to-the-mean term. The diffusion term is the classical Markovian random walk transition rate, while the drag term places an attractor that opposes large excursions in the value Z.

import math
import random

# Ornstein-Uhlenbeck random walk step
def ou(x1, x2, z):
    # Decay the current value toward the mean (z*x1), then add a random
    # kick of magnitude x2 in either direction.
    if random.random() < 0.5:
        return z * x1 + x2
    else:
        return z * x1 - x2

# This is how it gets parameterized
def ou_random_walker(dx, diffusion, drag, z):
    # The decay factor comes from the drag term; the kick magnitude comes
    # from the diffusion term, scaled by how much decay occurs over the step dx.
    x1 = math.exp(-2.0 * drag * dx)
    x2 = math.sqrt(diffusion * (1.0 - math.exp(-2.0 * drag * dx)) / (2.0 * drag))
    return ou(x1, x2, z)

Variance

To determine whether an Ornstein-Uhlenbeck process is apparent in a set of data, one can apply a simple multiscale variance to a resulting array Z of length N:

import math

def multiscale_variance(z):
    # Print the root-mean-square difference between points separated by a
    # lag L, sweeping L downward from half the series length.  For an
    # Ornstein-Uhlenbeck series the RMS saturates once L exceeds the
    # correlation length set by the drag.
    n = len(z)
    lag = n // 2
    while lag > 1:
        total = 0.0
        for i in range(n // 2):
            val = z[i] - z[i + lag]     # i + lag never exceeds n - 1
            total += val * val
        print(lag, math.sqrt(total / (n // 2)))
        lag = int(0.95 * lag)



For a given random walk simulation, the variance will tend to saturate at longer correlation length scales. A typical multiscale variance plot will look like those described here: http://theoilconundrum.blogspot.com/2011/11/multiscale-variance-analysis-and.html.
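A usage sketch tying together the two routines above (the parameter values are arbitrary):

# Generate a long O-U walk, then inspect its multiscale variance.
# The printed RMS levels off at lags beyond roughly 1/(2*drag) = 50 steps.
walk = []
z = 0.0
for _ in range(20000):
    z = ou_random_walker(dx=1.0, diffusion=1.0, drag=0.01, z=z)
    walk.append(z)
multiscale_variance(walk)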

Autocorrelation and Spectral Representation

The autocorrelation of the Ornstein-Uhlenbeck process is (with θ = Drag):

$$ R(x) = \frac{D}{\theta} e^{-\theta |x|} $$

Even though this shows a saturation level, the power spectrum still obeys a 1/S^2 fall-off:

$$ I(S_x) = \frac{D}{\pi} \cdot \frac{1}{S_x^2 + \theta^2} $$

The Ornstein-Uhlenbeck process is often referred to as red noise, and its two parameters, Diffusion and Drag, can be determined either from the autocorrelation function or from the PSD. For the PSD on a log-log plot, this involves reading the peak near S = 0 and then locating the shoulder of the power-law roll-off. Between these two measures, one can infer both parameter values.
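As a worked sketch of that read-off (the plateau and shoulder readings below are invented for illustration): the zero-frequency plateau of the Lorentzian above is I(0) = D/(π θ²), and the half-power shoulder sits at S = θ, so two readings from the log-log plot pin down both parameters.

import math

# Invented readings from a log-log PSD plot:
plateau = 2.0      # I(S) near S = 0
shoulder = 0.25    # frequency at which the PSD has dropped to half the plateau

theta = shoulder                        # half-power point of D/pi/(S^2 + theta^2)
diffusion = plateau * math.pi * theta**2
print(theta, diffusion)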

References

[1]
D. Mumford and A. Desolneux, Pattern Theory: The Stochastic Analysis Of Real-World Signals. A K Peters, Ltd., 2010.

Ocean Heat Content Model

The ocean heat content continues to increase and perhaps accelerate [1], as expected due to global warming. In an earlier post, I analyzed the case of the “missing heat”, which wasn’t missing after all, but the result of the significant heat sinking characteristics of the oceans [2].

The objective is to create a simple model that tracks the transient growth shown in the recent paper by Balmaseda, Trenberth, and Källén (BTK) [1] and reproduced in Figure 1 below.

Figure 1: Ocean Heat Content from BTK [1] (source)

We will assume a diffusive flow of heat as described in James Hansen's 1981 paper [2]. In general, the diffusion of heat qualitatively plays out according to the way Fick's law would apply to a heat sink. Hansen also volunteered an effective diffusion coefficient that should apply, set to the nice round number of 1 cm^2/second.

In the following we provide a mathematical explanation which works its way from first principles to an uncertainty-quantified formulation. After that, we present a first-order sanity check on the approximation.

Solution

We have three depths that we are looking at for heat accumulation (in addition to a surface layer which gives only a sea-surface temperature or SST). These are given as depths to 300 meters, to 700 meters, and down to infinity (or 2000 meters from another source), as charted on Figure 1.

We assume that excess heat (mostly infrared) is injected through the ocean’s surface and works its way down through the depths by an effective diffusion coefficient.  The kernel transient solution to the planar heat equation is this:

$$\Delta T(x,t) = \frac{c}{\sqrt{D_i t}} \exp{\frac{-x^2}{D_i t}} $$
where D_i is the diffusion coefficient, and x is the depth (note that often a value of 2D instead of D is used depending on the definition of the random walk).  The delta temperature is related to a thermal energy or heat through the heat capacity of salt water, which we assume to be constant through the layers. Any scaling is accommodated by the prefactor, c.

The first level of uncertainty is a maximum entropy prior (MaxEnt) that we apply to the diffusion coefficient to approximate the various pathways that heat can follow downward.  For example, some of the flow will be by eddy diffusion and other paths by conventional vertical mixing diffusion.  If we apply a maximum entropy probability density function, assuming only a mean value for the diffusion coefficient:
$$ p(D_i) = \frac{1}{D} e^{\frac{-D_i}{D}} $$

then we get this formulation:
$$\Delta T(t|x) = \frac{c}{\sqrt{D t}} e^{\frac{-x}{\sqrt{D t}}} (1 + \frac{x}{\sqrt{D t}})$$

The next uncertainty is in capturing the heat content of a layer. We can approximate the incremental heat across a layer of thickness L as
$$ \int { \Delta T(t|x) p(x) dx}  = \int { \Delta T(t|x) e^{\frac{-x}{L}} dx} $$

This gives the following concise equation for the excess heat response:
$$ I(t) = \frac{ \frac{\sqrt{Dt}}{L} + 2 }{(\frac{\sqrt{Dt}}{L} + 1)^2}$$

This is also the response to a delta forcing impulse, but for a realistic situation where a growing aCO2 forces the response (explained in the previous post), we simply apply a convolution of the thermal stimulus with the thermal response.  The temporal profile of the increasing aCO2 generates a growing thermal forcing function.

$$ R(t) = F(t) \otimes I(t) = \int_0^t { F(\tau) I(t-\tau) d\tau} $$
If the thermal stimulus is a linearly growing heat flux, which roughly matches the GHG forcing function (see Figure 7 in the previous post)
 $$ F(t) = k \cdot (t-t_0) $$
then, assuming a starting point t_0 = 0:
$$ R(t)/k = \frac{2 L}{3} (D t)^{1.5} - L^2 D t + 2 L^3 \sqrt{D t} - 2 L^4 \ln\left(\frac{\sqrt{D t}+L}{L}\right) $$

A good approximation is to assume the thermal forcing function started kicking in about 50 years ago, circa 1960. We can then plot the equation for various values of the layer thickness, L, and a value of D of 3 cm^2/s.

Figure 2 : Thermal Dispersive Diffusion Model applied to the OHC data
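For reference, here is a minimal numerical sketch of that convolution (D, L, and the 1960 start are taken from the text; k and the whole-year discretization are arbitrary choices): the ramp forcing is convolved with the impulse response I(t) to produce the kind of response curve plotted in Figure 2.

import math

def impulse_response(t, D, L):
    # Excess-heat impulse response I(t) for a layer of thickness L (meters),
    # with D in m^2/year and t in years.
    if t < 0.0:
        return 0.0
    s = math.sqrt(D * t) / L
    return (s + 2.0) / (s + 1.0) ** 2

D = 3.0e-4 * 3.15e7     # 3 cm^2/s converted to m^2/year
L = 700.0               # layer thickness in meters
k = 1.0                 # slope of the ramp forcing (arbitrary scale)

response = []
for t in range(54):     # years since a 1960 start
    # R(t) = sum over tau of F(tau) * I(t - tau), with F(tau) = k * tau
    r = sum(k * tau * impulse_response(t - tau, D, L) for tau in range(t + 1))
    response.append(r)
# print(list(zip(range(54), response)))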

Another view of the OHC includes the thermal mass of the non-ocean regions (see Figure 3). This data appears smoothed in comparison to the raw data of Figure 2.

Figure 3 :  Alternate view of growing ocean heat content. This also includes non-ocean.

In the latter figure, the agreement with the uncertainty-quantified theory is more striking. A single parameter, Hansen's effective diffusion coefficient D, along with the inferred external thermal forcing function, is able to reproduce the temporal profile accurately.

The strength of this modeling approach is to lean on the maximum entropy principle to fill in the gaps where the variability in the numbers is uncertain.  In this case, the diffusion coefficient and the ocean depths hold the uncertainty, and we use first-order physics to do the rest.

Sanity Check

The following is a sanity check for the above formulation; providing essentially a homework assignment that a physics professor would hand out in class.

An application of Fick's law is to approximate the amount of material that has diffused (with thermal diffusion coefficient D) at least a certain distance, x, over a time duration, t, by

$$ e^{\frac{-x}{\sqrt{Dt}}}= \frac{Q}{Q_0}$$

  • For greater than 300 meters, Q/Q_0 is 13.5/20 read from Figure 1.
  • For greater than 700 meters, Q/Q_0 is 7.5/20

Here Q_0 = 20 is the baseline for the total heat measured over all depths (i.e., between x = 0 and infinite depth) at the current time. No heat will diffuse to infinite depth, so at that limit Q/Q_0 is 0/20.

First, we can check how closely the value of L = sqrt(Dt) scales by fitting the Q/Q_0 ratio at each depth:

  • For x=300 meters, we get L=763
  • For x=700 meters, we get L=713

These two values are close enough to invariant that the Fick's law scaling relation holds, and we can infer that the flow proceeds by an effective diffusion, just as was surmised by Hansen et al. in 1981.

We then use an average elapsed diffusion time of t = 40 years and an average diffusion depth of 740 meters, and D comes out to approximately 4.5 cm^2/s.

Hansen in 1981 [2] used an estimated diffusion value of 1 cm^2/s, which is within an order of magnitude of 4.5 cm^2/s.
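The arithmetic of this check, written out as a few lines of code (the heat fractions are read from Figure 1, and the 40-year span and 740 m average depth are the values used above):

import math

# L = x / ln(Q0/Q), from  exp(-x / sqrt(D*t)) = Q/Q0
for depth, q_ratio in [(300.0, 13.5 / 20.0), (700.0, 7.5 / 20.0)]:
    L = depth / math.log(1.0 / q_ratio)
    print(depth, round(L))          # close to the 763 and 713 quoted above

t = 40.0 * 3.15e7                   # 40 years in seconds
L_avg = 740.0 * 100.0               # 740 meters expressed in centimeters
D = L_avg ** 2 / t                  # effective diffusivity in cm^2/s
print(round(D, 1))                  # ~4.3, consistent with the ~4.5 cm^2/s quoted above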

This sanity check is a scratch attempt at an approximation to the more general solution, which is equivalent to generating an impulse in the past and watching it evolve. The general solution formulated earlier involves a modulated forcing function; we compute its convolution against the thermal impulse response (i.e., the Green's function) and compare that against the full temporal profiles of Figure 1 and Figure 2.

Discussion

Reading Hansen and looking at the BTK paper's results, nothing looks out of the ordinary in the OHC trend if we compare it to what conventional thermal physics would predict. The total heat absorbed by the ocean is within the range expected for a 3 °C doubling sensitivity. BTK have done the important step of determining the amount of total heat absorbed, which allows us to estimate the effective diffusion coefficient by evaluating how the heat content varies with depth. We add a maximum entropy uncertainty modifier to the coefficient to model the disorder in diffusivity (some of it eddy diffusivity, some vertical, etc.), and that allows us to match the temporal profile accurately.

References

[1]
M. A. Balmaseda, K. E. Trenberth, and E. Källén, “Distinctive climate signals in reanalysis of global ocean heat content,” Geophysical Research Letters, 2013.


[2]
J. Hansen, D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, “Climate impact of increasing atmospheric carbon dioxide,” Science, vol. 213, no. 4511, pp. 957–966, 1981.
http://pubs.giss.nasa.gov/docs/1981/1981_Hansen_etal.pdf

 EDITED 

More data from the NOAA site with the same fit. Levitus et al. [3] apply an alternative characterization to the above; I apply the same model, shown as the dashed green line.

Figure 4 : From the NOAA site, with the same model
http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/

And here is a plot of OHC using the effective forcing described by Hansen [4]. Instead of using a ramp forcing function which gives the analytical result of Figure 2, a realistic forcing (which takes into account perturbations due to volcanic events) is numerically convolved with the diffusive response function, I(t).

Figure 5: The effective forcing (lower left) is numerically convolved with the diffusive transfer function to give the response (upper right).  The three curves are < 300 m, < 700 m, and < 2000 m.

Note that the volcanic disturbances are clearly visible in the response, although they do not show as sharp a transient decrease after the events, perhaps half of what Figure 1 shows. The suppression due to the 1997-1998 El Niño is also not observable, but that of course is not an effective forcing and so will not appear.

[3]
S. Levitus, J. Antonov, T. Boyer, O. Baranova, H. Garcia, R. Locarnini, A. Mishonov, J. Reagan, D. Seidov, and E. Yarosh, “World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010,” Geophysical Research Letters, vol. 39, no. 10, 2012.

[4]
J. Hansen, M. Sato, P. Kharecha, and K. von Schuckmann, “Earth’s energy imbalance and implications,” Atmospheric Chemistry and Physics, vol. 11, no. 24, pp. 13421–13449, Dec. 2011.

Ocean map of depths