The ocean heat content continues to increase, and perhaps accelerate [1], as expected due to global warming. In an earlier post, I analyzed the case of the "missing heat", which wasn't missing after all but was instead a consequence of the oceans' significant capacity to act as a heat sink [2].

The objective is to create a simple model that tracks the transient growth shown in the recent paper by Balmaseda, Trenberth, and Källén (BTK) [1] and reproduced in **Figure 1** below.

Figure 1: Ocean Heat Content from BTK [1] (source)

We will assume a diffusive flow of heat, as described in James Hansen's 1981 paper [2]. In general, the diffusion of heat qualitatively plays out according to the way Fick's law would apply to a heat sink. Hansen also suggested an effective diffusion coefficient that should apply, set to the nice round number of 1 cm^2/second.

In the following, we provide a mathematical development that works from first principles toward an *uncertainty-quantified* formulation. After that, we present a first-order sanity check of the approximation.

#### Solution

We track heat accumulation at three depth ranges (in addition to a surface layer, which gives only the sea-surface temperature, or SST): down to 300 meters, down to 700 meters, and down to infinity (or 2000 meters, from another source), as charted in **Figure 1**.

We assume that excess heat (mostly infrared) is injected through the ocean’s surface and works its way down through the depths by an effective diffusion coefficient. The kernel transient solution to the planar heat equation is this:

$$\Delta T(x,t) = \frac{c}{\sqrt{D_i t}} \exp\left(\frac{-x^2}{D_i t}\right)$$

where *D_i* is the diffusion coefficient and *x* is the depth (note that a value of 2*D* instead of *D* is often used, depending on the definition of the random walk). The delta temperature is related to a thermal energy, or heat, through the heat capacity of salt water, which we assume to be constant through the layers. Any scaling is accommodated by the prefactor *c*.
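As a quick numerical sanity check on this kernel (a sketch of my own, not the author's code; it assumes *c* = 1 and the post's convention of absorbing the usual factor of 4 into *D*), the depth integral of the transient should be independent of time, since the injected heat is conserved as it spreads downward:

```python
import math

def delta_T(x, t, D, c=1.0):
    # planar heat-kernel transient as written in the post; c absorbs all
    # scaling, and the conventional 4*D*t in the exponent is absorbed into D
    s = math.sqrt(D * t)
    return (c / s) * math.exp(-(x * x) / (D * t))

def total_heat(t, D, n=100_000):
    # midpoint-rule integral of delta_T over depth x in [0, 12*sqrt(D*t)];
    # the tail beyond that range is negligible
    s = math.sqrt(D * t)
    dx = 12.0 * s / n
    return sum(delta_T((i + 0.5) * dx, t, D) for i in range(n)) * dx

# the conserved total equals c*sqrt(pi)/2 at any elapsed time
print(total_heat(1.0, 1.0), total_heat(25.0, 1.0))
```

The equality of the two printed values is just heat conservation: diffusion spreads the injected energy deeper without destroying it.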

The first level of uncertainty is a *maximum entropy* (MaxEnt) prior that we apply to the diffusion coefficient, to approximate the various pathways that heat can follow downward. For example, some of the flow will be by eddy diffusion and other paths by conventional vertical mixing diffusion. If we apply a maximum entropy probability density function, assuming only a mean value *D* for the diffusion coefficient:

$$ p(D_i) = \frac{1}{D} e^{\frac{-D_i}{D}} $$

then we get this formulation:

$$\Delta T(t|x) = \frac{c}{\sqrt{D t}} e^{\frac{-x}{\sqrt{D t}}} \left(1 + \frac{x}{\sqrt{D t}}\right)$$

The next uncertainty lies in capturing the heat content of a layer. The incremental heat across a layer of characteristic thickness *L* can be approximated by averaging over a MaxEnt depth profile:

$$ \int \Delta T(t|x)\, p(x)\, dx = \frac{1}{L} \int \Delta T(t|x)\, e^{\frac{-x}{L}}\, dx $$

which gives the following concise expression for the excess heat response:

$$ I(t) = \frac{ \frac{\sqrt{Dt}}{L} + 2 }{(\frac{\sqrt{Dt}}{L} + 1)^2}$$
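To check that the layer integral really collapses to this closed form, one can carry out the depth average numerically (a sketch under my own naming, with *c* = 1; the leftover constant factor of 1/*L* is absorbed into the overall proportionality, as the post does with its prefactors):

```python
import math

def I_closed(t, D, L):
    # excess-heat response quoted in the post: (r + 2)/(r + 1)^2, r = sqrt(D*t)/L
    r = math.sqrt(D * t) / L
    return (r + 2.0) / (r + 1.0) ** 2

def I_numeric(t, D, L, n=200_000):
    # midpoint-rule integral of DeltaT(t|x) * p(x) over depth, with c = 1 and
    # p(x) = (1/L) exp(-x/L); equals I_closed up to the constant factor 1/L
    s = math.sqrt(D * t)
    xmax = 60.0 * max(s, L)
    dx = xmax / n
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        u = x / s
        acc += (1.0 / s) * (1.0 + u) * math.exp(-u) * math.exp(-x / L) / L * dx
    return acc

t, D, L = 1.0, 3.0, 0.7
print(L * I_numeric(t, D, L), I_closed(t, D, L))  # the two should agree
```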

This is also the response to a delta forcing impulse; for the realistic situation in which growing atmospheric CO2 (aCO2) forces the response (explained in the previous post), we simply convolve the thermal stimulus with the thermal response. The temporal profile of the increasing aCO2 generates a growing thermal forcing function.

$$ R(t) = F(t) \otimes I(t) = \int_0^t { F(\tau) I(t-\tau) d\tau} $$
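The convolution above can be sketched numerically as follows (`convolve_response` is my own name, not the post's; the usage example deliberately picks functions with an exact analytical answer):

```python
def convolve_response(F, I, t, n=2000):
    # R(t) = integral over [0, t] of F(tau) * I(t - tau) d tau, midpoint rule
    dt = t / n
    return sum(F((i + 0.5) * dt) * I(t - (i + 0.5) * dt) for i in range(n)) * dt

# example: a ramp forcing F = k*tau against a constant unit response gives
# R(t) = k*t^2/2 exactly (the midpoint rule is exact for linear integrands)
k = 2.0
print(convolve_response(lambda u: k * u, lambda u: 1.0, 3.0))  # 9.0
```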

If the thermal stimulus is a linearly growing heat flux, which roughly matches the GHG forcing function (see Figure 7 in the previous post),

$$ F(t) = k \cdot (t-t_0) $$

then, assuming a starting point *t_0* = 0:

$$ R(t)/k = \frac{2 L}{3} (D t)^{3/2} - L^2 D t + 2 L^3 \sqrt{D t} - 2 L^4 \ln\left(\frac{\sqrt{D t}+L}{L}\right) $$
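This closed form can be verified against a direct numerical convolution of the ramp with *I(t)*. Differentiating the closed form twice recovers (D²/2)·*I(t)*, so a constant prefactor of D²/2 appears, which the post absorbs into *k* along with its other constants. A sketch with my own naming:

```python
import math

def I_resp(t, D, L):
    # impulse response from above: (r + 2)/(r + 1)^2 with r = sqrt(D*t)/L
    r = math.sqrt(D * t) / L
    return (r + 2.0) / (r + 1.0) ** 2

def R_over_k_closed(t, D, L):
    # closed-form ramp response quoted in the post
    s = math.sqrt(D * t)
    return ((2.0 * L / 3.0) * (D * t) ** 1.5 - L ** 2 * D * t
            + 2.0 * L ** 3 * s - 2.0 * L ** 4 * math.log((s + L) / L))

def R_over_k_numeric(t, D, L, n=20_000):
    # midpoint-rule convolution of the ramp F(tau)/k = tau with I_resp;
    # the D^2/2 prefactor is one of the constants absorbed into k
    dt = t / n
    acc = sum((i + 0.5) * dt * I_resp(t - (i + 0.5) * dt, D, L)
              for i in range(n))
    return (D * D / 2.0) * acc * dt

t, D, L = 1.0, 3.0, 1.0
print(R_over_k_closed(t, D, L), R_over_k_numeric(t, D, L))  # both near 1.92
```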

A good approximation is to assume the thermal forcing function started kicking in about 50 years ago, circa 1960. We can then plot the equation for various values of the layer thickness, *L*, and a value of *D* of 3 cm^2/s.

Figure 2: Thermal Dispersive Diffusion Model applied to the OHC data

Another view of the OHC includes the thermal mass of the non-ocean regions (see **Figure 3**). These data appear smoothed in comparison to the raw data of **Figure 2**.

Figure 3: Alternate view of growing ocean heat content, including non-ocean regions

In the latter figure, the agreement with the uncertainty-quantified theory is more striking. A single parameter, Hansen's effective diffusion coefficient *D*, along with the inferred external thermal forcing function, is able to reproduce the temporal profile accurately.

The strength of this modeling approach is that it leans on the maximum entropy principle to fill in the gaps where the variability in the numbers is uncertain. In this case, the diffusion coefficient and layer depths hold the uncertainty, and we use first-order physics to do the rest.

#### Sanity Check

The following is a sanity check of the above formulation: essentially the kind of homework assignment a physics professor would hand out in class.

One application of Fick's law is to approximate the fraction of material that has diffused (with thermal diffusion coefficient *D*) at least a certain distance *x* over a time duration *t*:

$$ e^{\frac{-x}{\sqrt{Dt}}}= \frac{Q}{Q_0}$$

- For greater than 300 meters, *Q/Q_0* is 13.5/20, read from **Figure 1**.
- For greater than 700 meters, *Q/Q_0* is 7.5/20.

where *Q_0* = 20 is the baseline for the total heat measured over all depths (i.e., between *x* = 0 and *x* = infinite depth) at the current time. No heat will diffuse to infinite depths, so at that point *Q/Q_0* is 0/20.

First, we can check how well the value of *L* = sqrt(*Dt*) scales, by fitting the *Q/Q_0* ratio at each depth:

- For *x* = 300 meters, we get *L* = 763
- For *x* = 700 meters, we get *L* = 713

These two values are close enough to invariant that the Fick's-law scaling relation holds, and we can infer that the flow is governed by an effective diffusion, just as Hansen et al surmised in 1981.

We then use an average elapsed diffusion time of *t* = 40 years, assume an average diffusion depth of 740 meters, and *D* comes out to about 4.5 cm^2/s.
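These back-of-the-envelope numbers can be reproduced directly (a sketch of mine; the *Q/Q_0* ratios are the values read off **Figure 1** in the text, and the exact *D* depends on rounding conventions, landing near the 4.5 cm^2/s quoted above):

```python
import math

def diffusion_length(x_m, q_ratio):
    # invert the Fick's-law fraction exp(-x/L) = Q/Q0  =>  L = x / ln(Q0/Q)
    return x_m / math.log(1.0 / q_ratio)

L300 = diffusion_length(300.0, 13.5 / 20.0)   # heat beyond 300 m depth
L700 = diffusion_length(700.0, 7.5 / 20.0)    # heat beyond 700 m depth
print(int(L300), int(L700))  # 763 713

# effective diffusion coefficient from L = sqrt(D*t), L ~ 740 m, t = 40 yr
L_cm = 740.0 * 100.0                  # meters -> centimeters
t_s = 40.0 * 365.25 * 24 * 3600      # years -> seconds
D = L_cm ** 2 / t_s                   # cm^2/s, roughly 4.3
print(round(D, 1))
```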

Hansen in 1981 [2] used an estimated diffusion coefficient of 1 cm^2/s, which is within an order of magnitude of 4.5 cm^2/s.

This is a scratch attempt at an approximate solution to the more general problem, which is the equivalent of generating an impulse in the past and watching it evolve. The general solution we formulated earlier involves a modulated forcing function: we compute the convolution against the thermal impulse response (i.e., the Green's function) and compare the full temporal profile against **Figure 1** and **Figure 2**.

#### Discussion

Reading Hansen and looking at the BTK paper's results, nothing looks out of the ordinary in the OHC trend when compared to what conventional thermal physics would predict. The total heat absorbed by the ocean is within the range expected for a 3 C doubling sensitivity. BTK have done the important step of determining the amount of total heat absorbed, which allows us to estimate the effective diffusion coefficient by evaluating how the heat contents modulate with depth. We add a maximum entropy modifier to the coefficient to model the disorder in diffusivity (some of it eddy diffusivity, some vertical, etc.), and that allows us to match the temporal profile accurately.

**References**

[1] M. A. Balmaseda, K. E. Trenberth, and E. Källén, "Distinctive climate signals in reanalysis of global ocean heat content," *Geophysical Research Letters*, 2013.

[2] J. Hansen et al., "Climate impact of increasing atmospheric carbon dioxide," *Science*, vol. 213 (4511), pp. 957–966, 1981. http://pubs.giss.nasa.gov/docs/1981/1981_Hansen_etal.pdf

#### EDITED

More data from the NOAA site, using the same fit. Levitus et al [3] apply an alternative characterization to the above; I apply the same model as the dashed green line.

Figure 4: From the NOAA site, with the same model. Source: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/

And here is a plot of OHC using the effective forcing described by Hansen [4]. Instead of using a ramp forcing function, which gives the analytical result of **Figure 2**, a realistic forcing (which takes into account perturbations due to volcanic events) is numerically convolved with the diffusive response function, *I(t)*.

Note that the volcanic disturbances are clearly visible in the response, although they do not show as sharp a transient decrease after the events, perhaps half of what **Figure 1** shows. The suppression due to the 1997–1998 El Niño is also not observable, but that of course is not an effective forcing and so won't appear.

[3] S. Levitus et al., "World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010," *Geophysical Research Letters*, vol. 39, no. 10, 2012.

[4] J. Hansen, M. Sato, P. Kharecha, and K. von Schuckmann, "Earth's energy imbalance and implications," *Atmospheric Chemistry and Physics*, vol. 11, no. 24, pp. 13421–13449, Dec. 2011.

WHT, nice work as usual. An unrelated question: I thought I had read a post here about an Ornstein-Uhlenbeck diffusion model, but I cannot seem to find it. Was that only at The Oil Drum? Thanks. DC


DC, I don't have the O-U model writeup complete with respect to Bakken flow, but the March 21 blog posting has some simulations of Ornstein-Uhlenbeck red noise. I will have more.


Well done.


Agree with David Benson wholeheartedly. This is a tidy bit of analysis. Sorry for the, ah, lagged response, but I've only just seen your post. Aside: how the hell you can put up with the talking backsides at JC's as you do is beyond me. After just a couple of days I am ready to grip a windpipe again.


BBD, It's motivation. I need conflict to keep driving forward, otherwise I tend to procrastinate. Thanks for the support.


Here is an interesting comment from the CE blog:

> Pekka Pirilä | May 10, 2013 at 5:46 am | Heat transfer in oceans is not a diffusion process. It's still possible that diffusion type models like that of WHUT provide a fair approximation to the heat transfer when averaged over the oceans but it's also possible that they fail badly.

Horizontal and vertical eddy diffusion is not a "standard" diffusion process such as you would find in a Brownian motion example, yet the math used is similar. That's why macro-scale eddy behaviors are classified as diffusion processes. The heat gets dispersed not because of ordinary thermal conductivity but because of advective diffusion processes which carry the heat away as a free ride. Sometimes we refer to these kinds of processes as "effective" diffusion coefficients because they are a hybrid combination of various diffusivities. In this case, the highest diffusion coefficient, that of eddy diffusivity, wins out.

Recall that James Hansen applied the concept of this effective macro diffusion in the early 1980s. He used diffusion coefficients as high as 1 cm^2/s in trying to model the behavior of vertical eddy diffusivity. A value of 1 cm^2/s is comparable to the thermal diffusivity of copper. Stick your tongue to a below-freezing copper sheet and you will get an appreciation for how fast heat will diffuse away from your tissue.

Science is about building on the work of others. What I am doing is following the numbers supplied by Hansen and the oceanography scientists, and then making sense of the data models provided by Levitus and Balmaseda/Trenberth.


Web, let's remember that I am the guy who referred you to a Hansen paper. Not because I have any great understanding, but I read it and remembered it when you mentioned "diffusion" with respect to the oceans warming in a comment; I went back, found the paper, and provided you with a link. Obviously during La Niña and negative PDO there is tremendous upwelling of cold water alongside South America and the American/Canadian Pacific NW. Along the equator, strong winds peel off the warm surface waters and move them to the Western Pacific, where they mound up: sea level there goes way up. In the Western Pacific there is significant downwelling of hot water that once extended all the way across the surface of the Pacific equatorial region. I have not found a reference that states the same thing is happening in the Pacific ocean region by Russia and Japan, but I can see no reason why it would not be happening there as well.

I do not know when it became known that this significant amount of upwelling and downwelling takes place during La Niña and negative PDO, but (and I have to find this again) I'm certain Tsonis and Swanson referred to their speculated regime shift as being similar to a past period where the deep ocean warmed. Which is what is happening now, with La Niña dominance and a negative PDO. Would Hansen have known of this phenomenon in the early 1980s?


BBD, I miss you over there. That's why they act the way they act. They know it will drive away reason.


Hi WHT, saw your link here from Lucia's a few weeks ago and marked it down as something I wanted to look at further. Do you happen to have the spreadsheet or script available for your model? I tried reproducing most of it in R from the post description, but I think I may be missing a couple of details to match your results… Thanks, -Troy


I use a combination of Prolog and R where appropriate, with a high-level language for the time series analysis:

```prolog
%% lambda function for diffusion
dd_ohc(Scale, L, D, Year, Result) :-
    R is sqrt(D*Year)/L,
    Result is Scale*(R+2)/(R+1)^2/2.

%% For the 700 meter layer, DD7 (pipeline pseudocode, not standard Prolog):
%%   mapdot dd_ohc(1.0, 0.7, D) ~> Time
%%   OHC7 convolve DD7*Forcing
```


Thanks, WHT. The problem I see with this model as it stands now is that there is no radiative restoration term. This is problematic for two reasons: 1) as this term (units of W/m^2/K, e.g.) is what determines equilibrium sensitivity (it is its inverse), the model as it stands does not really say anything about sensitivity, and 2) by assuming the TOA imbalance is equal to the forcing anomaly (which is implicitly what you are doing by using this for your excess heat), you miss the dependence of TOA imbalance on surface temperature, which in turn affects the evolution of the ocean heat uptake.

Consider what happens if you set D to some tiny number (or 0, for the sake of this thought experiment). Now if you apply a single step change in forcing of 1 W/m^2, for instance, and then hold that steady, your model will continue to accumulate heat in perpetuity. In this sense, your model seems to have an infinite ECS. What should happen in that scenario is that the surface layer should heat very quickly, increasing the temperature until the increased outgoing radiation from the temperature increase matches that from the forcing increase, thereby restoring equilibrium (and no more heating should occur after that). Again, as it stands now, it appears the only way for your model to "lose" energy (rather than increasing outgoing radiation) is for it to diffuse below the bottom of the ocean.

I think you could still use the diffusion part, but you will need to determine what depth constitutes the surface "mixed layer", and then choose some value for the radiative restoration strength (lambda) that will increase outgoing radiation in proportion to the temperature increase of this surface layer. The problem, however, is that you could then likely match observations with a variety of values for D and lambda (or perhaps not, I haven't looked into it too far).


We are speaking different languages here. I am approaching the problem as a transient thermal dynamics behavior. The radiative restoration term is buried in with the latent heat of evaporation. The outward radiation only occurs at the surface layer, and that is mixed with the latent heat of vaporization.

Once any excess thermal energy gets to the subsurface layers, it can randomly walk up and down. That is called diffusion. If some of this thermal energy walks back up, then yes, it can get radiated back outward. But that is taken into consideration by the Brownian motion implied in the model. The actual temperature change that occurs is negligible because the large specific heat capacity of water plus the diffusive volume smears this out. That's entropy for you: the mixing and spreading of energy. If D were tiny like you say it could be, then yes, the boundary conditions would need to change and the response would definitely differ. But we are dealing with thermal diffusivities on the order of 1 cm^2/s, which is akin to a copper block: once that heat enters, it will go all over the place, not due to the intrinsic thermal conductivity of water but thanks to the random eddies which form and act as an effective diffusivity.

These perturbation diffusion problems aren't hard. The interface between the ocean and the atmosphere is not as compliant as you would find in other thermal diffusion models. Treat the thermal energy as a particle, assume a large uncertainty in diffusivity, and apply the transient formulation of the heat equation, using the forcing as a stimulus to the Green's function.

It will be a long time for the ECS to reach its ultimate value, which is where steady-state thermal compliance will start taking hold. I haven't looked at your own work; maybe you have made the classic mistake of assuming first-order response times for thermal forcings. Realistically, the solutions are not first-order (i.e. exponentially damped); instead they are Fickian-like response profiles which have fat tails. It appears as a semi-infinite heat sink. If you don't assume the full partial differential heat equation as your initial premise and instead opt for an approximate first-order temporal differential equation, you will get the wrong results.


You are right, we seem to be speaking a different language. However, let's see if we can agree on the following first: if you were to hold the forcing anomaly fixed (let's say at 1.5 W/m^2 relative to 1955 values) in the year 2000, would you expect the *rate* of ocean heat uptake (including all layers) to stay constant, or decrease? Hopefully you agree that this rate should decrease, as the temperature increase during the time when the forcing is held constant will lead to a decrease in the TOA imbalance. However, from what I can see, in your model this rate remains constant in that case. This is what I mean when I say your model has an infinite ECS.

Getting into specifics, you seem to be arguing that the temperature change is so small as to be insignificant for the transient case, so the above model divergence is not problematic? We know sea surface temperatures have risen ~0.5 K over this period… assuming the 1.23 W/m^2/K radiative response associated with a 3 K sensitivity results in about 0.6 W/m^2 of increased outgoing radiation to balance out the ~1.5 W/m^2 forcing at the end. This means you are pumping heat into your ocean at the end at about a 67% higher rate than you should be in recent decades. If effective sensitivity were instead closer to 2 K over this period, your TOA imbalance would be 3x what it should be. I assume this is why you required the 1/2 factor, since you were using the forcing as the TOA imbalance without consideration of how surface temperature evolution would impact the TOA imbalance. However, if you continue to do this, as the forcing increases your model's ocean warming (if you hold your current D parameter fixed) will continue to accelerate at a much faster rate than it should.

If you were merely modeling the diffusion of heat after the initial ocean uptake, I would not object (although for the surface mixed layer, which is homogenized through turbulent mixing, even "effective" diffusion probably does not describe the process very well). However, again, your convolution with the forcing history without consideration of surface temperature evolution means that the magnitude and evolution of the rate of ocean uptake will be incorrect.


That wouldn't be an infinite ECS; it would be a stiff ECS which only responds to SST changes. The ocean is so compliant that it acts as a huge heat sink, almost infinite. But this does not lead to an infinite ECS; it leads to an ECS that simply reflects the SST layer, which is the heat residing near the surface (a proportion of the total). It is the difference between ECS and TCR.
