DC at the OilPeakClimate blog spent some time re-analyzing the Oil Shock model and the Dispersive Discovery model, which were originally described in The Oil Conundrum book (see menu above).
Whenever a model is re-analyzed, there is the possibility of improvement. In DC’s case, he made a very smart move in treating the extra-heavy oil as a distinct process. The Shell Oil discovery data appears to combine the two sets of data, leading to a much larger URR than Laherrere gets. What DC accomplished was to reconcile the lighter-crude Laherrere discovery data with the reality that there are likely ~500 GB of extra-heavy crude waiting to be exploited. Whether that exploitation actually happens is the big question.
Read DC’s whole post and the ensuing discussion here; over the last several years he has made an excellent effort to digest some heavy math and dry reading from the book. He is also making sense of the Bakken oil production numbers in other posts.
As a PS, I have added an extra section in the book describing the dispersive diffusion model as applied to the Bakken production numbers.
DC also posted this piece on the PeakOilBarrel blog, where it drew almost 500 comments.
2 thoughts on “The Oil Shock Model Simplified”
It must have been re-analysis week. I published an update to the Loglet Analysis almost on the same day:
The math is certainly simpler, but it is interesting to see how the results improve by unbundling NGLs from crude. The next step is to unbundle the heavy petroleums and model those separately.
Another thing: Jean has been using a 500 Gb ultimate for heavy petroleums for some time. See, for instance, this article:
See you around. Best.
The correct link to Jean’s article:
Click to access JL_Clarmix-Previsions1900-2100.pdf