Exploring (de)compaction with Python

All clastic sediments are subject to compaction (and reduction of porosity) as the result of increasingly tighter packing of grains under a thickening overburden. Decompaction – the estimation of the decompacted thickness of a rock column – is an important part of subsidence (or geohistory) analysis. The following exercise is loosely based on the excellent basin analysis textbook by Allen & Allen (2013), especially their Appendix 56. You can download the Jupyter notebook version of this post from Github.

Import stuff

import numpy as np
import matplotlib.pyplot as plt
import functools
from scipy.optimize import bisect
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.rcParams['mathtext.fontset'] = 'cm'

Posing the problem

Given a sediment column of a certain lithology with its top at y_1 and its base at y_2, we are trying to find the thickness and average porosity of the same sediment column at a different depth (see figure below). We are going to set the new top y_1' and work towards finding the new base y_2'.

plt.figure(figsize=(2,5))
x = [0,1,1,0,0]
y = [1,1,1.5,1.5,1]
plt.text(-0.6,1.02,'$y_1$',fontsize=16)
plt.text(-0.6,1.52,'$y_2$',fontsize=16)
plt.text(-0.6,1.27,'$\phi$',fontsize=16)
plt.fill(x,y,'y')
x = [3,4,4,3,3]
y = [0.5,0.5,1.15,1.15,0.5]
plt.text(2.25,0.52,'$y_1\'$',fontsize=16)
plt.text(2.25,1.17,'$y_2\'$',fontsize=16)
plt.text(2.25,0.9,'$\phi\'$',fontsize=16)
plt.fill(x,y,'y')
plt.plot([1,3],[1,0.5],'k--')
plt.plot([1,3],[1.5,1.15],'k--')
plt.gca().invert_yaxis()
plt.axis('off');

[Figure: sediment column before and after (de)compaction]

Porosity decrease with depth

Porosity decreases with depth, initially largely due to mechanical compaction of the sediment. The decrease in porosity is relatively large close to the seafloor, where sediment is loosely packed; the lower the porosity, the less room there is for further compaction. This decrease in porosity with depth is commonly modeled as a negative exponential function (Athy, 1930):

\phi(y) = \phi_0 e^{-\frac{y}{y_0}}

where \phi(y) is the porosity at depth y and y_0 is the depth at which porosity has fallen to 1/e of its initial (surface) value \phi_0.

This is an empirical equation, as there is no direct physical link between depth and porosity; compaction and porosity reduction are more directly related to the increase in effective stress under a thicker overburden. Here we only address the simplest scenario with no overpressured zones. For normally pressured sediments, Athy’s porosity-depth relationship can be expressed in a slightly different form:

\phi(y) = \phi_0 e^{-cy}

where c is a coefficient with the units km^{-1}. The idea is that c is a characteristic constant for a given lithology, and it can be measured if porosity values are available from different depths. Muds have higher porosities at the seafloor than sands, but they compact faster than sands. The plot below shows some typical curves that illustrate the exponential decrease in porosity with depth for sand and mud. The continuous lines correspond to the parameters for sand and mud in Appendix 56 of Allen & Allen (2013); the dashed lines are exponential fits to data from the Ocean Drilling Program (Kominz et al., 2011).

c_sand = 0.27 # porosity-depth coefficient for sand (km-1)
c_mud = 0.57 # porosity-depth coefficient for mud (km-1)
phi_sand_0 = 0.49 # surface porosity for sand
phi_mud_0 = 0.63 # surface porosity for mud
y = np.arange(0,3.01,0.01)

phi_sand = phi_sand_0 * np.exp(-c_sand*y)
phi_mud = phi_mud_0 * np.exp(-c_mud*y)

plt.figure(figsize=(4,7))
plt.plot(phi_sand,y,'y',linewidth=2,label='sand')
plt.plot(phi_mud,y,'brown',linewidth=2,label='mud')
plt.xlabel('porosity')
plt.ylabel('depth (km)')
plt.xlim(0,0.65)
plt.gca().invert_yaxis()

c_sand = 1000/18605.0 # Kominz et al. 2011 >90% sand curve
c_mud = 1000/1671.0 # Kominz et al. 2011 >90% mud curve
phi_sand_0 = 0.407 # Kominz et al. 2011 >90% sand curve
phi_mud_0 = 0.614 # Kominz et al. 2011 >90% mud curve
phi_sand = phi_sand_0 * np.exp(-c_sand*y)
phi_mud = phi_mud_0 * np.exp(-c_mud*y)
plt.plot(phi_sand,y,'y--',linewidth=2,label='90% sand')
plt.plot(phi_mud,y,'--',color='brown',linewidth=2,label='90% mud')
plt.legend(loc=0, fontsize=10);

[Figure: porosity-depth curves for sand and mud]

While the compaction trends for mud happen to be fairly similar in the plot above, the ones for sandy lithologies are very different. This highlights that porosity-depth curves vary significantly from one basin to another, and are strongly affected by overpressures and exhumation. Using local data and geological information is critical. As Giles et al. (1998) put it, “The use of default compaction curves can introduce significant errors into thermal history and pore-fluid pressure calculations, particularly where little well data are available to calibrate the model.” To see how widely – and wildly – variable compaction trends can be, check out the review paper by Giles et al. (1998).

Deriving the general decompaction equation

Compacting or decompacting a column of sediment means that we have to move it along the curves in the figure above. Let’s consider the volume of water in a small segment of the sediment column (over which porosity does not vary a lot):

dV_w = \phi dV_t

As we have seen before, porosity at depth y is

\phi(y) = \phi_0 e^{-cy}

The first equation then becomes

dV_w = \phi_0 e^{-cy} dV_t

But

dV_w = A dy_w

and

dV_t = A dy_t

where y_w and y_t are the thicknesses that the water and total volumes occupy respectively, and A is the area of the column we are looking at. So the relationship is equivalent to

dy_w = \phi_0 e^{-cy} dy_t

If we integrate this over the interval y_1 to y_2 we get

y_w = \int_{y_1}^{y_2} \phi_0 e^{-cy} dy

Evaluating the integral yields

y_w = \phi_0 \Bigg(\frac{1}{-c}e^{-cy_2} - \frac{1}{-c}e^{-cy_1}\Bigg) = \frac{\phi_0}{c} \big(e^{-cy_1}-e^{-cy_2}\big)

As the total thickness equals the sediment thickness plus the water “thickness”, we get

y_s = y_t - y_w = y_2 - y_1 - y_w = y_2 - y_1 - \frac{\phi_0}{c} \big(e^{-cy_1}-e^{-cy_2}\big)

The decompacted value of y_w is

y_w' = \frac{\phi_0}{c} \big(e^{-cy_1'}-e^{-cy_2'}\big)

Now we can write the general decompaction equation:

y_2'-y_1' = y_s+y_w'

That is,

y_2'-y_1' = y_2 - y_1 - \frac{\phi_0}{c} \big(e^{-cy_1}-e^{-cy_2}\big) + \frac{\phi_0}{c} \big(e^{-cy_1'}-e^{-cy_2'}\big)

The average porosity at the new depth will be

\phi = \frac{Ay_w'}{Ay_t'} = \frac{\phi_0}{c}\frac{\big(e^{-cy_1'}-e^{-cy_2'}\big)}{y_2'-y_1'}
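
As a quick sanity check on the integration above, we can let sympy evaluate the integral symbolically (this is only a check, not part of the workflow):

import sympy

y, y1, y2, phi_0, c = sympy.symbols('y y_1 y_2 phi_0 c', positive=True)
y_w = sympy.integrate(phi_0*sympy.exp(-c*y), (y, y1, y2))
print(sympy.simplify(y_w)) # should print phi_0*(exp(-c*y_1) - exp(-c*y_2))/c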

Write code to compute (de)compacted thickness

The decompaction equation could be solved in the ‘brute force’ way, that is, by gradually changing the value of y_2' until the right hand side (RHS) of the equation matches the left hand side (LHS) – see for example the Excel spreadsheet that accompanies Appendix 56 in Allen & Allen (2013). However, we (and scipy) can do better than that; we will use bisection, one of the simplest root-finding methods, applied to the function that we set up as RHS-LHS.

# compaction function - the unknown variable is y2a
def comp_func(y2a,y1,y2,y1a,phi,c):
    # left hand side of decompaction equation:
    LHS = y2a - y1a
    # right hand side of decompaction equation:
    RHS = y2 - y1 - (phi/c)*(np.exp(-c*y1)-np.exp(-c*y2)) + (phi/c)*(np.exp(-c*y1a)-np.exp(-c*y2a))
    return LHS - RHS

Now we can do the calculations; here we set the initial depths of a sandstone column y_1,y_2 to 2 and 3 kilometers, and we estimate the new thickness and porosity assuming that the column is brought to the surface (y_1'=0).

c_sand = 0.27 # porosity-depth coefficient for sand (km-1)
phi_sand = 0.49 # surface porosity for sand
y1 = 2.0 # top depth in km
y2 = 3.0 # base depth in km
y1a = 0.0 # new top depth in km

One issue we need to address is that ‘comp_func’ has six input parameters, but the function we pass to scipy’s ‘bisect’ can only have one unknown variable. We create a partial function ‘comp_func_1’ in which the only variable is ‘y2a’; the rest are treated as constants:

comp_func_1 = functools.partial(comp_func, y1=y1, y2=y2, y1a=y1a, phi=phi_sand, c=c_sand)
y2a = bisect(comp_func_1,y1a,y1a+3*(y2-y1)) # use bisection to find new base depth
phi = (phi_sand/c_sand)*(np.exp(-c_sand*y1)-np.exp(-c_sand*y2))/(y2-y1) # initial average porosity
phia = (phi_sand/c_sand)*(np.exp(-c_sand*y1a)-np.exp(-c_sand*y2a))/(y2a-y1a) # new average porosity

print('new base depth: '+str(round(y2a,2))+' km')
print('initial thickness: '+str(round(y2-y1,2))+' km')
print('new thickness: '+str(round(y2a-y1a,2))+' km')
print('initial porosity: '+str(round(phi,3)))
print('new porosity: '+str(round(phia,3)))

Write code to (de)compact a stratigraphic column with multiple layers

Next we write a function that does the depth calculation for more than one layer in a sedimentary column:

def decompact(tops,lith,new_top,phi_sand,phi_mud,c_sand,c_mud):
    tops_new = [] # list for decompacted tops
    tops_new.append(new_top) # starting value
    for i in range(len(tops)-1):
        if lith[i] == 0:
            phi = phi_mud; c = c_mud
        if lith[i] == 1:
            phi = phi_sand; c = c_sand
        comp_func_1 = functools.partial(comp_func,y1=tops[i],y2=tops[i+1],y1a=tops_new[-1],phi=phi,c=c)
        base_new_a = tops_new[-1]+tops[i+1]-tops[i]
        base_new = bisect(comp_func_1, base_new_a, 4*base_new_a) # bisection
        tops_new.append(base_new)
    return tops_new

Let’s use this function to decompact a simple stratigraphic column that consists of 5 alternating layers of sand and mud.

tops = np.array([1.0,1.1,1.15,1.3,1.5,2.0])
lith = np.array([0,1,0,1,0]) # lithology labels: 0 = mud, 1 = sand
phi_sand_0 = 0.49 # surface porosity for sand
phi_mud_0 = 0.63 # surface porosity for mud
c_sand = 0.27 # porosity-depth coefficient for sand (km-1)
c_mud = 0.57 # porosity-depth coefficient for mud (km-1)
tops_new = decompact(tops,lith,0.0,phi_sand_0,phi_mud_0,c_sand,c_mud) # compute new tops

Plot the results:

def plot_decompaction(tops,tops_new,lith): # pass 'lith' explicitly instead of relying on a global
    for i in range(len(tops)-1):
        if lith[i] == 0:
            color = 'xkcd:umber'
        if lith[i] == 1:
            color = 'xkcd:yellowish'
        x = [0,1,1,0]
        y = [tops[i], tops[i], tops[i+1], tops[i+1]]
        plt.fill(x,y,color=color) # compacted column
        x = np.array([2,3,3,2])
        y = np.array([tops_new[i], tops_new[i], tops_new[i+1], tops_new[i+1]])
        plt.fill(x,y,color=color) # decompacted column
    plt.gca().invert_yaxis()
    plt.tick_params(axis='x',which='both',bottom=False,top=False,labelbottom=False)
    plt.ylabel('depth (km)');

plot_decompaction(tops,tops_new,lith)

[Figure: (de)compacted stratigraphic column, Allen & Allen (2013) parameters]

Now let’s see what happens if we use the 90% mud and 90% sand curves from Kominz et al. (2011).

tops = np.array([1.0,1.1,1.15,1.3,1.5,2.0])
lith = np.array([0,1,0,1,0]) # lithology labels: 0 = mud, 1 = sand
c_sand = 1000/18605.0 # Kominz et al. 2011 >90% sand curve
c_mud = 1000/1671.0 # Kominz et al. 2011 >90% mud curve
phi_sand_0 = 0.407 # Kominz et al. 2011 >90% sand curve
phi_mud_0 = 0.614 # Kominz et al. 2011 >90% mud curve
tops_new = decompact(tops,lith,0.0,phi_sand_0,phi_mud_0,c_sand,c_mud) # compute new tops

plot_decompaction(tops,tops_new,lith)

[Figure: (de)compacted stratigraphic column, Kominz et al. (2011) parameters]

Quite predictably, the main difference is that the sand layers have decompacted less in this case.

It is not that hard to modify the code above to handle more than two lithologies; one way of doing it is sketched below.
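
Here is a minimal sketch, with the (surface porosity, c) pairs stored in a dictionary keyed by lithology label; the values for the third lithology are made-up placeholders:

# compaction parameters keyed by lithology label:
lith_params = {0: (0.63, 0.57),  # mud: (surface porosity, c in km-1)
               1: (0.49, 0.27),  # sand
               2: (0.70, 0.71)}  # a third lithology -- placeholder values

def decompact_multi(tops,lith,new_top,lith_params):
    tops_new = [new_top] # list for decompacted tops
    for i in range(len(tops)-1):
        phi, c = lith_params[lith[i]] # look up parameters for this layer
        comp_func_1 = functools.partial(comp_func,y1=tops[i],y2=tops[i+1],y1a=tops_new[-1],phi=phi,c=c)
        base_new_a = tops_new[-1]+tops[i+1]-tops[i]
        base_new = bisect(comp_func_1, base_new_a, 4*base_new_a) # bisection
        tops_new.append(base_new)
    return tops_new

That’s it for now. Happy (de)compacting!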

References

Allen, P. A., and Allen, J. R. (2013) Basin Analysis: Principles and Application to Petroleum Play Assessment, Wiley-Blackwell.

Athy, L.F. (1930) Density, porosity and compaction of sedimentary rocks. American Association of Petroleum Geologists Bulletin, v. 14, p. 1–24.

Giles, M.R., Indrelid, S.L., and James, D.M.D., 1998, Compaction — the great unknown in basin modelling: Geological Society London Special Publications, v. 141, no. 1, p. 15–43, doi: 10.1144/gsl.sp.1998.141.01.02.

Kominz, M.A., Patterson, K., and Odette, D., 2011, Lithology Dependence of Porosity In Slope and Deep Marine Sediments: Journal of Sedimentary Research, v. 81, no. 10, p. 730–742, doi: 10.2110/jsr.2011.60.

Thanks to Mark Tingay for comments on ‘default’ compaction trends.

Cutoffs in submarine channels

A paper on sinuous submarine channels that is based on work I have done with Jake Covault (at the Quantitative Clastics Laboratory, Bureau of Economic Geology, UT Austin) has been recently published in the journal Geology (officially it will be published in the October issue, but it has been online for a few weeks now).

The main idea of the paper is simple: many submarine channels become highly sinuous and cutoff bends, similar to those observed in the case of rivers, must be fairly common. The more high-quality data becomes available from these systems, the more evidence there is that this is indeed the case. I think this observation is fairly interesting in itself, as cutoffs will add significant complexity to the three-dimensional structure of the preserved deposits. We have addressed that complexity to some degree in another paper. However, previous work on submarine channels has not dealt with the implications of how the along-channel slope changes as cutoff bends form.

So, inspired by a model of incising subaerial bedrock rivers, we have put together a numerical model that not only mimics the plan-view evolution of meandering channels, by keeping track of the x and y coordinates of the channel centerline, but also records the elevations (z-values) at each control point along this line. The plan-view evolution is based on the simplest possible model of meandering, described in a fantastic paper by Alan Howard and Thomas Knutson published in 1984 and titled “Sufficient conditions for river meandering: A simulation approach”. The key idea is that the rate of channel migration is not only a function of local curvature (in the sense of larger curvatures resulting in larger rates of outer bank erosion and channel migration), but it also depends on the channel curvature upstream from the point of interest; and the strength of this dependence declines exponentially as you move upstream. Without this ‘trick’ there is no stable meandering. The model starts out with an (almost) straight line that has some added noise; meanders develop through time as a dominant wavelength gets established. The z-coordinates are assigned initially so that they describe a constant along-channel slope; as the sinuosity increases, the overall slope decreases and local variability appears as well, depending on the local sinuosity of the channel. More discrete changes take place at the time and location of a cutoff: this is essentially a shortcut that introduces a much steeper segment into the centerline. The animations below (same as two of the supplementary files that were published with the paper) show how the cutoffs form and how the along-channel slope profiles change through time.
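
To make the curvature-weighting idea concrete, here is a minimal sketch (my own illustration, not the code used in the paper) of how such a migration rate could be computed; the nominal rate R0, the spacing ds, and the decay parameter alpha are assumed inputs:

import numpy as np

def migration_rate(R0, ds, alpha):
    # R0: nominal migration rate at each centerline point (proportional to local curvature)
    # ds: along-channel distance between centerline points
    # alpha: inverse length scale of the upstream influence
    R1 = np.zeros_like(R0)
    for i in range(len(R0)):
        xi = np.arange(i+1)*ds # distances upstream of point i
        w = np.exp(-alpha*xi)  # weights decay exponentially upstream
        R1[i] = np.sum(w*R0[i::-1])/np.sum(w) # weighted average of local and upstream rates
    return R1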


Animation of incising submarine channels with early meander cutoffs.



Evolution of along-channel gradient (top) and elevation profile (bottom) through time.

Meandering lowland rivers (like the Mississippi) have very low slopes to start with, so cutoff-related elevation drops are not as impressive as in the case of steep (with up to 40-50 m/km gradients) submarine channels (see diagram below). As a result, knickpoints that form this way in submarine channels are likely to be the locations of erosion and contribute to the stratigraphic complexity of the channel deposits. Previous work has shown that avulsion- and deformation-driven knickpoints are important in submarine channels; our results suggest that the meandering process itself can generate significant morphologic and stratigraphic along-channel variability.


Knickpoint gradient plotted against the overall slope. Dashed lines show trends for different cutoff distances (beta is the ratio between cutoff distance and channel width).

Finally I should mention that, apart from the seismic interpretation itself, we have done all the modeling, analysis, visualization, and plotting with Jupyter notebooks and Mayavi (for 3D visualization), using the Anaconda Python distribution. I cannot say enough good things about – and enough thanks for – these tools.

Stratigraphic patterns in slope minibasins

A few months ago some former colleagues and I published a paper on the stratigraphic evolution of minibasins that are present on many continental slopes (Sylvester, Z., Cantelli, A., and Pirmez, C., 2015, Stratigraphic evolution of intraslope minibasins: Insights from surface-based model: AAPG Bulletin, v. 99, no. 6, p. 1099–1129, doi: 10.1306/01081514082). We used a simple model that investigates the interplay between subsidence and sedimentation and helps in understanding how stratal termination patterns relate to variations in sediment input and basin subsidence. Conventional sequence stratigraphy focuses on what is happening on the shelf and the shelf edge; and the origin of the stratal patterns that characterize the typical ‘slug diagram’ is not trivial to relate to the three main parameters that influence continental-margin stratigraphy: changes in sea level, sediment supply, and subsidence. Our minibasin model is simpler because (1) it only has two parameters: sediment supply and subsidence (the impact of sea level – if any – is included in the sediment supply curve); and (2) sedimentary layers are deposited horizontally during each time step. The basic idea is to investigate the geometries of a system that consists of a depression that deepens through time while sediment is deposited in it at a certain rate. The figures that follow are color versions of the model cross sections that were published in the paper. The originals are actually interactive (you can zoom in and pan if you want to, which is useful if you want to check some details, especially along the basin margins). However, I could not get WordPress to display these in an interactive format; you can check out the interactive version of this post on Github. I have used the excellent, d3.js-based – and recently open-sourced – Plotly to create the interactive plots. A few brief comments follow so that the plots make some sense even if you don’t read the paper.

There are two types of layers in the model: (1) the ones that correspond to the sediment supply that comes into the basin from updip, mostly in the form of turbidity currents – these are shown in yellow color in the plots below; and (2) thinner ones that reflect an overall finer-grained background sedimentation – these are shown in brown color. The latter have a constant thickness across the basin. It is important to note that, while it is true that all the sand is supposed to be in the yellow units, not all of the yellow units are coarse-grained. In fact, slope minibasins have overall a much lower sand content than what a ‘yellow = sand’ assumption would imply in these plots. The blue dots are pinchout points, locations where the yellow layers terminate. In the paper we adopted the idea that ‘onlap’ refers to the case when these termination points move toward the basin edge, and ‘offlap’ corresponds to stratigraphic intervals where the pinchout locations migrate toward the center of the basin.
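
To give a feel for the deposition rule, here is a toy sketch (my own, much simpler than the surface-based model in the paper) in which a 2D depression subsides and is filled with a horizontal layer at each time step; all parameter values are made up:

import numpy as np

x = np.linspace(0, 10, 501)                # distance (km)
surface = np.zeros_like(x)                 # initial topography
subsidence = 0.01*np.exp(-((x-5)**2)/4.0)  # subsidence per step, largest at basin center
sed_area = 0.02                            # sediment cross-sectional area added per step (km2)
tops = []                                  # layer tops through time
for step in range(100):
    surface = surface - subsidence         # basin deepens
    # find the horizontal level up to which 'sed_area' of sediment fills the depression:
    levels = np.linspace(surface.min(), surface.max(), 1000)
    areas = [np.trapz(np.clip(lev-surface, 0, None), x) for lev in levels]
    ind = min(np.searchsorted(areas, sed_area), len(levels)-1)
    surface = np.maximum(surface, levels[ind]) # deposit a layer with a horizontal top
    tops.append(surface.copy())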

Sedimentary fills of slope minibasins commonly show strata that either converge toward or onlap onto the basin margins (or show a combination of the two). The first two cross sections are examples of ‘pure’ convergence and well-defined onlap. In the first case (convergence), both sediment input rate and subsidence are constant and have similar orders of magnitude, and, as a result, the locations where sedimentary layers terminate are stationary. The basin never gets very deep, as sedimentation keeps up with subsidence.

[Figure: cross section of the convergence model]

Obviously animations are much better at conveying the idea of sedimentation taking place while the basin is subsiding (the background sedimentation is set to zero in the animation; the lower panel is a chronostratigraphic – or Wheeler – diagram):

[Animation: convergence model]

For the formation of onlap, a relatively deep basin and high sediment input are needed. The model below has the same overall sediment volume as the previous one, but it was deposited in half the time. Subsidence was kept constant, just like in the convergence model.

[Figure: cross section of the onlap model]

And here is the animation for the onlap scenario:

[Animation: onlap model]

The next two plots illustrate what happens if the sediment input varies following a sinusoidal curve with three cycles, while subsidence is constant. In the first model, there is enough accommodation in the basin to keep all the sediment; in the second model, the sediment input exceeds the space available in the basin and each cycle has an upper part during which some of the sediment bypasses the basin. If you zoom in, you can see that both onlap and offlap are present in both cases.

[Figure: sinusoidal sediment input, no bypass]

[Figure: sinusoidal sediment input, with bypass]

And the animation for the no-bypass scenario:

[Animation: sinusoidal sediment input, no bypass]

Next we looked at what happens if sediment input is kept constant, but subsidence varies (following a sinusoidal function). It is tempting to think that the result must be similar to the case of variable sediment input and constant subsidence, but that is *not* the case. In the previous models, onlap surfaces overlie condensed sections across the whole basin. In this new model, onlap surfaces are restricted to the basin margin and they correlate to sections with high sedimentation rate at the basin center.

[Figure: constant sediment input, sinusoidal subsidence]

It is a cool idea that stratal patterns might tell you whether variability in sediment supply or deformation is more important. However, if both parameters are changing through time, with similar magnitudes, it is practically impossible to tell what is going on:

[Figure: both sediment input and subsidence varying]

This animation gives you an idea of how these two cases can transition into each other, as sediment input goes from constant to sinusoidal and out-of-phase with subsidence:

[Animation: transition from constant to out-of-phase sinusoidal sediment input]

So far all the simulations were based on sinusoidally varying parameters; what happens though if sediment supply is closer to an ‘on-off’ function? You can see the result below: each sedimentary cycle only shows onlap, the offlapping upper part is missing.

[Figure: 'on-off' sediment input]

The next two plots are attempts to reproduce the stratal patterns seen in two well-studied minibasins in the Gulf of Mexico: Brazos-Trinity Basin 4 and the Auger Basin.

[Figure: model of Brazos-Trinity Basin 4]

[Figure: model of the Auger Basin]

Finally, we looked at the common scenario of multiple linked minibasins, with sediment bypassing the updip basin serving as the sediment input for the basin downdip. The classic idea about this depositional setting is called ‘fill and spill’: the updip basin has to be filled first before any sediment gets into – or spills over into – the second basin. A ‘static’ version of this idea, with pre-formed basins and no subsidence during sedimentation, is illustrated in the plot below, with three sediment input cycles:

[Figure: static fill and spill]

The animated version looks like this:

[Animation: static fill and spill]

However, this static fill and spill is probably the exception rather than the rule: most minibasins keep subsiding while sedimentation is taking place. So, in contrast with the previous model, the large-scale fills of the basins are time-equivalent, while at the scale of individual sediment input cycles the fill-and-spill model still works.

[Figure: dynamic fill and spill]

Now let’s animate this:

[Animation: dynamic fill and spill]

For more details on these models, see the paper in AAPG Bulletin. Ten related animations are open access and available on the AAPG website. And, again, the plots are a lot more interesting if you can zoom, pan and rescale, so you should really check out the intended format of this post.

Forest cover change in the Carpathians since 1985

There has been a lot of talk in recent months about how bad the deforestation problem is in Romania, especially related to the expansion of the Austrian company Schweighofer. Although there is a lot of chit-chat in the media and on the internet on the subject, data and facts are not that easy to find. Not that easy — unless you make a bit of an effort: it turns out that a massive dataset on forest cover change in Eastern Europe is available for download, thanks to scientists at the University of Maryland who have published a paper on the dataset in the journal Remote Sensing of Environment.

The dataset is based on Landsat imagery collected between 1985 and 2012 and has a pixel size of 30×30 meters. It can be downloaded from this website. I am especially interested in what’s going on in parts of the Carpathians, because that’s where I grew up and went on lots of hikes and through lots of adventures in the 1980s and early 1990s. What follows are a few screenshots that give an idea about what is possible to see with this data.

This first image shows the whole Carpathian – intra-Carpathian area. There are five colors that correspond to different histories of forest cover: black pixels are places without forest during the time of study; green is stable forest; blue is forest gain; red is forest loss; and purple is forest loss followed by forest gain [there are two additional categories in the data, but they are not very common in this area].

[Map: forest cover change in the Carpathian region]

The good news is that there is a lot of green in this map, which means that about 28.5% of the Carpathian area was continuously covered by forest since 1985. An additional 3.4% was without tree cover in 1985, but has gained forest cover since then. The forest is gone from 1.5% of the area; and 1.7% has lost and then regained tree cover.
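
For what it is worth, percentages like these are easy to compute with GDAL and numpy once the GeoTIFF is downloaded; the filename and the class codes below are hypothetical placeholders – check the legend that comes with the dataset:

import numpy as np
from osgeo import gdal

ds = gdal.Open('forest_cover_change.tif') # hypothetical filename
data = ds.GetRasterBand(1).ReadAsArray()
for code, label in [(1,'no forest'),(2,'stable forest'),(3,'forest gain'),
                    (4,'forest loss'),(5,'loss then gain')]: # hypothetical codes
    print(label, round(100.0*np.sum(data==code)/data.size, 1)) # percent of area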

If we zoom in to the ‘bend’ area, which is essentially the southeastern corner of Transylvania, Romania, it becomes more obvious that, although the big picture doesn’t look very bad, some places have been affected fairly significantly over the last three decades:

[Map: forest cover change in the 'bend' area of the Carpathians]

The black, forest-free patches in the middle are young intramontane basins. Deforestation looks to be more of a problem in the Ciuc/Csík and Gheorgheni/Gyergyó basins. Let’s have a closer look at the Csík Basin:

[Map: forest cover change in the Csík Basin]

A further zoom-in shows one of the areas with the most forest loss, the western side of the Harghita Mountains:

[Map: forest loss on the western side of the Harghita Mountains]

I am going to stop here; but this dataset has a lot more to offer than I showed in this post. The conclusion from this certainly shouldn’t be that everything is fine; often the issue is not so much the quantity of the forest being cut, but *where* it is being cut. Protected areas and national parks should clearly be green and stay green on these maps; and the Romanian Carpathians could use a few more protected lands, as many of these forests have never been cut (unlike a lot of forests in Western Europe).

I have used IPython Notebook with the GDAL package to create these images. The notebook can be viewed and downloaded over here.

Reference

P.V. Potapov, S.A. Turubanova, A. Tyukavina, A.M. Krylov, J.L. McCarty, V.C. Radeloff, M.C. Hansen, Eastern Europe’s forest cover dynamics from 1985 to 2012 quantified from the full Landsat archive, Remote Sensing of Environment, Volume 159, 15 March 2015, Pages 28-43, http://dx.doi.org/10.1016/j.rse.2014.11.027.

Exploring the diffusion equation with Python

Ever since I became interested in science, I started to have a vague idea that calculus, matrix algebra, partial differential equations, and numerical methods are all fundamental to the physical sciences and engineering and they are linked in some way to each other. The emphasis here is on the word vague; I have to admit that I had no clear, detailed understanding of how these links actually work. It seems like my formal education both in math and physics stopped just short of where everything would have nicely come together. Papers that are really important in geomorphology, sedimentology or stratigraphy seemed impossible to read as soon as they started assuming that I knew quite a bit about convective acceleration, numerical schemes, boundary conditions, and Cholesky factorization. Because I didn’t.

So I have decided a few months ago that I had to do something about this. This blog post documents the initial – and admittedly difficult – steps of my learning; the purpose is to go through the process of discretizing a partial differential equation, setting up a numerical scheme, and solving the resulting system of equations in Python and IPython notebook. I am learning this as I am doing it, so it may seem pedestrian and slow-moving to a lot of people but I am sure there are others who will find it useful. Most of what follows, except the Python code and the bit on fault scarps, is based on and inspired by Slingerland and Kump (2011): Mathematical Modeling of Earth’s Dynamical Systems (strongly recommended). You can view and download the IPython Notebook version of this post from Github.

Estimating the derivatives in the diffusion equation using the Taylor expansion

This is the one-dimensional diffusion equation:

\frac{\partial T}{\partial t} - D\frac{\partial^2 T}{\partial x^2} = 0

The Taylor expansion of the value of a function u at a point \Delta x ahead of the point x where the function is known can be written as:

u(x+\Delta x) = u(x) + \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2} \frac{\partial^2 u}{\partial x^2} + \frac{\Delta x^3}{6} \frac{\partial^3 u}{\partial x^3} + O(\Delta x^4)

The Taylor expansion of the value of the function u at a point one space step behind is:

u(x-\Delta x) = u(x) - \Delta x \frac{\partial u}{\partial x} + \frac{\Delta x^2}{2} \frac{\partial^2 u}{\partial x^2} - \frac{\Delta x^3}{6} \frac{\partial^3 u}{\partial x^3} + O(\Delta x^4)

Solving the first Taylor expansion above for \frac{\partial u}{\partial x} and dropping all higher-order terms yields the forward difference operator:

\frac{\partial u}{\partial x} = \frac{u(x+\Delta x)-u(x)}{\Delta x} + O(\Delta x)

Similarly, the second equation yields the backward difference operator:

\frac{\partial u}{\partial x} = \frac{u(x)-u(x-\Delta x)}{\Delta x} + O(\Delta x)

Subtracting the second equation from the first one gives the centered difference operator:

\frac{\partial u}{\partial x} = \frac{u(x+\Delta x)-u(x-\Delta x)}{2\Delta x} + O(\Delta x^2)

The centered difference operator is more accurate than the other two.
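
A quick numerical experiment illustrates the difference in accuracy; here u(x) = sin(x), whose exact derivative at x = 1 is cos(1):

import numpy as np

dx = 0.1
x = 1.0
exact = np.cos(x)
forward = (np.sin(x+dx)-np.sin(x))/dx       # forward difference
centered = (np.sin(x+dx)-np.sin(x-dx))/(2*dx) # centered difference
print(abs(forward-exact))  # error of order dx, about 4e-2
print(abs(centered-exact)) # error of order dx^2, about 9e-4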

Finally, if the two Taylor expansions are added, we get an estimate of the second order partial derivative:

\frac{\partial^2 u}{\partial x^2} = \frac{u(x+\Delta x)-2u(x)+u(x-\Delta x)}{\Delta x^2} + O(\Delta x^2)

Next we use the forward difference operator to estimate the first term in the diffusion equation:

\frac{\partial T}{\partial t} = \frac{T(t+\Delta t)-T(t)}{\Delta t}

The second term is expressed using the estimation of the second order partial derivative:

\frac{\partial^2 T}{\partial x^2} = \frac{T(x+\Delta x)-2T(x)+T(x-\Delta x)}{\Delta x^2}

Now the diffusion equation can be written as

\frac{T(t+\Delta t)-T(t)}{\Delta t} - D \frac{T(x+\Delta x)-2T(x)+T(x-\Delta x)}{\Delta x^2} = 0

This is equivalent to:

T(t+\Delta t) - T(t) - \frac{D\Delta t}{\Delta x^2}(T(x+\Delta x)-2T(x)+T(x-\Delta x)) = 0

The expression D\frac{\Delta t}{\Delta x^2} is called the diffusion number, denoted here with s:

s = D\frac{\Delta t}{\Delta x^2}

FTCS explicit scheme and analytic solution

If we use n to refer to indices in time and j to refer to indices in space, the above equation can be written as

T[n+1,j] = T[n,j] + s(T[n,j+1]-2T[n,j]+T[n,j-1])

This is called a forward-in-time, centered-in-space (FTCS) scheme. Its ‘footprint’ looks like this:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg' # display plots in SVG format

plt.figure(figsize=(6,3))
plt.plot([0,2],[0,0],'k')
plt.plot([1,1],[0,1],'k')
plt.plot([0,1,2,1],[0,0,0,1],'ko',markersize=10)
plt.text(1.1,0.1,'T[n,j]')
plt.text(0.1,0.1,'T[n,j-1]')
plt.text(1.1,1.1,'T[n+1,j]')
plt.text(2.1,0.1,'T[n,j+1]')
plt.xlabel('space')
plt.ylabel('time')
plt.axis('equal')
plt.yticks([0.0,1.0],[])
plt.xticks([0.0,1.0],[])
plt.title('FTCS explicit scheme',fontsize=12)
plt.axis([-0.5,2.5,-0.5,1.5]);

[Figure: footprint of the FTCS explicit scheme]

Now we are ready to write the code that is the solution for exercise 2 in Chapter 2 of Slingerland and Kump (2011). This is an example where the one-dimensional diffusion equation is applied to viscous flow of a Newtonian fluid adjacent to a solid wall. If the wall starts moving with a velocity of 10 m/s, and the flow is assumed to be laminar, the velocity profile of the fluid is described by the equation

\frac{\partial V}{\partial t} - \nu \frac{\partial^2 V}{\partial y^2} = 0

where \nu is the kinematic viscosity of the fluid. We want to figure out how the velocity will change through time as a function of distance from the wall. [Note that I have changed the original 40 m/s to 10 m/s — the former seems like an unnaturally large velocity to me].

We can compare the numerical results with the analytic solution, which is known for this problem:

V = V_0 \Big\{ \sum\limits_{n=0}^\infty erfc\big(2n\eta_1+\eta\big) - \sum\limits_{n=0}^\infty erfc\big(2(n+1)\eta_1-\eta\big) \Big\}

where

\eta_1 = \frac{h}{2\sqrt{\nu t}}

and

\eta = \frac{y}{2\sqrt{\nu t}}

dt = 0.0005 # grid size for time (s)
dy = 0.0005 # grid size for space (m)
viscosity = 2*10**(-4) # kinematic viscosity of oil (m2/s)
y_max = 0.04 # in m
t_max = 1 # total time in s
V0 = 10 # velocity in m/s

# function to calculate velocity profiles based on a 
# finite difference approximation to the 1D diffusion 
# equation and the FTCS scheme:
def diffusion_FTCS(dt,dy,t_max,y_max,viscosity,V0):
    # diffusion number (has to be less than 0.5 for the 
    # solution to be stable):
    s = viscosity*dt/dy**2 
    y = np.arange(0,y_max+dy,dy) 
    t = np.arange(0,t_max+dt,dt)
    r = len(t)
    c = len(y)
    V = np.zeros([r,c])
    V[:,0] = V0
    for n in range(0,r-1): # time
        for j in range(1,c-1): # space
            V[n+1,j] = V[n,j] + s*(V[n,j-1] - 
                2*V[n,j] + V[n,j+1]) 
    return y,V,r,s
# note that this can be written without the for-loop 
# in space, but it is easier to read it this way
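
For reference, this is what the vectorized version could look like (a sketch that would replace the nested loops inside 'diffusion_FTCS' and give the same result):

for n in range(0,r-1): # time
    V[n+1,1:-1] = V[n,1:-1] + s*(V[n,:-2] - 2*V[n,1:-1] + V[n,2:])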


from scipy.special import erfc

# function to calculate velocity profiles 
# using the analytic solution:
def diffusion_analytic(t,h,V0,dy,viscosity):
    y = np.arange(0,h+dy,dy)
    eta1 = h/(2*(t*viscosity)**0.5)
    eta = y/(2*(t*viscosity)**0.5)
    sum1 = 0
    sum2 = 0
    for n in range(0,1000):
        sum1 = sum1 + erfc(2*n*eta1+eta)
        sum2 = sum2 + erfc(2*(n+1)*eta1-eta)
    V_analytic = V0*(sum1-sum2)
    return V_analytic

y,V,r,s = diffusion_FTCS(dt,dy,t_max,y_max,viscosity,V0)

# plotting:
plt.figure(figsize=(7,5))
plot_times = np.arange(0.2,1.0,0.1)
for t in plot_times:
    plt.plot(y,V[int(round(t/dt)),:],'Gray',label='numerical') # integer index needed in Python 3
    V_analytic = diffusion_analytic(t,0.04,V0,dy,viscosity) # use V0, consistent with the numerical run
    plt.plot(y,V_analytic,'ok',label='analytic',
        markersize=3)
    if t==0.2:
        plt.legend(fontsize=12)
plt.xlabel('distance from wall (m)',fontsize=12)
plt.ylabel('velocity (m/s)',fontsize=12)
plt.axis([0,y_max,0,V0])
plt.title('comparison between explicit numerical\n(FTCS scheme) and analytic solutions');

[Figure: diffusion with FTCS scheme]

The dots (analytic solution) overlap pretty well with the lines (numerical solution). However, this would not be the case if we changed the discretization so that the diffusion number was larger. Let’s look at the stability of the FTCS numerical scheme, by computing the solution with different diffusion numbers. It turns out that the diffusion number s has to be less than 0.5 for the FTCS scheme to remain stable.
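
The two discretizations used in this comparison fall on either side of that limit:

viscosity = 2*10**(-4) # kinematic viscosity (m2/s)
print(viscosity*0.0005/0.0005**2)  # s = 0.4 -> stable
print(viscosity*0.00254/0.001**2)  # s = 0.508 -> unstable

What follows is a reproduction of Figure 2.7 in Slingerland and Kump (2011):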

dt = 0.0005 # grid size for time (s)
dy = 0.0005 # grid size for space (m)
y,V,r,s = diffusion_FTCS(dt,dy,t_max,y_max,viscosity,V0)
V_analytic = diffusion_analytic(0.5,0.04,V0,dy,viscosity)
plt.figure(figsize=(7,5))
plt.plot(y,V_analytic-V[int(round(0.5/dt))],'--k',label='small s') # integer index needed in Python 3

dy = 0.0010
dt = 0.00254
y,V,r,s = diffusion_FTCS(dt,dy,t_max,y_max,viscosity,V0)
V_analytic = diffusion_analytic(0.5,0.04,V0,dy,viscosity)
V_numeric = V[int(r/2)-1,:] # integer index needed in Python 3

plt.plot(y,V_analytic-V_numeric,'k',label='large s')
plt.xlabel('distance from wall (m)',fontsize=12)
plt.ylabel('velocity difference (m/s)',fontsize=12)
plt.title('difference between numerical and analytic\nsolutions for different \'s\' values',fontsize=14)
plt.axis([0,y_max,-4,4])
plt.legend();

[Figure: difference between analytic and numerical solutions]

Laasonen implicit scheme

plt.figure(figsize=(6,3))
plt.plot([0,2],[1,1],'k')
plt.plot([1,1],[0,1],'k')
plt.plot([0,1,2,1],[1,1,1,0],'ko',markersize=10)
plt.text(1.1,0.1,'T[n,j]')
plt.text(0.1,1.1,'T[n+1,j-1]')
plt.text(1.1,1.1,'T[n+1,j]')
plt.text(2.1,1.1,'T[n+1,j+1]')
plt.xlabel('space')
plt.ylabel('time')
plt.axis('equal')
plt.yticks([0.0,1.0],[])
plt.xticks([0.0,1.0],[])
plt.title('Laasonen scheme',fontsize=12)
plt.axis([-0.5,2.5,-0.5,1.5]);

Laasonen scheme

Instead of estimating the velocity at time step n+1 with the curvature calculated at time step n, as it is done in the FTCS explicit scheme, we can also estimate the curvature at time step n+1, using the velocity change from time step n to time step n+1:

s\big(T(x+\Delta x)-2T(x)+T(x-\Delta x)\big) = T(t+\Delta t)-T(t)

Written using the grid indexing introduced earlier, this is equivalent to

s\big(T[n+1,j+1]-2T[n+1,j]+T[n+1,j-1]\big) = T[n+1,j]-T[n,j]

After some reshuffling we get

-sT[n+1,j+1] + (1+2s)T[n+1,j] - sT[n+1,j-1] = T[n,j]

This is the Laasonen fully implicit scheme. Unlike the FTCS scheme, the Laasonen scheme is unconditionally stable. Let’s try to write some Python code that implements this scheme. First it is useful for me to go through the logic of constructing the system of equations that needs to be solved. Let’s consider a grid that only consists of 5 nodes in space and we are going to estimate the values of T at the locations marked by the red dots in the figure below. Black dots mark the locations where we already know the values of T (from the initial and boundary conditions).

plt.figure(figsize=(6,3))
plt.plot([0,4,4,0],[0,0,1,1],'k')
for i in range(0,4):
    plt.plot([i,i],[0,1],'k')
plt.plot([0,1,2,3,4,0,4],[0,0,0,0,0,1,1],'ko',markersize=10)
plt.plot([1,2,3],[1,1,1],'ro',markersize=10)
for i in range(0,5):
    plt.text(i+0.1,0.1,'T[0,'+str(i)+']')
    plt.text(i+0.1,1.1,'T[1,'+str(i)+']')
plt.xlabel('space')
plt.ylabel('time')
plt.axis('equal')
plt.yticks([0.0,1.0],['0','1'])
plt.title('first two time steps on a 1D grid of five points',fontsize=12)
plt.axis([-0.5,4.8,-0.5,1.5]);

[Figure: first two time steps on a 1D grid of five points]

First we write the equations using the Laasonen scheme centered on the three points of unknown velocity (or temperature) — these are the red dots in the figure above:

\begin{array}{rrrrrcl}   -sT[1,0]&+(1+2s)T[1,1]&-sT[1,2]&+0T[1,3]&+0T[1,4]&=&T[0,1] \\  0T[1,0]&-sT[1,1]&+(1+2s)T[1,2]&-sT[1,3]&+0T[1,4]&=&T[0,2] \\  0T[1,0]&+0T[1,1]&-sT[1,2]&+(1+2s)T[1,3]&-sT[1,4]&=&T[0,3]  \end{array}

It may seem like we have five unknowns and only three equations, but T[1,0] and T[1,4] are on the boundaries and they are known. Let’s rearrange the equation system so that the left hand side has only the unknowns:

\begin{array}{rrrrcrr}   (1+2s)T[1,1]&-sT[1,2]&+0T[1,3]&=&T[0,1]&+sT[1,0] \\  -sT[1,1]&+(1+2s)T[1,2]&-sT[1,3]&=&T[0,2]& \\  0T[1,1]&-sT[1,2]&+(1+2s)T[1,3]&=&T[0,3]&+sT[1,4]  \end{array}

In matrix form this is equivalent to

\begin{bmatrix} 1+2s & -s & 0 \\  -s & 1+2s & -s \\  0 & -s & 1+2s \end{bmatrix} \times   \left[ \begin{array}{c} T[1,1] \\ T[1,2] \\ T[1,3] \end{array} \right]  = \left[ \begin{array}{c} T[0,1]+sT[1,0] \\ T[0,2] \\ T[0,3]+sT[1,4] \end{array} \right]

This of course can be extended to larger dimensions than shown here.
Now we are ready to write the code for the Laasonen scheme. One important difference relative to what I did in the explicit scheme example is that in this case we only keep the last two versions of the velocity distribution in memory, as opposed to preallocating the full array of nt x ny size as we did before. This difference is not a significant time saver for simple problems like this but once you start dealing with more complicated tasks and code it is not possible and/or practical to keep the results of all time steps in memory.

from scipy.sparse import diags
def diffusion_Laasonen(dt,dy,t_max,y_max,viscosity,V0,V1):
    s = viscosity*dt/dy**2  # diffusion number
    y = np.arange(0,y_max+dy,dy) 
    t = np.arange(0,t_max+dt,dt)
    nt = len(t) # number of time steps
    ny = len(y) # number of dy steps
    V = np.zeros((ny,)) # initial condition
    V[0] = V0 # boundary condition on left side
    V[-1] = V1 # boundary condition on right side
    # create coefficient matrix:
    A = diags([-s, 1+2*s, -s], [-1, 0, 1], shape=(ny-2, ny-2)).toarray() 
    for n in range(nt): # step through time
        Vn = V.copy() # copy so that modifying B below does not touch V
        B = Vn[1:-1] # vector of knowns on the RHS of the equation
        B[0] = B[0]+s*V0
        B[-1] = B[-1]+s*V1
        V[1:-1] = np.linalg.solve(A,B) # solve the equation using numpy
    return y,t,V,s

Because this is a stable scheme, it is possible to get reasonable solutions with relatively large time steps (which was not possible with the FTCS scheme):

dt = 0.01 # grid size for time (s)
dy = 0.0005 # grid size for space (m)
viscosity = 2*10**(-4) # kinematic viscosity of oil (m2/s)
y_max = 0.04 # in m
V0 = 10.0 # velocity in m/s
V1 = 0.0 # velocity in m/s

plt.figure(figsize=(7,5))
for time in np.linspace(0,1.0,10):
    y,t,V,s = diffusion_Laasonen(dt,dy,time,y_max,viscosity,V0,V1)
    plt.plot(y,V,'k')
plt.xlabel('distance from wall (m)',fontsize=12)
plt.ylabel('velocity (m/s)',fontsize=12)
plt.axis([0,y_max,0,V0])
plt.title('Laasonen implicit scheme',fontsize=14);

Diffusion with Laasonen scheme

Just for fun, let’s see what happens if we set in motion the right side of the domain as well; that is, set V1 to a non-zero value:

dt = 0.01 # grid size for time (s)
dy = 0.0005 # grid size for space (m)
viscosity = 2*10**(-4) # kinematic viscosity of oil (m2/s)
y_max = 0.04 # in m
V0 = 10.0 # velocity in m/s
V1 = 5.0 # velocity in m/s

plt.figure(figsize=(7,5))
for time in np.linspace(0,1.0,10):
    y,t,V,s = diffusion_Laasonen(dt,dy,time,y_max,viscosity,V0,V1)
    plt.plot(y,V,'k')
plt.xlabel('distance from wall (m)',fontsize=12)
plt.ylabel('velocity (m/s)',fontsize=12)
plt.axis([0,y_max,0,V0])
plt.title('Laasonen implicit scheme',fontsize=14);

Case when velocities are nonzero on both sides

Crank-Nicolson scheme

The Crank-Nicolson scheme is based on the idea that the forward-in-time approximation of the time derivative estimates the derivative at the halfway point between times n and n+1, therefore the spatial curvature should be estimated there as well. The ‘footprint’ of the scheme looks like this:

plt.figure(figsize=(6,3))
plt.plot([0,2],[0,0],'k')
plt.plot([0,2],[1,1],'k')
plt.plot([1,1],[0,1],'k')
plt.plot([0,1,2,0,1,2],[0,0,0,1,1,1],'ko',markersize=10)
plt.text(0.1,0.1,'T[n,j-1]')
plt.text(1.1,0.1,'T[n,j]')
plt.text(2.1,0.1,'T[n,j+1]')
plt.text(0.1,1.1,'T[n+1,j-1]')
plt.text(1.1,1.1,'T[n+1,j]')
plt.text(2.1,1.1,'T[n+1,j+1]')
plt.xlabel('space')
plt.ylabel('time')
plt.axis('equal')
plt.yticks([0.0,1.0],[])
plt.xticks([0.0,1.0],[])
plt.title('Crank-Nicolson scheme',fontsize=12)
plt.axis([-0.5,2.5,-0.5,1.5]);

Crank-Nicolson scheme

The curvature at the halfway point can be estimated through averaging the curvatures that are calculated at n and n+1:

0.5s\big(T[n+1,j+1]-2T[n+1,j]+T[n+1,j-1]\big) + 0.5s\big(T[n,j+1]-2T[n,j]+T[n,j-1]\big) = T[n+1,j]-T[n,j]

This can be rearranged so that terms at n+1 are on the left hand side:

-0.5sT[n+1,j-1]+(1+s)T[n+1,j]-0.5sT[n+1,j+1] = 0.5sT[n,j-1]+(1-s)T[n,j]+0.5sT[n,j+1]

Just like we did for the Laasonen scheme, we can write the equations for the first two time steps:

\begin{array}{rrrrrcl}   -0.5sT[1,0] & +(1+s)T[1,1] & -0.5sT[1,2] & = & 0.5sT[0,0] & +(1-s)T[0,1] & +0.5sT[0,2] \\  -0.5sT[1,1] & +(1+s)T[1,2] & -0.5sT[1,3] & = & 0.5sT[0,1] & +(1-s)T[0,2] & +0.5sT[0,3] \\  -0.5sT[1,2] & +(1+s)T[1,3] & -0.5sT[1,4] & = & 0.5sT[0,2] & +(1-s)T[0,3] & +0.5sT[0,4]  \end{array}

Writing this in matrix form, with all the unknowns on the LHS:

\begin{bmatrix} 1+s & -0.5s & 0 \\ -0.5s & 1+s & -0.5s \\ 0 & -0.5s & 1+s \end{bmatrix} \times   \left[ \begin{array}{c} T[1,1] \\ T[1,2] \\ T[1,3] \end{array} \right]  = \begin{bmatrix} 1-s & 0.5s & 0 \\ 0.5s & 1-s & 0.5s \\ 0 & 0.5s & 1-s \end{bmatrix} \times  \left[ \begin{array}{c} T[0,1] \\ T[0,2] \\ T[0,3] \end{array} \right] +  \left[ \begin{array}{c} 0.5sT[1,0]+0.5sT[0,0] \\ 0 \\ 0.5sT[1,4]+0.5sT[0,4] \end{array} \right]

Now we can write the code for the Crank-Nicolson scheme. We will use a new input parameter called ntout that determines how many time steps we want to write out to memory. This way you don’t have to re-run the code if you want to plot multiple time steps.

def diffusion_Crank_Nicolson(dy,ny,dt,nt,D,V,ntout):
    Vout = [] # list for storing V arrays at certain time steps
    V0 = V[0] # boundary condition on left side
    V1 = V[-1] # boundary condition on right side
    s = D*dt/dy**2  # diffusion number
    # create coefficient matrix:
    A = diags([-0.5*s, 1+s, -0.5*s], [-1, 0, 1], 
          shape=(ny-2, ny-2)).toarray() 
    B1 = diags([0.5*s, 1-s, 0.5*s],[-1, 0, 1], shape=(ny-2, ny-2)).toarray()
    for n in range(1,nt): # time is going from second time step to last
        Vn = V
        B = np.dot(Vn[1:-1],B1) 
        B[0] = B[0]+0.5*s*(V0+V0)
        B[-1] = B[-1]+0.5*s*(V1+V1)
        V[1:-1] = np.linalg.solve(A,B)
        if n % int(nt/float(ntout)) == 0 or n==nt-1:
            Vout.append(V.copy()) # numpy arrays are mutable, 
            #so we need to write out a copy of V, not V itself
    return Vout,s

dt = 0.001 # grid size for time (s)
dy = 0.001 # grid size for space (m)
viscosity = 2*10**(-4) # kinematic viscosity of oil (m2/s)
y_max = 0.04 # in m
y = np.arange(0,y_max+dy,dy) 
ny = len(y)
nt = 1000
plt.figure(figsize=(7,5))
V = np.zeros((ny,)) # initial condition
V[0] = 10
Vout,s = diffusion_Crank_Nicolson(dy,ny,dt,nt,viscosity,V,10)

for V in Vout:
    plt.plot(y,V,'k')
plt.xlabel('distance from wall (m)',fontsize=12)
plt.ylabel('velocity (m/s)',fontsize=12)
plt.axis([0,y_max,0,V[0]])
plt.title('Crank-Nicolson scheme',fontsize=14);

Diffusion with Crank-Nicolson scheme

Fault scarp diffusion

So far we have been using a somewhat artificial (but simple) example to explore numerical methods that can be used to solve the diffusion equation. Next we look at a geomorphologic application: the evolution of a fault scarp through time. Although the idea that convex hillslopes are the result of diffusive processes goes back to G. K. Gilbert, it was Culling (1960, in the paper Analytical Theory of Erosion) who first applied the mathematics of the heat equation – that was already well known to physicists at that time – to geomorphology.

Here I used the Crank-Nicolson scheme to model a fault scarp with a vertical offset of 10 m. To compare the numerical results with the analytical solution (which comes from Culling, 1960), I created a function that was written using a Python package for symbolic math called sympy. One of the advantages of sympy is that you can quickly display equations in \LaTeX.

import sympy
from sympy import init_printing
init_printing(use_latex=True)
x, t, Y1, a, K = sympy.symbols('x t Y1 a K')
y = (1/2.0)*Y1*(sympy.erf((a-x)/(2*sympy.sqrt(K*t))) + sympy.erf((a+x)/(2*sympy.sqrt(K*t))))
y

y = \frac{Y_1}{2}\Bigg(erf\bigg(\frac{a-x}{2\sqrt{Kt}}\bigg) + erf\bigg(\frac{a+x}{2\sqrt{Kt}}\bigg)\Bigg)

The variables in this equation are x – horizontal coordinates, t – time, a – value of x where fault is located, K – diffusion coefficient, Y1 – height of fault scarp.

from sympy.utilities.lambdify import lambdify
# function for analytic solution:
f = lambdify((x, t, Y1, a, K), y)

dt = 2.5 # time step (years)
dy = 0.1 # grid size for space (m)
D = 50E-4 # diffusion coefficient in m2/yr 
# e.g., Fernandes and Dietrich, 1997
h = 10 # height of fault scarp in m
y_max = 20 # length of domain in m
t_max = 500 # total time in years
y = np.arange(0,y_max+dy,dy) 
ny = len(y)
nt = int(t_max/dt)
V = np.zeros((ny,)) # initial condition
V[:round(ny/2.0)] = h # initial condition

Vout,s = diffusion_Crank_Nicolson(dy,ny,dt,nt,D,V,20)

plt.figure(figsize=(10,5.2))
for V in Vout:
    plt.plot(y,V,'gray')

plt.xlabel('distance (m)',fontsize=12)
plt.ylabel('height (m)',fontsize=12)
plt.axis([0,y_max,0,10])
plt.title('fault scarp diffusion',fontsize=14);
plt.plot(y,np.asarray([f(x0, t_max, h, y_max/2.0, D) 
   for x0 in y]),'r--',linewidth=2);

Fault scarp diffusion

The numerical and analytic solutions (dashed red line) are very similar in this case (total time = 500 years). Let’s see what happens if we let the fault scarp evolve for a longer time.

dt = 2.5 # time step (years)
dy = 0.1 # grid size for space (m)
D = 50E-4 # diffusion coefficient in m2/yr
h = 10 # height of fault scarp in m
y_max = 20 # length of domain in m
t_max = 5000 # total time in years
y = np.arange(0,y_max+dy,dy) 
ny = len(y)
nt = int(t_max/dt)
V = np.zeros((ny,)) # initial condition
V[:round(ny/2.0)] = h # initial condition

Vout,s = diffusion_Crank_Nicolson(dy,ny,dt,nt,D,V,20)

plt.figure(figsize=(10,5.2))
for V in Vout:
    plt.plot(y,V,'gray')

plt.xlabel('distance (m)',fontsize=12)
plt.ylabel('height (m)',fontsize=12)
plt.axis([0,y_max,0,10])
plt.title('fault scarp diffusion',fontsize=14);
plt.plot(y,np.asarray([f(x0, t_max, h, y_max/2.0, D) 
   for x0 in y]),'r--',linewidth=2);

Fault scarp diffusion over long time

This doesn’t look very good, does it? The reason for the significant mismatch between the numerical and analytic solutions is the fixed nature of the boundary conditions: we keep the elevation at 10 m on the left side and at 0 m on the right side of the domain. There are two ways of getting a correct numerical solution: we either impose boundary conditions that approximate what the system is supposed to do if the elevations were not fixed; or we extend the space domain so that the boundary conditions can be kept fixed throughout the time of interest. Let’s do the latter; all the other parameters are the same as above.

dt = 2.5 # time step (years)
dy = 0.1 # grid size for space (m)
D = 50E-4 # diffusion coefficient in m2/yr
h = 10 # height of fault scarp in m
y_max = 40 # length of domain in m
t_max = 5000 # total time in years
y = np.arange(0,y_max+dy,dy) 
ny = len(y)
nt = int(t_max/dt)
V = np.zeros((ny,)) # initial condition
V[:round(ny/2.0)] = h # initial condition

Vout,s = diffusion_Crank_Nicolson(dy,ny,dt,nt,D,V,20)

plt.figure(figsize=(10,5.2))
for V in Vout:
    plt.plot(y,V,'gray')

plt.xlabel('distance (m)',fontsize=12)
plt.ylabel('height (m)',fontsize=12)
plt.axis([0,y_max,0,10])
plt.title('fault scarp diffusion',fontsize=14);
plt.plot(y,np.asarray([f(x0, t_max, h, y_max/2.0, D) 
   for x0 in y]),'r--',linewidth=2);

Fault scarp diffusion, extended domain

Now we have a much better result. The vertical dashed lines show the extent of the domain in the previous experiment. We have also gained some insight into choosing boundary conditions and setting up the model domain. It is not uncommon that setting up the initial and boundary conditions is the most time-consuming and difficult part of running a numerical model.

Further reading

R. Slingerland and L. Kump (2011) Mathematical Modeling of Earth’s Dynamical Systems

W. E. H. Culling (1960) Analytical Theory of Erosion

L. Barba (2013) 12 steps to Navier-Stokes – an excellent introduction to computational fluid dynamics that uses IPython notebooks

I have blogged before about the geosciency aspects of the diffusion equation over here.

You can view and download the IPython Notebook version of this post from Github.

Questions or suggestions? Contact @zzsylvester

Rivers through time, as seen in Landsat images

Thanks to the Landsat program and Google Earth Engine, it is possible now to explore how the surface of the Earth has been changing through the last thirty years or so. Besides the obvious issues of interest, like changes in vegetation, the spread of cities, and the melting of glaciers, it is also possible to look at how rivers change their courses through time. You have probably already seen the images of the migrating Ucayali River in Peru, for example here. This river is changing its course with an impressive speed; many – probably most – other rivers don’t show much obvious change during the same 30-year period. What determines the meander migration rate of rivers is an interesting question in fluvial geomorphology.

The data that underlies Google Earth Engine is not accessible to everybody, but the Landsat data is available to anyone who creates a free account with Earth Explorer. It is not that difficult (but fairly time consuming) to download a set of images and create animations like this (click for higher resolution):

[Animation: Ucayali River meander migration, 1985-2013]

This scene also comes from the Ucayali River (you can view it in Google Earth Engine over here) and it is a nice example of how both neck cutoffs and chute cutoffs form. First a neck cutoff takes place that affects the tight bend in the right side of the image; this is followed by a chute cutoff immediately downstream of the neck cutoff location, as the new course of the river happens to align well with a pre-existing chute channel. The third bend in the upper left corner shows some well-developed counter-point-bar deposits. There is one frame in the movie for each year from 1985 to 2013, with a few years missing (due to low quality of the data).
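
Once the scenes are downloaded, cropped, and saved as images, stitching them into an animation takes only a few lines – here is a sketch using the imageio package; the directory and filenames are hypothetical:

import glob
import imageio

filenames = sorted(glob.glob('ucayali_scenes/*.png')) # one cropped scene per year
frames = [imageio.imread(f) for f in filenames]
imageio.mimsave('ucayali_animation.gif', frames, duration=0.5) # 0.5 s per frame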

Update (04/14/2016): If you want to use the animation, feel free to do so, as long as you (1) give credit to NASA/USGS Landsat, (2) give credit to me (= Zoltan Sylvester, geologist), and (3) link to this page. Note that you can see/download the high-resolution version if you click on the image.

Update (04/15/2016): Matteo Niccoli (who has a great blog called MyCarta) has created a slower version of the animation:

[Animation: slower version of the Ucayali River animation]

Memorable moments and photos from 2013

We are well into 2014 now, but it is not too late to look back at 2013 and pick some of the best moments (which means photos in my case) of the year that just passed.

We started out the year with a trip to Zion and Bryce Canyon National Parks. Although it was fairly cold (especially at Bryce Canyon NP — this park has a much higher elevation overall than Zion NP), we had lots of sunshine and did several day hikes. Visiting these parks in the winter is a great idea — they are a lot less crowded than in the summer, and obviously the landscapes and sights are quite different when they are covered with snow.

Zion National Park is paradise for a sedimentologist: there are endless, top-quality exposures of the Navajo Sandstone, showing all kinds of sedimentary structures characteristic of deposits of wind-blown sand. I have included two examples here; you can find more on my Smugmug site.

Sedimentologically, Bryce Canyon National Park is a bit less exciting than Zion, but this is counterbalanced by the fantastic geomorphology of this place. I haven’t seen Bryce Canyon in the summer, but I wouldn’t be surprised if it was more beautiful when it’s covered with snow.

In February, I went on a ‘business’ trip to Torres del Paine National Park in Southern Chile: I attended a field consortium meeting organized by Steve Hubbard’s group at the University of Calgary. I have been to this area several times before, as it has some of the best outcrops of turbidites (= deep-water sediments) in the world, but I was once again shocked how uniquely beautiful Chilean Patagonia can be.

At the end of the official trip, Zane Jobe (who is blogging at Off the Shelf Edge) and I did a bit of geo-tourism: we went to see Glacier Grey and Lago Grey, and then did a day hike in the park to check out the actual Torres del Paine. The rest of the photos are here.

In July, my wife and I took a few days to do some hiking and running in Rocky Mountain National Park. I was struggling with a running injury at that time, but the mountains and the trails acted as efficient tranquilizers. More photos at Smugmug.

In September I attended a research conference on turbidity currents in Italy and Peter Talling showed us some of the classic outcrops of the Marnoso-Arenacea Formation. These rocks are unique because they were deposited by huge submarine flows that covered the entire basin floor. I had always wanted to see them, and it was enlightening to get up close to them.

DSC_6304

Turbidites of the Marnoso-Arenacea Formation, Italian Apennines. David Piper and Bill Arnott for scale

In October we spent a long weekend in Moab, Utah, to participate in our first trail races, but we also did some hiking. Running the Moab Trail Marathon was an amazing experience (I think I will have to do it again this year); unfortunately I didn’t take a camera with me, as I was trying to focus on running (and surviving the race).

DSC_6470

Typical view in Canyonlands National Park

To continue with the theme of ‘national parks in winter’, some friends from California and the two of us wrapped up the year with a Christmas trip to Yellowstone and Grand Teton National Parks. More photos, of course, at Smugmug.

Exploring grain settling with Python

Grain settling is one of the most important problems in sedimentology (and therefore sedimentary geology), as neither sediment transport nor deposition can be understood and modeled without knowing the settling velocity of a particle of a given grain size. Very small grains, when submerged in water, have a mass small enough that they reach terminal velocity before any turbulence develops. This is true for clay- and silt-sized particles settling in water, and for these grain size classes Stokes’ Law can be used to calculate the settling velocity:

w = \frac{RgD^2}{C_1 \nu}

where R is the specific submerged gravity (the density difference between the particle and the fluid, normalized by fluid density), g is the gravitational acceleration, D is the particle diameter, C_1 is a constant with a theoretical value of 18, and \nu is the kinematic viscosity of the fluid.
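As a quick sanity check of the formula (a back-of-the-envelope sketch, assuming a quartz grain of density 2650 kg/m3 settling in water at 20 degrees C), Stokes’ Law predicts that a 10 micron silt particle settles at less than 0.1 mm/s, that is, only a few meters per day:

R = (2650.0-1000.0)/1000.0 # submerged specific gravity of quartz in water
nu = 1.002e-6 # kinematic viscosity of water at 20 degrees C (m2/s)
D = 1e-5 # particle diameter of 10 microns (m)
w = R*9.81*D**2/(18*nu) # Stokes settling velocity
print(w) # ~9.0e-05 m/s, about 7.8 m per day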

For grain sizes coarser than silt, a category that clearly includes a lot of sediment and rock types of great interest to geologists, things get more complicated. The reason for this is the development of a separation wake behind the falling grain; the appearance of this wake results in turbulence and large pressure differences between the front and back of the particle. For large grains – pebbles, cobbles – this effect is so strong that viscous forces become small compared to pressure forces and turbulent drag dominates; the settling velocity can be estimated using the empirical equation

w = \sqrt{\frac{4RgD}{3C_2}}

where C_2 is a second constant, approximately equal to 1 for natural grains. The important point is that, for larger grains, the settling velocity increases more slowly with grain size: it goes with the square root of the diameter, as opposed to the square of the diameter in Stokes’ Law. For example, doubling the diameter quadruples the Stokes velocity, but increases the turbulent settling velocity only by a factor of about 1.4 (the square root of 2).

Sand grains are small enough that viscous forces still play an important role in their subaqueous settling behavior, but large enough that the departure from Stokes’ Law is significant and wake turbulence cannot be ignored. There are several empirical – and fairly complicated – equations that try to bridge this gap; here I focus on the simplest one, published in 2004 in the Journal of Sedimentary Research (Ferguson and Church, 2004):

w = \frac{RgD^2}{C_1\nu + \sqrt{0.75C_2RgD^3}}

At small values of D, the first term in the denominator is much larger than the one containing the third power of D, and the equation becomes equivalent to Stokes’ Law. At large values of D, the second term dominates and the settling velocity converges to the solution of the turbulent drag equation. We will verify this limiting behavior numerically once the three equations are implemented below.

But the point of this blog post is not to give a summary of the Ferguson and Church paper; what I am interested in is writing some simple code and plotting settling velocity against grain size, to better understand these relationships by exploring them graphically. So what follows is a series of Python code snippets, directly followed by the plots that you can generate if you run the code yourself. I have done this using the IPython notebook, a very nice tool that allows and promotes note taking, coding, and plotting within one document. I am not going to get into the details of Python programming and the usage of the IPython notebook, but you can check them out here.

First we have to implement the three equations as Python functions:

import numpy as np
import matplotlib.pyplot as plt
rop = 2650.0 # density of particle in kg/m3
rof = 1000.0 # density of water in kg/m3
visc = 1.002*1E-3 # dynamic viscosity in Pa*s at 20 C
C1 = 18 # constant in Ferguson-Church equation
C2 = 1 # constant in Ferguson-Church equation, equal to 1 for natural grains
def v_stokes(rop,rof,d,visc,C1):
    # settling velocity in the viscous (Stokes) regime
    R = (rop-rof)/rof # submerged specific gravity
    w = R*9.81*(d**2)/(C1*visc/rof) # visc/rof = kinematic viscosity
    return w
def v_turbulent(rop,rof,d,visc,C2):
    # settling velocity in the turbulent drag regime
    R = (rop-rof)/rof
    w = (4*R*9.81*d/(3*C2))**0.5
    return w
def v_ferg(rop,rof,d,visc,C1,C2):
    # settling velocity from the Ferguson-Church (2004) equation
    R = (rop-rof)/rof
    w = ((R*9.81*d**2)/(C1*visc/rof+
        (0.75*C2*R*9.81*d**3)**0.5))
    return w
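Before plotting, here is a quick numerical check of the limiting behavior mentioned above (a minimal sketch, using the parameter values just defined): the Ferguson-Church velocity should approach the Stokes velocity for very small grains and the turbulent-drag velocity for very large ones.

# ratio of Ferguson-Church to Stokes velocity for a 1 micron grain:
print(v_ferg(rop,rof,1e-6,visc,C1,C2)/v_stokes(rop,rof,1e-6,visc,C1)) # ~1.0
# ratio of Ferguson-Church to turbulent velocity for a 10 cm clast:
print(v_ferg(rop,rof,0.1,visc,C1,C2)/v_turbulent(rop,rof,0.1,visc,C2)) # ~1.0

Both ratios are very close to 1, as expected.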

Let’s plot these equations for a range of particle diameters:

d = np.arange(0,0.0005,0.000001)
ws = v_stokes(rop,rof,d,visc,C1)
wt = v_turbulent(rop,rof,d,visc,C2)
wf = v_ferg(rop,rof,d,visc,C1,C2)
plt.figure(figsize=(10,8))
plt.plot(d*1000,ws,label='Stokes',linewidth=3)
plt.plot(d*1000,wt,'g',label='Turbulent',linewidth=3)
plt.plot(d*1000,wf,'r',label='Ferguson-Church',linewidth=3)
plt.plot([0.25, 0.25],[0, 0.15],'k--')
plt.plot([0.25/2, 0.25/2],[0, 0.15],'k--')
plt.plot([0.25/4, 0.25/4],[0, 0.15],'k--')
plt.text(0.36, 0.11, 'medium sand', fontsize=13)
plt.text(0.16, 0.11, 'fine sand', fontsize=13)
plt.text(0.075, 0.11, 'v. fine', fontsize=13)
plt.text(0.08, 0.105, 'sand', fontsize=13)
plt.text(0.01, 0.11, 'silt and', fontsize=13)
plt.text(0.019, 0.105, 'clay', fontsize=13)
plt.legend(loc=2)
plt.xlabel('grain diameter (mm)',fontsize=15)
plt.ylabel('settling velocity (m/s)',fontsize=15)
plt.axis([0,0.5,0,0.15])
# settling experiment data from Table 2 of Ferguson and Church (2004):
D = [0.068, 0.081, 0.096, 0.115, 0.136, 0.273,
    0.386, 0.55, 0.77, 1.09, 2.18, 4.36]
w = [0.00425, 0.0060, 0.0075, 0.0110, 0.0139, 0.0388,
    0.0551, 0.0729, 0.0930, 0.141, 0.209, 0.307]
plt.scatter(D,w,50,color='k')
plt.show()

settling1

The black dots are data points from settling experiments performed with natural river sands (Table 2 in Ferguson and Church, 2004). It is obvious that the departure from Stokes’ Law is already significant for very fine sand and Stokes settling is completely inadequate for describing the settling of medium sand.
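To put rough numbers on this departure (a quick sketch using the functions defined above), we can compare the Stokes and Ferguson-Church predictions at two grain sizes:

# how much does Stokes' Law overpredict the settling velocity?
for d in [0.0001, 0.0003]: # 0.1 mm (very fine sand) and 0.3 mm (medium sand)
    print(d*1000, v_stokes(rop,rof,d,visc,C1)/v_ferg(rop,rof,d,visc,C1,C2))

The overprediction is about 20% for 0.1 mm grains, but already a factor of two for 0.3 mm grains.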

This plot only captures particle sizes finer than medium sand; let’s see what happens as we move to coarser sediment. A log-log plot is much better for this purpose.

d = np.arange(0,0.01,0.00001)
ws = v_stokes(rop,rof,d,visc,C1)
wt = v_turbulent(rop,rof,d,visc,C2)
wf = v_ferg(rop,rof,d,visc,C1,C2)
plt.figure(figsize=(10,8))
plt.loglog(d*1000,ws,label='Stokes',linewidth=3)
plt.loglog(d*1000,wt,'g',label='Turbulent',linewidth=3)
plt.loglog(d*1000,wf,'r',label='Ferguson-Church',linewidth=3)
plt.plot([1.0/64, 1.0/64],[0.00001, 10],'k--')
plt.text(0.012, 0.0007, 'fine silt', fontsize=13,
    rotation='vertical')
plt.plot([1.0/32, 1.0/32],[0.00001, 10],'k--')
plt.text(0.17/8, 0.0007, 'medium silt', fontsize=13,
    rotation='vertical')
plt.plot([1.0/16, 1.0/16],[0.00001, 10],'k--')
plt.text(0.17/4, 0.0007, 'coarse silt', fontsize=13,
    rotation='vertical')
plt.plot([1.0/8, 1.0/8],[0.00001, 10],'k--')
plt.text(0.17/2, 0.001, 'very fine sand', fontsize=13,
    rotation='vertical')
plt.plot([0.25, 0.25],[0.00001, 10],'k--')
plt.text(0.17, 0.001, 'fine sand', fontsize=13,
    rotation='vertical')
plt.plot([0.5, 0.5],[0.00001, 10],'k--')
plt.text(0.33, 0.001, 'medium sand', fontsize=13,
    rotation='vertical')
plt.plot([1, 1],[0.00001, 10],'k--')
plt.text(0.7, 0.001, 'coarse sand', fontsize=13,
    rotation='vertical')
plt.plot([2, 2],[0.00001, 10],'k--')
plt.text(1.3, 0.001, 'very coarse sand', fontsize=13,
    rotation='vertical')
plt.plot([4, 4],[0.00001, 10],'k--')
plt.text(2.7, 0.001, 'granules', fontsize=13,
    rotation='vertical')
plt.text(6, 0.001, 'pebbles', fontsize=13,
    rotation='vertical')
plt.legend(loc=2)
plt.xlabel('grain diameter (mm)', fontsize=15)
plt.ylabel('settling velocity (m/s)', fontsize=15)
plt.axis([0.01,10,0.00001,10]) # log axes require positive limits
plt.scatter(D,w,50,color='k')
plt.show()

settling3

This plot shows that neither Stokes’ Law nor the turbulent-drag velocity is valid for calculating the settling velocities of sand-size grains in water, whereas the Ferguson-Church equation provides a good fit to natural river sand.

Grain settling is a special case of the more general problem of flow past a sphere. The analysis and plots above are all dimensional, that is, you can quickly read off the plots the approximate settling velocity of, say, very fine sand. That is convenient, but you would have to generate a new plot, and potentially do a new experiment, if you wanted to look at the behavior of particles in a fluid other than water. A more general treatment of the problem uses dimensionless variables, in this case the particle Reynolds number and the drag coefficient. The classic diagram for flow past a sphere is a plot of the drag coefficient against the Reynolds number; I will try to reproduce this plot, using settling velocities that come from the three equations above.

At terminal settling velocity, the drag force equals the gravitational force acting on the grain:

F_d = F_g

We also know that the gravitational force is given by the submerged weight of the grain:

F_g = \frac{4}{3}\pi\left(\frac{D}{2}\right)^3(\rho_p - \rho_f)g

The drag coefficient is essentially a dimensionless version of the drag force:

C_d = \frac{F_d}{\frac{1}{2}\rho_f w^2 \frac{\pi D^2}{4}}

At terminal settling velocity, the particle Reynolds number is

Re = \frac{\rho_f w D}{\mu}

where \mu is the dynamic viscosity of the fluid.
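Because the drag force equals the submerged weight at terminal settling velocity, these relationships can be combined into a compact expression for the drag coefficient (a small derivation step worth writing out):

C_d = \frac{4}{3}\frac{RgD}{w^2}

Substituting the turbulent settling velocity from above (w^2 = 4RgD/(3C_2)) gives C_d = C_2, which is why the drag coefficient is expected to converge to a constant value at high Reynolds numbers.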

Using these relationships it is possible to generate the plot of drag coefficient vs. Reynolds number:

d = np.arange(0.000001,0.3,0.00001)
C2 = 0.4 # this constant is 0.4 for spheres, 1 for natural grains
ws = v_stokes(rop,rof,d,visc,C1)
wt = v_turbulent(rop,rof,d,visc,C2)
wf = v_ferg(rop,rof,d,visc,C1,C2)
Fd = (rop-rof)*4/3*np.pi*((d/2)**3)*9.81 # drag force = submerged weight at terminal velocity
Cds = Fd/(rof*ws**2*np.pi*(d**2)/8) # drag coefficient
Cdt = Fd/(rof*wt**2*np.pi*(d**2)/8)
Cdf = Fd/(rof*wf**2*np.pi*(d**2)/8)
Res = rof*ws*d/visc # particle Reynolds number
Ret = rof*wt*d/visc
Ref = rof*wf*d/visc
plt.figure(figsize=(10,8))
plt.loglog(Res,Cds,linewidth=3, label='Stokes')
plt.loglog(Ret,Cdt,linewidth=3, label='Turbulent')
plt.loglog(Ref,Cdf,linewidth=3, label='Ferguson-Church')
# data digitized from Southard textbook, figure 2-2:
Re_exp = [0.04857,0.10055,0.12383,0.15332,0.25681,0.3343,0.62599,0.77049,0.94788,1.05956,
       1.62605,2.13654,2.55138,3.18268,4.46959,4.92143,8.02479,12.28672,14.97393,21.33792,
       28.3517,34.55246,57.57204,78.3929,96.88149,159.92596,227.64082,287.31738,375.98547,
       516.14355,607.03827,695.8316,861.51953,1147.26099,1194.43213,1513.70166,1939.70557,
       2511.91235,2461.13232,3106.32397,3845.99561,4974.59424,6471.96875,8135.45166,8910.81543,
       11949.91309,17118.62109,21620.08203,28407.60352,36064.10156,46949.58594,62746.32422,
       80926.54688,97655.00781,122041.875,157301.8125,206817.7188,266273,346423.5938,302216.5938,
       335862.5313,346202,391121.5938,460256.375,575194.4375,729407.625]
Cd_exp = [479.30811,247.18175,199.24072,170.60068,112.62481,80.21341,45.37168,39.89885,34.56996,
       28.01445,18.88166,13.80322,12.9089,11.41266,8.35254,7.08445,5.59686,3.92277,3.53845,
       2.75253,2.48307,1.99905,1.49187,1.27743,1.1592,0.89056,0.7368,0.75983,0.64756,0.56107,
       0.61246,0.5939,0.49308,0.39722,0.48327,0.46639,0.42725,0.37951,0.43157,0.43157,0.40364,
       0.3854,0.40577,0.41649,0.46173,0.41013,0.42295,0.43854,0.44086,0.4714,0.45225,0.47362,
       0.45682,0.49104,0.46639,0.42725,0.42725,0.40171,0.31214,0.32189,0.20053,0.16249,0.10658,
       0.09175,0.09417,0.10601]
plt.loglog(Re_exp, Cd_exp, 'o', markerfacecolor = [0.6, 0.6, 0.6], markersize=8)

# Reynolds number for golf ball:
rof_air = 1.2041 # density of air at 20 degrees C (kg/m3)
u = 50 # velocity of golf ball (m/s)
d = 0.043 # diameter of golf ball (m)
visc_air = 1.983e-5 # dynamic viscosity of air at 20 degrees C (Pa*s)
Re = rof_air*u*d/visc_air
plt.loglog([Re, Re], [0.4, 2], 'k--')
plt.text(3e4,2.5,'$Re$ for golf ball',fontsize=13)
plt.legend(loc=1)
plt.axis([1e-2,1e6,1e-2,1e4])
plt.xlabel('particle Reynolds number ($Re$)', fontsize=15)
plt.ylabel('drag coefficient ($C_d$)', fontsize=15);

cdvsre

The grey dots are experimental data points digitized from the excellent textbook by John Southard, available through MIT Open Courseware. As turbulence becomes dominant at larger Reynolds numbers, the drag coefficient converges to a constant value (equal to C2 in the equations above). Note, however, the departure of the experimental data from this ideal horizontal line: at high Reynolds numbers there is a sudden drop in the drag coefficient as the laminar boundary layer becomes turbulent and flow separation around the particle is delayed, that is, pushed toward the back of the particle; the separation wake becomes smaller and the turbulent drag decreases. Golf balls are not big enough to reach this point without some additional ‘help’; this help comes from the dimples on the surface of the ball, which make the boundary layer turbulent and reduce the size of the wake.

You can view and download the IPython notebook version of this post from the IPython notebook viewer site.

References
Ferguson, R. and Church, M. (2004) A simple universal equation for grain settling velocity. Journal of Sedimentary Research 74, 933–937.

My job description, in simple words

I couldn’t refrain from playing with the ‘upgoerfive’ tool, an online text editor that only allows you to use the one thousand most common words of the English language. Here is my attempt to describe what I do.

I study how small pieces of rock get together, move along, and settle down to form beds. Water usually moves smaller pieces further than large ones but lots of small and big pieces together can move really fast and build thick beds on the water bottom. I also look at the form of these beds and think about how much space is left in between the small pieces. This space is filled with water and sometimes with other stuff that people like to burn in their cars. It is nice that such beds are often seen in beautiful places around the world that I like to visit.

Here is the link to the original upgoerfive post. You can try it out yourself over here.

Camouflage in salt-and-pepper sediment, or how to hide on tropical beaches with nearby volcanoes

Here are three photos that demonstrate how well some animals ‘know’ the composition and grain size of the sediment of their homeland. In all three cases, the salt-and-pepper sand of the background is the result of dark-colored grains of volcanic origin mixing with light-colored fragments of corals and seashells.

Ghost crab on Big Island, Hawaii

Ghost crabs are common on many beaches around the world; below is another one from Costa Rica. Note how well the crabs capture the different grain sizes and sediment colors of their respective beaches.

Ghost crab near Playa Conchal, Guanacaste, Costa Rica

The third example takes us back to Hawaii, under water, where I managed to get this shot of a flounder while snorkeling in Kona. The only reason I saw this guy was that I got really close and part of the seafloor, which later turned out to be the flatfish, unexpectedly took off.

Flowery flounder (Bothus mancus), Kahalu'u Beach Park, Big Island, Hawaii