One of my best friends passed away today

It is with enormous and deep sorrow that I write that one of my best friends in the world passed away today as a result of hemangiosarcoma. This is where the spleen becomes cancerous, which eventually results in fatal hemorrhaging. There is virtually no treatment for this. She would likely have passed away of her own accord within 24 hours. We didn’t know whether she was in pain or not; she rarely complained about anything, but we suspected she was. She looked absolutely miserable. We made the incredibly difficult decision to put her to sleep instead of letting her suffer for another 24 hours. An incredibly sad day for all the family.

We got her 12.5 years ago as a puppy, full of life. We named her Rhody after the blossoming rhododendrons in our garden. She was a black Labrador, the sweetest dog you could imagine. She would wait outside for me to come home from work, and if she was in the house she’d listen for the sound of the bus that brought me home. When I entered the house she was almost always the first to greet me. She had an uncanny skill for measuring time. Her afternoon meal was at 4pm and somehow she knew when that time came. She would come to me or my wife, looking at us in anticipation of her late lunch, often at exactly 4pm. The same thing would happen at 8pm for her evening snack.

After coming back from a walk with my wife she would rush (and I mean rush) downstairs to the basement to say hello to me. Like all Labradors she was obsessed with fetching balls and sticks and loved getting into water. She also knew how to relax, and in the evening would lie on her back, legs up. I believe we gave her a happy life; she certainly gave much joy and love in return. She would be ecstatic when we went on trips to cabins we’d rent. We always picked a place where there was a river, because jumping into rivers was by far her absolute favorite pastime. It’s difficult to be sure, but she seemed to say thank you for the things we gave her. This was especially the case when I filled her dish with food. She would first walk back from her dish, circle me with a wagging tail while looking at me, and return to her food. She also had a thing where she would smile at us, probably mimicking what we did. When my wife and kids were away on a trip and I was on my own I would let her upstairs to sleep in with me on her own bed. She would be so excited to do that. Some say Labradors aren’t the sharpest of dogs, but I can tell you she was a genius when it came to food. She had a wide range of words she knew, including: rhody, ball, stick, get, fetch, sleep, bed, stay, wait, sit, lie, paw (i.e. give me your paw), go pee, let’s go (meaning run), leash, walk, good girl, bad girl, dish, treat, squirrel, cat, rat, water, toy, car (she loved riding in the car), post (which actually meant let’s check out the fig tree on the road by our neighbor’s house next to the postbox), and what’s this (i.e. check this out). She had a thing about figs. We also have a fig tree in the garden, and as the figs developed she would inspect them almost daily to see if they were ripe. Once they were ripe, they would be gone. At Christmas we always had a wrapped present for her (a big chewy bone).

This morning at 7.15am (28 Sept), we heard a loud falling sound. Rhody had collapsed; her legs had given way. We rushed downstairs. She was lying flat out at the bottom of the stairs, where she usually waited for us to come down in the morning. She couldn’t move and was clearly in a lot of distress. We took her to the vet, her ailment was diagnosed, and we faced the abyss of having to lose her. It was an incredibly traumatic moment. You do not want to go through what we went through. It was unbearable. She was still conscious, her faculties were intact, but her body was failing her and there was nothing that could be done about it. Imagine having to make the decision to let her go. For my wife and myself, this was the hardest and saddest decision we have ever made.

Rhody, we will miss you so deeply. May you rest in peace, our best friend and most loyal companion. Farewell.

Posted in General Science Interest

Generating log ranges in Python

This is a very short blog post on how to generate logarithmic ranges in Python:

import numpy as np

# Brute force
x = [0.1, 1, 10, 100, 1000]

# Classic explicit loop
x = 0.1
for i in range(5):
    print(x)
    x = x*10

# List comprehension, more compact
x = [10**i for i in range(-1, 4)]
print(x)

# Lazy evaluation with a generator (saves memory for very long ranges)
for x in (10**i for i in range(-1, 4)):
    print(x)

# Using the built-in function in numpy
x = np.logspace(-1, 3, num=5)
print(x)
Posted in Programming, Python, Software

Explaining the smallest chemical network that can display a Hopf bifurcation

Last year I wrote a short blog post on simulating the smallest chemical network that can display Hopf bifurcation oscillations. Here I want to revisit it with an eye to explaining why it oscillates. The paper in question is

Wilhelm, Thomas, and Reinhart Heinrich. “Smallest chemical reaction system with Hopf bifurcation.” Journal of mathematical chemistry 17.1 (1995): 1-14.

In that blog post I showed the network figure given in the paper (not reproduced here).

The question is how does this network generate oscillations? In order to answer this question, we must first redraw the network. I’m going to make two changes to the figure.

You’ll notice that the reaction X + Y -> Y consumes and regenerates Y, so Y doesn’t actually change in concentration as a result of this reaction. Instead, we can treat Y as an activator of the reaction. In the paper the rate law for this reaction was k*X*Y; we leave this as is but change the reaction to X -> waste. This doesn’t alter the dynamics at all, but it lets us reinterpret the reaction as one activated by Y without consuming Y.

For the reaction X + Y -> Y, we can write the equivalent form

X -> waste; activated by Y: v = k*X*Y

The other interesting reaction is X + A -> X + X. This is what is called an autocatalytic reaction, that is, X stimulates its own production, and this is key to the origin of the oscillations. In the diagram we can replace this reaction with X activating itself, in other words a positive feedback loop. On its own this reaction has one steady state, at X equal to zero. If X is not zero, the concentration of X tends to infinity as time goes to infinity.

With these changes, we can redraw the network with X activating itself and Y activating the degradation of X, keeping the reaction numbers the same as those in the original figure (redrawn figure not reproduced here).

In the new drawing, we can see a positive feedback loop in blue formed from the X -> X + X reaction and a delayed negative feedback loop in red that goes from X to reaction 2 via reactions 4 and 5. The negative feedback loop is negative with respect to X because increases in X will result in activation of reaction 2 resulting in a higher degradation rate of X. This is the classic structure for a relaxation oscillator, a positive feedback coupled with a negative feedback loop that causes X to turn on and off repeatedly. Let’s make a smaller version of the positive feedback unit using reactions 1, 2, and 4:

X -> X + X; k1*X
X -> Ao;  k2*Y*X
X -> Z; k4*X

The rate of change of X is given by:

dx/dt = k1 X - k2 X Y - k4 X

We can see that the rate of change of X can be positive or negative depending on the values of X and Y. At low Y the rate of change of X is positive, but at high enough Y it turns negative. In other words, the system can be switched between an unstable and a stable regime by setting Y. If we set k2 = 1 and k4 = 1, the crossover point from unstable to stable is at Y = k1 - 1. In the model k1 = 4, therefore the rate of change of X switches sign when the level of Y reaches 3. See the time course plot below.

When X and Y are small the network is in an unstable state and X rises, this causes Y to rise but with a delay due to having to go through Z. However once Y reaches a threshold dictated by k1, k2, and k4, the rate of change of X goes negative and the system enters a stable regime. As X drops, so does Y which means the threshold is passed again but in the opposite direction and the system switches from a stable to an unstable regime. This loop continues indefinitely resulting in oscillations.
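This switching story is easy to check numerically. The sketch below integrates the redrawn network with scipy: reactions 1, 2, and 4 as given above, plus the Z -> Y step, and an assumed first-order removal of Y; k1 = 4 as in the post, while all other rate constants are assumptions set to 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# k1 = 4 as in the post; the remaining rate constants are assumed to be 1
k1, k2, k3, k4, k5 = 4.0, 1.0, 1.0, 1.0, 1.0

def model(t, s):
    X, Z, Y = s
    dX = k1*X - k2*X*Y - k4*X   # autocatalysis minus Y-activated and basal removal
    dZ = k4*X - k5*Z            # reaction 4: X -> Z
    dY = k5*Z - k3*Y            # Z -> Y, with an assumed first-order loss of Y
    return [dX, dZ, dY]

sol = solve_ivp(model, [0, 20], [1.0, 0.5, 0.5], method="LSODA", max_step=0.05)
X, Z, Y = sol.y
print(Y.max())   # Y overshoots the threshold of 3, switching X off
```

X grows while Y is below 3, Y overshoots the threshold with a delay, X then collapses, and the cycle repeats.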


Posted in General Science Interest, Pathways, Systems Theory

Simple Stochastic Code in Python

It’s been a while since I blogged (grant writing etc. gets in the way of actual thinking and doing), so here is a quick post that uses Python to model a simple reaction A -> B using the Gillespie next reaction method.

I know importing pylab is discouraged, but I can never remember how to import matplotlib; pylab is an easy way to get it. It also imports scipy and numpy, though I import numpy separately.

I use lists instead of arrays to accumulate the results because they are faster to extend as the data comes in. Note that with a Gillespie algorithm you don’t know beforehand how many data points you’ll get.
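Here is a minimal sketch of the idea for A -> B; the rate constant, initial molecule count, and random seed are made-up values, and pylab.plot(ts, As) would plot the resulting trajectory:

```python
import numpy as np

np.random.seed(1)   # hypothetical seed, just for reproducibility
k = 0.1             # assumed first-order rate constant for A -> B
A = 100             # assumed initial number of A molecules
t = 0.0

# Lists, because we don't know beforehand how many points we'll get
ts, As = [t], [A]
while A > 0:
    a = k * A                             # propensity of the single reaction
    t += np.random.exponential(1.0 / a)   # waiting time to the next firing
    A -= 1                                # one A molecule converts to B
    ts.append(t)
    As.append(A)
```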

I’ll also make a shameless plug for my book Enzyme Kinetics for Systems Biology, which explains the algorithm.

Posted in General Science Interest, Modeling, Programming, Python, Software

New Textbook on Metabolic Control Analysis

New Textbook Published

I am pleased to announce the publication of my new textbook:

Introduction to Metabolic Control Analysis


“This book is an introduction to control in biochemical pathways. It introduces students to some of the most important concepts in modern metabolic control. It covers the basics of metabolic control analysis that helps us think about how biochemical networks operate. The book should be suitable for undergraduates in their early to mid years at college.”


Available at Amazon for $49.95, or directly from me for only $29.95.


The book is printed in full color, with 275 pages, 118 illustrations, and 71 exercises.


1. Traditional Concepts in Metabolic Regulation
2. Elasticities
3. Introduction to Biochemical Control
4. Linking the Parts to the Whole
5. Experimental Methods
6. Linear Pathways
7. Branched and Cyclic Systems
8. Negative Feedback
9. Stability
10. Stability of Negative Feedback Systems
11. Moiety Conserved Cycles
12. Moiety Conserved Cycles

Appendix A: List of Symbols and Abbreviations
Appendix B: Control Equations

Posted in Metabolic Control Analysis, Modeling, Pathways, Publishing, Systems Theory, Textbooks

Repeatable, Reproducible [and Replicable]

There appears to be great confusion in the scientific and social science communities about the meaning of words related to certain aspects of the scientific method. The arXiv paper by Lorena Barba, “Terminologies for Reproducible Research”, highlights the confused state that has emerged over the last 20 years. The words in question are repeatable, reproducible, and replicable.

I will dispense with replication, simply because it’s too hard to say quickly (especially replicability), but see below for a more serious reason. The contention centers on the meaning of reproducibility, or to reproduce. As Barba points out, there are at least three ‘camps’ in this community, which she labels A, B1, and B2. The A camp makes no distinction, so we’ll forget about those. To describe B1 and B2 we must look at two extreme scenarios with respect to an experiment:

1. An experiment is carried out and is done again by the same author, using the same equipment, same methods, basically the same everything.

2. The experiment is carried out by a third party using different equipment, different methods, etc. Basically, everything is different.

In between these two extremes are variants. For example, the third party could use the same methods but implement them independently of the original author, usually by reading the description given in the original published paper.

Given these descriptions, the B1 group calls the first scenario ‘to reproduce the experiment’, while the second group, B2, calls the first scenario ‘to replicate the experiment’, and there lies the contention.

Personally, I don’t like either of these terms as used here. As I mentioned before, replicability is a hard word to say. But not only that: from a dictionary perspective, it means the same thing as reproducibility. The Oxford English Dictionary describes replicability as “The quality of being able to be exactly copied or reproduced.” So why use two words, for two quite different things, when the two words have essentially the same meaning?

My personal choice is the following pair of words. Rather than use the word replicability, which seems redundant, I choose repeatability, hence:

Repeatability: means ‘to repeat the experiment again’; the word implies that the experiment was done exactly as before (Scenario 1).

Reproducibility: means ‘to recreate the experiment anew’; reproduce implies creating a new thing, independently of the old (Scenario 2).

Of course one can get much more fine-grained, especially when it comes to computational experiments. But the fine graining can be included as levels within the class reproducibility.

Other than a change in wording from replicability to repeatability, I appear to belong to camp B2. I should list others in camp B2: these include FASEB, NIST, Six Sigma, the ACM, Wikipedia, and the Physiome Project. I am sure there are others. For example, the ACM writes:

Repeatability (Same team, same experimental setup)

The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.

Reproducibility (Different team, different experimental setup)

The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.

These are essentially the same definitions as the ones I gave above.

The National Academies recently studied this issue closely and is soon coming out with a report. It is apparently in favor of the more confusing option.

Posted in General Science Interest, Publishing

How to do a simple parameter scan using Tellurium

A common task in modeling is to see how a parameter influences a model’s dynamics. For example, consider a simple two reaction pathway:

-> S1 ->

where the first reaction has a fixed input rate vo and the second reaction a first-order rate law k1*S1. The task is to investigate how the time course of S1 is influenced by vo.

The script below defines the model, then changes vo in increments and plots the effect on the pathway via a time course simulation. To do the parameter scan we exploit plotArray: we initially prevent the plots from being shown using show=False, make sure that each plot gets a different color using resetColorCycle=False, and finally display everything using show(). To make things more interesting we also add a legend entry for each plot.

Note we call reset each time we run a simulation to ensure that S1 is reset back to its initial condition.
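The scan logic can also be sketched in plain numpy using the closed-form solution of dS1/dt = vo - k1*S1; the vo values and k1 = 0.5 below are made up for illustration, and in Tellurium the plotting would go through plotArray as described above:

```python
import numpy as np

k1 = 0.5                      # assumed first-order rate constant
t = np.linspace(0, 10, 100)

results = {}
for vo in [1, 2, 3, 4, 5]:    # hypothetical scan values for the fixed input
    # 'reset': every scan value starts from the same initial condition S1(0) = 0
    # closed-form solution of dS1/dt = vo - k1*S1 with S1(0) = 0
    S1 = (vo / k1) * (1.0 - np.exp(-k1 * t))
    results[vo] = S1          # each curve would be one trace in the final plot
```

Each trajectory saturates at the steady state S1 = vo/k1, so the scanned curves level off at increasingly higher values.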

The following figure shows the resulting plot:

Thanks to Kiri Choi for pointing out how to use plotArray in this way.

Posted in General Science Interest, Modeling, Pathways, Programming, Python, Software, Tellurium

How to plot a grid of phase plots using Tellurium

Let’s say we have a chemical network model where the species oscillate and you’d like to plot every combination of these on a grid. If so, then this code might be of help.
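As a sketch of how such a grid can be assembled with matplotlib: the three sine waves below are stand-ins for the model’s oscillating species, and any simulation output with one column per species would slot in the same way:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")           # render off-screen
import matplotlib.pyplot as plt

# Stand-in oscillatory data: one entry per 'species'
t = np.linspace(0, 20, 500)
species = {"S1": np.sin(t), "S2": np.sin(t + 1.0), "S3": np.sin(2*t)}
names = list(species)
n = len(names)

fig, axes = plt.subplots(n, n, figsize=(8, 8))
for i, yname in enumerate(names):         # rows
    for j, xname in enumerate(names):     # columns (includes transpose combinations)
        ax = axes[i, j]
        ax.plot(species[xname], species[yname], lw=0.8)
        ax.set_xticks([])                 # remove internal ticks
        ax.set_yticks([])
        if i == n - 1:
            ax.set_xlabel(xname)          # column labels only on the bottom row
        if j == 0:
            ax.set_ylabel(yname)          # row labels only on the left column
fig.savefig("phase_grid.png")
```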

This code will generate a grid of phase plots (figure not reproduced here).

Here is an alternative that removes the internal ticks and labels and places the column and row labels along the outer edges. This is probably closer to what one might expect. This version plots all combinations, including the transpose combinations.

The following shows the steady-state oscillations. This was done by simulating the model twice and plotting only the second set of simulation results.

Posted in Enzyme Kinetics, Modeling, Pathways, Programming, Python, Software, Tellurium

A look at Euler’s number: e

I’ve never particularly liked the way e, Euler’s number, is introduced in textbooks. Most approaches give me a very limited intuitive feel for what e actually is. Modern textbooks appear to use one of four common ways to introduce e, and only one of them gives me a semblance of an intuitive feel for it:

1. Computing compound interest (I think this is one of the worst).
2. Integrating 1/x and noting that the area from 1 to e is one (amazing, but so what?).
3. Looking at the slope of a^x at x = 0; the slope is one when a = e (yes, ok, and…).
4. The one I like: starting with dy/dx = y, finding the power series solution, and then using this to define e.

Before continuing let’s state what e is numerically equal to:

e = 2.71828….

For normal algebra, all we need are addition and multiplication (subtraction and division are just alternative forms of these). For convenience we also introduce a power notation such as x^2 and x^n, but these are just short-hand for doing lots of multiplications at once. With the introduction of trigonometry, which involves relationships between the sides and angles of triangles, basic algebra becomes cumbersome because the new quantities involve infinite series. Rather than writing the series down all the time, we define short-cut names such as sine, cosine, etc. Calculus brings us another special type of series, which arises in the solution of differential equations. The short-cut notation for this series is e^x.

Note that in all these cases we are still only doing addition and multiplication. The functions sine and e^x are just short-hand for particular combinations of addition and multiplication that happen to be in the form of an infinite series.

Given that e pops up in the form of e^x when solving differential equations, I think this is the place to start. Let’s consider the simplest possible non-trivial differential equation:

    \[ \frac{dy}{dx} = y \]

This equation is saying something quite interesting: that the derivative of y is the same as y. What this means is that if we were to find a solution to this differential equation, the solution would also have to equal dy/dx. We can express the above equation in the form:

    \[ f'(x) = f(x) \]

this perhaps makes it more obvious that the derivative is the same as the function f(x). Can we find an actual function that is like this? The way to find this answer is to define f(x) as a general power series:

    \[ f(x) = a_o + a_1 x + a_2 x^2 + a_3 x^3 + \ldots + a_n x^n \]

We now differentiate this with respect to x to give:

    \[ f'(x) = a_1 + 2 a_2 x + 3 a_3 x^2 + \ldots + n a_n x^{n-1} \]

To find the function f(x) that also equals f'(x) we need to discover the values for the coefficients, a_o, a_1, etc. To make it easier, let’s decide that when x=0, the value of f(0) is one, i.e: f(0) = 1. This will mean that a_o = 1. What we’ll now do is match up the pairs of terms in f(x) and f'(x), that is set: a_o = a_1, a_1 x = 2 a_2 x, a_2 x^2 = 3 a_3 x^2 and so on.

This allows us to state that: a_1 = 1, a_2 = 1/2, a_3 = a_2/3, a_4 = a_3/4 and so on. Working back by substituting a_3 into the a_4 equation and a_2 into the a_3 equation, and so on, leads to the result that:

    \[ a_n = \frac{1}{n!} \]

We therefore conclude that the equation, f(x) that satisfies f(x) = f'(x) is:

    \[ f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots \]

You can easily check by differentiating f(x): you’ll get f(x) again. The solution to dy/dx = y must therefore also be:

    \[ y = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots \qquad\qquad (1) \]

Not to labor the point too much, but differentiate y and you’ll get the equivalence dy/dx = y.

Let us define the value of the series when x = 1 to be:

    \[ f(1) = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \ldots + \frac{1}{n!} + \ldots \]

For convenience we will call this value e:

    \[ e = f(1) = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \ldots + \frac{1}{n!} + \ldots \]
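The convergence is easy to check numerically; twenty terms of the series already pin down e to machine precision:

```python
import math

# Partial sum of 1/0! + 1/1! + 1/2! + ... (note 1/0! = 1/1! = 1 gives the leading 1 + 1)
s = sum(1.0 / math.factorial(n) for n in range(20))
print(s, math.e)   # the two agree to about 15 significant digits
```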

The next question is, if f(1) = e then what does f(x) equal?

I’m going to make a jump here and propose that f(x) = e^x. From first principles we can work out the derivative of e^x, and remarkably it is e^x! You’ll find a proof at the excellent Paul’s Online Math Notes.

Let’s now use the Maclaurin series to find an approximation to e^x. The Maclaurin series is a Taylor series centered on zero, that is:

    \[ f(x)=f(0)+xf'(0)+\frac{x^2}{2 !}f''(0)+...+\frac{x^n}{n !}f^{(n)}(0)+...\]

Since the derivative of e^x is e^x, as are the second, third, fourth derivatives and so on, and noting that e^x at x = 0 is one, we can write:

    \[ f(x) = 1 + 1 \cdot x + \frac{1 \cdot x^2}{2} + \frac{1 \cdot x^3}{6} + \ldots = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots = \sum_{n=0}^{\infty} \frac{x^n}{n!} \]

But this is just the solution to dy/dx = y (see equation (1)), therefore we conclude that:

    \[ y = e^x \]

It is also worth noting that the derivative for the general exponential a^x is given by:

    \[ \frac{d a^x}{dx} = a^x \log (a) \]

In other words, the derivative of e^x is the special case where \log (a) = 1. One reason why e^x is special is that it’s the purest exponential, in the sense that its derivative has no scaling factor. It’s the canonical exponential: the only exponential function where the function and its derivative are identical.

What else can we say about e^x? What about its rate of increase? We know that the rate of increase is itself e^x, but how fast is that? One way to look at this is to derive the relative increase in e^x. The relative growth rate is defined by:

    \[ (dy/y)/(dx/x) = \frac{dy}{dx} \frac{x}{y} \]

Given this definition, let’s look first at how fast a^x is increasing by computing

    \[ \frac{d a^x}{dx} \frac{x}{a^x} = a^x \log (a) \frac{x}{a^x} = \log (a) x \]

If we now do the same for e^x we find:

    \[ \frac{d e^x}{dx} \frac{x}{e^x} = e^x \log (e) \frac{x}{e^x} = x \]

In other words, e^x is the only exponential function whose relative increase equals x. For example, if x = 1, a 1% increase in x leads to a 1% increase in e^x, while if x = 10, a 1% increase in x leads to a 10% increase in e^x. This is the characteristic of exponential functions: they increase at an ever-increasing rate. Contrast this with a power term such as x^2. If we compute the relative increase for x^2, we find it is fixed at 2:

    \[ \frac{d x^2}{dx} \frac{x}{x^2} = 2 x\frac{x}{x^2} = 2 \]

The growth of a bacterial colony follows the exponential pattern: the colony increases by a fixed percentage per unit of time.
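These relative-increase results can be confirmed numerically with a central-difference approximation of the derivative (the helper function and step size below are arbitrary choices, not from the original derivation):

```python
import math

def relative_increase(f, x, h=1e-6):
    # (dy/y)/(dx/x) = f'(x) * x / f(x), with f'(x) from a central difference
    dfdx = (f(x + h) - f(x - h)) / (2.0 * h)
    return dfdx * x / f(x)

print(relative_increase(math.exp, 10.0))         # close to 10 for e^x
print(relative_increase(lambda x: x**2, 10.0))   # close to 2 for x^2
```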

Posted in General Interest, Math

Smallest Chemical Reaction System that is Bistable

A while back Thomas Wilhelm published a paper that described the smallest chemical network that can display bistability. The paper that describes this result is:

Wilhelm, T. (2009). The smallest chemical reaction system with bistability. BMC systems biology, 3(1), 90.

This is a diagram of the network generated using pathwayDesigner:

Here is a Tellurium script that uses Antimony to define the model (Note that $P means that species P is fixed). The S term in the first reaction is supposed to represent an input signal.
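As a stand-in for the Antimony listing, here is a plain scipy sketch of the system. The reaction scheme and rate constants (k1 = 8, k2 = 1, k3 = 1, k4 = 1.5) are my reconstruction from the paper and should be treated as assumptions, though they are consistent with the features described below (a turning point near S = 0.8, an unstable branch approaching X = 1.5 and Y near 0.0003):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reconstructed reaction scheme (treat as an assumption):
#   R1: S + Y -> 2X    (k1*S*Y)
#   R2: 2X -> X + Y    (k2*X^2)
#   R3: X + Y -> Y + P (k3*X*Y)
#   R4: X -> P         (k4*X)
k1, k2, k3, k4 = 8.0, 1.0, 1.0, 1.5
S = 1.2   # input signal above the turning point, so two stable states coexist

def model(t, s):
    X, Y = s
    v1, v2, v3, v4 = k1*S*Y, k2*X*X, k3*X*Y, k4*X
    return [2*v1 - v2 - v3 - v4,   # dX/dt
            v2 - v1]               # dY/dt

# Same model, two initial conditions, two different steady states
lo = solve_ivp(model, [0, 50], [0.1, 0.0]).y[0][-1]
hi = solve_ivp(model, [0, 50], [5.0, 5.0]).y[0][-1]
print(lo, hi)
```

Starting below the unstable branch the system falls back to the zero state; starting above it the system climbs to the upper stable branch.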

Using the auto2000 extension to roadrunner, we can plot the bifurcation diagram for this system as a function of the signal S. If S is below about 0.8 only one stable steady state exists, with both X and Y at zero. Above S = 0.8 we see three steady states emerge.

In the region with three steady states, one is at zero concentration; it is stable but not shown in the plot. Another is marked by the nearly horizontal line at roughly 1.5 on the y-axis, and it is unstable. The third is represented by the line that moves up from the turning point at about S = 0.8; this steady state is stable. The unstable branch appears to asymptotically approach a limiting value at high S: 1.5 for X and approximately 0.00028 for Y.

The paper also describes what happens when we add a fixed input flux to X at a rate of 0.6. This can be simply done by adding the line J5: ->X; 0.6; to the model. This change results in a more classical look for the bifurcation plot as shown below:

The Tellurium script for generating these bifurcation plots accompanied the original post (listing not reproduced here).

Posted in Modeling, Pathways, Python, SBML, Software, Systems Theory, Tellurium