Mathpix Snip

I came across this amazing tool that can convert images of math equations into LaTeX format. The tool can be found at https://mathpix.com/. I’ve tried it on a number of texts, including some that are not so clear, and it does a fantastic job of converting them to LaTeX. Here is a screenshot showing part of a page from Paul’s Online Notes:

[Image: screenshot of a section from Paul’s Online Notes on the derivative of the sine function]

The way it works is that you select the screen icon in the Mathpix tool, the entire screen goes black and white, and then you draw a box around the section you want to convert; that’s it. In this case, it generates the following LaTeX:

We’ll start with finding the derivative of the sine function. To do this we will need to use the definition of the derivative. It’s been a while since we’ve had to use this, but sometimes there just isn’t anything we can do about it. Here is the definition of the derivative for the sine function.
$$
\frac{d}{d x}(\sin (x))=\lim _{h \rightarrow 0} \frac{\sin (x+h)-\sin (x)}{h}
$$
since we can’t just plug in $h=0$ to evaluate the limit we will need to use the following trig formula on the first sine in the numerator.
$$
\sin (x+h)=\sin (x) \cos (h)+\cos (x) \sin (h)
$$
Doing this gives us,
$$
\begin{aligned}
\frac{d}{d x}(\sin (x)) &=\lim _{h \rightarrow 0} \frac{\sin (x) \cos (h)+\cos (x) \sin (h)-\sin (x)}{h} \\
&=\lim _{h \rightarrow 0} \frac{\sin (x)(\cos (h)-1)+\cos (x) \sin (h)}{h} \\
&=\lim _{h \rightarrow 0} \sin (x) \frac{\cos (h)-1}{h}+\lim _{h \rightarrow 0} \cos (x) \frac{\sin (h)}{h}
\end{aligned}
$$
As you can see upon using the trig formula we can combine the first and third term and then factor a sine out of that. We can then break up the fraction into two pieces, both of which can be dealt with separately.

When processed by LaTeX, this becomes:

[Image: the converted text rendered by LaTeX]

This is rendered inside the Mathpix tool, and you’ll notice there isn’t a significant difference between the original and the converted image. I’ve converted some fairly rough images and it generally succeeds. It also gives you a confidence level on how well it thinks it’s done. It took under a second to generate the LaTeX.

As a harder test, I decided to attempt to translate a page from Jim Burns’ thesis. This is a thesis from the 1970s that was typed, and the equations are a combination of typed characters and hand-drawn symbols. The following image shows page 93, which is part of the proof of the connectivity theorem.

[Image: page 93 of the thesis]

And here is the image analyzed by Mathpix. Remarkably, the conversion is almost perfect; the equations, in particular, are translated almost without error, even getting the subscripts on the subscripts correct. It got the delta F1 wrong at the start, and it interpreted a mark on the paper as an apostrophe. I tried other pages that included derivatives and these converted without incident.

[Image: the same page analyzed by Mathpix]

Posted in General Interest, General Science Interest, LaTeX, Metabolic Control Analysis, Publishing, Software

Front-Loading of Flux Control

There is an interesting property that straight chain pathways have which I call front-loading. The phenomenon has been known for some time and I describe it in detail in my recent textbook on Metabolic Control Analysis. Let’s say we have a straight chain of reactions, something like:

    \[ X_o \rightleftharpoons S_1  \rightleftharpoons S_2 \rightleftharpoons \cdots \rightleftharpoons  S_n \rightleftharpoons X_1 \]

There are no long-range feedback loops (other than short-range product inhibition), and it is assumed that, for a given reaction, increases in substrate cause the reaction rate to increase and increases in product cause it to decrease. Each reaction is reversible and uses a simple reversible mass-action rate law: $k_1 (S - P/K_{eq})$. I assume that $X_o$ and $X_1$ are fixed species. Given suitable rate constants and equilibrium constants for each reaction, the system will admit a steady state. A question to ask is: what is the distribution of flux control across the chain? There are two ways to answer this: we can use theory, which tells us what the distribution is and gives insight into why, or we can run simulations, which will just tell us what the distribution is likely to be. For now let’s do a simulation.

The code is shown below. It defines a four-step straight chain and runs 10,000 simulations, randomizing the rate constants for each simulation. For each simulation it computes the flux control coefficient of each step, stores this information, and at the end plots histograms of the distributions of the flux control coefficients. Scroll down to see the plots.
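What follows is a minimal Tellurium sketch of this procedure rather than the original listing: the chain uses reversible mass-action rate laws of the form k(S - P/Keq), and the parameter ranges, the common equilibrium constant q, and the use of getCC for the scaled flux control coefficients are illustrative assumptions.

import tellurium as te
import numpy as np
import random
import pylab as plt

# Four-step straight chain with reversible mass-action rate laws (a sketch,
# not the original code); q is a common equilibrium constant used for illustration
r = te.loada('''
    J1: $Xo -> S1;  k1*(Xo - S1/q);
    J2:  S1 -> S2;  k2*(S1 - S2/q);
    J3:  S2 -> S3;  k3*(S2 - S3/q);
    J4:  S3 -> $X1; k4*(S3 - X1/q);

    Xo = 10; X1 = 0;
    k1 = 1; k2 = 1; k3 = 1; k4 = 1;
    q = 4;
''')

nSamples = 10000              # reduce this for a quicker run
steps = ['k1', 'k2', 'k3', 'k4']
C = []                        # one row of four flux control coefficients per sample
for i in range(nSamples):
    r.reset()
    for k in steps:
        r.setValue(k, random.uniform(0.1, 10))   # randomize the rate constants
    try:
        r.steadyState()
        # scaled control coefficients of the pathway flux with respect to each step;
        # for this model they should sum to (approximately) one
        C.append([r.getCC('J1', k) for k in steps])
    except Exception:
        pass                  # skip parameter sets where the steady-state solver fails

C = np.array(C)
print('Mean flux control coefficients:', C.mean(axis=0))

for j in range(4):
    plt.hist(C[:, j], bins=50, histtype='step', label='step ' + str(j + 1))
plt.xlabel('Flux control coefficient')
plt.ylabel('Frequency')
plt.legend()
plt.show()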

The average values of the flux control coefficients are shown in the histogram:

[Image: histogram of the average flux control coefficients]

The distribution of flux control is shown below. The vertical axis is the frequency and the x-axis is the value of the flux control coefficient, from zero to 1.0. The red curve corresponds to the flux control of the first step; on average this has the highest control. The yellow distribution corresponds to the last step of the pathway; you can see it is least likely to have any significant flux control. The conclusion is that, given a straight chain with a random set of parameters, on average the first step will have the highest control, with control progressively decreasing as we work our way down the pathway. Why is this? The explanation is a thermodynamic one, which in turn boils down to how easy it is for a perturbation to travel up and down the chain. So long as the thermodynamic gradient is from left to right, it is easier for perturbations to propagate downstream than upstream. Since flux control is really a measure of how much effect a perturbation has on the steady-state flux, the more easily a perturbation can travel, the higher the flux control.

This is not to say that it isn’t possible for downstream steps to have high flux control; it’s just that few parameter combinations will achieve that.

[Image: frequency distributions of the flux control coefficients]

Posted in Enzyme Kinetics, Metabolic Control Analysis, Modeling, Pathways, Programming, Python, Systems Theory, Tellurium

Stream Plots Using Tellurium

Ever wanted to do a phase plot for a toggle switch or any other two-dimensional model? Here is an example of a stream plot for an SBML model using Tellurium, based on a simple toggle switch.
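Here is a minimal sketch of how such a stream plot can be put together with Tellurium and matplotlib. The two-gene toggle-switch model, its parameter values, and the grid ranges below are illustrative stand-ins, not the model behind the original figure.

import tellurium as te
import numpy as np
import random
import pylab as plt

# Illustrative mutual-repression toggle switch (not the original model)
r = te.loada('''
    J1:  -> x1;  k1/(1 + x2^n);
    J2: x1 -> ;  d1*x1;
    J3:  -> x2;  k2/(1 + x1^n);
    J4: x2 -> ;  d2*x2;

    k1 = 10; k2 = 10; d1 = 1; d2 = 1; n = 2;
    x1 = 1; x2 = 1;
''')

ids = r.getFloatingSpeciesIds()
i1, i2 = ids.index('x1'), ids.index('x2')

# Evaluate dx1/dt and dx2/dt over a grid of concentrations
xs = np.linspace(0, 12, 30)
ys = np.linspace(0, 12, 30)
X, Y = np.meshgrid(xs, ys)
U = np.zeros_like(X)
V = np.zeros_like(Y)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        r.x1 = X[i, j]
        r.x2 = Y[i, j]
        rates = r.getRatesOfChange()   # rates of change of the floating species
        U[i, j] = rates[i1]
        V[i, j] = rates[i2]

plt.streamplot(X, Y, U, V, color='gray')

# Mark the steady states found from a handful of random starting points
found = []
for trial in range(20):
    r.x1 = random.uniform(0, 12)
    r.x2 = random.uniform(0, 12)
    try:
        r.steadyState()
        pt = (round(r.x1, 2), round(r.x2, 2))
        if pt not in found:
            found.append(pt)
    except Exception:
        pass
for (a, b) in found:
    plt.plot(a, b, 'o')

plt.xlabel('x1')
plt.ylabel('x2')
plt.show()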

The colored circles mark the three steady states. The green circle in the middle is a saddle point, which is unstable (unless you’re exactly on the saddle ridge), and the two on either side are stable nodes.

Posted in Modeling, Pathways, Python, Systems Theory, Tellurium

Marsaglia’s MWC (multiply with carry) Random Number Generator for Object Pascal

In the past, I’ve tended to use the Mersenne Twister for generating pseudo-random numbers. This is a widely used generator because it has a very long period of $2^{19937} - 1$. The one issue is that it’s not the fastest generator, which becomes a problem when drawing many random numbers, for example when implementing the Gillespie direct method. A much faster alternative is the multiply-with-carry (MWC) random number generator. This uses much simpler arithmetic, which means it’s faster. The algorithm can be described in three lines:

    \[  m_z = 36969 * (m_z \text{ and } 65535) + (m_z >> 16) \]

    \[ m_w = 18000 * (m_w \text{ and } 65535) + (m_w >> 16) \]

    \[ \text{random number} = (m_z << 16) + m_w \]

The symbol >> means shift the bits to the right by the given number of places, and << similarly shifts the bits to the left. In Object Pascal, these operations are written shr and shl respectively. The generator needs two values, m_z and m_w, to be initialized. I’ve recently implemented the Gillespie method in three languages: C, Go, and Object Pascal. Object Pascal has good support for the Mersenne Twister but I had to create my own MWC generator. The code below shows the Object Pascal implementation:
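For readers who don’t use Pascal, here is the same recurrence sketched in Python. This is not the original listing; the class wrapper and the default seed values are my own choices.

# Sketch of the MWC recurrence described above; the masking with 0xFFFFFFFF
# reproduces the 32-bit wrap-around that the Pascal/C versions get for free.
class MWC:
    def __init__(self, m_z=362436069, m_w=521288629):
        # m_z and m_w must be seeded with non-zero values
        self.m_z = m_z
        self.m_w = m_w

    def next_uint32(self):
        # 'and 65535' keeps the low 16 bits; >> 16 is shr, << 16 is shl
        self.m_z = (36969 * (self.m_z & 65535) + (self.m_z >> 16)) & 0xFFFFFFFF
        self.m_w = (18000 * (self.m_w & 65535) + (self.m_w >> 16)) & 0xFFFFFFFF
        return ((self.m_z << 16) + self.m_w) & 0xFFFFFFFF

    def next_float(self):
        # uniform value in [0, 1)
        return self.next_uint32() / 4294967296.0

rng = MWC()
print([rng.next_float() for _ in range(5)])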

What about some tests? There are random number generator test suites such as TestU01 and Dieharder, but they are written to run on Linux and I didn’t have time to set something up on Windows, plus the usage docs are not that great. Instead I did three simple sanity tests. The simplest is to check that the mean value of a sample of random numbers is 0.5. Another test is to make sure that, say, out of 10,000 random numbers, 10% are between 0 and 0.1, and so on. Ideally there should be statistical tests to check that these values are within certain tolerances.

The third test is a visual one where I generate 32768 random numbers and map them onto a bitmap. Each random number is 32 bits long and each bit represents a pixel in the bitmap, so 32768 numbers fill a 1024 by 1024 pixel bitmap, with 32 numbers occupying each row. I wrote a simple Delphi console application that calls the random number generator and creates the bitmap. The main part of this code is shown below. For comparison I also implemented the notorious RANDU generator (which should NOT be used) as well as Delphi’s built-in generator. The approach I take isn’t very sophisticated and I am sure it could be made shorter, but for each random number I generate, I convert it into a string of 1s and 0s. I then work my way through the string setting pixels in the bitmap, doing this 32 times for each row.
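As a stand-in for the Delphi console application, here is a rough Python equivalent of the visual test; it reuses the MWC class sketched above and displays the bitmap with matplotlib. The array sizes follow the description above; everything else is my own choice.

import numpy as np
import pylab as plt

# Visual test: 32768 random numbers x 32 bits = 1,048,576 pixels,
# i.e. a 1024 x 1024 bitmap with 32 numbers per row
rng = MWC()                       # the MWC class from the previous sketch
bits = np.zeros((1024, 1024), dtype=np.uint8)
for row in range(1024):
    for block in range(32):       # 32 numbers per row
        value = rng.next_uint32()
        for b in range(32):       # one pixel per bit, most significant bit first
            bits[row, block * 32 + b] = (value >> (31 - b)) & 1

plt.imshow(bits, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.show()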

The results of running this code are below. First RANDU, to show you how bad it is; note the patterns and lines in the bitmap:

[Image: bitmap generated from RANDU output]

The next figure shows the results from Delphi’s built-in random number generator, which isn’t so great either:

[Image: bitmap generated from Delphi’s built-in generator]

Finally, the MWC algorithm; as you can see from the image, the distribution of pixels shows no obvious pattern:

[Image: bitmap generated from the MWC generator]

Posted in General Interest, General Science Interest, Math, Programming, Software

A simple bistable system

Here is a simple bistable system I sometimes use in class. I like it because it’s easy to explain plus the addition of one extra reaction turns it into a relaxation oscillator.

import tellurium as te
import random
import pylab as plt
import numpy as np

# ---- BISTABLE SYSTEM -----

r = te.loada ('''
     $Xo -> S1;  0.5 + Vmax*S1^n/(15 + S1^n);
      S1 -> ;    k1*S1;

     Xo = 1; S1 = 1;
     n = 4; Vmax = 10; k1 = 2;
''')

# Plot time courses from a range of starting values of S1 that straddle
# the unstable steady state
plt.xlabel ('Time')
plt.ylabel ('Concentration')
for S1 in np.arange (1.45, 1.55, 0.01):
    r.S1 = S1
    m = r.simulate (0, 5, 100)
    plt.plot (m['time'], m['[S1]'])
plt.show()

# Poor man's search for multiple steady states: start from random values
# of S1, solve for the steady state, and check the leading eigenvalue
r.steadyStateSolver.allow_presimulation = False
for i in range (10):
    r.S1 = random.random()*5
    r.steadyState()
    print ('S1 = ', r.S1, 'eigenvalue = ', r.getFullEigenValues()[0])

If you run the above code you’ll get:

S1 = 5.145227 eigenvalue = -1.84055
S1 = 5.145227 eigenvalue = -1.84055
S1 = 0.251329 eigenvalue = -1.95768
S1 = 5.145227 eigenvalue = -1.84055
S1 = 1.492258 eigenvalue = 3.004970
S1 = 5.145227 eigenvalue = -1.84055
S1 = 1.492258 eigenvalue = 3.004970
S1 = 5.145227 eigenvalue = -1.84055
S1 = 5.145227 eigenvalue = -1.84055
S1 = 0.251329 eigenvalue = -1.957684

The above output is what I call the poor man’s way of finding multiple steady states. I scan across the initial conditions, computing the steady state in each case along with the corresponding eigenvalue. (You can also just assign random values to the initial value of S1, which is better if you don’t know what starting conditions to use.) As you can see, it found three steady states: two stable (negative eigenvalue) and one unstable (S1 = 1.492, positive eigenvalue).

The plot produced by the first loop shows time-course trajectories for about 10 different starting points for S1. The two stable steady states are clearly evident.

Out of interest, here is the addition of an extra reaction at the front to turn it into a relaxation oscillator. You have to make a slight modification to the second reaction’s rate law so that it becomes a function of its reactant (which it wasn’t before):

      $Xo -> So;  k0*Xo;
      So -> S1;   k1*So + Vmax*So*S1^n/(15 + S1^n);
      S1 -> ;     k2*S1;

      Xo = 1; S1 = 1; k0 = 0.07; k1 = 0.04
      n = 4; Vmax = 6; k2 = 0.1;
Posted in Enzyme Kinetics, Modeling, Pathways, Python, Systems Theory, Tellurium

Simulate Regular Doses using Antimony/SBML in the Tellurium Python Tool

Someone recently asked how to create a model where a molecular species received a regular dose at a fixed interval of 12 hours.

They wanted something where a species A is being degraded but at regular intervals is increased by a fixed amount. See the plot below.

Lucian Smith and I came up with two possible solutions, which are shown below. The first solution, from Lucian, is to set up a variable called trigger that we define using the differential equation trigger' = 1 (note the apostrophe). The ODE ensures that trigger increases linearly during the simulation. We then set up an event that fires when trigger exceeds 12 hours, at which point trigger is set back to zero and we add the dose of value 3 to species A. trigger then starts to integrate again from zero and the cycle repeats.

import tellurium as te

r = te.loada("""
      A ->; k1*A;
      k1 = 0.1; A = 10
      
      trigger = 0
      trigger' = 1

      at (trigger>12): A = A+3, trigger=0
""")

m = r.simulate (0, 100, 100, ['time', 'A'])
r.plot()

The second solution is less elegant: it increases the trigger value each time the event occurs. In the code below, when time exceeds the current trigger value we add the dose and then increase trigger by 12 hours so that it can be triggered again.


r = te.loada("""
      A ->; k1*A;
      k1 = 0.1; A = 10
      
      trigger = 12
      at (time > trigger): A = A+3, trigger = trigger + 12
""")

m = r.simulate (0, 100, 100, ['time', 'A'])
r.plot()

Posted in Modeling, Pathways, Programming, Python, SBML, Software, Tellurium

One of my best friends passed away today

It is with enormous and deep sorrow that one of my best friends in the world passed away today as a result of hemangiosarcoma. This is where the spleen becomes cancerous, which eventually results in fatal hemorrhaging. There is virtually no treatment for this. She would likely have passed away of her own accord within 24 hours. We didn’t know whether she was in pain or not, she rarely complained about anything, but we suspected she was. She looked absolutely miserable. We made the incredibly difficult decision to put her to sleep instead of letting her suffer for another 24 hours. An incredibly sad day for all the family.

We got her 12.5 years ago as a puppy, full of life. We named her Rhody after the blossoming rhododendrons in our garden. She was a black Labrador, the sweetest dog you could imagine. She would wait for me outside to come home from work, and if she was in the house she’d listen for the sound of the bus that brought me home. When I entered the house she was almost always the first to greet me. She had an uncanny skill of measuring time. Her afternoon meal was at 4pm and somehow she knew when that time came. She would come to me or my wife looking at us in anticipation for the late lunch, often exactly at 4pm. The same thing would happen at 8pm for her evening snack.

After coming back from a walk with my wife she would rush (and I mean rush) downstairs to the basement to say hello to me. Like all Labradors she was obsessed with fetching balls and sticks and loved getting into water. She also knew how to relax and in the evening would lie on her back, legs up. I believe we gave her a happy life; she certainly gave much joy and love in return. She would be ecstatic when we went on trips to cabins we’d rent. We always picked a place where there was a river, because jumping into rivers was by far her absolute favorite pastime. It’s difficult to be sure, but she seemed to say thank you for the things we gave her. This was especially the case when I filled her dish with food. She would first walk back from her dish, circle me with a wagging tail while looking at me, and return to her food. She also had a thing where she would smile at us, probably mimicking what we did. When my wife and kids were away on a trip and I was on my own I would let her upstairs to sleep in with me on her own bed. She would be so excited to do that. Some say Labradors aren’t the sharpest of dogs, but I can tell you she was a genius when it came to food. She had a wide range of words she knew; these included: rhody, ball, stick, get, fetch, sleep, bed, stay, wait, sit, lie, paw (i.e. give me your paw), go pee, let’s go (meaning run), leash, walk, good girl, bad girl, dish, treat, squirrel, cat, rat, water, toy, car (she loved riding in the car), post (which actually meant let’s check out the fig tree on the road by our neighbor’s house next to the postbox), and what’s this (i.e. check this out). She had a thing about figs. We also have a fig tree in the garden and as the figs developed she would inspect them almost daily to see if they were ripe. Once they were ripe, they would be gone. At Christmas we always had a wrapped present for her (a big chewy bone).

This morning at 7.15am (28 Sept), we heard a loud falling sound. Rhody had collapsed; her legs had given way. We rushed downstairs, she was lying flat out at the bottom of the stairs where she usually waited for us to come downstairs in the morning. She couldn’t move and was clearly in a lot of distress. We took her to the vet, her ailment was diagnosed, and we faced the abyss of having to lose her. It was an incredibly traumatic moment. You do not want to go through what we went through. It was unbearable. She was still conscious, her faculties were intact but her body was failing her and there was nothing that could be done about it. Imagine having to make the decision to let her go. For my wife and myself, this was the hardest and saddest decision we have ever made.

Rhody, we will miss you so deeply. May you rest in peace, our best friend and most loyal companion. Farewell.

Posted in General Science Interest

Generating log ranges in Python

This is a very short post on how to generate logarithmic ranges in Python:

import numpy as np

# Brute force
x = [0.1, 1, 10, 100, 1000]

# Classic explicit loop
x = 0.1
for i in range (5):
    print (x)
    x = x*10
    
# List comprehension, more efficient (gives the same values as above)
x = [10**i for i in range(-1, 4)]
print (x)

# Lazy evaluation (saves memory for very large lists)
for x in (10**i for i in range(-1, 4)):
    print (x)

# Using the built-in function in numpy
x = np.logspace (-1, 3, num=5)
print (x)
Posted in Programming, Python, Software

Explaining the smallest chemical network that can display a Hopf bifurcation

Last year I wrote a short blog post on simulating the smallest chemical network that can display oscillations via a Hopf bifurcation. Here I want to revisit this with an eye to explaining why it oscillates. The paper in question is:

Wilhelm, Thomas, and Reinhart Heinrich. “Smallest chemical reaction system with Hopf bifurcation.” Journal of Mathematical Chemistry 17.1 (1995): 1-14.

In that post I showed the figure given in the paper:

[Figure: the reaction network as drawn in the original paper]

The question is how does this network generate oscillations? In order to answer this question, we must first redraw the network. I’m going to make two changes to the figure.

You’ll notice that the reaction X + Y -> Y consumes and regenerates Y, so Y doesn’t actually change in concentration as a result of this reaction. Instead, we can treat Y as an activator of the reaction. In the paper the rate law for this reaction was k*X*Y; we leave this as is but change the reaction to X -> waste. This won’t alter the dynamics at all, but we can now reinterpret this reaction as one that is activated by Y without consuming Y.

For the reaction X + Y -> Y, we can write the equivalent form

X -> waste; activated by Y: v = k*X*Y

The other interesting reaction is X + A -> X + X. This is what is called an autocatalytic reaction, that is, X stimulates its own production, and this is key to the origin of the oscillations. In the diagram, we can replace this reaction with X activating itself, in other words, a positive feedback loop. On its own this reaction has one steady state, when X is zero. If X is not zero, the concentration of X tends to infinity as time tends to infinity.

With these changes, we’ll redraw the network in the following way, with the reaction numbers staying the same as those found in the original figure:

[Figure: the redrawn network, with the positive feedback loop shown in blue and the delayed negative feedback loop in red]

In the new drawing, we can see a positive feedback loop in blue formed from the X -> X + X reaction and a delayed negative feedback loop in red that goes from X to reaction 2 via reactions 4 and 5. The negative feedback loop is negative with respect to X because increases in X activate reaction 2, which in turn raises the degradation rate of X. This is the classic structure for a relaxation oscillator: a positive feedback loop coupled with a delayed negative feedback loop that causes X to turn on and off repeatedly. Let’s make a smaller version of the positive feedback unit using reactions 1, 2, and 4:

X -> X + X; k1*X
X -> Ao;  k2*Y*X
X -> Z; k4*X

The rate of change of X is given by:

    \[ \frac{dX}{dt} = k_1 X - k_2 X Y - k_4 X \]

We can see that the rate of change of X can be positive or negative depending on the values of X and Y. At low Y, the rate of change of X will be positive, but at high enough Y, the rate of change will turn negative. In other words, the system can be switched from an unstable to a stable regime by setting Y. If we set k2 = 1 and k4 = 1 then the crossover point from unstable to stable is at Y = k1 - 1. In the model k1 = 4, therefore the rate of change of X switches sign when the level of Y is 3. See the time course plot below.
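Factoring the rate as $\frac{dX}{dt} = X (k_1 - k_2 Y - k_4)$ shows that, for X > 0, the sign flips where the term in parentheses vanishes:

    \[ k_1 - k_2 Y - k_4 = 0 \quad \Rightarrow \quad Y = \frac{k_1 - k_4}{k_2} = \frac{4 - 1}{1} = 3 \]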

When X and Y are small the network is in an unstable state and X rises; this causes Y to rise, but with a delay due to having to go through Z. However, once Y reaches the threshold dictated by k1, k2, and k4, the rate of change of X goes negative and the system enters a stable regime. As X drops, so does Y, which means the threshold is crossed again but in the opposite direction, and the system switches from a stable back to an unstable regime. This loop continues indefinitely, resulting in oscillations.
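To see this in a simulation, here is a small Tellurium sketch of the redrawn network. The constants k1 = 4, k2 = 1, and k4 = 1 are the values quoted above; the relay step Z -> Y and the first-order removal of Y, together with k5 and k3, are my own guesses rather than the paper’s values, so the numbers won’t match the original figure.

import tellurium as te

# Redrawn network: autocatalysis (1), Y-activated removal of X (2),
# the delay line X -> Z -> Y (4 and 5), and an assumed first-order removal of Y (3).
# Only k1, k2 and k4 come from the discussion above; k5 and k3 are guesses.
r = te.loada('''
    J1: X -> X + X;  k1*X;      // positive feedback (autocatalysis)
    J2: X -> ;       k2*X*Y;    // removal of X, activated by Y
    J4: X -> Z;      k4*X;      // start of the delayed negative loop
    J5: Z -> Y;      k5*Z;
    J3: Y -> ;       k3*Y;      // assumed removal of Y

    k1 = 4; k2 = 1; k4 = 1;
    k5 = 1; k3 = 1;
    X = 3; Y = 3; Z = 3;        // a steady state for these constants
''')

# The eigenvalues at this steady state should include a complex pair with a
# positive real part, i.e. an unstable spiral, which is what drives the cycling
print(r.getFullEigenValues())

# Perturb X slightly and watch the instability develop
r.X = 3.5
m = r.simulate(0, 60, 500)
r.plot()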

 

Posted in General Science Interest, Pathways, Systems Theory

Simple Stochastic Code in Python

It’s been a while since I blogged (grant writing etc. gets in the way of actual thinking and doing), so here is a quick post that uses Python to model a simple reaction A -> B using the Gillespie next reaction method.

I know importing pylab is discouraged but I can never remember how to import matplotlib. pylab is an easy way to get matplotlib. It also imports scipy and numpy but I import numpy separately.

I use lists instead of arrays to accumulate the results because they are faster to extend as the data comes in. Note that with a Gillespie algorithm you don’t know beforehand how many data points you’ll get.
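Here is a minimal sketch of the idea for the single reaction A -> B; it is not the original listing, and the rate constant, initial counts, and end time are illustrative choices.

import numpy as np
import pylab as plt

k = 0.1          # rate constant (illustrative)
A = 100          # initial number of A molecules
B = 0
t = 0.0
tEnd = 100.0

# Lists rather than arrays, since the number of reaction events isn't known in advance
times = [t]
avalues = [A]

while t < tEnd and A > 0:
    a0 = k*A                                 # propensity of the single reaction A -> B
    t = t + np.random.exponential(1.0/a0)    # exponentially distributed waiting time
    A = A - 1                                # fire the reaction
    B = B + 1
    times.append(t)
    avalues.append(A)

plt.step(times, avalues, where='post')
plt.xlabel('Time')
plt.ylabel('Number of A molecules')
plt.show()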

I’ll also make a shameless plug for my book Enzyme Kinetics for Systems Biology, which explains the algorithm.

Posted in General Science Interest, Modeling, Programming, Python, Software