Posts Tagged ‘graph theory’

Articles

DNA sequence annotation: a graph coloring problem

In algorithms, biology, functional programming, haskell, science on June 18, 2012 by gcbenison

In this post I will explore methods for annotating DNA sequences with translated ORFs, like so:

      1870      1880      1890      1900      1910      1920
TGCTCACCAATGTACATGGCCTTAATCTGGAAAACTGGCAGGAAGAACTGGCGCAAGCCA
euLeuThrAsnValHisGlyLeuAsnLeuGluAsnTrpGlnGluGluLeuAlaGlnAlaL
               MetAlaLeuIleTrpLysThrGlyArgLysAsnTrpArgLysPro
         MetTyrMetAlaLeuIleTrpLysThrGlyArgLysAsnTrpArgLysPro

Creating such annotated diagrams is an application of graph coloring.  My code to produce them is free software.

Background

DNA is a lot like RAM, except instead of bits coming together in groups of eight to form bytes, DNA bases (of which there are four – A, C, T, and G) come together in groups of three to form codons.  The sequence of codons in turn determines the sequence of amino acids that make up a protein molecule.  The function that maps codons to amino acids (which is notably neither one-to-one nor onto) is called the genetic code, and its deciphering was a major intellectual achievement of the twentieth century.
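
To make the codon-to-amino-acid mapping concrete, here is a small fragment of the genetic code as a Haskell lookup table (a sketch of my own for illustration, not code from this post; the full table has 64 entries).  Note how several codons map to the same amino acid, which is why the mapping is not one-to-one:

import qualified Data.Map

-- A fragment of the standard genetic code, mapping codons to
-- three-letter amino acid abbreviations
geneticCode :: Data.Map.Map String String
geneticCode = Data.Map.fromList
  [ ("ATG","Met"), ("TTA","Leu"), ("TGC","Cys"), ("TTT","Phe")
  , ("CGA","Arg"), ("AGG","Arg")   -- two different codons, both Arg
  , ("CCG","Pro"), ("CCC","Pro")   -- likewise for Pro
  -- the three stop codons terminate translation rather than
  -- encoding an amino acid:
  , ("TAA","Stop"), ("TAG","Stop"), ("TGA","Stop") ]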

The analogy between bits and DNA bases breaks down in one important way: the notion of alignment.  The one thing that all computer vendors seem to agree on is that a byte consists of eight bits, that each bit clearly “belongs to” exactly one byte, and that it has a defined place within that byte.  No such thing is true of DNA.  To the molecular machinery that reads it, DNA is just a stream of single bases.  The special sequence of three bases ‘A’, ‘T’, ‘G’ – the “start codon” – tells the DNA-reading machinery to start grouping succeeding triples of bases into codons.  The sequence of codons initiated by the start codon – and then terminated by a sequence known as a “stop codon” – is called an open reading frame (ORF).  Thus each base cannot be inherently the “least significant” or “most significant”, the way that a bit can.  Rather, the significance of a base within a codon is determined entirely by its context within the surrounding frame defined by start and stop codons.  The same base can even simultaneously form part of multiple codons, with a different meaning in each, if it occurs within overlapping reading frames:

        10        20        30        40        50      
TTGGTAATGTTATGCCGAAGGCCCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
      MetLeuCysArgArgProPhePhePhePhePhePhePhePhePhePhe
           MetProLysAlaLeuPhePhePhePhePhePhePhePhePhePhe

In the example above, the ‘C’ base at position 15 occurs in two different contexts: as the final base in the ‘Cys’ codon within the ORF that starts at position 7, and as the first base in the ‘Pro’ codon within the ORF that starts at position 12.
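
How do such overlapping frames arise?  Here is a minimal sketch (mine, not taken from this post's source code): every occurrence of ATG opens its own candidate frame, which is then read in triples from that point on.

import Data.List (isPrefixOf, tails)

-- 1-based positions of every start codon; each one opens its own
-- candidate reading frame, so frames may overlap
startPositions :: String -> [Int]
startPositions s = [i | (i, t) <- zip [1..] (tails s), "ATG" `isPrefixOf` t]

-- Split a frame into codons, stopping at a stop codon (or at the end
-- of the sequence, if no stop codon occurs in frame)
codons :: String -> [String]
codons s = takeWhile (`notElem` ["TAA","TAG","TGA"]) complete
  where complete = takeWhile ((== 3) . length) (chunk s)
        chunk [] = []
        chunk t  = take 3 t : chunk (drop 3 t)

Applied to the sequence above, startPositions returns [7,12], one start for each of the two overlapping frames shown.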

ORF annotation as an application of graph coloring

A display such as the one in the example above, where a DNA sequence is displayed annotated with the ORFs it contains and their translations, is very useful in planning genetic engineering experiments because it allows you to edit a DNA sequence and visualize the effects this will have on the encoded proteins.  ORFs that overlap must be displayed on separate lines, so a program that generates such displays must contain an algorithm to decide on which line to print each ORF.  This deceptively simple-looking task is a special case of the graph coloring problem, and another example of how abstract concepts from graph theory tend to show up in everyday contexts.  Let each ORF be a vertex in a graph that contains an edge between any two overlapping ORFs.  The problem of assigning ORFs to display lines is then equivalent to the problem of assigning a color to each vertex such that no two connected vertices share the same color.  The graph coloring problem is also the key to register allocation, optimal bus scheduling, and many other things.  And it is hard to do well.  It is trivially easy to find a coloring of a graph – just assign a different color to each vertex (in the DNA application, just put each ORF on its own line).  It is much harder to find a minimal coloring – a coloring using the fewest possible distinct colors, which in the DNA case corresponds to using the fewest possible display lines to contain all the ORFs.

Comparing two graph coloring algorithms

Somewhat surprisingly, no algorithm is known that solves the general minimal graph coloring problem efficiently (i.e. with polynomial time complexity); the problem is NP-hard.  There are many heuristics, though, that approximate the minimal solution.  For example, the greedy coloring algorithm can be stated: “for each vertex v_i in V, assign the lowest color not assigned to any of its neighbors.”  In Haskell, this can be implemented as a fold over a list of vertices, where the accumulator variable is a map from vertices to colors that is initially empty:

import Data.List (sort)
import Data.Map (Map)
import qualified Data.Map

-- Given a vertex list and adjacency list, return a graph coloring,
-- i.e. a mapping from vertices to colors where colors = [1..]
graphColor::(Ord a)=>[a]->Map a [a]->Map a Int
graphColor nodes neighbors =
  foldl assignColor Data.Map.empty nodes
    where assignColor colors node =
            case Data.Map.lookup node neighbors of
              Nothing -> Data.Map.insert node 1 colors
              Just ns -> let next_color = lowestSlot $ map ((flip Data.Map.lookup) colors) ns
                         in Data.Map.insert node next_color colors

-- Given the colors already assigned to a vertex's neighbors (Nothing
-- means a neighbor is still uncolored), return the lowest free color
lowestSlot::[Maybe Int]->Int
lowestSlot xs = foldl h 1 $ sort xs
  where h best Nothing = best
        h best (Just x) | best == x = best + 1
                        | otherwise = best
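
As a quick sanity check, here is a hypothetical three-ORF example of my own (not from the post's benchmarks) in GHCi, where “a” overlaps “b” and “b” overlaps “c”:

-- "a" and "c" do not overlap, so they can share color 1:
-- >>> let adj = Data.Map.fromList [("a",["b"]),("b",["a","c"]),("c",["b"])]
-- >>> graphColor ["a","b","c"] adj
-- fromList [("a",1),("b",2),("c",1)]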

There are many ways to represent graphs, with several interesting implementations proposed for Haskell.  The greedy algorithm above requires that an adjacency list (a map from a node to its neighbors) be available for each node.  I’ve chosen to use the basic Data.Map type with nodes as keys and lists of neighbors as values.  Because I want to focus on the coloring algorithm, I will not discuss further the steps for creating such adjacency lists, and instead refer readers to the full source code.  The greedy algorithm is quite efficient – essentially linear time complexity, since each vertex is visited once and the work done there is proportional to its number of neighbors – but it can yield colorings that are far from minimal, depending on the input order of the vertices.  So when given ORFs in the order in which they occur in the DNA sequence, how well does the greedy algorithm perform?

To answer that question, I’ll compare it to a slower but more exhaustive graph coloring algorithm that I’ll call the “extended greedy” algorithm, which can be stated as: “For every remaining uncolored vertex, if it has no neighbor assigned to the current color, assign it to that color.  Repeat for the next color.”  In Haskell it can be implemented like this:

-- Partition vertices into color groups; 'connected' is a function
-- that tells if an edge exists between two vertices
extendedGreedy::(a->a->Bool)->[a]->[[a]]
extendedGreedy _ [] = []
extendedGreedy connected xs =
  let (first, rest) = foldl (\(first, rest) next ->
                              if (not $ any (connected next) first)
                                then ((next:first), rest)
                                else (first, (next:rest)))
                            ([],[]) xs
  in first:(extendedGreedy connected rest)
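
Continuing the toy example from above (hypothetical ORF names again):

-- "b" conflicts with both "a" and "c", so it lands in the second group:
-- >>> let conn x y = (x,y) `elem` [("a","b"),("b","a"),("b","c"),("c","b")]
-- >>> extendedGreedy conn ["a","b","c"]
-- [["c","a"],["b"]]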

The extended greedy algorithm is more time-intensive.  (Consider the case where no ORFs overlap at all: each vertex is still compared with every other, so time complexity is at least O(n^2), and it gets worse when there is overlap.)  But it also seems more thorough and more likely to find a minimal coloring.  So how do the two methods compare, both in terms of performance and in terms of finding a minimal coloring?  I ran both against a series of four benchmarks derived from the genome of E. coli strain e2348/69, an organism I work with in the laboratory: first, the plasmid pGFPE-Ler, a small piece of DNA containing just a few genes; second, a genomic region responsible for pathogenesis called the LEE, comprising about 2% of the genome; third, the first 400,000 base pairs, or about 10% of the genome; and finally, the entire e2348/69 genome.

                                        Greedy algorithm      Extended greedy
                                       Run time  Chromatic   Run time  Chromatic
Sequence              Base pairs  ORFs  (sec)    number       (sec)    number
pGFPE-Ler                   6362    84    0.01       10         0.01       10
LEE                       102512  1079    0.24       26         0.2        26
e2348/69, first 400k      400000  5170    0.85       39         0.9        39
e2348/69, complete       5026578 61255   11          75       113          75

The run times demonstrate the linear time complexity of the greedy algorithm and the worse time complexity of the extended greedy algorithm, which becomes especially apparent for the largest benchmark.  However, the extended algorithm appears to do no better at finding minimal colorings.  It is known that for any graph, some ordering of the input vertices exists for which the simple greedy algorithm will find an optimal coloring.  Perhaps what these results reveal is that ORFs sorted in the order in which they occur in the sequence constitute such an ideal ordering – but I do not yet know how to prove that.  Perhaps a topic for a later post…

Articles

The Nerdy Stuff Matters

In algorithms on March 28, 2012 by gcbenison

A childhood friend who grew up to be an artist has created many pieces based on letters and words – not just their meanings, but their visual form and their relationships to each other.  Here is one of them:

YES - YET - GET - GOT - GO - NO

The word “yes” changes into the word “no” in five steps, where each step involves either changing, adding, or deleting exactly one letter, with the result of each step being a valid English word.  Upon seeing this I – being a computer programmer – immediately wondered if I could come up with an algorithm to find such sequences in a more general way.  That is, given a starting word and an ending word, could I design an algorithm to transform one word into the other by changing, adding, or deleting exactly one letter at a time, with the result being a valid English word at each step?

After some thought, I did come up with an approach and soon had a working implementation (fair warning: the link is just the source code for a program that does the job; you have to download and compile it to use it.  I’m working on an online version.)  Here is an example of its output:

  • table
  • able
  • ale
  • all
  • hall
  • hail
  • hair
  • chair

Though I had a working program, I wanted to see how other programmers would approach the problem, so I posted it as a challenge on Stack Overflow, a site where programmers exchange questions and answers.  As it turns out, this is quite a common programming challenge (although the rules about adding and deleting a character are most often left out – so that “cat hat hot” would be a valid sequence, but “cat chat that than tan” would not be.  In this form the sequences are called word ladders.)  Variants have appeared as interview questions.  Within minutes of posting, I received answers describing essentially the same approach I had used: “Represent the word relationships as a graph, and use a shortest path algorithm.  Duh.”

Word Ladders as Applied Graph Theory

For those readers not steeped in discrete math and computer science, some definitions are in order.  A graph is a collection of vertices (in this case, the words) and edges (in this case, the transformations relating the words to each other).  Such a graph formed from an English dictionary can be visualized as quite a large web spreading through space; here is a small part of it –

portion of the English language word graph

Once such a graph is defined, the problem of finding the desired sequence of words becomes the problem of finding the shortest possible path between two points in the graph.  This is not an easy problem, but it is a very standard one, so it becomes possible to make use of its very well-known solution.  As a programmer, you need not re-implement the solution or even understand it particularly well; you can just re-use an existing implementation.  I used the one provided by the excellent igraph library.  You simply feed it the graph, the start point, and the end point, and it returns the path.  So the only real work is to build the word graph from all the words in the language.
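
For illustration only (the post reuses igraph's shortest-path routine rather than writing one), here is what that step might look like as a small breadth-first search in Haskell, the language of the previous article, on a hand-built toy word graph:

import qualified Data.Map as M
import qualified Data.Set as S

-- Breadth-first search over an adjacency map, returning one shortest
-- path from start to goal (if any)
shortestPath :: Ord a => M.Map a [a] -> a -> a -> Maybe [a]
shortestPath adj start goal = go (S.singleton start) [[start]]
  where
    go _ [] = Nothing
    go seen ((v:path):rest)
      | v == goal = Just (reverse (v:path))
      | otherwise =
          let ns    = [n | n <- M.findWithDefault [] v adj, n `S.notMember` seen]
              seen' = foldr S.insert seen ns
          in go seen' (rest ++ [n:v:path | n <- ns])
    go seen ([]:rest) = go seen rest

-- >>> let g = M.fromList [("yes",["yet"]),("yet",["yes","get"]),
--       ("get",["yet","got"]),("got",["get","go"]),("go",["got","no"]),("no",["go"])]
-- >>> shortestPath g "yes" "no"
-- Just ["yes","yet","get","got","go","no"]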

Building the Word Graph

To build a graph out of all words in an English dictionary, you must provide a list of all pairs of words (cat <-> hat; hat <-> hot; hot <-> shot) that should be connected.  The graph library doesn’t care what order you provide them in, as long as you provide each pair once.  Clearly the list will be large for the English language – there are over 200,000 words in the spelling dictionary on my computer, and there will be many more connections between these.  So how can you generate a list of all possible pairs?  How can you avoid finding the same pair repeatedly?  How do you know when you’re done?  How can you do this efficiently?

A naive algorithm would compare every word in the dictionary, in order, to every other word, in order.  But this is slow – the number of comparisons grows as the square of the size of the dictionary.  We can do much better by sorting the words into equivalence classes.  Consider words that differ only in their first letter.  We put these into an equivalence class by replacing the first character with a special place-holder.  For example, “walk” and “talk” both become “*alk”; they are in the same equivalence class and therefore should be connected.  Apply a similar procedure to the second character, the third character, etc… to build up a table like this:

walk *alk
walk w*lk
walk wa*k
walk wal*
tall *all
tall t*ll
tall ta*l
tall tal*
ear *ar
ear e*r
ear ea*
talk *alk
talk t*lk
talk ta*k
talk tal*

OK, so how does this speed things up?  It becomes clear when we sort the table by equivalence class:

walk *alk
talk *alk
tall *all
ear *ar
ear e*r
ear ea*
talk t*lk
tall t*ll
talk ta*k
tall ta*l
talk tal*
tall tal*
walk w*lk
walk wa*k
walk wal*

The sort operation has grouped together all words that should be connected; “walk -> talk” and “talk -> tall” in this example.  The slow step in this procedure is the sorting, which can be achieved by any of a number of standard algorithms in time proportional to n * log n – a big improvement over n * n for large dictionaries.
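
In Haskell (a sketch of my own for illustration; the author's actual program is linked above), generating and sorting these rows looks like this:

import Data.List (sort)

-- One key per character position: that position replaced by '*'.
-- Words sharing a key differ by exactly one substitution.
substitutionKeys :: String -> [(String, String)]
substitutionKeys w = [(take i w ++ "*" ++ drop (i+1) w, w) | i <- [0 .. length w - 1]]

-- Pair every word with each of its keys and sort by key, so that
-- words to be connected end up on adjacent rows, as in the table above
substitutionTable :: [String] -> [(String, String)]
substitutionTable dict = sort (concatMap substitutionKeys dict)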

So what about the rule that connects words by adding or deleting one character?  This variant of the problem is harder to find online, but I did find an implementation in a comment on this blog similar to the one I used.  The key is to create a table, as before, but for each word add all possible rows formed by deleting one letter:

bring ring
bring bing
bring brng
bring brig
bring brin
rings ings
rings rngs
rings rigs
rings rins
rings ring
ear ar
ear er
ear ea
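
Generating these rows in the same Haskell sketch (again my own illustration, not the author's code):

-- One row per character position: that position deleted.  A row
-- (w, d) whose second column survives the dictionary filter below
-- becomes the connection w <-> d (e.g. "bring" <-> "ring")
deletionKeys :: String -> [(String, String)]
deletionKeys w = [(w, take i w ++ drop (i+1) w) | i <- [0 .. length w - 1]]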

Then sort by the second column, and filter against the sorted dictionary to retain only those rows where the second column is an actual word.  Sorted, but not yet filtered, the rows look like this:

ear ar
bring bing
bring brig
bring brin
bring brng
ear ea
ear er
rings ings
rings rigs
rings ring
bring ring
rings rins
rings rngs

So in this example, after filtering, three connections are found: “rings <-> ring”, “rings <-> rigs”, and “bring <-> ring”.  As before, the sorting is the slow step, with O(n log n) time complexity.  So how do you combine the connections due to letter substitution with the connections due to deletion?  This is called a graph union and is supported by any graph library.  An interesting effect happens if you include only the connections due to deletion and leave out the connections due to substitution: now the word must change length by one letter at each step!

  • hands
  • hand
  • and
  • land
  • lad
  • lead
  • led
  • fled
  • fed
  • feed
  • fee
  • feet

Conclusion: the nerdy stuff matters

Every practicing programmer knows that it is possible – perhaps normal is the right word – to write practical, useful code without drawing on the nerdier parts of computer science.  You can write a program that works just fine without ever analyzing it for time complexity or proving its correctness.  The simplest, most intuitive data structures are often best.  But hopefully what the word ladder example illustrates is that often the best way to solve an everyday, practical problem is to translate it into a special case of a well-studied abstract problem, and then apply a standard solution to that problem.  If solving a word ladder does not seem very practical, consider that applied graph theory shows up everywhere: in revision management, in the provenance of lab samples, in deciding the best layout for a communications network.  So keep studying the nerdy stuff – one day, you will find yourself facing a down-to-earth problem that is harder than it looks, and realize that you already have the answer.