Category Archives: Resource

Resource: Turning an Excel sheet into a web-accessible database with GreenMamba

Anyone who has worked in computational biology for many years will be familiar with the following situation: you have received an Excel spreadsheet from collaborators, generously referred to as a “database”, and you now need to make the data accessible to the world. One could obviously just provide the file for download; however, it would be far preferable if the data could be searched through a simple web interface.

This is not a particularly difficult job, but it is a fair amount of work. Typically you would need to set up a database (be that an SQL database or something else), write a CGI script that queries the database and formats the result as an HTML table, and spend some time on web design to make the input and output pages look aesthetically pleasing. It all takes time that you would probably rather spend doing something more productive. Consequently, this is often not done at all, and data sets that might be of value to others are thus never made available.

One of the key features of the GreenMamba project (see previous blog post on the topic) is to make it as easy as possible to turn any regular Excel spreadsheet into a web database with nearly no work involved. In fact, all it takes is the following four steps:

  1. Download and unpack Mamba.
  2. Save your spreadsheet in tab-delimited format with column names in the first line.
  3. Add the following two lines to your .ini file:
    [NameOfDatabase]
    database : my_spreadsheet.tsv
  4. Start the Mamba server (./mambasrv my_database.ini)

To exemplify this, we downloaded the complete list of 1743 known instances of Eukaryotic Linear Motifs from the ELM database. The following ini file is all it takes to turn the resulting tab-delimited file into a simple web-accessible database:

[SERVER]
host : localhost
port : 8080
plugins : ./greenmamba

[Instances]
database : greenmamba/examples/instances.tsv

The [SERVER] section specifies the host and port of the computer on which the Mamba web server runs, and the plugins variable specifies where to load the plugins that enable the whole GreenMamba framework; it should always be set as shown here. The [Instances] section specifies the name of the database, and its database variable points to the tab-delimited version of the spreadsheet. After starting the Mamba server, you can go to http://localhost:8080/HTML/Instances to see the following query interface (here shown with a query):

Upon submitting the query, GreenMamba retrieves all lines that match the search criteria and formats them as an output page:

One could set up a nicer and simpler version of the database by filtering the tab-delimited file a bit. For example, one might want to leave out the columns ELMType (which is redundant with ELMIdentifier), Accessions, InstanceLogic, Evidence, PDB, and Organism (which is redundant with ProteinName) and rename ELMIdentifier to ELM and ProteinName to Protein. This would result in a simpler query form and a more concise results table. Doing this is left as an exercise for the interested reader.
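
As a hint for that exercise, a small script along the following lines would do the filtering. The column names are those of the ELM download described above; the file names are just placeholders.

import csv

# Drop redundant columns from the ELM instances file and rename two of them,
# producing a slimmer tab-delimited file that GreenMamba can serve.
drop = {"ELMType", "Accessions", "InstanceLogic", "Evidence", "PDB", "Organism"}
rename = {"ELMIdentifier": "ELM", "ProteinName": "Protein"}

with open("instances.tsv", newline="") as infile, \
     open("instances_simple.tsv", "w", newline="") as outfile:
    reader = csv.DictReader(infile, delimiter="\t")
    keep = [col for col in reader.fieldnames if col not in drop]
    writer = csv.DictWriter(outfile, delimiter="\t",
                            fieldnames=[rename.get(col, col) for col in keep])
    writer.writeheader()
    for row in reader:
        writer.writerow({rename.get(col, col): row[col] for col in keep})

Pointing the database line of the ini file at the resulting instances_simple.tsv would then give the simpler query form and results table.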

Resource: Turning databases and tools into web resources with GreenMamba

Today, the users of bioinformatics databases and tools increasingly rely on being able to access them through web interfaces. Almost all major databases and most of the commonly used tools can be accessed in this manner, which is mostly good news from the users' perspective. However, in my experience from teaching numerous courses, these users have never worked with a command line and thus typically hit a wall the moment they have to do anything slightly more specialized than, for example, running a BLAST search or making a multiple alignment.

The reason for this is simple: specialist tools and databases are typically not made available through user-friendly web interfaces, because they have too few users to make it worthwhile to create such an interface. Worse yet, the tools are in many cases not even distributed, because the many dependencies and the lack of documentation would result in too many questions if one were to distribute them. Consequently, almost every bioinformatician I have spoken to about this has one or more resources that they are currently not sharing – not because they are not willing to share, but because sharing would imply too much extra work. To address this problem, we have developed a web server that allows you to easily wrap existing databases and tools with a web interface like the one shown below.

In my group we are involved in the development and maintenance of many bioinformatics web resources, and I have thus been pushing the development of a reusable infrastructure. The result of this is the Python framework Mamba, which has primarily been developed by Sune Frankild and myself. Briefly, Mamba is a network-centric, multi-threaded queuing system that deals with the many technical aspects of network communication with the clients and of server-side resource management. All the work specific to a given resource is done by modules that run under the Mamba server. GreenMamba is one such Mamba module, which, based on a simple configuration file, can provide a complete web interface around a tab-delimited data file or a command-line tool.
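
To make the phrase “multi-threaded queuing system” a little more concrete, here is a deliberately simplified sketch of the general pattern in Python. It is not Mamba's actual code – just an illustration of requests going into a queue and being handled by a pool of worker threads, which is the role Mamba plays for its modules.

import queue
import threading

# Minimal illustration of a multi-threaded queuing server: requests are put
# into a queue and a pool of worker threads processes them.
requests = queue.Queue()

def worker():
    while True:
        job = requests.get()
        if job is None:                # sentinel used to shut the worker down
            break
        name, payload = job
        print(f"processed request {name!r} ({len(payload)} bytes)")
        requests.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for i in range(10):                    # pretend these arrive over the network
    requests.put((f"job-{i}", b"some request data"))

requests.join()                        # wait until all queued requests are handled
for _ in threads:
    requests.put(None)
for t in threads:
    t.join()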

It is thus with great pleasure that we can now release the first version of the Mamba queuing system and the GreenMamba wrapper under the BSD license. We hope that, by eliminating most of the work of setting up bioinformatics web resources, it will encourage people to make available data sets and tools that hitherto were not worth the time and effort to set up.

Over the coming days and weeks, I plan to publish a series of blog posts that illustrate how one can use this framework to wrap a web interface around existing databases and command-line tools with practically no work. Impatient readers are welcome to download the software and look in the greenmamba/examples directory.

Resource: Real-time text mining in Second Life using the Reflect API

Sometimes things just come together at the right time. The past few weeks Heiko Horn, Sune Frankild, and I have made much progress on the new version of Reflect, which we hope to put into production very soon. One of the major new features is that Reflect can now be accessed as REST and SOAP web services. When Linden Lab made available the beta version of Second Life viewer 2, which enables you to place a web browser on a face of a 3D object, I simply had to try to put the two together to provide real-time text mining inside Second Life.

The system works as follows. The Reflect Second Life object contains an LSL script that listens to everything that is said in local chat. It sends any text that it picks up to the Reflect REST web service, which returns a simple XML document listing the entities (proteins and small molecules) that were mentioned in the text. The LSL script parses this XML, constructs a URL pointing to the Reflect popup that corresponds to the set of entities in question, and sets this as the shared media to be shown on the Reflect object in Second Life.
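
To give an impression of what the LSL script does, here is the same round trip sketched in Python rather than LSL. The endpoint, parameters, and XML element names below are placeholders rather than the documented Reflect API, but the flow is the one described above: send the chat text to the REST service, extract the mentioned entities from the returned XML, and build a popup URL for them.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

REST_URL = "http://example.org/reflect/GetEntities"    # placeholder endpoint
POPUP_URL = "http://example.org/reflect/popup"          # placeholder popup page

def entities_in(text):
    # Send the chat text to the REST service and collect the identifiers of
    # the proteins and small molecules it reports back.
    query = urllib.parse.urlencode({"document": text, "format": "xml"})
    with urllib.request.urlopen(REST_URL + "?" + query) as reply:
        tree = ET.parse(reply)
    return [e.get("identifier") for e in tree.iter("entity")]

def popup_url(identifiers):
    # Build the URL of the Reflect popup for this set of entities; this is
    # what the LSL script sets as the shared media on the in-world object.
    return POPUP_URL + "?" + urllib.parse.urlencode({"ids": ",".join(identifiers)})

print(popup_url(entities_in("Human and mouse IL-5 activate the IL-5 receptor.")))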

The result is an information board that automatically pulls up possibly relevant information related to what people close to it are talking about. The picture below shows the result of me typing a sentence that mentioned human and mouse IL-5 (click for a larger version).

I am well aware that this may not be particularly useful to very many people in Second Life. However, I think it is a nice technology demo of how much can be accomplished with the new Reflect API and just a few lines of code.

Resource: Second Life Interactive Dendrogram Rezzer (SLIDR)

About half a year ago, I began experimenting with Second Life as a tool for virtual conferences (I should add that my experiences have since improved). However, I believe that imitating real life in a virtual world is not necessarily the best way to use the technology – it may be better to use virtual reality for doing the things that are difficult to do in the real world. A good example of this is Hiro’s Molecule Rezzer, which is one of the best-known scientific tools in Second Life. It and its much improved successor, Orac, allow people to easily construct molecular models of small molecules in Second Life.

After speaking with several other researchers in Second Life who, like me, are interested in evolution, I set out to build a similar tool for visualizing phylogenetic trees. The result is SLIDR (Second Life Interactive Dendrogram Rezzer), which constructs a dendrogram object from a tree in Newick format. The first version of SLIDR can handle trees both with and without branch lengths; however, I have not yet implemented support for labels on internal nodes or for bootstrap values.
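
To give a rough idea of what a dendrogram rezzer has to do internally, the sketch below parses a small Newick tree (leaf labels and optional branch lengths only, i.e. the subset the first version supports) and computes drawing coordinates for every node. The actual construction of in-world objects is done in LSL and is not shown here; the example tree is made up.

def parse_newick(s):
    # Recursive-descent parser returning nested dicts with children, label, length.
    pos = 0

    def node():
        nonlocal pos
        children = []
        if s[pos] == "(":                       # internal node: parse its children
            pos += 1
            children.append(node())
            while s[pos] == ",":
                pos += 1
                children.append(node())
            pos += 1                            # skip the closing ")"
        start = pos                             # read the (possibly empty) label
        while s[pos] not in ",():;":
            pos += 1
        label = s[start:pos]
        length = 1.0                            # default for trees without branch lengths
        if s[pos] == ":":
            pos += 1
            start = pos
            while s[pos] not in ",();":
                pos += 1
            length = float(s[start:pos])
        return {"children": children, "label": label, "length": length}

    return node()

def layout(tree):
    # x is the cumulative branch length from the root, y the leaf order;
    # internal nodes are placed midway between their children.
    state = {"next_y": 0.0}

    def place(node, x):
        node["x"] = x + node["length"]
        if not node["children"]:
            node["y"] = state["next_y"]
            state["next_y"] += 1.0
        else:
            for child in node["children"]:
                place(child, node["x"])
            node["y"] = sum(c["y"] for c in node["children"]) / len(node["children"])

    place(tree, 0.0)
    return tree

tree = layout(parse_newick("((bat1:0.3,bat2:0.2):0.5,bat3:0.9);"))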

The picture below shows an example of a dendrogram that was automatically generated by SLIDR based on a Newick tree:

SLIDR closeup

There is a bit more to SLIDR than this, though. After the dendrogram has been built, it can be loaded with a photo and/or a sound for each of the leaf nodes. When you click on a node, the corresponding sound is played and the photo is shown on the associated screen (the white box in front of which I am standing):

SLIDR posing

I plan to work with collaborators in Second Life to construct dendrograms for the evolution of bats (including their echolocation sounds and photos of the animals) and for the fully sequenced Drosophila genomes. Please do not hesitate to contact me if you would like to use SLIDR for another project. I intend to make SLIDR available as open source software once I have implemented support for the full Newick format.


Resource: STRING v8.1

After months of hard work from the entire STRING team – thanks, everyone – I am pleased to say that STRING v8.1 has now been put into production. Here is a screen shot of the start page:

STRING 8.1 start page

This is a minor release of STRING, which means that the imported databases of microarray expression data, protein interactions, genetic interactions, and pathways as well as text-mining evidence have all been updated. We have also fixed a bug that affected the minority of bacteria that have multiple chromosomes.

Another notable feature of STRING v8.1 is the new interactive network viewer that is implemented in Adobe Flash:

STRING 8.1 network viewer

For further details please see the post on the official STRING/STITCH blog.


Resource: The BuzzCloud visualization of buzzwords

“Oh, you work on systems biology? So do I!”

New buzzwords to describe scientific disciplines and technologies seem to pop up every year. For the fun of it, I have developed a small web resource, BuzzClouds, that provides a visual overview of the latest buzzwords in biomedicine.

Without destroying your weekend with mathematical formulas, here is how the BuzzCloud selection and visualization method works:

  • A list of potential buzzwords is constructed by extracting all one- and two-word phrases ending in -ics, -ology, -omy, -phy, -chemistry, -medicine, or -sciences. These endings were selected to capture buzzwords that correspond to scientific disciplines and technologies.
  • The potential buzzwords are ranked according to a score that takes into account their frequencies within the past year and within the preceding decade (for details see this review article). To get a high score, a buzzword must be both frequent and new. The top-50 buzzwords are included in the cloud.
  • The size of each buzzword is proportional to the logarithm of its frequency during the past year. Common buzzwords are thus large whereas rare buzzwords are small.
  • The brightness of each buzzword shows the frequency of the buzzword within the past year relative to the preceding decade. New buzzwords are thus bright whereas the older ones are darker.
  • Finally, each buzzword is assigned a tint that goes from yellow via white to cyan based on how often it occurs in scientific journals (yellow) as opposed to medical journals (cyan). A small code sketch of the whole scheme follows below.
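
The exact ranking function is the one described in the review article mentioned above; the sketch below therefore uses made-up counts and a stand-in score (recent frequency weighted against the preceding decade) simply to show how the pieces – ranking, size, brightness, and tint – fit together.

import math

# Illustrative sketch of the BuzzCloud idea with made-up counts. The score()
# function is a stand-in, not the published scoring method; size, brightness,
# and tint follow the description in the list above.
# term -> (abstracts in the past year, abstracts in the preceding decade,
#          fraction of mentions appearing in medical rather than scientific journals)
counts = {
    "systems biology":         (900, 4000, 0.30),
    "synthetic biology":       (400,  600, 0.10),
    "quantitative proteomics": (350,  500, 0.05),
    "nanomedicine":            (200,  250, 0.80),
}

def score(year, decade):
    # Stand-in score: rewards terms that are frequent now but were rare before.
    return year * year / (year + decade)

cloud = sorted(counts, key=lambda term: score(*counts[term][:2]), reverse=True)[:50]

for term in cloud:
    year, decade, medical = counts[term]
    size = math.log(year)                  # common buzzwords are large
    brightness = year / (year + decade)    # new buzzwords are bright
    tint = medical                         # 0 = yellow (scientific), 1 = cyan (medical)
    print(f"{term:25s} size={size:.2f} brightness={brightness:.2f} tint={tint:.2f}")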

When run for the year 2007, the end result looks like this (BuzzClouds for other years are available from the web resource):

50 buzzwords identified based on Medline abstracts from 2007

I think the method does a pretty decent job despite occasional mistakes such as nice technology and timely topics. In terms of scientific buzzwords, quantitative proteomics is booming, systems biology is still hot although it is getting a bit long in the tooth, and synthetic biology is rapidly gaining popularity. And nanotechnology seems to be popular within the medical domain, giving rise to buzzwords like nanomedicine and nanotherapeutics.

Maybe I should write a buzzword-compliant, interdisciplinary grant application that combines click chemistry and synthetic biology to develop novel nanotherapeutics.
