Tag Archives: database

Resource: The TISSUES database on tissue expression of genes and proteins

As mentioned in the last entry, 2015 has been a year of publishing web resources for my group. The COMPARTMENTS and DISEASES databases have yet another sister resource, namely TISSUES.

This web resource allows users to easily obtain a color-coded schematic of the tissue expression of a protein of interest, providing an at-a-glance overview of evidence from database annotations, from proteomics and transcriptomics studies, and from automatic text mining of the scientific literature:

[Image: color-coded body schematic from TISSUES]

Whereas the resource integrates all of the above-mentioned types of evidence, the focus in this work was primarily on combining data from systematic tissue expression atlases, produced using a variety of different high-throughput assays. This required extensive work on mapping, scoring, and benchmarking the different datasets to put them on a common confidence scale. The scientific results and details of all those analyses can be found in the article “Comprehensive comparison of large-scale tissue expression datasets”.

Resource: The DISEASES database on disease–gene associations

2015 has been an exceptionally busy year in my group in terms of publishing databases and other web resources; so busy that I have failed to write blog posts describing several of them.

One of them is the DISEASES database, which is described in detail in an article with the informative, if not very inventive, title “DISEASES: Text mining and data integration of disease–gene associations”.

The DISEASES database can be viewed as a sister resource to the subcellular localization database COMPARTMENTS, which you can read more about in this blog post. Indeed, the two resources share much of their infrastructure, including the web framework, the backend database, and the text-mining pipeline.

The big difference between the two resources is the scope: whereas COMPARTMENTS links proteins to their subcellular localizations, DISEASES links them to the diseases in which they are implicated. To this end we make use of the Disease Ontology, which turned out to be very well suited for text-mining purposes due to its many synonyms for terms. Text mining is the most important source of associations but is complemented by manually curated associations from Genetics Home Reference and UniProtKB as well as GWAS results imported from DistiLD.

To facilitate usage in large-scale analysis and integration into other databases, all data in DISEASES are available for download. Indeed, the text-mined associations from DISEASES are already included in both GeneCards and Pharos.
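As an example of such large-scale use, the tab-delimited download files are trivial to load in a few lines of Python. The sketch below is exactly that, a sketch: the file name and the column layout are my assumptions based on the text-mining channel and should be checked against the actual download files.

import csv

def read_associations(path):
    """Yield (gene name, disease name, confidence) tuples from a DISEASES TSV file."""
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            # Assumed columns: gene id, gene name, disease id, disease name, z-score, confidence, URL
            yield row[1], row[3], float(row[5])

# Keep only the high-confidence disease-gene associations
confident = [a for a in read_associations("human_disease_textmining_filtered.tsv") if a[2] >= 3.0]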

Commentary: The sad tale of MutaDATABASE

The problem of bioinformatics web resources dying or moving is well known. It has been quantified in two interesting papers by Jonathan Wren entitled “404 not found: the stability and persistence of URLs published in MEDLINE” and “URL decay in MEDLINE — a 4-year follow-up study”. There is also a discussion on the topic at Biostar.

The resources discussed in these papers at least existed in an operational form at the time of publication, even if they have since perished. The same cannot be said about MutaDATABASE, which in 2011 was published in Nature Biotechnology as a correspondence entitled “MutaDATABASE: a centralized and standardized DNA variation database”. Fellow blogger Neil Saunders was quick to pick up on the fact that this database was an empty shell, but generously gave the authors the benefit of the doubt in his closing statement:

Who knows, MutaDatabase may turn out to be terrific. Right now though, it’s rather hard to tell. The database and web server issues of Nucleic Acids Research require that the tools described be functional for review and publication. Apparently, Nature Biotechnology does not.

Now, almost five years after the original publication, I think it is fair to follow up. Unfortunately, MutaDATABASE did not turn out to be terrific. Instead, it turned out just not to be. In March 2014, about three years after the publication, www.mutadatabase.org looked like this:
MutaDATABASE in 2014

By the end of 2015, the website had mutated into this:
MutaDATABASE in 2015

To quote Joel Spolsky: “Shipping is a feature. A really important feature. Your product must have it.” This also applies to biological databases and other bioinformatics resources, which is why journals would be wise never to publish any resource without this crucial feature.

Analysis: Does a publication matter?

This may seem a strange question for someone working in academia to ask – of course a publication matters, especially if it is cited a lot. However, when it comes to web resources, publications and citations in my opinion mainly serve as somewhat odd proxies on my CV for what really matters: the resources themselves and how much they are used.

Still, one could hope that a publication about a new web resource would make people aware of its existence and thus attract more users. To analyze this, I took a look at the user statistics of our recently developed resource COMPARTMENTS:

COMPARTMENTS user statistics

Before we published a paper about it, the web resource had fewer than 5 unique users per day. Our paper about the resource was accepted in the journal Database on January 26, which increased the usage to about 10 unique users on a typical weekday. The spike of 41 unique users in a single day was due to me teaching a course.

So what happened at the end of June that gave a more than 10-fold increase in the number of users from one day to the next? A new version of GeneCards was released with links to COMPARTMENTS. It seems safe to conclude that the peer-reviewed literature is not where most researchers discover new tools.

Resource: The COMPARTMENTS database on protein subcellular localization

Together with collaborators in the groups of Seán O’Donoghue and Reinhard Schneider, my group has recently launched a new web-accessible database named COMPARTMENTS.

COMPARTMENTS unifies subcellular localization evidence from many sources by mapping all proteins and compartments to their STRING identifiers and Gene Ontology terms, respectively. We import curated annotations from UniProtKB and model organism databases and assign confidence scores to them based on their evidence codes. For human proteins, we similarly import and score evidence from The Human Protein Atlas. COMPARTMENTS also uses text mining to derive subcellular localization evidence from co-occurrence of proteins and compartments in Medline abstracts. Finally, we precompute subcellular localization predictions with the sequence-based methods WoLF PSORT and YLoc. For further details, please refer to our recently published paper entitled “COMPARTMENTS: unification and visualization of protein subcellular localization evidence”.

To provide a simple overview of all this information, we visualize the combined localization evidence for each protein onto a schematic of an animal, fungal, or plant cell:

COMPARTMENTS NR3C1

COMPARTMENTS COX1

COMPARTMENTS PSAB

You can click any of the three images above to go to the COMPARTMENTS web resource. To facilitate use in large-scale analyses, the complete datasets for major eukaryotic model organisms are available for download.
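To give an idea of how easily these files lend themselves to large-scale analyses, here is a minimal Python sketch that turns a download file into a protein-to-compartments lookup; the file name and the column layout are assumptions on my part and should be verified against the actual files.

import csv
from collections import defaultdict

def load_localizations(path, min_confidence=3.0):
    """Map each protein to the compartments it is assigned to with sufficient confidence."""
    localizations = defaultdict(set)
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            # Assumed columns: protein id, gene name, GO term, compartment name, confidence
            if float(row[4]) >= min_confidence:
                localizations[row[0]].add(row[3])
    return localizations

localizations = load_localizations("human_compartment_integrated_full.tsv")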

Resource: Antibodypedia bulk download file and STRING payload

Antibodypedia, developed by Antibodypedia AB and Nature Publishing Group, is a very useful resource for finding commercially available antibodies against human proteins.

The resource is made available under the Creative Commons Attribution-NonCommercial 3.0 license, which allows for reuse and redistribution of the data for non-commercial purposes. However, the data are only available for browsing through a web interface, which greatly limits systems biology uses of the resource. I thus wrote a robot to scrape all information from the web resource and convert it into a convenient tab-delimited file, which I have made available for download under the same license. This dataset covers a total of 579,038 antibodies against 16,827 human proteins.

To be able to use the dataset in conjunction with STRING and related resources, I next mapped the proteins to STRING protein identifiers. I was able to map 92% of all proteins in Antibodypedia. Having done this, I created the necessary files for the STRING payload mechanism to be able to show the information from Antibodypedia directly within STRING.

The end result looks like this when searching for the WNT7A protein:

Antibodypedia STRING network

The halos around the proteins encode the type and number of antibodies available. Red rings imply that at least one monoclonal antibody exists whereas gray rings imply that only polyclonal antibodies exist. The darker the ring (be it red or gray), the more different antibodies are available.
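To make the color-coding concrete, the small Python sketch below maps antibody counts to ring colors following the rules just described; the function, the color values, and the saturation point are my own illustrative choices, not the actual STRING implementation.

def halo_color(n_monoclonal, n_total):
    """Return an (R, G, B) ring color: red if at least one monoclonal antibody
    exists, gray if only polyclonal ones do; darker means more antibodies."""
    darkness = min(n_total, 100) / 100.0  # saturate the scale at 100 antibodies (arbitrary cap)
    level = int(230 - 130 * darkness)     # from light (230) down to dark (100)
    if n_monoclonal > 0:
        return (255, level, level)        # shades of red
    return (level, level, level)          # shades of gray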

The STRING payload mechanism also extends the popups with additional information, here shown for LRP6:

Antibodypedia STRING popup

The popup shows the total number of antibodies available and how many of them are monoclonal. It also provides a direct linkout to the relevant protein page on Antibodypedia.

Please feel free to use this Antibodypedia-STRING mashup.

Resource: Adding bells & whistles to GreenMamba

My latest blog post ended at the stage where we had combined the Instances database and the Motifs tool into a single metatool. In this post I will show how little it takes to add the bells and whistles that turn it into the complete, professional web resource that I showed as a teaser in the first blog post of this series.

You may not want green to be the design color used throughout your web interface. This is easily changed by adding a line like color : #083D65 to your inifile. You can use named colors instead of hex values if you prefer. Whichever color you pick will be used throughout the web interface to ensure a consistent design.

In the simple default design, the frame changes size when switching between the Motifs and Instances input forms because the forms are not equally wide. This can easily be changed by setting a fixed width for all pages by adding a line such as width : 650px. You do not necessarily have to specify the width in pixels; any unit permitted in cascading style sheets can be used.

Most bioinformatics web resources require one or more pages to explain what the resource is all about. Such pages can easily be provided within the GreenMamba framework by adding lines with the same syntax as page_home. If you add a page_about line, you will get an ABOUT menu item at the top right, which, when clicked, will show the provided HTML text wrapped within the GreenMamba layout to provide a consistent look. There is nothing magic about the word “about”; for example, if you write page_download you will get a page named DOWNLOAD.

You may also want to add a footer that is shown at the bottom of every page and that, for example, mentions who made the resource, whom to contact in case of scientific questions or technical problems, and possibly points to one or more papers that describe the tools and which the user is requested to cite. To insert a footer, you simply add a line to the inifile with the keyword footer followed by the text you want shown; this text can contain HTML code.

If you set up a Mamba server to host a single resource, you will want the Mamba server to automatically direct users to the main input form in case they access the server without requesting a specific page. For example, we would want to redirect requests for localhost:8080 to localhost:8080/HTML/ELM. This can be done in the [REWRITE] section of the inifile, which allows you to specify simple URL rewrite rules similar to what can be done in Apache.

Below is the inifile required to set up the complete ELM example resource as it was shown in the first blog post of this series:

[SERVER]
host : localhost
port : 8080
plugins : ./greenmamba

[REWRITE]
/ : /HTML/ELM

[Instances]
database : greenmamba/examples/instances.tsv

[Motifs]
command : greenmamba/examples/motifs.pl $motif @fasta
page_home : greenmamba/examples/motifs_home.html

[ELM]
tools : Motifs; Instances;
color : #083D65
width : 650px
footer : Disclaimer: This ELM mirror only serves as an example for the GreenMamba framework. For scientific purposes, please use the real ELM server instead.
page_about : greenmamba/examples/elm_about.html

Starting up the Mamba server with this inifile and accessing localhost:8080 yields this interface:

Clicking the ABOUT link brings up the contents of the file elm_about.html wrapped with the GreenMamba design elements:

In case you want to include pictures or other content on your pages, you do not need a separate web server to host them. Mamba implements a simple web server that you can use for this purpose; all you have to do is add a line www_dir : <directory> to the [SERVER] section of the inifile and place the files you want to serve within the specified directory.
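For example, the [SERVER] section of the ELM inifile above could be extended as follows; the directory name www is merely a placeholder:

[SERVER]
host : localhost
port : 8080
plugins : ./greenmamba
www_dir : ./www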

Finally, the output pages of the metatool are also formatted to follow the design specified in the inifile: the header shows the name of the metatool, the color matches that of the other pages, the menu with links to the pages is shown, and the footer is included.