Blue Collar Bioinformatics

Archive for the ‘visualization’ Category

Finding and displaying short reads clustered in the genome

Short read, or next-generation, sequencing is becoming increasingly prevalent in the day-to-day lives of bioinformatics researchers. Programs such as Bowtie, CloudBurst, Maq, and Arachne are available to align short reads to a reference genome. Once you have these alignments, the data analysis work begins. During a recent project, I was interested in looking at overlapping read groups across a genome; this post describes a method for collecting and visualizing those groups.

Starting with a file from an alignment program, the first step is to write a Python generator that returns information about the alignments. Since there are many different aligners, below is some Python pseudocode which describes the function:

def parse_alignment(align_file, errors_allowed=0):
    with open(align_file) as align_handle:
        for (read_id, match_id, match_start, match_end, errors) in \
                your_parser.read_iterator(align_handle):
            if (errors <= errors_allowed):
                yield read_id, match_id, match_start, match_end
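
Made concrete for a hypothetical tab-delimited format (one alignment per line: read id, chromosome or contig, start, end, error count; the column layout here is an illustrative assumption, not any particular aligner's output), the generator might look like:

```python
def parse_tab_alignment(align_file, errors_allowed=0):
    """Parse a hypothetical tab-delimited alignment file.

    Assumed columns: read id, chromosome/contig, start, end, error count.
    Adjust the column indexes below to match your aligner's actual output.
    """
    with open(align_file) as align_handle:
        for line in align_handle:
            parts = line.rstrip("\r\n").split("\t")
            read_id, match_id = parts[0], parts[1]
            match_start, match_end = int(parts[2]), int(parts[3])
            errors = int(parts[4])
            # skip alignments with too many errors
            if errors <= errors_allowed:
                yield read_id, match_id, match_start, match_end
```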

It parses a line, checks if it has an acceptable number of alignment errors, and yields a unique id for the read, an id for the match like a chromosome or contig name, and the start and end coordinates of the match. We can use this generator to build our representation of overlapping reads with the help of the excellent bx-python library. bx-python contains a Python and C implementation of clustered interval trees. The function below uses the interface to generate ClusterTree objects for each chromosome/contig in your alignment:

import collections
from bx.intervals.cluster import ClusterTree

def build_cluster_trees(align_generator, cluster_distance):
    # arguments to ClusterTree are:
    # - Distance in basepairs for two reads to be in the same cluster;
    #   for instance 20 would group all reads within 20bp of each other
    # - Number of reads necessary for a group to be considered a cluster;
    #   2 returns all groups with 2 or more overlapping reads
    cluster_trees = collections.defaultdict(lambda:
            ClusterTree(cluster_distance, 2))
    for read_id, match_id, start, end in align_generator:
        cluster_trees[match_id].insert(start, end, read_id)
    return dict(cluster_trees)

The generated trees will readily provide a list of start and end coordinates for a clustered region, along with all of the reads that map to that region:

for chrom, cluster_tree in cluster_trees.items():
    for start, end, read_ids in cluster_tree.getregions():
        # do something with your read cluster
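
To make the grouping explicit, here is a minimal pure-Python sketch of the single-linkage clustering that ClusterTree performs on sorted intervals; it is a simplified stand-in for illustration, not the bx-python implementation:

```python
def cluster_intervals(intervals, cluster_distance, min_reads=2):
    """Group (start, end, read_id) intervals whose spans fall within
    cluster_distance of each other; return (start, end, read_ids) regions
    holding at least min_reads reads."""
    regions = []
    cur_start = cur_end = None
    cur_ids = []
    for start, end, read_id in sorted(intervals):
        if cur_start is not None and start <= cur_end + cluster_distance:
            # read falls within the distance cutoff; extend current cluster
            cur_end = max(cur_end, end)
            cur_ids.append(read_id)
        else:
            # flush the previous cluster if it is deep enough
            if len(cur_ids) >= min_reads:
                regions.append((cur_start, cur_end, cur_ids))
            cur_start, cur_end, cur_ids = start, end, [read_id]
    if len(cur_ids) >= min_reads:
        regions.append((cur_start, cur_end, cur_ids))
    return regions
```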

My interest was in visualizing the reads in these clusters along with the frequency of each read. To do so, I generated two Python dictionaries mapping the read_id identifiers used in the grouping to the chromosome location of the read (location_db) and to the number of times a read was found in the raw short read results (freq_db). Practically, these are implemented as BerkeleyDB key-value stores, to avoid keeping all of the read information in memory.
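
The same pattern can be sketched with the standard library's shelve module standing in for BerkeleyDB; the file naming and tuple layout here are illustrative assumptions:

```python
import shelve

def build_read_dbs(align_generator, raw_counts, db_prefix):
    """Persist per-read locations and frequencies to on-disk key-value
    stores so they need not be held in memory.  raw_counts maps
    read_id -> occurrence count in the raw short read results."""
    location_db = shelve.open("%s_location.db" % db_prefix)
    freq_db = shelve.open("%s_freq.db" % db_prefix)
    for read_id, match_id, start, end in align_generator:
        location_db[read_id] = (start, end)
        freq_db[read_id] = raw_counts[read_id]
    location_db.close()
    freq_db.close()
```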

With this information, we can use matplotlib to visualize the reads. Frequencies are represented using a color map. Each read in a group is plotted using horizontal bar graphs:

import pylab

def plot_reads(cluster_name, read_ids, location_db, freq_db):
    read_info = []
    labels_done = []
    for read_id in read_ids:
        start, end = location_db[read_id]
        freq = freq_db[read_id]
        read_info.append((start, end, freq))
    read_info.sort(reverse=True)
    min_freq = min(l[2] for l in read_info)
    max_freq = max(l[2] for l in read_info)
    freq_cmap = pylab.get_cmap('Blues')
    pylab.clf()
    for rindex, (start, end, freq) in enumerate(read_info):
        if max_freq > min_freq:
            freq_percent = float(freq - min_freq) / float(max_freq - min_freq)
        else:
            # all reads share one frequency; avoid dividing by zero
            freq_percent = 1.0
        color = freq_cmap(freq_percent)
        if freq in [min_freq, max_freq]:
            label = "Frequency: %s" % (freq)
        else:
            label = None
        if label and label not in labels_done:
            labels_done.append(label)
        else:
            label = None
        pylab.barh(rindex, end - start, left=start, height=0.8, color=color,
                label=label)
    pylab.title(cluster_name)
    pylab.xlabel("Coordinates")
    pylab.ylabel("Reads")
    pylab.legend(loc='upper right')
    pylab.yticks(())
    out_file = "%s.png" % (cluster_name)
    pylab.savefig(out_file)

The resulting graph for an example region provides a quick visual summary of read coverage, overlap, and reliability:

Example coverage plot

Written by Brad Chapman

April 29, 2009 at 4:55 pm

Automated protein conservation display from BLAST alignments

Pawel at Freelancing Science had an excellent post about making structural predictions from visual analysis of BLAST alignments. This concept inspired me to put together an automated solution for visualizing protein conservation. Starting with a protein of interest, it retrieves the relevant identifier from NCBI, does a remote BLAST, examines the alignments to related divergent proteins, calculates conservation, and displays a plot of the results.

With a protein identifier like a GenBank accession number or UniProt ID, we first need to find the standard NCBI GI number. Using Biopython’s Entrez module:

def search_for_gi(self, uniprot_id, db_name):
    handle = Entrez.esearch(db=db_name, term=uniprot_id)
    record = Entrez.read(handle)
    ids = record["IdList"]
    if len(ids) == 0:
        raise ValueError("Not found in NCBI: %s" % uniprot_id)
    return ids[0]

Given the GI, a remote BLAST is performed and the XML result is parsed into a record object. This is again done using Biopython libraries, with the BLAST result cached in case of re-runs. If you were using this code for many queries or on a web page presented to scientists, it would make sense to use a local BLAST for speed purposes. This could easily be plugged in, again using Biopython. Here is the remote version:

def remote_blast(self, search_gi, blast_method):
    out_file = os.path.join(self._cache_dir, "%s_%s_blo.xml" % (blast_method,
        search_gi))
    if not os.path.exists(out_file):
        blast_handle = NCBIWWW.qblast(blast_method, "nr", search_gi)
        with open(out_file, 'w') as out_handle:
            for line in blast_handle:
                out_handle.write(line)
        blast_handle.close()
    with open(out_file) as in_handle:
        rec_it = NCBIXML.parse(in_handle)
        return next(rec_it)

With the parsed record, the next step is to loop over the alignments to calculate conservation across the protein. To provide quantitative results, a protein substitution matrix provides a score for each BLAST alignment character pair. Higher scores indicate a more conserved alignment, with exact matches being the highest scores. We use the BLOSUM62 matrix here, although a wide variety are supported by Biopython. The class below loops through all of the alignments and high scoring pairs (HSP, in BLAST nomenclature), notes the position, and uses the alignment pairs and matrix to assign conservation scores at each position:

class BlastConservationCalculator:
    def __init__(self, matrix_name="blosum62"):
        self._subs_mat = getattr(MatrixInfo, matrix_name)
        self._no_use_thresh = 0.95

    def conservation_dict(self, blast_rec):
        cons_dict = {}
        rec_size = int(blast_rec.query_letters)
        for base_index in range(rec_size):
            cons_dict[base_index] = []
        for align in blast_rec.alignments:
            for hsp in align.hsps:
                if (float(hsp.identities) / float(rec_size) <=
                        self._no_use_thresh):
                    cons_dict = self._add_hsp_conservation(hsp, cons_dict)
        return cons_dict

    def _add_hsp_conservation(self, hsp, cons_dict):
        start_index = int(hsp.query_start) - 1
        hsp_index = 0
        for q_index in range(len(hsp.query)):
            if (hsp.query[q_index] != '-'):
                if (hsp.sbjct[q_index] != '-'):
                    try:
                        sub_val = self._subs_mat[(hsp.query[q_index],
                                                  hsp.sbjct[q_index])]
                    except KeyError:
                        sub_val = self._subs_mat[(hsp.sbjct[q_index],
                                                  hsp.query[q_index])]
                    cons_dict[start_index + hsp_index].append(sub_val)
                hsp_index += 1
        return cons_dict
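
The per-position bookkeeping can be checked on a toy example; the mini substitution matrix and mock HSP below are illustrative stand-ins for BLOSUM62 and Biopython's parsed BLAST objects:

```python
class MockHSP(object):
    """Minimal stand-in for a Biopython HSP record."""
    def __init__(self, query, sbjct, query_start):
        self.query = query
        self.sbjct = sbjct
        self.query_start = query_start

# tiny symmetric score table standing in for MatrixInfo.blosum62,
# which stores each amino acid pair in only one orientation
toy_matrix = {("A", "A"): 4, ("A", "S"): 1, ("S", "S"): 4}

def hsp_conservation(hsp, subs_mat, query_length):
    """Assign substitution scores at each query position covered by an
    HSP, mirroring the gap handling in _add_hsp_conservation above."""
    cons_dict = dict((i, []) for i in range(query_length))
    start_index = int(hsp.query_start) - 1
    hsp_index = 0
    for q_index in range(len(hsp.query)):
        if hsp.query[q_index] != '-':
            if hsp.sbjct[q_index] != '-':
                pair = (hsp.query[q_index], hsp.sbjct[q_index])
                # try both orientations, as the matrix stores only one
                sub_val = subs_mat.get(pair, subs_mat.get((pair[1], pair[0])))
                cons_dict[start_index + hsp_index].append(sub_val)
            hsp_index += 1
    return cons_dict

# gap in the query does not advance the query position counter
scores = hsp_conservation(MockHSP("AS-A", "SSAA", 1), toy_matrix, 3)
```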

The result of this work is a dictionary of conservation scores at each position. Plotting the average of these scores directly results in a very choppy graph which is hard to interpret. Luckily, Andrew Dalke has tackled this problem and presented a detailed writeup of smoothing scores for plotting. Using the Savitzky-Golay technique described there, the smoothed average of the results is plotted using matplotlib:

window_size = 29
data_smoother = SavitzkyGolayDataSmoother(window_size)
pos_data = []
cons_data = []
indexes = sorted(cons_dict.keys())
for pos in indexes:
    pos_data.append(pos + 1)
    if len(cons_dict[pos]) > 0:
        cons_data.append(numpy.median(cons_dict[pos]))
    else:
        cons_data.append(0)
smooth_data = data_smoother.smooth_values(cons_data)
smooth_pos_data = pos_data[data_smoother.half_window():
        len(pos_data) - data_smoother.half_window()]
pylab.plot(smooth_pos_data, smooth_data)
pylab.axis(xmin=min(pos_data), xmax=max(pos_data))
pylab.xlabel("Amino acid position")
pylab.ylabel("Conservation")
pylab.savefig('%s_conservation.png' % accession.replace(".", "_"))
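
If Dalke's SavitzkyGolayDataSmoother is not at hand, a plain centered moving average gives a rougher but serviceable smoothing; this sketch is a simplified stand-in, not the Savitzky-Golay filter itself:

```python
def moving_average_smooth(values, window_size):
    """Smooth a score series with a centered moving average.

    window_size should be odd; the returned series drops half a window
    at each end, matching the trimming of pos_data in the plotting
    code above."""
    half = window_size // 2
    return [sum(values[i - half:i + half + 1]) / float(window_size)
            for i in range(half, len(values) - half)]
```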

The resulting plot, prepared for the example from Pawel's post that inspired this work, is shown below. We can see the 4 regions of lower conservation that Pawel noted by inspection of the alignment, along with the 3 intervening peaks of conservation:

Example conservation plot

The full script puts all of these parts together into a working version that could be used for your own proteins of interest. These plots are useful for getting a quick overview of protein conservation when working with a new protein. They could be used to compare with known regions like domains, to identify likely regions of importance for protein deletion studies, or to make structure evaluations. The intriguing aspect of the plots is the opportunity to quickly evaluate and make predictions for experimental study.

Written by Brad Chapman

February 7, 2009 at 5:18 pm

Graphing of variables before classification

It is important to be able to graphically evaluate variables that feed into classification algorithms. In addition to assessing the distribution of the data, visual inspection also reveals patterns that can be later tested statistically. This post describes the preparation of a quick histogram for data from positive and negative examples feeding into a classifier. The excellent matplotlib library is used for visualization.

The example involves classifying two sets of proteins based on regional sequence charge. Two FASTA files were prepared, containing positive (active) and negative (non-active) examples. The goal is to look for a difference in charge between the two groups. Given a window size of amino acids to look at, we loop over the records in the file using the Biopython SeqIO interface:

def file_charges(in_file, cur_window):
    all_charges = []
    with open(in_file) as in_handle:
        for rec in SeqIO.parse(in_handle, "fasta"):
            cur_charges = calc_region_charges(rec.seq, cur_window)
            all_charges.extend(cur_charges)
    return all_charges

We also use Biopython to calculate the isoelectric point of each protein window. This serves as a proxy for charge: higher isoelectric points correspond to higher charges over the region.

def calc_region_charges(seq, cur_window):
    # internal small regions, so do not deal with C and N terminal charges
    IsoelectricPoint.pKcterminal = {}
    IsoelectricPoint.pKnterminal = {}
    cur_pos = 0
    region_charges = []
    while cur_pos < len(seq) - cur_window:
        cur_seq = seq[cur_pos:cur_pos + cur_window]
        prot_analysis = ProtParam.ProteinAnalysis(str(cur_seq))
        ie_calc = IsoelectricPoint.IsoelectricPoint(cur_seq,
                prot_analysis.count_amino_acids())
        region_charges.append(ie_calc.pi())
        cur_pos += 1
    return region_charges

With this in place, we get the charges for the example records and plot them side by side as a histogram using matplotlib:

cur_window = 75
pos_charges = file_charges(pos_file, cur_window)
neg_charges = file_charges(neg_file, cur_window)
n, bins, patches = pylab.hist([pos_charges, neg_charges], 30,
        normed=True, histtype='bar')
pylab.xlabel('Isoelectric point')
pylab.ylabel('Normalized percent of regions')
pylab.title('Protein charge of %s amino acid windows' % cur_window)
pylab.legend([patches[0][0], patches[1][0]], ['positives', 'negatives'])
pylab.savefig('pos_neg_hist.png')
pylab.show()

The resulting graph shows the different distribution of charge between the positive and negative records. At an isoelectric point just above 10, only sequence windows from the positive examples are found. Looking at the percentage of highly charged sequence regions above this threshold in non-classified sequences could serve as the basis for a classifier.
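
That thresholding idea can be sketched directly; the pI cutoff of 10 and the minimum window fraction below are illustrative assumptions, not tuned values:

```python
def charged_fraction(window_charges, pi_threshold=10.0):
    """Fraction of a protein's sequence windows whose isoelectric
    point exceeds the threshold."""
    if not window_charges:
        return 0.0
    above = [c for c in window_charges if c > pi_threshold]
    return float(len(above)) / len(window_charges)

def classify_protein(window_charges, pi_threshold=10.0, min_fraction=0.05):
    """Call a protein positive if enough of its windows are
    highly charged."""
    return charged_fraction(window_charges, pi_threshold) >= min_fraction
```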

Example histogram

The example script puts all of this together, and could be used as a template for classification problems in your own research.

Written by Brad Chapman

January 24, 2009 at 1:26 pm
