Sequencing-based approaches to gene expression analysis generate millions of sequence tags, thus providing the dynamic range required to investigate genes of low abundance. Currently available digital gene expression analysis systems offer the potential for high-throughput transcriptomic measurements; however, truly quantitative data are routinely not obtained. The most widely used RNA-seq protocol relies upon fragmentation of mRNA, generating a library of uniformly distributed mRNA fragments. This protocol requires large amounts of starting material (100 ng of mRNA), limiting its application in fields such as developmental biology, where such quantities are often impractical to obtain. Furthermore, this protocol preserves the relative abundance ranking of transcripts, resulting in poor representation of low-abundance transcripts at current sequencing depths. Multireads, together with biases introduced by transcript length and random hexamer primer hybridization, further restrict reliable quantitation of low-abundance transcripts in large mammalian transcriptomes.

While random priming strategies amplify starting material (mRNA or cDNA) by exploiting the hybridization and extension potential of hexamer/heptamer primers, they often yield few good-quality reads owing to primer mis-hybridization and primer-dimer formation. In a recent experiment, the inventors used a widely available sequencer to generate sequence tags via a random priming strategy. Only 18% of the reads mapped uniquely to the transcriptome, and low-abundance transcripts were significantly under-represented because of poor dynamic range. Since many genes (e.g., those encoding signal-transduction components and transcription factors) are expressed at relatively low levels, currently available strategies fall short of statistically reliable quantification of these genes.
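To illustrate the transcript-length bias mentioned above, the sketch below shows a standard RPKM-style normalization (reads per kilobase of transcript per million mapped reads), which is one common correction rather than the specific method of this disclosure; the transcript names, counts, and lengths are hypothetical.

```python
# Illustrative sketch (hypothetical data): without length normalization,
# a long transcript accumulates more reads than a short one even at
# identical per-base coverage; RPKM corrects for this.

def rpkm(counts, lengths_bp, total_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return {
        tx: counts[tx] / (lengths_bp[tx] / 1_000) / (total_reads / 1_000_000)
        for tx in counts
    }

counts = {"long_tx": 1000, "short_tx": 100}       # raw mapped-read counts
lengths = {"long_tx": 10_000, "short_tx": 1_000}  # transcript lengths in bp
total = 1_000_000                                  # total mapped reads

norm = rpkm(counts, lengths, total)
# Both transcripts have the same per-base coverage, so after length
# normalization they receive the same RPKM value (100.0 each).
```

In raw counts the long transcript appears ten-fold more abundant; after length normalization the two are equal, which is why raw tag counts alone cannot be compared across transcripts of different lengths.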