from R&D Magazine by Barrett Bready and John Thompson, Nabsys
Ever since the study of individual genes and RNAs was first recognized as important, there has been a drive to obtain genomic information that is as detailed and complete as possible. Early hybridization-based technologies such as Southern and Northern blotting were tremendous advances, but they allowed only a handful of genomic targets to be studied at a time, yielding analog information about length and frequency for a small number of targets.
These methods gave way to microarrays, another hybridization-based approach generating analog data. Arrays provided information on many more RNAs or DNAs, and the age of genome-wide studies began. Array-based methods did not measure RNA lengths and had drawbacks such as limited dynamic range and a requirement for prior sequence knowledge. Despite this, their tremendous throughput advantage meant that many more genes could be examined in parallel, making up for the shortcomings: some information content was sacrificed, but a more complete view of the nucleic acid universe resulted. Array technologies improved to the point that over a million SNPs, or all sequenced RNAs, could be assessed on a single array (subject to the limitations of dynamic range).
As arrays became the standard for evaluating DNA variants and gene expression, DNA sequencing was advancing at an astounding rate, with costs dropping and throughput increasing. This technological transformation had a dramatic impact on genomics experiments, shifting both established methods and previously infeasible ones to sequencing. Arrays were replaced by sequencing on whatever platform researchers had access to.