Wednesday, November 4, 2009

Content: I would first like to comment on Dr. Parker's lecture, which integrates well with Dr. Fridsma's lecture on electronic medical records and Mr. Warden's previous lecture. Intermountain Healthcare is a good example of how information technology, applied as part of an integrated approach to areas of high variability, including clinical issues, can affect care, cost, and efficiency. The fact that the data can also be used for research is critical, and the evidence-based approach is an important part of this unified scheme. Data flowing into an integration engine linked to a data dictionary is a real-world application of the HL7 RIM (for the integration engine) and of terminology systems such as SNOMED. Another very effective feature is a relational database coupled to an object repository that can retrieve records rapidly when triggered from the database. Both are part of a central data repository (CDR) that communicates directly with users and clinical departments, and the CDR in turn feeds an electronic data warehouse that integrates clinical, financial, and business data. I consider this and the Banner system to be the standards we should be looking to nationally as biomedical informatics (BMI) evolves in clinical medicine.
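To make that flow concrete for myself, here is a minimal sketch of the idea, my own illustration rather than Intermountain's actual system; all class names, codes, and fields below are invented stand-ins for a SNOMED-style data dictionary and a CDR:

```python
# Hypothetical sketch: an integration engine normalizes incoming
# observations against a data dictionary before they reach a central
# data repository. Codes and names are illustrative, not real SNOMED.
from dataclasses import dataclass

# Data dictionary: local lab codes -> (standard concept id, concept name).
DATA_DICTIONARY = {
    "GLU": ("SCT-0001", "Glucose measurement"),
    "HBA1C": ("SCT-0002", "Hemoglobin A1c measurement"),
}

@dataclass
class Observation:
    patient_id: str
    local_code: str
    value: float
    unit: str

class CentralDataRepository:
    """Stores normalized observations for users and clinical departments."""
    def __init__(self):
        self.records = []

    def store(self, record: dict):
        self.records.append(record)

def integration_engine(obs: Observation, cdr: CentralDataRepository):
    """Map a local code to a standard concept, then persist it in the CDR."""
    concept_id, concept_name = DATA_DICTIONARY[obs.local_code]
    cdr.store({
        "patient": obs.patient_id,
        "concept_id": concept_id,
        "concept": concept_name,
        "value": obs.value,
        "unit": obs.unit,
    })

cdr = CentralDataRepository()
integration_engine(Observation("P001", "GLU", 5.4, "mmol/L"), cdr)
print(cdr.records)
```

The value of this design is that every downstream consumer, whether clinical, financial, or research, sees the same normalized concepts rather than each department's local codes.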

At this time, I also want to discuss the two lectures by Dr. Dinu on methods in bioinformatics. As a molecular biologist, I find this an area that has clearly evolved, with enormous current and potential application. In 1981, using recombinant DNA tools that were just being developed, we were able to construct a cDNA library from rat liver. Plating the cDNA clones onto special large sheets of filter paper allowed them to be screened by sequential colony hybridization with cDNA probes derived from different hormonal treatments. This yielded differing abundances for specific cDNAs, reflecting specific mRNAs regulated by hormonal treatment. Abundance was usually determined visually after autoradiography; using digitization and quantitation schemes developed by NASA and Johns Hopkins, abundance could be measured for large numbers of colonies, compared, and represented by differing color intensity. Today's microarrays, with large libraries of DNA in combination with PCR amplification, are an offshoot of these earlier, more labor-intensive colony hybridizations for selecting relevant molecules and probing mechanisms of transcriptional regulation. One major advance has been the incorporation of optically active molecules into probes in place of radioisotopes, which decay and carry health risks. The technology has also taken us much further in identifying and quantifying changes in mRNA abundance through machine learning, both semi-supervised and supervised. This is particularly amazing to me, and I look forward to applying these techniques.
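As a toy illustration of the supervised-learning idea (entirely synthetic data, not anything from the lecture), here is a nearest-centroid classifier that assigns a new expression profile to a hormone-treated or control class:

```python
# Minimal sketch of supervised classification of mRNA abundance profiles:
# a nearest-centroid classifier. The expression matrix is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Rows = samples, columns = genes (e.g., microarray probe intensities).
control = rng.normal(loc=1.0, scale=0.2, size=(5, 100))
treated = rng.normal(loc=1.0, scale=0.2, size=(5, 100))
treated[:, :10] += 1.5           # ten genes up-regulated by treatment

X = np.vstack([control, treated])
y = np.array([0] * 5 + [1] * 5)  # 0 = control, 1 = treated

# Training: the per-class mean expression profile (the centroid).
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(profile):
    """Assign a profile to the class with the nearest centroid."""
    dists = np.linalg.norm(centroids - profile, axis=1)
    return int(np.argmin(dists))

new_sample = rng.normal(loc=1.0, scale=0.2, size=100)
new_sample[:10] += 1.5           # mimics a treated sample
print("predicted class:", classify(new_sample))  # expect 1 (treated)
```

This is the simplest possible instance of the supervised approach; the methods discussed in class scale the same idea to thousands of genes and more sophisticated classifiers.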

Another aspect of extreme significance is the ability to use microarrays for SNP analysis to look for disease associations and potential gene abnormalities. These analyses provide a basis for developing molecular techniques for disease identification and screening, and potentially for evaluating disease severity or recurrence (particularly in the case of cancer). I expect this area to evolve significantly, in no small part because of new techniques in proteomics. In the class exercise, I was very pleased to see the power of the gene-analysis software and BLAST; in the past this kind of work was principally manual and very clunky.
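For a sense of what such an association test looks like in practice, here is a sketch of the standard chi-square test on a 2x2 table of allele counts in cases versus controls; the counts below are invented for illustration:

```python
# Case-control SNP association via a chi-square test on allele counts.
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: risk allele count, other allele count.
table = [
    [180, 220],   # cases:    180 risk alleles, 220 other
    [120, 280],   # controls: 120 risk alleles, 280 other
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
# A small p-value suggests the allele frequency differs between cases
# and controls, i.e., a candidate disease association (subject to
# multiple-testing correction when scanning many SNPs).
```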

In my next blog post, later this week, I will try to make some sense of natural language processing and text retrieval, which have great potential for mining data on gene expression that is not immediately found in simple PubMed searches.


Posted by Stuart
