I recently attended the NLM Georgia Biomedical Informatics Course at the lovely Brasstown Valley Resort in Young Harris, GA. This week-long semiannual course is hosted by the Robert B. Greenblatt, M.D. Library at Georgia Regents University and funded by the National Library of Medicine. If you’ve ever heard library colleagues talk about the Woods Hole course, this is the current version of that course. The content changes every session, which is necessary in such a fast-moving field.
Attendees were a nice mix of librarians, clinicians, researchers, and others involved in medical information technology. Instructors at the forefront of their fields came from around the country to teach in this prestigious course. I found it to be a great overview of current important topics in informatics, and I learned so much about the breadth of this essential field from both the instructors and the other attendees. We also did some networking and shot some pool at the local watering hole, Brassies.
Read more to see what was covered (and some cool pictures from a field trip we took).
What is biomedical informatics?
James Cimino answered this question succinctly: the representation of medical concepts in a way that computers can manipulate. This process uses computational power to turn data into information, and information into knowledge.
What kinds of data do we have to deal with?
Clinical data are of major concern, both for research and for improving patient care. If you’ve been to the doctor any time recently, you’ve probably seen your physician entering notes into an electronic health record (EHR). This results in a large amount of unstructured text that could be very useful for research. Your hospital has entirely separate systems to store clinical imaging data. These images require contextual information (metadata) like who the patient is, when the image was taken, and what region is imaged, and they take up orders of magnitude more computer storage space. These data also have to comply with meaningful use requirements: using certified EHR technology to improve quality, safety, and efficiency, and to reduce health disparities.
Now think of all of the other patients at your facility and all of the other facilities around the world collecting similar data. That’s a big data problem if I’ve ever heard one. “Big data” isn’t just about size but also complexity. DNA sequencing generates massive amounts of data as well. Donald Lindberg, Director Emeritus of the National Library of Medicine, gave us a historical perspective on genomics. He did an excellent job of explaining how our understanding of genetics has changed, from the conceptualization of inheritance to the Human Genome Project and beyond, with an emphasis on human disease. New personalized medicine initiatives are proposing adding genomic data to EHRs, increasing the complexity.
It’s not all about EHRs though. Non-clinical genomic data are stored in the NCBI databases. Public health efforts generate massive amounts of data as well. Jessica Schwind introduced us to the interdisciplinary world of public health informatics and the variety of outputs from this discipline, such as disease surveillance data found in tools like HealthMap. Advances in technology allow data to be collected constantly through mobile devices. Rebecca Schnall discussed this new field (mHealth) from its origins in clinical decision support at the bedside to modern smartphone apps.
Mathematical modeling is also integral to the field of informatics. Dmitry Kondrashov made mathematical modeling fun using a modeling environment called NetLogo. We used this program to model the predicted course of a disease outbreak while altering variables like herd immunity, population size, chance of infection, and length of recovery.
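For readers curious what such a model looks like outside NetLogo, here is a rough Python sketch of the same idea. The function name, parameter values, and mechanics here are my own illustration under simple assumptions (random mixing, fixed recovery time), not the model we used in class:

```python
import random

def simulate_outbreak(pop_size=1000, immune_frac=0.0, infection_chance=0.1,
                      recovery_days=10, contacts_per_day=10, days=100, seed=42):
    """Toy agent-based outbreak model: returns how many people ever get infected.

    States: 0 = susceptible, 1 = infected, 2 = recovered or immune.
    """
    rng = random.Random(seed)
    state = [2 if rng.random() < immune_frac else 0 for _ in range(pop_size)]
    days_sick = [0] * pop_size
    state[0] = 1  # patient zero (overrides immunity for simplicity)
    total_infected = 1
    for _ in range(days):
        currently_infected = [i for i, s in enumerate(state) if s == 1]
        if not currently_infected:
            break  # outbreak is over
        for i in currently_infected:
            # each sick agent meets a few random others per day
            for _ in range(contacts_per_day):
                j = rng.randrange(pop_size)
                if state[j] == 0 and rng.random() < infection_chance:
                    state[j] = 1
                    total_infected += 1
            days_sick[i] += 1
            if days_sick[i] >= recovery_days:
                state[i] = 2
    return total_infected

# Raising herd immunity shrinks the outbreak dramatically
print(simulate_outbreak(immune_frac=0.0), simulate_outbreak(immune_frac=0.9))
```

Tweaking `immune_frac` or `infection_chance` and re-running mirrors the slider-twiddling we did in NetLogo.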
So, what do you need to do to create machine readable content?
Mostly, this requires a structured dataset and a language to describe it. Creating a structured dataset can be achieved by following data management best practices. Paul Harris explained the major considerations for doing this with clinical data. That said, every data type is different and requires unique metadata to really make it useful.
Having a nicely structured dataset is all well and good, but how does the computer ascribe meaning? For this you need a controlled vocabulary, a concept that librarians are familiar with. Using controlled vocabularies when designing a dataset applies meaning in a standardized way that allows datasets to be analyzed together. It’s very much the same logic as the MeSH terms applied to articles in PubMed, only for datasets with a lot more fields to describe. Dr. Cimino presented on what makes a good controlled terminology, based on his 1998 publication in Methods of Information in Medicine.
But what standards do you use? Christopher Chute discussed the major standards for clinical data in electronic health records (EHRs). Using standards is essential for using EHR data in research, which is a major component of meaningful use. He also went over the major meaningful use terminologies and information models along with their strengths and weaknesses.
Another option is to make the computer do the work. Wendy Chapman presented natural language processing (NLP) in a really accessible way. NLP is a process by which the computer can be trained to read unstructured text, which is common in EHR data, and ascribe meaning to it automatically. It’s not always as easy as it sounds, though: we performed an activity where we were tasked with coding text from EHR notes, demonstrating that it’s hard to get agreement among human coders, let alone between humans and computers.
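Agreement between human coders like us is commonly quantified with Cohen’s kappa, which corrects raw percent agreement for agreement expected by chance. Here is a minimal sketch (the labels and note snippets are made up for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement from each coder's own label distribution
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# two coders labeling the same four (made-up) EHR note snippets
coder_1 = ["pain", "pain", "no pain", "no pain"]
coder_2 = ["pain", "no pain", "no pain", "no pain"]
print(cohens_kappa(coder_1, coder_2))  # 0.5: well above chance, far from perfect
```

A kappa of 1.0 means perfect agreement; values in the middle are exactly the messy territory our coding exercise landed in.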
How do we handle this diversity of data?
If big data is the question, Michael Ackerman speculates that the answer might be the cloud. He discussed the many flavors of “the cloud” and how they can help “solve” the big data problem. He also discussed the paradigm shift from hypothesis driven research to data driven research and how this necessitates open data to make it really work well, and the need for better infrastructure to support data storage and analytics.
But once the data is in the cloud, how do we make sure that the right people can access it? As you already know, the NLM makes a huge effort to curate and preserve these data. PubMed contains a wealth of health information, but getting the information you need out of it is not always easy for a layman. Kathy Davies gave a great overview of the vast number of curated collections of information from the NLM. NLM houses resource collections for AIDS information and for populations with unique health concerns, like senior citizens or American Indians.
Molecular data are stored in the National Center for Biotechnology Information (NCBI) databases. Rana Morris from the NCBI presented on the resources in those databases in a very practical way. We began with a clinical diagnosis and tracked through MedGen to the Gene database and beyond to find out about the molecular basis of the disease in question.
What problems arise?
We’ve got the standards and the infrastructure, so sharing clinical data should be easy, right? Well, not really. Implementing these systems requires the cooperation of humans, many of whom are resistant to change. Kevin Johnson presented on electronic health records, health information exchange, and meaningful use. The major themes are illustrated in the documentary No Matter Where. Joan Ash tackled these issues from an organizational perspective. Her talk focused on the process of implementing change in organizations adopting new health information technologies. A video titled “HIT or Miss” illustrated how failures in systems can compromise patient care.
Have you ever thought about how you’d manage information retrieval in a disaster? Neither had I until the Disaster Informatics sessions with Steven Phillips and Jennifer Pakiam. They described the NLM resources that are useful in the event of a disaster and led us through a scenario where we used these tools to find information about an earthquake at a nuclear facility.
Where is the field headed?
Betsy Humphries, acting NLM director, discussed research issues in biomedical informatics. She underscored the importance of access to research products, including information, data, and software, as a return on the government’s investment and as a way to promote transparency, reproducibility, and permanent access. She also discussed the Precision Medicine Initiative and gave an update on the NLM leadership transition that is currently underway.
Jessica Tenenbaum discussed the importance of translational bioinformatics to precision medicine. She gave the topic a personal twist by discussing how her own experience with direct-to-consumer genetic testing affected her course of treatment.
Overall, I feel that this class gave me a good overview of what constitutes informatics (a lot more than I realized) and a great basis of terminology involved. I’ve already gotten to put this knowledge into practice on campus in meetings about clinical data management and data science curriculum development. If you want to see what others in the course found interesting, check out the Twitter feed (#NLMGRUInformatics). You’ll even get to see some of the fun things we got to do on our afternoon break.
And on that note, I will leave you with some pictures from the trip we took to Crane Creek Vineyards.
Summer is a busy time for medical librarians, but it can also be a time to hone skills that have been lying dormant. This summer, as I continued to transition into a new position, I realized that my evidence-based practice (EBP) skills were a little rusty. What’s more, I realized that clinicians wanted more from librarians in the area of qualitative analysis than I had training in.
My library supported my attendance at the Supporting Clinical Care: An Institute in Evidence-Based Practice for Medical Librarians workshop held at the University of Colorado Anschutz Medical Campus Library in Aurora, Colorado. The three-day course is led by faculty including Pamela Bagley, Jeff Mason, Angela Myatt, Connie Schardt, Lisa Traditi, and many others. Sponsored by BMJ Best Practice and EBSCO Health, the intensive workshop provides both small and large group learning on topics essential to EBP.
Overall course content is designed to be introductory, which makes this workshop a good opportunity to get started in EBP or brush up on skills. The content for the course was impressive; yes, homework was involved. The workshop is designed to be challenging as well as informative and fun; there is even a bit of competition in the form of an EBP Jeopardy challenge.
One of the major topic areas I had little training in was searching for and evaluating qualitative research. The agenda for this workshop included a large group introduction to qualitative research and small group work. The small group session on qualitative research was informative, as it included a review of qualitative search techniques, modified question framing tools, and practice in assessing qualitative studies. The skilled faculty led both large and small groups in informative discussions about all the topics covered.
During this summer’s session I was lucky enough to meet librarian and co-author Tobin Magle. An unexpected aspect of this workshop is the community that is created so quickly. From small group to large group, participants share their expertise and skills. Networking and teamwork are encouraged throughout the workshop. It was from the small group discussions, as well as some of the team-based activities, that I feel I learned the most, not only about EBP but also about ways to apply what I learned to other aspects of librarianship.
If you are unable to make the 2016 workshop but are still interested in getting training in EBP or qualitative research, workshop instructor Connie Schardt presented two excellent MLA webinars this summer that cover the topics and provide useful information for librarians and clinicians alike.
Thanks, Emily, for summarizing your experience so well! I’ve only been in health science librarianship for about a year now, so I have a lot to learn. Though my primary duties at the library involve working with basic scientists, the EBP workshop was essential to my professional development at the Health Science Library because it allowed me to integrate better with the rest of the staff and put our work in a broader context.
My background is in basic science research. One difference between basic and clinical research that has always struck me is the well-defined structure of clinical research. Many of the concepts are the same (5 section paper format, controls, statistics, etc.), but the way clinical research can be divided into distinct study types is very different. I enjoyed learning about study design and hope to use these skills in my work at the library.
I had already been teaching part of a research methods class (DSAD 5502) in the School of Dental Medicine curriculum using my previous research knowledge, but going to the EBP workshop gave me a framework to hang these similarities on and a way to present the material that is more engaging to future dental professionals. For example, instead of taking the time to explain how to calculate a chi-squared test, I emphasized how to interpret the result to improve patient care. It has also helped me work on PICO questions during literature search consultations with College of Nursing students.
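To make the interpret-don’t-calculate point concrete, here is a minimal sketch (the counts and group labels are made up for illustration) that computes the chi-squared statistic for a 2x2 table and simply compares it to the critical value for one degree of freedom at alpha = 0.05, which is the part students actually need to reason about:

```python
def chi_squared_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    stat = 0.0
    for observed, row, col in [(a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n  # count expected if rows and columns were independent
        stat += (observed - expected) ** 2 / expected
    return stat

# made-up counts: improved vs. not improved, for treatment vs. control groups
stat = chi_squared_2x2([[30, 10], [20, 20]])
CRITICAL_05_DF1 = 3.841  # chi-squared critical value for df = 1, alpha = 0.05
print(round(stat, 2), stat > CRITICAL_05_DF1)  # 5.33 True -> groups likely differ
```

The clinical takeaway is in the last line: the statistic exceeds the critical value, so the difference between groups is unlikely to be chance, and that is what matters for patient care.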
I am very grateful that the Health Sciences Library supported my participation in the workshop. This type of cross training helps me feel more engaged with our organization’s mission.
These last few weeks I have been traveling to the chapter meetings (and participating in the virtual chapter meeting), and during my MLA Update I remind people that engagement within MLA is important for members building their own value in the organization. One of the best ways to be engaged is to join an MLA committee.
Time is running out: you must submit an application to join a committee by October 31, 2015.
Over the years I’ve written several posts about joining an MLA committee; here is a “Behind the Scenes” post which gives a detailed account of the process.
Primary things to remember when joining a committee:
We try very hard to make sure everyone is assigned to a committee, but if you don’t fill everything out, or you list only one committee, it makes things very difficult.
Last year when I assigned committee members I worked with a giant spreadsheet of member requests, a giant spreadsheet of chair requests, and a spreadsheet listing every committee applicant so I could check off that each one got assigned to at least one committee. Thankfully I have two computer monitors so I could keep track of it all.
So please apply to join a committee; it is a great way to get involved.
I love conferences: meeting other librarians, learning about new products and services, and getting great ideas from others’ innovative projects. However, it is always hard to get away to go to conferences. Both the time and funds can be hard to find. This is why I was so excited for the first-ever virtual conference by the Midcontinental Chapter of the Medical Library Association (MCMLA). This was also the first ever all-virtual meeting of any MLA chapter in the history of the organization. I did not have to find money in my budget or time in my schedule, but still was able to attend many informative conference sessions. And, I got to attend the conference while wrapped in my fleece blanket.
I know the virtual conference was years in the making for many dedicated librarians, but they made it look easy. Also, Elsevier, McGraw-Hill, Wolters Kluwer, and Rittenhouse agreed to participate in this experiment and gave presentations about their new products. Overall, the conference had great presenters, engaged participants, and moved smoothly past the few small technical glitches that occurred.
Check out #MCMLA2015 to see the Twitter discussions during the conference and go to the MCMLA conference page for more details about the meeting and the poster that was presented at MLA 2015 about the virtual conference. I hope this is only the beginning of associations experimenting with virtual conferences and exploring alternative ways of sharing ideas and research with each other.
My only other co-worker is transferring to another hospital at the end of the month, so I will soon become a truly one-person library. Hopefully this is only temporary, but it could be permanent. In any case, at least for a few months I’ll be on my own.
Now I need to figure out how to organize my workday to cover two sets of job duties. I have so many questions. Do I sit at the reference desk every day, or do I split my day between the reference desk and my office? I’m not full time. Do I work four 8-hour days and one 4-hour day, or do I spread my hours evenly over 5 days?
Then comes the fun stuff – prioritizing my work. Figuring out how to balance ILLs, searches, technical issues, renewals and other library administrative tasks. Oh, and I forgot to mention the library is moving. Every task is a priority but some have more visible results than others.
Hopefully this will be a temporary situation but on the off chance it isn’t I’ll be documenting my journeys down this rabbit hole. Any comments or thoughts are more than welcome!
Google Scholar (GS) is a very useful addition to the searcher’s arsenal; following a “cited by” trail nicely complements results retrieved by keyword/subject heading searches in databases such as Embase and Medline.
One area where GS is less useful is exporting records to reference management software. Using the settings, you can set up an export to BibTeX, Endnote, RefMan, and RefWorks. However, there are two limitations:
1. Citations can only be exported one at a time.
2. Exported records don’t include abstracts.
GS, after a little fiddling about, does allow you to save citations to a list (My library), but citations in this list can still only be exported one at a time, so this produces no benefit at all. Then I read an interesting paper by Bramer and de Jonge – Improving efficiency and confidence in systematic literature searching* – which mentioned that Harzing’s Publish or Perish can be used to download 1000 references from GS into reference managers such as Endnote.
Could this speed up my click by click populating of Endnote libraries with GS citations (and maybe throw abstracts in as well for good measure)?
Publish or Perish, “designed to help individual academics to present their case for research impact to its best advantage”, is a small bibliometrics program (approx. 1 MB) that can be installed without admin privileges. You can indeed export multiple GS (and Microsoft Academic Search) results, but, alas and alack, it is not the solution to problems 1 and 2 above. Abstracts aren’t included (not totally surprising, as GS doesn’t provide them). And while you can search within the Publish or Perish program in various ways (author, journal, all words, etc.), it just doesn’t match the way you search GS, which is generally a mixture of keyword and cited-by searching, so you cannot easily replicate a set of results.
The subject line of this post implied a solution to the multiple GS export problem. Actually it is more a request to see if anyone else has found a fix – sorry about leading you on like that. But this issue is one of those not-so-large-but-there-must-be-a-better-way ones so I’m hoping someone can suggest a workaround.
The easiest solution would be for Google to make the My library list bulk exportable. While holding my breath and waiting for that, I wonder if anyone out there has found a clever way around this problem? Perhaps a search from the Endnote GS citations against an external database such as PubMed to grab the abstracts in some fiendishly clever way?
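For what it’s worth, here is an untested sketch of the PubMed idea using NCBI’s E-utilities (the function names are mine, and it assumes each citation’s title is distinctive enough to match a single PubMed record, which certainly won’t always hold):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search_url(title):
    """Build an ESearch URL that looks a citation up in PubMed by its title."""
    params = urlencode({"db": "pubmed", "term": f"{title}[Title]",
                        "retmode": "json"})
    return f"{EUTILS}/esearch.fcgi?{params}"

def fetch_abstract(pmid):
    """Fetch the plain-text abstract record for a PMID via EFetch."""
    params = urlencode({"db": "pubmed", "id": pmid, "rettype": "abstract",
                        "retmode": "text"})
    with urlopen(f"{EUTILS}/efetch.fcgi?{params}") as response:
        return response.read().decode("utf-8")

# e.g. fetch the ESearch URL for each exported title, parse out the PMID from
# the JSON response, then call fetch_abstract(pmid) and paste into Endnote
```

You would still be left matching titles to PMIDs and pasting abstracts back into Endnote, so it is more a starting point than a fix.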
* The systematic searching paper mentioned above can be found in PDF format and Word format, with the latter incorporating a couple of corrections as detailed at the end of this post. The paper is interesting for giving all sorts of search tips as well as providing a framework (including online macros) for translating search queries from one database platform to another (Embase into Ovid Medline, etc.). It also has some nifty GS search tips and a table giving a useful search syntax summary across various platforms; the PDF version is good for printing this out. Indeed, it is a paper you need to print out and read at your leisure, not really one you can just skim through online.
***Note from Krafty*** 10/28/15
This post seems to generate a lot of spam in the comments despite anti-spam measures. As a result I have disabled comments on this post. If you want to comment, you must email krafty(atsign)kraftylibrarian(dot)com, and if the comment is related to the post I will post it manually in the comments. Sorry for the inconvenience. Thank you.