HPC Matters: Impact on Precision Medicine

I’m sitting in the first real session of the technical programme at SC16: the plenary session on HPC Impact on Precision Medicine. I’m going to try to live blog as much of the conference as possible. Bear in mind it’s what I see and hear, filtered through what I find interesting and what I have time to actually get down in text, so please don’t take it as a true and correct record of the session. But if it inspires you to look a little further into a topic, or gives you some perspective on the use of HPC – High Performance Computing – then it’s a worthwhile effort.

Here goes!

“We have a war on cancer going back many decades, but we can’t say we’ve cracked the problem.”

“We are now on the path to producing computers that can do 10^18 operations per second, but we’re also working out how to use computers that operate at that scale to solve problems. These tools will be central to cracking the problem of cancer.”

“Wherever there are questions that tie our brain in knots, HPC is untangling them.”

Precision medicine takes account of individual variations to tailor the most appropriate treatments to patients. It used to be called personalised medicine.

Healthcare spending in the US is heading towards 11 trillion dollars in 2021. Precision medicine provides tools to enable better outcomes for patients. Zeroing in on the most appropriate care is the goal.

It can help to save the lives of children and adults who have exhausted the possibilities of traditional medicine.

Cancer moonshot aims to accelerate cancer research, using precision medicine.

The session has 5 distinguished experts with 5 different perspectives on precision medicine.

We have Fred Streitz from Lawrence Livermore National Laboratory; Mitchell Cohen, a surgeon and translational science investigator from Colorado; Warren Kibbe from the National Cancer Institute; Steve Scott from Cray Inc.; and Marti Head from GlaxoSmithKline.

Mitchell Cohen talks about physiological state recognition, formerly known as clinical acumen – i.e. knowing the state the patient is in, recognising the situation, and adapting to it. Much of the general public believes that they will get sick, go to hospital, and have a tricorder tell them what’s wrong and how to fix it. In practice you’re relying on an experienced clinician who knows what he/she is looking at. It’s about experience.

The question is: can we use computational modeling, HPC, and what we know about biology and physiology to model that experienced clinician?

Unfortunately the current state of the art in physiological state recognition is not that great.

The state of the art of ICU precision medicine is a 4×6 index card with notes and test results scrawled on it. This is how we take care of the sickest patients. We use our combined clinical gestalt to make decisions based on data which may be wrong or outdated. We treat one parameter against one threshold – we’re treating univariately in a multivariate world. If you get the astute clinician you’re going to do well. If you don’t get that astute clinician things might not go so well.
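To make the univariate-vs-multivariate point concrete, here’s a toy sketch of my own (not anything shown in the session, and the vital-sign values are invented): each parameter can sit comfortably inside its own “normal” range while the combination is alarming.

```python
# My own illustrative sketch: univariate threshold alarms vs. a joint,
# multivariate view of the same (hypothetical) vital signs.

# Hypothetical patient vitals: heart rate (bpm), systolic BP (mmHg).
patient = {"heart_rate": 95, "systolic_bp": 105}

# Univariate alarms: each parameter checked against its own threshold.
UNIVARIATE_LIMITS = {"heart_rate": (60, 100), "systolic_bp": (90, 140)}

def univariate_alarms(vitals):
    """Flag any single parameter outside its own normal range."""
    return [name for name, (lo, hi) in UNIVARIATE_LIMITS.items()
            if not lo <= vitals[name] <= hi]

# Multivariate rule: a rising heart rate *combined with* a low-ish blood
# pressure is a classic early warning sign, even while each value is
# still individually "normal".
def shock_index(vitals):
    """Shock index = HR / systolic BP; above ~0.9 is commonly concerning."""
    return vitals["heart_rate"] / vitals["systolic_bp"]

print(univariate_alarms(patient))      # [] -- no single value out of range
print(round(shock_index(patient), 2))  # 0.9 -- the combination is worrying
```

The shock index is a real (if crude) two-variable example; the point of the sketch is just that the astute clinician’s judgement is inherently multivariate, which is exactly where computational models could help.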

We ask – can we sequence the cancer and you and figure out which treatment will work?

I have a hope that we look back at that several years from now and say that wasn’t really precision medicine, that was just better pathology. The real precision medicine is identifying the patient’s physiological state – where they are at any point in time, and where they will be in 2 minutes’ time, an hour, a day – and how we can modify their trajectory towards better health.

The beauty of HPC is that it can sit in that chasm between model driven basic biology and data driven medicine.

This will fundamentally change the art and practice of medicine.

Warren Kibbe talks about the cancer moonshot. The National Cancer Institute’s mission is to develop the scientific evidence base for understanding cancer, and lessen the burden of cancer around the world. In 2016 there will be 1,700,000 new cancer cases and 600,000 cancer deaths in the USA alone, and nearly 14,000,000 new cancer cases around the world. It’s a really important disease for us to understand, and we are understanding it in fundamental ways. The good news is that the mortality rate of cancer has been declining since 2007.

Precision medicine will lead to fundamental understanding of the complex interplay between genetics, epigenetics, nutrition, environment and clinical presentation and lead to evidence based treatments.

Cancer requires biological understanding, advances in scientific methods, instrumentation, technology, data, and computation. We need to be able to model and predict cancer in a very different way to the way we do now.

We need to share data and share ideas around the world and around the research community, such as through the Genomic Data Commons. We need to get the data into the cloud so that people can access it more effectively. We’ve done something fundamentally different for cancer. We want to have every patient’s data across the country and across the world be accessible from a prediction standpoint.

Go to cancer.gov/brp to read the recommendations for the best ideas we have on the cancer moonshot.

Steve Scott – we’re trying to bring together all the information we have available to better understand the patient’s situation. We’re dealing with increasingly complex and large datasets. We need to include all the information we have about the population, about medical literature, about the environment, and mine it for the long tail, the statistical outliers, so that we can better understand the individual.

We have databases filled with a vast number of cancer mutations. Historically what we’ve done is look at the most common ones and find good treatments for those. That gives you good treatments for the most common cancers. HPC and data analytics give you the power to look at the uncommon ones too, and use the entire database. These are the sort of understandings that can lead to precision treatments for individuals.
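Here’s a toy illustration of that long-tail idea (my own sketch, with invented counts – the common variant names are real, well-known cancer mutations, but the numbers are made up): a handful of common mutations dominate individually, yet the rare ones taken together account for a comparable share of patients.

```python
from collections import Counter

# Hypothetical mutation observations: a few common variants plus a long
# tail of rare singletons. Counts are invented for illustration.
observations = (["KRAS_G12D"] * 50 + ["BRAF_V600E"] * 40
                + ["EGFR_L858R"] * 30
                + [f"rare_variant_{i}" for i in range(80)])  # 80 singletons

counts = Counter(observations)
common = {m: n for m, n in counts.items() if n >= 10}
rare = {m: n for m, n in counts.items() if n < 10}

# Historically we study the head of the distribution...
print(len(common), sum(common.values()))  # 3 common variants, 120 cases
# ...but the tail, taken together, covers a comparable number of patients.
print(len(rare), sum(rare.values()))      # 80 rare variants, 80 cases
```

Treating only the head leaves the tail’s patients behind; the compute-intensive part is that the tail has far more distinct variants to analyse than the head does.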

One group created scalable software to do complex pattern matching on a graph database of medical literature. Using this they can answer some complex questions about medical treatments and technologies, and develop specific treatments and even diagnose individual rare cases.

The emerging data analytics world has a disconnect with clinicians. We need to work together and produce practical solutions that clinicians can use now.

Computational needs are getting more and more complex. The real power of supercomputers depends more on ability to move data – on memory and interconnect – than on operations per second.

Precision medicine is moving towards large scale graph analytics and machine learning and we can learn from HPC disciplines to make this possible. We need to build solutions that actual clinicians can use without having to be computer scientists.

The ExAC (Exome Aggregation Consortium) database is a tool that is being used by clinicians, and is a great example of a usable system that is useful to the people at the coalface.

Fred Streitz from LLNL

We say that HPC matters, and in this community we know that’s true. But it’s not always easy to convince the rest of the world. We come up with stories about the development of the iPhone or the creation of a Boeing jet, but now we’re talking about using HPC to change medicine, and that will matter to everybody.

We have 3 different programs at 3 different scales. Pilot 1 is at the cellular level, looking at predictive models for pre-clinical screening. If you give a drug to a particular cell line, even one that’s ostensibly identical, you don’t get identical results. We have developed a large and growing database of these responses – can we use machine learning to uncover the predictive patterns in that pre-clinical data?
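As a minimal sketch of what “learning predictive patterns from pre-clinical response data” might look like (entirely my own toy example – the features, cell lines, and labels below are invented, and the real pilot will be far more sophisticated): predict a new cell line’s drug response from its nearest neighbours in feature space.

```python
import math

# My own illustrative sketch: nearest-neighbour prediction of drug
# response from (invented) cell-line features.
# Features: (expression_marker_a, expression_marker_b) -> responded?
training = [
    ((0.9, 0.1), True),
    ((0.8, 0.2), True),
    ((0.2, 0.9), False),
    ((0.1, 0.8), False),
]

def predict(features, k=3):
    """Majority vote among the k nearest training cell lines."""
    by_distance = sorted(training,
                         key=lambda item: math.dist(features, item[0]))
    votes = [label for _, label in by_distance[:k]]
    return votes.count(True) > k // 2

print(predict((0.85, 0.15)))  # True -- resembles the responding lines
```

The real value of HPC here is scale: with thousands of cell lines, many drugs, and high-dimensional molecular features, even simple pattern-finding like this becomes a serious computational problem.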

Pilot 2 is focused at the atomic/protein level. Looking at the RAS protein mutation that causes unrestrained growth, which is responsible for around 30% of human cancers, and some of the really nasty ones like pancreatic cancer and lung cancer. We want to identify targetable areas where we can develop therapeutics.

Pilot 3 is focused on the population scale, developing an effective national cancer surveillance programme, looking at all the data we already have about who has had cancer and what treatment and response they’ve had: using natural language processing to combine all the different data sets from state to state, and then using machine learning to find patterns in the combined data.

In this research, partnership is crucial. The partnership changes how you view the problem. You need to put people together with different skills and perspectives – that’s how you solve the problem.

Marti Head from GSK. We think of disease as a war and it’s an imperfect metaphor. It’s often the response of our own bodies that causes the real damage and is the real disease. Cancer we think of as something that’s coming after us, but really it’s a part of who we are. A part of our genetics, our physiology, and our environment. We need to take a holistic approach to all that I am as an individual and how that contributes to throwing my body out of homeostasis, away from health, and towards disease.

We have this amazing increase in the data available to us. The clinical data. The data that’s locked up within pharmaceutical companies. The data is complex and complicated and on different scales, and we need a transformation in the way that we discover drugs and deliver them to our patients. We need to be able to combine all the different types of data and create a holistic view of us and how we manage our health.

It takes us 5-7 years to go from a disease hypothesis to a drug candidate that can be used in the clinic, and then there are years of clinical trials before we know whether that disease hypothesis is anything close to true. If we want to work at an individual level we have to go faster. We can’t just take the same processes we’re already using, shrink the whitespace, and rush. We need to transform the way we do things. This is why HPC matters.

Question session

We have only a few hundreds of thousands of genomes available to us now; we need a lot more data. We need to understand the questions we’re trying to ask and the problems we’re trying to solve, work out where the gaps in the data are, and find as many creative ways as possible to fill them.

Solving cancer requires us to understand fundamentals about biology and we’re not even close to that. We need a tremendous amount of data and much better predictive models than we currently have. In the short term we can look at what we can do with the data we already know how to generate and get better at doing that.

We could start really impacting patients’ lives within 5 years. We are going for a fundamental paradigm shift in the way we do medicine, but each incremental step helps our clinicians save lives today. Even a small incremental improvement in our understanding of biology can change things at the bedside right now.

So that’s my rapid brain dump of the plenary session. If anything is unclear, please ask me! Comment, discuss, engage. 🙂

All mistakes are doubtless my own and not the responsibility of the speakers.




About lindamciver

Australian Freelance Writer, Teacher, & Computer Scientist