Trying to describe a pain you feel to your doctor can be a difficult task. But soon, you won’t have to: a computer avatar is expected to tell your doctor everything they need to know.
The CompBioMed Centre of Excellence, an international consortium of universities and industry partners, is developing a programme that creates a hyper-personalized avatar or ‘virtual human’ using a supercomputer-generated simulation of an individual’s physical and biomedical information for clinical diagnostics.
There is a rapidly growing need for this kind of technology-enabled healthcare. An estimated 12 million people who seek outpatient medical care in the U.S. experience some form of diagnostic error. Additionally, the World Health Organization estimates that there will be a global shortage of 12.9 million healthcare workers by 2035.
Greater access to technology-enabled healthcare will allow doctors to make better and faster diagnoses – and provide the tools to collect the necessary data.
BUILDING A VIRTUAL HUMAN
The Virtual Human project combines different kinds of patient data that are routinely generated as part of the current healthcare system, such as X-rays, CAT scans or MRIs, to create a personalized virtual avatar.
This enables doctors to provide more precise diagnostics, develop healthcare interventions based on a patient’s specific physiology, and run personalized medicine and clinical simulations for effective treatment.
“That’s the idea: that you can use a supercomputer to integrate information, so you can make a more informed decision – and it’s one based on an individual, not on a population,” Prof. Andrea Townsend-Nicholson, who runs one of the academic research groups working on the Virtual Human project, told ITU.
“The effectiveness of these codes in accurately predicting what takes place in the heart, our bones, or our proteins depend[s] on the computing power available,” Townsend-Nicholson said.
CODING THE VIRTUAL HUMAN
At present, the Virtual Human project is focusing on three aspects of bio-physical health and medicine:
Neuromusculoskeletal: The computer codes used to assemble the neuromusculoskeletal Virtual Human use finite element methods and analysis to model forces and strain distribution, looking at how to minimise biomechanical stress in an effort to avoid bone fractures.
Molecularly-based medicine: The molecularly-based medicine computer codes are primarily molecular dynamics, a computational simulation method that is used to predict the movement of atoms and molecules in a system over a period of time. By looking at the structure of a protein, it can predict how the protein will change shape in response to a particular medication. Hence, doctors can determine which drug will be the most effective treatment. It will also help clinicians design new, more effective, medicines.
Cardiovascular: For the cardiovascular work, the computer codes use powerful computational fluid dynamics methods that can simulate complex geometries in a straightforward way.
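As a flavour of the finite element approach mentioned in the neuromusculoskeletal bullet above, the sketch below solves the textbook 1D problem of a bar fixed at one end and loaded at the other: element stiffnesses are assembled into a global matrix, which is then solved for the displacement at each node. This is purely illustrative – the material values are invented and the project’s actual codes are far more sophisticated.

```python
# Illustrative only: 1D bar finite element model (linear elements).
# All material values below are invented for the example.

def solve_bar(n_elems, length, E, A, tip_force):
    """Uniform 1D bar fixed at x=0, axial point load at the free end."""
    h = length / n_elems
    k = E * A / h                      # stiffness of one linear element
    n = n_elems                        # number of free nodes (node 0 is fixed)
    # Assemble the (tridiagonal) global stiffness matrix over free nodes.
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        i, j = e - 1, e                # free-node indices (i = -1 is the fixed node)
        if i >= 0:
            K[i][i] += k
            K[i][j] -= k
            K[j][i] -= k
        K[j][j] += k
    f = [0.0] * n
    f[-1] = tip_force                  # point load at the tip
    # Solve K u = f by Gaussian elimination (K is symmetric positive definite).
    for col in range(n):
        for row in range(col + 1, n):
            m = K[row][col] / K[col][col]
            for c in range(col, n):
                K[row][c] -= m * K[col][c]
            f[row] -= m * f[col]
    u = [0.0] * n
    for row in reversed(range(n)):
        s = f[row] - sum(K[row][c] * u[c] for c in range(row + 1, n))
        u[row] = s / K[row][row]
    return u                           # nodal displacements along the bar

u = solve_bar(n_elems=8, length=1.0, E=200e9, A=1e-4, tip_force=1000.0)
```

For this simple case the finite element tip displacement matches the exact analytical result, F·L/(E·A); real musculoskeletal models apply the same assembly-and-solve idea to millions of 3D elements.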
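The molecular dynamics described in the bullet above advances atomic positions step by step under interatomic forces. Below is a minimal sketch – two atoms interacting through a Lennard-Jones potential, integrated with the velocity-Verlet scheme widely used in MD codes. Units and parameters are reduced and invented; nothing here reflects the project’s production codes, which simulate vastly larger biomolecular systems.

```python
# Illustrative only: velocity-Verlet molecular dynamics for a single pair
# of atoms in reduced units (epsilon = sigma = mass = 1).

def lj_force(r, eps=1.0, sigma=1.0):
    """Force along the separation axis from the Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 24 * eps * (2 * sr6 * sr6 - sr6) / r

def lj_energy(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 * sr6 - sr6)

def simulate(r0=1.5, steps=10000, dt=1e-3, m=1.0):
    """Integrate the relative motion with the velocity-Verlet scheme."""
    r, v = r0, 0.0                     # separation and relative velocity
    f = lj_force(r)
    for _ in range(steps):
        v += 0.5 * dt * f / m          # half-step velocity update
        r += dt * v                    # full-step position update
        f = lj_force(r)                # force at the new position
        v += 0.5 * dt * f / m          # second half-step velocity update
    return r, v

r, v = simulate()
```

A useful sanity check on any MD integrator is energy conservation: over the run, kinetic plus potential energy should stay essentially constant, which is what makes long predictive simulations of proteins possible.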
“You can, for example, look at the MRI of a person’s heart to see what its actual anatomical structure is and combine that information with the patient’s electrocardiogram (EKG), which measures the electrical activity of the heart, so you can see what the electrophysiological parameters are like. If the person were to need a pacemaker, those two pieces of information, and anything else that was necessary, could be integrated in the Virtual Human to help the cardiologist know what size of pacemaker [was needed] and where to put it,” she said.
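As a toy illustration of the computational fluid dynamics idea behind the cardiovascular codes – discretising a flow domain and solving the resulting equations numerically – the sketch below computes steady laminar flow between two plates with a finite-difference method. The geometry and parameters are invented; a real simulation of blood flow through a patient’s heart is incomparably more complex.

```python
# Illustrative only: steady plane Poiseuille flow, mu * u'' = -G with
# no-slip walls (u = 0 at y = 0 and y = H), solved by central differences.

def plane_poiseuille(n=51, H=1.0, G=1.0, mu=1.0):
    """Return the velocity profile at n equally spaced points across the gap."""
    h = H / (n - 1)
    m = n - 2                          # interior unknowns (walls are fixed at 0)
    # Tridiagonal system from the central-difference approximation of u''.
    a = [-1.0] * m                     # sub-diagonal
    b = [2.0] * m                      # main diagonal
    c = [-1.0] * m                     # super-diagonal
    d = [G * h * h / mu] * m           # right-hand side
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in reversed(range(m - 1)):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]           # add the no-slip wall values back

u = plane_poiseuille()
```

The numerical profile can be checked against the textbook parabolic solution u(y) = G·y·(H − y)/(2μ) – the kind of validation against known flows that any CFD code must pass before being trusted with patient-specific geometries.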
But creating a full virtual human is a technical challenge as it requires a large amount of computing power.
Today’s petascale supercomputers are capable of making a quadrillion (one thousand million million, or 10¹⁵) calculations per second – still too slow to run the necessary computations on the timescale needed to inform clinical decision-making.
“What we can’t do as easily is some of the analysis that takes significant amounts of computing, like working out what an individual patient’s defective protein receptor looks like so that we can make a blocker made specifically for it,” Prof. Townsend-Nicholson explained.
“[So] although we have several parts we can assemble, we cannot yet assemble the whole Virtual Human.”
The US, China, Europe and Japan are all in the process of building exascale supercomputers – a thousand times more powerful than the petascale machines, able to make a quintillion (a million million million, or 10¹⁸) calculations per second – which will have the necessary computational power to build individual Virtual Humans for everyone.
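To put that thousandfold jump in perspective, a back-of-the-envelope calculation helps – the 10¹⁹-operation workload below is an invented figure, purely for illustration:

```python
# Illustrative arithmetic only: turnaround time for a hypothetical
# simulation workload on petascale vs exascale machines.
PETA = 1e15        # calculations per second (petascale)
EXA = 1e18         # calculations per second (exascale)

workload = 1e19    # hypothetical total calculations for one simulation

seconds_peta = workload / PETA
seconds_exa = workload / EXA
print(f"petascale: {seconds_peta / 3600:.1f} hours, exascale: {seconds_exa:.0f} seconds")
```

A job that ties up a petascale machine for hours finishes on an exascale machine in seconds – the difference between a research exercise and a result a clinician can act on during a patient visit.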
“The CompBioMed Centre of Excellence that I am involved in is explicitly funded by the EU to develop high performance computing codes for biomedicine, for use now and in preparation for when we reach the exascale,” Prof. Townsend-Nicholson said.
LOOKING TOWARDS THE FUTURE OF PERSONALIZED CARE
However, technology is just one part of the puzzle in making a virtual human available to the general public.
“All these things that are bringing leading-edge research into day-to-day practice require you to work very closely with policy-makers and the regulators,” Prof. Townsend-Nicholson said. “You need to work in lockstep with each other so that you can have the smoothest, most efficient and safest path forward.”
Bringing a virtual human to life isn’t just about the policy and regulatory aspects; it is also about ensuring that healthcare providers can use the technology easily and efficiently.
“The research is fabulous,” she said, “but the thing I’ve been particularly interested in is training the next generation of scientists and clinicians: to get them to become familiar with this technology, to get them to be digital natives, so that they are as anxious to see it come through as we are to give it to them.”
by Lucy Spencer, ITU News