Relying on increasingly personalized predictions, Precision Medicine (PM) promises to foresee patients' future clinical needs and to tailor interventions to their individual characteristics. PM uses big data and machine learning to manage the clinical needs of individual patients, whereas current medicine uses statistical procedures to define diseases and test treatments. What are the consequences of such a shift?
Established procedures must be reconciled with the demands of personalization, with inevitable consequences for the experience of patients and staff, as well as for scientific activity and institutions. We examine the resulting challenges in three steps.
From Average Treatments to n-of-1 Prediction. Randomized Controlled Trials (RCTs) use population samples to test empirically the efficacy and effectiveness of medical interventions and drugs. PM, by contrast, focuses on highly specific small populations and is developing n-of-1, or single-subject, clinical trials, which rely on large amounts of data collected on an individual patient.
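To make the methodological contrast concrete, the following is a minimal sketch of how a single-patient crossover analysis might look, not a description of any actual PM protocol: one patient alternates between treatment periods A and B, and a within-patient permutation test asks how often chance alone would produce the observed difference. All design details and data (period structure, scores, seed) are invented for illustration; real n-of-1 trials add randomized period order, washout phases, and blinding.

# Hypothetical n-of-1 crossover analysis for a single patient.
import random
import statistics

# One patient, six alternating treatment periods: A (drug) vs. B (placebo).
# Each entry is the mean symptom score measured during that period (invented).
periods = [("A", 3.1), ("B", 5.4), ("A", 2.8), ("B", 5.0), ("A", 3.3), ("B", 4.7)]

scores_a = [s for label, s in periods if label == "A"]
scores_b = [s for label, s in periods if label == "B"]
observed = statistics.mean(scores_b) - statistics.mean(scores_a)

# Permutation test *within this one patient*: shuffle the period labels and
# count how often a difference at least as large as the observed one appears.
random.seed(0)
labels = [label for label, _ in periods]
values = [s for _, s in periods]
hits = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(labels)
    a = [v for l, v in zip(labels, values) if l == "A"]
    b = [v for l, v in zip(labels, values) if l == "B"]
    if statistics.mean(b) - statistics.mean(a) >= observed:
        hits += 1

print(f"Observed B-vs-A difference: {observed:.2f}")
print(f"Within-patient permutation p-value: {hits / n_perm:.3f}")

In an RCT, the analogous test would be run across many patients to estimate an average effect; here the entire inferential weight rests on repeated measurements of one person, which is precisely what makes large amounts of individual data indispensable.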
The Notion of Health. When PM abandons the reference to a population of "similar" individuals, the criterion of normality that defines health is lost. What criteria then replace the orientation to statistical averages when laboratory tests must be interpreted and therapies decided?
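As a reminder of what the population-based criterion looks like in practice, here is a hypothetical illustration of a standard laboratory reference interval, conventionally defined as the central ~95% of values in a healthy reference group (roughly the mean plus or minus two standard deviations for normally distributed analytes). All numbers are invented for illustration.

# Hypothetical population-based reference interval for a laboratory test.
import statistics

# Fasting glucose values (mmol/L) from an invented healthy reference group.
reference_population = [4.2, 4.8, 5.1, 4.5, 5.3, 4.9, 4.4, 5.0, 4.7, 5.2]

mean = statistics.mean(reference_population)
sd = statistics.stdev(reference_population)
low, high = mean - 2 * sd, mean + 2 * sd

patient_value = 5.9
print(f"Reference interval: {low:.1f}-{high:.1f} mmol/L")
print("Patient value is "
      + ("within" if low <= patient_value <= high else "outside")
      + " the population norm")
# Without a reference population of "similar" individuals, this interval,
# and with it the statistical criterion of normality, cannot be computed.

The sketch shows what is at stake: once the reference population disappears, the very notion of a value being "abnormal" loses its established statistical anchor.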
The Right Not to Know. Personalized forecasts tend to produce an excess of knowledge about the prospective health (and illness) of each individual, calling into question the assumptions and practicability of the established orientation to the so-called "right not to know" one's own medical information.