In the current era, the precision medicine initiative is stressed as a priority for delivering the best possible medical care. In parallel, the statistical method of bootstrapping, supported by powerful computing and modern AI (e.g. neural networks), is argued to be more reliable in some settings than the 'classical statistics' used in the biomedical milieu. One can generate statistically confident results without large samples, as long as the smaller samples included are representative and correctly chosen (covering enough of the qualitative variation relevant to the study). Some questions arise when considering these large shifts and the new possibilities they open for generating and practicing medical knowledge.

1. Will large-scale studies become redundant once precision medicine methods (such as (epi)genetic analysis for assessing pharmacotherapy efficacy or enzyme status) become more available and cheaper? This would surely prevent many severe, life-threatening side effects (and thereby costs, as well as losses in the patient's quality of life) and lead to more favorable clinical outcomes, for the benefit of the patient and the entire medical system.

2. Could these new 'large-scale studies' instead be run on correctly chosen smaller samples analyzed with the bootstrapping method? (This would save time and money, and might generate equally or even more significant findings thanks to computing power previously unavailable; a minimal sketch of the method follows below.)
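For concreteness, here is a minimal sketch of the percentile bootstrap in Python. The data, sample size, and effect size are entirely hypothetical, chosen only to illustrate the resampling idea behind question 2:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(sample, stat=np.mean, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic
    computed on a small sample (illustrative only)."""
    sample = np.asarray(sample)
    # Resample with replacement and recompute the statistic each time
    boot_stats = np.array([
        stat(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_resamples)
    ])
    lower = np.percentile(boot_stats, 100 * alpha / 2)
    upper = np.percentile(boot_stats, 100 * (1 - alpha / 2))
    return stat(sample), (lower, upper)

# Hypothetical small sample, e.g. INR change in 20 carefully selected patients
inr_change = rng.normal(loc=0.8, scale=0.3, size=20)
point, (lo, hi) = bootstrap_ci(inr_change)
print(f"mean = {point:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

The width of the resulting interval makes explicit how much (or how little) certainty such a small sample actually carries, which is exactly the trade-off the question asks about.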

3. An example: commonly used drugs that affect coagulation (warfarin, P2Y12 antagonists, the newer direct oral anticoagulants) can have fatal side effects that we want to avoid. With large-scale studies based on real population data, we can assess the common risk factors/symptoms and derive assessment scores (e.g. HAS-BLED, or those used for primary prevention such as CHADS2/CHA2DS2-VASc; see the sketch below), from which we can anticipate direct consequences for patients and also project the economic impact of such therapies. With precision medicine, we could surely avoid many unwanted side effects that stem from drug-drug interactions or concomitantly used enzyme inhibitors/inducers, but also from genetic alterations (e.g. coagulation cascade variability leading to a prothrombotic or prohemorrhagic state).
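To illustrate how such population-derived scores reduce to simple bedside rules, here is a sketch of CHA2DS2-VASc scoring in Python. The function and parameter names are mine, introduced only for illustration; the point weights follow the commonly published definition of the score:

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular, female):
    """CHA2DS2-VASc stroke-risk score (0-9) from standard clinical inputs."""
    score = 0
    score += 1 if chf else 0            # Congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0   # Hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # Age bands
    score += 1 if diabetes else 0       # Diabetes mellitus
    score += 2 if stroke_tia else 0     # Prior stroke / TIA / thromboembolism
    score += 1 if vascular else 0       # Vascular disease (MI, PAD, aortic plaque)
    score += 1 if female else 0         # Sex category (female)
    return score

# Example: a 78-year-old woman with hypertension and diabetes -> score of 5
print(cha2ds2_vasc(chf=False, hypertension=True, age=78,
                   diabetes=True, stroke_tia=False, vascular=False, female=True))
```

The score itself is a product of large-scale population statistics; the question is whether precision medicine data could replace or refine such coarse point systems for the individual patient.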

In what direction will, or should, the decision-making process gravitate in the future? Statistics describes something that never exists exactly in reality, yet it can point an uncertain clinician toward a safer route, while precision medicine (becoming more affordable by the day) respects the basic physiology and biology that are entirely unique to each individual. Will these two approaches (small-scale vs. large-scale) find equal purpose, or are they irreconcilable by nature?

https://www.nih.gov/precision-medicine-initiative-cohort-program

https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
