Healthcare Prediction Modeling

1. Methods of Prediction: Main Concepts

Predictions in clinical research are fundamental techniques that can benefit patient outcomes and medical practice. Prediction research is the process of forecasting future outcomes based on certain patterns, markers, and variables (Waljee et al., 2014). Accurate prediction models can indicate the future course of treatment or the risk of developing an illness. Note that, in contrast to explanatory research, predictive research does not tackle causality or preconceived theoretical concepts. Prediction models simply use statistical methods and data mining to forecast future clinical outcomes. More precisely, predictions rely on techniques of statistical inference, which are used to draw conclusions from data sets and include procedures such as regression analysis, linear regression, and vector autoregression models.
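
As a simple illustration of statistical prediction, the sketch below fits a linear regression to made-up patient data and uses it to predict the outcome for a new patient; the variables, values, and outcome are purely hypothetical assumptions for the example.

```python
# Minimal sketch: fitting a linear regression to predict a continuous clinical
# outcome (e.g., length of stay) from hypothetical patient variables.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
age = rng.integers(30, 90, size=200)
biomarker = rng.normal(5.0, 1.5, size=200)
length_of_stay = 0.05 * age + 0.8 * biomarker + rng.normal(0.0, 1.0, size=200)

X = np.column_stack([age, biomarker])
model = LinearRegression().fit(X, length_of_stay)

# Predict the outcome for a new (hypothetical) patient.
new_patient = np.array([[72, 6.1]])
print(model.predict(new_patient))
```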

Predictions can shape not only healthcare practices but societies in general. From weather forecasts to stock market performance, predicting the future is an essential factor for success. Yet, in clinical settings, methods of prediction are paramount, as they can literally save lives. By implementing accurate methods of prediction and forecast, scientists can explore the predictive properties of certain biomarkers and patient characteristics, which allows them to predict numerous clinical outcomes (e.g., rehospitalization, demand for hospital beds, or the risk of developing a disease). Therefore, it’s not surprising that predicting health outcomes is vital to patients and families, as well as to professionals and governments.

2. Types of Methods of Prediction and Validation

Methods of prediction vary between research fields, and numerous multivariable prediction models address vital aspects such as model development, validation, and impact assessment. The most traditional predictive approach is the Bayesian approach. However, with the increasing influence of machine learning and artificial intelligence algorithms, experts can implement other sophisticated models and software, such as random-forest approaches (Waljee et al., 2014). Machine learning algorithms are particularly useful for identifying and analyzing potential, unexpected, and marginal predictors.
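
To illustrate the random-forest idea, here is a minimal sketch on synthetic data; the features, the simulated relationship, and the outcome label are assumptions made for the example, not part of the cited studies.

```python
# Minimal sketch of a random-forest risk model on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 4))          # e.g., age, blood pressure, two biomarkers
logit = 1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n)
y = (logit > 0).astype(int)          # 1 = event (e.g., rehospitalization)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Feature importances can point to unexpected or marginal predictors.
print(forest.feature_importances_)
```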

In addition, it’s interesting to mention that an analysis conducted by Bouwmeester and colleagues (2012) identified and classified five types of medical studies with different models of prediction:

  • Predictor finding studies: These studies explore in detail which predictors independently contribute to the actual prediction (e.g., medical diagnosis).
  • Model development studies without external validation: This approach is based on the development of prediction models (e.g., predictive techniques to guide patient management) by assessing the different weights per predictor.
  • Model development studies with external validation: Similar to the model development studies above, these studies are based on models tested across external datasets, with a focus on external validation (e.g., temporal validation).
  • External validation studies with or without model updating or adjustment: These studies focus on assessing and adjusting previous models of prediction based on new participant data as well as new validation data.
  • Model impact studies: These studies explore the actual effect of the models of prediction on health outcomes and healthcare practices.

Note that the research team concluded that prediction models must involve external validation and impact assessments in order to be successful.

3. Methods of Prediction: Developing, Validating and Assessing Models

Although validation is a vital part of any prediction model, predictive research follows several steps. The first step is the development of a predictive model; note that selecting irrelevant predictive variables can lead to poor performance, and missing data is another crucial aspect that experts must consider. Validation, as explained above, is one of the most important factors for success. Model performance should undergo internal validation (e.g., splitting the dataset) and external validation (e.g., data from new patients). Apart from internal and external validation, there are several specific validation practices: temporal (including new subjects from the same institute), geographical (focusing on another institute, city, etc.), and transmural (exploring different levels of care, e.g., primary care) (Janssen et al., 2009). Sadly, when validation shows poor performance, researchers usually develop a new predictive model instead of adjusting the existing one. By not updating old models of prediction, prior knowledge is left behind, and predictive research often becomes particularistic, with models that apply only to a single setting.
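
The split-sample idea behind internal validation can be sketched as follows; the data, model, and split proportions are illustrative assumptions, and true external validation would still require data from new patients or another site.

```python
# Minimal sketch of internal validation by splitting one dataset into a
# development set and a hold-out set; synthetic data, illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=400) > 0).astype(int)

X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)

# Split-sample and cross-validated performance (internal validation only).
print(model.score(X_hold, y_hold))
print(cross_val_score(LogisticRegression(), X_dev, y_dev, cv=5).mean())
```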

The final step of predictive research is assessment: researchers must assess the performance of their predictive model. This goal can be achieved through numerous additional tests, such as calibration, discrimination or reclassification (Waljee et al., 2014).
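
For illustration, the sketch below computes two common assessment measures on made-up predictions: discrimination via the c-statistic (ROC AUC) and a crude calibration check comparing predicted and observed risk by decile; all values are hypothetical.

```python
# Minimal sketch of discrimination and calibration checks on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=1000)
# Hypothetical predicted risks, loosely correlated with the outcome.
y_pred = np.clip(0.3 * y_true + rng.uniform(0, 0.7, size=1000), 0, 1)

print("c-statistic:", roc_auc_score(y_true, y_pred))

# Calibration: compare mean predicted risk with observed event rate per decile.
deciles = np.digitize(y_pred, np.quantile(y_pred, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    mask = deciles == d
    print(d, y_pred[mask].mean().round(3), y_true[mask].mean().round(3))
```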

4. Predictions in Observational Studies and Randomized Controlled Trials
4.1. Observational Studies and Randomized Controlled Trials

The abundance of research methods and study designs can help experts explore a wide range of settings, populations, and phenomena. Nevertheless, observational studies and randomized controlled trials are the most popular and effective types of studies for evaluating treatment outcomes. Note that in observational studies, outcomes are simply observed after an intervention, without the researcher controlling treatment assignment. In randomized controlled trials, on the other hand, randomization is used to reduce bias and measurement error (Braun, 2014).

Consequently, researchers claim that randomized trials are more effective than observational studies because randomization can eliminate bias and nuisance effects (Trotta, 2012). At the same time, some research topics cannot be tested via a randomized trial, as such studies would be unethical: imagine randomizing non-smokers to a smoking group! It’s no surprise that the quality of the study design is the most important factor for research success.

4.2. Survival Analysis and Censoring

Since medical research and epidemiological studies involve measuring the occurrence of an outcome, prediction models become paramount. In particular, survival analysis, or lifetime data analysis, focuses on measuring the time to an event (the outcome). The event can be fatal (e.g., death), a clinical endpoint (e.g., disease onset), positive (e.g., discharge from hospital), or neutral (e.g., cessation of breastfeeding) (Prinja et al., 2010). Note that time to event can be measured in days, months, years, etc.

Nevertheless, in survival data, the outcome event will not be observed for all patients by the end of the follow-up period (Altman & Bland, 1998). This phenomenon, known as censoring, is something researchers must consider. Censoring occurs when information on the time to the outcome event is incomplete (e.g., due to loss to follow-up or an unrelated accident). There are three types of censoring: right, left, and interval. Consider a study of breastfeeding mothers (Ishaya & Dikko, 2013): if participants are still breastfeeding after their last survey, this is right censoring; left censoring occurs when mothers enter the study after they have stopped breastfeeding; and interval censoring occurs when mothers stop breastfeeding between two successive check-ups.
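
To make censoring concrete, here is a minimal Kaplan-Meier sketch in which some follow-up times are right-censored; the times and event indicators are invented for the example.

```python
# Minimal Kaplan-Meier sketch with right censoring; times and event flags are
# made up (event = 1 means the event was observed, 0 means right-censored).
import numpy as np

times  = np.array([2, 3, 3, 5, 6, 8, 9, 12, 15, 20], dtype=float)  # e.g., months
events = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])

order = np.argsort(times)
times, events = times[order], events[order]

survival = 1.0
for t in np.unique(times[events == 1]):
    at_risk = np.sum(times >= t)               # still under follow-up at time t
    d = np.sum((times == t) & (events == 1))   # events observed at time t
    survival *= (1 - d / at_risk)
    print(f"S({t:g}) = {survival:.3f}")
```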

4.2.1. Prediction Models and Measurement Error in Time to Event Data

Although clinical research is based on precise data and regulated procedures, many datasets are prone to measurement error. As stated above, predicting survival outcomes can be a challenging task, with problems such as censoring. To tackle such challenges, measurement error must be considered in any prediction model. For instance, Meier and colleagues developed a model for measurement error by repeated testing; they based their model on validation data and adjusted proportional hazards methods, with the event data as the primary outcome. Braun and colleagues (2014), on the other hand, analyzed various scenarios for time to event data measured at a single time point, with the event data used as a covariate. Note that their adjusted prediction model was developed on error-free time to event data, while the actual implementation of the model used error-prone time to event data. More specifically, Braun and colleagues found that, in Mendelian risk prediction models, self-reported family history is not always accurate; the team therefore concluded that both sensitivity and specificity should be assessed to avoid distortions in predictions.

Note that Mendelian risk prediction models often draw on meta-analyses; in fact, there are various powerful software packages (e.g., the BayesMendel R package) that can be used to calculate predictions. The BayesMendel package, implemented in the R programming language, allows researchers to apply Mendelian models, evaluate the probability that an individual carries certain gene mutations, and predict the risk of a disease based on a patient’s family history (Chen et al., 2004). In addition, the package covers several models: BRCAPRO for breast and ovarian cancer, MMRpro for the risk of Lynch syndrome, and PancPRO for assessing individuals at risk of pancreatic cancer.

4.2.2. Multivariate Survival Prediction Models and Mendelian Risk Prediction Models

The issue of measurement error in time to event data applies to various multivariate survival prediction models and to different types of predictions (e.g., error-free data, error-prone data, and adjusted data). Simulation approaches, such as the Monte Carlo technique, can be used in this research, including simulations over a range of sensitivity and specificity values. Note that a low sensitivity corresponds to under-reporting of events, whereas a low specificity corresponds to over-reporting of events.
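
A minimal Monte Carlo sketch of this point is shown below: events are simulated and then "reported" with an assumed sensitivity and specificity, so the reported event rate drifts away from the true rate; all parameter values are illustrative and not those used by Braun and colleagues.

```python
# Minimal Monte Carlo sketch: imperfect sensitivity/specificity distort the
# reported event rate (e.g., self-reported family history); values are made up.
import numpy as np

rng = np.random.default_rng(4)
true_rate, sensitivity, specificity = 0.20, 0.70, 0.95
n, n_sim = 1000, 2000

reported_rates = []
for _ in range(n_sim):
    truth = rng.random(n) < true_rate
    # True events are detected with prob = sensitivity; non-events are
    # falsely reported with prob = 1 - specificity.
    reported = np.where(truth, rng.random(n) < sensitivity,
                               rng.random(n) < (1 - specificity))
    reported_rates.append(reported.mean())

# Low sensitivity under-reports events; low specificity over-reports them.
print("true rate:", true_rate, "mean reported rate:", np.mean(reported_rates))
```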

It’s interesting to mention that Braun and colleagues (2014) integrated their measurement error adjustment method into a Mendelian risk prediction model that handles survival outcomes and time to event data simultaneously. What’s more, the research team revealed that even adjusted data might be poorly calibrated in the low-risk deciles, which may affect practical outcomes, such as insurance policies and clinical decisions. It’s no secret that insurance coverage is a complex problem: some health conditions may take a long time to be detected or may be obscured by other factors (Sommers et al., 2017). On top of that, the role of insurance in clinical research is not fully explored by ethics committees.

4.3. Observations and Propensity Scores

Despite the popularity of randomized clinical trials in medical research, observational studies are among the most powerful research methods employed in medical practice. In clinical settings, there are retrospective and prospective observational studies. Retrospective studies involve the analysis of past records to collect relevant prognostic information, whereas in prospective studies, information is collected during treatment and follow-up. While prospective studies are time-consuming and costly, they help experts deal with bias and missing data (Smith, 1990).

Interestingly, propensity scores are often utilized to analyze observational data. A propensity score can be defined as the probability that a patient was assigned to an intervention, given their covariates. As a result, propensity score methods reduce imbalances in baseline covariate distributions between treatment groups, and several variants can be employed. For instance, Rosenbaum and Rubin created a method to stratify patients based on their propensity scores and used the average effect across strata. Rosenbaum also used propensity scores to weight individual observations and to match subjects by their propensity scores, so that cases and controls had similar covariate values. Braun and colleagues (2014) took a different approach: they used propensity scores without requiring covariates to be balanced by treatment assignment. They showed that propensity scores are beneficial in research scenarios with multiple confounders, since such approaches reduce dimensionality, and that propensity score methods can be more reliable than standard regression: under model misspecification, standard regression would lead to bias and errors.
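
The stratification idea can be sketched as follows: propensity scores are estimated with a logistic regression on synthetic observational data, and the treatment effect is averaged across propensity score strata; the data-generating model and the true effect size are assumptions made for the example.

```python
# Minimal sketch of propensity-score stratification on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 3))                                   # baseline covariates
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))  # confounded assignment
treat = (rng.random(n) < p_treat).astype(int)
outcome = 2.0 * treat + X[:, 0] + X[:, 1] + rng.normal(size=n)  # true effect = 2

# Estimate propensity scores and stratify into quintiles.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))

effects = [outcome[(strata == s) & (treat == 1)].mean()
           - outcome[(strata == s) & (treat == 0)].mean() for s in range(5)]
print("stratified treatment effect:", np.mean(effects))  # roughly recovers 2
```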

4.3.1. Measurement Error and Observational Studies

Measurement errors are a normal part of medical research. Methods such as likelihood-based approaches, regression calibration, and Bayesian approaches can be used to adjust for measurement error. It’s worth noting that misclassification of treatment assignment leads to errors in the exposures as well as in the propensity scores (Braun, 2014). Propensity score methods have also been adapted to handle measurement error in confounders, including missing confounders; in fact, numerous techniques, from regression calibration to consistent inverse probability weighting estimators, have been used to adjust for measurement error in confounders.

Since treatment assignment in observational studies can be measured with error, Braun and colleagues focused on another fundamental problem: measurement error in the exposure variable, that is, the treatment assignment. They adjusted for measurement error in the propensity score using validation data and then used the adjusted scores to correct the estimated treatment effects, relying on external validation. Note that Braun and colleagues considered four propensity score methods: stratification, inverse probability weighting of the likelihood, matching, and covariate adjustment. The research team (2014) evaluated the proposed likelihood adjustment by comparing the estimates of the treatment effect for the true treatment assignment, the error-prone treatment assignment, and the likelihood adjustment. Note that during simulations, the team used one dataset for the main study and another as the validation dataset.
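
As a rough illustration of why misclassified treatment assignment matters (not a reproduction of Braun and colleagues’ likelihood adjustment), the sketch below compares inverse-probability-weighted effect estimates based on the true and on an error-prone treatment assignment; the sensitivity, specificity, and data-generating model are invented for the example.

```python
# Minimal sketch: misclassified treatment assignment biases an inverse-
# probability-weighted (IPW) treatment effect estimate; synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
X = rng.normal(size=(n, 2))
treat = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
outcome = 1.5 * treat + X[:, 0] + rng.normal(size=n)          # true effect = 1.5

# Error-prone treatment assignment (assumed sensitivity 0.8, specificity 0.9).
flip = np.where(treat == 1, rng.random(n) > 0.8, rng.random(n) > 0.9)
treat_err = np.where(flip, 1 - treat, treat)

def ipw_effect(t):
    ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
    w = t / ps + (1 - t) / (1 - ps)
    return (np.sum(w * t * outcome) / np.sum(w * t)
            - np.sum(w * (1 - t) * outcome) / np.sum(w * (1 - t)))

print("true assignment:       ", round(ipw_effect(treat), 2))      # near 1.5
print("error-prone assignment:", round(ipw_effect(treat_err), 2))  # typically attenuated
```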

 

5. Methods of Prediction: Conclusion

Methods of prediction are an important part of clinical research that can evaluate the risk of developing a disease. Interestingly, some popular risk scores are the Apgar score, the Acute Physiology and Chronic Health Evaluation (APACHE) score, and the Framingham risk score (Janssen et al., 2009). While the development of new models is crucial, validation is among the most important steps that must be conducted before application in practice. It’s no secret that the distribution of predictors might vary between samples and populations; a lack of accuracy, face validity, or user-friendliness can simply lead to poor practice. The adjustment of prediction models is another vital aspect that experts must consider: as described above, Braun and colleagues (2014) revealed the need for adjustments in survival and time to event data. Last but not least, assessment is also fundamental; in fact, impact studies can test the effects of models of prediction on provider behavior.

With the implementation of technology in healthcare and the improvement of electronic health record systems, electronic predictive models are becoming more and more attractive. Nevertheless, methods of prediction should only complement clinical judgment and if possible, recommend practical decisions to improve patients’ well-being.

 

References:

Altman, D., & Bland, J. (1998). Time to event (survival) data. BMJ, 317(7156), 468. Retrieved from https://www.bmj.com/content/317/7156/468.1

Bouwmeester, W., Zuithoff, N., Mallett, S., Geerlings, M., Vergouwe, Y., Steyerberg, E., Altman, D., & Moons, K. (2012). Reporting and Methods in Clinical Prediction Research: A Systematic Review. PLOS Medicine, 9(5).

Braun, D. (2014). Statistical Methods to Adjust for Measurement Error in Risk Prediction Models and Observational Studies (Doctoral dissertation, Harvard University). Retrieved from http://nrs.harvard.edu/urn-3:HUL.InstRepos:11744468

Chen, S., Wang, W., Broman, K., Katki, H., & Parmigiani, G. (2004). BayesMendel: an R Environment for Mendelian Risk Prediction. Statistical Applications in Genetics and Molecular Biology, 3.

Ishaya, D., & Dikko, H. (2013). Survival and Hazard Model Analysis of Breastfeeding Variables on Return to Postpartum Amenorrhea in Rural Mada Women of Central Nigeria. IOSR Journal of Mathematics, 8, p. 1-9.

Janssen, K., Vergouwe, Y., Kalkman, C., Grobbee, D., & Moons, K. (2009). A simple method to adjust clinical prediction models to local circumstances. Canadian Journal of Anesthesia, 56(3).

Prinja, S., Gupta, N., & Verma, R. (2010). Censoring in Clinical Trials: Review of Survival Analysis Techniques. Indian Journal of Community Medicine, 35(2), p. 217-221.

Smith, R. (1990). Observational studies and predictive models. Anesthesia & Analgesia, 70, p. 235-239.

Sommers, B., Gawande, A., & Baicker, K. (2017). Health Insurance Coverage and Health – What the Recent Evidence Tells Us. The New England Journal of Medicine, 377, p. 586-593.

Trotta, F. (2012). Discrepancies between observational studies and randomized controlled trials. Retrieved from https://www.pharmaco-vigilance.eu/content/discrepancies-between-observational-studies-and-randomized-controlled-trials

Waljee, A., Higgins, P., & Singal, A. (2014). A Primer on Predictive Models. Clinical and Translational Gastroenterology, 5(1).