# Answering the question, What predictors are more important?, going beyond p-value thresholding and ranking

Daniel Kapitan writes:

We are in the process of writing a paper on the outcome of cataract surgery. A (very rough!) draft can be found here, to provide you with some context:  https://www.overleaf.com/read/wvnwzjmrffmw.

Using standard classification methods (Python sklearn, with synthetic oversampling to address the class imbalance), we are able to predict a poor outcome with sufficient sensitivity (>60%) and specificity (>95%) to be of practical use at our clinics as a clinical decision support tool. As we write up our findings and methodology, we are having an interesting debate on how to interpret what the most relevant features (i.e. patient characteristics) are.
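The pipeline described above can be sketched in a few lines. This is a toy stand-in, not the cataract data: the imbalance ratio, feature counts, and classifier settings are all assumptions, and `class_weight="balanced"` is used here as a simple substitute for synthetic oversampling (e.g. SMOTE from imbalanced-learn), since both reweight the minority class during fitting.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Toy stand-in for the clinical data: roughly 9:1 class imbalance.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight='balanced' stands in for synthetic oversampling here.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)   # recall on the rare (poor-outcome) class
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

On real data one would report these on a held-out set, as above, rather than on the resampled training data, since oversampling before the train/test split leaks information into the test set.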

My colleagues who are trained as epidemiologists/doctors have been taught to do standard univariate testing, using a threshold p-value to identify statistically significant features.

Those of us who come from machine learning (including myself) are more inclined to just feed all the data into an algorithm (we’re comparing logistic regression and random forest), and then evaluate feature importance a posteriori.

The results from the two approaches are substantially different. Comparing the first approach (using sklearn SelectKBest) with the second (using a sklearn random forest), for example, the variable ‘age’ ends up somewhere halfway down the ranking (p-value 0.005 with F_classif) vs. in the top 6 (feature importance from the random forest).
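The two rankings being compared can be reproduced side by side on toy data (the dataset and hyperparameters below are illustrative assumptions, not the cataract study's). The univariate F-test scores each feature in isolation, while the forest's impurity-based importances reflect how useful a feature was in splits, conditional on the other features, so the two orderings can legitimately disagree.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif

X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
names = [f"x{i}" for i in range(X.shape[1])]

# Univariate approach: F-test p-values, one feature at a time
# (this is the scoring function behind SelectKBest(f_classif)).
F, pvals = f_classif(X, y)
univariate_rank = np.argsort(pvals)            # smallest p-value first

# Multivariate approach: impurity-based importances from a random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
forest_rank = np.argsort(rf.feature_importances_)[::-1]  # largest first

print("univariate order:", [names[i] for i in univariate_rank])
print("forest order:    ", [names[i] for i in forest_rank])
```

A feature like ‘age’ can be mediocre in the first list yet high in the second if its predictive value comes through interactions or nonlinearities that a univariate F-test cannot see.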

As a regular reader of your blog, I am aware of the ongoing debate regarding p-values, reproducible science etc. Although I get the gist of it, my understanding of statistics is too limited to convincingly argue for or against the two approaches. Googling the subject, I come across some (partial) answers:

https://stats.stackexchange.com/questions/291210/is-it-wrong-to-choose-features-based-on-p-value.

I would appreciate it if you could provide some feedback and/or suggestions on how to address this question. It would help us gain confidence in applying machine learning in day-to-day clinical practice.

First, I think it would help to define what you mean by “most relevant features” in a predictive model. That is, before deciding on a procedure for estimating relevance from the data, first figure out how you would define relevance in the first place. As Rubin puts it: What would you do if you had all the data?

I don’t mind looking at classification error etc., but I think it’s hard to make any progress at all here without some idea of your goals.

Why do you want to evaluate the importance of predictors in your model?

You might have a ready answer to this question, and that’s fine—it’s not supposed to be a trick. Once we better understand the goals, it might be easier to move to questions of estimation and inference.

Kapitan replied:

My aim in understanding the importance of predictors is to support clinical reasoning. Ideally, the results of the predictor should be ‘understandable’, such that the surgeon can explain why a patient is classified as high risk. That is, I would like to combine clinical reasoning (inference, as evidenced in ‘classical’ clinical studies) with the observed patterns (correlation). Perhaps this is a tall order, but I think it is worth trying. This is one of the reasons I prefer tree-based algorithms over neural networks: they are less of a black box.

To give a specific example: patients with multiple ocular co-morbidities are expected to have a high risk of poor outcome. Various clinical studies have tried to ‘prove’ this, but never in relation to the patterns (i.e. feature importances) obtained from machine learning. Now, the current model tells us that co-morbidities are not that important (relative to the other features).

Another example: laterality ends up as the second most important feature in the random forest model. Looking at the data, it may be that left eyes have a higher risk of poor outcome. Talking to doctors, this could be explained by the fact that, since most doctors are right-handed, operating on a left eye is slightly more complex. But looking at the data naively (histograms on subpopulations), the difference does not seem significant. Laterality ends up in the bottom range with univariate testing.
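The naive subpopulation check described here can be made explicit with a two-by-two table. The numbers below are entirely hypothetical (a small true difference in poor-outcome rate between left and right eyes, at a modest sample size); the point of the sketch is that such a difference can easily yield a large p-value in a univariate test while still carrying some predictive signal.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# Hypothetical outcome data by eye side: left 6% poor outcomes, right 4%.
left = rng.random(500) < 0.06
right = rng.random(500) < 0.04

table = np.array([[left.sum(),  len(left) - left.sum()],
                  [right.sum(), len(right) - right.sum()]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"left rate={left.mean():.3f}, right rate={right.mean():.3f}, p={p:.3f}")
```

With these sample sizes the test is underpowered for a difference of this magnitude, so “not significant” in the histogram comparison is not evidence that laterality is irrelevant.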

I understand that the underlying statistics are different (linear vs. nonlinear), and intuitively I tend to ‘believe’ the results from the random forest more. What I’m looking for is sound arguments and reasoning for whether, and why, this is indeed the case.
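One concrete check, not mentioned in the exchange above but relevant to whether the forest's ranking deserves that trust: impurity-based importances are computed on the training data and are known to be biased toward high-cardinality features, whereas permutation importance measures the drop in held-out score when one column is shuffled. A minimal sketch on toy data (all settings are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does held-out accuracy drop when
# each feature's values are shuffled, averaged over n_repeats shuffles?
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
order = np.argsort(result.importances_mean)[::-1]
for i in order:
    print(f"x{i}: {result.importances_mean[i]:+.3f} "
          f"± {result.importances_std[i]:.3f}")
```

If a feature such as laterality keeps a clearly positive permutation importance on held-out data, that is a stronger argument for its predictive relevance than either the impurity ranking or a univariate p-value alone.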