Fan and Lv (2008) proposed Sure Independence Screening (SIS), a very effective procedure for tackling ultrahigh-dimensional feature problems. In the context of least squares regression, the SIS algorithm starts with a very simple step called screening: rank the features by the absolute value of their marginal correlation with the response, then keep the top-ranked features, from the first rank down to rank n/log(n). Say n = 30 and p = 100000; then 30/log(30) ≈ 9. Here I have some notes:
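The screening step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not Fan and Lv's own code; the toy data, the function name `sis_screen`, and the signal strength are my own assumptions.

```python
import numpy as np

def sis_screen(X, y, d=None):
    """Rank features by absolute marginal correlation with y and keep
    the top d, with d defaulting to floor(n / log(n)) as in SIS."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))
    # componentwise sample correlation between each column of X and y
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    corr = np.abs(Xs.T @ ys) / n
    # indices of the d top-ranked features
    return np.argsort(corr)[::-1][:d]

# toy example with n = 30, p = 1000: only feature 0 carries signal
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 1000))
y = X[:, 0] + 0.1 * rng.standard_normal(30)
kept = sis_screen(X, y)
print(len(kept))  # floor(30 / log(30)) = 8 features survive
```

Even in this tiny sketch you can see the point of note 1 below: 1000 candidate features are cut down to 8, which is far below the sample size n = 30.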

1. This screening procedure transforms the feature space from ultrahigh- to ultra-low-dimensional (from 30 x 100000 to 30 x 9). Remember that one of the important shortcomings of the Lasso (Tibshirani, 1996) is that it selects at most n features when p > n. Therefore, I think screening this aggressively is overstated, and the feature space should instead be transformed from an ultrahigh- to a high-dimensional space.
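The Lasso limitation mentioned in note 1 can be checked numerically. The sketch below uses scikit-learn's `LassoLars` (the LARS implementation of the lasso path) on a p > n problem; the data and the penalty value are my own assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(1)
n, p = 30, 1000
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)  # pure-noise response

# Even with a small penalty, the lasso path cannot carry more
# active features than the sample size n.
fit = LassoLars(alpha=0.05).fit(X, y)
n_selected = int(np.sum(fit.coef_ != 0))
print(n_selected)  # never exceeds n = 30
```

So the lasso already caps the selected set at n; screening down to n/log(n) ≈ 9 features is a much harder cut than that built-in limit.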

2. I think that applying the Lasso, or its siblings, after screening in this way is an unreasonable procedure and leads to biased estimates. Moreover, the sparsity property may not be satisfied.
