First, what type of images does the dataset contain? In general, deep features (features generated automatically by deep learning rather than designed by hand) are used for COVID-19 cases instead of specific handcrafted, user-defined features.
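As an illustration, a minimal sketch of extracting deep features with a pretrained CNN (here torchvision's ResNet-18, an assumed choice; any backbone and framework would do) could look like this:

```python
# Minimal sketch: deep-feature extraction with a pretrained CNN (assumed: PyTorch/torchvision >= 0.13).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed, so the output is a feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # 512-d features instead of 1000 class scores
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.Grayscale(num_output_channels=3),    # chest X-rays / CT slices are often single-channel
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(path):
    """Return a 512-d deep feature vector for one image (hypothetical helper)."""
    img = preprocess(Image.open(path)).unsqueeze(0)     # shape (1, 3, 224, 224)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()         # shape (512,)
```

The resulting vectors can then be fed to any classical classifier (SVM, random forest, etc.) or clustered directly.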
It depends on your images and on whether you want to segment them or classify them. For feature extraction you can use intensity, shape and colour features, which are especially useful for segmentation; for feature selection or dimensionality reduction you can use PCA or LDA.
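For example, a small sketch of reducing such handcrafted features with PCA (scikit-learn, assumed; the feature matrix X here is hypothetical random data standing in for per-image intensity/shape/colour features) might be:

```python
# Minimal sketch: dimensionality reduction of handcrafted features with PCA (assumed: scikit-learn).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per image, one column per handcrafted feature.
X = np.random.rand(200, 64)

# Standardize first, since PCA is sensitive to feature scales.
X_scaled = StandardScaler().fit_transform(X)

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```

LDA would be used in a similar way (sklearn.discriminant_analysis.LinearDiscriminantAnalysis), but it is supervised and therefore needs the class labels.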
Extensive and Augmented COVID-19 X-Ray and CT Chest Images Dataset
https://data.mendeley.com/datasets/8h65ywd2jr/2
This COVID-19 dataset consists of Non-COVID and COVID cases of both X-ray and CT images. The images are augmented with different augmentation techniques to generate about 17100 X-ray and CT images. The dataset contains two main folders: one for the X-ray images, with two separate sub-folders of 5500 Non-COVID and 4044 COVID images, and one for the CT images, with two separate sub-folders of 2628 Non-COVID and 5427 COVID images.
Cite it in your research work:
El-Shafai, Walid; Abd El-Samie, Fathi E. (2020), “Extensive and Augmented COVID-19 X-Ray and CT Chest Images Dataset”, Mendeley Data, v2
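If it helps, a minimal sketch of loading that folder layout for classification (assuming torchvision's ImageFolder; the local paths are placeholders for wherever you unpack the Mendeley archive) could be:

```python
# Minimal sketch: loading the X-ray and CT folders (assumed layout: one sub-folder per class).
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])

# Placeholder paths; point them at the unpacked X-ray and CT folders of the dataset.
xray_ds = ImageFolder("dataset/X-Ray", transform=transform)   # sub-folders: COVID, Non-COVID
ct_ds   = ImageFolder("dataset/CT",    transform=transform)

xray_loader = DataLoader(xray_ds, batch_size=32, shuffle=True)
print(xray_ds.classes, len(xray_ds), len(ct_ds))
```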
What image modalities are you using or planning to use? Normally, you try to find texture-based features that have some known class-discriminating power, and this depends on the image modality and the scanned tissue/organ.
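For instance, a sketch of classical texture features based on a grey-level co-occurrence matrix (scikit-image, assumed; the helper name is hypothetical) might be:

```python
# Minimal sketch: GLCM texture features (assumed: scikit-image >= 0.19).
import numpy as np
from skimage import io, img_as_ubyte
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops

def texture_features(path):
    """Return a few GLCM texture descriptors for one image (hypothetical helper)."""
    img = io.imread(path)
    if img.ndim == 3:                      # drop colour channels if present
        img = rgb2gray(img[..., :3])
    img = img_as_ubyte(img)                # 8-bit grey levels for the co-occurrence matrix
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])
```

Which of these descriptors actually discriminate between classes will depend on the modality (X-ray vs. CT) and on the tissue being imaged, so it is worth checking their separability on your own data before committing to a feature set.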