All the previous respondents addressed interesting issues, but
@Shipra Suman has touched on the subject of feature engineering. Deep learning allows us to handle features that cannot currently be understood, and the dimensionality is high.
The fact that many problems are solved quickly and well should not make us lose sight of meaning. We need this type of analysis to handle semantics and semiotics.
Deep Learning techniques are beating the human baseline accuracy in many domains. For example, check the ISIC competition, which includes detection, segmentation, and classification of skin cancer types. The top winners are all using deep learning techniques.
Yet, both DL and traditional approaches have their trade-offs. Training deep learning models is computationally expensive and requires many data samples to achieve the desired results. If you have the resources and the data, the literature mostly suggests deep learning.
I agree with Sara Atito Ali Ahmed, and would like to further mention that deep learning, as an application of machine learning, has the capability to process huge amounts of data in comparison with existing conventional techniques. Besides its high computational cost, one of its main drawbacks is its unreliability: why it produces such high accuracy still needs to be studied and justified, and remains an open problem.
Medical image segmentation, identifying the pixels of organs or lesions against the background in medical images such as CT or MRI scans, is one of the most challenging tasks in medical image analysis, as it must deliver critical information about the shapes and volumes of these organs. Many researchers have proposed automated segmentation systems using the technologies available at the time. Earlier systems were built on traditional methods such as edge-detection filters and other mathematical operators. Machine learning approaches based on hand-crafted features then became the dominant technique for a long period; designing and extracting these features has always been the primary concern in developing such systems, and the complexity of this step has been considered a significant limitation to deploying them. In the 2000s, owing to hardware improvements, deep learning approaches came into the picture and began to demonstrate considerable capabilities in image processing tasks. This promising ability has made deep learning a primary option for image segmentation, and in particular for medical image segmentation.
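To make that contrast concrete, here is a rough, non-clinical sketch (the array shapes are made up, the toy model is untrained, and it assumes scikit-image and PyTorch are available): a classical edge/threshold segmentation built from hand-designed operators next to a tiny convolutional segmenter whose image-to-mask mapping would instead be learned from labelled data.

```python
# Illustrative sketch only: classical hand-designed segmentation vs. a tiny
# learned convolutional segmenter. Not a clinical pipeline.
import numpy as np
from skimage import filters
import torch
import torch.nn as nn

# --- Classical approach: hand-designed operators, no training ---
scan = np.random.rand(256, 256).astype(np.float32)    # stand-in for a CT/MRI slice
edges = filters.sobel(scan)                            # edge-detection filter
mask_classical = scan > filters.threshold_otsu(scan)   # global threshold

# --- Deep learning approach: the image-to-mask mapping is learned ---
class TinySegmenter(nn.Module):
    """Fully convolutional net: image in, per-pixel foreground logit out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = TinySegmenter()
x = torch.from_numpy(scan)[None, None]         # (batch, channel, H, W)
mask_learned = torch.sigmoid(model(x)) > 0.5   # untrained here; needs labelled masks
```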
If you go through the state-of-the-art works, you will see that the majority of recent studies use deep learning methods, as they analyze large amounts of data in less time, with higher accuracy and fewer false alarms compared to conventional techniques.
Aside from training on the image data itself, deep learning also allows other types of features to be added, and supports ensembles of regressors or classifiers that can be more accurate than images alone. Also, to address the concern many on this thread have raised about the volume of data required to train image segmentation models, transfer learning has emerged as a really promising way to alleviate it.
I generally dislike those who peddle their own papers in answers, but we recently published a preprint that addresses this exact question: transfer-learned image features combined with other descriptors. In this case, you could feed qualitative information about your patient's medical history, blood readings, and other factors into an ensemble that learns what is important for predicting, via regression or classification, a disease or abnormality. This is hard to come by in more traditional ML approaches, which makes deep learning a more promising methodology for applications where you have multiple sources of data and not a lot of data to train on.
Paper is here - Preprint Deep Multimodal Transfer-Learned Regression in Data-Poor Domains
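For readers who want to see the general shape of such a setup, here is a hedged sketch, not the preprint's actual code: an ImageNet-pretrained backbone is frozen (the transfer-learning part) and its features are concatenated with tabular clinical descriptors before a small trainable head. The class name, feature sizes, and the torchvision >= 0.13 weights API are assumptions made purely for illustration.

```python
# Hedged sketch of multimodal transfer learning: frozen pretrained image
# backbone + tabular clinical features + small trainable head.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalNet(nn.Module):
    def __init__(self, n_tabular: int, n_outputs: int = 1):
        super().__init__()
        backbone = models.resnet18(weights="DEFAULT")    # ImageNet-pretrained
        backbone.fc = nn.Identity()                      # expose 512-d features
        for p in backbone.parameters():                  # freeze: transfer learning
            p.requires_grad = False
        self.backbone = backbone
        self.head = nn.Sequential(                       # learned on the small dataset
            nn.Linear(512 + n_tabular, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, image, tabular):
        feats = self.backbone(image)                     # (B, 512) image features
        return self.head(torch.cat([feats, tabular], dim=1))

model = MultimodalNet(n_tabular=8)
image = torch.randn(4, 3, 224, 224)   # e.g. dermoscopy or scan slices
tabular = torch.randn(4, 8)           # e.g. age, blood readings, history flags
prediction = model(image, tabular)    # regression output, shape (4, 1)
```

Only the small head is trained here, which is what keeps the data requirement modest in the data-poor setting described above.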
Some features are difficult to understand or simply have not been studied yet. I think that is part of the beauty of AI: finding features and other relevant explanations for problems that have not yet been well understood by classical methods and research.
I would also like to point out that convolutional neural networks (the U-Net architecture for segmentation) have the capability to reason about pixels and groups of pixels, whilst classic segmentation techniques rely more on simple thresholds and on complicated, largely experimental filtering/preprocessing of images.
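As a rough illustration of that point (the channel counts and shapes are made up, and this is far smaller than a real U-Net), a single encoder/decoder level with a skip connection already shows how such a network aggregates pixel groups into context and maps back to a per-pixel decision:

```python
# Minimal U-Net-style sketch: one encoder level, one bottleneck, one decoder
# level with a skip connection, and a per-pixel output. Illustrative only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)                     # local pixel-group features
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)             # wider spatial context
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)                    # 16 (skip) + 16 (upsampled)
        self.out = nn.Conv2d(16, 1, 1)                   # one logit per pixel

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.out(d)

mask_logits = MiniUNet()(torch.randn(1, 1, 128, 128))    # output: (1, 1, 128, 128)
```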
What is the logical basis of choosing one technique over another? Research methodology suggests all known techniques should be used and the best should be identified through comparative analysis.