For this category of problems, you will need two data sets:
- a training set, on which you tune your detection/recognition;
- a testing set, on which you validate on independent data that your system actually works.
In terms of detection/non-detection you need to define a maximum-likelihood algorithm; any good reference book on signal processing has a pattern recognition chapter.
This amounts to comparing two hypotheses: H0, the object is not present, and H1, the object is present.
You can use the Kullback-Leibler divergence, as I did in my PhD many years ago.
The experimental part is to check which thresholds work on the training data set, then to keep the experimentally tuned configuration and assess whether it works on the testing data set.
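As a rough illustration of this workflow, here is a minimal Python sketch. It assumes scalar detection scores that are roughly Gaussian under each hypothesis; the synthetic data, the Gaussian score model, and the accuracy-based threshold choice are all placeholders for whatever fits your imagery.

```python
# Toy H0/H1 likelihood-ratio detector: tune the threshold on a training set,
# then assess the tuned configuration on an independent test set.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic scores: H0 (object absent) and H1 (object present)
x_train = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
y_train = np.concatenate([np.zeros(500), np.ones(500)])
x_test = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
y_test = np.concatenate([np.zeros(500), np.ones(500)])

# Fit a simple Gaussian likelihood for each hypothesis on the training set
mu0, s0 = x_train[y_train == 0].mean(), x_train[y_train == 0].std()
mu1, s1 = x_train[y_train == 1].mean(), x_train[y_train == 1].std()

def log_likelihood_ratio(x):
    # log p(x | H1) - log p(x | H0): decide H1 when this exceeds a threshold
    return norm.logpdf(x, mu1, s1) - norm.logpdf(x, mu0, s0)

# Tune the threshold on the training set (here by maximising accuracy;
# fixing a false-alarm rate, Neyman-Pearson style, is another option)
llr_train = log_likelihood_ratio(x_train)
candidates = np.linspace(llr_train.min(), llr_train.max(), 200)
threshold = candidates[np.argmax([np.mean((llr_train > t) == y_train) for t in candidates])]

# Keep the experimentally tuned configuration and assess it on the test set
decisions = log_likelihood_ratio(x_test) > threshold
print("test accuracy:", np.mean(decisions == y_test))
```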
You can define robustness and sensitivity using:
- the false detection rate (the system chooses H1 but the reality is H0)
- the missed detection rate (the system chooses H0 but the reality is H1)
There are many ways to define sensitivity and robustness from these rates.
A simple explanation of Sensitivity can be found at: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
If you are using Matlab, you can calculate the sensitivity - and other statistical measures of classification performance, such as Specificity - by using the confusionmat function: https://ch.mathworks.com/help/stats/confusionmat.html
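If Python is an option instead of MATLAB, a minimal sketch of the same computation using scikit-learn's confusion_matrix might look like this; the toy label vectors are placeholders (0 for H0, 1 for H1).

```python
# Compute the rates above from a binary confusion matrix
# (scikit-learn's confusion_matrix used here as a stand-in for confusionmat).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # ground truth (toy data)
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])  # detector decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()

false_detection_rate = fp / (fp + tn)   # system says H1, reality is H0
missed_detection_rate = fn / (fn + tp)  # system says H0, reality is H1
sensitivity = tp / (tp + fn)            # = 1 - missed detection rate
specificity = tn / (tn + fp)            # = 1 - false detection rate

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```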
If you mean how accurate the results are, you might need to compare with other methods; see, for example, the conference paper "Supervised Texture Segmentation: A Comparative Study".
But if you mean how robust the method is, examples of assessing how susceptible different image texture analysis methods are to noise can be found here:
Article "Assessment of texture measures susceptibility to noise in co..."
Conference Paper "Susceptibility of texture measures to noise: An application ..."
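As a rough illustration of that kind of susceptibility test, here is a minimal Python sketch that adds Gaussian noise at increasing levels and tracks how a measure drifts from its noise-free value; my_measure, the random test image, and the noise levels are all placeholders for your own measure and data.

```python
# Noise-susceptibility check: perturb the image with increasing Gaussian noise
# and record the relative change of a chosen measure.
import numpy as np

def my_measure(image):
    # Hypothetical stand-in for a texture statistic or detector score
    return image.std()

rng = np.random.default_rng(0)
image = rng.random((64, 64))      # placeholder for a real test image
baseline = my_measure(image)

for sigma in [0.01, 0.05, 0.1, 0.2]:
    noisy = image + rng.normal(0, sigma, image.shape)
    drift = abs(my_measure(noisy) - baseline) / abs(baseline)
    print(f"noise sigma={sigma}: relative change={drift:.3f}")
```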
Lamia, I am not sure I understand your question correctly, but when talking about detecting objects in imagery you may be talking about a form of object-based image analysis (OBIA). Here the meaning of sensitivity relates to robustness/stability. There is usually a large number of segmentations to base the object classification on, and if you track an object through those scales, some will be more stable than others. In other words, objects have variable sensitivity to scale changes. I have a lot of OBIA papers in my RG profile, some of which address segmentation optimization, which is a means to find stable segmentation scales and low object sensitivities.
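As a very rough illustration of tracking an object through segmentation scales (not the segmentation-optimization method from those papers), here is a Python sketch using scikit-image's Felzenszwalb segmentation; the seed pixel, the scale values, and the IoU-based stability measure are assumptions made for the example.

```python
# Track the segment containing a seed pixel across segmentation scales and
# use the overlap (IoU) between consecutive scales as a stability proxy:
# low IoU means the object is highly sensitive to scale changes.
import numpy as np
from skimage import data
from skimage.segmentation import felzenszwalb

image = data.astronaut()          # example RGB image shipped with scikit-image
seed = (100, 200)                 # pixel assumed to lie inside the object of interest
scales = [50, 100, 200, 400]

masks = []
for s in scales:
    labels = felzenszwalb(image, scale=s, sigma=0.8, min_size=20)
    masks.append(labels == labels[seed])  # region containing the seed pixel

for s_prev, s_next, m_prev, m_next in zip(scales, scales[1:], masks, masks[1:]):
    iou = np.logical_and(m_prev, m_next).sum() / np.logical_or(m_prev, m_next).sum()
    print(f"scale {s_prev} -> {s_next}: IoU = {iou:.2f}")
```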