I am planning to start work in this field, so I want to get suggestions from research scholars regarding the feasibility of this work. Whether you are in favor or not, kindly give me your suggestions (especially if you are not in favor).
Just one image of their faces will not be sufficient. With multiple images you might be able to detect that they are deceased; anything beyond that is grasping too high.
Yes, it is possible, especially if you are concerned with skin diseases. In the case of hepatitis, skin color can also be used as a sign for detection. But consulting a doctor and understanding which symptoms are reflected in the face is the best option you can adopt. Multiple images will improve the accuracy of diagnosis.
@Anup: Even a specific disease is hard to detect. Jaundice can be detected visually by a trained physician, but analyzing color channels under different lighting conditions by computer will be quite tough. I mean: a physician takes into account that the shutters are half down, the sun is low, it is October and the patient is Asian. You cannot get away with RGB thresholding...
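To make that point concrete, here is a minimal Python/OpenCV sketch of what even a crude "yellowness" measurement would involve; the grey-card reference, the threshold value and the file names are purely illustrative assumptions, not a validated method. It only shows that a sensible colour measurement already needs a perceptual colour space and an illumination reference, which is exactly where plain RGB thresholding falls over.

import cv2
import numpy as np

def mean_yellowness(face_bgr: np.ndarray) -> float:
    """Mean b* value (yellow-blue axis) of a face crop in CIELAB."""
    lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB)
    _, _, b_star = cv2.split(lab)
    return float(b_star.mean())

def crude_jaundice_flag(face_bgr: np.ndarray, reference_bgr: np.ndarray,
                        delta_threshold: float = 8.0) -> bool:
    """Compare the face against a neutral reference patch photographed under
    the same lighting (e.g. a grey card). The threshold is a placeholder,
    not a clinically validated value."""
    delta = mean_yellowness(face_bgr) - mean_yellowness(reference_bgr)
    return delta > delta_threshold

# Usage (hypothetical file names):
# face = cv2.imread("face_crop.png")
# card = cv2.imread("grey_card_crop.png")
# print(crude_jaundice_flag(face, card))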
@Lambert This could be taken care of by standardizing the measurement surroundings and using different databases for different conditions, for example different cultural backgrounds. This of course would make it more difficult to create an "automatic health check machine" for, say, use at home.
@Qaim Was your question aimed at lab environments, or at a more practical approach?
@Malte, 'standard' surroundings would be nice! Not feasible however...
I can only envision a kind of web-cam surveying the patient's room... VERY sophisticated software MIGHT say meaningful things about the patient's welfare, but for now, my distrust is very high. I have more trust in hands-on nurses; they can respond to my needs.
My particular concern is measuring the wellness of patients who are suffering from diseases that cause weakness (not any skin disease). I am planning to focus on neurological diseases. So, do you agree that I can take such measurements with an emotion-sensing approach?
Of course you can take pictures, but concluding any more than whether the patient is still alive will prove to be VERY difficult.
Maybe a 'laugh' detector on sound data is more effective than a 'happiness' detector on visual data (a happiness detector will be very hard to implement).
Ah, so you would like to have prior pictures of the patient available? For example, taking a picture at the local doctor during the annual checkup and comparing it? Or should the patients take pictures at home?
Under these circumstances I'd have to agree with Lambert in both cases, though: there are simply too many factors involved with visual data. Age, sporting activity, maybe having had a bad night before the check-up, the white coat effect...
Probably a general telemedicine approach, where real doctors have a look at the pictures and are able to ask questions via email, would yield much better results.
It can be done, but for that you have to segment the image with respect to the ROI (region of interest). I think you can use the golden ratio for detecting the ROI, i.e. for the facial analysis in your case.
@Lambert, the golden ratio can be used to detect the ROIs in the face, i.e. eyes, nose, lips, etc., and it is easy. 1.68 is the golden ratio, which means that if we apply the golden ratio to the face first horizontally from both sides and then vertically from both sides, we can obtain the exact nose location and then easily detect the eyes and lips accordingly. However, this needs to be checked on a database for the application of health checking. Of course, I am assuming that the researcher concerned has already found a way to detect the face in an image.
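A rough Python sketch of this idea, under the stated assumption that face detection is already solved (here with a stock OpenCV Haar cascade): detect the face box, then split it into approximate eye/nose/mouth bands using golden-ratio proportions. The split positions are illustrative assumptions, not calibrated values, and this only works for roughly frontal, upright faces, which is exactly the objection raised further down the thread.

import cv2

PHI = 1.6180339887  # golden ratio

def face_rois(image_path: str):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    rois = []
    for (x, y, w, h) in faces:
        # Assumed proportions: eye line near the upper third of the face box,
        # nose base near h / PHI from the top.
        eye_line = y + int(h / PHI / 2)
        nose_line = y + int(h / PHI)
        rois.append({
            "face":  (x, y, w, h),
            "upper": (x, y, w, eye_line - y),            # forehead + eyes band
            "nose":  (x, eye_line, w, nose_line - eye_line),
            "mouth": (x, nose_line, w, y + h - nose_line),
        })
    return rois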
@Ali, the golden ratio is good for structural analysis, but even then we have to localize the facial components first, and we have more options for facial component localization. The golden ratio describes the ideal facial structure, so we have to follow an approach that can fit any case.
THANKS A LOT TO ALL OF YOU FOR SUCH A VALUABLE DISCUSSION.
Quote Sander: "if we apply the golden ratio to the face first horizontally from both sides and then vertically from both sides then we can have the exact nose location and then we can easily detect the eyes and lips".
OK patients: don't move, keep your head up and look straight into the camera!
Of course there are certain proportions in people's faces. These proportions have bandwidths in a population. Some proportion might on average be 1.68 (sic), but certainly not 1.6180339887498948482...
@Lambert I don't think the patient needs to keep his head up for this; why do you think that? Anyway, you have the right to disagree, but in my opinion this can be done in the proposed way.
@Sander: I said so because you propose to scan the image horizontally and vertically. When the face is tilted or in profile, the eyes are no longer aligned, or an eye is missing entirely.
Anyway, detecting faces in images has been done before. The problem is detecting the wellness of the patient.
I, again, agree with Lambert: the problem is not identifying the face, nose, etc. There are several good algorithms for that in computer vision, and I can suggest some face recognition papers if needed. The problem, as far as I can see, is how to identify "bad" health: how to identify symptoms, decrease in body weight, or whatever. So, before proposing specific solutions, a definition of what "health" constitutes for you would be necessary. Would overweight qualify as bad health? Also for older patients? Are all possible illnesses really part of your concern? Eyes, nose, ears? Or do you aim for broader symptoms, like jaundice as a symptom of more serious illnesses?
We have a facial analysis stream, EMOTION DETECTION, that can sense and quantify emotional state along a scale such as normal, good, very good, bad, very bad. So why should it not be possible for health? Of course, dear friends, it cannot identify the illness, but it can sense the patient's health status, because the face is a mirror of the human body. We are concerned with the patient's status, that is, whether he/she is feeling good or not. The treatment will not depend on it, but it will be one of the factors a physician may follow to measure the improvement of health.
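A minimal sketch of the kind of aggregation this implies, assuming some existing emotion classifier (not shown here) already labels each recorded session on the scale mentioned above. The code only turns those labels into a smoothed wellness trend a physician could glance at; the numeric mapping and the window size are illustrative assumptions, and a rising curve would be a weak supporting signal, never a diagnosis.

from statistics import mean
from typing import List

SCALE = {"very bad": -2, "bad": -1, "normal": 0, "good": 1, "very good": 2}

def wellness_trend(session_labels: List[str], window: int = 3) -> List[float]:
    """Moving average of per-session emotion scores."""
    scores = [SCALE[label] for label in session_labels]
    return [mean(scores[max(0, i - window + 1): i + 1])
            for i in range(len(scores))]

# Usage with made-up session labels:
# print(wellness_trend(["bad", "bad", "normal", "good", "good", "very good"]))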
MANY THANKS TO ALL OF YOU FOR SPENDING YOUR VALUABLE TIME FOR THIS TOPIC. NOW, IN YOUR POINT OF VIEW SHOULD I GO FOR STARTING THIS WORK OR NOT?
Kindly give your valuable suggestions and the scope of such type of research.
Is your program intended to automate this daily routine? Do you think it will reduce costs in healthcare? Will the program be more reliable than the patient's own answers?
On the other side, if you can get funding: go ahead. It seems fun to make it!
And if your EMOTION DETECTION system is any good (ROC curves), you already have a solid base.
As we can detect emotions, I think there is every possibility of detecting wellness. E.g. for sinus problems we can concentrate on the eyes and nose, and for fever on the eyes and skin, as skin glow can be a factor of wellness. So an algorithm could be developed for this.
If this system will be running in a hospital environment, then also consider whether it will be good for the patient's wellness to know that a camera is recording them all the time. In a hospital there already is little privacy, but now you cannot even pick your nose in peace!
@Lambert it would not be like any reality show. It would be just like a session interview in which a doctor asks about the health issues and the patient's feelings about the illness, which is quite common in any hospital. This session can be recorded, or a frontal photograph of the face can be taken. What I feel is that the facial expression may help us understand his/her health condition. It is just my assumption, which is why I am discussing it here; it is a kind of feasibility study for the work.
KINDLY ADVISE ME ABOUT THE FEASIBILITY OF THIS WORK.
My gut feeling is that the knowledge base a good system needs is huge, ill defined and complicated. Image processing is still in its infancy; the number of false positives (which will need to be checked by trained medical personnel) will probably be high; lighting conditions are variable...
I would not invest in it. Of course the program could detect a serious health issue that the doctor missed, but the costs will be considerable.
I think more hands-on care (nurses) is more efficient, cheaper, better for employment and more social.
As @Lambert Zijp mentioned, if it is not meant as an economical project but as a research project where support and funding are provided, then it is a good choice.
But it is really a subject that will be studied in the near future. Also, the sensors that are going to be used are important (in the research aspect, not the price).
Current hospital monitoring systems focus on behavior. That kind of monitoring can be useful, but facial expression for health determination can really be a new field in itself.
Another point, as @Malte Buß mentioned, is the range of the study. Essentially, for facial expression recognition in this study you will probably need a machine learning algorithm into which every single person must be introduced (a sour face looks different on everyone, and each illness has its own symptoms).
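One way to read "every single person must be introduced" is a per-patient baseline rather than one global model. Here is a small numpy sketch of that idea, under the assumption that some feature extractor (not shown) produces an expression feature vector per session; the z-score measure and any threshold applied to it are arbitrary illustrative choices.

import numpy as np

class PersonalBaseline:
    def __init__(self, enrollment_features: np.ndarray):
        # enrollment_features: (n_sessions, n_features) from the patient's own
        # reference sessions recorded while they were considered well.
        self.mean = enrollment_features.mean(axis=0)
        self.std = enrollment_features.std(axis=0) + 1e-8  # avoid divide-by-zero

    def deviation(self, session_features: np.ndarray) -> float:
        """Mean absolute z-score of one session against this patient's baseline."""
        return float(np.abs((session_features - self.mean) / self.std).mean())

# Usage with random stand-in features:
# rng = np.random.default_rng(0)
# baseline = PersonalBaseline(rng.normal(size=(10, 16)))
# print(baseline.deviation(rng.normal(size=16)))        # near baseline
# print(baseline.deviation(rng.normal(size=16) + 3.0))  # clearly drifted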
You are absolutely right, and as Lambert advised, it is not a good project from the financial point of view, but it is just the initial step towards a bigger goal. We have to go miles ahead to reach the maturity level of this project.
But it is quite a natural way to initiate something new. I want to mention one example: when the very first routing algorithm was introduced for MANETs, it was a poor fit because it was adopted from the routing approach for wired networks. It was very slow, and even the route discovery process used to exhaust the bandwidth. But it was the first step, and within 20-30 years it has become a very mature system.
There are plenty of people working on systems like this for patients with genetic syndromes. If you could incorporate that data you could be screening for undiagnosed syndromes that would impact on diagnosis and management.
These studies have addressed many of the issues raised above already.
I have seen a documentary on Discovery Channel about emotion sensing for the smart cars of the future: the car can sense your emotions and advise you with tips for safe driving. That suggests we can use emotion sensing for observing a patient's health; however, we should not depend 100% on this factor alone.
So, I think that it could be initiated for Health Monitoring System.
"Face is the index of mind " ,Great thought sir your trying to find Facial expression is indicator for health.Your study will highly help on Parkinson disease patients rather than Genetic disorder .
Given how facial geometry is uniquely abnormal in some conditions, like Down's Syndrome, hemiplegic stroke, carbon monoxide poisoning, and Bell's Palsy, I suggest you start by looking at these.
Not many conditions have such "tell-tale" faces, but those that do seem remarkably consistent regardless of age, gender and race.
Well, this is a very interesting topic. Facial expressions certainly convey messages. You may check the Vitruvian Man concept and look for fixed landmarks in the face (e.g. the point between the two eyes, the tip of the nose, the outer corners of both eyes). You will also need some medical experts in this work. Kindly read about cephalometric analysis. I once worked on a project related to this.
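A small sketch of that fixed-landmark idea: given a handful of facial landmarks from any detector (obtaining them is not shown here), compute a distance ratio that is comparable across image scales. The landmark names and the particular ratio are illustrative assumptions, not a cephalometric standard.

import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_ratios(lm: Dict[str, Point]) -> Dict[str, float]:
    """Expected keys: 'left_eye_outer', 'right_eye_outer', 'nasion'
    (point between the eyes), 'nose_tip' -- all pixel coordinates."""
    eye_span = dist(lm["left_eye_outer"], lm["right_eye_outer"])
    nose_len = dist(lm["nasion"], lm["nose_tip"])
    return {"nose_to_eyespan": nose_len / eye_span}  # scale-invariant ratio

# Usage with made-up coordinates:
# print(face_ratios({"left_eye_outer": (100, 120), "right_eye_outer": (200, 118),
#                    "nasion": (150, 125), "nose_tip": (152, 175)}))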