I have been working in the field of domain-specific artificial intelligence and domain-specific synthetic languages for several years. This is a very important question. I discussed this subject with a colleague just yesterday. A few notes from our discussion follow.
There are some jobs where a person's eyes and hands must be free to perform their work, yet they must be able to communicate with other people and with the computer systems that help them perform that work. In some of these jobs, safety is paramount.
Although systems like Alexa and Siri are already in wide use, they do not currently provide the degree of reliability and safety that would be required for use in an emergency room or a neonatal ICU.
In such cases, it would be prudent to create a task-specific synthetic language designed for communication between humans and computer systems: the humans perform their work while the computers answer questions, provide information, control task-specific equipment, and perform other functions, enabling a person to work with their hands and eyes free to focus on the task at hand.
In short, this would require implementation of computer systems that are able to do the following (a minimal sketch follows the list):
· reliably hear and process spoken words;
· translate spoken words into a human- and machine-readable form (text);
· verify that the transcribed words comply with the specification (grammar) of the synthetic language;
· determine what question or command the human has voiced;
· invoke the procedure that answers the question, provides the requested information, or performs the command using task-specific equipment;
· reply in some audible or visible form if the spoken words were not recognized, do not comply with the specification of the synthetic language, or are used in the wrong context.
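To make the pipeline concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption rather than a real clinical system: the two-command grammar, the handler names, and the regex-based matching are hypothetical stand-ins, and speech-to-text is assumed to have already produced the text.

```python
import re

# Illustrative handlers standing in for task-specific equipment control;
# the names and behavior are hypothetical.
def set_suction(percent: int) -> str:
    return f"suction set to {percent} percent"

def read_blood_pressure() -> str:
    return "blood pressure is 120 over 80"

# Toy grammar: each pattern a transcribed utterance may match, mapped to
# the procedure it invokes. A real system would use a formally specified
# grammar rather than regular expressions.
GRAMMAR = {
    r"set suction to (\d{1,3}) percent": lambda pct: set_suction(int(pct)),
    r"read blood pressure": lambda: read_blood_pressure(),
}

def process_utterance(text: str) -> str:
    """Verify a transcribed utterance against the grammar and dispatch it.

    Utterances that do not comply with the grammar are rejected with an
    audible/visible reply rather than guessed at.
    """
    normalized = text.strip().lower()
    for pattern, handler in GRAMMAR.items():
        match = re.fullmatch(pattern, normalized)
        if match:
            return handler(*match.groups())
    return f"command not recognized: {text!r}"

print(process_utterance("Set suction to 40 percent"))  # suction set to 40 percent
print(process_utterance("increase suction a lot"))     # command not recognized: ...
```

The essential design choice is that anything outside the grammar is refused outright; that closed vocabulary is what makes such a language safer than an open-ended assistant.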
For added safety, before such a capability is put into use, it would be necessary to generate many requests and commands, both well-formed and malformed, and to verify that each is correctly categorized as such.
It would likewise be necessary to verify that well-formed requests invoke the correct procedures and, through simulation, that those procedures safely perform the intended actions.
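Continuing the sketch above (it assumes the `process_utterance` function from the previous block), pre-deployment verification of the categorization step might look like the following; the generators and trial count are illustrative, and the simulation of the equipment itself is out of scope here.

```python
import random

def generate_valid() -> str:
    # Well-formed by construction with respect to the toy grammar above.
    return f"set suction to {random.randint(0, 100)} percent"

def generate_invalid() -> str:
    # Malformed on purpose: wrong word order, non-numeric values,
    # out-of-grammar verbs, trailing extra words.
    return random.choice([
        "suction set 40",
        "increase suction a lot",
        "set suction to forty percent level",
    ])

def run_safety_checks(trials: int = 10_000) -> None:
    # Every well-formed command must be accepted and every malformed one
    # rejected before the capability is put into use.
    for _ in range(trials):
        assert "not recognized" not in process_utterance(generate_valid())
        assert "not recognized" in process_utterance(generate_invalid())
    print(f"{trials} valid and {trials} invalid commands categorized correctly")

run_safety_checks()
```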
Simply stated, a synthetic language is a set of words and a set of rules for forming requests or commands from those words, such that they can be spoken by a human, recognized by a computer, properly interpreted, and safely translated into the intended actions.
If you want to work in NLP related to medical emergencies, an interesting direction is the development of healthcare chatbots: chatbots that talk to the user and guide them according to the situation.
Here are some basic ideas for healthcare-related chatbots:
For example, entering patient data into electronic medical records by voice, using sequential recognition based on NLP, which takes less time than entering the data manually.
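As a rough illustration of that voice-entry idea, a sequential, field-by-field dictation loop might look like the sketch below. The field list and record format are invented for the example, and a keyboard stub stands in for a real speech-to-text call.

```python
# Hypothetical EMR fields and their types; a real record would be defined
# by the EMR system, not hard-coded like this.
FIELDS = [
    ("patient_name",  str),
    ("temperature_c", float),
    ("pulse_bpm",     int),
]

def transcribe() -> str:
    # Stand-in for a real speech-to-text call; here we read from stdin.
    return input("> ")

def dictate_record() -> dict:
    record = {}
    for field, cast in FIELDS:
        print(f"Say the value for {field}:")
        while True:
            try:
                record[field] = cast(transcribe().strip())
                break
            except ValueError:
                # Re-prompt instead of guessing at a malformed value.
                print("Value not understood, please repeat.")
    return record

print(dictate_record())
```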
Another use case for NLP is recognizing patient testimony in order to extract symptoms together with their qualifying facts: for 'temperature', when and how high; for 'head pain', when, how long, where, and so on.
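A toy sketch of that kind of extraction follows. The regex patterns are illustrative assumptions; a production system would use a trained named-entity and relation-extraction model rather than hand-written rules.

```python
import re

# Illustrative patterns for qualifying facts attached to a symptom.
QUALIFIERS = {
    "onset":    r"\b(?:since|for|started)\s+([\w\s]+?)(?:[,.]|$)",
    "location": r"\b(?:in|on)\s+(?:the\s+)?(\w+)",
}

def extract_symptom(text: str, symptom: str) -> dict:
    """Return the symptom plus any qualifying facts found in the testimony."""
    facts = {"symptom": symptom}
    lowered = text.lower()
    if symptom in lowered:
        for name, pattern in QUALIFIERS.items():
            match = re.search(pattern, lowered)
            if match:
                facts[name] = match.group(1).strip()
    return facts

testimony = "I have had head pain in the temples since yesterday morning."
print(extract_symptom(testimony, "head pain"))
# {'symptom': 'head pain', 'onset': 'yesterday morning', 'location': 'temples'}
```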
Natural language processing should focus on specific steps of the emergency care process. In my opinion, the most important is preventing the emergency in the first place; second, when an emergency is not avoided, activating rescue services quickly to reduce response times; and third, mitigating harm by supporting the bystander who witnesses the emergency and is present at the scene while waiting for professional rescuers.