I basically haven't found any relevant research, but I wonder: if the data comes from surgery or implantable robots, is there a requirement for real-time inference/computing? I haven't found any evidence to support this idea.
There are currently multiple wearable devices, equipped with sensors, that can read vital signs from a user. Some of these devices are not very accurate, being developed and targeted mainly for the low-end recreational and fitness market; others have medical-grade accuracy and are targeted at high-performance sports and medical environments.
For a health-related solutions developer, the raw data must be interpreted according to the specific purpose of the desired solution. In that regard, data analysis and knowledge inference form a specific research domain from which software modules can be produced, encapsulating all the research and deep knowledge necessary to create the inference functionalities. These modules can be used as building blocks for other software solutions that will then benefit from having knowledge regarding their users' health.
Currently, there are several consumer wearables and portal services that can monitor some basic vital signs, e.g., heart rate and skin temperature, allowing the user to record and view their evolution over time.
Considering the increasing amount and diversity of electronic health records and health-related data, I would suggest data warehousing, mining, and visualization in this context. Some interesting articles are:
Conference paper: Visualization of EHR and Health Related Data for Information
One of the emerging research topics in eHealth is methods for documenting return on investment (ROI). This is challenging given that outcomes (e.g., improved health outcomes) have many other contributing factors, making it difficult to isolate the sole contribution of health informatics.
Actually, I have more than seven years' experience deploying communication technology in health and social care in the UK. The approach we took was to co-create new models of care where the technology was embedded into practice. We found that, where the technology enabled new ways of working, evaluating and proving the outcomes was challenging for the care providers. I would advise anyone in the area of eHealth to think not only about the outcomes their approach can deliver but also about how they would evaluate them. Then build the evidence generation into the eHealth solution. In this way, solutions become self-documenting and help make the case to commissioners.
I think it depends on how "real time" you mean. There are a bunch of decision support tools using ML and inference that support diagnostic tests. The results of these inference readings on the data are normally available within a minute or so of the data being processed, and prior to any human read (the human reads the data as they would according to their clinical discipline, with the clinical support ML/inference results alongside them).
But I am sensing from your question that this is not "real time" enough, and that you are talking about inference being used within the systems to make judgement calls on what to do (like the positioning of a laser in a surgical procedure).
One area you might look at is radiology. Radiology imaging has a bunch of technologies that use image-processing techniques to produce images. Even plain X-rays are digital these days (with filtered images produced as an output). Then there are all the tomographic imaging modalities (CT, MRI, US tomography, SPECT, PET). These typically use maximum-likelihood algorithms for the tomographic reconstruction these days (it used to be all the filtered backprojection method).
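To make the maximum-likelihood idea concrete, here is a minimal sketch of the MLEM (maximum-likelihood expectation-maximization) update used in tomographic reconstruction, on a toy 1-D problem. The system matrix, sizes, and iteration count are invented for illustration; clinical reconstructions are vastly larger and include noise modeling and corrections.

```python
import numpy as np

# Toy MLEM reconstruction sketch: A maps 5 "voxels" to 20 "detector" readings.
# All dimensions and the random system matrix are illustrative only.
rng = np.random.default_rng(0)
A = rng.random((20, 5))                 # toy projection (system) matrix
x_true = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
y = A @ x_true                          # noiseless "measured" counts

x = np.ones(5)                          # MLEM needs a strictly positive start
sensitivity = A.T @ np.ones(20)         # normalization term A^T 1
for _ in range(500):
    ratio = y / (A @ x)                 # measured vs. forward-projected counts
    x = x / sensitivity * (A.T @ ratio) # multiplicative MLEM update
```

The multiplicative form keeps every voxel estimate nonnegative, which is one reason MLEM displaced filtered backprojection for emission tomography, where negative activity values are physically meaningless.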
Hamid et al. target the confidentiality of healthcare patients' multimedia data in the cloud by proposing a tri-party, one-round authenticated key agreement protocol based on bilinear pairing cryptography.
Marwan et al. propose a novel method based on Shamir's Secret Sharing Scheme (SSS) and the multicloud concept to enhance the reliability of cloud storage, meeting security requirements that prevent loss of data, unauthorized access, and privacy disclosure. The proposed technique divides the secret data into many small shares so that no single share reveals any information about the medical records. Building on the multicloud architecture, data are spread across different cloud storage systems. In such a scenario, cloud consumers encrypt their data using the SSS technique to ensure confidentiality and privacy.
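The core property of Shamir's scheme, that any k shares reconstruct the secret while fewer reveal nothing, can be sketched in a few lines. This is a toy illustration over a prime field (the prime, threshold, and function names are mine, not from the paper); a real deployment would use a vetted cryptographic library and a secure random source.

```python
import random

# Toy (k, n) Shamir secret sharing over a prime field -- illustrative only.
PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than small secrets

def split_secret(secret, k, n):
    """Split `secret` into n shares; any k of them suffice to reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In the multicloud setting described above, each share would be stored with a different provider, so compromising any single cloud yields nothing about the medical record.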
Galletta et al. present a system developed at the Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) that is claimed to address patients' data security and privacy. The presented system is based on two software components, the anonymizer and the splitter. The first collects anonymized clinical data, whereas the second obfuscates and stores data across multiple cloud storage providers.
According to the official definition, cloud computing has five main characteristics: resource pooling, broad network access, rapid elasticity, on-demand self-service, and measured service [5].
Resource pooling: clients can share resources like networks, servers, storage, software, memory, and processing simultaneously. Providers can dynamically allocate resources according to fluctuations in demand, and the client is completely unaware of the physical locations of these services.
On-demand self-service: if needed, any customer can automatically configure the cloud without the involvement of service technicians. Customers perform scheduling and decide the required storage and computing power.
Measured service: different cloud services can be measured using different metrics. Detailed usage reports are generated to preserve the rights of customers and providers.
Software as a service (SaaS): this is the most popular cloud service, where the software resides on the provider's platform. The consumer can access the software using a web browser or an application programming interface (API). It follows a pay-per-use business model. Consumers do not need to worry about software upgrades and maintenance; some limited application configuration capability might be available to consumers. Salesforce and Office 365 are popular examples.
Platform as a service (PaaS): this provides development and testing environments. The consumer develops his/her own application on a virtual server and has some control over the application hosting environment, particularly the application and data, making it faster to develop, test, and deploy applications. Cloud Foundry is a good example.
AWS Support is a one-on-one support channel that is staffed 24x7x365 with experienced support engineers. AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases.
The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.
Actually, Triton Inference Server is open-source inference-serving software that lets teams deploy trained AI models from any framework on GPU or CPU infrastructure. It is designed to simplify and scale inference serving. Triton Inference Server supports all major frameworks, like TensorFlow, TensorRT, PyTorch, and ONNX Runtime, and even custom framework backends. It gives AI researchers and data scientists the freedom to choose the right framework.
High-performance inference - Triton Inference Server runs models concurrently to maximize utilization, supports both GPU- and CPU-based inferencing, and offers advanced features like model ensembles and streaming inference. It helps developers bring models to production rapidly.
Designed for IT and DevOps - Available as a Docker container, Triton Inference Server integrates with Kubernetes for orchestration and scaling. It is part of Kubeflow and exports Prometheus metrics for monitoring. It helps IT/DevOps streamline model deployment in production.
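To give a feel for how Triton discovers models: each model in the server's model repository carries a small `config.pbtxt` describing its backend and tensor shapes. The model name, backend choice, and dimensions below are invented for this sketch; consult the Triton documentation for the full set of fields.

```
# Illustrative config.pbtxt for a model in a Triton model repository.
# The name, backend, and tensor dims here are made-up example values.
name: "vitals_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 4 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 2 ]
  }
]
```

Once such a configuration sits alongside the model file in the repository directory that the server is pointed at, Triton loads the model and exposes it over its HTTP and gRPC endpoints.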