This group of questions actually involves several different tasks.
1. Robo-advisors are available today.
A typical demonstration is the online automated customer service robot deployed by several large online shopping websites. It is difficult to say which technique is the main engine of these robots. Taking the robot of the largest Chinese shopping website, "Alibaba", as an example: according to the information published by the company, its engine is actually a combination of multiple techniques, including distributed computing, probabilistic prediction, natural language processing, automatic hand-off to manual (human) service, etc. A minimal sketch of the hand-off idea appears below.
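To illustrate only the hand-off idea, here is a minimal sketch (this is not Alibaba's actual system; the intents, keyword lists, and confidence threshold are all invented assumptions): when the model's confidence in its best guess falls below a threshold, the conversation is escalated to a human agent.

```python
import re

# Toy intent router: answer automatically when confident, otherwise hand the
# conversation to a human agent. Purely illustrative; the intents, keyword
# lists, and threshold below are invented for this sketch.

INTENT_KEYWORDS = {
    "order_status": {"order", "shipping", "delivery", "track"},
    "refund": {"refund", "return", "money", "back"},
}

CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off for automatic replies


def classify(message: str) -> tuple[str, float]:
    """Score each intent by keyword overlap and return (intent, confidence)."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {
        intent: len(words & keywords) / len(keywords)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]


def route(message: str) -> str:
    intent, confidence = classify(message)
    if confidence < CONFIDENCE_THRESHOLD:
        return "Transferring you to a human agent..."
    return f"Automated reply for intent '{intent}'"


print(route("Where is my order? I want to track the delivery."))
print(route("My cat chewed the parcel, what now?"))  # low confidence -> human
```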
2. AI-aided clinical diagnosis is still in testing.
Several published clinical trials aided by "AI" have shown promising results in the decision of therapeutic strategy (e.g., in septic shock) and in the recognition of pathological specimens (e.g., diagnosis of cancers). But few convincing general diagnostic AIs have been published. Please note that "artificial intelligence" is not really an academic term. The academic term "machine learning" specifically denotes the mathematical process of dimensionality reduction by a model, which can be a regression, a support vector machine, a random forest, deep learning, etc. Today, these models can work if your "learning" and "prediction" tasks can be transformed into structured data, which is also known as representation learning. For instance, an image can be transformed into a matrix representing the intensities of its colors. Even natural language, which is inherently unstructured, has to be transformed into a structured form. From the viewpoint of mathematical principles, the "training" and "prediction" of machine learning are processes of "feature" identification. For most tasks today, these "features" are actually numerical patterns, as sketched below.
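As a minimal sketch of what "transforming into structured data" means (the toy image and the sentence below are invented for illustration): a grayscale image is already a matrix of intensities, and a sentence can be turned into a bag-of-words count vector that a model can consume.

```python
import numpy as np

# A tiny "image": a 3x3 grid of grayscale intensities (0 = black, 255 = white).
# Real images are just larger matrices (or stacks of matrices for RGB).
image = np.array([
    [0,   128, 255],
    [64,  192, 32],
    [255, 0,   128],
])

# A sentence turned into a bag-of-words count vector over a fixed vocabulary.
vocabulary = ["fever", "cough", "headache", "nausea"]
sentence = "patient reports fever and cough fever started yesterday"
tokens = sentence.lower().split()
bow_vector = np.array([tokens.count(word) for word in vocabulary])

print(image.shape)   # (3, 3) -- structured numerical input
print(bow_vector)    # [2 1 0 0] -- "fever" twice, "cough" once
```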
After the success of AlphaGo, I have frequently heard the popular misunderstanding that AlphaGo made its predictions by "deep learning"; in fact, its prediction engine is Monte Carlo tree search. Deep learning was used to evaluate the "values" of positions on the board (a toy sketch of this division of labour follows the list below). The academic community chose Go as a research subject for several reasons:
1. Go has an unimaginably large theoretical search space; it has been estimated at roughly 10^180 even after pruning nonsensical variations, while the estimated number of atoms in the observable universe is about 10^80.
2. Go is the most complicated complete-information game, which makes it easy to estimate the power of an intelligent machine against an objective standard: the win rate.
3. Go is a game with naturally structured data: any position can be represented as a 19 × 19 board (matrix), which is mandatory for machine learning today.
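Here is a toy sketch of that division of labour between search/rollouts and a value function. It is emphatically not AlphaGo's algorithm: the game (a simple counting race to 21), the hand-written value function, and the rollout count are all invented for illustration; the point is only that the move is chosen by search, while the value function (learned by deep learning in AlphaGo) scores positions.

```python
import random

# Toy stand-in for Go: players alternately add 1, 2, or 3 to a running total,
# and whoever reaches 21 or more wins. A rollout/search engine chooses the
# move; a hand-written value function scores positions.

TARGET = 21
MOVES = (1, 2, 3)


def value(total: int) -> float:
    """Stand-in for a learned value network: rough win probability for the
    player about to move at this total."""
    return 0.1 if (TARGET - total) % 4 == 0 else 0.9


def rollout_value(total: int, n_rollouts: int = 500) -> float:
    """Average outcome of random play from this position for the player to move."""
    wins = 0
    for _ in range(n_rollouts):
        t, our_turn = total, True
        while t < TARGET:
            t += random.choice(MOVES)
            if t >= TARGET:
                wins += our_turn  # the side that just moved reached the target
                break
            our_turn = not our_turn
    return wins / n_rollouts


def choose_move(total: int) -> int:
    """Pick the move that leaves the opponent the worst-looking position,
    mixing rollout statistics with the static value estimate."""
    def opponent_score(t: int) -> float:
        return 0.5 * rollout_value(t) + 0.5 * value(t)
    return min(MOVES, key=lambda m: opponent_score(total + m))


print(choose_move(14))  # -> 3, leaving the opponent at 17, a losing position
```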
By describing these details, I only want to explain that machine learning today is successful at recognizing a "feature", even across an unimaginably large sample. However, because a computer does not actually "understand" what it has learned, today's "AIs" are easily put at risk by situations they have never seen before. This has been demonstrated frequently, for example by the fourth game between AlphaGo and Lee Sedol, the breakdown of image recognition under even a single-pixel attack, or the recent breakdown of "comprehension" under a single "NOT" (negation) attack. A toy illustration of this fragility follows.
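The fragility can be seen even with a trivial model. In the sketch below (the weights and the toy "image" are made up; this is not any published attack), a linear classifier's decision flips when a single input "pixel" is nudged, which is the spirit of a single-pixel attack.

```python
import numpy as np

# A tiny linear classifier on a 4-"pixel" image: predicts class 1 if the
# weighted sum exceeds 0. Weights and the example image are invented.
weights = np.array([0.8, -0.5, 0.3, -0.9])
bias = 0.05

def predict(pixels: np.ndarray) -> int:
    return int(pixels @ weights + bias > 0)

image = np.array([0.6, 0.4, 0.5, 0.3])
print(predict(image))        # 1  (score = 0.48 - 0.20 + 0.15 - 0.27 + 0.05 = 0.21)

# Perturb a single pixel: the model has no "understanding" to fall back on,
# so changing one coordinate flips its decision entirely.
attacked = image.copy()
attacked[3] = 0.6            # change one pixel from 0.3 to 0.6
print(predict(attacked))     # 0  (that pixel now contributes -0.54, score = -0.06)
```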
In some recent successful prediction tasks, the so-called "AIs" are actually combinations of machine learning with multiple other techniques, such as physical or chemical principles (e.g., drug discovery), probabilistic models (e.g., board and chess games), and graph theory (e.g., knowledge graphs). Therefore, the "AIs" of today are really successful at reducing tedious work, rather than at prediction.
Biomedical data may be the most complicated data ever known, with nearly every kind of data type (numerical, both continuous and discrete; ordinal; boolean; language; symbolic; etc.), every kind of appearance (you can find nearly every type of data distribution here, even nonsensical distributions!), and, finally, data that can be either structured or unstructured. It is extremely easy to run into the "curse of dimensionality" in machine learning here. I believe this is why the few successful published trials focus on a handful of special tasks, such as the decision of therapeutic strategy (e.g., septic shock) and image recognition of pathological specimens or clinical imaging (CT, MRI, etc.). Today, it is difficult to foresee an AI making general clinical diagnoses.
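One quick way to see why high dimensionality is so dangerous here is the classic distance-concentration effect (a generic demonstration on random data, not drawn from any of the trials above): with a fixed number of samples, as the number of features grows, every point becomes roughly equally far from every other, and distance-based learning degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

# For a fixed number of samples, watch what happens to pairwise distances
# as the number of features (dimensions) grows.
n_samples = 200
for dim in (2, 10, 100, 1000):
    X = rng.random((n_samples, dim))
    # Distances from the first point to all the others.
    dists = np.linalg.norm(X[1:] - X[0], axis=1)
    spread = (dists.max() - dists.min()) / dists.min()
    print(f"dim={dim:5d}  relative spread of distances = {spread:.2f}")

# The relative spread shrinks as dim grows: with few samples in many
# dimensions, "nearest" and "farthest" neighbours become nearly
# indistinguishable -- one face of the curse of dimensionality.
```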
3. Data aggregation and mining is one of the major research interests in the age of big data, and it is also my major. By aggregating data from different sources, much knowledge that is easily overlooked can be drawn out, and it sometimes corrects previous misunderstandings. Personally, I think the major barrier in this field is the inconsistent standards under which researchers publish their data. If data are published under greatly different standards, "cross-platform comparison" becomes very difficult, as the small sketch below illustrates.
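As a minimal illustration of the "different standards" problem, the sketch below harmonizes two hypothetical toy tables that report the same measurement under different column names, units, and codings. The datasets and field names are invented, but the mapping work is exactly what cross-platform comparison requires before any joint analysis.

```python
import pandas as pd

# Two toy "platforms" publishing the same measurement under different
# standards: different column names, different units, different sex coding.
platform_a = pd.DataFrame({
    "patient_id": [1, 2],
    "glucose_mg_dl": [90.0, 130.0],
    "sex": ["M", "F"],
})
platform_b = pd.DataFrame({
    "id": [3, 4],
    "glucose_mmol_l": [5.0, 7.2],
    "gender": [1, 2],   # 1 = male, 2 = female in this source
})

# Harmonize platform B to platform A's conventions before combining.
harmonized_b = pd.DataFrame({
    "patient_id": platform_b["id"],
    "glucose_mg_dl": platform_b["glucose_mmol_l"] * 18.0,  # mmol/L -> mg/dL (approx.)
    "sex": platform_b["gender"].map({1: "M", 2: "F"}),
})

combined = pd.concat([platform_a, harmonized_b], ignore_index=True)
print(combined)
```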
This system has been approved by the Chinese government as a guidance tool for patient admission. According to the authors, the system was trained on cases numbering in the millions, and its prediction engine is an ensemble model.
Although the report said this system shows roughly 90% accuracy in diagnosis, the machine is in fact a "classifier", rather than a "doctor".
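To make the "classifier, not doctor" distinction concrete: an ensemble machine in this sense is simply several base classifiers whose predictions are combined, and its output is a class label. The sketch below is a generic voting ensemble on made-up toy data using scikit-learn; it is in no way the published system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy "cases": 6 numerical features per patient, binary label (made up).
X = rng.random((500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)

# A voting ensemble: each base model predicts a class, the majority wins.
ensemble = VotingClassifier(estimators=[
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=4)),
    ("forest", RandomForestClassifier(n_estimators=50)),
])
ensemble.fit(X[:400], y[:400])

accuracy = ensemble.score(X[400:], y[400:])
print(f"held-out accuracy: {accuracy:.2f}")
# However accurate, the output is still only a predicted class label --
# a classifier's answer, not a clinician's judgement.
```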