The 60 Minutes broadcast of January 13, 2019, discussed the progress that Artificial Intelligence (AI) has achieved in China. That segment covered sight, that is, the visual-recognition AI work accomplished to date. What about sound recognition using AI? Is anyone working in this arena?

Speech recognition has been achieved with graphs showing the waveform of each character a person speaks in a sentence. This helped identify each character needed for a newly encountered spoken language. Could AI find all of the characters spoken in a sentence? If so, one could develop a new 'hearing' aid that visually shows on a screen what was just spoken. This could be of great help to those who are deaf or nearly deaf.
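As a rough illustration of the waveform idea above, here is a minimal sketch, in Python with NumPy, of one common first step in such a pipeline: turning an audio signal into a spectrogram, the visual "picture" of sound that a recognizer (or a visual hearing aid) could classify or display. The frame and hop sizes are illustrative assumptions, not values from any particular system.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Split a 1-D audio signal into overlapping windowed frames and take
    the magnitude of each frame's FFT. Each row of the result is one time
    slice; together they form the image a recognizer could work from."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # keep only the non-negative frequencies of each frame
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# A synthetic 440 Hz tone stands in here for a spoken sound.
sr = 8000                      # sample rate in Hz (an assumption)
t = np.arange(sr) / sr         # one second of samples
tone = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(tone)
# The brightest frequency bin of the first frame should sit near 440 Hz.
peak_hz = spec[0].argmax() * sr / 256
```

A real system would feed such spectrogram slices to a trained classifier to label each sound; this sketch only shows how the audible becomes visible.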

My Lie algebra professor at Oregon State University, in 1966, worked on developing a mathematical model of the human brain, i.e., models for hearing, sight, taste, and the other human senses. Might AI help in these arenas?
