I use the following example in my teaching (mainly to demonstrate how awesome human hearing is): when you walk down a street and hear, for example, an acoustic guitar being played (not too badly), you can generally tell whether someone is playing an actual guitar or playing back a recording of one, for the very reason Dr. Mannis suggests. Our psychoacoustic system picks up so many precise directional cues that we can easily identify the predictable response of a loudspeaker in a given acoustic setting (a park, a room with an open window, etc.). I imagine we might even be able to tell the difference between an electric guitar amplifier and a recording, taking the specific sound of such speakers into consideration.
Technically, I assume you could analyse an experimental recording with a microphone array and look at the distribution of direction versus frequency: if it's complex, it's an instrument or a human or an animal (or a tree falling in a forest); if it's relatively simple (i.e. all the higher frequencies arriving from the same direction), it is likely coming from a loudspeaker.
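As a rough sketch of that direction-versus-frequency idea, assuming only a stereo (two-microphone) capture: estimate the inter-channel time delay per frequency band from the cross-spectrum phase, then look at how much it varies across bands. A compact, stable radiator such as a loudspeaker should give a nearly constant delay, while a spatially complex source tends to spread. The function name, band limits, and the interpretation threshold are illustrative assumptions, not an established method.

```python
import numpy as np
from scipy.signal import csd

def interchannel_delay_spread(left, right, fs, fmin=500.0, fmax=4000.0):
    """Standard deviation (seconds) of the per-band inter-channel delay."""
    # Cross power spectral density between the two channels
    f, Pxy = csd(left, right, fs=fs, nperseg=4096)
    band = (f >= fmin) & (f <= fmax)
    f_band = f[band]
    phase = np.angle(Pxy[band])
    # Phase difference -> time delay per frequency bin (ignores phase wrapping,
    # so fmax must stay low enough for the microphone spacing used)
    delays = phase / (2.0 * np.pi * f_band)
    return np.std(delays)

# Usage idea: a small spread suggests a single compact radiator (a loudspeaker),
# a larger spread suggests a spatially complex source; any decision threshold
# would have to be calibrated experimentally for the array and the room.
```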
Sometimes, by means of spectral analysis, you can identify whether a sound is live or recorded. I am thinking, for example, of sounds sampled at low sampling rates. That is the case for telephone call recordings, where the sampling frequency is 8 kHz. If the spectrum shows a sharp cut-off at 4 kHz (the corresponding Nyquist frequency), with essentially nothing above it, then it is highly probable that the sound is a recording. This is especially true for music, because music usually contains important frequency components above 4 kHz.
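A minimal sketch of that cut-off test, assuming the audio is available as a NumPy array captured at a rate well above 8 kHz (e.g. 44.1 kHz); the 0.001 energy-ratio threshold is an arbitrary illustrative value, not an established figure.

```python
import numpy as np
from scipy.signal import welch

def looks_like_8k_recording(x, fs, suspected_nyquist=4000.0, ratio_threshold=1e-3):
    """True if almost no spectral energy lies above the suspected Nyquist frequency."""
    f, psd = welch(x, fs=fs, nperseg=8192)
    energy_above = np.sum(psd[f > suspected_nyquist])
    energy_total = np.sum(psd)
    return (energy_above / energy_total) < ratio_threshold
```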
However, if the sampling frequency is 44.1 kHz, as on commercial CDs, this approach will detect almost no difference between live and recorded audio.
I don't see any other way to tell one from the other. In fact, the sampling theorem guarantees that a band-limited signal sampled at more than twice its highest frequency can be reconstructed exactly, so at sufficiently high sampling rates the spectrum alone gives you nothing to distinguish them.
Sounds reproduced from a recording, or synthesized in real time, are radiated by loudspeakers, which have a relatively simple and stable polar (directivity) pattern. Sounds produced by sources such as animals or musical instruments have a complex polar pattern that varies with the state and movement of each individual source.
I think I understand your idea, but do you know any way of classifying a sound according to its radiation pattern (polar diagram) other than using several microphones? Even then, it would have to be done in a controlled environment; it would be really difficult to achieve in practice.
It is an interesting question. We make common mistakes that diminish the effectiveness of our visual abilities, and I think the same is true of our auditory abilities. What I mean is that distinguishing between a live sound and a recorded one is difficult, but it could be a skill that improves with practice. In this situation we should focus on the background sound (noise).
Ordinary speakers are mostly tuned to reproduce the mid frequencies. If you capture sound played through such speakers and compare its spectrum with that of a live sound, you will see that a huge amount of very low and very high frequency content is missing from the recorded sound. The same limitations appear in ordinary (non-studio) microphones, too.
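A hedged sketch of that comparison: measure how much of the signal energy sits at the spectral extremes. The band edges (60 Hz and 12 kHz) are illustrative guesses for "very low" and "very high", and any decision threshold would have to be found empirically for the speakers and microphones involved.

```python
import numpy as np
from scipy.signal import welch

def edge_band_energy_fractions(x, fs, low_edge=60.0, high_edge=12000.0):
    """Fractions of total energy below low_edge and above high_edge."""
    f, psd = welch(x, fs=fs, nperseg=8192)
    total = np.sum(psd)
    low_fraction = np.sum(psd[f < low_edge]) / total
    high_fraction = np.sum(psd[f > high_edge]) / total
    return low_fraction, high_fraction

# Comparing these fractions for a live capture and a capture of the same material
# played through ordinary speakers should show the played-back chain losing
# energy at both extremes.
```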