
Phased array antenna directivity is generally explained on transmit. For example, a uniform linear array (ULA) with element spacing of half a wavelength, scanned to boresight, has a directivity of N, where N is the number of elements in the array. If the element spacing is increased, the array directivity increases because the energy is focused more tightly in space: the beam narrows and more energy is concentrated toward boresight (directivity is, after all, a measure of energy focusing). However, as the spacing approaches one wavelength, a grating lobe appears; the energy is split between two beams and the directivity decreases, even though each beam individually is still narrow.
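To make the transmit curve concrete, here is a minimal sketch (Python/NumPy; the helper name ula_directivity is just for illustration) that evaluates the closed-form broadside directivity of a ULA of isotropic elements with uniform in-phase excitation, D = N^2 / (N + 2 * sum_{m=1}^{N-1} (N - m) * sin(m k d)/(m k d)), with k = 2*pi/lambda:

```python
import numpy as np

def ula_directivity(N, d_over_lambda):
    """Closed-form broadside directivity of an N-element ULA of isotropic
    elements with uniform in-phase excitation:
    D = N^2 / (N + 2 * sum_{m=1}^{N-1} (N - m) * sin(m*k*d)/(m*k*d))."""
    kd = 2.0 * np.pi * d_over_lambda
    m = np.arange(1, N)
    # np.sinc(x) = sin(pi*x)/(pi*x), so divide the argument by pi
    cross = 2.0 * np.sum((N - m) * np.sinc(kd * m / np.pi))
    return N ** 2 / (N + cross)

N = 10
for d in [0.50, 0.70, 0.90, 0.95, 1.00, 1.05]:
    print(f"d = {d:.2f} lambda:  D = {ula_directivity(N, d):5.2f}")
```

For N = 10 this reproduces the familiar behavior: D = N exactly at d = 0.5λ and d = 1.0λ (every sinc term vanishes there), a peak just below one wavelength, and a dip below N just beyond it.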

Since antennas are reciprocal, the directivity of an array on transmit must equal its directivity on receive. However, I don't understand what is physically happening on receive to cause the decrease in directivity near one-wavelength spacing.

Starting with receive elements spaced at half a wavelength, spreading the elements apart would let the array capture more of the incident plane wave (consistent with the transmit picture). At the same time, as the elements are spread out they sample less and less of the wavefront between them, which would reduce the aperture efficiency. But that reasoning predicts an asymptotic curve, with directivity approaching some maximum value as the element spacing increases, rather than the actual curve, which oscillates about N (with a decrease in directivity at one-wavelength spacing in particular). I've heard some explanations based on mutual coupling/impedances, but the transmit array directivity oscillations are present even without considering those effects.
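On that last point, the oscillation can be reproduced from the ideal array factor alone, with isotropic, uncoupled elements. Below is a brute-force check (again only a sketch; ula_directivity_numeric is an illustrative name) that integrates the array-factor power pattern over the sphere and agrees with the closed form above:

```python
import numpy as np

def ula_directivity_numeric(N, d_over_lambda, n_theta=20001):
    """Boresight directivity from brute-force integration of the ideal
    array-factor power pattern: isotropic elements, uniform in-phase
    excitation, no mutual coupling, array laid along the z-axis."""
    theta = np.linspace(0.0, np.pi, n_theta)
    psi = 2.0 * np.pi * d_over_lambda * np.cos(theta)  # inter-element phase
    n = np.arange(N)[:, None]
    af2 = np.abs(np.exp(1j * n * psi[None, :]).sum(axis=0)) ** 2
    # No phi dependence, so P_rad = 2*pi * integral of |AF|^2 sin(theta) dtheta
    integrand = af2 * np.sin(theta)
    p_rad = 2.0 * np.pi * np.sum(
        0.5 * (integrand[:-1] + integrand[1:]) * np.diff(theta))
    return 4.0 * np.pi * N ** 2 / p_rad  # peak |AF|^2 = N^2 at broadside

N = 10
for d in [0.50, 0.90, 0.95, 1.00, 1.05]:
    print(f"d = {d:.2f} lambda:  D = {ula_directivity_numeric(N, d):5.2f}")
```

Nothing here models coupling or element impedances; the dip past one wavelength falls out of the inter-element path-length phases alone, which supports the point that the oscillation is not a mutual-coupling artifact.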

What is the physical explanation for the directivity fluctuations of a receive antenna array?
