Having been around in one form or another for almost thirty years, software-defined radio (SDR) is still in its infancy, relegated to the research lab or the hobbyist's study. Every year, more publications declare that some recent advance in computing power or microprocessor architecture has paved the way for future systems to be completely software defined. Yet mass-market products remain, invariably, hardware defined.

What is the hold-up? Is it really limited computing power, or a lack of resources? Is it simply not a profitable venture?

Should we accept that, despite our best intentions, the world simply does not want, or need, SDR? Is it time to accept that SDR is just a research convenience, an interesting side project, and refocus our efforts on pushing the boundaries of hardware-defined receivers?

Or perhaps the momentum is still gathering, and there really is a future in SDR?
