Some researchers say that the concept of complexity in signal processing is a settled matter, but in my work team we think there are still several aspects worth analyzing.
Complexity in signal processing normally refers to computational complexity: how efficiently an algorithm uses resources and how fast it runs overall...
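To make that concrete, here is a tiny sketch (the transform size and the timing style are just illustrative choices, not anything prescribed above) comparing the same transform computed two ways: a naive O(N²) DFT built as an explicit matrix multiply versus numpy's O(N log N) FFT. Both give the same result; only the resource usage differs.

```python
import time
import numpy as np

N = 2048
x = np.random.randn(N)

# Naive O(N^2) DFT: build the full DFT matrix and multiply.
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

t0 = time.perf_counter()
X_naive = W @ x
t1 = time.perf_counter()

# O(N log N) FFT of the same signal.
X_fft = np.fft.fft(x)
t2 = time.perf_counter()

print(f"naive DFT: {t1 - t0:.4f} s, FFT: {t2 - t1:.6f} s")
print("max difference:", np.max(np.abs(X_naive - X_fft)))
```

The difference printed at the end is down to floating-point rounding; the two approaches compute the same thing, which is exactly why the cheaper one wins in practice.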
The exponential growth in computing power has led many researchers to ignore computational complexity considerations. After all, the computational power of our mobile phones is many times that of Apollo 13.
In reality, reducing computational cost/complexity lets one fit many more algorithms on a single core/chip/FPGA, enabling implementation with fewer resources. If 2000 GPU cores can run 10 copies of algorithm A but 20 copies of algorithm B, where B is only slightly inferior to A, in many scenarios industry would prefer B over A.
NLMS is preferred over RLS in adaptive filtering applications (echo cancellation, system identification) despite converging more slowly than RLS, because its per-sample cost is O(N) in the filter length, versus O(N²) for RLS.
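For intuition, here is a minimal NLMS sketch (the filter length, step size `mu`, and regularizer `eps` are illustrative values, not anything fixed above). Each sample costs one dot product and one vector update of length `num_taps`, i.e. O(N); an RLS update would additionally maintain an N×N inverse-correlation matrix, i.e. O(N²) per sample.

```python
import numpy as np

def nlms(x, d, num_taps=32, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter.

    x : input signal (e.g. far-end speech in echo cancellation)
    d : desired signal (e.g. microphone signal containing the echo)
    Returns the error signal e and the final weight vector w.
    Per-sample cost: one length-num_taps dot product and one vector update, O(N).
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-N+1]
        y = w @ u                              # filter output
        e[n] = d[n] - y                        # estimation error
        w += (mu / (eps + u @ u)) * e[n] * u   # normalized gradient step
    return e, w

# Toy usage: identify a short random "echo path" h from input x.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
h = rng.standard_normal(32) * 0.1
d = np.convolve(x, h)[:len(x)]
e, w = nlms(x, d)
```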
Never heard of it. Two possibilities come to mind. The almost trivial one is "complex" signal processing (I/Q), if there's any misunderstanding about where the term "complexity" came from. The other is "computational complexity", which is about assessing whether computing certain things is, in a broad sense, difficult/time-consuming/resource-consuming; but that isn't really "signal processing" as such. Beyond that, there's a lot of stuff called "complex" that isn't further defined or formalized, I believe. Tying "complexity" to "resource hungry" is fine, but not well defined either.