I think one of the most educational or illustrative ways of understanding the answer to this can be seen by looking at machine learning and similar fields. A (relatively) early contribution to machine learning was the development of a program/algorithm that could "learn" to play checkers. Clearly this was inefficient (especially given that it wasn't much later that a computer program defeated the best chess player on the planet). For problems that we can solve in every form by the same algorithm, methods from machine learning/computational intelligence paradigms are inefficient. However, for the traveling salesman problem, for fitness functions either in actual evolutionary biology or in applications elsewhere, etc., the lack of any explicit solution makes borrowing from complex systems (e.g., swarm intelligence, artificial neural networks, cellular automata, etc.) the only efficient method (or, in many cases, the only method).
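As a concrete (and deliberately minimal) illustration of heuristic search where no efficient exact algorithm is known, here is a small 2-opt sketch for a random travelling-salesman instance. The city count and iteration budget are arbitrary illustrative choices, not anything from the answer above:

```python
# Minimal heuristic sketch for the travelling salesman problem:
# random city coordinates, a random starting tour, then repeated 2-opt
# moves (reverse a segment, keep it only if the tour gets shorter).
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
random.shuffle(tour)

for _ in range(20000):
    i, j = sorted(random.sample(range(len(tour)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    if tour_length(candidate) < tour_length(tour):
        tour = candidate

print(f"heuristic tour length: {tour_length(tour):.3f}")
```

This never guarantees the optimal tour; it simply trades optimality for a usable answer in reasonable time, which is exactly the trade-off the answer above is pointing at.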
It is postulated that humans and other living systems with brains (or at least with cortices) are able to process concepts via nearly zero-lag synchronization among networks in disparate parts of the brain (see attached and links below). Synchronization itself is not only ubiquitous among complex systems but also enables systems to efficiently navigate, solve, adapt to, or otherwise deal with "problems" (see the toy sketch after the links below).
See e.g., "Sudden synchrony leaps accompanied by frequency multiplications in neuronal activity" (http://journal.frontiersin.org.ezproxy.lib.umb.edu/Journal/10.3389/fncir.2013.00176/full)
Nonlocal mechanism for cluster synchronization in neural circuits
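As a toy illustration of synchronization in a population of coupled units (not of the specific neural mechanisms in the papers linked above), here is a minimal mean-field Kuramoto sketch; the number of oscillators, coupling strength, and step size are arbitrary assumptions:

```python
# Mean-field Kuramoto model: each oscillator's phase is pulled toward the
# ensemble average. The order parameter |r| goes from ~0 (incoherent) toward
# 1 as the population synchronizes.
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 100, 2.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases

def order_parameter(phases):
    """|r| = 1 means perfect phase synchrony, |r| ~ 0 means incoherence."""
    return abs(np.mean(np.exp(1j * phases)))

print("before:", round(order_parameter(theta), 3))
for _ in range(steps):
    r = np.mean(np.exp(1j * theta))
    theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))
print("after: ", round(order_parameter(theta), 3))
```

With the coupling above the critical value, a large fraction of the oscillators lock to a common phase, which is the generic behaviour the answer appeals to.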
There are two ways to answer this question. One is to look at the system's structure and the other is to look at its functions.
With regard to its structure, my hunch is that the decentralised, distributed architecture of complex systems is likely more efficient than a centralised structure. There's a reason why the internet, for instance, is built the way it is. I wouldn't be surprised if there were differences depending on the system's size, the rate of information and energy exchange between system components, and the capacity of central nodes. On the whole, this is something that empirical work should be able to determine relatively easily.
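One very small empirical-style experiment along these lines, assuming the networkx package is available (the node count and edge probability are arbitrary illustrative choices): knock out the hub of a centralised star topology versus a random node of a sparse distributed graph and see how much of the network stays connected.

```python
# Toy robustness comparison of a centralised (star/hub) topology with a
# distributed (random) topology of the same size.
import random
import networkx as nx

N = 50
star = nx.star_graph(N - 1)                         # node 0 is the hub
distributed = nx.erdos_renyi_graph(N, 0.1, seed=1)  # sparse random graph

def fraction_still_connected(graph, node):
    """Remove one node and report the fraction of remaining nodes in the
    largest connected component (a crude 'does it still work' measure)."""
    g = graph.copy()
    g.remove_node(node)
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()

print("star, hub removed:        ", fraction_still_connected(star, 0))
print("distributed, node removed:", fraction_still_connected(
    distributed, random.choice(list(distributed.nodes))))
```

The star collapses when its hub fails, while the distributed graph typically keeps almost all nodes connected, which is the intuition behind "there's a reason the internet is built the way it is".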
If I were to take a more functional perspective, though, I would argue that a complex system can be neither efficient nor inefficient. Going back to the definition of efficiency, a system (any system) can be said to be efficient provided that (a) it has a unique, clearly defined goal and (b) it works toward this goal without wasting time or energy. A complex system is by definition made up of multiple, heterogeneous agents, and from this it follows that there will be multiple competing goals sustaining its activity at any given time. In my work, I use the term 'system intentionalities' to describe these driving forces, and you can envisage them as 'purposes' that a system strives to serve. If one wanted to measure the efficiency of such a system, one would have to specify that one intentionality is the 'real purpose' of the system - then we might empirically establish whether this purpose is fulfilled efficiently or not. But such a specification would, in my opinion, be reductive and incompatible with complexity thinking.
Efficiency is not the same as Effectiveness, which is not the same as Efficacy.
If we wish to pursue the idea of *sustainable complex systems* I think it would be of benefit to prioritize efficacy.
We can talk about the difference between *sustainable* complex systems and those that are less so. For sustainable systems, their long-term viability is what matters. For a system to be viable long term, there needs to be flexibility and slack so that the system can *adapt* to and *change* dynamically with its environment - or it will *die*, or cease to exist, when it cannot continue as before. This means that, on a narrow view of efficiency/effectiveness/efficacy, systems that are (for example) very efficient are also by definition *unsustainable*, because by default they do not retain enough *viability* long term. In other words - a recipe for the death of the system as we know it; the question is likely not one of *if* but of *when*...
*Death* of a system is basically due to the lack of requisite variety available, owing to over-specialization...
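For reference, Ashby's law of requisite variety makes this point quantitative. In its standard entropy form (a textbook statement, not something from the answer above), the residual uncertainty in the essential outcomes is bounded below by the variety of disturbances minus the variety of the regulator's responses:

```latex
% H(E): uncertainty in essential outcomes
% H(D): variety (entropy) of disturbances from the environment
% H(R): variety of the regulator's available responses
H(E) \;\ge\; H(D) - H(R)
```

An over-specialized system has a small H(R), so when the environment's H(D) grows it simply cannot absorb the difference - which is the "death by lack of requisite variety" described above.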
This is a good and non-trivial question. The answer will definitely depend on the actual definition of 'efficiency'. One possibility is to take a physics approach and calculate the system's thermal efficiency. The second law of thermodynamics sets limits on such efficiency. But in the general context of adaptive complex systems, there is no simple way to calculate 'efficiency'. Such systems are usually far from equilibrium, and simple equilibrium considerations do not hold. Such systems are also open in the sense that they interact with the environment, using energy from outside to lower their entropy. The more 'complex' these systems are, the better they become at this process of using external energy to lower their entropy - this appears to be a way to see the complexity of adaptive complex systems. But strictly speaking, their thermal efficiency is lower than that of the reversible ideal heat engine. And I believe that the more complex the system, the more irreversible and the more (thermally) inefficient it is. But it is up to you how you define efficiency and, in turn, how you interpret the increase of the adaptive properties of complex systems. Please check this article, which in easy terms discusses some of the relevant concepts and highlights some fairly recent advances: http://www.quantamagazine.org/20140122-a-new-physics-theory-of-life/
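For concreteness, the second-law limit referred to above is the Carnot bound, and the "lower their entropy" point is just the entropy balance for open systems:

```latex
% Second-law (Carnot) bound on the thermal efficiency of any engine
% operating between a hot reservoir T_h and a cold reservoir T_c:
\eta \;\le\; \eta_{\mathrm{Carnot}} \;=\; 1 - \frac{T_c}{T_h}

% An open system may lower its own entropy (\Delta S_{\mathrm{sys}} < 0)
% provided the total entropy production is non-negative:
\Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{env}} \;\ge\; 0
```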
If we talk about human organized activity, we may view it as a *living system*, and so it could be relevant to take *Maxwell's demon* into consideration - potentially relevant during the lifetime of such a system. This has also been discussed in cybernetics and systems science by Heinz von Foerster and Ernst von Glasersfeld.
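One standard way to make the Maxwell's-demon point quantitative (again, a textbook result rather than anything from the answer above) is Landauer's bound: the demon must eventually erase its memory, and erasing information has a minimum thermodynamic cost, so information processing in a living or organized system is never free.

```latex
% Landauer's principle: erasing one bit of information at temperature T
% dissipates at least
E_{\min} \;=\; k_B \, T \ln 2
```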