This is not my field, so I'm only guessing at what you mean, but if you're talking about reasoning by people, I think the answer is obvious: we save time and effort through "intuitive leaps" that synthesize understanding from what seem like unrelated experiences, once the subconscious has integrated them.
This is why it takes a supercomputer to beat a chess master: the computer must apply all the rules and compare their consequences, whereas the master can skip straight to the good moves. Until computers can do this, they are mere "expert systems"; once they can, they will be true AIs (or, in the current lingo, AGIs).
You have to keep the rules up to date. Also, one rule might work most of the time but need exceptions; in other words, unless the rules are very simple, they interact, which makes it harder to cover all cases. Statistical methods use lots of data instead, and then the problem becomes access to the data.
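To make the interaction problem concrete, here is a minimal sketch (the rules and names are hypothetical, just for illustration) of a first-match rule list, where an exception only works if it is ordered before the general rule it overrides:

```python
# Minimal sketch of a rule list with exceptions (all rules hypothetical).
# Each rule is (condition, conclusion); the FIRST matching rule wins,
# so an exception must come before the general rule it overrides.

def classify(animal, rules):
    for condition, conclusion in rules:
        if condition(animal):
            return conclusion
    return "unknown"

rules = [
    # Exception: penguins are birds but do not fly.
    (lambda a: a["species"] == "penguin", "cannot fly"),
    # General rule: birds fly.
    (lambda a: a["class"] == "bird", "can fly"),
]

print(classify({"species": "penguin", "class": "bird"}, rules))  # cannot fly
print(classify({"species": "sparrow", "class": "bird"}, rules))  # can fly
```

Swap the two rules and the penguin "flies": the rules are individually sensible but interact through their ordering, which is exactly why a growing rule base gets hard to keep correct.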
I agree with Jess in considering rule-based thinking vs. intuitive thinking. However, I wonder whether you consider only ambiguity or also uncertainty, since the solutions can differ.
I would like to add that rules alone are not enough to reach a solution; you also need information and knowledge. When using intuition, your experience integrates enough tacit knowledge to move ahead with a solution, but in a rule-based approach you have to progressively add new information and knowledge, since your initial knowledge base might not be enough.
I would suggest reading the famous book Blink by M. Gladwell to see how intuition works.
This is a very interesting area of research. However, in the everyday world of requirements engineering, my first approach is to reduce the incidence of ambiguity.
If we take ambiguity to mean "uncertainty or inexactness of meaning in language," then limiting the scope of the available vocabulary would seem to reduce the complexity of the problem; see the use of structured business vocabularies (SBV) or Restricted English.
Next would be to control the patterns of the specifications themselves. A good resource is the OMG SBVR 1.3 specification and its use of business domain models (i.e., fact type models and RuleSpeak) as examples of controlled specifications that are both human-readable and machine-readable.
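As a toy illustration of the idea (the vocabulary and pattern below are entirely hypothetical and far simpler than real SBVR/RuleSpeak grammars), restricting both the vocabulary and the sentence patterns makes a specification machine-checkable:

```python
import re

# Toy controlled-specification checker (hypothetical vocabulary and
# pattern; real SBVR/RuleSpeak grammars are far richer). A rule passes
# only if it matches an approved sentence pattern AND every term it
# uses comes from the business vocabulary.

VOCABULARY = {"customer", "order", "invoice", "payment"}

# One approved pattern: "Each <term> must have exactly one <term>."
PATTERN = re.compile(r"^Each (\w+) must have exactly one (\w+)\.$")

def check_rule(rule: str) -> list[str]:
    errors = []
    match = PATTERN.match(rule)
    if not match:
        errors.append("rule does not match an approved pattern")
        return errors
    for term in match.groups():
        if term not in VOCABULARY:
            errors.append(f"term '{term}' is not in the business vocabulary")
    return errors

print(check_rule("Each order must have exactly one customer."))  # []
print(check_rule("Each order must have exactly one widget."))    # vocabulary error
print(check_rule("Orders should probably have customers."))      # pattern error
```

The point is that every sentence the checker accepts has exactly one reading, which is what makes the same text usable by both humans and tooling.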
My current work involves using UML, a subset of SBVR stereotypes, business domain modeling, and specification patterns to generate human-readable specifications (in support of a model-driven analysis process).
So my answer would be that a rule-based approach seems like it would scale well if you can control the degree of ambiguity to start with.
Many years ago, Tony Arrott showed me an optimization program he had written using a values-based, as opposed to rules-based, algorithm. It solved the Travelling Salesman problem (with almost perfect optimization) many times faster and was also able to solve the Inventory problem (turning production facilities on and off optimally in response to fluctuations in demand) in real time on an old Mac.
He explained that the strategy of "scoring" each solution according to an array of values (such as "Having the item in stock is good" and "Shutting down or restarting a factory is bad") was a lot easier and faster than trying to obey rules about which action should follow which.
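Here is a minimal sketch of the values-based idea as I understand it (the weights, state, and function names are all hypothetical, not Arrott's actual program): each candidate action is scored against a few weighted values, and the highest score wins, with no rulebook about which action must follow which.

```python
# Minimal sketch of values-based decision making (hypothetical weights
# and state; not the actual program described above). Candidate actions
# are scored against a small set of valued outcomes; best score wins.

VALUES = {
    "stockout": -50.0,        # "Having the item in stock is good" (running out is penalized)
    "factory_toggle": -25.0,  # "Shutting down or restarting a factory is bad"
    "holding_cost": -1.0,     # carrying excess inventory costs money
}

def score(stock_after, factory_running, factory_was_running, demand):
    s = 0.0
    if stock_after < demand:
        s += VALUES["stockout"]
    if factory_running != factory_was_running:
        s += VALUES["factory_toggle"]
    s += VALUES["holding_cost"] * max(stock_after - demand, 0)
    return s

def choose_action(stock, production_rate, factory_was_running, demand):
    # Two candidate actions: run the factory this period, or idle it.
    candidates = {
        "run": score(stock + production_rate, True, factory_was_running, demand),
        "idle": score(stock, False, factory_was_running, demand),
    }
    return max(candidates, key=candidates.get)

# Demand spikes: running is worth the toggle penalty to avoid a stockout.
print(choose_action(stock=5, production_rate=20, factory_was_running=False, demand=15))   # run
# Stock already covers demand: idling avoids both toggling and holding cost.
print(choose_action(stock=30, production_rate=20, factory_was_running=False, demand=10))  # idle
```

Notice there is no rule saying "after a demand spike, restart the factory"; that behavior simply falls out of the scoring, which is what makes the approach easy to extend: add a value, not a web of new rules.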
I'm pretty sure this is how our brains work. It helps to explain our societal obsession with single-valued logic (e.g., "No price is too great to save a single human life!" or "The only good X is a dead X!"). As stupid as it may be, it's a very easy way of reaching a decision. The danger in overusing this method is obvious.
(Caveat: again, I have no credentials to speak of such matters. :-)