I would say that the Delphi technique is still largely a people-dependent estimation technique. It is better described as a democratic process than as a necessarily accurate one. The aim is to reduce estimation bias by converging on the most frequent responses. Statistically speaking, shouldn't we consider that accepting the mode rather than the mean?
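To make the distinction concrete, here is a minimal sketch (the estimate values are invented) contrasting the two aggregations:

```python
from statistics import mean, mode

# Hypothetical final-round Delphi estimates (person-days) from seven experts.
estimates = [8, 10, 10, 10, 12, 15, 30]

print(f"mode: {mode(estimates)}")   # 10   -- the most frequent response
print(f"mean: {mean(estimates):.2f}")  # 13.57 -- pulled upward by the outlier (30)
```

The mode discards the dissenting outlier entirely, which is exactly the "democratic but not necessarily accurate" point: the majority view wins, whether or not it is right.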
The work-breakdown structure, by contrast, attempts to reduce estimation bias by estimating granular elements individually. The underlying philosophy is that the more granular the item, the more tangible it becomes from an estimation perspective.
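As a rough illustration (the task names and figures are made up), estimating at the leaf level and rolling the numbers up looks something like this:

```python
# Hypothetical work-breakdown structure: estimate the leaves, roll up the parents.
wbs = {
    "user login": {
        "design form": 2,              # leaf estimates in person-days
        "validate credentials": 3,
        "session handling": {
            "issue token": 1,
            "expiry/refresh": 2,
        },
    },
}

def rollup(node) -> float:
    """Sum leaf estimates; non-leaf nodes are nested dicts."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 8 person-days
```

Each leaf is small enough to reason about directly, so the bias of any single guess is diluted in the total.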
I have achieved very positive results using the second approach, also known as task decomposition. My paper is still in the making; I will make the results available soon.
Based on my reading and experience, Delphi is usually used to obtain a first-level perspective that is subsequently validated using other methods, statistical or otherwise. For instance, in one parametric study, Delphi was used to identify the critical factors impacting software effort, and that selection was subsequently validated by factor analysis.
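For intuition only, here is a toy sketch of that second validation step using scikit-learn's FactorAnalysis on fabricated data; nothing here reproduces the study I mentioned:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: 50 projects scored on four candidate effort drivers
# that a Delphi panel flagged (all names and numbers are invented).
latent = rng.normal(size=(50, 1))                  # one true underlying driver
noise = rng.normal(scale=0.3, size=(50, 4))
X = latent @ np.array([[0.9, 0.8, 0.1, 0.85]]) + noise

fa = FactorAnalysis(n_components=1).fit(X)

# High-magnitude loadings suggest which Delphi-nominated factors actually
# share the latent structure; near-zero loadings are candidates to drop.
print(np.round(fa.components_, 2))
```

The idea is simply that the expert panel nominates candidates cheaply, and the factor analysis then confirms, on data, which of those candidates carry real signal.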