A very appealing and flexible approach is the Method of Simulated Moments (MSM), also known as the Simulated Method of Moments (SMM), where simulations of the model try to match a set of summary statistics (i.e. "moments") of the data as closely as possible, as measured by a distance function. Together with Frank Westerhoff (University of Bamberg, GER) I applied this method to several small-scale agent-based asset pricing models with daily data, one paper being Franke/Westerhoff, "Structural stochastic volatility in asset pricing dynamics: Estimation and model contest", Journal of Economic Dynamics and Control, 36 (2012), 1193-1211.
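For concreteness, here is a minimal MSM sketch in Python. Everything in it is a toy placeholder - the return process, the moment set, the fake "empirical" moments, the identity weighting matrix - and it is not the Franke/Westerhoff code (the paper uses a richer moment set and an estimated moment covariance as weights). Only the structure of the estimator is the point: simulate, compute moments, minimise a weighted distance.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
SHOCKS = rng.standard_normal((50, 2000))  # common random numbers, fixed across evaluations

def simulate_model(theta, shocks):
    # Toy volatility-clustering return process; stands in for the real ABM.
    a, b = theta
    r = np.zeros_like(shocks)
    for t in range(1, shocks.shape[1]):
        sigma = np.sqrt(np.maximum(a + b * r[:, t - 1] ** 2, 1e-8))
        r[:, t] = sigma * shocks[:, t]
    return r

def moments(r):
    # Summary statistics ("moments") to match: variance, kurtosis, lag-1 ACF of |r|.
    absr = np.abs(r)
    d = absr - absr.mean()
    acf1 = np.mean(d[:, 1:] * d[:, :-1]) / absr.var()
    return np.array([r.var(), kurtosis(r, axis=None), acf1])

def msm_objective(theta, m_emp, W):
    g = moments(simulate_model(theta, SHOCKS)) - m_emp  # simulated minus empirical
    return g @ W @ g  # quadratic distance in the moments

m_emp = np.array([1.2, 4.0, 0.2])  # fake "empirical" moments, just for the example
W = np.eye(3)  # identity weights; in practice an estimated moment covariance
res = minimize(msm_objective, x0=[0.5, 0.3], args=(m_emp, W), method="Nelder-Mead")
print(res.x, res.fun)
```

Note the fixed common random numbers (SHOCKS): reusing the same draws across objective evaluations keeps the simulated objective smooth enough for a standard optimiser.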
A very elegant approach. The only problem I see is that the Monte Carlo simulations may be too computationally demanding in big models, such as macroeconomic ones. Do you think that in these cases it may be appropriate to estimate the parameters using genetic algorithms?
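To make the suggestion concrete, a bare-bones genetic algorithm over an MSM-type objective could look like the sketch below. It is purely illustrative: `objective` would wrap the expensive Monte Carlo runs, and the particular selection, crossover and mutation choices are arbitrary.

```python
import numpy as np

def genetic_search(objective, bounds, pop_size=20, generations=30, seed=1):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([objective(ind) for ind in pop])        # the expensive simulations
        elite = pop[np.argsort(fit)[: pop_size // 2]]          # selection: keep the best half
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        alpha = rng.uniform(size=(pop_size, 1))
        pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]   # blend crossover
        pop += rng.normal(0, 0.05 * (hi - lo), size=pop.shape)      # mutation
        pop = np.clip(pop, lo, hi)
    fit = np.array([objective(ind) for ind in pop])
    return pop[np.argmin(fit)]

# e.g., reusing the MSM objective above:
# best = genetic_search(lambda th: msm_objective(th, m_emp, W), [(0.0, 2.0), (0.0, 0.9)])
```

Of course, a population of 20 over 30 generations is still 600 model evaluations, so the GA relocates rather than removes the computational burden.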
Federico, another way of reducing the burden would be to adopt a more parsimonious design of experiments. We combine nearly orthogonal Latin hypercube (NOLH) sampling and Kriging estimation to this end. You can check this working paper for a synthetic presentation of the method: http://ideas.repec.org/p/grt/wpegrt/2012-18.html
The paper is not in its final form; we are revising it and should have a more useful version by the end of this month.
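To illustrate the idea (this is a sketch, not the paper's actual code): evaluate the expensive model only on a small space-filling design, fit a Kriging/Gaussian-process surrogate to those evaluations, and then optimise the cheap surrogate instead of the model itself. A plain Latin hypercube stands in for the NOLH design here, and `expensive_objective` is a placeholder for the full ABM plus loss function.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(theta):
    # Placeholder for a full ABM run plus distance-to-data computation.
    return (theta[0] - 1.0) ** 2 + (theta[1] - 0.4) ** 2

# Space-filling design: 40 points in the 2-D parameter box [0,2] x [0,1]
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=40), [0.0, 0.0], [2.0, 1.0])
y = np.array([expensive_objective(x) for x in X])  # only 40 expensive runs in total

# Kriging (Gaussian-process) metamodel of the objective surface
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)
surrogate = lambda th: gp.predict(np.atleast_2d(th))[0]  # cheap to evaluate

res = minimize(surrogate, x0=[1.0, 0.5], method="Nelder-Mead")
print(res.x)  # candidate parameters; confirm with a handful of real model runs
```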
I know of the following paper in which the authors estimate a HAM (heterogeneous agent model) of the Brock-Hommes type (dynamic asset pricing model):
Boswijk, H.P., Hommes, C.H. and Manzan, S., 2005. Behavioral Heterogeneity in Stock Prices. CeNDEF Working Papers 05-12, Universiteit van Amsterdam, Center for Nonlinear Dynamics in Economics and Finance.
Barde, S., 2016. Direct comparison of agent-based models of herding in financial markets. Journal of Economic Dynamics and Control 73, 329–353.
Barde, S., 2017. A practical, accurate, information criterion for Nth order Markov processes. Computational Economics 50 (2), 281–324.
Barde, S. and S. van der Hoog, 2017. An empirical validation protocol for large-scale agent-based models, Studies in Economics 1712, School of Economics, University of Kent.
Grazzini, J., Richiardi, M., 2014. Estimation of ergodic agent-based models by simulated minimum distance. Economics Papers 2014-W07, Economics Group, Nuffield College, University of Oxford.
Guerini, M., Moneta, A., 2016. A Method for Agent-Based Models Validation. LEM Papers Series 2016/16, Laboratory of Economics and Management (LEM), Sant’Anna School of Advanced Studies, Pisa, Italy.
Lux, T. and Zwinkels, R.C.J., 2017. Empirical validation of agent-based models. Tech. rep., Christian Albrechts Universitaet zu Kiel. (Published as a chapter in: Hommes and LeBaron (Eds.), 2018, Handbook of Computational Economics, vol. 4, Elsevier.)
You should be careful. There is a methodological issue (and I know I am in a minority on this): calibration of an ABM by "fitting" has the potential to undermine the value of the approach. This is because most ABMs have lots of parameters, and if they were statistical models we would already know that too many parameters and not enough data = rubbish. The trouble is that, unlike in statistics, we don't have a "formal" way of deciding how many parameters we are "allowed" not to know the values of (but fit) relative to the "amount" of data. I suspect that for ABMs we will have to rely on some sort of operational procedure (analogous to sensitivity analysis) to understand this issue.

But I do not want to overstate the case. Some fitting could sometimes be legitimate depending on the data and the model - but at the moment we are not really sure when or why IMO, we just do it. (But also, ask yourself: is it a bad design principle to specify a model where you cannot see how to collect the data "for real" and can only fit? Sometimes the problem is only practical, I agree, but sometimes it is definitely bad design.)

I have tried to develop these arguments more rigorously here: http://methods.sagepub.com/foundations/agent-based-models. I have also shown that calibration/validation without fitting does not have to be an impossible goal: see "Using Agent Based Modelling to Integrate Data on Attitude Change".
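To make the "operational procedure" suggestion slightly more concrete, here is one crude possibility - a one-at-a-time sensitivity sweep around the fitted parameters (a sketch, not a proposed standard). If the objective barely moves while a parameter sweeps its whole range, the data cannot identify that parameter, which is a warning sign of over-parameterisation.

```python
import numpy as np

def oat_sensitivity(objective, theta_hat, bounds, n_grid=11):
    """Sweep each parameter over its range, holding the others at theta_hat."""
    spans = {}
    for i, (lo, hi) in enumerate(bounds):
        vals = []
        for v in np.linspace(lo, hi, n_grid):
            th = np.array(theta_hat, dtype=float)
            th[i] = v
            vals.append(objective(th))
        spans[i] = max(vals) - min(vals)  # near-zero span => weakly identified parameter
    return spans

# e.g. oat_sensitivity(lambda th: msm_objective(th, m_emp, W), res.x, [(0, 2), (0, 0.9)])
```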
All of this worries me. Where I come from, agent-based models are kind of abstract/general - they aim to tell you what the possibilities are in a certain type of scenario. You don't expect to be able to fit them to one particular situation in detail. And models involving agents tend also to involve AI - obviously(?). But having recently seemingly had my brain trampled on by a herd of (well-meaning) elephants, feel free to ignore what I have just written...
I'm not saying this claim is wrong, Jim Doran, but it sounds unlikely. Would anybody think to say "statistics is general; you wouldn't expect it to fit specific data"? There are huge issues about whether the social world can be encompassed by our models of it, but ABM doesn't seem _particularly_ vulnerable to these issues (and may in fact have some advantages - like representational richness) relative to methods that social scientists already accept as being able to talk about the world.
"they aim to tell you what the possibilities are in a certain type of scenario. You
don't expect to be able to fit them to one particular situation in detail. "
So not really much to do with statistics at all. E.g., what would you say are the most prominent features of human society now across the planet? What likely futures can you predict from these features alone? Global dictatorship may be likely. But it all depends on human cognition and social behaviour, which have to be made operational in the model...
That wasn't my point. My point is about what kind of thing ABM "is" and how that determines what we can expect of it. Some people seem to see it as a branch of mathematics, in which case you "prove" results but then face the problem that they may not relate to anything real (which is more likely in complex models of social systems than in simple models of physical systems - but even then, a lot of pure mathematics ends up having no application). Others (myself included) think of it as a "research method" for establishing certain kinds of truth. This was the analogy with statistics: if conducted competently, it can answer questions like "what is the effect of class origins on educational success?" (but not questions like "how do sixth formers choose a university?").

A major problem with ABM, IMO, is that its practitioners want not to be bound by any "external" standard like data or systematic methodology: "Here's a model that I made up that I think is neat. If you think it is neat too, then cite it." The issue is not that I think everyone should see ABM as I do, but that anyone who purports to "do something" with ABM in the scientific arena should be able to specify both "procedures" for what they did that can be linked to relevant "outcomes" and "external" criteria for evaluating their "success". To take your specific example, how do we _assess_ (beyond mere competing assertion) whether a model does in fact tell us the range of possibilities in a general (or specific) aspect of social life?
Well, look at it this way (warning - the elephants are around today). An ABM to predict the global future of humanity is the AI equivalent of a human futurologist, and subject to the usual requirements for AI acceptance: programmable, demonstrable, and doing the job to or BEYOND human standard. Many AI systems now meet these requirements, e.g. the speech recogniser on my mobile phone, which is amazing. One can always test a bit on the development (if you can call it that) of humanity over the last 10k years...
I'm not trying to stop anyone doing anything _except_ that, whatever they do, they should be able to propose some scientific (non-subjective) criteria for saying whether they have "succeeded". IMO many (maybe even most) non-empirical ABMs do not even attempt this. The criteria may vary (because ABM is not a "finished" technology and can legitimately be used in different ways), but there should always be some. For example, you cannot criticise a formal model for the unrealism of its assumptions, but you can criticise it if the designer cannot explain how the "user" should decide objectively whether or not it applies to a particular phenomenon or range of phenomena.
The calibration of ABMs is usually done at a higher level, without considering their fractal properties. Given the heterogeneity of the agents within each model, there are many other levels if partial perspectives are considered. This makes calibration difficult, but it allows us to use multiple datasets that ensure the internal consistency of the model. This leads to a multi-criteria calibration in which the fit at the different micro levels and at the macro level must be balanced in order to make realistic inferences.
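As a sketch of what such a multi-criteria objective could look like (the `simulate` interface, the chosen statistics, and the weights are all hypothetical modelling choices):

```python
import numpy as np

def multilevel_loss(theta, simulate, micro_targets, macro_targets, w=(0.5, 0.5)):
    """simulate(theta) is assumed to return (per-agent outcomes, aggregate series)."""
    agent_outcomes, aggregate_series = simulate(theta)
    # Micro criterion: match the distribution of agent-level outcomes (here: quantiles).
    micro_stats = np.quantile(agent_outcomes, [0.1, 0.5, 0.9])
    # Macro criterion: match moments of the aggregate series.
    macro_stats = np.array([aggregate_series.mean(), aggregate_series.std()])
    micro_fit = np.mean((micro_stats - micro_targets) ** 2)
    macro_fit = np.mean((macro_stats - macro_targets) ** 2)
    return w[0] * micro_fit + w[1] * macro_fit  # explicit balance between the levels
```

The weights w make the micro/macro trade-off explicit rather than leaving it implicit in whichever dataset happens to dominate the loss.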
Additional data sets/comparisons are certainly a way of reducing the risk of "contingent correspondence" between model and world, but the basic problem of how complicated your model "can" be, given the data you have, remains...
Think about William Golding's novel "Lord of the Flies" - what mental processes did he use to predict the society that would emerge on the castaway children's island? What mental model of human society did he use? Why do we, the readers, find his predictions plausible? Not just because Golding writes well, surely...