I was wondering which method you would recommend for determining the time complexity of human-readable algorithms obtained from a hyper-heuristic framework.
Is there any particular difference between those types of algorithms and what one would consider for any other kind of algorithm?
Time complexity is usually a formal construct that we use for any kind of algorithm; it depends on the asymptotic number of steps (considering worst-case, average-case, or best-case analysis). The only thing we require (and many would define an algorithm to have this property) is that it terminates in a finite number of steps. If that applies to your setting, there is a strong body of literature in theoretical computer science, in particular on the analysis of algorithms (these would be your O(blah)'s and more). Any reasonable theoretical computer science text will do for this (e.g., Introduction to Algorithms).

My better question is: why wouldn't these standard techniques apply? I understand each of the terms you mentioned, but I don't quite follow why you wouldn't use the standard approach for algorithm analysis, given your model of computation. The model can change, but the definitions need not change unless you need to consider something in particular (e.g., on a parallel machine model, you may base the time complexity of the whole algorithm on the longest-running thread/processing unit).
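Just to make the standard approach concrete, here is a minimal Python sketch (insertion sort is only a stand-in example, not anything produced by a hyper-heuristic framework): you identify the dominant operation and bound how many times it runs as a function of the input size.

```python
# Toy illustration of standard asymptotic analysis: count the dominant
# operation (comparisons) and bound it in terms of n = len(a).

def insertion_sort(a):
    """Sort list `a` in place; comparisons are the dominant operation."""
    for i in range(1, len(a)):          # outer loop runs n - 1 times
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:    # worst case (reverse-sorted input):
            a[j + 1] = a[j]             #   up to i comparisons per pass
            j -= 1
        a[j + 1] = key

# Worst case: T(n) = sum_{i=1}^{n-1} i = n(n-1)/2, i.e. O(n^2)
# Best case (already sorted input): one comparison per pass, i.e. O(n)
```

The same bookkeeping works whatever the model of computation is; only what you count as a "step" changes.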
If the whole algorithm doesn't terminate in finite time, it makes analysis a little less concrete, but you can also do the following:
1) Determine the time complexity of a single iteration of the algorithm. If other algorithms report this as well, it can be a useful measure for comparison (see the sketch after this list).
2) Average-case analysis may be more useful: consider the distribution over instances and how they behave, then go right ahead.
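To make option 1 concrete, here is a hypothetical Python sketch (the swap neighbourhood and the `evaluate` objective are made-up placeholders, not part of any particular hyper-heuristic framework): even if the enclosing improvement loop has no finite bound, one call to the iteration routine has a concrete cost you can state.

```python
def evaluate(solution):
    """Placeholder objective function: O(n) in the length of the solution."""
    return sum(solution)

def one_iteration(solution):
    """Best-improvement step over the pairwise-swap neighbourhood.

    The neighbourhood has O(n^2) members and each candidate is copied and
    evaluated in O(n), so a single iteration costs O(n^3) even though the
    loop that calls it may never terminate.
    """
    best = list(solution)
    best_val = evaluate(best)
    n = len(solution)
    for i in range(n):
        for j in range(i + 1, n):
            cand = list(solution)
            cand[i], cand[j] = cand[j], cand[i]
            val = evaluate(cand)
            if val < best_val:
                best, best_val = cand, val
    return best
```

Reporting "O(n^3) per iteration" is then a well-defined statement, and the number of iterations can be treated separately (e.g., empirically, or via an average-case argument as in option 2).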
I think Mr. Daniel Page said everything that needed to be said about the topic.
I agree: as long as you're dealing with "standard" algorithms that terminate in finite time, the same standard approach should be applied to their analysis, no matter the model or framework you're using (unless the latter is based on different assumptions that would call the relevance of the standard approach into question).