Both are isomorphic in the sense that algorithms designed in one form can always be translated to the other form. Loops require mutable variables and side effects, and are of the procedural idiom. Deep recursions require compiler support such as tail call optimization to function well. These are in turn typical in functional programming languages. Whether side effects or recursive functions are more difficult cognitively is dependent on the situation and the individual.
A recursive structure can be seen as a special case of a loop statement with a possibility to exit: a recursive procedure calls itself. (Sorry for my very bad English, like most French people ;-) )
Recursive functions have a problem: resources! They spend a lot of machine resources, because each call needs to push and pop variables, which uses a lot of memory. The good thing is that recursion simplifies the source code.
1. Is there functional equivalence between the loop and recursive structures?
In computer programming, it is often required that a recursive program be transformed into a functionally equivalent iterative program. See e.g. the recursive and iterative programs for the Tower of Hanoi, http://en.wikipedia.org/wiki/Hanoi_tower. I like this example, which is also (and better) explained as an exercise in the book Knowledge Representation and Reasoning by Brachman & Levesque (2004), p. 73. This exercise is worth giving to students in an Artificial Intelligence course.
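For readers who want to run the Hanoi example, here is a minimal recursive sketch in C; the function name and peg labels are my own illustration, not the notation used by Brachman & Levesque:

#include <stdio.h>

/* Move n disks from peg 'from' to peg 'to', using 'via' as the auxiliary peg. */
void hanoi(int n, char from, char to, char via) {
    if (n == 0)
        return;                                 /* base case: nothing to move */
    hanoi(n - 1, from, via, to);                /* clear the n-1 smaller disks out of the way */
    printf("move disk %d: %c -> %c\n", n, from, to);
    hanoi(n - 1, via, to, from);                /* put the smaller disks back on top */
}

int main(void) {
    hanoi(3, 'A', 'C', 'B');                    /* prints 2^3 - 1 = 7 moves */
    return 0;
}

The iterative solution (see the Wikipedia page cited above) has to keep track of move parity explicitly, which is exactly the kind of bookkeeping the recursive version hides.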
2. What are the cognitive complexities of the loop and recursive structures?
Loop programs are written by programmers. Loop programs are within the subject matter of programming as a discipline. Writing a loop is the core of the program: you have to grasp recurrence relations in an intelligent way. Human intelligence lies in writing an effective loop program.
Recursive programs are written by theorists. Recursive programs are within the subject matter of theoretical computer science.
Hence loop and recursive structures can be viewed as the two sides of the Moon.
Loop and recursive structures were addressed in the works of Magne Haveraan in the 1990s and later in our joint research; see e.g. the references in Vytautas Čyras (2007), Composition of loop modules in the Structural Blanks approach to programming with recurrences: A task of synthesis of nested loops, Informatica 18(1):37-54, http://www.mii.lt/informatica/pdf/INFO647.pdf.
Actually, the two are separate concepts: a loop is repeating a defined set of steps in a structured way. Recursive structures are partially defined by themselves. It is a bit more than self-invocation, actually, because a loop could do that too. The difference is the re-use of methodology that both defines and allows the solution. Of course from a theoretical point of view one could argue that one is a case of the other, but the underlying principle is very different.
As for functional equivalence: to a degree. It mostly depends on the language you are using. While many functions can be translated, some solutions to a given problem are more effective and some are less. Apart from the elegance of a given solution, there is the matter of resource handling. For example, if you write code in ANSI C, you tend to use more loops and iteration and much less recursion due to the size of the stack. Some languages prefer recursion, though. These constraints pretty much define the solution you implement in practice.
I'm not sure what you mean by cognitive complexities, but if you're referring to the process of teaching programming to newcomers, loops are pretty intuitive while recursion is the opposite. So I tend to introduce loops a bit earlier, typically along with Boolean logic. You might have meant something else though?
With a loop structure you know the exact number of iterations in advance. A recursive structure need not know the exact number of iterations; it can be any number and is not known in advance.
It is possible to transform a loop structure into a recursive structure by splitting the direct and return branches: in the direct branch you are just navigating, while in the return branch you perform the action body.
The cognitive complexity of a recursive structure is higher because it is inherently polymorphic.
When sequences of instructions within a programming language are continually repeated and executed until a certain condition is reached or met, we refer to this as a loop. Examples abound, among which are the for-loop, the while-loop, etc. An infinite loop continues to execute because it does not have any terminating condition to bring it to a stop, unless the computer shuts down or the battery power gets exhausted. This can be particularly interesting. Recursive constructs, however, are programming constructs that repeatedly call themselves, explicitly or implicitly. There are many examples of recursion, such as linear recursion, exponential recursion, the Towers of Hanoi, tail recursion, and mutual recursion, among others. In some cases, certain conditions are deliberately set to terminate recursive functions. In the case of a loop, a function does not call itself; for recursive programming statements, however, a function does call itself to execute a particular task.
Usually, recursive solutions tend to be more readable. However, recursion requires the identification of a "base case" and a "recursive case", and thus rethinking the algorithm that way. This is not always that clear for beginners or students, who often prefer "the loop way".
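To make the "base case / recursive case" point concrete, here is a small C sketch contrasting the two styles; factorial is just a convenient stand-in for any simple accumulation:

#include <stdio.h>

/* Recursive style: a base case plus a recursive case. */
unsigned long fact_rec(unsigned n) {
    if (n == 0) return 1UL;               /* base case */
    return n * fact_rec(n - 1);           /* recursive case */
}

/* Iterative style ("the loop way"): a mutable accumulator updated in a loop. */
unsigned long fact_iter(unsigned n) {
    unsigned long acc = 1UL;
    for (unsigned i = 2; i <= n; i++)
        acc *= i;
    return acc;
}

int main(void) {
    printf("%lu %lu\n", fact_rec(10), fact_iter(10));   /* both print 3628800 */
    return 0;
}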
Regarding cognitive complexities, loops (iterations) vs. recursive functions can be studied for similar solutions. A general equation can be formulated by studying the individual complexities of many problems. This can be an interesting topic of research, as it can help in analyzing and comparing the performance of many problem solutions, such as cryptography, pattern matching, image processing, etc. Also, if the hypothetical proposition that recursion is less complex than loops holds, then we can work to design algorithms that convert any loop to an equivalent recursive function, and hardware can be designed to facilitate recursive function execution.
Loops can be thought of in both its operational meaning (i.e., as a sequence of program states that cycle in some way) or as a programming structure (e.g., while loops). There are programming languages having no explicit 'loop programming structures' available (e.g., pure functional or logic programming languages), but whose programs can of course 'loop' when executed. Furthermore, you can simulate looping programming structures in these languages by using recursion. In this sense, both notions (from a programming point of view) are dual.
Programming languages have a for-loop and a while-loop (or a variant, the until-loop). They perform the same compound instruction repeatedly, depending on a counter (for-loop) or on a condition that must be satisfied (as with while and until).
A recursive process is a (named) procedure that can invoke itself (or, one can have several procedures that may call one another).
It seems obvious to me that a loop (any of for/while/do etc.) is isomorphic to tail recursion. It is much less obvious that recursive structures and nested loop structures are isomorphic in general, although both are executed by von Neumann machines and therefore should have the same expressivity.
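To illustrate the loop/tail-recursion isomorphism concretely, here is a small C sketch (a summation chosen purely for illustration; whether the compiler actually turns the tail call into a jump depends on the compiler and its optimization settings):

/* Iterative sum of 1..n: mutable state (acc, n) updated in place. */
long sum_loop(long n) {
    long acc = 0;
    while (n > 0) {
        acc += n;
        n--;
    }
    return acc;
}

/* The same computation as a tail call: the loop state becomes parameters. */
long sum_tail(long n, long acc) {
    if (n == 0) return acc;
    return sum_tail(n - 1, acc + n);   /* tail position: nothing happens after the call */
}

Calling sum_tail(n, 0) gives the same result as sum_loop(n); the assignment statements of the loop correspond exactly to the argument updates of the tail call.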
As Vesa Norilo pointed out in the first comment to this question, loops rely on side effects and mutable variables. I.e., no loops without assignment statements. Thus, although many programmers are used to this idiom and therefore believe this is the more common and easier one, recursive structures are conceptually more simple. Try to do program analysis and transformation, then you will recognize that recursion is easy to handle, but side effects are not.
1)The loop is one kind of recursive structure in programming. We can have recursive functions or procedures. Both of them implies inductive structures, so we use induction principle to extract functional aspects of loops.
2) the complexity resides in the fact that we can't master the many execution paths of the loop or the recursive structure. For loops, we use the notions of invariant assertions, invariant functions and invariant relations. The latters are the main contribution of our work and they serve for the analysis of different loop functional aspects such as the extraction of the loop function or an approximation of it, the loop termination conditions, verification of the loop for a given specification, the verification of loop uncoherence for a given function, derivation of loop pre conditions and post conditions, etc.
As a further comment on the isomorphism; general recursion can be translated to tail recursion (see continuation passing style) which is something a compiler may decide to do (certainly not a C compiler though).
Mapping nested loops into recursion involves introducing a separate recursive function for each loop. With gotos it gets messy, but it is still encodable, as seen in Steele's famous lambda papers.
A translation of a general recursive function into a loop in C might involve a switch statement inside the loop and a mutable stack that contains information on which branch to take next and the data to pass to it. At this point, it starts to resemble a virtual machine (like Klaus Birken pointed out, the isomorphism occurs at the latest at the von Neumann level!).
I do not advocate any of these transformations as a practical programming technique, but it is beneficial to realize that the mapping exists.
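As a concrete sketch of the "switch inside a loop plus a mutable stack" transformation described above (my own illustration in C, using the doubly recursive Fibonacci function; not a production technique, exactly as noted):

#include <stdio.h>

/* fib(n) = n < 2 ? n : fib(n-1) + fib(n-2), rewritten without recursion:
   an explicit stack of frames and a switch on each frame's resume point,
   i.e. a tiny hand-made virtual machine. */
enum resume { CALL, AFTER_FIRST, AFTER_SECOND };
struct frame { int n; long partial; enum resume pc; };

long fib_vm(int n) {
    struct frame stack[64];                 /* enough for small n; depth grows with n */
    int sp = 0;
    long result = 0;

    stack[sp++] = (struct frame){ n, 0, CALL };
    while (sp > 0) {
        struct frame *f = &stack[sp - 1];
        switch (f->pc) {
        case CALL:
            if (f->n < 2) { result = f->n; sp--; }               /* base case: "return" n */
            else { f->pc = AFTER_FIRST;                          /* call fib(n-1) */
                   stack[sp++] = (struct frame){ f->n - 1, 0, CALL }; }
            break;
        case AFTER_FIRST:
            f->partial = result;                                 /* value of fib(n-1) */
            f->pc = AFTER_SECOND;                                /* call fib(n-2) */
            stack[sp++] = (struct frame){ f->n - 2, 0, CALL };
            break;
        case AFTER_SECOND:
            result = f->partial + result;                        /* combine and "return" */
            sp--;
            break;
        }
    }
    return result;
}

int main(void) { printf("%ld\n", fib_vm(10)); return 0; }        /* prints 55 */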
Recursion is a powerful structure of mathematics for a neat treatment of complex problems following a fundamental deduction-then-induction approach. The trace of recursion is an embedded set of processes; while that of iteration is a series of sequential processes.
As Peter Deutsch, the creator of the GhostScript interpreter, pointed out: “To iterate is human, to recurse divine.”
Recursion is good and elegant, but it uses the call stack, which may raise performance concerns. Recursion can be replaced by iteration with a stack, and iteration can also be replaced with recursion. The use of either of the two depends on the problem and its complexity, performance, and readability requirements.
To the question whether recursion and loops are the same: simply said, yes. Correctly implemented, the function will deliver the same results from a black-box point of view. The function should be independent of the implementation.
You ask about the cognitive complexity. A loop is a sequence of statements. Depending on a condition, this sequence is repeated for as long as it takes to fulfil the condition.
So you can read the statements straight through to understand the implemented function.
To realize a recursion you need a self-calling function, including a condition to break the recursion. A well-implemented recursion is elegant to read. But recursion is more complex to understand, because you have to "build" a stack in your mind to follow the function.
If you apply metrics like Halstead and McCabe you should also get a clue as to how complex the function is. In a loop the complexity is captured by Halstead, but in recursion it is captured by McCabe. Merging both metrics into a cognitive complexity metric should answer the question of cognitive complexity, as a first attempt at understanding.
Peter probably has a particular execution scenario, code model and a machine target in mind. His insights probably apply to that, but not generally. Maybe in his opinion a tail recursive call that becomes a jump instruction in the machine code is not recursion but a loop.
The OP didn't ask about the particulars of the compiler or framework implementation, so I assume that code-generation details like registers, stack frames or the heap are not particularly interesting, but rather the semantics of the two iterative models.
There is also some confusion about time: there's nothing whatsoever that stipulates that a loop nest always completes in polynomial time. It is very easy to write a non-terminating loop; while(true); is a silly counterexample, but in real-world terms, terminating conditions can be equally complicated in both recursions and loops. On the other hand, in many cases the trip counts can be reliably detected in both recursions and loop nests.
• Both iteration and recursion are based on control structure. Iteration uses a repetition structure while recursion uses a selection structure. Iteration explicitly uses a repetition structure; recursion achieves repetition through repeated calls. Iteration terminates when loop condition fails, recursion terminates when base case is reached.
• Both Iteration and recursion run the risk of looping infinitely. An infinite loop occurs with iteration if the loop-continuation test never becomes false; infinite recursion occurs if the recursion step does not reduce the problem in a manner that converges on the base case.
• Recursion requires more space than iteration. An iterative implementation is likely to be slightly faster in practice than the recursive one, because recursion repeatedly invokes the mechanism, and consequently the overhead, of method calls. This can be expensive in both processor time and memory space. There is always some limit to the size of the stack, and recursive algorithms tend to require more stack space than iterative algorithms.
• Both Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion.
Which approach you use depends on the problem you are trying to solve and on the language used.
Recursion is preferred for algorithms that are naturally recursive. A "naturally recursive" algorithm is one where the solution can be achieved by dividing the problem into smaller sub-problems ("Divide & Conquer"). Iteration, on the other hand, is preferred in imperative programming because it avoids the overhead of function calls and stack management. By contrast, functional languages tend to prefer recursion, with tail recursion optimization leading to little overhead, and in some languages explicit iteration is not supported at all. We should also mention that recursion imposes a higher cognitive load on the programmer than iterative loops do. For that reason, when teaching a new programming language I would always start with iterative structures and then teach recursive ones.
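As a concrete illustration of a naturally recursive, divide-and-conquer algorithm next to its iterative counterpart, here is a binary-search sketch in C (my own example, not tied to any particular textbook):

/* Divide & conquer: search a sorted array, first recursively, then as the
   equivalent loop (the tail recursion flattened by hand). 'hi' is exclusive. */
int bsearch_rec(const int *a, int lo, int hi, int key) {
    if (lo >= hi) return -1;                  /* base case: empty range */
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key) return mid;
    if (a[mid] < key)  return bsearch_rec(a, mid + 1, hi, key);
    return bsearch_rec(a, lo, mid, key);
}

int bsearch_iter(const int *a, int n, int key) {
    int lo = 0, hi = n;                       /* the recursion's parameters become mutable state */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid;
    }
    return -1;
}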
The answer by Vytautas may be the most popular, but it is misguided, or at least misleading. Peter's notes are too long to be popular, but they're more accurate.
As a general principle, the choice of control structure should be guided by the structure of the problem. If you're processing a table, the control structure with the simplest mapping to the data is a nest of for-loops. If you're processing a tree, recursive functions have the simplest mapping to the data.
Examples of problems for which recursion is best: parsing a language or playing a game of chess. I suggest that readers take a course or read a tutorial on those subjects.
Re performance and memory demands: That depends primarily on the compiler or interpreter of the programming language. Some compilers make tail recursions as fast as a loop. Any problem that requires nested recursions will require some kind of stack system for storing intermediate data. The advantage of recursive functions is that the compiler does the bookkeeping for managing the stack.
But the most general and flexible programming style is message passing among heterogeneous agents. This method can be used for anything, including a server that handles a huge number of transactions per second. But even for a single problem running on a laptop, it can take advantage of as many CPUs as your system has.
I will consider the second part of the question - is there a difference in cognitive complexity. Here there is a bit of a qualification since the question as posed does not stipulate exactly how the two are to be used. Functionally we can often do the same things using each over some range of operations. In a loop we are explicitly tying some operation to our explicit iteration. In recursion we are often implicitly repeating our explicit operation.
We can see beautiful examples of recursion - like a simple factorial function - because the syntax has helped us focus on the operation (multiplication) and the multiplicands were handled implicitly across our available domain. To do this in a loop requires several explicit steps to handle the bookkeeping - we need the previous product and the iterator and the present product all at the same time. More efficient, but exposes more details. To take the example of chess (or any optimization surface) ... we may very well want to preserve the intermediate steps of computation for re-use. So the loop will force us to do it explicitly. Not a problem in principle, but adds a level of design complexity to do it well.
So the cognitive complexity to consider as a programmer is how much exposure of intermediate products is useful or wise. That a recursive program is explicitly or implicitly hiding everything in a stack only imposes performance restrictions when the number of such intermediate products is large; so a quick calculation suffices to describe if that is a worry. It is harder to estimate the probabilities of bugs in a given plan for a loop or for a messaging system. Intuitively, most programmers get good at one of the three styles and tend to use it unless the problem they are facing hits that method at its weak spots.
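On preserving intermediate results for re-use: here is a tiny C sketch of explicit caching (memoization), with Fibonacci standing in for the chess-like case where the same sub-results recur; the cache size and function are my own illustrative choices:

/* Recursive form with an explicit cache: the recursion keeps its shape,
   but the intermediate products are exposed in a table we manage ourselves.
   Assumes 0 <= n < 64; the static cache is zero-initialized. */
static long cache[64];

long fib_memo(int n) {
    if (n < 2) return n;
    if (cache[n] != 0) return cache[n];       /* re-use a previously computed sub-result */
    cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return cache[n];
}

A plain loop computing the same values would carry the table (or the last two values) explicitly through every iteration, which is exactly the extra bookkeeping described above.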
A summary of common properties and differences between iterations and recursions identified so far:
a) There is functional equivalence between iterative and recursive structures for solving the same problem, while their traces (run-time behaviors) are significantly different.
c) The expressiveness of recursion is better, while the space efficiency of iteration is higher.
d) The computational complexity (for the machine) and cognitive complexity (for the programmer) of recursion are higher than those of iteration. With regard to formal models of the cognitive complexity of software, see [Wang, 2007].
e) Iteration may be perceived as a special case of recursion where the body of the iteration is not an embedded self-calling function but rather an explicit procedure?
f) The mathematical models (or rigorous definitions) of iteration and recursion are yet to be explored in computer science, software engineering, and computational mathematics. See a related question on BB: “What are the mathematical models of the iterative and recursive structures in software engineering?”
Loop and recursion are two sides of the same coin. Seen from the perspective of a state transition diagram, a loop is a repetition of a process that can span two or more transition states; in terms of program code, this means an iteration over a given number of program statements. A recursion, in a state transition diagram, is a self-calling repetition within the same state; in programming, this is a self-calling function or method, given that certain condition(s) are fulfilled.
Ken, I would generalize the metaphor to say that this coin is a polyhedron with many sides. Marvin Minsky said that if you only understand a subject in one way, you don't really understand it. That idea (among others) was an inspiration for the following article: http://www.jfsowa.com/pubs/paradigm.pdf . The title summarizes both the computational and the cognitive issues: Two paradigms are better than one, and multiple paradigms are even better.
To answer Yingxu's original question, there is no ideal cognitive method that can solve all the problems in any field. Page 4 of paradigm.pdf quotes Minsky: "What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle."
The main difference is the required memory resource. A loop needs less memory but more processing; recursion needs more memory and less processing. In addition, recursion can be structural (a tree structure to navigate in memory retrieval, as in the brain: neuron activation paths) or dynamic (a more computerized tree recursion: caching the results of processing in memory and, when reaching the leaves of the tree or specific constraints, popping up the stacked results to choose the "best" path). But most of the best recursive algorithms are transformed into loops (same results, without recursivity) and are no longer recursive, meaning that information in brain memory is not data but is transformed into neural operations or functions! It has been shown for a long time that this is the case with artificial neural networks: the code and the tuned parameters of the neural functions store any information. Note that any tree equals a computerized program.
As our human brain is plastic, it adapts the tree structure at each new or reinforced (positive or negative) recording step. Retrieval is first general and then becomes more precise and accurate with time, meaning that the more tree paths of neurons (more memory) are (dis-)covered, the more accurately we retrieve the item or the linear string of items we are searching for; the distribution of information is sparse in both the spatial and temporal modalities, the latter (temporal) being due to the time of recording somewhere in the string (spatial).
In addition, I think cognitive functions are just a combination of information strings, i.e. a higher level of abstraction of pieces of more basic code (from neural to spatial). So many regions in the brain can process more complex pieces of memory (texts, pictures, videos) or cognitive operations (filtering, matching, reducing, adding, ...). Our brain can be viewed as a pure dynamic mathematical system. It's a beautiful idea.
What an amazing logical reasoning?! Your answer (e) was simply contradictory to your other answers to (a), (c), and (d). If (e) is correct as you thought, you’ve already implied that the rest are true because they are derived from (e).
“Functional equivalence” between two algorithms or programs is defined as the equivalence (or identical results) of their computational outcomes, in both the mathematical and the programming perspective. It has nothing to do with the equivalence of space complexity or time complexity, as you perceived. Therefore, the comparative properties of iterations and recursions as stated in (a), (c), and (d) can be derived from (e). If (e) is true, the others are valid.
A general mathematical model of iterations can be expressed by the big-R notation [Wang, 2002/07] independent from and neutral to any semantic model or programming language, i.e.:
(a) For-Do iteration (FD)
FD = R_{i=1}^{n} P(i)
where R denotes an iteration (Repetition), P(i) the ith process (function) in the body of the loop, i the index of P and its internal variables with regards to the round number of the iteration, and n a positive integer.
Similarly, the “while … do …” and “repeat … until…” iterations can be formally expressed as:
(b) While-Do iteration (WD)
WD = R_{exp=T}^{exp=F} (P)
where exp is a Boolean expression evaluated as either true (T) or false (F).
(c) Repeat-Until iteration (RU)
RU = P --> WD(P)
   = P --> R_{exp=T}^{exp=F} (P)
where --> represents a sequential relation.
Details of the formal deductive semantics of the above mathematical models of iterations may be found in [Wang (2007), Software Engineering Foundations, or RTPA]. Neater mathematical forms of the above iteration models are attached in the appendix.
Recursions can also be formally expressed using the big-R notation, which will be discussed later.
Did you imply that A_0(0) = 1? To enable a finite series of back substitutions in the recursion process, A_m(n) with m = n = 0 has to be given or defined. Then it is a solvable problem in computing by a recursive structure. It is also equivalent to two loops, i.e., a forward reduction loop towards A_0(0) = 1 and then a backward induction loop until A_{m+1}(n+1) is obtained. Nothing special except that there are two variables in the recursive function.
However, it’s noteworthy that it’s not the focus of this thread to discuss an individual case. We are seeking the general mathematical models of recursions and iterations that fit all cases or functions.
The Ackermann function is not a "convergent" recursive function: the sufficient conditions of convergence, as follows, are not met in the two-variable case:
A convergent recursive function is constrained by the following sufficient conditions:
a) F(n) = F(F(n-1)), and
b) F(0) = k, where k is a constant known as the base value
You may find this definition in any entry-level book on computing theories such as in [Lipschutz and Lipson, 1997].
To help you to understand what a divergent recursion is, see the following ill-defined examples:
a) What is P(5) given the Pertermann’s function: P(n) = P(P(n+1)), where P(0) = 0.
b) What is B(3) given the Brex’ function: B(n) = B(B(n)), where B(0) = B(10).
WRT the Ackermann function, it is divergent because, for A(x, y) = A(x-1, A(x, y-1)), it implies A(x, y) => A(x, y-1) [a deadlock in the recursion for x] and y = A(x, y-1) [higher order, or recursion of recursions]. These are the real reasons why you always get memory overflows. It has nothing to do with your "long long" type definition of variable precision.
BTW, your version of the Ackermann function and its implementation in C were inaccurate. Try a simple online search to find the original problem. Also, try not to get stuck on a non-fundamental/general issue, and try to provide precise and short answers/comments so as not to distract the audience.
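For reference, and only as the standard textbook form rather than anyone's particular implementation, the two-argument Ackermann–Péter function and a direct C transcription are:

/* A(0, n) = n + 1
   A(m, 0) = A(m - 1, 1)
   A(m, n) = A(m - 1, A(m, n - 1))
   A direct transcription; it exhausts the call stack (or runs impractically
   long) already for small m, regardless of the integer type used. */
unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}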
Example of recursive algorithm ― The recursive least squares algorithm (RLS) is the recursive application of the well-known least squares (LS) regression algorithm, so that each new data point is taken into account to modify (correct) a previous estimate of the parameters from some linear (or linearized) correlation thought to model the observed system. The method allows for the dynamic application of LS to time series acquired in real time. As with LS, there may be several correlation equations with the corresponding set of dependent (observed) variables. For RLS with forgetting factor (RLS-FF), acquired data is weighted according to its age, with increased weight given to the most recent data.
Years ago, while investigating adaptive control and energetic optimization of aerobic fermenters, I applied the RLS-FF algorithm to estimate the parameters of the KLa correlation, used to predict the O2 gas-liquid mass transfer, hence giving increased weight to the most recent data. Estimates were improved by imposing a sinusoidal disturbance on the air flow and agitation speed (the manipulated variables). The proposed (adaptive) control algorithm compared favourably with PID. Simulations assessed the effect of numerically generated white Gaussian noise (2-sigma truncated) and of first-order delay. This investigation was reported in (MSc Thesis):
Thesis Controlo do Oxigénio Dissolvido em Fermentadores para Minimi...
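For readers unfamiliar with RLS-FF, here is a deliberately simplified single-parameter sketch in C; this is not the fermentation code described above, and the variable names, initial covariance and forgetting factor are illustrative only:

/* One RLS update with forgetting factor for a scalar model y = theta * x.
   lambda in (0, 1]: the smaller lambda is, the faster old data is forgotten. */
typedef struct { double theta; double P; double lambda; } rls1;

void rls1_update(rls1 *s, double x, double y) {
    double k = s->P * x / (s->lambda + x * s->P * x);   /* gain */
    s->theta += k * (y - x * s->theta);                 /* correct the estimate */
    s->P = (s->P - k * x * s->P) / s->lambda;           /* update the covariance */
}

/* Usage sketch:
   rls1 s = { 0.0, 1000.0, 0.98 };
   for each new sample (x, y): rls1_update(&s, x, y);   */

The multi-parameter case replaces the scalars theta and P by a vector and a matrix, but the structure of the update is the same.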
Example of iterative algorithm ― Newton–Raphson method:
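A minimal Newton–Raphson sketch in C, iterating x_{k+1} = x_k - f(x_k)/f'(x_k); the example function (x^2 - 2, i.e. computing sqrt(2)), tolerance and starting point are arbitrary choices for illustration:

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.0;                            /* starting guess */
    for (int k = 0; k < 50; k++) {
        double fx  = x * x - 2.0;              /* f(x)  */
        double dfx = 2.0 * x;                  /* f'(x) */
        double x_next = x - fx / dfx;          /* Newton step */
        if (fabs(x_next - x) < 1e-12) { x = x_next; break; }
        x = x_next;
    }
    printf("root ~ %.12f\n", x);               /* ~1.414213562373 */
    return 0;
}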
1. Are recursion and iteration functionally equivalent?
Yes. If your programming language is powerful enough, you can solve the same set of problems with iteration and recursion. A general transformation of recursive implementations into iterative ones would require an explicit stack to store, e.g., the values of the parameters that are passed in the recursive calls.
2. What are the cognitive complexities of the iterative and recursive structures?
Good question. In general, one sees that students have more problems with grasping the concept of recursion than of iteration. This may have to do with cognitive complexity, but in a twisted way. It has to do with abstraction, in particular passing the mental hurdle to not look into the recursive call and take the effect for granted. Just assume that the recursive call works and write/understand the body. The natural tendency to try to understand what is going on by thinking operationally about the recursive call is what blows up the cognitive complexity. You have to stifle this tendency and think about the program more abstractly. And this often turns out to be harder than dealing with some complexity.
Dear Yingxu Wang, iteration is the repetition of a procedure (i.e. a process) in order to generate an outcome, so recursive functions (structures, as you put it) might do the trick to generate the outcome. Recursive functions in themselves have little to do with iterative programming. Given the nature of the procedure (repetitiveness), as you put it, recursive functions are a special case of iteration, but not vice versa. I think this reasoning is flawed. KR Rob
There are two main differences between iteration and recursion. The first one concerns the number of cycles needed to complete the operation; the second concerns the alignment of the navigation and operation segments of the algorithm. With an iterative approach, the maximum number of cycles has to be known before the iteration starts, and the operational and navigational parts of the algorithm are aligned (navigation and operation are at the same level of the iteration). With a recursive approach, the maximum number of cycles need not be known prior to launching and depends on the stop criteria of the navigation part of the algorithm. Also, the navigation and operation parts of the algorithm are not aligned, because the operation is performed after navigation finishes (while the algorithm is "returning" from the recursion).
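A small C sketch of the alignment point above: the recursive version navigates to the end first and performs the operation (the print) only while returning, whereas the loop performs navigation and operation at the same level; the example is my own, chosen for brevity:

#include <stdio.h>

/* Recursion: navigate first (the call), operate on the way back (the print). */
void print_reversed_rec(const int *a, int n) {
    if (n == 0) return;                        /* stop criterion of the navigation */
    print_reversed_rec(a + 1, n - 1);          /* navigation */
    printf("%d ", a[0]);                       /* operation, executed while returning */
}

/* Iteration: navigation (the index) and operation sit at the same level. */
void print_reversed_iter(const int *a, int n) {
    for (int i = n - 1; i >= 0; i--)           /* the trip count must be known here */
        printf("%d ", a[i]);
}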