Recursion Vs Loop

All problems that are solvable with recursion are solvable with loops, and vice versa.


Is this statement right or proven at all?

If you maintain your own stack, it would be correct. Otherwise recursion can do things loops can't, like walk a tree.

Loops without recursion can definitely simulate recursion if given a stack, but what about recursion: can it solve whatever problems loops can solve?

Yes. Recursion to implement loops is a common idiom in a FunctionalProgrammingLanguage.

Then are pure FunctionalProgrammingLanguages (languages without assignment) as algorithmically expressive as ImperativeProgrammingLanguages? (ImperativeProgrammingLanguages could simulate FunctionalProgrammingLanguages)

Yes. They are both TuringComplete. For example, many interpreters for FunctionalProgrammingLanguages are written in CeeLanguage.

This is closely connected to the ChurchTuringThesis. Church used a "functional" model of computation (the LambdaCalculus) and Turing used an imperative model (the TuringMachine). The two models were proved equivalent in power; the ChurchTuringThesis is the further claim that they capture everything that is effectively computable.


How about blowing the stack? This is a problem with recursion that does not affect loops. My point is that different solutions are different, and subtle differences can be catastrophic. If you don't know how something is to be used, it is dangerous to imply equivalence between different approaches.

In languages where recursion is used instead of looping, TailRecursion is commonly optimized out during compilation -- so no additional stack frames are used, and it won't blow the stack.

E.g. the following SchemeLanguage code for, say, an HTTP server will not exhaust the stack. This is guaranteed by the language standard (see ProperTailCall).

 (define (server-loop)
   (let ((request (get-request)))
     (handle-request request))
   (server-loop))
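
For comparison, a proper tail call behaves like a plain loop. Here is a rough C# sketch of what the optimized server-loop amounts to (GetRequest and HandleRequest are hypothetical stand-ins for the Scheme procedures above):

 private void ServerLoop()
 {
    while (true)
    {
       // Each tail call reuses the current stack frame, which is
       // exactly what iterating this loop does.
       var request = GetRequest();     // hypothetical helper
       HandleRequest(request);         // hypothetical helper
    }
 }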

The cases where this is not possible are exactly the cases where the stack is used for real storage - things like tree walks or QuickSort. A loop version of QuickSort must simulate a stack manually.
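
For instance, here is a rough C# sketch (not from the original discussion) of QuickSort with a manual stack, using a simple Lomuto partition:

 using System.Collections.Generic;
 static void QuickSort(int[] a)
 {
    // Each pair of bounds pushed here stands in for a recursive call.
    var stack = new Stack<(int lo, int hi)>();
    stack.Push((0, a.Length - 1));
    while (stack.Count > 0)
    {
       var (lo, hi) = stack.Pop();
       if (lo >= hi) continue;
       int pivot = a[hi], i = lo;      // Lomuto partition around a[hi]
       for (int j = lo; j < hi; j++)
          if (a[j] < pivot) { (a[i], a[j]) = (a[j], a[i]); i++; }
       (a[i], a[hi]) = (a[hi], a[i]);
       stack.Push((lo, i - 1));        // the two recursive calls become pushes
       stack.Push((i + 1, hi));
    }
 }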

But using an explicit stack or data structure (see ExternalizeTheStack) in general (not necessarily for QuickSort) allows interesting inspections and alterations to take place. One has more control over it. For example, a database table can spill to disk if it grows too large. Further, it makes it easier to stop the process in the middle and come back later to finish it. Generally, markers are used to flag paths or nodes that are completed, so that one can continue working on the non-completed records. (Sometimes there are two marks: one to indicate that we processed something, and another to indicate that its links have been logged.) This closely mimics how most humans would navigate a giant maze: mark where you have already been with ribbons, spray-cans, or bread crumbs. Efficiencies are then tacked on by finding ways to avoid retesting finished items, or to reduce the cost of retesting them. -- top

You can use continuations to do that.

[True, but credit where credit is due, Top is correct that it "allows interesting inspections and alterations to take place"]


If anyone is up to a challenge, I've spent many fruitless hours noodling for a way to do AckermannFunction without recursion (or an indefinitely expandable stack). I'm now working on a proof that it's impossible, but that seems just as difficult. -- MarcThibault

Study the SicpIterationExercise and then use the same technique (called memoization http://en.wikipedia.org/wiki/Memoization).

[First off, memoization does not eliminate recursion by itself, it just eliminates redundant recursion.]

   data_points_to_be_calculated = { initial_point(s) }
   known_set = <empty>
   while {there are data points to calculate} do
      if (can calculate more data points) then
         { add the newly calculated point to the known_set }
      else
         { add more missing points to the set to be calculated }
      end if
   done
[But also, memoization doesn't help in any real sense here; try it. Wherever the fully recursive version "fails" (you get tired of waiting for it), so too will the memoized version.]
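
For concreteness, a memoized AckermannFunction might look like this rough C# sketch (the names are illustrative). Note that it still recurses; the cache only short-circuits values that were already computed:

 using System.Collections.Generic;
 static readonly Dictionary<(ulong, ulong), ulong> cache = new Dictionary<(ulong, ulong), ulong>();
 static ulong Ack(ulong m, ulong n)
 {
    if (cache.TryGetValue((m, n), out ulong hit))
       return hit;                     // reuse a previously computed value
    ulong result = m == 0 ? n + 1
                 : n == 0 ? Ack(m - 1, 1)
                 : Ack(m - 1, Ack(m, n - 1));
    cache[(m, n)] = result;
    return result;
 }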

[Mathematical induction (by a human) can be used to reduce any single level of Ackermann recursion into a closed form, which vastly speeds up that level, however level+1 needs a brand new proof and a new closed form. So basically, it's still slow.]

[Just as an exercise, however, Ackermann can be implemented nonrecursively using PowerLoops. -- DougMerritt]
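
For illustration, AckermannFunction can also be written as a loop over a manual, growable stack (this externalizes the stack rather than eliminating it, so it does not answer the original challenge, and it is not the PowerLoops approach):

 using System.Collections.Generic;
 static ulong Ackermann(ulong m, ulong n)
 {
    var stack = new Stack<ulong>();    // pending outer m values
    stack.Push(m);
    while (stack.Count > 0)
    {
       m = stack.Pop();
       if (m == 0)
          n = n + 1;                   // A(0, n) = n + 1
       else if (n == 0)
       {
          stack.Push(m - 1);           // A(m, 0) = A(m - 1, 1)
          n = 1;
       }
       else
       {
          stack.Push(m - 1);           // A(m, n) = A(m - 1, A(m, n - 1)),
          stack.Push(m);               // ...so evaluate A(m, n - 1) first
          n = n - 1;
       }
    }
    return n;
 }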


On Memoization:

I consider it a CodeSmell since it forces you to carry state between iterations of the loop. In a loop that by nature has to carry state, then it's no big deal. But in a loop that otherwise wouldn't carry state, it just seems ugly. -- MichaelSparks

Any looping construct carries state - at the very least the value of a counter, or whatever state is referred to in the exit condition. That's the nature of imperative loops as opposed to functional recursion. If we follow this train of thought, we declare imperative programming a CodeSmell as opposed to functional programming. Do I sense a HaskellWeenie? waiting to come out? -- CostinCozianu

Yes, you sense correctly. I think there are lessons to be learned from all kinds of programming, including functional. You are right that all imperative loops carry state (aside from the infinite ones, naturally), but I think a reasonable distinction can be made between the two cases I wrote about. There are some loops that just have counters so that they can dereference different sets of data on each iteration. I try to think of those as big SIMD statements. The instructions and outside state stay the same for each iteration - only the input changes. Loops in general can take on lots of nasty forms if the programmer wants them to. Just because the nature of imperative loops allows it, doesn't mean we should all do it. A well-accepted principle of structured programming is to localize state as much as possible (i.e. no globals). I think that should apply not only across function calls, but across iterations of loops. That seems reasonable enough to me. -- MichaelSparks

Ok, so I wrote the IteratedAckermannExample. As I stated on the page, I'm very willing to listen to criticisms and improvements. So how can it be restructured more elegantly? By the way, I also have a fully tail-recursive Scheme version, but that is not an improvement: only the form looks different; the essence is the same.

Also I'd like you to ponder an idea I found in one of EwDijkstra's writings (I don't have a quick reference that I can give you, so I paraphrase from memory). He says that iteration and transitive closure are more fundamental than recursion, and he goes on to say that in many examples published by recursion advocates, recursion looks like trying to crack a nut with a sledgehammer. -- CostinCozianu

I'll take a look at IteratedAckermannExample. I doubt I'll have any breakthrough solutions to present. I readily admit that some loops work out better if you carry state across iterations. -- MichaelSparks

It was a 30 min exercise at the request of somebody who raised this problem. I always used this type of loop for memoization strategies before I even knew it was called memoization. I'm curious to see if there's a "better" way, but I find it curious that you find a CodeSmell in what I found the natural way to solve this type of problem. -- Costin

I see only one tiny thing in this discussion that even argues vaguely in the direction of CodeSmell: that unnecessary state is bad. But that's vacuous; it either means, as already pointed out, "I'm a HaskellWeenie? / SideEffectsFreeWeenie? / ReferentialTransparencyWeenie?", or it's inherently redundant/circular, since the word "unnecessary" assumes the conclusion. -- DougMerritt

I took a crack at seeing if I could simplify Costin's IteratedAckermannExample, btw, and did not in fact find a simpler approach in the time I allotted to it (somewhat more than 30 minutes). I would find it interesting if it turns out there's reason to think that there is/isn't a smaller/simpler approach, since after all he said it was just a quick "finger exercise". I kept getting distracted by wanting to add HelperFunctions that always turned out to need to be recursive. :-)

I've never played with implementing the InverseAckermannFunction?, anyone know if it's interesting to fiddle with in a different/same way? -- DougMerritt


For a simple example, here is a C# implementation of the DOS 'dir /s' command:

 using System.IO;
 private void recursiveDir(string currentDir)
 {
    foreach (string subDir in Directory.GetDirectories(currentDir))
       recursiveDir(subDir);
    foreach (string file in Directory.GetFiles(currentDir))
       Console.WriteLine(file);
 }
and for iterative:

 private void iterativeDir(string startingDir)
 {
    Stack stackFrame = new Stack();
    stackFrame.Push(startingDir);
    while (stackFrame.Count > 0)
    {
       string currentDir = (string) stackFrame.Pop();
       foreach (string subDir in Directory.GetDirectories(currentDir))
          stackFrame.Push(subDir);
       foreach (string file in Directory.GetFiles(currentDir))
          Console.WriteLine(file);
    }
 }
You can clearly see the skeleton of the recursive method inside the iterative method. It simply wraps a stack around it instead of relying on the language to maintain the stack frames. You can also see how much more 'elegant' the recursive version is, and how it requires fewer lines of code.


Far from being elegant, where I work we consider recursion to be the 'goto' of the 21st century. The first use of recursion by a developer gets a verbal warning, second use a written reprimand, and we fire on the third use.

You should encourage switching to Erlang. ^_^ All I can say is that I'd be using the stack class quite often in such an environment. That said, we are encouraged to avoid recursion in our current programming environment, where the language-provided stack frames are somewhat limited and recursion can be very deep (the C++ standard does not guarantee TailCallOptimization, and Win32 threads get only a few megabytes of stack space), and so I do end up using the 'stack' class quite often for performing visitors on our object database.

[To the "[f]ar from being elegant" poster: I'd be curious to know why your workplace has banned recursion. I can guess, but that would just be speculation.]


Somewhere I placed some pseudocode for table-based traversal as an alternative to recursion, but I cannot re-find it.

Basically you have a table of "nodes" with a "visited" flag. You loop through the table (or select up to N non-visited nodes) until all the visit flags are set via processing the nodes. If you encounter a branch (folder), you read the branch and load the first level of its nodes and/or any branch references into the table. Later loops will take care of the deeper folders on that branch. Changing the sorting criteria can affect the nature of the traversal pattern, but for large sets you probably want to consider sticking to the "native" order for efficiency. Don't forget to index the visit-flag column. Arguably it can be called a "processed" flag instead of a visit flag, but that depends on what you are doing. Sometimes processing is done after the traversal (cataloging), and sometimes during. A sketch is given below.

--top
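
A rough C# sketch of the table idea (an in-memory list stands in for the database table; the names are illustrative):

 using System;
 using System.Collections.Generic;
 using System.IO;
 using System.Linq;
 class Row
 {
    public string Path;
    public bool IsFolder;
    public bool Visited;               // the "visited" flag described above
 }
 static void TraverseTable(List<Row> table)
 {
    while (table.Any(r => !r.Visited))
    {
       // Process a batch of unvisited rows; a folder row appends new
       // rows, which later passes of the outer loop will pick up.
       foreach (Row row in table.Where(r => !r.Visited).ToList())
       {
          if (row.IsFolder)
             foreach (string child in Directory.GetFileSystemEntries(row.Path))
                table.Add(new Row { Path = child,
                                    IsFolder = Directory.Exists(child) });
          else
             Console.WriteLine(row.Path);
          row.Visited = true;          // mark the row as finished
       }
    }
 }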


See also LoopingConstructs

