Homeostatic Computing

From NonTuringComputing:


DickGabriel and some others and I sat around a table one afternoon in 2001 conjuring up different models of computation. The one that I'm interested in is some sort of massively parallel device in which it is guaranteed that some number of elements will fail... the point being that the designer of the machine can't ask for perfect replicability, which is the goal of standard computer design. We looked at the cell as providing a possible model, and asked how it deals with this issue. Someone at the table mentioned the word "homeostasis", the seeking after a norm. He said that the processes inside the cell have multiple homeostatic features, so that you can perturb the system in many different ways and the elements will come back to "normal" computation. I like this idea. Imagine gazillions of elements prepared with multiple homeostatic processes, and let them loose. There will be a computation, and it will be percentage-wise correct, but not absolutely correct, and generally right and generally operational, judging from mass results and ignoring small deviations... I think (A) this is NonTuringComputing and (B) an interesting approach worth developing. This sort of system is suitable when there is no need for guaranteed exact answers and the problem space is very large. I think there are a large number of such problems.
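
A toy sketch of what such a machine might look like in code (everything here, the element count, failure rate and correction rule, is invented purely for illustration): a large pool of elements, a guaranteed fraction of which misbehave on every step, each one relaxing toward the group's norm. The aggregate answer comes out roughly right, never exactly right.

 import random

 def homeostatic_average(true_value=42.0, n_elements=10000, steps=50,
                         failure_rate=0.05, correction=0.5):
     # Each element starts with a noisy copy of the value it is supposed to hold.
     elements = [true_value + random.gauss(0, 1) for _ in range(n_elements)]
     for _ in range(steps):
         consensus = sum(elements) / n_elements       # the "norm" being sought
         for i in range(n_elements):
             if random.random() < failure_rate:       # guaranteed failures: some elements go badly wrong
                 elements[i] += random.gauss(0, 20)
             # homeostatic correction: every element relaxes toward the norm
             elements[i] += correction * (consensus - elements[i])
     return sum(elements) / n_elements

 print(homeostatic_average())   # close to 42.0, but essentially never exactly 42.0

Raising the failure rate makes the answer drift further from 42, but it degrades gradually rather than failing outright, which seems to be the point of the exercise.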

I wasn't sure whether to list this under (Possible but not natural on a Turing machine) or (Different computational model than Turing's), so I put it here for now. I think it is properly the latter. --AlistairCockburn


Is "guaranteed" failure necessary? Is the sceme you describe significantly different from what IBM's new servers do?

Good question. I don't know enough about either cell biology or IBM servers to answer correctly (but I'll answer anyway :-)

I'm thinking that the people creating server farms know that some unit will fail. They plan on some way to detect that the unit has failed, take it offline, and later reintroduce it or a replacement.

I'm thinking that the cell doesn't have anything akin to people who know that some unit will fail, that the cell doesn't have the notion of a unit (although we do), nor of a unit failing, nor of taking it offline. Therefore, the computation runs in the face of units behaving differently from other units.

I do understand that we have white blood cells and T cells that attack other cells they think are "failing" and work to take them "offline". I don't know if the same holds true within the cell. --AlistairCockburn

Indeed it does. There is a complex system of hormones that function as dead-man switches for a cell to take itself off-line (apoptosis) when it no longer functions as it should. The only difference between biology (cells) and human artifacts (server farms) is that we aim for perfection even with imperfect materials.

OK. Just to continue: it sounds like those hormones are sent by the body to a cell to have it shut itself off. Looking inside the cell, do the peer committees of cell components (ribosomes, etc) have ways to shut each other down when some subset starts "misbehaving"? (Actually, if so, then I guess that would just be another of the cell's homeostatic devices!). Thanks...

They're dead-man switches. The hormones are sent to keep the cell functioning and if it stops receiving them (eg. because it's turned cancerous and moved away from its assigned chemical environment) then it kills itself. Hmmm, none of this would be within the cell the way you've defined it.
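
In server-farm terms that is essentially a watchdog, or heartbeat, pattern. A minimal sketch (the class and parameter names are made up here): a unit takes itself off-line when the keep-alive signals stop arriving, much as a cell deprived of its survival hormones triggers apoptosis.

 import time

 class DeadManSwitch:
     """A unit that shuts itself down unless keep-alive signals keep arriving."""
     def __init__(self, timeout_seconds=5.0):
         self.timeout = timeout_seconds
         self.last_signal = time.monotonic()
         self.alive = True

     def keep_alive(self):
         # Analogue of the survival hormone arriving from the surrounding tissue.
         self.last_signal = time.monotonic()

     def check(self):
         # Analogue of apoptosis: no signal for too long, so take yourself off-line.
         if time.monotonic() - self.last_signal > self.timeout:
             self.alive = False
         return self.alive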

Right. The body is a "system" composed of "units". In the "system design" of the body is a control mechanism to take misbehaving units offline (I didn't know that, so thanks for the instruction.) That isn't a bad idea, but it may not provide the basis for homeostatic computing (although it does seem to match the server farm architecture pretty well).

My question for homeostatic computing is to enquire about the internals of a cell, where to my limited knowledge there seems to be a lot less "system design" present. The chemicals are milling around - not aimlessly, but without central control. Molecules bump into molecules, and do natural molecular things to each other. There are N zillion molecules milling around... Rumor reaches me that the "system design" that is there is that lots of the molecules are reacting to other ones to keep some homeostatic parameters within bounds. I'm curious about using such a computational model. --AlistairCockburn
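
For what it's worth, here is a deliberately crude sketch of that sort of regulation without central control (all numbers are hypothetical, and the setpoint stands in for each molecule's own reaction threshold, not a central instruction): each "molecule" reacts only to a noisy local sensing of some shared quantity, nudging it slightly up or down, and the quantity nevertheless stays within bounds.

 import random

 def regulate(setpoint=100.0, n_molecules=1000, steps=500):
     x = 50.0   # some shared quantity, e.g. the concentration of a metabolite
     for _ in range(steps):
         for _ in range(n_molecules):
             # Each "molecule" senses the current level only noisily and locally,
             # then produces or consumes a tiny amount. There is no central controller.
             sensed = x + random.gauss(0, 10)
             if sensed < setpoint:
                 x += 0.005    # produce a little
             else:
                 x -= 0.005    # consume a little
     return x

 print(regulate())   # hovers near 100, held there by nothing but many noisy local nudges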

If the number of molecules bumping around is finite--even if it is some large number in the zillions--then this definitely sounds TuringEquivalent (without a more rigorous specification of the thing one cannot say for sure). Given an infinite number of molecules, I dunno...


I've also been thinking in these terms lately - building computers around a "probability of failure" rather than some "dream of perfection" would be an interesting concept and would potentially make it possible to make much better use of resources. Just think about the number of chips being thrown away because of failing tests - if the testing was used to set some "chip reliability rating" instead, you could replace one expensive 99-percent-correct CPU with four 97%-reliable CPUs that might cost a quarter of the price.
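
The arithmetic seems to back this up, at least under the idealised assumptions of independent failures and a simple majority vote:

 from math import comb

 # Chance that at least 3 of 4 independent, 97%-reliable units give the right answer:
 p = 0.97
 print(sum(comb(4, k) * p**k * (1 - p)**(4 - k) for k in range(3, 5)))   # ~0.9948

Three-out-of-four agreement among the cheap units already beats the single 99% unit, provided their failures really are independent.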

Well, ignoring the economic aspects of the concept, you'd still need to basically replace all parts of the current TuringMachine model of computing with some ProbabilisticTuringMachine?. I don't believe solid research has yet been done in this area - if this is to solve the real-world problems being solved with current computers, you need a way to increase the correctness of the answers. For example, combining potentially faulty answers straightforwardly in a standard computer (i.e. tracking some reliability value for all bits in the computer) always leads to the reliability value approaching zero. Also, how can the reliability of other hardware in this ProbabilisticComputer? (memory, expansion cards, ...) be tracked?
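
To make the "approaches zero" point concrete, here is one naive bookkeeping rule (my assumption of what "straightforwardly" might mean): each operation's output is only as reliable as the product of its inputs' reliabilities.

 def chained_reliability(fresh_input=0.999, operations=10000):
     # Naive bookkeeping: each operation combines the running result with one
     # fresh 99.9%-reliable input, and the output's reliability is their product.
     r = 1.0
     for _ in range(operations):
         r *= fresh_input
     return r

 print(chained_reliability())   # ~4.5e-5 after 10,000 operations, still heading toward zero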

The principal realisation is that all (physical, man-made) computing machines have degrees of error that should be accounted for. It is quite possible that a TuringMachine is no longer a good model of a computer (although it is very nice for absolutely correct computers).

Attempt at defining a ProbabilisticTuringMachine? based on the formal definition of TuringMachine:

A TuringMachine is a quintuple M = (Q, Sigma, Tau, Epsilon, q0), where Q is the finite set of states, Sigma is the input alphabet, Tau is the tape alphabet, Epsilon is the transition function, and q0 is the initial state.

Now, define the ProbabilisticTuringMachine? as the triple M_p = (M, Epsilon_p, q0_p), where M is the intended TuringMachine, Epsilon_p maps the correctness of each input symbol/state to the correctness of the corresponding output, and q0_p is the initial state together with its initial correctness.

As you see, the basic idea is to add a probability to all states produced/read by a TuringMachine. The TuringMachines would be a subset of the ProbabilisticTuringMachine?s, and if Epsilon_p is able to exactly map input correctnesses into output correctnesses, the PTM can be implemented as a TuringMachine. The thing is, in an actual implementation of the ProbabilisticTuringMachine? you'd map the correctnesses to some physical measurement or hard-wired reliability rating of your hardware - and herein lies the innovation.
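
A rough simulation sketch of that idea (the (symbol, correctness) pairing below is my own guess at a representation, not an established construction): every tape cell carries a symbol together with a correctness value, and each transition rewrites the symbol while an Epsilon_p-style rule maps the cell's input correctness to an output correctness.

 # Tape cells are (symbol, correctness) pairs. delta maps (state, symbol) to
 # (new_state, new_symbol, move); correctness_map plays the role of Epsilon_p.
 def ptm_step(state, tape, head, delta, correctness_map):
     symbol, c_in = tape[head]
     new_state, new_symbol, move = delta[(state, symbol)]
     tape[head] = (new_symbol, correctness_map(c_in))
     return new_state, head + (1 if move == 'R' else -1)

 # Example: a machine that flips bits and, say, loses 1% of its confidence per write.
 delta = {('q0', 0): ('q0', 1, 'R'), ('q0', 1): ('q0', 0, 'R')}
 tape = [(0, 0.99), (1, 0.98), (0, 0.97)]
 state, head = 'q0', 0
 for _ in range(len(tape)):
     state, head = ptm_step(state, tape, head, delta, lambda c: c * 0.99)
 print(tape)   # symbols flipped, correctness values degraded

In a real implementation the correctness_map would be driven by the physical measurement or hard-wired reliability rating mentioned above.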

It might also be interesting to define "intentional equivalency" (for lack of a better term, sigh) of ProbabilisticTuringMachine?s: Two PTM's are "intentionally equivalent" iff their respective M's (desired behaviour) are equivalent. In other words: the difference between two intentionally equivalent PTM's lies in the mapping from input correctnesses to output correctnesses.

By combining several (intentionally equivalent, but different) PTM's, it should be possible to obtain a resulting state with a greater correctness. Left as an exercise for the reader ;-)
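
One way the exercise might go, assuming independent errors and a roughly common per-machine correctness (both of which are assumptions): run several intentionally equivalent PTM's, take the majority symbol, and derive the combined correctness from the chance that the majority is right.

 from math import comb
 from collections import Counter

 def combine(answers):
     """Majority-vote over (symbol, correctness) answers from intentionally equivalent PTM's."""
     winner, _ = Counter(s for s, _ in answers).most_common(1)[0]
     p = sum(c for _, c in answers) / len(answers)   # treat as a common per-machine correctness
     n = len(answers)
     # Correctness of the combined answer: probability that a strict majority is right.
     combined = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n // 2 + 1, n + 1))
     return winner, combined

 print(combine([(1, 0.97), (1, 0.97), (0, 0.97), (1, 0.97), (1, 0.97)]))
 # (1, ~0.9997): five 97%-correct machines recover a considerably more correct result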

--SimonBrenner

