Signal Processing As Computation

Some folks wondering about ComputationAsSignalProcessing are examining some deep theoretical questions. What happens if we turn the argument around and approach it from a practical standpoint?

Consider a simple convex lens (as you might have in a domestic magnifier). This is a passive signal processing element. It has a region of space via which it accepts an input (in the form of electromagnetic waves with phase, frequency and magnitude varying in space and time), and it emits output in another region (in the same form).

Suppose that the light entering the input region encodes some data: the intensity of the light at each small patch in the input region is made proportional to a number. For the moment, imagine that these intensities do not vary in time. Then the spatial variation of the intensity of the light in each small patch in the output region encodes the 2-d FourierTransform of the data encoded in the input region.
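
As a sketch of the digital counterpart (the array names and sizes here are mine, chosen for illustration): treat the spatial intensity pattern in the input region as a 2-d array; its 2-d discrete Fourier transform then plays the role of the pattern in the output region, bearing in mind that a physical detector would record only the intensity (squared magnitude) of the output.

  import numpy as np

  input_plane = np.random.rand(512, 512)                    # intensities encoding the data
  output_field = np.fft.fftshift(np.fft.fft2(input_plane))  # role of the light in the output region
  output_intensity = np.abs(output_field) ** 2              # what a detector would actually record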

The size of the patches is bounded below, and hence the information density of the input bounded above, by the diffraction limit of the lens, unless the light you use is in a frequency band where negative-refractive-index materials are available. The same limit caps the maximum spatial frequency that can contribute to the FT. For a reasonably well-made lens these patches can be very small indeed. For a well-made multi-element lens they can be made (literally) microscopic, and a huge amount of data can be crammed through the lens.

The time taken to produce the FT is O(1) in any parameter of the size or complexity of the input you care to mention. Not only is the time taken constant, it is very small. In fact, it is the transit time of the light between the input and output regions. For a lens with a focal length of 1 metre, this will be ever so slightly longer than 2/c seconds, call it 7 nanoseconds. The machine I'm using to type this is respectably fast as desktop machines go, with a processor clock period of around half a nanosecond, so we can see that our lens has a bogomips count that's through the roof--so long as we only want to find 2-d FTs.
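
Spelling the comparison out (the half-nanosecond clock period is the figure quoted above; the rest follows from it):

  c = 299_792_458.0        # speed of light, m/s
  transit_time = 2.0 / c   # ~6.7e-9 s: the "call it 7 nanoseconds" above
  clock_period = 0.5e-9    # the desktop machine's ~half-nanosecond clock

  print(transit_time * 1e9)           # ~6.7 ns for a complete 2-d FT
  print(transit_time / clock_period)  # ~13 clock cycles of the desktop machine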

Could the operation of the lens in this case be simulated on a computer? Certainly, but only in a much longer period of time. Then again, maybe not so much longer as to be useless, depending on what we want the FT for. It's possible to search images for certain features very quickly by making a mask of the FT of the feature, viewing the FT of each image through that mask, and seeing how much gets blocked. It's also possible to search images for certain features using algorithms running on general-purpose computers (which might do something similar internally) quickly enough for many practical purposes, so the lens doesn't have an outright advantage in many practical regimes. But still, all that "processing" "power" in so simple a device (which is also very cheap to run) has a certain appeal. And then there is the idea that a big enough quantitative change is a qualitative change.
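
One common digital analogue of that mask trick is matched filtering via the convolution theorem; the sketch below (the function and array names are mine) cross-correlates an image with a feature by multiplying their FTs, which is roughly the "view one FT through a mask made from the other" operation done in software.

  import numpy as np

  def correlate_via_ft(image, feature):
      # Multiply the image's FT by the conjugate of the feature's FT and invert:
      # the software counterpart of masking one FT with the other.
      f_image = np.fft.fft2(image)
      f_feature = np.fft.fft2(feature, s=image.shape)   # zero-pad the feature to image size
      return np.real(np.fft.ifft2(f_image * np.conj(f_feature)))

  # Peaks in the returned correlation surface mark candidate locations of the feature.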

Could the computational process that is required to execute that simulation be encoded in a TuringMachine? Certainly, although execution would likely take a much, much longer time (probably too long to be useful).

For a while there was a lot of research interest in optical computing. Not the chicken-hearted stuff that's coming to commercialisation just now with optical interconnects between conventional digital processors, but real optical processing. For instance, an opto-electronic component (a very simple analogue circuit with some optically active elements in it) that can multiply a vector of values (in principle, of arbitrary length) by a matrix, in that same time-of-flight-plus-a-bit timescale. Optical processing elements can be SIMD on a scale difficult to grasp.
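
Written digitally, what that component computes is just a matrix-vector product; the point of the optical version is that the whole product happens at once, in roughly the time of flight. (The sizes below are arbitrary, for illustration.)

  import numpy as np

  M = np.random.rand(1024, 1024)   # the fixed matrix, encoded in the optically active elements
  x = np.random.rand(1024)         # the input vector, encoded in light intensities
  y = M @ x                        # the whole product, which the optical part does "all at once"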

Now imagine that the input to your optical processor varies in time, and you need to see the result change in real time too. Depending on the data volumes involved, that could perhaps also be done with a program on a regular computer. After all, a modern-day oscilloscope or frequency analyser is typically no longer a complicated collection of analogue electronics; rather, it has a lot of software operating on the signal in the digital domain on general-purpose hardware (plus some dedicated DSP hardware). Or consider the Reason music product, which is capable of very faithful emulation of analogue synthesis in real time on very modest hardware.
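
As a sketch of that "signal processing in software" point (the sample rate, frame size and test signal are invented for illustration), a few lines of code give a crude running spectrum of a time-varying signal, which is the heart of what a digital frequency analyser does:

  import numpy as np

  fs = 48_000                                             # sample rate, Hz
  t = np.arange(0, 1.0, 1.0 / fs)
  signal = np.sin(2 * np.pi * 440 * (1 + 0.5 * t) * t)    # a swept tone

  frame, hop = 1024, 256
  frames = np.array([signal[i:i + frame] for i in range(0, len(signal) - frame, hop)])
  spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
  # Each row of 'spectra' is the spectrum of one time slice: a crude spectrogram.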

And now let's go back to our lens and have the input change in time. Can this system be modelled in a general purpose digital computer of the kind we have today? Certainly, within constraints on timeliness that might not be relevant for the problem in hand. Can the process be encoded in a Turing Machine? No. Because, just as the computer has an external process monkeying around with its bit patterns, some external system would have to monkey around with what's on the TM's tape, and that takes us out of the Turing Machine domain. So, can signal processors do things that TMs can't? Trivially, yes. But then so can the computer you are reading this on do things that the TM (which, after all, is only a mathematical structure) cannot encode.

So why is that interesting? Because the processor and memory of the machine you are reading this on are built as a, what, a homage, a manifestation, a quotation, of the Turing Machine. And so folks tend to assume that the limits of the one are the limits of the other, which might be true, but not necessarily. So maybe we aren't using our regular hardware in ways that expose its full potential. Probably we are, though.

The day may well come when you can buy stock hardware on which you can, say, do cunning transformations on a 50 frame/s stream of images scanned from 10x8" negatives at 5000 DPI, with 1024 bits each of R, G, B and alpha, in real time. Or maybe not. And if you did have that hardware, maybe its raw power would let you do things that we currently consider impossible. Maybe the problem size at which NP-complete problems stop being merely irksome and become intractable would be large enough for us not to care.
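
For what it's worth, those figures imply a startling data rate. Reading them as 10x8 inch frames at 5000 DPI, four channels of 1024 bits each, 50 frames per second, the back-of-envelope arithmetic goes like this:

  pixels_per_frame = (10 * 5000) * (8 * 5000)           # 2e9 pixels per frame
  bytes_per_pixel = 4 * 1024 // 8                       # R, G, B, alpha at 1024 bits each = 512 bytes
  bytes_per_frame = pixels_per_frame * bytes_per_pixel  # ~1 TB per frame
  rate = bytes_per_frame * 50                           # ~51 TB/s sustained

  print(bytes_per_frame / 1e12, rate / 1e12)            # roughly 1.0 TB per frame, 51 TB/s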

And maybe the quantitative advantage that a notional (or is it?) "general purpose signal processing computer" might have would be enough to move us into a qualitatively different computational domain. And then again, maybe such a machine is something so totally other than a pseudo-TM machine like the one you're using right now that it doesn't even need to be particularly quick to be capable of things that we currently struggle to imagine ways to program.

One question, then, is how best to test some of the hypotheses sketched above. Wrangling over the exact semantics of halting in a TM might be one way to do it...

