More thoughts on Alan Turing:
I recently wrote a letter to Barry Cooper about the renowned mathematician and logician Alan Turing and the question of whether it was possible to do a computation that resulted in an increase in the information in a system instead of the usual decrease.  Upon further reflection, not to mention having read the book The Essential Turing (ed. Jack Copeland, Clarendon Press, Oxford, 2004), I am thinking that there might be a testable theory here.

My observation was that an error in a digital computer would cause the whole thing to crash.  Say a single bit was missing from the instructions.  From there on the reading frame would not match the code and the result would be gibberish.  There are indeed error-correcting algorithms, but a corrected error is no longer an error.  The computer’s final state is totally dependent on its initial state.
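
(A toy illustration of the reading-frame point, in a few lines of Python of my own devising, not anything from the letter: drop a single bit from an encoded message and every character after the error decodes as garbage.)

```python
# Toy illustration: one missing bit shifts the reading frame,
# and everything decoded after it is gibberish.
message = "ATTACK AT DAWN"
bits = "".join(f"{ord(c):08b}" for c in message)  # 8 bits per character

damaged = bits[:20] + bits[21:]  # delete a single bit

# Re-read in 8-bit frames: the first two characters survive,
# the rest do not.
decoded = "".join(chr(int(damaged[i:i + 8], 2))
                  for i in range(0, len(damaged) - 7, 8))
print(decoded)
```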

On the other hand, tiny variations in the behavior of a neuron in a human brain should be well tolerated.  I proposed the notion that the universe by its very nature – apparently expanding – must be constantly introducing new information to account for the location of everything on the larger stage.  The human brain might be able to tap into this, but a digital computer could not.

There is an objection, of course.  If the expansion of the universe provides information that a brain can access, that is not really a result of computation.  It would be like saying you have a perpetual motion machine if only you plug it into a live outlet.  It is only a notion about the universe as a whole. 

But you cannot have logic without a universe.  Take a simple rule of inference: “If A implies B, then not-B implies not-A.”  For instance, “if it is raining, things will get wet” means that “if things are not getting wet, it is not raining.”  It seems pretty straightforward.  But try it without a universe.  “If A…”  But wait.  That is a forbidden assumption.  There cannot be an A, because there is no universe.  So leave the universe behind and you lose your rules of inference.  There is no logic.  So you must always include, or at least imply, an axiom: “There is a universe.”
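
(For what it’s worth, the rule itself can be checked by brute force; a four-case truth table in Python settles it.  Of course, running the check presupposes a computer to run it on, which is rather the point.)

```python
from itertools import product

# Check that "A implies B" and "not-B implies not-A" agree on every
# assignment of truth values.  Classically, "X implies Y" is "(not X) or Y".
for a, b in product([False, True], repeat=2):
    implication = (not a) or b                 # A implies B
    contrapositive = (not (not b)) or (not a)  # not-B implies not-A
    assert implication == contrapositive
print("contraposition holds in all four cases")
```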

And what will that universe be like?  Well, Einstein’s Special Theory of Relativity has, so I am told, been taken back to first principles.  In form, any universe must have something of the sort.  There is the mind-boggling General Theory of Relativity, but you really don’t need it.  The special theory already equates mass and energy and puts a limit on the speed of light.  Black holes are in effect possible.  A static universe is not possible; it must be expanding or contracting.  Even Newton got that one.  So what does an expanding universe look like from the inside?  Here it is Wednesday.  We made a mistake on Monday, for which we are being punished.  But we can’t go back and change it, because the universe on Wednesday has sufficient information to describe Monday uniquely.  We will mess up again on Friday, but we can’t know that, because the universe does not have sufficient information to specify a unique Friday.  This is all familiar.

But suppose time is running Friday, Thursday, Wednesday and so forth.  And here it is Wednesday.  We are going to screw up again on Monday, and we know it, but it cannot be changed, because Wednesday uniquely describes Monday.  We get punished anyway.  And Friday is forgotten, because Wednesday does not have sufficient information to describe it.  So in either direction the universe looks exactly the same from the inside.  So once you have said “If A…” you have already acknowledged a universe that is expanding and is able to support rules of inference, as well as to provide that extra nudge of information, could you but tap into it.

At least so the theory goes.

So let’s design a computer that can access this information.  Let’s say we program it for pattern recognition.  (Dear Men in Black: Please do not spirit me away and wash my brain.  I don’t actually know what I am talking about here.  Yes, I know that pattern recognition is a highly sensitive topic.  In fact, I don’t actually know why I’m writing about this, except that it seems kind of cool.)

So our pattern-recognizing computer will be the digital analogue of a device that consists of a camera that looks at a black and white screen and records it as 0’s and 1’s on a thousand-pixel by thousand-pixel matrix.  Each pixel has a wire that sends a signal representing its state to a node just above it.  The node receives information from that pixel, from the nodes on each of its four sides, and from the node above it.  It performs some logical algorithm on that information and sends the result in five directions.  (The pixel only sends.)  Above these million nodes is another array of nodes.  Each node takes information from above and below and from four sides, but not from the adjacent nodes; it gets its lateral input from the next node but one.  Otherwise it is the same.  The third layer takes information from the nodes four away, as well as from above and below, and so on, the reach doubling with each layer, until it extends all the way across the array.  If a connection reaches the edge of the array, it resumes counting from the opposite edge.
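
Here is a rough sketch of the wiring in Python, on two readings of my own: that the lateral reach doubles with each layer, and that the wraparound at the edges makes the array a torus (numpy’s roll gives exactly that).  The “logical algorithm” is a placeholder majority vote, since the text leaves it open.

```python
import numpy as np

SIZE = 1000    # the thousand-by-thousand matrix
LAYERS = 10    # lateral reach 1, 2, 4, ... 512: spans the array by the top

def update(layers, pixels, rule):
    """One pass up the stack.  layers[k] is a SIZE x SIZE array of node
    states; layer k reads laterally from the nodes 2**k away, wrapping
    at the edges (np.roll resumes counting from the opposite edge)."""
    below = pixels
    new_layers = []
    for k, grid in enumerate(layers):
        r = 2 ** k
        above = layers[k + 1] if k + 1 < len(layers) else np.zeros_like(grid)
        north = np.roll(grid, r, axis=0)
        south = np.roll(grid, -r, axis=0)
        east = np.roll(grid, r, axis=1)
        west = np.roll(grid, -r, axis=1)
        new_layers.append(rule(below, above, north, south, east, west))
        below = new_layers[-1]
    return new_layers

def majority_rule(below, above, n, s, e, w):
    """A placeholder for 'some logical algorithm': majority vote."""
    return ((below + above + n + s + e + w) >= 3).astype(np.uint8)

pixels = np.zeros((SIZE, SIZE), dtype=np.uint8)
pixels[400:600, 400:600] = 1    # a black square on a white field
layers = [np.zeros((SIZE, SIZE), dtype=np.uint8) for _ in range(LAYERS)]
layers = update(layers, pixels, majority_rule)
```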

That would be ten million algorithms, which is a bit unwieldy, but live with it.

Now we show our camera a picture of a black square on a white field, and we see whether the top layer will put out the message “square,” meaning it recognizes it.  It probably won’t.  So now we introduce small random changes in the algorithms until we get our result.  Once we begin to get results, we reject any change that does less well, but continue trying little changes.  Eventually the machine can recognize the square.  Now we move the square slightly, change its size, change its location and so forth, until the device can recognize it in any guise.  Then we teach it about a circle.
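
What that training procedure amounts to is random-mutation hill climbing.  A minimal sketch, with score and mutate as hypothetical stand-ins for the parts the paragraph leaves open:

```python
import random

def hill_climb(algorithms, score, mutate, steps=100_000):
    """Keep a small random change only if it does at least as well.
    `algorithms` encodes the ten million node rules; `score` shows the
    machine the training pictures and returns how close the top layer
    comes to saying "square"."""
    best = score(algorithms)
    for _ in range(steps):
        candidate = mutate(algorithms, random.random())  # small random change
        s = score(candidate)
        if s >= best:          # reject any change that does less well
            algorithms, best = candidate, s
    return algorithms
```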

The heart of the mechanism is that “random change.”  It will actually be based on “pseudo-random numbers” that the computer keeps as a table.  So the behavior of the system is fixed from the get-go.  It cannot give us new information.
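
That point in miniature: seed Python’s generator twice with the same value and it replays the identical “random” sequence; a stored table is the same thing made explicit.

```python
import random

# Two generators seeded alike produce the identical sequence, so the
# whole future of a training run is fixed by the seed (or the table).
a, b = random.Random(42), random.Random(42)
assert [a.random() for _ in range(1000)] == [b.random() for _ in range(1000)]
```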

So we tap the universe for random numbers.  Say we take an array of tiny digital thermometers that pick up fluctuations – thermal events – so fine that they are seldom the same from one instant to the next.  Our array is big; in fact, there are ten million thermometers.  The change from moment to moment is the source of our random numbers.
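
A sketch of harvesting that array, with read_thermometers() as a hypothetical stand-in for the hardware (os.urandom plays its part here).  Hashing the moment-to-moment differences mixes the raw noise so that every byte value is equally likely:

```python
import hashlib
import os

N = 10_000_000  # ten million thermometers

def read_thermometers():
    """Hypothetical hardware call: one byte per thermometer.
    os.urandom stands in for the real array."""
    return os.urandom(N)

previous = read_thermometers()

def thermal_random_bytes():
    """Random bytes from the change between successive snapshots.
    Hashing the differences whitens the raw thermal noise."""
    global previous
    current = read_thermometers()
    delta = bytes(x ^ y for x, y in zip(current, previous))
    previous = current
    return hashlib.sha256(delta).digest()  # 32 well-mixed bytes
```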

Will this contraption work?  I doubt it, although parts of the brain are organized along somewhat similar lines.  The trick is this: do the truly random numbers improve the machine’s performance as fast as the pseudo-random numbers, or faster?  You make sure, of course, that the probability of any number turning up is the same in both the table and the thermometer array.
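
The comparison might be run like this: identical training runs, one drawing its changes from the seeded table and one from the thermometers, with the final scores of many runs compared afterward.  (fresh_algorithms, score, mutate and thermal_random_bytes are the hypothetical pieces from the sketches above.)

```python
import random

def thermal_float():
    """A float in [0, 1) from the thermometer array, so that its
    distribution matches the pseudo-random table's exactly."""
    return int.from_bytes(thermal_random_bytes()[:8], "big") / 2**64

def race(trials=50, steps=10_000):
    """Run the same hill climb with each source of randomness and
    collect the final scores for comparison."""
    table = random.Random(42)  # the fixed pseudo-random table
    results = {"pseudo": [], "thermal": []}
    for _ in range(trials):
        for name, draw in (("pseudo", table.random), ("thermal", thermal_float)):
            algorithms = fresh_algorithms()  # hypothetical: a random start
            best = score(algorithms)
            for _ in range(steps):
                candidate = mutate(algorithms, draw())
                s = score(candidate)
                if s >= best:
                    algorithms, best = candidate, s
            results[name].append(best)
    return results  # if "thermal" reliably climbs higher, that is the signal
```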

If, then, the real random numbers sampling the universe are better at “solving the problem” than the pseudo-random numbers, and if there is no other reason they should differ, then I think you would have reason to say that you had tapped the information field of the universe.

The bad part of this test is that there is no obvious way to call the thing a failure.  Maybe it was the wrong thermometer array.  Maybe they needed to be lined up like the sensors.  Maybe they needed some other arrangement.  It would be hard to be sure.  But at least the idea is not utterly beyond test.

