
Friday, 9 October 2015

Against the idea of a computable universe

The computable universe hypothesis states that every operation and phenomenon in the universe is computable. So if you can describe all the phenomena in a state s1 of some part of the universe (or of the whole universe), then you can compute the physical operations in any state si that is a successor of s1.
An equivalent hypothesis is that every set of physical operations can be simulated by a Turing machine or an automaton. This means that there is a Turing machine, or some other computational machine, that will accept as input the data describing these operations, output their result, and then halt. The two hypotheses are equivalent, since the simulation of every operation by a Turing machine implies the computability of the transition from state s1 to any state si.
These hypotheses also support the idea that the universe is deterministic: the transition from s1 to si is computable and therefore predictable, which makes it deterministic.
There are many papers and textbooks that propose theories based on these hypotheses; there are also many that oppose such ideas. My opinion is against the idea of a computable universe, for three reasons: randomness, incompleteness and uncomputability.

Randomness


The existence of randomness in physical processes remains a debate among scientists. Einstein rejected it, as he believed that the universe is deterministic, stating that "God does not play dice"; a phrase that marked the debate. The truth is that there are many indications that some physical operations must be random.
Stephen Hawking provides some very convincing arguments in one of his lectures. According to Hawking, the way the universe evolves does not seem to be deterministic; moreover, the existence of extreme objects in it, like black holes, together with the Uncertainty Principle, makes accurate computations and predictions impossible. Although in computer science we can only use pseudo-random operations to approximate true randomness, in other fields true randomness seems to be present. Quantum mechanics builds probabilistic models in order to study quantum phenomena that are believed to evolve truly randomly.
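To see the gap between pseudo-randomness and true randomness, consider a minimal Python sketch of a linear congruential generator, a classic pseudo-random scheme (the constants below are a common textbook choice). The point is that, given the seed, every "random" value is fully determined.

    # Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    # Given the seed, the whole sequence is determined -- nothing here is random.
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(seed=42)
    print([next(gen) for _ in range(3)])  # same seed -> same numbers, every run

A truly random physical process has no such seed to replay, which is exactly the property a deterministic computation cannot reproduce.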
 

Incompleteness


Any non-trivial system of logic is incomplete. This is a very simplified statement of the incompleteness theorem, stated and proven by Kurt Gödel, which is one of the most important theorems in science.
Any system, physical or not, that operates under certain laws may be described by mathematics, and more specifically by some system of mathematical logic using propositions and axioms. If the system of logic is not trivial (not very simple), then there are true statements in it that cannot be proven true; the system is incomplete. It follows that there are phenomena that cannot be computed by mathematics.
This lecture of Hawking approaches the incompleteness theorem from a physicist's perspective, reaching the same conclusions.
The issue we face is that in order to compute some mathematical object O we need to use or build some proper mathematical structure. But no matter what this structure is, it will always be incomplete, so a part of O will not be provable using our mathematics.
 

Uncomputability


The theoretical background of uncomputability in Computer Science is related to Gödel's incompleteness theorem and states that there are instances of computational problems that cannot be computed by a machine or a human. Computational procedures may be simulated by Turing Machines that perform them and then halt, so we usually say that a problem P is uncomputable if there is no Turing Machine that can compute all the instances of P and then halt. Equivalently, if you input some data into a Turing Machine, you may not be able to know whether it will halt or not.
The foundation for this was set by Alan Turing before the first general purpose computer was ever built.
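Turing's argument can be sketched in a few lines of Python. The function halts below is hypothetical (no such total, correct function can exist); assuming it did leads straight to a contradiction.

    def halts(f, x):
        # Hypothetical universal halting checker: assumed to always terminate
        # and return True iff f(x) halts. No real implementation can be both
        # total and correct -- that is the theorem.
        raise NotImplementedError

    def paradox(f):
        if halts(f, f):   # if f would halt when run on its own source...
            while True:   # ...loop forever
                pass
        # ...otherwise halt immediately

    # paradox(paradox) halts if and only if it does not halt -- a contradiction,
    # so halts() cannot actually be implemented.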
Its effect on programming is something that we experience every day: no matter how hard you have worked on a piece of software, you can never be sure that it will work as expected.
A parallel in physics that I have in mind is contained in the question
“How can you be sure that for operation H there is a Turing Machine that will halt on input of the description of H?”
The above arguments may seem weak, but there is a stronger one, which I discussed in a previous post.
Causation (or causality) refers to the cause-effect relation that may exist between two events or phenomena. The causal inference problem concerns the determination of causation between events.
In my previous post I presented a proof that causal inference is logically and computationally undecidable. Relations among phenomena and physical processes are important factors in the structure of the universe. The undecidability of causal inference implies that there are physical phenomena for which we cannot decide or compute whether they are related to each other or not. This makes a part of the universe uncomputable and unexplainable using logical arguments.

Friday, 3 April 2015

On the computability of causal inference

Causation (or causality) refers to the cause-effect relation that may exist between two events or phenomena. The causal inference problem concerns the determination of causation between events.
Let’s consider the case where there is an interaction between two events which is not affected by any other factor or event. The two events are observable and measurable and for each one we may record a set of data that describes it. Then the causal inference problem is equivalent to the existence of a computable relationship between the two data sets that describe the events.
As I show next, the general case of the causal inference problem, even with the above assumptions, is undecidable, and this has important implications in science. The undecidability of a problem means that there are some instances of it that cannot be proven true or false, while other instances may be decidable.

Let’s formulate the general case of the causal inference problem as a decision problem. Let A and B be two sets of data.

CAUSAL = {<A, B> | there is a computable function f with f(A) = B}
The formulation of CAUSAL is enough to describe the causal inference problem. If there is a cause-effect relationship between A and B, then there is a function f such that f(A) = B, and if CAUSAL were proven decidable then f would be computable in every case.
Let me restrict this discussion to data sets and phenomena whose measurements are as accurate as required. The meaning of this assumption is that the data of the problem are not affected by factors that would make their computation inaccurate, like the uncertainty principle. This also implies that the elements of both sets are computable. The earlier assumption that the two events interact and are not affected by any other factor simplifies the definition of the problem, as it implies that if there is a computable function f between A and B, it describes a cause-effect relationship and is not coincidental.
In the general case the problem is not trivial, as A and B may be the products of complex or even chaotic systems and natural processes. Here the term "trivial" is used in the sense of mathematical logic.
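To make the decision-problem reading concrete, here is the interface a decider for CAUSAL would have to expose; decides_causal is a hypothetical name, and by the theorem below no total and correct implementation of it can exist.

    def decides_causal(A, B):
        # Would return True iff there is a computable function f with f(A) = B.
        # A correct implementation would have to halt on EVERY pair of finite
        # data sets -- exactly the totality that the proofs below rule out.
        raise NotImplementedError  # cannot be filled in for the general case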

Theorem. The causal inference problem is logically and computationally undecidable.

Proof of logical undecidability. If the problem is decidable, then we may construct a system in which we may prove every instance of the problem. Using Gödel numbering and arithmetic, we may encode each element a and b of A and B in a formal system of logic P, so that for every sentence S of the form a -> b, either S or -S is provable, and eventually either A -> B or -(A -> B) will be provable. The problem is complex, so P will be sufficiently strong.
This leads to a contradiction: as proven by Kurt Gödel (Gödel, 1931), in such a system there are true sentences that are not provable. The problem is logically undecidable.
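As an aside, the Gödel numbering used above is itself perfectly computable; a minimal Python sketch using the standard prime-power encoding (an illustrative choice of mine) is below. The obstacle of incompleteness lies not in the encoding but in what the encoded system can prove.

    def nth_primes(k):
        # Return the first k primes by trial division (fine for a sketch).
        primes = []
        n = 2
        while len(primes) < k:
            if all(n % p for p in primes):
                primes.append(n)
            n += 1
        return primes

    def godel_number(symbol_codes):
        # Encode a sequence s_1, ..., s_k as p_1**s_1 * ... * p_k**s_k.
        g = 1
        for p, s in zip(nth_primes(len(symbol_codes)), symbol_codes):
            g *= p ** s
        return g

    print(godel_number([3, 1, 2]))  # 2**3 * 3**1 * 5**2 = 600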

Proof of computational undecidability. If CAUSAL is decidable, then there is a Turing Machine M which, on input of a string e that contains the descriptions of A and B, always halts and outputs "YES" or "NO".
Since the problem is assumed decidable and M is computable, we may construct a Turing Machine N that, on input of a string w containing the descriptions of M and e, verifies that M decides e; N accepts w in every case. There are no restrictions on the nature of A and B or on their encoding and representation in e, hence w can be any string. Also, there are no restrictions on the descriptions of M and N; they can be any Turing Machines.
As a result, the language A_TM = {<N, w> | N is a Turing Machine and N accepts w} is decidable. This is a contradiction, as it is known from the literature that A_TM is undecidable; see (Sipser, 2006, p. 179). Hence CAUSAL is also computationally undecidable.
Another way to reason about the computational undecidability of CAUSAL is to consider a Turing Machine N that simulates the phenomenon described by A. We input into N a string w that describes set A; N then produces a string that describes set B and halts. If CAUSAL were decidable, then N would be computable. The string that describes A may be any string, as there is no restriction on the event that A describes, so N halts on any given input. Also, N may be any Turing Machine.
Still, these assumptions lead to the same contradiction: if CAUSAL were computable, so would be the language A_TM.
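The reduction can also be phrased as a short sketch; decides_causal and encode_run are hypothetical stand-ins for the constructions in the proof, not real functions.

    # If a decider for CAUSAL existed, we could decide A_TM = {<N, w> | N accepts w}:
    # package N running on w as the "cause" data set and acceptance as the "effect".

    def decides_atm(N, w):
        A = encode_run(N, w)         # hypothetical: data set describing N's run on w
        B = ["accept"]               # data set describing the effect: acceptance
        return decides_causal(A, B)  # a computable f with f(A) = B  <=>  N accepts w

    # A_TM is undecidable, so decides_atm cannot exist -- hence neither can
    # decides_causal.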

Laplace’s demon

The undecidability of causal inference has an effect on the famous idea of Laplace's demon.

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

Pierre Simon Laplace, A Philosophical Essay on Probabilities, 1820
source: Wikipedia; also in (Laplace, 1951, p. 4)

The intellect described in the text was later named "Laplace's demon". There has been criticism against this idea, and some proofs against it have been published. The proofs presented above also stand against it: such an intellect or demon would have to be able to decide CAUSAL, which is undecidable.

The extent of undecidability

In every undecidable problem there are some instances that are decidable and some that are not. To put it another way, each undecidable problem is undecidable to some extent. Logic and the theory of computation have not been able to determine this extent for every problem. For some problems, like the halting problem, this extent seems large; only very simple instances of the halting problem seem computable.
In causal inference, things must be different. Science, and especially Artificial Intelligence, has managed to build successful methods for computing quite a lot of instances of the problem. What is important about these methods is that they may be applied to the analysis of real-world phenomena. In my opinion, some of the reasons for successful predictions in real-world phenomena are bounded inputs, relaxation, approximation and strong hypotheses.

Bounded inputs. Real-world phenomena usually don't take arbitrarily large or small values. The possible values of the parameters of a phenomenon are bounded. This reduces the range of the problem, making computations easier (see the sketch after this list).
Relaxation. Relaxing some parameters in the definition of a problem might lead to the computation of an easier problem. In some cases the conclusions from the easier problem are as valuable as the solution of the initial problem.
Approximation. Quite often the solutions of hard problems may be approximated, which is valuable when precision is not very important.
Strong hypotheses. Sometimes scientists have strong evidence and strong intuition about a certain universal truth that cannot be proven, or is at least very difficult to prove. In these cases researchers state it as a hypothesis, accept it as an unproven true fact, and work with it, bypassing the obstacle of finding a solid proof. The biggest part of this post is based on such a hypothesis: the Church–Turing Thesis (Turing, 1936).
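As a small illustration of the "bounded inputs" point: the halting problem is undecidable in general, but the bounded variant "does the machine halt within k steps?" is trivially decidable by simulation. A minimal Python sketch, with a toy step function standing in for a real machine:

    def halts_within(step, state, k):
        # Run the machine for at most k steps; report whether it halted.
        # step maps a state to the next state, or to None when the machine halts.
        for _ in range(k):
            if state is None:
                return True
            state = step(state)
        return state is None

    countdown = lambda n: None if n == 0 else n - 1  # toy machine: halts at zero
    print(halts_within(countdown, 5, 10))  # True: halts within the budget
    print(halts_within(countdown, 5, 3))   # False: a budget of 3 steps is too small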

References

Gödel, Kurt, 1931. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I; translated as "On formally undecidable propositions of Principia Mathematica and related systems I" in Solomon Feferman, ed., 1986. Kurt Gödel: Collected Works, Vol. I. Oxford University Press, pp. 144-195.

Laplace, Pierre Simon, 1951. A Philosophical Essay on Probabilities, translated into English from the original French 6th ed. by Truscott, F.W. and Emory, F.L. Dover Publications, New York.

Sipser, Michael, 2006. Introduction to the Theory of Computation, Second Edition. Thomson Course Technology, USA.

Turing, Alan M., 1936. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, pp. 230-265.

Thursday, 19 March 2015

Why we cannot build a general method for algorithmic error correction

This post is part of this paper.

A few years ago, I was working as a GIS programmer and administrator in a company that mainly produced digital maps for navigators. At that time, we had to map the road network in big areas and cities from scratch and also update older maps in other areas. One of the issues that you have to deal with in projects of this kind is the errors that occur during the mapping procedures. When 10 or more people map a road network of 20,000 road segments, it is unavoidable that some (a few or quite a lot) of the roads will be mapped incorrectly for various reasons. For instance, a bridge may be accidentally mapped as a normal road, or a road may not be characterized as unpaved while it is.
The only way to deal with this situation is to define some specifications for the mapped road network and then build some software that will detect the cases where your map is not compliant with the specifications. You may find a description of some methods for building such software in my paper (link). Then some experienced person has to look into the detected errors and correct them.
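To give a flavor of what such detection software does, here is a minimal Python sketch; the field names and the single rule are hypothetical simplifications, not the methods of the paper.

    # Road segments as records with attributes. Illustrative rule: every
    # segment must declare its level (0 = ground, 1 = bridge, -1 = tunnel);
    # a missing level violates the specification.

    segments = [
        {"id": "a", "level": 0},
        {"id": "b", "level": None},  # violation: level missing
    ]

    def detect_errors(segments):
        return [s["id"] for s in segments if s["level"] is None]

    print(detect_errors(segments))  # ['b'] -- detection is easy; correction is the hard part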
At that time I had a big disagreement with my employer, who believed that we could build software that could correct our maps without human intervention. He thought that since we know the specifications that the network should have, and we also know the cases where the specifications are violated, we could correct them algorithmically. Beyond his opinion, I have read some conference papers whose authors support more or less the same idea, while other journal papers argue that this problem may be solved only partially and with some caution.
As I will show next, such a problem is in general undecidable by computers and may be solved only in simple instances. I always believed that this should be common knowledge.
In fact, the algorithmic correction of maps is a special case of a more general problem, the algorithmic correction of software bugs. In both cases you have some specification that may be written in a formal way, say in logic. Then you may detect some errors or bugs, and you need to manipulate the parameters and the structure of your program or dataset so as to meet your specification.
Moreover, we may model both problems as graphs. As is known from software engineering, a piece of software may be modeled as a graph: the nodes of the graph are the functions and the other units that make up the software, and the edges model the ways the units interact. The nodes and edges may represent really complex software components or simpler components, like databases with spatial information as in road networks. By studying the more general problem we decide the simpler problem as well. So in order to study this issue you have to answer one question.
Since we can build software that detects errors can we build software that algorithmically corrects these errors?
An empirical answer to this question would be "in general, no": it is very difficult to build an algorithmic process that would take into account all the necessary parameters and lead to the desired result by correcting all the instances of errors in a piece of software or in a map. Let's define the problem and prove its undecidability. In the proof, I use the Turing Machine (TM) as a model of a general purpose computer and define the problem of algorithmic correction of software errors as an equivalent computational problem: the algorithmic correction of the input string of a Turing Machine.
Let M be a Turing Machine that accepts input w. If w is built under certain specifications, M prints a message of acceptance and then halts. In case w does not comply with the specifications, M might not halt or might have unpredictable behavior. Now let N be another Turing Machine that examines any given input w. If w is incorrect according to the specifications and the description of M, then N corrects w so that M can run on w properly and then halt.

Formally:

INPUT-CORRECTION_TM = {<M, N, w> | M and N are TMs, and N modifies w according to the description of M so that M halts on input of the modified w}

In order to prove the undecidability of this problem, it is necessary to recall another undecidable problem, the HALT problem. Undecidability implies that the problem cannot be decided by a Turing Machine, and an algorithm that solves it for any given input cannot be built.

HALT_TM = {<M, w> | M is a TM and M halts on input w}

Theorem. The INPUT-CORRECTION problem is undecidable.

Proof: Turing Machine N modifies any input w, according to the description of M, so that M can run on w and halt. If w is correct, N prints "YES", and M runs on w, produces an output and halts. If w is incorrect and M would not halt while running on w, N corrects w and prints "YES"; then M runs on the corrected w, produces an output and halts.
Let H be a machine that simulates the operation of both M and N. Then H halts on any given input, and the HALT problem is decidable for H and any given input w. This is a contradiction, since the HALT problem is undecidable.
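In code form the contradiction reads as follows; M, N and H are hypothetical stand-ins for the machines in the proof.

    # If the corrector N existed, the composite machine H (N followed by M)
    # would halt on EVERY input w -- and that is exactly what cannot be
    # guaranteed, since HALT_TM is undecidable.

    def H(M, N, w):
        w_fixed = N(M, w)  # N repairs w so that M is guaranteed to halt on it
        return M(w_fixed)  # M halts on the repaired input, so H always halts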

Another approach


The approach of the proof might look difficult to understand, especially its connection with the correction of errors in spatial networks. Let's look at it from a more practical perspective.
The specifications of every network define the valid spatial relationships of the network elements as well as the valid attributes of every element. Attributes and spatial relationships cannot be examined independently, since the validation of both of them ensures network consistency. Let A be the set of all possible valid combinations of the valid spatial relationships and the valid attributes. Let B be the set of all possible combinations of spatial relationships and attributes that are considered invalid according to the specifications.
Let f be an algorithmic function that corrects the errors in a given network. Then f accepts as input one element of B and outputs one element of A. This means that each element of B must correspond to exactly one element of A; f must be a one-to-one (1-1) or many-to-one (n-1) function. But this can happen only in very special and simple cases; it cannot be achieved in general. If an element of B corresponds to many elements of A, then f is not well defined.
In the same way we may define sets A and B for any other software structure. A 1-1 or n-1 function between software with bugs and error-free software may be found only in very simple instances.

   

A more intuitive example concerning spatial networks

  

Let N be a non-planar network of linear features. Let a and b be two elements of N where the edge of a crosses the middle of the edge of b. Suppose that the person who drew a and b neglected to apply to them the necessary parameters that denote their structure and state. We may consider the following possible cases.
  • The edge of a is on ground level and the edge of b overpasses a; edge b represents a bridge.
  • The edge of a is on ground level and the edge of b underpasses a; edge b represents a tunnel.
  • Both edges are not on ground level and edge b overpasses edge a; they form a multilevel traffic junction.
  • Both edges are on ground level; a and b represent a crossroad. Both edges must be split at the intersection point to form a proper crossroad.
There is no way to build an algorithm that will decide which of the four cases corresponds to the given instance, so as to correct the error.
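The ambiguity can be written down directly: the single invalid configuration corresponds to four equally valid corrections, so a correcting function f: B -> A is not well defined. The encoding below is purely illustrative.

    # One element of B (invalid configuration) maps to four elements of A
    # (valid configurations) -- so no function f: B -> A can pick the right one.

    candidate_corrections = {
        ("a crosses b", "level attributes missing"): [
            "b is a bridge over a",
            "b is a tunnel under a",
            "multilevel junction: both edges off ground level",
            "crossroad: split both edges at the intersection point",
        ],
    }

    for invalid, fixes in candidate_corrections.items():
        print(invalid, "->", len(fixes), "equally valid corrections; f is undefined")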
If we consider a special case of a network instance in which we may find a 1-1 function between sets A and B, we could perform algorithmic correction of a specific error case. As it is easy to see, this error case will be very simple. For instance, consider a network where each element that participates in a specific computable spatial relationship must be characterized by an attribute with a single value. Then an algorithmic process that assigns this value to the attributes of the appropriate elements may be built. Nevertheless, this would be of minor importance in the complex task of error correction.
In conclusion, I have to say that important corrections of detected errors in maps, and in computer software in general, have to be made manually, and of course by experienced personnel.
