Three Valued Logic

An extension of BooleanLogic (TwoValuedLogic) or BooleanAlgebra in which there are three values: 'True', 'False' and 'Unknown'. 'Unknown' might also be read as 'undefined' or 'neither'.

Three-valued logic is used quite a lot in FormalMethods.


Truth tables for ThreeValuedLogic ( | = or, & = and, ^ = xor, ! = not)

 T & T = T  T & U = U  T & F = F
 U & T = U  U & U = U  U & F = F
 F & T = F  F & U = F  F & F = F

 T | T = T  T | U = T  T | F = T
 U | T = T  U | U = U  U | F = U
 F | T = T  F | U = U  F | F = F

 T ^ T = F  T ^ U = U  T ^ F = T
 U ^ T = U  U ^ U = U  U ^ F = U
 F ^ T = T  F ^ U = U  F ^ F = F

 !T = F     !U = U     !F = T
In addition, an operator isUnknown (?) is often defined; this has the following truth table

 ?T = F     ?U = T     ?F = F
Although technically "three-valued logic" does not depend on the particular operator definitions given above, these are by far the most commonly used definitions (they are used in ViennaDevelopmentMethod, for example).
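
Here is a minimal sketch of these particular definitions in Haskell (the Trit type and the function names are invented for this page, not taken from any library):

 data Trit = T | U | F deriving (Eq, Show)

 not3 :: Trit -> Trit
 not3 T = F
 not3 U = U
 not3 F = T

 and3 :: Trit -> Trit -> Trit
 and3 F _ = F
 and3 _ F = F
 and3 T T = T
 and3 _ _ = U                 -- any remaining case involves an unknown

 or3 :: Trit -> Trit -> Trit
 or3 T _ = T
 or3 _ T = T
 or3 F F = F
 or3 _ _ = U

 xor3 :: Trit -> Trit -> Trit
 xor3 a b
   | a == U || b == U = U     -- unknowns are never assumed equal
   | a == b           = F
   | otherwise        = T

 isUnknown3 :: Trit -> Trit   -- the ? operator: always yields T or F
 isUnknown3 U = T
 isUnknown3 _ = F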

Note that unknowns are not held to be equivalent. The exclusive OR of unknown and unknown is unknown (not false), since the two unknowns need not be the same thing. Likewise, comparing two unknowns for equality returns unknown, not true. This can cause "holes" in reasoning, because such operations would be well-defined if the two instances of unknown could be shown to be equivalent.

For example, the proposition ((P==NP) ^ (P==NP)), one could argue, is false. P==NP is a well-known unknown proposition, but if it is true on one side of the ^ it is true on the other (and likewise if it is false), so the exclusive-OR ought to be well-defined, not unknown.

Likewise, the proposition ((P==NP) ^ (P!=NP)), one could argue, is true.

On the other hand, the proposition ((P==NP) ^ (IsFactoringHardAsDiscreteLogarithm?)) ought to be unknown, since the two OpenProblemsInComputerScience might not have the same answer.

Many authors and computer scientists prefer, for this and other reasons, to avoid ThreeValuedLogic. Part of the problem, of course, is that reducing an unknown predicate from its explicit formulation to the single value "unknown" loses information.
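
As a toy illustration of that loss of information (reusing the Trit type and xor3 from the sketch above; the Prop type and the evaluators are invented here): once both sides of the xor have been collapsed to the single value U, their common origin is gone, whereas a representation that keeps the proposition symbolic can still recognise x ^ x.

 data Prop = Lit Trit | Var String | Xor Prop Prop

 -- Naive evaluation: every open problem collapses to U before combining.
 evalNaive :: Prop -> Trit
 evalNaive (Lit v)   = v
 evalNaive (Var _)   = U
 evalNaive (Xor p q) = xor3 (evalNaive p) (evalNaive q)

 -- Slightly smarter: recognise "the same unknown xor itself" before collapsing.
 evalAware :: Prop -> Trit
 evalAware (Xor (Var a) (Var b)) | a == b = F   -- x ^ x is false, whatever x is
 evalAware p                              = evalNaive p

 -- evalNaive (Xor (Var "P==NP") (Var "P==NP"))  ==>  U
 -- evalAware (Xor (Var "P==NP") (Var "P==NP"))  ==>  F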

ChrisDate argues at length against ThreeValuedLogic in the context of database systems in the book AnIntroductionToDatabaseSystems (and in many of his other writings). This section has been moved to NullsAndRelationalModel.


This page, so far, appears to miss several major implications of ThreeValuedLogic. Most importantly, the two-argument operators considered above (| = or, & = and, ^ = xor) are only three of the 19,683 possible two-argument ternary logic functions (and ! is one of just 27 possible one-argument functions). Instead of discussing the values (T F U), it might be more helpful to consider the tuple (0 1 2). The truth table for an arbitrary two-argument ternary function is then:

 F|0 1 2 
 -+------
 0|a b c
 1|d e f
 2|g h i
For example, here's one:

 F|0 1 2 
 -+------
 0|1 0 2 
 1|0 2 1 
 2|2 1 0
There are 19,683 (3^9) distinct ways of filling in the truth table, and hence 19,683 distinct functions F. In general, for n-valued logic, there are n^(n^2) possible two-argument functions. For 4-valued logic that is 4^(4^2) = 4^16 = 2^32.
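
A quick way to check that count is to enumerate the tables directly; a minimal Haskell sketch (the Table name is just for this page):

 import Control.Monad (replicateM)

 type Table = [Int]            -- nine entries, row-major, each drawn from [0,1,2]

 allTables :: [Table]
 allTables = replicateM 9 [0,1,2]

 -- length allTables  ==>  19683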

How many of them are useful? :) One way to narrow things down is to impose constraints. In the tables below, FC is F restricted to be commutative (the table is symmetric about its diagonal), and FCR further fixes that diagonal so that x F x = x:

  F|0 1 2
  -+-----
  0|a b c
  1|d e f
  2|g h i
  FC|0 1 2
  --+-----
   0|a b c
   1|b e f
   2|c f i  
  FCR|0 1 2
  ---+-----
    0|0 b c
    1|b 1 f
    2|c f 2
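
If FC is read as "commutative" (a symmetric table) and FCR as "commutative with the diagonal fixed so that x F x = x", as the tables above suggest, a quick count of those constrained families is easy, reusing the Table and allTables definitions from the sketch above:

 isCommutative :: Table -> Bool
 isCommutative t = and [ entry r c == entry c r | r <- [0..2], c <- [0..2] ]
   where entry r c = t !! (3*r + c)

 hasFixedDiagonal :: Table -> Bool
 hasFixedDiagonal t = and [ t !! (3*r + r) == r | r <- [0..2] ]

 -- length (filter isCommutative allTables)                                  ==>  729
 -- length (filter (\t -> isCommutative t && hasFixedDiagonal t) allTables)  ==>  27
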
We should also consider the question of state inversion in ternary systems (as well as 4-state systems and so on). In a binary system, inversion is trivial - not 0 is 1, not 1 is 0. In three-valued logic, inversion (!) is not necessarily as simple as claimed above. The inversion of 0 can be either 1 or 2. Inversions are often most useful when they are reversible, so that if !0 = 2, then !2 = 0. This leaves 1 to invert to itself. Another approach, however, has !0=1, !1=2, and !2=0 - this is invertible but not reversible, as !(!0) = 2, not 0. There are six invertible ternary inverters (the 3! permutations of the three values), but only four of them, including the identity, are reversible in this sense.
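
Those counts are easy to verify mechanically; a small self-contained Haskell sketch (names invented here), encoding an inverter as the list [f 0, f 1, f 2]:

 import Data.List (permutations)

 inverters :: [[Int]]
 inverters = permutations [0,1,2]            -- the 6 invertible (bijective) inverters

 isReversible :: [Int] -> Bool               -- f (f x) == x for every x
 isReversible f = and [ f !! (f !! x) == x | x <- [0..2] ]

 -- length inverters                         ==>  6
 -- length (filter isReversible inverters)   ==>  4  (the identity plus the three swaps)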

This material is excerpted from http://www.multivaluelogic.com. Another good site is http://www.ternarylogic.com.

Ternary systems (and ThreeValuedLogic) are of interest in biology because they may be helpful in analyzing and modeling "pathways", the chains of reactions that promote or inhibit the production of a particular protein. Such a pathway can be modeled as a network of nodes, where each node is promoted, inhibited, or not affected by others. Some researchers have speculated that ThreeValuedLogic may be as natural to a biological computing system as binary logic is to an electronic one.

-- TomStambaugh

Only one operator is needed for all possible unary operations. (\x -> (x+1) mod N)

I'd propose the Boolean 'not' (which is the identity on the unknown value) and the 'unknown?' operator, which returns true if the value is unknown and false otherwise. With these two functions it is impossible to get an unknown value from a true or false value. Only by combining all three (not, unknown?, x+1 mod n) is it possible to cover all unary functions. -- AnonymousDonor
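
That claim can also be checked by brute force: compute the closure under composition of those three operators, each encoded as a lookup table. A self-contained Haskell sketch (the encoding T=0, U=1, F=2 and all names here are arbitrary choices for this page):

 import Data.List (nub, sort)

 -- A unary function on three values, represented as the list [f 0, f 1, f 2].
 notG, unknownG, succG :: [Int]
 notG     = [2, 1, 0]      -- Boolean not: swaps T and F, identity on U
 unknownG = [2, 0, 2]      -- "unknown?": T for U, F for T and for F
 succG    = [1, 2, 0]      -- \x -> (x + 1) `mod` 3

 compose :: [Int] -> [Int] -> [Int]           -- (f . g), as lookup tables
 compose f g = [ f !! (g !! x) | x <- [0..2] ]

 closure :: [[Int]] -> [[Int]]
 closure fs = go (sort (nub ([0,1,2] : fs)))  -- start from the identity plus the generators
   where
     go s | s' == s   = s
          | otherwise = go s'
       where s' = sort (nub (s ++ [ compose f g | f <- s, g <- s ]))

 -- length (closure [notG, unknownG, succG])  ==>  27, i.e. every unary function
 -- closure [notG, unknownG] is smaller: no function in it maps 0 or 2 to 1,
 -- so U can never be produced from T or F, matching the claim above.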

Do you have a URL on that?

There's some relevant material at http://www.stats.gla.ac.uk/~microarray/talks/eddougherty_25jun02.htm. This whole field is fairly new and moving pretty rapidly, so material is fleeting and hard to find. You may find yourself doing a LOT of reading. -- TomStambaugh

These things don't show the same sorts of structure as normal binary logic. What makes them logics, as opposed to just operations on sets? I couldn't find an answer in the quoted pages.

I don't understand the distinction you're drawing between "logics" and "operations on sets". Each particular truth table corresponds to a function mapping the value of two ternary variables to a third outcome. Boolean logic has 16 such mappings. Ternary logic has 19,683. The association between the chosen values (0 and 1 or 0, 1 and 2, or -1, 0, 1, etc) and concepts such as "true", "false", "on", "off", "undefined", and so on is arbitrary.

 A ------+
         |
         v
      +--+--+
      |     |
      |  F  +---> Out
      |     | 
      +--+--+
         ^
         |
 B ------+
In the above diagram, "A", "B" and "Out" are ternary variables. The box represents an operation, F, which can be associated with one of the 19,683 possible ternary logic functions. Given values for A, B, and F, the truth table specifies the resulting value of "Out". Any semantic value associated with a value of F (such as "and", "or", "plus", "minus", etc.) is arbitrary. -- TomStambaugh
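
Concretely, the box is nothing more than a table lookup; a tiny Haskell sketch (the ternaryGate name is invented here):

 -- Given a nine-entry table (row-major) and the ternary inputs A and B,
 -- Out is just the indexed entry.
 ternaryGate :: [Int] -> Int -> Int -> Int
 ternaryGate table a b = table !! (3 * a + b)

 -- With the example table given earlier on this page:
 --   ternaryGate [1,0,2, 0,2,1, 2,1,0] 0 1  ==>  0
 --   ternaryGate [1,0,2, 0,2,1, 2,1,0] 2 2  ==>  0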

I understand all that. I'm asking you what makes a particular collection of set operations a "logic", or why the word is overloaded to include them. Normally, in mathematics, categories of objects are defined by axioms, and I have seen some versions of "logics" defined that way. But those normally at least include some rule about negation being self-inverse, and apparently these don't. Hence my question.

[The subject is complicated by the development of so many new kinds of logic in the 20th century (multi-valued, FuzzyLogic, relevance, etc), however it probably remains true that any system of "logic" contains at minimum the primitive values "true" and "false", as well as some kind of inference system for deriving truth or falsehood.]

[From that point of view, a set theory must at minimum have values isomorphic to true and false, as well as morphisms between sets which are in some sense equivalent to inference. Given that, such a set theory could be regarded as a logic, whereas without that, it wouldn't seem profitable to call it such.]

[This seems more to the point than the well-known fact that traditional logic can be modelled in set theory and vice-versa. -- DougMerritt]

The association of semantics with ternary logic, such as the assignment of "true" and "false" to values and the identification of "interesting" functions, is one of the more interesting areas of study at the moment. My cursory read of the literature is that this is being approached statistically, through simulation and computation, rather than through more classic approaches. Researchers are assembling networks of ternary operators, using brute-force relaxations from initial conditions, and comparing the results to observed gene expression data. The theme is to discover, empirically, "interesting" functions that increase the correlation between the computed and measured results. -- TomStambaugh


How does the "unknown" value compare with "top" and "bottom" elements of LatticeTheory? ?

Example (the four-valued lattice described on the Stanford page cited below, with True at the top, False at the bottom, Unknown at the left, and Contradictory at the right):

           True
          /    \
  Unknown       Contradictory
          \    /
           False

Unknown probably doesn't correspond to either; however the database NULLs that give rise to unknowns in a relational context are very similar to the BottomType.

http://plato.stanford.edu/entries/logic-manyvalued/ (which is where the diagram above comes from) seems to be saying that BottomType corresponds to "unknown" and the top should represent "conflict", or in their words: "conflicting information saying that the state of affairs obtains as well as fails." In that case you are dealing with a FourValuedLogic? - but is the construct valid for ThreeValuedLogic if you omit the TopType?

In my personal opinion - BottomType should be used for exactly one thing - divergence. Anything else (nulls, exceptions, unknowns, etc.) should be handled with separate types, and whenever you need one of these you should construct a type-union of the "normal" datatype and whichever of the aforementioned SingularityType?s you like.

Why? Because BottomType is like a wildcard in the type system: any expression or variable, no matter the type, can also be BottomType, since BottomType is the universal subtype of everything. There is no way to exclude the possibility of any expression being _|_. Many OO languages - CommonLisp, JavaLanguage (to some extent), EiffelLanguage - treat nulls in this fashion, so there is no way to say in any of these that a given reference cannot be null, because the type system says null is valid in any context. (I qualify Java because it doesn't explicitly equate null with BottomType, as Eiffel and CommonLisp do; but Java doesn't provide mandatory references either.)

Having BottomType represent divergence does make sense, because any computation can potentially diverge - and unlike the other cases, divergence doesn't mean returning an invalid result; it's the failure to return a result at all. -- ScottJohnson
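
A sketch of the style being argued for, in a language with sum types (Haskell here; the Knowable and Age names are invented for this example, and Knowable is essentially the standard Maybe type): rather than letting every type silently include null/bottom, the possibility of "unknown" is opted into explicitly.

 data Knowable a = Known a | Unknown deriving (Eq, Show)

 newtype Age = Age Int deriving (Eq, Show)

 -- Code that tolerates missing data says so in its type...
 describe :: Knowable Age -> String
 describe (Known (Age n)) = "age is " ++ show n
 describe Unknown         = "age is unknown"

 -- ...while code that needs a real value just takes an Age; the type checker
 -- rules out the "null" case instead of leaving it to a runtime check.
 nextBirthday :: Age -> Age
 nextBirthday (Age n) = Age (n + 1)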

IT DEPENDS ON THE APPLICATION!

There is no single "right" answer that applies to every situation. An obvious contradiction to your expressed preference is in divergence-free systems, such as SQL. All SQL queries terminate.

That doesn't necessarily contradict my preference - in such a system, one could dispense with BottomType altogether. Of course, SQL doesn't use BottomType for divergence; it uses it for "unknown" (and SQL nulls are very much like bot, in that any attribute, regardless of its type, can be set to NULL). Whether that usage is appropriate is the subject of much heated debate in the relational community. -- sj

In terms of the diagram referred to above, the discussion on that Stanford page explains that the topmost symbol is True, the bottommost is False, the leftmost is Unknown, and the rightmost is Contradictory.

But there are other possibilities; there is no single 3-value nor 4-value logic, and any mathematician or architect is free to choose whatever system they like (and then get flamed by the people who disagree with their choice :-)

-- DougMerritt

I know of no biology researchers who are even remotely interested in the application of ternary logic to the mathematics of relational database theory. -- TomStambaugh

Exactly. It depends on the application which theory to use. -- dm

In CommonLisp, t (true) is TopType and nil (false/empty list/unknown/whatever/etc.) is BottomType. Of course, CommonLisp is a dynamically typed language, so much of the debate over the structure of the typing lattice just doesn't matter. For statically typed languages with advanced typing systems (Haskell, ML, Eiffel, even C++/Java to some extent), this sort of thing does matter.

You are right, there is no "one correct" answer. But some answers are more correct than others...

In a particular application area, yes, but even there, still not just a single clear way to go. This should be unsurprising, considering that the mathematical issues are very far from settled. We don't have an overarching formal theory yet.

A clear example of this is in how the frame problem arises in theorem proving. You would think that this could be fixed by doing a TheoremProvingSystem based on relevance logic, but as far as I can see, that's still beyond the state of the art. (If anyone knows differently, please let me know.) -- dm


In the electronics world, logic devices known as "tri-state" have three output values: high, low, and high impedance, the last meaning they are essentially disconnected from the rest of the circuit. It is possible to have an array of devices that all talk to the same data line, with only one device driving the line at a time. It is also possible to have no devices driving the data line at all, of course - although its logical value would then be undefined.

Does this analogy help in the discussion somehow?

I thought about tri-states as well when I first started thinking about gene regulation pathways (long ago in a distant universe I was trained as a hardware engineer), but it's really a different concept I think. For genes, the states are "promotes" (meaning increases production of a protein), "inhibits" (meaning decreases production of a protein), and "no effect". In a typical lab setting, you have many samples of many genes. The analysis problem is to explore the space of all possible ternary networks, searching for those whose behavior matches the observed data. -- TomStambaugh

Electronics does provide another application domain where examples demonstrate that, once again, it depends on what you want. Tristate is a ternary logic with T, F, and a third truth value which typically is not indicative of error or divergence.

In a truth table for a PLA or for test vectors, inputs are T, F, or don't care.

How the third truth value is intended is different on input vs output, and there's actually a lot of things it might mean. -- dm

Um, actually, it means, "not connected," which can't have a lot of different interpretations to it. A high impedance input has no effect on fanout of a driving device, contributes very little to overall capacitive effects, and affects no logic anywhere. A high impedance output acts exactly the same way to the outputs of other devices. The third state has a pretty specific meaning under these conditions. I see no reason why tri-state logic can't act the same way.


Despite the messiness, three-value logic often models the real world better than two-value. For example, a common source of debate in this wiki is "X is better than Y". Before any evidence is applied, the "default" value of this statement should generally be considered "null". In other words, "The difference in value between X and Y is unknown".

I think until you define "better", the "default" and "actual" value of this statement should generally be considered "undefined".


See: LogicalAnd, TetralemmicLogic, NullsAndRelationalModel, MultiValuedLogic, CantHideFromNulls, NullVersusNone.


CategoryLogic

