
Classical Logic
Before talking about alternative logics I must talk a little about classical logic, the familiar logic used by mathematicians, philosophers and others. What do we mean by logic? In the old days it simply meant reasoning correctly. That was pretty vague. These days the subject has been formalized and there are two aspects to consider.

On the one hand consider the sentences "All men have less than 3 legs" and "John is a man". From this we can conclude "John has less than 3 legs". Note the very important fact that we don't really need to know what men or legs are to make that deduction. If we were told "All X are P" and "x is an X" then we know "x is P" regardless of what x, X and P are. So we can think of deductions as mechanical operations on sentences. In fact we can consider sentences to be nothing more than strings of characters and proofs to be strings of sentences where each sentence follows from some previous ones by application of a rule. These rules are called deduction rules. Of course we also need some starting points to which we can start applying rules. Those are called axioms. An example deduction rule is the one that says if you have derived A and have derived "A implies B" then you can derive B. In fact that rule has a name, modus ponens. If you can derive a statement from the axioms then the derivation is called a proof. Sometimes, however, you may hypothesise something without being sure of it. You can still apply the deduction rules to get a derivation but it's not a proof until you can eliminate the hypothesis. In fact some systems of logic have very convenient rules for eliminating hypotheses.

On the other hand - what does it mean for a sentence to be true? Well for a sentence like "Fred has red hair" you can find out by checking directly. But what about compound sentences like "Fred has red hair and Jane has black hair"? Here are two propositions conjoined by "and". What you can do is test each sentence individually and assign a truth value: 0 if it's false and 1 if true. In fact we can define a function v (called a valuation), acting on sentences, that maps them to 1 if true and 0 if false. We can then use the binary operation, &, defined by the following table, to determine the truth value of the conjunction:

& | 0 1
--+-----    eg. 1 & 0 = 0
0 | 0 0
1 | 0 1

| | 0 1
--+-----    eg. 0 | 1 = 1
0 | 0 1
1 | 1 1

~ | 0 1
--+-----    eg. ~0 = 1
  | 1 0
If the result is 1 then the conjunction of the two sentences is true. Formally we're saying v(A and B)=v(A) & v(B). This is taking a truth theoretical view ie. that the truth of "A and B" depends on the truth value of A and the truth value of B. Similarly we test the truth of "A or B" using the | operator so v(A or B)=v(A) | v(B) and v(not A)=~v(A). These operations are examples of logical connectives and the functions &, | and ~ are collectively known as boolean operations.
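The tables above can be tried out directly. Here is a minimal sketch in Python; the sentences and their truth values are made up for illustration:

```python
# The boolean operations from the tables above, acting on truth values 0 and 1.
AND = lambda a, b: a & b   # the & table: 1 only when both inputs are 1
OR  = lambda a, b: a | b   # the | table: 0 only when both inputs are 0
NOT = lambda a: 1 - a      # the ~ table: swaps 0 and 1

# A toy valuation v on atomic sentences (the values here are invented).
v = {"Fred has red hair": 1, "Jane has black hair": 0}

# v(A and B) = v(A) & v(B)
conjunction = AND(v["Fred has red hair"], v["Jane has black hair"])
print(conjunction)  # 0: the conjunction is false because one conjunct is false
```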

There is one more logical connective to mention: implication. v(A implies B) is v(A) -> v(B) where A -> B is defined to be ~(A & ~B). In other words to tell whether or not "A implies B" is true we look at the truth of A and B and the result is false only if A is true and B is false. Note this means that if we can find any inconsistency in our logic, ie. any proof of something provably false, then we can prove anything, because a false statement implies anything and using modus ponens we can then deduce anything we want. So it just takes one little inconsistency to bring the whole system crashing down.
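The definition A -> B = ~(A & ~B) can be checked exhaustively, since there are only four cases. A quick sketch:

```python
# Classical implication defined from & and ~, as in the text.
IMPLIES = lambda a, b: 1 - (a & (1 - b))   # ~(A & ~B)

# Print the whole truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, "->", b, "=", IMPLIES(a, b))

# The only false row is 1 -> 0; in particular a false antecedent
# implies anything, which is the "ex falso" behaviour discussed above.
assert IMPLIES(0, 0) == 1 and IMPLIES(0, 1) == 1
```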

By the way, sometimes I'll be a bit lazy and, for example, use & and 'and' interchangeably. Really they are different things: one combines sentences and the other combines truth values, but it should be clear what is going on from the context. Logicians do this all the time.

I should also mention the quantifiers for all and there exist. They also have a truth theoretical interpretation but it's quite technical so I'll leave it out.

Use these connectives and throw in a few axioms and you get what's called predicate calculus. (If you leave out the quantifiers you have the simpler system propositional calculus instead).

So we have two sides to logic: the syntactical side which involves proving things by the application of mechanical rules applied to sentences and the semantic side which is about how you decide if a sentence is actually true or valid. But does proving something make it true? Well if this is the case for a type of logic it's said to be 'sound'. And if everything true is also provable it's said to be 'complete'. When logicians consider alternative ways of doing logic soundness and completeness are among the first things they look for. And while there is good reason for sticking with logics that are sound, it is well known that there are many logical systems that are incomplete. However I'll leave discussion of Gödel's incompleteness theorems for another time.

Note. My notation is a little nonstandard - I am limiting myself to a subset of HTML that I think is viewable by most browsers. If your browser is text only you may be having trouble with even this limited subset.

Multivalued Logic
How can we modify logic then? Well there's an obvious thing to do to start with. I mentioned the truth values 0 and 1. How about throwing in some alternative values? 1/2 seems like an obvious choice. This is exactly what Lukasiewicz did around 1917. One of his motivations was that he was able to sidestep Russell's paradox with it. But if you were following what I was saying above then that's just half the story - the semantic side. So in the thirties an axiomatization (ie. a set of axioms and deduction rules) for this system was also devised, introducing new connectives corresponding to necessity and possibility. We can define new logical operators called M and L with M(1/2)=1 and L(1/2)=0. If X is a proposition MX can be interpreted as meaning "maybe X" and LX as "necessarily X".

But there are some problems with this kind of logic. Lukasiewicz defined ~x=1-x so ~(1/2)=1/2. He also defined 1/2 | 1/2 as 1/2. So we find that for some X, X | ~X is 1/2. To many people that simply doesn't feel right, they argue that X | ~X should have a truth value of 1.
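These three-valued connectives are easy to play with. A sketch in Python, using exact fractions; note that the definitions of M and L below (Tarski's MX = ~X -> X, with Lukasiewicz's implication min(1, 1-x+y)) are an assumption not spelled out in the text, though they do give M(1/2)=1 and L(1/2)=0 as required:

```python
from fractions import Fraction

HALF = Fraction(1, 2)

NOT = lambda x: 1 - x                      # Lukasiewicz negation: ~x = 1 - x
OR  = lambda x, y: max(x, y)               # disjunction as max
AND = lambda x, y: min(x, y)               # conjunction as min
IMPLIES = lambda x, y: min(1, 1 - x + y)   # Lukasiewicz implication

# Possibility and necessity, defined via implication (an assumption here):
M = lambda x: IMPLIES(NOT(x), x)           # "maybe X"
L = lambda x: NOT(M(NOT(x)))               # "necessarily X"

assert M(HALF) == 1 and L(HALF) == 0

# The complaint in the text: the excluded middle X | ~X can come out 1/2.
print(OR(HALF, NOT(HALF)))
```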

Pressing on regardless we can generalize even further to allowing truth values to take any number between zero and one. One way to do this is with Fuzzy Logic but another approach is via Post logics. If you really want to you can even consider probability theory as a branch of logic where probabilities are real-valued truth values.

Relevance Logic
Now consider what I said above about implication. I basically said that implication has a truth theoretical interpretation - that you could determine the truth value of "A implies B" by knowing the truth values of A and B. But does that really capture what is meant by "implies"? If "A implies B" really were the same as "not both A and not B" then any false proposition would imply anything you like. For example, my car has air bags. So according to classical logic it's correct to say "if my car had no air bags the moon would be made of cheese". But what does my car have to do with the moon? There seems to be something fishy about a system of logic that allows you to make deductions from premisses that are completely irrelevant. So some logicians take issue with the idea that "A implies B" should be defined truth theoretically. In fact, some logicians argue that it's so obvious that you can't define implication truth theoretically and that only several years of brainwashing (eg. a university course in mathematics) could convince someone otherwise.

This is where relevance logics come in. Relevance logic (pioneered by Dunn) attempts to define conditions under which A is relevant to B, for example by tagging the explicit use of A in the demonstration of B from A. There is an interesting consequence of saying that A has to be relevant to B to prove B. We might find that even if we have managed to prove one false statement we can't prove some others. This is different to what I said about classical logic above where just one false statement brings the whole system crashing down. So relevance logics can be robust against inconsistency making them what's called paraconsistent. Some logicians consider this to be an advantage. For example, at several points in history physicists have had inconsistent laws of physics (for example just before a scientific revolution) to work with and yet have still reasoned successfully. But there are also other approaches to logic that deal with this issue.

Intuitionistic Logic
You may remember I ended the Alternative Numbers article with a quote from Kronecker. He had a hard time believing in some of the constructions mathematicians were using. In particular he disliked the Law of the Excluded Middle (LEM) which says that either X or not X. In other words that either a proposition is true or its negation is true. There's no middle ground between these two possibilities. It's often used by mathematicians like this: suppose I'm having a bit of trouble proving X. Instead I can suppose that not X is true and see where that gets us. If I find that it leads to a contradiction then I know something is up because in classical logic it is assumed that you cannot prove a false statement. If I did every step of the derivation correctly then the original hypothesis must have been false. There is no "middle way" where the original hypothesis might have been half true. If it leads to contradiction it must have been false. (This is known in the trade as Reductio ad Absurdum.) But here's the issue for people like Kronecker: X might be a proposition saying that a certain mathematical object exists. Using LEM you might be able to prove it exists without actually showing how to find it. It's a bit like getting something for nothing and mathematicians use it all the time. But Kronecker didn't like it at all, it seemed like cheating. He wasn't alone.

Another mathematician, Brouwer (probably more famous for his fixed point theorem), also had trouble with LEM, and he started devising an alternative form of logic called Intuitionistic logic. Actually Brouwer was dead against formalism of the type that logicians use, but nonetheless someone came along after Brouwer and formalized his logic! It turned out that this led to a particularly elegant type of logic.

Part of the intuitionist's problem is that it's possible to prove "A or B" without having a proof of A or a proof of B. To an intuitionist a proof of "A or B" is a proof, either of A, or of B. In fact - that's just how intuitionists define a proof of "A or B". When you do this it suddenly becomes a lot harder to 'cheat'. Consider what is meant by "A implies B". To a classical logician it's just "not (A and not B)". But this just doesn't seem to capture the "intuition" of what implication means. A better idea might be something along these lines: if you know A implies B, then as soon as you know A then you know B. In other words the statement "A implies B" should give you the means to prove B the moment you've proved A. So you can imagine "A implies B" as a recipe for converting a proof of A into a proof of B. In fact this is exactly how intuitionists view implication. A proof of "A implies B" is a function that takes as input a proof and spits out another. If the input is a proof of A then the output should be a proof of B. This is radically different to the classical view. By making implications a type of function we connect logic with set theory and with subjects like lambda calculus. And there's another really cool thing going on: mathematicians use the symbol -> in many different ways. Among them are two: they use it to mean both implication and a function mapping (eg. f:A->B means f maps from set A to set B). In other words, by strange coincidence we've turned a proof into a function and we use the same symbol for both. It turns out there is a whole branch of mathematics that deals with arrows! It's called category theory and it provides a beautiful integration of everything I have mentioned in the last two paragraphs.
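The proofs-as-functions reading can be sketched in ordinary Python. The "proof objects" and names below are illustrative choices, not a standard library:

```python
# A proof of "A and B" is a pair (proof of A, proof of B), so the
# statement "A and B implies A" is proved by a projection function:
proof_of_and_elim = lambda pair: pair[0]

# From proofs of "A implies B" and "B implies C" we can build a proof
# of "A implies C" - it's just function composition:
def compose(ab, bc):
    return lambda proof_of_a: bc(ab(proof_of_a))

# And modus ponens - from "A implies B" and A, conclude B - is just
# function application:
modus_ponens = lambda proof_ab, proof_a: proof_ab(proof_a)
```

This is the connection with lambda calculus mentioned above: deduction rules become ways of building functions.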

But before you take up intuitionist logic you should be warned. Brouwer had carried out some great mathematical work in his life. But when he became an intuitionist he regarded much of his work as being no longer of value. He had to reject things that other mathematicians thought were obviously true. Eventually other mathematicians came to see Brouwer as a little crazy! On the other hand intuitionist logic is closely tied up with Constructive Mathematics (not to be confused with a similarly named educational fad) which insists that you always construct things you claim exist. This means that it goes hand in hand with computer science where people's programs are usually expected to produce a result - not just say a result exists.

Second and Higher Order Logic
Earlier I briefly mentioned quantifiers. An obvious question about them is this: when I say "for all x", what kinds of x are we talking about? If you answer this question then you know what the basic 'objects' are that your logic deals with. For example, in mathematics we generally deal with sets and when we say "there exists an x such that..." we really mean "there exists a set x such that...". In fact mathematicians generally build everything out of sets using a set of axioms called ZF (and maybe some more axioms too - like the Axiom of Choice). For example, as I mentioned in the Alternative Numbers story, mathematicians build the ordinal integers out of sets.

But what about properties of sets or numbers? How do we talk about those as opposed to the underlying sets themselves? Mathematicians have several different ways to talk about properties. For example, if you want to talk about primality you can just talk about the set of prime numbers. As ZF can talk about sets, by considering primality through the set of primes we have just turned primality into something ZF can talk about. But sometimes there are properties we want to discuss where we can't form a set. For example consider the property of being a set. If we were to convert this into a set then we'd need to have a set of all sets. But ZF doesn't allow this (because of hairy problems like the Russell paradox). So how can we talk about a property like that?

One approach is to modify logic by extending our universe to include properties. This is what is known as Second Order Logic - as opposed to the usual logic that is first order. Using Second Order Logic you can say things like "there is a property such that if two sets have this property then so does their union". You simply can't say this in First Order Logic. But there are some difficult issues associated with Second Order Logic. Logicians like to prove things, not just in their logical systems, but about their systems. It's actually very hard to prove anything about Second Order Logic. There are also some other tough issues like deciding when two properties are the same. In fact Quine believed that Second Order Logic isn't even Logic. And even though Second Order Logic appears to have greater expressive power mathematicians actually get along fine with First Order Logic. Every time they make a statement about properties there's always a workaround (sometimes quite inelegant) for expressing something that's just as useful in First Order Logic. By the way, you can also generalize further, so for example in third order logic we can talk about properties of properties.

Modal Logic
When I was talking about multi-valued logic I mentioned the concepts of necessity and possibility. These are called 'modalities' by logicians and a logic that deals with this kind of concept is called a modal logic. There are many different types of modal logic but I'll mention four: alethic logic, deontic logic, epistemic logic and temporal logic. An alethic logic is one like that mentioned above. Operators for necessity and possibility are introduced. These operators are very closely related in that if it is not necessarily the case that X is not true then X is possibly the case and vice versa. We can write this, using the above notation, as MX=~L~X. Let's think about the semantics of this for a moment. What does it mean to say "Bush is a Democrat" is false but not necessarily false? If it's false, it's false. Well when logicians are thinking about the semantics of modal logic they often work with 'possible worlds'. Although it is true that Bush is a Republican we can imagine a world in which he is a Democrat (well I think I can anyway). Only when something is true in all possible worlds is it considered necessarily true, otherwise it's just plain true (this is not strictly the definition used but it's close enough for now). Can you imagine a world in which 1=2?
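A toy version of the possible-worlds idea is easy to write down. The sketch below ignores the accessibility relation that full Kripke semantics uses, and the worlds and facts are invented for illustration:

```python
# Two possible worlds: the actual one, and one where Bush is a Democrat.
# In no world is 1 = 2 true.
worlds = {
    "actual": {"Bush is a Republican": True, "1 = 2": False},
    "other":  {"Bush is a Republican": False, "1 = 2": False},
}

def necessarily(p):
    """LX: true only if X holds in every possible world."""
    return all(w[p] for w in worlds.values())

def possibly(p):
    """MX: true if X holds in at least one world (MX = ~L~X)."""
    return any(w[p] for w in worlds.values())

print(possibly("Bush is a Republican"))      # true in some world
print(necessarily("Bush is a Republican"))   # but not in all of them
print(possibly("1 = 2"))                     # not even possibly true
```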

Deontic logic deals with what you ought to do. Introduce the symbol O that means 'ought'. Then OX means X ought to be the case. Just like in alethic logic we can consider the 'dual' operator ~O~. ~O~X means It is not true that it ought not to be the case that X. In other words it means X isn't forbidden. Epistemic logic deals with belief and knowledge. For example KX means X is known to be true. KX->X but the converse isn't true. Among other things epistemic logic can be used to study cryptographic protocols in a formal way. Lastly temporal logic introduces time into logic with operators like H so that HX means hitherto X. In temporal logic whenever you ask about the truth of something you need to say at what time you are talking about. HX then means X was always true up to the time specified. Here's an exercise: what does ~H~ mean in English?

Quantum Logic
Are our logical principles necessarily correct or are they an empirical fact that we have discovered about the universe? Once upon a time Euclidean geometry was seen as necessary. It just seemed obvious that parallel lines were always the same distance from each other. But along came Lobachevsky with the notion that there were different types of geometry and then came Einstein who showed that the universe itself was actually described by one of them. So maybe logic will go the same way. One philosopher who argued this was Hilary Putnam. Just as General Relativity changed geometry he said that Quantum Mechanics changes logic and that in fact we should be using Quantum Logic!

So how do you have to modify logic to make it Quantum? It's difficult to go into the details but I can give a taste of it. Instead of truth values 0 and 1 truth values are subspaces of vector spaces. If you don't know what that means I'll try to give an example. A vector space is a flat space that extends out to infinity and has a chosen point in it called the origin or zero. A subspace is a space contained in that space with the same chosen point. For example a 2D plane can be thought of as a vector space and the lines going through the origin are examples of subspaces. We can now define how the logical connectives work: given a valuation v, v(A and B) means the intersection of v(A) and v(B). v(A or B) means the smallest subspace containing v(A) union v(B). v(not A) means the space perpendicular to v(A) (in particular not(V), where V is the entire vector space, is the set {0} containing just the origin and not({0}) is the whole of V).

So doing logic is now doing geometry. The boolean & operation, for example, is replaced by vector space intersection. If a proposition turns out to have truth value {0} then we can consider it to be false and if the truth value is the whole of V then it can be considered true. But what about all those in-betweens? Well they correspond to those weird situations in quantum mechanics where you simply can't know two things at the same time. For example because of the Heisenberg uncertainty principle it simply doesn't make sense to say "this electron has position x and momentum p". Quantum Logic perfectly captures these aspects of quantum mechanics.
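This subspace logic can be modelled concretely in the 2D plane. In the sketch below a truth value is either {0}, the whole plane, or a line through the origin (encoded as a primitive integer direction vector, an illustrative choice). One famous consequence falls out immediately: unlike classical logic, the distributive law fails.

```python
from math import gcd

ZERO, FULL = "zero", "full"    # the subspace {0} and the whole plane

def line(a, b):
    """Canonical form of the line through the origin with direction (a, b)."""
    g = gcd(a, b)
    a, b = a // g, b // g
    if a < 0 or (a == 0 and b < 0):   # fix a sign convention
        a, b = -a, -b
    return (a, b)

def AND(s, t):                        # intersection of subspaces
    if s == FULL: return t
    if t == FULL: return s
    if s == ZERO or t == ZERO: return ZERO
    return s if s == t else ZERO      # distinct lines meet only at the origin

def OR(s, t):                         # smallest subspace containing both
    if s == ZERO: return t
    if t == ZERO: return s
    if s == FULL or t == FULL: return FULL
    return s if s == t else FULL      # two distinct lines span the whole plane

def NOT(s):                           # orthogonal complement
    if s == ZERO: return FULL
    if s == FULL: return ZERO
    a, b = s
    return line(-b, a)

# Distributivity fails: take the x-axis, the y-axis, and a diagonal line.
x_axis, y_axis, diag = line(1, 0), line(0, 1), line(1, 1)
lhs = AND(diag, OR(x_axis, y_axis))             # diag ∧ (x ∨ y) = diag
rhs = OR(AND(diag, x_axis), AND(diag, y_axis))  # (diag ∧ x) ∨ (diag ∧ y) = {0}
print(lhs, rhs)                                 # different!
```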

(Aside for physicists: truth values aren't strictly the subspaces but the operators projecting onto these subspaces. A projection, P, has the property that P^2 = P. So the eigenvalues of P are 0 and 1. This means that Quantum Logic really is the quantization of logic - we have replaced truth values with non-commuting Hermitian operators whose eigenvalues are the results we want!)

So do physicists use Quantum Logic? Well, no actually. Part of the reason is that it's a victim of its own success. It works so well and fits perfectly with quantum mechanics that it doesn't actually say much that's new. So physicists get along fine without it. But philosophers of science do occasionally make use of it. (By the way, if you thought it was weird that truth values in Quantum Logic were geometrical, then maybe I'd better not mention that one way of defining a valuation for Intuitionistic Logic is through topology!)

Linear Logic and Non-monotonic Logic
Looks like I still have space to briefly mention some other types of logic. There are still some aspects of logic that have the potential to be varied. For example in every system we've looked at above we are able to take propositions and deduce further propositions from them. As you apply the deduction rules there is an ever increasing set of theorems that you have proved. Consider the propositions "all birds can fly" and "a robin is a bird". From these we may conclude "robins can fly". However, if we also have the proposition "a penguin is a bird" we might conclude that penguins can fly. It looks like the proposition "all birds can fly" needs to be amended. Yet it doesn't seem entirely right to completely throw away that proposition - it appears to have some value. One approach is through default reasoning. By default, in the absence of other information, birds fly. Logics that can handle this type of reasoning are called non-monotonic. The name comes from the fact that if you have more information, eg. penguins don't fly, you can end up being able to prove less, eg. you can no longer use the proposition "all birds can fly" in the default way. Default logic is of interest to practitioners of AI as it gives a way to codify some aspects of 'common sense'.
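The bird example can be sketched as a default rule with an exception list. The point of the sketch is the non-monotonicity: adding a fact makes a previous conclusion go away.

```python
# Default reasoning sketch: "birds fly" by default, unless the individual
# is in a known exception class. The facts here are the ones from the text.
birds = {"robin", "penguin"}
flightless = set()                 # exceptions known so far

def can_fly(x):
    return x in birds and x not in flightless

print(can_fly("penguin"))          # True: no exception recorded yet

flightless.add("penguin")          # learn more: penguins don't fly
print(can_fly("penguin"))          # False: more knowledge, fewer conclusions
print(can_fly("robin"))            # True: the default still applies to robins
```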

You can even 'lose' theorems in one form of logic - Linear Logic. Suppose you are describing a vending machine. You put in one dollar and get either a CokeTM or a SpriteTM. If you have one dollar you can get one or the other but not both. So the proposition "I have one dollar" can be converted into either "I have a CokeTM" or "I have a SpriteTM" but once it has you can't do it again unless you have another dollar. Linear Logic is something like this and certain types of deduction rule actually 'consume' their premisses as you use them. Linear Logic can be useful for describing systems with limited resources.
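The vending machine can be sketched with multisets of resources, where firing a rule consumes its premisses. The multiset representation is an illustrative choice, not the formal proof theory of Linear Logic:

```python
from collections import Counter

def apply_rule(resources, consumed, produced):
    """Fire a rule if its inputs are available, consuming them as it goes."""
    if all(resources[r] >= n for r, n in consumed.items()):
        return resources - Counter(consumed) + Counter(produced)
    return None                       # rule not applicable: premiss missing

state = Counter({"dollar": 1})

# "I have one dollar" can be converted into "I have a Coke"...
state = apply_rule(state, {"dollar": 1}, {"coke": 1})
print(state)                          # the dollar is gone, a coke appears

# ...but the dollar was consumed, so a second purchase fails.
print(apply_rule(state, {"dollar": 1}, {"sprite": 1}))   # None
```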

Laws of Form
Phew! We're coming near to the end. So to wind down I'm going to end with a very simple logic from the book Laws of Form by G Spencer-Brown. Now most mathematicians are pretty sure that this is the work of a crackpot, nonetheless the first part introduces a fun idea that people have managed to get mileage out of. To introduce this type of logic you really have to break the most fundamental rule of logic - that sentences be strings of characters. In this type of logic there is only one symbol, a circle. (Actually, Spencer-Brown didn't use circles but this notation is equivalent and is in a similar spirit.) It doesn't have to be a precise geometric circle, it could be a squiggly circle. This is known as a distinction.

As I can't use pictures I'll use ASCII art - the net effect being that I'll be using strings of characters anyway.

Here are some circles: () and o. What can you do with a circle? Well you can put one next to the other like this:

o o.
According to Spencer-Brown the rule is that whenever you see that you can replace it with o. I.e.
o o = o.
The other rule is that if you see one circle directly inside another they cancel. I.e.
(o) = .
That's a circle within a circle on the left. (Use your imagination!) Notice there's nothing at all on the right hand side of that '=' sign! Here are some example derivations:
((o)) = o and (((o)(o)o)o) = ((o)o) = (o) = .
Now you can throw in some unknowns and start proving theorems like that for any x and y, ((x))=x and y y = y. It turns out that you can represent propositional calculus using these symbols. Have fun doodling!
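The two rules are enough to reduce any expression of circles to either a single circle or nothing, so a little evaluator can check the derivations above. The representation below (marks as nested lists, "o" read as an empty circle) is my own illustrative choice:

```python
def parse(s):
    """Parse the ASCII notation into a list of marks, each a list of its contents."""
    s = s.replace(" ", "").replace("o", "()")
    stack = [[]]
    for ch in s:
        if ch == "(":
            stack.append([])
        else:                      # ")": close the current mark
            mark = stack.pop()
            stack[-1].append(mark)
    return stack[0]

def value(marks):
    """A sequence of marks is marked if any of its marks is marked;
    a mark is marked exactly when its own contents are unmarked."""
    return any(not value(m) for m in marks)

def simplify(s):
    return "o" if value(parse(s)) else ""

# The example derivations from the text:
print(repr(simplify("o o")))           # 'o'
print(repr(simplify("(o)")))           # ''  (nothing at all)
print(repr(simplify("((o))")))         # 'o'
print(repr(simplify("(((o)(o)o)o)")))  # ''
```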

I learned a tremendous amount from The Blackwell Guide to Philosophical Logic. When I needed more detail on classical logic I used Notes on Logic and Set Theory and Computability and Logic. Otherwise I used the many resources available on the web some of which I have linked to above. Incidentally I learned about Laws of Form from a British magazine Personal Computer World in 1982.

Final Note I've had to make many sacrifices in order to make this article of a reasonable length. I've had to be a little vague or sloppy in places. So please use the references and links if you are interested in anything I say. Actually you'll find the textbooks are also often vague and sloppy too. But I've probably committed the extra sin, in places, of being incorrect too.

 Alternative Logic | 126 comments (106 topical, 20 editorial, 1 hidden)
 More on Lukasiewicz (4.50 / 2) (#19) by My Alternative Account on Sat Nov 02, 2002 at 02:53:04 PM EST

 His name is meant to have a stroke through the first letter although most browsers don't seem to be able to do this through what I think is the official way, using &lslash;. He was Polish and that is part of Polish orthography. He invented what is now known as Polish notation. You may be more familiar with the backwards version of this: Reverse Polish Notation, otherwise known as RPN.
 So presumably ... (none / 0) (#22) by Simon Kinahan on Sat Nov 02, 2002 at 06:16:13 PM EST

 ... polish notation is a prefix notation, like lisp and other functional languages use for function application ? Simon If you disagree, post, don't moderate[ Parent ]
 No it's not. (none / 0) (#36) by Lazarus Short on Sun Nov 03, 2002 at 12:15:11 AM EST

 RPN is a postfix notation, not a prefix one. The operator comes after the operands. -- "Never offend people with style when you can offend them with substance."   -- Sam Brown [ Parent ]
 Argh, never mind. (none / 0) (#37) by Lazarus Short on Sun Nov 03, 2002 at 12:19:02 AM EST

 Damnit, I read the parent posts over twice, just to be safe, and I still managed to misunderstand what you were saying until right after I posted my reply. Never mind, I thought you were talking about reverse polish notation being prefix. -- "Never offend people with style when you can offend them with substance."   -- Sam Brown [ Parent ]
 reposting part of an editorial comment (5.00 / 1) (#20) by nex on Sat Nov 02, 2002 at 02:55:07 PM EST

 To quote "My Alternative Account", the author of the very nice article above:"That's such a good question I think it should be re-asked as a topical question if the story is accepted. Then I can give a fuller answer." The question was something like this:"In 'classical' logic, if you already proved that A implies B, then you know B to be true the moment you prove A to be true. How is intuitionistic logic different?" Enter stage left My Alternative Account:
 I'm confused (none / 0) (#43) by martingale on Sun Nov 03, 2002 at 07:36:32 AM EST

 I don't really see the difference. A classical logician simply appends the proof of "A implies B" to the proof of "A". This gives a proof of "B", which can stand on its own, irrespective of whether the proof of "A implies B" is explicitly singled out as worthy of mention. Once a proof of "B" is given, it can be simplified and transformed into a number of equivalent forms, all of which can stand on their own. The subset of statements which originally were the proof that "A implies B" may have become unrecognizable in the process. So I would argue that the classical approach is just as constructive. My understanding of constructionism is that it simply restricts itself to not using the axiom of choice (and maybe a couple of others I forget) and instead requires an explicit construction in case an object's existence is invoked. But if you take a constructionist proof of "B", you can simply use it "as is" with classical logic, and perhaps, if you want, simplify parts of it through the axiom of choice. [ Parent ]
 Not necessarily (5.00 / 1) (#44) by Simon Kinahan on Sun Nov 03, 2002 at 07:48:49 AM EST

 A conventional mathematician might have proved "A implies B" by some non-constructive approach, such as proof by contradiction. That is the root difference between constructive and conventional mathematics: in constructive mathematics, to prove something exists, you have to create it. Simon If you disagree, post, don't moderate[ Parent ]
 clarifications (none / 0) (#45) by martingale on Sun Nov 03, 2002 at 08:05:40 AM EST

 Okay, maybe I didn't express myself properly. I agree with what you say. However, I find the following paragraph problematic: An intuitionistic proof of "A implies B" is like a machine that inputs proofs of A, outputs proofs of B that stand on their own. Once you have your proof of B you can throw the machine away. You can give people the proof of B that they can check without even telling people you used the machine. I claim that the above applies equally well to Zermelo Frankel mathematics and to constructivist mathematics. It doesn't serve to distinguish the two on the question of how to treat proof of "A" + proof of "A implies B". A classical logician can transform the proof of "B" so as to remove the sequence of statements which prove "A implies B", thereby throwing away the crutches (*). (*) of course, in a sufficiently simple system the proof may be irreducible. [ Parent ]
 different derivation of intuitionistic logic? (none / 0) (#68) by btherl on Sun Nov 03, 2002 at 08:44:31 PM EST

 I agree with you martingale, it doesn't sound right.  Possibly he means something like this: Definitions:    =>  : implication as part of the logic syntax    ->  : implication as a (sequence of) inference rule(s) Then an intuitionist is allowed to use => in the classical logic sense, but is restricted to using only the intuitionistic version of -> as an inference rule (or sequence of them). This still looks like intuitionistic logic, but instead of giving "=>" an intuitionistic meaning, you keep it classical but don't allow it to be used for inference. A logic like this would never give you a usable conditional like "A => B", since by its definition such statements can't be used to infer B from A.  But a list of statements starting at A and ending up B could be used in place of it. [ Parent ]
 thanks, this helped [n/t] (none / 0) (#93) by martingale on Mon Nov 04, 2002 at 07:51:03 PM EST

 [ Parent ]
 rules of the game (none / 0) (#92) by martingale on Mon Nov 04, 2002 at 07:50:33 PM EST

 I've reread the thread to try to make sense of my difficulties and the responses. I believe we all clearly agree that the set of intuitionistic proofs is a strict subset of the set of classical proofs. What bothered me was the view that a classical proof could not stand on its own, and this was used to differentiate classical from intuitionistic - but it always can from the classical perspective. It can't always from an intuitionistic/constructivist perspective. The example you gave in your first response was phrased in intuitionistic language, but I interpreted it from a classical perspective. In fact, from the classical perspective, implication -> can be seen as a valid, constructive step. If I take a finite sequence of statements on paper (proof of "A") and I write below that "A -> B", I have now a perfectly valid sequence of statements (proof of "B") which can stand on its own. What I have done in words is exhibited a function f which takes a proof of "A" on paper and produces a proof of "B" on paper. All within the classical framework. Only an intuitionist perspective will disagree, because "A -> B" must be replaced by a sequence of statements from their restricted subset. Anyway, I'm happy now I think. [ Parent ]
 Example of f:proof of A -> proof of B? (none / 0) (#96) by btherl on Tue Nov 05, 2002 at 01:34:07 AM EST

 My Alternative Account, can you give an example of such a function f which maps proofs of A to proofs of B for some A and B?  I think I understand now what you are saying, but I'm having a hard time thinking of what such a function would look like. Also can I ask: If we have a function mapping proofs of A to proofs of B, can we say that the statement "A => B" is true in intuitionistic logic?  Or do we simply say "A implies B" as a metalogical statement? [ Parent ]
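Not speaking for martingale, but his f can be sketched concretely if we model a classical proof as nothing more than a list of statement strings, as the article suggests. The names below are invented for illustration; the point is only that appending "A -> B" and then "B" to a proof of A yields a sequence that, classically, is a proof of B standing on its own.

```python
# Proofs modeled as plain lists of statement strings (illustrative names).
def f(proof_of_a, a, b):
    """Given a derivation ending in `a`, append the implication `a -> b`
    and a modus ponens step, yielding a derivation ending in `b`."""
    assert proof_of_a[-1] == a, "input must end in A"
    return proof_of_a + [f"{a} -> {b}", b]

proof_of_a = ["axiom 1", "axiom 2", "A"]
proof_of_b = f(proof_of_a, "A", "B")
assert proof_of_b == ["axiom 1", "axiom 2", "A", "A -> B", "B"]
```

Classically this is a complete proof of B; an intuitionist would additionally demand that the "A -> B" step itself be unpacked into constructive steps.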
 Adiffer? NT (1.20 / 5) (#25) by bjlhct on Sat Nov 02, 2002 at 07:33:51 PM EST

 * kur0(or)5hin - drowning your sorrows in intellectualism
 Wow, there is only one thing I regret (none / 0) (#26) by mami on Sat Nov 02, 2002 at 08:30:03 PM EST

 and that is that you didn't split this into several smaller pieces and make a series out of it. Each day one part. I hadn't read your first column and got stuck reading that one first. Now for the moment I have no more strength left to go through this one. Maybe tomorrow. Great stuff. Real fun to read it.
 PDA? (none / 0) (#62) by pin0cchio on Sun Nov 03, 2002 at 03:38:57 PM EST

 BTW I recommend getting a PDA (even a used one from eBay) Perhaps I am getting too into the theory of computation course I'm taking, but for some reason, when I see "PDA" I don't immediately think of a "handheld computer" but rather a theoretical machine that can recognize any context-free language (i.e. a push-down automaton). Handheld computers are LBAs (linear bounded automata), which can recognize any context-sensitive grammar. Will we soon see a story about Chomsky's hierarchy? lj65 [ Parent ]
 stop using decimal (1.15 / 13) (#29) by Fen on Sat Nov 02, 2002 at 09:30:48 PM EST

 I have a hard time thinking anyone who uses decimal is logical in any way. Stop it right now. --Self.
 alternative math (none / 0) (#30) by spacejack on Sat Nov 02, 2002 at 10:36:06 PM EST

 must have been used to calculate that rating. (0.57/0) ?? [ Parent ]
 Fen ... (none / 0) (#46) by Simon Kinahan on Sun Nov 03, 2002 at 08:07:19 AM EST

 ... is a member of the elite rank of untrusted users, thanks to his hexadecimalism. Simon If you disagree, post, don't moderate[ Parent ]
 He's a militant fundamentalist hexadecimalist. nt (5.00 / 1) (#76) by pattern against user on Mon Nov 04, 2002 at 02:38:59 AM EST

 [ Parent ]
 The bastard! (5.00 / 1) (#77) by jvance on Mon Nov 04, 2002 at 02:41:53 AM EST

 Everyone knows Octal is the One True Path. --- This is taking too much of my time. I've gone away. You can reach me at john_a_vance atsign hotmail dot com if you wish.[ Parent ]
 Splitters! (nt) (none / 0) (#103) by ethereal on Tue Nov 05, 2002 at 02:07:23 PM EST

 -- Stand up for your right to not believe: Americans United for Separation of Church and State[ Parent ]
 I prefer unary (2.83 / 6) (#39) by theElectron on Sun Nov 03, 2002 at 01:40:27 AM EST

 For instance, in unary, today is: Sunday, November 111, 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 
11111111111111111111111111111111111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111111111111111111111111111111111 11 --Join the NRA![ Parent ]
 011011100110111100100001 (none / 0) (#90) by benson hedges on Mon Nov 04, 2002 at 05:30:41 PM EST

 011011110110111001101100011110010010000001100010 011010010110111001100001011100100111100100100000 011101110110100101101100011011000010000001100010 011100100110100101101110011001110010000001100101 011011100110110001101001011001110110100001110100 011001010110111001101101011001010110111001110100 00100001 -- When all is One, all violence is masochism.[ Parent ]
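For the curious, the subject and body above are just 8-bit ASCII spelled out in binary; a few lines of Python (mine, not the poster's) decode them:

```python
# Decode a string of binary digits as 8-bit ASCII characters.
def decode(bits):
    bits = bits.replace(" ", "")           # allow spaces between groups
    return "".join(chr(int(bits[i:i+8], 2))
                   for i in range(0, len(bits), 8))

print(decode("011011100110111100100001"))  # the subject line -> no!
```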
 An Epistemological question (none / 0) (#31) by labradore on Sat Nov 02, 2002 at 11:25:45 PM EST

 How did you come to know about all of this? I.e. what background does someone both capable and interested in writing this article have and what other things might the author know?
 many fields today use this kind of stuff (none / 0) (#33) by Work on Sat Nov 02, 2002 at 11:30:50 PM EST

 While my knowledge is nowhere near this in-depth, as a CS undergrad I've had my fair share of logic courses; in fact, I'm currently enrolled in one that is entirely predicate and propositional calculus. I still have more to take as well. Besides philosophy, any field that bases itself on some fundamental rules and axioms (computer science, mathematics, philosophy come to mind...) makes use of this. [ Parent ]
 Wow...that was much better than I expected (4.00 / 1) (#32) by kcbrown on Sat Nov 02, 2002 at 11:29:18 PM EST

 When I read this: How about going to the very foundations of mathematics itself and replacing logic itself? I thought to myself "oh, no, not a piece claiming that logic is arbitrary and can successfully be replaced with a set of arbitrary rules". Instead what we got was a very well written and very interesting article. Kudos! There's one thing I might point out: logic as a formal system didn't spring from a vacuum but has its roots in our experience with the world. As such, any replacement system of logic must yield the standard system of logic as a special case ... just as Relativistic physics yields Newtonian physics as a special case (such as when velocities are much smaller than lightspeed, in the case of Special Relativity).
 standard logic? (none / 0) (#67) by btherl on Sun Nov 03, 2002 at 08:24:04 PM EST

 There's a standard system of logic?  I've always viewed it more like this: Microsoft Logic - Has convenient syntax for proving most common theorems, but is incapable of proving some more complicated theorems that we nevertheless believe to be true. *NIX logic - Allows proof of a much larger class of theorems, but with a more arcane syntax.  Some very common theorems require a lot of practice to prove reliably. Klogic, Gnomelogic, etc - A lot of syntactic sugar for common theorems, while still allowing access to the underlying *NIX logic. And then there is Standard Logic (or the One True Logic), which is the ideal that they all strive towards..  none of the existing logics have made it IMHO (and I'm talking about logic here, not the software analogy) [ Parent ]
 That's GNU/Logic to you :) (nt) (none / 0) (#102) by ethereal on Tue Nov 05, 2002 at 02:06:05 PM EST

 -- Stand up for your right to not believe: Americans United for Separation of Church and State[ Parent ]
 Intuitionistic is more than Classical (none / 0) (#119) by affenmann on Fri Nov 08, 2002 at 05:22:34 PM EST

 >Not necessarily. Classical Logic may be slightly > stronger than we need (consider Intuitionistic > Logic for example) and there might be slightly > weaker systems that are just as useful in the > practical sense. I don't think it's right to think of Intuitionistic Logic as being weaker than Classical Logic. After all, an intuitionistic proof proves *more* than a classical proof of the same proposition, since it also gives effectiveness. Of course, not every classical proposition A is provable intuitionistically. However, for any classically true A one can prove ¬¬A intuitionistically (known as the double-negation translation). Very roughly, one can think of taking the double negation as forgetting the computational information. Therefore, there is a full embedding of classical logic in intuitionistic logic, and, in this sense, int. logic is stronger than classical logic. One finds a similar situation for Linear Logics, where the intuitionistic function space (A --> B) is factored into replication and the linear function space (!A --o B). Just to clarify - I'll shut up now :-) [ Parent ]
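The double-negation translation mentioned above can be sketched mechanically. This is an illustrative Gödel-Gentzen style translation on formula trees (my encoding, not the poster's): atoms and disjunctions acquire double negations, and a classically provable formula maps to an intuitionistically provable one.

```python
# Formulas as nested tuples: ("atom", name), ("not", A),
# ("and", A, B), ("or", A, B), ("imp", A, B).

def neg(a):
    return ("not", a)

def gg(formula):
    """Goedel-Gentzen style translation: if the input is classically
    provable, the output is intuitionistically provable."""
    op = formula[0]
    if op == "atom":
        return neg(neg(formula))          # atoms get a double negation
    if op == "not":
        return neg(gg(formula[1]))
    if op == "and":
        return ("and", gg(formula[1]), gg(formula[2]))
    if op == "imp":
        return ("imp", gg(formula[1]), gg(formula[2]))
    if op == "or":                        # A v B  ~~>  ~(~A' & ~B')
        return neg(("and", neg(gg(formula[1])), neg(gg(formula[2]))))
    raise ValueError(op)

# Excluded middle, A v ~A, which intuitionists reject as an axiom;
# its translation gg(lem) IS intuitionistically provable.
lem = ("or", ("atom", "A"), ("not", ("atom", "A")))
```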
 I liked it. (5.00 / 2) (#34) by Work on Sat Nov 02, 2002 at 11:36:29 PM EST

 This piece of writing is as good as any logic textbook. It's always amazed me how much logicians can get away with in their writings. Perhaps it's because publishing company proofreaders don't even bother to try to understand it. I have a pretty decent book that covers propositional and predicate calculus, entitled "A Logical Approach to Discrete Math" by David Gries and Fred Schneider. It's aimed at the computer scientist, but their writing is pretty general. Only in a few places have I seen some vagueness, and as far as logic books go, it is very clear and straightforward. Not surprisingly, the authors were on friendly terms with Dijkstra, who worked heavily on making logic more understandable and relevant to computing.
 blame TeX (none / 0) (#35) by turmeric on Sun Nov 03, 2002 at 12:14:33 AM EST

 evidently they think 'the computer can take care of the silly busywork of appearance'. So they kind of let that thinking leak elsewhere: 'the computer can take care of my unclear muddled paragraphs, irrelevant tangents, and boring style'. [ Parent ]
 not so (none / 0) (#42) by martingale on Sun Nov 03, 2002 at 06:49:17 AM EST

 TeX has nothing to do with it. In fact, quite a few people obsess over details TeX doesn't get right, like spacing and the size and shape of parentheses. The responsibility for unreadable books and articles comes from two directions. One is the mindset that mathematics speaks for itself: the beauty is in the succession of theorems proved, and motivation or other text has no place in it. The second is the need to produce volumes of material. In the past, when you weren't judged by the number of papers published, you could afford to write elegant papers. Now you've got to really want to do so, and of course in any sufficiently large sample, half the points are below the median :-) [ Parent ]
 well as long as you bash university (none / 0) (#53) by turmeric on Sun Nov 03, 2002 at 11:42:17 AM EST

 i am OK [ Parent ]
 hmmm (4.00 / 1) (#85) by uniball vision micro on Mon Nov 04, 2002 at 12:15:29 PM EST

 "This piece of writing is as good as any logic textbook. It's always amazed me how much logicians can get away with in their writings. Perhaps it's because publishing company proofreaders don't even bother to try and understand it." How permanent are the archives? I would hate to see things like this disappear. the death of one man is a tragedy the death of a million is a statistic Joseph Stalin[ Parent ]
 Another excellent article to add to my hotlist (none / 0) (#38) by kholmes on Sun Nov 03, 2002 at 01:08:48 AM EST

 Thanks. As always, I'll have to study the article (and links) in more depth later. But I have to wonder: are these alternative systems of logic useful to philosophy, or only to esoteric math? Are there any legitimate philosophies that require one of them? And are you like a math genius or what? If you treat people as most people treat things and treat things as most people treat people, you might be a Randian.
 Origins of Relevance Logic (4.50 / 2) (#40) by grestall on Sun Nov 03, 2002 at 04:19:10 AM EST

 Nice article, clear introduction. But the attribution of relevance (or relevant logic) to Mike Dunn is incorrect. Mike Dunn certainly had a lot to do with its development, but the true pioneers are Alan Anderson and Nuel Belnap (the authors of the first volume of Entailment), who taught Mike Dunn and Bob Meyer and many many others. Some of the history is recounted in my book An Introduction to Substructural Logics, which also says a fair bit about linear logic and related things. -- Greg Restall
 Laws of Form (none / 0) (#41) by AlephNull on Sun Nov 03, 2002 at 06:02:19 AM EST

 >> Incidentally I learned about Laws of Form from a British magazine Personal Computer World in 1982. That's where I first came across it as well. I managed to borrow a book from the library. Very fascinating to see a calculus being built from scratch, although it got a bit too deep for me after a while (especially when time got involved!). When I moved to London I couldn't find it anywhere. Even the big shops down Charing Cross Road didn't have it 'cos it was out of print. In the end I got my mum to borrow the book from the library and photocopy all the pages (!). I then spent many an hour punching holes and affixing reinforcers. I've still got it in a binder. Then, a few years back, I came across ABE Books and managed to buy a second-hand copy in pretty good condition (I think I paid about £20). So 18 years from first exposure to final closure. ------------------------- Political correctness is doubleplusungood.
 Poor Sheffers (none / 0) (#47) by caine on Sun Nov 03, 2002 at 09:41:11 AM EST

 I'm sorry I didn't have time to point it out while it was in edit, but using '|' for OR is bad, since it's the symbol used for Sheffer's line, defined thus:
A B  A|B
T T   F
T F   T
F T   T
F F   T
--
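The reason Sheffer's line gets a symbol of its own is that NAND by itself is functionally complete. A quick sketch (mine, for illustration) building NOT, AND and OR from it:

```python
# NAND alone generates the other connectives.
def nand(a, b):
    return not (a and b)

def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))   # negate the NAND
def OR(a, b):  return nand(nand(a, a), nand(b, b))   # NAND of negations

# Exhaustive check over all valuations:
for a in (True, False):
    assert NOT(a) == (not a)
    for b in (True, False):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
```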
 What am I missing..... (none / 0) (#54) by mindstrm on Sun Nov 03, 2002 at 11:42:46 AM EST

 Didn't you just show a nor table? [ Parent ]
 Basically yes (none / 0) (#58) by caine on Sun Nov 03, 2002 at 12:21:26 PM EST

 But it's used sometimes to build systems based on only a few logical symbols, and since there's an established notation for OR (v), it should be used instead of the sign for Sheffer's line (|). --[ Parent ]
 i think.. (none / 0) (#60) by Work on Sun Nov 03, 2002 at 01:36:05 PM EST

 the author was trying to relate the symbols logicians use with the symbols frequently encountered by programmers (which outnumber logicians around here). You're right though, v should've been used for OR and perhaps ^ for AND. [ Parent ]
 I assume you mean... (none / 0) (#69) by Delirium on Sun Nov 03, 2002 at 10:04:23 PM EST

 ...NAND. I've also never seen | used as the NAND symbol; what I have seen is an upwards-pointing vertical arrow (not possible to render in HTML, afaik). [ Parent ]
 upwards arrow (none / 0) (#88) by Work on Mon Nov 04, 2002 at 03:33:08 PM EST

 In some books, the upwards arrow refers to Ceiling and downward arrow refers to Floor. [ Parent ]
 gah (none / 0) (#95) by Delirium on Mon Nov 04, 2002 at 09:46:56 PM EST

 Fun with notation. FWIW, the notations I've seen use a pair of symbols looking like L and its mirror image (across the y axis) for floor (basically the | | magnitude symbols but with a horizontal "floor"), and the mirror image of those two symbols across the x axis for ceiling. [ Parent ]
 Exercise (2.50 / 2) (#48) by Rasman on Sun Nov 03, 2002 at 10:44:20 AM EST

 What does ~H~ mean in English? Here's a guess: ~H~X means, "It is not the case that hitherto X was not true", which means, "X has been true at some point in the past." --- Brave. Daring. Fearless. Clippy - The Clothes Pin Stuntman
 Of course. (none / 0) (#98) by Rasman on Tue Nov 05, 2002 at 03:37:42 AM EST

 I just used "hitherto" because that's what you said "HX" was when defining the exercise. Personally I think using H as a temporal function abbreviating a word containing "hither" is pretty stupid. Either before/after or ante/post would be much better. --- Brave. Daring. Fearless. Clippy - The Clothes Pin Stuntman[ Parent ]
 Not quite... (none / 0) (#89) by No Neck Joe on Mon Nov 04, 2002 at 03:44:00 PM EST

 The definition you just wound up with there--"X has been true at some point in the past"--is H(X). We want the dual of H: the opposite of its opposite, essentially: "X was (not-hitherto) untrue." Or "X will be false in the future." [ Parent ]
 Yes, quite. (none / 0) (#97) by Rasman on Tue Nov 05, 2002 at 03:32:47 AM EST

 H(X) means that X has hitherto been true. This means that it has always been true in the past. Hence you're wrong. Or to quote the article: "HX then means X was always true up to the time specified." --- Brave. Daring. Fearless. Clippy - The Clothes Pin Stuntman[ Parent ]
 Oops. Granted. (none / 0) (#104) by No Neck Joe on Tue Nov 05, 2002 at 03:32:31 PM EST

 Sorry, I reread my notes more carefully. Thanks. [ Parent ]
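For anyone still unsure, the exercise can be checked over a finite history. This is my own sketch, with H read as the article's "hitherto always":

```python
# `history` lists X's truth value at past times 0 .. now-1.
def H(history):
    """HX: X was true at every past moment."""
    return all(history)

def dual_H(history):
    """~H~X: it is NOT the case that X was hitherto false,
    i.e. X was true at SOME past moment."""
    return not H([not x for x in history])

assert H([True, True, True])             # X always held
assert not H([True, False, True])
assert dual_H([False, False, True])      # X held at least once
assert not dual_H([False, False, False])
```

This bears out Rasman's reading: ~H~X says "X has been true at some point in the past".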
 Awesome! (none / 0) (#49) by merkri on Sun Nov 03, 2002 at 11:22:36 AM EST

 Really liked this article. I was actually thinking to myself the other day "I wonder if someone will ever do an article on modal logic." Now that someone has, and it's even more broad than modal logic, great! I love modal logic. I got interested in modal logic not because of logic per se but because of psychological measurement problems. In particular, I got interested in it because of issues with counterfactuals (issues surrounding the truth value of statements such as "If X were to be the case, Y would be the case.") It's relevant to psychological measurement because much of it makes assumptions about people that are counterfactual in nature--you never have even observed someone in some situation, but try to make statements about them to the effect of "this is the sort of person, that if you would put them in such-and-such situation, they would do Y." Eventually, this all led me to work on possible worlds, naming, and necessity, and by authors such as Kripke and Lewis. I'd highly recommend works of either Kripke or Lewis for those who are interested in modal logic or possible worlds. They both make incredibly convincing arguments for the utility of possible-world arguments (Lewis's position in Counterfactuals, about the reality of possible worlds, may ostensibly seem controversial,  but is more compelling if you consider distinctions between reality and existence). Modal logic is fascinating to me because of its many connections with statistics and statistical reasoning, which is one of my main areas of interest. The problem of how to think about possible worlds, possible scenarios, is intimately related to statistical problems and provides a satisfying theoretical "base" for the latter. It's been fascinating to me to see how individuals in various realms of thought converge on each other in thinking about issues such as causality, counterfactuals, and stochastic reality. Interesting stuff. Glad to see this stuff being brought to the attention of more people.
 Am I missing something... (none / 0) (#52) by mindstrm on Sun Nov 03, 2002 at 11:40:29 AM EST

 | | 0 1
 --+-----
 0 | 0 1   - no, should be 1
 1 | 1 1
 No? (none / 0) (#57) by Work on Sun Nov 03, 2002 at 12:02:15 PM EST

 false v false is false. 0 | 0 = false. [ Parent ]
 Right (none / 0) (#63) by carbon on Sun Nov 03, 2002 at 04:49:34 PM EST

 But that was false v true, which is true. The answer column is the first one. Wasn't Dr. Claus the bad guy on Inspector Gadget? - dirvish[ Parent ]
 True.. (none / 0) (#64) by mindstrm on Sun Nov 03, 2002 at 06:54:40 PM EST

 but he has the right two columns as the terms, and the left as the value.... he has 1|0 1.. which means (0 * 1 = 0) which is wrong.. it should = 1. [ Parent ]
 Ambiguous (none / 0) (#71) by silk on Sun Nov 03, 2002 at 10:18:19 PM EST

 I believe he was representing a table, so an HTML representation of the above would have been:
| | 0 1
0 | 0 1
1 | 1 1
The | character was used as both a column delimiter and as or in the article. [ Parent ]
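Generating the article's table mechanically confirms this reading (a sketch of mine; row label on the left, column label on top, | doing double duty as both OR and table rule):

```python
# Build and print the OR truth table (0 = false, 1 = true).
rows = ["| | 0 1", "--+----"]
for a in (0, 1):
    rows.append(f"{a} | " + " ".join(str(a | b) for b in (0, 1)))
print("\n".join(rows))
# The "0" row really does read "0 | 0 1":
# false-or-false is false, false-or-true is true.
```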
 Yes.. (none / 0) (#106) by mindstrm on Tue Nov 05, 2002 at 03:35:14 PM EST

 That's what I understood. In the table, | is just a table separator. The result is on the left. And 0 | 0 1   meaning (false or true) = false is .. .. false :) [ Parent ]
 My typo. (none / 0) (#105) by mindstrm on Tue Nov 05, 2002 at 03:32:51 PM EST

 Of course, I made a typo.. what he had was 0 | 0 1 Which is wrong [ Parent ]
 Laws of Form (none / 0) (#56) by khallow on Sun Nov 03, 2002 at 12:00:41 PM EST

 I've played with the circle logic of Spencer-Brown. My problem with it is that the 2-D representation doesn't seem to add that much. It does make commutativity and associativity trivial aspects of the logic though. Stating the obvious since 1969.
 Yes Please!! (none / 0) (#115) by mumble on Wed Nov 06, 2002 at 04:24:07 AM EST

 "I might do a story on the knots and physics stuff some time." Yes Please!!! ----- stats for a better tomorrow bitcoin: 1GsfkeggHSqbcVGS3GSJnwaCu6FYwF73fR "They must know I'm here. The half and half jug is missing" - MDC. "I've grown weary of googling the solutions to my many problems" - MDC. [ Parent ]
 Hmmm (none / 0) (#66) by jmzero on Sun Nov 03, 2002 at 07:30:45 PM EST

 Perhaps I missed the part in your article where you clarified the mechanics of "ex falso quodlibet" (whereby we reason from a falsehood to an arbitrary statement).
 1. A & ~A (the inconsistency)
 2. A (from 1)
 3. A | B (from 2, where B is arbitrary)
 4. ~A (from 1)
 5. B (from 3 and 4)
This is worth noting, as it limits the ways in which we may attempt refuge from inconsistency. . "Let's not stir that bag of worms." - my lovely wife
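The derivation above has a semantic counterpart: under every valuation, a contradiction materially implies anything. A brute-force check (mine, for illustration):

```python
# Verify that (A and not A) -> B holds for every assignment of A and B.
from itertools import product

def implies(p, q):
    """Material implication: p -> q is (not p) or q."""
    return (not p) or q

assert all(implies(a and (not a), b)
           for a, b in product([True, False], repeat=2))
```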
 Fuzzy and probability (none / 0) (#70) by epepke on Sun Nov 03, 2002 at 10:06:38 PM EST

 In probability, P(A^B) = P(A)P(B), and P(A|B) = 1-(1-P(A))(1-P(B)). Some fuzzy systems use an analogue to this, so they effectively are probability as logic. However, IMO the most useful fuzzy systems use A^B=min(A,B) and A|B=max(A,B). This corresponds to a common saying: "a chain is only as strong as its weakest link." The truth may be out there, but lies are inside your head.--Terry Pratchett
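The two operator families can be put side by side in a few lines (a sketch of mine; the membership values are arbitrary):

```python
# "Probabilistic" operators vs. the min/max (Zadeh) operators.
def prob_and(a, b):  return a * b                   # product AND
def prob_or(a, b):   return 1 - (1 - a) * (1 - b)   # probabilistic OR
def zadeh_and(a, b): return min(a, b)               # min AND
def zadeh_or(a, b):  return max(a, b)               # max OR

a, b = 0.5, 0.5
assert prob_and(a, b) == 0.25   # product decays with each conjunct...
assert zadeh_and(a, b) == 0.5   # ...min is "as strong as the weakest link"
assert prob_or(a, b) == 0.75
assert zadeh_or(a, b) == 0.5
```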
 On fuzzy logic (5.00 / 3) (#72) by Pseudonym on Sun Nov 03, 2002 at 10:28:38 PM EST

 For the benefit of those looking in: the rules are different because the meanings are different. Bayesian logic and fuzzy logic are based on different meanings of set membership. You can think of a proposition P(x) as meaning that x is a member of a set, P. Bayesian logic is based on the idea that it is not necessarily known whether x is a member of P, and the best we know is a probability. Fuzzy logic, on the other hand, is based on fuzzy set membership. Consider, for example, W, which is the set of great weblog sites. K5 is absolutely a great weblog site, so K5 is an element of W. Consider, on the other hand, the other site. It's okay, but it's not great. This is not a probability relationship. Rather, TOS is only partly a member of W. We represent partial membership with a number between 0 and 1. The rules of fuzzy logic are motivated by actual psychology experiments, where people are shown some images, and they are asked, say, whether the object portrayed is "blue", whether it's a "cup" and whether it's a "blue cup". (They are asked to rank these out of ten; so something which is almost blue might get a rank of 8, something which is almost a cup (e.g. a goblet) might get a rank of 9 and so on.) It was found that people use a function pretty close to the minimum function to implement conjunction. Fascinating stuff. sub f{(\$f)=@_;print"\$f(q{\$f});";}f(q{sub f{(\$f)=@_;print"\$f(q{\$f});";}f});[ Parent ]
 still don't get it (none / 0) (#87) by fhotg on Mon Nov 04, 2002 at 02:12:29 PM EST

 and it's not the first time I try to. K5 is absolutely a great weblog site, Probabilities are actually probability-measures, they describe the 'size' of a set. (That set is not W, btw, here it's the elementary, two-element set (0,K5). In fact, the intuition of thinking about probabilities as size goes a long way. Now K5 is great, for sure. Means nearly everybody thinks its great. As great as it gets in a measurement space, it has size 1: P(K5)=1. We represent partial membership [TOS] with a number between 0 and 1. Yeah, let's base this in empirics and note that 2 out of ten find it great, so P(TOS)=0.2. All still in accord with probability theory. In the example with the cup, you just ask people to determine probabilities for certain objects for 'blueness' and 'cupness'. Fuzzy number for blue cup = P_blue(Object)*P_cup(Object)= P((Object in 'blue') ^ (Object in 'cup')). What's the difference ? I don't get it. ~~~ Gitarren für die Mädchen -- Champagner für die Jungs[ Parent ]
 Probability vs fuzzy membership (none / 0) (#107) by Pseudonym on Tue Nov 05, 2002 at 06:31:13 PM EST

 You're thinking about fuzzy set membership the wrong way. It is not a probability. Probability depends on a proposition P being either objectively true or objectively false. (Or subjectively, if you're working in modal logic.) In the example above, it is not true that a weblog site is either objectively great or objectively not great. There are grey areas (it might be not bad, or close to worthless but not without redeeming features). An example, where fuzzy logic has been applied in practice, is the question of what it means to be "hot". There is not a certain temperature level above which the temperature is "hot" and below which it is "not hot". There's always a grey area in the middle where it is "a bit hot". This is not the same as there being a 35% chance that it is hot. Did that help? sub f{(\$f)=@_;print"\$f(q{\$f});";}f(q{sub f{(\$f)=@_;print"\$f(q{\$f});";}f});[ Parent ]
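The "hot" example can be sketched as a graded membership function. The 25 and 35 degree breakpoints below are invented for illustration:

```python
# Piecewise-linear fuzzy membership in the category "hot".
def hot(temp_c):
    """0 below 25C, 1 above 35C, graded in between -- the grey area."""
    if temp_c <= 25:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 25) / 10

assert hot(20) == 0.0   # definitely not hot
assert hot(30) == 0.5   # "a bit hot" -- not a 50% CHANCE of being hot
assert hot(40) == 1.0   # definitely hot
```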
 fuzzy buzz (none / 0) (#108) by fhotg on Tue Nov 05, 2002 at 07:28:53 PM EST

 You're thinking about fuzzy set membership the wrong way. It is not a probability. I'm under the impression that it can be equivalently expressed as a probability, though, and therefore in principle also 'is' the same. If an element X is an element of fuzzy sets A and ~A, the fuzzy logical value of the proposition "X is an element of A" is somewhere in ]0,1[. Right? You might just as well regard the two possible outcomes of the probability experiment asking "Is X in A?" as constituting a probability space with P(X in A) = Fuzzy-Logic Value(X in A). Probability depends on a proposition P being either objectively true or objectively false. In a way yes, since probability theory (like most math) is built on ZFC. But that's not the point. I see yours though: There's always a grey area in the middle where it is "a bit hot". This is not the same as there being a 35% chance that it is hot. In everyday language, this is obviously true. Formally, however, I still fail to see that the probability and fuzzy treatments are not the same. Your example deals (like many fuzzy examples) with a subjective value. As far as I can imagine, this can always be modelled (and the only way to empirically assign values to it is) by testing lots of people's opinions on whether it's hot or not. So let 100 people touch the object at various temperatures. All say it's hot when they burn their fingers => P(Temp_ouch is hot) = 1. At each point in the grey middle, you get something like 35 saying it's hot => P(Temp_maybe is hot) = 0.35. Now my question is whether fuzzy logic is just a more intuitive description of the same thing - better adapted to everyday understanding and maybe easier to implement - or whether it's truly different, in that you can do things with it that can't be done with probability theory, or whether there are circumstances where the same phenomenon yields different outcomes depending on whether it is described in fuzzy or probability terms. ~~~ Gitarren für die Mädchen -- Champagner für die Jungs[ Parent ]
 aha (none / 0) (#111) by fhotg on Tue Nov 05, 2002 at 10:03:52 PM EST

 I'm confused by your difficulty. Yeah, stupidity can be confusing. I see. But it's still very related to probability theory, and now it looks to me as if probability measures are still fuzzy measures (but not the other way round anymore). I think this is a pretty serious problem with fuzzy logic - but if you fulfill it, and the normalization axiom, you'll end up with probability again. After reading up on fuzzy logic, my main beef with it is that the rules can be defined quite arbitrarily, as fits the application. A hint in return: stay away from Wikipedia for technical info. They're usually wrong or imprecise. For a math reference, Eric Weisstein's MathWorld works much better. ~~~ Gitarren für die Mädchen -- Champagner für die Jungs[ Parent ]
 no (5.00 / 2) (#110) by martingale on Tue Nov 05, 2002 at 09:42:19 PM EST

 Fuzzy logic is a generalized set membership. It only agrees with probability in trivial cases: An example is when the probability is a point mass, so that for any measurable set A, P(A) = 0 or 1. This corresponds to a probabilistic system where there is no uncertainty at all. If you take the fuzzy logic special case of classical logic, the two agree. The only other thing fuzzy logic and probability have in common is the notion of negation. In both cases P(not A) = 1 - P(A), whether P is a probability, any probability, or P is fuzzy membership. Probability differs from fuzzy logic in the rules for combining statements (and, or) and the fact that probability cannot assign values for all imaginable statements (only measurable sets can have a probability value), while fuzzy logic has no such difficulty. You can informally think of fuzzy membership as multiple membership. In fuzzy logic, all statements are *both* true and false, but the mixture is given by the fuzzy value. In probability, statements are usually *neither* true nor false, it's just an average. Any realization is either true or false. [ Parent ]
 Thanks (none / 0) (#112) by fhotg on Tue Nov 05, 2002 at 10:13:03 PM EST

 Meanwhile I read it up. (only measurable sets can have a probability value) and this is a good idea. The special award goes to you if you can come up with a statement referring to some sort of application which must be modelled (fuzzywise) using a non-measurable set :). ~~~ Gitarren für die Mädchen -- Champagner für die Jungs[ Parent ]
 Hotness (5.00 / 1) (#113) by Pseudonym on Tue Nov 05, 2002 at 10:29:27 PM EST

 In your example, you get 100 people to rate a temperature as either "hot" or "not hot". What you get is something like a probability measure. What I propose instead is to get 100 people (or even one person) to rate a temperature's "hotness" on a scale of 1 to 10 (or, rather, on a scale of 0 to 1, where 0 means "definitely not hot" and 1 means "definitely hot"). Fuzzy logic's rules are somewhat arbitrary, in that the axioms are motivated by psychological tests rather than on combinatorial theory. It is intended to be something like the way that humans implement conjunction and disjunction on fuzzy categories. It has found a lot of use in control systems where you have a number of sensors and a number of controls and you need to implement rules such as "if sensor A's value goes too high, turn up control B". This can be thought of as a kind of implication over the fuzzy categories "too high on sensor A" and "control B turned up". sub f{(\$f)=@_;print"\$f(q{\$f});";}f(q{sub f{(\$f)=@_;print"\$f(q{\$f});";}f});[ Parent ]
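A minimal sketch of such a rule (all breakpoints and scaling below are invented for illustration):

```python
# Fuzzy control rule: "if sensor A's value is too high, turn up control B".
def too_high(reading):
    """Fuzzy membership in 'too high': 0 at or below 50,
    rising linearly, clipped at 1 from 100 on."""
    return min(max((reading - 50) / 50, 0.0), 1.0)

def control_b(reading):
    """The rule's strength is the degree to which its premise holds;
    here it directly sets how far control B is turned up."""
    return too_high(reading)

assert control_b(50) == 0.0    # not high at all: leave B alone
assert control_b(75) == 0.5    # somewhat high: turn B halfway up
assert control_b(120) == 1.0   # clipped: B fully up
```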
 No point in arguing (none / 0) (#116) by epepke on Wed Nov 06, 2002 at 06:48:16 PM EST

 Lots of people have argued probability theory versus fuzzy logic, and it usually boils down to philosophy and preference. There's no point in arguing over it. Fuzzy is different from probability because the rules are different. Apart from that, to paraphrase Frank Zappa, if you like it, it's bitchen, and if you don't, it sucks. The truth may be out there, but lies are inside your head.--Terry Pratchett[ Parent ]
 probability (3.50 / 2) (#75) by martingale on Mon Nov 04, 2002 at 12:05:44 AM EST

The rule P(A^B) = P(A)P(B) is only applicable if the statements A and B are independent. In general, the rule you quote doesn't apply. In fact, people would generally argue about the correct probability to use much more than about the calculation, which is mechanical after that. Note that arguing about the probability measure is a problem of inference, and is damn hard.
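The independence caveat can be checked concretely on a tiny sample space. This is a made-up illustration (two fair coin flips under the uniform measure), not anything from the article:

```python
from fractions import Fraction

# Sample space: two fair coin flips, all four outcomes equally likely.
space = [(a, b) for a in "HT" for b in "HT"]

def P(event):
    # Probability of an event (a set of outcomes) under the uniform measure.
    return Fraction(sum(1 for w in space if w in event), len(space))

A = {w for w in space if w[0] == "H"}   # first flip is heads
B = {w for w in space if w[1] == "H"}   # second flip is heads

# A and B are independent, so the product rule holds:
assert P(A & B) == P(A) * P(B)          # 1/4 == 1/2 * 1/2

# A is maximally dependent on itself, and the product rule fails:
assert P(A & A) != P(A) * P(A)          # 1/2 != 1/4
```

The failing case makes martingale's point: the formula P(A^B) = P(A)P(B) is a property of the chosen probability measure, not a universal law of "and".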
 You're right, of course (none / 0) (#80) by epepke on Mon Nov 04, 2002 at 10:29:04 AM EST

I was just trying to point out the difference between the calculation of "and" and "or", not get into the details of probability.
The truth may be out there, but lies are inside your head. --Terry Pratchett
 More on arrows (none / 0) (#73) by Pseudonym on Sun Nov 03, 2002 at 10:43:39 PM EST

Haskell researchers (most notably John Hughes) have found that arrows are actually a good way to represent certain kinds of computation, such as parsing and event-driven programming. Some of the gory details are available if anyone is curious.
sub f{($f)=@_;print"$f(q{$f});";}f(q{sub f{($f)=@_;print"$f(q{$f});";}f});
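For readers without Haskell, a rough flavour of the arrow interface can be sketched in Python. This is a loose, illustrative imitation of the composition (`>>>`) and `first` combinators Hughes describes, with names of my own choosing, not any library's actual API:

```python
class Arrow:
    """A minimal function arrow: wraps a function a -> b and supports
    sequential composition and running on the first half of a pair."""

    def __init__(self, f):
        self.f = f

    def __rshift__(self, other):
        # self >> other: feed this arrow's output into the next one,
        # loosely imitating Haskell's (>>>).
        return Arrow(lambda x: other.f(self.f(x)))

    def first(self):
        # Apply the arrow to the first component of a pair,
        # passing the second component through untouched.
        return Arrow(lambda pair: (self.f(pair[0]), pair[1]))

    def __call__(self, x):
        return self.f(x)

inc = Arrow(lambda x: x + 1)
dbl = Arrow(lambda x: x * 2)

print((inc >> dbl)(3))            # 8
print(inc.first()((3, "tag")))    # (4, 'tag')
```

Real arrows generalise this beyond plain functions (to parsers, stream transformers, etc.), which is where the abstraction earns its keep.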
 Is it really alternative logic? (2.50 / 2) (#74) by Big Sexxy Joe on Sun Nov 03, 2002 at 11:57:16 PM EST

Are these systems really independent of classical logic? Or are they mathematical constructs built on top of classical logic? Is classical logic used to prove the theorems in these other systems? Great article, alternative account!
I'm like Jesus, only better. Democracy Now! - your daily, uncensored, corporate-free grassroots news hour
 my logic is telling me... (none / 0) (#78) by johwsun on Mon Nov 04, 2002 at 04:35:48 AM EST

 ..that this is a very interesting article..
 An Interesting Read... (none / 0) (#117) by thanos on Wed Nov 06, 2002 at 08:59:59 PM EST

...that in part concerns the logic of language and the attempt to construct a logically perfect language is the Tractatus Logico-Philosophicus by Ludwig Wittgenstein.
Savinelli testified that Pickard said on two occasions that he had accidentally spilled LSD on himself, dosing himself with the drug. Pickard acted "giddy" and was less focused and organized for about a month after the second dosing.
 Visual mathematics (none / 0) (#120) by antiroger on Sat Nov 09, 2002 at 10:59:33 PM EST

 A recommendation... (none / 0) (#121) by jurgisb on Sun Nov 10, 2002 at 10:39:09 AM EST

You seem to be interested, so I recommend to you the book "Gödel, Escher, Bach: an Eternal Golden Braid" by Douglas R. Hofstadter. It should teach you everything about mathematics and thinking itself that school should have taught you, but did not. I'm currently reading it, and even though I have never had any problems with abstract reasoning (in fact, I was quite talented at it), this book adds to my lore immeasurably, and keeps adding. Good luck!
 if it's irrelevant, forget about it... (none / 0) (#126) by kiwipeso on Mon Nov 18, 2002 at 04:00:34 PM EST

To be honest, start with algebra; then you don't really need to fully honour names kept for trivial or historical purposes. There's nothing wrong with using algebraic imagery for complicated concepts: you could always start there and then go for it.
Kaos operating system creator.
 Some things not quite right. (none / 0) (#122) by Estanislao Martínez on Tue Nov 12, 2002 at 05:36:34 AM EST

First of all, nobody nowadays (i.e. since the 70s) thinks of modal logic as an "alternative" to classical logic at all. Modal operators are just modal operators; you can stick them into a logic that's classical, intuitionistic, linear (the bang operator in linear logic IIRC has the same proof rules as the modal box operator), relevant, etc. The standard modal logics are all classical, and in fact, propositional modal logic is simply a fragment of classical first-order logic. Same thing with logics higher than first order: this is completely orthogonal to whether a logic is classical or not. What makes classical logic classical is its treatment of the sentential connectives, period.

Also, your discussion of Brouwer and Kronecker misses a crucial fact: their rejection of the LEM has to do with quantification over infinite domains; both accept it for finite domains. I.e. it is trivial to show that, for some (computable) property P over the natural numbers up to 100, either there exists at least one number which has P or there exists no number which does. The procedure is simple: you just check all the numbers up to 100, one by one; by the time you're done, one of the disjuncts has to have been true, thus the whole formula is true. However, when you try this for the whole of the natural numbers, you are just not guaranteed that some number will have P, and you never finish checking that no natural number has P.

Hell, the catalog of logics is endless. Dynamic logic. Infinitary logics. Diagrammatic reasoning systems (e.g. Venn diagrams, Euler diagrams, Peirce's quantificational diagrams). Various sorts of substructural logic (the Lambek Calculus, and the various spinoff systems that Dutch grammarians are deriving from it). Paraconsistent logics of many sorts. And if that were not enough, there's no end of different semantics available for many logics: for modal logic, e.g. Kripke frames, Tarski's topological interpretation, game-theoretic interpretations of various sorts, geometrical interpretations (a Ph.D. dissertation was done on this recently at Amsterdam), and so on. If you want the ultimate strange logic, there's Independence-Friendly logic, where quantifiers don't have to scope strictly over each other.

I must stop here. The point is simple: contrary to myth, there is nothing obvious about logic.
--em
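The finite-domain procedure the comment describes is literally a terminating loop. A minimal sketch, where the property P is an arbitrary made-up computable predicate chosen purely for illustration:

```python
def P(n):
    # An arbitrary computable property, purely illustrative:
    # n is divisible by 7 and leaves remainder 2 modulo 5.
    return n % 7 == 0 and n % 5 == 2

def lem_witness(prop, limit=100):
    # Decide "there exists n <= limit with prop(n)" by exhaustive check.
    # Because the domain is finite, the loop always terminates, and on
    # termination exactly one disjunct of the LEM instance stands verified:
    # either we hold a witness, or we have checked every number.
    for n in range(limit + 1):
        if prop(n):
            return ("exists", n)
    return ("none", None)

print(lem_witness(P))
```

Over all the natural numbers no such loop is guaranteed to terminate, which is exactly the intuitionists' complaint: neither disjunct of the LEM instance can, in general, be verified.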
 Is there a universal logic? (none / 0) (#125) by Vishakhadutt on Sun Nov 17, 2002 at 10:49:10 PM EST

I have also enumerated some logical systems in my reply #124. However, after reading your note, I have strengthened my opinion that logic is just our way of looking at things. It is not discovering how things work but surmising why they work that way. There is very little chance of combining all forms of logic or discovering a universal logic at the back end of all logical systems. Logic is not a driver, it's just a chart.
 Very Unfair Attitude and Reasoning (none / 0) (#123) by Vishakhadutt on Thu Nov 14, 2002 at 08:29:15 AM EST