Kuro5hin.org: technology and culture, from the trenches
Subconsciously, People may be Bayesian

By leoaugust in Science
Tue Jan 20, 2004 at 05:18:20 PM EST
Tags: News

David Leonhardt writes in a NY Times article about people playing the odds of everyday life with Bayesian analysis. He describes new research, recently published in Nature, "which stands out because it offers a detailed window into how the Bayesian thought process works, showing the point when uncertainty becomes great enough to give past experience an edge over current observation." Bayesian analysis, among researchers, is "the combining of new information with conventional wisdom." I do agree about their reliance on past observations, but I believe that they have underestimated the role of future orientation in the whole mix of decision making.


"The human brain knows about Bayes's rule," said Konrad P. Körding, a postdoctoral researcher at the Institute of Neurology in London, who conducted the study published in Nature along with Daniel M. Wolpert, a professor at the institute. (In addition to an excellent article An Intuitive Explanation of Bayesian Reasoning there is a description and links to more resources on Bayes Theorem here.) "The more uncertainty that people face -- be it caused by wind on a tennis court, snow on a football field or darkness on a country highway -- the more they make decisions based on their subconscious memory and the less they depend on what they see."

Even in everyday occurrences like crossing a street or taking a child to a doctor, there is a lot of computation going on in the mind to apply past experience to the present situation.

  • "When crossing a street, people rely on both what they see and what they remember about the speed of cars on similar roads.
  • When deciding whether to take a sick child to a doctor, parents consider the current symptoms as well as the child's history and their general knowledge of illness."
"There seems to be little time for highly advanced quantitative analysis that weighs current observations against past experiences to suggest a plan of attack. But this kind of analysis is precisely what the human brain does when facing a physical challenge." "Most decisions in our lives are done in the presence of uncertainty," Dr. Körding said. "In all these cases, the prior knowledge we have can be very helpful. If the brain works in the Bayesian way, it would optimally use the prior knowledge."
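The claim that uncertainty shifts weight toward memory has a compact mathematical form. A minimal sketch (hypothetical numbers, and assuming Gaussian noise, which is roughly the setting of the Nature experiment): the statistically optimal estimate is a precision-weighted average of prior experience and the current observation.

```python
# Sketch of the Bayesian combination described above, with hypothetical
# numbers: under Gaussian assumptions, the optimal estimate is a
# precision-weighted average of prior experience and current observation.

def combine(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior and Gaussian observation."""
    w = prior_var / (prior_var + obs_var)        # weight given to the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = (prior_var * obs_var) / (prior_var + obs_var)
    return post_mean, post_var

# Clear day: the observation is sharp, so it dominates past experience.
print(combine(prior_mean=50.0, prior_var=100.0, obs=70.0, obs_var=1.0))

# Snowy field: the observation is noisy, so memory gets most of the weight.
print(combine(prior_mean=50.0, prior_var=100.0, obs=70.0, obs_var=900.0))
```

On the clear day the posterior lands near the observation (about 69.8); in the snow it stays near the prior (52.0), which is exactly the "more uncertainty, more memory" behavior the study reports.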

... that weighs current observations against past experiences to suggest a plan of attack. The current focus of much work is to incorporate past experiences into the present. This is a good premise for computers and machines, because for them only the past and present exist; it is beyond them to conceive of the future. But I believe that ignoring the future orientation of humans -- what can happen and what cannot, what should happen and what should not, what must happen and what must not -- is ignoring one of the key dimensions in models of the human decision-making process.

There is an excellent book called "Time and the Inner Future" by F.T. Melges, and one of the concepts that he brings out very well is that in addition to the past and the present, our conception of the future is critical. Paraphrasing from a course description of The Psychology of Time:

A final dimension which exerts a major influence upon everyday behavior is temporal perspective: one's relative orientation toward the past, the present, and the future.
Clinical psychologists agree that this orientation is central to one's mental well-being and the degree of ego strength displayed in coping with life's difficulties.

And to answer the question of why some people seem to be better at making decisions, the study in Nature suggests that "the most likely explanation may be that some people are quite good at subconsciously using statistical techniques and others are far less so." This is an area where I think another dimension must be added to the research. Statistical techniques are just one of four fundamental techniques that we use in decision making.

  1. The first is deterministic decision making, where there is a straightforward relationship: if A then B, if B then C, and so on. If we have all the pieces of knowledge, we can apply deterministic techniques to decision making.
  2. But if there are too many pieces, we can select a small sample from the population so that the sample is representative of the population. Then we can apply statistical techniques to the sample, and because the sample is representative, it reflects the population's behavior.
  3. If the system is larger and we cannot see the whole population, and the sample does not necessarily reflect the behavior of the population, then we have to use probability distributions like the binomial and Poisson - i.e., apply knowledge obtained from outside the confines of the sample. This is probabilistic decision making. As the NY Times article says, "some researchers remain skeptical that the human mind works like the coldly rational Bayesian machine suggested by the Nature paper. "I'm quite comfortable with the idea that people use probability," said Dr. Stigler, the Chicago statistician. "The idea that it's associated with a Bayesian approach is not quite clear.""
  4. Finally, we have those situations in which we don't have all the pieces of information (deterministic), the sample does not reflect the behavior of the population (statistical), and the known probability distributions are not applicable (probabilistic) to the sample or population. In these cases we resort to cognitive decision making, where the person brings their unique cognitive powers and instincts to the issue to decide what to do. This cognitive decision making can never be taken over by a machine, nor can it be fully modelled, as each individual is unique.
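The third, probabilistic mode can be made concrete with a short sketch (numbers made up): the distribution is imposed from outside the sample rather than estimated from it.

```python
import math

# Probabilistic decision making, sketched with made-up numbers: we never see
# the whole population, so we impose a distribution chosen from outside the
# sample.

def poisson_pmf(k, lam):
    """P(exactly k events) under a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def binomial_pmf(k, n, p):
    """P(exactly k successes in n trials), each succeeding with probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# If a street averages 3 cars per minute, chance of at most 1 car this minute:
p_at_most_one = poisson_pmf(0, 3.0) + poisson_pmf(1, 3.0)

# Chance of exactly 2 rainy days out of 5, if each day is 30% likely:
p_two_rainy = binomial_pmf(2, 5, 0.3)

print(round(p_at_most_one, 4), round(p_two_rainy, 4))  # 0.1991 0.3087
```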

The research into how we incorporate past experiences with present stimuli to make decisions is very interesting, but in addition to past experiences, our future orientation should also be incorporated in these models. And further, instead of looking at them as statistical techniques, we should look at them as an evolving lifecycle of deterministic, statistical, probabilistic, and cognitive techniques.

Subconsciously, People may be Bayesian | 124 comments (106 topical, 18 editorial, 8 hidden)
Bayes Theorem (2.77 / 9) (#4)
by Scrymarch on Tue Jan 20, 2004 at 05:02:21 AM EST

I see no evidence that human rationality is vulnerable to the same hacks as Bayesian decision making.  For instance, the problem where the payout of a 50/50 bet is doubled after each failed iteration.  It's Bayesian rational to bet everything you have, but few people would.  It might be worth mentioning this, or at least that Bayesian decision making is evaluating decisions by calculating probability*utility.
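For concreteness, here is a single-round sketch of that tension (numbers are mine, not from the original problem): with linear utility the lopsided bet always looks good; with log utility, a common model of human risk aversion, it does not.

```python
import math

# One round of a lopsided bet (my own numbers): win `payout` or lose `stake`,
# each with probability 1/2.

def ev_linear(wealth, stake, payout):
    """Expected change in wealth: the linear-utility ('bet everything') view."""
    return 0.5 * (wealth + payout) + 0.5 * (wealth - stake) - wealth

def ev_log(wealth, stake, payout):
    """Expected change in log-wealth: a simple risk-averse utility."""
    return (0.5 * math.log(wealth + payout)
            + 0.5 * math.log(wealth - stake)
            - math.log(wealth))

wealth, stake = 100.0, 99.0          # betting nearly everything
for payout in (100.0, 200.0, 400.0):
    print(payout, ev_linear(wealth, stake, payout) > 0,
          ev_log(wealth, stake, payout) > 0)
```

Each line prints True for the linear criterion and False for the log criterion: doubling the payout never rescues a bet that can leave you with almost nothing, which matches what most people actually do.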

Editorial: Title suggestion: Subconscious Use of Bayes Theorem.  Also, I know Jakob Nielsen says to use highlighting in order to aid skimming and microcontent, but a k5 article is closer to a newspaper article or an essay than a corporate web page, and it destroys your prose, forcing my eye to skip when it wants to read.

Title and Highlighting (none / 2) (#8)
by leoaugust on Tue Jan 20, 2004 at 05:32:00 AM EST

Thanks.

I have reduced the highlighting.

Your suggested title is more elegant, but I was trying to play on the title of the original NY Times article, i.e. "Subconsciously, Athletes May Play Like Statisticians."


The eyes cannot see what the mind cannot see.
[ Parent ]

fear not (2.40 / 5) (#15)
by martingale on Tue Jan 20, 2004 at 06:51:46 AM EST

Humans won't be vulnerable to Bayesian hacks anytime soon. This is mostly because humans do not generally, and specific humans do not often, reason entirely rationally.

In 1946, R. T. Cox showed that Bayesian reasoning is the only way of combining probabilities (and as a special case, certainties) in a logically consistent way (where logic is taken to be classical logic here).

It follows that non-Bayesians cannot be rational except by chance, whence your hacking fears are not necessary.
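Roughly, what the Cox result delivers (my paraphrase; the functional-equation machinery is omitted): any real-valued degrees of belief consistent with classical logic must obey, up to rescaling, the ordinary rules of probability.

```latex
% Sum and product rules forced by Cox's consistency requirements:
\begin{align}
  P(A \mid C) + P(\lnot A \mid C) &= 1
      && \text{(sum rule)} \\
  P(A \wedge B \mid C) &= P(A \mid B \wedge C)\, P(B \mid C)
      && \text{(product rule)}
\end{align}
% Bayes' theorem follows from the product rule's symmetry in A and B:
\begin{equation}
  P(A \mid B \wedge C) = \frac{P(B \mid A \wedge C)\, P(A \mid C)}{P(B \mid C)}
\end{equation}
```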

I'm afraid I also have to disagree with you on the Bayesian solution for gambling: You're ignoring the utility (enjoyment) of the time spent playing the game.

[ Parent ]

Nothing to fear, but rationality itself (none / 3) (#18)
by Scrymarch on Tue Jan 20, 2004 at 08:25:55 AM EST

I already believed humans weren't Bayesian-rational, but I didn't know about the Cox result, thanks.

It follows that non-Bayesians cannot be rational except by chance, whence your hacking fears are not necessary.

Another day, another exploded plan for world domination.

I'm afraid I also have to disagree with you on the Bayesian solution for gambling: You're ignoring the utility (enjoyment) of the time spent playing the game.

Fair enough.  I didn't really go into it; I didn't mean to suggest gambling wasn't rational (for enjoyment, say).  I just think the gambling example is one reason Bayesian rationality isn't "rational" as such, at least not in the everyday sense of the word.  I know you could add things to the model to fix it, e.g., evaluating the utility of having no money to buy groceries tomorrow, etc.  But now you're just going to hit me with Cox again, and I don't have a good counter-argument, just food for thought.

[ Parent ]

quickly a few precisions (none / 2) (#38)
by martingale on Tue Jan 20, 2004 at 06:59:46 PM EST

Fair enough. I didn't really go into it; I didn't mean to suggest gambling wasn't rational (for enjoyment, say). I just think the gambling example is one reason Bayesian rationality isn't "rational" as such, at least not in the everyday sense of the word. I know you could add things to the model to fix it, e.g., evaluating the utility of having no money to buy groceries tomorrow, etc. But now you're just going to hit me with Cox again, and I don't have a good counter-argument, just food for thought.
Don't take Cox's result as more important than it is. The Bayesian paradigm employs several ingredients, the most fundamental being Bayes' theorem of course. This tells you how to update a set of beliefs in light of new information.

So Cox tells you that, if you're going to incorporate observations into a set of prior beliefs, you can only do it a single way. But this way is actually a large family of ways, which depend on something that is normally outside your Bayesian paradigm: the model (which turns up as the "likelihood").

So there are a large number of ways of updating beliefs in light of observation, but they all can be manipulated with Bayesian theories, that's all Cox says.

Things that aren't specified by Cox: the model for the observed data points, the utility function. Normally, those are chosen completely arbitrarily (sometimes after careful experiments to determine the best model etc.). It is of course possible to apply Bayesian theory to model selection as well, in which case you're using Bayes on itself, so to speak. The nice feature is that because it's still Bayesian, the same theory can be applied at this meta-level and so on.

Think of Cox like, say, Galois' results on quintics. It's fundamental, but doesn't mean the world ends with 5th degree polynomials.

[ Parent ]

Oh no. (none / 3) (#32)
by Estanislao Martínez on Tue Jan 20, 2004 at 04:08:02 PM EST

This is mostly because humans do not generally, and specific humans do not often, reason entirely rationally.

I really, really must object to defining "rationality" as some mathematical property.

--em
[ Parent ]

once you objectify, mathematics appears (none / 1) (#39)
by martingale on Tue Jan 20, 2004 at 07:22:34 PM EST

There are several concepts which lend themselves to mathematical treatment (there is nothing deep about this, it follows from hindsight), and consistent update of beliefs is one of them. Another famous concept of around the same time is information content.

I don't see what there is to object about this. It's remarkable that a small combination of properties of a function can impose strong structural requirements on it. This is how we ended up with the formula for entropy. This is also why so much of statistical mechanics turns out to be axiomatizable in information theoretic terms, a hundred or so years after the physicists started inventing it.

In the case of Bayesian rationality, the structure appears by importing classical logical requirements together with consistency. If you do not accept classical logic as the most important kind of logic (and we've argued about this before), then I'm not sure Bayesian theory applies. For example, I'm not at all sure Cox's result holds for quantum logics.

Even if you accept classical logics as fundamental for argument's sake, the resulting Bayesian structure is extremely flexible, to the point where you can do just about any crazy thing, and "complete" it via Bayes' theorem.

[ Parent ]

Mathematics is an analytical tool (none / 1) (#47)
by Estanislao Martínez on Tue Jan 20, 2004 at 09:01:58 PM EST

The problem doesn't lie in using mathematical techniques to achieve practical ends -- say, e.g., comparative evaluation of the many courses of action available to agents. The problem is the reification of mathematical techniques and definitions as "rationality" itself. The notion of rationality antecedes any mathematical tool we bring to it, and I would argue that at heart it is not a logical one, but rather an ethical one. And to surreptitiously replace a foundational but vague ethical notion with an inadequate but precise one is, in my mind, an evil.

--em
[ Parent ]

heh (none / 1) (#50)
by martingale on Tue Jan 20, 2004 at 10:38:54 PM EST

The notion of rationality antecedes any mathematical tool we bring to it, and I would argue that at heart it is not a logical one, but rather an ethical one. And to surreptitiously replace a foundational but vague ethical notion with an inadequate but precise one is, in my mind, an evil.
Unless I misunderstand you, that is not an adequate criticism. You can see rationality as ethics if you want, but you'll need to qualify it in the same way I've been qualifying Cox's result in the threads. Without a common base of accepted definitions, we're just talking past each other.

As I see it, the term "rationality" isn't being subverted as much as being described, admittedly within a framework which depends crucially on the people doing the describing. Whether rationality itself antecedes the description is irrelevant. The sun has existed well before my birth, but I can still describe it now. Of course, it can always be argued that the description is incomplete. Whether that matters for the discussion at hand is an interesting question.

[ Parent ]

Is rationality ineffable? (none / 1) (#52)
by Estanislao Martínez on Tue Jan 20, 2004 at 11:46:47 PM EST

Without a common base of accepted definitions, we're just talking past each other.

We need a common base to be able to talk to each other, but I don't think in the basic case those are "definitions". To quote Wittgenstein, at some moment our justifications for acting the way we do run out, and all we can say is that we just act this way.

If rationality is something that grounds discussion, it just can't be some set of mathematical rules, and in fact any non-solipsistic "description" of it will presuppose it. This forces the question of what do said descriptions clarify at all. (I don't know what an answer to that is, but I don't think "nothing" is one.)

Thus perhaps the heart of the issue is that I think rationality is, when you get down to it, ineffable.

As I see it, the term "rationality" isn't being subverted as much as being described

This is not what I see happening in the culture at large. I see, e.g., theories of economics based on assumptions about people's (game-theoretic) "rationality", and then used to justify policy decisions which in fact cater to the interests of powerful minorities. I am inclined to call this a subversion of rationality.

--em
[ Parent ]

I don't think so (none / 1) (#58)
by martingale on Wed Jan 21, 2004 at 01:37:12 AM EST

We need a common base to be able to talk to each other, but I don't think in the basic case those are "definitions". To quote Wittgenstein, at some moment our justifications for acting the way we do run out, and all we can say is that we just act this way.
Have we already gone through all the arguments and counterarguments, to the point where rationality depends on tenuous principles we cannot bridge? Internet time sure does fly ;-)

The first steps still need to be attempts at agreement on the basics, lest we despair at the futility of discussion as a whole, don't you think? Wittgenstein's point is clearly a last resort.

If rationality is something that grounds discussion, it just can't be some set of mathematical rules
That isn't what is being claimed. Rather, a rationality whose usage satisfies certain assumptions admits an equivalent mathematical form. You're free to postulate a rationality which breaks the assumptions (e.g. quantum logics) as often as you like. But insofar as you accept the assumptions as accurate reflections of its use, you accept automatically the existence of a mathematical object which can be used interchangeably instead of the rationality.

The assumptions are basically the following (technical details removed):

1) if you have a degree of belief concerning a statement, then you also have a belief about its logical negation.

2) whenever you have a belief about two statements in conjunction, this belief depends fully on one, and conditionally on the other, given the first. ([80% rain + saturday tomorrow] depends on [saturday tomorrow] and [80% rain, given saturday tomorrow]).

3) if two sets of information are equivalent, then your beliefs are updated the same way whether you use one set or the other.

So to recapitulate: It simply doesn't matter what rationality truly is; if its use causes you to agree with these three assumptions, you've got yourself a mathematical object which can replace it (3 year parts and maintenance warranty).
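In symbols, the three assumptions amount to something like the following (notation mine; regularity conditions omitted):

```latex
% b(. | C) is a degree of belief given background information C.
\begin{align}
  b(\lnot A \mid C) &= f\bigl(b(A \mid C)\bigr)
      && \text{(1: belief in a negation)} \\
  b(A \wedge B \mid C) &= g\bigl(b(B \mid C),\, b(A \mid B \wedge C)\bigr)
      && \text{(2: belief in a conjunction)}
\end{align}
% Assumption 3 (equivalent information updates beliefs identically) forces
% f and g, up to a monotone rescaling, to be f(x) = 1 - x and g(x, y) = xy:
% ordinary probability, updated by Bayes' theorem.
```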

This is not what I see happening in the culture at large. I see, e.g., theories of economics based on assumptions about people's (game-theoretic) "rationality", and then used to justify policy decisions which in fact cater to the interests of powerful minorities. I am inclined to call this a subversion of rationality.
Ah. That's the difference between theory and practice ;-)

[ Parent ]
irrational thinking unnecessary (none / 1) (#36)
by asolipsist on Tue Jan 20, 2004 at 06:15:50 PM EST

You seem to be saying that if you reason entirely rationally then you will be vulnerable to Bayesian hacks.
However, it seems possible to use Bayesian combination, remain rational, and avoid hacks.
Humans use reflexivity and have much higher orders of understanding than, say, SpamAssassin. SA doesn't understand the meaning or context of groups of words (or even single words), so it is vulnerable to 'hacks' that use random words or combinations of words. This obviously doesn't work on humans because we do understand the meanings of groups of words.
Another reason hacks don't work on us is that we use reflexivity: we do a probabilistic measure of our own probabilistic methods, and a probabilistic measure of that measure, etc. We also seek to gather new information and reevaluate when our filters do get 'hacked'; we have the ability to form new combinations based on new information.
This synthesis ability, flexibility and reflexivity seem to give humans the ability to remain rational (in theory) and avoid 'hacks'.


[ Parent ]
correcting a small misunderstanding (none / 1) (#41)
by martingale on Tue Jan 20, 2004 at 08:09:31 PM EST

You seem to be saying that if you reason entirely rationally then you will be vulnerable to Bayesian hacks.
Not quite. It is really hard to do Bayesian "hacks", simply because Bayesian theory is very weakly structured. I was making a grand statement because I could ;-), although it is true that because Bayesian theory has *some* structure, it is conceivable that it *can* be hacked.

Your mention of spam filters suggests that you see Bayesian decision making as exemplified by them, so I'll give you a quick rundown of what Bayesian theory is about.

Suppose the state of some world can be described by a set of probabilities, ie numbers between 0 and 1. If you accept the rules of classical logic, such as not, and, or, there exists, for all (the last two are special), then it turns out that your numbers between 0 and 1 can only be consistent if they form a probability measure, in a technical mathematical sense. Every other object which gives numbers between 0 and 1 will break the logic somewhere, somehow.

When you have a probability measure, suppose it represents a degree of belief about some world. If you impose a condition (such as an observation) on the world, and if you ask for that condition to have a probability of 1, then the closest probability measure to the original (in many different senses) is a probability measure which can be calculated via Bayes' theorem. Bayesian theory is a body of results which describe the original and final probability measures in terms of each other, and other ingredients you may wish to add, such as a utility function.
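A discrete toy version of that updating step (numbers invented): condition a prior measure on an observation and renormalize.

```python
# Toy discrete version of the updating described above (numbers invented):
# a prior over world-states, conditioned on an observation, yields the
# posterior via Bayes' theorem.

def bayes_update(prior, likelihood):
    """prior: {state: P(state)}; likelihood: {state: P(observation | state)}."""
    unnormalised = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": 0.9, "dry": 0.2}   # P(dark clouds | state)
posterior = bayes_update(prior, likelihood)
print(posterior)                          # rain: 27/41, dry: 14/41
```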

That's Bayesian theory, and it applies to "everything" and "nothing" in particular. In a sense, Bayesian theory is a set of rules for rational thinking, which generalizes classical logic. It's interesting precisely because it is the only way of generalizing classical logic to linear degrees of belief structures, ie probabilities.

There is nothing to hack, yet, because Bayesian theory is very nonspecific.

Now take a system such as the SA spam filter. Assuming it truly follows Bayesian theory (I say this because most open source filters do not use Bayesian theory at all - this includes all those filters which use the "chi squared" algorithm pioneered by SpamBayes, which people like because it has the concept of "unsures"), then to make SA work you need a model for the structure of an email. The typical model is: "an email is a bag of words". When you have that, you've specified the "world" to which Bayesian theory can have something to say.

So now you could attempt to hack the filter, because you can use the structure inherent in that world, namely the "bag of words". You can play with capitalization, with word frequencies etc., and see how the probabilities change under those transformations. But you're only hacking the model, not the Bayesian theory itself.
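To make the distinction concrete, here is a toy "bag of words" filter (naive Bayes, with made-up training counts; not SA's actual algorithm) and the word-padding hack against it:

```python
import math

# A toy "bag of words" spam filter (naive Bayes, made-up training counts) -
# enough to show that what gets hacked is the model of the world, not
# Bayes' theorem.

spam_counts = {"viagra": 20, "offer": 15, "meeting": 1}
ham_counts  = {"viagra": 1,  "offer": 5,  "meeting": 30}
vocab = set(spam_counts) | set(ham_counts)

def log_odds_spam(words, p_spam=0.5):
    """Posterior log-odds that `words` came from the spam class."""
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    score = math.log(p_spam / (1 - p_spam))      # prior log-odds
    for w in words:
        # add-one smoothing so unseen words don't zero out a class
        p_w_spam = (spam_counts.get(w, 0) + 1) / (spam_total + len(vocab))
        p_w_ham = (ham_counts.get(w, 0) + 1) / (ham_total + len(vocab))
        score += math.log(p_w_spam / p_w_ham)
    return score

print(log_odds_spam(["viagra", "offer"]) > 0)                 # spammy
# Padding with words the filter associates with ham flips the verdict -
# a hack on the "bag of words" world, not on the Bayesian machinery:
print(log_odds_spam(["viagra", "offer"] + ["meeting"] * 3) > 0)
```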

When the authors of SA decide they want to change the world from under you, all your hacks belong to them. For example, if SA decides to use word pairs only, then the world goes from "bag of words" to "bag of bigrams" or some variation. Maybe they'll use triples of words, and remove the middle one obtaining nonconsecutive pairs. In Bayesian theory parlance, these changes would probably modify the "likelihood" function. Other things that can be changed are the prior distribution and the utility function.

Now that you understand what Bayesian theory is, and you realize it tells you practically nothing of concrete value, I'll just mention that models which use groups of words are well studied too. A sequence of n consecutive words is known as an n-gram, and usually associated with so called Markovian models.

Even trigrams are really quite good at predicting sentence structure locally, at least in English. In his famous paper on information theory, Shannon showed some examples of randomly created sentences using n-grams. It turns out that using n-grams makes a segment of approximately 2*n consecutive words sound roughly intelligible. So with English being a language whose sentences are quite short generally, a trigram model gives intelligible segments of approximately 6 words in length, and that's fairly close to the average sentence length anyhow. I expect that EM can tell us a lot more about this, it's not really my field.
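Shannon's construction is easy to reproduce in miniature (corpus invented, bigrams rather than trigrams for brevity): sample each word from the successors observed after the previous word.

```python
import random

# Shannon-style n-gram sketch (tiny invented corpus, n = 2): sample each word
# from the distribution of words that followed the previous one.

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Build a bigram table: word -> list of observed successors.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Random walk over the bigram table, starting from `start`."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        successors = bigrams.get(words[-1])
        if not successors:          # dead end: word only seen at corpus end
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("the", 8))
```

Even this crude table produces locally plausible word pairs; with trigrams over a real corpus the effect Shannon described becomes much stronger.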

I'll end by saying that n-grams aren't the only models. There are full sentence models, and complex models incorporating grammatical rules. All of these create their own "world" to which Bayesian theory can be applied, and each of those worlds can conceivably be hacked. It is not clear at all that a suitably complex model cannot come arbitrarily close to the human cognitive model.

Oh yeah, one last thing: there is some research in psychology (can't cite, not my field) which suggests that humans are exceedingly bad at estimating probabilities unless they are specifically trained. This makes it unlikely that humans internally strive to emulate a Bayesian approach to information management. Even if true, of course, that doesn't mean that Bayesian theory can't approximate humans really well. In statistical theories, many unrelated paths can lead to the same result.

[ Parent ]

holy reply batman (none / 1) (#48)
by asolipsist on Tue Jan 20, 2004 at 09:23:05 PM EST

that was some reply.
I don't think we're really at odds; your technical understanding goes way beyond mine. I don't consider SA a prime example of Bayesianness, just that it's common now to call a Bayesian 'hack' one that breaks spam filters.
I get that:
p(A|X) = p(X|A)*p(A) / [p(X|A)*p(A) + p(X|~A)*p(~A)]
doesn't really say a whole lot about how we got X or A :).

http://www.inference.phy.cam.ac.uk/mackay/Bayes_FAQ.html
was useful to me.
My point, along with others in this thread, is that:
1) studies like these don't contain any interesting information
2) neuron behavior at any level (slug on up, etc.) is going to look Bayesianish (even if it isn't; anytime you combine weighted averages, it could seem like combining probabilities, etc.)

regarding psych, one thing they should have pointed out in the tennis study is that the *winners* are likely to make shot selections that follow a Bayesian model, because their coaches have drilled high-probability shot selection into them.

btw - Shannon too, huh. bonus if next time you can work in Nyquist, Mandelbrot, and maybe, err, Gödel for good measure.

[ Parent ]

splat! (none / 1) (#51)
by martingale on Tue Jan 20, 2004 at 11:07:36 PM EST

[...] consider SA a prime example of Bayesianness, just that it's common now to call a Bayesian 'hack' one that breaks spam filters.
I agree with you there. I just wanted to be clear about the misuse of bayesian terminology: it's rather like calling the Superbowl "sport", and arguing whether making a new kind of sport without commercials would help with your typical sport commentator's voice ;-)

Mackay has a set of lecture notes he's turning into a book (or has already?) which probably contains more than anyone wants to know about Bayes, from a physicist's point of view. Also, E. T. Jaynes's posthumous book is quite nice. It used to be available online, but I think they took that down when the book got finished and published.

My point along w/ others in this thread, is that:

1) studies like these dont contain any interesting information

2) neuron behavior at any level (slug on up etc) is going to look bayesianish (even if it isn't, anytime you combine weighted averages, it could seem like combining probabilities etc)

About 1), I agree. In fact, if they truly wanted to test the Bayesian hypothesis, they could do experiments directly using Bayes' formula, and ask people to predict posterior probabilities, then compare them with the correct values. Simply saying "people learn from the past" is qualitatively uninteresting.

About 2), I'm not sure. It isn't clear to me that biological neurons do in fact give rise to Bayesian information processors.

An interesting difficulty in Bayesian theory is that performing real computations is far from trivial. Solving even a simple problem requires integrating in multidimensional space, usually.

Nowadays, this is all done by computer, but not exactly. Instead of solving the integration problem directly, statistics is used to simulate simpler systems whose long run behaviour is close to the required result.

Taking this in reverse, I expect it is quite plausible to assume that biological neuron behaviour, when suitably averaged by invoking the law of large numbers, does obtain the same results as a Bayesian information processor. There's also an evolutionary argument that can be made: Bayesian theory models what we think is desirable rational behaviour, therefore is close to what our brains are evolving towards anyway.

btw - Shannon too, huh. bonus if next time you can work in Nyquist, Mandelbrot, and maybe, err, Gödel for good measure.
To tell the truth, the first important Bayesians would be Laplace and Jeffreys. Here's a quote from Laplace: "God? I have no need of this hypothesis."

[ Parent ]
You are missing something important (none / 1) (#54)
by StephenThompson on Tue Jan 20, 2004 at 11:55:10 PM EST

There's also an evolutionary argument that can be made: Bayesian theory models what we think is desirable rational behaviour, therefore is close to what our brains are evolving towards anyway.

No, this argument cannot be made without pre-supposing intelligent design.

You seem to make a similar mistake in most of your arguments in the threads here.  It is as my comment above describes: you are confusing the model with the reality.


[ Parent ]

feeding the trolls? (none / 1) (#55)
by asolipsist on Wed Jan 21, 2004 at 12:04:21 AM EST

"No, this argument cannot be made without pre-supposing intelligent design."

What are you talking about?  Behaving in a 'rational' way and learning through past experience is very very advantageous and would be quickly selected for.

Monkey 1: i saw my friend get eaten by a leopard and all the other monkeys run away from it.
Monkey 2: ditto.

Leopard: Oh look, monkeys, ummm, monkeys...

Monkey 1: ack a leopard! run away run away.

Monkey 2: ack a leopard! I love you mr leopard, let me give you a kiss.


[ Parent ]

..zing.. (none / 0) (#56)
by StephenThompson on Wed Jan 21, 2004 at 12:19:11 AM EST

The phrase "evolving towards" presupposes a destination.  

As Mr Martinez points out below, rationality is ineffable.

[ Parent ]

where? (none / 1) (#60)
by martingale on Wed Jan 21, 2004 at 01:51:51 AM EST

There's also an evolutionary argument that can be made: Bayesian theory models what we think is desirable rational behaviour, therefore is close to what our brains are evolving towards anyway.
No, this argument cannot be made without pre-supposing intelligent design.
I'm supposing that Bayesian theory is the product of intelligent design, or discovery if you prefer. That's the whole point of this line of argument. Instead of marvelling at the supposed closeness between the theory and the human mind, we acknowledge that Bayesian theory is the equivalent of shooting an arrow into a tree first, and painting a bullseye around it afterwards.

All of this assumes of course that we truly believe the human mind is close to Bayesian theory. While it is a goal of Bayesian theory to be close to the human mind, it may not necessarily be very successful at this.

You seem to make a similar mistake in most of your arguments in the threads here. It is as my comment above describes: you are confusing the model with the reality.
I find that hard to believe, since I've been arguing in most of those comments that Bayesian theory is just a model satisfying certain assumptions. The true nature of reality is irrelevant to this discussion. Does a rational mind, when used as intended, satisfy some assumptions? In that case, there is an equivalent Bayesian model.

Does Evolution steer (speaking anthropomorphically) the human mind towards a Bayesian model? I don't know, but the question does make sense.

[ Parent ]

intelligent design==creationism (none / 1) (#63)
by StephenThompson on Wed Jan 21, 2004 at 04:38:27 AM EST

Perhaps you are not familiar with the term "intelligent design".  It is a form of creationism which claims that genetics and evolution are part of a specific plan by God to achieve a predestined outcome.

I am reading all of your posts in this thread, and it is clear that you are stuck "inside the box".  I don't know how to get you out of it; all I can say is that you are missing a big philosophical point. Perhaps a course in philosophy of science would help.

[ Parent ]

oh, I'm sorry (none / 1) (#64)
by martingale on Wed Jan 21, 2004 at 05:42:12 AM EST

I think you misread me. While I did invoke the phrase "intelligent design" in relation to Bayesian theory, it was not meant to convey the idea that the theory is a product of divine manufacture.

The intelligent beings referred to are humans - Pascal, Bayes, Laplace, Jeffreys, Keynes, de Finetti, Savage... There is also a longstanding argument about whether mathematical theories are designed/developed or can only be discovered - my inclination is to the former.

Either way, I was thinking of humans inventing a mathematical device to describe themselves, and of other humans, after the fact, marvelling at how closely their minds follow this device. Nothing very deep, really.

[ Parent ]

The human mind is not a single algorithm (none / 1) (#57)
by ph317 on Wed Jan 21, 2004 at 12:42:52 AM EST


It's quite likely that our thought processes include a fair amount of Bayesian decision making combined with other methods, including irrational ones, and even truly random inputs.  If I were designing a human, that's what I would do anyway.  After combining the best decision-making methods into a weighted meta-system that adapts situationally, I'd add a mostly small but again situationally variable amount of irrational and/or random factors, to help thwart pathological looping attacks on the decision-making process.

[ Parent ]
That's because (3.00 / 5) (#23)
by Xeriar on Tue Jan 20, 2004 at 10:51:46 AM EST

We have a logarithmic view of wealth. A 50/50 chance at doubling your wealth, as opposed to losing everything, is seen as an unfair bet, because there is value itself in having food, a place to sleep, and so on.

Similarly, spending a dollar (an insignificant fraction of most Americans' wealth) for a 1:90,000,000 chance at a million dollars or so is considered a fair trade, because you spend nearly nothing for the chance to dream and possibly hit it big. Slim chance, sure, but some people spend more on ordering pizza every week (or day).

There are people addicted to gambling and so on, but most humans take more into account than we like to give ourselves credit for.
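The logarithmic view can be made concrete with a small Python sketch (the function name is mine, purely illustrative): under log utility, a "fair" even-money double-or-nothing flip strictly lowers expected utility, so declining it is not a mistake.

```python
import math

def expected_log_wealth(wealth, bet, p_win=0.5):
    """Expected log-wealth after a double-or-nothing bet of `bet`."""
    return (p_win * math.log(wealth + bet)
            + (1 - p_win) * math.log(wealth - bet))

wealth = 100.0
# Staking half your wealth on a fair coin flip looks neutral in
# dollar terms, but log utility says you are strictly worse off:
assert expected_log_wealth(wealth, 50.0) < expected_log_wealth(wealth, 0.0)
```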

----
When I'm feeling blue, I start breathing again.
[ Parent ]

Gambling (none / 3) (#27)
by Scrymarch on Tue Jan 20, 2004 at 12:50:25 PM EST

So are you arguing we evaluate these things in a Bayesian manner, just with a sliding utility function that includes a "reserve" amount of cash (for eating, paying the mortgage, etc)?  The assumed linear nature of the utility function is perhaps an unfair part of the example, but I don't think it's the real problem.

As I said in another comment (but brushed over in the parent), my problem is with the gambling example as an instance of irrationality, not gambling as an instance of rationality.  I think gambling can be perfectly rational.  Indeed, after mulling a bit longer, I think people aren't Bayesian-rational because it's too narrow a definition of rationality.  There should be a whole range of responses to certain situations that can be described as rational.  It's similar to the approach taken in philosophical ethics where an ethicist describes a framework for making the decision, rather than decreeing which decision they consider moral.  That still leaves room for Bayes-like utility functions, I guess, with different definitions of utility for different people.  That leaves you with an unpleasant definitional relativism on one side (I do what my utility function says makes me happy, hence I'm rational), and difficulties with repeatability on the other, though in game contexts surely utility would be conflated with game-winning.

[ Parent ]

Not exactly (none / 3) (#31)
by Xeriar on Tue Jan 20, 2004 at 03:25:20 PM EST

But if you have a million dollars, a hundred thousand extra is not worth 10% more, either to your physical well-being or to your perception of it.

That might take around half a million or so. Thus, I'm arguing not for a linear scale but a logarithmic one.

----
When I'm feeling blue, I start breathing again.
[ Parent ]

Bernoulli (none / 1) (#66)
by Scrymarch on Wed Jan 21, 2004 at 05:52:40 AM EST

After consulting Google and my beleaguered memory: that's Bernoulli's response of declining marginal utility.  It's still possible to construct the problem to counteract it, you just make the payout exponential.

[ Parent ]
I think there's a little more to it than that (none / 1) (#37)
by wurp on Tue Jan 20, 2004 at 06:22:20 PM EST

Because statistical reasoning will tell you it's always wise to bet if you have, for example, a 90% chance to multiply your wealth by ten and a 10% chance to lose everything.  However, regardless of how good the odds are or how much it will multiply your wealth, if you have a chance of losing everything, it is obviously wrong to sit in an infinite loop reinvesting your wealth - eventually you will lose the bet and lose everything.

Hmm, of course, you can analyze that statistically to determine how many times to reinvest... but then your notion comes into play again: what is what I have now really worth vs. the real value of having ten times as much?  I wouldn't bet everything I have on a 90% chance to double it.  Being out on the street is enough of a bad prospect that it wouldn't be worth it.
---
Buy my stuff
[ Parent ]

Kelly strategy (none / 1) (#61)
by onemorechip on Wed Jan 21, 2004 at 03:07:56 AM EST

Why is it "Bayesian rational" to bet everything? Bayes' Theorem isn't about payoffs; it only relates conditional, prior, and posterior probabilities. What are the conditional probabilities in a 50/50 bet? (OK, there are the probability that you get paid if you win and the probability that your money is taken from you if you lose, but those are trivial unless you are dealing with an unscrupulous house.)

The correct strategy for the bet is the Kelly strategy: Bet that portion of your bankroll that maximizes the expectation of the logarithm of the payoff. In the case of a 50/50, double-or-nothing bet, this means you should bet nothing (which is rational, since losing 10% of your bankroll then gaining 10% of what you have left leaves you at 99%; in the long run you will lose money). If the odds are increased in your favor, then you have a non-zero bet that is optimal and can be calculated. (I believe the correct fraction for this case is 2P-1, where P is your probability of winning; for any P not greater than 0.5 you should obviously refrain from betting.)

Of course Kelly strategy assumes money is infinitely divisible so it breaks down when your bankroll shrinks to, say, 10 cents, but it is a good approximation for large amounts of money.
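The 2P-1 fraction mentioned above generalizes to odds of K:1; a short Python sketch (the function name is mine, not from any library) makes the check easy:

```python
def kelly_fraction(p, odds=1.0):
    """Kelly-optimal fraction of bankroll for a bet paying `odds`:1
    with win probability p; bet nothing when there is no edge."""
    f = (p * (odds + 1) - 1) / odds
    return max(f, 0.0)

# Even-money (double-or-nothing) case: the formula reduces to 2P - 1.
assert kelly_fraction(0.5) == 0.0             # 50/50: stay out
assert abs(kelly_fraction(0.6) - 0.2) < 1e-9  # 2*0.6 - 1
```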
--------------------------------------------------

I did my essay on mushrooms. It's about cats.
[ Parent ]

A peculiar game (none / 1) (#65)
by Scrymarch on Wed Jan 21, 2004 at 05:45:13 AM EST

What are the conditional probabilities in a 50/50 bet?

The correct strategy for the bet is the Kelly strategy: Bet that portion of your bankroll that maximizes the expectation of the logarithm of the payoff. In the case of a 50/50, double-or-nothing bet, this means you should bet nothing

No doubt you're right - I gave a very misleading description in my initial comment.  The scenario I actually had in mind was not a double-or-nothing bet, but the St Petersburg Paradox.  The game is that you make an initial bet, then the house starts flipping a coin.  Your bet doubles while the coin keeps coming up heads (or tails, but the same side every time).  It's not a typical casino game where there is a new choice each time - the coin flip really just determines the payout.  The game-rational or Bayesian-rational response to such a game is to bet everything, though no-one would, and various explanations for this are offered on that Stanford page, such as the house rapidly running out of money for any of the longer runs.

[ Parent ]

OK, I see... (none / 1) (#99)
by onemorechip on Wed Jan 21, 2004 at 11:54:48 PM EST

So the game has an infinite expected payoff. Actually no such game is possible since the house would be unable to pay off if there were more than a certain number of coin flips, so let's say it's limited, but still VERY LARGE (maybe the house can stand up to 100 flips, for example).

There are two interpretations and I'm not sure which one is correct. Is the amount paid by the player a bet (so that the payoff is proportional to the bet) or an entry fee? Your description ("your bet doubles...") suggests a bet. The web site you linked treats it as an entry fee, so the payoff is not a function of the amount paid in. I'll assume the latter interpretation.

Under Kelly strategy, you would calculate the portion of your bankroll that would maximize the expectation of the log of your bankroll after the game (your original bankroll, minus entry fee, plus winnings).

Take the table from the website and modify it as follows:

Let X be the percentage of your bankroll that you pay to enter the game. Say your bankroll is $100; then X is the size of your bet in dollars.

Then at the end of the game you will walk away with Prize + $100 - X. You take the logarithm of this number and multiply by P(n) to get each term of the expected log.

n   P(n)    Prize + $100 - X    Expected log
1   1/2     $102 - X            0.5*log(102-X)
2   1/4     $104 - X            0.25*log(104-X)
3   1/8     $108 - X            0.125*log(108-X)
4   1/16    $116 - X            0.0625*log(116-X)
etc.

Note that if X is 102 you get a negative infinity in the first row. This is a case where you could lose your entire bankroll in one game, so obviously you wouldn't play if the entry fee is so high.

I don't know what the sum of the last column converges to (assuming it converges; the number decays almost exponentially so it seems likely). If we are dealing with an entry fee rather than a bet, we should enter the contest if it converges to something greater than log(100) (the expectation of the log of your bankroll if you stay out of the game). For a finite version of the game, say with a 20-toss limit, the upper limit on the entry fee that you should pay can be found with a spreadsheet. I came up with $7.78, if I did it right; and increasing it to 30 tosses only changed the result by a penny.
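That spreadsheet calculation is easy to reproduce in Python (a rough sketch; the function names are mine, and the sliver of probability left over past the toss limit is ignored, as in the table):

```python
import math

def expected_log_after_game(entry_fee, bankroll=100.0, max_tosses=20):
    """E[log(final wealth)] for the St Petersburg game: with
    probability 2**-n the run ends at toss n and pays 2**n."""
    return sum(0.5 ** n * math.log(2.0 ** n + bankroll - entry_fee)
               for n in range(1, max_tosses + 1))

def break_even_fee(bankroll=100.0, max_tosses=20):
    """Largest entry fee worth paying: where the expected log after
    the game drops to log(bankroll). Found by bisection."""
    lo, hi = 0.0, bankroll
    target = math.log(bankroll)
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_log_after_game(mid, bankroll, max_tosses) > target:
            lo = mid
        else:
            hi = mid
    return lo

fee = break_even_fee()
assert 7.0 < fee < 8.5  # same ballpark as the spreadsheet figure above
```

Raising the limit to 30 tosses moves the answer by well under a cent, since the tail terms decay geometrically.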

If you play the game again, you add your winnings minus the entry fee to your starting bankroll. If you came out ahead the first time, then your bankroll is higher so you can tolerate a higher entry fee. If you had a net loss the first time, then you have a smaller bankroll the second time and a lower limit on the entry fee.

My original question remains, however: How is this related to Bayesian probabilities?
--------------------------------------------------

In a democracy, the government has no rights, only permission. A government that has rights is a dictator
[ Parent ]

I repeat myself (none / 1) (#106)
by Scrymarch on Thu Jan 22, 2004 at 08:22:23 AM EST

Serves me right for not laying out my arguments properly in the first place.

Your objection is the same one raised by Xeriar and Bernoulli (elsewhere on that Stanford page), and the way to deal with it is to modify the game such that the payout increases exponentially.  I'm fairly sure the entry-fee / bet variants are equivalent - it doesn't matter whether you or the house chooses the amount.

Bayesian rational decision making is a process whereby decisions are made by considering the probability*utility of a given outcome.  Utility is given by a utility function which depends on the context - games are often analysed because the utility is considered easy to codify via points in the game.  It is an extension to Bayes' theorem, but a fairly common one.  Unfortunately I don't have a more official definition to hand, but Google will confirm that I didn't just make the term up.

This is a case where you could lose your entire bankroll in one game, so obviously you wouldn't play if the entry fee is so high.

You and I think this is obvious, but it is not a result from Bayesian-rational or indeed simpler models of expected value, and it's a reason why I think such a definition of rationality is too narrow.  For instance your example says 102 is too high - but what about $101?  What's your reserve amount?  To me there's a range of rational responses which involve not playing the game at all, or not paying the optimal amount in order to prolong my gambling enjoyment.  Irrational responses would include betting your daughter's bicycle and challenging the dealer to a duel.

[ Parent ]

Re: I repeat myself (none / 1) (#108)
by onemorechip on Thu Jan 22, 2004 at 10:28:07 PM EST

Your objection is the same one raised by Xeriar and Bernoulli (elsewhere on that Stanford page),

Nope. Kelly strategy is only about maximizing rate of return. It has nothing to do with diminishing returns.

To see how it works, consider a simple bet that pays off at odds of K:1. Normalize so that your starting bankroll is 1, and the amount you bet is X. If the probability of winning is such that you expect to win W times and lose L times out of N games (W + L = N), then after N games you expect to have

((1 + KX)**W) * ((1-X)**L)

This is your expected rate of return per game, raised to the Nth power. You want this to be maximized, and also greater than one. Maximizing this expectation is the same as maximizing the following expression with respect to X:

W*log(1+KX) + L*log(1-X)

To ensure positive returns this second expression needs to be positive. If L/W is greater than K (L, W, and K being positive), there is no positive value of X for which this is true, so this additional requirement also matches the requirement for expected utility to be positive.
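This maximization has a closed-form answer, which a brief Python sketch (function and variable names are mine) can check numerically. Writing p = W/N, the maximizer of the expression above is X = (pK - (1-p))/K, which reduces to 2p - 1 at even money:

```python
import math

def growth(x, p, k):
    """Per-game expected log growth when betting fraction x at odds k:1."""
    return p * math.log(1 + k * x) + (1 - p) * math.log(1 - x)

def kelly_optimum(p, k):
    """Closed-form maximizer of growth(): x* = (p*k - (1 - p)) / k."""
    return (p * k - (1 - p)) / k

p, k = 0.55, 1.0              # a 55% chance at even money
x_star = kelly_optimum(p, k)  # ~0.10: stake 10% of the bankroll
# x* should beat nearby fractions on either side:
assert growth(x_star, p, k) > growth(x_star - 0.05, p, k)
assert growth(x_star, p, k) > growth(x_star + 0.05, p, k)
```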

The resemblance to Bernoulli's approach is purely coincidental.

and the way to deal with it is to modify the game such that the payout increases exponentially.

I think you mean "faster than exponentially", since it increases exponentially in the original formulation. Optimum KS can be calculated provided that the series converges. With the hyperexponential (for want of a better term) payouts, the series diverges. But if the series diverges towards positive infinity, the recommendation is to bet as much as possible (or if we are talking entry fee, the upper limit may be set as high as possible), provided that:

1. No term is infinitely negative or undefined (the argument of the logarithm in each term must be positive), and

2. You are dealing in infinitely divisible monetary values (e.g., the set of real numbers, or the set of rational numbers).

Condition 1 ensures that you never go bankrupt. The reason for condition 2 is easy to see: If you have a bankroll of $1, and you bet 99 cents, then if you lose you will have 1 penny left. You can't bet a fraction of a penny, so you are forced to risk your entire remaining bankroll on the next play, or quit having lost almost everything. Thus in the real world, the maximum bet/entry fee would be smaller than if you were playing in a world of infinitely divisible currency.

I'm fairly sure the entry amount / bet variants are equivalent - it doesn't matter whether you choose the amount or the house.

If it's a bet rather than an entry fee, then the payoff is proportional to the size of your bet. This modifies the last column of the table in my previous comment, and the resulting series will be a different function of X. But also, if it's an entry fee, we are unable to choose X for ourselves, so we can't strategize for maximum return. Instead we are making a yes/no decision, which we do by determining whether the sum of the series (calculated using the predetermined value of X) is positive or negative.

Both of these factors affect the calculated number: In one case we are looking for the maximum of one function, in the other we are looking for the zero crossing of a different function.

Bayesian rational decision making is a process whereby decisions are made by considering the probability*utility of a given outcome. Utility is given by a utility function which depends on the context - games are often analysed because the utility is considered easy to codify via points in the game. It is an extension to Bayes theorem, but a fairly common one. Unfortunately I don't have a more official definition to hand, but Google will confirm that I didn't just make the term up.

I thought it was Von Neumann who developed the concept of expected utility. Bayes' Theorem allows you to calculate an unknown probability from a known prior probability and two known conditional probabilities; it isn't needed nor is it helpful in calculating expected utilities. This is not to say that Bayes' Theorem can't be applied to game theory; I just don't see an application to this particular game, where all probabilities are known a priori.
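For reference, the relation in question fits in a few lines of Python (a sketch; the numbers below are the standard rare-condition textbook example, not anything from this thread):

```python
def posterior(prior, likelihood, likelihood_given_not):
    """Bayes' Theorem: P(A|B) from P(A), P(B|A) and P(B|not-A)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# A 99%-sensitive test with a 1% false-positive rate, for a
# condition with a 1-in-1000 prior:
p = posterior(prior=0.001, likelihood=0.99, likelihood_given_not=0.01)
assert 0.08 < p < 0.10  # a positive result still leaves only ~9%
```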

"This is a case where you could lose your entire bankroll in one game, so obviously you wouldn't play if the entry fee is so high."

You and I think this is obvious, but it is not a result from Bayesian-rational or indeed simpler models of expected value, and it's a reason why I think such a definition of rationality is too narrow.

KS is rational...

For instance your example says 102 is too high - but what about $101?

$101 doesn't allow you to go bankrupt, but it does result in a negative expected rate of return, so you would not pay this amount.

What's your reserve amount?

I don't know what you mean by this question. By requiring that you only bet a fraction of your bankroll, using Kelly strategy means you always have a reserve.

To me there's a range of rational responses which involve not playing the game at all, or not paying the optimal amount in order to prolong my gambling enjoyment. Irrational responses would include betting your daughter's bicycle and challenging the dealer to a duel.

KS tells you when it is rational to play (and for how much), and when it is not rational to play. However, even when KS advises you to sit the game out, you might have other reasons for staying in. When Edward Thorp was trying casino gambling strategies based on Kelly strategy, he had to keep making small bets at the blackjack tables even when the card count was against him, so that he would be ready to make a larger bet when the tide turned the other way. When the casinos got wise, he would have a partner (probably Claude Shannon), with one player making small bets and counting the cards, then surreptitiously signaling the other player to join the game as a "high roller" when the card count got interesting.

I had the good fortune to hear Thorp give a public lecture several years ago. That is where I first heard about Kelly strategy. I suspect the wearable roulette computer he and Shannon developed must have applied Bayes' Theorem.
--------------------------------------------------

In a democracy, the government has no rights, only permission. A government that has rights is a dictator
[ Parent ]

I disagree (2.50 / 6) (#12)
by fae on Tue Jan 20, 2004 at 05:58:45 AM EST

They may be vaguely using something related to Bayes' Theorem: the idea of combining evidence.

However, our numerical intuitions are right out. Fix them here.

-- fae: but an atom in the great mass of humanity

Added link to main text (none / 1) (#19)
by leoaugust on Tue Jan 20, 2004 at 09:52:13 AM EST

Thanks for the excellent link. I have added it to the main text.


The eyes cannot see what the mind cannot see.
[ Parent ]

Agree with you. We are poor prob. calculators (none / 0) (#125)
by vqp on Sun Feb 29, 2004 at 03:20:19 PM EST

Or else how can you explain people in casinos or buying lottery tickets? We use rational thinking only when strictly necessary: in IQ tests.

[ Parent ]
I hate Paul Graham (1.25 / 8) (#21)
by Dirt McGirt on Tue Jan 20, 2004 at 10:06:24 AM EST

Bayesian is the New Economy all over again.

--
Dirt McGirt: that's my motherfucking name.
I hate him too (none / 1) (#42)
by martingale on Tue Jan 20, 2004 at 08:11:46 PM EST

Because all that talk of "Bayesian" has little to nothing to do with "Bayesian". He's managed to corrupt a concept for the masses (when I say "he", I mean all those thousands of non-experts, who discuss the stuff using the wrong words, sigh).

[ Parent ]
Right.. (none / 1) (#67)
by ekj on Wed Jan 21, 2004 at 07:24:13 AM EST

The test described shows that when given clear and obvious information, people react to it, while in the absence of it they tend to follow old habits/experience.

So, if your teenage daughter is missing but left a note saying "I am with Sara", most human parents would check with Sara first, while absent the note, they would first check the places/people the teenage daughter tends to hang out with.

This is surprising why?

Put another way, what is the negative of this experiment? How would the test subjects have had to behave for the researchers to conclude that we do *NOT* think "Bayesian"?

[ Parent ]

Responses not based on statistical probability (none / 1) (#78)
by simul on Wed Jan 21, 2004 at 01:57:35 PM EST

If the subject's responses were not based on statistical probability, then the subject would not be Bayesian.

In other words, if the subject was "off" or "occasionally irrational", then they would be seen as non-Bayesian.

What's interesting is the "sense of freedom" people have when they are performing precisely according to Bayesian rules (probability, with full information)... as if they were robots.

When people lack the necessary information to complete a task, they feel more constrained or confined - even though their irrational actions may be more unpredictable and seemingly "free" to outsiders who have this information.

Read this book - first 24 pages are free to browse - it rocks
[ Parent ]

Further reading from Independent.co.uk (2.50 / 4) (#24)
by robroadie on Tue Jan 20, 2004 at 11:35:37 AM EST

Tennis stars serve up performance grounded in mathematical theory
By Steve Connor, Science Editor

When Andy Roddick returns a 149mph serve, he is performing a feat normally reserved for the best mathematical brains, according to a study into the science of tennis...

+1FP, Good Article (1.20 / 5) (#25)
by cosmokramer on Tue Jan 20, 2004 at 11:54:36 AM EST

Not sure if I can do that MSN thumbs-up on here, but insert that here ( ). I'm interested in learning more about it. Anything about the abilities of the human beyond what we even believe we are capable of is fascinating.

From the captain obvious dept. (2.40 / 5) (#35)
by asolipsist on Tue Jan 20, 2004 at 05:44:33 PM EST

Meh,
Bayesian filtering works because it uses probability and weighted averages. The study of neurons showed researchers the power of using complex weighted averages (neural networks). This is almost tautological; it's like saying reading aloud resembles text-to-speech software.
People are not Bayesian; Bayesian is people - neurons have been doing this kind of filtering for eons.
See http://vadim.www.media.mit.edu/MAS862/Project.html for a crude rundown.

Sure, but... (2.50 / 4) (#44)
by virtualjay222 on Tue Jan 20, 2004 at 08:15:45 PM EST

You could easily argue that we perform various calculus operations every time we take a step (our vestibular system only senses acceleration, so how do we compute velocity...?)

So which is it? Does our thought resemble Bayes's rules, or is it the other way around?

Personally, I'm leaning towards the latter. Any thoughts?

---

I'm not in denial, I'm just selective about the reality I choose to accept.

-Calvin and Hobbes


you're not far off (none / 1) (#46)
by martingale on Tue Jan 20, 2004 at 08:26:34 PM EST

It is known that Bayesian information processing is the optimal way of transforming beliefs subject to some general requirements. Assuming we're evolving towards those requirements (which isn't such a stretch, given that those requirements arise out of a goal to model desirable thought processes), the Bayesian model ought to perform quite well.

The only major problem with all this is the degree of complexity. Human machines deal with problems and data several orders of magnitude more complex than what we usually model through Bayesian theory. It's not clear, I think, that at such levels Bayesian theory would be useful or sufficiently close to the functioning of a human mind.

[ Parent ]

check out Kalman Filters (none / 1) (#59)
by Work on Wed Jan 21, 2004 at 01:45:52 AM EST

..and how they apply to robotic mapping and location.

Basically it's applying Bayesian theory to sensory input. By recognizing patterns similar to ones stored in memory, you can determine, with a certain amount of probability, where you are physically. In a human, the 5 senses work this way.

In robotics it's a fairly advanced way to give a robot a sense of location based on what it can see with its sensors, and what it has experienced and presumably mapped before.
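The core update is small enough to show; here is a minimal scalar sketch (variable names are mine, no robotics library involved) of how a prior belief and a noisy sensor reading get fused, with the gain favoring whichever source is less uncertain:

```python
def kalman_update(mean, var, meas, meas_var):
    """One scalar Kalman update: fuse a prior belief (mean, var)
    with a measurement (meas, meas_var)."""
    gain = var / (var + meas_var)           # trust ratio
    new_mean = mean + gain * (meas - mean)  # pulled toward the reading
    new_var = (1 - gain) * var              # uncertainty shrinks
    return new_mean, new_var

# Vague prior (mean 0, variance 4) meets a sharp reading (10, variance 1):
m, v = kalman_update(0.0, 4.0, 10.0, 1.0)
assert abs(m - 8.0) < 1e-9 and v < 1.0  # estimate hugs the reading
```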

[ Parent ]

The computation of velocity ... (none / 1) (#62)
by EphraimT on Wed Jan 21, 2004 at 03:30:39 AM EST

... is usually done on the fly, so to speak. It most commonly involves rapid mental estimation of distance to potential impact with some other object and the pain of sudden, rapid deceleration. Thus it seems likely that we perceive patterns in the pain generated by various previous impact events, the lack of pain associated with successful transitions between inertial states, and the euphoria of surviving close calls, less with mechanical certainty than with how well, or poorly, we individually assimilate our individual experiences. In the Kingdom of the Risk Averse, the Theorist is King!

[ Parent ]
Calculus (none / 2) (#68)
by warrax on Wed Jan 21, 2004 at 09:19:06 AM EST

You could easily argue that we perform various calculus operations everytime we take a step

You could argue that we do so indirectly, but when we're trying to e.g. catch a ball, our mental process does not really seem to involve solving equations or anything like that. It's more akin to pattern matching against previous experience; this is part of the reason practice is so important -- a physicist who can calculate the precise trajectory of a thrown ball is probably much worse at catching a ball than any kid off the street, simply because the kid has more (recent) experience catching a ball.

-- "Guns don't kill people. I kill people."
[ Parent ]

Feedback loop (none / 1) (#73)
by sphealey on Wed Jan 21, 2004 at 12:53:11 PM EST

You could easily argue that we perform various calculus operations everytime we take a step (our vestibular system only senses acceleration, so how do we compute velocity...?)
I would say it is more likely that the brain uses an adaptive feedback loop to catch a ball, rather than explicitly solving a calculus problem.

Whereas in the case of judgements under uncertainty, the author is arguing that the brain actually does solve the statistics problem (presumably using some sort of approximation algorithm). In this case the person is typically faced with a binary decision (step off the curb, or not) so feedback won't work. Except in the evolutionary sense ;-).

sPh

[ Parent ]

Is there a difference? (none / 0) (#127)
by BillyBlaze on Sat Mar 06, 2004 at 06:41:38 PM EST

So which is it? Does our thought resemble Bayes's rules, or is it the other way around?

I say both.

[ Parent ]

Cart before the horse (2.60 / 10) (#45)
by StephenThompson on Tue Jan 20, 2004 at 08:24:54 PM EST

People so often confuse their tools with reality.  They see that reality conforms to their tools rather than the more natural idea that their tools conform to reality.
To say that humans think in a Bayesian way ignores the fact that people thought the way they did well before the theory ever existed.  Thus, if there is any scientific validity to the idea, it would seem that Bayesian logic conforms to how humans think, and not the other way around.

It is a fallacy of the first degree to make the statement in reverse chronology, as gross as claiming that a true fact implies any theory that leads to it. After making this philosophical blunder, one could conclude that humans who do not conform to the Bayesian standard are mentally deviant.  This may sound obtuse and silly, and yet every single claim to human deviance in history is founded on this precise fallacy!  For indeed, the very concept of deviance is based on a model which attempts to observe the facts, and not the reverse!

But... WHY??!?!?! (none / 0) (#124)
by mcrbids on Sat Jan 31, 2004 at 03:27:12 AM EST

It is a fallacy of the first degree to make the statement in reverse chronology, as gross as claiming that a true fact implies any theory that leads to it. How about this one: 1) Many people who get into stupid auto accidents are drunk. 2) Getting people drunk makes it hard to drive... If the former statement is right, that's not a reason to think that the latter theory is TRUE... The fact that the theory was developed well after the situation at hand does not make the theory that much more compelling!
I kept looking around for somebody to solve the problem. Then I realized... I am somebody! -Anonymouse
[ Parent ]
Wow (2.75 / 4) (#49)
by gmol on Tue Jan 20, 2004 at 09:27:06 PM EST

I knew of the term Bayesian for a long time, and looked it up without much of a thought.

Recently I have been trying to understand it better after realizing that I don't know anything about statistics, and have a difficult time thinking about errors etc. when it comes to experiments...

Funny that I see it pop up a lot recently (Nature etc.) for better or for worse, I think a lot of people will start looking into it...

I found an absolutely fantastic tutorial here (don't worry, you don't have to know anything about astrophysics), by Tom Loredo:

http://bayes.wustl.edu/gregory/articles.pdf

Nice, nice. (none / 2) (#69)
by bakuretsu on Wed Jan 21, 2004 at 11:42:00 AM EST

This is a very interesting article, but I would have appreciated a lot more proofreading and grammar work before it hit the front page. I found myself furrowing my brow and re-reading sentences a few times to determine their intended meaning when an article was strangely missing.

Stop slacking off in the moderation section!


-- Airborne
    aka Bakuretsu
    The Bailiwick -- DESIGNHUB 2004

THIS IS A MABBLE BUBBLE POLL! (1.12 / 32) (#71)
by johwsun on Wed Jan 21, 2004 at 12:22:29 PM EST

THE POLL:
SHALL I STOP POSTING MABBLE BUBBLE COMMENTS AT THE FRONT PAGE OF KURO5HIN?

Please rate 3 (three) this comment if you want me to stop mabble bubble, or rate it 0 (zero) if you DONT want me to stop it.

THIS IS ANOTHER MABBLE BUBBLE POLL (1.20 / 15) (#72)
by johwsun on Wed Jan 21, 2004 at 12:37:40 PM EST

the poll: How long do you wish my mabble babble comments to be?

Please rate 3 (three) if you want my "mabble_bubble" comments to have minimal length (just one line of mabble_babble) or rate 0 (zero) if you wish my mabble_babble comments to use the maximum allowed lines of a kuro5hin comment.

[ Parent ]

check the poll every day (1.00 / 12) (#75)
by johwsun on Wed Jan 21, 2004 at 12:57:37 PM EST



[ Parent ]
check the poll every hour (1.00 / 12) (#76)
by johwsun on Wed Jan 21, 2004 at 12:58:16 PM EST



[ Parent ]
check this poll as often as you can (1.00 / 12) (#86)
by johwsun on Wed Jan 21, 2004 at 02:17:21 PM EST



[ Parent ]
other (1.00 / 12) (#90)
by johwsun on Wed Jan 21, 2004 at 02:39:18 PM EST



[ Parent ]
I dislike this poll (1.00 / 12) (#91)
by johwsun on Wed Jan 21, 2004 at 02:39:48 PM EST



[ Parent ]
THIS IS THE FREQUENCY MABBLE BUBBLE POLL! (1.00 / 12) (#74)
by johwsun on Wed Jan 21, 2004 at 12:56:59 PM EST

THE POLL: HOW OFTEN SHALL I CHECK THE MABBLE_BABBLE POLL IN ORDER TO APPLY ITS RESULTS IN K5 FRONTPAGE?

Please post your poll options above, and rate them. The best rated poll option will define how often I will check the mabble_bubble poll, in order to cast or not to cast mabble_bubble comments. thanks for your cooperation. the mabble bubble team!

[ Parent ]

check the poll every second (1.20 / 10) (#77)
by johwsun on Wed Jan 21, 2004 at 12:58:56 PM EST

You are a script, aren't you?

[ Parent ]
then post one mabble bubble (1.00 / 12) (#81)
by johwsun on Wed Jan 21, 2004 at 02:09:21 PM EST



[ Parent ]
then post three mabble bubbles.. (1.00 / 12) (#82)
by johwsun on Wed Jan 21, 2004 at 02:10:01 PM EST



[ Parent ]
then post as many mabble bubbles as you can.. (1.20 / 10) (#83)
by johwsun on Wed Jan 21, 2004 at 02:11:52 PM EST

(you are bandwidth limited after all)

[ Parent ]
this is the best option right now (1.33 / 9) (#115)
by johwsun on Fri Jan 23, 2004 at 11:57:45 AM EST



[ Parent ]
this is the best option right now (1.28 / 7) (#118)
by johwsun on Sat Jan 24, 2004 at 07:25:39 AM EST



[ Parent ]
check every hour, the post one mabble bubble (1.00 / 12) (#79)
by johwsun on Wed Jan 21, 2004 at 02:03:30 PM EST



[ Parent ]
check as often as you can.. (1.00 / 12) (#84)
by johwsun on Wed Jan 21, 2004 at 02:14:40 PM EST



[ Parent ]
then post as much mabble bubbles as you can (1.00 / 12) (#85)
by johwsun on Wed Jan 21, 2004 at 02:15:16 PM EST



[ Parent ]
other (1.00 / 12) (#88)
by johwsun on Wed Jan 21, 2004 at 02:38:29 PM EST



[ Parent ]
I dislike this poll (1.00 / 12) (#89)
by johwsun on Wed Jan 21, 2004 at 02:38:57 PM EST



[ Parent ]
HOW MANY MABBLE BUBBLEs? (1.00 / 12) (#80)
by johwsun on Wed Jan 21, 2004 at 02:08:38 PM EST

If I will be allowed to post mabble_bubble comments, then how many shall I post?

This is related to the frequency poll.
Go vote at the frequency poll, and the frequency result will cause me to post a single mabble_bubble in every decided period.

thank you.

The mabble bubble team.

[ Parent ]

the mabble bubble explanation (1.00 / 15) (#87)
by johwsun on Wed Jan 21, 2004 at 02:30:33 PM EST

http://www.dolally.com/dictionary/definition.asp?Word=Mabble

[ Parent ]
I dislike this poll (1.00 / 12) (#92)
by johwsun on Wed Jan 21, 2004 at 02:42:42 PM EST



[ Parent ]
<--- vote this if you dont like the poll! (1.00 / 12) (#97)
by johwsun on Wed Jan 21, 2004 at 03:31:03 PM EST



[ Parent ]
other (1.00 / 12) (#93)
by johwsun on Wed Jan 21, 2004 at 02:43:11 PM EST



[ Parent ]
I love your discourse (none / 2) (#98)
by xutopia on Wed Jan 21, 2004 at 04:54:04 PM EST

Bayesian thinking is changing the way we look at things in science today. Certain things we can't get definitive answers from. For example, in genetics we find correlations between a given gene and occurrences of a certain feature. Perhaps using the Bayesian formula these genes could actually be given more meaning.

Problem is that too many scientists today want a simple smoking-gun type theory. One that is easy, such as the theory of gravity that Newton gave us. Later, as one of the linked articles points out, came Einstein, who showed that Newton's laws only worked in a limited range of cases; that they were part of a bigger answer.

I'm really happy that you shed some light on the theorem and the formula. It is one tool that we'll need to better understand the universe.
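The formula the comment refers to can be sketched in a few lines of Python. This is just an illustration of Bayes' theorem itself; the gene-screening scenario and every number in it are invented for the example:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior, likelihood, false_positive_rate):
    """Probability the hypothesis holds, given a positive observation."""
    # P(E) expands over both ways the evidence can occur.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: 1% of a population carries a gene variant (prior),
# a test detects it 95% of the time (likelihood), and it false-positives
# 5% of the time. A positive result is far from conclusive:
p = posterior(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(p, 3))  # roughly 0.161 -- the prior dominates the weak evidence
```

This is the same counterintuitive effect the "Intuitive Explanation" article linked above walks through: when the condition is rare, even an accurate test leaves the posterior low.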

Hmm (1.75 / 4) (#102)
by ShiftyStoner on Thu Jan 22, 2004 at 03:33:58 AM EST

 I have always used Bayesian thinking, even before I knew what it was called. I didn't think much of it; I always assumed that only stupid people didn't, as they tend to repeat the same stupid mistakes over and over. That's the essence of idiocy.

( @ )'( @ ) The broad masses of a population are more amenable to the appeal of rhetoric than to any other force. - Adolf Hitler
[ Parent ]
Computers. (none / 2) (#105)
by Znork on Thu Jan 22, 2004 at 07:36:19 AM EST

You state, "This is a good premise for computers and machines because for them only the past and present exist - it is beyond them to conceive of the future." In what way is it beyond computers and machines to conceive of the future any more than it is for humans? Formulating a goal and analyzing possible ways to reach it is not beyond computers. A huge number of examples exist for this, ranging from chess programs through various game AIs to expert systems.

Likewise, cognitive decision making is merely the fallback to ever weaker associations in a neural network (the four fundamental techniques could be seen as arbitrary levels of association strength in a neural network): associations to very weakly related problems or situations, petering out to the level of randomness. It can be fully modelled, but to do so would require a full modelling of the genetics, experiences and current state of mind of the specific individual. It could be done just as well by a machine, with a sufficiently advanced neural network, where again the model would be the sum of programming, storage and network association states. However, whether or not it would be useful is another matter.

Computers can reach for Mars ? (none / 1) (#116)
by leoaugust on Fri Jan 23, 2004 at 03:20:19 PM EST

Formulating a goal and analyzing possible ways to reach it is not beyond computers.

Computers can analyze possible ways to reach a goal, but can they formulate a goal? Which computer could have formulated a goal to reach for the Moon and Mars? A computer can be used to generate approaches to get there, but not always to formulate the goal.

arbitrary levels of association strength in a neural network

It is more than arbitrary. The four techniques are based on the parameters of whether ALL the info is available, whether the information is within the system, whether the information is from outside the system but due to past experiences there are some patterns, and when there are no patterns.

If you could shed more light on what the various "levels of association" are according to you that determine the strength of associations it would be helpful. In fact, what is your definition of "strength."


The eyes cannot see what the mind cannot see.
[ Parent ]

is it only me (none / 2) (#109)
by dimaq on Fri Jan 23, 2004 at 05:27:47 AM EST

or can anyone who has a basic grasp of statistics clearly see that it's a sham?

sure genius (none / 0) (#117)
by leoaugust on Fri Jan 23, 2004 at 04:02:56 PM EST

Sure, genius, it is a sham. Now could you be kind enough to tell us why your basic grasp of statistics makes you believe that? And be sure to talk a little about probabilities in addition to statistics.


The eyes cannot see what the mind cannot see.
[ Parent ]

Time? Future orientation? Say it ain't so! (none / 0) (#126)
by ewall on Thu Mar 04, 2004 at 02:39:04 PM EST

Well, according to this fellow, maybe our whole silly idea of time being a continuum from past thru present onto the future may be bunk. Of course, not everyone agrees with him. (See this article for summary and commentary.)

So what difference does "future orientation" make? ...Alright, I'll answer my own question: no matter your concept of time, our author is simply pointing out that we humans take in a lot more variables than past occurrences and probabilities. We also weigh more "emotional" reactions, such as how we'd "feel" if we chose wrong. These and many more variables come into play, and they're not on an even playing field; somehow we add, average and weigh them all to come up with our decision.

Thus, for example, we may make the decision of when to cross a busy street a lot more carefully than whether or not to catch a ball, because the "future" consequences of getting hit by a car are a lot more severe than those of failing to catch the ball. Many, many more variables. Do ya catch me, here?

All that to say: sure, you could say our brains are thinking sorta Bayesian, but that's not all, folks...
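The "sorta Bayesian" weighting the Nature study describes can be sketched in Python. This is a minimal Gaussian cue-combination model, not the study's actual code, and the numbers are made up; the point is just that the optimal estimate is a precision-weighted average of prior experience and current observation:

```python
# Combine a prior belief with a noisy observation (Gaussian assumptions).
# The noisier the observation, the more weight falls on the prior.
def combine(prior_mean, prior_var, obs, obs_var):
    w = obs_var / (prior_var + obs_var)  # weight given to the prior
    return w * prior_mean + (1 - w) * obs

# Clear view of the ball/road: trust the observation almost entirely.
clear = combine(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=0.1)

# Darkness or snow (a very noisy observation): lean on past experience.
noisy = combine(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=10.0)

print(clear, noisy)  # the noisy estimate sits much closer to the prior
```

This is exactly the tennis-court/snowy-field effect quoted in the article: as observation noise grows, the estimate slides from what you see toward what you remember.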



Karma Police, come arrest this man...
Reminds me of RPD (none / 0) (#128)
by marktaw on Sun Sep 12, 2004 at 07:36:46 AM EST

Recognition Primed Decision Making (RPD), as posited in the book Sources of Power by Gary Klein, holds that we have a sort of meta-event in our heads: a single "crossing the street" memory that combines all our experiences crossing the street. Obviously some events will be weighted more than others (like the time I was almost run over by a black SUV while the driver was shouting at me "Move, Police").

We can then compare any crossing the street experience with this memory.

The best way, then, for us to convey information to others is through story form. For example, more experienced nurses could identify a problem baby better than inexperienced nurses could. They fully believed that it became intuitive after a while, but by interviewing them it became clear that there was a set of symptoms common to many of the cases. Passing on this information to the new nurses in story form is the best way for them to get the equivalent experience.

The military is heavily invested in this method (and indeed Gary Klein came up with much of this research while working for the military) and has very expensive simulations, both computer-based and live-interaction-based, to help train soldiers.

Hands down, Sources of Power is one of my favorite books.

Subconsciously, People may be Bayesian | 124 comments (106 topical, 18 editorial, 8 hidden)