*Your objection is the same one raised by Xeriar and Bernoulli (elsewhere on that Stanford page),*
Nope. Kelly strategy is only about maximizing rate of return. It has nothing to do with diminishing returns.

To see how it works, consider a simple bet that pays off at odds of K:1. Normalize so that your starting bankroll is 1, and let X be the fraction of your bankroll you bet on each game. If the probability of winning is such that you expect to win W times and lose L times out of N games (W + L = N), then after N games you expect to have

((1 + KX)**W) * ((1-X)**L)

This is your expected rate of return per game, raised to the Nth power. You want this to be maximized, and also greater than one. Since the logarithm is monotonic, maximizing this expectation is the same as maximizing the following expression with respect to X:

W*log(1+KX) + L*log(1-X)

To ensure positive returns this second expression needs to be positive. If L/W is greater than K (L, W, and K being positive), there is no positive value of X for which this is true, so this additional requirement also matches the requirement for expected utility to be positive.
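Setting the derivative of the log expression to zero gives a closed form, X* = (WK - L) / (K(W + L)), which is the familiar Kelly fraction p - q/K with p = W/N and q = L/N. A quick Python sketch (using made-up values for K, W, and L) checks the closed form against a grid search:

```python
import math

# Hypothetical game parameters: a bet paying K:1 that we expect to win
# W times and lose L times out of N = W + L games.
K = 2.0
W, L = 40, 30

def log_growth(x):
    """W*log(1+KX) + L*log(1-X): the log of the expected bankroll."""
    return W * math.log(1 + K * x) + L * math.log(1 - x)

# Closed form from setting the derivative to zero:
#   W*K/(1+K*x) = L/(1-x)  =>  x_star = (W*K - L) / (K*(W + L))
x_star = (W * K - L) / (K * (W + L))

# Sanity check against a coarse grid search over (0, 1).
grid_best = max((i / 10000 for i in range(1, 10000)), key=log_growth)

print(x_star)                          # 5/14 ≈ 0.357
print(abs(grid_best - x_star) < 1e-3)  # True
```

Note that the maximizer is positive exactly when WK > L, i.e., when L/W < K, matching the condition above.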

The resemblance to Bernoulli's approach is purely coincidental.

*and the way to deal with it is to modify the game such that the payout increases exponentially. *

I think you mean "faster than exponentially", since the payout already increases exponentially in the original formulation. The optimum Kelly bet can be calculated provided that the series converges. With hyperexponential (for want of a better term) payouts, the series diverges. But if the series diverges towards positive infinity, the recommendation is to bet as much as possible (or, if we are talking about an entry fee, the acceptable upper limit may be set as high as desired), provided that:

1. No term is infinitely negative or undefined (the argument of the logarithm in each term must be positive), and

2. You are dealing in infinitely divisible monetary values (e.g., the set of real numbers, or the set of rational numbers).

Condition 1 ensures that you never go bankrupt. The reason for condition 2 is easy to see: If you have a bankroll of $1, and you bet 99 cents, then if you lose you will have 1 penny left. You can't bet a fraction of a penny, so you are forced to risk your entire remaining bankroll on the next play, or quit having lost almost everything. Thus in the real world, the maximum bet/entry fee would be smaller than if you were playing in a world of infinitely divisible currency.
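The penny trap is easy to show concretely. In this Python sketch (with made-up numbers), a $1.00 bankroll held in whole cents loses a 99-cent bet, after which no fractional Kelly bet is even expressible:

```python
# Hypothetical illustration of condition 2: indivisible currency.
bankroll_cents = 100          # start with $1.00
bankroll_cents -= 99          # bet 99 cents and lose -> 1 cent left

# Suppose the Kelly fraction for the next play is 35% of the bankroll.
kelly_fraction = 0.35
ideal_bet = kelly_fraction * bankroll_cents   # 0.35 cents: not a legal bet

# The only whole-cent options are 0 (quit) or 1 (the entire bankroll).
legal_bets = list(range(0, bankroll_cents + 1))
print(legal_bets)             # [0, 1]
print(ideal_bet < 1)          # True: forced to go all-in or walk away
```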

*I'm fairly sure the entry amount / bet variants are equivalent - it doesn't matter whether you choose the amount or the house.*

If it's a bet rather than an entry fee, then the payoff is proportional to the size of your bet. This modifies the last column of the table in my previous comment, and the resulting series will be a different function of X. If it's an entry fee, on the other hand, we are unable to choose X for ourselves, so we can't strategize for maximum return. Instead we are making a yes/no decision, which we make by determining whether the sum of the series (calculated using the predetermined value of X) is positive or negative.

Both of these factors affect the calculated number: In one case we are looking for the maximum of one function, in the other we are looking for the zero crossing of a different function.
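The distinction can be sketched in Python (using the same made-up K:1 game as above): in the bet variant we search for the maximum of the growth function, while in the entry-fee variant we only check which side of the zero crossing the house's fixed X falls on:

```python
import math

# Hypothetical K:1 game: expect W wins and L losses out of N = W + L plays.
K, W, L = 2.0, 40, 30

def log_growth(x):
    """W*log(1+KX) + L*log(1-X) for this game."""
    return W * math.log(1 + K * x) + L * math.log(1 - x)

# Bet variant: we choose X, so we look for the maximum of the function.
x_star = (W * K - L) / (K * (W + L))   # closed-form maximizer

# Entry-fee variant: X is fixed by the house; we only check the sign,
# i.e., which side of the zero crossing the fixed X falls on.
def should_play(x_fixed):
    return log_growth(x_fixed) > 0

print(should_play(x_star))  # True: the optimum is certainly worth playing
print(should_play(0.8))     # False: the fixed stake is too large, sit out
```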

*Bayesian rational decision making is a process whereby decisions are made by considering the probability × utility of a given outcome. Utility is given by a utility function which depends on the context - games are often analysed because the utility is considered easy to codify via points in the game. It is an extension to Bayes theorem, but a fairly common one. Unfortunately I don't have a more official definition to hand, but Google will confirm that I didn't just make the term up.*

I thought it was von Neumann who developed the concept of expected utility. Bayes' Theorem allows you to calculate an unknown probability from a known prior probability and two known conditional probabilities; it is neither needed nor helpful in calculating expected utilities. This is not to say that Bayes' Theorem can't be applied to game theory; I just don't see an application to this particular game, where all probabilities are known a priori.

*"This is a case where you could lose your entire bankroll in one game, so obviously you wouldn't play if the entry fee is so high."*

*You and I think this is obvious, but it is not a result from Bayesian-rational or indeed simpler models of expected value, and it's a reason why I think such a definition of rationality is too narrow.*

KS is rational...

*For instance your example says 102 is too high - but what about $101?*

$101 doesn't allow you to go bankrupt, but it does result in a negative expected rate of return, so you would not pay this amount to play.

*What's your reserve amount?*

I don't know what you mean by this question. Because Kelly strategy requires that you bet only a fraction of your bankroll, you always have a reserve.

*To me there's a range of rational responses which involve not playing the game at all, or not paying the optimal amount in order to prolong my gambling enjoyment. Irrational responses would include betting your daughter's bicycle and challenging the dealer to a duel.*

KS tells you when it is rational to play (and for how much), and when it is not. However, even when KS advises you to sit the game out, you might have other reasons for staying in. When Edward Thorp was testing casino gambling strategies based on Kelly strategy, he had to keep making small bets at the blackjack tables even when the card count was against him, so that he would be ready to make a larger bet when the tide turned the other way. When the casinos got wise, he would use a partner (probably Claude Shannon), with one player making small bets and counting the cards, then surreptitiously signaling the other player to join the game as a "high roller" when the card count got interesting.

I had the good fortune to hear Thorp give a public lecture several years ago. That is where I first heard about Kelly strategy. I suspect the wearable roulette computer he and Shannon developed must have applied Bayes' Theorem.

--------------------------------------------------