The Wikipedia article on Lindley's paradox presents an example in which the frequentist and Bayesian approaches reach opposite hypothesis-testing conclusions.
The example tests whether the proportion of boys among births in a certain town is 0.5. The thing that strikes me most forcefully is the choice of Bayesian prior: P(theta = 0.5) = 0.5, i.e., the advance assumption that there is a 50% chance the proportion is exactly 0.5 (with the other 50% spread uniformly over all values from 0 to 1).
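For what it's worth, the clash can be reproduced in a few lines of Python. This is a sketch; the counts (49,581 boys out of 98,451 births) are the ones I believe the article uses, and the Bayesian side implements exactly the prior described above, with the uniform component's marginal likelihood being a Beta function:

```python
import math

# Counts I believe the Wikipedia example uses:
# 49,581 boys out of 98,451 births.
x, n = 49_581, 98_451

# --- Frequentist side: two-sided z-test of H0: theta = 0.5 ---
z = (x - n / 2) / math.sqrt(n / 4)
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area

# --- Bayesian side: prior P(theta = 0.5) = 0.5, rest uniform on (0, 1) ---
# Marginal likelihood under H0 (point mass at 0.5):
log_m0 = n * math.log(0.5)
# Marginal likelihood under H1 (uniform prior): the integral of
# theta^x * (1-theta)^(n-x) over (0, 1) equals Beta(x+1, n-x+1),
# computed in log space via lgamma for numerical stability.
log_m1 = math.lgamma(x + 1) + math.lgamma(n - x + 1) - math.lgamma(n + 2)
# Posterior probability of H0, given equal prior odds:
post_h0 = 1.0 / (1.0 + math.exp(log_m1 - log_m0))

print(f"p-value      = {p_value:.4f}")  # below 0.05: frequentist rejects H0
print(f"P(H0 | data) = {post_h0:.3f}")  # well above 0.5: Bayesian favors H0
```

The p-value comes out a bit under 0.05 while the posterior probability of H0 is roughly 0.95, which is the paradox: the same data reject the null for the frequentist and strongly support it for this Bayesian.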
I mean: What? Why would I conceivably assume that? If I picture the parameter as a continuous real number, my instinct is that it's almost impossible for it to equal any particular value exactly, i.e., I'd assume P(theta = 0.5) = 0. And even setting that aside, there is copious evidence that human births aren't really 50/50: there are clearly more boys born than girls, so if anything I'd pick that slightly-above-0.5 value as the most likely prior value.
Is that really how Bayesians are supposed to choose their priors? (It seems atrocious!) Or is this just a fantastically mangled example on Wikipedia?