This post is edited together from some comments I made at Pharyngula, where commenters were discussing randomly selecting a positive integer (with equal probability on every positive integer) in a discussion of creationists' abuse of mathematics.
The problem is, you can't do it.
It is simply not possible to actually select an integer at random where each one has the same probability of being chosen.
To construct something with a uniform distribution over a countably infinite set, you'd have to drop Kolmogorov's axiom of countable additivity.
Such a thing is possible to do, but people working with countably infinite sample spaces do use that axiom, so whatever "probability" you end up with by dropping it will not behave like the probability everyone else uses.
(In measure theory terms, it would be put something like: the union of countably infinitely many pairwise disjoint measurable sets of equal measure cannot have measure 1.)
What does that mean? It means you either have to give up probabilities adding to 1, or you have to give up equal probability on your countably infinite set.
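To spell that out in symbols: suppose each positive integer k had the same probability p. Countable additivity would then force

\[
P(\{1, 2, 3, \ldots\}) \;=\; \sum_{k=1}^{\infty} P(\{k\}) \;=\; \sum_{k=1}^{\infty} p \;=\;
\begin{cases}
0 & \text{if } p = 0,\\
\infty & \text{if } p > 0,
\end{cases}
\]

and neither of those is 1.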
The probabilist Bruno de Finetti (a fascist, as it happens, but that has nothing to do with whether his work was any good) claimed it was possible to have a notion that was effectively a uniform distribution over a countably infinite set by dropping the aforementioned axiom.
That is, he dropped the "probabilities add to 1" part that would otherwise be a consequence of the axiom: each individual probability is zero, and with only finite additivity that's no contradiction. The uncomfortable consequence is that if you add up the probabilities of all the individual outcomes, the probability that some number is chosen comes out as 0, not 1.
Other probabilists have tried making each probability greater than 0. The uncomfortable consequence there is, of course, that the probability that some number is chosen becomes infinite.
No actual mechanism for producing a random number from these somewhat unintuitive constructs exists. Any process that gives you a number will inevitably be weighted toward the small numbers (it must generally give small numbers higher probability than sufficiently large ones).
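To make that concrete, here's a minimal sketch (in Python; the function name and the choice of distribution are mine, purely for illustration) of something you *can* realize: a geometric distribution over the positive integers, built from repeated coin flips. Every positive integer gets nonzero probability, but equal probability is impossible; the mass necessarily piles up on the small numbers.

```python
import random
from collections import Counter

def sample_positive_integer(p=0.5):
    """Sample a positive integer via repeated coin flips (a geometric distribution).

    Every positive integer k has probability (1 - p)**(k - 1) * p > 0,
    but the probabilities necessarily decay: the total mass must sum to 1,
    so large values are exponentially less likely than small ones.
    """
    k = 1
    while random.random() >= p:  # each "failure" pushes us to the next integer
        k += 1
    return k

# Quick check: the empirical distribution is heavily weighted toward small k.
draws = Counter(sample_positive_integer() for _ in range(100_000))
print(sorted(draws.items())[:10])
```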
Now, if you're only using such a construct to represent something like a state of knowledge**, and not requiring that you actually observe a value from it, it is sometimes possible to work with such things. That's effectively what's being done when Bayesians work with flat priors over the positive integers (they call such things "improper priors"): with appropriate normalization, you can get proper posteriors to come out the other end (there's a sketch of this just below the footnote).
** Some people regard the uniform over the positive integers as representing a complete lack of knowledge, but to my mind it actually represents the dramatically strong assumption that, no matter how large a value you consider, essentially all values are larger than it.
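As a toy illustration of an improper prior yielding a proper posterior, here's a Python sketch of a classic setup (sometimes called the tramcar or locomotive problem): an unknown positive integer N, with observations drawn uniformly from {1, ..., N}. The flat prior over N is improper, but with two or more observations the posterior normalizes. The function name and the truncation bound n_max are my own illustrative choices.

```python
def posterior_flat_prior(data, n_max=10_000):
    """Posterior over an unknown integer N given draws uniform on {1, ..., N},
    starting from a flat ("improper") prior over all positive integers.

    The unnormalized posterior is N**(-k) for N >= max(data) (and 0 otherwise),
    where k = len(data). For k >= 2 this sums to a finite number, so dividing
    by that sum gives a proper posterior; with k = 1 the sum diverges and the
    "posterior" is still improper. n_max truncates the infinite sum; the tail
    it drops is negligible for moderate n_max when k >= 2.
    """
    k = len(data)
    lo = max(data)
    unnorm = {n: n ** (-k) for n in range(lo, n_max + 1)}
    total = sum(unnorm.values())
    return {n: w / total for n, w in unnorm.items()}

post = posterior_flat_prior([7, 3])
print(post[7], post[8])  # N = 7 is the single most probable value
```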
A fair number of people object to this unusual kind of construct, insisting that subjective probability and actual realizable probability ought to have the same features. They also like to point to the paradoxes that come with not doing so, which I am not presently familiar enough with to discuss.
Anyway, the short version of all that is "You can't actually select an element from a countably infinite set (such as the positive integers, all the integers, the rationals, or the differences of the square roots of non-negative integers) with equal probability on all elements."
Indeed, it's stronger than that. Not only does the probability need to generally decrease as the values become arbitrarily large, it also can't decrease very slowly: it can't even decrease at a rate proportional to 1/k (though a finite number of values may be exceptions to the general decrease).
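The culprit in the 1/k case is the divergence of the harmonic series:

\[
\sum_{k=1}^{\infty} \frac{c}{k} = \infty \quad \text{for every } c > 0,
\]

so there is no constant c that makes probabilities proportional to 1/k sum to 1. The probabilities have to shrink faster than that; for instance, rates proportional to \(1/k^{1+\epsilon}\) for some \(\epsilon > 0\) do work, since that series converges.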