Tuesday, December 23, 2008
Finally
Got the first of a couple of papers with my student done in working paper form at last; it went up in the working paper series today.
It's a tremendous relief - it seemed to take forever.
Stuff that shits me #36
Web pages with "search" facilities that don't tell you whether the search is by phrase or by individual term... and whose "advanced search" doesn't either. If you're just using google's site search, fine, most of us are familiar enough with that to get how it works - but plenty of sites are using special search facilities.
C'mon guys, this is not something you leave out. It's not information you tuck 5 clicks away buried 8 screenfuls down some random help page. You put it right there, close to the search box. Even better is if you can offer ways to do both (such as "use quotes to search by phrase", which conveys in 6 words that (i) otherwise, you're searching for individual terms, and (ii) searching by phrase can be done in a natural way).
Sunday, December 21, 2008
On proper care and feeding of ideas
I'm writing several papers with an able student of mine (who is presently working as a research assistant with me).
We're working in a financially-related area. It's not the area my PhD is in - I'm a statistician, but one of my undergrad degrees was in this area, and the problems we're looking at are essentially forecasting-related and many of the published papers (at least the better ones) are at heart applied statistics, with a few twists that arise from the particular application.
One thing that has come up over and over again as we work is just how careless some of even the best-known work in this particular sub-area is. Many papers that pass peer review display a fundamental ignorance of the work that they themselves refer to (that is, they appear not to have actually read much of their references, and missed important information contained in them). Their algorithms have not been carefully checked, and fail to meet even fairly basic "reasonableness checks". They notice and then effectively ignore major clues about what's wrong (describing ways to avoid problems that should have acted as large flashing neon warning signs). They often make unsupported assertions that might seem plausible but which are in fact (if you check carefully) untrue. Even the better papers we're looking at contain many subtle errors.
It's a problem outside of the narrow area we're currently working in, and seems to infest a much broader swathe of literature.
There's a distinct lack of scholarship. Little intellectual rigor, little curiosity. Editors, referees and authors are often woefully ignorant.
Much of the best work is flawed and the more mediocre efforts are so laughable that I'd be unable to pass an undergrad for doing better work than they can get published. Some of these obviously ludicrous efforts win prizes.
When I act as a reviewer, I am frequently met with astonishment from editors at my thoroughness. I am in fact not a particularly thorough reviewer, but I do at least try to be somewhat familiar with the necessary background before trying to review work. I try to read the paper all the way through (and the substantive parts more than once), understand it as far as possible, check what it says where feasible, and make certain the arguments hang together and make sense. I do reasonableness checks where I can. That is, I do a fairly elementary level of checking that the work is not total garbage (less than with my own work).
The fact that even this fairly basic process can take me many weeks of solid work and result in a referee's report longer than the original paper is not an indication that I'm obsessive, but an indication of just how careless most of the work in this (broader) area is.
And it's more general than that. I have helped people out in other areas (I could list something approaching a dozen, some widely divergent from the ones I've mostly worked in) where even the "classic" papers seem to be full of half-baked nonsense mingled with good ideas, joined up with death-defying leaps of logic and half-understood references.
I posted recently about how making mistakes is not necessarily a bad thing. The ability to make mistakes is an important part of getting things right. But you have to be willing to look for your errors and try to correct them. The error-filter can't be left out! And if you're writing supposedly academic papers, you should try to do it before they're published.
Sunday, December 14, 2008
The necessity of error and the error filter
Recently, two Scienceblogs bloggers have made substantive errors. (Well, no doubt more than two have, but I saw two.)
One is a mathematician and one is a biologist.
Now the fact that these writers made errors is of no great consequence. Every human, all of us, we all make mistakes. Everyone.
In fact, some of us (well, okay, me) make lots and lots of mistakes. Actually, making mistakes is part of achieving anything at all, so we shouldn't fear making them (at least in situations where the consequences of error are not dire) - it means we're doing something.
What matters to me is not that they got something wrong, but how errors were dealt with. In both cases they made a post that pointed out they got it wrong, and that explained in detail what was correct.
A body of knowledge, whether it resides in an individual or in a culture, contains mistakes. Some of what we think is true is necessarily going to be false. If the fact that the corpus has errors in it is accepted, there is some hope of correcting some of the errors.
If you encounter someone who cannot be wrong, you may be sure they are in a sea of falsehood. It can be no other way.
However, we need more than just an acceptance of the possibility of falsehood and a willingness to change ideas. We can't just change our beliefs willy-nilly. The fact is, for most people, even the highly deluded among us, almost all of what we believe to be true is true, or close enough to true to be valuable (most of those truths are relatively mundane facts; it's what gets us through life). So we need some reason to change beyond the simple possibility that we may be in error.
We need some way of identifying our most mistaken ideas and replacing them with better ones, without mistakenly replacing a good idea with a bad one. We need some kind of "filter" that allows us to tell one from the other.
Some people use prayer to try to figure out what is right. The problem with that is it's mired in error. You can't tell for sure that what you think is divine guidance isn't just your own thoughts. In fact it's obvious this must be so, not least because two people can each be sure that they've received guidance about the right path, and those pieces of guidance are contradictory; since they can't both be right, at least one must be wrong. Whether you believe in God or not, the possibility that people can be mistaken in their interpretation of the result of a prayer for guidance should be obvious.
What is this magic knowledge-generating filter?
It's reason and evidence.
This is how we learn. This is how we discover what we did not know. This is why we even have a body of knowledge at all.
In order to raise ourselves out of the muck of ignorance, we need to admit the possibility of error, and use the only reasonably reliable filter available, in order to reduce those errors.
Thursday, December 11, 2008
Another kind of load
The discussion of Noah's ark on Pharyngula brings to mind a topic that I have mused on before, but which people who know more biology than me might actually be able to take a stab at.
Just what would be the parasite load that all the people and animals on an ark would have to entertain? Some of the parasites that humans and our vertebrate and invertebrate kin are subject to are freaking nasty.
Would Noah's family have even been able to survive 7 months (or maybe 10 - the inspired word of god isn't sure on such things) with two of each kind of parasite (or was it seven) that couldn't be carried by their animals? Or would they and the animals all be simultaneously blind, crippled, malnourished, crazy and dead?
___
"Conservatives trust the government with tanks and nuclear weapons but not to hand out cheese to poor people" - Jon Stewart to Mike Huckabee, on conservative "small government" hypocrisy
Sunday, December 7, 2008
My blog leads an exciting double life.
According to Typealyzer, my blog is ISTP (Myers-Briggs Type Indicator).
Which means, among other things, that my blog enjoys driving race-cars, working as a firefighter, and thinking things over for itself. It is a master of responding to challenges that arise spontaneously (apparently, when Blogger goes down, my blog enjoys rolling up its sleeves and fixing it all by itself).
My blog apparently leads a very exciting life.
Apparently, it's considered unethical to compel taking the Myers-Briggs Type Indicator, so I shouldn't have entered my blog's URL in the address bar at Typealyzer without asking. But devil-may-care-racecar-driving heroic types like my blog probably don't worry about such things very much.
Of course, we INTJ's think Myers-Briggs is mostly a bunch of hooey, and when applied to a blog, probably more so.
The tenants of ideology
Another pet hate... I have seen this one 6 or 7 times in the last few months.
"they took the basic tenants of evolution"
That's "tenets", dammit. Why has it suddenly become "tenants" all over the place?
"they took the basic tenants of evolution"
That's "tenets", dammit. Why has it suddenly become "tenants" all over the place?
Sunday, November 30, 2008
Stunned
I had a bit of a shock. I was chatting to a generally very sensible fellow, when he came out with the "atheism is a faith" bit, accompanied by a bit of mild invective.
Since he was normally given to being very sensible, I asked what made him think that. Turns out that we had different definitions of atheism - he understood it to be an absolute, categorical denial of the existence of any gods; in that situation, his claim has at least some basis, and of course some dictionaries do support that definition.
We then had a brief but very reasonable discussion.
It turns out that, under the definition I gave (a-theism, absence of belief in gods), he is an atheist. Indeed, our positions are extremely close (we're both agnostic on knowledge of existence, but both lack belief).
I pointed to some online definitions of atheism to make it clear my definition was not esoteric.
I was really glad I didn't over-react to that old chestnut. He was being quite reasonable, within the scope of his definition.
Not everyone who brings up the "atheism is a faith" thing is a fundie.
Tuesday, November 18, 2008
A little tidbit I should have already known
I was reading online about a game I bought while in the US and which I have only just had time to take a peek at. Someone made a point about many parts of the game being based on the fact that a right triangle with sides of 7 and 4 units has a hypotenuse with length very close to 8.
Well, 7² + 4² = 8² + 1,
so the hypotenuse is close to 8, as suggested: √(7² + 4²) ≅ 8.
In fact, I knew √65 to be very close to 8 1/16
(if x is not too small, √(x² + 1) ≅ x + 1/(2x)).
(Note that (x + 1/(2x))² = x² + 1 + 1/(4x²), and if x >> 1, the final term is quite small.)
So that's an error of around 1/128, or about 0.8%; pretty good, since the game aims for much less accuracy than that in general.
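If you want to check that numerically, here's a quick sketch in Python (nothing deep - it just redoes the arithmetic above):

import math

exact = math.sqrt(7**2 + 4**2)   # the true hypotenuse, sqrt(65) = 8.06225...
crude = 8                        # the game's value
better = 8 + 1 / (2 * 8)         # x + 1/(2x) with x = 8, i.e. 8 1/16

print((exact - crude) / crude)   # relative error of using 8: ~0.0078, about 1/128
print(better - exact)            # 8 1/16 overshoots by only ~2.4e-4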
But then I thought about the fact that 16 in the denominator was a bit too small, and I wondered about how much. I realized straight away that it was in fact about a sixteenth too small. That is, it occurred to me that √65 is very close to 8 1/(16 + ¹/16).
A little light went off in my head, so I hauled out my calculator.
Try this with me, if you have a calculator handy:
Take the square root of 65. (You should see 8.06225...)
Now subtract 8 (the bit we know).
Take the reciprocal (¹/x). You get 16 and a bit.
Subtract 16 and take the reciprocal. Looks like you get the same number back...
What is this number? A tiny bit of algebra shows it's 8 + √65.
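If you'd rather let a few lines of code press the calculator buttons, here's the same experiment as a short Python loop (floating point will drift eventually, but the fixed point is clear for the first several rounds):

import math

x = math.sqrt(65)
for _ in range(5):
    a = int(x)         # integer part: 8 the first time, 16 every time after
    x = 1 / (x - a)    # reciprocal of the fractional part
    print(a, x)        # x keeps coming back to 16.06225... = 8 + sqrt(65)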
So far, that may seem like a trivial curiosity. But this happens all over.
For example, you get the same thing with any positive integer x: √(x² + 1) + x is a number like that "16 and a bit", where you can keep subtracting that integer part and taking the reciprocal.
That is, expressions like 8 1/(16 + ¹/(16+ ...)) come up lots of times (and recognizing that I'd hit one of these objects was what made the light go off).
Take √10 for example - it's 3 1/(6 + ¹/(6+...))
And you don't just get it with roots of 1 more than a perfect square. As I said before, it happens all over.
We've hit continued fractions. They come up a fair bit in mathematics, and they appear in numerous places where rational approximation comes in - I remember playing with them when dealing with asymptotic approximations in statistics, for example. There's a much nicer notation (see the wikipedia article), so if you're playing with them you're not stuck with endless layers of fraction running down the page.
So, for example, the sequence 8, 8 1/16, 8 1/(16 + ¹/16), ... 8 1/(16 + ¹/(16+ 1/16...)) would be rendered as:
8, [8; 16], [8; 16, 16], ... [8; 16, 16, ...]
Similarly, √10 is [3; 6, 6, 6, ...].
The well known continued fraction for √2 falls into this class: [1; 2, 2, 2...].
Compute a few terms in that sequence with me:
1, 1.5, 1.4, 1 5/12 = 1.416666... , ...
already we're quite close - and it continues to jump about either side of √2, getting closer and closer.
For larger numbers, the convergence is much faster. The general continued fraction for √(x² + 1) is [x; 2x, 2x, 2x, ...].
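Here's a small Python sketch that generates the convergents of [x; 2x, 2x, ...] with exact rational arithmetic (the function and its name are mine, purely for illustration):

from fractions import Fraction

def convergents(x, n):
    # First n convergents of [x; 2x, 2x, ...], the expansion of sqrt(x^2 + 1).
    out = []
    for depth in range(n):
        f = Fraction(0)
        for _ in range(depth):      # unwind 'depth' layers of 1/(2x + ...)
            f = 1 / (2 * x + f)
        out.append(x + f)
    return out

print(convergents(1, 4))   # sqrt(2): 1, 3/2, 7/5, 17/12 - the sequence above
print(convergents(8, 3))   # sqrt(65): 8, 129/16 (= 8 1/16), 2072/257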
Try seeing if you can work out what is going on with square roots with different offsets from a perfect square.
So anyway, not only is there a handy way of computing square roots of numbers close to perfect squares, there's a handy way to improve the calculation if it wasn't as accurate as you needed.
There are many beautiful things related to continued fractions. Take a look over at MathWorld if you've a mind for some boggling factoids.
What fun.
(Two posts in one day! OMFFSM)
Unconsciously annoying
Here's a peeve that I've been seeing all over the place the last couple of weeks:
"I'll leave that to your conscious"
. . . That's conscience, dammit.
"I'll leave that to your conscious"
. . . That's conscience, dammit.
Thursday, November 13, 2008
58 down, two to go.
Stevens is behind Begich by 814 votes. With mostly Begich-heavy count left, Stevens is not going to pass him.
The only question remaining: whether Begich can (roughly) double that lead and avoid a petition for a recount (not that a recount would flip it).
Saturday, November 8, 2008
A year (and a month)
With all the stuff that happened just after I got back from the US and then the hurried preparations for lecturing this subject I am still busy with, I entirely missed my blogging anniversary [for this blog, at least - I also have a long running, if recently neglected, personal blog that's been going for about 5 years].
Yep, I started this blog in October 2007, a year and a month ago.
Many thanks to my modest band of readers. Hi!
This is also my 150th post, so that's averaging about 3 posts a week.
Volume is down (and traffic with it): with the lack of time for much of anything but work lately (hmm... I think I have some kids around here somewhere), blogging is the thing that has had to drop off a bit.
The other thing is the fire mostly isn't there right now. There's plenty to get worked up about, but I just haven't had enough anger to go around the last few months, nor the time to deal with a more reasoned argument. I can't even keep up with science news (at last glance, my science news aggregator had about 800 unread articles).
[I have lots of ideas for things to write about, but by the time I find an hour to write a decent post, it has become out of date. I also have a number of topics that won't go out of date in a hurry, but they would take much longer to write.]
So volume is down and will probably stay that way for some weeks yet. But I am still here.
Wednesday, November 5, 2008
The joy and relief around here is palpable
I'm sitting in a large building at the moment, and echoing from different parts of the building I hear cheers, laughter, loud conversation as the news filters through the building.
I've never heard people so bubbly, excited, and at the same time, relieved at a US election. Obama's election victory seems to have energized almost everyone.
One colleague said to me "What I have now is hope".
Which about sums it up, I guess.
Monday, October 27, 2008
Christian charity rejects donation from evil roleplayers
Many roleplaying conventions hold charity auctions, which are often very strongly supported by the convention-goers.
Ogre Cave reports that Gen Con (a major roleplaying convention) raised over $17,000 at its annual charity auction, held in honour of (recently deceased) Gary Gygax.
Their chosen charity, Christian Children’s Fund (apparently a favorite of Gygax), learned that sales of Dungeons & Dragons materials were part of the auction and turned the money down.
Apparently saving children isn't really so important after all, if the money might have come from people who like to roll dice and make up stories.
Fisher House Foundation accepted the evil, tainted money, apparently without reservations. So far, the unstinting wrath of the almighty has failed to fall upon them, but obviously it's only a matter of time.
Southern Comfort?
I no more envy people that find faith comforting than I envy people that find a double shot of whiskey in their morning coffee comforting.
... I might find it understandable, but that's not at all the same thing.
Saturday, October 25, 2008
Speed reading
Mid-last week I was asked if I could teach a segment of a subject.
I said "I'll need to check, but it will probably be okay. When does it start?"
"In a week."
Nice to have, you know, preparation time.
I said "I'll need to check, but it will probably be okay. When does it start?"
"In a week."
Nice to have, you know, preparation time.
Wednesday, October 22, 2008
You can only rev your base up enough to vote once
What's the point in going to the extreme end of your base over and over, while losing the undecideds, independents and any crossover Dems?
I mean, really? If you rev up your base enough to vote, and you're doing public financing anyway, every time you go back to the well, you're losing votes. And it's not just McCain that doesn't seem to understand the message.
The deluge of money to Tinklenberg in the MN house race, and the swing in the NC senate race from Dole to Hagan (Dole was ahead until she started attacking Hagan over actually meeting with atheists), seem to indicate that sufficiently extreme tactics motivate people to support your opponent much more than they help you.
So right wingnuts, here's the lowdown: once you convince your base to actually go out and vote for you, there's little point in going further - if you get them three times as worked up, they don't get to vote for you three times. They just get out the white hoods and the burning crosses. And then the decent, ordinary people, a whole heaping lot of them, suddenly start finding a few bucks for the other guy...
What's up with that? Has the right wing lost the ability to count to one?
Monday, October 20, 2008
Statistics as philosophy
This post is related to a point I often try to make (not that I am completely of a mind with the author, but much of what he's saying I identify with).
Fundamentally, statistics is different from mathematics, though it uses the tools of mathematics. Mathematics helps with the "what" (such as "given I want to measure this, what do I do?"), but the "why" (such as in the sense of "why work that out, rather than something else?") is somewhere other than mathematics.
This point is often lost on otherwise highly competent people. I have seen many good mathematicians come a cropper on it. Some of the worst explanations of statistics I have ever seen come not from people who have trouble with the mathematics in it, but from people who have no trouble with the mathematics at all. (I could name names, but I am feeling generous today.)
And it's often like that with students - I see it a lot. As a student, I had a similar experience to the poster I linked to - I was - manipulation-wise - reasonably competent at statistics. I could do the calculations, if it was reasonably clear what calculations were required. But I did two years of it and still didn't comprehend it. I didn't even comprehend that there was something to comprehend - after all, I could pass the subjects okay, so I must have 'got it' okay, even though it seemed sort of wishy-washy to me. But actually I didn't get it at all. It wasn't until I was doing some third-year subjects that it eventually clicked. I suddenly understood what all the previous subjects had been about. I got it. The material I had learned wasn't a bunch of different stuff all lumped together that was done the way it was purely by convention (though there is no shortage of conventions) - there was, in fact, a coherence to it all. It was all of a thing, it fitted together; the stuff I'd learned was the result of a limited collection of concepts applied to different problems. I could actually begin applying my understanding outside my direct learning, to problems I'd not seen before. I had a framework within which each new piece of knowledge fitted in with everything else.
I'm not certain how to even convey this understanding, though I try. Students recognise that I'm passionate, at least (or so they tell me), though I'm not sure that the "why" always comes across to more than a very few of them.
Monday, October 6, 2008
Happy Happy Joy Joy
I am one happy blogger. I have completely sorted out a little mathematical problem that's been plaguing me for ages. It's one where I already knew the result, but all the proofs I could construct were either too embarrassingly clunky to use (I mean, really, really awful), or elegant but handwavy in one place.
I realized last week we really needed this result for something I'm working on with my research student. I sat and thought about it for a while today and finally noticed that the one remaining bit of argument we needed was obvious if you just recast the whole problem as a count from a thinned Poisson process. The crazy thing was my old handwavy argument was in effect already doing that; I had just failed to recognize it for what it was. Now that it's been set up in the right way, all the handwavy aspects drop away, and a nice clean half-page argument based on already-known results establishes the result we need.
This is one of those moments when after the fact everything is so obvious that I feel inadequate for not having seen it much earlier, but for the moment the joy is undiminished, because what this small step gets us to is something dramatic (assuming showing a bunch of well-known-in-their-application-area people that what they've been saying and doing is completely wrong is dramatic).
Tuesday, September 30, 2008
Quote
"My basic objection to religion is not that it isn't true; I like plenty of things that aren't true. It's that religion grants its adherents malign, intoxicating and morally corrosive sensations. Destroying intellectual freedom is always evil, but only religion makes doing evil feel quite so good." - Philip Pullman
I haven't given up blogging - I'm just keeping very busy indeed. My mother even emailed me wondering what was wrong.
Tuesday, September 23, 2008
Drawing of the space habitat
It appears that a number of people are misunderstanding my description of the space habitat (on the order of the "invasion fleet being swallowed by a small dog" scale).
So here's a picture, approximately to scale:
(Edited to correct the drawing - the original had the Moon's orbit too small.)
The large circle is the earth's orbit. The circle in the middle is the Sun. The little circle on the left, that's the Moon's orbit around Earth. (Earth itself is a teensy dot in the middle of that circle, about as big as the line marking the orbit of the Earth around the Sun. I drew the Earth in, but you probably can't see it in the small image). The relatively large circle on the right, that's the space habitat sitting at a Lagrange point (or rather, sitting with a Lagrange point at its centre). It is so phenomenally big it might not actually be stable here (I have not done the calculations to check).
(By comparison, the original Ringworld would be represented by the really big circle. Much larger.)
Now the habitat goes around the Sun in the same orbit as the Earth, offset by 60 degrees. It rotates about its own central axis once every 24 hours, giving it a day-night cycle. The ring would be tilted a little, so it doesn't shade itself most of the time - in fact the ring will precess as it goes. The artificial gravity induced by the rotation would be about 1g (it would, I think, vary somewhat between night and day, because at midnight you're orbiting a fair bit nearer to the Sun than you "should" for the center of mass of the habitat, and at midday you're a bit further from the Sun).
Thursday, September 18, 2008
Large space habitats
I was thinking about space habitats. It was probably triggered indirectly by the fact that I'm reading* a SF story at present (The Algebraist, Iain M Banks), though there's no direct connection to my resulting train of thought.
*when I'm travelling, I read. There's lots of waiting about and sitting in planes and stuff (13-14 hours across the Pacific, for starters) and I rarely feel well enough in flight to concentrate on actual work. Even if I only spend a fraction of that time reading, I will get through several books on a trip.
It occurred to me that there will be a particular radius of "ring"-type space station where one rotation in 24 hours would produce artificial gravity of about 1g (this will obviously be large, even without doing calculations).
If you put such a habitat at an L4- or L5- point on the earth-sun system, you'll get earth-level radiation from the sun (without the nice van Allen belts to protect you, unfortunately), with earth "days" and earth "gravity".
So how big is it? Well, I did the calculations, and I get a radius of 1.85 million km. This thing is huge - the earth-moon system would fit comfortably inside it.
It's not Ringworld-huge, by a long, long shot - on that scale, it's minuscule. But still, very very large indeed. If it were a thin ring about 100 km wide (it's about 11.5 million km around, so 100 km wide is indeed "thin"), it would have a "land" area roughly eight times that of earth (assuming people live only on the inner surface of the ring). For the present I'm assuming you'd have a series of interconnected domes or similar on the inner surface, which allows you to bring air, water and other resources in stages.
Assuming, that is, that I have done all the calculations correctly.
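For anyone who wants to redo the calculations, here's the back-of-envelope version in Python (standard rough constants; a sanity check, not an engineering model):

import math

g = 9.81                      # m/s^2, target artificial gravity
period = 24 * 3600            # one rotation per 24 hours, in seconds
omega = 2 * math.pi / period  # angular speed, rad/s

r = g / omega**2              # from g = omega^2 * r
print(r / 1e9)                # ~1.855, i.e. the ~1.85 million km quoted above

circumference = 2 * math.pi * r   # ~11.65 million km around
area = circumference * 100e3      # a ring 100 km wide, in m^2
print(area / 1.49e14)             # ~7.8x Earth's land area (~1.49e8 km^2)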
Many of the problems associated with a ringworld habitat go away - you don't need to worry about the orbit being unstable for example, though I guess if the mass gets large enough there may be some issues with the stability of the Lagrange points. The amount of material required is far, far smaller than for a ringworld - and the relatively more obtainable amounts of material mean much less hyper-engineering is required.
I haven't done any engineering calculations to work out stresses and such, so I don't know whether you could build a lot of the base structure from simple rock, or if you really need to go to strong metals or even unobtainium.
A nice little thought experiment, anyway. I don't recall seeing anything like this in a story. I'm not sure if that's because I don't read around enough or just that nobody has used an idea like this - but I cannot have been the first person to work this out, so I am curious to know if it has been used in a story somewhere.
[Edit: there's some nifty ringworld artwork to be found.]
Sunday, September 14, 2008
Living in interesting times
Regular readers will have noticed a dramatic reduction in my posting frequency over the last month. This has been largely due to an upcoming trip to the USA and some other things keeping me busy; I will be in the USA next week.
I will be busy for a time after I return as well, so the sporadic posting will continue for at least a little while.
Saturday, September 13, 2008
TV Science Fiction on humanism and nonbelief
Characters who lack belief are common in SF, yet - unusually for television - they are usually presented in a very positive light. Let's consider a few popular shows.
Star Trek
The characters in the various series are almost universally humanist, and the Federation is almost relentlessly humanist. Star Trek is famous for the sense of hope it conveys about the future, and I think that's largely connected to the humanist sentiment that runs through the entire franchise. In the original series the only main character with any apparent religious sensibilities is McCoy, who does refer to God, and does seem to have a religious background and appears to maintain some level of faith (though he does not appear to be observant of religion). The only main character with any real involvement in even quasi-religious ceremony is Spock, but that ceremony is disconnected from supernatural beliefs. In TNG, the main characters are, if anything, more humanist. Worf, while raised by humans, appears to have had enough steeping in Klingon culture to have some degree of acceptance of Klingon religion (he does make reference to Sto'Vo'Kor, for example) and practices some Klingon rituals. None of the humans is particularly religious. Deep Space Nine presents a strongly faith-based culture (the Bajorans), but it is made clear in the show that the beings they base their religion on are not supernatural, simply very powerful aliens.
Firefly
The captain of the Firefly, Mal Reynolds, repeatedly discusses his lack of belief, and is consequently presented as an atheist. One of the other main characters, Book, is a "Shepherd", a kind of priest, but we almost immediately discover very strong indications that he is much more than a simple priest. In one 'episode' (in both senses) River fixes Book's bible, by removing or changing all the parts that make no sense - the book ends up in tatters. The main "spiritual" character is Inara; her religious sensibilities are more Eastern, and several times she is seen to minister to Book; to my recollection, one episode ends with what can only be described as a kind of benediction - Book kneels before her in misery while she places her hand on his head in a metaphorical blessing. Religion may be somewhat important in the wider culture (it features in several episodes, but is as frequently a source of hatred and manipulation as of comfort), but it is not important in the lives of most of the crew. A sense that people will be good or evil with or without religion clearly comes through. Firefly is somewhat more independent and libertarian in sentiment than the other shows, but many of its characters have a strong humanist bent.
Battlestar Galactica
This show is unusual in that it depicts an overtly religious society, though many characters are not particularly religious (and a few are openly doubtful). Doubters are not regarded as "evil". Religious and nonreligious characters generally seem broadly accepting of each other. The main religion of the humans is polytheistic, that of the Cylons is monotheistic. Several humans become much more religious over time, but one is of dubious sanity (starting out sane and atheist and becoming apparently insane and somewhat religious); in each case the increase in belief is associated with apparent evidence consistent with that belief (even though some of the resulting beliefs are contradictory).
Stargate
Stargate is another overtly "humanist" program, but it is much more explicit in its treatment of religion. There are three main "enemies" in the series - the Goa'uld, the Replicators and the Ori. The first and last are gods to their followers - false gods, but gods with great powers nonetheless, so it is little surprise that they have great followings. The heroes aim to convince their followers that those they worship are not gods, and ultimately to defeat the false gods. None of the main characters is religious (though Teal'c is initially a believer, he throws off his religion). The Ori in particular mirror the worst aspects of fundamentalist, dominionist religiosity.
Dr Who
The Doctor himself is an avowed lover of humanity, and the show is unremittingly humanist. Religious themes do come into the show sometimes, but the explanation is generally more on the natural side than the supernatural.
All of these programs deal in some way with "constructed families"; Firefly is probably the most explicit of these, but in each of them, "family" is something you make, not something you are. 'Traditional' families are not a major aspect of any of the shows (on DS9, Chief O'Brien has a 'traditional' family ... and marriage problems); one-parent families are common. Yet love and loyalty to friends and colleagues run very strongly in all of the programs. All of these shows make quite clear that morality and religious belief are largely orthogonal. Stargate is perhaps the strongest in its anti-faith message (it takes no clear position that all religions are false - but all the religions that have a substantive place on the show clearly are false, dangerous and evil). People who lack overt belief are common, and almost all of them are moral, heroic, loyal, loving... and most of all, human.
Science fiction, and in particular TV science fiction - because it reaches a large and regular audience - has a small but significant influence on our society. Because it is forward-looking, the influence is generally strongly progressive, and in the way it presents its major characters, it generally presents atheism in an extremely positive way, far out of keeping with the common depiction of atheists in other programmes (who are often presented as cold, abrasive, misanthropic or amoral, when they're presented at all).
--
Book: What are we up to, sweetheart?
River: Fixing your Bible.
Book: I, um... What?
[River is working on a mangled bible. Passages have been crossed out or corrected. Loose pages lie about.]
River: Bible's broken. Contradictions, false logistics... doesn't make sense.
Book: No, no. You - you can't...
River: So we'll integrate non-progressional evolution theory with God's creation of Eden. Eleven inherent metaphoric parallels already there. Eleven. Important number. Prime number. One goes into the house of eleven eleven times, but always comes out one. [She looks him in the eye.] Noah's ark is a problem.
Book: Really?
River: We'll have to call it "early quantum state phenomenon". Only way to fit [laughing quietly] 5,000 species of mammals on the same boat... [She rips more pages out.]
Book: River, you don't... fix the Bible.
River: [Speaking gently.] It's broken. It doesn't make sense.
Book: It's not about... making sense. It's about believing in something. And letting that belief be real enough to change your life. It's about faith. You don't fix faith, River. It fixes you.
(Book tries to pull some of the ripped out pages from River's hand, but they tear.)
Book: You hang on to those then.
Firefly: Jaynestown
Friday, September 12, 2008
Two spins, one story
In looking for some links to a completely different story, I came across two different headlines for exactly the same piece of news:
NSW students 'above average in all areas' (which goes on to say "over 90 per cent at, or over, the national minimum benchmarks")
Story quotes the State Education Minister: "It's just such a tribute to our teachers, to our principals and of course to our kids themselves. They've done brilliantly."
vs
10pc of students fail to meet minimum literacy standards
and quotes the Federal Education Minister: "While it was very pleasing that 90 per cent of students met minimum standards, governments have to focus on making sure every Australian child succeeds at school."
Wednesday, September 3, 2008
A Sherlock moment
This morning I travelled to the university with my partner, and as she pulled up I looked out the window at the car in the adjacent space and had a Sherlock moment (it happens now and then). As I stepped out of the car, I said:
"The driver of that car wears a lot of large rings on their right hand".
"What? How on earth can you know that?"
"Look at the car door. See the pattern of scratches on the duco where the handle is? Lots and lots of little gouges made by something sharp ... and its all up and down near the handle, not concentrated in one particular part, like it would be if it was just one finger. They must have rings on several fingers. And they're *big* - see way up here? The rings are also scratching way up here, *above* where the handle indentation starts, so either the rings are really big, or the person is very tall and has large hands to boot. Now see where the key fits - you really can't unlock it with your right hand and open it with your left (I cross my hands to demonstrate), so those are right hand scratches. Now see the area around the lock? Rings on the left hand, too, but maybe fewer - there are no scratches under the lock, so the little finger is probably ring free, or only has a small ring. I'm guessing a woman with a *lot* of rings - and either she is really tall, or those rings are *huge*, probably set stones."
If we'd had a bit more time I probably could have figured out a half dozen other things just from looking at the car. A half-decent sleep, and I just start to notice stuff like this.
Does that happen to everyone and they just keep it to themselves, or is it just me?
"The driver of that car wears a lot of large rings on their right hand".
"What? How on earth can you know that?"
"Look at the car door. See the pattern of scratches on the duco where the handle is? Lots and lots of little gouges made by something sharp ... and its all up and down near the handle, not concentrated in one particular part, like it would be if it was just one finger. They must have rings on several fingers. And they're *big* - see way up here? The rings are also scratching way up here, *above* where the handle indentation starts, so either the rings are really big, or the person is very tall and has large hands to boot. Now see where the key fits - you really can't unlock it with your right hand and open it with your left (I cross my hands to demonstrate), so those are right hand scratches. Now see the area around the lock? Rings on the left hand, too, but maybe fewer - there are no scratches under the lock, so the little finger is probably ring free, or only has a small ring. I'm guessing a woman with a *lot* of rings - and either she is really tall, or those rings are *huge*, probably set stones."
If we'd had a bit more time I probably could have figured out a half dozen other things out from just looking at the car. A half-decent sleep, and I just start to notice stuff like this.
Does that happen to everyone and they just keep it to themselves, or is it just me?
Monday, September 1, 2008
Life expectancy may not be what you expect
Odd Nectar makes some good points against ID.
Along the way, he says the following (LE is "life expectancy"):
"If we look back at the Greco-Roman days, LE was about 25 years. Now that's design at its best, don't you think? I suppose if I were an illiterate desert farmer circa 100 b.c.e. having a staring contest with death at 20 years of age..."
The implication being that if your life expectancy is 25 (actually, 25 could even be a little high), and you're 20, you expect to live only a few more years ("a staring contest with death"). In fact, most people would assume you'd expect to live five more years.
Here's a simple two-part experiment that may help with the ideas. (You can actually do this experiment if you like, but it will take a while; or you can simulate it on a computer if you know how.) Or, if you're in a hurry, I'll just tell you the answers (for a fair die) in a little while.
I) roll a six-sided die, counting the number of rolls until you get a '1' (including the roll on which you do get a 1). Repeat this many times (say, until you get 90 ones - it should be about 540 rolls, give or take). Average the counts for each set of rolls until a '1' appeared.
II) roll a die 4 times. If you didn't get a '1' in those rolls, start counting how many additional rolls you need until you get a 1 (if you did get a '1' in those initial 4 rolls, forget that one and start over). Repeat this many times. [Actually, you can use the information from the experiment in part (I): if the count of rolls was 4 or less, throw it away, and if it was greater than 4, subtract 4 from the count.] Average the counts you keep.
The question we're interested in is "How much larger is the average in experiment I than in experiment II?"
What would you guess?
A lot of people would guess 4. (It's the same as the reasoning in the life expectancy example I quoted above.)
Well, actually, the averages are much closer. If you roll a great many times, and your die is fair, you should get 6 for both!
(I just did this experiment using Excel to simulate the die rolls - 540 times for experiment I, reusing the 272 of them that exceeded 4 for experiment II - and the results were about 5.8 and 5.4. That's not quite as close to six as it should be, but at least we can clearly see that the two numbers don't differ by anything like 4.)
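If you'd rather let the computer do the rolling, here's a small simulation sketch in Python. It's my own illustration (the original run used Excel), so the function and variable names are just mine:

    import random

    def rolls_until_one(rng):
        # Count rolls of a fair six-sided die until a '1' turns up (inclusive).
        count = 1
        while rng.randint(1, 6) != 1:
            count += 1
        return count

    rng = random.Random(0)
    trials = [rolls_until_one(rng) for _ in range(100000)]

    # Experiment I: average waiting time, counted from the first roll.
    avg_I = sum(trials) / len(trials)

    # Experiment II: keep only the runs that survived the first 4 rolls,
    # counting just the rolls beyond those first 4.
    kept = [t - 4 for t in trials if t > 4]
    avg_II = sum(kept) / len(kept)

    print(avg_I, avg_II)  # both come out near 6

With 100,000 runs rather than 540, both averages land very close to 6 - the waiting time for a fair die has no memory of the rolls already survived, which is exactly the point of the experiment.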
[Why is this related to life expectancy? Well, assume we have some creature that has a 1/6 chance of death each year (it dies when it rolls a '1') - so its life expectancy is six years. When it reaches 4 years of age, what's its remaining life expectancy? ... in this case, the answer is still six!]
Human life expectancy is not that much like the die roll experiment (even if we put a lot more sides on the die), because the probability of death isn't constant at all ages. However, the basic ideas carry over.
Actually, in ancient times, at age 20 your remaining life expectancy may even have been more than an additional 25 years!
At birth, the average life span may have been 25, but the average adult was far older than 25.
What made life expectancy so low? Well, higher death rates, obviously, but the higher death rates didn't impact all ages equally. Most of the increase in death rates was for the youngest ages - especially for newborns. If you could survive past about 5 years of age, death rates were much lower - your chances of making it to adulthood were pretty good, and once you were an adult, your life expectancy was reasonable (not great by today's standards, but it was a lot more than a handful of years).
To simplify things dramatically, imagine there's a 50% chance of dying in your first month, and a life expectancy at birth of 25. What's your life expectancy if you survive that first month?
Well, it's 50 (less maybe a few weeks). The overall average in this case will be the average of the lifespan of those who die in the first month - almost 0 - and those who don't. If those who don't die near birth average 50 years, that makes the overall average lifespan (0 + 50)/2 = 25.
Infant mortality rates were extremely high. I don't know the figures for ancient times, but 50% within the first few years is probably reasonably close.
So your expected lifespan at birth was 25, but your expected lifespan conditional on getting past the most dangerous early part was much higher.
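If you want the arithmetic in one place, here's a toy calculation in Python. The numbers are invented purely to illustrate the effect - they are not historical data:

    # A toy mortality profile: (fraction of all births, typical age at death).
    bands = [
        (0.30, 1),    # dies in infancy
        (0.20, 5),    # dies in early childhood
        (0.50, 45),   # survives childhood, dies in adulthood
    ]

    # Life expectancy at birth: the weighted average over everyone.
    at_birth = sum(p * age for p, age in bands)

    # Expectancy for those who get past age 5: the same average,
    # taken only over the surviving group.
    adults = [(p, age) for p, age in bands if age > 5]
    p_adult = sum(p for p, _ in adults)
    past_childhood = sum(p * age for p, age in adults) / p_adult

    print(at_birth)        # 23.8 - "life expectancy is about 25"
    print(past_childhood)  # 45.0 - yet a surviving child can expect ~45 years

Half the population dying very young drags the at-birth average down into the mid-20s even though the typical adult in this toy example lives to 45.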
Most of the increase in our lifespan during the 19th and 20th centuries was caused by dramatically reduced infant mortality. A large number of dead infants has a huge impact on the average lifespan, so when you improve infant survival, you greatly increase average lifespan. Of course, survival at all ages improved a lot, but it was the infant mortality where the greatest improvements were realised (and these are also the ages where that survival has the greatest impact on average lifespan).
And what was the dramatic increase in lifespan caused by? Mostly better sanitation, clean drinking water, fewer foodborne diseases (with improvements in handling, storage and so on) and other basic disease prevention measures. Inventions like antibiotics in the mid-20th century were amazing, saving/prolonging millions of lives - but basic sanitation and clean water was even more dramatic, particularly among the young - the biggest improvements in expected lifespan happened before the advent of antibiotics.
So our 20 year old Roman-era desert farmer was not staring death in the face, waiting out his last handful of years ... a 20 year old faced many dangers, but their life was much less risky than the at-birth expectancy figure might make you think, unless you're also thinking about the fact that surviving the first few years was the really hard part.
Thursday, August 28, 2008
Transitional forms
Creationists deny transitional forms by asking for evidence of implausible chimeras - "half-fish, half-cow" and the like. That's like denying that coffee cools down because there's no time at which half the cup is scalding and the other half is cold.
[The plausible transitions - that is, actual transitional forms, such as lineages of fossils displaying characteristics of both fish and amphibians, for example - are not asked for much these days - because there are far too many of them, and more all the time.]
(Related post: A creationist looks at Janie's photo album)
Labels:
creationism,
crocoduck,
evolution,
transitional forms
Quote
"Evangelising to people who don't want to hear it [...] is like exposing yourself in public." - Pat Condell
Wednesday, August 27, 2008
Captaining the Titanic
Bad arithmetic can leave us like the captain of the Titanic - convinced we're unsinkable while we confidently steam toward the iceberg.
The inability of the Clinton advisors to perform basic number crunching cost them dearly in their primary campaign.
After the expensive loss in Iowa, the Clinton campaign focused on states with a primary, such as Texas, pouring an enormous amount of resources into winning them while Obama racked up win after win in states they weren't even polling... only to discover that the time, money and effort had gained them little advantage in several of the contests they focused on. Clinton won the Texas primary, but it didn't translate into a big advantage in delegates. A little number-crunching (which plenty of people did, well before the Texas contest took place) would have shown that a big effort in Texas would confer a relatively minor advantage, given the way Texas' system works. But the campaign apparently didn't understand the issue - despite it being well understood by others - until too late; the Clinton camp started whining about it a few days before the primary, but it is not as if Texas' circumstances were a secret before then.
Clinton's campaign paid "millions of dollars to consultants who offered up dubious advice".
These "experts" then managed to make further, even simpler elementary mathematical mistakes (by applying a calculation suitable for districts with 6 delegates to districts with different numbers of delegates), which meant that time after time, Clinton must have been mispending money, by allocating resources where they would be certain to be wasted and failing to allocate them where they could make a real difference, in effect multiplying Obama's financial advantage many-fold.
[Mathematics was not the only problem in the Clinton campaign, by any means - but it was a very important one that should never have arisen at all.]
What is it that causes monumental errors on the scale of using a calculation based on six delegates - that the target should be 7/12 of the vote ("the magic number is 59%") - for other districts?
Might it be the Dunning-Kruger effect? Is it just arrogance? Is it getting so focused on things like spin and sound-bites that you can't even remember the rules of the game?
Perhaps the Dunning-Kruger effect might also explain why the California Supreme Court has ruled that courts, not statisticians, will decide which calculations are to be used in cases involving so-called "cold-hit" DNA matches. Statistical experts are to be reduced to "calculators", performing court-approved calculations, whether the circumstances merit those calculations or not.
Innumeracy is what lets a political leader spend a trillion dollars on a largely futile and deadly war, and at the same time veto spending a million dollars on an essential education program, with a stunningly small backlash, partly because many voters don't realize the first is a million times as large as the second. Trillion, billion and million all just become different ways of saying "gazillion", and even the most implausible justification can be made to sound iron-clad.
A citizen needs enough mathematics to understand such differences in scale, and a political advisor certainly needs at least enough to be able to formulate a sensible strategy. Without it, we're all in for some very painful and expensive lessons.
Tuesday, August 26, 2008
Magnetic cows? Well, maybe not.
Science journalists love a kooky headline.
The latest is magnetic cows. It seems rather blown out of proportion.
Google Earth photos indicate a tendency for cows (and some other grazers) to line up north-south. But supposedly you can't tell whether they are facing north, facing south, or any mixture of the two.
So, here's a conjecture*:
In the morning, cold cows stand side on to the sun (which is in the east), thus exposing a greater area to the sun.
In the late afternoon, cold cows stand side on to the sun (which is in the west).
In both cases, they will face either north or south (and probably won't prefer one much over the other).
In the middle of the day, warm cows in a paddock with little shade might even stand backside toward the sun (thus reducing sun exposure when they're hot)... and thus tending to face either north or south, depending on the hemisphere.
Consequently, there would be some tendency for cows to line up north-south, purely as a way of managing sun exposure in order to be more comfortable.
[*Yes, yes, I know the BBC article says "Their study ruled out the possibility that the Sun position or wind direction were major influences on the orientation of the cattle", and quotes Dr Begall: "In Africa and South America, the cattle (were) shifted slightly to a more north-eastern-south-western direction. But it is known that the Earth's magnetic field is much weaker there." How does that establish anything? If the sun and wind hypotheses have been ruled out, why not explain how? The bit about Africa and South America really doesn't help much, so if that's what they've got, it's damn weak.]
Edit: Well, reading the physorg link more closely, it looks like they used "shadows" to discount the effect of the sun. It's not clear precisely what they did - there are a variety of possible conjectures that might involve the sun. I'll have to try to get the PNAS paper. I just went looking for it, but although the library supposedly has electronic access, the article isn't showing up - neither in the most recent issue available there nor among the articles with immediate access.
Indeed, I can think of several other perfectly simple explanations that I'd want to eliminate before I start hypothesizing magnetic cows.
Does that mean that I assert cows don't have a magnetic sense like some birds?
Not at all. Such a thing would be particularly useful for some migrating animals, say, caribou or wildebeest, so it's certainly not completely out of the realm of possibility. But you have to at least show that it's not some rather obvious and simple thing like sun exposure before you take it at all seriously. And you need a lot more than "they looked at shadows" or "the cattle were at a slightly different angle in Africa" before concluding that you've eliminated alternative explanations.
That cows tend to align north-south is interesting. But "magnetic cows" is a bit overblown on that basis alone. At best it's a plausible conjecture.
Well, I guess we wait and see. If anyone does see the paper, I'd be curious to know just how strong the arguments for the elimination of alternative explanations actually were.
Wednesday, August 20, 2008
You ain't so special... part whatever
Some birds can tell they're looking at themselves in a mirror.
[Prior&al paper at PLoS Biology]
This is a version of the mirror test, used to gauge self-awareness.
There are now at least nine species in which this has been observed; this is the first non-mammal to give unequivocal results - and there are a number of other bird species from which similar results wouldn't surprise me in the least.
You have to wonder - do they have a theory of mind? From what I've seen from certain corvids, I would have to suppose that they do.
As the wikipedia link (which has already been updated to include the new information) points out, it's not a suitable test for all animals, so there may well be some creatures that don't pass the mirror test but are nevertheless self-aware.
Too much mathematics homework
Recently I commented over at En Tequila Es Verdad, saying in part that I thought too much mathematics homework was a bad thing, education-wise.
The response to headlines about the US falling behind in education (say, like this one) is usually to increase homework.
Well, a paper in the Econometrics Journal apparently concludes that for average students (roughly speaking, about half of them), lots of mathematics homework is not productive.
[Of course, this is looking at relatively short term effects. What will be the effects of too much homework five years down the line? My guess is that long term it will probably be unproductive for an even larger percentage.]
The linked news article says: "According to Henderson, the learning process needs to remain a rich, broad experience."
Which is one of the main points I was getting at in my lengthy comment over at Dana's blog. Nice to see I'm not talking complete bullshit.
Think of it this way:
Imagine art class consisted of having to practice drawing a duck, over and over, until you could produce a good outline of a few very particular kinds of duck, drawn just so. You would do half an hour of ducks every night for homework. Then back to school the next day for more ducks. Then you'd move on to chickens. The generalization to all birds would be sort of handwaved, because the curriculum is kind of packed. It's time to move on to drawing fish! If you didn't learn to draw ducks, you would get even more work on drawing ducks. Some aspects of what you learned in drawing ducks could be used in drawing fish, but the relationships aren't very intuitive, and anyway, there's just so many bits to remember and it's all so confusing and WTF, now I have to go home and do fish for an HOUR?
And then suddenly you're drawing battleships, and while drawing kind of made sense before, suddenly it makes no sense. You never quite got the hang of ducks and now you're trying to catch up that, fish and now battleships? How on earth are you ever going to remember all the parts of a battleship? And god forbid you should draw the parts in the wrong order!
If art were like that, most people would hate it.
Imagine Rembrandt at a party, desperately wanting to convey something of the beauty and importance of chiaroscuro. What would he hear, over and over, as he brought up the topic of art?
"Art? I was never any good at that. I always hated art! My worst subject. All those ducks! You must be very strange."
Most people - if you forced them - would be able to draw a fairly reasonable-looking duck, but there'd be precious little art in their lives. They'd certainly have no sense that it could be moving and beautiful - or indeed that it was about anything other than ducks and fish, and maybe something painful about battleships.
The delights of Biddleonian nose-wrestling
Every Olympics we get the cries of "that's not a sport!" about ... well, about almost every sport that the speaker is not a rabid follower of.
I have no patience for it.
The delight of the Olympics is watching the very best exponents of whichever activity it happens to be duke it out in front of a potential audience probably thousands of times larger than any they have competed before in their lives.
Some sports seem obscure, but in fact are not particularly obscure - they only seem that way to our parochial mindset. Not every country likes the same stuff. A sport nobody plays in your country may actually be one of the more popular competitive sports in some other countries.
Some other sports are indeed obscure. That's fine too - we just need to mix em around a bit, so other obscure sports can have their time in the sun. I love them all. Actually, I want more obscure sports. Tennis - meh, I can watch the same people play any old time. Give me Royal Tennis!
And I reckon every Olympics should have 5 or 6 completely made-up sports, and the best one gets to come back next Olympics, along with another 5 or 6 newly-minted modes of competition. If a made-up sport wins twice in a row, it should become a regular fixture.
The new sports would be part of the Olympics bid process, so people who like the sound of a new sport can have a nice long training period to get good. Remember Eddie the Eagle from the Winter Olympics way back? If there were more sports, there'd be more half-mad semi-fit ordinary people having a burl. More power to 'em.
There's very fertile ground in mixing pre-existing sports. I want to see fencing mixed with trampolining. Now there's a sport.
What about clay pigeon shooting from a hang-glider? That takes skill. None of this "perfect score" shit - you manage to wing a couple, you're probably in line for a medal.
[Edit: I see Greta Christina made some similar points (though she's taking a somewhat different tack). I've had this post in mind since reading a poll a few days ago asking "Which sports don't belong at the olympics?"; any similarity to Greta's post is coincidental]
Monday, August 18, 2008
Sunday, August 17, 2008
Why I look forward to the death of atheism
As should already be obvious, I'm an atheist.
But I'm an atheist who looks forward to a time when there are no people who identify as atheist - to the death, as it were, of atheism.
After all, I don't identify as an a-tooth-fairy-ist; it's just naturally accepted that as an adult, I don't carry such a belief. Being an atoothfairyist is so universally common that it isn't even a term. Atoothfairyism, if it were ever to have existed, is certainly now long dead. It's not necessary to identify as a skeptic of the religious beliefs prevalent in ancient Greece. Such a lack of belief is, essentially, a dead issue.
So it goes with the religions presently at large -- I hesitate to say "modern religion", because there's little about their fundamentals that's modern. I look forward to the day when it's utterly pointless to ever mention a lack of belief in them, because it is essentially universal. When that day comes, atheism as we presently understand it, will be utterly dead. We'll just be people, getting on with life.
Maybe then we can really get to the work of fixing the mess we're in.
Saturday, August 16, 2008
The hunting of the snark
There's a very brief interview with a beautifully snarky Gore Vidal in Esquire that's online here.
(My apologies to whichever blog I saw this at a few days ago - I have completely forgotten).
What difference will quantum computing make to me, anyway?
I think, broadly speaking, a lot of what's written about quantum computers is somewhat wrong-headed. It will make a difference to cryptography (in that it will be easier to have "unsnoopable"** communications). It will speed up some algorithms dramatically.
**of course all that means is that the "snooping" shifts to other parts of the process.
It's usually this last thing that gets the attention. What quantum computing essentially does is give you the potential to "halve the exponent". If something was going to take on the order of 2^60 calculations, then if you're lucky you may be able to reduce that to on the order of 2^30 quantum calculations. And if the exponents happen to be of that order, that could be enormously useful. It might be possible, for example, to reduce a computation from centuries to days or weeks (and ordinary parallelism could reduce that further, of course).
But there aren't all that many calculations that matter to us right now that have exponents of that order (well, actually there are a lot, but as a fraction of calculations we want to do, not so much). Most calculations are either much, much smaller or much, much bigger.
There's not much to be gained in taking a calculation from 2^400 to 2^200 - it's still going to take longer than you likely have.
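Some rough numbers make the point. This is a back-of-envelope sketch only - it pretends a quantum "operation" costs the same as a classical one and assumes a billion operations per second, both gross simplifications:

    # How long 2^k operations take at 10^9 operations per second.
    OPS_PER_SECOND = 1e9
    SECONDS_PER_YEAR = 3.156e7

    for k in (30, 60, 100, 200):
        seconds = 2.0 ** k / OPS_PER_SECOND
        print("2^%d: %.3g seconds = %.3g years" % (k, seconds, seconds / SECONDS_PER_YEAR))

    # 2^30:  about a second    - easy classically
    # 2^60:  about 37 years    - hopeless directly, but halving gives 2^30: seconds
    # 2^200: about 5e43 years  - halving to 2^100 still leaves ~4e13 years; no help

The interesting band is the middle one: problems around 2^60 to 2^100 go from impossible to feasible when the exponent halves, while everything far above or below that band is unaffected in practice.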
So in the region past the outer limits of what we can practically do with ordinary parallelism, quantum computing will make a big difference - it will revolutionize what we can do with certain kinds of calculation. Most of those kinds of calculation won't impact you directly (indirectly, sure - in things like design of drugs and such) in the sense that you probably won't be doing much in the way of calculation you weren't before. At least not to begin with.
I think the big impacts on our personal lives will come in ways we can't even anticipate right now. The software that will change everything - maybe even decades after we have quantum computers as things we can buy - we don't know what that's going to be like. We won't until we have quantum computers, until we have coding environments and programming paradigms that let us think about things in ways we don't right now. Until we've had a generation of people who grew up with quantum computers.
Because when that exponent-halving is available, we're going to invent new problems that lie in that space above what we can do right now with parallelism, but that quantum computers can do. And we won't know what they are until we're there.
Labels:
exponent halving,
future,
quantum computers,
software
Quantum computers, RSN?
Graphene seems to be the new favourite material - it's cropping up everywhere, the way fullerenes did a few years ago, and SWCNTs more recently. Just lately, it's specifically graphene.
At the same time, quantum computers have a number of serious technical hurdles to deal with.
Well, as highlighted at the physics arXiv blog, in a recent paper at arXiv, "Z"-shaped pieces of graphene nanoribbon are being suggested as a solution to several of those issues. Specifically, you need to have things that don't readily interact with the environment, yet can be manipulated and can interact with each other; electrons are problematic because they interact with the environment (which is why a fair bit of quantum computing focus has been on photons - but they have their own problems). The "corners" in the graphene Z's are where the action happens - the electrons are "stored" there, not interacting with the environment, but their spins can still interact with each other.
The authors say: “Due to recent achievement in production of graphene nanoribbon, this proposal may be implementable within the present techniques.”
You won't see quantum laptops in stores for next Christmas. But instead of "decades" we're maybe now looking at a few years for something on a lab bench, and maybe getting closer to a decade for practical devices, and perhaps another 5 years after that for consumer products. Maybe.
But Vista will still suck on them.
Wednesday, August 13, 2008
Emotion and mathematics
It would be easy for people outside of mathematical areas to assume that the exercise of mathematics is an austere and unemotional activity, and that as a result mathematicians are, whether by nature or by habit, cold and disinclined to emotion.
Having observed many people (including myself) doing mathematics and discussing mathematically-related topics, I can say this is far from the case.
I have had many congenially heated arguments with colleagues, and I have even caught myself grinning in delighted anticipation of going another round with a valued fellow-traveller.
(I've been called crazy a lot of times - but more times in mathematically-related discussions than anywhere else - and with unstinting good humour to boot. "You're crazy! You can't do that." "No, really, it's right. You can do it here..." "No, no, it's nuts to do it that way even if it's right." -- and so on back and forth; in fact, I think that's how a lot of mathematical arguments get polished)
Even as a solitary activity, mathematics is, for me, intensely emotional, even visceral. Many times, equations I have worked with have various kinds of symmetry, and many of those symmetries will carry through the equations as the argument develops... this is, I presume, what produces a strong sense of rightness that I often feel as the steps progress. There's also a corresponding sense that there is some mistake - for me a feeling something like that moment on a roller coaster as it begins to descend, though it is sometimes even stronger than that - before being aware of exactly what is wrong, or precisely where it lies.
If you work with particular kinds of expressions a lot, you build up a sense of what they "should" look like, and it becomes easier to recognize that something is wrong before you can say precisely what the problem is; because the intellectual cognition lags behind the pattern-recognition, the recognition arrives with an emotional quality.
A really clever manipulation (I can't help but think of them as "tricks") or an inspired substitution that makes a difficult problem easy can produce a tingling sensation up the back of my neck and head. A particularly beautiful piece of mathematics can, on occasion, move me almost to tears.
Then there's joy and delight. On occasion I have had the fortune to look at some neat, if modest, just-derived result and wonder if perhaps I am the first to have ever seen it (it is, obviously, rarely the case that I am - it is not unusual to find that my result has been tucked away in some mathematical corner for many decades ... on one occasion I found I had been beaten by Gauss - but the thrill of discovery is there all the same).
There's also what I call the "stupid feeling". When I'm working on something new or unfamiliar (or even, on occasion, on what ought to be familiar), I can spend long periods - days, weeks, or even, shamefully, months - where I feel intensely incompetent, like I'm reaching around in the dark for something that I know is right there, but can't seem to locate it - and then there's a fleetingly brief moment of joy as I see how to do it (often barely long enough to say "Yes!"). Then quickly after, in retrospect (sometimes as I see an even better way to do it), it is all so utterly obvious, so agonizingly plain, that the prior feeling of incompetence seems, if anything, far too mild.
For me, that feeling is occasionally so intense I cannot even bear to write it up properly, or sometimes even to mention it, because the whole thing is so painfully facile. (I doubt that most people feel this quite so keenly; I'd be curious to know.)
Mathematicians don't discuss emotion much; a kindly supervisor might have a few words on dealing with the disappointments that naturally come with trying to get some result to come out, or those that come with trying to get something published. But outside of that, the preference is almost always to talk about the mathematics itself.
But just because we don't talk about our feelings with each other doesn't mean we're not feeling them.
Monday, August 11, 2008
How can we be sure when the bible is being literal?
Over at Friendly Atheist, Hemant quoted Rochelle Weiss of the Freedom from Religion Foundation, who wrote about whether Jesus was righteous and found that he comes up short.
Needless to say, apologists came in (and as usual, saying different things). This post is based on a comment I made there.
As always, when the bible says uncomfortable things, the apologists come right in with “this doesn’t mean what it says”.
Which is fine, I could accept that - unless, unless they also say about other parts “this means what it says”. You can’t have it both ways. You can’t declare the parts you don’t like to be “figures of speech” unless you accept that the same will be true of parts you would like to be literal; conversely, you can’t say “I think this is literally true” unless you’re prepared to accept that some of the parts you don’t like are also literally true.
It’s astoundingly convenient the way that it’s apparently almost universally only the most inconvenient parts of the bible that are held to be figures of speech (or in some other way should not be taken to mean what they plainly say).
“Okay, here, hate doesn’t mean hate. Actually, it means love, just not quite so much as someone else. But over there, well, it means hate.”
Even George Orwell didn’t imagine wordplay quite as sinister as that.
The problem with the “figure of speech” argument is that people of the time (as with people now) sometimes said similar things literally. There’s nothing in the bible to clearly say that the claim of “figure of speech” is in fact so. And it gets worse, because if some part isn't literal, you have the further problem of guessing exactly what it's supposed to mean.
It’s guesswork. Sometimes it’s educated guesswork, but much of the time it’s just a hopeful guess. Where does this supreme authority come from to know with certainty what is literal and what is not?
If nothing was at stake but academic pride, I wouldn't care.
What if you guess wrong about what’s literal and what’s not? What could loving Jesus have in store for you? Well, he tells us - get the wrong things wrong, and it's infinite torture. For guessing wrong, or believing someone else who claims their guess is right. So before you start casually declaring one bit not literal, and another bit literal, you better be damn sure you’ve got it right. You better have a lot more evidence than is on display in the comment thread there.
Now if Jesus really did mean that bit about hating parents literally, and you don't, or you tell others not to hate theirs, you could be totally screwed, depending on which other bits are literally true (the problem is, if some is literal and some is figurative, there is no solid foundation for any claim). But then again maybe even Jesus’ tender Hell is also just a figure of speech. So maybe nothing's at stake. For the sake of all the apologists, we better hope so, eh?
I see lots of opinion on display from the religious, and precious little fact. Yet, they’ve got the unmitigated arrogance to be happily playing around with their apologetics, apparently putting others in danger of infinite torture.
Fingers crossed, eh? Good luck with that.
Saturday, August 2, 2008
Pentagonal Tiling...
I was reading Julie Rehmeyer's current column (I like Julie's writing and have been following MathTrek for almost a year now) at Science News, which is on quasicrystals.
But in this column I think she said something other than what she intended... I'd have commented there but you have to register, which I'm not going to do just to leave one small comment, and anyway, posting it here gives me a chance to prattle on and point at pictures and such.
To quote:
"That's why you've never seen a bathroom tiled with pentagons - it'd be impossible to cover the whole surface with no gaps."
Now the problem is, this statement is wrong... in fact, here's a counterexample I knocked up in a few seconds (it's a bit rough, but you can see what's going on easily enough).
This is called the Cairo pentagonal tiling. It's one of fourteen known tilings of pentagons (it's probably not obvious, but "Type 4" on that page is the same type of tiling). Another favourite of mine is the Floret pentagonal tiling. (Go take a look at those two named ones, they're pretty.)
What Julie meant was "...tiled with regular pentagons". Cos, yeah, that doesn't work.
Anyway, aside from that hiccup, it's a good article; worth a read.
Thursday, July 31, 2008
End of mandatory detention
A few days ago the Rudd government in Australia decided to end mandatory detention for asylum seekers while their applications for asylum are processed. This can be a very long process, often taking years.
This is great news; it ends a period of appalling treatment of people who in many cases have already suffered a great deal.
Thursday, July 24, 2008
Unasking the question
The problem with asking a theist to give evidence for a God is not so much the fact that they lack evidence.
It's that they don't seem to have a coherent concept of what they mean when they say the word. The God concept slips around in any argument - one moment it's one thing, the next moment it's some other, contradictory thing.
(And woe betide anyone should they dare ask for a coherent definition! Your likely fate is to be buried under increasingly thick layers of blather, each one getting further from any sense of definition.)
I have come to think "God" is not a concept in the usual sense. It's a bag for holding and dealing with a large pile of (often conflicting) emotions, with a vaguely concept-like structure imposed on it by hundreds of generations of talking about it.
I wonder if perhaps the debate should shift. We should unask "What evidence is there?", and instead we should perhaps be saying "You keep using that word - what do you think it means?".
If you can get an answer that sounds like it means anything whatever, ask "Umm, can we write that definition down, please?"
You still won't get any evidence, of course, but maybe you can get them to talk about the one damn conception of God for more than one sentence at a time.
Well, it depends on what the meaning of "is" is.
Labels:
coherence,
defining god,
evidence for god,
mu,
religion,
religious belief
Wednesday, July 23, 2008
The parable of the histogram
I must be some kind of heretic. I'm a statistician, and here I am pointing out the problems in yet another common statistical tool.
We'll see why the histogram, a very popular way of displaying the distributional shape of a set of data, must be viewed with a good deal of caution.
Even though histograms are often found in the media, the problems with histograms are almost unknown among the general public. Indeed, most places that teach statistics at university completely fail to mention them.
I'd like to say that the problems are well known among professional statisticians, but that might be too strong. Certainly problems have been pointed out in the literature, and many statisticians are aware of the problems, but it seems many still are not, and the appropriate cautions are not always explained.
I'm going to show you a simple example.
Here's some data (40 observations in this sample), which I'm going to draw a histogram of. I have rounded the numbers off to two decimal places.
3.15 2.28 2.06 3.43 4.85 3.22 4.01 4.43 5.46 3.12 5.53 5.51 5.56 5.52 5.31 4.96 3.28 4.10 5.19 2.54 1.89 1.84 2.56 1.90 4.20 3.42 2.39 3.64 4.84 4.31 5.11 5.60 1.98 3.91 1.88 4.33 5.74 2.01 2.58 1.92
I give the numbers so you can (if you are so inclined) confirm for yourself what I will tell you in my little parable.
(Edit added Feb 2012: I noticed that the results didn't quite reproduce in R - three observations in the original data set I gave occurred exactly on bin boundaries for some situations. This was either a problem caused by rounding, or possibly by different conventions of different packages for handling observations at bin boundaries; I have accordingly altered those three observations by tiny amounts to move them off boundaries and avoid the issue, whatever its source. There is R code at the end of the post that works.)
The parable
This data set was given to a student, Annie. She constructs her histogram of the data by counting the number of values between 0 and 1 (but not including exactly 1), between 1 and 2, and so on, and then drawing a series of boxes each of whose base covers the subset of values that the count came from and whose height is the count for that range of values. Annie's histogram is shown in the top-left of the picture below.
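(If you want to reproduce Annie's counts yourself, one way in R - using the histdata vector given in the code at the end of this post - is via cut(); right=FALSE gives bins that include the left endpoint but not the right, matching her "not including exactly 1" rule.)
table(cut(histdata, breaks = 0:6, right = FALSE))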
She obtains a histogram whose shape corresponds to a distribution that is skewed to the left. See, for example, this description of using histograms to assess distributional shape here (edit: broken link replaced with an alternative) - that's pretty much precisely the way many elementary books on statistics describe the way to assess the shape of a distribution (and usually it's going to give you the right sort of impression).
Note that I could remove the scale and I could still describe the shape - I don't need to know the numbers on the scale in order to arrive at my description.
Three of Annie's friends, Brian, Chris and Zoe (Hah! Psych!) also get data sets with 40 observations, and they all do exactly as Annie did. Their histograms are given below (Annie's data is V1, Brian's is V2 and so on).
(click pic for a larger image)
Correspondingly, Brian describes his distribution as symmetric (and he might add "uniform"). Chris describes his as skewed to the right. Zoe describes hers as symmetric and bimodal (it has two main peaks).
So far so good - this is exactly how the books tell you it all works.
So while they're comparing their histograms, Annie idly starts looking at Brian's actual numbers. She realizes something odd is going on. She quickly places all their data sets side-by-side.
"Look, Chris!" Annie says, "all Brian's values are smaller than mine by 0.25. All yours are a quarter smaller than Brian's, and Zoe's are a quarter smaller than yours!"
They all confirm that she is correct - each set of values is the same, but with its origin merely shifted a little. Their data sets are identical in shape, but the resulting histograms are not.
That is to say, assessment of distributional shape in histograms can be dramatically affected by choice of scale (specifically, by the choice of the origin and width of the histogram bins). Here ends the parable.
It usually isn't this dramatic, of course, but the fact is, if one can generate a seemingly innocuous set of numbers whose histogram will look completely different (and for which many people will assert the distributional shape is completely different) every time we merely add or subtract a quarter, it can happen with real data too. And it does happen. Mostly the difference in impression is more modest... but not always.
So if you see a histogram, just keep in the back of your mind that it's perfectly possible that a different choice of bin boundaries would yield a somewhat different impression of the data.
Imagine I want to show some students that I write "easy" tests (I don't know why this should be such an object of fascination for students since they all do the same test, and marks are generally scaled, but it is). In preparation, I draw a histogram and it turns out to look like Chris' - it looks like most students score below the middle of the range of marks. But lo, I discover with a bit of fiddling around that if I make my bin centres where the edges were (and so on), the completely opposite impression is given - just like Annie's histogram. Yay, "easy test" ... and many fewer worried queries from students in the run-up to the test, because they tend to feel there's a good chance of scoring "above the middle".
Did I lie? No. Did I fudge the data? Well, no. I did something, though. Or rather, I didn't do something.
This is a sin of omission. I fail to explain what the data would have looked like given a different choice of bin location.
Clearly, when circumstances are right, the ability to choose the location and width of the bins can give us the opportunity to somewhat alter the impression given by a histogram. Without fudging the numbers themselves, we can sometimes fudge the impression they give.
What do statisticians do? Well, there are other ways to look at distributional shape. Kernel density estimates are popular, and they completely get rid of the "bin-location" issue, though there's still the equivalent of a "bin-width" issue (choice of bandwidth, also called the "window"), which is often dealt with by looking at more than one choice of width (usually a width that gives a nice smooth result and then one that is smaller, giving a "rougher" result, in order that we can see there's nothing unusual hiding away - like the blue and green curves in the graph at top left, er, right** at the wikipedia link a few lines up). But there are a variety of other tools that might be used (which I don't plan on going into here).
**(did I ever mention that I have trouble with correctly attributing the words "left" and "right"? - well as you see, sometimes I do. But not when describing the shape of a distribution, isn't that odd?)
What can you do? Well, assuming you don't have anything more sophisticated than a basic histogram tool, at the least (with continuous data, anyway), try shifting your bin starts forward or back a fraction of a bin-width (if you're lazy, maybe try something near a half, otherwise maybe try a couple of values). Also try a narrower bin width. If you do a few different histograms that all give the same general impression, it doesn't matter much which one you use. And if they don't give the same impression, you better either say so, show more than one, or find some other way to convey the information.
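(In R, say, that half-bin shift is a one-liner with the histdata vector from the code below - same bin width, different origin.)
hist(histdata, breaks = 1:6)                    # the original bins
hist(histdata, breaks = seq(0.5, 6.5, by = 1))  # origin shifted by half a bin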
[Or you can do a kernel density estimate readily enough - many packages (including some free ones) will do them; there are pages online that can draw them if you just paste in some data. Implementing a kernel density estimate of your own is fairly straightforward - you can compute one in a spreadsheet easily enough - if anything, it's probably slightly simpler to compute one than it is to compute counts for a histogram, which is in itself pretty straightforward. ]
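(For instance, here's a minimal hand-rolled Gaussian version in R - the function name, grid and defaults are just my sketch: at each grid point, average the Gaussian "bumps" centred at the observations and divide by the bandwidth.)
kde <- function(x, bw, grid = seq(min(x) - 3*bw, max(x) + 3*bw, length.out = 200)) {
  dens <- sapply(grid, function(g) mean(dnorm((g - x) / bw)) / bw)
  list(x = grid, y = dens)
}
est <- kde(histdata, bw = 0.2)
plot(est$x, est$y, type = "l")   # compare with density(histdata, bw = 0.2)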
Caveat Emptor
___
Added in edit in Feb 2012:
Here is some R code to create the data:
histdata <- c(3.15, 5.46, 3.28, 4.2, 1.98, 2.28, 3.12, 4.1, 3.42, 3.91,
              2.06, 5.53, 5.19, 2.39, 1.88, 3.43, 5.51, 2.54, 3.64, 4.33,
              4.85, 5.56, 1.89, 4.84, 5.74, 3.22, 5.52, 1.84, 4.31, 2.01,
              4.01, 5.31, 2.56, 5.11, 2.58, 4.43, 4.96, 1.9, 5.6, 1.92)
Here is some R code to generate the histograms:
opar <- par()
par(mfrow=c(2,2))
hist(histdata,breaks=1:6,main="Annie",xlab="V1",col="lightblue")
hist(histdata-0.25,breaks=1:6,main="Brian",xlab="V2",col="lightblue")
hist(histdata-0.5,breaks=1:6,main="Chris",xlab="V3",col="lightblue")
hist(histdata-0.75,breaks=1:6,main="Zoe",xlab="V4",col="lightblue")
par(opar)
Here is some R code to generate some density estimates:
opar <- par()
par(mfrow=c(2,2))
plot(density(histdata,bw=.2),main="Annie")
plot(density(histdata-.25,bw=.2),main="Brian")
plot(density(histdata-.5,bw=.2),main="Chris")
plot(density(histdata-.75,bw=.2),main="Zoe")
par(opar)
Here is some R code to generate some other informative displays:
First - the sample cumulative distribution function
plot(ecdf(histdata))
Second, a stripchart that shows the positions of the individual observations as they move back.
x <- c(histdata, histdata-.25, histdata-.5, histdata-.75)
g <- rep(1:4, each=40)
stripchart(x ~ g, pch="|")
abline(v=(2:5), col=6, lty=3)
(end edit)
Labels:
bad graphs,
math,
mathematics,
probability,
R package,
statistics
Religion in its purest form... is a vast work of bunkum
Dana at En Tequila Es Verdad discusses an interview in Salon with James Carse.
He (Carse) condemns Richard Dawkins and Sam Harris for not being "historians or scholars of religion" and so "it's too easy for them to pass off a quick notion of what religion is."
This is a common argument; PZ called it the "Courtiers Reply". The fact is, when atheists engage with less sophisticated versions of religion, they're doing it because that's what people believe. Was Dawkins' aim to engage with esoteric, sophisticated versions of religious belief? No. Indeed, it's a pointless effort, because the apologist can simply keep moving their goalposts, and refusing to pin down exactly what it is they would expect the atheist to engage with. Carse's difficulty in defining religion in the interview is a case in point.
What these atheists do is argue with views on religion that people actually hold.
My first thought on reading this was "he's got a double standard there - he's not asking Christians to study sophisticated versions of other religions, to be historians or scholars of religion before they reject other religions".
But actually, what made me write this is that I realized it's worse than this. There's an even worse double standard.
Carse says:
"To be an atheist, you have to be very clear about what god you're not believing in. Therefore, if you don't have a deep and well-developed understanding of God and divine reality, you can misfire on atheism very easily."
Apparently, the meaning of atheism is lost on him, since, of course, we lack belief in any gods (unless gods are defined so weakly that the term becomes, essentially, pointless).
That's funny, because Carse is apparently attempting "to find some underlying unity to all religions".
His understanding of atheism is laughable. He also says:
"To be an atheist is not to be stunned by the mystery of things or to walk around in wonder about the universe."
So while he berates atheists for failing to have the sophisticated understanding that would come from being an historian and religious scholar, he can't even do us the courtesy of engaging with an everyday relatively unsophisticated version of atheism. Instead, he makes up his version of atheism out of whole straw.
That's an astounding double standard.
Labels:
atheism,
Carse,
Courtiers Reply,
Dawkins,
double standard,
Harris,
religion
Sunday, July 20, 2008
Fighting Mathiness
'I know what you're thinking about,' said Tweedledum: 'but it isn't so, nohow.'
'Contrariwise,' continued Tweedledee, 'if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
'Can you do Addition?' the White Queen asked. 'What's one and one and one and one and one and one and one and one and one and one?'
'I don't know,' said Alice. 'I lost count.'
'She can't do Addition,' the Red Queen interrupted. 'Can you do Subtraction? Take nine from eight.'
'Nine from eight I can't, you know,' Alice replied very readily: 'but -'
'She can't do Subtraction,' said the White Queen. 'Can you do Division? Divide a loaf by a knife - what's the answer to that?'
Jordan Ellenberg defined mathiness as "a series of fervent gestures that gives the impression that mathematical ideas are being expressed, but doesn’t actually deliver the goods".
Let us examine some examples of mathiness, and some examples where honest attempts to deal with mathematical situations have foundered, and try to understand how we can be led astray by mathematical arguments.
Skewness
I recently wrote about how in statistics, the measure that is often called skewness doesn't really mean what popular lore holds it to mean, and that it is often misused - for example, when people assert that zero skewness implies symmetry. I later pointed to several sites that made the kinds of errors I was talking about. In the brief time since then, new instances of the same issue have come up on some mathematics-related blogs. It's a case where the verbal "description" of the situation is not in agreement with the mathematical tools being used - mathematical ideas appear to be expressed, but the goods are not being delivered.
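(A tiny made-up illustration of the gap between the measure and the lore - ten values whose moment-based skewness is exactly zero, but which are plainly not symmetric.)
x <- c(rep(-2, 4), rep(1, 5), 3)   # four -2s, five 1s, one 3 - clearly asymmetric
mean(x)                            # 0
mean((x - mean(x))^3)              # 0, so the usual skewness measure is 0 too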
Misleading Graphics
In another vein, bad statistical graphics, such as this lie in graphical form, can mislead us - whether by accident or, as in this case, by design.
(via Andrew Gelman at Statistical Modeling, Causal Inference, and Social Science; there are other good examples to be found there.)
Examples abound in the media. Here's one from the NYT (via the Gallery of Data Visualization’s Missed Opportunities and Graphical Failures - click image for bigger version):
The top plot there is a graph of happiness against GNP-per-capita for a number of countries. The NYT has circled the countries in the top left hand corner, noting that many countries "had higher ... happiness than their economic situation would predict". This is the cardinal sin of treating inherently nonlinear relationships as linear - as they point out at the Gallery, an appropriate transformation - in this case looking at log-GNP, not raw GNP, makes these supposed "outliers" seem much more in keeping with the rest, and the apparent relationship more linear - and indeed, if anything, some entirely different points don't fit the general pattern. We seem to find nearly-linear relationships easier to understand, so transformation is often a useful strategy.
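(A toy version of that in R, with invented numbers: a relationship that is exactly linear in log-GNP looks curved against raw GNP, so low-GNP countries can seem to sit "above the line" when they don't.)
gnp <- exp(seq(log(500), log(40000), length.out = 50))
happiness <- 2 + 0.8 * log(gnp)    # invented, exactly log-linear
par(mfrow = c(1, 2))
plot(gnp, happiness)       # curved against raw GNP
plot(log(gnp), happiness)  # straight against log-GNP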
I have discussed the same issues - both the danger of treating nonlinear relationships as linear and the value of transformations in understanding relationships better in another context - relationships involving percentages. It's so easy to fall into the rut of linear thinking that we should consider taking advantage of the tendency to think that way and use transformation to reduce nonlinearity.
A common "nonlinear effect treated as linear" is when people try to average miles per gallon (or miles per hour, or a variety of other rates) - such as "I got 15 mpg going up and 45 mpg coming back, so I averaged 30 mpg overall" (when it's actually 22.5). In terms of transformations - the reciprocal (gallons per mile) - is linear and can be averaged.
Relying on a False Premise
Seemingly mathematical arguments may just be based on bad premises (such as one requiring selecting from the positive integers with equal probability - an impossibility, since each integer would have to get probability zero, and infinitely many zeros can't sum to one - which completely sinks the argument that relies on it).
That "infinity" thingy can be tricky - it seems to cause problems for journalists as well because they tend to underestimate how big it is.
Adding percentiles
Treating percentiles of distributions as if they were additive is unfortunately extremely common. In the case of official estimates of total oil reserves, it means that we probably have a fair bit more oil than we think.
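(A toy demonstration in R - two skewed, made-up "reserve" distributions; the sum of the individual 95th percentiles is not the 95th percentile of the sum.)
set.seed(1)
x <- rlnorm(1e5)
y <- rlnorm(1e5)
quantile(x, 0.95) + quantile(y, 0.95)   # adding the percentiles
quantile(x + y, 0.95)                   # percentile of the sum - noticeably smaller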
What's the square root of that?
Or, sometimes, it seems, mathiness comes in because someone has no clue what the heck they're talking about, so we can be told that the Maya knew how to take the square root of a rectangle.
Mathematical arguments can feel unusually convincing, even unassailable, and we're awash in them for precisely that reason. It's too easy to forget that just because something seems to be laid out mathematically, it's not necessarily true - or even meaningful at all. Mathiness, like truthiness, is all around us. Even among skeptics, it's possible to put too much store in an argument couched in mathematical terms. We should be at least as skeptical of mathematical arguments - and in basically the same kinds of ways - as of any other kind of argument, because we're all too often misled by them.
Unfortunately, it seems that we sometimes accept the (often implicit) conclusions of a mathematical argument without even realizing that an argument was being made.
If we fail to treat these arguments with the skepticism they deserve, we're open to being deceived by charlatans.
Labels:
bad graphs,
mathematics,
mathiness,
probability,
statistics