Funding cannibalism motivates concern for overheads

Summary: Overhead expenses (CEO salary, percentage spent on fundraising) are often deemed a poor measure of charity effectiveness by Effective Altruists, and so they disprefer methods of charity evaluation which rely on them. However, ‘funding cannibalism’ suggests that these metrics (and the norms that engender them) have value: if fundraising is broadly a zero-sum game between charities, then there’s a commons problem where all charities could spend less money on fundraising and all do more good, but each is locally incentivized to spend more. Donor norms against increasing spending on zero-sum ‘overheads’ might be a good way of combating this. This valuable collective action by donors may explain the apparent underutilization of fundraising by charities, and perhaps should make us cautious about undermining it.

continue reading »

2014/08/30  Leave a comment

Why the tails come apart

Many outcomes of interest have pretty good predictors. It seems that height correlates with performance in basketball (the average height in the NBA is around 6’7″). Faster serves in tennis improve one’s likelihood of winning. IQ scores are known to predict a slew of outcomes, from income, to chance of being imprisoned, to lifespan.

What is interesting is that the strength of these relationships appears to deteriorate as you advance far along the right tail. Although 6’7″ is very tall, it lies within a couple of standard deviations of the median US adult male height – there are many thousands of US men taller than the average NBA player who are nonetheless not in the NBA. Although elite tennis players have very fast serves, the players with the fastest serves ever recorded aren’t the very best players of their time. The IQ case is harder to examine because of test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is many more SDs above the mean than that).1

The trend seems to be that although we know the predictors are correlated with the outcome, freakishly extreme outcomes do not go together with similarly freakishly extreme predictors. Why? continue reading »

  1. One might look at the generally modest achievements of people in high-IQ societies as further evidence, but there are worries about adverse selection.
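A minimal simulation sketch of the phenomenon (assuming nothing more than a bivariate normal with a correlation of 0.7, standing in for something like height and basketball ability): even with that strong a relationship, the single most extreme predictor value is rarely attached to the single most extreme outcome, and the two top-10 lists barely overlap.

```python
# Minimal simulation sketch: with an imperfect (if strong) predictor-outcome
# correlation, the most extreme predictor values are rarely paired with the
# most extreme outcomes. The correlation and population size are assumptions
# chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.7, 100_000                           # assumed correlation and population size
cov = [[1.0, r], [r, 1.0]]
predictor, outcome = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

top_pred = set(np.argsort(predictor)[-10:])   # indices of the 10 highest predictor scores
top_out = set(np.argsort(outcome)[-10:])      # indices of the 10 highest outcomes

print("sample correlation:", round(float(np.corrcoef(predictor, outcome)[0, 1]), 3))
print("overlap of the two top-10 lists:", len(top_pred & top_out))
```

Run a few times, the overlap typically comes out at zero, one, or two of the ten, despite the strong correlation.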

2014/07/22  Leave a comment

Off duty

“Hi, single to the hospital, please.”

“You’re not ill, are ya?” the bus driver teased.

“No, no, I’m just training there.”

He let her go with a laugh and she walked up the aisle of the bus. The white tunic with blue trim poking out from under her coat made me guess student nurse. I was next in the queue; the driver spared me the same joke when I asked for the same ticket.

I swung into my seat near the front, and put my newly bought laptop (long-awaited, and now looked-forward-to) on my lap. It was around seven, and the sky was in the throes of a stretched out summer sunset. The bus lurched forward, and my attention wandered.

“It’s his stop!” continue reading »

2014/07/14  Leave a comment

How good were the old greats?

Summary: In many fields, the ‘greatest’ (be they philosophers, playwrights, composers, etc.) are selected disproportionately from those who lived in the distant past. I speculate as to what might be driving this bias towards ‘ancient greatness’, but one important takeaway is that the greatest of the past are likely inferior, in terms of ‘innate ability’, to the greatest amongst us today. So perhaps we should not regard them so highly.

[Very rough draft: advice/criticism on data, analysis, or style welcome, as is advice on whether this is worthy of going into academia, and if so how. Thanks to Rob Wiblin, Will Crouch, Catriona McKay, and Sam Bankman-Fried for ideas/prior discussion.]

continue reading »

2014/02/16  4 Comments

Valediction

I am now a doctor. The result came on Friday (I passed my finals), and the declaration happened on Sunday afternoon. I start work as the most junior of junior doctors – a Foundation Year 1 – at the end of July in Milton Keynes. So now you know where to avoid if you get sick in August.

Traditionally (at least in the ‘States) students have a valedictory address at the end of their course, given by the top-ranked student of the year. This is a less auspicious farewell: I am a long way down any order of merit you care to name, and medical school has been a struggle. Things did get better in my final year, but the last six years haven’t been some Bildungsroman of how I transformed from spotty student to modern medical professional. continue reading »

2013/07/01  Leave a comment

God, Evil, and Appearances: A Dialogue

[Published in THINK (2013; issue 12, pp 9-23)]

ADAM: Consider this: Neil and Kazumi Puttick and their son Sam were, by all accounts, an idyllic family. One friend said: ‘If you could bottle up a perfect marriage, theirs would be it’. They were involved in a car accident in 2005. Kazumi’s legs and pelvis were broken. Sam – then 18 months old – had his spine severed at the neck. He would have died were it not for two doctors who happened to be passing by. After Sam was rushed to hospital, Neil and Kazumi were told that his injuries were catastrophic. Neil was defiant:

… I believe in my heart the doctors are wrong and he will win. I believe God is with us and Sam will walk, talk, and breathe again.

He was a miracle when he came to us, it was a miracle when he survived the crash and it will be a miracle when he recovers. These things do happen and they will happen to Sam.

Sam survived, and although he didn’t recover from his paralysis, he flourished in all other respects. Neil and Kazumi quit their jobs to devote their time to looking after Sam and raising money for his care. The local community pitched in too: one of the things they did was take photographs of themselves from all over the world holding cards saying ‘Hi Sam!’, which Sam enjoyed immensely. Later the local government agreed to pay the costs of Sam’s medical care. Neil and Kazumi continued their work, now directed towards raising awareness of spinal injuries. Sadly, the story doesn’t end there.

Three years after the accident (just after he’d started at school) Sam contracted pneumococcal meningitis, a highly virulent and aggressive infection. Despite intensive care, it became clear there was no hope of survival. Neil and Kazumi took him back home, and he died shortly afterwards.

Beachy Head is a notorious suicide blackspot, so much so that a chaplaincy has been set up expressly to patrol the cliffs and counsel those contemplating whether to jump. Despite this, no one saw the two figures wearing rucksacks who leapt to their deaths. The bodies were discovered the following morning. They were Neil and Kazumi Puttick. Sam’s body was in one of their rucksacks; the other contained his toys.

The ‘problem of evil’ can mean many different things. It could be a moral problem: ‘What should we do to stop the evil things in the world?’ It could be a motive for existential crisis: ‘How can we bear to live in a world with so much that is evil?’ It could be an obstacle to religious faith: ‘How can I love a God that lets these evil things occur?’

The sort of ‘problem’ I want to talk about is really an argument, one that starts from the existence of evil and ends up concluding that there is no God. Awful stories like the Putticks’ are meant to demonstrate that we do not live under the watchful benevolence of God, but rather in a world of blind, pitiless indifference to our wellbeing. continue reading »

2013/03/19  4 Comments

Why you shouldn’t believe the Resurrection happened

The 12th (and final) part in “20 Atheist answers to questions they supposedly can’t.”

  1. What accounts for the empty tomb, resurrection appearances and growth of the church?

Short answer: We shouldn’t be that confident of these facts, but in any case the base rate fallacy and selection bias nix their confirmatory power.

Longer answer: The argument implied in the question is that the historical record provides strong evidence that Jesus actually died and rose again, which in turn provides evidence that Christianity’s central claims (e.g. God exists, Jesus is the Son of God) are true. The question neatly summarizes the three main ‘planks’ of evidence usually offered:

  1. The Empty Tomb. When Jesus died, his body was placed in a tomb. Not only was a stone rolled in front of it, but also the authorities posted sentries outside the tomb to stop anyone stealing the body. Despite this, the stone was discovered to be rolled away, and the body had gone. (e.g. Mark 16:4, Luke 24:2-3)
  2. Resurrection appearances. Several different groups of people (the disciples, some women, etc.) are reported to have seen Jesus after he died. (e.g. Luke 24:15-31, 36-48; Matthew 28:9-10)
  3. The growth of the church. After Jesus died, his apostles (and figures like Paul) were committed to the message of Jesus, and helped the church spread rapidly. (cf. Acts, but also the historical record re. the Holy Roman empire, etc.)

The idea is that this data is very hard to explain via purely atheistic means. Maybe Jesus didn’t really die, but is it plausible he could have got up and escaped the guarded tomb after being crucified and speared for good measure? Maybe the disciples managed to steal the body, but how did they get it past the guards? (And what was in it for them? Why would many of them go on to die for a belief they knew to be false?) Maybe the resurrection appearances were just hallucinations, but how could there have been so many hallucinations, by so many different people, and why didn’t the authorities just squash the story by presenting the public with Jesus’s corpse?

So, it’s argued, the best explanation of the historical data is the Christian one: Jesus rose from the dead and left the tomb miraculously, then appeared to people like the apostles and the women who visited the tomb, and these people, convinced of the truth, went on to grow the church. continue reading »
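The ‘base rate fallacy’ point in the short answer can be put as a toy Bayes calculation; the numbers below are illustrative assumptions rather than figures argued for anywhere in the post. Even if the combined evidence were far likelier given a genuine resurrection than without one, a very small prior for so extraordinary an event leaves the posterior very small.

```python
# Toy Bayes' theorem sketch with illustrative, assumed numbers (not figures
# from the post): a tiny prior swamps even strongly confirmatory evidence.
prior = 1e-9                   # assumed prior probability of a genuine resurrection
p_e_given_r = 0.9              # assumed P(tomb, appearances, church growth | resurrection)
p_e_given_not_r = 0.01         # assumed P(same evidence | no resurrection)

posterior = (p_e_given_r * prior) / (
    p_e_given_r * prior + p_e_given_not_r * (1 - prior)
)
print(f"posterior: {posterior:.1e}")   # roughly 9e-8 under these assumptions
```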

2013/02/13  4 Comments

Unfriendly AI cannot be the Great Filter

[Slightly odd topic, originally posted on LessWrong.]

Introduction

The Great Filter is the idea that although there is lots of matter, we observe no “expanding, lasting life”, such as space-faring intelligences. So there must be some filter through which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already ‘passed’ the bulk of the filter, or whether it still lies ahead. For example, is it very unlikely that matter will form self-replicating units, but once that hurdle is cleared, becoming intelligent and spreading across the stars is highly likely? Or is getting to a humankind level of development not that unlikely, while very few of those civilizations progress to expanding across the stars? If the latter, that motivates a concern for working out what the forthcoming filter(s) are, and trying to get past them.

One concern is that advancing technology gives civilizations the means to wipe themselves out, and that this is the main component of the Great Filter – one we are going to be approaching soon. There are several candidate technologies for this existential threat (nanotechnology/‘Grey goo’, nuclear holocaust, runaway climate change), but one that looms large is artificial intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, and I guess Luke, Eliezer (and lots of folks on LW) consider AI the main existential threat.

The concern with AI is something like this:

  1. AI will soon greatly surpass us in intelligence in all domains.
  2. If this happens, AI will rapidly supplant humans as the dominant force on planet earth.
  3. Almost all AIs, even ones we create with the intent to be benevolent, will probably be unfriendly to human flourishing.

Or, as summarized by Luke:

… AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good.

So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are ‘friendly’ (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.

‘Where is everybody?’

So, topic. I read this post by Robin Hanson which had a really good parenthetical remark (emphasis mine):

Yes, it is possible that the extremely hard step was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)

This made me realize that a UFAI should also be counted as ‘expanding, lasting life’, and so the Great Filter argument should lead us to deem it unlikely.

Another way of looking at it: if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or not see anything at all, because they would wipe us out to optimize for their unfriendly ends). That we do not observe it disconfirms this conjunction. 1

A much more in-depth article and comments (both highly recommended) were made by Katja Grace a couple of years ago. I can’t seem to find a similar discussion on here (feel free to downvote and link in the comments if I missed it), which surprises me: I’m not bright enough to figure out the anthropics, and obviously one may hold AI to be a big deal for other-than-Great-Filter reasons (maybe a given planet has a 1 in a googol chance of getting to intelligent life, but intelligent life ‘merely’ has a 1 in 10 chance of successfully navigating an intelligence explosion), but this would seem to be substantial evidence driving down the proportion of x-risk we should attribute to AI.

What do you guys think?

  1. It also gives a stronger argument – as the UFAI is the ‘expanding life’ we do not see, the beliefs, ‘the Great Filter lies ahead’ and ‘UFAI is a major existential risk’ lie opposed to one another: the higher your credence in the filter being ahead, the lower your credence should be in UFAI being a major existential risk (as the many civilizations like ours that go on to get caught in the filter do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk); conversely, if you are confident that UFAI is the main existential risk, then you should think the bulk of the filter is behind us (as we don’t see any UFAIs, there cannot be many civilizations like ours in the first place, as we are quite likely to realize an expanding UFAI).
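A toy numerical gloss on the footnote, with made-up numbers and ignoring anthropic subtleties and light-cone effects: if many civilizations reach roughly our stage and each has a substantial chance of spawning an expanding UFAI, a silent-looking sky becomes very improbable, so the silence tells against holding ‘the filter lies ahead’ and ‘UFAI is the dominant risk’ at the same time.

```python
# Toy model with assumed, illustrative numbers: probability that none of n
# civilizations at roughly our stage produces a visible, expanding UFAI
# (independence assumed; anthropics and light-cones ignored).
def p_silent_sky(n_civilizations: int, p_ufai_per_civ: float) -> float:
    return (1.0 - p_ufai_per_civ) ** n_civilizations

for n, q in [(10, 0.1), (1_000, 0.1), (100, 0.9), (1, 0.9)]:
    print(f"{n:>5} civilizations, P(expanding UFAI each) = {q}: "
          f"P(silent sky) = {p_silent_sky(n, q):.3g}")
```

The more weight you put on both ‘many civilizations reach our stage’ (the filter lying ahead) and ‘UFAI is a likely outcome for each’, the harder the quiet sky is to explain.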

2012/12/22  Leave a comment

Atheist Prayer Experiment: Conclusion

Several weeks late, but never mind.

I was another of those who got no answer, despite my best efforts. If anything, my life was marginally less numinous than usual: nothing resembling spiritual longing, and things were slightly more fraught and disappointing than usual during the ‘prayer experiment’ (though nothing dramatic).

You can probably guess what I’m going to take from this based on my earlier posts: a negative answer is good (further) evidence for Atheism, as I’m pretty confident I behaved in a manner such that God (if he was really there) would get in touch. Yet he hasn’t.

In medical school you are taught to order only tests you plan to act on. Although this principle isn’t watertight (sometimes you will only act on particular results of a test, for example), it is a good heuristic. So my action plan now is to spend less time on philosophy of religion: my evaluation of the merits of the various arguments is now pretty stable, so I don’t anticipate further study will be that fruitful in terms of value of information – there might be some killer argument to change my mind, but I don’t hold out much hope for it. In contrast, the concerns around ethics and epistemology (like the ethics of charity, what should be prioritized, and how to consider future generations) seem like problems worth more of my time as theism fades into unlikelihood.

I’ll still remain open to divine revelation, of course: it makes no sense to try and close oneself off from potential data. But I’ll mostly have given up looking – if it arrives regardless, well and good, but I have bigger problems than waiting for a (now very delayed) Damascene moment. I think Mawson’s analogy supports this: if you’re told a man worth talking to is in a dark room, yet you shout out 40 or so times and get no reply, you might keep an ear out, but you won’t keep shouting ‘just in case’ he decides to finally respond.

I remain hopeful that some God exists – it would make the world a better place, at least for those who suffer. But hope isn’t expectation, and I do not anticipate this hope will be fulfilled. In the meantime, it seems best to live one’s life with care to try and make the world a better place, especially for those rendered the worst off. Take care of that, and hope Marcus Aurelius is on the right track:

Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.

Enjoy life,

Thrasymachus

2012/11/28  1 Comment

In defence of the Genetic Fallacy

Many objections to religious belief take the form of ‘debunking arguments’. On these views, religious belief is a manifestation of class oppression, or psycho-sexual dysfunction, or some evolution-inspired psychological glitch (agency detection), or similar. An objector might want to nail his colours to the mast of a particular debunking account of religious belief, or he might instead make a more general appeal: that given the demographic pattern of religious belief, some sort of epistemically untrustworthy process (acculturation, indoctrination, or something) is probably going on, even if we are not confident what exactly it is. 1

Besides disputing how well a given ‘debunking’ account really explains religious belief, a common line of reply is to consider these sorts of arguments exercises in the genetic fallacy. Even if Christianity was motivated in all cases by (for example) a misfiring agency-detector, that wouldn’t demonstrate the Christian God did not exist. Belief in God could be uniformly irrational but nonetheless true. As the saying goes: “Just because you’re paranoid doesn’t mean they aren’t out to get you”.

I’m now much less convinced by this response. For in the case of God, the genetic fallacy does not seem fallacious: we should not expect most people to irrationally believe God exists if God exists, and so an argument that showed the bulk of religious belief was irrational would be evidence for Atheism. continue reading »

  1. This is what Loftus’s Outsider Test for Faith should be doing if Loftus was not such a generally confused thinker.

2012/11/28  3 Comments
