Biased and Inefficient

I'm a statistical researcher in Auckland. This blog is for things that don't fit on my department's blog, StatsChat.
I also tweet as @tslumley

Feynman and the Suck Fairy

There’s been a bit of…discussion…about Richard Feynman recently. In one Twitter discussion, Richard Easther said he had been thinking of using Feynman’s commencement address "Cargo Cult Science" with a first-year physics class, and had decided against.

I was a bit surprised. It’s been a long time since I read that piece, but I couldn’t remember anything objectionable in it. So I re-read it.  It’s still really good in a lot of ways. But. Yeah. That.

The problems are more obvious in Feynman’s book of autobiographical anecdotes "Surely You’re Joking, Mr Feynman". I read that when it came out, and loved it. I was in high school at the time. It wasn’t that I loved the casual sexism; I just didn’t really notice. I was interested in the science bits of the book, and I didn’t care what he did in bars in Buffalo. It’s harder to re-read it now without wincing. 

If you find yourself feeling defensive about Feynman, you might like to read Jo Walton on the Suck Fairy in literature. The Suck Fairy goes through books you used to love, and edits them to make them suck when you read them again. The whole point of the name, of course, is that this isn’t true; the book always did suck and you just didn’t notice when you were younger.  The book you remember is still good, it just isn’t real.

The "Surely You’re Joking…" I remember, and its author and hero, were great. There’s no need to deny that. The problem is, they weren’t the actual book and the actual physicist. The actual person was a genius, but not really a role model. 

The imaginary Feynman, the one who wrote the imaginary book I remember reading, would have wanted us to be honest about the faults of the actual Feynman. 

Unacceptable Use

The East-West Center at the University of Hawai’i has one of the most bizarre Internet Acceptable Use Policies I have ever seen. Among other things, it “strictly prohibits”

The distribution, dissemination, storage, copying and/or sale of materials protected by copyright, trademark, trade secret or other intellectual-property right laws. 

That bans making this post, downloading the slides for the Summer Institute in Statistical Genetics that I want to revise, installing new R packages, and probably even reading email (unless it was sent by someone who is a US government employee as part of their work).  You might think there’s an implied “except with permission of copyright owner”, except that in lots of other places in the policy they make that sort of exception explicit. Also, the policy goes on to say that this includes, but is not limited to, illegal copying and distribution. 

More practically annoying is the fact that they block ssh and scp, so I can’t upload the files for my course tomorrow. Maybe I’ll go for a walk and see if I can get an eduroam connection. 

Herd immunity simulations

Especially for vaccines that are not 100% effective, a large chunk of the benefit comes from ‘herd immunity’, the fact that incomplete vaccination makes it harder for an epidemic to get started and spread. Increasing the proportion of people vaccinated helps those people, and it also helps the people who aren’t vaccinated.

Here’s a set of simulations (code, needs FNN package and R) that show the effect. There is a simulated population of 10,000 people living on a square (actually, a doughnut, since it wraps around).  Vaccinated people are green, unvaccinated are black. 

Each day there is a 1/30 chance of a new disease case arriving. 

If you are near a disease case you have a chance of being infected (red) which is lower, but still not zero, if you are vaccinated. The disease lasts four days and then you are immune (blue). Everyone moves slowly around, except that each day 1% of people get on a plane and move a long way.

With 10% vaccinated there is no herd immunity. As soon as an epidemic gets going, it spreads everywhere.

With 50% vaccinated, the epidemics still spread, but more slowly, and there’s a lower chance of starting one when a case arrives.

With 70% vaccinated, the epidemics burn out before covering the population.

With 90% vaccinated, the epidemics don’t even get started.
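The mechanics described above can be sketched in a few dozen lines. Here is a minimal Python version (the original simulation is in R using the FNN package; the population size, infection radius, and per-contact probabilities below are illustrative guesses, not the original parameters):

```python
import math
import random

def simulate(n=400, vax_frac=0.5, days=60, p_new_case=1/30,
             radius=0.03, p_inf_unvax=0.5, p_inf_vax=0.1,
             infectious_days=4, step=0.01, p_fly=0.01, seed=1):
    """Toy herd-immunity simulation on the unit torus.

    States: 'S' susceptible, 'I' infected, 'R' recovered/immune.
    Returns the number of people ever infected.
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    y = [rng.random() for _ in range(n)]
    vax = [i < n * vax_frac for i in range(n)]   # a vax_frac share are vaccinated
    state = ['S'] * n
    timer = [0] * n                              # infectious days remaining

    def near(i, j):
        # distance on the torus: the square "wraps around" like a doughnut
        dx = min(abs(x[i] - x[j]), 1 - abs(x[i] - x[j]))
        dy = min(abs(y[i] - y[j]), 1 - abs(y[i] - y[j]))
        return math.hypot(dx, dy) < radius

    for _ in range(days):
        # each day there is a p_new_case chance of a new case arriving
        if rng.random() < p_new_case:
            k = rng.randrange(n)
            if state[k] == 'S':
                state[k], timer[k] = 'I', infectious_days
        infected = [j for j in range(n) if state[j] == 'I']
        # susceptibles near a case may be infected; vaccination lowers the risk
        for i in range(n):
            if state[i] == 'S' and any(near(i, j) for j in infected):
                if rng.random() < (p_inf_vax if vax[i] else p_inf_unvax):
                    state[i], timer[i] = 'I', infectious_days
        # the disease lasts infectious_days, after which you are immune
        for j in infected:
            timer[j] -= 1
            if timer[j] == 0:
                state[j] = 'R'
        # everyone drifts slowly; a few get on a plane and jump far away
        for i in range(n):
            if rng.random() < p_fly:
                x[i], y[i] = rng.random(), rng.random()
            else:
                x[i] = (x[i] + rng.uniform(-step, step)) % 1
                y[i] = (y[i] + rng.uniform(-step, step)) % 1

    return sum(s != 'S' for s in state)
```

Running it across `vax_frac` values of 0.1, 0.5, 0.7, and 0.9 reproduces the qualitative pattern above, though the exact thresholds depend on the guessed parameters.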


Monotonicity and smoothness

Andrew Gelman has an interesting discussion of monotonicity as a modelling constraint.  I basically agree with what he says, but since my first real statistical research (my M.Sc. thesis) was on order restrictions I thought I’d write about a related aspect of the problem.

Assuming that a relationship is monotone sounds like a very strong assumption, and therefore one that you’d expect to gain a lot by making. Asymptotically, this isn’t true.  If the relationship between $X$ and $Y$ is only known to be monotone, you get $E[Y|X=x]$ estimated to $O_p(n^{-1/3})$ where $X$ has non-zero density. By assuming smoothness you can get $O_p(n^{-2/5})$, which is better. That is, if you have a lot of data and you know a relationship is smooth, you don’t gain anything by knowing it is monotone, but if you know it is monotone you do gain by knowing it is smooth.

I think that’s non-intuitive, and I think the reason it’s non-intuitive is the asymptotics. If you have relatively sparse data, knowing that the relationship is monotone is fairly powerful, but knowing it is smooth is pretty useless. If you have very dense data, knowing a priori the relationship is smooth is useful, but knowing a priori that it is monotone is not all that helpful, since it will be fairly obvious whether it’s monotone or not. 
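For concreteness, the monotone estimator here is isotonic regression, computable by the pool-adjacent-violators algorithm (PAVA): scan left to right, and whenever a fitted value drops, merge the offending blocks and replace them with their weighted mean. A minimal pure-Python sketch (the function name and list-based implementation are mine, not from the post):

```python
def isotonic_fit(y, w=None):
    """Least-squares non-decreasing fit to y via pool-adjacent-violators."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    # maintain a stack of blocks, each with a mean, total weight, and size
    means, weights, sizes = [], [], []
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); sizes.append(1)
        # merge backwards while the monotonicity constraint is violated
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, s2 = means.pop(), weights.pop(), sizes.pop()
            m1, w1, s1 = means.pop(), weights.pop(), sizes.pop()
            wt = w1 + w2
            means.append((m1 * w1 + m2 * w2) / wt)
            weights.append(wt)
            sizes.append(s1 + s2)
    # expand blocks back to one fitted value per observation
    fit = []
    for m, s in zip(means, sizes):
        fit.extend([m] * s)
    return fit
```

The flat blocks PAVA produces are exactly where the slower $n^{-1/3}$ rate comes from: with sparse or noisy data the constraint does real work, merging many points, while a smoothness assumption contributes almost nothing.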

Anchoring bias

Anchoring bias: high school students were asked to add up the digits in their phone number, and then to estimate how many countries there are in Africa.

[image: plot of estimated number of countries in Africa against phone-number digit sum]

(phew, it worked)

(I did delete one data point as non-responsive: estimated number of countries in Africa was 1)

(with adults I’d use last two digits of phone number, but with teenage girls I thought a bit more information-hiding was appropriate)

Fiction about science

I recently discovered that Rosemary Kirstein’s Steerswoman series of books has been reissued electronically and is available wherever good e-books are sold (well, licensed) for download.

These books are some of the best ever fictional characterisations of the scientific process, as opposed to engineering or technology.  It’s not obvious until you’re reading them, because they have the formal structure of epic fantasy of the quest variety (powerful evil wizard subvariety).  The protagonist, Rowan, is a wandering member of an order of, basically, scientists: people whose mission it is to learn and understand things and then tell people.  Rowan has to discover what’s actually going on at the same time the reader does, and Kirstein manages very well the trick of giving Rowan clues to what’s going on in a way that she’s sometimes ahead of the reader and sometimes behind. 

Other incidental attractive features include the fact that nearly all the scientists are women, and that science communication is explicitly the price paid for public support and cooperation. If you ask a steerswoman a question, she must answer truthfully; if she asks you a question and you do not answer truthfully, you are excluded from the deal and they will not answer your questions in the future.

The biggest negative is that this is theoretically a five book series and Kirstein makes George R. R. Martin look like Barbara Cartland. The first book was published in 1989; the fourth, in 2004. The books are stand-alone novels, so they are still readable, but if you get annoyed waiting for an author, she is not an author to take up. 

Some people won’t be able to get past the genre fiction form, but anyone who likes good genre fiction, or anyone who knows a teenage girl put off by popular depictions of science and scientists, should try them.

Randomisation without consent

The issue of randomisation without consent has come up in New Zealand. Because I’m on the HRC Data Monitoring Core Committee, which monitors some NZ clinical trials, I don’t want to say much about any current NZ clinical trials, even ones we’re not monitoring. I do want to talk about the principle.

The always-useful NZ Science Media Centre has rounded up a couple of bioethicists on the topic, and you should read what they say. I’m not a professional bioethicist, but I have been involved in discussions about the ethics and conduct of clinical trials since I was in high school, learning from people with internationally recognised expertise. 

In contrast to physicians, who tend to start off from the doctor-patient relationship, my views have generally been that informed consent, if you could do it right, would be the only thing needed. If research participants understand all the issues and freely choose to take part, it doesn’t matter what anyone else thinks.

In practice, though, you can’t get perfect informed consent.  Many people don’t have the resources to get a really thorough understanding of the issues even when they are healthy, much less when they are sick. They will often go along with a recommendation from their physician, even if she doesn’t think she’s giving one.

Since you can’t get perfect informed consent, you need other safeguards as well. Physicians need to decide whether a trial is suitable to suggest to patients; ethics committees need to decide whether it meets guidelines; paperwork needs to be filled in; whistleblowers need to be protected. 

On the other hand, consent is still primary. New Zealand was home to one of the dramatic examples of what can happen when clinical research decisions are left to people who think they are on to something. 

Given that I believe participant consent is the primary ethical principle in research and that the other principles are there as safeguards, does that mean I’m opposed to all research without consent? In fact, no. Any research that can be done with informed consent should be done that way.  Any research that can be done with advance consent before people become incompetent should be done that way. Any research where the patient can’t consent but a guardian or next-of-kin can consent should be done that way. Any research that can be done with even very limited patient assent (people with some types of mental illness, older children, people with dementia) should be done that way in addition to any other safeguards. And as with any vulnerable group, research should only be done on people who can’t give consent if the treatment is primarily intended to benefit precisely that group of people. 

However, I used to live in Seattle. Seattle has a long record of research and development in resuscitation medicine: treatments that are given to people who can’t consent because they are basically dead at the time. Not unrelated to that, Seattle has one of the highest survival rates for out-of-hospital cardiac arrest.

There are three possibilities for resuscitation medicine:

  1. No new treatments are ever introduced.
  2. New treatments are introduced, but not evaluated.
  3. People are randomised without consent.

None of these is ideal, but I think the first two are worse than the third. 

It’s still important for randomisation without consent to have additional safeguards that aren’t needed for normal clinical trials.  In Seattle this included special ethics review, public consultation, public advertisement of how to opt out of randomisation, and monitoring of how many people withdrew consent once they were in a state to be asked.

That is, the extra safeguards were intended to ensure that trials proceeded without consent only if there wasn’t any other way, and if there was good reason to believe people would have consented given the opportunity. If the public consultation was negative or if lots of people withdrew consent for data use, consent to randomisation could no longer be assumed and the trial would be stopped.

In New Zealand there seems to be a term “retrospective consent” for when people wake up and you ask them how they feel about being randomised.  I think this is the wrong way to phrase the issue. We need to recognise that participants are being randomised without consent, just as unconscious patients are routinely treated without consent if there is no alternative. We can, and must, ask patients whether they consent to their data being used, and whether they approve of having been randomised. But that’s not consent to randomisation and treatment. It’s too late for that. 

Henchperson wanted

PhD Scholarship in Statistics: model-based and model-assisted analysis of complex samples

This project is funded for three years by a grant from the Royal Society of New Zealand Marsden Fund. There are multiple possible PhD topics in analysis of complex samples and its interface with semiparametric mathematical statistics. The research group includes Thomas Lumley, Alan Lee, Alastair Scott, Chris Wild, and three current PhD students (with one more having recently graduated).

Funding is available to start as soon as possible, but no later than September 2014. Applicants must have suitable qualifications for entry into the University of Auckland PhD, and should have significant training in at least two of: complex sampling, statistical programming, and mathematical statistics.

Contact:
about the application process: stat.phdofficer@auckland.ac.nz
about the research project: t.lumley@auckland.ac.nz

Einstein, Wikiquote, and fact checking

It’s not only Pi Day in the USA (3/14, they write dates backwards), it’s Einstein’s 135th birthday. Einstein, like Mark Twain, 孔夫子, Churchill, Disraeli, and the Chinese proverbs, is a quote magnet. He said many quotable things, and even more are attributed to him.

The NZ Herald has a list of ten Einstein quotes. Annoyingly, none of them say where or when they were said. So I did the absolutely minimal level of fact checking. I looked on Wikiquote’s Albert Einstein page and the talk page.

All ten quotes appear on Wikiquote. Five have reliable sources originating with Einstein. One more is something Einstein said, but that he didn’t claim was original. One has long been attributed to him but is disputed. Two look like recent inventions. One is definitely from someone else.

1. E. F. Schumacher “Small is Beautiful”
2. Quoted by Einstein but not original to him. A popular saying in French before his time "La culture est ce qui reste lorsque l’on a tout oublié"
3. Einstein. From a speech to the New History Society (14 December 1930), reprinted in “Militant Pacifism” in Cosmic Religion (1931)
4. No reliable source known, not attributed to him before 1990s
5. From Science and Religion (1941)
6. Not Einstein. Apparently first attributed to him by Ram Dass in 1970.
7. Einstein. Attributed in The Encarta Book of Quotations to an interview on the Belgenland (December 1930), which was the ship on which he arrived in New York that month.
8. Einstein. Speech made in honor of Thomas Mann in January 1939, when Mann received the Einstein Prize from the Jewish Forum.
9. Disputed. No reliable attributions. Similar quotes in French reliably sourced to 1880s
10. Einstein: said to Samuel J Woolf, Berlin, Summer 1929. Cited with additional notes in The Ultimate Quotable Einstein by Alice Calaprice and Freeman Dyson, Princeton UP (2010) p 230

Not a terribly good record for a serious newspaper. 

My likelihood depends on your frequency properties

[note: you may need to click on a single post for the typesetting to work; it doesn’t always work for the blog as a whole]

The likelihood principle states that given two hypotheses $H_0$ and $H_1$ and data $X$, all the evidence regarding which hypothesis is true is contained in the likelihood ratio
$$LR=\frac{P[X|H_1]}{P[X|H_0]}.$$

One of the fundamentals of  scientific research is the idea of scientific publication, which allows other researchers to form their own conclusions based on your results and those of others. The data available to other researchers, and thus the likelihood on which they rely for inference, depends on your publication behaviour. In practice, and even in principle, publication behaviour for one hypothesis does depend on evidence you obtained for other hypotheses under study, so likelihood-based inference by other researchers depends on the operating characteristics of your inference.

Consider an idealised situation of two scientists, Alice and Bob (who are on sabbatical from the cryptography literature). Alice spends her life collecting, analysing, and reporting on data $X$ that are samples of size $n$ from $N_p(\mu, I)$ distributions, in order to make inference about $\mu$. Bob is also interested in $\mu$ but doesn’t have the budget to collect his own $N_p(\mu,I)$ data. He assesses the evidence for various values of $\mu$ by reading the papers of Alice and other researchers and using their reported statistics $Y$. In the future, he might be able to get their raw data easily, but not yet. 

Alice and Bob primarily care about $\mu_1$ which is obviously much more interesting than $\left\{\mu_i\right\}_{i=2}^p$, and more likely to be meaningfully far from zero, but they have some interest in the others. Alice bases her likelihood inference on the multivariate Normal distributions $f_X(X|\mu_i)$, Bob bases his on $f_Y(Y|\mu_i)$.

Compare Alice and Bob’s likelihood functions for the hypotheses $\mu_i=0$ and $\mu_i=\delta$ with $\delta$ meaningfully greater than $0$ in the following scenarios. In all of them, Alice collects data on $\mu_1$ and reports the likelihood ratio for $\mu_1=0$ versus $\mu_1=\delta$. 

  1. Alice collects only data on $\mu_1$ and reports the likelihood ratio for $\mu_1=0$ versus $\mu_1=\delta$.
  2. Alice also collects data on $\mu_2$ and reports whether she finds strong evidence for $\mu_2=\delta$ over $\mu_2=0$ or not.
  3. Alice also collects data on $\mu_2\ldots\mu_q$ for some $q\leq p$. If she finds evidence worth mentioning in favour of $\mu_i=\delta$, she publishes her likelihood ratio, otherwise she reports that there wasn’t enough evidence. 
  4. Alice also collects data on $\mu_2\ldots\mu_q$ for some $q\leq p$. If she finds sufficient evidence for $\mu_i=\delta$ for any $i>1$ she reports the likelihood ratios for all $\mu_i$, otherwise only for $\mu_1$. 

Alice’s likelihood ratio is the same in all scenarios. She obtains for each $i$

$$\frac{L_1}{L_0}=\frac{L(\mu_i=\delta)}{L(\mu_i=0)}=\frac{\exp(-n(\bar X_i-\delta)^2/2)}{\exp(-n\bar X_i^2/2)}.$$

and because she has been properly trained in decision theory her beliefs and her decisions about future research for any $\mu_i$ depend only on $\bar X_i$, not on $q$ or on other $\bar X_j$ or on how she decided what to publish.

Bob’s likelihood ratio for $\mu_1$ is always the same as Alice’s. For the other parameters, things are more complicated.

  1. no other parameters
  2. Bob’s data is Alice’s result, $Y_2=1$ for finding strong evidence, $Y_2=0$ for not. His likelihoods are $L_1=(1-\beta)^{Y_2}\beta^{1-Y_2}$ and $L_0=\alpha^{Y_2}(1-\alpha)^{1-Y_2}$, where $\alpha$ is the probability Alice finds strong evidence for $\mu_2=\delta$ when $\mu_2=0$ is true and $\beta$ is the probability Alice fails to find strong evidence for $\mu_2=\delta$ when $\mu_2=\delta$ is true.
  3. Bob has a censored Normal likelihood, which depends on $\alpha$ and $\beta$. If he ignores this and just uses Alice’s likelihood ratio when it’s available, he will inevitably end up believing $\mu_i=\delta$ for all $i>1$, regardless of the truth.
  4. Bob’s likelihood ratio for the other $\mu_i$ depends on $\alpha$, $\beta$, $q$ and on the values of $\mu_j$ for $j\neq i$.
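Scenario 2 is simple enough to compute directly. A sketch in Python (function names are mine; $\alpha$ and $\beta$ values in the test are illustrative):

```python
import math

def alice_lr(xbar, n, delta):
    """Alice's likelihood ratio L(mu=delta)/L(mu=0) from n N(mu,1)
    observations with sample mean xbar."""
    return math.exp(-n * ((xbar - delta) ** 2 - xbar ** 2) / 2)

def bob_lr(y2, alpha, beta):
    """Bob's likelihood ratio in scenario 2, where all he sees is
    y2 = 1 ("Alice found strong evidence for delta") or y2 = 0.

    alpha: P[Alice finds strong evidence | mu = 0]
    beta:  P[Alice fails to find strong evidence | mu = delta]
    """
    L1 = 1 - beta if y2 == 1 else beta        # P[Y2 | mu2 = delta]
    L0 = alpha if y2 == 1 else 1 - alpha      # P[Y2 | mu2 = 0]
    return L1 / L0
```

With, say, $\alpha=0.05$ and $\beta=0.2$, a report of "strong evidence" gives Bob a likelihood ratio of 16, however overwhelming Alice’s own data were: the binary report has thrown the rest away, and Bob’s likelihood is built entirely from Alice’s operating characteristics.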

In scenarios 2-4, Bob’s likelihood depends on Alice’s criterion for strength of evidence and on how likely she is to satisfy it — if Alice were a frequentist, we’d call $\alpha$ and $\beta$ her Type I and Type II error rates. But it’s not a problem of misuse of $p$-values. Alice doesn’t use $p$-values. She would never touch a $p$-value without heavy gloves. She doesn’t even like being in the same room as a $p$-value.  
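The scenario-3 problem shows up in a quick simulation: make every $\mu_i$ truly zero, let Alice publish a likelihood ratio only when it favours $\delta$ strongly, and look at what reaches Bob (the threshold, sample size, and trial count below are illustrative, not from the post):

```python
import math
import random

def selection_bias_demo(trials=2000, n=25, delta=0.5, threshold=3.0, seed=2):
    """Scenario 3 sketch: every mu_i is truly 0, but Alice publishes her
    likelihood ratio for mu = delta vs mu = 0 only when it exceeds
    `threshold`. Returns the list of published ratios."""
    rng = random.Random(seed)
    published = []
    for _ in range(trials):
        xbar = rng.gauss(0, 1 / math.sqrt(n))   # the true mean is 0
        # LR for mu = delta vs mu = 0 from n N(mu,1) observations
        lr = math.exp(-n * ((xbar - delta) ** 2 - xbar ** 2) / 2)
        if lr > threshold:                      # Alice's publication rule
            published.append(lr)
    return published
```

Every published ratio favours $\delta$ by construction, so Bob, naively multiplying the likelihood ratios he can see, accumulates evidence for $\delta$ without limit even though the truth is $\mu=0$.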

In scenario 4, Bob also needs to know $q$ in order to interpret papers that do not include results for $i>1$ — he needs to know Alice’s family-wise power and Type I error rate. That’s actually not quite true: if Bob knows Alice is following this rule he can ignore her papers that don’t contain all the likelihood ratios, since he does know $q$ for the ones that do.  His likelihood for $i>1$ still depends on $\alpha$, $\beta$, $q$, and the other $\mu$s.  

At least, if nearly all Alice’s papers report results for all the $\mu$s, Bob knows that the bias from just using Alice’s likelihood ratio when available will be small and he may be able to get by without all the detail and complication. 

This isn’t quite the same as publication bias, though it’s related. At least if $q$ is given and we know Alice’s criteria,  she always publishes information about every analysis that would be sufficient for likelihood inference not only about $\mu_i=0$ vs $\mu_i=\delta$, but even for point and interval estimation of $\mu_i$. Alice isn’t being evil here. She’s not hiding negative results; they just aren’t that interesting.

Of course, the problem would go away if Alice published, say, posterior distributions or point and interval estimates for all $\mu_i$, at least if $p$ isn’t large enough that the complete set could be sensitive.

tl;dr:  If I can’t get your data or at least (approximately) sufficient statistics, my conclusions may depend on details of your analysis and decision making that don’t affect your conclusions. And if you ever just report "was/wasn’t significant," Bob will hunt you down and make you regret it.