Either a National Review on Disability Living Allowance is going to be published soon stating that DLA overpayments are £600 million because claimants are failing to inform the DWP of changes in circumstances (and explaining how that figure is calculated), or Maria Miller is a liar.
How does she differ from a bullshitter like John Humphrys? Humphrys doesn't know what he's saying; Miller, I can say unequivocally, does. The overspend due to fraud and error for DLA is £220 million. Maria Miller wants you to believe that figure is not representative of DLA overspending. She wants the number '£600 million' in your head in relation to DLA and overspending. As with her claim about drug and alcohol addicts claiming higher-rate DLA Mobility in greater numbers than blind and deaf people, she doesn't always state the clarifying point that would get fact-checkers off her back (and they are rarely on her back when she feeds these figures to our complacent mainstream media). That claim was fact-checked by FullFact.org, but it wasn't an outright lie. Miller has lied frequently about a lot of things, but never on the public record where she could be caught out. Until now.
For the past two weeks this figure of £600 million in DLA overpayments has been cropping up in places: the Mail, the BBC and so on. None of them have bothered to actually check it, or even just ask Miller for a source. Is it fraud and error? No, it's not, and Miller even clarifies, very vaguely, that this is about something not counted as customer error: claimants' unreported changes in circumstances. Yet the annual reports on fraud and error in the benefits system define their terms as follows:
The bit that catches my eye is under Customer Error: "failed to report a change in their circumstances". This is precisely the grounds on which Maria Miller is asserting that DLA overpayments are £600 million. So what report treats Customer Error and unreported changes in circumstances as different things? That would be the national review of DLA carried out in 2004 and published in 2005. The previous one, in 1996, had been a precursor that guided the direction of the disastrous Benefit Integrity Project. The 2005 review could be considered a follow-up to the failed BIP, attempting to assert its values (the BIP's executors denied it was an attempt to reduce fraud, yet accused as many as 20% of claimants of being fraudulent with no evidence).
The 2005 national review of DLA summarised that one of its key findings was a DLA overspend of £730 million, or 9.1% of total expenditure. It unfortunately doesn't provide the breakdowns necessary to see how many claimants actually make up that percentage; it could be a lot of lower-rate claimants on just one component, or a few higher-rate claimants on both. It investigated a mostly randomised sample of 1,200 claimants from around the UK, excluding Northern Ireland. Some other exclusions were made, but not enough to be statistically significant. I've checked and re-checked, and there is not much wrong with the research.
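As a quick sanity check on the review's headline figure, the overall expenditure it implies can be backed out from the two numbers quoted above (a sketch in Python; £730 million and 9.1% are the only inputs, both taken from the review):

```python
# Back out the total DLA expenditure implied by the review's headline:
# an overspend of £730 million said to be 9.1% of total expenditure.
overspend = 730e6        # £730 million, the review's estimated overspend
overspend_share = 0.091  # 9.1% of total expenditure

total_expenditure = overspend / overspend_share
print(f"Implied total DLA expenditure: £{total_expenditure / 1e9:.1f} billion")
# → roughly £8.0 billion
```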
...Except the premise it is based on.
The conclusions are incredible, taking a dozen or so leaps in logic past all the most likely explanations. They place excessive trust in the ability of DWP decision makers to be consistent, which I can tell you they aren't. An anecdotal example: Sue Marsh at Diary of a Benefit Scrounger has claimed DLA before. She voluntarily cancelled her claim when circumstances entirely unrelated to her health (her family and financial situation) changed, which means she was still eligible. Things have since taken a turn for the worse, but when she tried to go back on DLA a while ago her claim was rejected. Her medical circumstances, and their effects, had not changed. There could be as many as 1 in 10 people to whom this applies.
In the breakdowns it does give, the 2005 report presents the figures for expenditure as percentages and rounded totals, as follows:
If this were 11.2% of expenditure being lost to 7.8% of cases, it would make sense. Instead it's the other way round: 7.8% of expenditure is being lost to... 11.2% of cases where claimants have not reported changes in circumstances. I considered that this could be caused by people being eligible for lower brackets than the ones they are on, rather than not being eligible for DLA at all, but the differentials between the brackets are higher than the lowest DLA brackets. I think the figure is likely an artefact of there being many other claimants outside the figure who are higher-rate claimants. We have to remember that the percentage figures are the 95% confidence estimates within the ranges displayed next to them. The margin of error here is massive; compare it with the ranges for fraud and official error. Customer error also seems to have got lost, or it's been included even though the report is supposed to treat it and changes in customer circumstances differently. I do know it's never been large enough to really make a difference, however.
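A rough back-of-the-envelope shows why the inversion matters, assuming (my assumption, not something the review states) that both percentages are measured against the same caseload and expenditure base:

```python
# If 11.2% of cases account for only 7.8% of expenditure, the average
# award in that group must be smaller than the overall average award.
share_of_expenditure = 0.078  # expenditure lost to unreported changes
share_of_cases = 0.112        # cases with unreported changes

relative_award = share_of_expenditure / share_of_cases
print(f"Average award in the 'unreported changes' group: "
      f"{relative_award:.0%} of the overall average")
# → about 70% of the overall average, i.e. skewed towards lower rates
```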
Can any reasonable person be sure that, out of two and a half million people with complicated long-term conditions, there will not be at least 10% on whom two decision makers will fail to come to the same conclusion? This would be especially true of rare, complicated and widely misunderstood conditions. To me this seems to be within the large margin of human error. The design of the study relied on DWP decision makers looking at changes in circumstances identified as unreported by the Review Officers, and then deciding whether those changes aligned with the descriptors for eligibility. How it differs from the annual reports estimating DLA overpayments at £220 million (as of this year so far, according to the preliminary report from July) is that it estimates how much expenditure has been lost because the claimants interviewed did not report changes in circumstances. The annual reports acknowledge that they cannot be sure when circumstances changed, and therefore cannot be certain how much was overpaid in the time since. At least, I think that is the difference. Unfortunately the 2005 national review does not state the methods used to arrive at the actual figure of £630 million, and I would have thought that, considering it was far greater than fraud and official error, they might have been more cautious about it. Instead it just assumes, based on a sample, that 300,000 people have each cost over £2,000 more on average than they should, and does not tell us why.
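To put the review's unexplained figure in concrete terms, here is the simple arithmetic on the numbers as quoted in this post (£630 million spread over the roughly 300,000 claimants it implies):

```python
# The review's £630 million figure, divided across the roughly 300,000
# claimants it implies, gives the per-claimant average the report
# never justifies.
total_overpayment = 630e6   # £630 million, the review's figure
claimants = 300_000         # implied number of overpaying claimants

per_claimant = total_overpayment / claimants
print(f"Implied average overpayment: £{per_claimant:,.0f} per claimant")
# → £2,100 each, with no stated method behind it
```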
Based on this, Miller would want to push the case that an 'objective test' can remove the human error and make assessments more consistent. It can't. The test itself, like the WCA, may be objective, but you know what isn't?
The expected answers.
Miller did not have to choose the national review paper. There have been five complete annual reports since then, putting total overspend at a fifth of what the review did. Unlike Humphrys, Miller cannot pretend not to know this as a defence. She is not a bullshitter this time; she is a liar.