

 Abstract

Excerpted From: Maneka Sinha, Junk Science at Sentencing, 89 George Washington Law Review 52 (January 2021) (353 Footnotes) (Full Document)

 

In 2018, with his sentencing hearing just around the corner, T.K., a seventeen-year-old in Washington, D.C., appeared set to head home for a period of probation. T.K. had pleaded guilty to a felony, and all relevant parties--including the prosecutor and the juvenile probation agency--agreed that incarceration was unnecessary. He was a conscientious student on the cusp of high school graduation, he was surrounded and supported by a close-knit family, and he had, by all accounts, taken every opportunity to turn his life around. As a result, the only issue that appeared to remain at T.K.'s sentencing was not whether he should be placed on probation, but how long that period should be.

But the day before sentencing, the parties received the results of a violence risk assessment, the Structured Assessment for Violence Risk in Youth (“SAVRY”), that had been conducted as part of a routine psychological evaluation ordered by the court. The SAVRY purports to predict the likelihood of future violence or reoffending in adolescents. According to the assessment, T.K. was at high risk of committing future violence. It was a life-altering report that drastically changed T.K.'s prospects at sentencing.

The assessment was flawed, however. The SAVRY rates twenty-four risk factors supposedly associated with a juvenile's risk of violent reoffending. An evaluator considers each of these risk factors and assigns a rating of low, moderate, or high risk to each. Low risk translates to a numerical value of zero, moderate to one, and high to two. Once all factors are evaluated, the evaluator assigns a total risk rating.

But, in T.K.'s case, the examiner misapplied the tool. First, she improperly double-counted certain behaviors by listing them in multiple categories, an inappropriate method of risk evaluation using the SAVRY. Second, the evaluator made an arithmetic error: she miscalculated the total number of elevated risk factors, assigning T.K. more elevated risk factors than were present in her own assessment. To top it off, the evaluator incorrectly assessed T.K.'s risk of future dangerousness as “high,” when her own data justified only a “low,” or at most, “moderate” risk rating. In short, she never should have found that T.K. posed a high risk of future dangerousness; nevertheless, T.K.'s future changed overnight.
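The scoring arithmetic described above can be made concrete with a short sketch. To be clear, this is not the SAVRY instrument itself: the function name, the input format, and the double-counting check are hypothetical illustrations, and only the twenty-four-factor structure and the low/moderate/high ratings mapped to zero, one, and two come from the description above.

```python
# Illustrative sketch only. The real SAVRY is a structured professional judgment
# instrument with twenty-four defined risk factors; everything here beyond the
# 24-factor count and the low/moderate/high -> 0/1/2 mapping is an assumption
# used to illustrate the arithmetic described in the text.

RATING_VALUES = {"low": 0, "moderate": 1, "high": 2}

def score_assessment(ratings: dict[str, str], behaviors: dict[str, list[str]]):
    """Tally an evaluator's ratings and run basic consistency checks.

    ratings:   factor name -> "low" | "moderate" | "high" (24 entries expected)
    behaviors: factor name -> list of underlying behaviors cited for that factor
    """
    if len(ratings) != 24:
        raise ValueError(f"Expected 24 rated factors, got {len(ratings)}")

    numeric = {factor: RATING_VALUES[r] for factor, r in ratings.items()}
    elevated = [f for f, v in numeric.items() if v > 0]  # moderate or high

    # Consistency check: the same behavior should not be counted under
    # multiple factors (the double-counting error described in the text).
    seen, double_counted = {}, []
    for factor, cited in behaviors.items():
        for behavior in cited:
            if behavior in seen:
                double_counted.append((behavior, seen[behavior], factor))
            seen[behavior] = factor

    return {
        "elevated_factor_count": len(elevated),  # the tally the evaluator miscounted
        "total_score": sum(numeric.values()),
        "double_counted_behaviors": double_counted,
    }
```

A check of this kind, run against the evaluator's own ratings, would have surfaced both the repeated behaviors and the mismatch between the tallied elevated factors and the “high” summary rating described above.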

T.K.'s case is not atypical: judges regularly rely on risk assessments and other flawed scientific evidence at sentencing hearings. This is particularly troubling because today, the overwhelming majority of criminal defendants plead guilty and proceed to sentencing without ever having a trial. Their cases are functionally resolved at sentencing, where there is no meaningful way to screen out junk science and prevent judges from relying on the type of flawed evidence supplied in T.K.'s case.

After reviewing the SAVRY results, both the prosecutor and probation agency rescinded their recommendations for T.K. to be placed on probation and, instead, asked for him to be committed to the custody of the local youth rehabilitation agency. Commitment would mean that, rather than being free to attend one of the colleges to which he had been admitted, T.K. could spend years at a secure detention facility--a euphemistic term for what is, in effect, prison for children.

T.K. sought to challenge the admissibility of the risk assessment under Federal Rule of Evidence 702, which governs the admissibility of expert evidence at trial. He argued, through an expert in psychology and risk assessments, that the risk assessment in question is unreliable in predicting future violence and that, even if the tool were generally reliable, it is not reliable as applied to his case.

But as T.K.'s judge noted, Rule 702 does not apply at sentencing. As a result, T.K.'s prospects for leniency were seriously diminished as the prosecutor and probation department requested a harsher sentence than they had sought prior to reading the unreliable report.

There is widespread agreement that evidentiary rules applied at trial, including expert admissibility tests, are important in that they aid in the search for truth and promote fair outcomes. But because Rule 702 and other rules of evidence do not apply at sentencing, judges often make extremely consequential punishment decisions based on unreliable or untested evidence. All over the country, in federal jurisdictions and the many states that have adopted Rule 702 or a similar rule, criminal defendants find themselves in the same position T.K. did in 2018. At sentencing, defendants are vulnerable to the use of flawed scientific evidence in a way they would not be at trial, where Rule 702 and similar state rules serve to filter out unreliable scientific, technical, and specialized evidence (“STS evidence”), or junk science. STS evidence may constitute junk science because (1) the underlying science itself is inherently unreliable, (2) an otherwise valid method is misapplied to produce faulty results, or (3) forensic examiners exaggerate results. Judicial reliance on such evidence may directly contribute to a criminal defendant spending more days, weeks, years, or even decades in prison.

This Article argues that the frequent use of often flawed STS information at sentencing, coupled with how critical the sentencing stage is to the administration of criminal justice, necessitates greater scrutiny of such evidence than is currently applied in sentencing decisions. It offers the first in-depth exploration of STS evidence at sentencing. It bridges the extensive scholarship on junk science at the trial stage with scholarship advocating for extension of procedural protections to sentencing. And it further builds upon these literatures by proposing an implementable mechanism for evaluating STS evidence at sentencing while retaining special protections for criminal defendants.

This Article proposes that Rule 702 and its state analogs be extended in a modified, asymmetrical format to sentencing. Specifically, it proposes that the same types of evidence that would be subject to Rule 702's admissibility test at the trial stage be subject to that test at sentencing when offered by the prosecution or a probation officer in support of harsher punishment, but not when offered by a defendant for mitigation purposes.

The Article proceeds in three parts. Part I explores the need for increased examination of the use of STS evidence at sentencing. It first details the role flawed scientific evidence has played in contributing to troubling outcomes at the trial phase. It then lays out the critical importance of the sentencing stage in the modern criminal legal system and describes the ways in which STS evidence is used at sentencings today. Finally, Part I explains the operation of Rule 702 at trial and contrasts the Rule with what little exists in the way of a legal standard to filter out unreliable evidence at sentencing.

Part II proposes a mechanism for screening unreliable STS evidence at sentencing that (1) is calculated to improve sentencing accuracy, (2) allows judges to consider a broad range of information, and (3) avoids compromising defendants' ability to present mitigating evidence. It suggests that when STS evidence is offered by the prosecutor or a neutral party in support of an argument for an increased sentence, it be subjected to the admissibility test in Rule 702 or the state equivalent--but that STS evidence offered by a defendant at sentencing need not be subjected to such a test. It then evaluates this proposal on a number of dimensions.

Part III analyzes a sampling of previous recommendations to improve sentencing accuracy and increase protections for criminal defendants at sentencing. It then considers their suitability for filtering out junk science at sentencing, concluding that while valuable in their own right, none of these recommendations adequately addresses the problem of junk science at sentencing.

[. . .]

Admissibility thresholds apply at trial to keep “junk science” from resulting in wrongful convictions and other miscarriages of justice. At sentencing, where the liberty interest may be greater than at trial, the same should apply. Still, there has been insufficient discussion of the extent to which STS evidence has contributed to unjust outcomes at sentencing.

It is in no one's interest--not the defendant's, not society's, not even the prosecutor's--for sentences to be based upon unreliable evidence. The criminal justice system prioritizes protecting liberty above securing detention. The solution proposed by this Article targets the problematic reliance on STS evidence of questionable validity while acknowledging and promoting this principle.

 

Ultimately, T.K.'s story had a relatively happy ending that illustrates how the proposal made herein is administrable and can be effective in screening out junk science at sentencing. T.K.'s judge did not have to grant a hearing on the reliability of the SAVRY results or assess the admissibility of those results under Daubert. But he decided to anyway, holding a hearing at which he heard evidence from an expert on the SAVRY's lack of reliability and the problematic application of the tool in T.K.'s case. That decision was pivotal. The peek behind the curtain showed T.K.'s judge that, despite the aura of trustworthiness STS evidence carries with it, not all such evidence is reliable. He found that the risk evaluator's conclusion was not supported by her data, and he decided not to rely on the assessment at sentencing.

T.K.'s challenge was successful because the stars aligned for him in a way that is not likely to happen again; he is one of very few criminal defendants to have been represented by well-resourced defenders who were willing to litigate his claim despite knowing the law would not support it. On top of that, he was fortunate to have been assigned a judge who was willing to analyze whether the purportedly scientific evidence he was presented with was actually reliable. But T.K.'s case does not have to be a one-off. As described above, some commentators may argue against the proposal advanced here by noting that Rule 702 has proven ineffective in screening out unreliable STS evidence at the trial stage. But tell that to T.K.: T.K.'s judge essentially applied the model advanced here, and, in doing so, changed T.K.'s future. Though T.K.'s case is only a single data point, it demonstrates that the proposal advanced here is administrable and, if adopted, can result in fewer miscarriages of justice.

If not for the judge's highly unusual decision to review the reliability of the report submitted against T.K., junk science would have cost T.K. his liberty and, likely, his prospects for a successful future. Few, if any, defendants are afforded the opportunity that T.K. was; most are, therefore, at the mercy of unreliable STS evidence at sentencing. This Article has advanced one proposal for changing that.


Assistant Professor of Law, University of Maryland School of Law.

