Why validate?
This blog post explains why the results from forensic method validation are just as important as the results from real casework, why validation is critical to ensuring the accuracy and reliability of forensic methods, and why this matters in particular for machine-generated results in forensic analysis.
The need to validate
Validation ensures that the methods used in forensic science are fit for purpose, that their limitations are known, and that their performance is empirically assessed on the basis of scientific data. Validation is a critical component of forensic standards such as ISO/IEC 17025 and the UK Forensic Science Regulator’s Codes of Practice and Conduct.
Despite validation being an essential part of forensic accreditation and quality assurance, there is still a notable scarcity of published validation studies for many forensic methods, including some disciplines that are well established with a long history.
The lack of scientific empiricism in the development and validation of forensic methods has been acknowledged and commented upon in the wider scientific community for many years (see the 2009 NAS report and the 2016 PCAST report). But despite this scrutiny, validation is still lacking across the forensic sciences.
The need to validate forensic methods is not going away, and this post argues that the results from validation studies are equally as important to the criminal justice system as the results from casework, and that validation is necessary to ensure that the forensic evidence produced by a method is safe, impartial and able to be relied upon.
Why is validation lacking across forensic science?
There are three potential contributors to the current status quo:
1. The perceptions of validation by forensic practitioners.
2. A lack of training and experience within the forensic practitioner community in how to scientifically test a method and the tools used within it.
3. The manner in which many forensic methods are developed in-house at forensic laboratories.
In terms of the perceptions of validation by forensic practitioners, views can swing both ways. On the one hand, validation can be seen as a simple tick-box exercise that, in reality, does little to rigorously test a method. At the other extreme, validation is seen as so complex, and as needing to cover every eventuality, that it is impossible to implement and maintain, and so is never started, let alone finished.
These two opposing (and both incorrect) views likely arise from the second point, a lack of training and experience amongst forensic practitioners in how to scientifically validate a method. How many training and competency frameworks for forensic practitioners include validation as a competency requirement? From experience, it is not a common requirement.
In most situations, practitioners are trained in how to carry out a method but not in how to critically develop, test and validate those methods. This often results in validation becoming the responsibility of one or two individuals who have shown an aptitude for testing or have some prior experience. So, if practitioners are not being consistently trained in how to validate methods, is it any wonder that validation is lacking?
Both of these points then feed into the final point, that most methods are developed in-house by forensic practitioners. If staff are developing methods without the requisite training and competency in how to validate those methods and have skewed perceptions of validation to begin with, there is less chance that internally developed methods will be robustly validated.
These observations are, of course, sweeping generalisations and do not reflect the entire forensic science community. There are organisations out there who are carrying out robust and effective method validation (and if you work for one of those organisations, share the knowledge!).
Machine-generated results
There is another shift in forensic science that is making the need for robust validation more critical than ever. This is the increasing use of machine-generated results in forensic analysis.
The use of technology in forensic science brings huge benefits, automating routine processes and allowing examinations to be performed more quickly and at scale. But as forensic methods become more reliant on machine-generated results, and because most vendors do not verify the tools that produce these results in a way that meets forensic standards, the onus of validating the forensic methods that rely on these tools falls on forensic laboratories.
In the absence of robust tool verification and method validation, there is a risk of handing over the analysis of forensic traces to an unvalidated technological black box, where the accuracy and reliability of the results are largely unknown. This risk is compounded by the recent drive by vendors to build artificial intelligence and machine learning capabilities into forensic tools, making it essential that the machine-generated outputs of these tools are robustly tested as part of method validation.
The forensic method
Before going further into validation, we should establish the purpose of a forensic method. The Forensic Science Regulator’s Codes describe a method as:
“a logical sequence of operations or analysis which may include the use of software, hardware, tools and action by the practitioner”
This functional definition describes what a method is, but doesn’t really describe the purpose of a method.
In the wider scientific community, a method is a process of experimentation in which repeated observations are used to try to explain phenomena in the real world.
In forensic science we are trying to assist the trier of fact (a judge or jury) to answer a question about a real world event that has already happened, for example:
“Did this fragment of glass come from the broken window?”
“Is the person shown in the CCTV the defendant?”
“Is the victim’s blood on the defendant’s t-shirt?”
To attempt to answer these questions the forensic scientist must infer information from traces of evidence that are linked to the event, through a process of analysis, interpretation and evaluation. In essence this is a forensic method. But critically, the traces only provide partial information about the event, as the evidence may be degraded or inherently limited in the information it can provide. Also, the real world is complex and unpredictable, and we cannot accommodate every possible factor or variable into a forensic examination. This means any forensic method will have intrinsic limitations, and this leads to uncertainty in the results.
So, we can think of the forensic method as an attempt to infer information about a past event from partial subsets of information within a constrained set of parameters, where there is a high degree of uncertainty.
Essentially, when carrying out a forensic method we are running an experiment based on the observations of a very limited set of data (the trace(s) of evidence), where the observations of the data can be perceptual, based on measurement or made by a machine. We hope that our method/experiment is an accurate model of the real world but we know we cannot replicate the real world in its entirety.
We also cannot rely on casework to demonstrate the validity of a method, as casework data is limited, uncontrolled and the ground truth is unknown (conviction and acquittal rates are not reliable ground truth data).
In order to understand the intrinsic limitations of a forensic method and the parameters that impact the performance of the method we must validate it with representative test data. At a basic level this means carrying out the end-to-end method on test materials that replicate those encountered in casework (insofar as possible).
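To make this concrete, ground-truth testing of a method can be sketched in a few lines of code. The example below is purely illustrative: the `classify_fragment` function, its refractive-index threshold and the test values are invented stand-ins, not a real forensic method or real reference data.

```python
# Sketch: exercising a method end-to-end on test materials with known
# ground truth. The "method" here is a hypothetical placeholder.

def classify_fragment(refractive_index: float) -> str:
    """Hypothetical stand-in for a forensic method: classify a glass
    fragment as 'window' or 'container' from its refractive index.
    The 1.520 threshold is invented for illustration only."""
    return "window" if refractive_index < 1.520 else "container"

# Representative test materials with known ground truth (invented values).
test_materials = [
    {"refractive_index": 1.516, "ground_truth": "window"},
    {"refractive_index": 1.518, "ground_truth": "window"},
    {"refractive_index": 1.523, "ground_truth": "container"},
    {"refractive_index": 1.519, "ground_truth": "window"},
    {"refractive_index": 1.526, "ground_truth": "container"},
]

# Run the method on every test material and compare to ground truth.
correct = sum(
    classify_fragment(m["refractive_index"]) == m["ground_truth"]
    for m in test_materials
)
accuracy = correct / len(test_materials)
print(f"Correct: {correct}/{len(test_materials)} (accuracy {accuracy:.0%})")
```

The same pattern scales up: run the full method on every test material, compare each result to the known ground truth, and report the error rate alongside the conditions under which errors occurred.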
How to validate?
The validation of forensic methods can follow one of two approaches: general (or anticipatory) validation and case-specific validation.
General validation
For general validation, the method is validated prior to its introduction into live casework. This requires a detailed validation plan that includes a technical specification of how the method is expected to operate, testable end-user requirements, and acceptance criteria that define when those requirements have been met.
The plan should also document who will undertake the validation and the test materials that will be used. In general validation, the test materials should be representative of a range of casework materials, though, of course, this cannot be exhaustive.
The validation should take place in the environment that the method will operate in, using the same software and hardware, and the testers must be competent in applying the method.
What testing is required will depend upon the technical specification and end-user requirements, but as a general principle the validation should test the accuracy and reliability of the method, and how well calibrated its results are to the expected results.
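As an illustration of what testing accuracy and calibration can look like, the sketch below scores a hypothetical method that reports a match probability against ground-truth outcomes, using accuracy at a decision threshold and the Brier score as a simple calibration measure. All of the numbers are invented for the example.

```python
# Sketch: scoring accuracy and calibration of a method that outputs a
# probability (e.g. a machine-generated match score). The scores and
# ground truths below are invented for illustration.

# (reported probability of a match, ground truth: 1 = match, 0 = no match)
results = [
    (0.95, 1),
    (0.80, 1),
    (0.10, 0),
    (0.30, 0),
    (0.60, 1),
]

# Accuracy at a 0.5 decision threshold.
accuracy = sum((p >= 0.5) == bool(y) for p, y in results) / len(results)

# Brier score: mean squared difference between reported probability and
# outcome. 0 is perfect; 0.25 is what an uninformative constant 0.5
# prediction would score.
brier = sum((p - y) ** 2 for p, y in results) / len(results)

print(f"accuracy = {accuracy:.2f}, Brier score = {brier:.4f}")
```

A well-calibrated method reports probabilities that match the observed outcome frequencies, so a Brier score near 0 suggests good calibration, while scores approaching 0.25 or above suggest the reported confidence cannot be taken at face value.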
A validation report on the results of the testing is required, which should also include any limitations and caveats for the use of the method.
When validating a method with the general approach we cannot realistically cover every eventuality, so there will be inherent limitations. But having other quality control and assurance processes in place can help to address these limitations.
If any significant aspects of the method change, or the software or hardware used is replaced or updated, it may be necessary to revalidate the method.
Case-specific validation
In some instances we may want to validate our method on a case-by-case basis, for example if the method is evaluative or is infrequently used. Here the same principles apply as for general validation but the reference materials and testing reflect the parameters of a specific case, so we will not be testing the reliability of the method across different types of cases.
A change in approach
Even with a thorough understanding of method validation it is still an onerous task, and there is much replication and repetition of validation across different organisations providing forensic science services.
Here are a few recommendations for how we could potentially change the current approach to forensic method validation:
Ensure that forensic practitioners receive adequate training in the validation of methods and that method validation forms part of a practitioner’s competency requirements. Organisations need to invest time in developing the necessary skillsets for effective validation.
Standardise methods insofar as possible to allow for the sharing of validation data. This eases the burden of validation and drives consistency in analyses.
Work with forensic tool providers to ensure that any tool testing and verification data from the provider is released in a format that meets the requirements of relevant national and international forensic standards. Forensic practitioners are not beta-testers for tools. Harmonising tool test results with international standards like ISO 17025 and making these results available would mark a massive step change in the approach to forensic method validation.
Work with research institutions and universities to communicate the need for compliant method validation. There are a plethora of undergraduate and postgraduate forensic science courses, and most students will need to undertake some kind of research project. Partnering universities with forensic science organisations to support with method validation provides a mutual benefit to students and forensic science practitioners alike.
Forensic method validation isn’t easy and (to most people) isn’t fun. But it is essential to ensure the validity of forensic results entering the criminal justice system.
Hopefully this blog post has been helpful in setting out some of the requirements of method validation and in highlighting areas where collaboration and sharing of data can support a more effective approach.