In late 2023, Kathryn Hayes and Marya Lieberman at the University of Notre Dame published one of the most important harm-reduction papers of the decade. "Assessment of two brands of fentanyl test strips with 251 synthetic opioids reveals 'blind spots' in detection capabilities" (Harm Reduction Journal 20:175, 2023) did exactly what the title says. It tested two of the most-distributed fentanyl test strips in North America against 251 synthetic opioids, including 214 fentanyl analogs, and asked a basic question: which compounds get caught and which slip through?
The results were uncomfortable. Of the 251 compounds, 121 were detected by both brands, 50 were detected by neither, and 80 were detected by only one of the two. The two brands were not redundant. They were complementary, missing different subsets of the analog landscape. And the 50 compounds missed by both included molecules that have appeared in fatal overdoses.
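The arithmetic of that overlap is worth spelling out. As a minimal sketch, using only the counts published in the paper (the per-brand split of the 80 singly-detected compounds cannot be derived from these totals alone):

```python
# Published counts from Hayes & Lieberman (2023): 251 compounds total.
total = 251
both = 121       # detected by both brands
neither = 50     # detected by neither brand
only_one = 80    # detected by exactly one of the two brands

# Sanity check: the three groups partition the full compound set.
assert both + neither + only_one == total

# Either brand alone detects at most both + only_one compounds,
# but using the two together guarantees:
at_least_one = both + only_one
print(f"Two strips together: {at_least_one}/{total} "
      f"({100 * at_least_one / total:.0f}%) detected")
print(f"Blind to both:       {neither}/{total} "
      f"({100 * neither / total:.0f}%)")
```

Running two complementary strips in parallel covers 201 of 251 compounds, while one in five compounds remains invisible to both.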
Anyone who deploys fentanyl test strips at scale should read that paper. The deeper lesson goes beyond which brand to pick.
Why the gaps exist
Lateral-flow immunoassay strips work by binding an analyte to an antibody that was raised against a specific molecular feature called an epitope. The antibody recognizes that feature and a small region around it. If the analyte molecule is altered in a region the antibody does not recognize, the strip still catches it. If the alteration is in the epitope, the antibody does not bind, and the strip reads negative.
The Hayes and Lieberman paper traced the structural basis of the two brands' complementary blind spots. One brand's antibody was raised against an epitope at the carbonyl end of fentanyl, so it tolerates changes at the phenethyl end but is blind to carbonyl-modified analogs. The other was raised against an epitope at the piperidine end, so it tolerates carbonyl modifications but is blind to piperidine-modified analogs. Two different antibody design choices produced two different coverage profiles. Neither was a "better" antibody. They were different.
Why "high sensitivity" is not enough
A common procurement question is "what is the limit of detection?" That question matters: a strip that requires 1000 ng/mL to produce a positive will miss trace amounts that a strip detecting 100 ng/mL will catch. But sensitivity says nothing about coverage. A 100 ng/mL strip that recognizes only 60 percent of the analog landscape will quietly miss a class of molecules that a less sensitive but more broadly covering strip catches.
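One way to see why the two specifications are independent is to model a strip as a pair: a limit of detection and a set of recognized analogs. A positive result requires both conditions at once. The strips, analog names, and numbers below are hypothetical, chosen only to illustrate the trade-off:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Strip:
    name: str
    lod_ng_ml: float           # limit of detection, ng/mL
    recognized: frozenset      # analogs the antibody binds

def reads_positive(strip, analog, conc_ng_ml):
    # A lateral-flow strip fires only if the antibody recognizes the
    # analog AND the concentration clears the limit of detection.
    return analog in strip.recognized and conc_ng_ml >= strip.lod_ng_ml

# Hypothetical strips: one very sensitive but narrow, one less
# sensitive but broad (names and coverage sets are illustrative).
narrow = Strip("narrow", 100, frozenset({"fentanyl", "acetylfentanyl"}))
broad = Strip("broad", 500, frozenset({"fentanyl", "acetylfentanyl",
                                       "carfentanil", "fluorofentanyl"}))

# The sensitive strip misses an out-of-coverage analog at ANY dose...
assert not reads_positive(narrow, "carfentanil", 10_000)
# ...while the broad strip catches it once the dose clears its LOD.
assert reads_positive(broad, "carfentanil", 10_000)
# Conversely, only the sensitive strip catches a trace-level covered analog.
assert reads_positive(narrow, "fentanyl", 200)
assert not reads_positive(broad, "fentanyl", 200)
```

No single number summarizes such a strip: the limit of detection bounds one failure mode, and the coverage set bounds the other.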
This is especially relevant in the current synthetic opioid market. The fentanyl analog landscape continues to expand. The nitazene class, with at least 36 known analogs and counting, is even more challenging because the molecular diversity within the class is greater than among fentanyl analogs. A 2024 paper by de Vrieze, Stove, and Vandeputte at Ghent University (Harm Reduction Journal 21:118, doi:10.1186/s12954-024-01078-8) found that a leading nitazene strip detected 24 of 33 tested nitazene analogs and was completely blind to "desnitazenes" (compounds lacking the 5-nitro group), some of which have appeared in fatal overdoses.
How to design for class coverage
The structural lesson of the Hayes and Lieberman work is that an antibody raised at a single epitope on a single parent molecule will inevitably miss analogs modified in that region. The implication for product design is that broad class coverage requires either:
- Antibodies raised against multiple epitopes within the analyte class, combined on a single strip;
- Antibodies raised against a structural feature that is conserved across the class (the difficulty being that "conserved" usually means smaller, which usually means more cross-reactivity); or
- Multiple parallel strips with documented complementary coverage profiles, used together.
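The third option reduces to simple set algebra over each strip's documented detect/miss lists. The structural-class labels and coverage profiles below are invented placeholders, not data for any real product:

```python
# Hypothetical universe of structural classes within an analyte family.
analog_classes = {"parent", "carbonyl-modified", "piperidine-modified",
                  "phenethyl-modified", "anilide-modified"}

# Hypothetical documented coverage profiles: which classes each
# strip's antibody still detects despite the modification.
strip_a = {"parent", "piperidine-modified", "phenethyl-modified"}
strip_b = {"parent", "carbonyl-modified", "phenethyl-modified"}

panel = strip_a | strip_b          # detected by at least one strip
blind = analog_classes - panel     # missed by the whole panel

print(sorted(panel))  # four of the five classes are covered
print(sorted(blind))  # the one class no strip in the panel tolerates
```

The union only helps if the profiles are genuinely complementary, which is exactly why the documented miss list matters as much as the documented hit list.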
Each path has trade-offs. Multi-epitope strips can dilute sensitivity to any single target. Conserved-feature antibodies can throw false positives on adjacent classes. Parallel strips multiply procurement and training costs. There is no free lunch in immunoassay design. There is, however, a lunch worth paying for.
How DSG approaches class coverage
DSG designs its drug-checking strips with class coverage as a primary specification, not just sensitivity. For fentanyl, analog coverage at standard test concentrations is comparable to or broader than that of the major incumbents in published comparative testing. For nitazenes, desnitazene detection was explicitly tested during validation, given the published gap. For benzodiazepines, coverage extends beyond the legacy designer-benzodiazepine set into the most recently emerging analogs.
The coverage profile DSG documents includes both what is detected and what is not. Buyers receive both pieces of data with each institutional order.
The right question for a strip is not "what does it catch" but "what does it miss, and is the miss a class we care about."
What buyers should be asking
Three procurement questions follow directly from the structural class coverage frame:
- What analog set was the strip validated against? Ask specifically for the list; a short list is itself informative.
- What classes of analogs is the strip blind to? A supplier who answers "none" has not done the work. A supplier who answers "we are blind to compounds with [specific structural modification] above [specific concentration]" has.
- Has the strip been tested against published comparison panels? The Hayes and Lieberman 251-compound set is the canonical fentanyl test. The de Vrieze et al. 33-analog nitazene panel is the canonical nitazene test. A supplier who has tested against these and is willing to share results has done their homework.
The answers are not always pretty. They are honest. And in a field where the supply changes faster than the strips can validate, honest beats pretty every time.
Further reading
- Hayes KL, Lieberman M. (2023). Assessment of two brands of fentanyl test strips with 251 synthetic opioids reveals "blind spots" in detection capabilities. Harm Reduction Journal 20:175. doi:10.1186/s12954-023-00911-w
- Lockwood TLE, Vervoordt A, Lieberman M. (2021). High concentrations of illicit stimulants and cutting agents cause false positives on fentanyl test strips. Harm Reduction Journal 18:30. doi:10.1186/s12954-021-00478-4
- The Notre Dame Paper Analytical Devices project, where the Lieberman group's drug-checking and analytical work originates, is documented at padproject.nd.edu.