Who This Was Written For
To celebrate the latest release of our systematic literature review tool, the CiteMed team wanted to share with our fellow medical writers some of the hard-earned experience that’s come from performing hundreds of SLRs and taking them through Notified Body Audits.
If you are:
- Performing your own systematic reviews
- Managing a team and want to tune up your Protocol and Process
Then this whitepaper will be relevant and (we hope) save you hours of response time for issues that could have been corrected with just a little foresight.
Enjoy!
The Common Categories of Non-Conformity
In our experience, all feedback related to the literature review (and, by extension, the CER) falls into one of three categories:
- Protocol design and format
- Approach and method issues
- Strategy issues
Protocol Design and Format
If your protocol is non-existent, or not laid out in a way that an outsider could sit down and understand your full process… you will have problems.
Approach and Method Issues
How are you evaluating literature? What is your method of grading abstracts? Sometimes, manufacturers don’t take a rigorous enough approach to evaluating the good literature they find (or discarding the bad).
Strategy Issues
These are the toughest issues to sort out, and they almost always involve the case you are trying to support in the CER. Choosing safety and performance endpoints, and defining and searching alternative treatments and SoTA therapies, can derail a literature review from the start.
Format of This Document
We tried to keep this as simple as possible, because a lot of the feedback can be dense. Within each category we have distilled the most important top-level issues found in our data into headlines, each followed by a short discussion of how to start addressing them.
Final Note – We Address A Lot of These For You
Our systematic review software (citemed.io) is designed to resolve the vast majority of structure, validation, and audit non-conformity questions that arise. So by simply using our tool (starting from only $50/Month, seriously) you can avoid a ton of pain. You can set up a project in our sandbox environment for free, just drop us a line.
Can’t get even that pricing approved? You can still read, copy, and re-use the literature search protocols that we’ve published here. In our opinion, trying to do this review type and format it all in Excel is a massive waste of time, but we get that sometimes you have no choice.
The Common Literature Review Non Conformity List
1. Protocol Design and Format Issues
There is no protocol for the systematic search of the literature currently available on the issues of safety, performance, clinical benefits for the patient, design features and intended use of the Device and/or equivalent Devices.
The most obvious feedback you can receive, not even having a protocol to provide! In the MDD days, a lot of tech files got away with just providing a list of good articles that they referenced individually in the CER. Those days are no more.
You need a defined and easily found Protocol (we prefer to create this as a separate document, but it can also just go inside your Literature Review report). Ours is available for free here.
No flow chart for evidence analysis or PRISMA present
PRISMA charts have become the de-facto standard expectation for notified bodies. While no regulation technically requires the chart (to our knowledge), anyone who forgets to include one will get comments, so it’s best to go with the tide here.
PRISMA numbers can be a slight annoyance to calculate (and re-calculate if you change or update your reviews). Our lit review system does this for you, but of course it can be done manually via Excel (here’s a source link).
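If you are tallying the flow-diagram numbers yourself, the arithmetic is simple enough to script. Below is a minimal sketch, assuming each citation record carries a screening status such as ‘included’, ‘excluded’, or ‘duplicate’ (the record format and status labels are our assumptions, not a standard):

```python
from collections import Counter

# Hypothetical citation records: (title, screening status) pairs as they
# might be exported from a screening spreadsheet. Labels are assumptions.
citations = [
    ("Article A", "included"),
    ("Article B", "excluded"),
    ("Article A", "duplicate"),
    ("Article C", "included"),
    ("Article D", "excluded"),
]

def prisma_counts(records):
    """Tally the numbers needed for a basic PRISMA flow diagram."""
    counts = Counter(status for _, status in records)
    identified = len(records)              # all records pulled from all searches
    duplicates = counts["duplicate"]
    return {
        "identified": identified,
        "duplicates_removed": duplicates,
        "screened": identified - duplicates,
        "excluded": counts["excluded"],
        "included": counts["included"],
    }

print(prisma_counts(citations))
```

Recomputing after a review update is then just re-running the script against the refreshed export, rather than re-counting cells by hand.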
No ‘results’ of the literature search and review were provided.
This is another organization comment. The ‘results’ of a systematic literature review are not just the final good articles in a nice zip file. When you get a comment about ‘results’ of your review, make sure you are replying with all the info from the review:
- The results of each search term, and database (how many articles included, excluded, duplicates etc.)
- Full table of all citations and what happened to them (excluded with a reason, included for extraction, marked as duplicate)
- All of your ‘included’ articles with, hopefully, a detailed data-extraction process
So basically, what you searched (the counts of results), what you did with every single article reviewed, and of course the good stuff you extracted from the most relevant articles.
The reviewer could not replicate the literature review process. Please provide further details on the literature search and review protocol so search method can be re-created.
This is another protocol complaint. Either the reviewer didn’t see your beautifully crafted search strategy, or it doesn’t exist yet.
Take a look at our protocol, and make sure yours has similar levels of document flow + detail.
The reviewer was unable to validate the results of the literature search. It is important that the literature search methods can be appraised critically, the results can be verified, and the search reproduced.
Ouch, if you receive this cryptic reply you’ve got some explaining to do. Keywords like ‘validate’ or ‘verify’ mean that the audit logs of your search are not up to the reviewer’s standards.
Here’s how to address it.
After you’ve updated your protocol to go into exhaustive detail about what you will search, why, and how you will assess the citations, the reviewer is going to want to see some kind of ‘proof’ that the results you are showing here are fully inclusive.
This is where the audit trail comes in. If you aren’t using our tool yet (built-in automatic audit trail), then the best way to do this is by showing the following:
- The ‘Raw’ Search files, dated and labeled neatly (these are the exported files from PubMed, Embase Etc.)
- A complete table that lists:
- Every citation
- Which search it came from
- The action that was taken (include, exclude, duplicate)
We’ve never had an auditor try to dig into our raw search files and reconcile the results, but having it prepped and included in your submission helps them feel that you’ve done your diligence and are on top of the quality of your review.
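That reconciliation table can be generated straight from the raw export files rather than assembled by hand. Here is a minimal sketch of the idea, assuming one list of citation titles per dated search export and one recorded decision per unique citation (file names, titles, and decision labels below are made up for illustration):

```python
# Hypothetical raw exports: one list of citation titles per search,
# as pulled from PubMed, Embase, etc.
raw_searches = {
    "pubmed_2024-01-15": ["Stent outcomes", "Flow rates in catheters"],
    "embase_2024-01-15": ["Flow rates in catheters", "Adverse events review"],
}

# Screening decision recorded for each unique citation (assumed labels).
decisions = {
    "Stent outcomes": "excluded",
    "Flow rates in catheters": "included",
    "Adverse events review": "excluded",
}

def audit_table(searches, decisions):
    """One row per citation per search: citation, source search, action.
    Repeat appearances after the first are marked as duplicates."""
    rows, seen = [], set()
    for search, titles in searches.items():
        for title in titles:
            action = "duplicate" if title in seen else decisions[title]
            seen.add(title)
            rows.append({"citation": title, "search": search, "action": action})
    return rows

for row in audit_table(raw_searches, decisions):
    print(row["citation"], "|", row["search"], "|", row["action"])
```

The output covers exactly the three columns listed above (every citation, which search it came from, the action taken), which is what lets a reviewer tie your raw exports to your final included list.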
2. Appraisal Technique Issues
It is unclear what methods were used and what results were obtained during the search on the subject device, state of art treatment, and competitor/similar devices.
Language like this can point to a protocol issue, but also potentially a search term and strategy issue. To correct it, make sure your search strategy clearly communicates the following:
- What methods you are using to pick your search terms (PICO, etc.)
- That your search terms are fully covering your device, State of the Art, and Similar/Competitor devices.
- That you have a framework for assessing each article (a flow chart works nicely here)
- Your process for grading full-texts, and extracting relevant data is clearly defined
The reviewer was unable to identify any systematic search/assessment methods that may have been used (i.e. PICO, PRISMA, etc.) to support the adequacy of the search terms used.
This is more of the same as above, but with a specific focus on search term selection. Unfortunately, our experience with search term definition has varied across submissions and notified bodies. What we can say for sure, though:
- Your terms need to appear comprehensive (without being insanely broad)
- Showing your process of ‘narrowing’ down terms generally works in your favor.
- If you reference specific performance endpoints, or safety issues they must appear in some of your terms.
- State of the Art is a separate search, one or two search terms won’t cut it (more on this later).
There are no specific criteria for inclusion and exclusion of articles and systematic methods of research.
Criteria for Inclusion and Exclusion should be specified in your protocol. With Exclusion criteria, we generally stick to a defined list of 5-7 reasons and apply one of those reasons to every single excluded article.
Inclusion reasons don’t need to be paired so rigorously, it’s enough to say that an article is ‘included’ because it met all the criteria listed.
Reasons for excluding certain articles should be provided and accessible to the reviewer
Every article that’s been excluded needs a paired reason explaining why. You do not have to write a custom explanation for every single article; re-using the same 5 reasons over and over works just fine.
3. Strategy and Clinical Eval Sync Issues
Lack of clinical evidence shown to support X claim
This is really a CER comment but it’s still relevant. When you see a lack of evidence comment, there are really two options. You either adjust the claims you are making, or you dig out more literature and present it more clearly.
Oftentimes a poorly structured literature review will lead the auditor to miss or skip over the data you really wanted them to see. Other times you just don’t have enough to back up whichever claim you made. Tread lightly on these responses; arguments about what’s sufficient will generally not get you anywhere with auditors and just waste your revision rounds.
Safety and Performance Objectives were not well defined or supported in the literature review
S&P endpoints are a critical component of a well-focused literature review. In short, they define ‘what are we trying to measure’ for our device claims. If we are claiming to have the highest-flow catheter, then our performance claims should probably relate to flow rates.
“What if my device is simple and doesn’t have any easily measured endpoints?” It can still be done, but often with an inversion of measurements. For example, if your device were a surgical glove (never studied directly for safety or performance, but used in many procedures), your performance endpoints would best be set around breakage rates or a lack of reports of skin irritation.
The research carried out by the manufacturer relates to the technology of the device in general and not to its performance.
Or
The reviewer could not locate any analysis of the content of the identified literature towards demonstrating safety and performance of the subject device.
Similar to the above, your review needs to focus on something measurable that ties into whatever claims you are making (for safety and performance) in the CER.
Literature search strategy is not well defined for device specific SoTA and product search in the literature search protocol
State of the Art search has to be seen as logically separate in your literature review. You don’t need to create a separate report, but the terms and results from a search targeting SoTA should be easily found by the reviewer.
Search term coverage insufficient (EU spellings vs. American spellings, wildcard uses )
This one comes up sometimes, and is fairly picky commentary regarding the syntax and word variations you are using in your search. It is simple to resolve.
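One way to resolve it systematically is to expand each search term into an OR-group covering both its UK and US spellings before building the query. A minimal sketch, with a hypothetical (and deliberately short) spelling-pair table:

```python
# Hypothetical UK/US spelling pairs; a real table would be much longer.
SPELLING_VARIANTS = {
    "anaesthesia": "anesthesia",
    "haemorrhage": "hemorrhage",
    "catheterisation": "catheterization",
}

def expand_terms(terms):
    """Return an OR-joined query string covering both spellings of each term."""
    clauses = []
    for term in terms:
        variants = {term}
        for uk, us in SPELLING_VARIANTS.items():
            if term == uk:
                variants.add(us)
            elif term == us:
                variants.add(uk)
        clauses.append("(" + " OR ".join(sorted(variants)) + ")")
    return " AND ".join(clauses)

print(expand_terms(["haemorrhage", "stent"]))
# → "(haemorrhage OR hemorrhage) AND (stent)"
```

Wildcard truncation (e.g. `catheteri*`) can cover some of these pairs in databases that support it, but spelling it out keeps the strategy transparent to the reviewer regardless of database syntax.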
It is not clear (from the search terms, and search results) how the search supports intended use, indications or patient population for the device.
Your strategy was likely not led by a logical flow from safety/performance endpoint definition → search term selection → appraisal plan. A PICO strategy for search term selection works well to address this, but make sure that your endpoints are as clear as possible too.
There should be a documented justification for excluding non-English articles, since researchers who do not operate in a primarily English speaking country do not usually put the resources into publishing negative data, and Europe is a multilingual market with a large variation in clinical practice, so excluding non-English articles could introduce bias.
Putting ‘non-English’ as an exclusion criterion for an article will always get you in trouble. Just don’t do it on your first submission. Technically you should be considering any and all articles that are relevant to your device regardless of language. This doesn’t always work out in practice (as many don’t have the time to translate every single article), but stating it explicitly will get you questions.
An indication of how the weighting factors have been applied should be provided
How are you ‘grading’ or ‘weighting’ your included articles? Some methodology of scoring system should be clear to the reviewer here. You can read about our process here if you need something to work off of.
Full texts should be provided
Your review needs to include a list of all the used/included citations, and attach all of the full-text PDFs as well. Make sure to rename these properly so it’s easy for the reviewer to find everything.
It is not clear that this search time frame is sufficient for evidence of SOTA and clinical data.
Justification for the time frame of your search is important to address somewhere in your protocol. If you are using a shorter time frame than the lifetime of the device, or one that fits with the SoTA treatment history, make sure a reason is provided.
The search listed does not cover all accessories included in the scope of the review.
Another strategy concern: if your device has accessories you are mentioning in the CER (even if they are basic or simple), there needs to be some reference and coverage in the search terms chosen for the review.
Literature Review author credentials were not provided, nor was justification of their experience, or declaration of interests submitted with the review.
The CV of whoever performed the review must be submitted along with a declaration of interests. The DOI is used to sniff out potential conflicts of interest that might taint the review. You don’t have to have a 3rd party perform the review (though it doesn’t hurt), but if the engineer who created the device is also performing the review… you’ll get additional inquiry from your reviewer.
Are you qualified to be performing the literature review in the eyes of the Notified Body? Check out our article on some of the Experience/CV guidelines that your NB will be looking for.
Conclusion
The literature review is an all too common cause of pain in the CER audit portion of an EU MDR submission, and it doesn’t have to be. If you read through this whitepaper, you probably noticed that the vast majority of feedback can be addressed with some forethought and better organization/presentation of your review results.
We’ve been at this for years (hundreds of successful reviews), and our best advice is to make sure your protocol and process are dialed in. If you can afford a tool like ours, most of the common slip-ups and structures are addressed for you.
Sincerely,
Ethan Drower & Team CiteMed