P003 Clinical Guidelines: The Supply Chain to Performance Measurement

Helen Burstin
2013 BMJ Quality and Safety  
Systematic reviews have become a sine qua non in guideline development. The reasons for this are obvious: recommendations must be based on the best available evidence, and systematic reviews allow transparent methods and processes for evaluating that evidence. However, best practice for collaboration, and for implementing the laudable goal of basing guideline recommendations on systematic reviews, has not been defined. For example, better coordination between guideline developers and systematic review authors is required to use resources efficiently (e.g. to avoid unnecessary duplication of effort, delays and non-credible evidence reviews). The Cochrane Collaboration and the Guidelines International Network (GIN) are looking for ways to collaborate. Attractive models include the development of a database of health care questions for which systematic reviews exist, are planned or are missing. Such a database would allow guideline developers and systematic reviewers to work in synergy and ensure that evidence is synthesised when needed and used when recommendations are formulated. An interactive database of questions, linked to summaries of evidence and systematic reviews, would support guideline developers and systematic reviewers in working together. This collaboration is not simple; it requires the involvement of both review authors and guideline developers early in the systematic review process. This presentation will review challenges and proposed solutions based on existing models, examples and ideas to advance the field, and will report on work from the GIN Partnership Taskforce, the Cochrane Applicability and Recommendations Methods Group, other Cochrane entities and the GRADE Working Group to support these solutions.

Evidence-based policy making calls for the use of the best available evidence to support decision making. Traditional hierarchies of evidence interpret this as a focus on RCTs, with most placing systematic reviews at the top. The proliferation of systematic reviews has now led to 'tertiary analysis': reviews of multiple sources of secondary analysis. However, experienced users recognise that there are good and bad systematic reviews. A systematic review may not answer the exact policy question being posed, and it is only as good as the underpinning evidence base. Best practice would suggest that a new systematic review be undertaken for each policy decision, yet there are often time constraints, funding pressures and a limited skill set; prioritisation is therefore necessary. On a strategic level, the use of average data derived from RCTs to make decisions about individual patients is being questioned. There are multiple stratified and personalised medicine initiatives backed by funding for methods and infrastructure to support the use of observational data for comparative effectiveness research, yet little attention has been given to how to integrate these different types of evidence to make decisions. Dr Garner has provided technical advice to NICE for the past 12 years and, in addition to consuming systematic reviews for policy decisions, is a writer and editor of systematic reviews. Dr Garner will provide insights into the policy maker's dilemma and the scientific arguments underpinning the debate. She will share a number of initiatives at the policy level and put forward potential options for how the evidence-based medicine community can address the policy maker's dilemma.

Ideally, performance measures are based on high quality evidence regarding the interventions and services that will achieve desired outcomes and reflect high quality care. As guidelines and performance measures are increasingly used for public reporting and payment, the need for a strong evidence base has become more urgent and compelling. To achieve the intended positive effects of quality measurement and minimise potential unintended consequences, measures should be based on the best evidence for the focus of measurement.
While outcome measurement is increasingly preferred, many measures continue to focus on process steps distal from the desired outcome, even when there is evidence for a more proximal intervention or intermediate outcome that can be linked to the desired outcome. Guidelines are a critical step in the supply chain to performance measures and, ultimately, to evidence-based improvement processes. The quality of the guideline and of its evidence review has significant downstream implications for measure development. The complexity of guidelines may also limit the ability to translate them into feasible performance measures, and the degree of specificity in the guideline has implications for the precision of the measure specifications. Measurement is impeded by a lack of specificity in guidelines, such as imprecise "high risk" population designations and insufficient information regarding periodicity. Though potentially useful for clinical care, extensive use of exceptions in guidelines makes them difficult to operationalise into measures. To ensure that guidelines can be readily adapted for performance measurement, greater communication and collaboration is needed between the guideline and measurement communities. Ideally, guidelines would be developed with experts in performance measurement and clinical decision support at the table, to ensure that evidence synthesis and guidelines can effectively serve the needs of measurement and improvement.
doi:10.1136/bmjqs-2013-002293.3