Flexible, Adaptive or Attractive Clinical Trial Design

Shein-Chung Chow
2012 Drug Designing: Open Access  
In recent years, the use of adaptive design methods based on accrued data has become very popular in clinical research and development because of their flexibility and efficiency in identifying the clinical benefit of the test drug or therapy under investigation. The use of adaptive design methods in clinical trials was motivated by the Critical Path Initiative, launched by the United States Food and Drug Administration (FDA) in the early 2000s. The Critical Path Initiative is intended to assist sponsors in identifying the
scientific challenges underlying the medical product pipeline problems, because it was recognized that increased spending on biomedical research has not been matched by an increased success rate in pharmaceutical (clinical) development. In 2006, the FDA released a Critical Path Opportunities List that outlines 76 initial projects (in six broad topic areas) to bridge the gap between the quick pace of new biomedical discoveries and the slower pace at which those discoveries are developed into therapies. Among the 76 initial projects, the FDA calls for advancing innovative trial designs, especially those that use prior experience or accumulated information. Many researchers interpret this as encouragement to use adaptive design methods or Bayesian approaches in clinical trials. Chow et al. [1] define an adaptive design as a design that allows adaptations to trial and/or statistical procedures after trial initiation without undermining the validity and integrity of the trial. Alternatively, with emphasis on prospectively planned ("by design") adaptations only, rather than ad hoc adaptations, the Pharmaceutical Research and Manufacturers of America (PhRMA) Working Group on Adaptive Design refers to an adaptive design as a clinical trial design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial [2].
In practice, depending upon the adaptations employed, Chow and Chang [3] classify adaptive designs into the following types: (i) adaptive randomization design, (ii) adaptive group sequential design, (iii) flexible sample size re-estimation design, (iv) drop-the-loser design, (v) adaptive dose escalation design, (vi) biomarker-adaptive design, (vii) adaptive treatment-switching design, (viii) hypothesis-adaptive design, (ix) adaptive seamless phase I/II (or II/III) trial design, and (x) multiple adaptive design. Detailed information regarding these designs can be found in Chow and Chang [3]. In many cases, an adaptive design is also considered a flexible design [4,5] or an attractive design [6]. In February 2010, the FDA circulated a draft guidance on adaptive design clinical trials. The draft guidance defines an adaptive design as a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of (usually interim) data from subjects in the study [7]. The FDA classifies adaptive designs into well-understood and less well-understood designs. A well-understood design is the typical group sequential design, which has been employed in clinical research for years. Less well-understood designs include adaptive dose-finding and two-stage phase I/II (or II/III) seamless adaptive designs. Many scientific issues surrounding the less well-understood designs are raised in the draft guidance without recommendations for resolution. This raises the question of whether the use of adaptive design methods in clinical trials (especially the less well-understood designs) is ready for implementation in practice.
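To illustrate why the typical group sequential design is considered well understood: its interim stopping boundaries are chosen up front so that the overall type I error is preserved despite repeated looks at the data. The sketch below (not from this editorial; the constants 2.797 and 1.977 are the standard two-look O'Brien-Fleming critical values for a two-sided 0.05 test) checks this by Monte Carlo simulation under the null hypothesis.

```python
import numpy as np

def estimate_type1_error(bounds=(2.797, 1.977), n_sims=500_000, seed=0):
    """Estimate P(reject H0 at either of two looks) when the true effect is zero.

    Under H0, the standardized test statistic at the interim look (half the
    data) is N(0, 1); the final-look statistic pools it with an independent
    second-half increment, giving the correct correlation of sqrt(1/2).
    """
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_sims)        # interim-look statistic
    w = rng.standard_normal(n_sims)         # independent second-half increment
    z2 = (z1 + w) / np.sqrt(2.0)            # final-look (pooled) statistic
    reject = (np.abs(z1) > bounds[0]) | (np.abs(z2) > bounds[1])
    return reject.mean()

alpha_hat = estimate_type1_error()                  # close to 0.05
naive = estimate_type1_error(bounds=(1.96, 1.96))   # roughly 0.08: inflated
```

The second call shows what goes wrong without the adjustment: testing at the unadjusted 1.96 level at both looks inflates the overall error well above the nominal 0.05, which is exactly the multiplicity problem the group sequential boundaries are designed to control.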
As indicated by Chow and Corey [8], possible benefits of adaptive design methods in clinical trials include that (i) they allow the investigator to correct wrong assumptions made at the beginning of the trial, (ii) they help to select the most promising option early, (iii) they make use of emerging information external to the trial, (iv) they give the investigator the opportunity to react earlier to surprises (either positive or negative), and (v) they may shorten the development time and consequently speed up the development process. Adaptive design methods give the investigator a second chance to modify or re-design the trial after seeing data from the trial itself at interim, or external data, as recommended by the independent data monitoring committee (IDMC) of the study. While enjoying this flexibility and these possible benefits, it should be noted that more flexibility can lead to a less well-understood design, as described in the FDA draft guidance. A less well-understood adaptive design is often more flexible and yet more complicated. Under a complicated, less well-understood adaptive design, valid statistical inference is often difficult, if not impossible, to obtain, although valid statistical inferences for some less well-understood designs are available in the literature. Both Chow et al. [1] and Gallo et al. [2] indicated that adaptive design methods must not undermine the validity and integrity of the trial. Validity refers to (i) minimizing the operational biases that may be introduced when applying adaptations to the trial, (ii) correct or valid statistical inference, and (iii) providing convincing (e.g., accurate and reliable) results to the broader scientific community.
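A concrete instance of benefit (i), correcting a wrong planning assumption, is sample size re-estimation. The sketch below (the effect size and standard deviations are hypothetical numbers, not taken from this editorial) uses the standard normal-approximation formula for a two-arm comparison of means: the trial is sized under an assumed variability, and when interim data suggest the variability was underestimated, the per-arm sample size is recomputed with the same formula.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample z-test of a mean difference.

    Standard formula: n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2,
    rounded up to the next whole subject.
    """
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Planning assumption (hypothetical): detect delta = 5 assuming sigma = 10.
n_planned = n_per_arm(delta=5.0, sigma=10.0)   # 63 per arm

# Interim data suggest sigma is closer to 14, so the trial is under-powered
# as planned; the same formula gives the revised per-arm size.
n_revised = n_per_arm(delta=5.0, sigma=14.0)   # 124 per arm
```

In an actual trial, this adaptation would be prospectively specified in the protocol and, if it uses unblinded interim data, carried out through the IDMC with appropriate control of the overall type I error, consistent with the validity and integrity requirements discussed above.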
Integrity, on the other hand, refers to (i) minimizing operational biases, (ii) maintaining data confidentiality, and (iii) assuring consistency during the conduct of the trial, especially when a multiple-stage adaptive design is used. From a statistical point of view, major (or significant) adaptations (e.g., modifications or changes) to trial and/or statistical procedures could (i) introduce bias/variation into data collection, (ii) result in a shift in the location and scale of the target patient population, and (iii) lead to inconsistency between the hypotheses to be tested and the corresponding statistical tests. Chow [9] classified sources of bias/variation into four categories, namely (i) expected and controllable, such as changes in laboratory testing procedures and/or diagnostic procedures, (ii) expected but not controllable, such as changes in study dose and/or treatment duration, and (iii) unexpected but controllable, such as patient non-
doi:10.4172/2169-0138.1000e104