by Stephen G. Thein, PhD
Dr. Thein is Director/Principal Investigator of Pacific Research Network, Inc., in San Diego, California.

With companion commentary by
Atul R. Mahableshwarkar, MD
Dr. Mahableshwarkar is Senior Medical Director at Takeda Global Research & Development Center, Inc. in Deerfield, Illinois.

Innov Clin Neurosci. 2012;9(2):21–25

Funding: No funding was received for this article.

Financial Disclosures: Dr. Thein has nothing to disclose and no conflicts of interest relevant to the content of this article. Dr. Mahableshwarkar is an employee of Takeda Global Research & Development.

Key words: clinical trial, trial design, site selection, drug development

Abstract: First, parallels are drawn between the conduct of clinical trials and a few events in history that share a management style known as “top-down” management, or a hierarchical decision-making process. The author suggests that this process isolates investigative sites from sponsors and contributes to the failure of clinical trials. Trial design, patient recruitment, site selection, the use of electronic data devices, and enrollment timelines are examined in greater detail. Suggestions for a more open, shared process are offered, with the belief that fewer trials might fail and fewer questions might remain in the case of those that do. Next, in the companion commentary, some of the problems arising in drug development and clinical trials are mentioned, along with a partial listing of solution providers. An outline of the circumstances involved in the decision-making process in drug development is presented, along with some factors leading to decreased signal detection.

It’s only a little ice: A personal view
by Stephen G. Thein, PhD

On a clear night in 1912, the world’s largest passenger ship struck an iceberg and sank, killing more than 1,500 people. The Titanic’s lookout, or observer, had the knowledge that could have changed history.

Seventy-four years later, the space shuttle Challenger exploded, ending the lives of a team of astronauts. Again, an observer—this one an engineer—had the knowledge to prevent the impending disaster.

The two events are eerily similar. Both involved spectacular vehicles designed to cross huge distances, both were monumental feats of scientific engineering, and both involved ice. More importantly, both tragic events were absolutely preventable had the few with important knowledge not been isolated from those in charge. As it was, no one would have listened to Titanic’s observer had he directed that the ship be slowed. Almost three-quarters of a century later, few listened to the engineers on the ground, who said the sealing rings on Challenger’s solid-fuel booster rockets could fail when ice formed on their external surfaces. Sadly, it took yet another epic disaster, this time to the shuttle Columbia, along with a congressional hearing and an audit of the entire space program, for the National Aeronautics and Space Administration (NASA) to understand that something more fundamental was the underlying cause: a hierarchical decision-making process, sometimes called “top-down management,” that is doomed to fail.

In an analogous way, each clinical trial is like a rocket or ship, and our trial sites are like the engineers on the ground or the lookout on the Titanic. If a pharmaceutical company produces a finished protocol without input from knowledgeable researchers in various settings and then seeks (and selects) sites for a trial whose protocol does not provide a single company contact associated with the study, aren’t we repeating the same error that doomed two space shuttles and a luxury liner? Similarly, even if a study is well conceived, if it is conducted in an environment of restricted communication, where site selection and operations are outsourced to an entity that has little knowledge of either the disease or the sites best suited to provide clinical services, the reported presence of metaphorical ice may not reach critical ears. With 25-percent fewer compounds progressing to Phase III development during the years 2005 to 2009, as compared to 2003 to 2007, and an almost 50-percent decrease in overall success rate over the same period,1 we must ensure that more of our “rockets achieve orbit.” Let skilled investigators provide input into protocol design, patient selection criteria, and recruitment issues before a trial is finalized; let knowledgeable sites tell us about remote data capture and entry systems before they are implemented.

How many of us have experienced studies where our first opportunity to communicate our confusion or reservations is at an investigators’ meeting? It is a little late to learn that the population sought does not exist, or exists in very small numbers, and that the efforts needed to recruit that population are not possible with the recruitment dollars provided. Management is unlikely to embrace the notion of attending investigators, experienced with the target population, recognizing that screen failures are likely to exceed what anyone expects, necessitating alterations to contracts at this late date. Frequently, a sponsor’s only acknowledgment of recruitment difficulties is to cap recruitment reimbursement or to establish artificial, unrealistic screen-fail ratios. These top-down management approaches may reflect corporate needs, or needs misperceived by project directors, implemented as what is seen as effective cost containment. The use of outside recruitment companies, when imposed on sites regardless of need or past performance, suggests an equally removed solution to recruitment problems. In such a situation, it is important to be aware that self-validation by a recruiting company (presenting metrics that demonstrate a seeming overabundance of possible patients, enhancing the perceived value of the recruiter and perhaps suggesting investigator ineptitude) carries a real risk of alienating a site from its sponsor.

When disparate entities operate independently and in isolation, we see the results: increasingly complex studies with duplicative or poorly timed procedures, and sites forced to use newly developed, untested electronic data capture systems. Often these problems become evident only after a study is launched, because of the same isolating top-down management process. Recently, a sponsor purchased an entire study data-entry system, functioning in real time on a tablet-type device, in which all site staff and patients were required to enter all data in a sequence the device maker saw as appropriate. Unfortunately, the system was developed in a vacuum by software engineers; no one stopped to consider that no two sites are the same, and neither the sponsor nor the device maker spoke, prior to its release, with those who might actually use the device. Sites and contract research organizations (CROs) are caught off guard by sponsor-mandated remote data entry systems, each often unique and requiring special training, and a few untested in any real-world trial.

Site selection is another example, in the author’s experience, where the combination of a less-than-engaged project leader and a hierarchical management style can be a recipe for disaster. Experienced project managers and medical directors usually have developed relationships within the field that provide a conduit for two-way communication, an invaluable tool for soliciting protocol input and, later, for selecting qualified sites. If these managers and directors isolate themselves from the process, delegating it to less-informed personnel, they are much like the captain of the Titanic, deprived of the information vital to the task. A few years ago, I was asked to consult for a pharmaceutical company on protocol design, CRO selection, and patient recruitment. Imagine my surprise when, a day before the investigators’ meeting, I learned that a CRO had been selected and site selection was complete. When I called the CRO, I was told that “there were so many sites, selection was difficult,” and, most telling, that the young man making the decisions was new to the therapeutic area (Alzheimer’s disease). When I offered that my earlier consulting to the company on this trial, of which he was unaware, might convey some expertise in the area, and that perhaps I might call the sponsoring drug company directly, I was told, “No, our duties include site selection.”

Outsourcing and delegation are not the problem; the problem is how the process is implemented. An engaged director, who remains involved, responsive, and, most importantly, accessible, eliminates many of the barriers to communication usually present when a myriad of entities coalesce to conduct a trial. Two examples are worth noting. One director ran two entire programs for a small company where contract employees were nearly universal. Yet she was always available to discuss patient issues (from eligibility to developing medical findings), ratings (who was qualified to rate, even telephoning about seemingly aberrant ratings), budgets and payments (beginning to end), and contracts. Of note, she also called experienced sites as she developed protocols. The second was someone I met at “big” pharma who subsequently moved on to direct projects in smaller companies, where the use of CROs and contract personnel was mandated. His availability never changed; he was available for any issue, at any time. His answer to the spiraling craziness of rater training was that raters must have clinical skills and experience with the disease spectrum. He required that a rater have experience with the assessment tool(s) being used, and he personally reviewed and approved each rater for his trial, often asking a simple, pointed question: “Is this person qualified to evaluate your mother?” He felt that these raters (and their subsequent ratings) would decide the fate of his trial, mandating his personal involvement; no certifications were required. Site selection and clinical issues received the same hands-on approach.

When trials fail as a result of an unacceptable placebo-response rate, untrained or unskilled investigators are frequently cited as a likely cause. As a result, central nervous system (CNS) trials have witnessed the rise of rater-certification entities, where rater suitability is determined mechanistically. In this model, the art of the interview, learned through education and experience, is replaced by strict conformance to dictated methods and views (not unlike “paint by numbers”). The benefits of rater certification are controversial, but, again, it is a process usually implemented in a top-down manner, without the input of experienced investigators. Most importantly, if we blame these failed trials on rater issues, we are unlikely to consider the implications of poorly chosen sites, isolated investigators, unexpectedly complex trials, and unrealistic timelines for patient recruitment. The additional pressure resulting from artificial screen-fail caps and/or the failure to support site-specific recruitment is also important to consider.

We must use the knowledge that experienced investigators bring to the table when making critical drug-development decisions. Experienced and knowledgeable directors are pharma’s best asset when they actively solicit the input of the key players conducting a trial. These directors should identify what type of center will be conducting the bulk of the study, recognizing that academic trial sites and for-profit, dedicated research centers, for example, often have very different populations, recruitment approaches, staffing, data-entry methods, and even understandings of enrollment timelines. They should also consider that the needs, goals, and even level of investigator involvement differ between types of sites. With this understanding, it is possible to craft a protocol that addresses many of the concerns, needs, and desires of both the pharma company and the typical investigator before the trial is ever implemented. Perhaps this concept could be expanded to include a “panel of investigators” serving as a conduit of communication between the sponsoring entity and investigative sites, aiding in site selection and operations. Whatever the process, the goal is to create a team approach, involving investigators, recruiters, data managers, and perhaps rater certifiers, where needs and goals can be communicated and coordinated before a trial is finalized. In this fashion, we can reduce communication barriers and foster shared goals and methodologies, with the common aim of successfully designing and launching clinical trials. This requires a process in which anyone, like those Challenger engineers or the lookout aboard Titanic, can warn of “ice.”

If we want to learn from our mistakes, let us open channels of communication. Ensure that anyone working on a project has a line of communication enabling suggestions to reach important ears, so that those suggestions are openly communicated to all and, if necessary, prompt us to “change course or scrub the launch.” Imagine the outcome if the Titanic’s lookout had been permitted input to slow the ship in icy waters. Similarly, had the work environment allowed the engineers of the shuttle’s booster rockets to express their concern about ice on the sealing rings that cold morning, the Challenger would not have been lost. Now, imagine the outcome if sites provided input on trial designs and recruitment timelines, so that fewer trials were thwarted by an inability to recruit appropriate patients, and placebo-response rates simply reflected that a drug is no better than placebo, without raising questions about site qualifications. Just imagine…

References
1.    KMR Group. Pharmaceutical Benchmarking Forum. Public release 2008 and 2010. http://kmrgroup.com/ForumsPharma.html. Accessed February 1, 2012.

Water, Water, Everywhere, Nor Any Drop to Drink: A Companion Commentary
by Atul R. Mahableshwarkar, MD

Samuel Taylor Coleridge’s “The Rime of the Ancyent Marinere,”[1] from which the title of my commentary comes, tells the story of a sailor who has returned from a long voyage during which his ship enters uncharted waters and becomes becalmed; even though there is water all around, there is not a drop to drink, because it is all sea water. Looking at the attrition rates of compounds in development, and more so for CNS drugs, it would not be surprising if the observer felt like the ancient mariner: passing through storms, currently becalmed, with news arriving in the form of failed trials, discarded programs, and companies reportedly moving away from developing drugs for CNS diseases. If the ship is thought of as our clinical trials, the water that is everywhere could be the many different companies (small and large) that offer to solve the myriad problems arising in developing drugs, with varying levels of evidence to support their claims.

A partial listing of the problems ranges from the macro (“decreasing research and development productivity”) to the micro (“unrealistic recruitment expectations”); from the conceptual (“increasingly complex studies”) to the practical (“capping recruitment reimbursement”); from blaming the science or the scientists (“where are the breakthrough drugs?”) to blaming the sites and the people conducting the studies (“why should clinical trials be run at for-profit centers?”); and from documented observations in the scientific literature of an increasing placebo response in trials over time[2,3] to personal observations of the same subjects enrolling in the same trial at multiple sites.

Spending time in this world of clinical trial practitioners, you hear almost as many different opinions on how to “fix the problems” as there are perspectives among the people of different backgrounds and functions involved in this enterprise. Given the high failure rate of trials, combined with the challenges of getting trials completed on time and on budget, it is not surprising that a number of methods and solution providers have emerged. In planning a study, the list of all possible vendors who could be utilized to run it would be long. At one end of the spectrum is the large, full-service company with global reach; at the other end are individual consultants for specific services. Between these extremes are boutique companies for designing studies, packaging drugs, setting up systems for randomization of subjects, planning meetings, and providing audiovisual services. There are vendors who train and monitor raters, as well as those who provide the raters themselves. There are specialty evaluators of study quality, trial monitors, specialist recruitment vendors—the list goes on.

A newcomer to the field may be excused if he or she feels somewhat overwhelmed when considering all these options. Not only is coordinating the many functions, providers, and sites of a clinical trial a daunting task, but there is also the legitimate concern that with so many “solutions” to the problems faced by clinical trials, there might emerge new problems because of the solutions themselves—the law of unintended consequences.

Dr. Thein’s commentary, “It’s Only a Little Ice,” very aptly raises such issues. He questions the appropriateness of the decision-making processes in sponsor organizations and highlights the dangers arising from a lack of feedback from experienced investigators at a number of decision and time points in the life of a trial. Finally, he suggests that establishing an open, honest, two-way communication process between sponsors, investigators, and others involved in a trial would yield better results than what we see currently. The concerns raised and the solutions suggested are reasonable, have an inherent logic, and deserve consideration in some depth.

Management approaches to making decisions in the design and conduct of trials by the pharmaceutical industry may seem opaque and capricious from the outside, but things may not be as they seem. “The pharmaceutical industry” is not a monolith in which all companies share similar characteristics and processes. The “industry” includes large companies spanning the globe with tens of thousands of employees, small start-ups in a single location with only a few people, and everything in between. Not only are there many layers in the larger organizations, but they also have greater breadth and depth of expertise available for designing and running studies. Decision making can be expected to vary with the size and complexity of the sponsor and with its internal organization. There may be a few decision makers, or a process involving many people from different functions, such as clinical science, operations, packaging, outsourcing, supply, data management, and contracting, jointly responsible for decisions. Different companies also have different systems of assessing and rewarding performance, which would be expected to lead to differences in how trials are set up, conducted, and monitored. All of this, no doubt, leads to many different processes to be followed when a site conducts studies for different sponsors. How does having to follow different processes for studies being run at the same time affect sites? Does it increase the chance of error; require more time from principal investigators, raters, coordinators, and subjects; or decrease subjects’ willingness to participate in studies or to adhere to protocol requirements? While the answer would appear to be a clear yes, these questions merit study.

The increasing complexity of studies has been postulated to arise as a consequence of multiple entities operating in isolation. Have clinical trials in CNS gotten more complex over time? According to Kenneth A. Getz,[4] in the six years between 1999 and 2005 there was a 65-percent increase in the frequency of procedures across all therapeutic areas. For depression trials, there was an 80-percent increase between 2006 and 2011 in the time spent completing procedures for an interim study visit.[5] Similar increases were seen in studies of schizophrenia, driven mainly by an increase in the number of unique assessments (from 4 in 2006 to 11.3 in 2011). Over the same period, the length of the informed consent form increased by almost 400 percent. This supports the statement that studies are getting more complex, but are disparate entities designing studies independently the only reason? The need to answer different questions from regulators, insurance companies, and other payors may be another.[6] With increasingly complex studies requiring greater time commitments from subjects, are the subjects enrolling in recent studies similar to those of previous decades? The broader question may not be why studies are getting more complex, but whether they should be simplified at the cost of not getting the needed answers.

The establishment of artificial screen-fail ratios, capping of recruitment reimbursement, unrealistic timelines, poorly designed electronic data capture and other systems, selection of less-than-ideal sites, issues in rater certification, poor sponsor–site communication, and a failure to incorporate the knowledge of experienced investigators are pointed out as contributors to the failure of clinical trials. While some of these may merely make trials inefficient, increasing the time or budget needed to complete the study, selecting less-than-ideal sites or raters may lead to outright study failure.

While study design, set-up, and conduct do have a significant impact on the ability of a trial to separate drug from placebo, the examples of the Titanic and Challenger point out the dangers of decision makers not having access to feedback. When failure can have unacceptable consequences, multiple, redundant systems may need to be in place so that even backup systems have backups of their own. Clinical trials, as currently set up, do have measures for assessing quality and data integrity in an ongoing manner. The suggestion to utilize the experience of investigators should be viewed as another feedback loop to improve the ability of our studies to detect signals of efficacy.

References
1.    Coleridge ST. The Rime of the Ancyent Marinere. 1798.
2.    Khan A, Bhat A, Kolts R, et al. Why has the antidepressant-placebo difference in antidepressant clinical trials diminished over the past three decades? CNS Neurosci Therapeut. 2010;16(4):217–226.
3.    Mallinckrodt CH, Zhang L, Prucka WR, Millen BA. Signal detection and placebo response in schizophrenia: parallels with depression. Psychopharmacol Bull. 2010;43(1):53–72.
4.    Getz KA. The heavy burden of protocol design. Applied Clinical Trials. May 1, 2008.
5.    Wilcox C. Protocol complexity at the site level: from the sites’ perspective. Presented at CNS Summit; Nov 17–20, 2011; Boca Raton, Florida.
6.    Dunn J. Increased complexity in clinical trial protocols. Presented at CNS Summit; Nov 17–20, 2011; Boca Raton, Florida.