
Sep 15

Minding the gaps in tuberculosis research – what Science Translational Medicine said

Today I received the link to the recent article (Minding the gaps in tuberculosis research) with Joel Freundlich at Rutgers and Eric Nuermberger at Johns Hopkins. The article is free for 50 days from today, which I think is a very useful idea by Elsevier: making the article available for a while to get some visibility and downloads and to raise awareness of the journal Drug Discovery Today. As an editorial board member I think this is one of the smartest things I have seen Elsevier do in a long time, and it certainly goes some way toward offsetting the criticism these businesses face.

Anyway, once again this paper has a bit of a story. We submitted initially to Science Translational Medicine, thinking it was the perfect home for the piece, but that was not to be.

Here are the anonymous reviews:

MSID: 3009006  Title: Minding The Gaps in Tuberculosis Research
Author Name: Ekins, Sean
The authors present a convincing argument for the detailed retrospective analysis of accumulated in vivo data to inform the selection of new hits for development as anti-tuberculosis drugs. They describe analysis of a
curated database that collates the available data and this is available through various online resources. I would endorse the views expressed in this focus piece, but have one minor suggestion and one more substantial:
Minor – the authors, in the abstract, refer to a 30 year valley of death, then in the conclusion refer to a 40 year gap in approvals between rifampicin and bedaquiline. These figures are measures of different things, however, I
would suggest that in a Focus piece that will be oft quoted it might be tidier to focus on one or the other.

Major: the authors indicate that the debate about which mouse model or indeed which animal should be used is outside the scope of this review. They then, without presenting evidence, make the statement that ‘the
correlation between treatment outcomes in mice and infected humans cannot be ignored’. I would suggest that this particular paragraph assigns a confidence in the mouse model that does not reflect the state of the debate
in the community. As there is not space to discuss these points in full, I would suggest deleting the sentences ‘Outside the scope…..be ignored’ but replacing them with a sentence that posits the database, as presented
here using the mouse model, as an exemplar of how the accumulated animal model data as a whole could be used to inform hit selection.

Review
General comments:
This commentary describes the history of anti-TB drug discovery and a 30-year gap that occurred in research to identify new drugs, as well as the gap between in vitro and in vivo testing activities. The authors conducted a
literature review to quantitatively assess these gaps. These gaps are already well recognized in the field, but the main contribution of this commentary is that these gaps, based on the methodology the authors used to
review the literature, are quantitatively depicted. Unfortunately, the discussion the authors provide to explain the gaps is largely descriptive and not very compelling. Some obvious explanations are not provided.

Specific comments:
1. This commentary is about anti-TB drug discovery. The title is somewhat misleading—“Mining the gaps in tuberculosis research”, as there are many other major gaps in TB research other than drug discovery. The title
should be changed to reflect the focus of this paper.
2. The most obvious explanation for the time gap in research is the AIDS epidemic. If the AIDS epidemic had not occurred, TB would not have received the attention it finally did in the mid 1990s. For example, the Global
Fund to Fight AIDS, Tuberculosis and Malaria did not begin its operations until 2002, which corresponds to the period when the increase in in vitro testing began (as shown in Fig 1). The discussion would have been more
illuminating if the authors contrasted their findings to the level of TB research funding activities that occurred during the same time period.
3. Another potential explanation for the time gap in research is the time it took for multidrug-resistant TB prevalence to emerge and become high enough to receive attention by developed countries.
4. The gap between in vitro and in vivo testing activity could also have many explanations, which were not suggested in this commentary. It is not surprising that in vivo assay activity peaked in the 1940s-60s since no
anti-TB drugs were available until the late 1940s and TB was highly prevalent during this period even in the US and other Western countries. It is not surprising that the activity decreased after good drugs were discovered
and the prevalence of the disease decreased. In fact, CDC published a report in 1989 A Strategic Plan for the Elimination of Tuberculosis in the United States, which proposed to eliminate TB by 2010. There is no incentive
to develop new drugs for a disease that was slated to be eliminated.
5. The wide gap between in vitro assay activity and in vivo assay activity after 2002 could be explained by many other reasons including the need to identify new targets, since most existing drugs share similar targets and are
subject to rapid selection of resistance. Hits and leads that targeted the known targets would not undergo further tests.
6. In the Abstract, the authors state, “…. a paradigm shift is required in drug discovery and development to meet the global health need for new therapies.” Without an attempt to explain the possible causes of these gaps, the
authors cannot make any suggestions about what this “paradigm shift” is going to be.
7. The purpose of the supplemental table with the large list of hits and leads is unclear.
In summary, this commentary would have been more meaningful and contributory if the authors had included a more detailed discussion of why these gaps may have occurred.

Review
This is a very short review/commentary analyzing the history of drug development against tuberculosis. The paper shows that drug discovery was performed mostly in the 1950s, when many drugs were tested in vivo.
The second wave of discovery is recent, in the 1990s, when millions of compounds were tested in vitro, but not so many were tested in vivo.
Overall the paper shows that although there is great need for anti-tuberculosis drugs, the present efforts are not going to deliver and there is a need for a change in the approach. There is no suggestion on what change should happen.

——

It appears they had sent out an earlier version of the manuscript, before asking us to make some changes requested by the editor.

So it went out for review again (our responses follow each reviewer comment below), but to no avail.

************************
REVIEWS
************************
Reviewer 1
 
The manuscript describes a problem and a potential solution, at least a conceptual one. The message is provocative and deserves to be published; however, I think there are a number of assumptions made that weaken the overall argument, and changes to the presentation would strengthen the message the authors are trying to convey. I highly recommend the following actions be taken before publishing.
1) Minor point on personal preference – I know this term is becoming more common, but I think the term ‘valley of death’ referring to research and/or drug discovery, especially in relation to infectious diseases, is overly dramatic and detracts from the good messages contained in the draft.
Our response: We do not agree – actually our Figure 1 shows the valley of death quite nicely.
2) There are no references listed for the supplemental Table 1 in the version I received.
Our response: The references were at the bottom of Supplemental Table 1.

3) A number of assumptions or extensions of information are made throughout the draft. For example, on p. 3, “…described 66 hits (Supplemental Table 1) under consideration for advancement”. Just because something is published does not mean it was ‘considered for advancement’, and assumptions like this one I feel overly inflate the numbers used in the draft throughout. I fully agree with the concept the authors propose, but I think it would be just as compelling without the ‘estimations’.
Our response: I disagree, as any analogs made are under consideration for advancement; it just depends on their respective profiles.

4) I agree that there is no infrastructure to understand the overall pipeline, and this is a laudable goal, but I think the focus should be on the pipeline, or at least things of potential interest, not just a catalog of everything published with Mtb activity.
Our response: How else would we learn from what has been done before? The value of curation cannot be overstated; no one has ever done this before us.

5) The assertion that ‘the next logical step would be to progress these and other lead compounds into an in vivo efficacy model’ is likely flawed. I fully support the idea of cataloging interesting compounds from the literature and finding a way to move them forward, especially in an area like TB, but the potency of the compounds in Table 1 ranges from 0.02 to 32 uM. Surely it would be irresponsible to progress all, or likely even many, of these into in vivo experiments. In fact, a number of the more potent compounds in Table 1 already seem to have in vivo data.
Our response: We could point out this range of in vitro efficacies and comment on the frequent disconnect between in vitro activity and in vivo efficacy (e.g., pyrazinamide).

6) The curation of molecules from the literature might be a useful exercise, but it is difficult to ascertain due to the limited information. Minimally, the final set used to generate Figure 1 should be available as Supplemental Information (there are references to it being available on public resources, but no links are given).
Our response: This can be done and is minor. As we stated in the manuscript, the data set is already available in multiple locations, e.g. on figshare (“Mouse TB in vivo data over 70 years”); we have even tweeted it previously, so it is very publicly accessible!

7) The authors compare the discovery-to-approval time of ‘a couple of years’ in the 1940s-60s vs. 16 years for bedaquiline – I would think this delay could have had much more to do with the more rigorous process overall than with a lack of push for mouse in vivo efficacy.
Our response: They missed the point we are making about asking the critical question of in vivo efficacy sooner rather than later!

8) The comparison of the number of compounds tested in the 1950s vs. the 2000-2010 period assumes that the 2 million compounds actually cover significantly more ‘diversity’ than the ‘thousands’ in the 1950s. I think it is more likely the reverse. Regardless, it isn’t just a numbers game.
Our response: I disagree, as in many ways it is; we do not know what an in vivo active looks like, and thus the more shots on goal the better. The fact of the matter is that we tested more compounds in vivo in the past than we do now as a proportion of in vitro screening.

9) The statement ‘Amongst the 1000s of compounds…..how many would be active in vivo and….progress to the clinic’. This all depends on the profiles of the compounds, doesn’t it? Using a typical high-throughput screen with a 2% hit rate (which seems rather high) as a model, this might suggest that of 10,000 molecules tested only 200 would be ‘hits’, and perhaps only a handful potent enough to even consider progressing to in vivo studies.
Our response: We are not at all sure the reviewer is even making a point here that’s relevant to what we are saying.

10) I don’t see exactly what the algorithm proposed to ‘prioritize’ in vitro active compounds will do beyond what the MIC data can do itself.
Our response: This misses our point about machine learning and learning from the past in general. Using a model that learns from the past can help filter the in vitro hits so that the compounds tested in vivo are enriched in likely actives.

11) Figure 1 could be better presented (axes hard to read, etc.).
Our response: This figure is perfectly readable in the file we provided.
************************
Reviewer 2
 
The authors propose that we should be learning from the historic in vivo mouse model validation data to predict whether drugs active in vitro will work in vivo. They seem to conflate two issues – gaps in which drug testing is low, and improving the efficacy of testing. It is not clear what we can really learn from this.
 
1. The final sentence of the abstract says, “This suggests a rethink of approaches is required…”. This is not much of a conclusion for a high-profile paper.
Our response: It is the conclusion to the abstract of a very short opinion piece. We actually do propose a rethink and have many recommendations in the final paragraphs. Perhaps the reviewer should have read those.
 
2. In the manuscript, the authors suggest that “…it may be time to rebalance resources dedicated to in vitro and in vivo efficacy assays of candidate antitubercular compounds”. This is a broad statement without a defined idea of how to do this. They also “propose that the data collated in this study could be used to build machine learning models…”. How will the factors which predict success or failure be studied?
Our response: Actually we are proposing more in vivo testing, as in the past, combined with learning from historic data and using models built on those data. The models will learn from actives and inactives; perhaps the reviewer should have looked at the reference we provided: S. Ekins, J. S. Freundlich, J. V. Hobrath, E. L. White, R. C. Reynolds, Pharm Res 31, 414 (2014). (A minimal sketch of what such a model might look like appears after these reviews.)
 
3. The authors want to “improve the efficiency of the discovery process by shortening or removing the gaps…”. It is not clear that the gaps they discuss, the years where there have been lags in discovery, are related to the efficiency of discovery or simply to a lack of investigation.
Our response: We think we can do both by learning from the past and doing more in vivo screening after using models to prioritize the in vitro hits.
 
4. I was confused by the sentence “One optimistic observation from these mouse data is that the fraction of in vivo actives tested is higher in the current as compared to the past”. I was not sure if this was related to my observation that the percentage of active vs. total compounds tested (interpolated from the authors’ graphs) has improved (see below). The gaps were periods where there was a lack of testing. Is improving efficiency really necessary? These numbers look pretty good. The authors do talk about predicting efficacy, but again, the discussion is very vague.
 
Percent of drugs which were efficacious in the mouse:
1950: 43%   1960: 37%   1970: 50%   1980: 60%   1990: 57%   2010: 56%
 
Our response: As we state in the manuscript, this may also reflect a publication bias in recent years. We have noticed that publications in the past often reported many active and inactive in vivo compounds in a single study, whereas a typical paper now usually reports only one or two in vivo actives. This could be because inactives are excluded to make the paper look better, because fewer compounds are being tested, or because researchers really are much better at picking in vivo actives. As no one has previously published this observation for TB, we are also breaking new ground.
 
5.  The authors do not address combinations of drugs which are often used in TB.  There may also be interactions of drugs with host defense mechanisms.
 
Our response: We clearly did not have space to address every facet of TB research over 70 years; other reviews have done that, and we had a maximum of 10 references anyway.
 
************************
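
A couple of my responses above mention building machine learning models from the historic mouse in vivo data to prioritize in vitro hits, so here is a minimal sketch of the kind of workflow I mean. To be clear, this is an illustration, not our published Bayesian models (for those see S. Ekins, J. S. Freundlich, J. V. Hobrath, E. L. White, R. C. Reynolds, Pharm Res 31, 414 (2014)); the file and column names below are hypothetical placeholders. It simply trains a classifier on fingerprints of the curated in vivo actives and inactives and then ranks new in vitro hits by predicted probability of in vivo activity.

```python
# Minimal sketch: prioritize in vitro hits with a model trained on historic
# mouse in vivo outcomes. File names, column names and the RandomForest +
# Morgan fingerprint choices are illustrative assumptions, not our published
# workflow. Assumes all SMILES strings parse cleanly.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier


def morgan_fingerprints(smiles_list, n_bits=2048):
    """Return an (n_compounds, n_bits) array of Morgan (ECFP4-like) fingerprints."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        fps.append(np.array(fp))
    return np.vstack(fps)


# Curated historic data: one row per compound, with a binary mouse outcome.
train = pd.read_csv("mouse_tb_in_vivo_70_years.csv")   # hypothetical file
X_train = morgan_fingerprints(train["smiles"])
y_train = train["in_vivo_active"]                       # 1 = active, 0 = inactive

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# New in vitro hits from a screen: score and rank them so that limited mouse
# capacity is spent on the compounds most likely to be active in vivo.
hits = pd.read_csv("in_vitro_hits.csv")                 # hypothetical file
hits["p_in_vivo_active"] = model.predict_proba(morgan_fingerprints(hits["smiles"]))[:, 1]
print(hits.sort_values("p_in_vivo_active", ascending=False).head(20))
```

The point is not the particular learner; it is that decades of curated in vivo results can be used to enrich the small set of compounds we can afford to test in mice.
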
Naturally, I am in no rush to try STM again.
