Underwhelming big pharma response to Ebola – I urge more collaboration

A few weeks ago I asked where the big pharma companies were, since they had been surprisingly quiet during the Ebola epidemic. Today I see a press release announcing that J&J will have their vaccine next year and GSK later this year. Possibly other companies will be involved at the WHO’s urging. What took so long? This has been all over the press for months, and these companies are only now waking from their stupor or pulling themselves away from playing Angry Birds or the “who can we buy out next” game! And why, oh why, is there no mention of trying small molecule drugs? Where are all the other big pharmas – Merck, Pfizer, Sanofi, Novartis etc.?

Let’s not forget that in the Second World War many drug companies ‘collaborated’ to manufacture penicillin – what was the commercial gain then, if any? What is the difference between then and now? The big pharma companies have largely lost their direction, perhaps even their moral compass in some senses. Where are the visionary leaders? I can bet that if Paul Janssen were still around he would have been all over Ebola.

There are patients dying in Africa and elsewhere. Get out of those meetings and do something: send your drugs to USAMRIID, get them tested against Ebola, and then, if they are useful, crank up the manufacturing and ship them out. Now there is a strategy. Stop talking about doing something with the WHO and actually DO SOMETHING.


In response to the NIH director

I just found a post by Dr. Anthony Fauci and Dr. Francis Collins on Ebola. So I have added my 2 cents – let’s see if it gets accepted and draws a response from some moderator (tax dollars well spent – who would want the job of responding to folks like me?). There are plenty of smart people in the world, and I am not seeing anyone use the great human assets that are out there to identify treatments for Ebola.

Dear Drs. Fauci and Collins,

A full 2 weeks before this post I searched the NIH’s own PubMed (anyone can do this) and found a study from last year, funded by DTRA, that screened FDA approved drugs (http://www.ncbi.nlm.nih.gov/pubmed/23577127). Several drugs (amodiaquine, chloroquine, etc.) were found to be active, with promising data in mouse models. A doctor in Haiti then alerted me to another paper with additional compounds (http://www.ncbi.nlm.nih.gov/pubmed/23785035). It appears there is no shortage of FDA approved drugs with activity in vitro and in vivo in mouse models; there is even a common pharmacophore, which I put in the public domain (http://www.collabchem.com/2014/10/02/a-pharmacophore-for-compounds-active-against-ebola/).

It would not take much for any of these drugs to be explored further. I am amazed that all the discussion is about a vaccine / biologics when there have been considerable efforts to fund screens of small molecule drugs, and they have been largely ignored. I am also saddened by the lack of a big pharma response (http://www.collabchem.com/2014/10/08/where-is-the-big-pharma-knight-riding-in-to-slay-ebola/).

We have known about the disease for 40 years and yet we did not have a plan for when it went beyond one village in Africa? That is very surprising to me as a scientist.
There are many questions that someone should answer. Why was the funding for small molecule screening and exploration of the hits/leads stopped? Why was there no exhaustive effort to screen every FDA approved drug? Why are the existing drugs already on the shelf in Africa not being used? Why is nobody looking at those who are not getting Ebola – is it because they are already taking a medicine that is protecting them?

Food for thought.



How the experiment may impact the data

Here’s one to file under “I am still trying to get my head around it”.

Back in April at the CDD community meeting, Christopher (Chris) Lipinski presented some slides looking at kinase selectivity and its relationship with ligand efficiency. There seemed to be a general trend that more selective compounds had better ligand efficiency. My colleagues at CDD have been digging deeper into this and will present a webinar on Wed, Oct 22 from 2-3pm ET, at which Chris and Matt Soellner will debate Entropic and Enthalpic Propensities Inherent in SBDD and HTS.

Now, my involvement has been pretty limited: thinking of some interesting datasets to compare. Obviously my personal bias is towards neglected diseases and anything in the public databases. For one, I was interested to see how the >1000 whole cell Mycobacterium tuberculosis (Mtb) hits coming out of high throughput screens compared with ligands from structure based drug design (SBDD) studies for which there are examples in the PDB. A measure of enthalpy suggested it was higher for the SBDD hits than for the HTS hits. Because several datasets of antimalarial HTS hits have been released, we can do the same comparison there with SBDD hits.
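
As an aside, ligand efficiency itself is trivial to compute if you have a pIC50 and a heavy atom count. Here is a minimal sketch using the common 1.37·pIC50/HA approximation (1.37 ≈ 2.303·R·T in kcal/mol at ~300 K); the two example compounds below are invented purely for illustration.

```python
def ligand_efficiency(pic50, heavy_atoms):
    """Ligand efficiency in kcal/mol per heavy atom.

    Uses the common approximation LE = 1.37 * pIC50 / HA.
    """
    return 1.37 * pic50 / heavy_atoms

# Hypothetical examples: a small, potent SBDD ligand vs a larger HTS hit
sbdd_le = ligand_efficiency(pic50=7.0, heavy_atoms=22)
hts_le = ligand_efficiency(pic50=6.0, heavy_atoms=35)
print(round(sbdd_le, 2), round(hts_le, 2))
```

Even with the weaker potency, a smaller ligand can come out well ahead on this metric, which is the sort of trend the kinase slides were pointing at.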

Without trying to give too much away, I would say some of the slides I have previewed were very interesting. Of course most will want to hear about the kinase data, but let’s think about what other questions we could ask. SBDD by its nature tries to optimize the fit of compounds into a target; simplistically, it is trying to get good interactions. Phenotypic HTS is not bothered about that; the key determinants of activity are getting the molecule into the cell and then shutting down some target(s). So hydrophobicity predominantly drives whole cell activity for Mtb – as we see, many of the hits have a higher calculated logP (using whatever method you decide) – although other properties may also be key.
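
Making that logP comparison yourself only takes a few lines once you have calculated logP values for each hit set. The numbers below are invented for illustration; in practice they would come from whatever logP method you prefer (any cheminformatics toolkit will do).

```python
from statistics import mean

# Hypothetical calculated logP values for two hit sets (made-up numbers)
hts_logp = [3.8, 4.2, 5.1, 3.5, 4.7]    # whole-cell Mtb HTS hits
sbdd_logp = [1.9, 2.4, 1.2, 2.8, 2.1]   # SBDD-derived ligands

delta = mean(hts_logp) - mean(sbdd_logp)
print(f"HTS mean logP {mean(hts_logp):.2f} vs SBDD {mean(sbdd_logp):.2f} (delta {delta:.2f})")
```

A shift of a couple of log units between the two sets would be consistent with hydrophobicity driving the whole cell hits.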

So, fundamentally, depending on what kind of experimental approach we use to get Mtb active compounds, we are biasing towards compounds with different physicochemical properties. We have come full circle. Target based approaches to antibacterial drug discovery have been a failure because, one, they found few hits and, two, the few hits did not have whole cell activity. It seems obvious now, but target based drug discovery is really finding a needle in a haystack, trying to get very specific interactions, while whole cell approaches ‘just’ need to get the compound in and perhaps have OK-ish affinity for one or more targets. Maybe the latter represents more of a complete system effect (more targets to interact with vs a single target).

So what does this say about our efforts to use computational approaches to find compounds active against Mtb? Will they also have some of the same issues inherent in HTS and SBDD? For example, docking molecules into a crystal structure as part of SBDD is going to drive towards very specific interactions, and if the method and scoring functions are poor then the hit rate will be very low. Machine learning methods are going to learn from whatever mass of data you give them. So if you feed in whole cell data, all you are going to do is basically replicate the physicochemical properties that allow you to get compounds into Mtb and hit a whole array of potential targets. Is there some middle ground here, a hybrid approach?

Perhaps running compounds through whole cell assays and feeding just those hits into SBDD as starting points? Then feeding the resulting SBDD designs/hits back into whole cell assays to ensure there is a balance between specificity and the ability to get into the cell. Perhaps this iterative approach would be more efficient computationally as a pipeline, where the known whole cell hits are docked against as many Mtb structures in the PDB as possible, and those with good scores serve as starting points for design.
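
The iterative loop above could be sketched roughly as follows. Both scoring functions here are trivial placeholders (a real pipeline would call a docking program and read out a whole cell assay), and all the compound records are invented.

```python
# Toy sketch of the whole-cell -> docking -> prioritization loop.
# dock_score and whole_cell_active are stand-ins for real tools.

def dock_score(compound):
    # Placeholder: pretend more negative is better, derived from a
    # made-up count of polar contacts
    return compound["polar_contacts"] * -1.0

def whole_cell_active(compound, mic_cutoff=10.0):
    # Placeholder: active if the (made-up) MIC is below the cutoff (uM)
    return compound["mic_um"] < mic_cutoff

def prioritize(compounds, dock_cutoff=-2.0):
    """Keep whole-cell actives, then rank the survivors by docking score."""
    actives = [c for c in compounds if whole_cell_active(c)]
    docked = [c for c in actives if dock_score(c) <= dock_cutoff]
    return sorted(docked, key=dock_score)

library = [
    {"name": "cpd1", "mic_um": 2.0, "polar_contacts": 4},
    {"name": "cpd2", "mic_um": 50.0, "polar_contacts": 6},  # inactive in cells
    {"name": "cpd3", "mic_um": 8.0, "polar_contacts": 1},   # weak docking
    {"name": "cpd4", "mic_um": 1.0, "polar_contacts": 3},
]
print([c["name"] for c in prioritize(library)])
```

The point of the sketch is the ordering of the filters: cell entry first, specific interactions second.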

Another question you could perhaps ask is whether the compounds we want to avoid in HTS (like PAINS) are different in some way. Would they stand out from real HTS hits and real SBDD hits? Is a PAIN found by docking more useful than a PAIN found by HTS? Do they have different enthalpy scores?

Well, I am sure the webinar will have others asking questions too. It’s certainly got me thinking.




Where is the big Pharma Knight riding in to slay Ebola?

In full disclosure, my better half works for a big pharma; this is not going to hold me back from writing what I am sure everyone is thinking. It saddens me, as someone who worked for big pharma (until 2001), to have to write this.

We are months into the rapidly escalating catastrophe that is Ebola. We have experimental therapeutics that are depleted, and we now have patients in Europe and the US who are critically ill. Thousands have died in Africa. The latest news is that a US patient is taking an experimental broad spectrum antiviral (read: not FDA approved) from a small biotech.

But the ones who have been incredibly silent are the big pharma companies. Where are they? I cannot think of one article I have read, one news item, that even mentions them offering something. I am sure they are, have, or are thinking about it. If there was ever one opportunity to right all those wrongs that seem to get press, such as the latest allegation of corruption, Ebola is it.

We hear about the times companies do something for malaria, TB and other neglected diseases – such as screening their vast compound libraries, dumping the thousands of hits into the public domain, opening up their patents etc. But why are they not throwing their compounds at the CDC, the army, or whoever can test them against Ebola? Why are the big pharma vaccine makers like GSK and Novartis, who have been paid by the government to provide vaccines for flu pandemics and more, not responding quicker? Let me repeat: we knew months ago that this was likely not just another small scale Ebola outbreak.

What would it take to organize some effort from the pharma side to show they were responding? Namely, we could get drugs and compounds off the shelf from pharmas and test them. The compounds big pharma offered to NCATS, for example, might be a place to start. Maybe this is happening, but the NIH has not communicated it.

I have already highlighted in multiple places what must seem the most astute science ever funded by the government: namely, two studies that independently screened FDA drugs against Ebola in 2013. And yet even this published work is being ignored, and the data from the screening are not publicly accessible. PLOS ONE somehow overlooked this. Now, if only the data from the FDA approved drug screens were publicly accessible, they could be used for machine learning models to help identify possibly better compounds. Has not a single journalist looked in PubMed? If there was ever a time to highlight the need for open data, this is it. What about using the various computational approaches out there that could search through the huge corpus of knowledge and suggest compounds to test, or dock compounds into targets for Ebola?
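
As one example of the kind of computational approach I mean, a simple similarity search against the known actives takes only a few lines. The "fingerprints" below are toy sets of made-up feature ids, not real chemical fingerprints; with open screening data you would substitute proper fingerprints for the actives and the candidate library.

```python
# Rank untested compounds by Tanimoto similarity to known actives.

def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two feature sets."""
    if not fp1 and not fp2:
        return 0.0
    return len(fp1 & fp2) / len(fp1 | fp2)

# Invented feature sets standing in for real fingerprints
known_actives = {
    "chloroquine": {1, 4, 7, 9},
    "amodiaquine": {1, 4, 7, 12},
}
candidates = {
    "cand_a": {1, 4, 7, 9, 15},
    "cand_b": {2, 3, 5},
}

# Best similarity of each candidate to any known active
scores = {
    name: max(tanimoto(fp, a) for a in known_actives.values())
    for name, fp in candidates.items()
}
print(sorted(scores, key=scores.get, reverse=True))
```

This is exactly the sort of triage that becomes possible the moment the screening data are openly available.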

So are big pharma just moving so slowly because that is the way they operate? Or are they caught like deer in the headlights? Could they put something out there via PhRMA (even a small press release) so the general public has some idea they are responding to Ebola in some small way and not waiting for it to go away on its own? Big pharma may be scared of putting their drugs out there and having them make no impact; they may see it as a double hit – the market is small, and the publicity around failure might cripple their share prices. Perhaps government could offer some reassurance that this could be done anonymously to avoid any bad publicity.

Who will be the knight that rides in to slay the dragon that is Ebola? Will it be a small biotech that has nothing to lose and everything to gain? It will likely not be an NGO or the WHO. If big pharma could find the knight to lead them, this could be a battle they might win by marshaling their cumulative brainpower and resources, and then perhaps the public will have new faith in this industry.

This is not a dream; Ebola is the worst nightmare we could imagine. It’s going to take big pharma getting involved to end it. Please, someone wake them up.



The most expensive data set I have modeled – don’t blink (over $500M)

The cost of data generated in science is probably not something many people ponder. When I look back over some of the datasets I have examined over the years, it’s interesting to speculate on the costs of the underlying data. Working for pharmaceutical companies, data from specialist tests for toxicities (e.g. hERG) probably ran from the tens to hundreds of thousands of dollars for 10s to 100s of compounds of interest (back when the assays were low throughput and contracted out). Even high throughput screens performed inside companies probably had similar costs. But the prize for the most expensive data set I have worked on surely goes to the recent analysis of over 300 NIH funded chemical probes.

By conservative estimates this project, and therefore the data derived from it, likely cost well in excess of $500M (as quoted in 2010). If there are any bean counters out there with an updated value, please let me know. To date the countless grants have funded hundreds of screens. The result: a little over 300 probes so far. So each probe compound is worth well over $1M! And no, I am NOT exaggerating; this is by FAR the most expensive dataset I have had the opportunity to model. Let’s put this in some perspective: it cost over $3B to fund the Human Genome Project. When I think of the potential for impact on healthcare, somehow this dataset does not register even close to the human genome. That’s not to discount it, but what will it lead to? Probably you could say the same for the human genome, but many would argue it has propelled many insights, projects and products in science (almost like the billions sunk into the space race in the 60s).

So what does this all mean? Well, the analysis we recently published suggested that over 20% of the probes were undesirable, based on the experience (40 yrs) of a medicinal chemist. That suggests 20% of the >$500M may have been a complete waste of time. That’s over $100M down the drain (conservatively). This of course is just small change for an agency that has an annual budget of $30.15 billion. $100M could make a significant impact on rare disease research; it could be used productively for Ebola research as well as many other diseases. But you may argue this project is past tense. The various research groups took their hefty chunk of overhead, and there were costs sunk into equipment and staffing etc. But the data lives on and resides in PubChem – half a billion dollars of data just sitting there.
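
The arithmetic here is simple enough to check on the back of an envelope (or in a couple of lines of Python), using the conservative figures quoted above.

```python
# Back-of-the-envelope numbers from the post
total_cost = 500e6           # conservative program cost, USD (2010 quote)
probes = 300                 # probes delivered so far
undesirable_fraction = 0.20  # flagged as undesirable in our analysis

cost_per_probe = total_cost / probes
wasted = total_cost * undesirable_fraction
print(f"${cost_per_probe / 1e6:.2f}M per probe, ${wasted / 1e6:.0f}M potentially wasted")
```

About $1.67M per probe, and $100M on the undesirable ones, both conservative.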

We used all the data to try to model the medicinal chemist’s decisions using machine learning methods. This suggests that perhaps we can use such models to prioritize compounds before we invest more time and resources in them. Literally, the $500M machine learning model exists! This might never have happened but for a discussion with Christopher Lipinski and Nadia Litterman back in April to see if a model could be created to emulate his decision making in some way. So while I am amazed at the costs to generate this data, without it we would not have been able to do the analysis we did. Is this the end of it? Probably not. This is not really BIG DATA, but it took BIG MONEY.
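
To give a flavor of what such a model looks like, here is a toy naive Bayes-style scorer over binary descriptors. This is purely illustrative: it is not the published model, the descriptors are unnamed, and the training data are invented.

```python
# Illustrative only: a tiny Laplace-smoothed log-odds scorer standing in
# for a model of the chemist's desirable/undesirable calls.
from math import log

# Each probe: (binary descriptor vector, desirable?)  -- invented data
training = [
    ([1, 0, 1], True), ([1, 1, 1], True), ([1, 0, 0], True),
    ([0, 1, 0], False), ([0, 1, 1], False),
]

def fit(data):
    pos = [x for x, y in data if y]
    neg = [x for x, y in data if not y]
    def feature_logodds(i):
        # Laplace smoothing so unseen features do not blow up the log
        p = (sum(x[i] for x in pos) + 1) / (len(pos) + 2)
        q = (sum(x[i] for x in neg) + 1) / (len(neg) + 2)
        return log(p / q)
    prior = log(len(pos) / len(neg))
    weights = [feature_logodds(i) for i in range(len(data[0][0]))]
    return prior, weights

def desirable(x, model):
    prior, weights = model
    score = prior + sum(w for xi, w in zip(x, weights) if xi)
    return score > 0

model = fit(training)
print(desirable([1, 0, 1], model), desirable([0, 1, 0], model))
```

Even something this crude shows how a cheap model could triage compounds before anyone spends probe-level money on them.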

If anyone has examples of more expensive datasets of molecules and screening data please point them my way.







A pharmacophore for compounds active against Ebola

I was looking through PubMed yesterday for repurposing efforts applied to Ebola and found the paper by Madrid et al. This described many hits, including chloroquine and amodiaquine. Then I was tweeting with @DokteCoffee last night and she suggested another paper, by Johansen et al., which described SERMs such as clomifene and toremifene. Interestingly, Madrid et al. had also found toremifene as active but did not test viral replication.

I have generated a common feature pharmacophore with these compounds and posted it on figshare. To narrow down the number of hits when screening libraries, I added the van der Waals surface of amodiaquine. Interestingly, the pharmacophore has multiple hydrophobic features and a single hydrogen bond acceptor feature. If anyone wants the Discovery Studio files, please let me know and I will email them out. I have posted the searches as .csv files on figshare for the CDD library and the MicroSource library. There may be additional compounds that have not been screened in vitro that could be worth looking at.
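
For anyone without pharmacophore software, a very crude 2D stand-in is simply to count features. This is only a toy filter (a real search uses the 3D arrangement of the features and the amodiaquine shape constraint in Discovery Studio), and the feature counts below are invented.

```python
# Toy filter: require several hydrophobic features and at least one
# hydrogen-bond acceptor, mirroring the pharmacophore's composition.

def matches_pharmacophore(features, min_hydrophobic=3, min_acceptors=1):
    return (features["hydrophobic"] >= min_hydrophobic
            and features["hba"] >= min_acceptors)

# Invented feature counts for hypothetical library members
library = {
    "cmpd_x": {"hydrophobic": 4, "hba": 1},
    "cmpd_y": {"hydrophobic": 2, "hba": 1},  # too few hydrophobic features
    "cmpd_z": {"hydrophobic": 1, "hba": 3},  # also too few hydrophobics
}
hits = [name for name, f in library.items() if matches_pharmacophore(f)]
print(hits)
```

Anything passing a cheap filter like this would still need the real 3D search, and then an assay, before getting excited.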

Estradiol maps very well to the pharmacophore, and although Madrid et al. showed no viral replication activity in vitro, this compound would be interesting to test in vivo.



Promoting younger scientists to editorial boards of journals

Today I realized I am in the fortunate position of being on the editorial boards of several journals, and that I have a “voice”. Then I realized I am no longer a young scientist; I am 44. What about those doing their postdocs in their mid-20s – when do they get a voice in the running of journals? I look at even the new journals that keep starting (another was in my inbox today), and their editors are closer to my demographic, mid-to-late 40s or beyond. Yes, experience is important, but what about listening to those who are doing science now, not science 20 years ago? I am not saying that we are past it as scientists; I am just proposing that editorial boards filled with those in their 40s, 50s and 60s are probably not ideal or representative of the population at major conferences. Just go to an ACS, AAPS, SLAS etc. meeting – there are thousands of young scientists.

So I tweeted a bit about this today.

Tweet 1 = Just thinking – isn’t it time publishers put a journal together that has a “younger scientist” focused ed. board that addresses needs (1/2)

Tweet 2 = (2/2) Younger scientist journal, needs ultra fast turnaround, transparent reviews and reviewers, free & open, open to new ideas & feedback

Tweet 3 = perhaps some much younger scientists could chime in on my comments.. I feel less relevant as I am > 40 but all journals focus on experienced

I have also sent the following to the editors of the 4 boards I am on, namely Pharmaceutical Research, Drug Discovery Today, Mutation Research Reviews and The Journal of Pharmacological and Toxicological Methods. These are far from huge journals, but they are with Springer and Elsevier, and I do think it’s time the big publishers were more responsive to younger scientists. Just look at the editorial boards.

Here is the text I sent to each journal editor: “I was thinking today – is there a way we could get some ‘younger scientists’ on the editorial board? I look at myself and I am 44; I am no longer a ‘young scientist’. I think it might be important to get some input from those under 40, and possibly in their early 30s, because things are moving quite fast in the publishing world (e.g. F1000Research, PeerJ etc.) and we may risk seeing competition from these types of journals, which may attract younger scientists.”

Yes I do realize that this comment may get me kicked off these journals, and hopefully replaced by someone younger.



Minding the gaps in tuberculosis research – what Science Translational Medicine said

Today I received the link to our recent article (Minding the gaps in tuberculosis research) with Joel Freundlich at Rutgers and Eric Nuermberger at Johns Hopkins. The article is free for 50 days from today, which I think is a very useful idea from Elsevier: make the article available for a while, get some visibility and downloads, and raise awareness of the journal Drug Discovery Today. As an editorial board member, I think this is one of the smartest things I have seen Elsevier do in a long time, and it certainly goes some way to offsetting the criticism these businesses deal with.

Anyway, once again this paper has a bit of a story. We submitted initially to Science Translational Medicine, thinking that was the perfect home for it… but that was not to be.

Here are the anonymous reviews from 2 reviewers:

MSID: 3009006 Title Minding The Gaps in Tuberculosis Research
Author Name: Ekins, Sean
The authors present a convincing argument for the detailed retrospective analysis of accumulated in vivo data to inform the selection of new hits for development as anti-tuberculosis drugs. They describe analysis of a curated database that collates the available data, and this is available through various online resources. I would endorse the views expressed in this focus piece, but have one minor suggestion and one more substantial:

Minor – the authors, in the abstract, refer to a 30 year valley of death, then in the conclusion refer to a 40 year gap in approvals between rifampicin and bedaquiline. These figures are measures of different things; however, I would suggest that in a Focus piece that will be oft quoted it might be tidier to focus on one or the other.

Major – the authors indicate that the debate about which mouse model, or indeed which animal, should be used is outside the scope of this review. They then, without presenting evidence, make the statement that ‘the correlation between treatment outcomes in mice and infected humans cannot be ignored’. I would suggest that this particular paragraph assigns a confidence in the mouse model that does not reflect the state of the debate in the community. As there is not space to discuss these points in full, I would suggest deleting the sentences ‘Outside the scope… be ignored’ and replacing them with a sentence that posits the database, as presented here using the mouse model, as an exemplar of how the accumulated animal model data as a whole could be used to inform hit selection.

General comments:
This commentary describes the history of anti-TB drug discovery and a 30-year gap that occurred in research to identify new drugs, as well as the gap between in vitro and in vivo testing activities. The authors conducted a literature review to quantitatively assess these gaps. These gaps are already well recognized in the field, but the main contribution of this commentary is that these gaps, based on the methodology the authors used to review the literature, are quantitatively depicted. Unfortunately, the discussion the authors provide to explain the gaps is largely descriptive and not very compelling. Some obvious explanations are not provided.

Specific comments:
1. This commentary is about anti-TB drug discovery. The title, “Minding the gaps in tuberculosis research”, is somewhat misleading, as there are many other major gaps in TB research besides drug discovery. The title should be changed to reflect the focus of this paper.
2. The most obvious explanation for the time gap in research is the AIDS epidemic. If the AIDS epidemic had not occurred, TB would not have received the attention it finally did in the mid 1990s. For example, the Global Fund to Fight AIDS, Tuberculosis and Malaria did not begin its operations until 2002, which corresponds to the period when the increase in in vitro testing began (as shown in Fig 1). The discussion would have been more illuminating if the authors had contrasted their findings with the level of TB research funding activity during the same time period.
3. Another potential explanation for the time gap in research is the time it took for multidrug-resistant TB prevalence to emerge and become high enough to receive attention from developed countries.
4. The gap between in vitro and in vivo testing activity could also have many explanations, which were not suggested in this commentary. It is not surprising that in vivo assay activity peaked in the 1940s-60s, since no anti-TB drugs were available until the late 1940s and TB was highly prevalent during this period even in the US and other Western countries. It is not surprising that the activity decreased after good drugs were discovered and the prevalence of the disease decreased. In fact, CDC published a report in 1989, A Strategic Plan for the Elimination of Tuberculosis in the United States, which proposed to eliminate TB by 2010. There is no incentive to develop new drugs for a disease that was slated to be eliminated.
5. The wide gap between in vitro assay activity and in vivo assay activity after 2002 could be explained by many other reasons, including the need to identify new targets, since most existing drugs share similar targets and are subject to rapid selection of resistance. Hits and leads that targeted the known targets would not undergo further tests.
6. In the Abstract, the authors state, “…a paradigm shift is required in drug discovery and development to meet the global health need for new therapies.” Without an attempt to explain the possible causes of these gaps, the authors cannot make any suggestions about what this “paradigm shift” is going to be.
7. The purpose of the supplemental table with the large list of hits and leads is unclear.

In summary, this commentary would have been more meaningful and contributory if the authors had included a more detailed discussion of why these gaps may have occurred.

This is a very short review/commentary analyzing the history of drug development against tuberculosis. The paper shows that drug discovery was performed mostly in the 1950s, when many drugs were tested in vivo.
The second wave of discovery is recent, in the 1990s, when millions of compounds were tested in vitro, but not so many were tested in vivo.
Overall the paper shows that although there is great need for anti-tuberculosis drugs, the present efforts are not going to deliver, and there is a need for a change in the approach. There is no suggestion on what change should happen.


It appears they had sent out an earlier version of the manuscript, from before we made the changes requested by the editor.

So it went out for review again (our responses follow each reviewer point), but to no avail.

Reviewer 1
The manuscript describes a problem and a potential solution, at least a conceptual one. The message is provocative and deserves to be published; however, I think there are a number of assumptions made that weaken the overall argument, and changes to the presentation would strengthen the message the authors are trying to convey. I highly recommend the following actions be taken before publishing.
1) Minor point on personal preference – I know this term is becoming more common, but I think the term ‘valley of death’ referring to research and/or drug discovery, especially in relation to infectious diseases, is overly dramatic and detracts from the good messages contained in the draft.
Our response: Do not agree – actually our Figure 1 shows the valley of death quite nicely.
2) There are no references listed for Supplemental Table 1 in the version I received.
Our response: The references were at the bottom of Supplemental Table 1.

3) A number of assumptions or extensions of information are made throughout the draft. For example, on pp 3 “…described 66 hits (Supplemental Table 1) under consideration for advancement”. Just because something is published does not mean it was ‘considered for advancement’, and assumptions like this one I feel overly inflate the numbers used throughout the draft. I fully agree with the concept the authors propose, but I think it would be just as compelling without the estimations.
Our response: I disagree, as any analogs made are under consideration for advancement; it just depends on their respective profiles.
4) I agree that there is no infrastructure to understand the overall pipeline, and this is a laudable goal, but I think the focus should be on the pipeline, or at least things of potential interest, not just a catalog of everything published with Mtb activity.
Our response: How else would we learn from what has been done before? The value of curation cannot be overstated; no one has ever done this before us.
5) The assertion that ‘the next logical step would be to progress these and other lead compounds into an in vivo efficacy model’ is likely flawed. I fully support the idea of cataloging interesting compounds from the literature etc. and finding a way to move them forward, especially in an area like TB, but the potency of the cmpds in Table 1 ranges from 0.02 to 32 uM. Surely it would be irresponsible to progress all, or likely even many, of these into in vivo experiments. In fact, a number of the more potent compounds in Table 1 already seem to have in vivo data.
Our response: We could point out this range of in vitro efficacies and comment on the frequent disconnect between in vitro activity and in vivo efficacy (e.g., pyrazinamide).
6) The curation of molecules from the literature might be a useful exercise, but it is difficult to ascertain due to the limited information. Minimally, the final set used to generate Figure 1 should be available as Supplemental Information (there are references to it being available on public resources, but no links are given). This can be done and is minor.
Our response: As we stated in the manuscript, it is already available in multiple locations, e.g. figshare (Mouse TB in vivo data over 70 years); we have even tweeted it previously, so it is very publicly accessible!
7) The authors compare the discovery-to-approval time of ‘a couple of years’ in the 40s-60s vs 16 years for bedaquiline – I would think this delay could have had much more to do with the more rigorous process overall than with a lack of push for mouse in vivo efficacy.
Our response: They missed the point we are making about asking the critical question of in vivo efficacy sooner rather than later!
8) The comparison of the number of compounds tested in the 1950s vs. the 2000-2010 period assumes that the 2 million compounds actually cover significantly more ‘diversity’ than the ‘thousands’ in the 1950s. I think it is more likely reversed. Regardless, it isn’t just a numbers game.
Our response: I disagree, as it is in many ways; we do not know what an in vivo active looks like, and thus the more shots on goal the better. The fact of the matter is we tested more compounds in vivo in the past than we do now, as a proportion of in vitro screening.
9) The statement ‘Amongst the 1000s of compounds… how many would be active in vivo and… progress to the clinic’. This all depends on the profiles of the compounds, doesn’t it? Using a typical high throughput screen with a 2% hit rate (which seems rather high) as a model, this might suggest that of 10,000 molecules tested only 200 would be ‘hits’, and perhaps only a handful potent enough to even consider progressing to in vivo studies.
Our response: We are not at all sure the reviewer is even making a point here that’s relevant to what we are saying.
10) I don’t see exactly what the algorithm proposed to ‘prioritize’ in vitro active compounds will do beyond what the MIC data can do itself.
Our response: This misses our point about machine learning and learning from the past in general. Using a model that learns from the past can help filter the in vitro hits so we enrich the compounds tested in vivo in likely actives.
11) Figure 1 could be better presented (axes hard to read, etc).
Our response: This figure is perfectly readable in the file we provided.
Reviewer 2
The authors propose that we should be learning from the historic data from validation in the in vivo mouse model to predict whether in vitro drugs work in vivo. They seem to conflate two issues: gaps in which drug testing is low, and improving the efficacy of testing. It is not clear what we can really learn from this.
1. The final sentence of the abstract says, “This suggests a rethink of approaches is required…”. This is not much of a conclusion for a high profile paper.
Our response: It’s a conclusion to the abstract of a very short opinion piece. We actually do propose a rethink and have many recommendations in the final paragraphs. Perhaps the reviewer should have read those.
2.  In the manuscript, the authors suggest that “…it may be time to rebalance resources dedicated to in vitro and in vivo efficacy assays of candidate antitubercular compounds”.  This is a broad statement without a defined idea on how to do this.  They also “propose that the data collated in this study could be sued to build machine learning models…”.  How will the factors which predict success or failure be studied? 
Actually, we are proposing more in vivo testing as in the past, combined with learning from historic data and using models built on those data. The models will learn from actives and inactives; perhaps the reviewer should have looked at the reference we provided: S. Ekins, J. S. Freundlich, J. V. Hobrath, E. L. White, R. C. Reynolds, Pharm Res 31, 414 (2014).
3. The authors want to “improve the efficiency of the discovery process by shortening or removing the gaps…”.  It is not clear that the gaps they discuss, the years where there have been lags in discovery, are related to the efficiency of discovery or simply to a lack of investigation.
We think we can do both by learning from the past and doing more in vivo screening after using models to prioritize the in vitro hits.
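Our point about learning from the past can be sketched in a few lines. The toy example below (hypothetical fingerprints and labels, not our actual data or model) shows the kind of Laplacian-corrected naive Bayes scorer described in the Ekins et al. reference above: it is trained on historic in vivo actives and inactives, then used to rank new in vitro hits so in vivo slots go to the most promising compounds first.

```python
# Toy sketch: Laplacian-corrected naive Bayes over binary fingerprint bits,
# trained on historic in vivo outcomes, used to rank new in vitro hits.
# All compounds, bits and labels here are hypothetical illustrations.
from math import log

def train(fingerprints, labels):
    """fingerprints: list of sets of 'on' bit indices; labels: 1 = in vivo active."""
    n_total = len(labels)
    base_rate = sum(labels) / n_total  # fraction of historic compounds active in vivo
    bit_counts = {}                    # bit -> (times seen, times seen in actives)
    for fp, y in zip(fingerprints, labels):
        for bit in fp:
            seen, act = bit_counts.get(bit, (0, 0))
            bit_counts[bit] = (seen + 1, act + y)
    # Laplacian-corrected per-bit weight: log of (smoothed active fraction / base rate)
    return {bit: log((act + 1) / ((seen + 1) * base_rate))
            for bit, (seen, act) in bit_counts.items()}

def score(weights, fp):
    """Sum the weights of a compound's 'on' bits; higher = more likely active in vivo."""
    return sum(weights.get(bit, 0.0) for bit in fp)

# Hypothetical historic data: two in vivo actives, two inactives
train_fps = [{1, 2, 3}, {1, 2}, {4, 5}, {5, 6}]
train_labels = [1, 1, 0, 0]
w = train(train_fps, train_labels)

# Rank three new in vitro hits for in vivo follow-up
hits = {"cpd_A": {1, 2, 7}, "cpd_B": {4, 6}, "cpd_C": {2, 5}}
ranked = sorted(hits, key=lambda c: score(w, hits[c]), reverse=True)
```

Here `cpd_A`, sharing bits with the historic actives, would be prioritized for in vivo testing, which is exactly the enrichment of in vivo candidates we are proposing.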
4. I was confused by the sentence “One optimistic observation from these mouse data is that the fraction of in vivo actives tested is higher in the current as compared to the past”.  I was not sure if this was related to my observation that the percentage of active vs. total compounds tested (interpolated from the authors’ graphs) has improved (see below).  The gaps were periods where there was a lack of testing. Is improving efficiency really necessary?  These numbers look pretty good.  The authors do talk about predicting efficacy, but again, the discussion is very vague.
Percent of drugs which were efficacious in the mouse, by decade:
1950: 43%   1960: 37%   1970: 50%   1980: 60%   1990: 57%   2010: 56%
As we state in the manuscript, this may also reflect a publication bias in recent years: publications in the past would report many active and inactive in vivo results in a single study, whereas it is now common for a typical paper to report only one or two in vivo actives. This could be because inactives are excluded to make the paper look better, because fewer compounds are being tested, or because researchers really are much better at picking in vivo actives. As no one has previously published this observation for TB, we are also breaking new ground.
5.  The authors do not address combinations of drugs which are often used in TB.  There may also be interactions of drugs with host defense mechanisms.
We clearly did not have space to address every facet of TB research over 70 years; other reviews have done that, and we had a maximum of 10 references anyway.
Naturally, I am in no rush to try STM again.


Rethinking what I need in a journal and how I read papers

Last week I was invited at the behest of a major society (the ACS) that has lots of journals to chat with a consultant – “The objective is to better understand the role scientific publications play in the research community.” That piqued my interest, so I agreed. The discussion happened today, and I felt I could be pretty open and honest because, well, that is how I am. What surprised me was that as the words came out I realized I had to remember as much as I could and put a post out. Perhaps other journals can benefit from these ideas?

The questions ranged over what I look for in a journal. My answers included how well respected the journals are (not high impact but well respected by researchers – so I rely on my peers) and a reliable turnaround time. I also tend to prefer those that have good editorial quality – not too strict in terms of format etc., but they might spot things I missed and improve on what I submitted.

I also mentioned I had pretty much resigned myself to never publishing in the highest-impact journals, because after trying with some of what I thought were good ideas and getting rejected, it was just a waste of time. They were looking for papers from groups they knew, or from certain universities, or the editorial staff just had no clue and were simply hitting triage.

I was also asked what I would like to do to improve journals. Top of my list was standardizing formats between journals within a publisher (even the ACS). For example, why does a society have X journals that all have different formats, so that if you get rejected from one journal and need to resubmit elsewhere you basically have to reformat the paper every time? I also proposed that publishers should streamline the ability to share papers between journals so the burden is not on the author – for example, when submitting to a publisher you could prioritize the 2nd, 3rd, etc. journals you would like to submit to. The more I think about it, there is probably a business model here: take a scientist’s paper and automatically submit it to journals, and if it is rejected it keeps being submitted until it is accepted somewhere. Let’s just take the human processing out of it, which, besides the reviews, probably takes a similar amount of time.

The questions made me look at my own publication history and try to understand how it has changed. When I started in the mid 90s my papers primarily went to ASPET journals, and even as I started doing modeling my papers never really veered from pharmacology journals. Then I started to publish in Pharmaceutical Research in 2005, another society journal (AAPS), then J Med Chem in 2006 (ACS), then Chem Res Toxicol (ACS) in 2008; my first open access journal paper, in BMC Evolutionary Biology with collaborators, was published in 2008, followed by one in PLoS Comput Biol in 2009. From that point on my papers start appearing in many different journals, both open access and closed. It is only in recent years, with my increase in machine learning work, that my options have narrowed substantially as to where I can submit these papers (computational journals, including ACS journals).

For one thing, the consultant kept asking whether I would be interested in, and what I thought of, ‘general multidisciplinary’ journals – he did not expand on this, but immediately PLOS ONE came to mind, as did Springer Plus and others – so my guess is that yet another such journal is being proposed. Which really makes you wonder whether folks ever have any original ideas.

Then I started to mention how reading papers nowadays is so different from the past, when I read them in journals in the 90s. Now I get papers either sent to me by friends or others who think I might be interested in them, or I see a link in a blog post or on Twitter etc. I only really research papers when working on a new project and need to do a literature search. This is a pretty sad state of affairs because I barely have time to read or look out for things, and yet I used to be a voracious reader of papers. As an editorial board member I used to get paper copies of journals; that stopped happening a few years ago, so I no longer read even those journals unless a paper published in them turns up on PubMed or Google Scholar etc. Now that should be a nudge to these publishers: your journal editors do not even have time to read the journal, and I am sure it’s not just me!

In summary – this whole scenario points to a few things. For myself, all the journals could be the same ‘vanilla’, as long as formats, references etc. were identical. The big handful of highest-impact journals do not interest me, and I rarely read them unless I have to find a paper. With the Nature Reviews journals, maybe once in a while something of interest pops up, but honestly they are not on my radar anymore. I do think publishers need to rethink, perhaps standardize on layout and formats, so that it is easier to resubmit across journals, even within publishers. Do I care if a journal is single-topic or multidisciplinary – NO. I only care about quality, what my colleagues think of the journal, and whether it processes submissions in a timely manner. Pretty basic – perhaps almost utilitarian. The journals are just a means to get my ideas and research out there. If I can publish in an open access journal and have the money or opportunity, I will. The journals IMHO do little to get people to read them or the articles, as that comes down to our social networks as authors. For me that is strongly brought home by recent experience: publishing in open access journals, tweeting, blog posts etc. can have a remarkable impact on who looks at your papers. At some point all journals will need metrics for accesses and downloads so that authors can see whether they are read or not. For me that is also a reinforcement for publishing in the same journal again if I know it finds an audience for my work.

So these are my most recent rumblings; it will be interesting to see how this changes over the coming years. I do sense a change in my publishing and research strategy over the past few years, partly from being overwhelmed by information and partly from the challenges of research and publishing. A fresh, disruptive wind needs to blow through academic publishing, and I and many others will welcome it (if it has not already started).




In Memoriam

It is with great sadness that we learned of the passing of Dr. Martin John Rogers this past Friday. Dr. Rogers was Program Officer, Preclinical Parasite Drug Development, Parasitology and International Programs Branch, Division of Microbiology and Infectious Diseases, National Institute of Allergy and Infectious Diseases. We were honored when he agreed to speak at our 2014 community meeting at CDD. He was a wonderful source of information and connections on neglected tropical diseases and will be greatly missed by all the researchers he worked with. Our thoughts are with his family, friends and colleagues during this time. We are very thankful that we had the opportunity to work with such a wonderfully kind and enthusiastic man.
