Sep 19

Promoting younger scientists to editorial boards of journals

Today I realized I am in the fortunate position of being on the editorial boards of several journals and that I have a “voice”. Then I realized I am no longer a young scientist – I am 44. What about those doing their postdocs in their mid 20s: when do they get a voice in the running of journals? Even the new journals that keep starting (and another was in my inbox today) have editors closer to my demographic, mid to late 40s or beyond. Yes, experience is important, but what about listening to those who are doing science now, not science 20 years ago? I am not saying that we are past it as scientists; I am just proposing that editorial boards filled with those in their 40s, 50s and 60s are probably not ideal or representative of the population at major conferences. Just go to an ACS, AAPS or SLAS meeting – there are thousands of young scientists.

So I tweeted a bit about this today.

Tweet 1 = Just thinking – isn’t it time publishers put a journal together that has a “younger scientist” focused ed. board that addresses needs (1/2)

Tweet 2 = (2/2) Younger scientist journal, needs ultra fast turnaround, transparent reviews and reviewers, free & open, open to new ideas & feedback

Tweet 3 = perhaps some much younger scientists could chime in on my comments.. I feel less relevant as I am > 40 but all journals focus on experienced

I have also sent the following to the editors of the 4 journals whose boards I am on, namely Pharmaceutical Research, Drug Discovery Today, Mutation Research Reviews and the Journal of Pharmacological and Toxicological Methods. These are by no means huge journals, but they are with Springer and Elsevier, and I do think it’s time the big publishers were more responsive to younger scientists. Just look at the editorial boards.

Here is the text I sent to each journal editor: “I was thinking today – is there a way we could get some ‘younger scientists’ on the editorial board? I look at myself and I am 44; I am no longer a ‘young scientist’. I think it might be important to get some input from those under 40, and possibly in their early 30s, because things are moving quite fast in the publishing world (e.g. F1000Research, PeerJ) and we may risk seeing competition from these types of journals, which may attract younger scientists.”

Yes, I do realize that this comment may get me kicked off these editorial boards, and hopefully replaced by someone younger.

 

Sep 15

Minding the gaps in tuberculosis research – what Science Translational Medicine thought of it

Today I received the link for the recent article (Minding the gaps in tuberculosis research), written with Joel Freundlich at Rutgers and Eric Nuermberger at Johns Hopkins. The article is free for 50 days from today, which I think is a very useful idea by Elsevier: make the article available for a while, get some visibility and downloads, and raise awareness of the journal Drug Discovery Today. As an editorial board member I think this is one of the smartest things I have seen Elsevier do in a long time, and it certainly goes some way to offsetting the criticism these businesses deal with.

Anyway, once again this paper has a bit of a story. We submitted initially to Science Translational Medicine – thinking this was the perfect home for it – but that was not to be.

Here are the anonymous reviews from 2 reviewers:

MSID: 3009006  Title: Minding The Gaps in Tuberculosis Research
Author Name: Ekins, Sean
The authors present a convincing argument for the detailed retrospective analysis of accumulated in vivo data to inform the selection of new hits for development as anti-tuberculosis drugs. They describe analysis of a curated database that collates the available data, and this is available through various online resources. I would endorse the views expressed in this focus piece, but have one minor suggestion and one more substantial:

Minor – the authors, in the abstract, refer to a 30 year valley of death, then in the conclusion refer to a 40 year gap in approvals between rifampicin and bedaquiline. These figures are measures of different things; however, I would suggest that in a Focus piece that will be oft quoted it might be tidier to focus on one or the other.

Major: the authors indicate that the debate about which mouse model, or indeed which animal, should be used is outside the scope of this review. They then, without presenting evidence, make the statement that ‘the correlation between treatment outcomes in mice and infected humans cannot be ignored’. I would suggest that this particular paragraph assigns a confidence in the mouse model that does not reflect the state of the debate in the community. As there is not space to discuss these points in full, I would suggest deleting the sentences ‘Outside the scope…..be ignored’ and replacing them with a sentence that posits the database, as presented here using the mouse model, as an exemplar of how the accumulated animal model data as a whole could be used to inform hit selection.

Review
General comments:
This commentary describes the history of anti-TB drug discovery and a 30-year gap that occurred in research to identify new drugs, as well as the gap between in vitro and in vivo testing activities. The authors conducted a literature review to quantitatively assess these gaps. These gaps are already well recognized in the field, but the main contribution of this commentary is that these gaps, based on the methodology the authors used to review the literature, are quantitatively depicted. Unfortunately, the discussion the authors provide to explain the gaps is largely descriptive and not very compelling. Some obvious explanations are not provided.

Specific comments:
1. This commentary is about anti-TB drug discovery. The title is somewhat misleading – “Minding the gaps in tuberculosis research” – as there are many other major gaps in TB research other than drug discovery. The title should be changed to reflect the focus of this paper.
2. The most obvious explanation for the time gap in research is the AIDS epidemic. If the AIDS epidemic had not occurred, TB would not have received the attention it finally did in the mid 1990s. For example, the Global Fund to Fight AIDS, Tuberculosis and Malaria did not begin its operations until 2002, which corresponds to the period when the increase in in vitro testing began (as shown in Fig 1). The discussion would have been more illuminating if the authors contrasted their findings to the level of TB research funding activities that occurred during the same time period.
3. Another potential explanation for the time gap in research is the time it took for multidrug-resistant TB prevalence to emerge and become high enough to receive attention by developed countries.
4. The gap between in vitro and in vivo testing activity could also have many explanations, which were not suggested in this commentary. It is not surprising that in vivo assay activity peaked in the 1940s-60s, since no anti-TB drugs were available until the late 1940s and TB was highly prevalent during this period even in the US and other Western countries. It is not surprising that the activity decreased after good drugs were discovered and the prevalence of the disease decreased. In fact, CDC published a report in 1989, ‘A Strategic Plan for the Elimination of Tuberculosis in the United States’, which proposed to eliminate TB by 2010. There is no incentive to develop new drugs for a disease that was slated to be eliminated.
5. The wide gap between in vitro assay activity and in vivo assay activity after 2002 could be explained by many other reasons, including the need to identify new targets, since most existing drugs share similar targets and are subject to rapid selection of resistance. Hits and leads that targeted the known targets would not undergo further tests.
6. In the Abstract, the authors state, “…a paradigm shift is required in drug discovery and development to meet the global health need for new therapies.” Without an attempt to explain the possible causes of these gaps, the authors cannot make any suggestions about what this “paradigm shift” is going to be.
7. The purpose of the supplemental table with the large list of hits and leads is unclear.

In summary, this commentary would have been more meaningful and contributory if the authors included a more detailed discussion on why these gaps may have occurred.

Review
This is a very short review/commentary analyzing the history of drug development against tuberculosis. The paper shows that drug discovery was performed mostly in the 1950s, when many drugs were tested in vivo. The second wave of discovery is recent, in the 1990s, when millions of compounds were tested in vitro, but not so many were tested in vivo.

Overall the paper shows that although there is great need for anti-tuberculosis drugs, the present efforts are not going to deliver and there is a need for a change in the approach. There is no suggestion on what change should happen.

——

It appears they had sent an earlier version of the manuscript out for review, before the editor had asked us to make some changes.

So it went out for review again (our responses follow each reviewer comment, prefixed “Our response:”), but to no avail.

************************
REVIEWS
************************
Reviewer 1
 
The manuscript describes a problem and a potential solution, at least a conceptual one. The message is provocative and deserves to be published; however, I think a number of assumptions are made that weaken the overall argument, and changes to the presentation would strengthen the message the authors are trying to convey. I highly recommend the following actions be taken before publishing.

1) Minor point on personal preference – I know this term is becoming more common, but I think the term ‘valley of death’, referring to research and/or drug discovery, especially in relation to infectious diseases, is overly dramatic and detracts from the good messages contained in the draft. Our response: Do not agree – actually our Figure 1 shows the valley of death quite nicely.

2) There are no references listed for Supplemental Table 1 in the version I received. Our response: The references were at the bottom of Supplemental Table 1.

3) A number of assumptions or extensions of information are made through the draft. For example, on pp 3 “…described 66 hits (Supplemental Table 1) under consideration for advancement”. Just because something is published does not mean it was ‘considered for advancement’, and assumptions like this one I feel overly inflate the numbers used throughout the draft. I fully agree with the concept the authors propose, but I think it would be just as compelling with the ‘estimations’. Our response: I disagree, as any analogs made are under consideration for advancement; it just depends on their respective profiles.

4) I agree that there is no infrastructure to understand the overall pipeline, and this is a laudable goal, but I think the focus should be on the pipeline, or at least things of potential interest, not just a catalog of everything published with Mtb activity. Our response: How else would we learn from what has been done before? The value of curation cannot be overstated; no one has ever done this before us.

5) The assertion that ‘the next logical step would be to progress these and other lead compounds into an in vivo efficacy model’ is likely flawed. I fully support the idea of cataloging interesting compounds from the literature etc. and finding a way to move them forward, especially in an area like TB, but the potencies of the compounds in Table 1 range from 0.02 to 32 uM. Surely it would be irresponsible to progress all or likely even many of these into in vivo experiments. In fact, a number of the more potent compounds in Table 1 already seem to have in vivo data. Our response: We could point out this range of in vitro efficacies and comment on the frequent disconnect between in vitro activity and in vivo efficacy (e.g., pyrazinamide).

6) The curation of molecules from the literature might be a useful exercise, but it is difficult to ascertain due to the limited information. Minimally, the final set used to generate Figure 1 should be available as Supplemental Information (there are references to it being available on public resources, but no links are given). Our response: This can be done and is minor. As we stated in the manuscript, it is already available in multiple locations, e.g. on Figshare (‘Mouse TB in vivo data over 70 years’); we have even tweeted it previously, so it is very publicly accessible!

7) The authors compare the discovery to approval time being ‘a couple of years’ in the 40s-60s vs 16 years for bedaquiline – I would think this delay could have had much more to do with the more rigorous process overall, as compared to a lack of push for mouse in vivo efficacy. Our response: They missed the point that we are making about asking the critical question of in vivo efficacy sooner rather than later!

8) The comparison of the number of compounds tested in the 1950s vs. the 2000-2010 period assumes that the 2 million compounds actually cover significantly more ‘diversity’ than the ‘thousands’ in the 1950s. I think it is more likely reversed. Regardless, it isn’t just a numbers game. Our response: I disagree, as in many ways it is; we do not know what an in vivo active looks like and thus the more shots on goal the better. The fact of the matter is that we tested more compounds in vivo in the past than we do now, as a proportion of in vitro screening.

9) The statement ‘Amongst the 1000s of compounds…..how many would be active in vivo and….progress to the clinic’. This all depends on the profiles of the compounds, doesn’t it? Using a typical high throughput screen with a 2% hit rate (which seems rather high) as a model, this might suggest that of 10,000 molecules tested only 200 would be ‘hits’, and perhaps only a handful potent enough to even consider progressing to in vivo studies. Our response: We are not at all sure the reviewer is even making a point here that’s relevant to what we are saying.

10) I don’t see exactly what the algorithm proposed to ‘prioritize’ in vitro active compounds will do beyond what the MIC data can do itself. Our response: This misses our point as far as machine learning, and learning from the past in general, is concerned. Using a model that learns from the past can help filter the in vitro hits so that we enrich the set of compounds tested in vivo in likely actives.

11) Figure 1 could be better presented (axes hard to read, etc.). Our response: This figure is perfectly readable in the file we provided.
************************
Reviewer 2
 
The authors propose that we should be learning from the historic data from validation in the in vivo mouse model to predict whether in vitro drugs work in vivo. They seem to conflate two issues – gaps in which drug testing is low, and improving the efficacy of testing. It is not clear what we can really learn from this.
 
1. The final sentence of the abstract says, “This suggests a rethink of approaches is required…”. This is not much of a conclusion for a high profile paper. Our response: It’s a conclusion to the abstract of a very short opinion piece. We actually do propose a rethink and have many recommendations in the final paragraphs. Perhaps the reviewer should have read that.
 
2. In the manuscript, the authors suggest that “…it may be time to rebalance resources dedicated to in vitro and in vivo efficacy assays of candidate antitubercular compounds”. This is a broad statement without a defined idea on how to do this. They also “propose that the data collated in this study could be used to build machine learning models…”. How will the factors which predict success or failure be studied?
Our response: Actually we are proposing more in vivo testing, as in the past, combined with learning from historic data and using models based on this data. The models will learn from actives and inactives; perhaps the reviewer should have looked at the reference we provided: S. Ekins, J. S. Freundlich, J. V. Hobrath, E. L. White, R. C. Reynolds, Pharm Res 31, 414 (2014).
 
3. The authors want to “improve the efficiency of the discovery process by shortening or removing the gaps…”. It is not clear that the gaps they discuss, the years where there have been lags in discovery, are related to the efficiency of discovery or simply to a lack of investigation.
Our response: We think we can do both, by learning from the past and by doing more in vivo screening after using models to prioritize the in vitro hits (a short sketch of what we mean appears at the end of this post).
 
4. I was confused by the sentence “One optimistic observation from these mouse data is that the fraction of in vivo actives tested is higher in the current as compared to the past”. I was not sure if this was related to my observation that the percentage of active vs. total compounds tested (interpolated from the authors’ graphs) has improved (see below). The gaps were periods where there was a lack of testing. Is improving efficiency really necessary? These numbers look pretty good. The authors do talk about predicting efficacy, but again, the discussion is very vague.
 
Percent of drugs which were efficacious in the mouse, by decade:
1950: 43%   1960: 37%   1970: 50%   1980: 60%   1990: 57%   2010: 56%
 
Our response: As we state in the manuscript, this may also represent a publication bias in recent years. We have noticed that publications in the past used to report many active and inactive in vivo data in a single study, whereas it is now common for a typical paper to have only one or two in vivo actives. This could be because any inactives are excluded to make the paper look better, or because fewer compounds are being tested, or because researchers really are much better at picking in vivo actives. As no one has previously published this observation for TB, we are also breaking new ground.
 
5.  The authors do not address combinations of drugs which are often used in TB.  There may also be interactions of drugs with host defense mechanisms.
 
Our response: We clearly did not have space to address every facet of TB research over 70 years; other reviews have done that, and we had a maximum of 10 references anyway.
 
************************
Naturally, I am in no rush to try STM again.
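For anyone curious about the kind of modeling mentioned in our responses above, here is a minimal sketch of the idea: train a simple Bayesian model on historic in vivo actives and inactives, then rank new in vitro hits by predicted probability of in vivo activity before committing them to mouse studies. This is only an illustration, not our actual pipeline; the molecules and labels are invented, and it assumes RDKit and scikit-learn are installed.

    # Sketch only: learn from historic mouse (in vivo) outcomes, then rank new
    # in vitro hits. Molecules and labels below are placeholders for illustration.
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.naive_bayes import BernoulliNB

    def fingerprint(smiles, radius=3, n_bits=2048):
        """ECFP6-like circular fingerprint as a numpy array of bits."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        arr = np.zeros((n_bits,))
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    # Historic in vivo results: 1 = active in the mouse model, 0 = inactive
    historic = [("CC(=O)Oc1ccccc1C(=O)O", 0),
                ("Nc1ccc(S(N)(=O)=O)cc1", 1),
                ("O=C(NN)c1ccncc1", 1)]
    X = np.array([fingerprint(smi) for smi, _ in historic])
    y = np.array([label for _, label in historic])
    model = BernoulliNB().fit(X, y)

    # New in vitro hits: score and rank, then send only the top of the list in vivo
    new_hits = ["CC(=O)Nc1ccc(O)cc1", "Nc1ccccc1Cl"]
    scores = model.predict_proba(np.array([fingerprint(smi) for smi in new_hits]))[:, 1]
    for smi, p in sorted(zip(new_hits, scores), key=lambda t: -t[1]):
        print(smi, "predicted P(in vivo active) =", round(p, 2))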

Sep 09

Rethinking what I need in a journal and how I read papers

Last week, at the behest of a major society that has lots of journals (the ACS), I was invited to chat with a consultant – “The objective is to better understand the role scientific publications play in the research community.” That piqued my interest, so I agreed. The discussion happened today, and I felt I could be pretty open and honest because, well, that is how I am. What surprised me was that as my words came out I realized I had to remember as much as I could and put a post out. Perhaps other journals can benefit from these ideas?

The questions started with what I look for in a journal. My answers included how well respected the journal is (not high impact, but well respected by researchers – so I rely on my peers) and whether it has a reliable turnaround time. I also tend to prefer journals with good editorial quality: not too strict in terms of format etc., but able to spot things I missed and improve on what I submitted.

I also mentioned that I had pretty much resigned myself to never publishing in the highest impact journals, because after trying with some of what I thought were good ideas, and getting rejected, it was just a waste of time. They were looking for papers from groups they knew or from certain universities, or the editorial staff just had no clue and were simply triaging.

I was also asked what I would like to do to improve journals. Top of my list here was just standardizing formats between journals within a publisher (even the ACS). For example, why does a society have X journals that all have different formats, so that if you get rejected by one journal and need to resubmit elsewhere you basically have to reformat the paper every time? I also proposed that publishers should streamline the ability to share papers between journals, so the emphasis is not on the author; for example, when submitting to a publisher you could prioritize the 2nd, 3rd, etc. journals you would like to submit to. The more I think about it now, there is probably a business model here: take a scientist’s paper and automatically submit it to journals, and if rejected it keeps being submitted until accepted somewhere. Let’s just take the human processing out of it, which, besides the reviews, probably takes a similar amount of time.

The questions made me look at my own publication history and try to understand how it has changed. When I started in the mid 90s my papers primarily went to ASPET journals, and even as I started doing modeling my papers never really veered from pharmacology journals. Then I started to publish in Pharmaceutical Research in 2005, another society journal (AAPS), then J Med Chem in 2006 (ACS), then Chem Res Toxicol (ACS) in 2008; my first open access journal paper, in BMC Evolutionary Biology with collaborators, was published in 2008, followed by one in PLoS Comput Biol in 2009. So from this point the papers start being published in many different journals, both open access and closed. It’s only in recent years, with my increase in machine learning work, that my options have narrowed substantially as to where I can submit these papers (computational journals, including ACS journals).

For one thing, the consultant kept asking whether I would be interested in, or what I thought of, ‘general multidisciplinary’ journals – he did not expand on this, but immediately PLOS ONE came to mind, as did Springer Plus and others – so my guess is yet another such journal is being proposed. Which really makes you wonder whether folks ever have any original ideas.

Then I started to mention how reading papers nowadays is so different from the past, when I read them in journals in the 90s. Now I get papers either sent to me by friends or others who think I might be interested in them, or I see a link in a blog post or on Twitter etc. I only really search for papers when working on a new project and needing to do a literature search. This is a pretty sad state of affairs, because I barely have time to read or look out for things. And yet I used to be a voracious reader of papers, and as an editorial board member I used to get paper copies of journals. That stopped happening a few years ago, so I no longer read even those journals unless there is a paper published in there and I find it on PubMed or Google Scholar etc. Now that should be a nudge to these publishers if your journal editors do not even have time to read the journal, and I am sure it’s not just me!

In summary, this whole scenario points to a few things. For myself, all the journals could be the same ‘vanilla’, as long as formats, references etc. were identical. The big handful of highest impact journals do not interest me and I rarely read them unless I have to find a paper. With the Nature Reviews journals, maybe once in a while something of interest pops up, but honestly they are not on my radar anymore. I do think publishers need to rethink, and perhaps standardize on layout and formats so it’s easier to resubmit across journals and even within publishers. Do I care if a journal is single topic or multidisciplinary – NO, I only care about quality, how my colleagues think of the journal and whether it processes submissions in a timely manner. Pretty basic – perhaps almost utilitarian. The journals are just a means to get my ideas and research out there. If I can publish in an open access journal and have the money or opportunity, I will. The journals IMHO do little to get people to read them or the articles, as that comes down to our social networks as authors. For me that is strongly brought home by recent experiences publishing in open access journals: tweeting, blog posts etc. can have a remarkable impact on who looks at your papers. At some point all journals will need to have metrics for accesses and downloads so that authors can see how they are read or not. For me that is also reinforcement for publishing in the same journal again if I know it finds an audience for my work.

So these are my most recent rumblings; it will be interesting to see how this changes over the coming years. I do sense a change over the past few years in my publishing and research strategy, and part of it comes from being overwhelmed by information, but at the same time from the challenges of research and publishing. A fresh disruptive wind needs to blow through academic publishing, and I and many others will welcome it (if it has not already started).

 

 

Sep 08

In Memoriam

It is with great sadness that we learned of the passing of Dr. Martin John Rogers this past Friday. Dr. Rogers was Program Officer, Preclinical Parasite Drug Development, Parasitology and International Programs Branch, Division of Microbiology and Infectious Diseases, National Institute of Allergy and Infectious Diseases. We were honored when he agreed to speak at our 2014 community meeting at CDD. He was a wonderful source of information and connections on neglected tropical diseases and will be greatly missed by all the researchers he worked with. Our thoughts are with his family, friends and colleagues during this time. We are very thankful that we had the opportunity to work with such a wonderfully kind and enthusiastic man.

Aug 25

PLOS paper going viral?

There is just too much going on now, with the whole Ebola virus outbreak and the ALS ice bucket challenge, to really use the word “viral”, but I will use it regardless. Last Thursday at 2pm a small editorial co-authored with Ethan Perlstein went live at PLOS Computational Biology about ten simple rules for live tweeting at scientific conferences. The response has been pretty amazing; I think it’s as close to viral as anything I have ever been a part of.

I happened to be on a 2 day vacation to recover from the previous week’s travel to the ACS, but instead devoted half a day to posting an article on this blog, tweeting and thanking the retweeters “live”. It sounds absolutely ‘nutty’ as I write this. I am a relatively sober scientist, but over the past year I have begun to follow how blog posts and Twitter can have an impact on reaching potential readers of scientific articles, in open access journals in particular. I am using open access journals because in most cases it is pretty easy to follow the metrics, like views, shares, downloads etc. PLOS are pretty handy like this. Some open access journals make the information available only to authors, but not PLOS, F1000Research etc.

Last year we had a paper on acoustic dispensing and computational modeling in PLOS ONE that achieved considerable views, primarily because of a blog post from Derek Lowe which stoked interest. To date it has over 10,300 views, 6 citations, 47 saves, 122 shares etc.; not bad going in 16 months. But I am still having a hard time seeing if it will have any lasting impact – does it make a difference?

Then comes the latest editorial / paper, and in just a few days it has over 7300 views and over 660 shares, and this is growing by the hour. Why? Both Ethan and myself tweeted out the link and had considerable retweets. I tweeted each rule and that had no effect. So I think what has happened is that Ethan’s well-connected network of 4707 followers has really made its impact felt, and they have in turn retweeted it and so on. For instance, PLOS Computational Biology retweeted to their > 4000 followers. We have helped too: personally, I have thanked many of the retweeters and in turn I have seen that this act alone converts many into new followers. Whether this continues is another matter. It’s time consuming, but this is a kind of experiment in seeing if using Twitter can actively help promote an article about using Twitter. A nice circle.

So I was wondering what this means, if anything, and how do we measure impact?

First, we picked a topic for a “Simple Rules” editorial that has hit a nerve in a really good way. While the journal is a computational biology journal, that clearly does not seem to put off people coming to it to see the article. Perhaps we would have the same impact in PLOS ONE? While the article is not in any way about computational biology, it is relevant because it is about sharing scientific insights using computational methods – namely live tweeting. Not sure this is really justification. But on another level, if we are to communicate science, whether it’s ecology, evolutionary biology or computational chemistry or biology, we have to do this optimally. At this point in time Twitter is a way to communicate that has created a pretty good-sized user base, with 271 M monthly active users. If we can get other scientists to communicate in turn what other scientists are saying and seeing at scientific conferences, we could have a massive magnifying effect. Sure, some of it will be meaningless, but we will get information out there and anyone searching for scientific topics will find it. A great analogy here is perhaps ‘scientific scrapbooking’. We piece together all the small tweets into some coherent whole, or software will make sense of what is coming out of the patchwork of data. We are part way there with ODDT being able to collect tweets on a disease or topic and being chemistry aware. What would an app or software need to have to be truly universally science aware (that is, handling structures, images, molecules, all kinds of data etc.)?

I have asked that new scientific conference live tweeters let me know when they do it. I am not expecting them to remember. So what can we do to keep the memory of the article fresh? Well, as people suggest new rules I am adding them to the article comments. Sure, this list may grow, but a bit of audience participation might be a good thing. It would be great if some journalists or a blog picked it up and spread the word, but I am not proactively doing this. We have had questions on Twitter about the ethics of sharing hashtags for conferences before they start, and some people thought our rules a bit obvious, so I think keeping the discussion going and perhaps engaging some conference organizers would be smart. If major conferences had scientists actually live tweeting, and perhaps even professional live tweeters (now there is a new job for the resume), that would certainly raise the visibility. For example, the ACS occasionally has individuals tweeting out new molecules as they are presented. But what if BIO had people tweeting live, and SLAS, and Neuroscience (insert your favorite science conference here)?

Now back to doing science and occasionally Tweeting about it.

 

Aug 21

Anatomy of a PLOS Computational Biology Paper

I think the following is a fair representation of what kicked off the very short editorial paper published today in PLOS Computational Biology. In addition, the timeline gives an idea that coming up with the manuscript was quick relative to publication – but isn’t that always how it is: the idea is easy relative to getting it published.

1. Feb 11-13 attended the Lysosomal Disease Network 10th Annual World Symposium in San Diego – I was live tweeting from talks and posters as well as presenting a poster on ODDT and rare diseases. Walking to dinner one night with Ethan Perlstein, we discussed the lack of live tweeters and the thousands of patients globally that could benefit from hearing what was going on at the meeting. We discussed the idea of writing a paper on how to live tweet at such scientific conferences.

2. Feb 12-13 we sent a couple of draft emails to each other called “10 simple (10 commandments) rules of tweeting at scientific conferences” and also did a literature search for other guidelines on tweeting.

3. March 3rd we had a final draft paper and it was submitted to PLOS Computational Biology.

4. March 4th – I had to change article type to an editorial and update financial disclosure.

5. April 10th – first automated email telling me paper was in review.

6. April 30th  – reviews received

7. May 11th – Corrections submitted

8. May 22nd – June 17 multiple communications with Editor and PLOS staff to find out if paper is now acceptable.

9. June 18th – paper accepted

10. June 26 – July 14 – had to work on a new image, as we had submitted one of a phone displaying the Twitter logo, a room packed full of attendees at a conference and a picture of the world, which needed to be CC BY 4.0. At this point I engaged my cousin Neil Dufton, who has a talent for illustration. However, we also benefited greatly from assistance at the journal to increase the resolution, which always seems to be an issue with the figures I submit here. Image resolution is not my strong point.

11. July 24th – edited the copy-edited manuscript.

12. August 21st – published.

Our goal with this is to reach people who have never tweeted at a conference before and maybe this will serve as an entry to live tweeting.

If there are other useful conference tweeting resources we missed please let us know and I will post on the PLOS comments page.

We hope this will represent a basic primer for tweeting at science conferences. It is easy, it is fun and you get to know people all over the world that appreciate your sharing what you heard and putting it in a tweet.

 

 

 

 

 

Aug 18

Rare diseases collection at F1000Research

I have mentioned previously that I have the honor of editing a collection of articles for F1000Research. So here is your chance: if you are in the rare disease community and you have something you want to publish, do it now! Having published previously in F1000Research with collaborators, I found the process simple compared to other open access journals. The editorial feedback on the article was excellent and the team behind the journal are very responsive. So if you have any traditional research articles (including clinical trials), as well as reviews, data notes, observation articles, case studies, method articles or software tools (for details see the author guidelines) on rare diseases, please submit them.

I am looking forward to seeing if we can make this into something more than a rare disease collection – maybe something more regular that highlights rare disease research in an open and accessible fashion. As I have said before, I think open publications could transform rare disease research.

Aug 15

Déjà vu – Another family affected by a rare disease trying to find a cure

During my trip to San Francisco this week I had the opportunity and honor to meet with Matt Wilsey, one of the parents of a child (Grace) with a rare disease and the President of the Grace Wilsey Foundation. The diagnosis of Grace’s NGLY1 deficiency was recently highlighted in an excellent article in the New Yorker by Seth Mnookin. The article also touches lightly on some of the issues with collaboration, competition between groups and publication. Matt and another parent of an NGLY1 child, Matthew Might, published a commentary in January on clinical diagnostics and next generation sequencing. In all of this it is important to note that without these parents pushing there would be no understanding of NGLY1; all the scientific progress is due to their fundraising, their funding of research and their persistence.

So I met up with Matt in a coffee shop across from the Powell St. BART stop for about 40 minutes. The day before I had given a talk at the ACS meeting to a small group on why there needs to be open data for rare and neglected diseases. What do I know? This is where it gets fuzzy, as on one hand I am a scientist trying to piece together the fragments of biology and chemistry and find connections that might be useful. On the other, I have now become a de facto rare disease advocate and champion. I sit in an uneasy position. It’s not something I can take lightly. Chatting with Matt, half hidden behind the largest bottle of water imaginable, I had a déjà vu moment, seeing the same phenotype time and time again with some rare disease patients/parents. Their optimism and their knowledge of everything and everyone associated with the disease are worn front and center. That is in itself a superpower that can open doors. They can say and do things regular scientists could not. They can doorstep high ranking officials at the NIH, FDA and Congress, and they can get time with the busiest Nobel prizewinners and CEOs. These rare disease parents have an unwillingness to just accept the disease diagnosis; they will meet whoever it takes to unearth a piece of the pathway to the cure. This pathway may be long, as Matt presented me with a heavy business card listing the three-line-long NGLY1 phenotype. All of these rare and ultra rare diseases are frightening in what they do to the patients and families.

As I sat there listening, I was mesmerized by the fact that this disease shares some similarities and symptoms with some of the other rare diseases I am acquainted with. The affected enzyme cleaves sugars from proteins, the symptoms include peripheral neuropathy, and there may be proteasomal involvement. After the meeting I introduced Matt to some of the parent advocates who have taken a similar pathway for their own diseases. While Matt is literally at the beginning, I also sense that he and others can go faster if they can learn what works and what does not.

So this goes full circle to my talk on needing open data. If scientists could freely share their data and reagents, faster progress would be made and any unnecessary repetition could be avoided. How do we catalyze this? I sense this could be very important, because what may be learnt from another rare disease might help NGLY1. Conversely, data from this disease may help others.

Later that day I sat at the ACS in a room packed wall to wall with hundreds of medicinal chemists, listening to some of the most senior medicinal chemists at the biggest pharmaceutical companies in the USA. The diseases they mentioned were dominated by cancers, the targets were predominantly kinases, and in general there was barely a mention of rare diseases. There was some occasional mention of collaboration but nothing obviously on openness. So here we are with a great opportunity: put chemists to work trying to develop chaperones for diseases like NGLY1, or thinking of ways to get enzyme replacement therapies to target where they need to go to work in humans. Sitting there hearing that more and more medicinal chemistry expertise is leaving our shores for China did not fill me with hope, but it did get me wondering: what if we could convert a few big pharma chemists to the rare disease cause?

I was kindly introduced to Matt by Ethan Perlstein, who is himself doing his bit to help rare disease research, including NGLY1. Curing 7000 rare diseases may take a few good medicinal chemists and champions that can make things happen; parents like Matt would be eternally grateful.

 

Aug 14

A poster and 3 more ACS talks

Tuesday and Wednesday at the ACS were pretty full up with meetings, talks and posters.

On Tuesday I presented a poster entitled Progress in computational toxicology, which generally shows pretty good agreement between different machine learning methods across various toxicity datasets. It also served to highlight a new tool we have developed with Alex Clark and Krishna Dole called CDD Models. This is a beta version of an open source Naive Bayes method with open source FCFP_6 fingerprints, so you can now build models in your secure CDD Vault. This work was funded by an NIH NCATS SBIR and we still have a way to go – the ultimate goal is to enable sharing of models so that, if desired, they can be made open source too.
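To give a flavor of what that means in practice, here is a minimal sketch. It is not the CDD Models code; RDKit’s feature-based Morgan fingerprints (radius 3) stand in for FCFP_6, and the molecules and labels are invented. The point is that a Naive Bayes model built on such fingerprints boils down to a small set of arrays that are easy to export and share openly alongside the data.

    # Sketch only: FCFP_6-style fingerprints + Naive Bayes, with the fitted model
    # exported to JSON so it could in principle be shared as an open model.
    import json
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.naive_bayes import BernoulliNB

    def fcfp6(smiles, n_bits=1024):
        """Feature-based Morgan fingerprint, radius 3 (roughly FCFP_6)."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=n_bits, useFeatures=True)
        arr = np.zeros((n_bits,))
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    # Toy training set of (SMILES, active?) pairs - purely illustrative
    data = [("CCO", 0), ("c1ccccc1O", 1), ("CC(=O)O", 0), ("c1ccc2ccccc2c1", 1)]
    X = np.array([fcfp6(smi) for smi, _ in data])
    y = np.array([label for _, label in data])
    model = BernoulliNB().fit(X, y)

    # The fitted model is just arrays of log prior/feature probabilities,
    # so it can be serialized and shared alongside the training set.
    export = {
        "classes": model.classes_.tolist(),
        "class_log_prior": model.class_log_prior_.tolist(),
        "feature_log_prob": model.feature_log_prob_.tolist(),
    }
    print(json.dumps(export)[:100], "...")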

First up on Wednesday was a provocative talk on “Examples of how to inspire the next generation to pursue computational chemistry / cheminformatics”. This talk was inspired over a year ago by thinking about how to encourage children to interact with molecules, after watching them play with periodic table apps that show videos of the elements.

The second talk was “applying computational models for transporters to predict toxicity“.  This highlighted about 5 years of work in a 20 min talk – all achieved through collaborations with experimental groups.

The final talk, “New Target prediction and visualization Tools incorporating Open source molecular fingerprints for TB Mobile version 2”, was a summary of the recently published work from Alex Clark, Malabika Sarker and myself. The app now has a heavy dose of cheminformatics (Bayesian models, clustering etc.) and is pushing the boundaries for a chemistry app that combines cheminformatics and bioinformatics information as well.

Well, until Boston in 2015... let’s see what I will have to present for that one.

 

 

 

Aug 11

ACS in San Francisco – 6 talks and a poster

For a few days I am in SF presenting several talks. In all but one case I plan to post the talks.

Yesterday’s talk was on collaborations. The prior speaker did not turn up so I gave it twice.

Today I gave two presentations. One was on TB mimics. This work is the subject of some papers that need to be submitted, so it may be a while before the slides are made available.

The second talk was on the need for open data for rare and neglected diseases.

Over the next 2 days I have a poster and 3 more talks to go.

Putting these talks together is a challenge because you just never know who is in the audience. It’s also quite important to give new fresh presentations and discuss some topics that may be a bit provocative.

Obviously the link between all the talks is really collaboration of different flavors, whether that is large scale, small scale, closed, semi-open or fully open. The ACS also pretty much ensures that there will be some interesting discussions, which is probably just as important for someone like me who works from a home office most of the time. Getting out and engaging the audience is important.

One thing I took away today from a talk by Amy Beisel (Research Square) is the importance, as a journal editor or reviewer, of reaching out to non-English-speaking authors and trying to be more sympathetic to the fact that their understanding would be enhanced if we presented journal information and our review comments more clearly. This also entails writing shorter sentences.

The next post should discuss how the rest of the meeting goes and summarize it.

 

 

 
