
Feb 06

Reviewing the robot scientist

A couple of days ago I was contacted by a freelance science writer (Andy Extance) to review an embargoed article. I had a short deadline, so I read the paper as if I were reviewing it. I then sent an email, and the next day Andy distilled my thoughts into a quote. I was not paid to do this.

The article by Andy is now in Scientific American. What I still cannot understand is why they are highlighting what is a pretty weak study.

Here is my full email, from which I was quoted. Consider this an open review of the paper, because if the journal had sent the paper to me I would have submitted something close to this. I am pretty concerned about the kind of science that gets highlighted by the press looking for good sound bites, which is likely being fed these stories by the funding bodies and universities as a way to gain more support. The vicious cycle perpetuates. I am in favor of new technologies, but it has to be good, sound science.

Dear Andy,

I read the paper. I was cringing all the way through it. Awful. Why on earth is Sci Am interested in this? Is it because they published on Adam in Science in 2009? If so, what have they done since then?
My apologies, but the funding bodies must have been smoking crack to award these folks money. It saddens me to think that this kind of stuff passes for science in the UK. Are my fellow countrymen so enamoured of anyone who talks about active learning, quotes Lewis Carroll and Peter Medawar, and mentions semantic data models? I am just surprised Turing was not quoted for good measure.

The paper (if I dare call it that) is all smoke and mirrors and will get dismissed by anyone who actually reads it.

They talk about drug screening and assays as if they are experts – comments like “brute force and unintelligent” will win them zero friends. The parts on QSAR are just so basic as to be laughable. Take a look at the refs cited in the paper; they are generally pretty weak examples and in many cases long out of date. Economics/econometrics is thrown in as though folks doing screening really care about the costs.

To the science: pretty much every neglected-disease effort has moved to whole-cell phenotypic screens and away from target-based screens, which makes you seriously wonder why they focused on a target, other than ease.
The reasons they focused on validating Eve on neglected diseases are just odd; if they wanted to do something that would impact big pharma, why not go after a big disease that pharma really cares about and show how much faster you find hits, and at a fraction of the cost?
To find a compound that is active against malaria is great, but this compound was already known to be active. It's not even clear that it came out of their wonderful screening–modeling–AI–economically-sound approach.

To put this into perspective, GSK released about 14,000 malaria hits a few years ago, all whole-cell data, and approximately 1,000 had IC50 values lower than TNP-470. Novartis released data on 5,700 cpds, and over 700 of these have IC50 values lower than TNP-470. Admittedly these data are in Plasmodium falciparum, but the point is that there are hundreds if not thousands of examples of more active cpds.

I am not at all convinced that Eve can do QSAR – where are the correlations and statistics on the models it built? They should have been asked to prove the models were actually sound. Most QSAR papers will use additional data as a test set to validate the model; very rarely does this get fed back into the model as the authors suggest, which to me is another indicator of a lack of knowledge or understanding of what they are doing. The authors do not show any enrichment data, receiver operating characteristic (ROC) curves or even confusion matrices.
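For reference, here is a minimal sketch (mine, not the authors') of the kind of held-out validation I would expect any QSAR classification model to report. The fingerprint matrix X and activity labels y below are random stand-ins just to make it runnable; real work would use actual descriptors and assay labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1024))  # stand-in for 1024-bit fingerprints
y = rng.integers(0, 2, size=500)          # stand-in active/inactive labels

# Hold out a test set that is never fed back into the model
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("Confusion matrix:")
print(confusion_matrix(y_test, model.predict(X_test)))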

Where are the metrics on the quality of their assays? Even the title of the paper is misleading – I would not call TNP-470 an approved drug, and they only showed activity of one cpd against one disease.
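If it helps, the standard assay-quality metric I have in mind is the Z'-factor (Zhang et al., J Biomol Screen, 1999), computed from positive- and negative-control wells; the control values below are invented purely for illustration.

import numpy as np

pos = np.array([95.0, 98.2, 96.5, 97.1, 94.8])  # hypothetical positive-control signals
neg = np.array([4.1, 5.3, 3.8, 6.0, 4.9])       # hypothetical negative-control signals

# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|; above 0.5 is generally screening quality
z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(f"Z'-factor: {z_prime:.2f}")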

The obvious experiment was not tried: go head to head with the standard approach, giving the screening data to a real computational chemist who would build the machine-learning or QSAR models, predict new compounds and then get them tested. Without this I am unconvinced of the utility of this approach. Similarly, there was no comparison of different QSAR or machine-learning approaches. Why were classification models not tried? Most HTS data is single point and perfect for binary models; they could have avoided hit-confirmation stages and just flown through the screening–modeling cycles without needing to reconfirm.
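To be concrete about why single-point data suits binary models: percent-inhibition readouts collapse into active/inactive labels in one line, and the model can keep iterating without dose-response follow-up. The 50% cutoff and the readouts here are my own invention, not the paper's.

import numpy as np

percent_inhibition = np.array([12.0, 87.5, 45.2, 63.1, 5.4])  # made-up single-point readouts
labels = (percent_inhibition >= 50.0).astype(int)  # 1 = active, 0 = inactive
print(labels)  # -> [0 1 0 1 0]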

Most people, when sharing molecules and structures, make their data available as SDF files that other modelers can use; apparently not these folks. These are computer scientists trying to do drug discovery, and it shows.
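For what it is worth, exporting structures together with their data takes only a few lines with an open toolkit like RDKit; the molecule and activity value below are stand-ins of mine.

from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a stand-in structure
mol.SetProp("_Name", "example_hit")
mol.SetProp("IC50_uM", "1.5")  # hypothetical activity value

writer = Chem.SDWriter("hits.sdf")  # SDF keeps structure and data together
writer.write(mol)
writer.close()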

The only ray of hope I saw in the whole paper was the use of acoustic dispensing, so at least they may have fewer issues with cpds leaching from tips or hydrophobic cpds sticking to tips.

Dare I even address the pie-in-the-sky conclusions? Eve should go back to the Garden of Eden and leave drug discovery to scientists who know what they are doing.

I would have rejected the paper.
