Last Tuesday and Wednesday (Sept 13-14) I was very honored to be an attendee at the “Progress Through Partnership: The NINDS 2016 Nonprofit Forum”. My role was to provide an industry perspective on several panels (alongside Dr. Ronald Marcus, Cerecor). There were a large number of rare disease advocates (patients or parents), academic scientists, and of course a large number of NINDS attendees. Going into this with no previous experience of the meeting, I was a little unsure what I could add, but as I sat there listening I realized I had a unique perspective. I sit in an uneasy space as a scientist, a rare disease advocate, and also someone who sees how small business can help. I am also impatient, so at any hint that the science is not moving, or that perhaps something is being missed, I was on it. Apologies if this comes across as blunt, but frankly, with many diseases the progress has been too darn slow because some key opinion leaders basically own diseases and lack the willingness to embrace new technologies or approaches that have been used elsewhere and could make the difference, for example as outcome measures for clinical trials. Below are some simple notes that I made (apologies if I missed anything while I was on the panels on day 1); I also live-tweeted the meeting, and have now storified those tweets.
Key points from Dr. Walter Koroshetz’s introduction: there were 161 registrants at this 10th nonprofit forum. He suggested we need to empower patients and power trials. He mentioned a vibrant translational program with Dr. Amir Tamiz. The number of grant applications has increased by 25%, and the average cost of a grant is $379k. He mentioned reproducibility as one of the key policy issues. NINDS needs to support basic research (27% of budget). Some of the NINDS programs were mentioned, including IGNITE, CREATE, and BLUEPRINT, as well as specific RFAs such as the Parkinson’s biomarkers program and the Accelerating Medicines Partnership for Parkinson’s Disease.
Panel 1 – Lessons learned and case studies in natural history databases
Steve Kaminsky (International Rett Syndrome Foundation) explained that his foundation saw that 4 clinical sites would not be enough for recruiting, and therefore built travel clinics (growing from 4 to 11 sites). He described the need for partnerships with NIH programs. Other rare diseases are now in the program, with up to 15 sites using the infrastructure. The data created belongs to the nation, is protected, and the 15 sites have access. Pharma has to go through the PI to get access, and the general public can interface with investigators who can study it (what if the patients themselves wanted to study it? He did not seem to allow for this scenario). The database was part of the RDCRN. He mentioned it was too expensive to keep the database going. Over 7000 data fields for 1234 patients have resulted in publications of impact on sleep, effectiveness of drugs, diagnostics, anxiety, etc., all from analysis of 12 years of data. He suggested the future is through the past, and the past is natural history studies.
Michael Shy (University of Iowa) described Charcot-Marie-Tooth disease and the more than 90 genes identified. The INC RDCRN is in its 8th year. They partner with advocacy groups including MDA, CMTA, and groups in the UK and Asia, and have 20 centers involved. He then focused on the CMT neuropathy score, described natural history studies as a science, and explained how with CMT1A, measuring change that is actually detectable is important. He described frequent calls with researchers (I wondered: are the calls with the RDCRN open to any advocacy groups, or just close collaborator groups?).
Steve Roberds (Tuberous Sclerosis Alliance) described how the alliance owns its database. He discussed the difference between clinic-entered vs patient-entered data and getting input from scientists on data collection. Getting information from clinical data and electronic medical records was also described, and there was some discussion around what happens after the end of a grant, who owns the data, and the long-term sustainability of a database. Such databases present opportunities for partnering with industry (e.g. he described a collaboration with Novartis), who can fund changes to the database and can access it without having to build it. The alternative was Novartis Europe, who collected their own data in Europe via a CRO and learned what they had so the elements matched the US database. Steve also mentioned linking to biosamples and how, with highly variable disease features (e.g. epilepsy, tumors, etc.), this can add value to the database. He also described how custom-built databases could be made simpler and could facilitate analysis, in order to get findings and papers out and in turn encourage utilization. You have to start a natural history study sometime, and it is a long-term commitment.
Petra Kaufmann (NCATS) discussed disease registries vs natural history studies, and the need for forward-looking data management coordination so that surveys can go out fast. She also mentioned that NCATS is launching a toolkit for rare disease foundations as an online portal.
Panel 2. Data integration and data management: challenges and opportunities
Paul Gross (Hydrocephalus Association) raised questions to be answered and discussed how to engage clinicians and partner with sites that lead in electronic medical/health records, for example by finding clinicians on Epic’s (steering) board, since these board clinicians can advocate for inclusion of data sets.
Greg Farber (NIMH) described the national data archive at NIMH, which houses data from 130,000 human subjects, including 800 TB of image data. Researchers can submit an application, and summary data is available to anyone with a browser. NINDS has the FITBIR database. The archive holds data funded by other groups and uses common data elements in a data dictionary. Every 6 months they request data from NIH grantees and validate it. They use a global unique identifier, a hash code that can aggregate data on the same subject from multiple labs, which makes the data potentially discoverable. Greg also mentioned the challenge of professional clinical research subjects and how to identify them and remove them from clinical trials. The database is also citable and linked to the literature.
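The global unique identifier Greg described is, in essence, a one-way hash derived from a fixed set of personal identifiers, so two labs that enroll the same subject independently arrive at the same pseudonymous ID without ever exchanging the underlying personal data. A minimal sketch of the idea in Python (the field choice, normalization, and hash truncation here are my own illustrative assumptions, not the actual NDA GUID algorithm):

```python
import hashlib

def make_guid(first_name: str, last_name: str, dob: str, sex: str) -> str:
    """Derive a deterministic pseudonymous ID from personal identifiers.

    Normalizing the inputs first means small formatting differences
    between sites (case, stray whitespace) still map to the same GUID.
    NOTE: fields and normalization are illustrative assumptions only.
    """
    normalized = "|".join(
        part.strip().upper() for part in (first_name, last_name, dob, sex)
    )
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]

# Two labs entering the same subject with different formatting
# converge on one ID, so their records can be aggregated.
lab_a = make_guid("Jane", "Doe", "1990-01-31", "F")
lab_b = make_guid("  jane ", "DOE", "1990-01-31", "f")
assert lab_a == lab_b
```

The useful property is that the hash is deterministic but not reversible: the archive can link records across labs without ever storing the identifiers themselves.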
Shawn Murphy (Harvard Medical School) described how to get large sample sizes for trials by creating a large warehouse from the EHR: 6.7 million patients in such a database translate to 2.5 billion facts. Data in the EHR is not very accurate; for example, you can test for a diagnosis using codes and notes in the medical records. Finding patients by query construction in the database was also described, and he mentioned that there are over 5000 registered users of the tool. He had an NIH grant for i2b2 to create a community of developers; the Hive develops new tools. He also described using phenotyping algorithms to define cohorts of treatment-resistant vs treatment-responsive patients, with the goal of creating gold-standard training sets so the data could be used to build a classification algorithm to predict depression and improve detection. The CTSA at each site forms networks to combine the data and perform big data queries. For example, what is a normal child? Normal MRIs provide visual guides and can be used to make new clinical decisions. He mentioned SHRINE and SMART software as well.
Tamara Simon (University of Washington) described using databases to look at CSF shunt complications. She used the PHIS database, which is used across 40 children’s hospitals. She mentioned the hydrocephalus core data project, which provided a comprehensive, prospective study involving detailed data across 15 forms and 413 questions. This also enabled her to address which patient factors were associated with shunt failures, infection, etc. The PEDSNet database was also mentioned.
Panel 3. Strategies for biomarker identification
Shashi Amur (FDA) described resources and tools, and the different categories of biomarkers (risk, prognostic, etc.) used in drug development, as well as surrogate endpoints. Enablers for endpoints included data quality and assay and imaging protocols. She also explained the difference between qualification and validation. To date, 13 unique biomarkers have been qualified by the FDA, out of 28 submissions. (She did not mention how long it takes to process qualifications, or what the cost is.) She did describe natural history data as useful for prognostic biomarkers, using strength of evidence.
Other speakers in this session included Drs. Katrina Gwinn (NINDS), Hao Wang (NINDS), and Petra Kaufmann (NCATS), who discussed agreeing on biomarker terminology and the need to make sure you are going after the right things for biomarkers. The heterogeneity of patients with disease was mentioned, and consortia were described as a way of bringing groups together to look into biomarkers and stop reinvention. Standardizing and training staff to decrease noise, in order to get the right treatment to the patient, was also brought up. Standardization for biospecimens was thought important, as was the need to improve the quality of publications and the data that has to be associated with them. There was some mention of precompetitive efforts for biomarkers and the overall need for teamwork.
NIH101: priority setting, decision-making, NIH basics and discussion
Alan Willard (NINDS) presented on program considerations and the need to fund early-stage investigators, concept clearance for proposed solicitations, and influencing what they solicit through RFIs. He also alerted us to how people can volunteer for peer review by going to the CSR website to offer their expertise. There was also some mention of OnPAR as a way of alerting other foundations to applications that were not funded but may be worth funding by others.
Panel 4. Developing better clinical outcome measures
Ron Bartek (Friedreich’s Ataxia Research Alliance) moderated this session, which included Michael Shy (University of Iowa), who described outcome measures such as CMTNSv2 and CMTPEDs (but did not describe how, or whether, the scores are accepted by the FDA). He also described the disability severity index and the PCMT-QOL CMT health index, and a recent paper that developed an MRI biomarker, published in Lancet Neurology in 2016. Carsten Bonnemann (NINDS) again mentioned GAN and the importance of using outcome measures from other disorders and combining known outcome measures with exploratory measures. Ray Dorsey (University of Rochester) suggested the need for novel outcome measures.
He illustrated this with the fact that we still use a method over 100 years old for scoring Parkinson’s disease, and proposed novel sensors, remote monitoring, wearables, implantables, etc. as alternatives. Another example, from 2015, was how voice recording can detect Parkinson’s, parkinsonism, and related disorders; 15,000 people took part in a mobile-app-based study, and it is also possible to measure pharmacological improvement in an app. Huntington’s disease and step time was yet another example. He also mentioned the importance of measuring patients’ function in the home environment, and he made an excellent comment at the end of his presentation about some thought leaders embracing technology (while many others do not).

Jacob Kean (University of Utah) discussed improving precision in patient-reported outcomes, using an adaptive approach to selecting measures and short forms. He also described how the NIH invested in PROMIS, and the importance of tracking registry participants.

Matthew Goodwin (Northeastern University) gave a very animated presentation which alerted the audience to the stickiness of technology and how wearables can give very clear kinematic signatures. Examples he used included tracking heart rate in autism, with repeated measures in which you can see groups clustering. Also described were differences between hyper- vs hypo-aroused patients, and how in epilepsy you can detect seizure onset with wearable devices that can differentiate tonic and clonic signatures. There is also the potential to have devices combine information with elements of citizen science for engagement. Some of the drawbacks include the cost of data analysis, in which case sampling might be a good way to find the signal, and there is a need for researchers who can do both the modeling and the health science. Increasingly, companies are wanting to use mobile technologies in clinical research.
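The tonic vs clonic distinction hints at how simple the signal-processing core of such a device can be: tonic (sustained contraction) activity produces a relatively steady signal, while clonic (rhythmic jerking) activity oscillates, so even a zero-crossing rate per window separates the two. A deliberately toy sketch in pure Python (the simulated signal, sampling rate, and threshold are my own assumptions, not any actual device’s algorithm):

```python
import math

def zero_crossing_rate(window: list[float]) -> float:
    """Fraction of adjacent sample pairs whose sign flips."""
    flips = sum(1 for a, b in zip(window, window[1:]) if (a >= 0) != (b >= 0))
    return flips / (len(window) - 1)

def classify_window(window: list[float], threshold: float = 0.05) -> str:
    """Label a window 'clonic' (oscillatory) or 'tonic' (steady)."""
    return "clonic" if zero_crossing_rate(window) > threshold else "tonic"

FS = 100  # samples per second (assumed)
# Simulated accelerometer trace: 2 s of steady tonic offset, then
# 2 s of 5 Hz rhythmic jerking around zero (clonic-like).
tonic = [1.0 + 0.01 * math.sin(2 * math.pi * 0.5 * t / FS) for t in range(2 * FS)]
clonic = [math.sin(2 * math.pi * 5 * t / FS) for t in range(2 * FS)]

labels = [classify_window(tonic), classify_window(clonic)]
# → ['tonic', 'clonic']
```

Real devices obviously use far richer features and validated models, but the sketch shows why wearable signatures can be “very clear”: the two phases differ in a statistic you can compute in a few lines.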
Other elements are captured in the tweets used in the storify below.