Hacker News

This is excellent. I have a few questions:

> You need to provide the background of your study, the types of experiments undertaken, the materials and methods, and initial results of your study.

Do the technicians reproducing the results get to see the initial results? It seems like it might be more accurate if they didn't. Lots of parameters can be fudged and adjusted, a la Millikan's Oil Drop, when the results don't quite match. I imagine this might be exacerbated with the necessity of researcher-validator communication.

How are conflicts resolved? If my results are not validated, someone made a mistake - me or the validator. If both parties stand by their mutually incompatible results, where does it go from there? I can imagine a lot of researchers I know feeling annoyed that someone whose expertise they cannot verify (due to anonymity) won't "do my experiment correctly".

I imagine that in time there might be specific requirements or explicit funding allocations for such reproduction on grant applications, which would really allow it to take off. As it stands, I imagine a lot of PIs would just ask "hmm, I can spend money that might risk my already high-impact paper, or I can keep the money and not be considered wrong."

Still, this is a great first step toward facilitating a central tenet of the scientific method. Congratulations.



This is Bilal from Science Exchange, and we greatly appreciate your support for our initiative!

We will provide the methodology of the original study to those reproducing the results; we believe the original results will also be helpful to check against.

For conflicts, it's true we can't force an investigator to publish or note the lack of reproduced outcomes. We do hope they will, through the PLOS Collection, for transparency. We do feel, though, that it can provide a valuable check for "failing fast" for those investigators who want robust results.

In this initial stage, we also agree that funding will be difficult. That's why we hope to focus on small biotechs and research labs that are interested in commercializing their research, and need to show robustness of results for licensing opportunities.

Hopefully then, it can serve as a proof-of-principle for funding agencies to provide a requirement or increase support for reproducibility.


I think this will be a very valuable service for small biotechs. It would improve the value of their patents and increase chances of getting further funding.

As an experienced scientist myself, I can say there is plenty of scope for misunderstanding and misinterpretation, no matter how careful the original authors were. So I only hope that there will be some (maybe blinded) mechanism for communication between the original researchers and those replicating the published findings, in the event of the "usual" complications.

Also, who would do this replication? Would some of it be outsourced to academic labs with the requisite experience? Advanced findings often depend on advanced techniques. Outsourcing could be problematic, politically, since Big Shot No.1 might not be too interested in shooting down his pal, Alpha-MD-PhD.


Thank you for the support!

To clarify, the validation studies will be matched to core resource facilities and commercial research organizations, which specialize in conducting certain experiments on a fee-for-service basis. As they are paid upon completion of a service, regardless of outcome, we feel they are the solution to many of the misaligned incentives in academic research.

With respect to communication between the original authors and those conducting the validation, we definitely agree there needs to be some degree of communication, given the complexities of research. We will initially match a researcher to their provider in a blind fashion, so they have no choice in who conducts their study. But once a provider is selected, the two can communicate with one another to explain the methodology, experiments, etc.


It's probably a bit late to respond to this... However, given that this work is likely only to be undertaken when the stakes are high, I would think that blinded communication between the original researchers and those replicating the work would be a good idea. If they are going to shoot it down, they probably don't want the original people to know that they did so.


Bilal, I just want to thank you wholeheartedly for your, and your team's, effort! I'm a PhD student in engineering, where, as far as I can tell, it is even worse. I love academia, but the fact that we don't even attempt to live up to the standards we supposedly agreed on makes me sad.

Especially in computer-focused areas like CS or Statistics, it would be straightforward to submit more or less completely self-reproducible papers. Alas, it is seemingly impossible to find an adviser who will allow you to publish all your source code and primary data, let alone spend the additional time to get them into a publishable form.
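To make the "self-reproducible" idea concrete, here is a minimal sketch (my own illustration, not anything from the initiative) of what a computational paper's analysis script can look like: every source of randomness is pinned to a committed seed, results derive only from code and data in the repository, and one command reprints the exact numbers from the paper so a reviewer can re-run and diff. The names and seed here are hypothetical.

```python
# Minimal sketch of a self-reproducing analysis: fix the random seed,
# avoid hidden global state, and print the exact figures reported in
# the paper so anyone can re-run the script and compare output.
import random
import statistics

SEED = 42  # hypothetical seed, committed alongside the manuscript


def run_analysis(seed: int = SEED) -> dict:
    rng = random.Random(seed)  # isolated RNG; no reliance on global state
    sample = [rng.gauss(10.0, 2.0) for _ in range(1000)]
    return {
        "n": len(sample),
        "mean": round(statistics.fmean(sample), 4),
        "stdev": round(statistics.stdev(sample), 4),
    }


if __name__ == "__main__":
    # Re-running with the same seed reproduces the reported numbers.
    print(run_analysis())
```

The point is less the specific code than the discipline: if the seed, data, and environment are all committed, "reproducing the paper" reduces to running one script, which is exactly why the lack of published source code stings in these fields.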


You're welcome! We completely acknowledge the difficulty in reproducing scientific research, across disciplines. We are focused on preclinical biological research for now, but hopefully in the future we can expand to other areas like CS and Statistics.



