dc.contributor.author: Freire, Juliana
dc.contributor.editor: W. Aigner, P. Rosenthal, and C. Scheidegger
dc.date.accessioned: 2015-05-24T19:39:50Z
dc.date.available: 2015-05-24T19:39:50Z
dc.date.issued: 2015
dc.identifier.uri: http://dx.doi.org/10.2312/eurorv3.20151143
dc.description.abstract: Ever since Francis Bacon, a hallmark of the scientific method has been that experiments should be described in enough detail that they can be repeated and perhaps generalized. When Newton said that he could see farther because he stood on the shoulders of giants, he depended on the truth of his predecessors' observations and the correctness of their calculations. In modern terms, this implies the possibility of repeating results on nominally equal configurations and then generalizing the results by replaying them on new data sets, and seeing how they vary with different parameters. In principle, this should be easier for computational experiments than for natural science experiments, because not only can computational processes be automated but also computational systems do not suffer from the "biological variation" that plagues the life sciences. Unfortunately, the state of the art falls far short of this goal. Most computational experiments are specified only informally in papers, where experimental results are briefly described in figure captions; the code that produced the results is seldom available; and configuration parameters change results in unforeseen ways.
dc.publisher: The Eurographics Association
dc.title: Reproducibility Made Easy
dc.description.seriesinformation: EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)
dc.description.sectionheaders: Reproducibility in Scientific Visualization
dc.identifier.doi: 10.2312/eurorv3.20151143
dc.identifier.pages: 13-14

