Ginsparg on peer review

“If we were to start from scratch today to design a quality-controlled archive and distribution system for scientific and technical information, it could take a very different form from what has evolved in the past decade from pre-existing print infrastructure. Recent technological advances could provide not only more efficient means of accessing and navigating the information, but also more cost-effective means of authentication and quality control. I discuss relevant experiences of the past decade from open electronic distribution of research materials in physics and related disciplines, and describe their implications for proposals to improve the implementation of peer review.”

http://people.ccmr.cornell.edu/~ginsparg/blurb/pg02pr.html

Products & Timelines

Here are the products we will be creating, with a rough timeline and who is taking the lead:

  • IEE Special Issue: an ongoing section on the Future of EEB Publication
        – Lonnie & Chris take the lead
        – Stefano has the lead article; we all contribute pieces from the meeting
        – Papers solicited soon for October submission?

  • Survey of gaps in EEB publishing, with a focus on readiness for the next thing and on reputation metrics
        – Carol & Amber taking the lead
        – Chris filing the IRB application
        – Launch in October
        – Carol has an army ready to process the data

  • Meeting poster at ESPI
        – Amber and Bruce taking the lead
        – $5K award possible
        – July 20th

  • eeBeta/ecoHere/needsAName: a CommentPress-style preprint/working-paper/micro-article platform with a Stack Overflow-style reputation system (see the sketch after this list)
        – Archive of working scholarly products
        – Ed and Cameron taking the lead, with Jarrett & Mark in the loop
        – August 25-26 PLoS hackathon with either Ed or someone like Brian Glanz or Steve
        – Goal is to have a working beta in testing before the 2nd NCEAS meeting in January, and to launch afterwards
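
As a rough illustration of what a Stack Overflow-style reputation system could look like under the hood, here is a minimal Python sketch. The event names and point values are hypothetical placeholders for discussion, not anything we have decided:

    from dataclasses import dataclass, field

    # Hypothetical point values, loosely modeled on Stack Overflow's defaults.
    POINTS = {"upvote": 10, "downvote": -2, "accepted_review": 15}

    @dataclass
    class Reviewer:
        name: str
        events: list = field(default_factory=list)  # voting events received so far

        @property
        def reputation(self) -> int:
            # Sum the points for every event, floored at zero so a new
            # reviewer can never end up with negative reputation.
            return max(0, sum(POINTS[e] for e in self.events))

    r = Reviewer("demo", ["upvote", "upvote", "accepted_review", "downvote"])
    print(r.reputation)  # 33

One nice property of this shape: reputation is a pure function of logged voting events, so the point values can be re-tuned later without migrating any stored scores.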

Experimental Product: The First Step

What are we going to build as a first step and experiment, to test how people respond to different models of scholarly communication? The decision, based on the comparison matrix for our works-in-progress server:

  • CommentPress-style layout
  • Stack Overflow-style commentary & scoring system
  • Authoring environment
  • Possibly pull in reviewing features from Annotum as well
  • Exploit the XML model from Annotum to get semi-structured publications (see the sketch after this list)
  • Hopefully connect this to Handles, with an eye towards DOIs
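
To make "semi-structured publications" concrete: Annotum builds on the NLM/JATS journal-article XML, so each working paper could be stored as a small JATS-like record with a persistent-identifier slot that holds a Handle now and a DOI later. The Python sketch below is illustrative only; the element names follow JATS, but the specific fields are our assumption, not a spec:

    import xml.etree.ElementTree as ET

    # Build a minimal JATS-like article record. "doi" is a standard JATS
    # pub-id-type; "handle" is used here as an illustrative stand-in for a
    # Handle-based identifier until DOIs are wired up.
    article = ET.Element("article")
    front = ET.SubElement(article, "front")
    meta = ET.SubElement(front, "article-meta")

    ET.SubElement(meta, "article-id", {"pub-id-type": "handle"}).text = "hdl:example/12345"
    title_group = ET.SubElement(meta, "title-group")
    ET.SubElement(title_group, "article-title").text = "Example working paper"

    ET.SubElement(article, "body").text = "Working-paper text goes here."

    print(ET.tostring(article, encoding="unicode"))

Because the identifier lives in its own tagged slot, swapping Handles for DOIs later is a data change rather than a schema change.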

Much of our grand vision remains open for the future, but this is an excellent first step.
We need to establish a timeline now…