Core Elements of Experiment We Want to Create Now

Core pieces of an initial system we’re going to try as an experiment, followed by a survey:

  • EEB is our audience
  • “Products” (works in progress) with commentary
  • System is expandable to multiple product types
  • Commentary and products are associated with users
  • Users provide ratings for products and each other’s comments
  • Revision & versioning part of the system
  • Meta-data for papers to enable discovery
  • Information can be exported to a new platform
  • We should have a closed beta first

The Needs We Feel This Addresses

  • works in progress/preprints
  • replicated/null results/short notes/errata
  • interactive commentary
  • experiment with peer review

Enabling Efficient Discovery via Linking Information

The discussion focused on the attributes of scholarly projects, and of the comments and people associated with them, that can then be used for discovery tools.

Attributes that describe a single work and can be used to make new works discoverable:

  • who is reading it
  • properties of people reading it, and of the authors (university, location, field, etc.)
  • tags (user generated and author generated)
  • citation network, and network of citations to reviews/responses
  • reading habits of people participating in a discussion of a paper
  • geocoding
  • associated funding
  • corpus of a paper
  • associated social media and habits of social media

How do we use those attributes to enhance discoverability?

  • Aggregation by tags, sorting by scores or other properties – date, time, etc.
  • Build networks of influence based on a group of people, then see what the ‘most influential’ are reading/reviewing; or build a network of papers based on a circle of people and their reading habits
  • You want to find things that are near each other in a given ‘space’ – this can use information about people as well as information about a product and its content
  • Must have ability to stumble on something highly related, even if not reviewed.
  • Concentrating on pieces that bridge multiple fields and seeing where else they go
  • Visualize connections of these tags
  • Overlays of a paper that convey some of this information
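As a concrete sketch of the “near each other in a given space” idea: treat each product as a vector of tag counts (one of the attributes listed above) and rank other products by cosine similarity. All paper IDs, tags, and counts here are made up for illustration.

```python
from collections import Counter
from math import sqrt

# Hypothetical products described only by tag counts (user- or
# author-generated tags); counts weight how often a tag was applied.
papers = {
    "p1": Counter({"peer-review": 3, "open-science": 2}),
    "p2": Counter({"open-science": 4, "preprints": 1}),
    "p3": Counter({"population-genetics": 5}),
}

def cosine(a, b):
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def nearest(paper_id, k=2):
    """Products closest to `paper_id` in tag space, even if unreviewed."""
    others = [(pid, cosine(papers[paper_id], vec))
              for pid, vec in papers.items() if pid != paper_id]
    return sorted(others, key=lambda x: -x[1])[:k]
```

The same machinery works for any of the other attributes (readers, fields, funding sources) – just swap what goes into the vectors.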

Reputation & Credit

What are ways that people can accumulate reputation & credit for participating in the scholarly publishing enterprise? If we start by assuming the open reviewing system from yesterday’s discussion, what are ways things can be ‘scored’ to give people public reputation?

Products need to accumulate reputation via use and re-use:

  • citations
  • pagerank (2nd- and 3rd-generation citations) – how papers that cite a paper are themselves cited, giving the paper a PageRank-like score
  • altmetrics
  • reader and reviewer opinion
  • Question: can you accumulate negative things?
  • Qualifying all of this by kind of contributions
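The 2nd/3rd-generation citation idea is essentially PageRank run on the citation graph. A minimal sketch with a toy citation network; the paper IDs and damping factor are illustrative, not a proposed implementation.

```python
def pagerank(cites, damping=0.85, iters=50):
    """Toy PageRank over a dict of paper -> papers it cites."""
    papers = set(cites) | {p for tgts in cites.values() for p in tgts}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for src, tgts in cites.items():
            if tgts:
                share = damping * rank[src] / len(tgts)
                for t in tgts:
                    new[t] += share
        # Papers with no outgoing citations spread their rank evenly.
        dangling = sum(rank[p] for p in papers if not cites.get(p))
        for p in papers:
            new[p] += damping * dangling / n
        rank = new
    return rank

# "A" and "B" both cite "C", so "C" ends up ranked highest.
citations = {"A": ["C"], "B": ["C"], "C": []}
ranks = pagerank(citations)
```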

A lot of this information can be culled from total-impact.org and altmetric.com.

These are all evidence of use & re-use, but give different information. We tossed around the idea of creating an aggregate number, but agreed that fine-grained information HAS to be there. What does an aggregated number really mean?

Reviews need some different metrics, so that reviewers themselves can be assessed. They must have metrics associated with re-use, reader scores, etc. But also –

  • utility to the author
  • how do they lean on a paper…keep track of rejecty rejecterson
  • reader score – does it weight the review’s use and utility to an author?
  • ‘Badging’ for activity – track quantity

Chris: also maybe something about the last 5 papers read, or reading history? But privacy issues abound.
But – other people who read this, also read these…
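The “other people who read this, also read these” suggestion can be prototyped with nothing more than co-occurrence counts over reading histories. The histories and paper IDs below are hypothetical.

```python
from collections import Counter

# Each list is one (hypothetical) reader's history of papers read.
histories = [
    ["p1", "p2", "p3"],
    ["p1", "p2"],
    ["p2", "p4"],
]

def also_read(paper_id, histories):
    """Papers most often co-read with `paper_id`, most frequent first."""
    co = Counter()
    for h in histories:
        if paper_id in h:
            co.update(p for p in h if p != paper_id)
    return [p for p, _ in co.most_common()]
```

Note this only needs aggregate co-occurrence, not any individual’s history, which partly sidesteps the privacy worry above.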

For Those Missing the Meeting: The Building Blocks of a New System

For both those who are here at the meeting and those who were not able to make it, here are the fundamental areas we’re going to discuss – areas we feel need to be pulled apart and put back together to see how we can improve the current system of scholarly communication.

Please, if you have any thoughts about any of these but are not here, drop me a line with relevant questions, notes, thoughts, etc.

  • What is a product?
  • The Review Process & Opening it Up
  • Credit and Reputation for Participation
  • Linking Information for Efficient Discovery
  • The Roles of Agents/Stakeholders
  • Distribution Mechanisms (publishers, libraries, societies, etc.)
  • Funding
  • Software & User-interface architecture

“Report Abuse”

One idea that I don’t want us to lose from yesterday was that a fraud/plagiarism/quackery check need not be the responsibility of any one person. Yes, there’s likely an automated step to check for plagiarism, but, after that, having a preprint randomly sent to, say, 5–10 people with the option of clicking a “report abuse” button (and then entering what kind of abuse) may well prove to be as efficient at culling bad items out of a preprint literature as anything.

That, and it allows for an alternative archive of bunk. Which can be fairly useful, particularly when one wants to cite that a certain “point” is actually garbage, as it were.
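A sketch of that random-audit mechanic, assuming a reviewer pool, a sample size of 5, and a two-report threshold – all of those numbers and names are illustrative:

```python
import random

def assign_auditors(readers, k=5, seed=None):
    """Pick a random sample of readers to receive a new preprint."""
    rng = random.Random(seed)
    return rng.sample(readers, min(k, len(readers)))

def flagged(reports, threshold=2):
    """True once `threshold` auditors have reported any kind of abuse."""
    return len(reports) >= threshold

auditors = assign_auditors(["ada", "ben", "cai", "dee", "eli", "fay"],
                           k=5, seed=1)
```

Preprints that cross the threshold could then be routed to the “archive of bunk” rather than silently deleted.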

An idea about a rough do-able now experiment

Area51.stackexchange is a site for people to propose nascent MathOverflow-like sites. As a first-pass experiment on open review and a reputation economy (while we develop, or find a way to develop, a more sophisticated model), could we get the community to support an Ecology Preprint Stack Exchange? We could have NCEAS host preprint PDFs from validated authors (i.e., have an ecopreprint.stackexchange user account), and an individual ‘question’ would concern people’s comments on a paper.

We’d ask the community to volunteer to post preprints there for at least, say, 2 months before submission, and give them the option of including their ‘review’ trail, etc. when they submit their paper to a journal.

Users would shape things like tags, etc. as they do on any stackoverflow site.

Granted, ‘review’ would be public, with usernames revealed. But, it’s an experiment.

After running this for ~6 months, we survey participants about their experience.

Thoughts?

Comment from Dave: You’d need some journals and high-profile people to buy in before doing this.