Core Elements of the Experiment We Want to Create Now

The core pieces of an initial system we’re going to try as an experiment, followed by a survey (a minimal data-model sketch follows the list):

  • EEB is our audience
  • “Products” (works in progress) with commentary
  • System is expandable to multiple product types
  • Commentary and products are associated with users
  • Users provide ratings for products and each other’s comments
  • Revision & versioning are part of the system
  • Metadata for papers to enable discovery
  • Information can be exported to a new platform
  • We should have a closed beta first
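
Below is a minimal sketch of how these pieces might fit together as data. It assumes Python, and every type and field name (User, Product, Version, Comment, Rating) is hypothetical rather than a settled schema:

    from dataclasses import dataclass, field
    from datetime import datetime

    # Hypothetical record types for the pieces listed above; names and
    # fields are illustrative, not a decided design.

    @dataclass
    class User:
        user_id: str
        name: str

    @dataclass
    class Version:                    # revision & versioning
        number: int
        created: datetime
        content_uri: str              # where this revision's files live

    @dataclass
    class Product:                    # a work in progress
        product_id: str
        product_type: str             # expandable to multiple product types
        author_ids: list[str]
        metadata: dict[str, str]      # paper metadata to enable discovery
        versions: list[Version] = field(default_factory=list)

    @dataclass
    class Comment:                    # commentary is associated with a user
        comment_id: str
        product_id: str
        author_id: str
        text: str

    @dataclass
    class Rating:                     # ratings for products and comments
        rater_id: str
        target_id: str                # a product_id or a comment_id
        score: int

Keeping products, comments, and ratings as separate records is also what makes the export requirement cheap: each can be dumped independently to a new platform.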

The needs we feel this addresses:

  • works in progress/preprints
  • replications/null results/short notes/errata
  • interactive commentary
  • experiment with peer review

Enabling Efficient Discovery via Linking Information

The discussion focused on the attributes of scholarly products, and of the comments and people associated with them, that can then be used for discovery tools.

Attributes that describe a single work and can be used to enable discoverability of new works (a sketch bundling these into one record follows the list):

  • who is reading it
  • properties of people reading it, and of the authors (university, location, field, etc.)
  • tags (user generated and author generated)
  • citation network, and network of citations to reviews/responses
  • reading habits of people participating in a discussion of a paper
  • geocoding
  • associated funding
  • the full-text corpus of a paper
  • associated social media activity, and social-media habits
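
As one way to make the list concrete, here is an illustrative bundle of these attributes for a single work, again in Python; every field name is an assumption, not a fixed schema:

    from dataclasses import dataclass, field

    @dataclass
    class DiscoveryAttributes:
        reader_ids: set[str] = field(default_factory=set)         # who is reading it
        reader_properties: dict[str, list[str]] = field(default_factory=dict)  # university, location, field
        author_properties: dict[str, list[str]] = field(default_factory=dict)
        tags: set[str] = field(default_factory=set)                # user- and author-generated
        cited_by: set[str] = field(default_factory=set)            # citation network
        review_citations: set[str] = field(default_factory=set)    # citations to reviews/responses
        reader_histories: dict[str, set[str]] = field(default_factory=dict)    # habits of discussants
        geocodes: list[tuple[float, float]] = field(default_factory=list)
        funders: list[str] = field(default_factory=list)
        social_media: list[str] = field(default_factory=list)      # associated accounts/posts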

How do we use those attributes to enhance discoverability?

  • Aggregation by tags, sorting by scores or other properties – date, time, etc.
  • Build networks of influence based on a group of people, then see what the ‘most influential’ are reading/reviewing; or build a network of papers based on a circle of people and their reading habits
  • You want to find things that are near each other in a given ‘space’ – this can use information about people as well as information about a product and its content (see the tag-overlap sketch after this list)
  • Must have ability to stumble on something highly related, even if not reviewed.
  • Concentrating on pieces that bridge multiple fields and seeing where else they go
  • Visualize connections of these tags
  • Overlays of a paper that convey some of this information
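
One hedged sketch of the ‘near each other in a given space’ idea, using only tag overlap (Jaccard similarity) as the space; the function names are invented for illustration, and a real system would mix in the people- and content-based signals above:

    def jaccard(a: set[str], b: set[str]) -> float:
        """Overlap between two tag sets; 0.0 when both are empty."""
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    def nearest(product_tags: dict[str, set[str]], query_id: str, k: int = 5) -> list[str]:
        """Rank other products by tag overlap with the query product, so a
        reader can stumble on something highly related even if unreviewed."""
        query_tags = product_tags[query_id]
        scores = {pid: jaccard(query_tags, tags)
                  for pid, tags in product_tags.items() if pid != query_id}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    # e.g. nearest({"p1": {"ecology", "drought"}, "p2": {"drought", "maize"},
    #               "p3": {"genomics"}}, "p1") -> ["p2", "p3"]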

Reputation & Credit

What are ways that people can accumulate reputation & credit for participating in the scholarly publishing enterprise? If we start by assuming the open reviewing system from yesterday’s discussion, what are ways things can be ‘scored’ to give people public reputation?

Products need to accumulate reputation via use and re-use:

  • citations
  • PageRank (2nd- and 3rd-generation citations) – how are the papers that cite a paper themselves cited, and so on? That recursion gives a paper its PageRank (see the sketch after this list)
  • altmetrics
  • reader and reviewer opinion
  • Question: can you accumulate negative things?
  • Qualifying all of this by the kind of contribution
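
To make the PageRank bullet concrete, here is a toy version over a citation graph; the damping factor and iteration count are the usual textbook defaults, not anything decided in the discussion:

    def citation_pagerank(cites: dict[str, set[str]], d: float = 0.85,
                          iters: int = 50) -> dict[str, float]:
        """cites[p] is the set of papers that p cites. A paper ranks highly
        when highly-ranked papers cite it, which captures the 2nd- and
        3rd-generation citation idea."""
        papers = set(cites) | {q for qs in cites.values() for q in qs}
        n = len(papers)
        rank = {p: 1.0 / n for p in papers}
        for _ in range(iters):
            new = {p: (1.0 - d) / n for p in papers}
            for p in papers:
                outgoing = cites.get(p, set())
                if outgoing:
                    share = d * rank[p] / len(outgoing)
                    for q in outgoing:
                        new[q] += share
                else:
                    # papers that cite nothing spread their rank evenly
                    for q in papers:
                        new[q] += d * rank[p] / n
            rank = new
        return rank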

A lot of this information can be culled from total-impact.org and altmetric.com.

These are all evidence of use & re-use, but they give different information. We tossed around the idea of creating an aggregate number, but agreed that the fine-grained information HAS to be there. What does an aggregated number really mean? (A sketch of reporting both follows.)
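
For what it’s worth, a tiny sketch of the compromise: derive any aggregate from the fine-grained numbers and always report both. The metric names and weights here are placeholders, not recommendations:

    # Placeholder metrics and weights; derive the aggregate, keep the parts.
    metrics = {"citations": 12, "downloads": 340, "reader_score": 4.2, "tweets": 18}
    weights = {"citations": 1.0, "downloads": 0.01, "reader_score": 2.0, "tweets": 0.05}

    aggregate = sum(weights[k] * v for k, v in metrics.items())
    print(aggregate, metrics)   # the number alone hides what it actually means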

Reviews need some different metrics, so that reviewers themselves can be assessed. They must have metrics associated with re-use, reader scores, etc. But also (a scoring sketch follows the list) –

  • utility to the author
  • how do they lean on a paper… keep track of ‘Rejecty Rejecterson’ (the habitually harsh reviewer)
  • reader score – does it weight the review’s use and utility to an author?
  • ‘Badging’ for activity – track quantity
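
A sketch of how those review signals could be kept separate per reviewer rather than collapsed into one number; the record and function names are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class ReviewRecord:
        reader_score: float      # how readers rated the review
        author_utility: float    # did the author find it useful? (say 0 to 1)
        reuse_count: int         # citations/links to the review itself

    def reviewer_summary(reviews: list[ReviewRecord]) -> dict[str, float]:
        """Fine-grained totals plus a 'badge'-style activity count."""
        n = max(len(reviews), 1)
        return {
            "activity": len(reviews),   # badging for quantity
            "mean_reader_score": sum(r.reader_score for r in reviews) / n,
            "mean_author_utility": sum(r.author_utility for r in reviews) / n,
            "total_reuse": sum(r.reuse_count for r in reviews),
        }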

Chris: also maybe something about the last 5 papers read, or reading history? But privacy issues abound.
But – other people who read this also read these… (a co-reading sketch follows)
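
That last idea is plain co-occurrence counting, which sidesteps some of the privacy worry because only aggregate counts are exposed, never one person’s history. A sketch, assuming a hypothetical read_by mapping from each user to the set of papers they have read:

    from collections import Counter

    def also_read(read_by: dict[str, set[str]], paper_id: str, k: int = 5) -> list[str]:
        """'Other people who read this also read these': count how often
        each other paper co-occurs with paper_id across reading histories."""
        co = Counter()
        for papers in read_by.values():
            if paper_id in papers:
                co.update(papers - {paper_id})
        return [p for p, _ in co.most_common(k)]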