Feedback on reviewer recommendation notebook #1649

Open
kbarnhart opened this issue Aug 19, 2020 · 3 comments

Comments

@kbarnhart

I just tried out the reviewer recommendation notebook. Very cool and very easy to use. My feedback covers two topics (A) assessment of results and (B) the interface.

A. Assessment of the results.

I don't know much about NLP, so I'm not sure how to provide feedback in the most constructive way. I'll try.

I tried the tool out on the submission dorado (openjournals/joss-reviews/issues/2568) and then compared its results with my standard practice for finding reviewers (a combination of people I know are working in related areas, author recommendations, GitHub search, and reviewer list search). I would rate the notebook results as neutral, in that all the recommended reviewers have expertise in water (as opposed to transportation or astrophysics). So I would say the results provided a good first-order match.

However, in finding reviewers there are some additional things I take into account. For example, this was a submission about water and a passive particle method, so I sought out a reviewer who knows about particle methods. And it was a submission about water and earth surface processes, so I sought out a reviewer who works in that area. I suspect these higher-order considerations often include the methods used and the way in which the package is applied, and that these aspects of submissions are very important for finding appropriate reviewers.

I don't know how this will compare with the next time I try the notebook on a new paper, but will report back then.

B. Interface feedback

This feedback is minor and doesn't need to be acted on, but here are some ideas in case anyone wants to improve the interface.

In the Binder interface, the most relevant recommendation is to add some mechanism for exporting the results (.txt or email). Another idea would be to require only the DOI and then create/fetch the PDF internally.
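As a rough illustration of the export idea, here is a minimal sketch in Python, assuming the notebook ends up with a list of (reviewer, score) pairs; the variable names and file format are hypothetical, not part of the current notebook:

```python
import csv

# Hypothetical output of the recommendation step: (github_handle, similarity_score)
recommendations = [
    ("reviewer-a", 0.42),
    ("reviewer-b", 0.37),
    ("reviewer-c", 0.35),
]

# Write a small tab-separated .txt file that an editor could download from Binder
with open("recommended_reviewers.txt", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    writer.writerow(["reviewer", "score"])
    writer.writerows(recommendations)
```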

However, I could envision two interface options that would work really well for how I handle submissions:

  1. Being able to call @whedon recommend 10 reviewers, which would list the top ten reviewers on a pre-review issue (a rough sketch follows this list).

  2. Something like the test paper generation page: you would plug in a repo, it would then find/build the PDF, compare it against the corpus, and print out a table of recommendations.
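A minimal sketch of how such a bot command might be parsed and turned into a table reply; the command pattern, variable names, and reply format are assumptions for illustration, not whedon's actual implementation:

```python
import re

# Hypothetical command text posted on a pre-review issue
comment_body = "@whedon recommend 10 reviewers"

match = re.match(r"@whedon recommend (\d+) reviewers", comment_body)
if match:
    n = int(match.group(1))
    # `recommendations` would come from the notebook's matching step;
    # hard-coded here so the sketch is self-contained.
    recommendations = [("reviewer-a", 0.42), ("reviewer-b", 0.37), ("reviewer-c", 0.35)]
    rows = "\n".join(f"| {handle} | {score:.2f} |" for handle, score in recommendations[:n])
    reply = "| reviewer | score |\n|---|---|\n" + rows
    print(reply)  # in a bot, this would be posted back to the issue as a comment
```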

tag: @arfon

@arfon
Member

arfon commented Aug 20, 2020

Thanks for the feedback @kbarnhart!

> However, in finding reviewers there are some additional things I take into account. For example, this was a submission about water and a passive particle method, so I sought out a reviewer who knows about particle methods. And it was a submission about water and earth surface processes, so I sought out a reviewer who works in that area. I suspect these higher-order considerations often include the methods used and the way in which the package is applied, and that these aspects of submissions are very important for finding appropriate reviewers.

Right, and this is why we're seeking feedback at this stage :-). It would be very helpful to know whether there was information in the reviewer spreadsheet that described particle methods, i.e., did the recommendation tool fail to identify this from the information it had available (how reviewers had described themselves in the spreadsheet), or did you add your own domain context/information here?

Right now, the recommendations are limited to using the information from the spreadsheet, and one thing we're considering is asking reviewers to upload a few papers when they first volunteer. That way we would have a larger corpus of information to learn from.
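For context, a minimal sketch of the kind of text-similarity matching being discussed, assuming a TF-IDF/cosine-similarity approach; the thread doesn't spell out the notebook's actual method, so treat this purely as an illustration of why exact-term mismatches (e.g., "particle method" never appearing in the paper text) can hurt:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer self-descriptions from the spreadsheet
reviewer_descriptions = {
    "reviewer-a": "geodynamic models, particle methods, mantle convection",
    "reviewer-b": "hydrology, water resources, groundwater modeling",
}

# Hypothetical submission text; note it never says "particle method" verbatim
submission = "passive particle routing in shallow water hydrodynamic flows"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(reviewer_descriptions.values()) + [submission])

# Similarity of each reviewer description to the submission
scores = cosine_similarity(matrix[:-1], matrix[-1])
for handle, score in zip(reviewer_descriptions, scores.ravel()):
    print(handle, round(float(score), 3))
```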

> Being able to call @whedon recommend 10 reviewers, which would list the top ten reviewers on a pre-review issue.

Yes! This is exactly what we want to do next. @elizabeth-pugliese is currently working on this and hopefully we'll have something to show soon.

@kbarnhart
Author

There were two reviewers I invited for whom I integrated domain context and/or drew different information from the spreadsheet than the NLP did.

The paper never uses the term "particle method", but it uses many other words next to "particle" (e.g., "passive particle" in the title). There was one person in the spreadsheet who I think should have been matched against this: gassmoeller, who works on geodynamic models and lists "Particle methods" as one of his review domains.

The other person I'm surprised wasn't identified was dbuscombe-usgs. But unlike the previous case, I can't point to anything in the paper and his stated review domains that I think should have been connected.

Agreed that the self-reported descriptions are the biggest limitation at the moment.

@kbarnhart
Author

I've now used this on another submission, and the contrast is really interesting. The submission is "pvpumpingsystem: a python package for modeling and sizing photovoltaic water pumping systems":

openjournals/joss-reviews/issues/2361

And the recommendations were as follows:
[screenshot of the recommendation table omitted]

I'm pretty sure that the top three people listed were also listed for the other package that I tried out (a passive particle tracking package for hydrodynamic flows). So perhaps water is a very difficult keyword.

There are two people listed here who I invited to review (samuelduchesne and robinroche).
