ESWC 2019 Guidelines for Reviewers

Dear PC members, we want to thank you once more for participating in this year’s reviewing process and for helping us generate a truly exciting programme for ESWC 2019!

We would like to offer some brief guidelines on the review criteria and on what we expect of your role as a PC member.

  1. Please be constructive. Reviews should provide honest feedback in a respectful and constructive manner. Authors will appreciate your knowledge and suggestions on how to improve their work.
  2. Please be fair and reasonable in your requests. Think carefully about the strengths and limitations of the work and provide a fair assessment. A few typos or stylistic errors are not a reason to reject a paper. If you find many such errors, please mention in your review that the paper will need substantial improvement before becoming acceptable. Similarly, a paper overlooked in the related work section can easily be added by the authors and as such is no reason for rejection. Finally, while an evaluation section is of major importance for a successful submission, time and paper length pose restrictions on the systems and other baselines to which the authors can meaningfully compare their work. The lack of a certain system or baseline should only be a reason for rejection if there is reason to believe that the presented research falls behind previous work or if the comparison is set up in a way that favors the new work.
  3. Please provide sufficient explanations for your criticisms and decisions. For example, do not just state that there is an error in a formula; explain your reasoning about why there is an error and, ideally, point out whether it can be overcome in reasonable time, as this may open the door for a conditional accept.
  4. Follow the review criteria. The review criteria for the Research, In-Use, and Resources tracks have been specified on the ESWC 2019 website (and are also available below), and we would greatly appreciate it if you took them into account while preparing your review.
  5. Very short reviews often do not provide the level of detail that the committee needs to arrive at a fair decision, nor do they allow authors to improve their work. As a rule of thumb, we would appreciate it if your review contained at least 250 words.

Review Criteria for Research

Papers in this track will be reviewed according to the following criteria:

  • Appropriateness: Is the paper suitable for ESWC 2019? Novel and significant research contributions addressing theoretical, analytical, and empirical aspects of the Semantic Web, as well as contributions to research at the intersection of the Semantic Web and other disciplines, are welcome.
  • Novelty: Does the paper present novel results or applications? Does it address a new problem or one that has received little attention? Does it improve on previous approaches in terms of addressed issues, success, or lessons learned? Novel combinations, adaptations, extensions of existing ideas, or different perspectives on existing approaches are also valuable.
  • Relevance and impact of the research contributions: How significant is the described work? Does the paper target an important problem? Does the solution represent a significant and important advance? Will aspects of the paper result in other researchers adopting the approach in their own work? If the paper is evaluated with respect to an existing baseline, how big is the delta?
  • Soundness: Does the presented approach achieve the stated claims? Are the experiments well thought out, rigorous, and convincing, and do they support the stated claims?
  • Design and execution of the evaluation of the work: Has the paper been evaluated following standard evaluation approaches, benchmarks, and metrics? Has it been compared against similar state-of-the-art approaches? Is the evaluation repeatable? Are links to the datasets, source code, or queries used for the evaluation provided?
  • Clarity and quality of presentation: For the reasonably well-prepared reader, is it clear what was done and why? Is the paper well-written and well-structured? Are there problems with style and grammar?
  • Grounding in the literature: To what extent do the authors show awareness of related work and discuss the main novelty/contribution of their work in the context of the existing literature?

Review Criteria for In-Use

The Semantic Web In-Use papers will be evaluated on their relevance to the track, rigor in the methodology and analysis used to reach conclusions, originality, readability, and usefulness to developers, researchers, and practitioners. Review criteria include:

  • Significance of the problem addressed
  • Value of the use case in demonstrating benefits/challenges of semantic technologies
  • Adoption by domain practitioners and general members of the public
  • Impact of the solution, especially in supporting the adoption of semantic technologies
  • Applicability of the lessons learnt to other use cases
  • Clarity and quality of the description

Review Criteria for Resources

The program committee will consider the quality of both the resource and the paper in its review process. Authors must therefore ensure unfettered access to the resource during the review process, ideally by citing the resource at a permanent, resource-specific location: for example, data deposited in a repository such as FigShare, Zenodo, or a domain-specific repository, or software made available in a public code repository such as GitHub or Bitbucket. The resource should be publicly available, or a clear argument has to be made as to why it is not openly available to the public.

We welcome descriptions of both well-established and emerging resources. Resources will be evaluated along the following generic review criteria, which should be carefully considered by both authors and reviewers.

  • Potential impact:
      • Does the resource break new ground?
      • Does the resource plug an important gap?
      • How does the resource advance the state of the art?
      • Has the resource been compared to other existing resources (if any) of similar scope?
      • Is the resource of interest to the Semantic Web community?
      • Is the resource of interest to society in general?
      • Will the resource have an impact, especially in supporting the adoption of Semantic Web technologies?
      • Is the resource relevant and sufficiently general? Does it measure some significant aspect?
  • Reusability:
      • Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively, what is the resource’s potential for being (re)used, for example, based on the activity volume on discussion forums, mailing lists, issue trackers, or support portals?
      • Is the resource easy to (re)use? For example, does it have good-quality documentation? Are tutorials available?
      • Is the resource general enough to be applied in a wider set of scenarios, not just for the originally designed use?
      • Is there potential for extensibility to meet future requirements (e.g., upper-level ontologies, plugins in Protégé)?
      • Does the resource clearly explain how others can use the data and software?
      • Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?
  • Design & Technical quality:
      • Does the design of the resource follow resource-specific best practices?
      • Did the authors perform an appropriate re-use or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
      • Is the resource suitable to solve the task at hand?
      • Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of the FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/Dublin Core?
      • If the resource proposes performance metrics, are such metrics sufficiently broad and relevant?
      • If the resource is a comparative analysis or replication study, was the coverage of systems reasonable, or were any obvious choices missing?
  • Availability:
      • Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
      • Does the resource provide a licence specification? (See creativecommons.org and opensource.org for more information.)
      • Is the resource publicly available? For example, as an API, as Linked Open Data, as a download, or in an open code repository.
      • Is the resource publicly findable? Is it registered in (community) registries (e.g., Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo, or GitHub?
      • Is there a sustainability plan specified for the resource? Is there a plan for the maintenance of the resource?
      • Does it use open standards, when applicable, or have a good reason not to?
