Call for Papers ESWC 2021 Resources Track

Many research efforts in the areas of the Semantic Web, Linked Data and Knowledge Graphs focus on publishing scientific papers that explore a hypothesis. However, scientific advancement often relies on high-quality resources that provide the scaffolding needed to support those publications. Sharing these resources, and the best practices that led to their development, with the research community is crucial to consolidating research material, ensuring the reproducibility of results and, more generally, gaining new scientific insights.

The ESWC 2021 Resources Track aims to promote the sharing of resources including, but not limited to: datasets, ontologies, vocabularies, annotated corpora, workflows, knowledge graphs, evaluation benchmarks or methods, replication studies, services, APIs and software frameworks that have contributed to the generation of novel scientific work. In particular, we encourage the sharing of such resources following well-established best practices within the Semantic Web community, including the provision of an open license and a permalink identifying the resource. This track calls for contributions that provide a concise and clear description of a resource and its usage. A typical Resources Track paper focuses on one of the following categories:

  • Ontologies developed for an application, with a focus on describing the modelling process underlying their creation;
  • Datasets produced to support specific evaluation tasks or by a novel algorithm;
  • Knowledge graphs of particular interest to the community that comprehensively cover new vertical domains;
  • Machine learning models that would impact the knowledge engineering community – examples include comprehensive word embeddings trained on large corpora, or embeddings of commonly known knowledge graphs, such as DBpedia or Wikidata;
  • Reusable research prototypes / services supporting a given research hypothesis;
  • Community shared software frameworks that can be extended or adapted to support scientific study and experimentation;
  • Benchmarking, focusing on datasets and algorithms for comprehensible and systematic evaluation of existing and future systems;
  • Development of new evaluation methodologies, and their demonstration in an experimental study.

Delineation from the other Tracks

We strongly recommend that prospective authors carefully check the calls of the other main tracks of the conference in order to identify the optimal track for their submission.

Papers that propose new algorithms and architectures should continue to be submitted to the regular Research Track, whilst papers that describe the use of Semantic Web technologies in practical settings should be submitted to the In-Use Track.

If you are unsure whether your paper fits the Resources Track, please have a look at the review criteria below. If you cannot answer “yes” to most of the questions across all evaluation categories, and provide evidence for this in your paper, the Resources Track might not be a good fit for your paper.

Review Criteria

The program committee will consider the quality of both the resource and the paper in its review process. Therefore, authors must ensure easy access to the resource during the review process, ideally by citing the resource at a permanent, resource-specific location. For example, data may be deposited in a repository such as FigShare, Zenodo, or a domain-specific repository, and software code may be made available in a public code repository such as GitHub or Bitbucket. The resource must be publicly available.

We welcome descriptions of both well-established and emerging resources. Resources will be evaluated along the following generic review criteria, which should be carefully considered by both authors and reviewers.

  • Potential impact:
    • Does the resource break new ground?
    • Does the resource plug an important gap?
    • How does the resource advance the state of the art?
    • Has the resource been compared to other existing resources (if any) of similar scope?
    • Is the resource of interest to the Semantic Web community?
    • Is the resource of interest to society in general?
    • Will the resource have an impact, especially in supporting the adoption of Semantic Web technologies?
    • Is the resource relevant and sufficiently general, and does it measure some significant aspect?
  • Reusability:
    • Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively, what is the resource’s potential for being (re)used, for example, based on the volume of activity on discussion forums, mailing lists, issue trackers, support portals, etc.?
    • Is the resource easy to (re)use? For example, does it have good-quality documentation? Are tutorials available?
    • Is the resource general enough to be applied in a wider set of scenarios, beyond its originally intended use?
    • Is there potential for extensibility to meet future requirements (e.g., upper-level ontologies, plugins in Protégé)?
    • Does the resource clearly explain how others can use the data and software?
    • Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?
  • Design & Technical quality:
    • Does the design of the resource follow resource specific best practices?
    • Did the authors perform an appropriate re-use or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
    • Is the resource suitable to solve the task at hand?
    • Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of the FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/Dublin Core?
    • If the resource proposes performance metrics, are such metrics sufficiently broad and relevant?
    • If the resource is a comparative analysis or replication study, was the coverage of systems reasonable, or were any obvious choices missing?
  • Availability:
    • Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
    • Does the resource provide a (preferably open) license specification? (See creativecommons.org and opensource.org for more information.)
    • Is the resource publicly available? For example, as an API, as Linked Open Data, as a download, or in an open code repository.
    • Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, DataHub, or DBpedia Databus)? Is it registered in generic repositories such as FigShare, Zenodo or GitHub?
    • Is there a sustainability plan specified for the resource? Is there a plan for the maintenance of the resource?
    • Does it use open standards, when applicable, or have a good reason not to?

Submission

Important dates and submission guidelines are specified here.

Authors will have the opportunity to submit a rebuttal to the reviews to clarify questions posed by program committee members.

Track Chairs

Maria Maleshkova, University of Bonn, Germany

Pierre-Antoine Champin, Claude Bernard Lyon 1 University, France

Acknowledgements

The text from this CfP is partially based on the call for Resource Papers for ISWC 2017 by Valentina Tamma and Freddy Lecue, the call for Resource Papers for ESWC 2018 by Pascal Hitzler and Raphael Troncy, the call for Resource Papers for ESWC 2019 by Amrapali Zaveri and Alasdair J. G. Gray, and the call for Resource Papers for ESWC 2020 by Heiko Paulheim and Anisa Rula.
