Call for Papers: ESWC 2021 Science Data and Scholarly Communication

Description

The main goal of the Science Data and Scholarly Communication track is to bring together the communities that rely on semantic approaches for the representation, mining, analysis, visualization, and dissemination of research outputs, such as papers, datasets, software, experiments, vocabularies, workflows, patents, and others, regardless of whether the research takes place in academia, industry, or government.

In broad terms, the themes of this track are:

  • semantic representation of research outputs: how to semantically represent, categorize, connect, or integrate scientific data in ways that facilitate the reuse and sharing of knowledge;
  • analysis of research outputs: approaches for information extraction and retrieval, AI-based pattern discovery and prediction, understanding research dynamics, forecasting trends, informing research policies, analyzing and interlinking experiments, deriving new knowledge, and recommending research outputs;
  • communication and dissemination of research outputs, and interaction with scholarly data: how to efficiently publish research results, and how to provide novel applications and user interfaces for navigating, analyzing, and visualizing scholarly outputs.

Negative Results

Negative results papers are a new theme at ESWC 2021, and their submission is encouraged. Specific instructions for negative results papers can be found here.

Topics of Interest

Topics of interest include, but are not limited to:

Semantic Representation

  • Approaches for facilitating the management lifecycle of scholarly outputs, metadata, and information (creation, storage, sharing, and reuse)
  • Knowledge descriptions (e.g., ontologies, vocabularies, schemas) of scholarly artifacts; knowledge graphs
  • Tools and methods for interlinking scholarly metadata
  • Virtual research environments, (semi-)automated scientific content creation (e.g., papers, reviews, events, calls for papers)
  • Description of citations and citation networks for scholarly articles
  • Data and software and their interrelationships (e.g., research objects)
  • Theoretical models describing the rhetorical and argumentative structure of scholarly papers and their application in practice
  • Preservation, curation, and valorization of scholarly metadata (e.g., wikis, crowdsourced data, citizen science)
  • Description and use of provenance information of scholarly data
  • From digital libraries of scholarly papers to Linked Open Datasets: models, applicability and challenges
  • Definition and description of scholarly publishing processes
  • Modeling licenses for scholarly artifacts (e.g., documents, data and OpenCourseWare)
  • Workflows of scholarly artifacts
  • Blockchain for scholarly communication (data provenance, authorship, digital property, peer-review, etc.)

Analysis

  • Natural language processing approaches for scholarly data
  • Automatic annotation of scientific research
  • Automatic construction of academic knowledge graphs
  • Tools and methods for pattern discovery in scholarly metadata (e.g., discovering data and software used in similar publications)
  • Approaches for scholarly recommendation systems
  • Data mining for enhanced discovery, interpretation, and interlinking of scholarly artifacts
  • Science of Science
  • Validation of Open Science practices, mandates, and policies (e.g., FAIR data)
  • Assessing the quality and/or trust of scholarly artifacts
  • Citation analysis, prediction, generation
  • New semantic indicators for measuring the quality and relevance of research
  • Semantic technologies for comparing and making sense of standard metrics (e.g., h-index, impact factor, citation counting) and alternative metrics (aka altmetrics)
  • Automatic or semi-automatic approaches to making sense of research dynamics
  • Automatic semantic enhancement of existing scholarly libraries and papers
  • Reconstruction, forecasting and monitoring of scholarly data
  • Analytics on research impact using bibliometrics and altmetrics, indicators, impact factors, citation indexes, etc.
  • Science and citizen data crowdsourcing and analytics

Visualization

  • Novel user interfaces for interacting with papers, metadata, content, software, and data
  • Virtual research environments
  • Visualization of citation networks and related papers according to multiple dimensions (e.g., citation counting, citation functions, kinds of citing/cited entities, semantic similarity, etc.)
  • Applications for making sense of scholarly data
  • Usability studies
  • Scholarly data and ubiquity: accessing scholarly information from multiple devices (PCs, tablets, smartphones)

Delineation from Other Tracks

We accept novel contributions that improve Science Data and Scholarly Communication by exploiting Semantic Technologies or by hybridizing Semantic Technologies with other fields. Where Science Data and Scholarly Communication serves only as a use case, we recommend that:

  • Formal descriptions of ontologies should be submitted to the Ontologies and Reasoning track.
  • Formal descriptions of knowledge graphs should be submitted to the Knowledge Graph track.
  • Novel methodologies for machine learning should be submitted to the Machine Learning track.
  • Generic NLP and IR approaches should be submitted to the NLP and IR track.
  • Concrete applications of Science Data and Scholarly Communication based on existing solutions should be submitted to the In-Use or Industry track.
  • Empirical evaluations and novel data management benchmarks should be submitted to the Resources track.

Submission

Information on deadlines and submission formats can be found here.

Track Chairs

  • Andrea Giovanni Nuzzolese, National Research Council, Italy
  • Rafael Gonçalves, Stanford University, USA
