
Shared Task FORC

2023-02-01

Title

FORC: Field of Research Classification for Scholarly Publications

Abstract

In recent years we have witnessed an explosion of published research across different fields. This has made it increasingly difficult for researchers to discover literature that specifically caters to their needs and interests. To find appropriate literature, researchers have to manually filter out the many unrelated papers that scientific search engines nevertheless suggest. This shared task tackles the issue through two subtasks. The first subtask aims to foster the development of single-label multi-class research field classifiers covering about 50 general research fields; we will provide a benchmark dataset with different levels of label granularity based on the ORKG research fields taxonomy, as well as metadata and abstracts. The second subtask focuses on fields of research within data science and artificial intelligence; it introduces a separate dataset and aims at multi-label classifiers for more fine-grained labels in those research areas.

Subtasks

  • Subtask 1: General single-label classification
  • Subtask 2: Fine-grained NLP multi-label classification

Datasets

  • Subtask 1 (general fields of research): The dataset for the first subtask will comprise research papers with metadata (title, authors, DOI, URL, publication date, and publisher), as well as abstracts and full texts (when available). Each paper will be labelled with a single field-of-research label from the ORKG taxonomy. Metadata, abstracts, and labels will be fetched from several sources: ORKG, Crossref, Semantic Scholar, and arXiv.
  • Subtask 2 (fields of research within Data Science and AI): The subtask dataset will share the same metadata structure, abstracts, and full texts (when available). Papers will be labelled with multiple fields of research fetched from additional sources: OpenAIRE, the ACL Anthology corpus, and the Microsoft Academic Graph.
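To make the two dataset structures concrete, here is a sketch of what a record in each subtask might look like. All field names and values are illustrative assumptions; the official release format may differ.

```python
# Hypothetical record layouts for the two subtask datasets.
# Field names and values are illustrative assumptions, not the official schema.
subtask1_record = {
    "title": "An Example Paper",
    "authors": ["A. Author", "B. Author"],
    "doi": "10.0000/example",
    "url": "https://example.org/paper",
    "publication_date": "2022-01-01",
    "publisher": "Example Press",
    "abstract": "...",
    "full_text": None,            # present only when available
    "label": "Computer Science",  # exactly one ORKG field (single-label)
}

# Subtask 2 shares the metadata structure but carries multiple labels.
subtask2_record = {**subtask1_record}
del subtask2_record["label"]
subtask2_record["labels"] = ["Natural Language Processing", "Machine Learning"]
```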

Metrics

  • Subtask 1 (single-label classification): accuracy, plus micro- and macro-averaged precision and recall
  • Subtask 2 (multi-label classification): P@k and NDCG@k (rank-based evaluation)
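A minimal, hand-rolled sketch of how these metrics could be computed. The label names in the comments are invented for illustration; participants would likely use an evaluation library in practice, but the definitions are simple enough to spell out:

```python
import math

# --- Subtask 1: single-label multi-class metrics ---

def accuracy(y_true, y_pred):
    """Fraction of papers whose single predicted field matches the gold field."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall(y_true, y_pred, average):
    """Micro- or macro-averaged precision and recall over all class labels."""
    labels = sorted(set(y_true) | set(y_pred))
    tp = {l: 0 for l in labels}  # true positives per class
    fp = {l: 0 for l in labels}  # false positives per class
    fn = {l: 0 for l in labels}  # false negatives per class
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    if average == "micro":
        # micro-averaging pools all decisions before dividing
        tps, fps, fns = sum(tp.values()), sum(fp.values()), sum(fn.values())
        return tps / (tps + fps), tps / (tps + fns)
    # macro-averaging gives every class equal weight
    prec = sum(tp[l] / (tp[l] + fp[l]) if tp[l] + fp[l] else 0.0
               for l in labels) / len(labels)
    rec = sum(tp[l] / (tp[l] + fn[l]) if tp[l] + fn[l] else 0.0
              for l in labels) / len(labels)
    return prec, rec

# --- Subtask 2: rank-based multi-label metrics ---

def precision_at_k(ranked, relevant, k):
    """P@k: fraction of the top-k predicted labels that are gold labels."""
    return sum(1 for label in ranked[:k] if label in relevant) / k

def ndcg_at_k(ranked, relevant, k):
    """NDCG@k with binary relevance: DCG of the ranking over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, label in enumerate(ranked[:k]) if label in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```

For example, a prediction that places both gold labels in the top two positions scores NDCG@2 = 1.0, while micro-averaged precision and recall coincide with accuracy in the single-label setting, since every wrong prediction contributes exactly one false positive and one false negative.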

Contact Persons

  • Raia Abu Ahmad (DFKI)
  • Georg Rehm (DFKI)