Paper-Analyzer aims to facilitate knowledge extraction from scientific (biomedical) papers using Deep Learning (DL) models for Natural Language Processing (NLP). The core of Paper-Analyzer is a Language Model (LM) built with a Transformer-like architecture and fine-tuned on scientific papers; the LM's objective is to predict the next word given its context. On top of the LM, we trained models for several downstream tasks, namely Named Entity Recognition (NER), Relation Extraction (RE), and Question Answering (QA), as successive steps toward the main goal of automatic knowledge extraction.
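The next-word objective can be sketched as follows; this is a minimal illustration assuming a Hugging Face-style causal LM interface, with "gpt2" used only as a placeholder checkpoint rather than the actual model fine-tuned for Paper-Analyzer.

```python
# Minimal sketch of the next-word (causal LM) objective.
# "gpt2" is a placeholder checkpoint, not the Paper-Analyzer model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The protein p53 regulates the cell"
inputs = tokenizer(text, return_tensors="pt")

# With labels equal to the inputs, the model computes the shifted
# next-token cross-entropy loss used during fine-tuning.
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss

# The logits at the last position give the predicted next word.
next_id = int(outputs.logits[0, -1].argmax())
print(loss.item(), tokenizer.decode(next_id))
```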
We implemented NER and RE as classifiers that assign classes to individual words or word tuples, and QA in extractive form, where the answer to a question is a span of the source text.
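As an illustration of the classifier formulation, the sketch below shows NER as token classification on top of a Transformer encoder; the checkpoint name and label set are assumptions for the example, not those used in Paper-Analyzer. RE follows the same pattern with entity-pair representations instead of single tokens, and extractive QA replaces the per-token labels with start/end span logits.

```python
# Minimal sketch of NER as token classification.
# Checkpoint and label set are placeholders; the classification head
# below is randomly initialized and would be trained on annotated data.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-GENE", "I-GENE", "B-DISEASE", "I-DISEASE"]  # assumed label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)

sentence = "BRCA1 mutations increase the risk of breast cancer."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)
pred = logits.argmax(dim=-1)[0]

for tok, p in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), pred):
    print(tok, labels[int(p)])
```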
We also experimented with generative models for paper summarization and sentence paraphrasing.
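A summarization pass with a sequence-to-sequence model might look like the sketch below; "facebook/bart-large-cnn" is a placeholder checkpoint and not necessarily the model used in our experiments. Paraphrasing uses the same generation call with a paraphrase-tuned checkpoint.

```python
# Minimal sketch of generative summarization with a seq2seq model.
# The checkpoint is a placeholder, not the model used in Paper-Analyzer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

abstract = "Background: ... Methods: ... Results: ... Conclusions: ..."  # paper abstract text
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)

# Beam search decoding produces the summary.
summary_ids = model.generate(**inputs, num_beams=4, max_length=80)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```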
Paper-Analyzer is a web-based application that runs search queries over a collection of 30 million PubMed paper abstracts.
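For illustration only, a search endpoint of this kind could be sketched as below; the Flask framework and the tiny in-memory corpus are assumptions standing in for the real web stack and the 30-million-abstract index, neither of which is specified here.

```python
# Minimal sketch of a search endpoint, assuming Flask and an in-memory
# toy corpus in place of the actual PubMed abstract index.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy corpus; in Paper-Analyzer this would be the PubMed abstract collection.
ABSTRACTS = [
    {"pmid": "0000001", "text": "BRCA1 mutations and breast cancer risk ..."},
    {"pmid": "0000002", "text": "p53 signalling in tumour suppression ..."},
]

@app.route("/search")
def search():
    query = request.args.get("q", "").lower()
    hits = [a for a in ABSTRACTS if query and query in a["text"].lower()]
    return jsonify(hits)

if __name__ == "__main__":
    app.run(debug=True)
```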