NALOMA’23: Natural Logic meets Machine Learning 2023

Workshop description:

After the successful completion of NALOMA’20, NALOMA’21, and NALOMA’22, NALOMA’23 seeks to continue the series and attract exciting contributions. NALOMA is concerned with the whole field of Natural Language Understanding (NLU). The workshop aims to bridge the gap between ML/DL and symbolic/logic-based approaches to NLU, with a focus on hybrid approaches. NALOMA’23 will take place at IWCS 2023.

Recently, there has been a surge of interest in tasks targeting NLU and reasoning. In particular, the task of Natural Language Inference (NLI) has received immense attention. This attention has led to the creation of massive datasets and the training of large, deep models reaching human performance (e.g., Liu et al. 2019, Pilault et al. 2020). The world knowledge encapsulated in such models and their robustness enable them to handle large and diverse data efficiently. However, it has been repeatedly shown that such models fail to solve basic inferences and lack generalization power. When presented with differently biased data (Poliak et al. 2018, Gururangan et al. 2018) or with inferences containing hard linguistic phenomena (e.g., Dasgupta et al. 2018, Nie et al. 2018, Naik et al. 2018, Glockner et al. 2018, Richardson et al. 2020, McCoy et al. 2019, Yanaka et al. 2020, to name only a few), they struggle to reach even baseline performance. Explicitly detecting and addressing these weaknesses, e.g., through appropriate datasets, is only partly possible, because such models act like black boxes with low explainability. At the same time, another strand of research has targeted more traditional approaches to reasoning, employing some kind of logic or semantic formalism. Such approaches excel in precision, especially on inferences involving hard linguistic phenomena such as negation, quantifiers, and modals (e.g., Bernardy and Chatzikyriakidis 2017, Yanaka et al. 2018, Chatzikyriakidis and Bernardy 2019, Hu et al. 2019, Abzianidze 2020, to name only a few). However, they suffer from inadequate world knowledge and lower robustness, making it hard for them to compete with state-of-the-art models. Thus, lately, a third research direction seeks to close the gap between the two approaches by employing hybrid methods (e.g., Liang et al. 2017, Kalouli et al. 2020, Ebrahimi et al. 2021), combining the strengths of both approaches while mitigating their weaknesses.

We see such hybrid research efforts as promising not only for overcoming the described challenges and advancing the field, but also for contributing to the symbolic vs. deep learning “debate” that has emerged in the field of NLU. We would like to further promote this research direction and foster a fruitful dialog between the two disciplines. This workshop aims to bring together researchers working on hybrid methods in any subfield of NLU, including but not limited to NLI, QA, Sentiment Analysis, Dialog, Machine Translation, and Summarization.

Official webpage: