PuzzLing Machines: A Challenge on Learning From Small Data

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

PuzzLing Machines: A Challenge on Learning From Small Data. / Sahin, Gozde Gul; Kementchedjhieva, Yova; Rust, Phillip; Gurevych, Iryna.

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. p. 1241-1254.


Harvard

Sahin, GG, Kementchedjhieva, Y, Rust, P & Gurevych, I 2020, PuzzLing Machines: A Challenge on Learning From Small Data. in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 1241-1254, 58th Annual Meeting of the Association-for-Computational-Linguistics (ACL), 05/07/2020. https://doi.org/10.18653/v1/2020.acl-main.115

APA

Sahin, G. G., Kementchedjhieva, Y., Rust, P., & Gurevych, I. (2020). PuzzLing Machines: A Challenge on Learning From Small Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 1241-1254). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.115

Vancouver

Sahin GG, Kementchedjhieva Y, Rust P, Gurevych I. PuzzLing Machines: A Challenge on Learning From Small Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. 2020. p. 1241-1254. https://doi.org/10.18653/v1/2020.acl-main.115

Author

Sahin, Gozde Gul; Kementchedjhieva, Yova; Rust, Phillip; Gurevych, Iryna. / PuzzLing Machines: A Challenge on Learning From Small Data. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. pp. 1241-1254

Bibtex

@inproceedings{1eb9575d718647b0ab09df27948c552b,
title = "PuzzLing Machines: A Challenge on Learning From Small Data",
abstract = "Deep neural models have repeatedly proved excellent at memorizing surface patterns from large datasets for various ML and NLP benchmarks. They struggle to achieve human-like thinking, however, because they lack the skill of iterative reasoning upon knowledge. To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students. These puzzles are carefully designed to contain only the minimal amount of parallel text necessary to deduce the form of unseen expressions. Solving them does not require external information (e.g., knowledge bases, visual signals) or linguistic expertise, but meta-linguistic awareness and deductive skills. Our challenge contains around 100 puzzles covering a wide range of linguistic phenomena from 81 languages. We show that both simple statistical algorithms and state-of-the-art deep neural models perform inadequately on this challenge, as expected. We hope that this benchmark, available at https://ukplab.github.io/PuzzLing-Machines/, inspires further efforts towards a new paradigm in NLP-one that is grounded in human-like reasoning and understanding.",
author = "Sahin, {Gozde Gul} and Yova Kementchedjhieva and Phillip Rust and Iryna Gurevych",
year = "2020",
doi = "10.18653/v1/2020.acl-main.115",
language = "English",
pages = "1241--1254",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
publisher = "Association for Computational Linguistics",
note = "58th Annual Meeting of the Association-for-Computational-Linguistics (ACL) ; Conference date: 05-07-2020 Through 10-07-2020",

}

RIS

TY - GEN

T1 - PuzzLing Machines: A Challenge on Learning From Small Data

T2 - 58th Annual Meeting of the Association-for-Computational-Linguistics (ACL)

AU - Sahin, Gozde Gul

AU - Kementchedjhieva, Yova

AU - Rust, Phillip

AU - Gurevych, Iryna

PY - 2020

Y1 - 2020

N2 - Deep neural models have repeatedly proved excellent at memorizing surface patterns from large datasets for various ML and NLP benchmarks. They struggle to achieve human-like thinking, however, because they lack the skill of iterative reasoning upon knowledge. To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students. These puzzles are carefully designed to contain only the minimal amount of parallel text necessary to deduce the form of unseen expressions. Solving them does not require external information (e.g., knowledge bases, visual signals) or linguistic expertise, but meta-linguistic awareness and deductive skills. Our challenge contains around 100 puzzles covering a wide range of linguistic phenomena from 81 languages. We show that both simple statistical algorithms and state-of-the-art deep neural models perform inadequately on this challenge, as expected. We hope that this benchmark, available at https://ukplab.github.io/PuzzLing-Machines/, inspires further efforts towards a new paradigm in NLP, one that is grounded in human-like reasoning and understanding.

AB - Deep neural models have repeatedly proved excellent at memorizing surface patterns from large datasets for various ML and NLP benchmarks. They struggle to achieve human-like thinking, however, because they lack the skill of iterative reasoning upon knowledge. To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students. These puzzles are carefully designed to contain only the minimal amount of parallel text necessary to deduce the form of unseen expressions. Solving them does not require external information (e.g., knowledge bases, visual signals) or linguistic expertise, but meta-linguistic awareness and deductive skills. Our challenge contains around 100 puzzles covering a wide range of linguistic phenomena from 81 languages. We show that both simple statistical algorithms and state-of-the-art deep neural models perform inadequately on this challenge, as expected. We hope that this benchmark, available at https://ukplab.github.io/PuzzLing-Machines/, inspires further efforts towards a new paradigm in NLP, one that is grounded in human-like reasoning and understanding.

U2 - 10.18653/v1/2020.acl-main.115

DO - 10.18653/v1/2020.acl-main.115

M3 - Article in proceedings

SP - 1241

EP - 1254

BT - Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

PB - Association for Computational Linguistics

Y2 - 5 July 2020 through 10 July 2020

ER -

ID: 255553840