RussianSuperGLUE

Modern universal language models and transformers such as BERT, ELMo, XLNet, RoBERTa, and others need to be properly compared and evaluated. In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. We offer a testing methodology based on tasks typically proposed for “strong AI”: logic, common sense, and reasoning. Adhering to the GLUE and SuperGLUE...
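As an illustration of how such a benchmark is typically consumed, here is a minimal sketch of scoring a model on a single task. It assumes the tasks are mirrored on the Hugging Face Hub as the `russian_super_glue` dataset with a `terra` (textual entailment) configuration and `premise`/`hypothesis`/`label` fields; none of these names appear in the text above, so treat them as assumptions rather than the project's documented API.

```python
# Minimal evaluation sketch (assumptions: the benchmark is available on the
# Hugging Face Hub as "russian_super_glue" with a "terra" textual-entailment
# config; recent versions of `datasets` may additionally require
# trust_remote_code=True for script-based datasets).
from datasets import load_dataset

terra = load_dataset("russian_super_glue", "terra")
val = terra["validation"]

def predict(premise: str, hypothesis: str) -> str:
    """Stand-in for a real model; must return a label from the task's label set."""
    return "entailment"

labels = val.features["label"]  # ClassLabel; int2str() maps ids to label names
correct = sum(
    predict(ex["premise"], ex["hypothesis"]) == labels.int2str(ex["label"])
    for ex in val
)
print(f"TERRa validation accuracy: {correct / len(val):.3f}")
```

Swapping in a real classifier for `predict` and repeating the loop over the other task configurations gives per-task scores in the same spirit as the GLUE/SuperGLUE leaderboards.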