Revisiting Higher-Order Dependency Parsers
Fonseca, E.F.; Martins, A.
Revisiting Higher-Order Dependency Parsers, Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Seattle, United States, pp. 8795-8800, July 2020.
Digital Object Identifier: 10.18653/v1/2020.acl-main.776
Abstract
Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers. This has led to a belief that neural encoders can implicitly encode structural constraints, such as siblings and grandparents in a tree. We tested this hypothesis and found that neural parsers may benefit from higher-order features, even when employing a powerful pre-trained encoder, such as BERT. While the gains of higher-order features are small in the presence of a powerful encoder, they are consistent for long-range dependencies and long sentences. In particular, higher-order models are more accurate on full sentence parses and on the exact match of modifier lists, indicating that they deal better with larger, more complex structures.
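For readers unfamiliar with the higher-order factors mentioned above, the sketch below illustrates what grandparent and sibling relations look like in a dependency tree. This is a minimal illustration, not code from the paper: the head-index representation, the function names, and the simplification of ignoring which side of the head a modifier attaches to are all assumptions made for clarity.

```python
# Minimal sketch (assumed representation, not from the paper):
# a dependency tree is given as a list of head indices, where heads[i]
# is the head of token i and the root token has head -1.

def grandparent_factors(heads):
    """Return (grandparent, head, modifier) triples."""
    triples = []
    for modifier, head in enumerate(heads):
        if head == -1:
            continue  # the root has no grandparent factor
        grandparent = heads[head]
        if grandparent != -1:
            triples.append((grandparent, head, modifier))
    return triples

def sibling_factors(heads):
    """Return (head, modifier, next_modifier) triples for adjacent modifiers
    of the same head (side of the head is ignored for simplicity)."""
    children = {}
    for modifier, head in enumerate(heads):
        children.setdefault(head, []).append(modifier)
    triples = []
    for head, mods in children.items():
        if head == -1:
            continue
        for left, right in zip(mods, mods[1:]):  # adjacent modifiers, left to right
            triples.append((head, left, right))
    return triples

# Example sentence: "John saw the dog today"
# John -> saw, saw -> root, the -> dog, dog -> saw, today -> saw
heads = [1, -1, 3, 1, 1]
print(grandparent_factors(heads))  # [(1, 3, 2)]  i.e. saw -> dog -> the
print(sibling_factors(heads))      # [(1, 0, 3), (1, 3, 4)]
```

A first-order parser scores each (head, modifier) arc in isolation; a higher-order parser additionally scores factors like the triples above, which is how it captures the sibling and grandparent constraints discussed in the abstract.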