A New Concept for Semantic Parsing
Recently, recurrent neural network-based models have shown promising results and gained much attention across natural language processing (NLP) tasks, especially in neural machine translation (NMT). Semantic parsing, which maps a natural language sentence into a machine-readable representation of its meaning, can be viewed as a special translation task and therefore treated as a sequence-to-sequence problem. However, the potential of semantic parsing is highly dependent on the availability of annotated data, and annotation is even harder for some logic-form formats. Inspired by the coarse-to-fine idea, the authors of the paper Semantic Parsing Via Cross-Domain Schema, currently under double-blind review, propose a new concept for semantic parsing.
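To make the sequence-to-sequence framing concrete, here is a minimal illustration of the kind of input/output pair a semantic parser translates between. The utterance and the MRL-style logical form below are invented for illustration, not taken from the paper:

```python
# Hypothetical example: semantic parsing as token-to-token translation.
# Both the utterance and the logical form are treated as plain token sequences.
utterance = "show me flights from boston to denver".split()

# An invented MRL-style logical form, serialized as a token sequence so a
# sequence-to-sequence model can emit it one token at a time.
logical_form = ["flight", "(", "from", "(", "boston", ")",
                ",", "to", "(", "denver", ")", ")"]

print(len(utterance), len(logical_form))
```

The key point is that the output side is just another token sequence, so standard encoder-decoder machinery from NMT applies directly.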
In this paper, the authors propose the concept of a cross-domain schema (CDS), which extracts information shared across domains. The overarching intent is to fully exploit cross-domain commonalities, such as structural and phrasal similarities in human expressions. Furthermore, they present a general-to-detailed neural network (GDNN) that converts an utterance into a logic form based on the meaning representation language (MRL) format. The general network, an encoder-decoder model trained in a multi-task setup, produces the CDS and is meant to extract cross-domain commonalities. The detailed network then generates the final domain-specific target, exploiting the utterance and the CDS simultaneously via an attention mechanism.
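The detailed-network step can be sketched as a decoder state attending over two memories, the utterance encodings and the CDS encodings, and conditioning on both contexts. This is a minimal sketch with random toy states; the shapes, variable names, and the use of plain dot-product attention are assumptions for illustration, not details from the paper:

```python
import numpy as np

def attention(query, keys, values):
    """Dot-product attention: softmax of query-key scores, weighted sum of values."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values, weights

rng = np.random.default_rng(0)
d = 8
utterance_states = rng.normal(size=(5, d))  # toy encoder states for the utterance
cds_states = rng.normal(size=(3, d))        # toy states for the (assumed) CDS sequence

query = rng.normal(size=d)                  # current detailed-decoder state (assumed)
ctx_utt, w_utt = attention(query, utterance_states, utterance_states)
ctx_cds, w_cds = attention(query, cds_states, cds_states)

# The detailed decoder would predict the next target token conditioned on
# both contexts, here simply concatenated.
combined = np.concatenate([ctx_utt, ctx_cds])
print(combined.shape)
```

Attending to the CDS in addition to the raw utterance is what lets the detailed decoder reuse the cross-domain structure extracted by the general network.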
The authors also conducted experiments to demonstrate the effectiveness of CDS and multi-task learning. Since CDS can be applied to other tasks, they intend, in future work, to continue refining the CDS definition and to explore further ways of making it more effective.
The complete paper is under review as a conference paper at ICLR 2019.