Agreement-based Learning. The key idea of agreement-based learning is to train a set of models jointly by encouraging them to agree on the hidden variables (Liang et al., 2006; Liang et al., 2008). This can also be seen as a particular form of posterior constraint or posterior regularization (Graça et al., 2007; Ganchev et al., 2010). Agreement acts as a form of prior knowledge and has been exploited for many task pairs: alignment and parsing (▇▇▇▇▇▇▇ et al., 2010), tokenization and translation (Xiao et al., 2010), parsing and translation (Liu and Liu, 2010), and alignment and named entity recognition (Chen et al., 2010; Wang et al., 2013). Among these, the integrated search algorithm of Zhang et al. (2003) for phrase segmentation and alignment is closest to our work. They use pointwise mutual information to identify possible phrase pairs. The major difference is that we train models jointly instead of performing integrated decoding.
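As a concrete illustration of the last point, pointwise mutual information for a candidate phrase pair can be computed from simple co-occurrence counts. The sketch below is a minimal, self-contained version; the counts are invented for illustration and are not taken from Zhang et al. (2003):

```python
import math

def pmi(pair_count: int, src_count: int, tgt_count: int, total: int) -> float:
    """Pointwise mutual information: log p(s, t) / (p(s) * p(t)).

    pair_count: co-occurrences of source phrase s and target phrase t
    src_count / tgt_count: marginal counts of s and t
    total: total number of aligned units (e.g. sentence pairs)
    """
    p_joint = pair_count / total
    p_src = src_count / total
    p_tgt = tgt_count / total
    return math.log(p_joint / (p_src * p_tgt))

# Hypothetical counts over a parallel corpus of 10,000 sentence pairs:
# the phrase pair co-occurs 80 times, far more than chance would predict.
score = pmi(pair_count=80, src_count=100, tgt_count=120, total=10_000)
print(round(score, 3))  # → 4.2 (strongly positive: a likely phrase pair)
```

A high positive PMI indicates the two phrases co-occur much more often than independence would predict, which is the signal used to propose them as a candidate pair before joint segmentation and alignment.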
Source: Generalized Agreement for Bidirectional Word Alignment