In add_embeddings(), after the call to tf.nn.embedding_lookup(), my tensor has shape (None, n, embed_size), as expected. Right after that, I call tf.reshape(), but the output tensor has shape (None, None) instead of (None, n * embed_size). After searching online, the only discussion I can find on the topic suggests passing tf.shape(x) instead of None as the first element of the shape tuple to tf.reshape(), but that doesn’t help.
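For concreteness, here is the reshape step reduced to a standalone sketch (the sizes `n = 6` and `embed_size = 50` are made up, and I'm using a symbolic `tf.keras.Input` just to stand in for the `(None, n, embed_size)` embedding-lookup output). In this isolated version, passing `-1` as the batch dimension keeps the second dimension statically known, which is the behavior I expected in my graph:

```python
import tensorflow as tf

n, embed_size = 6, 50  # made-up sizes for illustration

# Symbolic input with an unknown batch dimension, standing in for the
# (None, n, embed_size) output of tf.nn.embedding_lookup().
x = tf.keras.Input(shape=(n, embed_size))

# -1 lets TensorFlow infer the batch dimension at run time while keeping
# the second dimension (n * embed_size = 300) statically known.
y = tf.reshape(x, [-1, n * embed_size])
print(y.shape)
```
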
The (None, None) shape of, e.g., x_w then causes tf.matmul(x_w, W_w) to output a tensor whose shape equals that of W_w, which is (n_word_features * embed_size, hidden_size), instead of the desired (None, hidden_size).
Finally, this all results in mismatched dimensions in the addition (x_w W_w) + (x_t W_t), since the two matmuls produce matrices with incorrect and mutually incompatible shapes.
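For reference, this is the overall computation I'm aiming for, reduced to an eager, standalone sketch with made-up sizes (n_word_features = 6, n_tag_features = 3, embed_size = 50, hidden_size = 200, batch = 4; the vocabulary sizes and random weights are placeholders too). Run this way, the shapes come out as expected, which is why I suspect the problem is in how my graph's reshape is losing static shape information:

```python
import numpy as np
import tensorflow as tf

# Made-up sizes for illustration.
n_word_features, n_tag_features = 6, 3
embed_size, hidden_size, batch = 50, 200, 4

# Random stand-ins for the embedding matrices and weights.
emb_w = tf.random.normal([100, embed_size])
emb_t = tf.random.normal([20, embed_size])
W_w = tf.random.normal([n_word_features * embed_size, hidden_size])
W_t = tf.random.normal([n_tag_features * embed_size, hidden_size])

ids_w = tf.constant(np.random.randint(0, 100, (batch, n_word_features)))
ids_t = tf.constant(np.random.randint(0, 20, (batch, n_tag_features)))

# Lookup -> flatten with -1 -> matmul; each x_* ends up
# (batch, n_*_features * embed_size), so the matmuls and the
# addition line up at (batch, hidden_size).
x_w = tf.reshape(tf.nn.embedding_lookup(emb_w, ids_w),
                 [-1, n_word_features * embed_size])
x_t = tf.reshape(tf.nn.embedding_lookup(emb_t, ids_t),
                 [-1, n_tag_features * embed_size])

h = tf.matmul(x_w, W_w) + tf.matmul(x_t, W_t)
print(h.shape)
```
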
Has anyone else experienced this?