Lakshmana et al. (MSR India)
Hypothesis
- CNNs (the model of Kim (2014)) do not learn kernels that are semantically coherent (i.e., capturing similar meanings). The authors propose a method for learning semantically coherent kernels and for visualizing them.
Interesting methods
- A two step approach:
- Group k-grams into desired number of clusters.
- Associate a filter with each cluster, defined as a weighted combination of the Word2Vec representations of the k-grams that are members of that cluster (the weights are learned).
- Details
- Clustering: concatenate Word2Vec vectors with SentiWordNet scores (2-dimensional vectors for positivity and negativity) and compute Euclidean distances between the concatenated vectors for clustering.
- Study the reusability of the learned kernels.
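The two-step approach above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for pretrained Word2Vec embeddings and SentiWordNet scores, the cluster count and dimensions are made up, and the per-cluster weights `alpha` (learned end-to-end in the paper) are just initialized uniformly here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper these would come from pretrained
# Word2Vec and from SentiWordNet lookups for each k-gram.
n_kgrams, w2v_dim = 200, 50
w2v = rng.normal(size=(n_kgrams, w2v_dim))   # Word2Vec vector per k-gram
senti = rng.random(size=(n_kgrams, 2))       # [positivity, negativity] scores

# Step 1: cluster k-grams on the concatenated features.
# KMeans uses Euclidean distance, matching the notes above.
features = np.hstack([w2v, senti])
n_filters = 8
km = KMeans(n_clusters=n_filters, n_init=10, random_state=0).fit(features)

# Step 2: one filter per cluster, a weighted combination of the Word2Vec
# vectors of that cluster's members. The weights `alpha` are the learnable
# parameters in the paper; here they are simply uniform.
filters = []
for c in range(n_filters):
    members = w2v[km.labels_ == c]                     # (m, w2v_dim)
    alpha = np.full(len(members), 1.0 / len(members))  # learnable weights
    filters.append(alpha @ members)                    # (w2v_dim,)
filters = np.stack(filters)                            # (n_filters, w2v_dim)
```

Each resulting filter lives in the same space as the word embeddings, which is what makes it interpretable: the cluster members tell you which k-grams the filter responds to.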
Results
- Achieves results close to those of Kim (2014).