Quan Hung Tran

Research Scientist

San Jose

I am a researcher at the Imagination Lab, Adobe Research. My research interests span NLP, Artificial Intelligence, and Machine Learning.

In my previous research, I focused on language sequence modelling with Recurrent Neural Networks, incorporating hierarchical representations, gated attention, uncertainty propagation, stacked residual learning, and context-dependent and structure-dependent models to improve the precision, efficiency, and interpretability of current RNN architectures. At the moment, I am particularly interested in efficient and accurate models for text processing in low-to-medium resource scenarios, with applications to dialog systems and sequence modelling.

I am looking for interns working on Language Generation, Large-scale Dynamic Classification, and Dialog Control. Information about the Adobe internship program can be found here.


Improving Aspect-based Sentiment Analysis with Gated Graph Convolutional Networks and Syntax-based Regulation

Veyseh, A., Nouri, N., Dernoncourt, F., Tran, Q., Dou, D., Nguyen, T. (Nov. 18, 2020)

EMNLP Findings 2020

Scene Graph Modification Based on Natural Language Commands

He, X., Tran, Q., Haffari, G., Chang, W., Lin, Z., Bui, T., Dernoncourt, F., Dam, N. (Nov. 18, 2020)

EMNLP Findings 2020

Rethinking Self-Attention: Towards Interpretability in Neural Parsing

Mrini, K., Dernoncourt, F., Tran, Q., Bui, T., Chang, W., Nakashole, N. (Nov. 18, 2020)

EMNLP Findings 2020