Quan Hung Tran

Research Scientist

San Jose

I am a researcher at Imagination Lab, Adobe Research. My research interests include areas of NLP, Artificial Intelligence and Machine Learning.

In my previous research, I focused on language sequence modelling using Recurrent Neural Networks, incorporating hierarchical representations, gated attention, uncertainty propagation, stacked residual learning, and context-dependent and structure-dependent models to improve the precision, efficiency, and interpretability of current RNN architectures. At the moment, I am particularly interested in efficient and accurate models for text processing in low-to-medium resource scenarios, with applications to dialog systems and sequence modelling.

I am looking for interns working on language generation, large-scale dynamic classification, and dialog control. Information about the Adobe internship program can be found here.


Boosting Punctuation Restoration with Data Generation and Reinforcement Learning

Lai, V., Salinas, A., Tan, H., Bui, T., Tran, Q., Yoon, D., Deilamsalehy, H., Dernoncourt, F., Nguyen, T. (Aug. 24, 2023)

Interspeech 2023

LayerDoc: Layer-wise Extraction of Spatial Hierarchical Structure in Visually-Rich Documents

Mathur, P., Jain, R., Mehra, A., Gu, J., Dernoncourt, F., Natarajan, A., Tran, Q., Kaynig-Fittkau, V., Nenkova, A., Manocha, D., Morariu, V. (Jan. 6, 2023)

WACV 2023

Keyphrase Prediction from Video Transcripts: New Dataset and Directions

Veyseh, A., Tran, Q., Yoon, D., Manjunatha, V., Deilamsalehy, H., Jain, R., Bui, T., Chang, W., Dernoncourt, F., Nguyen, T. (Oct. 17, 2022)


DocLayoutTTS: Dataset and Baselines for Layout-informed Document-level Neural Speech Synthesis

Mathur, P., Dernoncourt, F., Tran, Q., Gu, J., Nenkova, A., Morariu, V., Jain, R., Manocha, D. (Sep. 22, 2022)

Interspeech 2022

DocTime: A Document-level Temporal Dependency Graph Parser

Mathur, P., Morariu, V., Kaynig-Fittkau, V., Gu, J., Dernoncourt, F., Tran, Q., Nenkova, A., Manocha, D., Jain, R. (Jul. 15, 2022)

NAACL 2022

Multimodal Intent Discovery from Livestream Videos

Maharana, A., Tran, Q., Yoon, D., Dernoncourt, F., Bui, T., Chang, W., Bansal, M. (Jul. 11, 2022)

Findings of NAACL 2022

Transfer Learning and Prediction Consistency for Detecting Offensive Spans of Text

Veyseh, A., Xu, N., Tran, Q., Manjunatha, V., Dernoncourt, F., Nguyen, T. (May 27, 2022)

Findings of ACL 2022

Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning

Zhang, J., Bui, T., Yoon, D., Chen, X., Liu, Z., Xia, C., Tran, Q., Chang, W., Yu, P. (Nov. 9, 2021)

EMNLP 2021

TIMERS: Document-level Temporal Relation Extraction

Mathur, P., Jain, R., Dernoncourt, F., Morariu, V., Tran, Q., Manocha, D. (Aug. 4, 2021)

ACL 2021

A Context-Dependent Gated Module for Incorporating Symbolic Semantics into Event Coreference Resolution

Lai, T., Ji, H., Bui, T., Tran, Q., Dernoncourt, F., Chang, W. (Jun. 11, 2021)

NAACL 2021

Inducing Rich Interaction Structures between Words for Document-level Event Argument Extraction

Veyseh, A., Dernoncourt, F., Tran, Q., Manjunatha, V., Wang, L., Jain, R., Kim, D., Chang, W., Nguyen, T. (May 14, 2021)

PAKDD 2021

What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation

Veyseh, A., Dernoncourt, F., Tran, Q., Nguyen, T. (Dec. 13, 2020)


Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering

Tran, Q., Dam, N., Lai, T., Dernoncourt, F., Le, T., Le, N., Phung, D. (Dec. 13, 2020)


Improving Aspect-based Sentiment Analysis with Gated Graph Convolutional Networks and Syntax-based Regulation

Veyseh, A., Nouri, N., Dernoncourt, F., Tran, Q., Dou, D., Nguyen, T. (Nov. 18, 2020)

Findings of EMNLP 2020

Scene Graph Modification Based on Natural Language Commands

He, X., Tran, Q., Haffari, G., Chang, W., Lin, Z., Bui, T., Dernoncourt, F., Dam, N. (Nov. 18, 2020)

Findings of EMNLP 2020

Rethinking Self-Attention: Towards Interpretability in Neural Parsing

Mrini, K., Dernoncourt, F., Tran, Q., Bui, T., Chang, W., Nakashole, N. (Nov. 18, 2020)

Findings of EMNLP 2020