Adobe Research at CVPR 2021

June 22, 2021

Pictured above: a method from one of Adobe's CVPR 2021 papers that generates highly detailed, high-resolution depth estimates from a single image.

Adobe actively participates in the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) each year. At this year's conference, taking place from June 19 through June 25, Adobe will present 42 co-authored papers: 12 oral papers, 27 posters, and 3 workshop papers. In addition, two of the papers have been nominated as best paper candidates.

Adobe authors have also contributed to the conference in many other ways, including co-organizing several workshops, area chairing, and reviewing papers. In addition, Adobe reviewers received several best reviewer awards.

Nearly all of Adobe’s papers are the result of student internships or other collaborations with university students and faculty. Check out the Adobe Research internships and full-time careers pages to learn more about internship and full-time career opportunities.

Here are Adobe’s contributions to CVPR 2021.

Oral Papers

A Deep Emulator for Secondary Motion of 3D Characters
Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbič
Black-box Explanation of Object Detectors via Saliency Maps
Vitali Petsiuk, Rajiv Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, Kate Saenko
DECOR-GAN: 3D Shape Detailization by Conditional Refinement
Zhiqin Chen, Vladimir G. Kim, Matthew Fisher, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri
DeepMetaHandles: Learning Deformation Meta-Handles of 3D Meshes with Biharmonic Coordinates
Minghua Liu, Minhyuk Sung, Radomir Mech, Hao Su
Im2Vec: Synthesizing Vector Graphics without Vector Supervision
Pradyumna Reddy, Michael Gharbi, Michal Lukac, Niloy Mitra
Learning Delaunay Surface Elements for Mesh Reconstruction
Marie-Julie Rakotosaona, Paul Guerrero, Noam Aigerman, Niloy Mitra, Maks Ovsjanikov
Learning to Recover 3D Scene Shape from a Single Image
Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, Chunhua Shen
Best Paper Nominee
NeuTex: Neural Texture Mapping for Volumetric Neural Rendering
Fanbo Xiang, Zexiang Xu, Milos Hasan, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Hao Su
OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets
Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, Yuhan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker
Rethinking and Improving the Robustness of Image Style Transfer
Pei Wang, Yijun Li, Nuno Vasconcelos
Best Paper Nominee
SSN: Soft Shadow Network for Image Compositing
Yichen Sheng, Jianming Zhang, Bedrich Benes
StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation
Zongze Wu, Dani Lischinski, Eli Shechtman

Poster Papers

Anycost GANs for Interactive Image Synthesis and Editing
Ji Lin, Richard Zhang, Frieder Ganz, Song Han, Jun-Yan Zhu
Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging
S. Mahdi H. Miangoleh, Sebastian Dille, Long Mai, Sylvain Paris, Yagiz Aksoy
Content-Aware GAN Compression
Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, S.Y. Kung
Deep Denoising of Flash and No-Flash Pairs for Photography in Low-Light Environments
Zhihao Xia, Michael Gharbi, Federico Perazzi, Kalyan Sunkavalli, Ayan Chakrabarti
Ensembling with Deep Generative Views
Lucy Chai, Jun-Yan Zhu, Eli Shechtman, Phillip Isola, Richard Zhang
Exemplar-Based Open-Set Panoptic Segmentation Network
Jaedong Hwang, Seoung Wug Oh, Joon-Young Lee, Bohyung Han
Exploiting Semantic Embedding and Visual Feature for Facial Action Unit Detection
Huiyuan Yang, Lijun Yin, Yi Zhou, Jiuxiang Gu
Few-shot Image Generation via Cross-domain Correspondence
Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang
IMAGINE: Image Synthesis by Image-Guided Model Inversion
Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos
Joint Learning of 3D Shape Retrieval and Deformation
Mikaela Angelina Uy, Vladimir G. Kim, Minhyuk Sung, Noam Aigerman, Siddhartha Chaudhuri, Leonidas Guibas
LayoutGMN: Neural Graph Matching for Structural Layout Similarity
Akshay Gadi Patil, Manyi Li, Matthew Fisher, Manolis Savva, Hao Zhang
Learning by Planning: Language-Guided Global Image Editing
Jing Shi, Ning Xu, Yihang Xu, Trung Bui, Franck Dernoncourt, Chenliang Xu
Learning to Associate Every Segment for Video Panoptic Segmentation
Sanghyun Woo, Dahun Kim, Joon-Young Lee, In So Kweon
Learning to Predict Visual Attributes in the Wild
Khoi Pham, Kushal Kafle, Zhe Lin, Zhihong Ding, Scott Cohen, Quan Tran, Abhinav Shrivastava
Magic Layouts: Structural Prior for Component Detection in User Interface Designs
Dipu Manandhar, Hailin Jin, John Collomosse
Mask Guided Matting via Progressive Refinement Network
Qihang Yu, Jianming Zhang, He Zhang, Yilin Wang, Zhe Lin, Ning Xu, Yutong Bai, Alan Yuille
Multimodal Contrastive Training for Visual Representation Learning
Xin Yuan, Zhe Lin, Jason Kuen, Jianming Zhang, Yilin Wang, Michael Maire, Ajinkya Kale, Baldo Faieta
Multi-Scale Aligned Distillation for Low-Resolution Detection
Lu Qi, Jason Kuen, Jiuxiang Gu, Zhe Lin, Yi Wang, Yukang Chen, Yanwei Li, Jiaya Jia
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang
Neural Surface Maps
Luca Morreale, Noam Aigerman, Vladimir Kim, Niloy J. Mitra
Polygonal Point Set Tracking
Gunhee Nam, Miran Heo, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach
Xingqian Xu, Zhifei Zhang, Zhaowen Wang, Brian Price, Zhonghao Wang, Humphrey Shi
Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences
Norman Müller, Yu-Shiang Wong, Niloy J. Mitra, Angela Dai, Matthias Nießner
SelfDoc: Self-Supervised Document Representation Learning
Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, Hongfu Liu
Spatially-Adaptive Pixelwise Networks for Fast Image Translation
Tamar Rott Shaham, Michael Gharbi, Richard Zhang, Eli Shechtman, Tomer Michaeli
Tackling the Ill-Posedness of Super-Resolution through Adaptive Target Generation
Younghyun Jo, Seoung Wug Oh, Peter Vajda, Seon Joo Kim
TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations
Yuqian Zhou, Connelly Barnes, Eli Shechtman, Sohrab Amirghodsi

Workshop Papers
AICC 2021 – AI for Content Creation Workshop
Directional GAN: A Novel Conditioning Strategy for Generative Networks
Shradha Agrawal, Dhanya Raghu, Deepak Pai, Shankar Venkitachala
WMF 2021 – Workshop on Media Forensics
Deep Image Comparator: Learning to Visualize Editorial Change
Alexander Black, Tu Bui, Hailin Jin, Vishy Swaminathan, John Collomosse
MULA 2021 – Multimodal Learning and Applications Workshop
APES: Audiovisual Person Search in Untrimmed Video
Juan Carlos Leon, Fabian Caba Heilbron, Long Mai, Federico Perazzi, Joon-Young Lee, Pablo Arbelaez, Bernard Ghanem

Workshop Co-organizer
Large-scale Video Object Segmentation Challenge
Ning Xu, Joon-Young Lee
AI for Content Creation
Cynthia Lu, Kalyan Sunkavalli
Sketch-Oriented Deep Learning (SketchDL)
John Collomosse
International Challenge on Activity Recognition (ActivityNet)
Fabian Caba
LatinX in AI
Fabian Caba

Invited Workshop Talks
Learning from Unlabeled Videos Workshop
Bryan Russell
Chart Question Answering Workshop
Kushal Kafle, Zoya Bylinskii

LatinX in AI
Ruben Villegas

Sight and Sound Workshop
Justin Salamon

Outstanding Reviewers from Adobe

Bryan Russell, Connelly Barnes, Cynthia Lu, Eli Shechtman, Fabian Caba, Jianming Zhang, Jimei Yang, John Collomosse, Joon-Young Lee, Kushal Kafle, Matt Fisher, Michael Gharbi, Michal Lukáč, Paul Guerrero, Ruben Villegas, Siddhartha Chaudhuri, Simon Niklaus, Taesung Park, Thibault Groueix, Tobias Hinz, Vova Kim, Zhixin Shu

The complete list of outstanding reviewers can be found on the CVPR 2021 website.
