Researcher Spotlight: Jingwan (Cynthia) Lu

March 7, 2018

Tags: Careers, Researcher Spotlights


Research scientist Jingwan (Cynthia) Lu first came to Adobe Research as an intern while pursuing her PhD in computer science at Princeton. She completed four internships with the research team, publishing four research papers in leading venues and producing a landmark technology, Real Brush, which became a digital painting feature in Adobe Photoshop Sketch. Now a full-time researcher, she’s on a quest to develop sophisticated machine-learning-powered tools for creatives.

What do you focus on at Adobe Research?

In graduate school, I figured out that I wanted to work on digital painting, but from a novel, data-driven perspective. That’s different from traditional, simulation-based digital painting that models physical interactions between canvas, pigments, and painting instruments. I asked: We have pictures of actual brushstrokes out there, pictures of paint—how can we use that data to enrich the appearance of digital painting? That’s how Real Brush was born.

Now, I want to bring a data-driven, intelligent support approach to other creative processes, especially image editing and synthesis. I want to know how we can use machine intelligence and the vast amount of data available today to help make the process of working with images more intuitive, and to free artists for the real creative task.

Scribbler is one of my first projects in this area. I hired an intern, Patsorn Sangkloy from Georgia Tech, to partner with me on this. Scribbler was chosen for an Adobe MAX demo in 2017. It’s an interactive system that colors and textures your images, powered by machine learning and Adobe Sensei.

You are involved in cutting-edge work on GANs, generative adversarial networks. Could you tell us about this area?

On a broad level, we want to know: How can we edit images in a more intelligent way? Instead of pixel-level editing, can we allow the swipe of a finger, a simple click, or a scribble to achieve intelligent editing?

GANs are a machine learning technique, and they may hold the key. We feed lots of data into these networks. Once trained on the thousands of images we expose it to, the system can create new image content. We can leverage that to develop powerful tools.

This kind of machine learning can be very complex. Scribbler uses a specific type of conditional GAN. It has one network that learns to understand input images and generates modifications constrained by them, and another network that learns to discriminate whether the generated image looks real.
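The generator-versus-discriminator setup described here can be sketched in miniature. The toy below is an illustrative assumption, not Scribbler's actual model: instead of images, the "data" is a single number whose value depends on a condition label, the two networks are tiny linear models, and the gradients of the standard adversarial losses are written out by hand. The structure is the same: the generator maps (noise, condition) to a sample, the discriminator scores (sample, condition) pairs as real or fake, and the two are updated in alternation.

```python
# Toy conditional GAN in pure Python. All names and the 1-D "dataset" are
# illustrative; a real system like Scribbler uses deep convolutional networks.
import math
import random

random.seed(0)

def real_sample(c):
    """Real data: values near 2*c + 1 for condition label c (0.0 or 1.0)."""
    return 2.0 * c + 1.0 + random.gauss(0.0, 0.1)

class Generator:
    """Linear generator: x_fake = w*c + v*z + b, conditioned on c."""
    def __init__(self):
        self.w, self.v, self.b = 0.0, 0.1, 0.0
    def __call__(self, z, c):
        return self.w * c + self.v * z + self.b

class Discriminator:
    """Linear discriminator: sigmoid(a*x + e*c + f) = probability 'real'."""
    def __init__(self):
        self.a, self.e, self.f = 0.1, 0.0, 0.0
    def __call__(self, x, c):
        return 1.0 / (1.0 + math.exp(-(self.a * x + self.e * c + self.f)))

G, D, lr = Generator(), Discriminator(), 0.05

for step in range(4000):
    c = float(random.randint(0, 1))
    z = random.gauss(0.0, 1.0)
    x_r, x_f = real_sample(c), G(z, c)

    # Discriminator step: push D(real) toward 1, D(fake) toward 0
    # (hand-derived gradients of the binary cross-entropy loss).
    d_r, d_f = D(x_r, c), D(x_f, c)
    D.a += lr * ((1 - d_r) * x_r - d_f * x_f)
    D.e += lr * ((1 - d_r) * c - d_f * c)
    D.f += lr * ((1 - d_r) - d_f)

    # Generator step: non-saturating loss -log D(fake), i.e. try to fool D.
    d_f = D(G(z, c), c)
    grad = (1 - d_f) * D.a  # chain rule through the discriminator
    G.w += lr * grad * c
    G.v += lr * grad * z
    G.b += lr * grad
```

After training, the generator's output shifts with the condition label, mirroring how a conditional GAN constrains what it synthesizes by the input it is given.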

I am a leader of an internal initiative that focuses on using GANs to help with image editing and image synthesis. We want to come up with an overall vision, share ideas, and collaborate on projects, including Scribbler and others.

I also work on makeup transfer, where you can take the makeup from a face in one image and apply it to a face in another. This also relies on conditional GANs. It was shown in a diversity talk at Adobe, chosen as a positive example of the benefits of a diverse workforce working on these kinds of questions.

The connection between art, photos, and data isn’t necessarily clear to people outside your field. What motivates you to focus on using data to further creativity?

I’m passionate about data. Simulation is good and powerful, but it is based on limited human knowledge. We are creative, but we can’t keep that much knowledge within one head. Our progress is slower than the explosion in data happening today.

I’ve always been fascinated by how much knowledge there is in data, and also how unstructured and complicated the data appears “in the wild.” And I think if we can figure out a way to have machines learn about the data—especially to enable machines to do unsupervised learning—there is great power there. Computers can help us find the inherent structure and order in the data and apply it to specific tasks.

Do you have any concerns about what that means for humans?

I think humans are still best at creativity. We tell machines what kinds of decisions to make, and we can judge how much we should believe the machines’ answers. Machines have to rely on seeing a large, inclusive set of examples. I am very confident about the power of machine learning, but I do not see it replacing human creativity. Rather, it could enhance what people can do.

How does the environment at Adobe Research help advance your work?

Adobe Research is a very open, friendly, collaborative environment. Here, I have the absolute freedom to collaborate with a broad range of people on our team and throughout Adobe, as well as with university partners. There’s also a vibrant internship culture. And it’s a good fit because I am interested in applied research. I like to see my work being used.

What would you say to someone who might be interested in pursuing research at Adobe Research?

I would encourage PhD candidates not to consider jobs in this research environment as inferior to faculty positions in universities. Our roles have a lot of similarities with faculty jobs, and some advantages. In industry, we have data, computing power, and resources. So in a way, we are in a better position than universities to make fundamental progress in computer science.

Jingwan (Cynthia) Lu is training computers to transfer makeup styles from one face to another using sophisticated GANs.

Based on an interview with Meredith Alexander Kunz, Adobe Research
