SIGGRAPH 2024: Research Trends

September 23, 2024

Tags: AI & Machine Learning, Graphics (2D & 3D)

By Miloš Hašan, Senior Research Scientist 

Every year, researchers, artists, developers, filmmakers, and business professionals working at the cutting edge of computer graphics gather for SIGGRAPH. At this year’s SIGGRAPH, we had the chance to learn about groundbreaking advances from around the world.  

Adobe Research at SIGGRAPH 2024 

Adobe Research made a strong impact at the SIGGRAPH 2024 Technical Papers Program, with 23 research papers published.

For example, diffusion models alone were the subject of several papers, which apply them to noise generation (One Noise to Rule Them All), texture editing (TexSliders), and intrinsic decomposition and recomposition (IntrinsicDiffusion, RGB<->X), or which focus on controllability (LooseControl), compositionality (Separate-and-Enhance), and de-occlusion (Object-level Scene Deocclusion).

A traditionally important area for Adobe is vector graphics, with papers addressing two key tasks: turning regular raster images into vector format (Deep Sketch Vectorization) and fast rasterization of vector assets (GPU-accelerated Rendering of Vector Brush Strokes). Recently, directly generating vector assets from text became possible (Text-to-vector Generation With Neural Path Representation). Yet another technique focuses on the generation of repeatable vector patterns in the style of M. C. Escher (Generative Escher Meshes).

A sizeable batch of Adobe papers focused on the representation, generation, editing, and animation of 3D geometry. GEM3D studies the generation of 3D shapes via medial axis skeletons. Another application of the medial axis results in a fast method to cover a 3D surface with a space-filling curve (Surface-Filling Curve Flows via Implicit Medial Axes). The classical topic of adaptively meshing geometry defined by implicit surfaces is further improved in Adaptive Grid Generation for Discretizing Implicit Complexes. And a new tool is introduced for 3D shape deformation (Biharmonic Coordinates and their Derivatives for Triangular 3D Cages).

Other topics include generation of realistic terrain heightfields (Terrain Amplification using Multi-scale Erosion), fluid simulation (Fluid Control with Laplacian Eigenfunctions) and cloth simulation (Progressive Dynamics for Cloth and Shell Animation). Speaking of cloth, another method captures woven fabric appearance from two photos (Woven Fabric Capture With a Reflection-Transmission Photo Pair). 

Finally, a neural method compresses the geometry and appearance of a synthetic 3D shape or scene while supporting arbitrary ray queries (N-BVH: Neural Ray Queries with Bounding Volume Hierarchies).

Our members and collaborators also received several honors for their work. Principal Scientist Aaron Hertzmann was celebrated with the Computer Graphics Achievement Award, and former two-time Adobe Research intern Zachary Ferguson received the Outstanding Doctoral Dissertation Award. Alec Jacobson won an ACM SIGGRAPH Test-of-Time Award (given for the lasting impact of computer graphics research over the last decade) for his 2013 research paper on generalized winding numbers.
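
For readers unfamiliar with the awarded idea: the generalized winding number measures how many times an oriented curve or surface wraps around a query point, and it degrades gracefully when the geometry is open or self-intersecting, which makes it a robust inside/outside test. Here is a minimal 2D sketch of the concept in Python (my own illustration, not code from the paper):

    import numpy as np

    def winding_number(q, verts):
        # Generalized winding number of point q with respect to an oriented
        # polyline (an (n, 2) array of vertices): the total signed angle the
        # curve subtends at q, divided by 2*pi. For a closed polygon it is
        # ~1 inside and ~0 outside; for open or broken curves it varies
        # smoothly, which is what makes it robust for segmentation.
        w = 0.0
        for a, b in zip(verts[:-1], verts[1:]):
            da, db = a - q, b - q
            cross = da[0] * db[1] - da[1] * db[0]
            dot = da[0] * db[0] + da[1] * db[1]
            w += np.arctan2(cross, dot)  # signed angle for this segment
        return w / (2.0 * np.pi)

    # Unit square, counter-clockwise, explicitly closed.
    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
    print(winding_number(np.array([0.5, 0.5]), square))  # ~1.0 (inside)
    print(winding_number(np.array([2.0, 0.5]), square))  # ~0.0 (outside)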

In addition, Adobe Research supported creativity and technical advancements at SIGGRAPH 2024 by sponsoring the Art Papers and Technical Papers Fast Forward. 

Examples of text-guided vector graphics from the Adobe Research co-authored paper “Text-to-Vector Generation with Neural Path Representation”.

Diffusion models applied to new tasks 

Perhaps the biggest topic in recent years is the development of large generative foundational models and their fine-tuning for specific tasks. In the context of SIGGRAPH, this trend leads to a special focus on diffusion models, whether for generating images, videos, or special assets of interest to graphics, such as textures and materials. Here are a few that caught my attention. 
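
To make the core mechanism concrete, here is a toy sketch of DDPM-style ancestral sampling in Python. Because the 1D “data” distribution is Gaussian, the ideal denoiser has a closed form and stands in for the trained network; the schedule and all constants are illustrative assumptions, not any particular paper’s model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "dataset": 1D samples from N(3, 0.5^2). For Gaussian data the ideal
    # denoiser E[x0 | xt] is analytic, standing in for a trained network.
    mu, s = 3.0, 0.5
    T = 200
    betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (assumed)
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)

    def denoise(xt, t):
        # Closed-form posterior mean of x0 given xt (the "network" surrogate).
        a = abar[t]
        return mu + (s**2 * np.sqrt(a) / (a * s**2 + 1.0 - a)) * (xt - np.sqrt(a) * mu)

    # DDPM ancestral sampling: start from pure noise, denoise step by step.
    x = rng.standard_normal(10_000)
    for t in range(T - 1, -1, -1):
        x0_hat = denoise(x, t)
        eps_hat = (x - np.sqrt(abar[t]) * x0_hat) / np.sqrt(1.0 - abar[t])
        x = (x - (1.0 - alphas[t]) / np.sqrt(1.0 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)

    print(x.mean(), x.std())  # should approach 3.0 and 0.5

Real image and video models replace the analytic denoiser with a large U-Net or transformer and add conditioning (e.g., on text), but the sampling loop has the same shape.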

  • Manually authoring materials on shapes is one of the costliest steps in professional content creation. Diffusion models are steadily chipping away at automating this task by generating materials, either in raw form or directly on objects. Generative texture painting on meshes using a diffusion model was also introduced. 
  • Several papers address recent worries about the controllability and consistency of diffusion models. For example, some new methods can generate consistent subjects over a series of images, without requiring fine-tuning of the base model. 
  • For the classical problem of image analogies, introduced over 20 years ago by Aaron Hertzmann (given images A, A’, B, generate an image B’ that relates to B analogously to how A’ relates to A), we saw a new solution based on a diffusion model that does not require fine-tuning. 
  • What about generating images of completely new concepts, never seen in the training data? Surely this would forever remain outside the abilities of machine learning models, limited to concepts within their training distribution, right? Well, a method combining diffusion models and VLMs achieves just that. 
  • Video generation by diffusion also shows steady progress towards controllability, promising to become a viable alternative to 3D animation rendering. Several papers address the controllability challenge. 

Diffusion models are taking on 3D geometry generation, too 

Beyond generating 2D images, videos, textures and materials, computer graphics researchers are obviously interested in generating 3D shapes. Unsurprisingly, most of the solutions involve a diffusion model as a step in their pipelines. 

  • Controllability is a primary topic for 3D generation as well. Researchers are working on several options, including methods that enable control using simple proxy shapes and sketches. 
  • What about generating a whole family of shapes with a consistent style, which creators need for (say) a movie or game that tells a unified story? This is precisely the goal of some methods; another method addresses the problem where the specific family of shapes is a 3D font. 
  • Professional authoring pipelines frequently require 3D shapes with more structure than a simple triangle or Gaussian soup. Researchers are now able to generate the topology and geometry as a hierarchical tree. 

Gaussians, NeRFs, and meshes continue the battle for 3D representation 

The explosive growth of 3D Gaussian splatting techniques started only one year ago, with the original paper on this topic published at SIGGRAPH 2023. Now, several papers are pushing the boundaries of the Gaussian representation.  

  • Highly reflective surfaces pose challenges for the original method, but are very common in practice, including virtually all cars, among many other objects. High-quality rendering of these surfaces is now possible. 
  • An unsatisfying feature of 3D Gaussians has been the need to flatten them into 2D “billboards” during practical rendering. New methods now allow this to be done with improved accuracy, or avoid it altogether by embracing 2D Gaussian disks instead (the flattening step itself is sketched after this list). 
  • Conversely, why should Gaussian representations stop at 3 dimensions? Further work on adapting Gaussian splatting to dynamic scenes has led to several new methods, and N-dimensional Gaussian mixture fitting brings practical benefits as well. 
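
To make the flattening step concrete, the sketch below projects a 3D Gaussian’s covariance to a 2D screen-space covariance using the Jacobian of the perspective projection, as in EWA splatting and the original Gaussian splatting pipeline. It assumes the covariance is already expressed in camera coordinates and uses a simple pinhole model with focal length focal; both are illustrative simplifications.

    import numpy as np

    def project_covariance(cov3d, mean_cam, focal):
        # Project a camera-space 3D covariance to a 2x2 screen-space covariance.
        # J is the Jacobian of the pinhole projection (u, v) = focal * (x/z, y/z),
        # linearized at the Gaussian's mean -- the "flattening" discussed above.
        x, y, z = mean_cam
        J = np.array([[focal / z, 0.0, -focal * x / z**2],
                      [0.0, focal / z, -focal * y / z**2]])
        return J @ cov3d @ J.T

    # Isotropic Gaussian two units in front of the camera.
    cov2d = project_covariance(np.eye(3) * 0.01, np.array([0.1, -0.2, 2.0]), focal=500.0)
    print(cov2d)  # covariance of the screen-space "billboard"

The 2D-disk approach mentioned above sidesteps this linearization by making the primitive itself two-dimensional.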

Some in the research world expected Gaussian representations to fully replace volumetric NeRF solutions. This has not happened, and progress on the NeRF front has been impressive. 

  • For example, an elegant generalization of the well-known triplane representation beats all existing NeRF- and Gaussian-based representations in terms of 3D object representation quality (the vanilla triplane lookup it builds on is sketched after this list). 
  • Researchers presented interesting developments in the use of NeRF representations for lighting, whether as replacement for environment maps or for fully dynamic light transport. 
  • High quality, interactive rendering of large scenes is an essential need, and the state of the art is currently held by a NeRF method. 
  • Finally, meshes are extremely fast to render but have recently been overshadowed by NeRFs and Gaussians for novel view synthesis of real scenes, largely due to their weaker ability to represent subtle detail. These limitations may not be fundamental, however: an in-depth analysis of mesh aliasing leads to high-quality view synthesis using meshes. 
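
For context, a vanilla triplane lookup works as sketched below: project the 3D query point onto three axis-aligned feature planes, sample each bilinearly, and combine the features (here by summation) before a small decoder MLP (omitted) turns them into density and color. The plane resolution, channel count, and combination rule are illustrative assumptions.

    import numpy as np

    def bilerp(plane, u, v):
        # Bilinearly sample an (R, R, C) feature plane at (u, v) in [-1, 1]^2.
        R = plane.shape[0]
        fu, fv = (u + 1) * 0.5 * (R - 1), (v + 1) * 0.5 * (R - 1)
        i, j = min(int(fu), R - 2), min(int(fv), R - 2)
        a, b = fu - i, fv - j
        return ((1 - a) * (1 - b) * plane[i, j] + a * (1 - b) * plane[i + 1, j]
                + (1 - a) * b * plane[i, j + 1] + a * b * plane[i + 1, j + 1])

    def triplane_query(p, planes):
        # Sum the features gathered from the xy, xz, and yz planes.
        x, y, z = p
        pxy, pxz, pyz = planes
        return bilerp(pxy, x, y) + bilerp(pxz, x, z) + bilerp(pyz, y, z)

    rng = np.random.default_rng(0)
    planes = [rng.standard_normal((64, 64, 16)) for _ in range(3)]
    print(triplane_query(np.array([0.1, -0.3, 0.7]), planes).shape)  # (16,)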

Stochastic and Monte Carlo techniques are still bringing us new discoveries 

The classical field of Monte Carlo sampling and rendering, where my research journey started, has been considered “solved” many times, but never correctly. Research presented at SIGGRAPH 2024 shows surprising progress on these classical topics. 

  • A new low-discrepancy sampling sequence manages to improve over the venerable Sobol sequence in terms of sample uniformity in 2D and 4D. 
  • Surface-based and volumetric material models can be unified by taking the average appearance over many stochastically sampled implicit surfaces. 
  • Another classical topic, importance-sampling analytic material models, requires surprisingly different techniques in the context of differentiable rendering. 
  • Speaking of differentiable rendering, analytically evaluating the gradient may not always be possible, but a new method shows that a clever fitting of a neural network to local samples of a function gives a surrogate whose gradient can work as well as the true unknown gradient. 
  • A very interesting recent development is the application of Monte Carlo techniques to the field of physical simulation (e.g., fluid simulation), an area recently dominated by complex discretization and meshing techniques, especially when dealing with non-trivial boundary conditions. It will be exciting to watch the growth of Monte Carlo applications to simulation. At SIGGRAPH 2024, we saw applications of this trend to fluid simulation and solving Poisson equations; a minimal grid-free estimator in this spirit is sketched after this list. 
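
The grid-free flavor of these techniques is easiest to see in the classic walk-on-spheres estimator, sketched below for the Laplace equation on a unit disk with boundary data whose harmonic extension is known exactly. This is a textbook illustration of the general approach, not the specific SIGGRAPH 2024 methods.

    import numpy as np

    rng = np.random.default_rng(0)

    def walk_on_spheres(p, boundary_g, dist_to_boundary, eps=1e-3, n_walks=5000):
        # Estimate the harmonic function u at p: repeatedly jump to a uniform
        # random point on the largest circle around the current point that fits
        # inside the domain; once within eps of the boundary, read off the
        # boundary value. No grid or mesh is ever built.
        total = 0.0
        for _ in range(n_walks):
            x = p.copy()
            while (d := dist_to_boundary(x)) > eps:
                theta = rng.uniform(0.0, 2.0 * np.pi)
                x = x + d * np.array([np.cos(theta), np.sin(theta)])
            total += boundary_g(x)
        return total / n_walks

    # Domain: unit disk. Boundary data g(x, y) = x, whose harmonic extension
    # is exactly u(x, y) = x, so the estimate can be checked directly.
    dist = lambda x: 1.0 - np.linalg.norm(x)
    g = lambda x: x[0]
    print(walk_on_spheres(np.array([0.3, 0.4]), g, dist))  # close to 0.3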

Wondering what else is happening inside Adobe Research? Check out our latest news here. 
