Brian Budge
Publications
Enabling Increased Complexity for Realistic Image Synthesis
B. Budge. PhD dissertation, University of California, Davis, 2009. [PDF]
@phdthesis{budge2009dissertation,
  author = {Brian C. Budge},
  title  = {Enabling Increased Complexity for Realistic Image Synthesis},
  school = {University of California, Davis},
  year   = {2009},
  month  = dec,
}

Abstract A discussion of much of the work I performed during my PhD at Davis.

Out-of-core Data Management for Path Tracing on Hybrid Resources
B. Budge, T. Bernardin, J. Stuart, S. Sengupta, K. Joy, J. Owens. In Eurographics, 2009. [PDF]
@inproceedings{budge2009outofcore,
  author = {Brian C. Budge and Tony Bernardin and Jeff A. Stuart and
            Shubhabrata Sengupta and Kenneth I. Joy and John D. Owens},
  title  = {Out-of-core Data Management for Path Tracing on Hybrid Resources},
  booktitle = {Eurographics},
  year   = {2009},
}

Abstract We present a software system that enables path-traced rendering of complex scenes. The system consists of two primary components: an application layer that implements the basic rendering algorithm, and an out-of-core scheduling and data-management layer designed to assist the application layer in exploiting hybrid computational resources (e.g., CPUs and GPUs) simultaneously. We describe the basic system architecture, discuss design decisions of the system's data-management layer, and outline an efficient implementation of a path tracer application, where GPUs perform functions such as ray tracing, shadow tracing, importance-driven light sampling, and surface shading. The use of GPUs speeds up the runtime of these components by factors ranging from two to twenty, resulting in a substantial overall increase in rendering speed. The path tracer scales well with respect to CPUs, GPUs and memory per node as well as scaling with the number of nodes. The result is a system that can render large complex scenes with strong performance and scalability.

Caustic Forecasting: Unbiased Estimation of Caustic Lighting for Global Illumination
B. Budge, J. Anderson, K. Joy. In Pacific Graphics, 2008. [PDF]
@inproceedings{budge2008caustic,
  author = {Brian C. Budge and John C. Anderson and Kenneth I. Joy},
  title  = {Caustic Forecasting: Unbiased Estimation of Caustic
            Lighting for Global Illumination},
  booktitle = {Pacific Graphics},
  year   = {2008},
}

Abstract We present an unbiased method for generating caustic lighting using importance-sampled path tracing with Caustic Forecasting. Our technique is part of a straightforward rendering scheme which extends the Illumination by Weak Singularities method to allow for fully unbiased global illumination with rapid convergence. A photon shooting preprocess, similar to that used in Photon Mapping, generates photons that interact with specular geometry. These photons are then clustered, effectively dividing the scene into regions which will contribute similar amounts of caustic lighting to the image. Finally, the photons are stored in spatial data structures associated with each cluster, and the clusters themselves are organized into a spatial data structure for fast searching. During rendering we use clusters to decide the caustic energy importance of a region, and use the local photons to aid in importance sampling, effectively reducing the number of samples required to capture caustic lighting.

Stack-based Visualization of Out-of-Core Algorithms
T. Bernardin, B. Budge, B. Hamann. In ACM SoftVis, 2008. [PDF]
@inproceedings{bernardin2008stack,
  author = {Tony Bernardin and Brian Budge and Bernd Hamann},
  title  = {Stack-based Visualization of Out-of-Core Algorithms},
  booktitle = {ACM SoftVis},
  year   = {2008},
  publisher = {IEEE Computer Society},
  address = {Los Alamitos, CA, USA},
}

Abstract We present a visualization system to assist designers of scheduling-based multi-threaded out-of-core algorithms. Our system facilitates the understanding and improving of the algorithm through a stack of visual widgets that effectively correlate the out-of-core system state with scheduling decisions. The stack presents an increasing refinement in the scope of both time and abstraction level; at the top of the stack, the evolution of a derived efficiency measure is shown for the scope of the entire out-of-core system execution, and at the bottom the details of a single scheduling decision are displayed. The stack provides much more than a temporal zoom effect, as each widget presents a different view of the scheduling decision data, presenting distinct aspects of the out-of-core system state as well as correlating them with the neighboring widgets in the stack. This approach allows designers to home in on problems in scheduling or algorithm design. As a case study we consider a global illumination renderer and show how visualization of the scheduling behavior has led to key improvements of the renderer's performance.

Accelerated Building and Ray Tracing of Restricted BSP Trees
B. Budge, D. Coming, D. Norpchen, K. Joy. In Symposium on Interactive Ray Tracing, 2008. [PDF]
@inproceedings{budge2008rbsp,
  author = {Brian C. Budge and Daniel Coming and Derek Norpchen and Kenneth I. Joy},
  title  = {Accelerated Building and Ray Tracing of Restricted BSP Trees},
  booktitle = {Symposium on Interactive Ray Tracing},
  year   = {2008},
}

Abstract We present algorithms for building and ray tracing restricted BSP trees. The build algorithm uses a dynamic programming technique to compute coefficients that allow efficient calculation of the surface area heuristic. This algorithm reduces asymptotic runtime, and has significant impact on tree building time. Additionally, we make several simple observations which lead to very fast ray-tree traversal of RBSPs. Our new traversal algorithm is based on state-of-the-art kd-tree traversal algorithms, and effectively increases the speed of ray tracing RBSPs by an order of magnitude. We show that RBSP trees are not only practical to build, but that RBSP trees are nearly as fast to ray trace as kd-trees, generally accepted as the fastest ray acceleration structure.
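The surface area heuristic mentioned above scores candidate splitting planes by the expected cost of traversing each child; a minimal sketch of that cost function follows. The function names, cost constants, and example splits are illustrative, not taken from the paper.

```python
# Illustrative sketch of the surface area heuristic (SAH) used to score a
# candidate splitting plane; constants and names are assumptions.

def sah_cost(parent_area, left_area, right_area, n_left, n_right,
             traversal_cost=1.0, intersect_cost=1.5):
    """Expected cost of a split: one traversal step plus intersection
    work in each child, weighted by the probability (surface-area
    ratio) that a random ray enters that child."""
    p_left = left_area / parent_area
    p_right = right_area / parent_area
    return traversal_cost + intersect_cost * (p_left * n_left + p_right * n_right)

# A split that isolates many primitives in a small child scores better:
balanced = sah_cost(2.0, 1.0, 1.0, 50, 50)
skewed   = sah_cost(2.0, 0.2, 1.8, 90, 10)
```

The build algorithm in the paper speeds up exactly this evaluation by precomputing, via dynamic programming, the coefficients needed to obtain the per-candidate areas and counts cheaply.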

A hybrid CPU-GPU Implementation for Interactive Ray-Tracing of Dynamic Scenes
B. Budge, J. Anderson, C. Garth, K. Joy. In UC Davis Computer Science Tech Reports, 2008. [PDF]
@techreport{budge2008hybrid,
  author = {Brian C. Budge and John C. Anderson and Christoph Garth and Kenneth I. Joy},
  title  = {A hybrid CPU-GPU Implementation for
            Interactive Ray-Tracing of Dynamic Scenes},
  institution = {University of California, Davis Computer Science},
  year   = {2008},
  number = {CSE-2008-9},
}

Abstract In recent years, applying the powerful computational resources delivered by modern GPUs to ray tracing has resulted in a number of ray tracing implementations that allow rendering of moderately sized scenes at interactive speeds. For non-static scenes, besides ray tracing performance, fast construction of acceleration data structures such as kd-trees is of primary concern. In this paper, we present a novel implementation for the ray tracing of both static and dynamic scenes. We first describe an optimized GPU-based ray tracing approach within the CUDA framework that does not explicitly make use of ray coherency or architectural specifics and is therefore simple to implement, while still exceeding performance of previously presented approaches. Optimal performance is achieved by empirically tuning the ray tracing kernel to the executing hardware. Furthermore, we describe a straightforward parallel approach for approximate quality kd-tree construction, aimed at multi-core CPUs. The resulting hybrid ray tracer is able to render fully dynamic scenes with hundreds of thousands of triangles at interactive speeds. We describe our implementation in detail and provide a performance analysis and comparison to prior work.

Geometric Texturing Using Level Sets
A. Brodersen, K. Museth, S. Porumbescu and B. Budge. In IEEE Transactions on Visualization and Computer Graphics, 14(2), 2008. [PDF]
@article{brodersen2008geometric,
  author = {Anders Brodersen and Ken Museth and Serban Porumbescu and Brian Budge},
  title  = {Geometric Texturing Using Level Sets},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  volume = {14},
  number = {2},
  year   = {2008},
  issn   = {1077-2626},
  pages  = {277--288},
  publisher = {IEEE Computer Society},
  address = {Los Alamitos, CA, USA},
}

Abstract We present techniques for warping and blending (or subtracting) geometric textures onto surfaces represented by high resolution level sets. The geometric texture itself can be represented either explicitly as a polygonal mesh or implicitly as a level set. Unlike previous approaches, we can produce topologically connected surfaces with smooth blending and low distortion. Specifically, we offer two different solutions to the problem of adding fine-scale geometric detail to surfaces. Both solutions assume a level set representation of the base surface which is easily achieved by means of a mesh-to-level-set scan conversion. To facilitate our mapping, we parameterize the embedding space of the base level set surface using fast particle advection. We can then warp explicit texture meshes onto this surface at nearly interactive speeds or blend level set representations of the texture to produce high-quality surfaces with smooth transitions.
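The blending of level set representations described above can be illustrated with signed distance fields: the union of two implicit surfaces is a pointwise minimum, and a smooth transition can be obtained with a smoothed minimum. The fields and the exponential blend operator below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Hedged sketch of blending two implicit surfaces represented as signed
# distance fields (SDFs). The smooth-min operator here is one common
# choice; the paper's actual blend is more sophisticated.

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return lambda p: np.linalg.norm(p - center) - radius

def smooth_min(a, b, k=8.0):
    """Exponential smooth minimum: approaches min(a, b) as k grows,
    but rounds off the crease where the two surfaces meet."""
    return -np.log(np.exp(-k * a) + np.exp(-k * b)) / k

base    = sphere_sdf(np.array([0.0, 0.0, 0.0]), 1.0)  # base surface
texture = sphere_sdf(np.array([1.0, 0.0, 0.0]), 0.5)  # geometric detail

p = np.array([0.5, 0.0, 0.0])
blended = smooth_min(base(p), texture(p))  # slightly below min(a, b)
```

Because the smooth minimum dips slightly below the hard minimum near the junction, the blended zero level set transitions smoothly between the base surface and the added detail.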

Dense Geometric Flow Visualization
Sung Park, Brian C. Budge, Lars Linsen, Bernd Hamann, Kenneth I. Joy. In VGTC Symposium on Visualization, 2005. [PDF]
@inproceedings{park2005dense,
  author = {Sung Park and Brian C. Budge and Lars Linsen and
            Bernd Hamann and Kenneth I. Joy},
  title  = {Dense Geometric Flow Visualization},
  booktitle = {VGTC Symposium on Visualization 2005},
  year   = {2005},
  month  = jun,
  publisher = {Eurographics Association},
}

Abstract We present a flow visualization technique based on rendering geometry in a dense, uniform distribution. Flow is integrated using particle advection. By adopting ideas from texture-based techniques and taking advantage of parallelism and programmability of contemporary graphics hardware, we generate streamlines and pathlines addressing both steady and unsteady flow. Pipelining is used to manage seeding, advection, and expiration of streamlines/pathlines with constant lifetime. We achieve high numerical accuracy by enforcing short particle lifetimes and employing a fourth-order integration method. The occlusion problem inherent to dense volumetric representations is addressed by applying multi-dimensional transfer functions (MDTFs), restricting particle attenuation to regions of certain physical behavior, or features. Geometry is rendered in graphics hardware using techniques such as depth sorting, illumination, haloing, flow orientation, and depth-based color attenuation to enhance visual perception. We achieve dense geometric three-dimensional flow visualization with interactive frame rates.
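The fourth-order integration mentioned above is classical Runge-Kutta applied to particle advection. The following sketch is illustrative (the field and step size are assumptions), not the paper's GPU implementation.

```python
import numpy as np

# One fourth-order Runge-Kutta (RK4) step advecting a particle through a
# steady velocity field v(p). Short lifetimes plus RK4 keep streamlines
# numerically accurate, as the paper describes.

def rk4_step(velocity, p, dt):
    """Advance position p by time step dt through `velocity(p)`."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: rigid rotation v = (-y, x); particles should orbit the origin
# without drifting in radius.
circular = lambda p: np.array([-p[1], p[0]])
p = np.array([1.0, 0.0])
for _ in range(100):
    p = rk4_step(circular, p, 0.01)  # total angle swept: 1 radian
```

On this rotational field the RK4 orbit stays on the unit circle to within roundoff-scale error, which is why a fourth-order scheme suffices even with modest step counts.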

Shell Maps
Serban D. Porumbescu, Brian C. Budge, Zhi (Louis) Feng, Kenneth I. Joy. In ACM Transactions on Graphics (SIGGRAPH), 24(3), 2005. [PDF]
@article{porumbescu2005shell,
  author = {Serban D. Porumbescu and Brian C. Budge and
            Zhi (Louis) Feng and Kenneth I. Joy},
  title  = {Shell Maps},
  journal = {ACM SIGGRAPH 2005, ACM Transactions on Graphics},
  volume = {24},
  number = {3},
  pages  = {626--633},
  year   = {2005},
  publisher = {ACM Press},
}

Abstract A shell map is a bijective mapping between shell space and texture space that can be used to generate small-scale features on surfaces using a variety of modeling techniques. The method is based upon the generation of an offset surface and the construction of a tetrahedral mesh that fills the space between the base surface and its offset. By identifying a corresponding tetrahedral mesh in texture space, the shell map can be implemented through a straightforward barycentric-coordinate map between corresponding tetrahedra. The generality of shell maps allows texture space to contain geometric objects, procedural volume textures, scalar fields, or other shell-mapped objects.
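The barycentric-coordinate map at the heart of the method can be sketched directly: a point in a shell-space tetrahedron is expressed in barycentric coordinates, which are then applied to the vertices of the corresponding texture-space tetrahedron. The tetrahedra below are illustrative examples, not data from the paper.

```python
import numpy as np

# Minimal sketch of a shell map evaluated at one point: shared
# barycentric coordinates carry the point between corresponding
# tetrahedra in shell space and texture space.

def barycentric(tet, p):
    """Barycentric coordinates of p in tetrahedron tet (4x3 vertices)."""
    v0, v1, v2, v3 = tet
    # Solve p = v0 + b1*(v1-v0) + b2*(v2-v0) + b3*(v3-v0) for (b1,b2,b3).
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    b123 = np.linalg.solve(T, p - v0)
    return np.concatenate([[1.0 - b123.sum()], b123])

def shell_map(shell_tet, texture_tet, p):
    """Map p from shell space into texture space via shared barycentrics."""
    return barycentric(shell_tet, p) @ texture_tet

shell_tet   = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
texture_tet = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]], dtype=float)
q = shell_map(shell_tet, texture_tet, np.array([0.25, 0.25, 0.25]))
```

Because barycentric coordinates are invariant under the affine map between the two tetrahedra, the mapping is bijective within each corresponding pair, which is what makes the overall shell map well defined.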

Multi-dimensional Transfer Functions for Interactive 3D Flow Visualization
Sung Park, Brian C. Budge, Lars Linsen, Bernd Hamann, Kenneth I. Joy. In Pacific Graphics, 2004. [PDF]
@inproceedings{park2004mdtf,
  author = {Sung W. Park and Brian Budge and Lars Linsen and
            Bernd Hamann and Kenneth I. Joy},
  title  = {Multi-Dimensional Transfer Functions for Interactive
            3D Flow Visualization},
  booktitle = {Pacific Graphics},
  year   = {2004},
  issn   = {1550-4085},
  pages  = {177--185},
  publisher = {IEEE Computer Society},
  address = {Los Alamitos, CA, USA},
}

Abstract Transfer functions are a standard technique used in volume rendering to assign color and opacity to a volume of a scalar field. Multi-dimensional transfer functions (MDTFs) have proven to be an effective way to extract specific features with subtle properties. As 3D texture-based methods gain widespread popularity for the visualization of steady and unsteady flow field data, there is a need to define and apply similar MDTFs to interactive 3D flow visualization. We exploit flow field properties such as velocity, gradient, curl, helicity, and divergence using vector calculus methods to define an MDTF that can be used to extract and track features in a flow field. We show how the defined MDTF can be applied to interactive 3D flow visualization by combining them with state-of-the-art texture-based flow visualization of steady and unsteady fields. We demonstrate that MDTFs can be used to help alleviate the problem of occlusion, which is one of the main inherent drawbacks of 3D texture-based flow visualization techniques. In our implementation, we make use of current graphics hardware to obtain interactive frame rates.
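The vector-calculus quantities that serve as MDTF axes can be derived from a sampled velocity field with finite differences. This sketch uses NumPy's gradient on an assumed regular grid; the grid, field, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch: divergence, curl, and helicity of a velocity
# field sampled on a regular 3D grid, via central finite differences.

def flow_measures(vx, vy, vz, spacing=1.0):
    """Return (divergence, curl, helicity) arrays for the field (vx,vy,vz),
    where each component is indexed [x, y, z]."""
    dvx = np.gradient(vx, spacing)  # [d/dx, d/dy, d/dz] of vx
    dvy = np.gradient(vy, spacing)
    dvz = np.gradient(vz, spacing)
    div = dvx[0] + dvy[1] + dvz[2]
    curl = np.stack([dvz[1] - dvy[2],   # (curl v)_x
                     dvx[2] - dvz[0],   # (curl v)_y
                     dvy[0] - dvx[1]])  # (curl v)_z
    helicity = vx * curl[0] + vy * curl[1] + vz * curl[2]
    return div, curl, helicity

# Example: rigid rotation about z, v = (-y, x, 0): divergence-free,
# with constant curl (0, 0, 2) and zero helicity.
x, y, z = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8),
                      np.linspace(-1, 1, 8), indexing="ij")
div, curl, hel = flow_measures(-y, x, np.zeros_like(x), spacing=2.0 / 7.0)
```

Feeding such per-voxel measures into a multi-dimensional transfer function lets opacity be restricted to, say, strongly rotational or strongly divergent regions, which is how the paper combats occlusion.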

An Ocularist's Approach to Human Iris Synthesis
Aaron Lefohn, Brian Budge, Peter Shirley, Richard Caruso and Erik Reinhard. In IEEE Computer Graphics and Applications, 23(6), 2003. [PDF]
@article{lefohn2003ocularist,
  author = {Aaron Lefohn and Brian Budge and Peter Shirley and
            Richard Caruso and Erik Reinhard},
  title  = {An Ocularist's Approach to Human Iris Synthesis},
  journal = {IEEE Computer Graphics and Applications},
  volume = {23},
  number = {6},
  year   = {2003},
  issn   = {0272-1716},
  pages  = {70--75},
}

Abstract We have a particularly fortunate situation in iris synthesis: artificial eye makers (ocularists) have developed a procedure for physical iris synthesis that results in eyes with all the important appearance characteristics of real eyes. They have refined this procedure over decades, and the performance of their products in the real world completely validates the approach. Our approach lets users (other than trained ocularists) create a realistic looking human eye, paying particular attention to the iris. We draw from domain knowledge provided by ocularists to provide a toolkit that composes a human iris by layering semitransparent textures. These textures look decidedly painted and unrealistic. The composited result, however, provides a sense of depth to the iris and takes on a level of realism that we believe others have not previously achieved. Prior work on rendering eyes has concentrated predominantly on producing geometry for facial animation or for medical applications. Some work has focused on accurately modeling the cornea. In contrast, the goal of our work is the easy creation of realistic looking irises for both the ocular prosthetics and entertainment industries.

Simple Nested Dielectrics In Ray Traced Images
Charles M. Schmidt and Brian Budge. In Journal of Graphics Tools, 7(2), 2002. [PDF]
@article{schmidt2002nested,
  author = {Charles M. Schmidt and Brian Budge},
  title  = {Simple Nested Dielectrics in Ray Traced Images},
  journal = {Journal of Graphics Tools},
  volume = {7},
  number = {2},
  pages  = {1--8},
  year   = {2002},
}

Abstract This paper presents a simple method for modeling and rendering refractive objects that are nested within each other. The technique allows the use of simpler scene geometry and can even improve rendering time in some images. The algorithm can be easily added into an existing ray tracer and makes no assumptions about the drawing primitives that have been implemented.
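One way to realize nested dielectrics, sketched below under assumptions of my own (the priorities, names, and media values are illustrative, not taken from the paper), is to give each dielectric a priority and let the highest-priority medium enclosing the ray define its current index of refraction:

```python
# Hedged sketch: resolving which medium a ray is currently travelling
# through when dielectrics overlap. Each medium is (priority, ior);
# the highest-priority enclosing medium "wins" in overlap regions.

def current_medium(enclosing):
    """enclosing: list of (priority, ior) tuples for every dielectric
    the ray is currently inside; returns the governing medium, or
    None for air/vacuum."""
    if not enclosing:
        return None
    return max(enclosing, key=lambda m: m[0])

# Example: an ice cube modeled as slightly overlapping the surrounding
# glass of water, so no exact boundary geometry is needed.
glass = (2, 1.5)
ice   = (3, 1.31)
inside = [glass, ice]   # the ray is inside both overlapping objects
medium = current_medium(inside)
```

With such a rule, boundary surfaces between touching dielectrics never need to be modeled exactly, which matches the abstract's claim that simpler scene geometry suffices.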