This is a nearly complete list of my publications, in reverse chronological order, including where they were presented or published, together with their abstracts. Unless otherwise noted, I was the principal author. Click on a title to download a PDF version of the publication. You'll need Adobe Acrobat Reader, xpdf, or a similar PDF reader to access them.
Co-authored with Antonio Ledesma (principal)
Presented at the 2021 International Symposium for Visual Computing (ISVC 2021), (Virtual; October 4-6, 2021)
Will also appear in Lecture Notes in Computer Science, vol. 13017, pp. 1-13 (2021)
We describe SORGATE, a procedure for extracting geometry and texture from images of solids of revolution (SORs). It uses multivariate optimization to determine the parameters of the camera in order to build the viewing transform, as well as to reconstruct the geometry of the SOR using the silhouette. In addition to individual image analyses, it can use the data extracted from the same SOR viewed from different directions to produce a single, composite texture which can be combined with their blended geometries to produce a reconstructed 3D model. No prior knowledge other than the object’s rotational symmetry is required. Camera viewing parameters are derived directly from the image. SORGATE is useful when 3D modeling of SORs is needed yet direct measurement of the physical objects is infeasible. As it does not require camera calibration, it is also fast, inexpensive, and practical. One use case might be for researchers and curators who wish to display and/or analyze the art on historical vases; metric reconstruction and proper texturing of the objects would allow this without requiring viewing in person.
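A minimal sketch of the optimization step, under assumed simplifications: recover camera parameters by minimizing a silhouette-reprojection residual with a general-purpose optimizer. The two-parameter camera (tilt and focal length) and the residual below are illustrative, not SORGATE's actual parameterization or objective.

```python
# Hypothetical camera fit: find the tilt and focal length that best
# reproduce an observed SOR silhouette. SORGATE's real model is richer.
import numpy as np
from scipy.optimize import minimize

def project_silhouette(r, z, tilt, focal, depth=5.0):
    """Project the SOR profile curve (r(z), z) through a camera tilted
    about the horizontal axis and placed `depth` units away."""
    zc = z * np.cos(tilt)                   # foreshortened height
    yc = z * np.sin(tilt) + depth           # distance from camera
    return focal * r / yc, focal * zc / yc  # image-plane (u, v)

# Synthetic "observed" silhouette rendered with known parameters.
z = np.linspace(0.0, 2.0, 50)
r = 1.0 + 0.3 * np.sin(2.0 * z)             # vase-like profile
u_obs, v_obs = project_silhouette(r, z, tilt=0.3, focal=2.0)

def residual(params):
    u, v = project_silhouette(r, z, *params)
    return np.sum((u - u_obs) ** 2 + (v - v_obs) ** 2)

fit = minimize(residual, x0=[0.0, 1.0], method="Nelder-Mead")
print(fit.x)                                # approaches the true (0.3, 2.0)
```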
Lightning talk presented at SIGCSE '15: The 46th ACM Technical Symposium on Computer Science Education (Kansas City, MO; March 4-7, 2015)
(abstract)
Co-authored with Sujith Puthiyaveetil (principal), Onie Tsabari, Troy Lowry, Steven Lenhert, Ziv Reich, and Helmut Kirchhoff
Appeared in Proc. Nat. Acad. Sci. USA, vol. 111, no. 44, pp. 15839-15844 (November, 2014).
A crucial component of protein homeostasis in cells is the repair of damaged proteins. The repair of oxygen-evolving photosystem II (PS II) supercomplexes in plant chloroplasts is a prime example of a very efficient repair process that evolved in response to the high vulnerability of PS II to photooxidative damage, exacerbated by high-light (HL) stress. Significant progress in recent years has unraveled individual components and steps that constitute the PS II repair machinery, which is embedded in the thylakoid membrane system inside chloroplasts. However, an open question is how a certain order of these repair steps is established and how unwanted back-reactions that jeopardize the repair efficiency are avoided. Here, we report that spatial separation of key enzymes involved in PS II repair is realized by subcompartmentalization of the thylakoid membrane, accomplished by the formation of stacked grana membranes. The spatial segregation of kinases, phosphatases, proteases, and ribosomes ensures a certain order of events with minimal mutual interference. The margins of the grana turn out to be the site of protein degradation, well separated from active PS II in grana core and de novo protein synthesis in unstacked stroma lamellae. Furthermore, HL induces a partial conversion of stacked grana core to grana margin, which leads to a controlled access of proteases to PS II. Our study suggests that the origin of grana in evolution ensures high repair efficiency, which is essential for PS II homeostasis.
Co-authored with Brian L LaMarche (principal), Kevin L Crowell, Navdeep Jaitly, Vladislav A Petyuk, Anuj R Shah, Ashoka D Polpitiya, John D Sandoval, Gary R Kiebel, Matthew E Monroe, Stephen J Callister, Thomas O Metz, Gordon A Anderson, and Richard D Smith
Appeared in BMC Bioinformatics, vol. 14, no. 49, pp. 1-14, (February 12, 2013).
Background: MultiAlign is a free software tool that aligns multiple liquid chromatography-mass spectrometry datasets to one another by clustering mass and chromatographic elution features across datasets. Applicable to both label-free proteomics and metabolomics comparative analyses, the software can be operated in several modes. For example, clustered features can be matched to a reference database to identify analytes, used to generate abundance profiles, linked to tandem mass spectra based on parent precursor masses, and culled for targeted liquid chromatography-tandem mass spectrometric analysis. MultiAlign is also capable of tandem mass spectral clustering to describe proteome structure and find similarity in subsequent sample runs.
Results: MultiAlign was applied to two large proteomics datasets obtained from liquid chromatography-mass spectrometry analyses of environmental samples. Peptides in the datasets for a microbial community that had a known metagenome were identified by matching mass and elution time features to those in an established reference peptide database. Results compared favorably with those obtained using existing tools such as VIPER, but with the added benefit of being able to trace clusters of peptides across conditions to existing tandem mass spectra. MultiAlign was further applied to detect clusters across experimental samples derived from a reactor biomass community for which no metagenome was available. Several clusters were culled for further analysis to explore changes in the community structure. Lastly, MultiAlign was applied to liquid chromatography-mass spectrometry-based datasets obtained from a previously published study of wild type and mitochondrial fatty acid oxidation enzyme knockdown mutants of human hepatocarcinoma to demonstrate its utility for analyzing metabolomics datasets.
Conclusion: MultiAlign is an efficient software package for finding similar analytes across multiple liquid chromatography-mass spectrometry feature maps, as demonstrated here for both proteomics and metabolomics experiments. The software is particularly useful for proteomic studies where little or no genomic context is known, such as with environmental proteomics.
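As a rough illustration of the cross-dataset clustering idea (not MultiAlign's actual algorithm), features can be grouped greedily when their masses agree within a ppm tolerance and their normalized elution times are close; the tolerances below are assumptions.

```python
# Toy cross-dataset feature clustering by mass (ppm) and normalized
# elution time (NET). MultiAlign's real clustering is more sophisticated.
def cluster_features(features, ppm_tol=10.0, net_tol=0.02):
    """features: list of (dataset_id, mass, net) tuples."""
    clusters = []
    for feat in sorted(features, key=lambda f: f[1]):   # sweep by mass
        _, mass, net = feat
        for cluster in clusters:
            _, cmass, cnet = cluster[-1]
            if abs(mass - cmass) / cmass * 1e6 <= ppm_tol \
                    and abs(net - cnet) <= net_tol:
                cluster.append(feat)
                break
        else:
            clusters.append([feat])
    return clusters

# Two datasets measuring (nearly) the same two analytes:
feats = [(0, 500.250, 0.31), (1, 500.252, 0.30),
         (0, 712.400, 0.55), (1, 712.401, 0.56)]
print(cluster_features(feats))   # two clusters, each spanning both datasets
```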
Appeared in Journal of Computing Sciences in Colleges, vol. 28, no. 1 (October, 2012), pp. 192-198 (also presented at the Thirteenth Annual Consortium for Computing Sciences in Colleges Northwestern Conference; Olympia, WA; October 5-6, 2012)
We describe coaster, a project intended to teach an introductory computer graphics class by means of a sequence of programming assignments. The assignments are incremental -- each one building on the previous ones -- and ultimately bring in most of the course content. We discuss the sequence of assignments and how they relate to the topics, how all assignments themselves share code (from the instructor's view), course policy strategies needed to maintain student progress, and (preliminary) results on course effectiveness.
Co-authored with Lars J. Kangas (principal), Thomas O. Metz, Giorgis Isaac, Brian T. Schrom, Bojana Ginovska-Pangovska, Luning Wang, Li Tan, and John H. Miller
Appeared in Bioinformatics, vol. 28, no. 13, pp. 1704-1713, (May, 2012).
Motivation: Liquid chromatography–mass spectrometry-based metabolomics has gained importance in the life sciences, yet it is not supported by software tools for high-throughput identification of metabolites based on their fragmentation spectra. An algorithm (ISIS: in silico identification software) and its implementation are presented and show great promise in generating in silico spectra of lipids for the purpose of structural identification. Instead of using chemical reaction rate equations or rules-based fragmentation libraries, the algorithm uses machine learning to find accurate bond cleavage rates in a mass spectrometer employing collision-induced dissociation tandem mass spectrometry.
Results: A preliminary test of the algorithm with 45 lipids from a subset of lipid classes shows both high sensitivity and specificity.
Co-authored with Jeff Daily (principal)
Presented at the 2011 Python for Scientific Computing Conference (SciPy 2011), (Austin, TX; July 11-16, 2011).
(poster)
Co-authored with Daniel M. Best (principal)
Appeared in Computers & Geosciences, vol. 36, pp. 1436-1442, (November, 2010).
The Ground-Water Visualization application (GWVis) presents ground-water data visually in order to educate the public on ground-water issues. It is also intended for presentations to government and other funding agencies. GWVis works with ground-water level elevation data collected or modeled over a given time span, together with a matching fixed underlying terrain.
GWVis was developed using the Python programming language in conjunction with associated extension packages and application program interfaces such as OpenGL(TM) to improve performance and give us fine control over attributes of the model such as lighting, material properties, transformations, and interpolation.
There are currently several systems available for visualizing ground-water data. We classify these into two categories: research-oriented models and static presentation-based models. While both have their strengths, we find the former overly complex and non-intuitive, and the latter unengaging and limited in their ability to show multiple data dimensions.
GWVis bridges the gap between static and research-based visualizations by providing an intuitive, interactive design that allows participants to view the model from different perspectives, infer information about simulations, and view a comparison of two datasets. By incorporating scientific data in an environment that can be easily understood, GWVis allows that information to be presented to a large audience base.
Co-authored with Manojkumar Krishnan (principal) and Abhinav Vishnu
Presented by Manojkumar Krishnan at ICPPW, the 39th IEEE International Conference on Parallel Processing Workshops (San Diego, CA; September 13-16, 2010).
This paper describes the scalability of linear algebra kernels based on a remote memory access approach. The approach differs from other parallel linear algebra algorithms in its explicit use of shared memory and remote memory access (RMA) communication rather than message passing, and it is suitable for clusters and scalable shared memory systems. Experimental results on large-scale systems (a Linux-InfiniBand cluster and a Cray XT) demonstrate consistent performance advantages over the ScaLAPACK suite, the leading implementation of parallel linear algebra algorithms in use today. For example, on a Cray XT4 with a matrix size of 102400, our RMA-based matrix multiplication achieved over 55 teraflops, while ScaLAPACK's pdgemm measured close to 42 teraflops on 10000 processes.
Co-authored with Brian L. LaMarche (principal), Donald C. Girvin, and James P. McKinley.
Poster submitted to WSU Academic Showcase (March 27, 2009).
For abstract, see below.
Co-authored with Benjamin Eitzen (principal).
Submitted to ACM Transactions on Mathematical Software.
Originally intended for graphics, a Graphics Processing Unit (GPU) is a powerful parallel processor capable of performing more floating-point calculations per second than a traditional CPU. The key obstacle to the widespread adoption of GPUs for general-purpose computing, however, is the difficulty of programming them: programming a GPU requires non-traditional programming techniques, new languages, and knowledge of graphics APIs. GpuPy attempts to eliminate these drawbacks while still taking full advantage of a GPU. It does this by providing a GPU implementation of NumPy, an existing numerical API for the Python programming language.
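Since GpuPy mirrors the NumPy API, array code like the following needs no GPU-specific rewriting; this snippet is plain NumPy, shown only to indicate the style of code targeted, and under GpuPy the same whole-array expressions would be evaluated on the graphics hardware.

```python
# Ordinary NumPy array code of the kind GpuPy aims to accelerate:
# whole-array expressions with no explicit loops, which map naturally
# onto the GPU's data-parallel execution model.
import numpy as np

a = np.random.rand(1024, 1024).astype(np.float32)
b = np.random.rand(1024, 1024).astype(np.float32)

c = np.sin(a) * b + a ** 2     # elementwise, kernel-friendly expression
print(c.mean())
```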
Co-authored with Brian L. LaMarche (principal), Donald C. Girvin, and James P. McKinley.
Poster presented at American Geophysical Union, 2008 Fall Meeting (December 15-19, 2008)
We are developing a software tool that can automatically classify anthropogenic and natural aerosol particulates using morphological analysis. Our method was developed using SEM (backscattered and secondary electron) images of single particles. Particle silhouettes are detected and converted into polygons using Intel's OpenCV image processing library. Our analysis then proceeds independently for the two kinds of images. Analysis of secondary images concerns itself solely with the silhouette and seeks to quantify its shape and roughness. Traversing the polygon with spline interpolation, we uniformly sample k(s), the signed curvature of the silhouette's path as a function of distance along the perimeter s. k(s) is invariant under rotation and translation. The power spectrum of k(s) qualitatively shows both shape and roughness: more power at low frequencies indicates variation in shape; more power at higher frequencies indicates a rougher silhouette.
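A sketch of the curvature analysis, with assumed details (the OpenCV silhouette extraction itself is omitted, and sampling is uniform in the spline parameter rather than true arclength):

```python
# Sketch of the k(s) analysis: periodic spline through the silhouette
# polygon, uniform sampling of signed curvature, power spectrum via FFT.
import numpy as np
from scipy.interpolate import splprep, splev

def curvature_spectrum(poly, n=512):
    closed = np.vstack([poly, poly[:1]])               # close the polygon
    tck, _ = splprep([closed[:, 0], closed[:, 1]], s=0, per=True)
    u = np.linspace(0.0, 1.0, n, endpoint=False)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    k = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5  # signed curvature
    return np.abs(np.fft.rfft(k)) ** 2                 # power spectrum of k(s)

# A smooth five-lobed blob: power concentrates at low frequencies (shape),
# with little at high frequencies (roughness).
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
blob = np.c_[(1 + 0.2 * np.cos(5 * t)) * np.cos(t),
             (1 + 0.2 * np.cos(5 * t)) * np.sin(t)]
spectrum = curvature_spectrum(blob)
```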
Presented at the Tenth Annual Consortium for Computing Sciences in Colleges Northwest Conference (Ashland, Oregon; October 10-11, 2008). Also appears in Journal of Computing Sciences in Colleges, vol. 24, no. 2 (December, 2008), pp. 6-12.
Having been assigned part-time to an administrative position (concurrently with normal academic duties) in charge of curriculum management, the author has developed curriculorum, a system to assist in that task that was, is, and will continue to be developed by and for a computer scientist. We describe the design of the system and how it has evolved to meet requirements that were only dimly perceived when the project started. As such, the project has been a gratifying exercise in the development of its designer's skills in data management, in software engineering, and in Python, its implementation language. We provide guidelines for the reader embarking on a similar project.
Co-authored with Benjamin Eitzen (MSCS student).
Presented by Benjamin Eitzen at SciPy '06 (Pasadena, CA; August 18, 2006).
Co-authored with Shuangshuang Jin (principal).
Appeared in The Visual Computer, vol. 21, pp. 71-82, (February, 2005).
We investigate current vertex normal computation algorithms and evaluate their effectiveness at approximating analytically-computable (and thus comparable) normals for a variety of classes of model. We find that the most accurate algorithm depends on the class and that for some classes, none of the available algorithms are particularly good. We also compare the relative speeds of all algorithms.
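As an example of the kind of algorithm compared, here is a sketch of one common choice, angle-weighted averaging (each incident face's unit normal weighted by the angle the face subtends at the vertex); the paper evaluates several such weightings.

```python
# Angle-weighted vertex normals for a triangle mesh. verts is an (n, 3)
# float array; tris is an (m, 3) integer index array.
import numpy as np

def vertex_normals(verts, tris):
    normals = np.zeros_like(verts, dtype=float)
    for tri in tris:
        for a, b, c in ((tri[0], tri[1], tri[2]),
                        (tri[1], tri[2], tri[0]),
                        (tri[2], tri[0], tri[1])):
            e1, e2 = verts[b] - verts[a], verts[c] - verts[a]
            fn = np.cross(e1, e2)                     # face normal
            cos_a = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            angle = np.arccos(np.clip(cos_a, -1.0, 1.0))  # wedge angle at a
            normals[a] += angle * fn / np.linalg.norm(fn)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0.0, lengths, 1.0)
```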
Co-authored with J. H. Miller (principal), M. T. Batdorf, D. J. Lynch, and W. E. Wilson.
Appeared in Radiation Research, vol. 162, pp. 474-479 (2004).
Track structures of 25, 50, and 80 keV primary electrons, simulated by the detailed-history Monte Carlo method, were analyzed for the frequency distributions of energy deposited in spheres with a diameter of 1 μm placed in a cylindrically symmetric array around the projected initial direction of the primary electron. The frequency mean of specific energy, the dose mean of lineal energy, as well as the median and variance of log normal functions fit to the dose distributions were calculated as a function of beam penetration and radial distance from the projected beam axis. Given these data, the stochastics of dose and radiation quality for micrometer-scale sites targeted by a medium-energy electron microbeam can be predicted as a function of the site’s location relative to the beam entry point.
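For reference, the two averages mentioned are the standard microdosimetric quantities: with $f(z)$ and $f(y)$ the frequency distributions of specific energy $z$ and lineal energy $y$,

$$\bar{z}_F = \int_0^\infty z\, f(z)\, dz, \qquad \bar{y}_D = \frac{\int_0^\infty y^2 f(y)\, dy}{\int_0^\infty y\, f(y)\, dy}.$$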
Co-authored with Alejandro Aceves-Gaona (principal).
Presented (by Alejandro Aceves-Gaona) at the Fifteenth Western Computer Graphics Symposium (The Inn at Big White, British Columbia; March 29-31, 2004).
We present a new glyph called a "comet" to represent vectors. Comets improve the accuracy of vector visualization, enhancing understanding while reducing visual clutter, and they eliminate an ambiguity possible with conventional vector glyphs. By using a hue-lightness-saturation (HLS) color space, devoting saturation to distinguishing head from tail, and partially devoting lightness to representing the depth component of a vector's direction, comets allow increased visual accuracy and keep hue and (partially) lightness free to encode other quantities, such as the nature of the object to which the vector is attached, if any. Comets are computationally inexpensive and can represent large data sets. We show how it is possible to use the OpenGL(TM) lighting model to place comets in static display lists that can be viewed by a moving camera without reconstruction.
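A toy version of the color mapping (the exact ramps are assumptions, not the paper's values): saturation separates head from tail, lightness partially encodes the depth component of the vector's direction, and hue stays free for other data.

```python
# Illustrative comet color ramp in HLS space. t runs from 0 (tail) to
# 1 (head); depth is the z-component of the unit vector, in [-1, 1].
import colorsys

def comet_color(t, depth, hue=0.6):
    saturation = t                    # desaturated tail, saturated head
    lightness = 0.5 + 0.25 * depth    # vectors toward the viewer are lighter
    return colorsys.hls_to_rgb(hue, lightness, saturation)

# Sample a 10-segment comet for a vector angled toward the viewer.
ramp = [comet_color(i / 9.0, depth=0.8) for i in range(10)]
```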
Co-authored with Masaki Kameya (principal).
Technical Sketch at ACM SIGGRAPH 2003
A one-page technical sketch describing the work presented more fully in the paper below.
Co-authored with Renwei Wang and Donald Hung.
Appeared in Canadian Journal of Electrical and Computer Engineering, vol. 28, no. 1 (January, 2003).
We describe an algorithm for computing ray/Bézier patch intersections from a hardware design perspective. This algorithm uses patch subdivision and other geometrical techniques to find a given maximum number of intersection points nearest to the ray origin. We propose a pipeline-based hardware architecture, verify the number of pipeline stages required by simulation, and estimate the performance of a load-balanced implementation based on a state-of-the-art digital signal processor (DSP). This paper is an extended version of another paper listed below.
Co-authored with Patrick Chiang (principal).
Presented at the Thirteenth Western Computer Graphics Symposium (Silver Star, British Columbia; March 24-27, 2002).
This paper presents a new subdivision scheme called "star-flower subdivision". Like other well-known subdivision schemes, such as Catmull-Clark (based on quadrilaterals) and sqrt(3)-subdivision (based on triangular meshes), it also generates smooth objects after a few iterations.
This scheme has advantages over previous schemes, however, due to a slower increase in polygon complexity. It subdivides quadrilateral meshes by a factor of 3, not 4, at each iteration. Having the number of facets increase more gradually allows designers to have greater control over the resolution of a refined mesh. We call star-flower subdivision "semi-stationary" because different subdivision rules occur on alternating levels of refinement.
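To make the growth rates concrete: starting from a single quadrilateral, five refinement steps produce 4^5 = 1024 faces under a quadrisecting scheme such as Catmull-Clark, but only 3^5 = 243 under star-flower subdivision.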
We also present a strategy to handle boundaries when applying the odd iterations of the scheme.
Co-authored with Masaki Kameya (principal).
Presented at the Thirteenth Western Computer Graphics Symposium (Silver Star, British Columbia; March 24-27, 2002).
We present a representation for reflectance, especially measured reflectance data, that is both smooth and time-efficient. By fitting a multi-level B-spline approximation to bidirectional reflectance distribution function (BRDF) data, we can obtain a continuous function that matches the data to arbitrary precision. The algorithm is very time-efficient both in constructing and in evaluating the fit. Synthesized images show the accuracy and the smoothness of the fit BRDF: the finer the level of fitting we use, the higher the accuracy we achieve. Our results compare favorably with other schemes currently in wide use.
One shortcoming of our method is the amount of storage required. We present one approach that addresses this, a bi-level representation combined with hashing.
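A sketch of that bi-level idea, with an assumed significance threshold: keep the coarse level dense, and hash only the fine-level correction coefficients large enough to matter.

```python
# Bi-level coefficient storage: a dense coarse lattice plus a hash table
# of significant fine-level corrections. The 0.1 threshold is assumed.
import numpy as np

rng = np.random.default_rng(0)
coarse = rng.standard_normal((8, 8))           # stand-in coarse lattice
fine = 0.01 * rng.standard_normal((64, 64))    # corrections, mostly negligible
fine[10, 20] = 0.5                             # ...with a few that matter

sparse_fine = {(i, j): v
               for (i, j), v in np.ndenumerate(fine)
               if abs(v) > 0.1}                # drop negligible coefficients

def fine_coefficient(i, j):
    """Hashed lookup: stored correction if present, else zero."""
    return sparse_fine.get((i, j), 0.0)
```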
Co-authored with Renwei Wang (principal) and Donald Hung.
Presented at the Thirteenth Western Computer Graphics Symposium (Silver Star, British Columbia; March 24-27, 2002).
We describe an algorithm for computing ray/Bézier patch intersections from a pipelined hardware point of view. This algorithm uses patch subdivision and other geometrical techniques to find a given maximum number of intersection points nearest to the ray origin. We propose a pipeline-based hardware architecture, and verify the number of pipeline stages required by simulation. This paper is an abbreviated version of another paper listed above.
Co-authored with Michael P. Weis (principal).
Presented at the Twelfth Western Computer Graphics Symposium (Sun Peaks, British Columbia; March 25-28, 2001).
The problem of scattered data interpolation is the fitting of a smooth surface (or, more generally, a manifold) through a set of non-uniformly distributed data points, yielding a function defined at all positions in a domain. Common sources of scattered data include experiments, physical measurements, and computational values. Scattered data interpolation assists in interpreting such data through the calculation of values at arbitrary positions in the domain. Despite much attention in the literature, scattered data interpolation remains a difficult and computationally expensive problem to solve. BSPLND is a software package that solves this problem. It uses the scattered data interpolation technique presented by Lee, Wolberg, and Shin. This technique is fast and produces a C2-continuous interpolation function for any set of scattered data using a hierarchical set of cubic B-splines. BSPLND extends the technique to work with data having an arbitrary number of dimensions for both its domain and range.
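A one-dimensional toy version of the multilevel idea (assuming gridded rather than scattered data, for brevity): fit a coarse cubic B-spline, then fit successively finer splines to the remaining residual and sum the levels.

```python
# Toy 1-D multilevel B-spline approximation: each level fits the residual
# left by coarser levels on a twice-as-fine knot lattice. BSPLND does this
# for scattered data in arbitrary dimensions.
import numpy as np
from scipy.interpolate import make_lsq_spline

def multilevel_fit(x, y, levels=4):
    splines, residual = [], y.astype(float).copy()
    for level in range(levels):
        interior = np.linspace(x[0], x[-1], 4 * 2 ** level)[1:-1]
        knots = np.r_[(x[0],) * 4, interior, (x[-1],) * 4]  # clamped cubic
        s = make_lsq_spline(x, residual, knots, k=3)
        splines.append(s)
        residual -= s(x)                   # the next level fits what is left
    return lambda q: sum(s(q) for s in splines)

x = np.linspace(0.0, 1.0, 200)
y = np.sin(8.0 * x) + 0.05 * np.random.randn(200)
f = multilevel_fit(x, y)                   # smooth approximant; evaluate f(0.5)
```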
Co-authored with Wayne Cochran (principal) and John Hart.
Appeared in The Visual Computer, vol. 17 (2001).
We have discovered a class of fractal functions that are differentiable. Fractal interpolation functions have been used for over a decade to generate rough functions passing through a set of given points. The integral of a fractal interpolation function remains a fractal interpolation function, and this new fractal interpolation function is differentiable. Tensor products of pairs of these fractal functions form fractal surfaces with a well defined tangent plane. We use this surface normal to shade fractal surfaces, and demonstrate its use with renderings of fractal mirrors.
Co-authored with Alain Fournier.
Appeared in Computer Graphics Forum, vol. 19, no. 2 (June, 2000).
There is an older, less detailed version of this paper below.
Recently, there has been considerable interest in the representation of radiance in terms of wavelet basis functions. We present a coordinate system called Nusselt coordinates which, when combined with wavelets, considerably simplifies computation of radiative transport and surface interaction. It also provides straightforward computation of the physical quantities involved.
We show how to construct a discrete representation of the radiative transport operator T involving inner products of smoothing functions, discuss the possible numerical integration techniques, and present an application. We also show how surface interaction can be represented as a kind of matrix product of the wavelet projections of an incident radiance and a bidirectional reflectance distribution function (BRDF).
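Schematically (in notation chosen here, not necessarily the paper's): with the incident radiance expanded as $L_i = \sum_j c_j \psi_j$ in a wavelet basis $\{\psi_j\}$, both operations reduce to linear algebra on coefficient vectors,

$$c' = T c, \qquad T_{kj} = \langle \psi_k, \mathcal{T} \psi_j \rangle,$$

with surface interaction applying the projected BRDF to the coefficients in the same fashion.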
Co-authored with Victor Roetman (principal).
Presented at the Eleventh Western Computer Graphics Symposium (Panorama, British Columbia; March 26-29, 2000).
In this paper, we describe our approach to interactively render the CSG models used in the MCNP nuclear transport code using OpenGL and standard graphics hardware. The approach uses the algorithm presented by T. F. Wiegand in ``Interactive Rendering of CSG Models.''
Doctoral Thesis presented at the University of British Columbia (1998).
This thesis considers the problem of global illumination: the modelling of light as it travels through a scene interacting with the objects contained within the scene. Starting with a description of the problem and a discussion of previous work, we explore a new approach called light-driven global illumination that offers several advantages over its predecessors: a lower asymptotic complexity, a wider range of representable surface interaction phenomena, and an absence of the need for ``meshing'' -- object surface subdivision needed primarily to represent shadows.
Light-driven global illumination is intermediate between local and global illumination. Representing light with wavelet basis functions, we are able to treat both the interaction between two surfaces and the interaction of a surface with a radiation field in a source-to-destination model that applies to whole surfaces, not just small elements.
We have found this ``wavelet radiative transfer'' to be a valid way to generate and store complex global light field data as four-dimensional textures for incorporation in local illumination solutions. Wavelets can considerably reduce the otherwise substantial storage and reconstruction problems associated with doing this. We include several examples of this.
We also discuss plausible illumination models, which are required to make light-driven global illumination work theoretically. Like wavelet radiative transfer, these models have application in other areas of rendering besides global illumination.
Finally, we develop the theory behind light-driven global illumination and apply it successfully to some simple examples. While we find the algorithm to be quite slow compared to other well-known rendering algorithms, we analyze what is needed to make it competitive.
In conclusion, we find that representing light with wavelets has a set of advantages that are independent of the comparative inefficiency of the light-driven algorithm.
Presented at the Ninth Western Computer Graphics Symposium (Whistler, British Columbia; April 23-26, 1998).
We discuss the design of a proposed instrument called a spherical target gonioreflectometer to measure the reflective properties of materials. By computing the image geometry of an illuminated sphere made of or coated with reflective material, we show that we can generate a sufficient amount of data to fill the parameter space needed to create bidirectional reflectance distribution functions (BRDFs) for real substances.
Co-authored with Alain Fournier.
Presented at the 7th Eurographics Workshop on Rendering in Porto, Portugal (17-19 June, 1996). There is an older but more detailed version of this paper by the same name below.
We describe the basis of the work we currently have under way to implement a new rendering algorithm called light-driven global illumination. This algorithm is a departure from conventional raytracing and radiosity renderers that addresses a number of deficiencies intrinsic to those approaches.
Presented at the 1996 Western Computer Graphics Symposium at Panorama, BC (4-6 March, 1996). There is a more recent, detailed version of this paper above.
In this work-in-progress paper, we present a solution to the illumination problem that is intermediate between the conventional local and global approaches to illumination. It involves the representation of radiance on a surface as a finite element expansion in terms of wavelets. By expanding in terms of ``Nusselt coordinates'', we show how irradiance, transport, and surface interaction can be evaluated simply and directly in terms of wavelet coefficients. We present an example of transport.
Presented at the 11th ACM Symposium on Computational Geometry (June 5-7, 1995).
VideHoc is an interactive graphical program that visualizes two-dimensional homogeneous coordinates. Users manipulate data in one of four views and all views are dynamically updated to reflect the change.
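For readers unfamiliar with the representation: the homogeneous point (x, y, w) with w ≠ 0 corresponds to the Cartesian point (x/w, y/w), so (4, 2, 2) and (2, 1, 1) name the same point, while points with w = 0 behave as directions (points at infinity).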
Co-authored with Alain Fournier.
There is an updated but less detailed version of this paper by the same name above.
We describe the basis of the work we currently have under way to implement a new rendering algorithm called ``light-driven global illumination''. This algorithm is a departure from conventional raytracing and radiosity renderers that addresses a number of deficiencies intrinsic to those approaches.
Appeared in "International Journal for Numerical Methods in Engineering", vol. 37, no. 6 (30 March, 1994).
Simulated annealing can minimize both profile and fill of sparse matrices. We applied these techniques to a number of sparse matrices from the Harwell-Boeing Sparse Matrix Collection. We were able to reduce profile typically to about 80% of that attained by conventional profile minimization techniques (and sometimes much lower), but fill reduction was less successful (85% at best). We present a new algorithm that significantly speeds up profile computation during the annealing process. Simulated annealing is, however, still much more time-consuming than conventional techniques and is therefore likely to be useful only in situations where the same sparse matrix is being used repeatedly.
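A toy version of the approach (with the paper's fast incremental profile update omitted; this recomputes the profile from scratch after every move, and the annealing schedule is an assumption):

```python
# Simulated annealing over symmetric row/column orderings, minimizing
# profile = sum over rows of (row index - first nonzero column).
# adj is an adjacency list for the matrix's nonzero structure.
import numpy as np

def profile(adj, perm):
    pos = np.empty(len(perm), dtype=int)
    pos[perm] = np.arange(len(perm))         # pos[v] = position of vertex v
    return sum(pos[v] - min(pos[u] for u in neigh + [v])
               for v, neigh in enumerate(adj))

def anneal(adj, steps=20000, temp=2.0, cooling=0.9995, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(adj))
    cost = profile(adj, perm)
    for _ in range(steps):
        i, j = rng.integers(len(adj), size=2)
        perm[i], perm[j] = perm[j], perm[i]           # propose a swap
        new = profile(adj, perm)
        if new <= cost or rng.random() < np.exp((cost - new) / temp):
            cost = new                                # accept
        else:
            perm[i], perm[j] = perm[j], perm[i]       # reject: undo the swap
        temp *= cooling
    return perm, cost

# A small path graph; the identity ordering is optimal (profile = 4).
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(anneal(adj, steps=5000))
```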
As presented at the 1993 Western Computer Graphics Symposium at Silver Star, BC. A slightly modified version appeared in Computer Graphics Forum, vol. 13, no. 2 (June, 1994).
There is a need to develop shaders that not only ``look good'' but are also physically plausible. From physical and geometric considerations, we review the derivation of a shading equation expressing reflected radiance in terms of incident radiance and the bidirectional reflectance distribution function (BRDF). We then examine the connection between this equation and conventional shaders used in computer graphics. Imposing the additional physical constraints of energy conservation and Helmholtz reciprocity allows us to create variations of the conventional shaders that are more physically plausible.
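For reference, the shading equation and the two constraints in question can be written, in standard notation, as

$$L_r(\omega_o) = \int_\Omega f_r(\omega_i, \omega_o)\, L_i(\omega_i) \cos\theta_i\, d\omega_i,$$

with Helmholtz reciprocity requiring $f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i)$ and energy conservation requiring $\int_\Omega f_r(\omega_i, \omega_o) \cos\theta_i\, d\omega_i \le 1$ for every $\omega_o$.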
As presented at the 1992 Western Computer Graphics Symposium in Banff, Alberta.
We investigate the application of multigrid techniques to the solution of the ``classic'' radiosity equation. After overviews of the global illumination problem and of radiosity, we describe the latter's solution via multigrid methods.
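The ``classic'' radiosity equation referred to is the linear system

$$B_i = E_i + \rho_i \sum_j F_{ij} B_j,$$

where $B_i$, $E_i$, and $\rho_i$ are the radiosity, emission, and reflectivity of patch $i$, and $F_{ij}$ is the form factor between patches $i$ and $j$; in matrix form $(I - \rho F) B = E$, which both Gauss-Seidel and multigrid methods solve iteratively.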
An implementation of the multigrid algorithm presented here is able to solve the classic radiosity equation in about 50% of the time required by the more commonly-used Gauss-Seidel approach. Although few researchers currently use classic radiosity, we discuss possibilities for the adaptation of multigrid methods to more recent radiosity solution techniques.
Presented at Eurographics '90 (3-7 September, 1990).
This paper describes a way to perform realistic three-dimensional texturing of ray-traced objects with irregular surfaces. Such texturing has been done in the past with texture mapping, particle systems, or volumetric methods. We propose an alternative to these called a "lattice". Lattices work as fast but inexact ray tracers. As long as lattices are used for small objects, though, the inexactness doesn't show on the scale of the display, and the result is acceptable.
The paper shows how lattices can be integrated with a more traditional ray tracer, with several examples.
Time and memory space considerations are major constraints on lattices, preventing widespread application at the present time. The paper discusses these limitations and how they might be reduced.
Presented at the 1986 Winter USENIX Conference
This paper presents an architectural description of Galadriel, a window management system that provides both text and graphics services to client processes. Unlike most other window managers, Galadriel runs under UNIX on a hosted display-list terminal instead of a bitmapped workstation. We discuss the advantages and disadvantages of this approach as well as areas for further development.