DUAL REPRESENTATIONS FOR LIGHT FIELD COMPRESSION

Peter Chou and Prashant Ramanathan

A light field [1] is a 4D representation of radiance in free space that can be used to render a static 3D scene from arbitrary views. Light field datasets are constructed by sampling many 2D images of a scene taken from multiple viewpoints. Data compression is essential for making light fields tractable for rendering, transmission, and storage [2].

Recent research has shown that representations combining light field and 3D geometry data yield benefits for both compression and rendering. In [3], Magnor and Girod propose a compression scheme for multi-view images based on constructing 3D geometry, projecting each view into a texture map for that geometry, and applying a 4D wavelet coder to the array of texture maps. Each texture map represents one view, and its texture coordinates correspond to points on the surface of the geometry. In [4], Wood et al. perform compression and photorealistic rendering of surface light fields. For each point on the surface of the geometry, a lumisphere represents the radiance leaving that point in all directions; each point on the lumisphere corresponds to a viewing direction, and the lumisphere is interpolated to obtain a continuous function over all viewing directions.

We observe that the texture-map-based and surface light field schemes are dual representations of the same 4D light field data, each with its own advantages and disadvantages. Our goal is to investigate this duality and compare the two representations from a compression standpoint. To facilitate this comparison, we introduce two new terms for these complementary organizations of light field data: we call the multiple-texture-map format a view-dominant organization, and its complement a geometry-dominant organization. The geometry-dominant organization is analogous to the surface light field representation, except that it does not require that continuous lumispheres be used.

This project consists of three components. First, we will convert a view-dominant dataset into a geometry-dominant organization, and we will also construct a geometry-dominant dataset directly from the image and geometry data. We expect the two to be equivalent and to yield the same compression results as the view-dominant case; they will serve as baselines for the next two experiments. Second, we propose to reparameterize the geometry-dominant data along the viewing axes using a local coordinate system aligned with the normal direction at each surface point. The purpose of this experiment is to understand the effects of using a local coordinate system, which is also used in the surface light field work [4]. Third, we will visualize the surface sampling pattern implied by the texture map approach, and then study the effects of using a different surface sampling pattern. This may lead to a novel compression scheme for surface light fields.
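To make the duality in the first component concrete, the following sketch shows one way the two organizations relate once all views have been resampled onto the geometry. The array shapes, variable names, and random data are hypothetical illustrations, not taken from the cited work; the point is that converting between the organizations is a lossless transpose, and only the order in which a coder scans the samples (and hence the correlations it can exploit) changes.

    import numpy as np

    # Hypothetical resampled light field: V views, each projected into
    # a texture map of P surface samples (texels), C color channels.
    V, P, C = 64, 4096, 3
    rng = np.random.default_rng(0)

    # View-dominant organization: one texture map per view.
    view_dominant = rng.random((V, P, C))  # indexed [view][texel]

    # Geometry-dominant organization: one table of view samples per
    # surface point -- the same data, transposed.
    geometry_dominant = view_dominant.transpose(1, 0, 2)  # [texel][view]

    # The conversion is lossless in both directions.
    assert np.array_equal(geometry_dominant.transpose(1, 0, 2), view_dominant)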
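The reparameterization in the second component can be prototyped by rotating each stored viewing direction from global coordinates into a frame whose z-axis is the surface normal at that point. The sketch below is one standard construction of such a frame; the function names are ours, not from the surface light field paper.

    import numpy as np

    def local_frame(normal):
        # Orthonormal frame (t, b, n) with n along the surface normal.
        n = normal / np.linalg.norm(normal)
        helper = np.array([1.0, 0.0, 0.0])
        if abs(n[0]) > 0.9:                 # avoid a helper parallel to n
            helper = np.array([0.0, 1.0, 0.0])
        t = np.cross(helper, n)
        t /= np.linalg.norm(t)
        b = np.cross(n, t)
        return np.stack([t, b, n])          # rows are the local axes

    def to_local(view_dirs, normal):
        # Express global viewing directions in the normal-aligned frame.
        return view_dirs @ local_frame(normal).T

    # A direction along the normal maps to the local z-axis.
    n = np.array([0.0, 0.0, 1.0])
    print(to_local(np.array([[0.0, 0.0, 1.0]]), n))   # [[0. 0. 1.]]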
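For the third component, one simple way to visualize the sampling pattern is to count, for each texel, how many views actually observe it and display the counts over the texture domain. The visibility array below is a synthetic stand-in; in the real experiment it would come from the occlusion test in the rendering pipeline.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic stand-in: visible[v, y, x] is True when texel (y, x)
    # of the texture map is observed by view v.
    V, H, W = 64, 64, 64
    rng = np.random.default_rng(1)
    visible = rng.random((V, H, W)) > 0.5

    # Number of view samples per texel: the surface sampling density
    # that the texture map scheme implicitly imposes.
    density = visible.sum(axis=0)

    plt.imshow(density, cmap="viridis")
    plt.colorbar(label="views sampling each texel")
    plt.title("Surface sampling density (synthetic)")
    plt.show()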
Girod, "Data Compression for Light-Field Rendering," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 10, No. 3, pp. 338-343, April 2000. [3] M. Magnor and B. Girod, "Model-based Coding of Multi-Viewpoint Imagery," VCIP 2000, June 2000. [4] D. N. Wood, D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. Salesin, and W. Stuetzle, "Surface Light Fields for 3D Photography," SIGGRAPH 2000 Conference Proceedings, 2000.