AN IMPLICIT-BASED HAPTIC RENDERING TECHNIQUE

Copyright 2003 by Laehyun Kim

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (COMPUTER SCIENCE)

August 2003

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

UMI Number: 3116728. UMI Microform 3116728. Copyright 2004 by ProQuest Information and Learning Company. All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code. ProQuest Information and Learning Company, 300 North Zeeb Road, P.O. Box 1346, Ann Arbor, MI 48106-1346.

UNIVERSITY OF SOUTHERN CALIFORNIA, THE GRADUATE SCHOOL, UNIVERSITY PARK, LOS ANGELES, CALIFORNIA 90089-1695. This dissertation, written by LAEHYUN KIM under the direction of his dissertation committee, and approved by all its members, has been presented to and accepted by the Director of Graduate and Professional Programs, in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY.
Contents

List of Figures
Abstract
1 Introduction
  1.1 Motivation and overview
  1.2 Advantages of implicit surface representation
  1.3 Contributions
  1.4 System architecture
  1.5 Organization
2 Related Work
  2.1 Collision detection
  2.2 Geometric haptic rendering algorithms
    2.2.1 Penalty-based approaches
    2.2.2 Constraint-based approaches
    2.2.3 Force shading
  2.3 Volumetric haptic rendering algorithms
    2.3.1 Direct volume haptic rendering
    2.3.2 Constraint-based approaches
  2.4 Haptic texturing
    2.4.1 Synthesized haptic texture
    2.4.2 Image-based haptic texturing
  2.5 Haptic painting
  2.6 Volume-based haptic sculpting
3 Implicit Surface Representation
  3.1 Definition
  3.2 Surface normal
  3.3 Closest point transform
4 Implicit-based Haptic Rendering Model
  4.1 Collision detection
  4.2 Frictionless model
    4.2.1 Force direction
    4.2.2 Force magnitude
    4.2.3 Final force output by a spring-damper model
    4.2.4 Result
  4.3 Adding friction to the model
  4.4 Offset surface for thin objects
    4.4.1 New offset surface for thin objects
  4.5 Magnetic surface
  4.6 Merging multiple objects
  4.7 Implicit-based haptic texturing
    4.7.1 Changing the geometry of an implicit surface
    4.7.2 Examples
  4.8 An octree to reduce memory requirements
  4.9 Implementation
5 Haptic Decoration and Material Editing
  5.1 Haptic decoration
    5.1.1 Haptic painting
    5.1.2 Image-based haptic texturing
  5.2 Material editing
  5.3 Image-based 3D embossing and engraving
6 A Volume-based Haptic Sculpting Technique
  6.1 Polygonization method
    6.1.1 Adaptive method
  6.2 Sculpting modes
    6.2.1 Haptic sculpting mode
    6.2.2 Block sculpting mode
  6.3 Octree data structure
  6.4 Mesh-based solid texturing
7 Conclusion and Future Work
  7.1 Conclusion
  7.2 Future work
Reference List

List of Figures

1.1 An Asian-style plate decorated by our haptic system
1.2 Comparison of two haptic renderings without force discontinuity: (a) force shading; (b) our approach
1.3 Position of the virtual contact point: (a) approximation in Avila's method; (b) our approach
1.4 A haptic device called PHANTOM
1.5 The haptic system architecture
2.1 Drawbacks in the penalty-based approach: (a) force discontinuities when crossing boundaries of internal Voronoi cells; (b) pop-through on thin objects
2.2 The constraint-based god-object method: (a) minimize the displacement between the tool tip and the god-object; (b) the surface constraint prevents passing through the object
2.3 Convex intersection
2.4 Configuration space obstacles and constraint planes (figure from Ruspini's paper [52])
2.5 Finding a proxy using an iterative search: (a) perform collision detection between the path of the tool tip and the C-obstacles; (b) set the subgoal and the constraint plane(s); (c) find a new subgoal using the active planes and minimization based on Lagrange multipliers; (d) since the subgoal is in free space, drop the constraints, set the HIP as the new subgoal, and perform collision detection between the path and the C-obstacles; (e) recompute the subgoal with new constraints; (f) set the final proxy if the subgoal is stable
2.6 Concave intersection: (a) move the proxy to the closest constraint plane; (b) find a subgoal obeying that constraint; (c) iterate to determine additional constraints with the subgoal as the new target
2.7 Force discontinuity in constraint-based approaches
2.8 Two-pass force shading: (a) pass 1; (b) pass 2 (figure from Ruspini's paper [52])
2.9 A volumetric haptic rendering method using an intermediate representation: (a) computing a force vector using a virtual plane as an intermediate representation; (b) the virtual plane update rate is adjustable
2.10 Finding virtual contact points on the frictionless surface: (a) the first contact point after penetration; (b) a subsequent virtual contact point
2.11 Modification of the virtual contact point for friction: (a) stick; (b) slip
2.12 Two-stage texture mapping (from Basdogan's paper [6])
3.1 Implicit surface properties
3.2 Closest point transform (CPT): (a) a geometric model with mesh; (b) volumetric implicit surface representation converted using CPT; (c) hybrid surface representation
4.1 Computing the normal vector (red arrow) of each position from the neighbors' gradients (blue arrows) in a 2D grid
4.2 Two examples of the change in force direction: (a) deep penetration into the volume of an object; (b) movement toward a volumetric boundary
4.3 Spring-damper model
4.4 Haptic display for geometric models
4.5 The new virtual contact point due to friction
4.6 Haptic simulation on an object with a thin volume (Galleon: 8794 triangles) using an offset surface
4.7 Offset surface to simulate thin objects: (a) constraint force based on an offset surface; (b) moving the VCP onto the original surface
4.8 The magnetic surface forces the tool tip to keep contact with the surface
4.9 Merging multiple implicit surface representations: (a) potential values in the implicit surface representation of the first object, M1; (b) potential values in the implicit surface representation of the second object, M2; (c) potential values in the implicit surface representation of the merged object, M
4.10 Merging two objects: (a) the first object; (b) adding a handle object to the first object
4.11 Modulating the potential value: (a) an implicit surface with a flat side; (b) a textured implicit surface obtained by modulating the potential values of the flat surface
4.12 Implicit-based haptic texturing haptically renders a modified implicit surface by directly modulating the potential values
4.13 Implicit-based haptic texture: (a) Gaussian noise; (b) lattice pattern
4.14 Visualization of a level-7 octree storing a volumetric implicit surface representation of a horse model
5.1 Generating a texture on a 3D object: (a) by hand or an automatic method; (b) by haptic painting (figure from [32])
5.2 A volume-fill algorithm to find 3D triangles within the brush volume (red circle: the brush volume; white points: the grid points within the brush volume; yellow: the area being painted)
5.3 Barycentric coordinates
5.4 Examples of haptic painting: (a) the pottery created by the haptic painting system and its wire-frame; (b) the pottery model and its implicit surface representation
5.5 Examples of haptic decoration: (a) a panda on a model (3072 triangles); (b) a self-portrait on a pottery (4800 triangles); (c) a decorated pottery; (d) an Asian-style plate
5.6 A previous image-based haptic texturing method and our approach (dotted line)
5.7 Various haptic effects using material editing: (a) the user edits and feels the material properties while painting; (b) a closer view showing the mesh and the volumetric representation
5.8 Image-based embossing and engraving
5.9 An example of the embossing process: (a) an original model; (b) haptic painting on (a); (c) image-based haptic texturing on (b); (d) the embossed model
5.10 An example of polygonization by the Marching Cubes algorithm
5.11 Embossed models: (a) a texture-mapped asteroid model; (b) the embossed model of (a); (c) a painted tray model; (d) the embossed model of (c)
6.1 Comparison between the uniform and adaptive approaches: (a) uniform polygonization by the Marching Cubes algorithm, sampled at 128x128x128 resolution; (b) mesh data of (a); (c) an adaptive polygonization, with the initial mesh sampled at 64x64x64 resolution; (d) mesh data of (c)
6.2 Coxeter-Freudenthal decomposition of the cube (image from [56])
6.3 Intersection of the surface with a simplex (image from [56])
6.4 Adaptive polygonization, with possible subdivision cases: three simple edges (triangle 1), two simple edges (triangle 2), one simple edge (triangle 3), and no simple edge (triangle 4). (a) initial mesh; (b) the first subdivision; (c) projecting midpoints onto the implicit surface; (d) the final mesh after the first refinement
6.5 Adaptation criteria and projection to find the closest point on the implicit surface (image from [57])
6.6 Bridging the disparity in physical model update frequency: (a) physical model before a pushing deformation; (b) intermediate implicit surfaces in the middle of the physical deformation (blue arrow: direction of tool tip movement; red arrow: constraint force); (c) physical model after the pushing deformation
6.7 Bridging the disparity in physical model update frequency: (a) physical model before a pulling deformation; (b) intermediate implicit surfaces in the middle of the physical deformation (blue arrow: direction of tool tip movement; red arrow: constraint force); (c) physical model after the pulling deformation
6.8 Screen shots showing the sculpting process in haptic sculpting mode, with the sculpting tool represented by a red wire-frame sphere: (a) an original model before sculpting; (b) a model sculpted by the carving operation; (c) mesh model of (b); (d) a model sculpted by the addition tool; (e) mesh model of (d); (f) the sculpted model after applying more adding and carving operations
6.9 Screen shots showing the sculpting process in block sculpting mode, with the sculpting tool represented by a red wire frame: (a) locating the box tool on the region to be carved; (b) a sculpted model after applying a box carving tool to (a); (c) a model sculpted by the sphere carving operation; (d) locating the box tool on the region to be added; (e) a sculpted model after applying a box adding tool to (d); (f) mesh model of (e)
6.10 A virtual model created after applying several sculpting tools in block sculpting mode: (a) a front view; (b) mesh model of (a); (c) a close shot from the back side of (a); (d) mesh model of (c)
6.11 A USC logo created by our sculpting system: (a) a USC logo; (b) the mesh model of (a); (c) the USC logo after applying more sculpting tools to (a); (d) mesh model of (c)
6.12 Rhino sculpted from wood and its area-weighted mesh atlas (image from [11])
6.13 Mesh adapted to the detail of the solid texture: (a) a model without adaptive polygonization; (b) a model with adaptive polygonization; (c) a close shot of part of (b); (d) the mesh model of (c)
6.14 A model with a marble-like solid texture: (a) a USC logo model; (b) a close shot of the mesh of (a); (c) a bumpy model created by modulating the surface normals of (a); (d) a complex model with a wood-like solid texture
6.15 Solid-texture-based modelling: (a) an embossed model with a solid texture; (b) an engraved model with a solid texture; (c) a close shot of part of (b); (d) the mesh model of (c)

Abstract

In this thesis, we present a novel haptic rendering technique based on a hybrid surface representation that mixes a geometric model with an implicit surface representation. We demonstrate that this new approach to haptic rendering has several major advantages over previous techniques: fast collision detection, precise contact point determination, constant complexity of force computation, and reduced numerical issues. Using the hybrid surface representation, we also address conventional limitations of haptic display based on geometric models, such as maintaining a fast force update at 1 kHz, simulating large geometric models without significant degradation of haptic performance, correctly and stably simulating surface properties like friction, stiffness, and haptic texture, and avoiding force discontinuities without introducing a feeling of rounded surfaces.

Additionally, we introduce new features based on the implicit surface representation, such as an offset surface, which provides additional internal volume to prevent the tool tip from passing through thin objects, and a magnetic surface, which forces the tool tip to stick to the surface so that the user can explore a 3D model without losing contact with it. Finally, we use an octree to reduce the memory requirements of the volumetric representation.

We also present haptic decoration and material editing techniques. Haptic decoration allows the user to paint directly on the 3D model (haptic painting) and then sense the thickness of the paint spread on the surface (image-based haptic texturing). A similar mechanism can be used to emboss or engrave geometric models. Using the material editing technique, the user can edit local surface properties like friction and stiffness and then feel the assigned material properties on the surface.

Finally, we introduce a haptic sculpting system in which the user intuitively adds and carves material on a volumetric model using various sculpting tools. The volumetric model being sculpted is visualized as a geometric model which is adaptively polygonized according to the surface complexity. To enhance visual realism, we present a mesh-based solid texturing method which accurately simulates the sculpting of a shape on the surface without significant texture distortion.

Chapter 1

Introduction

The haptic interface allows the user to touch, explore, paint, and manipulate 3D models in a natural way with a haptic display device. The haptic display device is a force feedback device as well as an input device. Thanks to haptic feedback, it renders virtual objects tangible and provides an intuitive interface to a virtual environment. The haptic rendering algorithm generates a force field to simulate the contour of an object and its surface properties (such as friction and texture) when the user touches a virtual object, and can also guide the user along a specific trajectory.
1.1 Motivation and overview

Haptic rendering methods can be classified into two main groups according to the surface representation they use: geometric haptic algorithms [62, 52, 39, 53], used to render surface data, and volumetric haptic algorithms [3, 31, 36], used for volumetric data. This thesis proposes a novel haptic rendering algorithm [34, 35] that takes advantage of both the geometric (B-rep) and the implicit (V-rep) surface representations of a given 3D object. For visual display, the geometric model represents the 3D model far more efficiently than volume rendering (whose time complexity is O(n^3) with respect to resolution). Meanwhile, the implicit surface representation has many properties that benefit the haptic rendering algorithm, for instance fast collision detection, fast determination of the surface normal, and avoidance of force discontinuities without an exaggerated feeling of roundness. The novelty of our technique therefore lies in exploiting both representations to derive a fast and accurate haptic rendering technique without force discontinuity.

Based on the volumetric implicit representation, the thesis introduces some novel features for haptic display: an offset surface that provides additional internal volume to prevent the tool tip from passing through thin objects, a magnetic surface that forces the tool tip to stay on the surface for better exploration, and the merging of multiple implicit surfaces to transparently simulate combined objects. In our system, an octree-based data structure is employed to reduce the memory requirements of the volumetric representation.

Recently, a few haptic systems [32, 23, 17] have supported haptic painting, where the user can paint directly onto a 3D model. When the user paints on the surface, the corresponding portions of the texture map are updated.
Using haptic painting, we can easily create correct and undistorted texture images on the desired area of a 3D model without any knowledge of the mathematics of surface parameterization. In this dissertation, we propose a haptic editing technique for decoration and material properties that extends previous haptic painting systems. Haptic decoration (see Figure 1.1) allows the user to paint directly on the surface and then sense the surface variation generated by the painted image (called image-based haptic texturing). The surface variation is accomplished by modulating potential values in the volumetric representation around the painted image. As a result, the system haptically renders the textured implicit surface without any additional computation or modification of the algorithm. In addition, the textured implicit surface can be converted into a geometric model containing an embossed or engraved shape of the painted image.

Material editing enables editing of local material properties such as friction and stiffness, instead of global properties over the whole 3D model. The system then simulates the assigned material properties in the fast haptic loop as the user explores the surface. The material properties are saved in the volumetric representation rather than in the surface representation. This provides a reasonable approximation and fast computation of the corresponding friction and stiffness values.

Figure 1.1: An Asian-style plate decorated by our haptic system

We also developed a virtual sculpting system based on a volumetric implicit surface as an alternative to existing digital sculpting implementations. Our haptic rendering algorithm is integrated into the sculpting system to haptically render the implicit surface being sculpted and to manipulate the deformation intuitively.
To convert the volumetric model into a geometric model, we use an adaptive polygonization method in which the mesh represents sharp edges effectively with a smaller number of triangles than a uniform polygonization method such as Marching Cubes. For better visual effect, we present a mesh-based method for solid texturing [48] that adaptively subdivides the mesh according to the detail of the solid texture. This method accurately simulates the sculpting of a shape on the shape's surface without texture distortion.

1.2 Advantages of implicit surface representation

The implicit representation of the surface of a 3D object is traditionally described by an implicit equation [8]. Our haptic rendering algorithm uses a discrete, volumetric implicit surface representation, where the implicit function defining the surface is sampled only on a regular 3D grid [34]. Such a surface representation has the following advantages for haptic rendering:

• Fast collision detection and contact point determination on the surface, using the potential value at each grid point to indicate proximity to the surface.

• Fast surface normal computation through the gradient of the potential values.

• Easy offset surface computation, as iso-surfaces of the same potential with different iso-values.

• Constant complexity of force computation, even for complex 3D surfaces, since the force computation is performed only locally in the cell containing the tool tip.

• Reduced numerical issues: raw and/or complex geometric models often have small gaps at a common edge or vertex due to small numerical errors [52]; these may cause the tool tip to fall suddenly into the internal volume through the gaps, which cannot happen with an implicit-based technique.

1.3 Contributions

The contributions of our haptic system based on implicit surface representation are as follows:

1.
We first employ an implicit surface representation to haptically render a geometric model, taking advantage of the implicit representation as well as the geometric representation. In our algorithm, the user "sees" a geometric model and "feels" an implicit surface which wraps around the geometric model (Figure 3.2). The haptic algorithm is fast and stable thanks to implicit surface properties, and can be used for huge models on low-end computers, since its performance is independent of the shape complexity of the geometric model and the grid resolution of the volumetric representation.

2. The haptic system avoids force discontinuity around volumetric boundaries (edges and vertices) of a geometric model, without a feeling of rounded surfaces (Figure 1.2b), by using an interpolation function in a volumetric model, an approach first introduced by Avila [3]. Note that Avila's method was designed for volumetric data; we employ this approach to simulate geometric models.

Figure 1.2: Comparison of two haptic renderings without force discontinuity (a) Force shading (b) Our approach

3. We use a simple variation of Avila's volume haptic method [3] to obtain the correct force magnitude. In previous volume haptic algorithms [3, 36], the force magnitude is a function of the potential value. This approximation may not allow the user to feel stiff objects (Figure 1.3a). To solve the problem, we find a virtual contact point on the surface to render the surface more accurately and robustly (Figure 1.3b).

4. We use the offset surface in the implicit surface representation to give additional internal volume to 3D models, generating enough force to prevent the tool tip from passing through. The responsive force is generated based on the offset surface. Haptic decoration and material editing can also be performed on the offset surface (see Figures 1.1 and 5.5).

Figure 1.3: Position of the virtual contact point (a) approximation in Avila's method (b) our approach

5. We introduce a magnetic surface to the haptic system, which forces the tool tip to stick to the surface by attracting it within a magnetic field established around the surface. It allows the user to explore the surface without leaving it. The magnetic surface also works on the offset surface.

6. We suggest a novel haptic texturing method implemented by mapping a texture pattern directly into the implicit representation, unlike previous haptic texturing methods [6, 39, 53] which modulate the friction or perturb the surface normal. As a result, the geometry of the implicit surface itself is changed, and texture geometry is expressed explicitly without additional computation. This implicit haptic texturing is also applicable to image-based haptic texturing, in which the surface variation produced by a painted 2D image is sensed through the implicit representation.

7. We introduce material editing through the haptic interface. The user edits material properties like friction and stiffness directly over the 3D model in an intuitive way, just as he paints. The user then feels the assigned local material properties, instead of global properties, in the fast haptic loop.

8. The volume-based haptic sculpting system uses a fast, adaptive polygonization method as an alternative to existing digital sculpting implementations. In addition, we present a novel method to implement solid texturing on the physical model being sculpted.

1.4 System architecture

Our haptic system consists of two basic parts, the visual and haptic processes, running on a PC with a Pentium III 1 GHz.
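Several of the components just listed (collision detection, normal computation, contact-point search) reduce to cheap point queries against the sampled potential grid. The following is a minimal sketch of such queries, assuming a regular grid of potential values with trilinear interpolation inside each cell; all names are hypothetical, not the thesis's actual implementation:

```python
import math

def trilinear_potential(grid, p):
    """Interpolate the sampled potential at a continuous point p (grid units)."""
    x, y, z = p
    i, j, k = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - i, y - j, z - k
    def g(di, dj, dk):                      # one of the 8 corner potentials
        return grid[i + di][j + dj][k + dk]
    # interpolate along x, then y, then z
    c00 = g(0, 0, 0) * (1 - fx) + g(1, 0, 0) * fx
    c10 = g(0, 1, 0) * (1 - fx) + g(1, 1, 0) * fx
    c01 = g(0, 0, 1) * (1 - fx) + g(1, 0, 1) * fx
    c11 = g(0, 1, 1) * (1 - fx) + g(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

def surface_normal(grid, p, h=1.0):
    """Normal = normalized gradient of the potential (central differences)."""
    x, y, z = p
    gx = trilinear_potential(grid, (x + h, y, z)) - trilinear_potential(grid, (x - h, y, z))
    gy = trilinear_potential(grid, (x, y + h, z)) - trilinear_potential(grid, (x, y - h, z))
    gz = trilinear_potential(grid, (x, y, z + h)) - trilinear_potential(grid, (x, y, z - h))
    n = math.sqrt(gx * gx + gy * gy + gz * gz)
    return (gx / n, gy / n, gz / n)

def in_contact(grid, p, iso=0.0):
    """Collision test is a single sign check: negative potential = inside."""
    return trilinear_potential(grid, p) < iso
```

Each query touches only the one cell containing the tool tip (plus neighbors for the gradient), which is why the cost is constant regardless of how many triangles the geometric model has.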
We use a 3-DOF PHANTOM haptic device (see Figure 1.4) for haptic display. The overall system architecture is shown in Figure 1.5.

Figure 1.4: A haptic device called PHANTOM

The visual process is in charge of loading and rendering the geometric model, processing user input, modulating potential values for haptic texturing (including synthetic and image-based haptic texturing), editing the surface properties into the volumetric representation, and updating the 2D texture image during haptic painting. The system can load geometric models in VRML and OpenInventor formats.

The haptic process takes care of updating the force displayed by the PHANTOM. Collision detection, finding the contact point, simulating surface properties such as friction and stiffness, and the force computation are all performed in the fast haptic loop. We use the GHOST library [21] to obtain the current position of the tool tip and send the generated force directly to the PHANTOM at 1 kHz.

Figure 1.5: The haptic system architecture

In the volumetric representation, each cell point has a potential value for the implicit surface representation, surface properties, and an index of the closest face on the geometric model. The initial implicit surface representation and face indices are built during the conversion from the geometric model, before the haptic simulation begins.
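The per-cell record and the shape of the 1 kHz loop described above can be sketched as follows. This is a hypothetical simplification (the direction and lookup functions are stand-ins, and it is not the actual GHOST API); the real system computes force from a virtual contact point rather than raw penetration:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    potential: float   # signed proximity to the surface (implicit value)
    friction: float    # material property, locally editable by the user
    stiffness: float   # material property, locally editable by the user
    face_index: int    # closest face on the geometric model (used for painting)

def haptic_step(tool_tip, lookup_cell, normal_at, k_spring=0.8):
    """One iteration of the fast haptic loop: detect contact, compute a force.

    lookup_cell maps a tool-tip position to its Cell; normal_at returns the
    surface normal there. The force here is a simplified spring along the
    normal, scaled by the cell's locally edited stiffness.
    """
    cell = lookup_cell(tool_tip)
    if cell.potential >= 0.0:          # outside the surface: no contact
        return (0.0, 0.0, 0.0)
    depth = -cell.potential            # penetration depth estimate
    nx, ny, nz = normal_at(tool_tip)
    m = k_spring * cell.stiffness * depth
    return (m * nx, m * ny, m * nz)
```

Storing friction and stiffness in the cell, rather than on the mesh, is what lets this loop pick up the user's material edits at no extra cost.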
1.5 Organization

We proceed with a discussion of previous related work and implicit surfaces in chapters 2 and 3, respectively. Chapter 4 describes our haptic model. Haptic decoration and material editing techniques are described in chapter 5. The haptic sculpting system based on the volumetric representation is explained in chapter 6. We conclude and suggest future work in chapter 7.

Chapter 2

Related Work

2.1 Collision detection

In geometric haptic rendering models, collision detection is not trivial to compute. However, we need a fast and accurate method for collision detection, since the haptic simulation should be updated at rates as high as 1 kHz to achieve system stability.

In order to speed up collision detection, hierarchical bounding volumes and spatial partitioning approaches have been proposed. There are different types of bounding volumes, such as spheres [52], axis-aligned bounding boxes, oriented bounding boxes (OBBs), etc. Of them, hierarchies of oriented bounding boxes (OBBTrees) are popularly used. However, a bounding volume tree can be badly skewed, and a large tree depth results in increased search time. Spatial partitioning is used in some algorithms for simple collision detection; it decomposes the space into uniform or adaptive grids, octrees, or k-d trees to directly access each cell in constant time. However, the data structure for spatial decomposition requires a large amount of memory as the cell size becomes smaller.

One of the most popular collision detection algorithms for geometric haptic rendering is H-Collide [24]. It uses a hybrid hierarchy of spatial subdivision, which decomposes the space into uniform cells, and OBB (oriented bounding box) trees to represent the bounding volume hierarchy. Ruspini et al. [52] used a bounding sphere hierarchy, in which the smallest spheres are the leaves, to detect collisions.
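The constant-time cell access that motivates uniform spatial partitioning can be sketched in a few lines. This is a hypothetical helper, assuming a uniform grid with a given origin and cell size; it is not code from any of the cited systems:

```python
# Map a 3D point to the flat index of its uniform-grid cell in O(1),
# or return None when the point lies outside the grid bounds.
def cell_index(p, origin, cell_size, dims):
    idx = []
    for k in range(3):
        i = int((p[k] - origin[k]) // cell_size)
        if not 0 <= i < dims[k]:
            return None
        idx.append(i)
    nx, ny, _ = dims
    # row-major flattening of the (x, y, z) cell coordinates
    return (idx[2] * ny + idx[1]) * nx + idx[0]
```

The lookup cost is independent of scene complexity, which is exactly the property a 1 kHz haptic loop needs; the trade-off, as noted above, is memory growth as cells shrink.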
2.2 Geometric haptic rendering algorithms

Haptic rendering algorithms can be classified into two groups according to the surface representation method: geometric and volumetric haptic rendering algorithms. Traditional haptic rendering methods are based on geometric surface representations, which mainly consist of triangle meshes. In this section, several haptic algorithms for geometric models are described.

2.2.1 Penalty-based approaches

An early approach for geometric models is the penalty-based method (or vector field method) [39, 40], in which the direction of the force is normal to the closest surface and the magnitude is proportional to the amount of penetration into a 3D volume. This method does not keep a history of the movement of the tool tip and uses a one-to-one mapping from position to force. Typically, the interior of the object is subdivided into several internal Voronoi cells, and the tool tip is assumed to have entered from the closest surface.

Figure 2.1: Drawbacks of the penalty-based approach: (a) force discontinuities when crossing boundaries of internal Voronoi cells; (b) pop-through of thin objects

The penalty-based methods, however, have a number of drawbacks:

• It is often difficult to determine which surface is the closest one when contact exists with multiple surfaces. In the worst case, a global search of all the primitives may be required to find the closest exterior surface.

• This method suffers from a strong force discontinuity in both direction and magnitude when crossing volume boundaries. If the tool tip penetrates too deeply, it may be pushed from one side of the object to the other around volume boundaries (see Figure 2.1a).
• Small and thin objects do not have the internal volume to generate enough force to prevent the tool tip from passing through them (see Figure 2.1b).

Due to these limitations, this method works only for simple geometries, such as spheres and planes, for which the direction and magnitude of the force are easy to determine.

2.2.2 Constraint-based approaches

In order to overcome the limitations of the penalty-based method, Zilles and Salisbury [62] introduced a constraint-based "god-object" method. A god-object is a virtual object that represents the surface contact point. In free space, the tool tip point and the god-object are located at the same position (see the god-object at time t in Figure 2.2b). If the tool tip penetrates the surface, the god-object remains on the original surface while the tool tip is located inside the surface. The position of the god-object is determined by minimizing the displacement between the tool tip and the god-object on the surface (see Figure 2.2a). As a result, the movement of the god-object is constrained to stay on the object's surface. This surface constraint prevents the tool tip from passing through objects. This approach keeps a history of the contact surfaces of an object, so that we always know the surfaces that the god-object has passed through.

Figure 2.2: The constraint-based god-object method: (a) minimizing the displacement between the tool tip and the god-object; (b) the surface constraint prevents passing through the object

They denote a surface as active if the old god-object is located on it. When the tool tip is moving on a convex surface, only one surface is active at a time and constrains the movement of the god-object.
The transition of the god-object between two consecutive surfaces takes place in two steps. If the old god-object is on an active surface, the new one must stay on the plane of that surface, but not necessarily within its boundaries. In the first step, the new god-object is over the next surface but still on the plane of the current surface (at time t + 1 in Figure 2.3). In the next servo loop, however, that surface is no longer active, since the god-object is located in free space. In the second step, the god-object falls onto the next surface (at time t + 2 in Figure 2.3).

Figure 2.3: Convex intersection

When the tool tip touches concave portions of an object, multiple surfaces can be active simultaneously. If the tool tip traverses a concave intersection of two planes, both planes become active and act as constraint surfaces. When there are more than three active surfaces, only three will be activated at any one time, and the god-object will still be constrained to a point.

The new position of the god-object is determined using Lagrange multipliers. The planes (at most three) are used as constraints, and an energy function is defined to minimize the distance between the tool tip and the god-object, as shown in equations 2.1 and 2.2, respectively.

    A_i x + B_i y + C_i z − D_i = 0,  1 ≤ i ≤ 3    (2.1)

    E = (1/2) k ((x − x_p)² + (y − y_p)² + (z − z_p)²)    (2.2)

    C = (1/2) k ((x − x_p)² + (y − y_p)² + (z − z_p)²) + Σ_{i=1}^{3} λ_i (A_i x + B_i y + C_i z − D_i)    (2.3)

The new position of the god-object is determined by minimizing the cost C in equation 2.3. In order to minimize the cost, all six partial derivatives of C with respect to x, y, z, λ_1, λ_2, and λ_3 are set to 0. As a result, we obtain six linear equations:
    | 1   0   0   A_1  A_2  A_3 | | x   |   | x_p |
    | 0   1   0   B_1  B_2  B_3 | | y   |   | y_p |
    | 0   0   1   C_1  C_2  C_3 | | z   | = | z_p |    (2.4)
    | A_1 B_1 C_1 0    0    0   | | λ_1 |   | D_1 |
    | A_2 B_2 C_2 0    0    0   | | λ_2 |   | D_2 |
    | A_3 B_3 C_3 0    0    0   | | λ_3 |   | D_3 |

Fortunately, the matrix in equation 2.4 has several advantages. It is symmetric, has the identity matrix in the upper-left (3×3) block and a null matrix in the lower-right (3×3) block, and is always invertible. Thanks to these properties, only 65 multiplicative operations (multiplies and divides) are required to solve for x, y, and z in the case of three constraints. It requires 33 multiplicative operations when there are two constraints, and 12 with a single constraint.

This haptic rendering method tracks the closest point (the god-object) on the surface with a simple method for detecting collisions between the tool tip and the surface of objects. It limits model size to a few hundred polygons.

Ruspini et al. [52] proposed another constraint-based method built on the god-object method. They adopt the term "virtual proxy," which corresponds to the god-object. The god-object is a point on the surface; the virtual proxy, however, is a small sphere, which prevents the virtual contact point from falling through the small gaps commonly found in computer graphics models. In order to define constraint planes more formally, they use configuration-space obstacles (C-obstacles) from robotics (see the right-hand image in Figure 2.4).

Figure 2.4: Configuration-space obstacles and constraint planes (figure from Ruspini's paper [52])

The virtual proxy is computed by an iterative search as follows:

1. Find subgoals based on the same distance minimization as for the god-object, using Lagrange multipliers (see Figure 2.5-c,e).

2. At each subgoal, all the planes that pass through that point are potential constraints. The minimum set of active constraints is selected (see Figure 2.5-b,d).

3.
If the subgoal is in free space, set the tool tip position as the new subgoal. The path may intersect the C-obstacles; add the first plane intersected as a constraint and the intersection point as the current subgoal (see Figure 2.5-d).

4. The process ends when the virtual proxy becomes stable (see Figure 2.5-f).

Similarly, Figure 2.6 shows how the final proxy is found using the iterative search when the tool tip is moving around concave portions of objects.

In this method, a more advanced collision detection method using a bounding sphere hierarchy [51] was employed, increasing the feasible model size to tens of thousands of polygons. However, this constraint-based approach still suffers from force discontinuity around volume boundaries (see Figure 2.7). In addition, the system can become unstable as performance gradually degrades with increasing model complexity.

Surface properties such as friction and stiffness can be implemented by restricting the movement of the virtual contact point.

Figure 2.5: Finding a proxy using an iterative search: (a) perform collision detection between the path of the tool tip and the C-obstacles; (b) set the subgoal and the constraint plane(s); (c) find a new subgoal using the active planes and the Lagrange-multiplier minimization; (d) since the subgoal is in free space, drop the constraints, set the HIP as the new subgoal, and perform collision detection between the path and the C-obstacles; (e) recompute the subgoal with the new constraints; (f) set the final proxy once the subgoal is stable
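The distance minimization underlying both the god-object and virtual-proxy subgoal computation can be sketched as a projection of the tool tip onto the intersection of the active constraint planes. The sketch below solves the normal-equations form equivalent to the Lagrange-multiplier system (2.4) for up to three planes; the function names are illustrative, and real implementations exploit the special matrix structure to use fewer operations:

```python
# Project point p onto the intersection of up to three planes
# n_i . x = d_i, i.e. solve min ||x - p||^2 subject to the plane
# constraints. Equivalent to the Lagrange system: solve
# (N N^T) lam = N p - d, then x = p - N^T lam.

def _solve(M, b):
    """Gaussian elimination with partial pivoting for a tiny system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def god_object(p, planes):
    """planes: list of (normal, d) with plane equation normal . x = d."""
    if not planes:
        return list(p)          # free space: god-object == tool tip
    N = [n for n, _ in planes]
    d = [di for _, di in planes]
    G = [[sum(a * b for a, b in zip(ni, nj)) for nj in N] for ni in N]
    rhs = [sum(a * b for a, b in zip(ni, p)) - di for ni, di in zip(N, d)]
    lam = _solve(G, rhs)
    return [p[k] - sum(lam[i] * N[i][k] for i in range(len(N)))
            for k in range(3)]
```

With one active plane this reduces to the familiar point-to-plane projection; with two or three planes the god-object is pinned to an edge or a corner, matching the behavior described above.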
Figure 2.6: Concave intersection: (a) move the proxy to the closest constraint plane; (b) find a subgoal obeying that constraint; (c) iterate to determine additional constraints with the subgoal as the new target

Figure 2.7: Force discontinuity in constraint-based approaches

2.2.3 Force shading

Both penalty-based and constraint-based approaches create force discontinuities along the edges of each polygon, as the surface normal shifts from one polygon to the next.

Morgenbesser and Srinivasan [45] first introduced an approximation termed force shading (analogous to Phong shading in computer graphics), which interpolates surface normals between adjacent polygons. Basdogan et al. [6] proposed an improved smoothing technique that interpolates per-vertex normals using barycentric coordinates. In these approaches, the surface normal at each vertex is computed before the haptic simulation by averaging the surface normals of the neighboring polygons, weighted by their subtended angles.

These solutions change the direction of the normal force while keeping the magnitude proportional to the depth of penetration into the surface. However, they may not render the exact surface and can cause the system to become unstable. In addition, it is unclear how to extend them to handle multiple intersecting surfaces or to support additional surface properties such as friction or texture.

Ruspini et al. [52] suggested a more general force shading method that changes both the direction and magnitude of the force. In this method, the interpolated normal specifies a new constraint plane passing through the contact point. In the first pass, the first subgoal is found using the interpolated planes instead of the original constraint planes.
This subgoal is then treated as the tool tip position, and a second pass is performed to find the final goal using the iteration described above.

Figure 2.8: Two-pass force shading: (a) pass 1; (b) pass 2 (figure from Ruspini's paper [52])

The force shading method addresses the force discontinuity problem, but introduces a feeling of rounded surfaces due to the discrepancy between the haptic force field and the actual normal field of the surface [28, 52], as sketched in Figure 1.2(a).

2.3 Volumetric haptic rendering algorithms

Haptic rendering algorithms for geometric models are not applicable to volumetric data without first converting it into a geometric representation. However, the resulting geometric models usually contain a large number of polygons, which makes this impractical. In volumetric haptic rendering, instead, the force field is computed directly from the volume data.

2.3.1 Direct volume haptic rendering

A haptic rendering algorithm for volumetric data, known as volume haptization, was first introduced by Iwata and Noma [31]. Volume haptization defines a direct mapping from voxel values to force and/or torque for the exploration of volumetric data. They used the gradient of the density value to calculate the direction of a stiffness force.

Avila [3] proposed general haptic algorithms for volumetric data. In these approaches, the direction of the force is generated from the gradient of the density in the volumetric data, and the magnitude is linearly proportional to the density difference. The final force is given by equation 2.5.
    F = A + R(V) + S(N)    (2.5)

where F is the force supplied to the user, V is the velocity of the moving tool tip, and N is a normal vector computed using central differences. A is an ambient force, S(N) is a stiffness force normal to the surface, and R(V) is a damping (retarding) force. The ambient force is the sum of all forces acting on the tool tip that are independent of the volumetric data itself, for example gravitational force, buoyant force, or synthetic guide forces.

Equations 2.6 and 2.7 show how the damping force is obtained directly from the volumetric data.

    R(V) = −V f_r(d)    (2.6)

    f_r(d) = C_1(d − d_i) + C_2  if d_i < d < d_j,  0 otherwise    (2.7)
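The damping-force mapping of equations 2.6 and 2.7 can be sketched as follows (an illustrative Python sketch; parameter names are invented, and the piecewise form of f_r follows the reconstruction above):

```python
# f_r maps the density d (depth of penetration in the shell between
# the isosurface densities d_i < d_j) linearly to a force magnitude;
# outside the shell the damping force is zero.
def f_r(d, d_i, d_j, c1, c2):
    return c1 * (d - d_i) + c2 if d_i < d < d_j else 0.0

def damping_force(v, d, d_i, d_j, c1, c2):
    """R(V) = -V * f_r(d): a force opposing the tool-tip velocity V."""
    m = f_r(d, d_i, d_j, c1, c2)
    return [-vi * m for vi in v]
```

Because the magnitude depends only on the local density, no contact-point search is needed, which is what makes the method fast; the cost, as noted below, is that the rendered surface only approximates the true one.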
The intermediate plane facilitates collision detection and reaction force computation. When the tool tip penetrates into the volume, the intersection point (a white dot in Figure 2.9) can be determined quickly by linear interpolation along a ray between the previous and current tool tip point. The intermediate plane is defined as a tangent plane S(V) = 4 - / s(d) A H (2.8) otherwise (2.9) 21 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. isosurface tool tip tool tip (g| (b) Figure 2.9: A volumetric haptic rendering method by an intermediate representation (a) computing a force vector using a virtual plane as an intermediate representation (b) Virtual plane update rate is adjustable at the intersection point on the surface. Note that the normal vector at the intersection point is computed by central difference instantly. Then, the tool tip point is mapped onto the virtual plane to find the virtual contact point using the equation 2.10. VCP — TP + ((IP - TP) ■ N)N (2.10) Where, VCP is a virtual contact point, T P is a current tool tip point, IP is a inter­ section point and N is normal vector on the intersection point. The update rate of the virtual plane is adjustable according to processing power(see Figure 2.9b). If the update rate is fast, the user feel curved surface. However, low update rate causes a bumpy surface which feel like a surface of polyhedron and introduces strong force discontinuities. Salisbury and Tarr [54] suggested an similar constraint-based algorithm for surfaces defined by implicit functions. The implicit surface usually is implicitly described by an analytic function f(p) which indicates the proximity between point p and the surface(for further information, see section 3). The implicit surface representation has same proper­ ties as volumetric data, such as surface normal from the gradient of implicit function and 22 Reproduced with permission of the copyright owner. 
a proximity representation to the surface using potential values (density values in volumetric data).

Figure 2.10: Finding virtual contact points on a frictionless surface: (a) the first contact point after penetration; (b) subsequent virtual contact points

They also use a tangent plane, like the intermediate plane in Chen's approach. However, the movement of the virtual contact point is constrained by the surface itself rather than by the intermediate plane. Also, the tangent plane is created at the closest point on the surface from the tool tip, or at the intersection point between the tangent plane and the tool tip, instead of at an intersection point between the tool tip and the previous virtual contact point (see Figure 2.10).

In order to find the nearest point, instead of Lagrange multipliers, the algorithm uses the gradient of f(p), where p is a seed point (the tool tip point in Figure 2.10a) or an intersection point on a tangent plane (Figure 2.10b), since the gradient is a good approximation to the direction toward the closest point on the surface. A small step is taken along this direction, and the process is repeated until the step size becomes sufficiently small.

A drawback of this algorithm is that it introduces additional friction even when no friction is applied to the surface, since the direction of the generated force differs from the surface normal at the contact point (see the contact point (t+1) in Figure 2.10a).

Figure 2.11: Modification of the virtual contact point for friction: (a) stick; (b) slip

In this algorithm, friction is implemented by modifying the position of the virtual contact point.
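This modification can be sketched as a friction-cone test, anticipating the cone geometry described in the next paragraphs (apex at the tool tip, axis along the surface normal, half-angle arctan(fr)). The code is an illustrative sketch with invented names, not Salisbury and Tarr's implementation:

```python
import math

def friction_update(tool_tip, old_ip, normal, fr):
    """Keep the old intersection point if it lies inside the cone of
    half-angle arctan(fr) whose apex is the tool tip and whose axis
    is the unit surface normal; otherwise slide it to the boundary."""
    v = [old_ip[i] - tool_tip[i] for i in range(3)]
    h = sum(v[i] * normal[i] for i in range(3))       # height along the axis
    lat = [v[i] - h * normal[i] for i in range(3)]    # lateral offset
    r = math.sqrt(sum(c * c for c in lat))
    r_max = abs(h) * fr                               # cone radius at height h
    if r <= r_max:
        return list(old_ip), True                     # inside the cone: stick
    s = r_max / r                                     # outside: slip inward
    return [tool_tip[i] + h * normal[i] + s * lat[i] for i in range(3)], False
```

Note that tan(arctan(fr)) = fr, so the inside test reduces to comparing the lateral distance against fr times the height, with no trigonometric calls needed.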
They define the friction cone shown in Figure 2.11, with its apex at the new tool tip point and its central axis normal to, and passing through, the last tangent plane at the intersection point. The angle of the cone is a function of the friction constant fr: α = arctan(fr). If the previous contact point is inside the cone, it becomes the new intersection point on the tangent plane and sticks in place (see Figure 2.11a). If it is outside the cone, however, the old intersection point slips toward a new one according to the value of α (see Figure 2.11b).

2.4 Haptic texturing

2.4.1 Synthesized haptic texture

Minsky et al. [44] first demonstrated the simulation of haptic textures using lateral force fields proportional to the local gradient of the textured surface, on a two-degree-of-freedom planar haptic device.

The stick-slip friction model has been presented by several authors [13, 39, 53]. In this model, the tool tip of the haptic device is "stuck" (restrained) by static friction until the user applies enough force to overcome it. The tool tip then moves away from the sticking point and "slips" until it meets the next snagging point. This can be achieved by modulating the surface friction.

Siira and Pai [55] present an algorithm to synthesize haptic textures from a statistical representation using a two-degree-of-freedom haptic display. Similarly, Fritz and Barner [19] introduce stochastic modelling techniques to produce random and pseudo-random texture patterns for synthetic texture generation.

Basdogan et al. [6, 27] implemented a bumpy surface by perturbing the direction and magnitude of the surface normal using height fields. The height field at any point on the surface is generated by two-stage texture mapping [59] or by a procedural approach.
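The height-field normal perturbation just described can be sketched as follows. This is a bump-mapping-style illustration, not the exact formulation of [6, 27]; the names and the simple blending are invented:

```python
import math

def height_gradient(h, u, v, du=1e-3):
    """Central-difference gradient of a 2D height function h(u, v),
    expressed in the local tangent frame as (dh/du, dh/dv, 0)."""
    return [(h(u + du, v) - h(u - du, v)) / (2 * du),
            (h(u, v + du) - h(u, v - du)) / (2 * du),
            0.0]

def perturbed_normal(n, grad_h, scale=1.0):
    """Offset the unit surface normal n against the local height-field
    gradient and renormalize, so ridges in h tilt the rendered force."""
    m = [n[i] - scale * grad_h[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in m))
    return [c / length for c in m]
```

On a flat height field the gradient vanishes and the normal is unchanged, so the texture adds no spurious force in smooth regions.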
2.4.2 Image-based haptic texturing

In addition to the visual realism of the graphical texture, image-based haptic texturing enhances haptic realism at the same time: it allows the user to feel the height variations of a 2D image (the graphical texture).

Ho et al. [27] implemented image-based haptic texturing by perturbing the direction and magnitude of the surface normal using height fields. They use a two-stage texture mapping technique [59] from computer graphics to implement graphical texture mapping and to obtain a discrete height field. In the first stage, a two-dimensional image space is mapped onto a simple three-dimensional intermediate surface, such as a sphere. The second stage maps the three-dimensional texture pattern on the intermediate surface onto the object surface (see Figure 2.12). After these two stages, the texture coordinates of any point on the object surface can be obtained and produce a height value. To perturb the surface normal, they use the local gradient of the height value.

Figure 2.12: Two-stage texture mapping (from Basdogan's paper [6])

This image-based haptic texturing, however, requires additional computations for detecting collisions between the tool tip and the new textured surface and for perturbing the surface normal. In addition, the direction and amount of force are computed based on the height value at the VCP (the white circle in Figure 5.6) on the original surface instead of the textured surface. This makes the system unstable due to sudden changes in the magnitude and direction of the force.

2.5 Haptic Painting

Automatic graphical texturing is performed by applying texture maps from a 2D image to a 3D model based on a mathematical mapping function. This way of texture mapping causes some problems.
Firstly, it is difficult to position a texture on a target region of the 3D model, even when texturing manually by hand. In addition, the resulting image on the 3D model may be distorted during the mapping process. Haptic painting systems address these problems by painting directly onto 3D models using a haptic interface. Unlike a mouse, which is a 2D input device, haptic devices provide an intuitive, precise, and responsive interface that enables the user to paint in a natural style.

Hanrahan and Haeberli first suggested a 3D painting system using a 2D mouse interface that allows the user to paint directly onto a 3D model [25]. However, the mapping from 2D screen space to the 3D mesh may not always be clear. The Chameleon system introduced an advanced painting system with a mouse interface [30]. This system dynamically generates a tailored UV-mapping for newly painted polygons during the painting process, rather than using a predefined UV-mapping, to avoid texture distortion. The drawback of these systems is that a mouse interface can provide neither a natural and precise painting style nor force response.

3D painting with a 3D input device was introduced by Agrawala et al. [2]. When the user touches a physical object using a virtual brush incorporating a 3D tracking sensor, he feels the responsive force without a haptic device, while the corresponding area on the scanned model of the physical object is painted. The problem with this interface is that the user must look at both the real object and the scanned model at the same time to see whether the paint is being applied correctly. This registration process can make the painting style awkward. In addition, the system requires real objects to perform the painting.

In the 3D painting systems of Hanrahan et al. [25] and Agrawala et al. [2], paint is applied to the vertices of the 3D model rather than to a texture map.
Thus, these 3D painting systems require models with a large number of polygons to avoid the sampling artifacts caused by large polygons.

Johnson et al. [32] first proposed a haptic painting system that paints directly onto trimmed NURBS models with a haptic interface. Incorporating a haptic interface provides a natural painting style, as if the user were painting on a real object. While the user paints on a 3D model, the system updates the 2D texture map and adaptively adjusts the brush size using the surface parameterization to mitigate texture distortion between the 2D texture image and the 3D model. However, this system works only for NURBS models, and it performs a flood fill on the region within the brush volume in texture space, resulting in texture distortion on complex 3D models.

A haptic painting system for polygonal models, named inTouch, was developed by Gregory et al. [23]. In addition to painting, the system can edit polygonal models of arbitrary topology at multiple resolutions, based on a subdivision surface representation. In order to avoid the texture distortion problem, they use a standard 2D scan-conversion in texture space. During the scan-conversion, each texel in the 2D triangle in texture space is updated according to a brush function based on its corresponding 3D position. Building upon the framework of inTouch, ArtNova [17] allows the user to interactively patch textures onto a local region with brush strokes, with the orientation of the texture determined directly by the stroke.

Baxter et al. [7] developed a painting system, dAb, which provides the user with the traditional tools of a painter. They present a physically-based, deformable 3D brush model that allows anyone to control a virtual brush as he or she would a real brush.
The haptic feedback enhances the sense of realism and provides tactile cues that enable the user to better manipulate the paint brush.

2.6 Volume-based Haptic Sculpting

Volume sculpting [20, 58, 9] allows the user to interactively edit a 3D object as a volumetric representation rather than the traditional surface representation. The volumetric representation is very well suited to solid modelling in that arbitrary models can be represented and no additional work is required for topological changes. For instance, changing the genus (i.e., the number of holes) of the solid is trivially supported. Volume sculpting has proven to be effective for sculpting objects of complex topology and organic appearance.

Meanwhile, surface-based modelling [12, 60] has two fundamental problems: (1) surface models are control-point based, and the modelling process is by nature indirect.
In order to address these limitation, multi-resolution [29] and adaptive approaches [50, 9] have been suggested resulting in high image quality with less number of triangles. Mouse-based computer interface in the 3D sculpting system is unnatural and inef­ ficient. Avila [3] introduced a volume-based haptic sculpting system which allows the user to intuitively sculpt a volumetric data using a haptic interface. Later work includes [29, 34] which proposed some advanced features. 29 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. Chapter 3 Implicit Surface Representation We define and describe properties of implicit surface representation on which our haptic rendering algorithm is based. 3.1 Definition The implicit representation of the external surface S of an object is described by the following implicit equation [8] S = {(x, y , z) G Rs\f(x, y, z) = 0} where / is the implicit function (also called potential), and (x , y, z) is the coordinate of a point in 3D space. Value of f(p ) indicates the proximity between point p and the surface. If the poten­ tial value is 0, then the point(x', y, z) is on the surface. The set of points for which the potential value is 0 defines the implicit surface. If the potential is positive, then the point(x, y, z) is outside the object(red points in figure 3.2-a). If f(x , y, z) < 0, then the point (x, y, z) is inside the surface(green points in figure 3.2-a). In our algorithm, we use discrete potentials in a 3D regular grid like a volumetric representation. The potential value of each point in a grid is a signed scalar value which indicates the proximity to the surface. Potential values inside a cube of the grid is computed using a trilinear interpola­ tion between the eight values at the comer of the cube. Now, the inside/outside property 30 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 
of the potential function makes the collision detection between the tool tip of the haptic display device and the implicit surface trivial, since we know (at fixed computational cost) the sign of the potential.

Figure 3.1: Implicit surface properties

3.2 Surface normal

The surface normals of an implicit surface can be obtained from the gradient of the implicit function as follows:

\nabla f = \left[ \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right] \quad (3.1)

The gradient \nabla f is the vector of partial derivatives of the implicit function, and the surface normal vector is n = \nabla f / \|\nabla f\| [8].

The surface normal at a point inside the surface can be computed by interpolating the gradients of the 8 neighboring grid points around the point. This property is crucial to achieve smooth changes of the force direction in our algorithm.

3.3 Closest point transform

The potential value of each point is pre-computed using the closest point transform (CPT) [41]. The CPT converts an explicit representation of a geometric surface into an implicit one. A fast algorithm for computing the closest distance was proposed by Mauch [41]. The algorithm computes the closest point on a surface and its distance by solving the Eikonal equation using the method of characteristics. The computed distance is accurate, and the complexity is linear in the number of grid cells and in the surface complexity.

In our algorithm, Mauch's CPT algorithm is used to generate the potential value of each point. The user can select the resolution of the grid depending on the surface complexity of the object (see Figure 3.2), or depending on the computational power available.

Figure 3.2: Closest point transform (CPT) (a) a geometric model with mesh (b) volumetric implicit surface representation converted using CPT (c) hybrid surface representation
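The trilinear interpolation of grid potentials and the resulting inside/outside test can be sketched in a few lines. The grid-cell layout and function names below are my own, not from the dissertation:

```python
def trilinear(c, x, y, z):
    """Trilinearly interpolate the 8 corner potentials c[i][j][k]
    (i, j, k in {0, 1}) at local cell coordinates x, y, z in [0, 1]."""
    def lerp(a, b, t):
        return a + (b - a) * t
    # interpolate along x, then y, then z
    c00 = lerp(c[0][0][0], c[1][0][0], x)
    c10 = lerp(c[0][1][0], c[1][1][0], x)
    c01 = lerp(c[0][0][1], c[1][0][1], x)
    c11 = lerp(c[0][1][1], c[1][1][1], x)
    return lerp(lerp(c00, c10, y), lerp(c01, c11, y), z)

def is_inside(c, x, y, z):
    """Negative potential means the point is inside the surface."""
    return trilinear(c, x, y, z) < 0.0
```

In the real system the eight corner potentials come from the precomputed CPT grid; here a cell whose -x corners are inside (potential -1) and whose +x corners are outside (+1) illustrates the sign test.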
Chapter 4
Implicit-based Haptic Rendering Model

In this chapter we give a detailed presentation of our haptic model, including collision detection, force generation, and surface properties like friction and haptic texture.

4.1 Collision detection

Using the implicit representation, collision detection becomes trivial due to the inside/outside property. We can obtain the proximity to the surface by interpolating the potential values of the 8 neighboring grid points around the tool tip. If the potential becomes 0 or changes sign, a collision is detected.

Algorithm for collision detection
1. obtain the current position of the tool tip.
2. convert the position from local coordinates into the 3D grid space.
3. compute the potential value at the current position by interpolating the potential values of its 8 neighbors in the 3D grid.
4. if the potential value = 0 or the sign of the value has changed, then a collision has occurred.

The complexity is constant, since we are using a regular grid, and is independent of the resolution of the grid. Unlike geometric haptic rendering algorithms, the complexity of collision detection does not depend on the mesh complexity of geometric models.

4.2 Friction-less model

4.2.1 Force direction

As mentioned before, penalty-based approaches [39, 40] have many limitations, such as finding the nearest surface, strong force discontinuity, and push-through of thin objects. Constraint-based approaches [62, 52] overcome these problems to some extent. These approaches, however, still suffer from force discontinuity (see figure 2.7). The force discontinuity generally occurs when the direction and/or magnitude of the force changes suddenly around volumetric boundaries such as edges on the surface.
The force discontinuity is a crucial problem in a haptic rendering algorithm, since the human sense of touch is sensitive enough to notice even small force discontinuities.

In the implicit surface representation, we can obtain smooth surface normals as the tool tip moves along the surface. When the tool tip is inside the surface, the position of the tool tip lies on a certain isosurface inside the real surface. The isosurface works like an inner constraint of the surface. When the user touches the surface of an object, the algorithm first computes the gradient at each of the 8 grid points neighboring the tool tip. Then the gradient at the position of the tool tip is computed by interpolating the neighbors' gradients (blue arrows in figure 4.1). The resulting gradient is equal to the surface normal at a virtual contact point on the surface and becomes the direction of the force (red arrows in figure 4.1).

Figure 4.1: Computing the normal vector (red arrow) at each position from the neighbors' gradients (blue arrows) in a 2D grid

Algorithm for computing the force direction
1. use the position of the tool tip obtained during collision detection.
2. compute the gradients of the potential values for the 8 neighbors around the tool tip.
3. obtain the normal vector at the current position by interpolating the 8 gradients.
4. normalize the normal vector.

This algorithm is constant-time and independent of the shape complexity of geometric models and of the resolution of the 3D grid representing the volumetric data, since the computation is performed locally on the 3D regular grid instead of on the geometric model.

The gradients of the potential values can be computed using the central difference approximation for the partial derivatives:

\frac{\partial f}{\partial x} = \frac{f_{x+d} - f_{x-d}}{2d} \quad (4.2)

\frac{\partial f}{\partial y} = \frac{f_{y+d} - f_{y-d}}{2d} \quad (4.3)

\frac{\partial f}{\partial z} = \frac{f_{z+d} - f_{z-d}}{2d} \quad (4.4)
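The central differences of Eqs. 4.2-4.4 and the normalization of the resulting gradient can be sketched as follows, sampling a continuous potential f at spacing d (function names are mine):

```python
def gradient(f, p, d=1e-3):
    """Central-difference approximation of the gradient of the potential f
    at point p = (x, y, z), as in Eqs. 4.2-4.4."""
    x, y, z = p
    return (
        (f(x + d, y, z) - f(x - d, y, z)) / (2 * d),
        (f(x, y + d, z) - f(x, y - d, z)) / (2 * d),
        (f(x, y, z + d) - f(x, y, z - d)) / (2 * d),
    )

def unit_normal(f, p):
    """Force direction: the normalized gradient n = grad(f) / |grad(f)|."""
    gx, gy, gz = gradient(f, p)
    mag = (gx * gx + gy * gy + gz * gz) ** 0.5
    return (gx / mag, gy / mag, gz / mag)
```

For the plane f(x, y, z) = z the normal is (0, 0, 1) everywhere, matching the intuition that the force points straight out of a flat surface.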
Figure 4.2: Two examples of the change in force direction (a) penetrating deeply into the volume of an object (b) moving toward a volumetric boundary

The next section shows how to find the exact virtual contact point in the direction of the force, so that the force magnitude is consistent with the surface we have at hand.

4.2.2 Force magnitude

In previously introduced volume haptic algorithms [3, 31, 36], the force magnitude has been approximated using the potential value. However, the potential value may not be proportional to the distance to the virtual contact point on the surface. This occurs, for instance, on the thin convex and concave surfaces of rugged objects (see figure 1.3(a)). As a result, the user usually feels the surface as smoother than it looks.

In our algorithm, we first find the virtual contact point on the surface in order to determine the magnitude of the force. This means that the force magnitude is not a function of an arbitrary potential value. The virtual contact point is constrained by, and moves along, the surface, just as in the constraint-based approach for the geometric representation (see Figure 1.3-b). The contact point is found as the intersection point between the surface and a ray along the computed force direction. The position of that point can be quickly calculated by binary search. As the tool tip is usually very close to the surface, the computation required is extremely simple (usually only a few steps along the ray suffice).

Algorithm for computing the force magnitude
1. F_d ← the direction of the force
2. I_p, C_i ← the initial position of the tool tip
3. P ← potential value at I_p
4. F_a ← 0 // the magnitude of the force
5. C_o ← C_i
6. while( P < 0 )
7.   C_o ← C_i // save the previous position
8.   move C_i incrementally by a small amount along F_d
9.
  compute P at the new C_i using trilinear 3D interpolation
10. if( P > 0 )
11.   then C_i ← binary_search(C_o, C_i) // find the contact point on the surface
12. F_a ← I_p − C_i

Usually, the force magnitude can be computed within a couple of steps, depending on the step size. This algorithm is independent of the shape complexity of geometric models and of the resolution of the 3D grid representing the volumetric data.

To find the contact point between C_0 and C_1 on the surface, we use a binary search scheme as follows:

Algorithm for binary search
1. C_1 ← a position outside the surface
2. C_0 ← a position inside the surface
3. M_p ← (C_0 + C_1)/2 // find the middle point
4. P ← potential value at M_p
5. N ← 0 // the number of iterations
6. while( ||P|| > ε and N < n ) // ε and n are thresholds
7.   increase N by 1
8.   if( P > 0 )
9.     then C_1 ← M_p
10.    else C_0 ← M_p
11.   M_p ← (C_0 + C_1)/2
12.   P ← potential value at M_p
13. return M_p

The haptic performance depends on the two thresholds, ε and n, which are defined by the user.

4.2.3 Final force output by a spring-damper model

Once the virtual contact point is found, a spring-damper model [61] is used to compute the force vector that tries to keep the virtual contact point and the tool tip at the same position:

F = (p_c - p_t) \cdot k - V \cdot b \quad (4.5)

where F is the force vector, p_c is the coordinate of the contact point, p_t is the coordinate of the tool tip, k is the stiffness, V is the velocity of the tool tip, and b is the viscosity (see figure 4.3). The spring stiffness has a reasonably high value, and the viscosity prevents oscillations. This is similar to PD (proportional-derivative) control in robotics. By modulating the stiffness and viscosity, the user can vary the surface stiffness.
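A sketch of the contact-point search, reduced to one dimension along the force-direction ray for brevity, together with the spring-damper force of Eq. 4.5 (the names and the scalar simplification are mine, not the dissertation's implementation):

```python
def find_contact(potential, t0=0.0, step=0.1, eps=1e-6, max_iter=50):
    """March from t0 (inside the surface: potential < 0) along the force
    direction until the potential changes sign, then bisect to the surface."""
    t_in = t0
    t_out = t0 + step
    while potential(t_out) < 0.0:      # still inside: keep marching
        t_in = t_out
        t_out += step
    for _ in range(max_iter):          # bisect the bracketing interval
        mid = 0.5 * (t_in + t_out)
        p = potential(mid)
        if abs(p) < eps:
            return mid
        if p > 0.0:
            t_out = mid
        else:
            t_in = mid
    return 0.5 * (t_in + t_out)

def spring_damper_force(p_contact, p_tool, velocity, k=1.0, b=0.1):
    """Eq. 4.5: F = (p_c - p_t) * k - V * b, componentwise on tuples."""
    return tuple((pc - pt) * k - v * b
                 for pc, pt, v in zip(p_contact, p_tool, velocity))
```

As the text notes, the tool tip is usually very close to the surface, so the marching loop terminates after only a few steps before bisection refines the contact point.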
Figure 4.3: Spring-damper model

To provide more robustness, we also threshold the resulting magnitude. When the tool tip penetrates deeply inside the surface with a low stiffness, instability can occur on a very complex surface, since the isosurfaces become smoother as the depth of penetration increases.

4.2.4 Result

We have successfully simulated geometric models ranging from simple to complex (see Figure 4.4).

4.3 Adding friction to the model

If the model has no friction (viscosity), it creates the feeling of a very slippery surface, since the direction of the force vector is always perpendicular to the surface. Therefore, the algorithm should incorporate a friction term in order to simulate various surfaces with different friction properties.

Figure 4.4: Haptic display for geometric models: Genus (3328 triangles), Triceratops (5660 triangles), David (11820 triangles), Dinosaur (28136 triangles)

In our algorithm, friction is implemented by limiting the movement of the virtual contact point, as in the constraint-based method. The friction term takes into account a friction coefficient and the depth of penetration:

f_v = f_c \cdot (1 + \|p_{c+\Delta t} - p_{t+\Delta t}\| \cdot d) \quad (4.6)

Figure 4.5: The new virtual contact point due to friction

where f_v is the friction term, which ranges from 0.0 to 1.0, f_c is the friction coefficient, \|p_{c+\Delta t} - p_{t+\Delta t}\| is the penetration depth, and d is a depth constant. Due to the depth term in the equation above, the user feels a stronger retarding force as he/she moves deeper inside the object.
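The friction model of this section can be sketched as follows; the variable names are mine, and vectors are plain tuples:

```python
def friction_term(f_c, penetration_depth, d):
    """Eq. 4.6: f_v = f_c * (1 + penetration_depth * d)."""
    return f_c * (1.0 + penetration_depth * d)

def apply_friction(p_prev, p_curr, f_v):
    """Retard the movement of the contact point by the friction term f_v:
    F_t = p_curr - p_prev, F_r = -F_t * f_v, p_new = p_prev + F_t + F_r."""
    f_t = tuple(c - p for c, p in zip(p_curr, p_prev))  # tangential motion
    f_r = tuple(-x * f_v for x in f_t)                  # retarding term
    return tuple(p + t + r for p, t, r in zip(p_prev, f_t, f_r))
```

With f_v = 1 the contact point is fully retarded (it sticks), while with f_v = 0 it follows the tool freely, which matches the slippery frictionless case.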
By using the friction term f_v, we can compute the retarding force F_r and the new contact point as follows:

F_t = p_{c+\Delta t} - p_c \quad (4.7)

F_r = -F_t \cdot f_v \quad (4.8)

p_n = p_c + (F_t + F_r) \quad (4.9)

where F_t is the tangential force, p_{c+\Delta t} is the current contact point coordinate, p_c is the previous contact point coordinate, F_r is the retarding force, and p_n is the new position after applying friction (the green point in figure 4.5). The new position p_n, however, may not lie on the surface. We have to find the new contact point (the red point in figure 4.5), which lies on the surface and intersects a ray along the new surface normal vector (p_n - p_{t+\Delta t}). The final force is then calculated using equation 4.5.

4.4 Offset surface for thin objects

4.4.1 New offset surface for thin objects

In penalty methods, if a 3D model does not have sufficient internal volume, the haptic system cannot generate enough constraint force to prevent the tool tip from passing through the model. Similarly, this problem can occur in our approach. Constraint-based approaches [62, 52] address this problem by limiting the movement of the VCP to the surface, but they introduce a strong force discontinuity, resulting in the feeling of a bumpy surface around volume boundaries such as edges and vertices in geometric models.

Figure 4.6: Haptic simulation on an object (Galleon: 8794 triangles) with thin volume using an offset surface

In order to give thin objects sufficient internal volume without force discontinuity, we use an offset surface, which represents an isosurface with a positive offset from the implicit surface. The additional volume between the offset surface and the original surface allows the system to generate an appropriate constraint force for thin objects.
Note that the system uses the surface normal as the force direction, and the surface properties at the closest point on the original surface from the tool tip (red circles in Figure 4.7), to simulate the original surface as accurately as possible. The closest point is also used to represent the visual contact point on the surface, and to update the texture map and material map when the user performs haptic editing (see sections 5.1, 5.2) on the offset surface. The force magnitude is proportional to the distance between the physical tool tip and the VCP on the offset surface (grey and blue circles, respectively, in Figure 4.7(a)). The offset surface may smooth out small dents on the surface, but the system provides reasonable haptic fidelity at a high resolution of the volumetric implicit surface representation.

Figure 4.7: Offset surface to simulate thin objects (a) constraint force based on an offset surface (b) moving the VCP onto the original surface

4.5 Magnetic surface

In previous work, virtual fixtures [47] were used to assist the operator as guides in the virtual environment by restricting the motion of the end-effector. These help the operator execute tasks quickly and precisely.

Similarly, the force field in the Haptic Buck system [10] attracts the haptic device to the closest control handle to assist the user in performing a designed control task in the 3D virtual environment. In addition, this system forces the haptic device to follow the mechanism trajectory by constraining motion.
Figure 4.8: The magnetic surface forces the tool tip to keep contact with the surface

We propose a magnetic surface, which attracts the tool tip to the closest point on the surface whenever the tool tip is within a magnetic field. Unlike the two previous systems, the purpose of the magnetic surface is to force the tool tip to stay in contact with the surface while the user explores complex 3D models. It helps the user, especially visually impaired people, to feel the shape of a 3D model without losing contact with the surface.

The magnetic field is created in a narrow band between the offset surface and the original surface (hatched area in figure 4.8). The direction of the magnetic force is determined by the surface normal at the position of the tool tip (represented by arrows in Figure 4.8). Its magnitude is proportional to the distance between the tool tip and the closest point on the original surface (yellow area in Figure 4.8). The user can adjust the range and strength of the magnetic force according to the application.

4.6 Merging multiple objects

Complex objects can be constructed by merging (overlapping) multiple objects. When the tool tip travels around regions where object intersections occur, the virtual contact point should be smoothly shifted onto the surface of the new object. This transition process is not trivial in previous algorithms.

In our algorithm, when multiple objects are combined, their implicit representations are also merged, resulting in a single implicit representation in a 3D grid, provided all implicit representations have the same grid resolution. Computing the potential values for the combined implicit representation M requires only the very simple equation 4.10.
P(x, y, z) \text{ in } M = \min_{i=1,\dots,n} \{ P(x, y, z) \text{ in } M_i \} \quad (4.10)

The potential value of a point in the 3D grid is represented by P(x, y, z), where (x, y, z) is the index of the point, and each of the n objects is defined by M_1, M_2, ..., M_n. P(x, y, z) in M is the minimum value among P(x, y, z) in M_1, P(x, y, z) in M_2, ..., P(x, y, z) in M_n.

Figure 4.9 shows an example of this merging. By merging two implicit representations (M_1 in Figure 4.9a and M_2 in Figure 4.9b, respectively), we obtain a new implicit representation, M, which represents a new implicit surface. After merging, the responsive force for multiple objects is computed from the new implicit surface representation in the same way as for a single object (see Figure 4.10). Furthermore, when the tool tip travels over an area where multiple objects overlap, we do not need to handle the transition from one object to the next.

Figure 4.9: Merging multiple implicit surface representations (a) potential values in the implicit surface representation of the first object, M_1 (b) potential values in the implicit surface representation of the second object, M_2 (c) potential values in the implicit surface representation of the merged object, M

Figure 4.10: Merging two objects (a) the first object (b) adding a handle object to the first object

4.7 Implicit-based haptic texturing

Haptic texturing is the term used to describe the way we can simulate surface roughness without requiring increased geometric complexity. It can enrich the user's interaction with a haptic device, just as graphical texture enhances visual realism. In previously introduced algorithms, haptic textures are implemented by modulating the surface friction and/or perturbing local surface normals without modifying the original surface.
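Equation 4.10's voxelwise minimum is all the merge of Section 4.6 requires; a sketch over flat potential arrays (helper names are mine):

```python
def merge_potentials(*grids):
    """Eq. 4.10: the merged potential at every grid point is the minimum
    of the potentials of the individual objects (a CSG-style union)."""
    if not grids:
        raise ValueError("need at least one grid")
    n = len(grids[0])
    assert all(len(g) == n for g in grids), "grids must share a resolution"
    return [min(values) for values in zip(*grids)]
```

Because a negative potential means "inside", taking the minimum makes the merged object's interior the union of the individual interiors, so the force computation needs no special handling at overlaps.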
As mentioned in section 2.4, previous algorithms for haptic texturing require additional computation, may not render the exact surface, and can cause the system to become unstable. In addition, it is unclear how to extend them to handle multiple intersecting surfaces or to support additional surface properties, such as friction.

In our algorithm, haptic texturing is simulated by applying the texture value directly to the potential value of each point in the 3D grid, without any preprocessing or additional computational cost. Furthermore, we can apply friction to a textured surface without any modification of the algorithm.

4.7.1 Changing the geometry of an implicit surface

If the potential value of a grid point increases, the distance from the point to the surface increases. Similarly, the distance decreases as the potential value decreases. As a result, the geometry of the implicit surface changes. A simple example of applying texture values in 2D is shown in figure 4.11. As shown in figure 4.11(a), three grid points with the same potential value, 0.5, represent a flat implicit surface. After the first point's potential value is changed from 0.5 to 0.2, the surface around that point moves up toward the point, due to the shortened distance, in figure 4.11(b). In the same way, after the third point's potential value is changed from 0.5 to 0.9, the surface moves down, away from the point.

Figure 4.11: Modulating the potential value (a) an implicit surface with a flat side (b) a textured implicit surface obtained by modulating the potential values of the flat surface

After modulating the potential values, we obtain a new implicit surface carrying the texture information, while the geometric model is not modified (see the textured surface in figure 4.12). As a result, the user can feel a textured implicit surface with the new geometry. We call this approach "implicit-based haptic texturing".

Figure 4.12: Implicit-based haptic texturing haptically renders a modified implicit surface by directly modulating the potential values

This approach is a new idea in haptic texturing and has the following benefits:

• It is a novel way to simulate haptic textures, since the geometry of the implicit surface is changed explicitly and expresses the textured surface.

• No modification of the existing algorithm is necessary in order to accommodate the new texture features of the surface, since the direction and magnitude of the force are computed dynamically as the tool tip moves along the surface, whether or not the surface has been modified by additional textures.

• There is no additional computational cost or memory requirement imposed by haptic texturing. Unlike the algorithm by Ho et al. [27, 6], we do not need to take into account new collision situations or algorithms to compute the magnitude and direction of the force, thanks to the explicit modification of the geometry of the implicit surface.

4.7.2 Examples

Implicit-based haptic texturing is implemented by modulating the potential value of each grid point. In order to simulate synthetic haptic texturing, we need a mapping function to generate texture fields. For instance, a simple noise function can be used to simulate a rugged surface, and fractals can be used for modelling natural textures.

We demonstrate two different synthetic texture methods using implicit-based haptic texturing: figure 4.13(a) simulates a rugged surface by applying Gaussian noise, and figure 4.13(b) a surface with a lattice-pattern texture.
In the figures, grid points in light blue have new potential values reflecting the haptic texture.

4.8 An octree to reduce the memory requirement

In our algorithm, the memory complexity of the volumetric representation is O(n^3), where n is the size of one side of the 3D grid. However, only grid points around the surface are involved in the force computation; the others are as useless as the empty portions of the 3D grid. In order to reduce the memory requirement, we can use an octree, which is a very efficient data structure for storing 3-dimensional data of any form. Using an octree avoids representing the empty portions of the space, saving a great deal of memory. In addition, searching and inserting in such an octree is very fast. Based on the octree algorithm, the volumetric representation can be recursively decomposed into small elements of various sizes, which are placed at the leaves of the resulting structure. Figure 4.14 shows an octree for the volumetric implicit surface representation of a horse model.

4.9 Implementation

In our haptic system, the haptic and visual processes run at the same time on a PC with a 1 GHz Pentium III CPU and an E&S AccelGALAXY 16 MB video card. The haptic process updates the force computation at 1 kHz, thanks to the fast collision detection, determination of the surface normal, and so on. We use the GHOST library from SensAble to send the force directly to the 3-DOF PHANTOM haptic device at 1000 Hz. A sample program is available at http://www-grail.usc.edu/Haptic.

Figure 4.13: Implicit-based haptic texture (a) Gaussian noise (b) Lattice pattern
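The Gaussian-noise texturing of Section 4.7.2 amounts to perturbing the stored potentials of the grid points flagged as near the surface. A sketch under my own assumptions (flat-array layout, helper names, deterministic seed, and amplitude are all mine):

```python
import random

def apply_haptic_texture(potentials, near_surface, amplitude=0.05, seed=42):
    """Add Gaussian noise to the potential of grid points near the surface,
    changing the implicit surface geometry as in Section 4.7 (Fig. 4.13a)."""
    rng = random.Random(seed)
    out = list(potentials)          # leave the input grid untouched
    for i, near in enumerate(near_surface):
        if near:
            out[i] += rng.gauss(0.0, amplitude)
    return out
```

A lattice pattern (Fig. 4.13b) would simply substitute a periodic function of the grid index for the `rng.gauss` call; the rendering algorithm itself is unchanged either way.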
Figure 4.14: Visualization of a level-7 octree storing the volumetric implicit surface representation of a horse model

Chapter 5
Haptic Decoration and Material Editing

In this chapter, we describe haptic decoration, which allows the user to paint directly on the surface and then sense the surface variation of the painted image. In addition, material editing allows the user to edit surface properties like friction and stiffness directly on the surface and then feel the edited surface properties.

5.1 Haptic Decoration

5.1.1 Haptic Painting

Traditional texture mapping suffers from texture distortion when a texture image in 2D space is mapped onto a virtual object in 3D space. The 2D texture can be shrunk or stretched over complex-shaped areas of the virtual object if the user edits the 2D image by hand (see Figure 5.1b). Haptic painting, meanwhile, can reduce this texture distortion through direct painting on the 3D object (see Figure 5.1a).

Figure 5.1: Generating a texture on a 3D object (a) by hand or an automatic method (b) by haptic painting. This figure comes from [32]

We use a volume-fill algorithm to find the 3D triangles to be painted within the brush volume. 2D triangles in texture space are rasterized during haptic painting in a similar way as in inTouch [23]. A simple brush function is used to update each texel in the texture map during the rasterization. We explain these two points in more detail next.

Finding 3D triangles to be rasterized

The 2D triangles being painted in texture space are determined by the corresponding 3D triangles which fall within the brush volume in 3D space. In our system, the volumetric
representation is used to find the 3D triangles to be painted. Each grid point stores the index of its nearest face, which is computed while building the implicit surface representation, before the haptic simulation. These face indices provide the connection between the implicit surface and the geometric model.

In order to find the seed face from which to start the rasterization, the system performs an intersection test between a line segment from the tool tip position to the VCP and the faces in the grid cell containing the VCP (see Figure 5.5c). Then, the system walks through all faces within the brush volume, starting from the seed face. If a face is within the brush volume and is not yet in the list of faces being painted, the face is added to the list. This process is repeated recursively until no new face is found (this is a flood-fill algorithm).

However, the flood-fill algorithm cannot find all triangles within the brush volume when the tool tip is moving over the overlapped area of multiple objects. For the volume-fill, the system checks all faces indicated by the grid points (white points in Figure 5.2) within the brush volume. If it finds a new face within the brush volume, this face is used as a new seed face to perform another flood-fill on the new object (hence the name volume-fill algorithm). As a result, we obtain a list containing all faces within the brush volume.

Figure 5.2: A volume-fill algorithm to find 3D triangles within the brush volume (red circle representing the brush volume, white points indicating the grid points within the brush volume, yellow area being painted)

Triangle rasterization and brush function

The system finds the 2D triangles in the texture map using the list of faces being painted. These 2D triangles are rasterized one by one using a standard scan conversion (see Figure 5.5b).
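The flood-fill over faces described above can be sketched with a plain stack and an adjacency map (the data layout is mine; the real system walks the mesh connectivity):

```python
def flood_fill_faces(seed, adjacency, in_brush):
    """Collect all faces reachable from the seed face through adjacent
    faces that lie within the brush volume (Section 5.1.1)."""
    painted = set()
    stack = [seed]
    while stack:
        face = stack.pop()
        if face in painted or not in_brush(face):
            continue                       # already painted or outside brush
        painted.add(face)
        stack.extend(adjacency.get(face, ()))
    return painted
```

For a chain of faces 0-1-2-3 where only faces 0-2 lie within the brush, the fill stops at face 2; the volume-fill variant would then restart this routine from each unvisited face indicated by a grid point inside the brush.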
The color of each texel is determined by a function of the brush size, brush color, fall-off rate, background color, and distance to the center of the brush volume during the triangle rasterization. Previous 3D painting systems [25, 2, 23] have suggested similar brush functions. Equation 5.1 is used to compute the weighting factor a \in [0, 1] for standard alpha blending between the current texel color and the brush color:

a = 1 - \left( \frac{R_t}{R_b} \right)^f \quad (5.1)

where R_t is the distance from the 3D position corresponding to the texel to the center of the brush, R_b is the radius of the brush volume, and f is a fall-off rate which is specified by the user. The size of the brush volume is determined according to the amount of applied force or a user-specified size. The 3D position of each texel is computed from the 2D triangle containing the texel using barycentric coordinates. The blended color of a texel is computed by Equation 5.2:

C_f[R, G, B] = C_b[R, G, B] \cdot a + C_t[R, G, B] \cdot (1 - a) \quad (5.2)

where C_b[R, G, B] is the brush color (foreground) and C_t[R, G, B] is the current texel color (background).

Barycentric coordinates are a way to represent a point in the plane with respect to a given triangle (or, more generally, a polygon). They have had many applications in computer graphics, such as texture mapping, surface parameterization, ray tracing, and so on.

Figure 5.3: Barycentric coordinates

Barycentric coordinates are triples of numbers (a_1, a_2, a_3) corresponding to masses placed at the vertices (v_1, v_2, v_3) of a triangle \Delta v_1 v_2 v_3. These masses then determine a
point P, which is the centroid of the three masses and is identified with these coordinates (see figure 5.3).

To find the barycentric coordinates of an arbitrary point P, find a_2 and a_3 from the point Q at the intersection of the line v_1 P with the side v_2 v_3, and then determine a_1 as the mass at v_1 that will balance a mass a_2 + a_3 at Q, thus making P the centroid (figure 5.3(a)). Furthermore, the areas of the triangles \Delta v_2 v_3 P, \Delta v_3 v_1 P, and \Delta v_1 v_2 P are proportional to the barycentric coordinates a_1, a_2, and a_3 of P (figure 5.3(b)). Equation 5.3 describes this property:

P = \sum_{i \in [1..3]} a_i v_i \quad (5.3)

\sum_{i \in [1..3]} a_i = 1 \quad (5.4)

An example created by our haptic painting system is shown in Figure 5.5.

Figure 5.4: Examples of haptic painting (a) the pottery created by the haptic painting system and its wire-frame (b) the pottery model and its implicit surface representation

Result

Here are some examples (Figure 5.5). In particular, the pottery model in Figure 5.5 is painted on the offset surface; the user cannot tell that the tool tip moves on the offset surface instead of the original surface. For the haptic painting, we use a parameterized texture map (512×512).

5.1.2 Image-based haptic texturing

We use implicit-based haptic texturing [34] to let the user sense the surface variation of a 2D image (we call this image-based haptic texturing). When image-based haptic texturing is applied, all potential values at grid points around the surface are updated at once, according to the corresponding texture values. In order to obtain the texture value of each grid point, the system first finds the closest 3D position on the face closest to the grid point. Then, the 3D position is mapped to a 2D texel in the texture map using barycentric coordinates. Finally, the texture value is obtained from the grayscale intensity of the texel. After updating the potential values, we get a new implicit surface geometry on which the user correctly senses the surface variation.
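The brush function of Eqs. 5.1-5.2 and the barycentric evaluation of Eqs. 5.3-5.4, which both the painting and the texel lookup above rely on, can be sketched together (function names are mine):

```python
def brush_alpha(r_t, r_b, falloff):
    """Eq. 5.1: a = 1 - (r_t / r_b)**falloff, clamped to [0, 1]."""
    return max(0.0, min(1.0, 1.0 - (r_t / r_b) ** falloff))

def blend(brush_rgb, texel_rgb, a):
    """Eq. 5.2: alpha-blend the brush color over the current texel color."""
    return tuple(b * a + t * (1.0 - a) for b, t in zip(brush_rgb, texel_rgb))

def barycentric_point(vertices, coords):
    """Eq. 5.3: P = sum(a_i * v_i), with the a_i summing to 1 (Eq. 5.4)."""
    assert abs(sum(coords) - 1.0) < 1e-9
    return tuple(sum(a * v[k] for a, v in zip(coords, vertices))
                 for k in range(len(vertices[0])))
```

At the brush center (r_t = 0) the brush color replaces the texel entirely, and at the brush radius (r_t = r_b) the texel is untouched, which is the fall-off behavior the text describes.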
Unlike previous systems [27, 52], this method is independent of the shape complexity and of the texture mapping method. The system also avoids sudden changes in force direction and magnitude, since the force is generated from the textured geometry instead of the original geometry (see Figure 5.6). Haptic painting is performed in the visual display loop, while image-based haptic texturing is simulated in the fast haptic loop without the delay that often causes significant texture changes to be missed. Furthermore, no additional computational cost or algorithmic change is required.

5.2 Material editing

In computer graphics, a texture map is used to make a surface more realistic by locally changing its color or appearance. Similarly, the user may expect haptically different surface properties when touching a graphically textured model, since different materials typically have different surface properties such as stiffness and friction. However, in most previous haptic systems, a single surface property is applied over the whole 3D model. Recently, Pai et al. [46] suggested a contact texture including a friction map stored at each vertex of a 3D model. Friction values are measured by scanning the response of real objects with a robot system. However, their approach does not support editing the friction map.

We first suggest a material editing technique to edit and simulate a material map (see Figure 5.7). The material map contains local surface properties such as stiffness and friction coefficients, just as a texture map contains texture values. In our method, the user performs haptic painting and material editing directly on the surface at the same time. While editing material properties, local material properties are saved at grid points within the brush volume in a 3D grid instead of at the vertices of geometric models (see Figure 5.7b).
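A minimal sketch of such a material map: properties painted at grid points, then queried by trilinear interpolation over the 8 corners of the cell containing the virtual contact point, as Section 5.2 describes. The class layout, dictionary storage, and default values are our assumptions, not the dissertation's implementation.

```python
def trilerp(c, fx, fy, fz):
    """Trilinearly interpolate the 8 corner values c[i][j][k] (i, j, k in {0, 1})
    at the fractional cell position (fx, fy, fz)."""
    def lerp(a, b, t):
        return a + (b - a) * t
    x00 = lerp(c[0][0][0], c[1][0][0], fx)
    x10 = lerp(c[0][1][0], c[1][1][0], fx)
    x01 = lerp(c[0][0][1], c[1][0][1], fx)
    x11 = lerp(c[0][1][1], c[1][1][1], fx)
    return lerp(lerp(x00, x10, fy), lerp(x01, x11, fy), fz)


class MaterialMap:
    """Surface properties stored at grid points rather than mesh vertices."""

    def __init__(self, default_stiffness=1.0, default_friction=0.0):
        self.points = {}  # (i, j, k) -> (stiffness, friction)
        self.default = (default_stiffness, default_friction)

    def paint(self, ijk, stiffness, friction):
        """Assign local properties at one grid point inside the brush volume."""
        self.points[ijk] = (stiffness, friction)

    def query(self, cell, frac):
        """(stiffness, friction) at the VCP: interpolate each property over
        the 8 corners of `cell` at fractional position `frac`."""
        props = []
        for axis in (0, 1):
            c = [[[self.points.get((cell[0] + i, cell[1] + j, cell[2] + k),
                                   self.default)[axis]
                   for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
            props.append(trilerp(c, *frac))
        return tuple(props)
```

Painting one corner of a cell and querying the cell center then yields one eighth of the painted deviation blended into the defaults, which is the fast approximation the text describes.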
Storing material properties in the grid gives several advantages in haptic rendering. The system simulates material properties more accurately, since the volumetric representation is a 3D regular grid, whereas polygons in geometric models typically have irregular shapes and sizes.

After editing the material map, the user can immediately sense the assigned local surface properties on the surface. As the tool tip moves on the surface, the system computes the corresponding friction and stiffness values at the VCP by linearly interpolating the material properties at the 8 grid points of the cell containing the VCP. This interpolation gives a fast and reasonable approximation. Because it is based on the volumetric representation, the material editing technique is independent of the geometric model. The material map can easily be extended to other haptic properties such as haptic sound and surface roughness.

5.3 Image-based 3D embossing and engraving

Using haptic techniques such as haptic painting and image-based haptic texturing, we can easily obtain an embossed or engraved 3D model from the painted image. The user creates an image directly on a 3D model using our haptic painting technique in a natural way (the first step in Figure 5.8). The system then generates a height field from the grayscale of the 2D image by directly modulating the potential values around the painted image, as in image-based haptic texturing (the second step in Figure 5.8). This explicitly changes the geometry of the implicit surface, resulting in a new implicit surface on which the user feels the surface variation due to the 2D image.
A textured implicit surface can be converted into a geometric model by the Marching Cubes algorithm [37]. Using this process, we obtain an embossed or engraved geometric model right on the painted image (the third step in Figure 5.8). Figure 5.9 shows these steps. The painted model (Figure 5.9b) is created from an original model (Figure 5.9a) using the haptic painting technique. In Figure 5.9c, blue points show the height field modulated during image-based haptic texturing. Finally, we obtain the embossed model converted from the textured implicit surface.

When the height field from the 2D image is built, the system records the texture coordinate of each cell point in the volumetric representation. These texture coordinates are used to compute the per-vertex texture coordinates of the new geometric model by linearly interpolating the uv values of the two neighboring cell points, since in the Marching Cubes algorithm each vertex of the new geometric model (white points in Figure 5.10) is either an intersection point between two neighboring cell points with different sign (red and blue points in Figure 5.10) or a cell point itself if its potential value is zero.

As a result, we can find the UV texture coordinate of each vertex without surface parameterization, resulting in a quickly generated and undistorted texture right on the embossed or engraved surface. Figure 5.11 shows two embossed models: painted models on the left-hand side and embossed models on the right-hand side.

Figure 5.5: Examples of haptic decoration. (a) A panda on a model (3072 triangles). (b) A self-portrait on a pottery (4800 triangles). (c) A decorated pottery. (d) An Asian-style plate.
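The per-vertex UV computation of Section 5.3 can be sketched as follows: a Marching-cubes vertex lies on a cell edge at the zero crossing of the potential, so its uv is the same linear interpolation of the two corner uvs. The function name and the handling of a zero-potential corner are our assumptions.

```python
def edge_uv(pot0, pot1, uv0, uv1):
    """UV for a Marching-cubes vertex on the edge between two cell corners
    whose potentials pot0 and pot1 have different sign.  The vertex sits at
    the zero crossing t = pot0 / (pot0 - pot1), so its uv is the linear
    interpolation of the corner uvs at the same parameter."""
    if pot0 == 0.0:
        return uv0  # the corner itself lies on the surface
    t = pot0 / (pot0 - pot1)
    return tuple(u0 + (u1 - u0) * t for u0, u1 in zip(uv0, uv1))
```

With corner uvs (0.3, 0.3) and (0.7, 0.7) and potentials of equal magnitude and opposite sign, the vertex uv is (0.5, 0.5), matching the example of Figure 5.10.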
Figure 5.6: A previous image-based haptic texturing method and our approach (dotted line)

Figure 5.7: Various haptic effects using material editing. (a) The user edits and feels the material properties while painting. (b) A closer view showing the mesh and the volumetric representation.

Figure 5.8: Image-based embossing and engraving

Figure 5.9: An example showing the embossing process. (a) An original model. (b) Haptic painting on (a). (c) Image-based haptic texturing on (b). (d) The embossed model.

Figure 5.10: An example of polygonization by the Marching Cubes algorithm (inside and outside points of the implicit surface, and vertex points of the geometric model)

Figure 5.11: Embossed models. (a) A texture-mapped asteroid model. (b) The embossed model of (a). (c) A painted tray model. (d) The embossed model of (c).

Chapter 6 A Volume-based Haptic Sculpting Technique

We developed a virtual sculpting system based on a volumetric implicit surface as an alternative to existing digital sculpting implementations. Our haptic rendering algorithm is integrated into the sculpting system to haptically render the implicit surface being sculpted and to intuitively manipulate the deformation.
6.1 Polygonization Method

A fast method for visualizing the implicit surface is required for interactive sculpting. Typically, implicit surfaces are visualized either by polygonizing the surface or by direct ray tracing. Ray tracing produces high-quality images but requires long rendering times, so to support interactive modelling we use a polygonal rendering method.

Uniform polygonization methods such as the Marching Cubes algorithm [37] suffer from large volume sizes and a resolution limited by a fixed sampling rate, even though many sculpting systems use them [20, 4, 16, 5]. The fixed sampling rate oversamples areas of low curvature and undersamples areas of high curvature. To achieve the desired accuracy, uniform polygonization often generates a 3D model with impractically many polygons (see Figure 6.1a,b).

To address these limitations of uniform polygonization, we employ an adaptive polygonization method [56, 57] that adapts to the surface geometry according to the surface curvature (see Figure 6.1c,d). The resulting mesh effectively represents sharp edges with fewer triangles than the Marching Cubes algorithm.

Figure 6.1: Comparison between the uniform and adaptive approaches. (a) Uniform polygonization by the Marching Cubes algorithm, sampled at 128x128x128 resolution. (b) Mesh data of (a). (c) An adaptive polygonization; the initial mesh is sampled at 64x64x64 resolution. (d) Mesh data of (c).

6.1.1 Adaptive method

Velho's adaptation method is easy to implement and can save both space and time efficiently.
The algorithm consists of two steps:

Algorithm for adaptive polygonization
1. Initial polygonization
(a) start with a coarse decomposition of the surface
(b) generate the initial mesh
(c) sample the edges of all cells in the initial mesh
2. Adaptive refinement
(a) for each cell, test the corresponding surface patch for flatness
(b) if the patch is not flat, then recursively subdivide the cell
(c) structure new cells by constructing internal edges
(d) sample all internal edges

In this algorithm, structuring and sampling are separated. Structuring is done along with a coarse sampling in the first step. The second step creates new sample points inside the initial mesh and then projects them onto the surface using the gradient of the implicit function to find more accurate points.

In the initial polygonization step, a uniform space decomposition is used to create the initial mesh that serves as the basis for adaptive refinement. The uniform polygonization algorithm is based on a simplicial space decomposition [22] using the classical Coxeter-Freudenthal space subdivision scheme (see Figure 6.2). Each sub-cell is tested to see whether its edges are intersected by the surface. The implicit surface may intersect the edges of a simplex in either three or four points, which results in two basic configurations: in the first case (left-hand side of Figure 6.3), a triangle is generated, and in the second case (right-hand side of Figure 6.3) a quadrilateral is generated, which is split into two triangles for adaptive refinement.

Figure 6.2: Coxeter-Freudenthal decomposition of the cube. Image from [56]

In the adaptive polygonization step, the cell structuring operation performs a simplicial decomposition.
It recursively subdivides a triangular cell into triangular subcells using information from its edges. If an initial triangle is not flat, new internal edges are created and adaptively sampled. For the adaptive refinement, edges are classified into two groups: simple and complex. A simple edge corresponds to a flat curve segment on the surface. A complex edge contains internal sample points at which subdivision is performed.

The combination of the types of a triangle's three edges produces four possible cases: three simple edges (the first triangle in Figure 6.4), two simple edges (the second triangle in Figure 6.4), one simple edge (the third triangle in Figure 6.4), and no simple edges (the fourth triangle in Figure 6.4). Except in the first case, the complex edges of the triangle are split. The refinement iterates until the desired accuracy is achieved. It works like a 2D refinement, so the second step is fast compared with other adaptive approaches.

Figure 6.3: Intersection of the surface with a simplex. Image from [56]

In this algorithm, the surface curvature is measured by the angle α between the surface normals n0 and n1 at the edge endpoints (see Figure 6.5). To find the closest point on the implicit surface from the edge midpoint, the algorithm employs a physically-based method resulting in good accuracy control:

Algorithm to find the closest point on the implicit surface
while( |f(x)| > ε )
    y ← f(x)
    x ← x − δ * sign(y) * ∇f(x)
    if( sign(y) ≠ sign(f(x)) ) then δ ← δ/2

In this pseudo-code, the point x moves towards the implicit surface by taking small steps δ in the direction of the gradient. Every time the point x crosses the surface, the step size is halved.
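Assuming a signed implicit function (positive outside, negative inside) with an outward-pointing gradient, the projection loop above can be made concrete in Python. The gradient normalization, the iteration cap, and all names are our additions, so this is a sketch rather than the dissertation's implementation; a unit sphere serves as the example surface.

```python
import math

def project_to_surface(f, grad, x, delta=0.1, eps=1e-6, max_iter=1000):
    """Walk x onto the implicit surface f = 0 by stepping along the gradient
    and halving the step size whenever the surface is crossed."""
    x = list(x)
    sign = lambda v: 1.0 if v >= 0.0 else -1.0
    for _ in range(max_iter):
        y = f(x)
        if abs(y) <= eps:
            break  # close enough to the surface: stop moving
        g = grad(x)
        n = math.sqrt(sum(c * c for c in g)) or 1.0
        # step against the gradient when outside (y > 0), along it when inside
        x = [c - delta * sign(y) * gc / n for c, gc in zip(x, g)]
        if sign(f(x)) != sign(y):
            delta *= 0.5  # crossed the surface: halve the step
    return x
```

For the unit sphere f(x) = |x| − 1, a point started at (2, 0, 0) or at (0.5, 0, 0) converges onto the surface within a few dozen iterations.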
If the point x is close enough to the surface, as measured by the value of f at x, it stops moving.

6.2 Sculpting mode

When a sculpting tool is applied to a local region of a virtual object, the potential values around that region are animated depending on the type, shape, and size of the sculpting tool. The geometric surface of the local region is then updated using the adaptive polygonization method. Sculpting is performed with a carving (pushing) tool to deflate the surface and an addition (pulling) tool to inflate it. Our system offers two sculpting modes: haptic sculpting and block sculpting. While sculpting the volumetric data, the animation of potential values is done by CSG (Constructive Solid Geometry) boolean operations.

6.2.1 Haptic sculpting mode

In this mode, the system haptically renders the change in surface geometry during pulling and pushing deformation by generating intermediate implicit surfaces. Most previous haptic sculpting systems [23, 17, 42], however, could not directly simulate the physical model being deformed, since the physical model update rate (20-30 Hz) is much slower than the haptic force update rate (1 kHz). In inTouch [23], the system first updates the mesh with the geometric change and then generates a backward force to move the tool tip to a contact position on the new surface, which feels like the surface popping up or down because of the sudden change in surface geometry. Other approaches [17, 42] used the initial position of the tool tip instead of the contact point to generate the force vector. During the deformation, the system turns off the restoring force and allows the user to move in any direction; instead, a Hooke's-law spring force is established between the current tool tip and the initial position to provide feedback, resulting in undesired changes in surface geometry.
To bridge the disparity between the physical model update frequency and the haptic update frequency, our system animates the implicit surface in the fast haptic loop to simulate intermediate physical models by modulating the potential values in the local region being sculpted (see Figures 6.6(b) and 6.7(b)). The geometric model is visually updated in the next graphical frame (see Figures 6.6(c) and 6.7(c)). The deformation is performed while the space bar on the keyboard is held down and stops when it is released. The speed of deformation is adjusted according to the force the user applies: if the user pushes or pulls hard against the surface, the speed of the animation increases in a natural way. The pseudo-code below shows how to update the potential values around the area being sculpted.

Algorithm to update potential values in haptic freeform sculpting mode
if( firstTime )
    compute target potential values tPot(p) at grid points p around the region to be sculpted
for each grid point p in the region being sculpted
    if( addingOperation )
        if( tPot(p) < oPot(p) )
            tmpValue = oPot(p) − tPot(p)
            oPot(p) −= tmpValue * timeStep
        if( tPot(p) > oPot(p) )
            oPot(p) = tPot(p)
    if( carvingOperation )
        tPot(p) *= −1
        if( tPot(p) > oPot(p) )
            tmpValue = tPot(p) − oPot(p)
            oPot(p) += tmpValue * timeStep
        if( tPot(p) < oPot(p) )
            oPot(p) = tPot(p)

A CSG boolean operation is used to compute the target potential value tPot(p) for each grid point p after adding (union operation) or carving (difference operation) material with the sphere tool. During a pushing deformation, the tool tip always penetrates the surface, producing a responsive force that resists pushing. However, pulling makes the tool tip leave the surface and does not generate a resisting force appropriately.
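A single grid point's update from the pseudo-code above can be sketched as a pure function. It follows the published rule (move the current potential toward the CSG target by a time-step fraction, clamping at the target), but the signature and the boolean `adding` flag are our assumptions.

```python
def animate_potential(o_pot, t_pot, time_step, adding=True):
    """One haptic-loop step of a grid point's potential toward its CSG target."""
    if adding:
        if t_pot < o_pot:                      # still material to add
            o_pot -= (o_pot - t_pot) * time_step
        else:
            o_pot = t_pot                      # reached (or passed) the target
    else:
        t_pot = -t_pot                         # carving flips the target's sign
        if t_pot > o_pot:                      # still material to carve
            o_pot += (t_pot - o_pot) * time_step
        else:
            o_pot = t_pot
    return o_pot
```

Calling this once per haptic frame moves each potential a fraction `time_step` of the remaining distance, producing the intermediate implicit surfaces the text describes.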
To simulate a feeling of pulling, we use the magnetic surface of Section 4, which forces the tool tip to stick to the surface as it leaves. This allows the user to add material on the desired area intuitively and accurately. Figure 6.8 shows some examples sculpted with the carving and addition tools in haptic sculpting mode.

6.2.2 Block sculpting mode

Sometimes force feedback hinders accurate, detailed sculpting, because it makes the sculpting tool hard to control precisely. In block sculpting mode, the system turns off the haptic force and allows the user to move freely in any direction in 3D space. To sculpt a virtual object in this mode, the user first moves the sculpting tool onto the desired region while imagining the shape to be sculpted. The sculpting tool, represented by a simple wire frame, shows the boundary of the shape to be sculpted. Then, when the space bar on the keyboard is pressed, material is added to or carved from the surface at once. In this mode, the system provides two sculpting tools: sphere and box. Target potential values at the grid points to be sculpted are computed by CSG boolean operations. Figure 6.9 shows some examples sculpted with the carving and addition tools in block sculpting mode, and Figures 6.10 and 6.11 show further examples sculpted with our system.

6.3 Octree data structure

If an array is used to store the potential values, it typically contains large empty regions that need not be represented. To reduce memory requirements and to manage the volume field dynamically, an octree data structure is employed, similar to the scheme used in [4]. When the octree is constructed, the volume is recursively subdivided until either the subdivided volume is empty or the subdivision has reached the leaf (maximum) level.
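A minimal sketch of such an octree: subdivide only the cells that a predicate marks as occupied (for instance, cells crossed by the narrow band around the surface), down to a maximum depth. The class and predicate names are ours, and the dissertation's structure additionally stores potential values and triangles at the leaves.

```python
class OctreeNode:
    """Sparse octree: empty regions stay as single leaves, occupied regions
    are recursively split into 8 children until the maximum depth."""

    def __init__(self, origin, size, depth, occupied, max_depth):
        self.origin, self.size = origin, size
        self.children = None
        if depth < max_depth and occupied(origin, size):
            half = size / 2.0
            self.children = [
                OctreeNode((origin[0] + i * half,
                            origin[1] + j * half,
                            origin[2] + k * half),
                           half, depth + 1, occupied, max_depth)
                for i in (0, 1) for j in (0, 1) for k in (0, 1)]

    def count_leaves(self):
        if self.children is None:
            return 1
        return sum(c.count_leaves() for c in self.children)
```

With a predicate that marks only cells containing the plane x = 0.25, a depth-2 tree over the unit cube keeps 36 leaves instead of the 64 of a full subdivision, which is where the memory saving comes from.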
The volumetric implicit surface is first sampled on a 3D regular grid at a certain resolution such as 128x128x128. At this point, the octree contains a narrow band of the potential field around the surface, which is necessary to generate the continuous constraint force for haptic simulation. Furthermore, if the thickness of the border region is very small, we obtain almost only values of −1 or 1, resulting in object-space aliasing [4].

The initial mesh is created from a volume field sampled at a coarse resolution such as 32x32x32 or 64x64x64 from the original implicit surface representation. The local initial mesh in each grid cell is subdivided locally into sub-meshes according to the shape complexity. All triangles in each cell (the local initial mesh and its sub-meshes) are saved into a leaf node of another octree, whose resolution is as low as the sampling rate of the initial mesh. This facilitates local mesh updates (adding and deleting) during the deformation. When changes are made locally, the mesh topology is not affected globally, since edges are shared by adjacent cells and are always subdivided at their split points. The initial mesh is not resampled unless the corresponding potential values change.

6.4 Mesh-based solid texturing

Traditional texture mapping cannot always avoid texture distortion when a texture image in 2D space is mapped onto a virtual object in 3D space: the 2D texture can be shrunk or stretched on complex areas of the object (see Section 5.1.1). This distortion can be reduced by parameterization methods and/or the artist's effort. However, these approaches are not suitable for free-form deformation, in which the topology of the geometry changes significantly in real time, causing unacceptable texture distortion.
To address this problem, we employ the solid texture method introduced by Perlin [48], which uses a procedural function to compute the color of each point instead of looking it up in a 2D texture image. With solid textures, shape and texture become independent, so the texture does not need to be fit onto the surface: if the shape is changed by adding or carving solid material, the appearance of the solid material changes accordingly. Solid textures can easily produce marble-like and wood-like appearances. Marble is defined as follows:

marble(X) = marbleColor(D(X) + a * turbulence(X, b)) (6.1)

turbulence(X, b) = (1/2) * Σ_{k=0}^{b} (1/2^k) * noise(2^k * X) (6.2)

where D is a function of X = (x, y, z) that controls the turbulence direction, a controls the amplitude, and b controls the frequency of the turbulence.

Several methods, such as ray marching [26, 49], have been introduced to render procedural textures, but they require high computational power (e.g., parallel graphics systems) and large storage for the 3D texture volume. In another approach, Carr et al. [11] introduced the solid map, which provides a view-independent method using an ordinary 2D surface texture map. The solid map transforms a model's polygons into 2D texture space without overlap; the system then rasterizes the solid texture coordinates as colors directly into the atlas, and the solid texture is mapped back onto the 3D object using standard texture mapping (see Figure 6.12). This method may need a large texture memory and a good algorithm for packing arbitrarily-shaped triangles into a right-triangle mesh atlas to reduce distortion and seams. In addition, it is not suitable for a real-time deformable object because of the cost of dynamically rasterizing, managing, and packing the texture atlas.
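Equations 6.1 and 6.2 can be sketched with a stand-in noise function. Everything here is illustrative: a real system uses 3D gradient (Perlin) noise, `marbleColor` is reduced to a sine stripe mapped to [0, 1], D(X) is taken as a scalar direction term, and the leading 1/2 follows the printed form of Equation 6.2.

```python
import math

def noise(x):
    """Stand-in for Perlin noise: a smooth pseudo-random scalar in [0, 1].
    A real implementation would use 3D gradient noise."""
    return math.sin(x * 12.9898) * 0.5 + 0.5

def turbulence(x, b):
    """Sum of scaled noise octaves in the spirit of Equation 6.2."""
    return 0.5 * sum(noise(x * 2 ** k) / 2 ** k for k in range(b + 1))

def marble(x, a=1.0, b=4, direction=2.0):
    """Marble pattern in the spirit of Equation 6.1: a stripe function
    (here a sine mapped to [0, 1]) perturbed by turbulence."""
    return 0.5 * (1.0 + math.sin(direction * x + a * turbulence(x, b)))
```

As the text states, `a` controls the amplitude of the perturbation and `b` the number of octaves, i.e. the frequency content of the turbulence.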
In our system, the color generated from a noise function is applied to the surface by assigning per-vertex colors rather than using a solid map [11], to make the solid texturing process faster. The color of a vertex blends with those of nearby vertices and sometimes gives a good approximation; however, much of the detail of the solid texture is lost, depending on the complexity of the object surface (see Figure 6.13(a)). To address this limitation, we employ the adaptive polygonization approach, in which the mesh is adapted to the detail of the solid texture: if the difference between the per-vertex colors along an edge is greater than a specified threshold, the edge is split (see Figure 6.13(b,c,d)).

As a result, we easily obtain a clear solid texture on the surface. This approach allows the system to render the solid texture on a sculpted object in real time without special hardware or graphical rendering methods. Figure 6.14 shows models with various solid textures created by our sculpting system; note that the mesh complexity corresponds to the detail of the solid texture. Using this property, we can simulate a bumpy surface by modulating the surface normal (gradient) with the solid texture (see Figure 6.14(c)), an embossed surface (see Figure 6.15(a)), and a surface engraved along the solid texture (see Figure 6.15(b,c,d)).

Figure 6.4: Adaptive polygonization. Possible subdivision cases: three simple edges (triangle 1), two simple edges (triangle 2), one simple edge (triangle 3), and no simple edges (triangle 4). (a) Initial mesh. (b) The first subdivision. (c) Projecting midpoints onto the implicit surface. (d) The final mesh after the first refinement.
Figure 6.5: Adaptation criteria and projection to find the closest point on the implicit surface. Image from [57]

Figure 6.6: Bridging the disparity between the physical model and haptic update frequencies. (a) Physical model before the pushing deformation. (b) Intermediate implicit surfaces in the middle of the physical deformation (blue arrow: direction of tool-tip movement; red arrow: constraint force). (c) Physical model after the pushing deformation.

Figure 6.7: Bridging the disparity between the physical model and haptic update frequencies. (a) Physical model before the pulling deformation. (b) Intermediate implicit surfaces in the middle of the physical deformation (blue arrow: direction of tool-tip movement; red arrow: constraint force). (c) Physical model after the pulling deformation.

Figure 6.8: Screen shots showing the sculpting process in haptic sculpting mode. The sculpting tool is represented by a red wire-frame sphere. (a) An original model before sculpting. (b) A model sculpted by the carving operation. (c) Mesh model of (b). (d) A model sculpted by the addition tool. (e) Mesh model of (d). (f) The sculpted model after applying more adding and carving operations.

Figure 6.9: Screen shots showing the sculpting process in block sculpting mode. The sculpting tool is represented by a red wire frame. (a) Locating the box tool on the desired region to be carved. (b) A sculpted model after applying a box carving tool to (a). (c) A sculpted model after the sphere carving operation. (d) Locating the box tool on the desired region to be added. (e) A sculpted model after applying a box adding tool to (d). (f) Mesh model of (e).
Figure 6.10: A virtual model created after applying several sculpting tools in block sculpting mode. (a) A front view. (b) Mesh model of (a). (c) A close shot from the back side of (a). (d) Mesh model of (c).

Figure 6.11: A USC logo created by our sculpting system. (a) A USC logo. (b) The mesh model of (a). (c) A USC logo after applying more sculpting tools to (a). (d) Mesh model of (c).

Figure 6.12: Rhino sculpted from wood and its area-weighted mesh atlas. Image from [11].

Figure 6.13: Mesh adapted to the detail of the solid texture. (a) A model without adaptive polygonization. (b) A model with adaptive polygonization. (c) A close shot of part of (b). (d) The mesh model of (c).

Figure 6.14: Models with a marble-like solid texture. (a) A USC logo model. (b) A close shot of the mesh of (a). (c) A bumpy model obtained by modulating the surface normal of (a). (d) A complex model with a wood-like solid texture.

Figure 6.15: Solid-texture-based modelling. (a) An embossed model along a solid texture. (b) An engraved model with a solid texture. (c) A close shot of part of (b). (d) The mesh model of (c).

Chapter 7 Conclusion and Future work

7.1 Conclusion

We introduced a novel haptic rendering technique based on a hybrid surface representation which consists of explicit (geometric) and implicit models.
Based on this idea, we have successfully implemented and advanced many haptic rendering techniques, including haptic painting, haptic texturing, editing of local surface properties, and haptic sculpting.

The volumetric implicit surface representation has many advantages in haptic rendering: fast force computation at 1 kHz using implicit surface properties such as inside/outside and proximity; haptic performance independent of the shape complexity of geometric models and of the grid resolution of the volumetric representation; and avoidance of force discontinuity without a feeling of rounded surfaces. Thanks to the properties of the implicit representation, the user feels a smooth surface without force discontinuity. For the correct force magnitude, the system finds the virtual contact point on the surface using the proximity property of the implicit surface. As a result, the user feels the stiff geometry of the surface, unlike in previous volumetric haptic simulations. Our algorithm also effectively simulates surface properties such as friction, stiffness, and haptic texture. In particular, haptic texture is implemented by mapping the texture values directly into the potential values of the grid, resulting in a textured geometry of the implicit surface.

We have extended the basic haptic technique with, for example, an offset surface to provide additional internal volume for thin objects without introducing force discontinuity, a magnetic surface to force the tool tip to stick to the surface while exploring complex 3D models, and merging of multiple implicit representations to simulate multiple objects just as the system simulates a single object. We also use an octree to reduce the memory requirement of the volumetric representation.

Our haptic decoration technique allows the user to paint directly on the surface and then to sense the surface variation of the painted image.
The implicit surface textured by this surface variation can be converted into a geometric model containing an embossed or engraved shape right on the painted image. In addition, the user can edit material properties such as friction and stiffness over a 3D model instead of applying global properties over the whole model, and can then feel the surface properties stored in the volumetric representation on the surface.

Finally, we developed a haptic sculpting system in which the user intuitively adds and carves material on a volumetric model using various sculpting tools. The volumetric model being sculpted is visualized as a geometric model that is adaptively polygonized according to the surface complexity. For better visual effect, we present a mesh-based solid texturing method in which the mesh is adaptively subdivided according to the detail of the solid texture.

7.2 Future work

We want to enhance the haptic sculpting system by adding a haptic painting tool during volume sculpting. We will use the volumetric representation to store the color information applied by the user. The system will then compute the color of each vertex by locally interpolating the 8 colors of the containing cell. If the difference of colors along an edge is greater than a threshold, the edge will be split recursively to avoid undesired color blending due to large triangles, just as in the mesh-based solid texturing. In this approach we do not need a texture image, and consequently avoid texture distortion.

For detailed and efficient sculpting, various sculpting tools such as smoothing, stamping, and deflating/inflating are required. In addition, a cut-and-paste operator would allow the user to cut, copy, and paste part of a volumetric model represented by a 3D geometric model, making the sculpting process faster and easier.
a threshold, the edge will be split recursively to avoid undesired color blending across large triangles, just as in the mesh-based solid texturing. In this approach we do not need a texture image, and consequently we avoid texture distortion. For detailed and efficient sculpting, various sculpting tools such as smoothing, stamping, and deflating/inflating are required. In addition, a cut-and-paste operator would allow the user to cut, copy, and paste part of a volumetric model represented by a 3D geometric model, making the sculpting process faster and easier.

Finally, we will implement stereo rendering to enhance the visual perception of the sculpted object, and integrate a haptic glove such as the CyberGrasp from Immersion for a more intuitive interface.
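The per-vertex color lookup and recursive edge splitting proposed in the future-work discussion could take the following form. This is a hypothetical sketch, not part of the system described: `vertex_color`, `split_edge`, the `threshold` and `max_depth` parameters, and the RGB grid layout are all illustrative assumptions.

```python
import numpy as np

def vertex_color(color_grid, p):
    """color_grid: array of shape (nx, ny, nz, 3) holding an RGB color
    per grid node; p: vertex position in grid coordinates.
    Returns the trilinear blend of the 8 colors of the containing cell."""
    i, j, k = np.floor(p).astype(int)
    u, v, w = p - np.array([i, j, k])
    c = color_grid[i:i + 2, j:j + 2, k:k + 2]   # the 8 corner colors
    cx = c[0] * (1 - u) + c[1] * u              # blend along x
    cy = cx[0] * (1 - v) + cx[1] * v            # blend along y
    return cy[0] * (1 - w) + cy[1] * w          # blend along z

def split_edge(color_grid, a, b, threshold, depth=0, max_depth=4):
    """Recursively subdivide edge (a, b) while the color difference
    across it exceeds the threshold, mirroring the adaptive subdivision
    used for mesh-based solid texturing. Returns the ordered points."""
    ca, cb = vertex_color(color_grid, a), vertex_color(color_grid, b)
    if depth >= max_depth or np.linalg.norm(ca - cb) <= threshold:
        return [a, b]
    m = (a + b) / 2.0
    left = split_edge(color_grid, a, m, threshold, depth + 1, max_depth)
    right = split_edge(color_grid, m, b, threshold, depth + 1, max_depth)
    return left + right[1:]                     # drop duplicated midpoint
```

Storing colors at grid nodes rather than in a texture image is what removes the need for a parameterization, and hence the texture distortion mentioned above.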
Asset Metadata
Creator Kim, Laehyun (author) 
Core Title An implicit-based haptic rendering technique 
School Graduate School 
Degree Doctor of Philosophy 
Degree Program Computer Science 
Publisher University of Southern California (original), University of Southern California. Libraries (digital) 
Tag Computer Science,OAI-PMH Harvest 
Language English
Contributor Digitized by ProQuest (provenance) 
Advisor [illegible] (committee chair), Desbrun, Mathieu (committee chair), [illegible] (committee member) 
Permanent Link (DOI) https://doi.org/10.25549/usctheses-c16-639763 
Unique identifier UC11335042 
Identifier 3116728.pdf (filename),usctheses-c16-639763 (legacy record id) 
Legacy Identifier 3116728.pdf 
Dmrecord 639763 
Document Type Dissertation 
Rights Kim, Laehyun 
Type texts
Source University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection) 
Access Conditions The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the au... 
Repository Name University of Southern California Digital Library
Repository Location USC Digital Library, University of Southern California, University Park Campus, Los Angeles, California 90089, USA