Visuo-Haptic Collaborative Augmented Reality Ping-Pong
B. Knoerlein, G. Székely, M. Harders
knoerlein@vision.ee.ethz.ch, szekely@vision.ee.ethz.ch, harders@vision.ee.ethz.ch
Computer Vision Laboratory, ETH Zurich, CH-8092 Zurich, Switzerland

ABSTRACT
In our current work we examine the development of visuo-haptic augmented reality setups and their extension to collaborative experiences in entertainment settings. To this end, an expandable system architecture supporting multiple users is one of the most indispensable prerequisites. In addition, system stability, low latency, accurate calibration and stable overlay of the virtual objects have to be assured. In this paper we provide an overview of our framework and present our collaborative example application, an augmented reality visuo-haptic ping-pong game for two players. The users play with a virtual ball in a real environment while, using virtual bats colocated with haptic devices, they are able to feel the impact of the simulated ball on the bat.

Categories and Subject Descriptors
H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities; I.3.7 [Three-Dimensional Graphics and Realism]: Virtual reality

General Terms
Design, Performance

Keywords
Augmented reality, haptics, collaboration

ACE'07, June 13–15, 2007, Salzburg, Austria. Copyright 2007 ACM 978-1-59593-640-0/07/0006.

1. INTRODUCTION
The underlying idea of augmented reality (AR) systems is the combination of real and virtual objects into one environment [2]. However, the perception of the augmented scene is usually restricted to the visual domain and no haptic cues are provided, limiting user interactions. The integration of haptic interfaces into AR systems removes this constraint and permits users not only to see the virtual objects but also to feel them. This can be used for numerous applications where natural, three-dimensional human-computer interfaces are necessary, which allow transferring skills and knowledge from and to daily life.

Using augmented reality environments as user interfaces for games has been a topic of research for a while, as described for instance in [7, 12, 18, 5, 8]. Some research related directly to our example application of AR ping-pong has been carried out in the past. For instance, in [19] a virtual table-tennis setup with real rackets was developed. However, the system did not include haptic feedback and only used the rackets as an interface to position a virtual racket on the screen. A similar approach was taken in [11], which likewise provided no haptic feedback. In [13], a collaborative, mobile AR-based billiard game has been discussed, already incorporating tactile feedback. In most of these works, either no details on overlay accuracy, stability and latency are given, or the reported results are of lower quality.

Regarding the integration of haptic feedback in general AR setups, only limited work has been carried out. Some groups have experimented with adding props to provide passive haptic feedback in the AR environment, e.g. [14, 6, 15]. However, active behavior of scene objects cannot be generated with these systems. Recent research has examined the possibility of adding devices providing active haptic feedback. A few proof-of-concept systems have been developed in this context, for instance [17, 10, 20, 1]. However, lag reduction, exact alignment, and error minimization are only seldom addressed. Moreover, the haptic device is usually not colocated with the visual representation of the augmented objects, thus compromising hand-eye coordination. In addition, if simultaneous interaction with real and virtual objects is to be allowed, visuo-haptic colocation in the augmented scene is indispensable. In this context, the real world represents a reference frame into which the virtual elements have to be perfectly integrated. Any errors in this process would reduce the usability of a system and compromise user interaction and immersion.

To alleviate the currently existing shortcomings, we have developed a high-precision, collaborative, colocated visuo-haptic augmented reality environment. Its main components are a distributed, low-latency framework for synchronization, an accurate calibration of the haptic device in world coordinates, and a hybrid tracking setup. In this short paper we describe the developed framework and present an example testbed, multimodal AR ping-pong. Figure 1 depicts the overall setup for one user as well as his augmented view of the scene.

Figure 1: View of the colocated visuo-haptic AR system.
2. COLLABORATIVE AR FRAMEWORK
The computational requirements for achieving stable multimodal interaction in multiuser AR go beyond the capabilities of currently available hardware. Therefore, the system framework had to be distributed over several machines. The core components of our system are the physics server, a graphics client and a haptics client. The first machine performs the physical simulation of the virtual scene. A graphics client carries out all tasks typical of a standard AR setup. The external input data needed on this client are the position and orientation of the haptic interfaces, as well as the movement of the virtual objects in the scene. Finally, haptic interfaces can be added to the environment with the haptics clients, which communicate directly with the physics server. In our example setup, one device is directly connected to the physics server, while a second interface is added via a haptics client.

To provide a high level of immersion, the visual data acquired from the simulation always has to coincide with the moment the image was taken on the graphics machine. Therefore, accurate synchronization between the physics server and a graphics client is required. To this end, the actual clock difference is determined at startup of the application. This computation is done by comparing the clocks on both machines while taking into account the round trip time. After synchronizing both clocks, the graphical data is stored in a timestamped ring buffer, and the graphics client can request data for the timestamp at which the image was taken. The data are transmitted, temporarily stored and updated when needed by the rendering process. Therefore, no latency is added to the overall delay. One round trip of the communication takes 500–900 µs for small-sized packets. The data exchange between the physics server and the haptics client is collision-free, with a round trip time below 200 µs. Figure 2 illustrates all information communication occurring in the framework.

Figure 2: Data exchange in the distributed system. (The diagram shows the graphics loop, with its image grabbing, undistortion, pose refinement and drawing steps, exchanging per-cycle transformations and forces with the physics and haptics loops.)

In our testbed setup for two-player interaction in multimodal AR, the physics server runs on a dual-processor PC with 2.4 GHz CPUs (2 GB RAM, 512 KB cache). Rendering and tracking on the graphics client are done on a dual-processor PC with 2.8 GHz CPUs (2 GB RAM, 512 KB cache). To the former a PHANToM is attached, while to the latter the optical position tracking device and the cameras are connected. A second haptic device is attached to a third 1.0 GHz single-CPU machine (256 MB RAM, 256 KB cache) running a haptics client. All components run Linux. As external tracker we use the infrared (IR) optical 6 DoF tracking device OPTOTRAK 3020, manufactured by Northern Digital Inc. The view of one user is captured by a head-mounted Videre Design MEGA-DCS FireWire camera (640x480 pixel image at 30 fps). In addition, a user-independent top view of the scene is generated by an Allied Vision Technologies FireWire camera (800x600 at 30 fps). Since only one video-see-through HMD was available, one player currently has to rely on the images delivered by the overhead camera; a second HMD setup has already been manufactured for our system. Finally, to allow touching of virtual objects, two SensAble PHANToM 1.5 devices are integrated into our AR setup. As previously mentioned, the haptic devices are colocated with the augmented environment, thus allowing interaction with virtual and real objects via the same tool. Therefore, high demands on tracking and calibration have to be met, which will be discussed in the following.
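The paper gives no implementation details for this synchronization scheme. As a rough illustration, the following Python sketch shows one way the startup offset estimation and the timestamped ring buffer could be realized; all names (request_remote_time, TimestampedRingBuffer) are hypothetical and not taken from the actual system.

    import bisect
    import time

    def estimate_clock_offset(request_remote_time, samples=50):
        # NTP-style estimate: probe the remote clock repeatedly and trust
        # the probe with the smallest round trip time the most.
        best_rtt, best_offset = None, 0.0
        for _ in range(samples):
            t0 = time.monotonic()              # local send time
            t_remote = request_remote_time()   # remote clock reading
            t1 = time.monotonic()              # local receive time
            rtt = t1 - t0
            # Assume the remote reading was taken half-way through the trip.
            offset = t_remote - (t0 + 0.5 * rtt)
            if best_rtt is None or rtt < best_rtt:
                best_rtt, best_offset = rtt, offset
        return best_offset

    class TimestampedRingBuffer:
        # Fixed-size buffer of (timestamp, state) pairs. The graphics client
        # requests the simulation state closest to the camera exposure time.
        def __init__(self, capacity=128):
            self.capacity = capacity
            self.entries = []                  # kept sorted by timestamp

        def push(self, timestamp, state):
            self.entries.append((timestamp, state))
            if len(self.entries) > self.capacity:
                self.entries.pop(0)

        def query(self, timestamp):
            times = [t for t, _ in self.entries]
            i = bisect.bisect_left(times, timestamp)
            candidates = self.entries[max(i - 1, 0):i + 1]
            return min(candidates, key=lambda e: abs(e[0] - timestamp))[1]

Requesting the state by image timestamp, rather than simply taking the latest simulation state, is what keeps the overlay consistent with the grabbed frame without adding latency of its own.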
3. HYBRID TRACKING
An IR marker is attached to the head-mounted camera and tracked by the OPTOTRAK system to determine the user's head pose. The system measures and computes marker orientation and position with submillimeter accuracy at an optimal distance of about two meters. Since the camera and the marker are rigidly attached, the camera-marker transformation is fixed and can thus be determined by hand-eye calibration [16]. The IR tracking data are inherently noisy due to the inaccurate measurements of the LED positions, especially when the head-mounted marker moves. As a consequence, the registration between the real and the virtual world is affected, causing instability of virtual objects in the augmented images. Therefore, we correct the camera pose estimated by the IR optical tracker with a vision-based approach. The first step is to detect precalibrated visual landmarks in the recorded images. Furthermore, to ensure that a landmark is sufficiently visible, an occlusion test is carried out and occluded landmarks are discarded. Given the 2D-3D correspondences of the remaining landmarks, the position and the orientation of the camera are then refined by error minimization. An image-space approach is used, estimating the camera pose as a nonlinear least-squares problem. An RMS backprojection error of less than 0.7 pixels can be achieved in less than 1 ms of computation time. Moreover, using additional offline refinement of the 3D landmark positions, the error can be further reduced to about 0.3 pixels [3].
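The paper does not specify the pose parameterization or the solver used for this refinement. A minimal sketch of such an image-space refinement, assuming an axis-angle parameterization, a calibrated pinhole camera with intrinsic matrix K, and SciPy's Levenberg-Marquardt solver (none of which are confirmed by the paper):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(params, pts3d, pts2d, K):
        # params = [rx, ry, rz, tx, ty, tz]: axis-angle rotation and
        # translation mapping world points into the camera frame.
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        cam = pts3d @ R.T + params[3:]     # world -> camera coordinates
        proj = cam @ K.T                   # pinhole projection
        proj = proj[:, :2] / proj[:, 2:3]  # perspective division
        return (proj - pts2d).ravel()      # stacked pixel errors

    def refine_pose(initial_pose, pts3d, pts2d, K):
        # Start from the (noisy) IR-tracker pose estimate and minimize the
        # backprojection error over the visible landmark correspondences.
        result = least_squares(reprojection_residuals, initial_pose,
                               args=(pts3d, pts2d, K), method="lm")
        rms = np.sqrt(np.mean(result.fun ** 2))
        return result.x, rms

Because the IR tracker already supplies a good initial pose, only a few iterations are needed, which fits the sub-millisecond refinement times reported above.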
4. HAPTIC DEVICE CALIBRATION
In order to align the virtual representation of the haptic interaction point with the correct physical location in the real world, the relationship between the haptic and the world coordinate system needs to be determined. The first step of our calibration procedure is to collect 3D point measurements in the coordinate systems of the haptic device and the optical tracker. To this end, an optical marker is attached to the stylus of a PHANToM device. For the measurements, the tip position is stabilized in the middle of the haptic workspace by rendering a fixation force with the device. The marker assembly is then manually rotated following a spherical movement in space, while the marker poses are being recorded. The best-fitting sphere is then determined using pivot calibration [9], giving the marker-tip transformation. This allows obtaining corresponding point measurements in both the haptic and the world coordinate system. After acquiring the 3D point data, the absolute orientation problem has to be solved. Due to additional errors resulting from inaccuracies in haptic encoder initialization, a two-stage optimization process is followed [4]. The final calibration results yield an alignment error better than 1.5 mm. Figure 3 demonstrates the calibration results for visuo-haptic colocation: a virtual table tennis bat is aligned with a real handle attached to the haptic device.

Figure 3: Haptic device without (top) and with colocated augmentation of a virtual bat (center, bottom).
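As an illustration of the sphere-fitting step, the sketch below solves the algebraic least-squares formulation: each recorded position p must satisfy |p - c|^2 = r^2, which is linear in the center c and in k = r^2 - |c|^2. This is one standard way to implement it; the paper itself only cites the pivot calibration of [9].

    import numpy as np

    def fit_sphere(points):
        # Rewrite |p - c|^2 = r^2 as 2 p.c + k = |p|^2 with k = r^2 - |c|^2,
        # which is linear in (c, k) and solvable in closed form.
        points = np.asarray(points, dtype=float)
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = np.sum(points ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, k = sol[:3], sol[3]
        radius = np.sqrt(k + center @ center)
        return center, radius

The fitted center corresponds to the stabilized tip position; together with the recorded marker poses it yields the marker-tip transformation, and the resulting point pairs in haptic and world coordinates can then be aligned by solving the absolute orientation problem.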
5. COLLABORATIVE, VISUO-HAPTIC AR PING-PONG
To demonstrate the high fidelity of the system, an AR ping-pong environment was chosen as testbed scenario for our collaborative visuo-haptic augmented reality system. This choice has been driven by the importance of fast collaborative interactions during gameplay. Real handles of table tennis rackets are attached to the haptic devices; the actual bat and the ball are, however, overlaid virtual objects. In order to integrate the real table and the net, these objects are initially calibrated in the world coordinate system, thus allowing the virtual ball to collide with them. A simplified rigid body dynamics model is used for the physical modeling of the ball. The haptic devices are used to render impact forces to the users and to control the virtual bats. The overall rendering performance of the augmented scene at a resolution of 640x480 pixels is given in Table 1. Due to the multi-threaded implementation, the images are displayed at about 20 Hz with an overall latency of about 60–100 ms. The scene as shown to one of the users is depicted in Figure 4.

Table 1: Rendering performance

    Component                           Average time [ms]
    Frame rendering                     15
    Frame unwrapping                    5
    Corner detection + occlusion test   20
    Pose refinement                     0.6
    Object rendering                    5

Figure 4: Colocated visuo-haptic AR ping-pong for two players.
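The paper describes the ball model only as simplified rigid body dynamics with impact forces rendered to the users. The following sketch is one possible minimal reading of that description; the constants (restitution, force gain) are assumed for illustration and are not taken from the paper.

    import numpy as np

    GRAVITY = np.array([0.0, -9.81, 0.0])
    RESTITUTION = 0.85        # assumed bounce coefficient, not from the paper
    TABLE_HEIGHT = 0.0        # table surface at y = 0 in world coordinates

    def step_ball(pos, vel, dt):
        # One explicit Euler step of a simplified rigid body ball model.
        vel = vel + GRAVITY * dt
        pos = pos + vel * dt
        # Bounce on the calibrated table plane.
        if pos[1] < TABLE_HEIGHT and vel[1] < 0.0:
            pos[1] = TABLE_HEIGHT
            vel[1] = -RESTITUTION * vel[1]
        return pos, vel

    def impact_force(rel_velocity, bat_normal, gain=5.0):
        # Short force pulse for the haptic device when ball and bat collide:
        # directed along the bat normal and scaled by the normal component of
        # the relative impact velocity (gain is an assumed tuning constant).
        v_n = float(np.dot(rel_velocity, bat_normal))
        return gain * max(-v_n, 0.0) * bat_normal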
6. CONCLUSION
We have presented a colocated visuo-haptic augmented reality environment for two-player ping-pong. A dedicated system providing accurate calibration, high stability, and low latency had to be developed. A problem of the current setup is the lack of binocular stereo cues. This limits the interactivity of the system, since depth information has to be inferred from shadows, occlusions and motion parallax. Moreover, the workspace of the currently used haptic device is rather limited; therefore, interactions are restricted to a certain area. In addition, due to the device used, only 3 DoF of force feedback can be provided.

Next steps will be the extension to stereo vision for better depth perception. In addition, to reduce network traffic and to ease synchronization, a hardware trigger signal will be integrated. Also, extension to 6 DoF haptic feedback and alternative haptic devices will be investigated.

7. ACKNOWLEDGMENTS
This work has been performed within the frame of the Swiss National Center of Competence in Research on Computer Aided and Image Guided Medical Interventions (NCCR Co-Me), supported by the Swiss National Science Foundation.

8. REFERENCES
[1] M. Adcock, M. Hutchins, and C. Gunn. Augmented reality haptics: Using ARToolKit for display of haptic applications. In Proc. 2nd IEEE Intl. Augmented Reality Toolkit Workshop, pages 1–2, 2003.
[2] R. T. Azuma. A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4):355–385, 1997.
[3] G. Bianchi, C. Jung, B. Knoerlein, M. Harders, and G. Székely. High-fidelity visuo-haptic interaction with virtual objects in multi-modal AR systems. In ISMAR 2006, October 2006.
[4] G. Bianchi, B. Knoerlein, G. Székely, and M. Harders. High precision augmented reality haptics. In Eurohaptics 2006, pages 169–177, July 2006.
[5] M. Diaz, M. Alencastre-Miranda, L. Munoz-Gomez, and I. Rudomin. Multi-user networked interactive augmented reality card game. In International Conference on Cyberworlds, pages 177–182, 2006.
[6] R. Grasset, J.-D. Gascuel, and D. Schmalstieg. Interactive mediated reality. In Symposium on Mixed and Augmented Reality, pages 302–303, 2003.
[7] T. Jebara, C. Eyster, J. Weaver, T. Starner, and A. Pentland. Stochasticks: Augmenting the billiards experience with probabilistic vision and wearable computers. In Proc. 1st Intl. Symp. on Wearable Computers (ISWC), pages 138–145, 1997.
[8] A. Lam, K. Chow, E. Yau, and M. Lyu. ART: Augmented reality table for interactive trading card game. In Proc. 2006 ACM Intl. Conference on Virtual Reality Continuum and Its Applications, pages 357–360, 2006.
[9] S. Lavalle, P. Cinquin, and J. Troccaz. Computer Integrated Surgery and Therapy: State of the Art, chapter 10, pages 239–310. IOS Press, 1997.
[10] T. Nojima, D. Sekiguchi, M. Inami, and S. Tachi. The SmartTool: A system for augmented reality of haptics. In Proc. IEEE Virtual Reality 2002, pages 67–72, 2002.
[11] T. Ohshima, K. Satoh, H. Yamamoto, and H. Tamura. AR2 hockey: A case study of collaborative augmented reality. In Proc. VRAIS, pages 268–275, 1998.
[12] W. Piekarski and B. Thomas. ARQuake: The outdoor augmented reality gaming system. Communications of the ACM, 45(1):36–38, 2002.
[13] U. Sargaana, H. S. Farahani, J. W. Lee, J. Ryu, and W. Woo. Collaborative billiARds: Towards the ultimate gaming experience. In ICEC, pages 357–367, 2005.
[14] D. Schmalstieg, A. Fuhrmann, G. Hesina, Z. Szalavari, L. M. Encarnacao, M. Gervautz, and W. Purgathofer. The Studierstube augmented reality project. Presence: Teleoperators and Virtual Environments, 11(1):32–45, 2002.
[15] R. Sidharta, J. Oliver, and A. Sannier. Augmented reality tangible interface for distributed design review. In Computer Graphics, Imaging and Visualisation, pages 464–470, 2006.
[16] R. Y. Tsai and R. K. Lenz. A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Journal of Robotics and Automation, 5(3):345–358, 1989.
[17] J. R. Vallino and C. M. Brown. Haptics in augmented reality. In ICMCS, Vol. 1, pages 195–200, 1999.
[18] D. Wagner, T. Pintaric, F. Ledermann, and D. Schmalstieg. Towards massively multi-user augmented reality on handheld devices. In Pervasive, pages 208–219, 2005.
[19] C. Woodward, P. Honkamaa, J. Jäppinen, and E.-P. Pyökkimies. CamBall: Augmented networked table tennis played with real rackets. In Advances in Computer Entertainment Technology, pages 275–276, 2004.
[20] G. Ye, J. J. Corso, G. D. Hager, and A. M. Okamura. VisHap: Augmented reality combining haptics and vision. In Proc. IEEE Intl. Conference on Systems, Man and Cybernetics, pages 3425–3431, 2003.