TERENA-NORDUnet Networking Conference (TNNC) 1999

Virtual Reality Movies – Real-Time Streaming of 3D Objects

S. Olbrich, H. Pralle
Institute for Computer Networks and Distributed Systems (RVS),
Regional Scientific Computing Center for Lower Saxony (RRZN)
University of Hanover
Schlosswender Str. 5, D-30159 Hannover, Germany
Tel.: +49 511 762 3078, Fax: +49 511 762 3003
Email: olbrich@rvs.uni-hannover.de

Abstract

Powerful servers for computation and storage, high-speed networking resources, and high-performance 3D graphics workstations, which are typically available in scientific research environments, potentially allow the development and productive application of advanced distributed high-quality multimedia concepts. Several bottlenecks, often caused by inefficient design and software implementation of current systems, prevent utilization of the offered performance of existing hardware and networking resources. We present a system architecture which supports streamed online presentation of series of 3D objects. In the case of expensive simulations on a supercomputer, results are increasingly represented as 3D geometry to support immersive exploration, presentation, and collaboration techniques. Accurate representation and high-quality display are fundamental requirements to avoid misinterpretation of the data. Our system consists of the following parts: a preprocessor to create a special 3D representation – optimized for transmission and streamed presentation in a high-performance working environment – an efficiently implemented streaming server, and a client. The client was implemented as a web browser plugin, integrating a viewer with high-quality virtual reality presentation (stereoscopic displays), interaction (tracking devices), and hyperlinking capabilities.

Keywords

Virtual Reality, VRML, Scientific Visualization, Streaming, Browser, Plugin.
1 INTRODUCTION

Scientific and industrial research environments increasingly provide powerful operating platforms, such as high-speed networks, high-performance server and client systems, and high-quality 3D graphics systems. These potentially meet the requirements of high-quality applications in typical visualization scenarios, where complex 3D objects – for example, results of a simulation – have to be handled interactively.

© 1999, RRZN/RVS University of Hanover
Available client-server systems for distributed online presentation of virtual 3D scenes in the WWW are based on Internet standards, such as TCP/IP, URL [5], HTTP [13], MIME [14], HTML [4], and VRML (Virtual Reality Modeling Language) Version 1.0 [3] or VRML97 [22]. They reveal several constraints regarding performance, quality, and functionality, which often prohibit useful application. This is in contrast to specific virtual reality systems, which demonstrate the performance potential, but are essentially designed as stand-alone systems.

Using VRML viewers, high latency, low navigation frame rates, and little support for high-quality virtual reality presentation and interaction techniques have been observed, caused by inefficient representation, transport, and presentation protocol design and implementation. While delays between user-requested changes of static scenes already prevent interactive production, real-time streaming of sequences of scenes is in general left out of consideration altogether. But such a mechanism is required for the exploration of three-dimensional, time-dependent phenomena, utilizing virtual-reality techniques in order to obtain Virtual Reality Movies that can be freely navigated with on-the-fly presentation.

2 RELATED WORK

Interactive viewing of 3D models was originally considered in the context of specialized "Virtual Reality Systems", consisting of a local storage system, a high-performance 3D rendering engine, stereoscopic displays, and 3D interaction devices. After the – often time-consuming – startup phase, loading the often proprietary file formats for the model data and behaviour in such a stand-alone system, all manipulation is based on the in-memory representation.
When Internet-based, hyperlinked multimedia documents became generally available and usefully applicable in science and industry, integration and standardization of 3D scene descriptions in the World Wide Web were required. In 1994, after presentation of a 3D user interface for the Internet [37], a public discussion – based on an e-mail list – was started. The Open Inventor (Silicon Graphics) file format [51] was then chosen as the foundation of the first specification of a Virtual Reality Modeling Language (VRML 1.0) [3]. A revised version, including dynamic elements, was standardized in a collaboration of the Internet community and ISO (VRML97) [22]. Further development concerns, among other aspects, compressed binary coding [23]. A framework and representation for multimedia documents including 3D scenes is also provided by MPEG-4, which specifies a binary representation of interactive graphics and audio-visual scenes known as BIFS (BInary Format for Scenes) [11][42].

Several methods for the preparation of complex 3D surface models to reduce their complexity and data volume are known. They can be classified as:
• Mesh decimation, smoothing [25][26][39][40] and multiresolution modeling [9][24][27],
• Topology-oriented geometric compression [16][44][46][47] and progressive coding [12][15][45],
• Primitive packing, such as "triangle stripping".

Techniques of the first two classes in principle introduce deviations in the representation, which should generally be avoided for scientific data visualization applications in order to maintain accurate exploration of results of expensive simulations on supercomputers. Besides that, reconstruction from the compressed code can be a very time-consuming process – e.g. ca. 35,000 triangles per second in [46], 800,000 triangles per second in [16] – and can therefore introduce a significant bottleneck in powerful networking and rendering environments. Triangle stripping is a useful technique not only to reduce the data volume (3:1 for large connected meshes), but also to optimize the rendering process (see figure 1 and figure 4).

Figure 1 Complexity of typical rendering primitives: independent triangles versus connected mesh. N independent triangles have a volume of N * 3 * per_vertex_data; a triangle strip of N connected triangles has a volume of (N+2) * per_vertex_data. Per-vertex data: 3D coordinates (3 float values = 12 bytes/vertex), normal vectors (3 float values = 12 bytes/vertex), colors (RGBA = 4 bytes/vertex). With coordinates and normals, this is N * 72 bytes versus (N+2) * 24 bytes.

Besides VRML97 viewers, such as CosmoPlayer (now "PlatinumPlayer", announced as open source [35]) and VRwave [1], several other 3D viewers have been implemented: first integrating 3D presentation in the WWW at all (CLRMosaic, Web OOGL), then particularly considering efficient display (e.g. i3D [2]) or prototyping new concepts.
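The volumes given in figure 1 can be checked with a short sketch; the per-vertex size of 24 bytes assumes 3D coordinates plus normal vectors, as in figure 1, and the function names are illustrative only:

```python
# Per-vertex payload assumed in figure 1: 3 float coordinates (12 bytes)
# plus 3 float normal components (12 bytes) = 24 bytes per vertex.
PER_VERTEX_BYTES = 24

def independent_volume(n_triangles: int) -> int:
    """N independent triangles: 3 vertices each, nothing shared."""
    return n_triangles * 3 * PER_VERTEX_BYTES

def strip_volume(n_triangles: int) -> int:
    """One triangle strip: N triangles need only N + 2 vertices."""
    return (n_triangles + 2) * PER_VERTEX_BYTES

n = 100_000
print(independent_volume(n))   # 7200000 bytes (N * 72)
print(strip_volume(n))         # 2400048 bytes ((N + 2) * 24)
print(independent_volume(n) / strip_volume(n))  # approaches the 3:1 ratio for large N
```

For large connected meshes the ratio converges to exactly 3:1, which is the data-volume reduction quoted in the text.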
An approach for embedding complex 3D objects in digital documents is presented in [9], resulting in a hierarchical mesh reduction technique with progressive transmission and display capabilities, available as an ActiveX control for Microsoft Windows and as a web-browser helper application for Unix.
Streaming concepts are discussed in the context of transmission and on-the-fly presentation and control of video and audio media, presented separately or integrated in 3D worlds (VRML97 only supports video or audio objects as complete files), as a method to incrementally update minor changes in virtual scenes [17], or for transmission and adaptive presentation of progressive multi-level meshes of static scenes [15]. Concepts for sharing distributed virtual environments are investigated in [6][7]. The connection of scientific data visualization and immersive virtual reality techniques is discussed in [8][10][18][19].

Representation, communication, and presentation aspects in conjunction with online presentation of 3D objects are discussed in [33], resulting in an efficient concept for interactive handling of preprocessed, VRML-1.0-based complex static models, utilizing the capacity of high-performance networking, computing, and 3D graphics environments.

Figure 2 Virtual Reality Environment for Scientific Visualization: a simulation produces results (e.g. temperature), postprocessing derives a symbolic representation (e.g. an isosurface), and the VR system maps it to a multimedia representation (e.g. 3D geometry) for visual, acoustic, and haptic displays, with navigation, interaction, and steering controlled by the user.

3 REQUIREMENTS

We mainly focus on the visualization of scientific results for exploration and presentation purposes. Large multi-dimensional datasets from measurements or high-performance computations are characteristic for this field of application. This raw data (e.g. scalar or vector fields) has to be filtered to symbolic (e.g. isosurfaces, streamlines) and mapped to three-dimensional geometric (e.g. triangles, lines) representations, which are then rendered on a two-dimensional raster display. Immersive virtual reality environments, as illustrated in figure 2, will increasingly provide powerful means to support virtual laboratories, which take advantage of multi-sensor, intuitive user interfaces.

In the following, we consider the distributed presentation of sequences of complex 3D objects at the geometric level of abstraction. This enables virtual reality techniques, such as stereoscopic projection and 3D navigation.

Table 1 Classification of requirements (examples).

Performance
– Presentation: short latency (startup, navigation); high framerates (transform rate, fill rate); usage of efficient primitives
– Communication: optimized implementation
– Representation: client-side effort (decode, decompress, parse, building scene graph, calculation of normal vectors); data volume

Quality
– Presentation: resolution (pixels, intensity, color); antialiasing; color management (configurable destination profiles, sRGB default)
– Communication: Quality of Service (high bitrates, short latency, low jitter, low error rate)
– Representation: resolution (time, geometry, normals, color, texture); compression (lossless!); ICC profile support

Functionality
– Presentation: progressive rendering; stereoscopic viewing; navigational comfort (tracking, 3D input devices); multiplatform support (MS Windows, UNIX)
– Communication: streaming (e.g. control: RTSP; media data: HTTP); synchronization (intrastream, interstream); media-specific scaling (frame rate, resolution, level of detail)
– Representation: needed attributes for virtual reality presentation (focal distance); sequences of 3D objects: "Virtual Reality Movie"

Standards
– Presentation: generic WWW browser (Netscape, plugin interface); APIs (3D graphics, GUI)
– Communication: networks (LAN, WAN; Ethernet, ATM); protocols (IP, TCP, RTSP); services (HTTP)
– Representation: WWW server; dynamic content (CGI); caching and proxy techniques; multimedia and 3D/VR data formats
Partitioning at a higher level would support more interaction, such as the choice of filtering and mapping methods and parameters. But for the interesting grand-challenge problems – such as the exploration of large-scale unsteady fluid flows – these steps cannot be offered at interactive rates, since the data volume or computation efforts are often prohibitive [28].

The online presentation of series of these prepared 3D objects has to avoid bottlenecks which could prevent efficient and accurate exploration. An overview of the requirements is given in table 1, where we classify the aspects which influence performance, quality, and functionality and attach them to the representation, communication, and presentation instances of the processing pipeline.

4 CONCEPT AND IMPLEMENTATION

After identifying the bottlenecks and studying the feasibility of the given application scenario requirements using state-of-the-art equipment, we have designed and implemented an optimized viewer architecture. It consists of three components, which are discussed in the next sections:
• Preprocessor: VRML to DVR, a new 3D file format;
• DVR streaming server, based on the Real-Time Streaming Protocol (RTSP);
• New viewer: a Netscape plugin, called DocShow-VR.

By using this software in a high-performance networking and graphics environment, sequences of high-quality 3D scenes can be played out and remain interactively navigable while viewing smooth 3D animations. The operating sequence is characterized by the following steps (numbers corresponding to figure 3):
1. The web client reads a DVRS object (MIME type: application/x-docshow-vrs, extension: .dvrs) via HTTP, possibly requested by an EMBED tag in an HTML page.
2. After the appropriate DVRS plugin "DocShow-VR" (see section 4.3) is loaded by the web browser, it establishes a connection to the 3D streaming server (DVRS, see section 4.2), based on the reference information (IP address, port number) and attributes (frame rate, etc.) contained in the DVRS data.
3. The 3D streaming server reads 3D objects from files in DVR format (see section 4.1) and delivers them to the client, interleaved with additional delimiting PDUs.
4. After reading the first 3D object, the client executes some further actions:
– Reading DVR data: transfer from the streaming connection into a 3D object memory buffer.
– Rendering: transformation of the 3D objects, based on the current virtual camera position, orientation, and view angle, to the display device.
– VR navigation: modification of the virtual camera parameters, according to the input device control.
– RTSP, VCR metaphor: control handling, e.g. instructing the server to stop, to play out from a new position with a modified frame rate, or to deliver an alternate data set (in case multiple levels of detail exist on the server).

The binaries, the web-based VRML-to-DVR converter service, and several application examples are publicly available:
http://www.dfn-expo.de/Technologie/DocShow-VR/

Figure 3 3D streaming system: preparation (preprocessing VRML to DVR, or simulation and visualization creating DVR directly; dynamic scene generation planned) produces DVRS meta files (meta data) and DVR scene files (3D geometry, stored via SCSI or Fibre Channel). A WWW server and DVRS streaming servers deliver them over HTTP, RTSP, and DVRP (Ethernet/ATM, TCP/IP) to the browser plugin (Netscape plugin callbacks, socket API), which renders via OpenGL to displays (e.g. monitor, stereo projection) under control of input devices (e.g. SpaceMouse, tracking systems).

4.1 Preprocessor: VRML to DVR, a new 3D file format

In order to avoid the overhead of calculations that are usually done by the viewer software in typical WWW-integrated applications, the 3D scene descriptions are preprocessed. Compute-intensive processes, such as
• decompressing,
• converting from ASCII to binary representation,
• building a scene graph,
• calculation of normal vectors,
• packing optimized display lists – such as triangle strips (as described in section 2, see also figure 4),
are now executed as a preparation step, resulting in a new 3D file format called "DVR". The DVR files are then placed on a web server (MIME type: application/x-docshow-vr, extension: .dvr) or our DVR streaming server (see section 4.2). This partitioning approach can be thought of as splitting a conventional scene-graph-based virtual reality API – such as Iris Performer, Open Inventor, or Optimizer – at a usually hidden level, where structures of optimized rendering primitives are built in memory, in order to take advantage of adequately optimized rendering routines, which are also provided within such an API in an encapsulated manner.

Since the standardized 3D file format in the Internet is VRML (Virtual Reality Modeling Language) [3][22], we have developed tools to provide efficient and comfortable conversion from VRML to DVR. This supports processing of existing 3D scenes, which can first be converted to VRML by other available tools [50] and then to DVR by our tool. VRML output of 3D modeling tools, such as 3D Studio, and visualization systems, such as AVS (with the public domain module "write VRML") [48], AVS/Express, Ensight, and VTK [39], has been successfully prepared in this way. Besides that, a class library was implemented to support direct DVR data generation, which is intended to be integrated into VTK.

The preprocessor accepts static 3D scene descriptions in the ASCII formats VRML 1.0 or VRML97 and converts them into its own binary representation (IEEE format, network byte ordering) of the restructured, linearized scene graph to support efficient on-the-fly rendering of the data stream at the client side.
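The actual DVR layout is not specified in this paper; purely as an illustration of the stated approach – binary IEEE values in network byte order, pre-packed triangle strips with precomputed normals – a hypothetical record writer could look like this (the record layout and names are assumptions, not the real DVR format):

```python
import struct

def pack_strip(vertices, normals):
    """Pack one triangle strip as a binary record in network byte order.

    vertices, normals: equal-length lists of (x, y, z) float triples.
    Layout (hypothetical, not the actual DVR layout): a 4-byte big-endian
    vertex count, then 6 interleaved floats (coordinates, normal) per vertex.
    """
    assert len(vertices) == len(normals)
    record = struct.pack("!I", len(vertices))        # '!' = network byte order
    for (x, y, z), (nx, ny, nz) in zip(vertices, normals):
        record += struct.pack("!6f", x, y, z, nx, ny, nz)
    return record

# A strip of N triangles needs N + 2 vertices; 2 triangles -> 4 vertices.
strip = pack_strip([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)],
                   [(0, 0, 1)] * 4)
print(len(strip))  # 100 bytes: 4-byte count + 4 vertices * 24 bytes
```

Because the values are already in IEEE binary form and network byte order, a client can copy them straight from the socket into a vertex array without parsing, which is exactly the overhead the preprocessing step is meant to eliminate.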
The C++ implementation is based on the available VRML 1.0 parser library QvLib [43], an email message from Jan Hardenberg containing fragments of an implementation of a rudimentary, OpenGL-based VRML 1.0 viewer [20], and the VRML 2.0 reader and scene graph classes of Open Inventor (TGS, Version 2.5). The current release is provided as a batch-oriented command-line tool for Microsoft Windows 95/98/NT and SGI Irix. It has also been integrated into an interactive online conversion service via HTTP form-based upload [29]. We have also experimented with a transparent mechanism, using a caching proxy [33], based on a configurable HTTP filter process, webfilt [49].

4.2 DVR streaming server, based on RTSP

In addition to a usual WWW server, which delivers single DVR scenes or DVR streaming reference and control information (a new format called DVRS), we need a dedicated 3D streaming server. We have implemented a prototype, applying the Real Time Streaming Protocol (RTSP, RFC 2326) [41] for this purpose. The DVR transport protocol (DVRP) is realized via a TCP-based transmission, simply using the native DVR files, interleaved with a new separator protocol data unit (PDU), which can be considered an extension of the DVR file PDUs. Intra-stream synchronization is realized based on the server-side delivery times. Optionally, the intended mean frame rate is maintained by occasionally omitting scene files.
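Maintaining the mean frame rate by omitting scene files can be sketched as a simple pacing policy; the selection logic below is an assumption for illustration, not the actual DVRS server code:

```python
def frames_to_send(frame_times, target_rate):
    """Select which scene frames to deliver so the mean output rate stays
    at target_rate, dropping frames that would arrive too early.

    frame_times: wall-clock seconds at which each frame becomes available
    on the server (e.g. when its file read completes).
    Returns the indices of the frames actually delivered.
    """
    interval = 1.0 / target_rate
    kept, last_sent = [], None
    for i, t in enumerate(frame_times):
        # Deliver a frame only if a full frame interval has elapsed since
        # the previously delivered one; otherwise omit the scene file.
        if last_sent is None or t - last_sent >= interval:
            kept.append(i)
            last_sent = t
    return kept

# Frames become available every 0.25 s, but the client should get 2 frames/s:
times = [0.25 * i for i in range(8)]
print(frames_to_send(times, 2))  # every second frame is kept: [0, 2, 4, 6]
```

Under this policy a slow reader or network automatically lowers the delivered frame count while the mean rate, measured over the delivered frames, stays at the intended value.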
4.3 New viewer: a Netscape plugin, called DocShow-VR

To achieve short latency, streaming capabilities, and support for 3D scenes embedded in HTML pages, we have implemented an optimized viewer as a plugin for Netscape browsers. As such, by using the Netscape Plugin API [31][32], it is tightly coupled to the data delivery to allow streamed, progressive display of single DVR scenes via HTTP or – after reading DVRS information – to connect to our 3D streaming server. After RTSP startup, 3D animations are displayed in real time, overlapped with the streamed TCP-based DVR data transmission, where display (up to two graphics pipes) and transport are implemented in separate threads, which leads to a speed-up by parallelization of the communication and rendering processes. This RTSP-based streaming scenario is by now implemented only on SGI Irix.

The renderer is efficiently implemented for UNIX (SGI Irix, Sun Solaris, HP-UX) and Win32 (Microsoft Windows 95/98/NT) platforms, based on OpenGL [30], the de facto 3D graphics API standard, and utilizes platform-specific OpenGL extensions, such as vertex arrays (high-speed polygon rendering) or multisampling antialiasing (high-quality presentation). For UNIX platforms not providing an OpenGL runtime environment, plugin versions linked with the OpenGL emulation library Mesa [36] were created.

Stereoscopic viewing is supported on several platforms (SGI: any Irix workstation; Sun: Elite 3D; Windows NT: Diamond FireGL 1000, HP Kayak XW), using the quad-buffer stereo mode in conjunction with shutter glasses or large-screen stereo projection. Head tracking systems (Crystal Eyes CE-VR, Polhemus Fastrak, Intersense) have also been integrated, in order to obtain an immersive virtual reality system. Color management [21] support was integrated to increase the reproduction quality of textures as scene components.
The presentation of images in TIFF RGB and CMYK formats, originally based on Sam Leffler's TIFF library, takes any embedded ICC-conforming source profile as well as a monitor-specific ICC destination profile into account, and converts color space and gamut to match the intended colors. It was implemented on SGI Irix 6.5, Sun Solaris, and Microsoft Windows 98, by applying the available color management system (CMS) APIs Coloratura, KCMS, and ICM 2.0, respectively (the CMS functions used are listed in table 2). On these platforms the plugin is also capable of viewing TIFF images (MIME type: image/tiff or image/x-tiff, extension: .tif).

Support of collaborative work by performing bidirectional synchronization of the virtual camera between two clients is already implemented in the Unix version. In the future we would like to synchronize not only navigation in 3D space, but also implement 3D tele-pointers and annotations, and a synchronization of the time axis for at least two users in the streaming scenario.
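On all three platforms the color-managed TIFF path reduces to the same abstract sequence – open source and destination profiles, create a transform, apply it – with the per-platform function names listed in table 2. A platform-neutral sketch, where the `cms` wrapper object and its method names are hypothetical stand-ins for those APIs:

```python
def color_correct(pixels, embedded_profile, monitor_profile, cms):
    """Abstract CMS sequence; `cms` stands in for the platform API
    (ICM 2.0, Coloratura, or KCMS) and its method names are illustrative
    only.  Falls back to an sRGB source when no profile is embedded."""
    source = cms.open_profile(embedded_profile or "sRGB")
    dest = cms.open_profile(monitor_profile)
    transform = cms.create_transform(source, dest)  # "Create Color Transformation"
    out = cms.apply(transform, pixels)              # "Transform Colors"
    cms.close_profile(source)
    cms.close_profile(dest)
    return out

class StubCMS:
    """Minimal stand-in so the sequence can be exercised without a real CMS."""
    def open_profile(self, name): return name
    def close_profile(self, profile): pass
    def create_transform(self, src, dst): return (src, dst)
    def apply(self, transform, pixels): return [(px, transform) for px in pixels]

# With no embedded profile, the source falls back to sRGB:
result = color_correct([1, 2], None, "monitor.icc", StubCMS())
```

Only the thin `cms` wrapper has to be ported per platform; the viewing code above it stays identical on Irix, Solaris, and Windows.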
Table 2 ICC-based color management functions on different platforms

Function                     | Microsoft Windows 98 (ICM 2.0) | SGI Irix 6.5 (Coloratura) | Sun Solaris (KCMS)
Initialization               | –                              | cmsOpen                   | KcsAvailable
Open profile                 | OpenColorProfile               | cmsImportProfile          | KcsLoadProfile
Close profile                | CloseColorProfile              | cmsCloseProfile           | KcsFreeProfile
Get tag value                | GetColorProfileElement         | cmsGetTag                 | KcsGetAttribute
Create color transformation  | CreateMultiProfileTransform    | cmsCreateTfm              | KcsConnectProfiles
Transform colors             | TranslateBitmapBits            | cmsApplyTfm               | KcsEvaluate

5 EVALUATION

We built a testbed for the evaluation of the described production process in a typical scientific data visualization application. The components are:

1. DVR dataset:
• A simulation of an unsteady fluid flow phenomenon (oceanic convection [38]) on a supercomputer (Siemens/Fujitsu VPP 300) resulted in raw data of 740 timesteps, each consisting of temperature and velocity information at 161x161x31 grid points, stored as 32-bit float values. This dataset was postprocessed by an AVS/Express application into colored slicers, isosurfaces, and arrows, stored as a sequence of 3D scenes in VRML 1.0 format by the module "OutputVRML", and then converted to DVR by our "wrl1toDVR" tool.
• 740 DVR files, between 5,542,044 and 9,309,468 bytes in size (whole DVR dataset: 5.3 Gbyte), each containing about 100,000 primitives (lines, polygons). An example of a rendered image is shown in figure 4 as case 2.

2. 3D streaming client-server configuration:
• As the client system, where 3D rendering performance is a main issue, we used an SGI Onyx2 (rack system, 4 processors R10000/195 MHz, 2 Gbytes main memory, 9 Gbytes SCSI disk, Irix 6.5.4) with two Infinite Reality graphics subsystems (each with two 64 MB Raster Manager boards).
• A 3D streaming server was also installed on this machine for development purposes and for testing capabilities in a high-performance TCP/IP scenario (loopback communication).
The DVR files were stored on a 140 Gbyte RAID system, connected via Fibre Channel.
• A separate 3D streaming server was implemented on an SGI Origin200 (R10000/225 MHz, 512 Mbytes main memory, 9 + 18 Gbyte SCSI disks, Irix 6.5.4). In this case the DVR files were stored on an 18 Gbyte SCSI disk.
• A high-speed network between the Onyx2 and the Origin200 was realized by a back-to-back gigabit ethernet connection. As a performance optimization, jumbo frame support was enabled on both machines.

We measured the throughput of the critical instances separately, as well as the overall performance of the processing pipeline. We then tuned the read and write block sizes and socket buffer sizes in the implementation of our client and server software. In table 3 we present some characteristic results.

Table 3 Throughput measurement results (typical values in Mbit/s) of the scenario "Oceanic convection" (frames 720–739)

Measurement                                                  | Client: SGI Onyx2 | Server 1: SGI Onyx2  | Server 2: SGI Origin200
Read DVR files, one read block per file (second try: cached) | –                 | 250–350 (980–1020)   | 109–110 (1090–1170)
TCP/IP transmission from server to client
(ttcp -l1048576 -n100 -b262144)                              | –                 | 790–950 (loopback)   | 560–670 (gigabit ethernet)
3D rendering (DocShow-VR): OpenGL immediate mode             | 380–420           | –                    | –
DocShow-VR maximal framerate                                 | –                 | 280–320              | 324–326
Client-server streaming pipeline, 4 frames/second            | –                 | 251                  | 251
Client-server streaming pipeline, 2 frames/second            | –                 | 216                  | 126

Interesting experiences in this scenario are:
• The throughput in the distributed configuration is higher than in the case of executing client and server on the same machine. Reasons could be:
– lower CPU clock on the client machine (195 MHz versus 225 MHz),
– overhead introduced by running 3 processes simultaneously.
• The sustained overall throughput is significantly slower than the rendering speed, which seemed to be the limiting factor. Possible reasons are:
– Swapping latency caused by double-buffered display.
Average time: 1/(2*frame_rate) (1/144 s for a 72 Hz frame rate),
– X11/Motif event processing at every frame,
– initialization of the OpenGL rendering state at the start of every frame.

In figure 4 we illustrate the rendering performance in typical application scenarios.
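The first of these reasons can be quantified: with a 72 Hz display refresh, a double-buffer swap that misses the vertical retrace waits between 0 and one refresh period, i.e. 1/144 s on average, as stated above:

```python
def mean_swap_wait(refresh_hz):
    """Average wait for the next vertical retrace, assuming the swap
    request arrives at a uniformly random phase of the refresh cycle:
    the wait is uniform on [0, 1/refresh_hz], so the mean is half that."""
    return 1.0 / (2.0 * refresh_hz)

print(mean_swap_wait(72))         # ~0.0069 s = 1/144 s per frame
print(mean_swap_wait(72) * 1000)  # ~6.9 ms lost to swapping, on average
```

At the measured pipeline rates of a few frames per second this per-frame stall is small, but at the renderer's native speed of hundreds of Mbit/s it becomes a noticeable fraction of each frame time.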
1. "DX-03" (IBM Data Explorer result, part of the OpenGL benchmark viewperf [34], converted to VRML): 91,584 triangles with normals per vertex
Triangle strips (2,252,744 bytes): HP Kayak fx6: 15 ms, 6.11 Mio./s, 1.20 Gbit/s; SGI Onyx2 IR: 21 ms, 4.34 Mio./s, 0.85 Gbit/s
Independent triangles (6,601,928 bytes): HP Kayak fx6: 46 ms, 1.95 Mio./s, 1.15 Gbit/s; SGI Onyx2 IR: 53 ms, 1.72 Mio./s, 0.99 Gbit/s

2. "Oceanic convection, step nr. 492" (AVS/Express result; courtesy Institute for Meteorology, University of Hanover): 109,255 primitives
Triangle strips (5,798,264 bytes): HP Kayak fx6: 109 ms, 1.00 Mio./s, 0.43 Gbit/s; SGI Onyx2 IR: 139 ms, 0.79 Mio./s, 0.33 Gbit/s
Independent triangles (9,080,416 bytes): HP Kayak fx6: 125 ms, 0.87 Mio./s, 0.58 Gbit/s; SGI Onyx2 IR: 157 ms, 0.70 Mio./s, 0.46 Gbit/s

3. "Surface roughness measurement" (AVS result; courtesy Institute for Production Engineering and Machine Tools, Univ. of Hanover): 130,050 triangles; normals and colors per vertex
Triangle strips (3,658,168 bytes): HP Kayak fx6: 47 ms, 2.77 Mio./s, 0.62 Gbit/s; SGI Onyx2 IR: 61 ms, 2.14 Mio./s, 0.49 Gbit/s
Independent triangles (10,924,656 bytes): HP Kayak fx6: 141 ms, 0.92 Mio./s, 0.62 Gbit/s; SGI Onyx2 IR: 189 ms, 0.69 Mio./s, 0.46 Gbit/s

Figure 4 Preprocessing results of three application scenarios at RRZN/RVS: DVR data volumes, and immediate-mode rendering rates, times, and bitrates, based on the embedded DVR plugin (DocShow-VR 1.3). Window sizes: 1. 512x512 pixels, 2. and 3. 640x480 pixels. Platforms:
– HP Kayak XW, fx6 graphics, 2 x Pentium II Xeon, 450 MHz, Windows NT 4.0
– SGI Onyx2 Infinite Reality, 4 x MIPS R10000, 195 MHz, Irix 6.5.4
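The rates in figure 4 follow directly from the triangle counts, data volumes, and frame times; for example, case 1 on the HP Kayak with triangle strips (15 ms per frame):

```python
def rates(n_triangles, volume_bytes, frame_time_s):
    """Derive the triangle rate (Mio./s) and data rate (Gbit/s) from one
    measured frame time, as reported in figure 4."""
    tri_rate = n_triangles / frame_time_s / 1e6       # million triangles per second
    bit_rate = volume_bytes * 8 / frame_time_s / 1e9  # gigabit per second
    return tri_rate, bit_rate

# Case 1, "DX-03", triangle strips on the HP Kayak fx6 (15 ms):
tri, gbit = rates(91_584, 2_252_744, 0.015)
print(round(tri, 2), round(gbit, 2))  # 6.11 1.2
```

The same formula reproduces the other figure 4 entries within rounding, which is why the streaming scenario needs transmission rates comparable to the rendering rates, i.e. in the Gbit/s range.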
6 CONCLUSION

Our concept of preprocessing, real-time streaming, and efficient, high-quality presentation of 3D scenes as Virtual Reality Movies, embedded in a WWW browser, has proven to be a powerful tool, in particular for high-performance application environments, such as the exploration or presentation of visualized supercomputer simulation results on a 3D graphics workstation. This is a potential killer application for Gigabit networks, since the geometric rendering data throughput of a state-of-the-art 3D graphics system is on the order of Gbit/s, and the requirements for real-time transmission are similar in a streaming scenario. For example: N triangles, organized as a triangle strip with a 3D normal for every 3D vertex, have a volume of (N+2)*24 bytes in the usual 32-bit representation of coordinate and normal vector components – with N>>2, this leads, at 6 Mio. triangles/s, to a data rate above 1 Gbit/s.

Existing advanced distributed application scenarios will require the integration of collaborative functionality – such as synchronization of viewing parameters, annotations, and video-conferencing in a multi-user environment – and the integration of further media types – such as video and audio streams as parts of an immersive virtual reality presentation.

7 ACKNOWLEDGEMENT

This work is partly funded by the DFN-Verein (German Research Network), with funds from the BMBF (German Federal Ministry for Education and Research), in the project "DFN-Expo". The authors wish to thank A. von Berg (RVS) for the discussion about high-performance network issues and the configuration of gigabit ethernet.
8 REFERENCES

1. Andrews, K., Pesendorfer, A., Pichler, M., Wagenbrunn, K. H., Wolte, J.: Looking Inside VRwave: The Architecture and Interface of the VRwave VRML97 Browser. Proc. of VRML '98 Symposium, Monterey, California, 1998. (http://www2.iicm.edu/keith/publications.html)
2. Balaguer, J.-F., Gobbetti, E.: i3D: A High-Speed 3D Web Browser. Proc. of VRML '95 Symposium, San Diego, 1995. (http://www.crs4.it/~gobetti/)
3. Bell, G., Parisi, A., Pesce, M.: The Virtual Reality Modeling Language – Version 1.0 Specification. 09.11.1995. (http://www.vrml.org/Specifications/VRML1.0/)
4. Berners-Lee, T., Connolly, D.: Hypertext Markup Language – HTML 2.0. RFC 1866, 03.11.1995. (ftp://nis.nsf.net/documents/rfc/)
5. Berners-Lee, T., Masinter, L., McCahill, M.: Uniform Resource Locators (URL). RFC 1738, 20.12.1994. (ftp://nis.nsf.net/documents/rfc/)
6. Broll, W.: DWTP – An Internet Protocol for Shared Virtual Environments. Proc. of VRML '98 Symposium, Monterey, California, 1998. (http://ece.uwaterloo.ca/vrml98/papers/content.html)
7. Brutzman, D., Zyda, M., Watsen, K., Macedonia, M.: Virtual Reality Transfer Protocol (vrtp) Design Rationale. Workshop on Enabling Technology: Infrastructure for Collaborative Enterprises (WET ICE): Sharing a Distributed Virtual Reality, MIT, 18.–20.06.1997. (http://www.stl.nps.navy.mil/~brutzman/vrtp/vrtp_design.ps)
8. Bryson, S., Levit, C.: The Virtual Windtunnel. IEEE Computer Graphics and Applications 12(4), 1992.
9. Campagna, S., Kobbelt, L., Seidel, H.: Enhancing Digital Documents by including 3D-Models. Computers & Graphics, Vol. 22, No. 6, 1998.
10. Corrie, B., Sitsky, D., Mackerras, P.: Integrating High Performance Computing and Virtual Environments. Proceedings of the Seventh Parallel Computing Workshop, Canberra, Australia, September 1997. (http://cap.anu.edu.au/cap/projects/parvis/bibliography/BCDSPM97.ps.gz)
11. Doenges, P. K., Capin, T. K., Lavagetto, F., Ostermann, J., Pandzic, I. S., Petajan, E. E.: MPEG-4: Audio/Video & Synthetic Graphics/Audio for Mixed Media. Image Communication Journal, Vol. 5, No. 4, May 1997.
12. Engel, K., Grosso, R., Ertl, T.: Progressive Iso-surfaces on the Web. Proc. of IEEE Visualization 1998. (http://www9.informatik.uni-erlangen.de/ger/research/pub1998/)
13. Fielding, R., Gettys, J., Mogul, J., Nielsen, H., Berners-Lee, T.: Hypertext Transfer Protocol – HTTP/1.1. RFC 2068, 03.01.1997. (ftp://nis.nsf.net/documents/rfc/)
14. Freed, N., Borenstein, N.: Multipurpose Internet Mail Extensions (MIME). RFC 2049, 02.12.1996. (ftp://nis.nsf.net/documents/rfc/)
TERENA-NORDUnet Networking Conference (TNNC) 1999 15 15. Guéziec, A., Taubin, G., Horn, B., Lazarus, F.: A Framework for Streaming Geometry in VRML. IEEE Computer Graphics and Applications 19(2), 1999. 16. Gumhold, S., Straßer, W.: Real Time Compression of Triangle Mesh Connecti- vity. ACM SIGGRAPH ’98 Conference Proceedings, July 19–24, 1998. 17. Grau, O.: Representation of Temporal Changes of Flexible 3-D Objects. Inter- national Workshop on Synthetic-Natural Hybrid Coding and Three-Dimensio- nal Imaging 1997. 18. Haase, H.: Symbiosis of Virtual Reality and Scientific Visualization System. In: Computer Graphics Forum 15(3), 1996. 19. Haase, H., Dai, F., Strassner, J., Göbel, M.: Immersive Investigation of Scientific Data. In: Nielson, G. et al (eds.): Scientific Visualization – Overviews, Metho- dologies and Techniques, IEEE Computer Society Press, 1997. 20. Hardenberg, J.: RE: QvLib questions. VRML Hypermail Archive, 27.03.1995. (http://vag.vrml.org/www-vrml/arch/1107.html) 21. Has, M., Newman, T.: Color Management: Current Practice and The Adoption of a New Standard, 1996. (http://www.color.org/overview.html) 22. ISO/IEC 14772-1: The Virtual Reality Modeling Language (VRML97) – Part 1: Functional specification and UTF-8 encoding. International Standard, 1997. (http://www.vrml.org/Specifications/VRML97/) 23. ISO/IEC 14772-3: The Virtual Reality Modeling Language (VRML97) – Part 3: Compressed Binary Format Specification. Editor‘s Draft 5, 1997. 24. Klein, R.: Multiresolution representations for surface meshes. Proc. of SCCG, 1997. (http://www.gris.uni-tuebingen.de/people/staff/reinhard/mai97.ps.gz) 25. Klein, R., Straßer, W.: Handling of Very Large 3D-Surface-Datasets Using Mesh Simplification and Multiresolution Modeling. Tutorial, Computer Gra- phics International 1998, Hannover, Germany, June 22–26, 1998. (http://www.gris.uni-tuebingen.de/people/staff/reinhard/CGI-Tutorial/) 26. Kobbelt, L., Campagna, S., Seidel, H.: A General Framework for Mesh Deci- mation. 
Graphics Interface Proceedings, Vancouver, BC, 18–20 June 1998. 27. Kobbelt, L., Campagna, S., Vorsatz, J., Seidel, H.: Interactive Multi-Resolution Modeling on Arbitrary Meshes. ACM SIGGRAPH ’98 Conference Procee- dings, July 19–24, 1998. 28. Lane, D. E.: Scientific Visualization of Large-Scale Unsteady Fluid Flows. In: Nielson, G. M. et al (eds.): Scientific Visualization – Overviews, Methodolo- gies and Techniques, IEEE Computer Society Press, 1997. 29. Nebel, E., Masinter, L.: Form-based File Upload in HTML. RFC 1867, 07.11.1995. (ftp://nis.nsf.net/documents/rfc/) 30. Neider, J., Davis, T., Woo, M.: OpenGL Programming Guide – The Official Guide to Learning OpenGL, Release 1. Addison-Wesley, 1993. 31. Netscape: Netscape Navigator LiveConnect/Plug-In Software Development Kit, 1998. (http://home.netscape.com/comprod/development_partners/plugin_api/) 32. Netscape: Plug-In Guide – Communicator 4.0. January 1998. (http://developer.netscape.com/docs/manuals/communicator/plugin/) © 1999, RRZN/RVS University of Hanover 19990525-tnnc99-2.5.fm
TERENA-NORDUnet Networking Conference (TNNC) 1999 16 33. Olbrich, S., Pralle, H.: High-Performance Online Presentation of Complex 3D Scenes. IFIP Conference on High Performance Networking, Wien, 1998. 34. OPC – The OpenGL Performance Characterization Projekt: Viewperf Informa- tion and Results. (http://www.specbench.org/gpc/opc.static/viewin~1.html) 35. Parisi, T.: Corrections: Keeping Web 3D on Course. Keynote address at the VRML ‘99 Symposium, Paderborn, 1999. 36. Paul, B.: The Mesa 3-D graphics library. (http://www.ssec.wisc.edu/~brianp/Mesa.html) 37. Pesce, M., Kennard, P., Parisi, A.: Cyberspace. First WWW Conference, May 1994. (http://www.hyperreal.org/~mpesce/www.html) 38. Raasch, S., Etling, D.: Modeling Deep Ocean Convection: Large Eddy Simula- tion in Comparison with Laboratory Experiments. J. Phys. Oceanogr., 1998, Vol. 28, 1796–1802. 39. Schroeder, W., Martin, K., Lorensen, B.: The Visualization Toolkit. 2nd Edition. Prentice Hall, 1997. 40. Schroeder, W.: Polygon Reduction Techniques. IEEE Visualization 1995. 41. Schulzrinne, H., Rao, A., Lanphier, R.: Real Time Streaming Protocol (RTSP). RFC 2326, 14.04.1998. (ftp://nis.nsf.net/documents/rfc/) 42. Signès, J., Cazoulat, R., Pelé, D., Le Mestre, G.: An MPEG-4 Player integra- ting 3D, 2D graphics, video and audio. International Workshop on Synthetic- Natural Hybrid Coding and Three Dimensional Imaging 1997. 43. Strauss, P., Bell, G.: The VRML Programming Library – QvLib, Version 1.0 beta 1. 1995. (http://vag.vrml.org/www-vrml/vrml.tech/qv.html) 44. Taubin, G., Rossignac, J.: Geometric Compression Through Topology Surgery. IBM Research technical report RC-20340, 16.01.1996. (http://www.research.ibm.com/vrml/binary/pdfs/ibm20340.pdf) 45. Taubin, G., Gueziec, A., Horn, W., Lazarus, F.: Progressive Forest Split Com- pression. ACM SIGGRAPH ’98 Conference Proceedings, July 19–24, 1998. 46. Taubin, G., Rossignac, J.: Geometric Compression Through Topologic Surgery. ACM Transactions on Graphics, Vol. 17, No. 
2, April 1998. 47. Touma, C., Gotsman, C.: Triangle Mesh Compression. Graphics Interface Pro- ceedings, Vancouver, BC, 18–20 June 1998. 48. Upson, C., Faulhaber, T., Kamins, D., Laidlaw, D., Schlegel, D., Vroom, J., Gurwitz, R., van Dam, A.: The Application Visualization System: A Computa- tional Environment for Scientific Visualization. IEEE Computer Graphics and Applications, July 1989. 49. Vöckler, J.-S.: A quick glance at webfilt. RVS, University Hannover, 03.09.1997. (voeckler@rvs.uni-hannover.de) 50. Web 3D Consortium: The VRML Repository. (http://www.web3d.org/vrml/vrml.htm) 51. Wernecke, J.: The Inventor Mentor: Programming Object-Oriented 3D Gra- phics with Open Inventor. Release 2, Addison-Wesley, Reading, Mass., 1994. © 1999, RRZN/RVS University of Hanover 19990525-tnnc99-2.5.fm