System Reliability and Safety Concepts of the Humanoid Service Robot HERMES
Rainer Bischoff
Bundeswehr University Munich, Institute of Measurement Science
Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
Phone: +49-89-6004-3589, Fax: +49-89-6004-3074
E-Mail: Bischoff@ieee.org, URL: http://www.unibw-muenchen.de/hermes

Abstract

A service robot of anthropomorphic size and shape has been built to study safe ways of interaction with people and their common living environment. Although the robot is presently used mostly by trained personnel, it has also shown robust and safe behavior with novice users and with people who are not necessarily interested in robotic matters, e.g. at trade fairs, in television studios and at various demonstrations in our institute environment. During the design process we followed certain guidelines in both hardware and software that have proved to lead to a reliable and safe overall system. The main guideline was to construct the robot from building blocks that were themselves easy to test and to maintain, i.e., enclosed subsystems with clearly defined interfaces. We do not claim that our present system is failsafe and foolproof, but we believe that the strategies we embarked on could lead the way to robots having these characteristics.

1 Introduction

System reliability and safety are major concerns for everybody working with industrial robots or developing and manufacturing them. Nevertheless, they seem to be neglected by most robotics researchers developing service or personal robots. However, a reliable system and advanced safety concepts are needed especially for these types of robots, because they are intended to operate in unpredictable and unsupervised environments, in close proximity to or in direct contact with people who are not necessarily interested in them or who, even worse, try to harm them by disabling sensors or playing tricks on them.

System reliability and safety have not been a major issue in research institutions so far, because it is believed that industrial companies, when they actually market service or personal robot products, will eventually deal with this question. Researchers in laboratories have always been satisfied if their robot performed well once or twice under specific conditions or at end-of-project demonstrations, which enabled them to write a publication about their "perfectly" performing robot. However, these "performances" make people (sponsors, the public) believe that most of the robotics community's problems are already solved, which is certainly not true. On the contrary, much research is still needed to improve considerably not only system reliability and safety concepts, but also design concepts, locomotion and manipulation capabilities, cooperation and communication abilities, and – probably most importantly – adaptability, learning capabilities and sensing skills.

To advance research in all of the aforementioned areas we have developed the humanoid experimental robot HERMES. It is built from 25 motor modules with identical mechanical and electrical interfaces, thus yielding a very flexible, extensible and modular design that can be easily modified and maintained (Figure 1). With its omnidirectional undercarriage, body, head, eyes and two arms it currently has 22 degrees of freedom and resembles a human in height and shape. Its main exteroceptive sensor modality is stereo vision; both camera "eyes" can be actively and independently controlled in pan and tilt. A variety of proprioceptive sensors further enhances its perceptual abilities. A multimodal, human-friendly interface built upon the basic senses – vision, touch and hearing – enables even non-experts to control the robot intuitively.

Figure 1: Humanoid experimental robot HERMES with an omnidirectional undercarriage, a bendable body, two arms with two-finger grippers and a 6-DOF stereo vision system; size: 1.85 m x 0.70 m x 0.70 m; weight: 250 kg; the low center of gravity provides good stability.

2 Design Strategies

In our opinion, system reliability, operating robustness and safety of robots emerge from the following simple design strategies and guidelines:
1. Learning from nature how to design reliable, robust and safe systems.
2. Providing natural and intuitive communication and interaction between the robot and its environment.
3. System reliability depends on ease of maintenance.
4. Only a nice-looking robot is a reliable robot.
We believe that future robotic systems could benefit from applying these design strategies and guidelines in addition to the general design rules that must be followed by the designer of any robotic system with respect to the application domain. In the sequel these design strategies are explained in greater detail.
Nature has provably designed reliable, robust and safe systems. According to the classical approach, robot control is model-based: numerical models of the kinematics and dynamics of the robot and of the external objects that the robot should interact with, as well as quantitative sensor models, are the basis for controlling the robot's motions. The main advantage of model-based control is that it lends itself to the application of classical control theory and, thus, may be considered a straightforward approach. Its weak point is that it breaks down when there is no accurate quantitative agreement between reality and the models. Differences between models and reality may come about easily; an error in one of the many coefficients that are part of the numerical models suffices. For instance, a constant joint-angle offset of only one degree already displaces the tip of an arm with a reach of 1.2 m by roughly 2 cm, which can easily make a grasp fail. Among the many possible causes for discrepancies are initial calibration errors, aging of components, changes of environmental conditions such as temperature, humidity, electromagnetic fields or illumination, and maintenance work and replacement of components, to mention only a few. Consequently, most robots work only in carefully controlled environments and need frequent maintenance (including repeated calibration), in addition to a cumbersome and expensive initial calibration.

Organisms, on the other hand, are robust and adapt easily to changes of their own conditions and of the environment. They never need any calibration, and they normally do not know the values of any parameters related to the characteristics of their "sensors" or "actuators". Obviously, they do not suffer from the shortcomings of model-based control, which leads us to the assumption that they use something other than quantitative measurements and numerical models for controlling their motions. Perhaps their motion control is based on a holistic assessment of situations for the selection of behaviors to be executed. Perhaps robotics could benefit from following a similar approach.

Following this line of argumentation, we strongly believe that sensing in general should be based on the senses that have proved their effectiveness in nature. Therefore, vision – the sensor modality that predominates in nature – is also an eminently useful and practical sensor modality for robots. Tactile sensing and hearing may likewise greatly improve a robot's safe operation, as shown by nature. Active sensing (laser, radar, sonar) might be a suitable approach in the short run for specific system solutions, but only a more generic approach with low-cost, universally applicable (passive) sensor modalities on the robot will lead in the long run to the deployment of service and personal robots in massive numbers. In addition, passive sensors cannot harm human eyes, ears or tissue, whereas active sensors could be hazardous.

Natural and intuitive communication and interaction enhances safety. Any person who might – voluntarily or not – encounter a robot needs to be able to communicate and interact with it in a natural and intuitive way. Therefore, the human communication interface has to be designed such that no training is required for any person who might get in contact with the robot. This can be achieved if the human-robot communication resembles a dialogue that could just as well take place between two humans. If the robot resembles a human, a person can easily derive from his or her everyday experience with humans how a specific interaction, e.g., exchanging objects with the robot, might work. Even if the robot does not have a humanoid shape, a safe, confidence-inspiring interaction could benefit from humanoid characteristics such as smoothness of movements and compliance of the joints or links. In general, unexpected robot movements should be avoided. Instead, gentle, human-like motions should be generated to enable operators or uninvolved persons to anticipate the robot's actions.

It would be dangerous, however, to try to anticipate people's movements in order to let the robot operate faster. Since humans might behave in illogical, irrational or unpredictable ways, it is necessary to have the robot move and interact in a way that prevents accidents under all circumstances.

Therefore, it might be useful to additionally visualize the robot's state or subsequent motions in a way that facilitates anticipation, e.g., with the help of facial expressions, postures or even indicators that humans are familiar with from everyday situations. The goal should be to exploit people's own intuition to make the interaction safer.

System reliability depends on ease of maintenance. In our opinion, the first step towards making a complex system safe is to make its components reliable. If the components themselves are failsafe and need little or no maintenance at all, overall system safety is greatly increased. We believe that only a robot that needs little or no maintenance and that can be easily repaired (if ever needed) will be accepted as a co-worker, caretaker or companion. This requires, among other things, enclosed and maintenance-free subsystems such as the modules used to build the robot's joints.

Only a nice-looking robot is a reliable robot. It is a matter of personal experience that only "nice" or "tidy" looking robots are really reliable, especially in research environments. This might result from the fact that a designer who makes an effort to build a nice-looking robot also places great emphasis on doing other things right, such as reliably connecting the different sensors, actuators and peripherals and finding proper ways to route all the cables. Many robots fail (or only work on "Wednesday afternoon when the sun is shining") because of broken cables and unreliable connections. Of course, a good design involves more than these esthetic aspects. Industrial designers, for example, consider all aspects from ergonomics over construction to deployment. Nevertheless, it should be mentioned that only a few research institutes really try to consider these aspects in a holistic fashion to provide a truly robust system.

3 Implementation of the Design Strategies

We tried to apply the design strategies laid out above and to translate them into a really robust and safe system. Two of the peculiarities of our robot HERMES are certainly its anthropomorphic shape and its modular design. We have experienced that its anthropomorphic shape encourages people to interact with HERMES in a natural way. Besides its appearance, HERMES possesses several other promising features, inside and outside, that make it intrinsically more reliable and safer than other robots. In the sequel these special safety measures are explained, as well as the robot's special hardware structure and system architecture (software), which contribute to an overall safe system.
3.1 Robot hardware

In designing our humanoid experimental robot we placed great emphasis on modularity and extensibility [Bischoff 1997]. All drives are realized as modules with compatible mechanical and electrical interfaces; each drive module consists of two cubes rotating relative to each other and containing a motor, a Harmonic Drive gear, power electronics, sensors, a microcontroller and a communication interface. A standardized CAN bus connects all drive modules with the main computer.

HERMES runs on four wheels, arranged at the centers of the sides of its base. The front and rear wheels are driven and actively steered; the lateral wheels are passive.

The manipulator system consists of two articulated arms with 6 degrees of freedom each, mounted on a body that can bend forward (130°) and backward (-90°). The work space extends up to 120 cm in front of the robot. The heavy base guarantees that the robot will not lose its balance even when the body and the arms are fully extended to the front. Currently each arm is equipped with a two-finger gripper that is sufficient for basic manipulation experiments.

Figure 2: Motion sequence illustrating the enlarged work space, but also the higher hazards, gained by a bendable body with two arms (6 degrees of freedom each). The heavy undercarriage prevents tipping over and collapsing onto people.

The main sensors are two video cameras mounted on independent pan/tilt drive units, in addition to the pan/tilt unit that controls the common "head" platform. The cameras can be moved with accelerations and velocities comparable to those of the human eye.

A radio Ethernet interface allows controlling the robot remotely. A wireless keyboard can be used to teleoperate the robot up to distances of 7 m. Separate batteries for the motors and the information processing system allow continuous operation of the robot for several hours without recharging.

A hierarchical multi-processor system is used for information processing and robot control. The control and monitoring of the individual drive modules is performed by the sensors and controllers embedded in each module. The robot's "brain" is a network of digital signal processors (DSP, TMS 320C40) embedded in a standard industrial PC. Sensor data processing (including vision), situation recognition, behavior selection and high-level motion control are performed by the DSPs, while the PC provides data storage and the human interface.

3.1.1 Special hardware measures for enhancing reliability and operating safety

Modular and standardized computer hardware. Ease of maintenance and repair is certainly one of the most prominent features of HERMES, since the robot consists of 25 functionally similar drive modules with almost identical mechanical and electrical interfaces. If any of these modules should ever fail, it could easily be replaced with a new, readily available off-the-shelf module. The same holds true for the robot's brain: each DSP board and the single slot CPU can easily be replaced from stock. A rugged PC with special shielding and ventilation keeps the processors' temperatures down and reduces electromagnetic noise to a minimum.

Cables and connectors. Within HERMES, all signal and power line connectors are secured with screws or similar fixtures to their respective housings. All connectors are strain-relieved to eliminate the risk of loose or broken cables. Electromagnetic shielding of the cabling has also been a major concern, to diminish the effect of the many sources of electromagnetic fields within the robot.

Power circuitry and emergency stopping. Safety standard regulations require that all power consumers are disconnected from the supply in case of an emergency and that all drives are actively braked, e.g., if the bumpers are touched. In this case a human operator is needed to reset the robot; any kind of intelligent assessment of the prevailing "emergency" situation by the robot is not allowed. However, in normal living environments the robot might need to touch things, or cannot avoid doing so if it wants to continue its given task. Should it not have the ability to assess the situation intelligently? For instance, during simple maneuvers such as turning around a corner it might suffice to back up a little or to change the curvature in order to prevent any damage to the walls. Another scenario could require setting the robot's modules into a compliant mode, in which all joints can be moved manually with ease to prevent further injury to a human, instead of actively braking all drives. We believe that future robots need more intelligent safety concepts than the existing ones in order to work with or in close proximity to humans. It will simply not be safe enough to just follow the existing safety regulations for industrial manipulators or automated guided vehicles.

Therefore, our safety concept allows active utilization of the bumpers to enable tactile sensing and to complement missing visual information. Program failures could be detected by implementing so-called watchdog timers on different levels, e.g., in the robot's microcontrollers, the slot CPU and the DSPs. Any watchdog timer running out would cause the robot to stop via electronic emergency switches. So far, these watchdog timers have not been implemented; HERMES only possesses two standard emergency buttons. One can be activated by pressing a clearly visible red-yellow button on the robot's cargo area, the other via a wireless emergency switch carried by a human operator. They are connected in series and only interrupt the power circuitry for the motors; the information processing system keeps running as long as the robot is switched on, so no time is wasted in case of an emergency to "re-boot" the robot.

On a lower level, current sensors in each module check whether the motor current is too high. In this case the power line is interrupted to prevent further damage to the electronic components, and a brake is activated to prevent grasped objects from being dropped.
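To make these module-level protections more concrete, the following sketch shows how the overcurrent cut-off described above and the proposed (so far unimplemented) watchdog check could be combined on a drive module's microcontroller. It is only an illustration under assumed thresholds and with simulated hardware access, not the actual HERMES firmware; all identifiers and values are invented for this example.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define CURRENT_LIMIT_MA    4000u   /* assumed overcurrent threshold */
    #define WATCHDOG_TIMEOUT_MS  100u   /* assumed heartbeat timeout     */

    /* Hardware access is simulated here; on a real module these would be
     * register reads and power-stage drivers. */
    static uint32_t clock_ms          = 0;
    static uint32_t last_heartbeat_ms = 0;
    static uint16_t motor_current_ma  = 1200;
    static bool     power_line_open   = false;
    static bool     brake_engaged     = false;

    static void heartbeat_received(void) { last_heartbeat_ms = clock_ms; }

    /* Periodic safety check, e.g. called from a 1 kHz timer interrupt. */
    static void safety_tick(void)
    {
        bool overcurrent      = motor_current_ma > CURRENT_LIMIT_MA;
        bool watchdog_expired = (clock_ms - last_heartbeat_ms) > WATCHDOG_TIMEOUT_MS;

        if (overcurrent || watchdog_expired) {
            power_line_open = true;   /* cut motor power to protect the electronics   */
            brake_engaged   = true;   /* hold the joint so grasped objects are kept   */
        }
    }

    int main(void)
    {
        for (clock_ms = 0; clock_ms < 300; clock_ms++) {
            if (clock_ms < 150) heartbeat_received();  /* higher levels fall silent at t = 150 */
            if (clock_ms == 200) motor_current_ma = 6500;
            safety_tick();
        }
        printf("power line open: %d, brake engaged: %d\n", power_line_open, brake_engaged);
        return 0;
    }

The essential design choice in such a scheme is that the check runs locally at a fixed rate, so a module could still protect itself and its payload even if the higher control levels or the communication bus fail.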
Artificial skin. A modular approach has also been taken in the design of an artificial skin for the robot. This "skin" is based on conductive foam and serves two purposes: first, it damps accidental and unwanted impacts between the robot and humans or environmental objects; second, it allows the robot to identify the contact locations of, and the forces exerted by, the touching objects. Contact points and forces are measured via a dense grid of electrodes underneath the foam; pressing the foam results in a higher conductivity of the material (a lower resistance, respectively). The resistance between electrodes is continuously measured (at 50 Hz) and evaluated by dedicated microcontrollers.

In case of touch events these microcontrollers first send messages to higher hierarchical computing levels, which decide about appropriate reactions based on the robot's current situation. If for any reason these higher levels do not immediately respond to the message, the microcontroller will directly stop the associated motor module(s).

A bumper consisting of 12 identical sections of the artificial skin surrounding the robot's undercarriage (at a height of 30-330 mm measured from the ground, each section 200 mm wide) has already been realized. Furthermore, two new two-finger grippers that are completely covered by this conductive foam have been developed and are currently being revised. In the future it is planned to cover the whole robot structure with this kind of tactile sensing element. Ideally, these elements will be directly connected to the individual motor modules and linked via a safe bus system to the central information processing unit; in our opinion this is the only way to reliably detect unwanted contacts between the robot and its environment. All elements are connected via a high-speed serial communication bus (CAN) and can be easily replaced.

Another (or a complementary) solution could be to employ slip clutches in the joints of the manipulators, or to implement intelligent control algorithms that continuously predict and verify force and torque on all joints. A prerequisite for the latter safety concept would be a lightweight manipulator that allows position, velocity and torque control with minimal control-loop cycle times.

3.2 System architecture (software)

The robot's software ties its sensing and motor capabilities to a human-friendly interface. In its core, the system is behavior-based, which is now generally accepted as an efficient basis for autonomous robots [Arkin 1998]. However, to be able to select behaviors intelligently and to pursue long-term goals in addition to purely reactive behaviors, we have introduced a situation-oriented deliberative component that is responsible for situation assessment and behavior selection.

Figure 3: System architecture of a personal robot based on the concepts of situation, behavior and skills.

3.2.1 System Overview

Figure 3 shows the essence of the situation-oriented behavior-based robot architecture as we have implemented it. The situation module (situation assessment & behavior selection) acts as the core of the whole system and is interfaced via "skills" in a bidirectional way with all other hardware components.
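To give an impression of what such a uniform, bidirectional skill interface might look like, the following C sketch models each skill as a small record of start/update/stop functions that the situation module can run without knowing any hardware details. The structure, names and toy behavior loop are assumptions made for illustration and do not reproduce the actual HERMES implementation.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { SKILL_MOTOR, SKILL_SENSOR, SKILL_SENSORIMOTOR, SKILL_COMMUNICATIVE } skill_kind_t;

    typedef struct skill {
        const char   *name;
        skill_kind_t  kind;
        void        (*start)(struct skill *self);
        bool        (*update)(struct skill *self);  /* false = finished or failed        */
        void        (*stop)(struct skill *self);    /* must leave the robot in a safe state */
        int           steps;                        /* toy internal state                */
    } skill_t;

    /* A toy motor skill standing in for, e.g., "pan the head towards a sound source". */
    static void demo_start(skill_t *s)  { s->steps = 0; printf("%s: start\n", s->name); }
    static bool demo_update(skill_t *s) { return ++s->steps < 3; }
    static void demo_stop(skill_t *s)   { printf("%s: stop after %d steps\n", s->name, s->steps); }

    /* The situation module would call something like this for the skills that
     * make up the currently selected behavior. */
    static void run_behavior(skill_t **skills, int n)
    {
        for (int i = 0; i < n; i++) skills[i]->start(skills[i]);

        bool active = true;
        while (active) {
            active = false;
            for (int i = 0; i < n; i++)
                if (skills[i]->update(skills[i])) active = true;  /* keep cycling while any skill runs */
        }

        for (int i = 0; i < n; i++) skills[i]->stop(skills[i]);   /* always end in a safe state */
    }

    int main(void)
    {
        skill_t head = { "pan head", SKILL_MOTOR, demo_start, demo_update, demo_stop, 0 };
        skill_t *behavior[] = { &head };
        run_behavior(behavior, 1);
        return 0;
    }

The benefit of such a uniform interface is that very different capabilities, from moving the undercarriage to producing speech output, can be started, monitored and canceled by the situation module in exactly the same way.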
Motor skills are simple movements of the robot's actuators. They can be arbitrarily combined to yield a basis for more complex control commands. Encapsulating the access to groups of actuators that form robot parts, such as the undercarriage, arms, body and head, leads to a simple interface structure and allows an easy generation of pre-programmed motion patterns. Sensor skills encapsulate the access to one or more sensors and provide the situation module with proprioceptive or exteroceptive data. Sensorimotor skills combine sensor and motor skills to yield sensor-guided robot motions, e.g., vision-guided or tactile and force/torque-guided motion skills. Communicative skills pre-process user input and generate valuable feedback for the user according to the current situation and the given application scenario. The system's knowledge bases are organized and accessed via data processing skills. They return specific information upon request and add newly gained knowledge (e.g., map attributes) to the robot's databases, or provide means of more complex data processing, e.g., path planning. For a more profound theoretical discussion of our system architecture, which is based upon the concepts of situation, behavior and skill, see [Bischoff, Graefe 1999].

3.2.2 Implementation

A robot operating system has been developed that allows sending and receiving messages via different channels among the different processors and microcontrollers. All tasks and threads run asynchronously, but can be synchronized via messages or events.

Overall control is realized as a finite state automaton that does not allow unsafe system states. It is capable of responding to prioritized interrupts and messages. After powering up, the robot finds itself in the state "Waiting for next mission description". A mission description is provided as a text file that may be loaded from disk, received via e-mail, entered via keyboard, or result from a spoken dialogue. It consists of an arbitrary number of single commands or embedded mission descriptions that let the robot perform a required task. All commands are written or spoken, respectively, in natural language and passed to a parser and an interpreter. If a command cannot be understood, or is under-specified or ambiguous, the situation module tries to complement the missing information from its situated knowledge or asks the user via its communicative skills to provide it.

Motion skills are mostly implemented at the microcontroller level within the actuator modules. High-level motor skills, such as coordinated smooth arm movements, are realized by a dedicated DSP interfaced to the microcontrollers via a CAN bus. Sensor skills are implemented on those DSPs that have direct access to digitized sensor data, especially digitized images.

3.2.3 Special software measures for enhancing safety and operating robustness

Learning by doing. Two forms of learning are currently being investigated. Both help the robot to learn from scratch by actually doing a useful task: one, to have the robot generate, or extend, an attributed topological map of the environment over time in cooperation with human teachers; two, to let the robot automatically acquire or improve skills, e.g., grasping of objects, without quantitatively correct models of its manipulation or visual system.

The general idea for solving the first learning problem is to let the robot behave like a new worker in an office, with the ability to explore, e.g., a network of corridors, and to ask people for reference names of specific points of interest, or to let people explain how to get to those points of interest. The geometric information is provided by the robot's odometry, and relevant location names are provided by the people who have an interest in the robot knowing a place under a specific name. In this way the robot quickly learns from scratch how (specific) persons refer to places and which are the most important places (and routes to these places).

The general idea for solving the second learning problem is simple. While the robot watches its end effector with its cameras, like a playing infant watching its hands, it sends more or less arbitrary control commands to its motors. By observing the resulting changes in the camera images it "learns" the relationships between such changes in the images and the control commands that caused them. After having executed a number of test motions, the robot is able to move its end effector to any position and orientation in the images that is physically reachable. If, in addition to the end effector, an object is visible in the images, the end effector can be brought to the object in both images and, thus, in the real world. Based on this concept a robot can localize and grasp objects without any knowledge of its kinematics or its camera parameters. In contrast to other approaches with similar goals, but based on neural nets, no training is needed before the manipulation is started.

Speaker-independent voice recognition. The robot understands natural continuous speech independently of the speaker and can, therefore, be commanded in principle by any person able to speak. This is a very important feature, not only because it allows anybody to communicate with the robot without needing any training with the system, but, more importantly, because the robot can be stopped by anybody via voice in case of an emergency. Speaker independence is achieved by providing grammar files and vocabulary lists that contain only those words and provide only those command structures that can actually be understood by the robot. In the current implementation HERMES understands 58 different command structures and 344 words.

Robust dialogues for dependable interaction. Most parts of robot-human dialogues are situated and built around robot-environment or robot-human interactions, a fact that has been exploited to enhance the reliability and speed of the recognition process by using so-called contexts. They contain only those grammatical rules and word lists that are needed for a particular situation. However, at any stage in the dialogue a number of words and sentences not related to the current context are available to the user, too. These words are needed to "reset" or bootstrap a dialogue, to trigger the robot's emergency stop, and to make the robot execute a few other important commands at any time.
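The context mechanism can be pictured as follows: the active context admits only the commands that make sense in the current situation, while a small set of global words, among them the emergency stop, is accepted at all times. The short C sketch below illustrates this idea with flat word lists; the actual system uses grammar files and vocabulary lists, and all names and phrases shown here are invented for illustration.

    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    typedef struct {
        const char  *name;
        const char **commands;      /* command patterns valid only in this context */
        size_t       count;
    } dialogue_context_t;

    /* Globally available words: accepted at any stage of any dialogue. */
    static const char *global_commands[]     = { "stop", "emergency stop", "reset dialogue" };
    static const char *navigation_commands[] = { "go to the door", "turn left", "follow me" };

    static const dialogue_context_t navigation = { "navigation", navigation_commands, 3 };

    /* Returns 1 if the utterance is understood, either globally or in the context. */
    static int utterance_accepted(const dialogue_context_t *ctx, const char *utterance)
    {
        for (size_t i = 0; i < sizeof global_commands / sizeof global_commands[0]; i++)
            if (strcmp(utterance, global_commands[i]) == 0)
                return 1;                          /* e.g. triggers the emergency stop */

        for (size_t i = 0; i < ctx->count; i++)
            if (strcmp(utterance, ctx->commands[i]) == 0)
                return 1;

        return 0;                                  /* not understood in this situation */
    }

    int main(void)
    {
        printf("%d\n", utterance_accepted(&navigation, "follow me"));        /* 1 */
        printf("%d\n", utterance_accepted(&navigation, "emergency stop"));   /* 1, despite the navigation context */
        printf("%d\n", utterance_accepted(&navigation, "grasp the bottle")); /* 0 in this context */
        return 0;
    }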
It is important to note that the robot is always in charge of the current action and controls the flow of information towards the user. If the robot is asked by a user to execute a service task, it will follow a specific "program" consisting of concatenated and combined skills, thereby tightly coupling acting, sensing and speech acts in a predefined way. If something goes wrong, i.e., some parameters exceed their bounds, the current command will be canceled by the robot. Canceling a command involves returning to a safe state, which again might involve communication and interaction with the user.

Object-oriented image processing. One apparent difficulty in implementing vision as a sensor modality for robots is the huge amount of data generated by a video camera: about 10 million pixels per second, depending on the video system used. Nevertheless, it has been shown (e.g., by [Graefe 1989]) that modest computational resources are sufficient for realizing real-time vision systems if a suitable system architecture is implemented.

As a key idea for the design of efficient robot vision systems the concept of object-oriented vision was proposed. It is based on the observation that both the knowledge representation and the data fusion processes in a vision system may be structured according to the visible and relevant external objects in the environment of the robot. For each object that is relevant for the operation of the robot at a particular moment the system has one separate "object process". An object process receives image data from the video section (cameras, digitizers, video bus etc.) and continuously generates and updates a description of its assigned physical object. This description emerges from a hierarchically structured data fusion process which begins with the extraction of elementary features, such as edges, corners and textures, from the relevant image parts and ends with matching a 2-D model to the group of features, thus identifying the object.

Recognition of relevant objects is crucial for the robot's operation. The decision which objects have to be detected and tracked is made by the situation module. It also decides that the robot has to move more slowly if, e.g., some features are tracked less reliably, and that it has to stop if the features are lost. Based on the type of detected and tracked objects, the speed of the robot may be adjusted. For instance, since intersections constitute possible hazards because of people suddenly walking around the corner, the robot will automatically slow down before this kind of hazard can occur.

4 Experiments and Results

A number of experiments have been carried out in the meantime. The robot has been presented at trade fairs, in television studios and at various demonstrations in our institute environment. Due to limited space, unfortunately, none of these experiments can be described here; the reader may refer to the literature ([Bischoff, Graefe 1998], [Bischoff 2000]) or check the web page http://www.unibw-muenchen.de/hermes for details.

One of the promising results is that the humanoid shape and the human communication interface of the robot encouraged people to interact with it in an almost natural way. Another is that our calibration-free approach seems to pay off: we did experience offset problems at system initialization, caused by heating, wear of parts or simply aging. These offsets could have produced severe problems, e.g., during object manipulation, if the employed methods had relied on exact kinematic modeling and calibration. Since our navigation and manipulation algorithms rely only on qualitatively (not quantitatively) accurate information, reliable system performance can be guaranteed nonetheless.

5 Summary and Conclusions

A service robot of anthropomorphic size and shape has been built to study safe ways of interaction with people and their common living environment. Although the robot is presently used mostly by trained personnel, it has also shown robust and safe behavior with novice users and with people who are not necessarily interested in robotic matters, e.g., at trade fairs, in television studios and at various demonstrations in our institute environment. System reliability from a hardware point of view is mostly guaranteed by the modular robot structure, which can be easily maintained; the robot is basically constructed from readily available motor modules with standardized and viable mechanical and electrical interfaces. System reliability (operating robustness) and safety are ensured by a simple but powerful skill-based system architecture that integrates visual, tactile and auditory sensing with various motor skills that do not rely on quantitatively exact models or accurate calibration. The robustness of the robot's camera eyes with respect to varying lighting conditions is greatly enhanced by actively controlling the integration time of the CCD sensor elements within an object-oriented software framework, thus allowing safe navigation and manipulation even under uncontrolled and sometimes difficult lighting conditions. Recent efforts include the development of an artificial touch-sensitive skin that can be easily attached to any motor module or outer structure element, such as the undercarriage, grippers or arms. The robot understands natural spoken language speaker-independently and can, therefore, be commanded in principle by any person able to speak.

Admittedly, today's robots (including HERMES) still have limited sensing abilities. Also, the perception quality is not high enough to cope with all kinds of real-world situations: anyone who wants to trick the robot can do it, and the robot will fail. The slow progress in this area is certainly due to the complexity of the problem, but also to the fact that researchers all over the world are wasting time building and maintaining their own robotic research platforms and reinventing and implementing algorithms again and again. Establishing hardware and safety standards (comparable to those for PCs and industrial robots) and providing software libraries for already solved perception problems would definitely accelerate research.

6 Literature

Arkin, R. C. (1998). Behavior-Based Robotics. MIT Press, Cambridge, MA, 1998.

Bischoff, R. (1997). HERMES – A Humanoid Mobile Manipulator for Service Tasks. Proc. of the International Conference on Field and Service Robotics. Canberra, Australia, Dec. 1997, pp. 508-515.

Bischoff, R. (2000). Towards the Development of 'Plug-and-Play' Personal Robots. 1st IEEE-RAS International Conference on Humanoid Robots. MIT, Cambridge, September 7-8, 2000.

Bischoff, R.; Graefe, V. (1998). Machine Vision for Intelligent Robots. IAPR Workshop on Machine Vision Applications. Makuhari/Tokyo, November 1998, pp. 167-176.

Bischoff, R.; Graefe, V. (1999). Integrating Vision, Touch and Natural Language in the Control of a Situation-Oriented Behavior-Based Humanoid Robot. IEEE Conference on Systems, Man, and Cybernetics, October 1999, pp. II-999 - II-1004.

Graefe, V. (1989). Dynamic Vision Systems for Autonomous Mobile Robots. Proc. IEEE/RSJ International Workshop on Intelligent Robots and Systems, IROS '89. Tsukuba, pp. 12-23.