http://www.cim.mcgill.ca/~dudek/mobile.html

</a>of York U. and D. Wilkes at Ontario Hydro</h4>We are interested in elaborating a taxonomy for systems of multiple mobile robots.  The specific issues we are focusing on are the relationships between inter-robot communication, sensing, and coordination of behaviour in the context of position estimation and exploration. A short paper describing a trial experiment in this context is <a href="ftp://ftp.cim.mcgill.ca/pub/mobile-robot/paper:simpl-convoy-w-vision.ps">available in postscript form.</a><p><li>Mapping using weak information<BR><h4>G. Dudek in collaboration with Professors E. Milios and <a href="http://www.cs.yorku.ca/People/jenkin/Welcome.html">M. Jenkin</a> of York U. and D. Wilkes at Ontario Hydro</h4>Autonomous navigation using sensory information often depends on a usable map of the environment.  This work deals with the automatic creation of such maps by an autonomous agent and the minimal requirements a map must satisfy in order to be useful.  One aspect of this work is the analysis of how uncertainty, either in the map or in the sensing devices, relates to the reliability and cost of navigation and path planning.  Another aspect is the development of sensing strategies and behaviours that facilitate reliable self-location and map construction.<p><li>Probabilistic sonar understanding<BR><h4>Simon Lacroix, Gregory Dudek</h4><p><li>Pose Estimation From Image Data Without Explicit Object Models<BR><h4>G. Dudek, Chi Zhang</h4>We consider the problem of locating a robot in an initially unfamiliar environment from visual input. The robot is not given a map of the environment, but it does have access to a limited set of training examples, each of which specifies the video image observed when the robot is at a particular location and orientation.  Such data might be acquired using dead reckoning the first time the robot entered an unfamiliar region (using some simple mechanism, such as sonar, to avoid collisions).
In this paper, we address a specific variant of this problem for experimental and expository purposes: how to estimate a robot's orientation (pan and tilt) from sensor data. Performing the requisite scene reconstruction needed to construct a metric map of the environment using only video images is difficult. We avoid this by using an approach in which the robot learns to convert a set of image measurements into a representation of its pose (position and orientation). This provides a <em>local</em> metric description of the robot's relationship to a portion of a larger environment.  A large-scale map might then be constructed from a collection of such local maps.  In the case of our experiment, these maps express the statistical relationship between the image measurements and camera pose. The conversion from visual data to camera pose is implemented using a multi-layer neural network that is trained using backpropagation. For extended environments, a separate network can be trained for each local region. The experimental data reported in this paper for orientation (pan and tilt) suggest that the accuracy of the technique is good, while the on-line computational cost is very low.  <p><li>Spatial abstraction and mapping<BR><h4>P. Mackenzie, G. Dudek</h4>This project involves the development of a formalism and methodology for making the transition from raw, noisy sensor data collected by a roving robot, to a map composed of object models, and finally to a simple abstract map described in terms of discrete places of interest.  An important early stage of such processing is the ability to select, represent and find a discrete set of places of interest, or landmarks, that will make up a map.
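Such an abstract map of discrete places is naturally represented as a graph: nodes are places of interest, each tagged with a sensor signature, and edges are traversable paths between them. The following is a minimal illustrative sketch of that idea, not the project's actual data structures; the place names, signature vectors, and nearest-signature matching rule are all invented for illustration.

```python
class PlaceGraph:
    """A toy topological map: places with sensor signatures, linked by paths."""

    def __init__(self):
        self.signatures = {}   # place name -> signature vector (list of floats)
        self.edges = {}        # place name -> set of adjacent place names

    def add_place(self, name, signature):
        self.signatures[name] = signature
        self.edges.setdefault(name, set())

    def connect(self, a, b):
        # Paths are traversable in both directions.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def localize(self, observed):
        # Match a (noisy) observed signature to the nearest stored one
        # by squared Euclidean distance.  Individually ambiguous percepts
        # are resolved only up to this nearest-signature criterion.
        def dist2(sig):
            return sum((x - y) ** 2 for x, y in zip(sig, observed))
        return min(self.signatures, key=lambda p: dist2(self.signatures[p]))


# Hypothetical two-place map:
m = PlaceGraph()
m.add_place("corridor-junction", [0.9, 0.1, 0.4])
m.add_place("doorway", [0.2, 0.8, 0.5])
m.connect("corridor-junction", "doorway")
print(m.localize([0.85, 0.15, 0.35]))  # -> corridor-junction
```

A real system would of course use richer signatures and handle ties among similar places, which is where exploration strategy matters.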
Associated problems are those of using a map to accurately localize a mobile robot and of generating intelligent exploration plans to verify and elaborate a map. <a href="ftp://ftp.cim.mcgill.ca/pub/mobile-robot/paper:RA94-model-based-localization.ps.Z">Click here for a compressed postscript copy of a recent paper on this work.</a><p><li>Multi-sensor fusion for mobile robotics<BR><h4>MRL group members</h4><a href="ftp://ftp.cim.mcgill.ca/pub/mobile-robot/fusion.html">Click here for abstract (with picture)</a><p><li>Spatial Mapping with Uncertain Data<BR><h4>G. Dudek</h4>As a sensor-based mobile robot explores an unknown environment, it collects percepts about the world it is in.  These percepts may be ambiguous individually, but as a collection they provide strong constraints on the topology of the environment. Appropriate exploration strategies and representations allow a limited set of possible world models to be considered as maps of the environment.  The structure of the real world and the exploration method used determine the reliability of the final map and the computational and perceptual complexity of constructing it.  Computational tools being used to construct a map from uncertain data range from graph-theoretic to connectionist.<p><li>Human object recognition and shape integration<BR><h4>Gregory Dudek; Daniel Bub: Neurolinguistics, Montreal Neurological Inst.; Martin Arguin: Psychology Dept., University of Montreal</h4>Computational vision is defined, to a large extent, with reference to the visual abilities of humans.  In this project we are examining the relationship between the characteristics of object shape and the abilities of humans to recognize these shapes.
This includes the modelling of subjects with object recognition deficits due to brain damage, as well as normal subjects. <a href="ftp://ftp.cim.mcgill.ca/pub/mobile-robot/paper:human-form-recog-capri.ps.Z">Click here for a compressed postscript copy of a recent paper on this work.</a><p><li>Dynamic reasoning, navigation and sensing for mobile robots<BR><h4>Martin D. Levine, Peter Caines, Renato DeMori, Gregory Dudek, Paul Freedman (CRIM), Geoffrey Hinton (University of Toronto)</h4>The goal of this project is to develop both the theoretical basis and a practical instantiation of a mobile robotic system that will be able to reason about tasks, recognize objects in its environment, map its environment, understand voice commands, navigate through the environment, and perform specified search tasks. This will be achieved in a dynamic environment, in that knowledge of a (possibly changing) world may be updated, and the tasks themselves may be radically altered during the system's operation.   Core research areas involved include perceptual modelling, control theory, neural networks, graph theory, attentive control of processing, and speech understanding. Among the key capabilities intended as outcomes of this project are:    <ul>    <li>  Integrated low-level (e.g., points and lines) and high-level (e.g., places and rooms) descriptions of the environment.     <li>  Ability to deal with a changing environment.    <li>  Ability to reason about multiple tasks and the changing environment.    <li>  Ability to learn about the environment and the sensor characteristics.    <li>  Ability to accept high-level verbal commands (with a limited lexicon and syntax) similar to those employed by humans (based on psychological data) and translate them into control actions for the robot and sensors.    </ul><p><li>Enhanced reality for mobile robotics<BR><h4>Kadima Lonji, G. Dudek</h4>This project involves the use of a synthetic scene model for teleoperation or pose estimation.
Live video and synthetic model information are fused to produce a composite image.<p><li>Natural language referring expressions in a person/machine dialogue<BR><h4>G. Dudek, R. DeMori</h4><a href="http://www.cim.mcgill.ca/~dudek/mobile/language.html">Click here for abstract</a><p><li>A Flexible Behavioral Architecture for Mobile Robot Navigation<BR><h4>J. Zelek, M. D. Levine</h4>The intention of this study is to design an architecture that allows a behavioral control strategy that is flexible, generalizable, and extendable.  The component dedicated to behavioral activities should be able to attempt tasks with or without a reasoning module.  We are investigating 2D navigational tasks for a mobile robot possessing sonar sensors and a controllable TV camera mounted on a pan-tilt head. The major aspects of our proposed behavioral architecture are as follows: <ul><li>A natural language lexicon is used to represent spatial information and to define task commands. The lexicon is used as a language for internal communications and user-specified commands. The task is to go to a location in space, either known or determined by locating a specific object. <li>An extension of a formalism referred to as teleo-reactive (T-R) programs (Nilsson:94) is used for specifying behavioral control. The extensions of this approach involve dealing with real-time resource limitations and constraints.</ul></ul><P  ALIGN=CENTER><img src ="http://www.cim.mcgill.ca/~dudek/mobile/movingdivider.gif" WIDTH=216 HEIGHT=5 ></P><HR><A NAME="links"></A><h2>Some other (outside) information sources</h2><P>There is an archive for several general <a href="ftp://ftp.cim.mcgill.ca/pub/techrep/README.html">CIM Technical Reports</a>.<P><a href="http://piglet.cs.umass.edu:4321/robotics.html">Robotics Internet Page at U.
Mass.</a><P><a href="http://www.cup.cam.ac.uk/">Cambridge University Press.</a><P><a href="http://mitpress.mit.edu/">MIT Press.</a><P><a href="http://www.igs.net:80/precarn/">The IRIS/PRECARN page.</a><P  ALIGN=CENTER><img src ="http://www.cim.mcgill.ca/~dudek/mobile/movingdivider.gif" WIDTH=216 HEIGHT=5 ></P><HR><A NAME="keywords"></A><h3>Keywords</h3><small>mobile robot, mcgill, robots, autonomous, vision, perception, artificial intelligence, AI, robotics, telerobotics, computers, computer science, engineering, learning, environment mapping, map making, cartography, rendezvous, intelligent machines, cognition, cognitive science, path planning, navigation, localization, positioning, modelling, modeling, shape, form, recognition, graduate students, students, teaching, research, canada, canadian, science, montreal, quebec, dudek, gregory, faculty, newton, movie, movies, sex (gotta attract web-bots somehow), multimedia.</small><A NAME="legalities"></A><h3>Legalities</h3><small><BLOCKQUOTE>This document is Copyright (c) Gregory Dudek, 1996. You are granted permission for the non-commercial use, reproduction, distribution or display of this document in any format under the following restriction: appropriate credit is given as to its source and authorship. This permission is valid for a period of 45 (forty-five) days from the time this document was obtained from McGill University.  All other rights reserved by the author(s).</BLOCKQUOTE></small></body></HTML>
