Robotics Patents (Class 382/153)
  • Patent number: 8588512
    Abstract: A localization method for a moving robot is disclosed, the method including: capturing a first omni-directional image by the moving robot; confirming at least one node at which a second omni-directional image having a high correlation with the first omni-directional image was captured; and determining that the moving robot is located at a first node when the moving robot reaches the first node, the first node being the node, from among the at least one node, at which the second omni-directional image having the highest correlation with the first omni-directional image was captured.
    Type: Grant
    Filed: January 18, 2008
    Date of Patent: November 19, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sukjune Yoon, Woosup Han, Seung Ki Min, Kyung Shik Roh
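The node-selection step this abstract describes — pick the stored node whose image correlates most strongly with the newly captured one — can be sketched as follows. This is a minimal illustration, not the patented method: the patent does not specify the correlation measure, so normalized cross-correlation is assumed here, images are flattened to 1-D intensity lists for brevity, and `ncc` and `best_node` are hypothetical names.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length intensity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def best_node(query, node_images):
    """Return the node id whose stored image correlates most with the query."""
    return max(node_images, key=lambda nid: ncc(query, node_images[nid]))
```

A real system would compare omni-directional images after unwarping and alignment; the selection logic above is unchanged by those details.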
  • Patent number: 8588982
    Abstract: An exemplary mechanical parking system includes platforms for parking vehicles, a switch electrically connected to the platforms, cameras mounted on the platforms, a data processing device electrically connected to the cameras and the switch, and an alarm electrically connected to the data processing device. When any one of the platforms is switched to move, the cameras continuously capture photos/videos of the environment of the mechanical parking system and send them to the data processing device. The data processing device analyzes whether a person appears in the photos/videos and decides, via the switch, whether the movement of the platform should be stopped. Once the danger has been cleared, movement of the platform resumes and the cameras continue to operate until parking of the vehicle is finished.
    Type: Grant
    Filed: August 30, 2011
    Date of Patent: November 19, 2013
    Assignee: Hon Hai Precision Industry Co., Ltd.
    Inventors: Hou-Hsien Lee, Chang-Jung Lee, Chih-Ping Lo
  • Patent number: 8582863
    Abstract: A string emanating from a packaging machine is arranged in a slot by means of a feed inlet and placed in a winding shaft (12). This placement is assisted by the use of positioning brushes (4). After the initial introduction, winding up takes place upon rotation of the winding shaft and upon further engagement of the string in the slot (14) in the winding shaft. The winding shaft is provided on a carrying disc (5) and said carrying disc is also arranged so as to be rotatable. During the first stage, the section of the string emanating from the supply path is pulled with constant force. After the string has been separated from the remaining material, the winding is carried out. Subsequently, the rotating shaft with the string, through rotation of the carrying disc, reaches a next position in which pressing the reel and thus adhesion of the reel end moves the winding shaft out of the reel. The reel has already been checked prior to entering the winding device to determine whether it has to be inspected.
    Type: Grant
    Filed: September 12, 2008
    Date of Patent: November 12, 2013
    Assignee: Global Factories B.V.
    Inventor: Richard Rudolf Theodoor Van Den Brink
  • Patent number: 8571302
    Abstract: A method and apparatus to build a 3-dimensional grid map and a method and apparatus to control an automatic traveling apparatus using the same. In building a 3-dimensional map to discern a current location and a peripheral environment of an unmanned vehicle or a mobile robot, 2-dimensional localization and 3-dimensional image restoration are appropriately used to accurately build the 3-dimensional grid map more rapidly.
    Type: Grant
    Filed: March 26, 2009
    Date of Patent: October 29, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sukjune Yoon, Kyung Shik Roh, Woong Kwon, Seung Yong Hyung, Hyun Kyu Kim
  • Patent number: 8559699
    Abstract: Vision based systems may select actions based on analysis of images to redistribute objects. Actions may include action type, action axis and/or action direction. Analysis may determine whether an object is accessible by a robot, whether an upper surface of a collection of objects meet a defined criteria and/or whether clusters of objects preclude access.
    Type: Grant
    Filed: October 10, 2008
    Date of Patent: October 15, 2013
    Assignee: RoboticVISIONTech LLC
    Inventor: Remus Boca
  • Publication number: 20130266205
    Abstract: The invention relates to a method and system for recognizing physical objects. In the method an object is gripped with a gripper, which is attached to a robot arm or mounted separately. Using an image sensor, a plurality of source images of an area comprising the object is captured while the object is moved with the robot arm. The camera is configured to move along with the gripper, is attached to the gripper, or is otherwise able to monitor the movement of the gripper. Moving image elements are extracted from the plurality of source images by computing a variance image from the source images and forming a filtering image from the variance image. A result image is obtained by using the filtering image as a bitmask. The result image is used for classifying the gripped object.
    Type: Application
    Filed: October 12, 2011
    Publication date: October 10, 2013
    Applicant: ZenRobotics Oy
    Inventor: Harri Valpola
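The variance-image bitmask pipeline this abstract outlines — variance across frames, threshold into a filtering image, apply as a mask — can be sketched minimally as below. This is an illustration under assumptions, not the patented implementation: the threshold rule and function names (`variance_image`, `moving_mask`, `apply_mask`) are invented here, and frames are small grayscale grids given as nested lists.

```python
def variance_image(frames):
    """Per-pixel variance across a stack of equal-sized grayscale frames."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [f[y][x] for f in frames]
            m = sum(vals) / n
            out[y][x] = sum((v - m) ** 2 for v in vals) / n
    return out

def moving_mask(frames, thresh):
    """Filtering image: 1 where a pixel's intensity varied across frames."""
    var = variance_image(frames)
    return [[1 if v > thresh else 0 for v in row] for row in var]

def apply_mask(image, mask):
    """Result image: pixels outside the bitmask are zeroed."""
    return [[p if m else 0 for p, m in zip(ri, rm)] for ri, rm in zip(image, mask)]
```

Pixels belonging to the moved object vary across frames and survive the mask; static background pixels are suppressed, which is what makes the result image usable for classifying the gripped object.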
  • Patent number: 8542909
    Abstract: There is provided a method of measuring 3D depth of a stereoscopic image, comprising providing left and right eye input images, applying an edge extraction filter to each of the left and right eye input images, and determining 3D depth of the stereoscopic image using the edge extracted left and right eye images. There is also provided an apparatus for carrying out the method of measuring 3D depth of a stereoscopic image.
    Type: Grant
    Filed: April 6, 2011
    Date of Patent: September 24, 2013
    Assignee: Tektronix, Inc.
    Inventors: Antoni Caceres, Martin Norman
  • Patent number: 8538125
    Abstract: A method and apparatus are provided for automatic application and monitoring of a structure to be applied onto a substrate. A plurality of cameras positioned around an application facility are utilized to monitor the automatic application of a structure on a substrate by means of a stereometry procedure. Three-dimensional recognition of a reference contour position in the overlapping area is used for gross adjustment of the application facility prior to applying the structure.
    Type: Grant
    Filed: December 23, 2004
    Date of Patent: September 17, 2013
    Assignee: Quiss GmbH
    Inventors: Jan Anders Linnenkohl, Andreas Tomtschko, Mirko Berger, Roman Raab
  • Patent number: 8538133
    Abstract: This system guides a drone during the approach phase to a platform, particularly a naval platform, with a view to landing it. The platform is equipped with a glide slope indicator installation emitting an array of optical guide beams over an angular sector predetermined from the horizontal, and the drone is equipped with a beam acquisition camera connected to image analysis means and to means for computing orders that command the automatic piloting means of the drone to make it follow the guide beams.
    Type: Grant
    Filed: October 13, 2009
    Date of Patent: September 17, 2013
    Assignee: DCNS
    Inventor: Julien Pierre Guillaume Moresve
  • Patent number: 8503759
    Abstract: Methods, devices, and systems for use in accomplishing registration of a patient to a robot to facilitate image guided surgical procedures, such as stereotactic procedures.
    Type: Grant
    Filed: April 16, 2008
    Date of Patent: August 6, 2013
    Inventors: Alexander Greer, Garnette Sutherland, Tim Fielding, Perry Newhook, Scott King, Calvin Bewsky, Jarod Matwiy, Boguslaw Tomanek, Mike Smith
  • Patent number: 8483881
    Abstract: A multi-function robotic device may have utility in various applications. In accordance with one aspect, a multi-function robotic device may be selectively configurable to perform a desired function in accordance with the capabilities of a selectively removable functional cartridge operably coupled with a robot body. Localization and mapping techniques may employ partial maps associated with portions of an operating environment, data compression, or both.
    Type: Grant
    Filed: September 1, 2006
    Date of Patent: July 9, 2013
    Assignee: Neato Robotics, Inc.
    Inventors: Vladimir Ermakov, Mark Woodward, Joe Augenbraun
  • Publication number: 20130163853
    Abstract: A method for estimating a location of a device uses a color image and a depth image. The method includes matching the color image to the depth image, generating a 3D reference image based on the matching, generating a 3D object image based on the matching, extracting a 2D reference feature point from the reference image, extracting a 2D reference feature point from the object image, matching the extracted reference feature point from the reference image to the extracted reference feature point from the object image, extracting a 3D feature point from the object image using the matched 2D reference feature point, and estimating the location of the device based on the extracted 3D feature point.
    Type: Application
    Filed: December 21, 2012
    Publication date: June 27, 2013
    Applicant: Samsung Electronics Co., Ltd.
    Inventor: Samsung Electronics Co., Ltd.
  • Patent number: 8472698
    Abstract: During pre-processing, a 3D model of the object is rendered for various poses by arranging virtual point light sources around the lens of a virtual camera. The shadows are used to obtain oriented depth edges of the object illuminated from multiple directions. The oriented depth edges are stored in a database. A camera acquires images of the scene by casting shadows onto the scene from different directions. The scene can include one or more objects arranged in arbitrary poses with respect to each other. The poses of the objects are determined by comparing the oriented depth edges obtained from the acquired images to the oriented depth edges stored in the database. The comparing evaluates, at each pixel, a cost function based on chamfer matching, which can be sped up using downhill simplex optimization.
    Type: Grant
    Filed: November 24, 2009
    Date of Patent: June 25, 2013
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Amit K. Agrawal, Changhua Wu, Ramesh N. Raskar, Jay Thornton
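The chamfer-matching cost mentioned in the abstract can be sketched as follows: precompute a distance map over the scene's edge pixels, then score a template placement by the mean distance at its transformed edge points. This is a simplified illustration, not the patent's implementation — Manhattan distance and a brute-force distance map are assumed for brevity, and `distance_map` / `chamfer_cost` are hypothetical names.

```python
def distance_map(edge_points, h, w):
    """Brute-force distance transform: Manhattan distance from every pixel
    to its nearest scene edge point."""
    return [[min(abs(y - ey) + abs(x - ex) for ey, ex in edge_points)
             for x in range(w)] for y in range(h)]

def chamfer_cost(template_points, dmap, dy, dx):
    """Mean edge distance for a template placed at offset (dy, dx);
    lower cost means a better match."""
    h, w = len(dmap), len(dmap[0])
    total, n = 0.0, 0
    for ty, tx in template_points:
        y, x = ty + dy, tx + dx
        if 0 <= y < h and 0 <= x < w:
            total += dmap[y][x]
            n += 1
    return total / n if n else float("inf")
```

Minimizing this cost over offsets (and, in the patent's setting, over the database of pre-rendered poses and edge orientations) yields the best pose hypothesis; the downhill simplex optimization the abstract mentions is one way to search that space without evaluating every placement.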
  • Patent number: 8467596
    Abstract: A pose of an object is estimated from an input image and the object pose estimate is then stored by: inputting an image containing an object; creating a binary mask of the input image; extracting a set of singlets from the binary mask of the input image, each singlet representing points in an inner and outer contour of the object in the input image; connecting the set of singlets into a mesh represented as a duplex matrix; comparing two duplex matrices to produce a set of candidate poses; and producing an object pose estimate, and storing the object pose estimate.
    Type: Grant
    Filed: August 30, 2011
    Date of Patent: June 18, 2013
    Assignee: Seiko Epson Corporation
    Inventors: Arash Abadpour, Guoyi Fu, Ivo Moravec
  • Patent number: 8467904
    Abstract: A control system and method generate joint variables for motion or posing of a target system in response to observations of a source system. Constraints and balance control may be provided for more accurate representation of the motion or posing as replicated by the target system.
    Type: Grant
    Filed: December 21, 2006
    Date of Patent: June 18, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventor: Behzad Dariush
  • Patent number: 8467597
    Abstract: A segmentation method for segmenting a plurality of duplicate articles (3) involves acquiring an image (M) of a sample article (30); calculating keypoint-descriptors of the image (M); defining an identifying figure (Z); acquiring a first image (I1) of a plurality of duplicate articles; matching keypoint-descriptor pairs; acquiring a position and an orientation of the identifying figure (Z) with respect to a first keypoint-descriptor pair having a match with a second keypoint-descriptor pair; defining an identifying figure; applying the two preceding stages to a plurality of keypoint-descriptor pairs; collecting together identifying figures of projection having a predetermined degree of superposing; defining a representative figure formed by a minimum predetermined number of identifying figures of projection, which has a same shape and dimension as an identifying figure of projection, and is selected to estimate a position of a corresponding article illustrated in the first image of a plurality of duplicate articles.
    Type: Grant
    Filed: April 26, 2010
    Date of Patent: June 18, 2013
    Assignee: Marchesini Group S.p.A.
    Inventors: Giuseppe Monti, Andrea Prati, Rita Cucchiara, Paolo Piccinini
  • Patent number: 8463018
    Abstract: A method of classifying and collecting feature information of an area according to a robot's moving path, a robot controlled by area features, and a method and apparatus for composing a user interface using the area features are disclosed. The robot includes a plurality of sensor modules to collect feature information of a predetermined area along a moving path of the robot, and an analyzer to analyze the collected feature information of the predetermined area according to a predetermined reference range and to classify the collected feature information into a plurality of groups.
    Type: Grant
    Filed: February 28, 2007
    Date of Patent: June 11, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seung-Nyung Chung, Hyun-jeong Lee, Hyun-jin Kim, Hyeon Myeong
  • Patent number: 8457792
    Abstract: A shape detection system includes a distance image sensor that detects an image of a plurality of detection objects and distances to the detection objects, the detection objects being randomly arranged in a container, a sensor controller that detects a position and an orientation of each of the detection objects in the container on the basis of the result of the detection performed by the distance image sensor and a preset algorithm, and a user controller that selects the algorithm to be used by the sensor controller and sets the algorithm for the sensor controller.
    Type: Grant
    Filed: June 18, 2010
    Date of Patent: June 4, 2013
    Assignee: Kabushiki Kaisha Yaskawa Denki
    Inventors: Hiroyuki Handa, Takeshi Arie, Yuji Ichimaru
  • Patent number: 8442714
    Abstract: An autonomous mobile device has its movement controlled by a control device and includes a first sensing unit for sensing an obstacle. The control device includes a first storage unit for storing information about temporary positional fluctuation of the obstacle, and sets, as a virtual obstacle region, a region into which the obstacle sensed by the first sensing unit is predicted to travel after a predetermined time passes, based on the information about the temporary positional fluctuation of the obstacle stored in the first storage unit.
    Type: Grant
    Filed: April 14, 2008
    Date of Patent: May 14, 2013
    Assignee: Panasonic Corporation
    Inventors: Yoshihiko Matsukawa, Soichiro Fujioka, Yuji Adachi, Toshio Inaji
  • Patent number: 8422741
    Abstract: The invention relates to a method for detecting the proper motion of a real-world object, comprising the steps of acquiring, by an image capturing device (ICD), a first image (I1) of the object at a first point in time (t1) and a second image (I2) at a second point in time (t2); obtaining a third (hypothetical) image (I3), based on an estimated effect of the motion of the image capturing device (ICD) itself (EMF) between the first and the second point in time (t1, t2), wherein the effect of the motion of the image capturing device (ICD) itself is estimated based on the forward kinematics of the image capturing device; determining an optical flow (OF) between the second image (I2) and the third image (I3); and evaluating the optical flow (OF) by incorporating uncertainties of the optical flow (OF) and the ego-motion-flow (EMF) in order to determine the proper motion of the object.
    Type: Grant
    Filed: August 21, 2008
    Date of Patent: April 16, 2013
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Julian Eggert, Sven Rebhan, Jens Schmudderich, Volker Willert
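The core test in this abstract — decide whether residual flow, after subtracting the camera's predicted ego-motion flow, is too large to be explained by the combined uncertainties — can be sketched per pixel as below. This is a schematic illustration, not the patented method: a scalar standard deviation per flow source and a simple threshold on the normalized residual are assumptions, and the function names are invented.

```python
import math

def proper_motion_score(observed, predicted, sigma_of, sigma_emf):
    """Uncertainty-weighted residual between the observed optical flow and
    the flow predicted from camera ego-motion, for one pixel.

    observed/predicted: (u, v) flow vectors; sigmas: std. deviations of the
    optical-flow and ego-motion-flow estimates.
    """
    var = sigma_of ** 2 + sigma_emf ** 2   # combined flow variance
    du = observed[0] - predicted[0]
    dv = observed[1] - predicted[1]
    return math.sqrt((du * du + dv * dv) / var)

def is_proper_motion(observed, predicted, sigma_of, sigma_emf, thresh=3.0):
    """True when the residual flow exceeds what measurement noise explains."""
    return proper_motion_score(observed, predicted, sigma_of, sigma_emf) > thresh
```

Intuitively, when the observed flow matches the ego-motion prediction within noise, the pixel belongs to the static scene; a large normalized residual indicates the object moved on its own.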
  • Publication number: 20130089235
    Abstract: A method of controlling a mobile apparatus includes acquiring a first original image and a second original image, extracting a first feature point of the first original image and a second feature point of the second original image, generating a first blurring image and a second blurring image by blurring the first original image and the second original image, respectively, calculating a similarity between at least two images of the first original image, the second original image, the first blurring image, and the second blurring image, determining a change in scale of the second original image based on the calculated similarity, and controlling at least one of an object recognition and a position recognition by matching the second feature point of the second original image to the first feature point of the first original image based on the change in scale.
    Type: Application
    Filed: October 5, 2012
    Publication date: April 11, 2013
    Applicant: Samsung Electronics Co., Ltd.
    Inventor: Samsung Electronics Co., Ltd.
  • Patent number: 8417446
    Abstract: Systems and methods for generating and using geographic data are disclosed. For example, one method comprises generating and/or identifying an open area map. A route is generated using the open area map. A path segment record and/or a node record is created based on the generated route. The path segment record and/or the node record may be stored in a geographic database.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: April 9, 2013
    Assignee: Navteq B.V.
    Inventor: Joseph P. Mays
  • Patent number: 8406506
    Abstract: A system and method are disclosed for fast sub-pixel optical flow estimation in a two-dimensional or a three-dimensional space. The system includes a pixel-level optical flow estimation module, a matching score map module, and a sub-pixel optical flow estimation module. The pixel-level optical flow estimation module is configured to estimate pixel-level optical flow for each pixel of an input image using a reference image. The matching score map module is configured to generate a matching score map for each pixel being estimated based on pixel-level optical flow estimation. The sub-pixel optical flow estimation module is configured to select the best pixel-level optical flow from multiple pixel-level optical flow candidates, and use the selected pixel-level optical flow, its four neighboring optical flow vectors and matching scores associated with the pixel-level optical flows to estimate the sub-pixel optical flow for the pixel of the input image.
    Type: Grant
    Filed: May 18, 2010
    Date of Patent: March 26, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventor: Morimichi Nishigaki
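The refinement step this abstract describes — use the best pixel-level flow and the matching scores around it to estimate sub-pixel flow — is commonly done with a parabola fit through the peak score and its two neighbors; a one-dimensional sketch under that assumption follows. The patent's actual estimator (which uses four neighboring flow vectors in 2-D) is not reproduced here, and `subpixel_peak` is a hypothetical name.

```python
def subpixel_peak(scores):
    """Refine an integer displacement to sub-pixel precision by fitting a
    parabola through the best matching score and its two neighbors.

    `scores` maps integer displacements to matching scores (higher = better).
    """
    d = max(scores, key=scores.get)          # best pixel-level displacement
    left, right = scores.get(d - 1), scores.get(d + 1)
    if left is None or right is None:
        return float(d)                      # at the border: no refinement
    denom = left - 2 * scores[d] + right
    if denom == 0:
        return float(d)                      # flat neighborhood: keep integer
    return d + 0.5 * (left - right) / denom  # vertex of the fitted parabola
```

A symmetric score profile returns the integer peak unchanged; an asymmetric profile shifts the estimate toward the stronger neighbor, which is exactly the sub-pixel correction a matching-score map provides for free.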
  • Patent number: 8406505
    Abstract: In a legged mobile robot having an imaging device (such as a CCD camera) for taking an image utilizing incident light from the external world in which a human being to be imaged is present, a brightness reduction operation is executed to reduce brightness of a high-brightness imaging region produced by high-brightness incident light, when the high-brightness imaging region is present in the image taken by the imaging device. With this, when a high-brightness imaging region is present owing to high-brightness incident light from the sun or the like, the legged mobile robot can reduce the brightness so as to image a human being or other object with suitable brightness.
    Type: Grant
    Filed: August 25, 2008
    Date of Patent: March 26, 2013
    Assignee: Honda Motor Co., Ltd.
    Inventors: Takamichi Shimada, Taro Yokoyama
  • Patent number: 8401275
    Abstract: A robotic system that includes a mobile robot and a remote input device. The input device may be a joystick that is used to move a camera and a mobile platform of the robot. The system may operate in a mode where the mobile platform moves in a camera reference coordinate system. The camera reference coordinate system is fixed to a viewing image provided by the camera so that movement of the robot corresponds to a direction viewed on a screen. This prevents disorientation during movement of the robot if the camera is panned across a viewing area.
    Type: Grant
    Filed: March 27, 2009
    Date of Patent: March 19, 2013
    Assignee: InTouch Technologies, Inc.
    Inventors: Yulun Wang, Charles S. Jordan, Keith P. Laby, Jonathan Southard, Marco Pinter, Brian Miller
  • Patent number: 8396282
    Abstract: The present disclosure describes a fused saliency map from visual and auditory saliency maps. The saliency maps are in azimuth and elevation coordinates. The auditory saliency map is based on intensity, frequency and temporal conspicuity maps. Once the auditory saliency map is determined, the map is converted into azimuth and elevation coordinates by processing selected snippets of sound from each of four microphones arranged on a robot head to detect the location of the sound source generating the saliencies.
    Type: Grant
    Filed: October 31, 2008
    Date of Patent: March 12, 2013
    Assignee: HRL Laboratories, LLC
    Inventors: David J Huber, Deepak Khosla, Paul Alex Dow
  • Patent number: 8396249
    Abstract: A method and apparatus for controlling robots based on prioritized targets extracted from fused visual and auditory saliency maps. The fused visual and auditory saliency map may extend beyond the immediate visual range of the robot yet the methods herein allow the robot to maintain an awareness of targets outside the immediate visual range. The fused saliency map may be derived in a bottom-up or top-down approach and may be feature based or object based.
    Type: Grant
    Filed: December 23, 2008
    Date of Patent: March 12, 2013
    Assignee: HRL Laboratories, LLC
    Inventors: Deepak Khosla, Paul Alex Dow, David Huber
  • Patent number: 8379967
    Abstract: A system for controlling a swarm that includes a plurality of autonomous objects may include a processing system and a controller. The processing system may compute the primitives to be applied to each pair of objects in the swarm, and may combine the primitives to generate higher-level primitives. The processing system may generate a graph of the computed primitives, and identify the cliques in the graph. The controller may cause the primitives to be applied between each pair of objects in the swarm, and cause each object to maximize its respective set of primitives so as to induce the desired group behavior. The controller may detect the desired group behavior in the swarm by monitoring the primitives computed by the processing system and the cliques identified by the processing system.
    Type: Grant
    Filed: December 4, 2008
    Date of Patent: February 19, 2013
    Assignee: Lockheed Martin Corporation
    Inventors: Stephen Francis Bush, John Hershey, Ralph Thomas Hoctor, Glen William Brooksby, Ambalangoda Gurunnanselage Amitha Perera, Anthony James Hoogs
  • Patent number: 8379966
    Abstract: Provided is an apparatus for recognizing the position of a mobile robot. The apparatus includes an image capturing unit which is loaded into a mobile robot and captures an image; an illuminance determining unit which determines illuminance at a position where an image is to be captured; a light-emitting unit which emits light toward the position; a light-emitting control unit which controls the light-emitting unit according to the determined illuminance; a driving control unit which controls the speed of the mobile robot according to the determined illuminance; and a position recognizing unit which recognizes the position of the mobile robot by comparing a pre-stored image to the captured image.
    Type: Grant
    Filed: July 11, 2008
    Date of Patent: February 19, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Woo-yeon Jeong, Jun-ho Park, Seok-won Bang
  • Patent number: 8374723
    Abstract: Methods of and a system for providing force information for a robotic surgical system. The method includes storing first kinematic position information and first actual position information for a first position of an end effector; moving the end effector via the robotic surgical system from the first position to a second position; storing second kinematic position information and second actual position information for the second position; and providing force information regarding force applied to the end effector at the second position utilizing the first actual position information, the second actual position information, the first kinematic position information, and the second kinematic position information. Visual force feedback is also provided via superimposing an estimated position of an end effector without force over an image of the actual position of the end effector. Similarly, tissue elasticity visual displays may be shown.
    Type: Grant
    Filed: April 22, 2009
    Date of Patent: February 12, 2013
    Assignee: Intuitive Surgical Operations, Inc.
    Inventors: Wenyi Zhao, Tao Zhao, David Q. Larkin
  • Patent number: 8374421
    Abstract: Methods and systems for robot and cloud communication are described. A robot may interact with the cloud to perform any number of actions using video captured from a point-of-view or in the vicinity of the robot. The cloud may be configured to extract still frames from compressed video received from the robot at a frame rate determined based on a number of factors, including the robot's surrounding environment, the available bandwidth, or actions being performed. The cloud may be configured to request that a compressed video with higher frame rate be sent so that the cloud can extract still frames at a higher frame rate. Further, the cloud may be configured to request that a second compressed video from a second perspective be sent to provide additional environment information.
    Type: Grant
    Filed: October 18, 2011
    Date of Patent: February 12, 2013
    Assignee: Google Inc.
    Inventor: Ryan Hickman
  • Patent number: 8374420
    Abstract: A learning control device performs a positioning process, a first image capturing process, and a first deviation amount calculating process in which a reference position deviation amount in the horizontal direction between the imaging reference position and a detection mark is derived based on image information captured in the first image capturing process, and a position adjustment amount is derived from the derived reference position deviation amount. The learning control device further performs a positioning correcting process in which the position adjustment device is operated to adjust a position of the second learn assist member based on the derived adjustment amount when the reference position deviation amount derived in the first deviation amount calculating process falls outside a set tolerance range. A second image capturing process and a second deviation amount calculating process may further be provided.
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: February 12, 2013
    Assignee: Daifuku Co., Ltd.
    Inventor: Ryuya Murakami
  • Publication number: 20130034295
    Abstract: Methods for recognizing a category of an object are disclosed. In one embodiment, a method includes determining, by a processor, a preliminary category of a target object, the preliminary category having a confidence score associated therewith, and comparing the confidence score to a learning threshold. If the highest confidence score is less than the learning threshold, the method further includes estimating properties of the target object and generating a property score for one or more estimated properties, and searching a supplemental image collection for supplemental image data using the preliminary category and the one or more estimated properties. Robots programmed to recognize a category of an object by use of supplemental image data are also disclosed.
    Type: Application
    Filed: August 2, 2011
    Publication date: February 7, 2013
    Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Masayoshi Tsuchinaga, Mario Fritz, Trevor Darrell
  • Patent number: 8369606
    Abstract: A system and method for aligning maps using polyline matching is provided. A global map and a local map are represented as polyline maps including a plurality of line segments. One or more approximate matches between the polyline maps are identified. One or more refined matches are determined from the approximate matches. The global map and the local map are aligned at the one or more refined matches.
    Type: Grant
    Filed: July 21, 2010
    Date of Patent: February 5, 2013
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Juan Liu, Ying Zhang, Gabriel Hoffmann
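The approximate-matching stage this abstract describes compares polyline maps independently of where and how the local map is placed; one standard way to do that is to reduce each polyline to a rotation- and translation-invariant signature of segment lengths and turn angles, sketched below. This is an illustration of the general idea, not the patented matching procedure, and `signature` / `match_cost` are invented names.

```python
import math

def signature(polyline):
    """Rotation/translation-invariant signature of a polyline:
    a list of (segment length, turn angle) pairs."""
    segs = []
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        segs.append((math.hypot(x1 - x0, y1 - y0),
                     math.atan2(y1 - y0, x1 - x0)))
    sig = [(segs[0][0], 0.0)]                # first segment has no turn
    for (l0, a0), (l1, a1) in zip(segs, segs[1:]):
        turn = (a1 - a0 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        sig.append((l1, turn))
    return sig

def match_cost(p, q):
    """Sum of length and turn-angle differences between two polylines
    with the same number of vertices; 0 means a perfect rigid match."""
    return sum(abs(l0 - l1) + abs(t0 - t1)
               for (l0, t0), (l1, t1) in zip(signature(p), signature(q)))
```

Translated or rotated copies of the same polyline produce identical signatures and thus zero cost, so low-cost pairs serve as the approximate matches that a refinement stage (and the final map alignment) can then verify.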
  • Patent number: 8340901
    Abstract: A mobile robot and a path planning method are provided for the mobile robot to manipulate target objects in a space, wherein the space consists of a periphery area and a central area. With the present method, an initial position is defined and the mobile robot is controlled to move within the periphery area from the initial position. Next, the latest image is captured when the mobile robot moves, and a manipulating order is arranged according to the distances estimated between the mobile robot and each of the target objects in the image. The mobile robot is controlled to move and perform a manipulating action on each of the target objects in the image according to the manipulating order. The steps of obtaining the image, planning the manipulating order, and controlling the mobile robot to perform the manipulating action are repeated until the mobile robot returns to the initial position.
    Type: Grant
    Filed: September 21, 2009
    Date of Patent: December 25, 2012
    Assignee: National Taiwan University of Science and Technology
    Inventors: Chin-Shyurng Fahn, Chien-Hsin Wu
  • Patent number: 8331652
    Abstract: A simultaneous localization and map building (SLAM) method and medium for a mobile robot is disclosed. The SLAM method includes extracting a horizontal line from an omni-directional image photographed at every position the mobile robot reaches during its movement, correcting the extracted horizontal line to create a horizontal line image, and simultaneously executing localization of the mobile robot and building a map for the mobile robot, using the created horizontal line image and a previously created horizontal line image.
    Type: Grant
    Filed: May 23, 2008
    Date of Patent: December 11, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: SukJune Yoon, Seung Ki Min, Kyung Shik Roh
  • Patent number: 8326019
    Abstract: An apparatus, method, and medium for dividing regions by using feature points and a mobile robot cleaner using the same are provided. A method includes forming a grid map by using a plurality of grid points that are obtained by detecting distances of a mobile robot from obstacles; extracting feature points from the grid map; extracting candidate pairs of feature points, which are in the range of a region division element, from the feature points; extracting a final pair of feature points, which satisfies the requirements of the region division element, from the candidate pair of feature points; forming a critical line by connecting the final pair of feature points; and forming a final region in accordance with the size relationship between regions formed of a closed curve which connects the critical line and the grid map.
    Type: Grant
    Filed: March 17, 2011
    Date of Patent: December 4, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Su-jinn Lee, Hyeon Myeong, Yong-beom Lee, Seok-won Bang
  • Patent number: 8326456
    Abstract: A behavior control apparatus collects information of a mobile object in relation to an action space of the mobile object, and acquires a position and an orientation of a human. The behavior control apparatus sets an exclusive area for the human based on the position and the orientation, and judges whether any information is to be notified to the human by the mobile object. If the judgment is negative, the behavior control apparatus determines a target position, a target orientation and a travel route of the mobile object such that the mobile object moves out of the exclusive area.
    Type: Grant
    Filed: March 7, 2008
    Date of Patent: December 4, 2012
    Assignee: Panasonic Corporation
    Inventors: Kotaro Sakata, Ryuji Inoue, Toshiya Naka
  • Publication number: 20120294509
    Abstract: A robot control system includes a processing unit which performs visual servoing based on a reference image and a picked-up image, a robot control unit which controls a robot based on a control signal, and a storage unit which stores the reference image and a marker. The storage unit stores, as the reference image, a reference image with marker in which the marker is set in an area of a workpiece or a hand of the robot. The processing unit generates, based on the picked-up image, a picked-up image with marker in which the marker is set in an area of the workpiece or the hand of the robot, performs visual servoing based on the reference image with marker and the picked-up image with marker, generates the control signal, and outputs the control signal to the robot control unit.
    Type: Application
    Filed: May 15, 2012
    Publication date: November 22, 2012
    Applicant: SEIKO EPSON CORPORATION
    Inventor: Shigeyuki MATSUMOTO
  • Patent number: 8315739
    Abstract: A method for determining the position of at least one object present within a working range of a robot by an evaluation system, wherein an image of at least one part of the working range of the robot is generated by a camera mounted on the robot. The image is generated during a motion of the camera, and image data are fed to the evaluation system in real time, together with further data from which the position and/or orientation of the camera when generating the image can be derived. The data are used for determining the position of the at least one object.
    Type: Grant
    Filed: June 15, 2010
    Date of Patent: November 20, 2012
    Assignee: ABB AG
    Inventor: Fan Dai
  • Patent number: 8315454
    Abstract: The present invention provides a robot apparatus that can perform appropriate actions in accordance with the ambient conditions, and a method of controlling the behavior of the robot apparatus.
    Type: Grant
    Filed: September 12, 2005
    Date of Patent: November 20, 2012
    Assignee: Sony Corporation
    Inventors: Fumihide Tanaka, Hiroaki Ogawa, Hirotaka Suzuki, Osamu Hanagata, Tsutomu Sawada, Masato Ito
  • Patent number: 8315455
    Abstract: A robot system includes a robot having a movable section, an image capture unit provided on the movable section, an output unit that allows the image capture unit to capture a target object and a reference mark and outputs a captured image in which the reference mark is imaged as a locus image, an extraction unit that extracts the locus image from the captured image, an image acquisition unit that performs image transformation on the basis of the extracted locus image by using the point spread function so as to acquire an image after the transformation from the captured image, a computation unit that computes a position of the target object on the basis of the acquired image, and a control unit that controls the robot so as to move the movable section toward the target object in accordance with the computed position.
    Type: Grant
    Filed: November 2, 2009
    Date of Patent: November 20, 2012
    Assignee: Seiko Epson Corporation
    Inventor: Mitsuhiro Inazumi
  • Patent number: 8311317
    Abstract: The present invention creates and stores target representations in several coordinate representations based on biologically inspired models of the human vision system. By using biologically inspired target representations, a computer can be programmed for robot control without using kinematics to relate a target position in camera eyes to a target position in body or head coordinates. The robot sensors and appendages are open-loop controlled to focus on the target. In addition, the invention herein teaches a scenario and method to learn the mappings between coordinate representations using existing machine learning techniques such as Locally Weighted Projection Regression.
    Type: Grant
    Filed: August 15, 2008
    Date of Patent: November 13, 2012
    Assignee: HRL Laboratories, LLC
    Inventors: Paul Alex Dow, Deepak Khosla, David J Huber
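    Learning a mapping between coordinate representations, as the abstract above describes, can be illustrated with a far simpler relative of the cited technique. The sketch below is plain kernel-weighted regression, a deliberately simplified stand-in for Locally Weighted Projection Regression, and `lw_predict`, the 1-D coordinates, and the Gaussian kernel bandwidth are all assumptions made for this example.

    ```python
    import math

    def lw_predict(query, samples, bandwidth=1.0):
        """Predict a body/head coordinate for a camera coordinate `query`
        as a Gaussian-kernel-weighted average over (camera, body) pairs.
        This is basic kernel regression, not LWPR itself."""
        num = den = 0.0
        for x, y in samples:
            w = math.exp(-((query - x) ** 2) / (2 * bandwidth ** 2))
            num += w * y
            den += w
        return num / den
    ```

    On training pairs sampled from a linear mapping such as `y = 2x`, the prediction at a training point recovers the mapped value; LWPR additionally fits local linear models and handles high-dimensional inputs incrementally.
    
    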
  • Patent number: 8306314
    Abstract: A pose for an object in a scene is determined by first rendering sets of virtual images of a model of the object using a virtual camera. Each set of virtual images is for a different known pose of the model, and a virtual depth edge map is constructed from each virtual image and stored in a database. A set of real images of the object at an unknown pose is acquired by a real camera, and a real depth edge map is constructed for each real image. The real depth edge maps are compared with the virtual depth edge maps using a cost function to determine the known pose that best matches the unknown pose, wherein the matching is based on locations and orientations of pixels in the depth edge maps.
    Type: Grant
    Filed: December 28, 2009
    Date of Patent: November 6, 2012
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Cuneyt Oncel Tuzel, Ashok Veeraraghavan
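    A cost function over edge-pixel locations and orientations, as the abstract above describes, can be sketched in chamfer-matching style. This is an illustrative reading, not the patent's actual cost: the edge-pixel tuple format, the orientation weight `w_orient`, and the brute-force nearest-neighbor search are assumptions made here.

    ```python
    import math

    def edge_matching_cost(real_edges, virtual_edges, w_orient=1.0):
        """Average, over real edge pixels (x, y, orientation), of the
        distance to the closest virtual edge pixel plus a wrapped
        orientation-difference penalty (chamfer-style)."""
        total = 0.0
        for (x, y, th) in real_edges:
            total += min(
                math.hypot(x - vx, y - vy)
                # atan2(sin, cos) wraps the angle difference into (-pi, pi]
                + w_orient * abs(math.atan2(math.sin(th - vth),
                                            math.cos(th - vth)))
                for (vx, vy, vth) in virtual_edges
            )
        return total / len(real_edges)

    def best_pose(real_edges, pose_db):
        """Return the stored known pose whose virtual edge map matches best."""
        return min(pose_db, key=lambda p: edge_matching_cost(real_edges, pose_db[p]))
    ```

    With a database of virtual edge maps keyed by pose, `best_pose` simply picks the minimum-cost entry.
    
    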
  • Patent number: 8306305
    Abstract: A method of automatically identifying bone components in a medical image data set of voxels, the method comprising: a) applying a first set of one or more tests to accept voxels as belonging to seeds, b) applying a second set of one or more tests to accept seeds as bone seeds, and c) expanding the bone seeds into bone components by progressively identifying candidate bone voxels, adjacent to the bone seeds or to other previously identified bone voxels, as bone voxels, responsive to predetermined criteria which distinguish bone voxels from voxels of other body tissue.
    Type: Grant
    Filed: June 3, 2011
    Date of Patent: November 6, 2012
    Assignee: Algotec Systems Ltd.
    Inventors: Hadar Porat, Gad Miller, Shmuel Akerman, Ido Milstein
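    Step (c) above, expanding bone seeds by progressively accepting adjacent candidate voxels, is classic region growing. The sketch below is a minimal version under stated assumptions: the volume is a dict of voxel intensities, connectivity is 6-neighbor, and a single intensity threshold stands in for the patent's "predetermined criteria which distinguish bone voxels".

    ```python
    from collections import deque

    def grow_bone(volume, seeds, threshold):
        """Breadth-first expansion of bone seeds through 6-connected
        neighbours whose intensity meets a bone-like criterion.

        volume: dict mapping (x, y, z) -> intensity
        seeds:  iterable of (x, y, z) already accepted as bone seeds
        """
        bone = set(seeds)
        queue = deque(seeds)
        nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in nbrs:
                v = (x + dx, y + dy, z + dz)
                if v in bone or v not in volume:
                    continue
                if volume[v] >= threshold:  # stand-in acceptance criterion
                    bone.add(v)
                    queue.append(v)
        return bone
    ```

    Voxels below the threshold, such as soft tissue adjacent to bone, stop the expansion, which is the point of growing from high-confidence seeds rather than thresholding the whole volume.
    
    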
  • Publication number: 20120269422
    Abstract: A collision detection system includes a processing section, a drawing section, and a depth buffer. Depth information of an object is set to the depth buffer as depth map information. The drawing section performs a first drawing process of performing a depth test, and drawing a primitive surface on a reverse side when viewed from a predetermined viewpoint out of primitive surfaces constituting a collision detection target object with reference to the depth buffer. Further, the drawing section performs a second drawing process of drawing the primitive surface on the reverse side when viewed from a predetermined viewpoint out of the primitive surfaces constituting the collision detection target object without performing the depth test. The processing section determines whether or not the collision detection target object collides with the object on the target side based on the result of the first drawing process and the second drawing process.
    Type: Application
    Filed: April 18, 2012
    Publication date: October 25, 2012
    Applicant: SEIKO EPSON CORPORATION
    Inventor: Mitsuhiro INAZUMI
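    The intuition behind the two drawing passes above can be shown with a heavily simplified, CPU-side per-pixel check; the actual technique runs on a GPU depth buffer and compares the two passes' results. Everything in this sketch is an assumption for illustration: depth maps are dicts keyed by pixel, larger depth means farther from the viewpoint, and the collision object is summarized by its front- and reverse-side depths at each pixel.

    ```python
    def collides(obstacle_depth, obj_front, obj_back):
        """At each pixel the collision-target object spans
        [front, back] in depth. If the obstacle's stored depth-map
        value falls inside that span, the obstacle surface lies
        inside the object: its reverse side would be culled by a
        depth test while its front side would not."""
        for px, d in obstacle_depth.items():
            if px in obj_front and obj_front[px] <= d <= obj_back[px]:
                return True
        return False
    ```

    Comparing a depth-tested pass against an untested pass lets the GPU variant detect exactly this situation without ever reading geometry back to the CPU.
    
    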
  • Patent number: 8290238
    Abstract: Disclosed are methods and apparatus for automatic optoelectronic detection and inspection of objects, based on capturing digital images of a two-dimensional field of view in which an object to be detected or inspected may be located, analyzing the images, and making and reporting decisions on the status of the object. Decisions are based on evidence obtained from a plurality of images for which the object is located in the field of view, generally corresponding to a plurality of viewing perspectives. Evidence that an object is located in the field of view is used for detection, and evidence that the object satisfies appropriate inspection criteria is used for inspection. Methods and apparatus are disclosed for capturing and analyzing images at high speed so that multiple viewing perspectives can be obtained for objects in continuous motion.
    Type: Grant
    Filed: May 24, 2005
    Date of Patent: October 16, 2012
    Assignee: Cognex Technology and Investment Corporation
    Inventor: William M. Silver
  • Patent number: 8285416
    Abstract: A system and a method for stabilization control may control position and direction of an object having first and second bodies connected to each other. The system may include an artificial vestibular apparatus for outputting a movement signal corresponding to movement of the first body and a rotation signal corresponding to rotation of the first body; a translating actuation unit connected between the first and second bodies and controlling position of the second body in response to the movement signal; and a rotating actuation unit connected between the first and second bodies and controlling rotation of the second body in response to the rotation signal. If the system and the method are applied to a vision system of a mobile robot, the vision system may obtain stable image information even when the mobile robot is moving. Thus, it is possible to prevent any blurring from occurring in the image information.
    Type: Grant
    Filed: February 12, 2009
    Date of Patent: October 9, 2012
    Assignee: SNU R&DB Foundation
    Inventors: Dong-il Cho, Hyoungho Ko, Jaehong Park, Sangmin Lee
  • Publication number: 20120219207
    Abstract: The present invention relates to a slip detection apparatus and method for a mobile robot. More particularly, the apparatus and method not only use a plurality of rotation detection sensors to detect a lateral slip angle and lateral slip direction, but also analyze the amount of change in an image and detect the blocked degree of an image input unit to determine the quality of an input image, and detect the occurrence of a frontal slip so as to precisely identify the type of slip, the direction of the slip, and the rotation angle. On this basis, the mobile robot can move away from and avoid slip regions, and re-estimate its precise position.
    Type: Application
    Filed: October 30, 2009
    Publication date: August 30, 2012
    Applicant: YUJIN ROBOT CO., LTD.
    Inventors: Kyung Chul Shin, Seong Ju Park, Hee Kong Lee, Jae Young Lee, Hyung O Kim, James Stonier Daniel
  • Patent number: RE43895
    Abstract: A scanning apparatus and method for generating computer models of three-dimensional objects comprising means for scanning the object to capture data from a plurality of points on the surface of the object so that the scanning means may capture data from two or more points simultaneously, sensing the position of the scanning means, generating intermediate data structures from the data, combining intermediate data structures to provide the model, displaying the model, and manually operating the scanning apparatus. The signal generated is structured light in the form of a stripe or an area from illumination sources such as a laser diode or bulbs which enable data for the position and color of the surface to be determined. The object may be on a turntable and may be viewed in real time as rendered polygons on a monitor as the object is scanned.
    Type: Grant
    Filed: December 24, 2009
    Date of Patent: January 1, 2013
    Assignee: 3D Scanners Limited
    Inventor: Stephen James Crampton