Robotics Patents (Class 382/153)
  • Patent number: 10239201
    Abstract: A master-slave system (1) according to the present invention includes a slave actuator (As1 to As3) for generating a slave driving force (τs) to control a slave robot in terms of driving force, an effective driving force sensor (Fs1 to Fs3) for measuring a slave effective driving force (τsa) actually acting on a terminal output axis of the slave actuator (As1 to As3), and a slave target effective driving force calculating device (3) for calculating a slave target effective driving force (τsad), which is a target value for the slave effective driving force (τsa), on the basis of a master operating force (fm) applied to the master robot by an operator (U). The slave actuator (As1 to As3) generates the slave driving force (τs) on the basis of the slave target effective driving force (τsad) and the slave effective driving force (τsa).
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: March 26, 2019
    Assignee: Muscle Corporation
    Inventor: Katsuya Kanaoka
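To make the control structure above concrete, here is a minimal single-axis sketch in Python, assuming a simple proportional mapping from the master operating force fm to the slave target effective driving force τsad and a PI regulator on the force error; the gains, interfaces, and one-axis simplification are illustrative assumptions, not taken from the patent.

# Illustrative single-axis force-control loop in the spirit of US 10,239,201.
# All gains, interfaces, and the PI structure are assumptions for the sketch.

class SlaveForceController:
    def __init__(self, force_scale=2.0, kp=5.0, ki=40.0, dt=0.001):
        self.force_scale = force_scale  # maps master force fm to target tau_sad
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def target_effective_force(self, fm):
        """Slave target effective driving force tau_sad from master force fm."""
        return self.force_scale * fm

    def driving_force(self, fm, tau_sa):
        """Slave driving force tau_s from tau_sad and the measured tau_sa."""
        tau_sad = self.target_effective_force(fm)
        error = tau_sad - tau_sa
        self.integral += error * self.dt
        return tau_sad + self.kp * error + self.ki * self.integral


controller = SlaveForceController()
tau_s = controller.driving_force(fm=1.5, tau_sa=2.8)  # one control tick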
  • Patent number: 10239612
    Abstract: An automated commissioning and floorplan configuration (CAFC) device can include a CAFC system having a transceiver and a CAFC engine. The transceiver communicates with at least one device disposed in a volume of space, and the CAFC engine, based on the communication between the transceiver and the at least one device, commissions the at least one device.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: March 26, 2019
    Assignee: Cooper Technologies Company
    Inventors: Jonathan Andrew Whitten, Michael Alan Lunn
  • Patent number: 10192113
    Abstract: The described positional awareness techniques employ sensory data-gathering and analysis hardware and, with reference to specific example implementations, introduce improvements in the use of sensors, techniques, and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy. The sensory data are gathered from multiple operational cameras and one or more auxiliary sensors.
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: January 29, 2019
    Assignee: PerceptIn, Inc.
    Inventors: Shaoshan Liu, Zhe Zhang, Grace Tsai
  • Patent number: 10162354
    Abstract: In one embodiment, motion planning and control data is received, indicating that an autonomous vehicle is to move from a first point to a second point of a path. The motion planning and control data describes a plurality of routes from the first point to the second point within the path. For each of the routes, a simulation of the route is performed in view of physical characteristics of the autonomous vehicle to generate a simulated route. A controlling error is calculated, the controlling error representing a discrepancy between the route and the simulated route. One of the routes is selected based on controlling errors between the routes and associated simulated routes. The autonomous vehicle is operated to move from the first point to the second point according to the selected route.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: December 25, 2018
    Assignee: BAIDU USA LLC
    Inventors: Qi Kong, Fan Zhu, Dong Li, Yifei Jiang, Li Zhuang, Guang Yang, Jingao Wang
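A minimal sketch of the selection step described above, assuming a crude first-order-lag vehicle model as the simulator and an RMS point-to-point discrepancy as the controlling error; both are stand-ins for whatever simulator and error metric the patent actually uses.

# Sketch of selecting a route by "controlling error" (US 10,162,354): each
# candidate route is simulated with a crude vehicle model and the route whose
# simulation deviates least from the plan is chosen.
import math

def simulate(route, lag=0.3):
    """Crude simulated trace: the vehicle only partially reaches each waypoint."""
    x, y = route[0]
    sim = []
    for wx, wy in route:
        x += (wx - x) * (1.0 - lag)
        y += (wy - y) * (1.0 - lag)
        sim.append((x, y))
    return sim

def controlling_error(route, sim):
    """RMS discrepancy between the planned route and its simulated counterpart."""
    return math.sqrt(sum((rx - sx) ** 2 + (ry - sy) ** 2
                         for (rx, ry), (sx, sy) in zip(route, sim)) / len(route))

def select_route(routes):
    """Pick the route whose simulation deviates least from the plan."""
    return min(routes, key=lambda r: controlling_error(r, simulate(r)))

candidates = [[(0.0, 0.0), (1.0, 0.6), (2.0, 1.5)],
              [(0.0, 0.0), (1.0, 0.0), (2.0, 1.5)]]
best_route = select_route(candidates)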
  • Patent number: 10163036
    Abstract: One or more image parameters of an image may be analyzed using a hierarchical set of models. Executing individual models in the set of models may generate outputs from analysis of different image parameters of the image. Inputs of one or more of the models may be conditioned on a set of outputs derived from one or more preceding models in the hierarchy.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: December 25, 2018
    Assignee: Disney Enterprises, Inc.
    Inventors: Andreas Lehrmann, Leonid Sigal
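The conditioning of later models on outputs of preceding models might look roughly like the sketch below; the two toy models (brightness, then contrast conditioned on brightness) are invented placeholders, not the models described in the patent.

# Minimal sketch of a hierarchy of image models in which later models are
# conditioned on outputs of preceding ones (in the spirit of US 10,163,036).

def estimate_brightness(image, context):
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def estimate_contrast(image, context):
    # Conditioned on the brightness produced by the preceding model.
    mean = context["estimate_brightness"]
    return max(abs(p - mean) for row in image for p in row)

HIERARCHY = [estimate_brightness, estimate_contrast]

def run_hierarchy(image):
    context = {}
    for model in HIERARCHY:
        context[model.__name__] = model(image, context)
    return context

outputs = run_hierarchy([[10, 20], [30, 40]])
# {'estimate_brightness': 25.0, 'estimate_contrast': 15.0}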
  • Patent number: 10076842
    Abstract: Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot or motion stage and a camera in communication with a control system. The control system is configured to move the robot or motion stage to poses, and for each pose: capture an image of calibration target features and robot joint angles or motion stage encoder counts. The control system is configured to obtain initial values for robot or motion stage calibration parameters, and determine initial values for hand-eye calibration parameters based on the initial values for the robot or motion stage calibration parameters, the image, and joint angles or encoder counts. The control system is configured to determine final values for the hand-eye calibration parameters and robot or motion stage calibration parameters by refining the hand-eye calibration parameters and robot or motion stage calibration parameters to minimize a cost function.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: September 18, 2018
    Assignee: Cognex Corporation
    Inventors: Lifeng Liu, Cyril C. Marrion, Tian Gan
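A toy version of jointly refining kinematic and hand-eye calibration parameters by minimizing a cost function, using a planar two-joint arm and scipy.optimize.least_squares; the arm model, the single calibration feature, and the synthetic observations are assumptions made only for illustration, not the patent's formulation.

# Simplified joint kinematic + hand-eye refinement in the spirit of US 10,076,842:
# a planar two-joint "robot" with an unknown first-link length (the kinematic
# parameter) and an unknown camera offset on the second link (the hand-eye
# parameters), refined together by minimizing observation residuals.
import numpy as np
from scipy.optimize import least_squares

TARGET = np.array([1.5, 0.4])                  # known calibration feature (world frame)
TRUE_LINK1, TRUE_OFFSET = 1.0, np.array([0.10, 0.05])
LINK2 = 0.5                                    # second link length, assumed known
POSES = [(0.1, 0.4), (0.3, 0.9), (0.6, 0.2), (0.8, 1.1),
         (1.0, 0.5), (1.2, 0.8), (0.2, 1.3), (0.9, 0.3)]   # recorded joint angles

def rot(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s], [s, c]])

def predict(link1, offset, q1, q2):
    """Calibration feature as seen from the camera mounted on the second link."""
    heading = q1 + q2
    cam_pos = rot(q1) @ np.array([link1, 0.0]) + rot(heading) @ (np.array([LINK2, 0.0]) + offset)
    return rot(heading).T @ (TARGET - cam_pos)

observed = [predict(TRUE_LINK1, TRUE_OFFSET, q1, q2) for q1, q2 in POSES]

def residuals(params):
    link1, offset = params[0], params[1:3]
    return np.concatenate([predict(link1, offset, q1, q2) - obs
                           for (q1, q2), obs in zip(POSES, observed)])

initial = np.array([0.8, 0.0, 0.0])            # rough initial kinematic/hand-eye values
result = least_squares(residuals, initial)     # refine both parameter sets jointly
link1_est, hand_eye_est = result.x[0], result.x[1:3]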
  • Patent number: 10075818
    Abstract: Traces are collected by multiple portable devices moving within an area that includes an indoor region, with each of the traces including measurements of wireless signals at different times, including measurements of wireless signals from signal sources disposed within the area. A motion map for the geographic area is constructed by determining, for each of the cells that make up the motion map, respective probabilities of moving in various directions relative to each cell. Location estimates for the portable devices and the signal sources are generated using graph-based SLAM optimization of the location estimates. The graph-based SLAM optimization includes determining to which of the cells of the motion map the location estimate corresponds and applying the measurements of the wireless signal sources and the set of probabilities of the cells as a first constraint and a second constraint, respectively, in the graph-based SLAM optimization.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: September 11, 2018
    Assignee: GOOGLE LLC
    Inventors: Etienne Le Grand, Mohammed Khider, Luigi Bruno
  • Patent number: 10062144
    Abstract: A set of spherical harmonics is defined that is an operationally optimal, small, finite subset of the infinite number of spherical harmonics allowed to exist mathematically. The composition of the subset differs depending on its position on the virtual hemisphere. The subsets are further divided into small spherical tesserae whose dimensions vary depending on the distance from the hemispherical center. The images of the outside visual scenes are projected onto the flat surface of the webcam and from there are read and recalculated programmatically as if they had been projected onto the hemisphere; rotational invariants are then computed in the smallest tesserae using numerical integration, and invariants from neighboring tesserae are added to compute the rotational invariant of their union. Every computed invariant is checked against the library and stored there if there is no match. The rotational invariants are used solely for visual recognition, classification, and operational decision making.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: August 28, 2018
    Inventor: Alex Simon Blaivas
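For readers unfamiliar with rotational invariants of spherical-harmonic expansions, the classic construction is the per-degree power, written below in generic notation (not taken from the patent); each I_ell is unchanged under any rotation of the sphere.

f(\theta,\varphi) = \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} c_{\ell m}\, Y_{\ell m}(\theta,\varphi),
\qquad
I_\ell = \sum_{m=-\ell}^{\ell} \lvert c_{\ell m} \rvert^{2}, \quad \ell = 0,\dots,L.

Because each I_ell depends only on the magnitudes of the coefficients within its degree, such quantities can be compared against a stored library regardless of how the scene is rotated, which is the property the abstract relies on.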
  • Patent number: 10028632
    Abstract: A method of controlling a mobile robot, the method including monitoring a first system of the mobile robot to detect a first error associated with the first system, monitoring a second system of the mobile robot to detect a second error associated with the second system, and when the first error and the second error are detected at the same time, determining that a third error has occurred.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: July 24, 2018
    Assignee: Dyson Technology Limited
    Inventors: Maximilian John Britain, William Matthew Wakeling, Christopher John Ord
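The coincidence rule in the abstract above reduces to a very small piece of logic; the sketch below, with an invented time window and error timestamps, is only an illustration of that rule, not Dyson's implementation.

# Toy illustration of the error-combination rule described for US 10,028,632:
# if an error from one monitored system and an error from another are active
# at the same time, a third, combined error is reported.

COINCIDENCE_WINDOW = 0.5  # seconds; assumed tolerance for "at the same time"

def combined_error(first_system_errors, second_system_errors):
    """Return True (the 'third error') if any first-system error coincides
    with any second-system error within the window."""
    return any(abs(t1 - t2) <= COINCIDENCE_WINDOW
               for t1 in first_system_errors for t2 in second_system_errors)

# e.g. a drive error at t=10.2 s and a sensor error at t=10.4 s
print(combined_error([10.2], [10.4, 55.0]))  # True -> raise the third error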
  • Patent number: 9969089
    Abstract: Apparatus and methods for carpet drift estimation are disclosed. In certain implementations, a robotic device includes an actuator system to move the body across a surface. A first set of sensors can sense an actuation characteristic of the actuator system. For example, the first set of sensors can include odometry sensors for sensing wheel rotations of the actuator system. A second set of sensors can sense a motion characteristic of the body. The first set of sensors may be a different type of sensor than the second set of sensors. A controller can estimate carpet drift based at least on the actuation characteristic sensed by the first set of sensors and the motion characteristic sensed by the second set of sensors.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: May 15, 2018
    Assignee: iRobot Corporation
    Inventors: Dhiraj Goel, Ethan Eade, Philip Fong, Mario E. Munich
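A minimal sketch of the comparison the abstract describes, assuming a differential-drive odometry model and a gyroscope as the second sensor; treating the per-metre heading discrepancy as carpet drift is an illustrative simplification, not the patent's estimator.

# Sketch of carpet-drift estimation in the spirit of US 9,969,089: compare the
# heading change implied by wheel odometry (the actuation characteristic) with
# the heading change reported by a gyroscope (the motion characteristic).

def heading_from_odometry(d_left, d_right, wheel_base):
    """Heading change (rad) implied by wheel travel of a differential drive."""
    return (d_right - d_left) / wheel_base

def carpet_drift_rate(d_left, d_right, gyro_heading_change, wheel_base):
    """Drift in rad per metre travelled: gyro-observed minus odometry-implied."""
    distance = 0.5 * (d_left + d_right)
    if distance == 0:
        return 0.0
    odo_heading = heading_from_odometry(d_left, d_right, wheel_base)
    return (gyro_heading_change - odo_heading) / distance

drift = carpet_drift_rate(d_left=0.50, d_right=0.50, gyro_heading_change=0.02,
                          wheel_base=0.25)  # rad/m of unexplained rotation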
  • Patent number: 9966060
    Abstract: The method is performed at an electronic device with one or more processors and memory storing one or more programs for execution by the one or more processors. A first speech input including at least one word is received. A first phonetic representation of the at least one word is determined, the first phonetic representation comprising a first set of phonemes selected from a speech recognition phonetic alphabet. The first set of phonemes is mapped to a second set of phonemes to generate a second phonetic representation, where the second set of phonemes is selected from a speech synthesis phonetic alphabet. The second phonetic representation is stored in association with a text string corresponding to the at least one word.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 8, 2018
    Assignee: Apple Inc.
    Inventors: Devang K. Naik, Thomas R. Gruber, Liam Weiner, Justin G. Binder, Charles Srisuwananukorn, Gunnar Evermann, Shaun Eric Williams, Hong Chen, Lia T. Napolitano
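The phoneme-mapping step might be sketched as below, with a tiny invented mapping table standing in for the real recognition and synthesis phonetic alphabets; nothing here reflects Apple's actual alphabets or lexicon format.

# Sketch of mapping recognition phonemes to synthesis phonemes and storing the
# result with a text string (in the spirit of US 9,966,060). The two phonetic
# alphabets and the mapping table are invented placeholders.

RECOG_TO_SYNTH = {"AE1": "ae_stress", "K": "k", "S": "s", "T": "t"}  # assumed

lexicon = {}  # text string -> pronunciation in the synthesis alphabet

def learn_pronunciation(word, recognized_phonemes):
    """Map the recognizer's phonemes into the synthesizer's alphabet and store."""
    synth_phonemes = [RECOG_TO_SYNTH[p] for p in recognized_phonemes]
    lexicon[word] = synth_phonemes
    return synth_phonemes

learn_pronunciation("cask", ["K", "AE1", "S", "K"])
# lexicon == {"cask": ["k", "ae_stress", "s", "k"]}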
  • Patent number: 9939274
    Abstract: Providing a route guide to a destination using building information modeling (BIM) data. A request from a user for a route guide to a destination in a building is received. BIM data for the building, security information for a route, and a user profile of the user that requested the route guide to the destination are received. A route guide to the destination in the building is created, based at least on the BIM data, the security information, and the user profile. Creating the route guide may be further based on use restriction information, including time, weight, operation, impassable information, information related to the method of using the facility equipment, or precaution information when using the facility equipment. The route guide may include a method of using the facility equipment in the building, a method of operating, a direction of operation, a method of unlocking or locking, or precaution information.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: April 10, 2018
    Assignee: International Business Machines Corporation
    Inventors: Yasutaka Nishimura, Masami Tada, Akihiko Takajo, Takahito Tashiro
  • Patent number: 9930252
    Abstract: Methods, systems, and robots for processing omni-directional image data are disclosed. A method includes receiving omni-directional image data representative of a panoramic field of view and segmenting, by one or more processors, the omni-directional image data into a plurality of image slices. Each image slice of the plurality of image slices is representative of at least a portion of the panoramic field of view of the omni-directional image data. The method further includes calculating a slice descriptor for each image slice of the plurality of image slices and generating a current sequence of slice descriptors. The current sequence of slice descriptors includes the calculated slice descriptor for each image slice of the plurality of image slices.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: March 27, 2018
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventor: Joseph Maria Angelo Djugash
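A rough sketch of slicing a panorama and computing one descriptor per slice; the column split and the intensity-histogram descriptor are assumptions chosen only to illustrate the "sequence of slice descriptors" idea.

# Sketch of segmenting omni-directional image data into slices and computing a
# slice descriptor per slice (in the spirit of US 9,930,252). A normalized
# intensity histogram stands in for whatever descriptor the patent uses.
import numpy as np

def slice_descriptors(panorama, num_slices=8, bins=16):
    """panorama: H x W grayscale array covering a panoramic field of view."""
    descriptors = []
    for image_slice in np.array_split(panorama, num_slices, axis=1):
        hist, _ = np.histogram(image_slice, bins=bins, range=(0, 255), density=True)
        descriptors.append(hist)
    return descriptors  # the "current sequence of slice descriptors"

pano = np.random.randint(0, 256, size=(64, 512), dtype=np.uint8)
sequence = slice_descriptors(pano)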
  • Patent number: 9824451
    Abstract: A system for determining a camera pose relative to an object including a unique feature may include a monocular camera configured to produce an image of the object, and a processing unit in operative communication with the monocular camera. The processing unit may be configured to: identify the unique feature of the object in the image produced by the monocular camera, synthesize at least four points along a contour of the object in the image using the identified unique feature as a starting point, synthesize the same number of points as synthesized in the image along a contour of the object in a reference model, the reference model being preprogrammed into a memory of the processing unit, correlate the points from the reference model to the image, and determine a pose of the monocular camera based on the correlated points.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: November 21, 2017
    Assignee: The Boeing Company
    Inventors: Michelle Crivella, Philip Freeman
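The correlate-and-solve step can be illustrated with OpenCV's solvePnP on a handful of synthesized 2D-3D correspondences; the points, intrinsics, and the choice of solvePnP are assumptions for the sketch, not the patent's method.

# Sketch of recovering camera pose from synthesized point pairs along an object
# contour (in the spirit of US 9,824,451), using OpenCV's solvePnP as a
# stand-in for the patent's correlation step. Points and intrinsics are fake.
import numpy as np
import cv2

# Contour points in the reference model (object frame, metres)
model_points = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.1, 0.0],
                         [0.1, 0.15, 0.02], [0.0, 0.1, 0.0], [0.05, 0.05, 0.01]])
# The corresponding points synthesized along the contour in the image (pixels)
image_points = np.array([[320.0, 240.0], [400, 238], [402, 205],
                         [362, 188], [318, 207], [342, 224]])

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
# rvec/tvec give the monocular camera's pose relative to the object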
  • Patent number: 9818198
    Abstract: Motorized machinery, such as overhead cranes, is widely used in industries all over the world. It is not easy to move crane payloads without oscillation, which increases the likelihood of obstacle collisions and other accidents. One possible solution to such problems is to aid the operator with a dynamic map of the workspace that shows the current position of obstacles. This method discloses the use of a camera to take images of the workspace, image blurring to smooth the obtained images, and contour drawing to produce an individual, near real-time map of the workspace. In one or more embodiments, known obstacles may be tagged in a manner that is readable by the camera. This image and historical images of the same workspace are layered on top of one another to produce a map of obstacles on the workspace floor. This imaging and layering can produce a near real-time map of obstacles that can be used to guide heavy motorized machinery around a workspace without incident.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: November 14, 2017
    Assignee: University of Louisiana at Lafayette
    Inventors: Joshua Vaughan, Mohammad Sazzad Rahman
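A short OpenCV sketch of the blur, contour, and layer pipeline described above, assuming Otsu thresholding, filled external contours, and a bitwise OR over historical frames (OpenCV 4 API); the real system's thresholds and layering policy may differ.

# Sketch of building a near real-time obstacle map by blurring, contouring and
# layering workspace images (in the spirit of US 9,818,198).
import numpy as np
import cv2

def obstacle_mask(gray_frame):
    blurred = cv2.GaussianBlur(gray_frame, (21, 21), 0)
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(gray_frame)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    return filled

def layered_map(frames):
    """OR together the current and historical masks into one workspace map."""
    workspace_map = np.zeros_like(frames[0])
    for frame in frames:
        workspace_map = cv2.bitwise_or(workspace_map, obstacle_mask(frame))
    return workspace_map

frames = [np.random.randint(0, 256, (240, 320), dtype=np.uint8) for _ in range(3)]
current_map = layered_map(frames)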
  • Patent number: 9766072
    Abstract: A method and device for providing a travel route of a portable medical diagnosis apparatus include acquiring a travel image obtained by capturing the space in which the medical diagnosis apparatus is to move; predicting the travel route of the medical diagnosis apparatus on the basis of a steering angle of the medical diagnosis apparatus; and displaying information regarding the travel route on the travel image.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: September 19, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Myeong-je Kim
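Predicting a travel route from a steering angle is commonly done with a bicycle-model arc of radius wheelbase / tan(steering angle); the sketch below uses that standard approximation with invented dimensions, and is not taken from the patent.

# Sketch of predicting the travel route of a wheeled apparatus from its
# steering angle (in the spirit of US 9,766,072), using a bicycle model.
import math

def predict_route(steering_angle_rad, wheelbase=0.8, step=0.05, n_points=40):
    x = y = heading = 0.0
    points = [(x, y)]
    for _ in range(n_points):
        heading += (step / wheelbase) * math.tan(steering_angle_rad)
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        points.append((x, y))
    return points  # project these onto the travel image to display the route

route = predict_route(math.radians(12.0))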
  • Patent number: 9766624
    Abstract: A robotic system that includes a mobile robot and a remote input device. The input device may be a joystick that is used to move a camera and a mobile platform of the robot. The system may operate in a mode where the mobile platform moves in a camera reference coordinate system. The camera reference coordinate system is fixed to a viewing image provided by the camera so that movement of the robot corresponds to a direction viewed on a screen. This prevents disorientation during movement of the robot if the camera is panned across a viewing area.
    Type: Grant
    Filed: February 9, 2015
    Date of Patent: September 19, 2017
    Assignee: INTOUCH TECHNOLOGIES, INC.
    Inventors: Yulun Wang, Charles S. Jordan, Keith P. Laby, Jonathan Southard, Marco Pinter, Brian Miller
  • Patent number: 9734391
    Abstract: Disclosed methods include a method of controlling a computing device that includes the steps of detecting a gesture made by a human user, identifying the gesture, and executing a computer command. The gesture may be detected via a 2D camera in electronic communication with the computing device, and may comprise a change in depth of a body part of the human user relative to the 2D camera. Disclosed systems include a 2D camera and a computing device in electronic communication therewith. The 2D camera is configured to capture at least a first and second image of a body part of a human user. The computing device is configured to recognize at least a first object in the first image and a second object in the second image, identify a change in depth, and execute a command in response to the change in depth.
    Type: Grant
    Filed: July 10, 2015
    Date of Patent: August 15, 2017
    Inventors: Ryan Fink, Ryan Phelps, Gary Peck
  • Patent number: 9720410
    Abstract: Example systems and methods enable an autonomous vehicle to request assistance from a remote operator in certain predetermined situations. One example method includes determining a representation of an environment of an autonomous vehicle based on sensor data of the environment. Based on the representation, the method may also include identifying a situation from a predetermined set of situations for which the autonomous vehicle will request remote assistance. The method may further include sending a request for assistance to a remote assistor, the request including the representation of the environment and the identified situation. The method may additionally include receiving a response from the remote assistor indicating an autonomous operation. The method may also include causing the autonomous vehicle to perform the autonomous operation.
    Type: Grant
    Filed: March 3, 2014
    Date of Patent: August 1, 2017
    Assignee: Waymo LLC
    Inventors: Nathaniel Fairfield, Joshua Seth Herbach, Vadim Furman
  • Patent number: 9717607
    Abstract: Embodiments of the present invention provide for one or more processors to receive image data of an object selected by a user and determine image attributes of the selected object, based on image analytics on the image data. One or more processors determine whether the image attributes of the selected object match an identified object of a knowledge base, in which an identified object includes image attributes and manipulation data corresponding to the identified object. Responsive to determining that the object selected by the user of the prosthetic device matches an identified object of the knowledge base, one or more processors transmit the manipulation data corresponding to the identified object matching the object selected by the user to a mobile controlling device communicatively connected to the prosthetic device, wherein the mobile controlling device applies the manipulation data corresponding to the identified object to the prosthetic device.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: August 1, 2017
    Assignee: International Business Machines Corporation
    Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit, Craig M. Trim
  • Patent number: 9704049
    Abstract: The present disclosure relates to an apparatus configured to adjust a processing function for image data for a vehicle control system. The apparatus comprises an image sensor configured to capture the image data corresponding to a field of view. The image sensor is in communication with a controller which is further in communication with an accelerometer. The controller is operable to receive the image data from the image sensor and receive an acceleration signal from the accelerometer. The accelerometer signal may be utilized to identify a direction of gravity relative to the image sensor.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: July 11, 2017
    Assignee: GENTEX CORPORATION
    Inventors: Christopher A. Peterson, John C. Peterson
  • Patent number: 9688501
    Abstract: A device and a method for separating sheet material are provided. The separation of sheet material is not effected statically; rather, a suitable pick-up position and/or a suitable pick-up mechanism is selected for each sheet material piece, for example depending on the quality of a surface of the sheet material piece to be picked up, in order to pick up the sheet material piece from the sheet material stack and remove it from the stack.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: June 27, 2017
    Assignee: WINCOR NIXDORF INTERNATIONAL GMBH
    Inventors: Matthias Lochbichler, Christopher Lankeit, Martin Landwehr, Ludger Hoischen
  • Patent number: 9690293
    Abstract: A system for navigating an autonomous vehicle along a road segment is disclosed. The system may have at least one processor. The processor may be programmed to receive, from an image capture device, images representative of an environment of the autonomous vehicle. The processor may also be programmed to determine a traveled trajectory along the road segment based on analysis of the images. Further, the processor may be programmed to determine a current location of the autonomous vehicle along a predetermined road model trajectory based on analysis of one or more of the plurality of images. The processor may also be programmed to determine a heading direction based on the determined traveled trajectory. In addition, the processor may be programmed to determine a steering direction, relative to the heading direction, by comparing the traveled trajectory to the predetermined road model trajectory at the current location of the autonomous vehicle.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: June 27, 2017
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Amnon Shashua, Aran Reisman, Daniel Braunstein, Yoav Taieb, Igor Tubis
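A compact sketch of the final comparison step, assuming headings are taken by finite differences and the index of the current location on the road model trajectory is already known; both assumptions are for illustration only.

# Sketch of the steering computation described for US 9,690,293: heading is
# taken from the recently traveled trajectory, and the steering direction is
# the signed angle from that heading to the model trajectory's local direction.
import math

def heading(p_prev, p_curr):
    return math.atan2(p_curr[1] - p_prev[1], p_curr[0] - p_prev[0])

def steering_direction(traveled, model, current_index):
    """Signed angle (rad) from the traveled heading to the model heading."""
    h_traveled = heading(traveled[-2], traveled[-1])
    h_model = heading(model[current_index], model[current_index + 1])
    delta = h_model - h_traveled
    return math.atan2(math.sin(delta), math.cos(delta))  # wrap to [-pi, pi]

traveled = [(0.0, 0.0), (1.0, 0.05)]
model = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.5)]
steer = steering_direction(traveled, model, current_index=1)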
  • Patent number: 9681107
    Abstract: A camera scope inspection system with a flexible, tether-mounted camera head that is maneuverable in confined internal cavities of power generation machinery. A camera head position sensing system inferentially determines the three-dimensional (3D) position of the camera head within the inspected machinery. Camera head position data are correlated with camera image data by a controller. In this manner, correlated internal inspection image data and corresponding position data are available for future analysis and image tracking.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: June 13, 2017
    Assignee: SIEMENS ENERGY, INC.
    Inventors: Clifford Hatcher, Jr., Forrest R. Ruhge
  • Patent number: 9615890
    Abstract: A surgical robot system includes a slave system to perform a surgical operation on a patient and an imaging system. The imaging system includes an image capture unit including a plurality of cameras to acquire a plurality of affected area images; an image generator that detects an occluded region in each of the affected area images acquired by the plurality of cameras, removes the occluded region, warps each of the affected area images from which the occluded region has been removed, and matches the affected area images to generate a final image; and a controller that drives each of the plurality of cameras of the image capture unit to acquire the plurality of affected area images and inputs the acquired images to the image generator to generate the final image.
    Type: Grant
    Filed: August 13, 2013
    Date of Patent: April 11, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Won Jun Hwang, Kyung Shik Roh, Suk June Yoon, Seung Yong Hyung
  • Patent number: 9601019
    Abstract: A cleaning robot includes a non-circular main body, a moving assembly mounted on a bottom surface of the main body to perform forward movement, backward movement and rotation of the main body, a cleaning tool assembly mounted on the bottom surface of the main body to clean a floor, a detector to detect an obstacle around the main body, and a controller to determine whether an obstacle is present in a forward direction of the main body based on a detection signal of the detector, control the rotation of the main body to determine whether the main body rotates by a predetermined angle or more upon determining that the obstacle is present in the forward direction, and determine that the main body is in a stuck state to control the backward movement of the main body if the main body rotates by the predetermined angle or less.
    Type: Grant
    Filed: May 20, 2014
    Date of Patent: March 21, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: In Joo Kim, Dong Min Shin, Shin Kim
  • Patent number: 9599988
    Abstract: There is provided a mobile carrier and an auto following system using the mobile carrier. The mobile carrier is capable of capturing at least an image of a guiding light source and automatically following the guiding light source based on the captured image of the guiding light source. The mobile carrier is further disposed with a mobile light source for a remote image sensing device to capture an image of the mobile light source while the mobile carrier cannot capture the image of the guiding light source, so that the mobile carrier can be guided by a control signal provided according to the captured image of the mobile light source.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: March 21, 2017
    Assignee: PIXART IMAGING INC.
    Inventors: Chia-Cheun Liang, Ming-Tsan Kao, Yi-Hsien Ko, Hsin-Chia Chen
  • Patent number: 9592095
    Abstract: A medical robotic system and a method of operating such a system comprise taking intraoperative external image data of a patient anatomy, and using that image data to generate a modeling adjustment for a control system of the medical robotic system (e.g., updating the anatomic model and/or refining instrument registration), and/or to adjust a procedure control aspect (e.g., regulating substance or therapy delivery, improving targeting, and/or tracking performance).
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: March 14, 2017
    Assignee: Intuitive Surgical Operations, Inc.
    Inventors: Dorin Panescu, Jonathan Michael Sorger, Prashant Chopra, Tao Zhao
  • Patent number: 9558424
    Abstract: A method determines motion between first and second coordinate systems by first extracting first and second sets of keypoints from first and second images acquired of a scene by a camera arranged on a moving object. First and second poses are determined from the first and second sets of keypoints. A score for each possible motion between the first and the second poses is determined using a scoring function and a pose-transition graph constructed from training data, where each node in the pose-transition graph represents a relative pose and each edge represents a motion between two consecutive relative poses. Then, based on the score, a best motion is selected as the motion between the first and second coordinate systems.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 31, 2017
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Srikumar Ramalingam, Gim Hee Lee
  • Patent number: 9470548
    Abstract: Disclosed are a calibration device, a calibration system and a calibration method. The calibration device includes a camera configured to capture image information, a laser sensor configured to detect distance information, and a calibration module configured to perform a calibration of the camera and the laser sensor by obtaining a relation between the image information and the distance information, wherein the calibration module includes a plane member disposed to intersect a scanning surface of the laser sensor such that an intersection line is generated, and disposed within a capturing range of the camera so as to be captured by the camera, and a controller configured to perform coordinate conversions with respect to the image information and the distance information based on a ratio between one side of the plane member and the intersection line, and based on a plane member image included in the image information.
    Type: Grant
    Filed: July 24, 2013
    Date of Patent: October 18, 2016
    Assignee: AGENCY FOR DEFENSE DEVELOPMENT
    Inventors: Seong Yong Ahn, Tok Son Choe, Yong Woon Park, Won Seok Lee
  • Patent number: 9466013
    Abstract: A computer vision service includes technologies to, among other things, analyze computer vision or learning tasks requested by computer applications, select computer vision or learning algorithms to execute the requested tasks based on one or more performance capabilities of the computer vision or learning algorithms, perform the computer vision or learning tasks for the computer applications using the selected algorithms, and expose the results of performing the computer vision or learning tasks for use by the computer applications.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: October 11, 2016
    Assignee: SRI INTERNATIONAL
    Inventors: Harpreet Singh Sawhney, Jayakrishnan Eledath, Saad Ali, Bogdan C. Matei, Steven S. Weiner, Xutao Lv, Timothy J. Shields
  • Patent number: 9449393
    Abstract: A plane detection apparatus for detecting at least one plane model from an input depth image. The plane detection apparatus may include an image divider to divide the input depth image into a plurality of patches, a plane model estimator to calculate one or more plane models with respect to the plurality of patches including a first patch and a second patch, and a patch merger to iteratively merge patches having a plane model similarity greater than or equal to a first threshold by comparing plane models of the plurality of patches. When a patch having the plane model similarity greater than or equal to the first threshold is absent, the plane detection apparatus may determine at least one final plane model with respect to the input depth image using previously merged patches.
    Type: Grant
    Filed: January 22, 2013
    Date of Patent: September 20, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seon Min Rhee, Do Kyoon Kim, Yong Beom Lee, Tae Hyun Rhee
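A minimal sketch of the divide, fit, and merge pipeline, assuming square patches, least-squares plane fits, and a cosine-similarity test on normals as the "plane model similarity"; patch size, threshold, and the single-pass merge are invented for the illustration.

# Sketch of patch-based plane detection (in the spirit of US 9,449,393): split
# a depth image into patches, fit a plane to each patch, and greedily merge
# patches whose plane normals agree.
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c; returns the unit normal."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

def patch_planes(depth, patch=16):
    h, w = depth.shape
    planes = []
    for r in range(0, h - patch + 1, patch):
        for col in range(0, w - patch + 1, patch):
            ys, xs = np.mgrid[r:r + patch, col:col + patch]
            pts = np.c_[xs.ravel(), ys.ravel(),
                        depth[r:r + patch, col:col + patch].ravel()]
            planes.append(fit_plane(pts))
    return planes

def merge_similar(planes, threshold=0.99):
    """Group patches whose normals have cosine similarity >= threshold."""
    groups = []
    for n in planes:
        for g in groups:
            if abs(np.dot(n, g[0])) >= threshold:
                g.append(n)
                break
        else:
            groups.append([n])
    return groups  # each group approximates one final plane model

depth = np.fromfunction(lambda y, x: 0.01 * x + 0.02 * y + 2.0, (64, 64))
groups = merge_similar(patch_planes(depth))  # one dominant group expected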
  • Patent number: 9440354
    Abstract: A robot having a signal sensor configured to measure a signal, a motion sensor configured to measure a relative change in pose, a local correlation component configured to correlate the signal with the position and/or orientation of the robot in a local region including the robot's current position, and a localization component configured to apply a filter to estimate the position and optionally the orientation of the robot based at least on a location reported by the motion sensor, a signal detected by the signal sensor, and the signal predicted by the local correlation component. The local correlation component and/or the localization component may take into account rotational variability of the signal sensor and other parameters related to time and pose dependent variability in how the signal and motion sensor perform. Each estimated pose may be used to formulate new or updated navigational or operational instructions for the robot.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: September 13, 2016
    Assignee: iRobot Corporation
    Inventors: Steffen Gutmann, Ethan Eade, Philip Fong, Mario Munich
  • Patent number: 9415310
    Abstract: A method generates a three-dimensional map of a region from successive images of that region captured from different camera poses. The method captures successive images of the region, detects a gravitational vertical direction in respect of each captured image, detects feature points within the captured images and designates a subset of the captured images as a set of keyframes each having respective sets of image position data representing image positions of landmark points detected as feature points in that image. The method also includes, for a captured image (i) deriving a camera pose from detected feature points in the image; (ii) rotating the gravitational vertical direction to the coordinates of a reference keyframe using the camera poses derived for that image and the reference keyframe; and (iii) comparing the rotated direction with the actual gravitational vertical direction for the reference keyframe to detect a quality measure of that image.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: August 16, 2016
    Assignee: Sony Computer Entertainment Europe Limited
    Inventor: Antonio Martini
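Step (iii) can be pictured as the angle between the rotated and the measured gravity directions; the sketch below uses SciPy rotations and an invented 3-degree threshold, which are stand-ins rather than the patent's implementation.

# Sketch of the keyframe quality measure described for US 9,415,310: rotate the
# gravity direction measured for the current image into the reference
# keyframe's coordinates using the two estimated camera rotations, then compare
# it with the gravity direction measured for that keyframe.
import numpy as np
from scipy.spatial.transform import Rotation as R

def quality_angle(g_image, g_keyframe, rot_world_to_image, rot_world_to_keyframe):
    """Angle (rad) between the rotated and the measured gravity directions."""
    g_in_keyframe = (rot_world_to_keyframe * rot_world_to_image.inv()).apply(g_image)
    cosine = np.clip(np.dot(g_in_keyframe, g_keyframe), -1.0, 1.0)
    return float(np.arccos(cosine))

g_img = np.array([0.0, -0.98, -0.20]); g_img /= np.linalg.norm(g_img)
g_key = np.array([0.0, -1.0, 0.0])
angle = quality_angle(g_img, g_key,
                      rot_world_to_image=R.from_euler("x", 11.5, degrees=True),
                      rot_world_to_keyframe=R.identity())
good_keyframe_pair = angle < np.radians(3.0)  # assumed quality threshold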
  • Patent number: 9417154
    Abstract: A method for performing a dynamic load test on a bridge includes providing a vehicle with an imaging device coupled to the vehicle and moving the vehicle across the bridge. While moving the vehicle across the bridge, a series of images is obtained using the imaging device. A position of the vehicle on the bridge is determined as a function of time using the series of images, and a response of the bridge is determined as a function of time as the vehicle crosses the bridge. The position of the vehicle on the bridge is associated with the response of the bridge.
    Type: Grant
    Filed: May 20, 2014
    Date of Patent: August 16, 2016
    Assignee: Trimble Navigation Limited
    Inventors: Darin Muncy, Curt Conquest
  • Patent number: 9402151
    Abstract: The present invention provides a method for recognizing a position of a mobile robot by using arbitrarily shaped ceiling features on a ceiling, comprising: a providing step of providing a mobile robot device for recognizing a position by using arbitrarily shaped ceiling features on a ceiling which includes an image input unit, an encoder sensing unit, a computation unit, a control unit, a storage unit, and a driving unit; a feature extraction step of extracting features which include an arbitrarily shaped ceiling feature from an outline extracted from image information inputted through the image input unit; and a localization step of recognizing a position of the mobile robot device by using the extracted features, wherein, in the feature extraction step, a descriptor indicating the characteristics of the arbitrarily shaped ceiling feature is assigned.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: July 26, 2016
    Assignee: KOREA UNIVERSITY RESEARCH AND BUSINESS FOUNDATION
    Inventors: Jae-Bok Song, Seo-Yeon Hwang
  • Patent number: 9296421
    Abstract: A trailer target detection system includes a camera located on a vehicle and arranged to capture video images of a trailer towed by the vehicle. The system includes an image processor processing the video images to detect a gesture by a user indicative of position of target on the trailer and identifying location of the target based on the detected gesture. A vehicle trailer backup assist system includes a camera and an image processor for processing images to detect a gesture and determine a command based on the gesture. The processor controls backing of the trailer based on the command.
    Type: Grant
    Filed: March 6, 2014
    Date of Patent: March 29, 2016
    Assignee: Ford Global Technologies, LLC
    Inventor: Erick Michael Lavoie
  • Patent number: 9251394
    Abstract: Undated photos are organized by estimating the date of each photo. The date is estimated by building a model based on a set of reference photos having established dates, and comparing image characteristics of the undated photo to the image characteristics of the reference photos. The photo characteristics can include hues, saturation, intensity, contrast, sharpness and graininess as represented by image pixel data. Once the date of a photo is estimated, it can be tagged with identifying information, such as by using the estimated date to associate the photo with a node in a family tree.
    Type: Grant
    Filed: April 5, 2012
    Date of Patent: February 2, 2016
    Assignee: Ancestry.com Operations Inc.
    Inventors: Chris Brookhart, Jack Reese
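A toy sketch of the model-building and estimation steps, assuming three crude pixel-statistic features and an ordinary least-squares regression onto the known years; the real system's characteristics (hues, saturation, intensity, contrast, sharpness, graininess) and model are certainly richer.

# Sketch of estimating a photo's date from pixel statistics (in the spirit of
# US 9,251,394): fit a simple regression on reference photos with known dates,
# then predict the year of an undated photo.
import numpy as np

def features(image):
    """Crude stand-ins for intensity, contrast, and graininess."""
    img = image.astype(float)
    return np.array([img.mean(), img.std(), np.abs(np.diff(img, axis=1)).mean()])

def fit_date_model(reference_images, reference_years):
    X = np.array([features(im) for im in reference_images])
    X = np.c_[X, np.ones(len(X))]                      # bias term
    coeffs, *_ = np.linalg.lstsq(X, np.array(reference_years, float), rcond=None)
    return coeffs

def estimate_year(coeffs, image):
    return float(np.r_[features(image), 1.0] @ coeffs)

rng = np.random.default_rng(0)
refs = [rng.integers(0, 256, (32, 32)) for _ in range(20)]
years = [1950 + i for i in range(20)]
model = fit_date_model(refs, years)
guess = estimate_year(model, rng.integers(0, 256, (32, 32)))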
  • Patent number: 9183875
    Abstract: Embodiments include systems and methods for detecting logical presence and location of modules, detecting physical presence and location of modules, and mapping the logical and physical locations together for use by the storage library. For example, when an expansion module is installed, it is connected to a network and it reports its logical presence and logical network location to a base controller in the base module. A robotic mechanism is used to trigger one or more presence sensors to detect physical presence and location of the installed expansion module. The base controller or another component generates and stores a mapping between the logical location and the physical location. The storage library can use the mapping to translate between logical and physical functionality.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: November 10, 2015
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: James Lee Ries, Terry Lynn Lane, Timothy Craig Ostwald
  • Patent number: 9152870
    Abstract: A computer vision service includes technologies to, among other things, analyze computer vision or learning tasks requested by computer applications, select computer vision or learning algorithms to execute the requested tasks based on one or more performance capabilities of the computer vision or learning algorithms, perform the computer vision or learning tasks for the computer applications using the selected algorithms, and expose the results of performing the computer vision or learning tasks for use by the computer applications.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: October 6, 2015
    Assignee: SRI INTERNATIONAL
    Inventors: Harpreet Singh Sawhney, Jayakrishan Eledath, Saad Ali, Bogdan C. Matei, Steven S. Weiner, Xutao Lv
  • Patent number: 9110470
    Abstract: The invention is related to methods and apparatus that use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike with laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. One embodiment further advantageously uses multiple particles to maintain multiple hypotheses with respect to localization and mapping. Further advantageously, one embodiment maintains the particles in a relatively computationally-efficient manner, thereby permitting the SLAM processes to be performed in software using relatively inexpensive microprocessor-based computer systems.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: August 18, 2015
    Assignee: iRobot Corporation
    Inventors: L. Niklas Karlsson, Paolo Pirjanian, Luis Filipe Domingues Goncalves, Enrico Di Bernardo
  • Patent number: 9098913
    Abstract: Given an image and an aligned depth map of an object, the invention predicts the 3D location, 3D orientation and opening width or area of contact for an end of arm tooling (EOAT) without requiring a physical model.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: August 4, 2015
    Assignee: Cornell University
    Inventors: Yun Jiang, John R. Amend, Jr., Hod Lipson, Ashutosh Saxena, Stephen Moseson
  • Patent number: 9091553
    Abstract: Embodiments of the present invention provide improved systems and methods for matching scenes. In one embodiment, a processor for implementing robust feature matching between images comprises: a first process for extracting a first feature set from a first image projection and extracting a second feature set from a second image projection; a memory for storing the first feature set and the second feature set; and a second process for feature matching using invariant mutual relations between features of the first feature set and the second feature set; wherein the second feature set is selected from the second image projection based on the identification of similar descriptive subsets between the second image projection and the first image projection.
    Type: Grant
    Filed: December 22, 2009
    Date of Patent: July 28, 2015
    Assignee: Honeywell International Inc.
    Inventors: Ondrej Kotaba, Jan Lukas
  • Patent number: 9064150
    Abstract: A system receives a two-dimensional digital image of an aerial industrial plant area. Based on requirements of image processing, the image is zoomed in to different sub-images that are referred to as first images. The system identifies circular tanks, vegetation areas, process areas, and buildings in the first image. The system formulates a second digital image by concatenating the first images. The system creates one or more polygons of the regions segmented in the second digital image. Each polygon encompasses a tank area, a vegetation area, a process area, or a building area in the second digital image, which is a concatenated image of the individual regions. The system displays the second digital image on a computer display device.
    Type: Grant
    Filed: May 8, 2013
    Date of Patent: June 23, 2015
    Assignee: Honeywell International Inc.
    Inventors: Lalitha M. Eswara, Chetan Nadiger, Kartavya Mohan Gupta
  • Patent number: 9037336
    Abstract: A robot system includes a planar sign, a robot, a distance direction sensor, and a controller. The controller is configured to control the robot and includes a map data memory and a progress direction determining device. The map data memory is configured to store map data of a predetermined running path including a position of the planar sign. The progress direction determining device is configured to compare a detection result of the distance direction sensor and the stored map data so as to determine a progress direction of the robot.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 19, 2015
    Assignee: KABUSHIKI KAISHA YASKAWA DENKI
    Inventors: Dai Kouno, Tamio Nakamura
  • Publication number: 20150131896
    Abstract: A safety monitoring system for human-machine symbiosis is provided, including a spatial image capturing unit, an image recognition unit, a human-robot-interaction safety monitoring unit, and a process monitoring unit. The spatial image capturing unit, disposed in a working area, acquires at least two skeleton images. The image recognition unit generates at least two spatial gesture images corresponding to the at least two skeleton images, based on information of changes in position of the at least two skeleton images with respect to time. The human-robot-interaction safety monitoring unit generates a gesture distribution based on the at least two spatial gesture images and a safety distance. The process monitoring unit determines whether the gesture distribution meets a safety criterion.
    Type: Application
    Filed: March 25, 2014
    Publication date: May 14, 2015
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Jhen-Jia Hu, Hau-Wei Wang, Chung-Ning Huang
  • Patent number: 9025857
    Abstract: A three-dimensional measurement apparatus includes a model holding unit configured to hold a three-dimensional shape model of a measurement object and a determination unit configured to determine a distance measurement region on the measurement object based on information indicating a three-dimensional shape of the measurement object. The measurement object is irradiated with a predetermined illumination pattern by an illumination unit. An image of the measurement object is sensed while the illumination unit irradiates the measurement object. Distance information indicating a distance from the image sensing unit to the measurement object is calculated based on a region corresponding to the distance measurement region within the sensed image. A position and orientation of the measurement object is calculated based on the distance information and the three-dimensional shape model.
    Type: Grant
    Filed: June 18, 2010
    Date of Patent: May 5, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Daisuke Kotake, Shinji Uchiyama
  • Patent number: 9025856
    Abstract: Vision based tracking of a mobile device is used to remotely control a robot. For example, images captured by a mobile device, e.g., in a video stream, are used for vision based tracking of the pose of the mobile device with respect to the imaged environment. Changes in the pose of the mobile device, i.e., the trajectory of the mobile device, are determined and converted to a desired motion of a robot that is remote from the mobile device. The robot is then controlled to move with the desired motion. The trajectory of the mobile device is converted to the desired motion of the robot using a transformation generated by inverting a hand-eye calibration transformation.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: May 5, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Mahesh Ramachandran, Christopher Brunner, Arvind Ramanandan, Abhishek Tyagi, Murali Ramaswamy Chari
  • Patent number: 9002098
    Abstract: Described is a robotic visual perception system for determining a position and pose of a three-dimensional object. The system receives an external input to select an object of interest. The system also receives visual input from a sensor of a robotic controller that senses the object of interest. Rotation-invariant shape features and appearance are extracted from the sensed object of interest and a set of object templates. A match is identified between the sensed object of interest and an object template using shape features. The match between the sensed object of interest and the object template is confirmed using appearance features. The sensed object is then identified, and a three-dimensional pose of the sensed object of interest is determined. Based on the determined three-dimensional pose of the sensed object, the robotic controller is used to grasp and manipulate the sensed object of interest.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 7, 2015
    Assignee: HRL Laboratories, LLC
    Inventors: Suhas E. Chelian, Rashmi N. Sundareswara, Heiko Hoffmann
  • Patent number: 8994776
    Abstract: A telepresence robot uses a series of connectible modules and preferably includes a head module adapted to receive and cooperate with a third party telecommunication device that includes a display screen. The module design provides cost advantages with respect to shipping and storage while also allowing flexibility in robot configuration and specialized applications.
    Type: Grant
    Filed: November 14, 2011
    Date of Patent: March 31, 2015
    Assignee: CrossWing Inc.
    Inventors: Stephen Sutherland, Sam Coulombe, Dale Wick