Robotics Patents (Class 382/153)
  • Patent number: 9704049
    Abstract: The present disclosure relates to an apparatus configured to adjust a processing function for image data for a vehicle control system. The apparatus comprises an image sensor configured to capture the image data corresponding to a field of view. The image sensor is in communication with a controller which is further in communication with an accelerometer. The controller is operable to receive the image data from the image sensor and receive an acceleration signal from the accelerometer. The acceleration signal may be utilized to identify a direction of gravity relative to the image sensor.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: July 11, 2017
    Assignee: GENTEX CORPORATION
    Inventors: Christopher A. Peterson, John C. Peterson
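As a rough illustration of the gravity-identification step described in this abstract, the sketch below normalizes an accelerometer reading into a unit gravity vector and derives a camera roll angle from it. The function names are hypothetical, and the code assumes the sensor is near rest so the reading is dominated by gravity; the patent's actual processing is not specified in the abstract.

```python
import math

def gravity_direction(ax, ay, az):
    """Normalize a raw accelerometer reading into a unit gravity vector.

    When the sensor is at rest, the measured acceleration is dominated
    by gravity, so the normalized vector points 'down' in sensor frame.
    """
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    if norm == 0.0:
        raise ValueError("zero acceleration reading")
    return (ax / norm, ay / norm, az / norm)

def camera_roll_deg(gx, gy):
    """Roll of the image sensor about its optical axis, in degrees.

    Uses the x/y components of gravity in the sensor frame; 0 degrees
    means the sensor's vertical axis is aligned with gravity.
    """
    return math.degrees(math.atan2(gx, gy))

# Example: sensor tilted so gravity projects equally on its x and y axes.
g = gravity_direction(1.0, 1.0, 0.0)
print(camera_roll_deg(g[0], g[1]))  # 45.0
```

A vehicle-camera system could use such a roll angle to, for instance, re-level a detected horizon before running lane or object detection.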
  • Patent number: 9690293
    Abstract: A system for navigating an autonomous vehicle along a road segment is disclosed. The system may have at least one processor. The processor may be programmed to receive from an image capture device, images representative of an environment of the autonomous vehicle. The processor may also be programmed to determine a travelled trajectory along the road segment based on analysis of the images. Further, the processor may be programmed to determine a current location of the autonomous vehicle along a predetermined road model trajectory based on analysis of one or more of the plurality of images. The processor may also be programmed to determine a heading direction based on the determined traveled trajectory. In addition, the processor may be programmed to determine a steering direction, relative to the heading direction, by comparing the traveled trajectory to the predetermined road model trajectory at the current location of the autonomous vehicle.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: June 27, 2017
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Amnon Shashua, Aran Reisman, Daniel Braunstein, Yoav Taieb, Igor Tubis
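The heading/steering logic this abstract describes can be sketched in two dimensions: derive a heading from the traveled trajectory, then compute a signed steering correction relative to the predetermined road model trajectory. This is a minimal geometric sketch under assumed 2D point-list inputs, not the patent's actual method.

```python
import math

def heading(trajectory):
    """Heading angle (radians) from the last two points of a 2D trajectory."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    return math.atan2(y1 - y0, x1 - x0)

def steering_direction(traveled, model):
    """Signed steering correction (radians), relative to current heading.

    Positive means steer left toward the model trajectory's heading,
    negative means steer right; the result is wrapped to (-pi, pi].
    """
    delta = heading(model) - heading(traveled)
    return math.atan2(math.sin(delta), math.cos(delta))

traveled = [(0.0, 0.0), (1.0, 0.0)]        # heading: 0 rad (east)
model = [(0.0, 0.0), (1.0, 1.0)]           # heading: pi/4 (north-east)
print(steering_direction(traveled, model))  # pi/4: steer left toward the model
```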
  • Patent number: 9688501
    Abstract: A device and a method for separating sheet material is provided. The separation of sheet material is not effected statically, but a suitable pick-up position and/or a suitable pick-up mechanism is selected for each sheet material piece, for example in dependence on the quality of a surface of the sheet material piece to be picked up, in order to pick up the sheet material piece from the sheet material stack and remove it from the same.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: June 27, 2017
    Assignee: WINCOR NIXDORF INTERNATIONAL GMBH
    Inventors: Matthias Lochbichler, Christopher Lankeit, Martin Landwehr, Ludger Hoischen
  • Patent number: 9681107
    Abstract: A camera scope inspection system with a flexible, tether mounted camera head that is maneuverable in confined internal cavities of power generation machinery. A camera head position sensing system inferentially determines the three-dimensional (3D) position of the camera head within the inspected machinery. Camera head position data are correlated with camera image data by a controller. In this manner, correlated internal inspection image data and corresponding position data are available for future analysis and image tracking.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: June 13, 2017
    Assignee: SIEMENS ENERGY, INC.
    Inventors: Clifford Hatcher, Jr., Forrest R. Ruhge
  • Patent number: 9615890
    Abstract: A surgical robot system includes a slave system to perform a surgical operation on a patient and an imaging system that includes an image capture unit including a plurality of cameras to acquire a plurality of affected area images, an image generator detecting an occluded region in each of the affected area images acquired by the plurality of cameras, removing the occluded region therefrom, warping each of the affected area images from which the occluded region is removed, and matching the affected area images to generate a final image, and a controller driving each of the plurality of cameras of the image capture unit to acquire the plurality of affected area images and inputting the acquired plurality of affected area images to the image generator to generate a final image.
    Type: Grant
    Filed: August 13, 2013
    Date of Patent: April 11, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Won Jun Hwang, Kyung Shik Roh, Suk June Yoon, Seung Yong Hyung
  • Patent number: 9601019
    Abstract: A cleaning robot includes a non-circular main body, a moving assembly mounted on a bottom surface of the main body to perform forward movement, backward movement and rotation of the main body, a cleaning tool assembly mounted on the bottom surface of the main body to clean a floor, a detector to detect an obstacle around the main body, and a controller to determine whether an obstacle is present in a forward direction of the main body based on a detection signal of the detector, control the rotation of the main body to determine whether the main body rotates by a predetermined angle or more upon determining that the obstacle is present in the forward direction, and determine that the main body is in a stuck state to control the backward movement of the main body if the main body rotates by the predetermined angle or less.
    Type: Grant
    Filed: May 20, 2014
    Date of Patent: March 21, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: In Joo Kim, Dong Min Shin, Shin Kim
  • Patent number: 9599988
    Abstract: There is provided a mobile carrier and an auto following system using the mobile carrier. The mobile carrier is capable of capturing at least an image of a guiding light source and automatically following the guiding light source based on the captured image of the guiding light source. The mobile carrier is further disposed with a mobile light source for a remote image sensing device to capture an image of the mobile light source while the mobile carrier cannot capture the image of the guiding light source, so that the mobile carrier can be guided by a control signal provided according to the captured image of the mobile light source.
    Type: Grant
    Filed: August 4, 2014
    Date of Patent: March 21, 2017
    Assignee: PIXART IMAGING INC.
    Inventors: Chia-Cheun Liang, Ming-Tsan Kao, Yi-Hsien Ko, Hsin-Chia Chen
  • Patent number: 9592095
    Abstract: A medical robotic system and method of operating such comprises taking intraoperative external image data of a patient anatomy, and using that image data to generate a modeling adjustment for a control system of the medical robotic system (e.g., updating anatomic model and/or refining instrument registration), and/or adjust a procedure control aspect (e.g., regulating substance or therapy delivery, improving targeting, and/or tracking performance).
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: March 14, 2017
    Assignee: Intuitive Surgical Operations, Inc.
    Inventors: Dorin Panescu, Jonathan Michael Sorger, Prashant Chopra, Tao Zhao
  • Patent number: 9558424
    Abstract: A method determines motion between first and second coordinate systems by first extracting first and second sets of keypoints from first and second images acquired of a scene by a camera arranged on a moving object. First and second poses are determined from the first and second sets of keypoints. A score for each possible motion between the first and the second poses is determined using a scoring function and a pose-transition graph constructed from training data where each node in the pose-transition graph represents a relative pose and each edge represents a motion between two consecutive relative poses. Then, based on the score, a best motion is selected as the motion between the first and second coordinate systems.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: January 31, 2017
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Srikumar Ramalingam, Gim Hee Lee
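The scoring step can be illustrated with one plausible scoring function: a prior over motions derived from how often each motion appears as an edge in the training-data graph, discounted by the geometric error of that motion against the two estimated poses. The abstract does not specify the actual scoring function, so the names and the formula below are illustrative assumptions.

```python
def best_motion(candidates, edge_counts, pose_error):
    """Select the best motion hypothesis between two consecutive poses.

    candidates  -- motion labels to score (e.g. 'forward', 'turn-left')
    edge_counts -- how often each motion appeared as an edge in a
                   pose-transition graph built from training data (a prior)
    pose_error  -- geometric error of each motion against the two
                   keypoint-derived poses

    Score = prior probability of the motion, discounted by its geometric
    error; the highest-scoring candidate wins.
    """
    total = sum(edge_counts.values())

    def score(motion):
        prior = edge_counts.get(motion, 0) / total
        return prior / (1.0 + pose_error[motion])

    return max(candidates, key=score)

counts = {"forward": 80, "turn-left": 15, "turn-right": 5}
errors = {"forward": 0.1, "turn-left": 0.9, "turn-right": 0.8}
print(best_motion(list(counts), counts, errors))  # forward
```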
  • Patent number: 9470548
    Abstract: Disclosed are a calibration device, a calibration system and a calibration method. The calibration device includes a camera configured to capture image information, a laser sensor configured to detect distance information, and a calibration module configured to perform a calibration of the camera and the laser sensor by obtaining a relation between the image information and the distance information, wherein the calibration module includes a plane member disposed to intersect a scanning surface of the laser sensor such that an intersection line is generated, and disposed within a capturing range by the camera so as to be captured by the camera, and a controller configured to perform coordinate conversions with respect to the image information and the distance information based on a ratio between one side of the plane member and the intersection line, and based on a plane member image included in the image information.
    Type: Grant
    Filed: July 24, 2013
    Date of Patent: October 18, 2016
    Assignee: AGENCY FOR DEFENSE DEVELOPMENT
    Inventors: Seong Yong Ahn, Tok Son Choe, Yong Woon Park, Won Seok Lee
  • Patent number: 9466013
    Abstract: A computer vision service includes technologies to, among other things, analyze computer vision or learning tasks requested by computer applications, select computer vision or learning algorithms to execute the requested tasks based on one or more performance capabilities of the computer vision or learning algorithms, perform the computer vision or learning tasks for the computer applications using the selected algorithms, and expose the results of performing the computer vision or learning tasks for use by the computer applications.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: October 11, 2016
    Assignee: SRI INTERNATIONAL
    Inventors: Harpreet Singh Sawhney, Jayakrishnan Eledath, Saad Ali, Bogdan C. Matei, Steven S. Weiner, Xutao Lv, Timothy J. Shields
  • Patent number: 9449393
    Abstract: A plane detection apparatus for detecting at least one plane model from an input depth image. The plane detection apparatus may include an image divider to divide the input depth image into a plurality of patches, a plane model estimator to calculate one or more plane models with respect to the plurality of patches including a first patch and a second patch, and a patch merger to iteratively merge patches having a plane model similarity greater than or equal to a first threshold by comparing plane models of the plurality of patches. When a patch having the plane model similarity greater than or equal to the first threshold is absent, the plane detection apparatus may determine at least one final plane model with respect to the input depth image using previously merged patches.
    Type: Grant
    Filed: January 22, 2013
    Date of Patent: September 20, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seon Min Rhee, Do Kyoon Kim, Yong Beom Lee, Tae Hyun Rhee
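The iterative patch-merging idea can be sketched as follows, representing each patch's plane model as unit-normal-plus-offset parameters `(nx, ny, nz, d)`. The similarity measure and the weighted-average merge rule are illustrative assumptions; the patent does not disclose its exact formulas in the abstract.

```python
def similarity(p, q):
    """Similarity of two plane models (nx, ny, nz, d): alignment of the
    unit normals, penalized by the difference in plane offsets."""
    dot = p[0] * q[0] + p[1] * q[1] + p[2] * q[2]
    return dot - abs(p[3] - q[3])

def merge_patches(patches, threshold=0.95):
    """Iteratively merge patches whose plane models are similar enough.

    patches: list of (plane, pixel_count). A merged patch takes the
    pixel-count-weighted average of the two planes' parameters.
    Stops when no pair exceeds the similarity threshold.
    """
    patches = list(patches)
    merged = True
    while merged:
        merged = False
        for i in range(len(patches)):
            for j in range(i + 1, len(patches)):
                (p, wp), (q, wq) = patches[i], patches[j]
                if similarity(p, q) >= threshold:
                    w = wp + wq
                    avg = tuple((p[k] * wp + q[k] * wq) / w for k in range(4))
                    patches[i] = (avg, w)
                    del patches[j]
                    merged = True
                    break
            if merged:
                break
    return patches

floor_a = ((0.0, 0.0, 1.0, 2.0), 100)      # two patches on the same floor
floor_b = ((0.0, 0.01, 0.9999, 2.01), 80)
wall = ((1.0, 0.0, 0.0, 5.0), 50)          # a perpendicular plane
print(len(merge_patches([floor_a, floor_b, wall])))  # 2: floor patches merge
```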
  • Patent number: 9440354
    Abstract: A robot having a signal sensor configured to measure a signal, a motion sensor configured to measure a relative change in pose, a local correlation component configured to correlate the signal with the position and/or orientation of the robot in a local region including the robot's current position, and a localization component configured to apply a filter to estimate the position and optionally the orientation of the robot based at least on a location reported by the motion sensor, a signal detected by the signal sensor, and the signal predicted by the local correlation component. The local correlation component and/or the localization component may take into account rotational variability of the signal sensor and other parameters related to time and pose dependent variability in how the signal and motion sensor perform. Each estimated pose may be used to formulate new or updated navigational or operational instructions for the robot.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: September 13, 2016
    Assignee: iRobot Corporation
    Inventors: Steffen Gutmann, Ethan Eade, Philip Fong, Mario Munich
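The filter this abstract describes can be illustrated with a single predict/update/resample step of a particle filter over 2D poses, where a signal map plays the role of the local correlation component. Everything here (Gaussian likelihood, noise magnitudes, the beacon model) is an assumed simplification, not the patent's filter.

```python
import math, random

def localize(particles, measured_signal, signal_map, motion, noise=0.5):
    """One predict/update/resample step of a signal-strength particle filter.

    particles       -- list of (x, y) pose hypotheses
    measured_signal -- scalar reading from the signal sensor
    signal_map      -- function (x, y) -> predicted signal at that pose
                       (standing in for the 'local correlation' component)
    motion          -- (dx, dy) reported by the motion sensor
    """
    # Predict: apply odometry with a little diffusion noise.
    moved = [(x + motion[0] + random.gauss(0, 0.05),
              y + motion[1] + random.gauss(0, 0.05)) for x, y in particles]
    # Update: weight by Gaussian likelihood of the signal measurement.
    weights = [math.exp(-((signal_map(x, y) - measured_signal) ** 2)
                        / (2 * noise ** 2)) for x, y in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample proportionally to weight.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
beacon = lambda x, y: 10.0 - math.hypot(x - 5.0, y - 5.0)  # peak at (5, 5)
cloud = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
for _ in range(10):
    cloud = localize(cloud, measured_signal=10.0, signal_map=beacon, motion=(0, 0))
cx = sum(x for x, _ in cloud) / len(cloud)
cy = sum(y for _, y in cloud) / len(cloud)
print(round(cx, 1), round(cy, 1))  # the cloud concentrates near the peak (5, 5)
```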
  • Patent number: 9417154
    Abstract: A method for performing a dynamic load test on a bridge includes providing a vehicle with an imaging device coupled to the vehicle and moving the vehicle across the bridge. While moving the vehicle across the bridge, a series of images is obtained using the imaging device. A position of the vehicle on the bridge is determined as a function of time using the series of images, and a response of the bridge is determined as a function of time as the vehicle crosses the bridge. The position of the vehicle on the bridge is associated with the response of the bridge.
    Type: Grant
    Filed: May 20, 2014
    Date of Patent: August 16, 2016
    Assignee: Trimble Navigation Limited
    Inventors: Darin Muncy, Curt Conquest
  • Patent number: 9415310
    Abstract: A method generates a three-dimensional map of a region from successive images of that region captured from different camera poses. The method captures successive images of the region, detects a gravitational vertical direction in respect of each captured image, detects feature points within the captured images and designates a subset of the captured images as a set of keyframes each having respective sets of image position data representing image positions of landmark points detected as feature points in that image. The method also includes, for a captured image (i) deriving a camera pose from detected feature points in the image; (ii) rotating the gravitational vertical direction to the coordinates of a reference keyframe using the camera poses derived for that image and the reference keyframe; and (iii) comparing the rotated direction with the actual gravitational vertical direction for the reference keyframe to detect a quality measure of that image.
    Type: Grant
    Filed: March 13, 2015
    Date of Patent: August 16, 2016
    Assignee: Sony Computer Entertainment Europe Limited
    Inventor: Antonio Martini
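The quality-measure step — rotate the image's gravity direction into a reference keyframe's coordinates and compare it with the gravity actually measured there — can be sketched with plain rotation matrices. Restricting rotation to the optical (z) axis is my simplification for a self-contained example.

```python
import math

def rot_z(theta):
    """3x3 rotation about the z (optical) axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rotate(R, v):
    """Apply a 3x3 rotation to a 3-vector."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def angle_between(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

def quality(g_image, g_keyframe, R_image_to_keyframe):
    """Quality measure: angle (radians) between the image's gravity
    direction rotated into keyframe coordinates and the gravity actually
    measured at the keyframe. Smaller means the derived camera pose is
    more consistent with the gravity sensor."""
    return angle_between(rotate(R_image_to_keyframe, g_image), g_keyframe)

g_key = (0.0, -1.0, 0.0)                         # gravity at the keyframe
g_img = rotate(rot_z(-math.radians(30)), g_key)  # same gravity, image coords
q = quality(g_img, g_key, rot_z(math.radians(30)))
print(q)  # ~0: the derived pose agrees with both gravity measurements
```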
  • Patent number: 9402151
    Abstract: The present invention provides a method for recognizing a position of a mobile robot by using arbitrarily shaped ceiling features on a ceiling, comprising: a providing step of providing a mobile robot device for recognizing a position by using arbitrarily shaped ceiling features on a ceiling which includes an image input unit, an encoder sensing unit, a computation unit, a control unit, a storage unit, and a driving unit; a feature extraction step of extracting features which include an arbitrarily shaped ceiling feature from an outline extracted from image information inputted through the image input unit; and a localization step of recognizing a position of the mobile robot device by using the extracted features, wherein, in the feature extraction step, a descriptor indicating the characteristics of the arbitrarily shaped ceiling feature is assigned.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: July 26, 2016
    Assignee: KOREA UNIVERSITY RESEARCH AND BUSINESS FOUNDATION
    Inventors: Jae-Bok Song, Seo-Yeon Hwang
  • Patent number: 9296421
    Abstract: A trailer target detection system includes a camera located on a vehicle and arranged to capture video images of a trailer towed by the vehicle. The system includes an image processor that processes the video images to detect a gesture by a user indicating the position of a target on the trailer and identifies the location of the target based on the detected gesture. A vehicle trailer backup assist system includes a camera and an image processor for processing images to detect a gesture and determine a command based on the gesture. The processor controls backing of the trailer based on the command.
    Type: Grant
    Filed: March 6, 2014
    Date of Patent: March 29, 2016
    Assignee: Ford Global Technologies, LLC
    Inventor: Erick Michael Lavoie
  • Patent number: 9251394
    Abstract: Undated photos are organized by estimating the date of each photo. The date is estimated by building a model based on a set of reference photos having established dates, and comparing image characteristics of the undated photo to the image characteristics of the reference photos. The photo characteristics can include hues, saturation, intensity, contrast, sharpness and graininess as represented by image pixel data. Once the date of a photo is estimated, it can be tagged with identifying information, such as by using the estimated date to associate the photo with a node in a family tree.
    Type: Grant
    Filed: April 5, 2012
    Date of Patent: February 2, 2016
    Assignee: Ancestry.com Operations Inc.
    Inventors: Chris Brookhart, Jack Reese
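A crude stand-in for the model this abstract describes: compare an undated photo's pixel-derived feature vector against reference photos with established dates, and average the reference years weighted by inverse distance in feature space. The feature choices and weighting scheme are assumptions for illustration, not the patent's model.

```python
def estimate_date(photo_features, references):
    """Estimate a photo's year from reference photos with known dates.

    photo_features -- feature vector computed from pixel data
                      (e.g. mean saturation, contrast, graininess)
    references     -- list of (feature_vector, year) with established dates

    Weighted average of reference years, weighted by inverse distance
    in feature space (a crude nearest-neighbour fit).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    weighted = [(1.0 / (dist(photo_features, f) + 1e-9), year)
                for f, year in references]
    total = sum(w for w, _ in weighted)
    return sum(w * year for w, year in weighted) / total

refs = [
    ((0.10, 0.05, 0.90), 1935),  # desaturated, grainy: older film stock
    ((0.30, 0.40, 0.40), 1975),
    ((0.55, 0.70, 0.10), 2005),  # saturated, sharp: recent
]
print(round(estimate_date((0.52, 0.68, 0.12), refs)))  # pulled toward 2005
```

The estimated year could then tag the photo, e.g. to associate it with a node in a family tree as the abstract suggests.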
  • Patent number: 9183875
    Abstract: Embodiments include systems and methods for detecting logical presence and location of modules, detecting physical presence and location of modules, and mapping the logical and physical locations together for use by the storage library. For example, when an expansion module is installed, it is connected to a network and it reports its logical presence and logical network location to a base controller in the base module. A robotic mechanism is used to trigger one or more presence sensors to detect physical presence and location of the installed expansion module. The base controller or another component generates and stores a mapping between the logical location and the physical location. The storage library can use the mapping to translate between logical and physical functionality.
    Type: Grant
    Filed: June 20, 2012
    Date of Patent: November 10, 2015
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: James Lee Ries, Terry Lynn Lane, Timothy Craig Ostwald
  • Patent number: 9152870
    Abstract: A computer vision service includes technologies to, among other things, analyze computer vision or learning tasks requested by computer applications, select computer vision or learning algorithms to execute the requested tasks based on one or more performance capabilities of the computer vision or learning algorithms, perform the computer vision or learning tasks for the computer applications using the selected algorithms, and expose the results of performing the computer vision or learning tasks for use by the computer applications.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: October 6, 2015
    Assignee: SRI INTERNATIONAL
    Inventors: Harpreet Singh Sawhney, Jayakrishnan Eledath, Saad Ali, Bogdan C. Matei, Steven S. Weiner, Xutao Lv
  • Patent number: 9110470
    Abstract: The invention is related to methods and apparatus that use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike with laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. One embodiment further advantageously uses multiple particles to maintain multiple hypotheses with respect to localization and mapping. Further advantageously, one embodiment maintains the particles in a relatively computationally-efficient manner, thereby permitting the SLAM processes to be performed in software using relatively inexpensive microprocessor-based computer systems.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: August 18, 2015
    Assignee: iRobot Corporation
    Inventors: L. Niklas Karlsson, Paolo Pirjanian, Luis Filipe Domingues Goncalves, Enrico Di Bernardo
  • Patent number: 9098913
    Abstract: Given an image and an aligned depth map of an object, the invention predicts the 3D location, 3D orientation and opening width or area of contact for an end of arm tooling (EOAT) without requiring a physical model.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: August 4, 2015
    Assignee: Cornell University
    Inventors: Yun Jiang, John R. Amend, Jr., Hod Lipson, Ashutosh Saxena, Stephen Moseson
  • Patent number: 9091553
    Abstract: Embodiments of the present invention provide improved systems and methods for matching scenes. In one embodiment, a processor for implementing robust feature matching between images comprises: a first process for extracting a first feature set from a first image projection and extracting a second feature set from a second image projection; a memory for storing the first feature set and the second feature set; and a second process for feature matching using invariant mutual relations between features of the first feature set and the second feature set; wherein the second feature set is selected from the second image projection based on the identification of similar descriptive subsets between the second image projection and the first image projection.
    Type: Grant
    Filed: December 22, 2009
    Date of Patent: July 28, 2015
    Assignee: Honeywell International Inc.
    Inventors: Ondrej Kotaba, Jan Lukas
  • Patent number: 9064150
    Abstract: A system receives a two-dimensional digital image of an aerial industrial plant area. Based on the requirements of image processing, the image is zoomed into different sub-images, referred to as first images. The system identifies circular tanks, vegetation areas, process areas, and buildings in the first image. The system formulates a second digital image by concatenating the first images. The system creates one or more polygons of the regions segmented in the second digital image. Each polygon encompasses a tank area, a vegetation area, a process area, or a building area in the second digital image, which is a concatenated image of the individual regions. The system displays the second digital image on a computer display device.
    Type: Grant
    Filed: May 8, 2013
    Date of Patent: June 23, 2015
    Assignee: Honeywell International Inc.
    Inventors: Lalitha M. Eswara, Chetan Nadiger, Kartavya Mohan Gupta
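The "polygon encompassing a region" step can be illustrated with a convex hull over a segmented region's pixel coordinates — a deliberate simplification of whatever polygonization the patent actually uses. The monotone-chain algorithm below is a standard O(n log n) hull construction.

```python
def convex_hull(points):
    """Monotone-chain convex hull: a simple polygon enclosing a set of
    2D pixel coordinates, returned in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

# Pixel coordinates of a segmented tank region (hypothetical):
region = [(1, 1), (4, 1), (4, 4), (1, 4), (2, 2), (3, 3)]
print(convex_hull(region))  # [(1, 1), (4, 1), (4, 4), (1, 4)]
```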
  • Patent number: 9037336
    Abstract: A robot system includes a planar sign, a robot, a distance direction sensor, and a controller. The controller is configured to control the robot and includes a map data memory and a progress direction determining device. The map data memory is configured to store map data of a predetermined running path including a position of the planar sign. The progress direction determining device is configured to compare a detection result of the distance direction sensor and the stored map data so as to determine a progress direction of the robot.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 19, 2015
    Assignee: KABUSHIKI KAISHA YASKAWA DENKI
    Inventors: Dai Kouno, Tamio Nakamura
  • Publication number: 20150131896
    Abstract: A safety monitoring system for human-machine symbiosis is provided, including a spatial image capturing unit, an image recognition unit, a human-robot-interaction safety monitoring unit, and a process monitoring unit. The spatial image capturing unit, disposed in a working area, acquires at least two skeleton images. The image recognition unit generates at least two spatial gesture images corresponding to the at least two skeleton images, based on information of changes in position of the at least two skeleton images with respect to time. The human-robot-interaction safety monitoring unit generates a gesture distribution based on the at least two spatial gesture images and a safety distance. The process monitoring unit determines whether the gesture distribution meets a safety criterion.
    Type: Application
    Filed: March 25, 2014
    Publication date: May 14, 2015
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Jhen-Jia Hu, Hau-Wei Wang, Chung-Ning Huang
  • Patent number: 9025857
    Abstract: A three-dimensional measurement apparatus includes a model holding unit configured to hold a three-dimensional shape model of a measurement object and a determination unit configured to determine a distance measurement region on the measurement object based on information indicating a three-dimensional shape of the measurement object. The measurement object is irradiated with a predetermined illumination pattern by an illumination unit. An image of the measurement object is sensed by an image sensing unit while the illumination unit irradiates the measurement object. Distance information indicating a distance from the image sensing unit to the measurement object is calculated based on a region corresponding to the distance measurement region within the sensed image. A position and orientation of the measurement object are calculated based on the distance information and the three-dimensional shape model.
    Type: Grant
    Filed: June 18, 2010
    Date of Patent: May 5, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Daisuke Kotake, Shinji Uchiyama
  • Patent number: 9025856
    Abstract: Vision based tracking of a mobile device is used to remotely control a robot. For example, images captured by a mobile device, e.g., in a video stream, are used for vision based tracking of the pose of the mobile device with respect to the imaged environment. Changes in the pose of the mobile device, i.e., the trajectory of the mobile device, are determined and converted to a desired motion of a robot that is remote from the mobile device. The robot is then controlled to move with the desired motion. The trajectory of the mobile device is converted to the desired motion of the robot using a transformation generated by inverting a hand-eye calibration transformation.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: May 5, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Mahesh Ramachandran, Christopher Brunner, Arvind Ramanandan, Abhishek Tyagi, Murali Ramaswamy Chari
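The trajectory-conversion idea — mapping a tracked device motion into robot coordinates via a hand-eye calibration transform — can be sketched with homogeneous 4x4 transforms. The conjugation below uses the classic AX = XB hand-eye relation (robot motion A = X·B·X⁻¹); the patent's exact transformation and frame conventions are not given in the abstract, so this is one common form.

```python
def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    """Invert a rigid 4x4 transform: [R t; 0 1]^-1 = [R^T, -R^T t; 0 1]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]],
            [0.0, 0.0, 0.0, 1.0]]

def device_motion_to_robot(B, X):
    """AX = XB hand-eye relation: robot motion A = X * B * X^-1,
    where X is the known hand-eye calibration transform."""
    return mat_mul(mat_mul(X, B), invert_rigid(X))

# Hypothetical calibration: device frame rotated 90 degrees about z
# relative to the robot frame.
X = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
B = [[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # +0.5 m in x
M = device_motion_to_robot(B, X)
print(M[0][3], M[1][3])  # the device's x motion appears as robot y motion
```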
  • Patent number: 9002098
    Abstract: Described is a robotic visual perception system for determining a position and pose of a three-dimensional object. The system receives an external input to select an object of interest. The system also receives visual input from a sensor of a robotic controller that senses the object of interest. Rotation-invariant shape features and appearance are extracted from the sensed object of interest and a set of object templates. A match is identified between the sensed object of interest and an object template using shape features. The match between the sensed object of interest and the object template is confirmed using appearance features. The sensed object is then identified, and a three-dimensional pose of the sensed object of interest is determined. Based on the determined three-dimensional pose of the sensed object, the robotic controller is used to grasp and manipulate the sensed object of interest.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 7, 2015
    Assignee: HRL Laboratories, LLC
    Inventors: Suhas E. Chelian, Rashmi N. Sundareswara, Heiko Hoffmann
  • Patent number: 8994776
    Abstract: A telepresence robot uses a series of connectible modules and preferably includes a head module adapted to receive and cooperate with a third party telecommunication device that includes a display screen. The module design provides cost advantages with respect to shipping and storage while also allowing flexibility in robot configuration and specialized applications.
    Type: Grant
    Filed: November 14, 2011
    Date of Patent: March 31, 2015
    Assignee: CrossWing Inc.
    Inventors: Stephen Sutherland, Sam Coulombe, Dale Wick
  • Patent number: 8983174
    Abstract: A robotic system that includes a mobile robot and a remote input device. The input device may be a joystick that is used to move a camera and a mobile platform of the robot. The system may operate in a mode where the mobile platform moves in a camera reference coordinate system. The camera reference coordinate system is fixed to a viewing image provided by the camera so that movement of the robot corresponds to a direction viewed on a screen. This prevents disorientation during movement of the robot if the camera is panned across a viewing area.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: March 17, 2015
    Assignee: InTouch Technologies, Inc.
    Inventors: Yulun Wang, Charles S. Jordan, Keith P. Laby, Jonathan Southard, Marco Pinter, Brian Miller
  • Patent number: 8977378
    Abstract: Disclosed are methods and systems for using hieroglyphs for communication in a rapid fabrication environment. The method includes receiving, by a control system for an articulated robotic arm, one or more images of a fabrication machine build space. The method includes identifying, by the control system, a hieroglyph present in the one or more images and translating the identified hieroglyph into one or more instructions for manipulation of the articulated robotic arm. The method includes causing the articulated robotic arm to carry out the instructions translated from the identified hieroglyph. Accordingly, foreign objects are inserted into fabricated objects during an automated rapid fabrication process without extensive redesign of the rapid fabrication machine. In some implementations, an unmodified third-party stereolithographic rapid fabrication machine can be used.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: March 10, 2015
    Assignee: Northeastern University
    Inventors: Brian Weinberg, Constantinos Mavroidis
  • Patent number: 8965104
    Abstract: A cloud computing system is configured to (i) receive image and environmental data from a computing device, (ii) apply a plurality of image processing algorithms to the received image a plurality of times to generate a corresponding plurality of image processing results, where each application of an image processing algorithm to the received image is executed with a different corresponding parameter set, and (iii) based on the image processing results, select an image processing algorithm and corresponding parameter set for the computing device to use for image processing operations. The cloud computing device may also correlate the results of its analysis with the environmental data received from the computing device, and store the correlation in a machine vision knowledge base for future reference. In some embodiments, the computing device is a component of a robot.
    Type: Grant
    Filed: August 31, 2012
    Date of Patent: February 24, 2015
    Assignee: Google Inc.
    Inventors: Ryan M. Hickman, James R. Bruce
  • Patent number: 8958627
    Abstract: A computer-implemented method for designating a portion of a machine-vision analysis to be performed on a worker. A set of machine-vision algorithms is obtained for analyzing a digital image of a product. An overall time estimate is determined that represents the processing time to analyze the digital image using the entire set of machine-vision algorithms. If the overall time estimate is greater than a threshold value, then an algorithm time estimate for each of two or more algorithms of the set of machine-vision algorithms is obtained. A rank associated with each of the two or more algorithms is computed based on the algorithm time estimates. A designated algorithm to be performed on the worker is selected based on the rank associated with each of the two or more algorithms. The digital image may then be analyzed on the worker using the designated algorithm.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: February 17, 2015
    Assignee: Sight Machine, Inc.
    Inventor: Nathan Oostendorp
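The designation logic this abstract walks through — estimate total processing time, and if it exceeds a threshold, rank individual algorithms by their time estimates and designate one for a human worker — can be sketched directly. Designating the single most expensive algorithm is one natural policy; the patent's ranking and selection details are not given in the abstract.

```python
def designate_for_worker(algorithms, threshold):
    """Decide which machine-vision algorithm, if any, to hand to a
    human worker.

    algorithms -- dict: algorithm name -> estimated seconds to run
    threshold  -- acceptable overall processing time in seconds

    Returns (designated_algorithm_or_None, algorithms_left_in_software).
    """
    overall = sum(algorithms.values())
    if overall <= threshold:
        return None, list(algorithms)   # fast enough: run everything in software
    # Rank by time estimate; the most expensive algorithm is designated.
    ranked = sorted(algorithms, key=algorithms.get, reverse=True)
    designated = ranked[0]
    return designated, [a for a in algorithms if a != designated]

algos = {"ocr": 4.0, "defect-segmentation": 9.5, "barcode": 0.5}
print(designate_for_worker(algos, threshold=10.0))
# ('defect-segmentation', ['ocr', 'barcode'])
```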
  • Patent number: 8953889
    Abstract: An augmented reality environment allows interaction between virtual and real objects and enhances an unstructured real-world environment. An object datastore comprising attributes of an object within the environment may be built and/or maintained from sources including manufacturers, retailers, shippers, and users. This object datastore may be local, cloud based, or a combination thereof. Applications may interrogate the object datastore to provide user functionality.
    Type: Grant
    Filed: September 14, 2011
    Date of Patent: February 10, 2015
    Assignee: Rawles LLC
    Inventors: William Spencer Worley, III, Edward Dietz Crump
  • Patent number: 8929642
    Abstract: A three-dimensional scanner according to one aspect of the embodiments includes an irradiation unit, an imaging unit, a position detecting unit, and a scanning-region determining unit. The irradiation unit emits a slit-shaped light beam while changing the irradiation position with respect to a measuring object. The imaging unit sequentially captures images of the measuring object irradiated with the light beam. The position detecting unit detects the position of the light beam in an image captured by the imaging unit by scanning the image. The scanning-region determining unit determines the region of the target image that the position detecting unit should scan, based on the position of the light beam in an image captured by the imaging unit before the target image.
    Type: Grant
    Filed: March 5, 2012
    Date of Patent: January 6, 2015
    Assignee: Kabushiki Kaisha Yaskawa Denki
    Inventor: Yuji Ichimaru
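The scanning-region idea above, restricting the line search to a neighborhood of where the beam appeared in the previous frame, can be sketched in a few lines. Treating the region as a one-dimensional row band with a fixed margin is an illustrative simplification, not the patent's exact formulation.

```python
def scanning_region(prev_beam_row, margin, image_height):
    """Limit the beam search in the next image to a band of rows around
    the position detected in the previous image, clamped to the frame."""
    low = max(0, prev_beam_row - margin)
    high = min(image_height - 1, prev_beam_row + margin)
    return low, high
```

Scanning only `high - low + 1` rows instead of the full frame is what makes the per-image detection cheap as the beam sweeps across the object.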
  • Patent number: 8923602
    Abstract: Disclosed herein are embodiments and methods of a visual guidance and recognition system requiring no calibration. One embodiment of the system comprises a servo actuated manipulator configured to perform a function, a camera mounted on the face plate of the manipulator, and a recognition controller configured to acquire a two dimensional image of the work piece. The manipulator controller is configured to receive and store the face plate position at a distance “A” between the reference work piece and the manipulator along an axis of the reference work piece when the reference work piece is in the camera's region of interest. The recognition controller is configured to learn the work piece from the image and the distance “A”. During operation, a work piece is recognized with the system, and the manipulator is accurately positioned with respect to the work piece so that the manipulator can accurately perform its function.
    Type: Grant
    Filed: July 22, 2008
    Date of Patent: December 30, 2014
    Assignees: Comau, Inc., Recognition Robotics, Inc.
    Inventors: Simon Melikian, Maximiliano A. Falcone, Joseph Cyrek
  • Patent number: 8921733
    Abstract: Removing material from the surface of a first circuit comprises generating a first laser pulse using a pulse generator; targeting a spot on the first circuit using a focusing component; delivering the first laser pulse to the spot on the first circuit, the first circuit including a digital component; ablating material from the spot using the first laser pulse without changing a state of the digital component; testing performance of the first circuit, the testing being performed without reinitializing the circuit between the steps of ablating material and testing performance. Targeting the spot on the first circuit comprises generating a second laser pulse using a pulse generator; delivering a second laser pulse to a sacrificial piece of material; detecting the position of the ablation caused by the second laser pulse with a vision system that forms an image; and using this image to guide the first laser to the spot.
    Type: Grant
    Filed: April 13, 2012
    Date of Patent: December 30, 2014
    Assignee: Raydiance, Inc.
    Inventors: David Gaudiosi, Laurent Vaissie
  • Patent number: 8908923
    Abstract: A parse module calibrates an interior space by parsing objects and words out of an image of the scene and comparing each parsed object with a plurality of stored objects. The parse module further selects a parsed object that is differentiated from the stored objects as the first object and stores the first object with a location description. A search module can detect the same objects from the scene and use them to determine the location of the scene.
    Type: Grant
    Filed: May 13, 2011
    Date of Patent: December 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: James Billingham, Helen Bowyer, Kevin Brown, Edward Jellard, Graham White
  • Patent number: 8908918
    Abstract: A three-dimensional position and orientation tracking system comprises one or more pattern tags, each comprising a plurality of contrasting portions, a tracker for obtaining image information about the pattern tags, a database with geometric information describing patterns on pattern tags; and a controller for receiving and processing the image information from the tracker, accessing the database to retrieve geometric information, and comparing the image information with the geometric information. The contrasting portions are arranged in a rotationally asymmetric pattern and at least one of the contrasting portions on a pattern tag has a perimeter that has a mathematically describable curved section. The perimeter of the contrasting portion may comprise a conic section, including for example an ellipse or a circle. The tracking system can be implemented in a surgical monitoring system in which the pattern tags are attached to tracking markers or are themselves tracking markers.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: December 9, 2014
    Assignee: Navigate Surgical Technologies, Inc.
    Inventors: Ehud Daon, Martin Gregory Beckett
  • Patent number: 8903160
    Abstract: An apparatus and method of planning a traveling path of a mobile robot, the apparatus and method including a pattern extracting unit, a pattern direction extracting unit, and a path generating unit. The pattern extracting unit may extract at least one pattern from an image of a ceiling captured in a ceiling direction. The pattern direction extracting unit may extract a pattern direction of the image in the form of a line from the at least one extracted pattern. The path generating unit may generate a traveling path of the mobile robot based on the extracted pattern direction.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: December 2, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Woo-Yeon Jeong, Jun-Ho Park
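Extracting a pattern direction "in the form of a line" from ceiling features might be done with an orientation histogram like the sketch below. Taking pre-extracted line segments as input, folding angles into [0, 180) so opposite headings coincide, and 10-degree binning are all illustrative choices, not the patent's method.

```python
import math
from collections import Counter

def dominant_direction(segments, bin_width=10):
    """segments: ((x1, y1), (x2, y2)) line segments from a ceiling image.
    Histogram their orientations and return the most common bin, in
    degrees, as the pattern direction for path generation."""
    bins = Counter()
    for (x1, y1), (x2, y2) in segments:
        # Fold into [0, 180) so a segment and its reverse agree.
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        bins[round(angle / bin_width) * bin_width % 180] += 1
    return bins.most_common(1)[0][0]
```

A traveling path aligned with this direction tends to follow the room's structure, since ceiling fixtures and tiles usually run parallel to the walls.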
  • Patent number: 8903127
    Abstract: A computer-implemented method for determining an egomotion parameter using an egomotion estimation system is provided. First and second image frames are obtained. A first portion of the first image frame and a second portion of the second image frame are selected to respectively obtain a first sub-image and a second sub-image. A transformation is performed on each of the first sub-image and the second sub-image to respectively obtain a first perspective image and a second perspective image. The second perspective image is iteratively adjusted to obtain multiple adjusted perspective images. Multiple difference values are determined, each corresponding to the difference between the first perspective image and one of the adjusted perspective images. A translation vector for the egomotion parameter is then determined; the translation vector corresponds to one of the multiple difference values.
    Type: Grant
    Filed: September 16, 2011
    Date of Patent: December 2, 2014
    Assignee: Harman International (China) Holdings Co., Ltd.
    Inventors: Zhang Yankun, Hong Chuyang, Norman Weyrich
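The adjust-and-compare loop in the abstract above can be illustrated in one dimension: generate adjusted versions of the second image at candidate shifts, compute a difference value for each against the first image, and keep the shift with the smallest difference. The 1-D signals and mean-absolute-difference metric are stand-ins for the patent's 2-D perspective images.

```python
def estimate_translation(ref, cur, max_shift):
    """Return the integer shift of `cur` (a 1-D intensity profile) that
    best aligns it with `ref`, by exhaustively scoring every candidate
    shift with a mean absolute difference over the overlap."""
    def diff(shift):
        pairs = [(ref[i], cur[i - shift]) for i in range(len(ref))
                 if 0 <= i - shift < len(cur)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=diff)
```

The winning shift plays the role of the translation vector: it is the adjustment whose difference value is minimal.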
  • Patent number: 8903161
    Abstract: A method for estimating a location of a device uses a color image and a depth image. The method includes matching the color image to the depth image, generating a 3D reference image based on the matching, generating a 3D object image based on the matching, extracting a 2D reference feature point from the reference image, extracting a 2D reference feature point from the object image, matching the extracted reference feature point from the reference image to the extracted reference feature point from the object image, extracting a 3D feature point from the object image using the matched 2D reference feature point, and estimating the location of the device based on the extracted 3D feature point.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 2, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: No San Kwak, Kyung Shik Roh, Sung Hwan Ahn, Suk June Yoon, Seung Yong Hyung
  • Patent number: 8903159
    Abstract: A method and apparatus for tracking an image patch considering scale are provided. A registered image patch may be classified as either a scale-invariant image patch or a scale-variant image patch according to a predetermined scale invariance index (SII). If a registered image patch within an image is a scale-invariant image patch, it is tracked by adjusting its position only, while if the registered image patch is a scale-variant image patch, it is tracked by adjusting both its position and its scale.
    Type: Grant
    Filed: April 21, 2010
    Date of Patent: December 2, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ki-wan Choi, Hyoung-ki Lee, Ji-young Park
  • Patent number: 8897567
    Abstract: A color determination unit determines color information of a light-emitting body of an input device. A transmitter unit communicates the determined color information to the input device. A recording unit records a history of the color information determined by the color determination unit. A color candidate determination unit determines one or more candidates of emitted color of the light-emitting body, using the color information recorded in the recording unit. An acknowledging unit acknowledges from the user a command to determine a candidate of emitted light, and the color determination unit determines the color information of the light-emitting body accordingly.
    Type: Grant
    Filed: June 10, 2011
    Date of Patent: November 25, 2014
    Assignees: Sony Corporation, Sony Computer Entertainment Inc.
    Inventor: Yoshio Miyazaki
  • Patent number: 8891816
    Abstract: A parse module calibrates an interior space by parsing objects and words out of an image of the scene and comparing each parsed object with a plurality of stored objects. The parse module further selects a parsed object that is differentiated from the stored objects as the first object and stores the first object with a location description. A search module can detect the same objects from the scene and use them to determine the location of the scene.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: November 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: James Billingham, Helen Bowyer, Kevin Brown, Edward Jellard, Graham White
  • Publication number: 20140334713
    Abstract: A method and apparatus for constructing a map for a mobile robot to be able to reduce a data amount and increase an approach speed. The method includes: searching for a plurality of feature data occupying an arbitrary space by scanning a surrounding environment of the mobile robot; performing quadtree segmentation on first feature data of the plurality of feature data to generate a plurality of first node information as a result of the quadtree segmentation; determining a position of second feature data of the plurality of feature data with respect to the first feature data; and performing a neighborhood moving algorithm for generating a plurality of second node information of the second feature data according to the position of second feature data by using the plurality of first node information.
    Type: Application
    Filed: September 26, 2013
    Publication date: November 13, 2014
    Applicant: SAMSUNG TECHWIN CO., LTD.
    Inventor: Dongshin KIM
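The quadtree segmentation step above can be illustrated by addressing a map cell with its quadrant path. The sketch below locates one occupied cell in a power-of-two grid; the quadrant numbering (0=NW, 1=NE, 2=SW, 3=SE) is an illustrative choice, and it does not attempt the patent's neighborhood moving algorithm for the second feature data.

```python
def quadrant_path(x, y, size):
    """Return the quadrant index path (root to leaf) locating cell
    (x, y) in a size x size grid, i.e. one branch of node information
    produced by quadtree segmentation."""
    path, half, ox, oy = [], size // 2, 0, 0
    while half >= 1:
        right, down = x >= ox + half, y >= oy + half
        path.append((2 if down else 0) + (1 if right else 0))
        if right: ox += half     # descend into the chosen quadrant
        if down: oy += half
        half //= 2
    return path
```

Storing such paths instead of a dense grid is what reduces the data amount: empty quadrants are never subdivided, and nearby cells share long path prefixes.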
  • Publication number: 20140334714
    Abstract: A collision detection system includes a processing section, a drawing section, and a depth buffer. Depth information of an object is set to the depth buffer as depth map information. The drawing section performs a first drawing process of performing a depth test, and drawing a primitive surface on a reverse side when viewed from a predetermined viewpoint out of primitive surfaces constituting a collision detection target object with reference to the depth buffer. Further, the drawing section performs a second drawing process of drawing the primitive surface on the reverse side when viewed from a predetermined viewpoint out of the primitive surfaces constituting the collision detection target object without performing the depth test. The processing section determines whether or not the collision detection target object collides with the object on the target side based on the result of the first drawing process and the second drawing process.
    Type: Application
    Filed: July 24, 2014
    Publication date: November 13, 2014
    Inventor: Mitsuhiro INAZUMI
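The core comparison behind the two drawing passes can be caricatured per pixel: count the back-face fragments that survive a depth test against the stored depth map, count the fragments drawn with no test at all, and treat any mismatch as a collision (a back face hidden by the recorded surface means the target object penetrates it). The conventions below, smaller depth = closer and None = no fragment, are illustrative, and this is a simplification of the GPU-based passes the abstract describes.

```python
def collides(depth_map, back_face_depths):
    """depth_map: stored per-pixel depths of the environment object.
    back_face_depths: per-pixel depths of the target object's back
    faces (None where no fragment lands). Mismatched fragment counts
    between the tested and untested passes indicate a collision."""
    with_test = sum(1 for d, stored in zip(back_face_depths, depth_map)
                    if d is not None and d < stored)   # passes depth test
    without_test = sum(1 for d in back_face_depths if d is not None)
    return with_test != without_test
```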
  • Patent number: 8880271
    Abstract: Disclosed are a robot cleaner and a method for controlling the same. The robot cleaner and method of the present invention involve dividing the whole area to be cleaned into sub-areas and easily calculating a full path using the travel paths in the sub-areas and the connection points between sub-areas. In the event the whole area to be cleaned is extended, or an area that has not been cleaned is found, the whole cleaning map is not regenerated; rather, the full path is easily updated using the pre-stored travel paths in the sub-areas and the connection points between sub-areas.
    Type: Grant
    Filed: November 16, 2010
    Date of Patent: November 4, 2014
    Assignee: LG Electronics Inc.
    Inventor: Hyeongshin Jeon
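Assembling a full path from per-sub-area paths joined at connection points might look like the sketch below. The convention that each stored segment ends at the connection point where the next one begins is an illustrative assumption.

```python
def stitch_full_path(sub_paths, order):
    """sub_paths: dict mapping sub-area name -> list of waypoints.
    Concatenate the stored segments in visiting order, dropping the
    duplicated connection point at each junction."""
    path = []
    for area in order:
        segment = list(sub_paths[area])
        if path and segment and segment[0] == path[-1]:
            segment = segment[1:]  # shared connection point, keep once
        path.extend(segment)
    return path
```

Under this scheme, discovering a new or extended area only requires storing one more segment and appending its name to the visiting order; the previously computed sub-area paths are reused unchanged.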
  • Patent number: 8879831
    Abstract: Using high-level attributes to guide image processing is described. In an embodiment high-level attributes of images of people such as height, torso orientation, body shape, gender are used to guide processing of the images for various tasks including but not limited to joint position detection, body part classification, medical image analysis and others. In various embodiments one or more random decision forests are trained using images where global variable values such as player height are known in addition to ground-truth data appropriate for the image processing task concerned. In some examples sequences of images are used where global variables are static or vary smoothly over the sequence. In some examples one or more trained random decision forests are used to find global variable values as well as output values for the task concerned such as joint positions or body part classes.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: November 4, 2014
    Assignee: Microsoft Corporation
    Inventors: Pushmeet Kohli, Jamie Daniel Joseph Shotton, Min Sun