Optical Patents (Class 901/47)
-
Publication number: 20130051626
Abstract: A pose of an object is estimated from an input image and an object pose estimation is then stored by: inputting an image containing an object; creating a binary mask of the input image; extracting a set of singlets from the binary mask of the input image, each singlet representing points in an inner and outer contour of the object in the input image; connecting the set of singlets into a mesh represented as a duplex matrix; comparing two duplex matrices to produce a set of candidate poses; and producing an object pose estimate, and storing the object pose estimate.
Type: Application
Filed: August 30, 2011
Publication date: February 28, 2013
Inventors: Arash Abadpour, Guoyi Fu, Ivo Moravec
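A minimal sketch of the mask-and-contour stage, in Python (all function names are illustrative, not the patent's implementation): it thresholds a grayscale image into a binary mask and keeps the object pixels that border the background, one simple way to obtain the inner- and outer-contour points the abstract calls singlets.

```python
def binary_mask(gray, threshold=128):
    """Threshold a grayscale image (list of rows) into a 0/1 object mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def extract_singlets(mask):
    """Collect contour points ("singlets"): object pixels with at least one
    background 4-neighbor, i.e. pixels lying on an inner or outer contour."""
    h, w = len(mask), len(mask[0])

    def at(y, x):
        # Out-of-bounds counts as background.
        return mask[y][x] if 0 <= y < h and 0 <= x < w else 0

    return [(y, x) for y in range(h) for x in range(w)
            if at(y, x) and min(at(y - 1, x), at(y + 1, x),
                                at(y, x - 1), at(y, x + 1)) == 0]
```

For a solid 3x3 block the eight border pixels are singlets and the center pixel, surrounded by object on all four sides, is not.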
-
Publication number: 20130054030
Abstract: In an object gripping apparatus according to the present invention, based on three-dimensional position and attitude of a gripping object and a gripping position that is preliminarily set for each gripping object, an operation of a grip part is controlled such that the grip part grips the gripping position on the gripping object. Thereby, an intended gripping position can be identified, and the object can be appropriately gripped.
Type: Application
Filed: July 6, 2012
Publication date: February 28, 2013
Applicant: Dainippon Screen Mfg. Co., Ltd.
Inventor: Shigeo MURAKAMI
-
Publication number: 20130054029
Abstract: The present teachings provide a method of controlling a remote vehicle having an end effector and an image sensing device. The method includes obtaining an image of an object with the image sensing device, determining a ray from a focal point of the image to the object based on the obtained image, positioning the end effector of the remote vehicle to align with the determined ray, and moving the end effector along the determined ray to approach the object.
Type: Application
Filed: April 16, 2012
Publication date: February 28, 2013
Applicant: iRobot
Inventors: Wesley Hanan Huang, Emilie A. Phillips
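The ray determination step can be sketched with a standard pinhole camera model (the intrinsics fx, fy, cx, cy and both helper names are assumptions for illustration): back-project the object's pixel into a unit direction from the focal point, then step the end effector along that direction.

```python
import math

def pixel_ray(u, v, fx, fy, cx, cy):
    """Unit ray from the camera focal point through pixel (u, v) under a
    pinhole model; fx, fy are focal lengths, (cx, cy) the principal point."""
    d = ((u - cx) / fx, (v - cy) / fy, 1.0)
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def along_ray(origin, direction, distance):
    """Waypoint for the end effector at a given distance along the ray."""
    return tuple(o + distance * c for o, c in zip(origin, direction))
```

A pixel at the principal point back-projects to the optical axis, so approaching the object reduces to increasing `distance` along the returned direction.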
-
Publication number: 20130054028
Abstract: In a method for controlling a robot using a computing device, 3D images of an operator are captured in real-time. Different portions of the operator are determined in one of the 3D images according to moveable joints of the robot, and each of the determined portions is correlated with one of the moveable joints. Motion data of each of the determined portions is obtained from the 3D images. A control command is sent to the robot according to the motion data of each of the determined portions, to control each moveable joint of the robot to implement a motion of a determined portion that is correlated with the moveable joint.
Type: Application
Filed: December 7, 2011
Publication date: February 28, 2013
Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
Inventors: HOU-HSIEN LEE, CHANG-JUNG LEE, CHIH-PING LO
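Once the operator's portions are tracked, the portion-to-joint correlation reduces to a lookup; this hedged sketch (all names invented) maps each tracked portion's motion value to its correlated robot joint and ignores portions with no counterpart on the robot.

```python
def motion_commands(portion_motion, joint_map):
    """Translate tracked operator-portion motion into per-joint commands.

    portion_motion: portion name -> measured motion value (e.g. an angle)
    joint_map:      portion name -> correlated robot joint name
    Portions absent from joint_map produce no command.
    """
    return {joint_map[p]: value
            for p, value in portion_motion.items() if p in joint_map}
```

For example, a tracked head pose would be dropped if the robot has no head joint, while an arm angle is forwarded to its correlated joint.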
-
Patent number: 8380350
Abstract: An autonomous mobile robot system for bounded areas including a navigation beacon and an autonomous coverage robot. The navigation beacon has a gateway beacon emitter arranged to transmit a gateway marking emission with the navigation beacon disposed within a gateway between the first bounded area and an adjacent second bounded area. The autonomous coverage robot includes a beacon emission sensor responsive to the beacon emission, and a drive system configured to maneuver the robot about the first bounded area in a cleaning mode in which the robot is redirected in response to detecting the gateway marking emission. The drive system is also configured to maneuver the robot through the gateway into the second bounded area in a migration mode.
Type: Grant
Filed: December 23, 2008
Date of Patent: February 19, 2013
Assignee: iRobot Corporation
Inventors: Daniel N. Ozick, Andrea M. Okerholm, Jeffrey W. Mammen, Michael J. Halloran, Paul E. Sandin, Chikyung Won
-
Publication number: 20130041508
Abstract: A system and method for providing intuitive, visual based remote control is disclosed. The system can comprise one or more cameras disposed on a remote vehicle. A visual servoing algorithm can be used to interpret the images from the one or more cameras to enable the user to provide visual based inputs. The visual servoing algorithm can then translate that commanded motion into the desired motion at the vehicle level. The system can provide correct output regardless of the relative position between the user and the vehicle and does not require any previous knowledge of the target location or vehicle kinematics.
Type: Application
Filed: August 13, 2012
Publication date: February 14, 2013
Applicant: Georgia Tech Research Corporation
Inventors: Ai-Ping HU, Gary McMurray, James Michael Matthews, Matt Marshall
-
Publication number: 20130039541
Abstract: A robot system includes a robot having a movable section, an image capture unit provided on the movable section, an output unit that allows the image capture unit to capture a target object and a reference mark and outputs a captured image in which the reference mark is imaged as a locus image, an extraction unit that extracts the locus image from the captured image, an image acquisition unit that performs image transformation on the basis of the extracted locus image by using the point spread function so as to acquire an image after the transformation from the captured image, a computation unit that computes a position of the target object on the basis of the acquired image, and a control unit that controls the robot so as to move the movable section toward the target object in accordance with the computed position.
Type: Application
Filed: October 15, 2012
Publication date: February 14, 2013
Applicant: SEIKO EPSON CORPORATION
Inventor: SEIKO EPSON CORPORATION
-
Patent number: 8374721
Abstract: A power-saving robot system includes at least one peripheral device and a mobile robot. The peripheral device includes a controller having an active mode and a hibernation mode, and a wireless communication component capable of activation in the hibernation mode. A controller of the robot has an activating routine that communicates with and temporarily activates the peripheral device, via wireless communication, from the hibernation mode. In another aspect, a robot system includes a network data bridge and a mobile robot. The network data bridge includes a broadband network interface, a wireless command interface, and a data bridge component. The data bridge component extracts serial commands received via the broadband network interface from an internet protocol, applies a command protocol thereto, and broadcasts the serial commands via the wireless interface. The mobile robot includes a wireless command communication component that receives the serial commands transmitted from the network data bridge.
Type: Grant
Filed: December 4, 2006
Date of Patent: February 12, 2013
Assignee: iRobot Corporation
Inventors: Michael J. Halloran, Jeffrey W. Mammen, Tony L. Campbell, Jason S. Walker, Paul E. Sandin, John N. Billington, Jr., Daniel N. Ozick
-
Publication number: 20130035791
Abstract: A vision correction method for establishing the position of a tool center point (TCP) for a robot manipulator includes the steps of: defining a preset position of the TCP; defining a preset coordinate system TG with the preset position of the TCP as its origin; capturing a two-dimensional picture of the preset coordinate system TG to establish a vision coordinate system TV; calculating a scaling ratio of the vision coordinate system TV relative to the preset coordinate system TG; rotating the TCP relative to axes of the preset coordinate system TG; capturing pictures of the TCP prior to and after rotation; calculating the deviation ΔP between the preset position and actual position of the TCP; correcting the preset position and corresponding coordinate system TG using ΔP; and repeating the rotation through correction steps until ΔP is less than or equal to a maximum allowable deviation of the robot manipulator.
Type: Application
Filed: March 8, 2012
Publication date: February 7, 2013
Applicants: HON HAI PRECISION INDUSTRY CO., LTD., HONG FU JIN PRECISION INDUSTRY (ShenZhen) CO., LTD
Inventors: LONG-EN CHIU, SHU-JUN FU, GANG ZHAO
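The rotate-measure-correct loop can be sketched as a fixed-point iteration (the `measure_actual` callable stands in for the picture capture and deviation calculation; everything here is illustrative, not the patent's procedure): fold the measured deviation back into the preset until it falls within the allowable tolerance.

```python
def correct_tcp(preset, measure_actual, max_dev, max_iters=20):
    """Iteratively refine a TCP preset position.

    preset:         current preset TCP position, e.g. [x, y, z]
    measure_actual: callable returning the measured actual position
    max_dev:        maximum allowable per-axis deviation
    """
    for _ in range(max_iters):
        actual = measure_actual(preset)
        dp = [a - p for a, p in zip(actual, preset)]  # deviation ΔP
        if max(abs(c) for c in dp) <= max_dev:
            return preset          # ΔP within tolerance: done
        preset = [p + c for p, c in zip(preset, dp)]  # correct the preset
    raise RuntimeError("TCP correction did not converge")
```

With an ideal measurement the loop converges in one correction step; in practice each pass would involve a fresh rotation and image capture.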
-
Publication number: 20130034420
Abstract: A gripper grasps irregular and deformable work pieces, whether packaged, processed, or raw, so as to lift, hold, and manipulate them for the purpose of material handling, assembly, packaging, and other robotic and automated manipulative functions. A vacuum is induced at multiple points through a flexible gripping hood to provide lifting force to, and facilitate rapid movement of, work pieces. An array of lighting devices and a double ring array of segmented mirrors provide uniform illumination to ensure accurate positioning of the gripping hood with respect to the work piece to be manipulated.
Type: Application
Filed: October 11, 2012
Publication date: February 7, 2013
Inventor: Preben K. Hjørnet
-
Publication number: 20130034295
Abstract: Methods for recognizing a category of an object are disclosed. In one embodiment, a method includes determining, by a processor, a preliminary category of a target object, the preliminary category having a confidence score associated therewith, and comparing the confidence score to a learning threshold. If the highest confidence score is less than the learning threshold, the method further includes estimating properties of the target object and generating a property score for one or more estimated properties, and searching a supplemental image collection for supplemental image data using the preliminary category and the one or more estimated properties. Robots programmed to recognize a category of an object by use of supplemental image data are also disclosed.
Type: Application
Filed: August 2, 2011
Publication date: February 7, 2013
Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Masayoshi Tsuchinaga, Mario Fritz, Trevor Darrell
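The control flow in the abstract is essentially a confidence-gated fallback; a sketch with stand-in callables for the classifier, property estimator, and supplemental search (none of these names come from the patent):

```python
def categorize(target, classify, estimate_properties, search_supplemental,
               learning_threshold=0.8):
    """Confidence-gated recognition: accept the preliminary category when
    the classifier is confident enough, otherwise estimate properties and
    consult a supplemental image search. All callables are stand-ins."""
    category, confidence = classify(target)
    if confidence >= learning_threshold:
        return category            # confident: keep the preliminary category
    properties = estimate_properties(target)
    return search_supplemental(category, properties)
```

The supplemental path only runs below the learning threshold, which is what keeps the extra image search off the common, high-confidence case.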
-
Publication number: 20130035790
Abstract: A method is provided for initiating a telepresence session with a person, using a robot. The method includes receiving a request to host a telepresence session at the robot and receiving an identification for a target person for the telepresence session by the robot. The robot then searches a current location for a person. If a person is found, a determination is made regarding whether the person is the target person. If the person found is not the target person, the person is prompted for a location for the target person. The robot moves to the location given by the person in response to the prompt.
Type: Application
Filed: August 2, 2011
Publication date: February 7, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Charles F. Olivier, III, Jean Sebastien Fouillade, Malek Chalabi, Nathaniel T. Clinton, Russell Sanchez, Adrien Felon, Graham Wheeler, Francois Burianek
-
Patent number: 8368339
Abstract: A method of confining a robot in a work space includes providing a portable barrier signal transmitting device including a primary emitter emitting a confinement beam primarily along an axis defining a directed barrier. A mobile robot including a detector, a drive motor and a control unit controlling the drive motor is caused to avoid the directed barrier upon detection by the detector on the robot. The detector on the robot has an omnidirectional field of view parallel to the plane of movement of the robot. The detector receives confinement light beams substantially in a plane at the height of the field of view while blocking or rejecting confinement light beams substantially above or substantially below the plane at the height of the field of view.
Type: Grant
Filed: August 13, 2009
Date of Patent: February 5, 2013
Assignee: iRobot Corporation
Inventors: Joseph L. Jones, Philip R. Mass
-
Publication number: 20130030570
Abstract: Provided is a robot device including an image input unit for inputting an image of surroundings, a target object detection unit for detecting an object from the input image, an object position detection unit for detecting a position of the object, an environment information acquisition unit for acquiring surrounding environment information of the position of the object, an optimum posture acquisition unit for acquiring an optimum posture corresponding to the surrounding environment information for the object, an object posture detection unit for detecting a current posture of the object from the input image, an object posture comparison unit for comparing the current posture of the object to the optimum posture of the object, and an object posture correction unit for correcting the posture of the object when the object posture comparison unit determines that there is a predetermined difference or more between the current posture and the optimum posture.
Type: Application
Filed: July 10, 2012
Publication date: January 31, 2013
Applicant: Sony Corporation
Inventors: Satoru SHIMIZU, Kenta Kawamoto, Yoshiaki Iwai
-
Patent number: 8364311
Abstract: The invention relates to a vision-based attention system, comprising: at least one vision sensor, at least one image processing module processing an output signal of the vision sensor in order to generate at least one two-dimensional feature map, a dorsal attention subsystem generating a first saliency map on the basis of the at least one feature map, the saliency map indicating a first focus of attention for the driver assistance system, and a ventral attention subsystem, independent of the dorsal attention subsystem, for generating a second saliency map on the basis of at least one feature map, which can be the same as the one used by the dorsal attention subsystem or a different one, the second saliency map indicating unexpected visual stimuli.
Type: Grant
Filed: April 20, 2010
Date of Patent: January 29, 2013
Assignee: Honda Research Institute Europe GmbH
Inventor: Martin Heracles
-
Patent number: 8360178
Abstract: The mobile robot includes a chassis, a pair of drive systems, and a manipulator arm with a turret. The pair of drive systems is operably connected to opposed sides of the chassis. The turret is rotationally attached to the platform. The first link of the manipulator arm is attached to the turret at a first joint. The manipulator arm has at least a first link, and the first link is adapted to function as a traction device.
Type: Grant
Filed: January 14, 2011
Date of Patent: January 29, 2013
Assignee: Engineering Services Inc.
Inventors: Andrew A. Goldenberg, Jun Lin
-
Publication number: 20130024025
Abstract: An autonomous robot and a positioning method thereof are disclosed. The autonomous robot includes an environment information detection device, a map construction module, a setting module, a path planning module, and a driving module. The environment information detection device is for detecting environment information about an environment where the autonomous robot is situated. An environment map is constructed based on the environment information detected by the environment information detection device. The setting module is used for setting a working boundary on the environment map. The path planning module is for planning a moving path in a working zone and is electrically connected to the setting module. The driving module for driving the autonomous robot to move along the moving path is electrically connected to the path planning module.
Type: Application
Filed: December 2, 2011
Publication date: January 24, 2013
Inventor: Harry Chia-Hung HSU
-
Publication number: 20130017128
Abstract: Embodiments include integrated robotic sample transfer devices and components thereof which are used for reliably and accurately transferring small samples of material from one registered position to another registered position. Such transfers of material may be carried out by a single pin tool or an array of pin tools of a pin tool head assembly of robotic sample transfer devices. Some embodiments also include automated cleaning of the pin tools used to transfer the sample material. Some embodiments are fully integrated units having internal fluid supply and waste tanks, vacuum source, fluid pumps, controllers and user interface devices.
Type: Application
Filed: September 24, 2012
Publication date: January 17, 2013
Applicant: SEQUENOM, INC.
Inventors: Rolf SILBERT, Richard Capella, Justin Cuzens
-
Patent number: 8352076
Abstract: A robot with a camera includes a hand with a finger, a camera disposed on the hand, a robot arm including the hand, and a control portion which searches for a work based on an image obtained by the camera and controls the robot arm. In addition, a unit detects a velocity of the camera, and a unit detects a position of the camera relative to a predicted stopping position of the camera. The control portion permits the camera to take the image used for searching for the work, when the velocity of the camera takes a preset velocity threshold value or lower and the position of the camera relative to the predicted stopping position takes a preset position threshold value or lower.
Type: Grant
Filed: May 25, 2010
Date of Patent: January 8, 2013
Assignee: Canon Kabushiki Kaisha
Inventor: Yuichi Someya
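The capture-permission test is a simple conjunction of the two thresholds; a hedged sketch (treating the position criterion as a per-axis distance to the predicted stopping position, which is an assumption about how the threshold is measured):

```python
def may_capture(camera_velocity, camera_position, predicted_stop,
                v_threshold, p_threshold):
    """True when the camera is slow enough and close enough to its
    predicted stopping position for the work-search image to be taken."""
    distance = max(abs(a - b) for a, b in zip(camera_position, predicted_stop))
    return camera_velocity <= v_threshold and distance <= p_threshold
```

Gating the capture this way avoids motion blur from searching while the camera is still settling toward its stop.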
-
Patent number: 8352074
Abstract: A path planning apparatus and method of a robot, in which a path, along which the robot accesses an object to grasp the object, is planned. The path planning method includes judging whether or not a robot hand of a robot collides with an obstacle when the robot hand moves along one access path candidate selected from plural access path candidates along which the robot hand accesses an object to grasp the object, calculating an access score of the selected access path candidate when the robot hand does not collide with the obstacle, and determining an access path plan using the access score of the selected access path candidate.
Type: Grant
Filed: July 21, 2010
Date of Patent: January 8, 2013
Assignee: Samsung Electronics
Inventors: Guochunxu, Kyung Shik Roh, San Lim, Bok Man Lim, Myung Hee Kim
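The candidate selection described above amounts to filtering out colliding paths and maximizing the access score; sketched below with hypothetical `collides` and `access_score` callables:

```python
def plan_access_path(candidates, collides, access_score):
    """Pick the collision-free access path candidate with the best score.
    `collides` and `access_score` stand in for the checks in the abstract."""
    best, best_score = None, float("-inf")
    for path in candidates:
        if collides(path):
            continue               # skip candidates that hit an obstacle
        score = access_score(path)
        if score > best_score:
            best, best_score = path, score
    if best is None:
        raise ValueError("no collision-free access path candidate")
    return best
```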
-
Publication number: 20130002862
Abstract: Methods and systems may include a high speed camera to capture a video of a display output, a robotic arm to interact with a device, a processor, and a computer readable storage medium having a set of instructions. If executed by the processor, the instructions cause the system to identify one or more user experience characteristics based on the captured video, and generate a report based on the one or more user experience characteristics. The report may include a perceptional model score that is generated based on the user experience characteristics as well as other parameters. The user experience characteristics could include response time, frame rate and run time characteristics.
Type: Application
Filed: June 30, 2011
Publication date: January 3, 2013
Inventors: Damon R. Waring, Keith L. Kao, Albert Kwok
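One of the named characteristics, frame rate, can be derived directly from the captured video's per-frame timestamps; a small illustrative helper (not from the patent):

```python
def frame_rate(frame_timestamps):
    """Average frame rate (fps) of a captured display video, computed
    from per-frame timestamps in seconds."""
    if len(frame_timestamps) < 2:
        return 0.0                 # a single frame has no rate
    span = frame_timestamps[-1] - frame_timestamps[0]
    return (len(frame_timestamps) - 1) / span
```

Three frames spanning one second yield two frame intervals, i.e. 2 fps; response time would be measured similarly, as the delay between a robotic-arm input and the first changed frame.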
-
Publication number: 20130006424
Abstract: A handheld tool is disclosed which may be used to transfer a plurality of plant tissue explants from a first container to a second container. The handheld tool may include a disposable tip member which couples the plurality of plant tissue explants through use of negative pressure. An automated system which transfers a plurality of plant tissue explants from a first container to a second container is also disclosed. The automated system may include a first presentment system which moves the first container to a region, a second presentment system which moves the second container to the region, and a robot system that transfers the plurality of plant tissue explants from the first container to the second container.
Type: Application
Filed: September 14, 2012
Publication date: January 3, 2013
Applicant: DOW AGROSCIENCES LLC
Inventor: Tonya Strange Moynahan
-
Publication number: 20130006423
Abstract: A target object gripping apparatus comprises: an estimation unit configured to estimate an orientation of a target object based on orientation estimation parameters; a gripping unit configured to grip the target object based on the orientation of the target object estimated by the estimation unit; a detection unit configured to detect a failure of gripping by the gripping unit; and a modifying unit configured to modify the orientation estimation parameters based on the orientation of the target object when the detection unit detects a gripping failure.
Type: Application
Filed: June 14, 2012
Publication date: January 3, 2013
Applicant: CANON KABUSHIKI KAISHA
Inventors: Yoshinori Ito, Takahisa Yamamoto
-
Publication number: 20120330447
Abstract: A surface data acquisition, storage, and assessment system for detecting and quantifying similarities or differences between stored data and data collected from a scan. The system operates utilizing a method that decreases the time required for calculating a pose estimate thus increasing its performance making it more practical for applications that require real-time operations. In a preferred embodiment the system comprises one or more sensing components for scanning and measuring surface features of an object for determining the identity of the object, and determines differences between data obtained from two or more scans.
Type: Application
Filed: November 15, 2011
Publication date: December 27, 2012
Inventors: Adam R. Gerlach, Paul Thomas, Bruce Walker
-
Publication number: 20120330453
Abstract: An automated ply layup system uses a robot and an end effector for selecting plies from a kit and placing the plies at predetermined locations on a tool.
Type: Application
Filed: June 22, 2011
Publication date: December 27, 2012
Inventors: Samra Samak Sangari, Kurtis S. Willden, James M. Cobb, Gary M. Buckus, Carlos Crespo, Samuel F. Pedigo
-
Publication number: 20120323366
Abstract: Provided is a manipulator with at least one camera capable of observing an end effector from a direction suitable for work. A rotating portion rotatable coaxially with the end effector is provided to a link adjacent to a link located at a manipulator tip end. At least one camera for recognizing a work piece as an object is arranged on the rotating portion through a camera platform. An actuator for controlling a rotation angle of the rotating portion is driven according to a rotation angle of the link located at the manipulator tip end, and thus the camera is arranged in a direction perpendicular to the plane where the end effector can move when the end effector performs a grip work. In an assembly work, the rotating portion is rotated such that the camera is arranged in a direction parallel to the plane where the end effector can move.
Type: Application
Filed: August 21, 2012
Publication date: December 20, 2012
Applicant: CANON KABUSHIKI KAISHA
Inventor: Kota Tani
-
Publication number: 20120323363
Abstract: A robot system according to embodiments includes a conveying device, a plurality of robots, an image capturing device, a workpiece detecting device, and a control device. The control device includes an operation instruction unit and an allocating unit. The operation instruction unit generates an operation instruction for performing a holding operation on workpieces on the basis of the detection result of the workpiece detecting device and transmits the operation instruction to the robots. The allocating unit determines to which of the plurality of robots the operation instruction unit transmits the operation instruction, on the basis of conveying situations of the workpieces obtained from the detection result of the workpiece detecting device.
Type: Application
Filed: January 20, 2012
Publication date: December 20, 2012
Applicant: KABUSHIKI KAISHA YASKAWA DENKI
Inventors: Tetsuro Izumi, Kenichi Koyanagi, Kenji Matsukuma, Yukio Hashiguchi
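One plausible reading of the allocation rule, workpieces assigned by their conveying situation, is a greedy match of each workpiece to a free robot station it has not yet passed; the conveyor coordinates and the greedy downstream-first policy are assumptions for illustration, not the patent's algorithm.

```python
def allocate(workpiece_positions, robot_stations):
    """Greedy allocation on a conveyor moving toward higher coordinates.

    workpiece_positions: conveyor coordinates of detected workpieces
    robot_stations:      robot name -> station coordinate
    Each workpiece goes to the nearest free station still ahead of it;
    a robot takes at most one workpiece per allocation pass.
    """
    free = dict(robot_stations)
    plan = {}
    for wp in sorted(workpiece_positions, reverse=True):  # downstream first
        reachable = {name: s for name, s in free.items() if s >= wp}
        if not reachable:
            continue               # no robot ahead of it: passes through
        name = min(reachable, key=reachable.get)
        plan[wp] = name
        del free[name]             # robot is now busy
    return plan
```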
-
Publication number: 20120323365
Abstract: Described herein are technologies pertaining to autonomously docking a mobile robot at a docking station for purposes of recharging batteries of the mobile robot. The mobile robot uses vision-based navigation and a known map of the environment to navigate toward the docking station. Once sufficiently proximate to the docking station, the mobile robot captures infrared images of the docking station, and granularly aligns itself with the docking station based upon the captured infrared images of the docking station. As the robot continues to drive towards the docking station, the robot monitors infrared sensors for infrared beams emitted from the docking station. If the infrared sensors receive the infrared beams, the robot continues to drive forward until the robot successfully docks with the docking station.
Type: Application
Filed: June 17, 2011
Publication date: December 20, 2012
Applicant: Microsoft Corporation
Inventors: Trevor Taylor, Michael Wyrzykowski, Glen C. Larsen, Mike M. Paul
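The docking sequence reads naturally as a small state machine; this sketch invents the state names and sensor keys, but follows the vision-navigate, IR-align, drive-in progression of the abstract:

```python
def docking_step(state, sensors):
    """One control tick of an assumed docking state machine.

    NAVIGATE: vision-based navigation toward the dock on the known map.
    ALIGN:    granular alignment from infrared images of the dock.
    DRIVE_IN: final approach, monitoring the infrared beam sensors.
    """
    if state == "NAVIGATE":
        return "ALIGN" if sensors["near_dock"] else "NAVIGATE"
    if state == "ALIGN":
        return "DRIVE_IN" if sensors["ir_aligned"] else "ALIGN"
    if state == "DRIVE_IN":
        if sensors["ir_beams_received"] and sensors["contact"]:
            return "DOCKED"
        return "DRIVE_IN"
    return state                   # DOCKED is terminal
```

Calling this once per control cycle lets each phase persist until its sensor condition is met, mirroring the staged approach in the abstract.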
-
Publication number: 20120321255
Abstract: This invention discloses scalable and modular automated optical cross-connect devices which exhibit low loss and scalability to high port counts. In particular, a device for the programmable interconnection of large numbers of optical fibers is provided, whereby a two-dimensional array of fiber optic connections is mapped in an ordered and rule based fashion into a one-dimensional array with tensioned fiber optic elements tracing substantially straight lines there between. Fiber optic elements are terminated in a stacked arrangement of flexible fiber optic elements with a capacity to retain excess fiber lengths while maintaining an adequate bend radius. The combination of these elements partitions the switch volume into multiple independent, non-interfering zones.
Type: Application
Filed: October 23, 2011
Publication date: December 20, 2012
Inventor: Anthony Kewitsch
-
Patent number: 8335590
Abstract: An image capturing device is robotically positioned and oriented in response to operator manipulation of a master control device. An unused degree-of-freedom of the master control device is used to adjust an attribute such as focusing of the image capturing device relative to a continually updated set-point. A deadband is provided to avoid inadvertent adjusting of the image capturing device attribute and haptic feedback is provided back to the master control device so that the operator is notified when adjusting of the attribute is initiated.
Type: Grant
Filed: December 23, 2008
Date of Patent: December 18, 2012
Assignee: Intuitive Surgical Operations, Inc.
Inventors: Michael Costa, David Robinson, Michael L. Hanuschik, Randal P. Goldberg, Paul Millman
-
Publication number: 20120316676
Abstract: Initial interaction between a mobile robot and at least one user is described herein. The mobile robot captures several images of its surroundings, and identifies existence of a user in at least one of the several images. The robot then orients itself to face the user, and outputs an instruction to the user with regard to the orientation of the user with respect to the mobile robot. The mobile robot captures images of the face of the user responsive to detecting that the user has followed the instruction. Information captured by the robot is uploaded to a cloud-storage system, where the information is included in a profile of the user and is shareable with others.
Type: Application
Filed: June 10, 2011
Publication date: December 13, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Jean Sebastien Fouillade, Russell Sanchez, Efstathios Papaefstathiou, Malek M. Chalabi
-
Patent number: 8326460
Abstract: A robot system calculates positional information of a workpiece with respect to a visual sensor and calculates positional information of an arm tip at a second time based on first times stored by a storing section and the second time at which the visual sensor measures the workpiece.
Type: Grant
Filed: January 27, 2011
Date of Patent: December 4, 2012
Assignee: Fanuc Corporation
Inventors: Kazunori Ban, Fumikazu Warashina, Makoto Yamada, Yuuta Namiki
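Computing the arm-tip position at the sensor's measurement time from stored, time-stamped samples is essentially interpolation; a hedged sketch using linear interpolation between the two bracketing samples (the patent does not specify the interpolation scheme):

```python
import bisect

def tip_at(measure_time, stamped_positions):
    """Arm-tip position at the visual sensor's measurement time.

    stamped_positions: list of (timestamp, position) samples sorted by
    time; positions are coordinate lists. Times outside the stored range
    clamp to the nearest sample.
    """
    times = [t for t, _ in stamped_positions]
    i = bisect.bisect_left(times, measure_time)
    if i == 0:
        return stamped_positions[0][1]
    if i == len(times):
        return stamped_positions[-1][1]
    (t0, p0), (t1, p1) = stamped_positions[i - 1], stamped_positions[i]
    a = (measure_time - t0) / (t1 - t0)   # interpolation fraction
    return [x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1)]
```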
-
Publication number: 20120297559
Abstract: An object searching method includes: capturing an image in front of a cleaning robot by a camera; comparing the image with a number of reference images to determine whether the image is the same as one of the reference images; storing a position of the cleaning robot and the image when the image is the same as one of the reference images; adjusting the path of the cleaning robot to stop the cleaning robot from cleaning the object; and emitting an alarm.
Type: Application
Filed: August 11, 2011
Publication date: November 29, 2012
Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
Inventors: HOU-HSIEN LEE, CHANG-JUNG LEE, CHIH-PING LO
-
Publication number: 20120296471
Abstract: A robot (100) has a robot mechanism unit (1) having a sensor (10) and a control unit (2), and the control unit (2) includes a normal control unit (4) that controls the operation of the robot mechanism unit, and a learning control unit (3) that, when the robot mechanism unit (1) is operated by a speed command that is given by multiplying a teaching speed designated in a task program by a speed change ratio, performs learning to calculate, from a detection result by the sensor (10), a learning correction amount for making the trajectory or position of the control target in the robot mechanism unit (1) approach the target trajectory or target position, or for reducing the vibration of the control target, and performs processes so that the control target position of the robot mechanism unit (1) moves along a fixed trajectory regardless of the speed change ratio.
Type: Application
Filed: May 11, 2012
Publication date: November 22, 2012
Applicant: FANUC CORPORATION
Inventors: Kiyonori INABA, Masakazu ICHINOSE
-
Publication number: 20120294483
Abstract: Hazardous objects in the field of explosives ordnance disposal or safety controls are identified using a sensor and image data generating arrangement and a comparison unit. The sensor and image data generating arrangement examines the object and produces an image thereof, which is compared by the comparison unit to known stored reference images. These reference images are digital images of reference objects. In this manner safety controls and explosives ordnance disposals can be organized safely and efficiently.
Type: Application
Filed: May 15, 2012
Publication date: November 22, 2012
Applicant: EADS Deutschland GmbH
Inventor: Dietmar VOGELMANN
-
Publication number: 20120296469
Abstract: A sucking-conveying device capable of sequentially and efficiently taking out and conveying a workpiece one-by-one, even when a taking-out means attached to the robot is not correctly positioned relative to a workpiece to be taken out. The sucking-conveying device includes a robot and a vision sensor capable of detecting a plurality of workpieces randomly located in a container. A suction nozzle, configured to suck and take out the workpieces one-by-one, is mounted on the robot. By attaching the nozzle to a robot arm, the position and orientation of the nozzle may be changed. The suction nozzle is fluidly connected to a suction unit via a blow member. The suction unit sucks air through the nozzle, and generates suction force at the nozzle for sucking a target workpiece to be taken out toward the nozzle, whereby the nozzle can suck and hold the target workpiece.
Type: Application
Filed: March 14, 2012
Publication date: November 22, 2012
Applicant: FANUC Corporation
Inventors: Toshimichi Yoshinaga, Masaru Oda, Keisuke Suga
-
Publication number: 20120296474
Abstract: A robot system according to an embodiment includes a robot, a switching determination unit, and a rearrangement instruction unit. The switching determination unit performs determination of switching between the operation of transferring the workpiece and the operation of rearranging the workpiece based on the state of transferring the workpiece by the robot. The rearrangement instruction unit instructs the robot to rearrange the workpiece.
Type: Application
Filed: March 1, 2012
Publication date: November 22, 2012
Applicant: KABUSHIKI KAISHA YASKAWA DENKI
Inventors: Toshimitsu IRIE, Tetsuya Yoshida, Shinji Murai
-
Patent number: 8315739Abstract: A method for determining the position of at least one object present within a working range of a robot by an evaluation system, wherein an image of at least one part of the working range of the robot is generated by a camera mounted on a robot. The image is generated during a motion of the camera and image data are fed to the evaluation system in real time, together with further data, from which the position and/or orientation of the camera when generating the image can be derived. The data are used for determining the position of the at least one object.Type: GrantFiled: June 15, 2010Date of Patent: November 20, 2012Assignee: ABB AGInventor: Fan Dai
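The core geometry described above, combining an image with the camera's pose at capture time to locate an object, can be sketched as back-projecting a pixel into a world-space ray and intersecting it with a known support plane. The function names, the pinhole model, and the flat-floor assumption are illustrative choices, not details from the patent:

```python
import numpy as np

def pixel_to_ray(u, v, K, R, t):
    """Back-project pixel (u, v) into a world-space ray using the camera
    intrinsics K and the pose (R, t) recorded when the image was generated."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction in camera frame
    d_world = R @ d_cam                                 # rotate into world frame
    return np.asarray(t, dtype=float), d_world / np.linalg.norm(d_world)

def locate_on_plane(origin, direction, plane_z=0.0):
    """Intersect the viewing ray with the plane z = plane_z; a single image
    fixes the object position only if it lies on a known surface."""
    s = (plane_z - origin[2]) / direction[2]
    return origin + s * direction
```

Because the image is generated during motion, the pose (R, t) must be timestamp-matched to the exposure, which is why the patent streams the pose data to the evaluation system in real time alongside the image.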
-
Publication number: 20120290134Abstract: A robotic system includes a camera having an image frame whose position and orientation relative to a fixed frame is determinable through one or more image frame transforms, a tool disposed within a field of view of the camera and having a tool frame whose position and orientation relative to the fixed frame is determinable through one or more tool frame transforms, and at least one processor programmed to identify pose indicating points of the tool from one or more camera captured images, determine an estimated transform for an unknown one of the image and tool frame transforms using the identified pose indicating points and known ones of the image and tool frame transforms, update a master-to-tool transform using the estimated and known ones of the image and tool frame transforms, and command movement of the tool in response to movement of a master using the updated master-to-tool transform.Type: ApplicationFiled: January 27, 2012Publication date: November 15, 2012Applicant: Intuitive Surgical Operations, Inc.Inventors: Tao Zhao, Giuseppe Maria Prisco, John Ryan Steger, David Q. Larkin
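The transform-chain arithmetic at the heart of this abstract, recovering one unknown transform from the composition of known ones, can be sketched with 4x4 homogeneous matrices. The frame names and helper functions below are assumptions for illustration; the patent's chains may contain several intermediate frames:

```python
import numpy as np

def make_T(R, p):
    """Build a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def solve_unknown_link(fixed_T_image, fixed_T_tool):
    """Given the chain fixed_T_tool = fixed_T_image @ image_T_tool with
    image_T_tool unknown, recover the unknown link by left-composing
    with the inverse of the known transform."""
    return np.linalg.inv(fixed_T_image) @ fixed_T_tool
```

In the patent's setting the camera-observed pose-indicating points supply an estimate of one side of such an equation, and the updated master-to-tool transform is then recomposed from the estimated and known links.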
-
Patent number: 8311677Abstract: A control device for a legged mobile robot has a unit which generates the time series of a future predicted value of a model external force manipulated variable as a feedback manipulated variable for reducing the deviation of the posture of the robot. A desired motion determining unit sequentially determines the instantaneous value of a desired motion such that the motion of the robot will reach or converge to a reaching target in the future in the case where it is assumed that the time series of an additional external force defined by the time series of a future predicted value of the model external force manipulated variable is additionally applied to the robot on a dynamic model.Type: GrantFiled: October 26, 2010Date of Patent: November 13, 2012Assignee: Honda Motor Co., Ltd.Inventors: Takahide Yoshiike, Hiroyuki Kaneko, Atsuo Orita
-
Publication number: 20120281064Abstract: A method and apparatus to provide simplified control over image configuration from virtually any capture device, allowing that image to be recorded and/or projected or displayed on any monitor, is disclosed. This universality enables general ease of use, and uncouples the capture device from expensive system support, thus providing a method to more efficiently utilize resources.Type: ApplicationFiled: May 3, 2011Publication date: November 8, 2012Applicants: Citynet LLC, 3D Surgical SolutionsInventors: Ray Hamilton Holloway, Christopher John Borg, Clifton Bradley Parker, Clifton Earl Parker
-
Publication number: 20120283874Abstract: The robotic work object cell calibration system includes a work object. The work object emits a pair of beam-projecting lasers acting as a crosshair, intersecting at a tool contact point (TCP). The work object also emits four plane-projecting lasers that are used to adjust the yaw, pitch, and roll of the robot tool relative to the tool contact point (TCP). The robotic work object cell calibration system provides a calibration system that is simpler, involves a lower investment cost, and entails lower operating costs than the prior art, and that can be used for different robot tools on a shop floor without having to perform a recalibration for each robot tool.Type: ApplicationFiled: February 1, 2012Publication date: November 8, 2012Inventor: Matthew E. Trompeter
-
Patent number: 8306738Abstract: An apparatus and method for building a map are provided. According to the apparatus and method, a path is generated on the basis of the degrees of uncertainty of features extracted from an image obtained while a mobile robot explores unknown surroundings, and the mobile robot travels along the generated path. The path based on the degrees of uncertainty of the features is generated and this may increase the accuracy of a feature map of the mobile robot or accuracy in self localization.Type: GrantFiled: March 10, 2009Date of Patent: November 6, 2012Assignee: Samsung Electronics Co., Ltd.Inventors: Dong-Geon Kong, Hyoung-Ki Lee
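The uncertainty-driven exploration idea above can be sketched as a simple greedy policy: steer toward the map feature whose estimate is least certain, so that observing it reduces overall map uncertainty. The `(position, uncertainty)` tuple format and the use of a single scalar uncertainty (e.g. a covariance trace) are assumptions for illustration, not the patent's path planner:

```python
def next_waypoint(features):
    """Pick the feature whose pose estimate is least certain as the next
    exploration target. Each feature is (position, uncertainty), where
    uncertainty could be the trace of the feature's covariance matrix."""
    position, _ = max(features, key=lambda f: f[1])
    return position
```

A full planner would also weigh travel cost and obstacle avoidance; this sketch isolates only the selection criterion.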
-
Patent number: 8306657Abstract: A control device for a legged mobile robot has a first motion determiner which sequentially determines the instantaneous value of a first motion of a robot by using a first dynamic model and a second motion determiner which sequentially determines the instantaneous value of a second motion of the robot by using a second dynamic model, and sequentially determines a desired motion of the robot by combining the first motion and the second motion. A low frequency component and a high frequency component of a feedback manipulated variable having a function for bringing a posture state amount error, which indicates the degree of the deviation of an actual posture of the robot from a desired posture, close to zero are fed back to the first motion determiner and the second motion determiner, respectively.Type: GrantFiled: October 26, 2010Date of Patent: November 6, 2012Assignee: Honda Motor Co., Ltd.Inventors: Takahide Yoshiike, Hiroyuki Kaneko, Atsuo Orita
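The frequency split of the feedback manipulated variable can be sketched with a first-order low-pass filter whose residual serves as the high-frequency complement, so the two components sum exactly to the original signal and no feedback information is lost. The filter choice and the `alpha` parameter are illustrative assumptions, not the patent's filter design:

```python
def split_feedback(samples, alpha=0.1):
    """Split a feedback sequence into complementary low- and high-frequency
    parts; low + high reproduces each original sample exactly."""
    low = samples[0]
    lows, highs = [], []
    for x in samples:
        low += alpha * (x - low)   # exponential moving average = low-pass
        lows.append(low)
        highs.append(x - low)      # residual = high-pass complement
    return lows, highs
```

In the patent's arrangement the low-frequency part would drive the first (slower, whole-body) dynamic model and the high-frequency part the second model.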
-
Publication number: 20120272914Abstract: In certain embodiments, a system includes a robotic attacher comprising a main arm and a supplemental arm operable to extend into a stall portion of a milking box. A camera couples to the supplemental arm. The supplemental arm comprises a camera-facing nozzle operable to spray the camera with a cleanser.Type: ApplicationFiled: April 17, 2012Publication date: November 1, 2012Applicant: TECHNOLOGIES HOLDINGS CORP.Inventors: Henk Hofman, Cor de Ruijter, Menno Koekoek, Peter Willem van der Sluis
-
Publication number: 20120277913Abstract: In certain embodiments, a system includes a controller operable to access a first image generated by a first camera. The controller determines a reference point from at least one main feature of a dairy livestock included in the first image. The controller is further operable to access a second image generated by a second camera. The second image includes at least a portion of an udder of the dairy livestock. The controller determines a location of a teat of the dairy livestock based on the second image.Type: ApplicationFiled: April 28, 2011Publication date: November 1, 2012Applicant: Technologies Holdings Corp.Inventors: Henk Hofman, Peter Willem van der Sluis, Ype Groensma
-
Publication number: 20120274761Abstract: In an exemplary embodiment, a system includes a first camera, a second camera, and a processor wherein the second camera has a higher resolution than the first camera. The processor is communicatively coupled to the first camera and the second camera and is operable to determine a center coordinate of an udder of a dairy livestock based at least in part upon visual data captured by the first camera. The processor is also operable to determine a position of a teat of the dairy livestock based at least in part upon the center coordinate and visual data captured by the second camera.Type: ApplicationFiled: April 17, 2012Publication date: November 1, 2012Applicant: Technologies Holdings Corp.Inventors: Henk Hofman, Cor de Ruijter, Menno Koekoek, Peter Willem van der Sluis
-
Publication number: 20120277914Abstract: The subject disclosure is directed towards a set of autonomous and semi-autonomous modes for a robot by which the robot captures content (e.g., still images and video) from a location such as a house. The robot may produce a summarized presentation of the content (a “botcast”) that is appropriate for a specific scenario, such as an event, according to a specified style. Modes include an event mode where the robot may interact with and stimulate event participants to provide desired content for capture. A patrol mode operates the robot to move among locations (e.g., different rooms) to capture a panorama (e.g., 360 degrees) of images that can be remotely viewed.Type: ApplicationFiled: April 29, 2011Publication date: November 1, 2012Applicant: MICROSOFT CORPORATIONInventors: William M. Crow, Nathaniel T. Clinton, Malek M. Chalabi, Dane T. Storrusten
-
Patent number: 8301304Abstract: A detecting apparatus and method of a robot cleaner is disclosed. The apparatus includes a detecting unit provided with a transmitting unit for sending a signal to detect the floor and a receiving unit for receiving the signal sent from the transmitting unit and reflected off the floor; an optic angle adjusting unit disposed at at least one of the transmitting unit and the receiving unit and configured to adjust the optic angles of the signals; and a light shielding unit configured to partially shield a signal sent through the optic angle adjusting unit in order to reduce a deviation between the signals of the transmitting unit and the receiving unit. Accordingly, a measurement deviation with respect to the color and texture of the floor can be reduced. Also, the amount of light received at the receiving unit can be obtained as much as required, which allows accurate detection by the detecting apparatus. In addition, even if the floor has both drop-offs and bumps, the robot cleaner can operate smoothly.Type: GrantFiled: December 10, 2008Date of Patent: October 30, 2012Assignee: LG Electronics Inc.Inventors: Young-Gyu Jung, Sang-Cheon Kim, Tae-Woong Nah, Woo-Jin Choi
-
Publication number: 20120271502Abstract: Disclosed are a robot cleaner and a method for controlling the same. Firstly, an obstacle may be detected by using a light pattern sensor, and the inconvenience to a user caused by irradiation of a light pattern may be avoided. Secondly, an obstacle may be precisely detected in three dimensions by using the light pattern sensor, which allows precise creation of a cleaning map. Thirdly, a user's eyes may be prevented from being continuously exposed to a light source, which enhances the user's convenience.Type: ApplicationFiled: April 11, 2012Publication date: October 25, 2012Inventors: Seongsoo LEE, Dongki Noh, Chulmo Sung, Seungmin Baek, Jeongsuk Yoon
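Three-dimensional detection with a light pattern sensor typically reduces to triangulation: the projected pattern's apparent shift in the camera image encodes range. The baseline/disparity formulation below is a standard structured-light assumption, not the patent's specific optics:

```python
def obstacle_distance(baseline_m, focal_px, disparity_px):
    """Triangulated range to the point where the projected light pattern
    lands, for an emitter and camera separated by baseline_m meters:
    distance = baseline * focal_length / disparity."""
    return baseline_m * focal_px / disparity_px
```

For example, with a 5 cm baseline, a 500-pixel focal length, and a 25-pixel pattern shift, the obstacle lies about 1 m away; sweeping the pattern across the scene yields the depth samples from which a cleaning map can be built.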