Robotics Patents (Class 382/153)
-
Patent number: 7702134
Abstract: A method and apparatus for identifying 3-D coordinates of a target region on a tire includes: taking a digital image of a tire; finding an edge of a tire bead using pixel brightness values from the tire image; calculating tire bead circle center and radius using a plurality of image pixels on the tire bead edge; and performing a pixel brightness search around the bead circumference using the bead circle's center and radius to identify the target area X, Y coordinates. The Z-coordinate and slope of the target area are determined from multiple point distance calculations across the region.
Type: Grant
Filed: December 2, 2005
Date of Patent: April 20, 2010
Assignee: The Goodyear Tire & Rubber Company
Inventor: Daniel McAllister, Jr.
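The bead-fitting step above, recovering a circle's center and radius from edge pixels, can be sketched with a standard least-squares circle fit. The patent does not name its fitting method, so the Kåsa fit and the `fit_circle` helper below are illustrative assumptions, not the patented procedure:

```python
def fit_circle(points):
    """Kasa least-squares circle fit: solve x^2 + y^2 = A*x + B*y + C
    in the least-squares sense, then recover center (A/2, B/2) and the
    radius. Pure Python: builds the 3x3 normal equations and solves
    them by Cramer's rule."""
    sx = sy = sxx = syy = sxy = sxz = syz = sz = n = 0.0
    for x, y in points:
        z = x * x + y * y
        sx += x; sy += y; sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z; sz += z; n += 1.0
    # normal equations (M^T M) p = M^T b for rows M = [x, y, 1], b = x^2 + y^2
    mat = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(mat)
    sol = []
    for col in range(3):
        m = [row[:] for row in mat]
        for r in range(3):
            m[r][col] = rhs[r]
        sol.append(det3(m) / d)
    A, B, C = sol
    cx, cy = A / 2.0, B / 2.0
    radius = (C + cx * cx + cy * cy) ** 0.5
    return (cx, cy), radius
```

With exact points on a circle the fit is exact; with noisy edge pixels it returns the least-squares best circle.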
-
Patent number: 7697740
Abstract: IVR-CT apparatus has an angio-image obtaining unit, an angio-image imaging direction obtaining unit, a CT-image obtaining unit, a blood vessel part extracting unit, a projected image generating unit and a display control unit. The angio-image obtaining unit obtains a required angio-image from multiple chronological angio-images. The angio-image imaging direction obtaining unit obtains a direction of imaging as incidental information included in data on the required angio-image. The CT-image obtaining unit obtains a three-dimensional CT-image corresponding to the required angio-image. The blood vessel part extracting unit extracts a blood vessel part in the three-dimensional CT-image. The projected image generating unit generates a three-dimensional projected image by projecting the blood vessel part, and a three-dimensional projected image corresponding to the projection direction after that direction is changed by manual operation.
Type: Grant
Filed: October 20, 2006
Date of Patent: April 13, 2010
Assignees: Kabushiki Kaisha Toshiba, Toshiba Medical Systems Corporation
Inventor: Yasuko Fujisawa
-
Patent number: 7693325
Abstract: A system and methods for transprojection of geometry data acquired by a coordinate measuring machine (CMM). The CMM acquires geometry data corresponding to 3D coordinate measurements collected by a measuring probe that are transformed into scaled 2D data that is transprojected upon various digital object image views captured by a camera. The transprojection process can utilize stored image and coordinate information or perform live transprojection viewing capabilities in both still image and video modes.
Type: Grant
Filed: January 14, 2004
Date of Patent: April 6, 2010
Assignee: Hexagon Metrology, Inc.
Inventors: Sandeep Pulla, Homer Eaton
-
Patent number: 7684916
Abstract: An imaging device collects color image data to facilitate distinguishing crop image data from background data. A definer defines a series of scan line segments generally perpendicular to a transverse axis of the vehicle or of the imaging device. An intensity evaluator determines scan line intensity data for each of the scan line segments. An alignment detector identifies a preferential heading of the vehicle that is generally aligned with respect to a crop feature, associated with the crop image data, based on the determined scan line intensity meeting or exceeding a maximum value or minimum threshold value. A reliability estimator estimates a reliability of the vehicle heading based on compliance with an intensity level criterion associated with one or more crop rows.
Type: Grant
Filed: December 16, 2005
Date of Patent: March 23, 2010
Assignees: Deere & Company, Kansas State University Research Foundation
Inventors: Jiantao Wei, Shufeng Han
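The alignment search described above, scoring candidate scan lines by intensity and choosing the best-aligned heading, can be illustrated with a toy version. The sampling geometry and the `best_heading` helper below are assumptions for illustration, not the patented procedure:

```python
import math

def best_heading(image, origin, headings, length):
    """For each candidate heading (radians, 0 = straight up the image),
    sample pixel intensities along a scan line starting at `origin`
    (row, col) and return the heading whose mean intensity is highest,
    i.e. the one best aligned with a bright crop row."""
    h, w = len(image), len(image[0])
    best, best_score = None, -1.0
    for theta in headings:
        total, n = 0.0, 0
        for step in range(length):
            # march up the image, drifting sideways by tan(theta)
            r = origin[0] - step
            c = int(round(origin[1] + step * math.tan(theta)))
            if 0 <= r < h and 0 <= c < w:
                total += image[r][c]
                n += 1
        score = total / n if n else 0.0
        if score > best_score:
            best, best_score = theta, score
    return best
```

On a synthetic image with one bright vertical crop row, the zero-drift heading scores highest.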
-
Patent number: 7672503
Abstract: A direction-recognizing apparatus has a photographing unit, an image database, an image-recognizing unit, and a direction-recognizing unit. The database stores registered images and direction-data items associated with the registered images. The image-recognizing unit receives an input image and compares the input image with the registered images stored in the database. The image-recognizing unit selects one registered image that is identical or similar to the input image. The direction-recognizing unit recognizes a direction from the direction data associated with the registered image selected by the image-recognizing unit. The database may store N direction-data items associated with N surface segments SN of the circumferential surface of a pole. If so, the images registered in the database represent direction-recognition regions ASN that are larger than the N surface segments SN.
Type: Grant
Filed: August 9, 2004
Date of Patent: March 2, 2010
Assignee: Sony Corporation
Inventors: Hidehiko Morisada, Shingo Tsurumi
-
Publication number: 20100040279
Abstract: A method and apparatus to build a 3-dimensional grid map and a method and apparatus to control an automatic traveling apparatus using the same. In building a 3-dimensional map to discern a current location and a peripheral environment of an unmanned vehicle or a mobile robot, 2-dimensional localization and 3-dimensional image restoration are appropriately used to accurately build the 3-dimensional grid map more rapidly.
Type: Application
Filed: March 26, 2009
Publication date: February 18, 2010
Applicant: Samsung Electronics Co., Ltd
Inventors: Sukjune Yoon, Kyung Shik Roh, Woong Kwon, Seung Yong Hyung, Hyun Kyu Kim
-
Publication number: 20100036393
Abstract: A robotic control system has a wand that emits multiple narrow beams of light, which fall on a light sensor array or, with a camera, on a surface, defining the wand's changing position and attitude. A computer uses this position and attitude to direct the relative motion of robotic tools or remote processes, such as those that are controlled by a mouse, but in three dimensions, together with motion compensation means and means for reducing latency.
Type: Application
Filed: February 29, 2008
Publication date: February 11, 2010
Applicant: TITAN MEDICAL INC.
Inventor: John Unsworth
-
Patent number: 7660665
Abstract: Autonomous mobile equipment includes a position-of-object and own position detecting system and a moving unit, and autonomously moves. The position-of-object and own position detecting system includes a database in which pieces of information on the superficial shape and position of an object are recorded. The superficial shape of the object detected by a position measuring unit is collated with the superficial shape of the object recorded in the database. If the collated superficial shapes agree with each other, the pieces of information on the object recorded in the database are transmitted to a traveling planning unit. If the collated superficial shapes disagree with each other, the information on the object acquired by the position measuring unit is transmitted to the traveling planning unit.
Type: Grant
Filed: November 16, 2005
Date of Patent: February 9, 2010
Assignee: Hitachi, Ltd.
Inventors: Junichi Tamamoto, Yuji Hosoda, Saku Egawa, Toshihiko Horiuchi
-
Publication number: 20100030379
Abstract: The invention describes a method of controlling an autonomous device (1), which autonomous device records ambient data and optionally transmits the recorded ambient data, which method comprises positioning an indicator (S1, S2, S3, S4) at a boundary (B) between a private area (P) and a non-private area (N) to optically distinguish the private area (P) from the non-private area (N) for a user of the autonomous device (1). The indicator (S1, S2, S3, S4) is detected by the autonomous device (1) and interpreted to determine whether the autonomous device (1) is in a private area (P) or a non-private area (N). Subsequently, recording or transmission of ambient data is restricted while the autonomous device (1) is within the private area (P).
Type: Application
Filed: December 12, 2007
Publication date: February 4, 2010
Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
Inventors: Georgios Parlantzas, Jens Friedemann Marschiner
-
Publication number: 20100021051
Abstract: Disclosed herein are embodiments and methods of a visual guidance and recognition system requiring no calibration. One embodiment of the system comprises a servo actuated manipulator configured to perform a function, a camera mounted on the face plate of the manipulator, and a recognition controller configured to acquire a two dimensional image of the work piece. The manipulator controller is configured to receive and store the face plate position at a distance "A" between the reference work piece and the manipulator along an axis of the reference work piece when the reference work piece is in the camera's region of interest. The recognition controller is configured to learn the work piece from the image and the distance "A". During operation, a work piece is recognized with the system, and the manipulator is accurately positioned with respect to the work piece so that the manipulator can accurately perform its function.
Type: Application
Filed: July 22, 2008
Publication date: January 28, 2010
Applicants: RECOGNITION ROBOTICS, INC., COMAU INC.
Inventors: Simon Melikian, Maximiliano A. Falcone, Joseph Cyrek
-
Patent number: 7653247
Abstract: A system and method for extracting a corner point in a space using pixel information obtained from a camera are provided. The corner point extracting system includes a light generation module emitting light in a predetermined form (such as a plane form), an image acquisition module acquiring an image of a reflector reflecting the light emitted from the light generation module, and a control module obtaining distance data between the light generation module and the reflector using the acquired image and extracting a corner point by performing split-merge using a threshold proportional to the distance data. The threshold is a value proportional to the distance data which corresponds to pixel information of the image acquisition module.
Type: Grant
Filed: September 29, 2005
Date of Patent: January 26, 2010
Assignee: Samsung Electronics Co., Ltd.
Inventors: Myung-Jin Jung, Dong-Ryeol Park, Seok-Won Bang, Hyoung-Ki Lee
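The split step of the split-merge procedure mentioned above can be sketched as a recursive subdivision: break a chain of range points wherever some point lies farther from the endpoint-to-endpoint line than a threshold. In the patent the threshold is proportional to the measured distance; here it is simply passed in, and the `split_points` helper is an illustrative assumption:

```python
def split_points(points, threshold):
    """Recursive split phase of split-and-merge: return the indices of
    breakpoints (corner candidates) where the maximum perpendicular
    distance from the endpoint-to-endpoint line exceeds `threshold`."""
    def point_line_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
        den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
        return num / den if den else ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5

    def recurse(lo, hi, out):
        if hi - lo < 2:
            return
        a, b = points[lo], points[hi]
        idx, dmax = lo, 0.0
        for i in range(lo + 1, hi):
            d = point_line_dist(points[i], a, b)
            if d > dmax:
                idx, dmax = i, d
        if dmax > threshold:
            out.add(idx)          # breakpoint = corner candidate
            recurse(lo, idx, out)
            recurse(idx, hi, out)

    corners = set()
    recurse(0, len(points) - 1, corners)
    return sorted(corners)
```

For an L-shaped chain of range points the single breakpoint falls at the corner.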
-
Patent number: 7653216
Abstract: A polyhedron recognition system is provided that is not easily affected by the camera position and camera range when the shape of a staircase or other polyhedron is recognized from an image obtained by photography, and that can recognize the shape with good accuracy. In the system, predetermined regions (staircase candidate regions) within the image input from two CCD cameras are selected, a range image is obtained stereoscopically with the two cameras, a candidate region on the range image is set based on the selected regions, and the shape of the staircase or other polyhedron is recognized based on the range image within the set candidate region.
Type: Grant
Filed: December 23, 2003
Date of Patent: January 26, 2010
Assignees: Carnegie Mellon University, Honda Motor Co., Ltd.
Inventors: Takeo Kanade, Taku Osada
-
Patent number: 7643064
Abstract: A predictive device system includes a first device motion control input, determines a desired first device motion using the first device motion control input, and provides actual first device motion using the first device motion control input. The predictive system also determines motion inherent in a received signal using the actual first device motion, determines a difference to be simulated in a second device signal using the desired first device motion and the motion inherent in the received signal, and outputs a predictive signal using the first device motion control input and the difference to be simulated in the second device signal.
Type: Grant
Filed: June 21, 2005
Date of Patent: January 5, 2010
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Norman Paul Jouppi
-
Patent number: 7620237
Abstract: Two aiming targets are arranged at positions that are away from two infrared cameras by a predetermined distance. One of the infrared cameras images one of the aiming targets, and the other of the infrared cameras images the other of the aiming targets. Assuming that the obtained images relate to an image of one aiming target, a parallax offset value is calculated. The correction of position is performed based on the parallax offset value.
Type: Grant
Filed: November 23, 2005
Date of Patent: November 17, 2009
Assignee: Honda Motor Co., Ltd.
Inventors: Nobuharu Nagaoka, Masakazu Saka, Masahito Watanabe
-
Patent number: 7616806
Abstract: A first object and a second object arranged in an actual space with coordinates (Xn, Zn) and (Xn − D, Zn) are imaged, and respective coordinates x1* and x2* of the first object and the second object in the image are calculated. Then, a coordinate x1 of the first object in the image and a coordinate x2 of the second object in the image are calculated by the equations x1 = F·Xn/Zn and x2 = F·(Xn − D)/Zn, where F is a design parameter of an imaging unit. An image distortion corrective value α to correct the design parameter F is calculated from the equations α·x1* = x1 and α·x2* = x2, using the difference between the coordinates x1 and x1* and the difference between the coordinates x2 and x2*.
Type: Grant
Filed: November 23, 2005
Date of Patent: November 10, 2009
Assignee: Honda Motor Co., Ltd.
Inventors: Nobuharu Nagaoka, Masakazu Saka, Masahito Watanabe
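The abstract's projection equations can be turned into a small worked example, rendering the (garbled) corrective-value symbol as alpha. Choosing alpha by least squares over the two constraints is an assumption; the abstract only states that alpha relates the measured coordinates x1*, x2* to the ideal projections x1, x2:

```python
def distortion_correction(F, Xn, Zn, D, x1_meas, x2_meas):
    """Compute the ideal pinhole projections x1, x2 of two objects at
    (Xn, Zn) and (Xn - D, Zn), then a scalar corrective value alpha
    chosen in the least-squares sense so that alpha * x_measured is as
    close as possible to x_ideal for both objects."""
    x1 = F * Xn / Zn
    x2 = F * (Xn - D) / Zn
    # least-squares alpha for the constraints alpha*x1_meas = x1
    # and alpha*x2_meas = x2
    alpha = (x1 * x1_meas + x2 * x2_meas) / (x1_meas ** 2 + x2_meas ** 2)
    return x1, x2, alpha
```

If both measured coordinates are uniformly scaled by 1/1.05 relative to the ideal projections, the recovered alpha is exactly 1.05.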
-
Patent number: 7606411
Abstract: A gesture recognition system enabling control of a robotic device through gesture commands by a user is provided, comprising a robotic unit, a video or infrared camera affixed to the robotic unit, computing means, and high- and low-level control gesture recognition application code. The code enables the system to locate points of the left hand, right hand, upper torso and lower torso of the user in the video imagery, convert them to waveform data, correlate the waveform data to user command data, and form corresponding control voltage command(s) for production of electric current voltage(s) to drive one or more of the electric motors or actuators of the robotic device, thereby controlling it. In addition, a computer software program is provided for use in the gesture recognition system described above.
Type: Grant
Filed: October 5, 2006
Date of Patent: October 20, 2009
Assignee: The United States of America as represented by the Secretary of the Navy
Inventors: Larry Venetsky, Jeffrey W. Tieman
-
Patent number: 7596468
Abstract: A computer-implemented method for measuring a selected portion of a curved surface of an object is disclosed. The method includes the blocks of displaying a straight-line across an object, stretching the straight-line to form a plane, determining intersection points between the plane and the curved surface of the object, determining a vertical point of each point-cloud around the straight-line on the curved surface, a corresponding vertical distance, and a corresponding normal vector, projecting the vertical points onto the plane vertically, determining measured points, up tolerance points, and down tolerance points for the point-clouds around the straight-line on the plane, connecting the corresponding points to lines, and determining if one or more of the dimensions of a selected portion around the straight-line of the object is acceptable according to the connected lines.
Type: Grant
Filed: July 25, 2008
Date of Patent: September 29, 2009
Assignees: Hong Fu Jin Precision Industry (ShenZhen) Co., Ltd., Hon Hai Precision Industry Co., Ltd.
Inventors: Chih-Kuang Chang, Xin-Yuan Wu, Min Wang, Hua Huang
-
Patent number: 7593546
Abstract: A method for mutually-immersive telepresencing is provided with a view of a surrogate's location. An image of the surrogate's location is displayed at a user's location. A user's eye level and perspective are sensed. The height of the camera and image of the user's eyes at the surrogate's location are adjusted to match the height of the user's eyes. The user's perspective and, hence, gaze are preserved on the image while the user's eye level changes.
Type: Grant
Filed: March 11, 2003
Date of Patent: September 22, 2009
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Norman Paul Jouppi
-
Patent number: 7590276
Abstract: Methods and systems of part programming for machine vision inspection systems are provided, which permit a user to readily define multiple image acquisition operations interspersed with associated image analysis and/or inspection operations during learn mode operations. In the resulting part program, image acquisition operations for at least some of the images are arranged into a continuous motion image acquisition sequence that acquires and stores images in a "non-interspersed" manner in order to increase the throughput of the machine vision inspection system. Image analysis/inspection operations associated with the stored images are performed subsequently by recalling the stored images. The programming systems and methods disclosed herein may operate automatically to facilitate rapid programming for a variety of workpieces by relatively unskilled users, wherein the resulting programs include continuous motion image acquisition sequences.
Type: Grant
Filed: December 20, 2004
Date of Patent: September 15, 2009
Assignee: Mitutoyo Corporation
Inventor: Mark L. Delaney
-
Patent number: 7587261
Abstract: An image acquisition system for machine vision systems decouples image acquisition from the transmission of the image to a host processor by using a programmable imager controller to selectively disable and enable the transmission of data to the host and by using a system of buffers to temporarily store image data pending allocation of memory. This enables the image acquisition system to acquire images asynchronously and to change the exposure parameters on a frame-by-frame basis without the latency associated with the allocation of memory for storage of the acquired image. The system architecture of the invention further permits interruption and resumption of image acquisition with minimal likelihood of missing data. Data throughput is further enhanced by transmitting to the host only that data corresponding to the region of interest within the image and discarding the data from outside of the region of interest at the camera stage.
Type: Grant
Filed: January 31, 2003
Date of Patent: September 8, 2009
Assignee: Metrovideo, Inc.
Inventor: T. Eric Hopkins
-
Patent number: 7583835
Abstract: A reconnaissance process for taking successive images (7, 10) of an object using a camera, optimally pairing points (A, B) into a single movement (?m) compatible with the movement of the camera that has taken the two images, in order to calculate the position of the object. The points that can be paired belong to the object even if they have been obtained automatically, whereas the background of the image often has a lower number of points, making it impossible to pair with the movement (?m).
Type: Grant
Filed: July 6, 2005
Date of Patent: September 1, 2009
Assignee: Commissariat a l'Energie Atomique
Inventor: Christophe Leroux
-
Publication number: 20090208094
Abstract: To make it possible, in a robot apparatus that performs actions in response to external environment, to distinguish the image of a part of its own body contained in three dimensional data of the external environment. A robot 1 includes a visual sensor 101 to visually recognize external environment, an environment restoration portion 102 to create three dimensional data of the external environment based on the information acquired by the visual sensor 101, and a body estimation portion 104 to determine whether or not an image of the body of the robot apparatus 1 is contained in the three dimensional data, and to specify, when the image of the body of the robot apparatus 1 is determined to be contained in the three dimensional data, an area occupied by the image of the body of the robot apparatus 1 in the three dimensional data.
Type: Application
Filed: June 27, 2007
Publication date: August 20, 2009
Inventors: Hirohito Hattori, Yusuke Nakano, Noriaki Matsui
-
Publication number: 20090190826
Abstract: A working apparatus comprises a working unit which executes work on a work subject, and a calibration jig on which a plurality of markers is arranged in a radial pattern from a center point of markers, the plurality of markers being arranged in three dimensions, and the calibration jig being attached to a working unit such that a calibration reference point set of a working unit coincides with a center point of markers. According to such a composition, it becomes possible to calibrate a position of a working unit even when a portion of the jig containing a center point of markers is occluded during image measurement.
Type: Application
Filed: January 22, 2009
Publication date: July 30, 2009
Applicant: CANON KABUSHIKI KAISHA
Inventors: Shunta Tate, Masato Aoba
-
Publication number: 20090192647
Abstract: An object search apparatus acquires, from each of the IC tags corresponding to respective objects, an object information item including an identifier of the object, a hierarchical level, a detection method for detection of the object, and a manipulation method for the object, to obtain a plurality of object information items including a target object information item of a target object. The apparatus selects, from the object information items, an object information item higher in the hierarchical level than the target object, detects the object corresponding to the selected object information item by using the detection method in the selected object information item, and manipulates the detected object in accordance with the manipulation method in the selected object information item. It then selects the target object information item from the object information items and detects the target object from the detected object by using the detection method in the target object information item.
Type: Application
Filed: January 27, 2009
Publication date: July 30, 2009
Inventor: Manabu Nishiyama
-
Patent number: 7567728
Abstract: An image transformation method to be implemented by a computer carries out an image transformation process, by transforming a picked up image that is picked up by a pickup unit having an optical axis tilted by an arbitrary angle with respect to a reference plane into an image substantially equivalent to a picked up image that is picked up by the pickup unit in a state where the optical axis of the pickup unit is perpendicular to the reference plane, and substituting luminance values of coordinates before the transformation as luminance values corresponding to coordinate values after the transformation.
Type: Grant
Filed: August 13, 2004
Date of Patent: July 28, 2009
Assignee: Fujitsu Limited
Inventor: Takayuki Fujimoto
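One common way to realize such a transformation is to inverse-map each pixel of the rectified (perpendicular-view) image through the rotation of a pinhole camera and copy the source luminance, matching the abstract's coordinate-substitution step. The pinhole model, nearest-neighbour sampling, and the `rectify` helper are assumptions for illustration, not the patented method:

```python
import math

def rectify(image, tilt_rad, f):
    """Inverse-map each pixel of the rectified (perpendicular-view)
    image back into the tilted source image by rotating its viewing ray
    about the horizontal axis, then copy the source luminance
    (nearest-neighbour). `f` is the focal length in pixels."""
    h, w = len(image), len(image[0])
    cx, cy = w / 2.0, h / 2.0
    c, s = math.cos(tilt_rad), math.sin(tilt_rad)
    out = [[0] * w for _ in range(h)]
    for v in range(h):
        for u in range(w):
            # normalized ray through the rectified pixel
            x, y, z = (u - cx) / f, (v - cy) / f, 1.0
            # rotate the ray by the tilt (about the horizontal axis)
            yr = c * y - s * z
            zr = s * y + c * z
            if zr <= 0:
                continue  # ray points away from the image plane
            us = int(round(cx + f * x / zr))
            vs = int(round(cy + f * yr / zr))
            if 0 <= us < w and 0 <= vs < h:
                out[v][u] = image[vs][us]
    return out
```

With zero tilt the mapping is the identity; with a nonzero tilt each output row samples a progressively foreshortened band of the source image.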
-
Patent number: 7561754
Abstract: An image transformation method to be implemented by a computer carries out an image transformation process, by transforming a picked up image that is picked up by a pickup unit having an optical axis tilted by an arbitrary angle with respect to a reference plane into an image substantially equivalent to a picked up image that is picked up by the pickup unit in a state where the optical axis of the pickup unit is perpendicular to the reference plane, and substituting luminance values of coordinates before the transformation as luminance values corresponding to coordinate values after the transformation.
Type: Grant
Filed: May 16, 2008
Date of Patent: July 14, 2009
Assignee: Fujitsu Limited
Inventor: Takayuki Fujimoto
-
Publication number: 20090154791
Abstract: A simultaneous localization and map building (SLAM) method and medium for a mobile robot are disclosed. The SLAM method includes extracting a horizontal line from an omni-directional image photographed at every position the mobile robot reaches during its movement, correcting the extracted horizontal line to create a horizontal line image, and simultaneously executing localization of the mobile robot and building a map for the mobile robot, using the created horizontal line image and a previously-created horizontal line image.
Type: Application
Filed: May 23, 2008
Publication date: June 18, 2009
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sukjune Yoon, Seung Ki Min, Kyung Shik Roh
-
Publication number: 20090154769
Abstract: A moving robot and a moving object detecting method and medium thereof are disclosed. The moving object detecting method includes transforming an omni-directional image captured by the moving robot to a panoramic image, comparing the panoramic image with a previous panoramic image and estimating a movement region of the moving object based on the comparison, and recognizing that a movement of the moving object exists in the estimated movement region when the area of the estimated movement region exceeds a reference area.
Type: Application
Filed: May 20, 2008
Publication date: June 18, 2009
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sukjune Yoon, Seung Ki Min, Kyung Shik Roh, Chil Woo Lee, Chi Min Oh
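The omni-to-panorama step above is typically a polar-to-Cartesian unwrap: each panorama column corresponds to an angle around the mirror center and each row to a radius. The `unwrap_panorama` helper and its nearest-neighbour sampling are assumptions for illustration, not the published method:

```python
import math

def unwrap_panorama(omni, center, r_min, r_max, pano_w):
    """Unwrap the donut-shaped useful band of an omnidirectional image
    into a panoramic strip: column = angle, row = radius (outer ring at
    the top), luminance sampled nearest-neighbour from the omni image."""
    cy, cx = center
    pano_h = r_max - r_min + 1
    pano = [[0] * pano_w for _ in range(pano_h)]
    for row in range(pano_h):
        r = r_max - row  # top row of the panorama = outer ring
        for col in range(pano_w):
            theta = 2.0 * math.pi * col / pano_w
            y = int(round(cy + r * math.sin(theta)))
            x = int(round(cx + r * math.cos(theta)))
            if 0 <= y < len(omni) and 0 <= x < len(omni[0]):
                pano[row][col] = omni[y][x]
    return pano
```

Frame-to-frame differencing of successive panoramas, as in the abstract, then yields candidate movement regions.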
-
Publication number: 20090148034
Abstract: There is disclosed a mobile robot including an image processor that generates recognition information regarding a target object included in a taken image, and a main controller integrally controlling the robot based on this recognition information.
Type: Application
Filed: December 3, 2008
Publication date: June 11, 2009
Applicant: Honda Motor Co., Ltd.
Inventors: Nobuo Higaki, Ryujin Watanabe
-
Publication number: 20090148035
Abstract: A controller of a mobile robot that moves an object such that the position of a representative point of the object and the posture of the object follow a desired position and posture trajectory is provided. The desired posture trajectory of the object includes the desired value of the angular difference about a yaw axis between a reference direction, which is a direction orthogonal to the yaw axis of the object, and the direction of the moving velocity vector of the representative point of the object, defined by the desired position trajectory. The controller has a desired angular difference setting means which variably sets the desired value of the angular difference according to at least a required condition related to a movement mode of the object. This allows the object to be moved at a posture which meets the required condition of the movement mode.
Type: Application
Filed: December 5, 2008
Publication date: June 11, 2009
Applicant: HONDA MOTOR CO., LTD.
Inventors: Nobuyuki Ohno, Tadaaki Hasegawa
-
Patent number: 7536028
Abstract: A monitoring system according to the present invention has a plurality of cameras, each monitoring a monitor area and having a changeable imaging magnification; a monitoring section detecting an object to be monitored in the monitor area; and a control section that, when the object to be monitored is detected, selects from the plural cameras at least one camera capable of obtaining an enlarged image contributing to identification of the object as a camera for enlargement, and selects at least one other camera as a camera for wide area. The camera for wide area is set to a first imaging magnification to obtain a wide-area image of the monitor area, and the camera for enlargement is set to a second imaging magnification, larger than the first, to obtain an enlarged image of the detected object to be monitored.
Type: Grant
Filed: March 24, 2003
Date of Patent: May 19, 2009
Assignee: Minolta Co., Ltd.
Inventor: Hironori Sumitomo
-
Publication number: 20090116728
Abstract: A method and system determines a 3D pose of an object in a scene. Depth edges are determined from a set of images acquired of a scene including multiple objects while varying illumination in the scene. The depth edges are linked to form contours. The images are segmented into regions according to the contours. An occlusion graph is constructed using the regions. The occlusion graph includes a source node representing an unoccluded region of an unoccluded object in the scene. The contour associated with the unoccluded region is compared with a set of silhouettes of the objects, in which each silhouette has a known pose. The known pose of the best matching silhouette is selected as the pose of the unoccluded object.
Type: Application
Filed: November 7, 2007
Publication date: May 7, 2009
Inventors: Amit K. Agrawal, Ramesh Raskar
-
Publication number: 20090110265
Abstract: According to one embodiment, a pattern forming system includes a patterning tool, a multi-axis robot, and a simulation tool that are coupled to a pattern forming tool that is executed on a suitable computing system. The pattern forming tool receives a contour measurement from the patterning tool and transmits the measured contour to the simulation tool to model the electrical characteristics of a conductive pattern or a dielectric pattern on the measured contour. Upon receipt of the modeled characteristics, the pattern forming system may adjust one or more dimensions of the pattern according to the model, and subsequently create, using the patterning tool, the corrected pattern on the surface.
Type: Application
Filed: October 10, 2008
Publication date: April 30, 2009
Applicant: RAYTHEON COMPANY
Inventors: Sankerlingam Rajendran, Billy D. Ables
-
Publication number: 20090088897
Abstract: In one embodiment of the invention, a method for a robotic system is disclosed to track one or more robotic instruments. The method includes generating kinematics information for the robotic instrument within a field of view of a camera; capturing image information in the field of view of the camera; and adaptively fusing the kinematics information and the image information together to determine pose information of the robotic instrument. Additionally disclosed is a robotic medical system with a tool tracking sub-system. The tool tracking sub-system receives raw kinematics information and video image information of the robotic instrument to generate corrected kinematics information for the robotic instrument by adaptively fusing the raw kinematics information and the video image information together.
Type: Application
Filed: September 30, 2007
Publication date: April 2, 2009
Applicant: Intuitive Surgical, Inc.
Inventors: Wenyi Zhao, Christopher J. Hasser, William C. Nowlin, Brian D. Hoffman
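The "adaptive fusion" above is not specified in the abstract; a toy stand-in is a confidence-weighted blend of the raw kinematics pose and the image-derived pose. The `fuse_pose` helper and its scalar confidence weight are assumptions for illustration only:

```python
def fuse_pose(kin_pose, img_pose, img_confidence):
    """Blend a raw kinematics pose estimate with an image-based pose
    estimate, weighting the image measurement by a confidence in
    [0, 1]: 0 trusts kinematics entirely, 1 trusts the image entirely."""
    w = max(0.0, min(1.0, img_confidence))
    return [k + w * (i - k) for k, i in zip(kin_pose, img_pose)]
```

A real system would vary the weight per frame, e.g. lowering it when the instrument is occluded in the video.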
-
Publication number: 20090060318
Abstract: In a legged mobile robot having an imaging device (such as a CCD camera) for taking an image utilizing incident light from the external world in which a human being to be imaged is present, a brightness reduction operation is executed to reduce the brightness of a high-brightness imaging region produced by high-brightness incident light, when such a region is present in the image taken by the imaging device. With this, when a high-brightness imaging region is present owing to high-brightness incident light from the sun or the like, the legged mobile robot can reduce the brightness to image a human being or other object with suitable brightness.
Type: Application
Filed: August 25, 2008
Publication date: March 5, 2009
Inventors: Takamichi Shimada, Taro Yokoyama
-
Publication number: 20090043422
Abstract: A method and apparatus for taking a picture are provided, in which a mobile apparatus detects its current position, moves from the current position to a predetermined position to take a picture, receives information about an ambient image around the mobile apparatus through an image sensor after the movement, analyzes characteristics of the received image information, compares the analyzed characteristics with a predetermined picture-taking condition, controls the mobile apparatus so that the characteristics of the image information satisfy the predetermined picture-taking condition if they do not, and takes a picture if the characteristics of the image information satisfy the predetermined picture-taking condition.
Type: Application
Filed: August 6, 2008
Publication date: February 12, 2009
Inventors: Ji-Hyo Lee, Hyun-Soo Kim
-
Publication number: 20090018699Abstract: Regarding predetermined positioning criteria (M1, M2), ((G1, G2), (N1, or N2), (K1, K2)), there is provided image processing means (40B) for obtaining by image processing, measured values (CD1, CD2) ((GD1, GD2), (ND1, or ND2), (KD1, KD2)) and reference values (CR1, CR2) ((GR1, GR2), (NR1, or NR2), (KR1, KR2)), and for moving a work (W) in a manner that the measured values (CD1, CD2) ((GD1, GD2), (ND1, or ND2), (KD1, KD2)) and the reference values (CR1, CR2) ((GR1, GR2), (NR1, or NR2), (KR1, KR2)) coincide with each other, thereby positioning the work (W) at a predetermined position.Type: ApplicationFiled: July 10, 2008Publication date: January 15, 2009Applicant: AMADA CO., LTD.Inventors: Ichio AKAMI, Koichi Ishibashi, Teruyuki Kubota, Tetsuaki Kato, Jun Sato, Tatsuya Takahashi
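Stripped of its reference labels, the abstract above describes a measure-compare-move loop: the work is moved until image-measured values (the CD values) coincide with reference values (the CR values). A minimal sketch under assumed details follows; the proportional correction and tolerance are illustrative, not the patented positioning method.

```python
# Sketch: move the work until the image-measured value matches the reference
# value within a tolerance, as in the abstract's positioning criterion.
# The proportional (0.5 * error) correction is an assumed control rule.

def position_work(measure, move, reference, tol=0.01, max_steps=100):
    """Iteratively move the work until measurement matches the reference."""
    for _ in range(max_steps):
        error = reference - measure()
        if abs(error) < tol:
            return True                  # measured and reference coincide
        move(error)                      # move the work toward the reference
    return False

# Toy 1-D stage: position starts at 0.0, reference value is 5.0.
state = {"x": 0.0}
ok = position_work(lambda: state["x"],
                   lambda e: state.__setitem__("x", state["x"] + 0.5 * e),
                   reference=5.0)
```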
-
Patent number: 7474939Abstract: An object taking-out apparatus for taking out objects randomly stacked in a container according to the condition of how each object is placed, which includes a robot hand having telescopic means and a coupling member, one end of each being connected to a robot arm end, and holding means coupled to their other ends. The telescopic means expands and contracts to cause the holding means to assume either a first orientation, where a small angle is formed, or a second orientation, where a large angle is formed, between a holding direction axis of the holding means and a rotary axis of the robot arm end, thereby taking out objects without interference between the robot and the container.Type: GrantFiled: January 30, 2004Date of Patent: January 6, 2009Assignee: Fanuc LtdInventors: Masaru Oda, Toshinari Tamura
-
Patent number: 7474781Abstract: An image based bar-code reading and robotic registration apparatus and method for use in automated tape library systems is disclosed. An imager is positioned on a picker assembly with its own illumination source and appropriate optics to filter out ambient light. The imager connects to a microprocessor in its immediate vicinity. All image acquisition and processing are done by these components. To ensure operation independent of illumination variations, the image processing code developed for this invention automatically adapts to dynamic lighting situations. The tape cartridge cells are used as fiducials to allow tape cartridge registration without fiducial markings. The use of the tape cartridge cells as fiducials maximizes storage capability.Type: GrantFiled: September 20, 2001Date of Patent: January 6, 2009Assignee: International Business Machines CorporationInventor: Lyle Joseph Chamberlain
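The abstract above says the image processing code "automatically adapts to dynamic lighting situations". One common way to achieve that, sketched below as an illustrative assumption rather than the patented code, is adaptive thresholding: binarize each bar-code scanline against its own mean brightness, so the absolute lighting level cancels out.

```python
# Sketch: adaptive (mean-based) thresholding of a bar-code scanline.
# Because the threshold is derived from the scanline itself, the same bar
# pattern binarizes identically under bright and dim illumination.

def binarize_scanline(pixels):
    """Threshold a scanline at its own mean, so lighting level cancels out."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

bright = [200, 240, 200, 240]     # same bar pattern, bright lighting
dim = [50, 90, 50, 90]            # same bar pattern, dim lighting
same = binarize_scanline(bright) == binarize_scanline(dim)   # True
```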
-
Publication number: 20080310705Abstract: A robot capable of performing appropriate movement control while reducing the arithmetic processing needed to recognize the shape of a floor. During movement, the robot sets predetermined landing positions for the steps of its legs on a present assumed floor, which is the floor represented by the floor shape information used for the current motion control of the robot. An image projection area is set in the vicinity of each predetermined landing position and is projected onto each image captured by cameras mounted on the robot. Shape parameters representing the shape of each actual floor partial area, which forms part of the actual floor and whose image is captured in the corresponding partial image area, are estimated based on the image of the partial image area generated by projecting the set image projection area onto the images captured by the cameras.Type: ApplicationFiled: March 27, 2008Publication date: December 18, 2008Applicants: HONDA MOTOR CO., LTD., TOKYO INSTITUTE OF TECHNOLOGYInventors: Minami Asatani, Masatoshi Okutomi, Shigeki Sugimoto
-
Publication number: 20080310706Abstract: A method for performing a convergence calculation using a projective transformation between images captured by two cameras to observe a flat part of an object in the images, wherein the computational load is reduced while securing the convergence property of the convergence calculation. Initial values (n0(i), d0(i)) are set to values satisfying a limiting condition that should be satisfied by the initial values (n0(i), d0(i)), where the limiting condition is that the plane defined by the initial values (n0(i), d0(i)) of given types of parameters (n(i), d(i)) of a projective transformation matrix in the convergence calculation is inclined with respect to the actual plane including the flat part of the object to be observed.Type: ApplicationFiled: March 27, 2008Publication date: December 18, 2008Applicants: HONDA MOTOR CO., LTD., TOKYO INSTITUTE OF TECHNOLOGYInventors: Minami Asatani, Masatoshi Okutomi, Shigeki Sugimoto
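For context on the parameters (n(i), d(i)) above: for two calibrated views of a plane with unit normal n at distance d, the inter-image projective transformation can be written H = R + t·nᵀ/d, where (R, t) is the relative camera motion. The sketch below evaluates this standard relation; the abstract's convergence scheme itself is not reproduced, and the numbers are illustrative assumptions.

```python
# Sketch: plane-induced homography H = R + (t n^T) / d between two calibrated
# views, the standard role played by plane parameters (n, d) such as the
# abstract's (n(i), d(i)). All values are illustrative.

def plane_homography(R, t, n, d):
    """H = R + (t n^T) / d, with R a 3x3 nested list, t and n length-3."""
    return [[R[i][j] + t[i] * n[j] / d for j in range(3)] for i in range(3)]

# Identity rotation, unit translation along x, fronto-parallel plane at d = 2.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H = plane_homography(I, t=[1.0, 0.0, 0.0], n=[0.0, 0.0, 1.0], d=2.0)
# H maps homogeneous (x, y, 1) to (x + 0.5, y, 1): the image shift that this
# plane induces between the two views.
```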
-
Patent number: 7462814Abstract: Methods and systems for evaluating and controlling a lithography process are provided. For example, a method for reducing within wafer variation of a critical metric of a lithography process may include measuring at least one property of a resist disposed upon a wafer during the lithography process. A critical metric of a lithography process may include, but may not be limited to, a critical dimension of a feature formed during the lithography process. The method may also include altering at least one parameter of a process module configured to perform a step of the lithography process to reduce within wafer variation of the critical metric. The parameter of the process module may be altered in response to at least the one measured property of the resist.Type: GrantFiled: February 1, 2006Date of Patent: December 9, 2008Assignee: KLA-Tencor Technologies Corp.Inventors: Suresh Lakkapragada, Kyle A. Brown, Matt Hankinson, Ady Levy
-
Patent number: 7460687Abstract: A pointing position detection device is provided which, along with enabling a human being to perform a pointing operation in a natural manner, can perform detection with high accuracy. The device detects, from images photographed by cameras, the presence of a human being and the position at which the human being is pointing, and includes: a section which, based upon the image, detects a head position of the human being, including at least distance information; a section which, based upon the image, detects a hand position of the human being, including at least distance information; a section which, based upon the hand position, calculates a hand tip position and a main axis of the hand; and a section which detects a pointing direction based upon the head position, the hand tip position, and the main axis, wherein the pointing position is detected based upon the pointing direction.Type: GrantFiled: July 10, 2003Date of Patent: December 2, 2008Assignee: Honda Motor Co., Ltd.Inventor: Taro Yokoyama
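A common simple model for turning a head position and hand-tip position into a pointed-at position, sketched below, is to cast the ray from the head through the hand tip and intersect it with a plane such as the floor. This is an assumed geometric model for illustration; the patent combines the head position, hand tip, and the hand's main axis in its own way.

```python
# Sketch: intersect the head -> hand-tip ray with the floor plane z = 0 to
# recover a pointed-at position. Positions are (x, y, z) in meters; the
# head-through-hand-tip ray model is an illustrative assumption.

def pointing_target(head, hand_tip):
    """Intersect the head->hand-tip ray with the floor plane z = 0."""
    hx, hy, hz = head
    tx, ty, tz = hand_tip
    dx, dy, dz = tx - hx, ty - hy, tz - hz
    if dz >= 0:
        return None              # ray does not descend toward the floor
    s = -hz / dz                 # ray parameter where z reaches 0
    return (hx + s * dx, hy + s * dy, 0.0)

# Head at 2.0 m height, hand tip 0.5 m forward and 0.5 m lower:
# the ray meets the floor 2.0 m ahead of the person.
target = pointing_target((0.0, 0.0, 2.0), (0.5, 0.0, 1.5))
```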
-
Patent number: 7456977Abstract: A wireless substrate-like sensor is provided to facilitate alignment and calibration of semiconductor processing systems. The wireless substrate-like sensor includes an optical image acquisition system that acquires one or more images of targets placed within the semiconductor processing system. Analysis of images of the targets obtained by the wireless substrate-like sensor provides position and/or orientation information in at least three degrees of freedom. An additional target is affixed to a known location within the semiconductor processing system such that imaging the reference position with the wireless substrate-like sensor allows the measurement and compensation for pick-up errors.Type: GrantFiled: March 15, 2006Date of Patent: November 25, 2008Assignee: CyberOptics Semiconductor, Inc.Inventors: Craig C. Ramsey, Jeffrey K. Lassahn, Greg Huntzinger, DelRae H. Gardner
-
Publication number: 20080273791Abstract: An apparatus, method, and medium for dividing regions by using feature points and a mobile robot cleaner using the same are provided. A method includes forming a grid map by using a plurality of grid points that are obtained by detecting distances of a mobile robot from obstacles; extracting feature points from the grid map; extracting candidate pairs of feature points, which are in the range of a region division element, from the feature points; extracting a final pair of feature points, which satisfies the requirements of the region division element, from the candidate pairs of feature points; forming a critical line by connecting the final pair of feature points; and forming a final region in accordance with the size relationship between regions formed of a closed curve which connects the critical line and the grid map.Type: ApplicationFiled: July 5, 2007Publication date: November 6, 2008Applicant: SAMSUNG ELECTRONICS CO., LTD.Inventors: Su-jinn Lee, Hyeon Myeong, Yong-beom Lee, Seok-won Bang
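The first step the abstract lists, forming a grid map from robot-to-obstacle distances, can be sketched as marking the grid cell each range reading lands in. Cell size, grid extent, robot pose, and the (angle, distance) reading format are illustrative assumptions; the feature-point extraction and region-division steps are not reproduced.

```python
import math

# Sketch: build an occupancy grid by marking the cell hit by each
# (bearing_rad, distance_m) obstacle reading taken from the robot's position.
# Cell size (0.5 m) and grid extent (8 x 8 cells) are assumed values.

def grid_map(robot_xy, readings, cell=0.5, size=8):
    """Mark grid cells hit by (angle_rad, distance_m) obstacle readings."""
    occupied = [[0] * size for _ in range(size)]
    rx, ry = robot_xy
    for angle, dist in readings:
        gx = int((rx + dist * math.cos(angle)) / cell)   # obstacle cell x
        gy = int((ry + dist * math.sin(angle)) / cell)   # obstacle cell y
        if 0 <= gx < size and 0 <= gy < size:
            occupied[gy][gx] = 1
    return occupied

# Robot at (2, 2): one obstacle 1.0 m ahead, one 1.5 m to the left.
m = grid_map((2.0, 2.0), [(0.0, 1.0), (math.pi / 2, 1.5)])
```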
-
Publication number: 20080247637Abstract: Robotic tattoo application and tattoo removal methods and systems are described. This technology involves the use of a robotic system guided by control of a graphics-capable computer in order to perform various types of tattooing, including artistic, recreational, cosmetic, or therapeutic tattooing, as well as tattoo removal.Type: ApplicationFiled: December 27, 2007Publication date: October 9, 2008Applicant: RESTORATION ROBOTICS, INC.Inventor: Philip L. Gildenberg
-
Publication number: 20080240547Abstract: There are provided an apparatus and method for vision processing in a network-based intelligent service robot system, and a system using the same. A robot can move to a target object while avoiding obstacles, without the help of a robot server interfaced with the robot terminal over a network, by extracting and processing three-dimensional distance information of external objects using a stereo camera, a low-price dedicated image processing chip, and an embedded processor. Therefore, the intelligent robot can travel and move using only stereo camera image processing, without other sensors, and can further provide users with various functional services at low expense.Type: ApplicationFiled: December 15, 2005Publication date: October 2, 2008Applicant: Electronics and Telecommunications Research InstituteInventors: Jae Il Cho, Seung Min Choi, Dae Hwan Hwang
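The basic relation behind the three-dimensional distance information a stereo camera extracts is Z = f·B/d: depth equals focal length (in pixels) times baseline divided by the left/right pixel disparity. The sketch below evaluates this standard formula with assumed illustrative numbers; it is not the patented processing pipeline.

```python
# Sketch: stereo disparity to depth, Z = f * B / d, the standard relation
# underlying stereo-camera distance extraction. f_px is the focal length in
# pixels, baseline_m the camera separation, disparity_px the pixel shift of
# the same point between left and right images. Numbers are illustrative.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a point from its left/right pixel disparity."""
    if disparity_px <= 0:
        return float("inf")      # zero disparity: point at infinity
    return f_px * baseline_m / disparity_px

# 700 px focal length, 12.5 cm baseline, 35 px disparity -> 2.5 m away.
z = depth_from_disparity(700.0, 0.125, 35.0)
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why stereo range accuracy degrades with distance.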
-
Publication number: 20080232678Abstract: A localization method for a moving robot is disclosed, the method including: capturing a first omni-directional image by the moving robot; confirming at least one node at which a second omni-directional image having a high correlation with the first omni-directional image was captured; and determining that the moving robot is located at a first node when the moving robot reaches the first node, at which the second omni-directional image having the highest correlation with the first omni-directional image was captured, from among the at least one node.Type: ApplicationFiled: January 18, 2008Publication date: September 25, 2008Applicant: Samsung Electronics Co., Ltd.Inventors: Sukjune Yoon, Woosup Han, Seung Ki Min, Kyung Shik Roh
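The node-selection step above, picking the stored node whose omni-directional image correlates best with the current one, can be sketched with normalized cross-correlation over flattened pixel vectors. NCC is an illustrative similarity measure assumed here; the abstract does not specify the actual correlation metric.

```python
import math

# Sketch: choose the stored node image most correlated with the current
# omni-directional image, using normalized cross-correlation (NCC) over
# flattened pixel vectors as an assumed similarity measure.

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel vectors."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_node(current, node_images):
    """Index of the stored node image most correlated with `current`."""
    return max(range(len(node_images)),
               key=lambda i: ncc(current, node_images[i]))

nodes = [[10, 20, 30, 40],    # node 0: rising pattern, close match
         [40, 30, 20, 10],    # node 1: reversed pattern, anti-correlated
         [11, 19, 31, 41]]    # node 2: rising pattern, slightly off
idx = best_node([12, 21, 29, 42], nodes)
```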
-
Publication number: 20080193009Abstract: According to an embodiment of the present invention, a tracking method includes detecting a mobile unit within a space, tracking the detected mobile unit, making a position determination of the mobile unit to be tracked to obtain positional data, and making a movement prediction of the mobile unit, based on a high frequency component of positional data.Type: ApplicationFiled: January 29, 2008Publication date: August 14, 2008Applicant: KABUSHIKI KAISHA TOSHIBAInventor: Takafumi Sonoura
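One way to read "movement prediction based on a high frequency component of positional data", sketched below as an illustrative assumption: differencing a position track is a simple high-pass filter, so the latest first difference (velocity) is a high-frequency component of the positional data, and extrapolating with it gives a short-term prediction.

```python
# Sketch: first-differencing as a high-pass filter on positional data; the
# latest difference (velocity) extrapolates the next 1-D position. This
# reading of the abstract's prediction step is an assumption.

def predict_next(positions):
    """Predict the next 1-D position from the latest first difference."""
    velocity = positions[-1] - positions[-2]   # high-pass (difference) filter
    return positions[-1] + velocity

track = [0.0, 0.4, 1.0, 1.8]       # accelerating mobile unit
nxt = predict_next(track)          # 1.8 + 0.8 = 2.6
```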
-
Patent number: 7410266Abstract: The present invention provides a three-dimensional imaging system for robot vision, which is capable of three-dimensional positioning of objects, object identification, searching and tracking of an object of interest, and compensating the aberration of the system. The three-dimensional imaging system for robot vision comprises one or more camera systems, each of which has at least one variable focal length micromirror array lens, an imaging unit, and an image processing unit. The variable focal length micromirror array lens used in the three-dimensional imaging system for robot vision has unique features including a variable focal length, a variable focal axis, and a variable field of view with a fast response time.Type: GrantFiled: December 28, 2005Date of Patent: August 12, 2008Assignees: Angstrom, Inc., Stereo Display, Inc.Inventors: Cheong Soo Seo, Sang Hyune Baek, Gyoung Il Cho