Robotics Patents (Class 382/153)
-
Patent number: 12266129
Abstract: A memory storing program code that when executed by a processor of a robot effectuates operations, including: detecting, with a sensor of a plurality of sensors disposed on the robot, an object in a line of sight of the sensor; adjusting, with the processor of the robot, a current path of the robot to detour around or avoid the object; generating, with the processor of the robot, a planar representation of a workspace of the robot based on data collected by at least some sensors of the plurality of sensors; and wherein an application of a communication device paired with the robot is configured to display the planar representation.
Type: Grant
Filed: August 17, 2022
Date of Patent: April 1, 2025
Assignee: AI Incorporated
Inventor: Ali Ebrahimi Afrouzi
-
Patent number: 12245826
Abstract: Methods, apparatuses, and systems for automated disease detection using multiple-wavelength imaging are disclosed. The disclosed system uses multiple imaging modalities for assessing a medical condition. Data collected from multiple cameras and imaging modalities is processed to identify common structures. The common structures are used to scale and align images, which are analyzed to detect one or more medical conditions. Each acquired image is assessed, and the resulting probabilities are consolidated. The images can be assessed together by using artificial intelligence and machine learning.
Type: Grant
Filed: May 1, 2023
Date of Patent: March 11, 2025
Assignee: IX Innovation LLC
Inventors: Jeffrey Roh, Justin Esterberg, John Cronin, Seth Cronin, Michael John Baker
-
Patent number: 12245548
Abstract: Provided is a harvester configured to effect a work-performing traveling for harvesting agricultural products in a field while traveling in the field autonomously. This harvester includes a traveling device effecting the work-performing traveling, a machine body supported to the traveling device, a harvesting section supported to a front portion of the machine body and harvesting the products in the field, an imaging device provided at a front portion of the machine body and at a position higher than the harvesting section so as to view down a product present in an un-worked area located forwardly of the harvesting section, a detecting section detecting abnormality in the field from an image imaged by the imaging device, and a controlling section executing an abnormality situation control as a control in accordance with the abnormality detected in the field.
Type: Grant
Filed: April 24, 2020
Date of Patent: March 11, 2025
Assignee: Kubota Corporation
Inventors: Shunsuke Edo, Kenichi Iwami, Shunsuke Miyashita, Takashi Nakabayashi
-
Patent number: 12236592
Abstract: Described herein are means for systematically determining an optimal approach for the computer-aided diagnosis of a pulmonary embolism, in the context of processing medical imaging. According to a particular embodiment, there is a system specially configured for diagnosing a Pulmonary Embolism (PE) within new medical images which form no part of the dataset upon which the AI model was trained.
Type: Grant
Filed: September 14, 2022
Date of Patent: February 25, 2025
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Nahid Ul Islam, Shiv Gehlot, Zongwei Zhou, Jianming Liang
-
Patent number: 12236340
Abstract: A computer system trains a neural network to predict, for each pixel in an input image, the position that a robot's end effector would reach if a grasp ("poke") were attempted at that position. Training data consists of images and end effector positions recorded while a robot attempts grasps in a pick-and-place environment. For an automated grasping policy, the approach is self-supervised, as end effector position labels may be recovered through forward kinematics, without human annotation. Although gathering such physical interaction data is expensive, it is necessary for training and routine operation of state of the art manipulation systems. Therefore, the system comes "for free" while collecting data for other tasks (e.g., grasping, pushing, placing). The system achieves significantly lower root mean squared error than traditional structured light sensors and other self-supervised deep learning methods on difficult, industry-scale jumbled bin datasets.
Type: Grant
Filed: September 14, 2020
Date of Patent: February 25, 2025
Assignee: Osaro
Inventors: Ben Goodrich, Alex Kuefler, William D. Richards, Christopher Correa, Rishi Sharma, Sulabh Kumra
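The self-supervision idea in this abstract, recovering position labels from the robot's own forward kinematics instead of human annotation, can be sketched in toy form. The two-link planar arm, link lengths, and function names below are illustrative assumptions, not Osaro's implementation:

```python
import numpy as np

def fk_2link(theta1, theta2, l1=0.3, l2=0.25):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def make_training_pair(image, joint_angles, poke_pixel):
    """Pair the pixel where a poke was attempted with the end-effector position
    the arm actually reached there. The label comes from kinematics, so no
    human annotation is needed."""
    label = fk_2link(*joint_angles)
    return {"image": image, "pixel": poke_pixel, "position_label": label}
```

A per-pixel regression network would then be trained on many such pairs gathered during routine grasp attempts.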
-
Patent number: 12179355
Abstract: The purpose of the present invention is to provide a workpiece unloading device that can stabilize cycle time even when work for reversing a workpiece is involved. A workpiece unloading system comprises a workpiece unloading device for unloading a workpiece, and a control device for setting workpiece unloading order.
Type: Grant
Filed: June 3, 2021
Date of Patent: December 31, 2024
Assignee: FANUC CORPORATION
Inventor: Toshiyuki Ando
-
Patent number: 12175692
Abstract: A localization method includes scanning a surrounding space by using laser outputted from a reference region; processing spatial information about the surrounding space based on a reflection signal of the laser; extracting feature vectors to which the spatial information has been reflected by using a deep learning network which uses space vectors including the spatial information as input data; and comparing the feature vectors with preset reference map data, and thus estimating location information about the reference region.
Type: Grant
Filed: July 25, 2022
Date of Patent: December 24, 2024
Assignee: NAVER LABS CORPORATION
Inventors: Min Young Chang, Su Yong Yeon, Soo Hyun Ryu, Dong Hwan Lee
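The final step of the abstract, comparing an extracted feature vector against reference map data to estimate a location, is in its simplest form a nearest-neighbor lookup. This is a minimal sketch under that assumption; the descriptor dimensionality and function names are illustrative, and the patented method may use a different comparison:

```python
import numpy as np

def localize(query_vec, map_positions, map_descriptors):
    """Match a scan's feature vector against stored reference-map descriptors;
    the best match's stored position is the location estimate."""
    dists = np.linalg.norm(map_descriptors - query_vec, axis=1)
    best = int(np.argmin(dists))
    return map_positions[best], dists[best]
```

In practice the descriptors would come from the deep network described in the abstract, and the lookup would use an approximate-nearest-neighbor index rather than a linear scan.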
-
Patent number: 12175742
Abstract: A method for detecting boxes includes receiving a plurality of image frame pairs for an area of interest including at least one target box. Each image frame pair includes a monocular image frame and a respective depth image frame. For each image frame pair, the method includes determining corners for a rectangle associated with the at least one target box within the respective monocular image frame. Based on the determined corners, the method includes the following: performing edge detection and determining faces within the respective monocular image frame; and extracting planes corresponding to the at least one target box from the respective depth image frame. The method includes matching the determined faces to the extracted planes and generating a box estimation based on the determined corners, the performed edge detection, and the matched faces of the at least one target box.
Type: Grant
Filed: October 11, 2023
Date of Patent: December 24, 2024
Assignee: Boston Dynamics, Inc.
Inventors: Alex Perkins, Charles DuHadway, Peter Anderson-Sprecher
-
Patent number: 12162704
Abstract: Disclosed is a robotic system including a telescoping transport conveyor with an automated unloader attached. The automated unloader includes a loading conveyor with at least two articulated robots attached to the first loading conveyor end. The automated unloader includes a control system with logic controlling the unloading of material to be handled from a transport container onto the telescoping transport conveyor, and thence to an automated palletizing system, where materials are loaded on pallets and supported by a pallet sleeve during storage or during transport to a stabilization system where pallet loads are stretch wrapped. Further disclosed are methods for controlling and operating the same to fully automate product unloading, handling, and distribution throughout a material handling facility.
Type: Grant
Filed: April 29, 2024
Date of Patent: December 10, 2024
Assignee: Lab0, Inc.
Inventor: David Bruce McCalib, Jr.
-
Patent number: 12128567
Abstract: Using machine learning to recognize variant objects is disclosed, including: identifying an object as a variant of an object type by inputting sensed data associated with the object into a modified machine learning model corresponding to the variant of the object type, wherein the modified machine learning model corresponding to the variant of the object type is generated using a machine learning model corresponding to the object type; and generating a control signal to provide to a sorting device that is configured to perform a sorting operation on the object, wherein the sorting operation on the object is determined based at least in part on the variant of the object type associated with the object.
Type: Grant
Filed: December 22, 2021
Date of Patent: October 29, 2024
Assignee: AMP Robotics Corporation
Inventors: Matanya B. Horowitz, Joseph M. Castagneri, Joshua M. Browning, Carson C. Potter, Paul Dawes
-
Patent number: 12097625
Abstract: Systems and methods for identifying a robot end effector in a processing environment may utilize one or more sensors for digitally recording visual information and providing that information to an industrial workflow. The sensor(s) may be positioned to record at least one image of the robot including the end effector. A processor may determine the identity of the end effector from the recorded image(s) and a library or database of stored digital models.
Type: Grant
Filed: March 19, 2021
Date of Patent: September 24, 2024
Assignee: VEO ROBOTICS, INC.
Inventors: Clara Vu, Scott Denenberg, Ilya A. Kriveshko, Paul Jakob Schroeder
-
Patent number: 12093014
Abstract: A position calibration system and method are disclosed, in which a control unit is provided to control a positioner sensing module to scan a circular positioner provided on a positioning substrate in a first direction and a second direction so as to acquire midpoints of two scanned line segments and acquire an intersection of lines extending from the two midpoints in a direction perpendicular to the first and the second directions as a calibration reference point, which corresponds to a centroid (a center) of the circular positioner. The calibration reference point functions as a reference point for positioning the positioning substrate with respect to the positioner sensing module and is stored in a memory unit. The calibration reference point can be used as a positioning point during installation of a machine and can also be used for calibration of a position of the machine.
Type: Grant
Filed: January 21, 2022
Date of Patent: September 17, 2024
Assignee: CHROMA ATE INC.
Inventors: Chin-Yi Ouyang, Wei-Cheng Kuo, Chien-Ming Chen, Xin-Yi Wu
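The geometry the abstract relies on is classical: the perpendicular through a chord's midpoint passes through the circle's center, so midpoints from two perpendicular scan directions locate the center. A toy sketch of that reasoning (axis-aligned scans assumed for simplicity; function names are illustrative, not the patented implementation):

```python
import numpy as np

def chord_endpoints(center, radius, axis, offset):
    """Endpoints of the chord cut by a scan line across the circle.
    axis=0: scan line x = offset (vertical chord);
    axis=1: scan line y = offset (horizontal chord)."""
    cx, cy = center
    d = offset - (cx if axis == 0 else cy)
    half = np.sqrt(radius**2 - d**2)  # half-chord length
    if axis == 0:
        return (offset, cy - half), (offset, cy + half)
    return (cx - half, offset), (cx + half, offset)

def center_from_two_chords(center, radius, x_line, y_line):
    """Recover the circle center from the midpoints of one vertical and one
    horizontal chord: each midpoint's perpendicular passes through the center,
    and the two perpendiculars intersect exactly there."""
    p1, p2 = chord_endpoints(center, radius, 0, x_line)   # vertical chord
    q1, q2 = chord_endpoints(center, radius, 1, y_line)   # horizontal chord
    mid_v = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)    # midpoint gives cy
    mid_h = ((q1[0] + q2[0]) / 2, (q1[1] + q2[1]) / 2)    # midpoint gives cx
    return (mid_h[0], mid_v[1])
```

Note that neither chord needs to pass through the center; any two non-parallel scan lines that intersect the circle suffice.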
-
Patent number: 12089981
Abstract: The invention relates to an interventional system comprising an introduction element (4) like a catheter for being introduced into an object (9), for instance, a person. A moving unit (2) like a robot moves the introduction element within the object, wherein a tracking image generating unit (3) generates tracking images of the introduction element within the object and wherein a controller (8) controls the tracking image generating unit depending on movement parameters of the moving unit, which are indicative of the movement, such that the tracking images show the introduction element. This control can be performed very accurately based on the known real physical movement of the introduction element such that it is not necessary to, for instance, irradiate a relatively large area of the object for ensuring that the introduction element is really captured by the tracking images, thereby allowing for a reduced radiation dose applied to the object.
Type: Grant
Filed: January 9, 2023
Date of Patent: September 17, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventor: Erik Martinus Hubertus Petrus Van Dijk
-
Patent number: 12086987
Abstract: Provided are a device for managing bedsores and an operating method of the same. The operating method includes acquiring image data of a plurality of existing bedsores, acquiring existing bedsore-related information corresponding to the image data of the plurality of existing bedsores, training a convolutional neural network (CNN) with relationships between the image data of the plurality of existing bedsores and the existing bedsore-related information to acquire a machine learning model, acquiring bedsore image data of a current patient, applying the machine learning model to the bedsore image data of the current patient to determine information on a bedsore or bedsore treatment information of the current patient, and outputting the information on the bedsore or the bedsore treatment information of the current patient.
Type: Grant
Filed: March 14, 2022
Date of Patent: September 10, 2024
Assignee: FINEHEALTHCARE
Inventor: Hyun Kyung Shin
-
Patent number: 12076869
Abstract: A method and system for calculating a minimum distance from a robot to dynamic objects in a robot workspace. The method uses images from one or more three-dimensional cameras, where edges of objects are detected in each image, and the robot and the background are subtracted from the resultant image, leaving only object edge pixels. Depth values are then overlaid on the object edge pixels, and distance calculations are performed only between the edge pixels and control points on the robot arms. Two or more cameras may be used to resolve object occlusion, where each camera's minimum distance is computed independently and the maximum of the cameras' minimum distances is used as the actual result. The use of multiple cameras does not significantly increase computational load, and does not require calibration of the cameras with respect to each other.
Type: Grant
Filed: November 29, 2021
Date of Patent: September 3, 2024
Assignee: FANUC CORPORATION
Inventors: Chiara Landi, Hsien-Chung Lin, Tetsuaki Kato
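The distance computation and the max-of-minima fusion across cameras described above can be sketched as follows. This is a minimal illustration, assuming edge pixels have already been back-projected to 3D points; the function names are not from the patent:

```python
import numpy as np

def min_distance_single_camera(edge_points_3d, robot_control_points):
    """Minimum distance between detected object edge points (edge pixels with
    depth overlaid, back-projected to 3D) and control points on the robot arms."""
    # Pairwise distance matrix of shape (n_edge_points, n_control_points).
    d = np.linalg.norm(edge_points_3d[:, None, :]
                       - robot_control_points[None, :, :], axis=2)
    return d.min()

def fused_min_distance(per_camera_edge_points, robot_control_points):
    """Each camera's minimum is computed independently; taking the maximum of
    the per-camera minima discounts distances shortened by occlusion, since an
    occluding object appears artificially close in only some views."""
    return max(min_distance_single_camera(pts, robot_control_points)
               for pts in per_camera_edge_points)
```

Because each camera's result is computed in its own frame and only scalar distances are fused, no cross-camera calibration enters the computation, which matches the abstract's claim.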
-
Patent number: 12025970
Abstract: The present invention relates to a sampling-based optimal tree planning method, a recording medium storing a program for executing the same, and a computer program stored in the computer-readable recording medium for executing the same. More particularly, the method enables real-time path planning by reducing the number of nodes that require calculation: when planning a path in a tree structure, state values corresponding to input values that a mobile robot cannot select are excluded from the calculation, and some state values are sampled from the set of state values feasible for the robot.
Type: Grant
Filed: February 27, 2020
Date of Patent: July 2, 2024
Assignee: TWINNY CO., LTD.
Inventors: Hong Seok Cheon, Tae Hyoung Kim, Han Min Jo, Jai Hoon Lee
-
Patent number: 11950981
Abstract: Calibrating an intraoral scanner includes obtaining reference data of a reference three-dimensional (3D) representation of a calibration object and obtaining, based on the intraoral scanner being used by a user to scan the 3D calibration object, and from one or more device-to-real-world coordinate transformations of two-dimensional (2D) images of the 3D calibration object, measurement data. Calibrating the intraoral scanner further includes aligning the measurement data to the reference data to obtain alignment data and updating, based on the alignment data, said one or more transformations.
Type: Grant
Filed: April 19, 2023
Date of Patent: April 9, 2024
Assignee: Align Technology, Inc.
Inventors: Tal Verker, Adi Levin, Ofer Saphier, Maayan Moshe
-
Patent number: 11922636
Abstract: An electronic device places an augmented reality object in an image of a real environment based on a pose of the electronic device and based on image segmentation. The electronic device includes a camera that captures images of the real environment and sensors, such as an inertial measurement unit (IMU), that capture a pose of the electronic device. The electronic device selects an augmented reality (AR) object from a memory, segments a captured image of the real environment into foreground pixels and background pixels, and composites an image for display wherein the AR object is placed between the foreground pixels and the background pixels. As the pose of the electronic device changes, the electronic device maintains the relative position of the AR object with respect to the real environment in images for display.
Type: Grant
Filed: October 9, 2019
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventors: David Bond, Mark Dochtermann
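Placing the AR object "between" foreground and background pixels amounts to layered compositing: background first, then the AR object, then the segmented real foreground on top. A minimal sketch under that reading (mask-based layering on single-channel images; not Google's implementation):

```python
import numpy as np

def composite(background, ar_layer, foreground, fg_mask, ar_mask):
    """Layer order: background, then AR object, then real foreground pixels,
    so the AR object appears behind segmented foreground content."""
    out = background.copy()
    out[ar_mask] = ar_layer[ar_mask]    # AR object over the background
    out[fg_mask] = foreground[fg_mask]  # real foreground occludes the AR object
    return out
```

A full renderer would operate on RGB images with soft (alpha) masks, but the occlusion ordering is the same.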
-
Patent number: 11918299
Abstract: Methods, systems, and computer-readable media for tracking locations of one or more surgical instruments are disclosed. The method includes detecting a plurality of markers disposed on a distal end of a first surgical instrument within a field of view of a camera, calculating a position of the first surgical instrument based on a location of the plurality of markers within the field of view of the camera, and determining the position of the first surgical instrument in relation to a second surgical instrument.
Type: Grant
Filed: May 7, 2018
Date of Patent: March 5, 2024
Assignee: COVIDIEN LP
Inventor: William Peine
-
Patent number: 11879984
Abstract: A method and system to determine the position of a moveable platform relative to an object is disclosed. The method can include storing one or more synthetic models each trained by one of the one or more synthetic model datasets corresponding to one or more objects in a database; capturing an image of the object by one or more sensors associated with the moveable platform; identifying the object by comparing the captured image of the object to the one or more synthetic model datasets; generating a first model output using a first synthetic model of the one or more synthetic models, the first model output including a first relative coordinate position and a first spatial orientation of the moveable platform; and generating a platform coordinate output and a platform spatial orientation output of the moveable platform at the first position based on the first model output.
Type: Grant
Filed: May 21, 2021
Date of Patent: January 23, 2024
Assignee: BOOZ ALLEN HAMILTON INC.
Inventor: James J. Ter Beest
-
Patent number: 11858148
Abstract: A robot and a method for controlling the robot are provided. The robot includes: at least one motor provided in the robot; a camera configured to capture an image of a door; and a processor configured to determine, on the basis of at least one of depth information and a feature point identified from the image, a target position not overlapping with a moving area of the door, and control the at least one motor such that predetermined operations are performed with respect to the target position. The feature point includes at least one of a handle and a hinge of the door.
Type: Grant
Filed: June 24, 2020
Date of Patent: January 2, 2024
Assignee: LG ELECTRONICS INC.
Inventor: Dongeun Lee
-
Patent number: 11843814
Abstract: Signals of an immersive multimedia item are jointly considered for optimizing the quality of experience for the immersive multimedia item. During encoding, portions of available bitrate are allocated to the signals (e.g., a video signal and an audio signal) according to the overall contribution of those signals to the immersive experience for the immersive multimedia item. For example, in the spatial dimension, multimedia signals are processed to determine spatial regions of the immersive multimedia item to render using greater bitrate allocations, such as based on locations of audio content of interest, video content of interest, or both. In another example, in the temporal dimension, multimedia signals are processed in time intervals to adjust allocations of bitrate between the signals based on the relative importance of such signals during those time intervals. Other techniques for bitrate optimizations for immersive multimedia streaming are also described herein.
Type: Grant
Filed: August 31, 2021
Date of Patent: December 12, 2023
Assignee: GOOGLE LLC
Inventors: Neil Birkbeck, Balineedu Adsumilli, Damien Kelly
-
Patent number: 11833698
Abstract: A vision system includes a robot body and a robot operation manipulator that receives inputs from an operator to manipulate the robot body. The vision system also includes left-eye and right-eye cameras, and a display that displays parallax images for an operator. The vision system further includes an area operation manipulator that receives inputs by the operator to specify a target area to be seen three-dimensionally through the parallax images displayed on the display. The target area is located in an absolute space and is included in a portion of a field of view common between the left-eye and right-eye cameras. The vision system further includes a first controller that controls operation of the robot body, and a second controller that extracts and displays, as parallax images, images corresponding to the target area from images captured by the left-eye and right-eye cameras, respectively.
Type: Grant
Filed: August 30, 2019
Date of Patent: December 5, 2023
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Masayuki Kamon, Hirokazu Sugiyama
-
Patent number: 11803189
Abstract: Provided is a control apparatus including: an assessment unit configured to assess whether a relevant element is represented by environment information acquired from a sensor or not; and an environment information setting unit configured to, in a case where the assessment unit has assessed that the relevant element is represented by the environment information, switch the environment information to acquired-in-advance environment information in which the relevant element is not included.
Type: Grant
Filed: January 18, 2019
Date of Patent: October 31, 2023
Assignee: SONY CORPORATION
Inventor: Yudai Yuguchi
-
Patent number: 11801604
Abstract: The present invention relates to a robot and method for estimating an orientation on the basis of a vanishing point in a low-luminance image. The robot for estimating an orientation on the basis of a vanishing point in a low-luminance image according to an embodiment of the present invention includes a camera unit configured to capture an image of at least one of a forward area and an upward area of the robot, and an image processor configured to extract line segments from a first image captured by the camera unit by applying histogram equalization and a rolling guidance filter to the first image, calculate a vanishing point on the basis of the line segments, and estimate a global angle of the robot corresponding to the vanishing point.
Type: Grant
Filed: May 8, 2019
Date of Patent: October 31, 2023
Assignee: LG ELECTRONICS INC.
Inventors: Dong-Hoon Yi, Jaewon Chang
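The "calculate a vanishing point on the basis of the line segments" step can be sketched with standard projective geometry: each segment defines a line in homogeneous coordinates (the cross product of its endpoints), and the point closest to all lines in a least-squares sense is the smallest right singular vector of the stacked line matrix. This is a common textbook formulation, offered as an assumption about how such a step might work, not as LG's patented method:

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of line segments.
    Each segment ((x1, y1), (x2, y2)) yields a homogeneous line l = p1 x p2;
    the vanishing point v minimizes |L v| over unit vectors, i.e. the smallest
    right singular vector of the stacked line matrix L."""
    lines = []
    for (x1, y1), (x2, y2) in segments:
        lines.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
    L = np.array(lines)
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]  # dehomogenize (assumes a finite vanishing point)
```

With noisy detections one would normalize each line vector and reject outlier segments (e.g. via RANSAC) before the SVD.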
-
Patent number: 11797020
Abstract: An autonomous mobile device (AMD) interacts with a user to provide tasks such as conveniently displaying information on a screen and moving with the user as they move. The AMD determines an area, or bounding box, of a user appearing within images obtained by a camera that is mounted on the AMD. A preferred area, with respect to the images, such as a center of the image, is specified to provide desired framing of images. As images are acquired by the camera, a difference between the bounding box and the preferred area is determined. Based at least in part on this difference, instructions are determined to move one or more of the cameras or the entire AMD to try and reframe the bounding box in subsequent images closer to the preferred area. Other factors, such as the user being backlit, may also be considered in determining the instructions.
Type: Grant
Filed: October 16, 2020
Date of Patent: October 24, 2023
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Wenqing Jiang, Xin Yang
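The difference-driven reframing described above reduces, in its simplest form, to proportional control on the offset between the bounding-box center and the preferred area. A minimal sketch (the gain, sign conventions, and function name are illustrative assumptions, not Amazon's implementation):

```python
def framing_command(bbox, image_size, preferred_center=None, gain=0.1):
    """Proportional pan/tilt step to re-center a detected user in the frame.
    bbox = (x_min, y_min, x_max, y_max); positive pan turns toward image-right
    and positive tilt toward image-down."""
    w, h = image_size
    if preferred_center is None:
        preferred_center = (w / 2, h / 2)
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    return (gain * (cx - preferred_center[0]),
            gain * (cy - preferred_center[1]))
```

If the user sits left of and above the preferred center, both components come out negative, commanding a pan left and tilt up; repeated over subsequent frames this drives the bounding box toward the preferred area.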
-
Patent number: 11780083
Abstract: Methods, apparatus, and computer-readable media for determining and utilizing human corrections to robot actions. In some implementations, in response to determining a human correction of a robot action, a correction instance is generated that includes sensor data, captured by one or more sensors of the robot, that is relevant to the corrected action. The correction instance can further include determined incorrect parameter(s) utilized in performing the robot action and/or correction information that is based on the human correction. The correction instance can be utilized to generate training example(s) for training one or more model(s), such as neural network model(s), corresponding to those used in determining the incorrect parameter(s). In various implementations, the training is based on correction instances from multiple robots. After a revised version of a model is generated, the revised version can thereafter be utilized by one or more of the multiple robots.
Type: Grant
Filed: November 5, 2021
Date of Patent: October 10, 2023
Assignee: GOOGLE LLC
Inventors: Nicolas Hudson, Devesh Yamparala
-
Patent number: 11769422
Abstract: A robot manipulating system includes a game terminal having a game computer, a game controller, and a display configured to display a virtual space, a robot configured to perform a work in a real space based on robot control data, and an information processing device configured to mediate between the game terminal and the robot. The information processing device supplies game data associated with a content of work to the game terminal, acquires game manipulation data including a history of an input of manipulation accepted by the game controller while a game program to which the game data is reflected is executed, converts the game manipulation data into the robot control data based on a given conversion rule, and supplies the robot control data to the robot.
Type: Grant
Filed: August 8, 2019
Date of Patent: September 26, 2023
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Yasuhiko Hashimoto, Masayuki Kamon, Shigetsugu Tanaka, Yoshihiko Maruyama
-
Patent number: 11745353
Abstract: A method includes identifying a target surface in an environment of a robotic device. The method further includes controlling a moveable component of the robotic device to move along a motion path relative to the target surface, wherein the moveable component comprises a light source and a camera. The method additionally includes receiving a plurality of images from the camera when the moveable component is at a plurality of poses along the motion path and when the light source is illuminating the target surface. The method also includes determining bidirectional reflectance distribution function (BRDF) image data, wherein the BRDF image data comprises the plurality of images converted to angular space with respect to the target surface. The method further includes determining, based on the BRDF image data and by applying at least one pre-trained machine learning model, a material property of the target surface.
Type: Grant
Filed: November 30, 2020
Date of Patent: September 5, 2023
Assignee: Google LLC
Inventor: Guy Satat
-
Patent number: 11747825
Abstract: A robot includes a drive system configured to maneuver the robot about an environment and data processing hardware in communication with memory hardware and the drive system. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving image data of the robot maneuvering in the environment and executing at least one waypoint heuristic. The at least one waypoint heuristic is configured to trigger a waypoint placement on a waypoint map. In response to the at least one waypoint heuristic triggering the waypoint placement, the operations include recording a waypoint on the waypoint map where the waypoint is associated with at least one waypoint edge and includes sensor data obtained by the robot. The at least one waypoint edge includes a pose transform expressing how to move between two waypoints.
Type: Grant
Filed: March 7, 2019
Date of Patent: September 5, 2023
Assignee: Boston Dynamics, Inc.
Inventors: Dom Jonak, Marco da Silva, Joel Chestnutt, Matt Klingensmith
-
Patent number: 11738449
Abstract: A server (3) includes a relative position recognition unit (3b) that recognizes a relative position of a robot (2) with respect to a user, and a target position determination unit (3f3) that determines a target position serving as a target of the relative position during guidance, based on the relative position when the user starts to move after the robot (2) starts the guidance.
Type: Grant
Filed: August 28, 2019
Date of Patent: August 29, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventor: Haruomi Higashi
-
Patent number: 11709499
Abstract: A controlling method for an artificial intelligence moving robot according to an aspect of the present disclosure includes: checking nodes within a predetermined reference distance from a node corresponding to a current position; determining whether there is a correlation between the nodes within the reference distance and the node corresponding to the current position; determining whether the nodes within the reference distance are nodes of a previously learned map when there is no correlation; and registering the node corresponding to the current position on the map when the nodes within the reference distance are determined as nodes of the previously learned map, thereby being able to generate a map in which the environment of a traveling section and environmental changes are appropriately reflected.
Type: Grant
Filed: October 22, 2019
Date of Patent: July 25, 2023
Assignee: LG ELECTRONICS INC.
Inventors: Gyuho Eoh, Seungwook Lim, Dongki Noh
-
Patent number: 11685052
Abstract: A method for operating a vision guided robot arm system comprising a robot arm provided with an end effector at a distal end thereof, a display, an image sensor and a controller, the method comprising: receiving from the image sensor an initial image of an area comprising at least one object and displaying the initial image on the display; determining an object of interest amongst the at least one object and identifying the object of interest within the initial image; determining a potential action related to the object of interest and providing a user with an identification of the potential action; receiving a confirmation of the object of interest and the potential action from the user; and automatically moving the robot arm so as to position the end effector of the robot arm at a predefined position relative to the object of interest.
Type: Grant
Filed: September 18, 2019
Date of Patent: June 27, 2023
Assignee: Kinova Inc.
Inventors: Louis-Joseph Caron L'Ecuyer, Jean-Francois Forget, Jonathan Lussier, Sébastien Boisvert
-
Patent number: 11679944
Abstract: An article picking system includes a detection device which detects a position of a plurality of articles which are moved, and a control unit, and the control unit performs work data creation processing which creates work data having position data of each of the articles, work data storage processing which stores the plurality of the created work data, region determination processing which determines determination region on the periphery of watching-target work data, which should be paid attention to, among the stored work data, and order determination processing which determines picking order of the articles by using the watching-target work data and the peripheral work data, which are within the determination region.
Type: Grant
Filed: August 12, 2019
Date of Patent: June 20, 2023
Assignee: FANUC CORPORATION
Inventor: Masafumi Ooba
-
Patent number: 11662741
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine an eccentricity map based on video image data and determine vehicle motion data by processing the eccentricity map and two red, green, blue (RGB) video images with a deep neural network trained to output vehicle motion data in global coordinates. The instructions can further include instructions to operate a vehicle based on the vehicle motion data.
Type: Grant
Filed: June 28, 2019
Date of Patent: May 30, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Bruno Sielly Jales Costa, Gintaras Vincent Puskorius
-
Patent number: 11657592
Abstract: The present disclosure relates to systems and methods for object recognition. The system may obtain an image and a model. The image may include a search region in which the object recognition process is performed. In the object recognition process, for each of one or more sub-regions of the search region, the system may determine a match metric indicating a similarity between the model and the sub-region of the search region. Further, the system may determine an instance of the model among the one or more sub-regions of the search region based on the match metrics.
Type: Grant
Filed: June 24, 2021
Date of Patent: May 23, 2023
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventors: Xinyi Ren, Feng Wang, Haitao Sun, Jianping Xiong
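The abstract above describes sliding a model over sub-regions of a search region and scoring each with a match metric. A minimal sketch of that idea, assuming normalized cross-correlation as the metric (the patent does not specify which metric is used):

```python
import numpy as np

def ncc(model, region):
    """Normalized cross-correlation between a model patch and a sub-region."""
    m = model - model.mean()
    r = region - region.mean()
    denom = np.sqrt((m * m).sum() * (r * r).sum())
    return float((m * r).sum() / denom) if denom else 0.0

def best_instance(search, model):
    """Slide the model over every sub-region of the search region and
    return the top-left corner of the best match plus its score."""
    H, W = search.shape
    h, w = model.shape
    best, best_pos = -1.0, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            s = ncc(model, search[y:y + h, x:x + w])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

A production system would use an FFT-based or pyramid search rather than this exhaustive scan, but the scoring step is the same.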
-
Patent number: 11636382
Abstract: A robotic self-learning visual inspection method includes determining if a fixture on a component is known by searching a database of known fixtures. If the fixture is unknown, a robotic self-programming visually learning process is performed that includes determining one or more features of the fixture and providing information via a controller about the one or more features in the database such that the fixture becomes known. When the fixture is known, a robotic self-programming visual inspection process is performed that includes determining if the one or more features each pass an inspection based on predetermined criteria. A robotic self-programming visual inspection system includes a robot having one or more arms each adapted for attaching one or more instruments and tools. The instruments and tools are adapted for performing visual inspection processes.
Type: Grant
Filed: August 5, 2019
Date of Patent: April 25, 2023
Assignee: Textron Innovations, Inc.
Inventors: Micah James Stuhldreher, Darren Fair
-
Patent number: 11625870
Abstract: A computer-implemented method (1000) of constructing a model of the motion of a mobile device, wherein the method comprises using a sensor of the device to obtain (1002) positional data providing an estimated pose of the mobile device, generating an initial graph (1004) based upon the positional data from the sensor, nodes of which graph provide a series of possible poses of the device, and edges of which graph represent odometry and/or loop closure constraints; processing the graph to estimate (1006) confidence scores for each loop closure by performing pairwise consistency tests between each loop closure and a set of other loop closures; and generating an augmented graph from the initial graph by retaining or deleting (1008) each loop closure based upon the confidence scores.
Type: Grant
Filed: July 31, 2018
Date of Patent: April 11, 2023
Assignee: OXFORD UNIVERSITY INNOVATION LIMITED
Inventors: Linhai Xie, Sen Wang, Andrew Markham, Niki Trigoni
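The core of the abstract above is scoring each loop closure by how consistent it is with the other loop closures, then keeping or dropping it. A simplified sketch of that gating step, assuming the pairwise consistency tests have already been reduced to a symmetric 0/1 (or fractional) agreement matrix; the actual consistency test in the patent operates on pose residuals, which this sketch abstracts away:

```python
import numpy as np

def closure_confidence(consistency, threshold=0.5):
    """Given a symmetric pairwise-consistency matrix (1.0 = two loop
    closures are mutually consistent, 0.0 = contradictory), score each
    closure by its mean agreement with the others and keep only those
    whose confidence clears the threshold."""
    n = consistency.shape[0]
    off_diag = consistency.sum(axis=1) - consistency.diagonal()
    conf = off_diag / (n - 1)
    keep = [i for i in range(n) if conf[i] >= threshold]
    return conf, keep
```

With three mutually consistent closures and one outlier, the outlier's confidence is 0 and it is deleted from the augmented graph.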
-
Patent number: 11607807
Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands provided to cause the end effector to move in conformance with the corresponding end effector action.
Type: Grant
Filed: April 14, 2021
Date of Patent: March 21, 2023
Assignee: X DEVELOPMENT LLC
Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
-
Patent number: 11599825
Abstract: Embodiments of the present disclosure relate to a method for training a trajectory classification model. The method includes: acquiring trajectory data; computing a trajectory feature of the trajectory data based on a temporal feature and a spatial feature of the trajectory data, the trajectory feature comprising at least one of a curvature or a rotation angle; and training the trajectory feature to obtain the trajectory classification model. Embodiments of the present disclosure further provide an apparatus for training a trajectory classification model, an electronic device, and a computer readable medium.
Type: Grant
Filed: December 11, 2019
Date of Patent: March 7, 2023
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventors: Enyang Bai, Kedi Chen, Miao Zhou, Quan Meng, Wei Wang
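One of the trajectory features named above, the rotation angle, can be computed directly from consecutive points. A minimal sketch of that computation (the exact feature definition in the patent is not given, so the signed heading change used here is an assumption):

```python
import math

def rotation_angles(points):
    """Signed heading change at each interior point of a 2-D trajectory,
    wrapped into (-pi, pi]. A straight segment contributes 0; a left
    turn is positive, a right turn negative."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        h1 = math.atan2(y1 - y0, x1 - x0)  # heading of first segment
        h2 = math.atan2(y2 - y1, x2 - x1)  # heading of second segment
        d = h2 - h1
        angles.append(math.atan2(math.sin(d), math.cos(d)))  # wrap angle
    return angles
```

Curvature would follow the same pattern, dividing the heading change by arc length between samples.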
-
Patent number: 11594007
Abstract: Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified, and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
Type: Grant
Filed: October 13, 2021
Date of Patent: February 28, 2023
Assignee: UiPath, Inc.
Inventor: Vaclav Skarda
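The first step the abstract describes, identifying locations on the screen where changes occurred, amounts to diffing consecutive screen frames. A minimal sketch, assuming grayscale frames and a fixed tile grid (tile size and the tile-based approach are illustrative choices, not taken from the patent):

```python
import numpy as np

def changed_regions(prev, curr, tile=8):
    """Compare two grayscale frames tile by tile and return the (row, col)
    pixel coordinates of each tile whose contents changed -- the candidate
    spots for the text recognition / caret detection that follows."""
    diffs = []
    H, W = prev.shape
    for y in range(0, H - tile + 1, tile):
        for x in range(0, W - tile + 1, tile):
            if not np.array_equal(prev[y:y + tile, x:x + tile],
                                  curr[y:y + tile, x:x + tile]):
                diffs.append((y, x))
    return diffs
```

OCR or caret (blinking-cursor) detection would then run only on the returned tiles instead of the full frame.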
-
Patent number: 11574089
Abstract: A vehicle can capture data that can be converted into a synthetic scenario for use in a simulator. Objects can be identified in the data and attributes associated with the objects can be determined. The data can be used to generate a synthetic scenario of a simulated environment. The scenarios can include simulated objects that traverse the simulated environment and perform actions based on the attributes associated with the objects, the captured data, and/or interactions within the simulated environment. In some instances, the simulated objects can be filtered from the scenario based on attributes associated with the simulated objects and can be instantiated and/or destroyed based on triggers within the simulated environment. The scenarios can be used for testing and validating interactions and responses of a vehicle controller within the simulated environment.
Type: Grant
Filed: June 28, 2019
Date of Patent: February 7, 2023
Assignee: Zoox, Inc.
Inventor: Bryan Matthew O'Malley
-
Patent number: 11568100
Abstract: A vehicle can capture data that can be converted into a synthetic scenario for use in a simulator. Objects can be identified in the data and attributes associated with the objects can be determined. The data can be used to generate a synthetic scenario of a simulated environment. The scenarios can include simulated objects that traverse the simulated environment and perform actions based on the attributes associated with the objects, the captured data, and/or interactions within the simulated environment. In some instances, the simulated objects can be filtered from the scenario based on attributes associated with the simulated objects and can be instantiated and/or destroyed based on triggers within the simulated environment. The scenarios can be used for testing and validating interactions and responses of a vehicle controller within the simulated environment.
Type: Grant
Filed: June 28, 2019
Date of Patent: January 31, 2023
Assignee: Zoox, Inc.
Inventor: Bryan Matthew O'Malley
-
Patent number: 11536737
Abstract: A flexible instrument control and data storage/management system and method for representing and processing assay plates having one or more predefined plate locations is disclosed. The system utilizes a graph data structure, layer objects and data objects. The layer objects map the graph data structure to the data objects. The graph data structure can comprise one node for each of the one or more predefined plate locations, wherein the nodes can be hierarchically defined according to a predefined plate location hierarchy. Each node can be given a unique node identifier, a node type and a node association that implements the predefined plate location hierarchy. The layer objects can include an index that maps the node identifiers to the data objects.
Type: Grant
Filed: December 20, 2018
Date of Patent: December 27, 2022
Inventors: Craig P. Lovell, Thomas Lucas Hampton, III
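The graph described above, one node per plate location with a unique identifier, a type, and a parent association forming the hierarchy, can be sketched in a few lines. The plate → row → well hierarchy and the `PlateNode`/`build_plate` names are illustrative assumptions; the patent only requires that some predefined hierarchy exist:

```python
class PlateNode:
    """A node in a plate-location hierarchy: unique id, node type, and a
    parent association that implements the hierarchy."""
    def __init__(self, node_id, node_type, parent=None):
        self.node_id, self.node_type = node_id, node_type
        self.parent, self.children = parent, []
        if parent is not None:
            parent.children.append(self)

def build_plate(rows, cols):
    """Build a plate graph with one node per well and an index mapping
    node identifiers (e.g. 'A1') to nodes, as a layer object might."""
    plate = PlateNode("plate", "plate")
    index = {}
    for r in range(rows):
        row = PlateNode(chr(ord("A") + r), "row", plate)
        for c in range(1, cols + 1):
            well = PlateNode(f"{row.node_id}{c}", "well", row)
            index[well.node_id] = well
    return plate, index
```

A layer object would then map each indexed node identifier to its assay data object.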
-
Patent number: 11530924
Abstract: A method for updating a high definition map according to one embodiment comprises: obtaining a two-dimensional image that captures a target area corresponding to at least a part of an area expressed by a three-dimensional high definition map; generating a three-dimensional local landmark map of the target area from a position of a landmark in the two-dimensional image, based on a position and an orientation of a photographing device which has captured the two-dimensional image; and updating the high definition map with reference to the local landmark map corresponding to the target area of the three-dimensional high definition map.
Type: Grant
Filed: November 15, 2018
Date of Patent: December 20, 2022
Assignee: SK TELECOM CO., LTD.
Inventor: Seongsoo Lee
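Generating a 3-D landmark position from its 2-D image position plus the camera's pose is, at its core, a back-projection. A minimal sketch under the usual pinhole-camera assumptions (known intrinsics `K`, camera pose `(R, t)`, and a per-landmark depth, which in practice would come from triangulation or the HD map itself; none of these symbols are taken from the patent):

```python
import numpy as np

def landmark_world_position(pixel, depth, K, R, t):
    """Back-project a landmark's pixel position into 3-D world coordinates
    given camera intrinsics K and the camera's pose (rotation R,
    translation t)."""
    u, v = pixel
    # Ray through the pixel in camera coordinates, scaled to the depth.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    # Transform from camera frame to world frame.
    return R @ ray_cam + t
```

Collecting these world-frame landmark positions over the target area yields the local landmark map used to patch the HD map.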
-
Patent number: 11481924
Abstract: An acquisition circuit acquires first position information indicating a position and a first image captured by a camera at the position indicated by the first position information. A memory stores second position information indicating a prescribed position on a map and feature information extracted from a second image corresponding to the prescribed position. The second position information is associated with the feature information. A processor estimates the position indicated by the first position information on the basis of the second position information in the case that the position indicated by the first position information falls within a prescribed range from the prescribed position indicated by the second position information and the first image corresponds to the feature information.
Type: Grant
Filed: October 1, 2020
Date of Patent: October 25, 2022
Assignee: MICWARE CO., LTD.
Inventors: Sumito Yoshikawa, Shigehiko Miura
-
Patent number: 11465274
Abstract: A module type home robot is provided. The module type home robot includes a device module coupling unit coupled to a device module, an input unit receiving a user input, an output unit outputting voice and images, a sensing unit sensing a user, and a control unit sensing a trigger signal, activating the device module or the output unit according to the sensed trigger signal, and controlling the module type home robot to perform an operation mapped to the sensed trigger signal. The trigger signal is a user proximity signal, a user voice signal, a user movement signal, a specific time sensing signal or an environment change sensing signal.
Type: Grant
Filed: March 9, 2017
Date of Patent: October 11, 2022
Assignee: LG ELECTRONICS INC.
Inventors: Eunhyae Han, Yoonho Shin, Seungwoo Maeng, Sanghyuck Lee
-
Patent number: 11468260
Abstract: Computer-implemented systems and methods for selecting a first neural network model from a set of neural network models for a first dataset, the first neural network model having a set of predictor variables and a second dataset comprising a plurality of datapoints mapped into a multi-dimensional grid that defines one or more neighborhood data regions; applying the first neural network model on the first dataset to generate a model score for one or more datapoints in the second dataset, the model score representing an optimal fit of input predictor variables to a target variable for the set of variables of the first neural network model.
Type: Grant
Filed: May 4, 2021
Date of Patent: October 11, 2022
Assignee: FAIR ISAAC CORPORATION
Inventors: Scott Zoldi, Shafi Rahman
-
Patent number: 11449063
Abstract: A method for identifying objects for autonomous robots, including: capturing, with an image sensor disposed on an autonomous robot, images of a workspace, wherein a field of view of the image sensor captures at least an area in front of the autonomous robot; obtaining, with a processing unit disposed on the autonomous robot, the images; generating, with the processing unit, a feature vector from the images; comparing, with the processing unit, at least one object captured in the images to objects in an object dictionary; identifying, with the processing unit, a class to which the at least one object belongs; and executing, with the autonomous robot, instructions based on the class of the at least one object identified.
Type: Grant
Filed: February 24, 2022
Date of Patent: September 20, 2022
Assignee: AI Incorporated
Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia, Lukas Robinson
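The comparison step above, matching a feature vector against an object dictionary to identify a class, can be sketched as a nearest-prototype lookup. Cosine similarity and a dictionary of per-class prototype vectors are illustrative assumptions; the patent does not specify the comparison function:

```python
import numpy as np

def classify(feature, dictionary):
    """Compare an image feature vector to per-class prototype vectors in
    an object dictionary and return the class with the highest cosine
    similarity."""
    best_cls, best_sim = None, -1.0
    f = feature / np.linalg.norm(feature)
    for cls, proto in dictionary.items():
        p = proto / np.linalg.norm(proto)
        sim = float(f @ p)
        if sim > best_sim:
            best_cls, best_sim = cls, sim
    return best_cls
```

The robot would then dispatch on the returned class, e.g. detouring around a cable but driving over a rug edge.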
-
Patent number: 11442149
Abstract: A light detection and ranging ("LIDAR") system includes a coherent light source that generates a frequency modulated optical signal comprising a series of optical chirps. A scanning assembly transmits the series of optical chirps in a scan pattern across a scanning region, and receives a plurality of reflected optical chirps corresponding to the transmitted optical chirps that have reflected off one or more objects located within the scanning region. A photodetector mixes the reflected optical chirps with a local oscillation (LO) reference signal comprising a series of LO reference chirps. An electronic data analysis assembly processes digital data derived from the reflected optical chirps and the LO reference chirps to generate distance data and optionally velocity data associated with each of the reflected optical chirps.
Type: Grant
Filed: October 6, 2016
Date of Patent: September 13, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lutfollah Maleki, Scott Singer
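Mixing a reflected chirp with the LO reference chirp at the photodetector produces a beat frequency proportional to range. For a linear (sawtooth) FMCW chirp, the standard relation is R = c · f_beat · T / (2B), where B is the chirp bandwidth and T its duration; a minimal sketch of that conversion (the linear-sawtooth assumption is ours, and a triangular chirp would additionally separate out the Doppler/velocity term):

```python
C = 299_792_458.0  # speed of light, m/s

def chirp_range(f_beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range to a target from the beat frequency of a linear FMCW chirp:
    R = c * f_beat * T / (2 * B)."""
    return C * f_beat_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)
```

For example, with a 1 GHz bandwidth and a 10 µs chirp, each 1 MHz of beat frequency corresponds to about 1.5 m of range.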