Robotics Patents (Class 382/153)
-
Patent number: 11950981
Abstract: Calibrating an intraoral scanner includes obtaining reference data of a reference three-dimensional (3D) representation of a calibration object, and obtaining measurement data, based on the intraoral scanner being used by a user to scan the 3D calibration object, from one or more device-to-real-world coordinate transformations of two-dimensional (2D) images of the 3D calibration object. Calibrating the intraoral scanner further includes aligning the measurement data to the reference data to obtain alignment data and updating, based on the alignment data, said one or more transformations.
Type: Grant
Filed: April 19, 2023
Date of Patent: April 9, 2024
Assignee: Align Technology, Inc.
Inventors: Tal Verker, Adi Levin, Ofer Saphier, Maayan Moshe
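The abstract above does not name an alignment algorithm. A common choice for aligning a measured point set to reference data is a least-squares rigid fit (the Kabsch algorithm); the sketch below is illustrative, not the patented method, and the point sets are invented for the example.

```python
import numpy as np

def kabsch_align(measured, reference):
    """Estimate the rigid transform (R, t) that best maps `measured`
    points onto `reference` points in the least-squares sense."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    mu_m = measured.mean(axis=0)
    mu_r = reference.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (measured - mu_m).T @ (reference - mu_r)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_r - R @ mu_m
    return R, t

# Example: recover a known 90-degree rotation about z plus a translation.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
ref = pts @ R_true.T + t_true
R, t = kabsch_align(pts, ref)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

The recovered transform can then be used to update the scanner's device-to-real-world mappings, which is the role "updating said one or more transformations" plays in the abstract.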
-
Patent number: 11918299
Abstract: Methods, systems, and computer-readable media for tracking locations of one or more surgical instruments are disclosed. The method includes detecting a plurality of markers disposed on a distal end of a first surgical instrument within a field of view of a camera, calculating a position of the first surgical instrument based on a location of the plurality of markers within the field of view of the camera, and determining the position of the first surgical instrument in relation to a second surgical instrument.
Type: Grant
Filed: May 7, 2018
Date of Patent: March 5, 2024
Assignee: COVIDIEN LP
Inventor: William Peine
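The abstract leaves the pose computation unspecified. As a minimal, hypothetical illustration of relating two marker-tracked instruments, one can estimate each tip position as the centroid of its detected markers in the camera frame and take the difference; the marker coordinates below are invented.

```python
def centroid(markers):
    """Mean position of the detected markers on an instrument tip."""
    n = len(markers)
    return tuple(sum(m[i] for m in markers) / n for i in range(3))

def relative_position(markers_a, markers_b):
    """Position of instrument A's tip relative to instrument B's tip,
    both estimated from marker detections in the camera frame (mm)."""
    a, b = centroid(markers_a), centroid(markers_b)
    return tuple(ai - bi for ai, bi in zip(a, b))

tool_a = [(10.0, 0.0, 50.0), (12.0, 0.0, 50.0), (11.0, 2.0, 50.0)]
tool_b = [(0.0, 0.0, 48.0), (2.0, 0.0, 48.0), (1.0, 2.0, 52.0)]
print(relative_position(tool_a, tool_b))  # x/y offsets exact; z offset ~0.67
```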
-
Patent number: 11922636
Abstract: An electronic device places an augmented reality object in an image of a real environment based on a pose of the electronic device and based on image segmentation. The electronic device includes a camera that captures images of the real environment and sensors, such as an inertial measurement unit (IMU), that capture a pose of the electronic device. The electronic device selects an augmented reality (AR) object from a memory, segments a captured image of the real environment into foreground pixels and background pixels, and composites an image for display wherein the AR object is placed between the foreground pixels and the background pixels. As the pose of the electronic device changes, the electronic device maintains the relative position of the AR object with respect to the real environment in images for display.
Type: Grant
Filed: October 9, 2019
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventors: David Bond, Mark Dochtermann
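The foreground/AR/background layering described above can be sketched as a per-pixel priority composite. This is one illustrative reading, not the patented implementation; pixel values are stand-in labels rather than real image data.

```python
def composite(background, ar, foreground, fg_mask, ar_mask):
    """Per-pixel layering: foreground pixels occlude the AR object,
    which in turn occludes the background. Inputs are rows of pixel
    values; masks are rows of 0/1 flags."""
    out = []
    for bg_px, ar_px, fg_px, fm, am in zip(background, ar, foreground,
                                           fg_mask, ar_mask):
        if fm:
            out.append(fg_px)   # real foreground stays in front
        elif am:
            out.append(ar_px)   # AR object sits behind the foreground
        else:
            out.append(bg_px)   # real background shows through
    return out

row = composite(["B"] * 5, ["A"] * 5, ["F"] * 5,
                fg_mask=[0, 1, 1, 0, 0], ar_mask=[0, 0, 1, 1, 0])
print(row)  # ['B', 'F', 'F', 'A', 'B']
```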
-
Patent number: 11879984
Abstract: A method and system to determine the position of a moveable platform relative to an object is disclosed. The method can include storing one or more synthetic models each trained by one of the one or more synthetic model datasets corresponding to one or more objects in a database; capturing an image of the object by one or more sensors associated with the moveable platform; identifying the object by comparing the captured image of the object to the one or more synthetic model datasets; generating a first model output using a first synthetic model of the one or more synthetic models, the first model output including a first relative coordinate position and a first spatial orientation of the moveable platform; and generating a platform coordinate output and a platform spatial orientation output of the moveable platform at the first position based on the first model output.
Type: Grant
Filed: May 21, 2021
Date of Patent: January 23, 2024
Assignee: BOOZ ALLEN HAMILTON INC.
Inventor: James J. Ter Beest
-
Patent number: 11858148
Abstract: A robot and a method for controlling the robot are provided. The robot includes: at least one motor provided in the robot; a camera configured to capture an image of a door; and a processor configured to determine, on the basis of at least one of depth information and a feature point identified from the image, a target position not overlapping with a moving area of the door, and control the at least one motor such that predetermined operations are performed with respect to the target position. The feature point includes at least one of a handle and a hinge of the door.
Type: Grant
Filed: June 24, 2020
Date of Patent: January 2, 2024
Assignee: LG ELECTRONICS INC.
Inventor: Dongeun Lee
-
Patent number: 11843814
Abstract: Signals of an immersive multimedia item are jointly considered for optimizing the quality of experience for the immersive multimedia item. During encoding, portions of available bitrate are allocated to the signals (e.g., a video signal and an audio signal) according to the overall contribution of those signals to the immersive experience for the immersive multimedia item. For example, in the spatial dimension, multimedia signals are processed to determine spatial regions of the immersive multimedia item to render using greater bitrate allocations, such as based on locations of audio content of interest, video content of interest, or both. In another example, in the temporal dimension, multimedia signals are processed in time intervals to adjust allocations of bitrate between the signals based on the relative importance of such signals during those time intervals. Other techniques for bitrate optimizations for immersive multimedia streaming are also described herein.
Type: Grant
Filed: August 31, 2021
Date of Patent: December 12, 2023
Assignee: GOOGLE LLC
Inventors: Neil Birkbeck, Balineedu Adsumilli, Damien Kelly
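As a hedged illustration of the temporal-dimension idea, the sketch below splits a bitrate budget between audio and video in proportion to per-interval importance weights, with a per-signal floor. The weighting scheme, function name, and numbers are assumptions, not the patented allocator.

```python
def allocate_bitrate(total_kbps, importance, floor_kbps):
    """Split a total bitrate budget across signals in proportion to
    per-interval importance weights, with a minimum floor per signal."""
    n = len(importance)
    spare = total_kbps - n * floor_kbps
    if spare < 0:
        raise ValueError("budget cannot cover per-signal floors")
    total_w = sum(importance.values())
    return {name: floor_kbps + spare * w / total_w
            for name, w in importance.items()}

# A visually busy interval with quiet audio: video gets most of the
# spare budget.
alloc = allocate_bitrate(5000, {"video": 0.8, "audio": 0.2}, floor_kbps=500)
print(alloc)  # video ~3700 kbps, audio ~1300 kbps
```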
-
Patent number: 11833698
Abstract: A vision system includes a robot body and a robot operation manipulator that receives inputs from an operator to manipulate the robot body. The vision system also includes left-eye and right-eye cameras, and a display that displays parallax images for an operator. The vision system further includes an area operation manipulator that receives inputs by the operator to specify a target area to be seen three-dimensionally through the parallax images displayed on the display. The target area is located in an absolute space and is included in a portion of a field of view common between the left-eye and right-eye cameras. The vision system further includes a first controller that controls operation of the robot body, and a second controller that extracts and displays, as parallax images, images corresponding to the target area from left-eye and right-eye images captured by the left-eye and right-eye cameras, respectively.
Type: Grant
Filed: August 30, 2019
Date of Patent: December 5, 2023
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Masayuki Kamon, Hirokazu Sugiyama
-
Patent number: 11801604
Abstract: The present invention relates to a robot and method for estimating an orientation on the basis of a vanishing point in a low-luminance image. The robot for estimating an orientation on the basis of a vanishing point in a low-luminance image according to an embodiment of the present invention includes a camera unit configured to capture an image of at least one of a forward area and an upward area of the robot, and an image processor configured to extract line segments from a first image captured by the camera unit by applying histogram equalization and a rolling guidance filter to the first image, calculate a vanishing point on the basis of the line segments, and estimate a global angle of the robot corresponding to the vanishing point.
Type: Grant
Filed: May 8, 2019
Date of Patent: October 31, 2023
Assignee: LG ELECTRONICS INC.
Inventors: Dong-Hoon Yi, Jaewon Chang
-
Patent number: 11803189
Abstract: Provided is a control apparatus including: an assessment unit configured to assess whether or not a relevant element is represented by environment information acquired from a sensor; and an environment information setting unit configured to, in a case where the assessment unit has assessed that the relevant element is represented by the environment information, switch the environment information to environment information acquired in advance in which the relevant element is not included.
Type: Grant
Filed: January 18, 2019
Date of Patent: October 31, 2023
Assignee: SONY CORPORATION
Inventor: Yudai Yuguchi
-
Patent number: 11797020
Abstract: An autonomous mobile device (AMD) interacts with a user to perform tasks such as conveniently displaying information on a screen and moving with the user as they move. The AMD determines an area, or bounding box, of a user appearing within images obtained by a camera that is mounted on the AMD. A preferred area with respect to the images, such as the center of the image, is specified to provide desired framing. As images are acquired by the camera, a difference between the bounding box and the preferred area is determined. Based at least in part on this difference, instructions are determined to move one or more of the cameras or the entire AMD to try to reframe the bounding box in subsequent images closer to the preferred area. Other factors, such as the user being backlit, may also be considered in determining the instructions.
Type: Grant
Filed: October 16, 2020
Date of Patent: October 24, 2023
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Wenqing Jiang, Xin Yang
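The reframing step, where the difference between the bounding box and the preferred area drives camera motion, resembles a proportional controller. The sketch below assumes a pixel-space error with a deadband; the gain, deadband, and geometry are invented for the example, not taken from the patent.

```python
def reframe_step(bbox, preferred, gain=0.2, deadband=10):
    """One proportional-control step nudging the camera toward placing
    the subject's bounding-box center on the preferred image point.
    bbox = (left, top, right, bottom) in pixels; returns (pan, tilt)
    corrections in pixels of image motion for this step."""
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    ex, ey = preferred[0] - cx, preferred[1] - cy
    # Ignore small errors so the camera does not jitter around the target.
    pan = gain * ex if abs(ex) > deadband else 0.0
    tilt = gain * ey if abs(ey) > deadband else 0.0
    return pan, tilt

# Subject sits left of and below a 640x480 image center.
print(reframe_step((100, 180, 200, 380), (320, 240)))  # ~(34.0, -8.0)
```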
-
Patent number: 11780083
Abstract: Methods, apparatus, and computer-readable media for determining and utilizing human corrections to robot actions. In some implementations, in response to determining a human correction of a robot action, a correction instance is generated that includes sensor data, captured by one or more sensors of the robot, that is relevant to the corrected action. The correction instance can further include determined incorrect parameter(s) utilized in performing the robot action and/or correction information that is based on the human correction. The correction instance can be utilized to generate training example(s) for training one or more model(s), such as neural network model(s), corresponding to those used in determining the incorrect parameter(s). In various implementations, the training is based on correction instances from multiple robots. After a revised version of a model is generated, the revised version can thereafter be utilized by one or more of the multiple robots.
Type: Grant
Filed: November 5, 2021
Date of Patent: October 10, 2023
Assignee: GOOGLE LLC
Inventors: Nicolas Hudson, Devesh Yamparala
-
Patent number: 11769422
Abstract: A robot manipulating system includes a game terminal having a game computer, a game controller, and a display configured to display a virtual space; a robot configured to perform work in a real space based on robot control data; and an information processing device configured to mediate between the game terminal and the robot. The information processing device supplies game data associated with a content of work to the game terminal, acquires game manipulation data including a history of manipulation inputs accepted by the game controller while a game program to which the game data is reflected is executed, converts the game manipulation data into the robot control data based on a given conversion rule, and supplies the robot control data to the robot.
Type: Grant
Filed: August 8, 2019
Date of Patent: September 26, 2023
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Yasuhiko Hashimoto, Masayuki Kamon, Shigetsugu Tanaka, Yoshihiko Maruyama
-
Patent number: 11747825
Abstract: A robot includes a drive system configured to maneuver the robot about an environment and data processing hardware in communication with memory hardware and the drive system. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving image data of the robot maneuvering in the environment and executing at least one waypoint heuristic. The at least one waypoint heuristic is configured to trigger a waypoint placement on a waypoint map. In response to the at least one waypoint heuristic triggering the waypoint placement, the operations include recording a waypoint on the waypoint map where the waypoint is associated with at least one waypoint edge and includes sensor data obtained by the robot. The at least one waypoint edge includes a pose transform expressing how to move between two waypoints.
Type: Grant
Filed: March 7, 2019
Date of Patent: September 5, 2023
Assignee: Boston Dynamics, Inc.
Inventors: Dom Jonak, Marco da Silva, Joel Chestnutt, Matt Klingensmith
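A waypoint edge that stores "a pose transform expressing how to move between two waypoints" can be illustrated with SE(2) poses: `between` computes the edge from two waypoint poses, and `compose` replays it. This is a generic pose-graph sketch, not the assignee's implementation.

```python
import math

def compose(a, b):
    """Compose two SE(2) poses (x, y, theta): apply b in a's frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def between(a, b):
    """Relative transform (the edge) that moves the robot from pose a
    to pose b, expressed in a's frame."""
    ax, ay, at = a
    bx, by, bt = b
    dx, dy = bx - ax, by - ay
    return (dx * math.cos(at) + dy * math.sin(at),
            -dx * math.sin(at) + dy * math.cos(at),
            bt - at)

w1 = (0.0, 0.0, 0.0)
w2 = (2.0, 1.0, math.pi / 2)
edge = between(w1, w2)       # stored on the waypoint edge
reached = compose(w1, edge)  # replaying the edge reproduces w2
print(all(abs(r - e) < 1e-9 for r, e in zip(reached, w2)))  # True
```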
-
Patent number: 11745353
Abstract: A method includes identifying a target surface in an environment of a robotic device. The method further includes controlling a moveable component of the robotic device to move along a motion path relative to the target surface, wherein the moveable component comprises a light source and a camera. The method additionally includes receiving a plurality of images from the camera when the moveable component is at a plurality of poses along the motion path and when the light source is illuminating the target surface. The method also includes determining bidirectional reflectance distribution function (BRDF) image data, wherein the BRDF image data comprises the plurality of images converted to angular space with respect to the target surface. The method further includes determining, based on the BRDF image data and by applying at least one pre-trained machine learning model, a material property of the target surface.
Type: Grant
Filed: November 30, 2020
Date of Patent: September 5, 2023
Assignee: Google LLC
Inventor: Guy Satat
-
Patent number: 11738449
Abstract: A server (3) includes a relative position recognition unit (3b) that recognizes a relative position of a robot (2) with respect to a user, and a target position determination unit (3f3) that determines a target position serving as a target of the relative position during guidance, based on the relative position when the user starts to move after the robot (2) starts the guidance.
Type: Grant
Filed: August 28, 2019
Date of Patent: August 29, 2023
Assignee: HONDA MOTOR CO., LTD.
Inventor: Haruomi Higashi
-
Patent number: 11709499
Abstract: A controlling method for an artificial intelligence moving robot according to an aspect of the present disclosure includes: checking nodes within a predetermined reference distance from a node corresponding to a current position; determining whether there is a correlation between the nodes within the reference distance and the node corresponding to the current position; determining whether the nodes within the reference distance are nodes of a previously learned map when there is no correlation; and registering the node corresponding to the current position on the map when the nodes within the reference distance are determined as nodes of the previously learned map, thereby being able to generate a map in which the environment of a traveling section and environmental changes are appropriately reflected.
Type: Grant
Filed: October 22, 2019
Date of Patent: July 25, 2023
Assignee: LG ELECTRONICS INC.
Inventors: Gyuho Eoh, Seungwook Lim, Dongki Noh
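One possible reading of the registration decision above, sketched with invented data structures: the `correlations` and `learned` sets, the 2-D node coordinates, and the function name are all assumptions made for illustration, not the patented method.

```python
import math

def decide_registration(current, nodes, ref_dist, correlations, learned):
    """Register the current node only if the nodes within the reference
    distance have no correlation with it but do belong to the
    previously learned map.
    nodes: {id: (x, y)}; correlations: ids correlated with the current
    node; learned: ids of nodes on the previously learned map."""
    cx, cy = current
    near = [i for i, (x, y) in nodes.items()
            if math.hypot(x - cx, y - cy) <= ref_dist]
    if any(i in correlations for i in near):
        return False  # already explained by the existing map
    return all(i in learned for i in near)

nodes = {1: (0.0, 0.0), 2: (0.4, 0.3), 3: (5.0, 5.0)}
# Nodes 1 and 2 are near, uncorrelated with the current position, and on
# the previously learned map, so the current node gets registered.
print(decide_registration((0.1, 0.1), nodes, 1.0, set(), {1, 2, 3}))  # True
```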
-
Patent number: 11685052
Abstract: A method for operating a vision guided robot arm system comprising a robot arm provided with an end effector at a distal end thereof, a display, an image sensor and a controller, the method comprising: receiving from the image sensor an initial image of an area comprising at least one object and displaying the initial image on the display; determining an object of interest amongst the at least one object and identifying the object of interest within the initial image; determining a potential action related to the object of interest and providing a user with an identification of the potential action; receiving a confirmation of the object of interest and the potential action from the user; and automatically moving the robot arm so as to position the end effector of the robot arm at a predefined position relative to the object of interest.
Type: Grant
Filed: September 18, 2019
Date of Patent: June 27, 2023
Assignee: Kinova Inc.
Inventors: Louis-Joseph Caron L'Ecuyer, Jean-Francois Forget, Jonathan Lussier, Sébastien Boisvert
-
Patent number: 11679944
Abstract: An article picking system includes a detection device which detects a position of a plurality of articles which are moved, and a control unit. The control unit performs work data creation processing, which creates work data having position data of each of the articles; work data storage processing, which stores the plurality of created work data; region determination processing, which determines a determination region on the periphery of watching-target work data (the work data that should be paid attention to) among the stored work data; and order determination processing, which determines the picking order of the articles by using the watching-target work data and the peripheral work data within the determination region.
Type: Grant
Filed: August 12, 2019
Date of Patent: June 20, 2023
Assignee: FANUC CORPORATION
Inventor: Masafumi Ooba
-
Patent number: 11662741
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine an eccentricity map based on video image data and determine vehicle motion data by processing the eccentricity map and two red, green, blue (RGB) video images with a deep neural network trained to output vehicle motion data in global coordinates. The instructions can further include instructions to operate a vehicle based on the vehicle motion data.
Type: Grant
Filed: June 28, 2019
Date of Patent: May 30, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Bruno Sielly Jales Costa, Gintaras Vincent Puskorius
-
Patent number: 11657592
Abstract: The present disclosure relates to systems and methods for object recognition. The system may obtain an image and a model. The image may include a search region in which the object recognition process is performed. In the object recognition process, for each of one or more sub-regions of the search region, the system may determine a match metric indicating a similarity between the model and the sub-region of the search region. Further, the system may determine an instance of the model among the one or more sub-regions of the search region based on the match metrics.
Type: Grant
Filed: June 24, 2021
Date of Patent: May 23, 2023
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventors: Xinyi Ren, Feng Wang, Haitao Sun, Jianping Xiong
-
Patent number: 11636382
Abstract: A robotic self-learning visual inspection method includes determining if a fixture on a component is known by searching a database of known fixtures. If the fixture is unknown, a robotic self-programming visually learning process is performed that includes determining one or more features of the fixture and providing information via a controller about the one or more features in the database such that the fixture becomes known. When the fixture is known, a robotic self-programming visual inspection process is performed that includes determining if the one or more features each pass an inspection based on predetermined criteria. A robotic self-programming visual inspection system includes a robot having one or more arms each adapted for attaching one or more instruments and tools. The instruments and tools are adapted for performing visual inspection processes.
Type: Grant
Filed: August 5, 2019
Date of Patent: April 25, 2023
Assignee: Textron Innovations, Inc.
Inventors: Micah James Stuhldreher, Darren Fair
-
Patent number: 11625870
Abstract: A computer-implemented method (1000) of constructing a model of the motion of a mobile device, wherein the method comprises: using a sensor of the device to obtain (1002) positional data providing an estimated pose of the mobile device; generating (1004) an initial graph based upon the positional data from the sensor, nodes of which graph provide a series of possible poses of the device, and edges of which graph represent odometry and/or loop closure constraints; processing the graph to estimate (1006) confidence scores for each loop closure by performing pairwise consistency tests between each loop closure and a set of other loop closures; and generating an augmented graph from the initial graph by retaining or deleting (1008) each loop closure based upon the confidence scores.
Type: Grant
Filed: July 31, 2018
Date of Patent: April 11, 2023
Assignee: OXFORD UNIVERSITY INNOVATION LIMITED
Inventors: Linhai Xie, Sen Wang, Andrew Markham, Niki Trigoni
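The patent builds on pairwise consistency tests between loop closures. A heavily simplified 1-D stand-in is sketched below: score each closure by how many others share (within a tolerance) its disagreement with odometry, so mutually consistent closures score high and spurious ones score low. This is illustrative only; real formulations (e.g., pairwise consistent measurement set maximization) test cycles of SE(3) transforms, and all data here is invented.

```python
def pairwise_consistency_scores(odom, closures, tol=0.5):
    """Score each loop closure by the fraction of other closures it is
    mutually consistent with. odom[i] is the odometry-estimated 1-D
    position of node i; a closure (i, j, d) claims node j sits at
    distance d from node i. Two closures agree when their residuals
    against odometry match within tol (shared drift cancels out)."""
    def residual_gap(a, b):
        i, j, d1 = a
        k, l, d2 = b
        return abs((odom[j] - odom[i] - d1) - (odom[l] - odom[k] - d2))
    scores = []
    for a in closures:
        others = [b for b in closures if b is not a]
        agree = sum(1 for b in others if residual_gap(a, b) <= tol)
        scores.append(agree / len(others) if others else 1.0)
    return scores

odom = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 5.0}    # node 4 carries ~1 m drift
closures = [(0, 4, 4.0), (1, 4, 3.0), (2, 4, 0.5)]  # the last is spurious
scores = pairwise_consistency_scores(odom, closures)
print(scores)  # the two genuine closures agree; the spurious one scores 0
```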
-
Patent number: 11607807
Abstract: Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of a robot, that captures an end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At many iterations, the candidate end effector action with the highest probability is selected and control commands provided to cause the end effector to move in conformance with the corresponding end effector action.
Type: Grant
Filed: April 14, 2021
Date of Patent: March 21, 2023
Assignee: X DEVELOPMENT LLC
Inventors: Seyed Mohammad Khansari Zadeh, Mrinal Kalakrishnan, Paul Wohlhart
-
Patent number: 11599825
Abstract: Embodiments of the present disclosure relate to a method for training a trajectory classification model. The method includes: acquiring trajectory data; computing a trajectory feature of the trajectory data based on a temporal feature and a spatial feature of the trajectory data, the trajectory feature comprising at least one of a curvature or a rotation angle; and training the trajectory feature to obtain the trajectory classification model. Embodiments of the present disclosure further provide an apparatus for training a trajectory classification model, an electronic device, and a computer readable medium.
Type: Grant
Filed: December 11, 2019
Date of Patent: March 7, 2023
Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
Inventors: Enyang Bai, Kedi Chen, Miao Zhou, Quan Meng, Wei Wang
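The curvature/rotation-angle features mentioned above can be illustrated with a discrete turning angle at each interior vertex of a 2-D trajectory. This is a generic sketch, not the patented feature definition; the sample trajectories are invented.

```python
import math

def turning_angles(points):
    """Signed rotation angle at each interior vertex of a 2-D
    trajectory, a simple discrete stand-in for curvature/rotation
    features."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        h0 = math.atan2(y1 - y0, x1 - x0)  # incoming heading
        h1 = math.atan2(y2 - y1, x2 - x1)  # outgoing heading
        a = h1 - h0
        # Wrap to [-pi, pi) so a near-straight segment scores near zero.
        a = (a + math.pi) % (2 * math.pi) - math.pi
        angles.append(a)
    return angles

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
left_turn = [(0, 0), (1, 0), (1, 1)]
print(turning_angles(straight))   # [0.0, 0.0]
print(turning_angles(left_turn))  # [1.5707...] i.e. a 90-degree left turn
```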
-
Patent number: 11594007
Abstract: Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
Type: Grant
Filed: October 13, 2021
Date of Patent: February 28, 2023
Assignee: UiPath, Inc.
Inventor: Vaclav Skarda
-
Patent number: 11574089
Abstract: A vehicle can capture data that can be converted into a synthetic scenario for use in a simulator. Objects can be identified in the data and attributes associated with the objects can be determined. The data can be used to generate a synthetic scenario of a simulated environment. The scenarios can include simulated objects that traverse the simulated environment and perform actions based on the attributes associated with the objects, the captured data, and/or interactions within the simulated environment. In some instances, the simulated objects can be filtered from the scenario based on attributes associated with the simulated objects and can be instantiated and/or destroyed based on triggers within the simulated environment. The scenarios can be used for testing and validating interactions and responses of a vehicle controller within the simulated environment.
Type: Grant
Filed: June 28, 2019
Date of Patent: February 7, 2023
Assignee: Zoox, Inc.
Inventor: Bryan Matthew O'Malley
-
Patent number: 11568100
Abstract: A vehicle can capture data that can be converted into a synthetic scenario for use in a simulator. Objects can be identified in the data and attributes associated with the objects can be determined. The data can be used to generate a synthetic scenario of a simulated environment. The scenarios can include simulated objects that traverse the simulated environment and perform actions based on the attributes associated with the objects, the captured data, and/or interactions within the simulated environment. In some instances, the simulated objects can be filtered from the scenario based on attributes associated with the simulated objects and can be instantiated and/or destroyed based on triggers within the simulated environment. The scenarios can be used for testing and validating interactions and responses of a vehicle controller within the simulated environment.
Type: Grant
Filed: June 28, 2019
Date of Patent: January 31, 2023
Assignee: Zoox, Inc.
Inventor: Bryan Matthew O'Malley
-
Patent number: 11536737
Abstract: A flexible instrument control and data storage/management system and method for representing and processing assay plates having one or more predefined plate locations is disclosed. The system utilizes a graph data structure, layer objects and data objects. The layer objects map the graph data structure to the data objects. The graph data structure can comprise one node for each of the one or more predefined plate locations, wherein the nodes can be hierarchically defined according to a predefined plate location hierarchy. Each node can be given a unique node identifier, a node type and a node association that implements the predefined plate location hierarchy. The layer objects can include an index that maps the node identifiers to the data objects.
Type: Grant
Filed: December 20, 2018
Date of Patent: December 27, 2022
Inventors: Craig P. Lovell, Thomas Lucas Hampton, III
-
Patent number: 11530924
Abstract: A method for updating a high definition map according to one embodiment comprises: obtaining a two-dimensional image that captures a target area corresponding to at least a part of an area expressed by a three-dimensional high definition map; generating a three-dimensional local landmark map of the target area from a position of a landmark in the two-dimensional image, based on a position and an orientation of a photographing device which has captured the two-dimensional image; and updating the high definition map with reference to the local landmark map corresponding to the target area of the three-dimensional high definition map.
Type: Grant
Filed: November 15, 2018
Date of Patent: December 20, 2022
Assignee: SK TELECOM CO., LTD.
Inventor: Seongsoo Lee
-
Patent number: 11481924
Abstract: An acquisition circuit acquires first position information indicating a position and a first image captured by a camera at the position indicated by the first position information. A memory stores second position information indicating a prescribed position on a map and feature information extracted from a second image corresponding to the prescribed position. The second position information is associated with the feature information. A processor estimates the position indicated by the first position information on the basis of the second position information in the case that the position indicated by the first position information falls within a prescribed range from the prescribed position indicated by the second position information and the first image corresponds to the feature information.
Type: Grant
Filed: October 1, 2020
Date of Patent: October 25, 2022
Assignee: MICWARE CO., LTD.
Inventors: Sumito Yoshikawa, Shigehiko Miura
-
Patent number: 11468260
Abstract: Computer-implemented systems and methods for selecting a first neural network model from a set of neural network models for a first dataset, the first neural network model having a set of predictor variables and a second dataset comprising a plurality of datapoints mapped into a multi-dimensional grid that defines one or more neighborhood data regions; applying the first neural network model on the first dataset to generate a model score for one or more datapoints in the second dataset, the model score representing an optimal fit of input predictor variables to a target variable for the set of variables of the first neural network model.
Type: Grant
Filed: May 4, 2021
Date of Patent: October 11, 2022
Assignee: FAIR ISAAC CORPORATION
Inventors: Scott Zoldi, Shafi Rahman
-
Patent number: 11465274
Abstract: A module type home robot is provided. The module type home robot includes a device module coupling unit coupled to a device module, an input unit receiving a user input, an output unit outputting voice and images, a sensing unit sensing a user, and a control unit sensing a trigger signal, activating the device module or the output unit according to the sensed trigger signal, and controlling the module type home robot to perform an operation mapped to the sensed trigger signal. The trigger signal is a user proximity signal, a user voice signal, a user movement signal, a specific time sensing signal or an environment change sensing signal.
Type: Grant
Filed: March 9, 2017
Date of Patent: October 11, 2022
Assignee: LG ELECTRONICS INC.
Inventors: Eunhyae Han, Yoonho Shin, Seungwoo Maeng, Sanghyuck Lee
-
Patent number: 11449063
Abstract: A method for identifying objects for autonomous robots, including: capturing, with an image sensor disposed on an autonomous robot, images of a workspace, wherein a field of view of the image sensor captures at least an area in front of the autonomous robot; obtaining, with a processing unit disposed on the autonomous robot, the images; generating, with the processing unit, a feature vector from the images; comparing, with the processing unit, at least one object captured in the images to objects in an object dictionary; identifying, with the processing unit, a class to which the at least one object belongs; and executing, with the autonomous robot, instructions based on the class of the at least one object identified.
Type: Grant
Filed: February 24, 2022
Date of Patent: September 20, 2022
Assignee: AI Incorporated
Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia, Lukas Robinson
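The "compare a feature vector to an object dictionary" step can be sketched as nearest-class lookup by cosine similarity. The dictionary entries and 3-D feature vectors below are toy values invented for the example; a real system would use learned, high-dimensional features.

```python
import math

def classify(feature, dictionary):
    """Assign the class whose dictionary vector is most similar
    (by cosine similarity) to the image feature vector."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return max(dictionary, key=lambda name: cos(feature, dictionary[name]))

dictionary = {
    "cable": (0.9, 0.1, 0.0),
    "sock":  (0.1, 0.8, 0.3),
    "shoe":  (0.0, 0.2, 0.9),
}
print(classify((0.15, 0.75, 0.25), dictionary))  # sock
```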
-
Patent number: 11442149
Abstract: A light detection and ranging ("LIDAR") system includes a coherent light source that generates a frequency modulated optical signal comprising a series of optical chirps. A scanning assembly transmits the series of optical chirps in a scan pattern across a scanning region, and receives a plurality of reflected optical chirps corresponding to the transmitted optical chirps that have reflected off one or more objects located within the scanning region. A photodetector mixes the reflected optical chirps with a local oscillation (LO) reference signal comprising a series of LO reference chirps. An electronic data analysis assembly processes digital data derived from the reflected optical chirps and the LO reference chirps mixed at the photodetector to generate distance data and optionally velocity data associated with each of the reflected optical chirps.
Type: Grant
Filed: October 6, 2016
Date of Patent: September 13, 2022
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Lutfollah Maleki, Scott Singer
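For an FMCW lidar like the one described, target range follows from the beat frequency produced at the mixer and the chirp slope: d = c * f_beat / (2 * B/T). The parameter values below are illustrative, not taken from the patent.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(beat_hz, bandwidth_hz, chirp_s):
    """Target range from the beat frequency produced by mixing a
    reflected chirp with the local-oscillator chirp:
    d = c * f_beat / (2 * slope), slope = bandwidth / chirp duration."""
    slope = bandwidth_hz / chirp_s  # Hz per second
    return C * beat_hz / (2.0 * slope)

# A 1 GHz chirp over 10 us; a target at 15 m delays the echo by 2d/c,
# which the mixer turns into a beat of slope * delay (about 10 MHz).
beat = (1e9 / 10e-6) * (2 * 15.0 / C)
print(round(range_from_beat(beat, 1e9, 10e-6), 6))  # 15.0
```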
-
Patent number: 11430341
Abstract: The present invention discloses a system and a method for optimizing Unmanned Aerial Vehicle (UAV) based warehouse management, where an optimized path for the UAV is generated in real time based on the density of inventory. In operation, the present invention provides for identifying landmark features of the warehouse and the density of inventory. Further, a 3D grid map of an aisle of the warehouse is generated using the density of inventory. Finally, a navigation path for the UAV for a mission is generated based on the generated 3D grid map using one or more path planning techniques. Further, the present invention provides for updating the navigation path if one or more changes are observed in the density of the inventory.
Type: Grant
Filed: July 23, 2019
Date of Patent: August 30, 2022
Assignee: COGNIZANT TECHNOLOGY SOLUTIONS INDIA PVT. LTD.
Inventors: Gurpreet Singh Sachdeva, Ramesh Yechangunja
-
Patent number: 11429112Abstract: A mobile robot control method includes: acquiring a first image that is captured by a camera on a robot when the robot is in a desired pose; acquiring a second image that is captured by the camera on the robot when the robot is in a current pose; extracting multiple pairs of matching feature points from the first image and the second image, and projecting the extracted feature points onto a virtual unitary sphere to obtain multiple projection feature points, wherein a center of the virtual unitary sphere is coincident with the optical center of the camera; acquiring an invariant image feature and a rotation vector feature based on the multiple projection feature points, and controlling the robot to move until the robot is in the desired pose according to the invariant image feature and the rotation vector feature.Type: GrantFiled: December 31, 2020Date of Patent: August 30, 2022Assignees: UBTECH NORTH AMERICA RESEARCH AND DEVELOPMENT CENTER CORP, UBTECH ROBOTICS CORP LTDInventors: Dejun Guo, Dan Shao, Yang Shen, Kang-Hao Peng, Huan Tan
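Projecting an image feature point onto a unit sphere centred at the optical center is a standard step in spherical visual servoing: back-project the pixel through a pinhole model and normalize the resulting ray. A minimal sketch, assuming a pinhole camera with illustrative intrinsics (the patent does not specify the camera model's parameters):

```python
import math

def project_to_unit_sphere(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) through a pinhole model and normalize the ray,
    yielding the feature's projection on a unit sphere at the optical center."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)

s = project_to_unit_sphere(420.0, 260.0, 500.0, 500.0, 320.0, 240.0)
print(s)  # a unit-length bearing vector
```

Features expressed this way are invariant to camera rotation about the optical center, which is what makes them useful for the invariant-feature control the abstract describes.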
-
Patent number: 11430137Abstract: An electronic device and a control method therefor are disclosed. A method for controlling an electronic device according to the present invention comprises the steps of: receiving a current frame; determining a region, within the current frame, where there is a movement, on the basis of a prior frame and the current frame; inputting the current frame into an artificial intelligence learning model on the basis of the region where there is the movement, to obtain information relating to at least one object included in the current frame; and determining the object included in the region where there is the movement, by using the obtained information relating to the at least one object. Therefore, the electronic device can rapidly determine an object included in a frame of a captured image.Type: GrantFiled: March 26, 2019Date of Patent: August 30, 2022Assignee: SAMSUNG ELECTRONICS CO., LTD.Inventors: Sungho Kang, Yunjae Lim, Hyungdal Kwon, Cheon Lee
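The motion-region step above can be sketched with simple frame differencing: find pixels that changed beyond a threshold between frames and bound them, so only that region needs to be passed to the recognition model. A toy grayscale sketch, not the patented method:

```python
def motion_region(prev, curr, threshold=25):
    """Bounding box (r0, c0, r1, c1) of pixels whose absolute difference
    between two grayscale frames exceeds `threshold`; None if nothing moved."""
    moved = [(r, c)
             for r, row in enumerate(curr)
             for c, px in enumerate(row)
             if abs(px - prev[r][c]) > threshold]
    if not moved:
        return None
    rows = [r for r, _ in moved]
    cols = [c for _, c in moved]
    return (min(rows), min(cols), max(rows), max(cols))

prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[1][2] = 200  # a bright object appears in one cell
print(motion_region(prev, curr))  # (1, 2, 1, 2)
```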
-
Patent number: 11425866Abstract: Method and apparatus for automated operations, such as pruning, harvesting, spraying and/or maintenance, on plants, and particularly plants with foliage having features on many length scales or a wide spectrum of length scales, such as female flower buds of the marijuana plant. The invention utilizes a convolutional neural network for image segmentation classification and/or the determination of features. The foliage is imaged stereoscopically to produce a three-dimensional surface image, a first neural network determines regions to be operated on, and a second neural network determines how an operation tool operates on the foliage. For pruning of resinous foliage the cutting tool is heated or cooled to avoid having the resins make the cutting tool inoperable.Type: GrantFiled: February 10, 2019Date of Patent: August 30, 2022Inventor: Keith Charles Burden
-
Patent number: 11423545Abstract: The present invention relates to an image processing apparatus and a mobile robot including the same. The image processing apparatus according to an embodiment of the present invention includes an image acquisition unit for obtaining an image and a processor for performing signal processing on the image from the image acquisition unit, and the processor is configured to group super pixels in the image on the basis of colors or luminances of the image, calculate representative values of the super pixels and perform segmentation on the basis of the representative values of the super pixels. Accordingly, image segmentation can be performed rapidly and accurately.Type: GrantFiled: December 4, 2018Date of Patent: August 23, 2022Assignees: LG ELECTRONICS INC., INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITYInventors: Beomseong Kim, Yeonsoo Kim, Dongki Noh, Euntai Kim, Jisu Kim, Sangyun Lee
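The representative-value step in the abstract above can be illustrated by averaging the luminance of each superpixel given a per-pixel label map; segmentation then operates on these few representatives instead of every pixel, which is where the speedup comes from. A sketch of that step only, with hypothetical data:

```python
def representative_values(image, labels, num_superpixels):
    """Mean luminance of each superpixel, given a per-pixel label map."""
    sums = [0.0] * num_superpixels
    counts = [0] * num_superpixels
    for row, label_row in zip(image, labels):
        for px, lab in zip(row, label_row):
            sums[lab] += px
            counts[lab] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

image = [[10, 12], [200, 202]]   # a dark and a bright region
labels = [[0, 0], [1, 1]]        # two superpixels
print(representative_values(image, labels, 2))  # [11.0, 201.0]
```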
-
Patent number: 11393063Abstract: An object detecting method includes imaging a plurality of target objects with an imaging section and acquiring a first image, recognizing an object position/posture of one of the plurality of target objects based on the first image, counting the number of successfully recognized object positions/postures of the target object, outputting, based on the object position/posture of the target object, a signal for causing a holding section to hold the target object, calculating, as a task evaluation value, a result about whether the target object was successfully held, updating, based on an evaluation indicator including the number of successfully recognized object positions/postures and the task evaluation value, a model for estimating the evaluation indicator from an imaging position/posture of the imaging section and determining an updated imaging position/posture, acquiring a second image in the updated imaging position/posture, and recognizing the object position/posture of the target object based on the secoType: GrantFiled: March 27, 2020Date of Patent: July 19, 2022Inventor: Jun Toda
-
Patent number: 11389956Abstract: A first method comprising: predicting a scene of an environment using a model of the environment and based on a first scene of the environment obtained from sensors observing scenes of the environment; comparing the predicted scene with an observed scene from the sensors; and performing an action based on differences determined between the predicted scene and the observed scene. A second method comprising applying a vibration stimuli on an object via a computer-controlled component; obtaining a plurality of images depicting the object from a same viewpoint, captured during the application of the vibration stimuli. The second method further comprising comparing the plurality of images to detect changes occurring in response to the application of the vibration stimuli, which changes are attributed to a change of a location of a boundary of the object; and determining the boundary of the object based on the comparison.Type: GrantFiled: February 11, 2019Date of Patent: July 19, 2022Assignee: SHMUEL UR INNOVATION LTD.Inventors: Shmuel Ur, Vlad Dabija, David Hirshberg
-
Patent number: 11364581Abstract: A method and apparatus for manufacturing an aircraft structure. A drivable support may be driven from a first location to a second location to bring the drivable support together with at least one other drivable support to form a drivable support system. A structure may be held in a desired position using the drivable support system.Type: GrantFiled: October 1, 2019Date of Patent: June 21, 2022Assignee: The Boeing CompanyInventors: Dan Dresskell Day, Clayton Lynn Munk, Steven John Schmitt, Eric M. Reid
-
Patent number: 11348276Abstract: A mobile robot and a method of controlling the mobile robot are disclosed. The method includes acquiring an image of an inside of a traveling zone. The method further includes performing a point-based feature point extraction by extracting a first feature point from the acquired image. The method also includes performing a block-based feature point extraction by dividing the acquired image into blocks having a predetermined size and extracting a second feature point from each of the divided block-unit images. The method also includes determining the current location by performing a point-based feature point matching using the first feature point and performing a block-based feature point matching using the second feature point. The method also includes storing the determined current location in association with the first feature point and the second feature point in a map.Type: GrantFiled: March 26, 2020Date of Patent: May 31, 2022Assignee: LG ELECTRONICS INC.Inventors: Dongki Noh, Jaekwang Lee, Seungwook Lim, Gyuho Eoh
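The block division underlying the second feature-point extraction can be sketched as tiling the image into non-overlapping fixed-size blocks, each of which then yields its own feature. An illustrative sketch only; the block size and feature extractor in the patent are unspecified here:

```python
def split_into_blocks(image, block_size):
    """Divide a 2D image into non-overlapping block_size x block_size tiles,
    the unit from which block-based feature points would be extracted."""
    h, w = len(image), len(image[0])
    return [[[row[c:c + block_size] for row in image[r:r + block_size]]
             for c in range(0, w, block_size)]
            for r in range(0, h, block_size)]

image = [[r * 4 + c for c in range(4)] for r in range(4)]
blocks = split_into_blocks(image, 2)
print(blocks[0][1])  # top-right 2x2 tile: [[2, 3], [6, 7]]
```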
-
Patent number: 11340606Abstract: System and method for controlling an aerial system, without physical interaction with a separate remote device, based on sensed user expressions. User expressions may include thought, voice, facial expressions, and/or gestures. User expressions may be sensed by sensors associated with the aerial device or a remote device.Type: GrantFiled: August 7, 2019Date of Patent: May 24, 2022Inventors: Mengqiu Wang, Jia Lu, Tong Zhang, Lixin Liu
-
Patent number: 11304374Abstract: The invention relates to an end-effector device and automated selective thinning system. The system includes vision acquisition hardware, kinematic targeting and heuristic programming, a robotic arm, and a pomologically designed end-effector. The system is utilized to improve efficiency for the fruit-thinning process in a tree orchard, such as peach thinning. By automating the mechanical process of fruit thinning, selective fruit-thinners can eliminate manual labor inputs and further enhance favorable blossom removal. Automation used in conjunction with a heuristic approach provides improvements to the system. The system may also be configured as a robotic arm or as a handheld system by including a battery and switching microcontroller with handle or wrist straps. Handheld thinning devices that are mechanical in nature may also be part of the system.Type: GrantFiled: August 28, 2019Date of Patent: April 19, 2022Assignee: THE PENN STATE RESEARCH FOUNDATIONInventors: David Lyons, Paul Heinemann
-
Patent number: 11303799Abstract: The present technology relates to a control device, a control method, and a program that enable capturing an image suitable for use in image processing such as recognition of a target object. A control device according to an embodiment of the present technology generates a map including a target object existing around a moving object on the basis of output from a sensor provided on the moving object, and controls drive of a camera provided on the moving object that captures an image of the target object on the basis of a relation between a position of the target object and a position of the moving object on the map. The present technology can be applied to a robot capable of acting autonomously.Type: GrantFiled: July 17, 2019Date of Patent: April 12, 2022Assignee: SONY CORPORATIONInventor: Xi Chen
-
Patent number: 11266049Abstract: There is provided a component mounting machine which mounts electronic components onto a circuit substrate and is capable of displaying a movable region of an inner portion of the component mounting machine within a same image. The component mounting machine is provided with a fixed camera which monitors the inner portion of the component mounting machine and a display section which is capable of displaying a captured image of the fixed camera. The fixed camera is capable of imaging a range from a pickup position at which the suction nozzle picks up the electronic component which is supplied from the component feeder to a mounting position at which the electronic component is mounted onto the circuit substrate within the same image.Type: GrantFiled: September 3, 2015Date of Patent: March 1, 2022Assignee: FUJI CORPORATIONInventor: Kazuma Hattori
-
Patent number: 11254019Abstract: Systems and methods are provided for automatic intrinsic and extrinsic calibration for a robot optical sensor. An implementation includes an optical sensor; a robot arm; a calibration chart; one or more processors; and a memory storing instructions that cause the one or more processors to perform operations that includes: determining a set of poses for calibrating the first optical sensor; generating, based at least on the set of poses, pose data comprising three dimensional (3D) position and orientation data; moving, based at least on the pose data, the robot arm into a plurality of poses; at each pose of the plurality of poses, capturing a set of images of the calibration chart with the first optical sensor and recording a pose; calculating intrinsic calibration parameters, based at least on the set of captured images; and calculating extrinsic calibration parameters, based at least on the set of captured images.Type: GrantFiled: March 5, 2019Date of Patent: February 22, 2022Assignee: The Boeing CompanyInventors: Phillip Haeusler, Jason John Cochrane
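Intrinsic calibration of the kind described above typically fits a pinhole model by minimizing reprojection error: project known 3D chart points through candidate parameters and compare against the detected image corners. A sketch of the projection and residual only (the optimizer is omitted; all values are illustrative, not from the patent):

```python
def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = point_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(observed, point_cam, fx, fy, cx, cy):
    """Distance between a detected chart corner and its predicted projection;
    calibration minimizes the sum of these residuals over all poses."""
    u, v = project(point_cam, fx, fy, cx, cy)
    ou, ov = observed
    return ((u - ou) ** 2 + (v - ov) ** 2) ** 0.5

print(project((0.1, -0.05, 1.0), 800.0, 800.0, 320.0, 240.0))  # (400.0, 200.0)
```

Extrinsic calibration adds the rigid transform between the sensor and the robot arm's end effector, solved from the recorded poses the abstract mentions.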
-
Patent number: 11226628Abstract: The present application provides a method, apparatus and system for controlling transportation between warehouses. The method includes: receiving, from the source RCS, first transportation information which includes information of a first to-be-transported object; transporting the first to-be-transported object to a handover area; transferring control over the AGV from the source RCS to the target RCS; receiving a location of a first target storage space from the target RCS; transporting the first to-be-transported object from the handover area to the first target storage space. In the present application, the AGV transfers the control over itself from the source RCS to the target RCS after moving the to-be-transported object to the handover area, such that the target RCS could take over the AGV and control the AGV to transport the first to-be-transported object from the handover area to the first target storage space.Type: GrantFiled: August 10, 2017Date of Patent: January 18, 2022Assignee: HANGZHOU HIKROBOT TECHNOLOGY CO., LTD.Inventors: Huapeng Wu, Keping Zhu, Shengkai Li
-
Patent number: 11215996Abstract: The present disclosure provides a method and a device for controlling a vehicle, a device and a storage medium, and relates to the field of unmanned vehicle technologies. The method includes: acquiring a vehicle environment image by an image acquirer during traveling of the vehicle; extracting a static environment image included in the vehicle environment image; obtaining a planned vehicle traveling trajectory by taking the static environment image as an input of a trajectory planning model; and controlling the vehicle to travel according to the planned vehicle traveling trajectory.Type: GrantFiled: December 28, 2019Date of Patent: January 4, 2022Assignee: Apollo Intelligent Driving Technology (Beijing) Co., Ltd.Inventor: Hao Yu