Vision Sensor (e.g., Camera, Photocell) Patents (Class 700/259)
  • Patent number: 10974390
    Abstract: The objective of the present invention is to provide autonomous localization and navigation equipment with the following advantages. The equipment is highly modularized, which greatly reduces its coupling with the host equipment, so it is easy to integrate into existing host equipment and is flexibly expandable. Host equipment such as a robot therefore has a more concise and clear system constitution, greatly reducing the difficulty and time of developing host equipment that incorporates the autonomous localization and navigation equipment. Moreover, the high degree of modularization of the autonomous localization and navigation equipment makes it possible to miniaturize the host equipment.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: April 13, 2021
    Inventors: ShiKai Chen, YiChun Liu, Ling Lin, Jueshen Huang, YuXiang Li
  • Patent number: 10966373
    Abstract: An electro-mechanical device cuts plant stems at a desired angle: after a stem is inserted into a stem shaft, a sensor triggers activation of a blade along a blade path at the desired angle. A portable, battery-powered device can be used with plants where they are growing, or a powered device can be used for line production or high-volume work.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: April 6, 2021
    Inventor: Robert Kaleck
  • Patent number: 10953548
    Abstract: A computer-implemented method executed by a robotic system for performing a positional search process in an assembly task is presented. The method includes applying forces to a first component to be inserted into a second component, detecting the forces applied to the first component by employing a plurality of force sensors attached to a robotic arm of the robotic system, extracting training samples corresponding to the forces applied to the first component, normalizing time-series data for each of the training samples by applying a variable transformation about a right tilt direction, creating a time-series prediction model of transformed training data, applying the variable transformation with different directions for a test sample, and calculating a matching ratio between the created time-series prediction model and the transformed test sample.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 23, 2021
    Assignee: International Business Machines Corporation
    Inventor: Takayuki Yoshizumi
  • Patent number: 10948913
    Abstract: The present disclosure relates to a method of identifying an unexpected obstacle and a robot implementing the method. The method includes: by a sensing module of a robot, sensing a blind spot located in a traveling path of the robot; by a control unit of the robot, calculating a probability that a moving object appears in the sensed blind spot; and, by the control unit, controlling the speed or direction of a moving unit of the robot based on the calculated probability.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: March 16, 2021
    Inventors: Changhyeon Lee, Byungkon Sohn
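The control loop this abstract describes (sense a blind spot, estimate the probability that a moving object appears there, then modulate the robot's speed) can be sketched as follows. The linear probability-to-speed mapping and the speed limits are illustrative assumptions, not details taken from the patent.

```python
def blind_spot_speed(current_speed, appearance_prob, max_speed=1.0, min_speed=0.1):
    """Scale a robot's commanded speed down as the probability that a
    moving object emerges from a sensed blind spot increases (sketch)."""
    if not 0.0 <= appearance_prob <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    # Linear interpolation: full speed at p = 0, minimum crawl at p = 1.
    speed = max_speed - appearance_prob * (max_speed - min_speed)
    # Never exceed the current commanded speed or drop below the crawl speed.
    return max(min(speed, current_speed), min_speed)
```

A call like `blind_spot_speed(1.0, 0.5)` halves the margin between full and crawl speed; the direction change mentioned in the abstract would be handled by a separate planner.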
  • Patent number: 10939791
    Abstract: The present invention relates to a moving robot capable of recognizing a position on a map and a control method of the moving robot, and the moving robot according to the present invention includes: a travel drive unit configured to move a main body; an image acquisition unit configured to acquire images of surroundings; and a controller configured to recognize a current position.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: March 9, 2021
    Inventors: Seungwook Lim, Taekyeong Lee, Dongki Noh
  • Patent number: 10940590
    Abstract: A method for calibrating a system with a conveying apparatus and at least a first robot includes determining the positions of at least three measuring points of a first component transported by the conveying apparatus in a first transport position using the first robot. The method further includes determining the position of at least one of the measuring points in a second transport position using the first robot, or determining the positions of at least two of the measuring points of the component in a third transport position and the position of at least one other measuring point in the third transport position or at least one of these measuring points in a fourth transport position using at least one second robot.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: March 9, 2021
    Assignee: KUKA Deutschland GmbH
    Inventors: Thomas Purrucker, Robert Miller, Ralf Mittmann, Daniele Sagnotti, Manuela Hauschild, Benno Eichner, Stephan Clair, Felix Lückert, Markus Hager, Maximilian Lindner
  • Patent number: 10939024
    Abstract: An image processing system, an image processing device and an image processing method that can improve the performance of an image measurement are provided. A control device controls a light emission portion so that each of plural types of partial regions set on a light emission surface emits light, and controls a camera to image an object in synchronization with the light emission of each of the plural types of partial regions. The control device performs an image measurement of the object based on reflection profile information generated from a plurality of input images; the reflection profile information shows a relationship between positions within the light emission surface and the degrees of light that is reflected at attention sites of the object and incident on the camera, with respect to light irradiated from those positions to the object.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: March 2, 2021
    Assignee: OMRON Corporation
    Inventor: Yutaka Kato
  • Patent number: 10913163
    Abstract: A door opening and closing robot includes: a door operating tool including an insertion portion and a restricting portion; a robotic arm that moves the door operating tool; and a control device that operates the robotic arm so that the door operating tool moves to a first reference position and then toward a first presence confirmation position, obtains a position of the door operating tool as a first evaluation position, and determines a deviation between the first presence confirmation position and the first evaluation position; when the deviation is less than or equal to a predetermined first threshold, the control device detects failed presence of the insertion portion in a window groove. The first reference position is a position at which the door operating tool is inserted in the window groove and the restricting portion is in contact with an edge of the window groove or positioned above the edge of the window groove.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: February 9, 2021
    Inventors: Takashi Oshima, Tomoyuki Nagao, Hideki Isoda
  • Patent number: 10899010
    Abstract: A cable damage detection assistance apparatus in a robot mechanism includes a program executor for executing a program to operate a robot a plurality of times, while changing the velocity for driving motors each time the program is executed; a motor controller for controlling the motors; a state quantity detector for detecting a state quantity indicating an operation state of the robot during the execution of the program; an alarm generator that, when the state quantity exceeds a threshold value, generates an alarm and outputs information about the line number at that time; an alarm database for counting the number of occurrences of alarms for each line number on which an alarm has occurred, and storing the number of alarm occurrences for each line number on a velocity-by-velocity basis; and an analysis display for displaying the relationship between the number of alarm occurrences and the velocity for each line number.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: January 26, 2021
    Inventors: Soichi Arita, Kokoro Hatanaka
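The alarm database this abstract describes (counting alarm occurrences per program line, broken down by motor velocity) is essentially a nested counter. A minimal sketch, with class and method names invented for illustration:

```python
from collections import defaultdict

class AlarmDatabase:
    """Count alarm occurrences per program line number, keyed by the
    motor velocity in effect when each alarm fired (illustrative sketch)."""

    def __init__(self):
        # line_number -> velocity -> occurrence count
        self._counts = defaultdict(lambda: defaultdict(int))

    def record(self, line_number, velocity):
        """Register one alarm raised on a given line at a given velocity."""
        self._counts[line_number][velocity] += 1

    def occurrences(self, line_number):
        """Return {velocity: count} for one program line, for display."""
        return dict(self._counts[line_number])
```

An analysis display could then plot `occurrences(n)` per line to show how the alarm rate grows with velocity, as the abstract suggests.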
  • Patent number: 10893245
    Abstract: Disclosed is an image-projecting robot that selects a projection area in a space based on at least one of first information related to the content of an image to be projected and second information related to a user viewing the image, and projects the image onto the selected projection area.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: January 12, 2021
    Inventors: Hyungjin Choi, Byungjoon Kim, Sul Ran Kim, Jongook Yoon
  • Patent number: 10884421
    Abstract: A method for operating a cleaning device that automatically moves within an environment, wherein the cleaning device cleans a surface according to a prescribed work plan, wherein a detection device of the cleaning device detects contamination levels of several partial areas of the surface, wherein a cleaning operation of the cleaning device is varied as a function of the detection result, wherein an overall contamination level is determined for the surface from the contamination levels of the several partial areas, and the cleaning operation is performed with cleaning parameters corresponding to the overall contamination level for the entire surface. In order to create an alternative to conventional operating methods, the determined overall contamination level is compared with at least one reference contamination level, specifically with an overall contamination level determined during a chronologically preceding cleaning activity.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: January 5, 2021
    Assignee: Vorwerk & Co. Interholding GmbH
    Inventors: Maike Brede, Pia Hahn, Lorenz Hillen, Gerhard Isenberg, Harald Windorfer
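The comparison step this abstract describes (aggregate per-area contamination readings into an overall level, then compare against the level from a preceding run) can be sketched as below. Aggregating by arithmetic mean is an assumption; the patent does not specify the aggregation.

```python
def compare_contamination(partial_levels, reference_level):
    """Aggregate per-area contamination readings into an overall level
    (arithmetic mean, an illustrative assumption) and report whether the
    surface is dirtier than in the preceding cleaning run."""
    if not partial_levels:
        raise ValueError("need at least one partial-area reading")
    overall = sum(partial_levels) / len(partial_levels)
    return overall, overall > reference_level
```

The boolean result would drive the choice of cleaning parameters for the next pass.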
  • Patent number: 10882187
    Abstract: A method of planning a cleaning route for a cleaning robot: firstly, starting from an original point based on maps of grids, cleaning grid zones formed by the grids one by one until an entire region is cleaned, and then establishing a map of the entire region; secondly, searching the map of the entire region to find out uncleaned areas missed from cleaning, and then cleaning the uncleaned areas; thirdly, cleaning peripheral areas of the entire region based on the map of the entire region; and lastly, returning to the original point. A chip is also provided which stores procedures for controlling the cleaning robot to implement the method of planning a cleaning route.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: January 5, 2021
    Inventors: Yongyong Li, Gangjun Xiao, Qinwei Lai
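The second step of the route plan above, searching the completed map for areas missed during cleaning, reduces to scanning the grid map for uncleaned cells. A minimal sketch; the boolean cell encoding is an assumption, since the patent does not specify the map format:

```python
def find_uncleaned_cells(grid_map):
    """Search a grid map for cells missed during the first cleaning pass.
    grid_map is a 2-D list of booleans where True marks a cleaned cell
    (illustrative encoding)."""
    return [(row, col)
            for row, cells in enumerate(grid_map)
            for col, cleaned in enumerate(cells)
            if not cleaned]
```

The returned coordinates would be grouped into contiguous uncleaned areas and visited before the perimeter pass and the return to the original point.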
  • Patent number: 10877442
    Abstract: Provided is a machine learning device configured to perform machine learning related to optimization of a compensation value of a compensation generation unit with respect to a servo control device configured to control a servo motor configured to drive an axis of a machine tool, a robot, or an industrial machine, and that includes at least one feedback loop, a compensation generation unit configured to generate a compensation value to be applied to the feedback loop, and an abnormality detection unit configured to detect an abnormal operation of the servo motor, wherein, during a machine learning operation, when the abnormality detection unit detects an abnormality, the compensation from the compensation generation unit is stopped and the machine learning device continues optimization of the compensation value generated by the compensation generation unit.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: December 29, 2020
    Inventors: Shougo Shinoda, Ryoutarou Tsuneki
  • Patent number: 10873689
    Abstract: To measure, for each partial-area image that constitutes a wide-area image, the time elapsed since that image was captured, and to control a flying body in accordance with the elapsed time, an information processing apparatus includes: a wide-area image generator that extracts a plurality of video frame images from video obtained while a flying body captures the ground area spreading below it as it moves, and combines the video frame images to generate a captured image of a wide area; an elapsed-time measurer that measures, for each of the plurality of video frame images, the time elapsed since the flying body captured it; and an output unit that outputs the elapsed time for each video frame image together with position information of the flying body at the time that frame was captured.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: December 22, 2020
    Inventor: Tetsuo Inoshita
  • Patent number: 10856716
    Abstract: The present disclosure discloses a robot cleaner, including a cleaner body and a sensing unit disposed in the cleaner body, wherein the sensing unit includes: a rotating body configured to be horizontally rotatable around a rotation shaft passing through an inside of the cleaner body; a sensor mounted on one side of the rotating body to sense a feature or an obstacle in the vicinity of the cleaner body; and a tilting unit installed inside the rotating body to vertically tilt the sensor.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: December 8, 2020
    Inventors: Seaunglok Ham, Jaeheon Chung, Seungjin Lee, Hwang Kim, Hyeongshin Jeon
  • Patent number: 10846647
    Abstract: An apparatus for notifying of parcel delivery comprises an aerial parcel delivery apparatus, landing gear, a processor, a number of visual sensors, and an articulated robotic arm. The robotic arm may comprise an end effector with one or more simulated fingers or digits of a human hand, a protruding member, and a wireless communication adapter. The end effector may grasp an object or actuate a doorbell, keypad, or alarm system. The articulated robotic arm, including the protruding member, is extended to ring a doorbell.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: November 24, 2020
    Assignee: Hall Labs LLC
    Inventor: David R. Hall
  • Patent number: 10836034
    Abstract: A system and method of determining a grasp type of an end-effector of a robot when interacting with an item wherein a plurality of velocity values of the end-effector at various positions of its movement are collected and used to determine the grasp type when a given velocity value is below a predetermined threshold.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: November 17, 2020
    Assignee: Kindred Systems Inc.
    Inventors: Petr Lipay, Richard Chad Cogar
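The detection rule this abstract describes, classifying the grasp type once a sampled end-effector velocity falls below a threshold, can be sketched as follows. The threshold value and function name are illustrative assumptions.

```python
def detect_grasp_event(velocities, threshold=0.05):
    """Return the index of the first sampled end-effector velocity below
    the threshold -- the point at which a grasp type would be classified
    (threshold value is an illustrative assumption). Returns None if the
    end effector never slows enough to indicate contact with the item."""
    for i, velocity in enumerate(velocities):
        if velocity < threshold:
            return i
    return None
```

In the patented system, the velocity samples collected up to that index would feed the actual grasp-type determination.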
  • Patent number: 10834026
    Abstract: A method for AI-assisted service provisioning and modification for delivering message-based services, including: receiving an input sequence from a user in relation to a request for a service, the input sequence including one or more inputs; processing the input sequence to determine a service type; associating a workflow with the request based at least in part on the service type and a profile of the user, the workflow including a set of one or more steps, a step of the set of one or more steps corresponding to a set of attributes including at least one of: a communication mode, a communication type, or a communication priority, the workflow being performed by at least one of: a chatbot, an AI assistant, or a service professional; and interacting with the user based at least in part on the workflow to deliver the service.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: November 10, 2020
    Assignee: Jiseki Health, Inc.
    Inventors: Chandra Nagaraja, Tushar Vasisht
  • Patent number: 10800032
    Abstract: An industrial robot having high operability for a user is provided. An industrial robot includes a manipulator, a controller which controls an operation of the manipulator, and a detection device attached to the manipulator and detecting a gesture input. The controller executes a process corresponding to the detected gesture input.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: October 13, 2020
    Assignee: OMRON Corporation
    Inventors: Daichi Kamisono, Yoshiharu Tani, Kazunori Osako, Toshiyuki Higuchi, Minoru Hashimoto, Masaki Fujita
  • Patent number: 10805546
    Abstract: An image processing system (SYS) specifies an arrangement situation of a workpiece (W) provided by a line (L). The image processing system (SYS) specifies a normal (V) with respect to a set measurement point (Wp) of the workpiece (W) according to the specified arrangement situation of the workpiece (W), and changes a position and posture of the two-dimensional camera (310) so that the specified normal (V) matches an optical axis of the two-dimensional camera (310) (S1). The image processing system (SYS) changes a distance between the two-dimensional camera (310) and the measurement point (Wp) so that the specified normal (V) matches the optical axis of the two-dimensional camera (310), to focus the two-dimensional camera (310) on the measurement point (Wp).
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: October 13, 2020
    Assignee: OMRON Corporation
    Inventors: Toyoo Iida, Yuki Taniyasu
  • Patent number: 10772692
    Abstract: The invention provides a probe device, a precision detection method, a precision detection system, and a positioning system. The probe device includes a positioning part and a guide detection part, wherein the positioning part is provided with a support having three or more non-collinear positioning elements installed thereon, and the guide detection part is connected with the support, has a first preset positional relation with the positioning elements, and has a cylindrical outer contour matched with a guide element of the positioning system. The precision of the guide element can be accurately detected, so that the control precision of a surgical robot is effectively improved and system safety is increased.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: September 15, 2020
    Inventors: Yinyan Li, Bo Chen, Yubiao Wei
  • Patent number: 10775772
    Abstract: A method for controlling a velocity of a conveyance path of a system including an industrial robot, a first conveyance path configured to transfer items within a working area (b2) of the robot with a velocity v1, and a second conveyance path configured to transfer empty places within the working area (b2) of the robot with a velocity v2. The method includes: obtaining position data for a plurality of items and for a plurality of empty places; creating one or more pairs including one of the empty places of the plurality of empty places and a respective item of the plurality of items; calculating, for one of the one or more pairs, a time tAs for the empty place of the one pair and a time tBk for the item of the one pair to reach a border of the working area (b2) based on position data for the one pair and the velocities v1 and v2; and controlling the velocity v2 of the second conveyance path based on a difference between the time tAs and a time (tBk + Δt), where Δt is a predetermined time difference.
    Type: Grant
    Filed: October 12, 2016
    Date of Patent: September 15, 2020
    Assignee: ABB Schweiz AG
    Inventor: Anders Lager
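The control step this abstract describes, adjusting v2 based on the difference between tAs and (tBk + Δt), can be sketched as one proportional update. The proportional gain and the specific update rule are illustrative assumptions; the patent only states that v2 is controlled from the time difference.

```python
def adjust_empty_place_velocity(d_empty, d_item, v1, v2, dt, gain=0.5):
    """One proportional control step for the second conveyor's velocity.
    d_empty, d_item: distances of the paired empty place and item to the
    working-area border; v1, v2: conveyor velocities; dt: desired arrival
    offset (the patent's predetermined time difference). Gain is an
    illustrative assumption."""
    t_as = d_empty / v2        # time for the empty place to reach the border
    t_bk = d_item / v1         # time for the item to reach the border
    error = t_as - (t_bk + dt)
    # Speed the second conveyor up when the empty place would arrive late,
    # slow it down when it would arrive early; keep v2 strictly positive.
    return max(v2 + gain * error * v2 / max(t_as, 1e-9), 1e-6)
```

With `d_empty=10, d_item=10, v1=2, v2=1, dt=1`, the empty place would arrive 4 s late, so v2 is nudged upward.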
  • Patent number: 10775803
    Abstract: A docking system and method for charging a mobile robot at a docking station. The system includes a first module for the robot, including a first communication unit and a first control unit, and a second module for the station, including a second communication unit, one or more docking sensors, and a second control unit. When the robot enters a docking region around the station, the first communication unit sends to the second communication unit a status message indicating that the robot needs charging; upon reception of the status message, the second control unit uses the sensors to derive a traction command to drive the robot towards the station; and the second communication unit sends to the first communication unit a command message containing the traction command. The first control unit processes the traction command and uses it to operate traction motors of the robot.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: September 15, 2020
    Inventors: Gian Piero Fici, Marco Gaspardone, Miguel Efrain Kaouk Ng, Matteo Lazzarin
  • Patent number: 10751882
    Abstract: Features are disclosed for an end effector for automated identification and handling of an object. The end effector can be positioned, using sensors, over a pick point of an overpackage in which a desired object is located. Using the location information, the end effector can identify a path to the pick point and detect whether the pick point is engaged by detecting environmental changes at the end effector.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: August 25, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Tye Michael Brady, Anna Buchele, Juan Carlos del Rio, Rocco DiVerdi, Yuzhong Huang, Hunter Normandeau, Timothy Stallman, Ziyu Wang
  • Patent number: 10743951
    Abstract: A method for repairing a bone of a patient may include superimposing a first virtual boundary on a virtual bone and superimposing a second virtual boundary on the virtual bone. The method may further include robotically modifying the bone of the patient with a planar tool along a first working boundary to create a first surface. The first working boundary may correspond to the first virtual boundary. The method may further include robotically modifying the bone of the patient with a rotary tool along a second working boundary to create a second surface. The second working boundary may correspond to the second virtual boundary.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: August 18, 2020
    Assignee: Mako Surgical Corporation
    Inventors: Ali Zafar Abbasi, Jonathan Greenwald, Philip H. Frank
  • Patent number: 10731994
    Abstract: The present disclosure relates to an information processing device and an information processing method that are capable of estimating the self-position by accurately and continuously estimating the self-movement. The information processing device according to an aspect of the present disclosure includes a downward imaging section and a movement estimation section. The downward imaging section is disposed on the bottom of a moving object traveling on a road surface and captures an image of the road surface. The movement estimation section estimates the movement of the moving object in accordance with a plurality of images representing the road surface and captured at different time points by the downward imaging section. The present disclosure can be applied, for example, to a position sensor mounted in an automobile.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: August 4, 2020
    Inventors: Takaaki Kato, Shingo Tsurumi, Masashi Eshima, Akihiko Kaino, Masaki Fukuchi
  • Patent number: 10726264
    Abstract: Techniques for localizing a device based on images captured by the device. The techniques include receiving, from a device via a data communication network, image frame data for frames captured by an imaging camera included in the device, the frames including a first frame, automatically detecting real-world objects captured in the image frame data, automatically classifying the detected real-world objects as being associated with respective object classes, automatically identifying object class and instance dependent keypoints for the real-world objects based on the object classes associated with the real-world objects, and estimating a pose of the device for the first frame based on at least the identified object class and instance dependent keypoints.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: July 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harpreet Singh Sawhney, Marc André Léon Pollefeys
  • Patent number: 10717196
    Abstract: A machine learning device that learns an operation of a robot for picking up, by a hand unit, any of a plurality of workpieces placed in a random fashion, including a bulk-loaded state, includes a state variable observation unit that observes a state variable representing a state of the robot, including data output from a three-dimensional measuring device that obtains a three-dimensional map for each workpiece, an operation result obtaining unit that obtains a result of a picking operation of the robot for picking up the workpiece by the hand unit, and a learning unit that learns a manipulated variable including command data for commanding the robot to perform the picking operation of the workpiece, in association with the state variable of the robot and the result of the picking operation, upon receiving output from the state variable observation unit and output from the operation result obtaining unit.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: July 21, 2020
    Inventors: Takashi Yamazaki, Takumi Oyama, Shun Suyama, Kazutaka Nakayama, Hidetoshi Kumiya, Hiroshi Nakagawa, Daisuke Okanohara, Ryosuke Okuta, Eiichi Matsumoto, Keigo Kawaai
  • Patent number: 10698413
    Abstract: A mobile robot includes a mode of operation to recover from a localization error. The mobile robot detects a change in state in a local region proximate the mobile robot. The location of the mobile robot is identified based at least in part on the detected change in state. In one implementation, the mobile robot interfaces with a system controller of a building to initiate a change in state in a local region of a building.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: June 30, 2020
    Assignee: SAVIOKE INC.
    Inventors: Tessa Lau, Christian Fritz, Philipp Herget, Robert S. Bauer
  • Patent number: 10698403
    Abstract: Methods, systems, and apparatus, including computer programs encoded on storage devices, for monitoring, security, and surveillance of a property. In one aspect, a system includes multiple robotic devices, multiple sensors including a first sensor, multiple charging stations, and a monitor control unit. The monitor control unit may include a network interface, one or more processors, and one or more storage devices that include instructions to cause the one or more processors to perform operations. The operations may include receiving data from the first sensor that is indicative of an alarm event, accessing information describing the capabilities of each of the robotic devices, selecting a subset of robotic devices from the multiple robotic devices, and transmitting a command to each robotic device in the subset that instructs each respective robotic device to deploy to the location of the first sensor.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: June 30, 2020
    Assignee: Incorporated
    Inventor: Daniel Kerzner
  • Patent number: 10692276
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: June 23, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
  • Patent number: 10692277
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: June 23, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
  • Patent number: 10666768
    Abstract: In some implementations, techniques are described for generating and displaying visualizations that display information for an electronic device in a visual interface presented to a user by an augmented reality (AR) device. Video data collected by an augmented reality device is obtained. A device of a property is identified based on the video data. In response to identifying the device, a network status for the device is obtained. One or more visualizations representing the network status for the device are generated. The one or more visualizations are provided for display to a user by the augmented reality device.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: May 26, 2020
    Assignee: Incorporated
    Inventor: Kyle Rankin Johnson
  • Patent number: 10661190
    Abstract: An interactive play device, method and apparatus, is disclosed that includes means for generating a plurality of interactions, input control mechanisms, means for storing responses to interactions, and control means to select the next interaction based on memorized responses. This invention provides a new class of interactive play devices, which is founded on personalizing a play device so that its current functionality is based on how the player has interacted with it in prior playing sessions. The invention also discloses a doll device and a car device, which operate in a plurality of states that mimic human behavior. Further, the specification describes a game during which the player is challenged to transform a play device from an initial state to a desired state by providing appropriate responses to interactions initiated by the device.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: May 26, 2020
    Assignee: Interactive Play Devices LLC
    Inventor: Nabil N. Ghaly
  • Patent number: 10654166
    Abstract: Automation windows for attended or unattended robots are disclosed. A child session is created and hosted as a window including the applications of a window associated with a parent session. Running multiple sessions allows a robot to operate in this child session while the user interacts with the parent session. The user may thus be able to interact with applications that the robot is not using or the user and the robot may be able to interact with the same application if that application is capable of this functionality. The user and the robot are both interacting with the same application instances and file system. Changes made via the robot and the user in an application will be made as if a single user made them, rather than having the user and the robot each work with separate versions of the applications and file systems.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: May 19, 2020
    Assignee: UiPath, Inc.
    Inventor: Andrew Hall
  • Patent number: 10639112
    Abstract: An improved device for regenerating an infrared signal transmitted over the air for use in detecting a 3-dimensional position of an object. The regeneration device includes an infrared signal transmitter and detector that receives from the object a responsive infrared signal in response to the infrared signal transmitted by the transmitter. A low pass filter receives the responsive infrared signal from the detector and outputs a low-pass filtered signal. A comparator compares the output of the infrared signal detector and output of the low pass filter and generates an output representing a logic state based on the comparison.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: May 5, 2020
    Assignee: Globus Medical, Inc.
    Inventors: Robert J. LeBoeuf, II, Zachary Olenio, James Yau, Neil R. Crawford
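    The comparator stage this abstract describes can be sketched in a few lines: the raw detector output is compared against its own low-pass-filtered baseline, and the comparison yields a logic state. The single-pole IIR filter and its coefficient below are assumptions for illustration, not details taken from the patent:

```python
def lowpass(samples, alpha=0.1):
    """Single-pole IIR low-pass filter (hypothetical coefficient)."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def regenerate(samples, alpha=0.1):
    """Logic state is high whenever the raw detector output exceeds
    its low-pass-filtered baseline, mimicking the comparator stage."""
    baseline = lowpass(samples, alpha)
    return [1 if x > b else 0 for x, b in zip(samples, baseline)]
```

    A burst such as `[0, 0, 10, 10, 0, 0]` regenerates cleanly because the slow baseline lags the fast pulse edges.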
  • Patent number: 10638915
    Abstract: A medical system having a first insertable instrument having a first image sensor; a second insertable instrument having a second image sensor; a first arm configured to move the first insertable instrument; and a controller configured to: acquire a position and orientation of the second insertable instrument in a predetermined coordinate system; calculate a position and direction of an optical axis of the second image sensor based on the position and orientation of the second insertable instrument; acquire a position and orientation of the first insertable instrument in the predetermined coordinate system; calculate a first operation amount of the first arm to move the first image sensor such that the optical axis of the second image sensor substantially coincides with an optical axis of the first image sensor, based on the position and orientation of the first insertable instrument; and control the first arm based on the first operation amount.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: May 5, 2020
    Inventor: Koji Nishizawa
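    The controller's goal that "the optical axis of the second image sensor substantially coincides with an optical axis of the first image sensor" reduces to a vector comparison. A minimal sketch, with the tolerance value being an assumption rather than a figure from the patent:

```python
def axes_aligned(a, b, tol=1e-3):
    """True when two optical-axis direction vectors substantially
    coincide, i.e. the cosine of the angle between them is near 1."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) > 1 - tol
```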
  • Patent number: 10628700
    Abstract: Techniques related to improved video coding based on face detection, region extraction, and tracking are discussed. Such techniques may include performing a facial search of a video frame to determine candidate face regions in the video frame, testing the candidate face regions based on skin tone information to determine valid and invalid face regions, rejecting invalid face regions, and encoding the video frame based on valid face regions to generate a coded bitstream.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: April 21, 2020
    Assignee: Intel Corporation
    Inventors: Atul Puri, Daniel Socek
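    The skin-tone test used to validate candidate face regions can be sketched as a chroma-box check in YCbCr space. The Cb/Cr ranges and 50% acceptance threshold below are common heuristics, not values taken from the patent:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_tone(pixels, cb_range=(77, 127), cr_range=(133, 173)):
    """Accept a candidate face region when most of its pixels fall
    inside a (hypothetical) Cb/Cr skin-tone box."""
    hits = 0
    for r, g, b in pixels:
        _, cb, cr = rgb_to_ycbcr(r, g, b)
        if cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]:
            hits += 1
    return hits / len(pixels) > 0.5
```

    Regions failing the chroma test would be rejected as invalid face regions before encoding.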
  • Patent number: 10603790
    Abstract: A workpiece picking device includes a sensor measuring a plurality of workpieces randomly piled in a three-dimensional space; a robot handling the workpieces; a hand mounted to the robot and holding the workpieces; a holding position posture calculation unit calculating holding position posture data of a position and a posture to hold the workpieces by the robot based on an output of the sensor; a loading state improvement operation generation unit generating loading state improvement operation data of improving a loading state of the workpieces by the robot based on an output of the sensor; and a robot control unit controlling the robot and the hand. The robot control unit controls the robot and the hand based on an output of the holding position posture calculation unit and the loading state improvement operation generation unit to pick the workpieces or perform a loading state improvement operation.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: March 31, 2020
    Inventor: Takefumi Gotou
  • Patent number: 10588470
    Abstract: Methods, systems and apparatus for optically determining the amount of product used and/or remaining in a product dispenser.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: March 17, 2020
    Inventors: Babak R. Ghazi, Frederick J. Williams, Jr., Stephen Becker
  • Patent number: 10585437
    Abstract: The present disclosure provides a robot, an auto-recharging method therefor, and a storage medium. The auto-recharging method for a robot comprises: the robot moving from an initial position to a docking position, wherein the docking position faces a charging interface of a charging pile; and the robot traveling from the docking position to a charging position along a first path such that the robot is docked to the charging pile at the charging position, wherein the first path is a straight-line or approximately straight-line path, the robot maintains a docking pose while traveling along the first path, and the charging pile is identified in images captured by the robot in real time. The present disclosure may achieve auto-recharging of the robot without the guidance of an active light source, thereby reducing the cost of the robot while offering high flexibility to the equipment.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: March 10, 2020
    Assignee: NEXTVPU (SHANGHAI) CO., LTD.
    Inventors: Ji Zhou, Xinpeng Feng
  • Patent number: 10583560
    Abstract: A robotic system includes: a control unit configured to: receive an object set including one or more object entries, wherein: the object entries correspond to source objects of an object source, and each of the object entries is described by one or more object entry properties; receive sensor information representing one or more detectable object properties for detectable source objects of an object source; calculate an object match probability between the detectable source objects and the object entries based on a property correlation between the detectable object properties of the detectable source objects and the object entry properties of the object entries; generate an object identity approximation for each of the detectable source objects based on a comparison between the object match probability for each of the detectable source objects corresponding to a particular instance of the object entries; select a target object from the detectable source objects; and generate an object handling strategy for implementation.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: March 10, 2020
    Assignee: Mujin, Inc.
    Inventors: Jose Jeronimo Moreira Rodrigues, Xutao Ye, Jinze Yu, Rosen Diankov
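    The match-probability step this abstract describes can be sketched as scoring how well detected properties correlate with each catalog entry and picking the best-scoring entry as the identity approximation. The 10% tolerance and the dictionary-based property model are assumptions for illustration:

```python
def match_probability(detected, entry):
    """Fraction of an entry's properties whose detected value falls
    within a (hypothetical) 10% tolerance of the catalog value."""
    hits = sum(
        1 for k, v in entry.items()
        if k in detected and abs(detected[k] - v) <= 0.1 * abs(v)
    )
    return hits / len(entry)

def identify(detected, catalog):
    """Return the catalog entry id with the highest match probability,
    i.e. the object identity approximation."""
    return max(catalog, key=lambda name: match_probability(detected, catalog[name]))
```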
  • Patent number: 10571575
    Abstract: Automated methods and systems are disclosed, including a method comprising: capturing images and three-dimensional LIDAR data of a geographic area with an image capturing device and a LIDAR system, the images depicting an object of interest and the three-dimensional LIDAR data including the object of interest, the image capturing device capturing the images of a vertical surface of the object of interest at one or more oblique angles, and the LIDAR system capturing the three-dimensional LIDAR data of a horizontal surface of the object of interest at a nadir angle; analyzing the images with a computer system to determine three-dimensional locations of points on the object of interest; and updating the three-dimensional LIDAR data with the three-dimensional locations of points on the object of interest determined by analyzing the images to create a 3D point cloud having a resolution greater than a resolution of the three-dimensional LIDAR data.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: February 25, 2020
    Assignee: Pictometry International Corp.
    Inventors: Stephen L. Schultz, David R. Nilosek, David S. Petterson, Timothy S. Harrington
  • Patent number: 10568258
    Abstract: A robotic vehicle may include one or more functional components configured to execute a lawn care function, a sensor network comprising one or more sensors configured to detect conditions proximate to the robotic vehicle, a positioning module configured to determine robotic vehicle position while the robotic vehicle traverses a parcel, and a boundary management module configured to enable the robotic vehicle to be operated within a bounded area. The bounded area may include a variable boundary, and the boundary management module may be configured to receive instructions from an operator to adjust the variable boundary.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: February 25, 2020
    Assignee: HUSQVARNA AB
    Inventor: Håkan Wahlgren
  • Patent number: 10554546
    Abstract: An improved technique involves using a systems dynamics model in an information technology (IT) data center in order to determine an optimal distribution of data among data center devices. Along these lines, a data center control server takes measurements of devices across an IT data center over time and compares these measurements to quantities specified in set points (e.g., a service level agreement (SLA)) to produce deviations. The data center control server then inputs the deviations from the set points into a systems dynamics engine that determines a configuration of the devices in the IT data center so that output from the IT data center satisfies a set of constraints, including those specified in the SLA. The data center control server then configures the IT data center devices according to the configuration to send incoming data along the specified data paths.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: February 4, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Stephen Graham, Harrison Roberts, Kenneth Fickie, Kent Bair, Shankar Jagannathan
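    The first step this abstract describes, comparing measurements against set points to produce deviations, can be sketched directly. The normalized-deviation convention and metric names below are assumptions, not details from the patent:

```python
def deviations(measurements, set_points):
    """Signed deviation of each measured metric from its SLA set point,
    normalized by the set point (hypothetical convention). The result
    would feed the systems dynamics engine that reconfigures devices."""
    return {
        metric: (measurements[metric] - target) / target
        for metric, target in set_points.items()
    }
```

    A latency measurement of 120 ms against a 100 ms SLA set point would yield a +20% deviation for the systems dynamics engine to act on.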
  • Patent number: 10549422
    Abstract: A controller is provided with a machine learning device learning an operation start condition for a storing motion for an article on the carrier device by means of the robot. The machine learning device observes operation start condition data showing the operation start condition and conveyance state data showing states of articles on the carrier device, as state variables indicating a current state of an environment. Further, the machine learning device acquires judgment data showing an appropriateness judgment result of the storing motion and learns the operation start condition in association with the conveyance state data, using the observed state variables and the acquired judgment data.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: February 4, 2020
    Inventor: Masafumi Ooba
  • Patent number: 10539406
    Abstract: A method and an apparatus for calibrating a tool in a flange coordinate system of a robot are disclosed. The method includes: acquiring a rotation angle of each joint of a robot when a to-be-calibrated tool fastened on an end joint mounting portion of the robot moves to a central point of the to-be-calibrated tool and overlaps with a calibration reference point; acquiring calibration information of a central point of a calibrated tool in a flange coordinate system of the robot; and completing calibration of the central point of the to-be-calibrated tool in the flange coordinate system of the robot according to the calibration information of the central point of the calibrated tool and a rotation angle of the to-be-calibrated tool.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: January 21, 2020
    Inventor: Wenfeng Mu
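    The core geometric idea behind tool-centre-point calibration, expressing a fixed world reference point in the flange frame while the tool tip touches it, can be sketched in the plane. The planar simplification and the argument names are assumptions; the actual method works with full joint angles and 3D transforms:

```python
import math

def tcp_in_flange(flange_xy, flange_theta, ref_xy):
    """Planar sketch: rotate the world-frame vector from the flange
    origin to the calibration reference point into the flange frame,
    giving the tool-centre-point offset (assumes the tool tip is
    touching the reference point)."""
    dx = ref_xy[0] - flange_xy[0]
    dy = ref_xy[1] - flange_xy[1]
    c, s = math.cos(-flange_theta), math.sin(-flange_theta)
    return (c * dx - s * dy, s * dx + c * dy)
```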
  • Patent number: 10525599
    Abstract: Systems, methods, and computer-readable media are described for instructing a robotic arm to perform various actions on a touch-sensitive display screen of a mobile device based on captured images of the display screen. A camera can capture an image of the display screen while the mobile device is mounted in a mounting station with the display screen facing the camera, and the captured image can be analyzed to determine the relationship between the pixel locations in the image captured by the camera and the physical locations to which the robotic arm can be instructed to move. Based on the relationship, the system can instruct the robotic arm to touch and activate various on-screen objects displayed on the display screen of the mobile device based on the pixel locations of such on-screen objects in the images captured by the camera.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: January 7, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Samir Zutshi
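    The pixel-to-physical relationship this abstract describes can be sketched as a per-axis linear fit from calibration touches: once two known pixel/physical pairs are observed per axis, any on-screen object's pixel location maps to a robot target. The two-point affine fit is an assumed simplification of the calibration described:

```python
def fit_axis(p1, p2, q1, q2):
    """Fit a scale and offset mapping pixel coordinate p to physical
    coordinate q from two calibration observations (hypothetical
    calibration routine)."""
    scale = (q2 - q1) / (p2 - p1)
    return scale, q1 - scale * p1

def pixel_to_physical(px, py, cal):
    """Map a pixel location in the camera image to a physical location
    the robotic arm can be instructed to touch."""
    (sx, ox), (sy, oy) = cal
    return sx * px + ox, sy * py + oy
```

    For example, a 1000x2000-pixel image of a 50x100 mm screen maps pixel (500, 1000) to the physical centre (25 mm, 50 mm).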
  • Patent number: 10513038
    Abstract: A robot control system includes a humanoid conversation robot that has a conversation with a user, at least one service execution robot that provides a service to the user, a recognition unit that recognizes a request and an emotion of the user through the conversation between the user and the humanoid conversation robot, and a determination unit that determines a service which is to be provided to the user and a service execution robot which is to execute the service among the at least one service execution robot according to the request and the emotion of the user. The service execution robot determined by the determination unit executes the determined service.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: December 24, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Roshan Thapliya, Belinda Dunstan
  • Patent number: 10500004
    Abstract: A teleoperational medical system for performing a medical procedure in a surgical field includes a dynamic guided setup system having step-by-step setup instructions for setting up a teleoperational assembly having at least one motorized surgical arm configured to assist in a surgical procedure. It also includes a user interface configured to communicate the step-by-step setup instructions to a user. The dynamic guided setup system is configured to automatically recognize completion of a first setup step based on the detected physical arrangement of at least one surgical arm on a teleoperational assembly and automatically display a prompt for a subsequent setup step after recognizing completion of the first setup step.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: December 10, 2019
    Assignee: Intuitive Surgical Operations, Inc.
    Inventors: Michael Hanuschik, Julie L. Berry, Joseph Arsanious, Paul W. Mohr, Brandon D. Itkowitz, Paul G. Griffiths