Robotics Patents (Class 382/153)
  • Patent number: 11122314
    Abstract: Signals of an immersive multimedia item are jointly considered for optimizing the quality of experience for the immersive multimedia item. During encoding, portions of available bitrate are allocated to the signals (e.g., a video signal and an audio signal) according to the overall contribution of those signals to the immersive experience for the immersive multimedia item. For example, in the spatial dimension, multimedia signals are processed to determine spatial regions of the immersive multimedia item to render using greater bitrate allocations, such as based on locations of audio content of interest, video content of interest, or both. In another example, in the temporal dimension, multimedia signals are processed in time intervals to adjust allocations of bitrate between the signals based on the relative importance of such signals during those time intervals. Other techniques for bitrate optimizations for immersive multimedia streaming are also described herein.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: September 14, 2021
    Assignee: GOOGLE LLC
    Inventors: Neil Birkbeck, Balineedu Adsumilli, Damien Kelly
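    A minimal sketch of the temporal allocation idea in patent 11122314 above: for each time interval, the available bitrate is split between the video and audio signals in proportion to an importance score for that interval. The interval structure, the importance scores, and the proportional-split rule are illustrative assumptions, not the patented encoder logic.

    ```python
    # Illustrative only: proportional bitrate split per time interval,
    # not the allocation algorithm claimed in the patent.

    def allocate_bitrate(total_kbps, intervals):
        """For each time interval, split the available bitrate between the
        video and audio signals according to their relative importance."""
        allocations = []
        for interval in intervals:
            v, a = interval["video_importance"], interval["audio_importance"]
            total = v + a
            allocations.append({"t_start": interval["t_start"],
                                "video_kbps": total_kbps * v / total,
                                "audio_kbps": total_kbps * a / total})
        return allocations

    # Example: a dialogue-heavy interval gets more audio bitrate than an action interval.
    intervals = [
        {"t_start": 0.0, "video_importance": 0.8, "audio_importance": 0.2},
        {"t_start": 2.0, "video_importance": 0.4, "audio_importance": 0.6},
    ]
    print(allocate_bitrate(total_kbps=12000, intervals=intervals))
    ```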
  • Patent number: 11106932
    Abstract: The disclosure provides a method for extracting the boundary of a thin-walled part with small curvature based on a three-dimensional point cloud. The method includes: collecting point cloud data of a part and reducing the density of the point cloud data; performing Euclidean clustering to divide the data into point cloud pieces; obtaining a triangular mesh surface for each point cloud piece by triangulation; extracting the boundary vertices of each triangular mesh surface to obtain its contour and selecting the contour of the part from among all contours; searching with each point on the contour as a center to form a three-dimensional boundary point cloud band; and projecting the three-dimensional boundary point cloud band onto a plane, extracting ordered two-dimensional boundary points within the plane, and arranging the corresponding points in the three-dimensional boundary point cloud band according to the order of the boundary points within the plane to obtain ordered boundary points in the three-dimensional boundary point cloud band.
    Type: Grant
    Filed: June 13, 2020
    Date of Patent: August 31, 2021
    Assignee: HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Wenlong Li, Cheng Jiang, Gang Wang, Zelong Peng, Han Ding
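    A minimal sketch of the final step described in patent 11106932 above: projecting a 3D boundary point cloud band onto a best-fit plane, ordering the projected 2D points, and carrying that order back to the 3D points. Plane fitting by PCA and ordering by polar angle are simplifying assumptions; the patent's own extraction and ordering rules are not reproduced here.

    ```python
    import numpy as np

    def order_boundary_band(points_3d):
        """Project a 3D boundary point cloud band onto its best-fit plane,
        order the projected points by polar angle, and return the 3D points
        arranged in that order (illustrative stand-in for the patented method)."""
        centroid = points_3d.mean(axis=0)
        centered = points_3d - centroid
        # Best-fit plane via PCA: the two leading right-singular vectors span the plane.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        plane_axes = vt[:2]                      # shape (2, 3)
        uv = centered @ plane_axes.T             # 2D projection of each point
        # Order the 2D points by angle around the centroid (works for convex-ish bands).
        angles = np.arctan2(uv[:, 1], uv[:, 0])
        return points_3d[np.argsort(angles)]

    # Example: a noisy circular boundary band lying roughly in a plane.
    theta = np.random.uniform(0, 2 * np.pi, 200)
    ring = np.stack([np.cos(theta), np.sin(theta), 0.05 * np.random.randn(200)], axis=1)
    ordered = order_boundary_band(ring)
    print(ordered.shape)  # (200, 3), now in angular order around the boundary
    ```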
  • Patent number: 11091264
    Abstract: An automated commissioning and floorplan configuration (CAFC) device can include a CAFC system having a transceiver and a CAFC engine, where the transceiver communicates with at least one device disposed in a volume of space, where the CAFC engine, based on communication between the transceiver and the at least one device, commissions the at least one device.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: August 17, 2021
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Jonathan Andrew Whitten, Michael Alan Lunn
  • Patent number: 11094082
    Abstract: A plurality of verification position/orientation candidates for a target object is set. A common structure model including a geometric feature of a part, among geometric features of a reference model representing a three-dimensional shape of the target object, that is common among the candidates is generated. An image including the target object is obtained. A position/orientation of the target object is estimated by verifying the common structure model and the reference model arranged at the plurality of verification position/orientation candidates, against the image.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: August 17, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Fukashi Yamazaki, Daisuke Kotake
  • Patent number: 11072067
    Abstract: Robots and robotic systems and methods can employ artificial neural networks (ANNs) to significantly improve performance. The ANNs can operate alternatingly in forward and backward directions in interleaved fashion. The ANNs can employ visible units and hidden units. Various objective functions can be optimized. Robots and robotic systems and methods can execute applications including a plurality of agents in a distributed system, for instance with a number of hosts executing respective agents, at least some of the agents in communications with one another. The hosts can execute agents in response to occurrence of defined events or trigger expressions, and can operate with a maximum latency guarantee and/or data quality guarantee.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: July 27, 2021
    Assignee: KINDRED SYSTEMS INC.
    Inventor: James Sterling Bergstra
  • Patent number: 11070713
    Abstract: Techniques are described for controlling the process of capturing three-dimensional (3D) video content. For example, a controller can provide centralized control over the various components that participate in the capture and processing of the 3D video content. For example, the controller can establish connections with a number of components (e.g., running on other computing devices). The controller can receive state update messages from the components (e.g., comprising state change information, network address information, etc.). The controller can also broadcast messages to the components. For example, the controller can broadcast system state messages to the components, where the system state messages comprise current state information of the components. The controller can also broadcast other types of messages, such as start messages that instruct the components to enter a start state.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: July 20, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Spencer G Fowers
  • Patent number: 11058857
    Abstract: A method for automatically producing precise tattoo markings on any anatomical body portion by providing a controlled articulated arm carrying a tattoo machine implement. The method also provides a multi-axis positioning platform for supporting and positioning a person receiving a tattoo in a prime, optimal, and comfortable position. The method further provides choosing, with a selector, a tattoo of choice from any data source of images, as well as applying, rectifying, and mapping, with a physical or virtual design projection and visualization media, the chosen tattoo to the person. The method completes the tattoo using the articulated arm with the tattoo machine implement, producing a precise, accurate, and aesthetically pleasing tattoo automatically.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: July 13, 2021
    Assignee: Set Point Solutions, LLC
    Inventor: Joseph Harrington Matanane Brown
  • Patent number: 11055562
    Abstract: In an example, a system for registering a three-dimensional (3D) pose of a workpiece relative to a robotic device is disclosed. The system comprises the robotic device, where the robotic device comprises one or more mounted lasers. The system also comprises one or more sensors configured to detect laser returns from laser rays projected from the one or more mounted lasers and reflected by the workpiece. The system also comprises a processor configured to receive a tessellation of the workpiece, wherein the tessellation comprises a 3D representation of the workpiece made up of cells, convert the laser returns into a 3D point cloud in a robot frame, based on the 3D point cloud, filter visible cells of the tessellation of the workpiece to form a tessellation included set, and solve for the 3D pose of the workpiece relative to the robotic device based on the tessellation included set.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: July 6, 2021
    Assignee: The Boeing Company
    Inventors: Phillip Haeusler, Alexandre Desbiez
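    A minimal sketch related to the pose-solving step in patent 11055562 above: given laser-return points expressed in the robot frame and corresponding points on the workpiece model, a rigid transform can be recovered with the standard SVD (Kabsch) method. The correspondence step, the visibility filtering of tessellation cells, and the patent's actual solver are omitted; this is only the textbook closed-form alignment.

    ```python
    import numpy as np

    def rigid_transform(model_pts, robot_frame_pts):
        """Closed-form (Kabsch/SVD) rigid transform mapping model points onto
        measured points; assumes point correspondences are already known."""
        mu_m = model_pts.mean(axis=0)
        mu_r = robot_frame_pts.mean(axis=0)
        H = (model_pts - mu_m).T @ (robot_frame_pts - mu_r)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_r - R @ mu_m
        return R, t

    # Example: recover a known pose from synthetic "laser returns".
    rng = np.random.default_rng(0)
    model = rng.uniform(-1, 1, size=(50, 3))
    angle = np.deg2rad(20)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([0.5, -0.2, 1.0])
    measured = model @ R_true.T + t_true
    R_est, t_est = rigid_transform(model, measured)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
    ```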
  • Patent number: 11040441
    Abstract: A robot in a location interacts with a user. The robot includes a camera, an image recognition processor, a microphone and a loudspeaker, a voice assistant, and a wireless transceiver. The robot moves around and creates a model of the location, and recognizes changes. It recognizes objects of interest, beings, and situations. The robot monitors the user and recognizes body language and gesture commands, as well as voice commands. The robot communicates with the user, the TV, and other devices. It may move around to monitor for regular and non-regular situations. It anticipates user commands based on a situation. It determines if a situation is desired, and mitigates the situation if undesired. It can seek immediate help for the user in an emergency. It can capture, record, categorize and document events as they happen. It can categorize and document objects in the location.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: June 22, 2021
    Assignee: Sony Group Corporation
    Inventors: David Young, Lindsay Miller, Lobrenzo Wingo, Marvin DeMerchant
  • Patent number: 11032166
    Abstract: Systems, methods and articles of manufacture that handle secondary robot commands in robot swarms may operate by receiving, at a receiving device in a swarm of devices, a packet included in a signal broadcast within an environment from a transmitting device in the swarm of devices; parsing the packet for a command associated with a primary effect and a secondary effect; in response to determining that the receiving device is paired with the transmitting device, implementing, by the receiving device, the primary effect; and in response to determining that the receiving device is not paired with the transmitting device, implementing, by the receiving device, the secondary effect.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: June 8, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Nathan D. Nocon, Michael P. Goslin, Janice K. Rosenthal, Corey D. Drake
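    A minimal sketch of the dispatch logic described in patent 11032166 above: a receiving device parses a broadcast packet for a command carrying a primary and a secondary effect, and implements one or the other depending on whether it is paired with the transmitting device. The packet layout, pairing model, and effect names are invented for illustration.

    ```python
    import json

    # Hypothetical packet format; the patent does not specify one.
    packet = json.dumps({
        "sender_id": "robot-07",
        "command": "celebrate",
        "primary_effect": "play_fanfare",      # for the paired device
        "secondary_effect": "flash_lights",    # for every other device in the swarm
    })

    class SwarmDevice:
        def __init__(self, device_id, paired_with):
            self.device_id = device_id
            self.paired_with = paired_with      # transmitter ids this device is paired with

        def on_broadcast(self, raw_packet):
            msg = json.loads(raw_packet)
            # Paired devices implement the primary effect, all others the secondary effect.
            if msg["sender_id"] in self.paired_with:
                effect = msg["primary_effect"]
            else:
                effect = msg["secondary_effect"]
            print(f"{self.device_id}: executing {effect}")

    SwarmDevice("robot-01", paired_with={"robot-07"}).on_broadcast(packet)  # primary
    SwarmDevice("robot-02", paired_with=set()).on_broadcast(packet)         # secondary
    ```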
  • Patent number: 11022980
    Abstract: Provided are a communication relationship establishing method and device, a computer-readable storage medium, an electronic device, and a cleaning device.
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: June 1, 2021
    Assignee: Shenzhen 3irobotix Co., Ltd.
    Inventors: Yong Yang, Zexiao Wu, Yuhui Song
  • Patent number: 11014243
    Abstract: A system and method of instructing a device is disclosed. The system includes a signal source for providing at least one visual signal where the at least one visual signal is substantially indicative of at least one activity to be performed by the device. A visual signal capturing element captures the at least one visual signal and communicates the at least one visual signal to the device where the device interprets the at least one visual signal and performs the activity autonomously and without requiring any additional signals or other information from the signal source.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: May 25, 2021
    Assignee: VECNA ROBOTICS, INC.
    Inventor: Neal Checka
  • Patent number: 11001444
    Abstract: An automated storage and retrieval system including at least one autonomous rover for transferring payload within the system and including a communicator, a multilevel storage structure, each level allowing traversal of the at least one autonomous rover, at least one registration station disposed at predetermined locations on each level and being configured to communicate with the communicator to at least receive rover identification information, and a controller in communication with the at least one registration station and configured to receive at least the rover identification information and at least one of register the at least one autonomous rover as being on a level corresponding to a respective one of the at least one registration station or deregister the at least one autonomous rover from the system, where the controller effects induction of the at least one autonomous rover into a predetermined rover space on the level.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: May 11, 2021
    Assignee: Symbotic LLC
    Inventors: Forrest Buzan, Edward A. MacDonald, Taylor A. Apgar, Thomas A. Schaefer, Melanie Ziegler, Russell G. Barbour
  • Patent number: 10997744
    Abstract: The present invention relates to a localization method and system for providing augmented reality on mobile devices. The method includes sub-sampling image data acquired from a camera of the mobile device and extracting an image patch including lines and points from the low-resolution image data; matching feature pairs of point features between the image patch and a previous image patch according to movement of the camera, and producing a subpixel line for the image patch; and estimating a location of the camera of the mobile device based on the difference between the produced line and a line estimated by inertia.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: May 4, 2021
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Hyeon Myeong, Kwang Yik Jung, Pillip Youn, Yeeun Kim, HyunJun Lim, Seungwon Song
  • Patent number: 10997729
    Abstract: In one embodiment, a method, apparatus, and system may predict behavior of environmental objects using machine learning at an autonomous driving vehicle (ADV). A data processing architecture comprising at least a first neural network and a second neural network is generated, the first and the second neural networks having been trained with a training data set. Behavior of one or more objects in the ADV's environment is predicted using the data processing architecture comprising the trained neural networks. Driving signals are generated based at least in part on the predicted behavior of the one or more objects in the ADV's environment to control operations of the ADV.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: May 4, 2021
    Assignee: BAIDU USA LLC
    Inventors: Liangliang Zhang, Hongyi Sun, Dong Li, Jiangtao Hu, Jinghao Miao
  • Patent number: 10984547
    Abstract: Various embodiments provide systems, methods, devices, and instructions for performing simultaneous localization and mapping (SLAM) that involve initializing a SLAM process using images from as few as two different poses of a camera within a physical environment. Some embodiments may achieve this by disregarding errors in matching corresponding features depicted in image frames captured by an image sensor of a mobile computing device, and by updating the SLAM process in a way that causes the minimization process to converge to global minima rather than fall into a local minimum.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: April 20, 2021
    Assignee: Snap Inc.
    Inventors: David Ben Ezra, Eyal Zak, Ozi Egri
  • Patent number: 10974391
    Abstract: Apparatus and methods for carpet drift estimation are disclosed. In certain implementations, a robotic device includes an actuator system to move the body across a surface. A first set of sensors can sense an actuation characteristic of the actuator system. For example, the first set of sensors can include odometry sensors for sensing wheel rotations of the actuator system. A second set of sensors can sense a motion characteristic of the body. The first set of sensors may be a different type of sensor than the second set of sensors. A controller can estimate carpet drift based at least on the actuation characteristic sensed by the first set of sensors and the motion characteristic sensed by the second set of sensors.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: April 13, 2021
    Assignee: iRobot Corporation
    Inventors: Dhiraj Goel, Ethan Eade, Philip Fong, Mario E. Munich
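    A minimal sketch in the spirit of patent 10974391 above: the displacement predicted from wheel odometry (the actuation characteristic) is compared against the displacement indicated by an independent motion sensor (the motion characteristic), and the accumulated difference is treated as a carpet-drift estimate. The sensor models and the simple averaging used here are illustrative assumptions, not iRobot's estimator.

    ```python
    import numpy as np

    def estimate_carpet_drift(odometry_steps, measured_steps):
        """Accumulate the per-step difference between motion predicted from wheel
        odometry and motion reported by an independent motion sensor; the mean
        residual per step is taken as the drift induced by the carpet."""
        odometry_steps = np.asarray(odometry_steps)   # (N, 2) dx, dy from wheel encoders
        measured_steps = np.asarray(measured_steps)   # (N, 2) dx, dy from the second sensor
        residuals = measured_steps - odometry_steps
        return residuals.mean(axis=0), residuals.sum(axis=0)  # drift per step, total offset

    # Example: odometry says the robot moves straight along +x, but the carpet
    # pile pushes it slightly in +y on every step.
    odo = [(0.10, 0.00)] * 20
    meas = [(0.10, 0.008)] * 20
    per_step, total = estimate_carpet_drift(odo, meas)
    print(per_step, total)   # ~[0, 0.008] per step, ~[0, 0.16] accumulated
    ```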
  • Patent number: 10977775
    Abstract: A depth decoding system and a method for rectifying a ground-truth image are introduced. The depth decoding system includes a projector, a camera, a processor and a decoder. The projector is configured to project a structural light pattern onto a first reference plane and a second reference plane. The camera is configured to capture a first ground-truth image from the first reference plane and capture a second ground-truth image from the second reference plane. The processor is configured to perform a rectification operation on the first ground-truth image and the second ground-truth image to generate a rectified ground-truth image. The decoder is configured to generate a depth result according to the rectified ground-truth image.
    Type: Grant
    Filed: July 7, 2019
    Date of Patent: April 13, 2021
    Assignee: HIMAX TECHNOLOGIES LIMITED
    Inventors: Chin-Jung Tsai, Yu-Hsuan Chu, Cheng-Hung Chi, Ming-Shu Hsiao, Nai-Ting Chang, Yi-Nung Liu
  • Patent number: 10970877
    Abstract: An image processing apparatus, an image processing method, and a program that permit camera calibration with high accuracy by using a known object in images captured by a plurality of imaging sections. An estimation section estimates a 3D position of a road sign included in each of the images captured by a plurality of cameras with respect to each of the imaging sections. A recognition section recognizes a positional relationship between the plurality of cameras on the basis of the 3D position of the road sign with respect to each of the cameras estimated by the estimation section. The positional relationship between the plurality of cameras recognized by the recognition section is used to correct the images captured by the plurality of cameras.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: April 6, 2021
    Assignee: Sony Corporation
    Inventors: Masashi Eshima, Akihiko Kaino, Takaaki Kato, Shingo Tsurumi
  • Patent number: 10962487
    Abstract: A flaw detecting apparatus and a method for a plane mirror based on line scanning and ring band stitching are provided. The flaw detecting apparatus comprises: a line scanning detector, an annular illumination source, a rotary table rotatable about a Z axis, a translation table translatable along an X axis and a processor. By translating and rotating the plane mirror to be detected, an entire surface of the plane mirror to be detected can be detected by the line scanning detector, and the flaw of the entire plane mirror to be detected is obtained by a ring band stitching method. The method of line scanning and ring band stitching reduces the imaging distortion, the intermediate data amount, the difficulty in the distortion correction and difficulty in stitching, and improves the detection speed and the detection quality.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: March 30, 2021
    Assignee: The Institute of Optics and Electronics, The Chinese Academy of Sciences
    Inventors: Fuchao Xu, Haiyang Quan, Taotao Fu, Xiaochuan Hu, Xi Hou, Sheng Li
  • Patent number: 10940591
    Abstract: This method is for calibrating a coordinate system of an image capture device and a coordinate system of a robot arm in a robot system that includes a display device, the image capture device, and the robot arm to which one of the display device and the image capture device is fixed, the robot arm having a drive shaft. The method includes: acquiring first captured image data based on first image data; acquiring second captured image data based on second image data different from the first image data; and calibrating the coordinate system of the image capture device and the coordinate system of the robot arm, using the first captured image data and the second captured image data.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: March 9, 2021
    Assignee: OMRON Corporation
    Inventor: Norikazu Tonogai
  • Patent number: 10931933
    Abstract: An operation method of a calibration guidance system includes a feature extraction unit executing a feature extraction operation on a first image group including a first object captured by a multi-camera system to generate a first feature point group corresponding to a predetermined position within an image capture range of the multi-camera system; and a guidance unit determining whether to generate a direction indication to guide the first object to another predetermined position within an image capture range according to a first comparison result between a block corresponding to feature points of the first feature point group of the predetermined position and a predetermined block when a number of the feature points of the first feature point group is greater than a predetermined number.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: February 23, 2021
    Assignee: eYs3D Microelectronics, Co.
    Inventor: Chi-Feng Lee
  • Patent number: 10906176
    Abstract: A teaching apparatus configured to include a display device and perform a teaching operation for a robot includes a template storage section configured to store a plurality of templates corresponding to a plurality of programs of the robot, a program explanatory content storage section configured to store plural pieces of explanatory content for explaining the respective plurality of programs, a template display section configured to display the plurality of templates stored in the template storage section on the display device, a template selection section configured to select one template from the plurality of templates displayed on the template display section, and a program explanatory content display section configured to read out the explanatory content of the program corresponding to the one template selected by the template selection section from the program explanatory content storage section and configured to display the explanatory content on the display device.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: February 2, 2021
    Assignee: Fanuc Corporation
    Inventors: Yuusuke Kurihara, Tomoyuki Yamamoto
  • Patent number: 10908606
    Abstract: A system for autonomously navigating a vehicle along a road segment may include at least one processor. The at least one processor may be programmed to receive from at least one sensor information relating to one or more aspects of the road segment. The processor may also be programmed to determine a local feature of the road segment based on the received information. Further the processor may be programmed to compare the local feature to a predetermined signature feature for the road segment. The processor may be programmed to determine a current location of the vehicle along a predetermined road model trajectory associated with the road segment based on the comparison of the received information and the predetermined signature feature. The processor may also be programmed to determine an autonomous steering action for the vehicle based on a direction of the predetermined road model trajectory at the determined location.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: February 2, 2021
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Gideon Stein, Ofer Springer, Andras Ferencz
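    A minimal sketch loosely following patent 10908606 above: a locally observed feature profile is compared against a predetermined signature feature stored along a road model trajectory, and the offset giving the best match is taken as the current location along the trajectory. Representing the signature as a 1D array and matching by sum of squared differences are simplifying assumptions.

    ```python
    import numpy as np

    def localize_along_trajectory(signature_profile, local_feature, step_m=1.0):
        """Slide the locally sensed feature window along the stored signature
        profile and return the offset (in meters) with the smallest mismatch."""
        signature_profile = np.asarray(signature_profile, dtype=float)
        local_feature = np.asarray(local_feature, dtype=float)
        n, m = len(signature_profile), len(local_feature)
        errors = [np.sum((signature_profile[i:i + m] - local_feature) ** 2)
                  for i in range(n - m + 1)]
        best = int(np.argmin(errors))
        return best * step_m, errors[best]

    # Example: a lane-width signature stored every metre along the road segment;
    # the vehicle observes a short local stretch and recovers where it is.
    signature = np.concatenate([np.full(40, 3.5), np.full(10, 3.2), np.full(40, 3.5)])
    observed = np.full(10, 3.2) + 0.01 * np.random.randn(10)
    position_m, err = localize_along_trajectory(signature, observed)
    print(position_m)   # ~40 m along the predetermined road model trajectory
    ```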
  • Patent number: 10885661
    Abstract: Disclosed are systems and methods for determining a location of a customer within a store. The systems and methods may include receiving at least one image of an item located in the store. The item may be held by the customer. The systems and methods may also include creating a feature vector. The feature vector may store features of the at least one image of the item. The location of the customer may be determined using features stored in the feature vector.
    Type: Grant
    Filed: December 15, 2018
    Date of Patent: January 5, 2021
    Assignee: NCR Corporation
    Inventors: Pavani Lanka, Samak Radha
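    A minimal sketch of the matching idea in patent 10885661 above: features extracted from an image of the item the customer is holding are stored in a feature vector and compared against reference vectors for items stocked at known locations, and the best match gives the customer's probable location. The feature extraction itself is omitted, and cosine similarity is an assumed matching rule.

    ```python
    import numpy as np

    def locate_customer(item_feature_vector, reference_items):
        """Return the store location whose reference item feature vector is most
        similar (by cosine similarity) to the vector extracted from the image."""
        v = np.asarray(item_feature_vector, dtype=float)
        best_location, best_score = None, -np.inf
        for location, ref in reference_items.items():
            ref = np.asarray(ref, dtype=float)
            score = v @ ref / (np.linalg.norm(v) * np.linalg.norm(ref))
            if score > best_score:
                best_location, best_score = location, score
        return best_location, best_score

    # Example with tiny hypothetical 4-dimensional feature vectors.
    references = {
        "aisle 3 (cereal)": [0.9, 0.1, 0.0, 0.2],
        "aisle 7 (dairy)":  [0.1, 0.8, 0.3, 0.0],
    }
    print(locate_customer([0.85, 0.15, 0.05, 0.18], references))
    ```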
  • Patent number: 10875186
    Abstract: A robot system for performing drive control of a robot arm with respect to a target object according to information obtained by a camera, including a robot having a working section, a camera mounted in the vicinity of the working section, and a control device for controlling the driving of the robot while confirming the target object based on image data of the camera, is provided. The control device performs image-capture control, which executes image-capturing of the target object with the camera a plurality of times when moving the working section with respect to the target object according to a predetermined trajectory, and focus control, in which predetermined images within a plurality of images captured by image-capture control are in focus.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: December 29, 2020
    Assignee: FUJI CORPORATION
    Inventors: Yasuhiro Yamashita, Nobuo Oishi, Takayoshi Sakai
  • Patent number: 10875187
    Abstract: A robotic arm mounted camera system allows an end-user to begin using the camera for object recognition without involving a robotics specialist. Automated object model calibration is performed under conditions of variable robotic arm pose dependent feature recognition of an object. The user can then teach the system to perform tasks on the object using the calibrated model. The camera's body can have parallel top and bottom sides and adapted to be fastened to a robotic arm end and to an end effector with its image sensor and optics extending sideways in the body, and it can include an illumination source for lighting a field of view.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: December 29, 2020
    Assignee: ROBOTIQ INC.
    Inventors: Vincent Paquin, Marc-Antoine Lacasse, Yan Drolet-Mihelic, Jean-Philippe Mercier
  • Patent number: 10863668
    Abstract: This invention is a configurable ground utility robot (GURU) having at least the following parts: an all-terrain mobile apparatus; a payload accepting apparatus; an onboard processor; at least one sensor that communicates with said onboard processor; and at least one energy beam payload device connectable to the payload accepting apparatus, capable of creating an energy beam having enough power to elevate an internal temperature of a subject when the energy beam is focused on the subject, where the energy beam payload device communicates with the onboard processor. The ground utility robot also has a computer program that at least performs the following functions: receives and interprets data from the at least one sensor; controls the mobile apparatus; focuses the at least one energy beam on the subject; and controls the beam strength and time duration.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 15, 2020
    Assignee: DCENTRALIZED SYSTEMS, INC.
    Inventors: Georgios Chrysanthakopoulos, Adlai Felser
  • Patent number: 10853646
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for generating spatial affordances for an object, in an environment of a robot, and utilizing the generated spatial affordances in one or more robotics applications directed to the object. Various implementations relate to applying vision data as input to a trained machine learning model, processing the vision data using the trained machine learning model to generate output defining one or more spatial affordances for an object captured by the vision data, and controlling one or more actuators of a robot based on the generated output. Various implementations additionally or alternatively relate to training such a machine learning model.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: December 1, 2020
    Assignee: X DEVELOPMENT LLC
    Inventors: Adrian Li, Nicolas Hudson, Aaron Edsinger
  • Patent number: 10832078
    Abstract: An imaging system for localization and mapping of a scene including static and dynamic objects. A sensor acquires a sequence of frames while in motion or stationary. A memory stores a static map of static objects and an object map of each dynamic object in the scene. The static map includes a set of landmarks, and each object map includes a set of landmarks and a set of segments. A localizer registers keypoints of a frame with landmarks in the static map using frame-based registration and registers some segments in the frame with segments in an object map using segment-based registration. A mapper updates each object map with the keypoints forming each segment and the keypoints registered with the corresponding object map according to the segment-based registration, and updates the static map with the remaining keypoints in the frame using the keypoints registered with the static map.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: November 10, 2020
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Esra Cansizoglu, Sergio S Caccamo, Yuichi Taguchi
  • Patent number: 10823576
    Abstract: Systems and methods for robotic mapping are disclosed. In some exemplary implementations, a robot can travel in an environment. From travelling in the environment, the robot can create a graph comprising a plurality of nodes, wherein each node corresponds to a scan taken by a sensor of the robot at a location in the environment. In some exemplary implementations, the robot can generate a map of the environment from the graph. In some cases, to facilitate map generation, the robot can constrain the graph to start and end at a substantially similar location. The robot can also perform scan matching on extended scan groups, determined from identifying overlap between scans, to further determine the location of features in a map.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: November 3, 2020
    Assignee: Brain Corporation
    Inventors: Jaldert Rombouts, Borja Ibarz Gabardos, Jean-Baptiste Passot, Andrew Smith
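    A minimal sketch of one piece of patent 10823576 above: a graph of scan nodes accumulates odometry drift, and constraining the trajectory to start and end at substantially the same location lets the accumulated error be redistributed over the nodes. Linear drift distribution is a deliberately simple stand-in for the patent's graph optimization and scan matching.

    ```python
    import numpy as np

    def close_loop(node_positions):
        """Given node positions estimated by dead reckoning, where the last node
        should coincide with the first, spread the loop-closure error linearly over
        the trajectory so the corrected graph starts and ends at the same place."""
        nodes = np.asarray(node_positions, dtype=float)   # (N, 2) x, y per scan node
        closure_error = nodes[-1] - nodes[0]
        weights = np.linspace(0.0, 1.0, len(nodes))[:, None]
        return nodes - weights * closure_error

    # Example: a square loop whose odometry drifts so that it fails to close.
    raw = np.array([[0, 0], [5, 0.1], [5.1, 5.1], [0.2, 5.2], [0.3, 0.4]])
    corrected = close_loop(raw)
    print(corrected[0], corrected[-1])   # first and last nodes now coincide
    ```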
  • Patent number: 10819883
    Abstract: According to one or more embodiments, a system of generating a two-dimensional (2D) image of an environment includes a 2D scanner system that includes a measurement device that is mounted to a first body equipment of an operator and one or more processors that are mounted to a second body equipment of the operator. The measurement device includes a light source, an image sensor, and a controller to determine a distance value to one or more object points. The processors generate a 2D submap of the environment in response to an activation signal from the operator and based at least in part on the distance value, each submap generated from a respective point in the environment. Further, the processors generate a 2D image of the environment using multiple 2D submaps.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: October 27, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Oliver Zweigle, Ahmad Ramadneh, Muhammad Umair Tahir, Aleksej Frank, João Santos, Roland Raith
  • Patent number: 10807249
    Abstract: A robot including an arm, a force sensor attached to a distal end portion of the arm, a support member attached to the force sensor, a tool supported by the support member, a plurality of protruding portions for detecting posture, which protrude from the support member, and a controller which determines a situation where all of the protruding portions are in contact with a work object on which the tool performs a predetermined work based on detected values of the force sensor.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: October 20, 2020
    Assignee: FANUC CORPORATION
    Inventor: Yoshihito Wakebe
  • Patent number: 10776652
    Abstract: Described herein are systems and methods that use motion-related data combined with image data to improve the speed and accuracy of detecting visual features by predicting the locations of features using the motion-related data. In embodiments, given a set of features in a previous image frame and given a next image frame, localization of the same set of features in the next image frame is attempted. In embodiments, motion-related data is used to compute the relative pose transformation between the two image frames, and the image locations of the features may then be transformed to obtain their predicted locations in the next frame. Such a process greatly reduces the search space of the features in the next image frame, and thereby accelerates and improves feature detection.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: September 15, 2020
    Assignee: Baidu USA LLC
    Inventors: Yingze Bao, Mingyu Chen
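    A minimal sketch of the prediction step described in patent 10776652 above: the relative pose between two frames, computed from motion-related data, is used to transform known 3D feature positions and reproject them into the next frame, shrinking the search window for feature detection. A simple pinhole camera with an assumed intrinsic matrix stands in for the actual system.

    ```python
    import numpy as np

    def predict_feature_locations(points_3d_prev_cam, R_rel, t_rel, K):
        """Transform 3D feature points from the previous camera frame into the next
        camera frame using the relative pose from motion data, then project them
        with a pinhole model to predict their pixel locations in the next image."""
        pts_next = points_3d_prev_cam @ R_rel.T + t_rel   # points in next-frame coordinates
        proj = pts_next @ K.T                             # homogeneous pixel coordinates
        return proj[:, :2] / proj[:, 2:3]                 # perspective divide -> (u, v)

    # Example: an assumed intrinsic matrix and a small forward motion with slight yaw.
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])
    yaw = np.deg2rad(2.0)
    R_rel = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                      [0, 1, 0],
                      [-np.sin(yaw), 0, np.cos(yaw)]])
    t_rel = np.array([0.0, 0.0, -0.1])                    # camera moved 0.1 m forward
    features_3d = np.array([[0.5, -0.2, 4.0], [-0.8, 0.1, 6.0]])
    print(predict_feature_locations(features_3d, R_rel, t_rel, K))
    ```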
  • Patent number: 10748318
    Abstract: A system and method of generating a two-dimensional (2D) image of an environment is provided. The system includes a 2D scanner having a controller that determines a distance value to at least one of the object points. One or more processors are operably coupled to the 2D scanner, the one or more processors being responsive to nontransitory executable instructions for generating a plurality of 2D submaps of the environment based at least in part on the distance value, each submap generated from a different point in the environment. A map editor is provided that is configured to: select a subset of submaps from the plurality of 2D submaps; and generate the 2D image of the environment using the subset of 2D submaps. The method provides for realigning of 2D submaps to improve the quality of a global 2D map.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 18, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: João Santos, Ahmad Ramadneh, Aleksej Frank, Oliver Zweigle
  • Patent number: 10719727
    Abstract: A method for determining at least one property related to at least part of a real environment comprises receiving a first image of a first part of a real environment captured by a first camera, wherein the first camera is a thermal camera and the first image is a thermal image and the first part of the real environment is a first environment part, providing at least one description related to at least one class of real objects, wherein the at least one description includes at least one thermal property related to the at least one class of real objects, receiving a second image of the first environment part and of a second part of the real environment captured by a second camera, wherein the second part of the real environment is a second environment part, providing an image alignment between the first image and the second image, determining, for at least one second image region contained in the second image, at least one second probability according to the image alignment, pixel information of the first image
    Type: Grant
    Filed: October 1, 2014
    Date of Patent: July 21, 2020
    Assignee: Apple Inc.
    Inventors: Darko Stanimirovic, Daniel Kurz
  • Patent number: 10710244
    Abstract: A method and a device for operating a robot are provided. According to an example of the method, information of a first gesture is acquired from a group of gestures of an operator, each gesture from the group of gestures corresponding to an operation instruction from a group of operation instructions. A first operation instruction from the group of operation instructions is obtained based on the acquired information of the first gesture, the first operation instruction corresponding to the first gesture. The first operation instruction is executed.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: July 14, 2020
    Assignee: Beijing Airlango Technology Co., Ltd.
    Inventors: Yinian Mao, Xinmin Liu
  • Patent number: 10682764
    Abstract: A robotic system includes a robot having an associated workspace; a vision sensor constructed to obtain a 3D image of a robot scene including a workpiece located in the workspace; and a control system communicatively coupled to the vision sensor and to the robot. The control system is configured to execute program instructions to filter the image by segmenting the image into a first image portion containing substantially only a region of interest within the robot scene, and a second image portion containing the balance of the robot scene outside the region of interest; and by storing image data associated with the first image portion. The control system is operative to control movement of the robot to perform work on the workpiece based on the image data associated with the first image portion.
    Type: Grant
    Filed: July 30, 2017
    Date of Patent: June 16, 2020
    Assignee: ABB Schweiz AG
    Inventors: Remus Boca, Thomas A. Fuhlbrigge
  • Patent number: 10675761
    Abstract: An improved method, system, and apparatus are provided to implement a general architecture for robot systems. A mode execution module is provided to universally execute execution modes on different robotic systems. A system includes an execution module that receives software instructions in a normalized programming language. The system also includes an interface having a translation layer that converts the software instructions from the normalized language into robot-specific instructions that operate in a particular robotic system. The system further includes a controller that is communicatively coupled to the interface, wherein the controller receives the robot-specific instructions. Moreover, the system includes a robotic device that is operatively controlled by the controller by execution of the robot-specific instructions.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: June 9, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Frederick Dennis Zyda, Jeffrey Steven Kranski, Vikram Chauhan
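    A minimal sketch of the architecture described in patent 10675761 above: an execution module accepts instructions in a normalized form, and an interface's translation layer converts them into robot-specific instructions for the controller. The normalized instruction names and the two target robot dialects are invented for illustration.

    ```python
    # Hypothetical normalized instructions and robot dialects, for illustration only.

    class TranslationLayer:
        """Converts normalized instructions into robot-specific instructions."""
        def __init__(self, dialect):
            self.dialect = dialect   # mapping: normalized op -> robot-specific template

        def translate(self, instruction):
            return self.dialect[instruction["op"]].format(**instruction)

    # Two different robot dialects expressing the same normalized program.
    DIALECT_A = {"move_to": "MOVJ X={x} Y={y} Z={z}", "grip": "GRIPPER CLOSE F={force}"}
    DIALECT_B = {"move_to": "ptp(pose({x}, {y}, {z}))", "grip": "close_gripper({force})"}

    normalized_program = [
        {"op": "move_to", "x": 0.3, "y": 0.1, "z": 0.25},
        {"op": "grip", "force": 20},
    ]

    for dialect in (DIALECT_A, DIALECT_B):
        layer = TranslationLayer(dialect)
        robot_specific = [layer.translate(step) for step in normalized_program]
        print(robot_specific)   # same program, two robot-specific instruction sets
    ```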
  • Patent number: 10661811
    Abstract: A vehicle information display control device includes: an automatic driving information obtaining unit obtaining automatic driving information including information indicating that each actuator of a vehicle is in a manual control mode or an automatic control mode; and a display controller causing a display to display an image based on the automatic driving information. The display controller causes the display to display: an image of a manual driving device corresponding to the actuator; a manual-operation recalling image that is superimposed on the image of the manual driving device corresponding to an actuator in the manual control mode and that recalls an operation performed by a person; and an automatic-operation recalling image that is superimposed on the image of the manual driving device corresponding to an actuator in the automatic control mode and that recalls an operation performed by a machine.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: May 26, 2020
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Tadashi Miyahara, Mitsuo Shimotani, Satoru Inoue, Yoshio Sato, Yuki Sakai, Yuji Karita, Junichi Kimura
  • Patent number: 10657419
    Abstract: Machine vision methods and systems determine if an object within a work field has one or more predetermined features. Methods comprise capturing image data of the work field, applying a filter to the image data, in which the filter comprises an aspect corresponding to a specific feature, and based at least in part on the applying, determining if the object has the specific feature. Systems comprise a camera system configured to capture image data of the work field, and a controller communicatively coupled to the camera system and comprising non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to apply a filter to the image data, and based at least in part on applying the filter, determine if the object has the specific feature. Robotic installation methods and systems that utilize machine vision methods and systems also are disclosed.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: May 19, 2020
    Assignee: The Boeing Company
    Inventors: Tyler Edward Kurtz, Riley Harrison HansonSmith
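    A minimal sketch in the spirit of patent 10657419 above: a filter whose aspect corresponds to a specific feature (here, a small template patch) is applied to image data, and the object is judged to have the feature when the filter response exceeds a threshold. Normalized cross-correlation and the threshold value are assumptions, not the claimed method.

    ```python
    import numpy as np

    def has_feature(image, template, threshold=0.9):
        """Slide a feature template over the image, compute a normalized
        correlation score at each position, and report whether any position
        matches the feature above the given threshold."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        best = -1.0
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                patch = image[i:i + th, j:j + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-9)
                best = max(best, float((p * t).mean()))
        return best >= threshold, best

    # Example: a bright cross-shaped feature embedded in a noisy work-field image.
    template = np.zeros((5, 5)); template[2, :] = 1.0; template[:, 2] = 1.0
    image = 0.05 * np.random.rand(40, 40)
    image[10:15, 20:25] += template
    print(has_feature(image, template))   # (True, score close to 1.0)
    ```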
  • Patent number: 10656646
    Abstract: An example method includes determining a target area of a ground plane in an environment of a mobile robotic device, where the target area of the ground plane is in front of the mobile robotic device in a direction of travel of the mobile robotic device. The method further includes receiving depth data from a depth sensor on the mobile robotic device. The method also includes identifying a portion of the depth data representative of the target area. The method additionally includes determining that the portion of the depth data lacks information representing at least one section of the target area. The method further includes providing an output signal identifying at least one zone of non-traversable space for the mobile robotic device in the environment, where the at least one zone of non-traversable space corresponds to the at least one section of the target area.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: May 19, 2020
    Assignee: X Development LLC
    Inventors: Kevin William Watts, Kurt Konolige
  • Patent number: 10640211
    Abstract: An automated commissioning and floorplan configuration (CAFC) device can include a CAFC system having a transceiver and a CAFC engine, where the transceiver communicates with at least one device disposed in a volume of space, where the CAFC engine, based on communication between the transceiver and the at least one device, commissions the at least one device.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: May 5, 2020
    Assignee: Eaton Intelligent Power Limited
    Inventors: Jonathan Andrew Whitten, Michael Alan Lunn
  • Patent number: 10617857
    Abstract: The invention generally relates to a raster injector for micropigmentation and use thereof, and more particularly to a raster injector that precisely inserts pigment, medicine or other fluids into skin or other industrial materials. The raster injector is connected to a motion platform that enables the injector to inject pigment or other fluids into the layers of the skin or other substrates through micro, hollow point needles. The raster injector includes a disposable, interchangeable ink delivery assembly, an ink reservoir assembly and a non-disposable base assembly.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: April 14, 2020
    Assignee: Odd Pixel Tattooing Technologies, Inc.
    Inventors: Lisa L. Phillips, Drew T. Morgan
  • Patent number: 10620608
    Abstract: A pick-and-place machine module is provided. The pick-and-place machine module includes a nozzle and a collet disk. The nozzle includes a body, a head and a tubular element extending between the body and the head such that the head is communicative with the body via the tubular element to enable a pick-up of a component by the head. The collet disk is affixed to a surface of the body facing the head about the tubular element and is configured to reflect light incident thereon toward an area of the base surrounding the component.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: April 14, 2020
    Assignee: RAYTHEON COMPANY
    Inventors: Kristopher W. Carlson, Christopher S. Bender
  • Patent number: 10596706
    Abstract: A mechanism-parametric-calibration method for a robotic arm system is provided, including: controlling the robotic arm to perform a plurality of actions so that one end of the robotic arm moves toward corresponding predictive positioning-points; determining a predictive relative-displacement between each two of the predictive positioning-points; after each of the actions is performed, sensing three-dimensional positioning information of the end of the robotic arm; determining, according to the three-dimensional positioning information, a measured relative-displacement moved by the end of the robotic-arm when each two of the actions are performed; deriving an equation corresponding to the robotic arm from the predictive relative-displacements and the measured relative-displacements; and utilizing a feasible algorithm to find the solution of the equation.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: March 24, 2020
    Assignee: DELTA ELECTRONICS, INC.
    Inventor: Cheng-Hao Huang
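    A minimal sketch of the calibration idea in patent 10596706 above, reduced to a planar two-link arm: the arm is commanded through several actions, the relative end-point displacements predicted from nominal link lengths are compared with measured relative displacements, and a least-squares fit recovers the true link lengths. The two-link model and the use of scipy.optimize.least_squares are illustrative assumptions; the patent is not limited to this model or solver.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def tip_position(link_lengths, joint_angles):
        """Forward kinematics of a planar 2-link arm."""
        l1, l2 = link_lengths
        q1, q2 = joint_angles
        return np.array([l1 * np.cos(q1) + l2 * np.cos(q1 + q2),
                         l1 * np.sin(q1) + l2 * np.sin(q1 + q2)])

    # Commanded joint-angle sets for a series of actions.
    poses = [np.array(a) for a in [(0.0, 0.0), (0.3, 0.5), (0.8, -0.4), (1.2, 0.9), (-0.5, 1.1)]]

    # "Measured" relative displacements between consecutive poses, generated here
    # from the (unknown) true link lengths; in practice they come from a 3D sensor.
    true_lengths = np.array([0.415, 0.388])
    measured_disp = [tip_position(true_lengths, b) - tip_position(true_lengths, a)
                     for a, b in zip(poses, poses[1:])]

    def residuals(link_lengths):
        """Difference between predicted and measured relative displacements."""
        predicted = [tip_position(link_lengths, b) - tip_position(link_lengths, a)
                     for a, b in zip(poses, poses[1:])]
        return np.concatenate([p - m for p, m in zip(predicted, measured_disp)])

    nominal = np.array([0.40, 0.40])                  # nominal mechanism parameters
    solution = least_squares(residuals, nominal)
    print(solution.x)                                 # close to [0.415, 0.388]
    ```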
  • Patent number: 10593065
    Abstract: A method includes acquiring a plurality of training images through a capturing component, acquiring a plurality of training camera poses of the capturing component corresponding to the training images through a pose sensor disposed corresponding to the capturing component, and training a camera pose estimation model according to the training images and the training camera poses of the capturing component.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: March 17, 2020
    Assignee: HTC Corporation
    Inventors: Che-Han Chang, Edward Chang
  • Patent number: 10589421
    Abstract: The present invention relates to a mechanical energy transfer system that comprises a moving member that imparts energy to an engaging member as a friction force is applied to the moving member. The moving members may be rotating plates/disks, belts, or drums/cylinders. Upon engagement of the moving members by the engaging members, movement is transferred to the engaging members. Movement transfer may occur through the application of any frictional force, such as a magnetic force or a mechanical force.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: March 17, 2020
    Inventor: Douglas H. DeCandia
  • Patent number: 10594917
    Abstract: Techniques are described for controlling the process of capturing three-dimensional (3D) video content. For example, a controller can provide centralized control over the various components that participate in the capture and processing of the 3D video content. For example, the controller can establish connections with a number of components (e.g., running on other computing devices). The controller can receive state update messages from the components (e.g., comprising state change information, network address information, etc.). The controller can also broadcast messages to the components. For example, the controller can broadcast system state messages to the components, where the system state messages comprise current state information of the components. The controller can also broadcast other types of messages, such as start messages that instruct the components to enter a start state.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: March 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Spencer G Fowers
  • Patent number: 10571260
    Abstract: An automated rivet measurement system comprises a number of end effectors, a number of cameras, a processor, and a comparator. The number of end effectors is configured to perform drilling and riveting on a structure. The number of cameras is connected to the number of end effectors. The number of cameras is configured to take a first image of a hole in the structure and a second image of a rivet in the hole. The processor is configured to process the first image and the second image to identify a number of reference points in the first image and the second image. The comparator is configured to determine a rivet concentricity using the hole in the first image and the rivet in the second image, in which the first image and the second image are aligned using the number of reference points.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: February 25, 2020
    Assignee: The Boeing Company
    Inventors: Patrick L. Anderson, Stephen J. Bennison