Patents Examined by Gregory M. Desire
  • Patent number: 11195296
Abstract: An information processing apparatus includes: a memory; and a processor coupled to the memory and configured to: generate, based on three-dimensional point cloud data indicating three-dimensional coordinates of each point on a three-dimensional object, image data in which two-dimensional coordinates of each point and a depth of each point are associated with each other; specify, as a target point, a point of the three-dimensional point cloud data corresponding to an edge pixel included in an edge portion of the image data, and specify, as a neighbor point, a point of the three-dimensional point cloud data corresponding to a neighbor pixel of the edge pixel; and eliminate the target point based on a number of the neighbor points at which a distance to the target point is less than a predetermined distance.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: December 7, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Kazuhiro Yoshimura
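The elimination rule in the entry above (11195296) keeps or drops a point on a depth-image edge depending on how many of its neighbor pixels map back to 3D points within a distance threshold. The sketch below illustrates that rule only; the projection that builds the depth image, the 8-connected neighborhood, and the `max_dist`/`min_neighbors` values are assumptions, not details taken from the patent.

```python
import numpy as np

def filter_edge_points(points, pixel_index, edge_mask,
                       max_dist=0.05, min_neighbors=3):
    """Drop 3D points on depth-image edges that have too few close 3D neighbors.

    points      : (N, 3) array of 3D coordinates.
    pixel_index : (H, W) array mapping each image pixel to a point index (-1 = none).
    edge_mask   : (H, W) boolean array marking edge pixels of the image data.
    """
    h, w = pixel_index.shape
    keep = np.ones(len(points), dtype=bool)
    # 8-connected neighborhood (an assumption; the abstract only says
    # "neighbor pixels" without fixing the neighborhood size).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y, x in zip(*np.nonzero(edge_mask)):
        target = pixel_index[y, x]
        if target < 0:
            continue
        close = 0
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and pixel_index[ny, nx] >= 0:
                if np.linalg.norm(points[pixel_index[ny, nx]] - points[target]) < max_dist:
                    close += 1
        # Eliminate the target point when too few neighbors lie within max_dist.
        if close < min_neighbors:
            keep[target] = False
    return points[keep]
```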
  • Patent number: 11189021
    Abstract: Implementations describe systems and methods for machine based defect detection of three-dimensional (3D) printed objects. A method of one embodiment of the disclosure includes providing a first illumination of a 3D printed object using a first light source arrangement. A plurality of images of the 3D printed object are then generated using one or more imaging devices. Each image may depict a distinct region of the 3D printed object. The plurality of images may then be processed by a processing device using a machine learning model trained to identify one or more types of manufacturing defects of a 3D printing process. The machine learning model may provide a probability that an image contains a manufacturing defect. The processing device may then determine, without user input, whether the 3D printed object contains one or more manufacturing defects based on the results provided by the machine learning model.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 30, 2021
    Assignee: Align Technology, Inc.
    Inventors: Paren Indravadan Shah, Anatoliy Parpara, Andrey Cherkas, Alexey Kalinichenko
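The final decision step in the entry above (11189021), aggregating per-region defect probabilities from a trained model into an object-level determination without user input, reduces to a thresholding rule. The sketch below assumes the per-image probabilities are already available; the classifier itself, the threshold, and the region count are illustrative.

```python
def object_has_defect(defect_probs, prob_threshold=0.5, min_flagged_regions=1):
    """Decide whether a 3D printed object is defective.

    defect_probs : iterable of per-image defect probabilities, one per imaged
                   region, as returned by some trained model (out of scope here).
    """
    flagged = sum(p >= prob_threshold for p in defect_probs)
    return flagged >= min_flagged_regions

# Example: probabilities for five imaged regions of one printed part.
print(object_has_defect([0.02, 0.11, 0.87, 0.04, 0.19]))  # True
```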
  • Patent number: 11182629
    Abstract: A method for machine learning based driver assistance is provided. The method may include detecting, in one or more images of a driver operating an automobile, one or more facial landmarks. The detection of the one or more facial landmarks may include applying, to the one or more images, a first machine learning model. A gaze dynamics of the driver may be determined based at least on the one or more facial landmarks. The gaze dynamics of the driver may include a change in a gaze zone of the driver from a first gaze zone to a second gaze zone. A state of the driver may be determined based at least on the gaze dynamics of the driver. An operation of the automobile may be controlled based at least on the state of the driver. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: November 23, 2021
    Assignee: The Regents of the University of California
    Inventors: Sujitha Martin, Kevan Yuen, Mohan M. Trivedi
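The "gaze dynamics" step in the entry above (11182629), a change of gaze zone feeding a driver-state estimate that can gate a vehicle action, can be sketched as a small state check over recent zones. The zone names, the distraction rule, and the window sizes are invented for illustration; the landmark and gaze models are not reproduced.

```python
from collections import deque

ROAD_ZONES = {"front_windshield", "left_mirror", "right_mirror", "rear_mirror"}

class GazeMonitor:
    """Track recent gaze zones and flag a distracted driver state."""

    def __init__(self, window=10, max_off_road=6):
        self.history = deque(maxlen=window)   # most recent gaze zones
        self.max_off_road = max_off_road      # allowed off-road observations

    def update(self, gaze_zone):
        self.history.append(gaze_zone)
        off_road = sum(z not in ROAD_ZONES for z in self.history)
        state = "distracted" if off_road > self.max_off_road else "attentive"
        # A controller could act on this state, e.g. issue an alert or slow down.
        return state

monitor = GazeMonitor()
for zone in ["front_windshield"] * 3 + ["phone_area"] * 8:
    state = monitor.update(zone)
print(state)  # "distracted"
```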
  • Patent number: 11176674
    Abstract: A system and method for operating a robotic system to register unrecognized objects is disclosed. The robotic system may use first image data representative of an unrecognized object located at a start location to derive an initial minimum viable region (MVR) and to implement operations for initially displacing the unrecognized object. The robotic system may analyze second image data representative of the unrecognized object after the initial displacement operations to detect a condition representative of an accuracy of the initial MVR. The robotic system may register the initial MVR or an adjustment thereof based on the detected condition.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: November 16, 2021
    Assignee: MUJIN, Inc.
    Inventors: Rosen Nikolaev Diankov, Russell Islam, Xutao Ye
  • Patent number: 11170476
    Abstract: Each POI of a set of POIs of a first point cloud is filtered. At a first filter, a first set of neighborhood points of a POI is selected; a first metric for the first set of neighborhood points is computed; and based on the first metric, whether to accept, modify, reject or transmit the POI to a second filter is determined. Provided the POI is accepted or modified, the POI is transmitted to a second point cloud; provided the POI is rejected, the POI is prevented from reaching the second point cloud; provided the POI is not accepted, modified, or rejected, the POI is transmitted to a second filter. At the second filter, provided the POI is accepted or modified, the POI is transmitted to the second point cloud. At least one of range and velocity information is extracted based on the second point cloud.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: November 9, 2021
    Assignee: Aeva, Inc.
    Inventors: Krishna Toshniwal, Bruno Hexsel, Kumar Bhargav Viswanatha, Jose Krause Perin
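The control flow in the entry above (11170476) cascades each point of interest through two filters, each of which can accept, reject, or defer it. The sketch below shows that dispatch; the metrics (local density proxies), the thresholds, and the k-nearest-neighbor choice are assumptions, and the "modify" branch is omitted for brevity.

```python
import numpy as np

def stage_decision(metric, accept_thr, reject_thr):
    """Map a filter metric to one of the outcomes in the abstract."""
    if metric >= accept_thr:
        return "accept"
    if metric < reject_thr:
        return "reject"
    return "defer"          # neither accepted nor rejected: pass downstream

def filter_point_cloud(points, k=5):
    """Two-stage filter: points deferred by filter 1 go to filter 2."""
    second_cloud = []
    for p in points:
        # First metric: inverse mean distance to the k nearest neighbors (assumed).
        dists = np.sort(np.linalg.norm(points - p, axis=1))[1:k + 1]
        d1 = stage_decision(1.0 / (dists.mean() + 1e-9),
                            accept_thr=3.0, reject_thr=1.0)
        if d1 == "accept":
            second_cloud.append(p)
        elif d1 == "defer":
            # Second filter: a different (also assumed) metric, the inverse
            # distance to the single nearest neighbor.
            d2 = stage_decision(1.0 / (dists[0] + 1e-9),
                                accept_thr=2.5, reject_thr=2.5)
            if d2 == "accept":
                second_cloud.append(p)
    return np.array(second_cloud)

cloud = np.random.default_rng(0).normal(size=(200, 3))
print(filter_point_cloud(cloud).shape)
```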
  • Patent number: 11158057
Abstract: A method for detecting a document edge is provided. The method includes: obtaining multi-color channel data of each pixel in a color image (103), where the multi-color channel data includes two-dimensional coordinate values of the pixel and a value of the pixel on each color channel; performing line detection on the multi-color channel data of each pixel in the color image (105); and detecting a quadrilateral based on a preset condition and some or all of the straight lines obtained by performing the line detection (107). According to the foregoing method, the success rate of detecting a document edge can be increased.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 26, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yunchao Zhang, Wenmei Gao
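The last step of the entry above (11158057), choosing four of the detected lines that form a plausible document quadrilateral, can be sketched once the lines exist in Hough (rho, theta) form. The grouping rule below (two near-horizontal and two near-vertical lines, intersected into corners) is an assumed stand-in for the patent's preset condition; per-channel line detection is not shown.

```python
import itertools
import numpy as np

def line_intersection(l1, l2):
    """Intersect two lines given in (rho, theta) Hough form: x*cos(t) + y*sin(t) = rho."""
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(a, np.array([r1, r2]))          # (x, y) corner

def find_document_quad(lines, angle_tol=np.radians(20)):
    """Pick two near-horizontal and two near-vertical lines and intersect them
    into four corners. `lines` is a list of (rho, theta) tuples pooled from all
    color channels; the selection rule here is an assumption."""
    horiz = [l for l in lines if abs(np.sin(l[1])) > np.cos(angle_tol)]
    vert = [l for l in lines if abs(np.cos(l[1])) > np.cos(angle_tol)]
    if len(horiz) < 2 or len(vert) < 2:
        return None
    h_pair = (min(horiz, key=lambda l: l[0]), max(horiz, key=lambda l: l[0]))
    v_pair = (min(vert, key=lambda l: l[0]), max(vert, key=lambda l: l[0]))
    corners = [line_intersection(h, v) for h, v in itertools.product(h_pair, v_pair)]
    return np.array(corners)

# With this parameterization, theta = 0 is a vertical line and theta = pi/2 horizontal.
lines = [(10, 0.0), (300, 0.05), (20, np.pi / 2), (200, np.pi / 2 - 0.03)]
print(find_document_quad(lines))
```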
  • Patent number: 11156981
Abstract: Various approaches to ensuring safe operation of industrial machinery in a workcell include disposing multiple image sensors proximate to the workcell and acquiring, with at least some of the image sensors, a first set of images of the workcell; registering the sensors to each other based at least in part on the first set of images and, based at least in part on the registration, converting the first set of images to a common reference frame of the sensors; determining a transformation matrix for transforming the common reference frame of the sensors to a global frame of the workcell; registering the sensors to the industrial machinery; acquiring a second set of images during operation of the industrial machinery; and monitoring the industrial machinery during operation thereof based at least in part on the acquired second set of images, the transformation matrix, and the registration of the sensors to the industrial machinery.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: October 26, 2021
    Assignee: VEO ROBOTICS, INC.
    Inventors: Dmitriy Dedkov, Scott Denenberg, Ilya A. Kriveshko, Paul Jakob Schroeder, Clara Vu, Patrick Sobalvarro, Alberto Moel
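The registration chain in the entry above (11156981), from individual sensor frames to a common sensor frame and then to the workcell's global frame, amounts to composing rigid transforms once the matrices are known. The 4x4 homogeneous matrices below are invented values; the image-based procedure that produces them is not shown.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

def to_global(points, sensor_to_common, common_to_global):
    """Map sensor-frame points into the workcell's global frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])        # (N, 4)
    return (common_to_global @ sensor_to_common @ homo.T).T[:, :3]

# A sensor rotated 90 degrees about Z and offset in the common frame, and a
# common frame offset from the workcell's global origin (assumed values).
rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
sensor_to_common = make_transform(rz, [0.5, 0.0, 1.2])
common_to_global = make_transform(np.eye(3), [2.0, 3.0, 0.0])

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(to_global(pts, sensor_to_common, common_to_global))
```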
  • Patent number: 11158071
Abstract: The present disclosure discloses a method and an apparatus for point cloud registration. The method includes: segmenting a source point cloud and a destination point cloud respectively into different categories of attribute features based on semantics; segmenting the source point cloud and the destination point cloud into a plurality of grids based on the attribute features; calculating a current similarity between the source point cloud and the destination point cloud based on the plurality of grids; determining whether the current similarity and a current iterative number satisfy a preset condition; when the current similarity and the current iterative number satisfy the preset condition, performing a registration on the source point cloud and the destination point cloud to obtain a registered result; and, based on the registered result, adjusting a position of the source point cloud and updating the current iterative number.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: October 26, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Miao Yan, Zixiang Zhou, Pan Luo, Yu Bai, Changjie Ma, Dangen She
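One concrete reading of the grid-based similarity in the entry above (11158071) is a voxel-occupancy overlap score that drives the stop condition of an iterative registration loop. In the sketch below, the voxel size, the Jaccard overlap metric, the centroid-nudging update, and the thresholds are all assumptions, and the semantic segmentation stage is omitted.

```python
import numpy as np

def occupied_cells(points, cell_size=0.5):
    """Return the set of grid cells occupied by a point cloud."""
    return set(map(tuple, np.floor(points / cell_size).astype(int)))

def grid_similarity(source, destination, cell_size=0.5):
    """Jaccard overlap between the occupied cells of two clouds (assumed metric)."""
    a, b = occupied_cells(source, cell_size), occupied_cells(destination, cell_size)
    return len(a & b) / max(len(a | b), 1)

def register(source, destination, max_iters=20, target_sim=0.9):
    """Toy loop: nudge the source toward the destination centroid until the
    similarity or the iteration count satisfies the preset condition."""
    src = source.copy()
    for it in range(1, max_iters + 1):
        sim = grid_similarity(src, destination)
        if sim >= target_sim:
            break
        src = src + 0.5 * (destination.mean(axis=0) - src.mean(axis=0))
    return src, sim, it

rng = np.random.default_rng(1)
dst = rng.normal(size=(300, 3))
src = dst + np.array([2.0, -1.0, 0.5])            # translated copy of the destination
aligned, sim, iters = register(src, dst)
print(round(sim, 2), iters)
```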
  • Patent number: 11138464
Abstract: An image processing device 10 includes: a feature extraction unit 11, which obtains features in each of scaled samples of the region of interest in a probe image; a saliency generation unit 12, which computes the probabilities of the pixels in the scaled samples that contribute to the score or the label of the object of interest in the region; and a dropout processing unit 13, which removes the features from the scaled samples that are not essential for computing the score or the label of the object, using the computed probabilities.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: October 5, 2021
    Assignee: NEC CORPORATION
    Inventor: Karan Rampal
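The dropout step in the entry above (11138464), discarding features whose contribution probability is low, can be shown as a mask over a feature matrix. How the device computes the saliency probabilities is not reproduced; the threshold and the array shapes below are assumptions.

```python
import numpy as np

def drop_nonessential_features(features, saliency_probs, keep_threshold=0.2):
    """Zero out feature columns whose contribution probability is low.

    features       : (num_scaled_samples, num_features) feature matrix.
    saliency_probs : (num_features,) probability that each feature contributes
                     to the object's score or label (assumed to be given).
    """
    mask = saliency_probs >= keep_threshold        # features worth keeping
    return features * mask                          # broadcast over samples

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 6))                     # 4 scaled samples, 6 features
probs = np.array([0.9, 0.05, 0.4, 0.1, 0.7, 0.02])
print(drop_nonessential_features(feats, probs))
```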
  • Patent number: 11138740
Abstract: The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method of the embodiment of the present disclosure is for an electronic device. The method includes: acquiring a depth image of a current user, and acquiring a three-dimensional (3D) background image of a scene populated by the current user; performing edge extraction on the 3D background image to acquire depth data, in the 3D background image, of edge pixels of a target object in the 3D background image; determining whether the current user collides with the target object in the scene based on the depth image of the current user and the depth data of the edge pixels of the target object; and performing a predetermined operation on the electronic device in response to determining that the current user collides with the target object.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: October 5, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Xueyong Zhang
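The collision test in the entry above (11138740), comparing the user's depth image against the depth of the target object's edge pixels, can be reduced to a threshold on depth differences over shared pixels. The overlap rule and tolerance below are assumptions.

```python
import numpy as np

def user_collides(user_depth, edge_depth, tolerance=0.05):
    """Return True if the user's depth comes within `tolerance` (metres, assumed)
    of the target object's edge depth at any shared pixel.

    user_depth : (H, W) depth image of the current user; NaN where no user.
    edge_depth : (H, W) depth of the object's edge pixels; NaN elsewhere.
    """
    shared = ~np.isnan(user_depth) & ~np.isnan(edge_depth)
    if not shared.any():
        return False
    return bool(np.any(np.abs(user_depth[shared] - edge_depth[shared]) < tolerance))

# Tiny example: the user reaches the object's depth at one pixel.
user = np.full((3, 3), np.nan); user[1, 1] = 1.50
edge = np.full((3, 3), np.nan); edge[1, 1] = 1.52
print(user_collides(user, edge))  # True
```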
  • Patent number: 11107205
    Abstract: A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device. The method also includes using a convolutional neural network to generate blending maps associated with the image frames. The blending maps contain or are based on both a measure of motion in the image frames and a measure of how well exposed different portions of the image frames are. The method further includes generating a final image of the scene using at least some of the image frames and at least some of the blending maps. The final image of the scene may be generated by blending the at least some of the image frames using the at least some of the blending maps, and the final image of the scene may include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: August 31, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yuting Hu, Ruiwen Zhen, John W. Glotzbach, Ibrahim Pekkucuksen, Hamid R. Sheikh
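The final step in the entry above (11107205), blending the frames using per-pixel blending maps so that well-exposed, static regions dominate, is a weighted average once the maps exist. The maps in the sketch below are arbitrary arrays standing in for the network's output; the convolutional neural network itself is not shown.

```python
import numpy as np

def blend_frames(frames, blend_maps, eps=1e-8):
    """Blend aligned image frames with per-pixel weights.

    frames     : (N, H, W, C) float array of aligned frames.
    blend_maps : (N, H, W) per-frame weights from some model (assumed given).
    """
    w = blend_maps[..., None]                      # broadcast weights over channels
    return (frames * w).sum(axis=0) / (w.sum(axis=0) + eps)

rng = np.random.default_rng(0)
frames = rng.uniform(size=(3, 4, 4, 3))            # three small RGB frames
maps = rng.uniform(size=(3, 4, 4))                 # e.g. exposure/motion weights
final = blend_frames(frames, maps)
print(final.shape)                                  # (4, 4, 3)
```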
  • Patent number: 11107143
Abstract: Many embodiments can include a system. In some embodiments, the system can comprise one or more processors and one or more non-transitory storage devices storing computing instructions.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: August 31, 2021
    Assignee: WALMART APOLLO LLC
    Inventors: Stephen Dean Guo, Kannan Achan, Venkata Syam Prakash Rapaka
  • Patent number: 11093751
    Abstract: A system and methods are disclosed for using a trained machine learning model to identify constituent images within composite images. A method may include providing pixel data of a first image as input to the trained machine learning model, obtaining one or more outputs from the trained machine learning model, and extracting, from the one or more outputs, a level of confidence that (i) the first image is a composite image that includes a constituent image, and (ii) at least a portion of the constituent image is in a particular spatial area of the first image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: August 17, 2021
    Assignee: GOOGLE LLC
    Inventors: Filip Pavetic, King Hong Thomas Leung, Dmitrii Tochilkin
  • Patent number: 11087175
Abstract: A method for learning a recurrent neural network to check an autonomous driving safety to be used for switching a driving mode of an autonomous vehicle is provided. The method includes steps of: a learning device (a) if training images corresponding to front and rear cameras of the autonomous vehicle are acquired, inputting each pair of the training images into corresponding CNNs, to concatenate the training images and generate feature maps for training, (b) inputting the feature maps for training into long short-term memory models corresponding to the sequences of a forward RNN, and into those corresponding to the sequences of a backward RNN, to generate updated feature maps for training, and inputting feature vectors for training into an attention layer, to generate an autonomous-driving mode value for training, and (c) allowing a loss layer to calculate losses and to learn the long short-term memory models.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 10, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
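The data flow in the entry above (11087175) runs per-timestep camera features through forward and backward recurrent passes, pools the states with attention, and emits a driving-mode value. The sketch below traces only that flow under heavy simplification: plain tanh recurrent cells stand in for the patent's LSTM models, the weights are random (no loss or training step), and the feature vectors are stand-ins for the CNN outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, H = 5, 8, 4                    # timesteps, feature size, hidden size

# Random weights stand in for learned parameters (untrained sketch).
Wf, Uf = rng.normal(size=(H, F)), rng.normal(size=(H, H))
Wb, Ub = rng.normal(size=(H, F)), rng.normal(size=(H, H))
w_attn = rng.normal(size=2 * H)
w_out = rng.normal(size=2 * H)

def run_direction(x, W, U, reverse=False):
    """Simple recurrent pass over the sequence in one direction."""
    h = np.zeros(H)
    states = []
    steps = reversed(range(T)) if reverse else range(T)
    for t in steps:
        h = np.tanh(W @ x[t] + U @ h)
        states.append(h)
    return np.array(states[::-1] if reverse else states)   # time-ordered states

# Concatenated front/rear camera features per timestep (random stand-ins).
x = rng.normal(size=(T, F))
h_fwd = run_direction(x, Wf, Uf)
h_bwd = run_direction(x, Wb, Ub, reverse=True)
h = np.hstack([h_fwd, h_bwd])                               # (T, 2H)

# Attention over timesteps, then a scalar "autonomous-driving mode" value.
scores = h @ w_attn
alpha = np.exp(scores - scores.max()); alpha /= alpha.sum()
context = alpha @ h
mode_value = 1.0 / (1.0 + np.exp(-(w_out @ context)))
print(round(float(mode_value), 3))
```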
  • Patent number: 11080555
Abstract: A method of detecting trends is provided. The method comprises receiving, from a number of data sources, data regarding choices of people at a number of specified events and public places and determining, according to a number of clustering algorithms, trend clusters according to data received from the data sources cross-referenced to defined event types and place types. Customer profile data and preferences are received from a number of registered customers through user interfaces, and a number of customer clusters are determined from the customer profile data and preferences according to the clustering algorithms. Correlation rules are calculated between the trend clusters and the customer clusters. A number of trend predictions and recommendations are then sent to a user regarding a number of specified events or time frames according to the correlation rules.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Shubhadip Ray, John David Costantini, Avik Sanyal, Sarbajit K. Rakshit
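The pairing of trend clusters with customer clusters in the entry above (11080555) can be illustrated with a co-occurrence affinity matrix. The sketch uses a minimal k-means stand-in for the clustering algorithms, and all data, feature dimensions, cluster counts, and the link between customers and observed choices are invented for illustration.

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Minimal k-means; returns a cluster label for each row of `data`."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None] - centers, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
event_choices = rng.normal(size=(60, 4))       # feature vectors of observed choices
customer_profiles = rng.normal(size=(30, 3))   # registered customer profiles
made_by = rng.integers(0, 30, size=60)         # which customer made each choice

trend_labels = kmeans(event_choices, k=3)
customer_labels = kmeans(customer_profiles, k=2)

# "Correlation rules": how often each customer cluster produced each trend cluster.
rules = np.zeros((2, 3))
for choice_idx, cust_idx in enumerate(made_by):
    rules[customer_labels[cust_idx], trend_labels[choice_idx]] += 1
rules /= rules.sum(axis=1, keepdims=True).clip(min=1)

# Recommendation: the dominant trend cluster for each customer cluster.
print(rules.round(2), rules.argmax(axis=1))
```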
  • Patent number: 11080554
    Abstract: Embodiments provide techniques, including systems and methods, for processing imaging data to identify an installed component. Embodiments include a component identification system that is configured to receive imaging data including an installed component, extract features of the installed component from the imaging data, and search a data store of components for matching reference components that match those features. A relevance score may be determined for each of the reference components based on a similarity between the image and a plurality of reference images in a component model of each of the plurality of reference components. At least one matching reference component may be identified by comparing each relevance score to a threshold relevance score and matching component information may be provided to an end-user for each matching reference component.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: August 3, 2021
    Assignee: LOMA LINDA UNIVERSITY
    Inventors: Montry Suprono, Robert Walter
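The matching step in the entry above (11080554), scoring each reference component by similarity between the captured image and its reference images and keeping the scores that clear a threshold, is a rank-and-threshold operation. The feature vectors, the cosine similarity, and the max-aggregation over reference images below are assumptions standing in for the patent's feature extraction and component models.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def matching_components(query_features, component_models, threshold=0.8):
    """Return (name, relevance) pairs for reference components whose relevance
    score clears the threshold.

    component_models maps a component name to a list of feature vectors,
    one per reference image of that component.
    """
    matches = []
    for name, ref_vectors in component_models.items():
        relevance = max(cosine(query_features, v) for v in ref_vectors)
        if relevance >= threshold:
            matches.append((name, round(relevance, 3)))
    return sorted(matches, key=lambda m: -m[1])

rng = np.random.default_rng(0)
query = rng.normal(size=16)
models = {
    "implant_A": [query + 0.1 * rng.normal(size=16) for _ in range(3)],
    "implant_B": [rng.normal(size=16) for _ in range(3)],
}
print(matching_components(query, models))   # only "implant_A" should match
```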
  • Patent number: 11080900
    Abstract: Provided is a method and apparatus for metal artifact reduction in industrial three-dimensional (3D) cone beam computed tomography (CBCT) that may align computer-aided design (CAD) data to correspond to CT data, generate registration data from the aligned CAD data, set a sinogram surgery region corresponding to a metal region based on the registration data, perform an average fill-in process on the CT data based on the registration data, update data of the sinogram surgery region based on the averaged filled-in information, and reconstruct a 3D CT image from the updated sinogram data with surgery region.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 3, 2021
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Chang-Ock Lee, Soomin Jeon, Seongeun Kim
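The "average fill-in" over the sinogram surgery region in the entry above (11080900) can be illustrated by replacing metal-affected sinogram bins with the mean of the unaffected bins in the same projection row. The row-wise averaging rule is an assumption; CAD registration and the CT reconstruction step are not shown.

```python
import numpy as np

def average_fill_in(sinogram, surgery_mask):
    """Replace masked sinogram bins with the mean of the unmasked bins in the
    same projection row (a simple stand-in for the average fill-in process;
    the surgery region would come from the registered CAD data)."""
    filled = sinogram.copy()
    for row in range(sinogram.shape[0]):
        masked = surgery_mask[row]
        if masked.any() and (~masked).any():
            filled[row, masked] = sinogram[row, ~masked].mean()
    return filled

sino = np.random.default_rng(0).uniform(1.0, 2.0, size=(4, 8))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 3:5] = True                      # metal trace across all projections
print(average_fill_in(sino, mask).round(2))
```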
  • Patent number: 11074592
    Abstract: An economical and accurate method of classifying a consumer good as authentic is provided. The method leverages machine learning and the use of steganographic features on the authentic consumer good.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: July 27, 2021
    Assignee: The Procter & Gamble Company
    Inventors: Jonathan Richard Stonehouse, Boguslaw Obara
  • Patent number: 11074708
Abstract: Various embodiments described herein relate to techniques for computing dimensions of an object. In this regard, a dimensioning system converts point cloud data associated with an object into a density image for a scene associated with the object. The dimensioning system also segments the density image to determine a void region in the density image that corresponds to the object. Furthermore, the dimensioning system determines, based on the void region for the density image, dimension data indicative of one or more dimensions of the object.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: July 27, 2021
    Assignee: Hand Held Products, Inc.
    Inventors: Scott McCloskey, Michael Albright
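The pipeline in the entry above (11074708), building a density image from point cloud data and deriving dimensions from the void region corresponding to the object, can be sketched with a 2D histogram and a bounding box over empty cells. The grid resolution, the zero-count void rule, and the simulated scene below are assumptions.

```python
import numpy as np

def density_image(points_xy, cell=0.05, extent=1.0):
    """Count points per grid cell over a square scene footprint."""
    bins = int(extent / cell)
    img, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                               bins=bins, range=[[0, extent], [0, extent]])
    return img

def void_dimensions(density, cell=0.05):
    """Bounding-box dimensions (in scene units) of the empty (void) cells."""
    ix, iy = np.nonzero(density == 0)
    if len(ix) == 0:
        return None
    return ((ix.max() - ix.min() + 1) * cell, (iy.max() - iy.min() + 1) * cell)

# Simulated scene: background points everywhere except a 0.3 x 0.2 region where
# the object sits (the object returns no points in this toy setup).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1.0, size=(20000, 2))
in_void = (pts[:, 0] > 0.4) & (pts[:, 0] < 0.7) & (pts[:, 1] > 0.5) & (pts[:, 1] < 0.7)
print(void_dimensions(density_image(pts[~in_void])))   # approximately (0.3, 0.2)
```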
  • Patent number: 11074684
    Abstract: A high-frequency component removing part removes high-frequency components from a first object image obtained by picking up an image of an object and a first reference image, to acquire a second object image and a second reference image, respectively. A correction part corrects a value of each pixel of at least one of the first object image and the first reference image on the basis of a discrepancy, which is a ratio or a difference, between a value of the corresponding pixel of the second object image and a value of the corresponding pixel of the second reference image. A comparison part compares the first object image with the first reference image, to thereby detect a defect area in the first object image.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: July 27, 2021
    Assignee: SCREEN HOLDINGS CO., LTD.
    Inventor: Hiroyuki Onishi
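The three stages in the entry above (11074684), removing high-frequency components from both images, correcting the object image by the low-frequency discrepancy, and comparing against the reference to flag a defect area, can be sketched with a simple box blur. The blur size, the ratio-based correction, the defect threshold, and the synthetic images below are assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    """Remove high-frequency components with a simple box filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detect_defects(object_img, reference_img, k=5, threshold=0.15, eps=1e-6):
    """Correct shading differences via the low-frequency ratio, then compare."""
    ratio = (box_blur(reference_img, k) + eps) / (box_blur(object_img, k) + eps)
    corrected = object_img * ratio                        # brightness-corrected object image
    return np.abs(corrected - reference_img) > threshold  # defect mask

rng = np.random.default_rng(0)
reference = rng.uniform(0.4, 0.6, size=(32, 32))
obj = reference * 0.8                              # global shading difference only
obj[10:13, 10:13] += 0.3                           # a small local defect
print(detect_defects(obj, reference).sum())        # number of flagged defect pixels
```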