Patents Examined by Gregory M. Desire
  • Patent number: 11238376
    Abstract: A system and a method are disclosed herein for machine-learned detection of outliers within payload requests. An entity management system uses machine learning to cluster data characterizing requests from entities to route payloads, and determines one or more data clusters that are outliers. The system receives a request to route a payload to a destination, and applies a supervised machine learning model to size and type information indicated by the payload. The supervised machine learning model applies a label to the payload data (e.g., indicating that the payload routing request is an outlier). This outlier detection may drive a validation process to address detected outliers. The system may receive an indication to perform a validation function and transmit the payload to a validation destination. The system may leverage payload data and feedback received from an entity to tailor its machine learning techniques to that entity.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: February 1, 2022
    Assignee: Tekion Corp
    Inventors: Satyavrat Mudgil, Anant Sitaram, Ved Surtani
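A minimal, hypothetical sketch of the two-stage idea in 11238376 above: cluster historical routing requests, then train a supervised model to label new requests as outliers. The feature columns (payload size and type), cluster count, and the small-cluster rule used to derive outlier labels are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: cluster historical routing requests, label sparse
# clusters as outliers, then train a supervised model to flag new requests.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each row: [payload_size_kb, payload_type_id] for a historical routing request.
history = np.vstack([
    rng.normal([200, 1], [20, 0.1], size=(500, 2)),   # typical traffic
    rng.normal([5000, 7], [50, 0.1], size=(10, 2)),   # rare, unusual requests
])

# Unsupervised step: points in small clusters are treated as outliers.
cluster_ids = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(history)
sizes = np.bincount(cluster_ids)
is_outlier = (sizes[cluster_ids] < 0.05 * len(history)).astype(int)

# Supervised step: learn the outlier label from payload size/type features.
clf = RandomForestClassifier(random_state=0).fit(history, is_outlier)

new_request = np.array([[4800, 7]])       # size/type of an incoming payload
if clf.predict(new_request)[0] == 1:
    print("outlier: route payload to a validation destination")
else:
    print("normal: route payload to the requested destination")
```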
  • Patent number: 11210801
    Abstract: An adaptive multi-sensor data fusion method based on mutual information includes: receiving an RGB image of a road surface collected by a camera; receiving point cloud data of the road surface collected synchronously by LIDAR; preprocessing the point cloud data to obtain dense point cloud data; and inputting the RGB image and the dense point cloud data into a pre-established and well-trained fusion network, to output data fusion results. The fusion network is configured to calculate mutual information of a feature tensor and an expected feature of input data, assign fusion weights of the input data according to the mutual information, and then output the data fusion results according to the fusion weights. The new method introduces mutual information, an information-theoretic tool, to calculate the correlation between the extracted feature of the input data and the expected feature of the fusion network.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: December 28, 2021
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xinyu Zhang, Jun Li, Zhiwei Li, Zhenhong Zou, Li Wang
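Illustrative sketch of the weighting idea in 11210801 above: score each modality's feature by its mutual information with an "expected" (reference) feature and turn the scores into fusion weights. The histogram discretization, use of `mutual_info_score`, and softmax normalization are assumptions for this sketch, not the trained fusion network.

```python
# Illustrative sketch (not the patented network): weight camera and LiDAR
# features by their mutual information with a reference ("expected") feature.
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(x, bins=16):
    edges = np.histogram_bin_edges(x, bins=bins)
    return np.digitize(x, edges[1:-1])

def mi(a, b):
    return mutual_info_score(discretize(a), discretize(b))

rng = np.random.default_rng(1)
expected = rng.normal(size=1024)                           # stand-in expected feature
rgb_feat = 0.8 * expected + 0.2 * rng.normal(size=1024)    # camera feature
lidar_feat = 0.4 * expected + 0.6 * rng.normal(size=1024)  # LiDAR feature

scores = np.array([mi(rgb_feat, expected), mi(lidar_feat, expected)])
weights = np.exp(scores) / np.exp(scores).sum()            # softmax fusion weights
fused = weights[0] * rgb_feat + weights[1] * lidar_feat
print("fusion weights (rgb, lidar):", np.round(weights, 3))
```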
  • Patent number: 11210563
    Abstract: Embodiments of the present disclosure provide a method and apparatus for processing an image. The method may include: acquiring a feature of a target image; acquiring a style of the target image, and searching, in a set of image-related information for that style, for the feature most similar to the feature of the target image, where the set of image-related information comprises features of multiple groups of paired images; and using the paired image of the image corresponding to the found feature as the paired image of the target image and outputting that paired image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: December 28, 2021
    Inventors: Zhimin Xu, Guoyi Liu, Xianglong Meng, Xiao Wang
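A toy sketch of the lookup step in 11210563 above: within the feature set indexed by the target image's style, find the most similar stored feature and return the paired image registered for it. The `style_index` structure, cosine similarity, and identifiers are illustrative assumptions.

```python
# Minimal sketch of the style-scoped nearest-feature lookup.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# style -> list of (feature, image_id, paired_image_id)
style_index = {
    "sketch": [(np.array([0.9, 0.1, 0.0]), "img_12", "pair_12"),
               (np.array([0.2, 0.8, 0.1]), "img_34", "pair_34")],
}

def find_paired_image(target_feature, target_style):
    entries = style_index[target_style]
    sims = [cosine_sim(target_feature, f) for f, _, _ in entries]
    _, _, paired = entries[int(np.argmax(sims))]
    return paired

print(find_paired_image(np.array([0.85, 0.2, 0.05]), "sketch"))  # -> pair_12
```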
  • Patent number: 11205278
    Abstract: The present disclosure provides a depth image processing method and apparatus, and an electronic device. The method includes: acquiring a first image acquired by a depth sensor and a second image acquired by an image sensor; determining a scene type according to the first image and the second image; and performing a filtering process on the first image according to the scene type.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: December 21, 2021
    Assignee: SHENZHEN HEYTAP TECHNOLOGY CORP., LTD.
    Inventor: Jian Kang
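Hedged sketch of the idea in 11205278 above: infer a scene type from the two images and pick a depth-filter configuration accordingly. The scene rule, categories, and median-filter sizes are placeholders, not the patented method.

```python
# Choose a depth-filter configuration from a scene type inferred from the
# depth image (and, in principle, the RGB image).
import numpy as np
from scipy.ndimage import median_filter

def classify_scene(depth, rgb):
    # Toy rule: far-range scenes vs. near, cluttered scenes.
    return "far" if np.nanmean(depth) > 3.0 else "near"

def filter_depth(depth, rgb):
    scene = classify_scene(depth, rgb)
    if scene == "far":
        return median_filter(depth, size=7)   # stronger smoothing for noisy far range
    return median_filter(depth, size=3)       # light smoothing preserves near edges

depth = np.random.default_rng(2).uniform(0.5, 6.0, size=(120, 160))
rgb = np.zeros((120, 160, 3), dtype=np.uint8)
print(filter_depth(depth, rgb).shape)
```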
  • Patent number: 11196895
    Abstract: An image processing apparatus includes a painting section and an extraction section. The painting section performs painting on a margin at an end part of an image. The extraction section extracts a document included in the image on which the painting is performed by the painting section.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: December 7, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Shigeki Ishino, Minoru Sodeura
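Rough sketch of the two stages in 11196895 above (paint the margin at the image edge, then extract the document), using plain NumPy; the margin width, paint value, and threshold are assumptions.

```python
# Paint the image margins to remove scanner-edge artifacts, then extract the
# document as the bounding box of the remaining dark pixels.
import numpy as np

def paint_margin(gray, margin=5, value=255):
    out = gray.copy()
    out[:margin, :] = out[-margin:, :] = value   # paint top/bottom margins
    out[:, :margin] = out[:, -margin:] = value   # paint left/right margins
    return out

def extract_document(gray, threshold=200):
    mask = gray < threshold                      # document pixels are darker
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

scan = np.full((100, 80), 255, dtype=np.uint8)
scan[20:90, 10:70] = 120                         # document region
scan[:2, :] = 0                                  # dark artifact at the image end
doc = extract_document(paint_margin(scan))
print(doc.shape)                                 # -> (70, 60)
```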
  • Patent number: 11195296
    Abstract: An information processing apparatus includes: a memory; and a processor coupled to the memory and configured to: generate, based on three-dimensional point cloud data indicating three-dimensional coordinates of each point on a three-dimensional object, image data in which two-dimensional coordinates of each point and a depth of each point are associated with each other; specify, as a target point, a point of the three-dimensional point cloud data corresponding to an edge pixel included in an edge portion of the image data, and specify, as a neighbor point, a point of the three-dimensional point cloud data corresponding to a neighbor pixel of the edge pixel; and eliminate the target point based on a number of the neighbor points at which a distance to the target point is less than a predetermined distance.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: December 7, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Kazuhiro Yoshimura
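Simplified sketch of the elimination rule in 11195296 above: a target point (taken from an edge pixel) is dropped when too few of its neighbor-pixel points lie within a distance threshold. The thresholds are illustrative; the projection and edge-detection steps are omitted.

```python
# Drop an edge point when it has too few close 3D neighbors (a typical sign
# of a spurious point bleeding across a depth edge).
import numpy as np

def should_eliminate(target_xyz, neighbor_xyz, dist_thresh=0.05, min_count=3):
    d = np.linalg.norm(neighbor_xyz - target_xyz, axis=1)
    return np.count_nonzero(d < dist_thresh) < min_count

target = np.array([1.00, 0.50, 2.00])
neighbors = np.array([
    [1.01, 0.50, 2.00],   # close neighbor
    [1.00, 0.52, 2.01],   # close neighbor
    [1.40, 0.90, 2.60],   # far point across the depth edge
])
print(should_eliminate(target, neighbors))   # True: only 2 close neighbors
```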
  • Patent number: 11189021
    Abstract: Implementations describe systems and methods for machine based defect detection of three-dimensional (3D) printed objects. A method of one embodiment of the disclosure includes providing a first illumination of a 3D printed object using a first light source arrangement. A plurality of images of the 3D printed object are then generated using one or more imaging devices. Each image may depict a distinct region of the 3D printed object. The plurality of images may then be processed by a processing device using a machine learning model trained to identify one or more types of manufacturing defects of a 3D printing process. The machine learning model may provide a probability that an image contains a manufacturing defect. The processing device may then determine, without user input, whether the 3D printed object contains one or more manufacturing defects based on the results provided by the machine learning model.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 30, 2021
    Assignee: Align Technology, Inc.
    Inventors: Paren Indravadan Shah, Anatoliy Parpara, Andrey Cherkas, Alexey Kalinichenko
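Hedged sketch of the decision step in 11189021 above only: given per-region defect probabilities from a trained model (here a stand-in callable), flag the printed object without user input when any region exceeds a threshold.

```python
# Per-region defect probabilities -> object-level pass/fail decision.
import numpy as np

def inspect_object(region_images, model, threshold=0.5):
    probs = np.array([model(img) for img in region_images])
    defective = probs > threshold
    return bool(defective.any()), probs

# Stand-in "model": a callable returning a defect probability per image.
fake_model = lambda img: float(img.mean() > 0.8)
regions = [np.random.default_rng(i).random((64, 64)) for i in range(4)]
has_defect, probs = inspect_object(regions, fake_model)
print(has_defect, probs)
```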
  • Patent number: 11182629
    Abstract: A method for machine learning based driver assistance is provided. The method may include detecting, in one or more images of a driver operating an automobile, one or more facial landmarks. The detection of the one or more facial landmarks may include applying, to the one or more images, a first machine learning model. A gaze dynamics of the driver may be determined based at least on the one or more facial landmarks. The gaze dynamics of the driver may include a change in a gaze zone of the driver from a first gaze zone to a second gaze zone. A state of the driver may be determined based at least on the gaze dynamics of the driver. An operation of the automobile may be controlled based at least on the state of the driver. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: November 23, 2021
    Assignee: The Regents of the University of California
    Inventors: Sujitha Martin, Kevan Yuen, Mohan M. Trivedi
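Illustrative sketch of the gaze-dynamics idea in 11182629 above: track gaze-zone changes over frames and infer a coarse driver state from sustained off-road gaze. The zone names, frame threshold, and rule are assumptions, not the patented machine learning models.

```python
# Infer a driver state from a per-frame sequence of gaze zones.
def driver_state(gaze_zones, off_road_zones=("phone", "passenger", "down"),
                 max_off_road_frames=15):
    off_streak = 0
    for zone in gaze_zones:
        off_streak = off_streak + 1 if zone in off_road_zones else 0
        if off_streak > max_off_road_frames:
            return "distracted"
    return "attentive"

frames = ["road"] * 20 + ["phone"] * 20 + ["road"] * 5
state = driver_state(frames)
print(state)                      # -> "distracted"
if state == "distracted":
    print("issue an alert / adjust vehicle control")
```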
  • Patent number: 11176674
    Abstract: A system and method for operating a robotic system to register unrecognized objects is disclosed. The robotic system may use first image data representative of an unrecognized object located at a start location to derive an initial minimum viable region (MVR) and to implement operations for initially displacing the unrecognized object. The robotic system may analyze second image data representative of the unrecognized object after the initial displacement operations to detect a condition representative of an accuracy of the initial MVR. The robotic system may register the initial MVR or an adjustment thereof based on the detected condition.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: November 16, 2021
    Assignee: MUJIN, Inc.
    Inventors: Rosen Nikolaev Diankov, Russell Islam, Xutao Ye
  • Patent number: 11170476
    Abstract: Each POI of a set of POIs of a first point cloud is filtered. At a first filter, a first set of neighborhood points of a POI is selected; a first metric for the first set of neighborhood points is computed; and based on the first metric, whether to accept, modify, reject or transmit the POI to a second filter is determined. Provided the POI is accepted or modified, the POI is transmitted to a second point cloud; provided the POI is rejected, the POI is prevented from reaching the second point cloud; provided the POI is not accepted, modified, or rejected, the POI is transmitted to a second filter. At the second filter, provided the POI is accepted or modified, the POI is transmitted to the second point cloud. At least one of range and velocity information is extracted based on the second point cloud.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: November 9, 2021
    Assignee: Aeva, Inc.
    Inventors: Krishna Toshniwal, Bruno Hexsel, Kumar Bhargav Viswanatha, Jose Krause Perin
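Sketch of the filter cascade in 11170476 above (accept, modify, reject, or pass the point of interest to a second filter). The density metric and thresholds are placeholders for the patent's unspecified metrics.

```python
# Two-stage accept / modify / reject / defer cascade over a point of interest.
import numpy as np

def density(poi, neighbors, radius=0.2):
    d = np.linalg.norm(neighbors - poi, axis=1)
    return np.count_nonzero(d < radius) / max(len(neighbors), 1)

def first_filter(poi, neighbors):
    m = density(poi, neighbors)
    if m > 0.8:  return "accept", poi
    if m > 0.6:  return "modify", neighbors.mean(axis=0)   # snap to local mean
    if m < 0.1:  return "reject", None
    return "defer", poi                                     # send to second filter

def second_filter(poi, neighbors):
    # Looser criterion over a wider neighborhood.
    return ("accept", poi) if density(poi, neighbors, radius=0.5) > 0.3 else ("reject", None)

def filter_poi(poi, neighbors):
    decision, out = first_filter(poi, neighbors)
    if decision == "defer":
        decision, out = second_filter(poi, neighbors)
    return out if decision != "reject" else None            # None: kept out of cloud 2

poi = np.array([0.0, 0.0, 0.0])
neighbors = np.random.default_rng(3).normal(scale=0.3, size=(50, 3))
print(filter_poi(poi, neighbors))
```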
  • Patent number: 11158057
    Abstract: A method for detecting a document edge is provided. The method includes: obtaining multi-color channel data of each pixel in a color image (103), where the multi-color channel data includes two-dimensional coordinate values of the pixel and a value of the pixel on each color channel; performing line detection on the multi-color channel data of each pixel in the color image (105); and detecting a quadrilateral based on a preset condition and some or all of the straight lines obtained by performing the line detection (107). According to the foregoing method, the success rate of detecting a document edge can be increased.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 26, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yunchao Zhang, Wenmei Gao
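A rough OpenCV-based sketch of the per-channel line detection in 11158057 above, with the quadrilateral test reduced to a simple segment-count check; the thresholds and the synthetic test image are assumptions.

```python
# Run edge + line detection on every color channel, pool the segments, then
# apply a crude stand-in for the quadrilateral condition.
import cv2
import numpy as np

img = np.zeros((200, 300, 3), dtype=np.uint8)
cv2.rectangle(img, (40, 30), (260, 170), (180, 200, 220), thickness=2)  # "document"

segments = []
for c in range(3):                          # detection on each color channel
    edges = cv2.Canny(img[:, :, c], 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=50, maxLineGap=5)
    if lines is not None:
        segments.extend(line[0] for line in lines)

# Crude stand-in for the quadrilateral check: enough long segments were found.
print(len(segments), "segments;",
      "candidate quadrilateral" if len(segments) >= 4 else "no quadrilateral")
```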
  • Patent number: 11156981
    Abstract: Various approaches to ensuring safe operation of industrial machinery in a workcell include disposing multiple image sensors proximate to the workcell and acquiring, with at least some of the image sensors, a first set of images of the workcell; registering the sensors to each other based at least in part on the first set of images and, based at least in part on the registration, converting the first set of images to a common reference frame of the sensors; determining a transformation matrix for transforming the common reference frame of the sensors to a global frame of the workcell; registering the sensors to the industrial machinery; acquiring a second set of images during operation of the industrial machinery; and monitoring the industrial machinery during operation thereof based at least in part on the acquired second set of images, the transformation matrix, and the registration of the sensors to the industrial machinery.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: October 26, 2021
    Assignee: VEO ROBOTICS, INC.
    Inventors: Dmitriy Dedkov, Scott Denenberg, Ilya A. Kriveshko, Paul Jakob Schroeder, Clara Vu, Patrick Sobalvarro, Alberto Moel
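Minimal sketch of the frame chain in 11156981 above: a sensor-to-common registration followed by the common-to-global transformation matrix, applied to monitored points. The matrices are illustrative stand-ins for the calibrated registration.

```python
# Chain two rigid transforms: sensor frame -> common frame -> global workcell frame.
import numpy as np

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def apply(T, points):                       # points: (N, 3)
    homog = np.c_[points, np.ones(len(points))]
    return (homog @ T.T)[:, :3]

# Sensor i -> common reference frame (from image-based cross-registration).
T_sensor_to_common = make_transform(np.eye(3), np.array([0.5, 0.0, 1.2]))
# Common frame -> global workcell frame (the transformation matrix in the claim).
T_common_to_global = make_transform(np.eye(3), np.array([-2.0, 3.0, 0.0]))

pts_sensor = np.array([[0.1, 0.2, 2.5]])    # e.g. a monitored point seen by one sensor
pts_global = apply(T_common_to_global, apply(T_sensor_to_common, pts_sensor))
print(pts_global)                           # point expressed in the workcell frame
```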
  • Patent number: 11158071
    Abstract: The present disclosure discloses a method and an apparatus for point cloud registration. The method includes: segmenting a source point cloud and a destination point cloud respectively into different categories of attribute features based on semantics; segmenting the source point cloud and the destination point cloud into a plurality of grids based on the attribute features; calculating a current similarity between the source point cloud and the destination point cloud based on the plurality of grids; determining whether the current similarity and a current iterative number satisfy a preset condition; when the current similarity and the current iterative number satisfy the preset condition, performing a registration on the source point cloud and the destination point cloud to obtain a registered result; and, based on the registered result, adjusting a position of the source point cloud and updating the current iterative number.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: October 26, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Miao Yan, Zixiang Zhou, Pan Luo, Yu Bai, Changjie Ma, Dangen She
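Condensed sketch of the iterate-until-similar loop in 11158071 above: voxelize both clouds into grids, measure overlap, and nudge the source cloud until the similarity or the iteration count meets the stopping condition. Semantic segmentation is omitted; the grid size, greedy step, and thresholds are assumptions.

```python
# Grid-based similarity (IoU over occupied voxels) driving a simple greedy
# adjustment of the source cloud's position.
import numpy as np

def occupancy(points, size=0.5):
    return {tuple(v) for v in np.floor(points / size).astype(int)}

def similarity(src, dst):
    a, b = occupancy(src), occupancy(dst)
    return len(a & b) / max(len(a | b), 1)          # grid IoU

rng = np.random.default_rng(4)
dst = rng.uniform(0, 10, size=(500, 3))
src = dst + np.array([1.0, -0.5, 0.0])              # misaligned copy of the cloud

shift, it = np.zeros(3), 0
while similarity(src + shift, dst) < 0.9 and it < 50:
    # Greedy step: try small moves along each axis, keep the best candidate.
    candidates = [shift] + [shift + d for d in 0.25 * np.vstack([np.eye(3), -np.eye(3)])]
    shift = max(candidates, key=lambda s: similarity(src + s, dst))
    it += 1

print("iterations:", it, "estimated shift:", np.round(shift, 2))
```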
  • Patent number: 11138464
    Abstract: An image processing device 10 includes: a feature extraction unit 11 which obtains features in each of the scaled samples of the region of interest in a probe image; a saliency generation unit 12 which computes the probabilities of the pixels in the scaled samples that contribute to the score or the label of the object of interest in the region; and a dropout processing unit 13 which, using the computed probabilities, removes from the scaled samples the features that are not essential for computing the score or the label of the object.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: October 5, 2021
    Assignee: NEC CORPORATION
    Inventor: Karan Rampal
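Abstracted sketch of the dropout step in 11138464 above: keep only the feature dimensions whose estimated contribution probability is high enough, per scaled sample. The probability array stands in for the saliency generation unit's output; the threshold is an assumption.

```python
# Zero out feature dimensions whose contribution probability is too low.
import numpy as np

def saliency_dropout(features, contribution_probs, keep_thresh=0.2):
    # features: (num_scales, feature_dim); probs: same shape, values in [0, 1].
    mask = contribution_probs >= keep_thresh
    return features * mask                      # drop non-essential features

rng = np.random.default_rng(5)
scaled_feats = rng.normal(size=(3, 8))          # 3 scaled samples of the ROI
probs = rng.random(size=(3, 8))                 # stand-in saliency probabilities
print(saliency_dropout(scaled_feats, probs).round(2))
```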
  • Patent number: 11138740
    Abstract: The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method of the embodiment of the present disclosure is for an electronic device. The method includes: acquiring a depth image of a current user, and acquiring a three-dimensional (3D) background image of a scene populated by the current user; performing edge extraction on the 3D background image to acquire depth data, in the 3D background image, of edge pixels of a target object in the 3D background image; determining whether the current user collides with the target object in the scene based on the depth image of the current user and the depth data of the edge pixels of the target object; and performing a predetermined operation on the electronic device in response to determining that the current user collides with the target object.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: October 5, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Xueyong Zhang
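Hedged sketch of the collision test in 11138740 above: compare the user's depth against the depth recorded for the target object's edge pixels and flag a collision when the user reaches the object within a tolerance. The array layouts and tolerance are assumptions.

```python
# Compare user depth with object edge depth at the object's edge pixels.
import numpy as np

def collides(user_depth, edge_pixels, edge_depths, tolerance=0.05):
    # user_depth: (H, W) metres; edge_pixels: (N, 2) row/col; edge_depths: (N,)
    rows, cols = edge_pixels[:, 0], edge_pixels[:, 1]
    user_at_edges = user_depth[rows, cols]
    valid = user_at_edges > 0                    # 0 = no user at that pixel
    return bool(np.any(user_at_edges[valid] <= edge_depths[valid] + tolerance))

user_depth = np.zeros((4, 4)); user_depth[1, 2] = 1.45   # user silhouette depth
edges = np.array([[1, 2], [2, 2]])                       # target object's edge pixels
edge_depths = np.array([1.50, 1.50])                     # object edge depth
print(collides(user_depth, edges, edge_depths))          # True -> predetermined operation
```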
  • Patent number: 11107205
    Abstract: A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device. The method also includes using a convolutional neural network to generate blending maps associated with the image frames. The blending maps contain or are based on both a measure of motion in the image frames and a measure of how well exposed different portions of the image frames are. The method further includes generating a final image of the scene using at least some of the image frames and at least some of the blending maps. The final image of the scene may be generated by blending the at least some of the image frames using the at least some of the blending maps, and the final image of the scene may include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: August 31, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yuting Hu, Ruiwen Zhen, John W. Glotzbach, Ibrahim Pekkucuksen, Hamid R. Sheikh
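Minimal sketch of the final blending step in 11107205 above (not the CNN that produces the maps): normalize the per-pixel blending maps across frames and take the weighted average of the frames.

```python
# Blend multiple exposures using per-pixel blending maps.
import numpy as np

def blend(frames, blending_maps):
    # frames: (N, H, W, 3); blending_maps: (N, H, W) from the network.
    w = blending_maps / np.clip(blending_maps.sum(axis=0, keepdims=True), 1e-6, None)
    return (frames * w[..., None]).sum(axis=0)   # weighted per-pixel average

rng = np.random.default_rng(6)
frames = rng.random((3, 4, 4, 3))                # e.g. short, medium, long exposure
maps = rng.random((3, 4, 4))                     # favour well-exposed, static pixels
print(blend(frames, maps).shape)                 # -> (4, 4, 3)
```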
  • Patent number: 11107143
    Abstract: Many embodiments can include a system. In some embodiments, the system can comprise one or more processors and one or more non-transitory storage devices storing computing instructions.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: August 31, 2021
    Assignee: WALMART APOLLO LLC
    Inventors: Stephen Dean Guo, Kannan Achan, Venkata Syam Prakash Rapaka
  • Patent number: 11093751
    Abstract: A system and methods are disclosed for using a trained machine learning model to identify constituent images within composite images. A method may include providing pixel data of a first image as input to the trained machine learning model, obtaining one or more outputs from the trained machine learning model, and extracting, from the one or more outputs, a level of confidence that (i) the first image is a composite image that includes a constituent image, and (ii) at least a portion of the constituent image is in a particular spatial area of the first image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: August 17, 2021
    Assignee: GOOGLE LLC
    Inventors: Filip Pavetic, King Hong Thomas Leung, Dmitrii Tochilkin
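Sketch of the output-parsing step in 11093751 above, assuming the trained model emits one confidence per spatial cell: the overall composite-image confidence and the most likely spatial area are read off that grid. The grid layout is an assumption for illustration.

```python
# Extract (confidence, spatial area) from a grid of per-cell model outputs.
import numpy as np

def extract_confidence(cell_scores):
    # cell_scores: (rows, cols) confidences from the trained model's outputs.
    best = np.unravel_index(np.argmax(cell_scores), cell_scores.shape)
    return float(cell_scores.max()), best        # (confidence, spatial area)

scores = np.array([[0.02, 0.05, 0.01],
                   [0.10, 0.91, 0.07],           # constituent image likely here
                   [0.03, 0.04, 0.02]])
confidence, cell = extract_confidence(scores)
print(confidence, cell)                          # 0.91 at grid cell (1, 1)
```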
  • Patent number: 11087175
    Abstract: A method for learning a recurrent neural network to check autonomous driving safety, to be used for switching a driving mode of an autonomous vehicle, is provided. The method includes steps of a learning device: (a) if training images corresponding to front and rear cameras of the autonomous vehicle are acquired, inputting each pair of the training images into corresponding CNNs, to concatenate the training images and generate feature maps for training; (b) inputting the feature maps for training into long short-term memory models corresponding to sequences of a forward RNN, and into those corresponding to the sequences of a backward RNN, to generate updated feature maps for training, and inputting feature vectors for training into an attention layer, to generate an autonomous-driving mode value for training; and (c) allowing a loss layer to calculate losses and to learn the long short-term memory models.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 10, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
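Loose PyTorch sketch of the architecture shape described in 11087175 above (paired front/rear features, forward and backward recurrence, attention, a driving-mode value). A single bidirectional LSTM stands in for the separate forward and backward LSTM models, and all dimensions are illustrative.

```python
# Concatenated front/rear features -> bidirectional LSTM -> attention -> mode value.
import torch
import torch.nn as nn

class DrivingModeRNN(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(2 * feat_dim, hidden, batch_first=True,
                           bidirectional=True)        # forward + backward passes
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, front_feats, rear_feats):
        x = torch.cat([front_feats, rear_feats], dim=-1)    # concatenate pairs
        h, _ = self.rnn(x)                                  # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)              # attention weights over time
        context = (w * h).sum(dim=1)
        return torch.sigmoid(self.head(context))            # mode value in (0, 1)

model = DrivingModeRNN()
front = torch.randn(2, 10, 128)    # stand-in for per-frame CNN features
rear = torch.randn(2, 10, 128)
print(model(front, rear).shape)    # -> torch.Size([2, 1])
```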
  • Patent number: 11080555
    Abstract: A method of detecting trends is provided. The method comprises receiving, from a number of data sources, data regarding choices of people at a number of specified events and public places and determining, according to a number of clustering algorithms, trend clusters according to data received from the data sources cross-referenced to defined event types and place types. Customer profile data and preferences are received from a number of registered customers through user interfaces, and a number of customer clusters are determined from the customer profile data and preferences according to clustering algorithms. Correlation rules are calculated between the trend clusters and the customer clusters. A number of trend predictions and recommendations are then sent to a user regarding a number of specified events or time frames according to the correlation rules.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Shubhadip Ray, John David Costantini, Avik Sanyal, Sarbajit K. Rakshit
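Simplified sketch of the pipeline in 11080555 above: cluster event-choice data and customer-profile data separately, then derive correlation rules from how the clusters co-occur. The feature choices, cluster counts, and co-occurrence rule are assumptions for illustration.

```python
# Cluster trends and customers, then score trend/customer cluster co-occurrence.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
event_choices = rng.random((300, 4))       # per-person choices at events/places
profiles = rng.random((300, 3))            # customer profile data / preferences

trend_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(event_choices)
customer_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)

# Correlation rules: co-occurrence rates between trend and customer clusters.
rules = np.zeros((3, 4))
for t, c in zip(trend_labels, customer_labels):
    rules[t, c] += 1
rules /= rules.sum(axis=1, keepdims=True)

best_trend_for_cluster = rules.argmax(axis=0)   # trend to recommend per customer cluster
print(np.round(rules, 2), best_trend_for_cluster)
```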