Patents Examined by Utpal D Shah
  • Patent number: 11977604
    Abstract: A method, device and apparatus for recognizing, categorizing and searching for a garment, and a storage medium. The method for recognizing a garment comprises: acquiring a target image containing a garment to be recognized, and determining, on the basis of the target image, a set of heat maps corresponding to key feature points contained in the target image, the set of heat maps comprising position probability heat maps corresponding to the respective key feature points contained in the target image (101); and processing the set of heat maps on the basis of a shape constraint corresponding to the target image, and determining position probability information of the key feature points contained in the target image (102).
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: May 7, 2024
    Assignees: Beijing Jingdong Shangke Information Tech Co., Ltd., Beijing Jingdong Century Trading Co., Ltd.
    Inventor: Hongbin Xie
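The abstract's step (101) — turning a garment image into per-key-point position probability heat maps — can be illustrated with a minimal sketch. The function name and the softmax normalization are illustrative assumptions, not the patented method:

```python
import numpy as np

def keypoint_from_heatmap(heatmap):
    """Softmax-normalize a network response map and return the most likely
    key-feature-point position together with its probability."""
    probs = np.exp(heatmap - heatmap.max())  # stable softmax
    probs /= probs.sum()
    pos = np.unravel_index(np.argmax(probs), probs.shape)
    return pos, float(probs[pos])

# toy 8x8 heat map with one strong response
heatmap = np.zeros((8, 8))
heatmap[3, 5] = 4.0
pos, p = keypoint_from_heatmap(heatmap)
```

In the patent, step (102) would further refine such per-point probabilities using a shape constraint across all key points; the sketch shows only the per-map decoding.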
  • Patent number: 11978271
    Abstract: Systems and methods for image understanding can include one or more object recognition systems and one or more vision language models to generate an augmented language output that can be both scene-aware and object-aware. The systems and methods can process an input image with an object recognition model to generate an object recognition output descriptive of identification details for an object depicted in the input image. The systems and methods can include processing the input image with a vision language model to generate a language output descriptive of a predicted scene description. The object recognition output can then be utilized to augment the language output to generate an augmented language output that includes the scene understanding of the language output with the specificity of the object recognition output.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: May 7, 2024
    Assignee: GOOGLE LLC
    Inventors: Harshit Kharbanda, Boris Bluntschli, Vibhuti Mahajan, Louis Wang
  • Patent number: 11967047
    Abstract: A method, apparatus, and a non-transitory computer-readable storage medium for image denoising. The method may include obtaining a raw image captured by a camera. The method may also include obtaining a color modeled image based on the raw image. The method may further include obtaining a subsampled raw image based on the raw image. The method may also include obtaining a denoised image based on a neural network processing the color modeled image and the subsampled raw image.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: April 23, 2024
    Assignee: KWAI INC.
    Inventors: Paras Maharjan, Ning Xu, Xuan Xu, Yuyan Song
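One common way to obtain the "subsampled raw image" the abstract mentions is to split the Bayer mosaic into half-resolution color planes before feeding them to the denoising network. The RGGB layout below is an assumption for illustration, not taken from the patent:

```python
import numpy as np

def subsample_raw(raw):
    """Split an RGGB Bayer mosaic of shape (H, W) into four half-resolution
    color planes of shape (4, H//2, W//2)."""
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G on R rows
                     raw[1::2, 0::2],   # G on B rows
                     raw[1::2, 1::2]])  # B

raw = np.arange(16, dtype=np.float32).reshape(4, 4)
planes = subsample_raw(raw)
```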
  • Patent number: 11967111
    Abstract: Proposed is a multi-view camera-based iterative calibration method for generation of a 3D volumetric model that performs calibration between cameras adjacent in a vertical direction for a plurality of frames, performs calibration while rotating through the results of viewpoints adjacent in the horizontal direction, and creates a virtual viewpoint between each camera pair to repeat calibration. Thus, images of various viewpoints are obtained using a plurality of low-cost commercial color-depth (RGB-D) cameras. By acquiring these images at various viewpoints and performing calibration on them, it is possible to increase the accuracy of calibration and, through this, to generate a high-quality real-life graphics volumetric model.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: April 23, 2024
    Assignee: KWANGWOON UNIVERSITY INDUSTRY-ACADEMIC COLLABORATION FOUNDATION
    Inventors: Young Ho Seo, Byung Seo Park
  • Patent number: 11954180
    Abstract: Sensor fusion is performed for efficient deep learning processing. A camera image is received from an image sensor and supplemental sensor data is received from one or more supplemental sensors, the camera image and the supplemental sensor data including imaging of a cabin of a vehicle. Regions of interest in the camera image are determined based on one or more of the camera image or the supplemental sensor data, the regions of interest including areas of the camera image flagged for further image analysis. A machine-learning model is utilized to perform object detection on the regions of interest of the camera image to identify one or more objects in the camera image. The objects are placed into seating zones of the vehicle.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: April 9, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Faizan Shaik, Medha Karkare, Robert Parenti
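The final placement step — mapping each detected object into a seating zone of the vehicle — might look like the toy sketch below. The zone names and pixel ranges are invented for illustration:

```python
def assign_seating_zone(box_center_x, zones):
    """Place a detected object into the first seating zone whose horizontal
    pixel range contains the centre of its bounding box; None if outside."""
    for name, (x0, x1) in zones:
        if x0 <= box_center_x < x1:
            return name
    return None

# hypothetical pixel ranges for a three-seat rear bench in a 640-px image
ZONES = [("left", (0, 200)), ("middle", (200, 440)), ("right", (440, 640))]
```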
  • Patent number: 11954944
    Abstract: Systems and methods of the present disclosure implement physical object-based passwords by registering an object password including first object representations, first background scene representations, and a first presentation sequence from first image data; receiving second image data and a second presentation sequence that is a second order in which the user has presented second physical objects to an image acquisition device, and detecting in the second image data second object representations of the second physical objects and second background scene representations; computing a probability that the user is a permissioned user by inputting the first image data and the second image data into a machine learning model configured to compute the probability based on comparing the object password and the second image data; and tagging the user as a permissioned user or non-permissioned user based on the probability.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: April 9, 2024
    Assignee: Capital One Services, LLC
    Inventors: Gaurang Bhatt, Joshua Edwards, Lukiih Cuan
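The core comparison — a registered sequence of (object, background scene) pairs against a presented sequence — can be sketched with a naive positional match score. This stands in for the patent's learned model, which computes the probability from the image data itself; all names here are illustrative:

```python
def sequence_match_score(registered, presented):
    """Naive stand-in for the learned comparison: the fraction of positions
    where the presented (object, scene) pair matches the registered one."""
    if len(registered) != len(presented):
        return 0.0
    hits = sum(r == p for r, p in zip(registered, presented))
    return hits / len(registered)

password = [("mug", "kitchen"), ("keys", "hallway"), ("book", "desk")]
attempt  = [("mug", "kitchen"), ("keys", "hallway"), ("plant", "desk")]
```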
  • Patent number: 11944470
    Abstract: The present disclosure discloses a method and device for sampling a pulse signal, and a computer program medium.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: April 2, 2024
    Assignee: Raycan Technology Co., Ltd. (Suzhou)
    Inventors: Kezhang Zhu, Qingguo Xie, Pingping Dai, Hao Wang, Junhua Mei, Yuming Su
  • Patent number: 11941844
    Abstract: An object detection model generation method as well as an electronic device and a computer readable storage medium using the same are provided. The method includes: during the iterative training of the to-be-trained object detection model, the detection accuracy of the iteration nodes of the object detection model is sequentially determined according to the node order, and the mis-detected negative samples of the object detection model at the iteration nodes with a detection accuracy less than or equal to a preset threshold are enhanced. Then the object detection model is trained at the iteration node based on the enhanced negative samples and a first amount of preset training samples. After the training at the iteration nodes is completed, the method returns to the step of sequentially determining the detection accuracy of the iteration nodes of the object detection model until the training of the object detection model is completed.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: March 26, 2024
    Assignee: UBTECH ROBOTICS CORP LTD
    Inventors: Yepeng Liu, Yusheng Zeng, Jun Cheng, Jing Gu, Yue Wang, Jianxin Pang
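The selection logic of the abstract — enhance the mis-detected negatives only at iteration nodes whose accuracy falls at or below the threshold — can be sketched as follows, with naive duplication standing in for real sample augmentation (the function and its arguments are assumptions for illustration):

```python
def enhance_hard_negatives(node_accuracy, negatives_by_node, threshold, copies=2):
    """For every iteration node whose detection accuracy is at or below the
    threshold, 'enhance' its mis-detected negative samples (here: duplication
    stands in for augmentation) before the next round of training."""
    enhanced = {}
    for node, acc in node_accuracy.items():
        negs = negatives_by_node.get(node, [])
        enhanced[node] = negs * copies if acc <= threshold else list(negs)
    return enhanced
```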
  • Patent number: 11934478
    Abstract: Methods and systems for computationally processing data with a multi-layer convolutional neural network (CNN) having an input and output layer, and one or more intermediate layers are described. Input data represented in a form of evaluations of continuous functions on a sphere may be received at a computing device and input to the input layer. The input layer may compute outputs as covariant Fourier space activations by transforming the continuous functions into spherical harmonic expansions. The output activations from the input layer may be processed sequentially through each of the intermediate layers. Each intermediate layer may apply Clebsch-Gordan transforms to compute respective covariant Fourier space activations as input to an immediately next layer, without computing any intermediate inverse Fourier transforms or forward Fourier transforms.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: March 19, 2024
    Assignee: The University of Chicago
    Inventors: Imre Kondor, Shubhendu Trivedi, Zhen Lin
  • Patent number: 11928822
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
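The decoding step the abstract describes — turning a signed distance function into an instance segmentation mask — reduces, in its simplest form, to thresholding at zero (points with non-positive distance lie inside the boundary). A minimal sketch with an invented circular contention area:

```python
import numpy as np

def sdf_to_mask(sdf):
    """Decode a signed distance function into a binary mask:
    non-positive distance means inside the contention area."""
    return (sdf <= 0).astype(np.uint8)

# toy SDF of a circular contention area of radius 2 on an 8x8 image grid
ys, xs = np.mgrid[0:8, 0:8]
sdf = np.hypot(ys - 4.0, xs - 4.0) - 2.0
mask = sdf_to_mask(sdf)
```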
  • Patent number: 11922723
    Abstract: Provided is a mouth shape synthesis device and a method using an artificial neural network. To this end, an original video encoder that encodes original video data which is a target of a mouth shape synthesis as a video including a face of a synthesis target, and outputs an original video embedding vector; an audio encoder that encodes audio data that is a basis for the mouth shape synthesis and outputs an audio embedding vector; and a synthesized video decoder that uses the original video embedding vector and the audio embedding vector as input data, and outputs synthesized video data in which a mouth shape corresponding to the audio data is synthesized on the synthesis target face may be provided.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: March 5, 2024
    Assignee: LIONROCKET INC.
    Inventors: Seung Hwan Jeong, Hyung Jun Moon, Jun Hyung Park
  • Patent number: 11915456
    Abstract: A method and apparatus of encoding/decoding attributes of points of a point cloud are provided. The encoding method subdivides a bounding box bounding the points to be encoded, encodes data representative of attributes of a point by referring to a subdivision to which said point belongs, and encodes data representative of a number of points comprised in each subdivision.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: February 27, 2024
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Yannick Olivier, Jean-Claude Chevet, Joan Llach
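The two encoding inputs the abstract names — the subdivision each point belongs to and the point count per subdivision — can be computed with a simple grid partition of the bounding box. The uniform cell size below is an illustrative assumption (the patent covers more general subdivisions):

```python
import numpy as np

def points_per_subdivision(points, bbox_min, cell_size):
    """Assign each point to the grid cell (subdivision) it falls in and
    count how many points each occupied cell contains."""
    cells = np.floor((points - bbox_min) / cell_size).astype(int)
    counts = {}
    for cell in map(tuple, cells):
        counts[cell] = counts.get(cell, 0) + 1
    return counts

pts = np.array([[0.1, 0.2, 0.3], [0.4, 0.1, 0.2], [1.5, 0.5, 0.5]])
counts = points_per_subdivision(pts, bbox_min=np.zeros(3), cell_size=1.0)
```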
  • Patent number: 11908133
    Abstract: A cell image segmentation method using scribble labels includes iteratively pre-training via an image segmentation network (U-Net) using a cell image and scribble labels indicating a cell region and a background region as training data, calculating an exponential moving average (EMA) of image segmentation prediction probabilities at a predetermined interval during the pre-training, self-training by assigning the cell region and the background region for which the EMA of image segmentation prediction probabilities is over a preset threshold to be a pseudo-label, and iteratively refining the image segmentation prediction probability based on a scribbled loss (Lsp) obtained through a result of the training and an unscribbled loss (Lup). Accordingly, it is possible to achieve cell image segmentation with high reliability using only scribble labels.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: February 20, 2024
    Assignee: Korea University Research and Business Foundation
    Inventors: Won-Ki Jeong, Hyunsoo Lee
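The pseudo-labeling rule in the abstract — accumulate an exponential moving average of the prediction probabilities and promote only high-confidence pixels — can be sketched per pixel as follows (the decay and threshold values are illustrative assumptions):

```python
def update_ema(ema, probs, decay=0.9):
    """Exponential moving average of per-pixel prediction probabilities."""
    return [decay * e + (1 - decay) * p for e, p in zip(ema, probs)]

def pseudo_label(ema, threshold=0.8):
    """Pixels whose averaged confidence clears the threshold become
    pseudo-labels (1 = cell, 0 = background); the rest stay unlabeled."""
    return [1 if e >= threshold else 0 if e <= 1 - threshold else None
            for e in ema]
```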
  • Patent number: 11908182
    Abstract: An apparatus and a method for vehicle profiling. The apparatus includes at least a sensor, wherein the at least a sensor is configured to capture vehicle data of a vehicle, at least a processor and a memory communicatively connected to the at least processor, the memory contains instructions configuring the at least processor to receive the vehicle data from the at least a sensor, generate a graphic model of the vehicle as a function of the vehicle data, determine a vehicle status as a function of the vehicle data, classify the vehicle status and the vehicle data into one or more vehicle status groups and generate a vehicle report as a function of the one or more vehicle status groups.
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: February 20, 2024
    Inventors: Joseph Matthew Nichols, Christopher Clinton Chappell, Mcnamara Marlow Pope, III, Rodney Daniel Sparks, Josh David Schumacher
  • Patent number: 11900641
    Abstract: Methods and devices for encoding a point cloud. A bit sequence signalling an occupancy pattern for sub-volumes of a volume is coded using binary entropy coding. For a given bit in the bit sequence, a context may be based on a sub-volume neighbour configuration for the sub-volume corresponding to that bit. The sub-volume neighbour configuration depends on an occupancy pattern of a group of sub-volumes of neighbouring volumes to the volume, the group of sub-volumes neighbouring the sub-volume corresponding to the given bit. The context may be further based on a partial sequence of previously-coded bits of the bit sequence.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: February 13, 2024
    Assignee: Malikie Innovations Limited
    Inventor: Sébastien Lasserre
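The context selection the abstract describes — conditioning each occupancy bit on the neighbour configuration and the previously coded bits — can be sketched as a toy context-id function (the encoding of the partial bit sequence is an illustrative assumption; the actual codec's context reduction is more involved):

```python
def bit_context(neighbour_config, prior_bits):
    """Toy context id for the next bit of an occupancy pattern: the
    sub-volume neighbour configuration paired with the partial sequence of
    previously coded bits (a leading 1 distinguishes prefix lengths)."""
    prefix = 1
    for b in prior_bits:
        prefix = (prefix << 1) | b
    return (neighbour_config, prefix)
```

A binary entropy coder would then keep one adaptive probability model per distinct context id.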
  • Patent number: 11893808
    Abstract: Learning-based 3D property extraction can include: capturing a series of live 2D images of a participatory event including at least a portion of at least one reference visual feature of the participatory event and at least a portion of at least one object involved in the participatory event; and training a neural network to recognize at least one 3D property pertaining to the object in response to the live 2D images based on a set of timestamped 2D training images and 3D measurements of the object obtained during at least one prior training event for the neural network.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: February 6, 2024
    Assignee: Mangolytics, Inc.
    Inventors: Swupnil Kumar Sahai, Richard Hsu, Adith Balamurugan, Neel Sesh Ramachandran
  • Patent number: 11893789
    Abstract: A deep neural network provides real-time pose estimation by combining two custom deep neural networks, a location classifier and an ID classifier, with a pose estimation algorithm to achieve a 6DoF location of a fiducial marker. The locations may be further refined into subpixel coordinates using another deep neural network. The networks may be trained using a combination of auto-labeled videos of the target marker, synthetic subpixel corner data, and/or extreme data augmentation. The deep neural network provides improved pose estimations particularly in challenging low-light, high-motion, and/or high-blur scenarios.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: February 6, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Danying Hu, Daniel DeTone, Tomasz Jan Malisiewicz
  • Patent number: 11886604
    Abstract: The technology described herein obfuscates image content using a local neural network and a remote neural network. The local network runs on a local computer system and a remote classifier runs in a remote computing system. Together, the local network and the remote classifier are able to classify images, while the image never leaves the local computer system. In aspects of the technology, the local network receives a local image and creates a transformed object. The transformed object may be generated by processing the image with a local neural network to generate a multidimensional array and then randomly shuffling data locations within a multidimensional array. The transformed object is communicated to the remote classifier in the remote computing system for classification. The remote classifier may not have the seed used to deterministically scramble the spatial arrangement of data within the multidimensional array.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: January 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kamlesh Dattaram Kshirsagar, Frank T. Seide
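The seeded spatial shuffle the abstract describes — scrambling positions within the multidimensional array so only the seed holder can undo it — can be sketched with a deterministic permutation (the NumPy generator and the (C, H, W) layout are illustrative assumptions):

```python
import numpy as np

def scramble(features, seed):
    """Deterministically shuffle the spatial positions of a (C, H, W)
    feature map; without the seed the arrangement looks random."""
    c, h, w = features.shape
    perm = np.random.default_rng(seed).permutation(h * w)
    return features.reshape(c, h * w)[:, perm].reshape(c, h, w)

def unscramble(scrambled, seed):
    """Invert scramble() by scattering values back to their original slots."""
    c, h, w = scrambled.shape
    perm = np.random.default_rng(seed).permutation(h * w)
    flat = np.empty((c, h * w), dtype=scrambled.dtype)
    flat[:, perm] = scrambled.reshape(c, h * w)
    return flat.reshape(c, h, w)

feats = np.arange(2 * 3 * 3, dtype=np.float32).reshape(2, 3, 3)
```

In the patent's setting, only the transformed object is sent to the remote classifier, which operates without the seed.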
  • Patent number: 11887366
    Abstract: A method comprising performing object detection within a set of representations of a hierarchically-structured signal, the set of representations comprising at least a first representation of the signal at a first level of quality and a second representation of the signal at a second, higher level of quality.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: January 30, 2024
    Inventors: Guido Meardi, Guendalina Cobianchi, Balázs Keszthelyi, Ivan Makeev, Simone Ferrara, Stergios Poularakis
  • Patent number: 11881010
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for machine learning for video analysis and feedback. In some implementations, a machine learning model is trained to classify videos into performance level classifications based on characteristics of image data and audio data in the videos. Video data captured by a device of a user following a prompt that the device provides to the user is received. A set of feature values that describe audio and video characteristics of the video data are determined. The set of feature values are provided as input to the trained machine learning model to generate output that classifies the video data with respect to the performance level classifications. A user interface of the device is updated based on the performance level classification for the video data.
    Type: Grant
    Filed: September 1, 2022
    Date of Patent: January 23, 2024
    Assignee: Voomer, Inc.
    Inventor: David Wesley Anderton-Yang