Patents Examined by Vincent Rudolph
  • Patent number: 11967127
    Abstract: Systems and methods capture image dynamics and use those captured image dynamics for image feature recognition and classification. Other methods and systems train a neural network to capture image dynamics. An image vector representing image dynamics is extracted from an image of an image stream using a first neural network. A second neural network predicts a previous and/or subsequent image in the image stream from the image vector. The predicted previous and/or subsequent image is compared with an actual previous and/or subsequent image from the image stream. The first and second neural networks are trained using the result of the comparison.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: April 23, 2024
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Komath Naveen Kumar
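As an illustration of the training scheme this abstract describes, here is a minimal numpy sketch: a first linear "network" maps a frame to a dynamics vector, a second predicts the subsequent frame from that vector, and the comparison with the actual next frame trains both. The data, sizes, and learning rate are invented for illustration, and only the subsequent-frame case is shown; the patent's networks are of course not restricted to linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image stream": 50 flattened 8-pixel frames that drift smoothly,
# so the next frame is partly predictable from the current one.
stream = np.cumsum(rng.normal(size=(50, 8)), axis=0) * 0.1

D, K = 8, 4                                  # frame size, dynamics-vector size
W_enc = rng.normal(scale=0.1, size=(K, D))   # first network: frame -> image vector
W_dec = rng.normal(scale=0.1, size=(D, K))   # second network: vector -> next frame

def mse(a, b):
    return float(np.mean((a - b) ** 2))

initial_loss = mse(W_dec @ (W_enc @ stream[-2]), stream[-1])

lr = 0.005
for _ in range(200):
    for t in range(len(stream) - 1):
        x, x_next = stream[t], stream[t + 1]
        z = W_enc @ x                    # extracted image vector (dynamics)
        err = W_dec @ z - x_next         # compare prediction with actual next frame
        grad_dec = np.outer(err, z)
        grad_enc = np.outer(W_dec.T @ err, x)
        W_dec -= lr * grad_dec           # the comparison result trains
        W_enc -= lr * grad_enc           # both networks jointly

final_loss = mse(W_dec @ (W_enc @ stream[-2]), stream[-1])
```

After training, the next-frame prediction error drops well below that of the untrained networks.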
  • Patent number: 11961278
    Abstract: A method for detecting an occluded image. The method includes: after an image is captured by a camera, obtaining the image as an image to be detected; inputting the image to be detected into a trained occluded-image detection model, wherein the occluded-image detection model is trained based on original occluded images and non-occluded images by using a trained data feature augmentation network; determining whether the image to be detected is an occluded image based on the occluded-image detection model; and outputting an image detection result.
    Type: Grant
    Filed: May 31, 2021
    Date of Patent: April 16, 2024
    Assignee: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Ruoyu Liu, Zhi Qu, Yasen Zhang, Yan Song, Zhipeng Ge
  • Patent number: 11961293
    Abstract: A system and related methods for identifying characteristics of handbags are described. One method includes receiving one or more images of a handbag and eliminating all but select images from the one or more images of the handbag to obtain a grouping of one or more select images, the select images being those embodying a complete periphery and frontal view of the handbag. The method further includes, for each of the one or more select images, aligning feature-corresponding pixels with an image axis, comparing at least a portion of the one or more select images with a plurality of stored images, and determining characteristics of the handbag based on said comparing.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: April 16, 2024
    Assignee: FASHIONPHILE Group, LLC
    Inventors: Sarah Davis, Ben Hemminger
  • Patent number: 11961255
    Abstract: The object detection device extracts a plurality of predetermined features from an image in which a target object is represented, calculates an entire coincidence degree between the plurality of predetermined features set for an entire model pattern of the target object and the plurality of predetermined features extracted from a corresponding region on the image while changing a relative positional relationship between the image and the model pattern, and calculates, for each partial region including a part of the model pattern, a partial coincidence degree between the predetermined features included in the partial region and the predetermined features extracted from a region corresponding to the partial region on the image. Then, the object detection device determines whether or not the target object is represented in the region on the image corresponding to the model pattern based on the entire coincidence degree and the partial coincidence degree.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: April 16, 2024
    Assignee: FANUC CORPORATION
    Inventor: Shoutarou Ogura
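The two-level matching described above — an entire coincidence degree for the whole model pattern plus partial coincidence degrees for sub-regions — can be sketched as binary template matching. The quadrant partitioning, thresholds, and pattern below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def coincidence(a, b):
    """Fraction of matching binary feature cells between two patches."""
    return float(np.mean(a == b))

def detect(image, pattern, whole_thresh=0.9, part_thresh=0.75):
    """Slide the model pattern over the image; report positions where both
    the entire coincidence degree and every partial (quadrant) coincidence
    degree clear their thresholds."""
    ph, pw = pattern.shape
    hh, hw = ph // 2, pw // 2
    hits = []
    for y in range(image.shape[0] - ph + 1):
        for x in range(image.shape[1] - pw + 1):
            region = image[y:y + ph, x:x + pw]
            whole = coincidence(region, pattern)
            parts = [coincidence(region[dy:dy + hh, dx:dx + hw],
                                 pattern[dy:dy + hh, dx:dx + hw])
                     for dy in (0, hh) for dx in (0, hw)]
            if whole >= whole_thresh and min(parts) >= part_thresh:
                hits.append((y, x))
    return hits

# A 4x4 binary model pattern embedded in a 10x10 image at (3, 2).
pattern = np.array([[1, 0, 0, 1],
                    [0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [1, 0, 0, 1]])
image = np.zeros((10, 10), dtype=int)
image[3:7, 2:6] = pattern
hits = detect(image, pattern)
```

Requiring every partial degree to pass, not just the whole-pattern score, is what lets the combined test reject matches where one region coincides only by accident.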
  • Patent number: 11935256
    Abstract: A distance estimation system comprising a laser light emitter, two image sensors, and an image processor is positioned on a baseplate such that the fields of view of the image sensors overlap and contain the projections of an emitted collimated laser beam within a predetermined range of distances. The image sensors simultaneously capture images of the laser beam projections. The images are superimposed and displacement of the laser beam projection from a first image taken by a first image sensor to a second image taken by a second image sensor is extracted by the image processor. The displacement is compared to a preconfigured table relating displacement distances with distances from the baseplate to projection surfaces to find an estimated distance of the baseplate from the projection surface at the time that the images were captured.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: March 19, 2024
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia
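The displacement-to-distance lookup at the heart of this abstract amounts to interpolating a preconfigured calibration table; the table values below are invented for illustration.

```python
import numpy as np

# Preconfigured table relating the pixel displacement of the laser-beam
# projection between the two superimposed images to the distance from the
# baseplate to the projection surface (illustrative values: larger
# displacement corresponds to a closer surface).
displacement_px = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
distance_cm     = np.array([400.0, 200.0, 100.0, 50.0, 25.0])

def estimate_distance(measured_displacement_px):
    """Linearly interpolate the calibration table for a measured displacement."""
    # np.interp requires ascending x values, which displacement_px already is.
    return float(np.interp(measured_displacement_px, displacement_px, distance_cm))
```

A displacement that falls between two table entries is resolved by linear interpolation, e.g. 15 px lands halfway between the 10 px and 20 px rows.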
  • Patent number: 11922604
    Abstract: PET/MR images are compensated with simplified adaptive algorithms for truncated parts of the body. The compensation adapts to a specific location of truncation of the body or organ in the MR image, and to attributes of the truncation in the truncated body part. Anatomical structures in a PET image that do not require any compensation are masked using an MR image with a smaller field of view. The organs that are not masked are then classified by type of anatomical structure, orientation of the anatomical structure, and type of truncation. Structure-specific algorithms are used to compensate for a truncated anatomical structure. The compensation is validated for correctness and the ROI is filled in where there is missing voxel data. Attenuation maps are generated from the compensated ROI.
    Type: Grant
    Filed: October 20, 2015
    Date of Patent: March 5, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Shekhar Dwivedi
  • Patent number: 11900704
    Abstract: The method to determine tampering of a security label (102) comprises associating at least a portion of a first pattern with an external reference (110), wherein a first layer (202) of the security label (102) comprises the first pattern. Further, a second pattern (206) defined in a second layer (204) is used to change the contour of the portion of the first pattern, when the security label (102) is at least partially disengaged from a surface. Subsequently, when there is a change in contour of the portion of the first pattern, the portion of the first pattern is disassociated from the external reference (110). Further, the portion of the first pattern and the external reference (110) are scanned and finally tampering of the security label (102) is determined based on the association between the portion of the first pattern and the external reference (110).
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 13, 2024
    Inventor: Ashish Anand
  • Patent number: 11890093
    Abstract: A method and system of training a machine learning neural network (MLNN) in anatomical degenerative conditions in accordance with anatomical dynamics. The method comprises receiving, in a first input layer of the MLNN, from a millimeter wave (mmWave) radar sensing device, a first set of mmWave radar point cloud data representing a first gait characteristic of a subject in motion, comprising an arm swing velocity, receiving, in a second layer, a second set of mmWave radar point cloud data representing a second gait characteristic comprising a measure of dynamic postural stability, the input layers being interconnected with an output layer of the MLNN via an intermediate layer, and training an MLNN classifier in accordance with a classification that increases a correlation between a degenerative condition of the subject as generated at the output layer and the sets of mmWave point cloud data.
    Type: Grant
    Filed: December 23, 2022
    Date of Patent: February 6, 2024
    Assignee: Ventech Solutions, Inc.
    Inventors: Ravi Kiran Pasupuleti, Ravi Kunduru
  • Patent number: 11887356
    Abstract: The disclosed systems, structures, and methods are directed to receiving a training data set comprising a plurality of original training samples, augmenting the original training samples by applying default transformations, training the machine learning model on at least a portion of the original training samples and at least a portion of the first set of augmented training samples, computing an unaugmented accuracy, augmenting the original training samples and the first set of augmented training samples by applying a candidate transformation, training the machine learning model on at least a portion of the original training samples, at least a portion of the first set of augmented training samples, and at least a portion of the second set of augmented training samples, computing an augmented accuracy, computing an affinity metric from the unaugmented accuracy and the augmented accuracy, and updating the candidate augmentation transformations list and the default augmentation transformations list.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: January 30, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Forouqsadat Khonsari, Jaspreet Singh Sambee
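The abstract leaves the exact affinity formula unspecified; one plausible reading, used in this sketch, computes it as the difference between the augmented and unaugmented accuracies and uses it to decide whether the candidate transformation graduates to the default list. The function names, threshold, and list-update rule are assumptions for illustration.

```python
def affinity(unaugmented_accuracy, augmented_accuracy):
    """One plausible affinity metric: the change in validation accuracy when
    the candidate transformation is added (the patent does not fix a formula)."""
    return augmented_accuracy - unaugmented_accuracy

def update_lists(candidates, defaults, candidate, unaug_acc, aug_acc, threshold=0.0):
    """Promote a candidate augmentation transformation to the default list
    when its affinity clears the threshold; either way, retire it from the
    candidate list so the search moves on."""
    if affinity(unaug_acc, aug_acc) > threshold:
        defaults = defaults + [candidate]
    candidates = [c for c in candidates if c != candidate]
    return candidates, defaults

# "rotate" improved accuracy (0.80 -> 0.84), so it graduates to the defaults.
candidates, defaults = update_lists(["rotate", "blur"], ["flip"], "rotate", 0.80, 0.84)
```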
  • Patent number: 11887215
    Abstract: Provided is an image processing method according to an embodiment, which includes: obtaining a label of a first image by inputting the first image to a recognition model; obtaining reference style data for a target reference image to which a visual sentiment label is assigned, the visual sentiment label being the same as the obtained label from among visual sentiment labels pre-assigned to reference images; generating second style data based on first style data for the first image and the obtained reference style data; and generating a second image based on the generated second style data.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: January 30, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Lei Zhang, Yehoon Kim, Chanwon Seo
  • Patent number: 11883223
    Abstract: A system and method for determining kinetic parameters associated with a kinetic model of an imaging agent in a liver is provided. An image reconstruction device can receive radiotracer activities corresponding to a predetermined time period. For example, these radiotracer activities can include PET scan data corresponding to a number of time frames. The radiotracer activities can be used to determine a liver time activity curve and a circulatory input function. The liver time activity curve and circulatory input function can be used along with a kinetic model of the liver to produce kinetic parameters. These kinetic parameters can be used to determine hepatic scores, such as a hepatic steatosis score, a hepatic inflammation score, and a cirrhosis score. These scores are indicative of diseases of the liver, including nonalcoholic fatty liver disease, nonalcoholic steatohepatitis, and hepatic fibrosis.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: January 30, 2024
    Assignee: The Regents of the University of California
    Inventors: Guobao Wang, Souvik Sarkar, Ramsey D. Badawi
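To make the notion of a "kinetic parameter" concrete: one standard macro-parameter derived from fitting a time activity curve and an input function to a compartment model is the net influx rate. The formula below is textbook irreversible two-tissue-compartment kinetics, shown only as background, and is not taken from the patent.

```python
def net_influx_rate(K1, k2, k3):
    """Net influx rate Ki = K1 * k3 / (k2 + k3) for an irreversible
    two-tissue compartment model, where K1 is delivery from blood to tissue,
    k2 is efflux back to blood, and k3 is trapping in the second compartment."""
    return K1 * k3 / (k2 + k3)

# With half of the delivered tracer trapped (k3 == k2), Ki is half of K1.
Ki = net_influx_rate(1.0, 0.5, 0.5)
```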
  • Patent number: 11887274
    Abstract: An image dataset comprising pixel depth arrays might be processed by an interpolator, wherein interpolation is based on pixel samples. Input pixels to be interpolated from and an interpolated pixel might comprise deep pixels, each represented with a list of samples. Accumulation curves might be generated from each input pixel, weights applied, and accumulation curves combined to form an interpolation accumulation curve. An interpolated deep pixel can be derived from the interpolation accumulation curve, taking into account zero-depth samples as needed. Samples might represent color values of pixels.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: January 30, 2024
    Assignee: Unity Technologies SF
    Inventor: Peter M. Hillman
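A minimal sketch of the accumulation-curve idea, assuming a deep pixel is a list of (depth, coverage) samples: each input pixel yields a cumulative coverage curve, the weighted curves are combined on a merged depth grid, and the interpolated deep pixel is read back off the combined curve by differencing. The real method also carries color values and zero-depth samples, which are omitted here.

```python
import numpy as np

def accumulation_curve(samples, depths):
    """Cumulative coverage of a deep pixel evaluated at the given depths.
    `samples` is a list of (depth, coverage) pairs for one pixel."""
    acc = np.zeros(len(depths))
    for d, cov in samples:
        acc += np.where(depths >= d, cov, 0.0)
    return acc

def interpolate_deep_pixels(pixel_a, pixel_b, w_a=0.5, w_b=0.5):
    """Blend two deep pixels by weighting and combining their accumulation
    curves on a merged depth grid, then deriving the interpolated deep
    pixel's samples from the combined curve."""
    depths = np.array(sorted({d for d, _ in pixel_a} | {d for d, _ in pixel_b}))
    curve = (w_a * accumulation_curve(pixel_a, depths)
             + w_b * accumulation_curve(pixel_b, depths))
    deltas = np.diff(np.concatenate([[0.0], curve]))   # back to per-sample coverage
    return [(float(d), float(c)) for d, c in zip(depths, deltas) if c > 0]

# Interpolating a one-sample pixel at depth 1.0 with one at depth 2.0.
merged = interpolate_deep_pixels([(1.0, 0.4)], [(2.0, 0.8)])
```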
  • Patent number: 11880903
    Abstract: Embodiments of the present disclosure provide a Bayesian image denoising method based on a distribution constraint of noiseless images from a distribution of noisy images. The method includes: determining a distribution constraint of noiseless images from a distribution of noisy images; and performing Bayesian denoising on noisy images based on the distribution constraint of noiseless images.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: January 23, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Yuxiang Xing, Li Zhang, Hewei Gao, Zhi Deng
  • Patent number: 11879977
    Abstract: Described are street level intelligence platforms, systems, and methods that can include a fleet of swarm vehicles having imaging devices. Images captured by the imaging devices can be used to produce and/or be integrated into maps of the area to produce high-definition maps in near real-time. Such maps may provide enhanced street level intelligence useful for fleet management, navigation, traffic monitoring, and so forth.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: January 23, 2024
    Assignee: Woven by Toyota, U.S., Inc.
    Inventors: Ro Gupta, Justin Day, Chase Nicholl, Max Henstell, Ioannis Stamos, David Boyle, Ethan Sorrelgreen, Huong Dinh
  • Patent number: 11881022
    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a frame-selection sparsity term. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: January 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
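A forward-pass sketch of the loss described above: a classification error on the video-level label after attention-weighted temporal pooling, plus an L1 sparsity penalty on the frame-selection attention. The sigmoid attention, pooling rule, and weighting constant are assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def video_loss(frame_feats, frame_scores, class_w, label, sparsity_lambda=0.1):
    """Loss = classification error on the video-level label + sparsity of
    frame selection. frame_feats: (T, D) per-frame features; frame_scores:
    (T,) raw attention scores; class_w: (C, D) classifier weights."""
    attn = 1.0 / (1.0 + np.exp(-frame_scores))       # per-frame selection weights in [0, 1]
    pooled = (attn[:, None] * frame_feats).sum(0) / (attn.sum() + 1e-8)
    logits = class_w @ pooled                        # video-level class scores
    probs = softmax(logits)
    class_error = -np.log(probs[label] + 1e-8)       # cross-entropy on the video label
    sparsity = np.abs(attn).sum()                    # L1 penalty: prefer few keyframes
    return class_error + sparsity_lambda * sparsity

# Uniform features and zero attention scores give a uniform class posterior.
loss = video_loss(np.ones((4, 3)), np.zeros(4), np.eye(3), 0)
```

Minimizing the sparsity term pushes most frame weights toward zero, so the surviving high-attention frames act as the sparse set of keyframes.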
  • Patent number: 11875583
    Abstract: The present invention belongs to the technical field of 3D reconstruction in the field of computer vision, and provides a dataset generation method for self-supervised learning scene point cloud completion based on panoramas. Pairs of incomplete point cloud and target point cloud with RGB information and normal information can be generated by taking RGB panoramas, depth panoramas and normal panoramas in the same view as input for constructing a self-supervised learning dataset for training of the scene point cloud completion network. The key points of the present invention are occlusion prediction and equirectangular projection based on view conversion, and processing of the stripe problem and point-to-point occlusion problem during conversion. The method of the present invention includes simplification of the collection mode of the point cloud data in a real scene; occlusion prediction idea of view conversion; and design of view selection strategy.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: January 16, 2024
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Yang, Tong Li, Baocai Yin, Zhaoxuan Zhang, Boyan Wei, Zhenjun Du
  • Patent number: 11862342
    Abstract: Systems and methods are disclosed for determining a patient risk assessment or treatment plan based on emboli dislodgement and destination. One method includes receiving a patient-specific anatomic model generated from patient-specific imaging of at least a portion of a patient's vasculature; determining or receiving a location of interest in the patient-specific anatomic model of the patient's vasculature; using a computing processor for calculating blood flow through the patient-specific anatomic model to determine blood flow characteristics through at least the portion of the patient's vasculature of the patient-specific anatomic model downstream from the location of interest; and using a computing processor for particle tracking through the simulated blood flow to determine a destination probability of an embolus originating from the location of interest in the patient-specific anatomic model, based on the determined blood flow characteristics.
    Type: Grant
    Filed: March 16, 2023
    Date of Patent: January 2, 2024
    Assignee: HeartFlow, Inc.
    Inventors: Leo J. Grady, Gilwoo Choi, Charles A. Taylor, Christopher K. Zarins
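The destination-probability computation can be illustrated with a toy Monte Carlo particle tracker on a branching vessel tree, where each branch's flow fraction stands in for the simulated blood-flow characteristics. The tree, its fractions, and the vessel names are invented for illustration.

```python
import random
from collections import Counter

# Simplified vasculature downstream of the location of interest: each node
# maps to (child, flow_fraction) pairs; leaves are candidate embolus
# destinations. Fractions at each branch sum to 1.
TREE = {
    "aorta":   [("carotid", 0.2), ("subclavian", 0.3), ("descending", 0.5)],
    "carotid": [("brain_L", 0.5), ("brain_R", 0.5)],
}

def destination_probability(root, n_particles=50_000, seed=42):
    """Track particles from the location of interest; at each bifurcation a
    particle follows a child branch with probability equal to its flow
    fraction. Returns the fraction of particles reaching each destination."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_particles):
        node = root
        while node in TREE:
            children, weights = zip(*TREE[node])
            node = rng.choices(children, weights=weights)[0]
        counts[node] += 1
    return {dest: c / n_particles for dest, c in counts.items()}

probs = destination_probability("aorta")
```

With these fractions, the left-brain destination probability converges to 0.2 × 0.5 = 0.1 as the particle count grows.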
  • Patent number: 11861806
    Abstract: A system and method of calibrating a broadcast video feed are disclosed herein. A computing system retrieves a plurality of broadcast video feeds that include a plurality of video frames. The computing system generates a trained neural network, by generating a plurality of training data sets based on the broadcast video feed and learning, by the neural network, to generate a homography matrix for each frame of the plurality of frames. The computing system receives a target broadcast video feed for a target sporting event. The computing system partitions the target broadcast video feed into a plurality of target frames. The computing system generates, for each target frame in the plurality of target frames, via the neural network, a target homography matrix. The computing system calibrates the target broadcast video feed by warping each target frame by a respective target homography matrix.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: January 2, 2024
    Assignee: STATS LLC
    Inventors: Long Sha, Sujoy Ganguly, Patrick Joseph Lucey
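Warping a frame by its homography matrix is the standard projective transform; the sketch below maps frame points through a hypothetical 3x3 matrix of the kind the trained network would output per frame (the matrix values are invented).

```python
import numpy as np

def apply_homography(H, points):
    """Map 2D points through a 3x3 homography matrix: lift to homogeneous
    coordinates, multiply, then divide by the third coordinate."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative per-frame homography: scale by 2 and translate by (10, 5).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 50.0]])
warped = apply_homography(H, corners)
```

Warping an entire frame applies the same mapping to every pixel coordinate, which is how the per-frame matrices calibrate the broadcast feed against a common reference plane.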
  • Patent number: 11862325
    Abstract: This disclosure provides a system and a method. The method may include: determining a processing instruction; acquiring image data based on the processing instruction; determining a configuration file based on the image data, in which the configuration file may be configured to guide implementation of the processing instruction; constructing a data processing pipeline based on the configuration file; executing the data processing process based on the data processing pipeline, in which the data processing process may be generated based on the data processing pipeline; generating a processing result of the image data based on the executed data processing process; and storing the processing result of the image data in a first storage space.
    Type: Grant
    Filed: July 26, 2022
    Date of Patent: January 2, 2024
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Zhongqi Zhang, Anling Ma
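The configuration-driven construction described above can be sketched as a registry of stage functions assembled into a pipeline from a configuration file and executed in order; the stage names and operations below are invented for illustration.

```python
# Registry mapping stage names (as they would appear in the configuration
# file) to their implementations.
REGISTRY = {
    "normalize": lambda img: [p / max(img) for p in img],
    "threshold": lambda img: [1 if p > 0.5 else 0 for p in img],
    "invert":    lambda img: [1 - p for p in img],
}

def build_pipeline(config):
    """Construct the data processing pipeline from the configuration file."""
    return [REGISTRY[name] for name in config["stages"]]

def run_pipeline(pipeline, data):
    """Execute the pipeline stage by stage; the final result would then be
    stored in the first storage space."""
    for stage in pipeline:
        data = stage(data)
    return data

# The configuration file guides which stages run, and in what order.
config = {"stages": ["normalize", "threshold"]}
result = run_pipeline(build_pipeline(config), [2, 8, 5, 10])
```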
  • Patent number: 11861917
    Abstract: Method and system for detecting vehicle occupant actions are disclosed. For example, the method includes receiving, by one or more processors, previously classified image data from a previously classified image database, the previously classified image data representing driver postures that are rotated and scaled to be standardized for a range of different drivers and different locations of a vehicle interior sensor within a given vehicle, wherein the previously classified image data is representative of previously classified vehicle occupant actions; receiving, by the one or more processors, current image data from the vehicle interior sensor subsequent to the vehicle interior sensor being registered within the given vehicle, wherein the current image data is representative of current vehicle occupant actions; and determining, by the one or more processors, a vehicle occupant action based at least in part upon a comparison of the current image data and the previously classified image data.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: January 2, 2024
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Aaron Scott Chan, Kenneth J. Sanchez