Patents Examined by John W. Lee
  • Patent number: 12046027
    Abstract: A method includes training, using first real data objects, a generative adversarial network having a generator model and a discriminator model to create a trained generator model that generates realistic data, and training, using adversarial data objects and second real data objects, the discriminator model to output an authenticity binary class for the adversarial data objects and the second real data objects. The method further includes deploying the discriminator model to a production system. In the production system, the discriminator model outputs the authenticity binary class to a system classifier model.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: July 23, 2024
    Assignee: Intuit Inc.
    Inventors: Miriam Hanna Manevitz, Aviv Ben Arie
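The sketch below is a minimal, hypothetical illustration of the two-phase training described in the abstract of 12046027: a GAN is trained on first real data objects, then the discriminator is retrained on adversarial and second real data objects to emit an authenticity class. Model architectures, data shapes, and optimizer settings are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

# Phase 1: standard GAN training on first real data objects (illustrative shapes).
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
discriminator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

first_real = torch.randn(256, 8)  # stand-in for first real data objects
for _ in range(100):
    noise = torch.randn(256, 16)
    fake = generator(noise)
    # Discriminator step: real -> 1, generated -> 0.
    d_loss = bce(discriminator(first_real), torch.ones(256, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(256, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator step: try to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(256, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Phase 2: retrain the discriminator as an authenticity classifier using
# adversarial data objects (label 0) and second real data objects (label 1).
adversarial = torch.randn(128, 8)
second_real = torch.randn(128, 8)
x = torch.cat([adversarial, second_real])
y = torch.cat([torch.zeros(128, 1), torch.ones(128, 1)])
for _ in range(100):
    loss = bce(discriminator(x), y)
    d_opt.zero_grad(); loss.backward(); d_opt.step()

# "Deployment": the authenticity binary class feeds a downstream system classifier.
authenticity = (torch.sigmoid(discriminator(x)) > 0.5).float()
```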
  • Patent number: 12039647
    Abstract: A system includes memory devices storing instructions, and one or more processors configured to execute instructions performing method steps. The method may include training a generator, encoder, and discriminator of a synthetic image generation system to enable creation of synthetic images that comply with one or more image classification requirements. A generator and discriminator may be trained in an adversarial relationship. Training may be completed when the generator outputs a synthetic image that matches a target image beyond a first predetermined threshold of accuracy and the encoder outputs a latent feature vector that matches an input latent feature vector beyond a second predetermined threshold of accuracy. After training, the system may be configured to generate synthetic images that comply with one or more image classification requirements.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: July 16, 2024
    Assignee: CARMAX ENTERPRISE SERVICES, LLC
    Inventor: Samuel Martin Gottlieb
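A minimal sketch of the completion check described in the abstract of 12039647, assuming small fully connected stand-ins for the generator and encoder and cosine similarity as the "match" measure; the adversarial training loop and the actual thresholds are omitted and all shapes are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the trained components (shapes are assumptions).
latent_dim, image_dim = 32, 64 * 64
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim))
encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

def training_complete(target_image, input_latent,
                      image_threshold=0.9, latent_threshold=0.9):
    """Check the two completion criteria from the abstract: the synthetic image
    must match the target image beyond one threshold, and the encoder's latent
    vector must match the input latent vector beyond another."""
    synthetic = generator(input_latent)
    recovered_latent = encoder(synthetic)
    image_match = torch.cosine_similarity(synthetic, target_image, dim=0)
    latent_match = torch.cosine_similarity(recovered_latent, input_latent, dim=0)
    return bool(image_match > image_threshold and latent_match > latent_threshold)

target = torch.randn(image_dim)
z = torch.randn(latent_dim)
print(training_complete(target, z))  # False until training has converged
```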
  • Patent number: 12039451
    Abstract: An information processing apparatus (2000) generates likelihood data for each of a plurality of partial regions (12) in image data (10). The likelihood data are associated with a position and a size on the image data (10) and indicate the likelihood that a target object exists in an image region at that position and size. The information processing apparatus (2000) computes a distribution (probability hypothesis density: PHD) of the existence likelihood of a target object with respect to position and size by computing the total sum of the likelihood data, each piece of which is generated for a partial region (12). The information processing apparatus (2000) extracts, from the PHD, partial distributions each of which relates to one target object. For each extracted partial distribution, the information processing apparatus (2000) outputs a position and a size of the target object represented by the partial distribution, based on a statistic of the partial distribution.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: July 16, 2024
    Assignee: NEC CORPORATION
    Inventors: Hiroyoshi Miyano, Tetsuaki Suzuki
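A small NumPy sketch of the pipeline in the abstract of 12039451: per-region likelihood data are summed into an existence-likelihood distribution, partial distributions are separated, and each yields a position and size from its statistics. The greedy distance-based grouping and all numbers are illustrative assumptions; the summed likelihood of a partial distribution is reported because, in PHD terms, it approximates the expected object count.

```python
import numpy as np

# Each partial region contributes (x, y, size, likelihood): the likelihood that a
# target object exists at that position with that size (values are illustrative).
likelihood_data = np.array([
    # x,    y,    size, likelihood
    [10.0, 12.0, 20.0, 0.40],
    [11.0, 13.0, 22.0, 0.35],
    [50.0, 48.0, 30.0, 0.50],
    [52.0, 47.0, 28.0, 0.45],
])

def extract_objects(data, distance_threshold=15.0):
    """Greedily group nearby likelihood contributions into partial distributions
    (one per object) and report the likelihood-weighted mean position and size."""
    remaining = list(range(len(data)))
    objects = []
    while remaining:
        seed = remaining.pop(0)
        cluster = [seed]
        for i in list(remaining):
            if np.linalg.norm(data[i, :2] - data[seed, :2]) < distance_threshold:
                cluster.append(i)
                remaining.remove(i)
        members = data[cluster]
        weights = members[:, 3] / members[:, 3].sum()
        x, y, size = (members[:, :3] * weights[:, None]).sum(axis=0)
        objects.append({"x": x, "y": y, "size": size,
                        "expected_count": members[:, 3].sum()})
    return objects

for obj in extract_objects(likelihood_data):
    print(obj)
```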
  • Patent number: 12033233
    Abstract: A method performed by at least one processing device in an illustrative embodiment comprises applying a first image and a message to an encoder of a steganographic encoder-decoder neural network, generating in the encoder, based at least in part on the first image and the message, a perturbed image containing the message, decoding the perturbed image in a decoder of the steganographic encoder-decoder neural network, and providing information characterizing the decoded perturbed image to the encoder. The generating, decoding and providing are iteratively repeated, with different perturbations being determined in the encoder as a function of respective different instances of the provided information, until the decoded perturbed image meets one or more specified criteria relative to the message. The perturbed image corresponding to the decoded perturbed image that meets the one or more specified criteria relative to the message is output as a steganographic image containing the message.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: July 9, 2024
    Assignee: Cornell University
    Inventors: Varsha Kishore, Kilian Weinberger, Xiangyu Chen, Boyi Li, Yan Wang, Ruihan Wu
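A hedged sketch of the iterative refinement loop in the abstract of 12033233: a perturbation of the first image is repeatedly adjusted, using feedback from a frozen decoder, until the decoded bits equal the message. The random stand-in decoder, Adam-based update, and bit-exact stopping criterion are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A stand-in decoder; in the patent this is the decoder of a trained
# steganographic encoder-decoder network. Here it is random and frozen.
decoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                        nn.Linear(8 * 16, 64))
for p in decoder.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 32, 32)                 # first image
message = torch.randint(0, 2, (1, 64)).float()   # 64-bit message

# Iteratively refine a perturbation until the decoded bits match the message.
perturbation = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([perturbation], lr=0.05)
for step in range(500):
    perturbed = (image + perturbation).clamp(0, 1)
    logits = decoder(perturbed)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, message)
    bits = (torch.sigmoid(logits) > 0.5).float()
    if torch.equal(bits, message):               # stopping criterion from the abstract
        break
    opt.zero_grad(); loss.backward(); opt.step()

steganographic_image = (image + perturbation).detach().clamp(0, 1)
```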
  • Patent number: 12033368
    Abstract: Machine learning models created for crop mapping in any region require huge volumes of ground-truth data, and generating a region-specific training dataset demands manual effort. A method and system providing a generalized approach for crop mapping across regions with varying characteristics is disclosed. The method provides automatic generation of a labelled pixel dataset representing the cropping pattern of a Region of Interest (ROI) for building an ML crop mapping model for the ROI. The generated labelled pixel dataset captures regional dependency and localized phenological indicators for the ROI. The ML crop mapping model is updated using a database that is regularly updated for the set of crops and the plurality of features associated with each crop in the set.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: July 9, 2024
    Assignee: Tata Consultancy Services Limited
    Inventors: Jayantrao Mohite, Suryakant Ashok Sawant, Ankur Pandit, Srinivasu Pappula
  • Patent number: 12008464
    Abstract: Approaches are described for determining facial landmarks in images. An input image is provided to at least one trained neural network that determines a face region (e.g., bounding box of a face) of the input image and initial facial landmark locations corresponding to the face region. The initial facial landmark locations are provided to a 3D face mapper that maps the initial facial landmark locations to a 3D face model. A set of facial landmark locations are determined from the 3D face model. The set of facial landmark locations are provided to a landmark location adjuster that adjusts positions of the set of facial landmark locations based on the input image. The input image is presented on a user device using the adjusted set of facial landmark locations.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 11, 2024
    Assignee: ADOBE INC.
    Inventors: Haoxiang Li, Zhe Lin, Jonathan Brandt, Xiaohui Shen
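The skeleton below only illustrates the order of stages named in the abstract of 12008464 (trained detector with initial landmarks, 3D face mapper, landmark location adjuster); every function body is a hypothetical placeholder rather than the patent's method.

```python
import numpy as np

# Hypothetical stage functions standing in for the trained components named in
# the abstract; none of these implementations come from the patent.
def detect_face_and_landmarks(image):
    """Stand-in for the trained neural network: returns a face bounding box and
    initial 2D landmark locations."""
    h, w = image.shape[:2]
    box = (w // 4, h // 4, w // 2, h // 2)
    landmarks = np.random.rand(68, 2) * [w, h]
    return box, landmarks

def fit_3d_face_model(landmarks):
    """Stand-in for the 3D face mapper: fit a parametric 3D model to the initial
    landmarks and return the landmark set determined from the model."""
    return landmarks  # identity fit for the sketch

def refine_landmarks(landmarks, image):
    """Stand-in for the landmark location adjuster: nudge each landmark toward
    nearby image evidence (here, just clipping to the image bounds)."""
    h, w = image.shape[:2]
    return np.clip(landmarks, 0, [w - 1, h - 1])

image = np.zeros((480, 640, 3), dtype=np.uint8)
box, initial = detect_face_and_landmarks(image)
model_landmarks = fit_3d_face_model(initial)
final_landmarks = refine_landmarks(model_landmarks, image)
```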
  • Patent number: 12001959
    Abstract: The present disclosure describes methods, devices, and storage medium for generating a time-lapse photography video with a neural network model. The method includes obtaining a training sample. The training sample includes a training video and an image set. The method includes obtaining, through training according to the training sample, a neural network model that satisfies a training ending condition, the neural network model comprising a basic network and an optimization network, by using the image set as an input to the basic network, the basic network being a first generative adversarial network for performing content modeling, generating a basic time-lapse photography video as an output of the basic network, using the basic time-lapse photography video as an input to the optimization network, the optimization network being a second generative adversarial network for performing motion state modeling, and generating an optimized time-lapse photography video as an output of the optimization network.
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: June 4, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wenhan Luo, Lin Ma, Wei Liu
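A minimal sketch of the two-stage arrangement in the abstract of 12001959: a basic network produces a time-lapse video volume from an image set, and an optimization network refines its motion. Both networks are illustrative 3D-convolution stand-ins; the discriminators and adversarial losses of the two GANs are omitted.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the two generators described in the abstract:
# the basic network does content modeling, the optimization network refines motion.
frames, channels, size = 8, 3, 32
basic_generator = nn.Sequential(
    nn.Conv3d(channels, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, channels, 3, padding=1), nn.Sigmoid())
optimization_generator = nn.Sequential(
    nn.Conv3d(channels, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, channels, 3, padding=1), nn.Sigmoid())

# The input image set is tiled along the time axis to seed the video volume.
image_set = torch.rand(1, channels, 1, size, size)
video_seed = image_set.repeat(1, 1, frames, 1, 1)

basic_video = basic_generator(video_seed)               # stage 1: content
optimized_video = optimization_generator(basic_video)   # stage 2: motion refinement
print(optimized_video.shape)  # torch.Size([1, 3, 8, 32, 32])
```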
  • Patent number: 11995806
    Abstract: A method for geometrically correcting a distorted input frame and generating an undistorted output frame includes capturing and storing an input frame in an external memory; allocating an output frame with an output frame size and dividing the output frame into output blocks; computing the size of the input blocks in the input image corresponding to each output block; checking whether the size of the input blocks is less than the size of the internal memory and, if not, dividing them until the required input block size of the divided sub-blocks is less than the size of the internal memory; programming an apparatus with input parameters; fetching the input blocks into an internal memory; processing each of the divided sub-blocks sequentially and then processing the next output block until all the output blocks are processed; and composing the output frame from each of the processed blocks.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: May 28, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Rajasekhar Reddy Allu, Niraj Nandan, Mihir Narendra Mody, Gang Hua, Brian Okchon Chae, Shashank Dabral, Hetul Sanghvi, Vikram Vijayanbabu Appia, Sujith Shivalingappa
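A plain-Python sketch of the block-planning logic in the abstract of 11995806: an output block is recursively divided until the input region needed to produce it fits in internal memory. The memory size, bytes per pixel, and distortion margin are assumed values.

```python
# Each output block maps to an input region; if that region does not fit in
# internal memory, the output block is divided until every required input block does.
INTERNAL_MEMORY_BYTES = 32 * 1024
BYTES_PER_PIXEL = 2
DISTORTION_MARGIN = 1.5  # assumed worst-case growth of the input footprint

def input_block_bytes(out_w, out_h):
    """Size of the input region needed to produce an out_w x out_h output block."""
    return int(out_w * DISTORTION_MARGIN) * int(out_h * DISTORTION_MARGIN) * BYTES_PER_PIXEL

def plan_blocks(out_w, out_h, x=0, y=0):
    """Recursively divide an output block until its input footprint fits in memory."""
    if input_block_bytes(out_w, out_h) <= INTERNAL_MEMORY_BYTES:
        return [(x, y, out_w, out_h)]
    half_w, half_h = out_w // 2, out_h // 2
    return (plan_blocks(half_w, half_h, x, y) +
            plan_blocks(out_w - half_w, half_h, x + half_w, y) +
            plan_blocks(half_w, out_h - half_h, x, y + half_h) +
            plan_blocks(out_w - half_w, out_h - half_h, x + half_w, y + half_h))

blocks = plan_blocks(1920, 1080)
print(len(blocks), "blocks planned; largest input block:",
      max(input_block_bytes(w, h) for _, _, w, h in blocks), "bytes")
```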
  • Patent number: 11989933
    Abstract: The invention proposes a method of training a convolutional neural network in which, at each convolutional layer, the weights of one seed convolutional filter are updated during each training iteration. All other convolutional filters are polynomial transformations of the seed filter, or, alternatively, all response maps are polynomial transformations of the response map generated by the seed filter.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: May 21, 2024
    Assignee: Carnegie Mellon University
    Inventors: Felix Juefei Xu, Marios Savvides
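A hedged sketch of the seed-filter idea in the abstract of 11989933: only one seed filter per layer holds trainable weights, and the remaining filters are elementwise polynomial transformations of it. The fixed random polynomial coefficients and elementwise powers are illustrative choices, not the patent's construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolynomialSeedConv(nn.Module):
    """A convolutional layer in which only one seed filter is learned; every other
    filter is an elementwise polynomial transformation of that seed."""
    def __init__(self, in_channels, out_channels, kernel_size, degree=3):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(in_channels, kernel_size, kernel_size) * 0.1)
        # Fixed polynomial coefficients, one vector per derived output filter.
        self.register_buffer("coeffs", torch.randn(out_channels, degree + 1) * 0.1)
        self.degree = degree

    def forward(self, x):
        # Build the filter bank: each filter is sum_k c_k * seed**k.
        powers = torch.stack([self.seed ** k for k in range(self.degree + 1)])  # (d+1, C, K, K)
        weight = torch.einsum('od,dckl->ockl', self.coeffs, powers)
        return F.conv2d(x, weight, padding=weight.shape[-1] // 2)

layer = PolynomialSeedConv(in_channels=3, out_channels=16, kernel_size=3)
out = layer(torch.rand(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32]); only layer.seed receives gradients
```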
  • Patent number: 11972578
    Abstract: A method and system for tracking an object in an input video using online training includes a step for training a classifier model by using global pattern matching, and a step for classifying and tracking each target through online training that includes the classifier model.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: April 30, 2024
    Assignee: NAVER CORPORATION
    Inventors: Myunggu Kang, Dongyoon Wee, Soonmin Bae
  • Patent number: 11941084
    Abstract: A method for training a machine learning model includes obtaining a set of training samples. For each training sample in the set of training samples, during each of one or more training iterations, the method includes cropping the training sample to generate a first cropped image, cropping the training sample to generate a second cropped image that is different than the first cropped image, and duplicating a first portion of the second cropped image. The method also includes overlaying the duplicated first portion of the second cropped image on a second portion of the second cropped image to form an augmented second cropped image. The first portion is different than the second portion. The method also includes training the machine learning model with the first cropped image and the augmented second cropped image.
    Type: Grant
    Filed: November 11, 2021
    Date of Patent: March 26, 2024
    Assignee: Google LLC
    Inventors: Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Tomas Jon Pfister
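A NumPy sketch of the augmentation in the abstract of 11941084: two different crops are taken from a training sample, and in the second crop one portion is duplicated and overlaid on a different portion. Crop and patch sizes are arbitrary, and the downstream model training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(image, crop_size):
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size].copy()

def augment_pair(sample, crop_size=64, patch_size=16):
    """Produce the two views described in the abstract: a first crop, and a second
    crop in which one portion is duplicated and overlaid on a different portion."""
    first = random_crop(sample, crop_size)
    second = random_crop(sample, crop_size)
    # Pick the source portion and a different destination portion.
    sy, sx = rng.integers(0, crop_size - patch_size, size=2)
    while True:
        dy, dx = rng.integers(0, crop_size - patch_size, size=2)
        if (dy, dx) != (sy, sx):
            break
    patch = second[sy:sy + patch_size, sx:sx + patch_size].copy()
    second[dy:dy + patch_size, dx:dx + patch_size] = patch
    return first, second

image = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
view_a, view_b = augment_pair(image)   # both views are used to train the model
print(view_a.shape, view_b.shape)
```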
  • Patent number: 11934923
    Abstract: The quality of predictive model outputs is improved by improving input data in the cases where entities in the input data are associated with one or more classes, computed at least in part from one or more subsets of the input data. Class association function characteristic data is derived from information describing a class association function that generates input data from source data. The class association function characteristic data comprises inferences relating to operation of the class association function that are not derivable solely from the input data. The input data is transformed into improved input data using a constructed class-specific transformation function and the class association function characteristic data.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: March 19, 2024
    Assignee: Swoop Inc.
    Inventors: Simeon Simeonov, Edward Zahrebelski
  • Patent number: 11928880
    Abstract: Techniques are disclosed for detecting an uncovered portion of a body of a person in a frame of video content. In an example, a first machine learning model of a computing system may output a first score for the frame based on a map that identifies a region of the frame associated with an uncovered body part type. Depending on a value of the first score, a second machine learning model that includes a neural network architecture may further analyze the frame to output a second score. The first score and second score may be merged to produce a third score for the frame. A plurality of scores may be determined, respectively, for frames of the video content, and a maximum score may be selected. The video content may be selected for presentation on a display for further evaluation based on the maximum score.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: March 12, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Xiaohang Sun, Mohamed Kamal Omar, Alexander Ratnikov, Ahmed Aly Saad Ahmed, Tai-Ching Li, Travis Silvers, Hanxiao Deng, Muhammad Raffay Hamid, Ivan Ryndin
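A hedged sketch of the scoring flow in the abstract of 11928880: a first model scores each frame, a second model is invoked conditionally, the two scores are merged, and the maximum over frames decides whether the video is surfaced for review. Both models are random stand-ins, and the escalation threshold and weighted-average merge are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_model_score(frame):
    """Stand-in for the first model: scores a frame from a body-part region map."""
    return float(rng.random())

def second_model_score(frame):
    """Stand-in for the second (neural-network) model, run only when needed."""
    return float(rng.random())

def frame_score(frame, escalation_threshold=0.5, merge_weight=0.5):
    """Score one frame: run the second model only if the first score is high
    enough, then merge the two scores (weighted average is an assumption)."""
    s1 = first_model_score(frame)
    if s1 < escalation_threshold:
        return s1
    s2 = second_model_score(frame)
    return merge_weight * s1 + (1.0 - merge_weight) * s2

frames = [np.zeros((224, 224, 3)) for _ in range(30)]  # stand-in video frames
video_score = max(frame_score(f) for f in frames)      # maximum over frame scores
flag_for_review = video_score > 0.8
print(video_score, flag_for_review)
```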
  • Patent number: 11922711
    Abstract: Embodiments relate to tracking and determining a location of an object in an environment surrounding a user. A system includes one or more imaging devices and an object tracking unit. The system identifies an object in a search region, determines a tracking region that is smaller than the search region corresponding to the object, and scans the tracking region to determine a location associated with the object. The system may generate a ranking of objects, determine locations associated with the objects, and generate a model of the search region based on the locations associated with the objects.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: March 5, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael Hall, Byron Taylor
  • Patent number: 11922312
    Abstract: An image classification system 10 includes: a probability computation means 11 which computes a known-image probability, which is the probability that an input image corresponds to a known image associated with a seen label that indicates the class into which content indicated by the known image is classified; a likelihood computation means 12 which computes both the likelihood that content indicated by the input image is classified into the same class as content indicated by an unseen image associated with an unseen label, and the likelihood that the content indicated by the input image is classified into the same class as the content indicated by the known image; and a correction means 13 which corrects each computed likelihood using the computed known-image probability.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: March 5, 2024
    Assignee: NEC CORPORATION
    Inventor: Takahiro Toizumi
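A small NumPy sketch of the correction in the abstract of 11922312: likelihoods for seen and unseen classes are adjusted using the known-image probability. The multiplicative weighting and renormalization are assumptions about the form of the correction.

```python
import numpy as np

def corrected_likelihoods(seen_likelihoods, unseen_likelihoods, p_known):
    """Correct per-class likelihoods with the known-image probability: seen-class
    likelihoods are weighted by p_known and unseen-class likelihoods by
    (1 - p_known), then renormalized."""
    seen = p_known * np.asarray(seen_likelihoods)
    unseen = (1.0 - p_known) * np.asarray(unseen_likelihoods)
    scores = np.concatenate([seen, unseen])
    return scores / scores.sum()

# Likelihoods produced for seen and unseen classes (illustrative values).
seen = [0.6, 0.3]      # classes with known training images (seen labels)
unseen = [0.7, 0.2]    # classes described only by unseen labels
p_known = 0.25         # probability the input corresponds to a known image
print(corrected_likelihoods(seen, unseen, p_known))
```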
  • Patent number: 11915442
    Abstract: An apparatus and method for geometrically correcting an arbitrarily shaped input frame and generating an undistorted output frame. The method includes capturing arbitrarily shaped input images with multiple optical devices and processing the images, identifying redundant blocks and valid blocks in each of the images, allocating an output frame with an output frame size and dividing the output frame into rectangular regions, programming the apparatus and disabling processing for invalid blocks in each of the regions, fetching data corresponding to each of the valid blocks and storing it in an internal memory, interpolating data for each of the regions with stitching, composing the valid blocks into the output frame, and displaying the output frame on a display module.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: February 27, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Rajasekhar Reddy Allu, Niraj Nandan, Mihir Narendra Mody, Gang Hua, Brian Okchon Chae, Shashank Dabral, Hetul Sanghvi, Vikram VijayanBabu Appia, Sujith Shivalingappa
  • Patent number: 11907675
    Abstract: A generative cooperative network (GCN) comprises a dataset generator model and a learner model. The dataset generator model generates training datasets used to train the learner model. The trained learner model is evaluated according to a reference training dataset. The dataset generator model is modified according to the evaluation. The training datasets, the dataset generator model, and the learner model are stored by the GCN. The trained learner model is configured to receive input and to generate output based on the input.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: February 20, 2024
    Assignee: Uber Technologies, Inc.
    Inventors: Felipe Petroski Such, Aditya Rawal, Joel Anthony Lehman, Kenneth Owen Stanley, Jeffrey Michael Clune
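A simplified sketch of the generative cooperative loop in the abstract of 11907675: a dataset generator produces training data, a fresh learner is trained on it, the learner is evaluated on a reference dataset, and the generator is modified according to that evaluation. The accept/reject perturbation used to update the generator and the toy label scheme are stand-ins for the patent's update procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Reference training dataset used to evaluate the learner (toy 2-class data).
ref_x = torch.randn(200, 8)
ref_y = (ref_x.sum(dim=1) > 0).long()

generator = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 8))

def train_learner(dataset_x, dataset_y, steps=50):
    """Train a fresh learner model on a generated training dataset."""
    learner = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.Adam(learner.parameters(), lr=1e-2)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(learner(dataset_x), dataset_y)
        opt.zero_grad(); loss.backward(); opt.step()
    return learner

def evaluate(learner):
    """Accuracy of the trained learner on the reference training dataset."""
    return (learner(ref_x).argmax(dim=1) == ref_y).float().mean().item()

best_score = -1.0
for round_ in range(10):
    noise = torch.randn(200, 4)
    with torch.no_grad():
        gen_x = generator(noise)
    gen_y = (noise.sum(dim=1) > 0).long()   # simplified label generation
    learner = train_learner(gen_x, gen_y)
    score = evaluate(learner)
    # Modify the generator according to the evaluation; a simple accept/reject
    # perturbation stands in for the patent's update mechanism.
    if score > best_score:
        best_score = score
        snapshot = [p.detach().clone() for p in generator.parameters()]
    else:
        with torch.no_grad():
            for p, s in zip(generator.parameters(), snapshot):
                p.copy_(s)
    with torch.no_grad():
        for p in generator.parameters():
            p.add_(0.05 * torch.randn_like(p))
    print(round_, score)
```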
  • Patent number: 11900252
    Abstract: A training platform, a method and a computer-readable medium for evaluating users in capturing images of an internal anatomical region for the analysis of organs. Automated machine learning models, trained on a dataset of labelled training images associated with different imaging device positions, are used. The one or more automated machine learning models are used to process an image resulting from a user positioning an imaging device at various imaging device positions relative to a training manikin, a human or an animal, to determine whether the generated image corresponds to a predefined view required for the analysis of the organ features shown therein. An output indicative of whether the generated image corresponds to the predefined view expected for organ analysis and measurements is provided.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: February 13, 2024
    Assignee: CAE HEALTHCARE CANADA INC.
    Inventors: Laurent Desmet, Yannick Perron
  • Patent number: 11886963
    Abstract: A facility for optimizing machine learning models is described. The facility obtains a description of a machine learning model and a hardware target for the machine learning model. The facility obtains optimization result data from a repository of optimization result data. The facility optimizes the machine learning model for the hardware target based on the optimization result data.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: January 30, 2024
    Assignee: OctoML, Inc.
    Inventors: Matthew Welsh, Jason Knight, Jared Roesch, Thierry Moreau, Adelbert Chang, Tianqi Chen, Luis Henrique Ceze, An Wang, Michal Piszczek, Andrew McHarg, Fletcher Haynes
  • Patent number: 11881003
    Abstract: A computer-implemented method of training an image generative network fθ for a set of training images, in which an output image x̂ is generated from an input image x of the set of training images non-losslessly, and in which a proxy network is trained for a gradient-intractable perceptual metric that evaluates the quality of an output image x̂ given an input image x, the method of training using a plurality of scales for input images from the set of training images. In an embodiment, a blindspot network is trained which generates an output image x̃ from an input image x. Related computer systems, computer program products and computer-implemented methods of training are disclosed.
    Type: Grant
    Filed: January 20, 2023
    Date of Patent: January 23, 2024
    Assignee: DEEP RENDER LTD.
    Inventors: Chri Besenbruch, Ciro Cursio, Christopher Finlay, Vira Koshkina, Alexander Lytchier, Jan Xu, Arsalan Zafar
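A hedged sketch of the proxy-network idea in the abstract of 11881003: a differentiable proxy is fitted to a gradient-intractable perceptual metric and then used to back-propagate into the image generative network. The metric here is a no-grad MSE stand-in, and the multi-scale training and blindspot network are omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def intractable_metric(x, x_hat):
    """Stand-in for a gradient-intractable perceptual metric (evaluated without
    autograd; in practice this could be an external, non-differentiable scorer)."""
    with torch.no_grad():
        return ((x - x_hat) ** 2).mean(dim=(1, 2, 3))

generative_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
proxy = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(generative_net.parameters(), lr=1e-3)
p_opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)

for step in range(200):
    x = torch.rand(8, 3, 32, 32)          # input images
    x_hat = generative_net(x)             # non-lossless output images

    # 1) Fit the proxy to the true metric values on (x, x_hat) pairs.
    target = intractable_metric(x, x_hat).unsqueeze(1)
    pred = proxy(torch.cat([x, x_hat.detach()], dim=1))
    proxy_loss = nn.functional.mse_loss(pred, target)
    p_opt.zero_grad(); proxy_loss.backward(); p_opt.step()

    # 2) Train the generative network through the differentiable proxy
    #    (lower predicted metric is better for this MSE stand-in).
    quality = proxy(torch.cat([x, x_hat], dim=1)).mean()
    g_opt.zero_grad(); quality.backward(); g_opt.step()
```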