Patents by Inventor Sukrit SHANKAR

Sukrit SHANKAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11270476
    Abstract: Disclosed is a computer-implemented method for providing photorealistic changes for a digital image. The method includes receiving a digital image of a dressable model, receiving digital cutout garment textures that are indexed according to an outfitting layering order and aligned with the body shape and pose of the dressable model, receiving binary silhouettes of the digital cutout garment textures, generating a garment layer index mask by compositing the binary silhouettes of the digital cutout garment textures indexed according to the outfitting layering order, receiving a composite image obtained by overlaying the digital cutout garment textures according to the indexed outfitting layering order on the digital image of the dressable model, inputting the composite image and the garment layer index mask into a machine learning system for providing photorealistic changes, and receiving from the machine learning system a digital file including photorealistic changes for application to the composite image.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: March 8, 2022
    Assignee: METAIL LIMITED
    Inventors: Yu Chen, Jim Downing, Tom Adeyoola, Sukrit Shankar
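The garment layer index mask described in the abstract above can be sketched as a simple compositing step: binary silhouettes are painted into one integer mask in layering order, so outer garments overwrite inner ones. This is a minimal illustrative sketch, not the patented implementation; the function name and the convention that index 0 means background are assumptions.

```python
import numpy as np

def garment_layer_index_mask(silhouettes):
    """Composite binary garment silhouettes into a single index mask.

    `silhouettes` is a list of HxW boolean arrays ordered from innermost
    to outermost garment layer. Index 0 is reserved for background (the
    bare model); layer k gets index k, and outer layers overwrite inner
    ones where they overlap.
    """
    if not silhouettes:
        raise ValueError("need at least one silhouette")
    mask = np.zeros(silhouettes[0].shape, dtype=np.int32)
    for layer_idx, sil in enumerate(silhouettes, start=1):
        mask[sil] = layer_idx  # later (outer) layers overwrite earlier ones
    return mask
```

Painting in order means the mask records, per pixel, the topmost garment visible there, which is exactly the occlusion information a downstream network needs.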
  • Patent number: 11270122
    Abstract: An image processing system has a memory storing a video depicting a multi-entity event, a trained reinforcement learning policy and a plurality of domain specific language functions. A graph formation module computes a representation of the video as a graph of nodes connected by edges. A trained machine learning system recognizes entities depicted in the video and recognizes attributes of the entities. Labels are added to the nodes of the graph according to the recognized entities and attributes. The trained machine learning system computes a predicted multi-entity event depicted in the video. For individual ones of the edges of the graph, a domain specific language function is selected from the plurality of domain specific language functions and assigned to the edge, the selection being made at least according to the reinforcement learning policy. An explanation is formed from the domain specific language functions.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 8, 2022
    Assignee: 3D Industries Limited
    Inventors: Sukrit Shankar, Seena Rejal
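The edge-labelling step described above can be sketched as follows: a policy scores each candidate domain specific language (DSL) function for an edge, the highest-scoring function is assigned, and the explanation is read off the assignments. This is a toy sketch under stated assumptions; all names, the greedy argmax selection, and the explanation format are illustrative, not taken from the patent.

```python
def assign_dsl_functions(edges, dsl_functions, policy_score):
    """For each edge (a node pair), pick the DSL function the policy
    scores highest. `policy_score(edge, fn)` stands in for the trained
    reinforcement learning policy."""
    assignment = {}
    for edge in edges:
        assignment[edge] = max(dsl_functions,
                               key=lambda fn: policy_score(edge, fn))
    return assignment

def form_explanation(assignment, node_labels):
    """Render the assigned DSL functions as a human-readable explanation,
    one clause per labelled edge."""
    clauses = []
    for (a, b), fn in assignment.items():
        clauses.append(f"{fn}({node_labels[a]}, {node_labels[b]})")
    return "; ".join(clauses)
```

For example, with nodes labelled "player_1" and "ball" and a policy that favours a hypothetical `kicks` function for their connecting edge, the explanation would read `kicks(player_1, ball)`.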
  • Publication number: 20210390840
    Abstract: In various examples there is an apparatus with a memory storing a video captured by a capture device, the video depicting a scene comprising two or more people in an environment. The apparatus has a self-supervised neural network which takes at least one frame of the video as input and in response, computes a prediction of at least four points in the frame which depict four points on a plane of the scene. A processor computes, from the four points, a plan view of the scene and detects two or more people in the plan view of the scene. The processor computes, for individual pairs of people depicted in the plan view, an estimate of the shortest distance between the people in the pair.
    Type: Application
    Filed: June 11, 2020
    Publication date: December 16, 2021
    Inventors: Seena Hossein REJAL, Sukrit SHANKAR, Raj Neel SHAH
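Computing a plan view from four points on the scene plane is, in the classical formulation, a homography estimation: four image-to-plan correspondences determine a 3x3 projective transform, after which detected people can be mapped into the plan view and measured pairwise. The sketch below uses the standard direct linear transform, assuming known plan-view targets for the four predicted points; all names are illustrative and this is not the patented method, only the textbook geometry it builds on.

```python
import numpy as np
from itertools import combinations

def homography_from_4_points(src, dst):
    """Solve for the 3x3 homography H mapping 4 image points `src` to
    4 plan-view points `dst`, via the direct linear transform with the
    bottom-right entry fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def to_plan_view(H, point):
    """Apply H to an image point in homogeneous coordinates."""
    u, v, w = H @ np.array([point[0], point[1], 1.0])
    return np.array([u / w, v / w])

def pairwise_shortest_distances(H, people):
    """Map each detected person into the plan view and measure the
    Euclidean distance between every pair."""
    pts = [to_plan_view(H, p) for p in people]
    return {(i, j): float(np.linalg.norm(pts[i] - pts[j]))
            for i, j in combinations(range(len(pts)), 2)}
```

Measuring distances in the plan view rather than the image removes perspective foreshortening, which is why the four-point prediction step matters.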
  • Publication number: 20210334542
    Abstract: An image processing system has a memory storing a video depicting a multi-entity event, a trained reinforcement learning policy and a plurality of domain specific language functions. A graph formation module computes a representation of the video as a graph of nodes connected by edges. A trained machine learning system recognizes entities depicted in the video and recognizes attributes of the entities. Labels are added to the nodes of the graph according to the recognized entities and attributes. The trained machine learning system computes a predicted multi-entity event depicted in the video. For individual ones of the edges of the graph, a domain specific language function is selected from the plurality of domain specific language functions and assigned to the edge, the selection being made at least according to the reinforcement learning policy. An explanation is formed from the domain specific language functions.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 28, 2021
    Inventors: Sukrit SHANKAR, Seena REJAL
  • Patent number: 11080918
    Abstract: There is provided a computer implemented method for predicting garment or accessory attributes using deep learning techniques, comprising the steps of: (i) receiving and storing one or more digital image datasets including images of garments or accessories; (ii) training a deep model for garment or accessory attribute identification, using the stored one or more digital image datasets, by configuring a deep neural network model to predict (a) multiple-class discrete attributes, (b) binary discrete attributes, and (c) continuous attributes; (iii) receiving one or more digital images of a garment or an accessory; and (iv) extracting attributes of the garment or the accessory from the one or more received digital images using the trained deep model for garment or accessory attribute identification. A related system is also provided.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: August 3, 2021
    Assignee: METAIL LIMITED
    Inventors: Yu Chen, Sukrit Shankar, Jim Downing, Joe Townsend, Duncan Robertson, Tom Adeyoola
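The three attribute families named in the abstract above map naturally onto a multi-head readout over a shared feature vector: a softmax head per multiple-class attribute, sigmoid heads for binary attributes, and a linear head for continuous ones. The toy sketch below uses random weights standing in for a trained deep network; the class, head, and attribute names are all assumptions, not details from the patent.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class MultiTaskHeads:
    """Toy multi-head readout over a shared feature vector: a softmax
    head for a multiple-class discrete attribute (e.g. sleeve type), a
    sigmoid head for binary discrete attributes (e.g. has a collar), and
    a linear head for continuous attributes (e.g. hem length)."""

    def __init__(self, feat_dim, n_classes, n_binary, n_continuous, seed=0):
        rng = np.random.default_rng(seed)
        self.W_mc = rng.normal(size=(n_classes, feat_dim))
        self.W_bin = rng.normal(size=(n_binary, feat_dim))
        self.W_cont = rng.normal(size=(n_continuous, feat_dim))

    def predict(self, features):
        return {
            "multi_class": softmax(self.W_mc @ features),          # probabilities over classes
            "binary": 1 / (1 + np.exp(-(self.W_bin @ features))),  # independent probabilities
            "continuous": self.W_cont @ features,                  # unbounded regression outputs
        }
```

Sharing the feature extractor while splitting the output heads lets one network be trained jointly on all three attribute types, with a cross-entropy, binary cross-entropy, or regression loss applied per head.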
  • Patent number: 10997779
    Abstract: A computer-implemented method of generating an image file of a 3D body model of a user wearing a garment, comprising: (i) receiving one or more two dimensional images of a model wearing a garment, which images provide a view of an outer surface of the garment; (ii) for each two dimensional image, segmenting an image of the garment to produce a set of segmented garment images; (iii) using the set of segmented garment images to generate a complete 3D garment model; (iv) receiving a 3D body model of a user; (v) simulating the complete 3D garment model worn on the 3D body model of the user; and (vi) generating an image file of the 3D body model of the user wearing the complete 3D garment model, using the simulated complete 3D garment model worn on the 3D body model of the user.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: May 4, 2021
    Assignee: METAIL LIMITED
    Inventors: Yu Chen, Sukrit Shankar, Dongjoe Shin, David Chalmers, Jim Downing, Tom Adeyoola
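Step (ii) of the pipeline above, segmenting the garment out of each 2D image, can be illustrated with a deliberately crude baseline: treat pixels far from a known background colour as garment. This chroma-key sketch is only a stand-in; a production system (and plausibly the patented one) would use a learned segmentation model, and the function name and tolerance parameter are assumptions.

```python
import numpy as np

def segment_garment(image, background_color, tol=30):
    """Crude chroma-key segmentation: pixels whose summed per-channel
    distance from `background_color` exceeds `tol` are kept as garment.

    `image` is an HxWx3 uint8 array; returns the segmented garment image
    (background zeroed out) and the boolean garment mask."""
    diff = np.abs(image.astype(int) - np.array(background_color, int))
    mask = diff.sum(axis=-1) > tol
    segmented = np.zeros_like(image)
    segmented[mask] = image[mask]
    return segmented, mask
```

Running this over each input view yields the set of segmented garment images that the later steps lift into a complete 3D garment model.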
  • Publication number: 20200320769
    Abstract: There is provided a computer implemented method for predicting garment or accessory attributes using deep learning techniques, comprising the steps of: (i) receiving and storing one or more digital image datasets including images of garments or accessories; (ii) training a deep model for garment or accessory attribute identification, using the stored one or more digital image datasets, by configuring a deep neural network model to predict (a) multiple-class discrete attributes, (b) binary discrete attributes, and (c) continuous attributes; (iii) receiving one or more digital images of a garment or an accessory; and (iv) extracting attributes of the garment or the accessory from the one or more received digital images using the trained deep model for garment or accessory attribute identification. A related system is also provided.
    Type: Application
    Filed: May 25, 2017
    Publication date: October 8, 2020
    Inventors: Yu CHEN, Sukrit SHANKAR, Jim DOWNING, Joe TOWNSEND, Duncan ROBERTSON, Tom ADEYOOLA
  • Publication number: 20200066029
    Abstract: A computer-implemented method of generating an image file of a 3D body model of a user wearing a garment, comprising: (i) receiving one or more two dimensional images of a model wearing a garment, which images provide a view of an outer surface of the garment; (ii) for each two dimensional image, segmenting an image of the garment to produce a set of segmented garment images; (iii) using the set of segmented garment images to generate a complete 3D garment model; (iv) receiving a 3D body model of a user; (v) simulating the complete 3D garment model worn on the 3D body model of the user; and (vi) generating an image file of the 3D body model of the user wearing the complete 3D garment model, using the simulated complete 3D garment model worn on the 3D body model of the user.
    Type: Application
    Filed: February 27, 2018
    Publication date: February 27, 2020
    Inventors: Yu CHEN, Sukrit SHANKAR, Dongjoe SHIN, David CHALMERS, Jim DOWNING, Tom ADEYOOLA