Patents by Inventor Krishna Kumar Singh

Krishna Kumar Singh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250117995
    Abstract: Methods, non-transitory computer readable media, apparatuses, and systems for image and depth map generation include receiving a prompt and encoding the prompt to obtain a guidance embedding. A machine learning model then generates, based on the guidance embedding, an image and a depth map corresponding to the image.
    Type: Application
    Filed: October 5, 2023
    Publication date: April 10, 2025
    Inventors: Yijun Li, Matheus Abrantes Gadelha, Krishna Kumar Singh, Soren Pirk
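    Illustrative sketch (not part of the patent record): a minimal PyTorch rendering of the flow in this abstract, in which a prompt encoder produces a guidance embedding and a single model emits both an image and a matching depth map; the module names, dimensions, and architecture are assumptions made for illustration only.
      import torch
      from torch import nn

      class PromptEncoder(nn.Module):
          # Stand-in for a text encoder that maps a tokenized prompt to a guidance embedding.
          def __init__(self, vocab_size=1000, embed_dim=64):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, embed_dim)

          def forward(self, token_ids):
              return self.embed(token_ids).mean(dim=1)        # (batch, embed_dim)

      class ImageAndDepthGenerator(nn.Module):
          # One model conditioned on the guidance embedding, with two heads:
          # one for the RGB image and one for the aligned depth map.
          def __init__(self, embed_dim=64, size=32):
              super().__init__()
              self.size = size
              self.backbone = nn.Linear(embed_dim, 128)
              self.image_head = nn.Linear(128, 3 * size * size)
              self.depth_head = nn.Linear(128, size * size)

          def forward(self, guidance):
              h = torch.relu(self.backbone(guidance))
              image = self.image_head(h).view(-1, 3, self.size, self.size)
              depth = self.depth_head(h).view(-1, 1, self.size, self.size)
              return image, depth

      tokens = torch.randint(0, 1000, (1, 8))                 # pretend-tokenized prompt
      guidance = PromptEncoder()(tokens)                      # encode the prompt
      image, depth = ImageAndDepthGenerator()(guidance)       # image plus its depth map
      print(image.shape, depth.shape)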
  • Patent number: 12272031
    Abstract: An image inpainting system is described that receives an input image that includes a masked region. From the input image, the image inpainting system generates a synthesized image that depicts an object in the masked region by selecting a first code that represents a known factor characterizing a visual appearance of the object and a second code that represents an unknown factor characterizing the visual appearance of the object apart from the known factor in latent space. The input image, the first code, and the second code are provided as input to a generative adversarial network that is trained to generate the synthesized image using contrastive losses. Different synthesized images are generated from the same input image using different combinations of first and second codes, and the synthesized images are output for display.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: April 8, 2025
    Assignee: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman
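    Illustrative sketch (not part of the patent record): a toy PyTorch version of the generator side of this inpainting setup, conditioned on a masked image plus a "known factor" code and an "unknown factor" code; the architecture, code sizes, and the omission of the discriminator and contrastive losses are simplifications made for illustration.
      import torch
      from torch import nn

      class InpaintingGenerator(nn.Module):
          # Generator conditioned on the masked image and two latent codes:
          # `known` fixes one visual factor, `unknown` covers the rest of the appearance.
          def __init__(self, code_dim=16, size=32):
              super().__init__()
              self.size = size
              self.conv = nn.Conv2d(4, 8, 3, padding=1)       # RGB channels + mask channel
              self.fuse = nn.Linear(8 * size * size + 2 * code_dim, 3 * size * size)

          def forward(self, masked_image, mask, known, unknown):
              x = torch.cat([masked_image, mask], dim=1)
              feats = torch.relu(self.conv(x)).flatten(1)
              z = torch.cat([feats, known, unknown], dim=1)
              return self.fuse(z).view(-1, 3, self.size, self.size)

      gen = InpaintingGenerator()
      masked = torch.rand(1, 3, 32, 32)
      mask = torch.zeros(1, 1, 32, 32); mask[:, :, 8:24, 8:24] = 1.0   # masked region to fill
      known = torch.randn(1, 16)
      # Varying only the unknown-factor code yields different completions of the same region.
      for _ in range(3):
          out = gen(masked, mask, known, torch.randn(1, 16))
          print(out.shape)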
  • Patent number: 12260530
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: March 25, 2025
    Assignee: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Publication number: 20250078349
    Abstract: A method, apparatus, and non-transitory computer readable medium for image generation are described. Embodiments of the present disclosure obtain a content input and a style input via a user interface or from a database. The content input includes a target spatial layout and the style input includes a target style. A content encoder of an image processing apparatus encodes the content input to obtain a spatial layout mask representing the target spatial layout. A style encoder of the image processing apparatus encodes the style input to obtain a style embedding representing the target style. An image generation model of the image processing apparatus generates an image based on the spatial layout mask and the style embedding, where the image includes the target spatial layout and the target style.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 6, 2025
    Inventors: Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Ngoc Khuc, Krishna Kumar Singh, Jingwan Lu, Ajinkya Gorakhnath Kale
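    Illustrative sketch (not part of the patent record): a compact PyTorch version of the two-encoder pipeline in this abstract, where a content encoder yields a spatial layout mask, a style encoder yields a style embedding, and a generator combines the two; all layer choices and sizes are assumptions for illustration.
      import torch
      from torch import nn

      class ContentEncoder(nn.Module):
          # Maps the content input to a spatial layout mask (one channel per layout class).
          def __init__(self, classes=4):
              super().__init__()
              self.net = nn.Conv2d(3, classes, 3, padding=1)

          def forward(self, content):
              return self.net(content).softmax(dim=1)

      class StyleEncoder(nn.Module):
          # Pools the style input into a single style embedding vector.
          def __init__(self, dim=32):
              super().__init__()
              self.net = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())

          def forward(self, style):
              return self.net(style)

      class Generator(nn.Module):
          # Combines the layout mask with the broadcast style embedding to produce an image.
          def __init__(self, classes=4, dim=32):
              super().__init__()
              self.net = nn.Conv2d(classes + dim, 3, 3, padding=1)

          def forward(self, layout, style):
              style_map = style[:, :, None, None].expand(-1, -1, *layout.shape[2:])
              return self.net(torch.cat([layout, style_map], dim=1))

      content, style = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
      image = Generator()(ContentEncoder()(content), StyleEncoder()(style))
      print(image.shape)   # image with the target layout and the target style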
  • Publication number: 20250078406
    Abstract: A modeling system accesses a two-dimensional (2D) input image displayed via a user interface, the 2D input image depicting, at a first view, a first object. A first region of the first object is not represented by pixel values of the 2D input image. The modeling system generates, by applying a 3D representation generation model to the 2D input image, a three-dimensional (3D) representation of the first object that depicts an entirety of the first object including the first region. The modeling system displays, via the user interface, the 3D representation, wherein the 3D representation is viewable via the user interface from a plurality of views including the first view.
    Type: Application
    Filed: September 5, 2023
    Publication date: March 6, 2025
    Inventors: Jae Shin Yoon, Yangtuanfeng Wang, Krishna Kumar Singh, Junying Wang, Jingwan Lu
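    Illustrative sketch (not part of the patent record): one way to picture lifting a single 2D view to a complete 3D representation, here a coarse color-plus-occupancy voxel grid; the patent does not specify this representation, so the grid, resolution, and layers below are purely illustrative assumptions.
      import torch
      from torch import nn

      class RepresentationGenerator(nn.Module):
          # Stand-in for a model that lifts one 2D view to a full 3D representation,
          # covering the whole object including regions hidden in the input view.
          def __init__(self, res=16):
              super().__init__()
              self.res = res
              self.encode = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
              self.decode = nn.Linear(8, 4 * res ** 3)        # RGB + occupancy per voxel

          def forward(self, image):
              voxels = self.decode(self.encode(image))
              return voxels.view(-1, 4, self.res, self.res, self.res)

      input_view = torch.rand(1, 3, 64, 64)                   # the 2D input image
      representation = RepresentationGenerator()(input_view)  # renderable from any view
      print(representation.shape)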
  • Publication number: 20250078327
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize a text-image alignment loss to train a diffusion model to generate digital images from input text. In particular, in some embodiments, the disclosed systems generate a prompt noise representation from a text prompt with a first text concept and a second text concept using a denoising step of a diffusion neural network. Further, in some embodiments, the disclosed systems generate a first concept noise representation from the first text concept and a second concept noise representation from the second text concept. Moreover, the disclosed systems combine the first and second concept noise representations to generate a combined concept noise representation. Accordingly, in some embodiments, by comparing the combined concept noise representation and the prompt noise representation, the disclosed systems modify parameters of the diffusion neural network.
    Type: Application
    Filed: August 29, 2023
    Publication date: March 6, 2025
    Inventors: Zhipeng Bao, Yijun Li, Krishna Kumar Singh
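    Illustrative sketch (not part of the patent record): a toy PyTorch version of the alignment loss described in this abstract, comparing the noise prediction for the full prompt against a combination of per-concept noise predictions; the denoiser, embeddings, and the averaging used to combine the concepts are assumptions for illustration.
      import torch
      from torch import nn

      # Toy stand-in for one denoising step of a text-conditioned diffusion model:
      # it predicts noise from a noisy latent and a text embedding.
      denoiser = nn.Linear(64 + 32, 64)

      def noise_prediction(noisy_latent, text_embedding):
          return denoiser(torch.cat([noisy_latent, text_embedding], dim=1))

      noisy_latent = torch.randn(1, 64)
      prompt_emb = torch.randn(1, 32)       # embedding of the full two-concept prompt
      concept_a_emb = torch.randn(1, 32)    # embedding of the first text concept alone
      concept_b_emb = torch.randn(1, 32)    # embedding of the second text concept alone

      prompt_noise = noise_prediction(noisy_latent, prompt_emb)
      combined_noise = 0.5 * (noise_prediction(noisy_latent, concept_a_emb)
                              + noise_prediction(noisy_latent, concept_b_emb))

      # Alignment loss: the full-prompt prediction should agree with the combined
      # per-concept predictions; backpropagating updates the denoiser's parameters.
      loss = nn.functional.mse_loss(prompt_noise, combined_noise)
      loss.backward()
      print(float(loss))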
  • Publication number: 20250069203
    Abstract: A method, non-transitory computer readable medium, apparatus, and system for image generation are described. An embodiment of the present disclosure includes obtaining an input image, an inpainting mask, and a plurality of content preservation values corresponding to different regions of the inpainting mask, and identifying a plurality of mask bands of the inpainting mask based on the plurality of content preservation values. An image generation model generates an output image based on the input image and the inpainting mask. The output image is generated in a plurality of phases. Each of the plurality of phases uses a corresponding mask band of the plurality of mask bands as an input.
    Type: Application
    Filed: August 24, 2023
    Publication date: February 27, 2025
    Inventors: Yuqian Zhou, Krishna Kumar Singh, Benjamin Delarre, Zhe Lin, Jingwan Lu, Taesung Park, Sohrab Amirghodsi, Elya Shechtman
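    Illustrative sketch (not part of the patent record): a toy version of phased inpainting with mask bands, where each content preservation value widens or narrows a band of the mask and each phase only touches its band; the banding rule and the blur-based "generator" are stand-ins chosen for illustration.
      import torch

      def mask_bands(inpainting_mask, preservation_values):
          # One band per content-preservation value; here the value simply controls how
          # far the band extends from the mask (lower preservation -> wider band).
          bands = []
          for value in preservation_values:
              radius = int((1.0 - value) * 8)
              band = torch.nn.functional.max_pool2d(
                  inpainting_mask, 2 * radius + 1, stride=1, padding=radius)
              bands.append(band)
          return bands

      def generate(image, band):
          # Stand-in for one generation phase: blur-fill only inside the current band.
          filled = torch.nn.functional.avg_pool2d(image, 9, stride=1, padding=4)
          return band * filled + (1 - band) * image

      image = torch.rand(1, 3, 64, 64)
      mask = torch.zeros(1, 1, 64, 64); mask[:, :, 24:40, 24:40] = 1.0
      for band in mask_bands(mask, preservation_values=[0.9, 0.5, 0.1]):
          image = generate(image, band)                       # one phase per mask band
      print(image.shape)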
  • Publication number: 20250037431
    Abstract: Systems and methods for training a Generative Adversarial Network (GAN) using feature regularization are described herein. Embodiments are configured to generate a candidate image using a generator network of a GAN, classify the candidate image as real or generated using a discriminator network of the GAN, and train the GAN to generate realistic images based on the classifying of the candidate image. The training process includes regularizing a gradient with respect to features extracted by the discriminator network.
    Type: Application
    Filed: July 24, 2023
    Publication date: January 30, 2025
    Inventors: Min Jin Chong, Krishna Kumar Singh, Yijun Li, Jingwan Lu
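    Illustrative sketch (not part of the patent record): a minimal discriminator training step with feature regularization, penalizing the gradient of the real/generated score with respect to the discriminator's extracted features; the toy networks and the regularization weight are assumptions for illustration.
      import torch
      from torch import nn

      features = nn.Linear(64, 32)      # discriminator feature extractor
      classifier = nn.Linear(32, 1)     # real / generated head
      generator = nn.Linear(16, 64)     # toy generator

      real = torch.rand(4, 64)
      fake = generator(torch.randn(4, 16)).detach()

      real_feats = features(real)
      real_logits = classifier(real_feats)
      fake_logits = classifier(features(fake))

      bce = nn.functional.binary_cross_entropy_with_logits
      d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
               bce(fake_logits, torch.zeros_like(fake_logits))

      # Feature regularization: penalize the gradient of the score with respect to the
      # features extracted by the discriminator (rather than the input pixels).
      grad_feats, = torch.autograd.grad(real_logits.sum(), real_feats, create_graph=True)
      loss = d_loss + 10.0 * grad_feats.pow(2).sum(dim=1).mean()
      loss.backward()
      print(float(loss))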
  • Publication number: 20250005812
    Abstract: In implementations of systems for human reposing based on multiple input views, a computing device implements a reposing system to receive input data describing: input digital images; pluralities of keypoints corresponding to the input digital images, the pluralities of keypoints representing poses of a person depicted in the input digital images; and a plurality of keypoints representing a target pose. The reposing system generates selection masks corresponding to the input digital images by processing the input data using a machine learning model. The selection masks represent likelihoods of spatial correspondence between pixels of an output digital image and portions of the input digital images. The reposing system generates the output digital image depicting the person in the target pose for display in a user interface based on the selection masks and the input data.
    Type: Application
    Filed: June 28, 2023
    Publication date: January 2, 2025
    Applicant: Adobe Inc.
    Inventors: Rishabh Jain, Mayur Hemani, Mausoom Sarkar, Krishna Kumar Singh, Jingwan Lu, Duygu Ceylan Aksit, Balaji Krishnamurthy
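    Illustrative sketch (not part of the patent record): a toy PyTorch model that predicts per-view selection masks from several input views and their keypoints plus the target keypoints, then blends the views by those masks; the network layout, keypoint count, and blending step are assumptions for illustration.
      import torch
      from torch import nn

      class SelectionMaskNet(nn.Module):
          # Predicts, per input view, the per-pixel likelihood that an output pixel
          # should come from that view, given the views and the source/target poses.
          def __init__(self, views=3, kp_dim=2 * 17):
              super().__init__()
              self.img_net = nn.Conv2d(views * 3, views, 3, padding=1)
              self.pose_net = nn.Linear((views + 1) * kp_dim, views)

          def forward(self, images, source_kps, target_kps):
              pose = torch.cat([source_kps.flatten(1), target_kps.flatten(1)], dim=1)
              bias = self.pose_net(pose)[:, :, None, None]
              logits = self.img_net(images.flatten(1, 2)) + bias
              return logits.softmax(dim=1)                    # masks sum to 1 per pixel

      views, kp = 3, 17
      images = torch.rand(1, views, 3, 64, 64)                # multiple input views
      source_kps = torch.rand(1, views, kp, 2)                # one pose per input view
      target_kps = torch.rand(1, 1, kp, 2)                    # the target pose
      masks = SelectionMaskNet(views)(images, source_kps, target_kps)
      # Blend the input views according to the selection masks to form the reposed output.
      output = (masks.unsqueeze(2) * images).sum(dim=1)
      print(masks.shape, output.shape)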
  • Publication number: 20250005824
    Abstract: Systems and methods for image processing are described. One aspect of the systems and methods includes receiving a plurality of images comprising a first image depicting a first body part and a second image depicting a second body part and encoding, using a texture encoder, the first image and the second image to obtain a first texture embedding and a second texture embedding, respectively. Then, a composite image is generated using a generative decoder, the composite image depicting the first body part and the second body part based on the first texture embedding and the second texture embedding.
    Type: Application
    Filed: June 27, 2023
    Publication date: January 2, 2025
    Inventors: Rishabh Jain, Mayur Hemani, Duygu Ceylan Aksit, Krishna Kumar Singh, Jingwan Lu, Mausoom Sarkar, Balaji Krishnamurthy
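    Illustrative sketch (not part of the patent record): a minimal texture-encoder / generative-decoder pair in the spirit of this abstract, encoding two body-part images separately and decoding a single composite; the shared encoder, embedding size, and decoder are assumptions for illustration.
      import torch
      from torch import nn

      class TextureEncoder(nn.Module):
          # Shared encoder that maps a body-part image to a texture embedding.
          def __init__(self, dim=32):
              super().__init__()
              self.net = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())

          def forward(self, image):
              return self.net(image)

      class GenerativeDecoder(nn.Module):
          # Decodes the concatenated texture embeddings into one composite image
          # depicting both body parts together.
          def __init__(self, dim=32, size=64):
              super().__init__()
              self.size = size
              self.net = nn.Linear(2 * dim, 3 * size * size)

          def forward(self, emb_a, emb_b):
              out = self.net(torch.cat([emb_a, emb_b], dim=1))
              return out.view(-1, 3, self.size, self.size)

      encoder = TextureEncoder()
      first_part, second_part = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
      composite = GenerativeDecoder()(encoder(first_part), encoder(second_part))
      print(composite.shape)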
  • Publication number: 20240428564
    Abstract: In implementations of systems for generating images for human reposing, a computing device implements a reposing system to receive input data describing an input digital image depicting a person in a first pose, a first plurality of keypoints representing the first pose, and a second plurality of keypoints representing a second pose. The reposing system generates a mapping by processing the input data using a first machine learning model. The mapping indicates a plurality of first portions of the person in the second pose that are visible in the input digital image and a plurality of second portions of the person in the second pose that are invisible in the input digital image. The reposing system generates an output digital image depicting the person in the second pose by processing the mapping, the first plurality of keypoints, and the second plurality of keypoints using a second machine learning model.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 26, 2024
    Applicant: Adobe Inc.
    Inventors: Rishabh Jain, Mayur Hemani, Mausoom Sarkar, Krishna Kumar Singh, Jingwan Lu, Duygu Ceylan Aksit, Balaji Krishnamurthy
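    Illustrative sketch (not part of the patent record): the two-stage structure described in this abstract, with a first toy model predicting a visible/invisible mapping from the source and target keypoints and a second toy model generating the reposed image from that mapping plus both keypoint sets; the linear models and resolutions are assumptions for illustration.
      import torch
      from torch import nn

      kp_dim = 2 * 17

      # First model: predicts, per pixel of the target pose, whether that part of the
      # person is visible in the input image or must be hallucinated.
      visibility_model = nn.Linear(kp_dim * 2, 64 * 64)

      # Second model: generates the reposed image from the mapping and both poses.
      generator = nn.Linear(64 * 64 + kp_dim * 2, 3 * 64 * 64)

      source_kps = torch.rand(1, 17, 2).flatten(1)            # first pose (input image)
      target_kps = torch.rand(1, 17, 2).flatten(1)            # second (target) pose

      mapping = torch.sigmoid(visibility_model(torch.cat([source_kps, target_kps], dim=1)))
      output = generator(torch.cat([mapping, source_kps, target_kps], dim=1))
      output = output.view(1, 3, 64, 64)                      # person in the target pose
      print(mapping.shape, output.shape)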
  • Publication number: 20240404013
    Abstract: Embodiments include systems and methods for generative image filling based on text and a reference image. In one aspect, the system obtains an input image, a reference image, and a text prompt. Then, the system encodes the reference image to obtain an image embedding and encodes the text prompt to obtain a text embedding. Subsequently, a composite image is generated based on the input image, the image embedding, and the text embedding.
    Type: Application
    Filed: November 21, 2023
    Publication date: December 5, 2024
    Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhe Lin, Qing Liu, Zhifei Zhang, Sohrab Amirghodsi, Elya Shechtman, Jingwan Lu
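    Illustrative sketch (not part of the patent record): the encode-then-compose flow in this abstract, with stand-in encoders for the reference image and the text prompt and a toy network that fills the input image conditioned on both embeddings; every component and size here is an assumption for illustration.
      import torch
      from torch import nn

      image_encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())   # reference -> embedding
      text_encoder = nn.EmbeddingBag(1000, 32)                               # prompt tokens -> embedding
      compositor = nn.Linear(3 * 64 * 64 + 32 + 32, 3 * 64 * 64)             # fills the input image

      input_image = torch.rand(1, 3, 64, 64)        # image with a region to fill
      reference = torch.rand(1, 3, 64, 64)          # reference image guiding appearance
      prompt = torch.randint(0, 1000, (1, 6))       # pretend-tokenized text prompt

      image_emb = image_encoder(reference)
      text_emb = text_encoder(prompt)
      composite = compositor(torch.cat([input_image.flatten(1), image_emb, text_emb], dim=1))
      composite = composite.view(1, 3, 64, 64)
      print(composite.shape)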
  • Patent number: 12159413
    Abstract: In implementations of systems for image inversion using multiple latent spaces, a computing device implements an inversion system to generate a segment map that segments an input digital image into a first image region and a second image region and assigns the first image region to a first latent space and the second image region to a second latent space that corresponds to a layer of a convolutional neural network. An inverted latent representation of the input digital image is computed using a binary mask for the second image region. The inversion system modifies the inverted latent representation of the input digital image using an edit direction vector that corresponds to a visual feature. An output digital image is generated that depicts a reconstruction of the input digital image having the visual feature based on the modified inverted latent representation of the input digital image.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: December 3, 2024
    Assignee: Adobe Inc.
    Inventors: Gaurav Parmar, Krishna Kumar Singh, Yijun Li, Richard Zhang, Jingwan Lu
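    Illustrative sketch (not part of the patent record): a toy optimization-based inversion with two latent spaces, where a binary mask decides which latent is responsible for which region and an edit direction vector is added before re-generating; the two linear decoders, the mask, and the optimization schedule are assumptions for illustration.
      import torch
      from torch import nn

      decode_region1 = nn.Linear(16, 3 * 32 * 32)   # first latent space (e.g. a global code)
      decode_region2 = nn.Linear(16, 3 * 32 * 32)   # second latent space (e.g. a feature-layer code)

      def reconstruct(z1, z2, mask):
          # The binary mask decides which latent space reconstructs each pixel.
          r1 = decode_region1(z1).view(1, 3, 32, 32)
          r2 = decode_region2(z2).view(1, 3, 32, 32)
          return (1 - mask) * r1 + mask * r2

      def invert(image, mask, steps=100):
          z1 = torch.zeros(1, 16, requires_grad=True)
          z2 = torch.zeros(1, 16, requires_grad=True)
          optim = torch.optim.Adam([z1, z2], lr=0.05)
          for _ in range(steps):
              loss = nn.functional.mse_loss(reconstruct(z1, z2, mask), image)
              optim.zero_grad(); loss.backward(); optim.step()
          return z1.detach(), z2.detach()

      image = torch.rand(1, 3, 32, 32)
      mask = torch.zeros(1, 1, 32, 32); mask[:, :, :, 16:] = 1.0   # segment map as a binary mask
      z1, z2 = invert(image, mask)

      edit_direction = 0.1 * torch.randn(1, 16)     # latent direction for some visual feature
      edited = reconstruct(z1 + edit_direction, z2, mask)          # edited reconstruction
      print(edited.shape)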
  • Publication number: 20240338799
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.
    Type: Application
    Filed: March 3, 2023
    Publication date: October 10, 2024
    Inventors: Yijun Li, Richard Zhang, Krishna Kumar Singh, Jingwan Lu, Gaurav Parmar, Jun-Yan Zhu
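    Illustrative sketch (not part of the patent record): the editing-direction idea from this abstract reduced to a few lines, taking the difference of mean text embeddings for sentences about a source feature and a target feature; the toy encoder and the random "sentences" stand in for a language model and a real text encoder and are assumptions for illustration.
      import torch
      from torch import nn

      # Toy text encoder standing in for a pretrained one; a language prediction model
      # would normally generate many sentences for each visual-feature word.
      text_encoder = nn.EmbeddingBag(1000, 64)

      def embed(sentences):
          # sentences: list of pretend-tokenized sentences (1-D LongTensors of token ids)
          return torch.stack([text_encoder(s.unsqueeze(0)).squeeze(0) for s in sentences])

      source_sentences = [torch.randint(0, 1000, (8,)) for _ in range(5)]  # e.g. about "cat"
      target_sentences = [torch.randint(0, 1000, (8,)) for _ in range(5)]  # e.g. about "dog"

      # The editing direction is the difference of the mean sentence embeddings; adding it
      # to a diffusion model's conditioning steers generation from the source feature
      # toward the target feature.
      edit_direction = embed(target_sentences).mean(0) - embed(source_sentences).mean(0)
      print(edit_direction.shape)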
  • Publication number: 20240338869
    Abstract: An image processing system obtains an input image (e.g., a user provided image, etc.) and a mask indicating an edit region of the image. A user selects an image editing mode for an image generation network from a plurality of image editing modes. The image generation network generates an output image using the input image, the mask, and the image editing mode.
    Type: Application
    Filed: September 26, 2023
    Publication date: October 10, 2024
    Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhifei Zhang, Difan Liu, Zhe Lin, Jianming Zhang, Qing Liu, Jingwan Lu, Elya Shechtman, Sohrab Amirghodsi, Connelly Stuart Barnes
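    Illustrative sketch (not part of the patent record): a toy generator that takes the input image, the edit-region mask, and a user-selected editing mode as a one-hot condition; the mode names, network, and resolution are assumptions for illustration.
      import torch
      from torch import nn

      EDIT_MODES = ["remove", "replace", "expand"]            # illustrative mode names

      generator = nn.Linear(3 * 32 * 32 + 32 * 32 + len(EDIT_MODES), 3 * 32 * 32)

      def edit(image, mask, mode):
          # The chosen editing mode conditions the generator alongside the image and mask.
          mode_onehot = torch.zeros(1, len(EDIT_MODES))
          mode_onehot[0, EDIT_MODES.index(mode)] = 1.0
          x = torch.cat([image.flatten(1), mask.flatten(1), mode_onehot], dim=1)
          return generator(x).view(1, 3, 32, 32)

      image = torch.rand(1, 3, 32, 32)
      mask = torch.zeros(1, 1, 32, 32); mask[:, :, 8:24, 8:24] = 1.0   # edit region
      output = edit(image, mask, mode="replace")
      print(output.shape)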
  • Publication number: 20240331214
    Abstract: Systems and methods for image processing (e.g., image extension or image uncropping) using neural networks are described. One or more aspects include obtaining an image (e.g., a source image, a user provided image, etc.) having an initial aspect ratio, and identifying a target aspect ratio (e.g., via user input) that is different from the initial aspect ratio. The image may be positioned in an image frame having the target aspect ratio, where the image frame includes an image region containing the image and one or more extended regions outside the boundaries of the image. An extended image may be generated (e.g., using a generative neural network), where the extended image includes the image in the image region as well as generated image portions in the extended regions and the one or more generated image portions comprise an extension of a scene element depicted in the image.
    Type: Application
    Filed: March 20, 2024
    Publication date: October 3, 2024
    Inventors: Yuqian Zhou, Elya Shechtman, Zhe Lin, Krishna Kumar Singh, Jingwan Lu, Connelly Stuart Barnes, Sohrab Amirghodsi
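    Illustrative sketch (not part of the patent record): the aspect-ratio arithmetic behind image extension, placing the image in a frame with the target aspect ratio and filling the extended regions; a generative network would synthesize those regions, so the edge-replication fill below is only a stand-in that keeps the sketch self-contained.
      import torch

      def extend_to_aspect(image, target_aspect):
          # Compute the frame size for the target aspect ratio, center the image in it,
          # and fill the extended regions (here by edge replication).
          _, _, h, w = image.shape
          new_w = max(w, int(round(h * target_aspect)))
          new_h = max(h, int(round(w / target_aspect)))
          pad_left = (new_w - w) // 2
          pad_right = new_w - w - pad_left
          pad_top = (new_h - h) // 2
          pad_bottom = new_h - h - pad_top
          return torch.nn.functional.pad(
              image, (pad_left, pad_right, pad_top, pad_bottom), mode="replicate")

      image = torch.rand(1, 3, 512, 512)                      # initial aspect ratio 1:1
      extended = extend_to_aspect(image, target_aspect=16 / 9)
      print(extended.shape)                                   # (1, 3, 512, 910)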
  • Publication number: 20240331236
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.
    Type: Application
    Filed: March 3, 2023
    Publication date: October 3, 2024
    Inventors: Yijun Li, Richard Zhang, Krishna Kumar Singh, Jingwan Lu, Gaurav Parmar, Jun-Yan Zhu
  • Publication number: 20240296607
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.
    Type: Application
    Filed: March 3, 2023
    Publication date: September 5, 2024
    Inventors: Yijun Li, Richard Zhang, Krishna Kumar Singh, Jingwan Lu, Gaurav Parmar, Jun-Yan Zhu
  • Patent number: 12067659
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: August 20, 2024
    Assignee: Adobe Inc.
    Inventors: Yangtuanfeng Wang, Duygu Ceylan Aksit, Krishna Kumar Singh, Niloy J Mitra
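    Illustrative sketch (not part of the patent record): a much-simplified picture of the pose/motion split in this abstract, with a pose branch producing spatial pose features, a motion branch summarizing recent keypoints, a refinement step for temporal coherency, and a motion-derived scaling of the output weights standing in for weight demodulation; all layers, sizes, and the scaling rule are assumptions for illustration.
      import torch
      from torch import nn

      pose_embed = nn.Conv2d(17, 32, 3, padding=1)            # pose signature -> spatial pose features
      motion_embed = nn.GRU(17 * 2, 32, batch_first=True)     # motion signature -> motion features
      refine = nn.Conv2d(32 + 32, 32, 1)                      # motion-aware refinement
      to_rgb = nn.Conv2d(32, 3, 3, padding=1)                 # renders one frame of the character

      pose_maps = torch.rand(1, 17, 64, 64)                   # per-joint heatmaps, current frame
      motion_seq = torch.rand(1, 10, 17 * 2)                  # keypoint trajectory, recent frames

      pose_feats = pose_embed(pose_maps)
      _, motion_feats = motion_embed(motion_seq)              # final hidden state: (1, 1, 32)
      motion_map = motion_feats[0][:, :, None, None].expand(-1, -1, 64, 64)

      # Refine per-frame pose features with motion features for temporal coherency, then
      # scale the output convolution's weights with a motion-derived factor (a crude
      # stand-in for the weight demodulation described in the abstract).
      refined = refine(torch.cat([pose_feats, motion_map], dim=1))
      weight = to_rgb.weight * (1.0 + 0.1 * motion_feats[0].mean())
      frame = torch.nn.functional.conv2d(refined, weight, to_rgb.bias, padding=1)
      print(frame.shape)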
  • Publication number: 20240265505
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure obtain a noise image and guidance information for generating an image. A diffusion model generates an intermediate noise prediction for the image based on the noise image. A conditioning network generates noise modulation parameters. The intermediate noise prediction and the noise modulation parameters are combined to obtain a modified intermediate noise prediction. The diffusion model generates the image based on the modified intermediate noise prediction, wherein the image depicts a scene based on the guidance information.
    Type: Application
    Filed: February 6, 2023
    Publication date: August 8, 2024
    Inventors: Cusuh Ham, Tobias Hinz, Jingwan Lu, Krishna Kumar Singh, Zhifei Zhang
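    Illustrative sketch (not part of the patent record): a toy denoising loop in which a conditioning network turns the guidance into scale/shift noise modulation parameters that are applied to the intermediate noise prediction; the linear stand-ins, the modulation form, and the update rule are assumptions for illustration.
      import torch
      from torch import nn

      diffusion_step = nn.Linear(64, 64)        # toy denoiser: predicts noise for a latent
      conditioning_net = nn.Linear(32, 2 * 64)  # guidance -> scale and shift parameters

      noise_image = torch.randn(1, 64)          # starting noise (flattened latent)
      guidance = torch.randn(1, 32)             # embedding of the guidance information

      latent = noise_image
      for _ in range(4):                        # a few denoising iterations
          intermediate = diffusion_step(latent)                # intermediate noise prediction
          scale, shift = conditioning_net(guidance).chunk(2, dim=1)
          modified = intermediate * (1 + scale) + shift        # modified noise prediction
          latent = latent - 0.25 * modified                    # simplistic update (assumption)
      print(latent.shape)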