Patents by Inventor Hareesh Ravi

Hareesh Ravi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250117972
    Abstract: A method, apparatus, non-transitory computer readable medium, and system for image generation include encoding a text prompt to obtain a text embedding. An image prompt is encoded to obtain an image embedding. Cross-attention is performed on the text embedding and then on the image embedding to obtain a text attention output and an image attention output, respectively. A synthesized image is generated based on the text attention output and the image attention output.
    Type: Application
    Filed: August 28, 2024
    Publication date: April 10, 2025
    Inventors: Hareesh Ravi, Aashish Kumar Misraa, Ajinkya Gorakhnath Kale
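The dual-prompt conditioning in this abstract can be illustrated with a minimal NumPy sketch. Everything here is a toy stand-in (the embedding sizes, the additive combination of the two attention outputs, and the "latent tokens" are assumptions for illustration, not the patented implementation):

```python
import numpy as np

def cross_attention(query, context):
    """Scaled dot-product cross-attention: query tokens attend over context tokens."""
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)            # (n_query, n_context) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the context axis
    return weights @ context                           # (n_query, d) attention output

rng = np.random.default_rng(0)
latent = rng.normal(size=(4, 8))      # image latent tokens acting as queries
text_emb = rng.normal(size=(6, 8))    # encoded text prompt
image_emb = rng.normal(size=(3, 8))   # encoded image prompt

# Cross-attend to the text embedding, then to the image embedding,
# and combine the two attention outputs to guide the synthesized image.
text_attn_out = cross_attention(latent, text_emb)
image_attn_out = cross_attention(latent, image_emb)
combined = text_attn_out + image_attn_out
```

In a real diffusion backbone the two attention outputs would feed later network layers rather than being summed directly; the sum here only shows that both prompts contribute to one output.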
  • Publication number: 20250117970
    Abstract: A method, apparatus, non-transitory computer readable medium, and system for image generation include obtaining a text prompt and a conditioning attribute. The text prompt is encoded to obtain a text embedding. The conditioning attribute is encoded to obtain an attribute embedding. Then a synthesized image is generated using an image generation model based on the text embedding and the attribute embedding. The synthesized image has the conditioning attribute and depicts an element of the text prompt.
    Type: Application
    Filed: April 17, 2024
    Publication date: April 10, 2025
    Inventors: Sachin Madhav Kelkar, Hareesh Ravi, Ritiz Tambi, Ajinkya Gorakhnath Kale
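A hedged sketch of the attribute-conditioned generation described above. The `embed` encoder and the linear "generation model" are hypothetical placeholders; the point is only that the text embedding and the attribute embedding jointly condition one output:

```python
import numpy as np

def embed(text, dim=8):
    """Hypothetical stand-in encoder: a fixed random projection seeded by the text."""
    seed = sum(ord(c) for c in text) % (2**32)
    return np.random.default_rng(seed).normal(size=dim)

def generate(text_emb, attr_emb, out_pixels=16):
    """Toy image generation model conditioned on both embeddings."""
    cond = np.concatenate([text_emb, attr_emb])        # joint conditioning vector
    proj = np.random.default_rng(42).normal(size=(cond.size, out_pixels))
    return np.tanh(cond @ proj)                        # "synthesized image"

text_emb = embed("a red barn at sunset")               # text prompt embedding
attr_emb = embed("style:watercolor")                   # conditioning attribute embedding
image = generate(text_emb, attr_emb)
```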
  • Publication number: 20250117973
    Abstract: A method, apparatus, non-transitory computer readable medium, and system for media processing include obtaining a text prompt and a style input, where the text prompt describes image content and the style input describes an image style, generating a text embedding based on the text prompt, where the text embedding represents the image content, generating a style embedding based on the style input, where the style embedding represents the image style, and generating a synthetic image using an image generation model based on the text embedding and the style embedding, where the text embedding is provided to the image generation model at a first step and the style embedding is provided to the image generation model at a second step after the first step.
    Type: Application
    Filed: October 1, 2024
    Publication date: April 10, 2025
    Inventors: Fengbin Chen, Midhun Harikumar, Ajinkya Gorakhnath Kale, Hareesh Ravi, Venkata Naveen Kumar Yadav Marri
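The distinctive element here is the step schedule: the model is conditioned on content first and on style later. A minimal sketch of such a schedule (string placeholders stand in for the actual embedding vectors):

```python
def conditioning_schedule(num_steps, switch_step, text_emb, style_emb):
    """Content (text) conditioning before switch_step, style conditioning after."""
    return [text_emb if t < switch_step else style_emb for t in range(num_steps)]

# Toy 10-step run: text conditions the first 6 steps, style the remaining 4.
schedule = conditioning_schedule(10, switch_step=6, text_emb="TEXT", style_emb="STYLE")
```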
  • Publication number: 20250095114
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating digital images by conditioning a diffusion neural network with input prompts. In particular, in one or more embodiments, the disclosed systems generate, utilizing a reverse diffusion model, an image noise representation from a first image prompt. Additionally, in some embodiments, the disclosed systems generate, utilizing a diffusion neural network conditioned with a first vector representation of the first image prompt, a first denoised image representation from the image noise representation. Moreover, in some embodiments, the disclosed systems generate, utilizing the diffusion neural network conditioned with a second vector representation of a second image prompt, a second denoised image representation from the image noise representation.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 20, 2025
    Inventors: Hareesh Ravi, Sachin Kelkar, Ajinkya Gorakhnath Kale
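The key idea above, one image noise representation reused under two different prompt conditionings, can be sketched with a toy denoising loop (the step rule is an illustrative stand-in, not the disclosed diffusion model):

```python
import numpy as np

def denoise(noise, cond, steps=20, eta=0.3):
    """Toy denoising loop: each step nudges the state toward the conditioning."""
    x = noise.copy()
    for _ in range(steps):
        x = x + eta * (cond - x)
    return x

rng = np.random.default_rng(7)
shared_noise = rng.normal(size=8)   # image noise representation from "inversion"
cond_a = rng.normal(size=8)         # vector representation of the first image prompt
cond_b = rng.normal(size=8)         # vector representation of the second image prompt

# The same noise representation yields two different denoised image
# representations under the two conditionings.
img_a = denoise(shared_noise, cond_a)
img_b = denoise(shared_noise, cond_b)
```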
  • Publication number: 20250077842
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for selectively conditioning layers of a neural network and utilizing the neural network to generate a digital image. In particular, in some embodiments, the disclosed systems condition an upsampling layer of a neural network with an image vector representation of an image prompt. Additionally, in some embodiments, the disclosed systems condition an additional upsampling layer of the neural network with a text vector representation of a text prompt without the image vector representation of the image prompt. Moreover, in some embodiments, the disclosed systems generate, utilizing the neural network, a digital image from the image vector representation and the text vector representation.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Hareesh Ravi, Sachin Kelkar, Ajinkya Gorakhnath Kale
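A toy sketch of the selective per-layer conditioning described above: one upsampling layer receives the image-prompt vector, the next receives only the text-prompt vector. The layer names, sizes, and the additive conditioning are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.normal(size=8)           # features entering the decoder (toy vector)
image_vec = rng.normal(size=8)   # image-prompt vector representation
text_vec = rng.normal(size=8)    # text-prompt vector representation

# Per-layer conditioning: the first upsampling layer sees the image vector,
# the second sees the text vector without the image vector.
layer_conditioning = {"up_block_1": image_vec, "up_block_2": text_vec}

def upsample_layer(h, cond):
    """Toy upsampling layer: doubles the features and adds its conditioning."""
    up = np.repeat(h, 2)                                  # nearest-neighbor "upsampling"
    return up + np.repeat(cond, up.size // cond.size)[: up.size]

out = h
for name in ["up_block_1", "up_block_2"]:
    out = upsample_layer(out, layer_conditioning[name])
```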
  • Publication number: 20250078349
    Abstract: A method, apparatus, and non-transitory computer readable medium for image generation are described. Embodiments of the present disclosure obtain a content input and a style input via a user interface or from a database. The content input includes a target spatial layout and the style input includes a target style. A content encoder of an image processing apparatus encodes the content input to obtain a spatial layout mask representing the target spatial layout. A style encoder of the image processing apparatus encodes the style input to obtain a style embedding representing the target style. An image generation model of the image processing apparatus generates an image based on the spatial layout mask and the style embedding, where the image includes the target spatial layout and the target style.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 6, 2025
    Inventors: Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Ngoc Khuc, Krishna Kumar Singh, Jingwan Lu, Ajinkya Gorakhnath Kale
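The content/style split above can be pictured with a tiny stand-in: a binary spatial layout mask from the content encoder and a style vector from the style encoder, combined so the style appears only inside the layout. This is a cartoon of the interface, not the disclosed model:

```python
import numpy as np

rng = np.random.default_rng(5)
# Content encoder output: a binary spatial layout mask (where things go).
layout_mask = (rng.random((4, 4)) > 0.5).astype(float)
# Style encoder output: a style embedding (here a 3-vector, like a palette).
style_emb = rng.normal(size=3)

# Toy generation: apply the target style inside the target spatial layout.
image = layout_mask[..., None] * style_emb
```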
  • Publication number: 20240404144
    Abstract: Systems and methods for image processing are described. In embodiments of the present disclosure, a multi-modal encoder of an image processing apparatus encodes a text prompt to obtain a text embedding. A color encoder of the image processing apparatus encodes a color prompt to obtain a color embedding. A diffusion prior model of the image processing apparatus generates an image embedding based on the text embedding and the color embedding. A latent diffusion model of the image processing apparatus generates an image based on the image embedding, where the image includes an element from the text prompt and a color from the color prompt.
    Type: Application
    Filed: June 5, 2023
    Publication date: December 5, 2024
    Inventors: Pranav Vineet Aggarwal, Venkata Naveen Kumar Yadav Marri, Midhun Harikumar, Sachin Madhav Kelkar, Hareesh Ravi, Ajinkya Gorakhnath Kale
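The two-stage pipeline in this abstract (diffusion prior producing an image embedding, latent diffusion model producing the image) can be sketched as a composition of stand-in functions. Both "models" here are single random projections, assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)
text_emb = rng.normal(size=8)    # multi-modal encoder output for the text prompt
color_emb = rng.normal(size=8)   # color encoder output for the color prompt

def diffusion_prior(text_emb, color_emb):
    """Toy prior: maps the two conditioning embeddings to an image embedding."""
    w = np.random.default_rng(0).normal(size=(8, 16))
    return np.tanh(w @ np.concatenate([text_emb, color_emb]))

def latent_diffusion_decode(image_emb, pixels=16):
    """Toy latent diffusion stage: decodes the image embedding to an image."""
    w = np.random.default_rng(1).normal(size=(pixels, image_emb.size))
    return np.tanh(w @ image_emb)

image_emb = diffusion_prior(text_emb, color_emb)
image = latent_diffusion_decode(image_emb)
```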
  • Publication number: 20240371048
    Abstract: Systems and methods for generating abstract backgrounds are described. Embodiments are configured to obtain an input prompt, encode the input prompt to obtain a prompt embedding, and generate a latent vector based on the prompt embedding and a noise vector. Embodiments include a multimodal encoder configured to generate the prompt embedding, which is an intermediate representation of the prompt. In some cases, the prompt includes or indicates an "abstract background" type image. The latent vector is generated using a mapping network of a generative adversarial network (GAN). Embodiments are further configured to generate an image based on the latent vector using the GAN.
    Type: Application
    Filed: May 4, 2023
    Publication date: November 7, 2024
    Inventors: Hareesh Ravi, Ajinkya Gorakhnath Kale
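A minimal sketch of the GAN path described above: the prompt embedding and a noise vector pass through a mapping network to produce a latent, which the generator renders. Both networks are toy random projections standing in for trained models:

```python
import numpy as np

rng = np.random.default_rng(13)
prompt_emb = rng.normal(size=8)   # multimodal encoding of the input prompt
noise = rng.normal(size=8)        # random noise vector

def mapping_network(prompt_emb, noise, latent_dim=12):
    """Toy GAN mapping network: mixes prompt embedding and noise into a latent."""
    w = np.random.default_rng(2).normal(size=(latent_dim, prompt_emb.size + noise.size))
    return np.tanh(w @ np.concatenate([prompt_emb, noise]))

def generator(latent, pixels=16):
    """Toy GAN synthesis stage: renders an image from the latent vector."""
    w = np.random.default_rng(3).normal(size=(pixels, latent.size))
    return np.tanh(w @ latent)

latent = mapping_network(prompt_emb, noise)
image = generator(latent)
```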
  • Publication number: 20240362842
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a diffusion prior neural network for text guided digital image editing. For example, in one or more embodiments the disclosed systems utilize a text-image encoder to generate a base image embedding from the base digital image and an edit text embedding from edit text. Moreover, the disclosed systems utilize a diffusion prior neural network to generate a text-edited image embedding. In particular, the disclosed systems inject the base image embedding at a conceptual editing step of the diffusion prior neural network and condition a set of steps of the diffusion prior neural network after the conceptual editing step utilizing the edit text embedding. Furthermore, the disclosed systems utilize a diffusion neural network to create a modified digital image from the text-edited image embedding and the base image embedding.
    Type: Application
    Filed: April 27, 2023
    Publication date: October 31, 2024
    Inventors: Hareesh Ravi, Sachin Kelkar, Midhun Harikumar, Ajinkya Gorakhnath Kale
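The injection-then-condition schedule in this abstract can be sketched as a toy prior rollout: the base image embedding is injected at the conceptual editing step, and every step after it is conditioned on the edit text embedding. The mixing and update rules are illustrative assumptions, not the disclosed network:

```python
import numpy as np

def edit_prior(steps, edit_step, base_img_emb, edit_text_emb, eta=0.3):
    """Toy diffusion prior rollout with base-embedding injection."""
    x = np.zeros_like(base_img_emb)
    for t in range(steps):
        if t == edit_step:
            x = 0.5 * (x + base_img_emb)          # inject the base image embedding
        if t >= edit_step:
            x = x + eta * (edit_text_emb - x)     # condition later steps on edit text
    return x

rng = np.random.default_rng(23)
base_img_emb = rng.normal(size=8)    # base image embedding from the text-image encoder
edit_text_emb = rng.normal(size=8)   # edit text embedding
edited_emb = edit_prior(20, 5, base_img_emb, edit_text_emb)
```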
  • Publication number: 20240355018
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a diffusion neural network for mask aware image and typography editing. For example, in one or more embodiments the disclosed systems utilize a text-image encoder to generate a base image embedding from a base digital image. Moreover, the disclosed systems generate a mask-segmented image by combining a shape mask with the base digital image. In one or more implementations, the disclosed systems utilize noising steps of a diffusion noising model to generate a mask-segmented image noise map from the mask-segmented image. Furthermore, the disclosed systems utilize a diffusion neural network to create a stylized image corresponding to the shape mask from the base image embedding and the mask-segmented image noise map.
    Type: Application
    Filed: April 20, 2023
    Publication date: October 24, 2024
    Inventors: Pranav Aggarwal, Hareesh Ravi, Midhun Harikumar, Ajinkya Gorakhnath Kale, Fengbin Chen, Venkata Naveen Kumar Yadav Marri
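The mask-segmentation and noising steps above can be sketched directly: combining a shape mask with the base image, then applying one DDPM-style forward-noising step to obtain a noise map. The mask shape, image size, and single noising step are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(17)
base_image = rng.random((8, 8))        # toy grayscale base digital image
shape_mask = np.zeros((8, 8))          # typography/shape mask
shape_mask[2:6, 2:6] = 1.0

# Combine the shape mask with the base image to get the mask-segmented image.
mask_segmented = shape_mask * base_image

# One DDPM-style forward-noising step: x_t = sqrt(abar)*x_0 + sqrt(1-abar)*eps.
alpha_bar = 0.5
eps = rng.normal(size=mask_segmented.shape)
noise_map = np.sqrt(alpha_bar) * mask_segmented + np.sqrt(1 - alpha_bar) * eps
```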
  • Publication number: 20240354895
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure include an image generation network configured to encode a plurality of abstract images using a style encoder to obtain a plurality of abstract style encodings, wherein the style encoder is trained to represent image style separately from image content. A clustering component clusters the plurality of abstract style encodings to obtain an abstract style cluster comprising a subset of the plurality of abstract style encodings. A preset component generates an abstract style transfer preset representing the abstract style cluster.
    Type: Application
    Filed: April 19, 2023
    Publication date: October 24, 2024
    Inventors: Hareesh Ravi, Midhun Harikumar, Taesung Park, Ajinkya Gorakhnath Kale
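The clustering-and-preset step in this abstract can be sketched with a minimal k-means over style encodings, with each cluster center serving as a style-transfer preset. The two synthetic "style families", the encoding dimension, and the choice of k-means itself are assumptions for illustration (the patent only says the encodings are clustered):

```python
import numpy as np

rng = np.random.default_rng(19)
# Style encodings for abstract images (style represented separately from content).
style_encodings = np.vstack([
    rng.normal(loc=-2.0, size=(5, 4)),   # one style family
    rng.normal(loc=+2.0, size=(5, 4)),   # another style family
])

def kmeans(x, k=2, iters=10):
    """Minimal k-means over the style encodings."""
    centers = x[[0, -1]].copy()          # seed with one point from each end
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

centers, labels = kmeans(style_encodings)
presets = centers   # each cluster center acts as an abstract style transfer preset
```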