Patents by Inventor Tharun Mohandoss

Tharun Mohandoss is a named inventor on the following patent filings. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11610433
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: March 21, 2023
    Assignee: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda
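The abstract above describes grouping detected faces by skin tone and pairing each input-image face group with a reference-image face group before color matching. A minimal toy sketch of that grouping-and-pairing step is below; the tolerance value, the scalar skin-tone representation, and both function names are hypothetical simplifications, not the patented implementation (which operates on a learned skin tone model and full color features).

```python
def group_faces(skin_tones, tolerance=0.1):
    # Hypothetical grouping step: skin-tone values within `tolerance`
    # of a group's running mean join that group; otherwise a new group starts.
    groups = []
    for tone in sorted(skin_tones):
        for group in groups:
            if abs(tone - sum(group) / len(group)) <= tolerance:
                group.append(tone)
                break
        else:
            groups.append([tone])
    # Represent each face group by its mean skin-tone value.
    return [sum(group) / len(group) for group in groups]

def match_group_pairs(input_groups, reference_groups):
    # Pair each input-image face group with the closest
    # reference-image face group by skin-tone distance.
    return [
        (in_tone, min(reference_groups, key=lambda ref: abs(ref - in_tone)))
        for in_tone in input_groups
    ]

# Toy skin-tone values for faces detected in each image.
input_groups = group_faces([0.30, 0.32, 0.71])
reference_groups = group_faces([0.28, 0.75])
pairs = match_group_pairs(input_groups, reference_groups)
```

Here the two similar input faces (0.30, 0.32) collapse into one group, which pairs with the 0.28 reference group, while the 0.71 face pairs with the 0.75 reference group; the patented system would then use such pairs to transfer color features, including face skin tones, from reference to input image.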
  • Publication number: 20220058503
Abstract: Various embodiments describe user segmentation. In an example, potential rules are generated by applying a frequency-based analysis to user interaction data points. Each of the potential rules includes a set of attributes of the user interaction data points and indicates that these data points belong to a segment of interest. An objective function is used to select an optimal set of rules from the potential rules for the segment of interest. The potential rules are used as variable inputs to the objective function and this function is optimized based on interpretability and accuracy parameters. Each rule from the optimal set is associated with a group of the segment of interest. The user interaction data points are segmented into the groups by matching attributes of these data points with the rules.
    Type: Application
    Filed: November 5, 2021
    Publication date: February 24, 2022
    Inventors: Ritwik Sinha, Virgil-Artimon Palanciuc, Pranav Ravindra Maneriker, Manish Dash, Tharun Mohandoss, Dhruv Singal
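The segmentation abstract describes mining candidate rules by frequency analysis, then selecting a rule set via an objective that trades off accuracy against interpretability. A toy sketch of that pipeline follows; the attribute names, the `min_support` threshold, and the greedy selection capped at `max_rules` (a crude stand-in for the patent's optimized objective function) are all illustrative assumptions.

```python
from collections import Counter

def mine_candidate_rules(data_points, in_segment, min_support=2):
    # Frequency-based pass: any (attribute, value) pair appearing in at
    # least `min_support` in-segment data points becomes a candidate rule.
    counts = Counter(
        (attr, value)
        for point, flag in zip(data_points, in_segment)
        if flag
        for attr, value in point.items()
    )
    return [rule for rule, count in counts.items() if count >= min_support]

def rule_accuracy(rule, data_points, in_segment):
    # Fraction of the points matched by the rule that truly
    # belong to the segment of interest.
    attr, value = rule
    matches = [flag for point, flag in zip(data_points, in_segment)
               if point.get(attr) == value]
    return sum(matches) / len(matches) if matches else 0.0

def select_rules(candidates, data_points, in_segment, max_rules=2):
    # Greedy stand-in for the patented objective: rank rules by accuracy
    # and cap the count so the segmentation stays interpretable.
    ranked = sorted(candidates,
                    key=lambda rule: rule_accuracy(rule, data_points, in_segment),
                    reverse=True)
    return ranked[:max_rules]

# Toy user interaction data points and segment membership flags.
data_points = [
    {"device": "mobile", "country": "US"},
    {"device": "mobile", "country": "IN"},
    {"device": "desktop", "country": "US"},
    {"device": "mobile", "country": "US"},
]
in_segment = [1, 1, 0, 1]

candidates = mine_candidate_rules(data_points, in_segment)
selected = select_rules(candidates, data_points, in_segment)
```

Each selected rule then defines one group of the segment, and new data points are assigned to groups by matching their attributes against the rules, as the abstract describes.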
  • Patent number: 11200501
Abstract: Various embodiments describe user segmentation. In an example, potential rules are generated by applying a frequency-based analysis to user interaction data points. Each of the potential rules includes a set of attributes of the user interaction data points and indicates that these data points belong to a segment of interest. An objective function is used to select an optimal set of rules from the potential rules for the segment of interest. The potential rules are used as variable inputs to the objective function and this function is optimized based on interpretability and accuracy parameters. Each rule from the optimal set is associated with a group of the segment of interest. The user interaction data points are segmented into the groups by matching attributes of these data points with the rules.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: December 14, 2021
Assignee: Adobe Inc.
    Inventors: Ritwik Sinha, Virgil-Artimon Palanciuc, Pranav Ravindra Maneriker, Manish Dash, Tharun Mohandoss, Dhruv Singal
  • Patent number: 11158090
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
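The shot-matching abstract describes an inference flow: a trained generator takes a source and a reference image and produces a modified source, and a trained discriminator scores how likely the result is color-matched to the reference. The sketch below mimics only that flow with hand-written stand-ins; the statistics-based "generator" (classic per-channel mean/std color transfer) and the "discriminator" likelihood formula are illustrative assumptions, not the patent's trained adversarial networks.

```python
def mean(values):
    return sum(values) / len(values)

def std(values):
    m = mean(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

def toy_generator(source, reference):
    # Stand-in for the trained generator: shift the source channel's
    # mean/std toward the reference (simple color transfer, not a GAN).
    src_mean, src_std = mean(source), std(source)
    ref_mean, ref_std = mean(reference), std(reference)
    scale = ref_std / src_std if src_std else 1.0
    return [(v - src_mean) * scale + ref_mean for v in source]

def toy_discriminator(modified, reference):
    # Stand-in for the trained discriminator: likelihood of a color match,
    # decaying with the gap in channel statistics.
    gap = (abs(mean(modified) - mean(reference))
           + abs(std(modified) - std(reference)))
    return 1.0 / (1.0 + gap)

# Single-channel toy "images" as flat pixel lists.
source = [0.1, 0.2, 0.3]
reference = [0.5, 0.7, 0.9]
matched = toy_generator(source, reference)
likelihood = toy_discriminator(matched, reference)
```

In the patented system both models are learned jointly via adversarial training, and the modified source is output as a shot-match only when the discriminator judges it color-matched to the reference.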
  • Publication number: 20210158570
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
  • Publication number: 20210142042
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
    Type: Application
    Filed: January 21, 2021
    Publication date: May 13, 2021
    Applicant: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda
  • Patent number: 10936853
    Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Kartik Sethi, Oliver Wang, Tharun Mohandoss, Elya Shechtman, Chetan Nanda
  • Publication number: 20190180193
Abstract: Various embodiments describe user segmentation. In an example, potential rules are generated by applying a frequency-based analysis to user interaction data points. Each of the potential rules includes a set of attributes of the user interaction data points and indicates that these data points belong to a segment of interest. An objective function is used to select an optimal set of rules from the potential rules for the segment of interest. The potential rules are used as variable inputs to the objective function and this function is optimized based on interpretability and accuracy parameters. Each rule from the optimal set is associated with a group of the segment of interest. The user interaction data points are segmented into the groups by matching attributes of these data points with the rules.
    Type: Application
    Filed: December 11, 2017
    Publication date: June 13, 2019
    Inventors: Ritwik Sinha, Virgil-Artimon Palanciuc, Pranav Ravindra Maneriker, Manish Dash, Tharun Mohandoss, Dhruv Singal