Patents by Inventor Deepali Aneja

Deepali Aneja has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11875442
    Abstract: Embodiments are disclosed for articulated part extraction using images of animated characters from sprite sheets by a digital design system. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including a plurality of images depicting an animated character in different poses. The disclosed systems and methods further comprise, for each pair of images in the plurality of images, determining, by a first machine learning model, pixel correspondences between pixels of the pair of images, and determining, by a second machine learning model, pixel clusters representing the animated character, each pixel cluster corresponding to a different structural segment of the animated character. The disclosed systems and methods further comprise selecting a subset of clusters that reconstructs the different poses of the animated character. The disclosed systems and methods further comprise creating a rigged animated character based on the selected subset of clusters.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Zhan Xu, Yang Zhou, Deepali Aneja, Evangelos Kalogerakis
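The final step of the abstract above, selecting a subset of pixel clusters that together reconstruct every pose, resembles a set-cover problem. The following is a minimal greedy sketch of that idea; the cluster names, pixel sets, and coverage criterion are illustrative assumptions, not the patent's actual method.

```python
# Hedged sketch: greedily pick clusters until every pose's pixels are
# covered, loosely following the abstract's "select a subset of clusters
# that reconstructs the different poses" step.

def select_cluster_subset(clusters, poses):
    """clusters: {cluster id: set of pixel ids it explains};
    poses: list of sets, each the pixel ids visible in one pose."""
    needed = set().union(*poses)       # all pixels across all poses
    chosen, covered = [], set()
    while covered != needed:
        # Pick the cluster explaining the most still-uncovered pixels.
        best = max(clusters, key=lambda c: len(clusters[c] - covered))
        gain = clusters[best] - covered
        if not gain:                   # no cluster adds coverage: stop
            break
        chosen.append(best)
        covered |= gain
    return chosen

clusters = {"torso": {1, 2, 3}, "arm": {4, 5}, "leg": {5, 6}}
poses = [{1, 2, 4}, {3, 5, 6}]
print(select_cluster_subset(clusters, poses))
```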
  • Publication number: 20240005585
    Abstract: Embodiments are disclosed for articulated part extraction using images of animated characters from sprite sheets by a digital design system. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including a plurality of images depicting an animated character in different poses. The disclosed systems and methods further comprise, for each pair of images in the plurality of images, determining, by a first machine learning model, pixel correspondences between pixels of the pair of images, and determining, by a second machine learning model, pixel clusters representing the animated character, each pixel cluster corresponding to a different structural segment of the animated character. The disclosed systems and methods further comprise selecting a subset of clusters that reconstructs the different poses of the animated character. The disclosed systems and methods further comprise creating a rigged animated character based on the selected subset of clusters.
    Type: Application
    Filed: May 31, 2022
    Publication date: January 4, 2024
    Inventors: Matthew David FISHER, Zhan XU, Yang ZHOU, Deepali ANEJA, Evangelos KALOGERAKIS
  • Patent number: 11682238
    Abstract: Embodiments are disclosed for re-timing a video sequence to an audio sequence based on the detection of motion beats in the video sequence and audio beats in the audio sequence. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input, the first input including a video sequence, detecting motion beats in the video sequence, receiving a second input, the second input including an audio sequence, detecting audio beats in the audio sequence, modifying the video sequence by matching the detected motion beats in the video sequence to the detected audio beats in the audio sequence, and outputting the modified video sequence.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: June 20, 2023
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Deepali Aneja, Dingzeyu Li, Jun Saito, Yang Zhou
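One way to picture the re-timing described above is to pair each detected motion beat with its nearest audio beat and then warp the video timeline piecewise-linearly between the paired beats. The beat times and the nearest-neighbor matching rule below are illustrative assumptions, not the patented method.

```python
# Hedged sketch: match motion beats to nearest audio beats, then map
# original video times onto the audio timeline between matched pairs.
import bisect

def match_beats(motion_beats, audio_beats):
    """Pair each motion beat with its nearest audio beat (times in seconds)."""
    pairs = []
    for t in motion_beats:
        i = bisect.bisect_left(audio_beats, t)
        candidates = audio_beats[max(0, i - 1):i + 1]
        pairs.append((t, min(candidates, key=lambda a: abs(a - t))))
    return pairs

def warp_time(t, pairs):
    """Piecewise-linear map from video time to audio time through the pairs."""
    for (pv, pa), (nv, na) in zip(pairs, pairs[1:]):
        if pv <= t <= nv:
            frac = (t - pv) / (nv - pv)
            return pa + frac * (na - pa)
    # Clamp times outside the matched range to the nearest endpoint.
    return pairs[0][1] if t < pairs[0][0] else pairs[-1][1]

pairs = match_beats([1.0, 2.0, 3.0], [0.9, 2.2, 2.9])
print(pairs)            # matched (motion, audio) beat pairs
print(warp_time(1.5, pairs))
```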
  • Patent number: 11663763
    Abstract: A computer-implemented method including receiving an input image at a first image stage and receiving a request to generate a plurality of variations of the input image at a second image stage. The method including generating, using an auto-regressive generative deep learning model, the plurality of variations of the input image at the second image stage and outputting the plurality of variations of the input image at the second image stage.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: May 30, 2023
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Vineet Batra, Sumit Dhingra, Praveen Kumar Dhanuka, Deepali Aneja, Ankit Phogat
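The abstract above relies on an auto-regressive generative model: variations are produced by conditioning on the first image stage and sampling several continuations token by token. The toy next-token "model" below is a stand-in assumption to show the sampling loop only, not Adobe's network.

```python
# Hedged sketch of auto-regressive sampling: condition on a prefix
# (standing in for the first image stage) and draw several variations.
import random

def toy_next_token_dist(prefix, vocab=("a", "b", "c")):
    # Stand-in for a learned model: weights keyed on the last token.
    last = ord(prefix[-1]) if prefix else 0
    weights = [(last % (i + 2)) + 1 for i in range(len(vocab))]
    total = sum(weights)
    return {t: w / total for t, w in zip(vocab, weights)}

def sample_variation(prefix, length, rng):
    """Extend the prefix token by token, sampling from the model each step."""
    seq = list(prefix)
    for _ in range(length):
        dist = toy_next_token_dist(tuple(seq))
        tokens, probs = zip(*dist.items())
        seq.append(rng.choices(tokens, weights=probs)[0])
    return seq

rng = random.Random(0)
variations = [sample_variation(("a",), 4, rng) for _ in range(3)]
for v in variations:
    print(v)
```

Because each continuation is sampled rather than decoded greedily, repeated calls from the same prefix yield distinct sequences, which is the mechanism behind "a plurality of variations".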
  • Publication number: 20230137233
    Abstract: Generating a vector representation of a hand-drawn sketch is described. To do so, the sketch is segmented into different superpixel regions. Superpixels are grown by distributing superpixel seeds throughout an image of the sketch and assigning unassigned pixels to a neighboring superpixel based on pixel value differences. The border between each pair of adjacent superpixels is then classified as either an active or an inactive boundary, with active boundaries indicating that the border corresponds to a salient sketch stroke. Vector paths are generated by traversing edges between pixel vertices along the active boundaries. To minimize vector paths included in the vector representation, vector paths are greedily generated first for longer curves along active boundaries until each edge is assigned to a vector path. Regions encompassed by vector paths corresponding to a foreground superpixel are filled to produce a high-fidelity vector representation of the sketch.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Applicant: Adobe Inc.
    Inventors: Ashwani Chandil, Vineet Batra, Matthew David Fisher, Deepali Aneja, Ankit Phogat
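The superpixel-growing step above (seeds scattered over the image, unassigned pixels joining a neighbor based on pixel-value differences) can be sketched as a priority-driven flood fill: frontiers with the smallest accumulated value difference expand first. The grid, seed placement, and 4-neighborhood below are illustrative assumptions.

```python
# Hedged sketch: grow superpixels from seeds, absorbing the pixel whose
# value difference to the claiming region's frontier is smallest first.
import heapq

def grow_superpixels(image, seeds):
    """image: 2D list of pixel values; seeds: {label: (row, col)}."""
    h, w = len(image), len(image[0])
    label = [[None] * w for _ in range(h)]
    heap = [(0, r, c, lab) for lab, (r, c) in seeds.items()]
    heapq.heapify(heap)
    while heap:
        cost, r, c, lab = heapq.heappop(heap)
        if label[r][c] is not None:
            continue                       # already claimed by a cheaper path
        label[r][c] = lab
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and label[nr][nc] is None:
                diff = abs(image[nr][nc] - image[r][c])
                heapq.heappush(heap, (cost + diff, nr, nc, lab))
    return label

# A dark background region and a bright stroke region.
image = [[0, 0, 9], [0, 9, 9], [0, 9, 9]]
labels = grow_superpixels(image, {"bg": (0, 0), "stroke": (2, 2)})
print(labels)
```

The border between the resulting `bg` and `stroke` regions is the kind of boundary the abstract then classifies as active or inactive before tracing vector paths along it.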
  • Publication number: 20230131321
    Abstract: A computer-implemented method including receiving an input image at a first image stage and receiving a request to generate a plurality of variations of the input image at a second image stage. The method including generating, using an auto-regressive generative deep learning model, the plurality of variations of the input image at the second image stage and outputting the plurality of variations of the input image at the second image stage.
    Type: Application
    Filed: October 25, 2021
    Publication date: April 27, 2023
    Applicant: Adobe Inc.
    Inventors: Matthew David Fisher, Vineet Batra, Sumit Dhingra, Praveen Kumar Dhanuka, Deepali Aneja, Ankit Phogat
  • Publication number: 20220261573
    Abstract: Embodiments are disclosed for re-timing a video sequence to an audio sequence based on the detection of motion beats in the video sequence and audio beats in the audio sequence. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input, the first input including a video sequence, detecting motion beats in the video sequence, receiving a second input, the second input including an audio sequence, detecting audio beats in the audio sequence, modifying the video sequence by matching the detected motion beats in the video sequence to the detected audio beats in the audio sequence, and outputting the modified video sequence.
    Type: Application
    Filed: February 12, 2021
    Publication date: August 18, 2022
    Inventors: Jimei Yang, Deepali Aneja, Dingzeyu Li, Jun Saito, Yang Zhou
  • Patent number: 11211060
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 28, 2021
    Assignee: Adobe Inc.
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
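The output the model above learns to produce is a timed sequence of visemes, the mouth shapes corresponding to phonemes. A minimal illustration is a direct timed lookup from phonemes to visemes; the phoneme-to-viseme table below is a common animation simplification, not the patent's learned mapping.

```python
# Hedged sketch: convert a timed phoneme track into a timed viseme track.
# The table is a small illustrative subset; unknown phonemes fall back
# to a resting mouth shape.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "B": "closed", "M": "closed",
    "P": "closed", "F": "lip-teeth", "V": "lip-teeth", "S": "teeth",
}

def visemes_for(phoneme_track, default="rest"):
    """phoneme_track: list of (start_sec, phoneme) -> list of (start_sec, viseme)."""
    return [(t, PHONEME_TO_VISEME.get(p, default)) for t, p in phoneme_track]

track = [(0.0, "B"), (0.1, "AA"), (0.3, "S"), (0.4, "XX")]
print(visemes_for(track))
```

The patented approach replaces this fixed table with a machine-learning model trained on aligned audio/viseme pairs, which lets the predicted mouth shapes account for context rather than a one-to-one lookup.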
  • Publication number: 20200294495
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
    Type: Application
    Filed: May 29, 2020
    Publication date: September 17, 2020
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
  • Publication number: 20200279553
    Abstract: A conversational agent that is implemented as a voice-only agent or embodied with a face may match the speech and facial expressions of a user. Linguistic style-matching by the conversational agent may be implemented by identifying prosodic characteristics of the user's speech and synthesizing speech for the virtual agent with the same or similar characteristics. The facial expressions of the user can be identified and mimicked by the face of an embodied conversational agent. Utterances by the virtual agent may be based on a combination of predetermined scripted responses and open-ended responses generated by machine learning techniques. A conversational agent that aligns with the conversational style and facial expressions of the user may be perceived as more trustworthy, easier to understand, and create a more natural human-machine interaction.
    Type: Application
    Filed: February 28, 2019
    Publication date: September 3, 2020
    Inventors: Daniel J. McDuff, Kael R. Rowan, Mary P. Czerwinski, Deepali Aneja, Rens Hoegen
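The prosodic style-matching described above can be pictured as estimating simple statistics of the user's speech and blending the agent's synthesis targets toward them. The feature names and the blending weight below are illustrative assumptions, not the patent application's actual feature set.

```python
# Hedged sketch: estimate user prosody, then nudge the agent's synthesis
# parameters toward it. alpha controls how strongly the agent matches.

def prosody_stats(pitches_hz, duration_sec, n_words):
    """Toy prosodic summary of a user utterance."""
    return {
        "mean_pitch_hz": sum(pitches_hz) / len(pitches_hz),
        "speech_rate_wpm": n_words / duration_sec * 60.0,
    }

def match_style(agent_defaults, user_stats, alpha=0.5):
    """Blend agent defaults toward the user's prosody (alpha = match strength)."""
    return {k: (1 - alpha) * agent_defaults[k] + alpha * user_stats[k]
            for k in agent_defaults}

user = prosody_stats([180.0, 200.0, 220.0], duration_sec=5.0, n_words=12)
agent = {"mean_pitch_hz": 120.0, "speech_rate_wpm": 160.0}
print(match_style(agent, user))
```

Partial blending (rather than copying the user's prosody outright) is one plausible way to keep the agent's voice recognizable while still aligning with the user's conversational style.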
  • Patent number: 10699705
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
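The training-data construction above hinges on time alignment: the second speaker's recording is warped so that shared phonemes land on the same time stamps as in the first recording. Given phoneme boundary times for both speakers, per-segment stretch factors accomplish this; the boundary values below are illustrative assumptions.

```python
# Hedged sketch: compute per-phoneme-segment stretch factors that warp a
# second speaker's recording onto a reference speaker's timeline.

def stretch_factors(ref_bounds, other_bounds):
    """Phoneme boundary lists in seconds (same phoneme count for both).
    Returns one time-stretch factor per segment of the other recording."""
    assert len(ref_bounds) == len(other_bounds)
    factors = []
    for i in range(len(ref_bounds) - 1):
        ref_len = ref_bounds[i + 1] - ref_bounds[i]
        other_len = other_bounds[i + 1] - other_bounds[i]
        factors.append(ref_len / other_len)
    return factors

# Speaker B speaks the same sentence more slowly; each of B's segments is
# compressed or kept so its phonemes match speaker A's time stamps.
print(stretch_factors([0.0, 0.2, 0.5, 0.9], [0.0, 0.3, 0.6, 1.2]))
```

Applying these factors segment by segment gives both recordings the same length with phonemes at matching time stamps, so the reference speaker's viseme labels can be reused directly for the second speaker's audio.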
  • Publication number: 20190392823
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
    Type: Application
    Filed: June 22, 2018
    Publication date: December 26, 2019
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons