Patents by Inventor Wilmot Li

Wilmot Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11461947
    Abstract: Embodiments are disclosed for constrained modification of vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first segment of a vector graphic to be edited, identifying an active region associated with the first segment, wherein the active region includes the first segment and at least one second segment which comprise a geometric primitive, identifying the region of influence including at least one third segment connected to the active region, identifying at least one constraint associated with the active region or the region of influence based at least on the geometric primitive, receiving an edit to the active region, and generating an update for the vector graphic based on the edit and the at least one constraint.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 4, 2022
    Assignee: Adobe Inc.
    Inventors: Ashwani Chandil, Wilmot Li, Vineet Batra, Matthew David Fisher, Kevin Wampler, Daniel Kaufman, Ankit Phogat
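The abstract's core idea (an edited "active region" completing a geometric primitive, plus a connected "region of influence" that follows along) can be illustrated with a toy sketch. This is not the patented implementation; the rigid-translation constraint and the `falloff` blending are assumptions made purely for illustration.

```python
# Toy sketch of the active-region idea: editing one segment of a primitive
# moves the whole primitive rigidly (the constraint), while segments in the
# region of influence are blended partway toward the edit.

def edit_with_constraint(segments, primitive_ids, selected, delta, falloff=0.5):
    """segments: list of (x, y) points; primitive_ids[i] names the geometric
    primitive segment i belongs to. Translate the primitive containing
    `selected` by `delta`; pull other segments along by `falloff`."""
    active_primitive = primitive_ids[selected]
    updated = []
    for i, (x, y) in enumerate(segments):
        if primitive_ids[i] == active_primitive:   # active region: full edit
            updated.append((x + delta[0], y + delta[1]))
        else:                                      # region of influence
            updated.append((x + falloff * delta[0], y + falloff * delta[1]))
    return updated
```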
  • Publication number: 20220277501
    Abstract: Embodiments are disclosed for constrained modification of vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first segment of a vector graphic to be edited, identifying an active region associated with the first segment, wherein the active region includes the first segment and at least one second segment which comprise a geometric primitive, identifying the region of influence including at least one third segment connected to the active region, identifying at least one constraint associated with the active region or the region of influence based at least on the geometric primitive, receiving an edit to the active region, and generating an update for the vector graphic based on the edit and the at least one constraint.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Ashwani Chandil, Wilmot Li, Vineet Batra, Matthew David Fisher, Kevin Wampler, Daniel Kaufman, Ankit Phogat
  • Patent number: 11423549
    Abstract: This disclosure involves mapping body movements to graphical manipulations for real-time human interaction with graphics. Certain aspects involve importing graphical elements and mapping input actions, such as gestures, to output graphical effects, such as moving, resizing, changing opacity, and/or deforming a graphic, by using nodes of a reference skeleton and edges (e.g., links) between the nodes of the reference skeleton and the pins. The mapping is used to trigger and interact with the graphical elements with body position and/or movement.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: August 23, 2022
    Assignee: Adobe Inc.
    Inventors: Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, Wilmot Li
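The node-to-pin mapping the abstract describes can be sketched as a table of links from skeleton joints to graphic properties. This is a hypothetical illustration, not the patented system; the joint names, effect names, and the y-to-opacity scaling are all invented for the example.

```python
# Hypothetical sketch of mapping skeleton joints to graphical effects via
# links: each link ties a joint to a graphic and names the effect it drives.

def apply_links(skeleton, links, graphics):
    """skeleton: {joint: (x, y)}; links: [(joint, graphic_name, effect)];
    graphics: {name: dict of properties}. Mutates and returns graphics."""
    for joint, name, effect in links:
        x, y = skeleton[joint]
        if effect == "move":
            graphics[name]["pos"] = (x, y)
        elif effect == "opacity":
            # e.g. raising a hand fades a graphic in (assumed 0-100 range)
            graphics[name]["opacity"] = max(0.0, min(1.0, y / 100.0))
    return graphics
```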
  • Patent number: 11361467
    Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from a training video including the subject, grouping the plurality of frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group using the frame groups, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: June 14, 2022
    Assignees: Adobe Inc., Princeton University
    Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
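The frame-grouping step can be illustrated with a toy greedy grouping over flattened joint positions. The patent does not specify this algorithm; the distance threshold and the choice of the first frame as a group's representative are assumptions for the sketch.

```python
# Toy sketch of grouping frames by joint positions: each frame joins the
# first group whose representative is within `threshold`, else starts a group.

def group_frames(frames, threshold):
    """frames: list of equal-length tuples of joint coordinates.
    Returns (groups, representatives); each group is a list of frame indices."""
    groups, reps = [], []
    for i, f in enumerate(frames):
        for g, rep in zip(groups, reps):
            dist = sum((a - b) ** 2 for a, b in zip(f, rep)) ** 0.5
            if dist <= threshold:
                g.append(i)
                break
        else:
            groups.append([i])   # new group with this frame as representative
            reps.append(f)
    return groups, reps
```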
  • Patent number: 11282257
    Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, outputting the sequence of character poses.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: March 22, 2022
    Assignees: Adobe Inc., Princeton University
    Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
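The pose-assignment step (mapping each performance frame to a training pose, hence to a character pose) can be sketched as nearest-representative lookup. This is an illustrative stand-in, not the patented method; the Euclidean metric and one-pose-per-representative pairing are assumptions.

```python
# Toy sketch: each performance frame gets the character pose whose training
# representative is nearest in joint space, yielding a pose sequence.

def assign_poses(performance_frames, training_reps, character_poses):
    """training_reps[k] pairs with character_poses[k]."""
    sequence = []
    for frame in performance_frames:
        best = min(range(len(training_reps)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(frame, training_reps[k])))
        sequence.append(character_poses[best])
    return sequence
```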
  • Patent number: 11211060
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 28, 2021
    Assignee: Adobe Inc.
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
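The length-adjustment step (making the second audio sequence the same length as the first so one viseme sequence can label both) can be sketched as linear resampling. The patent does not disclose this particular scheme; uniform linear interpolation is an assumption for the example.

```python
# Toy sketch of adjusting a second audio sequence to a target length by
# linear interpolation over sample positions.

def resample_to_length(samples, target_len):
    if target_len == 1:
        return [samples[0]]
    out = []
    for i in range(target_len):
        pos = i * (len(samples) - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```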
  • Patent number: 11182905
    Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: November 23, 2021
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
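The directed-graph encoding and correspondence computation can be illustrated with a toy traversal-based matcher. This is a simplification invented for the sketch, not the patented algorithm: it matches elements by position in a depth-first traversal, which only works when the two sets share the same structure, as the abstract assumes.

```python
# Toy sketch: encode each element set as a parent -> children adjacency and
# match elements across sets by depth-first traversal order.

def dfs_order(children, root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(children.get(node, [])))
    return order

def correspond(children_a, root_a, children_b, root_b):
    a = dfs_order(children_a, root_a)
    b = dfs_order(children_b, root_b)
    if len(a) != len(b):
        return None   # structures differ; no full correspondence
    return dict(zip(a, b))
```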
  • Patent number: 11164355
    Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: November 2, 2021
    Assignee: Adobe Inc.
    Inventors: Ankit Phogat, Kevin Wampler, Wilmot Li, Matthew David Fisher, Vineet Batra, Daniel Kaufman
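The two-tier enforcement the abstract describes (strict constraints enforced exactly, sculpting constraints applied sequentially in priority order) can be sketched with constraints modeled as functions. The function-per-constraint representation and the example constraints are assumptions for illustration, not the patented formulation.

```python
# Illustrative sketch: strict constraints are applied as-is; sculpting
# constraints are (priority, fn) pairs applied one at a time in priority order.

def enforce(points, strict_constraints, sculpting_constraints):
    """Each constraint maps a list of (x, y) points to a new list."""
    for c in strict_constraints:
        points = c(points)
    for _, c in sorted(sculpting_constraints, key=lambda pc: pc[0]):
        points = c(points)
    return points
```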
  • Publication number: 20210335026
    Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
    Type: Application
    Filed: April 23, 2020
    Publication date: October 28, 2021
    Inventors: Ankit Phogat, Kevin Wampler, Wilmot Li, Matthew David Fisher, Vineet Batra, Daniel Kaufman
  • Publication number: 20210295527
    Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 23, 2021
    Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
  • Publication number: 20210158565
    Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from a training video including the subject, grouping the plurality of frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group using the frame groups, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
  • Publication number: 20210158593
    Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, outputting the sequence of character poses.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
  • Publication number: 20210150731
    Abstract: This disclosure involves mapping body movements to graphical manipulations for real-time human interaction with graphics. Certain aspects involve importing graphical elements and mapping input actions, such as gestures, to output graphical effects, such as moving, resizing, changing opacity, and/or deforming a graphic, by using nodes of a reference skeleton and edges (e.g., links) between the nodes of the reference skeleton and the pins. The mapping is used to trigger and interact with the graphical elements with body position and/or movement.
    Type: Application
    Filed: November 14, 2019
    Publication date: May 20, 2021
    Inventors: Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, Wilmot Li
  • Patent number: 10896161
    Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: January 19, 2021
    Assignee: Adobe Inc.
    Inventors: Lubomira A. Dontcheva, Wilmot Li, Morgan Dixon, Jasper O'Leary, Holger Winnemoeller
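The linked snapshot-and-notes structure can be sketched as a simple backward-linked chain. The class and field names here are invented for the illustration; the patent describes the concept, not this implementation.

```python
# Toy sketch: each snapshot links to its predecessor and carries stakeholder
# notes, so walking the links back reconstructs the design timeline.

class Snapshot:
    def __init__(self, design, previous=None):
        self.design = design        # the design iteration captured
        self.previous = previous    # link to the prior snapshot, if any
        self.notes = []             # contextual notes from stakeholders

def timeline(snapshot):
    """Return the chain of snapshots, oldest first."""
    chain = []
    while snapshot is not None:
        chain.append(snapshot)
        snapshot = snapshot.previous
    return list(reversed(chain))
```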
  • Patent number: 10789754
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: September 29, 2020
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
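The three puppet components the abstract names (character-deformational model, skeletal-difference map, visual-texture representation) can be bundled in a toy container. The class, its fields, and the offset-based `stylize` method are hypothetical simplifications, not the patented representation.

```python
# Hypothetical sketch of a style-aware puppet: the skeletal-difference map is
# modeled as per-joint (dx, dy) offsets reapplied to a target pose.

from dataclasses import dataclass

@dataclass
class StyleAwarePuppet:
    deformation_model: dict      # per-part deformation parameters
    skeletal_difference: dict    # joint -> (dx, dy) offset from reference
    texture: dict                # visual-texture representation

    def stylize(self, target_pose):
        """Apply stored skeletal differences to a target pose
        ({joint: (x, y)}), transferring the source character's style."""
        return {j: (x + self.skeletal_difference.get(j, (0, 0))[0],
                    y + self.skeletal_difference.get(j, (0, 0))[1])
                for j, (x, y) in target_pose.items()}
```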
  • Publication number: 20200294495
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
    Type: Application
    Filed: May 29, 2020
    Publication date: September 17, 2020
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
  • Patent number: 10699705
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
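The training-data construction (mapping the first speaker's viseme labels onto the time-aligned second recording, then training a predictor) can be sketched with a toy model. The nearest-neighbour "model" here is a stand-in invented for the example; the patent's prediction model is not specified this way.

```python
# Toy sketch: the first recording's viseme labels are reused sample-for-sample
# on the aligned second recording, doubling the training set; a 1-NN lookup
# stands in for the trained viseme prediction model.

def build_training_data(audio_a, visemes, audio_b_aligned):
    assert len(audio_a) == len(audio_b_aligned) == len(visemes)
    return list(zip(audio_a, visemes)) + list(zip(audio_b_aligned, visemes))

def predict_viseme(pairs, sample):
    """Nearest-neighbour stand-in for the trained prediction model."""
    return min(pairs, key=lambda p: abs(p[0] - sample))[1]
```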
  • Publication number: 20200142572
    Abstract: The disclosure relates to methods, non-transitory computer readable media, and systems that leverage underlying digital datasets corresponding to static graphics to generate digital animated data narratives. In various embodiments, a digital narrative animation system receives static data graphics and a corresponding dataset and generate scenes for the data narrative using the static data graphics. Moreover, in one or more embodiments, the digital narrative animation system presents a storyboard animation user interface for customizing animated transitions between the scenes of the data narrative. Specifically, the digital narrative animation system can use the corresponding dataset to drive the animation transitions between scenes by linking values based on the data attached to each element, showing a different version of the data graphic based on a subset of the dataset, and/or changing the timing of an animation as a function of the data attached to each value.
    Type: Application
    Filed: November 7, 2018
    Publication date: May 7, 2020
    Inventors: Leo Zhicheng Liu, Wilmot Li, John Thompson
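The abstract's last idea (changing the timing of an animation as a function of the data attached to each value) can be sketched with a toy duration function. The linear base-plus-scale rule and its parameters are assumptions for illustration only.

```python
# Toy sketch of data-driven transition timing: each element's animation
# duration scales with the magnitude of its attached data value.

def transition_durations(values, base=0.5, per_unit=0.1):
    """values: {element: data value}. Returns durations in seconds."""
    return {k: base + per_unit * abs(v) for k, v in values.items()}
```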
  • Publication number: 20200035010
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
    Type: Application
    Filed: July 27, 2018
    Publication date: January 30, 2020
    Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
  • Publication number: 20190392823
    Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
    Type: Application
    Filed: June 22, 2018
    Publication date: December 26, 2019
    Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons