Patents by Inventor Wilmot Li
Wilmot Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11461947
  Abstract: Embodiments are disclosed for constrained modification of vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first segment of a vector graphic to be edited, identifying an active region associated with the first segment, wherein the active region includes the first segment and at least one second segment which comprise a geometric primitive, identifying the region of influence including at least one third segment connected to the active region, identifying at least one constraint associated with the active region or the region of influence based at least on the geometric primitive, receiving an edit to the active region, and generating an update for the vector graphic based on the edit and the at least one constraint.
  Type: Grant
  Filed: February 26, 2021
  Date of Patent: October 4, 2022
  Assignee: Adobe Inc.
  Inventors: Ashwani Chandil, Wilmot Li, Vineet Batra, Matthew David Fisher, Kevin Wampler, Daniel Kaufman, Ankit Phogat
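To give a flavor of the kind of behavior the abstract describes, here is a deliberately simplified sketch: an edit to one segment (the active region) of a geometric primitive is accepted only if a constraint derived from the primitive still holds afterward. All names and the rectangle model are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of constrained vector editing: translating one edge of an
# axis-aligned rectangle (the "active region") while a validity constraint
# derived from the primitive rejects edits that would break it.

def edit_with_constraint(rect, edge, delta):
    """rect is (x0, y0, x1, y1). Move `edge` by `delta`, keeping the
    rectangle a valid primitive (positive width and height)."""
    x0, y0, x1, y1 = rect
    if edge == "right":
        x1 += delta          # the edited segment
    elif edge == "left":
        x0 += delta
    elif edge == "top":
        y1 += delta
    elif edge == "bottom":
        y0 += delta
    # constraint from the primitive: width and height must stay positive
    if x1 <= x0 or y1 <= y0:
        raise ValueError("edit violates the primitive's constraints")
    return (x0, y0, x1, y1)
```

A real implementation would also propagate the edit into a region of influence of connected segments; this sketch only shows the accept/reject role of a constraint.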
- Publication number: 20220277501
  Abstract: Embodiments are disclosed for constrained modification of vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first segment of a vector graphic to be edited, identifying an active region associated with the first segment, wherein the active region includes the first segment and at least one second segment which comprise a geometric primitive, identifying the region of influence including at least one third segment connected to the active region, identifying at least one constraint associated with the active region or the region of influence based at least on the geometric primitive, receiving an edit to the active region, and generating an update for the vector graphic based on the edit and the at least one constraint.
  Type: Application
  Filed: February 26, 2021
  Publication date: September 1, 2022
  Inventors: Ashwani Chandil, Wilmot Li, Vineet Batra, Matthew David Fisher, Kevin Wampler, Daniel Kaufman, Ankit Phogat
- Patent number: 11423549
  Abstract: This disclosure involves mapping body movements to graphical manipulations for real-time human interaction with graphics. Certain aspects involve importing graphical elements and mapping input actions, such as gestures, to output graphical effects, such as moving, resizing, changing opacity, and/or deforming a graphic, by using nodes of a reference skeleton and edges (e.g., links) between the nodes of the reference skeleton and the pins. The mapping is used to trigger and interact with the graphical elements with body position and/or movement.
  Type: Grant
  Filed: November 14, 2019
  Date of Patent: August 23, 2022
  Assignee: Adobe Inc.
  Inventors: Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, Wilmot Li
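The skeleton-to-pin mapping the abstract describes can be pictured as a set of links, each binding a graphic's pin to a skeleton node with a fixed offset, so that tracked body motion drives the graphic. This is a minimal sketch under that assumption; the data model and names are hypothetical, not taken from the patent.

```python
# Illustrative sketch: links bind graphic "pins" to skeleton nodes, so updated
# body-pose node positions move the attached graphical elements.

def apply_links(skeleton, links):
    """skeleton: {node_name: (x, y)} from pose tracking.
    links: [(pin_id, node_name, (dx, dy))] binding a pin to a node.
    Returns the pin positions implied by the current pose."""
    pins = {}
    for pin_id, node, (dx, dy) in links:
        x, y = skeleton[node]
        pins[pin_id] = (x + dx, y + dy)  # pin follows its node plus an offset
    return pins
```

For example, linking a hat graphic's pin to the "head" node makes the hat follow the performer's head in real time.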
- Patent number: 11361467
  Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from a training video including the subject, grouping the plurality of frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group using the frame groups, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
  Type: Grant
  Filed: November 22, 2019
  Date of Patent: June 14, 2022
  Assignees: Adobe Inc., Princeton University
  Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
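The grouping and representative-frame steps in the abstract follow a familiar pattern: represent each frame by its joint positions, cluster them, and pick the frame nearest each cluster center as the representative. The sketch below uses plain k-means as a stand-in; it is an assumption for illustration, not the patented pipeline.

```python
# Hypothetical sketch: frames are flattened joint-position vectors, grouped
# with k-means; each group's representative is the frame nearest its centroid.
import math
import random

def dist(a, b):
    return math.dist(a, b)

def mean(group):
    n = len(group)
    return tuple(sum(f[d] for f in group) / n for d in range(len(group[0])))

def kmeans(frames, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(frames, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in frames:
            i = min(range(k), key=lambda c: dist(f, centroids[c]))
            groups[i].append(f)  # assign frame to its nearest centroid
        new_centroids = []
        for i, g in enumerate(groups):
            new_centroids.append(mean(g) if g else centroids[i])
        centroids = new_centroids
    return groups, centroids

def representative(group, centroid):
    """The frame closest to the centroid stands in for the whole group."""
    return min(group, key=lambda f: dist(f, centroid))
```

Real joint-position vectors would have dozens of dimensions per frame; two suffice to illustrate the grouping.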
- Patent number: 11282257
  Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, outputting the sequence of character poses.
  Type: Grant
  Filed: November 22, 2019
  Date of Patent: March 22, 2022
  Assignees: Adobe Inc., Princeton University
  Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
- Patent number: 11211060
  Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
  Type: Grant
  Filed: May 29, 2020
  Date of Patent: December 28, 2021
  Assignee: Adobe Inc.
  Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
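The length-adjustment step described above, stretching or compressing a second sequence so its length matches the first, can be approximated with simple nearest-neighbor resampling. This is only a stand-in assumption; the patent's actual time alignment of phonemes is more involved than uniform resampling.

```python
# Illustrative sketch: resample a sequence (audio samples or labels) to a
# target length, so per-position viseme labels from one sequence can be
# paired with another sequence of equal length for training.

def resample(seq, target_len):
    """Stretch or compress `seq` to exactly `target_len` items by
    nearest-neighbor indexing."""
    n = len(seq)
    return [seq[min(n - 1, round(i * n / target_len))]
            for i in range(target_len)]
```

Compressing `[1, 2, 3, 4]` to length 2 yields `[1, 3]`; stretching `[1, 2]` to length 4 repeats each item.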
- Patent number: 11182905
  Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
  Type: Grant
  Filed: March 20, 2020
  Date of Patent: November 23, 2021
  Assignee: Adobe Inc.
  Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
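One simple way to realize the idea of encoding structure in a directed graph and matching by structure: give each element a signature built from its depth, child count, and label, then pair elements across the two sets whose signatures coincide. This sketch assumes unique signatures and invented names; it is not the platform's actual algorithm.

```python
# Illustrative sketch: structural signatures over a directed graph of
# graphical elements, used to pair up elements between two similar sets.

def signatures(edges, labels, root):
    """edges: {node: [children]}; labels: {node: tag}.
    Returns {node: (depth, n_children, tag)} as a structural signature."""
    sig, stack = {}, [(root, 0)]
    while stack:
        node, depth = stack.pop()
        kids = edges.get(node, [])
        sig[node] = (depth, len(kids), labels[node])
        stack.extend((k, depth + 1) for k in kids)
    return sig

def correspond(a_sig, b_sig):
    """Map each node of A to the node of B sharing its signature
    (assumes signatures are unique within B)."""
    inv = {s: n for n, s in b_sig.items()}
    return {n: inv[s] for n, s in a_sig.items() if s in inv}
```

For two drawings with the same group/shape structure but different element IDs, this recovers which circle corresponds to which without any explicit mapping being supplied.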
- Patent number: 11164355
  Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
  Type: Grant
  Filed: April 23, 2020
  Date of Patent: November 2, 2021
  Assignee: Adobe Inc.
  Inventors: Ankit Phogat, Kevin Wampler, Wilmot Li, Matthew David Fisher, Vineet Batra, Daniel Kaufman
- Publication number: 20210335026
  Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
  Type: Application
  Filed: April 23, 2020
  Publication date: October 28, 2021
  Inventors: Ankit Phogat, Kevin Wampler, Wilmot Li, Matthew David Fisher, Vineet Batra, Daniel Kaufman
- Publication number: 20210295527
  Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
  Type: Application
  Filed: March 20, 2020
  Publication date: September 23, 2021
  Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
- Publication number: 20210158565
  Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from a training video including the subject, grouping the plurality of frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group using the frame groups, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
  Type: Application
  Filed: November 22, 2019
  Publication date: May 27, 2021
  Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
- Publication number: 20210158593
  Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, outputting the sequence of character poses.
  Type: Application
  Filed: November 22, 2019
  Publication date: May 27, 2021
  Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
- Publication number: 20210150731
  Abstract: This disclosure involves mapping body movements to graphical manipulations for real-time human interaction with graphics. Certain aspects involve importing graphical elements and mapping input actions, such as gestures, to output graphical effects, such as moving, resizing, changing opacity, and/or deforming a graphic, by using nodes of a reference skeleton and edges (e.g., links) between the nodes of the reference skeleton and the pins. The mapping is used to trigger and interact with the graphical elements with body position and/or movement.
  Type: Application
  Filed: November 14, 2019
  Publication date: May 20, 2021
  Inventors: Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, Wilmot Li
- Patent number: 10896161
  Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
  Type: Grant
  Filed: February 28, 2018
  Date of Patent: January 19, 2021
  Assignee: Adobe Inc.
  Inventors: Lubomira A. Dontcheva, Wilmot Li, Morgan Dixon, Jasper O'Leary, Holger Winnemoeller
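The snapshot-and-link structure described above amounts to a backward-linked chain: each snapshot carries its notes and a reference to its predecessor, and walking the links reconstructs the timeline. The class and function names below are hypothetical, chosen only to illustrate that data model.

```python
# Illustrative sketch of the snapshot/notes data model: each snapshot links
# back to the previous one, so the full design history can be replayed.

class Snapshot:
    def __init__(self, design, notes=None, previous=None):
        self.design = design        # the design iteration captured
        self.notes = notes          # stakeholder's contextual notes
        self.previous = previous    # link to the prior snapshot, if any

def timeline(latest):
    """Return snapshots oldest-first by following the back-links."""
    chain = []
    while latest is not None:
        chain.append(latest)
        latest = latest.previous
    return chain[::-1]
```

Presenting `timeline(latest)` alongside each snapshot's notes gives exactly the coherent iteration history the abstract describes.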
- Patent number: 10789754
  Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
  Type: Grant
  Filed: July 27, 2018
  Date of Patent: September 29, 2020
  Assignee: Adobe Inc.
  Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
- Publication number: 20200294495
  Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
  Type: Application
  Filed: May 29, 2020
  Publication date: September 17, 2020
  Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
- Patent number: 10699705
  Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
  Type: Grant
  Filed: June 22, 2018
  Date of Patent: June 30, 2020
  Assignee: Adobe Inc.
  Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
- Publication number: 20200142572
  Abstract: The disclosure relates to methods, non-transitory computer readable media, and systems that leverage underlying digital datasets corresponding to static graphics to generate digital animated data narratives. In various embodiments, a digital narrative animation system receives static data graphics and a corresponding dataset and generates scenes for the data narrative using the static data graphics. Moreover, in one or more embodiments, the digital narrative animation system presents a storyboard animation user interface for customizing animated transitions between the scenes of the data narrative. Specifically, the digital narrative animation system can use the corresponding dataset to drive the animation transitions between scenes by linking values based on the data attached to each element, showing a different version of the data graphic based on a subset of the dataset, and/or changing the timing of an animation as a function of the data attached to each value.
  Type: Application
  Filed: November 7, 2018
  Publication date: May 7, 2020
  Inventors: Leo Zhicheng Liu, Wilmot Li, John Thompson
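At its simplest, a data-driven transition between two scenes interpolates each value that the scenes share through their linked data. The sketch below shows that core idea with plain linear interpolation; the function and the scene-as-dict model are illustrative assumptions, not the system described in the publication.

```python
# Illustrative sketch: a transition between two scenes of a data narrative,
# interpolating every value linked by a shared data key.

def transition(scene_a, scene_b, t):
    """Linearly interpolate values shared by both scenes; 0 <= t <= 1.
    Keys present in only one scene are omitted from the in-between frame."""
    shared = scene_a.keys() & scene_b.keys()
    return {k: scene_a[k] + (scene_b[k] - scene_a[k]) * t for k in shared}
```

Sampling `t` from 0 to 1 over the animation's duration produces the in-between frames; making that duration a function of each value's change is one way to realize data-dependent timing.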
- Publication number: 20200035010
  Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
  Type: Application
  Filed: July 27, 2018
  Publication date: January 30, 2020
  Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
- Publication number: 20190392823
  Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data adjusting a second audio sequence spoken by a second speaker speaking the sentence such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in the first sequence and in the second sequence. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
  Type: Application
  Filed: June 22, 2018
  Publication date: December 26, 2019
  Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons