Patents by Inventor Wilmot Li
Wilmot Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250168442
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
Type: Application
Filed: January 21, 2025
Publication date: May 22, 2025
Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
-
Publication number: 20250140292
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding face-aware scale magnification to the trimmed video (e.g., applying scale magnification to simulate a camera zoom effect that hides shot cuts with respect to the subject's face). For example, as the trimmed video transitions from one video segment to the next video segment, a scale magnification may be applied that zooms in on a detected face at a boundary between the video segments to smooth the transition between video segments.
Type: Application
Filed: February 2, 2024
Publication date: May 1, 2025
Inventors: Anh Lan TRUONG, Deepali ANEJA, Hijung SHIN, Rubaiat HABIB, Jakub FISER, Kishore RADHAKRISHNA, Joel Richard BRANDT, Matthew David FISHER, Zeyu JIN, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
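To make the zoom behavior described in this abstract concrete, here is a minimal illustrative sketch (hypothetical helpers, not the patented implementation): a zoom factor that ramps up just before a cut to mask the shot change, and a crop window that keeps a detected face centered while staying inside the frame.

```python
def zoom_ramp(t, cut_time, ramp=0.5, max_zoom=1.2):
    """Zoom factor over time: ramp from 1.0 up to max_zoom over the `ramp`
    seconds before a cut, so the magnification hides the shot change."""
    if t <= cut_time - ramp:
        return 1.0
    if t >= cut_time:
        return max_zoom
    return 1.0 + (max_zoom - 1.0) * (t - (cut_time - ramp)) / ramp

def face_centered_crop(frame_w, frame_h, face_cx, face_cy, zoom):
    """Crop rectangle (x0, y0, w, h) for a given zoom factor, centered on
    the detected face but clamped so the crop stays inside the frame."""
    crop_w, crop_h = frame_w / zoom, frame_h / zoom
    x0 = min(max(face_cx - crop_w / 2, 0), frame_w - crop_w)
    y0 = min(max(face_cy - crop_h / 2, 0), frame_h - crop_h)
    return x0, y0, crop_w, crop_h
```

Rendering the crop scaled back to full frame size produces the simulated camera zoom; ramping before the boundary and holding after it is one plausible way to smooth the transition between segments.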
-
Publication number: 20250139161
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for cutting down a user's larger input video into an edited video comprising the most important video segments and applying corresponding video effects. Some embodiments of the present invention are directed to adding captioning video effects to the trimmed video (e.g., applying face-aware and non-face-aware captioning to emphasize extracted video segment headings, important sentences, quotes, words of interest, extracted lists, etc.). For example, a prompt is provided to a generative language model to identify portions of a transcript (e.g., extracted scene summaries, important sentences, lists of items discussed in the video, etc.) to apply to corresponding video segments as captions depending on the type of caption (e.g., an extracted heading may be captioned at the start of a corresponding video segment, important sentences and/or extracted list items may be captioned when they are spoken).
Type: Application
Filed: February 2, 2024
Publication date: May 1, 2025
Inventors: Deepali ANEJA, Zeyu JIN, Hijung SHIN, Anh Lan TRUONG, Dingzeyu LI, Hanieh DEILAMSALEHY, Rubaiat HABIB, Matthew David FISHER, Kim Pascal PIMMEL, Wilmot LI, Lubomira Assenova DONTCHEVA
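Setting aside the generative-model extraction step, the timing half of this idea — captioning a sentence "when it is spoken" — amounts to locating the extracted text in a word-timestamped transcript. A minimal sketch, assuming a hypothetical word-record shape of `{"word", "start", "end"}`:

```python
def caption_span(words, caption):
    """Locate a caption phrase in a word-timestamped transcript and return
    the (start, end) time of the matching video span, or None if absent.
    `words` is a list of {"word", "start", "end"} dicts."""
    target = caption.lower().split()
    tokens = [w["word"].lower() for w in words]
    for i in range(len(tokens) - len(target) + 1):
        if tokens[i:i + len(target)] == target:
            return words[i]["start"], words[i + len(target) - 1]["end"]
    return None
```

An extracted heading would instead be anchored to the start time of its whole video segment rather than to the matched words.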
-
Patent number: 12206930
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
Type: Grant
Filed: January 13, 2023
Date of Patent: January 21, 2025
Assignee: Adobe Inc.
Inventors: Kim Pascal Pimmel, Stephen Joseph Diverdi, Jiaju Ma, Rubaiat Habib, Li-Yi Wei, Hijung Shin, Deepali Aneja, John G. Nelson, Wilmot Li, Dingzeyu Li, Lubomira Assenova Dontcheva, Joel Richard Brandt
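The core coupling this abstract describes — stylizing transcript text drives an effect on the corresponding video span — can be sketched with a toy model. The style-to-effect table and field names below are hypothetical illustrations, not the mapping the patent specifies:

```python
# Hypothetical table: which video effect a given text stylization triggers.
EFFECT_FOR_STYLE = {
    "bold": "scale_emphasis",
    "highlight": "color_pop",
    "heading": "title_overlay",
}

class TranscriptEditor:
    """Transcript words carry timestamps, so a text selection maps directly
    to a video time range; stylizing the text applies the paired effect."""

    def __init__(self, words):
        self.words = words  # [{"word", "start", "end"}, ...]
        self.effects = []

    def stylize(self, first, last, style):
        """Apply a stylization to words[first..last] and record the video
        effect over the corresponding time range."""
        record = {
            "style": style,
            "effect": EFFECT_FOR_STYLE[style],
            "start": self.words[first]["start"],
            "end": self.words[last]["end"],
        }
        self.effects.append(record)
        return record
```

The stylization itself stays visible in the transcript, serving as the in-text representation of the effect applied to the video.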
-
Publication number: 20240244287
Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media that provide mechanisms for multimedia effect addition and editing support for text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
Type: Application
Filed: January 13, 2023
Publication date: July 18, 2024
Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
-
Patent number: 11461947
Abstract: Embodiments are disclosed for constrained modification of vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first segment of a vector graphic to be edited, identifying an active region associated with the first segment, wherein the active region includes the first segment and at least one second segment which comprise a geometric primitive, identifying the region of influence including at least one third segment connected to the active region, identifying at least one constraint associated with the active region or the region of influence based at least on the geometric primitive, receiving an edit to the active region, and generating an update for the vector graphic based on the edit and the at least one constraint.
Type: Grant
Filed: February 26, 2021
Date of Patent: October 4, 2022
Assignee: Adobe Inc.
Inventors: Ashwani Chandil, Wilmot Li, Vineet Batra, Matthew David Fisher, Kevin Wampler, Daniel Kaufman, Ankit Phogat
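The region identification step in this abstract can be illustrated with a small graph sketch: the active region is every segment belonging to the same geometric primitive as the selection, and the region of influence is whatever connects to that region from outside it. The data shapes here are illustrative assumptions, not the patent's representation:

```python
def edit_regions(selected, primitives, adjacency):
    """Identify the active region (all segments in the same geometric
    primitive as the selected segment) and the region of influence
    (segments connected to the active region but outside it).

    primitives: list of sets of segment ids, one set per primitive.
    adjacency:  segment id -> iterable of connected segment ids.
    """
    active = set(next(p for p in primitives if selected in p))
    influence = {n for s in active for n in adjacency.get(s, ())} - active
    return active, influence
```

Constraints derived from the primitive (e.g., keeping a rectangle's sides parallel) would then be enforced over the active region while edits propagate into the region of influence.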
-
Publication number: 20220277501
Abstract: Embodiments are disclosed for constrained modification of vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first segment of a vector graphic to be edited, identifying an active region associated with the first segment, wherein the active region includes the first segment and at least one second segment which comprise a geometric primitive, identifying the region of influence including at least one third segment connected to the active region, identifying at least one constraint associated with the active region or the region of influence based at least on the geometric primitive, receiving an edit to the active region, and generating an update for the vector graphic based on the edit and the at least one constraint.
Type: Application
Filed: February 26, 2021
Publication date: September 1, 2022
Inventors: Ashwani CHANDIL, Wilmot LI, Vineet BATRA, Matthew David FISHER, Kevin WAMPLER, Daniel KAUFMAN, Ankit PHOGAT
-
Patent number: 11423549
Abstract: This disclosure involves mapping body movements to graphical manipulations for real-time human interaction with graphics. Certain aspects involve importing graphical elements and mapping input actions, such as gestures, to output graphical effects, such as moving, resizing, changing opacity, and/or deforming a graphic, by using nodes of a reference skeleton and edges (e.g., links) between the nodes of the reference skeleton and the pins. The mapping is used to trigger and interact with the graphical elements with body position and/or movement.
Type: Grant
Filed: November 14, 2019
Date of Patent: August 23, 2022
Assignee: Adobe Inc.
Inventors: Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, Wilmot Li
-
Patent number: 11361467
Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from a training video including the subject, grouping the plurality of frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group using the frame groups, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
Type: Grant
Filed: November 22, 2019
Date of Patent: June 14, 2022
Assignees: Adobe Inc., Princeton University
Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
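The frame-grouping step this abstract outlines — collapse runs of frames whose joint positions barely move, then pick a representative per group — can be sketched as follows. The distance metric and representative choice (middle frame of each run) are illustrative assumptions:

```python
def pose_dist(a, b):
    """Max per-joint L1 distance between two poses (lists of (x, y))."""
    return max(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in zip(a, b))

def group_frames(poses, thresh):
    """Group consecutive frames whose joint positions stay within `thresh`
    of the group's first frame; return the groups and one representative
    frame index per group (here, simply the middle frame)."""
    groups, current = [], [0]
    for i in range(1, len(poses)):
        if pose_dist(poses[i], poses[current[0]]) <= thresh:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    reps = [g[len(g) // 2] for g in groups]
    return groups, reps
```

The representatives would then feed a clustering step (e.g., k-means over pose vectors), whose clusters are visualized for the user to pick from.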
-
Patent number: 11282257
Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, outputting the sequence of character poses.
Type: Grant
Filed: November 22, 2019
Date of Patent: March 22, 2022
Assignees: Adobe Inc., Princeton University
Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
-
Patent number: 11211060
Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
Type: Grant
Filed: May 29, 2020
Date of Patent: December 28, 2021
Assignee: Adobe Inc.
Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
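The length-adjustment step here implies retiming the labeled viseme sequence so it still lines up with the stretched or compressed audio. A minimal sketch of that retiming, assuming a simple uniform (linear) time stretch rather than whatever warping the patent actually uses:

```python
def retime_visemes(visemes, src_len, dst_len):
    """Linearly rescale viseme timings from an audio track of length
    src_len to one of length dst_len, so the same phoneme content lines
    up with the retimed audio.

    visemes: list of (start_sec, end_sec, label) tuples.
    """
    r = dst_len / src_len
    return [(start * r, end * r, label) for (start, end, label) in visemes]
```

With the audio and viseme labels aligned to a common timeline, each (audio window, viseme) pair becomes a training example for the predictive model.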
-
Patent number: 11182905
Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
Type: Grant
Filed: March 20, 2020
Date of Patent: November 23, 2021
Assignee: ADOBE INC.
Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
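One toy way to realize "encode structure as a directed graph, then compute element-to-element correspondence" is to give each node a structural signature (its type plus its children's types) and match nodes with equal signatures greedily. This is an illustrative simplification, not the patented algorithm:

```python
def signature(graph, node):
    """Structural signature of a node: its element type plus the sorted
    types of its children in the directed graph.
    graph: node -> (element_type, [child nodes])."""
    etype, children = graph[node]
    return (etype, tuple(sorted(graph[c][0] for c in children)))

def correspond(g1, g2):
    """Greedy element-to-element correspondence between two graphs that
    share a similar structure, matched by structural signature."""
    pool = {}
    for n in g2:
        pool.setdefault(signature(g2, n), []).append(n)
    mapping = {}
    for n in g1:
        candidates = pool.get(signature(g1, n), [])
        if candidates:
            mapping[n] = candidates.pop(0)
    return mapping
```

A real system would need to disambiguate ties (many nodes sharing a signature), e.g. by geometry or traversal order, which this sketch resolves arbitrarily.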
-
Patent number: 11164355
Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
Type: Grant
Filed: April 23, 2020
Date of Patent: November 2, 2021
Assignee: ADOBE INC.
Inventors: Ankit Phogat, Kevin Wampler, Wilmot Li, Matthew David Fisher, Vineet Batra, Daniel Kaufman
-
Publication number: 20210335026
Abstract: Systems and methods for editing an image based on multiple constraints are described. Embodiments of the systems and methods may identify a change to a vector graphics data structure, generate an update for the vector graphics data structure based on strictly enforcing a handle constraint, a binding constraint, and a continuity constraint, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints according to a priority ordering of the sculpting constraints, generate an additional update for the vector graphics data structure based on strictly enforcing the binding constraint and the continuity constraint and approximately enforcing the handle constraint and the sculpting constraints, adjust the vector graphics data structure sequentially for each of a plurality of sculpting constraints, and display the vector graphic based on the adjusted vector graphics data structure.
Type: Application
Filed: April 23, 2020
Publication date: October 28, 2021
Inventors: ANKIT PHOGAT, KEVIN WAMPLER, WILMOT LI, MATTHEW DAVID FISHER, VINEET BATRA, DANIEL KAUFMAN
-
Publication number: 20210295527
Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
Type: Application
Filed: March 20, 2020
Publication date: September 23, 2021
Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
-
Publication number: 20210158593
Abstract: This disclosure generally relates to character animation. More specifically, but not by way of limitation, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes obtaining a selection of training poses of the subject and a set of character poses, obtaining a performance video of the subject, wherein the performance video includes a plurality of performance frames that include poses performed by the subject, grouping the plurality of performance frames into groups of performance frames, assigning a selected training pose from the selection of training poses to each group of performance frames using the clusters of training frames, generating a sequence of character poses based on the groups of performance frames and their assigned training poses, outputting the sequence of character poses.
Type: Application
Filed: November 22, 2019
Publication date: May 27, 2021
Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
-
Publication number: 20210158565
Abstract: This disclosure generally relates to character animation. More specifically, this disclosure relates to pose selection using data analytics techniques applied to training data, and generating 2D animations of illustrated characters using performance data and the selected poses. An example process or system includes extracting sets of joint positions from a training video including the subject, grouping the plurality of frames into frame groups using the sets of joint positions for each frame, identifying a representative frame for each frame group using the frame groups, clustering the frame groups into clusters using the representative frames, outputting a visualization of the clusters at a user interface, and receiving a selection of a cluster for animation of the subject.
Type: Application
Filed: November 22, 2019
Publication date: May 27, 2021
Inventors: Wilmot Li, Hijung Shin, Adam Finkelstein, Nora Willett
-
Publication number: 20210150731
Abstract: This disclosure involves mapping body movements to graphical manipulations for real-time human interaction with graphics. Certain aspects involve importing graphical elements and mapping input actions, such as gestures, to output graphical effects, such as moving, resizing, changing opacity, and/or deforming a graphic, by using nodes of a reference skeleton and edges (e.g., links) between the nodes of the reference skeleton and the pins. The mapping is used to trigger and interact with the graphical elements with body position and/or movement.
Type: Application
Filed: November 14, 2019
Publication date: May 20, 2021
Inventors: Nazmus Saquib, Rubaiat Habib Kazi, Li-Yi Wei, Wilmot Li
-
Patent number: 10896161
Abstract: Techniques of managing design iterations include generating data linking selected snapshot histories with contextual notes within a single presentation environment. A designer may generate a design iteration in a design environment. Once the design iteration is complete, the designer may show a snapshot of the design iteration to a stakeholder. The stakeholder then may provide written contextual notes within the design environment. The computer links the contextual notes to the snapshot and stores the snapshot and contextual notes in a database. When the designer generates a new design iteration from the previous design iteration and the contextual notes, the computer generates a new snapshot and a link to the previous snapshot to form a timeline of snapshots. The designer may then present the snapshots, the timeline, and the contextual notes to the stakeholder as a coherent history of how the design of the mobile app evolved to its present state.
Type: Grant
Filed: February 28, 2018
Date of Patent: January 19, 2021
Assignee: ADOBE INC.
Inventors: Lubomira A. Dontcheva, Wilmot Li, Morgan Dixon, Jasper O'Leary, Holger Winnemoeller
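The linked-snapshot timeline described here is, at its core, a simple data model: each snapshot carries its notes and a link to its predecessor. A hypothetical sketch of that model (class and field names are illustrative):

```python
class DesignHistory:
    """Snapshots of design iterations linked to contextual notes,
    forming a timeline."""

    def __init__(self):
        self._snapshots = []

    def add_snapshot(self, design, notes=""):
        """Store a snapshot with its stakeholder notes and a link to the
        previous snapshot; return its index in the timeline."""
        entry = {
            "design": design,
            "notes": notes,
            "prev": len(self._snapshots) - 1 if self._snapshots else None,
        }
        self._snapshots.append(entry)
        return len(self._snapshots) - 1

    def timeline(self):
        """Snapshots in chronological order with their notes."""
        return [(i, s["notes"]) for i, s in enumerate(self._snapshots)]
```

Walking the `prev` links recovers exactly the "coherent history" presentation the abstract describes.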
-
Patent number: 10789754
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
Type: Grant
Filed: July 27, 2018
Date of Patent: September 29, 2020
Assignee: ADOBE INC.
Inventors: Vladimir Kim, Wilmot Li, Marek Dvorožňák, Daniel Sýkora
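One plausible reading of the "skeletal-difference map" component is a per-joint offset, captured from the stylized source character, that is added onto a target skeleton pose to carry the source's motion style over. This sketch illustrates that reading only; it is not the patented deformation model:

```python
def apply_skeletal_difference(target_pose, diff_map):
    """Offset each joint of a target skeleton pose by the per-joint
    (dx, dy) difference captured from the stylized source character.

    target_pose: joint name -> (x, y)
    diff_map:    joint name -> (dx, dy)
    """
    return {joint: (x + diff_map[joint][0], y + diff_map[joint][1])
            for joint, (x, y) in target_pose.items()}
```

The character-deformational model and visual-texture representation would then handle the shape and appearance transfer on top of the retargeted skeleton.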