Patents by Inventor Timothy Langlois
Timothy Langlois has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 12051143
  Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed system can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
  Type: Grant
  Filed: March 1, 2022
  Date of Patent: July 30, 2024
  Assignee: Adobe Inc.
  Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
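The abstract above describes a one-dimensional layer motion element that adapts to a selected animation layer and exposes that layer's frames for selection. As a rough, hypothetical sketch only (the class and method names below are illustrative and do not come from the patent), such an element can be modeled as a scrubber bound to a layer's frame list:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AnimationFrame:
    index: int
    timestamp_ms: float


@dataclass
class AnimationLayer:
    name: str
    frames: List[AnimationFrame] = field(default_factory=list)


class LayerMotionElement:
    """Hypothetical 1-D scrubber bound to a single animation layer."""

    def __init__(self, layer: AnimationLayer) -> None:
        self.layer = layer
        self.selected = 0

    def bind_layer(self, layer: AnimationLayer) -> None:
        # Adapt the element to a newly selected layer and reset the selection.
        self.layer = layer
        self.selected = 0

    def select_by_position(self, position: float) -> AnimationFrame:
        # Map a normalized 1-D position (0.0 to 1.0) onto the layer's frames.
        position = min(max(position, 0.0), 1.0)
        self.selected = round(position * (len(self.layer.frames) - 1))
        return self.layer.frames[self.selected]


# Example: scrub to the midpoint of a 24-frame layer.
layer = AnimationLayer("ball", [AnimationFrame(i, i * 1000 / 24) for i in range(24)])
element = LayerMotionElement(layer)
print(element.select_by_position(0.5).index)  # frame nearest the midpoint
```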
- Patent number: 11967011
  Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed system can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
  Type: Grant
  Filed: March 1, 2022
  Date of Patent: April 23, 2024
  Assignee: Adobe Inc.
  Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
- Patent number: 11812254
  Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
  Type: Grant
  Filed: November 1, 2021
  Date of Patent: November 7, 2023
  Assignee: Adobe Inc.
  Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
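The scene-aware audio patents in this listing describe predicting reverberation decay times from an ordinary recording and then fitting room material parameters so a simulation reproduces that decay. As a heavily simplified, hypothetical illustration (not the patented neural-network method), inverting Sabine's reverberation formula recovers an average absorption coefficient from a target decay time:

```python
def absorption_from_rt60(rt60_s: float, volume_m3: float, surface_m2: float) -> float:
    """Invert Sabine's formula, RT60 = 0.161 * V / (S * alpha), to solve for alpha.

    A toy stand-in for fitting material parameters to a predicted decay time;
    the patented system instead optimizes materials against a full audio
    simulation of the environment geometry.
    """
    alpha = 0.161 * volume_m3 / (surface_m2 * rt60_s)
    return min(alpha, 1.0)  # physical absorption coefficients cannot exceed 1.0


# Example: a predicted RT60 of 0.6 s in a 5 m x 4 m x 3 m room.
volume = 5 * 4 * 3                       # 60 m^3
surface = 2 * (5 * 4 + 5 * 3 + 4 * 3)    # 94 m^2
print(round(absorption_from_rt60(0.6, volume, surface), 2))  # ~0.17
```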
- Publication number: 20230281904
  Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed system can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
  Type: Application
  Filed: March 1, 2022
  Publication date: September 7, 2023
  Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
- Publication number: 20230281903
  Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed system can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
  Type: Application
  Filed: March 1, 2022
  Publication date: September 7, 2023
  Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
- Publication number: 20220060842
  Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
  Type: Application
  Filed: November 1, 2021
  Publication date: February 24, 2022
  Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
- Patent number: 11190898
  Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
  Type: Grant
  Filed: November 5, 2019
  Date of Patent: November 30, 2021
  Assignee: Adobe Inc.
  Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
- Publication number: 20210136510
  Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
  Type: Application
  Filed: November 5, 2019
  Publication date: May 6, 2021
  Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
- Patent number: 10701303
  Abstract: Certain embodiments involve generating and providing spatial audio using a predictive model. For example, a system generates, using a predictive model, a visual representation of visual content providable to a user device by encoding the visual content into the visual representation that indicates a visual element in the visual content. The system generates, using the predictive model, an audio representation of audio associated with the visual content by encoding the audio into the audio representation that indicates an audio element in the audio. The system also generates, using the predictive model, spatial audio based at least in part on the audio element and associating the spatial audio with the visual element. The system can also augment the visual content using the spatial audio by at least associating the spatial audio with the visual content.
  Type: Grant
  Filed: March 27, 2018
  Date of Patent: June 30, 2020
  Assignee: Adobe Inc.
  Inventors: Oliver Wang, Pedro Morgado, Timothy Langlois
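The two spatial audio entries in this listing describe encoding visual content and its accompanying audio, then generating spatial audio tied to a visual element. The sketch below is a deliberately simple stand-in (constant-power stereo panning rather than the learned predictive model), with all names chosen for illustration: it pans a mono signal toward a visual element's horizontal position on screen.

```python
import numpy as np


def pan_to_element(mono: np.ndarray, element_x: float) -> np.ndarray:
    """Constant-power pan of a mono signal toward a visual element.

    element_x is the element's normalized horizontal position (0.0 = left edge,
    1.0 = right edge). This is an illustrative stand-in, not the predictive
    model described in the patent.
    """
    theta = np.clip(element_x, 0.0, 1.0) * (np.pi / 2)    # position -> pan angle
    left, right = np.cos(theta), np.sin(theta)            # constant-power gains
    return np.stack([mono * left, mono * right], axis=0)  # shape: (2, n_samples)


# Example: a 1 kHz tone panned toward an element at 80% of the frame width.
sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
stereo = pan_to_element(tone, 0.8)
print(stereo.shape)  # (2, 16000)
```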
- Publication number: 20190306451
  Abstract: Certain embodiments involve generating and providing spatial audio using a predictive model. For example, a system generates, using a predictive model, a visual representation of visual content providable to a user device by encoding the visual content into the visual representation that indicates a visual element in the visual content. The system generates, using the predictive model, an audio representation of audio associated with the visual content by encoding the audio into the audio representation that indicates an audio element in the audio. The system also generates, using the predictive model, spatial audio based at least in part on the audio element and associating the spatial audio with the visual element. The system can also augment the visual content using the spatial audio by at least associating the spatial audio with the visual content.
  Type: Application
  Filed: March 27, 2018
  Publication date: October 3, 2019
  Inventors: Oliver Wang, Pedro Morgado, Timothy Langlois
- Publication number: 20070185612
  Abstract: Systems and methods for placing a radio frequency identifier (RFID) tag on a bale, reading the RFID tag on the bale, and updating a storage repository that includes information pertaining to the bale. The systems and methods can be used to allow inventory management of bales for quality control.
  Type: Application
  Filed: February 8, 2006
  Publication date: August 9, 2007
  Applicant: Casella Waste Systems, Inc.
  Inventors: S. Stevens, Christopher Scherer, Steven Gray, James Wilborne, Matthew Potter, Scott Charter, Timothy Langlois
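The RFID entry above covers tagging bales, reading the tags, and updating a storage repository used for inventory and quality control. A minimal sketch of such a repository, with an in-memory dictionary standing in for the patent's storage repository and all field names chosen for illustration, might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class BaleRecord:
    tag_id: str
    material: str
    weight_kg: float
    last_scanned: datetime


class BaleRepository:
    """In-memory stand-in for the bale information repository."""

    def __init__(self) -> None:
        self._bales: dict[str, BaleRecord] = {}

    def register(self, record: BaleRecord) -> None:
        # Called when an RFID tag is first placed on a bale.
        self._bales[record.tag_id] = record

    def record_scan(self, tag_id: str) -> BaleRecord:
        # Called when a reader sees the tag again; refresh the scan timestamp.
        bale = self._bales[tag_id]
        bale.last_scanned = datetime.now(timezone.utc)
        return bale


# Example: register a tagged bale and log a later read.
repo = BaleRepository()
repo.register(BaleRecord("TAG-0001", "mixed paper", 680.0, datetime.now(timezone.utc)))
print(repo.record_scan("TAG-0001").material)  # mixed paper
```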
- Publication number: 20050236313
  Abstract: The invention provides novel compositions, columns and methods for improved chromatographic separations. In particular, novel column packings are provided to improve overall chromatographic separation and efficiency. The invention provides a packing material having particles or mixtures of particles that are used for separation of small molecules, proteins or nucleic acids.
  Type: Application
  Filed: July 1, 2004
  Publication date: October 27, 2005
  Inventors: William Barber, Alan Broske, Timothy Langlois