Patents by Inventor Timothy Langlois

Timothy Langlois has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12051143
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed systems can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: July 30, 2024
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
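
A minimal, hypothetical sketch of the interaction model described in the abstract above: a one-dimensional "layer motion element" that adapts to a selected animation layer, exposes its frames for selection, and lets a drag on the canvas modify the frame under the cursor. All class and method names are illustrative assumptions, not Adobe's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Frame:
    index: int
    # Position of the animated design object on the canvas at this frame.
    position: Tuple[float, float] = (0.0, 0.0)


@dataclass
class AnimationLayer:
    name: str
    frames: List[Frame] = field(default_factory=list)


class LayerMotionElement:
    """One-dimensional control mapping a normalized slider value to a frame."""

    def __init__(self, layer: AnimationLayer):
        self.layer = layer  # adapts to whichever layer is currently selected

    def frame_at(self, t: float) -> Frame:
        """Map t in [0, 1] along the element to a selectable frame."""
        t = min(max(t, 0.0), 1.0)
        idx = round(t * (len(self.layer.frames) - 1))
        return self.layer.frames[idx]

    def move_frame(self, t: float, new_position: Tuple[float, float]) -> None:
        """Modify the frame under the cursor, e.g. after a drag on the canvas."""
        self.frame_at(t).position = new_position


# Example: select the middle frame of a 5-frame layer and nudge the object.
layer = AnimationLayer("ball", [Frame(i, (i * 10.0, 0.0)) for i in range(5)])
element = LayerMotionElement(layer)
element.move_frame(0.5, (20.0, 15.0))
print(element.frame_at(0.5))
```
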
  • Patent number: 11967011
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed systems can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: April 23, 2024
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
  • Patent number: 11812254
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: November 7, 2023
    Assignee: Adobe Inc.
    Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
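
A hedged sketch of the pipeline the abstract above describes: predict per-band reverberation decay times (T60) and an equalization curve from a recording with a neural model, fit a material absorption parameter so a room simulation matches the predicted decay, then render source audio with the resulting reverb. The "models" below are stand-in stubs under assumed room dimensions, not the patented networks.

```python
import numpy as np


def predict_t60_and_eq(recording: np.ndarray, sr: int):
    """Stand-in for the neural predictor: per-band T60 (s) and EQ gains."""
    bands = np.array([125, 250, 500, 1000, 2000, 4000], dtype=float)
    t60 = np.full(bands.shape, 0.6)   # pretend the net predicted 0.6 s everywhere
    eq_gains = np.ones_like(bands)    # flat equalization as a placeholder
    return bands, t60, eq_gains


def fit_absorption(target_t60: float, room_volume: float, surface_area: float) -> float:
    """Choose the material parameter so the simulated T60 matches the prediction."""
    # Sabine's formula (T60 = 0.161 V / (a S)) is invertible in closed form;
    # a real system would run an iterative optimizer against a full simulation.
    return 0.161 * room_volume / (target_t60 * surface_area)


def render_scene_aware(dry: np.ndarray, sr: int, t60: float, eq_gain: float) -> np.ndarray:
    """Apply a simple exponentially decaying reverb tail scaled by the EQ gain."""
    n = int(sr * t60)
    decay = np.exp(-6.9 * np.arange(n) / n)           # ~60 dB decay over T60
    ir = eq_gain * decay * np.random.randn(n) * 0.05  # noise-based impulse response
    return np.convolve(dry, ir)[: len(dry)]


sr = 16_000
recording = np.random.randn(sr)                       # stand-in environment recording
bands, t60s, eq = predict_t60_and_eq(recording, sr)
alpha = fit_absorption(float(t60s.mean()), room_volume=60.0, surface_area=94.0)
wet = render_scene_aware(np.random.randn(sr), sr, float(t60s.mean()), float(eq.mean()))
print(f"fitted mean absorption: {alpha:.3f}, output samples: {wet.size}")
```
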
  • Publication number: 20230281904
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed systems can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
    Type: Application
    Filed: March 1, 2022
    Publication date: September 7, 2023
    Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
  • Publication number: 20230281903
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating and modifying digital animations based on user interactions with a unique user interface portraying a one-dimensional layer motion element and/or elements for generating and utilizing animation paths for digital design objects and animation layers. The disclosed systems can provide a dynamic one-dimensional layer motion element that adapts to a selected animation layer and portrays selectable animation frames from the animation layer. The disclosed systems can provide options for generating and modifying various frames of the digital animation based on user interactions with the one-dimensional layer motion element, an animation timeline, and/or a corresponding animation canvas.
    Type: Application
    Filed: March 1, 2022
    Publication date: September 7, 2023
    Inventors: Kazi Rubaiat Habib, Timothy Langlois, Li-Yi Wei, John Simpson, James Corbett, Christopher Nuuja, Brooke Hopper
  • Publication number: 20220060842
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 24, 2022
    Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
  • Patent number: 11190898
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: November 30, 2021
    Assignee: Adobe Inc.
    Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
  • Publication number: 20210136510
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment.
    Type: Application
    Filed: November 5, 2019
    Publication date: May 6, 2021
    Inventors: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
  • Patent number: 10701303
    Abstract: Certain embodiments involve generating and providing spatial audio using a predictive model. For example, a system generates, using a predictive model, a visual representation of visual content providable to a user device by encoding the visual content into the visual representation that indicates a visual element in the visual content. The system generates, using the predictive model, an audio representation of audio associated with the visual content by encoding the audio into the audio representation that indicates an audio element in the audio. The system also generates, using the predictive model, spatial audio based at least in part on the audio element and associates the spatial audio with the visual element. The system can also augment the visual content using the spatial audio by at least associating the spatial audio with the visual content.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, Pedro Morgado, Timothy Langlois
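
A hedged, self-contained sketch of the flow in the abstract above: encode the visual content and the mono audio into feature representations, use them to place the audio element in space, and output the result as first-order ambisonics (W, X, Y, Z channels). The encoders and the localization step are toy stand-ins, not the patented predictive model.

```python
import numpy as np


def encode_visual(frames: np.ndarray) -> np.ndarray:
    """Stand-in visual encoder: mean pixel intensity per frame as a 'representation'."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)


def encode_audio(mono: np.ndarray, hop: int = 400) -> np.ndarray:
    """Stand-in audio encoder: per-hop RMS energy."""
    trimmed = mono[: len(mono) // hop * hop].reshape(-1, hop)
    return np.sqrt((trimmed ** 2).mean(axis=1))


def predict_direction(visual_feat: np.ndarray, audio_feat: np.ndarray) -> float:
    """Toy 'predictive model': derive an azimuth (radians) from the joint features."""
    return float(np.tanh(visual_feat.mean() - audio_feat.mean())) * np.pi / 2


def spatialize_foa(mono: np.ndarray, azimuth: float) -> np.ndarray:
    """Encode mono audio into first-order ambisonics at the given azimuth."""
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth)
    y = mono * np.sin(azimuth)
    z = np.zeros_like(mono)  # keep the source in the horizontal plane
    return np.stack([w, x, y, z])


frames = np.random.rand(30, 64, 64)      # stand-in 30-frame video
mono = np.random.randn(16_000)           # stand-in 1 s mono track at 16 kHz
az = predict_direction(encode_visual(frames), encode_audio(mono))
foa = spatialize_foa(mono, az)
print(f"azimuth: {az:.2f} rad, ambisonic channels: {foa.shape}")
```
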
  • Publication number: 20190306451
    Abstract: Certain embodiments involve generating and providing spatial audio using a predictive model. For example, a system generates, using a predictive model, a visual representation of visual content providable to a user device by encoding the visual content into the visual representation that indicates a visual element in the visual content. The system generates, using the predictive model, an audio representation of audio associated with the visual content by encoding the audio into the audio representation that indicates an audio element in the audio. The system also generates, using the predictive model, spatial audio based at least in part on the audio element and associates the spatial audio with the visual element. The system can also augment the visual content using the spatial audio by at least associating the spatial audio with the visual content.
    Type: Application
    Filed: March 27, 2018
    Publication date: October 3, 2019
    Inventors: Oliver Wang, Pedro Morgado, Timothy Langlois
  • Publication number: 20070185612
    Abstract: Systems and methods for placing a radio frequency identification (RFID) tag on a bale, reading the RFID tag on the bale, and updating a storage repository that includes information pertaining to the bale. The systems and methods can be used to allow inventory management of bales for quality control.
    Type: Application
    Filed: February 8, 2006
    Publication date: August 9, 2007
    Applicant: Casella Waste Systems, Inc.
    Inventors: S. Stevens, Christopher Scherer, Steven Gray, James Wilborne, Matthew Potter, Scott Charter, Timothy Langlois
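
A minimal, hypothetical sketch of the workflow in the abstract above: assign an RFID tag to a bale, record read events, and update a storage repository so bale inventory and quality data can be tracked. The class names and fields are illustrative assumptions, not the patented system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict


@dataclass
class BaleRecord:
    tag_id: str
    material: str
    weight_kg: float
    last_seen: datetime
    location: str


class BaleRepository:
    """In-memory stand-in for the storage repository holding bale information."""

    def __init__(self):
        self._records: Dict[str, BaleRecord] = {}

    def register(self, record: BaleRecord) -> None:
        """Called when a tag is first placed on a bale."""
        self._records[record.tag_id] = record

    def record_read(self, tag_id: str, location: str) -> BaleRecord:
        """Called each time a reader scans the tag; updates inventory state."""
        record = self._records[tag_id]
        record.last_seen = datetime.now(timezone.utc)
        record.location = location
        return record


repo = BaleRepository()
repo.register(BaleRecord("TAG-0001", "mixed paper", 680.0,
                         datetime.now(timezone.utc), "baler"))
print(repo.record_read("TAG-0001", "warehouse dock 3"))
```
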
  • Publication number: 20050236313
    Abstract: The invention provides novel compositions, columns and methods for improved chromatographic separations. In particular, novel column packings are provided to improve overall chromatographic separation and efficiency. The invention provides a packing material having particles or mixtures of particles that are used for separation of small molecules, proteins or nucleic acids.
    Type: Application
    Filed: July 1, 2004
    Publication date: October 27, 2005
    Inventors: William Barber, Alan Broske, Timothy Langlois