Patents by Inventor Yaniv De Ridder

Yaniv De Ridder has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11763506
    Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: September 19, 2023
    Assignee: Adobe Inc.
    Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
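
The entry above (patent 11,763,506) describes moving a virtual object from a first to a second position along a path whose shape and motion are determined by an animation profile associated with the object. The patent publishes no source code; the Python sketch below is only a hypothetical illustration of that idea, and the names `AnimationProfile`, `ease`, and `animate` are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class AnimationProfile:
    """Hypothetical per-object profile: how the object travels between positions."""
    arc_height: float  # vertical lift at the midpoint of the path
    easing: str        # "linear" or "ease_in_out"

def ease(t: float, mode: str) -> float:
    """Map normalized time t in [0, 1] to eased progress."""
    if mode == "ease_in_out":
        return 3 * t ** 2 - 2 * t ** 3  # smoothstep
    return t

def animate(start, end, profile: AnimationProfile, steps: int = 10):
    """Yield intermediate (x, y, z) positions along a profile-defined path."""
    for i in range(steps + 1):
        t = ease(i / steps, profile.easing)
        x = start[0] + (end[0] - start[0]) * t
        # The arc term lifts the object mid-flight, so the path is not a straight line.
        y = start[1] + (end[1] - start[1]) * t + profile.arc_height * 4 * t * (1 - t)
        z = start[2] + (end[2] - start[2]) * t
        yield (x, y, z)

if __name__ == "__main__":
    hop = AnimationProfile(arc_height=0.2, easing="ease_in_out")
    for pos in animate((0, 0, 0), (1, 0, 1), hop):
        print(pos)
```

Different profiles (a flat glide, a bouncing hop) would then produce different paths between the same two AR positions, which is the behavior the abstract attributes to the animation profile.
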
  • Publication number: 20210256751
    Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
    Type: Application
    Filed: April 15, 2021
    Publication date: August 19, 2021
    Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
  • Patent number: 10984574
    Abstract: The present disclosure relates to an AR animation generation system that identifies an animation profile for animating a virtual object displayed in an augmented reality (AR) scene. The AR animation generation system creates a link between the virtual object and a mobile computing system based upon a position of the virtual object within the AR scene and a position of the mobile computing system in a real-world environment. The link enables determining, for each position of the mobile computing system in the real-world environment, a corresponding position for the virtual object in the AR scene.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 20, 2021
    Assignee: Adobe Inc.
    Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
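
Patent 10,984,574 above turns on a link that maps each real-world position of the mobile device to a corresponding position of the virtual object in the AR scene. One rough, hypothetical reading of such a link is a fixed offset captured when the link is created and reapplied as the device moves; the sketch below assumes simple 3-vector positions and invented names, and is not taken from the patent.

```python
class DeviceObjectLink:
    """Hypothetical link: records the object-to-device offset when created,
    then reapplies it for every later device position."""

    def __init__(self, device_pos, object_pos):
        self.offset = tuple(o - d for o, d in zip(object_pos, device_pos))

    def object_position_for(self, device_pos):
        """Corresponding AR-scene position for the current device position."""
        return tuple(d + off for d, off in zip(device_pos, self.offset))

if __name__ == "__main__":
    link = DeviceObjectLink(device_pos=(0.0, 1.5, 0.0), object_pos=(0.5, 1.0, -1.0))
    # As the device moves, the linked object keeps the same relative placement.
    print(link.object_position_for((0.2, 1.5, -0.3)))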
  • Patent number: 10791412
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: September 29, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
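
Patent 10,791,412 (and the related spatial-audio entries below) describes deriving, per time segment of spatial audio, a direction, intensity, focus, and color, and rendering a visualization from those properties. The sketch below is a purely illustrative first-order ambisonics reading of that idea; the channel layout, the focus measure, and the azimuth-to-hue color mapping are assumptions for this example, not the patented method.

```python
import math

def segment_properties(w, x, y, z):
    """Estimate direction, intensity, focus, and a color for one time segment of
    first-order ambisonic samples (equal-length lists). Illustrative only."""
    # Intensity: RMS of the omnidirectional (W) channel.
    intensity = math.sqrt(sum(s * s for s in w) / len(w))
    # Direction: average of the directional channels over the segment.
    dx, dy, dz = (sum(c) / len(c) for c in (x, y, z))
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    # Focus: how strongly energy concentrates in one direction (0 = diffuse).
    directional = math.sqrt(dx * dx + dy * dy + dz * dz)
    focus = directional / (intensity + 1e-9)
    # Color: assumed mapping of azimuth to hue and intensity to brightness (HSV).
    color = ((azimuth % 360) / 360.0, 1.0, min(1.0, intensity * 4))
    return {"azimuth": azimuth, "elevation": elevation,
            "intensity": intensity, "focus": focus, "color_hsv": color}

if __name__ == "__main__":
    n = 256
    w = [0.3 * math.sin(2 * math.pi * i / n) for i in range(n)]
    x, y, z = [0.2] * n, [0.05] * n, [0.0] * n
    print(segment_properties(w, x, y, z))
```

A renderer could then draw, for each segment, a marker at the computed direction whose size follows intensity, whose spread follows focus, and whose color follows the mapped hue, which matches the visualization attributes the abstract lists.
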
  • Publication number: 20200186957
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
  • Patent number: 10649638
    Abstract: Techniques and systems to support immersive media content navigation and editing are described. A two-dimensional equirectangular projection of a spherical video is generated by a computing device and displayed in a navigator portion of a user interface of a content editing application. A visual position indicator, indicative of a position within the spherical video, is displayed over the 2D equirectangular projection of the spherical video. A portion of the spherical video is determined based on the position, and a planar spherical view of the portion of the spherical video is generated by the computing device and displayed in a compositor portion of the user interface. The navigator portion and the compositor portion are linked such that user input to the navigator portion or the compositor portion of the user interface causes corresponding visual changes in both the navigator portion and the compositor portion of the user interface.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: May 12, 2020
    Assignee: Adobe Inc.
    Inventors: Yaniv De Ridder, Michael Spencer Cragg, Benjamin Adam Farrell
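
In patent 10,649,638 above, the navigator shows a 2D equirectangular projection of a spherical video with a position indicator, while the compositor shows a planar view at that position, and the two stay in sync. One small, assumed piece of such a pipeline is converting the indicator's pixel coordinates into a view direction (yaw/pitch) on the sphere; the helper below sketches that conversion and is not taken from the patent.

```python
def equirect_to_view_direction(px, py, width, height):
    """Map a pixel on a 2D equirectangular projection to (yaw, pitch) in degrees.
    Assumes the projection spans 360 degrees horizontally and 180 degrees vertically,
    with (0, 0) at the top-left corner."""
    yaw = (px / width) * 360.0 - 180.0    # -180 .. 180, left edge to right edge
    pitch = 90.0 - (py / height) * 180.0  # 90 (straight up) .. -90 (straight down)
    return yaw, pitch

if __name__ == "__main__":
    # An indicator at the center of a 4096x2048 projection looks straight ahead.
    print(equirect_to_view_direction(2048, 1024, 4096, 2048))  # (0.0, 0.0)
```

The linked behavior described in the abstract would then amount to feeding this direction to the compositor's planar view whenever the navigator indicator moves, and writing the compositor's current direction back to the indicator when the user pans the planar view.
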
  • Patent number: 10575119
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
  • Patent number: 10453494
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating synchronization of audio with motion imagery. In embodiments, an indication to create a relationship between an audio feature associated with an audio and an imagery feature associated with a motion imagery is received. Thereafter, a relationship is created between the audio feature and the imagery feature in accordance with an instance or a time duration to synchronize the audio with the motion imagery. Based on the relationship between the audio feature and the imagery feature, the imagery feature of the component is automatically manipulated in relation to the audio feature at the designated instance or the time duration.
    Type: Grant
    Filed: January 10, 2017
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Yaniv De Ridder, Michael Spencer Cragg, Colin Cronin Roache
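
Patent 10,453,494 above creates a relationship between an audio feature and an imagery feature so that the imagery feature is manipulated automatically in step with the audio at a designated instance or time duration. A hypothetical sketch of such a binding follows; the feature choices (an amplitude envelope driving a layer's scale) and the linear mapping are assumptions for illustration only.

```python
def bind(audio_values, out_min, out_max):
    """Map a per-frame audio feature (e.g., an amplitude envelope in [0, 1])
    onto an imagery feature range (e.g., a layer's scale), frame by frame."""
    return [out_min + v * (out_max - out_min) for v in audio_values]

if __name__ == "__main__":
    amplitude = [0.0, 0.2, 0.8, 1.0, 0.5]        # assumed amplitude per video frame
    scale_keyframes = bind(amplitude, 100, 140)  # layer scale driven by the audio
    print(scale_keyframes)                       # [100.0, 108.0, 132.0, 140.0, 120.0]
```

Once the relationship exists, re-running the mapping over the bound time duration regenerates the imagery keyframes whenever the audio changes, which is the automatic manipulation the abstract describes.
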
  • Publication number: 20190243530
    Abstract: Techniques and systems to support immersive media content navigation and editing are described. A two-dimensional equirectangular projection of a spherical video is generated by a computing device and displayed in a navigator portion of a user interface of a content editing application. A visual position indicator, indicative of a position within the spherical video, is displayed over the 2D equirectangular projection of the spherical video. A portion of the spherical video is determined based on the position, and a planar spherical view of the portion of the spherical video is generated by the computing device and displayed in a compositor portion of the user interface. The navigator portion and the compositor portion are linked such that user input to the navigator portion or the compositor portion of the user interface causes corresponding visual changes in both the navigator portion and the compositor portion of the user interface.
    Type: Application
    Filed: February 6, 2018
    Publication date: August 8, 2019
    Applicant: Adobe Inc.
    Inventors: Yaniv De Ridder, Michael Spencer Cragg, Benjamin Adam Farrell
  • Publication number: 20190149941
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Application
    Filed: December 12, 2018
    Publication date: May 16, 2019
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
  • Patent number: 10165388
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position the sound is coming from, the intensity of the sound, the focus of the sound, and the color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
  • Publication number: 20180197578
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating synchronization of audio with motion imagery. In embodiments, an indication to create a relationship between an audio feature associated with an audio and an imagery feature associated with a motion imagery is received. Thereafter, a relationship is created between the audio feature and the imagery feature in accordance with an instance or a time duration to synchronize the audio with the motion imagery. Based on the relationship between the audio feature and the imagery feature, the imagery feature of the component is automatically manipulated in relation to the audio feature at the designated instance or the time duration.
    Type: Application
    Filed: January 10, 2017
    Publication date: July 12, 2018
    Inventors: Yaniv De Ridder, Michael Spencer Cragg, Colin Cronin Roache
  • Patent number: 9762316
    Abstract: A gesture is performed by a wireless accessory attempting to pair with a device. The gesture comprises a series of user interactions associated with accessory data detected at the accessory and device data detected at the device. The device begins looking for accessories advertising a Bluetooth service indicating they are attempting to pair. Once an accessory is identified, the device compares the device data to the accessory data for that particular accessory. If the accessory data matches the device data, the gesture detected at the device was made by the accessory and a secure connection can be established. Based on the secure connection, a clock associated with the accessory may synchronize with a clock associated with the device for additional security and fidelity.
    Type: Grant
    Filed: September 3, 2014
    Date of Patent: September 12, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Timothy W. Kukulski, Geoffrey Charles Dowd, Yaniv De Ridder
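
Patent 9,762,316 above (and the related publication below) hinges on comparing accessory-side gesture data with device-side gesture data and pairing only when they match. The abstract does not specify the data format or the comparison; the sketch below assumes each side records the gesture as a series of timestamped taps and declares a match when the tap intervals agree within a tolerance, which is an invented representation for illustration.

```python
def intervals(timestamps):
    """Gaps between consecutive tap timestamps, in seconds."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def gestures_match(accessory_taps, device_taps, tolerance_s=0.05):
    """True if both sides observed the same tap rhythm within a tolerance.
    Assumed comparison; the patent abstract does not spell one out."""
    a, d = intervals(accessory_taps), intervals(device_taps)
    if len(a) != len(d) or not a:
        return False
    return all(abs(x - y) <= tolerance_s for x, y in zip(a, d))

if __name__ == "__main__":
    accessory = [0.00, 0.40, 0.65, 1.30]  # taps detected by the accessory
    device = [2.10, 2.51, 2.74, 3.41]     # the same gesture as seen by the device
    # Absolute clocks differ, but the intervals agree within 50 ms, so they match.
    print(gestures_match(accessory, device))  # True
```

Comparing intervals rather than absolute timestamps sidesteps the fact that the two clocks are not yet synchronized; per the abstract, clock synchronization happens only after the secure connection is established.
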
  • Publication number: 20160065301
    Abstract: A gesture is performed by a wireless accessory attempting to pair with a device. The gesture comprises a series of user interactions associated with accessory data detected at the accessory and device data detected at the device. The device begins looking for accessories advertising a Bluetooth service indicating they are attempting to pair. Once an accessory is identified, the device compares the device data to the accessory data for that particular accessory. If the accessory data matches the device data, the gesture detected at the device was made by the accessory and a secure connection can be established. Based on the secure connection, a clock associated with the accessory may synchronize with a clock associated with the device for additional security and fidelity.
    Type: Application
    Filed: September 3, 2014
    Publication date: March 3, 2016
    Inventors: Timothy W. Kukulski, Geoffrey Charles Dowd, Yaniv De Ridder