Patents by Inventor Yaniv De Ridder
Yaniv De Ridder has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11763506
Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
Type: Grant
Filed: April 15, 2021
Date of Patent: September 19, 2023
Assignee: Adobe Inc.
Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
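The abstract above describes moving a virtual object between two positions along a path shaped by an animation profile. A minimal sketch of that idea in Python follows; the `AnimationProfile` fields (`arc_height`, `easing`) and the specific path shape are illustrative assumptions, not the patented implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class AnimationProfile:
    """Hypothetical profile controlling path shape and timing (names are illustrative)."""
    arc_height: float   # peak height of the path above the straight line
    easing: str         # "linear" or "ease-in-out"

def ease(t: float, mode: str) -> float:
    """Map normalized time t in [0, 1] to eased progress."""
    if mode == "ease-in-out":
        return 0.5 - 0.5 * math.cos(math.pi * t)
    return t

def animate(first, second, profile: AnimationProfile, steps: int = 4):
    """Yield intermediate (x, y, z) positions from first to second along
    an arced path whose shape and timing come from the animation profile."""
    for i in range(steps + 1):
        p = ease(i / steps, profile.easing)
        x = first[0] + (second[0] - first[0]) * p
        y = first[1] + (second[1] - first[1]) * p
        z = first[2] + (second[2] - first[2]) * p
        # Lift the path into an arc; sin(pi*p) is zero at both endpoints.
        y += profile.arc_height * math.sin(math.pi * p)
        yield (x, y, z)
```

Different profiles (e.g. a flat `arc_height` of zero versus a high arc with easing) would give different objects distinct movement characters, which matches the abstract's point that the profile is associated with the virtual object.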
-
Publication number: 20210256751
Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
Type: Application
Filed: April 15, 2021
Publication date: August 19, 2021
Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
-
Patent number: 10984574
Abstract: The present disclosure relates to an AR animation generation system that identifies an animation profile for animating a virtual object displayed in an augmented reality (AR) scene. The AR animation generation system creates a link between the virtual object and a mobile computing system based upon a position of the virtual object within the AR scene and a position of the mobile computing system in a real-world environment. The link enables determining, for each position of the mobile computing system in the real-world environment, a corresponding position for the virtual object in the AR scene.
Type: Grant
Filed: November 22, 2019
Date of Patent: April 20, 2021
Assignee: Adobe Inc.
Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
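The link described above maps every device position to a corresponding virtual-object position. One simple way to realize such a link, sketched here as an assumption (the patent does not specify the mapping), is to capture the spatial offset between object and device at link-creation time and preserve it thereafter:

```python
from dataclasses import dataclass

@dataclass
class Link:
    """Offset between object and device captured when the link is created."""
    offset: tuple  # object_position - device_position, component-wise

def create_link(object_pos, device_pos) -> Link:
    """Record the spatial relationship so later device motion drives the object."""
    return Link(tuple(o - d for o, d in zip(object_pos, device_pos)))

def object_position_for(link: Link, device_pos):
    """For any new device position, the linked object keeps the captured offset."""
    return tuple(d + off for d, off in zip(device_pos, link.offset))
```

With this scheme, moving the device one meter forward moves the linked object one meter forward in the AR scene, giving the one-to-one correspondence the abstract describes.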
-
Patent number: 10791412
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Grant
Filed: February 13, 2020
Date of Patent: September 29, 2020
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
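The abstract above derives per-segment properties (direction, intensity, focus) by analyzing the spatial audio. A rough sketch of such an analysis follows, under the assumption that the spatial audio is first-order ambisonics (B-format, channels W/X/Y/Z); the specific focus metric is an illustrative choice, not the patented method:

```python
import math

def analyze_segment(w, x, y, z):
    """Estimate sound properties for one time segment of first-order
    ambisonic audio (lists of samples per channel). Returns intensity,
    the azimuth the sound comes from, and a 0..1 focus estimate."""
    n = max(len(w), 1)
    energy = sum(s * s for s in w) / n            # overall intensity
    # Components of the acoustic intensity vector (W correlated with X/Y/Z).
    ix = sum(a * b for a, b in zip(w, x))
    iy = sum(a * b for a, b in zip(w, y))
    iz = sum(a * b for a, b in zip(w, z))
    azimuth = math.atan2(iy, ix)                  # direction in the horizontal plane
    norm = math.sqrt(ix * ix + iy * iy + iz * iz)
    # A point source gives a strong, coherent intensity vector (focus near 1);
    # diffuse sound gives a weak one (focus near 0).
    focus = min(norm / (sum(a * a for a in w) + 1e-12), 1.0)
    return {"intensity": energy, "azimuth": azimuth, "focus": focus}
```

A renderer could then place a visual marker at `azimuth`, size it by `intensity`, and sharpen or blur it by `focus`, which mirrors the attribute-driven visualization the abstract describes.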
-
Publication number: 20200186957
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Application
Filed: February 13, 2020
Publication date: June 11, 2020
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
-
Patent number: 10649638
Abstract: Techniques and systems to support immersive media content navigation and editing are described. A two-dimensional equirectangular projection of a spherical video is generated by a computing device and displayed in a navigator portion of a user interface of a content editing application. A visual position indicator, indicative of a position within the spherical video, is displayed over the 2D equirectangular projection of the spherical video. A portion of the spherical video is determined based on the position, and a planar spherical view of the portion of the spherical video is generated by the computing device and displayed in a compositor portion of the user interface. The navigator portion and the compositor portion are linked such that user input to the navigator portion or the compositor portion of the user interface causes corresponding visual changes in both the navigator portion and the compositor portion of the user interface.
Type: Grant
Filed: February 6, 2018
Date of Patent: May 12, 2020
Assignee: Adobe Inc.
Inventors: Yaniv De Ridder, Michael Spencer Cragg, Benjamin Adam Farrell
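The two-way link between the navigator and compositor views above reduces to a coordinate mapping between the 2D equirectangular projection and the viewing direction of the planar view. A minimal sketch, assuming navigator coordinates normalized to [0, 1] (the exact conventions are an assumption, not taken from the patent):

```python
def navigator_to_view(u: float, v: float):
    """Map a position indicator at (u, v) in the 2D equirectangular
    navigator (both in [0, 1]) to the (yaw, pitch) in degrees that the
    compositor's planar view should center on."""
    yaw = (u - 0.5) * 360.0      # full horizontal wrap-around
    pitch = (0.5 - v) * 180.0    # +90 deg at the top edge, -90 deg at the bottom
    return yaw, pitch

def view_to_navigator(yaw: float, pitch: float):
    """Inverse mapping: move the navigator's position indicator when the
    user pans the compositor view, keeping the two portions in sync."""
    return yaw / 360.0 + 0.5, 0.5 - pitch / 180.0
```

Because the two functions are exact inverses, input in either portion of the interface can update the other without drift, which is the linking behavior the abstract describes.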
-
Patent number: 10575119
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Grant
Filed: December 12, 2018
Date of Patent: February 25, 2020
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
-
Patent number: 10453494
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating synchronization of audio with motion imagery. In embodiments, an indication to create a relationship between an audio feature associated with audio content and an imagery feature associated with motion imagery is received. Thereafter, a relationship is created between the audio feature and the imagery feature in accordance with an instance or a time duration to synchronize the audio with the motion imagery. Based on the relationship between the audio feature and the imagery feature, the imagery feature is automatically manipulated in relation to the audio feature at the designated instance or time duration.
Type: Grant
Filed: January 10, 2017
Date of Patent: October 22, 2019
Assignee: Adobe Inc.
Inventors: Yaniv De Ridder, Michael Spencer Cragg, Colin Cronin Roache
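The relationship described above drives an imagery feature from an audio feature over time. A minimal sketch of such a binding follows; the function names and the assumption that feature values are normalized to [0, 1] are illustrative, not the patented API:

```python
def make_relationship(audio_feature, min_out: float, max_out: float):
    """Bind an imagery property to an audio feature: given a function
    audio_feature(t) returning a value in [0, 1], produce a function
    that maps time t to a value in [min_out, max_out]."""
    def imagery_value(t: float) -> float:
        v = audio_feature(t)
        return min_out + (max_out - min_out) * v
    return imagery_value

# Example: drive a layer's scale (100%..200%) from a stand-in amplitude
# envelope; a real system would analyze the audio track instead.
amplitude = lambda t: 0.5
scale_at = make_relationship(amplitude, 100.0, 200.0)
```

Evaluating `scale_at` at each frame time manipulates the imagery feature automatically in step with the audio, which is the synchronization behavior the abstract describes.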
-
Publication number: 20190243530
Abstract: Techniques and systems to support immersive media content navigation and editing are described. A two-dimensional equirectangular projection of a spherical video is generated by a computing device and displayed in a navigator portion of a user interface of a content editing application. A visual position indicator, indicative of a position within the spherical video, is displayed over the 2D equirectangular projection of the spherical video. A portion of the spherical video is determined based on the position, and a planar spherical view of the portion of the spherical video is generated by the computing device and displayed in a compositor portion of the user interface. The navigator portion and the compositor portion are linked such that user input to the navigator portion or the compositor portion of the user interface causes corresponding visual changes in both the navigator portion and the compositor portion of the user interface.
Type: Application
Filed: February 6, 2018
Publication date: August 8, 2019
Applicant: Adobe Inc.
Inventors: Yaniv De Ridder, Michael Spencer Cragg, Benjamin Adam Farrell
-
Publication number: 20190149941
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Application
Filed: December 12, 2018
Publication date: May 16, 2019
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
-
Patent number: 10165388
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Grant
Filed: November 15, 2017
Date of Patent: December 25, 2018
Assignee: Adobe Systems Incorporated
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
-
Publication number: 20180197578
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating synchronization of audio with motion imagery. In embodiments, an indication to create a relationship between an audio feature associated with audio content and an imagery feature associated with motion imagery is received. Thereafter, a relationship is created between the audio feature and the imagery feature in accordance with an instance or a time duration to synchronize the audio with the motion imagery. Based on the relationship between the audio feature and the imagery feature, the imagery feature is automatically manipulated in relation to the audio feature at the designated instance or time duration.
Type: Application
Filed: January 10, 2017
Publication date: July 12, 2018
Inventors: Yaniv De Ridder, Michael Spencer Cragg, Colin Cronin Roache
-
Patent number: 9762316
Abstract: A gesture is performed by a wireless accessory attempting to pair with a device. The gesture comprises a series of user interactions associated with accessory data detected at the accessory and device data detected at the device. The device begins looking for accessories advertising a Bluetooth service indicating they are attempting to pair. Once an accessory is identified, the device compares the device data to the accessory data for that particular accessory. If the accessory data matches the device data, the gesture detected at the device was made by the accessory and a secure connection can be established. Based on the secure connection, a clock associated with the accessory may synchronize with a clock associated with the device for additional security and fidelity.
Type: Grant
Filed: September 3, 2014
Date of Patent: September 12, 2017
Assignee: Adobe Systems Incorporated
Inventors: Timothy W. Kukulski, Geoffrey Charles Dowd, Yaniv De Ridder
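The core of the pairing scheme above is comparing gesture data recorded on the device with data each advertising accessory reports, and pairing only with the accessory whose data matches. A simplified sketch of just that matching step (the data representation and tolerance are illustrative assumptions; Bluetooth advertising and the secure connection itself are out of scope):

```python
def gestures_match(device_samples, accessory_samples, tol: float = 0.1) -> bool:
    """Return True when each gesture sample observed at the device agrees,
    within a tolerance, with the corresponding sample the accessory reports."""
    if len(device_samples) != len(accessory_samples):
        return False
    return all(abs(d - a) <= tol for d, a in zip(device_samples, accessory_samples))

def find_pairable(device_samples, advertising_accessories, tol: float = 0.1):
    """Check each accessory advertising the pairing service and return the
    id of the first one whose reported gesture matches what the device saw,
    or None when no accessory's data matches."""
    for accessory_id, samples in advertising_accessories.items():
        if gestures_match(device_samples, samples, tol):
            return accessory_id
    return None
```

Only a matching accessory proceeds to the secure connection, so an accessory that merely advertises the service but did not perform the gesture cannot pair.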
-
Publication number: 20160065301
Abstract: A gesture is performed by a wireless accessory attempting to pair with a device. The gesture comprises a series of user interactions associated with accessory data detected at the accessory and device data detected at the device. The device begins looking for accessories advertising a Bluetooth service indicating they are attempting to pair. Once an accessory is identified, the device compares the device data to the accessory data for that particular accessory. If the accessory data matches the device data, the gesture detected at the device was made by the accessory and a secure connection can be established. Based on the secure connection, a clock associated with the accessory may synchronize with a clock associated with the device for additional security and fidelity.
Type: Application
Filed: September 3, 2014
Publication date: March 3, 2016
Inventors: Timothy W. Kukulski, Geoffrey Charles Dowd, Yaniv De Ridder