Patents by Inventor Hannes Luc Herman Verlinde

Hannes Luc Herman Verlinde has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230401800
Abstract: In one embodiment, a computer-implemented method includes receiving, through a user interface (UI) of an artificial-reality (AR) design tool, a selection of a configurable interface element to place the AR design tool and the UI into a configure phase to configure an AR effect. The computer-implemented method further includes receiving, through the UI of the AR design tool after the AR design tool and the UI are placed into the configure phase in response to selecting the configurable interface element, instructions to add a voice-command module to the AR effect. The computer-implemented method further includes configuring, while the AR design tool and the UI are placed into the configure phase, one or more parameters of the voice-command module. The computer-implemented method further includes generating the AR effect utilizing a particular voice command at runtime based on the configured one or more parameters of the voice-command module.
    Type: Application
    Filed: August 24, 2023
    Publication date: December 14, 2023
    Inventors: Stef Marc Smet, Hannes Luc Herman Verlinde, Michael Slater, Benjamin Patrick Blackburne, Ram Kumar Hariharan, Chunjie Jia, Prakarn Nisarat
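The configure-then-run flow described in this abstract can be sketched in a few lines: a voice-command module is added to an AR effect, its parameters are set during a configure phase, and at runtime an utterance that matches the configured parameters triggers the effect. This is a minimal illustrative sketch, not the actual design tool's API; all class and function names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceCommandModule:
    # Hypothetical stand-in for the voice-command module in the abstract.
    intent_type: str
    parameters: dict = field(default_factory=dict)

@dataclass
class AREffect:
    name: str
    modules: list = field(default_factory=list)

    def add_module(self, module):
        self.modules.append(module)

def configure(module, **params):
    """Set module parameters while the design tool is in the configure phase."""
    module.parameters.update(params)

def trigger(effect, utterance):
    """At runtime, fire the effect if an utterance matches a configured phrase."""
    for m in effect.modules:
        if utterance.lower() in [p.lower() for p in m.parameters.get("phrases", [])]:
            return f"{effect.name}:{m.intent_type}"
    return None

effect = AREffect("confetti")
vc = VoiceCommandModule(intent_type="activate")
effect.add_module(vc)
configure(vc, phrases=["make it rain", "celebrate"])
print(trigger(effect, "Celebrate"))  # confetti:activate
```

The two-phase separation (configure first, then generate at runtime) mirrors the claim structure, but real utterance matching would involve speech recognition rather than string comparison.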
  • Publication number: 20230368444
    Abstract: Systems, methods, client devices, and non-transitory computer-readable media are disclosed for rendering custom video call interfaces having customizable video cells and/or interactive interface objects during a video call. For example, the disclosed systems can conduct a video call with one or more participant client devices through a streaming channel established for the video call. During the video call, the disclosed systems can render a video cell that portrays a video received from a participant client device in a grid-view display format. Subsequently, upon detecting a user interaction that indicates a request to customize a video call interface, the disclosed systems can render the video cell within a custom video call interface in a self-view display format. In some cases, the client device, via the self-view display format, facilitates various customizations and/or interactions with video cells and other interactive objects displayed on the client device during the video call.
    Type: Application
    Filed: May 13, 2022
    Publication date: November 16, 2023
    Inventors: Benjamin Patrick Blackburne, Michael Slater, Hannes Luc Herman Verlinde, Andrew James Senior
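The display-format switch this abstract describes can be modeled as a small state machine: a video cell starts in a grid-view display format and, on a user interaction requesting customization, is re-rendered in a self-view display format where customizations are allowed. This is an illustrative sketch under assumed names; the disclosed systems are client-device rendering pipelines, not Python objects.

```python
class VideoCell:
    # Hypothetical model of one participant's video cell.
    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.display_format = "grid-view"
        self.customizations = {}

class VideoCallInterface:
    def __init__(self):
        self.cells = {}

    def add_participant(self, participant_id):
        self.cells[participant_id] = VideoCell(participant_id)

    def request_customization(self, participant_id):
        # A user interaction indicating a request to customize switches
        # the cell into the self-view display format.
        cell = self.cells[participant_id]
        cell.display_format = "self-view"
        return cell

    def apply_customization(self, participant_id, key, value):
        cell = self.cells[participant_id]
        if cell.display_format != "self-view":
            raise ValueError("cell must be in self-view format to customize")
        cell.customizations[key] = value

ui = VideoCallInterface()
ui.add_participant("alice")
ui.request_customization("alice")
ui.apply_customization("alice", "border", "gold")
print(ui.cells["alice"].display_format)  # self-view
```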
  • Publication number: 20230360282
    Abstract: Systems, methods, client devices, and non-transitory computer-readable media are disclosed for utilizing video data and video processing data to enable shared augmented reality scenes having video textures depicting participants of video calls as augmented reality (AR) effects during the video calls. For instance, the disclosed systems can establish a video call between client devices that include streaming channels (e.g., a video and audio data channel). In one or more implementations, the disclosed systems enable the client devices to transmit video processing data and video data of a participant through the streaming channel during a video call. Indeed, in one or more embodiments, the disclosed systems cause the client devices to utilize video data streams and video processing data to render videos as video textures within AR effects in a shared AR scene (or AR space) of the video call (e.g., to depict participants within the AR scene).
    Type: Application
    Filed: May 5, 2022
    Publication date: November 9, 2023
    Inventors: Benjamin Patrick Blackburne, Michael Slater, Hannes Luc Herman Verlinde, Andrew James Senior
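The core idea in this abstract, combining a participant's video data with accompanying video processing data to produce a video texture in a shared AR scene, can be sketched with toy values. Here the processing data is assumed to be a per-pixel segmentation mask, which the abstract does not specify; frames are flat lists of pixel values and all names are illustrative.

```python
def apply_mask(frame, mask):
    """Keep only pixels the processing data marks as the participant."""
    return [px if keep else 0 for px, keep in zip(frame, mask)]

def render_shared_scene(streams):
    """Each participant's masked frame becomes a video texture in the AR scene."""
    return {pid: apply_mask(frame, mask) for pid, (frame, mask) in streams.items()}

# Each stream carries a video frame plus its video processing data.
streams = {
    "alice": ([9, 9, 9, 9], [True, True, False, False]),
    "bob":   ([5, 5, 5, 5], [False, True, True, True]),
}
scene = render_shared_scene(streams)
print(scene["alice"])  # [9, 9, 0, 0]
```

Sending the mask alongside the raw video lets the receiving client do the compositing locally, which is consistent with the abstract's separation of video data and video processing data on the streaming channel.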
  • Patent number: 11790611
Abstract: A computer-implemented method, comprising, by an artificial-reality (AR) design tool: receiving, through a user interface (UI) of the AR design tool, instructions to add a voice-command module to an AR effect, the voice-command module having an intent type and at least one slot, the slot associated with one or more entities; establishing, according to instructions received through the UI, a logical connection between the slot and a logic module configured to generate the AR effect depending on a runtime value associated with the slot; and generating, for the AR effect, an executable program configured to: determine that a detected utterance corresponds to the intent type and includes one or more words associated with the slot; select, based on the one or more words, one of the one or more entities as the runtime value for the slot; and send the runtime value to the logic module according to the logical connection.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 17, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Stef Marc Smet, Hannes Luc Herman Verlinde, Michael Slater, Benjamin Patrick Blackburne, Ram Kumar Hariharan, Chunjie Jia, Prakarn Nisarat
  • Publication number: 20230254438
    Abstract: Systems, methods, client devices, and non-transitory computer-readable media are disclosed for utilizing an augmented reality (AR) data channel to enable shared augmented reality video calls which facilitate the sharing of and interaction with AR elements during video calls. For example, the disclosed systems can establish a video call between client devices that include a video (and audio) data channel and an AR data channel. Moreover, in one or more embodiments, the disclosed systems enable one of the client devices to transmit AR data (e.g., AR element identifiers, AR elements, object vectors, participant identifiers) through an AR data channel to cause the other client device to render an AR element on a video captured by the other client device during a video call. Indeed, the disclosed systems can enable AR environments, AR effects, AR-based activities, and/or individual AR elements during a video call utilizing an AR data channel.
    Type: Application
    Filed: February 9, 2022
    Publication date: August 10, 2023
    Inventors: Jonathan Michael Sherman, Michael Slater, Hannes Luc Herman Verlinde, Marcus Vinicius Barbosa da Silva, Bret Hobbs, Pablo Gomez Basanta, Ahmed Shehata, Oleg Bogdanov, Sateesh Kumar Srinivasan
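The AR data channel in this abstract carries small messages (AR element identifiers, object vectors, participant identifiers) alongside the video channel, so each client can render the shared AR element locally. A toy serialization of such a message might look like the following; the field names and JSON encoding are assumptions for illustration, not the disclosed wire format.

```python
import json

def encode_ar_message(element_id, participant_id, position):
    """Serialize an AR update for transmission over the AR data channel."""
    return json.dumps({
        "element_id": element_id,          # AR element identifier
        "participant_id": participant_id,  # which participant placed it
        "position": position,              # object vector in scene space
    })

def handle_ar_message(raw, local_scene):
    """Receiving client resolves the identifier and places the element locally."""
    msg = json.loads(raw)
    local_scene[msg["element_id"]] = msg["position"]
    return local_scene

scene = {}
raw = encode_ar_message("hat_01", "alice", [0.1, 0.5, 0.0])
handle_ar_message(raw, scene)
print(scene)  # {'hat_01': [0.1, 0.5, 0.0]}
```

Keeping AR state on its own channel means only identifiers and vectors cross the network, while each device renders the effect onto its own captured video, as the abstract describes.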
  • Publication number: 20220207833
Abstract: A computer-implemented method, comprising, by an artificial-reality (AR) design tool: receiving, through a user interface (UI) of the AR design tool, instructions to add a voice-command module to an AR effect, the voice-command module having an intent type and at least one slot, the slot associated with one or more entities; establishing, according to instructions received through the UI, a logical connection between the slot and a logic module configured to generate the AR effect depending on a runtime value associated with the slot; and generating, for the AR effect, an executable program configured to: determine that a detected utterance corresponds to the intent type and includes one or more words associated with the slot; select, based on the one or more words, one of the one or more entities as the runtime value for the slot; and send the runtime value to the logic module according to the logical connection.
    Type: Application
    Filed: December 30, 2020
    Publication date: June 30, 2022
    Inventors: Stef Marc Smet, Hannes Luc Herman Verlinde, Michael Slater, Benjamin Patrick Blackburne, Ram Kumar Hariharan, Chunjie Jia, Prakarn Nisarat
  • Publication number: 20220157342
Abstract: Aspects of the present disclosure are directed to three-dimensional (3D) video calls where at least some participants are assigned a position in a virtual 3D space. Additional aspects of the present disclosure are directed to an automated effects engine that can A) convert a source still image into a flythrough video; B) produce a transform video that replaces portions of a source video with an alternate visual effect; and/or C) produce a switch video that automatically matches frames between multiple source videos and stitches together the videos at the match points. Further aspects of the present disclosure are directed to a platform for the creation and deployment of automatic video effects that respond to lyric content and lyric timing values for audio associated with a video and/or that respond to beat types and beat timing values for audio associated with a video.
    Type: Application
    Filed: February 1, 2022
    Publication date: May 19, 2022
Inventors: Kiryl Kliushkin, Eric Liu Gan, Tali Zvi, Hannes Luc Herman Verlinde, Michael Slater, Franklin Ho, Andrew Pitcher Thompson, Michelle Jia-Ying Cheung, Gil Carmel, Stefan Alexandru Jeler, Somayan Chakrabarti, Sung Kyu Robin Kim, Duylinh Nguyen, Katherine Anne Zhu, Anaelisa Aburto, Anthony Grisey
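The beat-responsive platform in this abstract pairs beat types and beat timing values from the audio with timestamps in the video. A minimal sketch of that matching step, with an assumed timing tolerance and illustrative names, might look like this:

```python
def schedule_effects(beat_times, beat_types, frame_times, tolerance=0.05):
    """Map each video frame to the beat (if any) landing within tolerance."""
    schedule = {}
    for t in frame_times:
        for bt, kind in zip(beat_times, beat_types):
            if abs(t - bt) <= tolerance:
                schedule[t] = kind  # this frame triggers a beat-typed effect
                break
    return schedule

beats = [0.0, 0.5, 1.0, 1.5]          # beat timing values (seconds)
kinds = ["down", "up", "down", "up"]  # beat types
frames = [0.02, 0.26, 0.51, 1.49]     # video frame timestamps
print(schedule_effects(beats, kinds, frames))
# {0.02: 'down', 0.51: 'up', 1.49: 'up'}
```

Lyric-responsive effects would follow the same shape, substituting lyric timing values and lyric content for the beat lists.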
  • Patent number: 10957119
Abstract: In one embodiment, a method for designing an augmented-reality effect may include displaying a virtual object in a 3D space of a user interface comprising first and second display areas, wherein the virtual object is displayed from a first perspective in the first display area and displayed from a second perspective in the second display area, the second perspective being different from the first perspective, receiving, via the user interface, instructions to adjust the virtual object, adjusting the virtual object according to the instructions, and displaying the adjusted virtual object in the 3D space of the user interface, wherein the adjusted virtual object is displayed from the first perspective in the first display area and displayed from the second perspective in the second display area.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: March 23, 2021
    Assignee: Facebook, Inc.
    Inventors: Stef Marc Smet, Dolapo Omobola Falola, Michael Slater, Samantha P. Krug, Volodymyr Giginiak, Hannes Luc Herman Verlinde, Sergei Viktorovich Anpilov, Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Milen Georgiev Dzhumerov, Alexander Nicholas Rozanski, Guilherme Schneider
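The dual-viewport behavior this abstract claims, one virtual object shown simultaneously from two perspectives so that an adjustment is reflected in both display areas, can be sketched with a deliberately simplified "projection" (a camera-relative offset rather than real projective geometry). All names here are hypothetical.

```python
def project(point, camera):
    # Toy stand-in for a perspective projection: express the point
    # relative to the camera position.
    return tuple(p - c for p, c in zip(point, camera))

class DesignTool:
    def __init__(self, position):
        self.position = list(position)
        # Two display areas, each with its own camera perspective.
        self.cameras = {"front": (0, 0, 5), "side": (5, 0, 0)}

    def adjust(self, delta):
        """Apply an adjustment instruction to the single shared object."""
        self.position = [p + d for p, d in zip(self.position, delta)]

    def views(self):
        """Render the same object into both display areas."""
        return {name: project(self.position, cam)
                for name, cam in self.cameras.items()}

tool = DesignTool((1, 1, 1))
tool.adjust((0, 2, 0))  # one edit updates both perspectives at once
print(tool.views())
# {'front': (1, 3, -4), 'side': (-4, 3, 1)}
```

The key property, a single source of truth for the object with per-viewport projection, is what keeps the two display areas consistent after every adjustment.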
  • Patent number: 10825258
    Abstract: In one embodiment, a method includes by a computing device, displaying a user interface for designing augmented-reality effects. The method includes receiving user input through the user interface. The method includes displaying a graph generated based on the user input. The graph may include multiple nodes and one or more edges. The nodes may include a detector node and a filter node connected by one or more edges. The method includes detecting, in accordance with an object type specified by the detector node, one or more object instances of the object type appearing in a scene. The method includes selecting, in accordance with at least one criterion specified by the filter node, at least one of the one or more detected object instances that satisfies the criterion. The method includes rendering an augmented-reality effect based on at least the selected object instance.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: November 3, 2020
    Assignee: Facebook, Inc.
    Inventors: Stef Marc Smet, Thomas Paul Mann, Michael Slater, Hannes Luc Herman Verlinde
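The graph described in this abstract, a detector node connected by edges to a filter node, with the surviving object instances driving an AR effect, maps naturally onto a small pipeline of composable nodes. This sketch uses an illustrative scene format and node names; the patented system is a visual graph editor, not Python closures.

```python
def detector_node(object_type):
    """Detect object instances of the given type appearing in a scene."""
    def run(scene):
        return [o for o in scene if o["type"] == object_type]
    return run

def filter_node(criterion):
    """Keep only detected instances satisfying the criterion."""
    def run(instances):
        return [o for o in instances if criterion(o)]
    return run

def render_effect(instances):
    """Render the AR effect on each selected instance."""
    return [f"effect on {o['id']}" for o in instances]

scene = [
    {"id": "f1", "type": "face", "size": 120},
    {"id": "f2", "type": "face", "size": 40},
    {"id": "h1", "type": "hand", "size": 80},
]
# Edges of the graph: detector -> filter -> renderer.
detect = detector_node("face")
keep_large = filter_node(lambda o: o["size"] >= 100)
print(render_effect(keep_large(detect(scene))))  # ['effect on f1']
```

Expressing the pipeline as nodes and edges is what lets a designer rewire detection, filtering, and rendering in the UI without writing code.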
  • Publication number: 20190333288
Abstract: In one embodiment, a method for designing an augmented-reality effect may include displaying a virtual object in a 3D space of a user interface comprising first and second display areas, wherein the virtual object is displayed from a first perspective in the first display area and displayed from a second perspective in the second display area, the second perspective being different from the first perspective, receiving, via the user interface, instructions to adjust the virtual object, adjusting the virtual object according to the instructions, and displaying the adjusted virtual object in the 3D space of the user interface, wherein the adjusted virtual object is displayed from the first perspective in the first display area and displayed from the second perspective in the second display area.
    Type: Application
    Filed: July 11, 2019
    Publication date: October 31, 2019
    Inventors: Stef Marc Smet, Dolapo Omobola Falola, Michael Slater, Samantha P. Krug, Volodymyr Giginiak, Hannes Luc Herman Verlinde, Sergei Viktorovich Anpilov, Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Milen Georgiev Dzhumerov
  • Patent number: 10360736
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include receiving a model definition of a virtual object. The virtual object may be rendered in a 3D space based on the model definition. The system may display the virtual object in the 3D space from a first perspective in a first display area of a user interface. The system may display the virtual object in the 3D space from a second perspective, different from the first, in a second display area of the user interface. The system may receive a user command input by a user through the first display area for adjusting the virtual object. The virtual object may be adjusted according to the user command. The system may display the adjusted virtual object in the 3D space from the first perspective in the first display area and from the second perspective in the second display area.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: July 23, 2019
    Assignee: Facebook, Inc.
Inventors: Stef Marc Smet, Dolapo Omobola Falola, Michael Slater, Samantha P. Krug, Volodymyr Giginiak, Hannes Luc Herman Verlinde, Sergei Viktorovich Anpilov, Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Milen Georgiev Dzhumerov
  • Publication number: 20180268615
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include receiving a model definition of a virtual object. The virtual object may be rendered in a 3D space based on the model definition. The system may display the virtual object in the 3D space from a first perspective in a first display area of a user interface. The system may display the virtual object in the 3D space from a second perspective, different from the first, in a second display area of the user interface. The system may receive a user command input by a user through the first display area for adjusting the virtual object. The virtual object may be adjusted according to the user command. The system may display the adjusted virtual object in the 3D space from the first perspective in the first display area and from the second perspective in the second display area.
    Type: Application
    Filed: June 6, 2017
    Publication date: September 20, 2018
    Inventors: Stef Marc Smet, Dolapo Omobola Falola, Michael Slater, Samantha P. Krug, Volodymyr Giginiak, Hannes Luc Herman Verlinde, Sergei Viktorovich Anpilov, Danil Gontovnik, Yu Hang Ng, Siarhei Hanchar, Milen Georgiev Dzhumerov