Patents by Inventor Scott Snibbe

Scott Snibbe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11196985
    Abstract: In one embodiment, a method includes retrieving a media content item for display on a projectable surface. The method includes sending, to an interaction device including a projector and a camera, instructions causing the camera to capture one or more images of the projectable surface, receiving the one or more images from the interaction device, and determining one or more attributes of the surface based on the one or more images. The method also includes modifying the media content item based at least in part on the one or more attributes of the projectable surface and sending, to the interaction device, the modified media content item and instructions causing the projector to project the modified media content on the projectable surface.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 7, 2021
    Assignee: Facebook, Inc.
    Inventors: Baback Elmieh, Joyce Hsu, Scott Snibbe, Amir Mesguich Havilio, Angela Chang, Alexandre Jais, Rex Crossen
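The capture-then-adapt flow this abstract describes (capture images of the surface, derive its attributes, modify the content, project) can be sketched as below. The attribute (mean brightness) and the contrast-boost rule are illustrative assumptions, not the patent's actual implementation.

```python
def surface_attributes(image):
    """Derive simple attributes of a projectable surface from a captured
    frame; here just mean brightness of grayscale pixels (0-255)."""
    pixels = [p for row in image for p in row]
    return {"brightness": sum(pixels) / len(pixels)}

def adapt_content(content, attrs):
    """Modify a media content item based on surface attributes, e.g. boost
    contrast when projecting onto a bright surface (hypothetical rule)."""
    adapted = dict(content)
    if attrs["brightness"] > 128:
        adapted["contrast"] = content.get("contrast", 1.0) * 1.5
    return adapted

captured = [[200, 210], [190, 220]]          # stand-in for a camera frame
item = {"uri": "clip.mp4", "contrast": 1.0}
modified = adapt_content(item, surface_attributes(captured))
print(modified["contrast"])  # 1.5
```

A real system would compute attributes such as surface color, texture, and geometry from the camera images before adapting the projection.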
  • Patent number: 11172189
    Abstract: In one embodiment, a method includes sending, to an interaction device including a projector and a camera, a media content item and instructions causing the projector to project the media content item on a projectable surface and receiving, from the interaction device, one or more media objects captured by the camera, where one or more of the media objects include images of a user in proximity to the projectable surface. The method includes determining one or more movements of the user based on the one or more of the media objects and updating the media content item based on the determined movements. The method also includes sending, to the interaction device, the updated media content item and instructions causing the projector to project the updated media content item on the projectable surface.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: November 9, 2021
    Assignee: Facebook, Inc.
    Inventors: Baback Elmieh, Joyce Hsu, Scott Snibbe, Amir Mesguich Havilio, Angela Chang, Alexandre Jais, Rex Crossen
  • Patent number: 11070792
    Abstract: A method includes retrieving a media content item for display to a user in an environment. The method includes receiving, from one or more interaction devices each including a projector and a camera, one or more media objects captured by one or more cameras, where one or more of the media objects include images of the user, and determining a location of the user in the environment based on an analysis of the one or more of the media objects. The method also includes identifying one of the interaction devices based on the determined location of the user and sending, to the identified interaction device, the media content item and instructions causing the projector associated with the identified interaction device to project the media content item.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: July 20, 2021
    Assignee: Facebook, Inc.
    Inventors: Baback Elmieh, Joyce Hsu, Scott Snibbe, Amir Mesguich Havilio, Angela Chang, Alexandre Jais, Rex Crossen
  • Patent number: 11006097
    Abstract: A method includes sending, to an interaction device including a projector and a camera, instructions causing the projector to project a light pattern to each of one or more specified directions and receiving, from the interaction device, one or more images each including illumination patterns associated with one or more surfaces of a specified direction. The method also includes constructing, based on the projected light patterns and the received illumination patterns, a model describing an environment of the interaction device, where the model includes one or more characteristics of each of one or more objects in the environment and one or more characteristics of each of one or more surfaces in the environment.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: May 11, 2021
    Assignee: Facebook, Inc.
    Inventors: Baback Elmieh, Joyce Hsu, Scott Snibbe, Amir Mesguich Havilio, Angela Chang, Alexandre Jais, Rex Crossen
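The structured-light approach in this abstract (project known patterns per direction, observe the resulting illumination, build an environment model) can be reduced to a toy sketch. Estimating only per-direction reflectance as observed/projected intensity is a deliberate simplification; the patent's model includes richer object and surface characteristics.

```python
def build_environment_model(projected, observed):
    """Construct a toy environment model from projected light intensities
    and the illumination observed per direction. The only surface
    characteristic estimated here is reflectance."""
    model = {}
    for direction, proj_intensity in projected.items():
        model[direction] = {"reflectance": observed[direction] / proj_intensity}
    return model

projected = {"north": 100.0, "east": 100.0}
observed = {"north": 80.0, "east": 20.0}   # hypothetical camera readings
model = build_environment_model(projected, observed)
print(model["north"]["reflectance"])  # 0.8
```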
  • Patent number: 10755487
Abstract: Techniques are provided to help social networking users manage their augmented reality identity. In particular, a user may customize one or more perception profiles, each of which specifies a selection and an arrangement of augmented reality elements to display over a view of the user. The user may further associate each perception profile with a relationship category that may be defined by the user. In this way, users gain more control over what image to project to different categories of people and are thereby empowered to express their identities in a context-appropriate way.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: August 25, 2020
    Assignee: FACEBOOK, INC.
    Inventor: Scott Snibbe
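The perception-profile lookup this abstract describes (relationship category selects which AR elements a viewer sees) amounts to a keyed mapping. The profile fields and category names below are hypothetical.

```python
# Each perception profile selects and arranges AR elements; profiles are
# keyed by user-defined relationship categories (names are illustrative).
profiles = {
    "close_friends": {"elements": ["party_hat", "confetti"], "layout": "playful"},
    "coworkers": {"elements": ["name_badge"], "layout": "minimal"},
}

def elements_for_viewer(profiles, category):
    """Return the AR elements shown to a viewer in the given relationship
    category, falling back to showing nothing (assumed default behavior)."""
    return profiles.get(category, {"elements": []})["elements"]

print(elements_for_viewer(profiles, "coworkers"))  # ['name_badge']
print(elements_for_viewer(profiles, "strangers"))  # []
```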
  • Patent number: 10680993
    Abstract: An audio social networking environment is described that provides a platform for users to generate audio-only content for consumption by one or more other users that may or may not have a social networking relationship with the user creating the audio-only content. Users are able to verbally generate stories using an audio based virtual assistant that receives the stories. The stories are analyzed to identify a tone of the story and one or more categories to associate with the story. The analysis of the story can also include suggesting audio effects to the user for including in the story. When a user requests stories, the user preference information of the requester can be used to identify stories for playback that meet the requesting user's preferences.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: June 9, 2020
    Assignee: FACEBOOK, INC.
    Inventor: Scott Snibbe
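The story analysis step (identify a tone and categories to associate with a story) could be sketched with a trivial keyword matcher over a transcript. This stands in for whatever analysis the patent actually performs; the keyword sets and category list are invented for illustration.

```python
# Hypothetical keyword-based tone tagging over a story transcript.
TONE_KEYWORDS = {
    "humorous": {"funny", "laugh", "joke"},
    "somber": {"loss", "miss", "grief"},
}

def analyze_story(text):
    """Assign a tone and categories to an audio story's transcript."""
    words = set(text.lower().split())
    tone = next((t for t, kws in TONE_KEYWORDS.items() if words & kws), "neutral")
    return {"tone": tone, "categories": sorted(words & {"travel", "family", "food"})}

print(analyze_story("a funny joke about family travel"))
# {'tone': 'humorous', 'categories': ['family', 'travel']}
```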
  • Publication number: 20190306105
    Abstract: An audio social networking environment is described that provides a platform for users to generate audio-only content for consumption by one or more other users that may or may not have a social networking relationship with the user creating the audio-only content. Users are able to verbally generate stories using an audio based virtual assistant that receives the stories. The stories are analyzed to identify a tone of the story and one or more categories to associate with the story. The analysis of the story can also include suggesting audio effects to the user for including in the story. When a user requests stories, the user preference information of the requester can be used to identify stories for playback that meet the requesting user's preferences.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Inventor: Scott Snibbe
  • Publication number: 20190205929
    Abstract: Systems, methods, and non-transitory computer readable media can obtain a media content item created by a first user that includes a media effect that can be applied to image data in a camera view. A plurality of content items including the media content item can be ranked. A ranking of the media content item can be adjusted. The media content item can be provided to a second user, based on the adjusted ranking of the media content item.
    Type: Application
    Filed: December 28, 2017
    Publication date: July 4, 2019
    Inventors: Scott Snibbe, Harshdeep Singh
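The rank-then-adjust step this abstract describes (score a set of content items, nudge one item's ranking, serve by adjusted rank) can be sketched as follows; the additive-boost scheme is a stand-in, since the application does not specify a scoring model.

```python
def rank_items(items, boost_id=None, boost=0.0):
    """Rank media content items by base score, optionally adjusting one
    item's ranking (e.g., a camera media-effect item) with a boost."""
    def score(item):
        s = item["score"]
        if item["id"] == boost_id:
            s += boost
        return s
    return sorted(items, key=score, reverse=True)

items = [
    {"id": "a", "score": 0.9},
    {"id": "b", "score": 0.7},  # contains a camera media effect
]
# Without adjustment "a" ranks first; boosting "b" reorders the results.
print([i["id"] for i in rank_items(items)])                           # ['a', 'b']
print([i["id"] for i in rank_items(items, boost_id="b", boost=0.3)])  # ['b', 'a']
```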
  • Patent number: 10120530
    Abstract: The various embodiments described herein include methods and systems for generating interactive media items. In one aspect, a method is performed at a server system. The method includes providing access for playback of an interactive media item based on metadata generated using information associated with the interactive media item. The metadata includes information associating at least one parameter with the interactive media item. The interactive media item is generated based on one or more user inputs selecting one or more interactive effects for association with the interactive media item. The user input(s) cause the server system to make the one or more interactive effects available to a subsequent viewer during the playback of the interactive media item, such that the subsequent viewer is able to interact with video and/or audio of the interactive media item by controlling the at least one parameter during the playback.
    Type: Grant
    Filed: November 5, 2015
    Date of Patent: November 6, 2018
Assignee: Facebook, Inc.
    Inventors: Scott Snibbe, Graham McDermott, Justin Ponczek, Spencer Schoeben, Jesse Fulton
  • Patent number: 10120565
    Abstract: The various embodiments described herein include methods and systems for presenting interactive media items. In one aspect, a method includes publishing, by a server system, an interactive media item, the publishing comprising providing access for a playback of the interactive media item based on metadata for the interactive media item. The metadata is generated using information associated with the interactive media item and including a mapping of an effect parameter for a first effect of the one or more effects to a touch input gesture. Playback includes, in response to detecting a first user input gesture corresponding to the touch input gesture, applying the first effect to the presented interactive media item. The applying of the first effect includes determining the effect parameter according to one or more characteristics of the first user input gesture, based on the mapping of the effect parameter to the touch input gesture.
    Type: Grant
    Filed: November 5, 2015
    Date of Patent: November 6, 2018
Assignee: Facebook, Inc.
    Inventors: Scott Snibbe, Graham McDermott, Justin Ponczek, Spencer Schoeben, Jesse Fulton
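The metadata mapping this abstract describes (a touch gesture resolves to an effect parameter, with the parameter value derived from the gesture's characteristics) can be sketched like this. The mapping shape, field names, and linear scaling are assumptions about one plausible encoding.

```python
def apply_gesture(mapping, gesture):
    """Resolve a touch gesture to an effect-parameter value using metadata
    that maps effect parameters to touch input gestures."""
    entry = mapping.get(gesture["type"])
    if entry is None:
        return None
    # Scale the gesture characteristic (e.g., normalized drag distance)
    # into the parameter's allowed range.
    lo, hi = entry["range"]
    value = lo + (hi - lo) * min(max(gesture["magnitude"], 0.0), 1.0)
    return {entry["parameter"]: value}

mapping = {"vertical_drag": {"parameter": "echo_delay", "range": (0.0, 2.0)}}
print(apply_gesture(mapping, {"type": "vertical_drag", "magnitude": 0.5}))
# {'echo_delay': 1.0}
```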
  • Publication number: 20180300100
Abstract: Techniques are described for determining what effects to apply to audiovisual content. An effect that is applied to audiovisual content may be an audio effect. When applied to an audiovisual content, an audio effect modifies an audio portion of the audiovisual content. The modified audiovisual content resulting from applying the effects to the audiovisual content may then be output via a user's device. For example, the modified audiovisual content may be output via an application executed by the user's device (e.g., a camera application) that is configured to output audiovisual content. An effect to be applied to audiovisual content may be determined based on various criteria such as, without limitation, attributes of the audiovisual content.
    Type: Application
    Filed: April 17, 2017
    Publication date: October 18, 2018
    Inventors: Scott Snibbe, William J. Littlejohn, Dwayne B. Mercredi
  • Publication number: 20180239524
    Abstract: The various implementations described herein include methods, devices, and systems for providing and editing audiovisual effects. In one aspect, a method is performed at a first device having one or more processors and memory. The method includes: (1) presenting a user interface for effects development, including a specification for an effect in development; (2) displaying on a display device the effect applied to a video stream; (3) while displaying the effect applied to the video stream, receiving within the user interface one or more updates to the specification; (4) compiling the updated specification in real-time; and (5) displaying on the display device an updated effect applied to the video stream, the updated effect corresponding to the updated specification.
    Type: Application
    Filed: April 24, 2018
    Publication date: August 23, 2018
    Inventors: Scott Snibbe, Johan Ismael
  • Patent number: 10031921
    Abstract: The various embodiments described herein include methods and systems for storage of media item metadata. In one aspect, a method is performed at a server system with one or more processors and memory. The method includes receiving, from a client device, metadata corresponding to a modified media item, where the modified media item is a modified version of a media item corresponding to a particular node in a family tree within a database of media items. The method further includes, in response to receiving the metadata corresponding to the modified media item, appending, to the family tree, a new leaf node that is linked to the particular node, where the new leaf node corresponds to the modified media item.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: July 24, 2018
Assignee: Facebook, Inc.
    Inventors: Scott Snibbe, Graham McDermott, Justin Ponczek, Spencer Schoeben, Jesse Fulton
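The family-tree structure this abstract describes (each modified media item is appended as a new leaf node linked to the node for the item it was derived from) is a plain parent-pointer tree. This minimal sketch infers the structure from the abstract; class and field names are assumptions.

```python
class MediaFamilyTree:
    """Toy family tree of media items: each remix is appended as a leaf
    linked to the node it was derived from."""

    def __init__(self, root_id):
        self.parent = {root_id: None}

    def append_leaf(self, parent_id, new_id):
        """Record a modified media item as a new leaf under its source node."""
        if parent_id not in self.parent:
            raise KeyError(f"unknown parent node: {parent_id}")
        self.parent[new_id] = parent_id

    def lineage(self, node_id):
        """Walk from a node back to the root of its family tree."""
        path = []
        while node_id is not None:
            path.append(node_id)
            node_id = self.parent[node_id]
        return path

tree = MediaFamilyTree("original")
tree.append_leaf("original", "remix-1")
tree.append_leaf("remix-1", "remix-1a")
print(tree.lineage("remix-1a"))  # ['remix-1a', 'remix-1', 'original']
```

The lineage walk shows why the tree is useful: provenance of any remix can be recovered from the stored links alone.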
  • Patent number: 10002642
    Abstract: The various implementations described herein include methods, devices, and systems for generating media items. In one aspect, a method is performed at a server system. The method includes: (1) receiving one or more audio files; (2) obtaining one or more audio characteristics for the audio files; (3) receiving a request to generate a media item using the audio files, the request including one or more criteria; and (4) in response to receiving the request, generating the media item, including: (a) identifying one or more visual media files based on the criteria and the audio characteristics; and (b) generating synchronization information; (5) storing the media item in the server system; and (6) enabling playback of the media item by sending a link for the stored media item to the client device.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: June 19, 2018
Assignee: Facebook, Inc.
    Inventors: Scott Snibbe, Graham McDermott, Justin Ponczek, Spencer Schoeben, Jesse Fulton
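Step (4)(a) of this abstract, identifying visual media files based on both the request criteria and the audio characteristics, could look like the tag-overlap match below. Tag-based matching is a stand-in for whatever selection logic the patent actually uses; the tags and traits are invented.

```python
def select_visuals(visual_files, criteria, audio_traits):
    """Pick visual media files whose tags overlap either the request
    criteria or traits derived from the audio (e.g., tempo, mood)."""
    wanted = set(criteria) | set(audio_traits)
    return [v["name"] for v in visual_files if wanted & set(v["tags"])]

visuals = [
    {"name": "sunset.mp4", "tags": ["calm", "nature"]},
    {"name": "city.mp4", "tags": ["fast", "urban"]},
]
# An upbeat track with an "urban" criterion matches the city clip only.
print(select_visuals(visuals, ["urban"], ["fast"]))  # ['city.mp4']
```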
  • Publication number: 20170325007
    Abstract: The various embodiments described herein include methods and systems for providing audiovisual media items. In one aspect, a method performed at a client device includes: (1) receiving one or more natural language inputs from a user; (2) identifying audio files by extracting one or more commands from the natural language inputs; (3) receiving one or more second natural language inputs from the user; (4) identifying visual media files by extracting one or more commands from the second natural language inputs; (5) obtaining a request to generate the media item, the media item corresponding to the visual media files and the audio files; and (6) in response to obtaining the request, sending, to a server system, a creation request to create the media item, the creation request including information identifying the audio files and the visual media files.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Inventors: Scott Snibbe, Graham McDermott, Emile Baizel, James Pollack, Justin Ponczec, Spencer Schoeben, Jesse Fulton
  • Publication number: 20170040039
    Abstract: The various implementations described herein include methods, devices, and systems for generating media items. In one aspect, a method is performed at a server system. The method includes: (1) receiving one or more audio files; (2) obtaining one or more audio characteristics for the audio files; (3) receiving a request to generate a media item using the audio files, the request including one or more criteria; and (4) in response to receiving the request, generating the media item, including: (a) identifying one or more visual media files based on the criteria and the audio characteristics; and (b) generating synchronization information; (5) storing the media item in the server system; and (6) enabling playback of the media item by sending a link for the stored media item to the client device.
    Type: Application
    Filed: October 18, 2016
    Publication date: February 9, 2017
    Inventors: Scott Snibbe, Graham McDermott, Justin Ponczec, Spencer Schoeben, Jesse Fulton
  • Patent number: 9519644
    Abstract: A server system with one or more processors and memory receives, from a client device, information including one or more criteria for a media item to be generated. In some embodiments, the one or more criteria include one or more audio tracks for the media item to be generated. In some embodiments, the one or more criteria include one or more keywords for the media item to be generated. The server system identifies one or more media files in a database of media files for the media item to be generated based at least in part on the one or more criteria. The server system sends, to the client device, first information identifying the one or more media files. In some embodiments, the server system also sends, to the client device, synchronization information for synchronizing one or more audio tracks with the one or more identified media files.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: December 13, 2016
Assignee: Facebook, Inc.
    Inventors: Scott Snibbe, Graham McDermott, Justin Ponczec, Spencer Schoeben, Jesse Fulton
  • Publication number: 20160173960
    Abstract: The various embodiments described herein include methods and systems for generating an audiovisual media item. In one aspect, a method is performed at a server system. The method includes: (1) receiving, from a first electronic device associated with a first user, a creation request to create the media item, the creation request including information identifying one or more audio files and one or more visual media files; (2) obtaining the visual media files; (3) requesting at least one audio file from a server in accordance with the information identifying the audio files; (4) in response to the request, receiving at least one audio file from the server; (5) obtaining any remaining audio files; (6) in response to receiving the creation request, generating the audiovisual media item based on the audio files and the visual media files; and (7) storing the generated audiovisual media item in a media item database.
    Type: Application
    Filed: February 23, 2016
    Publication date: June 16, 2016
    Inventors: Scott Snibbe, Graham McDermott, Emile Baizel, James Pollack, Justin Ponczec, Spencer Schoeben, Jesse Fulton
  • Publication number: 20160054873
    Abstract: The various embodiments described herein include methods and devices for generating interactive media items. In one aspect, a method is performed at a client device with one or more processors and memory. The method includes: (1) displaying a first user interface enabling a user to select audio files; (2) detecting first user inputs selecting an audio file; (3) displaying a second user interface for obtaining visual media files; (4) detecting second user inputs to obtain a visual media file; (5) displaying a third user interface enabling a user to select interactive effects; (6) detecting third user inputs selecting effects, where the selected effects enable a subsequent viewer to interact with the video/audio of the media item using the selected effects during playback; and (7) generating the media item based on the visual media file, the audio file, and the interactive effects, including generating synchronization information.
    Type: Application
    Filed: November 5, 2015
    Publication date: February 25, 2016
Inventors: Scott Snibbe, Graham McDermott, Justin Ponczec, Spencer Schoeben, Jesse Fulton
  • Publication number: 20160054916
    Abstract: The various embodiments described herein include methods and devices for presenting interactive media items. In one aspect, a method is performed at a client device with one or more processors, memory, a touch-sensitive surface, and a display. The method includes: (1) receiving user selection of a previously generated media item, the media item associated with an audio file, one or more visual media files, and one or more effects; (2) in response to the user selection, presenting the media item on the display; and, while presenting the media item: (3) detecting a touch input gesture at a location on the touch-sensitive surface corresponding to at least a portion of the presented media item; and (4), in response to detecting the touch input gesture, applying at least one effect to the presented media item based on one or more characteristics of the touch input gesture.
    Type: Application
    Filed: November 5, 2015
    Publication date: February 25, 2016
Inventors: Scott Snibbe, Graham McDermott, Justin Ponczec, Spencer Schoeben, Jesse Fulton