Patents by Inventor Hermes Germi Pique Corchs

Hermes Germi Pique Corchs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11386630
    Abstract: In one embodiment, the system may receive a serialized data stream generated by serializing data chunks including data from a video stream and contextual data streams associated with the video stream. The contextual data streams may include a first computed data stream and a sensor data stream. The system may extract the video data stream and one or more contextual data streams from the serialized data stream. The system may generate a second computed data stream based on the sensor data stream in the extracted contextual data streams. The system may compare the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream based on one or more pre-determined criteria. The system may render an artificial reality effect for display with the extracted video data stream based at least in part on the selected computed data stream.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: July 12, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Marcin Kwiatkowski, Mark I-Kai Wang, Mykyta Lutsenko, Miguel Goncalves, Hermes Germi Pique Corchs
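The comparison-and-selection step this abstract describes can be sketched in a few lines. Everything below is hypothetical: the doubling "recompute" rule, the drift tolerance, and all names are invented for illustration and are not taken from the patent.

```python
def select_computed_stream(recorded, sensor, recompute, tolerance=1.0):
    """Recompute a data stream from raw sensor samples and keep the recorded
    (pre-computed) stream only while it stays within `tolerance` of the
    recomputation; otherwise fall back to the recomputed stream."""
    recomputed = [recompute(s) for s in sensor]
    drift = max((abs(a - b) for a, b in zip(recorded, recomputed)), default=0.0)
    return recorded if drift <= tolerance else recomputed

# Hypothetical rule: the derived value is twice the sensor reading.
chosen = select_computed_stream([6.5, 8.0], [3, 4], lambda s: 2 * s)
```

Here the recorded stream drifts by at most 0.5 from the recomputation, so it is selected; a recorded stream of `[20.0]` against sensor `[3]` would exceed the tolerance and be replaced.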
  • Patent number: 11182613
    Abstract: In one embodiment, a method includes a system accessing an image, which may comprise covered and uncovered portions, and an overlay image comprising opaque pixels. The covered portion may be configured to be covered by the opaque pixels of the overlay image. The system may generate a data structure comprising data elements associated with pixels of the image. Each of the data elements associated with a covered pixel in the covered portion of the image may be configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel. Each covered pixel in the covered portion of the image may be modified by accessing the data element associated with the covered pixel, determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element, and modifying a color of the covered pixel based on the distance.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: November 23, 2021
    Assignee: Facebook, Inc.
    Inventors: William S. Bailey, Ficus Kirkpatrick, Houman Meshkin, Ryan Keenan Olson, Hermes Germi Pique Corchs
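The data structure this abstract describes, mapping each covered pixel to its closest uncovered pixel, resembles a distance transform. A minimal sketch using multi-source breadth-first search on a boolean grid (the grid layout and 4-connectivity here are illustrative assumptions, not details from the patent):

```python
from collections import deque

def nearest_uncovered(covered):
    """For each covered cell, find the closest uncovered cell via multi-source
    BFS. `covered` is a 2D boolean grid; the result maps each covered cell to
    (nearest uncovered cell, distance in steps)."""
    h, w = len(covered), len(covered[0])
    nearest = {}
    q = deque()
    for y in range(h):                      # seed the queue with all uncovered cells
        for x in range(w):
            if not covered[y][x]:
                nearest[(y, x)] = ((y, x), 0)
                q.append((y, x))
    while q:
        y, x = q.popleft()
        src, d = nearest[(y, x)]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in nearest:
                nearest[(ny, nx)] = (src, d + 1)
                q.append((ny, nx))
    return {cell: v for cell, v in nearest.items() if covered[cell[0]][cell[1]]}

grid = [[False, True, True],
        [False, True, True]]                # column 0 is uncovered
table = nearest_uncovered(grid)
```

A renderer could then fade each covered pixel toward the color of its nearest uncovered neighbor as a function of the recorded distance.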
  • Patent number: 11159768
    Abstract: The disclosed computer-implemented method may include receiving a first input from a first artificial reality device detecting a first environment of a first user and determining a first environmental feature of the first environment based on the first input. The method may include receiving a second input from a second artificial reality device detecting a second environment of a second user and determining a second environmental feature of the second environment based on the second input. The method may include comparing the first environmental feature with the second environmental feature and including, based on the comparison, the first and second users in a group for online interactions. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: October 26, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jim Sing Liu, Olivier Marie Bouan Du Chef Du Bos, Hermes Germi Pique Corchs, Matthew Roberts
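The grouping logic this abstract describes, comparing environmental features across devices, might look roughly like the sketch below. The feature labels, the overlap threshold, and the greedy grouping rule are all invented for illustration.

```python
def group_users(envs, min_shared=2):
    """Group users whose detected environmental features overlap by at least
    `min_shared` features. `envs` maps a user id to the set of feature labels
    reported by that user's artificial reality device."""
    groups = []
    for user, feats in envs.items():
        for group in groups:
            if len(feats & group["features"]) >= min_shared:
                group["users"].append(user)
                group["features"] &= feats   # keep only features common to all members
                break
        else:
            groups.append({"users": [user], "features": set(feats)})
    return groups

envs = {
    "alice": {"indoors", "daylight", "desk"},
    "bob": {"indoors", "daylight", "sofa"},
    "carol": {"outdoors", "night"},
}
groups = group_users(envs)
```

With these inputs, alice and bob share two features and land in one group for online interactions, while carol forms her own.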
  • Publication number: 20210327150
    Abstract: In one embodiment, the system may receive a serialized data stream generated by serializing data chunks including data from a video stream and contextual data streams associated with the video stream. The contextual data streams may include a first computed data stream and a sensor data stream. The system may extract the video data stream and one or more contextual data streams from the serialized data stream. The system may generate a second computed data stream based on the sensor data stream in the extracted contextual data streams. The system may compare the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream based on one or more pre-determined criteria. The system may render an artificial reality effect for display with the extracted video data stream based at least in part on the selected computed data stream.
    Type: Application
    Filed: April 30, 2021
    Publication date: October 21, 2021
    Inventors: Marcin Kwiatkowski, Mark I-Kai Wang, Mykyta Lutsenko, Miguel Goncalves, Hermes Germi Pique Corchs
  • Patent number: 11030814
    Abstract: In one embodiment, the system captures a video data stream of a scene and contextual data streams associated with the video data stream. The contextual data streams comprise a sensor data stream or a computed data stream. The system renders an artificial reality effect based on the contextual data streams for display with the video data stream. The system generates a serialized data stream by serializing data chunks of the video data stream and the contextual data streams. The system stores the serialized data stream into a storage. The system extracts the video data stream and one or more of the contextual data streams from the serialized data stream by deserializing the data chunks in the serialized data stream. The system renders the same or another artificial reality effect for display with the extracted video data stream based on the extracted contextual data streams.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: June 8, 2021
    Assignee: Facebook, Inc.
    Inventors: Marcin Kwiatkowski, Mark I-Kai Wang, Mykyta Lutsenko, Miguel Goncalves, Hermes Germi Pique Corchs
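The serialize/deserialize round trip in this abstract can be illustrated with tagged chunks. The JSON encoding and the chunk schema below are assumptions made for the sketch; the patent does not specify a wire format.

```python
import itertools
import json

def serialize_streams(video, sensor, computed):
    """Interleave chunks from each stream, tagged with stream type and index,
    into one serialized record."""
    chunks = []
    for i, (v, s, c) in enumerate(itertools.zip_longest(video, sensor, computed)):
        if v is not None: chunks.append({"i": i, "stream": "video", "data": v})
        if s is not None: chunks.append({"i": i, "stream": "sensor", "data": s})
        if c is not None: chunks.append({"i": i, "stream": "computed", "data": c})
    return json.dumps(chunks)

def deserialize_streams(blob):
    """Recover per-stream chunk lists from the serialized record."""
    out = {}
    for chunk in json.loads(blob):
        out.setdefault(chunk["stream"], []).append(chunk["data"])
    return out

blob = serialize_streams([10, 11], ["s0", "s1"], [{"pts": 3}])
streams = deserialize_streams(blob)
```

Storing the contextual streams alongside the video is what lets the same effect, or a different one, be re-rendered later from the extracted data.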
  • Patent number: 10977847
    Abstract: A video effects application executes on a client device having an image capture device and receives video data captured by the image capture device. The video effects application extracts information from the captured video data and stores the extracted information as metadata associated with the captured video data. For example, the video effects application identifies objects in the captured video data or identifies optical flow of the captured video data and stores the identified objects or identified optical flow as metadata associated with the captured video data. The video effects application stores information describing modifications to the captured video data in association with the captured video data. When the captured video data is presented, the captured video data, associated metadata, and information describing the modifications is communicated to a renderer, which uses the metadata to perform the identified modifications to the captured video data when presenting the captured video data.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: April 13, 2021
    Assignee: Facebook, Inc.
    Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller, Ragavan Srinivasan, Tomer Bar, Bryce David Redd
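The split this abstract describes, extracting metadata at capture time and applying stored modifications at presentation time, can be sketched as below. Treating pixels above a brightness threshold as "objects" is purely a stand-in for the real detection.

```python
def extract_metadata(frames):
    """Capture-time analysis: record simple per-frame 'objects'
    (here: pixels brighter than 200, a hypothetical stand-in)."""
    return [{"frame": i, "objects": [p for p in frame if p > 200]}
            for i, frame in enumerate(frames)]

def render(frames, metadata, modifications):
    """Apply stored modifications at presentation time, guided by the metadata
    rather than by re-analyzing the video."""
    out = []
    for frame, meta in zip(frames, metadata):
        rendered = list(frame)
        for mod in modifications:
            if mod["type"] == "highlight" and meta["objects"]:
                rendered = [min(255, p + mod["amount"]) for p in rendered]
        out.append(rendered)
    return out

frames = [[10, 250], [20, 30]]
meta = extract_metadata(frames)                       # stored with the video
shown = render(frames, meta, [{"type": "highlight", "amount": 5}])
```

Only the first frame contains a detected object, so only it is modified when presented.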
  • Patent number: 10796185
    Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: October 6, 2020
    Assignee: Facebook, Inc.
    Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
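The confidence-driven switch between tracking algorithms might be sketched as follows. The toy trackers and the 0.5 threshold are invented for illustration; the patent does not name specific algorithms.

```python
def track(frames, primary, fallback, threshold=0.5):
    """Run `primary` per frame; once its confidence score drops below
    `threshold`, switch to `fallback` for that frame and all subsequent ones."""
    algo, results = primary, []
    for frame in frames:
        data, confidence = algo(frame)
        if algo is primary and confidence < threshold:
            algo = fallback
            data, confidence = algo(frame)   # re-track this frame with the fallback
        results.append((data, confidence))
    return results

def primary(f):
    # Hypothetical fast tracker that loses confidence on dark frames.
    return ("fast", f), (0.9 if f > 50 else 0.2)

def fallback(f):
    # Hypothetical slower but more robust tracker.
    return ("robust", f), 0.8

out = track([100, 30, 60], primary, fallback)
```

The AR effect would then be displayed from whichever tracker's data was selected for each frame.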
  • Patent number: 10558675
    Abstract: In one embodiment, a computing system captures, using a camera, a number of frames of a live scene. The system generates preview frames for an augmented scene by applying one or more augmented reality effects to the frames of the live scene. Each preview frame is based on a low-resolution image of the live scene. The low-resolution image has a lower resolution than a maximum resolution of the camera. The system stores at least one preview frame with the augmented reality effects into a storage of the computing device. The system displays a live preview of the augmented scene using the preview frames. The system receives a request from a user to capture an image of the augmented scene while the live preview is being displayed. The system retrieves the at least one preview frame stored in the storage and outputs the retrieved at least one preview frame.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: February 11, 2020
    Assignee: Facebook, Inc.
    Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley
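The key idea, serving a capture request from an already-stored augmented preview frame instead of re-rendering at full resolution, can be sketched like this. The pixel-skipping downsample and the `+1` effect are placeholders for the real image pipeline.

```python
def downsample(frame, factor=2):
    """Low-resolution preview: keep every `factor`-th pixel (illustrative)."""
    return frame[::factor]

class PreviewCapture:
    """Keeps the latest augmented preview frame so a capture request can be
    served instantly from storage."""
    def __init__(self, effect):
        self.effect = effect
        self.stored = None

    def show_preview(self, raw_frame):
        preview = self.effect(downsample(raw_frame))
        self.stored = preview            # store alongside displaying it
        return preview

    def capture(self):
        return self.stored               # serve the shot from the stored preview

cam = PreviewCapture(effect=lambda f: [p + 1 for p in f])
cam.show_preview([10, 20, 30, 40])
shot = cam.capture()
```

The trade-off is that the captured image has the preview's lower resolution, in exchange for zero-latency capture while the effect is running.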
  • Patent number: 10541001
    Abstract: In one embodiment, a method includes accessing a video; detecting one or more objects in one or more frames of the video; identifying one or more of the detected objects; determining a relevance score for each of the one or more of the identified objects with respect to a user to whom the video is to be presented; selecting one or more frames of the video based on the determined relevance scores for the identified objects in the frames; and providing for presentation to the user one or more of the selected frames of the video.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 21, 2020
    Assignee: Facebook, Inc.
    Inventors: Vincent Charles Cheung, Hermes Germi Pique Corchs, Maria Chiara Cacciani, Andrew James Thomas Buckley, Stef Marc Smet, Milen Georgiev Dzhumerov, Mircea-Gabriel Suciu, Muhammed Elsayed Muhammed Elsayed Ibrahim, Cunpu Bo
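The relevance-scored frame selection in this abstract reduces to a scoring and top-k step. The interest weights and summation rule below are hypothetical.

```python
def select_frames(frames, user_interests, top_k=2):
    """Score each frame by the summed relevance of its detected objects to the
    user's interests, then return the highest-scoring frames in video order."""
    scored = []
    for i, objects in enumerate(frames):
        score = sum(user_interests.get(obj, 0.0) for obj in objects)
        scored.append((score, i))
    best = sorted(scored, reverse=True)[:top_k]
    return sorted(i for _, i in best)

frames = [["dog", "car"], ["tree"], ["dog", "cat"]]          # detected objects per frame
interests = {"dog": 0.9, "cat": 0.7, "car": 0.1}             # per-user relevance weights
chosen = select_frames(frames, interests)
```

Frames 0 and 2 score highest for this user and would be the ones presented (for instance as thumbnails or highlights).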
  • Patent number: 10499105
    Abstract: A media effects engine on a computer device applies one or more effects to an input media stream. A performance monitor monitors a performance metric associated with playing the input media stream and reduces a quality parameter associated with the effect upon detecting a drop in the performance metric below a target metric. The quality parameter manages a tradeoff between a quality of effect and an amount of hardware resources consumed to produce the effect. Thus, the effect can be adjusted to meet the capabilities of the computer device.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: December 3, 2019
    Assignee: Facebook, Inc.
    Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller
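The feedback loop between the performance monitor and the quality parameter might look like the sketch below. The step size, floor, and recovery headroom are invented knobs, not values from the patent.

```python
def adjust_quality(measured_fps, target_fps, quality, step=0.1, floor=0.2):
    """Reduce the effect's quality parameter while playback falls below the
    target frame rate; restore it gradually once there is clear headroom."""
    if measured_fps < target_fps and quality > floor:
        return round(max(floor, quality - step), 2)
    if measured_fps > target_fps * 1.2 and quality < 1.0:
        return round(min(1.0, quality + step), 2)
    return quality

quality = 1.0
for fps in (24, 22, 30, 40):        # measured frame rates; target is 30 fps
    quality = adjust_quality(fps, 30, quality)
```

Two slow frames lower the quality twice, a frame exactly at target leaves it alone, and a fast frame recovers one step, which is the tradeoff between effect quality and hardware resources the abstract describes.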
  • Patent number: 10423632
    Abstract: In one embodiment, a method for presenting an augmented reality effect may include receiving, by a computing system, a request for downloading an augmented reality effect, which may include a plurality of elements. The system may select a first subset of elements among the plurality of elements based on one or more predefined rules. The first subset of elements may be transmitted to the client device for display. The system may transmit a remaining subset of elements of the plurality of elements to the client device for display after the transmitting of the first subset of elements is complete. The augmented reality effect may be configured to launch and display at least one element of the first subset of elements before the remaining subset of elements is received by the client device.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: September 24, 2019
    Assignee: Facebook, Inc.
    Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley
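The two-phase delivery in this abstract, sending a rule-selected first subset so the effect can launch before the rest arrives, can be sketched as follows. The "essential" flag is a hypothetical stand-in for the patent's predefined rules.

```python
def split_for_download(elements, rule):
    """Partition an effect's elements into a first subset (sent immediately)
    and a remainder streamed afterwards, per a predefined rule."""
    first = [e for e in elements if rule(e)]
    rest = [e for e in elements if not rule(e)]
    return first, rest

class EffectClient:
    def __init__(self):
        self.received = []
        self.launched = False

    def receive(self, batch):
        self.received.extend(batch)
        if not self.launched and batch:
            self.launched = True     # launch as soon as the first subset arrives

elements = [{"name": "mask", "essential": True},
            {"name": "music", "essential": False},
            {"name": "particles", "essential": False}]
first, rest = split_for_download(elements, rule=lambda e: e["essential"])
client = EffectClient()
client.receive(first)               # effect launches with only the essential subset
launched_early = client.launched and len(client.received) < len(elements)
client.receive(rest)                # remaining elements stream in afterwards
```

The user sees the effect as soon as the first subset lands, rather than waiting for the full download.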
  • Publication number: 20190238610
    Abstract: A video effects application executes on a client device having an image capture device and receives video data captured by the image capture device. The video effects application extracts information from the captured video data and stores the extracted information as metadata associated with the captured video data. For example, the video effects application identifies objects in the captured video data or identifies optical flow of the captured video data and stores the identified objects or identified optical flow as metadata associated with the captured video data. The video effects application stores information describing modifications to the captured video data in association with the captured video data. When the captured video data is presented, the captured video data, associated metadata, and information describing the modifications is communicated to a renderer, which uses the metadata to perform the identified modifications to the captured video data when presenting the captured video data.
    Type: Application
    Filed: April 12, 2019
    Publication date: August 1, 2019
    Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller, Ragavan Srinivasan, Tomer Bar, Bryce David Redd
  • Publication number: 20190228580
    Abstract: In one embodiment, a method includes recognizing an object in an image that is captured by a camera and presented in a region of a screen of a computing device, generating a 3-dimensional mesh representation for the object by recognizing visual components of the object, where the 3-dimensional mesh representation comprises a plurality of polygons, receiving one or more inputs from the user, where the inputs cause color information for at least a part of the region of the screen to be updated, identifying one of the plurality of polygons that corresponds to a first region of the screen, identifying an area of the texture layer that corresponds to the identified polygon, recording the updated color information in the identified area of the texture layer, and generating an augmented reality effect associated with the object based on the updated color information recorded in the identified area of the texture layer.
    Type: Application
    Filed: January 24, 2018
    Publication date: July 25, 2019
    Inventors: Martin Pelant, Yiting Li, Dominic Akira Burt, Hermes Germi Pique Corchs, Mircea-Gabriel Suciu, Guk Hyeon Chai, Michael Slater, Dolapo Omobola Falola
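The screen-to-texture mapping at the core of this abstract can be sketched with rectangles standing in for mesh polygons. Real meshes use arbitrary polygons and UV coordinates; the rectangle lookup here is a deliberate simplification.

```python
def paint(polygons, texture, touch, color):
    """Record a screen touch into the texture area of the polygon under it.
    Each polygon maps a screen rectangle to a texture cell (both hypothetical
    simplifications of a mesh polygon and its UV-mapped texture region)."""
    x, y = touch
    for poly in polygons:
        x0, y0, x1, y1 = poly["screen_rect"]
        if x0 <= x < x1 and y0 <= y < y1:
            texture[poly["texture_cell"]] = color   # update the texture layer
            return poly["id"]
    return None

texture = {}
polygons = [{"id": 0, "screen_rect": (0, 0, 50, 50), "texture_cell": (0, 0)},
            {"id": 1, "screen_rect": (50, 0, 100, 50), "texture_cell": (0, 1)}]
hit = paint(polygons, texture, touch=(60, 10), color="red")
```

Because the paint lands in the texture layer rather than on the screen, the augmented reality effect can keep the color attached to the object as it moves.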
  • Patent number: 10360466
    Abstract: Systems, methods, and non-transitory computer-readable media can receive an image. One or more concepts depicted in the image are identified based on machine learning techniques. The one or more concepts are filtered based on filtering criteria to identify one or more selected concepts. An image description is generated comprising the one or more selected concepts.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: July 23, 2019
    Assignee: Facebook, Inc.
    Inventors: Shaomei Wu, Lada Ariana Adamic, Jeffrey C. Wieland, Omid Farivar, Hermes Germi Pique Corchs, Matt King, Brett Alden Lavalla, Balamanohar Paluri
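The filter-then-compose pipeline in this abstract can be shown in miniature. The confidence threshold, blocklist, and phrasing template are invented; the patent leaves the filtering criteria general.

```python
def describe_image(concepts, blocklist=("blurry",), min_confidence=0.6):
    """Filter machine-detected (concept, confidence) pairs by confidence and a
    blocklist, then compose the survivors into an alt-text style description."""
    kept = [c for c, conf in concepts if conf >= min_confidence and c not in blocklist]
    if not kept:
        return "Image"
    return "Image may contain: " + ", ".join(kept)

concepts = [("dog", 0.95), ("outdoor", 0.8), ("blurry", 0.9), ("cat", 0.3)]
text = describe_image(concepts)
```

Low-confidence and blocklisted concepts are dropped before the description is generated.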
  • Publication number: 20190198057
    Abstract: In one embodiment, a method includes accessing a video; detecting one or more objects in one or more frames of the video; identifying one or more of the detected objects; determining a relevance score for each of the one or more of the identified objects with respect to a user to whom the video is to be presented; selecting one or more frames of the video based on the determined relevance scores for the identified objects in the frames; and providing for presentation to the user one or more of the selected frames of the video.
    Type: Application
    Filed: December 27, 2017
    Publication date: June 27, 2019
    Inventors: Vincent Charles Cheung, Hermes Germi Pique Corchs, Maria Chiara Cacciani, Andrew James Thomas Buckley, Stef Marc Smet, Milen Georgiev Dzhumerov, Mircea-Gabriel Suciu, Muhammed Elsayed Muhammed Elsayed Ibrahim, Cunpu Bo
  • Patent number: 10291678
    Abstract: A video effects application executes on a client device having an image capture device and receives video data captured by the image capture device. The video effects application extracts information from the captured video data and stores the extracted information as metadata associated with the captured video data. For example, the video effects application identifies objects in the captured video data or identifies optical flow of the captured video data and stores the identified objects or identified optical flow as metadata associated with the captured video data. The video effects application stores information describing modifications to the captured video data in association with the captured video data. When the captured video data is presented, the captured video data, associated metadata, and information describing the modifications is communicated to a renderer, which uses the metadata to perform the identified modifications to the captured video data when presenting the captured video data.
    Type: Grant
    Filed: October 1, 2016
    Date of Patent: May 14, 2019
    Assignee: Facebook, Inc.
    Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller, Ragavan Srinivasan, Tomer Bar, Bryce David Redd
  • Publication number: 20190138834
    Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 9, 2019
    Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
  • Publication number: 20190104101
    Abstract: The present disclosure relates generally to increasing engagement in conversations between users, and more particularly to providing an effect to a second user in response to use of an effect by a first user. In certain embodiments, two or more users may be having a conversation via a communication platform of an SNS. The conversation may be streaming (e.g., a video or audio call) or non-streaming (e.g., a message exchange). During the conversation, a first user may send a communication that includes content with a first effect applied thereto. Based on the communication, a second effect corresponding to the first effect may be identified for use by the second user in response to the communication. The second effect may then be provided to the second user so that the second user may use the second effect in response to the communication.
    Type: Application
    Filed: October 4, 2017
    Publication date: April 4, 2019
    Inventors: Hermes Germi Pique Corchs, Ruoruo Zhang
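The effect-pairing idea here amounts to a lookup keyed on the first user's effect. The pairing table and effect names below are hypothetical.

```python
def suggest_reply_effect(message, effect_pairs):
    """When the first user's message carries an effect, look up a corresponding
    effect to offer the second user for their reply."""
    used = message.get("effect")
    return effect_pairs.get(used) if used else None

# Hypothetical table pairing a sent effect with a suggested reply effect.
pairs = {"confetti": "fireworks", "dog_mask": "cat_mask"}
offer = suggest_reply_effect({"text": "hi!", "effect": "dog_mask"}, pairs)
```

A message without an effect produces no suggestion, so the prompt only appears in response to effect use.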
  • Publication number: 20190026283
    Abstract: In one embodiment, a computing system captures, using a camera, a number of frames of a live scene. The system generates preview frames for an augmented scene by applying one or more augmented reality effects to the frames of the live scene. Each preview frame is based on a low-resolution image of the live scene. The low-resolution image has a lower resolution than a maximum resolution of the camera. The system stores at least one preview frame with the augmented reality effects into a storage of the computing device. The system displays a live preview of the augmented scene using the preview frames. The system receives a request from a user to capture an image of the augmented scene while the live preview is being displayed. The system retrieves the at least one preview frame stored in the storage and outputs the retrieved at least one preview frame.
    Type: Application
    Filed: August 24, 2018
    Publication date: January 24, 2019
    Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley
  • Publication number: 20190025904
    Abstract: In one embodiment, a method for presenting an augmented reality effect may include receiving, by a computing system, a request for downloading an augmented reality effect, which may include a plurality of elements. The system may select a first subset of elements among the plurality of elements based on one or more predefined rules. The first subset of elements may be transmitted to the client device for display. The system may transmit a remaining subset of elements of the plurality of elements to the client device for display after the transmitting of the first subset of elements is complete. The augmented reality effect may be configured to launch and display at least one element of the first subset of elements before the remaining subset of elements is received by the client device.
    Type: Application
    Filed: July 19, 2017
    Publication date: January 24, 2019
    Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley