Patents by Inventor Hermes Germi Pique Corchs
Hermes Germi Pique Corchs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11386630
Abstract: In one embodiment, the system may receive a serialized data stream generated by serializing data chunks including data from a video stream and contextual data streams associated with the video stream. The contextual data streams may include a first computed data stream and a sensor data stream. The system may extract the video data stream and one or more contextual data streams from the serialized data stream. The system may generate a second computed data stream based on the sensor data stream in the extracted contextual data streams. The system may compare the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream based on one or more pre-determined criteria. The system may render an artificial reality effect for display with the extracted video data stream based at least in part on the selected computed data stream.
Type: Grant
Filed: April 30, 2021
Date of Patent: July 12, 2022
Assignee: Meta Platforms, Inc.
Inventors: Marcin Kwiatkowski, Mark I-Kai Wang, Mykyta Lutsenko, Miguel Goncalves, Hermes Germi Pique Corchs
-
Patent number: 11182613
Abstract: In one embodiment, a method includes a system accessing an image, which may comprise covered and uncovered portions, and an overlay image comprising opaque pixels. The covered portion may be configured to be covered by the opaque pixels of the overlay image. The system may generate a data structure comprising data elements associated with pixels of the image. Each of the data elements associated with a covered pixel in the covered portion of the image may be configured to identify an uncovered pixel in the uncovered portion of the image that is closest to the covered pixel. Each covered pixel in the covered portion of the image may be modified by accessing the data element associated with the covered pixel, determining a distance between the covered pixel and an associated closest uncovered pixel using the accessed data element, and modifying a color of the covered pixel based on the distance.
Type: Grant
Filed: June 9, 2017
Date of Patent: November 23, 2021
Assignee: Facebook, Inc.
Inventors: William S. Bailey, Ficus Kirkpatrick, Houman Meshkin, Ryan Keenan Olson, Hermes Germi Pique Corchs
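The distance-based color modification this abstract describes can be illustrated with a minimal, hypothetical sketch: a brute-force nearest-uncovered-pixel lookup and a simple linear falloff. All names and the falloff model are illustrative assumptions, not the claimed implementation.

```python
import math

def nearest_uncovered_map(covered, uncovered):
    """For each covered pixel, find the closest uncovered pixel (brute force)."""
    return {c: min(uncovered, key=lambda u: math.dist(c, u)) for c in covered}

def fade_covered_pixels(colors, covered, uncovered, falloff=0.2):
    """Blend each covered pixel toward black in proportion to its distance
    from its nearest uncovered pixel."""
    mapping = nearest_uncovered_map(covered, uncovered)
    out = dict(colors)
    for c, u in mapping.items():
        d = math.dist(c, u)
        scale = max(0.0, 1.0 - falloff * d)   # linear falloff with distance
        out[c] = tuple(int(v * scale) for v in colors[c])
    return out
```

A production system would precompute the mapping with a distance transform rather than a per-pixel linear scan, which is presumably the point of the data structure the claim describes.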
-
Patent number: 11159768
Abstract: The disclosed computer-implemented method may include receiving a first input from a first artificial reality device detecting a first environment of a first user and determining a first environmental feature of the first environment based on the first input. The method may include receiving a second input from a second artificial reality device detecting a second environment of a second user and determining a second environmental feature of the second environment based on the second input. The method may include comparing the first environmental feature with the second environmental feature and including, based on the comparison, the first and second users in a group for online interactions. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: January 3, 2019
Date of Patent: October 26, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jim Sing Liu, Olivier Marie Bouan Du Chef Du Bos, Hermes Germi Pique Corchs, Matthew Roberts
-
Publication number: 20210327150
Abstract: In one embodiment, the system may receive a serialized data stream generated by serializing data chunks including data from a video stream and contextual data streams associated with the video stream. The contextual data streams may include a first computed data stream and a sensor data stream. The system may extract the video data stream and one or more contextual data streams from the serialized data stream. The system may generate a second computed data stream based on the sensor data stream in the extracted contextual data streams. The system may compare the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream based on one or more pre-determined criteria. The system may render an artificial reality effect for display with the extracted video data stream based at least in part on the selected computed data stream.
Type: Application
Filed: April 30, 2021
Publication date: October 21, 2021
Inventors: Marcin Kwiatkowski, Mark I-Kai Wang, Mykyta Lutsenko, Miguel Goncalves, Hermes Germi Pique Corchs
-
Patent number: 11030814
Abstract: In one embodiment, the system captures a video data stream of a scene and contextual data streams associated with the video data stream. The contextual data streams comprise a sensor data stream or a computed data stream. The system renders an artificial reality effect based on the contextual data streams for display with the video data stream. The system generates a serialized data stream by serializing data chunks of the video data stream and the contextual data streams. The system stores the serialized data stream into a storage. The system extracts the video data stream and one or more of the contextual data streams from the serialized data stream by deserializing the data chunks in the serialized data stream. The system renders the same or another artificial reality effect for display with the extracted video data stream based on the extracted contextual data streams.
Type: Grant
Filed: January 15, 2019
Date of Patent: June 8, 2021
Assignee: Facebook, Inc.
Inventors: Marcin Kwiatkowski, Mark I-Kai Wang, Mykyta Lutsenko, Miguel Goncalves, Hermes Germi Pique Corchs
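The chunk serialization and deserialization described in this family of entries can be sketched with a hypothetical wire framing. The 1-byte stream id and 4-byte big-endian length prefix are assumptions for illustration; the patent does not specify a format.

```python
import struct

STREAM_IDS = {"video": 0, "sensor": 1, "computed": 2}

def serialize_chunks(chunks):
    """Pack (stream_name, payload_bytes) chunks into one serialized stream.
    Each chunk is framed as: 1-byte stream id, 4-byte length, payload."""
    out = bytearray()
    for name, payload in chunks:
        out += struct.pack(">BI", STREAM_IDS[name], len(payload))
        out += payload
    return bytes(out)

def deserialize_chunks(blob):
    """Recover the per-stream chunks from a serialized stream."""
    names = {v: k for k, v in STREAM_IDS.items()}
    streams = {name: [] for name in STREAM_IDS}
    i = 0
    while i < len(blob):
        sid, length = struct.unpack_from(">BI", blob, i)
        i += 5                       # header size: 1 id byte + 4 length bytes
        streams[names[sid]].append(blob[i:i + length])
        i += length
    return streams
```

Interleaving the chunks in capture order keeps video frames adjacent to the sensor readings they were captured with, which is what lets the effect be re-rendered (or a new one rendered) on playback.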
-
Patent number: 10977847
Abstract: A video effects application executes on a client device having an image capture device and receives video data captured by the image capture device. The video effects application extracts information from the captured video data and stores the extracted information as metadata associated with the captured video data. For example, the video effects application identifies objects in the captured video data or identifies optical flow of the captured video data and stores the identified objects or identified optical flow as metadata associated with the captured video data. The video effects application stores information describing modifications to the captured video data in association with the captured video data. When the captured video data is presented, the captured video data, associated metadata, and information describing the modifications are communicated to a renderer, which uses the metadata to perform the identified modifications to the captured video data when presenting the captured video data.
Type: Grant
Filed: April 12, 2019
Date of Patent: April 13, 2021
Assignee: Facebook, Inc.
Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller, Ragavan Srinivasan, Tomer Bar, Bryce David Redd
-
Patent number: 10796185
Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
Type: Grant
Filed: November 3, 2017
Date of Patent: October 6, 2020
Assignee: Facebook, Inc.
Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
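The confidence-driven switch between tracking algorithms can be sketched as follows. The trackers are stand-in callables returning a (tracking_data, confidence) pair; the threshold value and the one-way switching policy are illustrative assumptions.

```python
def track_with_fallback(frames, primary, fallback, threshold=0.5):
    """Run the primary tracker until its confidence drops below the
    threshold, then switch to the fallback tracker for later frames."""
    results = []
    active = primary
    for frame in frames:
        data, confidence = active(frame)
        if active is primary and confidence < threshold:
            active = fallback                 # switch algorithms
            data, confidence = active(frame)  # re-track the failing frame
        results.append((data, confidence))
    return results
```

In practice the second algorithm might be a cheaper, more robust tracker (e.g. gyroscope-based) that keeps the AR effect anchored while the primary (e.g. visual SLAM) has lost the scene.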
-
Patent number: 10558675
Abstract: In one embodiment, a computing system captures, using a camera, a number of frames of a live scene. The system generates preview frames for an augmented scene by applying one or more augmented reality effects to the frames of the live scene. Each preview frame is based on a low-resolution image of the live scene. The low-resolution image has a lower resolution than a maximum resolution of the camera. The system stores at least one preview frame with the augmented reality effects into a storage of the computing device. The system displays a live preview of the augmented scene using the preview frames. The system receives a request from a user to capture an image of the augmented scene while the live preview is being displayed. The system retrieves the at least one preview frame stored in the storage and outputs the retrieved at least one preview frame.
Type: Grant
Filed: August 24, 2018
Date of Patent: February 11, 2020
Assignee: Facebook, Inc.
Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley
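The store-the-preview-frame idea can be sketched with a hypothetical ring buffer: the frames shown in the live preview are retained so a capture request is served from storage instead of triggering a fresh full-resolution render. Class and method names are illustrative assumptions.

```python
from collections import deque

class AugmentedPreview:
    """Keep the most recent preview frames (low-res frames with AR effects
    applied) so a capture request can be served from storage instantly."""

    def __init__(self, effect, keep=3):
        self.effect = effect                # function: frame -> augmented frame
        self.stored = deque(maxlen=keep)    # ring buffer of recent previews

    def on_camera_frame(self, low_res_frame):
        preview = self.effect(low_res_frame)
        self.stored.append(preview)         # retain for later capture requests
        return preview                      # shown in the live preview

    def capture(self):
        # Serve the capture from the stored preview rather than re-rendering.
        return self.stored[-1]
```

The payoff is latency: the captured image is exactly the frame the user saw, with no re-render of the effect at capture time.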
-
Patent number: 10541001
Abstract: In one embodiment, a method includes accessing a video; detecting one or more objects in one or more frames of the video; identifying one or more of the detected objects; determining a relevance score for each of the one or more of the identified objects with respect to a user the video is to be presented to; selecting one or more frames of the video based on the determined relevance scores for the identified objects in the frames; and providing for presentation to the user one or more of the selected frames of the video.
Type: Grant
Filed: December 27, 2017
Date of Patent: January 21, 2020
Assignee: Facebook, Inc.
Inventors: Vincent Charles Cheung, Hermes Germi Pique Corchs, Maria Chiara Cacciani, Andrew James Thomas Buckley, Stef Marc Smet, Milen Georgiev Dzhumerov, Mircea-Gabriel Suciu, Muhammed Elsayed Muhammed Elsayed Ibrahim, Cunpu Bo
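The relevance-scored frame selection can be sketched as follows. The summed-relevance scoring and top-k cutoff are illustrative assumptions; the abstract does not fix a scoring rule.

```python
def select_frames(frames, relevance, top_k=2):
    """Score each frame by the summed relevance of the objects identified
    in it, then return the top-k frame indices for presentation.
    `frames` maps frame index -> list of identified object labels;
    `relevance` maps object label -> relevance score for this user."""
    scored = {
        idx: sum(relevance.get(obj, 0.0) for obj in objects)
        for idx, objects in frames.items()
    }
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:top_k]
```

A typical use would be picking a personalized thumbnail: the frame containing the objects most relevant to the viewing user.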
-
Patent number: 10499105
Abstract: A media effects engine on a computer device applies one or more effects to an input media stream. A performance monitor monitors a performance metric associated with playing the input media stream and reduces a quality parameter associated with the effect upon detecting a drop in the performance metric below a target metric. The quality parameter manages a tradeoff between a quality of effect and an amount of hardware resources consumed to produce the effect. Thus, the effect can be adjusted to meet the capabilities of the computer device.
Type: Grant
Filed: June 12, 2018
Date of Patent: December 3, 2019
Assignee: Facebook, Inc.
Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller
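The feedback step the performance monitor applies can be sketched with a hypothetical controller using frame rate as the performance metric. The step size, quality floor, and 20% hysteresis margin are assumptions for illustration.

```python
def adjust_quality(quality, fps, target_fps, step=0.1, floor=0.1):
    """Lower the effect's quality parameter when the measured frame rate
    drops below the target; raise it back only when there is clear headroom."""
    if fps < target_fps:
        quality = max(floor, quality - step)
    elif fps > target_fps * 1.2:        # hysteresis before raising quality
        quality = min(1.0, quality + step)
    return round(quality, 2)
```

The hysteresis band keeps the controller from oscillating between two quality levels when the device hovers near the target frame rate.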
-
Patent number: 10423632
Abstract: In one embodiment, a method for presenting an augmented reality effect may include receiving, by a computing system, a request for downloading an augmented reality effect, which may include a plurality of elements. The system may select a first subset of elements among the plurality of elements based on one or more predefined rules. The first subset of elements may be transmitted to the client device for display. The system may transmit a remaining subset of elements of the plurality of elements to the client device for display after the transmitting of the first subset of elements is complete. The augmented reality effect may be configured to launch and display at least one element of the first subset of elements before the remaining subset of elements is received by the client device.
Type: Grant
Filed: July 19, 2017
Date of Patent: September 24, 2019
Assignee: Facebook, Inc.
Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley
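The two-phase element transmission can be sketched as follows. The `rule` predicate stands in for the patent's "predefined rules", and all names are hypothetical.

```python
def split_for_download(elements, rule):
    """Split an effect's elements into a first subset (sent immediately so
    the effect can launch) and a remaining subset (sent afterwards)."""
    first = [e for e in elements if rule(e)]
    remaining = [e for e in elements if not rule(e)]
    return first, remaining

def transmit(elements, rule, send):
    """Send the first subset, then the remainder, preserving order."""
    first, remaining = split_for_download(elements, rule)
    for element in first + remaining:
        send(element)
    return first
```

A plausible rule would prioritize the assets needed for first render (masks, shaders) over deferrable ones (audio, secondary textures), so the effect launches before the download completes.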
-
Publication number: 20190238610
Abstract: A video effects application executes on a client device having an image capture device and receives video data captured by the image capture device. The video effects application extracts information from the captured video data and stores the extracted information as metadata associated with the captured video data. For example, the video effects application identifies objects in the captured video data or identifies optical flow of the captured video data and stores the identified objects or identified optical flow as metadata associated with the captured video data. The video effects application stores information describing modifications to the captured video data in association with the captured video data. When the captured video data is presented, the captured video data, associated metadata, and information describing the modifications are communicated to a renderer, which uses the metadata to perform the identified modifications to the captured video data when presenting the captured video data.
Type: Application
Filed: April 12, 2019
Publication date: August 1, 2019
Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller, Ragavan Srinivasan, Tomer Bar, Bryce David Redd
-
Publication number: 20190228580
Abstract: In one embodiment, a method includes recognizing an object in an image that is captured by a camera and presented in a region of a screen of a computing device, generating a 3-dimensional mesh representation for the object by recognizing visual components of the object, where the 3-dimensional mesh representation comprises a plurality of polygons, receiving one or more inputs from a user, where the inputs cause color information for at least a part of the region of the screen to be updated, identifying one of the plurality of polygons that corresponds to a first region of the screen, identifying an area of a texture layer that corresponds to the identified polygon, recording the updated color information in the identified area of the texture layer, and generating an augmented reality effect associated with the object based on the updated color information recorded in the identified area of the texture layer.
Type: Application
Filed: January 24, 2018
Publication date: July 25, 2019
Inventors: Martin Pelant, Yiting Li, Dominic Akira Burt, Hermes Germi Pique Corchs, Mircea-Gabriel Suciu, Guk Hyeon Chai, Michael Slater, Dolapo Omobola Falola
-
Patent number: 10360466
Abstract: Systems, methods, and non-transitory computer-readable media can receive an image. One or more concepts depicted in the image are identified based on machine learning techniques. The one or more concepts are filtered based on filtering criteria to identify one or more selected concepts. An image description is generated comprising the one or more selected concepts.
Type: Grant
Filed: December 27, 2016
Date of Patent: July 23, 2019
Assignee: Facebook, Inc.
Inventors: Shaomei Wu, Lada Ariana Adamic, Jeffrey C. Wieland, Omid Farivar, Hermes Germi Pique Corchs, Matt King, Brett Alden Lavalla, Balamanohar Paluri
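The filter-then-describe pipeline can be sketched as follows. The confidence threshold, blocklist, and output phrasing are illustrative assumptions; the abstract only specifies that detected concepts are filtered and composed into a description.

```python
def describe_image(concepts, min_confidence=0.8, blocklist=frozenset({"blurry"})):
    """Filter machine-detected concepts by confidence and a blocklist,
    then compose a simple image description from the survivors.
    `concepts` is a list of (label, confidence) pairs."""
    selected = [label for label, conf in concepts
                if conf >= min_confidence and label not in blocklist]
    if not selected:
        return "Image may contain: no description available."
    return "Image may contain: " + ", ".join(selected) + "."
```

A common application of this kind of pipeline is automatically generated alt text for screen-reader users, where low-confidence or unhelpful labels are filtered out before the description is read aloud.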
-
Publication number: 20190198057
Abstract: In one embodiment, a method includes accessing a video; detecting one or more objects in one or more frames of the video; identifying one or more of the detected objects; determining a relevance score for each of the one or more of the identified objects with respect to a user the video is to be presented to; selecting one or more frames of the video based on the determined relevance scores for the identified objects in the frames; and providing for presentation to the user one or more of the selected frames of the video.
Type: Application
Filed: December 27, 2017
Publication date: June 27, 2019
Inventors: Vincent Charles Cheung, Hermes Germi Pique Corchs, Maria Chiara Cacciani, Andrew James Thomas Buckley, Stef Marc Smet, Milen Georgiev Dzhumerov, Mircea-Gabriel Suciu, Muhammed Elsayed Muhammed Elsayed Ibrahim, Cunpu Bo
-
Patent number: 10291678
Abstract: A video effects application executes on a client device having an image capture device and receives video data captured by the image capture device. The video effects application extracts information from the captured video data and stores the extracted information as metadata associated with the captured video data. For example, the video effects application identifies objects in the captured video data or identifies optical flow of the captured video data and stores the identified objects or identified optical flow as metadata associated with the captured video data. The video effects application stores information describing modifications to the captured video data in association with the captured video data. When the captured video data is presented, the captured video data, associated metadata, and information describing the modifications are communicated to a renderer, which uses the metadata to perform the identified modifications to the captured video data when presenting the captured video data.
Type: Grant
Filed: October 1, 2016
Date of Patent: May 14, 2019
Assignee: Facebook, Inc.
Inventors: Hermes Germi Pique Corchs, Kirill A. Pugin, Razvan Gabriel Racasanu, Colin Todd Miller, Ragavan Srinivasan, Tomer Bar, Bryce David Redd
-
Publication number: 20190138834
Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
Type: Application
Filed: November 3, 2017
Publication date: May 9, 2019
Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
-
Publication number: 20190104101
Abstract: The present disclosure relates generally to increasing engagement in conversations between users, and more particularly to providing an effect to a second user in response to use of an effect by a first user. In certain embodiments, two or more users may be having a conversation via a communication platform of an SNS. The conversation may be streaming (e.g., a video or audio call) or non-streaming (e.g., a message exchange). During the conversation, a first user may send a communication that includes content with a first effect applied thereto. Based on the communication, a second effect corresponding to the first effect may be identified for use by the second user in response to the communication. The second effect may then be provided to the second user so that the second user may use the second effect in response to the communication.
Type: Application
Filed: October 4, 2017
Publication date: April 4, 2019
Inventors: Hermes Germi Pique Corchs, Ruoruo Zhang
-
Publication number: 20190025904
Abstract: In one embodiment, a method for presenting an augmented reality effect may include receiving, by a computing system, a request for downloading an augmented reality effect, which may include a plurality of elements. The system may select a first subset of elements among the plurality of elements based on one or more predefined rules. The first subset of elements may be transmitted to the client device for display. The system may transmit a remaining subset of elements of the plurality of elements to the client device for display after the transmitting of the first subset of elements is complete. The augmented reality effect may be configured to launch and display at least one element of the first subset of elements before the remaining subset of elements is received by the client device.
Type: Application
Filed: July 19, 2017
Publication date: January 24, 2019
Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley
-
Publication number: 20190026283
Abstract: In one embodiment, a computing system captures, using a camera, a number of frames of a live scene. The system generates preview frames for an augmented scene by applying one or more augmented reality effects to the frames of the live scene. Each preview frame is based on a low-resolution image of the live scene. The low-resolution image has a lower resolution than a maximum resolution of the camera. The system stores at least one preview frame with the augmented reality effects into a storage of the computing device. The system displays a live preview of the augmented scene using the preview frames. The system receives a request from a user to capture an image of the augmented scene while the live preview is being displayed. The system retrieves the at least one preview frame stored in the storage and outputs the retrieved at least one preview frame.
Type: Application
Filed: August 24, 2018
Publication date: January 24, 2019
Inventors: Trevor Charles Armstrong, Mauricio Narvaez, Hermes Germi Pique Corchs, Pradeep George Mathias, Gwylim Aidan Ashley