Patents by Inventor Samuel Edward Hare
Samuel Edward Hare has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250097376
Abstract: Systems, devices, media and methods are presented for presentation of modified objects within a video stream. The systems and methods select an object of interest depicted within a user interface based on an associated image modifier, determine a modifier context based at least in part on one or more characteristics of the selected object, identify a set of image modifiers based on the modifier context, rank a first portion of the identified set of image modifiers based on a primary ordering characteristic, rank a second portion of the identified set of image modifiers based on a secondary ordering characteristic and cause presentation of the modifier icons for the ranked set of image modifiers.
Type: Application
Filed: December 3, 2024
Publication date: March 20, 2025
Inventors: Ebony James Charlton, Michael John Evans, Samuel Edward Hare, Andrew James McPhee, Robert Cornelius Murphy, Eitan Pilipski
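The two-tier ranking this abstract describes (one portion of the modifiers ordered by a primary characteristic, the remainder by a secondary characteristic) can be illustrated with a short sketch. This is a minimal illustration and not the patented implementation; the `contextual`, `relevance`, and `recency` fields are hypothetical stand-ins for the modifier context and the two ordering characteristics.

    from dataclasses import dataclass

    @dataclass
    class ImageModifier:
        name: str
        contextual: bool   # hypothetical: does the modifier match the modifier context?
        relevance: float   # hypothetical primary ordering characteristic
        recency: float     # hypothetical secondary ordering characteristic

    def rank_modifiers(modifiers):
        """Rank context-matched modifiers by the primary characteristic,
        then the remaining modifiers by the secondary characteristic."""
        first = sorted((m for m in modifiers if m.contextual),
                       key=lambda m: m.relevance, reverse=True)
        second = sorted((m for m in modifiers if not m.contextual),
                        key=lambda m: m.recency, reverse=True)
        return first + second

    if __name__ == "__main__":
        mods = [
            ImageModifier("vintage", False, 0.3, 0.9),
            ImageModifier("face_swap", True, 0.9, 0.1),
            ImageModifier("dog_ears", True, 0.6, 0.5),
        ]
        print([m.name for m in rank_modifiers(mods)])  # face_swap, dog_ears, vintage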
-
Publication number: 20250086909
Abstract: The subject technology generates a segmentation mask based on first image data. The subject technology applies the segmentation mask on first depth data to reduce a set of artifacts in a depth map based on the first depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels. The subject technology applies, to the first image data and the first depth data, a first augmented reality content generator corresponding to a selected first selectable graphical item, the first image data and the first depth data being captured with a camera. The subject technology generates a message including the applied first augmented reality content generator to the first image data and the first depth data.
Type: Application
Filed: November 21, 2024
Publication date: March 13, 2025
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
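A rough sketch of two of the steps described above: applying a segmentation mask to depth data to suppress artifacts, and packing a single-channel floating-point depth map into multiple 8-bit channels. NumPy arrays stand in for the camera textures, and the two-byte packing scheme is an assumption for illustration, not the patented format.

    import numpy as np

    def mask_depth(depth, segmentation_mask):
        """Keep depth samples only where the segmentation mask marks the subject."""
        return np.where(segmentation_mask > 0, depth, 0.0)

    def pack_depth(depth, max_depth=10.0):
        """Pack a single-channel float depth map into two 8-bit channels (high/low bytes)."""
        normalized = np.clip(depth / max_depth, 0.0, 1.0)
        d16 = (normalized * 65535).astype(np.uint16)
        high = (d16 >> 8).astype(np.uint8)
        low = (d16 & 0xFF).astype(np.uint8)
        return np.stack([high, low], axis=-1)

    def unpack_depth(packed, max_depth=10.0):
        """Recover the float depth map from the packed channels."""
        d16 = (packed[..., 0].astype(np.uint16) << 8) | packed[..., 1]
        return d16.astype(np.float32) / 65535.0 * max_depth

    if __name__ == "__main__":
        depth = np.random.rand(4, 4).astype(np.float32) * 10.0
        mask = (np.random.rand(4, 4) > 0.5).astype(np.uint8)
        packed = pack_depth(mask_depth(depth, mask))
        print(packed.shape, unpack_depth(packed).dtype)  # (4, 4, 2) float32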
-
Publication number: 20250078427
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by one or more processors that implement a messaging application, a video feed from a camera of a user device; detecting, by the messaging application, a face in the video feed; in response to detecting the face in the video feed, retrieving a three-dimensional (3D) caption; modifying the video feed to include the 3D caption at a position in 3D space of the video feed proximate to the face; and displaying a modified video feed that includes the face and the 3D caption.
Type: Application
Filed: November 18, 2024
Publication date: March 6, 2025
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
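The placement step described above can be sketched in isolation: given a detected face region and its estimated depth, compute a 3D anchor just above the face at which the caption would be rendered. Face detection and rendering themselves are out of scope here; the `FaceRegion` fields and the vertical offset are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class FaceRegion:
        center_x: float  # normalized image coordinate, 0..1
        top_y: float     # normalized y of the top of the face box
        depth: float     # estimated distance from the camera, in meters

    def caption_anchor(face: FaceRegion, vertical_offset: float = 0.05):
        """Return a 3D position (x, y, z) proximate to the face: centered
        horizontally, slightly above the face, at the face's estimated depth."""
        return (face.center_x, face.top_y - vertical_offset, face.depth)

    if __name__ == "__main__":
        face = FaceRegion(center_x=0.5, top_y=0.3, depth=0.6)
        print(caption_anchor(face))  # (0.5, 0.25, 0.6)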
-
Patent number: 12231804
Abstract: Systems, devices, media and methods are presented for presentation of modified objects within a video stream. The systems and methods select an object of interest depicted within a user interface based on an associated image modifier, determine a modifier context based at least in part on one or more characteristics of the selected object, identify a set of image modifiers based on the modifier context, rank a first portion of the identified set of image modifiers based on a primary ordering characteristic, rank a second portion of the identified set of image modifiers based on a secondary ordering characteristic and cause presentation of the modifier icons for the ranked set of image modifiers.
Type: Grant
Filed: July 13, 2023
Date of Patent: February 18, 2025
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Michael John Evans, Samuel Edward Hare, Andrew James McPhee, Robert Cornelius Murphy, Eitan Pilipski
-
Patent number: 12231609
Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. The subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
Type: Grant
Filed: October 18, 2023
Date of Patent: February 18, 2025
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
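A simplified stand-in for the background processing step above: the subject is isolated with a segmentation mask and the background is replaced with blurred content (real inpainting is considerably more involved). NumPy arrays stand in for the image data, and the box blur is an illustrative choice rather than the actual filter used.

    import numpy as np

    def box_blur(image, k=7):
        """Crude box blur: average of shifted copies along the two spatial axes."""
        out = image.astype(np.float32)
        offsets = range(-(k // 2), k // 2 + 1)
        for axis in (0, 1):
            out = np.mean([np.roll(out, s, axis=axis) for s in offsets], axis=0)
        return out

    def blur_background(image, segmentation_mask, k=7):
        """Keep the segmented subject sharp and fill the background with blurred pixels."""
        blurred = box_blur(image, k)
        subject = (segmentation_mask > 0)[..., None]  # broadcast over color channels
        return np.where(subject, image.astype(np.float32), blurred)

    if __name__ == "__main__":
        image = np.random.rand(32, 32, 3).astype(np.float32)
        mask = np.zeros((32, 32), dtype=np.uint8)
        mask[8:24, 8:24] = 1  # pretend the subject occupies the center
        print(blur_background(image, mask).shape)  # (32, 32, 3)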
-
Patent number: 12217374
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering three-dimensional virtual objects within real-world environments. Virtual rendering of a three-dimensional virtual object can be altered appropriately as a user moves around the object in the real world through utilization of a redundant tracking system comprising multiple tracking sub-systems. Virtual object rendering can be with respect to a reference surface in a real-world three-dimensional space depicted in a camera view of a mobile computing device.
Type: Grant
Filed: April 10, 2023
Date of Patent: February 4, 2025
Assignee: Snap Inc.
Inventors: Andrew James McPhee, Ebony James Charlton, Samuel Edward Hare, Michael John Evans, Jokubas Dargis, Ricardo Sanchez-Saez
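One piece of the above, rendering relative to a reference surface, can be sketched as a coordinate transform: the virtual object's local vertices are mapped into world space by the pose of the detected surface, so the object stays anchored as the tracked pose updates. The 4x4 pose matrix below is an illustrative stand-in for whatever the tracking sub-systems actually report.

    import numpy as np

    def surface_pose(origin, yaw_radians):
        """Build a 4x4 pose (rotation about the vertical axis plus translation) for a
        detected reference surface; a stand-in for tracking output."""
        c, s = np.cos(yaw_radians), np.sin(yaw_radians)
        pose = np.eye(4)
        pose[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
        pose[:3, 3] = origin
        return pose

    def object_to_world(local_vertices, pose):
        """Transform the virtual object's local-space vertices into world space."""
        homogeneous = np.hstack([local_vertices, np.ones((len(local_vertices), 1))])
        return (pose @ homogeneous.T).T[:, :3]

    if __name__ == "__main__":
        cube_corner = np.array([[0.1, 0.0, 0.1]])               # object-space vertex
        pose = surface_pose(origin=[0.0, -0.5, -2.0], yaw_radians=np.pi / 4)
        print(object_to_world(cube_corner, pose))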
-
Patent number: 12211159
Abstract: Systems and methods are provided for capturing, by a camera of a user device, a first image depicting a first environment of the user device; overlaying a first virtual object on a portion of the first image depicting the first environment; modifying a surface of the first virtual object using content captured by the user device; storing a second virtual object comprising the first virtual object with the modified surface; and generating for display the second virtual object on a portion of a second image depicting a second environment.
Type: Grant
Filed: October 16, 2023
Date of Patent: January 28, 2025
Assignee: Snap Inc.
Inventors: Samuel Edward Hare, Andrew James McPhee, Maxim Maximov Lazarov, Wentao Shang, Kyle Goodrich, Tony Mathew
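A minimal sketch of the flow described above, assuming simple illustrative data structures: a captured image becomes the new surface texture of a virtual object, and the modified object is then composited over a different environment. Real AR pipelines carry far more state (meshes, UV maps, poses) than shown here.

    from dataclasses import dataclass, replace
    import numpy as np

    @dataclass
    class VirtualObject:
        name: str
        texture: np.ndarray  # H x W x 3 surface texture

    def modify_surface(obj: VirtualObject, captured: np.ndarray) -> VirtualObject:
        """Return a copy of the object whose surface uses the captured content."""
        return replace(obj, texture=captured.copy())

    def overlay(environment: np.ndarray, obj: VirtualObject, top_left=(0, 0)) -> np.ndarray:
        """Naive compositing: paste the object's texture onto the environment image."""
        out = environment.copy()
        y, x = top_left
        h, w, _ = obj.texture.shape
        out[y:y + h, x:x + w] = obj.texture
        return out

    if __name__ == "__main__":
        second_env = np.full((64, 64, 3), 255, dtype=np.uint8)
        captured = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)
        obj = modify_surface(VirtualObject("billboard", np.zeros((16, 16, 3), np.uint8)), captured)
        print(overlay(second_env, obj, top_left=(8, 8)).shape)  # (64, 64, 3)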
-
Patent number: 12192667
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by a messaging application, an image from a camera of a user device; receiving input that selects a user-customizable effects option for activating a user-customizable effects mode; in response to receiving the input, displaying an array of a plurality of effect options together with the image proximate to the user-customizable effects option; and applying a first effect associated with a first effect option of the plurality of effect options to the image.
Type: Grant
Filed: July 20, 2023
Date of Patent: January 7, 2025
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 12182951
Abstract: The subject technology generates a segmentation mask based on first image data. The subject technology applies the segmentation mask on first depth data to reduce a set of artifacts in a depth map based on the first depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels. The subject technology applies, to the first image data and the first depth data, a first augmented reality content generator corresponding to a selected first selectable graphical item, the first image data and the first depth data being captured with a camera. The subject technology generates a message including the applied first augmented reality content generator to the first image data and the first depth data.
Type: Grant
Filed: August 31, 2023
Date of Patent: December 31, 2024
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 12175613
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by one or more processors that implement a messaging application, a video feed from a camera of a user device; detecting, by the messaging application, a face in the video feed; in response to detecting the face in the video feed, retrieving a three-dimensional (3D) caption; modifying the video feed to include the 3D caption at a position in 3D space of the video feed proximate to the face; and displaying a modified video feed that includes the face and the 3D caption.
Type: Grant
Filed: October 2, 2023
Date of Patent: December 24, 2024
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Publication number: 20240404208
Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between such tracking sub-systems, provides a solution to the problem of interrupted tracking by merging multiple tracking approaches into a single tracking system. This system is able to track objects with six degrees of freedom (6DoF) and with 3DoF by combining and transitioning between multiple tracking systems based on the availability of tracking indicia tracked by those systems. Thus, as the indicia tracked by any one tracking system becomes unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
Type: Application
Filed: August 14, 2024
Publication date: December 5, 2024
Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
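The switching behavior can be sketched as follows: each tracking sub-system reports whether its indicia are currently available, and the combined tracker falls back from 6DoF to 3DoF (and back) depending on which sub-systems can contribute. The sub-system names and the rule that 6DoF requires positional indicia are illustrative assumptions, not details from the claims.

    from dataclasses import dataclass

    @dataclass
    class TrackingSubsystem:
        name: str
        provides_rotation: bool
        provides_translation: bool
        indicia_available: bool  # can this sub-system currently see its tracking indicia?

    def select_tracking_mode(subsystems):
        """Return '6DoF' when both rotation and translation can be tracked,
        '3DoF' when only rotation can, and 'none' otherwise."""
        active = [s for s in subsystems if s.indicia_available]
        rotation = any(s.provides_rotation for s in active)
        translation = any(s.provides_translation for s in active)
        if rotation and translation:
            return "6DoF"
        if rotation:
            return "3DoF"
        return "none"

    if __name__ == "__main__":
        gyro = TrackingSubsystem("gyroscope", True, False, True)
        vio = TrackingSubsystem("visual-inertial odometry", True, True, True)
        print(select_tracking_mode([gyro, vio]))   # 6DoF
        vio.indicia_available = False              # visual indicia lost
        print(select_tracking_mode([gyro, vio]))   # falls back to 3DoF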
-
Publication number: 20240372963
Abstract: A machine learning system can generate an image mask (e.g., a pixel mask) comprising pixel assignments for pixels. The pixels can be assigned to classes, including, for example, face, clothes, body skin, or hair. The machine learning system can be implemented using a convolutional neural network that is configured to execute efficiently on computing devices having limited resources, such as mobile phones. The pixel mask can be used to more accurately display video effects interacting with a user or subject depicted in the image.
Type: Application
Filed: July 15, 2024
Publication date: November 7, 2024
Inventors: Lidiia Bogdanovych, William Brendel, Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang
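A sketch of how a per-pixel mask like the one described might be used downstream: per-class scores (random numbers here, standing in for a network's output) are reduced to a class index per pixel, and a video effect is applied only to pixels of one class. The class list and the tinting effect are illustrative assumptions.

    import numpy as np

    CLASSES = ["background", "face", "clothes", "body_skin", "hair"]

    def logits_to_mask(logits):
        """Collapse per-class scores (H x W x C) into a per-pixel class index mask."""
        return np.argmax(logits, axis=-1)

    def tint_class(image, mask, class_name, color, strength=0.5):
        """Blend a tint color into pixels assigned to the given class."""
        target = mask == CLASSES.index(class_name)
        out = image.astype(np.float32)
        out[target] = (1 - strength) * out[target] + strength * np.array(color, np.float32)
        return out.astype(np.uint8)

    if __name__ == "__main__":
        h, w = 48, 48
        logits = np.random.rand(h, w, len(CLASSES)).astype(np.float32)  # stand-in for CNN output
        image = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
        mask = logits_to_mask(logits)
        tinted = tint_class(image, mask, "hair", color=(255, 0, 255))
        print(tinted.shape, int(mask.max()))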
-
Publication number: 20240362873
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the one or more text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the one or more text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at the position in the 3D space captured within the camera feed.
Type: Application
Filed: July 3, 2024
Publication date: October 31, 2024
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 12106441
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the one or more text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the one or more text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at the position in the 3D space captured within the camera feed.
Type: Grant
Filed: December 1, 2022
Date of Patent: October 1, 2024
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 12094063
Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between such tracking sub-systems, provides a solution to the problem of interrupted tracking by merging multiple tracking approaches into a single tracking system. This system is able to track objects with six degrees of freedom (6 DoF) and with 3 DoF by combining and transitioning between multiple tracking systems based on the availability of tracking indicia tracked by those systems. Thus, as the indicia tracked by any one tracking system becomes unavailable, the redundant tracking system seamlessly switches between tracking in 6 DoF and 3 DoF, thereby providing the user with an uninterrupted experience.
Type: Grant
Filed: September 14, 2022
Date of Patent: September 17, 2024
Assignee: Snap Inc.
Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
-
Patent number: 12075190
Abstract: A machine learning system can generate an image mask (e.g., a pixel mask) comprising pixel assignments for pixels. The pixels can be assigned to classes, including, for example, face, clothes, body skin, or hair. The machine learning system can be implemented using a convolutional neural network that is configured to execute efficiently on computing devices having limited resources, such as mobile phones. The pixel mask can be used to more accurately display video effects interacting with a user or subject depicted in the image.
Type: Grant
Filed: July 13, 2023
Date of Patent: August 27, 2024
Assignee: Snap Inc.
Inventors: Lidiia Bogdanovych, William Brendel, Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang
-
Publication number: 20240249522
Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real-time or near real-time.
Type: Application
Filed: April 2, 2024
Publication date: July 25, 2024
Inventors: Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
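The keyframe/flow split described above can be sketched as follows: an expensive per-frame operation (standing in for the neural network) runs only on keyframes, while intermediate frames reuse the last keyframe's result shifted by a flow estimate. Here the flow map is reduced to a single global offset and the expensive operation is just a gradient filter; both are simplifying assumptions for illustration.

    import numpy as np

    def expensive_effect(frame):
        """Stand-in for a costly per-frame model (e.g. a neural network pass)."""
        return np.abs(np.gradient(frame.astype(np.float32), axis=0))

    def apply_flow(processed, flow):
        """Warp a processed keyframe result by a (dy, dx) offset; real pipelines
        would use a dense per-pixel flow map instead."""
        dy, dx = flow
        return np.roll(np.roll(processed, dy, axis=0), dx, axis=1)

    def process_sequence(frames, keyframe_interval=4):
        """Run the expensive effect on keyframes; propagate it to other frames via flow."""
        outputs, last_keyframe_result = [], None
        for i, frame in enumerate(frames):
            if i % keyframe_interval == 0 or last_keyframe_result is None:
                last_keyframe_result = expensive_effect(frame)          # "slow" pipeline
                outputs.append(last_keyframe_result)
            else:
                flow = (0, i % keyframe_interval)                       # stand-in flow estimate
                outputs.append(apply_flow(last_keyframe_result, flow))  # "fast" pipeline
        return outputs

    if __name__ == "__main__":
        frames = [np.random.rand(24, 24) for _ in range(10)]
        print(len(process_sequence(frames)))  # 10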
-
Patent number: 12020377
Abstract: Systems and methods are provided for receiving a two-dimensional (2D) image comprising a 2D object; identifying a contour of the 2D object; generating a three-dimensional (3D) mesh based on the contour of the 2D object; and applying a texture of the 2D object to the 3D mesh to output a 3D object representing the 2D object.
Type: Grant
Filed: May 9, 2023
Date of Patent: June 25, 2024
Assignee: Snap Inc.
Inventors: Samuel Edward Hare, Andrew James McPhee, Daniel Moreno, Kyle Goodrich
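A very small sketch of the contour-to-mesh idea: a closed 2D contour is duplicated at two depths and the copies are stitched into side faces, producing an extruded 3D shell; the original image would then serve as the texture for the front face. Triangulating the front/back caps and proper UV mapping are omitted, and the extrusion depth is an arbitrary assumption.

    def extrude_contour(contour, depth=0.1):
        """Turn a closed 2D contour [(x, y), ...] into vertices and quad faces of an
        extruded 3D shell (front ring, back ring, and stitched side faces)."""
        n = len(contour)
        front = [(x, y, 0.0) for x, y in contour]
        back = [(x, y, -depth) for x, y in contour]
        vertices = front + back
        # Each side face connects contour edge i -> i+1 on both rings.
        faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
        return vertices, faces

    if __name__ == "__main__":
        square = [(0, 0), (1, 0), (1, 1), (0, 1)]
        vertices, faces = extrude_contour(square)
        print(len(vertices), len(faces))  # 8 vertices, 4 side faces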
-
Patent number: 11989938
Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real-time or near real-time.
Type: Grant
Filed: May 4, 2023
Date of Patent: May 21, 2024
Assignee: Snap Inc.
Inventors: Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
-
Publication number: 20240161425
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by a messaging application, a video feed from a camera of a user device that depicts a face; receiving a request to add a 3D caption to the video feed; identifying a graphical element that is associated with context of the 3D caption; and displaying the 3D caption and the identified graphical element in the video feed at a position in 3D space of the video feed proximate to the face depicted in the video feed.
Type: Application
Filed: January 22, 2024
Publication date: May 16, 2024
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang