Patents by Inventor Ebony James Charlton

Ebony James Charlton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10623666
    Abstract: Systems, devices, media, and methods are presented for presentation of modified objects within a video stream. The systems and methods receive a selection at a user interface of a computing device and determine a modifier context based at least in part on the selection and a position within the user interface. The systems and methods identify at least one set of modifiers based on the modifier context. The systems and methods determine an order for the set of modifiers based on the modifier context and cause presentation of modifier icons for the set of modifiers within the user interface.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: April 14, 2020
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Michael John Evans, Samuel Edward Hare, Andrew James McPhee, Robert Cornelius Murphy, Eitan Pilipski
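The flow this abstract describes can be sketched in Python. Everything here is a hypothetical illustration, not the patented implementation: the context names, modifier sets, and ordering rule are invented to show how a selection plus its on-screen position could pick and order a modifier set.

```python
# Hypothetical sketch: a UI selection and its position determine a "modifier
# context", which selects a set of modifiers and orders them for display.
# Set names, regions, and the ordering rule are illustrative only.

MODIFIER_SETS = {
    "face": ["face_swap", "big_eyes", "dog_ears"],
    "world": ["floor_lava", "rain", "confetti"],
}

def modifier_context(selection, position, face_region):
    """Derive a context from what was tapped and where on screen."""
    if selection == "object" and face_region[0] <= position[1] <= face_region[1]:
        return "face"
    return "world"

def ordered_modifiers(context, usage_counts):
    """Order the context's modifier set, most-used first."""
    mods = MODIFIER_SETS[context]
    return sorted(mods, key=lambda m: -usage_counts.get(m, 0))
```

The user interface would then render one icon per entry of the returned list, in order.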
  • Patent number: 10573043
    Abstract: A content display system can control which content is displayed, and how, based on viewing parameters, such as a map zoom level, and physical distance parameters, e.g., a geo-fence distance and an icon visibility distance. Different combinations of input (e.g., zoom level and physical distances) yield a myriad of pre-set content displays on the client device, thereby allowing a creator of an icon to finely tune how content is displayed or otherwise accessed.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: February 25, 2020
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Dhritiman Sagar, Daniel Vincent Grippi
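The parameter combination this abstract describes can be illustrated with a small sketch. The thresholds, state names, and decision order below are assumptions for illustration, not the claimed presets.

```python
# Illustrative sketch: a map zoom level plus two creator-set distances
# (geo-fence radius and icon visibility radius) select one of several
# preset display states for a piece of content.

def display_state(zoom_level, distance_m, geofence_m, icon_visible_m, min_zoom=10):
    """Return what the client should render for one piece of content."""
    if zoom_level < min_zoom or distance_m > icon_visible_m:
        return "hidden"          # zoomed too far out, or beyond visibility radius
    if distance_m > geofence_m:
        return "icon_only"       # icon shown, content locked until inside the fence
    return "icon_and_content"    # inside the geo-fence: fully accessible
```

A creator would tune `geofence_m` and `icon_visible_m` per icon to control where each state applies.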
  • Patent number: 10565795
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: February 18, 2020
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
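A minimal sketch of the context-input idea: the client's location is compared against nearby physical places that carry preconfigured AR elements, and the closest in-range place supplies its design scheme. The data layout and radius are hypothetical.

```python
# Hypothetical sketch: select preconfigured AR elements from the nearest
# physical place within range of the client device.

import math

def nearest_place(client_xy, places, radius=100.0):
    """Return the closest place within `radius`, or None."""
    best, best_d = None, radius
    for place in places:
        d = math.dist(client_xy, place["xy"])
        if d <= best_d:
            best, best_d = place, d
    return best

def ar_elements_for(client_xy, places):
    place = nearest_place(client_xy, places)
    return place["ar_elements"] if place else []
```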
  • Patent number: 10559107
    Abstract: A system and method for presentation of computer vision (e.g., augmented reality, virtual reality) using user data and a user code is disclosed. A client device can detect an image feature (e.g., scannable code) in one or more images. The image feature is determined to be linked to a user account. User data from the user account can then be used to generate one or more augmented reality display elements that can be anchored to the image feature in the one or more images.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: February 11, 2020
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Omer Cansizoglu, Kirk Ouimet, Nathan Kenneth Boyd
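The pipeline this abstract outlines, detect an image feature, resolve it to a user account, then anchor generated elements to the feature, can be sketched as follows. The account table, feature format, and overlay shape are all invented for illustration.

```python
# Hedged sketch: detect a scannable code among image features, look up the
# linked user account, and build an AR element anchored to the code's position.

ACCOUNTS = {"code:1234": {"display_name": "Ava", "avatar": "ava.png"}}

def detect_code(frame_features):
    """Stand-in detector: return the first scannable-code feature, if any."""
    for feat in frame_features:
        if feat["kind"] == "scannable_code":
            return feat
    return None

def ar_overlay(frame_features):
    feat = detect_code(frame_features)
    if not feat:
        return None
    account = ACCOUNTS.get(feat["code_id"])
    if not account:
        return None
    # Anchor the generated element to the detected feature's image position.
    return {"text": account["display_name"], "anchor": feat["position"]}
```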
  • Patent number: 10515480
    Abstract: In various example embodiments, systems and methods are presented for generation and manipulation of three-dimensional (3D) models. The systems and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: December 24, 2019
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans
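The capture flow claimed here, instruct the user to move, record a first and second change in the object's position, then generate the model, can be reduced to a schematic state machine. The distance threshold is an assumption, and the "model" is a stand-in dict, not real 3D reconstruction.

```python
# Schematic sketch of the capture flow: accumulate two sufficiently large
# changes in the tracked object's position before building a model.

def capture_session(positions, min_shift=5.0):
    """Walk through observed 2D positions; return a model after two shifts."""
    shifts = []
    last = positions[0]
    for p in positions[1:]:
        dx, dy = p[0] - last[0], p[1] - last[1]
        if (dx * dx + dy * dy) ** 0.5 >= min_shift:
            shifts.append((last, p))
            last = p
        if len(shifts) == 2:
            # Both required viewpoint changes observed: build the model.
            return {"status": "model_ready", "views": shifts}
    return {"status": "keep_moving", "views": shifts}
```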
  • Patent number: 10445936
    Abstract: Systems, devices, media, and methods are presented for generating responsive augmented reality elements in a graphical user interface. The systems and methods receive a video stream having one or more frames containing image data and audio data. The systems and methods identify a set of coordinates within a portion of the one or more frames and identify one or more audio characteristics within the audio data of the video stream. In response to the one or more audio characteristics, the systems and methods generate one or more graphical interface elements and detect a change in the audio data within the video stream. The systems and methods modify the one or more graphical interface elements in a second portion of the video stream in response to the change in the audio data.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: October 15, 2019
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Michael John Evans, Samuel Edward Hare, Andrew James McPhee, Robert Cornelius Murphy, Eitan Pilipski
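The audio-to-graphics mapping this abstract outlines can be sketched as two steps: a level above a threshold generates elements, and a later change in the audio modifies them. The threshold, element kind, and scaling rule are assumptions.

```python
# Illustrative sketch: generate graphical interface elements from an audio
# characteristic (amplitude), then modify them when the audio data changes.

def elements_for_audio(amplitude, threshold=0.2):
    """Generate graphical elements once the audio crosses a threshold."""
    if amplitude < threshold:
        return []
    return [{"kind": "pulse_ring", "scale": round(1.0 + amplitude, 2)}]

def update_elements(elements, new_amplitude):
    """Modify existing elements in response to a change in the audio data."""
    return [dict(e, scale=round(1.0 + new_amplitude, 2)) for e in elements]
```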
  • Publication number: 20190297461
    Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
    Type: Application
    Filed: March 7, 2019
    Publication date: September 26, 2019
    Inventors: Ebony James Charlton, Sumant Milind Hanumante, Zhou Ren, Dhritiman Sagar
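The venue-selection step can be sketched without the neural network itself: the server supplies candidate venues, a classifier scores the camera image against venue categories, and the best-scoring candidate wins. The classifier output here is a stand-in dict, not a CNN, and the confidence cutoff is an assumption.

```python
# Sketch of venue selection: combine server-supplied candidates with
# image-classifier category scores and pick the best match above a cutoff.

def pick_venue(candidates, category_scores, min_confidence=0.5):
    """Choose the candidate venue whose category the image most resembles."""
    best, best_score = None, min_confidence
    for venue in candidates:
        score = category_scores.get(venue["category"], 0.0)
        if score >= best_score:
            best, best_score = venue, score
    return best
```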
  • Publication number: 20190188920
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering three-dimensional virtual objects within real-world environments. Virtual rendering of a three-dimensional virtual object can be altered appropriately as a user moves around the object in the real-world through utilization of a redundant tracking system comprising multiple tracking sub-systems. Virtual object rendering can be with respect to a reference surface in a real-world three-dimensional space depicted in a camera view of a mobile computing device.
    Type: Application
    Filed: February 22, 2019
    Publication date: June 20, 2019
    Inventors: Andrew James McPhee, Ebony James Charlton, Samuel Edward Hare, Michael John Evans, Jokubas Dargis, Ricardo Sanchez-Saez
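The redundancy idea in this abstract, multiple tracking sub-systems backing each other up, can be sketched as a priority fallback: trackers are consulted in order and the first valid pose wins, so rendering degrades gracefully instead of losing the object. The sub-system arrangement is illustrative only.

```python
# Minimal sketch of redundant tracking: try each tracking sub-system in
# priority order and use the first one that reports a valid pose.

def fused_pose(subsystems):
    """Return the first available pose from an ordered list of trackers."""
    for tracker in subsystems:
        pose = tracker()
        if pose is not None:
            return pose
    return None  # all trackers lost: caller may freeze the last known pose
```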
  • Publication number: 20190130616
    Abstract: A content display system can control which content is displayed, and how, based on viewing parameters, such as a map zoom level, and physical distance parameters, e.g., a geo-fence distance and an icon visibility distance. Different combinations of input (e.g., zoom level and physical distances) yield a myriad of pre-set content displays on the client device, thereby allowing a creator of an icon to finely tune how content is displayed or otherwise accessed.
    Type: Application
    Filed: October 30, 2017
    Publication date: May 2, 2019
    Inventors: Ebony James Charlton, Dhritiman Sagar, Daniel Vincent Grippi
  • Patent number: 10264422
    Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: April 16, 2019
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Sumant Hanumante, Zhou Ren, Dhritiman Sagar
  • Patent number: 10242477
    Abstract: A system and method for presentation of computer vision (e.g., augmented reality, virtual reality) using user data and a user code is disclosed. A client device can detect an image feature (e.g., scannable code) in one or more images. The image feature is determined to be linked to a user account. User data from the user account can then be used to generate one or more augmented reality display elements that can be anchored to the image feature in the one or more images.
    Type: Grant
    Filed: August 2, 2017
    Date of Patent: March 26, 2019
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Omer Cansizoglu, Kirk Ouimet, Nathan Kenneth Boyd
  • Patent number: 10242503
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering three-dimensional virtual objects within real world environments. Virtual rendering of a three-dimensional virtual object can be altered appropriately as a user moves around the object in the real world, and the three-dimensional virtual object can exist similarly for multiple users. Virtual object rendering can be with respect to a reference surface in a real world environment, which reference surface can be selected by a user as part of the virtual object rendering process.
    Type: Grant
    Filed: January 5, 2018
    Date of Patent: March 26, 2019
    Assignee: Snap Inc.
    Inventors: Andrew James McPhee, Ebony James Charlton, Samuel Edward Hare, Michael John Evans, Jokubas Dargis, Ricardo Sanchez-Saez
  • Publication number: 20190069147
    Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
    Type: Application
    Filed: April 30, 2018
    Publication date: February 28, 2019
    Inventors: Ebony James Charlton, Sumant Milind Hanumante, Zhou Ren, Dhritiman Sagar
  • Patent number: 10198859
    Abstract: In various example embodiments, systems and methods are presented for generation and manipulation of three-dimensional (3D) models. The systems and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: February 5, 2019
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans
  • Publication number: 20180253901
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Application
    Filed: July 19, 2017
    Publication date: September 6, 2018
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
  • Publication number: 20180210628
    Abstract: Among other things, embodiments of the present disclosure improve the functionality of computer imaging software and systems by facilitating the manipulation of virtual content displayed in conjunction with images of real-world objects and environments. Embodiments of the present disclosure allow virtual objects to be moved relative to a real-world environment and manipulated in other ways.
    Type: Application
    Filed: November 6, 2017
    Publication date: July 26, 2018
    Inventors: Andrew James McPhee, Trevor Stephenson, Pedram Javidpour, Ebony James Charlton
  • Patent number: 9980100
    Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: May 22, 2018
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Sumant Hanumante, Zhou Ren, Dhritiman Sagar
  • Publication number: 20180131878
    Abstract: Systems, devices, media, and methods are presented for presentation of modified objects within a video stream. The systems and methods receive a selection at a user interface of a computing device and determine a modifier context based at least in part on the selection and a position within the user interface. The systems and methods identify at least one set of modifiers based on the modifier context. The systems and methods determine an order for the set of modifiers based on the modifier context and cause presentation of modifier icons for the set of modifiers within the user interface.
    Type: Application
    Filed: November 7, 2017
    Publication date: May 10, 2018
    Inventors: Ebony James Charlton, Michael John Evans, Samuel Edward Hare, Andrew James McPhee, Robert Cornelius Murphy, Eitan Pilipski
  • Publication number: 20180075651
    Abstract: In various example embodiments, systems and methods are presented for generation and manipulation of three-dimensional (3D) models. The systems and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Application
    Filed: November 17, 2017
    Publication date: March 15, 2018
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans
  • Patent number: 9852543
    Abstract: In various example embodiments, systems and methods are presented for generation and manipulation of three-dimensional (3D) models. The systems and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: December 26, 2017
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans