Patents by Inventor Jesse Chand
Jesse Chand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240104819
  Abstract: A computer system optionally modifies a representation of a participant based on activity associated with the participant. A computer system optionally displays a self-view representation of an avatar of a user of the computer system. A computer system optionally updates a view of an avatar in a real-time communication session. A computer system optionally displays a representation of a participant in a real-time communication session.
  Type: Application
  Filed: September 5, 2023
  Publication date: March 28, 2024
  Inventors: Jesse CHAND, Shih-Sang CHIU
- Publication number: 20240104859
  Abstract: The present disclosure generally relates to managing live communication sessions. A computer system optionally displays an option to invite the respective user to join the ongoing communication session. A computer system optionally displays one or more options to modify an appearance of an avatar representing the user of the computer system. A computer system optionally transitions a communication session from a spatial communication session to a non-spatial communication session. A computer system optionally displays information about a participant in a communication session.
  Type: Application
  Filed: September 12, 2023
  Publication date: March 28, 2024
  Inventors: Jesse CHAND, Kristi E. BAUERLY, Shih-Sang CHIU, Jonathan R. DASCOLA, Amy E. DEDONATO, Karen EL ASMAR, Wesley M. HOLDER, Stephen O. LEMAY, Lorena S. PAZMINO, Jason D. RICKWALD, Giancarlo YERKES
- Patent number: 11934569
  Abstract: A computer system displays a first and second user interface object in a three-dimensional environment. The first and second user interface objects have a first and second spatial relationship to a first and second anchor position corresponding to a location of a user's hand in a physical environment, respectively. While displaying the first and second user interface objects in the three-dimensional environment, the computer system detects movement of the user's hand in the physical environment, corresponding to a translational movement and a rotational movement of the user's hand relative to a viewpoint, and in response, translates the first and second user interface objects relative to the viewpoint in accordance with the translational movement of the user's hand, and rotates the first user interface object relative to the viewpoint in accordance with the rotational movement of the user's hand without rotating the second user interface object.
  Type: Grant
  Filed: September 19, 2022
  Date of Patent: March 19, 2024
  Assignee: APPLE INC.
  Inventors: Israel Pastrana Vicente, Jonathan R. Dascola, Christopher D. McKenzie, Jesse Chand, Stephen O. Lemay, Kristi E. S. Bauerly, Zoey C. Taylor
- Patent number: 11934636
  Abstract: Disclosed are systems, methods, and computer-readable storage media to provide voice driven dynamic menus. One aspect disclosed is a method including receiving, by an electronic device, video data and audio data, displaying, by the electronic device, a video window, determining, by the electronic device, whether the audio data includes a voice signal, displaying, by the electronic device, a first menu in the video window in response to the audio data including a voice signal, displaying, by the electronic device, a second menu in the video window in response to a voice signal being absent from the audio data, receiving, by the electronic device, input from the displayed menu, and writing, by the electronic device, to an output device based on the received input.
  Type: Grant
  Filed: March 27, 2023
  Date of Patent: March 19, 2024
  Assignee: SNAP INC.
  Inventor: Jesse Chand
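The claimed method above reduces to a simple branch on voice activity: detect whether the incoming audio contains speech, then overlay one of two menus on the video window. A minimal sketch follows; the energy-threshold voice detector and the menu contents are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of the voice-driven dynamic menu flow from the abstract.
# The energy-based voice check and the menu labels are assumptions for
# illustration; the patent does not prescribe a specific implementation.

def contains_voice(audio_samples, energy_threshold=0.01):
    """Crude voice-activity check: mean squared amplitude vs. a threshold."""
    if not audio_samples:
        return False
    energy = sum(s * s for s in audio_samples) / len(audio_samples)
    return energy > energy_threshold

def select_menu(audio_samples):
    """Return the menu to display in the video window."""
    if contains_voice(audio_samples):
        # First menu: shown when a voice signal is present.
        return ["Add caption", "Voice filter", "Mute"]
    # Second menu: shown when no voice signal is detected.
    return ["Add music", "Add sticker", "Trim"]
```

In a full implementation the detector would run continuously over the audio stream and the menu would update as the voice signal appears or disappears.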
- Patent number: 11930055
  Abstract: The present invention relates to a method for generating and causing display of a communication interface that facilitates the sharing of emotions through the creation of 3D avatars, and more particularly with the creation of such interfaces for displaying 3D avatars for use with mobile devices, cloud-based systems, and the like.
  Type: Grant
  Filed: March 14, 2023
  Date of Patent: March 12, 2024
  Assignee: Snap Inc.
  Inventors: Jesse Chand, Jeremy Voss
- Publication number: 20230409119
  Abstract: In various example embodiments, a system and method for generating a response that depicts haptic characteristics are presented. Haptic data is received from a client device and the haptic data indicates an interaction with a sensor included in the client device. Haptic characteristics are determined based on the haptic data. At least one image that depicts the determined haptic characteristics is generated. And the at least one image is caused to be displayed on the client device.
  Type: Application
  Filed: September 6, 2023
  Publication date: December 21, 2023
  Inventors: Jesse Chand, Krish Jayaram
- Publication number: 20230376165
  Abstract: Embodiments of the present disclosure relate generally to mobile computing technology and, more particularly, but not by way of limitation, to systems for generating and presenting a graphical user interface (GUI) that includes a presentation of an animated icon (e.g., a digital pet) on a display of a client device.
  Type: Application
  Filed: July 27, 2023
  Publication date: November 23, 2023
  Inventors: Jeremy Voss, Jesse Chand, Dylan Shane Eirinberg, William Wu, Chiayi Lin, Anna Liberman
- Publication number: 20230350539
  Abstract: In some embodiments, a computer system modifies the visual appearances of user interface objects based on their spatial arrangement relative to the viewpoint of the user in a three-dimensional environment. In some embodiments, a computer system displays, via a display generation component, a representation of a message in a three-dimensional environment at a first distance from a viewpoint of the user, and then changes the distance of the representation of the message to be a second distance from the viewpoint of the user. In some embodiments, a computer system is configured to transition virtual objects from a three-dimensional appearance to a two-dimensional appearance and/or from a two-dimensional appearance to a three-dimensional appearance.
  Type: Application
  Filed: April 21, 2023
  Publication date: November 2, 2023
  Inventors: James J. OWEN, Christine E. WELCH, Jesse CHAND, Lucie BELANGER, Wendy F. EDUARTE, Dorian D. DARGAN, William A. SORRENTINO, III
- Publication number: 20230333646
  Abstract: In some embodiments, an electronic device navigates between user interfaces based at least on detecting a gaze of the user. In some embodiments, an electronic device enhances interactions with control elements of user interfaces. In some embodiments, an electronic device scrolls representations of categories and subcategories in a coordinated manner. In some embodiments, an electronic device navigates back from user interfaces having different levels of immersion in different ways.
  Type: Application
  Filed: June 16, 2023
  Publication date: October 19, 2023
  Inventors: Israel PASTRANA VICENTE, Jay MOON, Jesse CHAND, Jonathan R. DASCOLA, William A. SORRENTINO, III, Stephen O. LEMAY, Dorian D. DARGAN
- Patent number: 11789534
  Abstract: In various example embodiments, a system and method for generating a response that depicts haptic characteristics are presented. Haptic data is received from a client device and the haptic data indicates an interaction with a sensor included in the client device. Haptic characteristics are determined based on the haptic data. At least one image that depicts the determined haptic characteristics is generated. And the at least one image is caused to be displayed on the client device.
  Type: Grant
  Filed: January 27, 2021
  Date of Patent: October 17, 2023
  Assignee: Snap Inc.
  Inventors: Jesse Chand, Krish Jayaram
- Publication number: 20230315247
  Abstract: A computer system displays a first user interface object at a first position in the three-dimensional environment that has a first spatial arrangement relative to a respective portion of a user. While displaying the first user interface object, the computer system detects an input that corresponds to movement of a viewpoint of the user, and in response, maintains display of the first user interface object at a respective position in the three-dimensional environment having the first spatial arrangement relative to the respective portion of the user. While displaying the first user interface object, the computer system detects a first gaze input directed to the first user interface object, and in response, in accordance with a determination that the first gaze input satisfies attention criteria with respect to the first user interface object, displays a plurality of affordances for accessing system functions of the computer system.
  Type: Application
  Filed: February 15, 2023
  Publication date: October 5, 2023
  Inventors: Israel Pastrana Vicente, Jonathan R. Dascola, Stephen O. Lemay, Christopher D. McKenzie, Jay Moon, Jesse Chand, Dorian D. Dargan, Amy E. DeDonato, Matan Stauber, Lorena S. Pazmino, Evgenii Krivoruchko
- Patent number: 11775134
  Abstract: Embodiments of the present disclosure relate generally to mobile computing technology and, more particularly, but not by way of limitation, to systems for generating and presenting a graphical user interface (GUI) that includes a presentation of an animated icon (e.g., a digital pet) on a display of a client device.
  Type: Grant
  Filed: February 11, 2021
  Date of Patent: October 3, 2023
  Assignee: SNAP INC.
  Inventors: Jeremy Voss, Jesse Chand, Dylan Shane Eirinberg, William Wu, Chiayi Lin, Anna Liberman
- Patent number: 11720171
  Abstract: In some embodiments, an electronic device navigates between user interfaces based at least on detecting a gaze of the user. In some embodiments, an electronic device enhances interactions with control elements of user interfaces. In some embodiments, an electronic device scrolls representations of categories and subcategories in a coordinated manner. In some embodiments, an electronic device navigates back from user interfaces having different levels of immersion in different ways.
  Type: Grant
  Filed: September 20, 2021
  Date of Patent: August 8, 2023
  Assignee: Apple Inc.
  Inventors: Israel Pastrana Vicente, Jay Moon, Jesse Chand, Jonathan R. Dascola, Jeffrey M. Faulkner, Pol Pla I Conesa, Dorian D. Dargan
- Publication number: 20230229290
  Abstract: Disclosed are systems, methods, and computer-readable storage media to provide voice driven dynamic menus. One aspect disclosed is a method including receiving, by an electronic device, video data and audio data, displaying, by the electronic device, a video window, determining, by the electronic device, whether the audio data includes a voice signal, displaying, by the electronic device, a first menu in the video window in response to the audio data including a voice signal, displaying, by the electronic device, a second menu in the video window in response to a voice signal being absent from the audio data, receiving, by the electronic device, input from the displayed menu, and writing, by the electronic device, to an output device based on the received input.
  Type: Application
  Filed: March 27, 2023
  Publication date: July 20, 2023
  Inventor: Jesse Chand
- Patent number: 11706267
  Abstract: The present invention relates to a method for generating and causing display of a communication interface that facilitates the sharing of emotions through the creation of 3D avatars, and more particularly with the creation of such interfaces for displaying 3D avatars for use with mobile devices, cloud-based systems, and the like.
  Type: Grant
  Filed: May 24, 2022
  Date of Patent: July 18, 2023
  Assignee: Snap Inc.
  Inventors: Jesse Chand, Jeremy Voss
- Publication number: 20230216901
  Abstract: The present invention relates to a method for generating and causing display of a communication interface that facilitates the sharing of emotions through the creation of 3D avatars, and more particularly with the creation of such interfaces for displaying 3D avatars for use with mobile devices, cloud-based systems, and the like.
  Type: Application
  Filed: March 14, 2023
  Publication date: July 6, 2023
  Inventors: Jesse Chand, Jeremy Voss
- Patent number: 11640227
  Abstract: Disclosed are systems, methods, and computer-readable storage media to provide voice driven dynamic menus. One aspect disclosed is a method including receiving, by an electronic device, video data and audio data, displaying, by the electronic device, a video window, determining, by the electronic device, whether the audio data includes a voice signal, displaying, by the electronic device, a first menu in the video window in response to the audio data including a voice signal, displaying, by the electronic device, a second menu in the video window in response to a voice signal being absent from the audio data, receiving, by the electronic device, input from the displayed menu, and writing, by the electronic device, to an output device based on the received input.
  Type: Grant
  Filed: October 20, 2020
  Date of Patent: May 2, 2023
  Assignee: SNAP INC.
  Inventor: Jesse Chand
- Patent number: 11632344
  Abstract: Disclosed are media attachment systems to enable a user to embed a first media item with a link to a second media item, and distribute the first media item in a message to one or more recipient client devices. For example, the first media item may include a picture or video captured by a user at a client device. The user may generate a message that includes the first media item. In response, a media attachment system may cause display of an interface at the client device that includes an option to attach an address to a second media item to the message. For example, the second media item may include a web page, social media post, picture, or video identified by an address such as a Uniform Resource Locator (URL).
  Type: Grant
  Filed: October 19, 2021
  Date of Patent: April 18, 2023
  Assignee: SNAP INC.
  Inventors: Newar Husam Al Majid, Jesse Chand
- Publication number: 20230106627
  Abstract: A computer system displays an alert at a first position relative to the three-dimensional environment, the alert at least partially overlapping a first object in a first view. The first position has a respective spatial relationship to the user. The computer system detects movement of the user from a first viewpoint to a second viewpoint. At the second viewpoint, in accordance with a determination that the alert is a first type of alert, the computer system displays the alert at a second position in the three-dimensional environment, the second position having the respective spatial relationship to the user; and in accordance with a determination that the alert is a second type of alert, the computer system displays the three-dimensional environment from the second viewpoint without displaying the alert with the respective spatial relationship to the user.
  Type: Application
  Filed: September 20, 2022
  Publication date: April 6, 2023
  Inventors: Jonathan R. Dascola, Lorena S. Pazmino, Israel Pastrana Vicente, Matan Stauber, Jesse Chand, William A. Sorrentino, III, Richard D. Lyons, Stephen O. Lemay
- Publication number: 20230100610
  Abstract: A computer system displays a first and second user interface object in a three-dimensional environment. The first and second user interface objects have a first and second spatial relationship to a first and second anchor position corresponding to a location of a user's hand in a physical environment, respectively. While displaying the first and second user interface objects in the three-dimensional environment, the computer system detects movement of the user's hand in the physical environment, corresponding to a translational movement and a rotational movement of the user's hand relative to a viewpoint, and in response, translates the first and second user interface objects relative to the viewpoint in accordance with the translational movement of the user's hand, and rotates the first user interface object relative to the viewpoint in accordance with the rotational movement of the user's hand without rotating the second user interface object.
  Type: Application
  Filed: September 19, 2022
  Publication date: March 30, 2023
  Inventors: Israel Pastrana Vicente, Jonathan R. Dascola, Christopher D. McKenzie, Jesse Chand, Stephen O. Lemay, Kristi E.S. Bauerly, Dorian D. Dargan