Patents by Inventor Saransh Solanki

Saransh Solanki has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240193909
    Abstract: According to examples, an apparatus may include a memory on which are stored machine-readable instructions that, when executed by a processor, cause the processor to identify an object of interest in at least one first image of an environment captured by a wearable eyewear device during a first time period and to identify the object of interest in at least one second image of the environment captured by the wearable eyewear device during a second time period. The processor may also determine a pattern associated with the object of interest based on the at least one first image and the at least one second image of the identified object of interest. In one regard, the processor may determine patterns associated with the object of interest that may be hidden from or otherwise undetected by a user of the wearable eyewear device.
    Type: Application
    Filed: December 11, 2023
    Publication date: June 13, 2024
    Applicant: Meta Platforms Technologies, LLC
    Inventors: Saransh SOLANKI, Gregg WYGONIK, Yu-Jen LIN, Nicci YIN, James TICHENOR
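The abstract above describes comparing images of the same object across two time periods to surface patterns a wearer might miss. As a loose, hypothetical illustration (the `Detection` record and `infer_pattern` function are invented for this sketch, not taken from the patent), the comparison step might look like:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical record of an object of interest found in one image."""
    object_id: str
    position: tuple   # (x, y) in image coordinates
    timestamp: float

def infer_pattern(first_period, second_period):
    """Compare detections of the same objects across two time periods and
    report a simple per-object pattern (appeared / moved / stationary)."""
    first_by_id = {d.object_id: d for d in first_period}
    patterns = {}
    for det in second_period:
        prev = first_by_id.get(det.object_id)
        if prev is None:
            patterns[det.object_id] = "appeared"
        elif prev.position != det.position:
            patterns[det.object_id] = "moved"
        else:
            patterns[det.object_id] = "stationary"
    return patterns
```

A real system would run this over detector output from the eyewear's camera; the sketch only shows the period-to-period comparison the abstract alludes to.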
  • Publication number: 20240096032
    Abstract: A technology to have better experiences, e.g., beyond-arm's-length experiences, with fine-tuned interactions in an extended-reality environment can include methods, extended-reality-compatible devices, and/or systems configured to generate, e.g., via an extended-reality-compatible device, a copy of a representation of an object in an extended-reality environment; to initiate control of the copy of the representation of the object according to a schema; and to control the copy of the representation of the object at a first location in the extended-reality environment, e.g., the first location including a location beyond-arm's-length distance from a second location in the extended-reality environment, the second location being a location of an avatar or representation of a user and/or a controller of the extended-reality-compatible device.
    Type: Application
    Filed: October 11, 2022
    Publication date: March 21, 2024
    Inventors: Sean McCracken, Saransh Solanki
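This abstract describes duplicating an object's representation and controlling the duplicate at a location beyond arm's length, gated by a control schema. A minimal sketch, with entirely hypothetical names (`Representation`, `spawn_remote_copy`, and the `allow_remote_control` schema key are illustrative, not from the patent):

```python
import copy as copy_module

class Representation:
    """A hypothetical object representation in an XR scene graph."""
    def __init__(self, name, position):
        self.name = name
        self.position = position  # (x, y, z) in scene coordinates

def spawn_remote_copy(original, target_position, schema):
    """Duplicate a representation and place the copy at a location that may
    be beyond arm's length, if the control schema permits remote placement."""
    if not schema.get("allow_remote_control", False):
        raise PermissionError("schema forbids remote control of copies")
    duplicate = copy_module.deepcopy(original)
    duplicate.position = target_position
    return duplicate
```

The deep copy leaves the original representation in place, matching the abstract's idea that the user interacts with a controllable copy rather than the distant original.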
  • Publication number: 20240096033
    Abstract: A technology for creating, replicating and/or controlling avatars in an extended-reality (ER) environment can include methods, extended-reality-compatible devices, and/or systems configured to generate, e.g., via an ER-compatible device, a copy of a representation of a user, e.g., a primary avatar, and/or an object in an ER environment; to initiate a recording of the graphical representation of the user or object according to a schema; to produce a copy of the recording of the graphical representation of the user as a new graphical representation of the user, e.g., a new avatar, in the ER environment; and to control the new graphical representation of the user at a first location in the ER environment. In examples, the technology enables moving the graphical representation of the user around the ER environment while the new graphical representation of the user performs motion and/or produces sound from the copy of the recording.
    Type: Application
    Filed: October 11, 2022
    Publication date: March 21, 2024
    Inventors: Sean McCracken, Saransh Solanki
  • Publication number: 20220406021
    Abstract: Aspects of the present disclosure are directed to a mapping communication system that creates a 3D model of a real-world space and places a virtual camera in the 3D model. As the mapping communication system detects changes in the space, it can provide scan updates to keep the 3D model close to a live representation of the space. Further aspects of the present disclosure are directed to traveling a user to an artificial reality (XR) environment using an intent configured XR link. Yet further aspects of the present disclosure are directed to improving audio latency by performing audio processing off-headset for artificial reality (XR) experiences.
    Type: Application
    Filed: August 23, 2022
    Publication date: December 22, 2022
    Applicant: Meta Platforms Technologies, LLC
    Inventors: Michael James LEBEAU, Björn WANBO, Gregg WYGONIK, Saransh SOLANKI, Sarang BORUDE, Jonathan KANTROWITZ, Alexandra Paige RUBIN, Wenjin GU, Austen McRAE, Anis Ahmed SANKIGIRI KHADER
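The first aspect of this abstract is a 3D model of a real-world space that stays close to live via incremental scan updates, with a virtual camera placed inside it. A minimal, hypothetical sketch (the cell-grid model and method names are illustrative assumptions, not the patented design):

```python
class SpaceModel:
    """Hypothetical sketch of a live 3D model: a mapping from grid cells to
    the most recently scanned state of that part of the real-world space."""
    def __init__(self):
        self.cells = {}
        # Virtual camera pose: (position, rotation), each an (x, y, z) triple.
        self.camera_pose = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))

    def apply_scan_update(self, update):
        """Merge a partial scan so the model tracks changes in the space."""
        self.cells.update(update)

    def place_virtual_camera(self, position, rotation):
        """Position the virtual camera within the 3D model."""
        self.camera_pose = (position, rotation)
```

Each scan update only touches the cells that changed, which is what lets the model approximate a live representation without rescanning the whole space.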
  • Patent number: 11456887
    Abstract: In one embodiment, a computing system may receive input data from several artificial-reality systems used by participants to participate in a virtual meeting. The input data may include audio data and sensor data corresponding to the participants while they are in the virtual meeting. The computing system may determine meeting characteristics based on the input data. The meeting characteristics may include individual behavioral characteristics for each participant and communal characteristics associated with the participants as a group. The computing system may determine an action to improve a meeting characteristic according to predetermined criteria and generate an instruction for performing the action. The computing system may send the instruction to one or more of the artificial-reality systems used by the participants. The instruction may be configured to cause an application running on each of the artificial-reality systems that receives it to perform the action.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: September 27, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Sean McCracken, Saransh Solanki, Jessica Kitchens
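This abstract describes deriving individual and communal meeting characteristics from participant data, then choosing an action against predetermined criteria. As a loose illustration only, assuming speaking time as a stand-in for the audio/sensor input and an invented balance criterion:

```python
def meeting_characteristics(speaking_seconds):
    """From per-participant speaking time, derive individual shares and a
    communal balance score (1.0 means perfectly even participation)."""
    total = sum(speaking_seconds.values())
    shares = {p: t / total for p, t in speaking_seconds.items()}
    balance = min(shares.values()) / max(shares.values())
    return shares, balance

def choose_action(shares, balance, threshold=0.5):
    """If participation is lopsided, generate an instruction to prompt the
    quietest participant -- one possible action under a simple criterion."""
    if balance >= threshold:
        return None
    quietest = min(shares, key=shares.get)
    return {"action": "prompt_participant", "target": quietest}
```

Here `shares` plays the role of the individual behavioral characteristics and `balance` a communal characteristic; the returned dict stands in for the instruction sent to the artificial-reality systems.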
  • Patent number: 11430186
    Abstract: Techniques are described herein that enable a user to provide speech inputs to control an extended reality environment, where relationships between terms in a speech input are represented in three dimensions (3D) in the extended reality environment. For example, a language processing component determines a semantic meaning of the speech input, and identifies terms in the speech input based on the semantic meaning. A 3D relationship component generates a 3D representation of a relationship between the terms and provides the 3D representation to a computing device for display. A 3D representation may include a modification to an object in an extended reality environment, or a 3D representation of concepts and sub-concepts in a mind map in an extended reality environment, for example. The 3D relationship component may generate a searchable timeline using the terms provided in the speech input and a recording of an extended reality session.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: August 30, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Saransh Solanki, Ken Brian Koh, Sean McCracken
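This abstract describes extracting terms from speech, relating them as concepts and sub-concepts in a mind map, and building a searchable timeline of the session. A toy sketch, with a deliberately naive term-extraction rule (first word as concept, rest as sub-concepts) that is an assumption of this example, not the patented language processing:

```python
def build_mind_map(utterances):
    """Hypothetical sketch: treat the first term of each (timestamp, text)
    utterance as a concept and later terms as its sub-concepts, keeping a
    timestamped log so the session is searchable afterwards."""
    mind_map = {}
    timeline = []
    for timestamp, text in utterances:
        terms = text.lower().split()
        concept, subs = terms[0], terms[1:]
        mind_map.setdefault(concept, []).extend(subs)
        timeline.append((timestamp, terms))
    return mind_map, timeline

def search_timeline(timeline, term):
    """Return the timestamps at which a term was spoken."""
    return [t for t, terms in timeline if term in terms]
```

A real system would use semantic analysis rather than word order to pick out concepts; the sketch only shows the mind-map and timeline data shapes the abstract mentions.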
  • Publication number: 20220215630
    Abstract: Techniques are described herein that enable a user to provide speech inputs to control an extended reality environment, where relationships between terms in a speech input are represented in three dimensions (3D) in the extended reality environment. For example, a language processing component determines a semantic meaning of the speech input, and identifies terms in the speech input based on the semantic meaning. A 3D relationship component generates a 3D representation of a relationship between the terms and provides the 3D representation to a computing device for display. A 3D representation may include a modification to an object in an extended reality environment, or a 3D representation of concepts and sub-concepts in a mind map in an extended reality environment, for example. The 3D relationship component may generate a searchable timeline using the terms provided in the speech input and a recording of an extended reality session.
    Type: Application
    Filed: January 5, 2021
    Publication date: July 7, 2022
    Inventors: Saransh Solanki, Ken Brian Koh, Sean McCracken