Patents by Inventor Chiranjib Choudhuri

Chiranjib Choudhuri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104180
    Abstract: Systems and techniques are provided for performing user authentication. For example, a process can include obtaining a plurality of images associated with a face and a facial expression of the user, wherein each respective image of the plurality of images includes a different portion of the face. An encoder neural network can be used to generate one or more predicted three-dimensional (3D) facial modeling parameters, wherein the encoder neural network generates the one or more predicted 3D facial modeling parameters based on the plurality of images. A reference 3D facial model associated with the face and the facial expression can be obtained. An error can be determined between the one or more predicted 3D facial modeling parameters and the reference 3D facial model, and the user can be authenticated based on the error being less than a pre-determined authentication threshold.
    Type: Application
    Filed: September 16, 2022
    Publication date: March 28, 2024
    Inventors: Anupama S, Chiranjib CHOUDHURI, Avani RAO, Ajit Deepak GUPTE
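The 20240104180 abstract above authenticates a user only when the error between encoder-predicted 3D facial-modeling parameters and a reference 3D facial model falls below a pre-determined threshold. The sketch below illustrates just that thresholded decision, assuming a mean-squared-error metric, a flat parameter vector, and an arbitrary threshold; the encoder network is out of scope, and all names and values are illustrative rather than the patented implementation.

```python
import numpy as np

def authenticate(predicted_params: np.ndarray,
                 reference_params: np.ndarray,
                 threshold: float = 0.05) -> bool:
    """Authenticate if the mean-squared error between predicted and reference
    3D facial-model parameters is below the threshold (illustrative metric)."""
    error = float(np.mean((predicted_params - reference_params) ** 2))
    return error < threshold

# Example: parameters predicted by some encoder vs. an enrolled reference.
rng = np.random.default_rng(0)
reference = rng.normal(size=128)                           # enrolled model parameters
predicted = reference + rng.normal(scale=0.01, size=128)   # close match
print(authenticate(predicted, reference))                  # True for a close match
```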
  • Publication number: 20240096049
    Abstract: Disclosed are systems, apparatuses, processes, and computer-readable media to capture images with subjects at different depths. A method of processing image data includes obtaining, at an imaging device, a first image of an environment from an image sensor of the imaging device; determining a region of interest of the first image based on features depicted in the first image, wherein the features are associated with the environment; determining a representative luma value associated with the first image based on image data in the region of interest of the first image; determining one or more exposure control parameters based on the representative luma value; and obtaining, at the imaging device, a second image captured based on the one or more exposure control parameters.
    Type: Application
    Filed: September 19, 2022
    Publication date: March 21, 2024
    Inventors: Vinod Kumar SAINI, Pushkar GORUR SHESHAGIRI, Srujan Babu NANDIPATI, Chiranjib CHOUDHURI, Ajit Deepak GUPTE
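Publication 20240096049 above computes a representative luma value over a feature-based region of interest and derives exposure control parameters from it before capturing a second image. A minimal sketch of that loop follows, assuming a single exposure-time parameter, a mid-gray target luma, and a rectangular ROI; the feature detection and the real exposure model are not reproduced here.

```python
import numpy as np

def representative_luma(image: np.ndarray, roi: tuple) -> float:
    """Mean luma over the region of interest, given as (x, y, width, height)."""
    x, y, w, h = roi
    return float(image[y:y + h, x:x + w].mean())

def exposure_from_luma(luma: float, target_luma: float = 118.0,
                       current_exposure_ms: float = 10.0) -> float:
    """Scale the exposure time so the ROI luma moves toward the target value."""
    return current_exposure_ms * target_luma / max(luma, 1e-6)

first_image = np.full((480, 640), 40.0)      # dark synthetic first image
roi = (200, 150, 240, 180)                   # ROI around the detected subject
luma = representative_luma(first_image, roi)
print(exposure_from_luma(luma))              # longer exposure (29.5 ms) for a dark ROI
```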
  • Publication number: 20240062467
    Abstract: Systems and techniques are described for establishing one or more virtual sessions between users. For instance, a first device can transmit, to a second device, a call establishment request for a virtual representation call for a virtual session and can receive, from the second device, a call acceptance indicating acceptance of the call establishment request. The first device can transmit, to the second device, first mesh information for a first virtual representation of a first user of the first device and first mesh animation parameters for the first virtual representation. The first device can receive, from the second device, second mesh information for a second virtual representation of a second user of the second device and second mesh animation parameters for the second virtual representation. The first device can generate, based on the second mesh information and the second mesh animation parameters, the second virtual representation of the second user.
    Type: Application
    Filed: July 3, 2023
    Publication date: February 22, 2024
    Inventors: Michel Adib SARKIS, Chiranjib CHOUDHURI, Ke-Li CHENG, Ajit Deepak GUPTE, Ning BI, Cristina DOBRIN, Ramesh CHANDRASEKHAR, Imed BOUAZIZI, Liangping MA, Thomas STOCKHAMMER, Nikolai Konrad LEUNG
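Publication 20240062467 above describes a call flow: an establishment request and acceptance, followed by each device transmitting mesh information plus mesh animation parameters and generating the peer's virtual representation from what it receives. The sketch below models only that message exchange, with placeholder classes and byte payloads standing in for the real mesh and animation data.

```python
from dataclasses import dataclass

@dataclass
class MeshPayload:
    mesh_info: bytes          # geometry of a user's virtual representation
    animation_params: bytes   # parameters that animate that representation

class Endpoint:
    def accept_call(self) -> bool:
        # Second device accepts the call establishment request.
        return True

    def generate_representation(self, payload: MeshPayload) -> str:
        # Build the peer's virtual representation from the received mesh
        # information and animation parameters (rendering itself is omitted).
        return f"representation from {len(payload.mesh_info)}-byte mesh"

# First device requests a virtual representation call; second device accepts.
first, second = Endpoint(), Endpoint()
if second.accept_call():
    # Each side transmits its own mesh information and animation parameters.
    first_payload = MeshPayload(b"mesh-of-first-user", b"anim-of-first-user")
    second_payload = MeshPayload(b"mesh-of-second-user", b"anim-of-second-user")
    # First device generates the second user's representation from what it received.
    print(first.generate_representation(second_payload))
```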
  • Publication number: 20240029354
    Abstract: Systems and techniques are provided for generating a texture for a three-dimensional (3D) facial model. For example, a process can include obtaining a first frame, the first frame including a first portion of a face. In some aspects, the process can include generating a 3D facial model based on the first frame and generating a first facial feature corresponding to the first portion of the face. In some examples, the process includes obtaining a second frame, the second frame including a second portion of the face. In some cases, the second portion of the face at least partially overlaps the first portion of the face, and the process can include generating a second facial feature corresponding to the second portion of the face. In some examples, the process includes combining the first facial feature with the second facial feature to generate an enhanced facial feature, wherein the combining is performed to enhance an appearance of select areas of the enhanced facial feature.
    Type: Application
    Filed: July 19, 2022
    Publication date: January 25, 2024
    Inventors: Ke-Li CHENG, Anupama S, Kuang-Man HUANG, Chieh-Ming KUO, Avani RAO, Chiranjib CHOUDHURI, Michel Adib SARKIS, Ning BI, Ajit Deepak GUPTE
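Publication 20240029354 above combines a facial feature from one frame with an overlapping facial feature from another frame to produce an enhanced feature. A toy sketch follows, assuming the two features are already aligned texture maps and that a fixed weighted blend stands in for whatever combination the actual method performs.

```python
import numpy as np

def combine_features(first_feature: np.ndarray, second_feature: np.ndarray,
                     weight: float = 0.5) -> np.ndarray:
    """Blend two aligned facial-feature (texture) maps covering overlapping
    portions of the face into a single enhanced feature map."""
    return weight * first_feature + (1.0 - weight) * second_feature

rng = np.random.default_rng(0)
first_feature = rng.random((64, 64))    # feature from the first frame
second_feature = rng.random((64, 64))   # overlapping feature from the second frame
enhanced = combine_features(first_feature, second_feature)
print(enhanced.shape)                   # (64, 64)
```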
  • Publication number: 20230410447
    Abstract: Systems and techniques are provided for generating a three-dimensional (3D) facial model. For example, a process can include obtaining at least one input image associated with a face. In some aspects, the process can include obtaining a pose for a 3D facial model associated with the face. In some examples, the process can include generating, by a machine learning model, the 3D facial model associated with the face. In some cases, one or more parameters associated with a shape component of the 3D facial model are conditioned on the pose. In some implementations, the 3D facial model is configured to vary in shape based on the pose for the 3D facial model associated with the face.
    Type: Application
    Filed: June 21, 2022
    Publication date: December 21, 2023
    Inventors: Ke-Li CHENG, Anupama S, Kuang-Man HUANG, Chieh-Ming KUO, Avani RAO, Chiranjib CHOUDHURI, Michel Adib SARKIS, Ajit Deepak GUPTE, Ning BI
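In publication 20230410447 above, the shape component of the 3D facial model is conditioned on the pose, so the predicted shape parameters vary as the head pose changes. The sketch below fakes that conditioning with a random linear map in place of the trained machine learning model; the parameter sizes and weights are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
W_pose = rng.normal(scale=0.01, size=(3, 64))   # stand-in for learned weights

def shape_parameters(base_shape: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Shape component conditioned on pose: the same face yields slightly
    different shape parameters as the head pose (yaw, pitch, roll) changes."""
    return base_shape + pose @ W_pose

base = np.zeros(64)
frontal = shape_parameters(base, np.array([0.0, 0.0, 0.0]))
profile = shape_parameters(base, np.array([1.2, 0.0, 0.0]))   # large yaw
print(np.abs(frontal - profile).max() > 0)                    # shape varies with pose
```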
  • Publication number: 20230401673
    Abstract: Imaging systems and techniques are described. An imaging system receives, from an image sensor, one or more images of a user (e.g., in a pose and/or with a facial expression). The image sensor captures the one or more images in a first electromagnetic (EM) frequency domain, such as the infrared and/or near-infrared domain. The imaging system generates a representation of the user in the pose in a second EM frequency domain (e.g., visible light domain) at least in part by inputting the one or more images into one or more trained machine learning models. The representation of the user is based on an image property associated with image data of at least part of the user in the second EM frequency domain. The imaging system outputs the representation of the user in the pose in the second EM frequency domain.
    Type: Application
    Filed: June 14, 2022
    Publication date: December 14, 2023
    Inventors: Ajit Deepak GUPTE, Chiranjib CHOUDHURI, Anupama S
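Publication 20230401673 above translates infrared/near-infrared captures of a user into a visible-light representation by running them through one or more trained machine learning models. The snippet below shows only the shape of that interface; the lambda "model" is a stand-in that fabricates an RGB image, not a trained network.

```python
import numpy as np

def infrared_to_visible(ir_image: np.ndarray, model) -> np.ndarray:
    """Map an infrared/near-infrared capture to a visible-light representation
    of the user by running it through a (here, fake) translation model."""
    return model(ir_image)

# Stand-in "model": replicate the IR channel into RGB, just to make the call concrete.
fake_model = lambda ir: np.repeat(ir[..., np.newaxis], 3, axis=-1) / ir.max()
ir_frame = np.random.default_rng(0).integers(0, 1024, size=(240, 320)).astype(float)
rgb = infrared_to_visible(ir_frame, fake_model)
print(rgb.shape)   # (240, 320, 3)
```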
  • Patent number: 11756227
    Abstract: Systems and techniques are provided for determining and applying corrected poses in digital content experiences. An example method can include receiving, from one or more sensors associated with an apparatus, inertial measurements and one or more frames of a scene; based on the one or more frames and the inertial measurements, determining, via a first filter, an angular and linear motion of the apparatus and a gravity vector indicating a direction of gravitational force interacting with the apparatus; when a motion of the apparatus is below a threshold, determining, via a second filter, an updated gravity vector indicating a direction of gravitational force interacting with the apparatus; determining, based on the updated gravity vector, parameters for aligning an axis of the scene with a gravity direction in a real-world spatial frame; and aligning, using the parameters, the axis of the scene with the gravity direction in the real-world spatial frame.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: September 12, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Srujan Babu Nandipati, Pushkar Gorur Sheshagiri, Chiranjib Choudhuri, Ajit Deepak Gupte, Gerhard Reitmayr
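Patent 11756227 above aligns an axis of the rendered scene with the gravity direction derived from an updated gravity vector. The sketch below shows only that final alignment step, computing the rotation that takes the scene's up axis onto the measured "up" (opposite to gravity) via Rodrigues' formula; the two-filter gravity estimation and the motion threshold are omitted, and the numbers are made up.

```python
import numpy as np

def alignment_rotation(scene_up: np.ndarray, gravity: np.ndarray) -> np.ndarray:
    """Rotation matrix aligning the scene's up axis with the direction opposite
    to the measured gravity vector (Rodrigues' rotation formula)."""
    a = scene_up / np.linalg.norm(scene_up)
    b = -gravity / np.linalg.norm(gravity)     # "up" points opposite to gravity
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):                    # opposite vectors: 180-degree turn
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

gravity = np.array([0.1, -9.7, 0.4])            # updated gravity estimate (m/s^2)
R = alignment_rotation(np.array([0.0, 1.0, 0.0]), gravity)
print(np.round(R @ [0.0, 1.0, 0.0], 3))         # scene up axis now opposes gravity
```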
  • Publication number: 20230266589
    Abstract: Systems and techniques are provided for an adjustable camera system for mouth tracking. An example apparatus can include a housing with an opening formed in a first side of the housing, wherein one or more surfaces of the housing are configured to engage a head of a user; and a structure including a lens configured to receive incident light, wherein the structure is configured to move from a retracted state, where at least a portion of the structure is retracted into the opening in the first side of the housing, to an extended state, where at least a portion of the structure that includes the lens extends from the first side of the housing.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Inventors: John EATON, Manikandan MEIYOOR VELAYUTHAM, Mario SANCHEZ, Thirukumaran DECHINAMOORTHY, Chiranjib CHOUDHURI
  • Publication number: 20230095621
    Abstract: Systems and techniques are described herein for processing frames. The systems and techniques can be implemented by various types of systems, such as by an extended reality (XR) system or device. In some cases, a process can include obtaining feature information associated with a feature in a current frame, wherein the feature information is based on one or more previous frames; determining an estimated pose of the apparatus associated with the current frame; obtaining a distance associated with the feature in the current frame; and determining an estimated scale of the feature in the current frame based on the feature information associated with the feature, the estimated pose, and the distance associated with the feature.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Pushkar GORUR SHESHAGIRI, Ajit Deepak GUPTE, Chiranjib CHOUDHURI, Gerhard REITMAYR, Youngmin PARK
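Publication 20230095621 above estimates a feature's scale in the current frame from previously observed feature information, the estimated pose, and a distance to the feature. A heavily simplified sketch follows, reducing the idea to the pinhole-camera relation that apparent scale falls off with distance; the pose term and real feature descriptors are left out, and the numbers are illustrative.

```python
def estimated_feature_scale(reference_scale_px: float,
                            reference_distance_m: float,
                            current_distance_m: float) -> float:
    """Pinhole-camera intuition: a feature's apparent scale shrinks in
    proportion to its distance from the camera."""
    return reference_scale_px * reference_distance_m / current_distance_m

# Feature information from previous frames: 24 px across when seen at 2.0 m.
print(estimated_feature_scale(24.0, 2.0, 3.0))   # ~16 px when seen at 3.0 m
```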
  • Publication number: 20220366597
    Abstract: Systems and techniques are provided for determining and applying corrected poses in digital content experiences. An example method can include receiving, from one or more sensors associated with an apparatus, inertial measurements and one or more frames of a scene; based on the one or more frames and the inertial measurements, determining, via a first filter, an angular and linear motion of the apparatus and a gravity vector indicating a direction of gravitational force interacting with the apparatus; when a motion of the apparatus is below a threshold, determining, via a second filter, an updated gravity vector indicating a direction of gravitational force interacting with the apparatus; determining, based on the updated gravity vector, parameters for aligning an axis of the scene with a gravity direction in a real-world spatial frame; and aligning, using the parameters, the axis of the scene with the gravity direction in the real-world spatial frame.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 17, 2022
    Inventors: Srujan Babu NANDIPATI, Pushkar GORUR SHESHAGIRI, Chiranjib CHOUDHURI, Ajit Deepak GUPTE, Gerhard REITMAYR
  • Patent number: 11144117
    Abstract: Methods, systems, and devices for deep learning based head motion prediction for extended reality are described. The head pose prediction may involve training one or more layers of a machine learning network based on application data and an estimated head motion range associated with the extended reality system. The network may receive one or more bias-corrected inertial measurement unit (IMU) measurements from a sensor. The network may model a relative head pose of the user as a polynomial of time over a prediction interval based on the bias-corrected IMU measurements and the trained one or more layers of the machine learning network. The network may determine a future relative head pose of the user based on the polynomial (e.g., which may be used for virtual object generation, display, etc. within an extended reality system).
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: October 12, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Chiranjib Choudhuri, Ajit Deepak Gupte, Pushkar Gorur Sheshagiri, Gerhard Reitmayr, Tom Edward Botterill
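Patent 11144117 above models the relative head pose as a polynomial of time over a prediction interval and evaluates it to predict a future pose. The sketch below does this for a single yaw angle with numpy's polynomial fit, assuming already bias-corrected samples and an invented 50 ms horizon; the learned network layers the patent describes are not represented.

```python
import numpy as np

def predict_relative_pose(timestamps: np.ndarray,
                          yaw_samples: np.ndarray,
                          horizon_s: float,
                          degree: int = 2) -> float:
    """Fit a low-degree polynomial of time to recent (bias-corrected) yaw
    samples and evaluate it at a future time to predict head motion."""
    coeffs = np.polyfit(timestamps, yaw_samples, degree)
    return float(np.polyval(coeffs, timestamps[-1] + horizon_s))

t = np.linspace(0.0, 0.1, 6)                         # last 100 ms of samples
yaw = 0.3 * t + 2.0 * t ** 2                         # accelerating head turn (rad)
print(predict_relative_pose(t, yaw, horizon_s=0.05)) # predicted yaw ~50 ms ahead
```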
  • Patent number: 11010921
    Abstract: Systems, methods, and computer-readable media are provided for distributed tracking and mapping for extended reality experiences. An example method can include computing, at a device, a pose of the device at a future time, the future time being determined based on a communication latency between the device and a mapping backend system; sending, to the mapping backend system, the pose of the device; receiving, from the mapping backend system, a map slice including map points corresponding to a scene associated with the device, the map slice being generated based on the pose of the device, wherein the map points correspond to the predicted pose; and computing an updated pose of the device based on the map slice.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 18, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Chiranjib Choudhuri, Pushkar Gorur Sheshagiri, Ajit Deepak Gupte, Vinay Melkote Krishnaprasad, Chayan Sharma, Ajit Venkat Rao
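In patent 11010921 above, the device predicts its own pose at a future time chosen from the measured device-to-backend latency, sends that pose, and receives a map slice of points around it. The client-side sketch below assumes constant-velocity extrapolation and uses a lambda as a pretend mapping backend; it is a flow illustration, not the patented system.

```python
def predicted_pose(pose, velocity, latency_s: float):
    """Extrapolate the device pose to the time the backend's reply will arrive."""
    return tuple(p + v * latency_s for p, v in zip(pose, velocity))

# Stand-in mapping backend: returns a "map slice" of points near the given pose.
fake_backend = lambda pose: [pose]

latency_s = 0.08                                      # measured round-trip latency (s)
future_pose = predicted_pose((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), latency_s)
map_slice = fake_backend(future_pose)                 # send pose, receive map slice
print(future_pose, len(map_slice))                    # (0.04, 0.0, 0.0) 1
```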
  • Publication number: 20200364901
    Abstract: Systems, methods, and computer-readable media are provided for distributed tracking and mapping for extended reality experiences. An example method can include computing, at a device, a pose of the device at a future time, the future time being determined based on a communication latency between the device and a mapping backend system; sending, to the mapping backend system, the pose of the device; receiving, from the mapping backend system, a map slice including map points corresponding to a scene associated with the device, the map slice being generated based on the pose of the device, wherein the map points correspond to the predicted pose; and computing an updated pose of the device based on the map slice.
    Type: Application
    Filed: May 16, 2019
    Publication date: November 19, 2020
    Inventors: Chiranjib CHOUDHURI, Pushkar GORUR SHESHAGIRI, Ajit Deepak GUPTE, Vinay MELKOTE KRISHNAPRASAD, Chayan SHARMA, Ajit Venkat RAO
  • Patent number: 10767997
    Abstract: Systems, methods, and computer-readable media are provided for immersive extended reality experiences on mobile platforms. In some examples, a method can include obtaining sensor measurements from one or more sensors on a mobile platform and/or a device associated with a user in the mobile platform, the sensor measurements including motion parameters associated with the mobile platform and the user; identifying features of the mobile platform and an environment outside of the mobile platform; tracking, using the sensor measurements, a first pose of the mobile platform relative to the environment outside of the mobile platform; tracking, using the sensor measurements, a second pose of the user relative to at least one of the features of the mobile platform; and tracking, based on the first pose and the second pose, a third pose of the user relative to at least one of the features of the environment outside of the mobile platform.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: September 8, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Pushkar Gorur Sheshagiri, Chayan Sharma, Chiranjib Choudhuri, Ajit Deepak Gupte
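Patent 10767997 above tracks a first pose of the mobile platform relative to the outside environment and a second pose of the user relative to the platform, then derives a third pose of the user relative to the environment. The sketch below shows that composition with 4x4 homogeneous transforms and translation-only poses; the sensor fusion that produces the first two poses is omitted, and the numbers are illustrative.

```python
import numpy as np

def compose(pose_a_in_b: np.ndarray, pose_b_in_c: np.ndarray) -> np.ndarray:
    """Chain rigid transforms (4x4 homogeneous matrices):
    user-in-environment = platform-in-environment @ user-in-platform."""
    return pose_b_in_c @ pose_a_in_b

def translation(x: float, y: float, z: float) -> np.ndarray:
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

platform_in_env = translation(100.0, 0.0, 0.0)   # vehicle 100 m down the road
user_in_platform = translation(0.5, 0.0, 1.2)    # head position inside the cabin
user_in_env = compose(user_in_platform, platform_in_env)
print(user_in_env[:3, 3])                        # [100.5   0.    1.2]
```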
  • Publication number: 20200271450
    Abstract: Systems, methods, and computer-readable media are provided for immersive extended reality experiences on mobile platforms. In some examples, a method can include obtaining sensor measurements from one or more sensors on a mobile platform and/or a device associated with a user in the mobile platform, the sensor measurements including motion parameters associated with the mobile platform and the user; identifying features of the mobile platform and an environment outside of the mobile platform; tracking, using the sensor measurements, a first pose of the mobile platform relative to the environment outside of the mobile platform; tracking, using the sensor measurements, a second pose of the user relative to at least one of the features of the mobile platform; and tracking, based on the first pose and the second pose, a third pose of the user relative to at least one of the features of the environment outside of the mobile platform.
    Type: Application
    Filed: February 25, 2019
    Publication date: August 27, 2020
    Inventors: Pushkar GORUR SHESHAGIRI, Chayan SHARMA, Chiranjib CHOUDHURI, Ajit Deepak GUPTE
  • Patent number: 10614603
    Abstract: Techniques are described in which a device is configured to determine an overlap region between a first image and a second image, determine a first histogram based on color data included in the first image that corresponds to the overlap region, and determine a second histogram based on color data included in the second image that corresponds to the overlap region. The device is further configured to determine, based on the first and second histograms, a mapping function that substantially maps the second histogram to the first histogram and apply the mapping function to the second image to generate a normalized second image with respect to the first image.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: April 7, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Shilpi Sahu, Chiranjib Choudhuri, Pawan Kumar Baheti, Ajit Deepak Gupte
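Patent 10614603 above builds, from histograms of the overlap region, a mapping function that takes the second image's histogram onto the first image's, then applies it to normalize the second image. Below is a compact sketch of that idea using CDF-based histogram matching on 8-bit grayscale data; the color handling and the exact mapping the patent uses are not reproduced.

```python
import numpy as np

def histogram_mapping(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Build a 256-entry lookup table that maps the source image's intensity
    histogram (from the overlap region) onto the reference image's histogram."""
    src_cdf = np.cumsum(np.bincount(source.ravel(), minlength=256)) / source.size
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size
    return np.interp(src_cdf, ref_cdf, np.arange(256)).astype(np.uint8)

rng = np.random.default_rng(0)
overlap_a = rng.integers(100, 200, size=(64, 64), dtype=np.uint8)  # first image overlap
overlap_b = rng.integers(50, 150, size=(64, 64), dtype=np.uint8)   # second image overlap
lut = histogram_mapping(overlap_b, overlap_a)
normalized_b = lut[overlap_b]            # second image normalized toward the first
print(normalized_b.mean(), overlap_a.mean())   # means now roughly agree
```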
  • Patent number: 10373360
    Abstract: A method for stitching images by an electronic device is described. The method includes obtaining at least two images. The method also includes selecting a stitching scheme from a set of stitching schemes based on one or more content measures of the at least two images. The set of stitching schemes includes a first stitching scheme, a second stitching scheme, and a third stitching scheme. The method further includes stitching the at least two images based on a selected stitching scheme.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: August 6, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Pushkar Gorur Sheshagiri, Chiranjib Choudhuri, Sudipto Banerjee, Ajit Deepak Gupte, Pawan Kumar Baheti, Ajit Venkat Rao
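Patent 10373360 above selects one of at least three stitching schemes based on content measures of the images to be stitched. The sketch below is a bare-bones version of that selection step with two invented content measures and arbitrary thresholds; the scheme names and cutoffs are placeholders rather than the patented criteria.

```python
def select_stitching_scheme(texture_score: float, parallax_score: float) -> str:
    """Pick one of three stitching schemes from simple content measures of
    the overlap region (thresholds and scheme names are illustrative)."""
    if parallax_score > 0.5:
        return "seam-based stitching"          # large parallax: cut along a seam
    if texture_score < 0.2:
        return "simple blending"               # flat content: cheap alpha blend
    return "feature-aligned stitching"         # textured, low-parallax content

print(select_stitching_scheme(texture_score=0.7, parallax_score=0.1))
```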
  • Publication number: 20180253875
    Abstract: A method for stitching images by an electronic device is described. The method includes obtaining at least two images. The method also includes selecting a stitching scheme from a set of stitching schemes based on one or more content measures of the at least two images. The set of stitching schemes includes a first stitching scheme, a second stitching scheme, and a third stitching scheme. The method further includes stitching the at least two images based on a selected stitching scheme.
    Type: Application
    Filed: March 2, 2017
    Publication date: September 6, 2018
    Inventors: Pushkar Gorur Sheshagiri, Chiranjib Choudhuri, Sudipto Banerjee, Ajit Deepak Gupte, Pawan Kumar Baheti, Ajit Venkat Rao
  • Publication number: 20180082454
    Abstract: Techniques are described in which a device is configured to determine an overlap region between a first image and a second image, determine a first histogram based on color data included in the first image that corresponds to the overlap region, and determine a second histogram based on color data included in the second image that corresponds to the overlap region. The device is further configured to determine, based on the first and second histograms, a mapping function that substantially maps the second histogram to the first histogram and apply the mapping function to the second image to generate a normalized second image with respect to the first image.
    Type: Application
    Filed: March 29, 2017
    Publication date: March 22, 2018
    Inventors: Shilpi Sahu, Chiranjib Choudhuri, Pawan Kumar Baheti, Ajit Deepak Gupte