Patents by Inventor Pavan Kumar Anasosalu
Pavan Kumar Anasosalu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12257028
Abstract: An electronic device may include body composition analysis circuitry that estimates body composition based on captured images of a face, neck, and/or body (e.g., depth map images captured by a depth sensor, visible light and infrared images captured by image sensors, and/or other suitable images). The body composition analysis circuitry may analyze the image data and may extract portions of the image data that strongly correlate with body composition, such as portions of the cheeks, neck, waist, etc. The body composition analysis circuitry may encode the image data into a latent space. The latent space may be based on a deep learning model that accounts for facial expression and neck pose in face/neck images and that accounts for breathing and body pose in body images. The body composition analysis circuitry may output an estimated body composition based on the image data and based on user demographic information.
Type: Grant
Filed: July 14, 2022
Date of Patent: March 25, 2025
Assignee: Apple Inc.
Inventors: Gopal Valsan, Thilaka S. Sumanaweera, Liliana I. Keats, David J. Feathers, Pavan Kumar Anasosalu Vasu
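The pipeline this abstract describes (crop informative face/neck/body regions, encode them into a latent space, then combine the latent code with demographic information to estimate body composition) can be illustrated with a minimal sketch. The PyTorch module below is an assumption for illustration only, not the patented implementation; the encoder architecture, crop size, demographic features, and single body-fat output are all hypothetical.

```python
import torch
import torch.nn as nn

class BodyCompositionEstimator(nn.Module):
    """Illustrative sketch: encode an image region to a latent code, then
    regress a body-composition estimate from the code plus demographics."""

    def __init__(self, region_dim=64 * 64, latent_dim=32, demo_dim=4):
        super().__init__()
        # Stand-in for the deep model that maps cropped regions
        # (cheeks, neck, waist, ...) into a latent space.
        self.encoder = nn.Sequential(
            nn.Linear(region_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Combines the latent code with demographic features
        # (e.g. age, height, weight, sex) to predict body-fat percentage.
        self.regressor = nn.Sequential(
            nn.Linear(latent_dim + demo_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, region_crop, demographics):
        latent = self.encoder(region_crop.flatten(1))
        return self.regressor(torch.cat([latent, demographics], dim=1))

# Usage with random stand-in data: one 64x64 depth crop plus 4 demographic values.
model = BodyCompositionEstimator()
crop = torch.rand(1, 64, 64)
demo = torch.tensor([[30.0, 175.0, 70.0, 1.0]])
print(model(crop, demo))  # untrained output, arbitrary units
```

Note that the latent space described in the abstract also accounts for expression, pose, and breathing, which this toy model does not attempt.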
-
Publication number: 20240169046
Abstract: Techniques are disclosed relating to biometric authentication, e.g., facial recognition. In some embodiments, a device is configured to verify that image data from a camera unit exhibits a pseudo-random sequence of image capture modes and/or a probing pattern of illumination points (e.g., from lasers in a depth capture mode) before authenticating a user based on recognizing a face in the image data. In some embodiments, a secure circuit may control verification of the sequence and/or the probing pattern. In some embodiments, the secure circuit may verify frame numbers, signatures, and/or nonce values for captured image information. In some embodiments, a device may implement one or more lockout procedures in response to biometric authentication failures. The disclosed techniques may reduce or eliminate the effectiveness of spoofing and/or replay attacks, in some embodiments.
Type: Application
Filed: November 28, 2023
Publication date: May 23, 2024
Inventors: Deepti S. Prakash, Lucia E. Ballard, Jerrold V. Hauck, Feng Tang, Etai Littwin, Pavan Kumar Anasosalu Vasu, Gideon Littwin, Thorsten Gernoth, Lucie Kucerova, Petr Kostka, Steven P. Hotelling, Eitan Hirsh, Tal Kaitz, Jonathan Pokrass, Andrei Kolin, Moshe Laifenfeld, Matthew C. Waldon, Thomas P. Mensch, Lynn R. Youngs, Christopher G. Zeleznik, Michael R. Malone, Ziv Hendel, Ivan Krstic, Anup K. Sharma
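The checks listed in this abstract (a pseudo-random sequence of capture modes plus per-frame numbers, nonces, and signatures verified before face matching proceeds) can be sketched as a toy protocol. The HMAC construction, frame layout, and mode names below are assumptions made for illustration; the application does not specify this exact scheme, and a real implementation would run inside a secure circuit rather than application code.

```python
import hmac
import hashlib
import secrets
from dataclasses import dataclass

MODES = ("depth", "ir_flood")  # hypothetical capture-mode labels

@dataclass
class Frame:
    number: int
    mode: str
    nonce: bytes
    payload: bytes
    signature: bytes

def sign(key: bytes, frame_number: int, mode: str, nonce: bytes, payload: bytes) -> bytes:
    # Illustrative per-frame signature binding number, mode, nonce, and image bytes.
    msg = frame_number.to_bytes(4, "big") + mode.encode() + nonce + payload
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_capture(frames, key: bytes, expected_modes, session_nonce: bytes) -> bool:
    """Return True only if every frame is correctly numbered, carries the session
    nonce, matches the expected pseudo-random mode sequence, and has a valid
    signature -- only then would face matching be allowed to proceed."""
    for i, (frame, expected_mode) in enumerate(zip(frames, expected_modes)):
        if frame.number != i or frame.mode != expected_mode or frame.nonce != session_nonce:
            return False
        expected_sig = sign(key, frame.number, frame.mode, frame.nonce, frame.payload)
        if not hmac.compare_digest(frame.signature, expected_sig):
            return False
    return len(frames) == len(expected_modes)

# Example: pick a random mode sequence and nonce, receive signed frames from the
# camera pipeline, and gate authentication on the verification result.
key = secrets.token_bytes(32)
session_nonce = secrets.token_bytes(16)
expected = [secrets.choice(MODES) for _ in range(4)]
frames = [
    Frame(i, m, session_nonce, b"image-bytes", sign(key, i, m, session_nonce, b"image-bytes"))
    for i, m in enumerate(expected)
]
print(verify_capture(frames, key, expected, session_nonce))  # True
```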
-
Patent number: 11868455
Abstract: Techniques are disclosed relating to biometric authentication, e.g., facial recognition. In some embodiments, a device is configured to verify that image data from a camera unit exhibits a pseudo-random sequence of image capture modes and/or a probing pattern of illumination points (e.g., from lasers in a depth capture mode) before authenticating a user based on recognizing a face in the image data. In some embodiments, a secure circuit may control verification of the sequence and/or the probing pattern. In some embodiments, the secure circuit may verify frame numbers, signatures, and/or nonce values for captured image information. In some embodiments, a device may implement one or more lockout procedures in response to biometric authentication failures. The disclosed techniques may reduce or eliminate the effectiveness of spoofing and/or replay attacks, in some embodiments.
Type: Grant
Filed: February 22, 2021
Date of Patent: January 9, 2024
Assignee: Apple Inc.
Inventors: Deepti S. Prakash, Lucia E. Ballard, Jerrold V. Hauck, Feng Tang, Etai Littwin, Pavan Kumar Anasosalu Vasu, Gideon Littwin, Thorsten Gernoth, Lucie Kucerova, Petr Kostka, Steven P. Hotelling, Eitan Hirsh, Tal Kaitz, Jonathan Pokrass, Andrei Kolin, Moshe Laifenfeld, Matthew C. Waldon, Thomas P. Mensch, Lynn R. Youngs, Christopher G. Zeleznik, Michael R. Malone, Ziv Hendel, Ivan Krstic, Anup K. Sharma
-
Publication number: 20230065288
Abstract: An electronic device may include body composition analysis circuitry that estimates body composition based on captured images of a face, neck, and/or body (e.g., depth map images captured by a depth sensor, visible light and infrared images captured by image sensors, and/or other suitable images). The body composition analysis circuitry may analyze the image data and may extract portions of the image data that strongly correlate with body composition, such as portions of the cheeks, neck, waist, etc. The body composition analysis circuitry may encode the image data into a latent space. The latent space may be based on a deep learning model that accounts for facial expression and neck pose in face/neck images and that accounts for breathing and body pose in body images. The body composition analysis circuitry may output an estimated body composition based on the image data and based on user demographic information.
Type: Application
Filed: July 14, 2022
Publication date: March 2, 2023
Inventors: Gopal Valsan, Thilaka S. Sumanaweera, Liliana I. Keats, David J. Feathers, Pavan Kumar Anasosalu Vasu
-
Patent number: 11282180
Abstract: A method includes determining a detection output that represents an object in a two-dimensional image using a detection model, wherein the detection output includes a shape definition that describes a shape and size of the object; defining a three-dimensional representation based on the shape definition, wherein the three-dimensional representation includes a three-dimensional model that represents the object and that is placed in three-dimensional space according to a position and a rotation; determining a three-dimensional detection loss that describes a difference between the three-dimensional representation and three-dimensional sensor information; and updating the detection model based on the three-dimensional detection loss.
Type: Grant
Filed: April 24, 2020
Date of Patent: March 22, 2022
Assignee: Apple Inc.
Inventors: Shreyas Saxena, Cuneyt Oncel Tuzel, Pavan Kumar Anasosalu Vasu
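The training procedure summarized here (a detector predicts a shape definition and pose from a 2D image, the shape is instantiated in 3D, and a loss against 3D sensor information updates the detector) might look roughly like the following sketch. The tiny MLP detector, 3D-box parameterization, and chamfer-style nearest-corner loss are illustrative assumptions, not the claimed method.

```python
import torch
import torch.nn as nn

def box_corners(size, position, yaw):
    """Place a 3D box of the given size at a position with a yaw rotation; return its 8 corners."""
    signs = torch.tensor([[sx, sy, sz] for sx in (-0.5, 0.5)
                          for sy in (-0.5, 0.5) for sz in (-0.5, 0.5)])
    corners = signs * size                       # scale the unit box
    c, s = torch.cos(yaw), torch.sin(yaw)
    rot = torch.stack([torch.stack([c, -s, torch.zeros(())]),
                       torch.stack([s,  c, torch.zeros(())]),
                       torch.tensor([0.0, 0.0, 1.0])])
    return corners @ rot.T + position            # rotate, then translate

# Hypothetical detector head: image features in, 7 numbers out (size, position, yaw).
detector = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 7))
optimizer = torch.optim.SGD(detector.parameters(), lr=1e-2)

image_features = torch.rand(128)                 # stand-in for 2D image features
sensor_points = torch.rand(50, 3) * 2.0          # stand-in for 3D sensor information

for _ in range(10):
    out = detector(image_features)
    size, position, yaw = out[:3].abs() + 0.1, out[3:6], out[6]
    pred = box_corners(size, position, yaw)
    # 3D detection loss: distance from each sensor point to its nearest predicted corner.
    loss = torch.cdist(sensor_points, pred).min(dim=1).values.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```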
-
Patent number: 11151235
Abstract: Techniques are disclosed relating to biometric authentication, e.g., facial recognition. In some embodiments, a device is configured to verify that image data from a camera unit exhibits a pseudo-random sequence of image capture modes and/or a probing pattern of illumination points (e.g., from lasers in a depth capture mode) before authenticating a user based on recognizing a face in the image data. In some embodiments, a secure circuit may control verification of the sequence and/or the probing pattern. In some embodiments, the secure circuit may verify frame numbers, signatures, and/or nonce values for captured image information. In some embodiments, a device may implement one or more lockout procedures in response to biometric authentication failures. The disclosed techniques may reduce or eliminate the effectiveness of spoofing and/or replay attacks, in some embodiments.
Type: Grant
Filed: July 31, 2018
Date of Patent: October 19, 2021
Assignee: Apple Inc.
Inventors: Deepti S. Prakash, Lucia E. Ballard, Jerrold V. Hauck, Feng Tang, Etai Littwin, Pavan Kumar Anasosalu Vasu, Gideon Littwin, Thorsten Gernoth, Lucie Kucerova, Petr Kostka, Steven P. Hotelling, Eitan Hirsh, Tal Kaitz, Jonathan Pokrass, Andrei Kolin, Moshe Laifenfeld, Matthew C. Waldon, Thomas P. Mensch, Lynn R. Youngs, Christopher G. Zeleznik, Michael R. Malone, Ziv Hendel, Ivan Krstic, Anup K. Sharma, Kelsey Y. Ho
-
Publication number: 20150243035
Abstract: A method is provided for determining a transformation between an image coordinate system and an object coordinate system, including: providing an object coordinate system associated with an object of interest; providing a 3D model of at least part of the object of interest, wherein the 3D model comprises 3D features; providing an N-th input depth image of at least part of the object of interest, wherein an N-th image coordinate system is associated with the N-th input depth image; providing an N-th plurality of 3D features in the N-th image coordinate system according to the N-th input depth image; estimating an N-th coarse transformation between the object coordinate system and the N-th image coordinate system according to a trained pose model and the N-th input depth image; and determining an N-th accurate transformation between the N-th image coordinate system and the object coordinate system.
Type: Application
Filed: February 21, 2014
Publication date: August 27, 2015
Applicant: Metaio GmbH
Inventors: Rajesh Narasimha, Pavan Kumar Anasosalu
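The coarse-to-fine structure of this method (a trained pose model yields a coarse transformation, which is then refined into an accurate one) can be illustrated with a small NumPy sketch. Using Kabsch/Procrustes alignment of matched 3D features as the refinement step is an assumption here; the abstract only states that an accurate transformation is determined.

```python
import numpy as np

def refine_transform(model_pts, image_pts, coarse_R, coarse_t):
    """Refine a coarse rigid transform so that transformed model features best
    align (least squares) with the 3D features observed in the depth image."""
    pred = model_pts @ coarse_R.T + coarse_t           # apply the coarse estimate
    # Kabsch: solve for the residual rotation/translation between the point sets.
    mu_p, mu_q = pred.mean(axis=0), image_pts.mean(axis=0)
    H = (pred - mu_p).T @ (image_pts - mu_q)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_delta = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t_delta = mu_q - R_delta @ mu_p
    # Compose the correction with the coarse transform to get the accurate one.
    return R_delta @ coarse_R, R_delta @ coarse_t + t_delta

# Example with synthetic data: a known ground-truth pose, a deliberately rough
# coarse guess, and refinement that recovers the accurate transform.
rng = np.random.default_rng(0)
model_pts = rng.normal(size=(30, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.5])
image_pts = model_pts @ R_true.T + t_true

coarse_R, coarse_t = np.eye(3), np.zeros(3)
R_ref, t_ref = refine_transform(model_pts, image_pts, coarse_R, coarse_t)
print(np.allclose(R_ref, R_true, atol=1e-6), np.allclose(t_ref, t_true, atol=1e-6))
```

In a real system the 3D feature correspondences would come from the depth image and the 3D model, and refinement would typically iterate (e.g., ICP-style) rather than run once on exact matches.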