Patents by Inventor Spandana Vemulapalli
Spandana Vemulapalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230359722
Abstract: The technology described herein includes a method that includes obtaining a set of color-coded sequences, each of which includes a sequence of colors. Each color-coded sequence has auto-correlation properties characterized by a merit factor larger than a first predetermined threshold, and cross-correlation properties among the color-coded sequences characterized by a demerit factor lower than a second predetermined threshold. A color-coded sequence is randomly selected from the set of color-coded sequences. A subject is illuminated in accordance with the sequence of colors in the selected color-coded sequence. A sequence of images of the subject is captured and temporally synchronized with illumination by the color-coded sequence. A filtered response image is generated from the sequence of images by a matched filtering process. Based on the filtered response image, it is determined that the subject is an alternative representation of a live person. In response, access to a secure system is prevented.
Type: Application
Filed: July 21, 2023
Publication date: November 9, 2023
Inventors: Spandana Vemulapalli, David Hirvonen
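The matched-filtering scheme in this abstract can be sketched numerically. This is a minimal illustration under stated assumptions: the merit-factor definition below (peak autocorrelation energy over sidelobe energy) and the binary ±1 code are stand-ins for whatever metrics and color codes the claims actually cover.

```python
import numpy as np

def merit_factor(seq):
    """Peak autocorrelation energy over total sidelobe energy (a common
    merit-factor definition; the patent's exact metric is not given here)."""
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    ac = np.correlate(seq, seq, mode="full")
    peak = ac[n - 1] ** 2
    sidelobes = np.sum(ac ** 2) - peak
    return peak / sidelobes

def matched_filter_response(pixel_series, code):
    """Correlate a pixel's temporal intensity trace with the known
    illumination code; an illuminated live subject yields a strong response."""
    code = np.asarray(code, dtype=float)
    code = (code - code.mean()) / (code.std() + 1e-9)
    x = np.asarray(pixel_series, dtype=float)
    x = x - x.mean()
    return float(np.sum(x * code)) / len(code)

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=31)          # randomly selected code
live = 0.9 * code + 0.1 * rng.normal(size=31)    # pixel synced with illumination
spoof = rng.normal(size=31)                      # replay: no synchronized response
print(merit_factor([1, 1, 1, -1]))               # Barker-4: low sidelobes
print(matched_filter_response(live, code))
print(matched_filter_response(spoof, code))
```

A replayed spoof cannot reproduce the randomly selected illumination sequence, so its temporal trace decorrelates from the code and the filtered response stays near zero.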
-
Publication number: 20230306789
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person include providing instructions for performing a head movement, capturing a set of images of a subject subsequent to providing the instructions to perform the head movement, determining, from the set of images, a first metric indicative of an amount of change in eye gaze directions of the subject during the head movement, determining, from the set of images, a second metric indicative of a change in head positions of the subject during the head movement, determining, based at least on the first metric and the second metric, a third metric indicative of a likelihood that the subject is a live person, determining that the third metric satisfies a first threshold condition, and in response to determining that the third metric satisfies the first threshold condition, identifying the subject as a live person.
Type: Application
Filed: March 25, 2022
Publication date: September 28, 2023
Inventor: Spandana Vemulapalli
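The two-metric combination in this abstract can be sketched as follows, assuming head yaw and gaze yaw angles have already been estimated per frame by some face/gaze tracker; the specific combination rule below is an illustrative choice, not the claimed third metric.

```python
import numpy as np

def liveness_score(head_yaw, gaze_yaw):
    """Third metric built from two simpler ones: the spread of gaze
    directions (first metric) relative to the spread of head positions
    (second metric). A live subject fixating the screen keeps gaze nearly
    constant while the head turns; a rigid spoof moves both together."""
    head_yaw = np.asarray(head_yaw, dtype=float)
    gaze_yaw = np.asarray(gaze_yaw, dtype=float)
    gaze_change = gaze_yaw.max() - gaze_yaw.min()   # first metric
    head_change = head_yaw.max() - head_yaw.min()   # second metric
    if head_change < 1e-6:
        return 0.0   # subject did not perform the head movement
    return 1.0 - gaze_change / head_change          # third metric

head = [-20.0, -10.0, 0.0, 10.0, 20.0]           # yaw (degrees) per frame
compensating_gaze = [1.0, 0.5, 0.0, -0.5, -1.0]  # eyes stay on the screen
rigid_gaze = head                                # gaze locked to the head
print(liveness_score(head, compensating_gaze))   # near 1: likely live
print(liveness_score(head, rigid_gaze))          # 0: likely spoof
```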
-
Publication number: 20230306792
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person include capturing a set of images of a subject instructed to perform a facial expression. A region of interest for the facial expression is determined in a first image of the set, the first image representing a first facial state that includes the facial expression. A set of facial features is identified in the region of interest, the facial features being indicative of interaction between facial muscles and skin of the subject due to the subject performing the facial expression. A determination is made, based on the facial features, that the first image substantially matches a template image of the facial expression of the subject. Responsive to determining that the first image substantially matches the template image, the subject is identified as a live person.
Type: Application
Filed: May 30, 2023
Publication date: September 28, 2023
Inventors: Spandana Vemulapalli, Reza R. Derakhshani
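The template-matching step can be sketched with plain normalized cross-correlation over the region of interest. The actual claims compare facial features arising from muscle-skin interaction; the pixel correlation and the 0.8 threshold below are simplified stand-ins, and the images are synthetic.

```python
import numpy as np

def substantially_matches(image, template_image, roi, threshold=0.8):
    """Normalized cross-correlation of the region of interest against the
    enrolled template image of the expression. The ROI bounds and the
    threshold are illustrative assumptions."""
    y0, y1, x0, x1 = roi
    a = np.asarray(image, dtype=float)[y0:y1, x0:x1].ravel()
    b = np.asarray(template_image, dtype=float)[y0:y1, x0:x1].ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(a @ b) / a.size >= threshold

rng = np.random.default_rng(1)
template = rng.random((64, 64))                     # enrolled expression image
live = template + 0.01 * rng.normal(size=(64, 64))  # same expression, sensor noise
spoof = rng.random((64, 64))                        # unrelated texture
roi = (16, 48, 16, 48)                              # hypothetical expression ROI
print(substantially_matches(live, template, roi))
print(substantially_matches(spoof, template, roi))
```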
-
Patent number: 11741208
Abstract: Technology described herein includes a method that includes obtaining a set of color-coded sequences, each of which includes a sequence of colors. Each color-coded sequence has auto-correlation properties characterized by a merit factor larger than a first predetermined threshold, and cross-correlation properties among the color-coded sequences characterized by a demerit factor lower than a second predetermined threshold. A color-coded sequence is randomly selected from the set of color-coded sequences. A subject is illuminated in accordance with the sequence of colors in the selected color-coded sequence. A sequence of images of the subject is captured and temporally synchronized with illumination by the color-coded sequence. A filtered response image is generated from the sequence of images by a matched filtering process. Based on the filtered response image, it is determined that the subject is an alternative representation of a live person. In response, access to a secure system is prevented.
Type: Grant
Filed: September 20, 2021
Date of Patent: August 29, 2023
Assignee: JUMIO CORPORATION
Inventors: Spandana Vemulapalli, David Hirvonen
-
Patent number: 11710353
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person include capturing a set of images of a subject instructed to perform a facial expression. A region of interest for the facial expression is determined in a first image of the set, the first image representing a first facial state that includes the facial expression. A set of facial features is identified in the region of interest, the facial features being indicative of interaction between facial muscles and skin of the subject due to the subject performing the facial expression. A determination is made, based on the facial features, that the first image substantially matches a template image of the facial expression of the subject. Responsive to determining that the first image substantially matches the template image, the subject is identified as a live person.
Type: Grant
Filed: August 31, 2021
Date of Patent: July 25, 2023
Assignee: Jumio Corporation
Inventors: Spandana Vemulapalli, Reza R. Derakhshani
-
Publication number: 20230091381
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person using a color-coded sequence including a sequence of colors. A subject is illuminated in accordance with the sequence of colors. A sequence of images of the subject is captured, where the sequence of images is temporally synchronized with illumination by the color-coded sequence. A filtered response image is generated by a matched filtering process on the sequence of images using the color-coded sequence. A determination is made, based on structural features around an eye region of the filtered response image, that the subject is a live person. Responsive to determining that the subject is a live person, an authentication process is initiated to authenticate the subject.
Type: Application
Filed: September 17, 2021
Publication date: March 23, 2023
Inventors: Spandana Vemulapalli, David Hirvonen
-
Publication number: 20230084760
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person include obtaining, by an image capture device, a set of subject images. Each image is captured at a different corresponding relative location of the image capture device with respect to the subject. Parameters are determined from the set of images of the subject. The parameters represent corneal reflections of at least one object in at least one eye of the subject. A determination is made, based on the parameters, that the subject is a live person. Responsive to determining that the subject is a live person, an authentication process is initiated to authenticate the subject.
Type: Application
Filed: September 10, 2021
Publication date: March 16, 2023
Inventors: David Hirvonen, Spandana Vemulapalli
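The idea of parameters representing corneal reflections across capture positions can be sketched as a correlation test, assuming the glint location in each eye has already been localized per capture; the score function and all numbers below are hypothetical illustrations, not the claimed parameters.

```python
import numpy as np

def reflection_motion_score(camera_offsets, glint_offsets):
    """Correlation between the device's relative position and the corneal
    glint's offset inside the eye, per capture. A curved live cornea makes
    the glint track camera motion; a flat reproduction does not."""
    c = np.asarray(camera_offsets, dtype=float)
    g = np.asarray(glint_offsets, dtype=float)
    c = (c - c.mean()) / (c.std() + 1e-9)
    g = (g - g.mean()) / (g.std() + 1e-9)
    return float(np.mean(c * g))

cam = [-2.0, -1.0, 0.0, 1.0, 2.0]           # relative device positions
live_glint = [-0.5, -0.25, 0.0, 0.25, 0.5]  # glint tracks the camera
flat_glint = [0.1, -0.1, 0.1, -0.1, 0.1]    # flat print: no systematic shift
print(reflection_motion_score(cam, live_glint))
print(reflection_motion_score(cam, flat_glint))
```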
-
Publication number: 20230063229
Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person include capturing a set of images of a subject instructed to perform a facial expression. A region of interest for the facial expression is determined in a first image of the set, the first image representing a first facial state that includes the facial expression. A set of facial features is identified in the region of interest, the facial features being indicative of interaction between facial muscles and skin of the subject due to the subject performing the facial expression. A determination is made, based on the facial features, that the first image substantially matches a template image of the facial expression of the subject. Responsive to determining that the first image substantially matches the template image, the subject is identified as a live person.
Type: Application
Filed: August 31, 2021
Publication date: March 2, 2023
Inventors: Spandana Vemulapalli, Reza R. Derakhshani
-
Publication number: 20220327321
Abstract: Technology described herein includes a method that includes obtaining a set of color-coded sequences, each of which includes a sequence of colors. Each color-coded sequence has auto-correlation properties characterized by a merit factor larger than a first predetermined threshold, and cross-correlation properties among the color-coded sequences characterized by a demerit factor lower than a second predetermined threshold. A color-coded sequence is randomly selected from the set of color-coded sequences. A subject is illuminated in accordance with the sequence of colors in the selected color-coded sequence. A sequence of images of the subject is captured and temporally synchronized with illumination by the color-coded sequence. A filtered response image is generated from the sequence of images by a matched filtering process. Based on the filtered response image, it is determined that the subject is an alternative representation of a live person. In response, access to a secure system is prevented.
Type: Application
Filed: September 20, 2021
Publication date: October 13, 2022
Applicant: EyeVerify, Inc.
Inventors: Spandana Vemulapalli, David Hirvonen
-
Patent number: 11341880
Abstract: Technology described herein includes a method that includes receiving, at one or more processing devices at one or more locations, one or more image frames; receiving, at the one or more processing devices, one or more signals representing outputs of one or more light sensors of a device; estimating, by the one or more processing devices based on the one or more image frames, one or more illuminance values; determining, by the one or more processing devices, that a degree of correlation between (i) a first illuminance represented by the one or more illuminance values and (ii) a second illuminance represented by the one or more signals fails to satisfy a threshold condition; and in response to determining that the degree of correlation fails to satisfy the threshold condition, determining, by the one or more processing devices, presence of an adverse condition associated with the device.
Type: Grant
Filed: April 14, 2021
Date of Patent: May 24, 2022
Assignee: EyeVerify Inc.
Inventors: Reza R. Derakhshani, Spandana Vemulapalli, Tetyana Anisimova
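The correlation test between frame-derived and sensor-reported illuminance can be sketched as follows; the mean-brightness proxy, the 0.5 correlation threshold, and the sample readings are assumptions for illustration only.

```python
import numpy as np

def adverse_condition(frame_illuminance, sensor_illuminance, threshold=0.5):
    """Flag an adverse condition (e.g., an injected or replayed camera feed)
    when illuminance estimated from image frames fails to correlate with the
    device's ambient light sensor."""
    f = np.asarray(frame_illuminance, dtype=float)
    s = np.asarray(sensor_illuminance, dtype=float)
    f = (f - f.mean()) / (f.std() + 1e-9)
    s = (s - s.mean()) / (s.std() + 1e-9)
    return float(np.mean(f * s)) < threshold

sensor = [100, 180, 90, 200, 120, 210]   # light-sensor readings (lux)
genuine = [32, 55, 30, 60, 38, 64]       # frame brightness tracks the sensor
injected = [50, 50, 51, 49, 50, 50]      # prerecorded feed: nearly constant
print(adverse_condition(genuine, sensor))
print(adverse_condition(injected, sensor))
```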
-
Patent number: 11126879
Abstract: Technology described herein includes a method that includes obtaining a set of color-coded sequences, each of which includes a sequence of colors. Each color-coded sequence has auto-correlation properties characterized by a merit factor larger than a first predetermined threshold, and cross-correlation properties among the color-coded sequences characterized by a demerit factor lower than a second predetermined threshold. A color-coded sequence is randomly selected from the set of color-coded sequences. A subject is illuminated in accordance with the sequence of colors in the selected color-coded sequence. A sequence of images of the subject is captured and temporally synchronized with illumination by the color-coded sequence. A filtered response image is generated from the sequence of images by a matched filtering process. Based on the filtered response image, it is determined that the subject is an alternative representation of a live person. In response, access to a secure system is prevented.
Type: Grant
Filed: April 8, 2021
Date of Patent: September 21, 2021
Assignee: EyeVerify, Inc.
Inventors: Spandana Vemulapalli, David Hirvonen
-
Publication number: 20200275271
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for continuous identity authentication based on gesture data. In one aspect, a method includes receiving gesture data corresponding to one or more gestures received on a touch screen of the mobile device from an individual, generating specific features extracted from the one or more gestures, comparing the specific features with gesture data collected on the mobile device from a user previously interacting with the mobile device, and verifying that an identity of the individual matches an identity of the previous user.
Type: Application
Filed: February 21, 2019
Publication date: August 27, 2020
Applicant: Alibaba Group Holding Limited
Inventors: Sashi Kanth Saripalle, Spandana Vemulapalli, Stephanie Firehammer
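The gesture-verification flow can be sketched with a few hand-picked stroke features; the features, distance rule, and threshold below are illustrative assumptions, and the touch samples are hypothetical data.

```python
import numpy as np

def gesture_features(stroke):
    """Per-stroke features: duration, path length, mean speed. These are
    simple stand-ins for the patent's extracted gesture features."""
    t = np.array([p[0] for p in stroke], dtype=float)
    xy = np.array([p[1:] for p in stroke], dtype=float)
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    duration = t[-1] - t[0]
    length = seg.sum()
    return np.array([duration, length, length / max(duration, 1e-9)])

def same_user(new_stroke, enrolled_strokes, threshold=2.0):
    """Verify identity by average distance (in units of the enrolled
    feature spread) between a new gesture and the enrolled user's mean."""
    feats = np.array([gesture_features(s) for s in enrolled_strokes])
    mu, sigma = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    z = np.abs((gesture_features(new_stroke) - mu) / sigma)
    return float(z.mean()) < threshold

# (time, x, y) touch samples; hypothetical swipes from the enrolled user
enrolled = [[(0.0, 0, 0), (0.1, 10, 0), (0.2, 20, 0)],
            [(0.0, 0, 0), (0.1, 11, 0), (0.2, 21, 1)],
            [(0.0, 0, 0), (0.1, 9, 1), (0.2, 19, 0)]]
impostor = [(0.0, 0, 0), (0.5, 5, 5), (1.0, 8, 9)]  # slower, shorter swipe
print(same_user(enrolled[0], enrolled[1:]))
print(same_user(impostor, enrolled))
```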