Patents by Inventor Michael Christian Nechyba

Michael Christian Nechyba is a named inventor on the following patent filings. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220254190
    Abstract: The present disclosure is directed to computer-implemented systems and methods for performing recognition over a network of devices. In general, the systems and methods implement a machine-learned recognizability model that can process information such as a person's voice, facial characteristics, or similar information to determine a recognizability score without necessarily generating or storing biometric information that could be used to identify the person. The recognizability score can act as a proxy for the quality of the information as a reference for biometric recognition performed on other devices in the network. Thus, a single device can be used to enroll a person in the network (e.g., by capturing a number of photographs of the person). Thereafter, the other devices can each use a sensor (e.g., a camera) to compare features of the reference information to the input received by that sensor.
    Type: Application
    Filed: August 14, 2019
    Publication date: August 11, 2022
    Inventors: Andrew Gallagher, Joseph Edward Roth, Michael Christian Nechyba
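As a loose illustration of the enrollment gating this abstract describes, the sketch below stands in for the machine-learned recognizability model with a trivial contrast heuristic; the function names, the heuristic, and the threshold are illustrative assumptions, not the patent's method:

```python
from statistics import pvariance

def recognizability_score(pixels):
    """Stand-in for the machine-learned recognizability model: map an
    image (a flat list of 0-255 gray values) to a score in [0, 1].
    Pixel variance serves as a crude proxy for reference quality;
    no biometric template is derived or stored."""
    return min(pvariance(pixels) / 2000.0, 1.0)

def enroll(photos, threshold=0.5):
    """Keep only the captured photos whose recognizability score clears
    the threshold; these become the reference set for other devices."""
    return [p for p in photos if recognizability_score(p) >= threshold]

flat = [128] * 64                              # uniform image: unusable reference
varied = [(i * 37) % 256 for i in range(64)]   # high-contrast image
assert recognizability_score(flat) == 0.0
assert recognizability_score(varied) > 0.5
assert enroll([flat, varied]) == [varied]
```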
  • Patent number: 9542976
    Abstract: A transformed video and a source video may be synchronized according to implementations disclosed herein to provide tag information to the device receiving the transformed version of the video. A synchronization signal may be computed on the source video and the transformed video using a statistic such as mean pixel intensity. The synchronization signals computed for the transformed video and source video may be compared to determine a transformed video reference point location for the requested tag information. The requested tag information may be provided to the device receiving the transformed version of the video.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: January 10, 2017
    Assignee: Google Inc.
    Inventors: Michael Andrew Sipe, Michael Christian Nechyba
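As a rough sketch of the synchronization idea in this abstract (the names and the error measure are my own, not the patent's), the code below computes a per-frame mean-intensity signal for both videos and slides one along the other to find the alignment offset, which can then map a tag's source-frame reference point into the transformed video:

```python
def mean_intensity_signal(frames):
    """Per-frame mean pixel intensity (each frame is a flat list of gray values)."""
    return [sum(f) / len(f) for f in frames]

def best_offset(source_sig, clip_sig):
    """Slide the transformed video's signal along the source video's signal
    and return the offset with the smallest sum of squared differences."""
    n = len(clip_sig)
    best, best_err = 0, float("inf")
    for off in range(len(source_sig) - n + 1):
        err = sum((source_sig[off + i] - clip_sig[i]) ** 2 for i in range(n))
        if err < best_err:
            best, best_err = off, err
    return best

# Source "video" whose brightness ramps frame by frame, and a transformed
# version that is simply a three-frame excerpt starting at frame 3.
source = [[v] * 4 for v in (10, 20, 30, 40, 50, 60, 70)]
clip = [[v] * 4 for v in (40, 50, 60)]
offset = best_offset(mean_intensity_signal(source), mean_intensity_signal(clip))
assert offset == 3  # a tag at source frame k maps to clip frame k - 3
```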
  • Patent number: 9524421
    Abstract: Facial recognition can be used to determine whether or not a user has access privileges to a device such as a smartphone. It may be possible to spoof an authenticated user's image using a picture of the user. Implementations disclosed herein utilize time-of-flight and/or reflectivity measurements of an authenticated user to compare to such values obtained for an image of an object. If the time-of-flight distance and/or reflectivity distance for the object match a measured distance, then the person (i.e., object) may be granted access to the device.
    Type: Grant
    Filed: December 9, 2013
    Date of Patent: December 20, 2016
    Assignee: Google Inc.
    Inventors: Steven James Ross, Brian Patrick Williams, Henry Will Schneiderman, Michael Christian Nechyba
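A minimal sketch of the anti-spoofing check this abstract describes, assuming the comparison reduces to a distance-consistency test (the tolerance and the way the image-based distance is obtained are assumptions for illustration):

```python
def grant_access(tof_distance_m, image_distance_m, tolerance_m=0.15):
    """Grant access only when the time-of-flight distance to the object
    agrees with the distance implied by the face's apparent size in the
    image. A flat photograph held near the camera breaks this agreement:
    the sensor sees a close object while the pictured face appears far."""
    return abs(tof_distance_m - image_distance_m) <= tolerance_m

assert grant_access(0.45, 0.50)        # live face: measurements agree
assert not grant_access(0.20, 0.50)    # printed photo held at 0.2 m
```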
  • Patent number: 9177130
    Abstract: An example method includes capturing, by a camera of a computing device, an image including at least a face of a user, calculating a face template of the face of the user in the image, and analyzing the face template to determine whether the face includes at least one of a removable facial feature that decreases a level of distinctiveness between two faces and a non-removable facial feature that decreases a level of distinctiveness between two faces. When the face includes the removable facial feature, the method further includes outputting a notification for the user to remove the removable facial feature. When the face includes the non-removable facial feature, the method further includes adjusting a first similarity score threshold to a second similarity score threshold.
    Type: Grant
    Filed: January 8, 2013
    Date of Patent: November 3, 2015
    Assignee: Google Inc.
    Inventors: Michael Christian Nechyba, Michael Andrew Sipe
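The two-branch policy in this abstract can be sketched as follows; the feature sets, threshold values, and the direction of the adjustment (the abstract only says the threshold is adjusted) are all illustrative assumptions:

```python
REMOVABLE = {"sunglasses", "scarf", "hat"}    # illustrative feature sets
NON_REMOVABLE = {"beard", "birthmark"}

def evaluate_template(features, first_threshold=0.70, second_threshold=0.85):
    """For removable distinctiveness-reducing features, emit a notification
    asking the user to remove them; for non-removable ones, adjust the
    similarity score threshold from the first value to the second."""
    notices = [f"Please remove your {f}" for f in features if f in REMOVABLE]
    if any(f in NON_REMOVABLE for f in features):
        return notices, second_threshold
    return notices, first_threshold

notices, threshold = evaluate_template({"sunglasses", "beard"})
assert notices == ["Please remove your sunglasses"]
assert threshold == 0.85
```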
  • Patent number: 9158904
    Abstract: An example method includes capturing, by an image capture device of a computing device, an image of a face of a user. The method further includes detecting, by the computing device, whether a distance between the computing device and an object represented by at least a portion of the image is less than a threshold distance, and, when the detected distance is less than the threshold distance, denying authentication to the user with respect to accessing one or more functionalities controlled by the computing device, where the authentication is denied independent of performing facial recognition based at least in part on the captured image.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: October 13, 2015
    Assignee: Google Inc.
    Inventors: Steven James Ross, Henry Will Schneiderman, Michael Christian Nechyba, Yong Zhao
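The key property in this abstract, denial "independent of performing facial recognition", can be sketched as a short-circuit guard; the threshold value and function names are assumptions:

```python
def authenticate(measured_distance_m, image, run_facial_recognition,
                 threshold_distance_m=0.25):
    """Deny authentication outright when the object is closer than the
    threshold distance; facial recognition is never even invoked."""
    if measured_distance_m < threshold_distance_m:
        return False
    return run_facial_recognition(image)

calls = []
def recognizer(img):
    calls.append(img)
    return True

assert authenticate(0.10, "frame", recognizer) is False
assert calls == []                   # recognizer was never consulted
assert authenticate(0.60, "frame", recognizer) is True
```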
  • Patent number: 9117109
    Abstract: An example method includes receiving a first image and a second image of a face of a user, where one or both images have been granted a match by facial recognition. The method further includes detecting a liveness gesture based on at least one of a yaw angle of the second image relative to the first image and a pitch angle of the second image relative to the first image, where the yaw angle corresponds to a transition along a horizontal axis, and where the pitch angle corresponds to a transition along a vertical axis. The method further includes generating a liveness score based on a yaw angle magnitude and/or a pitch angle magnitude, comparing the liveness score to a threshold value, and determining, based on the comparison, whether to deny authentication to the user with respect to accessing one or more functionalities controlled by the computing device.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: Michael Christian Nechyba, Henry Will Schneiderman, Michael Andrew Sipe
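The scoring step in this abstract can be sketched as below; the particular way the yaw and pitch magnitudes are combined, and the threshold value, are assumptions (the abstract only requires a score based on the magnitudes):

```python
def liveness_score(yaw_deg, pitch_deg):
    """Combine the yaw and pitch magnitudes of the head-pose change
    between the two facial-recognition-matched images."""
    return max(abs(yaw_deg), abs(pitch_deg))

def permit_authentication(yaw_deg, pitch_deg, threshold_deg=10.0):
    """Deny authentication when the pose change is too small to count
    as a liveness gesture, as it would be for a static photograph."""
    return liveness_score(yaw_deg, pitch_deg) >= threshold_deg

assert not permit_authentication(0.0, 0.0)   # static spoof: no movement
assert permit_authentication(15.0, 2.0)      # head turn (yaw)
assert permit_authentication(1.0, 12.0)      # nod (pitch)
```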
  • Publication number: 20150161434
    Abstract: Facial recognition can be used to determine whether or not a user has access privileges to a device such as a smartphone. It may be possible to spoof an authenticated user's image using a picture of the user. Implementations disclosed herein utilize time-of-flight and/or reflectivity measurements of an authenticated user to compare to such values obtained for an image of an object. If the time-of-flight distance and/or reflectivity distance for the object match a measured distance, then the person (i.e., object) may be granted access to the device.
    Type: Application
    Filed: December 9, 2013
    Publication date: June 11, 2015
    Applicant: Google Inc.
    Inventors: Steven James Ross, Brian Patrick Williams, Henry Will Schneiderman, Michael Christian Nechyba
  • Patent number: 9047538
    Abstract: An example method includes capturing, by a camera of a mobile computing device, an image, determining whether the image includes a representation of at least a portion of a face, and, when the image includes the representation of at least the portion of the face, analyzing characteristics of the image. The characteristics include at least one of a tonal distribution of the image that is associated with a darkness-based mapping of a plurality of pixels of the image, and a plurality of spatial frequencies of the image that are associated with a visual transition between adjacent pixels of the image. The method further includes classifying, by the mobile computing device, a quality of the image based at least in part on the analyzed characteristics of the image.
    Type: Grant
    Filed: May 10, 2013
    Date of Patent: June 2, 2015
    Assignee: Google Inc.
    Inventors: Michael Christian Nechyba, Michael Andrew Sipe
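The two image characteristics named in this abstract can be approximated very roughly as follows; both measures and both cutoffs are crude illustrative stand-ins, not the patent's actual analysis:

```python
def tonal_spread(pixels):
    """Fraction of the 0-255 range actually used: a crude stand-in for
    the darkness-based tonal distribution named in the abstract."""
    return (max(pixels) - min(pixels)) / 255.0

def transition_energy(pixels):
    """Mean absolute difference between adjacent pixels: a crude stand-in
    for spatial-frequency content (visual transitions between pixels)."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)

def classify_quality(pixels, min_spread=0.3, min_energy=2.0):
    """Label the image 'good' only if it is neither washed out nor flat."""
    ok = tonal_spread(pixels) >= min_spread and transition_energy(pixels) >= min_energy
    return "good" if ok else "poor"

sharp = [0, 255] * 8       # strong contrast, strong transitions
washed = [120, 121] * 8    # nearly uniform gray
assert classify_quality(sharp) == "good"
assert classify_quality(washed) == "poor"
```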
  • Publication number: 20150078729
    Abstract: A transformed video and a source video may be synchronized according to implementations disclosed herein to provide tag information to the device receiving the transformed version of the video. A synchronization signal may be computed on the source video and the transformed video using a statistic such as mean pixel intensity. The synchronization signals computed for the transformed video and source video may be compared to determine a transformed video reference point location for the requested tag information. The requested tag information may be provided to the device receiving the transformed version of the video.
    Type: Application
    Filed: September 13, 2013
    Publication date: March 19, 2015
    Applicant: Google Inc.
    Inventors: Michael Andrew Sipe, Michael Christian Nechyba
  • Publication number: 20140307929
    Abstract: An example method includes receiving a first image and a second image of a face of a user, where one or both images have been granted a match by facial recognition. The method further includes detecting a liveness gesture based on at least one of a yaw angle of the second image relative to the first image and a pitch angle of the second image relative to the first image, where the yaw angle corresponds to a transition along a horizontal axis, and where the pitch angle corresponds to a transition along a vertical axis. The method further includes generating a liveness score based on a yaw angle magnitude and/or a pitch angle magnitude, comparing the liveness score to a threshold value, and determining, based on the comparison, whether to deny authentication to the user with respect to accessing one or more functionalities controlled by the computing device.
    Type: Application
    Filed: June 25, 2014
    Publication date: October 16, 2014
    Inventors: Michael Christian Nechyba, Henry Will Schneiderman, Michael Andrew Sipe
  • Patent number: 8798336
    Abstract: An example method includes receiving a first image and a second image of a face of a user, where one or both images have been granted a match by facial recognition. The method further includes detecting a liveness gesture based on at least one of a yaw angle of the second image relative to the first image and a pitch angle of the second image relative to the first image, where the yaw angle corresponds to a transition along a horizontal axis, and where the pitch angle corresponds to a transition along a vertical axis. The method further includes generating a liveness score based on a yaw angle magnitude and/or a pitch angle magnitude, comparing the liveness score to a threshold value, and determining, based on the comparison, whether to deny authentication to the user with respect to accessing one or more functionalities controlled by the computing device.
    Type: Grant
    Filed: September 23, 2013
    Date of Patent: August 5, 2014
    Assignee: Google Inc.
    Inventors: Michael Christian Nechyba, Henry Will Schneiderman, Michael Andrew Sipe
  • Publication number: 20140188997
    Abstract: The present disclosure includes systems and methods for creating and sharing inline commentary relating to media within an online community, for example, a social network. The inline commentary can be one or more types of media, for example, text, audio, image, video, URL link, etc. In some implementations, the systems and methods either receive media that is live or pre-recorded, permit viewing by users and receive selective added commentary by users inline. The systems and methods are configured to send one or more notifications regarding the commentary. In some implementations, the systems and methods are configured to receive responses by other users to the initial commentary provided by a particular user.
    Type: Application
    Filed: December 31, 2012
    Publication date: July 3, 2014
    Inventors: Henry Will Schneiderman, Michael Andrew Sipe, Steven James Ross, Brian Ronald Colonna, Danielle Marie Millett, Uriel Gerardo Rodriguez, Michael Christian Nechyba, Mikkel Crone Köser, Ankit Jain
  • Publication number: 20140016837
    Abstract: An example method includes receiving a first image and a second image of a face of a user, where one or both images have been granted a match by facial recognition. The method further includes detecting a liveness gesture based on at least one of a yaw angle of the second image relative to the first image and a pitch angle of the second image relative to the first image, where the yaw angle corresponds to a transition along a horizontal axis, and where the pitch angle corresponds to a transition along a vertical axis. The method further includes generating a liveness score based on a yaw angle magnitude and/or a pitch angle magnitude, comparing the liveness score to a threshold value, and determining, based on the comparison, whether to deny authentication to the user with respect to accessing one or more functionalities controlled by the computing device.
    Type: Application
    Filed: September 23, 2013
    Publication date: January 16, 2014
    Applicant: Google Inc.
    Inventors: Michael Christian Nechyba, Henry Will Schneiderman, Michael Andrew Sipe
  • Publication number: 20130336527
    Abstract: An example method includes capturing, by a camera of a mobile computing device, an image, determining whether the image includes a representation of at least a portion of a face, and, when the image includes the representation of at least the portion of the face, analyzing characteristics of the image. The characteristics include at least one of a tonal distribution of the image that is associated with a darkness-based mapping of a plurality of pixels of the image, and a plurality of spatial frequencies of the image that are associated with a visual transition between adjacent pixels of the image. The method further includes classifying, by the mobile computing device, a quality of the image based at least in part on the analyzed characteristics of the image.
    Type: Application
    Filed: May 10, 2013
    Publication date: December 19, 2013
    Applicant: Google Inc.
    Inventors: Michael Christian Nechyba, Michael Andrew Sipe
  • Patent number: 8611616
    Abstract: An example method includes capturing, by an image capture device of a computing device, an image of a face of a user. The method further includes detecting, by the computing device, whether a distance between the computing device and an object represented by at least a portion of the image is less than a threshold distance, and, when the detected distance is less than the threshold distance, denying authentication to the user with respect to accessing one or more functionalities controlled by the computing device, where the authentication is denied independent of performing facial recognition based at least in part on the captured image.
    Type: Grant
    Filed: January 10, 2013
    Date of Patent: December 17, 2013
    Assignee: Google Inc.
    Inventors: Steven James Ross, Henry Will Schneiderman, Michael Christian Nechyba, Yong Zhao
  • Patent number: 8559684
    Abstract: In general, aspects of the present disclosure are directed to techniques for adjusting the threshold similarity score to match the facial features of an authorized user in a facial recognition system. Instead of a uniform threshold similarity score, representations of authorized faces enrolled in the computing device may be assigned a custom threshold similarity score based at least in part on the distinctiveness of the facial features in the representations of the authorized faces. A computing device can determine a distinctiveness of facial features of an enrolling user, and can determine a threshold similarity score associated with the facial features of the enrolling user based at least in part on the distinctiveness of the facial features of the enrolling user.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: October 15, 2013
    Assignee: Google Inc.
    Inventor: Michael Christian Nechyba
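The per-user threshold assignment in this abstract might look like the sketch below. The direction of the mapping, where a less distinctive face (easier to impersonate) receives a stricter threshold, plus the base and adjustment values, are assumptions; the abstract only says the threshold is based on distinctiveness:

```python
def enrollment_threshold(distinctiveness, base=0.70, max_adjust=0.15):
    """Assign a custom similarity score threshold at enrollment time,
    instead of a uniform threshold for all enrolled faces."""
    d = max(0.0, min(1.0, distinctiveness))   # clamp to [0, 1]
    return base + max_adjust * (1.0 - d)

assert abs(enrollment_threshold(1.0) - 0.70) < 1e-9   # distinctive face
assert abs(enrollment_threshold(0.0) - 0.85) < 1e-9   # generic face
```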
  • Patent number: 8542879
    Abstract: An example method includes receiving a first image and a second image of a face of a user, where one or both images have been granted a match by facial recognition. The method further includes detecting a liveness gesture based on at least one of a yaw angle of the second image relative to the first image and a pitch angle of the second image relative to the first image, where the yaw angle corresponds to a transition along a horizontal axis, and where the pitch angle corresponds to a transition along a vertical axis. The method further includes generating a liveness score based on a yaw angle magnitude and/or a pitch angle magnitude, comparing the liveness score to a threshold value, and determining, based on the comparison, whether to deny authentication to the user with respect to accessing one or more functionalities controlled by the computing device.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: September 24, 2013
    Assignee: Google Inc.
    Inventors: Michael Christian Nechyba, Henry Will Schneiderman, Michael Andrew Sipe
  • Publication number: 20130247175
    Abstract: An example method includes capturing, by a camera of a computing device, an image including at least a face of a user, calculating a face template of the face of the user in the image, and analyzing the face template to determine whether the face includes at least one of a removable facial feature that decreases a level of distinctiveness between two faces and a non-removable facial feature that decreases a level of distinctiveness between two faces. When the face includes the removable facial feature, the method further includes outputting a notification for the user to remove the removable facial feature. When the face includes the non-removable facial feature, the method further includes adjusting a first similarity score threshold to a second similarity score threshold.
    Type: Application
    Filed: January 8, 2013
    Publication date: September 19, 2013
    Inventors: Michael Christian Nechyba, Michael Andrew Sipe
  • Patent number: 8515139
    Abstract: An example method includes capturing, by a camera of a computing device, an image including at least a face of a user, calculating a face template of the face of the user in the image, and analyzing the face template to determine whether the face includes at least one of a removable facial feature that decreases a level of distinctiveness between two faces and a non-removable facial feature that decreases a level of distinctiveness between two faces. When the face includes the removable facial feature, the method further includes outputting a notification for the user to remove the removable facial feature. When the face includes the non-removable facial feature, the method further includes adjusting a first similarity score threshold to a second similarity score threshold.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: August 20, 2013
    Assignee: Google Inc.
    Inventors: Michael Christian Nechyba, Michael Andrew Sipe
  • Patent number: 8457367
    Abstract: An example method includes receiving first and second images of a face of a user, where one or both images have been granted a match by facial recognition. The method includes identifying at least one facial landmark in the first image and at least one corresponding facial landmark in the second image, and extracting a first sub-image from the first image, where the first sub-image includes a representation of the at least one facial landmark. The method includes extracting a second sub-image from the second image, where the second sub-image includes a representation of the at least one corresponding facial landmark, detecting a facial gesture by determining whether a sufficient difference exists between the second sub-image and first sub-image to indicate the facial gesture, and determining, based on detecting the facial gesture, whether to deny authentication to the user with respect to accessing functionalities controlled by the computing device.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: June 4, 2013
    Assignee: Google Inc.
    Inventors: Michael Andrew Sipe, Henry Will Schneiderman, Michael Christian Nechyba
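The sub-image extraction and difference test in this abstract can be sketched as follows; the mean-absolute-difference measure and the threshold are assumptions standing in for whatever "sufficient difference" test the patent actually claims:

```python
def extract_sub_image(image, top, left, height, width):
    """Crop the region around a facial landmark (image is a list of rows)."""
    return [row[left:left + width] for row in image[top:top + height]]

def gesture_detected(sub1, sub2, min_diff=10.0):
    """Mean absolute pixel difference between the two landmark crops; a
    sufficiently large change (e.g. an eye closing) counts as a facial
    gesture indicating liveness."""
    flat1 = [p for row in sub1 for p in row]
    flat2 = [p for row in sub2 for p in row]
    diff = sum(abs(a - b) for a, b in zip(flat1, flat2)) / len(flat1)
    return diff >= min_diff

# Toy landmark crops from two frames: an "eye open" and an "eye closed" patch.
eye1 = extract_sub_image([[0, 0, 0], [0, 200, 200], [0, 40, 40]], 1, 1, 2, 2)
eye2 = [[90, 90], [90, 90]]
assert gesture_detected(eye1, eye2)        # blink detected
assert not gesture_detected(eye1, eye1)    # identical crops: no gesture
```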