Using A Facial Characteristic Patents (Class 382/118)
  • Patent number: 10296791
    Abstract: The present disclosure is directed towards a compact, mobile apparatus for iris image acquisition, adapted to address effects of ocular dominance in the subject and to guide positioning of the subject's iris for the image acquisition. The apparatus may include a sensor for acquiring an iris image from a subject. A compact mirror may be oriented relative to a dominant eye of the subject, and sized to present an image of a single iris to the subject when the apparatus is positioned at a suitable distance for image acquisition. The mirror may assist the subject in positioning the iris for iris image acquisition. The mirror may be positioned between the sensor and the iris during iris image acquisition, and transmit a portion of light reflected off the iris to the sensor.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: May 21, 2019
    Assignee: EyeLock LLC
    Inventors: Keith J. Hanna, Gary Alan Greene, David James Hirvonen, George Herbert Needham Riddle
  • Patent number: 10296783
    Abstract: An image processing device includes a facial organ information detection unit that detects facial organ information, i.e., the position of a facial organ of an object, from an input image; a face direction information calculation unit that calculates face direction information of the object from the facial organ information; and an arbitrary face direction image generation unit that generates an image in which the face direction of the object is changed, based on the facial organ information and the face direction information. When the face direction information indicates that the face is inclined, the arbitrary face direction image generation unit generates the arbitrary face direction image after correcting the face direction information based on front facial organ information, i.e., the facial organ arrangement in the front face of the object.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: May 21, 2019
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Nao Tokui, Ikuko Tsubaki
  • Patent number: 10296811
    Abstract: A user's collection of images may be analyzed to identify people's faces within the images, then create clusters of similar faces, where each of the clusters may represent a person. The clusters may be ranked in order of size to determine a relative importance of the associated person to the user. The ranking may be used in many social networking applications to filter and present content that may be of interest to the user. In one use scenario, the clusters may be used to identify images from a second user's image collection, where the identified images may be pertinent or interesting to the first user. The ranking may also be a function of user interactions with the images, as well as other input not related to the images. The ranking may be incrementally updated when new images are added to the user's collection.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: May 21, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eyal Krupka, Igor Abramovski, Igor Kviatkovsky
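    A minimal sketch of the clustering-and-ranking idea described in patent 10296811 above, assuming face embeddings have already been extracted from the images; the similarity threshold, weights, and helper names are illustrative, not taken from the patent:
    ```python
    import numpy as np

    def cluster_faces(embeddings, threshold=0.7):
        """Greedily group face embeddings whose cosine similarity to a cluster
        centroid exceeds `threshold`; each resulting cluster ~ one person."""
        clusters = []  # each entry: {"members": [indices], "centroid": unit vector}
        for idx, emb in enumerate(embeddings):
            emb = emb / np.linalg.norm(emb)
            best, best_sim = None, threshold
            for c in clusters:
                sim = float(np.dot(emb, c["centroid"]))
                if sim > best_sim:
                    best, best_sim = c, sim
            if best is None:
                clusters.append({"members": [idx], "centroid": emb})
            else:
                best["members"].append(idx)
                n = len(best["members"])
                centroid = (best["centroid"] * (n - 1) + emb) / n
                best["centroid"] = centroid / np.linalg.norm(centroid)
        return clusters

    def rank_clusters(clusters, interactions=None, size_weight=1.0, interaction_weight=0.5):
        """Rank clusters (people) by cluster size plus optional per-cluster
        user-interaction counts, largest score first."""
        interactions = interactions or {}
        scored = [(size_weight * len(c["members"]) + interaction_weight * interactions.get(i, 0), i)
                  for i, c in enumerate(clusters)]
        return [i for _, i in sorted(scored, reverse=True)]

    # toy usage: five random "embeddings"
    rng = np.random.default_rng(0)
    clusters = cluster_faces(rng.normal(size=(5, 128)))
    print(rank_clusters(clusters))
    ```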
  • Patent number: 10297233
    Abstract: A method for modifying a presentation of content. The method includes a computer processor determining whether a user of a computing device wears eyewear based, at least in part, on analyzing an image of the face of the user. The method further includes responding to determining that the user wears eyewear, by determining a set of characteristics of the eyewear of the user. The method further includes determining a set of environmental factors in proximity of the user and the computing device. The method further includes modifying a presentation of visual content on the computing device based, on the set of characteristics of the eyewear of the user and the determined set of environmental factors in proximity of the user and the computing device.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 21, 2019
    Assignee: International Business Machines Corporation
    Inventors: James E. Carey, Jim C. Chen, Rafal P. Konik, Ryan L. Rossiter, John M. Santosuosso
  • Patent number: 10298810
    Abstract: An authentication device includes: an image capturing unit that captures an image of a person around an apparatus including the authentication device; an authentication unit that performs authentication using a facial image captured by the image capturing unit; and a selection unit that, when the image capturing unit captures facial images of plural persons, selects from those facial images the facial image of the person determined to have a high possibility of using the apparatus.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: May 21, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Naoya Nobutani, Masafumi Ono, Manabu Hayashi, Kunitoshi Yamamoto, Toru Suzuki
  • Patent number: 10289897
    Abstract: Disclosed is an apparatus for face verification. The apparatus may comprise a feature extraction unit and a verification unit. In one embodiment, the feature extraction unit comprises a plurality of convolutional feature extraction systems trained with different face training sets, wherein each of the systems comprises a plurality of cascaded convolutional, pooling, locally-connected, and fully-connected feature extraction units configured to extract facial features for face verification from face regions of face images. An output unit of the unit cascade, which may be a fully-connected unit in one embodiment of the present application, is connected to at least one of the previous convolutional, pooling, locally-connected, or fully-connected units, and is configured to extract facial features (referred to as deep identification-verification features, or DeepID2) for face verification from the facial features in the connected units.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: May 14, 2019
    Assignee: Beijing SenseTime Technology Development Co., Ltd
    Inventors: Xiaoou Tang, Yi Sun, Xiaogang Wang
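    A loose PyTorch sketch of the kind of cascade the DeepID2 abstract above describes, in which the output feature unit is connected to more than one of the preceding units. The layer sizes, the 55x47 crop, and the choice to connect the feature layer to the last two stages are assumptions for illustration, not the patented architecture:
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeepID2Sketch(nn.Module):
        """Cascaded conv/pool layers; the final feature layer takes input from
        the last *two* stages, echoing the 'output unit connected to previous
        units' wording of the abstract."""
        def __init__(self, feat_dim=160):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 20, 4)
            self.conv2 = nn.Conv2d(20, 40, 3)
            self.conv3 = nn.Conv2d(40, 60, 3)
            self.conv4 = nn.Conv2d(60, 80, 2)
            self.pool = nn.MaxPool2d(2)
            # lazy linears avoid hand-computing the flattened sizes
            self.fc_from_conv3 = nn.LazyLinear(feat_dim)
            self.fc_from_conv4 = nn.LazyLinear(feat_dim)

        def forward(self, x):                       # x: (N, 3, 55, 47) face crop
            h1 = self.pool(F.relu(self.conv1(x)))
            h2 = self.pool(F.relu(self.conv2(h1)))
            h3 = self.pool(F.relu(self.conv3(h2)))
            h4 = F.relu(self.conv4(h3))
            # identity feature built from two previous stages
            return (self.fc_from_conv3(torch.flatten(h3, 1))
                    + self.fc_from_conv4(torch.flatten(h4, 1)))

    feats = DeepID2Sketch()(torch.randn(2, 3, 55, 47))
    print(feats.shape)   # torch.Size([2, 160])
    ```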
  • Patent number: 10291564
    Abstract: A social media platform is searched by a computer to identify a set of duplicate images including a first image that was posted to the platform by a first user and a second image that was posted to the platform by a second user. A notification is provided by the computer to the first user and the second user indicating that the set of duplicate images exists. A host is selected by the computer for a single consolidated image of the set of duplicate images. The first image or the second image is used by the computer to provide the single consolidated image. One or more social media interactions associated with the first image are consolidated by the computer with one or more social media interactions associated with the second image to generate a single set of social media interactions for the single consolidated image.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: May 14, 2019
    Assignee: International Business Machines Corporation
    Inventors: Robert H. Grant, Jeremy A. Greenberger, Trudy L. Hewitt, Jana H. Jenkins
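    A minimal sketch of the duplicate-detection-and-consolidation flow from patent 10291564 above, using a crude average hash as a stand-in for whatever image comparison the platform would actually use; the hash, the Hamming threshold, and the host-selection rule are illustrative assumptions:
    ```python
    import numpy as np

    def average_hash(gray, hash_size=8):
        """Crude perceptual hash: block-average the grayscale image down to
        hash_size x hash_size, then threshold each block at the overall mean."""
        h, w = gray.shape
        g = gray[:h - h % hash_size, :w - w % hash_size]
        blocks = g.reshape(hash_size, g.shape[0] // hash_size,
                           hash_size, g.shape[1] // hash_size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).flatten()

    def is_duplicate(img_a, img_b, max_hamming=5):
        """Treat two posted images as duplicates if their hashes differ in at
        most `max_hamming` bits."""
        return int(np.sum(average_hash(img_a) != average_hash(img_b))) <= max_hamming

    def consolidate(post_a, post_b):
        """Pick a host for the single consolidated image and merge the two
        posts' social interactions into one set."""
        host = post_a if len(post_a["interactions"]) >= len(post_b["interactions"]) else post_b
        merged = dict(host)
        merged["interactions"] = post_a["interactions"] | post_b["interactions"]
        return merged

    a = {"user": "alice", "image": np.random.rand(64, 64), "interactions": {"like:bob"}}
    b = {"user": "carol", "image": a["image"] + 0.001, "interactions": {"like:dave", "comment:eve"}}
    if is_duplicate(a["image"], b["image"]):
        print(consolidate(a, b)["user"], "hosts the consolidated image")
    ```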
  • Patent number: 10282818
    Abstract: An image deformation method and an image deformation device are provided. The method includes: acquiring an original image, and acquiring a target shape; deforming the original image into a target image based on a ratio of deformation at a center of the original image to deformation at an edge of the original image, wherein the further the edge of the target image is away from the center of the target image, the greater a deforming degree of the edge of the target image is, and a shape of the target image is the target shape; and displaying the target image.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: May 7, 2019
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiaoyi Chen, Yang Lu, Hao Feng
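    A small sketch of the centre-versus-edge deformation idea from patent 10282818 above: a nearest-neighbour remap whose displacement grows with distance from the image centre, so the centre is almost untouched and the border deforms the most. The quadratic falloff and the strength value are illustrative choices, not the patent's mapping:
    ```python
    import numpy as np

    def deform(img, strength=0.3):
        """Remap pixels radially; the deformation factor is 0 at the centre and
        `strength` at the corners, growing quadratically with radius."""
        h, w = img.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        dy, dx = yy - cy, xx - cx
        r = np.hypot(dy, dx)
        r_max = np.hypot(cy, cx)
        factor = 1.0 + strength * (r / r_max) ** 2   # stronger toward the edge
        src_y = np.clip(cy + dy / factor, 0, h - 1).astype(int)
        src_x = np.clip(cx + dx / factor, 0, w - 1).astype(int)
        return img[src_y, src_x]                      # nearest-neighbour lookup

    print(deform(np.random.rand(240, 320, 3)).shape)  # (240, 320, 3)
    ```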
  • Patent number: 10282610
    Abstract: An eye tracking method comprising: capturing image data by an image sensor; determining a region of interest as a subarea or disconnected subareas of said sensor which is to be read out from said sensor to perform an eye tracking based on the read out image data; wherein said determining said region of interest comprises: a) initially reading out only a part of the area of said sensor; b) searching the image data of said initially read out part for one or more features representing the eye position and/or the head position of a subject to be tracked; c) if said search for one or more features has been successful, determining the region of interest based on the location of the successfully searched one or more features, and d) if said search for one or more features has not been successful, reading out a further part of said sensor to perform a search for one or more features representing the eye position and/or the head position based on said further part.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: May 7, 2019
    Assignee: SENSOMOTORIC INSTRUMENTS GESELLSCHAFT FUR INNOVATIVE SENSORIK MBH
    Inventors: Stefan Ronnecke, Thomas Jablonski, Christian Villwock, Walter Nistico
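    A toy sketch of the region-of-interest logic in patent 10282610 above: read out only part of the sensor, search it for an eye feature, and widen the readout until the feature is found. The pupil detector, block sizes, and readout callback are all stand-ins:
    ```python
    import numpy as np

    def find_pupil(block):
        """Toy feature search: take the darkest pixel as the pupil, accepted only
        if it is clearly darker than the block average."""
        y, x = np.unravel_index(np.argmin(block), block.shape)
        return (y, x) if block[y, x] < 0.5 * block.mean() else None

    def determine_roi(read_block, sensor_h, sensor_w, roi=200, step=240):
        """read_block(y0, y1, x0, x1) stands for the partial sensor readout."""
        size = min(sensor_h, sensor_w) // 4              # a) initial partial readout
        while True:
            y1, x1 = min(size, sensor_h), min(size, sensor_w)
            hit = find_pupil(read_block(0, y1, 0, x1))   # b) search the partial image
            if hit is not None:                          # c) found: centre ROI on it
                y, x = hit
                return (max(0, y - roi // 2), min(sensor_h, y + roi // 2),
                        max(0, x - roi // 2), min(sensor_w, x + roi // 2))
            if y1 == sensor_h and x1 == sensor_w:        # whole sensor read, give up
                return None
            size += step                                 # d) not found: read further

    # toy sensor: bright frame with one dark, pupil-like spot
    frame = np.ones((960, 1280)); frame[600, 900] = 0.0
    print(determine_roi(lambda a, b, c, d: frame[a:b, c:d], 960, 1280))
    ```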
  • Patent number: 10282609
    Abstract: An identity verification method and an identity verification apparatus are provided. A face image sequence of a user is analyzed to determine whether a biological feature conforms to a preset feature. An input interface is displayed after the biological feature conforms to the preset feature. An eye tracking detection is then executed on the face image sequence to detect a blinking movement of the user, and a mental verification is executed using the blinking movement. The invention combines the biological feature with the eye tracking detection for identity verification, which not only tightens security but also diversifies operation.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: May 7, 2019
    Assignee: UTECHZONE CO., LTD.
    Inventor: Chia-Chun Tsou
  • Patent number: 10282619
    Abstract: An information processing apparatus configured to detect an object from which an individual is identifiable from captured image data, store information from which the object is restorable in memory and transmit image data generated by omitting the information regarding the object to a server. The information processing apparatus also detects the existence of a wireless terminal and controls deletion of the information regarding the object based on a privacy level associated with the wireless terminal.
    Type: Grant
    Filed: December 5, 2014
    Date of Patent: May 7, 2019
    Assignee: SONY CORPORATION
    Inventors: Kazuyuki Sakoda, Masakazu Yajima, Mitsuru Takehara, Yuki Koga, Tomoya Onuma, Akira Tange, Takatoshi Nakamura
  • Patent number: 10284505
    Abstract: A social media platform is searched by a computer to identify a set of duplicate images including a first image that was posted to the platform by a first user and a second image that was posted to the platform by a second user. A notification is provided by the computer to the first user and the second user indicating that the set of duplicate images exists. A host is selected by the computer for a single consolidated image of the set of duplicate images. The first image or the second image is used by the computer to provide the single consolidated image. One or more social media interactions associated with the first image are consolidated by the computer with one or more social media interactions associated with the second image to generate a single set of social media interactions for the single consolidated image.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: May 7, 2019
    Assignee: International Business Machines Corporation
    Inventors: Robert H. Grant, Jeremy A. Greenberger, Trudy L. Hewitt, Jana H. Jenkins
  • Patent number: 10282599
    Abstract: Embodiments of the invention provide a method, system and computer program product for video sentiment analysis in video messaging. In an embodiment of the invention, a method for video sentiment analysis in video messaging includes receiving different video contributions to a thread in a social system executing in memory of a computer and sensing from a plurality of the video contributions a contributor sentiment. Thereafter, a sentiment value for the different video contributions is computed and a sentiment value for a selected one of the video contributions is displayed in a user interface to the thread for an end user contributing a new video contribution to the thread.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: May 7, 2019
    Assignee: International Business Machines Corporation
    Inventors: Liam Harpur, Erik H. Katzen, Sumit Patel, John Rice
  • Patent number: 10282592
    Abstract: A face detecting method and a face detecting system are provided. The face detecting method includes the following steps: At least one original image block is received. The original image block is transformed by a transforming unit to obtain a plurality of different transformed image blocks. Whether each of the transformed image blocks contains a face is detected by a detecting unit according to only one identical face database and a detecting result value is outputted accordingly. The transformed image blocks are detected by a plurality of parallel processing cores. Whether a maximum of the detecting result values is larger than a threshold value is determined by a determiner. If the maximum of the detecting result values is larger than the threshold value, then the determiner deems that the original image block contains a face.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: May 7, 2019
    Assignee: iCATCH TECHNOLOGY INC.
    Inventor: Min-Jung Huang
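    A minimal sketch of the detection scheme in patent 10282592 above: the same detector (one face database) scores several transformed versions of an image block in parallel, and the block is deemed to contain a face if the maximum score clears a threshold. The transforms, the dummy detector, and the thread pool are illustrative stand-ins:
    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def transforms(block):
        """A few simple geometric transforms of the original image block."""
        return [block, np.fliplr(block), np.rot90(block), np.rot90(block, 3)]

    def detector_score(block):
        """Stand-in for the single-database face detector: returns a scalar
        confidence (here just a brightness statistic, for illustration)."""
        return float(block.mean())

    def block_contains_face(block, threshold=0.6, workers=4):
        """Score every transformed block in parallel and take the maximum."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            scores = list(pool.map(detector_score, transforms(block)))
        return max(scores) > threshold

    print(block_contains_face(np.random.rand(64, 64)))
    ```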
  • Patent number: 10275709
    Abstract: A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: April 30, 2019
    Assignee: FotoNation Limited
    Inventors: Mihai Constantin Munteanu, Alexandru Caliman, Dragos Dinu
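    A compact sketch of the HOG-map tracking loop from patent 10275709 above: build an M x N grid of orientation histograms for the object's ROI, scan candidate ROIs in a later frame for the best-matching map, and update the stored map when the match clears a threshold. The cell grid, stride, and acceptance threshold are illustrative:
    ```python
    import numpy as np

    def hog_map(gray, cells=(4, 4), bins=8):
        """Split the patch into an M x N grid of cells and give each cell a
        gradient-magnitude-weighted orientation histogram; L2-normalise the map."""
        gy, gx = np.gradient(gray.astype(np.float64))
        mag, ang = np.hypot(gx, gy), np.mod(np.arctan2(gy, gx), np.pi)
        m, n = cells
        ch, cw = gray.shape[0] // m, gray.shape[1] // n
        hist = np.zeros((m, n, bins))
        for i in range(m):
            for j in range(n):
                a = ang[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
                w = mag[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
                hist[i, j], _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=w)
        return hist / (np.linalg.norm(hist) + 1e-9)

    def track(stored_hog, frame, roi_size=32, stride=16, accept=0.6):
        """Scan the frame ROI by ROI for the best-matching HOG map; update the
        stored object model only when the best match passes the threshold."""
        best_score, best_xy = -1.0, None
        for y in range(0, frame.shape[0] - roi_size + 1, stride):
            for x in range(0, frame.shape[1] - roi_size + 1, stride):
                cand = hog_map(frame[y:y+roi_size, x:x+roi_size])
                score = float(np.sum(cand * stored_hog))      # cosine similarity
                if score > best_score:
                    best_score, best_xy = score, (y, x)
        if best_score >= accept:
            y, x = best_xy
            stored_hog = hog_map(frame[y:y+roi_size, x:x+roi_size])
        return best_xy, best_score, stored_hog
    ```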
  • Patent number: 10275425
    Abstract: A method and system for dividing up large image files, for example, a subsurface wellbore log, into smaller files or slices for faster analysis and for faster transmission. The transmission and analysis can be performed over a network system for display to a user to perform data interpretation, such as geological interpretations. The side by side comparison can be individually controlled and analyzed as well as synchronized manually for comparison. The data from one or multiple different logs can be viewed side by side as smaller slices of the whole while being able to independently vary the view depth of the data from each wellbore by scrolling. Well tops, and other subsurface data, can be interpreted and shown in the well log image with associated depth registration.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: April 30, 2019
    Inventor: Henry Edward Kernan
  • Patent number: 10275641
    Abstract: The present invention discloses methods and systems for face recognition. Face recognition involves receiving an image/frame, detecting one or more faces in the image, detecting feature points for each of the detected faces in the image, aligning and normalizing the detected feature points, extracting feature descriptors based on the detected feature points, and matching the extracted feature descriptors with a set of pre-stored images for face recognition.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: April 30, 2019
    Assignee: IntelliVision Technologies Corp
    Inventors: Chandan Gope, Gagan Gupta, Nitin Jindal, Amit Agarwal
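    A toy version of the pipeline in patent 10275641 above (detect faces, detect feature points, align/normalize, extract descriptors, match). Face detection and landmarking are assumed to have been done already; the crop-based alignment, pixel-vector descriptor, and cosine matcher are deliberately simple placeholders for the real components:
    ```python
    import numpy as np

    def align(face, left_eye, right_eye, size=64):
        """Tiny 'alignment': crop around the eye midpoint at a scale set by the
        inter-eye distance and resample to a fixed size (no rotation handling)."""
        cx, cy = (left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2
        half = int(1.5 * np.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])) or size
        y0, x0 = max(0, int(cy - half)), max(0, int(cx - half))
        crop = face[y0:y0 + 2 * half, x0:x0 + 2 * half]
        iy = np.linspace(0, crop.shape[0] - 1, size).astype(int)
        ix = np.linspace(0, crop.shape[1] - 1, size).astype(int)
        return crop[np.ix_(iy, ix)]

    def descriptor(aligned):
        """Feature descriptor: a mean/std-normalised pixel vector (placeholder
        for the learned descriptors a real system would extract)."""
        v = aligned.astype(np.float64).ravel()
        return (v - v.mean()) / (v.std() + 1e-9)

    def match(desc, gallery, threshold=0.5):
        """Match the descriptor against pre-stored descriptors by cosine similarity."""
        best_id, best_sim = None, threshold
        for person, ref in gallery.items():
            sim = float(np.dot(desc, ref)) / (np.linalg.norm(desc) * np.linalg.norm(ref) + 1e-9)
            if sim > best_sim:
                best_id, best_sim = person, sim
        return best_id, best_sim

    face = np.random.rand(200, 200)                      # grayscale face crop
    gallery = {"alice": descriptor(align(face, (70, 90), (130, 90)))}
    print(match(descriptor(align(face, (70, 90), (130, 90))), gallery))   # ('alice', ~1.0)
    ```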
  • Patent number: 10268875
    Abstract: A method and an apparatus for registering a face, and a method and an apparatus for recognizing a face are disclosed, in which a face registering apparatus may change a stored three-dimensional (3D) facial model to an individualized 3D facial model based on facial landmarks extracted from two-dimensional (2D) face images, match the individualized 3D facial model to a current 2D face image of the 2D face images, and extract an image feature of the current 2D face image from regions in the current 2D face image to which 3D feature points of the individualized 3D facial model are projected, and a face recognizing apparatus may perform facial recognition based on image features of the 2D face images extracted by the face registering apparatus.
    Type: Grant
    Filed: October 21, 2015
    Date of Patent: April 23, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jungbae Kim, Seon Min Rhee, Youngkyoo Hwang, Jaejoon Han
  • Patent number: 10268886
    Abstract: Examples of the disclosure enable efficient processing of images. One or more features are extracted from a plurality of images. Based on the extracted features, the plurality of images are classified into a first set including a plurality of first images and a second set including a plurality of second images. One or more images of the plurality of first images are false positives. The plurality of first images, but none of the plurality of second images, are transmitted to a remote device. The remote device is configured to process one or more images, including recognizing the extracted features, understanding the images, and/or generating one or more actionable items. Aspects of the disclosure facilitate conserving memory at a local device, reducing processor load or the amount of energy consumed at the local device, and/or reducing network bandwidth usage between the local device and the remote device.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: April 23, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mohammed Shoaib, Jie Liu, Jin Li
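    A small sketch of the local-filtering idea from patent 10268886 above: inexpensive features are computed on the local device, images are classified into a first set (kept, possibly containing false positives) and a second set (discarded), and only the first set would be transmitted to the remote device for full processing. The features and threshold are illustrative:
    ```python
    import numpy as np

    def cheap_features(image):
        """Inexpensive on-device features: brightness, contrast and edge energy."""
        gy, gx = np.gradient(image.astype(np.float64))
        return np.array([image.mean(), image.std(), np.hypot(gx, gy).mean()])

    def split_for_upload(images, edge_threshold=0.05):
        """Return (first_set, second_set); only the first set is uploaded."""
        first, second = [], []
        for img in images:
            (first if cheap_features(img)[2] > edge_threshold else second).append(img)
        return first, second

    first, second = split_for_upload([np.random.rand(32, 32) for _ in range(10)])
    print(len(first), "images would be uploaded,", len(second), "stay local")
    ```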
  • Patent number: 10269100
    Abstract: In one embodiment, a system may access an image of a face and generate blurred color information and blurred brightness information based on the image's color information. The system may detect edge information associated with the face based on the blurred brightness information. The edge information may identify regions in the image that correspond to edges of the face. The system may modify the blurred color information based on the edge information associated with the face. Edge color information may be determined based on the modified blurred color information and the image. The system may generate smoothed color information based on the color information of the image and modify the smoothed color information based on the edge color information. The system may generate an output of the face with smoothed skin using a portion of the color information of the image and a portion of the modified smoothed color information.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: April 23, 2019
    Assignee: Facebook, Inc.
    Inventor: Andrei Igorevich Kopysov
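    A rough sketch of the skin-smoothing recipe from patent 10269100 above, assuming SciPy is available: blur the colour channels, derive an edge map from blurred brightness, and blend so that edge regions keep the original colour while flat skin regions take the smoothed colour. The sigmas, gain, and blending rule are illustrative:
    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def smooth_skin(img, blur_sigma=5.0, edge_sigma=2.0, edge_gain=8.0):
        """img: float RGB array in [0, 1], shape (H, W, 3)."""
        img = img.astype(np.float64)
        # blurred colour information
        blurred = np.stack([gaussian_filter(img[..., c], blur_sigma) for c in range(3)], axis=-1)
        # edge information derived from blurred brightness
        brightness = gaussian_filter(img.mean(axis=-1), edge_sigma)
        gy, gx = np.gradient(brightness)
        edges = np.clip(edge_gain * np.hypot(gx, gy), 0.0, 1.0)[..., None]
        # keep original colour on edges, smoothed colour elsewhere
        return edges * img + (1.0 - edges) * blurred

    print(smooth_skin(np.random.rand(128, 128, 3)).shape)   # (128, 128, 3)
    ```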
  • Patent number: 10270896
    Abstract: A device senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns device-based cognition of a scene viewed by the device's camera. Tasks, which can be selected with the aid of context, are allocated increased or decreased resources based on data comprising (a) user input data indicating express or implied encouragement or discouragement of the task and/or (b) a detection state metric, representing a quantified likelihood that a goal sought by the task will be reached. A great number of other features and arrangements are also detailed.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: April 23, 2019
    Assignee: Digimarc Corporation
    Inventor: Geoffrey B. Rhoads
  • Patent number: 10268911
    Abstract: Some implementations provide a computer-implemented method that includes accessing image frames from a video sequence; detecting the face of a subject in each image frame from the video sequence; detecting facial landmark features in the detected face; determining whether a sufficient number of facial landmark features and image frames have been obtained; in response to determining that a sufficient number of facial landmark features and image frames have been obtained, classifying the detected facial landmark features from the various detected image frames; quantifying variations of the classified facial landmark features across the detected image frames; comparing the quantified variations to a pre-determined threshold; and in response to determining that the quantified variations meet the pre-determined threshold, determining that the video sequence is from a live session.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: April 23, 2019
    Assignee: MorphoTrust USA, LLC
    Inventor: Yecheng Wu
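    A minimal sketch of the liveness test from patent 10268911 above, assuming facial landmarks have already been detected in each frame: the variation of the classified landmark positions across frames is quantified and compared against a pre-determined threshold. The translation removal, the variation statistic, and the threshold value are illustrative:
    ```python
    import numpy as np

    def landmark_variation(landmarks_per_frame):
        """landmarks_per_frame: list of (K, 2) arrays, one per frame. Returns the
        mean per-landmark standard deviation across frames, after removing
        whole-face translation so only facial motion (blinks, mouth) remains."""
        stack = np.stack(landmarks_per_frame).astype(np.float64)   # (T, K, 2)
        stack -= stack.mean(axis=1, keepdims=True)
        return float(stack.std(axis=0).mean())

    def is_live(landmarks_per_frame, min_frames=10, threshold=0.8):
        """None until enough frames are collected; a static photo gives ~0 variation."""
        if len(landmarks_per_frame) < min_frames:
            return None
        return landmark_variation(landmarks_per_frame) >= threshold

    rng = np.random.default_rng(1)
    base = rng.uniform(0, 100, size=(68, 2))
    live = [base + rng.normal(0, 1.5, size=base.shape) for _ in range(12)]
    photo = [base.copy() for _ in range(12)]
    print(is_live(live), is_live(photo))    # True False
    ```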
  • Patent number: 10264192
    Abstract: A video processing device detects, as a characteristic region, a region having a prescribed characteristic in each frame of a video and performs specific image processing on either the characteristic region in the frame or a region other than the characteristic region in the frame with a specified processing strength. The processing strength is specified so as to be altered stepwise in at least two steps that involve an intermediate value between a minimum value and a maximum value when there is a change in whether or not a characteristic region has been detected.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: April 16, 2019
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Masaaki Moriya, Katsuya Otoi, Takayuki Murai, Yoshimitsu Murahashi
  • Patent number: 10262192
    Abstract: Aspects of the present disclosure provide an image-based face detection and recognition system that processes and/or analyzes portions of an image using "image strips" and cascading classifiers to detect faces and/or various facial features, such as an eye, nose, mouth, cheekbone, jaw line, etc.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: April 16, 2019
    Assignee: Blue Line Security Solutions LLC
    Inventor: Marcos Silva
  • Patent number: 10255709
    Abstract: Some implementations may provide a method for generating a portrait of a subject for an identification document, the method including: receiving a photo image of the subject, the photo image including the subject's face in a foreground against an arbitrary background; determining the arbitrary background of the photo image based on the photo image alone and without user intervention; masking the determined background from the photo image; and subsequently generating the portrait of the subject for the identification document of the subject, the portrait based on the photo image with the determined background masked.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: April 9, 2019
    Assignee: MorphoTrust USA, LLC
    Inventors: Brian Martin, Zhiqiang Lao, Mohamed Lazzouni, Robert Andrew Eckel
  • Patent number: 10257680
    Abstract: Methods are provided for securely purchasing, sharing, and transferring music files using NFC technology. A method of sharing a music playlist using near field communication (NFC) includes: assigning a playlist identifier (playlist ID) to a playlist of music files; receiving an NFC identifier (NFC ID) from an NFC chip using an NFC-enabled device; writing the playlist ID to the NFC chip using the NFC-enabled device; storing the NFC ID and playlist ID on a server system; receiving the NFC ID and playlist ID from the NFC chip using a subsequent NFC-enabled device; authenticating the received NFC ID and playlist ID on the subsequent NFC-enabled device with the server system; and, if authenticated, streaming a copy of the music files through the subsequent NFC-enabled device without downloading the music files into long-term memory.
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: April 9, 2019
    Inventors: Bruce Quarto, Chi Huynh
  • Patent number: 10257495
    Abstract: In general, one innovative aspect of the subject matter described in this specification may be embodied in methods that include generating a three-dimensional composite image of a user from a set of two dimensional facial images. For instance, a depth map may initially be generated for each of the two dimensional facial images based on depth information. The depth maps may be used to identify matching elements that are used to combine multiple two-dimensional images. The generated three-dimensional composite image may then be displayed on a digital identification of a user device. In some instances, the rendering of the three-dimensional composite image on the user device may be adjusted based on tilting motions.
    Type: Grant
    Filed: December 31, 2015
    Date of Patent: April 9, 2019
    Assignee: MorphoTrust USA, LLC
    Inventors: Daniel Poder, Brian Martin, Richard Austin Huber
  • Patent number: 10255482
    Abstract: A method, non-transitory computer readable medium and apparatus for generating an interactive image of facial skin of a user that is displayed via a mobile endpoint device of the user are disclosed. For example, the method includes displaying a guide to position a face of the user, capturing an image of the face of the user, transmitting the image to a facial skin analysis server for analyzing one or more parameters of the facial skin of the user, receiving the interactive image of the face of the user that includes metadata associated with the one or more parameters of the facial skin that were analyzed by the facial skin analysis server, and displaying the interactive image of the face of the user.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: April 9, 2019
    Assignee: The Procter & Gamble Company
    Inventors: Stephen C. Morgana, Raja Bala, Matthew Adam Shreve, Luisa Fernanda Polania Cabrera, Paul Jonathan Matts, Ankur Purwar
  • Patent number: 10250598
    Abstract: Liveness detection and identity authentication are included in the disclosure. A user's biological characteristic information is collected and displayed at an initial position on a screen of a computing device. A target position is determined using the initial position, and the target position is displayed on the screen. The user is prompted to move the user's biological characteristic information so as to cause the displayed biological characteristic to move from the initial position on the screen to the target position on the screen. The user's movement is detected, and the display position of the displayed biological characteristic information is determined using the detected movement; a judgment is then made as to whether the user is a living being, using the relationship between the determined display position and the target position. The biological characteristic information of a living being can be verified, e.g., when the user logs in, thereby improving security.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: April 2, 2019
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Jidong Chen, Liang Li
  • Patent number: 10248867
    Abstract: Systems, methods, and non-transitory computer-readable media can identify a set of video segments that represents a video. A subset of video segments can be selected out of the set of video segments. A list that indicates a playback sequence for the subset of video segments can be generated. Playback of the subset of video segments can be provided based on the playback sequence indicated via the list.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: April 2, 2019
    Assignee: Facebook, Inc.
    Inventor: Colleen Kelly Henry
  • Patent number: 10248844
    Abstract: A training method of training an illumination compensation model includes extracting, from a training image, an albedo image of a face area, a surface normal image of the face area, and an illumination feature, the extracting being based on an illumination compensation model; generating an illumination restoration image based on the albedo image, the surface normal image, and the illumination feature; and training the illumination compensation model based on the training image and the illumination restoration image.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: April 2, 2019
    Assignees: SAMSUNG ELECTRONICS CO., LTD., THE GOVERNING COUNCIL OF THE UNIVERSITY OF TORONTO
    Inventors: Jungbae Kim, Ruslan Salakhutdinov, Jaejoon Han, Byungin Yoo
  • Patent number: 10248847
    Abstract: A device may store images of people and profile information associated with the images of the people, and may generate configuration information associated with providing customized profile information to a user device. The device may receive an image, of a person, captured by the user device, and may perform facial recognition of the image of the person to generate facial features of the person. The device may compare the facial features of the person and the images of the people, and may identify a stored image of the person, from the images of the people, based on comparing the facial features of the person and the images of the people. The device may determine, from the profile information and based on the configuration information, particular profile information that corresponds to the stored image of the person, and may provide the particular profile information to the user device.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: April 2, 2019
    Assignee: Accenture Global Solutions Limited
    Inventor: Wendy Cambor
  • Patent number: 10248840
    Abstract: Provided are a method and system for automatically tracking a face position and recognizing a face. In the present invention, after a face image of a user is captured, a capturing unit is moved so that the face image moves to a face authentication region where optimum face recognition is performed, thereby changing the capturing direction. This allows face recognition of the user to be executed without movement of the user, maximizing convenience. Further, a plurality of registered face images are stored with matching frequencies indicating the number of times each registered face image has been matched with an authentication image, and the authentication image is compared first with the registered face images having the largest matching frequencies. This can enhance the face recognition speed.
    Type: Grant
    Filed: May 2, 2014
    Date of Patent: April 2, 2019
    Assignee: FIVEGT CO., LTD
    Inventor: Gyu Taek Jeong
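    A small sketch of the matching-frequency ordering from patent 10248840 above: registered face templates are compared against the authentication image in descending order of how often each template has matched before, and the match counter of the winning template is incremented. The cosine similarity and the threshold are illustrative:
    ```python
    import numpy as np

    def authenticate(probe, registry, threshold=0.8):
        """registry maps user id -> {"template": vector, "count": int}."""
        order = sorted(registry, key=lambda uid: registry[uid]["count"], reverse=True)
        for uid in order:                      # most frequently matched users first
            ref = registry[uid]["template"]
            sim = float(np.dot(probe, ref)) / (np.linalg.norm(probe) * np.linalg.norm(ref) + 1e-9)
            if sim >= threshold:
                registry[uid]["count"] += 1    # checked even earlier next time
                return uid
        return None

    registry = {
        "alice": {"template": np.array([1.0, 0.0, 0.0]), "count": 42},
        "bob":   {"template": np.array([0.0, 1.0, 0.0]), "count": 3},
    }
    print(authenticate(np.array([0.05, 0.99, 0.0]), registry))   # 'bob'
    ```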
  • Patent number: 10242265
    Abstract: Approaches, techniques, and mechanisms are disclosed for generating thumbnails. According to one embodiment, a subset of images each depicting character face(s) is identified from a collection of images. An unsupervised learning method is applied to automatically cluster the subset of images into image clusters. Top image clusters are selected from the image clusters based at least in part on weighted scores of images clustered within the image clusters. Thumbnail(s) are generated from images in the top image clusters.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: March 26, 2019
    Assignee: PCCW VUCLIP (SINGAPORE) PTE. LTD.
    Inventor: Kulbhushan Pachauri
  • Patent number: 10235561
    Abstract: As the use of facial biometrics expands in the commercial and government sectors, the need to ensure that human facial examiners use proper procedures to compare facial imagery will grow. Human examiners have examined fingerprint images for many years such that fingerprint examination processes and techniques have reached a point of general acceptance for both commercial and governmental use. The growing deployment and acceptance of facial recognition can be enhanced and solidified if new methods can be used to assist in ensuring and recording that proper examination processes were performed during the human examination of facial imagery.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: March 19, 2019
    Assignee: AWARE, INC.
    Inventors: Neal Joseph Gieselman, Jonathan Issac Guillory
  • Patent number: 10237843
    Abstract: A position determining unit (4a) determines the position of each of wireless communication apparatuses carried into a vehicle cabin on the basis of the intensities of an electric wave which is transmitted by each wireless communication apparatus and is received by plural antennas (2), or on the basis of the difference between the intensities at the antennas of the electric wave which is transmitted by each wireless communication apparatus and is received by the plural antennas (2). By using a determination result outputted by the position determining unit (4a), a display control unit (4b) outputs, to a display unit (5), an image signal for showing the position in the vehicle cabin of each of the wireless communication apparatuses, and allowing the user to select a wireless communication apparatus which is to be wirelessly connected.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: March 19, 2019
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Yoshikazu Yoshida
  • Patent number: 10235562
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: March 19, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10229311
    Abstract: Implementations generally relate to face template balancing. In some implementations, a method includes generating face templates corresponding to respective images. The method also includes matching the images to a user based on the face templates. The method also includes receiving a determination that one or more matched images are mismatched images. The method also includes flagging one or more face templates corresponding to the one or more mismatched images as negative face templates.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: March 12, 2019
    Assignee: Google LLC
    Inventors: Jonathan McPhie, Hartwig Adam, Dan Fredinburg, Alexei Masterov
  • Patent number: 10223577
    Abstract: A face image processing apparatus includes: a lighting portion including a first polarizer which polarizes light in a first direction and a light emitter which emits, through the first polarizer, infrared light; an image capturing portion including a second polarizer which polarizes light in a second direction perpendicular to the first direction and an image capturing unit which captures images through the second polarizer; and an image processing portion which detects candidates of eyes using a first image captured when the lighting portion emits the infrared light and a second image captured when the lighting portion does not emit the infrared light. The image processing portion determines, as an eye, a candidate having a hyperbolic or cross shaped pattern present in the first image but not present in the second image.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: March 5, 2019
    Assignee: OMRON AUTOMOTIVE ELECTRONICS CO., LTD.
    Inventors: Keishin Aoki, Shunji Ota
  • Patent number: 10216404
    Abstract: An electronic device and method is disclosed herein. The electronic device may include a memory configured to store image data including at least one object, user identification information, and a specific object mapped to the user identification information, and a processor. The processor may execute the method, including extracting an object from the image data, determining whether the extracted object matches the specific object, if the extracted object matches the specific object, encrypting the image data using the user identification information mapped to the specific object as an encryption key, and storing the encrypted image data in the memory.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: February 26, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jaehwan Kwon
  • Patent number: 10217085
    Abstract: An approach is provided for recognizing one or more people from media content and determining whether the one or more people are associated with a social networking service. A request specifying a media content is received from a user equipment. Electronic processing of the media content to recognize one or more people is initiated. It is determined whether the one or more people are associated with a member account of a social networking service. Prompting of the user with an option based on the determination is then initiated.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: February 26, 2019
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Brenda Castro, James Francis Reilly, Matti Johannes Sillanpää, Toni Peter Strandell, Jyri Kullervo Virtanen, Mikko Antero Nurmi
  • Patent number: 10218898
    Abstract: Aspects identify one or more persons appearing within a photographic image framing of a camera viewfinder. A geographic location is determined for an additional person related to such identified persons, wherein the additional person is located within a specified proximity range to the identified persons but does not appear within the photographic image framing. In response to determining that a relationship of the additional person to a person identified within the image framing indicates that the additional person should be included within photographic images of the identified person, aspects recommend that the additional person be added to the photographic image framing prior to acquisition of image data by the camera from the photographic image framing.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit
  • Patent number: 10217009
    Abstract: A method for enhancing user liveness detection is provided that includes calculating, by a computing device, a first angle and a second angle for each frame in a video of captured face biometric data. The first angle is between a plane defined by a front face of the terminal device and a vertical axis, and the second angle is between the plane defined by the front face of the terminal device and a plane defined by the face of the user. Moreover, the method includes creating a first signal from the first angles and a second signal from the second angles, calculating a similarity score between the first and second signals, and determining the user is live when the similarity score is at least equal to a threshold score.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: February 26, 2019
    Assignee: DAON HOLDINGS LIMITED
    Inventor: Mircea Ionita
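    A toy sketch of the two-signal liveness check in patent 10217009 above, where Pearson correlation stands in for whatever similarity score the method actually uses; the threshold and the synthetic signals are illustrative:
    ```python
    import numpy as np

    def similarity(sig_a, sig_b):
        """Pearson correlation between the two per-frame angle signals,
        mapped from [-1, 1] to [0, 1]."""
        a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
        b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
        return float((np.mean(a * b) + 1.0) / 2.0)

    def user_is_live(device_angles, face_angles, threshold=0.75):
        """device_angles: device front face vs. vertical, one value per frame;
        face_angles: device front face vs. the user's face plane, per frame.
        A live user holding the device produces two correlated signals."""
        return similarity(np.asarray(device_angles, dtype=float),
                          np.asarray(face_angles, dtype=float)) >= threshold

    t = np.linspace(0, 2 * np.pi, 60)
    print(user_is_live(10 * np.sin(t), 9 * np.sin(t) + 1.0))                       # True
    print(user_is_live(10 * np.sin(t), np.random.default_rng(0).normal(size=60)))  # likely False
    ```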
  • Patent number: 10210627
    Abstract: A computer system determines a metric for an input object, which could be an image of a person, with the metric being a measure of the person's body size, age, etc. A paired neural network system is trained on a training set of objects comprising pairs of objects, each pair assigned a relative metric. A relative metric for a pair indicates which object of the pair has the higher metric. A representative set of objects includes a known assigned metric value for each object. The trained paired neural network system pairwise compares an input object with objects from the representative set to determine a relative metric for each such pair, arriving at a collection of relative metrics of the input object relative to various objects in the representative set. A metric value can then be estimated for the input object based on the collection of relative metrics and the known metric values.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: February 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Ilia Vitsnudel, Ilya Vladimirovich Brailovskiy
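    A toy sketch of estimating a metric value from pairwise comparisons, in the spirit of patent 10210627 above. The comparator stands in for the trained paired network, and the bracketing estimator is one simple reading of the idea, not the patent's exact procedure:
    ```python
    def estimate_metric(input_obj, representatives, compare):
        """representatives: list of (object, known_metric_value) pairs.
        compare(a, b) returns True when `a` is judged to have the higher metric.
        The estimate is the midpoint of the two known values that bracket the
        input in the ordering induced by the pairwise comparisons."""
        values = sorted(v for _, v in representatives)
        wins = sum(compare(input_obj, rep) for rep, _ in representatives)
        if wins == 0:
            return values[0]
        if wins == len(values):
            return values[-1]
        return 0.5 * (values[wins - 1] + values[wins])

    # toy example: the "metric" of a number is the number itself and the
    # comparator just compares numbers; in practice both objects would be
    # images and `compare` a trained paired neural network.
    reps = [(x, x) for x in (10, 20, 30, 40, 50)]
    print(estimate_metric(34, reps, lambda a, b: a > b))   # 35.0
    ```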
  • Patent number: 10212338
    Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by the camera. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device then captures one or more images of the visual token.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventor: Rodrigo Carceroni
  • Patent number: 10210393
    Abstract: According to one aspect, embodiments herein provide a visual monitoring system for a load panel comprising a first camera having a field of view and configured to be mounted on a surface of the load panel at a first camera position such that a first electrical component of the load panel is in the field of view of the first camera and to generate image based information corresponding to the first electrical component, and a server in communication with the first camera and configured to receive the image based information corresponding to the first electrical component from the first camera and to provide the image based information from the first camera to a user via a user interface.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: February 19, 2019
    Assignee: SCHNEIDER ELECTRIC USA, INC.
    Inventors: John C. Van Gorp, Matthew Stanlake, Mark A. Chidichimo
  • Patent number: 10210379
    Abstract: At least one example embodiment discloses a method of extracting a feature from an input image. The method may include detecting landmarks from the input image, detecting physical characteristics between the landmarks based on the landmarks, determining a target area of the input image from which at least one feature is to be extracted and an order of extracting the feature from the target area based on the physical characteristics and extracting the feature based on the determining.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: February 19, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungjoo Suh, Seungju Han, Jaejoon Han
  • Patent number: 10204265
    Abstract: Provided is a user authentication method using natural gesture input. The user authentication method includes recognizing a plurality of natural gesture inputs from image data of a user, determining the number of the natural gesture inputs as the total number of authentication steps, determining a reference ratio representing the ratio of the number of authentication steps requiring an authentication pass to the total number of authentication steps, determining an actual ratio representing the ratio of the number of authentication steps where authentication has actually passed to the total number of authentication steps, and performing authentication on the user based on a result obtained by comparing the actual ratio with the reference ratio.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: February 12, 2019
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jun Seong Bang, Dong Chun Lee
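    A minimal sketch of the ratio comparison described in patent 10204265 above; the reference ratio and the boolean per-gesture results are illustrative:
    ```python
    def authenticate_by_gestures(gesture_results, reference_ratio=0.6):
        """gesture_results: one boolean per recognised natural gesture, True when
        that authentication step actually passed. The number of recognised
        gestures is the total number of authentication steps; the user passes
        when the actual pass ratio reaches the reference ratio."""
        total_steps = len(gesture_results)
        if total_steps == 0:
            return False
        actual_ratio = sum(gesture_results) / total_steps
        return actual_ratio >= reference_ratio

    # 4 of 5 recognised gestures passed -> 0.8 >= 0.6 -> authenticated
    print(authenticate_by_gestures([True, True, False, True, True]))
    ```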
  • Patent number: 10204090
    Abstract: System, method and architecture for providing improved visual recognition by modeling visual content, semantic content and an implicit social network representing individuals depicted in a collection of content, such as visual images, photographs, etc., which network may be determined based on co-occurrences of individuals represented by the content, and/or other data linking the individuals. In accordance with one or more embodiments, using images as an example, a relationship structure may comprise an implicit structure, or network, determined from co-occurrences of individuals in the images. A kernel jointly modeling content, semantic and social network information may be built and used in automatic image annotation and/or determination of relationships between individuals, for example.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: February 12, 2019
    Assignee: OATH INC.
    Inventors: Jia Li, Xiangnan Kong
  • Patent number: 10200652
    Abstract: Techniques provided herein apply a precomputed graphical object to one or more images to generate a video that is modified with the precomputed graphical object. Various implementations characterize facial positions on a face in a first image and determine a respective facial position on the face to apply a precomputed graphical object at. One or more implementations modify the first image by applying the precomputed graphical object to the respective facial position in the first image. Some implementations modify one or more images that are captured after the first image by applying the precomputed graphical object to each respective location for the respective facial position in the one or more images. In turn, various implementations generate a video with images that are modified based on the precomputed graphical object.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: February 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Henrik Valdemar Turbell