Using A Facial Characteristic Patents (Class 382/118)
  • Patent number: 11977515
    Abstract: Disclosed are systems and methods that automate the process of analyzing interactive content data using artificial intelligence and natural language processing technology to generate subject matter identifiers and sentiment identifiers that characterize the interaction represented by the content data. The automated processing classifies, reduces, segments, and filters content data to accurately, automatically, and efficiently characterize the content data. The results of the analysis in turn allow for identification of system and service problems and the implementation of system enhancements.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: May 7, 2024
    Assignee: TRUIST BANK
    Inventors: Phu Pham, Merle Hidinger, Jun Ji
  • Patent number: 11972526
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
    Type: Grant
    Filed: September 29, 2023
    Date of Patent: April 30, 2024
    Assignee: Apple Inc.
    Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
  • Patent number: 11972014
    Abstract: A method executed by a computer includes receiving an image from a client device. A facial recognition technique is executed against an individual face within the image to obtain a recognized face. Privacy rules are applied to the image, where the privacy rules are associated with privacy settings for a user associated with the recognized face. A privacy protected version of the image is distributed, where the privacy protected version of the image has an altered image feature.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: April 30, 2024
    Assignee: Snap Inc.
    Inventors: Robert Murphy, Evan Spiegel
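
A minimal sketch of the privacy-rule flow described in patent 11972014 above: a recognizer reports (user id, bounding box) pairs, per-user privacy settings are looked up, and faces belonging to opted-out users are obscured before the image is distributed. This is not Snap's implementation; the `PrivacyRule` structure, the pixelation helper, and the (x, y, w, h) box format are assumptions for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PrivacyRule:
    """Hypothetical privacy setting for a recognized user."""
    user_id: str
    allow_face: bool  # False -> obscure this user's face before distribution

def pixelate(region: np.ndarray, block: int = 8) -> np.ndarray:
    """Obscure a face crop by averaging over coarse blocks."""
    h, w = region.shape[:2]
    out = region.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = region[y:y + block, x:x + block].mean(axis=(0, 1))
    return out

def apply_privacy_rules(image, faces, rules):
    """faces: list of (user_id, (x, y, w, h)) pairs produced by a face recognizer."""
    rule_map = {r.user_id: r for r in rules}
    protected = image.copy()
    for user_id, (x, y, w, h) in faces:
        rule = rule_map.get(user_id)
        if rule is not None and not rule.allow_face:
            protected[y:y + h, x:x + w] = pixelate(protected[y:y + h, x:x + w])
    return protected

# Toy usage: one recognized face whose owner opted out of appearing in shared images.
img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
out = apply_privacy_rules(img, [("alice", (10, 10, 32, 32))], [PrivacyRule("alice", False)])
```
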
  • Patent number: 11971971
    Abstract: The present invention is a system for and method of enabling an initiating party to capture, store, and retrieve an image of at least one acknowledging party performing an acknowledgement requested by the initiating party where the acknowledging party(s) may be remotely located from the initiating party.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: April 30, 2024
    Assignee: LAD Verification Services, LLC
    Inventor: David C. Ruma
  • Patent number: 11972639
    Abstract: Computerized systems, methods, and computer-readable media that store instructions for history-based face recognition.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: April 30, 2024
    Assignee: CORSIGHT.AI
    Inventors: Ran Vardimon, Matan Noga, Keren-Or Curtis, Kai Mizrahi
  • Patent number: 11967164
    Abstract: Systems, methods and computer program products for detecting objects using a multi-detector are disclosed, according to various embodiments. In one aspect, a computer-implemented method includes defining analysis profiles, where each analysis profile: corresponds to one of a plurality of detectors, and comprises: a unique set of analysis parameters and/or a unique detection algorithm. The method further includes analyzing image data in accordance with the analysis profiles; selecting an optimum analysis result based on confidence scores associated with different analysis results; and detecting objects within the optimum analysis result. According to additional aspects, the analysis parameters may define different subregions of a digital image to be analyzed; a composite analysis result may be generated based on analysis of the different subregions by different detectors; and the optimum analysis result may be based on the composite analysis result.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: April 23, 2024
    Assignee: KOFAX, INC.
    Inventors: Jiyong Ma, Stephen M. Thompson, Jan W. Amtrup
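
A sketch of the profile-selection idea from patent 11967164 above: several detectors, each with its own analysis parameters and subregion, analyze the image, and the result with the highest confidence score is kept as the optimum analysis result. The `AnalysisProfile` fields and the toy detectors are assumptions, not the patented detectors.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class AnalysisProfile:
    name: str
    subregion: tuple          # (y0, y1, x0, x1) of the image to analyze
    detector: Callable        # returns (detections, confidence) for an image crop

def run_multi_detector(image: np.ndarray, profiles: list[AnalysisProfile]):
    """Analyze the image under every profile and keep the most confident result."""
    results = []
    for p in profiles:
        y0, y1, x0, x1 = p.subregion
        detections, confidence = p.detector(image[y0:y1, x0:x1])
        results.append((p.name, detections, confidence))
    # Select the optimum analysis result based on the confidence scores.
    return max(results, key=lambda r: r[2])

# Toy detectors standing in for real detection algorithms.
bright = lambda crop: ([{"bbox": (0, 0, 4, 4)}], float(crop.mean()) / 255.0)
edges = lambda crop: ([], float(np.abs(np.diff(crop.astype(int), axis=1)).mean()) / 255.0)

img = np.random.randint(0, 255, (32, 32), dtype=np.uint8)
best = run_multi_detector(img, [
    AnalysisProfile("full-frame-brightness", (0, 32, 0, 32), bright),
    AnalysisProfile("top-half-edges", (0, 16, 0, 32), edges),
])
```
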
  • Patent number: 11967151
    Abstract: Embodiments of this application disclose a video classification method performed by a computer device and belong to the field of computer vision (CV) technologies. The method includes: obtaining a video; selecting n image frames from the video; extracting respective feature information of the n image frames according to a learned feature fusion policy by using a feature extraction network, the learned feature fusion policy indicating the proportions of feature information from the other image frames that are fused with the feature information of a first image frame in the n image frames; and determining a classification result of the video according to the respective feature information of the n image frames. By replacing complex and repeated 3D convolution operations with simple feature information fusion between adjacent image frames, the time needed to obtain a classification result of the video is reduced, improving efficiency.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: April 23, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yan Li, Xintian Shi, Bin Ji
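
A sketch of the fusion-before-classification idea from patent 11967151 above: each selected frame's features are mixed with a proportion of its neighbors' features, then the fused features are aggregated and classified. The fixed fusion proportion, the mean aggregation, and the linear classifier are placeholders for the learned policy and network described in the patent.

```python
import numpy as np

def fuse_adjacent_features(features: np.ndarray, proportion: float = 0.25) -> np.ndarray:
    """features: (n_frames, dim). Mix each frame's features with its neighbors'.

    `proportion` stands in for the learned fusion policy that indicates how much
    of the neighboring frames' feature information is folded into each frame.
    """
    fused = (1.0 - 2.0 * proportion) * features
    fused[1:] += proportion * features[:-1]   # previous frame
    fused[:-1] += proportion * features[1:]   # next frame
    fused[0] += proportion * features[0]      # pad the edges with themselves
    fused[-1] += proportion * features[-1]
    return fused

def classify_video(frame_features: np.ndarray, weights: np.ndarray) -> int:
    fused = fuse_adjacent_features(frame_features)
    video_feature = fused.mean(axis=0)        # aggregate over the n selected frames
    return int(np.argmax(weights @ video_feature))

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 16))           # n = 8 frames, 16-dim features each
weights = rng.normal(size=(5, 16))            # 5 video classes
print(classify_video(features, weights))
```
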
  • Patent number: 11967173
    Abstract: A system for biometric enrollment can include a server including a processor configured to receive an uncovered face image of a subject. The processor can generate a first fixed-size representation (FXR) based on the uncovered face image and a covered face image based on the uncovered face image. The processor can generate a second FXR based on the covered face image. The processor can enroll the subject associated with the uncovered face image by storing the first FXR and the second FXR in a data store.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: April 23, 2024
    Assignee: T Stamp Inc.
    Inventors: Gareth Neville Genner, Norman Hoon Thian Poh
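
A sketch of the enrollment flow from patent 11967173 above: derive a fixed-size representation (FXR) from the uncovered face, synthesize a covered face image from it (here by simply masking the lower half, a stand-in for a real mask-synthesis step), derive a second FXR, and store both against the subject. The random-projection "encoder" is a placeholder, not the patented representation.

```python
import numpy as np

EMBED_DIM = 128
rng = np.random.default_rng(42)
PROJECTION = rng.normal(size=(EMBED_DIM, 64 * 64))  # placeholder "encoder"

def fixed_size_representation(face: np.ndarray) -> np.ndarray:
    """Map any 64x64 grayscale face to a fixed-size vector (stand-in FXR)."""
    vec = PROJECTION @ face.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)

def synthesize_covered_face(face: np.ndarray) -> np.ndarray:
    """Crude mask synthesis: zero out the lower half of the face."""
    covered = face.copy()
    covered[face.shape[0] // 2:, :] = 0
    return covered

def enroll(subject_id: str, uncovered_face: np.ndarray, data_store: dict) -> None:
    fxr_uncovered = fixed_size_representation(uncovered_face)
    fxr_covered = fixed_size_representation(synthesize_covered_face(uncovered_face))
    data_store[subject_id] = {"uncovered": fxr_uncovered, "covered": fxr_covered}

store: dict = {}
enroll("subject-001", rng.integers(0, 255, (64, 64)), store)
```
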
  • Patent number: 11967177
    Abstract: Disclosed herein is a method for managing item recommendation that uses the degree of association between language units and usage history to recommend similar items with a high probability of purchase, rather than a keyword-matching method. Recommendations are managed by adding or deleting experience items using a vector model-based reasoning method built on word-to-word associations, within a scheme that plans a novel recognition system through the study of human emotions and tastes, T.P.O. (Time, Place, Occasion), and various list-specific characteristics (color, texture, etc.) based on the language used in everyday life, in consideration of language units and the items preferred, experienced, and/or purchased by a user, and that applies machine learning technology and natural language understanding technology.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: April 23, 2024
    Assignee: MYCELEBS CO., LTD.
    Inventor: Jun Woong Doh
  • Patent number: 11968476
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media relate to a method for casting from a virtual environment to a video communications platform. The system may provide a video conference session in a video conference application. A connection may be established between the video conference application and a VR or AR device. The video conference application may receive 2D video content from the VR or AR device. The 2D video content may comprise a view of a virtual environment. The video conference application may stream the 2D video content in the video conference session.
    Type: Grant
    Filed: October 31, 2021
    Date of Patent: April 23, 2024
    Assignee: Zoom Video Communications, Inc.
    Inventor: Jordan Thiel
  • Patent number: 11960527
    Abstract: A buried object information management unit manages information about a buried object included in a search image showing the presence or absence of the buried object in a wall surface, the search image being generated by a buried object scanning device that is scanned along the wall surface, and comprises a data receiving unit, an input unit, and a collation unit. The data receiving unit acquires search information including the search image generated by the buried object scanning device. Construction information including position information about the buried object in the wall surface is inputted to the input unit. The collation unit collates the search information acquired by the data receiving unit with the construction information inputted to the input unit, and determines whether or not there is a match.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: April 16, 2024
    Assignee: OMRON CORPORATION
    Inventors: Tetsuro Tsurusu, Shingo Kawamoto, Mitsunori Sugiura, Takahide Yagi
  • Patent number: 11961283
    Abstract: Methods, systems, and computer readable media for model-based robust deep learning. In some examples, a method includes obtaining a model of natural variation for a machine learning task. The model of natural variation includes a mapping that specifies how an input datum can be naturally varied by a nuisance parameter. The method includes training, using the model of natural variation and training data for the machine learning task, a neural network to complete the machine learning task such that the neural network is robust to natural variation specified by the model of natural variation.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: April 16, 2024
    Assignee: The Trustees of the University of Pennsylvania
    Inventors: George J. Pappas, Hamed Hassani, Alexander Robey
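
A toy sketch of training against a model of natural variation as described in patent 11961283 above: every training input is mapped through a nuisance transformation g(x, δ) (here a simple brightness shift standing in for a learned model of natural variation), and the update is driven by the worst sampled δ so the classifier stays robust to that variation. The linear model, hinge loss, and subgradient step are illustrative assumptions, not the patented training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def natural_variation(x: np.ndarray, delta: float) -> np.ndarray:
    """Stand-in for a learned model of natural variation: a brightness shift."""
    return x + delta

def worst_case_loss(w, x, y, deltas):
    """Hinge-like loss over the worst sampled nuisance parameter."""
    losses = [max(0.0, 1.0 - y * float(w @ natural_variation(x, d))) for d in deltas]
    return max(losses), deltas[int(np.argmax(losses))]

# Tiny robust-training loop on synthetic 2-class data.
X = rng.normal(size=(200, 8))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = np.zeros(8)
lr = 0.01
for epoch in range(20):
    for xi, yi in zip(X, y):
        loss, worst_delta = worst_case_loss(w, xi, yi, deltas=rng.uniform(-0.5, 0.5, size=8))
        if loss > 0:                                   # subgradient step on the worst variation
            w += lr * yi * natural_variation(xi, worst_delta)
```
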
  • Patent number: 11961293
    Abstract: A system and related methods for identifying characteristics of handbags is described. One method includes receiving one or more images of a handbag, eliminating all but select images from the one or more images of the handbag to obtain a grouping of one or more select images, the select images being those embodying a complete periphery and frontal view of the handbag. For each of the one or more select images, aligning feature-corresponding pixels with an image axis, comparing at least a portion of the one or more select images with a plurality of stored images, and determining characteristics of the handbag based on said comparing.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: April 16, 2024
    Assignee: FASHIONPHILE Group, LLC
    Inventors: Sarah Davis, Ben Hemminger
  • Patent number: 11962381
    Abstract: Embodiments of the present invention disclose a communication method and a device. Configuration information indicating one or more spatial domain beam basis vector groups and Q thresholds is received from a network device, where the Q thresholds correspond one-to-one to spatial domain beam basis vectors in the one or more spatial domain beam basis vector groups. L spatial domain beam basis vectors are selected from a spatial domain beam basis vector group set. K frequency domain basis vectors are selected from a frequency domain basis vector set for each of the L spatial domain beam basis vectors.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: April 16, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xiang Gao, Kunpeng Liu, Ruiqi Zhang
  • Patent number: 11961326
    Abstract: Methods and systems are described for maintaining hygienic conditions in automatic teller machines by detecting whether or not a user is in compliance with a hygienic standard. If a user is not in compliance, then the automatic teller machine may execute a hygienic action to cleanse the automatic teller machine. For example, the hygienic action may comprise automatically cleansing the automatic teller machine, disabling the automatic teller machine from service, transmitting a sanitation service request to an automatic teller machine provider, and/or initiating an alternative control scheme (e.g., voice controls, gesture-based controls, etc.) for the automatic teller machine.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: April 16, 2024
    Assignee: Capital One Services, LLC
    Inventors: Shekhar Bhardwaj, Andrew Yocca, Kelvin Goodman, Christopher McVay, Dong Zhang, Neer Pandya
  • Patent number: 11960787
    Abstract: A vehicle and control method of the vehicle are provided. The vehicle includes a camera provided on the vehicle and configured to capture an image of an object outside the vehicle, a controller configured to determine a photographing position required for facial recognition from the captured image, a guide configured to guide the photographing position, and a display configured to display a result of the facial recognition.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: April 16, 2024
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
    Inventors: Yun Sup Ann, Hyunsang Kim
  • Patent number: 11962729
    Abstract: An image forming apparatus includes a mounting surface provided on a main body and on which paper is placed; a person detection section provided in the main body; a processor configured to control return from a sleep state of the main body based on the detection of a person by the person detection section, and to perform control such that, in a case where there is no paper on the mounting surface at the time of transition to the sleep state, a nearby person approaching the main body is included in a detection target of the person detection section and a passerby passing near the main body is not included in the detection target while in the sleep state, and in a case where there is paper on the mounting surface at the time of the transition, both the nearby person and the passerby are included in the detection target while in the sleep state; and a notification section that notifies of remaining paper in a case where there is paper on the mounting surface after the return.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: April 16, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Masayoshi Miki, Teiju Sato, Masato Saito, Yasuhiro Nakatani
  • Patent number: 11960146
    Abstract: In various embodiments, a process for trying on glasses includes determining an event associated with updating a current model of a user's face. In response to the event, a set of historical recorded frames of the user's face is used to update the current model of the user's face. The process includes obtaining a newly recorded frame of the user's face, using the current model of the user's face to generate a corresponding image of a glasses frame, and presenting the image of the glasses frame over the newly recorded frame of the user's face.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: April 16, 2024
    Assignee: DITTO TECHNOLOGIES, INC.
    Inventors: Cliff Mercer, Ebube Anizor, Tenzile Berkin Cilingiroglu, Trevor Noel Howarth
  • Patent number: 11961331
    Abstract: A first computing device acquires video data representing a user performing an activity. The first device uses a first pose extraction algorithm to determine a pose of the user within a frame of video data. If the pose is determined to be potentially inaccurate, the user is prompted for authorization to send the frame of video data to a second computing device. If authorization is granted, the second computing device may use a different algorithm to determine a pose of the user and send data indicative of this pose to the first computing device to enable the first computing device to update a score or other output. The second computing device may also use the frame of video data as training data to retrain or modify the first pose extraction algorithm, and may send the modified algorithm to the first computing device for future use.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: April 16, 2024
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Ido Yerushalmy, Michael Chertok, Sharon Alpert
  • Patent number: 11954905
    Abstract: An example system includes: a landmark detection engine to detect landmark positions of landmarks in images based on facial detection; an optical flow landmark engine to determine the landmark positions in the images based on optical flow of the landmarks between the images; a landmark difference engine to determine, for a landmark in a given image: a distance between a detected landmark position and an optical flow landmark position of the landmark; and a weighted landmark determination engine to determine, for a first and second image, a position for the landmark in the second image based on: a respective detected landmark position and a respective optical flow position of the landmark in the second image; and respective distances, determined with the landmark difference engine, between a first detected landmark position of the landmark in the first image and respective optical flow landmark positions for the first and second images.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: April 9, 2024
    Assignees: Hewlett-Packard Development Company, L.P., Purdue Research Foundation
    Inventors: Yang Cheng, Xiaoyu Xiang, Shaoyuan Xu, Qian Lin, Jan Philip Allebach
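
A numeric sketch of the weighted-landmark idea from patent 11954905 above: for one landmark in the second image, the detector position and the optical-flow position are blended, with the flow trusted more when its positions stay close to the first image's detection. The exponential weighting function is an assumption; the patent only requires that the two distances inform the weighting.

```python
import numpy as np

def weighted_landmark_position(detected_2, flow_2, detected_1, flow_1, sigma=5.0):
    """Blend detector and optical-flow positions of one landmark in the second image.

    detected_1, detected_2: detector output in the first / second image (x, y).
    flow_1, flow_2: optical-flow landmark positions for the first / second image (x, y).
    """
    p = [np.asarray(v, dtype=float) for v in (detected_2, flow_2, detected_1, flow_1)]
    detected_2, flow_2, detected_1, flow_1 = p
    d1 = np.linalg.norm(detected_1 - flow_1)   # flow vs. detection, first image
    d2 = np.linalg.norm(detected_1 - flow_2)   # flow (second image) vs. first detection
    w_flow = np.exp(-(d1 + d2) / sigma)        # small distances -> trust the optical flow
    return w_flow * flow_2 + (1.0 - w_flow) * detected_2

# Example: detection and optical flow disagree slightly in the second frame.
print(weighted_landmark_position((101.0, 52.0), (99.0, 50.5), (100.0, 50.0), (100.4, 50.2)))
```
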
  • Patent number: 11954881
    Abstract: In some implementations a neural network is trained to perform a main task using a clustering constraint, for example, using both a main task training loss and a clustering training loss. Training inputs are inputted into a main task neural network to produce output labels predicting locations of the parts of the objects in the training inputs. Data from pooled layers of the main task neural network is inputted into a clustering neural network. The main task neural network and the clustering neural network are trained based on a main task loss from the main task neural network and a clustering loss from the clustering neural network. The main task loss is determined by comparing differences between the output labels and the training labels. The clustering loss encourages the clustering network to learn to label the parts of the objects individually, e.g., to learn groups corresponding to the object parts.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: April 9, 2024
    Assignee: Apple Inc.
    Inventors: Peter Meier, Tanmay Batra
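
A sketch of the combined objective from patent 11954881 above: a main-task loss on the predicted part locations plus a clustering loss computed on pooled-layer features, weighted and summed. The k-means-style clustering loss and the λ weighting are assumptions; the patent only specifies that both losses drive training.

```python
import numpy as np

def main_task_loss(predicted_locations: np.ndarray, true_locations: np.ndarray) -> float:
    """Mean squared error between predicted and labeled part locations."""
    return float(np.mean((predicted_locations - true_locations) ** 2))

def clustering_loss(pooled_features: np.ndarray, centers: np.ndarray) -> float:
    """Distance of each pooled feature to its nearest cluster center (k-means-style)."""
    d = np.linalg.norm(pooled_features[:, None, :] - centers[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) ** 2))

def combined_loss(pred, labels, pooled, centers, lam: float = 0.1) -> float:
    return main_task_loss(pred, labels) + lam * clustering_loss(pooled, centers)

rng = np.random.default_rng(1)
pred = rng.normal(size=(4, 10, 2))      # batch of 4, 10 part locations (x, y) each
labels = pred + 0.05 * rng.normal(size=pred.shape)
pooled = rng.normal(size=(4, 32))       # pooled-layer features fed to the clustering head
centers = rng.normal(size=(10, 32))     # one cluster per object part
print(combined_loss(pred, labels, pooled, centers))
```
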
  • Patent number: 11954908
    Abstract: The communication support device includes a position acquisition unit, an imaging unit, a storage, a category ranking setting unit, a counterpart detector, and a notification unit. The position acquisition unit acquires position information indicating a position of a user. The imaging unit captures an image of a surrounding environment of the user to acquire a captured image. The storage stores the counterpart database. In the counterpart database, an image of a counterpart and a category indicating a property of the counterpart are associated with the counterpart. The category ranking setting unit sets a priority to the category according to the position information acquired by the position acquisition unit. The counterpart detector detects a counterpart belonging to the category in the captured image in order of the priority set by the category ranking setting unit. The notification unit notifies the user of information regarding the counterpart detected by the counterpart detector.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: April 9, 2024
    Assignee: OMRON CORPORATION
    Inventors: Endri Rama, Kazuo Yamamoto, Tomohiro Yabuuchi
  • Patent number: 11954829
    Abstract: The invention relates to a videoconferencing system 1, comprising: a display screen 10, for displaying an image Ie(ti) containing N images Iint(k)(ti); a camera 20, for acquiring an image Ic(tj); a single-pixel-imager-employing optical device suitable for determining N images Ico(k)(tj) on the basis of sub-matrices SMimp(k)(tj) comprising: an optical source 31, suitable for irradiating an ocular portion Po(tj) of the face of the user; a matrix of single-pixel imagers that are suitable for reconstructing a correction image Ico(k)(tj) on the basis of the light beam reflected by the ocular portion Po(tj); a processing unit 40, suitable for: determining, in each image Iint(k)(ti) of the image Ie(ti), a target point Pc(k)(tj), then selecting N sub-matrices SMimp(k)(tj) each centred on a target point Pc(k)(tj); correcting the image Ic(tj), by replacing a region of the image Pc(tj) representing the ocular portion Po(tj) with the N images Ico(k)(tj).
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: April 9, 2024
    Assignee: Commissariat à l'Energie Atomique et aux Energies Alternatives
    Inventors: Christophe Martinez, François Templier
  • Patent number: 11947244
    Abstract: A gate apparatus includes a supporting portion, a first light, and a camera device. The supporting portion is an element that extends vertically upward from a main body of the gate apparatus. The first light is installed in a ceiling portion attached to the supporting portion. The camera device is a device that is attached to the supporting portion and that acquires biological information of a user. The camera device acquires the biological information of the user while the first light is emitting light.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: April 2, 2024
    Assignee: NEC CORPORATION
    Inventors: Fumi Irie, Yoshitaka Yoshimura
  • Patent number: 11948076
    Abstract: A media rendering device controlled based on a trained neural network is provided. The media rendering device captures an image of a user, and determines a user-type of the user and user-profile information of the user or the user-type based on the captured image. The user-type corresponds to an age group, a gender, an emotional state, and/or a geo-location, associated with the user. The user-profile information corresponds to interests or preferences of the user or the determined user-type. The media rendering device further determines device-assistive information based on application of the trained neural network model on the determined user-type. The media rendering device is further controlled based on the determined device-assistive information, to change at least one configuration setting of the media rendering device or to output media content.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: April 2, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Ananya Banik, Ashish Agnihotri, Ashritha Udyavar, Madhvesh Sulibhavi
  • Patent number: 11948402
    Abstract: Methods, systems, and computer-readable storage media for determining that a subject is a live person include capturing one or more images of two eyes of a subject. The one or more images, from each of the two eyes are used to obtain respective corneal reflections. Depth information associated with a scene in front of the subject is determined, based on an offset between the respective corneal reflections. A determination is made, based at least on the depth information, that the subject is a live person. Responsive to determining that the subject is a live person, an authentication process is initiated to authenticate the subject.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: April 2, 2024
    Assignee: Jumio Corporation
    Inventor: David Hirvonen
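
A sketch of the geometric intuition behind patent 11948402 above: the offset between the reflections seen in the two corneas behaves like a stereo disparity, so an implied scene depth can be estimated as baseline × focal length / disparity; a live person viewing a real scene yields a physically plausible depth, while a flat spoof tends not to. The constants, the triangulation shortcut, and the acceptance thresholds are illustrative assumptions.

```python
import numpy as np

def estimate_scene_depth(reflection_left, reflection_right,
                         interocular_mm: float = 63.0, focal_px: float = 800.0) -> float:
    """Treat the two corneal reflections like a stereo pair and triangulate depth (mm)."""
    disparity_px = float(np.linalg.norm(np.asarray(reflection_left, float)
                                        - np.asarray(reflection_right, float)))
    if disparity_px < 1e-3:
        return float("inf")               # no offset: the scene appears infinitely far / flat
    return interocular_mm * focal_px / disparity_px

def looks_live(reflection_left, reflection_right,
               min_depth_mm: float = 200.0, max_depth_mm: float = 5000.0) -> bool:
    """Accept as live only if the implied scene depth is physically plausible."""
    depth = estimate_scene_depth(reflection_left, reflection_right)
    return min_depth_mm <= depth <= max_depth_mm

# Example: a ~25 px offset between the two corneal reflections.
print(looks_live((412.0, 230.0), (437.0, 231.0)))
```
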
  • Patent number: 11948252
    Abstract: An apparatus is provided. The apparatus includes a communications interface to receive raw data from an external source. The raw data includes a representation of an object. Furthermore, the apparatus includes a memory storage unit to store the raw data. The apparatus also includes a pre-processing engine to generate a coarse segmentation map and a joint heatmap from the raw data. The coarse segmentation map is to outline the object and the joint heatmap is to represent a point on the object. The apparatus further includes a neural network engine to receive the raw data, the coarse segmentation map, and the joint heatmap. The neural network engine is to generate a plurality of two-dimensional maps. Also, the apparatus includes a mesh creator engine to generate a three-dimensional mesh based on the plurality of two-dimensional maps.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: April 2, 2024
    Assignee: HINGE HEALTH, INC.
    Inventors: Sohail Zangenehpour, Colin Joseph Brown, Paul Anthony Kruszewski
  • Patent number: 11941498
    Abstract: An image processing method executed by a computer, the method includes detecting a plurality of feature points of a face from an input image, referring to importance information that indicates an importance of a region within an image in a process of detecting a predetermined facial motion from the image, selecting, from the plurality of feature points detected by the detecting, one or more points that correspond to an image region including an importance indicated by the importance information equal to or smaller than a first threshold value, correcting the input image by using the one or more points selected by the selecting, to generate a corrected image; and determining whether or not the predetermined facial motion is occurring in the input image, based on an output obtained by inputting the corrected image to a recognition model.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: March 26, 2024
    Assignee: FUJITSU LIMITED
    Inventors: Ryosuke Kawamura, Kentaro Murase
  • Patent number: 11941044
    Abstract: A method including training a recurrent neural network model to create a trained model based at least in part on: (a) first images associated with first items on a website, (b) first search terms used by users of the website to search for the first items on the website, and (c) personal features of the users. The method also can include receiving an input image that was uploaded by a current user. The method additionally can include obtaining a user encoded representation vector for the current user based on a set of personal features of the current user. The method further can include generating an image encoded representation vector for the input image. The method additionally can include deriving search terms that are personalized to the current user for the one or more items depicted in the input image, using the trained model and based on the user encoded representation vector for the current user and the image encoded representation vector for the input image. Other embodiments are disclosed.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: March 26, 2024
    Assignee: WALMART APOLLO, LLC
    Inventors: Kannan Achan, Sushant Kumar, Kaushiki Nag, Venkata Syam Prakash Rapaka
  • Patent number: 11943564
    Abstract: Methods and systems provide for video appearance adjustments within a video communication session. First, the system receives video content. The system then receives an appearance adjustment request comprising an adjustment depth, and detects imagery of a user within the video content. The system then detects a face region within the video content. The system segments the face region into a number of skin areas. For each of the plurality of skin areas, the system classifies the skin area as a smooth texture region or rough texture region. If the skin area is classified as a smooth texture region, the system modifies the imagery of the user in real time or substantially real time by applying a smoothing process to the skin area, where the amount of smoothing applied corresponds to the adjustment depth.
    Type: Grant
    Filed: July 31, 2021
    Date of Patent: March 26, 2024
    Assignee: Zoom Video Communications, Inc.
    Inventors: Abhishek Balaji, Bo Ling, Juliana Park, Nitasha Walia, Jianpeng Wang, Ruizhen Wang
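
A sketch of the per-area decision from patent 11943564 above: classify each skin area as smooth or rough texture by its local variance, and only smooth the smooth-texture areas, with the blur strength tied to the requested adjustment depth. The variance threshold and box blur are stand-ins for the patented texture classifier and smoothing process.

```python
import numpy as np

def is_smooth_texture(area: np.ndarray, variance_threshold: float = 40.0) -> bool:
    """Classify a skin area as smooth (low local variance) or rough."""
    return float(area.var()) < variance_threshold

def box_blur(area: np.ndarray, radius: int) -> np.ndarray:
    """Simple separable box blur used as a stand-in smoothing process."""
    if radius <= 0:
        return area.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, area.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def adjust_skin_area(area: np.ndarray, adjustment_depth: float) -> np.ndarray:
    """adjustment_depth in [0, 1]: 0 = untouched, 1 = strongest smoothing."""
    if not is_smooth_texture(area):
        return area.astype(float)                 # leave rough-texture regions alone
    radius = int(round(3 * adjustment_depth))     # blur strength follows the adjustment depth
    return box_blur(area, radius)

rng = np.random.default_rng(2)
skin_area = 180 + 2 * rng.normal(size=(24, 24))   # low-variance (smooth) grayscale patch
out = adjust_skin_area(skin_area, adjustment_depth=0.7)
```
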
  • Patent number: 11925870
    Abstract: A system for providing user-driven customization and enhanced personalization of interactive experiences. The system includes data storage for storing player profiles, with each including customization preferences useful in enhancing or generating one of the interactive experiences. The system includes a gameplay space adapted to provide an interactive experience, which includes one or more interactive elements. The system includes a gameplay device configured to be worn or carried by a player. A detection device detects a presence of the player in the gameplay space and obtains a unique identifier for the gameplay device. The system includes a controller retrieving a set of the customization preferences in one of the player profiles associated with the identifier.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: March 12, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Christina Jaio, Bob Hickman, Brent D. Strong, Jeffrey L. Elbert
  • Patent number: 11928198
    Abstract: An authentication device is provided with: a plurality of attribute-dependent score calculation units each calculating an attribute-dependent score dependent on a prescribed attribute for input data; an attribute-independent score calculation unit for calculating an attribute-independent score independent of the attribute for the input data; an attribute estimation unit for performing attribute estimation for the input data; and a score integration unit for determining a score weight of each of a plurality of attribute-dependent scores and of the attribute-independent score using the result of the attribute estimation and calculating an output score using the attribute-dependent scores, the attribute-independent score, and the determined score weights.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: March 12, 2024
    Assignee: NEC CORPORATION
    Inventors: Koji Okabe, Hitoshi Yamamoto, Takafumi Koshinaka
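
A sketch of the score-integration step from patent 11928198 above: the attribute estimator's posterior supplies the weights for the attribute-dependent scores, a residual weight goes to the attribute-independent score, and the output score is the weighted sum. The posterior-proportional weighting and the fixed share for the independent score are assumptions; the patent only requires that the attribute estimation result determine the score weights.

```python
import numpy as np

def integrate_scores(attribute_dependent_scores: np.ndarray,
                     attribute_independent_score: float,
                     attribute_posterior: np.ndarray,
                     independent_share: float = 0.5) -> float:
    """Weighted combination of attribute-dependent and attribute-independent scores.

    attribute_dependent_scores: one score per attribute-specific scorer.
    attribute_posterior: the attribute estimator's probability for each attribute.
    """
    posterior = attribute_posterior / attribute_posterior.sum()
    dependent_part = float(posterior @ attribute_dependent_scores)
    return (1.0 - independent_share) * dependent_part + independent_share * attribute_independent_score

scores = np.array([0.82, 0.55, 0.10])        # scores from three attribute-dependent scorers
posterior = np.array([0.7, 0.25, 0.05])      # attribute estimation result for the input
print(integrate_scores(scores, attribute_independent_score=0.6, attribute_posterior=posterior))
```
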
  • Patent number: 11921831
    Abstract: Person or object authentication can be performed using artificial intelligence-enabled systems. Reference information, such as for use in comparisons or assessments for authentication, can be updated over time to accommodate changes in an individual's appearance, voice, or behavior. In an example, reference information can be updated automatically with test data, or reference information can be updated conditionally, based on instructions from a system administrator. Various types of media can be used for authentication, including image information, audio information, or biometric information. In an example, authentication can be performed wholly or partially at an edge device such as a security panel in an installed security system.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: March 5, 2024
    Assignee: INTELLIVISION TECHNOLOGIES CORP
    Inventors: Krishna Khadloya, Manuel Gonzalez
  • Patent number: 11922724
    Abstract: A device, a method, and a non-transitory readable storage medium for face recognition are provided. The method comprises: extracting a face sample image from a predetermined face sample library and performing feature point detection to obtain multiple face feature points; obtaining multiple mask images; selecting first to fourth face feature points from the multiple face feature points; defining the distance between the first and second face feature points as a mask image height, and defining the distance between the third and fourth face feature points as a mask image width; adjusting a size of each mask image according to the mask image height and the mask image width; fusing each adjusted mask image with the face sample image to obtain multiple face mask images, which are saved into the predetermined face sample library; and training a face recognition model based on the predetermined face sample library for face recognition.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: March 5, 2024
    Assignee: HONG FU JIN PRECISION INDUSTRY (WuHan) CO., LTD.
    Inventor: Chin-Wei Yang
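
A sketch of the resize-and-fuse step from patent 11922724 above: the mask image is rescaled to the height and width implied by the selected feature-point distances and then pasted over the face sample image. Nearest-neighbor resizing, the (row, col) point convention, and anchoring the paste at the first feature point are assumptions for illustration.

```python
import numpy as np

def nearest_resize(image: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Nearest-neighbor resize (stand-in for the mask-image scaling step)."""
    rows = (np.arange(new_h) * image.shape[0] / new_h).astype(int)
    cols = (np.arange(new_w) * image.shape[1] / new_w).astype(int)
    return image[rows][:, cols]

def fuse_mask(face: np.ndarray, mask: np.ndarray, p1, p2, p3, p4) -> np.ndarray:
    """p1..p4 are (row, col) facial feature points.

    |p1 - p2| defines the mask image height, |p3 - p4| defines the mask image width,
    and the resized mask is pasted with its top-left corner at p1.
    """
    mask_h = int(round(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))))
    mask_w = int(round(np.linalg.norm(np.asarray(p3, float) - np.asarray(p4, float))))
    resized = nearest_resize(mask, mask_h, mask_w)
    fused = face.copy()
    r0, c0 = int(p1[0]), int(p1[1])
    fused[r0:r0 + mask_h, c0:c0 + mask_w] = resized[:face.shape[0] - r0, :face.shape[1] - c0]
    return fused

face = np.full((128, 128), 200, dtype=np.uint8)        # toy face sample image
mask_img = np.full((40, 60), 30, dtype=np.uint8)       # toy mask image
augmented = fuse_mask(face, mask_img, (60, 34), (118, 34), (70, 30), (70, 98))
```
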
  • Patent number: 11922312
    Abstract: An image classification system 10 includes: a probability computation means 11 which computes a known-image probability, which is the probability that an input image corresponds to a known image associated with a seen label that indicates the class into which content indicated by the known image is classified; a likelihood computation means 12 which computes both the likelihood that content indicated by the input image is classified into the same class as content indicated by an unseen image associated with an unseen label, and the likelihood that the content indicated by the input image is classified into the same class as the content indicated by the known image; and a correction means 13 which corrects each computed likelihood using the computed known-image probability.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: March 5, 2024
    Assignee: NEC CORPORATION
    Inventor: Takahiro Toizumi
  • Patent number: 11924349
    Abstract: Systems and methods for secure distribution of biometric matching processing are provided. Certain configurations include homomorphic encrypting of captured biometric information. In some configurations, the biometric information is classified without decryption between a first identity class and a second identity class. The biometric information may be formed as a feature vector. A homomorphic encrypted feature vector may be formed by homomorphic encrypting of the biometric information.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: March 5, 2024
    Assignee: The Government of the United States of America, as represented by the Secretary of Homeland Security
    Inventor: Arun Vemury
  • Patent number: 11914694
    Abstract: A computing device includes a system that authenticates a user of the computing device. A first sensor obtains a first representation of a physical characteristic of the user that is compared to a registered representation of the physical characteristic of the user. A first level of access to the computing device is enabled based on the first representation of the physical characteristic matching the registered representation of the physical characteristic. A second sensor obtains a first representation of a liveness characteristic of the user that indicates that the user is alive. The first representation of the liveness characteristic is compared to a registered representation of the liveness characteristic of the user. A second level of access to the computing device is enabled based on the first representation of the liveness characteristic of the user matching the registered representation of the liveness characteristic of the user.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: February 27, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kwang Oh Kim, Yibing Michelle Wang, Kamil Bojanczyk
  • Patent number: 11915514
    Abstract: This application relates to a method and an apparatus for detecting facial key points, a computer device, and a storage medium including: acquiring a to-be-detected face image from a current frame; determining partial images in the to-be-detected face image, each partial image including one or more facial key points; determining, within each of the partial images, candidate points of the one or more facial key points corresponding to the partial image, respectively; and jointly constraining the candidate points in the partial images to determine a set of facial key points from the candidate points for the to-be-detected face image. For the partial images in the entire to-be-detected face image, the candidate points of the facial key points corresponding to the partial images are respectively determined. Therefore, a calculation amount may be reduced, and the efficiency of determining the candidate points of the facial key points is improved.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: February 27, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xuan Cao, Weijian Cao, Yanhao Ge, Chengjie Wang
  • Patent number: 11914691
    Abstract: A method for recognizing an identity in a video conference, including: obtaining, by an identity recognition apparatus, first biometric feature information in a video conference; obtaining second biometric feature information corresponding to an identity list in a database and a conference probability value corresponding to the identity list, where the identity list includes at least one personal unique identifier, the first biometric feature information and the second biometric feature information include at least one of facial feature information and voiceprint feature information, and the conference probability value is determined based on at least one of a participation probability value and a same conference probability value; and determining, from the identity list and based on the second biometric feature information corresponding to the identity list and the conference probability value corresponding to the identity list, a personal unique identifier corresponding to the first biometric feature information.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: February 27, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Guangyao Zhao
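
A sketch of combining the biometric match with the conference probability as described in patent 11914691 above: for each person in the identity list, the feature similarity is combined with that person's probability of being in this conference, and the identifier with the highest combined score is returned. The cosine-similarity matcher and the multiplicative fusion are assumptions, not the patented scoring rule.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recognize_participant(query_feature: np.ndarray,
                          identity_list: dict[str, np.ndarray],
                          conference_probability: dict[str, float]) -> str:
    """identity_list: personal unique identifier -> enrolled biometric feature vector.
    conference_probability: identifier -> probability of attending this conference."""
    def combined_score(person_id: str) -> float:
        similarity = cosine_similarity(query_feature, identity_list[person_id])
        return similarity * conference_probability.get(person_id, 0.0)
    return max(identity_list, key=combined_score)

rng = np.random.default_rng(3)
enrolled = {"alice": rng.normal(size=64), "bob": rng.normal(size=64)}
query = enrolled["bob"] + 0.1 * rng.normal(size=64)     # noisy capture of Bob
print(recognize_participant(query, enrolled, {"alice": 0.2, "bob": 0.9}))
```
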
  • Patent number: 11915515
    Abstract: A facial verification method and apparatus is disclosed. The facial verification method includes detecting a face region in an input image, determining whether the detected face region represents a partial face, in response to a determination that the detected face region represents the partial face, generating a synthesized image by combining image information of the detected face region and reference image information, performing a verification operation with respect to the synthesized image and predetermined first registration information, and indicating whether facial verification of the input image is successful based on a result of the performed verification operation.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: February 27, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungju Han, Minsu Ko, Deoksang Kim, Jae-Joon Han
  • Patent number: 11915355
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method includes receiving a source frame of a source video, where the source frame includes a head and a face of a source actor, generating source pose parameters corresponding to a pose of the head and a facial expression of the source actor; receiving a target image including a target head and a target face of a target person, determining target identity information associated with the target head and the target face of the target person, replacing source identity information in the source pose parameters with the target identity information to obtain further source pose parameters, and generating an output frame of an output video that includes a modified image of the target face and the target head adopting the pose of the head and the facial expression of the source actor.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: February 27, 2024
    Assignee: Snap Inc.
    Inventors: Yurii Volkov, Pavel Savchenkov, Nikolai Smirnov, Aleksandr Mashrabov
  • Patent number: 11908128
    Abstract: Systems and methods process images to determine a skin condition severity analysis and to visualize a skin analysis, such as by using a deep neural network (e.g., a convolutional neural network) where the problem was formulated as a regression task with integer-only labels. Auxiliary classification tasks (for example, comprising gender and ethnicity predictions) are introduced to improve performance. Scoring and other image processing techniques may be used (e.g., in association with the model) to visualize results, such as by highlighting the analyzed image. It is demonstrated that the visualization of results, which highlights skin condition affected areas, can also provide perspicuous explanations for the model. A plurality (k) of data augmentations may be made to a source image to yield k augmented images for processing. Activation masks (e.g., heatmaps) produced from processing the k augmented images are used to define a final map to visualize the skin analysis.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: February 20, 2024
    Assignee: L'Oreal
    Inventors: Ruowei Jiang, Irina Kezele, Zhi Yu, Sophie Seite, Frederic Antoinin Raymond Serge Flament, Parham Aarabi, Mathieu Perrot, Julien Despois
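
A sketch of the visualization aggregation from patent 11908128 above: k augmented copies of the source image produce k activation masks, which are mapped back to source coordinates and averaged into a final map used to highlight affected areas. The horizontal-flip augmentation, the intensity-based placeholder mask, and the thresholded overlay are illustrative assumptions.

```python
import numpy as np

def activation_mask(image: np.ndarray) -> np.ndarray:
    """Placeholder for the model's per-pixel activation (here: normalized intensity)."""
    m = image.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-9)

def final_heatmap(source: np.ndarray, k: int = 4) -> np.ndarray:
    """Average the activation masks of k simple augmentations of the source image."""
    masks = []
    for i in range(k):
        flip = i % 2 == 1
        augmented = source[:, ::-1] if flip else source      # toy augmentation: horizontal flip
        mask = activation_mask(augmented)
        masks.append(mask[:, ::-1] if flip else mask)         # map the mask back to source coordinates
    return np.mean(masks, axis=0)

def highlight(source: np.ndarray, heatmap: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Return a copy of the image with above-threshold areas marked (set to 255)."""
    out = source.astype(float).copy()
    out[heatmap > threshold] = 255.0
    return out

face = np.random.default_rng(4).integers(0, 255, (64, 64))
overlay = highlight(face, final_heatmap(face))
```
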
  • Patent number: 11908211
    Abstract: Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for the selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: February 20, 2024
    Assignee: West Texas Technology Partners, LLC
    Inventor: Allen Yang Yang
  • Patent number: 11908177
    Abstract: The learning device 10D is trained to extract a moving image feature amount Fm, which is a feature amount relating to the moving image data Dm, when the moving image data Dm is inputted thereto, and is trained to extract a still image feature amount Fs, which is a feature amount relating to the still image data Ds, when the still image data Ds is inputted thereto. The first inference unit 32D performs a first inference regarding the moving image data Dm based on the moving image feature amount Fm. The second inference unit 34D performs a second inference regarding the still image data Ds based on the still image feature amount Fs. The learning unit 36D performs learning of the feature extraction unit 31D based on the results of the first inference and the second inference.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: February 20, 2024
    Assignee: NEC CORPORATION
    Inventors: Shuhei Yoshida, Makoto Terao
  • Patent number: 11909854
    Abstract: Systems and methods for secure distribution of biometric matching processing are provided. Certain configurations include homomorphic encrypting of captured biometric information. In some configurations, the biometric information is classified without decryption between a first identity class and a second identity class. The biometric information may be formed as a feature vector. A homomorphic encrypted feature vector may be formed by homomorphic encrypting of the biometric information.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: February 20, 2024
    Assignee: The Government of the United States of America, as represented by the Secretary of Homeland Security
    Inventor: Arun Vemury
  • Patent number: 11908239
    Abstract: The disclosure provides an image recognition network model training method, including: acquiring a first image feature corresponding to an image set; acquiring a first identity prediction result by using an identity classifier, and acquiring a first pose prediction result by using a pose classifier; obtaining an identity classifier according to the first identity prediction result and an identity tag, and obtaining a pose classifier according to the first pose prediction result and a pose tag; performing pose transformation on the first image feature by using a generator, to obtain a second image feature corresponding to the image set; acquiring a second identity prediction result by using the identity classifier, and acquiring a second pose prediction result by using the pose classifier; and training the generator.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zheng Ge, Ze Qun Jie, Hao Wang, Zhi Feng Li, Di Hong Gong, Wei Liu
  • Patent number: 11908233
    Abstract: A system, method, and apparatus for generating a normalization of a single two-dimensional image of an unconstrained human face. The system receives the single two-dimensional image of the unconstrained human face, generates an undistorted face based on the unconstrained human face by removing perspective distortion from the unconstrained human face via a perspective undistortion network, generates an evenly lit face based on the undistorted face by normalizing lighting of the undistorted face via a lighting translation network, and generates a frontalized and neutralized expression face based on the evenly lit face via an expression neutralization network.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: February 20, 2024
    Assignee: Pinscreen, Inc.
    Inventors: Koki Nagano, Huiwen Luo, Zejian Wang, Jaewoo Seo, Liwen Hu, Lingyu Wei, Hao Li
  • Patent number: 11909937
    Abstract: A processing system includes a hardware processor. The hardware processor obtains first read data of a sample image for image data included in user desired job data for image formation, searches job data stored in a storage section for data of a similar image to an image included in the obtained first read data to detect the data of the similar image, outputs a list of the detected data of the similar image, causes an image forming apparatus to form an image based on selected data selected from the data included in the list, obtains second read data of the formed image, and performs color adjustment on the selected data based on the first read data and the second read data.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: February 20, 2024
    Assignee: KONICA MINOLTA, INC.
    Inventor: Tanmoy Majumder
  • Patent number: 11908117
    Abstract: An image processing method implemented by a processor includes receiving an image, acquiring a target image that includes an object from the image, calculating an evaluation score by evaluating a quality of the target image, and detecting the object from the target image based on the evaluation score.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: February 20, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hao Feng, Jae-Joon Han, Changkyu Choi, Chao Zhang, Jingtao Xu, Yanhu Shan, Yaozu An
  • Patent number: 11908234
    Abstract: Some embodiments of a method may include obtaining an image of a real-world environment, determining an estimated illuminant spectral power distribution of an illuminant of the real-world environment, and detecting a region of the image representing human skin. The method may further include determining a representative skin color value of the region and selecting, based on the estimated illuminant spectral power distribution, a candidate skin reflectance spectrum whose corresponding color value most closely matches the representative skin color value. The method may further include updating the estimated illuminant spectral power distribution based on the representative skin color value and the selected candidate skin reflectance spectrum.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: February 20, 2024
    Assignee: InterDigital VC Holdings, Inc.
    Inventor: David Wyble
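
A sketch of the spectrum-selection step from patent 11908234 above: render each candidate skin reflectance spectrum under the current illuminant estimate (elementwise product of SPD and reflectance, integrated against sensor sensitivity curves) and pick the candidate whose rendered color is closest to the representative skin color measured in the image. The three-band toy spectra and RGB sensitivities are assumptions standing in for full spectral sampling.

```python
import numpy as np

# Toy 3-band "spectra" (short / medium / long wavelengths) instead of full 380-730 nm sampling.
RGB_SENSITIVITY = np.array([[0.1, 0.3, 0.9],    # R responds mostly to long wavelengths
                            [0.2, 0.9, 0.3],    # G to medium
                            [0.9, 0.3, 0.1]])   # B to short

def render_skin_color(illuminant_spd: np.ndarray, reflectance: np.ndarray) -> np.ndarray:
    """Color of skin with this reflectance spectrum under this illuminant."""
    return RGB_SENSITIVITY @ (illuminant_spd * reflectance)

def select_reflectance(representative_color: np.ndarray,
                       illuminant_spd: np.ndarray,
                       candidates: list[np.ndarray]) -> np.ndarray:
    """Pick the candidate whose rendered color most closely matches the measured skin color."""
    errors = [np.linalg.norm(render_skin_color(illuminant_spd, c) - representative_color)
              for c in candidates]
    return candidates[int(np.argmin(errors))]

skin_region = np.array([[180, 140, 120], [176, 138, 118]], dtype=float)   # toy skin pixels (RGB)
representative = skin_region.mean(axis=0) / 255.0                          # representative skin color value
estimated_illuminant = np.array([0.9, 1.0, 1.1])
candidate_spectra = [np.array([0.35, 0.45, 0.60]), np.array([0.20, 0.30, 0.45])]
chosen = select_reflectance(representative, estimated_illuminant, candidate_spectra)
```
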