Using A Facial Characteristic Patents (Class 382/118)
  • Patent number: 11073899
    Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 27, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
  • Patent number: 11074495
    Abstract: Specification covers new algorithms, methods, and systems for: Artificial Intelligence; the first application of General-AI (versus Specific, Vertical, or Narrow-AI) (as humans can do); addition of reasoning, inference, and cognitive layers/engines to learning module/engine/layer; soft computing; Information Principle; Stratification; Incremental Enlargement Principle; deep-level/detailed recognition, e.g., image recognition (e.g., for action, gesture, emotion, expression, biometrics, fingerprint, tilted or partial-face, OCR, relationship, position, pattern, and object); Big Data analytics; machine learning; crowd-sourcing; classification; clustering; SVM; similarity measures; Enhanced Boltzmann Machines; Enhanced Convolutional Neural Networks; optimization; search engine; ranking; semantic web; context analysis; question-answering system; soft, fuzzy, or un-sharp boundaries/impreciseness/ambiguities/fuzziness in class or set, e.g.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: July 27, 2021
    Assignee: Z ADVANCED COMPUTING, INC. (ZAC)
    Inventors: Lotfi A. Zadeh, Saied Tadayon, Bijan Tadayon
  • Patent number: 11074444
    Abstract: Methods, systems, and apparatuses, including computer programs encoded on computer storage media, for preview in iris recognition are provided. One of the methods includes: obtaining an iris image and a facial image of a user; determining a preview image corresponding to the iris image based on the facial image; and displaying the determined preview image.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: July 27, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventor: Xiaofeng Li
  • Patent number: 11068701
    Abstract: Apparatus for vehicle driver recognition includes: a NIR LED illuminator, configured to emit NIR light in the vehicle; a NIR light sensing unit, configured to capture reflected NIR light; an image controlling and processing unit, configured to coordinate the NIR LED illuminator and the NIR light sensing unit, and analyze the reflected NIR light to generate an image; a face detector, configured to determine that a human face exists in the image, and identify a face region; a face feature extractor, configured to analyze the face region to extract a feature vector representing the face region; a face feature dictionary, configured to store existing feature vectors; a face retrieval system, configured to generate an identification result, indicating whether a similarity between the feature vector and any of the existing feature vectors is greater than a first threshold; and a user interface, configured to display the identification result.
    Type: Grant
    Filed: June 13, 2019
    Date of Patent: July 20, 2021
    Assignee: XMOTORS.AI INC.
    Inventors: Cong Zhang, Tianpeng Feng, Cheng Lu, Yandong Guo, Jun Ma
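    Example sketch: the retrieval step in the entry above reduces to comparing an extracted face feature vector against a stored dictionary and accepting the closest entry only when its similarity clears the first threshold. The vector size, cosine similarity measure, and threshold value below are illustrative assumptions, not details from the patent.
    ```python
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two face feature vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve_identity(query, dictionary, threshold=0.6):
        """Return the enrolled ID whose feature vector is most similar to the
        query, or None if no similarity exceeds the (illustrative) threshold."""
        best_id, best_sim = None, -1.0
        for driver_id, enrolled in dictionary.items():
            sim = cosine_similarity(query, enrolled)
            if sim > best_sim:
                best_id, best_sim = driver_id, sim
        return best_id if best_sim > threshold else None

    # Hypothetical 128-D embeddings standing in for the face feature extractor output.
    gallery = {"driver_A": np.random.rand(128), "driver_B": np.random.rand(128)}
    print(retrieve_identity(np.random.rand(128), gallery))
    ```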
  • Patent number: 11069152
    Abstract: Provided is a face pose correction apparatus for correcting a face pose, using a person-specific frontal model specialized for input three-dimensional (3D) facial data. The face pose correction apparatus may comprise: a generation unit for generating corrected first facial data by performing a 3D rigid transformation of input 3D facial data on the basis of a pre-stored 3D standard facial model; and a correction unit for generating a first subject-specialized frontal model which is a bilateral symmetry model of the first facial data, and correcting the first facial data by performing the 3D rigid transformation of the first facial data on the basis of the generated first subject-specialized frontal model.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: July 20, 2021
    Assignee: KOREA INSTITUTE OF ORIENTAL MEDICINE
    Inventors: Jun Su Jang, Jun Hyeong Do
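    Example sketch: the 3D rigid transformation at the core of the correction step in the entry above can be computed with the standard Kabsch/SVD alignment between the input landmarks and a frontal model. The 68-point layout and the demo rotation are assumptions for illustration only.
    ```python
    import numpy as np

    def rigid_transform(source, target):
        """Least-squares rigid (rotation + translation) alignment of two
        N x 3 point sets using the Kabsch/SVD method."""
        src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        return R, t

    # Demo: recover a known rotation applied to a hypothetical 68-point scan.
    rng = np.random.default_rng(1)
    frontal_model = rng.normal(size=(68, 3))        # stand-in frontal model
    theta = np.radians(20)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    input_scan = frontal_model @ Rz.T + np.array([5.0, -2.0, 1.0])
    R, t = rigid_transform(input_scan, frontal_model)
    corrected = input_scan @ R.T + t                # approximately frontal_model
    ```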
  • Patent number: 11068069
    Abstract: A system and method for facial and gesture recognition are provided, and more particularly a system and method for facial and gesture recognition using a heterogeneous convolutional neural network (CNN).
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: July 20, 2021
    Assignee: DUS Operating Inc.
    Inventors: Indraneel Krishna Page, Iyad Faisal Ghazi Mansour
  • Patent number: 11069210
    Abstract: Some embodiments provide for obtaining image data representative of a field of view of a camera as captured by the camera of an A/V recording and communication device. The image data may be analyzed and, based at least in part on the analysis, it may be determined that the image data is representative of a first facial image of a person and a second facial image of the person. From the facial images, it may be determined that the first facial image is of higher quality than the second facial image and, based on this determination, a frame may be selected that is represented by the image data and corresponds to the first facial image. A notification may be generated that includes a portion of the image data that represents the frame, and the notification may be transmitted to a client device associated with the A/V recording and communication device.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: July 20, 2021
    Assignee: Amazon Technologies, Inc.
    Inventor: Mark Troughton
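    Example sketch: the entry above selects the higher-quality of two facial images but does not disclose the quality metric; variance of the Laplacian is a common sharpness proxy and is used here purely as a stand-in.
    ```python
    import cv2
    import numpy as np

    def sharpness(face_crop):
        """Variance of the Laplacian as a simple focus/quality proxy (assumed metric)."""
        gray = cv2.cvtColor(face_crop, cv2.COLOR_BGR2GRAY)
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    def pick_best_face_frame(face_crops):
        """Return the index of the face crop with the highest quality score,
        i.e. the frame to include in the notification."""
        return int(np.argmax([sharpness(c) for c in face_crops]))
    ```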
  • Patent number: 11069168
    Abstract: Disclosed are various embodiments for controlling access to resources by a client device. Methods may include receiving a user request to access a resource on the device and determining whether the resource requires a facial capture. If the resource requires a facial capture, a camera of the device may be automatically activated to capture an image and the resource may be rendered on the device. In some cases, access to the resource may be limited based on whether the image includes a face or not. A record associating the image and the requested resource may be stored, for example, on the device or on a remote server.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: July 20, 2021
    Assignee: AirWatch, LLC
    Inventor: Erich Stuntebeck
  • Patent number: 11068572
    Abstract: A first similarity is calculated between a first feature amount of first biological information acquired from a first person among multiple persons to be subjected to authentication and a second feature amount of second biological information acquired from a second person among the multiple persons to be subjected to authentication, and multiple second similarities between the first feature amount and multiple registered feature amounts of biological information acquired from multiple registered persons are calculated. When authentication of the first person is successful, first registered feature amounts to be subjected to similarity calculation are selected from among the multiple registered feature amounts, based on the first similarity and the multiple second similarities. Third similarities are calculated between the second feature amount and the first registered feature amounts, and authentication on the second person to be subjected to authentication is executed based on the third similarities.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: July 20, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Narishige Abe
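    Example sketch: in the entry above, selection of the "first registered feature amounts" is driven by the probe-to-probe similarity and the probe-to-gallery similarities; the band test below is one plausible selection rule, not the rule claimed in the patent, and the thresholds are illustrative.
    ```python
    import numpy as np

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def authenticate_second_person(f1, f2, gallery, auth_threshold=0.7, band=0.15):
        """Prune the gallery for the second probe using similarities already
        computed for the first probe, then authenticate against the reduced set."""
        s12 = cos(f1, f2)                                   # first similarity
        s1g = {k: cos(f1, v) for k, v in gallery.items()}   # second similarities
        # Band test: an assumed selection rule based on the similarities above.
        candidates = {k: gallery[k] for k, s in s1g.items() if abs(s - s12) <= band}
        third = {k: cos(f2, v) for k, v in candidates.items()}  # third similarities
        if not third:
            return None
        best = max(third, key=third.get)
        return best if third[best] >= auth_threshold else None
    ```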
  • Patent number: 11069031
    Abstract: A control method for an image processing apparatus includes setting a virtual light source, analyzing characteristics of a partial region of a subject in an input image acquired through image capturing, smoothing at least a portion of the input image on the basis of information about a result of the analysis, generating, on the basis of the smoothed input image, reflected color components in a case where the subject is irradiated with light from the virtual light source, and performing correction based on the reflected color components on the input image.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: July 20, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kaori Tajima
  • Patent number: 11068751
    Abstract: An image processing device has an extraction unit that extracts image data by using a predetermined sliding window in an original image; a learning unit that generates a prediction model by performing machine learning on learning data including the image data by using a teaching signal representing classification of the image data; and a select unit that selects, out of other image data different from the image data, additional image data for which an error in classification based on the prediction model is larger than a predetermined threshold and adds the selected image data to the learning data, and the learning unit updates the prediction model by repeating the machine learning on the learning data to which the additional image data has been added.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: July 20, 2021
    Assignee: NEC CORPORATION
    Inventor: Hikaru Nakayama
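    Example sketch: one round of the update loop described in the entry above, i.e. adding pool samples whose classification error under the current model exceeds a threshold and refitting. The logistic-regression model, the error definition (1 minus the probability of the true class), and the toy data are assumptions for illustration.
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def add_hard_examples(model, train_x, train_y, pool_x, pool_y,
                          error_threshold=0.5):
        """Move high-error pool samples into the learning data and refit."""
        proba = model.predict_proba(pool_x)
        errors = 1.0 - proba[np.arange(len(pool_y)), pool_y]  # 1 - P(true class)
        hard = errors > error_threshold
        train_x = np.vstack([train_x, pool_x[hard]])
        train_y = np.concatenate([train_y, pool_y[hard]])
        model.fit(train_x, train_y)
        return model, train_x, train_y

    # Toy usage with random "sliding window" crops flattened to vectors.
    rng = np.random.default_rng(0)
    train_x, train_y = rng.normal(size=(40, 16)), rng.integers(0, 2, 40)
    pool_x, pool_y = rng.normal(size=(100, 16)), rng.integers(0, 2, 100)
    model = LogisticRegression(max_iter=200).fit(train_x, train_y)
    model, train_x, train_y = add_hard_examples(model, train_x, train_y,
                                                pool_x, pool_y)
    ```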
  • Patent number: 11068697
    Abstract: Methods and apparatuses for video-based facial recognition, devices, media, and programs can include: forming a face sequence for face images, in a video, appearing in multiple continuous video frames and having positions in the multiple video frames meeting a predetermined displacement requirement, wherein the face sequence is a set of face images of a same person in the multiple video frames; and performing facial recognition for the face sequence by using a preset face library at least according to face features in the face sequence.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: July 20, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Wentao Liu, Chen Qian
  • Patent number: 11068741
    Abstract: Techniques and systems are provided for determining features for one or more objects in one or more video frames. For example, an image of an object, such as a face, can be received, and features of the object in the image can be identified. A size of the object can be determined based on the image, for example based on inter-eye distance of a face. Based on the size, either a high-resolution set of features or a low-resolution set of features is selected to compare to the features of the object. The object can be identified by matching the features of the object to matching features from the selected set of features.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: July 20, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Lei Wang, Ning Bi, Ying Chen
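    Example sketch: the size-based switch in the entry above between the high-resolution and low-resolution feature sets, using inter-eye distance in pixels as the size estimate. The 60-pixel cut-off is an illustrative assumption.
    ```python
    import numpy as np

    def choose_feature_set(left_eye, right_eye, high_res_set, low_res_set,
                           eye_dist_threshold=60.0):
        """Pick which enrolled feature set to match the detected face against,
        based on apparent face size derived from inter-eye distance.
        The default cut-off is illustrative, not from the patent."""
        d = float(np.linalg.norm(np.asarray(left_eye) - np.asarray(right_eye)))
        return high_res_set if d >= eye_dist_threshold else low_res_set
    ```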
  • Patent number: 11070694
    Abstract: An image processing apparatus includes: an image reading unit that reads an image of a document according to an instruction of an operator; an acquiring unit that acquires information on the operator performing document reading, or information on motions of the operator performed on the image processing apparatus in order to perform document reading; and a display that performs display for receiving a line-of-sight input for document reading from the operator when the information acquired by the acquiring unit satisfies a predetermined condition.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: July 20, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Tomoyo Nishida, Yuichi Kawata, Hideki Yamasaki, Ryoko Saitoh, Yoshifumi Bando, Kensuke Okamoto
  • Patent number: 11064087
    Abstract: An information processing apparatus includes a registration unit configured to register information on a certain person before acquiring image data to be printed, a specification unit configured to specify a number of certain persons in the image data to be printed based on the image data to be printed and the information on the certain person, a determination unit configured to determine a number of copies to be printed based on a number of people obtained by subtracting the specified number of certain persons from a number of people detected from the image data to be printed, and an instruction unit configured to give an instruction for printing the determined number of copies.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: July 13, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Takehiro Yoshida, Yusuke Shirakawa, Hiroshi Atobe, Yusuke Haruyama
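    Example sketch: in the entry above, the copy count is simply the number of detected people minus the registered "certain persons" found among them; clamping at zero is an added assumption not stated in the abstract.
    ```python
    def copies_to_print(num_detected_people, num_certain_persons):
        """Copies = detected people minus registered 'certain persons'
        (clamped at zero, which is an assumption)."""
        return max(num_detected_people - num_certain_persons, 0)

    # Example: 12 faces detected, 2 of them match registered persons -> 10 copies.
    assert copies_to_print(12, 2) == 10
    ```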
  • Patent number: 11062007
    Abstract: A method and system for improving automated software execution is provided. The method includes receiving, in real time from a video retrieval device, visual data associated with a user of a hardware device. The user is identified with respect to the visual data. Internal software applications and hardware structures are scanned in real time, and relationships between a group of Web based software applications and a group of internal software applications and hardware structures authorized for access by the user are determined. Information associated with network and hardware device access by the user is analyzed, and actions for execution with respect to access to the group of Web based software applications and internal software applications and hardware structures are determined. In response, the automated actions are executed with respect to access to the group of Web based software applications and internal software applications and hardware structures.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: July 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: Giuseppe Ciano, Gianluca Della Corte, Giuseppe Longobardi, Antonio M. Sgro
  • Patent number: 11062160
    Abstract: A server analyzes feature information including a whole body and a face of a person reflected in each of video images from a plurality of monitoring cameras and stores a whole body image and a face image as an analysis result. In response to designation of the whole body image and the face image of a person of interest, a client terminal sends a request for execution of each of first collation processing and second collation processing to the server. When a person matching at least one of the whole body image and the face image of the person of interest is specified by at least one of the first collation processing and the second collation processing, the server sends an alarm notification to the client terminal that the person of interest has been found.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: July 13, 2021
    Assignee: PANASONIC I-PRO SENSING SOLUTIONS CO., LTD.
    Inventors: Takamitsu Arai, Takashi Kamio, Koji Kawamoto, Yuumi Miyake, Eisaku Miyata
  • Patent number: 11062176
    Abstract: Computerized techniques for real-time object detection from video data include: defining an analysis profile comprising an initial number of analysis cycles dedicated to each of a plurality of detectors, each detector being independently configured to detect objects according to a unique set of analysis parameters; receiving a plurality of frames of digital video data, the digital video data depicting an object; analyzing the plurality of frames using the plurality of detectors and in accordance with the analysis profile, wherein analyzing the plurality of frames produces an analysis result for each of the plurality of detectors; determining a confidence score for each of the analysis results; and updating the analysis profile by adjusting the number of analysis cycles dedicated to at least one of the plurality of detectors based on the confidence scores. Corresponding systems and computer program products are also disclosed.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: July 13, 2021
    Assignee: KOFAX, INC.
    Inventors: Jiyong Ma, Stephen M. Thompson, Jan W. Amtrup
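    Example sketch: the profile in the entry above maps each detector to a cycle budget that is adjusted from the confidence scores. The specific policy below (shift a cycle from the most confident detector to the least confident one) is an assumption; the patent only states that the budgets are adjusted based on the scores.
    ```python
    def update_profile(profile, confidences, step=1, min_cycles=1):
        """Reallocate analysis cycles between detectors based on confidence.
        Policy is an assumed example, not the claimed rule."""
        best = max(confidences, key=confidences.get)
        worst = min(confidences, key=confidences.get)
        if best != worst and profile[best] - step >= min_cycles:
            profile[best] -= step
            profile[worst] += step
        return profile

    # Initial profile: equal cycles for two hypothetical detectors.
    profile = {"face": 4, "text": 4}
    profile = update_profile(profile, {"face": 0.95, "text": 0.40})
    # -> {"face": 3, "text": 5}
    ```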
  • Patent number: 11062578
    Abstract: An information processing device and a determination method for determining whether a person other than the persons determined to be permitted to enter each zone has entered the zone are provided. The information processing device has a communication section for receiving face image data from cameras for photographing a respective plurality of zones in a building and a control section for collating the face image data with the registered face image data of the persons permitted to enter each zone and for determining whether the entry of the person corresponding to the face image data is permitted or not.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: July 13, 2021
    Assignee: PANASONIC I-PRO SENSING SOLUTIONS CO., LTD.
    Inventors: Takashi Kamio, Kosuke Shinozaki, Koji Kawamoto, Hiromichi Sotodate, Yuiko Takase, Masashige Tsuneno, Eisaku Miyata, Nobuhito Seki
  • Patent number: 11062124
    Abstract: The application discloses a face pose detection method, which includes the steps of: extracting N face feature points from a face image through a face detection algorithm; extracting key feature points from the N face feature points; and calculating face pose information such as a rotation direction and a rotation angle of a face around a coordinate axis according to coordinate values of the key feature points. According to the application, a real-time detection of a face pose is realized by calculating the pose information of the face in the face image with the coordinate values of the feature points of the face. Accordingly, the present application also provides a calculating device and a computer-readable storage medium.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: July 13, 2021
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Lin Chen, Guohui Zhang
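    Example sketch: a common way to turn the key feature point coordinates in the entry above into rotation angles is a PnP solve against a generic 3D head model; the model coordinates, focal-length guess, and OpenCV-based approach are assumptions, since the abstract does not specify the calculation.
    ```python
    import cv2
    import numpy as np

    # Approximate 3D positions (mm) of six key landmarks on a generic head
    # model; the values are illustrative, not taken from the patent.
    MODEL_3D = np.array([
        (0.0, 0.0, 0.0),        # nose tip
        (0.0, -63.6, -12.5),    # chin
        (-43.3, 32.7, -26.0),   # left eye outer corner
        (43.3, 32.7, -26.0),    # right eye outer corner
        (-28.9, -28.9, -24.1),  # left mouth corner
        (28.9, -28.9, -24.1),   # right mouth corner
    ], dtype=np.float64)

    def face_pose(key_points_2d, frame_size):
        """Estimate (pitch, yaw, roll) in degrees from six 2D key points."""
        h, w = frame_size
        focal = w  # crude focal-length guess; no lens calibration assumed
        camera = np.array([[focal, 0, w / 2],
                           [0, focal, h / 2],
                           [0, 0, 1]], dtype=np.float64)
        ok, rvec, _ = cv2.solvePnP(MODEL_3D, key_points_2d.astype(np.float64),
                                   camera, np.zeros((4, 1)))
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
        pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
        yaw = np.degrees(np.arctan2(-R[2, 0], sy))
        roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
        return pitch, yaw, roll
    ```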
  • Patent number: 11062127
    Abstract: An example method may include applying an automated face detection program implemented on a computing device to a plurality of training digital images associated with a particular TV program to identify a sub-plurality of the training digital images, each containing a single face of a particular person associated with the particular TV program. A set of feature vectors determined for the sub-plurality may be used to train a computational model of a face recognition program for recognizing the particular person in any given digital image. The face recognition program and the computational model may be applied to a runtime digital image associated with the particular TV program to recognize the particular person in the runtime digital image, together with geometric coordinates. The runtime digital image may be stored together with information identifying the particular person and corresponding geometric coordinates of the particular person in the runtime digital image.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: July 13, 2021
    Assignee: Gracenote, Inc.
    Inventors: Jeffrey Scott, Aneesh Vartakavi
  • Patent number: 11055889
    Abstract: According to an embodiment, an electronic device comprises a camera, a display, and a processor configured to control the electronic device to: obtain a plurality of images including a first image and a second image corresponding to a user's face using the camera, display, on the display, a first avatar selected from among at least one 3D avatar including model information related to a motion and created based on 3D modeling, determine a degree of variation in at least some feature points among the plurality of feature points of the face based on a comparison between the plurality of feature points of the face included in each of the first image and the second image, determine a weight for at least some of a plurality of reference models related to a motion of the first avatar based at least on the degree of variation determined, and display, on the display, the first avatar on which the motion is performed based on the plurality of reference models and the weight.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: July 6, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wooyong Lee, Hyejin Kang, Jiyoon Park, Jaeyun Song, Junho An, Jonghoon Won, Hyoungjin Yoo, Minsheok Choi
  • Patent number: 11055878
    Abstract: A person counting method and a person counting system are provided. The method includes extracting a group of person images to obtain a first image set; dividing the first image set into first and second subsets based on whether a related image exists in a second image set, and reusing a person ID of the related image; estimating posture patterns of images in the first subset, and storing the images in the first subset into an image library based on person IDs and the posture patterns; and selecting a target image whose similarity to an image in the second subset is highest from the image library, reusing a person ID of the target image when the similarity is greater than a threshold, and assigning a new person ID and incrementing a person counter by 1 when the similarity is not greater than the threshold.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: July 6, 2021
    Assignee: RICOH COMPANY, LTD.
    Inventors: Hong Yi, Haijing Jia, Weitao Gong, Wei Wang
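    Example sketch: the ID-reuse rule in the entry above reduces to a best-match search over the image library followed by a threshold test; the cosine similarity and the flat (non-posture-binned) library below are simplifying assumptions.
    ```python
    import numpy as np

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def assign_person_id(feature, library, counter, next_id, threshold=0.8):
        """Reuse the best-matching person ID if its similarity clears the
        threshold; otherwise assign a new ID and increment the counter.
        Posture-pattern binning from the patent is omitted for brevity."""
        best_id, best_sim = None, -1.0
        for pid, feats in library.items():
            for f in feats:
                s = cos(feature, f)
                if s > best_sim:
                    best_id, best_sim = pid, s
        if best_id is not None and best_sim > threshold:
            library[best_id].append(feature)
            return best_id, counter, next_id
        library[next_id] = [feature]
        return next_id, counter + 1, next_id + 1
    ```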
  • Patent number: 11055576
    Abstract: Systems, methods, and other embodiments described herein relate to improving querying of a visual dataset of images through implementing system-aware cascades. In one embodiment, a method includes enumerating a set of cascade classifiers that are each separately comprised of transformation modules and machine learning modules arranged in multiple pairs. Classifiers of the set of cascade classifiers are configured to extract content from an image according to a query. The method includes selecting a query classifier from the set of cascade classifiers based, at least in part, on system costs that characterize computational resources consumed by the classifiers of the set of cascade classifiers. The computational resources include at least data handling costs. The method includes identifying content within the image using the query classifier.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: July 6, 2021
    Assignees: Toyota Research Institute, Inc., The Regents of The University of Michigan
    Inventors: Michael Robert Anderson, Thomas Friedrich Wenisch, German Ros Sanchez
  • Patent number: 11057557
    Abstract: Techniques for starting an electronic communication based on a captured image are disclosed herein. In some embodiments, a computer system detects that an image has been captured by a camera on a first mobile device of a first user, where the captured image has been captured by the camera at a point in time, and, in response to detecting that the image has been captured, the computer system identifies at least one other user in the captured image. In some example embodiments, the computer system transmits a message to an electronic destination associated with the other user(s) based on the identifying of the other user(s) in the captured image.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: July 6, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christopher Szeto, Viet Hung Nguyen, Jingwei Huang, Haowen Ning, Haoyang Li
  • Patent number: 11057533
    Abstract: Provided is an image forming apparatus capable of improving operability. A recognizing unit recognizes surroundings of the image forming apparatus itself from image data captured by the image capturing unit. An additional information collecting unit collects additional information that is possibly associated with the target operator recognized by the recognizing unit. A situation inferring unit infers a situation of the image forming apparatus itself from recognition information recognized by the recognizing unit. At this time, the situation inferring unit may infer a situation of a target operator from additional information collected by the additional information collecting unit and recognition information of the target operator. A response unit changes a response and/or an operation corresponding to the situation inferred by the situation inferring unit.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: July 6, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Yoshihiro Osada
  • Patent number: 11049310
    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: June 29, 2021
    Assignee: Snap Inc.
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
  • Patent number: 11048913
    Abstract: The disclosure provides a focusing method, device, and computer apparatus for realizing a clear human face. The focusing method includes: acquiring first position information of a human face in a current frame of an image to be captured by performing face recognition on the image, after a camera finishes focusing; acquiring second position information of the human face in a next frame, before shooting the image; determining whether a position of the human face changes, based on the first and second position information; resetting an ROI of the human face when the position changes; and refocusing on the human face based on the ROI. The disclosure can track the human face in real time and trigger the camera to refocus after the human face deviates from a previous focusing position, thereby making the human face in the captured photograph clear.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: June 29, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Shijie Zhuo, Xiaopeng Li
  • Patent number: 11050920
    Abstract: Methods, apparatuses, mobile terminals, and cameras for recognizing a photographed object are provided. The recognition method includes starting a photographing device of a terminal to auto-focus a photographed object in response to receiving a photographing instruction; obtaining a post-focusing focal distance of the photographing device after the photographing device successfully focuses on the photographed object; and determining whether an object type of the photographed object is a photograph based on the post-focusing focal distance. The present application resolves the technical problem of the high complexity associated with the schemes for recognizing a remade photograph in existing technologies.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: June 29, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Wentao Yu, Chen Zhang
  • Patent number: 11048916
    Abstract: Systems, devices, media, and methods are presented for generating facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial landmarks within the portion of the face. The systems and methods determine one or more characteristics representing the portion of the face, in response to detecting the portion of the face. Based on the one or more characteristics and the set of facial landmarks, the systems and methods generate a representation of a face.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: June 29, 2021
    Assignee: Snap Inc.
    Inventors: Maksim Gusarov, Igor Kudriashov, Valerii Filev, Sergei Kotcur
  • Patent number: 11042725
    Abstract: A method for selecting frames used in face processing includes capturing video data featuring a face of an individual and determining with a processing unit at least one image quality indicator for at least some frames in the video data. The quality indicator is used for selecting a subset of the frames, and a sequence of frames is determined corresponding to a movement of a body portion detected in the video data and/or corresponding to a response window during which a response to a challenge should be given. At least one second frame is added to the subset within a predefined interval before or after the sequence, and the selected frames are stored in a memory.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: June 22, 2021
    Assignee: Keylemon SA
    Inventors: Yann Rodriguez, François Moulin, Sébastien Piccand, Sara Sedlar
  • Patent number: 11042728
    Abstract: An electronic apparatus is provided. The electronic apparatus includes a camera configured to obtain a user image by capturing an image of a user, a memory configured to store one or more instructions, and a processor configured to execute the one or more instructions stored in the memory, wherein the processor is further configured to, by executing the one or more instructions, recognize the user from a face region of the user image by using a first recognition model learned based on face information of a plurality of users, extract additional feature information regarding the recognized user from the user image, allow the first recognition model to additionally learn based on the extracted additional feature information, recognize the user from a person region of the user image by using an additionally learned second recognition model, and output a recognition result of the second recognition model.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: June 22, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Heung-Woo Han, Seong-Min Kang
  • Patent number: 11042727
    Abstract: In one aspect, a device may include at least one processor and storage accessible to the at least one processor. The storage may include instructions executable by the at least one processor to receive input from a camera indicating a first face of a first user. The instructions may also be executable to access first facial recognition data indicating one or more enrolled faces and to access second facial recognition data indicating time-variant data, where the second facial recognition data may not establish non-time-variant face data. The instructions may then be executable to select first time-variant data associated with the first user from the second facial recognition data and to authenticate the first user based on the first time-variant data and enrolled face data for the first user identified from the first facial recognition data.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: June 22, 2021
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Robert J. Kapinos, Robert Norton, Russell Speight VanBlon, Scott Wentao Li
  • Patent number: 11038909
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for anomaly detection and recovery. An apparatus to isolate a first controller in an autonomous vehicle includes a first controller to control a reference signal of the autonomous vehicle via a communication bus, a second controller to control the reference signal of the autonomous vehicle when the first controller is compromised, and a message neutralizer to neutralize messages transmitted by the first controller when the first controller is compromised, the neutralized messages to cause the first controller to become isolated from the communication bus.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: June 15, 2021
    Assignee: Intel Corporation
    Inventors: Marcio Juliato, Liuyang Lily Yang, Manoj Sastry, Christopher Gutierrez, Shabbir Ahmed, Vuk Lesi
  • Patent number: 11036970
    Abstract: A computer implemented method for gender classification by applying feature learning and feature engineering to face images. The method includes conducting feature learning on a face image comprising feeding the face image into a first convolution neural network to obtain a first decision, conducting feature engineering on a face image, comprising the steps of automatically detecting facial landmarks in the face image, transforming the facial features into a two-dimensional matrix, and feeding the two-dimensional matrix into a second convolution neural network to obtain a second decision, computing a hybrid decision based on the first decision and the second decision, and classifying gender of the face image in accordance with the hybrid decision.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: June 15, 2021
    Assignee: Shutterfly, LLC
    Inventor: Leo Cyrus
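    Example sketch: the entry above states that a hybrid decision is computed from the two CNN decisions but not how; a weighted average of the two softmax outputs is one simple combination rule, and the class ordering below is an assumption.
    ```python
    import numpy as np

    def hybrid_gender_decision(p_learned, p_engineered, weight=0.5):
        """Combine the feature-learning and feature-engineering branch outputs
        by weighted averaging and take the arg-max class."""
        hybrid = (weight * np.asarray(p_learned)
                  + (1.0 - weight) * np.asarray(p_engineered))
        return ("female", "male")[int(np.argmax(hybrid))]  # class order assumed

    # Example: the two branches disagree; the hybrid score settles the call.
    print(hybrid_gender_decision([0.62, 0.38], [0.45, 0.55]))
    ```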
  • Patent number: 11036285
    Abstract: A method (200) for mixed reality interactions with an avatar comprises the steps of receiving (210) one or more of an audio input through a microphone (104) and a visual input through a camera (106), displaying (220) one or more avatars (110) that interact with a user through one or more of an audio output from one or more speakers (112) and a video output from a display device (108), and receiving (230) one or more of a further audio input through the microphone (104) and a further visual input through the camera (106). Further, a system (600) for mixed reality interactions with an avatar is also provided.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: June 15, 2021
    Inventors: Abhinav Aggarwal, Raghav Aggarwal
  • Patent number: 11036975
    Abstract: Described herein is a human pose prediction system and method. An image comprising at least a portion of a human body is received. A trained neural network is used to predict one or more human features (e.g., joints/aspects of a human body) within the received image, and to predict one or more human poses in accordance with the predicted one or more human features. The trained neural network can be an end-to-end trained, single stage deep neural network. An action is performed based on the predicted one or more human poses. For example, the human pose(s) can be displayed as an overlay on the received image.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 15, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Noranart Vesdapunt, Baoyuan Wang, Ying Jin, Pierrick Arsenault
  • Patent number: 11030439
    Abstract: A facial expression synthesis method is provided. The method includes obtaining a to-be-processed facial image of a target object, and processing the to-be-processed facial image by using a face-recognition operation, to obtain skin color information of the to-be-processed facial image; screening out a target expression-material image, from a plurality of expression-material images in an expression-material image library, matching the skin color information; extracting a region image corresponding to a target synthesis region in the target expression-material image; and performing Poisson fusion processing on the region image and the to-be-processed facial image to fuse the region image with the to-be-processed facial image, so as to obtain a target facial image of the target object.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: June 8, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yongsen Zheng, Xiaolong Zhu
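    Example sketch: OpenCV's seamlessClone implements Poisson blending, so the fusion step in the entry above can be approximated as below; the skin-color screening of material images and the region selection are omitted, and the file names and center point are hypothetical.
    ```python
    import cv2
    import numpy as np

    def fuse_expression_region(face_img, material_region, center):
        """Blend an expression-material region into the target face with
        Poisson (seamless) cloning."""
        mask = 255 * np.ones(material_region.shape[:2], dtype=np.uint8)
        return cv2.seamlessClone(material_region, face_img, mask,
                                 center, cv2.NORMAL_CLONE)

    # face = cv2.imread("target_face.jpg")        # hypothetical inputs
    # patch = cv2.imread("material_mouth.jpg")
    # result = fuse_expression_region(face, patch, center=(320, 420))
    ```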
  • Patent number: 11030470
    Abstract: A processor-implemented liveness test method includes: obtaining a color image including an object and an infrared (IR) image including the object; performing a first liveness test using the color image; performing a second liveness test using the IR image; and determining a liveness of the object based on a result of the first liveness test and a result of the second liveness test.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: June 8, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaejoon Han, Youngjun Kwak, ByungIn Yoo, Changkyu Choi
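    Example sketch: the entry above only says the final liveness is determined from the color-image and IR-image test results; a conservative rule is to require both to pass, with the thresholds below as placeholders.
    ```python
    def combined_liveness(color_score, ir_score, color_thr=0.5, ir_thr=0.5):
        """Declare the object live only if both the color test and the IR
        test pass (an assumed AND rule; thresholds are placeholders)."""
        return color_score >= color_thr and ir_score >= ir_thr
    ```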
  • Patent number: 11030481
    Abstract: A method for occlusion detection on a target object is provided. The method includes: determining, based on a pixel value of each pixel in a target image, first positions of a first feature and second positions of a second feature in the target image. The first feature is an outer contour feature of a target object in the target image, and the second feature is a feature of an interfering subobject in the target object. The method also includes: determining, based on the first positions, an image region including the target object; dividing, based on the second positions, the image region into at least two detection regions; and determining, according to a pixel value of a target detection region, whether the target detection region meets a preset unoccluded condition, to determine whether the target object is occluded. The target detection region is any one of the at least two detection regions.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: June 8, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Cheng Yi, Bin Li
  • Patent number: 11030798
    Abstract: A computing device obtains a digital image depicting an individual and determines lighting conditions of the content in the digital image. The computing device obtains selection of a makeup effect from a user and determines surface properties of the selected makeup effect. The computing device applies a facial alignment technique to the facial region of the individual and defines a region of interest corresponding to the makeup effect. The computing device extracts lighting conditions of the region of interest and adjusts visual characteristics of the makeup effect based on the surface properties of the makeup effect and the lighting conditions of the region of interest. The computing device performs virtual application of the adjusted makeup effect to the region of interest in the digital image.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: June 8, 2021
    Assignee: PERFECT MOBILE CORP.
    Inventor: Chia-Chen Kuo
  • Patent number: 11030461
    Abstract: The disclosed computer-implemented method may include receiving an input indicating that a picture is to be taken using a camera on an electronic device. The method may next include taking the picture with the camera, and storing the associated picture data. Next, the method may include accessing the picture data to recognize the persons in the picture based on facial features associated with those persons. Still further, the method may include creating a group for the recognized persons, where the group is associated with the picture taken by the camera, and generating a collaborative group storyline for the created group that allows members of the group to add stories to the collaborative group storyline. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: June 8, 2021
    Assignee: Facebook, Inc.
    Inventor: Debashish Paul
  • Patent number: 11023715
    Abstract: The present disclosure provides a method and apparatus for expression recognition, which is applied to the field of image processing. The method includes acquiring a three-dimensional image of a target face and a two-dimensional image of the target face, where the three-dimensional image includes first depth information of the target face and first color information of the target face, and the two-dimensional image includes second color information of the target face. A first neural network classifies an expression of the target face according to the first depth information, the first color information, the second color information, and a first parameter. The first parameter includes at least one facial expression category and first parameter data for identifying an expression category of the target face. The disclosed method and device can accurately recognize facial expressions under different facial positions and different illumination conditions.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: June 1, 2021
    Assignee: ArcSoft Corporation Limited
    Inventors: Han Qiu, Fang Deng, Kangning Song
  • Patent number: 11023714
    Abstract: A suspiciousness degree estimation model generation device includes: a clustering unit that performs clustering on an input face image based on the feature extracted from the face image; and a suspiciousness degree estimation model generation unit that generates a suspiciousness degree estimation model used for estimating the suspiciousness degree of an estimation target person, based on the result of clustering by the clustering unit and suspiciousness degree information that is previously associated with a face image included by the clustering result and that shows the suspiciousness degree of a person shown by the face image.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: June 1, 2021
    Assignee: NEC CORPORATION
    Inventor: Masahiro Saikou
  • Patent number: 11023712
    Abstract: A suspiciousness degree estimation model generation device includes: a clustering unit that performs clustering on an input face image based on the feature extracted from the face image; and a suspiciousness degree estimation model generation unit that generates a suspiciousness degree estimation model used for estimating the suspiciousness degree of an estimation target person, based on the result of clustering by the clustering unit and suspiciousness degree information that is previously associated with a face image included by the clustering result and that shows the suspiciousness degree of a person shown by the face image.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: June 1, 2021
    Assignee: NEC CORPORATION
    Inventor: Masahiro Saikou
  • Patent number: 11023713
    Abstract: A suspiciousness degree estimation model generation device includes: a clustering unit that performs clustering on an input face image based on the feature extracted from the face image; and a suspiciousness degree estimation model generation unit that generates a suspiciousness degree estimation model used for estimating the suspiciousness degree of an estimation target person, based on the result of clustering by the clustering unit and suspiciousness degree information that is previously associated with a face image included by the clustering result and that shows the suspiciousness degree of a person shown by the face image.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: June 1, 2021
    Assignee: NEC CORPORATION
    Inventor: Masahiro Saikou
  • Patent number: 11024101
    Abstract: A method of generating an augmented reality LENS comprises: causing to display a list of LENS categories on a display screen of a client device; receiving a user choice from the displayed list; causing to prepopulate a LENS features display on the display device based on the user choice, wherein each LENS feature comprises image transformation data configured to modify or overlay video or image data; receiving a user selection of a LENS feature from the prepopulated LENS display; receiving a trigger selection that activates the LENS feature to complete the LENS; saving the completed LENS to a memory of a computer device; generating a variant of the completed LENS; and deploying the variant of the completed LENS and the completed LENS to a messaging system to generate messages.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: June 1, 2021
    Assignee: Snap Inc.
    Inventors: Oleksandr Chepizhenko, Jean Luo, Bogdan Maksymchuk, Vincent Sung, Ashley Michelle Wayne
  • Patent number: 11023745
    Abstract: Systems and processes can automatically identify lane markings within images through the use of a machine learning model. The machine learning model may use a reduced set of data and output an improved estimate of lane markings by applying normalized data or images to the machine learning model. Each image applied to the model can be normalized by, for example, rotating each of the images such that the depicted roads are horizontal or otherwise share the same angle. By aligning disparate images of roads, it is possible to reduce the amount of data applied to the model or to model generation, and to increase the accuracy of the machine learning model. Further, the use of normalized images by the machine learning model enables a reduction in computing resources used to apply data to the machine learning model to, for example, identify lane markings within images.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 1, 2021
    Assignee: Beijing DiDi Infinity Technology and Development Co., Ltd.
    Inventors: Tingbo Hou, Yan Zhang
  • Patent number: 11023710
    Abstract: System and method for classifying data objects occurring in an unstructured dataset, comprising: extracting feature vectors from the unstructured dataset, each feature vector representing an occurrence of a data object in the unstructured dataset; classifying the feature vectors into feature vector sets that each correspond to a respective object class from a plurality of object classes; for each feature vector set: performing multiple iterations of a clustering operation, each iteration including clustering feature vectors from the feature vector set into clusters of similar feature vectors and identifying outlier feature vectors, wherein for at least one iteration after a first iteration of the clustering operation, outlier feature vectors identified in a previous iteration are excluded from the clustering operation; and outputting a key cluster for the feature vector set from a final iteration of the multiple iterations, the key cluster including a greater number of similar feature vectors than any of the
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: June 1, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Peng Dai, Juwei Lu, Bharath Sekar, Wei Li, Jianpeng Xu, Ruiwen Li
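    Example sketch: the iterate-cluster-and-drop-outliers loop from the entry above, using DBSCAN (which labels outliers as -1) as a stand-in clustering operation; the algorithm choice, parameters, and key-cluster rule (largest final cluster) are assumptions.
    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def key_cluster(features, iterations=3, eps=0.5, min_samples=5):
        """Repeatedly cluster one class's feature vectors, excluding the
        outliers found in each pass, and return the largest final cluster."""
        data = np.asarray(features, dtype=float)
        labels = np.full(len(data), -1)
        for _ in range(iterations):
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(data)
            inliers = labels != -1              # DBSCAN marks outliers as -1
            if inliers.all() or not inliers.any():
                break
            data, labels = data[inliers], labels[inliers]
        kept = labels >= 0
        if not kept.any():
            return np.empty((0, data.shape[1]))
        sizes = np.bincount(labels[kept])
        return data[labels == int(np.argmax(sizes))]
    ```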
  • Patent number: 11017646
    Abstract: An information processing device and a determination method for determining whether a person other than the persons determined to be permitted to enter each zone has entered the zone are provided. The information processing device has a communication section for receiving face image data from cameras for photographing a respective plurality of zones in a building and a control section for collating the face image data with the registered face image data of the persons permitted to enter each zone and for determining whether the entry of the person corresponding to the face image data is permitted or not.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: May 25, 2021
    Assignee: PANASONIC I-PRO SENSING SOLUTIONS CO., LTD.
    Inventors: Takashi Kamio, Kosuke Shinozaki, Koji Kawamoto, Hiromichi Sotodate, Yuiko Takase, Masashige Tsuneno, Eisaku Miyata, Nobuhito Seki