Using A Facial Characteristic Patents (Class 382/118)
  • Patent number: 10728447
    Abstract: The technology described in this document can be embodied in a method for capturing an image. The method includes generating a first control signal configured to cause a rolling shutter camera to capture an image of a subject over a first time period. The method also includes generating, at a first time point during the first time period, a second control signal configured to set a multi-spectral illumination source at a first intensity level. The multi-spectral illumination source is configured to illuminate the subject. The method further includes generating, at a second time point during the first time period, a third control signal configured to set the multi-spectral illumination source at a second intensity level that is less than the first intensity level. A portion of the image captured by the rolling shutter camera between the first and second time points includes a target feature associated with the subject.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: July 28, 2020
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Zikomo Fields, Yash Joshi
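    The control-signal sequencing described in patent 10728447 can be illustrated with a small timing sketch. This is a minimal illustration, not the patent's implementation; the row count, per-row readout time, and intensity levels are hypothetical placeholders.
    ```python
    # Sketch: drive the multi-spectral source to a high intensity only while the
    # rolling shutter is exposing the rows that cover the target feature, then
    # drop it to a lower intensity (hypothetical sensor parameters).
    ROW_COUNT = 1080          # sensor rows (assumed)
    ROW_READOUT_S = 30e-6     # time to read out one row (assumed)
    FRAME_START_S = 0.0       # start of the first time period

    def illumination_schedule(feature_row_start, feature_row_end,
                              high_level=1.0, low_level=0.2):
        """Return the two time points and the two intensity levels for one frame."""
        t_high = FRAME_START_S + feature_row_start * ROW_READOUT_S              # first time point
        t_low = FRAME_START_S + min(feature_row_end, ROW_COUNT) * ROW_READOUT_S  # second time point
        return t_high, t_low, (high_level, low_level)

    if __name__ == "__main__":
        # Suppose the target feature (e.g., the eye region) spans rows 400-520.
        t_high, t_low, (hi, lo) = illumination_schedule(400, 520)
        print(f"set intensity {hi} at t={t_high * 1e3:.2f} ms, "
              f"drop to {lo} at t={t_low * 1e3:.2f} ms")
    ```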
  • Patent number: 10726301
    Abstract: A method for treating a surface includes: automatically evaluating at least one digital image which includes the target surface; determining the nature of the target surface according to the evaluation of the at least one digital image; determining at least one available treatment implement according to the evaluation of the at least one image; determining the nature of the surface treatment according to the evaluation of the at least one image; automatically determining a use of the determined treatment implement in the determined treatment of the determined surface; and providing information analogous to the determined use of the treatment implement.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: July 28, 2020
    Assignee: The Procter & Gamble Company
    Inventors: Jonathan Livingston Joyce, Faiz Feisal Sherman, Jennifer Theresa Werner
  • Patent number: 10726244
    Abstract: A method of detecting a target includes determining a quality type of a target image captured using a camera, determining a convolutional neural network of a quality type corresponding to the quality type of the target image in a database comprising convolutional neural networks, determining a detection value of the target image based on the convolutional neural network of the corresponding quality type, and determining whether a target in the target image is a true target based on the detection value of the target image.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: July 28, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jingtao Xu, Biao Wang, Yaozu An, ByungIn Yoo, Changkyu Choi, Deheng Qian, Jae-Joon Han
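    As a rough illustration of the dispatch step in patent 10726244, the sketch below estimates an image's quality type and routes it to a per-type detector. The Laplacian-variance quality measure, the threshold, and the stand-in detectors are assumptions, not the patent's trained convolutional neural networks.
    ```python
    import numpy as np

    def laplacian_variance(gray):
        """Crude sharpness measure used here as a stand-in quality estimate."""
        k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
        h, w = gray.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
        return out.var()

    def quality_type(gray):
        return "low_quality" if laplacian_variance(gray) < 50.0 else "high_quality"

    # Database of per-quality-type detectors (stand-ins for trained CNNs).
    DETECTORS = {
        "low_quality": lambda img: 0.4,    # placeholder detection value
        "high_quality": lambda img: 0.9,   # placeholder detection value
    }

    def is_true_target(gray, threshold=0.5):
        qtype = quality_type(gray)
        detection_value = DETECTORS[qtype](gray)
        return detection_value >= threshold, qtype, detection_value
    ```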
  • Patent number: 10726572
    Abstract: A reference image of the field of view in a reference state of illumination is obtained. The display is controlled to display a first image to the object to cause the field of view to be in a first state of illumination different from the reference state of illumination. A first captured image of the field of view in the first state of illumination is then obtained. Based on the reference image and the first captured image, a position of the display relative to the object is determined.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: July 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Uwais Ashraf, Stewart O. M. Francis, Craig J. Morten
  • Patent number: 10719729
    Abstract: A computing device with a digital camera obtains a reference image depicting at least one reference color and calibrates parameters of the digital camera based on the at least one reference color. The computing device captures, by the digital camera, a digital image of an individual utilizing the calibrated parameters. The computing device defines a region of interest in a facial region of the individual depicted in the digital image captured by the digital camera. The computing device generates a skin tone profile for pixels within the region of interest and displays a predetermined makeup product recommendation based on the skin tone profile.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: July 21, 2020
    Assignee: PERFECT CORP.
    Inventors: Chia-Chen Kuo, Ho-Chao Huang
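    A simplified sketch of the calibration and skin-tone-profile steps from patent 10719729, under stated assumptions: calibration is reduced to per-channel gains derived from a reference color patch, the profile is the mean RGB of the region of interest, and the recommendation table is hypothetical.
    ```python
    import numpy as np

    def channel_gains(reference_patch_rgb, known_rgb=(200, 200, 200)):
        """Per-channel gains mapping the measured reference color to its known value."""
        measured = reference_patch_rgb.reshape(-1, 3).mean(axis=0)
        return np.asarray(known_rgb, dtype=float) / np.maximum(measured, 1e-6)

    def skin_tone_profile(image_rgb, roi, gains):
        """roi = (top, bottom, left, right) inside the detected facial region."""
        t, b, l, r = roi
        patch = image_rgb[t:b, l:r].astype(float) * gains
        return np.clip(patch.reshape(-1, 3).mean(axis=0), 0, 255)

    RECOMMENDATIONS = {  # hypothetical mapping from tone bucket to product
        "light": "Foundation A", "medium": "Foundation B", "deep": "Foundation C",
    }

    def recommend(profile_rgb):
        luma = profile_rgb @ np.array([0.299, 0.587, 0.114])
        bucket = "light" if luma > 180 else "medium" if luma > 120 else "deep"
        return RECOMMENDATIONS[bucket]
    ```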
  • Patent number: 10721519
    Abstract: Disclosed are various embodiments to automatically generate network pages from extracted media content. In one embodiment, it is determined that a first facial expression of a face appearing in a frame of a digitally encoded video matches a second facial expression specified in a media extraction rule. The frame of the digitally encoded video is selected in response to the determination that the first facial expression matches the second facial expression. A user interface is generated that includes an image extracted from the selected frame of the digitally encoded video.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: July 21, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Piers George Cowburn, James William John Cumberbatch, Eric Michael Molitor, Joshua Ceri Russell-Hobson
  • Patent number: 10721070
    Abstract: In one embodiment, a set of feature vectors can be derived from any biometric data, and a deep neural network ("DNN") operating on those one-way homomorphic encryptions (i.e., each biometric's feature vector) can then determine matches or execute searches on encrypted data. Each biometric's feature vector can then be stored and/or used in conjunction with respective classifications, for use in subsequent comparisons without fear of compromising the original biometric data. In various embodiments, the original biometric data is discarded responsive to generating the encrypted values. In another embodiment, the homomorphic encryption enables computations and comparisons on ciphertext without decryption. This improves security over conventional approaches. Searching biometrics in the clear on any system represents a significant security vulnerability. In various examples described herein, only the one-way encrypted biometric data is available on a given device.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: July 21, 2020
    Assignee: Private Identity LLC
    Inventor: Scott Edward Streit
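    The enrollment/matching flow in patent 10721070 can be outlined as follows. The random projection below is only a stand-in for the patent's one-way homomorphic encryption and DNN matching; it is not a secure scheme, and the dimensions and threshold are assumptions.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    PROJECTION = rng.standard_normal((256, 128))   # fixed one-way transform (stand-in)

    def protect(feature_vector):
        """Map a plaintext feature vector into the protected comparison domain."""
        v = feature_vector @ PROJECTION
        return v / (np.linalg.norm(v) + 1e-9)

    def enroll(feature_vector, store):
        store.append(protect(feature_vector))   # only the protected form is kept
        # The caller discards the plaintext feature vector after enrollment.

    def match(feature_vector, store, threshold=0.9):
        """Compare in the protected domain; stored plaintext biometrics are never needed."""
        probe = protect(feature_vector)
        scores = [float(probe @ ref) for ref in store]
        return max(scores, default=0.0) >= threshold
    ```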
  • Patent number: 10720125
    Abstract: A computer-implemented method of image processing comprises sensing a 3D position of at least one head mounted display (HMD) arranged to display video sequences to a person, and determining the location of the sensed 3D HMD position relative to a base, where the sensing and determining are performed without using a radio-based search. The method also comprises determining a beam position of a wireless radio-based transmission beam by using the 3D HMD position, and wirelessly transmitting images from the base and directed toward the HMD along the beam position to display the images to the person at the HMD.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: July 21, 2020
    Assignee: Intel Corporation
    Inventor: Greg D. Kaine
  • Patent number: 10715842
    Abstract: A method and a system for distributing Internet cartoon content, and a recording medium are disclosed. The content distribution method comprises the steps of: registering a high-definition original image for at least one unit scene of all the unit scenes of the cartoon content; and capturing the high-definition original image of the unit scene selected by a user from the cartoon content.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: July 14, 2020
    Assignee: NAVER WEBTOON CORPORATION
    Inventor: Hyungil Kim
  • Patent number: 10713344
    Abstract: A method for secure user identification is disclosed, comprising the steps of: creating a first user identification; uniquely associating the user identification with the user; recording, using the identification device, an unknown user's head from a range of positions and using illumination in different wavelengths; retrieving a second user identification; and comparing, using the identification device, the second user identification against the recording of the unknown user's head and a plurality of measured movements of the unknown user's head and hand to identify the unknown user.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: July 14, 2020
    Assignee: LEXTRON SYSTEMS, INC.
    Inventor: Dan Kikinis
  • Patent number: 10713472
    Abstract: A first face region within a first image is determined. The first face region includes a location of a face within the first image. Based on the determined first face region within the first image, a predicted face region within a second image is determined. A first region of similarity within the predicted face region is determined. The first region of similarity has at least a predetermined degree of similarity to the first face region within the first image. Whether a second face region is present within the second image is determined. The location of the face within the second image is determined based on the first region of similarity, the determination of whether the second face region is present within the second image, and a face region selection rule.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: July 14, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Nan Wang, Zhijun Du, Yu Zhang
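    A sketch of the predict-then-verify idea in patent 10713472: expand the first-frame face region into a predicted region, search it for a patch similar to the first-frame face, and apply a selection rule. The margin, normalized cross-correlation measure, step size, and selection rule are assumptions.
    ```python
    import numpy as np

    def predict_region(prev_box, margin=0.25):
        x, y, w, h = prev_box
        dx, dy = int(w * margin), int(h * margin)
        return (x - dx, y - dy, w + 2 * dx, h + 2 * dy)

    def ncc(a, b):
        a = a - a.mean(); b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def locate_face(frame2, face_patch, prev_box, detector_box=None, sim_thresh=0.6):
        ph, pw = face_patch.shape
        x, y, w, h = predict_region(prev_box)
        best, best_sim = None, -1.0
        for yy in range(max(y, 0), y + h - ph + 1, 4):
            for xx in range(max(x, 0), x + w - pw + 1, 4):
                cand = frame2[yy:yy + ph, xx:xx + pw]
                if cand.shape != face_patch.shape:
                    continue
                s = ncc(cand, face_patch)
                if s > best_sim:
                    best, best_sim = (xx, yy, pw, ph), s
        # Selection rule (assumed): prefer a fresh detection when one exists,
        # otherwise accept the similarity match if it clears the threshold.
        if detector_box is not None:
            return detector_box
        return best if best_sim >= sim_thresh else None
    ```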
  • Patent number: 10713532
    Abstract: The present disclosure discloses an image recognition method and apparatus, and belongs to the field of computer technologies. The method includes: extracting a local binary pattern (LBP) feature vector of a target image; calculating a high-dimensional feature vector of the target image according to the LBP feature vector; obtaining a training matrix, the training matrix being a matrix obtained by training images in an image library by using a joint Bayesian algorithm; and recognizing the target image according to the high-dimensional feature vector of the target image and the training matrix. The image recognition method and apparatus according to the present disclosure may combine the LBP algorithm with a joint Bayesian algorithm to perform recognition, thereby improving the accuracy of image recognition.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: July 14, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
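    The first step named in patent 10713532 is an LBP feature; a basic 8-neighbour LBP histogram is sketched below. The high-dimensional feature construction and the joint Bayesian training matrix are not reproduced here.
    ```python
    import numpy as np

    def lbp_histogram(gray):
        """Return a 256-bin normalized histogram of basic 8-neighbour LBP codes."""
        g = gray.astype(float)
        center = g[1:-1, 1:-1]
        # Offsets of the 8 neighbours, clockwise from the top-left corner.
        shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        code = np.zeros_like(center, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(shifts):
            neigh = g[dy:dy + center.shape[0], dx:dx + center.shape[1]]
            code |= (neigh >= center).astype(np.uint8) << bit
        hist, _ = np.histogram(code, bins=256, range=(0, 256))
        return hist / max(hist.sum(), 1)
    ```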
  • Patent number: 10715468
    Abstract: A mechanism is described for dynamically facilitating tracking of targets and generating and communicating of messages at computing devices according to one embodiment. An apparatus of embodiments, as described herein, includes one or more capturing/sensing components to facilitate seeking of the apparatus, where the apparatus is associated with a user, and recognition/transformation logic to recognize the apparatus. The apparatus may further include command and data analysis logic to analyze a command received at the apparatus from the user, where the command indicates sending a message to the apparatus. The apparatus may further include message generation and preparation logic to generate the message based on the analysis of the command, and communication/compatibility logic to communicate the message.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: July 14, 2020
    Assignee: INTEL CORPORATION
    Inventors: Glen J. Anderson, Cory J. Booth, Lenitra M. Durham, Kathy Yuen
  • Patent number: 10706579
    Abstract: Certain embodiments of the methods and systems disclosed herein determine a location of a tracked object with respect to a coordinate system of a sensor array by using analog signals from sensors having overlapping nonlinear responses. Hyperacuity and real time tracking are achieved by either digital or analog processing of the sensor signals. Multiple sensor arrays can be configured in a plane, on a hemisphere or other complex surface to act as a single sensor or to provide a wide field of view and zooming capabilities of the sensor array. Other embodiments use the processing methods to adjust to contrast reversals between an image and the background.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: July 7, 2020
    Assignee: Lamina Systems, Inc.
    Inventors: Ricardo A. G. Unglaub, Michael Wilcox, Paul Swanson, Chris Odell
  • Patent number: 10706502
    Abstract: According to one embodiment, a monitoring system includes a monitoring terminal and a server. The monitoring terminal includes a detector, a tracking unit, a first selector, and a transmitter. The server includes a receiver, a second selector, a collation unit, and an output unit. The receiver receives first best shot images from the monitoring terminal. The second selector performs second selection processing, as part of a predetermined selection processing other than a first selection processing, of selecting a second best shot image suitable for collation with a predetermined image from among the first best shot images. The collation unit performs collation processing of collating the second best shot image with the predetermined image. The output unit outputs a result of the collation processing.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: July 7, 2020
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Infrastructure Systems & Solutions Corporation
    Inventor: Hiroo Saito
  • Patent number: 10699103
    Abstract: The present disclosure provides a living body detecting method and apparatus, a device and a storage medium. The method comprises: regarding a to-be-detected user, respectively obtaining a first picture and a second picture taken with two near infrared cameras and a third picture taken with a visible light camera; generating a depth map according to the first picture and second picture; determining whether the user is a living body according to the depth map and the third picture. The solution of the present disclosure can be applied to improve accuracy of detection results.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: June 30, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventor: Zhibin Hong
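    The depth-map step in patent 10699103 can be approximated with simple block matching between the two near-infrared pictures, followed by a crude planarity test. The block size, disparity range, and the standard-deviation rule below are assumptions; the patent additionally uses the visible-light picture in its decision.
    ```python
    import numpy as np

    def block_disparity(left, right, block=8, max_disp=16):
        """Coarse disparity map from two rectified grayscale images."""
        left, right = left.astype(float), right.astype(float)
        h, w = left.shape
        disp = np.zeros((h // block, w // block))
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                ref = left[y:y + block, x:x + block]
                best, best_err = 0, np.inf
                for d in range(0, min(max_disp, x) + 1):
                    err = np.abs(ref - right[y:y + block, x - d:x - d + block]).sum()
                    if err < best_err:
                        best, best_err = d, err
                disp[by, bx] = best
        return disp

    def looks_live(disp_face_region, min_depth_spread=2.0):
        # A printed or replayed face is nearly planar, so its disparity spread
        # over the face region is small (assumed heuristic).
        return float(disp_face_region.std()) >= min_depth_spread
    ```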
  • Patent number: 10698995
    Abstract: Systems and methods for authenticating a user in an authentication system using a computing device configured to capture authentication biometric identity information. The authentication biometric identity information is captured during an authentication session and may comprise or be derived from one or more images of the user being authenticated. The authentication biometric identity information is compared to root identity biometric information. The root identity biometric information is captured from a trusted source, such as trusted devices located at trusted locations (for example, a government entity, financial institution, or business). Identity verification may occur by comparing the trusted root identity biometric information to the biometric identity information captured during an authentication session. Liveness determination may also occur to verify the user is a live person.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: June 30, 2020
    Assignee: FaceTec, Inc.
    Inventor: Kevin Alan Tussy
  • Patent number: 10701274
    Abstract: A subject detection unit of an imaging apparatus detects a subject image from an image. An automatic zoom control unit performs zoom control according to a size of a subject detected by the subject detection unit. The automatic zoom control unit automatically selects a specific composition among a plurality of compositions and sets a reference size of the subject used to control a zoom magnification based on the selected composition and the size and position of the detected subject. A scene determination process is performed using information including a detection result from the subject detection unit, a composition selection process is performed for the determined scene, and one composition is selected from among a composition of the upper body of the subject, a composition of the whole body, a composition of the subject's face, and a composition of multiple people.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: June 30, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Akihiro Tsubusaki
  • Patent number: 10692183
    Abstract: Systems, methods and computer storage media for using body key points in received images and cropping rule representations to crop images are provided. Cropping configurations are received that specify characteristics of cropped images. Also obtained are images to crop. For a given image, a plurality of body key points is determined. A list of tuples is determined from the body key points and the cropping configurations. Each tuple includes a reference point, a reference length and an offset scale. A possible anchor level is calculated for each tuple. Tuples sharing a common reference body key point are aggregated, and a border representation is determined by calculating the minimum, maximum, or average of all such possible anchor levels. The image is then cropped at the border representation. This process can be repeated for multiple border representations within a single image and/or for multiple images.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: June 23, 2020
    Assignee: ADOBE INC.
    Inventor: Jianming Zhang
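    The anchor-level arithmetic in patent 10692183 can be sketched directly from the abstract: each tuple contributes a possible anchor level, tuples sharing a reference key point are aggregated, and the image is cropped at the resulting border. The tuple values and the bottom-border orientation below are illustrative assumptions.
    ```python
    import numpy as np

    def possible_anchor(ref_point_y, ref_length, offset_scale):
        return ref_point_y + offset_scale * ref_length

    def border_from_tuples(tuples, mode="max"):
        """tuples: list of (ref_key, ref_point_y, ref_length, offset_scale)."""
        by_key = {}
        for key, y, length, scale in tuples:
            by_key.setdefault(key, []).append(possible_anchor(y, length, scale))
        agg = {"min": min, "max": max, "avg": lambda xs: sum(xs) / len(xs)}[mode]
        # One aggregated level per shared reference key point; keep the outermost.
        return max(agg(levels) for levels in by_key.values())

    def crop_bottom(image, tuples):
        border = int(round(border_from_tuples(tuples)))
        return image[:max(1, min(border, image.shape[0])), :]

    if __name__ == "__main__":
        img = np.zeros((400, 300))
        # e.g. crop a head-and-shoulders image a little below the shoulder key points
        tuples = [("shoulder", 220, 80, 0.5), ("shoulder", 225, 80, 0.5)]
        print(crop_bottom(img, tuples).shape)   # (265, 300) with the assumed tuples
    ```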
  • Patent number: 10691923
    Abstract: Systems, apparatuses and methods may provide for detecting a facial image including generating a spatial convolutional neural network score for one or more detected facial images from a facial image detector, generating a temporal convolutional network score for detected facial video frames from the facial image detector and generating a combined spatial-temporal score to determine whether a detected facial image gains user access to a protected resource.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Jianguo Li, Chong Cao, Yurong Chen
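    The final fusion step in patent 10691923 amounts to combining two scores and gating access; a minimal sketch follows. The equal weighting and the threshold are assumptions, not values from the patent.
    ```python
    def combined_score(spatial_score, temporal_score, w_spatial=0.5):
        """Blend the per-frame (spatial) and frame-sequence (temporal) CNN scores."""
        return w_spatial * spatial_score + (1.0 - w_spatial) * temporal_score

    def grant_access(spatial_score, temporal_score, threshold=0.8):
        return combined_score(spatial_score, temporal_score) >= threshold

    if __name__ == "__main__":
        print(grant_access(0.92, 0.85))   # True with the assumed weights/threshold
        print(grant_access(0.95, 0.40))   # False: temporal evidence too weak
    ```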
  • Patent number: 10691927
    Abstract: A method and system are provided. The method includes positioning facial feature base points in a face image in an obtained image. A deformation template is obtained, the deformation template carrying configuration reference points and configuration base points. In the facial feature base points, a current reference point is determined corresponding to the configuration reference point, and a to-be-matched base point is determined corresponding to the configuration base point. A target base point is determined that corresponds to the configuration base point and that is in a to-be-processed image. The target base point and the corresponding to-be-matched base point form a mapping point pair. A to-be-processed image point is mapped to a corresponding target location according to a location relationship between the target base point and the to-be-matched base point, and a location relationship between the mapping point pair and the to-be-processed image point.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: June 23, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Meng Ren Qian, Zhi Bin Wang, Pei Cheng, Xuan Qiu, Xiao Yi Li
  • Patent number: 10692269
    Abstract: A method for generating a set of respective transformation maps for a set of respective 2D images from a same object and using a parameter-based transformation model comprises the steps of: receiving said set of respective 2D images and said parameter-based transformation model; detecting matching regions across several pairs of the 2D images, based on the set of 2D images and 3D information of said object; identifying respective interdependencies of the matching regions over the 2D images; and optimizing the parameters of the parameter-based transformation model over the matching regions of all images as well as over the non-matching regions in all images.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: June 23, 2020
    Assignee: Alcatel Lucent
    Inventor: Donny Tytgat
  • Patent number: 10684467
    Abstract: Various devices, arrangements and methods for managing communications using a head mounted display device are described. In one aspect, tracking data is generated at least in part by one or more sensors in a head mounted display (HMD) device. The tracking data indicates one or more facial movements of a user wearing the HMD device. A patch image is obtained based on the tracking data. The patch image is merged with a facial image. Various embodiments relate to the HMD device and other methods for generating and using the patch and facial images.
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: June 16, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Simon J. Gibbs, Anthony S. Liot, Yu Song, Yoshiya Hirase
  • Patent number: 10685215
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for recognizing a face. A specific embodiment of the method includes: acquiring at least two facial images of a to-be-recognized face under different illuminations using a near-infrared photographing device; generating at least one difference image based on a brightness difference between each two of the at least two facial images; determining a facial contour image of the to-be-recognized face based on the at least one difference image; inputting the at least two facial images, the at least one difference image, and the facial contour image into a pre-trained real face prediction value calculation model to obtain a real face prediction value of the to-be-recognized face; and outputting prompt information for indicating successful recognition of a real face, in response to determining the obtained real face prediction value being greater than a preset threshold.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: June 16, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Dingfu Zhou, Ruigang Yang, Yanfu Zhang, Zhibin Hong
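    The difference-image step in patent 10685215 is straightforward to sketch: pairwise brightness differences between the facial images taken under different illuminations, plus a crude contour map derived from them. The percentile rule is an assumption, and the pre-trained real-face prediction model is not reproduced.
    ```python
    import numpy as np

    def difference_images(facial_images):
        """facial_images: equally sized grayscale arrays captured under different illuminations."""
        diffs = []
        for i in range(len(facial_images)):
            for j in range(i + 1, len(facial_images)):
                diffs.append(facial_images[i].astype(float) - facial_images[j].astype(float))
        return diffs

    def contour_image(diff, percentile=90):
        # Strong illumination-dependent differences tend to follow facial relief
        # (assumed heuristic standing in for the patent's contour determination).
        mag = np.abs(diff)
        return (mag >= np.percentile(mag, percentile)).astype(np.uint8)
    ```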
  • Patent number: 10685214
    Abstract: The present disclosure is directed to face detection window refinement using depth. Existing face detection systems may perform face detection by analyzing portions of visual data such as an image, video, etc. identified by sub-windows. These sub-windows are currently determined based only on pixels, and thus may number in the millions. Consistent with the present disclosure, at least depth data may be utilized to refine the size and appropriateness of sub-windows that identify portions of the visual data to analyze during face detection, which may substantially reduce the number of sub-windows to be analyzed, the total data processing burden, etc. For example, at least one device may comprise user interface circuitry including capture circuitry to capture both visual data and depth data. Face detection circuitry in the at least one device may refine face detection by determining criteria for configuring the sub-windows that will be used in face detection.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Haibing Ren, Yimin Zhang, Sirui Yang, Wei Hu
  • Patent number: 10679082
    Abstract: Real-time facial recognition is augmented with a machine-learning process that samples pixels from images captured for the physical environmental background of a device, which captures an image of a user's face for facial authentication. The background pixel points that are present in a captured image of a user's face from a camera of the device are authenticated with the image of the user's face. The value of the background pixel points are compared against the expected values for the background pixel points provided by the on-going machine-learning process for the background.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: June 9, 2020
    Assignee: NCR Corporation
    Inventors: Weston Lee Hecker, Nir Veltman, Yehoshua Zvi Licht
  • Patent number: 10678846
    Abstract: In a method for detecting an object in an input image, an input image vector representing the input image is generated by performing a regional maximum activations of convolutions (R-MAC) using a convolutional neural network (CNN) applied to the input image and using regions for the R-MAC defined by applying a region proposal network (RPN) to the output of the CNN applied to the input image. Likewise, a reference image vector representing a reference image depicting the object is generated by performing the R-MAC using the CNN applied to the reference image and using regions for the R-MAC defined by applying the RPN to the output of the CNN applied to the reference image. A similarity metric between the input image vector and the reference image vector is computed, and the object is detected as present in the input image if the similarity metric satisfies a detection criterion.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: June 9, 2020
    Assignee: Xerox Corporation
    Inventors: Albert Gordo Soldevila, Jon Almazan, Jerome Revaud, Diane Larlus-Larrondo
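    A simplified sketch of the R-MAC pooling and similarity test in patent 10678846: max-pool a convolutional feature map over a set of regions, aggregate into one vector, and threshold the cosine similarity. Here the regions are passed in directly (the patent derives them with an RPN), a toy feature map stands in for CNN output, and the per-region normalization/whitening of full R-MAC is omitted.
    ```python
    import numpy as np

    def rmac_vector(feature_map, regions):
        """feature_map: (C, H, W) activations. regions: list of (y0, y1, x0, x1)."""
        pooled = [feature_map[:, y0:y1, x0:x1].max(axis=(1, 2)) for y0, y1, x0, x1 in regions]
        v = np.sum(pooled, axis=0)                 # aggregate the regional descriptors
        return v / (np.linalg.norm(v) + 1e-9)

    def detected(input_vec, reference_vec, threshold=0.8):
        return float(input_vec @ reference_vec) >= threshold   # cosine similarity

    if __name__ == "__main__":
        fmap = np.random.rand(64, 14, 14)
        regions = [(0, 7, 0, 7), (0, 7, 7, 14), (7, 14, 0, 7), (7, 14, 7, 14)]
        v = rmac_vector(fmap, regions)
        print(detected(v, v))   # an image always matches itself
    ```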
  • Patent number: 10679041
    Abstract: A computer-implemented method for recognizing facial expressions by applying feature learning and feature engineering to face images. The method includes conducting feature learning on a face image comprising feeding the face image into a first convolution neural network to obtain a first decision, conducting feature engineering on a face image, comprising the steps of automatically detecting facial landmarks in the face image, transforming the facial features into a two-dimensional matrix, and feeding the two-dimensional matrix into a second convolution neural network to obtain a second decision, computing a hybrid decision based on the first decision and the second decision, and recognizing a facial expression in the face image in accordance with the hybrid decision.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: June 9, 2020
    Assignee: Shutterfly, LLC
    Inventor: Leo Cyrus
  • Patent number: 10671887
    Abstract: Methods and apparatus, including computer program products, for creating a quality annotated training data set of images for training a quality estimating neural network. A set of images depicting a same object is received. The images in the set of images have varying image quality. A probe image whose quality is to be estimated is selected from the set of images. A gallery of images is selected from the set of images. The gallery of images does not include the probe image. The probe image is compared to each image in the gallery and a match score is generated for each image comparison. Based on the match scores, a quality value is determined for the probe image. The probe image and its associated quality value are added to a quality annotated training data set for the neural network.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: June 2, 2020
    Assignee: Axis AB
    Inventors: Niclas Danielsson, Markus Skans
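    The annotation loop in patent 10671887 can be sketched as follows: each probe image is matched against a gallery drawn from the same set (excluding the probe), and the match scores yield its quality value. The flatten-and-normalize embedding and the use of the mean score as the quality value are assumptions.
    ```python
    import numpy as np

    def embed(image):
        """Stand-in embedding; a face recognizer would normally supply this."""
        flat = image.astype(float).reshape(-1)
        return flat / (np.linalg.norm(flat) + 1e-9)

    def match_score(a, b):
        return float(embed(a) @ embed(b))

    def quality_value(probe, gallery):
        return float(np.mean([match_score(probe, g) for g in gallery]))

    def build_quality_annotated_set(images):
        """images: list of same-object images with varying quality."""
        annotated = []
        for i, probe in enumerate(images):
            gallery = images[:i] + images[i + 1:]     # gallery excludes the probe
            annotated.append((probe, quality_value(probe, gallery)))
        return annotated
    ```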
  • Patent number: 10671837
    Abstract: There is provided a display control apparatus that allows an operator to grasp a factor leading to a face recognition result at a glance and to confirm or modify the face recognition result on the spot. The display control apparatus comprises a similarity acquirer that acquires a similarity between each pair of partial regions of face images by performing collation processing between the each pair of partial regions of the face images, and a display controller that controls to overlay, on the face images, at least one of a first region the similarity of which exceeds a threshold and a second region the similarity of which does not exceed the threshold, and display the overlaid face images. The display controller controls to overlay and display the first region and the second region in contrast with each other on the face images.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: June 2, 2020
    Assignee: NEC CORPORATION
    Inventor: Yasushi Hamada
  • Patent number: 10671838
    Abstract: Systems and methods are disclosed that are configured to train an autoencoder using images that include faces, wherein the autoencoder comprises an input layer, an encoder configured to output a latent image from a corresponding input image, and a decoder configured to attempt to reconstruct the input image from the latent image. An image sequence of a face exhibiting a plurality of facial expressions and transitions between facial expressions is generated and accessed. Images of the plurality of facial expressions and transitions between facial expressions are captured from a plurality of different angles and using different lighting. An autoencoder is trained using source images that include the face with different facial expressions captured at different angles with different lighting, and using destination images that include a destination face.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: June 2, 2020
    Assignee: Neon Evolution Inc.
    Inventors: Carl Davis Bogan, III, Kenneth Michael Lande, Jacob Myles Laser, Brian Sung Lee, Cody Gustave Berlin
  • Patent number: 10674057
    Abstract: A plenoptic camera and an associated method are provided. The camera has an array of sensors for generating digital images. The images have associated audio signals. The array of sensors are configured to capture digital images associated with a default spatial coordinate and are also configured to receive control input from a processor to change focus from said default spatial coordinate to a new spatial coordinate based on occurrence of an event at said new spatial coordinate.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: June 2, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Pierre Hellier, Quang Khanh Ngoc Duong, Valerie Allie, Philippe Leyendecker
  • Patent number: 10664727
    Abstract: An image pattern recognition device includes: a data reception unit that receives data; a supervision reception unit that receives supervision; and an artificial neural network processing unit that performs artificial neural network processing. The artificial neural network processing unit includes a first sub-network including one or more layers that process a main task, a second sub-network including one or more layers that process a sub-task, and a third sub-network including one or more layers that belong to neither the first sub-network nor the second sub-network. The third sub-network includes a branch processing unit that outputs the same value as an input feature amount to a plurality of layers. The first sub-network includes a coupling processing unit that couples inputs from the plurality of layers and outputs a result.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: May 26, 2020
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Ryosuke Shigenaka, Yukihiro Tsuboshita, Noriji Kato
  • Patent number: 10666859
    Abstract: A self-photographing control method, a self-photographing control device and an electronic device are provided. The self-photographing control method includes the steps of acquiring a first original image, selecting a human eye from the first original image as a main eye for controlling photographing, acquiring an action of the main eye, and triggering a photographing operation if the action of the main eye meets a set condition.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: May 26, 2020
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yue Yu, Xiangdong Yang
  • Patent number: 10664986
    Abstract: Occupants' interactions in a space are determined by applying a computer vision algorithm to track an occupant in a set of images of the space and obtain the occupant's locations in the space over time. A history log of the occupant, which includes those locations over time, is created, and the history logs of a plurality of occupants are compared to extract interaction points between the occupants.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: May 26, 2020
    Assignee: POINTGRAB LTD.
    Inventors: Itamar Roth, Haim Perski, Udi Benbaron
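    The log comparison in patent 10664986 reduces to finding moments when two occupants' tracked locations were close at roughly the same time; a sketch with assumed distance and time thresholds follows.
    ```python
    import numpy as np

    def interaction_points(log_a, log_b, max_dist=1.0, max_dt=2.0):
        """Each history log is a list of (timestamp, x, y) locations for one occupant."""
        points = []
        for t1, x1, y1 in log_a:
            for t2, x2, y2 in log_b:
                if abs(t1 - t2) <= max_dt and np.hypot(x1 - x2, y1 - y2) <= max_dist:
                    points.append(((t1 + t2) / 2, (x1 + x2) / 2, (y1 + y2) / 2))
        return points

    if __name__ == "__main__":
        a = [(0.0, 1.0, 1.0), (5.0, 4.0, 4.0)]
        b = [(0.5, 1.3, 1.2), (5.0, 9.0, 9.0)]
        print(interaction_points(a, b))   # one interaction point near (1.15, 1.1)
    ```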
  • Patent number: 10664940
    Abstract: The present disclosure relates generally to digital watermarking and data hiding.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: May 26, 2020
    Assignee: Digimarc Corporation
    Inventors: Alastair M. Reed, Ravi K. Sharma
  • Patent number: 10659529
    Abstract: Technical solutions are described for automatically filtering user images being uploaded to a social network. An example computer-implemented method includes detecting an image file, which contains an image of a user, being uploaded to the social network server. The method further includes determining compliance of the image file with a predetermined profile associated with the user. The method further includes, in response to the image failing to comply with the predetermined profile, modifying the image file to generate a modified image file, and uploading the modified image file to the social network server.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Al Chakra, Jonathan Dunne, Liam Harpur, Asima Silva
  • Patent number: 10657281
    Abstract: An information processing apparatus includes a level setting unit configured to set a disclosure level of learning data used when a discriminator is generated, a first specifying unit configured to specify, in accordance with the disclosure level, disclosure data to be disclosed to administrators other than a first administrator who manages the learning data, within the learning data and association data which is associated with the learning data, a second specifying unit configured to specify, in accordance with the disclosure level, reference data which is referred to by the first administrator within the learning data and the association data of the other administrators which are registered in a common storage apparatus, an obtaining unit configured to obtain the reference data specified by the second specifying unit from the common storage apparatus, and a generating unit configured to generate a discriminator using data obtained by the obtaining unit.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: May 19, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masafumi Takimoto
  • Patent number: 10657718
    Abstract: An example method for estimating an emotion based upon a facial expression of a user can include: receiving one or more captured facial expressions from the user at a visual computing device; comparing the one or more captured facial expressions to one or more known facial expressions; and assigning an emotion to the one or more captured facial expressions based upon the comparing.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: May 19, 2020
    Assignee: Wells Fargo Bank, N.A.
    Inventors: Darius A. Miranda, Chris Kalaboukis
  • Patent number: 10657359
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object embedding system. In one aspect, a method comprises providing selected images as input to the object embedding system and generating corresponding embeddings, wherein the object embedding system comprises a thumbnailing neural network and an embedding neural network. The method further comprises backpropagating gradients based on a loss function to reduce the distance between embeddings for same instances of objects, and to increase the distance between embeddings for different instances of objects.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: May 19, 2020
    Assignee: Google LLC
    Inventors: Gerhard Florian Schroff, Dmitry Kalenichenko, Keren Ye
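    The loss behaviour described in patent 10657359 (pull same-instance embeddings together, push different-instance embeddings apart) can be illustrated with a plain contrastive loss. The contrastive form and the margin are assumptions; the thumbnailing and embedding networks themselves are not reproduced.
    ```python
    import numpy as np

    def contrastive_loss(emb_a, emb_b, same_instance, margin=1.0):
        d = float(np.linalg.norm(emb_a - emb_b))
        if same_instance:
            return 0.5 * d ** 2                     # minimizing this reduces the distance
        return 0.5 * max(0.0, margin - d) ** 2      # minimizing this increases the distance (up to the margin)

    if __name__ == "__main__":
        a, b = np.array([0.1, 0.2]), np.array([0.1, 0.25])
        print(contrastive_loss(a, b, same_instance=True))    # small: already close
        print(contrastive_loss(a, b, same_instance=False))   # large: should be pushed apart
    ```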
  • Patent number: 10657365
    Abstract: This specific person detection system: identifies, from among the persons recorded in a specific person recording unit, a person who most closely matches a feature value extracted from image data; calculates the degree to which feature values of a plurality of persons extracted from other image data match the identified person; and outputs, as an identification result, information about a person who has a feature value closely matching the identified person, and who is associated with angle information that most closely matches angle information associated with the identified person.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: May 19, 2020
    Assignee: HITACHI KOKUSAI ELECTRIC INC.
    Inventor: Seiichi Hirai
  • Patent number: 10650564
    Abstract: A method of generating 3D facial geometry for a computing device is disclosed. The method comprises obtaining a 2D image, performing a deep neural network, DNN, operation on the 2D image to classify each of the facial features of the 2D image as texture components and obtain probabilities that the facial features belong to the texture components, wherein the texture components are represented by a 3D face mesh and are predefined in the computing device, and generating a 3D facial model based on a 3D face template predefined in the computing device and the texture component with the highest probability.
    Type: Grant
    Filed: April 21, 2019
    Date of Patent: May 12, 2020
    Assignee: XRSpace CO., LTD.
    Inventors: Ting-Chieh Lin, Shih-Chieh Chou
  • Patent number: 10650442
    Abstract: Information is provided to a user of a mobile device. Media content is sensed by the mobile device and sent to an information processing server. The information processing server also obtains media content from another source and associates at least a portion of the obtained media content with a product or service offer. The sensed media content is correlated to the obtained media content such that an associated buy or service offer is selected. The buy or service offer is sent to the mobile device for display to the user.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 12, 2020
    Inventors: Amro Shihadah, Mohammad Shihadah, Hassan Sawaf
  • Patent number: 10650668
    Abstract: A unified presence detection and prediction platform that is privacy aware is described. The platform receives signals from plural sensor devices that are disposed within a premises. The platform produces profiles of entities based on detected characteristics developed from relatively inexpensive and privacy-aware sensors, i.e., non-video and non-audio sensor devices. Using these profiles and sensor signals from such sensors, the platform determines specific identification and produces historical patterns. Also described are techniques that allow users (persons), when authorized, to control remote devices/systems generally without direct interaction with such systems, merely by the systems detecting, and in some instances predicting, the specific presence of an identified individual in a location within the premises.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: May 12, 2020
    Assignee: TYCO SAFETY PRODUCTS CANADA LTD.
    Inventors: Rajmy Sayavong, Gregory W. Hill, David J. LeBlanc, Samuel D. Rosewell, Jr., Gerald M. Bluhm, Michael DeRose, IV, Rob Vandervecht
  • Patent number: 10650261
    Abstract: One embodiment facilitates identification of re-photographed images. During operation, the system obtains a sequence of video frames of a target object. The system selects a frame with an acceptable level of quality. The system obtains, from the selected frame, a first image and a second image associated with the target object, wherein at least one of a zoom ratio property and a size property is different between the first image and the second image. The system inputs the first image and the second image to at least a first neural network to obtain scores for the first image and the second image, wherein a respective score indicates a probability that the corresponding image is re-photographed, wherein a re-photographed image is obtained by photographing or recording an image of the target object. The system indicates the selected frame as re-photographed based on the obtained probabilities.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: May 12, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Xuetao Feng, Yan Wang
  • Patent number: 10642269
    Abstract: A method for localizing and navigating a vehicle on underdeveloped or unmarked roads. The method includes: gathering image data with non-hyperspectral image sensors and audio data of a current scene; classifying the current scene based on the gathered image data and audio data to identify a stored scene model that most closely corresponds to the current scene; and setting spectral range of hyperspectral image sensors based on a spectral range used to capture a stored scene model that most closely corresponds to the current scene.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: May 5, 2020
    Assignee: DENSO International America, Inc.
    Inventors: Joseph Lull, Shawn Hunt
  • Patent number: 10643063
    Abstract: Methods, systems, and devices for object recognition are described. A device may generate a subspace based at least in part on a set of representative feature vectors for an object. The device may obtain an array of pixels representing an image. The device may determine a probe feature vector for the image by applying a convolutional operation to the array of pixels. The device may create a reconstructed feature vector in the subspace based at least in part on the set of representative feature vectors and the probe feature vector. The device may compare the reconstructed feature vector and the probe feature vector and recognize the object in the image based at least in part on the comparison. For example, the described techniques may support pose invariant facial recognition or other such object recognition applications.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: May 5, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Lei Wang, Yingyong Qi, Ning Bi
  • Patent number: 10635890
    Abstract: The present invention provides a facial recognition method, including: obtaining a target facial image; determining a covered region and a non-covered region of the target facial image; calculating the weight of the covered region, and calculating the weight of the non-covered region; extracting feature vectors of the covered region, and extracting feature vectors of the non-covered region; comparing the target facial image with each template facial image in a facial database according to the feature vectors of the covered region, the feature vectors of the non-covered region, the weight of the covered region, and the weight of the non-covered region, to calculate a facial similarity between each template facial image and the target facial image; and determining, when at least one of the facial similarities between the template facial images and the target facial image is greater than or equal to a similarity threshold, that facial recognition succeeds.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: April 28, 2020
    Assignee: SHENZHEN INTELLIFUSION TECHNOLOGIES CO., LTD.
    Inventors: Rui Yan, Yongqiang Mou
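    The weighted comparison in patent 10635890 can be sketched as a blend of two similarities, one for the covered region and one for the non-covered region, checked against a similarity threshold over the template database. The cosine similarity and the linear weighting form are assumptions.
    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def facial_similarity(target_cov, target_non, tmpl_cov, tmpl_non, w_cov, w_non):
        """Blend covered-region and non-covered-region similarities with their weights."""
        return w_cov * cosine(target_cov, tmpl_cov) + w_non * cosine(target_non, tmpl_non)

    def recognize(target_cov, target_non, w_cov, w_non, templates, threshold=0.75):
        """templates: list of (template_covered_vec, template_non_covered_vec)."""
        sims = [facial_similarity(target_cov, target_non, tc, tn, w_cov, w_non)
                for tc, tn in templates]
        return max(sims, default=0.0) >= threshold
    ```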
  • Patent number: 10635894
    Abstract: The present system may be deployed in various scenarios to provide proof of liveness (also referred to herein as “liveness verification”) of an image without interaction of the subject of the image. The liveness verification process generally comprises imperative analysis and dynamic analysis of the image, after which liveness of the image may be determined.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: April 28, 2020
    Assignee: T Stamp Inc.
    Inventor: Gareth Genner
  • Patent number: 10635888
    Abstract: Provided is a smart-security digital system that, when a habitual shoplifter or suspiciously behaving person is present, enables an employee or the like near that person to quickly rush to the scene and prevent an act of shoplifting.
    Type: Grant
    Filed: August 19, 2015
    Date of Patent: April 28, 2020
    Assignee: TECHNOMIRAI CO., LTD.
    Inventor: Kazuo Miwa