Using a Facial Characteristic: Patents (Class 382/118)
  • Patent number: 9898836
    Abstract: A method for automatic video face replacement includes steps of capturing a face image, detecting a rotation angle of the face image, defining a region to be replaced in the face image, and pasting a region to be replaced of one of the replaced images having the corresponding rotation angle of the face image into a target replacing region. Therefore, the region to be replaced of a static or dynamic face image can be replaced by a replaced image quickly with a single camera, without requiring manual setting of the feature points of a target image. The method supports face replacement at different angles and compensates for the color difference to provide a natural look of the replaced image.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: February 20, 2018
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Hsiu-Chien Hsu
  • Patent number: 9892652
    Abstract: A scoring device has an acquisition unit that acquires image data in which a singer is photographed, a detector that detects a feature associated with an expression or a facial motion during singing as a facial feature of the singer from the image data acquired by the acquisition unit, a calculator that calculates a score for singing action of the singer based on the feature detected by the detector, and an output unit that outputs the score.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: February 13, 2018
    Assignee: OMRON Corporation
    Inventors: Tatsuya Murakami, Lizhou Zhang
  • Patent number: 9892324
    Abstract: Approaches, techniques, and mechanisms are disclosed for generating thumbnails. According to one embodiment, a subset of images each depicting character face(s) is identified from a collection of images. An unsupervised learning method is applied to automatically cluster the subset of images into image clusters. Top image clusters are selected from the image clusters based at least in part on weighted scores of images clustered within the image clusters. Thumbnail(s) are generated from images in the top image clusters.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: February 13, 2018
    Assignee: PCCW VUCLIP (SINGAPORE) PTE. LTD.
    Inventor: Kulbhushan Pachauri
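As a rough illustration of the cluster-then-score flow described in the abstract of patent 9892324 above, the sketch below clusters hypothetical face embeddings with k-means (one possible unsupervised method; the patent does not name a specific algorithm), scores each cluster by a weighted sum of its images' scores, and draws thumbnails from the top clusters. The embedding source, scoring weights, and function names are assumptions for illustration.

```python
# Hypothetical sketch: cluster face images and pick thumbnails from top clusters.
import numpy as np
from sklearn.cluster import KMeans

def select_thumbnails(embeddings, image_scores, image_ids, n_clusters=5,
                      top_clusters=2, thumbs_per_cluster=1):
    """embeddings: (N, D) face features; image_scores: (N,) numpy array of weights."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)

    # Weighted score of each cluster = sum of scores of its member images.
    cluster_scores = {c: image_scores[labels == c].sum() for c in range(n_clusters)}
    best = sorted(cluster_scores, key=cluster_scores.get, reverse=True)[:top_clusters]

    thumbnails = []
    for c in best:
        members = np.where(labels == c)[0]
        # Within a top cluster, take the highest-scoring image(s) as thumbnails.
        ranked = members[np.argsort(image_scores[members])[::-1]]
        thumbnails.extend(image_ids[i] for i in ranked[:thumbs_per_cluster])
    return thumbnails
```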
  • Patent number: 9886502
    Abstract: A system and method in which metadata associated with content data received from an information processing apparatus is used in determining related content data that includes a similar feature as the content data. The related content data may be determined from a channel to which the content data belongs, according to a playback rate determination by the information processing apparatus of given content data used in determining the similar feature as the feature in the content data. The related content data may then be transmitted to the information processing apparatus, which may selectively play back the content data and the related content data.
    Type: Grant
    Filed: May 11, 2012
    Date of Patent: February 6, 2018
    Assignee: Sony Corporation
    Inventors: Yuki Murata, Soichiro Atsumi
  • Patent number: 9886622
    Abstract: Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: February 6, 2018
    Assignee: Intel Corporation
    Inventors: Yangzhou Du, Wenlong Li, Wei Hu, Xiaofeng Tong, Yimin Zhang
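A minimal numeric sketch of the registration-then-correction idea in the abstract of patent 9886622 above: compare reference facial measurements of the user with those of the avatar's reference expression, then apply that comparison to real-time measurements. Treating the comparison data as a simple per-measurement offset is an assumption; the patent does not specify the arithmetic.

```python
import numpy as np

def register(user_reference, avatar_reference):
    # Facial comparison data: assumed here to be a per-measurement offset between
    # the user's reference expression and the avatar's reference expression.
    return np.asarray(avatar_reference) - np.asarray(user_reference)

def modified_expression(realtime_measurements, comparison):
    # Apply the comparison data to real-time measurements to drive the avatar.
    return np.asarray(realtime_measurements) + comparison

# Usage with made-up measurement vectors (e.g., mouth width, eye openness):
comparison = register(user_reference=[34.0, 11.0], avatar_reference=[40.0, 14.0])
avatar_expression = modified_expression([36.5, 7.0], comparison)
```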
  • Patent number: 9881209
    Abstract: In one embodiment, an image processing device for detecting tampering in a document image is disclosed. The image processing device comprises a processor and a memory communicatively coupled to the processor. The memory stores processor instructions, which, on execution, cause the processor to determine an image quality of the document image by analyzing one or more quality features extracted from the document image. The processor is caused to pre-process the document image based on a pre-defined ontology of documents when the image quality is above a pre-defined quality threshold. Further, the processor is caused to segment the pre-processed document image into one or more regions of interest based on the pre-defined ontology of documents and detect tampering in a region of interest in the document image by processing each of the one or more regions of interest to detect tampering in the document image.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: January 30, 2018
    Assignee: WIPRO LIMITED
    Inventors: Ramachandra Budihal, Sujatha Jagannath, Sendil Kumar Jaya Kumar
  • Patent number: 9881203
    Abstract: An image processing device (10) includes a posture estimation unit (110) that estimates posture information including a yaw angle and a pitch angle of a person's face from an input image including the person's face, and an image conversion unit (120) that generates a normalized face image in which an orientation of a face is corrected, on the basis of positions of a plurality of feature points in a face region image which is a region including the person's face in the input image, positions of the plurality of feature points in a three-dimensional shape model of a person's face, and the posture information.
    Type: Grant
    Filed: August 26, 2014
    Date of Patent: January 30, 2018
    Assignee: NEC CORPORATION
    Inventor: Akihiro Hayasaka
  • Patent number: 9876950
    Abstract: An image capturing apparatus capable of setting a focus detection area in an imaging range includes a photometry unit, a face detection unit configured to detect a face area, and a photometry area setting unit configured to set a position of a main photometry area. When a user sets the focus detection area and the face detection unit detects a face area, the photometry area setting unit determines whether to set a position of the main photometry area to a position corresponding to the face area or to set a position of the main photometry area to a position corresponding to the focus detection area based on information regarding the face area and information regarding the focus detection area set by the user, and the photometry unit performs photometry on the object based on the position of the main photometry area set by the photometry area setting unit.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: January 23, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hinako Nakamura
  • Patent number: 9875392
    Abstract: According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: January 23, 2018
    Assignee: ACCENTURE GLOBAL SERVICES LIMITED
    Inventors: Cyrille Bataller, Anders Astrom
  • Patent number: 9875608
    Abstract: A sports-wagering kiosk including at least a (i) display; (ii) identification card reader; (iii) camera; (iv) processing means; (v) printer; (vi) ticket reader; (vii) bill validator; and (viii) communication link for communicating with a central computer system. The sports-wagering kiosk facilitates a registration and wagering process by: (i) capturing an image of a prospective player; (ii) reading data from an identification card including a photograph; (iii) verifying an age of the prospective player and validity of the identification card based on reading the data therefrom; (iv) transmitting at least some of the data read from the identification card to a central computer system; (v) verifying the identification of the prospective player by comparing the captured photograph to a photograph on the identification card; (vi) once the identification is verified, prompting the prospective player to enter a password; and (vii) printing a receipt with player account information.
    Type: Grant
    Filed: September 16, 2014
    Date of Patent: January 23, 2018
    Assignee: American Wagering, Inc.
    Inventors: Sandra Drozd, Ronald Tabat, Sean Cronan
  • Patent number: 9875395
    Abstract: A system and method for tagging an image of an individual in a plurality of photos is disclosed herein. A feature vector of an individual is used to analyze a set of photos on a social networking website to determine if an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged preferably by listing a URL or URI for each of the photos in a database.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: January 23, 2018
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: Charles A. Myers, Alex Shah
  • Patent number: 9875255
    Abstract: A terminal and a method for sharing content are provided. A terminal includes an image acquirer configured to acquire face image data from a camera while content is being displayed, a face recognizer configured to recognize a face included in the face image data, a face change detector configured to detect whether the recognized face is different from a face recognized in a previous image data, a contact searcher configured to, in response to the detection that the recognized face is different from the face recognized in the previous image data, search a contact corresponding to the recognized face, and an information transmitter configured to transmit content usage information to the searched contact.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: January 23, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang Hyun Yoo, Kyoung Gu Woo, Seok Jin Hong, Yo Han Roh, Ji Hyun Lee, Ho Dong Lee
  • Patent number: 9870507
    Abstract: In the image extraction device, an instruction acquisition unit acquires an instruction input by a user, and an image group selection unit selects a second image group, which has a smaller number of images than a first image group, from the first image group in response to the instruction. Then, an extraction reference determination unit determines an image extraction reference when extracting an image from the second image group based on images included in the first image group, and an image extraction unit extracts one or more images, the number of which is smaller than the number of images in the second image group, from the second image group according to the image extraction reference.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: January 16, 2018
    Assignee: FUJIFILM Corporation
    Inventors: Kei Yamaji, Junichi Asada
  • Patent number: 9864901
    Abstract: Implementations relate to image feature detection and masking in images based on color distributions. In some implementations, a computer-implemented method to determine a mask for an image includes determining a spatial function for a detected feature depicted in the image, the function indicating pixels of the image relative to an estimated feature boundary. A respective color likelihood distribution is determined for each of multiple regions of the image based on one or more distributions of color values in the regions, including a feature region and a non-feature region. A confidence mask is determined based on the spatial function and one or more of the color likelihood distributions. The confidence mask indicates, for each of multiple pixels of the image, an associated confidence that the pixel is a feature pixel. A modification is applied to pixels in the image using the confidence mask.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: January 9, 2018
    Assignee: Google LLC
    Inventor: Jason Chang
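The sketch below illustrates one way the confidence mask described in patent 9864901 above could be assembled: per-region color histograms act as color likelihood distributions, and a spatial prior around the estimated feature boundary is combined with the per-pixel likelihood ratio. The binning, the Gaussian spatial term, and the combination rule are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def color_likelihood(pixels, bins=16):
    # Normalized 3-D color histogram over a region's pixels -> likelihood lookup.
    hist, edges = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist / max(hist.sum(), 1), edges

def lookup(hist, edges, pixels):
    idx = [np.clip(np.digitize(pixels[:, c], edges[c]) - 1, 0, hist.shape[c] - 1)
           for c in range(3)]
    return hist[idx[0], idx[1], idx[2]]

def confidence_mask(image, feature_px, nonfeature_px, boundary_dist, sigma=10.0):
    """image: (H, W, 3); boundary_dist: (H, W) signed distance to the estimated boundary."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(float)
    f_hist, f_edges = color_likelihood(feature_px)      # feature-region colors
    b_hist, b_edges = color_likelihood(nonfeature_px)   # non-feature-region colors
    p_feat = lookup(f_hist, f_edges, flat)
    p_back = lookup(b_hist, b_edges, flat)
    color_term = p_feat / (p_feat + p_back + 1e-9)
    # Spatial function: full confidence inside the boundary, decaying outside it.
    spatial_term = np.exp(-np.maximum(boundary_dist.reshape(-1), 0) ** 2 / (2 * sigma ** 2))
    return (color_term * spatial_term).reshape(h, w)
```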
  • Patent number: 9864903
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for matching faces. The method includes receiving an image of a face of a first person from a device of a second person, comparing the image of the face of the first person to a database of known faces in a contacts list of the second person, identifying a group of potential matching faces from the database of known faces, and displaying to the second person the group of potential matching faces. In one variation, the method receives input selecting one face from the group of potential matching faces and displays additional information about the selected one face. In a related variation, the method displays additional information about one or more face in the displayed group of potential matching faces without receiving input.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: January 9, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: William Roberts Cheswick
  • Patent number: 9864902
    Abstract: An image processing apparatus includes: a holding unit that holds a plurality of images; a condition checking unit that checks imaging conditions of the plurality of images; a collation determining unit that determines whether to collate images among the plurality of images based on the imaging conditions of the images; a collation unit that collates the images determined to be collated by the collation determining unit to obtain a degree of similarity; and a classifying unit that classifies the collated images into a same category when the degree of similarity is equal to or greater than a predetermined threshold.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: January 9, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Shunsuke Nakano, Hiroshi Sato, Yuji Kaneda, Takashi Suzuki, Atsuo Nomoto
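A compact sketch of the flow outlined in the abstract of patent 9864902 above: check imaging conditions to decide whether a pair of images should be collated at all, compute a similarity only for pairs that pass, and place images whose similarity reaches the threshold into the same category. The exposure-based condition check, cosine similarity, and greedy grouping are illustrative choices.

```python
import numpy as np

def should_collate(cond_a, cond_b, max_exposure_gap=2.0):
    # Illustrative condition check: skip collation if exposure differs too much.
    return abs(cond_a["exposure"] - cond_b["exposure"]) <= max_exposure_gap

def similarity(feat_a, feat_b):
    return float(np.dot(feat_a, feat_b) /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-9))

def classify(images, threshold=0.8):
    """images: list of dicts with 'feature' and 'cond'; returns category labels."""
    labels = [-1] * len(images)
    next_label = 0
    for i, img in enumerate(images):
        if labels[i] == -1:
            labels[i] = next_label
            next_label += 1
        for j in range(i + 1, len(images)):
            if labels[j] != -1 or not should_collate(img["cond"], images[j]["cond"]):
                continue
            if similarity(img["feature"], images[j]["feature"]) >= threshold:
                labels[j] = labels[i]   # same category when similarity >= threshold
    return labels
```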
  • Patent number: 9858470
    Abstract: A method for performing a face tracking function in an electronic device is provided. The electronic device has a touch panel, a camera, and a processor. The method includes the following steps. A touch signal is received by the touch panel. Under a video call, a face tracking mode is entered based on the touch signal by the processor. Face tracking is performed on a captured frame from the camera to obtain at least one region of interest (ROI) of the captured frame by the processor, each ROI having an image of a face. A target frame is generated by combining the at least one ROI by the processor. The target frame is transmitted to another electronic device by the processor, so that the target frame is shown on the other electronic device as a video talk frame.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: January 2, 2018
    Assignee: HTC CORPORATION
    Inventors: Ming-Che Kang, Chung-Ko Chiu
  • Patent number: 9858404
    Abstract: Embodiments of the present invention may involve a method, system, and computer program product for controlling privacy in a face recognition application. A computer may receive an input including a face recognition query and a digital image of a face. The computer may identify a target user associated with a facial signature in a first database based at least in part on a statistical correlation between a detected facial signature and one or more facial signatures in the first database. The computer may extract a profile of the target user from a second database. The profile of the target user may include one or more privacy preferences. The computer may generate a customized profile of the target user. The customized profile may omit one or more elements of the profile of the target user based on the one or more privacy preferences and/or a current context.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Seraphin B. Calo, Bong Jun Ko, Kang-Won Lee, Theodoros Salonidis, Dinesh C. Verma
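To make the privacy step of patent 9858404 above concrete, here is a small sketch that builds a customized profile by dropping fields the target user's privacy preferences disallow for the current context. The field names, preference structure, and context rule are hypothetical; the facial-signature matching described in the abstract is omitted.

```python
def customized_profile(profile, preferences, context):
    """profile: dict of user fields; preferences: field -> set of allowed contexts."""
    out = {}
    for field, value in profile.items():
        allowed = preferences.get(field, set())
        # Keep a field only if the current context is explicitly permitted.
        if context in allowed:
            out[field] = value
    return out

# Usage with made-up data:
profile = {"name": "A. User", "employer": "Example Corp", "phone": "555-0100"}
prefs = {"name": {"work", "public"}, "employer": {"work"}, "phone": set()}
print(customized_profile(profile, prefs, context="public"))  # -> {'name': 'A. User'}
```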
  • Patent number: 9860282
    Abstract: Systems and methods are provided for enabling real-time synchronous communication with persons appearing in image or video files. For example, an image or video file is displayed on a display screen of a computing device. The computing device detects a user selection of a person present in the displayed image or video. A request is sent from the computing device to a service provider for profile information associated with the user-selected person. The computing device receives from the service provider profile information associated with the user-selected person, wherein the profile information includes a communications address of the user-selected person. The communications address is utilized to initiate a communications session on the computing device with the user-selected person present in the displayed image or video.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventor: Robert G. Farrell
  • Patent number: 9858679
    Abstract: Systems and methods associated with dynamic face identification are disclosed. One example method includes matching a query face against a set of clusters in a dynamic collection. Matching the query face against the set of clusters may facilitate identifying a person associated with the query face. The example method also includes matching the query face against a set of images in a static gallery to identify the person. Matching the query face against the static gallery may be performed when matching the query face against the set of clusters fails to identify the person. The example method also includes updating the set of clusters in the dynamic collection using the query face.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: January 2, 2018
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Tong Zhang
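The three steps in the abstract of patent 9858679 above map naturally onto a small control flow: try the dynamic cluster collection first, fall back to the static gallery when that fails, and fold the query face back into the clusters. The matching functions below are placeholders; a real system would compare face embeddings against a distance threshold.

```python
def identify(query_face, clusters, static_gallery, match_cluster, match_gallery):
    """clusters: list of {'person': ..., 'faces': [...]}; static_gallery: person -> images."""
    # 1) Match the query face against the set of clusters in the dynamic collection.
    person, cluster = match_cluster(query_face, clusters)   # (None, None) on failure
    # 2) Match against the static gallery only if the clusters did not identify anyone.
    if person is None:
        person = match_gallery(query_face, static_gallery)  # may still be None
    # 3) Update the set of clusters in the dynamic collection using the query face.
    if cluster is not None:
        cluster["faces"].append(query_face)
    else:
        clusters.append({"person": person, "faces": [query_face]})
    return person
```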
  • Patent number: 9854967
    Abstract: A detector detects head postures and gaze directions from images of a person to be measured captured by an imaging unit; a generation unit generates a gaze direction distribution with respect to each of the head postures, from the head postures and the gaze directions detected by the detector; a calibration unit determines predetermined one of the head postures as a reference posture, and calculates calibration parameters to be used to calibrate the gaze direction distribution with respect to the reference posture; and a correction unit corrects the gaze direction distributions with respect to the head postures other than the reference posture by using the calibration parameters calculated by the calibration unit. This reduces the influence on the calibration which may vary due to change in the head posture.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: January 2, 2018
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Masayuki Kimura, Tadashi Shibata
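A minimal numeric sketch of the calibration idea in patent 9854967 above: take the gaze-direction distribution observed at the chosen reference head posture, derive a correction (here a simple mean offset against an assumed true direction), and apply it to the distributions of the other postures. The offset-style calibration parameter is an assumption; the patent does not commit to this form.

```python
import numpy as np

def calibration_offset(reference_gazes, true_direction):
    # Calibration parameters from the reference posture: mean deviation of the
    # observed gaze directions (yaw, pitch) from an assumed true direction.
    return np.asarray(true_direction) - np.mean(reference_gazes, axis=0)

def correct(gaze_distribution, offset):
    # Correct another head posture's gaze-direction distribution.
    return np.asarray(gaze_distribution) + offset

offset = calibration_offset(reference_gazes=[[2.1, -0.5], [1.9, -0.3]],
                            true_direction=[0.0, 0.0])
corrected = correct([[5.0, 1.0], [4.6, 0.8]], offset)
```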
  • Patent number: 9858525
    Abstract: Disclosed herein are technologies directed to training a neural network to perform semantic segmentation. A system receives a training image, and using the training image, candidate masks are generated. The candidate masks are ranked and a set of the ranked candidate masks is selected for further processing. One of the set of the ranked candidate masks is selected to train the neural network. The selected candidate mask is also used as an input to train the neural network in a further training evolution. In some examples, the one of the set of the ranked candidate masks is selected randomly to reduce the likelihood of ending up in poor local optima that result in poor training inputs.
    Type: Grant
    Filed: October 14, 2015
    Date of Patent: January 2, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jifeng Dai, Kaiming He, Jian Sun
  • Patent number: 9858296
    Abstract: A technique for selecting a representative image from a group of digital images includes extracting data representing an image of a face of a person from each image in the group using a face recognition algorithm, determining a score for each image based on one or more quality parameters that are satisfied for the respective image, and selecting the image having the highest score as the representative image for the group. The quality parameters may be based on any quantifiable characteristics of the data. Each of these quality parameters may be uniquely weighted, so as to define the relative importance of one parameter with respect to another. The score for determining the representative image of the group may be obtained by adding together the weights corresponding to each quality parameter that is satisfied for a given image. Once selected, the representative image may be displayed in a graphical user interface.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: January 2, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Angad Kumar Gupta, Alok Kumar Singh, Ram Prasad Purumala
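The scoring rule described in patent 9858296 above is simple enough to state directly: each quality parameter has a weight, an image's score is the sum of the weights of the parameters it satisfies, and the highest-scoring image represents the group. The particular parameters and weights below are invented for illustration.

```python
# Hypothetical quality parameters and weights; the patent leaves these open.
WEIGHTS = {"face_in_focus": 3.0, "eyes_open": 2.0, "well_lit": 1.5, "frontal_pose": 1.0}

def score(image_flags):
    """image_flags: dict mapping parameter name -> bool (parameter satisfied?)."""
    return sum(w for name, w in WEIGHTS.items() if image_flags.get(name, False))

def representative(group):
    """group: dict of image_id -> flags; returns the id with the highest score."""
    return max(group, key=lambda image_id: score(group[image_id]))

group = {
    "img_001": {"face_in_focus": True, "eyes_open": True, "well_lit": False},
    "img_002": {"face_in_focus": True, "eyes_open": True, "well_lit": True},
}
print(representative(group))  # -> img_002
```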
  • Patent number: 9858471
    Abstract: An identification apparatus comprises: an extraction unit; an acquisition unit; and an identification unit. The extraction unit extracts, from an object region in an image, a candidate region including candidate points of a predetermined object using parallax information. The acquisition unit acquires a characteristic value based on image information in the candidate region. The identification unit identifies whether or not the candidate region includes the predetermined object based on similarity between the characteristic value and a reference characteristic value.
    Type: Grant
    Filed: March 11, 2016
    Date of Patent: January 2, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Guifen Tian
  • Patent number: 9860594
    Abstract: A method and apparatus 200 are provided for identifying image frames 10 in a video stream 20, and for comparing one video stream 20-1 against one or more other video streams 20-2. The or each video stream 20 is examined to produce a respective digest stream 320-1 comprising digest values 310-1, which may be recorded in a digest record 410. A candidate image frame 10-2 from a second video stream 20-2 provides a respective second digest value 310-2. A match of the digest values indicates matching images in the respective video streams.
    Type: Grant
    Filed: October 3, 2014
    Date of Patent: January 2, 2018
    Assignee: Supponor OY
    Inventor: Arto Vuori
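A toy version of the digest idea in patent 9860594 above: reduce each frame to a small perceptual digest (an average-hash style bit string here, one of many possible digests), record the digest stream, and look a candidate frame's digest up in the record. The hashing scheme and the exact-match comparison are assumptions.

```python
import numpy as np

def frame_digest(gray_frame, size=8):
    """Average-hash style digest of a grayscale frame (2-D numpy array)."""
    h, w = gray_frame.shape
    # Crude downsample to size x size by block averaging.
    small = gray_frame[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).astype(np.uint8).ravel()
    return bits.tobytes()

def build_digest_record(frames):
    # Digest record: digest value -> frame index in the first video stream.
    return {frame_digest(f): i for i, f in enumerate(frames)}

def find_match(candidate_frame, digest_record):
    # A matching digest value indicates matching images in the two streams.
    return digest_record.get(frame_digest(candidate_frame))  # frame index or None
```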
  • Patent number: 9852543
    Abstract: In various example embodiments, a system and methods are presented for generation and manipulation of three dimensional (3D) models. The system and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: December 26, 2017
    Assignee: SNAP INC.
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans
  • Patent number: 9852364
    Abstract: In one embodiment, a method determines known features for existing face tracks that have identity labels and builds a database using these features. The face tracks may have multiple different views of a face. Multiple features from the multiple faces may be taken to build the face models. For an unlabeled face track without identity information, the method determines its sampled features and finds labeled nearest neighbor features with respect to multiple feature spaces from the face models. For each face in the unlabeled face track, the method decomposes the face as a linear combination of its neighbors from the known features from the face models. Then, the method determines weights for the known features to weight the coefficients of the known features. Particular embodiments use a non-linear weighting function to learn the weights that provides more accurate labels.
    Type: Grant
    Filed: March 19, 2015
    Date of Patent: December 26, 2017
    Assignee: HULU, LLC
    Inventors: Cailiang Liu, Zhibing Wang, Chenguang Zhang, Tao Xiong
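A bare-bones sketch of the labeling step described in patent 9852364 above: find labeled nearest neighbors of an unlabeled face feature, express the face as a linear combination of those neighbors (ordinary least squares here; the patent learns the weights with a non-linear weighting function, which is not reproduced), and vote a label from the coefficients.

```python
import numpy as np

def label_face(query, labeled_feats, labels, k=5):
    """query: (D,); labeled_feats: (N, D); labels: length-N list of identities."""
    # k nearest labeled neighbors by Euclidean distance.
    dists = np.linalg.norm(labeled_feats - query, axis=1)
    nn = np.argsort(dists)[:k]

    # Decompose the query as a linear combination of its neighbors
    # (plain least squares; the patent instead learns weights non-linearly).
    coeffs, *_ = np.linalg.lstsq(labeled_feats[nn].T, query, rcond=None)

    # Accumulate coefficient mass per identity and pick the strongest one.
    votes = {}
    for idx, c in zip(nn, coeffs):
        votes[labels[idx]] = votes.get(labels[idx], 0.0) + abs(float(c))
    return max(votes, key=votes.get)
```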
  • Patent number: 9842358
    Abstract: The present invention enables personalized recommendations for a user. In a preferred embodiment, a user submits an image of her face along with personal information. The image is analyzed to produce measurements of the user's facial characteristics. A user profile containing the image measurements and personal information is compared to two or more categories of reference data. For each category, one or more recommendations are produced according to the results of the comparison. The top recommendations are chosen according to a prioritization hierarchy and provided to the user.
    Type: Grant
    Filed: June 19, 2012
    Date of Patent: December 12, 2017
    Assignee: BrighTex Bio-Photonics LLC
    Inventors: Christopher Butler, Brittania Boey
  • Patent number: 9842266
    Abstract: A system and method for detecting electronic device use by a driver of a vehicle including acquiring an image including a vehicle from an associated image capture device positioned to view oncoming traffic, locating a windshield region of the vehicle in the captured image, processing pixels of the windshield region of the image for computing a feature vector describing the windshield region of the vehicle, applying the feature vector to a classifier for classifying the image into respective classes including at least classes for candidate electronic device use and candidate electronic device non-use, and outputting the classification.
    Type: Grant
    Filed: February 5, 2015
    Date of Patent: December 12, 2017
    Assignee: Conduent Business Services, LLC
    Inventors: Orhan Bulan, Yusuf O. Artan, Robert P. Loce, Peter Paul
  • Patent number: 9836643
    Abstract: Biometric enrollment and verification techniques for ocular-vascular, periocular, and facial regions are described. Periocular image regions can be defined based on the dimensions of an ocular region identified in an image of a facial region. Feature descriptors can be generated for interest points in the ocular and periocular regions using a combination of patterned histogram feature descriptors. Quality metrics for the regions can be determined based on region value scores calculated based on texture surrounding the interest points. A biometric matching process for calculating a match score based on the ocular and periocular regions can progressively include additional periocular regions to obtain a greater match confidence.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: December 5, 2017
    Assignee: EyeVerify Inc.
    Inventors: Sashi K. Saripalle, Vikas Gottemukkula, Reza R. Derakhshani
  • Patent number: 9830727
    Abstract: In some implementations, faces based on image data from a camera of a mobile device are detected and one or more of the detected faces are determined to correspond to one or more people in a set of people that are classified as being important to a user. In response to determining that one or more of the detected faces correspond to one or more people in the set of people that are classified as being important to the user, quality scores are determined for the one or more detected faces that are determined to correspond to one or more people that are classified as important to the user. Multiple images with the camera are captured based on the quality scores such that, for each face determined to correspond to a person that is classified as important to the user, at least one of the multiple images includes an image of the face having at least a minimum quality score. A composite image is generated that combines the multiple images.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: November 28, 2017
    Assignee: Google Inc.
    Inventors: Damien Henry, Murphy Stein
  • Patent number: 9827483
    Abstract: A billiard table top lighting apparatus provides substantially uniform lighting across the surface of a billiard table surface. The frame may support one or more cameras, one or more motion sensors, one or more microphones, and/or one or more computing devices to enable any of a variety of innovative features. Such features could include automatic game play recording from one or more perspectives, merged video track storage for replay, review, and analysis, automatic lighting and dimming control, control of the apparatus from any mobile device, automatic provision of a shot clock, and the like.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: November 28, 2017
    Assignee: Smart Billiard Lighting LLC
    Inventors: James W. Bacus, James V. Bacus
  • Patent number: 9824261
    Abstract: A method of face detection to be performed by an apparatus including an image pickup module includes: obtaining a first image including image information of an object that is in focus; obtaining a pseudo distance between the image pickup module and the object when the first image is obtained; determining a first ratio of a plurality of ratios as a scaling value based on the pseudo distance; and performing face detection of the first image by changing the scaling value based on the first ratio.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: November 21, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Eung-Joo Kim
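One way to read the abstract of patent 9824261 above into code: map the pseudo distance to an expected on-screen face size and constrain a standard detector to roughly that scale. The pinhole-style size formula and the use of OpenCV's Haar cascade are illustrative choices, not the patent's mechanism.

```python
import cv2

def expected_face_px(distance_mm, focal_px=1000.0, face_width_mm=160.0):
    # Pinhole approximation (assumption): apparent width shrinks with distance.
    return int(focal_px * face_width_mm / max(distance_mm, 1.0))

def detect_faces(gray_image, pseudo_distance_mm):
    size = expected_face_px(pseudo_distance_mm)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Restrict the search to a band around the expected scale instead of
    # scanning every scale, which is the practical effect of a scaling value.
    return cascade.detectMultiScale(
        gray_image, scaleFactor=1.1, minNeighbors=5,
        minSize=(int(size * 0.7), int(size * 0.7)),
        maxSize=(int(size * 1.3), int(size * 1.3)))
```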
  • Patent number: 9826001
    Abstract: Systems and methods are provided for enabling real-time synchronous communication with persons appearing in image or video files. For example, an image or video file is displayed on a display screen of a computing device. The computing device detects a user selection of a person present in the displayed image or video. A request is sent from the computing device to a service provider for profile information associated with the user-selected person. The computing device receives from the service provider profile information associated with the user-selected person, wherein the profile information includes a communications address of the user-selected person. The communications address is utilized to initiate a communications session on the computing device with the user-selected person present in the displayed image or video.
    Type: Grant
    Filed: October 13, 2015
    Date of Patent: November 21, 2017
    Assignee: International Business Machines Corporation
    Inventor: Robert G. Farrell
  • Patent number: 9824301
    Abstract: In an information processing apparatus that includes sequences of weak classifiers which are logically cascade-connected in each sequence and the sequences respectively correspond to categories of an object and in which the weak classifiers are grouped into at least a first group and a second group in the order of connection, classification processing by weak classifiers belonging to the first group of respective categories is performed by pipeline processing. Based on the processing results of the weak classifiers belonging to the first group of the respective categories, categories in which classification processing by weak classifiers belonging to the second group is to be performed are decided out of the categories. The classification processing by the weak classifiers respectively corresponding to the decided categories and belonging to the second group is performed by pipeline processing.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: November 21, 2017
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tsewei Chen
  • Patent number: 9824313
    Abstract: The disclosure relates to (a) a method and computer program product for training a content classifier and (b) a method and computer program product for using the trained content classifier to determine compliance of content items with a content policy of an online system. A content classifier is trained using two training sets, one containing NSFW content items and the other containing SFW content items. Content signals are extracted from each content item and used by the classifier to output a decision, which is compared against its known classification. Parameters used in the classifier are adjusted iteratively to improve accuracy of classification. The trained classifier is then used to classify content items with unknown classifications. Appropriate action is taken for each content item responsive to its classification. In alternative embodiments, multiple classifiers are implemented as part of a two-tier classification system, with text and image content classified separately.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: November 21, 2017
    Assignee: Flipboard, Inc.
    Inventor: Robert Griesmeyer
  • Patent number: 9824280
    Abstract: A face detection method includes acquiring a video image sequence, performing a video shot boundary detection process on the video image sequence to determine whether a shot change exists in the video image sequence to obtain a first judgment result, and determining, when the first judgment result indicates that a shot change exists in the video image sequence, that face detection has failed. The present disclosure also provides a face detection device, which includes an acquisition unit configured to acquire a video image sequence, a first detection unit configured to perform a video shot boundary detection process on the video image sequence to determine whether a shot change exists in the video image sequence to obtain a first judgment result, and a determination unit configured to determine, when the first judgment result indicates that a shot change exists in the video image sequence, that face detection has failed.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: November 21, 2017
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventor: Peng Li
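A small sketch of the gating logic in patent 9824280 above: compute a simple shot-boundary signal (here a grayscale histogram difference between consecutive frames, one common heuristic), report that face detection has failed whenever a shot change is flagged, and otherwise hand the frame to an ordinary detector. The threshold and histogram metric are assumptions.

```python
import numpy as np

def shot_change(prev_gray, cur_gray, bins=64, threshold=0.4):
    """Flag a shot boundary when the normalized histogram difference is large."""
    h1, _ = np.histogram(prev_gray, bins=bins, range=(0, 256))
    h2, _ = np.histogram(cur_gray, bins=bins, range=(0, 256))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return 0.5 * np.abs(h1 - h2).sum() > threshold  # total variation distance

def detect_face_or_fail(prev_gray, cur_gray, run_face_detector):
    # First judgment result: does a shot change exist in the image sequence?
    if shot_change(prev_gray, cur_gray):
        return None  # per the abstract, treat face detection as failed
    return run_face_detector(cur_gray)
```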
  • Patent number: 9817845
    Abstract: A three-dimensional image file searching method and a three-dimensional image file searching system are provided, and the three-dimensional image file searching method includes the following steps. A three-dimensional query image file is received. The three-dimensional query image file is converted to generate a first image group including a plurality of two-dimensional image files. The first image group is compared with a plurality of second image groups corresponding to a plurality of three-dimensional candidate image files respectively in a database, so as to obtain a search result conforming to the three-dimensional query image file.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: November 14, 2017
    Assignees: XYZprinting, Inc., Kinpo Electronics, Inc., Cal-Comp Electronics & Communications Company Limited
    Inventors: Yi-Hsun Lee, Meng-Gung Li
  • Patent number: 9820014
    Abstract: The illustrative embodiments described herein provide systems and methods for movie identification based on a location. In the embodiment, a method includes locating a mobile communication device associated with a user to form location data, accessing a location database to determine a geographic location of the mobile communication device based on the location data, and identifying a set of movies related to the geographic location by accessing a movie database. Each of the set of movies in the movie database is associated with one or more respective geographic locations. The method also includes presenting a set of movie results corresponding to the set of movies on a graphical user interface of the mobile communication device. In another embodiment, the method may also validate an object photographed by a camera of the mobile communication device, and use the recognized object to identify the set of movies.
    Type: Grant
    Filed: February 23, 2016
    Date of Patent: November 14, 2017
    Assignee: West Corporation
    Inventors: Erika Nelson Kessenger, Bruce Pollock
  • Patent number: 9818114
    Abstract: A computer-based method for authenticating a suspect consumer as an authorized cardholder during a payment card transaction is provided. The method includes registering the authorized cardholder within a portable computer device by receiving a reference sample of the authorized cardholder. The portable computer device includes a processor, a memory, and a camera. The method also includes storing, in the memory, the reference sample of the authorized cardholder and associated payment card information. The method further includes using the camera to capture a transaction sample of the suspect consumer during the payment card transaction. The method also includes comparing, by the processor, the transaction sample to the reference sample stored in the memory. The method further includes authenticating the suspect consumer as the authorized cardholder based at least in part on the comparison.
    Type: Grant
    Filed: August 11, 2014
    Date of Patent: November 14, 2017
    Assignee: Mastercard International Incorporated
    Inventors: Jeremy Michael Pastore, Michael Lester Zhao
  • Patent number: 9814935
    Abstract: Enables a fitting system for sporting equipment using an application that executes on a mobile phone for example to prompt and accept motion inputs from a given motion capture sensor to measure a user's size, range of motion, speed and then utilizes that same sensor to capture motion data from a piece of equipment, for example to further optimize the fit of, or suggest purchase of a particular piece of sporting equipment. Utilizes correlation or other data mining of motion data for size, range of motion, speed of other users to maximize the fit of a piece of equipment for the user based on other user's performance with particular equipment. For example, this enables a user of a similar size, range of motion and speed to data mine for the best performance equipment, e.g., longest drive, lowest putt scores, highest winning percentage, etc., associated with other users having similar characteristics.
    Type: Grant
    Filed: February 15, 2016
    Date of Patent: November 14, 2017
    Assignee: BLAST MOTION INC.
    Inventors: Michael Bentley, Bhaskar Bose, Ryan Kaps
  • Patent number: 9813690
    Abstract: A method for shape and material recovery of an object observed under a moving camera by detecting a change in image intensity induced by motion to corresponding variation in surface geometry and dichromatic reflectance; receiving a sequence of three camera motions to yield a linear system that decouples shape and bidirectional reflectance distribution functions (BRDFs) terms; applying linearities in differential stereo relations to recover shape from camera motion cues, with unknown lighting and dichromatic BRDFs; and recovering unknown shape and reflectance of the object with dichromatic BRDF, using camera motion cues.
    Type: Grant
    Filed: October 10, 2014
    Date of Patent: November 7, 2017
    Assignee: NEC Corporation
    Inventor: Manmohan Chandaker
  • Patent number: 9813909
    Abstract: A method for authenticating the identity of a handset user in a cloud-computing environment is provided. The method includes: obtaining, a login account and a password from the user; judging whether the login account and the password are correct; if the login account or the password is incorrect, refusing the user to access an operating system of the handset; if the login account and the password are correct, sending the login account and the password to a cloud server, wherein the login account and the password correspond to a face sample image library of the user stored on the cloud server; acquiring an input face image of the user; sending the input face image to the cloud server; authenticating, by the cloud server, the identity of the user according to the login account, the password and the input face image.
    Type: Grant
    Filed: November 18, 2016
    Date of Patent: November 7, 2017
    Assignee: GUANGZHOU HAIJI TECHNOLOGY CO., LTD
    Inventors: Minsheng Wang, Wei Lu, Dongxuan Gao, Xiaojun Liu
  • Patent number: 9807298
    Abstract: An apparatus and a method for providing emotional information in an electronic device are provided. At least one content is displayed. Emotional information is extracted from an image obtained via a camera. The emotional information is added to the content.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: October 31, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dae-Sung Kim, So-Ra Kim, Hyun-Kyoung Kim, Hang-Kyu Park, Seung-Kyung Lim
  • Patent number: 9805506
    Abstract: The physical 3D renderer described herein renders one or more captured depth images as a physical 3D rendering. The physical 3D renderer can render physical 3D surfaces and structures in real time. In one embodiment the 3D renderer creates a physical three dimensional (3D) topological surface from captured images. To this end, a depth image of a surface or structure to be replicated is received (for example from a depth camera or depth sensor). Depth information is determined at a dense distribution of points corresponding to points in the depth image. In one embodiment the depth information corresponding to the depth image is fed to actuators on sliding shafts in an array. Each sliding shaft is adjusted to the depth in the depth image to create a physical 3D topological surface like the surface or structure to be replicated.
    Type: Grant
    Filed: March 7, 2014
    Date of Patent: October 31, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Patrick Therien, Michael Sinclair
  • Patent number: 9805064
    Abstract: An image processing system may include an imaging device for capturing an image and an image processing apparatus for processing the image. The imaging device may include an imaging unit for capturing the image, a first recording unit for recording information relating to the image, the information being associated with the image, and a first transmission control unit for controlling transmission of the image to the image processing apparatus. The image processing apparatus may include a reception control unit for controlling reception of the image transmitted from the imaging device, a feature extracting unit for extracting a feature of the received image, a second recording unit for recording the feature, extracted from the image, the feature being associated with the image, and a second transmission control unit for controlling transmission of the feature to the imaging device.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: October 31, 2017
    Assignee: Sony Corporation
    Inventors: Tamaki Kojima, Yoshihiro Yamaguchi, Mikio Sakemoto, Katsuhiro Takematsu
  • Patent number: 9798742
    Abstract: A method and system for the identification of personal presence and enrichment of metadata in image media is disclosed. The method includes obtaining user presence information for user images. Feature extraction is performed on the images and media databases are searched for images based on the presence information, which includes filtering based on known metadata and filtering based on the feature extraction. The user confirms their presence in the filtered images and the user provides new metadata known to the user for the images. The system then infers metadata for the filtered images based on the new metadata and presents the inferred metadata to the user. The user validates the inferred metadata and the inferred metadata confirmed to be valid is stored.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: October 24, 2017
    Assignee: International Business Machines Corporation
    Inventors: Carlos H. Cardonha, Nicole B. Sultanum
  • Patent number: 9798871
    Abstract: A method for authenticating a user includes generating an image for authentication based on at least one authenticated image based on an input image, and performing authentication based on the generated image for authentication.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: October 24, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sungjoo Suh, Chang Kyu Choi, Jae-Joon Han
  • Patent number: 9792535
    Abstract: A computer-implemented method of grouping faces in a large user account for creating an image product includes adding the face images obtained from an image album in a user's account into a first chunk; if the chunk size of the first chunk is smaller than a maximum chunk value, keeping the face images from the image album in the first chunk; otherwise, automatically separating the face images from the image album into a first portion and one or more second portions; keeping the first portion in the first chunk; automatically moving the second portions to subsequent chunks; automatically grouping face images in the first chunk to form face groups; assigning the face groups to known face models associated with the user account; and creating a design for an image-based product based on the face images in the first chunk associated with the face models.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: October 17, 2017
    Assignee: Shutterfly, Inc.
    Inventors: Roman Sandler, Alexander M. Kenis
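The chunking rule described in patent 9792535 above is mostly bookkeeping; a tiny sketch makes it concrete. Face images from an album stay together in the first chunk while it has room, and an album that would overflow it is split, with the overflow carried into subsequent chunks. Measuring chunk size in number of images is an assumption.

```python
def chunk_albums(albums, max_chunk):
    """albums: list of lists of face images; returns a list of chunks."""
    chunks, current = [], []
    for album_faces in albums:
        if len(current) + len(album_faces) <= max_chunk:
            # The whole album fits: keep its faces in the current chunk.
            current.extend(album_faces)
            continue
        # Split the album: a first portion tops up the current chunk,
        # the second portions move to subsequent chunks.
        room = max_chunk - len(current)
        current.extend(album_faces[:room])
        chunks.append(current)
        rest = album_faces[room:]
        while len(rest) > max_chunk:
            chunks.append(rest[:max_chunk])
            rest = rest[max_chunk:]
        current = list(rest)
    if current:
        chunks.append(current)
    return chunks
```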
  • Patent number: 9792528
    Abstract: A search object and m-number of first local features respectively constituted by a feature vector of 1 to i dimensions of local areas of m-number of feature points in an image of the search object are stored, feature points are extracted from the image, second local features respectively constituted by a feature vector of 1 dimension to j dimensions are generated with respect to local areas of n-number of feature points, a smaller number of dimensions among the number of dimensions i of the first local features and the number of dimensions j of the second local features is selected, and an existence of the search object in the image in the video is recognized when a prescribed ratio of the m-number of first local features up to the selected number of dimensions corresponds to the n-number of second local features up to the selected number of dimensions.
    Type: Grant
    Filed: January 30, 2013
    Date of Patent: October 17, 2017
    Assignee: NEC CORPORATION
    Inventors: Toshiyuki Nomura, Akio Yamada, Kota Iwamoto, Ryota Mase
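A compact sketch of the matching rule at the end of the abstract of patent 9792528 above: truncate both sets of local features to the smaller of the two dimension counts, count how many stored (first) features find a close truncated (second) feature, and declare the search object present when that count reaches a prescribed ratio. The distance metric, distance threshold, and ratio value are illustrative.

```python
import numpy as np

def object_present(first_feats, second_feats, dist_thresh=0.5, ratio=0.6):
    """first_feats: (m, i) stored features; second_feats: (n, j) query-image features."""
    d = min(first_feats.shape[1], second_feats.shape[1])  # smaller dimension count
    a = first_feats[:, :d]
    b = second_feats[:, :d]

    matched = 0
    for feat in a:
        # Nearest second local feature in the truncated feature space.
        if np.min(np.linalg.norm(b - feat, axis=1)) <= dist_thresh:
            matched += 1
    # Recognize the object when a prescribed ratio of the m features correspond.
    return matched / len(a) >= ratio
```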