Using A Facial Characteristic Patents (Class 382/118)
  • Patent number: 10235562
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: March 19, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
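    Illustrative sketch: a minimal Python outline of the kind of pipeline the abstract above describes, where per-frame landmark deformations against a neutral reference are matched to reference facial emotions. The reference deformation table, the displacement-based deformation measure, and the nearest-reference matching are hypothetical stand-ins, not the patented implementation.

      import numpy as np

      # Hypothetical reference deformations, one per reference facial emotion.
      REFERENCE_EMOTIONS = {
          "neutral": np.zeros(6),
          "angry":   np.array([-0.3, -0.3, 0.0, 0.0, -0.2, -0.2]),
          "happy":   np.array([ 0.0,  0.0, 0.4, 0.4,  0.1,  0.1]),
      }

      def mesh_deformation(landmarks, neutral_landmarks):
          """Crude stand-in for a virtual-face-mesh deformation: per-landmark
          displacement from a neutral reference, flattened to one vector."""
          return (landmarks - neutral_landmarks).ravel()

      def classify_emotion(frames, neutral_landmarks):
          """frames: sequence of (N, 2) landmark arrays for one detected face."""
          deformations = [mesh_deformation(f, neutral_landmarks) for f in frames]
          mean_deformation = np.mean(deformations, axis=0)
          # Match the observed deformation to the closest reference emotion.
          return min(REFERENCE_EMOTIONS,
                     key=lambda e: np.linalg.norm(mean_deformation - REFERENCE_EMOTIONS[e]))

      # Toy usage with 3 landmarks per frame (deformation vectors of length 6).
      neutral = np.zeros((3, 2))
      frames = [neutral + np.array([[0.0, 0.0], [0.35, 0.35], [0.1, 0.1]]) for _ in range(4)]
      print(classify_emotion(frames, neutral))   # -> "happy" with these toy numbers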
  • Patent number: 10235561
    Abstract: As the use of facial biometrics expands in the commercial and government sectors, the need to ensure that human facial examiners use proper procedures to compare facial imagery will grow. Human examiners have examined fingerprint images for many years such that fingerprint examination processes and techniques have reached a point of general acceptance for both commercial and governmental use. The growing deployment and acceptance of facial recognition can be enhanced and solidified if new methods can be used to assist in ensuring and recording that proper examination processes were performed during the human examination of facial imagery.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: March 19, 2019
    Assignee: AWARE, INC.
    Inventors: Neal Joseph Gieselman, Jonathan Issac Guillory
  • Patent number: 10237843
    Abstract: A position determining unit (4a) determines the position of each of wireless communication apparatuses carried into a vehicle cabin on the basis of the intensities of an electric wave which is transmitted by each wireless communication apparatus and is received by plural antennas (2), or on the basis of the difference between the intensities at the antennas of the electric wave which is transmitted by each wireless communication apparatus and is received by the plural antennas (2). By using a determination result outputted by the position determining unit (4a), a display control unit (4b) outputs, to a display unit (5), an image signal for showing the position in the vehicle cabin of each of the wireless communication apparatuses, and allowing the user to select a wireless communication apparatus which is to be wirelessly connected.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: March 19, 2019
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Yoshikazu Yoshida
  • Patent number: 10229311
    Abstract: Implementations generally relate to face template balancing. In some implementations, a method includes generating face templates corresponding to respective images. The method also includes matching the images to a user based on the face templates. The method also includes receiving a determination that one or more matched images are mismatched images. The method also includes flagging one or more face templates corresponding to the one or more mismatched images as negative face templates.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: March 12, 2019
    Assignee: Google LLC
    Inventors: Jonathan McPhie, Hartwig Adam, Dan Fredinburg, Alexei Masterov
  • Patent number: 10223577
    Abstract: A face image processing apparatus includes: a lighting portion including a first polarizer which polarizes light in a first direction and a light emitter which emits, through the first polarizer, infrared light; an image capturing portion including a second polarizer which polarizes light in a second direction perpendicular to the first direction and an image capturing unit which captures images through the second polarizer; and an image processing portion which detects candidates of eyes using a first image captured when the lighting portion emits the infrared light and a second image captured when the lighting portion does not emit the infrared light. The image processing portion determines, as an eye, a candidate having a hyperbolic or cross shaped pattern present in the first image but not present in the second image.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: March 5, 2019
    Assignee: OMRON AUTOMOTIVE ELECTRONICS CO., LTD.
    Inventors: Keishin Aoki, Shunji Ota
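    Illustrative sketch: the abstract above keeps eye candidates whose pattern is present in the IR-illuminated image but absent from the non-illuminated one. The brightness-difference test below is a simplified, hypothetical stand-in for the hyperbolic or cross-shaped pattern check; it is an illustration, not the patented method.

      import numpy as np

      def candidate_is_eye(lit_image, unlit_image, region, diff_threshold=40.0):
          """region: (row_slice, col_slice) around a detected eye candidate."""
          lit_patch = lit_image[region].astype(float)
          unlit_patch = unlit_image[region].astype(float)
          # Keep the candidate only if it shows up with IR lighting on but not off.
          return (lit_patch.mean() - unlit_patch.mean()) > diff_threshold

      # Toy frames: a bright reflection-like spot appears only in the lit frame.
      lit = np.zeros((100, 100)); unlit = np.zeros((100, 100))
      lit[40:50, 40:50] = 200.0
      region = (slice(40, 50), slice(40, 50))
      print(candidate_is_eye(lit, unlit, region))   # True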
  • Patent number: 10217009
    Abstract: A method for enhancing user liveness detection is provided that includes calculating, by a computing device, a first angle and a second angle for each frame in a video of captured face biometric data. The first angle is between a plane defined by a front face of the terminal device and a vertical axis, and the second angle is between the plane defined by the front face of the terminal device and a plane defined by the face of the user. Moreover, the method includes creating a first signal from the first angles and a second signal from the second angles, calculating a similarity score between the first and second signals, and determining the user is live when the similarity score is at least equal to a threshold score.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: February 26, 2019
    Assignee: DAON HOLDINGS LIMITED
    Inventor: Mircea Ionita
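    Illustrative sketch: the abstract above builds one signal from device-versus-vertical angles and one from device-versus-face angles, then thresholds a similarity score between them. Pearson correlation is used below as a hypothetical choice of similarity score, and the 0.7 threshold is likewise illustrative.

      import numpy as np

      def liveness_decision(device_angles, device_to_face_angles, threshold=0.7):
          """device_angles: per-frame angle between the device front plane and vertical.
          device_to_face_angles: per-frame angle between the device plane and the face plane."""
          s1 = np.asarray(device_angles, dtype=float)
          s2 = np.asarray(device_to_face_angles, dtype=float)
          similarity = np.corrcoef(s1, s2)[0, 1]
          return similarity >= threshold, similarity

      # Toy example: a live face tilted along with the phone yields a second
      # signal that tracks the first, so the similarity score is high.
      t = np.linspace(0, 2 * np.pi, 30)
      first = 10 * np.sin(t) + 20
      second = 9 * np.sin(t) + 35 + np.random.default_rng(0).normal(0, 0.5, t.size)
      print(liveness_decision(first, second))   # (True, ~0.99)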
  • Patent number: 10217085
    Abstract: An approach is provided for recognizing one or more people from media content and determining if the one or more people are associated with a social networking service. A request is received from a user equipment specifying a media content. Electronic processing of the media content to recognize one or more people is initiated. It is determined whether the one or more people are associated with a member account of a social networking service. A prompting of the user is initiated with an option based on the determination.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: February 26, 2019
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Brenda Castro, James Francis Reilly, Matti Johannes Sillanpää, Toni Peter Strandell, Jyri Kullervo Virtanen, Mikko Antero Nurmi
  • Patent number: 10216404
    Abstract: An electronic device and method is disclosed herein. The electronic device may include a memory configured to store image data including at least one object, user identification information, and a specific object mapped to the user identification information, and a processor. The processor may execute the method, including extracting an object from the image data, determining whether the extracted object matches the specific object, if the extracted object matches the specific object, encrypting the image data using the user identification information mapped to the specific object as an encryption key, and storing the encrypted image data in the memory.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: February 26, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jaehwan Kwon
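    Illustrative sketch: if the extracted object matches the specific object mapped to a user, the image data is encrypted with a key built from that user's identification information. Deriving a Fernet key via SHA-256 (using the third-party cryptography package) is an illustrative key-derivation choice; the abstract only states that the identification information is used as the encryption key.

      import base64, hashlib
      from cryptography.fernet import Fernet   # external dependency, for illustration

      def key_from_user_id(user_id: str) -> bytes:
          return base64.urlsafe_b64encode(hashlib.sha256(user_id.encode()).digest())

      def maybe_encrypt(image_bytes: bytes, extracted_object: str,
                        specific_object: str, user_id: str) -> bytes:
          if extracted_object == specific_object:     # stand-in for the object match
              return Fernet(key_from_user_id(user_id)).encrypt(image_bytes)
          return image_bytes                          # stored as-is otherwise

      token = maybe_encrypt(b"...jpeg bytes...", "owner_face", "owner_face", "user-1234")
      restored = Fernet(key_from_user_id("user-1234")).decrypt(token)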
  • Patent number: 10218898
    Abstract: Aspects identify one or more persons appearing within a photographic image framing of a camera viewfinder. A geographic location is determined for an additional person related to such identified persons, wherein the additional person is located within a specified proximity range to the identified persons but does not appear within the photographic image framing. In response to determining that a relationship of the additional person to a person identified within the image framing indicates that the additional person should be included within photographic images of the identified person, aspects recommend that the additional person be added to the photographic image framing prior to acquisition of image data by the camera from the photographic image framing.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit
  • Patent number: 10212338
    Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by the camera. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventor: Rodrigo Carceroni
  • Patent number: 10210393
    Abstract: According to one aspect, embodiments herein provide a visual monitoring system for a load panel comprising a first camera having a field of view and configured to be mounted on a surface of the load panel at a first camera position such that a first electrical component of the load panel is in the field of view of the first camera and to generate image based information corresponding to the first electrical component, and a server in communication with the first camera and configured to receive the image based information corresponding to the first electrical component from the first camera and to provide the image based information from the first camera to a user via a user interface.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: February 19, 2019
    Assignee: SCHNEIDER ELECTRIC USA, INC.
    Inventors: John C. Van Gorp, Matthew Stanlake, Mark A. Chidichimo
  • Patent number: 10210379
    Abstract: At least one example embodiment discloses a method of extracting a feature from an input image. The method may include detecting landmarks from the input image, detecting physical characteristics between the landmarks based on the landmarks, determining a target area of the input image from which at least one feature is to be extracted and an order of extracting the feature from the target area based on the physical characteristics and extracting the feature based on the determining.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: February 19, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungjoo Suh, Seungju Han, Jaejoon Han
  • Patent number: 10210627
    Abstract: A computer system determines a metric for an input object, which could be an image of a person with the metric being a measure of the person's body size, age, etc. A paired neural network system is trained on a training set of objects having pairs of objects each assigned a relative metric. A relative metric for a pair indicates which of the pair has the higher metric. A representative set of objects includes a known assigned metric value for each object. The trained paired neural network system pairwise compares an input object with objects from the representative set to determine a relative metric for each such pair, to arrive at a collection of relative metrics of the input object relative to various objects in the representative set. A metric value can be estimated for the input object based on the collection of relative metrics and those known metric values.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: February 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Ilia Vitsnudel, Ilya Vladimirovich Brailovskiy
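    Illustrative sketch: the input object is pairwise compared against a representative set with known metric values, and a value is estimated from the collection of relative outcomes. The comparator below stands in for the trained paired neural network, and reading off a quantile of the known values is one illustrative aggregation; the abstract leaves the estimation step open.

      import numpy as np

      def paired_network_says_higher(input_obj, reference_obj) -> bool:
          # Hypothetical comparator; the "objects" are plain numbers here so the
          # example runs end to end.
          return input_obj > reference_obj

      def estimate_metric(input_obj, reference_objects, reference_values):
          outcomes = [paired_network_says_higher(input_obj, r) for r in reference_objects]
          fraction_higher = np.mean(outcomes)        # share of references it exceeds
          return np.quantile(reference_values, fraction_higher)

      refs = np.linspace(0, 100, 51)        # representative objects
      ref_values = refs * 1.8 + 30          # their known metric values
      print(estimate_metric(62.0, refs, ref_values))   # ~139, near the true 62*1.8+30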
  • Patent number: 10204090
    Abstract: System, method and architecture for providing improved visual recognition by modeling visual content, semantic content and an implicit social network representing individuals depicted in a collection of content, such as visual images, photographs, etc., which network may be determined based on co-occurrences of individuals represented by the content, and/or other data linking the individuals. In accordance with one or more embodiments, using images as an example, a relationship structure may comprise an implicit structure, or network, determined from co-occurrences of individuals in the images. A kernel jointly modeling content, semantic and social network information may be built and used in automatic image annotation and/or determination of relationships between individuals, for example.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: February 12, 2019
    Assignee: OATH INC.
    Inventors: Jia Li, Xiangnan Kong
  • Patent number: 10204265
    Abstract: Provided is a user authentication method using a natural gesture input. The user authentication method includes recognizing a plurality of natural gesture inputs from image data of a user, determining the number of the plurality of natural gesture inputs as the total number of authentication steps, determining a reference ratio representing a ratio of the number of authentication steps requiring an authentication pass to the total number of authentication steps, determining an actual ratio representing a ratio of the number of authentication steps where authentication has actually passed to the total number of authentication steps, and performing authentication on the user, based on a result obtained by comparing the actual ratio and the reference ratio.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: February 12, 2019
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jun Seong Bang, Dong Chun Lee
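    Illustrative sketch: the pass/fail decision reduces to comparing the ratio of passed authentication steps with a reference ratio, as the abstract describes. The per-gesture checks are represented by plain booleans here.

      def authenticate(step_results, reference_ratio=0.75):
          """step_results: one boolean per recognized natural-gesture input
          (True where that authentication step actually passed)."""
          total_steps = len(step_results)            # number of recognized gestures
          actual_ratio = sum(step_results) / total_steps
          return actual_ratio >= reference_ratio

      print(authenticate([True, True, True, False]))    # 0.75 >= 0.75 -> True
      print(authenticate([True, False, True, False]))   # 0.50 <  0.75 -> False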
  • Patent number: 10198626
    Abstract: Systems, devices, media, and methods are presented for modeling facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial features within the portion of the face. The systems and methods generate a descriptor function representing the set of facial features, fit object functions of the descriptor function, identify an identification probability for each facial feature, and assign an identification to each facial feature.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: February 5, 2019
    Assignee: Snap Inc.
    Inventors: Jia Li, Xutao Lv, Xiaoyu Wang, Xuehan Xiong, Jianchao Yang
  • Patent number: 10200560
    Abstract: Automated sharing of digital images is described. In example implementations, a computing device, such as a smart phone, captures a digital image depicting multiple faces of multiple persons included in the digital image. The computing device is capable of automatically distributing a copy of the digital image to the subjects of the digital image. To do so, a digital image sharing module determines a person identifier using facial detection and recognition. The person identifier, which can be derived from facial characteristics, is used to search a contact information database and find a matching entry. The matching entry includes contact information associated with the person in the digital image. The sharing module transmits a copy of the digital image to the person using the contact information. The digital image sharing module can also display a sharing status indicator indicative of whether the digital image can be, or has been, transmitted automatically.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: February 5, 2019
    Assignee: Adobe Inc.
    Inventors: Sarah A. Kong, Chih-Yao Hsieh
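    Illustrative sketch: each recognized face yields a person identifier that is looked up in a contact database, and a copy is sent wherever a matching entry exists. face_identifiers() and send_copy() are hypothetical stand-ins for the recognition and transmission pieces; the contact data is invented for the example.

      from typing import Dict, Iterable

      CONTACTS: Dict[str, str] = {"person-a": "a@example.com", "person-b": "b@example.com"}

      def face_identifiers(image_bytes: bytes) -> Iterable[str]:
          return ["person-a", "person-unknown"]      # stand-in for detection + recognition

      def send_copy(address: str, image_bytes: bytes) -> None:
          print(f"sent copy to {address}")

      def share_automatically(image_bytes: bytes) -> Dict[str, str]:
          status = {}
          for person_id in face_identifiers(image_bytes):
              address = CONTACTS.get(person_id)
              if address:
                  send_copy(address, image_bytes)
                  status[person_id] = "shared"       # sharing status indicator
              else:
                  status[person_id] = "no contact entry"
          return status

      print(share_automatically(b"...jpeg bytes..."))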
  • Patent number: 10200652
    Abstract: Techniques provided herein apply a precomputed graphical object to one or more images to generate a video that is modified with the precomputed graphical object. Various implementations characterize facial positions on a face in a first image and determine a respective facial position on the face at which to apply a precomputed graphical object. One or more implementations modify the first image by applying the precomputed graphical object to the respective facial position in the first image. Some implementations modify one or more images that are captured after the first image by applying the precomputed graphical object to each respective location for the respective facial position in the one or more images. In turn, various implementations generate a video with images that are modified based on the precomputed graphical object.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: February 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Henrik Valdemar Turbell
  • Patent number: 10198859
    Abstract: In various example embodiments, a system and methods are presented for generation and manipulation of three dimensional (3D) models. The system and methods cause presentation of an interface frame encompassing a field of view of an image capture device. The systems and methods detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods generate a 3D model of the object of interest based on the first change in position and the second change in position.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: February 5, 2019
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Ebony James Charlton, Andrew James McPhee, Michael John Evans
  • Patent number: 10192142
    Abstract: A computer executed method for supervised facial recognition comprising the operations of preprocessing, feature extraction and recognition. Preprocessing may comprise dividing received face images into several subimages, converting the different face image (or subimage) dimensions into a common dimension and/or converting the datatypes of all of the face images (or subimages) into an appropriate datatype. In feature extraction, 2D DMWT is used to extract information from the face images. Application of the 2D DMWT may be followed by FastICA. FastICA, or, in cases where FastICA is not used, 2D DMWT, may be followed by application of the l2-norm and/or eigendecomposition to obtain discriminating and independent features. The resulting independent features are fed into the recognition phase, which may use a neural network, to identify an unknown face image.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: January 29, 2019
    Assignee: University of Central Florida Research Foundation, Inc.
    Inventors: Wasfy Mikhael, Ahmed Aldhahab
  • Patent number: 10192110
    Abstract: There is provided a vehicle safety system including a sensing unit, a processing unit, a control unit and a display unit. The sensing unit is configured to capture an image frame containing an eyeball image from a predetermined distance. The processing unit is configured to calculate a pupil position of the eyeball image in the image frame and generate a drive signal corresponding to the pupil position. The control unit is configured to trigger a vehicle device associated with the pupil position according to the drive signal. The display unit is configured to show information of the vehicle device.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: January 29, 2019
    Assignee: PIXART IMAGING INC.
    Inventors: Chun-Wei Chen, Shih-Wei Kuo
  • Patent number: 10192550
    Abstract: Voice input is received from a user. An ASR system generates in memory a set of words it has identified in the voice input, and updates the set each time it identifies a new word in the voice input to add the new word to the set. A condition indicative of speech inactivity in the voice input is detected. A response for outputting to the user is generated based on the set of identified words, in response to the detection of the speech inactivity condition. The generated response is outputted to the user only after an interval of time beginning at the detection of the speech inactivity condition has ended, and only if no more words have been identified in the voice input by the ASR system during that interval.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: January 29, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Raymond J. Froelich
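    Illustrative sketch: the response is withheld until an interval following the speech-inactivity detection has elapsed, and is dropped if the ASR identifies any further words during that interval. The ASR is simulated by a list of (timestamp, word) events; the 1.5-second interval is an assumed value.

      def respond_after_inactivity(word_events, inactivity_time, interval=1.5):
          # Words identified up to the moment speech inactivity was detected.
          identified = {w for t, w in word_events if t <= inactivity_time}
          # Words identified during the waiting interval, if any.
          resumed = [w for t, w in word_events
                     if inactivity_time < t <= inactivity_time + interval]
          if resumed:
              return None                  # user kept speaking; withhold the response
          return f"response based on: {sorted(identified)}"

      events = [(0.2, "book"), (0.6, "a"), (0.9, "table"), (2.0, "for"), (2.2, "two")]
      print(respond_after_inactivity(events, inactivity_time=1.0))   # None (speech resumed)
      print(respond_after_inactivity(events, inactivity_time=3.0))   # responds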
  • Patent number: 10192104
    Abstract: Systems and methods are provided for authenticating a user of a computing device. The system comprises one or more memory devices storing instructions, and one or more processors configured to execute the instructions to provide, to a computing device associated with a user, an indication of a prescribed authentication parameter. The system also receives image data including an image of the user of the computing device captured using an image sensor of the computing device. The system determines an identity of the user based on an analysis of the received image data, determines whether the received image data includes a feature corresponding to the prescribed authentication parameter, and authenticates the user based at least in part on whether the received image data includes the feature corresponding to the prescribed authentication parameter.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: January 29, 2019
    Assignee: Capital One Services, LLC
    Inventor: Colin Robert MacDonald
  • Patent number: 10187623
    Abstract: A stereo vision SoC and a processing method thereof are provided. The stereo vision SoC extracts first support points from an image, adds second support points, performs triangulation based on the first and second support points, and extracts disparity using a result of the triangulation. Accordingly, depth image quality is improved and the hardware is easily implemented in the stereo vision SoC.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: January 22, 2019
    Assignee: Korea Electronics Technology Institute
    Inventors: Haeng Seon Son, Seon Young Lee, Kyung Won Min
  • Patent number: 10181090
    Abstract: A technique for multi-camera object tracking is disclosed that preserves privacy of imagery from each camera or group of cameras. This technique uses secure multi-party computation to compute a distance metric across data from multiple cameras without revealing any information to operators of the cameras except whether or not an object was observed by both cameras. This is achieved by a distance metric learning technique that reduces the computing complexity of secure computation while maintaining object identification accuracy.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: January 15, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chun-Te Chu, Jaeyeon Jung, Zicheng Liu, Ratul Mahajan
  • Patent number: 10182196
    Abstract: According to various embodiments of the present disclosure, an electronic device includes: a camera module that obtains an image; and a processor which implements the method, including setting a quadrangular area in the obtained image including a reference pixel and a corresponding pixel disposed respectively at corners of the quadrangular area, calculating an accumulated-pixel value for each pixel of the obtained image corresponding to the quadrangular area, such that a particular pixel value for a particular pixel is a sum of the pixel values beginning from the reference pixel, continuing though an arrangement of pixels in the quadrangular area and terminating at the particular pixel, and generating an image quality processing-dedicated frame of accumulated-pixel values based on calculated accumulated-pixel values of each pixel of the frame.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: January 15, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Byeong-Chan Park
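    Illustrative sketch: the accumulated-pixel frame described above, where each entry is the sum of all pixel values from the reference (top-left) pixel through the current pixel. Building it with cumulative sums, and the rectangle-sum lookup shown afterwards, are standard ways to work with such a table and are not taken from the patent text.

      import numpy as np

      def accumulated_pixel_frame(image: np.ndarray) -> np.ndarray:
          """image: 2-D array of pixel values for the quadrangular area."""
          return image.cumsum(axis=0).cumsum(axis=1)

      def area_sum(acc: np.ndarray, top: int, left: int, bottom: int, right: int) -> float:
          """Sum of image[top:bottom+1, left:right+1] from four accumulated values."""
          total = acc[bottom, right]
          if top > 0:
              total -= acc[top - 1, right]
          if left > 0:
              total -= acc[bottom, left - 1]
          if top > 0 and left > 0:
              total += acc[top - 1, left - 1]
          return total

      img = np.arange(16, dtype=float).reshape(4, 4)
      acc = accumulated_pixel_frame(img)
      print(area_sum(acc, 1, 1, 2, 2), img[1:3, 1:3].sum())   # both 30.0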
  • Patent number: 10176199
    Abstract: In one embodiment, a social networking system automatically tags one or more users to an image file by creating a list of potential matches, selecting a subset of potential matches based on location, asking a first user to confirm the subset of potential matches, and tagging one or more matched users to the image file.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: January 8, 2019
    Assignee: Facebook, Inc.
    Inventor: Erick Tseng
  • Patent number: 10176654
    Abstract: A suspicious person detection technology which is less likely to cause a blind spot of detection of a suspicious person is provided. A suspicious person detection system detects a suspicious person present in a predetermined area and includes a probe request detection terminal (100) configured to detect a probe request transmitted from a mobile terminal (400) to generate probe information including first identification information specific to the mobile terminal which transmits the probe information, and an analyzing apparatus (200) configured to acquire the probe information from the probe request detection terminal, and, in the case where the first identification information included in the probe information matches none of one or more pieces of second identification information set in advance, transmit suspicious person information indicating that a suspicious person is detected to a predetermined information processing apparatus (300).
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: January 8, 2019
    Assignee: Recruit Co., Ltd.
    Inventors: Kazunori Okubo, Ryuichiro Maezawa, Hironori Arakawa
  • Patent number: 10169893
    Abstract: Implementations generally relate to optimizing a photo album layout. In some implementations, a method includes receiving a plurality of images and determining a target arrangement. The method also includes arranging the plurality of images in an N-dimensional arrangement based on a predetermined distance function. The method also includes arranging the plurality of images in the target arrangement based on the N-dimensional arrangement.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: January 1, 2019
    Assignee: Google LLC
    Inventors: Stephen Joseph Diverdi, Ohad Izhak Fried
  • Patent number: 10169644
    Abstract: Aspects of the present disclosure provide an image-based face detection and recognition system that processes and/or analyzes portions of an image using "image strips" and cascading classifiers to detect faces and/or various facial features, such as an eye, nose, mouth, cheekbone, jaw line, etc.
    Type: Grant
    Filed: October 24, 2014
    Date of Patent: January 1, 2019
    Assignee: Blue Line Security Solutions LLC
    Inventor: Marcos Silva
  • Patent number: 10169976
    Abstract: Various implementations of an occupant detection system may be used in a vehicle to detect the presence of a living occupant (human or otherwise) and generate a warning. The warning may be communicated to another person(s) or to other vehicle systems to alert people in the vicinity of the vehicle. The system prevents injury and death to people and pets that may be accidentally within a parked car and unable to egress. The system may be integrated into a new vehicle or housed in a separate device that can be plugged into a power outlet within the vehicle.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: January 1, 2019
    Assignee: The Board of Trustees of The University of Alabama
    Inventors: Timothy Austin Haskew, Edward Sazonov
  • Patent number: 10169646
    Abstract: Embodiments provide, in at least one aspect, methods and systems that authenticate at least one face in at least one digital image using techniques to mitigate spoofing. For example, methods and systems trigger an image capture device to capture a sequence of images of the user performing the sequence of one or more position requests based on the pitch and yaw movements. The methods and systems generate a series of face signatures for the sequence of images of the user performing the sequence of one or more position requests. The methods and systems compare the generated series of face signatures to stored face signatures corresponding to the requested sequence of the one or more position requests.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: January 1, 2019
    Assignee: APPLIED RECOGNITION INC.
    Inventors: Ray Ganong, Donald Craig Waugh, Jakub Dolejs, Tomasz Wysocki, Chris Studholme
  • Patent number: 10169659
    Abstract: Devices, systems and methods are disclosed for improving a playback of video data and generation of a video summary. For example, annotation data may be generated for individual video frames included in the video data to indicate content present in the individual video frames, such as faces, objects, pets, speech or the like. A video summary may be determined by calculating a priority metric for individual video frames based on the annotation data. In response to input indicating a face and a period of time, a video summary can be generated including video segments focused on the face within the period of time. The video summary may be directed to multiple faces and/or objects based on the annotation data.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: January 1, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark Eugene Pearson, Dynin Hong Khem, Peter Van Tuyl Bentley, William Christopher Banta, Kevin Michael Gordon, Manlio Armando Lo Conte
  • Patent number: 10169642
    Abstract: Various embodiments described herein notify users regarding photos in which they may appear and suggest photo tags accordingly. Subject to user preferences and privacy settings, facial recognition with respect to a specific user of a social networking system may be performed on one or more photos added by, or otherwise associated with, other entities of the social networking system. For those photos in which the specific user is facially recognized, a suggested photo tag for the specific user may be associated with the recognized photos and the specific user may be alerted accordingly. Depending on the embodiment, the specific user may be provided with an option to confirm the suggested photo tag, decline the suggested photo tag, or do nothing. In the event the specific user declines the suggested photo tag with respect to a particular photo, other users may be prevented from tagging the specific user with respect to the particular photo.
    Type: Grant
    Filed: August 6, 2014
    Date of Patent: January 1, 2019
    Assignee: Facebook, Inc.
    Inventor: Dan Barak
  • Patent number: 10169896
    Abstract: A sub-image of data of a first full image may be selected. The sub-image of data may at least partially obscure an object within the first full image. A request to replace the sub-image of data may be transmitted over a network. The request may include transmitting the full image and transmitting metadata associated with the first full image to one or more server computing devices. The server computing device may analyze a history of images and select one or more images of the history of images that match one or more attributes of the metadata. The server computing device may replace the sub-image of data using the one or more images to generate at least a second full image that includes the object, wherein the object is not obscured. The second full image may be received over the network.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Swaminathan Balasubramanian, Radha M. De, Ashley D. Delport, Indrajit Poddar, Cheranellore Vasudevan
  • Patent number: 10169894
    Abstract: A sub-image of data of a first full image may be selected. The sub-image of data may at least partially obscure an object within the first full image. A request to replace the sub-image of data may be transmitted over a network. The request may include transmitting the full image and transmitting metadata associated with the first full image to one or more server computing devices. The server computing device may analyze a history of images and select one or more images of the history of images that match one or more attributes of the metadata. The server computing device may replace the sub-image of data using the one or more images to generate at least a second full image that includes the object, wherein the object is not obscured. The second full image may be received over the network.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Swaminathan Balasubramanian, Radha M. De, Ashley D. Delport, Indrajit Poddar, Cheranellore Vasudevan
  • Patent number: 10162999
    Abstract: In one embodiment, a method includes accessing an image file associated with a first user of a communication system and detecting a face in an image corresponding to the image file. The method also includes accessing an event database associated with the communication system, the event database containing one or more events, each being associated with the first user and one or more second users of the communication system. The method also includes determining one or more candidates among the second users to be matched to the face, where each candidate is associated with an event in the communication system, and where a time associated with the image is in temporal proximity to a time associated with the event.
    Type: Grant
    Filed: February 23, 2016
    Date of Patent: December 25, 2018
    Assignee: Facebook, Inc.
    Inventors: Phaedra Papakipos, Matthew Nicholas Papakipos
  • Patent number: 10162825
    Abstract: In one embodiment, a geo-social networking system automatically tags one or more social contacts of a first user to a photo of the first user by ranking the social contacts based on spatial and temporal proximity to the first user, and in response to the first user's selection of one or more top ranked social contacts, associating the selected social contacts to the photo.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: December 25, 2018
    Assignee: Facebook, Inc.
    Inventor: David Harry Garcia
  • Patent number: 10162878
    Abstract: An information handling system performs a method for finding a nearest neighbor of a point. In some embodiments, the method may be used for agglomerative clustering. The method includes projecting a space Ω of a first dimension with a first distance δ to a space P of a second, smaller dimension with a distance δ′ by a projection function p. For all pairs of points v1 and v2 in Ω, δ′(p(v1), p(v2)) ≤ δ(v1, v2), where p is the function that projects points in Ω to points in P. The method also includes selecting a point v in Ω and performing a search for its nearest neighbor in Ω by projecting v to P and locating a set S of nearest neighbors in P of p(v). A search is then performed in Ω of a set S′ of points that project onto the points in S.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: December 25, 2018
    Assignee: TIBCO Software Inc.
    Inventors: Stephen Nuchia, Daniel Scott
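    Illustrative sketch: the two-stage search the abstract describes, done in NumPy. Dropping coordinates serves as the projection p, since it never increases Euclidean distance (so the projected distance lower-bounds the original one); the choice of projection and the size of the candidate set S are illustrative assumptions.

      import numpy as np

      def project(points: np.ndarray, keep_dims: int = 2) -> np.ndarray:
          return points[:, :keep_dims]                 # p: Omega -> P

      def nearest_neighbor(v: np.ndarray, omega: np.ndarray, candidate_count: int = 5) -> int:
          P = project(omega)
          pv = project(v[np.newaxis, :])[0]
          # Stage 1: set S of nearest neighbours of p(v) in the smaller space P.
          order = np.argsort(np.linalg.norm(P - pv, axis=1))
          s_indices = order[:candidate_count]
          # Stage 2: search in the original space over S', the points projecting onto S.
          dists = np.linalg.norm(omega[s_indices] - v, axis=1)
          return int(s_indices[np.argmin(dists)])

      rng = np.random.default_rng(0)
      omega = rng.normal(size=(1000, 10))
      query = rng.normal(size=10)
      idx = nearest_neighbor(query, omega)
      print(idx, np.linalg.norm(omega[idx] - query))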
  • Patent number: 10157227
    Abstract: Provided is an image processing apparatus including at least one processor configured to implement: an image processing unit which receives at least one image captured by at least one camera, analyzes the captured image, and generates a summary image related to occurrence of an event among the captured image; and a controller which controls the image processor to generate the summary image in response to at least one of a user request received from a user terminal and a predetermined condition being satisfied.
    Type: Grant
    Filed: November 3, 2015
    Date of Patent: December 18, 2018
    Assignee: HANWHA AEROSPACE CO., LTD.
    Inventors: Sungbong Cho, Jeongwoong Park
  • Patent number: 10157323
    Abstract: Aspects may relate to a device to provide a spoofing or no spoofing indication. The device may comprise a processor and a sensor. The sensor may receive multiple facial frames of a face of a user. The processor coupled to the sensor may be configured to: perform a function based upon components of the face of the user relative to one another to determine measured facial features for two adjacent frames of the multiple facial frames; and determine whether the measured facial features for the two adjacent frames are sufficiently different to indicate liveness of the user and no spoofing attempt.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: December 18, 2018
    Assignee: QUALCOMM Incorporated
    Inventor: Ofir Alon
  • Patent number: 10147023
    Abstract: Provided are methods, systems, and computer-readable medium for synthetically generating training data to be used to train a learning algorithm that is capable of generating computer-generated images of a subject from real images that include the subject. The training data can be generated using a facial rig by changing expressions, camera viewpoints, and illumination in the training data. The training data can then be used for tracking faces in a real-time video stream. In such examples, the training data can be tuned to expected environmental conditions and camera properties of the real-time video stream. Provided herein are also strategies to improve training set construction by analyzing which attributes of a computer-generated image (e.g., expression, viewpoint, and illumination) require denser sampling.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: December 4, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Martin Klaudiny, Steven McDonagh, Derek Bradley, Thabo Beeler, Iain Matthews, Kenneth Mitchell
  • Patent number: 10147241
    Abstract: An object of the present invention is to provide a fitting support device and method which make it possible to reliably select apparel such as clothes that match user's appearance. The fitting support device includes: a color-characteristic processing unit 100 that acquires color characteristic data relating to user's skin color on the basis of captured image data; a body processing unit 101 that colors, on the basis of the color characteristic data, three-dimensional body shape data corresponding to body shape data on a user to thereby create body image data; a color-pattern processing unit 102 that acquires color pattern data corresponding to the color characteristic data, on the basis of clothing data; a wearing processing unit 103 that creates wearing image data on the basis of the body image data and the color pattern data; and a fitting processing unit 104 that creates fitting image data by synthesizing head portion image data on the user and the wearing image data.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: December 4, 2018
    Assignee: SEIREN CO., LTD.
    Inventors: Norihiro Ogata, Kozo Nagata, Junichi Hashimoto, Toshiro Kawabata
  • Patent number: 10142334
    Abstract: A communicating apparatus, method, and system that capture an image, authenticate a person in the image that has been captured, determine a direction of the person based on a result of authenticating the person, and control transmission of a radio wave in the determined direction to connect a terminal device to a network, and communicate with the terminal device connected to the network by using access information included in the transmitted radio wave.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: November 27, 2018
    Assignee: RICOH COMPANY, LTD.
    Inventor: Shinya Endo
  • Patent number: 10142351
    Abstract: A system and method for retrieving contact information based on image recognition searches is disclosed. A requestor takes a picture of a user or retrieves a storage image of a user and transmits the image to an image recognition module. The image recognition module identifies the user and determines whether the requestor can receive access to the user's contact information based on permission rules. For example, the permission rule includes a requirement that the user and the requestor be sufficiently related on a social graph generated by a social network application. The permission rules can also include a requirement that the requestor have a predetermined proximity to the image. Once the permission rules are satisfied, the image recognition module transmits the user's contact information to the requestor.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: November 27, 2018
    Assignee: Google LLC
    Inventors: Christopher Richard Wren, Nadav Aharony
  • Patent number: 10140986
    Abstract: Voice input is received from a user. An ASR system generates in memory a set of words it has identified in the voice input, and updates the set each time it identifies a new word in the voice input to add the new word to the set, during at least one interval of speech activity. Information is pre-retrieved whilst the speech activity interval is still ongoing, for conveying in a response to be outputted at the end of the speech activity interval.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: November 27, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Raymond J. Froelich
  • Patent number: 10140675
    Abstract: Implementations relate to an image grid with selectively prominent images. In some implementations, a computer-implemented method includes identifying a plurality of images, where each image of the plurality of images has a respective importance score. A subset of the images is selected based at least in part on the respective importance score for each image. The method determines respective one or more cells in a grid for occupation by each of the images, where at least one image of the subset is placed in the grid such that it occupies at least two cells in the grid. The method causes the images to be displayed in a user interface on a display screen based on the grid.
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: November 27, 2018
    Assignee: Google LLC
    Inventors: Paul Sowden, Madhur Khandelwal
  • Patent number: 10133988
    Abstract: The proposed method is used for classification in open-set scenarios, wherein often it is not possible to first obtain the training data for all possible classes that may arise during the testing stage. During the test phase, test samples belonging to one of the classes used in the training phase are classified, based on a ratio between similarity scores, as the known correct class, while test samples belonging to any other class are to be rejected and classified as unknown.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: November 20, 2018
    Assignees: SAMSUNG ELETRÔNICA DA AMAZÔNIA LTDA., UNIVERSIDADE ESTADUAL DE CAMPINAS—UNICAMP
    Inventors: Pedro Ribeiro Mendes Júnior, Roberto Medeiros De Souza, Rafael De Oliveira Werneck, Bernardo Vecchia Stein, Daniel Vatanabe Pazinato, Waldir Rodrigues De Almeida, Otávio Augusto Bizetto Penatti, Ricardo Da Silva Torres, Anderson Rocha
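    Illustrative sketch: a test sample is accepted as the best-matching known class only when the runner-up similarity is clearly weaker than the top similarity, otherwise it is rejected as unknown. Cosine similarity against per-class prototype vectors and the 0.8 ratio threshold are illustrative assumptions, not values from the patent.

      import numpy as np

      def classify_open_set(test_vec, class_prototypes, ratio_threshold=0.8):
          """class_prototypes: {label: prototype vector}; returns a label or 'unknown'."""
          labels = list(class_prototypes)
          protos = np.stack([class_prototypes[l] for l in labels])
          sims = protos @ test_vec / (np.linalg.norm(protos, axis=1) * np.linalg.norm(test_vec))
          order = np.argsort(sims)[::-1]
          best, second = sims[order[0]], sims[order[1]]
          # Accept only when the ratio between the scores indicates a clear winner.
          return labels[order[0]] if second / best < ratio_threshold else "unknown"

      protos = {"alice": np.array([1.0, 0.0, 0.0]), "bob": np.array([0.0, 1.0, 0.0])}
      print(classify_open_set(np.array([0.95, 0.10, 0.0]), protos))   # "alice"
      print(classify_open_set(np.array([0.70, 0.70, 0.1]), protos))   # "unknown"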
  • Patent number: 10134131
    Abstract: The disclosure relates to phenotype analysis of cellular image data using a machine-learned, deep metric network model. An example method includes receiving, by a computing device, a target image of a target biological cell having a target phenotype. Further, the method includes obtaining, by the computing device, semantic embeddings associated with the target image and each of a plurality of candidate images of candidate biological cells each having a respective candidate phenotype. The semantic embeddings are generated using a machine-learned, deep metric network model. In addition, the method includes determining, by the computing device, a similarity score for each candidate image. Determining the similarity score for a respective candidate image includes computing a vector distance between the respective candidate image and the target image. The similarity score for each candidate image represents a degree of similarity between the target phenotype and the respective candidate phenotype.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: November 20, 2018
    Assignee: Google LLC
    Inventors: Dale M. Ando, Marc Berndl
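    Illustrative sketch: the similarity-scoring step of the abstract above. The embeddings would come from the machine-learned deep metric network; here they are placeholder vectors, and mapping distance to 1/(1+d) is an illustrative choice, since the abstract only requires computing a vector distance per candidate image.

      import numpy as np

      def similarity_scores(target_embedding, candidate_embeddings):
          distances = np.linalg.norm(candidate_embeddings - target_embedding, axis=1)
          return 1.0 / (1.0 + distances)     # higher score = more similar phenotype

      target = np.array([0.2, 0.9, -0.4])
      candidates = np.array([[0.1, 0.8, -0.5],    # similar candidate phenotype
                             [1.5, -0.7, 0.9]])   # dissimilar candidate phenotype
      print(similarity_scores(target, candidates))   # approx [0.85, 0.29]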
  • Patent number: 10135972
    Abstract: Disclosed is a secure telephone call management system for authenticating users of a telephone system in an institutional facility. Authentication of the users is accomplished by using a personal identification number, preferably in conjunction with speaker independent voice recognition and speaker dependent voice identification. When a user first enters the system, the user speaks his or her name which is used as a sample voice print. During each subsequent use of the system, the user is required to speak his or her name. Voice identification software is used to verify that the provided speech matches the sample voice print. The secure system includes accounting software to limit access based on funds in a user's account or other related limitations. Management software implements widespread or local changes to the system and can modify or set any number of user account parameters.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: November 20, 2018
    Assignee: Global Tel*Link Corporation
    Inventor: Stephen Lee Hodge