Patents Examined by Chan S. Park
  • Patent number: 10796144
    Abstract: A method and device for automatically classifying document hardcopy images by using document hardcopy image descriptors are provided.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: October 6, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Sergey Stanislavovich Zavalishin, Andrey Alekseevich Bout, Ilya Vasilyevich Kurilin, Michael Nikolaevich Rychagov
  • Patent number: 10740659
    Abstract: Techniques facilitating generation of a fused kernel that can approximate a full kernel of a convolutional neural network are provided. In one example, a computer-implemented method comprises determining a first pattern of samples of a first sample matrix and a second pattern of samples of a second sample matrix. The first sample matrix can be representative of a sparse kernel, and the second sample matrix can be representative of a complementary kernel. The first pattern and second pattern can be complementary to one another. The computer-implemented method also comprises generating a fused kernel based on a combination of features of the sparse kernel and features of the complementary kernel that are combined according to a fusing approach and training the fused kernel.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: August 11, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Richard Chen, Quanfu Fan, Marco Pistoia, Toyotaro Suzumura
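The complementary sampling idea in the abstract above can be sketched as follows. This is my own minimal NumPy illustration, not IBM's implementation: a checkerboard mask splits a full 2D kernel into a sparse part and a complementary part whose sampling patterns never overlap, and the fusing step here is a plain sum of the two parts, which recovers every entry of the original kernel.

```python
import numpy as np

def fuse_kernels(full_kernel, mask):
    # Split the full kernel into a sparse kernel (entries selected by `mask`)
    # and a complementary kernel (the remaining entries).
    sparse_kernel = full_kernel * mask
    complementary_kernel = full_kernel * (1 - mask)
    # The two sampling patterns are complementary, so their sum reproduces
    # the full kernel exactly.
    fused = sparse_kernel + complementary_kernel
    return sparse_kernel, complementary_kernel, fused

kernel = np.arange(9, dtype=float).reshape(3, 3)
# Checkerboard sampling pattern; its complement covers the other entries.
pattern = np.indices((3, 3)).sum(axis=0) % 2
sparse, comp, fused = fuse_kernels(kernel, pattern)
```

The patent trains the fused kernel to approximate the full one; the sum here is only the simplest fusing approach that keeps the patterns complementary.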
  • Patent number: 10705528
    Abstract: A method of visual navigation for a robot includes integrating a depth map with localization information to generate a three-dimensional (3D) map. The method also includes motion planning based on the 3D map, the localization information, and/or a user input. The motion planning overrides the user input when a trajectory and/or a velocity, received via the user input, is predicted to cause a collision.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: July 7, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Casimir Matthew Wierzynski, Bardia Fallah Behabadi, Sarah Paige Gibson, Aliakbar Aghamohammadi, Saurav Agarwal
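The override rule described above can be reduced to a few lines. This is my own placeholder sketch, not QUALCOMM's method: the planner accepts the user's commanded velocity unless the predicted straight-line trajectory crosses an occupied cell, in which case a safe stop command overrides the input.

```python
def plan_motion(user_velocity, position, obstacles, horizon=3):
    # Predict positions along a straight-line trajectory for a few steps.
    trajectory = [
        (position[0] + user_velocity[0] * t, position[1] + user_velocity[1] * t)
        for t in range(1, horizon + 1)
    ]
    collision = any(p in obstacles for p in trajectory)
    # Override the user input with a stop command on predicted collision.
    return (0, 0) if collision else user_velocity

obstacles = {(2, 0)}
safe = plan_motion((0, 1), (0, 0), obstacles)      # clear path: keep input
stopped = plan_motion((1, 0), (0, 0), obstacles)   # would hit (2, 0): override
```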
  • Patent number: 10699151
    Abstract: A system and method are provided for performing saliency detection on an image or video. The method includes training and creating deep features using deep neural networks, such that an input image is transformed into a plurality of regions, which minimizes intra-class variance and maximizes inter-class variance according to one or more active contour energy constraints. The method also includes providing an output associated with the deep features.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 30, 2020
    Assignee: Miovision Technologies Incorporated
    Inventors: Akshaya K. Mishra, Zhiming Luo, Andrew J. Achkar, Justin A. Eichel
  • Patent number: 10699110
    Abstract: An image processing apparatus includes a detector to detect, from a multi-value image, a table that can display a character in each of a plurality of cells delimited by a plurality of ruled lines and a conformation unit that determines how to conform a color of a constituent element in the table based on information of the element in the table detected by the detector.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: June 30, 2020
    Assignee: Ricoh Company, Ltd.
    Inventors: Takuji Kamada, Satoshi Ouchi
  • Patent number: 10691743
    Abstract: A computing system for realizing visual content of an image collection executes feature detection algorithms and semantic reasoning techniques on the images in the collection to elicit a number of different types of visual features of the images. The computing system indexes the visual features and provides technologies for multi-dimensional content-based clustering, searching, and iterative exploration of the image collection using the visual features and/or the visual feature indices.
    Type: Grant
    Filed: August 5, 2014
    Date of Patent: June 23, 2020
    Assignee: SRI International
    Inventors: Harpreet Singh Sawhney, Jayakrishnan Eledath, Mayank Bansal, Bogdan C. Matei, Xutao Lv, Chaitanya Desai, Timothy Shields
  • Patent number: 10668621
    Abstract: Techniques described herein include a system and methods for implementing fast motion planning of collision detection. In some embodiments, an area voxel map is generated with respect to a three-dimensional space within which a repositioning event is to occur. A number of movement voxel maps are then identified as being related to potential repositioning options. The area voxel map is then compared to each of the movement voxel maps to identify collisions that may occur with respect to the repositioning options. In some embodiments, each voxel map includes a number of bits which each represent voxels in a volume of space. The comparison between the area voxel map and each of the movement voxel maps may include a logical conjunction (e.g., an AND operation). Movement voxel maps for which the comparisons result includes a value of 1 are then removed from a set of valid repositioning options.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: June 2, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Stephen A. Caldara
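The logical-conjunction check above lends itself to a compact sketch. This is my own illustration, not Amazon's implementation: each voxel map is packed into a single Python integer whose bits mark occupied voxels, and a movement option collides with the area exactly when the bitwise AND of the two bitmaps is nonzero.

```python
def occupied_bits(voxels, shape):
    """Pack a set of (x, y, z) voxel coordinates into one integer bitmap."""
    nx, ny, nz = shape
    bitmap = 0
    for x, y, z in voxels:
        bitmap |= 1 << (x * ny * nz + y * nz + z)
    return bitmap

shape = (4, 4, 4)
area = occupied_bits({(0, 0, 0), (1, 2, 3)}, shape)          # obstacles
moves = {
    "lift":  occupied_bits({(3, 3, 3), (2, 2, 2)}, shape),   # clear path
    "slide": occupied_bits({(1, 2, 3), (0, 1, 0)}, shape),   # hits obstacle
}
# Keep only repositioning options whose conjunction with the area is zero.
valid = {name for name, m in moves.items() if area & m == 0}
```

Packing voxels into machine words is what makes this fast: one AND instruction tests many voxels at once.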
  • Patent number: 10657376
    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: May 19, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Chen-Yu Lee, Vijay Badrinarayanan, Tomasz Malisiewicz, Andrew Rabinovich
  • Patent number: 10657400
    Abstract: An authentication method includes projecting a near infrared (NIR) ray using a light emitting diode (LED) of a terminal, receiving a light reflected by a vein of a user using an image sensor of the terminal, verifying whether an image generated using the received light exhibits a vein pattern, in response to the image generated using the received light being verified as exhibiting the vein pattern, generating a vein pattern of the vein based on an image generated using the received light, and in response to the generated vein pattern being determined to match a pre-stored vein pattern, authenticating the user as a registered user corresponding to the pre-stored vein pattern.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: May 19, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jungsoon Shin, Namjoon Kim, Joonah Park, Soochul Lim, Jaehyuk Choi, Tae-Sung Jung
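The final matching step above can be sketched in isolation. This is a hypothetical illustration of only the pattern-comparison stage (the LED/NIR sensor pipeline is out of scope, and the overlap threshold is my own placeholder): treat a vein pattern as a binary array and authenticate when it agrees with a stored pattern at enough positions.

```python
import numpy as np

def matches(pattern, stored, threshold=0.9):
    # Fraction of positions where the probe and stored patterns agree.
    pattern, stored = np.asarray(pattern), np.asarray(stored)
    overlap = np.count_nonzero(pattern == stored) / pattern.size
    return overlap >= threshold

registered = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
probe_good = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 9/10 positions agree
probe_bad  = [0, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # disagrees everywhere
ok = matches(probe_good, registered)
rejected = matches(probe_bad, registered)
```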
  • Patent number: 10628663
    Abstract: Systems and methods for adapting physical activities and exercises based on analysis of physiological parameters are disclosed. A method includes: identifying, by a computer device, a user; receiving, by the computer device, data of the user while the user is engaged in exercise or physical activity; analyzing, by the computer device, the data to determine a detected state of the user; and providing, by the computer device, feedback to the user based on the analyzing the data.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: April 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rogerio Abreu de Paula, Juliana de Melo Batista dos Santos, Andrea Britto Mattos Lima
  • Patent number: 10572767
    Abstract: A method includes receiving, with a computing system, a video item. The method further includes identifying a first set of features within a first frame of the video item. The method further includes identifying, with the computing system, a second set of features within a second frame of the video item, the second frame being subsequent to the first frame. The method further includes determining, with the computing system, differences between the first set of features and the second set of features. The method further includes assigning a clip category to a clip extending between the first frame and the second frame based on the differences.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: February 25, 2020
    Assignee: Netflix, Inc.
    Inventor: Apurvakumar D. Kansara
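The frame-difference step above can be made concrete with a toy sketch. This is my own placeholder, not Netflix's method: compare the feature vectors of two frames and assign a clip category from the magnitude of the difference. The thresholds and category names are assumptions for illustration only.

```python
import numpy as np

def categorize_clip(features_a, features_b, cut_threshold=0.5):
    # Magnitude of the change between the two frames' feature vectors.
    diff = np.linalg.norm(np.asarray(features_a) - np.asarray(features_b))
    if diff < 0.1:
        return "static"        # nearly identical frames
    if diff < cut_threshold:
        return "gradual"       # slow change, e.g. a pan or fade
    return "cut"               # large change suggests a shot boundary

same = categorize_clip([0.2, 0.4], [0.2, 0.4])
jump = categorize_clip([0.0, 0.0], [1.0, 1.0])
```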
  • Patent number: 10565568
    Abstract: A computer system can implement a network service by receiving, from a computing device of a user, image data comprising an image of a record. The computer system can then execute image processing logic to determine a first set of information items from the image, and identify a second set of information items that are not determinable from the record. The computer system may then execute augmentation logic to process the record by (i) accessing a transaction database to identify a plurality of transactions made by the user, (ii) using the first set of information items, identifying a matching transaction from the plurality of transactions that pertains to the record, and (iii) resolving the second set of information items using the matching transaction. The computer system can classify the record in a user account of the user and generate an expense report for the user.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: February 18, 2020
    Assignee: Expensify, Inc.
    Inventors: David M. Barrett, Kevin Michael Kuchta
  • Patent number: 10552977
    Abstract: Systems and methods generate a face-swapped image from a target image using a convolutional neural network trained to apply a source identity to the expression and pose of the target image. The convolutional neural network produces face-swapped images fast enough to transform a video stream. An example method includes aligning the face portion of a target image from an original view to a reference view to generate a target face and generating a swapped face by changing the target face to that of a source identity using a convolutional neural network trained to minimize loss of content from the target face and style from the source identity. The method also includes realigning the swapped face from the reference view to the original view and generating a swapped image by stitching the realigned swapped face with the remaining portion of the target image.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: February 4, 2020
    Assignee: Twitter, Inc.
    Inventors: Lucas Theis, Iryna Korshunova, Wenzhe Shi, Zehan Wang
  • Patent number: 10521662
    Abstract: Apparatus and methods of passive biometric enrollment are configured to provide an introductory experience upon activation of a computer device. The introductory experience involves a user performing movements that allow collection of passive user biometrics by a sensor of the device. The sensor may collect the passive user biometrics while the user performs the movements. The computer device may calibrate one or more features of the computer device based on the movements. The computer device may receive user credentials from the user. The computer device may store a biometric profile including the passive user biometrics in association with the user credentials.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: December 31, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Peter Dawoud Shenouda Dawoud
  • Patent number: 10521660
    Abstract: The image processing method includes a luminance value information obtaining step of obtaining effective radiance values from a subject, and an image generating step of generating a picture image as a set of unit regions each of which has a luminance value obtained by at least partially removing a regular reflection light component on a surface of the subject from the effective radiance values.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: December 31, 2019
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Suguru Kawabata, Takashi Nakano, Kazuhiro Natsuaki, Takahiro Takimoto, Shinobu Yamazaki, Daisuke Honda, Yukio Tamai
  • Patent number: 10509946
    Abstract: A method for matching signatures based on motion signature information including acquiring a first signature and at least one second signature that are to be matched, wherein the first signature and the second signature are generated based on a motion object's motion signature information; and matching the first signature and the second signature based on the motion signature information to obtain a corresponding match result.
    Type: Grant
    Filed: May 5, 2014
    Date of Patent: December 17, 2019
    Assignee: Hangzhou Zhileng Technology Co. Ltd.
    Inventors: Dongge Li, Chengyu Li
  • Patent number: 10510152
    Abstract: A computer-implemented method for determining whether a first image contains at least a portion of a second image includes: determining a first set of feature points associated with the first image; removing from said first set of feature points at least some feature points in the first set that correspond to one or more textures in the first image; and then attempting to match feature points in said first set of feature points with feature points in a second set of feature points associated with said second image to determine whether said first image contains at least a portion of said second image.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: December 17, 2019
    Assignee: Slyce Acquisition Inc.
    Inventors: Philip B. Romanik, Neil Lawrence Mayle
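The texture-filtering step above can be sketched as follows. This is my own illustration, not the patented implementation: feature points flagged as lying on a texture (here via a precomputed boolean mask) are dropped before matching, and the survivors are matched to the second point set by nearest neighbour within a tolerance.

```python
import numpy as np

def match_points(points_a, texture_flags, points_b, tol=1.0):
    # Remove feature points that fall inside textured regions.
    kept = [p for p, on_texture in zip(points_a, texture_flags) if not on_texture]
    matches = []
    for p in kept:
        dists = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in points_b]
        j = int(np.argmin(dists))
        if dists[j] <= tol:
            matches.append((p, points_b[j]))
    return matches

pts_a = [(0, 0), (5, 5), (9, 9)]
flags = [False, True, False]          # (5, 5) lies on a repeating texture
pts_b = [(0.2, 0.1), (9.3, 8.8)]
m = match_points(pts_a, flags, pts_b)
```

Dropping texture points first is the point of the claim: repeating textures produce many near-identical descriptors that create false matches.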
  • Patent number: 10504009
    Abstract: A method for classifying images using a neural network is described. The method utilizes the neural network to generate a deep hash data structure. In one embodiment, the method includes receiving, at a neural network, first data corresponding to a plurality of images of a training image set. The method includes adjusting parameters of the neural network based on concurrent application at the neural network of multiple loss functions to second data corresponding to the plurality of images to generate adjusted parameters. The method includes generating a deep hash data structure based on the adjusted parameters of the neural network. The method further includes sending, via a transmitter to a device, third data corresponding to the deep hash data structure.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: December 10, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Yadong Mu, Zhu Liu
  • Patent number: 10504001
    Abstract: Embodiments are disclosed for detecting duplicate and near duplicate images. An exemplary method includes receiving an original image, preparing the image for fingerprinting, and calculating an image fingerprint, the fingerprint expressed as a sequence of numbers. The method further includes comparing the image fingerprint thus obtained with a set of previously stored fingerprints obtained from a set of previously stored images, and determining if the original image is either a duplicate or a near duplicate of an image in the set if the dissimilarity between the two fingerprints is less than a defined threshold T. Once a duplicate or near duplicate is detected, various defined actions may be taken, including culling the less desirable image or referring the redundancy to a user.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: December 10, 2019
    Assignee: Dropbox, Inc.
    Inventors: Michael Dwan, Jinpeng Ren
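The threshold comparison above can be sketched under simple assumptions of my own (not Dropbox's actual fingerprint or distance): fingerprints are equal-length sequences of numbers, dissimilarity is the count of differing positions, and any stored fingerprint within threshold T flags a duplicate or near duplicate.

```python
def dissimilarity(fp_a, fp_b):
    # Count of positions where the two fingerprints differ.
    return sum(1 for a, b in zip(fp_a, fp_b) if a != b)

def find_near_duplicates(fingerprint, stored, T=2):
    # Indices of stored fingerprints within dissimilarity threshold T.
    return [i for i, fp in enumerate(stored) if dissimilarity(fingerprint, fp) < T]

stored = [[1, 2, 3, 4], [9, 9, 9, 9], [1, 2, 3, 5]]
hits = find_near_duplicates([1, 2, 3, 4], stored)
```

Index 0 is an exact duplicate (distance 0) and index 2 a near duplicate (distance 1); once found, the abstract's follow-up actions (culling or flagging) would apply to those entries.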
  • Patent number: 10503956
    Abstract: Systems and methods of identifying unknown video content are described. An example method may include receiving a first fingerprint and a second fingerprint. The first fingerprint may be a color-based fingerprint derived from colors in a portion of the unknown video content, and the second fingerprint may be at least partially based on a feature other than the colors of the unknown video content. A reference database of reference fingerprints may then be queried using one of the first fingerprint or the second fingerprint to obtain a candidate group of fingerprints. The candidate group of fingerprints may then be queried using the other of the first fingerprint and the second fingerprint to identify at least one query fingerprint. The unknown video content may then be identified using the at least one query fingerprint. For example, the second fingerprint may be a luminance-based fingerprint derived from luminance in the unknown video content.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: December 10, 2019
    Assignee: Gracenote, Inc.
    Inventors: Wilson Harron, Markus K. Cremer
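The two-stage lookup above can be sketched with toy data. The database contents and fingerprint values here are my own placeholders, not Gracenote's system: the first (color-based) fingerprint narrows the reference database to a candidate group, and the second (luminance-based) fingerprint selects the final match within that group.

```python
reference_db = {
    "show_a": {"color": "c1", "luma": "l1"},
    "show_b": {"color": "c1", "luma": "l2"},
    "show_c": {"color": "c2", "luma": "l1"},
}

def identify(color_fp, luma_fp, db):
    # Stage 1: query by color fingerprint to get the candidate group.
    candidates = {k: v for k, v in db.items() if v["color"] == color_fp}
    # Stage 2: query the candidates by luminance fingerprint.
    matches = [k for k, v in candidates.items() if v["luma"] == luma_fp]
    return matches[0] if matches else None

found = identify("c1", "l2", reference_db)
```

Two coarse fingerprints used in sequence disambiguate content that either fingerprint alone would match ambiguously: here "c1" alone matches two shows.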