Patents Examined by Sheela C Chawan
  • Patent number: 10891465
    Abstract: Methods, apparatuses, devices, program products, and media can improve the accuracy rate of searching for a target person. The method includes: obtaining an image of the target person; searching a face image library by using the image of the target person to obtain a first face image template matching the image of the target person, where the face image library includes a plurality of face image templates; and obtaining, according to the first face image template and a pedestrian image library, at least one target pedestrian image template matching the image of the target person, where the pedestrian image library includes a plurality of pedestrian image templates.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 12, 2021
    Assignee: SHENZHEN SENSETIME TECHNOLOGY CO., LTD.
    Inventors: Maoqing Tian, Shuai Yi, Junjie Yan
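    A minimal sketch of the two-stage search the abstract above describes: match the query face against a face library, then use the pedestrian appearance linked to the matched face template to query the full-body library. Embeddings, thresholds, and the library layout are illustrative assumptions, not the patented implementation.

    ```python
    import numpy as np

    def cosine_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def search_target_person(query_face_emb, face_library, pedestrian_library,
                             face_threshold=0.6, top_k=5):
        """face_library: list of (face_embedding, linked_pedestrian_embedding);
        pedestrian_library: list of (image_id, pedestrian_embedding)."""
        # Stage 1: find the face image template best matching the target person.
        best_face, best_ped = max(face_library,
                                  key=lambda t: cosine_sim(query_face_emb, t[0]))
        if cosine_sim(query_face_emb, best_face) < face_threshold:
            return []  # no sufficiently similar face template
        # Stage 2: re-identification search over the pedestrian image library,
        # using the pedestrian appearance linked to the matched face template.
        ranked = sorted(pedestrian_library,
                        key=lambda t: cosine_sim(best_ped, t[1]), reverse=True)
        return ranked[:top_k]
    ```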
  • Patent number: 10885319
    Abstract: Disclosed herein is a posture control system containing a posture control unit that changes a direction of a user's body and a display unit mounted on a user's head. The posture control system includes: a posture data acquiring unit configured to acquire posture data indicating the direction of the user's body; a motion sensor data acquiring unit configured to acquire motion sensor data indicating a direction of a user's face in a real space which is detected by a motion sensor; a camera image data acquiring unit configured to acquire camera image data indicating the face direction with reference to the direction of the user's body which is specified based on a camera image; and a face direction specifying unit configured to specify the direction of the user's face based on the posture data, the motion sensor data, and the camera image data.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: January 5, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Yoshikazu Onuki
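    An illustrative fusion of the three signals the abstract above names: body direction from the posture unit, world-frame head direction from the motion sensor, and face direction relative to the body from the camera image. Angles are in degrees; the complementary weighting is an assumption for illustration only.

    ```python
    def wrap_deg(angle):
        """Wrap an angle to the range [-180, 180)."""
        return (angle + 180.0) % 360.0 - 180.0

    def fuse_face_direction(body_yaw, imu_face_yaw, cam_face_rel_yaw, imu_weight=0.7):
        # The camera gives the face direction relative to the body; add the body
        # direction so it lives in the same (world) frame as the IMU estimate.
        cam_face_yaw = wrap_deg(body_yaw + cam_face_rel_yaw)
        # Blend the two world-frame estimates along the shortest angular path.
        diff = wrap_deg(imu_face_yaw - cam_face_yaw)
        return wrap_deg(cam_face_yaw + imu_weight * diff)

    # Body facing 90 deg, IMU reports the head at 120 deg, camera sees the face
    # turned 25 deg relative to the body -> fused estimate of 118.5 deg.
    print(fuse_face_direction(90.0, 120.0, 25.0))
    ```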
  • Patent number: 10885644
    Abstract: Embodiments of the present application relate to a method, apparatus, and system for detecting a specified image identifier. The method includes retrieving a target image to be detected from a predetermined area, binarizing the target image to be detected to obtain a target binary image to be detected, calibrating connected domains of the target binary image to be detected, successively retrieving image features of candidate connected domains, and comparing the image features corresponding to the candidate connected domains to image features of a standard specified identifier image, wherein the candidate connected domains are determined based at least in part on the calibration of the connected domains, and determining a candidate connected domain as the location of the specified identifier image based at least in part on the comparison of the image features corresponding to the candidate connected domains to image features of the standard specified identifier image.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: January 5, 2021
    Assignee: BANMA ZHIXING NETWORK (HONGKONG) CO., LIMITED
    Inventors: Gang Cheng, Rufeng Chu, Lun Zhang
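    A rough sketch of the pipeline in the abstract above: binarize the target image, label its connected domains, extract a shape feature for each candidate domain, and compare it against the feature of the standard identifier image. Hu moments and the distance threshold are illustrative stand-ins for the claimed image features.

    ```python
    import cv2
    import numpy as np

    def log_hu(binary_img):
        """Log-scaled Hu moments of a binary region (a compact shape feature)."""
        hu = cv2.HuMoments(cv2.moments(binary_img, binaryImage=True)).ravel()
        return np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    def find_identifier(target_gray, standard_gray, max_distance=0.5, min_area=50):
        _, std_bin = cv2.threshold(standard_gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        ref_feat = log_hu(std_bin)
        _, binary = cv2.threshold(target_gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        best = None
        for label in range(1, num):  # label 0 is the background
            x, y, w, h, area = stats[label]
            if area < min_area:
                continue
            region = (labels[y:y + h, x:x + w] == label).astype(np.uint8)
            dist = float(np.linalg.norm(log_hu(region) - ref_feat))
            if best is None or dist < best[0]:
                best = (dist, (x, y, w, h))
        # Return the bounding box of the best candidate, if it is close enough.
        return best[1] if best and best[0] <= max_distance else None
    ```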
  • Patent number: 10867178
    Abstract: Systems and methods are provided for intelligently monitoring environments, classifying objects within such environments, detecting events within such environments, receiving and propagating input concerning image information from multiple users in a collaborative environment, identifying and responding to situational abnormalities or situations of interest based on such detections and/or user inputs.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: December 15, 2020
    Assignee: Palantir Technologies Inc.
    Inventors: Daniel Cervelli, Anand Gupta, Andrew Elder, Robert Imig, Praveen Kumar Ramalingam, Reese Glidden, Matthew Fedderly
  • Patent number: 10867418
    Abstract: An apparatus for generating a cross-sectional abdominal image includes a memory for storing a cross-sectional abdominal image, a measuring unit for measuring an outline of an abdomen, and a controller configured to correct the cross-sectional abdominal image based on the outline of the abdomen measured by the measuring unit.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: December 15, 2020
    Assignee: KYOCERA Corporation
    Inventor: Hiromi Ajima
  • Patent number: 10853634
    Abstract: The systems and methods discussed herein provide for a method that includes receiving, by a device, a first image of a plurality of individuals. The method further includes identifying, by the device from a database comprising images of individuals, a plurality of individuals within the first image. The method further includes associating, by the device, each identified individual within the first image with a first value. The method further includes generating, by the device for a second device, a record comprising identifications of each identified individual within the first image and the first value.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: December 1, 2020
    Assignee: Citrix Systems, Inc.
    Inventor: Sameer Mehta
  • Patent number: 10839196
    Abstract: The present document is directed to automated and semi-automated surveillance and monitoring methods and systems that continuously record digital video, identify and characterize face tracks in the recorded digital video, store the face tracks in a face-track database, and provide query processing functionalities that allow particular face tracks to be quickly identified and used for a variety of surveillance and monitoring purposes. The currently disclosed methods and systems provide, for example, automated anomaly and threat detection, alarm generation, rapid identification of images of parameter-specified individuals within recorded digital video and mapping the parameter-specified individuals in time and space within monitored geographical areas or volumes, functionalities for facilitating human-witness identification of images of individuals within monitored geographical areas or volumes, and many additional functionalities.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: November 17, 2020
    Assignee: ImageSleuth, Inc.
    Inventor: Noah S. Friedland
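    A toy face-track record and query helper in the spirit of the abstract above: store tracks with identity, camera, and time span, then retrieve the tracks that satisfy parameter-specified constraints. The field names are assumptions for illustration, not the patented schema.

    ```python
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class FaceTrack:
        track_id: int
        person_id: Optional[str]   # None if the track is not yet identified
        camera_id: str
        start_ts: float            # seconds since epoch
        end_ts: float

    def query_tracks(tracks: List[FaceTrack], person_id=None, camera_id=None,
                     after_ts=None, before_ts=None) -> List[FaceTrack]:
        """Return the tracks that satisfy every supplied constraint."""
        hits = []
        for t in tracks:
            if person_id is not None and t.person_id != person_id:
                continue
            if camera_id is not None and t.camera_id != camera_id:
                continue
            if after_ts is not None and t.end_ts < after_ts:
                continue
            if before_ts is not None and t.start_ts > before_ts:
                continue
            hits.append(t)
        return sorted(hits, key=lambda t: t.start_ts)
    ```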
  • Patent number: 10832098
    Abstract: Concepts and technologies disclosed herein are directed to image prediction. According to one aspect disclosed herein, an image prediction system can receive a training data set that includes a plurality of training images. The image prediction system can define N-dimensional feature vectors corresponding to the plurality of training images in the training data set, parameterize the N-dimensional feature vectors to obtain a plurality of parameterized curves corresponding to the plurality of training images in the training data set, obtain a square root velocity representation for each parameterized curve of the plurality of parameterized curves, rescale the plurality of parameterized curves to remove scaling variability among the plurality of parameterized curves, define a pre-shape space for the plurality of parameterized curves, and obtain shape space points pertaining to each parameterized curve of the plurality of parameterized curves on a shape space that inherits a structure from the pre-shape space.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: November 10, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Raghuraman Gopalan
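    For reference, the square root velocity (SRV) representation named in the abstract above can be computed as below, with rescaling to unit length to remove scale variability. This is a sketch of the standard SRV construction, not the patented prediction pipeline.

    ```python
    import numpy as np

    def srv_representation(curve, eps=1e-12):
        """curve: (N, d) array of points sampled along a parameterized curve."""
        t = np.linspace(0.0, 1.0, len(curve))
        velocity = np.gradient(curve, t, axis=0)      # c'(t)
        speed = np.linalg.norm(velocity, axis=1)      # |c'(t)|
        # Rescale so the curve has unit length, removing scaling variability.
        length = np.trapz(speed, t)
        velocity = velocity / (length + eps)
        speed = speed / (length + eps)
        return velocity / np.sqrt(speed + eps)[:, None]   # q(t) = c'(t)/sqrt(|c'(t)|)

    # Example: SRV representation of a half circle sampled at 100 points.
    theta = np.linspace(0.0, np.pi, 100)
    q = srv_representation(np.stack([np.cos(theta), np.sin(theta)], axis=1))
    print(q.shape)  # (100, 2)
    ```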
  • Patent number: 10824900
    Abstract: In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device 100 comprises a detection unit 101 and an environment acquisition unit 102. The detection unit 101 detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit 102 acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: November 3, 2020
    Assignee: NEC Corporation
    Inventor: Hiroo Ikeda
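    A simplified sketch of deriving recognition environment information from a marker of known physical size, as in the abstract above: locate the marker in the captured image and use its apparent size to estimate how large a recognition target would appear near that spot. The thresholding, single-marker assumption, and marker size are illustrative, not the claimed method.

    ```python
    import cv2

    def pixels_per_meter_from_marker(frame_gray, marker_size_m=0.20, min_area=100):
        """Estimate apparent scale (pixels per meter) at the marker's location."""
        _, binary = cv2.threshold(frame_gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contours = [c for c in contours if cv2.contourArea(c) >= min_area]
        if not contours:
            return None
        marker = max(contours, key=cv2.contourArea)   # assume the largest blob
        _, _, w, h = cv2.boundingRect(marker)
        return 0.5 * (w + h) / marker_size_m

    # e.g. expected pixel height of a 1.7 m person standing near the marker:
    # ppm = pixels_per_meter_from_marker(gray); expected_px = 1.7 * ppm
    ```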
  • Patent number: 10824916
    Abstract: Systems and methods for improving the accuracy of a computer system for object identification/classification through the use of weakly supervised learning are provided herein. In some embodiments, the method includes (a) receiving at least one set of curated data, wherein the curated data includes labeled images, (b) using the curated data to train a deep network model for identifying objects within images, wherein the trained deep network model has a first accuracy level for identifying objects, receiving a first target accuracy level for object identification of the deep network model, determining, automatically via the computer system, an amount of weakly labeled data needed to train the deep network model to achieve the first target accuracy level, and augmenting the deep network model using weakly supervised learning and the weakly labeled data to achieve the first target accuracy level for object identification by the deep network model.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: November 3, 2020
    Assignee: SRI International
    Inventors: Karan Sikka, Ajay Divakaran, Parneet Kaur
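    One plausible way to estimate how much weakly labeled data is needed for a target accuracy, in the spirit of the abstract above, is to fit a saturating learning curve to accuracies measured at a few augmentation sizes and invert it. The power-law form and the sample numbers below are illustrative assumptions, not the determination step claimed in the patent.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def learning_curve(n, a, b, c):
        # accuracy(n) = a - b * n**(-c), saturating toward a as n grows
        return a - b * np.power(n, -c)

    def weak_data_needed(sizes, accuracies, target_accuracy):
        (a, b, c), _ = curve_fit(learning_curve, sizes, accuracies,
                                 p0=[0.95, 1.0, 0.5], maxfev=10000)
        if target_accuracy >= a:
            return None  # target exceeds the fitted accuracy ceiling
        return float(np.power(b / (a - target_accuracy), 1.0 / c))

    # Accuracies observed after augmenting with 1k, 5k, and 20k weak labels:
    sizes = np.array([1_000.0, 5_000.0, 20_000.0])
    accuracies = np.array([0.71, 0.78, 0.83])
    print(weak_data_needed(sizes, accuracies, target_accuracy=0.85))
    ```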
  • Patent number: 10796149
    Abstract: Methods and systems for automating the management and processing of roof damage analysis. In some embodiments image data associated with damaged roofs is collected and automatically analyzed by a computing device. In some embodiments, the image data is modified automatically to include descriptive metadata and visual indicia marking potential areas of damage. In one embodiment, the systems and methods include a remote computing device receiving visual data associated with one or more roofs. In one embodiment, insurance company specific weightings are determined and applied to received information to determine a type and extent of damage to the associated roof. In one embodiment, results of the methods and systems may be used to automatically generate a settlement estimate or supplement additional information in the estimate generation process.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: October 6, 2020
    Assignee: Accurence, Inc.
    Inventors: Zachary Labrie, Benjamin Zamora
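    A minimal sketch of applying insurer-specific weightings to per-category damage scores from image analysis, as the abstract above describes, to decide an overall damage extent. The category names, weights, and decision threshold are illustrative assumptions.

    ```python
    def weighted_damage_score(damage_scores, insurer_weights):
        """damage_scores and insurer_weights are dicts keyed by damage category."""
        total = sum(insurer_weights.get(cat, 0.0) * score
                    for cat, score in damage_scores.items())
        weight_sum = sum(insurer_weights.get(cat, 0.0) for cat in damage_scores)
        return total / weight_sum if weight_sum else 0.0

    scores = {"hail_hits": 0.8, "missing_shingles": 0.4, "granule_loss": 0.6}
    weights = {"hail_hits": 3.0, "missing_shingles": 2.0, "granule_loss": 1.0}
    overall = weighted_damage_score(scores, weights)
    print("replace roof" if overall > 0.6 else "repair only", round(overall, 2))
    ```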
  • Patent number: 10796426
    Abstract: Evaluating a design of a configurable inspection station for inspecting a workpiece, wherein the design of the configurable inspection station has a plurality of changeable parameters and providing a computer vision system that can receive multiple, different inputs each defining a respective region of interest in a simulated image to search for a feature corresponding to an attribute of the workpiece and determine whether the feature corresponding to the attribute is identifiable in each of the respective regions of interest.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: October 6, 2020
    Assignee: The Gillette Company LLC
    Inventors: Brian Joseph Woytowich, Lucy Lin, Jeffrey Michael Mitrou, Gregory David Aviza
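    A sketch of the "is the feature identifiable in each region of interest" check the abstract above describes for a simulated image: run normalized template matching inside every ROI and report whether the best score clears a threshold. The ROI format, the matching method, and the threshold are assumptions for illustration.

    ```python
    import cv2

    def feature_identifiable(sim_gray, feature_templ_gray, rois, threshold=0.8):
        """rois: list of (x, y, w, h) regions in the simulated image."""
        results = {}
        th, tw = feature_templ_gray.shape[:2]
        for roi in rois:
            x, y, w, h = roi
            patch = sim_gray[y:y + h, x:x + w]
            if patch.shape[0] < th or patch.shape[1] < tw:
                results[tuple(roi)] = (False, 0.0)  # ROI smaller than the template
                continue
            scores = cv2.matchTemplate(patch, feature_templ_gray,
                                       cv2.TM_CCOEFF_NORMED)
            _, max_score, _, _ = cv2.minMaxLoc(scores)
            results[tuple(roi)] = (max_score >= threshold, float(max_score))
        return results
    ```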
  • Patent number: 10796136
    Abstract: A system may receive a primary image containing a first set of facial feature data. The primary image may be sent by a facial recognition device for association with a user account. The system may also retrieve a secondary image from a secondary image source. The secondary image may contain a second set of facial feature data. The secondary image may further depict a user associated with the user account. The system may then compare the first set of facial feature data to the second set of facial feature data to determine whether the primary image depicts the user associated with the user account.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: October 6, 2020
    Assignee: American Express Travel Related Services Company, Inc.
    Inventor: Upendra Mardikar
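    A bare-bones comparison of the two sets of facial feature data the abstract above describes: the primary image's embedding is checked against embeddings from the secondary source tied to the account. The embedding model, similarity metric, and threshold are illustrative assumptions.

    ```python
    import numpy as np

    def depicts_account_user(primary_emb, secondary_embs, threshold=0.6):
        """Return True if any secondary embedding is similar enough to the primary."""
        primary = primary_emb / (np.linalg.norm(primary_emb) + 1e-12)
        for emb in secondary_embs:
            candidate = emb / (np.linalg.norm(emb) + 1e-12)
            if float(np.dot(primary, candidate)) >= threshold:
                return True
        return False
    ```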
  • Patent number: 10789680
    Abstract: A method, device, system, and article of manufacture are provided for generating an enhanced image of a predetermined scene from images. In one embodiment, a method comprises receiving, by a computing device, a first indication associated with continuous image capture of a predetermined scene being enabled; in response to the continuous image capture being enabled, receiving, by the computing device, from an image sensor, a reference image and a first image, wherein each of the reference image and the first image is of the predetermined scene and has a first resolution; determining an estimated second resolution of an enhanced image of the predetermined scene using the reference image and the first image; and in response to the continuous image capture being disabled determining the enhanced image using the reference image and the first image, wherein the enhanced image has a second resolution that is at least the first resolution and about the estimated second resolution.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: September 29, 2020
    Assignee: Google Technology Holdings LLC
    Inventor: Michael D. McLaughlin
  • Patent number: 10776936
    Abstract: A method comprising: providing a first 3D point cloud and a second 3D point cloud about an object obtained using different sensing techniques; removing a scale difference between the 3D point clouds based on a mean distance of points in corresponding subsets of the first and second 3D point clouds; arranging the 3D point clouds in a two-level structure, wherein a first level is a macro structure describing boundaries of the object and a second level is a micro structure consisting of supervoxels of the 3D point cloud; constructing a first graph from the first 3D point cloud and a second graph from the second 3D point cloud such that the supervoxels represent nodes of the graphs and adjacencies of the supervoxels represent edges of the graphs; matching the first and second graph for obtaining a transformation matrix; and registering the 3D point clouds together by applying the transformation matrix.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: September 15, 2020
    Assignee: Nokia Technologies Oy
    Inventors: Lixin Fan, Xiaoshui Huang, Qiang Wu, Jian Zhang
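    Illustrative versions of two steps in the abstract above: removing the scale difference between the clouds using mean distances to the centroid, and registering them with a rigid transformation, here estimated by the classic Kabsch/SVD method on already-corresponding points as a stand-in for the patented graph matching.

    ```python
    import numpy as np

    def remove_scale(cloud_a, cloud_b):
        """Rescale cloud_b so its mean distance-to-centroid matches cloud_a's."""
        da = np.linalg.norm(cloud_a - cloud_a.mean(axis=0), axis=1).mean()
        db = np.linalg.norm(cloud_b - cloud_b.mean(axis=0), axis=1).mean()
        return cloud_b * (da / db)

    def rigid_transform(src, dst):
        """4x4 matrix mapping src onto dst (src, dst: (N, 3) matched points)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = cd - R @ cs
        return T

    def apply_transform(T, cloud):
        homog = np.hstack([cloud, np.ones((len(cloud), 1))])
        return (homog @ T.T)[:, :3]
    ```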
  • Patent number: 10776883
    Abstract: Methods and systems for automating the management and processing of roof damage analysis. In some embodiments image data associated with damaged roofs is collected and automatically analyzed by a computing device. In some embodiments, the image data is modified automatically to include descriptive metadata and visual indicia marking potential areas of damage. In one embodiment, the systems and methods include a remote computing device receiving visual data associated with one or more roofs. In one embodiment, insurance company specific weightings are determined and applied to received information to determine a type and extent of damage to the associated roof. In one embodiment, results of the methods and systems may be used to automatically generate a settlement estimate or supplement additional information in the estimate generation process.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: September 15, 2020
    Assignee: ACCURENCE, INC.
    Inventors: Zachary Labrie, Benjamin Zamora, Timothy Bruffey
  • Patent number: 10769775
    Abstract: Apparatus, system and method for detecting defects in an adhesion area that includes an adhesive mixed with a fluorescent material. One or more illumination devices may illuminate the fluorescent material in the adhesion area with a light of a predetermined wavelength. A camera may be configured to capture an image of the illuminated adhesion area. A processing device, communicatively coupled to the camera, may be configured to process the captured image by applying one or more boundary areas to the captured image and determining an image characteristic within each of the boundary areas, wherein the image characteristic is used by the processing device to determine the presence of a defect in the adhesive, such as an excess of adhesive or an insufficient application of adhesive.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: September 8, 2020
    Assignee: JABIL INC.
    Inventors: Quyen Duc Chu, Nazir Ahmad
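    A simplified check in the spirit of the abstract above: within each boundary area of the captured image, measure how much of the region fluoresces and flag too little (insufficient adhesive) or too much (excess adhesive). The intensity threshold and coverage limits are illustrative assumptions.

    ```python
    import numpy as np

    def inspect_adhesion(image_gray, boundary_areas, glow_thresh=180,
                         min_coverage=0.30, max_coverage=0.90):
        """boundary_areas: list of (x, y, w, h); image_gray: 2-D uint8 array."""
        report = []
        for (x, y, w, h) in boundary_areas:
            region = image_gray[y:y + h, x:x + w]
            coverage = float(np.mean(region >= glow_thresh))  # fraction glowing
            if coverage < min_coverage:
                verdict = "insufficient adhesive"
            elif coverage > max_coverage:
                verdict = "excess adhesive"
            else:
                verdict = "ok"
            report.append({"area": (x, y, w, h), "coverage": coverage,
                           "verdict": verdict})
        return report
    ```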
  • Patent number: 10755375
    Abstract: Disclosed are methods, systems, devices, apparatus, media, and other implementations, including a method that includes obtaining input visual data comprising a sequence of symbols, selected from a symbol set, with each of the symbols associated with a glyph representation. The method also includes obtaining a code message comprising code message symbols, and modifying at least one of the symbols of the input visual data to a different glyph representation associated with a respective at least one of the code message symbols to generate, at a first time instance, a resultant coded visual data.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: August 25, 2020
    Inventors: Changxi Zheng, Chang Xiao, Cheng Zhang
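    A toy version of the idea in the abstract above: embed a bit string in text by swapping selected symbols to visually similar alternate glyphs. The glyph table below (ASCII letters mapped to Cyrillic look-alikes) is an illustrative assumption, not the patented symbol set.

    ```python
    ALT_GLYPH = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}

    def encode_bits(text, bits):
        out, i = [], 0
        for ch in text:
            if ch in ALT_GLYPH and i < len(bits):
                out.append(ALT_GLYPH[ch] if bits[i] == "1" else ch)
                i += 1
            else:
                out.append(ch)
        return "".join(out), i  # coded text and number of bits embedded

    def decode_bits(coded_text):
        swapped = {v: "1" for v in ALT_GLYPH.values()}
        return "".join(swapped.get(ch, "0") for ch in coded_text
                       if ch in ALT_GLYPH or ch in swapped)

    coded, n = encode_bits("a cat ate the cocoa", "1011")
    print(coded, decode_bits(coded)[:n])  # recovered bits: 1011
    ```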
  • Patent number: 10747807
    Abstract: Various embodiments of systems and methods allow a system to identify subsets of items by mixing and matching identified features in one or more other items. A system can identify features of items in an item database. The system can then calculate “fingerprints” of these features which are vectors describing the characteristics of the features. The system can present a collection of items and a user can select an item of the collection. The user can then select positive features to include in a search and/or negative features to include in the search. The system can then do a search of the database for items that contain features similar to those positive features and do not contain features similar to those negative features. The user can select features through a variety of means.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: August 18, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Nikhil Garg, Toma Belenzada, Sabine Sternig, Brad Bowman, Zohar Barzelay
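    A small sketch of ranking items by feature "fingerprints" as in the abstract above: reward items whose feature vectors are close to the user's selected positive features and penalize closeness to the negative ones. The vector layout, scoring rule, and penalty weight are illustrative assumptions.

    ```python
    import numpy as np

    def rank_items(item_fingerprints, positives, negatives, neg_weight=1.0, top_k=10):
        """item_fingerprints: dict item_id -> (num_features, dim) array;
        positives / negatives: lists of (dim,) feature fingerprints."""
        def best_sim(feats, query):
            sims = feats @ query / (np.linalg.norm(feats, axis=1)
                                    * np.linalg.norm(query) + 1e-12)
            return float(sims.max())

        scored = []
        for item_id, feats in item_fingerprints.items():
            score = sum(best_sim(feats, p) for p in positives)
            score -= neg_weight * sum(best_sim(feats, n) for n in negatives)
            scored.append((score, item_id))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [item_id for _, item_id in scored[:top_k]]
    ```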
  • Patent number: 10748029
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using an image processing neural network system that includes a spatial transformer module. One of the methods includes receiving an input feature map derived from the one or more input images, and applying a spatial transformation to the input feature map to generate a transformed feature map, comprising: processing the input feature map to generate spatial transformation parameters for the spatial transformation, and sampling from the input feature map in accordance with the spatial transformation parameters to generate the transformed feature map.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: August 18, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Maxwell Elliot Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
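    A compact spatial transformer module in PyTorch that follows the structure the abstract above describes: a localization network predicts affine transformation parameters from the input feature map, a sampling grid is built from them, and the feature map is sampled to produce the transformed output. Layer sizes are illustrative; this mirrors the widely used STN recipe rather than reproducing the patented system.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformer(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.localization = nn.Sequential(
                nn.Conv2d(channels, 8, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
                nn.Linear(8 * 4 * 4, 32),
                nn.ReLU(inplace=True),
                nn.Linear(32, 6),   # 2x3 affine transformation parameters
            )
            # Start from the identity transform for stable training.
            self.localization[-1].weight.data.zero_()
            self.localization[-1].bias.data.copy_(
                torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, feature_map):
            theta = self.localization(feature_map).view(-1, 2, 3)
            grid = F.affine_grid(theta, feature_map.size(), align_corners=False)
            return F.grid_sample(feature_map, grid, align_corners=False)

    x = torch.randn(2, 16, 32, 32)          # batch of input feature maps
    print(SpatialTransformer(16)(x).shape)  # torch.Size([2, 16, 32, 32])
    ```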