Patents Examined by Kim Y. Vu
  • Patent number: 11087439
    Abstract: The present disclosure provides a hybrid framework-based image bit-depth expansion method and device. The invention fuses a traditional de-banding algorithm and a depth network-based learning algorithm, and can remove unnatural effects in an image flat area whilst more realistically restoring numerical information of missing bits. The method comprises the extraction of image flat areas, local adaptive pixel value adjustment-based flat area bit-depth expansion, and convolutional neural network-based non-flat area bit-depth expansion. The present invention uses a learning-based method to train an effective depth network to solve the problem of realistically restoring missing bits, whilst using a simple and robust local adaptive pixel value adjustment method in a flat area to effectively inhibit unnatural effects in the flat area such as banding, ringing and flat noise, improving the subjective visual quality of the flat area.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: August 10, 2021
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Yang Zhao, Ronggang Wang, Wen Gao, Zhenyu Wang, Wenmin Wang
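    A rough, illustrative sketch of the hybrid idea in 11087439 (not the patented algorithm): split the image into flat and non-flat regions by local variance, smooth the flat regions to suppress banding, and leave the non-flat regions for a trained network. The window sizes, the variance threshold, and the use of scipy's uniform_filter are assumptions of the sketch.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def expand_bit_depth(img8: np.ndarray, target_bits: int = 16,
                           flat_var_thresh: float = 4.0) -> np.ndarray:
          """Expand an 8-bit grayscale image to target_bits (<= 16) per pixel."""
          shift = target_bits - 8
          # Naive zero-padding expansion; both branches below refine this baseline.
          base = img8.astype(np.uint32) << shift

          # 1) Flat-area extraction: low local variance marks a flat region.
          mean = uniform_filter(img8.astype(np.float64), size=7)
          var = uniform_filter(img8.astype(np.float64) ** 2, size=7) - mean ** 2
          flat = var < flat_var_thresh

          # 2) Flat areas: a locally adaptive (here, smoothed) estimate of the missing
          #    low bits suppresses banding instead of leaving hard quantization steps.
          smooth = uniform_filter(base.astype(np.float64), size=15)
          out = base.astype(np.float64)
          out[flat] = smooth[flat]

          # 3) Non-flat areas: the patent uses a trained CNN to predict the missing
          #    bits; a learned model is out of scope here, so zero-padded values stay.
          return np.clip(out, 0, (1 << target_bits) - 1).astype(np.uint16)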
  • Patent number: 11080512
    Abstract: An information processing device includes a categorizing section configured to extract a material component image identified as a material component from plural images obtained by imaging a sample fluid containing a plurality of types of material components and flowing through a flow cell, and to categorize the extracted material component image by predetermined category, a count derivation section configured to derive a count of the material component per standard visual field, or derive a count per unit liquid volume of the material component contained in the sample fluid, for each of the categories based on the number of material component images categorized by the categorizing section, and a generation section configured to generate an all-component image in which the material component images are arranged according to the counts that have been derived by the count derivation section for each of the categories.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: August 3, 2021
    Assignee: ARKRAY, Inc.
    Inventors: Koji Fujimoto, Shinya Nakajima, Kenji Nakanishi
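    A minimal sketch of the count derivation described in 11080512: categorized component images are tallied per category and converted to counts per unit liquid volume using the imaged sample volume. The field names and the single imaged-volume figure are assumptions.

      from collections import Counter

      def counts_per_microliter(categorized_images, imaged_volume_ul):
          """categorized_images: one category label per extracted component image.
          imaged_volume_ul: sample volume (microliters) covered by the captured images."""
          raw = Counter(categorized_images)
          return {category: n / imaged_volume_ul for category, n in raw.items()}

      # Example: 12 red blood cells observed across 0.5 uL of imaged fluid -> 24 RBC/uL.
      counts = counts_per_microliter(["RBC"] * 12 + ["WBC"] * 3, imaged_volume_ul=0.5)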
  • Patent number: 11074456
    Abstract: According to one implementation, a system for automating content annotation includes a computing platform having a hardware processor and a system memory storing an automation training software code. The hardware processor executes the automation training software code to initially train a content annotation engine using labeled content, test the content annotation engine using a first test set of content obtained from a training database, and receive corrections to a first automatically annotated content set resulting from the test. The hardware processor further executes the automation training software code to further train the content annotation engine based on the corrections, determine one or more prioritization criteria for selecting a second test set of content for testing the content annotation engine based on statistics relating to the first automatically annotated content set, and select the second test set of content from the training database based on the prioritization criteria.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: July 27, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Angel Farre Guiu, Matthew Petrillo, Monica Alfaro Vendrell, Marc Junyent Martin, Daniel Fojo, Anthony M. Accardo, Avner Swerdlow, Katharine Navarre
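    A hedged sketch of the prioritization step in 11074456: statistics from the corrections to the first automatically annotated set (per-label error rates here, an assumed choice of statistic) drive selection of the second test set, biasing it toward the labels the engine got wrong most often. The record fields are hypothetical.

      from collections import Counter

      def prioritize_labels(corrected_labels, produced_labels):
          """corrected_labels: labels reviewers corrected in the first test set.
          produced_labels: all labels the annotation engine produced in that test."""
          corrected = Counter(corrected_labels)
          produced = Counter(produced_labels)
          error_rate = {lbl: corrected[lbl] / produced[lbl] for lbl in produced}
          return sorted(error_rate, key=error_rate.get, reverse=True)  # worst first

      def select_second_test_set(training_db, priority_labels, size=100):
          """training_db: iterable of dicts with a 'label' field (hypothetical schema)."""
          rank = {lbl: i for i, lbl in enumerate(priority_labels)}
          ordered = sorted(training_db,
                           key=lambda item: rank.get(item["label"], len(rank)))
          return ordered[:size]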
  • Patent number: 11062121
    Abstract: A method of data processing for an object identification system comprising a neural network. The method comprises, in a secure environment, obtaining first sensed data representative of a physical quantity measured by a sensor. The first sensed data is processed, using the neural network in the secure environment, to identify an object in the first sensed data. The method includes determining that the identified object belongs to a predetermined class of objects. In response to the determining, a first portion of the first sensed data is classified as data to be secured, and a second portion of the first sensed data is classified as data which is not to be secured. Second sensed data, derived from at least the second portion, is output as non-secure data.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: July 13, 2021
    Assignees: Apical Limited, Arm Limited
    Inventors: Daren Croxford, Zhi Feng Lee
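    An illustrative sketch of the data-partitioning step in 11062121: once the secure-environment detector identifies an object of a predetermined class, the pixels covering it are classified as data to be secured and only the remainder leaves as non-secure data. Bounding-box detections, the class names, and redaction by zeroing are assumptions.

      import numpy as np

      PROTECTED_CLASSES = {"face", "license_plate"}   # predetermined classes (assumed)

      def split_secure(frame: np.ndarray, detections):
          """detections: list of (class_name, (y0, y1, x0, x1)) from the secure detector."""
          non_secure = frame.copy()
          secure_regions = []
          for cls, (y0, y1, x0, x1) in detections:
              if cls in PROTECTED_CLASSES:
                  secure_regions.append(frame[y0:y1, x0:x1].copy())  # data to be secured
                  non_secure[y0:y1, x0:x1] = 0                       # removed from output
          return secure_regions, non_secure          # only non_secure is output as such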
  • Patent number: 11058208
    Abstract: Disclosed is a method for selecting a cosmetic product, and a method for image acquisition and image processing for the selection method, including: acquiring an image of a person in a controlled lighting environment, and measuring and recording the colorimetric coordinates of the person's face; processing the image to determine an absolute value of the skin tone; correlating the absolute value with a usage color map established for each of multiple cosmetic products of a database by measuring the color of each cosmetic product under its conditions of use, so as to determine a personalized color matrix; extracting, from the database, the cosmetic product(s) whose color measurement is part of the personalized color matrix; using an information medium, presenting the product(s) included in the personalized color matrix; and optionally selecting the preferred product(s) chosen by the person from those that are part of the personalized color matrix.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: July 13, 2021
    Assignee: CHANEL PARFUMS BEAUTE
    Inventors: Astrid Lassalle, Sandrine Couderc
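    A minimal sketch of the matching step in 11058208: given an absolute skin-tone value in a colorimetric space, build a personalized color window and keep the products whose measured in-use color falls inside it. Using CIELAB coordinates and a fixed tolerance box as the "personalized color matrix" are assumptions of the sketch.

      import numpy as np

      def select_products(skin_lab, products, tol=(6.0, 4.0, 4.0)):
          """skin_lab: (L*, a*, b*) of the face measured under controlled lighting.
          products: dict name -> (L*, a*, b*) measured under conditions of use."""
          skin = np.asarray(skin_lab, float)
          lo, hi = skin - np.asarray(tol), skin + np.asarray(tol)  # personalized matrix
          return [name for name, lab in products.items()
                  if np.all((np.asarray(lab) >= lo) & (np.asarray(lab) <= hi))]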
  • Patent number: 11062166
    Abstract: Systems and methods for property feature detection and extraction using digital images. The image sources could include aerial imagery, satellite imagery, ground-based imagery, imagery taken from unmanned aerial vehicles (UAVs), mobile device imagery, etc. The detected geometric property features could include tree canopy, pools and other bodies of water, concrete flatwork, landscaping classifications (gravel, grass, concrete, asphalt, etc.), trampolines, property structural features (structures, buildings, pergolas, gazebos, terraces, retaining walls, and fences), and sports courts. The system can automatically extract these features from images and can then project them into world coordinates relative to a known surface in world coordinates (e.g., from a digital terrain model).
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: July 13, 2021
    Inventors: Bryce Zachary Porter, Ryan Mark Justus, John Caleb Call
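    A small sketch of the projection step mentioned in 11062166: detected feature pixels are mapped into world coordinates using a georeferenced affine transform plus an elevation lookup from a digital terrain model. The six-parameter GDAL-style geotransform is an assumption about the imagery, not something stated in the abstract.

      import numpy as np

      def pixels_to_world(pixels, geotransform, dtm):
          """pixels: (N, 2) array of (col, row); geotransform: (x0, dx, rx, y0, ry, dy);
          dtm: 2-D array of terrain elevations aligned with the image grid."""
          pixels = np.asarray(pixels, float)
          x0, dx, rx, y0, ry, dy = geotransform
          cols, rows = pixels[:, 0], pixels[:, 1]
          x = x0 + cols * dx + rows * rx
          y = y0 + cols * ry + rows * dy
          z = dtm[rows.astype(int), cols.astype(int)]   # elevation from the DTM
          return np.stack([x, y, z], axis=1)            # world coordinates (x, y, z)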
  • Patent number: 11062443
    Abstract: A similarity determination apparatus comprising a processor configured to: classify each pixel of a first medical image into at least one of a plurality of types of findings; calculate a first feature amount for each classified finding; set a weighting coefficient indicating a degree of weighting which varies depending on a size of each finding for each classified finding; derive an adjusted weighting coefficient by adjusting the weighting coefficient for each of a plurality of finding groups, into which the plurality of types of findings are classified; and derive the similarity between the first medical image and a second medical image by performing a weighting operation for the first feature amount for each finding in the first medical image and a second feature amount for each finding calculated in advance in the second medical image, for each of the finding groups on the basis of the adjusted weighting coefficient.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: July 13, 2021
    Assignee: FUJIFILM Corporation
    Inventor: Shoji Kanada
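    A hedged sketch of the weighted comparison in 11062443: each finding has a feature amount and a size-dependent weight, the weights are adjusted within each finding group, and the similarity is a weighted comparison of the two images' feature amounts. The per-group normalization and the negative L1 distance are assumptions.

      import numpy as np

      def weighted_similarity(feat1, feat2, sizes, groups):
          """feat1, feat2: per-finding feature amounts for the first and second image.
          sizes: per-finding size used for weighting; groups: per-finding group index."""
          feat1, feat2 = np.asarray(feat1, float), np.asarray(feat2, float)
          w = np.asarray(sizes, float)                  # size-dependent weighting
          groups = np.asarray(groups)
          adj = np.empty_like(w)
          for g in np.unique(groups):
              m = groups == g
              adj[m] = w[m] / (w[m].sum() + 1e-12)      # adjusted per finding group
          # Similarity as a negative weighted L1 distance (higher = more similar).
          return -np.sum(adj * np.abs(feat1 - feat2))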
  • Patent number: 11062453
    Abstract: A method for scene parsing includes: performing a convolution operation on a to-be-parsed image by using a deep neural network to obtain a first feature map, the first feature map including features of at least one pixel in the image; performing a pooling operation on the first feature map to obtain at least one second feature map, a size of the second feature map being less than that of the first feature map; and performing scene parsing on the image according to the first feature map and the at least one second feature map to obtain a scene parsing result of the image, the scene parsing result including a category of the at least one pixel in the image. A system for scene parsing and a non-transitory computer-readable storage medium for realizing the method are also provided.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: July 13, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Jianping Shi, Hengshuang Zhao
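    A rough sketch of the pooled-feature idea in 11062453, written as a PyTorch-style module (the patent does not name a framework, so this is an assumption): a backbone convolution produces the first feature map, adaptive pooling produces smaller second feature maps, and the upsampled pooled maps are fused with the first map before per-pixel classification.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class PooledSceneParser(nn.Module):
          def __init__(self, in_ch=3, feat_ch=64, num_classes=21, pool_sizes=(1, 2, 3, 6)):
              super().__init__()
              self.backbone = nn.Sequential(            # produces the first feature map
                  nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
              self.pools = nn.ModuleList(               # smaller second feature maps
                  [nn.AdaptiveAvgPool2d(s) for s in pool_sizes])
              fused = feat_ch * (1 + len(pool_sizes))
              self.classifier = nn.Conv2d(fused, num_classes, 1)  # per-pixel categories

          def forward(self, x):
              f1 = self.backbone(x)
              h, w = f1.shape[-2:]
              feats = [f1]
              for pool in self.pools:
                  f2 = pool(f1)                         # pooled map, smaller than f1
                  feats.append(F.interpolate(f2, size=(h, w), mode="bilinear",
                                             align_corners=False))
              return self.classifier(torch.cat(feats, dim=1))   # scene parsing result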
  • Patent number: 11055922
    Abstract: A virtual reality visual indicator apparatus comprising a virtual reality image capture device comprising a plurality of cameras configured to capture a respective plurality of images of a scene, the respective plurality of images of the scene configured to be connected at stitching regions to provide a virtual reality image of the scene; and a visual indicator provider configured to transmit, into the scene, a visual indicator at a location of at least one stitching region prior to capture of the respective plurality of images of the scene and provide no visual indicator during capture of the respective plurality of images.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: July 6, 2021
    Assignee: NOKIA TECHNOLOGIES OY
    Inventor: Alejandro Sanguinetti
  • Patent number: 11055520
    Abstract: A gesture recognition method includes determining a palm connected domain based on an acquired depth image. The method includes determining a tracking frame corresponding to the palm connected domain. The method includes recognizing a gesture within a region of a to-be-recognized image corresponding to the tracking frame, based on a location of the tracking frame. In this arrangement, a palm connected domain and a tracking frame corresponding to the palm connected domain are acquired, and a gesture is recognized within a region of a to-be-recognized image corresponding to the tracking frame.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: July 6, 2021
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Jibo Zhao
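    An illustrative sketch of the palm-localization step in 11055520: threshold the depth image, take the largest connected domain as the palm, and use its bounding box as the tracking frame within which the gesture is recognized. The depth range and the "largest component is the palm" heuristic are assumptions.

      import cv2
      import numpy as np

      def palm_tracking_frame(depth: np.ndarray, near=300, far=800):
          """depth: uint16 depth image in millimeters."""
          mask = ((depth > near) & (depth < far)).astype(np.uint8)
          n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
          if n <= 1:
              return None                               # no candidate palm region
          areas = stats[1:, cv2.CC_STAT_AREA]           # label 0 is the background
          best = 1 + int(np.argmax(areas))              # palm connected domain
          x = stats[best, cv2.CC_STAT_LEFT]
          y = stats[best, cv2.CC_STAT_TOP]
          w = stats[best, cv2.CC_STAT_WIDTH]
          h = stats[best, cv2.CC_STAT_HEIGHT]
          return (x, y, w, h)   # tracking frame: run gesture recognition in this ROI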
  • Patent number: 11055832
    Abstract: Methods, systems, and apparatuses are provided to perform automatic banding correction in captured images. For example, the methods receive, from a plurality of sensing elements in a sensor array, first image data captured with a first exposure parameter and second image data captured with a second exposure parameter. The methods partition the first image data and the second image data and determine values for each partition. The methods compute banding errors based on the determined values of the partitions for the first image data and the second image data. The methods also determine a banding error correction to one or more of the first image data and the second image data based on the banding errors. Further, the methods perform an automatic correction of the banding errors on one or more of the first image data and the second image data based on the banding error correction.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: July 6, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Yihe Yao, Tao Ma, Yawen Chi, Yan-Jing Lei, Lei Ma
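    An illustrative sketch of the banding-estimation step in 11055832: partition two captures taken with different exposure parameters, compare per-partition statistics, and derive a per-partition correction. The row-stripe partitioning and the gain formula are assumptions, not the patented computation.

      import numpy as np

      def estimate_banding_correction(img_a: np.ndarray, img_b: np.ndarray,
                                      num_partitions: int = 32):
          """img_a, img_b: the same scene captured with exposure parameters A and B."""
          stripes_a = np.array_split(img_a.astype(np.float64), num_partitions, axis=0)
          stripes_b = np.array_split(img_b.astype(np.float64), num_partitions, axis=0)
          mean_a = np.array([s.mean() for s in stripes_a])   # per-partition values
          mean_b = np.array([s.mean() for s in stripes_b])
          # Banding error: deviation of the per-partition ratio from its global trend.
          ratio = mean_a / np.maximum(mean_b, 1e-6)
          error = ratio / ratio.mean()
          gain = 1.0 / np.maximum(error, 1e-6)               # per-partition correction
          return error, gain

      def apply_correction(img_a: np.ndarray, gain: np.ndarray) -> np.ndarray:
          stripes = np.array_split(img_a.astype(np.float64), len(gain), axis=0)
          corrected = [s * g for s, g in zip(stripes, gain)]
          return np.clip(np.vstack(corrected), 0, 255).astype(img_a.dtype)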
  • Patent number: 11055551
    Abstract: Included are a correction history recording unit that records, as correction history information, region information of correction sites in text data converted from an original image; an accuracy calculation unit that calculates the accuracy of optical character recognition for each individual region on the layout of the original image on the basis of the correction history information; and a distribution image generation unit and a distribution image display unit that generate and display a distribution image in which differences in the magnitude of accuracy are shown as differences in display aspect for each individual region. The distribution image, distinguished for each individual region, reflects the tendency of the character recognition rate in a given region of the layout to decrease due to various causes, including the format of the original document, the state of the OCR device, and the like.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: July 6, 2021
    Assignee: WINGARC1ST INC.
    Inventors: Yutaka Nagoya, Ko Shimazawa
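    A minimal sketch of the per-region accuracy calculation in 11055551: correction-history records are aggregated per layout region and an accuracy figure is computed for each region, which a display unit could then render with a different display aspect per region. The record fields are assumptions.

      from collections import defaultdict

      def region_accuracy(correction_history, char_counts):
          """correction_history: (region_id, corrected_char_count) records.
          char_counts: dict region_id -> total recognized characters in that region."""
          corrected = defaultdict(int)
          for region_id, n in correction_history:
              corrected[region_id] += n
          return {rid: 1.0 - corrected[rid] / max(total, 1)
                  for rid, total in char_counts.items()}

      # Example: the "header" region needed 2 corrections out of 50 characters -> 0.96.
      acc = region_accuracy([("header", 2), ("table", 10)],
                            {"header": 50, "table": 40, "footer": 30})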
  • Patent number: 11048943
    Abstract: A method includes generating, with a mobile device having a camera, images of a portion of a real space containing an object having a position. A control device tracks the position of the object based at least on the images received from the mobile device. The control device monitors the position of the object for an event conforming to a rule specified at the control device. The event is based on the rule and the position of the object in the real space. The control device, or a client device, generates an indication that the event has been detected by the control device.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: June 29, 2021
    Assignee: Infinity Cube Ltd.
    Inventors: Kin Yu Ernest Chen, Geok Hong Victor Tan, Lik Fai Alex Cheung
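    A hedged sketch of the rule check in 11048943: the control device tracks an object's position from the camera images and raises an event when the position satisfies a configured rule. The "object enters a zone" rule and the data shapes are illustrative assumptions.

      from dataclasses import dataclass

      @dataclass
      class ZoneRule:
          x_min: float
          x_max: float
          y_min: float
          y_max: float

          def matches(self, x: float, y: float) -> bool:
              return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

      def detect_events(track, rule):
          """track: iterable of (timestamp, x, y) positions in real-space coordinates."""
          for t, x, y in track:
              if rule.matches(x, y):
                  yield {"time": t, "event": "object_in_zone", "position": (x, y)}

      events = list(detect_events([(0.0, 1.0, 1.0), (0.5, 4.2, 3.1)],
                                  ZoneRule(4.0, 6.0, 3.0, 5.0)))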
  • Patent number: 11042807
    Abstract: Systems and methods are disclosed for receiving a target image corresponding to a target specimen, the target specimen comprising a tissue sample of a patient, applying a machine learning model, which may also be known as a machine learning system, to the target image to determine at least one characteristic of the target specimen and/or at least one characteristic of the target image, the machine learning model having been generated by processing a plurality of training images to predict at least one characteristic, the training images comprising images of human tissue and/or images that are algorithmically generated, and outputting the at least one characteristic of the target specimen and/or the at least one characteristic of the target image.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: June 22, 2021
    Assignee: PAIGE.AI, Inc.
    Inventors: Supriya Kapur, Christopher Kanan, Thomas Fuchs, Leo Grady
  • Patent number: 11017268
    Abstract: A method, system and computer-usable medium are disclosed for machine learning to identify service request records associated with an account that is likely to escalate. Certain aspects of the disclosure include generating a random forest model using a training set of service request records to determine a probability of escalation for service requests of the training set; applying the random forest model to a current set of service request records to determine an escalation probability for service requests in the current set; and assigning service request records in the current set to a plurality of escalation probability bins, wherein the service request records of the current set are generally equally divided between the plurality of escalation probability bins, and wherein the service request records of the current set are assigned to a probability bin based on the escalation probability of the service request record.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: May 25, 2021
    Assignee: Dell Products L.P.
    Inventors: Varsha Kansal, Rajkumar Dan
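    A sketch of the escalation-scoring flow in 11017268, using scikit-learn and pandas as stand-ins (the patent specifies a random forest model but no library); the equal-sized quantile bins below are one way to divide the current records roughly evenly among probability bins.

      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier

      def train_and_bin(train_X, train_y, current_X, n_bins=10):
          """train_X/train_y: features and escalation labels for the training set of
          service request records; current_X: features for the current set."""
          model = RandomForestClassifier(n_estimators=200, random_state=0)
          model.fit(train_X, train_y)
          prob = model.predict_proba(current_X)[:, 1]     # escalation probability
          # Quantile bins split the current records roughly equally across bins.
          bins = pd.qcut(prob, q=n_bins, labels=False, duplicates="drop")
          return pd.DataFrame({"escalation_probability": prob, "probability_bin": bins})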
  • Patent number: 11016158
    Abstract: The disclosure relates to the automatic determination of correction factor values for producing MR images using a magnetic resonance system. A plurality of MR images is produced, wherein each MR image is produced using parameters with parameter values and using correction factors with correction factor values. In order to produce the MR images, MR data of the same examination object is acquired under the same external boundary conditions. The MR images are evaluated automatically with respect to artifacts in order to determine the MR image with the fewest artifacts among the MR images. The correction factor values are determined as those values which have been used to produce the MR image with the fewest artifacts. The parameters determine a sequence with which the MR data is acquired for producing the MR images. The correction factors reduce influences that affect the acquisition of the MR data.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: May 25, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Dominik Paul, Mario Zeller
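    A sketch of the selection loop in 11016158: acquire the same MR data with several candidate correction-factor values, score each reconstructed image for artifacts, and keep the values that produced the least-artifact image. The total-variation score is only an illustrative stand-in for the patent's automatic artifact evaluation.

      import numpy as np

      def artifact_score(img: np.ndarray) -> float:
          """Crude proxy: higher total variation is treated here as more artifact-laden."""
          return float(np.abs(np.diff(img, axis=0)).sum()
                       + np.abs(np.diff(img, axis=1)).sum())

      def best_correction_factors(images_by_factors):
          """images_by_factors: dict mapping a correction-factor tuple -> MR image."""
          return min(images_by_factors,
                     key=lambda f: artifact_score(images_by_factors[f]))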
  • Patent number: 11004208
    Abstract: Techniques are disclosed for deep neural network (DNN) based interactive image matting. A methodology implementing the techniques according to an embodiment includes generating, by the DNN, an alpha matte associated with an image, based on user-specified foreground region locations in the image. The method further includes applying a first DNN subnetwork to the image, the first subnetwork trained to generate a binary mask based on the user input, the binary mask designating pixels of the image as background or foreground. The method further includes applying a second DNN subnetwork to the generated binary mask, the second subnetwork trained to generate a trimap based on the user input, the trimap designating pixels of the image as background, foreground, or uncertain status. The method further includes applying a third DNN subnetwork to the generated trimap, the third subnetwork trained to generate the alpha matte based on the user input.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: May 11, 2021
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Scott Cohen, Marco Forte, Ning Xu
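    A rough sketch of the three-stage pipeline in 11004208, assuming PyTorch-style subnetworks; the tiny convolution stacks below merely stand in for the trained mask, trimap, and alpha subnetworks described in the abstract.

      import torch
      import torch.nn as nn

      def tiny_net(in_ch, out_ch):
          return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
                               nn.Conv2d(16, out_ch, 3, padding=1))

      class InteractiveMatting(nn.Module):
          def __init__(self):
              super().__init__()
              self.mask_net = tiny_net(3 + 1, 1)    # image + user clicks -> binary mask
              self.trimap_net = tiny_net(1 + 1, 3)  # mask + clicks -> fg / bg / uncertain
              self.alpha_net = tiny_net(3 + 3, 1)   # image + trimap -> alpha matte

          def forward(self, image, user_clicks):
              mask = torch.sigmoid(self.mask_net(torch.cat([image, user_clicks], 1)))
              trimap = torch.softmax(self.trimap_net(torch.cat([mask, user_clicks], 1)), 1)
              alpha = torch.sigmoid(self.alpha_net(torch.cat([image, trimap], 1)))
              return mask, trimap, alpha

      # image: (N, 3, H, W) tensor; user_clicks: (N, 1, H, W) map of foreground clicks.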
  • Patent number: 11004214
    Abstract: Anonymization processing for protecting privacy and personal information can be appropriately performed based on a detection state of a moving object within an imaging range. Processing corresponding to one of a first mode for anonymizing an area of a human body based on a fixed background image and a second mode for anonymizing the area of the human body based on a basic background image is performed on an area of a detected moving object based on a detection result of a moving object detection unit.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: May 11, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kan Ito
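    A hedged sketch of the two modes in 11004214: the detected moving-object area is anonymized against either a fixed background image or a continuously maintained basic background image. Mask-based compositing and the mode strings are assumptions.

      import numpy as np

      def anonymize(frame, moving_mask, fixed_bg, basic_bg, mode="fixed"):
          """moving_mask: boolean array marking the detected human-body / moving area."""
          background = fixed_bg if mode == "fixed" else basic_bg
          out = frame.copy()
          out[moving_mask] = background[moving_mask]   # replace the area to anonymize
          return out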
  • Patent number: 10997714
    Abstract: An apparatus for inspecting components may include a processor for: determining exterior information of a first component mounted on a first printed circuit board; inspecting a mounting state of the first component by using inspection information of a second component having a first similarity value with respect to the exterior information, when the first similarity value is higher than or equal to a preset reference similarity value; and updating the inspection information of a plurality of components by using the inspection information of the first component matched with the inspection information of the second component, when it is determined that the mounting state of the first component is good.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: May 4, 2021
    Assignee: KOH YOUNG TECHNOLOGY INC.
    Inventors: Seung Bum Han, Filip Lukasz Piekniewski, Dae Sung Koo, Woo Young Lim, Jin Man Kang, Ki Won Park
  • Patent number: 10997427
    Abstract: A method is performed at a computing system having one or more processors and memory. The method includes receiving a first video clip having three or more image frames and computing a first hash pattern, including: (i) computing a temporal sequence of differential frames and (ii) for each differential frame: identifying a respective plurality of feature points and computing a respective hash value that represents spatial positioning of the respective feature points with respect to each other. The method includes receiving a second video clip having three or more image frames and computing a second hash pattern by applying steps (i) and (ii) to the three or more image frames. The method includes computing a distance between the first hash pattern and the second hash pattern and determining that the first and second video clips match when the computed distance is less than a threshold distance.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: May 4, 2021
    Assignee: Zorroa Corporation
    Inventor: David DeBry
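    A hedged sketch of the clip-matching scheme in 10997427: compute differential frames, find feature points on each, hash their spatial layout, and compare the two hash sequences. The grid-occupancy hash and the Hamming distance are illustrative choices, not the patented hash function.

      import cv2
      import numpy as np

      def hash_pattern(frames, grid=8):
          """frames: list of three or more grayscale frames (H x W uint8 arrays)."""
          hashes = []
          for prev, cur in zip(frames, frames[1:]):
              diff = cv2.absdiff(cur, prev)                    # differential frame
              pts = cv2.goodFeaturesToTrack(diff, maxCorners=32,
                                            qualityLevel=0.01, minDistance=5)
              bits = 0
              if pts is not None:
                  h, w = diff.shape
                  for x, y in pts.reshape(-1, 2):              # spatial layout -> bits
                      cell = int(y * grid / h) * grid + int(x * grid / w)
                      bits |= 1 << cell
              hashes.append(bits)
          return hashes

      def pattern_distance(h1, h2):
          """Mean Hamming distance between two equal-length hash sequences."""
          return np.mean([bin(a ^ b).count("1") for a, b in zip(h1, h2)])

      # Two clips are declared a match when pattern_distance(...) is below a threshold.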