Patents Examined by Aaron W Carter
  • Patent number: 11531867
    Abstract: Example user behavior prediction methods and apparatus are described. One example method includes obtaining a first contribution value of each piece of characteristic data for a specified behavior after obtaining behavior prediction information that includes a plurality of pieces of characteristic data. Every N pieces of characteristic data in the plurality may be processed by a corresponding characteristic interaction model to obtain a second contribution value of those N pieces of characteristic data for the specified behavior. Finally, a probability that a user executes the specified behavior may be determined based on the obtained first and second contribution values, to predict the user behavior. In the example method, the interaction impact of the plurality of pieces of characteristic data on the specified behavior is considered during behavior prediction.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: December 20, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Ruiming Tang, Minzhe Niu, Yanru Qu, Weinan Zhang, Yong Yu
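    Sketch: a minimal illustration of the contribution-combining idea above, not the patented implementation; the random features, the embedding dot-product standing in for the characteristic interaction model (N = 2), and the logistic link are all assumptions.
      import numpy as np

      # Hypothetical feature data: one first contribution value per feature,
      # plus a learned embedding used to model pairwise (N = 2) interactions.
      rng = np.random.default_rng(0)
      num_features, embed_dim = 8, 4
      first_order = rng.normal(size=num_features)            # first contribution values
      embeddings = rng.normal(size=(num_features, embed_dim))

      # Second contribution: sum of pairwise interaction scores (embedding dot products).
      pair_scores = [embeddings[i] @ embeddings[j]
                     for i in range(num_features) for j in range(i + 1, num_features)]

      # Execution probability from both contribution types (logistic link is an assumption).
      logit = first_order.sum() + sum(pair_scores)
      probability = 1.0 / (1.0 + np.exp(-logit))
      print(f"predicted execution probability: {probability:.3f}")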
  • Patent number: 11514298
    Abstract: High-framerate real-time spatiotemporal disparity mechanisms on neuromorphic hardware are provided. In various embodiments, a first and a second spiking input sensor each output a time series of spikes corresponding to a plurality of frames. A neurosynaptic network is configured to receive the time series of spikes corresponding to the plurality of frames; accumulate the time series of spikes in a ring buffer, thereby creating a plurality of temporal scales; for each corresponding pair of frames from the first and second spiking input sensors, determine a mapping of pixels in one frame of the pair to pixels in the other frame based on similarity; and, based on the pixel mapping, determine a disparity map.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: November 29, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alexander Andreopoulos, Hirak Jyoti Kashyap, Myron D. Flickner
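    Sketch: a toy, frame-based version of the left/right matching-by-similarity step; the spiking sensors, ring buffer, and temporal scales are omitted, and the patch size, SAD cost, and synthetic shift are assumptions.
      import numpy as np

      def disparity_map(left, right, max_disp=16, patch=3):
          # Toy block matching: for each pixel, find the horizontal shift whose
          # small patch in the other frame is most similar (lowest SAD cost).
          h, w = left.shape
          pad = patch // 2
          disp = np.zeros((h, w), dtype=np.int32)
          lp, rp = np.pad(left, pad), np.pad(right, pad)
          for y in range(h):
              for x in range(w):
                  block = lp[y:y + patch, x:x + patch]
                  costs = [np.abs(block - rp[y:y + patch, x - d:x - d + patch]).sum()
                           for d in range(min(max_disp, x) + 1)]
                  disp[y, x] = int(np.argmin(costs))
          return disp

      left = np.random.rand(32, 32)
      right = np.roll(left, -4, axis=1)          # synthetic 4-pixel horizontal shift
      print("mean disparity:", disparity_map(left, right).mean())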
  • Patent number: 11507823
    Abstract: A method of adaptive quantization for a convolutional neural network includes at least one of: receiving an acceptable model accuracy; determining a float-value multiply-accumulate for the layer based on a float-value weight and a float-value input; quantizing the float-value weight at multiple weight quantization precisions; quantizing the float-value input at multiple input quantization precisions; determining a multiply-accumulate at multiple multiply-accumulate quantization precisions based on the weight quantization precisions and the input quantization precisions; determining multiple quantization errors based on differences between the float-value multiply-accumulate and the multiple multiply-accumulate quantization precisions; and selecting one of the multiple weight quantization precisions, one of the multiple input quantization precisions, and one of the multiple multiply-accumulate quantization precisions based on the acceptable model accuracy and the multiple quantization errors.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: November 22, 2022
    Assignee: Black Sesame Technologies Inc.
    Inventors: Zuoguan Wang, Tian Zhou, Qun Gu
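    Sketch: an assumed search over weight and input quantization precisions that compares each quantized multiply-accumulate with the float value and selects a precision pair within an error budget; the uniform quantizer, bit widths, and budget are illustrative, not the patented procedure.
      import numpy as np

      def quantize(x, bits):
          # Uniform symmetric quantization to the given bit width.
          scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
          return np.round(x / scale) * scale

      rng = np.random.default_rng(1)
      weights, inputs = rng.normal(size=256), rng.normal(size=256)
      mac_float = float(weights @ inputs)              # float-value multiply-accumulate
      budget = 0.05 * (abs(mac_float) + 1.0)           # stand-in for the acceptable accuracy

      # Evaluate quantization error at several precisions; select the cheapest pair
      # within the budget, falling back to the lowest-error pair otherwise.
      candidates = []
      for w_bits in (4, 6, 8):
          for i_bits in (4, 6, 8):
              mac_q = float(quantize(weights, w_bits) @ quantize(inputs, i_bits))
              candidates.append((w_bits + i_bits, w_bits, i_bits, abs(mac_float - mac_q)))
      within = [c for c in candidates if c[3] <= budget]
      chosen = min(within) if within else min(candidates, key=lambda c: c[3])
      print("weight bits, input bits, error:", chosen[1], chosen[2], round(chosen[3], 4))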
  • Patent number: 11501515
    Abstract: High-accuracy character recognition has not been achieved for documents in which the space between lines is narrow, in which line contact occurs at a plurality of positions, or in which the ratio of lines with line contact is high. Noise is removed from the divided line images obtained by dividing a text image into line units, and the removed noise is added to a neighboring divided text line image, thereby restoring a character image that had been split across a plurality of lines. This enables high-accuracy character recognition.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: November 15, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yoshihito Nanaumi
  • Patent number: 11494894
    Abstract: A lens matching apparatus and a lens matching method are provided. In the method, the modulation transfer function (MTF) values corresponding to multiple focus lengths of each lens are obtained, the maximum MTF value among the focus lengths of each lens is determined, and the lenses are classified according to the maximum MTF value. Each MTF value is determined based on at least one first pixel having maximum light intensity and at least one second pixel having minimum light intensity. Accordingly, lenses with the same clearness may be classified into the same group, so as to improve the quality and speed of image stitching.
    Type: Grant
    Filed: August 23, 2020
    Date of Patent: November 8, 2022
    Assignee: Acer Incorporated
    Inventor: Chen-Ju Cheng
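    Sketch: a hypothetical grouping of lenses by their maximum MTF across focus lengths, with MTF approximated as Michelson contrast from the brightest and darkest pixels; the synthetic data, contrast formula, and bin width are assumptions.
      import numpy as np

      def mtf_from_image(img):
          # Contrast-style MTF estimate from the maximum- and minimum-intensity pixels
          # of a captured test image (Michelson contrast; an assumption here).
          i_max, i_min = float(img.max()), float(img.min())
          return (i_max - i_min) / (i_max + i_min + 1e-9)

      rng = np.random.default_rng(2)
      # Hypothetical data: 5 lenses, each imaged at 4 focus lengths (8x8 intensity patches).
      captures = rng.uniform(0.2, 1.0, size=(5, 4, 8, 8))

      groups = {}
      for lens_id, per_focus in enumerate(captures):
          best_mtf = max(mtf_from_image(img) for img in per_focus)   # max over focus lengths
          bin_id = int(best_mtf * 10)                                # coarse clearness bin
          groups.setdefault(bin_id, []).append(lens_id)
      print("lenses grouped by peak-MTF bin:", groups)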
  • Patent number: 11494892
    Abstract: An inspection system is provided for detecting defects on surfaces. The system uses a pattern with varying color or darkness which faces the surface. A light illuminates the pattern on the surface so that the pattern and any defects on the surface are reflected and captured for image analysis. The processor then separates the pattern from the image in order to identify the locations of any defects on the surface.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: November 8, 2022
    Assignee: ABB Schweiz AG
    Inventor: Nevroz Sen
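    Sketch: an assumed version of separating the known pattern from the captured image and thresholding the residual to locate defects; the gradient pattern, noise level, and threshold are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      # Hypothetical setup: the known reference pattern (a smooth gradient) is
      # reflected off the surface; defects perturb the reflected intensity locally.
      pattern = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
      captured = pattern + rng.normal(scale=0.01, size=pattern.shape)
      captured[30:33, 40:43] += 0.4                    # injected defect

      # Separate the pattern from the captured image and threshold the residual.
      residual = np.abs(captured - pattern)
      defect_mask = residual > 0.1
      ys, xs = np.nonzero(defect_mask)
      print("defect pixels found near:", (int(ys.mean()), int(xs.mean())))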
  • Patent number: 11488416
    Abstract: A system, a method, and a program are provided that easily verify that a certificate with a photograph belongs to a user. The system acquires a first image containing the user's certificate with a photograph, judges the validity of the first image, acquires a second image containing the user together with the certificate corresponding to the valid first image, judges whether the user's face and the photograph on the certificate in the second image match, and certifies that the certificate belongs to the user if the face and the photograph match.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: November 1, 2022
    Inventors: Mitsunobu Hirose, Reo Yonaga
  • Patent number: 11461567
    Abstract: Systems and methods for identification (ID) document verification using hybrid near-field communications (NFC) authentication and optical authentication are provided. An exemplary method includes receiving, by a client device, an image of an ID document. Based on the image of the ID document, a determination is made whether the ID document includes a near-field communications (NFC) chip that stores data comprising identifying information for an owner of the identification. Based on this determination of whether the ID document includes an NFC chip, the ID document is verified by selectively using at least one of NFC chip authentication and optical authentication, to obtain a verification result.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: October 4, 2022
    Assignee: Mitek Systems, Inc.
    Inventors: Ashok Singal, Michael Ramsbacker, James Treitler, Sanjay Gupta, Jason L. Gray, Michael Hagen
  • Patent number: 11461996
    Abstract: Provided in the present disclosure are a method, an apparatus, and a system for determining feature data of image data, and a storage medium. The method comprises: acquiring features of image data, the features comprising a first feature and a second feature, wherein the first feature is extracted from the image data using a first model trained in a machine learning manner, and the second feature is extracted from the image data using a second model constructed based on a pre-configured data processing algorithm; and determining feature data based on the first feature and the second feature. The present disclosure addresses the technical problem that features recognized by AI may not be consistent with human-recognized features.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: October 4, 2022
    Assignee: OMRON Corporation
    Inventor: Masashi Kurita
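    Sketch: an assumed combination of a learned (first-model) feature with a hand-crafted (second-model) feature into one feature-data vector; the random projection standing in for the trained model and the histogram are illustrative choices.
      import numpy as np

      def learned_feature(img):
          # Stand-in for the first model: in practice this would be the output of a
          # trained network; here a fixed random projection of the image is assumed.
          rng = np.random.default_rng(4)
          projection = rng.normal(size=(img.size, 16))
          return img.reshape(-1) @ projection

      def handcrafted_feature(img, bins=8):
          # Second model: a pre-configured data-processing algorithm, here a simple
          # intensity histogram that humans can readily interpret.
          hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
          return hist

      img = np.random.default_rng(5).uniform(size=(32, 32))
      feature_data = np.concatenate([learned_feature(img), handcrafted_feature(img)])
      print("combined feature vector length:", feature_data.shape[0])   # 16 + 8 = 24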
  • Patent number: 11452443
    Abstract: The disclosure herein provides methods, systems, and devices for improving optical coherence tomography machine outputs through multiple enface optical coherence tomography angiography averaging techniques. The embodiments disclosed herein can be utilized in ophthalmology for employing optical coherence tomography (OCT) for in vivo visualization of blood vessels and the flow of blood in an eye of a patient, which is also known generally as optical coherence tomography angiography (OCTA).
    Type: Grant
    Filed: January 11, 2021
    Date of Patent: September 27, 2022
    Assignee: Doheny Eye Institute
    Inventors: SriniVas R. Sadda, Akihito Uji
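    Sketch: an assumed multiple en face averaging step showing why averaging co-registered OCTA frames suppresses noise; the synthetic vessel map, noise level, and frame count are illustrative.
      import numpy as np

      rng = np.random.default_rng(6)
      # Hypothetical stack of co-registered en face OCTA frames of the same retinal
      # region: a fixed vessel signal plus independent speckle-like noise per frame.
      vessels = (rng.uniform(size=(64, 64)) > 0.9).astype(float)
      frames = [vessels + rng.normal(scale=0.5, size=vessels.shape) for _ in range(9)]

      # Averaging the registered frames suppresses noise roughly by sqrt(number of frames).
      averaged = np.mean(frames, axis=0)
      noise_single = np.std(frames[0] - vessels)
      noise_avg = np.std(averaged - vessels)
      print(f"noise std: single {noise_single:.3f} vs averaged {noise_avg:.3f}")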
  • Patent number: 11455492
    Abstract: A computer-implemented method for generating synthetic training data to train a convolutional neural network is described. The method includes receiving a source image depicting an object for identification. The type and shape of the depicted object are determined. The source image is overlaid with an N×M grid of vertices, the grid including horizontal and vertical edges and being fit to the shape of the depicted object. For each vertex in the grid, perturbations are added to the (x, y) coordinates of the vertex and the pixel values in the range between the original and final (x, y) coordinates are interpolated, resulting in an item of synthetic training data. The method is repeated to generate multiple items of synthetic training data, which are then used to train a neural network to identify the object in an image.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: September 27, 2022
    Assignee: BuyAladdin.com, Inc.
    Inventors: Theodore Aaron Einstein, Tereza Manukian, Boris Mocialov, Jin Hwan Park
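    Sketch: an assumed grid-perturbation warp in the spirit of the abstract above, applied to the whole image rather than a grid fitted to the object shape; the grid size, shift range, and SciPy-based interpolation are illustrative.
      import numpy as np
      from scipy.ndimage import map_coordinates, zoom

      def perturb_with_grid(image, grid_shape=(5, 5), max_shift=3.0, seed=0):
          # Overlay a coarse N x M grid of vertices, jitter each vertex's (x, y)
          # coordinates, and interpolate pixel values under the resulting warp.
          rng = np.random.default_rng(seed)
          h, w = image.shape
          coarse = rng.uniform(-max_shift, max_shift, size=(2, *grid_shape))
          dy = zoom(coarse[0], (h / grid_shape[0], w / grid_shape[1]), order=1)
          dx = zoom(coarse[1], (h / grid_shape[0], w / grid_shape[1]), order=1)
          ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
          return map_coordinates(image, [ys + dy, xs + dx], order=1, mode="nearest")

      source = np.zeros((50, 50))
      source[15:35, 15:35] = 1.0                    # a square "object" to be deformed
      synthetic = [perturb_with_grid(source, seed=s) for s in range(4)]
      print("generated", len(synthetic), "synthetic variants, shape", synthetic[0].shape)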
  • Patent number: 11436758
    Abstract: A method according to an embodiment of the present invention includes acquiring an image of a urine test kit equipped with a biochemical sample rod including a plurality of pad cells including a plurality of sub-pad cells, extracting at least one potential RGB value from the image by means of a potential color extractor, and converting and analyzing the at least one potential RGB value using a plurality of color spaces included in a color space conversion engine and a color space analysis engine by means of an analyzing unit. The plurality of color spaces are randomly generated, and a color space having a smallest distance value from the potential RGB value is determined as an optimal color space.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: September 6, 2022
    Assignee: Fitpet Co., Ltd.
    Inventors: Jung Uk Ko, Yeon Chul Rim, Kyung Hwa Chae
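    Sketch: an assumed selection of an optimal color space, modeling the randomly generated color spaces as random 3×3 linear transforms and picking the one giving the smallest distance between the measured pad color and a reference; all values are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      measured_rgb = np.array([0.82, 0.55, 0.30])          # potential RGB value from a pad cell
      reference_rgbs = np.array([[0.85, 0.60, 0.25],       # known reference colors for the pad
                                 [0.40, 0.70, 0.35],
                                 [0.20, 0.25, 0.80]])

      # Randomly generated candidate "color spaces", modeled here as 3x3 linear transforms.
      spaces = [rng.uniform(0.0, 1.0, size=(3, 3)) for _ in range(10)]

      best = None
      for idx, transform in enumerate(spaces):
          converted = measured_rgb @ transform
          distances = np.linalg.norm(reference_rgbs @ transform - converted, axis=1)
          d = float(distances.min())
          if best is None or d < best[0]:
              best = (d, idx, int(distances.argmin()))
      print("optimal space, matched reference, distance:", best[1], best[2], round(best[0], 4))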
  • Patent number: 11430106
    Abstract: An object of the invention is to quantitatively evaluate the crystal growth amount, over a wide range from an undergrowth state to an overgrowth state, with nondestructive inspection. A plurality of image feature values, such as pattern brightness, pattern area, and pattern shape, are extracted from an SEM image, and depending on whether the brightness inside a pattern is lower than the brightness outside the pattern (401), undergrowth or overgrowth is determined (402, 405). Based on the brightness difference or the pattern area, a growth amount index or a normality index of crystal growth in a concave pattern, such as a hole pattern or a trench pattern, is calculated (404, 407).
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: August 30, 2022
    Assignee: HITACHI HIGH-TECH CORPORATION
    Inventors: Takeyoshi Ohashi, Atsuko Shintani, Masami Ikota, Kazuhisa Hasumi
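    Sketch: an assumed undergrowth/overgrowth decision from the brightness inside versus outside a concave pattern, with the brightness difference used as a growth amount index; the synthetic SEM image and mask are illustrative.
      import numpy as np

      rng = np.random.default_rng(8)
      # Hypothetical SEM image of a hole pattern: background brightness 0.6, hole
      # interior darker (undergrowth) or brighter (overgrowth) than the outside.
      image = np.full((64, 64), 0.6) + rng.normal(scale=0.02, size=(64, 64))
      yy, xx = np.ogrid[:64, :64]
      inside = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2    # mask of the concave pattern
      image[inside] -= 0.25                                 # simulate undergrowth

      mean_in, mean_out = image[inside].mean(), image[~inside].mean()
      state = "undergrowth" if mean_in < mean_out else "overgrowth"
      growth_index = abs(mean_in - mean_out)                # brightness-difference index
      print(f"{state}, growth index {growth_index:.3f}")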
  • Patent number: 11430247
    Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving a two-dimensional continuous surface representation of a three-dimensional object, the continuous surface comprising a plurality of landmark locations; determining a first set of soft membership functions based on a relative location of points in the two-dimensional continuous surface representation and the landmark locations; receiving a two-dimensional input image, the input image comprising an image of the object; extracting a plurality of features from the input image using a feature recognition model; generating an encoded feature representation of the extracted features using the first set of soft membership functions; generating a dense feature representation of the extracted features from the encoded representation using a second set of soft membership functions; and processing the second set of soft membership functions and dense feature representation using a neural image decoder model to
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: August 30, 2022
    Assignee: Snap Inc.
    Inventors: Iason Kokkinos, Georgios Papandreou, Riza Alp Guler
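    Sketch: an assumed soft membership computation in which points on a 2-D surface representation are softly assigned to landmark locations (softmax over negative distances) and used to pool extracted features; the temperature, sizes, and pooling are illustrative, and the neural encoder/decoder models are omitted.
      import numpy as np

      def soft_memberships(points, landmarks, temperature=0.1):
          # Soft membership of each 2-D point with respect to each landmark:
          # closer landmarks get higher weight (softmax over negative distances).
          d = np.linalg.norm(points[:, None, :] - landmarks[None, :, :], axis=-1)
          logits = -d / temperature
          logits -= logits.max(axis=1, keepdims=True)
          w = np.exp(logits)
          return w / w.sum(axis=1, keepdims=True)

      rng = np.random.default_rng(9)
      landmarks = rng.uniform(size=(5, 2))          # landmark locations on the surface map
      points = rng.uniform(size=(100, 2))           # sampled surface points
      features = rng.normal(size=(100, 8))          # features extracted at those points

      memberships = soft_memberships(points, landmarks)        # shape (100, 5)
      encoded = memberships.T @ features                       # per-landmark encoded features
      print("encoded feature representation shape:", encoded.shape)   # (5, 8)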
  • Patent number: 11423501
    Abstract: Techniques for training and evaluating machine-learning models for providing student guidance are described herein. In some embodiments, a network service generates a set of clusters that group a plurality of students by similarity. The network service trains a machine-learning model based on variances in outcomes and actions leading to the outcomes for students that belong to a same cluster in the set of clusters. The network service evaluates the machine-learning model for a student that has been mapped to the same cluster to identify at least one action that the student has not performed that is predictive of an optimal outcome for other students that belong to the same cluster. Responsive to evaluating the machine-learning model, the network service presents, through an interface, a recommendation that the student perform the at least one action.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: August 23, 2022
    Assignee: Oracle International Corporation
    Inventors: John Edward Refila, Christopher Stanwood Thompson, Desiree Carvalho Dreszer, Nyeri Leah Osibin, Michael Rene Lauria, Alexander Winfield Thompson
  • Patent number: 11423578
    Abstract: There is provided an encoding device, encoding method, decoding device, and decoding method that make it possible to improve the coding efficiency. The encoding device and the decoding device each perform classification of classifying a pixel of interest of a decoding in-progress image into any of a plurality of classes by using an inclination feature amount, and perform a filter arithmetic operation with the decoding in-progress image by using a tap coefficient of a class of the pixel of interest among tap coefficients of the respective classes. The inclination feature amount indicates a tangent direction of a contour line of pixel values of the pixel of interest. The decoding in-progress image is obtained by adding a residual of predictive coding and a predicted image together. The tap coefficients of the respective classes are each obtained through learning for minimizing an error by using the decoding in-progress image and an original image.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: August 23, 2022
    Assignee: SONY CORPORATION
    Inventor: Kenji Kondo
  • Patent number: 11423643
    Abstract: A training image to be used in training a learning network is generated. The method of generating the training image includes steps of: (a) a labeling device, in response to acquiring an original image, (i) inputting the original image into an image recognition network to detect privacy-related regions from the original image, (ii) adding dummy regions, different from the detected privacy-related regions, onto the original image, and (iii) setting the privacy-related regions and the dummy regions as obfuscation-expected regions which represent regions to be obfuscated in the original image; (b) the labeling device generating an obfuscated image by obfuscating the obfuscation-expected regions; and (c) the labeling device labeling the obfuscated image to be corresponding to a task of the learning network to be trained, to thereby generate the training image to be used in training the learning network.
    Type: Grant
    Filed: April 14, 2022
    Date of Patent: August 23, 2022
    Assignee: DEEPING SOURCE INC.
    Inventor: Jong Hu Jeong
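    Sketch: an assumed generation of an obfuscated training image by flattening both detected privacy-related regions and added dummy regions; the region coordinates, mean-fill obfuscation, and label format are illustrative.
      import numpy as np

      rng = np.random.default_rng(10)
      original = rng.uniform(size=(64, 64, 3))

      # Regions as (y, x, height, width): detected privacy-related regions plus
      # dummy regions, all of which become obfuscation-expected regions.
      privacy_regions = [(10, 10, 12, 20)]          # e.g. a detected face or plate
      dummy_regions = [(40, 30, 10, 10)]            # decoy region unrelated to privacy
      obfuscation_expected = privacy_regions + dummy_regions

      obfuscated = original.copy()
      for y, x, h, w in obfuscation_expected:
          patch = obfuscated[y:y + h, x:x + w]
          obfuscated[y:y + h, x:x + w] = patch.mean(axis=(0, 1))   # flatten region content

      label = {"task": "detection", "boxes": obfuscation_expected}  # label for the learning network
      print("obfuscated training image ready, regions:", label["boxes"])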
  • Patent number: 11423577
    Abstract: A method comprises obtaining a plurality of 2-dimensional gray scale images of a portion of a printed circuit board assembly. Each 2-dimensional gray scale image corresponds to one of a plurality of parallel planes intersecting the portion of the printed circuit board assembly at respective different locations. The method further comprises converting the plurality of 2-dimensional gray scale images into a color image. Each of the plurality of 2-dimensional gray scale images corresponds to and is used as input for a respective color channel of the color image. The method further comprises analyzing the color image to detect variation in color that indicates a defect; and outputting an alert indicating the defect in response to detecting the variation in color.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: August 23, 2022
    Assignee: International Business Machines Corporation
    Inventors: Matthew S. Kelly, Sebastien Gilbert, Oswaldo Chacon
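    Sketch: an assumed mapping of three gray scale plane images onto the channels of one color image, where channel disagreement (color variation) flags a defect and raises an alert; the plane values, injected defect, and threshold are illustrative.
      import numpy as np

      rng = np.random.default_rng(11)
      # Three 2-D gray scale images, one per parallel plane through the board section.
      planes = [np.full((48, 48), 0.5) + rng.normal(scale=0.01, size=(48, 48))
                for _ in range(3)]
      planes[1][20:24, 20:24] += 0.3        # a defect visible only in the middle plane

      # Use each plane as one channel of a color image, then look for color variation:
      # where the channels disagree, the composite is tinted rather than gray.
      color = np.stack(planes, axis=-1)
      variation = color.max(axis=-1) - color.min(axis=-1)
      if (variation > 0.1).any():
          ys, xs = np.nonzero(variation > 0.1)
          print(f"alert: possible defect near ({int(ys.mean())}, {int(xs.mean())})")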
  • Patent number: 11416979
    Abstract: A defect displaying method is provided in the disclosure. The method comprises acquiring defect group information from an image of a wafer, wherein the defect group information includes a set of correlations between a plurality of defects identified from the image and one or more corresponding assigned defect types, and displaying at least some of the plurality of defects according to their corresponding assigned defect types.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: August 16, 2022
    Assignee: ASML Netherlands B.V.
    Inventors: Wei Fang, Cho Huak Teh, Ju Hao Chien, Yi-Ying Wang, Shih-Tsung Chen, Jian-Min Liao, Chuan Li, Zhaohui Guo, Pang-Hsuan Huang, Shao-Wei Lai, Shih-Tsung Hsu
  • Patent number: 11416718
    Abstract: The present invention belongs to the technical field of computers and discloses an item identification method and device based on vision and gravity sensing. The method comprises: identifying a collected item image and acquiring a plurality of visual identification results corresponding to N pick-up and put-back behaviors, wherein each visual identification result corresponds to one pick-up or put-back behavior; acquiring a weight identification result corresponding to each of M weight changes of the items supported on a support; judging whether the total weight change value of the M weight changes is consistent with the total weight of the items picked up and put back in the N pick-up and put-back behaviors; and, if not, refining each visual identification result according to the M weight identification results to obtain the complete set of identified items corresponding to the N pick-up and put-back behaviors.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: August 16, 2022
    Assignee: Yi Tunnel (Beijing) Technology Co., Ltd.
    Inventor: Yili Wu
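    Sketch: an assumed reconciliation of visual identification results with the gravity-sensed weight change, attributing any residual weight to a missed item; the catalogue, events, and tolerance are illustrative.
      import numpy as np

      # Hypothetical catalogue of item weights (grams) on the shelf.
      catalogue = {"cola": 350.0, "chips": 70.0, "candy": 45.0}

      # N visual identification results (one per pick-up/put-back; +1 taken, -1 returned).
      visual_events = [("cola", +1), ("chips", +1)]
      # Total weight change measured by the gravity sensor over the M weight changes.
      measured_change = -(350.0 + 70.0 + 45.0)      # shelf got lighter by three items

      expected_change = -sum(sign * catalogue[name] for name, sign in visual_events)
      if not np.isclose(expected_change, measured_change, atol=10.0):
          # Vision missed something: attribute the residual weight to the best-matching item.
          residual = measured_change - expected_change
          missing = min(catalogue, key=lambda n: abs(abs(residual) - catalogue[n]))
          visual_events.append((missing, +1 if residual < 0 else -1))
      print("reconciled events:", visual_events)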