Patents Examined by Sean T Motsinger
-
Patent number: 11216914
Abstract: A video blind denoising method based on deep learning, a computer device and a computer-readable storage medium. The method includes: taking a video sequence from a video to be denoised, taking the middle frame in the video sequence as a noisy reference frame, performing an optical flow estimation on the image corresponding to the noisy reference frame and each other frame in the video sequence, to obtain optical flow fields; transforming, according to the optical flow fields, the image corresponding to each other frame in the video sequence to the noisy reference frame for registration respectively, to obtain multi-frame noisy registration images; taking the multi-frame noisy registration images as an input of a convolutional neural network, taking the noisy reference frame as the reference image, performing iterative training and denoising by using the noise2noise training principle, to obtain the denoised image. This solution may achieve the blind denoising of a video.
Type: Grant
Filed: April 28, 2020
Date of Patent: January 4, 2022
Assignees: Tsinghua Shenzhen International Graduate School, Tsinghua University
Inventors: Xiang Xie, Shaofeng Zou, Guolin Li, Songping Mai, Zhihua Wang
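The register-then-fuse core of the method can be sketched without the learned components. In this simplified illustration a known global shift stands in for the estimated optical flow fields, and plain averaging of the registered frames stands in for the noise2noise-trained CNN; neither stand-in is part of the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.uniform(0.0, 1.0, size=(64, 64))        # clean scene
noisy_ref = base + rng.normal(0, 0.1, base.shape)  # noisy reference frame

# neighbouring frame: same scene shifted 3 px right, with independent noise
shifted = np.roll(base, 3, axis=1)
noisy_other = shifted + rng.normal(0, 0.1, base.shape)

# the "optical flow" here is a known global 3-px shift; warping the other
# frame back to the reference frame's coordinates is the registration step
registered = np.roll(noisy_other, -3, axis=1)

# fusing independently-noisy, registered frames reduces the noise
fused = 0.5 * (noisy_ref + registered)
err_single = np.abs(noisy_ref - base).mean()
err_fused = np.abs(fused - base).mean()
print(err_single > err_fused)  # True
```

The averaging makes the noise2noise intuition concrete: registered frames share signal but not noise, so combining them pushes the result toward the clean image.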
-
Patent number: 11216702
Abstract: In some embodiments, a computer-implemented method is disclosed.
Type: Grant
Filed: July 14, 2020
Date of Patent: January 4, 2022
Assignee: THE CLIMATE CORPORATION
Inventors: Yichuan Gui, Wei Guan
-
Patent number: 11210506
Abstract: An information processing apparatus includes first and second display controllers and a receiver. The first display controller performs control to display a first character recognition result. The first character recognition result is a character recognition result of a first element. The first element and a second element have a specific dependency relationship and are included in a form. The receiver receives a checking/correcting result of checking and/or correcting the first character recognition result. The second display controller performs control to display information that the checking/correcting result and a second character recognition result, which is a character recognition result of the second element, do not satisfy the dependency relationship if the checking/correcting result and the second character recognition result do not satisfy the dependency relationship.
Type: Grant
Filed: October 22, 2019
Date of Patent: December 28, 2021
Assignee: FUJIFILM Business Innovation Corp.
Inventor: Chihiro Kawabe
-
Patent number: 11212543
Abstract: A method for restoring a compressed image according to an embodiment of the present disclosure includes: receiving monochrome image data and low resolution color image data generated from an original color image of the monochrome image data; decoding the monochrome image data and generating a low resolution monochrome image; decoding the low resolution color image data and generating a low resolution color image; processing the low resolution monochrome image and generating a high resolution monochrome image in accordance with a super resolution imaging neural network; and generating a high resolution color image based on the low resolution color image and the high resolution monochrome image in accordance with a colorization imaging neural network. The imaging neural network of the present disclosure may be a deep neural network generated by machine learning, and images may be input and output in an Internet of Things environment using a 5G network.
Type: Grant
Filed: January 9, 2020
Date of Patent: December 28, 2021
Assignee: LG ELECTRONICS
Inventors: Keum Sung Hwang, Seung Hwan Moon, Young Kwon Kim, Hyun Dae Choi
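A minimal sketch of the fusion pipeline: nearest-neighbour upsampling stands in for both the super resolution and colorization networks, and the `restore` helper and its luma/chroma arithmetic are illustrative choices, not the patented models.

```python
import numpy as np

def restore(mono_lowres, color_lowres, scale=2):
    """Fuse an upscaled luma image with chroma borrowed from the color image.

    Nearest-neighbour repeat stands in for the two neural networks.
    """
    up = lambda img: img.repeat(scale, axis=0).repeat(scale, axis=1)
    y_high = up(mono_lowres)        # stand-in for the super resolution network
    color_high = up(color_lowres)   # stand-in for the colorization network
    # keep the sharp luma, take per-pixel color offsets from the color image
    chroma = color_high - color_high.mean(axis=2, keepdims=True)
    return y_high[..., None] + chroma

mono = np.random.default_rng(2).uniform(size=(8, 8))       # low-res monochrome
color = np.random.default_rng(3).uniform(size=(8, 8, 3))   # low-res color
out = restore(mono, color)
print(out.shape)  # (16, 16, 3)
```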
-
Patent number: 11205084
Abstract: The present subject matter is related in general to the field of image processing, disclosing a method and system for evaluating image quality for Optical Character Recognition (OCR). An image evaluation system receives an image comprising optical character data. The image evaluation system determines an image parameter value for each of one or more image parameters of the image. The image parameter value for each of the one or more image parameters is determined for a plurality of binary image segments identified in the image. The image evaluation system determines a suitability value and an impact value of the image, based on the image parameter value for each of the image parameters determined for the image. The image evaluation system determines a quality score for the image, based on the suitability value and the impact value. The image is transmitted for OCR processing upon determining the quality score to be above an overall pre-defined threshold value.
Type: Grant
Filed: March 30, 2020
Date of Patent: December 21, 2021
Assignee: Wipro Limited
Inventors: Prashanth Krishnapura Subbaraya, Raghavendra Hosabettu
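The abstract does not name the image parameters or how suitability and impact are combined, so the sketch below is hypothetical throughout: the `sharpness` and `noise` parameters, the weights, and the threshold are all stand-ins showing only the shape of the scoring step.

```python
import numpy as np

def quality_score(segment_params, w_suit=0.7, w_impact=0.3):
    """Reduce per-segment parameter values to one quality score.

    Parameter names and the weighted combination are illustrative
    assumptions, not the patented formula.
    """
    suitability = np.mean([p["sharpness"] for p in segment_params])
    impact = 1.0 - np.mean([p["noise"] for p in segment_params])
    return w_suit * suitability + w_impact * impact

# hypothetical parameter values for two binary image segments
segments = [{"sharpness": 0.9, "noise": 0.1},
            {"sharpness": 0.8, "noise": 0.2}]
score = quality_score(segments)
THRESHOLD = 0.5
print(score > THRESHOLD)  # forward the image for OCR only when above threshold
```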
-
Patent number: 11195265
Abstract: Provided are a server and method for recognizing an image to determine whether an inspection target is faulty. The method includes generating a new image by transforming N images acquired from an inspection target into one or more channel spaces, and extracting a feature value by learning N images acquired by separating the new image according to channels.
Type: Grant
Filed: March 15, 2019
Date of Patent: December 7, 2021
Assignee: LG CNS Co., Ltd.
Inventor: Kyung Yul Kim
-
Patent number: 11195302
Abstract: A rear-facing camera captures a live-action video image while a front-facing camera captures an image of a distributor. An avatar controller controls an avatar based on the image of the distributor captured by the front-facing camera. A synthesizer arranges the avatar in a predetermined position of a real space coordinate system and synthesizes the avatar with the live-action video image. The face of the distributor captured by the front-facing camera is tracked and reflected on the avatar.
Type: Grant
Filed: December 25, 2018
Date of Patent: December 7, 2021
Assignee: DWANGO Co., Ltd.
Inventors: Nobuo Kawakami, Shinnosuke Iwaki, Takashi Kojima, Toshihiro Shimizu, Hiroaki Saito
-
Patent number: 11188778
Abstract: The technology disclosed attenuates spatial crosstalk from sequencing images for base calling. In particular, the technology disclosed accesses an image whose pixels depict intensity emissions from a target cluster and intensity emissions from additional adjacent clusters. The pixels include a center pixel that contains a center of the target cluster. Each pixel is divisible into a plurality of subpixels. Depending upon which subpixel, among the plurality of subpixels of the center pixel, contains the center of the target cluster, the technology disclosed selects, from a bank of subpixel lookup tables, the subpixel lookup table that corresponds to that subpixel. The selected subpixel lookup table contains pixel coefficients that are configured to maximize a signal-to-noise ratio. The technology disclosed element-wise multiplies the pixel coefficients with the pixels and determines a weighted sum.
Type: Grant
Filed: May 4, 2021
Date of Patent: November 30, 2021
Assignee: Illumina, Inc.
Inventors: Eric Jon Ojard, Rami Mehio, Gavin Derek Parnaby, Nitin Udpa, John S. Vieceli
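The lookup-and-weighted-sum step can be sketched as follows. The coefficient bank here is random for illustration; in the patented method its coefficients are designed to maximize signal-to-noise ratio, and the 3x3 subpixel grid and 5x5 kernel sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 3  # assume each pixel is divided into a 3x3 grid of subpixels
# hypothetical bank of per-subpixel coefficient kernels (5x5 pixels each)
bank = rng.normal(size=(n_sub, n_sub, 5, 5))

def extract_intensity(image, cy, cx):
    """Weighted sum around a cluster centre at subpixel position (cy, cx)."""
    py, px = int(cy), int(cx)                    # pixel containing the centre
    sy = min(int((cy - py) * n_sub), n_sub - 1)  # which subpixel row
    sx = min(int((cx - px) * n_sub), n_sub - 1)  # which subpixel column
    coeffs = bank[sy, sx]                        # lookup table for that subpixel
    patch = image[py - 2:py + 3, px - 2:px + 3]  # 5x5 pixel neighbourhood
    return float((coeffs * patch).sum())         # element-wise multiply + sum

image = rng.uniform(size=(20, 20))
val = extract_intensity(image, 10.7, 8.2)
```

The bank indexing is the key idea: the same pixel neighbourhood gets different weights depending on where, within the center pixel, the cluster center actually falls.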
-
Patent number: 11176410
Abstract: A text extraction computing method comprises calculating an estimated character pixel height of text from a digital image. The method may scale the digital image using the estimated character pixel height and a preferred character pixel height. The method may binarize the digital image. The method may remove distortions using a neural network trained by a cycle GAN on a set of source text images and a set of clean text images. The set of source text images and clean text images are unpaired. The source text images may be distorted images of text. Calculating the estimated character pixel height may include summarizing the rows of pixels into a horizontal projection, determining a line-repetition period from the projection, and quantifying the portion of the line-repetition period that corresponds to the text as the estimated character pixel height. The method may extract characters from the digital image using OCR.
Type: Grant
Filed: October 27, 2019
Date of Patent: November 16, 2021
Assignee: John Snow Labs Inc.
Inventors: Jose Alberto Pablo Andreotti, David Talby
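The projection-based height estimate can be sketched directly. The abstract does not say how the line-repetition period is found; autocorrelation of the horizontal projection is one plausible choice and is used here as an assumption.

```python
import numpy as np

def estimate_char_height(binary_img):
    """Estimate character pixel height from a binarized text image.

    Sums ink pixels per row (horizontal projection), finds the
    line-repetition period via autocorrelation (an assumed method),
    and quantifies the portion of the period covered by text rows.
    """
    proj = binary_img.sum(axis=1).astype(float)  # horizontal projection
    proj -= proj.mean()
    ac = np.correlate(proj, proj, mode="full")[len(proj) - 1:]
    # skip lag 0; search up to half the image height for the repeat period
    half = len(proj) // 2
    period = int(np.arange(1, half)[np.argmax(ac[1:half])])
    text_fraction = (binary_img.sum(axis=1) > 0).mean()
    return period, int(round(text_fraction * period))

# synthetic page: a text line every 20 rows, glyph band 12 rows tall
img = np.zeros((200, 100), dtype=np.uint8)
for top in range(0, 200, 20):
    img[top:top + 12, 10:90] = 1
period, height = estimate_char_height(img)
print(period, height)  # 20 12
```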
-
Patent number: 11164295
Abstract: Methods and apparatus for enhancing optical images and parametric databases are disclosed. In an exemplary embodiment, a method includes identifying a parametric database having one or more dimensions of varying parameter values, and enhancing the parametric database utilizing a pyramid data structure that includes a plurality of pyramid levels as an input to and output from the frequency blender. Each of the plurality of levels of the pyramid data structure includes an instance of the parametric database having a unique parameter sampling resolution, a frequency isolation representation at that resolution, and a frequency blended representation of the parametric database at that resolution. The method also includes utilizing a frequency blender and auto throttle to perform the blending of a plurality of levels, and returning the enhanced parametric database.
Type: Grant
Filed: August 23, 2019
Date of Patent: November 2, 2021
Inventor: Michael Edwin Stewart
-
Patent number: 11158092
Abstract: A computer is caused to realize: a line drawing data acquisition function to acquire line drawing data to be colored; a size-reducing process function to perform a size-reducing process on the line drawing data acquired to a predetermined reduced size so as to obtain size-reduced line drawing data; a first coloring process function to perform a coloring process on the size-reduced line drawing data based on a first learned model that has previously learned the coloring process on the size-reduced line drawing data by using sample data; and a second coloring process function to perform a coloring process on original line drawing data by receiving an input of the original line drawing data and colored, size-reduced line drawing data as the size-reduced line drawing data on which the first coloring process function has performed the coloring, based on a second learned model that has previously learned the coloring process on the sample data by receiving an input of the sample data and colored, size-reduced sampl
Type: Grant
Filed: May 1, 2017
Date of Patent: October 26, 2021
Assignee: PREFERRED NETWORKS, INC.
Inventor: Taizan Yonetsuji
-
Patent number: 11151743
Abstract: A method of detecting an end of an aisle of shelf modules in an imaging controller of a mobile automation apparatus, includes: obtaining image data captured by an image sensor and a plurality of depth measurements captured by a depth sensor, the image data and the depth measurements corresponding to an area containing a portion of the aisle of shelf modules; obtaining locomotive data of the apparatus; generating a dynamic trust region based on the locomotive data; detecting an edge segment based on the image data and the plurality of depth measurements, the edge segment representing an edge of a support surface; and when the edge segment is located at least partially in the dynamic trust region, updating an estimated end of the aisle based on the detected edge segment.
Type: Grant
Filed: June 3, 2019
Date of Patent: October 19, 2021
Assignee: Zebra Technologies Corporation
Inventors: Tze Fung Christopher Chan, Feng Cao, Mehdi Mazaheri Tehrani, Mahyar Vajedi
-
Patent number: 11151690
Abstract: The disclosure provides an image super-resolution reconstruction method, a mobile terminal, and a computer-readable storage medium. The method includes: obtaining N continuous first YUV images; extracting N luma images and N chroma images from the N first YUV images; performing sequentially sharpness estimation, image registration, and image reconstruction based on a neural network on the N luma images; performing image reconstruction on the N chroma images; and fusing the chroma image obtained after reconstruction and the luma image obtained after reconstruction to obtain a target YUV image that has a higher resolution than the N first YUV images.
Type: Grant
Filed: July 31, 2020
Date of Patent: October 19, 2021
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Yuhu Jia
-
Patent number: 11151369
Abstract: Systems and methods are provided for processing an image of a financial payment document captured using a mobile device and classifying the type of payment document in order to extract the content therein. These methods may be implemented on a mobile device or a central server, and can be used to identify content on the payment document and determine whether the payment document is ready to be processed by a business or financial institution. The system can identify the type of payment document by identifying features on the payment document and performing a series of steps to determine probabilities that the payment document belongs to a specific document type. The identification steps are arranged starting with the fastest step in order to attempt to quickly determine the payment document type without requiring lengthy, extensive analysis.
Type: Grant
Filed: March 12, 2020
Date of Patent: October 19, 2021
Assignee: MITEK SYSTEMS, INC.
Inventors: Grigori Nepomniachtchi, Vitali Kliatskine, Nikolay Kotovich
-
Patent number: 11151403
Abstract: The present disclosure provides a method and apparatus for segmenting a sky area, and a convolutional neural network. The method includes: acquiring, by the image input layer, an original image; extracting, by the first convolutional neural network, a plurality of sky feature images with different scales from the original image; processing, by the plurality of cascaded second convolutional neural networks, the plurality of sky feature images to output a target feature image; up-sampling, by the up-sampling layer, the target feature image to obtain an up-sampled feature image; determining, by the sky area determining layer, a pixel area of which a gray value is greater than or equal to a preset gray value in the up-sampled feature image as a sky area.
Type: Grant
Filed: June 10, 2019
Date of Patent: October 19, 2021
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Guannan Chen
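Only the final determining step lends itself to a short sketch: given the network's up-sampled feature image, thresholding against the preset gray value yields the sky mask. The feature values and the 180 threshold below are made up for illustration.

```python
import numpy as np

# stand-in for the up-sampled feature image produced by the network
feature_map = np.array([[200, 210, 40],
                        [190,  90, 30],
                        [ 50,  60, 20]], dtype=np.uint8)

PRESET_GRAY = 180  # hypothetical preset gray value
sky_mask = feature_map >= PRESET_GRAY  # pixels at/above the preset are sky
print(sky_mask.sum())  # 3
```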
-
Patent number: 11138414
Abstract: A system for processing digital images comprising: at least one remote hardware processor; and at least one device, comprising at least one processing circuitry configured for: receiving from at least one image sensor, electrically coupled to the processing circuitry, at least one digital image captured by the at least one image sensor; partitioning at least one object, identified in the at least one digital image, into a plurality of object segments; replacing in the at least one digital image each of the plurality of object segments with a schematic segment, illustrating respective object segment, to produce at least one schematic image; and sending the at least one schematic image to the remote hardware processor; wherein the remote hardware processor is adapted to: receiving the at least one schematic image from the at least one device; analyzing the at least one schematic image to identify at least one behavioral pattern.
Type: Grant
Filed: August 25, 2019
Date of Patent: October 5, 2021
Assignee: NEC Corporation Of America
Inventors: Tsvi Lev, Yaacov Hoch
-
Patent number: 11132813
Abstract: A distance to an object is estimated with a monocular camera that estimates a distance from a moving object to feature points on an image from an imaging device mounted on the moving object. The distance estimator sets one or more feature points on the image acquired from the imaging device at a first timing and detects the feature point on the image acquired from the imaging device at a second timing. The distance estimator also determines the movement amount of the feature point on the image between the first timing and the second timing and determines the movement amount of the moving object between the first and second timings. The distance estimator then estimates the distance from the moving object to the feature point based on the movement amount on the image and the movement amount of the moving object between the first and second timings.
Type: Grant
Filed: February 22, 2018
Date of Patent: September 28, 2021
Assignee: HITACHI, LTD.
Inventors: Alex Masuo Kaneko, Kenjiro Yamamoto
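The geometry behind the two movement amounts can be illustrated with a pinhole-camera sketch under a strong simplifying assumption: the camera translates purely sideways, so a feature at depth Z metres moves f*d/Z pixels on the image for a camera movement of d metres. The patented estimator is more general; this only shows why the two movement amounts together determine distance.

```python
def depth_from_motion(f_px, cam_move_m, feat_move_px):
    """Pinhole model, lateral translation only: Z = f * d / pixel disparity."""
    return f_px * cam_move_m / feat_move_px

f = 500.0        # focal length in pixels (assumed)
Z_true = 10.0    # true distance to the feature, metres
d = 0.2          # camera moved 20 cm between the two timings
disparity = f * d / Z_true  # pixel movement such a feature would show
print(depth_from_motion(f, d, disparity))  # 10.0
```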
-
Patent number: 11126833
Abstract: An artificial intelligence apparatus for recognizing a user includes a camera and a processor configured to: receive, via the camera, image data including a recognition target object; generate recognition information corresponding to the recognition target object from the received image data; calculate a confidence level of the generated recognition information; determine whether the calculated confidence level is greater than a reference value; if the calculated confidence level is greater than the reference value, perform a control corresponding to the generated recognition information; and if the calculated confidence level is not greater than the reference value, provide feedback for the object recognition.
Type: Grant
Filed: August 21, 2019
Date of Patent: September 21, 2021
Assignee: LG ELECTRONICS INC.
Inventors: Jaehong Kim, Taeho Lee, Hyejeong Jeon, Jongwoo Han
-
Patent number: 11128935
Abstract: Methods and systems for processing telemetry data that contains multiple data types are disclosed. Optimum multimodal encoding approaches can achieve data-specific compression performance for heterogeneous datasets by distinguishing data types and their characteristics in real time and applying the most effective compression method to a given data type. Using an optimum encoding diagram for heterogeneous data, a data classification algorithm classifies input data blocks into predefined categories, such as Unicode, telemetry, RCS, and IR for telemetry datasets, plus a class of unknown which includes non-studied data types, and then assigns them to corresponding compression models.
Type: Grant
Filed: June 28, 2019
Date of Patent: September 21, 2021
Assignee: BTS Software Solutions, LLC
Inventor: Dunling Li
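The classify-then-dispatch structure can be sketched as below. The UTF-8 decodability test as the classifier and zlib as both codecs are stand-ins; the patented system uses its own classification algorithm and category-specific compression models.

```python
import zlib

def classify_block(block: bytes) -> str:
    """Assign a data block to a predefined category (toy classifier)."""
    try:
        block.decode("utf-8")
        return "unicode"   # decodable text
    except UnicodeDecodeError:
        return "unknown"   # non-studied data type

# one compression model per category; zlib levels stand in for real codecs
CODECS = {
    "unicode": lambda b: zlib.compress(b, 9),
    "unknown": lambda b: zlib.compress(b, 1),
}

block = b"timestamp=12:00 temp=21.5 " * 40
category = classify_block(block)
compressed = CODECS[category](block)
print(category, len(compressed) < len(block))
```

Routing each block through the codec matched to its category is what lets a heterogeneous stream beat any single one-size-fits-all compressor.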
-
Patent number: 11120276
Abstract: A deep multimodal cross-layer intersecting fusion method, a terminal device and a storage medium are provided. The method includes: acquiring an RGB image and point cloud data containing lane lines, and pre-processing the RGB image and point cloud data; and inputting the pre-processed RGB image and point cloud data into a pre-constructed and trained semantic segmentation model, and outputting an image segmentation result. The semantic segmentation model is configured to implement cross-layer intersecting fusion of the RGB image and point cloud data. In the new method, a feature of a current layer of a current modality is fused with features of all subsequent layers of another modality, such that not only can similar or proximate features be fused, but also dissimilar or non-proximate features can be fused, thereby achieving full and comprehensive fusion of features. All fusion connections are controlled by a learnable parameter.
Type: Grant
Filed: May 20, 2021
Date of Patent: September 14, 2021
Assignee: TSINGHUA UNIVERSITY
Inventors: Xinyu Zhang, Zhiwei Li, Huaping Liu, Xingang Wu