Patents Examined by Tsung-Yin Tsai
  • Patent number: 11966842
    Abstract: Systems and methods to train a cell object detector are described.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: April 23, 2024
    Assignee: ICAHN SCHOOL OF MEDICINE AT MOUNT SINAI
    Inventors: Jack Zeineh, Marcel Prastawa, Gerardo Fernandez
  • Patent number: 11966890
    Abstract: The disclosure provides a bill identification method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining an image of a bill to be identified; using a pre-trained area identification model to identify the final payment area of the bill in the image; and using a pre-trained character identification model to identify the final payment amount in that area. By applying this solution, the payment amount on a bill can be identified automatically, improving the efficiency of bill processing.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: April 23, 2024
    Assignee: Hangzhou Glority Software Limited
    Inventors: Qingsong Xu, Qing Li
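
A minimal Python sketch of the two-stage pipeline described in the abstract above. Both the area identification and character identification models are stand-in placeholder functions; the bottom-corner heuristic and the returned amount are assumptions for illustration only.

```python
# Hypothetical two-stage bill-amount reader: a region model locates the final
# payment area, then a character model reads the amount inside it.
import numpy as np

def detect_payment_area(image: np.ndarray) -> tuple[int, int, int, int]:
    """Placeholder for the pre-trained area identification model.
    Returns a bounding box (x, y, width, height)."""
    h, w = image.shape[:2]
    return (w // 2, h - h // 4, w // 2, h // 4)   # assume bottom-right corner

def read_amount(crop: np.ndarray) -> str:
    """Placeholder for the pre-trained character identification model."""
    return "123.45"

def identify_bill_amount(image: np.ndarray) -> str:
    x, y, w, h = detect_payment_area(image)        # stage 1: locate the area
    crop = image[y:y + h, x:x + w]                 # stage 2: crop and read it
    return read_amount(crop)

if __name__ == "__main__":
    bill = np.zeros((600, 400, 3), dtype=np.uint8)  # stand-in bill image
    print(identify_bill_amount(bill))
```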
  • Patent number: 11967051
    Abstract: Provided are an image fusion method and apparatus, and a portable terminal. The method comprises: obtaining several aligned images; calculating gradient information of each image; setting a mask image for each image and generating a target gradient image; performing a gradient operation on the target gradient image to obtain a target Laplacian image; and performing a deconvolution transform on the target Laplacian image to generate a fused panoramic image. The technical solution generates a Laplacian image from the gradient information of several images and then performs a deconvolution transform to generate a fused panoramic image, thereby eliminating color differences at stitching seams and achieving a better image fusion effect.
    Type: Grant
    Filed: May 9, 2020
    Date of Patent: April 23, 2024
    Assignee: ARASHI VISION INC.
    Inventors: Liang Xie, Chaoyi Xie
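
A rough numpy sketch of the gradient-domain fusion pipeline in the abstract above: per-image gradients are combined through mask images into a target gradient image, its divergence gives a target Laplacian image, and an FFT-based inversion of the Laplacian (the "deconvolution transform") recovers the fused image. The periodic boundary handling and the simple binary masks are assumptions.

```python
import numpy as np

def grad(img):
    # Forward differences with periodic wrap-around.
    return np.roll(img, -1, axis=1) - img, np.roll(img, -1, axis=0) - img

def div(gx, gy):
    # Backward-difference divergence (adjoint of grad); grad followed by div
    # is the standard periodic 5-point Laplacian.
    return (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

def fuse(images, masks):
    # Target gradient image: at each pixel, take the gradients of the image
    # selected by its mask.
    gx = sum(grad(im)[0] * m for im, m in zip(images, masks))
    gy = sum(grad(im)[1] * m for im, m in zip(images, masks))
    lap = div(gx, gy)                           # target Laplacian image
    # "Deconvolution": invert the Laplacian in the Fourier domain.
    h, w = lap.shape
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(w) / w) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(h) / h) - 2.0
    denom = wx[None, :] + wy[:, None]
    denom[0, 0] = 1.0                           # the DC term is undetermined
    fused = np.real(np.fft.ifft2(np.fft.fft2(lap) / denom))
    return fused - fused.min()                  # fix the arbitrary offset

if __name__ == "__main__":
    a = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # two toy "aligned images"
    b = a + 0.2                                       # same content, offset exposure
    mask = np.zeros_like(a)
    mask[:, :32] = 1.0                                # left half from a, right from b
    print(fuse([a, b], [mask, 1.0 - mask]).shape)
```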
  • Patent number: 11960586
    Abstract: A face recognition system including: a reading unit that reads identification information from a medium carried by an authentication subject; an image capturing unit that acquires an image; a face detection unit that detects, as a detected face image, a face image from the image acquired; a face matching unit that, when a registered face image associated with the identification information read by the reading unit is present, matches the detected face image against the registered face image and matches, against the registered face image, the detected face image captured before the reading unit reads the identification information; and a registration unit that, when the registered face image associated with the identification information read is not present, registers, as the registered face image, the detected face image before the reading unit reads the identification information.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: April 16, 2024
    Assignee: NEC CORPORATION
    Inventors: Noriaki Hayase, Hiroshi Tezuka
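
A hedged sketch of the registration and matching flow in the abstract above: face detections are buffered before the identification medium is read; when the ID is read, the buffered face is either matched against the registered face image or, if none exists, registered as one. The toy embedding and the similarity threshold are placeholders, not the patent's actual method.

```python
import numpy as np

gallery: dict[str, np.ndarray] = {}          # id -> registered face embedding
recent_faces: list[np.ndarray] = []          # detections captured before the ID read

def embed(face_img: np.ndarray) -> np.ndarray:
    """Placeholder face embedding used by the matching unit."""
    v = face_img.astype(np.float32).ravel()[:128]
    return v / (np.linalg.norm(v) + 1e-9)

def on_face_detected(face_img: np.ndarray) -> None:
    recent_faces.append(embed(face_img))     # face detection unit

def on_id_read(identification: str, threshold: float = 0.8) -> bool:
    if not recent_faces:
        return False
    detected = recent_faces[-1]              # face captured before the ID read
    registered = gallery.get(identification)
    if registered is None:
        gallery[identification] = detected   # registration unit
        return True
    return float(detected @ registered) >= threshold   # face matching unit

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    face = rng.integers(0, 255, size=(32, 32))
    on_face_detected(face)
    print(on_id_read("user-001"))            # first visit: registers the face
    on_face_detected(face)
    print(on_id_read("user-001"))            # second visit: matches against it
```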
  • Patent number: 11961282
    Abstract: A system for detecting synthetic videos may include a server, a plurality of weak classifiers, and a strong classifier. The server may be configured to receive a prediction result from each of a plurality of weak classifiers; and send the prediction results from each of the plurality of weak classifiers to a strong classifier. The weak classifiers may be trained on real videos and known synthetic videos to analyze a distinct characteristic of a video file; detect irregularities of the distinct characteristic; generate a prediction result associated with the distinct characteristic, the prediction result being a prediction on whether the video file is synthetic; and output the prediction result to the server. The strong classifier may be trained to receive the prediction results of the plurality of weak classifiers from the server; analyze the prediction results; and determine if the video file is synthetic based on the prediction results.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: April 16, 2024
    Assignee: ZeroFOX, Inc.
    Inventors: Michael Morgan Price, Matthew Alan Price
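
A small sketch of the weak/strong classifier arrangement described above: each weak classifier scores one distinct characteristic of a video, and a strong classifier is trained on the vector of weak predictions. The placeholder feature extractor and the logistic-regression strong classifier are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def weak_scores(video_features: np.ndarray) -> np.ndarray:
    """Placeholder weak classifiers: one score per distinct characteristic
    (e.g. blink rate, compression artifacts, audio/visual sync)."""
    return 1.0 / (1.0 + np.exp(-video_features))   # squash each score to [0, 1]

# Toy training set: rows are videos, columns are per-characteristic features.
X_real = rng.normal(-1.0, 1.0, size=(50, 3))
X_fake = rng.normal(+1.0, 1.0, size=(50, 3))
X = np.vstack([weak_scores(X_real), weak_scores(X_fake)])
y = np.array([0] * 50 + [1] * 50)                  # 1 = synthetic video

strong = LogisticRegression().fit(X, y)            # strong classifier on weak outputs
new_video = weak_scores(rng.normal(1.0, 1.0, size=(1, 3)))
print(strong.predict_proba(new_video)[0, 1])       # probability the video is synthetic
```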
  • Patent number: 11953312
    Abstract: The invention includes a system and method for obtaining high-resolution 3D images of objects. The system includes three cameras and three light sources of different wavelengths (e.g., a red light source, a green light source and a blue light source). Each camera simultaneously captures a color image of the object. A processor separates each color image into separate monochrome images, one for each of the red, green and blue light sources. The quality of the images is not subject to the limited resolution of conventional RGB images. Because three different wavelengths of light are used, the surface can be accurately imaged regardless of its characteristics (e.g., reflectivity and transparency). The system is well suited for industrial uses that require a high volume of objects, particularly those of mixed materials, to be rapidly inspected for defects as small as a few microns.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: April 9, 2024
    Assignee: MIT SEMICONDUCTOR (TIAN JIN) CO., LTD
    Inventors: Kok Weng Wong, Albert Archmawety, Jun Kang Ng, Chee Chye Lee
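
A short numpy sketch of the channel-separation step described above: a color capture taken under simultaneous red, green and blue illumination is split into three monochrome images, one per light source. Spectral crosstalk correction, which a real system would likely need, is omitted.

```python
import numpy as np

def separate_by_wavelength(color_image: np.ndarray) -> dict[str, np.ndarray]:
    """Split an H x W x 3 capture into per-light-source monochrome images."""
    red, green, blue = (color_image[..., c] for c in range(3))
    return {"red_source": red, "green_source": green, "blue_source": blue}

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    capture = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
    mono = separate_by_wavelength(capture)
    print({name: img.shape for name, img in mono.items()})
```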
  • Patent number: 11948284
    Abstract: The present invention belongs to the technical field of petroleum exploitation engineering, and discloses a 3D modeling method for pore-filling hydrate sediment based on CT images. Indoor remolded rock cores or in-situ rock cores without hydrate are scanned by CT; a sediment matrix image stack and a pore image stack are obtained by gray-threshold segmentation; a series of pore-filling hydrate image stacks with different saturations are then constructed through morphological processing of the pore image stack, such as erosion, dilation and image subtraction; and a series of digital rock core image stacks of the pore-filling hydrate sediment with different saturations are formed through image subtraction and splicing operations, providing a realistic 3D model for numerical simulation of the basic physical properties of a natural gas hydrate reservoir.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 2, 2024
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Yongchen Song, Yanghui Li, Peng Wu, Xiang Sun, Weiguo Liu, Jiafei Zhao, Mingjun Yang, Lei Yang, Zheng Ling
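
An illustrative scipy sketch of the morphology step described above: starting from a binary pore mask obtained by thresholding a CT stack, repeated erosion of the pore space carves out a pore-filling hydrate phase, and image subtraction yields the remaining fluid. The toy pore geometry and the mapping from erosion steps to saturation are assumptions.

```python
import numpy as np
from scipy import ndimage

def hydrate_phase(pore: np.ndarray, erosion_steps: int) -> np.ndarray:
    """Pore-filling hydrate taken as the eroded core of the pore space."""
    return ndimage.binary_erosion(pore, iterations=erosion_steps)

def saturation(pore: np.ndarray, hydrate: np.ndarray) -> float:
    return hydrate.sum() / pore.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy 3D pore mask: smoothed noise thresholded at ~40 % porosity.
    field = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=3)
    pore = field > np.quantile(field, 0.6)
    for steps in (1, 2, 3):
        hyd = hydrate_phase(pore, steps)
        fluid = pore & ~hyd                      # image subtraction: pore minus hydrate
        print(steps, round(saturation(pore, hyd), 3), int(fluid.sum()))
    # The matrix (grain) phase is simply ~pore; splicing matrix, hydrate and
    # fluid stacks rebuilds the digital core at each saturation.
```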
  • Patent number: 11941891
    Abstract: The present disclosure provides a method for detecting a lane line, a vehicle and a computing device. The method includes: generating an optical flow image in accordance with a series of event data from a dynamic vision sensor coupled to a vehicle; determining an initial search region including a start point of the lane line in accordance with the optical flow image; determining a center of gravity of the initial search region; determining a new search region through an offsetting operation on the center of gravity; determining a center of gravity of the new search region; repeating the steps of determining a new search region and determining a center of gravity of the new search region iteratively to acquire centers of gravity of a plurality of search regions; and determining the lane line in accordance with the centers of gravity of the plurality of search regions.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: March 26, 2024
    Assignee: OMNIVISION SENSOR SOLUTION (SHANGHAI) CO., LTD.
    Inventor: Xiaozheng Mou
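
A minimal numpy sketch of the iterative search in the abstract above: starting from a window around the lane start point in an optical-flow-derived image, take the intensity-weighted center of gravity, offset the window further along the image, and repeat, collecting one centroid per step. The window size, step length and synthetic test image are assumptions.

```python
import numpy as np

def trace_lane(flow_img: np.ndarray, start_xy, win=15, step=10, n_steps=20):
    h, w = flow_img.shape
    cx, cy = start_xy
    points = []
    for _ in range(n_steps):
        x0, x1 = max(0, int(cx) - win), min(w, int(cx) + win + 1)
        y0, y1 = max(0, int(cy) - win), min(h, int(cy) + win + 1)
        patch = flow_img[y0:y1, x0:x1]
        if patch.sum() == 0:
            break                                   # lost the lane
        ys, xs = np.mgrid[y0:y1, x0:x1]
        cx = (xs * patch).sum() / patch.sum()       # center of gravity (x)
        cy = (ys * patch).sum() / patch.sum()       # center of gravity (y)
        points.append((cx, cy))
        cy -= step                                  # offset: next window further ahead
    return points

if __name__ == "__main__":
    img = np.zeros((200, 200))
    for y in range(200):
        img[y, int(100 + 0.5 * (199 - y))] = 1.0    # synthetic slanted lane line
    print(trace_lane(img, start_xy=(100, 195))[:3])
```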
  • Patent number: 11922601
    Abstract: A medical image processing apparatus includes: an obtaining unit configured to obtain a first image that is a motion contrast en-face image of an eye to be examined; and an image quality improving unit configured to generate a second image with at least one of lower noise and higher contrast than the obtained first image using the obtained first image as input data that is input into an image quality improving engine, wherein the image quality improving engine includes a machine learning engine that has been obtained by using training data including a second image with at least one of lower noise and higher contrast than a first image that is a motion contrast en-face image of an eye to be examined.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: March 5, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yoshihiko Iwase, Manabu Yamazoe, Hideaki Mizobe, Hiroki Uchida, Ritsuya Tomita
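
A toy stand-in for the image quality improving engine described above: pairs of (lower-quality, higher-quality) images train a model that maps the former toward the latter. Here the "engine" is just a least-squares 3x3 convolution kernel, a deliberately small assumption in place of the patent's machine learning engine.

```python
import numpy as np

def patches(img: np.ndarray, k: int = 3) -> np.ndarray:
    """All k x k neighbourhoods of img, flattened, one row per interior pixel."""
    h, w = img.shape
    rows = []
    for dy in range(k):
        for dx in range(k):
            rows.append(img[dy:h - k + 1 + dy, dx:w - k + 1 + dx].ravel())
    return np.stack(rows, axis=1)

rng = np.random.default_rng(8)
clean = np.tile(np.sin(np.linspace(0, 6, 64)), (64, 1))      # stand-in en-face image
noisy = clean + rng.normal(0, 0.3, clean.shape)              # first (lower-quality) image

A = patches(noisy)                                           # inputs: noisy neighbourhoods
b = clean[1:-1, 1:-1].ravel()                                # targets: cleaner centre pixels
kernel, *_ = np.linalg.lstsq(A, b, rcond=None)               # "train" the engine

denoised = (A @ kernel).reshape(62, 62)                      # second (improved) image
print(np.abs(noisy[1:-1, 1:-1] - clean[1:-1, 1:-1]).mean(),
      np.abs(denoised - clean[1:-1, 1:-1]).mean())
```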
  • Patent number: 11914677
    Abstract: An image processing method includes obtaining a sample image, a category label of the sample image, and a label value of the category label. The method further includes calling a preset image processing model to perform segmentation processing on the sample image to obtain at least two sub-regions. The method further includes calling the preset image processing model to perform category prediction on the sample image to obtain a predicted value of the category label. The method further includes updating a network parameter of the preset image processing model according to center coordinates of the sub-regions, the label value of the category label, and the predicted value of the category label. The method further includes performing iterative training on the preset image processing model according to the updated network parameter, to obtain a target image processing model.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: February 27, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Sihong Chen
  • Patent number: 11907836
    Abstract: A computer-implemented method for processing images to determine energy infrastructure (EI) site status is provided. The method applies two EI feature recognition models to an aerial image: a first model recognizes a first EI feature and a second model recognizes a second EI feature. The results of the two models are then combined to determine a composite indication of EI site status.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: February 20, 2024
    Assignee: SOURCEWATER, INC.
    Inventor: Joshua Adler
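
A toy sketch of combining two recognition models' outputs into a composite site status, as the abstract above describes. The two placeholder detectors and the status rule are assumptions for illustration.

```python
import numpy as np

def detect_first_ei_feature(image: np.ndarray) -> bool:
    """Placeholder first EI feature recognition model."""
    return image.mean() > 0.5

def detect_second_ei_feature(image: np.ndarray) -> bool:
    """Placeholder second EI feature recognition model."""
    return image.std() > 0.2

def composite_status(image: np.ndarray) -> str:
    first, second = detect_first_ei_feature(image), detect_second_ei_feature(image)
    if first and second:
        return "active"
    if first or second:
        return "under development"
    return "undeveloped"

print(composite_status(np.random.default_rng(7).random((256, 256))))
```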
  • Patent number: 11900640
    Abstract: A method of substitutional neural residual compression is performed by at least one processor and includes estimating motion vectors, based on a current image frame and a previous reconstructed image frame, obtaining a predicted image frame, based on the estimated motion vectors and the previous reconstructed image frame, and subtracting the obtained predicted image frame from the current image frame to obtain a substitutional residual. The method further includes encoding the obtained substitutional residual, using a first neural network, to obtain an encoded representation, and compressing the encoded representation.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: February 13, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Wei Jiang, Wei Wang, Shan Liu
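
A schematic numpy sketch of the residual-coding pipeline in the abstract above: estimate motion between the previous reconstructed frame and the current frame, warp to obtain a predicted frame, subtract to form the residual, and hand the residual to an encoder. The global-shift motion model and the quantizing "encoder" are placeholders for the patent's neural networks.

```python
import numpy as np

def estimate_global_motion(prev: np.ndarray, cur: np.ndarray, search=4):
    """Brute-force global motion vector (dy, dx) minimising MSE."""
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.mean((np.roll(prev, (dy, dx), axis=(0, 1)) - cur) ** 2)
            if err < best:
                best, best_mv = err, (dy, dx)
    return best_mv

def encode_residual(residual: np.ndarray, step=8):
    """Stand-in for the first neural network: coarse quantisation."""
    return np.round(residual / step).astype(np.int8)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev_rec = rng.integers(0, 255, size=(64, 64)).astype(np.float32)
    cur = np.roll(prev_rec, (2, 3), axis=(0, 1)) + rng.normal(0, 2, (64, 64))
    mv = estimate_global_motion(prev_rec, cur)
    predicted = np.roll(prev_rec, mv, axis=(0, 1))        # predicted image frame
    residual = cur - predicted                            # residual handed to the encoder
    code = encode_residual(residual)
    print(mv, code.dtype, code.shape)
```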
  • Patent number: 11893779
    Abstract: Systems and methods for detecting motile objects (e.g., parasites) in a fluid sample by utilizing the locomotion of the parasites as a specific biomarker and endogenous contrast mechanism. The imaging platform includes one or more substantially optically transparent sample holders and a movable scanning head containing light sources and corresponding image sensor(s). Each light source is directed at a respective sample holder containing a sample, and the respective image sensor is positioned below the sample holder to capture time-varying holographic speckle patterns of the sample. A computing device receives the time-varying holographic speckle pattern image sequences obtained by the image sensor(s), generates a 3D contrast map of motile objects within the sample, and uses deep learning-based classifier software to identify the motile objects.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: February 6, 2024
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Aydogan Ozcan, Yibo Zhang, Hatice Ceylan Koydemir
  • Patent number: 11880960
    Abstract: A medical image processing apparatus includes: an obtaining unit configured to obtain a first image that is a motion contrast en-face image of an eye to be examined; and an image quality improving unit configured to generate a second image with at least one of lower noise and higher contrast than the obtained first image using the obtained first image as input data that is input into an image quality improving engine, wherein the image quality improving engine includes a machine learning engine that has been obtained by using training data including a second image with at least one of lower noise and higher contrast than a first image that is a motion contrast en-face image of an eye to be examined.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: January 23, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yoshihiko Iwase, Manabu Yamazoe, Hideaki Mizobe, Hiroki Uchida, Ritsuya Tomita
  • Patent number: 11880972
    Abstract: This application relates to a tissue nodule detection method, a tissue nodule detection model training method, and a corresponding apparatus, device, storage medium and system.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: January 23, 2024
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chen Cheng, Zhongqian Sun, Zhao Chen, Wei Yang
  • Patent number: 11880406
    Abstract: A computer-implemented method for processing images to identify Energy Infrastructure (EI) features within aerial images of global terrain is provided. The image processing method identifies information about EI features by applying an EI feature recognition model to aerial images of global terrain. The EI feature recognition model identifies the EI feature information according to image content of the aerial image. The method further provides updates to the identification of the EI feature information according to relationships between identified EI features.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: January 23, 2024
    Assignee: SOURCEWATER, INC.
    Inventor: Joshua Adler
  • Patent number: 11880432
    Abstract: Presented are concepts for obtaining a confidence measure for a machine learning model. One such concept processes input data with the machine learning model to generate a primary result. It also generates a plurality of modified instances of the input data and processes them with the machine learning model to generate a respective plurality of secondary results. A confidence measure relating to the primary result is then determined based on the secondary results.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: January 23, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Matthias Lenga, Rafael Wiemker, Tobias Klinder, Marten Bergtholdt, Heike Carolus
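
A small sketch of the idea above: perturb the input several times, run the same model on each modified instance, and use agreement with the primary result as the confidence measure. The Gaussian perturbation and the agreement ratio are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy model: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = LogisticRegression().fit(X, y)

def confidence(x: np.ndarray, n_variants=50, noise=0.5) -> float:
    primary = model.predict(x[None, :])[0]                 # primary result
    variants = x + rng.normal(0.0, noise, size=(n_variants, x.size))
    secondary = model.predict(variants)                    # secondary results
    return float(np.mean(secondary == primary))            # agreement as confidence

print(confidence(np.array([2.0, 2.0])))    # far from the boundary -> high confidence
print(confidence(np.array([0.0, 0.0])))    # on the boundary       -> lower confidence
```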
  • Patent number: 11881090
    Abstract: The present disclosure is directed to systems and methods for generating investigations of user behavior. In an example embodiment, the system includes a video camera configured to capture video of user activity, a video analytic module to perform real-time video processing of the captured video to generate non-video data from video, and a computer configured to receive the video and the non-video data from the video camera. In some embodiments, the video camera is at least one of a traffic camera or an aerial drone camera. The computer includes a video analytics module configured to analyze one of video and non-video data to identify occurrences of particular user behavior, and an investigation generation module configured to generate an investigation containing at least one video sequence of the particular user behavior. In some embodiments, the investigation is generated in near real time.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: January 23, 2024
    Inventor: James Carey
  • Patent number: 11869208
    Abstract: The present disclosure relates to methods, apparatuses, and computer programs for processing computed tomography images. Precise segmentation of the left atrium (LA) in computed tomography (CT) images is a crucial preparatory step for catheter ablation in atrial fibrillation (AF). Deep convolutional neural networks (DCNNs) are applied to automate the LA detection and segmentation procedure and to create three-dimensional (3D) geometries. Deep learning provides an efficient and accurate way to perform automatic contouring and LA volume calculation based on the constructed 3D LA geometry. A non-pulmonary vein (NPV) trigger has been reported as an important predictor of recurrence after AF ablation, and elimination of NPV triggers can reduce post-ablation AF recurrence. Deep learning was therefore applied to pre-ablation pulmonary vein computed tomography (PVCT) geometric slices to create a prediction model for NPV triggers in patients with paroxysmal atrial fibrillation (PAF).
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: January 9, 2024
    Assignees: TAIPEI VETERANS GENERAL HOSPITAL, National Yang Ming Chiao Tung University
    Inventors: Horng-Shing Lu, Chih-Min Liu, Shih-Lin Chang, Shih-Ann Chen, Yenn-Jiang Lin, Hung-Hsun Chen, Wei-Shiang Chen
  • Patent number: 11854194
    Abstract: An image analysis method and an image analysis system are disclosed. The method may include extracting training raw graphic data including at least one first node corresponding to a plurality of histological features of a training tissue slide image and at least one first edge defined by a relationship between those histological features, and generating training graphic data by sampling the first nodes of the training raw graphic data. The method may also include determining a parameter of a readout function by training a graph neural network (GNN) using the training graphic data and training output data corresponding to the training graphic data, and extracting inference graphic data including at least one second node corresponding to a plurality of histological features of an inference tissue slide image and at least one second edge defined by a relationship between the histological features of the inference tissue slide image.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: December 26, 2023
    Assignee: LUNIT INC.
    Inventor: Minje Jang
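
An illustrative numpy sketch of the graph-extraction step described above: detected histological features become nodes, nearby nodes are connected by edges, and a subset of nodes is sampled to form the training graphic data for a GNN. The random point cloud, distance-threshold edges and sampling ratio are assumptions; the GNN training itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def build_graph(coords: np.ndarray, feats: np.ndarray, radius: float):
    """Nodes = histological features; edges = pairs closer than `radius`."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ii, jj = np.where((d < radius) & (d > 0))
    edges = np.stack([ii, jj], axis=1)                     # relationship-based edges
    return feats, edges

def sample_nodes(feats: np.ndarray, edges: np.ndarray, keep_ratio: float = 0.5):
    """Sample nodes and keep only edges whose endpoints both survive."""
    n = len(feats)
    keep = np.sort(rng.choice(n, size=int(n * keep_ratio), replace=False))
    remap = -np.ones(n, dtype=int)
    remap[keep] = np.arange(len(keep))
    mask = np.isin(edges, keep).all(axis=1)
    return feats[keep], remap[edges[mask]]

coords = rng.uniform(0, 100, size=(200, 2))    # feature centroids on a slide tile
feats = rng.normal(size=(200, 16))             # per-feature descriptor vectors
nodes, edges = build_graph(coords, feats, radius=10.0)
s_nodes, s_edges = sample_nodes(nodes, edges)
print(nodes.shape, edges.shape, s_nodes.shape, s_edges.shape)
```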