Patents Examined by Caroline Tabancay Duffy
-
Patent number: 12354242
Abstract: Embodiments of the present application provide a method for denoising videos. The method comprises: acquiring, by the CPU, video frame images by acquiring video data and decoding the video data; loading, by the GPU, the video frame images from the CPU; acquiring, by the GPU, first images by denoising the video frame images using a predetermined non-local means (NLM) denoising algorithm; acquiring, by the GPU, second images by denoising the first images using a predetermined non-local Bayes (NLB) denoising algorithm; and acquiring, by the CPU, denoised video data by acquiring the second images from the GPU and encoding the second images.
Type: Grant
Filed: November 18, 2020
Date of Patent: July 8, 2025
Assignee: BIGO TECHNOLOGY PTE. LTD.
Inventors: Yi Huang, Lingxiao Du
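As a rough illustration of the decode-denoise-encode flow this abstract describes, the Python sketch below runs OpenCV's built-in non-local means filter on each decoded frame and re-encodes the result. The NLB stage has no off-the-shelf OpenCV counterpart, so it is left as a placeholder, and the file names are made up.

```python
# Minimal sketch of the decode -> NLM denoise -> encode flow from the abstract.
# OpenCV's fastNlMeansDenoisingColored stands in for the NLM stage; the NLB
# stage is marked as a stub because no off-the-shelf equivalent is assumed here.
import cv2

cap = cv2.VideoCapture("input.mp4")            # assumed input path
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("denoised.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()                     # CPU: decode a video frame
    if not ok:
        break
    first = cv2.fastNlMeansDenoisingColored(frame, None, 7, 7, 7, 21)  # NLM stage
    second = first                              # placeholder for the NLB stage
    out.write(second)                           # CPU: encode the denoised frame

cap.release()
out.release()
```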
-
Patent number: 12347213
Abstract: Aspects of the disclosure are directed to the field of artificial intelligence technologies and provide a method and an apparatus for obtaining a feature of duct tissue based on computer vision, an intelligent microscope, a storage medium, and a computer device. The method can include the steps of obtaining an image including duct tissue; determining, in an image region corresponding to the duct tissue in the image, at least two feature obtaining regions adapted to the duct morphology of the duct tissue; obtaining cell features of cells of the duct tissue in each of the feature obtaining regions; and obtaining a feature of the duct tissue based on those cell features.
Type: Grant
Filed: March 2, 2022
Date of Patent: July 1, 2025
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Cheng Jiang, Jiarui Sun, Liang Wang, Rongbo Shen, Jianhua Yao
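Purely as a hedged illustration of the "at least two feature obtaining regions" idea, the sketch below derives an interior region and a boundary ring from a hypothetical binary duct mask, then aggregates simple per-cell statistics in each region into one duct-level feature vector. The masks, region choices, and statistics are all assumptions, not the patented method.

```python
# Given a binary duct mask and a binary cell mask (both hypothetical inputs),
# derive two feature-obtaining regions from the duct morphology and aggregate
# per-cell shape statistics inside each region into a duct-level feature vector.
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import binary_erosion, disk

def duct_feature(duct_mask: np.ndarray, cell_mask: np.ndarray) -> np.ndarray:
    interior = binary_erosion(duct_mask, disk(10))        # region 1: duct interior
    ring = duct_mask & ~interior                          # region 2: duct boundary ring
    feats = []
    for region in (interior, ring):
        cells = regionprops(label(cell_mask & region))    # cells falling in the region
        areas = [c.area for c in cells] or [0.0]
        ecc = [c.eccentricity for c in cells] or [0.0]
        feats += [np.mean(areas), np.mean(ecc), float(len(cells))]
    return np.asarray(feats)
```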
-
Patent number: 12347188
Abstract: An apparatus for labeling an object includes a processor and a memory. The processor creates a synthetic three-dimensional (3D) modeling environment scene; generates image data synthetically produced by an in-flight camera simulation, the image data lying within the 3D modeling environment scene based on an orientation of a camera and including one or more objects; uses a mask to identify the one or more objects in the 3D modeling environment scene; labels the identified one or more objects using a cursor on target (COT) lookup table; and stores the labeled objects and flight metadata in a database as part of a training dataset to thereby train an artificial intelligence (AI) system. The AI system identifies a real object corresponding to the label of the one or more identified objects in the COT lookup table. The real object is a real-world target object.
Type: Grant
Filed: November 2, 2021
Date of Patent: July 1, 2025
Assignee: The Boeing Company
Inventors: Yan Yang, Anthony Mountford, Jamieson Lee
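A minimal sketch of the labeling loop this abstract describes might look like the following: a mask image identifies synthetic objects, a dictionary stands in for the cursor-on-target (COT) lookup table, and each labeled object is stored with flight metadata as a training record. The mask ids, COT codes, and metadata fields are placeholders, not values from the patent.

```python
# Assumed setup: mask pixels carry integer object ids, 0 means background, and a
# dict maps each id to a COT-style type code. Each labeled object becomes one
# training record carrying its bounding box and the flight metadata.
import numpy as np

COT_LOOKUP = {1: "a-f-A-M-F", 2: "a-f-G-U-C"}            # hypothetical COT type codes

def label_frame(mask: np.ndarray, flight_metadata: dict) -> list:
    records = []
    for object_id in np.unique(mask):
        if object_id == 0:                                # 0 = background
            continue
        ys, xs = np.nonzero(mask == object_id)
        records.append({
            "label": COT_LOOKUP.get(int(object_id), "unknown"),
            "bbox": [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())],
            "metadata": flight_metadata,                  # e.g. camera orientation
        })
    return records
```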
-
Patent number: 12340033
Abstract: A pointer positioning method, an apparatus, and an instrument relate to the technical field of instruments. The method comprises: while a virtual pointer is rotated in a first dial image by a first preset angle step, detecting the quantities of first pixel points corresponding to a plurality of first rotation positions; determining, according to those quantities, a first target position where the virtual pointer is located; determining, according to the first target position, a target angle value between a target pointer in the instrument dial corresponding to the first dial image and a reference position; and looking up, according to the target angle value, a target scale value for the target pointer from a predetermined correspondence between angle values and scale values.
Type: Grant
Filed: March 26, 2021
Date of Patent: June 24, 2025
Assignee: BOE Technology Group Co., Ltd.
Inventor: Qian Ha
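One plausible reading of the virtual-pointer search, sketched below under assumed dial geometry: step a synthetic pointer around the dial center in preset angle increments, count dial-mask pixels covered at each step, keep the angle with the largest overlap, and interpolate that angle against a preset angle-to-scale mapping.

```python
# dial_mask: binary image of the dial (pointer pixels = 1); center, radius, the
# angle step, and the angle -> scale mapping are all assumed inputs.
import numpy as np

def read_dial(dial_mask, center, radius, angle_step_deg, angle_to_scale):
    cy, cx = center
    best_angle, best_count = 0.0, -1
    for angle in np.arange(0.0, 360.0, angle_step_deg):      # candidate rotation positions
        t = np.arange(radius, dtype=float)
        ys = np.clip((cy - t * np.cos(np.radians(angle))).astype(int), 0, dial_mask.shape[0] - 1)
        xs = np.clip((cx + t * np.sin(np.radians(angle))).astype(int), 0, dial_mask.shape[1] - 1)
        count = int(dial_mask[ys, xs].sum())                 # pixel points under the virtual pointer
        if count > best_count:
            best_angle, best_count = angle, count            # first target position
    angles = sorted(angle_to_scale)                          # preset angle -> scale mapping
    return float(np.interp(best_angle, angles, [angle_to_scale[a] for a in angles]))
```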
-
Patent number: 12327332
Abstract: An image processing method includes obtaining a first quantity of to-be-analyzed images and performing fusion and enhancement processing on the first quantity of to-be-analyzed images through an image analysis model to obtain a first target image. Each to-be-analyzed image corresponds to a different target modality of a target imaging object. The first target image is used to enhance display of a distribution area of an analysis object of the first quantity of to-be-analyzed images. The analysis object belongs to the imaging object. The image analysis model is obtained by training a second quantity of sample images corresponding to different sample modalities. The first quantity is less than or equal to the second quantity. The target modality belongs to the sample modalities.
Type: Grant
Filed: March 3, 2022
Date of Patent: June 10, 2025
Assignee: LENOVO (BEIJING) LIMITED
Inventors: Yao Zhang, Jiang Tian
-
Patent number: 12327331
Abstract: A computer-implemented system and method include performing neural style transfer augmentations using at least a content image, a first style image, and a second style image. A first augmented image is generated based at least on content of the content image and a first style of the first style image. A second augmented image is generated based at least on the content of the content image and a second style of the second style image. The machine learning system is trained with training data that includes at least the content image, the first augmented image, and the second augmented image. A loss output is computed for the machine learning system. The loss output includes at least a consistency loss that accounts for a predicted label provided by the machine learning system with respect to each of the content image, the first augmented image, and the second augmented image.
Type: Grant
Filed: December 2, 2021
Date of Patent: June 10, 2025
Assignee: Robert Bosch GmbH
Inventors: Akash Umakantha, S. Alireza Golestaneh, Joao Semedo, Wan-Yi Lin
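The consistency-loss idea can be sketched roughly as below: the same classifier scores the content image and its two style-transferred augmentations, and KL terms pull the augmented predictions toward the content-image prediction on top of the usual supervised loss. The style-transfer step is assumed to have run upstream; this is an illustration, not the patented loss.

```python
# model: any image classifier returning logits; content, styled_a, styled_b are
# batches of the content image and its two style-transferred augmentations.
import torch
import torch.nn.functional as F

def consistency_loss(model, content, styled_a, styled_b, labels):
    logits_c = model(content)
    logits_a = model(styled_a)
    logits_b = model(styled_b)
    task = F.cross_entropy(logits_c, labels)                       # supervised term
    p_c = F.softmax(logits_c.detach(), dim=1)                      # reference prediction
    cons = (F.kl_div(F.log_softmax(logits_a, dim=1), p_c, reduction="batchmean")
            + F.kl_div(F.log_softmax(logits_b, dim=1), p_c, reduction="batchmean"))
    return task + cons
```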
-
Patent number: 12327345
Abstract: Presented herein are embodiments of a vision-based object perception system for activity analysis, safety monitoring, or both. Embodiments of the perception subsystem detect multi-class objects (e.g., construction machines and humans) in real time while estimating the poses and actions of the detected objects. Safety monitoring embodiments and object activity analysis embodiments may be based on the perception result. To evaluate performance, a dataset containing multiple classes of objects under different lighting conditions was collected and annotated by humans. Experimental results show that the proposed action recognition approach outperforms state-of-the-art approaches in top-1 accuracy by about 5.18%.
Type: Grant
Filed: July 26, 2022
Date of Patent: June 10, 2025
Assignee: Baidu USA LLC
Inventors: Sibo Zhang, Liangjun Zhang
-
Patent number: 12299916
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting three-dimensional object locations from images. One of the methods includes obtaining a sequence of images that comprises, at each of a plurality of time steps, a respective image that was captured by a camera at the time step; generating, for each image in the sequence, respective pseudo-lidar features of a respective pseudo-lidar representation of a region in the image that has been determined to depict a first object; generating, for a particular image at a particular time step in the sequence, image patch features of the region in the particular image that has been determined to depict the first object; and generating, from the respective pseudo-lidar features and the image patch features, a prediction that characterizes a location of the first object in a three-dimensional coordinate system at the particular time step in the sequence.
Type: Grant
Filed: December 8, 2021
Date of Patent: May 13, 2025
Assignee: Waymo LLC
Inventors: Longlong Jing, Ruichi Yu, Jiyang Gao, Henrik Kretzschmar, Kang Li, Ruizhongtai Qi, Hang Zhao, Alper Ayvaci, Xu Chen, Dillon Cower, Congcong Li
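A loose sketch of fusing pseudo-lidar point features with image-patch features into a 3D location prediction is shown below. The tiny PointNet-style point encoder and the small CNN patch encoder are placeholder choices, not the backbones used in the patent.

```python
# points: (B, N, 3) pseudo-lidar points for the detected object; patch: (B, 3, H, W)
# crop of the image region depicting the object. Output: predicted (x, y, z).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.patch_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 128))
        self.head = nn.Linear(256, 3)                     # predicted 3D location

    def forward(self, points, patch):
        point_feat = self.point_mlp(points).max(dim=1).values   # max-pool over points
        patch_feat = self.patch_cnn(patch)
        return self.head(torch.cat([point_feat, patch_feat], dim=1))

pred = FusionHead()(torch.randn(2, 512, 3), torch.randn(2, 3, 64, 64))
print(pred.shape)                                         # torch.Size([2, 3])
```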
-
Patent number: 12293577
Abstract: Embodiments of the disclosure provide a machine learning model for generating a predicted executable command for an image. The learning model includes an interface configured to obtain an utterance indicating a request associated with the image, an utterance sub-model, a visual sub-model, an attention network, and a selection gate. The machine learning model generates a segment of the predicted executable command from weighted probabilities of each candidate token in a predetermined vocabulary determined based on the visual features, the concept features, current command features, and the utterance features extracted from the utterance or the image.
Type: Grant
Filed: February 18, 2022
Date of Patent: May 6, 2025
Assignee: Adobe Inc.
Inventors: Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt, Hyounghun Kim, Doo Soon Kim
-
Patent number: 12287449
Abstract: A three-dimensional imaging method and system for comprehensive surface geophysical prospecting are provided. The method includes: acquiring detection data of a plurality of two-dimensional profiles of a surface detection site; forming two-dimensional profile resistivity data by geophysical inversion of the detection data; performing three-dimensional coordinate conversion on the two-dimensional profile resistivity data to obtain resistivity data in a three-dimensional coordinate system; and converting the resistivity data in the three-dimensional coordinate system into a three-dimensional model by using a Kriging interpolation method.
Type: Grant
Filed: June 16, 2021
Date of Patent: April 29, 2025
Assignee: SHANDONG UNIVERSITY
Inventors: Shucai Li, Yiguo Xue, Maoxin Su, Chunjin Lin, Li Guan, Daohong Qiu, Zhiqiang Li, Yimin Liu, Peng Wang, Huimin Gong
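The final interpolation step can be sketched with the pykrige package (assuming its 3D ordinary kriging interface): scattered resistivity samples already expressed in 3D coordinates are interpolated onto a regular grid. The sample arrays, variogram model, and grid extents below are made-up placeholders.

```python
# Scattered (x, y, z, resistivity) samples gathered from the inverted 2D profiles
# are kriged onto a regular 3D grid to form the volumetric resistivity model.
import numpy as np
from pykrige.ok3d import OrdinaryKriging3D

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 200)
y = rng.uniform(0, 100, 200)
z = rng.uniform(0, 50, 200)
resistivity = 50.0 + 10.0 * rng.random(200)

gridx = np.arange(0.0, 100.0, 5.0)
gridy = np.arange(0.0, 100.0, 5.0)
gridz = np.arange(0.0, 50.0, 5.0)

krige = OrdinaryKriging3D(x, y, z, resistivity, variogram_model="spherical")
volume, variance = krige.execute("grid", gridx, gridy, gridz)   # interpolated 3D model
print(volume.shape)
```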
-
Patent number: 12272031
Abstract: An image inpainting system is described that receives an input image that includes a masked region. From the input image, the image inpainting system generates a synthesized image that depicts an object in the masked region by selecting a first code that represents a known factor characterizing a visual appearance of the object and a second code that represents an unknown factor characterizing the visual appearance of the object apart from the known factor in latent space. The input image, the first code, and the second code are provided as input to a generative adversarial network that is trained to generate the synthesized image using contrastive losses. Different synthesized images are generated from the same input image using different combinations of first and second codes, and the synthesized images are output for display.
Type: Grant
Filed: April 21, 2022
Date of Patent: April 8, 2025
Assignee: Adobe Inc.
Inventors: Krishna Kumar Singh, Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman
-
Patent number: 12254593
Abstract: In a method for generating combined image data based on first magnetic resonance (MR) data and second MR data, the first MR data and the second MR data are provided, the first MR data having been generated by a first actuation of a magnetic resonance device from an examination area of an examination object using a first sequence module, and the second MR data having been generated by a second actuation of the magnetic resonance device from the examination area of the examination object using the first sequence module. The first MR data and the second MR data are registered to one another to generate first registered MR data and second registered MR data; the first registered MR data and the second registered MR data are statistically combined to generate combined image data; and the combined image data is provided as an output in electronic form as a data file.
Type: Grant
Filed: September 2, 2021
Date of Patent: March 18, 2025
Assignee: Siemens Healthineers AG
Inventors: Thomas Benkert, Marcel Dominik Nickel
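A very reduced sketch of the combination step, assuming registration is handled by a separate helper: the two registered volumes are combined voxel-wise (a simple mean is used here as one plausible statistical combination) and written out as a data file.

```python
# mr1, mr2: the two MR acquisitions as numpy arrays; `register` is an assumed
# callable that returns both volumes resampled into a common frame.
import numpy as np

def combine(mr1: np.ndarray, mr2: np.ndarray, register) -> np.ndarray:
    mr1_reg, mr2_reg = register(mr1, mr2)            # registration handled upstream
    combined = np.mean([mr1_reg, mr2_reg], axis=0)   # statistical combination (mean)
    np.save("combined_image.npy", combined)          # output as an electronic data file
    return combined
```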
-
Patent number: 12249007
Abstract: According to an embodiment of the present invention, there is provided a magnetic resonance image processing method including: acquiring a low-quality training image, a first parameter group including at least one scan parameter applied when the training image is acquired, a high-quality label image, and a second parameter group including at least one scan parameter applied when the label image is acquired; and training an artificial neural network model by using the training image, the first parameter group, the label image, and the second parameter group.
Type: Grant
Filed: April 21, 2022
Date of Patent: March 11, 2025
Assignee: AIRS MEDICAL INC.
Inventor: Jeewook Kim
-
Patent number: 12249048
Abstract: One embodiment of the present invention sets forth a technique for generating data. The technique includes sampling from a first distribution associated with the score-based generative model to generate a first set of values. The technique also includes performing one or more denoising operations via the score-based generative model to convert the first set of values into a first set of latent variable values associated with a latent space. The technique further includes converting the first set of latent variable values into a generative output.
Type: Grant
Filed: February 25, 2022
Date of Patent: March 11, 2025
Assignee: NVIDIA CORPORATION
Inventors: Arash Vahdat, Karsten Kreis, Jan Kautz
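A toy sketch of that sampling path, with untrained placeholder networks: draw from a base Gaussian, run a few Langevin-style denoising updates driven by a score network to obtain latent-variable values, then decode them into the generative output. Step sizes and dimensions are arbitrary assumptions.

```python
# score_net and decoder are untrained stand-ins for the trained score-based
# model and its decoder; the loop performs annealed Langevin-style denoising.
import torch
import torch.nn as nn

latent_dim = 16
score_net = nn.Sequential(nn.Linear(latent_dim, 64), nn.SiLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.SiLU(), nn.Linear(64, 784))

z = torch.randn(8, latent_dim)                      # sample from the base distribution
with torch.no_grad():
    for step_size in [0.1, 0.05, 0.01]:             # annealed denoising schedule
        for _ in range(20):
            noise = torch.randn_like(z)
            z = z + step_size * score_net(z) + (2 * step_size) ** 0.5 * noise
    sample = decoder(z)                             # convert latents to generative output
print(sample.shape)                                 # torch.Size([8, 784])
```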
-
Patent number: 12249003
Abstract: A device and method with data preprocessing are disclosed. The device with preprocessing includes a first memory configured to store raw data, and a field programmable gate array (FPGA) in which reconfigurable augmentation modules are programmed, where the FPGA includes a decoder configured to decode the raw data, a second memory configured to store the decoded raw data, and a processor, where the processor is configured to determine target augmentation modules, from among the reconfigurable augmentation modules, based on a data preprocessing pipeline, perform the data preprocessing pipeline using the determined target augmentation modules to generate augmented data, including an augmentation of at least a portion of the decoded raw data stored in the second memory using an idle augmentation module, from among the target augmentation modules, and implement provision of the augmented data to a graphics processing unit (GPU) or Neural Processing Unit (NPU).
Type: Grant
Filed: December 22, 2021
Date of Patent: March 11, 2025
Assignees: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
Inventors: Myeongjae Jeon, Chanho Park, Kyuho Lee
-
Patent number: 12217432
Abstract: Assessment of pulmonary function in coronavirus patients includes use of a computer aided diagnostic (CAD) system to assess pulmonary function and risk of mortality in patients with coronavirus disease 2019 (COVID-19). The CAD system processes chest X-ray data from a patient, extracts imaging markers, and grades disease severity based at least in part on the extracted imaging markers, thereby distinguishing between higher risk and lower risk patients.
Type: Grant
Filed: March 3, 2022
Date of Patent: February 4, 2025
Assignee: University of Louisville Research Foundation, Inc.
Inventors: Ayman S. El-Baz, Ahmed Shalaby, Mohamed Elsharkawy, Ahmed Sharafeldeen, Ahmed Soliman, Ali Mahmoud, Harpal Sandhu, Guruprasad A. Giridharan
-
Patent number: 12211256
Abstract: The invention supports creation of models for recognizing attributes in an image with high accuracy. An image recognition support apparatus includes an image input unit configured to acquire an image, a pseudo label generation unit configured to recognize the acquired image based on a plurality of types of image recognition models and output recognition information, and generate pseudo labels indicating attributes of the acquired image based on the output recognition information, and a new label generation unit configured to generate new labels based on the generated pseudo labels.
Type: Grant
Filed: February 23, 2022
Date of Patent: January 28, 2025
Assignee: Hitachi, Ltd.
Inventors: Soichiro Okazaki, Quan Kong, Tomoaki Yoshinaga
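A rough sketch of pseudo-label generation under assumed interfaces: several recognition models each return attribute confidences for an image, attributes clearing a threshold in any model become pseudo labels, and the pseudo labels are merged with existing labels into a new label set.

```python
# models: callables returning {attribute: confidence} dicts for an image; the
# threshold and the union-based merging rule are illustrative assumptions.
def generate_pseudo_labels(image, models, threshold=0.5):
    pseudo = set()
    for model in models:
        scores = model(image)                      # e.g. {"helmet": 0.91, "vest": 0.32}
        pseudo |= {attr for attr, conf in scores.items() if conf >= threshold}
    return pseudo

def generate_new_labels(pseudo_labels, existing_labels):
    return sorted(set(existing_labels) | pseudo_labels)   # new labels = union of both
```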
-
Patent number: 12182965
Abstract: Apparatus and methods relate to receiving an input image comprising an array of pixels, wherein the input image is associated with a first characteristic; applying a neural network to transform the input image to an output image associated with a second characteristic by generating, by an encoder and for each pixel of the array of pixels of the input image, an encoded pixel, providing, to a decoder, the array of encoded pixels, applying, by the decoder, axial attention to decode a given pixel, wherein the axial attention comprises a row attention or a column attention applied to one or more previously decoded pixels in rows or columns preceding a row or column associated with the given pixel, wherein the row or column attention mixes information within a respective row or column, and maintains independence between respective different rows or different columns; and generating, by the neural network, the output image.
Type: Grant
Filed: September 28, 2021
Date of Patent: December 31, 2024
Assignee: Google LLC
Inventors: Manoj Kumar Sivaraj, Dirk Weissenborn, Nal Emmerich Kalchbrenner
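A condensed sketch of axial attention is given below: one attention pass mixes information only within each row and a second only within each column, so no single pass attends across the full H×W grid. The causal masking used when decoding previously generated pixels is omitted for brevity, and the module sizes are arbitrary.

```python
# Row attention treats each image row as its own sequence; column attention does
# the same for columns, keeping different rows (or columns) independent per pass.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                           # x: (B, H, W, C)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)               # each row is its own sequence
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)   # each column a sequence
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

x = torch.randn(2, 8, 8, 32)
print(AxialAttention(32)(x).shape)                  # torch.Size([2, 8, 8, 32])
```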
-
Patent number: 12175600
Abstract: In a general aspect, a data management system for spatial phase imaging is described. The system includes: a storage engine configured to receive and store input data in a record format, the input data including: pixel-level first-order primitives generated based on electromagnetic (EM) radiation received from an object located in a field-of-view of an image sensor device; and pixel-level second-order primitives generated based on the first-order primitives. The data management system further includes: an analytics engine configured to determine a plurality of features of the object based on the pixel-level first-order primitives and the pixel-level second-order primitives; and an access engine configured to provide a user access to the plurality of features of the object determined by the analytics engine and to the input data stored by the storage engine.
Type: Grant
Filed: January 28, 2022
Date of Patent: December 24, 2024
Assignee: Photon-X, Inc.
Inventors: Blair Barbour, Nicholas Englert, David Theodore Truch, John Harrison
-
Patent number: 12175681
Abstract: An image processing method includes displaying a first key frame image obtained from a sequence frame video stream; obtaining a first foreground image and a first background image obtained after foreground and background separation processing is performed on the first key frame image; obtaining a second foreground image obtained after foreground image processing is performed on a first target object in the first foreground image; obtaining a third background image obtained after background repair processing is performed on the first background image using a second background image, where the second background image is a background image included in a second key frame image obtained by photographing a target scene before the first key frame image is obtained; and obtaining a first key frame image obtained after foreground and background compositing processing is performed on the second foreground image and the third background image.
Type: Grant
Filed: July 30, 2020
Date of Patent: December 24, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Liqiang Wang, Henghui Lu
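A simplified sketch of the keyframe pipeline, with segmentation and the foreground effect abstracted behind assumed callables: the mask splits foreground from background, holes behind the removed foreground are filled from the earlier background image, and the processed foreground is composited back over the repaired background.

```python
# frame: current keyframe (H, W, 3); fg_mask: binary foreground mask (H, W);
# earlier_background: background image from an earlier keyframe of the same
# scene; fg_effect: assumed callable that processes the foreground (e.g. stylize).
import numpy as np

def process_keyframe(frame, fg_mask, earlier_background, fg_effect):
    mask3 = fg_mask[..., None].astype(bool)                  # (H, W, 1) boolean mask
    background = np.where(mask3, earlier_background, frame)  # repair occluded background
    foreground = fg_effect(frame)                            # foreground image processing
    return np.where(mask3, foreground, background)           # composite fg over repaired bg
```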