Patents Examined by David Perlman
  • Patent number: 12288405
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for text extraction from a receipt image. An example non-transitory computer readable medium comprises instructions that, when executed, cause a machine to at least improve region of interest detection efficiency by converting pixels of an input receipt image from a first format to a second format, generate a binary representation of the input receipt image based on the converted pixels, the binary representation of the input receipt image corresponding to saturation values for respective ones of the converted pixels, calculate mirror data from the binary representation of the input receipt image, and cluster the binary representation of the input receipt image to identify a first set of candidate regions of interest, the candidate regions of interest characterized by portions of the binary representation of the input receipt image having saturation values that satisfy a threshold value.
    Type: Grant
    Filed: August 26, 2022
    Date of Patent: April 29, 2025
    Assignee: NIELSEN CONSUMER LLC
    Inventors: Venkadachalam Ramalingam, Sricharan Amarnath, Raju Kumar Allam, Sreenidhi N. Upadhya, Kannan Shanmuganathan, Hussain Masthan
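    Example (illustrative, not from the patent): a minimal Python sketch of the saturation-based candidate search described in the abstract above, using HSV conversion, thresholding, and connected-component clustering; the threshold value, minimum area, and function names are assumptions.
      import cv2
      import numpy as np

      def candidate_regions(receipt_bgr, sat_threshold=40, min_area=500):
          # Convert pixels from the input format (BGR) to HSV so saturation is explicit.
          hsv = cv2.cvtColor(receipt_bgr, cv2.COLOR_BGR2HSV)
          saturation = hsv[:, :, 1]
          # Binary representation: pixels whose saturation satisfies the threshold.
          binary = (saturation >= sat_threshold).astype(np.uint8)
          # Cluster the binary map into connected components; each surviving
          # component is a candidate region of interest.
          num_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
          candidates = []
          for label in range(1, num_labels):  # label 0 is the background
              x, y, w, h, area = stats[label]
              if area >= min_area:
                  candidates.append((x, y, w, h))
          return candidates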
  • Patent number: 12283029
    Abstract: An image inpainting method includes: acquiring an image to be inpainted based on depth information and texture information of a reference image, the image to be inpainted including at least one region to be inpainted; determining at least one reference block matching the at least one region to be inpainted respectively in the reference image; and inpainting the at least one region to be inpainted by using the at least one reference block to obtain a composite image.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: April 22, 2025
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., PEKING UNIVERSITY
    Inventors: Yan Sun, Yunhe Tong, Tiankuo Shi, Xue Chen, Yanhui Xi, Yuxin Bi, Anjie Wang, Songchao Tan, Xiaomang Zhang
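    Example (illustrative, not from the patent): a rough Python sketch of filling one rectangular region to be inpainted with the best-matching reference block, scoring candidates by the sum of squared differences over the known border pixels; the border width, search stride, and scoring rule are assumptions, and the patent's joint use of depth and texture information is not reproduced here.
      import numpy as np

      def find_reference_block(target, hole, reference, border=4):
          """target: HxW float image with a hole; hole: (y, x, h, w); reference: HxW float image."""
          y, x, h, w = hole
          # Use the known ring of pixels around the hole as the matching context.
          ctx = target[y - border:y + h + border, x - border:x + w + border]
          mask = np.ones(ctx.shape, dtype=bool)
          mask[border:border + h, border:border + w] = False  # unknown pixels are ignored
          best_score, best_pos = np.inf, (0, 0)
          ch, cw = ctx.shape
          for ry in range(0, reference.shape[0] - ch + 1, 2):   # stride 2 keeps the search cheap
              for rx in range(0, reference.shape[1] - cw + 1, 2):
                  patch = reference[ry:ry + ch, rx:rx + cw]
                  score = np.sum((patch[mask] - ctx[mask]) ** 2)
                  if score < best_score:
                      best_score, best_pos = score, (ry, rx)
          ry, rx = best_pos
          return reference[ry + border:ry + border + h, rx + border:rx + border + w]

      def inpaint_region(target, hole, reference):
          y, x, h, w = hole
          target[y:y + h, x:x + w] = find_reference_block(target, hole, reference)
          return target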
  • Patent number: 12277744
    Abstract: A method includes receiving a first image that is captured at a first time. The method also includes detecting a location of a first object in the first image. The method also includes determining a region of interest based at least partially upon the location of the first object in the first image. The method also includes receiving a second image that is captured at a second time. The method also includes identifying the region of interest in the second image. The method also includes detecting a location of a second object in a portion of the second image that is outside of the region of interest.
    Type: Grant
    Filed: March 20, 2024
    Date of Patent: April 15, 2025
    Assignee: AURORA FLIGHT SCIENCES CORPORATION, A SUBSIDIARY OF THE BOEING COMPANY
    Inventors: Benjamin Jafek, Samvruta Tumuluru
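    Example (illustrative, not from the patent): a short Python sketch of the two-image flow described above, where detections in the second image are kept only if they fall outside the region of interest built around the first object; the margin, the box format, and the detect_objects callable are assumptions.
      def region_of_interest(box, margin=20):
          x1, y1, x2, y2 = box
          return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

      def outside_roi(box, roi):
          # A box is "outside" when its centre lies outside the region of interest.
          x1, y1, x2, y2 = box
          rx1, ry1, rx2, ry2 = roi
          cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
          return not (rx1 <= cx <= rx2 and ry1 <= cy <= ry2)

      def new_detections(first_image, second_image, detect_objects):
          first_boxes = detect_objects(first_image)     # image captured at the first time
          roi = region_of_interest(first_boxes[0])      # region around the first object
          second_boxes = detect_objects(second_image)   # image captured at the second time
          return [b for b in second_boxes if outside_roi(b, roi)]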
  • Patent number: 12266181
    Abstract: Embodiments are disclosed for receiving a user input and an input video comprising multiple frames. The method may include extracting a text feature from the user input. The method may further include extracting a plurality of image features from the frames. The method may further include identifying one or more keyframes from the frames that include the object. The method may further include clustering one or more groups of the one or more keyframes. The method may further include generating a plurality of segmentation masks for each group. The method may further include determining a set of reference masks corresponding to the user input and the object. The method may further include generating a set of fusion masks by combining the plurality of segmentation masks and the set of reference masks. The method may further include propagating the set of fusion masks and outputting a final set of masks.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: April 1, 2025
    Assignee: Adobe Inc.
    Inventors: Shivam Nalin Patel, Kshitiz Garg, Han Guo, Ali Aminian, Aashish Misraa
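    Example (illustrative, not from the patent): a minimal Python sketch of the mask-fusion step only, taking each fusion mask as the intersection of a predicted segmentation mask with its reference mask; the intersection rule is an assumption, as the abstract does not fix the combination method.
      import numpy as np

      def fuse_masks(segmentation_masks, reference_masks):
          """Both arguments are equal-length lists of boolean HxW arrays; returns the fusion masks."""
          return [seg & ref for seg, ref in zip(segmentation_masks, reference_masks)]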
  • Patent number: 12265145
    Abstract: A method for performing magnetic resonance imaging on a subject comprises obtaining undersampled imaging data, extracting one or more temporal basis functions from the imaging data, extracting one or more preliminary spatial weighting functions from the imaging data, inputting the one or more preliminary spatial weighting functions into a neural network to produce one or more final spatial weighting functions, and multiplying the one or more final spatial weighting functions by the one or more temporal basis functions to generate an image sequence. Each of the temporal basis functions corresponds to at least one time-varying dimension of the subject. Each of the preliminary spatial weighting functions corresponds to a spatially-varying dimension of the subject. Each of the final spatial weighting functions is an artifact-free estimation of a corresponding one of the one or more preliminary spatial weighting functions.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: April 1, 2025
    Assignee: CEDARS-SINAI MEDICAL CENTER
    Inventors: Anthony Christodoulou, Debiao Li, Yuhua Chen
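    Example (illustrative, not from the patent): the final reconstruction step is a low-rank factor product, so a one-line NumPy sketch suffices; the array shapes are assumptions chosen for illustration.
      import numpy as np

      def reconstruct_sequence(final_spatial_weights, temporal_basis):
          """final_spatial_weights: (num_voxels, rank); temporal_basis: (rank, num_frames).
          Returns the image sequence as a (num_voxels, num_frames) array."""
          return final_spatial_weights @ temporal_basis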
  • Patent number: 12260526
    Abstract: One embodiment provides a computer-implemented method that includes providing a dynamic list structure that stores one or more detected object bounding boxes. Temporal analysis is applied that updates the dynamic list structure with object validation to reduce temporal artifacts. A two-dimensional (2D) buffer is utilized to store a luminance reduction ratio of a whole video frame. The luminance reduction ratio is applied to each pixel in the whole video frame based on the 2D buffer. One or more spatial smoothing filters are applied to the 2D buffer to reduce a likelihood of one or more spatial artifacts occurring in a luminance reduced region.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: March 25, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kamal Jnawali, Joonsoo Kim, Chenguang Liu, Chang Su
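    Example (illustrative, not from the patent): a Python sketch of applying the per-pixel luminance reduction ratio held in a 2D buffer, with a uniform (box) filter standing in for the spatial smoothing step; the filter choice, kernel size, and the multiplicative form of the reduction are assumptions.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def apply_luminance_reduction(frame_luma, reduction_ratio_2d, smooth_size=15):
          """frame_luma: HxW float array; reduction_ratio_2d: HxW array of ratios in [0, 1]."""
          # Smooth the ratio buffer so the luminance-reduced region has no hard spatial edges.
          smoothed = uniform_filter(reduction_ratio_2d, size=smooth_size)
          return frame_luma * (1.0 - smoothed)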
  • Patent number: 12260645
    Abstract: Systems, methods, and computer program products of intelligent image analysis using object detection models to identify objects and locate and detect features in an image are disclosed. The systems, methods, and computer program products include automated learning to identify the location of an object to enable continuous identification and location of an object in an image during periods when the object may be difficult to recognize or during low visibility conditions.
    Type: Grant
    Filed: January 10, 2024
    Date of Patent: March 25, 2025
    Assignee: DEVON ENERGY CORPORATION
    Inventors: Amos James Hall, Jared Lee Markes, Beau Travis Rollins, Michael Alan Adler, Jr.
  • Patent number: 12260649
    Abstract: A device may process the surveillance video data to segment vehicles, and may utilize a segmentation guided attention network model with the vehicles to determine traffic density count data. The device may process an image segmentation map, with a regression analysis model, to derive traffic signal timing. The device may process the surveillance video data, with a deep learning model, to identify objects, and may utilize a YOLO model, with the objects, to determine object types. The device may utilize a curriculum loss model with the objects to determine crowd count data, and may process the surveillance video data, with a video analytics model, to identify first events. The device may process the surveillance video data, with a classifier and deep network models, to identify second events, and may process the determined information, with a dynamic text-based explanation model, to generate a text-based explanation and/or a failure prediction.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: March 25, 2025
    Assignee: Accenture Global Solutions Limited
    Inventors: Subramaniaprabhu Jagadeesan, Bikash Chandra Mahato
  • Patent number: 12254604
    Abstract: A system comprises a picture and metadata captured by a content capture system; a recognizable characteristic datastore configured to store recognizable characteristics of different users; a module configured to identify a time and a location associated with the picture based on the metadata, and to identify one or more potential target systems within a predetermined range of the location at the time; a characteristic recognition module configured to retrieve the recognizable characteristics of one or more potential users associated with the potential target systems, and evaluate whether the picture includes one or more representations of at least one actual target user from the potential users based on the recognizable characteristics of the potential users; a distortion module configured to distort a feature of the representations of the at least one actual target user in response to the determination; and a communication module configured to communicate the distorted picture to a computer network.
    Type: Grant
    Filed: September 21, 2023
    Date of Patent: March 18, 2025
    Assignee: Privowny, Inc.
    Inventor: Herve Le Jouan
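    Example (illustrative, not from the patent): a Python sketch of the location-filtering step only, keeping potential target systems within a predetermined range of the picture's capture location; the haversine distance, the metadata field names, and the range value are assumptions.
      import math

      def within_range(metadata, target_systems, range_m=100.0):
          lat1, lon1 = metadata["latitude"], metadata["longitude"]

          def distance_m(lat2, lon2):
              # Haversine great-circle distance in metres.
              r = 6371000.0
              p1, p2 = math.radians(lat1), math.radians(lat2)
              dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
              a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
              return 2 * r * math.asin(math.sqrt(a))

          return [t for t in target_systems
                  if distance_m(t["latitude"], t["longitude"]) <= range_m]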
  • Patent number: 12254086
    Abstract: Systems, devices, and methods are disclosed for encoding behavioral information into an image format to facilitate image based behavioral identification.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: March 18, 2025
    Assignee: Fortinet, Inc.
    Inventor: Sameer Khanna
  • Patent number: 12236686
    Abstract: A distributed monitoring and analytics system is configured to automatically monitor conditions in a remote oil field. The distributed monitoring and analytics system generally includes one or more mobile monitoring units that each includes a vehicle, a sensor package within the vehicle that is configured to produce one or more sensor outputs as the mobile monitoring unit traverses the remote oil field, and an onboard computer configured to process the output from the sensor package. The sensor package can include any number of sensors, including a camera that outputs a video signal for computer vision analysis and a gas detector that outputs a gas detection signal based on the detection of fugitive gas emissions within the remote oil field.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: February 25, 2025
    Assignee: Baker Hughes Oilfield Operations LLC
    Inventors: John Westerheide, Dustin Sharber, Jeffrey Potts, Mahendra Joshi, Xiaoqing Ge, Jeremy Van Dam
  • Patent number: 12217330
    Abstract: A computer-implemented method for generating labeled training data for an artificial intelligence machine is provided.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: February 4, 2025
    Assignee: THALES
    Inventors: Thierry Ganille, Guillaume Pabia, Christian Nouvel
  • Patent number: 12214487
    Abstract: A vision-based tactile measurement method is provided, performed by a computer device (e.g., a chip) connected to a tactile sensor, the tactile sensor including a sensing face and an image sensing component, and the sensing face being provided with a marking pattern. The method includes: obtaining an image sequence of the sensing face collected by the image sensing component, each image of the image sequence comprising one instance of the marking pattern; calculating a difference feature of the marking patterns in adjacent images of the image sequence; and processing the difference feature of the marking patterns using a feedforward neural network to obtain a tactile measurement result, a quantity of hidden layers in the feedforward neural network being less than a threshold.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: February 4, 2025
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yu Zheng, Zhongjin Xu, Zhengyou Zhang
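    Example (illustrative, not from the patent): a Python sketch pairing a blockwise difference feature between adjacent images of the marking pattern with a feedforward network that has a single hidden layer, so the hidden-layer count stays below a small threshold; the feature definition, layer sizes, and output dimension are assumptions.
      import numpy as np
      import torch
      import torch.nn as nn

      def difference_feature(prev_img, next_img, grid=16):
          """Blockwise mean absolute difference between adjacent frames, flattened to grid*grid values."""
          diff = np.abs(next_img.astype(np.float32) - prev_img.astype(np.float32))
          h, w = diff.shape
          bh, bw = h // grid, w // grid
          blocks = diff[:grid * bh, :grid * bw].reshape(grid, bh, grid, bw)
          return torch.from_numpy(blocks.mean(axis=(1, 3)).ravel())

      class ShallowTactileNet(nn.Module):
          def __init__(self, feature_dim=256, hidden=64, out_dim=3):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, out_dim))

          def forward(self, diff_feature):
              # Maps the difference feature to a tactile measurement (e.g. a 3-D contact force).
              return self.net(diff_feature)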
  • Patent number: 12212762
    Abstract: This application discloses a point cloud encoding method and apparatus, a point cloud decoding method and apparatus, and a storage medium for point cloud encoding and/or decoding, and belongs to the field of data processing. The method includes: first obtaining auxiliary information of a to-be-encoded patch, and then encoding the auxiliary information and a first index of the to-be-encoded patch into a bitstream. Values of the first index may be a first value, a second value, and a third value. Different values indicate different types of patches. Therefore, different types of patches can be distinguished by using the first index. For different types of patches, the corresponding auxiliary information encoded into the bitstream may comprise different contents. This can simplify the format of information encoded into the bitstream, reduce bit overheads of the bitstream, and improve encoding efficiency.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: January 28, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Kangying Cai, Dejun Zhang
  • Patent number: 12211260
    Abstract: Systems, apparatuses and methods may provide for technology that processes an inference workload in a first subset of layers of a neural network that prevents or inhibits data dependent branch operations, conducts an exit determination as to whether an output of the first subset of layers satisfies one or more exit criteria, and selectively bypasses processing of the output in a second subset of layers of the neural network based on the exit determination. The technology may also speculatively initiate the processing of the output in the second subset of layers while the exit determination is pending. Additionally, when the inference workloads include a plurality of batches, the technology may mask one or more of the plurality of batches from processing in the second subset of layers.
    Type: Grant
    Filed: November 27, 2023
    Date of Patent: January 28, 2025
    Assignee: Intel Corporation
    Inventors: Haim Barad, Barak Hurwitz, Uzi Sarel, Eran Geva, Eli Kfir, Moshe Island
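    Example (illustrative, not from the patent): a PyTorch sketch of the early-exit control flow, running the first subset of layers, making an exit determination, and bypassing the second subset when the criterion is satisfied; the softmax-confidence criterion and threshold are assumptions, since the abstract leaves the exit criteria open.
      import torch
      import torch.nn as nn

      class EarlyExitModel(nn.Module):
          def __init__(self, first_layers, second_layers, exit_head, threshold=0.9):
              super().__init__()
              self.first = first_layers      # branch-free first subset of layers
              self.second = second_layers    # second subset that may be bypassed
              self.exit_head = exit_head     # produces logits from the early output
              self.threshold = threshold

          def forward(self, x):
              early = self.first(x)
              logits = self.exit_head(early)
              confidence = torch.softmax(logits, dim=-1).max(dim=-1).values
              # Exit determination: bypass the second subset when every item in the
              # batch already satisfies the confidence criterion.
              if bool((confidence >= self.threshold).all()):
                  return logits
              return self.second(early)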
  • Patent number: 12211213
    Abstract: In order to perform quantitative analysis on an object in an image, it is important to accurately identify the object, but when plural objects are in contact with each other, a target portion may not be accurately identified. An image is segmented into a foreground region and a background region, the foreground region being a region in which an object for which quantitative information is to be calculated is shown, and the background region being a region other than the foreground region. With respect to a first object and a second object in contact with each other in the image, a contact point between the first object and the second object is detected based on a region segmentation result output by a segmentation unit.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: January 28, 2025
    Assignee: HITACHI HIGH-TECH CORPORATION
    Inventors: Anirban Ray, Hideharu Hattori, Yasuki Kakishita, Taku Sakazume
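    Example (illustrative, not from the patent): a short Python sketch of finding candidate contact points between two segmented objects as the pixels where their slightly dilated masks meet; the one-pixel dilation is an assumption standing in for the patent's use of the region segmentation result.
      import numpy as np
      from scipy.ndimage import binary_dilation

      def contact_points(mask_first, mask_second):
          """Both masks: boolean HxW arrays for two objects from the foreground segmentation."""
          touching = binary_dilation(mask_first) & binary_dilation(mask_second)
          return np.argwhere(touching)  # (row, col) coordinates of candidate contact points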
  • Patent number: 12211306
    Abstract: A method for counting suckling piglets based on self-attention spatiotemporal feature fusion is disclosed, which includes: detecting a side-lying sow in a video frame by using CenterNet to acquire a key frame of suckling piglets and a region of interest of the video frame, and to overcome the interference of the movement of non-suckling piglets on the spatiotemporal feature extraction for the region of interest; transforming spatiotemporal features extracted by a spatiotemporal two-stream convolutional network from a key frame video clip into a spatiotemporal feature vector, inputting the obtained spatiotemporal feature vector into a temporal, a spatial and a fusion transformer to obtain a self-attention matrix; performing an element-wise product of the self-attention matrix and the fused spatiotemporal features to obtain a self-attention spatiotemporal feature map; and inputting the self-attention spatiotemporal feature map into a regression branch of the number of suckling piglets to complete the counting of the suckling piglets.
    Type: Grant
    Filed: January 5, 2023
    Date of Patent: January 28, 2025
    Inventors: Yueju Xue, Haiming Gan, Wenhao Hou, Chengguo Xu
  • Patent number: 12205341
    Abstract: The present invention relates to a neural network-based high-resolution image restoration method and system, including: performing feature extraction on a target frame in a network input to obtain a first feature, performing feature extraction on a first frame, an adjacent frame, and the optical flow between the first frame and the adjacent frame to obtain a second feature, and concatenating the first feature and the second feature to obtain a shallow layer feature; performing feature extraction and refinement on the shallow layer feature to obtain a plurality of output first features and a plurality of output second features; performing feature decoding on the plurality of output second features, and concatenating the decoded features along the channel dimension to obtain features; and performing weight distribution on the features to obtain final features, and restoring an image. The present invention can effectively help to improve image quality.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: January 21, 2025
    Assignee: SOOCHOW UNIVERSITY
    Inventors: Jianling Hu, Lihang Gao, Dong Liao, Jianfeng Yang, Honglong Cao
  • Patent number: 12204653
    Abstract: A system and a method, in particular a computer-implemented method, for determining a perturbation for attacking and/or validating an association tracker. The method includes providing digital image data that includes an object, determining with the digital image data a first feature that characterizes the object, providing, in particular from a storage, a second feature that characterizes a tracked object, and determining the perturbation depending on a measure of similarity between the first feature and the second feature.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: January 21, 2025
    Assignee: ROBERT BOSCH GMBH
    Inventors: Anurag Pandey, Jan Hendrik Metzen, Nicole Ying Finnie, Volker Fischer
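    Example (illustrative, not from the patent): a PyTorch sketch of deriving the perturbation from a similarity measure by a single signed-gradient step; the feature extractor, the cosine similarity, and the step size epsilon are assumptions, and the patent does not commit to this particular update rule.
      import torch
      import torch.nn.functional as F

      def similarity_perturbation(image, tracked_feature, feature_extractor, epsilon=2.0 / 255):
          """image: (1, C, H, W) tensor containing the object; tracked_feature: (1, D) stored feature."""
          image = image.clone().requires_grad_(True)
          first_feature = feature_extractor(image)   # feature characterizing the object in the image
          similarity = F.cosine_similarity(first_feature, tracked_feature).mean()
          similarity.backward()
          # Step against the similarity gradient to weaken the association with the tracked object.
          return -epsilon * image.grad.sign()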
  • Patent number: 12198451
    Abstract: The present disclosure relates to a method of matching a text with a design, performed by an apparatus for matching a text with a design. According to an embodiment of the present disclosure, the method may comprise acquiring an image from information including images and texts; learning features of the acquired image; extracting texts from the information and performing learning about a pair of an extracted text and the acquired image; extracting a trend word that appears at least a predetermined reference number of times among the extracted texts; performing learning about a pair of the trend word and the acquired image; and identifying a design feature matched with the trend word among the learned features of the image.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: January 14, 2025
    Assignee: DESIGNOVEL
    Inventors: Woo Sang Song, Ki Young Shin, Jian Ri Li
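    Example (illustrative, not from the patent): a minimal Python sketch of the trend-word step, keeping words that occur in the extracted texts at least a predetermined reference number of times; whitespace tokenization and the reference count are assumptions.
      from collections import Counter

      def trend_words(extracted_texts, reference_count=50):
          counts = Counter(word for text in extracted_texts for word in text.lower().split())
          return {word for word, n in counts.items() if n >= reference_count}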