Patents Examined by Amir Alavi
  • Patent number: 12361684
    Abstract: The purpose of the present invention is to provide a data collection system capable of appropriately setting a range for collecting image data used as training data for a learning device that recognizes images. A data collection system according to the present invention receives requirement variables that represent the requirements of image data needed for a learning device to be sufficiently trained, together with requirement definition data that designates the requirement values; further receives priority data that designates the priority of the requirement variables; presents the requirement variables and requirement values in order of priority; and presents a requirement value response rate, the ratio of requirement variables for which a requirement value has been specified. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: July 15, 2025
    Assignee: Hitachi Solutions, Ltd.
    Inventor: Hidekazu Ito
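    Code sketch (illustrative): a minimal Python sketch of the idea in the abstract, assuming simple dictionaries for the requirement variables and priorities; the function and variable names are hypothetical, not from the patent.

        # Sketch, not the patented implementation: present requirement variables in
        # priority order and compute the "requirement value response rate", i.e. the
        # fraction of requirement variables for which a value has been specified.
        def present_requirements(requirements, priorities):
            """requirements: variable -> value (None if unspecified); priorities: variable -> rank."""
            for variable in sorted(requirements, key=lambda v: priorities.get(v, float("inf"))):
                value = requirements[variable]
                print(f"{variable}: {value if value is not None else '(unspecified)'}")
            specified = sum(1 for v in requirements.values() if v is not None)
            rate = specified / len(requirements) if requirements else 0.0
            print(f"Requirement value response rate: {rate:.0%}")
            return rate

        present_requirements(
            {"image_resolution": "1920x1080", "num_samples": 10000, "lighting": None},
            {"image_resolution": 1, "num_samples": 2, "lighting": 3},
        )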
  • Patent number: 12361699
    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for efficient processing of visual content using machine learning models. An example method generally includes generating, from an input, an embedding tensor for the input. The embedding tensor for the input is projected into a reduced-dimensional space projection of the embedding tensor based on a projection matrix. An attention value for the input is derived based on the reduced-dimensional space projection of the embedding tensor and a non-linear attention function. A match, in the reduced-dimensional space, is identified between a portion of the input and a corresponding portion of a target against which the input is evaluated, based on the attention value for the input. One or more actions are taken based on identifying the match. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 5, 2023
    Date of Patent: July 15, 2025
    Assignee: QUALCOMM Incorporated
    Inventors: Jamie Menjay Lin, Fatih Murat Porikli
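    Code sketch (illustrative): a NumPy sketch of projecting an embedding tensor into a reduced-dimensional space and deriving attention-based matches; the shapes, random data, and the softmax attention used here are assumptions, not QUALCOMM's method.

        import numpy as np

        rng = np.random.default_rng(0)
        n_tokens, d_model, d_reduced = 8, 64, 16

        embedding = rng.standard_normal((n_tokens, d_model))     # embedding tensor for the input
        target = rng.standard_normal((n_tokens, d_model))        # target against which the input is evaluated
        projection = rng.standard_normal((d_model, d_reduced))   # projection matrix

        # Project input and target into the reduced-dimensional space.
        z_in, z_tgt = embedding @ projection, target @ projection

        # Non-linear attention over reduced-dimensional similarities (softmax is an assumed choice).
        scores = z_in @ z_tgt.T / np.sqrt(d_reduced)
        attention = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

        # Each input portion "matches" the target portion with the highest attention value.
        print(attention.argmax(axis=-1))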
  • Patent number: 12340565
    Abstract: Embodiments of the present disclosure relate to validation of unsupervised adaptive models. According to example embodiments of the present disclosure, unlike methods that validate on seen target data, the present disclosure synthesizes new samples by mixing the target samples and pseudo labels. The accuracy between model predictions on the mixed samples and the mixed labels is measured for model selection, and this accuracy score may be called PseudoMix. PseudoMix enjoys the combined inductive bias of previous methods. Experiments demonstrate that PseudoMix can keep state-of-the-art performance across different validation settings. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 28, 2022
    Date of Patent: June 24, 2025
    Assignee: LEMON INC.
    Inventors: Song Bai, Dapeng Hu, Jun Hao Liew, Chuhui Xue
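    Code sketch (illustrative): a rough NumPy sketch of the PseudoMix idea as described in the abstract, mixing target samples with their pseudo labels and scoring agreement; the mixup-style mixing, fixed mixing coefficient, and function names are assumptions.

        import numpy as np

        def pseudomix_score(model_predict, samples, pseudo_labels, num_classes, lam=0.5, seed=0):
            """Score an adapted model by its agreement on mixed samples vs. mixed pseudo labels."""
            rng = np.random.default_rng(seed)
            perm = rng.permutation(len(samples))
            mixed_x = lam * samples + (1 - lam) * samples[perm]            # mix pairs of target samples
            one_hot = np.eye(num_classes)[pseudo_labels]
            mixed_y = lam * one_hot + (1 - lam) * one_hot[perm]            # mix their pseudo labels
            probs = model_predict(mixed_x)                                 # (N, num_classes) predictions
            return float((probs.argmax(1) == mixed_y.argmax(1)).mean())    # higher = better candidate model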
  • Patent number: 12333806
    Abstract: Systems and methods for detecting objects in a video are provided. A method can include inputting a video comprising a plurality of frames into an interleaved object detection model comprising a plurality of feature extractor networks and a shared memory layer. For each of one or more frames, the method can include selecting one of the plurality of feature extractor networks to analyze the one or more frames, analyzing the one or more frames by the selected feature extractor network to determine one or more features of the one or more frames, determining an updated set of features based at least in part on the one or more features and one or more previously extracted features extracted from a previous frame stored in the shared memory layer, and detecting an object in the one or more frames based at least in part on the updated set of features. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 13, 2024
    Date of Patent: June 17, 2025
    Assignee: GOOGLE LLC
    Inventors: Dmitry Kalenichenko, Menglong Zhu, Marie Charisse White, Mason Liu, Yinxiao Li
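    Code sketch (illustrative): a schematic Python sketch of the interleaving described above, with the feature extractors, memory update, and detector passed in as assumed callables.

        def detect_over_video(frames, extractors, memory_update, detector):
            """Alternate feature extractors per frame, fuse with remembered features, then detect."""
            memory, detections = None, []
            for i, frame in enumerate(frames):
                extractor = extractors[i % len(extractors)]      # e.g. alternate heavy and light networks
                features = extractor(frame)
                memory = features if memory is None else memory_update(memory, features)
                detections.append(detector(memory))              # detect from the updated feature set
            return detections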
  • Patent number: 12328435
    Abstract: An intra prediction method and a device using the intra prediction method are provided. The intra prediction method includes the steps of: deriving a current prediction mode as a prediction mode of a current block; constructing neighboring samples of the current block with available reference samples; filtering the available reference samples; and generating predicted samples of the current block on the basis of the filtered available reference samples. The filtering step includes performing the filtering using the available reference sample located in the prediction direction of the current prediction mode and a predetermined number of available reference samples neighboring the prediction direction of the current prediction mode. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 27, 2024
    Date of Patent: June 10, 2025
    Assignee: LG ELECTRONICS INC.
    Inventors: Yong Joon Jeon, Seung Wook Park, Jung Sun Kim, Joon Young Park, Byeong Moon Jeon, Jae Hyun Lim
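    Code sketch (illustrative): a toy NumPy sketch of filtering the reference sample in the prediction direction together with a fixed number of neighbouring reference samples; the [1, 2, 1]/4 taps are an assumption, not the claimed filter.

        import numpy as np

        def filter_reference_sample(reference_samples, direction_index, taps=(1, 2, 1)):
            """Smooth the reference sample pointed to by the prediction direction with its neighbours."""
            taps = np.asarray(taps, dtype=float)
            half = len(taps) // 2
            idx = np.clip(np.arange(direction_index - half, direction_index + half + 1),
                          0, len(reference_samples) - 1)
            return float(np.dot(reference_samples[idx], taps) / taps.sum())

        refs = np.array([100, 104, 108, 120, 90, 95], dtype=float)
        print(filter_reference_sample(refs, direction_index=3))   # filtered predictor value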
  • Patent number: 12327390
    Abstract: The present invention in some embodiments thereof relates to a system and method for detecting inappropriate content on a device and filtering content on a variety of media. Inappropriate content is detected by taking a sample of media from at least one of a local memory, a data stream from a network, and a data stream from a local sensor; preprocessing the sample using a local processor and locally stored software to determine whether the sample is a likely candidate to include objectionable content; and, in response to the sample being found to be a likely candidate, performing at least one of quarantining the sample, marking the media, sending the sample to a remote processor for further analysis, and analyzing the sample using an artificial intelligence routine running on the local processor. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: June 10, 2025
    Inventors: Elyasaf Korenwaitz, Ariel Yosef, Zvi Bazak
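    Code sketch (illustrative): a high-level Python sketch of the screening flow in the abstract; the thresholds, helper names, and the particular combination of actions are assumptions.

        def screen_sample(sample, prefilter, local_ai, send_to_remote, quarantine, threshold=0.5):
            """Preprocess locally, then act only on likely candidates for objectionable content."""
            if prefilter(sample) < threshold:       # cheap local pre-processing step
                return "clean"
            quarantine(sample)                      # sample is a likely candidate
            if local_ai(sample) > threshold:        # local AI analysis
                send_to_remote(sample)              # escalate for deeper remote analysis
                return "flagged"
            return "cleared-after-review"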
  • Patent number: 12327419
    Abstract: A method, computer program product, and system include a processor(s) that map a physical environment by utilizing one or more image capture devices to scan aspects of the physical environment. The mapping identifies contamination levels and features associated with objects in the environment. The processor(s) utilizes unsupervised learning and supervised learning to identify activities engaged in by a user in the environment. The processor(s) determines that a trigger event has occurred. The processor(s) identify an activity engaged in by the user and determine whether a user interface utilized by the program code to display results is visible to the user. If the processor(s) determines that the interface is not visible, the processor(s) selects an alert mechanism to alert the user to the trigger event and alerts the user to the trigger event with the selected alert mechanism. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: June 10, 2025
    Assignee: Kyndryl, Inc.
    Inventors: Tiberiu Suto, Shikhar Kwatra, Jonathan Cottrell, Carolina G. Delgado, Kelly Camus
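    Code sketch (illustrative): a small Python sketch of the alerting decision described above; the audio fallback and the callable-based interface are assumptions.

        def handle_trigger(trigger_event, interface_visible_to_user, alerts):
            """Alert on screen if the user can see the interface, otherwise use a fallback mechanism."""
            if not trigger_event:
                return None
            mechanism = "screen" if interface_visible_to_user else "audio"
            alerts[mechanism](trigger_event)        # alert the user with the selected mechanism
            return mechanism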
  • Patent number: 12327398
    Abstract: An embodiment for generating balanced train-test splits for machine learning analysis. The embodiment may automatically extract low-level features and high-level features from a series of received datasets. The embodiment may automatically determine a series of impactful features for each of the received datasets correlating to a corresponding label. The embodiment may automatically select subsets of impactful features. The embodiment may automatically cluster the received datasets to generate series of clusters, each of the generated series of clusters corresponding to one of the selected subsets of impactful features. The embodiment may automatically generate train-test split versions using datasets from each cluster in each of the generated series of clusters. The embodiment may automatically score each of the generated train-test split versions and select a highest-scoring train-test split version. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 2, 2022
    Date of Patent: June 10, 2025
    Assignee: International Business Machines Corporation
    Inventors: Simona Rabinovici-Cohen, Ella Barkan, Tal Tlusty Shapiro
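    Code sketch (illustrative): a condensed Python sketch of the pipeline in the abstract: pick impactful features, cluster on subsets of them, build a candidate split per clustering, score, and keep the best. The scikit-learn tooling and the toy balance score are assumptions, not IBM's method.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_selection import mutual_info_classif

        def best_split(X, y, subset_sizes=(3, 5), test_frac=0.2, seed=0):
            rng = np.random.default_rng(seed)
            impact = mutual_info_classif(X, y, random_state=seed)        # impactful features per label
            candidates = []
            for k in subset_sizes:
                subset = np.argsort(impact)[-k:]                         # a subset of impactful features
                clusters = KMeans(n_clusters=4, n_init=10, random_state=seed).fit_predict(X[:, subset])
                test_idx = np.concatenate([                              # sample test rows per cluster
                    rng.choice(np.where(clusters == c)[0],
                               max(1, int(test_frac * (clusters == c).sum())), replace=False)
                    for c in np.unique(clusters)])
                score = -abs(y[test_idx].mean() - y.mean())              # toy balance score
                candidates.append((score, test_idx))
            return max(candidates, key=lambda t: t[0])[1]                # indices of the best test split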
  • Patent number: 12321686
    Abstract: The presently disclosed inventive concepts are directed to systems, computer program products, and methods for intelligent screen automation. According to one embodiment, a method includes: determining one or more logical relationships between textual elements and non-textual elements of one or more images of a user interface; building a hierarchy comprising some or all of the non-textual elements and some or all of the textual elements to form a data structure representing the functionality of the user interface; and outputting the data structure to a memory. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 19, 2023
    Date of Patent: June 3, 2025
    Assignee: TUNGSTEN AUTOMATION CORPORATION
    Inventors: Vadim Alexeev, Benjamin De Coninck Owe
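    Code sketch (illustrative): a minimal Python sketch of the kind of hierarchy the abstract describes, with textual and non-textual elements grouped by simple bounding-box containment and written out as a data structure; the containment rule and field names are assumptions.

        import json
        from dataclasses import asdict, dataclass, field

        @dataclass
        class Element:
            kind: str                     # "text" or "non-text"
            bbox: tuple                   # (x0, y0, x1, y1)
            label: str = ""
            children: list = field(default_factory=list)

        def contains(outer, inner):
            return (outer.bbox[0] <= inner.bbox[0] and outer.bbox[1] <= inner.bbox[1]
                    and outer.bbox[2] >= inner.bbox[2] and outer.bbox[3] >= inner.bbox[3])

        def build_hierarchy(elements):
            """Largest elements become parents of the elements they enclose (one level, for brevity)."""
            roots = []
            for el in sorted(elements, key=lambda e: (e.bbox[2] - e.bbox[0]) * (e.bbox[3] - e.bbox[1]),
                             reverse=True):
                parent = next((r for r in roots if contains(r, el)), None)
                (parent.children if parent else roots).append(el)
            return roots

        ui = build_hierarchy([Element("non-text", (0, 0, 100, 50)),
                              Element("text", (10, 10, 60, 20), "Submit")])
        print(json.dumps([asdict(r) for r in ui], indent=2))   # data structure output to memory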
  • Patent number: 12288411
    Abstract: In example embodiments, techniques are provided that use two different ML models (a symbol association ML model and a link association ML model), one to extract associations between text labels and symbols and one to extract associations between text labels and links, in a schematic diagram (e.g., P&ID) in an image-only format. The two models may use different ML architectures. For example, the symbol association ML model may use a deep learning neural network architecture that receives, for each possible text label and symbol pair, both a context and a request, and produces a score indicating confidence that the pair is associated. The link association ML model may use a gradient boosting tree architecture that receives, for each possible text label and link pair, a set of multiple features describing at least the geometric relationship between the possible text label and link pair, and produces a score indicating confidence that the pair is associated. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: April 29, 2025
    Assignee: Bentley Systems, Incorporated
    Inventors: Marc-André Gardner, Simon Savary, Louis-Philippe Asselin
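    Code sketch (illustrative): a Python sketch of the kind of geometric features the link association model is described as consuming for a (text label, link) pair; the specific features and the use of a scikit-learn gradient-boosted classifier are assumptions.

        import numpy as np

        def pair_features(label_box, link_segment):
            """Geometric features for a possible text label / link pair."""
            (lx0, ly0, lx1, ly1), ((x0, y0), (x1, y1)) = label_box, link_segment
            cx, cy = (lx0 + lx1) / 2, (ly0 + ly1) / 2            # label centre
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2                # link midpoint
            return [abs(cx - mx), abs(cy - my),
                    float(np.hypot(cx - mx, cy - my)),           # centre-to-midpoint distance
                    float(np.hypot(x1 - x0, y1 - y0))]           # link length

        # Training on labelled pairs could then look roughly like:
        #   model = sklearn.ensemble.GradientBoostingClassifier().fit(rows, is_associated)
        #   confidence = model.predict_proba([pair_features(box, seg)])[0, 1]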
  • Patent number: 12288567
    Abstract: A neural network, a system using this neural network and a method for training a neural network to output a description of the environment in the vicinity of at least one sound acquisition device on the basis of an audio signal acquired by the sound acquisition device, the method including: obtaining audio and image training signals of a scene showing an environment with objects generating sounds, obtaining a target description of the environment seen on the image training signal, inputting the audio training signal to the neural network so that the neural network outputs a training description of the environment, and comparing the target description of the environment with the training description of the environment.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: April 29, 2025
    Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, ETH ZÜRICH
    Inventors: Wim Abbeloos, Arun Balajee Vasudevan, Dengxin Dai, Luc Van Gool
  • Patent number: 12283105
    Abstract: A rail area extraction method based on laser point cloud data is provided, including: preprocessing collected laser point cloud data; screening and clustering the laser point cloud data based on a fixed distance segmentation method, and representing the laser point cloud data of reference objects by a main laser point cloud data cluster; projecting the laser point cloud data of reference objects onto a horizontal plane and fitting reference curves based on an improved differential evolution algorithm, with a train left side reference curve as an upper boundary and a train right side reference curve as a lower boundary; selecting a target boundary line from the upper and lower boundaries based on a laser point cloud data amount-density two-step decision method; calculating a rail area center line based on the target boundary line; and selecting a rail area boundary line extension method or a rail area center line extension method to calculate the rail area. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 29, 2024
    Date of Patent: April 22, 2025
    Assignee: Suzhou TongRuiXing Technology Co., LTD
    Inventors: Tuo Shen, Lanxin Xie, Tenghui Xue
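    Code sketch (illustrative): a Python sketch of one step from the abstract, fitting a reference curve to the projected reference-object points with differential evolution; the quadratic curve model, L1 loss, and parameter bounds are assumptions, not the improved algorithm in the patent.

        import numpy as np
        from scipy.optimize import differential_evolution

        def fit_reference_curve(xy_points):
            """Fit y ~ a*x^2 + b*x + c to points projected onto the horizontal plane."""
            x, y = xy_points[:, 0], xy_points[:, 1]
            def loss(coeffs):
                a, b, c = coeffs
                return np.abs(a * x**2 + b * x + c - y).mean()
            return differential_evolution(loss, bounds=[(-1, 1), (-5, 5), (-50, 50)], seed=0).x

        xs = np.linspace(0, 30, 60)
        pts = np.column_stack([xs, 0.01 * xs**2 + 2.0])
        print(fit_reference_curve(pts))    # approximately [0.01, 0.0, 2.0]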
  • Patent number: 12283081
    Abstract: A stroboscopic device, comprising: a camera for acquiring video frames; a light source; at least one hardware processor; and software that is configured to, when executed by the at least one hardware processor: generate synthetic video frames of an object; provide real video frames of the object and the synthetic video frames to a discriminator configured to detect the difference between the real video frames and the synthetic video frames; use feedback from the discriminator to generate further synthetic video frames; provide real video frames of the object and the further synthetic video frames to the discriminator, repeating until a convergence point is achieved where the discriminator cannot reliably tell the real video frames from the synthetically generated video frames; generate a data set comprising synthetic video frames and real video frames of the object; use the data set to train a model; acquire a sequence of video frames via the camera; and, for a first and second frame in the sequence… (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 10, 2024
    Date of Patent: April 22, 2025
    Assignee: IOT Technologies LLC
    Inventors: Abraham Greenboim, Osama Mustafa
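    Code sketch (illustrative): a Python sketch of the stopping rule implied by the abstract, namely that training continues until the discriminator can no longer reliably separate real from synthetic frames; the chance-level tolerance of 0.55 is an assumption.

        def reached_convergence(discriminator, real_frames, synthetic_frames, tolerance=0.55):
            """True when discriminator accuracy has dropped to roughly chance level."""
            correct = sum(discriminator(f) >= 0.5 for f in real_frames)        # real scored as real
            correct += sum(discriminator(f) < 0.5 for f in synthetic_frames)   # synthetic scored as synthetic
            accuracy = correct / (len(real_frames) + len(synthetic_frames))
            return accuracy <= tolerance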
  • Patent number: 12276504
    Abstract: Systems and methods for navigating intersections autonomously or semi-autonomously can include, but are not limited to including, accessing data related to the geography and traffic management features of the intersection, executing autonomous actions to navigate the intersection, and coordinating with one or more processors and/or operators executing remote actions, if necessary. Traffic management features can be identified by using various types of images such as oblique images.
    Type: Grant
    Filed: March 11, 2024
    Date of Patent: April 15, 2025
    Assignee: DEKA Products Limited Partnership
    Inventors: Praneeta Mallela, Aaditya Ravindran, Benjamin V. Hersh, Boris Bidault
  • Patent number: 12277665
    Abstract: An electronic device is provided. The electronic device includes a camera, a display, and a processor configured to obtain a first image including one or more external objects by using the camera, output, via the display, a three-dimensional (3D) object generated based on attributes related to a face among the one or more external objects, receive a selection of at least one graphic attribute from a plurality of graphic attributes which can be applied to the 3D object, generate a 3D avatar for the face based on the at least one graphic attribute, and generate a second image including at least one object reflecting a predetermined facial expression or motion using the 3D avatar.
    Type: Grant
    Filed: August 21, 2023
    Date of Patent: April 15, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wooyong Lee, Yonggyoo Kim, Byunghyun Min, Dongil Son, Chanhee Yoon, Kihuk Lee, Cheolho Cheong
  • Patent number: 12277739
    Abstract: A point-cloud decoding device 200 according to the present invention includes: a geometry-information decoding unit 2010 configured to decode a flag that controls whether or not to apply “Implicit QtBt”; and an attribute-information decoding unit 2060 configured to decode a flag that controls whether or not to apply “scalable lifting”; wherein a restriction is set not to apply “scalable lifting” when “Implicit QtBt” is applied. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: April 15, 2025
    Assignee: KDDI CORPORATION
    Inventors: Kyohei Unno, Kei Kawamura, Sei Naito
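    Code sketch (illustrative): a tiny Python sketch of the restriction described above, under which the attribute flag for “scalable lifting” must not be set when the geometry flag enabling “Implicit QtBt” is set; the function and argument names are illustrative, not the actual syntax element names.

        def check_flags(implicit_qtbt_enabled: bool, scalable_lifting_enabled: bool) -> None:
            """Raise if the bitstream violates the described restriction."""
            if implicit_qtbt_enabled and scalable_lifting_enabled:
                raise ValueError("scalable lifting must not be applied when Implicit QtBt is applied")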
  • Patent number: 12272032
    Abstract: A method includes obtaining an input image that contains blur. The method also includes providing the input image to a trained machine learning model, where the trained machine learning model includes (i) a shallow feature extractor configured to extract one or more feature maps from the input image and (ii) a deep feature extractor configured to extract deep features from the one or more feature maps. The method further includes using the trained machine learning model to generate a sharpened output image. The trained machine learning model is trained using ground truth training images and input training images, where the input training images include versions of the ground truth training images with blur created using demosaic and noise filtering operations.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: April 8, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Devendra K. Jangid, John Seokjun Lee, Hamid R. Sheikh
  • Patent number: 12272101
    Abstract: A seed camera disposed at a first location is manually calibrated. A second camera, disposed at a second location, detects a physical marker based on predefined characteristics of the physical marker. The physical marker is located within an overlapping field of view between the seed camera and the second camera. The second camera is calibrated based on a combination of the physical location of the physical marker, the first location of the seed camera, the second location of the second camera, a first image of the physical marker generated with the seed camera, and a second image of the physical marker generated with the second camera.
    Type: Grant
    Filed: June 5, 2023
    Date of Patent: April 8, 2025
    Assignee: Nice North America LLC
    Inventor: Chandan Gope
  • Patent number: 12272107
    Abstract: Encoding a three-dimensional point cloud. A set of points is obtained within the three-dimensional point cloud, a point within the set of points having a co-ordinate in three dimensions. The points are converted into a two-dimensional representation. For a point within the set of points, information describing the co-ordinate is represented as a location within the two-dimensional representation and a value at the location. The two-dimensional representation is encoded using a tier-based hierarchical coding format to output encoded data. The tier-based hierarchical coding format encodes the two-dimensional representation as layers, the layers representing echelons of data used to progressively reconstruct the signal at different levels of quality. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: April 8, 2025
    Assignee: V-NOVA INTERNATIONAL LIMITED
    Inventor: Guido Meardi
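    Code sketch (illustrative): a NumPy sketch of converting points into a two-dimensional representation where each point contributes a location and a value; mapping x, y to the location and z to the value is an assumption about the conversion, and the resulting plane would then be fed to the tier-based 2D codec.

        import numpy as np

        def points_to_2d(points, grid_size=64):
            """points: (N, 3) array of x, y, z co-ordinates -> (grid_size, grid_size) plane of values."""
            xy = points[:, :2]
            span = np.maximum(np.ptp(xy, axis=0), 1e-9)                  # avoid division by zero
            locs = ((xy - xy.min(axis=0)) / span * (grid_size - 1)).astype(int)
            plane = np.zeros((grid_size, grid_size))
            plane[locs[:, 1], locs[:, 0]] = points[:, 2]                 # value stored at each location
            return plane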
  • Patent number: 12272131
    Abstract: In one aspect, an example method includes a processor (1) applying a feature map network to an image to create a feature map comprising a grid of vectors characterizing at least one feature in the image and (2) applying a probability map network to the feature map to create a probability map assigning a probability to the at least one feature in the image, where the assigned probability corresponds to a likelihood that the at least one feature is an overlay. The method further includes the processor determining that the probability exceeds a threshold and, responsive to determining that the probability exceeds the threshold, performing a processing action associated with the at least one feature. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 19, 2023
    Date of Patent: April 8, 2025
    Assignee: The Nielsen Company (US), LLC
    Inventors: Wilson Harron, Irene Zhu
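    Code sketch (illustrative): a condensed Python sketch of the flow in the abstract, with the two networks abstracted as callables; the threshold value and action names are assumptions.

        import numpy as np

        def detect_overlay(image, feature_map_net, probability_map_net, threshold=0.8):
            """Feature map -> per-feature overlay probability -> thresholded processing action."""
            feature_map = feature_map_net(image)                   # grid of vectors characterizing features
            probability_map = probability_map_net(feature_map)     # likelihood each feature is an overlay
            if float(np.max(probability_map)) > threshold:
                return "process-overlay"                           # processing action for the overlay
            return "no-action"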