Patents Examined by Hadi Akhavannik
  • Patent number: 12198225
    Abstract: A technique for synthesizing a shape includes generating a first plurality of offset tokens based on a first shape code and a first plurality of position tokens, wherein the first shape code represents a variation of a canonical shape, and wherein the first plurality of position tokens represent a first plurality of positions on the canonical shape. The technique also includes generating a first plurality of offsets associated with the first plurality of positions on the canonical shape based on the first plurality of offset tokens. The technique further includes generating the shape based on the first plurality of offsets and the first plurality of positions.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: January 14, 2025
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
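The offset-token pipeline in the abstract of 12198225 lends itself to a short illustration. The NumPy sketch below shows only the overall flow: positions on the canonical shape plus a shape code are turned into offset tokens, the tokens are decoded into per-position offsets, and the offsets displace the canonical positions. The decoder names (`decode_offset_tokens`, `decode_offsets`) and the toy linear stand-ins in the example are assumptions for illustration, not the patented networks.

```python
import numpy as np

def synthesize_shape(shape_code, positions, decode_offset_tokens, decode_offsets):
    """Hypothetical sketch: deform a canonical shape by per-position offsets.

    shape_code: latent vector representing a variation of the canonical shape.
    positions:  (N, 3) points sampled on the canonical shape.
    decode_offset_tokens / decode_offsets: stand-ins for the learned decoders.
    """
    # Turn each canonical position, conditioned on the shape code, into an offset token ...
    offset_tokens = decode_offset_tokens(shape_code, positions)   # (N, D)
    # ... then decode the tokens into per-position 3D offsets.
    offsets = decode_offsets(offset_tokens)                       # (N, 3)
    # The synthesized shape is the canonical positions displaced by the offsets.
    return positions + offsets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    positions = rng.normal(size=(100, 3))          # canonical sample points
    shape_code = rng.normal(size=(16,))            # latent shape variation
    # Toy linear "decoders" standing in for the learned networks.
    W_tok = rng.normal(size=(3 + 16, 32)) * 0.1
    W_off = rng.normal(size=(32, 3)) * 0.1
    tok = lambda c, p: np.tanh(np.concatenate([p, np.tile(c, (len(p), 1))], axis=1) @ W_tok)
    off = lambda t: t @ W_off
    print(synthesize_shape(shape_code, positions, tok, off).shape)  # (100, 3)
```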
  • Patent number: 12190607
    Abstract: A system, method, and computer program for updating calibration lookup tables within an autonomous vehicle or transmitting roadway marking changes between online and offline mapping files is disclosed. A LIDAR sensor may be used for generating an online (rasterized) mapping file with online intensity values which are compared against a correlated offline (rasterized) mapping file having offline intensity values. The online intensity value may be used to acquire a lookup table having a normal distribution that is compared against the offline intensity value. The lookup table may be updated when the offline intensity value is within the normal distribution. Or the vehicle may transmit a roadway marking change when the offline intensity value is outside the normal distribution.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: January 7, 2025
    Inventor: Khalid Yousif
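A minimal sketch of the decision rule described in the abstract of 12190607, assuming a lookup table keyed by online intensity that stores a (mean, std) pair and a simple z-score test. The function name, the blended update, and the threshold are all illustrative assumptions, not the patented method.

```python
def process_cell(online_intensity, offline_intensity, lut, z_threshold=2.0):
    """Hypothetical sketch: update the calibration LUT or report a marking change.

    lut maps an online intensity value to the (mean, std) of a normal
    distribution learned for that value.
    """
    mean, std = lut[online_intensity]
    within = abs(offline_intensity - mean) <= z_threshold * std
    if within:
        # Offline value agrees with the distribution: refresh the calibration entry.
        new_mean = 0.9 * mean + 0.1 * offline_intensity
        lut[online_intensity] = (new_mean, std)
        return "updated_lut"
    # Otherwise the stored map likely changed: report a roadway marking change.
    return "roadway_marking_change"

if __name__ == "__main__":
    lut = {intensity: (float(intensity), 5.0) for intensity in range(256)}
    print(process_cell(120, 118.0, lut))   # within distribution -> updated_lut
    print(process_cell(120, 200.0, lut))   # outlier -> roadway_marking_change
```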
  • Patent number: 12183058
    Abstract: A computer application may aim to identify first and second “matching” objects. The matching method cannot necessarily be based on how visually similar the two objects are to each other because two matching objects might be different and/or be visually different. Moreover, the images of the objects to be matched might not necessarily have metadata to assist in the matching. In some embodiments, a machine learning model may be trained using a set of digital images, each including two or more matching objects. Triplet loss training may be used, and each triplet may include: an image of a first object extracted from a first image, an image of an object that is visually similar to an image of a second object extracted from the first image, and an image of a third object extracted from a different image.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: December 31, 2024
    Assignee: Shopify Inc.
    Inventors: Shaked Dunay, Adam Malloul, Roni Gurvich
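The training described for 12183058 relies on triplet loss. The snippet below is a generic triplet margin loss on embedding vectors, shown as a reference formulation rather than the patent's exact objective or triplet-sampling scheme.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embedding vectors (a generic sketch,
    not the exact training objective used in the patent)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)   # anchor-negative distance
    # Pull matching pairs together, push non-matching pairs apart by a margin.
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, p, n = (rng.normal(size=(8, 128)) for _ in range(3))
    print(float(triplet_loss(a, p, n)))
```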
  • Patent number: 12183082
    Abstract: An information processing apparatus (10) includes a time and space information acquisition unit (110) that acquires high-risk time and space information indicating a spatial region with an increased possibility of an accident occurring or of a crime being committed and a corresponding time slot, a possible surveillance target acquisition unit (120) that identifies a video to be analyzed from among a plurality of videos generated by capturing an image of each of a plurality of places, on the basis of the high-risk time and space information, and analyzes the identified video to acquire information of a possible surveillance target, and a target time and space identification unit (130) that identifies at least one of a spatial region where surveillance is to be conducted which is at least a portion of the spatial region or a time slot when surveillance is to be conducted, from among the spatial region and the time slot indicated by the high-risk time and space information, on the basis of the information of the possible surveillance target.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: December 31, 2024
    Assignee: NEC CORPORATION
    Inventors: Junko Nakagawa, Ryoma Oami, Kenichiro Ida, Mika Saito, Shohzoh Nagahama, Akinari Furukawa, Yasumasa Ohtsuka, Junichi Fukuda, Fumi Ikeda, Manabu Moriyama, Fumie Einaga, Tatsunori Yamagami, Keisuke Hirayama, Yoshitsugu Kumano, Hiroki Adachi
  • Patent number: 12183001
    Abstract: Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are commonly used to assess patients with known or suspected pathologies of the lungs and liver. In particular, identification and quantification of possibly malignant regions identified in these high-resolution images is essential for accurate and timely diagnosis. However, careful quantitative assessment of lung and liver lesions is tedious and time consuming. This disclosure describes an automated end-to-end pipeline for accurate lesion detection and segmentation.
    Type: Grant
    Filed: December 8, 2022
    Date of Patent: December 31, 2024
    Assignee: Arterys Inc.
    Inventors: Daniel Irving Golden, Fabien Rafael David Beckers, John Axerio-Cilies, Matthieu Le, Jesse Lieman-Sifry, Anitha Priya Krishnan, Sean Patrick Sall, Hok Kan Lau, Matthew Joseph Didonato, Robert George Newton, Torin Arni Taerum, Shek Bun Law, Carla Rosa Leibowitz, Angélique Sophie Calmon
  • Patent number: 12170142
    Abstract: A method at a computing device for classifying elements within an input, the method including breaking the input into a plurality of patches; for each patch: creating a vector output; applying a characterization map to select a classification bin from a plurality of classification bins; and utilizing the selected classification bin to classify the vector output to create a classified output; and compiling the classified output from each patch.
    Type: Grant
    Filed: April 26, 2023
    Date of Patent: December 17, 2024
    Assignees: NantOmics, LLC, NantHealth, Inc.
    Inventors: Mustafa Jaber, Liudmila A Beziaeva, Christopher W Szeto, Bing Song
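The per-patch flow of 12170142 (break into patches, create a vector output, use a characterization map to pick a classification bin, classify within that bin, compile the results) can be sketched as below. `embed`, `characterization_map`, and `bin_classifiers` are hypothetical stand-ins for the learned components, and the two toy bins in the example are purely illustrative.

```python
import numpy as np

def classify_image(image, patch_size, embed, characterization_map, bin_classifiers):
    """Hypothetical sketch: per-patch classification via a bank of classifiers."""
    h, w = image.shape[:2]
    results = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            vec = embed(patch)                          # vector output for the patch
            bin_id = characterization_map(vec)          # select a classification bin
            label = bin_classifiers[bin_id](vec)        # classify within that bin
            results.append(((y, x), bin_id, label))
    return results                                      # compiled per-patch output

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.random((64, 64))
    embed = lambda p: np.array([p.mean(), p.std()])
    characterization_map = lambda v: int(v[0] > 0.5)     # two toy bins
    bin_classifiers = {0: lambda v: "dark", 1: lambda v: "bright"}
    print(classify_image(img, 16, embed, characterization_map, bin_classifiers)[:2])
```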
  • Patent number: 12169961
    Abstract: Disclosed herein is a digital object generator that uses a one-way function to generate unique digital objects based on user-specific input. Features of the input are first extracted via a few-shot convolutional neural network model, then evaluated for weight and integrated fit. The resulting digital object includes a user-decipherable output such as a visual representation, an audio representation, or a multimedia representation that includes recognizable elements from the user-specific input.
    Type: Grant
    Filed: October 14, 2022
    Date of Patent: December 17, 2024
    Assignee: EMOJI ID, LLC
    Inventors: Naveen Kumar Jain, Riccardo Paolo Spagni
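One way to read the one-way-function idea in 12169961 is sketched below: extracted features are hashed into a deterministic seed that drives generation of a small visual output, so the same input always yields the same object. The hashing scheme, the stand-in feature vector, and the rendering are assumptions for illustration only, not the patented generator.

```python
import hashlib
import numpy as np

def generate_digital_object(user_input_features, size=16):
    """Hypothetical sketch: hash extracted features (a one-way function) into a
    deterministic seed, then render a simple visual representation from it."""
    digest = hashlib.sha256(
        np.asarray(user_input_features, dtype=np.float32).tobytes()
    ).digest()
    seed = int.from_bytes(digest[:8], "big")
    rng = np.random.default_rng(seed)          # same input -> same object
    return rng.integers(0, 256, size=(size, size, 3), dtype=np.uint8)

if __name__ == "__main__":
    feats = [0.12, 0.87, 0.44]                 # stand-in for few-shot CNN features
    a = generate_digital_object(feats)
    b = generate_digital_object(feats)
    print(np.array_equal(a, b))                # True: generation is deterministic
```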
  • Patent number: 12165278
    Abstract: A hardware downscaling module and downscaling methods for downscaling a two-dimensional array of values. The hardware downscaling unit comprises a first group of one-dimensional downscalers; and a second group of one-dimensional downscalers; wherein the first group of one-dimensional downscalers is arranged to receive a two-dimensional array of values and to perform downscaling in series in a first dimension; and wherein the second group of one-dimensional downscalers is arranged to receive an output from the first group of one-dimensional downscalers and to perform downscaling in series in a second dimension.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: December 10, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Timothy Lee, Alan Vines, David Hough
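The two-group structure in 12165278 amounts to separable downscaling: one set of 1D passes per dimension. The NumPy sketch below uses plain box averaging as the 1D downscaler, which is an assumption; the actual hardware downscalers may use different filters and a deeper series arrangement.

```python
import numpy as np

def downscale_1d(values, factor):
    """Average consecutive groups of `factor` values along the last axis."""
    trimmed = values[..., : values.shape[-1] // factor * factor]
    return trimmed.reshape(*trimmed.shape[:-1], -1, factor).mean(axis=-1)

def downscale_2d(array, fx, fy):
    """Separable downscale: a 1D pass along one dimension, then the other,
    mirroring the two groups of one-dimensional downscalers in the abstract."""
    width_done = downscale_1d(array, fx)           # downscale along the width
    height_done = downscale_1d(width_done.T, fy).T  # then along the height
    return height_done

if __name__ == "__main__":
    img = np.arange(16.0).reshape(4, 4)
    print(downscale_2d(img, 2, 2))   # 2x2 box-averaged result
```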
  • Patent number: 12159428
    Abstract: There is provided an item detection device that detects an item to be loaded and unloaded and includes an image acquisition unit acquiring a surrounding image obtained by capturing surroundings of the item detection device, an information image creation unit creating an information image, in which information related to a part to be loaded and unloaded in the item has been converted into an easily recognizable state, on the basis of the surrounding image, and a computing unit computing at least one of a position and a posture of the part to be loaded and unloaded on the basis of the information image.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: December 3, 2024
    Assignees: KABUSHIKI KAISHA TOYOTA JIDOSHOKKI, NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL SCIENCE AND TECHNOLOGY
    Inventors: Yasuyo Kita, Nobuyuki Kita, Ryuichi Takase, Tatsuya Komuro, Norihiko Kato
  • Patent number: 12159393
    Abstract: In some embodiments, a method can include augmenting a set of images of collectables to generate a set of synthetic images of collectables. The method can further include combining the set of images of collectables and the set of synthetic images of collectables to produce a training set. The method can further include training a set of machine learning models based on the training set. Each machine learning model from the set of machine learning models can generate a grade for an image attribute from a set of image attributes. The set of image attributes can include an edge, a corner, a center, or a surface. The method can further include executing, after training, the set of machine learning models to generate a set of grades for an image of collectable not included in the training set.
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: December 3, 2024
    Assignee: Collectors Universe, Inc.
    Inventors: David Shalamberidze, Kevin C. Lenane
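The workflow in 12159393 (augment the captured images, combine them into a training set, then grade each attribute with its own model) might look roughly like the sketch below, with a trivial flip standing in for the synthetic-image generation and placeholder callables standing in for the trained per-attribute models.

```python
import numpy as np

def make_training_set(real_images):
    """Combine captured images with simple synthetic variants (here just a
    horizontal flip, standing in for richer augmentation)."""
    synthetic = [np.fliplr(im) for im in real_images]
    return real_images + synthetic

def grade_collectable(image, attribute_models):
    """Run one trained model per image attribute and collect its grade."""
    return {attr: model(image) for attr, model in attribute_models.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    real = [rng.random((32, 32)) for _ in range(4)]
    training_set = make_training_set(real)          # real + synthetic images
    # Placeholder per-attribute "models"; in practice each would be trained
    # on the training set to grade its own attribute.
    models = {attr: (lambda im: round(float(im.mean()) * 10, 1))
              for attr in ("edge", "corner", "center", "surface")}
    print(grade_collectable(real[0], models))
```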
  • Patent number: 12159435
    Abstract: A point cloud data processing method according to embodiments may comprise: encoding point cloud data; and transmitting the encoded point cloud data. The point cloud data processing method according to embodiments may comprise: receiving point cloud data; and decoding the received point cloud data.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: December 3, 2024
    Assignee: LG Electronics Inc.
    Inventors: Hyunmook Oh, Sejin Oh
  • Patent number: 12136209
    Abstract: An apparatus for assessing a vessel of interest and a corresponding method are provided in which the modeling of the hemodynamic parameters using a fluid dynamics model can be verified by deriving feature values from the segmented vessel of interest and inputting these feature values into a classifier. The classifier may then determine, based on the feature values whether the segmentation has been performed from proximal to distal, from distal to proximal or cannot be determined from the provided data. An incorrect segmentation order can thus be identified and potentially be corrected, thereby avoiding inaccurate simulation results.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: November 5, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Christian Haase, Michael Grass, Martijn Anne Van Lavieren, Cornelis Willem Johannes Immanuel Spoel, Romane Isabelle Marie-Bernard Gauriau, Holger Schmitt, Javier Olivan Bescos
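A toy version of the order check described for 12136209: since vessel radius generally tapers from proximal to distal, the sign of the radius trend along the segmented centerline can hint at the segmentation direction. The feature (radius trend) and the threshold are illustrative assumptions, not the classifier or feature values used in the patent.

```python
import numpy as np

def segmentation_order(radii_along_path, slope_threshold=0.01):
    """Hypothetical sketch: classify segmentation direction from the radius trend."""
    x = np.arange(len(radii_along_path))
    slope = np.polyfit(x, radii_along_path, 1)[0]         # linear trend of radius
    if slope < -slope_threshold:
        return "proximal_to_distal"
    if slope > slope_threshold:
        return "distal_to_proximal"
    return "undetermined"

if __name__ == "__main__":
    print(segmentation_order(np.linspace(3.0, 1.0, 50)))  # proximal_to_distal
    print(segmentation_order(np.linspace(1.0, 3.0, 50)))  # distal_to_proximal
```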
  • Patent number: 12136255
    Abstract: A method for employing a semi-supervised learning approach to improve accuracy of a small model on an edge device is presented. The method includes collecting a plurality of frames from a plurality of video streams generated from a plurality of cameras, each camera associated with a respective small model, each small model deployed in the edge device, sampling the plurality of frames to define sampled frames, performing inference to the sampled frames by using a big model, the big model shared by all of the plurality of cameras and deployed in a cloud or cloud edge, using the big model to generate labels for each of the sampled frames to generate training data, and training each of the small models with the training data to generate updated small models on the edge device.
    Type: Grant
    Filed: January 18, 2022
    Date of Patent: November 5, 2024
    Assignee: NEC Corporation
    Inventors: Yi Yang, Murugan Sankaradas, Srimat Chakradhar
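The edge/cloud loop in 12136255 can be summarized as: sample frames from each camera, pseudo-label them with the shared big model, and retrain each camera's small model on those labels. The sketch below assumes toy models and a fixed sampling stride; none of the names come from the patent.

```python
def refresh_small_models(video_streams, small_models, big_model, sample_every=30):
    """Hypothetical sketch: pseudo-label sampled frames with the big model and
    retrain the per-camera small models on the resulting training data."""
    for camera_id, frames in video_streams.items():
        sampled = frames[::sample_every]                  # sample the stream
        pseudo_labels = [big_model(f) for f in sampled]   # big-model inference (cloud)
        training_data = list(zip(sampled, pseudo_labels))
        small_models[camera_id].train(training_data)      # update the edge model
    return small_models

class ToySmallModel:
    """Placeholder edge model: memorizes the last training batch."""
    def __init__(self):
        self.data = []
    def train(self, batch):
        self.data = batch

if __name__ == "__main__":
    streams = {"cam0": list(range(120)), "cam1": list(range(120))}
    small = {cid: ToySmallModel() for cid in streams}
    big_model = lambda frame: frame % 2                   # stand-in label generator
    refresh_small_models(streams, small, big_model)
    print(len(small["cam0"].data))                        # 4 pseudo-labeled samples
```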
  • Patent number: 12131547
    Abstract: An information processing apparatus (10) includes a time and space information acquisition unit (110) that acquires high-risk time and space information indicating a spatial region with an increased possibility of an accident occurring or of a crime being committed and a corresponding time slot, a possible surveillance target acquisition unit (120) that identifies a video to be analyzed from among a plurality of videos generated by capturing an image of each of a plurality of places, on the basis of the high-risk time and space information, and analyzes the identified video to acquire information of a possible surveillance target, and a target time and space identification unit (130) that identifies at least one of a spatial region where surveillance is to be conducted which is at least a portion of the spatial region or a time slot when surveillance is to be conducted, from among the spatial region and the time slot indicated by the high-risk time and space information, on the basis of the information of the possible surveillance target.
    Type: Grant
    Filed: October 24, 2023
    Date of Patent: October 29, 2024
    Assignee: NEC CORPORATION
    Inventors: Junko Nakagawa, Ryoma Oami, Kenichiro Ida, Mika Saito, Shohzoh Nagahama, Akinari Furukawa, Yasumasa Ohtsuka, Junichi Fukuda, Fumi Ikeda, Manabu Moriyama, Fumie Einaga, Tatsunori Yamagami, Keisuke Hirayama, Yoshitsugu Kumano, Hiroki Adachi
  • Patent number: 12125221
    Abstract: A method for detecting a three-dimensional object in a two-dimensional image includes: inputting the two-dimensional image into an object detection model, and obtaining a resulting detection depth dataset; obtaining, based on the detection depth dataset, coordinate sets of a number of points-of-interest each associated with a to-be-detected object in a 3D camera centered coordinate system; and converting the coordinate sets of the number of points-of-interest in the 3D camera centered coordinate system into a number of coordinate sets in a 3D global coordinate system. Embodiments of this disclosure may be utilized in the field of self-driving cars with roadside traffic cameras.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: October 22, 2024
    Assignee: MERIT LILIN ENT. CO., LTD.
    Inventors: Cheng-Chung Hsu, Chih-Kang Hu, Chi-Yen Cheng, Chia-Wen Ho, Jin-De Song
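The final step in 12125221, converting points of interest from a camera-centered frame to a global frame, is a standard rigid transform. The sketch below assumes the camera's extrinsic rotation and translation are already known; how they are obtained is not part of this illustration.

```python
import numpy as np

def camera_to_global(points_cam, rotation, translation):
    """Convert 3D points from a camera-centered frame to a global frame.

    rotation (3x3) and translation (3,) describe the camera's pose in the
    global coordinate system.
    """
    return points_cam @ rotation.T + translation

if __name__ == "__main__":
    # Points of interest for one detected object, in camera coordinates.
    pts = np.array([[1.0, 0.0, 5.0], [1.2, 0.1, 5.3]])
    # Example pose: camera yawed 90 degrees and mounted 4 m above the origin.
    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
    t = np.array([0.0, 0.0, 4.0])
    print(camera_to_global(pts, R, t))
```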
  • Patent number: 12118787
    Abstract: Methods, system, and computer storage media are provided for multi-modal localization. Input data comprising two modalities, such as image data and corresponding text or audio data, may be received. A phrase may be extracted from the text or audio data, and a neural network system may be utilized to spatially and temporally localize the phrase within the image data. The neural network system may include a plurality of cross-modal attention layers that each compare features across the first and second modalities without comparing features of the same modality. Using the cross-modal attention layers, a region or subset of pixels within one or more frames of the image data may be identified as corresponding to the phrase, and a localization indicator may be presented for display with the image data. Embodiments may also include unsupervised training of the neural network system.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: October 15, 2024
    Assignee: ADOBE INC.
    Inventors: Hailin Jin, Bryan Russell, Reuben Xin Hong Tan
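The cross-modal attention layers in 12118787 compare features across modalities only. The simplified NumPy layer below has text tokens attend over image regions, with no text-to-text or image-to-image terms; the learned projection matrices are omitted for brevity, so this is a structural illustration rather than the patent's network.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text_feats, image_feats):
    """One cross-attention step: text tokens query image regions only."""
    scores = text_feats @ image_feats.T / np.sqrt(text_feats.shape[-1])
    weights = softmax(scores, axis=-1)        # per-token attention over regions
    attended = weights @ image_feats          # image evidence gathered per token
    return attended, weights

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    phrase = rng.normal(size=(3, 64))         # 3 text tokens
    regions = rng.normal(size=(10, 64))       # 10 spatial regions in a frame
    _, w = cross_modal_attention(phrase, regions)
    print(w.argmax(axis=-1))                  # region most associated with each token
```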
  • Patent number: 12117380
    Abstract: Milling with ultraviolet excitation (MUVE) realizes high-throughput multiplex imaging of large three-dimensional samples. The instrumentation may comprise a UV-source attachment, precision stage attachment, and/or a blade assembly, and the instrumentation may overcome several constraints inherent to current state-of-the-art three-dimensional microscopy. MUVE offers throughput that is orders of magnitude faster than other technology by collecting a two-dimensional array of pixels simultaneously. The proposed instrumentation also utilizes serial ablation and provides the opportunity for true whole-organ imaging at microscopic resolution.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: October 15, 2024
    Assignee: University of Houston System
    Inventors: David Mayerich, Jason Eriksen
  • Patent number: 12112456
    Abstract: The present disclosure relates to an image retouching system that automatically retouches digital images by accurately correcting face imperfections such as skin blemishes and redness. For instance, the image retouching system automatically retouches a digital image through separating digital images into multiple frequency layers, utilizing a separate corresponding neural network to apply frequency-specific corrections at various frequency layers, and combining the retouched frequency layers into a retouched digital image. As described herein, the image retouching system efficiently utilizes different neural networks to target and correct skin features specific to each frequency layer.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: October 8, 2024
    Assignee: Adobe Inc.
    Inventors: Federico Perazzi, Jingwan Lu
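The frequency-layer idea in 12112456 can be illustrated with a two-layer split: a Gaussian-blurred base plus a high-frequency residual, each corrected separately and then recombined. The blur-based split and the placeholder correction functions are assumptions standing in for the patent's multi-layer separation and per-layer neural networks.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequency_layers(image, sigma=3.0):
    """Split an image into a low-frequency base and a high-frequency residual."""
    low = gaussian_filter(image, sigma)
    high = image - low
    return low, high

def retouch(image, correct_low, correct_high, sigma=3.0):
    """Apply a frequency-specific correction to each layer, then recombine."""
    low, high = split_frequency_layers(image, sigma)
    return correct_low(low) + correct_high(high)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    face = rng.random((128, 128))
    # Stand-ins for the per-layer networks: keep the base layer as-is and
    # damp blemish-like detail in the high-frequency layer.
    out = retouch(face, lambda low: low, lambda high: 0.5 * high)
    print(out.shape)
```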
  • Patent number: 12112620
    Abstract: A streetlight situational awareness system (SSAS) includes streetlight modules integrated into streetlights. Each module includes a camera configured to detect objects within a predetermined zone along a road. The awareness module includes a lamp array configured to illuminate an area around the streetlight. The system includes a communication network configured to share data about objects within the zone with nearby vehicles and/or neighboring streetlight modules.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: October 8, 2024
    Assignee: LHP, Inc.
    Inventors: Adam Joseph Saenz, Victor Hugo Aguilar, Armando Silvestre Hernandez-Urena, Steven Joseph Neemeh
  • Patent number: 12106487
    Abstract: A technique is described herein that interprets some frames in a stream of video content as key frames and other frames as predicted frames. The technique uses an image analysis system to produce feature information for each key frame. The technique uses a prediction model to produce feature information for each predicted frame. The prediction model operates on two inputs: (1) feature information that has been computed for an immediately-preceding frame; and (2) frame-change information. A motion-determining model produces the frame-change information by computing the change in video content between the current frame being predicted and the immediately-preceding frame. The technique reduces the amount of image-processing operations that are used to process the stream of video content compared to a base case of processing all of the frames using the image analysis system. As such, the technique uses less computing resources compared to the base case.
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: October 1, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mohsen Fayyaz, Hamidreza Vaezi Joze, Eric Chris Wolfgang Sommerlade
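The key-frame/predicted-frame split in 12106487 reduces to the loop sketched below: run the full image analysis only on key frames, and for the remaining frames predict features from the previous frame's features plus frame-change information. The toy analyzer, change function, and predictor are illustrative assumptions, as is the fixed key-frame interval.

```python
import numpy as np

def process_stream(frames, key_every, analyze, predict_features, frame_change):
    """Hypothetical sketch: expensive analysis on key frames, cheap feature
    prediction (previous features + frame change) on the rest."""
    features, prev_frame, prev_feats = [], None, None
    for i, frame in enumerate(frames):
        if i % key_every == 0:
            feats = analyze(frame)                        # full analysis (key frame)
        else:
            change = frame_change(prev_frame, frame)      # motion between frames
            feats = predict_features(prev_feats, change)  # cheap prediction
        features.append(feats)
        prev_frame, prev_feats = frame, feats
    return features

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    frames = [rng.random((8, 8)) for _ in range(10)]
    analyze = lambda f: f.mean(axis=0)                    # toy "feature extractor"
    frame_change = lambda a, b: b - a
    predict = lambda feats, change: feats + change.mean(axis=0)
    print(len(process_stream(frames, key_every=4, analyze=analyze,
                             predict_features=predict, frame_change=frame_change)))
```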