Patents Examined by Dhaval V Patel
  • Patent number: 11976937
    Abstract: A computer-implemented method for interpolation comprises the following steps carried out by computer hardware components: determining an image-like input data structure, the image-like input data structure comprising a plurality of data points; determining a plurality of reference data points as a subset of the plurality of data points of the image-like input data structure; determining an image-like temporary data structure based on the image-like input data structure and a pre-determined processing operation; replacing data points of the image-like temporary data structure corresponding to the plurality of reference data points by the plurality of reference data points to obtain an image-like updated temporary data structure; and determining an image-like output data structure based on the image-like updated temporary data structure.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: May 7, 2024
    Assignee: Aptiv Technologies AG
    Inventors: Mateusz Wojcik, Mateusz Komorkiewicz, Filip Ciepiela, Daniel Dworak
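    Code sketch: a minimal illustration of the interpolation loop described in the abstract above, assuming a 4-neighbour averaging filter as the "pre-determined processing operation" and a fixed iteration count; the patent does not prescribe either choice.
```python
import numpy as np

def interpolate(image, ref_mask, ref_values, iterations=50):
    """Fill an image-like data structure from sparse reference data points.

    image      : 2-D array, the image-like input data structure
    ref_mask   : boolean array, True where a reference data point exists
    ref_values : array holding valid values at the reference positions
    """
    temp = image.copy()
    for _ in range(iterations):
        # pre-determined processing operation: 4-neighbour averaging (assumption)
        padded = np.pad(temp, 1, mode="edge")
        temp = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # replace data points at the reference positions by the reference values
        temp[ref_mask] = ref_values[ref_mask]
    return temp  # image-like output data structure

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((32, 32))
    mask = rng.random((32, 32)) < 0.05          # ~5% sparse reference points
    vals = np.where(mask, rng.random((32, 32)), 0.0)
    out = interpolate(img, mask, vals)
    print(out.shape, out.min(), out.max())
```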
  • Patent number: 11978272
    Abstract: Adapting a machine learning model to process data that differs from training data used to configure the model for a specified objective is described. A domain adaptation system trains the model to process new domain data that differs from a training data domain by using the model to generate a feature representation for the new domain data, which describes different content types included in the new domain data. The domain adaptation system then generates a probability distribution for each discrete region of the new domain data, which describes a likelihood of the region including different content described by the feature representation. The probability distribution is compared to ground truth information for the new domain data to determine a loss function, which is used to refine model parameters. After determining that model outputs achieve a threshold similarity to the ground truth information, the model is output as a domain-agnostic model.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Kai Li, Christopher Alan Tensmeyer, Curtis Michael Wigington, Handong Zhao, Nikolaos Barmpalios, Tong Sun, Varun Manjunatha, Vlad Ion Morariu
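    Code sketch: a toy version of the adaptation loop described above, using a linear softmax classifier over per-region features; the feature extractor, region definition, learning rate, and similarity threshold are assumptions, not taken from the patent.
```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adapt(features, labels, n_classes, lr=0.1, threshold=0.9, max_steps=500):
    """features: (regions, dims) feature representation of the new-domain data
       labels  : (regions,) ground-truth content class for each discrete region"""
    n, d = features.shape
    W = np.zeros((d, n_classes))                      # model parameters to refine
    onehot = np.eye(n_classes)[labels]
    for _ in range(max_steps):
        probs = softmax(features @ W)                 # per-region probability distribution
        # refine parameters with the gradient of the cross-entropy loss
        W -= lr * features.T @ (probs - onehot) / n
        accuracy = (probs.argmax(axis=1) == labels).mean()
        if accuracy >= threshold:                     # threshold similarity to ground truth
            break
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = rng.normal(size=(200, 16))
    labs = rng.integers(0, 3, size=200)
    print(adapt(feats, labs, n_classes=3).shape)
```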
  • Patent number: 11966452
    Abstract: Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: April 23, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
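    Code sketch: one way the overlapping feature-map combination could look, assuming two cameras whose maps overlap by a fixed number of columns and element-wise max as the combination rule; both choices, and the pooling-based cuboid head, are assumptions.
```python
import numpy as np

def combine_feature_maps(map_a, map_b, overlap):
    """map_a, map_b: (H, W, C) spatial feature maps from adjacent cameras.
    The last `overlap` columns of map_a view the same area as the first
    `overlap` columns of map_b."""
    fused = np.maximum(map_a[:, -overlap:], map_b[:, :overlap])
    return np.concatenate([map_a[:, :-overlap], fused, map_b[:, overlap:]], axis=1)

def predict_cuboid(combined):
    """Placeholder cuboid head: reduce the combined map to (x, y, z, l, w, h, yaw)."""
    pooled = combined.mean(axis=(0, 1))      # global average pooling
    return pooled[:7]                        # stand-in for a learned regression head

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.random((48, 64, 16))
    b = rng.random((48, 64, 16))
    combined = combine_feature_maps(a, b, overlap=16)
    print(combined.shape, predict_cuboid(combined))
```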
  • Patent number: 11967151
    Abstract: Embodiments of this application disclose a video classification method performed by a computer device and belonging to the field of computer vision (CV) technologies. The method includes: obtaining a video; selecting n image frames from the video; extracting respective feature information of the n image frames according to a learned feature fusion policy by using a feature extraction network, the learned feature fusion policy indicating the proportions of feature information from the other image frames that are fused with the feature information of a first image frame among the n image frames; and determining a classification result of the video according to the respective feature information of the n image frames. By replacing complex and repeated 3D convolution operations with simple fusion of feature information between adjacent image frames, the time needed to obtain the final classification result of the video is reduced, yielding high efficiency.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: April 23, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yan Li, Xintian Shi, Bin Ji
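    Code sketch: a rough illustration of fusing feature information between adjacent frames, assuming a fixed mixing proportion per neighbour (in the patent these proportions are learned) and a simple mean-pool plus linear classifier.
```python
import numpy as np

def fuse_adjacent(features, prev_w=0.2, next_w=0.2):
    """features: (n, d) per-frame feature vectors for the n selected frames.
    Mix a proportion of each neighbour's features into the current frame."""
    fused = (1.0 - prev_w - next_w) * features
    fused[1:] += prev_w * features[:-1]      # proportion fused from the previous frame
    fused[:-1] += next_w * features[1:]      # proportion fused from the next frame
    fused[0] += prev_w * features[0]         # edge frames keep their own share
    fused[-1] += next_w * features[-1]
    return fused

def classify(features, W):
    pooled = fuse_adjacent(features).mean(axis=0)
    return int((pooled @ W).argmax())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frame_feats = rng.random((8, 32))        # n = 8 selected frames, d = 32 features
    W = rng.random((32, 5))                  # toy classifier over 5 video classes
    print(classify(frame_feats, W))
```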
  • Patent number: 11967067
    Abstract: A candidate generator generates a set of candidate three-dimensional image patches from an input volume. A candidate classifier classifies the set of candidate three-dimensional image patches as containing or not containing disease. Classifying the set of candidate three-dimensional image patches comprises generating an attention mask for each given candidate three-dimensional image patch within the set of candidate three-dimensional image patches to form a set of attention masks, applying the set of attention masks to the set of candidate three-dimensional image patches to form a set of masked image patches, and classifying the set of masked image patches as containing or not containing the disease. The candidate classifier applies soft attention and hard attention to the three-dimensional image patches such that distinctive image regions are highlighted proportionally to their contribution to classification while completely removing image regions that may cause confusion.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: April 23, 2024
    Inventors: Shafiqul Abedin, Hongzhi Wang, Ehsan Dehghan Marvast, David James Beymer
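    Code sketch: an illustrative combination of soft and hard attention over a 3-D image patch; the saliency computation, the hard-attention threshold, and the classifier are stand-ins for the patented networks.
```python
import numpy as np

def attention_mask(patch, hard_threshold=0.3):
    """Soft attention: per-voxel weights in [0, 1].
    Hard attention: voxels whose weight falls below the threshold are removed."""
    soft = (patch - patch.min()) / (np.ptp(patch) + 1e-8)   # toy saliency stand-in
    hard = (soft >= hard_threshold).astype(patch.dtype)
    return soft * hard

def classify_patch(patch, decision_threshold=0.0):
    mask = attention_mask(patch)
    masked = patch * mask                                   # masked image patch
    score = masked.mean()                                   # stand-in for a CNN classifier
    return score > decision_threshold                       # "contains disease" decision

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    candidate = rng.normal(size=(16, 16, 16))               # candidate 3-D image patch
    print(classify_patch(candidate))
```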
  • Patent number: 11961304
    Abstract: Examples disclosed herein may involve a computing system that is operable to (i) receive a first sequence of images captured by a monocular camera associated with a vehicle during a given period of operation and a second sequence of image pairs captured by a stereo camera associated with the vehicle during the given period of operation, (ii) derive, from the first sequence of images captured by the monocular camera, a first track for a given agent that comprises a first sequence of position information for the given agent, (iii) derive, from the second sequence of image pairs captured by the stereo camera, a second track for the given agent that comprises a second sequence of position information for the given agent, and (iv) determine a trajectory for the given agent based on the first and second tracks for the given agent.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: April 16, 2024
    Assignee: Lyft, Inc.
    Inventors: Lorenzo Peppoloni, Michal Witkowski
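    Code sketch: a simplified fusion of a monocular track and a stereo track into a single trajectory, assuming time-aligned samples and fixed per-source weights; the patent does not prescribe this particular fusion rule.
```python
import numpy as np

def fuse_tracks(mono_track, stereo_track, mono_weight=0.3, stereo_weight=0.7):
    """mono_track, stereo_track: (T, 2) sequences of (x, y) position information
    for the same agent over the same period of operation."""
    assert mono_track.shape == stereo_track.shape
    return mono_weight * mono_track + stereo_weight * stereo_track

if __name__ == "__main__":
    t = np.linspace(0.0, 5.0, 11)
    mono = np.stack([t, 0.9 * t], axis=1)        # track derived from monocular images
    stereo = np.stack([t, 1.1 * t], axis=1)      # track derived from stereo image pairs
    trajectory = fuse_tracks(mono, stereo)
    print(trajectory[:3])
```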
  • Patent number: 11961228
    Abstract: A medical image acquisition unit acquires a medical image obtained by imaging an observation target. A feature amount calculation unit calculates a feature amount of the observation target for each pixel of an image region of the medical image or for each divided region obtained by dividing the image region of the medical image into a specific size. A stage determination unit calculates a distribution index value which is an index value of the spatial distribution of the feature amount of each divided region, and determines the disease stage of the observation target based on the distribution index value.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: April 16, 2024
    Assignee: FUJIFILM Corporation
    Inventor: Tatsuya Aoyama
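    Code sketch: a toy version of the stage determination above, assuming mean intensity as the per-region feature amount, the coefficient of variation as the distribution index value, and arbitrary stage thresholds.
```python
import numpy as np

def region_features(image, region=16):
    """Mean feature amount for each divided region of the image."""
    h, w = image.shape
    feats = []
    for y in range(0, h - region + 1, region):
        for x in range(0, w - region + 1, region):
            feats.append(image[y:y + region, x:x + region].mean())
    return np.array(feats)

def determine_stage(image):
    feats = region_features(image)
    index = feats.std() / (feats.mean() + 1e-8)   # spatial-distribution index value
    if index < 0.1:
        return "stage 0"
    if index < 0.3:
        return "stage 1"
    return "stage 2"

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    medical_image = rng.random((128, 128))
    print(determine_stage(medical_image))
```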
  • Patent number: 11950946
    Abstract: A technique for automating the identifying of a measurement point in cephalometric image analysis is provided. An automatic measurement point recognition method includes a step of detecting, from a cephalometric image 14 acquired from a subject, a plurality of peripheral partial regions 31, 32, 33, 34 for recognizing a target feature point, a step of estimating a candidate position of the feature point in each of the peripheral partial regions 31, 32, 33, 34 by the application of a regression CNN model 10, and a step of determining the position of the feature point in the cephalometric image 14 based on the distribution of the candidate positions estimated. In the step of detecting, for example, the peripheral partial region 32, a classification CNN model 13 trained with a control image 52 is applied.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: April 9, 2024
    Assignee: OSAKA UNIVERSITY
    Inventors: Chihiro Tanikawa, Chonho Lee
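    Code sketch: an illustration of aggregating per-region candidate positions into one landmark position; the regression CNN is replaced here by a brightest-pixel stand-in, which is purely for demonstration.
```python
import numpy as np

def candidate_position(image, top, left, size):
    """Estimate a candidate feature-point position inside one peripheral
    partial region (stand-in for the regression CNN model)."""
    region = image[top:top + size, left:left + size]
    dy, dx = np.unravel_index(region.argmax(), region.shape)
    return np.array([top + dy, left + dx])

def locate_feature_point(image, regions, size=32):
    candidates = np.array([candidate_position(image, t, l, size) for t, l in regions])
    return np.median(candidates, axis=0)          # robust to outlying candidates

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    ceph = rng.random((256, 256))
    ceph[120, 140] = 5.0                          # a bright landmark-like point
    peripheral = [(100, 120), (96, 128), (110, 118), (104, 124)]
    print(locate_feature_point(ceph, peripheral))
```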
  • Patent number: 11950922
    Abstract: A monitoring system for obtaining a video related to medication adherence of a user, includes: a wireless communication device comprising a motion sensor, an ambient light sensor, a first transceiver transmitting data to an external device, and a first controller configured to control the motion sensor, the ambient light sensor, and the first transceiver, the wireless communication device having an attaching portion for being attached to an object containing a medication; and a wearable device including a camera, a second transceiver receiving a signal from the first transceiver, and a second controller configured to obtain video data through the camera based on the signal received through the second transceiver.
    Type: Grant
    Filed: April 21, 2023
    Date of Patent: April 9, 2024
    Assignee: INHANDPLUS INC.
    Inventors: Hwiwon Lee, Nam Eok Kim
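    Code sketch: a schematic of the trigger flow in the entry above, in which the sensor unit on the medication container signals the wearable, which then records video; the classes and thresholds are illustrative assumptions, not the product's API.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WirelessSensorUnit:
    """Attached to the medication container; watches motion and ambient light."""
    motion_threshold: float = 0.5
    light_threshold: float = 0.5

    def should_signal(self, motion: float, light: float) -> bool:
        # e.g. the container was picked up and/or opened
        return motion > self.motion_threshold or light > self.light_threshold

@dataclass
class WearableDevice:
    recordings: List[str] = field(default_factory=list)

    def record(self, timestamp: str) -> None:
        self.recordings.append(f"video clip starting at {timestamp}")

if __name__ == "__main__":
    sensor = WirelessSensorUnit()
    wearable = WearableDevice()
    if sensor.should_signal(motion=0.8, light=0.2):
        wearable.record("2024-01-01T08:00:00")
    print(wearable.recordings)
```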
  • Patent number: 11950948
    Abstract: Radiographic imaging techniques include the use of an exposure sequence having a plurality of specimen exposure windows. The plurality of specimen exposure windows have a total exposure time period that satisfies a desired exposure time. The plurality of exposure frames resulting from the plurality of specimen exposure windows are summed to form an output frame. The output frame is provided as output.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: April 9, 2024
    Assignee: HOLOGIC, INC.
    Inventors: Cornell Lee Williams, Brad Polischuk
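    Code sketch: a minimal illustration of the exposure-splitting idea above, assuming equal-length specimen exposure windows and Poisson-distributed toy detector reads; the frames are simply summed into the output frame.
```python
import numpy as np

def split_exposure(total_time, window_time):
    """Return specimen exposure windows whose durations add up to the desired time."""
    n = int(round(total_time / window_time))
    return [window_time] * n

def sum_frames(frames):
    return np.sum(frames, axis=0)             # output frame

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    windows = split_exposure(total_time=1.0, window_time=0.25)
    frames = [rng.poisson(10, size=(64, 64)) for _ in windows]   # toy exposure frames
    output = sum_frames(frames)
    print(len(windows), output.shape, output.mean())
```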
  • Patent number: 11954935
    Abstract: An electronic device with improved object detection performance when executing a specific function. A system controller detects one or more objects from image data, evaluates feature values of the one or more detected objects, and, based on a result of the evaluation, identifies an object among them that satisfies a predetermined criterion related to execution of a specific function. The specific function is executed when the identified object is determined to satisfy an execution condition of the specific function.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: April 9, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Atsushi Mikawa
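    Code sketch: a schematic of the selection logic in the abstract above; the feature values, the predetermined criterion, and the "specific function" are placeholders chosen for illustration.
```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectedObject:
    label: str
    sharpness: float      # example feature value
    size: float           # example feature value

def identify_object(objects: List[DetectedObject], min_size: float = 0.1) -> Optional[DetectedObject]:
    """Pick the object that best satisfies the criterion related to the specific function."""
    eligible = [o for o in objects if o.size >= min_size]        # predetermined criterion
    return max(eligible, key=lambda o: o.sharpness, default=None)

def maybe_execute(objects: List[DetectedObject]) -> bool:
    target = identify_object(objects)
    if target is not None and target.sharpness > 0.5:            # execution condition
        print(f"executing specific function on {target.label}")  # e.g. auto-focus/shutter
        return True
    return False

if __name__ == "__main__":
    detections = [DetectedObject("face", 0.8, 0.3), DetectedObject("bird", 0.9, 0.05)]
    print(maybe_execute(detections))
```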
  • Patent number: 11948401
    Abstract: Various embodiments of devices, systems, and methods for providing AI-based physical function assessment recordings and assessment performance analytics for a subject are described. A series of video frames are obtained that include the subject. Computer vision techniques that use artificial neural networks may be applied to the video frames to: detect a Person of Interest (POI) and an Object of Interest (OOI) in the video frames; track movement of the POI and the location of the OOI in subsequent video frames; detect body key points; and detect postures and posture transitions of the POI. Physical function indicators may be calculated and function analytics provided based on the assessment.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: April 2, 2024
    Assignee: Nightingale.ai Corp.
    Inventors: Chao Bian, Peter Tanugraha, Charlene Chu
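    Code sketch: one possible physical-function indicator derived from detected postures, namely counting sit-to-stand transitions from a per-frame posture sequence; the key-point and posture detection networks themselves are outside this sketch.
```python
from typing import List

def count_sit_to_stand(postures: List[str]) -> int:
    """postures: per-frame posture labels for the Person of Interest."""
    transitions = 0
    for prev, curr in zip(postures, postures[1:]):
        if prev == "sitting" and curr == "standing":
            transitions += 1
    return transitions

def assessment_report(postures: List[str], fps: float = 30.0) -> dict:
    return {
        "duration_s": len(postures) / fps,
        "sit_to_stand_count": count_sit_to_stand(postures),
    }

if __name__ == "__main__":
    frames = ["sitting"] * 60 + ["standing"] * 60 + ["sitting"] * 60 + ["standing"] * 60
    print(assessment_report(frames))
```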
  • Patent number: 11947631
    Abstract: An electronic device and method for reverse image search is provided. The electronic device receives an image. The electronic device extracts, by a DNN model, a first set of image features associated with the image and generates a first feature vector based on the first set of image features. The electronic device extracts, by an image-feature detection model, a second set of image features associated with the image and generates a second feature vector based on the second set of image features. The electronic device generates a third feature vector based on combination of the first and second feature vectors. The electronic device determines a similarity metric between the third feature vector and a fourth feature vector of each of a set of pre-stored images and identifies a pre-stored image based on the similarity metric. The electronic device controls a display device to display information associated with the pre-stored image.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: April 2, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Jong Hwa Lee, Praggya Garg
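    Code sketch: a compact version of the retrieval step above, assuming the two feature vectors are combined by concatenation and compared by cosine similarity against pre-stored image vectors; the DNN and image-feature extractors are not shown.
```python
import numpy as np

def combine(dnn_features, handcrafted_features):
    return np.concatenate([dnn_features, handcrafted_features])   # third feature vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reverse_search(query_vec, stored_vecs):
    similarities = [cosine(query_vec, s) for s in stored_vecs]
    best = int(np.argmax(similarities))
    return best, similarities[best]

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    query = combine(rng.random(128), rng.random(32))
    database = [combine(rng.random(128), rng.random(32)) for _ in range(100)]
    index, score = reverse_search(query, database)
    print(index, round(score, 3))
```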
  • Patent number: 11948366
    Abstract: Embodiments herein provide various systems and methods for automated classification of vehicle reads to build vehicle identification profile and for automated vehicle identification using content extracted from an image frame of a vehicle to identify a most probable vehicle identification profile. An example method comprises capturing, by a camera, a read comprising an image frame including a portion of a vehicle; identifying at least one of a license plate number and a descriptor of the vehicle using image processing on the read; determining a probability value that the read includes the vehicle based on the identified at least one of the license plate number and the descriptor; when the probability value exceeds a threshold value, identifying a vehicle identification profile in a database using the identified at least one of the license plate number and the descriptor; and updating the vehicle identification profile to include the captured read.
    Type: Grant
    Filed: April 13, 2022
    Date of Patent: April 2, 2024
    Assignee: Neology, Inc.
    Inventors: Peter Crary, Peter Istenes, Dave Bynum, Israel Padilla
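    Code sketch: a simplified read-to-profile flow matching the abstract above: a read is kept only if its probability of containing the vehicle exceeds a threshold, then attached to a matching vehicle identification profile; the probability model and matching key are stand-ins.
```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Read:
    plate: Optional[str]
    descriptor: Optional[str]   # e.g. "blue sedan"
    probability: float          # confidence that the read includes the vehicle

@dataclass
class VehicleProfile:
    plate: str
    descriptor: str
    reads: List[Read] = field(default_factory=list)

def ingest(read: Read, profiles: Dict[str, VehicleProfile], threshold: float = 0.7) -> None:
    if read.probability < threshold:
        return                                              # discard low-confidence read
    key = read.plate or read.descriptor or "unknown"
    profile = profiles.setdefault(key, VehicleProfile(read.plate or "", read.descriptor or ""))
    profile.reads.append(read)                              # update the identification profile

if __name__ == "__main__":
    db: Dict[str, VehicleProfile] = {}
    ingest(Read("ABC123", "blue sedan", 0.92), db)
    ingest(Read("ABC123", "blue sedan", 0.41), db)          # below threshold, ignored
    print({k: len(v.reads) for k, v in db.items()})
```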
  • Patent number: 11941827
    Abstract: A computer-implemented method of performing a three-dimensional (3D) point cloud registration with multiple two-dimensional (2D) images may include estimating a mathematical relationship between 3D roto-translations of dominant planes of objects in a 3D point cloud and bi-dimensional homographies in a 2D image plane, thereby resulting in a 3D point cloud registration using multiple 2D images. A trained classifier may be used to determine correspondence between homography matrices and inferred motion of the dominant plane(s) on a 3D point cloud for paired image frames. A homography matrix between the paired images of the dominant plane(s) on the 2D image plane may be selected based on the correspondence between the inferred motions and measured motion of the dominant plane(s) on the 3D point cloud for the paired image frames. The process may be less computationally intensive than conventional 2D-3D registration approaches.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: March 26, 2024
    Assignee: Datalogic IP Tech S.R.L.
    Inventors: Francesco D'Ercoli, Marco Cumoli
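    Code sketch: a coarse illustration of the selection step above: each candidate homography is mapped to an inferred motion of the dominant plane (here by a toy linear map standing in for the trained classifier), and the candidate whose inferred motion best matches the motion measured on the point cloud is selected.
```python
import numpy as np

def infer_motion(homography, mapping):
    """Toy stand-in for the trained model: flatten H and apply a linear map
    to obtain (tx, ty, tz, roll, pitch, yaw)."""
    return mapping @ homography.reshape(9)

def select_homography(candidates, measured_motion, mapping):
    errors = [np.linalg.norm(infer_motion(H, mapping) - measured_motion) for H in candidates]
    best = int(np.argmin(errors))
    return candidates[best], errors[best]

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    mapping = rng.normal(size=(6, 9))
    candidates = [np.eye(3) + 0.01 * rng.normal(size=(3, 3)) for _ in range(5)]
    measured = infer_motion(candidates[2], mapping)      # pretend candidate 2 is correct
    H, err = select_homography(candidates, measured, mapping)
    print(np.allclose(H, candidates[2]), round(err, 6))
```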
  • Patent number: 11935311
    Abstract: The present disclosure provides systems and methods for detecting components of an array of biological, chemical, or physical entities. In an aspect, the present disclosure provides a method for detecting an array of biological, chemical, or physical entities, comprising: (a) using one or more light sensing devices, acquiring pixel information from sites in an array, wherein the sites comprise biological, chemical, or physical entities that produce light; (b) processing the pixel information to identify a set of regions of interest (ROIs) corresponding to the sites in the array that produce the light; (c) classifying the pixel information for the ROIs into a categorical classification from among a plurality of distinct categorical classifications, thereby producing a plurality of pixel classifications; and (d) identifying one or more components of the array of biological, chemical, or physical entities based at least in part on the plurality of pixel classifications.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: March 19, 2024
    Assignee: Nautilus Subsidiary, Inc.
    Inventors: Jarrett D. Egertson, Vadim Lobanov, David Stern, Parag Mallick, Sujal M. Patel, Ryan K. Seghers
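    Code sketch: a simplified walk through steps (a)-(d) above for a regular array: pixel blocks at known site positions serve as ROIs, each ROI is classified into one of several intensity categories, and components are called from the per-site classifications; the grid layout and class boundaries are assumptions.
```python
import numpy as np

def site_rois(image, grid=8, size=16):
    """Mean pixel intensity for each site ROI on a grid x grid array."""
    rois = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            rois[r, c] = image[r * size:(r + 1) * size, c * size:(c + 1) * size].mean()
    return rois

def classify_rois(rois, edges=(0.33, 0.66)):
    """0 = dark, 1 = dim, 2 = bright: categorical pixel classifications."""
    return np.digitize(rois, edges)

def identify_components(classes):
    """Toy component call: a site is counted as containing a light-producing entity if bright."""
    return np.argwhere(classes == 2)

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    frame = rng.random((128, 128))
    classes = classify_rois(site_rois(frame))
    print(identify_components(classes)[:5])
```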
  • Patent number: 11934379
    Abstract: The method for address verification preferably includes: receiving an unverified address; parsing the unverified address into address elements; determining a candidate address set based on the address elements; determining an address comparison set from the verified address database; selecting an intended address from the address comparison set; optionally facilitating use of the intended address; and optionally determining and providing a call to action based on the intended address.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: March 19, 2024
    Assignee: Lob.com, Inc.
    Inventors: Marcus Gartner, David Currie
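    Code sketch: a bare-bones version of the verification pipeline above: parse the unverified address into elements, pull candidates from a verified address database by shared elements, and select the closest candidate as the intended address; the parsing rules, example addresses, and similarity measure are simplifications.
```python
from difflib import SequenceMatcher
from typing import List, Optional

VERIFIED = [
    "185 Berry St, San Francisco, CA 94107",
    "1600 Amphitheatre Pkwy, Mountain View, CA 94043",
]

def parse(address: str) -> List[str]:
    return [token.strip(",").lower() for token in address.split()]

def candidates(elements: List[str], database: List[str]) -> List[str]:
    return [a for a in database if any(e in a.lower() for e in elements)]

def select_intended(unverified: str, database: List[str]) -> Optional[str]:
    pool = candidates(parse(unverified), database) or database
    return max(pool,
               key=lambda a: SequenceMatcher(None, unverified.lower(), a.lower()).ratio(),
               default=None)

if __name__ == "__main__":
    print(select_intended("185 berry street san francisco", VERIFIED))
```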
  • Patent number: 11923947
    Abstract: A method for random access for beam failure recovery. In the method, a random access configuration for the beam failure recovery is received. In the event of a beam failure, a random access procedure is performed according to the random access configuration.
    Type: Grant
    Filed: April 20, 2023
    Date of Patent: March 5, 2024
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Rui Fan, Icaro L. J. Da Silva, Helka-Liina Määttanen
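    Code sketch: a high-level illustration of the recovery flow above; the configuration fields and the candidate-beam selection rule are illustrative assumptions, not 3GPP signalling definitions.
```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class RandomAccessConfig:          # received from the network for beam failure recovery
    preamble_by_beam: Dict[int, int]
    rsrp_threshold_dbm: float

def on_beam_failure(measured_rsrp: Dict[int, float], config: RandomAccessConfig) -> Optional[dict]:
    """On beam failure, pick a configured candidate beam above the threshold and
    start the random access procedure on it."""
    candidates = {b: p for b, p in measured_rsrp.items()
                  if b in config.preamble_by_beam and p >= config.rsrp_threshold_dbm}
    if not candidates:
        return None                                    # e.g. fall back to contention-based RA
    beam = max(candidates, key=candidates.get)
    return {"beam": beam, "preamble": config.preamble_by_beam[beam]}

if __name__ == "__main__":
    cfg = RandomAccessConfig(preamble_by_beam={0: 11, 1: 12, 2: 13}, rsrp_threshold_dbm=-110.0)
    print(on_beam_failure({0: -120.0, 1: -104.0, 2: -108.0}, cfg))
```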
  • Patent number: 11915481
    Abstract: Detecting security events and generating corresponding natural language descriptors includes monitoring an area to capture data corresponding to moving objects in the area, classifying the moving objects, generating events based on classifying the moving objects, building an event graph by connecting related ones of the events, using the event graph to detect security events, and building natural language activity descriptors for the security events of the event graph using natural language templates to convert the security events to natural language. The natural language security descriptors may be presented using a verbal request to a voice-enabled assistant, a mandatory notification by the voice-enabled assistant, periodic reports and/or conversational style notifications in a visual format. Data may be captured using sensors, video streams from at least one camera vehicle, smart home devices, presence detection mechanisms, and/or weather data/forecasts.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: February 27, 2024
    Assignee: Sunflower Labs Inc.
    Inventors: Yannik S. Nager, Christian Eheim, Alexander S. Pachikov
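    Code sketch: a compact walk through the pipeline above: classified-object events are linked into a simple event graph, a pattern over the graph is flagged as a security event, and a natural-language template renders it; the event schema, linking rule, and template text are assumptions.
```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    time: float
    label: str        # classification of the moving object, e.g. "person"
    zone: str

def build_event_graph(events: List[Event], window: float = 30.0):
    """Connect events that are close in time (a minimal notion of 'related')."""
    edges = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if abs(b.time - a.time) <= window:
                edges.append((a, b))
    return edges

def detect_security_events(edges):
    """Flag a person seen at the front door followed by activity in the backyard."""
    return [(a, b) for a, b in edges
            if a.label == "person" and a.zone == "front door" and b.zone == "backyard"]

def describe(security_events):
    template = "A {label} was detected at the {zone_a} and then moved to the {zone_b}."
    return [template.format(label=a.label, zone_a=a.zone, zone_b=b.zone)
            for a, b in security_events]

if __name__ == "__main__":
    evts = [Event(0.0, "person", "front door"), Event(12.0, "person", "backyard")]
    print(describe(detect_security_events(build_event_graph(evts))))
```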
  • Patent number: 11915497
    Abstract: A control device includes a control unit configured to: acquire a first image, a second image and report information, the first image being an image resulting from photographing a vehicle before a user gets in the vehicle, the second image being an image resulting from photographing the vehicle after the user gets out of the vehicle, the report information being relevant to a change in a state of the vehicle and being reported by the user; detect the change in the state of the vehicle based on the first image and the second image; and evaluate the user based on the detected change and the report information.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: February 27, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Ai Miyata
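    Code sketch: a minimal version of the evaluation logic above, where a pixel difference between the before-ride and after-ride images stands in for the change detector and the user's evaluation depends on whether a detected change was self-reported; the threshold and scoring are illustrative.
```python
import numpy as np

def detect_change(first_image, second_image, threshold=0.1):
    """Return True if the vehicle's state appears to have changed."""
    return float(np.abs(second_image - first_image).mean()) > threshold

def evaluate_user(first_image, second_image, user_reported_change: bool) -> str:
    changed = detect_change(first_image, second_image)
    if changed and user_reported_change:
        return "good: change reported by the user"
    if changed and not user_reported_change:
        return "poor: unreported change detected"
    return "neutral: no change detected"

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    before = rng.random((32, 32))
    after = before.copy()
    after[:16, :16] += 0.8                     # e.g. a stain left on a seat
    print(evaluate_user(before, after, user_reported_change=False))
```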