Patents Examined by Julia Z. Yao
  • Patent number: 12293540
    Abstract: An apparatus for constructing kinematic information of a robot manipulator is provided. The apparatus includes: a robot image acquisition part for acquiring a robot image containing shape information and coordinate information of the robot manipulator; a feature detection part for detecting the type of each of a plurality of joints of the robot manipulator and the three-dimensional coordinates of the joint using a feature detection model generated through deep learning based on the robot image containing shape information and coordinate information; and a variable derivation part for deriving Denavit-Hartenberg (DH) parameters based on the type of each of the plurality of joints of the robot manipulator and the three-dimensional coordinates of the joint.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: May 6, 2025
    Assignee: KOREA INSTITUTE OF ROBOT AND CONVERGENCE
    Inventors: Junyoung Lee, Maolin Jin, Sang Hyun Park, Bumgyu Kim
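The derivation step in the abstract above ends in Denavit-Hartenberg parameters. For orientation, the sketch below shows the standard DH convention in NumPy: each joint's four parameters (theta, d, a, alpha) define one homogeneous transform, and chaining them yields the manipulator's forward kinematics. This is a generic textbook illustration, not the patented derivation from detected joint types and coordinates.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link under the standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain per-joint transforms; dh_params is a list of (theta, d, a, alpha) tuples."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # end-effector pose in the base frame
```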
  • Patent number: 12288324
    Abstract: A computer-implemented method for providing a scene with synthetic contrast includes receiving preoperative image data of an examination region containing a hollow organ, wherein the preoperative image data images a contrast agent flow in the hollow organ; receiving intraoperative image data of the examination region of the examination subject, wherein the intraoperative image data images a medical object at least partially disposed in the hollow organ; generating the scene with synthetic contrast by applying a trained function to input data, wherein the input data is based on the preoperative image data and the intraoperative image data, wherein the scene with synthetic contrast images a virtual contrast agent flow in the hollow organ taking into account the medical object disposed therein, wherein at least one parameter of the trained function is based on a comparison between a training scene and a comparison scene; and providing the scene with synthetic contrast.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: April 29, 2025
    Assignee: Siemens Healthineers AG
    Inventor: Tobias Lenich
  • Patent number: 12283039
    Abstract: The present disclosure in some embodiments provides a deep learning-based product inspection method and system for detecting product defects. A pre-developed deep learning-based classification model is linked to interwork with the existing product inspection system, and the classification model is maintained and supplemented by fine-tuning it through instant correction of its errors, thereby improving the accuracy of product quality inspection.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: April 22, 2025
    Assignee: HYUNDAI MOBIS CO., LTD.
    Inventors: Tae Hyun Kim, Hye Rin Kim, Yeong Jun Cho
  • Patent number: 12270662
    Abstract: Examples disclosed herein may involve a computing system that is operable to (i) receive first data of one or more geographical environments from a first type of localization sensor, (ii) receive second data of the one or more geographical environments from a second type of localization sensor, (iii) determine constraints from the first data and the second data, (iv) determine shared pose data associated with both of the first data and the second data using the constraints determined from both the first data and the second data by determining one or more sequences of common poses between respective poses generated from each of the first and second data, wherein the shared pose data provides a common coordinate frame for the first data and the second data, and (v) generate a map of the one or more geographical environments using the determined shared pose data.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: April 8, 2025
    Assignee: Lyft, Inc.
    Inventors: Wolfgang Hess, Luca Del Pero, Daniel Sievers, Holger Rapp
  • Patent number: 12266082
    Abstract: A method of recursive filtering of image data includes receiving sequentially a plurality of original pixel values, multiplying an original pixel value of a current pixel of the plurality of pixel values by a dynamically varying recursion coefficient, adding recursively filtered pixel values from left and right neighbors of the current pixel, retrieved from a memory buffer holding filtered pixel data from a previous image line, multiplying the added recursively filtered pixel values by 1 minus the dynamically varying recursion coefficient, adding the two multiplication results together to yield a filtered pixel value for the current pixel, writing the filtered pixel value for the current pixel back into the memory buffer, and displaying the filtered pixel value on a display.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: April 1, 2025
    Assignee: V-Silicon Semiconductor (Hefei) Co., Ltd
    Inventor: Jeroen Maria Kettenis
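A minimal NumPy sketch of the line-recursive scheme described in the abstract above: each row is blended, pixel by pixel, between its original value and the filtered values of its neighbours taken from a one-line buffer holding the previously filtered row. The per-pixel coefficient array `k`, the averaging of the two neighbour values (the abstract adds them outright), and the wrap-around edge handling are assumptions made to keep the example self-contained.

```python
import numpy as np

def recursive_filter_image(image, k):
    """image: (H, W) float array; k: (H, W) per-pixel recursion coefficients in [0, 1]."""
    h, w = image.shape
    out = np.empty_like(image)
    buffer = image[0].copy()          # memory buffer: filtered data of the previous line
    out[0] = buffer
    for y in range(1, h):
        left = np.roll(buffer, 1)     # filtered left neighbour from the previous line
        right = np.roll(buffer, -1)   # filtered right neighbour from the previous line
        neighbours = 0.5 * (left + right)
        filtered = k[y] * image[y] + (1.0 - k[y]) * neighbours
        buffer = filtered             # write the filtered line back into the buffer
        out[y] = filtered
    return out
```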
  • Patent number: 12254713
    Abstract: Provided are an in-store food and beverage transfer and collection system using image analysis and a method of transferring and collecting food and beverage in a store using the same in-store food and beverage transfer and collection system. The in-store food and beverage transfer and collection system using image analysis includes: an image obtaining unit configured to obtain an image of a predetermined area and comprising a plurality of cameras; a processing unit configured to process a control command for processing an order of a customer by using an image obtained by the image obtaining unit; and a transfer robot configured to obtain order information of the customer according to the control command of the processing unit, and transfer food and beverage that are ordered.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: March 18, 2025
    Assignee: XYZ, Inc.
    Inventor: Sung Jae Hwang
  • Patent number: 12249157
    Abstract: A method for detecting movements of a body of a first motor vehicle includes recording image and sensor data by a camera and sensor device of a second motor vehicle. The image and sensor data represent that part of the environment of the second motor vehicle that contains the first motor vehicle. The image and sensor data are forwarded to a data analysis device that detects movements of the body of the first motor vehicle and uses artificial intelligence algorithms and machine image analysis to process the image and sensor data to classify movements of the vehicle body. The classified movements of the vehicle body are assigned to at least one of a set of defined states. Output data from the determined state are generated for further use in an automated driving function and/or for a user interface.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: March 11, 2025
    Assignee: Dr. Ing. h. c. F. Porsche AG
    Inventors: Yannik Peters, Matthias Stadelmayer
  • Patent number: 12243148
    Abstract: A method comprising: dividing a 3D space into a voxel grid comprising a plurality of voxels; associating a plurality of distance values with the plurality of voxels, each distance value based on a distance to a boundary of an object; selecting an approximate interpolation mode for stepping a ray through a first one or more voxels of the 3D space responsive to the first one or more voxels having distance values greater than a first threshold; detecting the ray reaching a second one or more voxels having distance values less than the first threshold; and responsively selecting a precise interpolation mode for stepping the ray through the second one or more voxels.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: March 4, 2025
    Assignee: Intel Corporation
    Inventors: Vivek De, Ram Krishnamurthy, Amit Agarwal, Steven Hsu, Monodeep Kar
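A rough sketch of the two-mode ray stepping idea shared by this abstract and the related Intel grant that follows: while stored distances stay above the threshold, a cheap nearest-voxel lookup drives large sphere-tracing-style steps; once they drop below it, the sampler switches to trilinear interpolation. The grid layout, hit epsilon, and step rule are assumptions, not the claimed implementation.

```python
import numpy as np

def nearest_distance(grid, p):
    """Approximate mode: nearest-voxel lookup of the stored distance."""
    i = np.clip(np.round(p).astype(int), 0, np.array(grid.shape) - 1)
    return grid[tuple(i)]

def trilinear_distance(grid, p):
    """Precise mode: trilinear interpolation over the eight surrounding voxels."""
    p0 = np.clip(np.floor(p).astype(int), 0, np.array(grid.shape) - 2)
    f = p - p0
    d = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (f[0] if dx else 1 - f[0]) * \
                    (f[1] if dy else 1 - f[1]) * \
                    (f[2] if dz else 1 - f[2])
                d += w * grid[p0[0] + dx, p0[1] + dy, p0[2] + dz]
    return d

def march_ray(grid, origin, direction, threshold, eps=1e-3, max_steps=256):
    """Step a ray through a voxelized distance grid, switching interpolation modes."""
    p = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    for _ in range(max_steps):
        d = nearest_distance(grid, p)
        if d <= threshold:                  # near a boundary: use the precise mode
            d = trilinear_distance(grid, p)
        if d < eps:
            return p                        # surface hit
        p = p + d * direction               # advance by the stored distance
    return None                             # no hit within the step budget
```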
  • Patent number: 12223615
    Abstract: A method comprising: dividing a 3D space into a voxel grid comprising a plurality of voxels; associating a plurality of distance values with the plurality of voxels, each distance value based on a distance to a boundary of an object; selecting an approximate interpolation mode for stepping a ray through a first one or more voxels of the 3D space responsive to the first one or more voxels having distance values greater than a first threshold; detecting the ray reaching a second one or more voxels having distance values less than the first threshold; and responsively selecting a precise interpolation mode for stepping the ray through the second one or more voxels.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: February 11, 2025
    Assignee: Intel Corporation
    Inventors: Vivek De, Ram Krishnamurthy, Amit Agarwal, Steven Hsu, Monodeep Kar
  • Patent number: 12223742
    Abstract: A method includes receiving first environment information related to a first vehicular task from a host vehicle, comparing the first environment information to second environment information captured when a member vehicle performed a second vehicular task corresponding to the first vehicular task using a second set of operating criteria, and determining a first set of operating criteria for performing the first vehicular task based on a similarity score between the first environment information and the second environment information and a success or accuracy rate of the second vehicular task performed by the member vehicle.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 11, 2025
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Seyhan Ucar, Takamasa Higuchi, Chang-Heng Wang, Onur Altintas
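A hedged sketch of the selection logic this abstract implies: score each member vehicle's prior task by how similar its recorded environment is to the host vehicle's and by how successfully the task was performed, then reuse the best-scoring operating criteria. The feature-vector encoding of environment information, the cosine similarity, and the multiplicative score are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_operating_criteria(host_env, member_records):
    """host_env: feature vector for the host vehicle's environment.
    member_records: iterable of dicts with 'env' (feature vector),
    'success_rate' (0..1), and 'criteria' (operating parameters used)."""
    best_score, best_criteria = -np.inf, None
    for rec in member_records:
        score = cosine_similarity(host_env, rec["env"]) * rec["success_rate"]
        if score > best_score:
            best_score, best_criteria = score, rec["criteria"]
    return best_criteria
```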
  • Patent number: 12211211
    Abstract: The techniques described herein provide improved segmentation of the corneal epithelium layer. A method includes receiving an optical coherence tomography (OCT) image of an eye; generating, based on the OCT image, a binarized image of the eye; generating, based on the binarized image of the eye and the OCT image, a binary mask of a cornea of the eye; segmenting, based on the binary mask of the cornea of the eye, an anterior cornea of the eye on the OCT image; generating, based on the OCT image and the segmented anterior cornea, a binary mask for an epithelium layer of the eye; segmenting, based on the binary mask for the epithelium layer of the eye, a Bowman's layer in the cornea of the eye on the OCT image; and causing the segmented anterior cornea and the segmented Bowman's layer data to be used for generation of an epithelium map.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: January 28, 2025
    Assignee: Alcon Inc.
    Inventors: Parisa Rabbani, Sahar Hosseinzadeh Kassani
  • Patent number: 12165293
    Abstract: An apparatus for processing image data may include: a memory storing instructions; and a processor configured to execute the instructions to: extract a target image patch including a target object, from a captured image; obtain a plurality of landmark features from the target image patch; align the plurality of landmark features of the target image patch with a plurality of reference landmark features in a template image patch including the same target object; and when the plurality of landmark features are aligned with the plurality of reference landmark features, transfer texture details of the target object in the template image patch to the target object in the target image patch.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: December 10, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kaimo Lin, Hamid Sheikh
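One common way to realize the landmark-alignment step in the abstract above is a least-squares 2-D affine fit between the detected landmarks and the reference landmarks of the template patch. The NumPy sketch below shows that generic fit; it is an assumed technique, not necessarily Samsung's, and the subsequent texture-detail transfer is omitted.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src landmarks (n, 2) onto dst landmarks (n, 2)."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])        # homogeneous source points, (n, 3)
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # (3, 2) affine matrix
    return M

def apply_affine(M, pts):
    """Map (n, 2) points through the fitted affine transform."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M
```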
  • Patent number: 12131493
    Abstract: Disclosed is an apparatus for generating a depth map using a monocular image. The apparatus includes a deep convolutional neural network (DCNN) optimized based on an encoder-decoder architecture. The encoder extracts one or more features from the monocular image according to the number of provided feature layers, and the decoder calculates displacements of mismatched pixels from the features extracted from different feature layers and generates the depth map for the monocular image.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: October 29, 2024
    Assignee: PUSAN NATIONAL UNIVERSITY INDUSTRY-UNIVERSITY COOPERATION FOUNDATION
    Inventor: Won Joo Hwang
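To make the encoder-decoder idea concrete, here is a deliberately tiny PyTorch sketch that maps a monocular RGB image to a single-channel depth map: a strided-convolution encoder extracts features and a transposed-convolution decoder upsamples them back to image resolution. The layer counts and channel widths are placeholders, not the patented DCNN.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Minimal encoder-decoder for monocular depth estimation (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                     # x: (N, 3, H, W) with H, W divisible by 4
        return self.decoder(self.encoder(x))  # (N, 1, H, W) depth map
```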
  • Patent number: 12112533
    Abstract: Disclosed are a method and an apparatus for data calculation in a neural network model, and an image processing method and apparatus. The method for data calculation includes: reading weight data shared by a group of data processing of a data processing layer in a neural network model into a GroupShared variable of a thread group of a graphics processing unit (GPU); dividing input data of the data processing layer based on the number of threads in the thread group; reading, for each group of input data, weight data corresponding to the group of input data for a data processing in the group of data processing from the GroupShared variable; and performing, by each thread in the thread group, the data processing by using a group of read input data and weight data corresponding to the group of input data, to obtain a calculation result corresponding to the group of input data.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: October 8, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hongrui Chen, Wenran Liu, Qifeng Chen, Haoyuan Li, Feng Li
  • Patent number: 12056849
    Abstract: Embodiments are disclosed for translating an image from a source visual domain to a target visual domain. In particular, in one or more embodiments, the disclosed systems and methods comprise a training process that includes receiving a training input including a pair of keyframes and an unpaired image. The pair of keyframes represent a visual translation from a first version of an image in a source visual domain to a second version of the image in a target visual domain. The one or more embodiments further include sending the pair of keyframes and the unpaired image to an image translation network to generate a first training image and a second training image. The one or more embodiments further include training the image translation network to translate images from the source visual domain to the target visual domain based on a calculated loss using the first and second training images.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: August 6, 2024
    Assignees: Adobe Inc., CZECH TECHNICAL UNIVERSITY IN PRAGUE
    Inventors: Michal Lukác, Daniel Sýkora, David Futschik, Zhaowen Wang, Elya Shechtman
  • Patent number: 12045310
    Abstract: Disclosed are a method and system for achieving optimal separable convolutions. The method is applied to image analysis and processing and comprises the steps of: inputting an image to be analyzed and processed; calculating three sets of parameters of a separable convolution, namely an internal number of groups, a channel size, and a kernel size of each separated convolution, to achieve an optimal separable convolution process; and performing deep neural network image processing. The method and system in the present disclosure adopt an implementation of separable convolution that efficiently reduces the computational complexity of deep neural network processing. Compared to the FFT and low-rank approximation approaches, the method and system disclosed in the present disclosure are efficient for both small and large kernel sizes, do not require a pre-trained model to operate on, and can be deployed to applications where resources are highly constrained.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: July 23, 2024
    Assignee: Peng Cheng Laboratory
    Inventors: Tao Wei, Yonghong Tian, Yaowei Wang, Yun Liang, Chang Wen Chen, Wen Gao
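The complexity-reduction claim in the abstract above can be illustrated with a rough multiply-accumulate count comparing a dense convolution against a grouped-spatial-plus-pointwise factorization. The formulas below are a textbook-style estimate under assumed parameter names, not the patented search over the number of groups, channel size, and kernel size.

```python
def dense_conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a dense k x k convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

def separable_conv_macs(h, w, c_in, c_out, k, groups):
    """Grouped k x k spatial convolution (c_in -> c_in) followed by a 1 x 1 pointwise convolution."""
    spatial = h * w * (c_in // groups) * c_in * k * k
    pointwise = h * w * c_in * c_out
    return spatial + pointwise

# Example: a 224x224 map, 128 -> 256 channels, 3x3 kernel, 32 groups:
# dense ~= 14.8 GMACs, separable ~= 1.9 GMACs -- roughly an 8x reduction.
```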
  • Patent number: 12025693
    Abstract: A method for correcting a synthetic aperture radar (SAR) antenna beam image comprising: collecting SAR data, forming an uncorrected image, isolating a pixel value from the uncorrected image, performing an inverse image formation on the isolated pixel value to convert the isolated pixel value into a phase history, calculating actual isolated pixel value location in the uncorrected image, computing range loss, antenna beam, and phase corrections for the isolated pixel value, interpolating range loss corrections, antenna beam pattern corrections, and phase corrections in the phase history, applying the interpolated corrections to the isolated pixel value phase history thereby forming a corrected phase history, converting the corrected phase history back into a corrected image, replacing the corresponding uncorrected pixel value in the uncorrected image with the corrected isolated pixel value, and repeating this process for all uncorrected pixel values thereby providing a corrected SAR image.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: July 2, 2024
    Assignee: IERUS TECHNOLOGIES, INC.
    Inventors: Cameron Musgrove, Griffin Gothard, Daniel Faircloth
  • Patent number: 12014493
    Abstract: According to an embodiment of the present disclosure, a method of assessing bone age by using a neural network performed by a computing device is disclosed. The method includes receiving an analysis image which is a target of bone age assessment; and assessing bone age of the target by inputting the analysis image into a bone age analysis model comprising one or more neural networks. The bone age analysis model, which is trained by supervised learning based on an attention guide label, includes at least one attention module for intensively analyzing a main region of the analysis image.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: June 18, 2024
    Assignee: VUNO Inc.
    Inventors: Byeonguk Bae, Kyuhwan Jung
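As a loose illustration of what an attention module contributes, the sketch below softmax-normalizes a spatial attention map and uses it to weight features before pooling, so that the main region of the image dominates the resulting descriptor. It is a generic mechanism under assumed shapes, not VUNO's attention-guide-label training scheme.

```python
import numpy as np

def attention_pool(features, attention_logits):
    """features: (C, H, W) feature maps; attention_logits: (H, W) unnormalized scores.
    Returns a (C,) vector of attention-weighted pooled features."""
    a = np.exp(attention_logits - attention_logits.max())
    a /= a.sum()                             # softmax over all spatial positions
    return (features * a).sum(axis=(1, 2))   # broadcast the map over channels and pool
```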
  • Patent number: 11989886
    Abstract: Existing techniques in precision farming comprise supervised event detection and need labeled training data, which is tedious to produce considering the large number of crops, the differences among them, and the even larger number of diseases and pests. The present disclosure provides an unsupervised method that uses images of any size captured in an uncontrolled environment. The methods and systems disclosed find application in automatically localizing and classifying events, including health state and growth stage, and in estimating the extent of manifestation of an event. Information on spatial continuity of pixels and boundaries in a given image is used to update the feature representation and label assignment of every pixel using a fully convolutional network. Back-propagation of the pixel labels, modified according to the output of a graph-based method, helps the neural network to converge and provides a time-efficient solution.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: May 21, 2024
    Assignee: Tata Consultancy Services Limited
    Inventors: Prakruti Vinodchandra Bhatt, Sanat Sarangi, Srinivasu Pappula