Patents Examined by Jonathan S Lee
  • Patent number: 11887292
    Abstract: The present invention discloses a two-step anti-fraud vehicle insurance image collecting and quality testing method, system and device. The method comprises: step 1, collecting vehicle insurance scene images and marking the vehicle orientation; step 2, performing object detection on the collected vehicle insurance scene images and screening to obtain object coordinates; step 3, obtaining, according to the vehicle orientation and the object coordinates, the position of the object coordinates within the whole vehicle; step 4, performing vehicle component detection on the vehicle insurance scene images according to the object coordinates screened in step 2, obtaining the component coordinates of the vehicle components, and screening to obtain the vehicle component closest to the object coordinates; step 5, according to the position of the object coordinates within the whole vehicle and the vehicle component closest to the object coordinates, obtaining the position of the vehicle components
    Type: Grant
    Filed: May 13, 2023
    Date of Patent: January 30, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Jinni Dong, Jiaxi Yang, Kai Ding, Chongning Na
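    The abstract above describes a geometric pairing pipeline (steps 3 to 5): place detected object coordinates on the whole vehicle using its orientation, then find the nearest detected component. Below is a minimal, hypothetical sketch of that pairing logic, not the patented implementation; helper names such as locate_on_vehicle are illustrative.
    ```python
    import math

    def box_center(box):
        """Return the (x, y) centre of a box given as (x1, y1, x2, y2)."""
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    def locate_on_vehicle(object_box, vehicle_box, facing_left):
        """Coarsely place the object on the whole vehicle (step 3).
        facing_left: True if the vehicle's front points toward smaller x in the image."""
        ox, oy = box_center(object_box)
        vx, vy = box_center(vehicle_box)
        front_rear = "front" if (ox < vx) == facing_left else "rear"
        vertical = "upper" if oy < vy else "lower"
        return f"{front_rear}-{vertical}"

    def nearest_component(object_box, component_boxes):
        """Pick the detected component whose centre is closest to the object (step 4)."""
        oc = box_center(object_box)
        return min(component_boxes, key=lambda b: math.dist(oc, box_center(b)))
    ```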
  • Patent number: 11880982
    Abstract: Segmenting an image is disclosed, including: acquiring a first image and a second image, the first image and the second image being obtained from the same imaging target and the resolution of the first image being greater than the resolution of the second image; performing image segmentation processing based on a plurality of sub-images of the first image to obtain a first initial segmentation result; performing image segmentation processing based on the second image to obtain a second initial segmentation result; merging, based on the imaging target, the first initial segmentation result with the second initial segmentation result to obtain a target segmentation result; and outputting or storing the target segmentation result.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: January 23, 2024
    Inventors: Jianqiang Ma, Zhe Tang
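    As a minimal sketch of the merging step described above (one plausible merge rule, not necessarily the patented one): upsample the coarse segmentation of the low-resolution image onto the fine grid, then fill background pixels of the tiled high-resolution result from it.
    ```python
    import numpy as np

    def merge_segmentations(hi_res_seg, lo_res_seg):
        """hi_res_seg: (H, W) integer labels assembled from sub-images of the first image;
        lo_res_seg: (h, w) integer labels from the second (lower-resolution) image."""
        H, W = hi_res_seg.shape
        h, w = lo_res_seg.shape
        # Nearest-neighbour upsample of the coarse result onto the fine grid.
        rows = np.arange(H) * h // H
        cols = np.arange(W) * w // W
        coarse_up = lo_res_seg[np.ix_(rows, cols)]
        # Trust the fine result where it labelled something; fall back to the coarse
        # result where the fine result is background (label 0).
        return np.where(hi_res_seg != 0, hi_res_seg, coarse_up)
    ```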
  • Patent number: 11880967
    Abstract: An image selection method, applied to a self-propelled apparatus, includes: collecting an image of the surrounding environment through an image collection device while the self-propelled apparatus travels; scoring the image according to a scoring rule when there is a recognizable obstacle in the collected image, wherein the value of the score indicates the imaging quality of the recognizable obstacle in the image; and, in response to receiving a request to view the image of the recognizable obstacle, selecting the image that comprises the recognizable obstacle and has the highest score as the to-be-displayed image. A computer-readable storage medium and a self-propelled apparatus are further provided.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: January 23, 2024
    Assignee: Beijing Roborock Technology Co., Ltd.
    Inventors: Yixing Wang, Zhen Wu, Haojian Xie, Lei Zhang
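    A minimal sketch of the selection logic, with the obstacle detector and the scoring rule left as injected callables (the scoring rule itself is not specified here):
    ```python
    def select_best_images(frames, detect, score):
        """frames: iterable of images captured while the apparatus travels;
        detect(img) -> obstacle identifier or None; score(img) -> imaging-quality score."""
        best = {}  # obstacle id -> (score, image)
        for img in frames:
            obstacle = detect(img)
            if obstacle is None:
                continue
            s = score(img)
            if obstacle not in best or s > best[obstacle][0]:
                best[obstacle] = (s, img)
        # On a view request, the highest-scoring image per obstacle is displayed.
        return {obstacle: img for obstacle, (s, img) in best.items()}
    ```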
  • Patent number: 11875523
    Abstract: The present disclosure provides an adaptive stereo matching optimization method, apparatus, and device, and a storage medium. The method includes: acquiring images of the same target scene from at least two perspectives and calculating, from them, disparity value ranges corresponding to pixels in the target scene; obtaining optimized depth value ranges by adjusting the disparity value ranges of the pixels in the target scene in real time through an adaptive stereo matching model; adjusting an execution cycle in the adaptive stereo matching model in real time through a DVFS algorithm according to a resource constraint condition of the processing system; and/or training on a plurality of scene image data sets through a convolutional neural network, so that specific function parameters in the adaptive stereo matching model are adjusted in real time according to the different scene images acquired.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: January 16, 2024
    Assignee: ShanghaiTech University
    Inventors: Fupeng Chen, Heng Yu, Yajun Ha
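    The disparity-to-depth relationship underlying the optimized depth value ranges is standard stereo geometry (depth = focal length x baseline / disparity); below is a minimal sketch of that conversion, separate from the patented adaptive model.
    ```python
    def depth_range_from_disparity(d_min, d_max, focal_px, baseline_m):
        """Convert a disparity range [d_min, d_max] in pixels into a (near, far) depth
        range in metres; larger disparity corresponds to smaller depth."""
        assert 0 < d_min <= d_max
        near = focal_px * baseline_m / d_max
        far = focal_px * baseline_m / d_min
        return near, far

    # Example: 1200 px focal length, 12 cm baseline, disparities between 8 and 96 px.
    print(depth_range_from_disparity(8, 96, 1200.0, 0.12))  # -> (1.5, 18.0)
    ```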
  • Patent number: 11875626
    Abstract: A method for identifying counterfeit coins comprises: receiving surface image data and edge image data of a coin at a processor; identifying a plurality of defects using the processor; and comparing each of the plurality of defects to a database of defects from known authentic coin image data to determine whether the coin is authentic.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: January 16, 2024
    Inventor: Christopher J. Rourk
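    A minimal sketch of the comparison step, under the assumption that each defect is reduced to a simple (x, y, size) descriptor in normalised coin coordinates; the tolerance and matching rule are illustrative, not from the patent.
    ```python
    import math

    def is_authentic(observed_defects, known_authentic_defects, tolerance=0.05):
        """Treat a coin as authentic only if every observed defect matches a defect
        recorded for known authentic coins within the given tolerance."""
        def matches(d, ref):
            return math.dist(d[:2], ref[:2]) < tolerance and abs(d[2] - ref[2]) < tolerance
        return all(any(matches(d, ref) for ref in known_authentic_defects)
                   for d in observed_defects)
    ```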
  • Patent number: 11853108
    Abstract: The disclosure relates to an artificial intelligence (AI) system using a machine learning algorithm such as deep learning and the like, and an application thereof. In particular, there is provided a control method for an electronic apparatus for searching for an image, the method comprising displaying an image comprising at least one object, detecting a user input for selecting an object, recognizing an object displayed at a point at which the user input is detected and acquiring information regarding the recognized object by using a recognition model trained to acquire information regarding an object, displaying a list including the information regarding the object, and based on one piece of information being selected from the information regarding the object included in the list, providing a related image by searching for the related image based on the selected information.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: December 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyeon Mok Ko, Hyung Rai Oh, Hong Chul Kim, Silas Jeon, In-Chul Hwang
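    A minimal sketch of the control flow described in the abstract above, with the recognition model and the image index left as stand-ins (the names below are assumptions, not Samsung's API):
    ```python
    def search_related(image, touch_xy, selected_attribute, recognise, image_index):
        """recognise(image, xy) -> dict of attribute -> value for the object at the touch
        point, e.g. {"category": "sneaker", "colour": "red"};
        image_index.query(attr, value) -> related images for the chosen attribute."""
        info = recognise(image, touch_xy)          # recognize the tapped object
        value = info[selected_attribute]           # the one piece of information the user picked
        return image_index.query(selected_attribute, value)
    ```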
  • Patent number: 11854223
    Abstract: Techniques and systems are provided for positioning mixed-reality devices within mixed-reality environments. The devices, which are configured to perform inside-out tracking, transition between position tracking states in mixed-reality environments and utilize positional information from other inside-out tracking devices that share the mixed-reality environments to identify and update their positioning when they become disoriented within the environments, without requiring an extensive or full scan and comparison/matching of feature points detectable by the devices against mapped feature points of the maps associated with the mixed-reality environments. Such techniques can conserve the processing and power consumption that would be required to perform a full or extensive scan and comparison of matching feature points. Such techniques can also enhance the accuracy and speed of positioning mixed-reality devices.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: December 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Erik Alexander Hill, Kathleen Carol Heasley, Jake Thomas Shields, Kevin James-Peddicord Luecke, Robert Neil Drury, Garret Paul Jacobson
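    A minimal sketch of the re-localisation idea (assumed math, not Microsoft's implementation): a disoriented device can recover its pose in the shared map from a peer device's known pose and a measured relative transform, instead of re-scanning and matching the whole environment.
    ```python
    import numpy as np

    def recover_pose(peer_pose_in_map, relative_pose_peer_to_self):
        """Both arguments are 4x4 homogeneous transforms; the result is this device's
        pose expressed in the shared map frame."""
        return peer_pose_in_map @ relative_pose_peer_to_self

    # Example: the peer sits 2 m along x in the map; this device is 1 m along y from the peer.
    peer = np.eye(4); peer[0, 3] = 2.0
    rel = np.eye(4); rel[1, 3] = 1.0
    print(recover_pose(peer, rel)[:3, 3])  # -> [2. 1. 0.]
    ```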
  • Patent number: 11842378
    Abstract: Disclosed are methods, systems, and non-transitory computer-readable medium for analysis of images including wearable items. For example, a method may include obtaining a first set of images, each of the first set of images depicting a product; obtaining a first set of labels associated with the first set of images; training an image segmentation neural network based on the first set of images and the first set of labels; obtaining a second set of images, each of the second set of images depicting a known product; obtaining a second set of labels associated with the second set of images; training an image classification neural network based on the second set of images and the second set of labels; receiving a query image depicting a product that is not yet identified; and performing image segmentation of the query image and identifying the product in the image by performing image analysis.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: December 12, 2023
    Assignee: CAASTLE, INC.
    Inventors: Yu-Cheng Tsai, Dongming Jiang, Georgiy Goldenberg
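    A minimal sketch of how the two trained networks described above could be chained at query time; the segmenter and classifier are injected callables, not the patented models themselves.
    ```python
    def identify_products(query_image, segmenter, classifier):
        """segmenter(image) -> list of cropped wearable-item regions;
        classifier(region) -> (product_label, confidence)."""
        results = []
        for region in segmenter(query_image):
            label, confidence = classifier(region)
            results.append({"label": label, "confidence": confidence})
        # Most confident identification first.
        return sorted(results, key=lambda r: r["confidence"], reverse=True)
    ```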
  • Patent number: 11841434
    Abstract: An annotation system uses annotations for a first set of sensor measurements from a first sensor to identify annotations for a second set of sensor measurements from a second sensor. The annotation system identifies reference annotations in the first set of sensor measurements that indicate the location of a characteristic object in two-dimensional space. The annotation system determines a spatial region in the three-dimensional space of the second set of sensor measurements that corresponds to the portion of the scene represented in the annotation of the first set of sensor measurements. The annotation system then determines annotations within that spatial region of the second set of sensor measurements that indicate the location of the characteristic object in three-dimensional space.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: December 12, 2023
    Assignee: Tesla, Inc.
    Inventor: Anting Shen
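    One common way to realise the 2-D-to-3-D step is to keep only the 3-D points that project into the 2-D annotation box; below is a minimal sketch assuming a pinhole camera model (not necessarily the patented procedure).
    ```python
    import numpy as np

    def points_in_2d_box(points_xyz, K, box):
        """points_xyz: (N, 3) points in the camera frame; K: 3x3 camera intrinsics;
        box: (x1, y1, x2, y2) pixel bounds of the 2-D reference annotation."""
        pts = points_xyz[points_xyz[:, 2] > 0]              # keep points in front of the camera
        proj = (K @ pts.T).T
        u, v = proj[:, 0] / proj[:, 2], proj[:, 1] / proj[:, 2]
        x1, y1, x2, y2 = box
        inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
        return pts[inside]                                   # spatial region for the 3-D annotation
    ```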
  • Patent number: 11842526
    Abstract: Exemplified methods and systems provide a Volterra filter network architecture that employs a cascaded implementation and a plurality of kernels, a set of which is configured to execute an nth-order filter, wherein the plurality of kernels of the nth-order filters are repeatedly configured in a plurality of cascading layers of interconnected kernels to form a cascading hierarchical structure that approximates a high-order filter.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: December 12, 2023
    Assignee: NORTH CAROLINA STATE UNIVERSITY
    Inventors: Hamid Krim, Siddharth Roheda, Sally Ghanem
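    For reference, a single second-order (quadratic) Volterra filter applied to a 1-D signal window looks as follows; the patent's contribution is cascading many such low-order kernels to approximate a higher-order filter, which this sketch does not reproduce.
    ```python
    import numpy as np

    def volterra2(x, h1, h2):
        """x: (L,) input window; h1: (L,) linear kernel; h2: (L, L) quadratic kernel.
        y = sum_i h1[i] x[i] + sum_{i,j} h2[i, j] x[i] x[j]."""
        return h1 @ x + x @ h2 @ x
    ```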
  • Patent number: 11830173
    Abstract: A method of manufacturing learning data for making a neural network perform learning is provided. The manufacturing method of the learning data includes a first acquiring step configured to acquire an original image, a second acquiring step configured to acquire a first image as a training image generated by adding blur to the original image, and a third acquiring step configured to acquire a second image as a ground truth image generated by adding blur to the original image. The blur amount added to the second image is smaller than that added to the first image.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: November 28, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Takashi Oniki
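    A minimal sketch of generating such a training pair, using a simple box blur as a stand-in for whatever blur model the method actually applies:
    ```python
    import numpy as np

    def box_blur(img, k):
        """Blur a 2-D image with a k x k box filter (k odd, >= 1)."""
        if k <= 1:
            return img.astype(float)
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def make_pair(original, train_blur=7, gt_blur=3):
        """Training image gets the larger blur; the ground-truth image gets the smaller blur."""
        return box_blur(original, train_blur), box_blur(original, gt_blur)
    ```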
  • Patent number: 11823368
    Abstract: A system and method for assessing road surface quality include a wireless mobile device that has a camera, a location receiver, and a road surface classifying computer application and is configured to be mounted on a vehicle. The system has a remote server having a road surface classifying web application, a database, and an interactive map connected to the web application. The mobile device actuates the camera to record videos, extracts images from the videos, processes the images, classifies the images into road conditions, records the location of the images, generates a data packet including an identification of the mobile device and a time stamp of the data packet, and transmits the data packet to the remote server. The remote server stores the data packet in the database. The web application superimposes the time stamp of the data packet, the location, the road conditions, and the images on the interactive map.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: November 21, 2023
    Assignee: Prince Mohammad Bin Fahd University
    Inventors: Nazeeruddin Mohammad, Majid Ali Khan, Ahmed Abul Hasanaath
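    A minimal sketch of the data packet the mobile device would transmit; the field names are illustrative, since the abstract does not fix an exact schema.
    ```python
    import json, time

    def build_packet(device_id, latitude, longitude, road_condition, image_b64):
        packet = {
            "device_id": device_id,              # identification of the mobile device
            "timestamp": time.time(),            # time stamp of the data packet
            "location": {"lat": latitude, "lon": longitude},
            "road_condition": road_condition,    # e.g. "smooth", "cracked", "pothole"
            "image": image_b64,                  # base64-encoded extracted frame
        }
        return json.dumps(packet)
    ```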
  • Patent number: 11823402
    Abstract: A method and apparatus for correcting an error in depth information estimated from a two-dimensional (2D) image are disclosed. The method includes diagnosing an error in depth information by inputting a color image and depth information estimated using the color image to a depth error detection network, and determining enhanced depth information by maintaining or correcting the depth information based on the diagnosed error.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: November 21, 2023
    Assignees: Electronics and Telecommunications Research Institute, The Trustees of Indiana University
    Inventors: Soon Heung Jung, Jeongil Seo, Jagpreet Singh Chawla, Nikhil Thakurdesai, David Crandall, Md Reza, Anuj Godase
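    A minimal sketch of the "maintain or correct" step, assuming the error-detection network yields a per-pixel error mask plus a corrected estimate (one plausible reading of the abstract, not the published architecture):
    ```python
    import numpy as np

    def enhance_depth(estimated_depth, error_mask, corrected_depth):
        """estimated_depth, corrected_depth: (H, W) float arrays; error_mask: (H, W) bool
        array, True where the detection network diagnosed an error. Depth values are kept
        where no error is flagged and replaced by the corrected values elsewhere."""
        return np.where(error_mask, corrected_depth, estimated_depth)
    ```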
  • Patent number: 11816767
    Abstract: A method and system for reconstructing a magnetic particle distribution model based on time-frequency spectrum enhancement are provided. The method includes: scanning, by a magnetic particle imaging (MPI) device, a scan target to acquire a one-dimensional time-domain signal of the scan target; performing short-time Fourier transform to acquire a time-frequency spectrum; acquiring, by a deep neural network (DNN) fused with a self-attention mechanism, a denoised time-frequency spectrum; acquiring a high-quality magnetic particle time-domain signal; and reconstructing a magnetic particle distribution model. The method learns global and local information in the time-frequency spectrum through the DNN fused with the self-attention mechanism, thereby learning a relationship between different harmonics to distinguish between a particle signal and a noise signal.
    Type: Grant
    Filed: July 5, 2023
    Date of Patent: November 14, 2023
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jie Tian, Zechen Wei, Hui Hui, Xin Yang, Huiling Peng
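    A minimal sketch of the short-time Fourier transform step applied to the one-dimensional time-domain MPI signal; the self-attention denoising network itself is not reproduced here.
    ```python
    import numpy as np

    def stft(signal, frame_len=256, hop=128):
        """Return an (n_frames, frame_len // 2 + 1) complex time-frequency spectrum using
        a Hann window; its magnitude is what a denoising network would consume."""
        window = np.hanning(frame_len)
        n_frames = 1 + (len(signal) - frame_len) // hop
        frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                           for i in range(n_frames)])
        return np.fft.rfft(frames, axis=1)
    ```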
  • Patent number: 11810283
    Abstract: Examples of the present disclosure describe systems and methods for detecting and remediating compression artifacts in multimedia items. In example aspects, a machine learning model is trained on a dataset related to compression artifacts and non-compression artifacts. Input data may then be collected by a data collection module and provided to a pattern recognition module. The pattern recognition module may extract visual and audio features of the multimedia item and provide the extracted features to the trained machine learning model. The trained machine learning model may compare the extracted features to the model, and a confidence value may be generated. The confidence value may be compared to a confidence value threshold. If the confidence value is equal to or exceeds the confidence threshold, then the input data may be classified as containing at least one compression artifact. Remedial action may subsequently be applied (e.g., boosting the system with technical resources).
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: November 7, 2023
    Assignee: DISH Network L.L.C.
    Inventor: Adam Morzos
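    A minimal sketch of the thresholding decision described above, with the feature extractor and trained model left as stand-ins:
    ```python
    def detect_compression_artifacts(item, extract_features, model, threshold=0.8):
        """extract_features(item) -> visual/audio feature vector; model(features) ->
        confidence value in [0, 1]. The item is classified as containing at least one
        compression artifact when the confidence meets or exceeds the threshold."""
        confidence = model(extract_features(item))
        return confidence >= threshold, confidence
    ```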
  • Patent number: 11804037
    Abstract: The present application provides a method and a system for generating an image sample having a specific feature. The method includes: training a generative adversarial network-based sample generation model, where the generative adversarial network includes a generator and two discriminators: a global discriminator configured to perform global discrimination on an image, and a local discriminator configured to perform local discrimination on a specific feature; and inputting, to a trained generator that serves as a sample generation model, a semantic segmentation image that indicates a location of the specific feature and a corresponding real image not having the specific feature, to obtain a generated image sample having the specific feature.
    Type: Grant
    Filed: June 9, 2023
    Date of Patent: October 31, 2023
    Assignee: CONTEMPORARY AMPEREX TECHNOLOGY CO., LIMITED
    Inventors: Guannan Jiang, Jv Huang, Chao Yuan
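    A minimal sketch of how a generator loss could combine the two discriminators named above, assuming each returns a "realness" score in (0, 1) and using the standard non-saturating GAN objective; the weighting is illustrative.
    ```python
    import math

    def generator_adversarial_loss(global_score, local_score, local_weight=1.0):
        """global_score: global discriminator's score on the full generated image;
        local_score: local discriminator's score on the crop around the specific feature."""
        eps = 1e-8
        return -math.log(global_score + eps) - local_weight * math.log(local_score + eps)
    ```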
  • Patent number: 11804036
    Abstract: A person re-identification method based on perspective-guided multi-adversarial attention is provided. The method's deep convolutional neural network includes a feature learning module, a multi-adversarial module, and a perspective-guided attention mechanism module. In the multi-adversarial module, each stage of the basic network of the feature learning module is followed by a global pooling layer and a perspective discriminator. The perspective-guided attention mechanism module consists of an attention map generator and the perspective discriminator. Training the deep convolutional neural network includes learning of the feature learning module, learning of the multi-adversarial module, and learning of the perspective-guided attention mechanism module. The proposed method uses the trained deep convolutional neural network to extract features of the testing images and uses a Euclidean distance to perform feature matching between images in a query set and images in a gallery set.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: October 31, 2023
    Assignee: WUHAN UNIVERSITY
    Inventors: Bo Du, Fangyi Liu, Mang Ye
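    A minimal sketch of the final matching step (standard Euclidean-distance ranking, as the abstract states), with the feature vectors assumed to come from the trained network:
    ```python
    import numpy as np

    def rank_gallery(query_feat, gallery_feats):
        """query_feat: (D,) feature vector of a query image; gallery_feats: (N, D) matrix
        of gallery features. Returns gallery indices sorted from closest to farthest."""
        dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
        return np.argsort(dists)
    ```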
  • Patent number: 11798177
    Abstract: The present application provides a hand tracking method, device and system, wherein the method comprises: determining a current frame image corresponding to each tracking camera; acquiring, according to the current frame images and the tracking information of the preceding frame image, tracking information of the hand location corresponding to the to-be-detected frame image and two-dimensional coordinates of a preset quantity of skeleton points; determining three-dimensional coordinates of the preset quantity of skeleton points according to the two-dimensional coordinates and pre-acquired tracking data of a head location corresponding to the hand location; carrying out smoothing filter processing on the three-dimensional coordinates of the skeleton points and the historical three-dimensional coordinates of the preceding frame image so as to acquire processed stable skeleton points; and fusing, rendering and displaying the stable skeleton points and the tracking data of the head location
    Type: Grant
    Filed: August 4, 2022
    Date of Patent: October 24, 2023
    Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
    Inventor: Tao Wu
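    A minimal sketch of the smoothing-filter step, using simple exponential smoothing between the current frame's 3-D skeleton points and the previous frame's; the patent does not commit to this particular filter.
    ```python
    import numpy as np

    def smooth_skeleton(current_pts, previous_pts, alpha=0.6):
        """current_pts, previous_pts: (N, 3) arrays of skeleton-point coordinates;
        alpha weights the current frame against the historical coordinates."""
        return alpha * np.asarray(current_pts) + (1.0 - alpha) * np.asarray(previous_pts)
    ```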
  • Patent number: 11798182
    Abstract: A system for training neural networks that predict the parameters of a human mesh model is disclosed herein. The system includes at least one camera and a data processor configured to execute computer executable instructions for: receiving a first frame and a second frame of a video from the at least one camera; extracting first and second features from the first and second frames of the video; inputting the sequence of frames of the video into a human mesh estimator module, the human mesh estimator module estimating mesh parameters from the sequence of frames of the video so as to determine a predicted mesh; and generating a training signal for input into the human mesh estimator module by using at least one of: (i) a depth loss module and (ii) a rigid transform loss module.
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: October 24, 2023
    Assignee: Bertec Corporation
    Inventors: Batuhan Karagoz, Emre Akbas, Bedirhan Uguz, Ozhan Suat, Necip Berme, Mohan Chandra Baro
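    A minimal sketch of combining the two training signals named in the abstract; the individual loss functions are stand-ins, not Bertec's implementation.
    ```python
    def training_signal(pred_mesh, frame_pair, depth_loss, rigid_transform_loss,
                        w_depth=1.0, w_rigid=1.0):
        """depth_loss(mesh, frames) and rigid_transform_loss(mesh, frames) each return a
        scalar; their weighted sum is fed back to the human mesh estimator module."""
        return (w_depth * depth_loss(pred_mesh, frame_pair)
                + w_rigid * rigid_transform_loss(pred_mesh, frame_pair))
    ```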
  • Patent number: 11790666
    Abstract: The present disclosure is directed to an autonomous vehicle having a vehicle control system. The vehicle control system includes an image processing system. The image processing system receives an image that includes a plurality of image portions. The image processing system also calculates a score for each image portion. The score indicates a level of confidence that a given image portion represents an illuminated component of a traffic light. The image processing system further identifies one or more candidate portions from among the plurality of image portions. Additionally, the image processing system determines that a particular candidate portion represents an illuminated component of a traffic light using a classifier. Further, the image processing system provides instructions to control the autonomous vehicle based on the particular candidate portion representing an illuminated component of a traffic light.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: October 17, 2023
    Assignee: Waymo LLC
    Inventors: Andreas Wendel, David Ian Franklin Ferguson
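    A minimal sketch of the candidate-selection flow described above, with the per-portion scorer and classifier left as stand-ins and an illustrative score threshold:
    ```python
    def find_illuminated_lights(image_portions, score, classify, score_threshold=0.5):
        """score(portion) -> confidence that the portion is an illuminated traffic-light
        component; classify(portion) -> True if the classifier confirms it."""
        candidates = [p for p in image_portions if score(p) >= score_threshold]
        return [p for p in candidates if classify(p)]
    ```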