Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 12380648
    Abstract: Embodiments of devices and techniques of obtaining a three dimensional (3D) representation of an area are disclosed. In one embodiment, a two dimensional (2D) frame is obtained of an array of pixels of the area. Also, a depth frame of the area is obtained. The depth frame includes an array of depth estimation values. Each of the depth estimation values in the array of depth estimation values corresponds to one or more corresponding pixels in the array of pixels. Furthermore, an array of confidence scores is generated. Each confidence score in the array of confidence scores corresponds to one or more corresponding depth estimation values in the array of depth estimation values. Each of the confidence scores in the array of confidence scores indicates a confidence level that the one or more corresponding depth estimation values in the array of depth estimation values is accurate.
    Type: Grant
    Filed: October 5, 2022
    Date of Patent: August 5, 2025
    Assignee: STREEM, LLC
    Inventor: Nikilesh Urella
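The confidence-gated depth representation described in the abstract above can be sketched as follows. This is a minimal illustration assuming NumPy arrays, a per-pixel correspondence between the 2D frame, depth frame, and confidence array, and a hypothetical threshold of 0.5; it is not the patented implementation.

```python
import numpy as np

def build_3d_representation(rgb, depth, confidence, threshold=0.5):
    """Keep only points whose depth estimate meets a confidence threshold.

    rgb:        (H, W, 3) 2D color frame
    depth:      (H, W) per-pixel depth estimation values
    confidence: (H, W) confidence scores, one per depth estimate
    """
    ys, xs = np.nonzero(confidence >= threshold)
    points = np.column_stack([xs, ys, depth[ys, xs]])   # (N, 3): x, y, z
    colors = rgb[ys, xs]                                # (N, 3) matching colors
    return points, colors

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
depth = np.array([[1.0, 2.0], [3.0, 4.0]])
confidence = np.array([[0.9, 0.2], [0.8, 0.95]])
points, colors = build_3d_representation(rgb, depth, confidence)
print(points.shape)  # three of the four pixels pass the 0.5 threshold
```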
  • Patent number: 12373973
    Abstract: An object of the present invention is to provide a medical image processing apparatus, an endoscope system, a medical image processing method, and a program that assist a user in estimating an accurate size of a region of interest. A medical image processing apparatus according to an aspect of the present invention is a medical image processing apparatus including a processor. The processor is configured to acquire images in time-series, make a determination of whether a region of interest in each of the images is suitable for size estimation, and report, by using a reporting device, a result of the determination and operation assistance information for improving the result of the determination.
    Type: Grant
    Filed: July 31, 2022
    Date of Patent: July 29, 2025
    Assignee: FUJIFILM Corporation
    Inventor: Seiya Takenouchi
  • Patent number: 12361677
    Abstract: A detection device, such as an unmanned vehicle, is adapted to traverse a search area and generate sensor data associated with objects that may be present in the search area. The generated sensor data is used by a system including object detection inference models configured to receive the sensor data and output object data, a local object tracker configured to track detected objects in a local map, and a global object tracker configured to track detected objects on a global map. The local object tracker is configured to fuse object detections from the object detection inference models to identify locally tracked objects, and a Kalman filter processes frames of fused object data to resolve duplicates and/or invalid object detections. The global object tracker includes a pose manager, configured to track global objects in the global map and update the pose based on a map optimization process. User-in-the-loop processing includes a user interface for display and manual editing of detected object data.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: July 15, 2025
    Assignee: Teledyne FLIR Defense, Inc.
    Inventors: A. Peter Keefe, James Leonard Nute
  • Patent number: 12354289
    Abstract: An apparatus and a method remove noise from a point cloud and generate a depth map of a monocular camera image. The apparatus includes a camera sensor that photographs a monocular camera image. The apparatus also includes a lidar sensor that generates the point cloud corresponding to the monocular camera image. The apparatus also includes a controller that removes noise from the point cloud and generates the depth map of the monocular camera image based on the point cloud from which the noise is removed.
    Type: Grant
    Filed: May 4, 2023
    Date of Patent: July 8, 2025
    Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
    Inventors: Jin Sol Kim, Jin Ho Park, Jang Yoon Kim
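The two stages of the abstract above, removing noise from the point cloud and projecting it into a depth map aligned with the monocular image, can be sketched like this. The statistical-outlier filter and the pinhole projection are common illustrative choices, not the patent's specific method; `remove_noise` and `project_to_depth_map` are hypothetical names.

```python
import numpy as np

def remove_noise(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is an
    outlier (a common statistical-outlier-removal heuristic)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)        # column 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

def project_to_depth_map(points, K, h, w):
    """Project filtered 3D points through camera intrinsics K into a sparse
    depth map the size of the monocular camera image."""
    depth_map = np.zeros((h, w))
    uvw = (K @ points.T).T                       # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    depth_map[v[ok], u[ok]] = points[ok, 2]      # z becomes the depth value
    return depth_map

rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.1, size=(50, 3)) + [0.0, 0.0, 5.0]
cloud = np.vstack([cloud, [[10.0, 10.0, 50.0]]])  # one obvious noise point
filtered = remove_noise(cloud)
```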
  • Patent number: 12345022
    Abstract: An excavator includes a comparing part configured to compare first image data obtained by capturing, at a first timing, a bucket in a specified orientation, with second image data obtained by capturing, at a second timing different from the first timing, the bucket in the specified orientation; and an output part configured to output a comparison result obtained by the comparing part.
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: July 1, 2025
    Assignee: SUMITOMO HEAVY INDUSTRIES, LTD.
    Inventor: Masato Koga
  • Patent number: 12347207
    Abstract: A vehicle external environment recognition apparatus includes at least one processor, and at least one memory coupled to the at least one processor. The at least one processor is configured to operate in cooperation with at least one program stored in the at least one memory to execute processing. The processing includes generating a distance image from luminance images, specifying, by using semantic segmentation, a floating matter class in the luminance images, and invalidating parallax associated with floating pixels that are included in the distance image and belong to the floating matter class.
    Type: Grant
    Filed: September 8, 2022
    Date of Patent: July 1, 2025
    Assignee: SUBARU CORPORATION
    Inventor: Minoru Kuraoka
  • Patent number: 12347734
    Abstract: A system of examination of a semiconductor specimen, comprising a processor and memory circuitry (PMC) configured to: obtain an image of a hole formed in the semiconductor specimen, wherein the hole exposes at least one layer of a plurality of layers of the semiconductor specimen, segment the image into a plurality of regions, generate at least one of: data Dpix_intensity informative of one or more pixel intensities of one or more regions of the plurality of regions, data Dgeometry informative of one or more geometrical properties of one or more regions of the plurality of regions, feed at least one of Dpix_intensity or Dgeometry to a trained classifier to obtain an output, wherein the output of the trained classifier is usable to determine whether the hole ends at a target layer of the plurality of layers.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: July 1, 2025
    Assignee: Applied Materials Israel Ltd.
    Inventors: Rafael Bistritzer, Vadim Vereschagin, Grigory Klebanov, Roman Kris, Ilan Ben-Harush, Omer Kerem, Asaf Golov, Elad Sommer
  • Patent number: 12322140
    Abstract: A method is disclosed for rectifying images and/or image points acquired by a camera of a camera-based system of a vehicle with a windshield pane. A raw image of a scene is acquired with the camera. Raw image data is selected from the raw image. Intermediate image data is calculated based on the raw image data and camera parameters. The intermediate image data includes an intermediate image or intermediate image points. The intermediate image data resembles an image/image points of the scene obtained by a pinhole camera through the windshield pane. A set of points in space of the scene is calculated, the set of points corresponding to pixels of the intermediate image or to the intermediate image points. The calculation is performed using a parallel shift of the optical path induced by the windshield pane based on windshield pane parameters. Also disclosed are a vehicle and a camera-based system therefor.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: June 3, 2025
    Assignee: Continental Autonomous Mobility Germany GmbH
    Inventor: Aless Lasaruk
  • Patent number: 12322126
    Abstract: In various examples, methods and systems are provided for estimating depth values for images (e.g., from a monocular sequence). Disclosed approaches may define a search space of potential pixel matches between two images using one or more depth hypothesis planes based at least on a camera pose associated with one or more cameras used to generate the images. A machine learning model(s) may use this search space to predict likelihoods of correspondence between one or more pixels in the images. The predicted likelihoods may be used to compute depth values for one or more of the images. The predicted depth values may be transmitted and used by a machine to perform one or more operations.
    Type: Grant
    Filed: February 3, 2022
    Date of Patent: June 3, 2025
    Assignee: NVIDIA Corporation
    Inventors: Yiran Zhong, Charles Loop, Nikolai Smolyanskiy, Ke Chen, Stan Birchfield, Alexander Popov
  • Patent number: 12315184
    Abstract: The disclosure provides methods and apparatuses for determining a height of a road target in a driving process. One example method includes determining a target object on a road, and determining a height threshold of the road based on the target object, to obtain a maximum allowed height of the road.
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: May 27, 2025
    Assignee: Shenzhen Yinwang Intelligent Technologies Co., Ltd.
    Inventor: Wei Zhou
  • Patent number: 12299914
    Abstract: A method for controlling a vehicle in an environment includes generating, via a cross-attention model, a cross-attention cost volume based on a current image of the environment and a previous image of the environment in a sequence of images. The method also includes generating combined features by combining cost volume features of the cross-attention cost volume with single-frame features associated with the current image. The single-frame features may be generated via a single-frame encoding model. The method further includes generating a depth estimate of the current image based on the combined features. The method still further includes controlling an action of the vehicle based on the depth estimate.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: May 13, 2025
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Vitor Guizilini
  • Patent number: 12302018
    Abstract: Provided is a photosensor including a light receiving part generating electric charge according to incident light, a charge transfer gate configured to transfer the electric charge generated in the light receiving part, and a signal generation unit configured to generate a charge transfer signal applied to the charge transfer gate, in which the signal generation unit generates the charge transfer signal so that the charge transfer gate is brought into a charge transfer state in a first time range of a first period belonging to an n-th (n is an integer of 1 or more) frame and the charge transfer gate is brought into a charge transfer state in a second time range of a second period belonging to an m-th (m is an integer of 1 or more different from n) frame.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: May 13, 2025
    Assignee: HAMAMATSU PHOTONICS K.K.
    Inventors: Mitsuhito Mase, Hiroo Yamamoto, Harumichi Mori
  • Patent number: 12277728
    Abstract: A pupil detection device includes an eye area image obtaining unit that obtains image data representing an eye area image in a captured image obtained by a camera; a luminance gradient calculating unit that calculates luminance gradient vectors corresponding to respective individual image units in the eye area image, using the image data; an evaluation value calculating unit that calculates evaluation values corresponding to the respective individual image units, using the luminance gradient vectors; and a pupil location detecting unit that detects a pupil location in the eye area image, using the evaluation values.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: April 15, 2025
    Assignee: Mitsubishi Electric Corporation
    Inventors: Atsushi Matsumoto, Ryosuke Torama, Shintaro Watanabe, Ryohei Murachi
  • Patent number: 12272114
    Abstract: According to one embodiment, a learning method of causing a statistical model for outputting a distance to a subject to learn by using an image including the subject as an input is provided. The method includes acquiring an image for learning including a subject having an already known shape, acquiring a first distance to the subject included in the image for learning, from the image for learning, and causing the statistical model to learn by restraining the first distance with the shape of the subject included in the image for learning.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: April 8, 2025
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Nao Mishima, Masako Kashiwagi
  • Patent number: 12266169
    Abstract: Automated real-time aerial change assessment of targets is provided. An aerial image of a target area is recorded during a flyover and a target detected in the aerial image. A sequence of images of the target area is recorded during a subsequent flyover. The system determines a target detection probability according to confidence scores of the sequence of images and determines a change status of the target. Responsive to a target change, a percentage of change is determined according to image feature matching between the first aerial image and each of the images from the second flyover. Target detection probability and percentage of change are combined as statistically independent events to determine a probability of change. The probability of change and percentage of change for each image in the sequence is output in real-time, and final change assessment is output when the aircraft exits the target area.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: April 1, 2025
    Assignee: The Boeing Company
    Inventors: Yan Yang, Paul Foster
  • Patent number: 12260577
    Abstract: A method for training a depth estimation model implemented in an electronic device includes obtaining a first image pair, comprising a first left image and a first right image, from a training data set; inputting the first left image into the depth estimation model, and obtaining a disparity map; adding the first left image and the disparity map, and obtaining a second right image; calculating a mean square error and cosine similarity of pixel values of all corresponding pixels in the first right image and the second right image; calculating mean values of the mean square error and the cosine similarity, and obtaining a first mean value of the mean square error and a second mean value of the cosine similarity; adding the first mean value and the second mean value, and obtaining a loss value of the depth estimation model; and iteratively training the depth estimation model according to the loss value.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: March 25, 2025
    Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Yu-Hsuan Chien, Chin-Pin Kuo
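The loss in the abstract above is concrete enough to sketch: the mean of the per-pixel squared errors plus the mean of the per-pixel cosine similarities between the real and reconstructed right images. The function below follows the abstract's literal sum; note that a training implementation would more typically use 1 − cosine similarity so that both terms shrink as the images agree. `depth_training_loss` is a hypothetical name.

```python
import numpy as np

def depth_training_loss(right_true, right_pred):
    """Mean square error plus mean cosine similarity between the first right
    image and the reconstructed second right image, per the abstract.
    (A practical loss would often use 1 - cosine similarity instead.)"""
    mse = (right_true - right_pred) ** 2                    # per-pixel squared error
    # cosine similarity of corresponding per-pixel RGB vectors
    num = (right_true * right_pred).sum(axis=-1)
    den = (np.linalg.norm(right_true, axis=-1)
           * np.linalg.norm(right_pred, axis=-1) + 1e-8)
    cos = num / den
    return mse.mean() + cos.mean()                          # first mean + second mean
```

For identical images the squared-error term is zero and every pixel's cosine similarity is 1, so the loss is 1.0.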
  • Patent number: 12254701
    Abstract: A vehicle periphery monitoring device is configured to: specify positions of detection points of an obstacle with respect to a vehicle based on detection results of multiple periphery monitoring sensors equipped to the vehicle; update the positions of the detected detection points corresponding to a position change of the vehicle; store the updated positions of the detection points in a memory that maintains stored information during a parked state of the vehicle; and estimate, at a start time of the vehicle from the parked state, a position of a detection point, which is located in a blind spot and is not detected by any of the multiple periphery monitoring sensors at the start time of the vehicle, using the positions of the detection points stored in the memory.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: March 18, 2025
    Assignees: DENSO CORPORATION, TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Yousuke Miyamoto, Akihiro Kida, Motonari Ohbayashi
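The "update the positions corresponding to a position change of the vehicle" step above amounts to re-expressing each stored point in the vehicle's new frame. A minimal 2D rigid-motion sketch, assuming planar motion (dx, dy, dtheta) in the old vehicle frame and a hypothetical function name:

```python
import math

def update_detection_point(point, dx, dy, dtheta):
    """Re-express a stored obstacle detection point in the vehicle frame
    after the vehicle translates by (dx, dy) and rotates by dtheta radians."""
    x, y = point[0] - dx, point[1] - dy        # undo the translation
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * x - s * y, s * x + c * y)      # undo the rotation
```

For example, after the vehicle drives 1 m forward, a point that was 2 m ahead is now 1 m ahead in the updated frame.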
  • Patent number: 12243253
    Abstract: There is provided a depth information generating apparatus. A first generating unit generates first depth information on the basis of a plurality of viewpoint images which are obtained from first shooting and which have mutually-different viewpoints. A second generating unit generates second depth information for a captured image obtained from second shooting by correcting the first depth information so as to reflect a change in depth caused by a difference in a focal distance of the second shooting relative to the first shooting.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: March 4, 2025
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yohei Horikawa, Takeshi Ogawa
  • Patent number: 12231775
    Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
    Type: Grant
    Filed: June 5, 2024
    Date of Patent: February 18, 2025
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
  • Patent number: 12223665
    Abstract: A system includes a first type of measurement device that captures first 2D images, a second type of measurement device that captures 3D scans. A 3D scan includes a point cloud and a second 2D image. The system also includes processors that register the first 2D images. The method includes accessing the 3D scan that records at least a portion of the surrounding environment that is also captured by a first 2D image. Further, 2D features in the second 2D image are detected, and 3D coordinates from the point cloud are associated to the 2D features. 2D features are also detected in the first 2D image, and matching 2D features from the first 2D image and the second 2D image are identified. A position and orientation of the first 2D image is calculated in a coordinate system of the 3D scan using the matching 2D features.
    Type: Grant
    Filed: August 10, 2022
    Date of Patent: February 11, 2025
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Matthias Wolke, Jafar Amiri Parian
  • Patent number: 12225180
    Abstract: A method of generating stereoscopic display contents includes obtaining, from a Red, Green, Blue plus Distance (RGB-D) image using a processor, a first Red, Green, and Blue (RGB) image and a depth image; determining, based on depth values in the depth image, a first disparity map in accordance with the RGB-D image; determining a second disparity map and a third disparity map by transforming the first disparity map using a disparity distribution ratio; and generating, by the processor, the pair of stereoscopic images comprising a second RGB image and a third RGB image, wherein the second RGB image is generated by shifting a first set of pixels in the first RGB image based on the second disparity map, and the third RGB image is generated by shifting a second set of pixels in the first RGB image based on the third disparity map.
    Type: Grant
    Filed: October 25, 2022
    Date of Patent: February 11, 2025
    Assignee: Orbbec 3D Technology International, Inc.
    Inventors: Xin Xie, Nan Xu, Xu Chen
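The pixel-shifting step in the abstract above can be sketched directly: split one disparity map into two with a distribution ratio, then shift pixels horizontally by each map to synthesize the stereo pair. This is a simplified reading with integer shifts and no hole filling; both function names are hypothetical.

```python
import numpy as np

def shift_by_disparity(rgb, disparity):
    """Synthesize one view by shifting each source pixel horizontally by its
    (integer, for simplicity) disparity value."""
    h, w = disparity.shape
    out = np.zeros_like(rgb)
    for y in range(h):
        for x in range(w):
            nx = x + int(disparity[y, x])
            if 0 <= nx < w:
                out[y, nx] = rgb[y, x]
    return out

def stereo_pair_from_rgbd(rgb, disparity, ratio=0.5):
    """Split one disparity map into two using a disparity distribution ratio,
    then generate the left and right views of the stereoscopic pair."""
    left = shift_by_disparity(rgb, -ratio * disparity)
    right = shift_by_disparity(rgb, (1.0 - ratio) * disparity)
    return left, right
```

With a constant disparity of 2 and a 0.5 ratio, each pixel moves one column left in one view and one column right in the other.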
  • Patent number: 12217415
    Abstract: A method of manufacturing a display device includes preparing a cover substrate assembly including a cover substrate, a protective film attached to a lower surface of the cover substrate, and an inspection pattern disposed between the cover substrate and the protective film, obtaining a first image by imaging the inspection pattern through an upper surface of the cover substrate, obtaining noise data by comparing a reference image of the inspection pattern with the first image, obtaining a second image by imaging the cover substrate assembly, obtaining a corrected image of the second image by reflecting the noise data in the second image, and detecting a defect of the cover substrate based on the corrected image of the second image.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: February 4, 2025
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Tae-Jin Hwang, Hyeongmin Ahn
  • Patent number: 12198422
    Abstract: A method includes, during a delivery process of an unmanned aerial vehicle (UAV), receiving, by an image processing system, a depth image captured by a downward-facing stereo camera on the UAV. One or more pixels are within a sample area of the depth image and are associated with corresponding depth values indicative of distances of one or more objects to the downward-facing stereo camera. The method also includes determining, by the image processing system, an estimated depth value representative of depth values within the sample area. The method further includes determining that the estimated depth value is below a trigger depth. The method further includes, based at least on determining that the estimated depth value is below the trigger depth, aborting the delivery process of the UAV.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: January 14, 2025
    Assignee: Wing Aviation LLC
    Inventors: Louis Kenneth Dressel, Kyle David Julian
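The abort decision above reduces to comparing a representative depth over the sample area against the trigger depth. A minimal sketch, using the median over valid pixels as the representative value (an assumption; the abstract only requires "an estimated depth value") and a hypothetical function name:

```python
import numpy as np

def should_abort_delivery(depth_image, sample_slice, trigger_depth):
    """Abort when the estimated depth within the sample area falls below the
    trigger depth, i.e. an object is too close below the UAV."""
    sample = depth_image[sample_slice]
    valid = sample[np.isfinite(sample) & (sample > 0)]
    if valid.size == 0:
        return False                      # no depth evidence in the sample area
    return float(np.median(valid)) < trigger_depth
```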
  • Patent number: 12184964
    Abstract: One embodiment provides a method, the method including: capturing, utilizing at least one image capture device coupled to an information handling device, video image data of a user; identifying, utilizing a gesture detection system and within the video image data, a location of hands of the user in relation to a gesture zone; and performing, utilizing the gesture detection system and based upon the location of the hands of the user in relation to the gesture zone, an action, wherein producing the framed video stream comprises performing an action based upon the location of the hands of the user in relation to the gesture zone. Other aspects are claimed and described.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: December 31, 2024
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: John W Nicholson, Robert James Norton, Jr., Justin Ringuette, Sandy Collins
  • Patent number: 12165343
    Abstract: A depth estimation system can enhance depth estimation using a brightness image. Light is projected onto an object. The object reflects at least a portion of the projected light. The reflected light is at least partially captured by an image sensor, which generates image data. The depth estimation system may use the image data to generate both a depth image and a brightness image. The image sensor includes a plurality of pixels, each of which is associated with two ADCs. The ADCs receive different analog signals from the pixel and output different digital signals. The depth estimation system may use the different digital signals to determine the brightness value of a brightness pixel of the brightness image. The ADCs may be associated with one or more other pixels. The pixel and the one or more other pixels may be arranged in a same column in an array of the image sensor.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: December 10, 2024
    Assignee: Analog Devices International Unlimited Company
    Inventors: Javier Calpe Maravilla, Jonathan Ephraim David Hurwitz, Nicolas Le Dortz
  • Patent number: 12158926
    Abstract: Embodiments described herein provide for determining a probability distribution of a three-dimensional point in a template feature map matching a three-dimensional point in space. A dual-domain target structure tracking end-to-end system receives projection data in one dimension or two dimensions and a three-dimensional simulation image. The end-to-end system extracts a template feature map from the simulation image using segmentation. The end-to-end system extracts features from the projection data, transforms the features of the projection data into three-dimensional space, and sequences the three-dimensional space to generate a three-dimensional feature map. The end-to-end system compares the template feature map to the generated three-dimensional feature map, determining an instantaneous probability distribution of the template feature map occurring in the three-dimensional feature map.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: December 3, 2024
    Assignee: SIEMENS HEALTHINEERS INTERNATIONAL AG
    Inventors: Pascal Paysan, Michal Walczak, Liangjia Zhu, Toon Roggen, Stefan Scheib
  • Patent number: 12148220
    Abstract: A method is described for providing a neural network for directly validating an environment map in a vehicle by means of sensor data. Valid or legitimate environment data is provided in a feature representation from map data and sensor data. Invalid or illegitimate environment data is provided in a feature representation from map data and sensor data. A neural network is trained using the valid environment data and the invalid environment data.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: November 19, 2024
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventors: Felix Drost, Sebastian Schneider
  • Patent number: 12144663
    Abstract: An X-ray CT apparatus and a correction method of projection data that are capable of suppressing artifacts generated in the vicinity of an edge portion of a test subject are provided. The X-ray CT apparatus for photographing a test subject is characterized by comprising: a correction data creation unit that creates correction data using difference data between measurement projection data for each X-ray energy obtained by photographing a known phantom having a known composition, a known shape, and a size smaller than a photographing field of view of the X-ray CT apparatus and calculation projection data for each X-ray energy calculated on the basis of X-ray transmission lengths obtained from the shape of the known phantom; and a correction unit that corrects projection data for each X-ray energy of the test subject using the correction data.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: November 19, 2024
    Assignee: FUJIFILM Corporation
    Inventor: Kazuma Yokoi
  • Patent number: 12137289
    Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
    Type: Grant
    Filed: May 23, 2024
    Date of Patent: November 5, 2024
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
  • Patent number: 12126912
    Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
    Type: Grant
    Filed: May 23, 2024
    Date of Patent: October 22, 2024
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
  • Patent number: 12125227
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and/or implementing machine learning models utilizing compressed log scene measurement maps. For example, the disclosed system generates compressed log scene measurement maps by converting scene measurement maps to compressed log scene measurement maps by applying a logarithmic function. In particular, the disclosed system uses scene measurement distribution metrics from a digital image to determine a base for the logarithmic function. In this way, the compressed log scene measurement maps normalize ranges within a digital image and accurately differentiate between scene objects at a variety of depths. Moreover, for training, the disclosed system generates a predicted scene measurement map via a machine learning model and compares the predicted scene measurement map with a compressed log ground truth map.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: October 22, 2024
    Assignee: Adobe Inc.
    Inventor: Jianming Zhang
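The core transform above, a logarithm whose base is derived from the map's own distribution, can be sketched as follows. Using the 95th/5th percentile ratio as the base is an illustrative choice; the abstract only says the base comes from scene measurement distribution metrics, and the function name is hypothetical.

```python
import numpy as np

def compress_log_map(scene_map, eps=1e-6):
    """Compress a scene measurement (e.g. depth) map with a logarithm whose
    base is derived from the map's own value distribution, normalizing the
    range while preserving ordering across near and far scene elements."""
    lo, hi = np.percentile(scene_map, [5, 95])
    base = max(hi / max(lo, eps), 1.0 + eps)   # data-driven log base
    return np.log(scene_map + eps) / np.log(base)
```

Because the logarithm is strictly increasing, relative depth ordering is preserved while large depth ratios are compressed into a narrow output range.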
  • Patent number: 12118443
    Abstract: A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to, receive, from a first sensor of the device, first sensor output of a first type, and receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: October 15, 2024
    Assignee: Apple Inc.
    Inventors: Charles Maalouf, Shawn R. Scully, Christopher B. Fleizach, Tu K. Nguyen, Lilian H. Liang, Warren J. Seto, Julian Quintana, Michael J. Beyhs, Hojjat Seyed Mousavi, Behrooz Shahsavari
  • Patent number: 12114100
    Abstract: An image processing method in remote control, a device, an apparatus and a program product are provided and related to the field of automatic driving technologies. The specific implementation solution includes receiving image information sent by a vehicle, wherein the image information includes multiple-channel image data collected by the vehicle; performing a stitching process on the multiple-channel image data, to obtain a stitched image; sending the stitched image to a remote cockpit apparatus for controlling the vehicle.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: October 8, 2024
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Qingrui Sun, Jingchao Feng, Liming Xia, Zhuo Chen
  • Patent number: 12100221
    Abstract: A method and electronic device for detecting an object are disclosed. The method includes generating a cluster of points representative of the surroundings of a self-driving car (SDC), generating by a first Neural Network (NN) a first feature vector based on the cluster indicative of a local context of the given object in the surroundings of the SDC, generating by a second NN second feature vectors for respective points from the cluster based on a portion of the point cloud, where a given second feature vector is indicative of the local and global context of the given object, generating by the first NN a third feature vector for the given object based on the second feature vectors representative of the given object, and generating by a third NN a bounding box around the given object using the third feature vector.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: September 24, 2024
    Assignee: Y.E. Hub Armenia LLC
    Inventors: Andrey Olegovich Rykov, Artem Andreevich Filatov
  • Patent number: 12079881
    Abstract: Systems and methods for mapping an electronic document to a particular transaction category are disclosed. An example method may be performed by one or more processors of a categorization system and includes receiving, from a user via an interface, an electronic document associated with a transaction between the user and a particular establishment, receiving, from the user via the interface, an image of the particular establishment, identifying in the image, using an image recognition engine, at least one of a sign or a symbol representative of the particular establishment, extracting, using an analytics module, location information from at least one of the image or a mobile device, determining, using the analytics module, a name of the particular establishment based on at least one of the location information or the at least one sign or symbol, and mapping the electronic document to a particular transaction category based on the determined name.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: September 3, 2024
    Assignee: Intuit Inc.
    Inventors: Wolfgang Paulus, Luis Felipe Cabrera, Mike Graves
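    The final mapping step, from a recognized establishment name to a transaction category, can be sketched as a keyword lookup. The keyword table and category names below are invented for illustration and do not come from the patent:

    ```python
    # hypothetical keyword -> category table
    CATEGORY_BY_KEYWORD = {"coffee": "Meals", "fuel": "Travel", "depot": "Supplies"}

    def categorize(document, establishment_name):
        """Map a receipt to a transaction category from the establishment
        name recognized in the storefront image."""
        name = establishment_name.lower()
        for keyword, category in CATEGORY_BY_KEYWORD.items():
            if keyword in name:
                return {**document, "category": category}
        return {**document, "category": "Uncategorized"}
    ```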
  • Patent number: 12069373
    Abstract: Provided is a monitoring system including: a monitoring apparatus configured to monitor a target; and an operating apparatus configured to operate the monitoring apparatus. In the monitoring system, the operating apparatus includes a controller, and the controller is configured to obtain target information from an outside; receive monitoring information from the monitoring apparatus; determine driving information for driving the monitoring apparatus so that the target is positioned in a monitoring area of the monitoring apparatus, based on the target information and the monitoring information; determine an interlocking field of view (FOV) for adjusting the monitoring area on the basis of the driving information; and transmit the interlocking FOV and a driving angle based on the driving information, to the monitoring apparatus.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: August 20, 2024
    Assignee: HANWHA AEROSPACE CO., LTD.
    Inventor: Bong Kyung Suk
  • Patent number: 12067078
    Abstract: An edge device is capable of requesting a server including a first processing processor dedicated to specific processing to execute predetermined processing, and the edge device includes: a determination unit configured to determine whether the predetermined processing is processable by a second processing processor dedicated to the specific processing included in the edge device; and a control unit configured to cause the second processing processor of the edge device to execute the predetermined processing in a case where it is determined that the predetermined processing is processable.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: August 20, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kenta Usami
  • Patent number: 12051221
    Abstract: Disclosed is a pose identification method including obtaining a depth image of a target, obtaining feature information of the depth image and position information corresponding to the feature information, and obtaining a pose identification result of the target based on the feature information and the position information.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: July 30, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Zhihua Liu, Yamin Mao, Hongseok Lee, Paul Oh, Qiang Wang, Yuntae Kim, Weiheng Liu
  • Patent number: 12045284
    Abstract: A method and system provide for searching a computer-aided design (CAD) drawing. A CAD drawing is obtained and includes vector based geometric entities. For each entity, primitives are extracted and held in a graph with graph nodes that record entity paths. A feature coordinate system is created for each of the entities using the primitives. The primitives are transformed from a world coordinate system to feature coordinates of the feature coordinate system. Geometry data of the transformed entities is encoded into index codes that are utilized in an index table as keys with the graph nodes as values. A target geometric entity is identified and a target index code is determined and used to query the index table to identify instances of the target geometric entity in the CAD drawing. Found instances in the CAD drawing are displayed in a visually distinguishable manner.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: July 23, 2024
    Assignee: AUTODESK, INC.
    Inventor: Ping Zou
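    The search scheme in this abstract, transforming each entity into its own feature coordinate system, encoding the normalized geometry into an index code, and using those codes as keys into an index table, can be sketched in miniature. Everything below is a simplified stand-in (translation-only normalization, 2D points), not Autodesk's encoding:

    ```python
    def to_feature_coords(points):
        # translate so the entity's first vertex is the origin -- a stand-in
        # for the per-entity feature coordinate system
        ox, oy = points[0]
        return [(round(x - ox, 6), round(y - oy, 6)) for x, y in points]

    def index_code(points):
        # encode the normalized geometry as a hashable key
        return tuple(to_feature_coords(points))

    def build_index(entities):
        # index table: code -> list of graph nodes (entity ids)
        table = {}
        for node_id, pts in entities.items():
            table.setdefault(index_code(pts), []).append(node_id)
        return table

    entities = {
        "door_a": [(0, 0), (1, 0), (1, 2)],
        "door_b": [(5, 3), (6, 3), (6, 5)],   # same shape, different position
        "window": [(2, 2), (4, 2)],
    }
    table = build_index(entities)
    # query: find all instances matching the target entity's index code
    matches = table[index_code(entities["door_a"])]
    ```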
  • Patent number: 12032390
    Abstract: An autonomous landing method of an unmanned aerial vehicle (UAV) includes: obtaining a point cloud distribution map of a planned landing region; determining a detection region in the point cloud distribution map according to an actual landing region of the UAV in the point cloud distribution map; dividing the detection region into at least two designated regions, each of the at least two designated regions corresponding to a part of the planned landing region; determining whether a quantity of point clouds in each of the at least two designated regions is less than a preset threshold; and if a quantity of point clouds in a designated region is less than the preset threshold, controlling the UAV to fly away from the designated region of which the quantity of point clouds is less than the preset threshold, or controlling the UAV to stop landing.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: July 9, 2024
    Assignee: AUTEL ROBOTICS CO., LTD.
    Inventor: Xin Zheng
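    The safety check described above, splitting the detection region into designated regions and flagging any whose point count falls below a threshold, can be sketched as a grid histogram. Grid shape and bounds handling are assumptions for illustration:

    ```python
    def sparse_regions(points, bounds, nx, ny, threshold):
        """Split the detection region into an nx*ny grid of designated
        regions and return the cells whose point-cloud count is below
        threshold (i.e. regions the UAV should avoid or abort over)."""
        x0, y0, x1, y1 = bounds
        counts = {}
        for x, y in points:
            cx = min(int((x - x0) / (x1 - x0) * nx), nx - 1)
            cy = min(int((y - y0) / (y1 - y0) * ny), ny - 1)
            counts[(cx, cy)] = counts.get((cx, cy), 0) + 1
        return [(cx, cy) for cx in range(nx) for cy in range(ny)
                if counts.get((cx, cy), 0) < threshold]
    ```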
  • Patent number: 12033351
    Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: July 9, 2024
    Assignee: Google LLC
    Inventors: Juan David Hincapie, Andre Le
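    The query step, finding points of interest within a particular distance of the pose's location, can be sketched as a radius filter. The in-memory list stands in for the mapping database, and the equirectangular distance approximation is an assumption (fine at city scale):

    ```python
    import math

    def nearby_pois(pose, pois, radius_m):
        """Return names of POIs within radius_m of the device pose's
        location, nearest first."""
        lat, lon = pose["location"]

        def dist(p):
            # equirectangular approximation on a spherical Earth
            dx = math.radians(p["lon"] - lon) * math.cos(math.radians(lat))
            dy = math.radians(p["lat"] - lat)
            return 6371000 * math.hypot(dx, dy)

        hits = sorted((dist(p), p["name"]) for p in pois)
        return [name for d, name in hits if d <= radius_m]

    pose = {"location": (37.0, -122.0)}
    pois = [{"name": "Cafe", "lat": 37.0005, "lon": -122.0},
            {"name": "Museum", "lat": 37.1, "lon": -122.0}]
    near = nearby_pois(pose, pois, 200)
    ```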
  • Patent number: 12026823
    Abstract: A method for generating a moving volumetric image of a moving object from data recorded by a user-held device comprising: acquiring, from the user-held device, video and depth data of the moving object, and pose data; and communicating the acquired data to a computing module. Then, processing the video data to extract images that are segmented to form segmented images; passing the segmented images, depth data and pose data through a processing module to form a sequence of volumetric meshes defining the outer surface of the moving object; rendering the sequence of volumetric meshes with a visual effect at least partly determined from the video data to form a rendered moving volumetric image; and communicating the rendered moving volumetric image to at least one device including the user-held device. Then, displaying, at a display of the at least one device, the rendered moving volumetric image.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: July 2, 2024
    Assignee: Volograms Limited
    Inventors: Rafael Pagés, Jan Ondřej, Konstantinos Amplianitis, Sergio Arnaldo, Valeria Olyunina
  • Patent number: 12024125
    Abstract: A control device including a control section configured to cause a ranging process of measuring a distance between communication devices to be executed a designated number of times, and control a subsequent process that is a process of using a ranging value that satisfies a designated allowable value, on a basis of whether or not at least any of a plurality of ranging values that have been acquired satisfies the designated allowable value. The control section controls start of the subsequent process when the ranging value that satisfies the designated allowable value is acquired.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: July 2, 2024
    Assignee: KABUSHIKI KAISHA TOKAI RIKA DENKI SEISAKUSHO
    Inventors: Yuki Kono, Shigenori Nitta, Masateru Furuta, Yosuke Ohashi
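    The control flow in this abstract, repeating the ranging process a designated number of times and starting the subsequent process only once a ranging value satisfies the designated allowable value, is essentially a bounded retry loop. A minimal sketch, with the measurement source and unlock action left abstract:

    ```python
    def ranging_then_unlock(measure, allowed_max, attempts):
        """Run the ranging process up to `attempts` times; return the first
        ranging value satisfying the allowable value, else None (the
        subsequent process does not start)."""
        for _ in range(attempts):
            value = measure()
            if value <= allowed_max:
                return value  # subsequent process (e.g. unlock) may start
        return None

    readings = iter([5.0, 3.2, 1.1])
    result = ranging_then_unlock(lambda: next(readings), 2.0, 3)
    ```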
  • Patent number: 12020415
    Abstract: One variation of a method for monitoring manufacture of assembly units includes: receiving selection of a target location hypothesized by a user to contain an origin of a defect in assembly units of an assembly type; accessing a feature map linking non-visual manufacturing features to physical locations within the assembly type; for each assembly unit, accessing an inspection image of the assembly unit recorded by an optical inspection station during production of the assembly unit, projecting the target location onto the inspection image, detecting visual features proximal the target location within the inspection image, and aggregating non-visual manufacturing features associated with locations proximal the target location and representing manufacturing inputs into the assembly unit based on the feature map; and calculating correlations between visual and non-visual manufacturing features associated with locations proximal the target location and the defect for the set of assembly units.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: June 25, 2024
    Assignee: Instrumental, Inc.
    Inventors: Samuel Bruce Weiss, Anna-Katrina Shedletsky, Simon Kozlov, Tilmann Bruckhaus, Shilpi Kumar, Isaac Sukin, Ian Theilacker, Brendan Green
  • Patent number: 12020456
    Abstract: An external parameter calibration method for an image acquisition apparatus is disclosed. The method includes acquiring images captured by the image acquisition apparatus during the driving of the vehicle; the images contain reference objects. The reference objects in the images are divided into a number of sections along the direction of the road in which the vehicle is located, and the reference objects in each of the sections are fitted into straight lines. Pitch angles and yaw angles of the image acquisition apparatus are determined based on the vanishing points of the straight lines in each of the sections. The sequences of determined pitch and yaw angles are filtered, and straight portions of the road are obtained from the filtered sequences. Data of the pitch angles and yaw angles corresponding to the straight portions are stored to a data stack.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: June 25, 2024
    Assignee: Black Sesame Technologies Inc.
    Inventors: Zhiyong Tang, Jiang Peng, Tao Zhang
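    The geometric relation this method relies on, recovering pitch and yaw from the vanishing point of lane lines given pinhole camera intrinsics, can be sketched directly. This is the standard textbook relation, not Black Sesame's exact formulation; the sign conventions are assumptions:

    ```python
    import math

    def angles_from_vanishing_point(u, v, fx, fy, cx, cy):
        """Recover camera (pitch, yaw) from the vanishing point (u, v) of
        lines parallel to the road, given pinhole intrinsics.

        A vanishing point at the principal point means the optical axis is
        aligned with the road, so both angles are zero.
        """
        yaw = math.atan2(u - cx, fx)    # horizontal offset -> yaw
        pitch = math.atan2(cy - v, fy)  # vertical offset -> pitch (up positive)
        return pitch, yaw
    ```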
  • Patent number: 12007481
    Abstract: A sensor includes an avalanche photodiode (APD), a first resistor, a second resistor, and a rectification element. The first resistor is connected between a current output terminal of the APD and a first output terminal. The second resistor and the rectification element are connected in series between the current output terminal and a second output terminal. The rectification element is connected between the second resistor and the second output terminal.
    Type: Grant
    Filed: April 10, 2023
    Date of Patent: June 11, 2024
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventors: Hiroshi Kubota, Nobu Matsumoto
  • Patent number: 11998279
    Abstract: Embodiments of the present invention set forth a method to update an operation pathway for a robotic arm assembly in response to a movement of a patient. The method includes processing a two-dimensional image associated with a tag having a spatial relationship with the patient. A corresponding movement of the tag in response to the movement of the patient is determined based on the spatial relationship. The tag includes a first point and a second point and the two-dimensional image includes a first point image and a second point image. The method also includes associating the first point image with the first point and the second point image with the second point and updating the operation pathway based on a conversion matrix of the first point and the second point, and the first point image and the second point image.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: June 4, 2024
    Assignee: Brain Navi Biotechnology Co., Ltd.
    Inventors: Chieh Hsiao Chen, Kuan Ju Wang
  • Patent number: 12000963
    Abstract: Provided is a light detection and ranging (LiDAR) device including a light transmitter including a plurality of light sources, each of the plurality of light sources being configured to emit light toward an object, a light receiver including a plurality of light detection elements, each of the plurality of light detection elements being configured to detect reflected light reflected from the object that is irradiated with the light emitted by the plurality of light sources, and the light receiver being configured to remove crosstalk from second detection information output by at least one light detection element of the plurality of light detection elements based on first detection information output by any one of remaining light detection elements of the plurality of light detection elements, and a processor configured to obtain information on the object based on the second detection information with the crosstalk removed.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: June 4, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jungwoo Kim, Tatsuhiro Otsuka, Yongchul Cho
  • Patent number: 11998799
    Abstract: The present disclosure relates to effective, efficient, and economical methods and systems for improving athletic performance by tracking objects typically thrown or hit by athletes. In particular, the present disclosure relates to a unique configuration of technology wherein an electronic display is located at or behind the target and one or more cameras are positioned to observe the target. Once an object is thrown or hit, one or more cameras may observe and track the object. Further, an electronic display may be used to provide targeting information to an athlete and also to determine the athlete's accuracy.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: June 4, 2024
    Assignee: SmartMitt LLC
    Inventor: Thomas R. Frenz
  • Patent number: 11995749
    Abstract: Various embodiments disclosed herein provide techniques for generating image data of a three-dimensional (3D) animatable asset. A rendering module executing on a computer system accesses a machine learning model that has been trained via first image data of the 3D animatable asset generated from first rig vector data. The rendering module receives second rig vector data. The rendering module generates, via the machine learning model, second image data of the 3D animatable asset based on the second rig vector data.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 28, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Borer, Jakob Buhmann, Martin Guay