Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 12260577
    Abstract: A method for training a depth estimation model implemented in an electronic device includes obtaining a first image pair from a training data set; inputting the first left image into the depth estimation model, and obtaining a disparity map; adding the first left image and the disparity map, and obtaining a second right image; calculating a mean square error and cosine similarity of pixel values of all corresponding pixels in the first right image and the second right image; calculating mean values of the mean square error and the cosine similarity, and obtaining a first mean value of the mean square error and a second mean value of the cosine similarity; adding the first mean value and the second mean value, and obtaining a loss value of the depth estimation model; and iteratively training the depth estimation model according to the loss value.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: March 25, 2025
    Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Yu-Hsuan Chien, Chin-Pin Kuo
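    A minimal NumPy sketch of the loss described in the abstract of patent 12260577 above, assuming the synthesized right view is formed by pixel-wise addition of the left image and the disparity map and that cosine similarity is taken per pixel over the color channels; the function and variable names are illustrative, not from the patent.
```python
import numpy as np

def depth_loss(left_img, right_img, disparity_map):
    """Sketch of the loss in patent 12260577: synthesize a right view by adding
    the predicted disparity to the left image, then combine the mean of the
    per-pixel squared error with the mean cosine similarity."""
    # Synthesize the "second right image" (abstract: left image + disparity map).
    synth_right = left_img + disparity_map

    # First mean value: mean square error over all corresponding pixels.
    mse_mean = np.mean((right_img - synth_right) ** 2)

    # Second mean value: cosine similarity per pixel over the channel axis.
    num = np.sum(right_img * synth_right, axis=-1)
    den = (np.linalg.norm(right_img, axis=-1) *
           np.linalg.norm(synth_right, axis=-1) + 1e-8)
    cos_mean = np.mean(num / den)

    # Loss value used to iteratively train the depth estimation model.
    return mse_mean + cos_mean

# Toy example with random 4x4 RGB images.
rng = np.random.default_rng(0)
left, right, disp = (rng.random((4, 4, 3)) for _ in range(3))
print(depth_loss(left, right, disp))
```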
  • Patent number: 12254701
    Abstract: A vehicle periphery monitoring device is configured to: specify positions of detection points of an obstacle with respect to a vehicle based on detection results of multiple periphery monitoring sensors equipped to the vehicle; update the positions of the detected detection points corresponding to a position change of the vehicle; store the updated positions of the detection points in a memory that maintains stored information during a parked state of the vehicle; and estimate, at a start time of the vehicle from the parked state, a position of a detection point, which is located in a blind spot and is not detected by all of the multiple periphery monitoring sensors at the start time of the vehicle, using the positions of the detection points stored in the memory.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: March 18, 2025
    Assignees: DENSO CORPORATION, TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Yousuke Miyamoto, Akihiro Kida, Motonari Ohbayashi
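    A sketch of the detection-point bookkeeping described for patent 12254701 above, assuming planar (SE(2)) vehicle motion and 2D points expressed in the vehicle frame; the transform representation is an assumption, since the abstract only says the stored positions are updated for the vehicle's position change.
```python
import numpy as np

def update_stored_points(points_vehicle, dx, dy, dyaw):
    """Re-express stored obstacle detection points in the vehicle frame after
    the vehicle moves by (dx, dy) and rotates by dyaw (planar SE(2) update).
    Sketch only; the patent does not specify the transform representation."""
    c, s = np.cos(-dyaw), np.sin(-dyaw)
    R = np.array([[c, -s], [s, c]])
    # Shift points by the inverse of the vehicle translation, then rotate back.
    return (points_vehicle - np.array([dx, dy])) @ R.T

# Points detected before parking, kept in memory that survives the parked state.
stored = np.array([[2.0, 1.0], [0.5, -1.2]])
# At start-up, blind-spot detection points are estimated from the stored positions.
print(update_stored_points(stored, dx=0.3, dy=0.0, dyaw=0.05))
```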
  • Patent number: 12243253
    Abstract: There is provided a depth information generating apparatus. A first generating unit generates first depth information on the basis of a plurality of viewpoint images which are obtained from first shooting and which have mutually-different viewpoints. A second generating unit generates second depth information for a captured image obtained from second shooting by correcting the first depth information so as to reflect a change in depth caused by a difference in a focal distance of the second shooting relative to the first shooting.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: March 4, 2025
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yohei Horikawa, Takeshi Ogawa
  • Patent number: 12231775
    Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
    Type: Grant
    Filed: June 5, 2024
    Date of Patent: February 18, 2025
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
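    A toy sketch of the reconstruction step named in the abstract of patent 12231775 above (prediction plus a dequantized, inverse-transformed residual). The DCT and uniform quantizer are stand-ins; the actual 360-degree codec and projection-format handling are not specified by the abstract.
```python
import numpy as np
from scipy.fft import idctn

def decode_block(pred_block, quantized_coeffs, qstep):
    """Toy reconstruction of one block: dequantize, inverse-transform, then add
    the residual to the prediction generated from syntax information."""
    residual = idctn(quantized_coeffs * qstep, norm="ortho")
    return np.clip(pred_block + residual, 0, 255)

pred = np.full((8, 8), 128.0)            # prediction image block
coeffs = np.zeros((8, 8)); coeffs[0, 0] = 4.0  # quantized residual coefficients
print(decode_block(pred, coeffs, qstep=8.0)[0, :4])
```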
  • Patent number: 12223665
    Abstract: A system includes a first type of measurement device that captures first 2D images, a second type of measurement device that captures 3D scans. A 3D scan includes a point cloud and a second 2D image. The system also includes processors that register the first 2D images. The method includes accessing the 3D scan that records at least a portion of the surrounding environment that is also captured by a first 2D image. Further, 2D features in the second 2D image are detected, and 3D coordinates from the point cloud are associated to the 2D features. 2D features are also detected in the first 2D image, and matching 2D features from the first 2D image and the second 2D image are identified. A position and orientation of the first 2D image is calculated in a coordinate system of the 3D scan using the matching 2D features.
    Type: Grant
    Filed: August 10, 2022
    Date of Patent: February 11, 2025
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Matthias Wolke, Jafar Amiri Parian
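    A hedged sketch of the registration flow described for patent 12223665 above, using OpenCV ORB features, brute-force matching, and PnP with RANSAC as stand-ins for the unspecified feature detector, matcher, and pose solver; `scan_depth_xyz` is an assumed per-pixel 3D lookup built from the scan's point cloud.
```python
import cv2
import numpy as np

def register_photo_to_scan(photo_gray, scan_gray, scan_depth_xyz, K):
    """Sketch of the registration in patent 12223665: match 2D features between
    the camera photo and the scan's 2D image, look up 3D coordinates for the
    scan features from the point cloud, then solve the photo's pose in the
    scan's coordinate system."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(photo_gray, None)
    kp2, des2 = orb.detectAndCompute(scan_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # 2D points in the photo, 3D points taken from the scan's point cloud
    # at the matched pixel locations of the scan's 2D image.
    img_pts = np.float32([kp1[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([scan_depth_xyz[int(kp2[m.trainIdx].pt[1]),
                                         int(kp2[m.trainIdx].pt[0])]
                          for m in matches])

    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return ok, rvec, tvec  # position and orientation of the photo
```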
  • Patent number: 12225180
    Abstract: A method of generating stereoscopic display contents includes obtaining, from a Red, Green, Blue plus Distance (RGB-D) image using a processor, a first Red, Green, and Blue (RGB) image and a depth image; determining, based on depth values in the depth image, a first disparity map in accordance with the RGB-D image; determining a second disparity map and a third disparity map by transforming the first disparity map using a disparity distribution ratio; and generating, by the processor, the pair of stereoscopic images comprising a second RGB image and a third RGB image, wherein the second RGB image is generated by shifting a first set of pixels in the first RGB image based on the second disparity map, and the third RGB image is generated by shifting a second set of pixels in the first RGB image based on the third disparity map.
    Type: Grant
    Filed: October 25, 2022
    Date of Patent: February 11, 2025
    Assignee: Orbbec 3D Technology International, Inc.
    Inventors: Xin Xie, Nan Xu, Xu Chen
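    A simplified sketch of the stereoscopic-pair generation described for patent 12225180 above, assuming disparity proportional to inverse depth and a plain forward pixel shift without hole filling; the left/right split via `ratio` is an illustrative stand-in for the patent's disparity distribution ratio.
```python
import numpy as np

def make_stereo_pair(rgb, depth, max_disp=16.0, ratio=0.5):
    """Sketch of patent 12225180: derive a disparity map from depth, split it
    into two maps with a distribution ratio, and shift pixels of the source
    RGB image to synthesize the two views."""
    h, w, _ = rgb.shape
    disp = max_disp / np.maximum(depth, 1e-3)            # first disparity map
    disp_l, disp_r = ratio * disp, (1.0 - ratio) * disp  # second and third maps

    def shift(image, disparity, sign):
        out = np.zeros_like(image)
        xs = np.arange(w)
        for y in range(h):
            new_x = np.clip((xs + sign * disparity[y]).astype(int), 0, w - 1)
            out[y, new_x] = image[y, xs]
        return out

    return shift(rgb, disp_l, +1), shift(rgb, disp_r, -1)

rgb = np.random.rand(4, 6, 3)
depth = np.full((4, 6), 2.0)
left, right = make_stereo_pair(rgb, depth)
print(left.shape, right.shape)
```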
  • Patent number: 12217415
    Abstract: A method of manufacturing a display device includes preparing a cover substrate assembly including a cover substrate, a protective film attached to a lower surface of the cover substrate, and an inspection pattern disposed between the cover substrate and the protective film, obtaining a first image by imaging the inspection pattern through an upper surface of the cover substrate, obtaining noise data by comparing a reference image of the inspection pattern with the first image, obtaining a second image by imaging the cover substrate assembly, obtaining a corrected image of the second image by reflecting the noise data in the second image, and detecting a defect of the cover substrate based on the corrected image of the second image.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: February 4, 2025
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Tae-Jin Hwang, Hyeongmin Ahn
  • Patent number: 12198422
    Abstract: A method includes, during a delivery process of an unmanned aerial vehicle (UAV), receiving, by an image processing system, a depth image captured by a downward-facing stereo camera on the UAV. One or more pixels are within a sample area of the depth image and are associated with corresponding depth values indicative of distances of one or more objects to the downward-facing stereo camera. The method also includes determining, by the image processing system, an estimated depth value representative of depth values within the sample area. The method further includes determining that the estimated depth value is below a trigger depth. The method further includes, based at least on determining that the estimated depth value is below the trigger depth, aborting the delivery process of the UAV.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: January 14, 2025
    Assignee: Wing Aviation LLC
    Inventors: Louis Kenneth Dressel, Kyle David Julian
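    A small sketch of the abort check described for patent 12198422 above; the median is an assumed aggregator, since the abstract only requires an estimated depth value representative of the sample area.
```python
import numpy as np

def should_abort_delivery(depth_image, sample_slice, trigger_depth_m):
    """Sketch of the check in patent 12198422: summarize depth values inside a
    sample area of the downward-facing stereo depth image and abort the
    delivery if the representative value falls below the trigger depth."""
    sample = depth_image[sample_slice]
    valid = sample[np.isfinite(sample) & (sample > 0)]   # drop invalid pixels
    if valid.size == 0:
        return False                                     # nothing to decide on
    return float(np.median(valid)) < trigger_depth_m

depth = np.full((480, 640), 12.0)
depth[200:280, 280:360] = 1.5          # an object close below the UAV
print(should_abort_delivery(depth, np.s_[200:280, 280:360], trigger_depth_m=3.0))
```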
  • Patent number: 12184964
    Abstract: One embodiment provides a method, the method including: capturing, utilizing at least one image capture device coupled to an information handling device, video image data of a user; identifying, utilizing a gesture detection system and within the video image data, a location of hands of the user in relation to a gesture zone; and performing, utilizing the gesture detection system and based upon the location of the hands of the user in relation to the gesture zone, an action, wherein producing the framed video stream comprises performing an action based upon the location of the hands of the user in relation to the gesture zone. Other aspects are claimed and described.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: December 31, 2024
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: John W Nicholson, Robert James Norton, Jr., Justin Ringuette, Sandy Collins
  • Patent number: 12165343
    Abstract: A depth estimation system can enhance depth estimation using a brightness image. Light is projected onto an object. The object reflects at least a portion of the projected light. The reflected light is at least partially captured by an image sensor, which generates image data. The depth estimation system may use the image data to generate both a depth image and a brightness image. The image sensor includes a plurality of pixels, each of which is associated with two ADCs. The ADCs receive different analog signals from the pixel and output different digital signals. The depth estimation system may use the different digital signals to determine the brightness value of a brightness pixel of the brightness image. The ADCs may be associated with one or more other pixels. The pixel and the one or more other pixels may be arranged in a same column in an array of the image sensor.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: December 10, 2024
    Assignee: Analog Devices International Unlimited Company
    Inventors: Javier Calpe Maravilla, Jonathan Ephraim David Hurwitz, Nicolas Le Dortz
  • Patent number: 12158926
    Abstract: Embodiments described herein provide for determining a probability distribution of a three-dimensional point in a template feature map matching a three-dimensional point in space. A dual-domain target structure tracking end-to-end system receives projection data in one dimension or two dimensions and a three-dimensional simulation image. The end-to-end system extracts a template feature map from the simulation image using segmentation. The end-to-end system extracts features from the projection data, transforms the features of the projection data into three-dimensional space, and sequences the three-dimensional space to generate a three-dimensional feature map. The end-to-end system compares the template feature map to the generated three-dimensional feature map, determining an instantaneous probability distribution of the template feature map occurring in the three-dimensional feature map.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: December 3, 2024
    Assignee: SIEMENS HEALTHINEERS INTERNATIONAL AG
    Inventors: Pascal Paysan, Michal Walczak, Liangjia Zhu, Toon Roggen, Stefan Scheib
  • Patent number: 12144663
    Abstract: An X-ray CT apparatus and a correction method of projection data that are capable of suppressing artifacts generated in the vicinity of an edge portion of a test subject are provided. The X-ray CT apparatus for photographing a test subject is characterized by comprising: a correction data creation unit that creates correction data using difference data between measurement projection data for each X-ray energy obtained by photographing a known phantom having a known composition, a known shape, and a size smaller than a photographing field of view of the X-ray CT apparatus and calculation projection data for each X-ray energy calculated on the basis of X-ray transmission lengths obtained from the shape of the known phantom; and a correction unit that corrects projection data for each X-ray energy of the test subject using the correction data.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: November 19, 2024
    Assignee: FUJIFILM Corporation
    Inventor: Kazuma Yokoi
  • Patent number: 12148220
    Abstract: A method is described for providing a neural network for directly validating an environment map in a vehicle by means of sensor data. Valid or legitimate environment data is provided in a feature representation from map data and sensor data. Invalid or illegitimate environment data is provided in a feature representation from map data and sensor data. A neural network is trained using the valid environment data and the invalid environment data.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: November 19, 2024
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventors: Felix Drost, Sebastian Schneider
  • Patent number: 12137289
    Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
    Type: Grant
    Filed: May 23, 2024
    Date of Patent: November 5, 2024
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
  • Patent number: 12125227
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and/or implementing machine learning models utilizing compressed log scene measurement maps. For example, the disclosed system generates compressed log scene measurement maps by converting scene measurement maps to compressed log scene measurement maps by applying a logarithmic function. In particular, the disclosed system uses scene measurement distribution metrics from a digital image to determine a base for the logarithmic function. In this way, the compressed log scene measurement maps normalize ranges within a digital image and accurately differentiate between scene elements and objects at a variety of depths. Moreover, for training, the disclosed system generates a predicted scene measurement map via a machine learning model and compares the predicted scene measurement map with a compressed log ground truth map.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: October 22, 2024
    Assignee: Adobe Inc.
    Inventor: Jianming Zhang
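    A sketch of the log compression described for patent 12125227 above. The percentile-based rule for choosing the logarithm base is an assumption standing in for the patent's scene measurement distribution metrics.
```python
import numpy as np

def compressed_log_map(scene_map):
    """Sketch of patent 12125227's idea: pick a log base from the map's
    distribution metrics, then log-compress so near and far scene elements
    occupy a more uniform range."""
    lo, hi = np.percentile(scene_map, [5, 95])
    base = max(hi / max(lo, 1e-6), 1.0 + 1e-6)     # base from distribution metrics
    normalized = np.maximum(scene_map / max(lo, 1e-6), 1e-6)
    return np.log(normalized) / np.log(base)       # log_base(x) via change of base

depth = np.array([[0.5, 1.0, 2.0], [4.0, 20.0, 80.0]])
print(np.round(compressed_log_map(depth), 3))
```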
  • Patent number: 12126912
    Abstract: Disclosed are methods and apparatuses for image data encoding/decoding. A method for decoding a 360-degree image includes the steps of: receiving a bitstream obtained by encoding a 360-degree image; generating a prediction image by making reference to syntax information obtained from the received bitstream; adding the generated prediction image to a residual image obtained by dequantizing and inverse-transforming the bitstream, so as to obtain a decoded image; and reconstructing the decoded image into a 360-degree image according to a projection format. Therefore, the performance of image data compression can be improved.
    Type: Grant
    Filed: May 23, 2024
    Date of Patent: October 22, 2024
    Assignee: B1 INSTITUTE OF IMAGE TECHNOLOGY, INC.
    Inventor: Ki Baek Kim
  • Patent number: 12118443
    Abstract: A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: October 15, 2024
    Assignee: Apple Inc.
    Inventors: Charles Maalouf, Shawn R. Scully, Christopher B. Fleizach, Tu K. Nguyen, Lilian H. Liang, Warren J. Seto, Julian Quintana, Michael J. Beyhs, Hojjat Seyed Mousavi, Behrooz Shahsavari
  • Patent number: 12114100
    Abstract: An image processing method in remote control, a device, an apparatus and a program product are provided and related to the field of automatic driving technologies. The specific implementation solution includes receiving image information sent by a vehicle, wherein the image information includes multiple-channel image data collected by the vehicle; performing a stitching process on the multiple-channel image data, to obtain a stitched image; and sending the stitched image to a remote cockpit apparatus for controlling the vehicle.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: October 8, 2024
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Qingrui Sun, Jingchao Feng, Liming Xia, Zhuo Chen
  • Patent number: 12100221
    Abstract: A method and electronic device for detecting an object are disclosed. The method includes generating a cluster of points representative of the surroundings of a self-driving car (SDC), generating by a first Neural Network (NN) a first feature vector based on the cluster indicative of a local context of the given object in the surroundings of the SDC, generating by a second NN second feature vectors for respective points from the cluster based on a portion of the point cloud, where a given second feature vector is indicative of the local and global context of the given object, generating by the first NN a third feature vector for the given object based on the second feature vectors representative of the given object, and generating by a third NN a bounding box around the given object using the third feature vector.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: September 24, 2024
    Assignee: Y.E. Hub Armenia LLC
    Inventors: Andrey Olegovich Rykov, Artem Andreevich Filatov
  • Patent number: 12079881
    Abstract: Systems and methods for mapping an electronic document to a particular transaction category are disclosed. An example method may be performed by one or more processors of a categorization system and include receiving, from a user via an interface, an electronic document associated with a transaction between the user and a particular establishment, receiving, from the user via the interface, an image of the particular establishment, identifying in the image, using an image recognition engine, at least one of a sign or a symbol representative of the particular establishment, extracting, using an analytics module, location information from at least one of the image or a mobile device, determining, using the analytics module, a name of the particular establishment based on at least one of the location information or the at least one sign or symbol, and mapping the electronic document to a particular transaction category based on the determined name.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: September 3, 2024
    Assignee: Intuit Inc.
    Inventors: Wolfgang Paulus, Luis Felipe Cabrera, Mike Graves
  • Patent number: 12067078
    Abstract: An edge device is capable of requesting a server including a first processing processor dedicated to specific processing to execute predetermined processing, and the edge device includes: a determination unit configured to determine whether the predetermined processing is processable by a second processing processor dedicated to the specific processing included in the edge device; and a control unit configured to cause the second processing processor of the edge device to execute the predetermined processing in a case where it is determined that the predetermined processing is processable.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: August 20, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kenta Usami
  • Patent number: 12069373
    Abstract: Provided is a monitoring system including: a monitoring apparatus configured to monitor a target; and an operating apparatus configured to operate the monitoring apparatus. In the monitoring system, the operating apparatus includes a controller, and the controller is configured to obtain target information from an outside; receive monitoring information from the monitoring apparatus; determine driving information for driving the monitoring apparatus so that the target is positioned in a monitoring area of the monitoring apparatus, based on the target information and the monitoring information; determine an interlocking field of view (FOV) for adjusting the monitoring area on the basis of the driving information; and transmit the interlocking FOV and a driving angle based on the driving information, to the monitoring apparatus.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: August 20, 2024
    Assignee: HANWHA AEROSPACE CO., LTD.
    Inventor: Bong Kyung Suk
  • Patent number: 12051221
    Abstract: Disclosed is a pose identification method including obtaining a depth image of a target, obtaining feature information of the depth image and position information corresponding to the feature information, and obtaining a pose identification result of the target based on the feature information and the position information.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: July 30, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Zhihua Liu, Yamin Mao, Hongseok Lee, Paul Oh, Qiang Wang, Yuntae Kim, Weiheng Liu
  • Patent number: 12045284
    Abstract: A method and system provide for searching a computer-aided design (CAD) drawing. A CAD drawing is obtained and includes vector based geometric entities. For each entity, primitives are extracted and held in a graph with graph nodes that record entity paths. A feature coordinate system is created for each of the entities using the primitives. The primitives are transformed from a world coordinate system to feature coordinates of the feature coordinate system. Geometry data of the transformed entities is encoded into index codes that are utilized in an index table as keys with the graph nodes as values. A target geometric entity is identified and a target index code is determined and used to query the index table to identify instances of the target geometric entity in the CAD drawing. Found instances in the CAD drawing are displayed in a visually distinguishable manner.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: July 23, 2024
    Assignee: AUTODESK, INC.
    Inventor: Ping Zou
  • Patent number: 12033351
    Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: July 9, 2024
    Assignee: Google LLC
    Inventors: Juan David Hincapie, Andre Le
  • Patent number: 12032390
    Abstract: The autonomous landing method of a UAV includes: obtaining a point cloud distribution map of a planned landing region; determining a detection region in the point cloud distribution map according to an actual landing region of the UAV in the point cloud distribution map; dividing the detection region into at least two designated regions, each of the at least two designated regions corresponding to a part of the planned landing region; determining whether a quantity of point clouds in each of the at least two designated regions is less than a preset threshold; and if a quantity of point clouds in a designated region is less than the preset threshold, controlling the UAV to fly away from the designated region of which the quantity of point clouds is less than the preset threshold, or controlling the UAV to stop landing.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: July 9, 2024
    Assignee: AUTEL ROBOTICS CO., LTD.
    Inventor: Xin Zheng
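    A sketch of the landing check described for patent 12032390 above; the 2x2 grid and per-region threshold are illustrative, and the fly-away/stop-landing control itself is out of scope.
```python
import numpy as np

def landing_region_ok(points_xy, region_bounds, grid=(2, 2), min_points=50):
    """Sketch of patent 12032390's check: split the detection region into
    designated sub-regions and require each one to contain at least a preset
    number of point-cloud returns; otherwise the UAV should fly away from the
    sparse region or stop landing."""
    (xmin, xmax), (ymin, ymax) = region_bounds
    counts, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                                  bins=grid, range=[[xmin, xmax], [ymin, ymax]])
    return bool(np.all(counts >= min_points)), counts

pts = np.random.uniform(0.0, 4.0, size=(500, 2))
ok, counts = landing_region_ok(pts, ((0.0, 4.0), (0.0, 4.0)))
print(ok, counts)
```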
  • Patent number: 12026823
    Abstract: A method for generating a moving volumetric image of a moving object from data recorded by a user-held device comprising: acquiring, from the user-held device, video and depth data of the moving object, and pose data; and communicating the acquired data to a computing module. Then, processing the video data to extract images that are segmented to form segmented images; passing the segmented images, depth data and pose data through a processing module to form a sequence of volumetric meshes defining the outer surface of the moving object; rendering the sequence of volumetric meshes with a visual effect at least partly determined from the video data to form a rendered moving volumetric image; and communicating the rendered moving volumetric image to at least one device including the user-held device. Then, displaying, at a display of the at least one device, the rendered moving volumetric image.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: July 2, 2024
    Assignee: Volograms Limited
    Inventors: Rafael Pagés, Jan Ondřej, Konstantinos Amplianitis, Sergio Arnaldo, Valeria Olyunina
  • Patent number: 12024125
    Abstract: A control device including a control section configured to cause a ranging process of measuring a distance between communication devices to be executed a designated number of times, and control a subsequent process that is a process of using a ranging value that satisfies a designated allowable value, on the basis of whether or not at least any of a plurality of ranging values that have been acquired satisfies the designated allowable value. The control section controls start of the subsequent process when the ranging value that satisfies the designated allowable value is acquired.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: July 2, 2024
    Assignee: KABUSHIKI KAISHA TOKAI RIKA DENKI SEISAKUSHO
    Inventors: Yuki Kono, Shigenori Nitta, Masateru Furuta, Yosuke Ohashi
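    A pure-Python sketch of the control flow described for patent 12024125 above; `measure_range` is a hypothetical callable standing in for the device-to-device ranging process.
```python
def run_ranging(measure_range, allowed_max_m, attempts):
    """Sketch of patent 12024125's control flow: perform ranging a designated
    number of times and start the subsequent process as soon as any measured
    value satisfies the designated allowable value."""
    values = []
    for _ in range(attempts):
        value = measure_range()
        values.append(value)
        if value <= allowed_max_m:          # allowable value satisfied
            return True, values             # start the subsequent process
    return False, values                    # no value satisfied the limit

readings = iter([3.2, 2.7, 1.4, 1.1])
ok, seen = run_ranging(lambda: next(readings), allowed_max_m=1.5, attempts=4)
print(ok, seen)
```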
  • Patent number: 12020415
    Abstract: One variation of a method for monitoring manufacture of assembly units includes: receiving selection of a target location hypothesized by a user to contain an origin of a defect in assembly units of an assembly type; accessing a feature map linking non-visual manufacturing features to physical locations within the assembly type; for each assembly unit, accessing an inspection image of the assembly unit recorded by an optical inspection station during production of the assembly unit, projecting the target location onto the inspection image, detecting visual features proximal the target location within the inspection image, and aggregating non-visual manufacturing features associated with locations proximal the target location and representing manufacturing inputs into the assembly unit based on the feature map; and calculating correlations between visual and non-visual manufacturing features associated with locations proximal the target location and the defect for the set of assembly units.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: June 25, 2024
    Assignee: Instrumental, Inc.
    Inventors: Samuel Bruce Weiss, Anna-Katrina Shedletsky, Simon Kozlov, Tilmann Bruckhaus, Shilpi Kumar, Isaac Sukin, Ian Theilacker, Brendan Green
  • Patent number: 12020456
    Abstract: An external parameter calibration method for an image acquisition apparatus is disclosed. The method includes acquiring images from the image acquisition apparatus. The images contain reference objects acquired by the image acquisition apparatus during the driving of the vehicle. The reference objects in the images are divided into a number of sections along a road direction in which the vehicle is located, and reference objects in each of the sections are fitted into straight lines. Pitch angles and yaw angles of the image acquisition apparatus are determined based on vanishing points of a straight line in each of the sections. The sequences of the determined pitch and yaw angles are filtered. Straight portions in the road from the filtered sequences of pitch and yaw angles are obtained. Data of the pitch angles and yaw angles corresponding to the straight portions are stored to a data stack.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: June 25, 2024
    Assignee: Black Sesame Technologies Inc.
    Inventors: Zhiyong Tang, Jiang Peng, Tao Zhang
  • Patent number: 12007481
    Abstract: A sensor includes an avalanche photodiode (APD), a first resistor, a second resistor, and a rectification element. The first resistor is connected between a current output terminal of the APD and a first output terminal. The second resistor and the rectification element are connected in series between the current output terminal and a second output terminal. The rectification element is connected between the second resistor and the second output terminal.
    Type: Grant
    Filed: April 10, 2023
    Date of Patent: June 11, 2024
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventors: Hiroshi Kubota, Nobu Matsumoto
  • Patent number: 12000963
    Abstract: Provided is a light detection and ranging (LiDAR) device including a light transmitter including a plurality of light sources, each of the plurality of light sources being configured to emit light toward an object, a light receiver including a plurality of light detection elements, each of the plurality of light detection elements being configured to detect reflected light reflected from the object that is irradiated with the light emitted by the plurality of light sources, and the light receiver being configured to remove crosstalk from second detection information output by at least one light detection element of the plurality of light detection elements based on first detection information output by any one of remaining light detection elements of the plurality of light detection elements, and a processor configured to obtain information on the object based on the second detection information with the crosstalk removed.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: June 4, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jungwoo Kim, Tatsuhiro Otsuka, Yongchul Cho
  • Patent number: 11998279
    Abstract: Embodiments of the present invention set forth a method to update an operation pathway for a robotic arm assembly in response to a movement of a patient. The method includes processing a two-dimensional image associated with a tag having a spatial relationship with the patient. A corresponding movement of the tag in response to the movement of the patient is determined based on the spatial relationship. The tag includes a first point and a second point and the two-dimensional image includes a first point image and a second point image. The method also includes associating the first point image with the first point and the second point image with the second point and updating the operation pathway based on a conversion matrix of the first point and the second point, and the first point image and the second point image.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: June 4, 2024
    Assignee: Brain Navi Biotechnology Co., Ltd.
    Inventors: Chieh Hsiao Chen, Kuan Ju Wang
  • Patent number: 11998799
    Abstract: The present disclosure relates to effective, efficient, and economical methods and systems for improving athletic performance by tracking objects typically thrown or hit by athletes. In particular, the present disclosure relates to a unique configuration of technology wherein an electronic display is located at or behind the target and one or more cameras are positioned to observe the target. Once an object is thrown or hit, one or more cameras may observe and track the object. Further, an electronic display may be used to provide targeting information to an athlete and also to determine the athlete's accuracy.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: June 4, 2024
    Assignee: SmartMitt LLC
    Inventor: Thomas R. Frenz
  • Patent number: 11995749
    Abstract: Various embodiments disclosed herein provide techniques for generating image data of a three-dimensional (3D) animatable asset. A rendering module executing on a computer system accesses a machine learning model that has been trained via first image data of the 3D animatable asset generated from first rig vector data. The rendering module receives second rig vector data. The rendering module generates, via the machine learning model, a second image data of the 3D animatable asset based on the second rig vector data.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 28, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Borer, Jakob Buhmann, Martin Guay
  • Patent number: 11989910
    Abstract: The present disclosure relates to a moving body, a position estimation method, and a program capable of achieving high accuracy of self-position estimation. An imaging control unit sets a zoom parameter of an imaging unit having a zoom lens according to at least one of an altitude or a moving speed of the moving body itself, and a self-position estimation unit estimates a self-position on the basis of an image captured by the imaging unit in which the zoom parameter is set. Technology according to the present disclosure can be applied to, for example, a moving body such as a drone.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 21, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Yuya Yamaguchi, Makoto Daniel Tokunaga
  • Patent number: 11987102
    Abstract: To realize a device and method that output scents according to an advertisement. A scent output control unit that executes scent output control and a scent output unit that executes a scent output under control of the scent output control unit are included, and the scent output control unit selects a scent to be output on the basis of an advertisement and outputs the selected scent via the scent output unit. The scent output control unit selects a scent to be output on the basis of an advertisement detected from an image captured by a vehicle exterior camera that captures the outside of the vehicle or an advertisement output to the information output terminal, and outputs the selected scent via the scent output unit. Furthermore, an occupant profile is analyzed on the basis of an image of an occupant, and a scent corresponding to the occupant is output.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 21, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Satoshi Konya, Shuji Fujita, Yukito Inoue, Cedric Duvert
  • Patent number: 11989884
    Abstract: The present application discloses a method of adjusting a parameter, the parameter being used to derive a physiological characteristic of an individual from an image of the user, the method comprising the steps of: obtaining the parameter for the individual; obtaining a corresponding parameter for a plurality of other individuals within a cohort of the individual; comparing the parameter for the individual with a statistically significant parameter for the plurality of other individuals; and adjusting the parameter for the individual in accordance with the difference between the parameter for the individual and the statistically significant parameter for the plurality of other individuals.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: May 21, 2024
    Assignee: XIM LIMITED
    Inventors: Laurence Pearce, Samuel Pearce
  • Patent number: 11972587
    Abstract: An establishing method of a semantic distance map for a moving device includes capturing an image; obtaining a single-point distance measurement result of the image; performing recognition for the image to obtain a recognition result of each obstacle in the image; and determining a semantic distance map corresponding to the image according to the image, the single-point distance measurement result and the recognition result of each obstacle in the image; wherein each pixel of the semantic distance map includes obstacle information, which includes a distance between the moving device and an obstacle, a type of the obstacle, and a recognition probability of the obstacle.
    Type: Grant
    Filed: May 22, 2022
    Date of Patent: April 30, 2024
    Assignee: FITIPOWER INTEGRATED TECHNOLOGY INC.
    Inventors: Hsueh-Tse Lin, Wei-Hung Hsu, Shang-Yu Yeh
  • Patent number: 11972562
    Abstract: A method for determining a plant growth curve includes obtaining color images and depth images of a plant to be detected at different time points, performing alignment processing on each color image and each depth image to obtain an alignment image, detecting the color image through a pre-trained target detection model to obtain a target bounding box, calculating an area ratio of the target bounding box in the color image, determining a depth value of all pixel points in the target bounding box according to the alignment image, performing denoising processing on each depth value to obtain a target depth value, generating a first growth curve of the plant to be detected according to the target depth values and corresponding time points, and generating a second growth curve of the plant to be detected according to the area ratios and the corresponding time points.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: April 30, 2024
    Assignee: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Chih-Te Lu, Chin-Pin Kuo, Tzu-Chen Lin
  • Patent number: 11961202
    Abstract: Disclosed is an editing system for postprocessing three-dimensional ("3D") image data to realistically recreate the effects associated with viewing or imaging a represented scene with different camera settings or lenses. The system receives an original image and an edit command with a camera setting or a camera lens. The system associates the selection to multiple image adjustments. The system performs a first of the multiple image adjustments on a first set of 3D image data from the original image in response to the first set of 3D image data satisfying specific positional or non-positional values defined for the first image adjustment, and performs a second of the multiple image adjustments on a second set of 3D image data from the original image in response to the second set of 3D image data satisfying the specific positional or non-positional values defined for the second image adjustment.
    Type: Grant
    Filed: August 22, 2023
    Date of Patent: April 16, 2024
    Assignee: Illuscio, Inc.
    Inventors: Max Good, Joseph Bogacz
  • Patent number: 11961407
    Abstract: Methods and associated systems and apparatus for generating a three-dimensional (3D) flight path for a moveable platform such as an unmanned aerial vehicle (UAV) are disclosed herein. The method includes receiving a set of 3D information associated with a virtual reality environment and receiving a plurality of virtual locations in the virtual reality environment. For individual virtual locations, the system receives a corresponding action item. The system then generates a 3D path based on at least one of the set of 3D information, the plurality of virtual locations, and the plurality of action items. The system then generates a set of images associated with the 3D path and then visually presents the same to an operator via a virtual reality device. The system enables the operator to adjust the 3D path via the virtual reality device.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: April 16, 2024
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Yuewen Ma, Kaiyong Zhao, Shizhen Zheng, Chihui Pan
  • Patent number: 11961314
    Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and results are outputted. The outputted results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: April 16, 2024
    Assignee: NXP B.V.
    Inventors: Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Brian Ermans, Frederik Dirk Schalij
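    The abstract of patent 11961314 above runs a non-max suppression (NMS) step on the detector's proposals; below is a standard textbook greedy NMS in NumPy, not the patented analysis tooling.
```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-max suppression over proposal boxes (x1, y1, x2, y2):
    keep the highest-scoring box, drop overlapping boxes, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the kept box with the remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter + 1e-8)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))   # -> [0, 2]
```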
  • Patent number: 11931884
    Abstract: A smart drilling system includes a terminal configured to map a design space to an actual space and having perforation location information in the design space, a drilling machine including a drill for perforation and configured to perform perforation in the actual space under control of the terminal based on the perforation location information, and a total station configured to acquire location information of a reference point in the actual space for mapping the design space to the actual space and location information of the drilling machine in the actual space, and to transmit the location information of the reference point in the actual space and the location information of the drilling machine to the terminal, wherein the terminal recognizes and displays a perforable region or a perforable point at a current position of the drilling machine.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: March 19, 2024
    Assignees: GeoSystem Inc., Buildingpointkorea Inc.
    Inventors: Dong Hun Kang, Jong Hyun Oh, Ji Eun Kim, Chang Wook Joh, Young Hoon Koh
  • Patent number: 11912201
    Abstract: A vehicular exterior rearview mirror assembly includes a dual mode illumination module having a first light emitting diode (LED) operable to emit visible white light when electrically powered and a second LED operable to emit non-visible light when electrically powered. The first LED, when the vehicle is parked and the first LED is electrically powered, emits visible white light to provide visible ground illumination at a ground region at least partially along the side portion of the vehicle. Visible ground illumination by the vehicular exterior rearview mirror assembly at the ground region is locked-out during movement of the vehicle. The second LED, when the second LED is electrically powered, emits non-visible light to provide non-visible illumination for a camera viewing exterior and at least sideward of the vehicle. Non-visible light emission by the second LED is not locked-out during movement of the vehicle.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: February 27, 2024
    Assignee: Magna Mirrors of America, Inc.
    Inventors: Gregory A. Huizen, Eric Peterson
  • Patent number: 11908199
    Abstract: Image data captured from a traveling vehicle is considered, and it is not possible to reduce the transmission band of the image data. It is assumed that a radar 4 mounted in a traveling vehicle 10 detects a certain distant three-dimensional object at a time T in a direction of a distance d1 [m] and an angle θ1. Since the vehicle 10 travels at a vehicle speed Y [km/h], it is predicted that a camera 3 is capable of capturing the distant three-dimensional object at a time (T+ΔT) and an angle θ1 or at a distance d2 [m]. Therefore, if a control unit 2 outputs a request to the camera 3 in advance, so as to cut out an image of the angle θ1 or the distance d2 [m] at the time (T+ΔT), when the time (T+ΔT) comes, the camera 3 transfers a whole image and a high-resolution image being a cutout image of only a partial image, to the control unit 2.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: February 20, 2024
    Assignee: Hitachi Astemo, Ltd.
    Inventors: Tetsuya Yamada, Teppei Hirotsu, Tomohito Ebina, Kazuyoshi Serizawa, Shouji Muramatsu
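    A worked sketch of the timing prediction in the abstract of patent 11908199 above, assuming a straight-line approach so that the delay is (d1 - d2) divided by the vehicle speed; the abstract does not give an explicit formula.
```python
def predict_capture_delay(d1_m, d2_m, speed_kmh):
    """Sketch of the prediction in patent 11908199: if the radar sees an object
    at distance d1 and the vehicle approaches at speed Y, the camera can capture
    it at distance d2 after roughly dT = (d1 - d2) / v seconds."""
    v = speed_kmh / 3.6                    # km/h -> m/s
    return (d1_m - d2_m) / v

# Object first seen 120 m ahead; request a cutout for when it is 40 m away.
print(round(predict_capture_delay(120.0, 40.0, speed_kmh=72.0), 2), "s")
```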
  • Patent number: 11900521
    Abstract: An apparatus includes an electronic display configured to be positioned in a first location and one or more processors electronically coupled to the electronic display. The processors receive a video from a server. The video depicts a view of a second location and includes an image of a rectangular casing, a frame, and one or more muntins. The image is composited with the video by the server to provide an illusion of a window in the second location to a user viewing the video. The rectangular casing surrounds the window. The processors synchronize a time-of-view at the second location in the video with a time-of-day at the first location and synchronize a second length-of-day at the second location in the video with a first length-of-day at the first location. The processors transmit the video to the electronic display for viewing by the user.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: February 13, 2024
    Assignee: LiquidView Corp
    Inventors: Mitchell Braff, Jan C. Hobbel, Paulina A. Perrault, Adam Sah, Kangil Cheon, Yeongkeun Jeong, Grishma Rao, Noah Michael Shibley, Hyerim Shin, Marcelle van Beusekom
  • Patent number: 11893884
    Abstract: The present application discloses a method for acquiring three-dimensional perception information based on external parameters of a roadside camera, and a roadside device. The specific implementation solution is as follows: acquiring a first matching point pair between an image captured by the first camera and an image captured by the second camera, and generating a first rotation matrix, where the first rotation matrix represents a rotation matrix of the first camera in a second camera coordinate system; generating a third rotation matrix according to the first rotation matrix and the second rotation matrix, where the second rotation matrix represents a rotation matrix of the second camera in a world coordinate system, and the third rotation matrix represents a rotation matrix of the first camera in the world coordinate system; generating three-dimensional perception information of the image captured by the first camera according to the third rotation matrix.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: February 6, 2024
    Assignee: APOLLO INTELLIGENT CONNECTIVITY (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Libin Yuan, Jinrang Jia, Guangqi Yi
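    A short NumPy illustration of the rotation-matrix chaining described for patent 11893884 above; the column-vector multiplication order is an assumption, since the abstract does not state a convention.
```python
import numpy as np

def rot_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# First rotation matrix: camera 1 expressed in camera 2's coordinate system.
R_cam2_cam1 = rot_z(np.deg2rad(10))
# Second rotation matrix: camera 2 expressed in the world coordinate system.
R_world_cam2 = rot_z(np.deg2rad(30))
# Third rotation matrix: camera 1 in the world coordinate system, obtained by
# composing the two (column-vector convention assumed).
R_world_cam1 = R_world_cam2 @ R_cam2_cam1
print(np.round(R_world_cam1, 3))
```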
  • Patent number: 11890146
    Abstract: An illumination system for monitoring and illuminating an operating region includes at least one light assembly including at least one illumination source configured to selectively direct a light emission in a portion of the operating region. At least one imager is configured to capture image data in a field of view in the operating region. A controller is in communication with the light assembly and the imager. The controller is configured to process image data captured in the field of view and identify a plurality of objects detected in the image data based on an object library. The controller is further configured to track a location of each of the plurality of objects and store the location of each of the objects.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: February 6, 2024
    Assignee: GENTEX CORPORATION
    Inventors: Jorge Zapata, Carlos Eduardo Vargas Silva
  • Patent number: 11887247
    Abstract: In an embodiment of the invention there is provided a method of visual localization, comprising: generating a plurality of virtual views, wherein each of the virtual views is associated with a location; obtaining a query image; determining the location where the query image was obtained on the basis of a comparison of the query image with said virtual views.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 30, 2024
    Assignee: NAVVIS GMBH
    Inventors: Eckehard Steinbach, Robert Huitl, Georg Schroth, Sebastian Hilsenbeck