3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 11971961
    Abstract: An apparatus and method for data fusion between heterogeneous sensors are disclosed. The method for data fusion between the heterogeneous sensors may include identifying image data and point cloud data for a search area by each of a camera sensor and a LiDAR sensor that are calibrated using a marker board having a hole; recognizing a translation vector determined through calibrating of the camera sensor and the LiDAR sensor; and projecting the point cloud data of the LiDAR sensor onto the image data of the camera sensor using the recognized translation vector to fuse the identified image data and point cloud data.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: April 30, 2024
    Assignee: DAEGU GYEONGBUK INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Jin Hee Lee, Kumar Ajay, Soon Kwon, Woong Jae Won
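
The fusion step in the abstract above reduces to a rigid transform followed by a pinhole projection. Below is a minimal sketch of that geometry, assuming the calibration yields a rotation R, translation t, and camera intrinsics K (all hypothetical inputs, not values from the patent).

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """points_lidar: (N, 3) LiDAR points; R: (3, 3); t: (3,); K: (3, 3) intrinsics."""
    # Transform LiDAR points into the camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Perspective projection with the camera intrinsics.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    return pixels, points_cam[:, 2]  # pixel coordinates and corresponding depths

if __name__ == "__main__":
    # Toy usage with an identity rotation and zero translation.
    rng = np.random.default_rng(0)
    pts = rng.uniform([-5, -2, 2], [5, 2, 30], size=(1000, 3))
    K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    uv, depth = project_lidar_to_image(pts, np.eye(3), np.zeros(3), K)
```
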
  • Patent number: 11967096
    Abstract: A depth estimation from focus method and system includes receiving input image data containing focus information, generating an intermediate attention map by an AI model, normalizing the intermediate attention map into a depth attention map via a normalization function, and deriving expected depth values for the input image data containing focus information from the depth attention map. The AI model for depth estimation can be trained unsupervisedly without ground truth depth maps. The AI model of some embodiments is a shared network estimating a depth map and reconstructing an AiF image from a set of images with different focus positions.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: April 23, 2024
    Assignee: MEDIATEK INC.
    Inventors: Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Ning-Hsu Wang
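
The expectation step in the abstract above has a compact numerical form: normalize the attention volume over the focal stack and take the attention-weighted average of the focus distances. A hedged sketch follows; the array shapes and the softmax normalization are assumptions, not details from the patent.

```python
import numpy as np

def expected_depth(attention, focus_depths):
    """attention: (S, H, W) scores, one slice per focus position;
    focus_depths: (S,) depth associated with each focus position."""
    # Softmax over the focal-stack axis turns scores into a depth attention map.
    a = attention - attention.max(axis=0, keepdims=True)
    weights = np.exp(a)
    weights /= weights.sum(axis=0, keepdims=True)
    # Expected depth per pixel is the attention-weighted sum of focus depths.
    return np.tensordot(focus_depths, weights, axes=(0, 0))  # shape (H, W)
```
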
  • Patent number: 11967109
    Abstract: According to one embodiment, a system for determining a position of a vehicle includes an image sensor, a top-down view component, a comparison component, and a location component. The image sensor obtains an image of an environment near a vehicle. The top-down view component is configured to generate a top-down view of a ground surface based on the image of the environment. The comparison component is configured to compare the top-down image with a map, the map comprising a top-down LIDAR intensity map or a vector-based semantic map. The location component is configured to determine a location of the vehicle on the map based on the comparison.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 23, 2024
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Sarah Houts, Alexandru Mihai Gurghian, Vidya Nariyambut Murali, Tory Smith
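
One way to picture the comparison step is brute-force normalized cross-correlation of the generated top-down view against a larger prior map. The sketch below is only an illustrative stand-in for whatever comparison the patent actually uses.

```python
import numpy as np

def localize_in_map(top_down, intensity_map):
    """Slide the top-down view over the map and return the best-matching offset."""
    t = top_down.astype(float)
    m = intensity_map.astype(float)
    h, w = t.shape
    H, W = m.shape
    t_norm = (t - t.mean()) / (t.std() + 1e-8)
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = m[y:y + h, x:x + w]
            p_norm = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((t_norm * p_norm).mean())  # normalized cross-correlation
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```
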
  • Patent number: 11964762
    Abstract: Subject matter regards generating a 3D point cloud and registering the 3D point cloud to the surface of the Earth (sometimes called “geo-locating”). A method can include capturing, by unmanned vehicles (UVs), image data representative of respective overlapping subsections of the object, registering the overlapping subsections to each other, and geo-locating the registered overlapping subsections.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 23, 2024
    Assignee: Raytheon Company
    Inventors: Torsten A. Staab, Steven B. Seida, Jody D. Verret, Richard W. Ely, Stephen J. Raif
  • Patent number: 11967111
    Abstract: Proposed is a multi-view camera-based iterative calibration method for generation of a 3D volumetric model that performs calibration between cameras adjacent in a vertical direction for a plurality of frames, performs calibration while rotating with the results of viewpoints adjacent in the horizontal direction, and creates a virtual viewpoint between each camera pair to repeat calibration. Thus, images of various viewpoints are obtained using a plurality of low-cost commercial color-depth (RGB-D) cameras. By acquiring and performing the calibration of these images at various viewpoints, it is possible to increase the accuracy of calibration, and through this, it is possible to generate a high-quality real-life graphics volumetric model.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: April 23, 2024
    Assignee: KWANGWOON UNIVERSITY INDUSTRY-ACADEMIC COLLABORATION FOUNDATION
    Inventors: Young Ho Seo, Byung Seo Park
  • Patent number: 11967107
    Abstract: An apparatus includes a generation unit configured to generate map information including a position of a feature point and identification information on an index in an image of a real space captured by a capturing apparatus, a collation unit configured to collate the identification information on the index in the generated map information with the identification information on the index in one or more pieces of registered map information, and to extract map information from the one or more pieces of registered map information based on a result of the collation, and an estimation unit configured to estimate a position and orientation of the capturing apparatus based on the position of the feature point in the extracted map information and the position of the feature point in the generated map information.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: April 23, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazuki Takemoto
  • Patent number: 11967083
    Abstract: A method and system for segmenting a plurality of images. The method comprises the steps of segmenting the image through a novel clustering technique, that is, generating a composite depth map including temporally stable segments of the image as well as segments in subsequent images that have changed. These changes may be determined by determining one or more differences between the temporally stable depth map and segments included in one or more subsequent frames. Thereafter, the portions of the one or more subsequent frames that include segments with changes from their corresponding segments in the temporally stable depth map are processed and combined with the segments from the temporally stable depth map to compute their associated disparities in one or more subsequent frames. The images may include a pair of stereo images acquired through a stereo camera system at a substantially similar time.
    Type: Grant
    Filed: July 24, 2022
    Date of Patent: April 23, 2024
    Assignee: Golden Edge Holding Corporation
    Inventors: Tarek El Dokor, Joshua King, Jordan Cluster, James Edward Holmes
  • Patent number: 11966665
    Abstract: The present invention provides a method for designing of a piece of apparel in particular an upper of a shoe, comprising the steps of providing at least one first panel including a plurality of feature points in an essentially two-dimensional configuration, arranging the at least one first panel on a first reference body in a three-dimensional configuration representing the piece of apparel to be designed, generating a first mapping between the two-dimensional configuration of the at least one first panel and the three-dimensional configuration of the at least one first panel using the plurality of feature points and designing the piece of apparel using the first mapping.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: April 23, 2024
    Assignee: adidas AG
    Inventors: Jochen Bjoern Suessmuth, Jens Raab, Detlef Philipp Müller
  • Patent number: 11964689
    Abstract: A vehicular trailer guidance system includes a human machine interface provided in the vehicle and operable by a driver of the vehicle during a backing up maneuver of the vehicle with a trailer hitched thereto. The human machine interface comprises a rotary knob and operates (i) in a backing up trailer left-backup mode when rotated counter-clockwise, (ii) in a backing up trailer straight-backup mode when the rotary knob is pushed or pulled and (iii) in a backing up trailer right-backup mode when rotated clockwise. Responsive to selection by the driver of the backup mode, the system sets a desired trailer angle relative to the longitudinal axis of the vehicle to an angle that is commensurate with a driver-selected setting of the rotary knob. The system controls steering of the vehicle to back up the trailer to have the determined trailer angle coincide with the set desired trailer angle.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: April 23, 2024
    Assignee: Magna Electronics Inc.
    Inventors: Yuesheng Lu, Jyothi P. Gali
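
A toy sketch of the control idea in the abstract above, assuming a simple proportional steering law and a hypothetical maximum trailer angle (neither is specified in the patent).

```python
def desired_trailer_angle(knob_fraction, max_angle_deg=30.0):
    """knob_fraction in [-1, 1]: negative for counter-clockwise (left backup),
    positive for clockwise (right backup), zero for straight backup."""
    return knob_fraction * max_angle_deg

def steering_correction(measured_angle_deg, target_angle_deg, gain=0.8):
    # Proportional correction steering the trailer toward the driver-selected angle.
    return gain * (target_angle_deg - measured_angle_deg)
```
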
  • Patent number: 11961181
    Abstract: A three-dimensional image transformation, executing on one or more computer systems, can mathematically transform a first two-dimensional image space onto a second two-dimensional image space using a three-dimensional image space. The three-dimensional image transformation can project the three-dimensional image space onto the first two-dimensional image space to map the first two-dimensional image space to the three-dimensional image space. Thereafter, the three-dimensional image transformation can project the second two-dimensional image space onto the three-dimensional image space to map the three-dimensional image space to the second two-dimensional image space.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: April 16, 2024
    Assignee: MSG Entertainment Group, LLC
    Inventors: William Andrew Nolan, Michael Romaszewicz
  • Patent number: 11961333
    Abstract: Gait, the walking pattern of individuals, is one of the important biometrics modalities. Most of the existing gait recognition methods take silhouettes or articulated body models as gait features. These methods suffer from degraded recognition performance when handling confounding variables, such as clothing, carrying and viewing angle. To remedy this issue, this disclosure proposes to explicitly disentangle appearance, canonical and pose features from RGB imagery. A long short-term memory integrates pose features over time as a dynamic gait feature while canonical features are averaged as a static gait feature. Both of them are utilized as classification features.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: April 16, 2024
    Assignee: Board of Trustees of Michigan State University
    Inventors: Xiaoming Liu, Ziyuan Zhang
  • Patent number: 11961251
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for continuous surface and depth estimation. A continuous surface and depth estimation system determines the depth and surface normal of physical objects by using stereo vision limited within a predetermined window.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: April 16, 2024
    Assignee: SNAP INC.
    Inventors: Olha Borys, Ilteris Kaan Canberk, Daniel Wagner, Jakob Zillner
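
Once depth is recovered inside the predetermined window, a surface normal can be estimated by plane-fitting the reconstructed 3D points. The minimal SVD-based sketch below illustrates that step under stated assumptions; it is not the implementation described in the patent.

```python
import numpy as np

def window_surface_normal(points_xyz):
    """points_xyz: (N, 3) points reconstructed inside the stereo window, N >= 3."""
    centered = points_xyz - points_xyz.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```
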
  • Patent number: 11961257
    Abstract: An image processing system having on-the-fly calibration uses the placement of the imaging sensor and the light source for calibration. The placement of the imaging sensor and light source with respect to each other affects the amount of signal received by a pixel as a function of distance to a selected object. For example, an obstruction can block the light emitter, and as the obstruction is positioned an increasing distance away from the light emitter, the signal level increases as light rays leave the light emitter, bounce off the obstruction, and are received by the imaging sensor. The system includes a light source configured to emit light, an image sensor to collect incoming signals including reflected light, and a processor to determine a distance measurement at each of the pixels and calibrate the system.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: April 16, 2024
    Assignee: Analog Devices, Inc.
    Inventors: Charles Mathy, Brian C. Donnelly, Nicolas Le Dortz, Sefa Demirtas
  • Patent number: 11961184
    Abstract: A system and method for 3D reconstruction with plane and surface reconstruction, scene parsing, and depth reconstruction with depth fusion from different sources. The system includes a display and a processor to perform the method for 3D reconstruction with plane and surface reconstruction. The method includes dividing a scene of an image frame into one or more plane regions and one or more surface regions. The method also includes generating reconstructed planes by performing plane reconstruction based on the one or more plane regions. The method also includes generating reconstructed surfaces by performing surface reconstruction based on the one or more surface regions. The method further includes creating the 3D scene reconstruction by integrating the reconstructed planes and the reconstructed surfaces.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
  • Patent number: 11956409
    Abstract: Aspects of the disclosure provide methods and apparatuses for audio processing. In some examples, an apparatus for media processing includes processing circuitry. The processing circuitry receives first 3 degrees of freedom (3 DoF) information associated with a first media content for a scene in a media application. The first 3 DoF information includes a first revolution orientation for describing the first media content on a first sphere centered at a user of the media application. The processing circuitry determines that a rendering platform for rendering the first media content is a six degrees of freedom (6 DoF) platform, and calculates first spatial location information of the first media content based on the first revolution orientation and first parameters of the first sphere. The first spatial location information is used in first 6 DoF information associated with the first media content for rendering the first media content on the 6 DoF platform.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: April 9, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Jun Tian, Xiaozhong Xu, Shan Liu
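
The geometric core of the conversion above is mapping an orientation on a user-centered sphere to a spatial location. A small sketch follows, assuming an azimuth/elevation convention that the abstract does not specify.

```python
import math

def sphere_to_position(azimuth_rad, elevation_rad, radius, center=(0.0, 0.0, 0.0)):
    """Convert a revolution orientation on a sphere of given radius, centered at the
    user, into a Cartesian spatial location for a 6 DoF renderer."""
    x = radius * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = radius * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = radius * math.sin(elevation_rad)
    cx, cy, cz = center
    return (cx + x, cy + y, cz + z)
```
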
  • Patent number: 11954874
    Abstract: A method for localizing, in a space containing at least one determined object, an object element associated with a particular 2D representation element in a determined 2D image of the space, may have: deriving a range or interval of candidate spatial positions for the imaged object element on the basis of predefined positional relationships; restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions, wherein restricting includes at least one of: limiting the range or interval of candidate spatial positions using at least one inclusive volume surrounding at least one determined object; and limiting the range or interval of candidate spatial positions using at least one exclusive volume surrounding non-admissible candidate spatial positions; and retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position on the basis of similarity metrics.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: April 9, 2024
    Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
    Inventors: Joachim Keinert, Thorsten Wolf
  • Patent number: 11956408
    Abstract: The information processing system obtains a plurality of images based on image capturing by a plurality of imaging devices; obtains viewpoint information for specifying a position of a virtual viewpoint and a view direction from the virtual viewpoint; and generates a plurality of virtual viewpoint contents each of which corresponds to one of a plurality of image formats based on the common plurality of obtained images and the obtained viewpoint information, and the plurality of image formats is image formats whose numbers of virtual viewpoints specified by the viewpoint information used for generation of the virtual viewpoint contents are different from one another.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: April 9, 2024
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Taku Ogasawara
  • Patent number: 11954907
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a storage device, for electric grid modeling using surfel data are disclosed. An electric grid wire identification method includes: obtaining a set of surface elements (surfels), wherein each surfel of the set of surfels represents a portion of a surface of an object in a geographic region; selecting, based on one or more surfel attributes, one or more surfels of the set of surfels that each represent a portion of a surface of an electric grid wire; generating a representation of the electric grid wire from the selected one or more surfels; and adding the representation of the electric grid wire to a virtual model of the electric grid. Obtaining the set of surfels can include obtaining ranging data of the geographic region; and generating the set of surfels from the ranging data.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: April 9, 2024
    Assignee: X Development LLC
    Inventors: Ananya Gupta, Phillip Ellsworth Stahlfeld
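
The selection step above can be pictured as attribute thresholding over the surfel set. The attribute names and thresholds below are illustrative assumptions, not the criteria claimed in the patent.

```python
from dataclasses import dataclass

@dataclass
class Surfel:
    x: float
    y: float
    z: float          # height above ground, in meters (assumed attribute)
    radius: float     # surfel footprint size, in meters (assumed attribute)
    linearity: float  # 0..1, how line-like the local neighborhood is (assumed attribute)

def select_wire_surfels(surfels, min_height=5.0, max_radius=0.2, min_linearity=0.8):
    """Keep surfels whose attributes look like thin, elevated, wire-like surfaces."""
    return [s for s in surfels
            if s.z >= min_height and s.radius <= max_radius and s.linearity >= min_linearity]
```
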
  • Patent number: 11948268
    Abstract: Techniques for encoding or decoding digital video or pictures include acquiring a video bitstream that includes an encoded video image that is a two-dimensional image comprising multiple regions of a panoramic image in three-dimensional coordinates, extracting neighboring information for the multiple regions, performing, using the neighboring information, post-processing of a video image decoded from the video bitstream, and generating a display image from the video image after the post-processing.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: April 2, 2024
    Assignee: ZTE Corporation
    Inventors: Zhao Wu, Ming Li, Ping Wu
  • Patent number: 11948234
    Abstract: Disclosed is a system and associated methods for dynamically enhancing a three-dimensional (“3D”) animation that is generated from points of one or more point clouds. The system reduces noise and corrects gaps, holes, and/or distortions that are created in different frames as a result of adjusting the point cloud points to create the 3D animation. The system detects a set of points that share positional and/or non-positional commonality of a feature in the 3D animation. The system applies one or more adjustments to the set of points to animate the feature from a current frame to a next frame, and detects a point from the set of points that deviates from the positional and/or non-positional commonality of the feature after applying the adjustments. The system dynamically enhances the 3D animation by correcting the point prior to rendering the next frame of the 3D animation.
    Type: Grant
    Filed: August 30, 2023
    Date of Patent: April 2, 2024
    Assignee: Illuscio, Inc.
    Inventor: Max Good
  • Patent number: 11948376
    Abstract: Device, system, and method of generating a reduced-size volumetric dataset. A method includes receiving a plurality of three-dimensional volumetric datasets that correspond to a particular object; and generating, from that plurality of three-dimensional volumetric datasets, a single uniform mesh dataset that corresponds to that particular object. The size of that single uniform mesh dataset is less than ¼ of the aggregate size of the plurality of three-dimensional volumetric datasets. The resulting uniform mesh is temporally coherent, and can be used for animating that object, as well as for introducing modifications to that object or to clothing or garments worn by that object.
    Type: Grant
    Filed: June 25, 2023
    Date of Patent: April 2, 2024
    Assignee: REALMOTION INC.
    Inventor: Amit Chachek
  • Patent number: 11948338
    Abstract: An encoder encodes three-dimensional (3D) volumetric content, such as immersive media, using video encoded attribute patch images packed into a 2D atlas to communicate the attribute values for the 3D volumetric content. The encoder also uses mesh-encoded sub-meshes to communicate geometry information for portions of the 3D object or scene corresponding to the attribute patch images packed into the 2D atlas. The encoder applies decimation operations to the sub-meshes to simplify the sub-meshes before mesh encoding the sub-meshes. A distortion analysis is performed to bound the level to which the sub-meshes are simplified at the encoder. Mesh simplification at the encoder reduces the number of vertices and edges included in the sub-meshes which simplifies rendering at a decoder receiving the encoded 3D volumetric content.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Khaled Mammou, Fabrice A. Robinet, Maneli Noorkami, Afshin Taghavi Nasrabadi
  • Patent number: 11948329
    Abstract: Systems and methods are disclosed, including a non-transitory computer readable medium storing computer executable instructions that when executed by a processor cause the processor to identify a first image, a second image, and a third image, the first image overlapping the second image and the third image, the second image overlapping the third image; determine a first connectivity between the first image and the second image; determine a second connectivity between the first image and the third image; determine a third connectivity between the second image and the third image, the second connectivity being less than the first connectivity, the third connectivity being greater than the second connectivity; assign the first image, the second image, and the third image to a cluster based on the first connectivity and the third connectivity; conduct a bundle adjustment process on the cluster of the first image, the second image, and the third image.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: April 2, 2024
    Assignee: Pictometry International Corp.
    Inventor: David Nilosek
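
The clustering described above can be sketched as union-find over image pairs whose connectivity exceeds a threshold, with bundle adjustment then run per cluster. The connectivity measure and threshold below are assumptions made for illustration.

```python
def cluster_images(num_images, connectivity, threshold=0.5):
    """connectivity: dict mapping (i, j) image-index pairs to a connectivity score."""
    parent = list(range(num_images))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for (i, j), score in connectivity.items():
        if score >= threshold:  # only sufficiently connected pairs join a cluster
            union(i, j)

    clusters = {}
    for i in range(num_images):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Example mirroring the abstract: images 0-1 strongly connected, 0-2 weakly,
# 1-2 strongly, so all three land in one cluster via the 0-1 and 1-2 links.
print(cluster_images(3, {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.7}))
```
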
  • Patent number: 11948316
    Abstract: A camera module may include an image sensor including an optical device configured to rotate about at least one of an x-axis, a y-axis, and a z-axis perpendicular to each other, in response to a mode signal, and configured to generate a plurality of first images, each first image generated when the optical device is at a different position; and an image signal processor (ISP) configured to process the plurality of first images, wherein the ISP is further configured to obtain a plurality of parameters pre-stored according to the mode signal, correct the plurality of first images by using the plurality of parameters, and generate a second image by merging the corrected first images.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 2, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Bongki Son, Sangmin Kim, Jeongyong Shin
  • Patent number: 11941793
    Abstract: A method for automatically determining quality of registration of landmarks includes training an artificial intelligence (AI) system to detect inaccurate registration of landmarks. Training the AI system uses training data that includes scans of an environment captured by a 3D measuring device from corresponding scan points. A first scan is registered with at least a second scan based on one or more landmarks captured in the first scan and the second scan. Further, a model is created to identify incorrect registration by analyzing the training data. The analysis detects a mismatch in a first instance of a landmark in the first scan and a second instance of said landmark in the second scan. The model is then used to evaluate registration of landmarks in live data, the live data including a set of scans, the result identifying accuracy level of the registration of landmarks.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: March 26, 2024
    Assignee: FARO Technologies, Inc.
    Inventors: Denis Wohlfeld, Heiko Bauer, Evelyn Schmitz
  • Patent number: 11941848
    Abstract: The present disclosure relates to a camera device. The camera device and an electronic device including the same according to an embodiment of the present disclosure include: a color camera; an IR camera; and a processor configured to extract a first region of a color image from the color camera, to extract a second region of an IR image from the IR camera, to calculate error information based on a difference between a gradient of the first region and a gradient of the second region, to compensate for at least one of the color image and the IR image based on the calculated error information, and to output a compensated color image or a compensated IR image.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: March 26, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Yunsuk Kang, Chanyong Park, Eunsung Lee
  • Patent number: 11937548
    Abstract: A system and method for sensing an edge of a region includes at least one distance sensor configured to detect a plurality of distances of objects along a plurality of adjacent scan lines. A controller is in communication with the at least one distance sensor and is configured to determine a location of an edge of a region within the plurality of adjacent scan lines. The controller includes a comparator module configured to compare values corresponding to the detected plurality of distances, and an identification module configured to identify the location of the edge of the region according to the compared values. In one example, the values corresponding to the detected plurality of distances include couplets of standard deviations that are analyzed and selected to identify the location of the edge.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: March 26, 2024
    Assignee: Raven Industries, Inc.
    Inventors: James Edward Slichter, Andrew Joseph Pierson, Derek Michael Stotz, Jonathan William Richardson
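
A hedged reading of the "couplets of standard deviations" idea: for each candidate position along a scan line, compare the spread of range readings on either side and pick the position with the largest contrast. The window size is an assumption; this is an illustration, not the claimed algorithm.

```python
import numpy as np

def find_region_edge(distances, window=5):
    """distances: 1-D array of range readings along a single scan line."""
    best_idx, best_contrast = None, -1.0
    for i in range(window, len(distances) - window):
        left_std = np.std(distances[i - window:i])    # spread before the candidate
        right_std = np.std(distances[i:i + window])   # spread after the candidate
        contrast = abs(left_std - right_std)
        if contrast > best_contrast:
            best_idx, best_contrast = i, contrast
    return best_idx
```
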
  • Patent number: 11940569
    Abstract: A method for calibrating a measuring system, the system comprising an image capture device, a laser scanner, and a positioning unit, wherein the method comprises preparing at least two images supplied by the image capture device and preparing a 3D point cloud; identifying at least one homologous point in the images and performing an optimization sequence for determining at least one calibration parameter of the measuring system. The sequence comprises at least one iteration of the following steps: for each image, identifying, in the 3D point cloud, at least one close point projecting in a neighborhood of the homologous point of the image; performing a measurement of the distance separating the close points respectively associated with the images; and adjusting the calibration parameter according to the measurement performed.
    Type: Grant
    Filed: July 3, 2020
    Date of Patent: March 26, 2024
    Assignees: Yellowscan, Université de Montpellier, Centre National De La Recherche Scientifique
    Inventors: Quentin Pentek, Tristan Allouis, Christophe Fiorio, Olivier Strauss
  • Patent number: 11941499
    Abstract: Examples of methods for training using rendered images are described herein. In some examples, a method may include, for a set of iterations, randomly positioning a three-dimensional (3D) object model in a virtual space with random textures. In some examples, the method may include, for the set of iterations, rendering a two-dimensional (2D) image of the 3D object model in the virtual space and a corresponding annotation image. In some examples, the method may include training a machine learning model using the rendered 2D images and corresponding annotation images.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: March 26, 2024
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Qian Lin, Augusto Cavalcante Valente, Deangeli Gomes Neves, Guilherme Augusto Silva Megeto
  • Patent number: 11941852
    Abstract: A three-dimensional measurement device includes an image data receiving unit, a camera position selecting unit, a stereoscopic image selecting unit, and a camera position calculator. The image data receiving unit receives data of multiple photographic images. The photographic images are obtained by photographing a measurement target object and a random dot pattern from multiple surrounding viewpoints by use of a camera. The camera position selecting unit selects camera positions from among multiple positions of the camera. The stereoscopic image selecting unit selects the photographic images as stereoscopic images from among the photographic images that are taken from the camera positions selected by the camera position selecting unit. The camera position calculator calculates the camera position from which the stereoscopic images are taken. The selection of the camera positions is performed multiple times in such a manner that at least one different camera position is selected each time.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: March 26, 2024
    Assignee: Kazusa DNA Research Institute
    Inventors: Atsushi Hayashi, Nobuo Kochi, Takanari Tanabata, Sachiko Isobe
  • Patent number: 11931825
    Abstract: An additive manufacturing system includes a laser array including a plurality of laser devices. Each laser device of the plurality of laser devices generates an energy beam for forming a melt pool in a powder bed. The additive manufacturing system further includes at least one optical element. The optical element receives at least one of the energy beams and induces a predetermined power diffusion in the at least one energy beam.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: March 19, 2024
    Assignee: General Electric Company
    Inventors: Jason Harris Karp, Victor Petrovich Ostroverkhov
  • Patent number: 11935249
    Abstract: A system and method for determining egomotion can include determining correspondence maps between pairs of images of an odometry set; identifying odometry features shared between the images of the odometry set; and determining the egomotion based on the odometry features.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: March 19, 2024
    Assignee: Compound Eye, Inc.
    Inventors: Jason Devitt, Konstantin Azarov, Harold Wadleigh
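
As a rough stand-in for the egomotion step, given point correspondences between two frames one can recover the relative rotation and translation direction from the essential matrix. The OpenCV-based sketch below is illustrative only and is not the method claimed in the patent.

```python
import cv2

def egomotion_from_correspondences(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: (N, 2) float arrays of matched pixel coordinates;
    K: (3, 3) camera intrinsics."""
    # Robustly estimate the essential matrix from the correspondences.
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose it into rotation and a unit-length translation direction.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t
```
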
  • Patent number: 11935162
    Abstract: An imaging system (302) includes an X-ray radiation source (312) configured to emit radiation that traverses an examination region, a detector array (314) configured to detect radiation that traverses an examination region and generate a signal indicative thereof, wherein the detected radiation is for a 3-D pre-scan, and a reconstructor (316) configured to reconstruct the signal to generate a 2-D pre-scan projection image. The imaging system further includes a console (318) wherein a processor thereof is configured to execute 3-D volume planning instructions (328) in memory to display the 2-D pre-scan projection image (402, 602, 802, 1002) and a scan plan or bounding box (404, 604, 804, 1004) for planning a 3-D volume scan of a region/tissue of interest based on a selected protocol for a 3-D volume scan of a region/tissue of interest being planned and receive an input confirming or adjusting the scan plan box to create a 3-D volume scan plan for the 3-D volume scan of the region/tissue of interest.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: March 19, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Kevin Martin Brown
  • Patent number: 11928824
    Abstract: An approach is provided in which the approach receives an image that includes multiple image points and constructs a plane in the image based on a first subset of the plurality of image points. The approach identifies a second subset of the image points that belong to the plane and are not part of the first subset of image points, and removes the first subset of image points and the second subset of image points from the image points. The approach annotates the remaining subset of image points in the image.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: March 12, 2024
    Assignee: International Business Machines Corporation
    Inventors: Xue Ping Liu, Dan Zhang, Yuan Yuan Ding, Chao Xin, Fan Li, Hong Bing Zhang, Xu Min
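
A minimal sketch of the plane-removal logic: fit a plane to the first subset of points, mark every other point within a distance tolerance as the second subset, and return the remainder for annotation. The tolerance is an assumed parameter.

```python
import numpy as np

def remove_plane_points(points, seed_indices, tol=0.02):
    """points: (N, 3) image points lifted to 3D; seed_indices: indices of the
    first subset used to construct the plane."""
    seed = points[seed_indices]
    centroid = seed.mean(axis=0)
    # Plane normal from the smallest singular vector of the centered seed points.
    _, _, vt = np.linalg.svd(seed - centroid, full_matrices=False)
    normal = vt[-1]
    # Distance of every point to the fitted plane.
    dist = np.abs((points - centroid) @ normal)
    on_plane = dist < tol            # second subset: other points on the plane
    on_plane[seed_indices] = True    # the first subset is removed as well
    return points[~on_plane]         # remaining points to annotate
```
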
  • Patent number: 11928844
    Abstract: A three-dimensional data encoding method includes: encoding geometry information of each of three-dimensional points based on one of a first geometry information encoding method of encoding using octree division and a second geometry information encoding method of encoding without using octree division; and generating a bitstream including the geometry information encoded and a geometry information flag indicating whether the encoding was performed based on the first geometry information encoding method or the second geometry information encoding method. In the generating of the bitstream: when the encoding is performed based on the first geometry information encoding method, the bitstream including a parameter set used in octree division is generated; and when the encoding is performed based on the second geometry information encoding method, the bitstream not including the parameter set used for octree division is generated.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: March 12, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Noritaka Iguchi, Toshiyasu Sugio
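
The flag logic can be illustrated with a toy serializer (this is not the actual codec syntax): a geometry flag is always written, while the octree parameter set is appended only when the octree-based encoding method is used.

```python
import struct

def build_geometry_bitstream(encoded_geometry: bytes, use_octree: bool,
                             octree_depth: int = 0) -> bytes:
    """Toy header: 1-byte flag, optional 1-byte octree depth (assumed parameter),
    then the encoded geometry payload."""
    flag = b"\x01" if use_octree else b"\x00"
    params = struct.pack(">B", octree_depth) if use_octree else b""
    return flag + params + encoded_geometry
```
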
  • Patent number: 11922606
    Abstract: The method includes simultaneously illuminating a scene by at least two light sources, each light source emitting structured light having a spatial pattern, a wavelength and/or a polarization, wherein the spatial pattern, the wavelength and/or the polarization of each structured light differ from each other, respectively, capturing an image of the scene simultaneously illuminated by the at least two light sources by an imaging sensor through a filter array, wherein one pixel of the imaging sensor captures the image through one filter of the filter array, calculating, for each pixel, intensity values of direct and global components of the light received by the pixel from a system of equations compiled for each joint pixel, and performing, for each pixel, image correction by assigning to each pixel its calculated intensity value of the direct component to obtain a corrected image.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: March 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vladimir Mikhailovich Semenov, Anastasiia Sergeevna Suvorina, Vladislav Valer'evich Lychagov, Anton Sergeevich Medvedev, Evgeny Andreevich Dorokhov, Gennady Dmitrievich Mammykin
  • Patent number: 11921291
    Abstract: In an example method of training a neural network for performing visual odometry, the neural network receives a plurality of images of an environment, determines, for each image, a respective set of interest points and a respective descriptor, and determines a correspondence between the plurality of images. Determining the correspondence includes determining one or more point correspondences between the sets of interest points, and determining a set of candidate interest points based on the one or more point correspondences, each candidate interest point indicating a respective feature in the environment in three-dimensional space. The neural network determines, for each candidate interest point, a respective stability metric. The neural network is modified based on the one or more candidate interest points.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: March 5, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Daniel Detone, Tomasz Jan Malisiewicz, Andrew Rabinovich
  • Patent number: 11915145
    Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Innfarn Yoo, Rohit Taneja
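
The frustum-filtering step has a simple geometric sketch: project candidate points with the camera intrinsics and keep those that fall inside the 2D ROI within a depth range. The parameter names and depth bounds below are assumptions.

```python
import numpy as np

def points_in_frustum(points, K, roi, z_min=0.5, z_max=80.0):
    """points: (N, 3) points in the camera frame; K: (3, 3) intrinsics;
    roi: (u_min, v_min, u_max, v_max) bounding shape in pixel coordinates."""
    # Keep only points within the assumed depth range and in front of the camera.
    pts = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    # Project the remaining points into the image.
    uvw = pts @ K.T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    u_min, v_min, u_max, v_max = roi
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return pts[inside]
```
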
  • Patent number: 11915427
    Abstract: An autonomous vehicle is described herein. The autonomous vehicle generates segmentation scenes based upon lidar data generated by a lidar sensor system of the autonomous vehicle. The lidar data includes points indicative of positions of objects in a driving environment of the autonomous vehicle. The segmentation scenes comprise regions that are indicative of the objects in the driving environment. The autonomous vehicle generates scores for each segmentation scene based upon characteristics of each segmentation scene and selects a segmentation scene based upon the scores. The autonomous vehicle then operates based upon the selected segmentation scene.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: February 27, 2024
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Andrea Allais, Micah Christopher Chambers
  • Patent number: 11915441
    Abstract: Systems and methods are provided for performing low compute depth map generation by implementing acts of obtaining a stereo pair of images of a scene, downsampling the stereo pair of images, generating a depth map by stereo matching the downsampled stereo pair of images, and generating an upsampled depth map based on the depth map using an edge-preserving filter for obtaining at least some data of at least one image of the stereo pair of images.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: February 27, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
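
A hedged end-to-end sketch of the pipeline above using OpenCV primitives as stand-ins: SGBM for the stereo matcher and a bilateral filter as a rough approximation of the edge-preserving upsampling step (the abstract specifies neither).

```python
import cv2
import numpy as np

def low_compute_depth(left_gray, right_gray, scale=0.5):
    """left_gray, right_gray: 8-bit single-channel rectified stereo pair."""
    # 1. Downsample the stereo pair.
    l_small = cv2.resize(left_gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    r_small = cv2.resize(right_gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    # 2. Stereo match at the reduced resolution (SGBM returns fixed-point disparity x16).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disp_small = matcher.compute(l_small, r_small).astype(np.float32) / 16.0
    # 3. Upsample back to full resolution and rescale the disparity values.
    disp = cv2.resize(disp_small, (left_gray.shape[1], left_gray.shape[0]),
                      interpolation=cv2.INTER_LINEAR) / scale
    # 4. Approximate edge-preserving smoothing of the upsampled disparity.
    disp = cv2.bilateralFilter(disp, d=9, sigmaColor=25, sigmaSpace=9)
    return disp
```
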
  • Patent number: 11915449
    Abstract: The present invention relates to a method and an apparatus for estimating a user pose using a three-dimensional virtual space model. The method of estimating a user pose including the position and orientation information of a user for a three-dimensional space includes a step of receiving user information including an image acquired in a three-dimensional space, a step of confirming a three-dimensional virtual space model constructed based on spatial information including depth information and image information for the three-dimensional space, a step of generating corresponding information corresponding to the user information in the three-dimensional virtual space model, a step of calculating similarity between the corresponding information and the user information, and a step of estimating a user pose based on the similarity.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: February 27, 2024
    Assignee: Korea University Research and Business Foundation
    Inventors: Nak Ju Doh, Ga Hyeon Lim, Jang Hun Hyeon, Dong Woo Kim, Bum Chul Jang, Hyung A Choi
  • Patent number: 11915438
    Abstract: The method of determination of a depth map of a scene comprises generation of a distance map of the scene obtained by time of flight measurements, acquisition of two images of the scene from two different viewpoints, and stereoscopic processing of the two images taking into account the distance map. The generation of the distance map includes generation of distance histograms acquisition zone by acquisition zone of the scene, and the stereoscopic processing includes, for each region of the depth map corresponding to an acquisition zone, elementary processing taking into account the corresponding histogram.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: February 27, 2024
    Assignee: STMicroelectronics France
    Inventors: Manu Alibay, Olivier Pothier, Victor Macela, Alain Bellon, Arnaud Bourge
  • Patent number: 11915368
    Abstract: A system for modeling a roof structure comprising an aerial imagery database and a processor in communication with the aerial imagery database. The aerial imagery database stores a plurality of stereoscopic image pairs and the processor selects at least one stereoscopic image pair among the plurality of stereoscopic image pairs and related metadata from the aerial imagery database based on a geospatial region of interest. The processor identifies a target image and a reference image from the at least one stereoscopic pair and calculates a disparity value for each pixel of the identified target image to generate a disparity map. The processor generates a three dimensional point cloud based on the disparity map, the identified target image and the identified reference image. The processor optionally generates a texture map indicative of a three-dimensional representation of the roof structure based on the generated three dimensional point cloud.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: February 27, 2024
    Assignee: Insurance Services Office, Inc.
    Inventors: Joseph L. Mundy, Bryce Zachary Porter, Ryan Mark Justus, Francisco Rivas
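
The disparity-to-geometry step behind the point cloud can be sketched with the standard stereo relation z = f·B/d; the focal length, baseline, and principal point below are assumed calibration inputs, not values from the patent.

```python
import numpy as np

def disparity_to_points(disparity, focal_px, baseline_m, cx, cy):
    """disparity: (H, W) per-pixel disparity map of the target image."""
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = disparity > 0                      # ignore pixels with no match
    z = focal_px * baseline_m / disparity[valid]
    x = (u[valid] - cx) * z / focal_px
    y = (v[valid] - cy) * z / focal_px
    return np.column_stack([x, y, z])          # (M, 3) point cloud
```
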
  • Patent number: 11911105
    Abstract: Embodiments of the present disclosure are directed to systems, apparatuses and methods for prediction, diagnosis, planning and monitoring for myopia and myopic progression. In some embodiments, one or more refractive properties of the eye is determined for each of a plurality of retinal locations of the eye. The plurality of locations may comprise a central region, such as a fovea of the eye, or non-foveal region such as a peripheral region or a region of the macula outside the fovea. Measuring the refractive properties of the eye for the plurality of locations may be helpful in diagnosing myopia and other ocular conditions by providing the refractive properties of the eye for locations away from the fovea, such as the peripheral retina.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: February 27, 2024
    Assignee: ACUCELA INC.
    Inventors: Ryo Kubota, Philip M. Buscemi
  • Patent number: 11908150
    Abstract: Systems and methods are provided for three-dimensional scanning and measurement by a device having a processor. The processor is configured to receive images of an object from at least two angles; preprocess the images using morphological refinement; create a source point cloud based on the images; remove outliers from the source point cloud; globally register the source point cloud to generate a transformed source point cloud; compare the transformed source point cloud with a target point cloud to generate a stitched point cloud that thereby creates a stitched 3D model; measure the resulting stitched 3D model; and provide the resulting stitched 3D model for comparison to a digitized item to assess sizing of the 3D model to the item.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: February 20, 2024
    Assignee: Xesto Inc.
    Inventors: Mehmet Afiny Affan Akdemir, Christian Garcia Salguero, Victoria Sophie Howe
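
The outlier-removal step before registration is commonly done with a statistical k-nearest-neighbor filter; the brute-force sketch below (O(N²) memory, thresholds assumed) illustrates the idea rather than the claimed implementation.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """points: (N, 3) source point cloud; drops points whose mean distance to
    their k nearest neighbors is far above the cloud-wide average."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)                 # ignore self-distances
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```
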
  • Patent number: 11907886
    Abstract: In an example embodiment, a machine learning algorithm is used to train a machine-learned model to create a three-dimensional representation of products, with each product mapped into a coordinate in the three-dimensional space. The model selects the coordinates based on the similarity of the product to other products. Coordinates that are closer geometrically in the three-dimensional space represent products that are similar to each other, whereas as the coordinates get further and further away, this implies that the products are less and less similar. This machine-learned model then not only allows for quick analysis of multiple products, as similarity between products or groups of products can be performed using geometric calculations (e.g., cosine distance), but also can then be tied into a three-dimensional representation that can be displayed either on a two-dimensional display or displayed on a three-dimensional display.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: February 20, 2024
    Assignee: SAP SE
    Inventors: Oliver Grob, Jens Mansfeld
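
The similarity computation on the learned three-dimensional coordinates is plain cosine distance; a small sketch follows, with a hypothetical catalog lookup for illustration.

```python
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def most_similar(query_coord, catalog):
    """catalog: dict mapping product_id -> 3-D coordinate from the learned model."""
    return min(catalog, key=lambda pid: cosine_distance(query_coord, catalog[pid]))

# Example: products closer in the 3-D space are treated as more similar.
print(most_similar([1.0, 0.1, 0.0], {"A": [0.9, 0.2, 0.0], "B": [-1.0, 0.0, 0.5]}))
```
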
  • Patent number: 11908098
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a combined 3D representation of a user based on an alignment based on a 3D reference point. For example, a process may include obtaining a predetermined three-dimensional (3D) representation that is associated with a 3D reference point defined relative to a skeletal representation of the user. The process may further include obtaining a sequence of frame-specific 3D representations corresponding to multiple instants in a period of time, each of the frame-specific 3D representations representing a second portion of the user at a respective instant of the multiple instants in the period of time. The process may further include generating combined 3D representations of the user generated by combining the predetermined 3D representation with a respective frame-specific 3D representation based on an alignment which is based on the 3D reference point.
    Type: Grant
    Filed: September 20, 2023
    Date of Patent: February 20, 2024
    Assignee: Apple Inc.
    Inventor: Michael S. Hutchinson
  • Patent number: 11907812
    Abstract: A data generating method includes: an atomic model generating step of generating one or more three-dimensional atomic models corresponding to a nanomaterial to be measured; a three-dimensional data generating step of generating three-dimensional atomic level structure volume data corresponding to the nanomaterial to be measured based on the one or more three-dimensional atomic models; a tilt series generating step of generating a tilt series by simulating three-dimensional tomography for a plurality of different angles in a predetermined angle range for at least some of the three-dimensional atomic level structure volume data; and a three-dimensional atomic structure tomogram volume data generating step of generating a three-dimensional atomic structure tomogram volume data set by performing three-dimensional reconstruction on at least some of the three-dimensional atomic level structure volume data based on the tilt series.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: February 20, 2024
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Yongsoo Yang, Juhyeok Lee, Chaehwa Jeong
  • Patent number: 11908151
    Abstract: Systems and methods are provided for three-dimensional scanning and measurement by a device having a processor. The processor is configured to receive images of an object from at least two angles; preprocess the images using morphological refinement; create a source point cloud based on the images; remove outliers from the source point cloud; globally register the source point cloud to generate a transformed source point cloud; compare the transformed source point cloud with a target point cloud to generate a stitched point cloud that thereby creates a stitched 3D model; measure the resulting stitched 3D model; and provide the resulting stitched 3D model for comparison to a digitized item to assess sizing of the 3D model to the item.
    Type: Grant
    Filed: March 22, 2023
    Date of Patent: February 20, 2024
    Assignee: Xesto Inc.
    Inventors: Mehmet Afiny Affan Akdemir, Christian Garcia Salguero, Victoria Sophie Howe
  • Patent number: 11908165
    Abstract: In one embodiment, a method includes capturing, by a camera of an image capturing module, a first image of a target. The image capturing module and a drum are attached to a fixture and the target is attached to the drum. The method also includes determining a number of lateral pixels in a lateral pitch distance of the image of the target, determining a lateral object pixel size based on the number of lateral pixels, and determining a drum encoder rate based on the lateral object pixel size. The drum encoder rate is programmed into a drum encoder. The method further includes capturing, by the camera, a second image of the target while the target is rotated about an axis of the drum, determining a number of longitudinal pixels in a longitudinal pitch distance of the second image, and comparing the number of lateral pixels to the number of longitudinal pixels.
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: February 20, 2024
    Assignee: BNSF Railway Company
    Inventors: Darrell R. Krueger, Garrett Smitley