Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 11508077
    Abstract: A processor-implemented method of detecting a moving object includes: estimating a depth image of a current frame; determining an occlusion image of the current frame by calculating a depth difference value between the estimated depth image of the current frame and an estimated depth image of a previous frame; determining an occlusion accumulation image of the current frame by adding a depth difference value of the occlusion image of the current frame to a depth difference accumulation value of an occlusion accumulation image of the previous frame; and outputting an area of a moving object based on the occlusion accumulation image.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: November 22, 2022
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
    Inventors: Hyoun Jin Kim, Haram Kim
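The accumulation step in the abstract above lends itself to a direct array formulation. A minimal NumPy sketch, assuming a fixed depth-difference threshold and that a positive difference (current depth smaller than the previous estimate) signals an occluding mover (both are illustrative assumptions, not details from the patent):

```python
import numpy as np

def update_occlusion_accumulation(depth_prev, depth_cur, acc_prev, diff_thresh=0.1):
    """One step of the occlusion-accumulation scheme sketched above.

    depth_prev / depth_cur: estimated depth images of consecutive frames.
    acc_prev: occlusion accumulation image of the previous frame.
    Returns (occlusion image, updated accumulation image).
    """
    # Depth difference between the estimated depth images of the two frames.
    diff = depth_prev - depth_cur
    # Keep only differences large enough to indicate occlusion by a mover.
    occlusion = np.where(diff > diff_thresh, diff, 0.0)
    # Add this frame's depth-difference values to the running accumulation.
    acc_cur = acc_prev + occlusion
    return occlusion, acc_cur

def moving_object_mask(acc, area_thresh=0.5):
    # Output the area of the moving object from the accumulation image.
    return acc > area_thresh
```

Here `diff_thresh` and `area_thresh` are invented tuning knobs; the patent does not specify how the accumulation is thresholded into an output area.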
  • Patent number: 11495026
    Abstract: A technique facilitates selecting and designating an arbitrary one of a plurality of aerial lines. The aerial line extraction system includes: an area-of-interest cropping unit that crops a region where an aerial line is assumed to exist as an area of interest, using a support of the aerial line as a reference, from three-dimensional point cloud data; an element segmenting unit that segments the area of interest into a plurality of subdivided areas, obtains a histogram by counting the three-dimensional point clouds existing in each of the subdivided areas, and obtains a segmentation plane of the area of interest on the basis of the histogram; and an element display unit that segments the area of interest into a plurality of segmented areas by the segmentation plane and displays the three-dimensional point clouds included in each of the segmented areas in a distinguishable manner.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: November 8, 2022
    Assignee: HITACHI SOLUTIONS, LTD.
    Inventors: Sadaki Nakano, Nobutaka Kimura, Kishiko Maruyama, Nobuhiro Chihara
  • Patent number: 11481878
    Abstract: Systems, computer program products, and techniques for detecting and/or reconstructing objects depicted in digital image data within a three-dimensional space are disclosed. The concepts utilize internal features for detection and reconstruction, avoiding reliance on information derived from location of edges. The inventive concepts provide an improvement over conventional techniques since objects may be detected and/or reconstructed even when edges are obscured or not depicted in the digital image data. In one aspect, detecting a document depicted in a digital image includes: detecting a plurality of identifying features of the document, wherein the plurality of identifying features are located internally with respect to the object; projecting a location of one or more edges of the document based at least in part on the plurality of identifying features; and outputting the projected location of the one or more edges of the document to a display of a computer, and/or a memory.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: October 25, 2022
    Assignee: KOFAX, INC.
    Inventors: Jiyong Ma, Stephen Michael Thompson, Jan W. Amtrup
  • Patent number: 11483468
    Abstract: Methods and systems for capturing a three dimensional image are described. An image capture process is performed while moving a lens to capture image data across a range of focal depths, and a three dimensional image reconstruction process generates a three dimensional image based on the image data. A two-dimensional image is also rendered including focused image data from across the range of focal depths. The two dimensional image and the three dimensional image are fused to generate a focused three dimensional model.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: October 25, 2022
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventor: Chih-Min Liu
  • Patent number: 11465772
    Abstract: An exterior aircraft image projector includes at least one light source, providing a light output in operation; an optical system configured for transforming the light output of the at least one light source into a light beam and projecting said light beam onto the ground below the aircraft and/or the exterior of the aircraft; a photo detector arranged to detect a brightness level (Iambient) of the ground or the exterior and configured to provide a corresponding brightness signal; and a controller, coupled to the photo detector and to the at least one light source, configured to control an intensity of the light output of the at least one light source as a function of the brightness level (Iambient), as provided by the photo detector via the brightness signal.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: October 11, 2022
    Assignee: GOODRICH LIGHTING SYSTEMS GMBH
    Inventors: Bjoern Schallenberg, Carsten Pawliczek
  • Patent number: 11453130
    Abstract: A robot system, including: a robot; a base supporting the robot; a controller connected to the robot; a processor connected to the controller; a depth camera connected to the processor; a flange plate; a coupling shaft including a first end and a second end; a mounting base including an elongated hole, a first side wall, and a second side wall; a sprayer including a mounting shaft; a first positioning bolt; a limit arm including a first end and a second end; an axis pin; a limit shaft; a second positioning bolt; a gas cylinder; a piston rod; a connector; a shifter lever; a trigger. The robot is connected to the first end of the coupling shaft via the flange plate. The second end of the coupling shaft is connected to the mounting base. The mounting shaft of the sprayer is disposed in the elongated hole of the mounting base.
    Type: Grant
    Filed: June 28, 2020
    Date of Patent: September 27, 2022
    Assignee: DALIAN NEWSTAR AUTOMOBILE EQUIPMENT CO., LTD.
    Inventors: Kedong Bi, Chaoping Qin, Long Cui, Wentao Li
  • Patent number: 11436735
    Abstract: A volume of an object is extracted from a three-dimensional image to generate a three-dimensional object image, where the three-dimensional object image represents the object but little to no other aspects of the three-dimensional image. The three-dimensional image is yielded from an examination in which the object, such as a suitcase, is situated within a volume, such as a luggage bin, that may contain other aspects or objects that are not of interest, such as sidewalls of the luggage bin. The three-dimensional image is projected to generate a two-dimensional image, and a two-dimensional boundary of the object is defined, where the two-dimensional boundary excludes or cuts off at least some of the uninteresting aspects. In some embodiments, the two-dimensional boundary is reprojected over the three-dimensional image to generate a three-dimensional boundary, and voxels comprised within the three-dimensional boundary are extracted to generate the three-dimensional object image.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: September 6, 2022
    Assignee: ANALOGIC CORPORATION
    Inventors: David Lieblich, Nirupam Sarkar, Daniel B. Keesing
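The project-bound-reproject pipeline in this abstract can be sketched with a max-intensity projection and an axis-aligned 2D bounding box; the projection type, threshold, and box shape are all assumptions here, since the abstract leaves them open:

```python
import numpy as np

def extract_object_volume(volume, intensity_thresh=0.5):
    """Sketch of the projection-based extraction described above.

    volume: 3D array (z, y, x). Project along z, find a 2D bounding box of
    above-threshold pixels, then reproject that box over all slices.
    """
    # Project the 3D image onto a 2D image (max-intensity projection is an
    # assumption; the abstract does not specify the projection).
    proj = volume.max(axis=0)
    ys, xs = np.nonzero(proj > intensity_thresh)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    # Reproject the 2D boundary over the 3D image and extract the voxels
    # comprised within the resulting 3D boundary.
    out = np.zeros_like(volume)
    out[:, y0:y1, x0:x1] = volume[:, y0:y1, x0:x1]
    return out
```

A dim bin sidewall below the threshold falls outside the bounding box and is cut off, which is the behavior the abstract describes for uninteresting aspects.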
  • Patent number: 11417020
    Abstract: A method includes: obtaining a stereo pair of images from a stereo camera assembly of a mobile computing device, the stereo pair of images depicting a first marker and a second marker each associated with the mobile computing device; determining, from the stereo pair of images, a distance between the first and second markers; comparing a threshold to a difference between the determined distance and a reference distance corresponding to the first and second reference markers; and when the difference exceeds the threshold, generating an alert notification.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: August 16, 2022
    Assignee: Zebra Technologies Corporation
    Inventors: Serguei Zolotov, Lawrence Allen Stone
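The distance-check logic of this abstract reduces to a few lines. A minimal sketch; `check_calibration` and its return values are illustrative names, not the patent's:

```python
def check_calibration(p1, p2, reference_distance, threshold):
    """Compare the measured marker separation against the reference distance
    and raise an alert when the difference exceeds the threshold."""
    # Euclidean distance between the two markers as triangulated from the
    # stereo pair of images.
    measured = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
    drift = abs(measured - reference_distance)
    return "alert" if drift > threshold else "ok"
```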
  • Patent number: 11386616
    Abstract: A spatial indexing system receives a sequence of images depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial location at which each of the images was captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each of the images at its corresponding location within the model.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: July 12, 2022
    Assignee: OPEN SPACE LABS, INC.
    Inventors: Michael Ben Fleischman, Philip DeCamp, Jeevan Kalanithi, Thomas Friel Allen
  • Patent number: 11380050
    Abstract: A face image generation method includes: determining, according to a first face image, a three dimensional morphable model (3DMM) corresponding to the first face image as a first model; determining, according to a reference element, a 3DMM corresponding to the reference element as a second model, the reference element representing a posture and/or an expression of a target face image; determining, according to the first model and the second model, an initial optical flow map corresponding to the first face image, and deforming the first face image according to the initial optical flow map to obtain an initial deformation map; obtaining, through a convolutional neural network, an optical flow increment map and a visibility probability map that correspond to the first face image; and generating the target face image according to the first face image, the initial optical flow map, the optical flow increment map, and the visibility probability map.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: July 5, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xue Fei Zhe, Yonggen Ling, Lin Chao Bao, Yi Bing Song, Wei Liu
  • Patent number: 11367264
    Abstract: A computer implemented method or system including a map conversion toolkit and a map population toolkit. The map conversion toolkit allows one to quickly trace the layout of a floor plan, generating a file (e.g., GeoJSON file) that can be rendered in two dimensions (2D) or three dimensions (3D) using web tools such as Mapbox. The map population toolkit takes the scan (e.g., 3D scan) of a room in the building (taken from an RGB-D camera), and, through a semi-automatic process, generates individual objects, which are correctly dimensioned and positioned in the (e.g., GeoJSON) representation of the building. In another example, a computer implemented method for diagraming a space comprises obtaining a layout of the space; and annotating or decorating the layout with meaningful labels that are translatable to glanceable visual signals or audio signals.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: June 21, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Viet Trinh, Roberto Manduchi
  • Patent number: 11361457
    Abstract: An annotation system uses annotations for a first set of sensor measurements from a first sensor to identify annotations for a second set of sensor measurements from a second sensor. The annotation system identifies reference annotations in the first set of sensor measurements that indicate a location of a characteristic object in the two-dimensional space. The annotation system determines a spatial region in the three-dimensional space of the second set of sensor measurements that corresponds to a portion of the scene represented in the annotation of the first set of sensor measurements. The annotation system determines annotations within the spatial region of the second set of sensor measurements that indicate a location of the characteristic object in the three-dimensional space.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: June 14, 2022
    Assignee: Tesla, Inc.
    Inventor: Anting Shen
  • Patent number: 11321852
    Abstract: A method for initializing a tracking algorithm for target objects includes generating a 3D point cloud of the target object and iteratively determining a spatial position and orientation of the target object using a 3D model. A spatial position and orientation of the target object is first determined using an artificial neural network; thereafter the tracking algorithm is initialized with a result of this determination. A method for training an artificial neural network for initializing a tracking algorithm for target objects includes generating a 3D point cloud of the target object by a scanning method, and iteratively determining a spatial position and orientation of the target object using a 3D model of the target object. The artificial neural network is trained using training data to initially determine a spatial position and orientation of the target object and thereafter initialize the tracking algorithm with a result of this initial determination.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: May 3, 2022
    Assignee: Jena-Optronik GmbH
    Inventors: Christoph Schmitt, Johannes Both, Florian Kolb
  • Patent number: 11315274
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: April 26, 2022
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 11301953
    Abstract: Disclosed are a panoramic video asymmetrical mapping method and a corresponding inverse mapping method that include mapping a spherical surface corresponding to a panoramic image or video A onto a two-dimensional image or video B, projecting the spherical surface onto an isosceles quadrangular pyramid with a square bottom plane, and further projecting the isosceles quadrangular pyramid onto a planar surface, using isometric projection on a main viewpoint region in the projection and using a relatively high sampling density to ensure that the video quality of the region of the main viewpoint is high, while using a relatively low sample density for non-main viewpoint regions so as to reduce bit rate. The panoramic video asymmetrical inverse mapping technique provides a method for mapping from a planar surface to a spherical surface, and a planar surface video may be mapped back to a spherical surface for rendering and viewing.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: April 12, 2022
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Yueming Wang, Zhenyu Wang, Wen Gao
  • Patent number: 11282275
    Abstract: A method for generating a storybook includes generating metadata including shape information which is a predefined value for specifying a shape that a character model has in each of scenes in which a character of storybook content appears, receiving a facial image of a user, generating a user model based on a user face by applying texture information of the facial image to the character, generating a model image of the user model having a predefined shape in each of the scenes by reflecting shape information predefined in each of the scenes into the user model, and generating a file printable on a certain actual object to include at least one of the model images.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 22, 2022
    Assignee: ILLUNI INC.
    Inventors: Byunghwa Park, Youngjun Kwon, Gabee Jo
  • Patent number: 11263820
    Abstract: A method of operating a computing system to generate a model of an environment represented by a mesh is provided. The method allows 3D meshes to be updated for client applications in real time with low latency to support on-the-fly environment changes. The method provides 3D meshes adaptive to different levels of simplification requested by various client applications. The method provides local updates, for example, updating only the mesh parts that have changed since the last update. The method also provides 3D meshes with planarized surfaces to support robust physics simulations. The method includes segmenting a 3D mesh into mesh blocks. The method also includes performing a multi-stage simplification on selected mesh blocks. The multi-stage simplification includes a pre-simplification operation, a planarization operation, and a post-simplification operation.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: March 1, 2022
    Assignee: Magic Leap, Inc.
    Inventors: David Geoffrey Molyneaux, Frank Thomas Steinbrücker, Zhongle Wu, Xiaolin Wei, Jianyuan Min, Yifu Zhang
  • Patent number: 11252430
    Abstract: The present disclosure is directed to a system and method for exploiting camera and depth information associated with rendered video frames, such as those rendered by a server operating as part of a cloud gaming service, to more efficiently encode the rendered video frames for transmission over a network. The method and system of the present disclosure can be used in a server operating in a cloud gaming service to improve, for example, the amount of latency, downstream bandwidth, and/or computational processing power associated with playing a video game over its service. The method and system of the present disclosure can be further used in other applications where camera and depth information of a rendered or captured video frame is available.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: February 15, 2022
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Khaled Mammou, Ihab Amer, Gabor Sines, Lei Zhang, Michael Schmit, Daniel Wong
  • Patent number: 11182928
    Abstract: Embodiments of the present disclosure provide a method, apparatus for determining a rotation angle of an engineering mechanical device, an electronic device and a computer readable medium. The method may include: acquiring a depth image sequence acquired by a binocular camera disposed at a rotating portion of the engineering mechanical device during rotation of the rotating portion of the engineering mechanical device; converting the depth image sequence into a three-dimensional point cloud sequence; and determining a matching point between three-dimensional point cloud frames in the three-dimensional point cloud sequence, determining a rotation angle of the binocular camera during the rotation of the rotating portion of the engineering mechanical device based on the matching point between the three-dimensional point cloud frames as the rotation angle of the engineering mechanical device.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: November 23, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xinjing Cheng, Ruigang Yang, Feixiang Lu, Yajue Yang, Hao Xu
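The step of determining a rotation angle from matched points between consecutive point cloud frames can be sketched as a single-axis least-squares rotation fit, assuming the matches are already centered on the rotation axis and that the rotating portion turns about z; both are simplifying assumptions, not details from the abstract:

```python
import math

def yaw_between_matches(points_a, points_b):
    """Estimate the rotation angle that maps matched points of one cloud
    frame onto the next, via atan2 of summed cross and dot products
    (a planar simplification of the Kabsch algorithm)."""
    # Use only x and y: rotation is assumed to be about the vertical z axis.
    num = sum(ax * by - ay * bx
              for (ax, ay, _), (bx, by, _) in zip(points_a, points_b))
    den = sum(ax * bx + ay * by
              for (ax, ay, _), (bx, by, _) in zip(points_a, points_b))
    return math.atan2(num, den)
```

For a point rotated by an angle θ, the cross sum is proportional to sin θ and the dot sum to cos θ, so `atan2` recovers θ directly.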
  • Patent number: 11176353
    Abstract: The disclosure relates to corresponding apparatus, computer program and method for receiving three-dimensional, 3D, map-data, in which a plurality of locations within the 3D-map-data are associated with respective 3D-data-capture-locations of a 3D-camera, and in which 3D-camera-timing-information is associated with each of the plurality of locations; receiving one or more two-dimensional, 2D, images from a 2D-camera, in which 2D-camera-timing-information is associated with each 2D-image, and in which each 2D-image is captured when a movement of the 3D-camera is less than a threshold level; identifying 3D-camera-timing-information associated with locations within the 3D-map-data that correspond to 3D-data-capture-locations with a movement level of the 3D-camera less than the threshold level; associating, in a combined dataset, each 2D-image with a corresponding location within the 3D-map-data by a data processing unit correlating the 2D-camera-timing-information with the identified 3D-camera-timing-information.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: November 16, 2021
    Assignee: Geoslam Limited
    Inventors: Neil Slatcher, Alex Bentley, Cheryl Smith
  • Patent number: 11158081
    Abstract: A positioning method is provided for an electrical device including a depth sensor and an inertial measurement unit (IMU). The positioning method includes: calculating an initial position and an initial direction of the electrical device according to signals of the IMU; obtaining an environment point cloud from a database, and obtaining a partial environment point cloud according to the initial position and the initial direction; obtaining a read point cloud by the depth sensor; and matching the read point cloud and the partial environment point cloud to calculate an updated position and an updated direction of the electrical device.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: October 26, 2021
    Assignee: ADAT Technology Co., Ltd.
    Inventors: Kai-Hung Su, Mu-Heng Lin
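The step of obtaining a partial environment point cloud according to the initial position and direction might look like a radius-and-bearing crop of the stored environment cloud; the radius and field-of-view parameters below are invented for illustration:

```python
import numpy as np

def crop_partial_cloud(env_cloud, position, direction, radius=5.0, fov_cos=0.0):
    """Select the part of the environment point cloud near the IMU-derived
    initial position and roughly in front of the initial direction.

    env_cloud: (N, 3) array of environment points.
    position: (3,) initial position; direction: (3,) unit initial direction.
    """
    rel = env_cloud - position
    dist = np.linalg.norm(rel, axis=1)
    within = dist < radius
    # Keep points whose bearing from the device lies inside the assumed
    # field of view of the depth sensor.
    facing = (rel @ direction) / np.maximum(dist, 1e-9) > fov_cos
    return env_cloud[within & facing]
```

The cropped cloud would then be matched against the sensor's read point cloud (e.g., by ICP) to produce the updated position and direction; that matching step is not sketched here.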
  • Patent number: 11100707
    Abstract: A computer-implemented graphics processing method, which comprises providing an original set of vertices of a terrain mesh; producing a new set of vertices from the original set of vertices; and, for a given vertex in the original set of vertices: (i) obtaining texture coordinates for vertices in a subset of vertices in the new set of vertices that corresponds to the given vertex in the original set of vertices; and (ii) combining the obtained texture coordinates to obtain a texture coordinate for the given vertex in the original set of vertices. The combining may comprise determining a weighted sum of the obtained texture coordinates, and the weights may be the weights for which the given vertex in the original set of vertices is the centroid of a polygon formed by the corresponding vertices in the new set of vertices.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 24, 2021
    Assignee: SQUARE ENIX LIMITED
    Inventor: Peter Sikachev
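The weighted combination described above is a convex combination of the subset's texture coordinates. A minimal sketch with hypothetical helper names (the computation of the centroid weights themselves is omitted):

```python
def combine_texture_coords(sub_uvs, weights):
    """Weighted sum of the texture coordinates of the subset of new vertices
    corresponding to one vertex of the original set.

    sub_uvs: list of (u, v) pairs; weights: matching list summing to 1.
    """
    assert len(sub_uvs) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # convex combination
    u = sum(w * uv[0] for uv, w in zip(sub_uvs, weights))
    v = sum(w * uv[1] for uv, w in zip(sub_uvs, weights))
    return (u, v)
```

With equal weights over a triangle's corners, the original vertex is the centroid of that triangle, matching the centroid condition in the abstract.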
  • Patent number: 11100310
    Abstract: Disclosed in embodiments of the present disclosure are an object three-dimensional detection method and apparatus, an intelligent driving control method and apparatus, a medium, and a device. The object three-dimensional detection method comprises: obtaining two-dimensional coordinates of a key point of a target object in an image to be processed; constructing a pseudo three-dimensional detection body of the target object according to the two-dimensional coordinates of the key point; obtaining depth information of the key point; and determining a three-dimensional detection body of the target object according to the depth information of the key point and the pseudo three-dimensional detection body.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: August 24, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yingjie Cai, Xingyu Zeng, Junjie Yan, Xiaogang Wang
  • Patent number: 11049219
    Abstract: Methods and apparatus for multi-encoder processing of high resolution content. In one embodiment, the method includes capturing high resolution imaging content; splitting up the captured high resolution imaging content into respective portions; feeding the split up portions to respective imaging encoders; packing encoded content from the respective imaging encoders into an A/V container; and storing and/or transmitting the A/V container. In another embodiment, the method includes retrieving and/or receiving an A/V container; splitting up the retrieved and/or received A/V container into respective portions; feeding the split up portions to respective imaging decoders; stitching the decoded imaging portions into a common imaging portion; and storing and/or displaying at least a portion of the common imaging portion.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 29, 2021
    Assignee: GoPro, Inc.
    Inventors: David Newman, Daryl Stimm, Adeel Abbas
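The "splitting up the captured high resolution imaging content into respective portions" step can be sketched as cutting a frame into horizontal strips, one per encoder; real implementations may overlap strips or split along other axes, so this is only a sketch:

```python
def split_portions(frame, n):
    """Split a frame (a list of pixel rows) into n horizontal strips to be
    fed to the respective imaging encoders."""
    h = len(frame)
    step = -(-h // n)  # ceiling division so every row lands in some strip
    return [frame[i:i + step] for i in range(0, h, step)]
```

On the decode side, stitching the portions back into a common image is the list concatenation of the strips in order.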
  • Patent number: 11046430
    Abstract: Systems and methods are provided for improving the flight safety of fixed- and rotary-wing unmanned aerial systems (UAS) operating in complex dynamic environments, including urban cityscapes. Sensors and computations are integrated to predict local winds and promote safe operations in dynamic urban regions where GNSS and other network communications may be unavailable. The system can be implemented onboard a UAS and does not require in-flight communication with external networks. Predictions of local winds (speed and direction) are created using inputs from sensors that scan the local environment. These predictions are then used by the UAS guidance, navigation, and control (GNC) system to determine safe trajectories for operations in urban environments.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: June 29, 2021
    Assignee: United States of America as Represented by the Administrator of NASA
    Inventors: John Earl Melton, Ben Edward Nikaido
  • Patent number: 11037281
    Abstract: Embodiments of this application disclose an image fusion method performed by a computing device. The method includes the following steps: obtaining source face image data of a current to-be-fused image and resource configuration information of a current to-be-fused resource, performing image recognition processing on the source face image data, to obtain source face feature points corresponding to the source face image data, and generating a source face three-dimensional grid of the source face image data according to the source face feature points, performing grid fusion by using a resource face three-dimensional grid and the source face three-dimensional grid to generate a target face three-dimensional grid, and performing face complexion fusion by using source complexion data of the source face image data and resource complexion data of resource face image data on the target face three-dimensional grid, to generate fused target face image data.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: June 15, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Keyi Shen, Pei Cheng, Mengren Qian, Bin Fu
  • Patent number: 11015929
    Abstract: The present invention discloses a positioning method and apparatus. The method includes: acquiring a first image captured by an optical device, where the first image includes an observation object and a plurality of predetermined objects, and the predetermined objects are objects with known geographic coordinates; selecting a first predetermined object from the predetermined objects based on the first image; acquiring a second image, where the first predetermined object is located in a center of the second image; determining a first attitude angle of the optical device based on the first predetermined object in the second image and measurement data captured by an inertial navigation system; modifying the first attitude angle based on a positional relationship between the observation object and the first predetermined object in the second image, to obtain a second attitude angle; and calculating geographic coordinates of the observation object based on the second attitude angle.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: May 25, 2021
    Assignee: DONGGUAN FRONTIER TECHNOLOGY INSTITUTE
    Inventors: Ruopeng Liu, Lin Luan, Faguo Xu
  • Patent number: 11010639
    Abstract: An angularly-dependent reflectance of a surface of an object is measured. Images are collected by a sensor at different sensor geometries and different light-source geometries. A point cloud is generated. The point cloud includes a location of a point, spectral band intensity values for the point, an azimuth and an elevation of the sensor, and an azimuth and an elevation of a light source. Raw pixel intensities of the object and surroundings of the object are converted to a surface reflectance of the object using specular array calibration (SPARC) targets. A three-dimensional (3D) location of each point in the point cloud is projected back to each image using metadata from the plurality of images, and spectral band values are assigned to each value in the point cloud, thereby resulting in a multi-angle spectral reflectance data set.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: May 18, 2021
    Assignee: Raytheon Company
    Inventors: John J. Coogan, Stephen J. Schiller
  • Patent number: 10984221
    Abstract: An image recognition device includes: a luminance image generator and a distance image generator that generate a luminance image and a distance image, respectively, based on an image signal of an imaging target object output from a photoreceptor element; a target object recognition processor that extracts a target-object candidate from the luminance image using a machine learning database; and a three-dimensional object determination processor that uses the distance image to determine whether the extracted target-object candidate is a three-dimensional object. If it is determined that the target-object candidate is not a three-dimensional object, the target-object candidate extracted from the luminance image is prevented from being used, in the machine learning database, as image data for extracting a feature value of a target object.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: April 20, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Shigeru Saitou, Shinzo Koyama, Masato Takemoto, Motonori Ishii
  • Patent number: 10976812
    Abstract: Provided is an information processing device including an image processing unit that performs geometric correction on a target image instructed to be displayed in a display region that displays an image. The geometric correction is performed on a basis of direction information indicating a direction of a user viewing the display region with respect to the display region.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: April 13, 2021
    Assignee: SONY CORPORATION
    Inventors: Takuya Ikeda, Kentaro Ida, Yousuke Kawana, Osamu Shigeta, Seiji Suzuki
  • Patent number: 10977812
    Abstract: A method is described for adapting 3D image datasets so that they can be registered and combined with 2D images of the same subject, wherein deformation or movement of parts of the subject has occurred between obtaining the 3D image and the 2D image. 2D-3D registrations of the images with respect to multiple features visible in both images are used to provide point correspondences between the images in order to provide an interpolation function that can be used to determine the position of a feature visible in the first image but not the second image and thus mark the location of the feature on the second image. Also described is apparatus for carrying out this method.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 13, 2021
    Assignee: Cydar Limited
    Inventors: Tom Carrell, Graeme Penney, Andreas Varnavas
  • Patent number: 10933534
    Abstract: Included is a method for a mobile automated device to detect and avoid edges including: providing one or more rangefinder sensors on the mobile automated device to calculate, continuously or periodically, distances from the one or more rangefinder sensor to a surface; monitoring, with a processor of the mobile automated device, the distances calculated by each of the one or more rangefinder sensors; and actuating, with the processor of the mobile automated device, the mobile automated device to execute one or more predetermined movement patterns upon the processor detecting a calculated distance greater than a predetermined amount, wherein the one or more movement patterns initiate movement of the mobile automated device away from the area where the increase was detected.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: March 2, 2021
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia, Masih Ebrahimi Afrouzi
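    A minimal sketch of the distance-monitoring logic this abstract describes; the margin value and the reversal-of-heading "movement pattern" are assumptions for illustration, not taken from the patent:

    ```python
    def detect_edge(baseline_mm: float, reading_mm: float,
                    margin_mm: float = 30.0) -> bool:
        """Return True when a rangefinder reading exceeds the expected
        floor distance by more than `margin_mm`, indicating a drop-off
        (e.g. a stair edge) ahead of the device."""
        return reading_mm - baseline_mm > margin_mm

    def avoidance_heading(current_heading_deg: float) -> float:
        """One trivial predetermined movement pattern: reverse direction
        away from the detected edge."""
        return (current_heading_deg + 180.0) % 360.0
    ```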
  • Patent number: 10922787
    Abstract: An imaging apparatus that starts connecting processing at a synthesizing position in an early stage is provided. The imaging apparatus includes: a first imaging element that images a first imaging range, a second imaging element that images a second imaging range of which one part overlaps with the first imaging range, and a synthesizing unit that synthesizes an image corresponding to an imaging range wider than the first imaging range or the second imaging range, based on pixel data groups output by the first imaging element and the second imaging element, wherein the first imaging element and the second imaging element output pixel data corresponding to a position at which the first imaging range and the second imaging range overlap each other, to the synthesizing unit prior to other pixel data.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: February 16, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yasuaki Ise
  • Patent number: 10915760
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting humans in images using occupancy grid maps. The methods, systems, and apparatus include actions of obtaining an image of a scene without people, generating a reference occupancy grid from the image, generating pairs of training images with humans rendered and corresponding training occupancy grids based on the occupancy grid and the image, training a scene-specific human detector with the pairs of training images with humans rendered and corresponding training occupancy grids, generating a sample occupancy grid from a sample image using the scene-specific human detector, and augmenting the sample image using the sample occupancy grid.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: February 9, 2021
    Assignee: ObjectVideo Labs, LLC
    Inventor: SungChun Lee
  • Patent number: 10878630
    Abstract: A three-dimensional (3D) image display device includes a display device; a variable focus optical system configured to focus the 3D image formed by the display device on a reference plane, a processor configured to determine a representative depth value of the 3D image by selecting a depth position, from among a plurality of depth positions corresponding to the 3D image, as the representative depth value, and control the variable focus optical system to adjust the reference plane by adjusting a focal point of the variable focus optical system based on the representative depth value; and a transfer optical system configured to transfer the 3D image focused on the reference plane to a pupil of an observer.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: December 29, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Geeyoung Sung, Yuntae Kim, Changkun Lee, Hongseok Lee
  • Patent number: 10860166
    Abstract: Provided is an image processing method including: displaying an image including a plurality of objects; receiving a selection of an object from among the plurality of objects; receiving a depth adjustment input; changing a depth of the selected object based on the depth adjustment input; generating a depth adjusted image file of the image based on the changed depth; and displaying a depth adjusted image based on the generated depth adjusted image file.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: December 8, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hyun-jee Kim
  • Patent number: 10846818
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; identifies 2D boundary information for the object; determines a speed and a heading for the object; and registers the 3D segment with the 2D boundary information by adjusting the relative positions of the 3D segment and the 2D boundary information based on the speed and heading of the object and matching, in 3D space, the 3D segment with projected 2D boundary information.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Yusuke Kanzawa, Michael James Delp
  • Patent number: 10846817
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; classifies pixels in the 2D image data; determines a speed and a heading for the object; and registers the 3D segment with a portion of the classified pixels by either (1) shifting the 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time and projecting the time-shifted 3D segment onto 2D image space; or (2) projecting the 3D segment onto 2D image space and shifting the projected 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Yusuke Kanzawa, Michael James Delp
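    A toy version of the time-shift-then-project step from the two Toyota abstracts above, assuming a pinhole camera model and planar motion expressed in the camera frame (both assumptions for illustration only):

    ```python
    import numpy as np

    def time_shift_segment(points, speed, heading_rad, dt):
        """Shift a 3D segment (N x 3, camera frame) by the object's
        speed and heading so it corresponds to the 2D image capture
        time, `dt` seconds later."""
        shift = np.array([speed * np.cos(heading_rad) * dt,
                          speed * np.sin(heading_rad) * dt,
                          0.0])
        return points + shift

    def project_to_image(points, K):
        """Pinhole projection of 3D camera-frame points (N x 3, z > 0)
        into pixel coordinates using intrinsics matrix K."""
        uvw = (K @ points.T).T
        return uvw[:, :2] / uvw[:, 2:3]
    ```

    Registration then reduces to matching the projected, time-shifted segment against the classified pixels or 2D boundary information.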
  • Patent number: 10841799
    Abstract: Methods and apparatuses for arranging sounding symbol are provided. An example apparatus comprises memory; and processing circuitry coupled to the memory. The processing circuitry is configured to encode a sounding signal. The sounding signal comprises a plurality of sounding symbols, and the repetition of sounding symbols to be transmitted in sequence is avoided.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: November 17, 2020
    Inventors: Assaf Gurevitz, Robert Stacey, Jonathan Segev, Qinghua Li, Danny Alexander, Shlomi Vituri, Feng Jiang
  • Patent number: 10834379
    Abstract: A wide spread adoption of 3D videos and technologies is hindered by the lack of high-quality 3D content. One promising solution to address this problem is to use automated 2D-to-3D conversion. However, current conversion methods, while general, produce low-quality results with artefacts that are not acceptable to many viewers. Creating a database of 3D stereoscopic videos with accurate depth is, however, very difficult. Computer generated content can be used to generate high-quality 3D video reference database for 2D-to-3D conversion. The method transfers depth information from frames in the 3D reference database to the target frame while respecting object boundaries. It computes depth maps from the depth gradients, and outputs a stereoscopic video.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: November 10, 2020
    Inventors: Mohamed M. Hefeeda, Kiana Ali Asghar Calagari, Mohamed Abdelaziz A Mohamed Elgharib, Wojciech Matusik, Piotr Didyk, Alexandre Kaspar
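    The depth-from-gradients step in the abstract above can be illustrated in one dimension: integrating horizontal depth gradients row by row, given the first column. This is a deliberate simplification of the 2D Poisson-style reconstruction such methods typically use:

    ```python
    import numpy as np

    def depth_from_gradient(gx, first_col):
        """Recover a depth map from horizontal depth gradients by
        cumulative integration along each row.

        gx        : (H, W-1) array, gx[i, j] = depth[i, j+1] - depth[i, j]
        first_col : (H,) array, the known depth of column 0
        """
        seeded = np.concatenate([first_col[:, None], gx], axis=1)
        return np.cumsum(seeded, axis=1)   # running sum rebuilds each row
    ```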
  • Patent number: 10825259
    Abstract: An apparatus to generate a model of a surface of an object includes a data set pre-aligner configured to receive multiple sets of surface data that correspond to respective portions of a surface of an object and that include three-dimensional (3D) points. The data set pre-aligner is also configured to perform a pre-alignment of overlapping sets to generate pre-aligned sets, including performing a rotation operation on a second set of the surface data, relative to a first set of the surface data that overlaps the second set, to apply a rotation amount that is selected from among multiple discrete rotation amounts and based on a similarity metric. The apparatus includes a data set aligner configured to perform an iterative alignment of the pre-aligned sets to generate aligned sets. The apparatus also includes a 3D model generator configured to combine the aligned sets to generate a 3D model of the object.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: November 3, 2020
    Assignee: THE BOEING COMPANY
    Inventors: Kyungnam Kim, Heiko Hoffmann
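    A brute-force sketch of the pre-alignment idea in the abstract above, assuming rotation about a single axis and mean nearest-neighbor distance as the similarity metric (both choices are illustrative; the patent does not pin them down):

    ```python
    import numpy as np

    def rotz(theta):
        """Rotation matrix about the z axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def prealign(first, second, n_rotations=8):
        """Try a discrete set of rotations of `second` and keep the one
        minimizing mean nearest-neighbor distance to `first`."""
        best_R, best_cost = None, np.inf
        for k in range(n_rotations):
            R = rotz(2.0 * np.pi * k / n_rotations)
            rotated = second @ R.T
            # brute-force nearest-neighbor distances (fine for small sets)
            d = np.linalg.norm(rotated[:, None, :] - first[None, :, :], axis=2)
            cost = d.min(axis=1).mean()
            if cost < best_cost:
                best_cost, best_R = cost, R
        return best_R, best_cost
    ```

    The pre-aligned sets would then feed an iterative refinement such as ICP, matching the data set aligner's role in the abstract.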
  • Patent number: 10817125
    Abstract: Provided is an image processing method including: displaying an image including a plurality of objects; receiving a selection of an object from among the plurality of objects; receiving a depth adjustment input; changing a depth of the selected object based on the depth adjustment input; generating a depth adjusted image file of the image based on the changed depth; and displaying a depth adjusted image based on the generated depth adjusted image file.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: October 27, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hyun-jee Kim
  • Patent number: 10805860
    Abstract: Provided are a method by which a terminal performs an access barring check in a wireless communication system and a device for supporting the same. The method may include: a step for entering an RRC_INACTIVE state; a step for performing the access barring check on a cell; a step for checking that access to the cell is prevented as many times as the maximum number of access attempts; and a step for transitioning from the RRC_INACTIVE state to an RRC_IDLE state.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: October 13, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Youngdae Lee, Jaehyun Kim, Bokyung Byun
  • Patent number: 10798416
    Abstract: Disclosed is a 3D video motion estimating apparatus and method. The 3D video motion estimating apparatus may enable a motion vector of a color image and a motion vector of a depth image to refer to each other, thereby increasing a compression rate.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: October 6, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin Young Lee, Du-Sik Park, Jaejoon Lee, Ho Cheon Wey, Il Soon Lim, Seok Lee
  • Patent number: 10796434
    Abstract: A method for learning an automatic parking device of a vehicle for detecting an available parking area is provided. The method includes steps of: a learning device, (a) if a parking lot image of an area nearby the vehicle is acquired, (i) inputting the parking lot image into a segmentation network to output a convolution feature map via an encoder, output a deconvolution feature map by deconvoluting the convolution feature map via a decoder, and output segmentation information by masking the deconvolution feature map via a masking layer; (b) inputting the deconvolution feature map into a regressor to generate relative coordinates of vertices of a specific available parking region, and generate regression location information by regressing the relative coordinates; and (c) instructing a loss layer to calculate 1-st losses by referring to the regression location information and an ROI GT, and learning the regressor via backpropagation using the 1-st losses.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 6, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10796141
    Abstract: Systems and methods are provided for capturing images of animals for the purpose of identifying the animal. A camera can be positioned to capture images of an animal at a feeding station. A member can be positioned on the opposite side of the feeding station from the camera to provide a common background for the images captured by the camera. When an image is captured by the camera, a determination is made as to whether an animal is present in the image. If an animal is determined to be present in the image, a segmentation algorithm can be used to remove (or make black) the background pixels from the image, leaving only the pixels associated with the animal. The image with only the animal pixels can be provided to animal recognition software for identification of the animal. In some embodiments, captured images can be used to create synthetic images for training without requiring segmentation for the identification process.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: October 6, 2020
    Assignee: SPECTERRAS SBF, LLC
    Inventors: James W. Shepherd, Jr., Wesley E. Snyder
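    A crude stand-in for the segmentation step in the abstract above, assuming a static background image and a fixed per-pixel difference threshold (both assumptions; the patent leaves the algorithm open):

    ```python
    import numpy as np

    def segment_animal(frame, background, threshold=25):
        """Blacken pixels close to a known background image, keeping
        only foreground (animal) pixels. Returns the masked frame and
        the boolean foreground mask."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        mask = diff.max(axis=-1) > threshold   # any channel differs enough
        out = frame.copy()
        out[~mask] = 0                         # make background pixels black
        return out, mask
    ```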
  • Patent number: 10769828
    Abstract: In an embodiment, an automated process for generating photo collages including an individual photo of each member of a group, team, etc. with head shots of some or all members is provided. Various members or photographers may take digital photos of each member, capturing a full or partial body photo. The process may use face detection techniques to identify the faces in the body photos, and to automatically crop the head shots from the full body photos. The head shots may be cropped in a consistent manner, leading to a visually pleasing set of head shots in the collage. The effort required of the individuals tasked with producing the collages may be substantially reduced, in some embodiments. Additionally, the quality of the resulting collages may be improved, in some embodiments.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: September 8, 2020
    Inventor: Nimai C. Malle
  • Patent number: 10739458
    Abstract: A method and system for scanning and measuring an environment is provided. The method includes providing a three-dimensional (3D) measurement device, the 3D measurement device being operable in a helical mode or a compound mode, wherein a plurality of light beams are emitted along a first path defined by a first axis and a second axis in the compound mode and along a second path defined by the first axis in the helical mode. A mobile platform holding the 3D measurement device is moved from a first position. A first group of 3D coordinates of the area is acquired by the 3D measurement device while the mobile platform is moving. A second group of 3D coordinates of the area is acquired with a second 3D measurement device that has six degrees of freedom (6DOF). The first group of 3D coordinates is registered based on the second group of 3D coordinates.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: August 11, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Oliver Zweigle, Bernd-Dietmar Becker, Reinhard Becker
  • Patent number: 10726570
    Abstract: Augmented reality devices and methods for computing a homography based on two images. One method may include receiving a first image based on a first camera pose and a second image based on a second camera pose, generating a first point cloud based on the first image and a second point cloud based on the second image, providing the first point cloud and the second point cloud to a neural network, and generating, by the neural network, the homography based on the first point cloud and the second point cloud. The neural network may be trained by generating a plurality of points, determining a 3D trajectory, sampling the 3D trajectory to obtain camera poses viewing the points, projecting the points onto 2D planes, comparing a generated homography using the projected points to the ground-truth homography and modifying the neural network based on the comparison.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: July 28, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
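    A classical, non-learned baseline for the homography that the neural network in the abstract above produces is the direct linear transform (DLT) from point correspondences. A sketch, assuming exact correspondences and at least four point pairs:

    ```python
    import numpy as np

    def dlt_homography(src, dst):
        """Estimate the 3x3 homography mapping src -> dst (N >= 4 point
        pairs) via the direct linear transform: stack two linear
        constraints per correspondence and take the SVD null vector."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)       # null vector = homography entries
        return H / H[2, 2]             # normalize so H[2, 2] == 1
    ```

    The learned approach in the patent is aimed at cases where correspondences are noisy or come from point clouds rather than clean 2D matches.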
  • Patent number: 10713805
    Abstract: A method for encoding a depth map image involves dividing the image into blocks. These blocks are then classified into smooth blocks without large depth discontinuities and discontinuous blocks with large depth discontinuities. In the discontinuous blocks, depth discontinuities are represented by line segments and partitions. Interpolation-based intra prediction is used to approximate and compress the depth values in the smooth blocks and partitions. Further compression can be achieved with depth-aware quantization, adaptive de-blocking filtering, scale-adaptive block size, and resolution decimation schemes.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: July 14, 2020
    Assignee: VERSITECH LIMITED
    Inventors: Shing Chow Chan, Jia-fei Wu
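    The smooth/discontinuous block classification in the abstract above can be sketched with an assumed depth-range criterion (the patent does not specify the threshold or the exact test):

    ```python
    import numpy as np

    def classify_blocks(depth, block=8, jump=16):
        """Label each block of a depth map 'smooth' or 'discontinuous'
        by whether its depth range exceeds an assumed threshold `jump`.
        Returns a dict keyed by the block's top-left (row, col)."""
        h, w = depth.shape
        labels = {}
        for r in range(0, h, block):
            for c in range(0, w, block):
                tile = depth[r:r + block, c:c + block]
                rng = tile.max() - tile.min()
                labels[(r, c)] = 'discontinuous' if rng > jump else 'smooth'
        return labels
    ```

    Smooth blocks would then go to interpolation-based intra prediction, while discontinuous blocks get the line-segment/partition representation the abstract describes.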