Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 11176353
    Abstract: The disclosure relates to corresponding apparatus, computer program and method for receiving three-dimensional, 3D, map-data, in which a plurality of locations within the 3D-map-data are associated with respective 3D-data-capture-locations of a 3D-camera, and in which 3D-camera-timing-information is associated with each of the plurality of locations; receiving one or more two-dimensional, 2D, images from a 2D-camera, in which 2D-camera-timing-information is associated with each 2D-image, and in which each 2D-image is captured when a movement of the 3D-camera is less than a threshold level; identifying 3D-camera-timing-information associated with locations within the 3D-map-data that correspond to 3D-data-capture-locations with a movement level of the 3D-camera less than the threshold level; associating, in a combined dataset, each 2D-image with a corresponding location within the 3D-map-data by a data processing unit correlating the 2D-camera-timing-information with the identified 3D-camera-timing-information.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: November 16, 2021
    Assignee: Geoslam Limited
    Inventors: Neil Slatcher, Alex Bentley, Cheryl Smith
  • Patent number: 11158081
    Abstract: A positioning method is provided for an electrical device including a depth sensor and an inertial measurement unit (IMU). The positioning method includes: calculating an initial position and an initial direction of the electrical device according to signals of the IMU; obtaining an environment point cloud from a database, and obtaining a partial environment point cloud according to the initial position and the initial direction; obtaining a read point cloud by the depth sensor; and matching the read point cloud and the partial environment point cloud to calculate an updated position and an updated direction of the electrical device.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: October 26, 2021
    Assignee: ADAT Technology Co., Ltd.
    Inventors: Kai-Hung Su, Mu-Heng Lin
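The matching step in this abstract can be sketched as a nearest-neighbour refinement: crop the stored environment point cloud around the IMU-derived initial pose, then nudge the position estimate toward the best alignment with the depth sensor's read cloud. This is a minimal, translation-only sketch; the crop radius, forward-facing test, and single-step update are assumptions for illustration, not the patented method:

```python
import numpy as np

def crop_partial_cloud(env_cloud, position, direction, radius=5.0):
    """Keep environment points within `radius` of the initial position and
    roughly in front of the initial viewing direction (hypothetical criteria)."""
    offsets = env_cloud - position
    dists = np.linalg.norm(offsets, axis=1)
    facing = offsets @ direction > 0.0          # forward half-space only
    return env_cloud[(dists < radius) & facing]

def refine_position(read_cloud, partial_cloud, position):
    """One translation-only refinement step: move the position estimate by
    the mean offset from each read point to its nearest environment point
    (a crude stand-in for full point-cloud matching)."""
    diffs = [partial_cloud[np.argmin(np.linalg.norm(partial_cloud - p, axis=1))] - p
             for p in read_cloud]
    return position + np.mean(diffs, axis=0)
```

In practice the update would be iterated (as in ICP) and would also re-estimate the direction, but the structure (initial pose, crop, match, update) follows the abstract's steps.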
  • Patent number: 11100310
    Abstract: Disclosed in embodiments of the present disclosure are an object three-dimensional detection method and apparatus, an intelligent driving control method and apparatus, a medium, and a device. The object three-dimensional detection method comprises: obtaining two-dimensional coordinates of a key point of a target object in an image to be processed; constructing a pseudo three-dimensional detection body of the target object according to the two-dimensional coordinates of the key point; obtaining depth information of the key point; and determining a three-dimensional detection body of the target object according to the depth information of the key point and the pseudo three-dimensional detection body.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: August 24, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yingjie Cai, Xingyu Zeng, Junjie Yan, Xiaogang Wang
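The final step (combining key-point depth with the pseudo-3D body) reduces, per key point, to back-projection through a pinhole camera model. A hedged sketch, assuming known intrinsics `fx, fy, cx, cy` (names chosen here, not taken from the patent):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2D key point (u, v) with depth `depth` into camera-frame 3D
    coordinates via the pinhole model."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def lift_detection_body(corners_2d, depths, fx, fy, cx, cy):
    """Back-project each corner of the pseudo-3D detection body to obtain
    the 3D detection body (one corner per key-point depth)."""
    return np.array([backproject(u, v, z, fx, fy, cx, cy)
                     for (u, v), z in zip(corners_2d, depths)])
```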
  • Patent number: 11100707
    Abstract: A computer-implemented graphics processing method, which comprises providing an original set of vertices of a terrain mesh; producing a new set of vertices from the original set of vertices; and, for a given vertex in the original set of vertices: (i) obtaining texture coordinates for vertices in a subset of vertices in the new set of vertices that corresponds to the given vertex in the original set of vertices; and (ii) combining the obtained texture coordinates to obtain a texture coordinate for the given vertex in the original set of vertices. The combining may comprise determining a weighted sum of the obtained texture coordinates, and the weights may be the weights for which the given vertex in the original set of vertices is the centroid of a polygon formed by the corresponding vertices in the new set of vertices.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 24, 2021
    Assignee: SQUARE ENIX LIMITED
    Inventor: Peter Sikachev
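The centroid-weight combination described above can be sketched as a small least-squares solve: find weights that sum to one and reproduce the original vertex as a weighted sum of the corresponding new vertices, then apply those same weights to the new vertices' texture coordinates. The least-squares formulation is an assumption; the abstract only states the centroid property:

```python
import numpy as np

def centroid_weights(vertex, poly_verts):
    """Weights w (summing to 1) such that vertex == sum_i w_i * poly_verts[i],
    solved in a minimum-norm least-squares sense."""
    A = np.vstack([poly_verts.T, np.ones(len(poly_verts))])
    b = np.append(vertex, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def combine_uvs(vertex, poly_verts, poly_uvs):
    """Texture coordinate for the original vertex: the same weighted sum
    applied to the texture coordinates of the surrounding new vertices."""
    return centroid_weights(vertex, poly_verts) @ poly_uvs
```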
  • Patent number: 11046430
    Abstract: Systems and methods are provided for improving the flight safety of fixed- and rotary-wing unmanned aerial systems (UAS) operating in complex dynamic environments, including urban cityscapes. Sensors and computations are integrated to predict local winds and promote safe operations in dynamic urban regions where GNSS and other network communications may be unavailable. The system can be implemented onboard a UAS and does not require in-flight communication with external networks. Predictions of local winds (speed and direction) are created using inputs from sensors that scan the local environment. These predictions are then used by the UAS guidance, navigation, and control (GNC) system to determine safe trajectories for operations in urban environments.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: June 29, 2021
    Assignee: United States of America as Represented by the Administrator of NASA
    Inventors: John Earl Melton, Ben Edward Nikaido
  • Patent number: 11049219
    Abstract: Methods and apparatus for multi-encoder processing of high resolution content. In one embodiment, the method includes capturing high resolution imaging content; splitting up the captured high resolution imaging content into respective portions; feeding the split up portions to respective imaging encoders; packing encoded content from the respective imaging encoders into an A/V container; and storing and/or transmitting the A/V container. In another embodiment, the method includes retrieving and/or receiving an A/V container; splitting up the retrieved and/or received A/V container into respective portions; feeding the split up portions to respective imaging decoders; stitching the decoded imaging portions into a common imaging portion; and storing and/or displaying at least a portion of the common imaging portion.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 29, 2021
    Assignee: GoPro, Inc.
    Inventors: David Newman, Daryl Stimm, Adeel Abbas
  • Patent number: 11037281
    Abstract: Embodiments of this application disclose an image fusion method performed by a computing device. The method includes the following steps: obtaining source face image data of a current to-be-fused image and resource configuration information of a current to-be-fused resource; performing image recognition processing on the source face image data to obtain source face feature points, and generating a source face three-dimensional grid of the source face image data according to those feature points; performing grid fusion using a resource face three-dimensional grid and the source face three-dimensional grid to generate a target face three-dimensional grid; and performing face complexion fusion on the target face three-dimensional grid, using source complexion data of the source face image data and resource complexion data of the resource face image data, to generate fused target face image data.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: June 15, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Keyi Shen, Pei Cheng, Mengren Qian, Bin Fu
  • Patent number: 11015929
    Abstract: The present invention discloses a positioning method and apparatus. The method includes: acquiring a first image captured by an optical device, where the first image includes an observation object and a plurality of predetermined objects, and the predetermined objects are objects with known geographic coordinates; selecting a first predetermined object from the predetermined objects based on the first image; acquiring a second image, where the first predetermined object is located in a center of the second image; determining a first attitude angle of the optical device based on the first predetermined object in the second image and measurement data captured by an inertial navigation system; modifying the first attitude angle based on a positional relationship between the observation object and the first predetermined object in the second image, to obtain a second attitude angle; and calculating geographic coordinates of the observation object based on the second attitude angle.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: May 25, 2021
    Assignee: DONGGUAN FRONTIER TECHNOLOGY INSTITUTE
    Inventors: Ruopeng Liu, Lin Luan, Faguo Xu
  • Patent number: 11010639
    Abstract: An angularly-dependent reflectance of a surface of an object is measured. Images are collected by a sensor at different sensor geometries and different light-source geometries. A point cloud is generated. The point cloud includes a location of a point, spectral band intensity values for the point, an azimuth and an elevation of the sensor, and an azimuth and an elevation of a light source. Raw pixel intensities of the object and surroundings of the object are converted to a surface reflectance of the object using specular array calibration (SPARC) targets. A three-dimensional (3D) location of each point in the point cloud is projected back to each image using metadata from the plurality of images, and spectral band values are assigned to each value in the point cloud, thereby resulting in a multi-angle spectral reflectance data set.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: May 18, 2021
    Assignee: Raytheon Company
    Inventors: John J. Coogan, Stephen J. Schiller
  • Patent number: 10984221
    Abstract: An image recognition device includes: a luminance image generator and a distance image generator that generate a luminance image and a distance image, respectively, based on an image signal of an imaging target object output from a photoreceptor element; a target object recognition processor that extracts a target-object candidate from the luminance image using a machine learning database; and a three-dimensional object determination processor that uses the distance image to determine whether the extracted target-object candidate is a three-dimensional object. If it is determined that the target-object candidate is not a three-dimensional object, the target-object candidate extracted from the luminance image is prevented from being used, in the machine learning database, as image data for extracting a feature value of a target object.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: April 20, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Shigeru Saitou, Shinzo Koyama, Masato Takemoto, Motonori Ishii
  • Patent number: 10976812
    Abstract: Provided is an information processing device including an image processing unit that performs geometric correction on a target image instructed to be displayed in a display region that displays an image. The geometric correction is performed on the basis of direction information indicating a direction of a user viewing the display region with respect to the display region.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: April 13, 2021
    Assignee: SONY CORPORATION
    Inventors: Takuya Ikeda, Kentaro Ida, Yousuke Kawana, Osamu Shigeta, Seiji Suzuki
  • Patent number: 10977812
    Abstract: A method is described for adapting 3D image datasets so that they can be registered and combined with 2D images of the same subject, wherein deformation or movement of parts of the subject has occurred between obtaining the 3D image and the 2D image. 2D-3D registrations of the images with respect to multiple features visible in both images are used to provide point correspondences between the images in order to provide an interpolation function that can be used to determine the position of a feature visible in the first image but not the second image and thus mark the location of the feature on the second image. Also described is apparatus for carrying out this method.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 13, 2021
    Assignee: Cydar Limited
    Inventors: Tom Carrell, Graeme Penney, Andreas Varnavas
  • Patent number: 10933534
    Abstract: Included is a method for a mobile automated device to detect and avoid edges, including: providing one or more rangefinder sensors on the mobile automated device to calculate, continuously or periodically, distances from the one or more rangefinder sensors to a surface; monitoring, with a processor of the mobile automated device, the distances calculated by each of the one or more rangefinder sensors; and actuating, with the processor of the mobile automated device, the mobile automated device to execute one or more predetermined movement patterns upon the processor detecting a calculated distance greater than a predetermined amount, wherein the one or more movement patterns initiate movement of the mobile automated device away from the area where the increase was detected.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: March 2, 2021
    Assignee: AI Incorporated
    Inventors: Ali Ebrahimi Afrouzi, Soroush Mehrnia, Masih Ebrahimi Afrouzi
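The monitoring-and-actuation loop in this abstract is essentially a per-sensor threshold check: an edge (drop-off) shows up as a reading that exceeds the expected floor distance. A minimal sketch, where the baseline distances and the avoidance call are hypothetical:

```python
def edge_detected(distances, baselines, threshold=0.05):
    """True when any rangefinder reads farther than its expected floor
    distance by more than `threshold` metres (a likely drop-off)."""
    return any(d - b > threshold for d, b in zip(distances, baselines))

def control_step(robot, distances, baselines):
    """Actuate a predetermined avoidance pattern when an edge is detected
    (`robot` is a hypothetical interface, not from the patent)."""
    if edge_detected(distances, baselines):
        robot.execute_avoidance_pattern()  # e.g. back up and turn away
```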
  • Patent number: 10922787
    Abstract: An imaging apparatus is provided that starts connecting processing at a synthesizing position at an early stage. The imaging apparatus includes: a first imaging element that images a first imaging range; a second imaging element that images a second imaging range, one part of which overlaps with the first imaging range; and a synthesizing unit that synthesizes an image corresponding to an imaging range wider than the first imaging range or the second imaging range, based on pixel data groups output by the first imaging element and the second imaging element, wherein the first imaging element and the second imaging element output pixel data corresponding to a position at which the first imaging range and the second imaging range overlap each other to the synthesizing unit prior to other pixel data.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: February 16, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yasuaki Ise
  • Patent number: 10915760
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting humans in images using occupancy grid maps. The methods, systems, and apparatus include actions of obtaining an image of a scene without people, generating a reference occupancy grid from the image, generating pairs of training images with humans rendered and corresponding training occupancy grids based on the occupancy grid and the image, training a scene-specific human detector with the pairs of training images with humans rendered and corresponding training occupancy grids, generating a sample occupancy grid from a sample image using the scene-specific human detector, and augmenting the sample image using the sample occupancy grid.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: February 9, 2021
    Assignee: ObjectVideo Labs, LLC
    Inventor: SungChun Lee
  • Patent number: 10878630
    Abstract: A three-dimensional (3D) image display device includes a display device; a variable focus optical system configured to focus the 3D image formed by the display device on a reference plane, a processor configured to determine a representative depth value of the 3D image by selecting a depth position, from among a plurality of depth positions corresponding to the 3D image, as the representative depth value, and control the variable focus optical system to adjust the reference plane by adjusting a focal point of the variable focus optical system based on the representative depth value; and a transfer optical system configured to transfer the 3D image focused on the reference plane to a pupil of an observer.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: December 29, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Geeyoung Sung, Yuntae Kim, Changkun Lee, Hongseok Lee
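One plausible reading of "determine a representative depth value" is to pick the candidate depth position closest to where most of the scene content lies. The histogram-mode rule below is an assumption for illustration; the abstract does not fix the selection criterion:

```python
import numpy as np

def representative_depth(depth_map, candidates):
    """Select, from the candidate depth positions, the one nearest the mode
    of the depth distribution (a hypothetical selection rule)."""
    hist, edges = np.histogram(depth_map, bins=16)
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return min(candidates, key=lambda d: abs(d - mode))
```

The focal point of the variable focus optical system would then be driven to place the reference plane at the returned depth.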
  • Patent number: 10860166
    Abstract: Provided is an image processing method including: displaying an image including a plurality of objects; receiving a selection of an object from among the plurality of objects; receiving a depth adjustment input; changing a depth of the selected object based on the depth adjustment input; generating a depth adjusted image file of the image based on the changed depth; and displaying a depth adjusted image based on the generated depth adjusted image file.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: December 8, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hyun-jee Kim
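The per-object depth change is the heart of this method: apply the user's adjustment only to the selected object, leaving the rest of the image untouched. A minimal sketch (mask-based selection is an assumption; the abstract does not say how the object's pixels are delimited):

```python
import numpy as np

def adjust_object_depth(depth_map, object_mask, delta):
    """Shift the depth of the selected object by the user's depth-adjustment
    input `delta`; pixels outside the mask keep their original depth."""
    out = depth_map.copy()
    out[object_mask] += delta
    return out
```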
  • Patent number: 10846818
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; identifies 2D boundary information for the object; determines a speed and a heading for the object; and registers the 3D segment with the 2D boundary information by adjusting the relative positions of the 3D segment and the 2D boundary information based on the speed and heading of the object and matching, in 3D space, the 3D segment with projected 2D boundary information.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Yusuke Kanzawa, Michael James Delp
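The motion-compensation step can be sketched as shifting the 3D segment by the object's speed along its heading for the time offset between the 3D and 2D captures, then projecting into the image for matching. A sketch under assumed conventions (heading as a yaw angle in the ground plane, pinhole intrinsics `K`):

```python
import numpy as np

def time_shift_segment(points, speed, heading, dt):
    """Move segment points to where the object would be `dt` seconds later,
    travelling at `speed` along `heading` (yaw in radians, ground plane)."""
    direction = np.array([np.cos(heading), np.sin(heading), 0.0])
    return points + speed * dt * direction

def project_to_image(points, K):
    """Pinhole projection of camera-frame 3D points to pixel coordinates."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]
```

The registration itself would then match the projected, time-shifted segment against the 2D boundary information, which this sketch leaves out.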
  • Patent number: 10846817
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; classifies pixels in the 2D image data; determines a speed and a heading for the object; and registers the 3D segment with a portion of the classified pixels by either (1) shifting the 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time and projecting the time-shifted 3D segment onto 2D image space; or (2) projecting the 3D segment onto 2D image space and shifting the projected 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Yusuke Kanzawa, Michael James Delp
  • Patent number: 10841799
    Abstract: Methods and apparatuses for arranging sounding symbols are provided. An example apparatus comprises memory; and processing circuitry coupled to the memory. The processing circuitry is configured to encode a sounding signal. The sounding signal comprises a plurality of sounding symbols, and the repetition of sounding symbols to be transmitted in sequence is avoided.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: November 17, 2020
    Inventors: Assaf Gurevitz, Robert Stacey, Jonathan Segev, Qinghua Li, Danny Alexander, Shlomi Vituri, Feng Jiang
  • Patent number: 10834379
    Abstract: Widespread adoption of 3D videos and technologies is hindered by the lack of high-quality 3D content. One promising solution to address this problem is to use automated 2D-to-3D conversion. However, current conversion methods, while general, produce low-quality results with artefacts that are not acceptable to many viewers. Creating a database of 3D stereoscopic videos with accurate depth is, however, very difficult. Computer-generated content can be used to build a high-quality 3D video reference database for 2D-to-3D conversion. The method transfers depth information from frames in the 3D reference database to the target frame while respecting object boundaries. It computes depth maps from the depth gradients, and outputs a stereoscopic video.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: November 10, 2020
    Inventors: Mohamed M. Hefeeda, Kiana Ali Asghar Calagari, Mohamed Abdelaziz A Mohamed Elgharib, Wojciech Matusik, Piotr Didyk, Alexandre Kaspar
  • Patent number: 10825259
    Abstract: An apparatus to generate a model of a surface of an object includes a data set pre-aligner configured to receive multiple sets of surface data that correspond to respective portions of a surface of an object and that include three-dimensional (3D) points. The data set pre-aligner is also configured to perform a pre-alignment of overlapping sets to generate pre-aligned sets, including performing a rotation operation on a second set of the surface data, relative to a first set of the surface data that overlaps the second set, to apply a rotation amount that is selected from among multiple discrete rotation amounts and based on a similarity metric. The apparatus includes a data set aligner configured to perform an iterative alignment of the pre-aligned sets to generate aligned sets. The apparatus also includes a 3D model generator configured to combine the aligned sets to generate a 3D model of the object.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: November 3, 2020
    Assignee: THE BOEING COMPANY
    Inventors: Kyungnam Kim, Heiko Hoffmann
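The pre-alignment can be sketched as a brute-force search over a few discrete rotations, scoring each rotated candidate by how well it matches the overlapping set. Mean nearest-point distance stands in here for the patent's unspecified similarity metric:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z axis by `deg` degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def prealign(first, second, angles=(0, 90, 180, 270)):
    """Rotate `second` by each discrete angle and keep the rotation whose
    mean nearest-point distance to `first` is smallest."""
    def score(cloud):
        return np.mean([np.min(np.linalg.norm(first - p, axis=1)) for p in cloud])
    best = min(angles, key=lambda a: score(second @ rot_z(a).T))
    return second @ rot_z(best).T, best
```

An iterative aligner (e.g. ICP) would then refine the pre-aligned sets, matching the abstract's two-stage structure.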
  • Patent number: 10817125
    Abstract: Provided is an image processing method including: displaying an image including a plurality of objects; receiving a selection of an object from among the plurality of objects; receiving a depth adjustment input; changing a depth of the selected object based on the depth adjustment input; generating a depth adjusted image file of the image based on the changed depth; and displaying a depth adjusted image based on the generated depth adjusted image file.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: October 27, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hyun-jee Kim
  • Patent number: 10805860
    Abstract: Provided are a method by which a terminal performs an access barring check in a wireless communication system and a device for supporting the same. The method may include: a step for entering an RRC_INACTIVE state; a step for performing the access barring check on a cell; a step for checking that access to the cell is prevented as many times as the maximum number of access attempts; and a step for transitioning from the RRC_INACTIVE state to an RRC_IDLE state.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: October 13, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Youngdae Lee, Jaehyun Kim, Bokyung Byun
  • Patent number: 10798416
    Abstract: Disclosed is a 3D video motion estimating apparatus and method. The 3D video motion estimating apparatus may enable a motion vector of a color image and a motion vector of a depth image to refer to each other, thereby increasing the compression rate.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: October 6, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin Young Lee, Du-Sik Park, Jaejoon Lee, Ho Cheon Wey, Il Soon Lim, Seok Lee
  • Patent number: 10796141
    Abstract: Systems and methods are provided for capturing images of animals for the purpose of identifying the animal. A camera can be positioned to capture images of an animal at a feeding station. A member can be positioned on the opposite side of the feeding station from the camera to provide a common background for the images captured by the camera. When an image is captured by the camera, a determination is made as to whether an animal is present in the image. If an animal is determined to be present in the image, a segmentation algorithm can be used to remove (or make black) the background pixels from the image leaving only the pixels associated with the animal. The image with only the animal pixels can be provided to animal recognition software for identification of the animal. In some embodiments captured images can be used to create synthetic images for training without requiring segmentation for the identification process.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: October 6, 2020
    Assignee: SPECTERRAS SBF, LLC
    Inventors: James W. Shepherd, Jr., Wesley E. Snyder
  • Patent number: 10796434
    Abstract: A method for learning an automatic parking device of a vehicle for detecting an available parking area is provided. The method includes steps of: a learning device, (a) if a parking lot image of an area nearby the vehicle is acquired, (i) inputting the parking lot image into a segmentation network to output a convolution feature map via an encoder, output a deconvolution feature map by deconvoluting the convolution feature map via a decoder, and output segmentation information by masking the deconvolution feature map via a masking layer; (b) inputting the deconvolution feature map into a regressor to generate relative coordinates of vertices of a specific available parking region, and generate regression location information by regressing the relative coordinates; and (c) instructing a loss layer to calculate 1-st losses by referring to the regression location information and an ROI GT, and learning the regressor via backpropagation using the 1-st losses.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 6, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10769828
    Abstract: In an embodiment, an automated process for generating photo collages including an individual photo of each member of a group, team, etc. with head shots of some or all members is provided. Various members or photographers may take digital photos of each member, capturing a full or partial body photo. The process may use face detection techniques to identify the faces in the body photos, and to automatically crop the head shots from the full body photos. The head shots may be cropped in a consistent manner, leading to a visually pleasing set of head shots in the collage. The effort required of the individuals tasked with producing the collages may be substantially reduced, in some embodiments. Additionally, the quality of the resulting collages may be improved, in some embodiments.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: September 8, 2020
    Inventor: Nimai C. Malle
  • Patent number: 10739458
    Abstract: A method and system for scanning and measuring an environment is provided. The method includes providing a three-dimensional (3D) measurement device, the 3D measurement device being operable in a helical mode or a compound mode, wherein a plurality of light beams are emitted along a first path defined by a first axis and a second axis in the compound mode and along a second path defined by the first axis in the helical mode. A mobile platform holding the 3D measurement device is moved from a first position. A first group of 3D coordinates of the area is acquired by the 3D measurement device when the mobile platform is moving. A second group of 3D coordinates of the area is acquired with a second 3D measurement device that has six degrees of freedom (6DOF). The first group of 3D coordinates is registered based on the second group of 3D coordinates.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: August 11, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Oliver Zweigle, Bernd-Dietmar Becker, Reinhard Becker
  • Patent number: 10726570
    Abstract: Augmented reality devices and methods for computing a homography based on two images. One method may include receiving a first image based on a first camera pose and a second image based on a second camera pose, generating a first point cloud based on the first image and a second point cloud based on the second image, providing the first point cloud and the second point cloud to a neural network, and generating, by the neural network, the homography based on the first point cloud and the second point cloud. The neural network may be trained by generating a plurality of points, determining a 3D trajectory, sampling the 3D trajectory to obtain camera poses viewing the points, projecting the points onto 2D planes, comparing a generated homography using the projected points to the ground-truth homography and modifying the neural network based on the comparison.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: July 28, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
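For reference, the quantity this network regresses can also be computed classically from point correspondences with the Direct Linear Transform; the sketch below is that standard baseline, not the patent's learned estimator:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from >= 4 point
    correspondences via the Direct Linear Transform (SVD null space)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```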
  • Patent number: 10713805
    Abstract: A method for encoding a depth map image involves dividing the image into blocks. These blocks are then classified into smooth blocks without large depth discontinuities and discontinuous blocks with large depth discontinuities. In the discontinuous blocks, depth discontinuities are represented by line segments and partitions. Interpolation-based intra prediction is used to approximate and compress the depth values in the smooth blocks and partitions. Further compression can be achieved with depth-aware quantization, adaptive de-blocking filtering, scale-adaptive block size, and resolution decimation schemes.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: July 14, 2020
    Assignee: VERSITECH LIMITED
    Inventors: Shing Chow Chan, Jia-fei Wu
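The initial block classification can be sketched as a per-block depth-range test; the block size and discontinuity threshold below are assumptions, not values from the patent:

```python
import numpy as np

def classify_blocks(depth, block=8, jump=16):
    """Label each block 'smooth' or 'discontinuous' by whether its depth
    range (max - min) exceeds the discontinuity threshold `jump`."""
    h, w = depth.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = depth[y:y + block, x:x + block]
            labels[(y, x)] = ('discontinuous'
                              if tile.max() - tile.min() > jump else 'smooth')
    return labels
```

Smooth blocks would then go to interpolation-based intra prediction, while discontinuous blocks get the line-segment/partition representation described in the abstract.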
  • Patent number: 10685679
    Abstract: A computer-implemented system and method of determining a virtual camera path. The method comprises determining an action path in video data of a scene, wherein the action path includes at least two points, each of the two points defining a three-dimensional position and a time in the video data; and selecting a template for a virtual camera path, the template camera path including information defining a template camera path with respect to an associated template focus path. The method further comprises aligning the template focus path with the determined action path in the scene and transforming the template camera path based on the alignment to determine the virtual camera path.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: June 16, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Berty Jacques Alain Bhuruth
  • Patent number: 10614548
    Abstract: A supervisory computer vision (CV) system may include a secondary CV system running in parallel with a native CV system on a mobile device. The secondary CV system is configured to run less frequently than the native CV system. CV algorithms are then run on these less-frequent sample images, generating information for localizing the device to a reference point cloud (e.g., provided over a network) and for transforming between a local point cloud of the native CV system and the reference point cloud. AR content may then be consistently positioned relative to the convergent CV system's coordinate space and visualized on a display of the mobile device. Various related algorithms facilitate the efficient operation of this system.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: April 7, 2020
    Assignee: YouAR INC.
    Inventors: George Howard Alexander Blikas, Oliver Clayton Daniels
  • Patent number: 10580204
    Abstract: The present disclosure provides a method comprising: acquiring a plurality of images of a plurality of scenes in advance, and performing feature extraction on the plurality of images respectively, to obtain a corresponding plurality of feature point sets; performing pairwise feature matching on the plurality of images, generating a corresponding eigen matrix according to the pairwise feature matching, and performing noise processing on the eigen matrix; performing 3D reconstruction according to the feature matching and the noise-processed eigen matrix and based on a ray model, to generate a 3D feature point cloud and a reconstructed camera pose set; acquiring a query image, and performing feature extraction on the query image to obtain a corresponding 2D feature point set; and performing image positioning according to the 2D feature point set, the 3D feature point cloud and the reconstructed camera pose set and based on a positioning attitude image optimization framework.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: March 3, 2020
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Jie Zhou, Lei Deng, Yueqi Duan
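The 2D-to-3D matching step in the abstract above — relating a query image's 2D feature point set to the 3D feature point cloud — can be sketched as a brute-force nearest-neighbor descriptor search. This is a minimal illustration, not the patent's method: a real pipeline would add a ratio test, outlier rejection, and a pose solver on top of the matches.

```python
def match_features(query_desc, cloud_desc):
    """Match each 2D query descriptor to its nearest 3D point descriptor
    by squared Euclidean distance. Returns (query_idx, cloud_idx) pairs."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches = []
    for qi, q in enumerate(query_desc):
        ci = min(range(len(cloud_desc)), key=lambda i: d2(q, cloud_desc[i]))
        matches.append((qi, ci))
    return matches
```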
  • Patent number: 10491916
    Abstract: The present disclosure is directed to a system and method for exploiting camera and depth information associated with rendered video frames, such as those rendered by a server operating as part of a cloud gaming service, to more efficiently encode the rendered video frames for transmission over a network. The method and system of the present disclosure can be used in a server operating in a cloud gaming service to improve, for example, the amount of latency, downstream bandwidth, and/or computational processing power associated with playing a video game over its service. The method and system of the present disclosure can be further used in other applications where camera and depth information of a rendered or captured video frame is available.
    Type: Grant
    Filed: October 1, 2013
    Date of Patent: November 26, 2019
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Khaled Mammou, Ihab Amer, Gabor Sines, Lei Zhang, Michael Schmit, Daniel Wong
  • Patent number: 10409264
    Abstract: A factory server receives part requests from customer devices and controls one or more manufacturing tools, such as 3D printers, to fabricate the requested parts. The factory server implements several features to streamline the process of fabricating parts using the manufacturing tools. For instance, the factory server can facilitate the design of a part by extracting features from the part request and identifying model files having those features. The factory server can also select an orientation in which to fabricate the part and determine print settings to use when fabricating the part. In addition, the factory server can implement a process to fabricate a three-dimensional part with a two-dimensional image applied to one or more of its external surfaces. Furthermore, the factory server can also generate a layout of multiple part instances on a build plate of a 3D printer so that multiple part instances can be fabricated at once.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: September 10, 2019
    Assignee: Voodoo Manufacturing, Inc.
    Inventors: Jonathan Schwartz, Max Friefeld, Oliver Ortlieb
  • Patent number: 10404963
    Abstract: Described here are systems, devices, and methods for converting a two-dimensional video sequence into first and second video sequences for display at first and second display areas of a single display. In some embodiments, a two-dimensional video image sequence is received at a mobile device. The two-dimensional video image sequence may be split into first and second video image sequences such that a first video image sequence is output to the first display area and a second video image sequence different from the first video image sequence is output to the second display area. The first and second video image sequences may be created from the two-dimensional video image sequence.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: September 3, 2019
    Inventor: David Gerald Kenrick
  • Patent number: 10390035
    Abstract: A method is provided for coding a current image. The method includes determining, in a first image, different from the current image, a group of k′ pixels corresponding to a current group of k pixels (k′ ≥ k) to be coded of the current image; calculating a motion vector between each of the k′ pixels of the first image and a corresponding pixel of a second image different from the current image, on completion of which a field of k′ motion vectors is obtained; and predicting the pixels or the motion of the current group of k pixels of the current image on the basis of the field of k′ motion vectors which is obtained.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: August 20, 2019
    Assignee: ORANGE
    Inventors: Elie Gabriel Mora, Joel Jung, Beatrice Pesquet-Popescu, Marco Cagnazzo
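The final prediction step in the abstract above — deriving the motion of the current group of k pixels from the field of k′ motion vectors — can be sketched with one simple aggregation choice. Averaging is an illustrative assumption here, not the patent's specific predictor.

```python
def predict_motion(vector_field):
    """Predict the motion of the current pixel group as the
    component-wise average of the k' motion vectors computed
    between the first and second reference images."""
    n = len(vector_field)
    return (sum(v[0] for v in vector_field) / n,
            sum(v[1] for v in vector_field) / n)
```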
  • Patent number: 10380777
    Abstract: The disclosure proposes a method of texture synthesis and an apparatus using the same. In one of the exemplary embodiments, the step of generating the first single scale detail image would include, but is not limited to: performing a feature extraction of a first pixel block of an image frame to derive a first pixel feature, applying a first criteria to the first pixel feature to derive a positive result, performing a first detail alignment and a maximum extension of the positive result to derive an adjusted positive mapping result, applying a second criteria, which is opposite to the first criteria, to the first pixel feature to derive a negative result, performing a second detail alignment and a minimum extension of the negative result to derive an adjusted negative mapping result, and blending the adjusted positive mapping result and the adjusted negative mapping result to generate the first single scale detail image.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: August 13, 2019
    Assignee: Novatek Microelectronics Corp.
    Inventors: Xiao-Na Xie, Kai Kang, Jian-Hua Liang, Yuan-Jia Du
  • Patent number: 10373380
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
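The reconstruction step in this abstract — projecting and accumulating depth pixels into a global coordinate system using each frame's camera pose — can be sketched as a plain rigid transform. The pose representation (rotation matrix R plus translation t) is a standard assumption, not a detail taken from the patent.

```python
def project_to_global(depth_points, pose):
    """Transform per-frame 3D points into a global coordinate system
    using the frame's camera pose (3x3 rotation R, translation t).
    Accumulating the results across frames builds the reconstruction."""
    R, t = pose
    out = []
    for p in depth_points:
        g = tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                  for i in range(3))
        out.append(g)
    return out
```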
  • Patent number: 10353433
    Abstract: An image processing method and apparatus for a curved display device are provided. The image processing method includes acquiring physical curvature information related to the display device, determining a center region of an input image based on the physical curvature information, generating a pixel-by-pixel spatial indexed gain based on the determined center region of the input image, and correcting a pixel value of the input image by using the pixel-by-pixel spatial indexed gain.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: July 16, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seung-ran Park, Youn-jin Kim, Seong-wook Han
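The pixel-by-pixel spatial indexed gain described in this abstract can be sketched for a single row. The linear ramp, the 1.2 maximum gain, and the clamping are all illustrative assumptions; the patent does not specify this particular model.

```python
def spatial_gain_row(width, center_lo, center_hi, max_gain=1.2):
    """Per-pixel gain for one row of a curved display: 1.0 inside the
    determined center region, rising linearly toward the panel edges."""
    gains = []
    for x in range(width):
        if center_lo <= x <= center_hi:
            d = 0.0
        elif x < center_lo:
            d = (center_lo - x) / center_lo
        else:
            d = (x - center_hi) / (width - 1 - center_hi)
        gains.append(1.0 + (max_gain - 1.0) * d)
    return gains

def correct_pixel(value, gain):
    # Apply the spatial indexed gain, clamped to the 8-bit range.
    return min(255, int(round(value * gain)))
```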
  • Patent number: 10356381
    Abstract: An image output apparatus includes an image signal processing unit configured to perform image processing on acquired image data and a depth information generation unit configured to generate information regarding a depth direction related to an image. A three-dimensional image data generation unit generates three-dimensional image data based on depth information and the image data subjected to the image processing. A system control unit associates the image data subjected to the image processing by the image signal processing unit with the depth information generated by the depth information generation unit and performs control such that the three-dimensional image data generation unit is caused to generate the three-dimensional image data using the depth information changed according to the image data before and after the image processing.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: July 16, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazuyoshi Kiyosawa
  • Patent number: 10269147
    Abstract: A system provides camera position and point cloud estimation for 3D reconstruction. The system receives images and attempts existing structure integration to integrate the images into an existing reconstruction under a sequential image reception assumption. If existing structure integration fails, the system attempts dictionary overlap detection by accessing a dictionary database and searching to find a closest matching frame in the existing reconstruction. If overlaps are found, the system matches the images with the overlaps to determine a highest probability frame from the overlaps, and attempts existing structure integration again. If overlaps are not found or existing structure integration fails again, the system attempts bootstrapping based on the images. If any of existing structure integration, dictionary overlap detection, or bootstrapping succeeds, and if multiple disparate tracks have come to exist, the system attempts reconstructed track merging.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 23, 2019
    Assignee: Lockheed Martin Corporation
    Inventors: Michael Jones, Adam James Dickin
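The dictionary overlap detection in this abstract — searching a database to find the closest matching frame in the existing reconstruction — can be sketched as a bag-of-visual-words lookup. The histogram-intersection score is an illustrative choice, not necessarily the patent's similarity measure.

```python
def closest_frame(query_hist, frame_hists):
    """Find the frame whose visual-word histogram best overlaps the
    query image's histogram (histogram intersection score).
    Returns (frame_index, score); frame_index is None if nothing overlaps."""
    best_idx, best_score = None, 0
    for idx, hist in enumerate(frame_hists):
        score = sum(min(q, h) for q, h in zip(query_hist, hist))
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score
```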
  • Patent number: 10271034
    Abstract: In a method of coding video data, a first depth value of a depth look up table (DLT) is determined, where the first depth value is associated with a first pixel of the video data, and a second depth value of the DLT is determined, where the second depth value is associated with a second pixel of the video data. Coding of the second depth value relative to the first depth value is performed during coding of the DLT.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: April 23, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Li Zhang, Ying Chen, Marta Karczewicz
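Coding one DLT depth value relative to another, as described in this abstract, can be sketched as simple differential coding over a sorted table. Sorting and deduplicating the table first keeps every delta small and positive; this is a minimal illustration of the idea, not the codec's actual syntax.

```python
def encode_dlt(depth_values):
    """Code each depth look up table (DLT) value relative to its
    predecessor: the first value is sent as-is, later values as deltas."""
    table = sorted(set(depth_values))
    return [table[0]] + [b - a for a, b in zip(table, table[1:])]

def decode_dlt(coded):
    """Rebuild the DLT by accumulating the coded deltas."""
    table, cur = [], 0
    for delta in coded:
        cur += delta
        table.append(cur)
    return table
```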
  • Patent number: 10269148
    Abstract: A system provides image undistortion in 3D reconstruction. The system receives an image produced by a sensor, and determines whether correction values are cached for the sensor, where each correction value is configured to place a corresponding pixel into a corrected location. When there are no cached correction values, the system calculates correction values for pixels in the image, generates a correction grid for the image including vertices corresponding to texture coordinates from the image, where each vertex in the correction grid includes a corresponding correction value, partitions the correction grid into partitioned grids, and caches the partitioned grids. The system then renders the image using the partitioned grids.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 23, 2019
    Assignee: Lockheed Martin Corporation
    Inventor: Michael Jones
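The per-sensor caching of correction values described in this abstract can be sketched with a one-parameter radial distortion model. The model, the parameter `k1`, and the flat-grid layout are illustrative assumptions, not details from the patent (which caches partitioned correction grids for rendering).

```python
_correction_cache = {}

def correction_values(sensor_id, width, height, k1=-0.1):
    """Per-pixel corrected locations for a simple radial distortion
    model, cached per sensor so the grid is computed only once."""
    key = (sensor_id, width, height)
    if key not in _correction_cache:
        cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
        grid = []
        for y in range(height):
            for x in range(width):
                nx, ny = (x - cx) / width, (y - cy) / height
                r2 = nx * nx + ny * ny
                # The correction pulls each pixel along its radius.
                grid.append((x + k1 * r2 * nx * width,
                             y + k1 * r2 * ny * height))
        _correction_cache[key] = grid
    return _correction_cache[key]
```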
  • Patent number: 10262414
    Abstract: Systems, methods, and computer program products for classifying a brain are disclosed. An embodiment method includes processing image data to generate segmented image data of a brain cortex. The method further includes generating a statistical analysis of the brain based on a three dimensional (3D) model of the brain cortex generated from the segmented image data. The method further includes using the statistical analysis to classify the brain cortex and to identify the brain as being associated with a particular neurological condition. According to a further embodiment, generating the 3D model of the brain further includes registering a 3D volume associated with the model with a corresponding reference volume and generating a 3D mesh associated with the registered 3D volume. The method further includes generating the statistical analysis by analyzing individual mesh nodes of the registered 3D mesh based on a spherical harmonic shape analysis of the 3D model.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: April 16, 2019
    Assignee: University of Louisville Research Foundation, Inc.
    Inventors: Matthew J. Nitzken, Ayman S. El-Baz, Manuel F. Casanova
  • Patent number: 10235795
    Abstract: A method of compressing a texture image for use in generating a 360 degree panoramic video is provided. The method includes the steps of: receiving an original texture image for a sphere model, wherein the original texture image is an image with a rectangular shape and includes a plurality of pixel lines and each of the pixel lines has a corresponding spherical position on the sphere model; determining a compression ratio of each of the pixel lines according to the corresponding spherical position of each of the pixel lines; and compressing the pixel lines with the compression ratios corresponding thereto to generate a compressed texture image with a non-rectangular shape, wherein the compressed texture image is further mapped to the sphere model to generate the 360 degree panoramic video.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 19, 2019
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventors: Yiting Yi, Gong Chen, Rong Xie
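A natural per-line compression ratio for the scheme in this abstract is the cosine of each line's latitude on the sphere, since equirectangular lines near the poles are heavily oversampled. The cosine choice is an assumption for illustration; the patent only requires that the ratio depend on the line's spherical position.

```python
import math

def line_compression_ratios(height):
    """Per-line compression ratio for an equirectangular texture,
    proportional to cos(latitude): lines near the poles compress most."""
    ratios = []
    for row in range(height):
        latitude = ((row + 0.5) / height - 0.5) * math.pi
        ratios.append(math.cos(latitude))
    return ratios

def compressed_line_width(full_width, ratio):
    # Each pixel line keeps at least one pixel after compression.
    return max(1, round(full_width * ratio))
```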
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Patent number: 10210668
    Abstract: Techniques are described for generating a three dimensional (3D) object from complete or partial 3D data. Image data defining or partially defining a 3D object may be obtained. Using that data, a common plane facing surface of the 3D object may be defined that is substantially parallel to a common plane (e.g., ground plane). One or more edges of the common plane facing surface may be determined, and extended to the common plane. A bottom surface, which is bound by the one or more extended edges and is parallel with the common plane, may be generated based on the common-plane facing surface. In some aspects, defining the common plane facing surface may include segmenting the image data into a plurality of polygons, orienting at least one of the polygons to face the common plane, and discarding occluding polygons.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: February 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kristofer N. Iverson, Emmett Lalish, Gheorghe Marius Gheorghescu, Jan Jakubovic, Martin Kusnier, Vladimir Sisolak, Tibor Szaszi
  • Patent number: 10198858
    Abstract: A method based on Structure from Motion for processing a plurality of sparse images acquired by one or more acquisition devices to generate a sparse 3D point cloud and a plurality of internal and external parameters of the acquisition devices includes the steps of: collecting the images; extracting keypoints therefrom and generating keypoint descriptors; organizing the images in a proximity graph; pairwise image matching and generating keypoint-connecting tracks according to maximum proximity between keypoints; performing an autocalibration between image clusters to extract internal and external parameters of the acquisition devices, wherein calibration groups are defined that contain a plurality of image clusters and wherein a clustering algorithm iteratively merges the clusters in a model expressed in a common local reference system, starting from clusters belonging to the same calibration group; and performing a Euclidean reconstruction of the object as a sparse 3D point cloud based on the extracted parameters.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: February 5, 2019
    Inventors: Yash Singh, Roberto Toldo, Luca Magri, Simone Fantoni, Andrea Fusiello