Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 10805860
    Abstract: Provided are a method by which a terminal performs an access barring check in a wireless communication system, and a device supporting the same. The method may include: entering an RRC_INACTIVE state; performing the access barring check on a cell; determining that access to the cell has been barred as many times as the maximum number of access attempts; and transitioning from the RRC_INACTIVE state to an RRC_IDLE state.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: October 13, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Youngdae Lee, Jaehyun Kim, Bokyung Byun
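The barring-and-fallback sequence in the abstract above can be sketched as a small state machine. The state and function names below are invented for illustration; they are not taken from the patent or from the 3GPP specifications:

```python
from enum import Enum

class RRCState(Enum):
    INACTIVE = "RRC_INACTIVE"
    IDLE = "RRC_IDLE"

def run_barring_checks(state, is_barred, max_attempts):
    """Repeat the access barring check on a cell; once access has been
    barred max_attempts times, fall back from RRC_INACTIVE to RRC_IDLE."""
    barred_count = 0
    while barred_count < max_attempts:
        if not is_barred():
            return state, barred_count  # access allowed; stay in current state
        barred_count += 1
    return RRCState.IDLE, barred_count  # give up and transition to idle

# A cell that always bars access exhausts the attempt budget.
state, n = run_barring_checks(RRCState.INACTIVE, lambda: True, max_attempts=3)
```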
  • Patent number: 10796434
    Abstract: A method for training an automatic parking device of a vehicle to detect an available parking area is provided. The method includes steps of: a learning device, (a) if a parking lot image of an area nearby the vehicle is acquired, inputting the parking lot image into a segmentation network to output a convolution feature map via an encoder, output a deconvolution feature map by deconvoluting the convolution feature map via a decoder, and output segmentation information by masking the deconvolution feature map via a masking layer; (b) inputting the deconvolution feature map into a regressor to generate relative coordinates of vertices of a specific available parking region, and generate regression location information by regressing the relative coordinates; and (c) instructing a loss layer to calculate 1-st losses by referring to the regression location information and an ROI GT, and learning the regressor via backpropagation using the 1-st losses.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 6, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10796141
    Abstract: Systems and methods are provided for capturing images of animals for the purpose of identifying the animal. A camera can be positioned to capture images of an animal at a feeding station. A member can be positioned on the opposite side of the feeding station from the camera to provide a common background for the images captured by the camera. When an image is captured by the camera, a determination is made as to whether an animal is present in the image. If an animal is determined to be present in the image, a segmentation algorithm can be used to remove (or make black) the background pixels from the image leaving only the pixels associated with the animal. The image with only the animal pixels can be provided to animal recognition software for identification of the animal. In some embodiments captured images can be used to create synthetic images for training without requiring segmentation for the identification process.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: October 6, 2020
    Assignee: SPECTERRAS SBF, LLC
    Inventors: James W. Shepherd, Jr., Wesley E. Snyder
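The background-removal step described above amounts to masking: keep only the pixels the segmentation algorithm labeled as animal and make everything else black. A minimal NumPy sketch, under the assumption that a boolean segmentation mask is already available (the mask here is hard-coded for illustration):

```python
import numpy as np

def isolate_animal(image, mask):
    """Zero out (make black) every background pixel, keeping only the
    pixels flagged as animal (mask == True) by the segmentation step."""
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

img = np.full((4, 4, 3), 200, dtype=np.uint8)   # uniform test image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                            # pretend the animal fills the center
fg = isolate_animal(img, mask)
```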
  • Patent number: 10798416
    Abstract: Disclosed is a 3D video motion estimating apparatus and method. The 3D video motion estimating apparatus may enable a motion vector of a color image and a motion vector of a depth image to refer to each other, thereby increasing the compression rate.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: October 6, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin Young Lee, Du-Sik Park, Jaejoon Lee, Ho Cheon Wey, Il Soon Lim, Seok Lee
  • Patent number: 10769828
    Abstract: In an embodiment, an automated process for generating photo collages including an individual photo of each member of a group, team, etc. with head shots of some or all members is provided. Various members or photographers may take digital photos of each member, capturing a full or partial body photo. The process may use face detection techniques to identify the faces in the body photos, and to automatically crop the head shots from the full body photos. The head shots may be cropped in a consistent manner, leading to a visually pleasing set of head shots in the collage. The effort required of the individuals tasked with producing the collages may be substantially reduced, in some embodiments. Additionally, the quality of the resulting collages may be improved, in some embodiments.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: September 8, 2020
    Inventor: Nimai C. Malle
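Cropping head shots "in a consistent manner" can be approximated by expanding every detected face box by the same relative margin on each side, clamped to the image bounds. The margin value and function name below are illustrative assumptions, not the patented procedure:

```python
def crop_headshot(image_h, image_w, face_box, margin=0.5):
    """Given a detected face box (x, y, w, h), return a head-shot crop box
    expanded by the same relative margin on every side and clamped to the
    image, so that every collage member is framed alike."""
    x, y, w, h = face_box
    dx, dy = int(w * margin), int(h * margin)
    left = max(0, x - dx)
    top = max(0, y - dy)
    right = min(image_w, x + w + dx)
    bottom = min(image_h, y + h + dy)
    return left, top, right, bottom

# A 100x100 face box in a 400x600 image, expanded by 50% per side.
box = crop_headshot(600, 400, (150, 100, 100, 100))
```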
  • Patent number: 10739458
    Abstract: A method and system for scanning and measuring an environment is provided. The method includes providing a three-dimensional (3D) measurement device operable in a helical mode or a compound mode, wherein a plurality of light beams are emitted along a first path defined by a first axis and a second axis in the compound mode, and along a second path defined by the first axis in the helical mode. A mobile platform holding the 3D measurement device is moved from a first position. A first group of 3D coordinates of the area is acquired by the 3D measurement device while the mobile platform is moving. A second group of 3D coordinates of the area is acquired with a second 3D measurement device that has six degrees of freedom (6DOF). The first group of 3D coordinates is registered based on the second group of 3D coordinates.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: August 11, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Oliver Zweigle, Bernd-Dietmar Becker, Reinhard Becker
  • Patent number: 10726570
    Abstract: Augmented reality devices and methods for computing a homography based on two images. One method may include receiving a first image based on a first camera pose and a second image based on a second camera pose, generating a first point cloud based on the first image and a second point cloud based on the second image, providing the first point cloud and the second point cloud to a neural network, and generating, by the neural network, the homography based on the first point cloud and the second point cloud. The neural network may be trained by generating a plurality of points, determining a 3D trajectory, sampling the 3D trajectory to obtain camera poses viewing the points, projecting the points onto 2D planes, comparing a homography generated using the projected points to a ground-truth homography, and modifying the neural network based on the comparison.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: July 28, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Jan Malisiewicz, Andrew Rabinovich
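For context, the homography the network outputs is a 3x3 matrix that maps 2-D points expressed in homogeneous coordinates. A minimal sketch of applying such a matrix (this illustrates the quantity being estimated, not the patented estimation method):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography via homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Euclidean

# A pure-translation homography shifts every point by (2, 3).
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [1.0, 1.0]])
out = apply_homography(H, pts)
```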
  • Patent number: 10713805
    Abstract: A method for encoding a depth map image involves dividing the image into blocks. These blocks are then classified into smooth blocks, without large depth discontinuities, and discontinuous blocks, with large depth discontinuities. In the discontinuous blocks, depth discontinuities are represented by line segments and partitions. Interpolation-based intra prediction is used to approximate and compress the depth values in the smooth blocks and partitions. Further compression can be achieved with depth-aware quantization, adaptive de-blocking filtering, scale-adaptive block sizes, and resolution decimation schemes.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: July 14, 2020
    Assignee: VERSITECH LIMITED
    Inventors: Shing Chow Chan, Jia-fei Wu
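The smooth/discontinuous block classification can be illustrated by thresholding the depth range within each block. The block size and threshold below are invented for the example and are not taken from the patent:

```python
import numpy as np

def classify_blocks(depth, block=4, threshold=10):
    """Split a depth map into block x block tiles and label each tile
    'smooth' if its depth range is small, else 'discontinuous'."""
    h, w = depth.shape
    labels = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = depth[r:r + block, c:c + block]
            rng = int(tile.max()) - int(tile.min())
            labels[(r, c)] = "smooth" if rng <= threshold else "discontinuous"
    return labels

d = np.zeros((8, 8), dtype=np.uint8)
d[:, 2:] = 100                      # a vertical depth discontinuity at column 2
labels = classify_blocks(d)
```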
  • Patent number: 10685679
    Abstract: A computer-implemented system and method of determining a virtual camera path. The method comprises determining an action path in video data of a scene, wherein the action path includes at least two points, each of the two points defining a three-dimensional position and a time in the video data; and selecting a template for a virtual camera path, the template including information defining a template camera path with respect to an associated template focus path. The method further comprises aligning the template focus path with the determined action path in the scene and transforming the template camera path based on the alignment to determine the virtual camera path.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: June 16, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Berty Jacques Alain Bhuruth
  • Patent number: 10614548
    Abstract: A supervisory computer vision (CV) system may include a secondary CV system running in parallel with a native CV system on a mobile device. The secondary CV system is configured to run less frequently than the native CV system. CV algorithms are then run on these less-frequent sample images, generating information for localizing the device to a reference point cloud (e.g., provided over a network) and for transforming between a local point cloud of the native CV system and the reference point cloud. AR content may then be consistently positioned relative to the convergent CV system's coordinate space and visualized on a display of the mobile device. Various related algorithms facilitate the efficient operation of this system.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: April 7, 2020
    Assignee: YouAR INC.
    Inventors: George Howard Alexander Blikas, Oliver Clayton Daniels
  • Patent number: 10580204
    Abstract: The present disclosure provides a method comprising: acquiring a plurality of images of a plurality of scenes in advance, and performing feature extraction on the plurality of images respectively, to obtain a corresponding plurality of feature point sets; performing pairwise feature matching on the plurality of images, generating a corresponding eigen matrix according to the pairwise feature matching, and performing noise processing on the eigen matrix; performing 3D reconstruction according to the feature matching and the noise-processed eigen matrix and based on a ray model, to generate a 3D feature point cloud and a reconstructed camera pose set; acquiring a query image, and performing feature extraction on the query image to obtain a corresponding 2D feature point set; and performing image positioning according to the 2D feature point set, the 3D feature point cloud and the reconstructed camera pose set and based on a positioning attitude image optimization framework.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: March 3, 2020
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Jie Zhou, Lei Deng, Yueqi Duan
  • Patent number: 10491916
    Abstract: The present disclosure is directed to a system and method for exploiting camera and depth information associated with rendered video frames, such as those rendered by a server operating as part of a cloud gaming service, to more efficiently encode the rendered video frames for transmission over a network. The method and system of the present disclosure can be used in a server operating in a cloud gaming service to improve, for example, the amount of latency, downstream bandwidth, and/or computational processing power associated with playing a video game over its service. The method and system of the present disclosure can be further used in other applications where camera and depth information of a rendered or captured video frame is available.
    Type: Grant
    Filed: October 1, 2013
    Date of Patent: November 26, 2019
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Khaled Mammou, Ihab Amer, Gabor Sines, Lei Zhang, Michael Schmit, Daniel Wong
  • Patent number: 10409264
    Abstract: A factory server receives part requests from customer devices and controls one or more manufacturing tools, such as 3D printers, to fabricate the requested parts. The factory server implements several features to streamline the process of fabricating parts using the manufacturing tools. For instance, the factory server can facilitate the design of a part by extracting features from the part request and identifying model files having those features. The factory server can also select an orientation in which to fabricate the part and determine print settings to use when fabricating the part. In addition, the factory server can implement a process to fabricate a three-dimensional part with a two-dimensional image applied to one or more of its external surfaces. Furthermore, the factory server can also generate a layout of multiple part instances on a build plate of a 3D printer so that multiple part instances can be fabricated at once.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: September 10, 2019
    Assignee: Voodoo Manufacturing, Inc.
    Inventors: Jonathan Schwartz, Max Friefeld, Oliver Ortlieb
  • Patent number: 10404963
    Abstract: Described here are systems, devices, and methods for converting a two-dimensional video sequence into first and second video sequences for display at first and second display areas of a single display. In some embodiments, a two-dimensional video image sequence is received at a mobile device. The two-dimensional video image sequence may be split into first and second video image sequences such that a first video image sequence is output to the first display area and a second video image sequence different from the first video image sequence is output to the second display area. The first and second video image sequences may be created from the two-dimensional video image sequence.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: September 3, 2019
    Inventor: David Gerald Kenrick
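A toy version of splitting one 2-D sequence into two per-display-area sequences; a simple top/bottom split is used here for illustration, and the patented splitting may differ:

```python
def split_sequence(frames):
    """Split each 2-D frame (a list of pixel rows) into top and bottom
    halves, producing two different sequences for the two display areas."""
    first, second = [], []
    for frame in frames:
        mid = len(frame) // 2
        first.append(frame[:mid])
        second.append(frame[mid:])
    return first, second

frames = [[[1, 1], [2, 2], [3, 3], [4, 4]]]  # one 4-row frame
top, bottom = split_sequence(frames)
```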
  • Patent number: 10390035
    Abstract: A method is provided for coding a current image. The method includes determining, in a first image different from the current image, a group of k′ pixels corresponding to a current group of k pixels (k′ ≥ k) to be coded in the current image; calculating a motion vector between each of the k′ pixels of the first image and a corresponding pixel of a second image different from the current image, on completion of which a field of k′ motion vectors is obtained; and predicting the pixels, or the motion, of the current group of k pixels of the current image on the basis of the field of k′ motion vectors so obtained.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: August 20, 2019
    Assignee: ORANGE
    Inventors: Elie Gabriel Mora, Joel Jung, Beatrice Pesquet-Popescu, Marco Cagnazzo
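Once the field of k′ motion vectors is available, one simple way to predict the current group's motion is a component-wise average of the field. The averaging choice is an illustrative assumption, not necessarily the predictor claimed by the patent:

```python
def predict_motion(vector_field):
    """Predict the motion of the current group of pixels as the
    component-wise average of the k' motion vectors computed between
    the two reference images."""
    n = len(vector_field)
    mx = sum(v[0] for v in vector_field) / n
    my = sum(v[1] for v in vector_field) / n
    return (mx, my)

field = [(2, 0), (4, 0), (3, 3)]   # a field of k' = 3 motion vectors
pred = predict_motion(field)
```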
  • Patent number: 10380777
    Abstract: The disclosure proposes a method of texture synthesis and an apparatus using the same. In one of the exemplary embodiments, the step of generating the first single scale detail image would include, but is not limited to: performing a feature extraction of a first pixel block of an image frame to derive a first pixel feature, applying a first criteria to the first pixel feature to derive a positive result, performing a first detail alignment and a maximum extension of the positive result to derive an adjusted positive mapping result, applying a second criteria, which is opposite to the first criteria, to the first pixel feature to derive a negative result, performing a second detail alignment and a minimum extension of the negative result to derive an adjusted negative mapping result, and blending the adjusted positive mapping result and the adjusted negative mapping result to generate the first single scale detail image.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: August 13, 2019
    Assignee: Novatek Microelectronics Corp.
    Inventors: Xiao-Na Xie, Kai Kang, Jian-Hua Liang, Yuan-Jia Du
  • Patent number: 10373380
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Patent number: 10353433
    Abstract: An image processing method and apparatus for a curved display device are provided. The image processing method includes acquiring physical curvature information related to the display device, determining a center region of an input image based on the physical curvature information, generating a pixel-by-pixel spatial indexed gain based on the determined center region of the input image, and correcting a pixel value of the input image by using the pixel-by-pixel spatial indexed gain.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: July 16, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seung-ran Park, Youn-jin Kim, Seong-wook Han
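A pixel-by-pixel spatial indexed gain can be illustrated as a gain that grows with distance from the determined center of the input image. The linear profile and the numbers below are invented for the sketch and are not from the patent:

```python
def spatial_gain_map(width, center_x, max_gain=0.2):
    """Per-column correction gain indexed by normalized distance from the
    center determined from the panel's curvature: pixels farther from the
    center receive a larger gain."""
    half = max(center_x, width - 1 - center_x)
    return [1.0 + max_gain * abs(x - center_x) / half for x in range(width)]

gains = spatial_gain_map(5, center_x=2)   # symmetric about column 2
```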
  • Patent number: 10356381
    Abstract: An image output apparatus includes an image signal processing unit configured to perform image processing on acquired image data and a depth information generation unit configured to generate information regarding a depth direction related to an image. A three-dimensional image data generation unit generates three-dimensional image data based on depth information and the image data subjected to the image processing. A system control unit associates the image data subjected to the image processing by the image signal processing unit with the depth information generated by the depth information generation unit and performs control such that the three-dimensional image data generation unit is caused to generate the three-dimensional image data using the depth information changed according to the image data before and after the image processing.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: July 16, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazuyoshi Kiyosawa
  • Patent number: 10271034
    Abstract: In a method of coding video data, a first depth value of a depth look up table (DLT) is determined, where the first depth value is associated with a first pixel of the video data, and a second depth value of the DLT is determined, where the second depth value is associated with a second pixel of the video data. Coding of the second depth value relative to the first depth value is performed during coding of the DLT.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: April 23, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Li Zhang, Ying Chen, Marta Karczewicz
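Coding one DLT depth value relative to another is ordinary delta coding; a minimal sketch of the idea (the patent's actual entropy-coding details are not reproduced here):

```python
def delta_encode(dlt):
    """Code each depth look-up-table entry relative to the previous one:
    the first value is sent as-is, later values as differences."""
    return [dlt[0]] + [b - a for a, b in zip(dlt, dlt[1:])]

def delta_decode(coded):
    """Invert delta_encode by accumulating the differences."""
    out = [coded[0]]
    for d in coded[1:]:
        out.append(out[-1] + d)
    return out

dlt = [10, 12, 20, 50]      # sorted depth values actually used in the video
coded = delta_encode(dlt)   # small residuals are cheaper to entropy-code
```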
  • Patent number: 10269147
    Abstract: A system provides camera position and point cloud estimation 3D reconstruction. The system receives images and attempts existing structure integration to integrate the images into an existing reconstruction under a sequential image reception assumption. If existing structure integration fails, the system attempts dictionary overlap detection by accessing a dictionary database and searching to find a closest matching frame in the existing reconstruction. If overlaps are found, the system matches the images with the overlaps to determine a highest probability frame from the overlaps, and attempts existing structure integration again. If overlaps are not found or existing structure integration fails again, the system attempts bootstrapping based on the images. If any of existing structure integration, dictionary overlap detection, or bootstrapping succeeds, and if multiple disparate tracks have come to exist, the system attempts reconstructed track merging.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 23, 2019
    Assignee: Lockheed Martin Corporation
    Inventors: Michael Jones, Adam James Dickin
  • Patent number: 10269148
    Abstract: A system provides image undistortion in 3D reconstruction. The system receives an image produced by a sensor, and determines whether correction values are cached for the sensor, where each correction value is configured to place a corresponding pixel into a corrected location. When there are no cached correction values, the system calculates correction values for pixels in the image, generates a correction grid for the image including vertices corresponding to texture coordinates from the image, where each vertex in the correction grid includes a corresponding correction value, partitions the correction grid into partitioned grids, and caches the partitioned grids. The system then renders the image using the partitioned grids.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 23, 2019
    Assignee: Lockheed Martin Corporation
    Inventor: Michael Jones
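The compute-once-then-cache behavior for per-sensor correction values can be sketched with a simple keyed cache; the names below are illustrative, not from the patent:

```python
_correction_cache = {}

def get_correction_grid(sensor_id, compute):
    """Return cached correction values for a sensor, computing and caching
    them only on first use; later images from the same sensor reuse them."""
    if sensor_id not in _correction_cache:
        _correction_cache[sensor_id] = compute()
    return _correction_cache[sensor_id]

calls = []
grid1 = get_correction_grid("cam0", lambda: calls.append(1) or [[0.0]])
grid2 = get_correction_grid("cam0", lambda: calls.append(1) or [[0.0]])
```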
  • Patent number: 10262414
    Abstract: Systems, methods, and computer program products for classifying a brain are disclosed. An embodiment method includes processing image data to generate segmented image data of a brain cortex. The method further includes generating a statistical analysis of the brain based on a three dimensional (3D) model of the brain cortex generated from the segmented image data. The method further includes using the statistical analysis to classify the brain cortex and to identify the brain as being associated with a particular neurological condition. According to a further embodiment, generating the 3D model of the brain further includes registering a 3D volume associated with the model with a corresponding reference volume and generating a 3D mesh associated with the registered 3D volume. The method further includes generating the statistical analysis by analyzing individual mesh nodes of the registered 3D mesh based on a spherical harmonic shape analysis of the 3D model.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: April 16, 2019
    Assignee: University of Louisville Research Foundation, Inc.
    Inventors: Matthew J. Nitzken, Ayman S. El-Baz, Manuel F. Casanova
  • Patent number: 10235795
    Abstract: A method of compressing a texture image for use in generating a 360 degree panoramic video is provided. The method includes the steps of: receiving an original texture image for a sphere model, wherein the original texture image is an image with a rectangular shape and includes a plurality of pixel lines, each of which has a corresponding spherical position on the sphere model; determining a compression ratio of each of the pixel lines according to its corresponding spherical position; and compressing the pixel lines with the compression ratios corresponding thereto to generate a compressed texture image with a non-rectangular shape, wherein the compressed texture image is then mapped to the sphere model to generate the 360 degree panoramic video.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 19, 2019
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventors: Yiting Yi, Gong Chen, Rong Xie
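The per-line compression ratio follows the geometry of the sphere: an equirectangular pixel line at latitude φ covers a circle whose circumference is proportional to cos φ, so lines near the poles can be stored with far fewer pixels. A sketch of deriving line widths this way (the exact ratio function is an assumption, not taken from the patent):

```python
import math

def line_widths(height, width):
    """Per-line storage width for an equirectangular texture: each pixel
    line keeps roughly width * cos(latitude) pixels, shrinking toward
    the poles."""
    widths = []
    for row in range(height):
        lat = math.pi * (row + 0.5) / height - math.pi / 2  # -90 to +90 deg
        widths.append(max(1, round(width * math.cos(lat))))
    return widths

w = line_widths(height=4, width=100)
```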
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Patent number: 10210668
    Abstract: Techniques are described for generating a three dimensional (3D) object from complete or partial 3D data. Image data defining or partially defining a 3D object may be obtained. Using that data, a common plane facing surface of the 3D object may be defined that is substantially parallel to a common plane (e.g., ground plane). One or more edges of the common plane facing surface may be determined, and extended to the common plane. A bottom surface, which is bound by the one or more extended edges and is parallel with the common plane, may be generated based on the common-plane facing surface. In some aspects, defining the common plane facing surface may include segmenting the image data into a plurality of polygons, orienting at least one of the polygons to face the common plane, and discarding occluding polygons.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: February 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kristofer N. Iverson, Emmett Lalish, Gheorghe Marius Gheorghescu, Jan Jakubovic, Martin Kusnier, Vladimir Sisolak, Tibor Szaszi
  • Patent number: 10198858
    Abstract: A method based on Structure from Motion for processing a plurality of sparse images acquired by one or more acquisition devices to generate a sparse 3D point cloud and a plurality of internal and external parameters of the acquisition devices includes the steps of: collecting the images; extracting keypoints therefrom and generating keypoint descriptors; organizing the images in a proximity graph; pairwise image matching and generating tracks connecting keypoints according to maximum proximity between keypoints; performing an autocalibration between image clusters to extract internal and external parameters of the acquisition devices, wherein calibration groups are defined that contain a plurality of image clusters, and wherein a clustering algorithm iteratively merges the clusters into a model expressed in a common local reference system starting from clusters belonging to the same calibration group; and performing a Euclidean reconstruction of the object as a sparse 3D point cloud based on the extracted parameters.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: February 5, 2019
    Inventors: Yash Singh, Roberto Toldo, Luca Magri, Simone Fantoni, Andrea Fusiello
  • Patent number: 10115035
    Abstract: A vision system is configured to dynamically inspect an object in a field of view. This includes capturing, using a camera, three-dimensional (3D) point cloud data of the field of view and transforming each of the points of the 3D point cloud data into a plurality of tangential surface vectors. Surface normal vectors are determined for each of the points of the 3D point cloud data based upon the plurality of tangential surface vectors. Distribution peaks in the surface normal vectors are detected employing a unit sphere mesh. Parallel planes are separated using the distance distribution peaks. A radially bounded nearest neighbor strategy combined with a process of nearest neighbor searching based upon cell division is executed to segment a planar patch. A planar surface is identified based upon the segmented planar patch.
    Type: Grant
    Filed: January 8, 2015
    Date of Patent: October 30, 2018
    Assignees: SungKyunKwan University Foundation for Corporation Collaboration, GM Global Technology Operations LLC
    Inventors: Sukhan Lee, Hung Huu Nguyen, Jaewoong Kim, Jianying Shi
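Deriving a surface normal from two tangential surface vectors is a cross product; a minimal sketch for one point and its two grid neighbors:

```python
import numpy as np

def surface_normal(p, p_right, p_down):
    """Estimate the unit surface normal at point p from the two tangential
    surface vectors formed with its horizontal and vertical neighbors."""
    t1 = np.asarray(p_right, dtype=float) - np.asarray(p, dtype=float)
    t2 = np.asarray(p_down, dtype=float) - np.asarray(p, dtype=float)
    n = np.cross(t1, t2)
    return n / np.linalg.norm(n)

# Three points on the z = 0 plane: the normal points along the z axis.
n = surface_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])
```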
  • Patent number: 10086955
    Abstract: A camera pose estimation system is provided for estimating the position of a camera within an environment. The system may be configured to receive a 2D image captured by a camera within the environment, and interpret metadata of the 2D image to identify an estimated position of the camera. A synthetic 2D image from a 3D model of the environment may be rendered by a synthetic camera within the model at the estimated position. A correlation between the 2D image and synthetic 2D image may identify a 2D point of correlation, and the system may project a line from the synthetic camera through the 2D point on the synthetic 2D image rendered in an image plane of the synthetic camera such that the line intersects the 3D model at a corresponding 3D point therein. A refined position may be determined based on the 2D point and corresponding 3D point.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: October 2, 2018
    Assignee: The Boeing Company
    Inventor: John H. Aughey
  • Patent number: 10003769
    Abstract: A video telephony system includes a first image display apparatus and a second image display apparatus which makes a video call with the first image display apparatus. The first image display apparatus transmits a first image including a photographed image of a first user to the second image display apparatus, and receives a second image in which a background of a photographed image of a second user is substituted with a virtual background image, from the second image display apparatus. The second image display apparatus changes the virtual background image of the second image according to a change in a location of the first user and transmits the second image.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: June 19, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Eun-seo Kim, Sang-yoon Kim
  • Patent number: 9979961
    Abstract: The invention relates to an image processing device and an image processing method capable of collectively encoding a color image and a depth image of different resolutions. The image processing device comprises an image frame converting unit that converts the resolution of the depth image to the same resolution as that of the color image. An additional information generating unit generates additional information including information to specify the color image or the depth image, image frame conversion information indicating an area of a black image included in the resolution-converted depth image, and resolution information to distinguish whether the resolutions of the color image and the depth image are different from each other. This technology may be applied to image processing devices for images of multiple viewpoints.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: May 22, 2018
    Assignee: Sony Corporation
    Inventors: Yoshitomo Takahashi, Shinobu Hattori
  • Patent number: 9967525
    Abstract: A monitoring camera apparatus, having a camera for recording a monitoring region with at least one object from a recording position of the camera. The apparatus also includes an actuator for changing the recording position of the camera and a control device for driving the actuator and the camera. A first image is recordable in a first recording position of the camera, and a second image is recordable in a second recording position of the camera. The first and second images show at least one identical subsection with the at least one object of the monitoring region as a common subsection. An evaluation device for evaluating the images of the camera is configured to determine depth information concerning the at least one object in the common subsection from the first and second images.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: May 8, 2018
    Assignee: Robert Bosch GmbH
    Inventors: Wolfgang Niehsen, Dieter Joecker, Michael Meyer
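The depth recovery in the entry above rests on the standard triangulation principle: the shift (disparity) of the common subsection between the two recording positions determines object distance. A minimal sketch, assuming a rectified, purely horizontal camera motion and a pinhole model; the function name and parameters are illustrative, not taken from the patent:

```python
def depth_from_disparity(f, baseline, x_left, x_right):
    # Triangulate the depth of a point seen from two recording positions:
    # f        -- focal length in pixels
    # baseline -- distance between the two camera positions (metres)
    # x_left / x_right -- horizontal pixel coordinate of the same feature
    # in the first and second image; their difference is the disparity.
    disparity = x_left - x_right
    return f * baseline / disparity
```

A feature that shifts by 10 pixels under a 0.1 m camera displacement at f = 700 px lies 7 m away.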
  • Patent number: 9875545
    Abstract: Provided is a camera pose estimation apparatus that estimates an initial camera pose using one of an input depth image and an input color image, and refines the initial camera pose using the other image. When the initial camera pose is estimated using the input depth image, the radius of a first area, in which color information is matched for refinement, can be adaptively set according to the distribution of the depth value of at least one first point that is subject to matching.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: January 23, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seon Min Rhee, Hyong Euk Lee, Yong Beom Lee
  • Patent number: 9868256
    Abstract: A three-dimensional printing system includes a light source unit, at least one image-capturing module, a processing unit and a printing device. The image-capturing module includes an image-capturing unit and a focus-adjusting lens group. The processing unit controls a zooming lens to change the shooting focal length and controls the image-capturing unit to capture multiple images of the object to be measured under the different shooting focal lengths, wherein each of the images includes focused and unfocused local images therein, and the processing unit calculates three-dimensional profile data of the object to be measured according to the focused local images and the shooting focal lengths corresponding to them. The printing device prints multiple cross-sectional profiles corresponding to the object to be measured. Further, a method for three-dimensional printing is also provided.
    Type: Grant
    Filed: November 11, 2013
    Date of Patent: January 16, 2018
    Assignee: SILICON TOUCH TECHNOLOGY INC.
    Inventor: Ling-Yuan Tseng
  • Patent number: 9858638
    Abstract: An operationally optimal small finite subset of the infinitely many mathematically allowed spherical harmonics is defined. The composition of the subset differs depending on its position on the virtual hemisphere. The subsets are further divided into small spherical tesserae whose dimensions vary depending on the distance from the hemispherical center. The images of the outside visual scenes are projected on the flat surface of the webcam and from there are read and recalculated programmatically as if the images had been projected on the hemisphere. Rotational invariants are then computed in the smallest tesserae using numerical integration, and invariants from neighboring tesserae are added to compute the rotational invariant of their union. Every computed invariant is checked against the library and stored there if there is no match. The rotational invariants are used solely for visual recognition, classification and operational decision making.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: January 2, 2018
    Inventor: Alex Simon Blaivas
  • Patent number: 9811880
    Abstract: An apparatus, system, and method for increasing points in a point cloud. In one illustrative embodiment, a two-dimensional image of a scene and the point cloud of the scene are received. At least a portion of the points in the point cloud are mapped to the two-dimensional image to form transformed points. A fused data array is created using the two-dimensional image and the transformed points. New points for the point cloud are identified using the fused data array. The new points are added to the point cloud to form a new point cloud.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: November 7, 2017
    Assignee: THE BOEING COMPANY
    Inventors: Terrell Nathan Mundhenk, Yuri Owechko, Kyungnam Kim
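The mapping and densification steps above can be illustrated with a toy version: project the cloud through a pinhole model into a pixel grid (the "fused data array", here reduced to a depth channel only), estimate depth for empty pixels from their neighbours, and back-project the filled pixels as new points. All names and the pinhole parameters are assumptions for illustration; the patent's actual fused array also carries the two-dimensional image data:

```python
def project(points, f, cx, cy):
    # Pinhole projection of 3-d points (x, y, z) to integer pixel coords.
    out = []
    for x, y, z in points:
        u = int(round(f * x / z + cx))
        v = int(round(f * y / z + cy))
        out.append((u, v, z))
    return out

def densify(points, width, height, f, cx, cy):
    # Fused data array: one depth slot per pixel (image channels omitted).
    fused = [[None] * width for _ in range(height)]
    for u, v, z in project(points, f, cx, cy):
        if 0 <= u < width and 0 <= v < height:
            fused[v][u] = z
    new_points = []
    for v in range(height):
        for u in range(width):
            if fused[v][u] is None:
                # Estimate depth from the 4-neighbourhood, where known.
                nbrs = [fused[vv][uu]
                        for uu, vv in ((u - 1, v), (u + 1, v),
                                       (u, v - 1), (u, v + 1))
                        if 0 <= uu < width and 0 <= vv < height
                        and fused[vv][uu] is not None]
                if nbrs:
                    z = sum(nbrs) / len(nbrs)
                    # Back-project the filled pixel to a new 3-d point.
                    new_points.append(((u - cx) * z / f,
                                       (v - cy) * z / f, z))
    return new_points
```

On a flat wall at z = 2 sampled everywhere except one pixel, the sketch recovers exactly one new point at the missing location.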
  • Patent number: 9800896
    Abstract: A method and apparatus for depth lookup table (DLT) signaling in a three-dimensional and multi-view coding system are disclosed. According to the present invention, if the pictures contain only texture data, no DLT information is incorporated in the picture parameter set (PPS) corresponding to the pictures. On the other hand, if the pictures contain depth data, the DLT associated with the pictures is determined. If a previous DLT required for predicting the DLT exists, the DLT will be predicted based on the previous DLT. Syntax related to the DLT is included in the PPS. Furthermore, first bit-depth information related to first depth samples of the DLT is also included in the PPS and the first bit-depth information is consistent with second bit-depth information signaled in a sequence level data for second depth samples of a sequence containing the pictures.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: October 24, 2017
    Assignee: HFI INNOVATION INC.
    Inventors: Kai Zhang, Jicheng An, Xianguo Zhang, Han Huang
  • Patent number: 9646264
    Abstract: An input time-series is decomposed into a set of constituent frequencies. For each constituent frequency in a subset of the set of constituent frequencies, a corresponding forecasting model is selected from a set of forecasting models, forming a subset of forecasting models. From the set of component forecasts produced by the subset of forecasting models, a subset of component forecasts is selected; a component forecast is included in this subset according to a component forecast selection condition. The subset of component forecasts is output to revise the forecast selection condition. A revised forecast selection condition increases the relevance of a future subset of component forecasts.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: May 9, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Aaron K. Baughman
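A heavily simplified sketch of the decomposition idea above: a discrete Fourier transform splits the series into constituent frequencies, each retained frequency acts as one component forecast, and their sum extends the series past its end. The patent's per-frequency model selection and selection-condition revision loop are omitted; function names are illustrative assumptions:

```python
import cmath
import math

def dft(x):
    # Constituent frequencies of the series (normalised DFT).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def forecast(series, horizon, keep):
    # Keep the 'keep' strongest constituent frequencies; each one is a
    # component forecast, and their sum extends the series.
    n = len(series)
    coeffs = dft(series)
    top = sorted(range(n), key=lambda k: -abs(coeffs[k]))[:keep]

    def value(t):
        return sum((coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)).real
                   for k in top)

    return [value(n + h) for h in range(horizon)]
```

A pure sinusoid occupies two conjugate frequency bins, so keeping two components reproduces it exactly beyond the observed window.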
  • Patent number: 9607387
    Abstract: A method for optimizing fiducial marker and camera positions/orientations, realized by simulating camera and fiducial positions and a pose estimation algorithm to find the best possible marker/camera placement, comprises the steps of: acquiring mesh data representing possible camera positions and feasible orientation boundaries of cameras in the environment of the tracked object; acquiring mesh data representing possible active marker positions and feasible orientation placements of markers on the tracked object; acquiring pose data representing possible poses of the tracked object under working conditions; initializing the control parameter for camera placement; creating initial solution strings for camera placement; solving the marker placement problem for the current camera placement; evaluating the quality of the current LED and camera placement, taking pose coverage, pose accuracy, the number of placed markers, the number of placed cameras, etc. into account; and determining if a stopping criterion is satisfied.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: March 28, 2017
    Assignee: ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
    Inventors: Erkan Okuyan, Ozgur Yilmaz
  • Patent number: 9595106
    Abstract: A calibration apparatus calibrating a projection apparatus projecting a projection image includes a captured image acquiring unit acquiring a captured image at each change of at least one of a relative position between the projection apparatus and a plane body and a relative posture between the projection apparatus and the plane body, a reflection position estimating unit acquiring reflection positions at each change of at least one of a position of the plane body and a posture of the plane body using a predetermined correspondence relationship between a pixel of the captured image and a position on the plane body, a plane body position posture estimating unit estimating positions and postures of the plane body so as to minimize a degree of misfit of the reflection positions from a straight line of the reflection positions, and a projection light beam identifying unit identifying an equation of the light beam.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: March 14, 2017
    Assignee: Ricoh Company, Ltd.
    Inventor: Takayuki Hara
  • Patent number: 9525862
    Abstract: A method for estimating a camera motion and for determining a three-dimensional model of an environment is provided that includes the steps of: providing intrinsic parameters of a camera; providing a set of reference two-dimensional imaged points captured by the camera at a first camera pose and reference depth samples; determining a three-dimensional model of the environment; providing a set of current two-dimensional imaged points captured by the camera at a second camera pose and current depth samples associated to the set of current two-dimensional imaged points and determining a current three-dimensional model; estimating a camera motion between the first camera pose and the second camera pose; determining a similarity measure between the three-dimensional model and the current three-dimensional model, and if it is determined that the similarity measure meets a first condition, updating the three-dimensional model of the environment and adding the set of current two-dimensional imaged points to the set of reference two-dimensional imaged points.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: December 20, 2016
    Assignee: Metaio GmbH
    Inventors: Selim Benhimane, Sebastian Lieberknecht, Andrea Huber
  • Patent number: 9489724
    Abstract: A method of displaying. The method includes using a system including three-dimensional stereoscopic projection equipment to project an image onto a complex surface of a physical object disposed in an area. The complex surface includes at least one curve or angle. The system also includes a device worn by a user or disposed on a mobile device. The method further includes warping the image based on a geometry of the complex surface, tracking a position and orientation of the user or the mobile device, and further warping the image based on the position and orientation.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: November 8, 2016
    Assignee: The Boeing Company
    Inventors: Paul Robert Davies, Steven Matthew Gunther
  • Patent number: 9483829
    Abstract: A structure for determining a plane in a depth image includes dividing a portion of a depth image into a plurality of areas, fitting a two-dimensional line to depth points in each of the plurality of areas, and combining two or more of the plurality of two-dimensional lines to form a three-dimensional plane estimate.
    Type: Grant
    Filed: September 4, 2013
    Date of Patent: November 1, 2016
    Assignee: International Business Machines Corporation
    Inventor: Jonathan H. Connell, II
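The line-then-plane idea above can be sketched in a few lines: fit a 2-d line z = a·x + b to each row of a depth patch, then combine the per-row slopes and intercepts into a plane estimate z = p·x + q·y + r. This is an illustrative reduction (rows standing in for the patent's "areas"), not the patented structure itself:

```python
def fit_line(xs, zs):
    # Least-squares fit of z = a*x + b.
    n = len(xs)
    sx, sz = sum(xs), sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    a = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    b = (sz - a * sx) / n
    return a, b

def plane_from_rows(depth):
    # Each row y yields a line z = a*x + b_y.  For a true plane
    # z = p*x + q*y + r, the slope a is shared (p) and the intercepts
    # b_y vary linearly with y, so fitting b_y against y gives q and r.
    lines = []
    for y, row in enumerate(depth):
        a, b = fit_line(list(range(len(row))), row)
        lines.append((y, a, b))
    p = sum(a for _, a, _ in lines) / len(lines)
    q, r = fit_line([y for y, _, _ in lines], [b for _, _, b in lines])
    return p, q, r
```

On a synthetic depth patch sampled from z = 2x + 3y + 1, the estimate recovers the plane coefficients exactly.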
  • Patent number: 9466045
    Abstract: Shipments of ordered items may be optimized by minimizing geographic and time constraints, thereby improving efficiencies and reducing costs. Geographic locations of items included in an order may be considered when generating an optimal path for retrieving such items and for processing the order. Upon identifying the items included in an order, locations of an origin, a destination and each of the items may be determined by any means, such as through a photogrammetric analysis of one or more images, and an optimal path for picking each of the items may be determined based on the respective locations of each of the items. Additionally, orders for items for delivery to a common destination may be combined into a single shipment if the orders are received within a window of time, and if the items are sufficiently compatible with one another.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: October 11, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Nirvay Kumar
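The path-optimization step above can be approximated with a simple greedy nearest-neighbour heuristic: start at the origin, repeatedly visit the closest unpicked item, and finish at the destination. This is a sketch of one common approach, not the patent's actual method; all names are illustrative:

```python
import math

def pick_path(origin, destination, items):
    # Greedy nearest-neighbour route through 2-d item locations.
    route, pos, todo = [origin], origin, list(items)
    while todo:
        nxt = min(todo, key=lambda p: math.dist(pos, p))
        todo.remove(nxt)
        route.append(nxt)
        pos = nxt
    route.append(destination)
    return route
```

For items along a straight aisle the heuristic visits them in order, which here is also optimal.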
  • Patent number: 9460238
    Abstract: A method for determining a form for a headphone part using a representative model includes receiving data for at least one ear model. The ear model may be oriented within a coordinate system based on the data, where orienting is focused on the alignment with respect to one or more areas of the ear. A representative model is determined from a plurality of oriented ear models, where the representative model is a representation for a volume determined to be common to at least two oriented ear models. The size and/or shape for the headphone part is determined based upon the representative model.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: October 4, 2016
    Assignee: APPLE INC.
    Inventors: M. Evans Hankey, Jonathan S. Aase, Matthew D. Rohrbach, Daniele G. De Iuliis, Kristi E. Bauerly
  • Patent number: 9456196
    Abstract: Provided are an apparatus and a method of providing a multiview still image service. The method includes: configuring a multiview still image file format including a plurality of image areas into which a plurality of pieces of image information forming a multiview still image are inserted; inserting the plurality of pieces of image information into the plurality of image areas, respectively; inserting three-dimensional (3D) basic attribute information to three-dimensionally reproduce the multiview still image into a first image area of the plurality of image areas into which main-view image information from among the plurality of pieces of image information is inserted; and outputting multiview still image data comprising the plurality of pieces of image information based on the multiview still image file format.
    Type: Grant
    Filed: February 23, 2011
    Date of Patent: September 27, 2016
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yong-tae Kim, Ha-joong Park, Gun-ill Lee, Houng-sog Min, Sung-bin Hong, Kwang-cheol Choi
  • Patent number: 9454796
    Abstract: Systems and methods for aligning ground based images of a geographic area taken from a perspective at or near ground level and a set of aerial images taken from, for instance, an oblique perspective, are provided. More specifically, candidate aerial imagery can be identified for alignment with the ground based image. Geometric data associated with the ground based image can be obtained and used to warp the ground based image to a perspective associated with the candidate aerial imagery. One or more feature matches between the warped image and the candidate aerial imagery can then be identified using a feature matching technique. The matched features can be used to align the ground based image with the candidate aerial imagery.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: September 27, 2016
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Qi Shan
  • Patent number: 9436973
    Abstract: In an image processing for correcting a distorted image obtained by photography by use of a super-wide angle optical system such as a fisheye lens or an omnidirectional mirror, to obtain an image of a perspective projection method, a composite index (Rn) combining a height on the projection sphere with a distance from the optical axis is computed (301), and a distance (Rf) from an origin in the distorted image is computed (302), using the composite index (Rn). Further, two-dimensional coordinates (p, q) in the distorted image are computed (303), using the distance (Rf) from the origin, and a pixel value in an output image is determined using a pixel at a position in the distorted image specified by the two-dimensional coordinates, or a pixel or pixels neighboring the specified position. It is possible to perform the projection from the projection sphere to the image plane, that is, the computation of the coordinates on the coordinate plane, while restricting the amount of computation.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: September 6, 2016
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Toru Aoki, Narihiro Matoba
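For an equidistant fisheye model, the correction to a perspective projection described above reduces to remapping each output-image radius r_p = f·tan θ back to the fisheye radius r_f = f·θ, where θ is the angle on the projection sphere. A sketch under that assumed lens model (the patent's composite index and coordinate symbols are not reproduced):

```python
import math

def undistort_coords(u, v, f):
    # Map a pixel (u, v) of the desired perspective image (origin at the
    # optical axis, focal length f) to coordinates (p, q) in the
    # equidistant fisheye image.
    rp = math.hypot(u, v)          # distance from the optical axis
    if rp == 0:
        return 0.0, 0.0            # the axis maps to itself
    theta = math.atan(rp / f)      # angle on the projection sphere
    rf = f * theta                 # equidistant fisheye radius
    s = rf / rp
    return u * s, v * s
```

The returned (p, q) would then index (or interpolate between) pixels of the distorted image to fill the output pixel, as in the abstract.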
  • Patent number: 9430038
    Abstract: Embodiments that relate to communicating to a user of a head-mounted display device an estimated quality level of a world-lock display mode are disclosed. For example, in one disclosed embodiment sensor data is received from one or more sensors of the device. Using the sensor data, an estimated pose of the device is determined. Using the estimated pose, one or more virtual objects are displayed via the device in either the world-lock display mode or in a body-lock display mode. One or more of input uncertainty values of the sensor data and pose uncertainty values of the estimated pose are determined. The input uncertainty values and/or pose uncertainty values are mapped to the estimated quality level of the world-lock display mode. Feedback of the estimated quality level is communicated to the user via the device.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: August 30, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael John Ebstyne, Frederik Schaffalitzky, Drew Steedly, Ethan Eade, Martin Shetter, Michael Grabner
  • Patent number: 9418475
    Abstract: The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, multiple 3D point clouds, which are captured using one or more 3D cameras, are obtained. At least two of the 3D point clouds correspond to different positions of a body relative to at least a single one of the one or more 3D cameras. Two or more of the 3D point clouds are identified as corresponding to two or more predefined poses, and a segmented representation of the body is generated, in accordance with a 3D part-based volumetric model including cylindrical representations, based on the two 3D point clouds identified as corresponding to the two predefined poses.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: August 16, 2016
    Assignee: University of Southern California
    Inventors: Gerard Guy Medioni, Jongmoo Choi, Ruizhe Wang