3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 11372477
    Abstract: The disclosure provides an eye tracking method, a head-mounted display (HMD), and a computer-readable storage medium. The method includes: capturing, by a first camera, a first eye image of a first eye of a wearer of the HMD; capturing, by a second camera, a second eye image of the first eye of the wearer; constructing a first eye model of the first eye based on the first eye image and the second eye image; capturing, by the first camera, a first specific eye image of the first eye of the wearer; obtaining a plurality of first specific eye landmarks in the first specific eye image; and identifying a first eye pose of the first eye of the wearer based on the first eye model and the first specific eye landmarks.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: June 28, 2022
    Assignee: HTC Corporation
    Inventor: Yung-Chen Lin
  • Patent number: 11373354
    Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: June 28, 2022
    Assignee: Track160, Ltd.
    Inventors: Michael Tamir, Michael Birnboim, Yaacov Chernoi, Antonio Dello Iacono, Tamir Anavi, Michael Priven, Alexander Yudashkin
  • Patent number: 11373335
    Abstract: A projection image generation unit 91 applies a plurality of projection schemes that use the radius of a visual field region of a fisheye-lens camera to an image that is imaged by the fisheye-lens camera to generate a plurality of projection images. A display unit 92 displays the plurality of projection images. A selection acceptance unit 93 accepts a projection image selected by a user from among the plurality of displayed projection images. A projection scheme determination unit 94 determines a projection scheme on the basis of the selected projection image. An output unit 95 outputs an internal parameter of the fisheye-lens camera that corresponds to the determined projection scheme.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: June 28, 2022
    Assignee: NEC CORPORATION
    Inventor: Hiroo Ikeda
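The abstract does not name the "plurality of projection schemes," but fisheye calibration commonly chooses among a few standard models that map the incidence angle θ to an image radius r via the focal length f. As a hedged sketch (the model names and the solve-for-f step are assumptions, not taken from the patent), each scheme can be fit so that the visual-field radius lands at the maximum incidence angle:

```python
import math

# Standard fisheye projection models mapping incidence angle theta (rad)
# to image radius r for focal length f. These are plausible stand-ins for
# the patent's unnamed "plurality of projection schemes".
PROJECTIONS = {
    "equidistant":   lambda f, t: f * t,
    "equisolid":     lambda f, t: 2.0 * f * math.sin(t / 2.0),
    "orthographic":  lambda f, t: f * math.sin(t),
    "stereographic": lambda f, t: 2.0 * f * math.tan(t / 2.0),
}

def focal_from_field_radius(scheme, radius_px, theta_max):
    """Solve f so that theta_max maps exactly to the visual-field radius."""
    r_unit = PROJECTIONS[scheme](1.0, theta_max)  # radius when f = 1
    return radius_px / r_unit

# Example: a 180-degree fisheye whose visual field fills a 500 px radius.
for scheme in PROJECTIONS:
    f = focal_from_field_radius(scheme, 500.0, math.pi / 2.0)
    assert abs(PROJECTIONS[scheme](f, math.pi / 2.0) - 500.0) < 1e-9
```

Each candidate scheme then yields its own internal parameter (here just f), matching the abstract's step of outputting the parameter for the user-selected projection.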
  • Patent number: 11373326
    Abstract: An information processing apparatus is configured to output a histogram for inspecting a state of a target object based on the presence of a peak in a specific class of the histogram, the histogram representing a distribution of depth values from a measurement apparatus to the target object. The information processing apparatus includes an acquisition unit configured to acquire depth information obtained from a result of measurement of a depth value from the measurement apparatus to the target object, and an output unit configured to output a histogram based on the acquired depth information such that the frequency of the class containing a predetermined depth value (the value applied when no depth value is obtained) is reduced.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: June 28, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Tomoaki Higo, Kazuhiko Kobayashi, Tomokazu Tsuchimoto
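The histogram step described above is straightforward to sketch: bin the depth map, then suppress the bin containing the sentinel written when no depth could be measured, so a pile-up of missing measurements does not mask a real peak. The sentinel value of 0.0 and the bin layout below are illustrative assumptions:

```python
import numpy as np

def depth_histogram(depth_map, bins=32, value_range=(0.0, 2.0), sentinel=0.0):
    """Histogram of depth values with the no-measurement class suppressed."""
    counts, edges = np.histogram(depth_map, bins=bins, range=value_range)
    # Find which class (bin) the sentinel falls into and zero it out.
    idx = np.clip(np.searchsorted(edges, sentinel, side="right") - 1, 0, bins - 1)
    counts[idx] = 0
    return counts, edges

depth = np.array([[0.0, 0.0, 1.1],      # 0.0 = depth not obtained
                  [1.1, 1.1, 1.2]])
counts, edges = depth_histogram(depth)  # sentinel bin is forced to zero
```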
  • Patent number: 11363946
    Abstract: A method for carrying out an eye examination on an examinee is provided. The method includes providing a display configured for displaying a visual stimulus; forming a visual path between the display and at least one eye of the examinee; displaying on the display at least one visual stimulus having visual characteristics; detecting reflexive eye response of the at least one eye in response to the displaying of the visual stimulus; assessing vision of the at least one eye in accordance with the reflexive eye response and the characteristics of the visual stimulus.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: June 21, 2022
    Inventors: Ori Raviv, Abraham Sela, Andrey Markus
  • Patent number: 11367221
    Abstract: An image synthesizing method includes capturing a first image using a reference small aperture size; capturing a second image using a reference large aperture size; obtaining one or more reference color weights according to a corresponding pixel of the first image and adjacent pixels of the corresponding pixel of the first image and a corresponding pixel of the second image; obtaining an associated distance by looking up an association table according to the one or more reference color weights; obtaining one or more associated color weights by looking up the association table according to the associated distance and an expected aperture; and obtaining a color value of a corresponding pixel of a synthesized image, by applying weighting to the corresponding pixel of the first image and the adjacent pixels of the corresponding pixel of the first image with the one or more associated color weights.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: June 21, 2022
    Assignee: Wistron Corporation
    Inventor: Yao-Tsung Chang
  • Patent number: 11361459
    Abstract: The present disclosure provides a method, device and non-transitory computer storage medium for processing an image. The method includes: obtaining an image by a monocular camera; extracting image features with different levels based on the image; determining a fused feature by fusing the image features with different levels; and determining a depth distribution feature map of the image based on the fused feature, where a pixel value of each pixel point in the depth distribution feature map is a depth value.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: June 14, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Shijie An, Yuan Zhang, Chongyang Ma
  • Patent number: 11360571
    Abstract: An information processing device includes: an outline extraction unit extracting an outline of a subject from a picked-up image of the subject; a characteristic amount extraction unit extracting a characteristic amount, by extracting sample points from points making up the outline, for each of the sample points; an estimation unit estimating a posture with a high degree of matching as the posture of the subject by calculating the degree to which the characteristic amount extracted by the characteristic amount extraction unit matches each of a plurality of characteristic amounts that are prepared in advance and represent predetermined postures different from each other; and a determination unit determining the accuracy of estimation by the estimation unit using a matching cost when the estimation unit carries out the estimation.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: June 14, 2022
    Assignee: SONY CORPORATION
    Inventors: Keisuke Yamaoka, Jun Yokono
  • Patent number: 11361502
    Abstract: Computer-implemented methods and systems are described herein for optimising aerial observation positions for capturing aerial images of a geographical area, in which both the field of view of the camera used to capture the images and the surface terrain are taken into consideration to ensure that the geographical area is fully and completely imaged. Aspects may be used to improve the aerial imagery used for geospatial surveying. The pixels of a digital surface model representing a geographic area are analysed to determine whether they will be visible to a camera located at a number of different observation points above the same target geographic area. For each observation point at which an image is to be captured, each pixel is analysed to determine whether or not it is within the field of view of the camera when positioned at that observation point.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: June 14, 2022
    Assignee: Ordnance Survey Limited
    Inventor: Joseph Braybrook
  • Patent number: 11361461
    Abstract: According to an embodiment, an electronic device includes a display, a memory, and a processor operatively connected to the display and the memory. The memory stores instructions that cause the processor to generate first depth information in a first direction from an object, generate a first point cloud of the object based on the first depth information, generate a first bounding box containing one or more points of the first point cloud, generate second depth information in a second direction from the object, the second direction being different from the first direction, generate a second point cloud of the object based on the second depth information, generate a second bounding box containing one or more points of the second point cloud, generate a third bounding box by combining the first and second bounding boxes, and display the third bounding box on the display. Certain other embodiments are also possible.
    Type: Grant
    Filed: February 17, 2020
    Date of Patent: June 14, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dongkeun Oh, Jisung Yoo, Beomsu Kim, Chanmin Park, Kihuk Lee
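The box-combination flow in the abstract above can be sketched in a few lines, assuming axis-aligned bounding boxes in a shared coordinate frame (an assumption; the abstract does not restrict the box type). One box per viewing direction, then a third box that contains both:

```python
import numpy as np

def bounding_box(points):
    """Axis-aligned bounding box of a point cloud: (min corner, max corner)."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def combine_boxes(box_a, box_b):
    """Smallest axis-aligned box containing both input boxes."""
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    return np.minimum(lo_a, lo_b), np.maximum(hi_a, hi_b)

front = [[0.0, 0.0, 1.0], [1.0, 2.0, 1.5]]   # cloud from the first direction
side  = [[-0.5, 0.5, 1.2], [0.8, 1.0, 3.0]]  # cloud from the second direction
third = combine_boxes(bounding_box(front), bounding_box(side))
# third spans [-0.5, 0, 1] .. [1, 2, 3]
```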
  • Patent number: 11360550
    Abstract: Touch detection may include determining, based on data from an IMU on a first device that monitors movement of a touching object, a touch event, wherein the touch event indicates contact between the touching object and a surface, obtaining a depth image captured by a second device, comprising a touch image, determining a touch point of the object based on the touch image, and providing a response based on the touch point of the object and the touched surface.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: June 14, 2022
    Assignee: Apple Inc.
    Inventors: Lejing Wang, Rohit Sethi
  • Patent number: 11354349
    Abstract: A system for visual discovery is disclosed. The system identifies a visual search query in response to an action associated with an image presented to a user of a client device. The system extracts visual features from the visual search query and compares the visual features with visual features of stored content items. The system then identifies a set of candidate visual content items from the stored content items that have visual features, which are similar to the visual features of the visual search query. The candidate visual content items are ranked using information from a user session and provided for display to the user.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: June 7, 2022
    Assignee: Pinterest, Inc.
    Inventors: Jiajing Xu, Andrei Curelea
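The retrieval step described above reduces to nearest-neighbor search over visual feature vectors. A minimal sketch using cosine similarity (the feature extractor and the session-based re-ranking are out of scope, and the toy vectors are illustrative):

```python
import numpy as np

def top_candidates(query_vec, content_vecs, k=2):
    """Return indices and scores of the k stored items most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = content_vecs / np.linalg.norm(content_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity to every stored item
    order = np.argsort(-sims)[:k]
    return order, sims[order]

query = np.array([1.0, 0.0, 1.0])
stored = np.array([[1.0, 0.1, 0.9],   # visually close
                   [0.0, 1.0, 0.0],   # unrelated
                   [0.9, 0.0, 1.1]])  # visually close
idx, scores = top_candidates(query, stored)  # picks the two close items
```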
  • Patent number: 11354842
    Abstract: An animation system includes an animated figure, multiple sensors, and an animation controller that includes a processor and a memory. The memory stores instructions executable by the processor. The instructions cause the animation controller to receive guest detection data from the multiple sensors, receive shiny object detection data from the multiple sensors, determine an animation sequence of the animated figure based on the guest detection data and shiny object detection data, and transmit a control signal indicative of the animation sequence to cause the animated figure to execute the animation sequence. The guest detection data is indicative of a presence of a guest near the animated figure. The animation sequence is responsive to a shiny object detected on or near the guest based on the guest detection data and the shiny object detection data.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Universal City Studios LLC
    Inventors: David Michael Churchill, Clarisse Vamos, Jeffrey A. Bardt
  • Patent number: 11354774
    Abstract: In an image processing system, a scan of an actor is converted to a high-resolution two-dimensional map, which is converted to low-resolution map and to a facial rig model. Manipulations of the facial rig create a modified facial rig. A new low-resolution two-dimensional map can be obtained of the modified facial rig and a neural network can be used to generate a new high-resolution two-dimensional map that can be used to generate a mesh that is a mesh of the scan, modified by the manipulations of the facial rig.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: June 7, 2022
    Assignee: Unity Technologies SF
    Inventor: Byung Kuk Choi
  • Patent number: 11348303
    Abstract: Methods of creating a texture for a three-dimensional (3D) model using frequency separation and/or depth buffers are provided. Frequency separation may include splitting one or more images each into higher frequency components (which include finer details such as facial pores, lines, birthmarks, spots, or other textural details) and lower frequency components (such as color or tone). Depth buffering may include storing higher frequency components of the images within a depth buffer based on the distance of a corresponding vertex in the 3D model from the camera coordinate system, and then using the pixel closest to the camera, which likely has the highest amount of sharpness or detail. The lower frequency components can be averaged to account for illumination differences, but because the high frequency components have been separated, detail in the final texture may be preserved. Related devices and computer program products are also provided.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: May 31, 2022
    Assignee: Sony Group Corporation
    Inventors: Johannes Elg, Fredrik Olofsson, Lars Novak, Pal Szasz
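The frequency-separation step above is the classic blur-and-residual split: a low-pass filter gives the tone layer, and the residual carries the detail, so lows from several exposures can be averaged while detail survives. A 1-D sketch with a box blur standing in for a 2-D image and a proper low-pass filter:

```python
import numpy as np

def split_frequencies(signal, radius=2):
    """Split a signal into low-frequency (tone) and high-frequency (detail) layers."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    low = np.convolve(signal, kernel, mode="same")  # tone / illumination
    high = signal - low                             # pores, lines, spots
    return low, high

a = np.array([10.0, 10, 50, 10, 10, 10])  # one "image" row with a detail spike
b = a + 3.0                               # same content, brighter exposure
low_a, high_a = split_frequencies(a)
low_b, high_b = split_frequencies(b)
merged = (low_a + low_b) / 2 + high_a     # averaged tone + preserved detail
```

Averaging only the low layers evens out the illumination difference between the two exposures without smearing the spike, which is the point of separating the frequencies first.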
  • Patent number: 11348279
    Abstract: A system for estimating a three dimensional pose of one or more persons in a scene is disclosed herein. The system includes at least one camera, the at least one camera configured to capture an image of the scene; and a data processor including at least one hardware component, the data processor configured to execute computer executable instructions. The computer executable instructions comprising instructions for: (i) receiving the image of the scene from the at least one camera; (ii) extracting features from the image of the scene for providing inputs to a convolutional neural network; (iii) generating one or more volumetric heatmaps using the convolutional neural network; and (iv) applying a maximization function to the one or more volumetric heatmaps to obtain a three dimensional pose of one or more persons in the scene.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: May 31, 2022
    Assignee: Bertec Corporation
    Inventors: Emre Akbas, Batuhan Karagoz, Bedirhan Uguz, Ozhan Suat, Necip Berme, Mohan Chandra Baro
  • Patent number: 11344102
    Abstract: The present disclosure provides systems and methods for virtual facial makeup simulation through virtual makeup removal and virtual makeup add-ons, virtual end effects, and simulated textures. In one aspect, the present disclosure provides a method for virtually removing facial makeup, the method comprising: providing a facial image of a user with makeup applied thereto; locating facial landmarks from the facial image of the user in one or more regions; decomposing some regions into first channels which are fed to histogram matching to obtain a first image without makeup in that region; transferring other regions into color channels which are fed into histogram matching under different lighting conditions to obtain a second image without makeup in that region; and combining the images to form a resultant image with makeup removed in the facial regions.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: May 31, 2022
    Assignee: Shiseido Company, Limited
    Inventors: Yun Fu, Haiyi Mao
  • Patent number: 11348263
    Abstract: Provided is a method and apparatus for detecting a vanishing point in a driving image of a vehicle. The method includes: receiving the driving image; generating a probability map, comprising probability information about a position of the vanishing point in the driving image, from the driving image; detecting a vanishing point on the driving image by applying smoothing regression, which softens a boundary region of the vanishing point, to the probability map; and processing a task for driving the vehicle by converting an orientation of the driving image based on the vanishing point.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: May 31, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Sang Jun Lee
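Reading a vanishing point off a probability map with a softened boundary amounts to replacing a hard argmax with a probability-weighted mean position (a soft-argmax). This is a hedged sketch of that idea only; the patent's actual smoothing-regression details are not reproduced, and the tiny map is illustrative:

```python
import numpy as np

def soft_vanishing_point(prob_map):
    """Probability-weighted mean position (x, y) over a 2-D probability map."""
    p = prob_map / prob_map.sum()
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    return float((p * xs).sum()), float((p * ys).sum())

prob = np.zeros((5, 5))
prob[2, 3] = 0.6   # peak near (x=3, y=2)
prob[2, 2] = 0.2
prob[1, 3] = 0.2
x, y = soft_vanishing_point(prob)  # lands between the peak cells
```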
  • Patent number: 11341650
    Abstract: Aspects of the subject disclosure may include, for example, a device that has a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, including downsampling a full point cloud to obtain a downsampled point cloud, wherein the downsampling reduces a data size of the full point cloud; and using a machine-learning model to assign labels for segmentation and object identification to points in the downsampled point cloud, wherein the machine-learning model is trained on the full point cloud. Other embodiments are disclosed.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: May 24, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Bo Han, Cheuk Yiu Ip, Eric Zavesky, Huanle Zhang
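The downsampling step above can be sketched as a voxel-grid filter that keeps one representative point (the centroid) per occupied voxel, shrinking the cloud before labels are assigned. The voxel size is an arbitrary choice for illustration; the patent does not specify the downsampling method here:

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Replace all points inside each occupied voxel with their centroid."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel).astype(int)          # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((inverse.max() + 1, pts.shape[1]))
    for dim in range(pts.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=pts[:, dim]) / counts
    return out

cloud = np.array([[0.1, 0.1, 0.1], [0.2, 0.1, 0.1],  # same voxel
                  [0.9, 0.9, 0.9]])                   # different voxel
small = voxel_downsample(cloud)   # 3 points -> 2 representatives
```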
  • Patent number: 11341183
    Abstract: An apparatus and method for searching for a building on the basis of an image and a method of constructing a building search database (DB) for image-based building search. The method includes constructing a building search DB, receiving a query image from a user terminal, detecting a region to which a building belongs in the query image, extracting features of the region detected in the query image, and searching the building search DB for a building matching the extracted features. Therefore, building search performance can be improved.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: May 24, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Keun Dong Lee, Seung Jae Lee, Jong Gook Ko, Hyung Kwan Son, Weon Geun Oh, Da Un Jung
  • Patent number: 11343485
    Abstract: An apparatus including a stereo camera and a processor. The stereo camera may comprise a first capture device and a second capture device in a vertical orientation. The first capture device may be configured to generate first pixel data and the second capture device may be configured to generate second pixel data. The processor may be configured to receive the first pixel data and the second pixel data, generate a vertical disparity image in response to the first pixel data and the second pixel data, generate a virtual horizontal disparity image in response to the first pixel data and the vertical disparity image and detect objects by analyzing the vertical disparity image and the virtual horizontal disparity image. An analysis of the virtual horizontal disparity image may enable the processor to detect the objects not detected in the vertical disparity image alone.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: May 24, 2022
    Assignee: Ambarella International LP
    Inventors: Omar Pighi, Alessandro Pacchioni
  • Patent number: 11341668
    Abstract: A distance measuring camera contains a first imaging system for obtaining a first image containing a first subject image; a second imaging system for obtaining a second image containing a second subject image; a size obtaining part that detects a plurality of feature points of the first subject image in the first image and measures a distance between those feature points to obtain a size of the first subject image, and that utilizes an epipolar line to detect a plurality of feature points of the second subject image respectively corresponding to the feature points of the first subject image and measures a distance between them to obtain a size of the second subject image; and a distance calculating part for calculating a distance to the subject based on an image magnification ratio between a magnification of the first subject image and a magnification of the second subject image.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: May 24, 2022
    Assignee: MITSUMI ELECTRIC CO., LTD.
    Inventor: Satoru Suto
  • Patent number: 11342000
    Abstract: Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 24, 2022
    Assignee: WARNER BROS. ENTERTAINMENT INC.
    Inventors: Gregory I. Gewickey, Lewis S. Ostrover, Michael Smith, Michael Zink
  • Patent number: 11341715
    Abstract: A method, a system, a device, and a computer readable storage medium for video reconstruction are disclosed. The method includes: obtaining an image combination of multi-angle free-perspective video frames, parameter data corresponding to the image combinations of the video frames, and position information of a virtual viewpoint based on a user interaction; selecting texture images and depth maps of corresponding groups in the image combinations of the video frames at a time moment of the user interaction according to a preset rule and based on the position information of the virtual viewpoint and the parameter data corresponding to the image combinations of the video frames; and combining and rendering the texture images and the depth maps of the corresponding groups based on the position information of the virtual viewpoint and parameter data corresponding to the depth maps and the texture images of the corresponding groups to obtain a reconstructed image.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 24, 2022
    Assignee: Alibaba Group Holding Limited
    Inventor: Xiaojie Sheng
  • Patent number: 11341620
    Abstract: One example provides a computer-implemented method for reading data stored as birefringence values in a storage medium. The method comprises acquiring an image of a voxel of the storage medium, applying a first low-pass filter with a first cutoff frequency to the image of the voxel to obtain a first background image, applying a second low-pass filter with a second cutoff frequency to the image of the voxel to obtain a second background image, the second cutoff frequency being different than the first cutoff frequency, determining an enhanced background image from the first background image and the second background image, determining birefringence values for the enhanced background image, determining birefringence values for the image of the voxel, and correcting the birefringence values for the image of the voxel based upon the birefringence values for the enhanced background image.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ariel Gomez Diaz, David Lara Saucedo, Peter Gyula Scholtz, Patrick Neil Anderson, Rokas Drevinskas, Richard John Black, James Hilton Clegg
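The background-correction flow in the abstract above can be sketched as: two low-pass filters with different cutoffs give two background estimates, which are combined into an enhanced background that is then subtracted from the voxel image. Box blurs stand in for the low-pass filters, and averaging the two backgrounds is an assumed combination rule; the abstract does not state how the backgrounds are combined or how the correction is applied:

```python
import numpy as np

def box_blur(signal, radius):
    """Cheap 1-D low-pass filter (stand-in for the patent's filters)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

def correct_background(signal, r_small=1, r_large=3):
    bg1 = box_blur(signal, r_small)   # first cutoff frequency
    bg2 = box_blur(signal, r_large)   # second, lower cutoff frequency
    enhanced_bg = (bg1 + bg2) / 2.0   # assumed combination of the two
    return signal - enhanced_bg       # corrected "birefringence" values

sig = np.array([1.0, 1, 1, 4, 1, 1, 1])  # flat background + one voxel peak
corrected = correct_background(sig)      # peak stays the dominant feature
```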
  • Patent number: 11336887
    Abstract: Disclosed herein are a system, a method, and a device for rendering an image through adaptive reprojection. A first reprojection can be performed to generate a portion of a first image of a first view of a virtual space at a first rate. An amount of change from the first view to a second view of the virtual space can be determined. A portion of a second image of the second view of the virtual space can be determined to generate through a second reprojection. The second reprojection can be performed at a second rate to the portion of the second image according to the amount of change from the first view to the second view of the virtual space.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: May 17, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Dean Joseph Beeler, Volga Aksoy
  • Patent number: 11328500
    Abstract: A platform for design of a lighting installation generally includes an automated search engine for retrieving and storing a plurality of lighting objects in a lighting object library and a lighting design environment providing a visual representation of a lighting space containing lighting space objects and lighting objects. The visual representation is based on properties of the lighting space objects and lighting objects obtained from the lighting object library. A plurality of aesthetic filters is configured to permit a designer in a design environment to adjust parameters of the plurality of lighting objects handled in the design environment to provide a desired collective lighting effect using the plurality of lighting objects.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: May 10, 2022
    Assignee: KORRUS, INC.
    Inventors: Benjamin James Harrison, Shruti Koparkar, Mark Reynoso, Paul Pickard, Raghuram L. V. Petluri, Gary Vick, Andrew Villegas
  • Patent number: 11324285
    Abstract: Systems and processes for measuring and sizing a foot are provided. In one exemplary process, at least one image of a foot and a horizontal reference object from a first point of view is captured. A portion of the foot may be disposed against a vertical reference object. The at least one image may be displayed, where one or more camera guides are overlaid on the at least one displayed image. In response to aligning the one or more camera guides with one or more of the horizontal reference object and the vertical reference object, a measurement of the foot based on the at least one captured image from the first point of view is determined.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: May 10, 2022
    Assignee: NIKE, Inc.
    Inventors: Joseph Hei, Vivian Chiang, Jason Warren
  • Patent number: 11327608
    Abstract: An operation detection method of detecting an operation of a pointing element with respect to an operation surface includes converting first and second taken image obtained by imaging the operation surface into first and second converted taken image calibrated with respect to the operation surface respectively, determining whether or not the pointing element contacts with the operation surface and is in a resting state based on the first and second converted taken image, selecting the first and second converted taken image at when it was determined that the pointing element contacts and to be in the resting state as first and second reference image, calculating a first difference in position between the pointing element in the first reference image and the pointing element in the first converted taken image, calculating a second difference in position between the pointing element in the second reference image selected and the pointing element in the second converted taken image, and determining whether or not
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: May 10, 2022
    Assignee: Seiko Epson Corporation
    Inventors: Mirza Tahir Ahmed, Yang Yang
  • Patent number: 11321863
    Abstract: Systems, methods, and other embodiments described herein relate to generating depth estimates of an environment depicted in a monocular image. In one embodiment, a method includes identifying semantic features in the monocular image according to a semantic model. The method includes injecting the semantic features into a depth model using pixel-adaptive convolutions. The method includes generating a depth map from the monocular image using the depth model that is guided by the semantic features. The pixel-adaptive convolutions are integrated into a decoder of the depth model. The method includes providing the depth map as the depth estimates for the monocular image.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: May 3, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Rares A. Ambrus, Jie Li, Adrien David Gaidon
  • Patent number: 11321833
    Abstract: A method for segmenting metal objects in projection images acquired using different projection geometries is provided. Each projection image shows a region of interest. A three-dimensional x-ray image is reconstructed from the projection images in the region of interest. A trained artificial intelligence segmentation algorithm is used to calculate first binary metal masks for each projection image. A three-dimensional intermediate data set of a reconstruction region that is larger than the region of interest is reconstructed by determining, for each voxel of the intermediate data set, as a metal value, a number of first binary metal masks showing metal in a pixel associated with a ray crossing the voxel. A three-dimensional binary metal mask is determined. Second binary metal masks are determined for each projection image by forward projecting the three-dimensional binary metal mask using the respective projection geometries.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: May 3, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Holger Kunze, Peter Fischer, Björn Kreher, Tristan Gottschalk, Michael Manhart
  • Patent number: 11321953
    Abstract: Computer-implemented methods and computerized apparatus for posture, dimension, and shape measurements of at least one 3D object in a scanned 3D scene are provided. The method comprises receiving a point cloud and performing 3D geometric feature extraction. In one embodiment, the 3D geometric feature extraction is based on a 3D hybrid voxel-point structure, which comprises a hybrid voxel-point based normal estimation, a hybrid voxel-point based plane segmentation, a voxel-based geometric filtering, a voxel-based edge detection, and a hybrid voxel-point based line extraction. Through the process of 3D geometric feature extraction, the geometric features are then passed to the geometric-based dimension and shape measurements for various applications. After 3D geometric feature extraction, a further process of feature-based object alignment is performed.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: May 3, 2022
    Assignee: HONG KONG APPLIED SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE COMPANY LIMITED
    Inventors: Hoi Fai Yu, Xueyan Tang, Meng Chen
  • Patent number: 11315328
    Abstract: A system can include a device and a graphics processing unit (GPU). The device can be configured to receive a first image from one or more cameras corresponding to a first view and a second image from the one or more cameras corresponding to a second view. The GPU can include a motion estimator and be configured to receive the first image and the second image. The motion estimator can be configured to determine first disparity offsets for the first image and second disparity offsets for the second image. The device can be configured to generate, for rendering a 3D image using the first image and the second image, a first depth buffer for the first image derived from the first disparity offsets and a second depth buffer for the second image derived from the second disparity offsets.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: April 26, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Volga Aksoy, Dean Joseph Beeler
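Deriving a depth buffer from disparity offsets typically rests on the pinhole stereo relation z = f·B/d (depth from focal length, baseline, and disparity). A hedged sketch of that conversion plus a normalized depth-buffer clamp (parameter names are illustrative; the patent does not spell out this formula):

```python
def disparity_to_depth(disparities, focal_px, baseline_m, eps=1e-6):
    """Convert per-pixel disparity offsets (pixels) to metric depth using
    the standard pinhole stereo relation: z = f * B / d."""
    return [focal_px * baseline_m / max(d, eps) for d in disparities]

def depth_buffer(disparities, focal_px, baseline_m, near, far):
    """Clamp metric depths into a normalized [0, 1] depth-buffer range."""
    depths = disparity_to_depth(disparities, focal_px, baseline_m)
    return [min(max((z - near) / (far - near), 0.0), 1.0) for z in depths]
```

The `eps` guard keeps zero-disparity (infinitely far) pixels from dividing by zero; a real renderer would more likely map them to the far plane explicitly.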
  • Patent number: 11315272
    Abstract: The present approach relates to an automatic and efficient motion plan for a drone to collect and save a qualified dataset that may be used to improve reconstruction of 3D models from the acquired data. The present architecture provides an automatic image processing context, eliminating low-quality images and providing improved image data for point cloud generation and texture mapping.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: April 26, 2022
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Ming-Ching Chang, Junli Ping, Eric Michael Gros, Arpit Jain, Peter Henry Tu
  • Patent number: 11313684
    Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: April 26, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Mikhail Sizintsev, Xun Zhou, Philip Miller, Glenn Murray
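The IMU propagation step between visual matches can be illustrated with constant-acceleration dead reckoning: integrate acceleration into velocity, and velocity into position, over each measurement interval. This is a toy translation-only sketch (the system described above propagates full pose and fuses relative motion from tracked features, which is not shown here):

```python
def propagate_pose(pos, vel, accel, dt):
    """Propagate position and velocity from an IMU acceleration sample
    over dt, assuming constant acceleration within the interval."""
    new_vel = [v + a * dt for v, a in zip(vel, accel)]
    new_pos = [p + v * dt + 0.5 * a * dt * dt
               for p, v, a in zip(pos, vel, accel)]
    return new_pos, new_vel
```

Chaining this over consecutive IMU samples yields the "pose propagated from a previous pose" used when no geo-referenced feature match is available; each successful match would reset the accumulated drift.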
  • Patent number: 11315269
    Abstract: A system for generating point clouds having surface normal information includes one or more processors and a memory having a depth map generating module, a point cloud generating module, and a surface normal generating module. The depth map generating module causes the one or more processors to generate a depth map from one or more images of a scene. The point cloud generating module causes the one or more processors to generate a point cloud from the depth map having a plurality of points corresponding to one or more pixels of the depth map. The surface normal generating module causes the one or more processors to generate surface normal information for at least a portion of the one or more pixels of the depth map and inject the surface normal information into the point cloud such that the plurality of points of the point cloud include three-dimensional location information and surface normal information.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: April 26, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Victor Vaquero Gomez, Rares A. Ambrus, Vitor Guizilini, Adrien David Gaidon
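The pipeline this abstract describes — depth map, then point cloud, then normals injected alongside the XYZ coordinates — could be sketched as follows, assuming a pinhole camera model and gradient-based normal estimation (both are assumptions on my part; the patent does not fix a specific method):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into one 3D point per pixel (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def inject_normals(points):
    """Estimate per-pixel normals from the cross product of neighboring
    point differences, then concatenate them onto the XYZ channels so each
    point carries location plus surface normal information."""
    dx = np.gradient(points, axis=1)
    dy = np.gradient(points, axis=0)
    n = np.cross(dx, dy)
    n = n / np.clip(np.linalg.norm(n, axis=-1, keepdims=True), 1e-9, None)
    return np.concatenate([points, n], axis=-1)  # shape (H, W, 6)
```

For a fronto-parallel plane (constant depth), every injected normal comes out as the camera's viewing axis, which is a quick sanity check on the sign conventions.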
  • Patent number: 11313676
    Abstract: The three-dimensional measurement apparatus includes a projecting unit configured to project patterned light onto a measurement target and an image capturing unit configured to capture an image of the measurement target onto which the patterned light is projected, with a predetermined exposure time, a calculation unit configured to calculate positions of a three-dimensional point group expressing a three-dimensional shape of the measurement target based on feature points included in the image, and a determination unit configured to determine the exposure time such that at least one of the number of feature points and the number of three-dimensional point groups is equal to or greater than a threshold that is defined based on one of their maximum numbers, and the exposure time is shorter than an exposure time for the maximum number.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: April 26, 2022
    Assignee: OMRON Corporation
    Inventors: Xingdou Fu, Takashi Shimizu
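The determination unit's rule — keep the count at or above a threshold derived from the maximum count, while using an exposure shorter than the one that maximizes the count — might look like the following toy selection over candidate exposures (the 0.8 ratio and all names are assumptions):

```python
def select_exposure(counts_by_exposure, ratio=0.8):
    """Pick the shortest exposure time whose feature (or 3D point) count
    reaches `ratio` of the maximum count over all candidate exposures.
    Because the maximizing exposure always qualifies, the result is never
    longer than it."""
    max_count = max(counts_by_exposure.values())
    threshold = ratio * max_count
    qualifying = [t for t, c in counts_by_exposure.items() if c >= threshold]
    return min(qualifying)
```

With counts {10 ms: 50, 20 ms: 90, 40 ms: 100, 80 ms: 100} and ratio 0.8, the threshold is 80 features, so the 20 ms exposure is chosen: shorter than the 40 ms that first reaches the maximum, yet still above the threshold.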
  • Patent number: 11317076
    Abstract: A peripheral device includes an image sensing module configured to include a plurality of sensors. Each of the sensors is arranged on the peripheral device and is configured to capture image data for objects in a real-world space. A processor is part of the peripheral device for determining fixed points in the image data. The fixed points are used to determine changes in spatial position of the peripheral device with reference to the fixed points in the real-world space. In one example, the peripheral device includes a motion sensor to additionally assist in determining the changes in spatial position of the peripheral device.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: April 26, 2022
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Gary Zalewski
  • Patent number: 11315276
    Abstract: Methods for stereo calibration of a dual-camera that includes a first camera and a second camera and system for performing such methods. In some embodiments, a method comprises obtaining optimized extrinsic and intrinsic parameters using initial intrinsic parameters and, optionally, initial extrinsic parameters of the cameras, estimating an infinity offset e using the optimized extrinsic and intrinsic parameters, and estimating a scaling factor s using the optimized extrinsic and intrinsic parameters and infinity offset parameter e, wherein the optimized extrinsic and intrinsic parameters, infinity offset e and scaling factor s are used together to provide stereo calibration that leads to improved depth estimation.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: April 26, 2022
    Assignee: Corephotonics Ltd.
    Inventors: Nadav Geva, Paz Ilan, Oded Gigushinski, Ephraim Goldenberg, Gal Shabtay
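One plausible reading of how the infinity offset e and scaling factor s enter depth estimation is as a corrected disparity-to-depth mapping: subtract the residual disparity observed at infinity before inverting, and scale the result. The exact functional form below is an assumption for illustration, not the patent's:

```python
def calibrated_depth(disparity, focal_px, baseline_m, e=0.0, s=1.0):
    """Hypothetical corrected pinhole depth: z = s * f * B / (d - e).
    `e` models the disparity a point at infinity still exhibits after
    imperfect rectification; `s` rescales the resulting depth."""
    corrected = disparity - e
    if corrected <= 0:
        return float("inf")  # at or beyond the calibrated infinity point
    return s * focal_px * baseline_m / corrected
```

Without the offset, a point at infinity with residual disparity e would be assigned a finite, badly wrong depth; subtracting e restores the expected divergence, which matches the abstract's claim that the two parameters together improve depth estimation.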
  • Patent number: 11308284
    Abstract: In one embodiment, a method includes receiving a user input from a user from a client system associated with the user, wherein the client system comprises one or more cameras, determining one or more points of interest in a field of view of the one or more cameras based on one or more machine-learning models and sensory data captured by the one or more cameras, generating a plurality of media files based on the one or more points of interest, wherein each media file is a recording of at least one of the one or more points of interest, generating one or more highlight files based on the plurality of media files, wherein each highlight file comprises a media file that satisfies a predefined quality standard, and sending instructions for presenting the one or more highlight files to the client system.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: April 19, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Lisa Xiaoyi Huang, Eric Xiao, Nicholas Michael Andrew Benson, Yating Sheng, Zijian He
  • Patent number: 11308696
    Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: April 19, 2022
    Assignee: Apple Inc.
    Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
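The media compositor's core operation — aligning rendered frame content with additional data for the same instants in time — can be sketched as a timestamp-keyed merge. This is a simplified dictionary-based stand-in (real streams would arrive asynchronously and need buffering and interpolation, which are omitted):

```python
def composite_streams(frames, extras):
    """Align rendered frames with additional data by timestamp, producing
    one composited, time-stamped record per instant. `frames` and `extras`
    map timestamp -> payload; instants missing from `extras` pair with None."""
    return {t: (frames[t], extras.get(t)) for t in sorted(frames)}
```

Each resulting record corresponds to one time-stamped dataset in the composited stream, ready to be serialized for recording or streaming.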
  • Patent number: 11308576
    Abstract: In accordance with implementations of the subject matter described herein, there is proposed a solution of visual stylization of stereoscopic images. In the solution, a first feature map for a first source image and a second feature map for a second source image are extracted. The first and second source images correspond to first and second views of a stereoscopic image, respectively. A first unidirectional disparity from the first source image to the second source image is determined based on the first and second source images. First and second target images having a specified visual style are generated by processing the first and second feature maps based on the first unidirectional disparity. Through the solution, the disparity between the two source images of a stereoscopic image is taken into account when performing the visual style transfer, thereby maintaining a stereoscopic effect in the stereoscopic image consisting of the target images.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: April 19, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lu Yuan, Gang Hua, Jing Liao, Dongdong Chen
  • Patent number: 11308336
    Abstract: A method for operating a camera-monitor system for a motor vehicle, in which the camera-monitor system has two cameras which are assigned to a common side of the motor vehicle and are both designed to provide an image of the surroundings of the motor vehicle, wherein the imaged surroundings of the images partially overlap. The system ascertains transformation parameters for transforming a second image of the second camera such that an image element in a peripheral region of a transformed second image connects to a corresponding image element in a peripheral region of a first image of the first camera. The system further ascertains further transformation parameters for transforming the second image in dependence on the ascertained transformation parameters and on a specification provided for the transformed second image. The system transforms the second image with the further transformation parameters into the transformed second image.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: April 19, 2022
    Assignee: Continental Automotive GmbH
    Inventor: Andreas Weinlich
  • Patent number: 11308312
    Abstract: The present teaching relates to method, system, medium, and implementations for understanding a three dimensional (3D) scene. Image data acquired by a camera at different time instances with respect to the 3D scene are received wherein the 3D scene includes a user or one or more objects. The face of the user is detected and tracked at different time instances. With respect to some of the time instances, a 2D user profile representing a region in the image data occupied by the user is generated based on a corresponding face detected and a corresponding 3D space in the 3D scene is estimated based on calibration parameters associated with the camera. Such estimated 3D space occupied by the user in the 3D scene is used to dynamically update a 3D space occupancy record of the 3D scene.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: April 19, 2022
    Assignee: DMAI, INC.
    Inventor: Nishant Shukla
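Estimating the 3D space a detected face region occupies from camera calibration parameters amounts to back-projecting the 2D region into the scene at an estimated depth. A pinhole-model sketch (the fixed single depth and all names are simplifying assumptions; the actual system updates a full 3D occupancy record over time):

```python
def backproject_box(box, depth, fx, fy, cx, cy):
    """Lift a 2D image box (u0, v0, u1, v1) at an assumed depth into a 3D
    axis-aligned region using the pinhole camera model: X = (u - cx) * z / fx,
    Y = (v - cy) * z / fy, Z = z."""
    u0, v0, u1, v1 = box
    def to3d(u, v):
        return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
    return to3d(u0, v0), to3d(u1, v1)
```

Sweeping the assumed depth over a plausible range (e.g., from face size priors) would turn the single slab returned here into the estimated 3D volume used to update the occupancy record.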
  • Patent number: 11301917
    Abstract: Devices, systems, and methods include a three-dimensional (3D) scanning element, an electronic data storage configured to store a database including fields for 3D scan data and demographic information, a processor, and a user interface. In an example, the processor obtains 3D scan data of a body part of a subject from the 3D scanning element, analyzes the 3D scan data for incomplete regions, generates a composite 3D image of 3D scan data from the database based on similarities of demographic information, and overlays composite 3D image regions corresponding to incomplete regions on the 3D scan data.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: April 12, 2022
    Assignee: NIKE, Inc.
    Inventor: Christopher Andon
  • Patent number: 11301677
    Abstract: There is disclosed a computer implemented eye tracking system and corresponding method and computer readable storage medium, for detecting three-dimensional (3D) gaze, by obtaining at least one head pose parameter using a head pose prediction algorithm, the head pose parameter(s) comprising one or more of a head position, pitch, yaw, or roll; and inputting the at least one head pose parameter along with at least one image of a user's eye, generated from a 2D image captured using an image sensor associated with the eye tracking system, into a neural network configured to generate 3D gaze information based on the at least one head pose parameter and the at least one eye image.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: April 12, 2022
    Assignee: Tobii AB
    Inventors: David Molin, Tommaso Martini, Maria Gordon, Alexander Davies, Oscar Danielsson
  • Patent number: 11301712
    Abstract: Systems and processes for identifying a pointer in an image of an analog instrument are provided herein. An instrument contour in the image corresponding to the analog instrument may be identified. A plurality of candidate pointer contours in the image may be identified and screened using one or more geometric property screening techniques including an evaluation of a geometric area, a distance parameter, and/or a gravity center of the plurality of candidate pointer contours. Principal component analysis (PCA) may be performed to select an identified pointer contour from among the reduced plurality of candidate pointer contours. A linear regression model may be applied to pixel points in the contour area of the identified pointer contour and a slope and angle of an associated pointer represented by the identified pointer contour may be determined based on an output of the linear regression model.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: April 12, 2022
    Assignee: SAP SE
    Inventors: Jie He, Yanbing Li, Hong Liu, Yubo Lou, Lin Cai, Xuemin Wang
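The final step above — fitting a linear regression to the pixel points of the identified pointer contour and reading off slope and angle — can be sketched with ordinary least squares (an illustrative closed-form fit, not necessarily the patent's exact model):

```python
import math

def pointer_angle(points):
    """Fit y = a*x + b by ordinary least squares over the pointer-contour
    pixels and return the pointer's angle in degrees from the x-axis."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.degrees(math.atan(slope))
```

A gauge reader would then map this angle, relative to the calibrated angles of the dial's minimum and maximum marks, to an instrument value; near-vertical pointers (where the x-variance vanishes) would need the regression done on swapped axes.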
  • Patent number: 11302074
    Abstract: Implementations generally relate to 3D modeling. In some implementations, a method includes displaying a live video of a scene on a device with a user interface overlay on top of the scene, wherein the user interface overlay includes one or more objects and includes one or more selectable controls. The method further includes receiving a selection of one or more new virtual points on a target object, wherein the target object is one object of the one or more objects in an image. The method further includes generating 3D polygonal data, wherein the 3D polygonal data is saved as a 3-dimensional (3D) model.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: April 12, 2022
    Assignee: Sony Group Corporation
    Inventor: Thomas Dawson
  • Patent number: 11302013
    Abstract: One or more features of a friction ridge signature of a subject may be identified based on information representing a three-dimensional topography of friction ridges of the subject. Information representing the three-dimensional topography of the friction ridges of the subject may be received. One or more level-three features of the friction ridge signature of the subject may be identified based on the information representing the three-dimensional topography of the friction ridges of the subject. The one or more level-three features may include one or more topographical ridge peaks, topographical ridge notches, topographical ridge passes, pores, and/or other information.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: April 12, 2022
    Assignee: Identification International, Inc.
    Inventors: Richard K. Fenrich, Bryan D. Dickerson
  • Patent number: 11301036
    Abstract: In one implementation, an apparatus includes a display to emit light in a first wavelength range, a camera to detect light in a second wavelength range, an eyepiece to distort light in the first wavelength range, and one or more light sources, disposed between the eyepiece and the display, to emit light in the second wavelength range. In one implementation, an apparatus includes a display to emit light in a first wavelength range, one or more light sources to emit light in a second wavelength range, a camera to detect light in a second wavelength range, and an eyepiece to reflect and refract light in the first wavelength range while passing, without substantial distortion, light in the second wavelength range.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: April 12, 2022
    Assignee: APPLE INC.
    Inventors: Noah D. Bedard, Branko Petljanski, John N. Border, Kathrin Berkner-Cieslicki, Qiong Huang