More Than Two Cameras Patents (Class 348/48)
  • Patent number: 11959744
    Abstract: Disclosed is a stereophotogrammetric method based on binocular vision, including the following steps: image acquisition, image correction, and stereo matching are performed; cost matching and cost aggregation are performed on corrected images of different sizes; image segmentation is performed on the corrected image to determine the edge pixel points of the object to be measured; and the pixel distance at an edge of the object is calculated to measure its size. The method enhances the matching accuracy of the object's contour pixels and improves the measurement accuracy.
    Type: Grant
    Filed: December 15, 2023
    Date of Patent: April 16, 2024
    Assignee: North China University of Science and Technology
    Inventors: Yina Suo, Xuebin Ning, Fuxing Yu, Ran Wang
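The size measurement described above ultimately rests on the pinhole-stereo relation between disparity and depth. A minimal sketch of that relation (not the patented pipeline; the focal length, baseline, disparity, and edge-pixel distance are made-up illustrative numbers):

```python
# Hedged sketch of size-from-stereo, assuming a rectified binocular rig.
# f_px (focal length in pixels), baseline_m, and all measured values
# below are invented for illustration.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole-stereo relation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def metric_size(f_px, depth_m, pixel_extent_px):
    """Back-project a pixel distance between detected edge points to metres."""
    return pixel_extent_px * depth_m / f_px

z = depth_from_disparity(f_px=1200.0, baseline_m=0.12, disparity_px=48.0)
width = metric_size(f_px=1200.0, depth_m=z, pixel_extent_px=200.0)
# z -> 3.0 m, width -> 0.5 m
```

The accuracy claim in the abstract hinges on how precisely those edge pixels are localized, since any pixel error scales by Z/f in the final size.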
  • Patent number: 11961244
    Abstract: Disclosed is a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection. The method comprises: firstly, by means of fringe projection technology based on a stereoscopic phase unwrapping method, and with the assistance of an adaptive dynamic depth constraint mechanism, acquiring high-precision three-dimensional (3D) data of an object in real time without any additional auxiliary fringe pattern; and then, after rapidly acquiring two-dimensional (2D) matching points optimized by means of the corresponding 3D information, carrying out, via a two-thread parallel mechanism, coarse registration based on Simultaneous Localization and Mapping (SLAM) technology and fine registration based on Iterative Closest Point (ICP) technology.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: April 16, 2024
    Assignee: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Chao Zuo, Jiaming Qian, Qian Chen, Shijie Feng, Tianyang Tao, Yan Hu, Wei Yin, Liang Zhang, Kai Liu, Shuaijie Wu, Mingzhu Xu, Jiaye Wang
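The fine-registration stage named above (ICP) reduces, at each iteration, to a least-squares rigid fit between corresponded points. A hedged sketch of just that inner solve (the Kabsch algorithm), assuming correspondences are already known; the patent's SLAM-based coarse registration and two-thread pipeline are not reproduced here:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t with dst ~= src @ R.T + t (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # forbid reflections
    R = (U @ D @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate and translate a random cloud, then recover the motion.
rng = np.random.default_rng(0)
src = rng.standard_normal((50, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_fit(src, dst)
```

A full ICP loop alternates this solve with nearest-neighbor correspondence search until the residual stops improving.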
  • Patent number: 11917232
    Abstract: Collaborative video capture and sharing is described. A login to a primary device is received from a first user. A session is formed between the primary device and a secondary device. A start command and a stop command are sent during the session from the primary device to the secondary device to cause the secondary device to begin and stop a take. The session ID and an identification of the devices of the session are loaded to a remote data store from the primary device through the primary device communications interface. The take at the primary device is uploaded into the remote data store through the primary device communications interface with the session ID, and the primary device ID; and the secondary device uploads the take of the secondary device camera into the remote data store through the secondary device communications interface with the secondary device ID.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: February 27, 2024
    Assignee: VIMMERSE, INC.
    Inventors: Jill M. Boyce, Basel Salahieh
  • Patent number: 11879775
    Abstract: Various embodiments disclosed herein describe a divided-aperture infrared spectral imaging (DAISI) system that is adapted to acquire multiple IR images of a scene with a single shot (also referred to as a snapshot). The plurality of acquired images have different wavelength compositions and are obtained generally simultaneously. The system includes at least two optical channels that are spatially and spectrally different from one another. Each of the at least two optical channels is configured to transfer IR radiation incident on the optical system towards an optical FPA unit comprising at least two detector arrays disposed in the focal plane of two corresponding focusing lenses. The system further comprises at least one temperature reference source or surface that is used to dynamically calibrate the two detector arrays and compensate for a temperature difference between the two detector arrays.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: January 23, 2024
    Assignee: Rebellion Photonics, Inc.
    Inventors: Robert Timothy Kester, Nathan Adrian Hagen
  • Patent number: 11869218
    Abstract: A system and/or method for imaging system calibration, including: determining a dense correspondence map matching features in a first image to features in a second image, and determining updated calibration parameters associated with the image acquisition system based on the matching features.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: January 9, 2024
    Assignee: Compound Eye, Inc.
    Inventors: Jason Devitt, Haoyang Wang, Harold Wadleigh, Zachary Flom
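The abstract above describes recovering updated calibration parameters from a dense correspondence map. One simple diagnostic in that spirit (an illustrative sketch, not Compound Eye's method): in a well-calibrated rectified pair, matched features should lie on the same image row, so the median vertical offset of correspondences estimates a residual rectification error:

```python
import numpy as np

def vertical_misalignment(y_left, y_right):
    """Median vertical offset between matched features of a rectified pair.
    Near zero for a well-calibrated rig; a consistent nonzero value is a
    cue that the calibration parameters should be updated."""
    return float(np.median(np.asarray(y_right) - np.asarray(y_left)))

# Toy correspondences: every match sits 1.5 px lower in the right image.
y_left = np.array([10.0, 55.0, 120.0, 240.0])
y_right = y_left + 1.5
drift = vertical_misalignment(y_left, y_right)  # -> 1.5
```

The median (rather than the mean) keeps a few bad matches in the dense map from skewing the estimate.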
  • Patent number: 11836878
    Abstract: A method for constructing a real-geographic-space scene in real time based on a panoramic-video technique is provided. A measuring robot and attitude sensors accurately determine the geographic coordinates and attitudes of the cameras, which may be installed in a fixed manner or in a stringing manner. In the fixed type, a plurality of neighboring videos at the same moment undergo orthographic correction and splicing; in the stringing type, the cameras are installed on a guiding device and may locally, independently, and quickly move and shoot, and the videos of the neighboring cameras are spliced in real time. The videos, the geographic coordinates, and the environment sounds that satisfy the delay time are then fused to form a scene video stream.
    Type: Grant
    Filed: May 9, 2023
    Date of Patent: December 5, 2023
    Assignees: PEKING UNIVERSITY, Beijing LongRuan Technologies Inc.
    Inventors: Shanjun Mao, Yingbo Fan, Ben Li, Huazhou Chen, Xinchao Li
  • Patent number: 11783501
    Abstract: Provided are a method and apparatus for determining image depth information, an electronic device, and a medium. The method includes: acquiring first depth information of pixels in a target image output by a first prediction layer; generating the point cloud model of the target image according to the first depth information, and determining initial depth information of the pixels in the target image in a second prediction layer according to the point cloud model; and performing propagation optimization according to the initial depth information, and determining second depth information of the pixels in the target image output by the second prediction layer, where the first prediction layer is configured before the second prediction layer.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: October 10, 2023
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Ge Wu, Yu Zhou
  • Patent number: 11778231
    Abstract: Processing a 360-degree video content for video coding may include receiving the video content in a first geometry. The video content may include unaligned chroma and luma components associated with a first chroma sampling scheme. The unaligned chroma and luma components may be aligned to a sampling grid associated with a second chroma sampling scheme that has aligned chroma and luma components. A geometric conversion to the video content may be performed. The video content, that may comprise the aligned chroma and luma components, in the first geometry may be converted to a second geometry. The first geometry may be a stitched geometry, and the second geometry may be a coding geometry. The converted video content in the second geometry may include the chroma and luma components aligned to the sampling grid associated with the second chroma sampling scheme.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: October 3, 2023
    Assignee: VID Scale, Inc.
    Inventors: Yuwen He, Yan Ye, Ahmed Hamza
  • Patent number: 11743444
    Abstract: Provided is an electronic device for a temporal synchronization, which determines a set of parameters associated with each imaging device of a plurality of imaging devices. The set of parameters include frame rate of each imaging device. The electronic device generates a synchronization signal that includes a preamble pulse of a first time duration set based on the frame rate and a sequence of alternating ON and OFF pulses. Each pulse of the sequence of alternating ON and OFF pulses is of a second time duration set based on the set of parameters. Based on the synchronization signal, lighting devices may be controlled to generate a pattern of alternating light pulses that is captured by each imaging device. The electronic device further acquires a plurality of images that includes information about the pattern of alternating light pulses. The electronic device further synchronizes the plurality of images, based on the information.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: August 29, 2023
    Assignee: SONY GROUP CORPORATION
    Inventors: Brent Faust, Cheng-Yi Liu
  • Patent number: 11700362
    Abstract: A dual-camera image capture system may include a first light source, disposed above a target area, a first mobile unit, configured to rotate around the target area, and a second mobile unit, operatively coupled to the first mobile unit, configured to move vertically along the first mobile unit. The dual-camera image capture system may further include a second light source, operatively coupled to the second mobile unit and a dual-camera unit, operatively coupled to the second mobile unit. The dual-camera image capture system may include a first camera configured to capture structural data and a second camera configured to capture color data. The first mobile unit and the second mobile unit may be configured to move the first camera and the second camera to face the target area in a variety of positions around the target area.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: July 11, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Jim Hejl, Jerry Phaneuf, Aaron Jeromin
  • Patent number: 11681371
    Abstract: An eye tracking system comprising a controller configured to receive a reference image of an eye of a user and a current image of the eye of the user. The controller is also configured to determine a difference between the reference image and the current image to define a differential image. The differential image has a two dimensional pixel array of pixel locations that are arranged in a plurality of rows and columns. Each pixel location has a differential intensity value. The controller is further configured to calculate a plurality of row values by combining the differential intensity values in corresponding rows of the differential image and to determine eyelid data based on the plurality of row values.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: June 20, 2023
    Assignee: Tobii AB
    Inventors: Joakim Zachrisson, Simon Johansson, Mikael Rosell, Daniel Wrang
  • Patent number: 11669986
    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface-roughness, specular albedo, and surface-normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: June 6, 2023
    Assignees: ADOBE INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
  • Patent number: 11619722
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to receive an emitted polarized light beam at a lidar receiver that determines a polarization pattern and a distance to an object, wherein the polarization pattern is determined by comparing a linear polarization pattern and a circular polarization pattern, and to identify the object by processing the polarization pattern and the distance with a deep neural network, wherein the object can be identified as metallic or non-metallic. The instructions can include further instructions to operate a vehicle based on the identified object.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: April 4, 2023
    Assignees: FORD GLOBAL TECHNOLOGIES, LLC, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Sanjay Emani Sarma, Dajiang Suo
  • Patent number: 11615546
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different to the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the predicted target image using the processing system configured by the image processing application.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: March 28, 2023
    Assignee: Adeia Imaging LLC
    Inventor: Kartik Venkataraman
  • Patent number: 11610093
    Abstract: Disclosed are an artificial intelligence learning method and an operating method of a robot using the same. An on-screen label is generated based on image data acquired through a camera, an off-screen label is generated based on data acquired through other sensors, and the on-screen label and the off-screen label are used in learning for action recognition, thereby raising action recognition performance and recognizing a user's action even in a situation in which the user deviates from a camera's view.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: March 21, 2023
    Assignee: LG ELECTRONICS INC.
    Inventor: Jihwan Park
  • Patent number: 11579266
    Abstract: A lightweight, inexpensive LADAR sensor incorporating 3-D focal plane arrays is adapted specifically for modular manufacture and rapid field configurability and provisioning. The sensor generates, at high speed, 3-D image maps and object data at short to medium ranges. The techniques and structures described may be used to extend the range of long range systems as well, though the focus is on compact, short to medium range ladar sensors suitable for use in multi-sensor television production systems and 3-D graphics capture and moviemaking. 3-D focal plane arrays are used in a variety of physical configurations to provide useful new capabilities.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: February 14, 2023
    Assignee: Continental Autonomous Mobility US, LLC
    Inventors: Patrick Gilliland, Laurent Heughebaert, Joseph Spagnolia, Brad Short, Roger Stettner
  • Patent number: 11562498
    Abstract: Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: January 24, 2023
    Assignee: Adeia Imaging LLC
    Inventors: Ankit Jain, Priyam Chatterjee, Kartik Venkataraman
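The composite-depth step described above — selecting per pixel between the raw and secondary depth maps based on the confidence map — can be sketched directly (illustrative arrays and threshold, not the actual regularizer):

```python
import numpy as np

def composite_depth(raw, secondary, confidence, threshold=0.7):
    """Per-pixel selection: keep the raw depth where confidence is high,
    fall back to the secondary (regularized) estimate elsewhere."""
    return np.where(confidence >= threshold, raw, secondary)

raw = np.array([[1.0, 2.0], [9.9, 4.0]])       # 9.9 is an outlier estimate
secondary = np.array([[1.1, 2.1], [3.0, 4.1]])
conf = np.array([[0.9, 0.8], [0.2, 0.95]])     # low confidence at the outlier
fused = composite_depth(raw, secondary, conf)  # -> [[1.0, 2.0], [3.0, 4.0]]
```

The hybrid idea is that the first estimator is accurate where it is confident, while the second, smoother estimator fills in the low-confidence regions.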
  • Patent number: 11488268
    Abstract: A system for documenting a state of a building, comprising a first measuring system to determine actual data of the building, and at least one server having a memory for storing a digital model of the building, the server providing the digital model to the first measuring system and providing a blockchain of blocks, each block comprising a timestamp, hash data of a previous block and change data related to changes of the model, wherein a current block comprises current change data related to a latest change and task data related to tasks to be performed at the building hat have to be met. The first measuring system is adapted to interpret the task data, perform a measuring task, generate measurement data obtained by performing the measuring task, and to generate a new block of the blockchain, wherein the server is adapted to verify an admissibility of the new block.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: November 1, 2022
    Assignee: HEXAGON TECHNOLOGY CENTER GMBH
    Inventors: Johannes Maunz, Bernd Reimann
  • Patent number: 11394954
    Abstract: Systems, devices, media, and methods are presented for capturing and sharing visual content, especially three-dimensional video, with a plurality of users with compatible devices. The methods in some implementations include transmitting the visual content to a portable projector, generating a unique invite and broadcasting it to a plurality of eyewear devices, and projecting the visual content onto a screen for viewers who accepted the invite and are wearing a compatible eyewear device. For projection in 3D format, the eyewear devices include an active shutter mode. The portable projector may be housed in the interior of a case. In some configurations, the case interior is sized and shaped to support the projector and an eyewear device. The system may include multiple projectors at remote locations directed to project the visual content simultaneously.
    Type: Grant
    Filed: August 11, 2020
    Date of Patent: July 19, 2022
    Assignee: Snap Inc.
    Inventors: John Bernard Ardisana, Yoav Ben-Haim, Teodor Dabov, Varun Sehrawat
  • Patent number: 11368661
    Abstract: The embodiment of the present specification discloses an image synthesis method, apparatus and device for free-viewpoint.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: June 21, 2022
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Sheng Wang, Zhenyu Wang, Wen Gao
  • Patent number: 11353588
    Abstract: The present disclosure relates to systems and methods that provide information about a scene based on a time-of-flight (ToF) sensor and a structured light pattern. In an example embodiment, a sensor system could include at least one ToF sensor configured to receive light from a scene. The sensor system could also include at least one light source configured to emit a structured light pattern and a controller that carries out operations. The operations include causing the at least one light source to illuminate at least a portion of the scene with the structured light pattern and causing the at least one ToF sensor to provide information indicative of a depth map of the scene based on the structured light pattern.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: June 7, 2022
    Assignee: Waymo LLC
    Inventors: Caner Onal, David Schleuning, Brendan Hermalyn, Simon Verghese, Alex Mccauley, Brandyn White, Ury Zhilinsky
  • Patent number: 11336883
    Abstract: A three-dimensional (3D) imaging system includes a mobile device that has a display screen configured to display a series of patterns onto an object that is to be imaged. The mobile device also includes a front-facing camera configured to capture reflections of the series of patterns off of the object. The system also includes a controller that is configured to control a timing of the series of patterns that appear on the display screen and activation of the front-facing camera in relation to the appearance of the series of patterns.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: May 17, 2022
    Assignee: Northwestern University
    Inventors: Florian Willomitzer, Oliver Strider Cossairt, Marc Walton, Chia-Kai Yeh, Vikas Gupta, William Spies, Florian Schiffers
  • Patent number: 11323690
    Abstract: A dual-camera image capture system may include a first light source, disposed above a target area, a first mobile unit, configured to rotate around the target area, and a second mobile unit, operatively coupled to the first mobile unit, configured to move vertically along the first mobile unit. The dual-camera image capture system may further include a second light source, operatively coupled to the second mobile unit and a dual-camera unit, operatively coupled to the second mobile unit. The dual-camera image capture system may include a first camera configured to capture structural data and a second camera configured to capture color data. The first mobile unit and the second mobile unit may be configured to move the first camera and the second camera to face the target area in a variety of positions around the target area.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: May 3, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Jim Hejl, Jerry Phaneuf, Aaron Jeromin
  • Patent number: 11315340
    Abstract: Disclosed are devices, systems, methods, techniques, and computer program products for estimating a Region Of Interest (ROI) corresponding to a plurality of content streams. A method may include receiving a plurality of sensor data associated with a plurality of mobile devices. The plurality of sensor data corresponds to a plurality of content streams captured by the plurality of mobile devices. Further, each of the plurality of mobile devices may include at least one recorder configured to capture a corresponding content stream. Further, a sensor data associated with a mobile device may include a location data and an orientation data of the mobile device during capturing of a content stream. The method may further include, analyzing, by the system, the plurality of sensor data and determining, by the system, a ROI based on at least the plurality of sensor data, thereby generating an estimation of the ROI.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 26, 2022
    Assignee: Streaming Global, Inc.
    Inventors: Richard Oesterreicher, Jonathan Hessing, Zunair Ukani, Austin Schmidt
  • Patent number: 11276145
    Abstract: Systems and methods for stitching videos are disclosed. Image-based registration between frames from a first video source and frames from a second video source is performed at a first rate. Calibration-based registration between frames from the first video source and frames from the second video source is performed at a second rate higher than the first rate. Then, for a first frame from the first video source for which calibration-based registration data and image-based registration data have been generated, a stitching transform that maps the first frame to a counterpart frame from the second video source based on image-based registration data is generated. A delta transform from the image-based registration data and the calibration-based registration data at the first frame is also derived.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 15, 2022
    Assignee: Apple Inc.
    Inventors: Jianping Zhou, Walker Eagleston, Tao Zhang
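The delta transform described above can be sketched with 3x3 homographies (made-up matrices): it captures what the fast calibration-based registration misses relative to the slower image-based registration, and can then be re-applied at frames where only calibration data is available:

```python
import numpy as np

# Made-up 3x3 homographies: S_img from the slow image-based registration,
# S_cal from the fast calibration-based registration, at the same frame.
S_img = np.array([[1.0, 0.01, 5.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
S_cal = np.array([[1.0, 0.00, 4.0], [0.0, 1.0, -1.5], [0.0, 0.0, 1.0]])

# Delta transform: the residual that calibration alone misses at this frame.
delta = S_img @ np.linalg.inv(S_cal)

# At a later frame with only calibration-based data, approximate the full
# stitching transform by re-applying the delta.
S_cal_later = np.array([[1.0, 0.0, 4.2], [0.0, 1.0, -1.4], [0.0, 0.0, 1.0]])
S_approx = delta @ S_cal_later
```

By construction `delta @ S_cal` reproduces `S_img` exactly at the frame where both registrations exist; the approximation at later frames assumes the residual drifts slowly between image-based updates.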
  • Patent number: 11270467
    Abstract: A system and/or method for imaging system calibration, including: determining a dense correspondence map matching features in a first image to features in a second image, and determining updated calibration parameters associated with the image acquisition system based on the matching features.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: March 8, 2022
    Assignee: Compound Eye, Inc.
    Inventors: Jason Devitt, Haoyang Wang, Harold Wadleigh, Zachary Flom
  • Patent number: 11238603
    Abstract: This disclosure describes a configuration of an aerial vehicle, such as an unmanned aerial vehicle (“UAV”), that includes a plurality of cameras that may be selectively combined to form a stereo pair for use in obtaining stereo images that provide depth information corresponding to objects represented in those images. Depending on the distance between an object and the aerial vehicle, different cameras may be selected for the stereo pair based on the baseline between those cameras and a distance between the object and the aerial vehicle. For example, cameras with a small baseline (close together) may be selected to generate stereo images and depth information for an object that is close to the aerial vehicle. In comparison, cameras with a large baseline may be selected to generate stereo images and depth information for an object that is farther away from the aerial vehicle.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: February 1, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Scott Raymond Harris, Barry James O'Brien, Joshua John Watson
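The baseline-selection idea above can be sketched with the disparity relation d = f * B / Z: a pair is usable when the expected disparity falls in a workable range, and wider baselines give finer depth resolution. This is an illustrative heuristic, not the patented selection logic, and all parameter values are made up:

```python
def pick_stereo_pair(pairs, distance_m, f_px,
                     min_disparity_px=8.0, max_disparity_px=160.0):
    """Choose the camera pair whose baseline keeps the expected disparity
    d = f * B / Z inside a usable range at the given object distance.
    Among usable pairs, prefer the widest baseline (best depth resolution)."""
    usable = []
    for name, baseline_m in pairs:
        d = f_px * baseline_m / distance_m
        if min_disparity_px <= d <= max_disparity_px:
            usable.append((baseline_m, name))
    if not usable:
        return None
    return max(usable)[1]

# Hypothetical rig: three pairwise baselines selectable on the vehicle.
pairs = [("narrow", 0.05), ("medium", 0.20), ("wide", 0.80)]
near = pick_stereo_pair(pairs, distance_m=2.0, f_px=700.0)   # -> "medium"
far = pick_stereo_pair(pairs, distance_m=30.0, f_px=700.0)   # -> "wide"
```

As in the abstract, a close object rules out the wide pair (its disparity would exceed the matcher's search range), while a distant object needs the wide pair to keep the disparity measurable at all.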
  • Patent number: 11222640
    Abstract: Computing devices and methods utilizing a joint speaker location/speaker identification neural network are provided. In one example a computing device receives an audio signal of utterances spoken by multiple persons. Magnitude and phase information features are extracted from the signal and inputted into a joint speaker location and speaker identification neural network. The neural network utilizes both the magnitude and phase information features to determine a change in the person speaking. Output comprising the determination of the change is received from the neural network. The output is then used to perform a speaker recognition function, speaker location function, or both.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: January 11, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shixiong Zhang, Xiong Xiao
  • Patent number: 11212485
    Abstract: A signaling device able to operate in association with a capturing device able to capture a scene. The signaling device includes: an obtaining module able to obtain a first observation direction from the scene captured by the capturing device, a determining module able to determine a first field of observation associated with the first observation direction, and a signaling module able to signal the first field of observation using first signage based on at least one sensorial indicator.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: December 28, 2021
    Assignee: ORANGE
    Inventors: Catherine Salou, Jacques Chodorowski
  • Patent number: 11195292
    Abstract: An automobile-mounted imaging apparatus and a computer readable storage medium for detecting a distance to at least one object. The apparatus comprises circuitry configured to select at least two images from images captured by at least three cameras to use for detecting the distance to the at least one object based on at least one condition. Alternatively or additionally, the apparatus comprises circuitry configured to select two cameras of at least three cameras for detecting the distance to the at least one object based on at least one condition. Alternatively or additionally, the apparatus comprises circuitry configured to determine which of at least two cameras to use for detecting the distance to the at least one object based on at least one condition from at least three cameras capturing images.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: December 7, 2021
    Assignee: Sony Corporation
    Inventor: Nobuhiro Tsunashima
  • Patent number: 11166007
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: November 2, 2021
    Assignee: Light Field Lab, Inc.
    Inventors: Jonathan Sean Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Patent number: 11151687
    Abstract: Anamorphic system for the acquisition, post-production and reproduction of 360° panorama images on three-dimensional objects, suitable for the representation of panorama images deformed according to the principle of anamorphosis on a two-dimensional medium, the image being adapted to be reflected on the outer surface of a three-dimensional medium positioned centrally with respect to the two-dimensional medium; the system including: (A) acquisition: wherein 360° photography shots are taken around the object; the photography shots equidistant from the object and provided with focals; the photography shots also having a mutual superimposition included between 30% and 40% of the image; the acquisition step occurring over time, for acquiring shots of the entire object; (B) post-production: wherein the photography shots are edited to generate the deformed image; (C) reproduction of the image on a two-dimensional medium, the image provided with a central hole; and (D) positioning a three-dimensional medium at the ce
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: October 19, 2021
    Assignee: RS LIFE360 SOCIETÀ A RESPONSABILITÀ LIMITATA
    Inventor: Eliyahu Rozenberg
  • Patent number: 11128812
    Abstract: The disclosure includes a system and method for generating virtual reality content. For example, the disclosure includes a method for generating virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data with a processor-based computing device programmed to perform the generating, providing the virtual reality content to a user, detecting a location of the user's gaze at the virtual reality content, and suggesting an advertisement based on the location of the user's gaze. Another example includes receiving virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data to a first user with a processor-based computing device programmed to perform the receiving, generating a social network for the first user, and generating a social graph that includes user interactions with the virtual reality content.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: September 21, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jens Christensen, Thomas M. Annau, Arthur Van Hoff
  • Patent number: 11062613
    Abstract: A method for improving the interpretation of the surroundings of a UAV, and a UAV system, are presented herein. The method comprises the steps of acquiring an image comprising depth data, and determining a boundary between a first image portion and a second image portion. The second image portion surrounds the first image portion, and the boundary is defined by an interface between first periphery sub-portions and second periphery sub-portions. A difference in depth data between adjacent first periphery sub-portions and second periphery sub-portions is above a first predetermined threshold and/or the second periphery sub-portions comprise undefined depth data. The method further comprises determining the area of the first image portion, and if the area of the first image portion is below a second predetermined threshold, determining that the first image portion contains incorrect depth data and deleting and/or adjusting the incorrect depth data contained by the first image portion.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: July 13, 2021
    Assignee: EVERDRONE AB
    Inventor: Maciek Drejak
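    The area-based rejection described in the abstract above can be sketched in a few lines: flood-fill regions of smoothly varying depth (large depth jumps and undefined values act as boundaries), then clear regions whose area falls below a threshold. The helper name `clean_depth`, the threshold values, and the use of `None` for undefined depth are illustrative assumptions, not the patented implementation.

```python
def clean_depth(depth, depth_jump=1.0, min_area=4):
    """depth: 2D list of floats or None (undefined). Returns a cleaned copy
    in which small, depth-discontinuous regions are marked undefined."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [row[:] for row in depth]
    for r0 in range(rows):
        for c0 in range(cols):
            if seen[r0][c0] or depth[r0][c0] is None:
                continue
            # Flood-fill the region of pixels whose depth varies smoothly;
            # jumps >= depth_jump or undefined neighbors form the boundary.
            region, stack = [], [(r0, c0)]
            seen[r0][c0] = True
            while stack:
                r, c = stack.pop()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not seen[nr][nc]
                            and depth[nr][nc] is not None
                            and abs(depth[nr][nc] - depth[r][c]) < depth_jump):
                        seen[nr][nc] = True
                        stack.append((nr, nc))
            # Small isolated regions are treated as incorrect depth and cleared.
            if len(region) < min_area:
                for r, c in region:
                    out[r][c] = None
    return out
```

A single spurious depth value surrounded by a consistent background is deleted, while the background survives.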
  • Patent number: 11044418
    Abstract: A three-camera alignment system having a first camera module with a first fixed zoom; a second camera module with a second fixed zoom, coupled to the first camera module and calibrated to it; and a third camera module with a third fixed zoom, coupled to at least one of the first and second camera modules, wherein the third camera module is aligned to the calibrated first and second camera modules based on block matching along epipolar lines.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: June 22, 2021
    Assignee: Black Sesame International Holding Limited
    Inventor: Kuochin Chang
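    Block matching along an epipolar line, the alignment cue named above, can be illustrated with a rectified-image sketch: for a patch in one image, search along the same row of the other image for the shift minimizing a sum of absolute differences. The function name, SAD cost, and search range are assumptions for illustration, not the patented procedure.

```python
def match_along_epipolar(left, right, r, c, half=1, max_disp=4):
    """Return the disparity minimizing the sum of absolute differences (SAD)
    between a patch around (r, c) in `left` and candidate patches shifted
    along the same row (the epipolar line in rectified images) in `right`."""
    def patch(img, rr, cc):
        return [img[rr + dr][cc + dc] for dr in range(-half, half + 1)
                                      for dc in range(-half, half + 1)]
    ref = patch(left, r, c)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if c - d - half < 0:  # candidate patch would leave the image
            break
        cand = patch(right, r, c - d)
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

A feature shifted two columns between the two images yields a disparity of 2.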
  • Patent number: 11024046
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the generated target image using the processing system configured by the image processing application.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: June 1, 2021
    Assignee: FotoNation Limited
    Inventor: Kartik Venkataraman
  • Patent number: 11020022
    Abstract: The present disclosure relates to a positioning system suitable for use in an imaging system. The positioning system may include one or more cameras configured to capture images or videos of an imaging object and surrounding environment thereof for ROI targeting or patient position recognition. The positioning system may also include one or more position probes and sources configured to determine an instant location of an imaging object or an ROI thereof in a non-contact manner.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: June 1, 2021
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Weiqiang Hao, Zhuobiao He, Mingchao Wang, Yining Wang
  • Patent number: 11019258
    Abstract: The disclosure includes generating a stream of panoramic images. A method includes determining a first matching camera module. The method includes constructing a first camera map that associates a first pixel location in a first panoramic image with the first matching camera module, wherein the first pixel location corresponds to a point in the panorama seen from a first viewing direction. The method includes generating, based on the first camera map, a stream of first panoramic images.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: May 25, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Arthur Van Hoff, Thomas M. Annau, Jens Christensen
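    A camera map that associates each pixel location with a matching camera module might, for a ring rig and an equirectangular panorama, be sketched as picking the camera whose optical-axis yaw is closest to each column's viewing direction. The equirectangular layout and the `camera_yaws_deg` parameter are hypothetical simplifications of the patented mapping.

```python
def build_camera_map(width, camera_yaws_deg):
    """For each column of an equirectangular panorama, record the index of the
    camera whose optical-axis yaw is closest to that column's viewing
    direction (angular distance measured on the circle)."""
    cam_map = []
    for x in range(width):
        yaw = x / width * 360.0  # viewing direction for this column
        best = min(range(len(camera_yaws_deg)),
                   key=lambda i: min(abs(yaw - camera_yaws_deg[i]) % 360,
                                     360 - abs(yaw - camera_yaws_deg[i]) % 360))
        cam_map.append(best)
    return cam_map
```

With four cameras at 90° spacing, each quarter of the panorama maps to the camera facing that direction.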
  • Patent number: 11012601
    Abstract: A dual camera module system includes a housing and a pair of camera modules (e.g., digital cameras) that are aligned with fields of view that overlap below the housing. The camera modules are mounted to a bench within the housing. The dual camera module system is configured for mounting above areas such as retail establishments or other materials handling facilities. The housing is formed from one or more sections of plastic or like materials, and includes inlets and outlets that enable air to flow past the camera modules and other components within the housing, and to maintain such components at a desired temperature. Images captured by the imaging devices of the camera module may be utilized for any purpose.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: May 18, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Shuai Yue, Matthew Christopher Smith
  • Patent number: 10902556
    Abstract: The disclosure is directed to a method to compensate for visual distortion when viewing video image streams from a multiple camera capture of a scene where the method determines the disparity difference utilizing the user view orientation and then compresses and/or stretches the left and/or right eye video image streams to compensate for the visual distortion. In another aspect, the method describes additional adjustments and corrections to the video image streams including rotating, tilting, shifting, and scaling the video image streams, and correcting for gapping and clipping visual image artifacts. In another aspect, a visual compensation system is described to implement the method. Additionally, a visual compensation apparatus is disclosed to perform the method operations.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: January 26, 2021
    Assignee: Nvidia Corporation
    Inventor: David Cook
  • Patent number: 10897626
    Abstract: There are provided methods and apparatus for video usability information (VUI) for scalable video coding (SVC). An apparatus includes an encoder (100) for encoding video signal data into a bitstream. The encoder specifies video usability information, excluding hypothetical reference decoder parameters, in the bitstream using a high-level syntax element. The video usability information corresponds to a set of interoperability points in the bitstream relating to scalable video coding (340, 355).
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: January 19, 2021
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Jiancong Luo, Lihua Zhu, Peng Yin
  • Patent number: 10893212
    Abstract: There is provided an all-celestial imaging apparatus that captures images from which depth information relating to an imaged object can be estimated while suppressing the generation of occlusion. The apparatus, which is an aspect of the present technique, includes plural imaging parts, each directed in a different direction and arranged such that all imaging ranges on at least one circumference of the combined imaging ranges are each overlapped by the angles of view of two or more pairs of imaging parts. The present technique is applicable to, for example, an all-celestial camera capturing images used to estimate depth information on the distance to an object that may be present in any of the 360° azimuth directions.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: January 12, 2021
    Assignee: SONY CORPORATION
    Inventor: Tooru Masuda
  • Patent number: 10887555
    Abstract: A depth-sensitive system for monitoring for and detecting a predefined condition at a specific location within a visually monitored portion of an area. Plural depth-sensitive cameras are oriented with respect to the area whereby each camera has a field of view within the area that at least partially overlaps with the field of view of another of the plural cameras. The combined field of view encompasses all portions of interest of the area. A system for providing a notification of the visual detection of a predefined condition at a particular location or set of locations within an area of interest is provided, as is a system for generating a visual representation of human activity at one or more specific locations within an area of interest.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: January 5, 2021
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventor: Michael Heydlauf
  • Patent number: 10878585
    Abstract: A camera array for a scalable tracking system includes cameras that are communicatively coupled to camera clients. The cameras are arranged in a grid such that no camera is directly adjacent in the same row or column of the grid to another camera that is communicatively coupled to the same camera client. Cameras that are arranged along a diagonal of the grid are communicatively coupled to the same camera client.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: December 29, 2020
    Assignee: 7-Eleven, Inc.
    Inventors: Caleb Austin Boulio, Sailesh Bharathwaaj Krishnamurthy, Sarath Vakacharla, Trong Nghia Nguyen, Shahmeer Ali Mirza
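    One assignment that satisfies both constraints in the abstract above (cameras along a diagonal share a client, while no two cameras adjacent in a row or column do) is a simple modular rule over the grid coordinates. This is an illustrative sketch of such an arrangement, not necessarily the patented wiring.

```python
def assign_clients(rows, cols, num_clients):
    """Assign camera (r, c) to client (r + c) % num_clients. Cameras on the
    same anti-diagonal (constant r + c) share a client; row/column neighbors
    differ by 1 mod num_clients, so they never share one (num_clients >= 2)."""
    return {(r, c): (r + c) % num_clients
            for r in range(rows) for c in range(cols)}
```

A 4x4 grid with three clients illustrates both properties.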
  • Patent number: 10855968
    Abstract: A method for transmitting stereoscopic video content according to the present disclosure comprises the steps of: generating, on the basis of data of a stereoscopic video which includes a plurality of omnidirectional videos having parallax, a first frame comprising a plurality of first views projected from the plurality of omnidirectional videos; generating a second frame comprising a plurality of second views by packing, on the basis of region-wise packing information, a plurality of first regions included in the plurality of first views; and transmitting data on the generated second frame, wherein the plurality of second views include a plurality of second regions corresponding to the plurality of first regions, and the region-wise packing information includes information on shape, orientation, or transformation for each of the plurality of second regions.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: December 1, 2020
    Inventor: Byeong-Doo Choi
  • Patent number: 10848745
    Abstract: A head-mounted display (HMD) is configured to capture images and/or video of a local area. The HMD includes an imaging assembly and a controller. The imaging assembly includes a plurality of cameras positioned at different locations on the HMD and oriented to capture images of different portions of a local area surrounding the HMD. The controller generates imaging instructions for each camera using image information. The imaging instructions cause respective midpoints of exposure times for each camera to occur at a same time value for each of the captured images. The cameras capture images of the local area in accordance with the imaging instructions. The controller determines a location of the HMD in the local area using the captured images and updates a model that represents a mapping function of the depth and exposure settings of the local area.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: November 24, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Oskar Linde, Andrew Melim
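    Making the midpoints of differing exposure times coincide reduces to simple arithmetic: trigger each camera half its exposure duration before the common midpoint. A minimal sketch (times in microseconds; the function name is an assumption, not the HMD's API):

```python
def start_times(target_midpoint_us, exposure_times_us):
    """Given a common target midpoint and each camera's exposure duration,
    return per-camera trigger times so that all exposure midpoints coincide."""
    return [target_midpoint_us - e / 2 for e in exposure_times_us]
```

Two cameras with 100 µs and 400 µs exposures both reach their midpoints at the target time.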
  • Patent number: 10845302
    Abstract: An infrared (IR) imaging system for determining a concentration of a target species in an object is disclosed. The imaging system can include an optical system including a focal plane array (FPA) unit behind an optical window. The optical system can have components defining at least two optical channels thereof, said at least two optical channels being spatially and spectrally different from one another. Each of the at least two optical channels can be positioned to transfer IR radiation incident on the optical system towards the optical FPA. The system can include a processing unit containing a processor that can be configured to acquire multispectral optical data representing said target species from the IR radiation received at the optical FPA. One or more of the optical channels may be used in detecting objects on or near the optical window, to avoid false detections of said target species.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: November 24, 2020
    Assignee: REBELLION PHOTONICS, INC.
    Inventors: Ryan Mallery, Ohad Israel Balila, Robert Timothy Kester
  • Patent number: 10839539
    Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: November 17, 2020
    Assignee: GOOGLE LLC
    Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
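    The binary-code matching cost described above resembles a census-style binarization compared by Hamming distance; the sketch below shows that idea in its simplest form, with fixed center comparisons standing in for however the patented method derives its codes.

```python
def census_code(patch):
    """Binarize a flattened patch against its center value (census transform),
    packing the comparisons into the bits of an integer."""
    center = patch[len(patch) // 2]
    code = 0
    for v in patch:
        code = (code << 1) | (1 if v < center else 0)
    return code

def matching_cost(code_a, code_b):
    """Matching cost as the Hamming distance between two binary codes."""
    return bin(code_a ^ code_b).count("1")
```

Identical patches cost zero; dissimilar patches cost the number of differing comparison bits.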
  • Patent number: 10838076
    Abstract: A self-position measuring device includes an information generator, a position extractor, and a navigation method selector. The information generator generates estimated information that is expected to be obtained when at least one of the ground information and the celestial information is obtained, at each of multiple reference points generated on the basis of a measured inertial navigation position. The position extractor checks at least one of the ground information and the celestial information against the estimated information at each of the multiple reference points and extracts the position of a specific reference point corresponding to the specific estimated information with the highest matching degree. The navigation method selector selects, on the basis of the specific reference point, a navigation method with less navigational error as the navigation method for measuring the self-position.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: November 17, 2020
    Assignee: SUBARU CORPORATION
    Inventors: Yoichi Onomura, Akihiro Yamane, Yukinobu Tomonaga, Takeshi Fukurose
  • Patent number: 10841516
    Abstract: An embodiment includes capturing, via a first camera, a first image having a first pixel density, where a pixel of the first image that corresponds to a first optical axis is substantially centered within the first image; capturing, via a second camera, a second image having a second pixel density that is greater than the first pixel density, where a pixel of the second image that corresponds to the first optical axis is off-center within the second image; and processing the second image to generate a fourth image such that a pixel of the fourth image that corresponds to the first optical axis is substantially centered within the fourth image, where the pixel density of the fourth image is substantially equal to that of the first image, and where the fourth image represents a field of view that is substantially equal to the field of view represented by the first image.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: November 17, 2020
    Assignee: SNAP-ON INCORPORATED
    Inventors: Robert Hoevenaar, Timothy G. Ruther
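    Re-centering the denser second image and matching its pixel density to the first can be sketched as a crop around the optical-axis pixel plus subsampling. Nearest-neighbor subsampling and the helper name are simplifying assumptions standing in for proper resampling.

```python
def recenter_and_downsample(img, axis_r, axis_c, out_half, step):
    """Crop `img` so the pixel at (axis_r, axis_c) (the first camera's optical
    axis) lands at the center of the output, then subsample every `step`
    pixels so the output's pixel density and field of view approximate those
    of the lower-resolution first image. Output is (2*out_half+1) square."""
    return [[img[axis_r + dr * step][axis_c + dc * step]
             for dc in range(-out_half, out_half + 1)]
            for dr in range(-out_half, out_half + 1)]
```

Cropping a 9x9 test image around an off-center axis pixel yields a 3x3 output with that pixel centered.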