Multiple Cameras Patents (Class 348/47)
  • Patent number: 8878909
    Abstract: The present invention is an analog of a set of human eyes, capturing 3D images on a conjugate pair of foveae, with the additions that (i) there can be multiple, independent, conjugate pairs of foveae, and (ii) under computer control, certain conjugate pairs of foveae can be made to move across the detecting surfaces simultaneously to follow moving objects while the lenses remain fixed. Since foveal fields of view are very narrow (on the order of one degree) and little information is transmitted to the computer (or brain) outside this range, there is almost no cross-talk between foveae. By using multiple foveae within each detector, images may be stitched together by algorithms to produce virtually ghost-free full-field 3D images for display.
    Type: Grant
    Filed: November 26, 2010
    Date of Patent: November 4, 2014
    Inventor: John H. Prince
  • Publication number: 20140320612
    Abstract: To obtain an image processing device and an image processing method that improve distance accuracy and can measure distance accurately even for objects farther away than before, when the distance to an object is measured, one image object region 302 containing an image of the object is extracted from one image of a pair of images captured by a pair of imaging elements at the same time in the same direction. A degree of background, that is, the likelihood that each part is either an object image configuration part 304 or a background image configuration part 303, is calculated for each of a plurality of image configuration parts that make up the one image object region 302. Then, the other image object region 503, which has an image similar to the one image object region 302, is extracted from the other image 501 using the degree of background, and a parallax between the one image object region 302 and the other image object region 503 is calculated.
    Type: Application
    Filed: October 24, 2012
    Publication date: October 30, 2014
    Inventors: Shinji Kakegawa, Hiroto Mitoma, Akira Oshima, Haruki Matono, Takeshi Shima
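The matching step this abstract describes, locating the corresponding region in the other image while discounting likely-background pixels, can be sketched as a weighted block match. This is a minimal illustration with assumed inputs (grayscale NumPy arrays and a precomputed background-likelihood map), not the patent's actual implementation:

```python
import numpy as np

def weighted_parallax(left, right, region, bg_likelihood, max_shift):
    """Find the horizontal shift of `region` (from `left`) within `right`,
    down-weighting pixels that are likely background.

    region        : (row, col, height, width) of the template in `left`
    bg_likelihood : (height, width) array in [0, 1]; 1 = probably background
    """
    r, c, h, w = region
    template = left[r:r + h, c:c + w].astype(float)
    weights = 1.0 - bg_likelihood  # emphasize object pixels over background
    best_shift, best_cost = 0, np.inf
    for d in range(max_shift + 1):
        if c - d < 0:
            break  # candidate window would fall off the left image edge
        candidate = right[r:r + h, c - d:c - d + w].astype(float)
        cost = np.sum(weights * np.abs(template - candidate))
        if cost < best_cost:
            best_cost, best_shift = cost, d
    return best_shift
```

Down-weighting background pixels keeps a distant, cluttered backdrop from dominating the match score, which is the intuition behind the "degree of background" above.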
  • Publication number: 20140320604
    Abstract: A method includes providing a primary camera and a secondary camera in a data processing device. The primary camera and the secondary camera are both capable of capturing an image and/or a video frame of a same resolution. The method also includes enabling the secondary camera and the primary camera to be utilized as standalone cameras, and providing a capability to rotate the secondary camera from an angular position of utilization as the standalone camera to an angular position of utilization thereof in conjunction with the primary camera as a three-dimensional (3D) camera offering stereoscopic separation between the primary camera and the secondary camera.
    Type: Application
    Filed: April 24, 2013
    Publication date: October 30, 2014
    Applicant: NVIDIA Corporation
    Inventors: Anup Ashok Dalvi, Dhaval Sanjaykumar Dave
  • Publication number: 20140320605
    Abstract: A method and apparatus is provided for high speed, non-contact method of measuring the 3-D coordinates of a dense grid of points on a surface, including high accuracy interpolation between grid points. A plurality of pulsed laser sub-projectors sequentially illuminates a plurality of discrete Gray code bar pattern transparencies carried on a spinning circular code disk to project high frame rate structured light. The structured light is reflected by the surface and recorded at high signal-to-noise ratio by a plurality of high frame rate digital cameras, then decoded and interpolated by electronic signal processing. A numerical formula is derived for numbers of equally spaced discrete code patterns on the code disk that allow each camera to receive pulses from all sub-projectors and all patterns at a constant frame rate. Methods to derive an extended complementary Gray code pattern sequence and to normalize measured signal amplitudes are presented.
    Type: Application
    Filed: April 25, 2013
    Publication date: October 30, 2014
    Inventor: Philip Martin Johnson
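Gray-code structured light encodes each projector column so that adjacent columns differ by a single bit, which makes decoding robust to one-stripe boundary errors. A minimal sketch of the encode/decode pair and the per-pattern stripes (illustrative only; the patent's extended complementary sequence and amplitude normalization are not reproduced here):

```python
def gray_encode(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert gray_encode by folding the bits down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def stripe_patterns(width, bits):
    """One row per projected pattern (MSB first): the bit each projector
    column contributes to that pattern."""
    return [[(gray_encode(col) >> b) & 1 for col in range(width)]
            for b in reversed(range(bits))]
```

A camera pixel that observes the bit sequence across all patterns can reconstruct which projector column illuminated it by Gray-decoding those bits.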
  • Publication number: 20140320611
    Abstract: In one aspect, a multispectral multi-camera display unit is disclosed, including a display unit, a camera array, and an image integration processing unit. In some embodiments, the camera array is configured such that at least one camera is located at each of two or more sides of the display unit. In some embodiments, each camera comprises a color image sensor or multispectral imager.
    Type: Application
    Filed: April 28, 2014
    Publication date: October 30, 2014
    Applicant: nanoLambda Korea
    Inventor: Byung Il Choi
  • Publication number: 20140320606
    Abstract: The present invention provides a 3D video shooting control system which controls the position and direction of a pair of imaging devices in order to obtain a suitable 3D effect or maintain safety, taking the shooting condition and the viewing condition into consideration comprehensively. A base-line length and a convergence angle of the pair of imaging devices are controlled based on an integration model in which a 3D shooting and broadcasting model and a 3D object model are integrated. A base-line length determination unit 6d-1 determines a base-line length based on the relationship expressed by an integration model equation 1, and a camera control unit 6e moves the pair of imaging devices to change their position so that the pair of imaging devices has the determined base-line length.
    Type: Application
    Filed: April 26, 2013
    Publication date: October 30, 2014
    Inventors: Zining ZHEN, Xiaolin Zhang
  • Publication number: 20140320607
    Abstract: A multifunctional sky camera system and techniques for the use thereof for total sky imaging and spectral irradiance/radiance measurement are provided. In one aspect, a sky camera system is provided. The sky camera system includes an objective lens having a field of view of greater than about 170 degrees; a spatial light modulator at an image plane of the objective lens, wherein the spatial light modulator is configured to attenuate light from objects in images captured by the objective lens; a semiconductor image sensor; and one or more relay lens configured to project the images from the spatial light modulator to the semiconductor image sensor. Techniques for use of the one or more of the sky camera systems for optical flow based cloud tracking and three-dimensional cloud analysis are also provided.
    Type: Application
    Filed: April 30, 2013
    Publication date: October 30, 2014
    Applicant: International Business Machines Corporation
  • Publication number: 20140320610
    Abstract: A depth measurement apparatus is adapted to acquire first and second image signals, calculate a correlation value for plural shift amounts, acquire plural provisional shift amounts, reconstruct the first image signal using a filter corresponding to the provisional shift amounts, analyze the contrast change caused by the reconstruction, and determine depth on the basis of the contrast analysis. The provisional shift amounts are acquired by determining a first shift amount at which the correlation value takes an extreme value, dividing a first range, which is a range of predetermined shift amounts including the first shift amount, into a plurality of second ranges, and acquiring a provisional shift amount for each of the second ranges.
    Type: Application
    Filed: April 23, 2014
    Publication date: October 30, 2014
    Applicant: CANON KABUSHIKI KAISHA
    Inventor: Makoto Oigawa
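The correlation-over-shifts step can be illustrated on 1-D signals: compute a cost for each candidate shift, locate the extremum, then split a window around it into sub-ranges, each yielding a provisional shift. A toy sketch, in which a SAD cost on 1-D signals stands in for the patent's image-signal correlation:

```python
import numpy as np

def provisional_shifts(sig_a, sig_b, max_shift, n_subranges=4):
    """Correlate two equal-length 1-D signals over integer shifts, locate
    the best (extreme-value) shift, then split a window around it into
    sub-ranges, returning one provisional shift (the sub-range centre)
    per sub-range."""
    costs = [np.sum(np.abs(sig_a[s:] - sig_b[:len(sig_b) - s]))
             for s in range(max_shift + 1)]
    best = int(np.argmin(costs))               # extremum of the correlation
    lo, hi = max(0, best - 2), min(max_shift, best + 2)   # "first range"
    edges = np.linspace(lo, hi, n_subranges + 1)          # "second ranges"
    return best, [0.5 * (edges[i] + edges[i + 1]) for i in range(n_subranges)]
```

The window half-width of 2 and the sub-range count are arbitrary illustration values; the patent leaves both as design parameters.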
  • Publication number: 20140320608
    Abstract: In accordance with an example embodiment of the present invention, disclosed is a method and an apparatus thereof for receiving a first command via a first interface that is addressable by a first address and receiving a second command via a second interface that is addressable by a second address.
    Type: Application
    Filed: December 13, 2010
    Publication date: October 30, 2014
    Applicant: NOKIA CORPORATION
    Inventor: Mikko Muukki
  • Publication number: 20140320609
    Abstract: A hybrid three dimensional imaging camera comprises a 2-D video camera utilizing a focal plane array visible light detector and a 3-D flash laser radar utilizing an infrared focal plane array detector. The device is capable of capturing a complete 3-D scene from a single point of view. A production system combining multiple hybrid 3-D cameras around a subject provides 3-D solid models of an object or scene in the common field of view.
    Type: Application
    Filed: April 15, 2014
    Publication date: October 30, 2014
    Inventors: Roger Stettner, Brad Short, Patrick Gilliland, Thomas Laux, Laurent Heughebaert
  • Patent number: 8872817
    Abstract: A real-time three-dimensional (3D) real environment reconstruction apparatus and method are provided. The real-time 3D real environment reconstruction apparatus reconstructs a 3D real environment in real time by processing data input through RGB-D cameras with a plurality of GPUs, so as to reconstruct a wide-ranging 3D real environment.
    Type: Grant
    Filed: July 13, 2012
    Date of Patent: October 28, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Young-Hee Kim
  • Publication number: 20140313294
    Abstract: A display panel includes: a plurality of pixels configured to display an image; at least one camera sensitive to a non-visible wavelength light and configured to have a field of view overlapping a front area of the display panel; and a plurality of emitters configured to emit light having the non-visible wavelength light in synchronization with exposures of the at least one camera.
    Type: Application
    Filed: April 14, 2014
    Publication date: October 23, 2014
    Applicant: SAMSUNG DISPLAY CO., LTD.
    Inventor: David M. Hoffman
  • Patent number: 8866879
    Abstract: A mobile terminal including a first camera configured to receive an input of a first image; a second camera configured to receive an input of a second image; a touchscreen configured to display a photograph command key including a first zone, a second zone and a common zone; and a controller configured to set a photograph mode selected from a 3D photograph mode and a 2D photograph mode, to control the first and second cameras to respectively capture the first and second images upon receiving a photograph command touch action on the common zone, and to perform either a 3D image processing or a 2D image processing on the photographed first and second images according to the set photograph mode.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: October 21, 2014
    Assignee: LG Electronics Inc.
    Inventors: Seungmin Seen, Jinsool Lee, Seunghyun Woo, Hayang Jung, Shinhae Lee
  • Patent number: 8866890
    Abstract: A method of making a high resolution camera includes assembling, imaging and processing. The assembling includes assembling on a carrier a plurality of sensors. Each sensor is for imaging a portion of an object. The plurality of sensors are disposed so that the portions imaged by adjacent sensors overlap in a seam leaving no gaps between portions. The imaging images a predetermined known pattern to produce from the plurality of sensors a corresponding plurality of image data sets. The processing processes the plurality of image data sets to determine offset and rotation parameters for each sensor by exploiting overlapping seams.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: October 21, 2014
    Assignee: Teledyne Dalsa, Inc.
    Inventors: Anton Petrus Maria Van Arendonk, Cornelis Draijer
  • Patent number: 8866913
    Abstract: Systems and methods for calibrating a 360 degree camera system include imaging reference strips, analyzing the imaged data to correct for pitch, roll, and yaw of cameras of the 360 degree camera system, and analyzing the image data to correct for zoom and shifting of the cameras. Each of the reference strips may include a bullseye component and a dots component to aid in the analyzing and correcting.
    Type: Grant
    Filed: April 8, 2014
    Date of Patent: October 21, 2014
    Assignee: OmniVision Technologies, Inc.
    Inventors: Jeff Hsieh, Hasan Gadjali, Tawei Ho
  • Patent number: 8868684
    Abstract: Telepresence is coordinated among multiple interconnected devices. The presence of a first interconnected device and a second interconnected device in a common space is determined. Multimedia capabilities of the first interconnected device and the second interconnected device are determined. Communications of at least one type of media information using one of the first interconnected device and the second interconnected device are selectively and temporarily enabled by an external controller over a second network. Communications of the at least one type of media information using the other of the first interconnected device and the second interconnected device are selectively and temporarily not enabled by the external controller over the second network.
    Type: Grant
    Filed: June 17, 2011
    Date of Patent: October 21, 2014
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Lee Begeja, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
  • Patent number: 8867823
    Abstract: Provided is a virtual viewpoint image synthesizing method in which a virtual viewpoint image viewed from a virtual viewpoint is synthesized based on image information obtained from a plurality of viewpoints. The virtual viewpoint image is synthesized through a reference image obtaining step, a depth map generating step, an up-sampling step, a virtual viewpoint information obtaining step, and a virtual viewpoint image synthesizing step.
    Type: Grant
    Filed: December 2, 2011
    Date of Patent: October 21, 2014
    Assignee: National University Corporation Nagoya University
    Inventors: Meindert Onno Wildeboer, Lu Yang, Mehrdad Panahpour Tehrani, Tomohiro Yendo, Masayuki Tanimoto
  • Patent number: 8866943
    Abstract: A digital camera system including a first video capture unit for capturing a first digital video sequence of a scene and a second video capture unit that simultaneously captures a second digital video sequence that includes the photographer. A data processor automatically analyzes the first digital video sequence to determine a low-interest spatial image region. A facial video sequence including the photographer's face is extracted from the second digital video sequence and inserted into the low-interest spatial image region in the first digital video sequence to form a composite video sequence.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: October 21, 2014
    Assignee: Apple Inc.
    Inventors: Minwoo Park, Amit Singhal
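The compositing step, pasting the extracted facial sequence into the low-interest region frame by frame, reduces to a rectangular insert. A hypothetical minimal version (a real system would track the face and blend the edges):

```python
import numpy as np

def composite(main_frame, face_frame, region):
    """Paste `face_frame` into `main_frame` at `region` = (row, col),
    returning a new composite frame; the inputs are untouched."""
    out = main_frame.copy()
    r, c = region
    h, w = face_frame.shape[:2]
    out[r:r + h, c:c + w] = face_frame
    return out
```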
  • Patent number: 8866883
    Abstract: Tools are described for preparing digital dental models for use in dental restoration production processes, along with associated systems and methods. Dental modeling is improved by supplementing views of three-dimensional models with still images of the modeled subject matter. Video data acquired during a scan of the model provides a source of still images that can be displayed alongside a rendered three-dimensional model, and the two views (model and still image) may be synchronized to provide a common perspective of the model's subject matter. This approach provides useful visual information for disambiguating surface features of the model during processing steps such as marking a margin of a prepared tooth surface for a restoration.
    Type: Grant
    Filed: June 27, 2008
    Date of Patent: October 21, 2014
    Assignee: 3M Innovative Properties Company
    Inventors: Janos Rohaly, Robert N. Nazzal, Edward K. Tekeian, Ilya A. Kriveshko, Eric B. Paley
  • Publication number: 20140307058
    Abstract: The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output.
    Type: Application
    Filed: June 24, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Oliver A. Whyte, Sing Bing Kang, Charles Lawrence Zitnick, III, Richard S. Szeliski, Shahram Izadi, Christoph Rhemann, Andreas Georgiou, Avronil Bhattacharjee
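Once stereo matching yields a disparity for a pixel, depth follows from triangulation as depth = f · B / d (focal length in pixels, baseline in metres, disparity in pixels). This relation is standard rectified-stereo geometry rather than anything specific to the disclosure:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth in metres for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel disparity with a 500-pixel focal length and a 10 cm baseline corresponds to a depth of 5 m, which shows why sub-pixel disparity accuracy matters at range.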
  • Publication number: 20140307056
    Abstract: The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
    Type: Application
    Filed: June 14, 2013
    Publication date: October 16, 2014
    Inventors: Alvaro Collet Romea, Bao Zhang, Adam G. Kirk
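Combining several per-modality foreground probabilities into a single per-pixel probability can be done, for example, with a weighted log-odds sum. This is a generic fusion sketch, not the framework's actual combination rule:

```python
import math

def foreground_probability(modality_probs, weights=None):
    """Fuse per-modality foreground probabilities with a weighted
    log-odds sum, then map the result back to a probability."""
    if weights is None:
        weights = [1.0] * len(modality_probs)
    logit = sum(w * math.log(p / (1.0 - p))
                for p, w in zip(modality_probs, weights))
    return 1.0 / (1.0 + math.exp(-logit))
```

Agreeing modalities reinforce each other (two 0.9 votes yield well above 0.9), while a 0.5 vote from an uninformative modality leaves the result unchanged.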
  • Publication number: 20140307055
    Abstract: The subject disclosure is directed towards projecting light in a pattern in which the pattern contains components (e.g., spots) having different intensities. The pattern may be based upon a grid of initial points associated with first intensities and points between the initial points with second intensities, and so on. The pattern may be rotated relative to the cameras that capture it, with the captured images used for active depth sensing based upon stereo matching of dots in stereo images.
    Type: Application
    Filed: June 11, 2013
    Publication date: October 16, 2014
    Inventors: Sing Bing Kang, Andreas Georgiou, Richard S. Szeliski
  • Publication number: 20140307059
    Abstract: Stacked imager devices that can determine distance and generate three dimensional representations of a subject and associated methods are provided. In one aspect, an imaging system can include a first imager array having a first light incident surface and a second imager array having a second light incident surface. The second imager array can be coupled to the first imager array at a surface that is opposite the first light incident surface, with the second light incident surface being oriented toward the first imager array and at least substantially uniformly spaced. The system can also include a system lens positioned to direct incident light along an optical pathway onto the first light incident surface. The first imager array is operable to detect a first portion of the light passing along the optical pathway and to pass through a second portion of the light, where the second imager array is operable to detect at least a part of the second portion of light.
    Type: Application
    Filed: March 12, 2014
    Publication date: October 16, 2014
    Inventors: Homayoon Haddad, Chen Feng, Leonard Forbes
  • Publication number: 20140307057
    Abstract: The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. Via the moving light pattern captured over a set of frames, e.g., by a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, higher resolution depth information at a sub-pixel level may be computed than is captured by the native camera resolution.
    Type: Application
    Filed: June 21, 2013
    Publication date: October 16, 2014
    Inventors: Sing Bing Kang, Shahram Izadi
  • Publication number: 20140307054
    Abstract: An auto focus (AF) method adapted to an AF apparatus is provided. The AF method includes the following steps. A target object is selected and photographed by a first image sensor and a second image sensor to generate a first image and a second image. A procedure of three-dimensional (3D) depth estimation is performed according to the first image and the second image to generate a 3D depth map. An optimization process is performed on the 3D depth map to generate an optimized 3D depth map. A piece of depth information corresponding to the target object is determined according to the optimized 3D depth map, and a focusing position regarding the target object is obtained according to the piece of depth information. The AF apparatus is driven to execute an AF procedure according to the focusing position. Additionally, an AF apparatus is provided.
    Type: Application
    Filed: May 22, 2013
    Publication date: October 16, 2014
    Applicant: Altek Semiconductor Corp.
    Inventors: Wen-Yan Chang, Yu-Chen Huang, Hong-Long Chou, Chung-Chia Kang
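Mapping an estimated object depth to a lens focusing position can be illustrated with the thin-lens equation, 1/f = 1/d_o + 1/d_i. This is an idealized textbook model standing in for the device-specific focus tables an actual AF module would use:

```python
def lens_position(focal_mm, object_distance_mm):
    """Thin-lens image distance for an object at the given depth:
    1/f = 1/d_o + 1/d_i  =>  d_i = 1 / (1/f - 1/d_o)."""
    return 1.0 / (1.0 / focal_mm - 1.0 / object_distance_mm)
```

For a 50 mm lens focused on an object 1 m away, the sensor must sit 1000/19 ≈ 52.6 mm behind the lens, slightly beyond the focal plane.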
  • Patent number: 8860786
    Abstract: A CC encoder outputs CC data for displaying a caption with a closed caption. A disparity information creation unit outputs disparity information related with each Window ID included in the CC data. The disparity information has added thereto instruction information instructing which of the left eye and the right eye is to be subjected to shifting on the basis of the disparity information. Also, the disparity information is either configured to be commonly used in individual frames during a display period of closed caption information or configured to be sequentially updated during the individual periods, and has added thereto identification information thereabout.
    Type: Grant
    Filed: May 24, 2012
    Date of Patent: October 14, 2014
    Assignee: Sony Corporation
    Inventor: Ikuo Tsukagoshi
  • Publication number: 20140300702
    Abstract: Systems and methods for creating a 3D photorealistic model of a real-life object by applying a combined solution based on the analysis of photographic images together with data obtained from depth sensors, which read the depth of the sensed area and thereby create maps of it. Data arrives from the two types of devices and is compared and combined. The developed matching algorithm captures the data and automatically creates a photorealistic 3D model of an object without the need for further manual processing. From this, photorealistic-quality 3D models are created by building a polygon mesh and creating sets of textures using the data coming from these two device types. The depth sensors make it possible to compute depth maps of the real space surrounding the sensed object.
    Type: Application
    Filed: March 14, 2014
    Publication date: October 9, 2014
    Inventors: Tagir Saydkhuzhin, Konstantin Popov, Michael Raveney
  • Publication number: 20140300703
    Abstract: An apparatus and a method are provided which can output stereo images that can be displayed in 3D whether the twin-lens camera captures images while held horizontally or while held vertically.
    Type: Application
    Filed: October 19, 2012
    Publication date: October 9, 2014
    Applicant: Sony Corporation
    Inventors: Atsushi Ito, Tomohiro Yamazaki
  • Patent number: 8854434
    Abstract: There is provided a demultiplexer that receives video data for one of a three-dimensional display and a two-dimensional display. There is also provided an HDMI transmission portion that transmits the video data and display information that pertains to one of the three-dimensional display and the two-dimensional display of the video data to a television receiver through TMDS channels #0, #1, and #2 of an HDMI cable. There is also provided a transmission/receiving portion that transmits the display information to the television receiver through a CEC line of the HDMI cable.
    Type: Grant
    Filed: August 11, 2010
    Date of Patent: October 7, 2014
    Assignee: Sony Corporation
    Inventors: Takehiko Saito, Ichiro Hamada
  • Patent number: 8854486
    Abstract: Multiview videos are acquired by overlapping cameras. Side information is used to synthesize multiview videos. A reference picture list is maintained for current frames of the multiview videos; the reference picture list indexes temporal reference pictures and spatial reference pictures of the acquired multiview videos and the synthesized reference pictures of the synthesized multiview video. Each current frame of the multiview videos is predicted according to reference pictures indexed by the associated reference picture list with a skip mode and a direct mode, whereby the side information is inferred from the synthesized reference picture. Alternatively, depth images corresponding to the multiview videos are part of the input data, and these data are encoded as part of the bitstream depending on a SKIP type.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: October 7, 2014
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Dong Tian, Ngai-Man Cheung, Anthony Vetro
  • Patent number: 8854426
    Abstract: A time-of-flight 3D camera and related method for illuminating a camera field of view and capturing return image light are disclosed herein. In one example, the time-of-flight 3D camera includes a light source that emits source light along an optical axis, and a collimator that receives and collimates the source light to create collimated light. A refractive diffuser is tuned to the camera field of view and receives and diffuses the collimated light to create refracted light having a varying intensity profile. The refractive diffuser guides the refracted light to illuminate the camera field of view to reduce wasted source light.
    Type: Grant
    Filed: November 7, 2011
    Date of Patent: October 7, 2014
    Assignee: Microsoft Corporation
    Inventors: Asaf Pellman, David Cohan, Giora Yahav
  • Patent number: 8854433
    Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting GUI on the display screen, creating user feedback including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: October 7, 2014
    Assignee: Aquifi, Inc.
    Inventor: Abbas Rafii
  • Patent number: 8854432
    Abstract: A dual lens camera for producing a three-dimensional image includes plural lens systems and a zoom mechanism. Initial correction data is constituted by a displacement vector of an amount and a direction of misalignment between plural images according to a superimposed state thereof for each of zoom positions of the lens systems. A vector detector, if a calibration mode is set, obtains a current displacement vector related to one first zoom position. A data processor outputs current correction data by adjusting the initial correction data according to the initial correction data and current displacement vector. If the current correction data is stored, a displacement vector is obtained from the current correction data according to a zoom position of the lens systems upon forming the plural images, to carry out image registration between the images according to the obtained displacement vector for producing the three-dimensional image.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: October 7, 2014
    Assignee: FUJIFILM Corporation
    Inventor: Masaaki Orimoto
  • Patent number: 8854431
    Abstract: The present invention relates to a method for the optical self-diagnosis of a camera system and to a camera system for carrying out the method. The method comprises recording stereo images, each obtained from at least two partial images (2, 3); creating a depth image, that is to say a disparity map (5) given by calculated disparity values; determining a number of valid disparity values (6) of the disparity map (5); and outputting a warning signal depending on the number of valid disparity values determined. A device for carrying out such a method comprises a stereo camera (1) having at least two lenses (7, 8) and image sensors, an evaluation unit, and a display unit.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: October 7, 2014
    Assignee: Hella KGaA Hueck & Co.
    Inventors: Miao Song, Bjorn Lojewski
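The self-diagnosis criterion, warning when too few disparity values in the map are valid, reduces to a threshold check on the valid fraction. A minimal sketch; the sentinel value and threshold here are arbitrary illustration choices:

```python
import numpy as np

def needs_warning(disparity_map, invalid_value=-1, min_valid_fraction=0.3):
    """True when too few disparity values are valid, e.g. because a lens
    is blocked, dirty, or the stereo pair is miscalibrated."""
    valid = np.count_nonzero(disparity_map != invalid_value)
    return valid / disparity_map.size < min_valid_fraction
```

A healthy stereo rig typically produces valid disparities over most of the image, so a sharp drop in the valid fraction is a cheap proxy for optical faults.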
  • Publication number: 20140293012
    Abstract: A controlling method suitable for an electronic apparatus is disclosed herein. The electronic apparatus includes a first image-capturing unit and a second image-capturing unit. The controlling method includes steps of: obtaining a plurality of second images by the second image-capturing unit when the first image-capturing unit is operated to capture a plurality of first images for a stereo process; detecting an object in the second images; calculating a relative displacement of the object in the second images; and, determining whether the first images are captured by an inappropriate gesture according to the relative displacement calculated from the second images.
    Type: Application
    Filed: April 2, 2014
    Publication date: October 2, 2014
    Applicant: HTC Corporation
    Inventors: Chun-Hsiang HUANG, Yu-Ting LEE, Liang-Kang HUANG, Tzu-Hao KUO, Edward CHANG
  • Publication number: 20140293013
    Abstract: Systems and methods of conducting collaborative sessions between mobile devices may provide for determining a time delay associated with a set of participating mobile devices, and determining a command execution time based at least in part on a clock of a managing device and the time delay. One or more control messages may be transmitted to the participating mobile devices, wherein the control messages include the command and the command execution time. Upon receiving a control message, each participating mobile device may determine a local execution time based at least in part on the command execution time and an offset of the clock of the managing device relative to a local clock. Execution of the command can therefore be coordinated across the set of participating mobile devices.
    Type: Application
    Filed: June 17, 2014
    Publication date: October 2, 2014
    Inventors: Michelle X. Gong, Roy Want, Horst W. Haussecker, Jesse Walker, Sai P. Balasundaram
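The local execution time each participant computes can be sketched as follows, assuming each device holds one paired reading of the manager's clock and its own. This is a hypothetical simplification; real protocols also estimate the network delay the abstract mentions:

```python
def local_execution_time(command_exec_time, manager_clock, local_clock):
    """Translate the manager's scheduled execution time into this
    device's clock, given one simultaneous reading of both clocks."""
    offset = manager_clock - local_clock  # how far ahead the manager runs
    return command_exec_time - offset
```

With every participant applying its own offset, all devices fire the command at the same physical instant despite unsynchronized clocks.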
  • Publication number: 20140293010
    Abstract: A system is disclosed for executing depth image-based rendering of a 3D image by a computer having a processor and that is coupled with one or more color cameras and at least one depth camera. The color cameras and the depth camera are positionable at different arbitrary locations relative to a scene to be rendered. In some examples, the depth camera is a low resolution camera and the color cameras are high resolution. The processor is programmed to propagate depth information from the depth camera to an image plane of each color camera to produce a propagated depth image at each respective color camera, to enhance the propagated depth image at each color camera with the color and propagated depth information thereof to produce corresponding enhanced depth images, and to render a complete, viewable image from one or more enhanced depth images from the color cameras. The processor may be a graphics processing unit.
    Type: Application
    Filed: February 3, 2014
    Publication date: October 2, 2014
    Inventors: Quang H. Nguyen, Minh N Do, Sanjay J. Patel
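The depth-propagation step described above can be sketched with a standard pinhole-camera reprojection; the intrinsic matrix and camera placement below are made-up example values, not from the patent.

```python
import numpy as np

# Hedged sketch of propagating one depth sample from the depth camera's
# image plane to a color camera's image plane (pinhole model; matrices
# are illustrative, not from the patent).

def propagate_depth(u, v, z, K_depth, R, t, K_color):
    """Back-project pixel (u, v) with depth z, transform the 3D point
    into the color camera's frame, and re-project; returns (u', v', z')."""
    p = np.array([u, v, 1.0])
    X = z * np.linalg.inv(K_depth) @ p       # 3D point, depth-camera frame
    Xc = R @ X + t                           # into the color-camera frame
    q = K_color @ Xc                         # project onto color image
    return q[0] / q[2], q[1] / q[2], Xc[2]   # pixel + propagated depth

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
# Identical intrinsics, color camera displaced 10 cm along x:
u2, v2, z2 = propagate_depth(320, 240, 2.0, K, np.eye(3),
                             np.array([0.1, 0.0, 0.0]), K)
```

Running this over every depth pixel yields the "propagated depth image" at the color camera, which the patent then enhances with that camera's higher-resolution color data.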
  • Publication number: 20140294366
    Abstract: A headgear for capturing an immersive experience includes two cameras and two microphone pairs, each with a low volume microphone and a high volume microphone. A global positioning system (GPS) tracking device, accelerometer, and gyroscope are also provided. The data recorded from the cameras, microphones, GPS tracking device, accelerometer, and gyroscope form several feeds that are assembled into a single multimedia clip. Multimedia clips can be combined by time and/or region, forming an aggregate clip for a district or city. The component multimedia clips are linked to each other, allowing a user to experience the aggregate multimedia clip at their direction. Scenarios and advertisements can be embedded into the aggregate multimedia clip, allowing the immersive experience to be used for education, policy analysis, entertainment, and other similar tasks. Information from the GPS tracking device, accelerometer, and gyroscope is used to synchronize audio-video feeds and insert scenarios.
    Type: Application
    Filed: April 1, 2013
    Publication date: October 2, 2014
    Inventor: Michael-Ryan FLETCHALL
  • Publication number: 20140293011
    Abstract: A structured light 3D scanner comprising multiple pattern projectors each projecting a unique pattern onto an object by passing radiation through a stationary imaging substrate and one or more cameras for capturing the projected patterns in sequence. A processor processes the projected patterns based on a predetermined separation between the cameras. The processor uses this information to determine the deviation between the projected patterns and the reflected patterns captured by the camera or cameras. The deviation may be used to determine the three dimensional surface geometry of the object within the capture volume of the cameras. Surface geometry may be used to create a point cloud with each point representing a location on the surface of the object with respect to the 3D scanner.
    Type: Application
    Filed: March 28, 2014
    Publication date: October 2, 2014
    Applicant: Phasica, LLC
    Inventors: William Lohry, Sam Robinson
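The deviation-to-geometry step described above can be sketched with the standard triangulation relation z = f·b/d, where d is the deviation (disparity) between the projected and observed pattern position; the focal length, baseline, and pixel values below are illustrative assumptions, not from the patent.

```python
# Illustrative structured-light triangulation: the deviation between where
# a pattern feature is projected and where a camera observes it maps to
# depth via z = f * b / d. Values are assumptions, not from the patent.

def depth_from_deviation(focal_px, baseline_m, deviation_px):
    """Depth in meters from pattern deviation in pixels."""
    if deviation_px <= 0:
        raise ValueError("deviation must be positive")
    return focal_px * baseline_m / deviation_px

def point_from_pixel(u, v, z, fx, fy, cx, cy):
    """Back-project a pixel to a 3D point for the point cloud."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

z = depth_from_deviation(600.0, 0.12, 36.0)   # 2.0 m
pt = point_from_pixel(400, 300, z, 600.0, 600.0, 320.0, 240.0)
```

Repeating the back-projection for every decoded pattern point produces the point cloud mentioned in the abstract, each point giving a surface location relative to the scanner.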
  • Publication number: 20140293014
    Abstract: There is provided a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. There is provided a method for integrating a virtual rendering system and a video capture system for outputting a composite render to a display, the method comprising obtaining, from the virtual rendering system, a virtual camera configuration of a virtual camera in a virtual environment, programming the video capture system using the virtual camera configuration to correspondingly control a robotic camera in a real environment, capturing a video capture feed using the robotic camera, obtaining a virtually rendered feed using the virtual camera, rendering the composite render by processing the feeds, and outputting the composite render to the display.
    Type: Application
    Filed: June 17, 2014
    Publication date: October 2, 2014
    Inventors: Michael Gay, Aaron Thiel
  • Publication number: 20140285632
    Abstract: An imaging system comprising an image capture apparatus, arranged to capture a stereoscopic image of an operator work site, in communication with a display system; the display system arranged to receive and display said stereoscopic image on a display screen to said operator; wherein said display system is arranged such that the display screen is placed intermediate the operator's eyes and the work site.
    Type: Application
    Filed: December 26, 2013
    Publication date: September 25, 2014
    Applicant: NATIONAL UNIVERSITY OF SINGAPORE
    Inventors: Beng Hai Lim, Timothy Poston, James Kolenchery Rappel
  • Publication number: 20140285634
    Abstract: Imagery from two or more users' different smartphones is streamed to a cloud processor, enabling creation of 3D model information about a scene being imaged. From this model, arbitrary views and streams can be synthesized. In one arrangement, a user of such a system is at a sports arena, and her view of the sporting event is blocked when another spectator rises to his feet in front of her. Nonetheless, the imagery presented on her headworn display continues uninterrupted—the blocked imagery from that viewpoint being seamlessly re-created based on imagery contributed by other system users in the arena. A great variety of other features and arrangements are also detailed.
    Type: Application
    Filed: March 18, 2014
    Publication date: September 25, 2014
    Applicant: Digimarc Corporation
    Inventor: Geoffrey B. Rhoads
  • Publication number: 20140285633
    Abstract: A robotic system includes a robot, a display section, and a control section adapted to operate the robot. An imaging range of a first taken image, obtained by imaging an operation object of the robot from a first direction, and an imaging range of a second taken image, obtained by imaging the operation object from a direction different from the first direction, are displayed on the display section.
    Type: Application
    Filed: March 5, 2014
    Publication date: September 25, 2014
    Applicant: Seiko Epson Corporation
    Inventors: Kenichi Maruyama, Kenji Onda
  • Publication number: 20140285630
    Abstract: An indoor navigation system is based on a multi-beam laser projector, a set of calibrated cameras, and a processor that uses knowledge of the projector design and data on laser spot locations observed by the cameras to solve the space resection problem to find the location and orientation of the projector.
    Type: Application
    Filed: March 20, 2013
    Publication date: September 25, 2014
    Applicant: TRIMBLE NAVIGATION LIMITED
    Inventor: Kevin A. I. Sharp
  • Publication number: 20140285631
    Abstract: An indoor navigation system is based on a multi-beam laser projector, a set of calibrated cameras, and a processor that uses knowledge of the projector design and data on laser spot locations observed by the cameras to solve the space resection problem to find the location and orientation of the projector.
    Type: Application
    Filed: June 20, 2013
    Publication date: September 25, 2014
    Applicant: TRIMBLE NAVIGATION LIMITED
    Inventors: James M. Janky, Kevin A. I. Sharp, Michael V. McCusker, Morrison Ulman
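Space resection recovers a device's position and orientation from correspondences between points it emits or observes and their known locations. As a simplified stand-in, if the laser-spot positions are known both in the projector's own frame and, via the calibrated cameras, in the room frame, the projector's pose can be recovered by rigid alignment with the Kabsch algorithm. This is only an illustration under that assumption; the patents' actual solver works from the projector's beam geometry, not matched 3D point sets.

```python
import numpy as np

# Simplified stand-in for space resection: given laser-spot positions in
# the projector frame (P, one point per row) and the same spots
# triangulated in the room frame (Q), recover rotation R and position t
# such that Q ≈ P @ R.T + t, via the Kabsch algorithm.

def kabsch(P, Q):
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)            # center both sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)              # cross-covariance SVD
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# Synthetic check: rotate/translate four non-coplanar spots, then recover.
P = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)      # recovers R_true, t_true exactly (noiseless)
```

With noisy triangulated spots, the same alignment becomes a least-squares pose estimate, which is the role resection plays in the navigation system above.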
  • Publication number: 20140285635
    Abstract: A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the second video frames based on one of the first video frames to generate a target single view video frame.
    Type: Application
    Filed: March 20, 2014
    Publication date: September 25, 2014
    Applicant: MEDIATEK INC.
    Inventors: Chi-Cheng Ju, Ding-Yun Chen, Cheng-Tsai Ho, Chia-Ming Cheng, Po-Hao Huang, Yuan-Chung Lee, Chung-Hung Tsai
  • Patent number: 8842166
    Abstract: A game device 10 calculates, for each of two real cameras, position/orientation information representing a relative position and orientation with respect to a predetermined image-capture object, and based on this calculates a camera interval in a predetermined coordinate system. Then, it calculates correspondence information between a unit length in the predetermined coordinate system and a unit length in real space by comparing the calculated camera interval in the predetermined coordinate system with the camera interval in the real world.
    Type: Grant
    Filed: June 7, 2011
    Date of Patent: September 23, 2014
    Assignees: Nintendo Co., Ltd., Hal Laboratory, Inc.
    Inventors: Yuichiro Ito, Yuki Nishimura
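The unit-length correspondence described above reduces to a ratio: the known real-world camera interval divided by the interval measured in the marker's coordinate system gives real-world length per coordinate unit. A minimal sketch, with illustrative values not taken from the patent:

```python
# Sketch of the unit-length correspondence: the ratio of the real-world
# camera interval to the interval measured in the predetermined coordinate
# system converts coordinate-system lengths into real lengths.
# (Values are illustrative, not from the patent.)

def unit_correspondence(real_interval_m, coord_interval_units):
    """Meters per coordinate-system unit."""
    return real_interval_m / coord_interval_units

def to_real_length(coord_length_units, scale_m_per_unit):
    """Convert a coordinate-system length to meters."""
    return coord_length_units * scale_m_per_unit

# Cameras 6 cm apart in reality, 3.0 units apart in the coordinate system:
scale = unit_correspondence(0.06, 3.0)   # 0.02 m per unit
d = to_real_length(50.0, scale)          # 50 units -> 1.0 m
```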
  • Patent number: 8842167
    Abstract: A mobile terminal and controlling method thereof are disclosed, by which a 3D or stereoscopic image can be generated to considerably relieve an observer from visual fatigue. The present invention includes a first camera and a second camera configured to take a 3D image, a display unit displaying an image taken by at least one of the first and second cameras, and a controller controlling the first camera, the second camera and the display unit, wherein the controller sets a plurality of divisional sections on a left eye image taken by the first camera and a right eye image taken by the second camera and then generates the 3D image based on a depth value of each of a plurality of the divisional sections.
    Type: Grant
    Filed: September 7, 2011
    Date of Patent: September 23, 2014
    Assignee: LG Electronics Inc.
    Inventors: Kyunghee Kang, Hakhae Kim, Soojung Lim
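The divisional-section idea above can be sketched by dividing the image into a grid and assigning each section one depth value. Here mean disparity per section stands in for the depth value; this is an illustration, not the patent's actual depth computation.

```python
# Illustrative divisional sections: split a per-pixel disparity map into a
# rows x cols grid and compute one depth value (mean disparity) per
# section. A stand-in for the per-section depth in the abstract.

def section_depths(disparity, rows, cols):
    """disparity: 2D list of per-pixel disparities.
    Returns a rows x cols grid of mean disparities, one per section."""
    h, w = len(disparity), len(disparity[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            ys = range(r * h // rows, (r + 1) * h // rows)
            xs = range(c * w // cols, (c + 1) * w // cols)
            vals = [disparity[y][x] for y in ys for x in xs]
            row.append(sum(vals) / len(vals))
        grid.append(row)
    return grid

disp = [[1, 1, 5, 5],
        [1, 1, 5, 5],
        [2, 2, 8, 8],
        [2, 2, 8, 8]]
grid = section_depths(disp, 2, 2)   # [[1.0, 5.0], [2.0, 8.0]]
```

Adjusting the rendered depth per section, rather than per pixel, is one way a controller could moderate extreme depth values that cause viewer fatigue.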
  • Patent number: 8842165
    Abstract: A stereoscopic image pickup apparatus includes a first imaging section, a second imaging section, a zoom controller, and an image selection section. The first imaging section includes a zoom lens. The second imaging section includes a zoom lens. The zoom controller controls angles of view of the zoom lenses of the first imaging section and the second imaging section. The image selection section outputs image signals, which are output by the first imaging section and the second imaging section, as image signals of two channels constituting a stereoscopic image when the angles of view controlled by the zoom controller are equal to or greater than a predetermined value. In addition, the image selection section outputs image signals, which are based on the image signal output by either the first imaging section or the second imaging section, as the image signals of two channels constituting the stereoscopic image when the angles of view controlled by the zoom controller are less than the predetermined value.
    Type: Grant
    Filed: February 4, 2011
    Date of Patent: September 23, 2014
    Assignee: Panasonic Corporation
    Inventor: Noriaki Wada
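The selection rule above is a simple threshold on the angle of view: at wide angles the two cameras feed the two stereo channels, and below the threshold (i.e., at high zoom, where the stereo baseline geometry degrades) both channels carry the same single-camera image. A minimal sketch; the function name and threshold value are assumptions, not from the patent.

```python
# Sketch of the image-selection rule: stereo pair at wide angles of view,
# single-camera fallback below the threshold. Threshold is illustrative.

def select_channels(angle_of_view_deg, first_image, second_image,
                    threshold_deg=30.0):
    """Returns (channel_1, channel_2) for the stereoscopic output."""
    if angle_of_view_deg >= threshold_deg:
        return first_image, second_image   # true stereo pair
    return first_image, first_image        # same image on both channels

wide = select_channels(45.0, "L", "R")    # ("L", "R")
tele = select_channels(10.0, "L", "R")    # ("L", "L")
```

Duplicating one camera's image on both channels yields a flat (zero-parallax) picture, which avoids the distorted stereo that narrow angles of view would otherwise produce.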
  • Publication number: 20140267630
    Abstract: An intersection recognizing apparatus includes a stereo image obtaining unit configured to obtain a stereo image by capturing a forward image of a street with a stereo camera; a parallax image generator configured to generate a parallax image based on the stereo image obtained by the stereo image obtaining unit; a parallax map generator configured to generate a parallax map based on the parallax image; a feature data storage unit configured to store feature data of an intersection road shoulder width regarding a road surface; and a recognition processing calculation unit configured to recognize an intersection condition based on the parallax map and the feature data of the intersection road shoulder width.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 18, 2014
    Applicant: RICOH COMPANY, LIMITED
    Inventor: Wei ZHONG