Picture Signal Generators (epo) Patents (Class 348/E13.074)
-
Publication number: 20130057651. Abstract: Method and system for positioning an antenna (11), telescope, aiming device or similar arranged on a movable platform (13) in a dome (12) or a part of a dome (12), said dome (12) having an interior surface (15) and/or being provided with a screen. The method and system are arranged to provide one or more patterns (40) or rasters at an interior surface (15) of a dome (12) or a screen arranged in the dome (12), and thereupon to record and analyze the patterns/rasters (40) to calculate the accurate position of the antenna (11), telescope, aiming device or similar, thereby achieving highly accurate determination of the aiming direction (14) of the antenna (11), telescope, aiming device or similar. Type: Application. Filed: May 27, 2011. Publication date: March 7, 2013. Applicant: KONGSBERG SEATEX AS. Inventor: Gard Flemming Ueland
-
Publication number: 20130057653. Abstract: A method for rendering a point cloud using a voxel grid includes generating a bounding box containing the entire point cloud and dividing the generated bounding box into voxels to make the voxel grid; and allocating at least one texture plane to each of the voxels of the voxel grid. Further, the method includes orthogonally projecting points within each voxel to the allocated texture planes to generate texture images; and rendering each voxel of the voxel grid by selecting one of the texture planes within the voxel using the central position of the voxel and the 3D camera position, and rendering using the texture images corresponding to the selected texture plane. Type: Application. Filed: July 25, 2012. Publication date: March 7, 2013. Applicant: Electronics and Telecommunications Research Institute. Inventors: Il Kyu PARK, Chang Woo CHU, Youngmi CHA, Ji Hyung LEE, Bonki KOO
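The voxelization and view-dependent plane selection described above can be sketched as follows (a minimal illustration assuming NumPy; the grid size, function names, and the axis-aligned plane choice are assumptions, not the patent's exact procedure):

```python
import numpy as np

def voxelize(points, grid_shape=(4, 4, 4)):
    """Assign each 3-D point to a voxel of a grid spanning the
    axis-aligned bounding box of the whole cloud."""
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    extent = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    # Normalized coordinates in [0, 1], scaled to integer voxel indices.
    idx = ((points - lo) / extent * np.array(grid_shape)).astype(int)
    idx = np.minimum(idx, np.array(grid_shape) - 1)  # clamp boundary points
    return idx

def select_texture_plane(voxel_center, camera_pos):
    """Pick the axis-aligned texture plane most nearly facing the camera:
    the dominant component of the view direction (0=x, 1=y, 2=z plane)."""
    view = np.asarray(camera_pos, float) - np.asarray(voxel_center, float)
    return int(np.argmax(np.abs(view)))
```

A renderer would then draw, for each voxel, the pre-projected texture image of the plane returned by `select_texture_plane`.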
-
Publication number: 20130057652. Abstract: A handheld, cordless scanning device for the three-dimensional image capture of patient anatomy without the use of potentially hazardous lasers, optical reference targets for frame alignment, magnetic reference receivers, or the requirement that the scanning device be plugged in while scanning. The device generally includes a housing having a front end and a rear end. The rear end includes a handle and trigger. The front end includes a pattern projector for projecting a unique pattern onto a target object and a camera for capturing live video of the projected pattern as it is deformed around the object. The front end of the housing also includes a pair of focus beam generators and an indexing beam generator. By utilizing data collected with the present invention, patient anatomy such as anatomical features and residual limbs may be digitized to create accurate three-dimensional representations which may be utilized in combination with computer-aided-drafting programs. Type: Application. Filed: May 11, 2012. Publication date: March 7, 2013. Inventors: David G. Firth, Brendan O. Beardsley, John P. Pella
-
Publication number: 20130057654. Abstract: A method and system analyze data acquired by image systems to more rapidly identify objects of interest in the data. In one embodiment, z-depth data are segmented such that neighboring image pixels having similar z-depths are given a common label. Blobs, or groups of pixels with a same label, may be defined to correspond to different objects. Blobs preferably are modeled as primitives to more rapidly identify objects in the acquired image. In some embodiments, a modified connected component analysis is carried out where image pixels are pre-grouped into regions of different depth values, preferably using a depth value histogram. The histogram is divided into regions and image cluster centers are determined. A depth group value image containing blobs is obtained, with each pixel being assigned to one of the depth groups. Type: Application. Filed: October 31, 2012. Publication date: March 7, 2013. Applicant: Microsoft Corporation
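The histogram-based pre-grouping step can be illustrated roughly as follows (a sketch assuming NumPy; the bin and group counts are invented parameters, and a real implementation would follow this with the modified connected component analysis the abstract describes):

```python
import numpy as np

def depth_groups(depth, n_bins=16, n_groups=3):
    """Pre-group pixels of a z-depth image into depth regions using a
    histogram: take the n_groups most populated bins as cluster centers,
    then assign every pixel to its nearest center. Returns a label image
    where pixels with similar depth share a label."""
    hist, edges = np.histogram(depth, bins=n_bins)
    # Indices of the most populated bins, in increasing-depth order.
    top = np.sort(np.argsort(hist)[-n_groups:])
    centers = (edges[top] + edges[top + 1]) / 2.0
    # Nearest-center assignment per pixel (broadcast over a new axis).
    return np.argmin(np.abs(depth[..., None] - centers), axis=-1)
```

Connected regions within one depth group would then become the labeled blobs.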
-
Publication number: 20130057655. Abstract: The invention provides an image processing system. In one embodiment, the image processing system comprises a first camera, a second camera, a depth map generator, and an automatic focusing module. The first camera generates a first image. The second camera generates a second image. The depth map generator generates a depth map comprising information about visual shift between the first image and the second image. The automatic focusing module estimates a distance between a target object and a center position between the first camera and the second camera, and adjusts the focal lengths of the first camera and the second camera according to the estimated distance. Type: Application. Filed: September 2, 2011. Publication date: March 7, 2013. Inventors: Wen-Yueh SU, Chun-Ta LIN
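Estimating distance from the visual shift between two cameras is, in the classic pinhole stereo model, inverse triangulation; a minimal sketch of that standard relation (not necessarily this patent's exact computation):

```python
def estimate_distance(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: depth is inversely proportional to
    the disparity (visual shift) between the two cameras' images.
    focal_px is the focal length in pixels, baseline_m the camera
    separation in meters."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid match")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length and a 10 cm baseline, a 100 px shift puts the object about 1 m away; the focusing module would then drive the lenses to that distance.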
-
Publication number: 20130057657. Abstract: A stereoscopic image capturing system includes a plurality of lens devices with optical elements, a vibration detection unit, a control unit that calculates a drive signal to drive the optical element(s) for correcting image blurring based on an output from the vibration detection unit, and a driving unit that drives the optical element(s) based on the drive signal. Type: Application. Filed: September 4, 2012. Publication date: March 7, 2013. Applicant: CANON KABUSHIKI KAISHA. Inventor: Masaomi Kanayama
-
Publication number: 20130057650. Abstract: An optical gage (10) with a small field of view for three-dimensional surface profile measurement includes a projector (20) having a light source (22) and projection optics (28, 30, 42) that guide light along a projection light path. An optical grating device (34) is arranged in the projection light path and modifies the projection light distribution to project a structured light pattern (46). A phase shifting apparatus (47) shifts the structured light pattern to at least three positions with desired phase shift on said surface (80) to be measured. A viewer (50) includes viewing optics with a viewing light path that is non-parallel to the projection light path, a light sensing array (58) for sensing images of diffuse reflections of the structured light patterns from said surface, and a camera (57) for recording the images. Type: Application. Filed: March 19, 2009. Publication date: March 7, 2013. Inventors: Guiju Song, Kevin George Harding, Ming Jia, Bo Yang, Qingying Hu, Jianming Zheng, Li Tao
-
Publication number: 20130050427. Abstract: A method and an apparatus for capturing a three-dimensional image and an apparatus for displaying the three-dimensional image are provided. The method is adapted to an image capturing apparatus and includes the following steps. First, a plurality of images are captured with continuous pan. Next, a disparity between each two adjacent images is estimated from the images. Then, the images and the disparities are stored. Type: Application. Filed: September 25, 2011. Publication date: February 28, 2013. Applicant: ALTEK CORPORATION. Inventors: Hong-Long Chou, Chia-Chun Tseng
-
Publication number: 20130050433. Abstract: In a method for monitoring safety of a moving parking unit in a mechanical parking system using a control computer, digital images are captured by one or more image capturing devices positioned on the moving parking unit. The method detects a three dimensional (3D) figure area in each digital image, and controls an area safety device to cut off power supplies of the moving parking unit if a 3D figure area has been detected. The method further outputs alarm messages by lighting one or more signal lamps of the mechanical parking system. Type: Application. Filed: June 12, 2012. Publication date: February 28, 2013. Applicant: HON HAI PRECISION INDUSTRY CO., LTD. Inventors: HOU-HSIEN LEE, CHANG-JUNG LEE, CHIH-PING LO
-
Publication number: 20130050425. Abstract: A gesture-based user interface method and corresponding apparatus that includes a light source configured to irradiate light to a user, an image sensor configured to receive light reflected from the user and output a depth image of the user, an image processor configured to recognize a user gesture based on the depth image output from the image sensor, and a controller configured to control the light source and the image sensor such that at least one of an optical wavelength of the light source, an optical power level of the light source, a frame rate of the light source and a resolution of the depth image is adjusted according to a gesture recognition mode. Type: Application. Filed: August 24, 2011. Publication date: February 28, 2013. Inventors: Soungmin Im, Sunjin Yu, Kyungyoung Lim, Sangki Kim, Yongwon Cho
-
Publication number: 20130050429. Abstract: There is provided an image processing device including an image acquisition part acquiring an image; a depth acquisition part acquiring a depth associated with a pixel in the image; a depth conversion part converting the depth in accordance with a function having a characteristic to nonlinearly approach a predetermined value with an increase in the depth; and a storage part storing the converted depth in association with the image. Type: Application. Filed: July 16, 2012. Publication date: February 28, 2013. Applicant: Sony Corporation. Inventors: Naho SUZUKI, Hideki Nabesako, Takami Mizukura
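The abstract names only the property of the conversion function, not its form; one function with exactly that property (a pure assumption on our part) is a saturating exponential, which rises steeply for near depths and nonlinearly approaches the predetermined limit:

```python
import math

def convert_depth(d, limit=255.0, scale=50.0):
    """Illustrative depth conversion: monotonically increasing in d,
    asymptotically approaching `limit` as d grows. `limit` plays the
    role of the abstract's predetermined value; `scale` controls how
    quickly the curve saturates (both values are invented here)."""
    return limit * (1.0 - math.exp(-d / scale))
```

Such a mapping spends most of the stored depth resolution on near objects, which is typically where depth precision matters for image processing.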
-
Publication number: 20130050410. Abstract: An improved apparatus for determining the 3D coordinates of an object (1) includes a projector (10) for projecting a pattern onto the object (1), a camera (11) connected to the projector (10) for imaging the object (1), and a reference camera (16) connected to the projector (10) and to the camera (11) for imaging one or more reference marks (6, 24) of a field (25) of reference marks (only FIGURE). Type: Application. Filed: February 15, 2012. Publication date: February 28, 2013. Applicant: Steinbichler Optotechnik GmbH. Inventors: Marcus Steinbichler, Thomas Mayer, Herbert Daxauer
-
Publication number: 20130050430. Abstract: An image photographing device includes a photographing unit that receives an image, an image processing unit that generates preview image data using the image, a depth map generation unit that receives the preview image data transmitted from the image processing unit and that generates a depth map of a subject using the preview image data, and a display unit that displays both the preview image data and information regarding the depth map of the subject through a preview image. A control method of an image photographing device includes generating preview image data using an image input during a 3D photographing mode, generating a depth map of a subject using the preview image data, and displaying both the preview image data and information regarding the depth map of the subject through a preview image. Type: Application. Filed: August 10, 2012. Publication date: February 28, 2013. Applicant: Samsung Electronics Co., Ltd. Inventor: Seung Yun Lee
-
Publication number: 20130050437. Abstract: Provided is a method and apparatus for linear depth mapping. Linear depth mapping includes using algorithms to correct the distorted depth mapping of stereoscopic capture and display systems. Type: Application. Filed: October 29, 2012. Publication date: February 28, 2013. Applicant: REALD INC.
-
Publication number: 20130053101. Abstract: A method and system for hiding objectionable frames during autofocusing are disclosed. A personal electronic device such as a camera telephone can have two cameras that have overlapping fields of view. One camera can provide imaging. The other camera can facilitate autofocusing in a manner wherein images produced thereby are not viewed by a user. Because the autofocus frames are hidden, the user is not distracted or annoyed thereby. Type: Application. Filed: October 15, 2012. Publication date: February 28, 2013. Inventors: Richard Tsai, Xiaolei Liu
-
Publication number: 20130050432. Abstract: Technology is disclosed for enhancing the experience of a user wearing a see-through, near eye mixed reality display device. Based on an arrangement of gaze detection elements on each display optical system for each eye of the display device, a respective gaze vector is determined and a current user focal region is determined based on the gaze vectors. Virtual objects are displayed at their respective focal regions in a user field of view for a natural sight view. Additionally, one or more objects of interest to a user may be identified. The identification may be based on a user intent to interact with the object. For example, the intent may be determined based on a gaze duration. Augmented content may be projected over or next to an object, real or virtual. Additionally, a real or virtual object intended for interaction may be zoomed in or out. Type: Application. Filed: August 30, 2011. Publication date: February 28, 2013. Inventors: Kathryn Stone Perez, Benjamin I. Vaught, John R. Lewis, Robert L. Crocco, Alex Aben-Athar Kipman
-
Publication number: 20130050438. Abstract: An image capturing apparatus includes an image capturing unit configured to receive light beams split after passing through an aperture and output a plurality of stereopsis image data. When the object brightness falls within a predetermined range and the image capturing unit performs image capturing in which a plurality of stereopsis image data are output, the aperture is controlled to open wider than when the image capturing unit performs image capturing, at the same exposure, in which a plurality of stereopsis image data are not output. Type: Application. Filed: July 24, 2012. Publication date: February 28, 2013. Applicant: CANON KABUSHIKI KAISHA. Inventor: Tatsuya Arai
-
Publication number: 20130050439. Abstract: Feature points are extracted from left and right viewpoint images (Step S12), and the amount of parallax of each feature point is calculated (Step S13). The maximum amount of parallax on a near view side and the maximum amount of parallax on a distant view side are acquired from the calculated amount of parallax of each feature point (Step S14). The maximum display size which enables binocular fusion from a supposed visual distance when a stereoscopic image based on the left and right viewpoint images is displayed on a stereoscopic display is acquired on the basis of at least the maximum amount of parallax on the distant view side (Step S15). The acquired maximum display size is recorded along with image information (Step S16). An image reproduction device which reads the 3D image file recorded in this way can display it appropriately on the basis of the maximum display size. Type: Application. Filed: October 24, 2012. Publication date: February 28, 2013. Applicant: FUJIFILM CORPORATION
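The link between distant-side parallax and a maximum display size (Step S15) follows from the usual fusion constraint that on-screen parallax must not exceed the viewer's interocular distance, since the eyes cannot diverge; a sketch under that assumption (the patent's exact formula is not given in the abstract):

```python
def max_display_width(max_far_parallax_px, image_width_px,
                      interocular_m=0.065):
    """Largest display width (in meters) at which the distant-view
    parallax, scaled up with the screen, stays at or below the
    interocular distance. The 65 mm interocular value is a common
    adult average, used here as an illustrative default."""
    parallax_fraction = max_far_parallax_px / image_width_px
    return interocular_m / parallax_fraction
```

So a pair with 10 px of far-side parallax across a 1000 px wide image could be shown on screens up to about 6.5 m wide; recording this value in the file lets a player pick a viewpoint pair that suits the actual display.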
-
Publication number: 20130050435. Abstract: Disclosed herein are a stereo camera system and a method for controlling convergence, including: a camera unit photographing both-eyes images; a filter unit filtering signal values of pixels for each line for any one of the both-eyes images along a line direction to detect a reference line of any one image; a line memory unit storing data for the reference line and a reference line of the other one image corresponding to the reference line; and a convergence control unit calculating the image control amount so as to align convergences of the both-eyes images by performing a comparison operation on the data for the reference lines and generating an optimal synthesis image of the both-eyes images by applying the image control amount. Type: Application. Filed: August 8, 2012. Publication date: February 28, 2013. Applicant: Samsung Electro-Mechanics Co., Ltd. Inventors: Joo Hyun Kim, Soon Seok Kang
-
Publication number: 20130050431. Abstract: A method of observing a cross-section of a cosmetic material includes a sample forming step of forming a sample by providing a cosmetic material on a sample holder; a freezing step of freezing the sample; a cutting step of forming a cut surface on the sample by processing the frozen sample by a focused ion beam; and a cut surface image processing step of obtaining a cut surface image of the cut surface using a scanning electron microscope. Type: Application. Filed: August 15, 2012. Publication date: February 28, 2013. Applicant: SHISEIDO COMPANY, LTD. Inventors: Norinobu YOSHIKAWA, Kaori Ikuta
-
Publication number: 20130050434. Abstract: The present invention provides a local multi-resolution 3-D face-inherent model generation apparatus, including one or more 3-D facial model generation cameras for photographing a face of an object at various angles in order to obtain one or more 3-D face models, a 3-D face-inherent model generation unit for generating a 3-D face-inherent model by composing the one or more 3-D face models, a local photographing camera for photographing a local part of the face of the object, a control unit for controlling the position of the local photographing camera on the 3-D face-inherent model, and a local multi-resolution 3-D face-inherent model generation unit for generating a local multi-resolution face-inherent model by composing an image captured by the local photographing camera and the 3-D face-inherent model. The invention further provides a local multi-resolution 3-D face-inherent model generation method using this apparatus, and a skin management system. Type: Application. Filed: July 31, 2012. Publication date: February 28, 2013. Applicant: Electronics and Telecommunications Research Institute. Inventors: Kap Kee KIM, Seung Uk Yoon, Bon Woo Hwang, Ji Hyung Lee, Bon Ki Koo
-
Publication number: 20130050428. Abstract: A method captures images of objects using an image capturing apparatus. The method obtains x-coordinate values, y-coordinate values, and z-coordinate values of the accelerations of a camera device sensed by a gravity sensor in a default time interval, calculates three-dimensional coordinate differences according to the obtained x-coordinate, y-coordinate, and z-coordinate values, and determines whether the three-dimensional coordinate differences are less than corresponding predefined thresholds. If at least one coordinate difference is not less than a corresponding predefined threshold, the method delays for a period of time to obtain the x-coordinate, y-coordinate, and z-coordinate values in a next default time interval. If all the three-dimensional coordinate differences are less than the corresponding predefined thresholds, the method controls the camera device to capture images of the subject object. Type: Application. Filed: July 1, 2012. Publication date: February 28, 2013. Applicants: HON HAI PRECISION INDUSTRY CO., LTD., HONG FU JIN PRECISION INDUSTRY (ShenZhen) CO., LTD. Inventors: GUANG-JIAN WANG, YAN LI, XIAO-MEI LIU, MENG-ZHOU LIU
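The per-axis threshold comparison at the heart of this method can be sketched in a few lines (threshold values are illustrative, not from the patent):

```python
def is_steady(prev, curr, thresholds=(0.1, 0.1, 0.1)):
    """Compare per-axis differences between two consecutive gravity-sensor
    acceleration readings (x, y, z) against predefined thresholds.
    Capture is allowed only when all three differences are below
    threshold, i.e. the camera is effectively still."""
    return all(abs(c - p) < t for p, c, t in zip(prev, curr, thresholds))
```

A capture loop would keep re-reading the sensor after a delay until `is_steady` returns True, then trigger the shutter, which is the behavior the abstract describes.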
-
Patent number: 8384762. Abstract: The present invention relates to a method of displaying stereographic images of a region R. The method comprises: moving a vehicle (10) relative to the region, the vehicle carrying a camera system (12) for acquiring images I of the region; during movement of the vehicle relative to the region acquiring a series of temporally sequential multiple images (I1, I2, . . . ) of the region at respective multiple different spaced apart locations (L1, L2, . . . ) of the vehicle relative to the region; displaying successive stereographic images of the region during movement of the vehicle relative to the region. Each stereographic image comprises a first of said multiple images acquired at a first location and a second of said multiple images acquired at a second location which are temporally spaced apart in the series one from the other. Type: Grant. Filed: September 18, 2009. Date of Patent: February 26, 2013. Assignee: MBDA UK Limited. Inventors: Keith Christopher Markham, Martin Simon Gate
-
Publication number: 20130044185. Abstract: An imaging system has a microscope having an objective lens and a projection device configured to project spatially modulated light in one of several preselected predetermined patterns through the objective lens and onto tissue. The system includes a camera configured to record an image of the tissue through the microscope and objective lens as illuminated by the spatially modulated light, and an image processor having a memory with a routine for performing spatial Fourier analysis on the image of the tissue to recover spatial frequencies. The image processor also constructs a three dimensional model of the tissue, and performs fitting of at least absorbance and scattering parameters of voxels of the model to match the recovered spatial frequencies. The processor then displays tomographic slices of the three dimensional model. Type: Application. Filed: August 15, 2012. Publication date: February 21, 2013. Inventors: Venkataramanan Krishnaswamy, Brian William Pogue
-
Publication number: 20130044181. Abstract: Embodiments of the present invention disclose a system and method for multi-viewpoint video capture. According to one embodiment, the system includes a camera housing for accommodating both a first multi-imager set and a second multi-imager set, with each multi-imager set including a plurality of optical cameras having different viewpoint directions and configured to produce a source image. Furthermore, the first multi-imager set and the second multi-imager set include corresponding cameras facing in approximately the same viewpoint direction. The first multi-imager set is positioned laterally adjacent to the second multi-imager set such that lines joining the centers of projection of corresponding cameras in the first multi-imager set and second multi-imager set are approximately parallel. Type: Application. Filed: May 14, 2010. Publication date: February 21, 2013. Inventors: Henry Harlyn Baker, Henry Sang, JR., Nelson Liang An Chang
-
Publication number: 20130044191. Abstract: A rotation angle obtaining unit obtains an angle by which a stereoscopic image of a subject is to be rotated around a line-of-sight direction. A multi-viewpoint image accumulation unit accumulates a plurality of images of the subject obtained in one direction by changing capturing positions, as multi-viewpoint images. A selection unit selects base images for the stereoscopic image from the multi-viewpoint images accumulated in the multi-viewpoint image accumulation unit, based on the rotation angle. A rotation unit rotates the selected base images based on the rotation angle to generate images which are to form the stereoscopic image of the subject. Type: Application. Filed: October 22, 2012. Publication date: February 21, 2013. Applicant: PANASONIC CORPORATION
-
Publication number: 20130044186. Abstract: Robust techniques for self-calibration of a moving camera observing a planar scene. Plane-based self-calibration techniques may take as input the homographies between images estimated from point correspondences and provide an estimate of the focal lengths of all the cameras. A plane-based self-calibration technique may be based on the enumeration of the inherently bounded space of the focal lengths. Each sample of the search space defines a plane in the 3D space and in turn produces a tentative Euclidean reconstruction of all the cameras that is then scored. The sample with the best score is chosen and the final focal lengths and camera motions are computed. Variations on this technique handle both constant focal length cases and varying focal length cases. Type: Application. Filed: July 17, 2012. Publication date: February 21, 2013. Inventors: Hailin Jin, Zihan Zhou
-
Publication number: 20130044187. Abstract: A 3D camera (10) for monitoring a spatial zone (12) is provided, wherein the 3D camera (10) has at least one image sensor (14a-b) for taking image data from the spatial zone (12), an evaluation unit (22, 24) for generating a distance image with three-dimensional image data from the image data of the image sensor (14a-b) and an illumination unit (100) with a light source (104) and an upstream microoptical array (106) with a plurality of microoptics (106a) to illuminate the spatial zone (12) with an irregular illumination pattern (20). In this respect, the light source (104) has a semiconductor array with a plurality of individual emitters (104a) and the microoptical array (106) has non-imaging microoptics (106a). Type: Application. Filed: August 17, 2012. Publication date: February 21, 2013. Applicant: SICK AG. Inventors: Markus HAMMES, Stefan MACK
-
Publication number: 20130044190. Abstract: 3-D model acquisition of an object is performed using two planar mirrors and a camera. According to some embodiments, 3-D reconstruction is achieved by recovering the scene geometry, including the equations of the mirrors, the camera parameters and the position of the markers, which give the location and orientation of the subjects. After establishing the geometry, a volume intersection algorithm is applied to build a 3-D model of the subject. Camera parameters and spatial constraints of the mirrors may be initially unknown. Camera parameters may be solved with reference to the object and references in the object. Further, distance from the camera to at least one point on the object may be determined once camera parameters are solved. Markers having fixed relative positions may be provided on the object for reference. Type: Application. Filed: August 17, 2012. Publication date: February 21, 2013. Applicant: UNIVERSITY OF ROCHESTER. Inventors: Bo HU, Jacek Antoni WOJTCZAK, Christopher BROWN
-
Publication number: 20130044189. Abstract: An imaging apparatus includes a plurality of imaging units, and a radiator operable to equalize the temperatures of the plurality of imaging units. Type: Application. Filed: September 13, 2012. Publication date: February 21, 2013. Inventors: Miyoko IRIKIIN, Yasuo YOKOTA, Makoto IYODA, Tomonori MIZUTANI, Yasuhiro MIYAMOTO
-
Publication number: 20130044180. Abstract: Stereoscopic teleconferencing techniques described herein are directed toward systems including a plurality of telecommunication stations. A first telecommunication station includes a stereoscopic camera, a pair of shutter glasses, and a first processing unit. A second telecommunication station includes a display, a pair of shutter glasses, and a processing unit. The stereoscopic camera of the first telecommunication station generates a set of stereoscopic images including one or more users. The processing unit of the first telecommunication station determines a location of the shutter glasses in the sets of stereoscopic images and replaces the image of the shutter glasses with the corresponding portion of the given user's face to generate modified sets of stereoscopic images. The display of the second telecommunication station outputs the modified sets of stereoscopic images. Type: Application. Filed: August 16, 2011. Publication date: February 21, 2013. Applicant: SONY CORPORATION. Inventor: Peter Shintani
-
Publication number: 20130044188. Abstract: A stereoscopic image is displayed with an appropriate amount of parallax based on auxiliary information recorded in a three-dimensional-image file. The size of a display which performs 3D display is acquired (Step S31), and a 3D image file is read (Step S32). The maximum display size capable of appropriately performing 3D display of each viewpoint image is acquired from metadata of the read 3D image file (Step S33), and the size acquired in Step S31 is compared with the maximum display size acquired in Step S33 (Step S34). Viewpoint numbers of images in which the maximum display size is larger are acquired (Step S35), the most appropriate image is selected from the images of the acquired viewpoint numbers (Step S37), and 3D display is performed using the selected image (Step S38). Therefore, an appropriate image can be selected based on the maximum display size and 3D display can be performed. Type: Application. Filed: October 25, 2012. Publication date: February 21, 2013. Applicant: FUJIFILM CORPORATION
-
Publication number: 20130038694. Abstract: A method for detecting moving objects including people. Enhanced monitoring, safety and security is provided through the use of a monocular camera and a structured light source, by trajectory computation, velocity computation, or counting of people and other objects passing through a laser plane arranged perpendicular to the ground, which can be set up anywhere near a portal, a hallway or other open area. Enhanced security is provided for portals such as revolving doors, mantraps, swing doors, sliding doors, etc., using the monocular camera and structured light source to detect and, optionally, prevent access violations such as “piggybacking” and “tailgating”. Type: Application. Filed: April 27, 2011. Publication date: February 14, 2013. Inventors: Sanjay Nichani, Chethan Reddy
-
Publication number: 20130038692. Abstract: A remote control system comprises a mobile object, a remote controller for remotely controlling the mobile object, and a storage unit where background images to simulate a driving room or an operation room of the mobile object are stored. The mobile object has a stereo camera, a camera control unit for controlling the image pickup direction of the stereo camera, and a first communication unit for communicating information including at least images photographed by the stereo camera. The remote controller has a second communication unit for communicating to and from the first communication unit, a control unit for controlling the mobile object, and a display unit for synthesizing at least a part of the images photographed by the stereo camera and the background images and for displaying the images so that a stereoscopic view can be displayed. Type: Application. Filed: August 2, 2012. Publication date: February 14, 2013. Applicant: KABUSHIKI KAISHA TOPCON. Inventors: Fumio Ohtomo, Kazuki Osaragi, Tetsuji Anai
-
Publication number: 20130038693. Abstract: The present invention is directed towards enhancing the reproduction of three-dimensional dynamic scenes on digital light processing (DLP) and liquid crystal display (LCD) projectors and displays by adding an optimal amount of motion blur to stimulate the covered eye to continue perceiving scene picture changes. Too much blur would bring smearing, but a lack of blur induces motion breaking. Type: Application. Filed: April 27, 2010. Publication date: February 14, 2013. Applicant: Thomson Licensing. Inventor: Emil Tchoukaleysky
-
Publication number: 20130038691. Abstract: Depth sensing imaging pixels include pairs of left and right pixels forming an asymmetrical angular response to incident light. A single microlens is positioned above each pair of left and right pixels. Each microlens spans across each of the pairs of pixels in a horizontal direction. Each microlens has a length that is substantially twice the length of either the left or right pixel in the horizontal direction; and each microlens has a width that is substantially the same as a width of either the left or right pixel in a vertical direction. The horizontal and vertical directions are horizontal and vertical directions of a planar image array. A light pipe in each pixel is used to improve light concentration and reduce cross talk. Type: Application. Filed: February 24, 2012. Publication date: February 14, 2013. Applicant: APTINA IMAGING CORPORATION. Inventors: Gennadiy Agranov, Dongqing Cao, Avi Yaron
-
Publication number: 20130038698. Abstract: An image processing device includes an input unit that receives a first image and a second image that are taken from different positions. A first cutting unit cuts a first block from the first image. A second cutting unit moves a second block by P pixels in a row direction, within a moving limit set in a processing area of the second image, and cuts the second block from the second image. A correlation value calculating unit calculates a correlation value between the first and second blocks. A deviation amount calculating unit calculates an amount of deviation between the first and second images based on the largest value of the correlation value. A setting unit narrows the moving limit based on the correlation value calculated in the N-th row when the amount of deviation is calculated in the (N+1)-th row of the first image. Type: Application. Filed: August 7, 2012. Publication date: February 14, 2013. Applicant: FUJITSU SEMICONDUCTOR LIMITED. Inventor: Yuji YOSHIDA
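The block search within a moving limit can be sketched as follows (assuming NumPy; a SAD-based similarity stands in for the unspecified correlation measure, and after processing row N the `max_shift` limit for row N+1 would be narrowed around the offset found here):

```python
import numpy as np

def best_offset(row_b, block, start, max_shift):
    """Slide `block` (cut from the first image) along a row of the second
    image within a moving limit [start, start + max_shift] and return the
    shift with the highest similarity. Similarity here is negated sum of
    absolute differences, so higher is better."""
    best, best_p = -np.inf, 0
    for p in range(max_shift + 1):
        cand = row_b[start + p : start + p + len(block)]
        if len(cand) < len(block):
            break  # moving limit ran past the processing area
        score = -np.abs(block - cand).sum()
        if score > best:
            best, best_p = score, p
    return best_p
```

Narrowing the limit row by row, as the abstract describes, cuts the number of candidate shifts tested, since adjacent rows of a stereo pair rarely differ much in deviation.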
-
Publication number: 20130038701Abstract: Described are a system, apparatus, and method to capture a stereoscopic image pair using an imaging device with a single imaging sensor. Particularly, discussed are systems and methods for capturing a first and second image through an image sensor, determining a vertical and horizontal disparity between the two images, and applying corrections for geometric distortion, vertical disparity, and convergence between the two images. Some embodiments contemplate displaying a directional indicator before the second image of the stereoscopic image pair is captured. By displaying a directional indicator, a better position for the second image of the stereoscopic image pair may be found, resulting in a higher quality stereoscopic image pair.Type: ApplicationFiled: August 12, 2011Publication date: February 14, 2013Applicant: QUALCOMM IncorporatedInventors: Szepo Robert Hung, Ruben M. Velarde, Thomas Wesley Osborne, Liang Liang
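One way such a directional indicator could work is to compare the current horizontal offset from the first capture position against a target stereo baseline and tell the user which way to move. The thresholds, names, and pixel units below are assumptions for illustration, not the patent's design:

```python
def directional_hint(current_offset_px, target_offset_px, tolerance_px=2):
    """Suggest a camera movement before the second capture of a
    stereoscopic pair, or signal that the position is good enough."""
    error = target_offset_px - current_offset_px
    if abs(error) <= tolerance_px:
        return "capture"
    return "move right" if error > 0 else "move left"

print(directional_hint(10, 64))   # move right
print(directional_hint(70, 64))   # move left
print(directional_hint(63, 64))   # capture
```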
-
Publication number: 20130038682Abstract: An apparatus for capturing a stereoscopic image maintains the image quality of partial image data by appropriately cutting out the partial image data from image data generated by an imaging unit. The apparatus includes imaging units which generate right-eye image data and left-eye image data having binocular parallax for making a viewer sense a stereoscopic image; an information storage unit which stores, when image center points of the right-eye and left-eye image data are origins, position information indicating positions of right and left correction points obtained by moving marks projected on the right-eye and left-eye image data by a differential vector, respectively; and a cutout control unit which, based on the position information, cuts out pieces of partial image data having the same size from the right-eye and left-eye image data.Type: ApplicationFiled: February 22, 2011Publication date: February 14, 2013Applicant: JVC KENWOOD CORPORATIONInventor: Etsuya Takami
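The equal-size cutouts can be illustrated with a toy example: shift the right-eye cutout center by the differential vector so both patches cover the same scene content. Everything below (patch size, the synthetic gradient images, function names) is an illustrative assumption:

```python
def cut_out(image, center, size):
    """Cut a size x size block centered on `center` (x, y) from a
    2-D list-of-rows image."""
    cx, cy = center
    half = size // 2
    return [row[cx - half:cx - half + size]
            for row in image[cy - half:cy - half + size]]

# Toy 6x6 images; the mark in the right-eye image sits one pixel
# to the left, i.e. the differential vector is (1, 0).
left  = [[x + 10 * y for x in range(6)] for y in range(6)]
right = [[x - 1 + 10 * y for x in range(6)] for y in range(6)]

l_patch = cut_out(left,  (2, 2), 3)
r_patch = cut_out(right, (3, 2), 3)   # left center + differential vector
print(l_patch == r_patch)  # True: same size, same scene content
```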
-
Publication number: 20130038606Abstract: There is provided an image processing apparatus including a left eye image input unit configured to input a left eye image (L image) which is a left eye image signal applied to three-dimensional image display, a right eye image input unit configured to input a right eye image (R image) which is a right eye image signal applied to three-dimensional image display, a parallax information generating unit configured to generate parallax information from the left eye image (L image) and the right eye image (R image), and a virtual view point image generating unit configured to receive the left eye image (L image), the right eye image (R image), and the parallax information, and generate virtual view point images including a view point image other than view points of the received LR images.Type: ApplicationFiled: July 31, 2012Publication date: February 14, 2013Inventors: Suguru USHIKI, Masami Ogata
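Given per-pixel parallax, an intermediate view can be synthesized by shifting pixels a fraction of the way between the left and right images. This is a minimal 1-D forward-warping sketch of that idea; the warping scheme, hole handling, and names are assumptions, not the patent's method:

```python
def virtual_view(left_row, disparity, alpha):
    """Forward-warp one row of a left-eye image toward the right eye.
    alpha=0.0 reproduces the left view; alpha=1.0 lands on the right
    view. Pixels left uncovered by the warp keep the sentinel None."""
    out = [None] * len(left_row)
    for x, value in enumerate(left_row):
        nx = x + round(alpha * disparity[x])
        if 0 <= nx < len(out):
            out[nx] = value
    return out

row       = [10, 20, 30, 40]
disparity = [2, 2, 0, 0]   # the first two pixels shift by 2 at alpha=1

print(virtual_view(row, disparity, 0.5))  # [None, 10, 30, 40]
print(virtual_view(row, disparity, 1.0))  # [None, None, 30, 40]
```

The `None` entries are disocclusion holes; a real renderer would inpaint them or blend warps from both eyes.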
-
Publication number: 20130038697Abstract: A mobile phone includes a shell and a camera module received in the shell. The shell has a first surface and a second surface opposite to the first surface. The first surface defines a first opening. The second surface defines a second opening and a third opening. The camera module includes a first imaging unit, a second imaging unit, and an image processor. The first imaging unit is aligned with the first opening or the second opening, to obtain a first image of an object. The second imaging unit is aligned with the third opening to obtain a second image of the object. When the first imaging unit faces the second opening, the image processor processes the first image and the second image to form a three-dimensional image.Type: ApplicationFiled: June 15, 2012Publication date: February 14, 2013Applicant: HON HAI PRECISION INDUSTRY CO., LTD.Inventor: YEN-CHUN CHEN
-
Publication number: 20130038689Abstract: In a minimally invasive surgical system, an image capture unit includes a prism assembly and sensor assembly. The prism assembly includes a beam splitter, while the sensor assembly includes coplanar image capture sensors. Each of the coplanar image capture sensors has a common front end optical structure, e.g., the optical structure distal to the image capture unit is the same for each of the sensors. A controller enhances images acquired by the coplanar image capture sensors. The enhanced images may include (a) visible images with enhanced feature definition, in which a particular feature in the scene is emphasized to the operator of minimally invasive surgical system; (b) images having increased image apparent resolution; (c) images having increased dynamic range; (d) images displayed in a way based on a pixel color component vector having three or more color components; and (e) images having extended depth of field.Type: ApplicationFiled: August 12, 2011Publication date: February 14, 2013Inventor: Ian McDowall
-
Publication number: 20130038699Abstract: A recording unit records the plurality of time-series parallax images output from the plurality of imaging units on a recording medium when the moving image is photographed, reads out first and second information from a storage unit, and records the first information and the second information on the recording medium in association with the plurality of parallax images.Type: ApplicationFiled: November 9, 2010Publication date: February 14, 2013Applicant: FUJIFILM CORPORATIONInventor: Junji Hayashi
-
Publication number: 20130038700Abstract: An object is to obtain a focal length of an imaging device easily and at high speed. To achieve this object, an information processing device is configured to include: an image obtaining unit that, with regard to an object in a state where a spatial relationship with an imaging system is kept, obtains a standard image in which the object is imaged by an imaging condition having a known standard focal length, and a reference image in which the object is imaged by an imaging condition having an unknown reference focal length; and a focal length obtaining unit that obtains a value of the reference focal length by performing arithmetic operation processing for scaling a value of the standard focal length by using information of image sizes of the object in the standard image and the reference image.Type: ApplicationFiled: April 8, 2011Publication date: February 14, 2013Inventor: Shinichi Horita
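The scaling the abstract describes reduces to a single proportion: with the camera-to-object geometry fixed, the image size of the object grows linearly with focal length, so the unknown focal length is the known one scaled by the ratio of measured image sizes. A sketch under that assumption, with all numbers illustrative:

```python
def reference_focal_length(f_standard, size_standard, size_reference):
    """Scale a known focal length by the ratio of the object's measured
    image sizes. Valid only while the spatial relationship between the
    object and the imaging system is unchanged."""
    return f_standard * (size_reference / size_standard)

# The object spans 200 px at a known 28 mm focal length and 350 px
# in the image taken at the unknown focal length.
print(reference_focal_length(28.0, 200.0, 350.0))  # 49.0
```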
-
Publication number: 20130033572Abstract: The invention is directed to systems, methods and computer program products for optimizing usage of image sensors in a stereoscopic environment. The method includes: (a) providing a first image sensor, where the first image sensor is associated with a first image sensor area and a first imaging area; (b) determining a distance from the camera to an object to be captured; and (c) shifting the first imaging area along a length of the first image sensor area, where the amount of the shifting is based at least partially on the distance from the camera to the object, and where the first imaging area can shift along an entire length of the first image sensor area. The invention optimizes usage of an image sensor by permitting an increase in disparity control. Additionally, the invention reduces the closest permissible distance of an object to be captured using a stereoscopic camera.Type: ApplicationFiled: October 17, 2011Publication date: February 7, 2013Applicant: SONY ERICSSON MOBILE COMMUNICATIONS ABInventor: Mats Wernersson
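The dependence of the shift on object distance follows ordinary stereo geometry: disparity, and hence the imaging-area shift needed to keep a near object framed, is roughly focal length times baseline divided by distance. A sketch under that assumption; the parameter names, units, and clamping rule are illustrative, not the patent's specification:

```python
def imaging_area_shift(focal_px, baseline_mm, distance_mm, max_shift_px):
    """Shift (in pixels) to apply to the imaging area for an object at
    the given distance, clamped to the sensor length available for
    shifting. Nearer objects demand larger shifts."""
    shift = focal_px * baseline_mm / distance_mm
    return min(shift, max_shift_px)

print(imaging_area_shift(1000.0, 50.0, 500.0, 400.0))   # 100.0 (near)
print(imaging_area_shift(1000.0, 50.0, 5000.0, 400.0))  # 10.0 (far)
```

The clamp is what bounds the closest permissible object distance: once the required shift exceeds the sensor length, the object is too close to capture.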
-
Publication number: 20130033575Abstract: There are provided an imaging element that photographs multiple viewing point images corresponding to images observed from different viewing points, and an image processing unit that separates an output signal of the imaging element, acquires the plurality of viewing point images corresponding to the images observed from the different viewing points, and generates a left eye image and a right eye image for three-dimensional image display on the basis of the plurality of acquired viewing point images. The image processing unit generates parallax information on the basis of the plurality of viewing point images obtained from the imaging element and generates a left eye image and a right eye image for three-dimensional image display by 2D3D conversion processing using the generated parallax information. By this configuration, a plurality of viewing point images are acquired on the basis of one photographed image, and images for three-dimensional image display are generated.Type: ApplicationFiled: July 17, 2012Publication date: February 7, 2013Applicant: Sony CorporationInventors: Seiji Kobayashi, Atsushi Ito
-
Publication number: 20130033578Abstract: A method and a system for processing multi-aperture image data are described, wherein the method comprises: capturing image data associated with one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least a first aperture and to spectral energy associated with at least a second part of the electromagnetic spectrum using at least a second and third aperture; generating first image data associated with said first part of the electromagnetic spectrum and second image data associated with said second part of the electromagnetic spectrum; and generating depth information associated with said captured image on the basis of displacement information in said second image data, preferably on the basis of displacement information in an auto-correlation function of the high-frequency image data associated with said second image data.Type: ApplicationFiled: February 19, 2010Publication date: February 7, 2013Inventor: Andrew Augustine Wajs
-
Publication number: 20130033577Abstract: A multiple-lens camera has only one image sensor to capture a number of images at different viewing angles. Using a single image sensor, instead of a number of separate image sensors, to capture multiple images simultaneously avoids the calibration process otherwise needed to ensure that color balance and gain are the same across all the image sensors used. The camera has an adjustment mechanism for adjusting the distance between the image lenses, and a processor to receive from the image sensor electronic signals indicative of image data of the captured images. The camera has a connector to transfer the processed image data to an external device or to an image display. The image display device is configured to display one of said plurality of images.Type: ApplicationFiled: August 3, 2012Publication date: February 7, 2013Applicant: 3DV CO. LTD.Inventor: Allen Kwok-Wah Lo
-
Publication number: 20130033573Abstract: Provided is an image extraction method of an optical phase extraction system. The image extraction method may include checking whether a phase error due to an environmental disturbance of the optical fiber occurs, by monitoring an output signal obtained by interfering reflection optical signals reflected through two paths. When a phase error occurs, the error is compensated using a closed-loop phase compensation control method through one of the two paths, and an image is extracted by capturing an image of the object in a state where the image of the object is shifted by the set phase value once the phase error is compensated. According to the inventive concept, a phase error occurring in an optical-fiber-type interferometer due to an environmental disturbance is minimized or compensated. Also, since an interference image accurately shifted by the phase value set among arbitrary various phase values is obtained through a camera, the reliability of the extracted three-dimensional phase information is guaranteed.Type: ApplicationFiled: June 15, 2012Publication date: February 7, 2013Applicant: Electronics and Telecommunications Research InstituteInventors: Hyoung Jun PARK, Hyun Seo KANG, Young Sun KIM, Kwon-Seob LIM, In Hee SHIN, Young Soon HEO
-
Publication number: 20130033571Abstract: A method and gesture-based control system for manipulating a 3-dimensional medical dataset include translating a body part and detecting the translation of the body part with a camera system. The method and system include translating a crop plane in the 3-dimensional medical dataset based on the translation of the body part. The method and system include cropping the 3-dimensional medical dataset at the location of the crop plane after translating the crop plane, and displaying the cropped 3-dimensional medical dataset using volume rendering.Type: ApplicationFiled: August 3, 2011Publication date: February 7, 2013Applicant: General Electric CompanyInventor: Erik N. Steen
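Mapping a tracked hand translation onto a crop plane and discarding the voxels beyond it can be sketched as follows. The volume is a pure-Python list of slices, and the axis mapping, scale factor, and zero-fill cropping are assumptions for illustration:

```python
def translate_plane(plane_z, hand_delta_mm, mm_per_slice=5.0):
    """Move the crop plane by the tracked hand translation,
    converting millimeters of hand motion into slice units."""
    return plane_z + hand_delta_mm / mm_per_slice

def crop_at_plane(volume, plane_z):
    """Zero out every voxel slice at or beyond the crop plane along z,
    leaving the slices in front of it untouched."""
    return [sl if z < plane_z else [[0] * len(row) for row in sl]
            for z, sl in enumerate(volume)]

# Four slices of a 2x2 volume, all ones.
volume = [[[1, 1], [1, 1]] for _ in range(4)]
plane = translate_plane(0.0, hand_delta_mm=10.0)  # 10 mm -> slice 2
cropped = crop_at_plane(volume, plane)
print([sl[0][0] for sl in cropped])  # [1, 1, 0, 0]
```

The cropped volume would then be handed to a volume renderer for display, as the abstract's final step describes.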