Abstract: A portable terminal apparatus consecutively captures images of an identical image capture object more than once. The portable terminal apparatus determines whether or not the plurality of pieces of captured image data thus captured includes a combination of a given number (an integer of not less than 2) of pieces of captured image data which can be subjected to high resolution correction by a correction processing section and are offset from one another by a given amount, and transmits, to an image output apparatus, the given number of pieces of captured image data determined to constitute such a combination. The image output apparatus receives the given number of pieces of captured image data from the portable terminal apparatus and, in accordance with them, carries out the high resolution correction to prepare high resolution image data having a higher resolution than the given number of pieces of captured image data received.
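The combination search this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the helper name `find_offset_combination`, the per-frame `(dx, dy)` offset table, and the tolerance-based matching criterion are all assumptions, since the abstract leaves the offset test abstract.

```python
import itertools

def find_offset_combination(offsets, n, target_offset, tol=0.1):
    """Search all n-frame combinations of estimated inter-frame offsets
    for one whose pairwise sub-pixel displacements match the required
    given amount within a tolerance.

    `offsets` maps a frame index to its (dx, dy) shift, in pixels,
    relative to frame 0.  Hypothetical sketch of the combination test.
    """
    for combo in itertools.combinations(sorted(offsets), n):
        ok = True
        for a, b in itertools.combinations(combo, 2):
            # Sub-pixel part of the relative displacement between the pair.
            dx = abs(offsets[b][0] - offsets[a][0]) % 1.0
            dy = abs(offsets[b][1] - offsets[a][1]) % 1.0
            if not (abs(dx - target_offset) <= tol or abs(dy - target_offset) <= tol):
                ok = False
                break
        if ok:
            return combo   # these frames would be transmitted
    return None            # no suitable combination was captured
```

If no combination passes, the apparatus would presumably keep capturing rather than transmit.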
Abstract: A camera module includes an optical assembly, an image sensor, a barrel holder, an angle measurement assembly, a perpendicularity adjustment assembly and a controller. The optical assembly has an optical axis. The image sensor has a light sensing surface. The optical assembly and the image sensor are housed in the barrel holder. The angle measurement assembly is configured for measuring perpendicularity of the optical axis to the light sensing surface of the image sensor. The perpendicularity adjustment assembly is arranged between the optical assembly and the barrel holder, and includes an electrostrictive member deformable in response to a voltage, thereby adjusting the perpendicularity of the optical axis to the light sensing surface. The controller is configured for providing the voltage to the perpendicularity adjustment assembly to deform the electrostrictive member.
Abstract: An imaging apparatus includes an image sensor and a processor that merges together a plurality of images captured by said image sensor to produce a composite image. The positions of the plurality of images are adjusted to reduce displacement of a reference area that is determined within each one of said plurality of images before the plurality of images is merged together. The reference area includes at least one of an in-focus area determined by a focusing operation, a face area determined by face-sensing processes, a predetermined-color area determined by white-balance information and predetermined color information, and a predetermined-brightness area determined by photometry.
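The displacement-reducing adjustment described above can be illustrated with a toy one-dimensional alignment. This is a sketch only: the abstract does not specify the matching metric, so sum-of-absolute-differences over a 1-D reference area stands in for the real 2-D search, and `best_shift` is a hypothetical name.

```python
def best_shift(ref, frame, max_shift=3):
    """Find the integer shift of `frame` that best re-aligns a 1-D
    reference area `ref` (e.g. pixel intensities of an in-focus region),
    by minimising the sum of absolute differences.  Simplified 1-D
    stand-in for the 2-D alignment performed before merging."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = 0
        for i in range(len(ref)):
            j = i + s
            if 0 <= j < len(frame):
                err += abs(ref[i] - frame[j])
            else:
                err += 255   # penalise samples that fall outside the frame
        if err < best_err:
            best, best_err = s, err
    return best
```

Each image would be shifted by the negative of its estimated displacement before the merge.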
Abstract: An imaging method includes a step of setting, when a digital zoom operation mode for enlarging an image imaged by an imaging part of an X-Y address type is selected, a zoom magnification, and enlarging the image at the set zoom magnification. The imaging method further includes the steps of: setting an imaging range in a vertical direction of the imaging part according to the zoom magnification set in the digital zoom step; outputting a driving signal for scanning the shutter signal and the readout signal to perform exposure in the imaging range set in the imaging range setting step, thereby driving the imaging part; and discarding, when the zoom magnification is changed in the digital zoom step, images imaged by the imaging part before and after the change of the zoom magnification so as to prevent those images from being used.
Abstract: When a request concerning a transmitting method of video frames is received from a client apparatus in a communicating unit that communicates with said client apparatus, it is discriminated whether the received request is a request for a transmitting method of the video frames for recording or a request for a transmitting method of the video frames for a live display. If the received request is the request for the transmitting method for recording, each of the video frames formed in an image sensing unit is temporarily stored in a memory and each of the stored video frames is transmitted. If the received request is the request for the transmitting method for the live display, a process for transmitting the latest frame among the video frames formed in the image sensing unit is repetitively executed.
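The record/live dispatch above amounts to a simple policy choice: buffer-and-send-all versus send-latest-only. A minimal in-memory sketch, with the hypothetical function `transmit` and a `send` callback standing in for the actual network transmission:

```python
from collections import deque

def transmit(frames, mode, send):
    """Dispatch captured video frames per the client's requested method.
    'record' temporarily stores every frame and transmits each in order
    (no frame may be lost); 'live' repeatedly transmits only the most
    recent frame so the display never lags behind the sensor."""
    if mode == "record":
        buffer = deque(frames)        # temporarily store every frame
        while buffer:
            send(buffer.popleft())    # transmit all stored frames in order
    elif mode == "live":
        latest = None
        for f in frames:              # only the newest frame survives
            latest = f
        if latest is not None:
            send(latest)
```

In a real camera the two loops would run continuously against the sensor output rather than over a finite list.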
Abstract: A super-resolution processing portion has a high-resolution image generation portion that fuses a plurality of first input images together to generate a high-resolution image. The first input images are shot at a shutter speed equal to or faster than the super-resolution limit shutter speed, which is the lower-limit shutter speed that enables super-resolution processing to make the resolution of the output image equal to or higher than that of the input images. According to the amount of exposure, one of the following different methods for super-resolution processing is selected: a first method that yields as the output image the high-resolution image; a second method that yields as the output image a weighted added image resulting from weighted addition of the high-resolution image and an image based on an averaged image; and a third method that yields as the output image a weighted added image resulting from weighted addition of the high-resolution image and an image based on a second input image.
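The three-way selection in this abstract depends only on the amount of exposure. A sketch of that dispatch, under stated assumptions: the two thresholds, their direction (more exposure favouring the pure fused image), and the label strings are all illustrative, since the abstract only says the choice depends on exposure.

```python
def select_method(exposure, low_threshold, high_threshold):
    """Pick one of the three super-resolution output strategies from the
    amount of exposure.  Thresholds and mapping direction are assumed."""
    if exposure >= high_threshold:
        return "fused"                # first method: high-resolution fused image
    if exposure >= low_threshold:
        return "fused+averaged"       # second: weighted add with averaged image
    return "fused+long_exposure"      # third: weighted add with second input image
```

The weighted addition itself would be a per-pixel blend whose weights could also vary with the exposure amount.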
Abstract: An image processor includes a frequency transformation unit configured to perform frequency transformation processing with respect to a multiple image, and an inter-superimposed-image displacement acquisition unit that calculates a displacement amount between images forming superimposed images included in the multiple image by using a frequency-transformed image subjected to the frequency transformation processing by the frequency transformation unit.
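Estimating a displacement from frequency-transformed images, as this abstract describes, is commonly done by phase correlation; the following 1-D sketch illustrates that family of techniques, not the patent's exact method. The naive DFT helpers and the function names are assumptions for self-containment.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n) for f in range(n)) / n
            for k in range(n)]

def displacement_1d(a, b):
    """Estimate the circular shift of signal `b` relative to `a` by phase
    correlation: multiply one spectrum by the conjugate of the other,
    normalise to unit magnitude, and locate the peak of the inverse
    transform.  1-D stand-in for the 2-D inter-image displacement."""
    A, B = dft(a), dft(b)
    R = [(x * y.conjugate()) / (abs(x * y.conjugate()) or 1.0)
         for x, y in zip(B, A)]
    corr = idft(R)
    return max(range(len(corr)), key=lambda k: corr[k].real)
```

With real images, a windowed FFT and sub-pixel peak interpolation would replace the naive transforms.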
November 26, 2008
Date of Patent: October 23, 2012
Tokyo Institute of Technology, Olympus Corporation
Abstract: A solid-state image pickup device includes a plurality of light sensing sections; a plurality of vertical transfer registers configured to transfer signal charge of the plurality of light sensing sections in the vertical direction; a horizontal transfer register configured to transfer the signal charge in the horizontal direction; a floating gate amplifier that is placed at an output side of the horizontal transfer register; a floating diffusion amplifier that is placed in a horizontal transfer register which is provided at a stage subsequent to the floating gate amplifier; and an overflow drain mechanism that is placed in the horizontal transfer register between the floating gate amplifier and the floating diffusion amplifier.
Abstract: An imaging apparatus includes an imaging unit configured to perform photoelectric conversion of an optical image, an optical zoom unit configured to perform optical magnification variation in response to a zooming operation, an electronic zoom unit configured to perform electronic magnification variation on a signal output from the imaging unit, and a controller configured to operate the electronic zoom unit together with the optical zoom unit in a first zoom range in response to the zooming operation, to operate the optical zoom unit without operating the electronic zoom unit in response to the zooming operation in a second zoom range, which is closer to a telephoto side than the first zoom range, and to operate the electronic zoom unit together with the optical zoom unit in response to the zooming operation in a third zoom range, which is closer to the telephoto side than the second zoom range.
Abstract: An image pick-up apparatus that makes it possible to appropriately set a holding time, during which there remains a possibility of returning to a state that allows tracking when a subject temporarily cannot be tracked, thereby improving ease of use. A specifying unit specifies a subject included in a captured image. A display unit displays the captured image on a screen together with identification information showing that the specified subject is being tracked. A tracking unit tracks the subject. A setting unit sets the holding time, during which the display of the identification information is held, according to at least one of a focal length of the image pick-up apparatus and a subject distance. An elimination unit eliminates the display of the identification information when the holding time has passed after the tracking unit lost the subject.
Abstract: The present invention discloses a camera door opening/shutting apparatus for a portable communication device. The apparatus includes a camera door disposed at a main body to expose or cover a camera lens included in the main body according to a sliding movement of the camera door, and a door sliding part disposed between the main body and the camera door to slidably couple the camera door with the main body.
Abstract: The accuracy of servo control of a corrective lens in an image stabilization control circuit is prevented from decreasing due to non-linear characteristics of a position-detecting element. A signal representing a component of vibration of an image pickup apparatus is generated based on an angular velocity signal from a vibration-detecting element. A microcomputer corrects the vibration component signal according to a predetermined correction function and generates a target position signal representing a target position of the lens. A position-detection signal based on an output from the position-detecting element is compared with the target position signal, and the position of the lens is servo-controlled. The correction function is set so that the characteristics of variation of the target position signal relative to the target position will be the same as the characteristics of variation of the position-detection signal relative to the actual position of the lens.
Abstract: A control method of detecting an object image to be focused from a sensed image, setting a focus detection area in detecting an in-focus state of a photographing optical system, and exercising control such that the photographing optical system is moved based on a signal output in the focus detection area to carry out focus control, wherein, in the setting of the focus detection area, a first focus detection area corresponding to an object to be focused detected from the sensed image and a second focus detection area which is larger than the first focus detection area are set, and in the focus control, control is exercised such that the photographing optical system is moved based on output signals in the set first and second focus detection areas.
Abstract: Disclosed is a method for controlling integration in an image sensor, capable of removing flicker noise without imposing any burden on the hardware through additional logic. The method, for controlling an exposure time of an image sensor employing a line scan method, includes the steps of: performing an integration of a first line when the integration time differs from an integer multiple of the light source period; and performing an integration of a second line at a phase substantially equal to the phase at which the integration of the first line is started.
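The phase-matching idea above can be made concrete: if each line's integration starts at the same phase of the light-source intensity waveform, every line accumulates the same flicker contribution even when the integration time is not an integer multiple of the light-source period. A sketch under stated assumptions (the helper names and the periods-apart spacing rule are illustrative):

```python
def integration_start_phase(start_time, light_frequency):
    """Phase of the light-source cycle (0..1) at which a line's
    integration begins."""
    period = 1.0 / light_frequency
    return (start_time % period) / period

def next_start_aligned(first_start, line_index, light_frequency):
    """A start time for `line_index` that reproduces the first line's
    phase: offset by a whole number of light-source periods (sketch)."""
    return first_start + line_index * (1.0 / light_frequency)
```

In practice the sensor's line timing would be quantised to its clock, so the phases are only substantially, not exactly, equal.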
Abstract: An image capturing apparatus performs autofocus control that uses a face detection function. The image capturing apparatus sets a face region as an AF frame if face detection is successful. However, if a state in which face detection is successful transitions to a state in which face detection has failed, and furthermore a variation between subject distances is less than or equal to a threshold value, the image capturing apparatus maintains the previous AF frame setting instead of changing the AF frame setting. If the variation in subject distances is greater than the threshold value, the image capturing apparatus sets the AF frame to a predetermined region that does not follow a face region.
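The AF-frame decision rule in this abstract is a small piece of branching logic. A minimal sketch, assuming hypothetical names for the regions and parameters (the abstract does not name them):

```python
def update_af_frame(face_detected, face_region, prev_frame,
                    distance_change, threshold, default_region):
    """Decide the AF frame per the abstract's rules: use the face region
    while detection succeeds; on a detection failure, keep the previous
    AF frame if the subject distance changed by no more than `threshold`,
    otherwise reset to a fixed region that does not follow a face."""
    if face_detected:
        return face_region
    if distance_change <= threshold:
        return prev_frame       # subject likely hasn't moved: keep setting
    return default_region       # scene changed: fall back to fixed region
```

Keeping the previous frame on a brief detection dropout avoids visible AF-frame flicker while the face is momentarily lost.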
Abstract: An information processing apparatus includes an image-information obtaining unit configured to obtain first image information; an information associating unit configured to generate first related information having certain content related to the first image information and to associate the first related information with the first image information; a display processor configured to use function data including second image information and condition information to display an image of the second image information, and to allow display of a representative image representing the first image information on the image of the second image information; a determining unit configured to determine whether the content of the first related information associated with the first image information satisfies a condition represented by the condition information; and a display controller configured to cause the display processor to display the representative image only when the determining unit determines that the condition is satisfied.
Abstract: A cover member fixed to a pickup element has a non-vertical surface and an upright surface and satisfies H2·tan(θA−2θC) ≤ L1+L1′ at any point on the upright surface, and H·tan(θA−2θB)+(H1)tan θB ≤ L1 and θB>θC at any point on the non-vertical surface, where θA is the inclination of incident light, θB is the inclination at a point on the non-vertical surface, θC is the inclination at a point on the upright surface, H1 is the height of the non-vertical surface, H2 is the height of the upright surface, H is the height of the frame portion, L1 is the distance from the edge of a pixel region to the upper edge of the upright surface, and L1′ is the distance from the upper edge to the lower edge of the upright surface in the planar direction.
June 7, 2010
Date of Patent: October 2, 2012
Canon Kabushiki Kaisha
Koji Tsuduki, Hisatane Komori, Yasuhiro Matsuki, Satoru Hamasaki
Abstract: An electronic camera includes an imaging device. The imaging device has an imaging surface capturing an object scene and repeatedly outputs an object scene image. An orientation of the imaging surface is repeatedly changed by a CPU corresponding to a group photograph mode under which a group of a plurality of faces is captured. The CPU detects a face image from the object scene image outputted from the imaging device in association with a changing process of the orientation of the imaging surface. Furthermore, the CPU decides an angle range within which the group of a plurality of faces is contained based on a detection result of the face image. In addition, the CPU combines the plurality of object scene images outputted from the imaging device so as to create a combined image corresponding to the decided angle range. The created combined image is recorded in a recording medium.
Abstract: When still images are captured, signals read from all pixel cells of a pixel cell array having a Bayer array of color filters are selected, as a first output to be recorded and displayed, by an output selector. When moving images are captured, a 9-pixel binned signal is selected as the first output by the output selector, and a differential component between an output signal of a pixel cell located at the center-of-mass position of the 9 pixels, and the 9-pixel binned signal, is supplied as a second output from a memory circuit. Contrast information of a subject image is acquired based on the second output to perform lens focus adjustment.
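The moving-image path above pairs a 9-pixel binned signal with a differential against the centre pixel. A toy sketch of those two quantities for one 3×3 block of same-colour pixels (the function name and plain averaging-as-binning are assumptions; a real sensor bins charge or voltage on-chip):

```python
def binned_and_differential(pixels):
    """From a 3x3 block of same-colour pixel values, return the 9-pixel
    binned (averaged) signal used as the first output for moving images,
    and the differential between the pixel at the centre-of-mass position
    and that binned signal, from which contrast information for lens
    focus adjustment can be derived."""
    flat = [v for row in pixels for v in row]
    binned = sum(flat) / 9.0
    centre = pixels[1][1]        # pixel at the centre-of-mass position
    return binned, centre - binned
```

A large differential magnitude indicates high local contrast, which contrast-detection autofocus seeks to maximise.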