Abstract: An image pickup apparatus generates a display image signal by applying predetermined dynamic range expansion processing to an image signal obtained by underexposed image pickup, and detects, as a warning region, an image region containing a pixel within a predetermined gradation range, either before or after the dynamic range expansion processing is applied. The image pickup apparatus synthesizes the display image signal with a warning pattern corresponding to the warning region and displays the synthesized image as a live view image.
Abstract: A holding structure 100 for an image pickup device includes a back-incident type image pickup device 1 and a holding member 51 that holds the image pickup device 1. The image pickup device 1 has an image pickup element 11 that performs imaging and a wiring board 12 electrically connected to the image pickup element 11. The holding member 51 is detachably attached to a side face 27 of the wiring board 12; at each of the opposing side faces 27a, 27a of the wiring board 12, a to-be-fitted portion 28 is formed, and the to-be-fitted portion 28 is fitted together with a fitting portion 54 formed on the holding member 51. Thus, even when an impact is applied to the image pickup device 1 during inspection, delivery, etc., the holding member 51 relieves the impact applied to the wiring board 12 and the image pickup element 11 while being prevented from coming off.
Abstract: In an imaging apparatus, an imager generates an imaged picture by converting incident light from a subject that is incident via a focus lens. A setting unit sets, as a movement range for the focus lens, a range of focus lens positions corresponding to imaging magnifications whose rate of change lies within a fixed range of a basis, the basis being the imaging magnification corresponding to the current position of the focus lens. A focus controller conducts focus control with respect to the subject by moving the focus lens within the set movement range.
Abstract: A flash detection unit calculates a line average luminance of each line of the current screen of image data and a screen average luminance of a past screen at least one screen before the current screen and compares the calculated line average luminance with the calculated screen average luminance to detect whether the current screen includes a line of high luminance due to a flash. A holding unit holds the past screen of the image data. A flash correction unit, if it is detected that some lines of the current screen have high luminance, replaces the lines having high luminance in the current screen with corresponding lines of the past screen held in the holding unit to correct the image data.
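The detection-and-replacement procedure in this abstract can be sketched directly. In this hedged illustration, frames are plain lists of luminance rows, and the flagging rule (row average exceeding 1.5 times the previous screen's average) is an assumed threshold, not one taken from the patent:

```python
def correct_flash(current, previous, threshold=1.5):
    """Replace flash-brightened lines of `current` with the corresponding
    lines of `previous`.  Each frame is a list of rows of luminance
    values.  A row is flagged as high-luminance when its average exceeds
    `threshold` times the average luminance of the previous screen."""
    prev_avg = sum(sum(row) for row in previous) / sum(len(row) for row in previous)
    corrected = []
    for cur_row, prev_row in zip(current, previous):
        row_avg = sum(cur_row) / len(cur_row)
        # Swap in the held line from the past screen if this line is flagged.
        corrected.append(prev_row if row_avg > threshold * prev_avg else cur_row)
    return corrected
```

This mirrors the abstract's structure: the held past screen supplies replacement lines only where the current screen's line average luminance is anomalously high.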
Abstract: Camera and sensor augmented reality techniques are described. In one or more implementations, an optical basis is obtained that was generated from data obtained by a camera of a computing device and a sensor basis is obtained that was generated from data obtained from one or more sensors that are not a camera. The optical basis and the sensor basis describe a likely orientation or position of the camera and the one or more sensors, respectively, in a physical environment. The optical basis and the sensor basis are compared to verify the orientation or the position of the computing device in the physical environment.
Abstract: A vehicle reserve security system includes a plurality of cameras, a digital video recorder and a video server. A front view camera views an area in front of a vehicle. A front left side camera views an area behind the vehicle and to a left thereof. A front right side camera views an area behind the vehicle and to a right thereof. A rear left side camera views an area in front of the vehicle and to a left thereof. A rear right side camera views an area in front of the vehicle and to a right thereof. A rear view camera views an area to a rear of the vehicle. The output of the plurality of cameras is input to the digital video recorder. The digital video recorder outputs six delayed video signals to a video server. The video server outputs a single delayed signal to a computer.
Abstract: Disclosed herein is an image pickup apparatus, including: an image pickup lens; a shutter capable of changing over a plurality of light paths from the image pickup lens individually between a light transmission state and a light blocking state; an image pickup element including a plurality of light receiving pixels for which exposure and signal reading out are carried out line-sequentially and adapted to acquire picked up image data based on transmission light beams of the light paths; and a control section adapted to control changeover between the light transmission state and the light blocking state of the light paths by the shutter.
Abstract: A monitoring apparatus includes a video data obtaining section obtaining video data from a monitoring terminal, a metadata obtaining section obtaining metadata describing information on a monitored subject from the monitoring terminal, a filter setting storage section storing a filter setting, a metadata filter section performing filtering processing on the metadata by using a filter setting stored in the filter setting storage section, and an output section outputting a monitoring result based on the result of the filtering processing performed in the metadata filter section.
Abstract: High-bit-depth sensors often capture more information than can be displayed on a commercially available display. Accordingly, image processing systems and methods are disclosed to ensure that as much information as possible is presented to a user in a meaningful and statistically significant manner. The image processing systems and methods disclosed herein allow a user to view and process data that would otherwise be invisible to the user.
Type:
Grant
Filed:
March 31, 2011
Date of Patent:
January 6, 2015
Assignee:
DRS Sustainment Systems, Inc.
Inventors:
Matthew Wootton, Christopher Nissman, Michael Luecking, Matthew Blevins
Abstract: User imaging terminals (such as mobile phones with camera or video functionality) may be used to take images that are used to create an image stream of an event. In one implementation, a device may receive the images and transmit the images to one or more second users. The device may receive indications, from the second users, of whether the images are approved by the second users for incorporation into an image stream; and generate the image stream, based on the images that are approved by the second users. The image stream may be transmitted to one or more display devices.
Type:
Grant
Filed:
December 29, 2010
Date of Patent:
January 6, 2015
Assignee:
Verizon Patent and Licensing Inc.
Inventors:
Paul T. Schultz, Umashankar Velusamy, Robert A. Sartini
Abstract: An imaging apparatus including an imaging unit configured to acquire an image signal of an object includes a first image combining unit configured to align and synthesize, at a predetermined accuracy, a plurality of image frames generated from the signal output from the imaging unit to acquire a synthesized image for focus detection evaluation, and a second image combining unit configured to align and synthesize, at an accuracy lower than that of the first image combining unit, a plurality of image frames generated from the signal output from the imaging unit to acquire a synthesized image for recording.
Abstract: Systems, methods, and computer readable media for dynamically adjusting an image capture device's autofocus (AF) operation based, at least in part, on the device's orientation are described. In general, information about an image capture device's orientation may be used to either increase the speed or improve the resolution of autofocus operations. More particularly, orientation information such as that available from an accelerometer may be used to reduce the number of lens positions (points-of-interest) used during an autofocus operation, thereby improving the operation's speed. Alternatively, orientation information may be used to reduce the lens' range of motion while maintaining the number of points-of-interest, thereby improving the operation's resolution.
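The second strategy in this abstract (narrowing the lens's range of motion while keeping the number of points-of-interest) can be sketched roughly. Everything here is an assumption for illustration: the tilt thresholds, the lens-travel units, and the idea that a strong downward tilt implies a close subject are hypothetical, not details from the disclosure:

```python
def af_scan_positions(tilt_deg, near=0, far=100, default_steps=12):
    """Pick candidate lens positions for an autofocus sweep.  When the
    device is tilted strongly downward (e.g. a tabletop or macro shot)
    or upward, restrict the sweep to the near or far half of the lens
    travel while keeping the same number of points-of-interest, so the
    sweep samples that half more densely (better AF resolution)."""
    if tilt_deg <= -45:          # pointing down: favour close subjects
        far = (near + far) // 2
    elif tilt_deg >= 45:         # pointing up: favour distant subjects
        near = (near + far) // 2
    step = (far - near) / (default_steps - 1)
    return [round(near + i * step) for i in range(default_steps)]
```

The complementary strategy from the abstract (speeding up the sweep) would instead keep the full travel and reduce `default_steps`.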
Abstract: Embodiments related to the alignment of a lens with an image sensor in an optical device are disclosed. For example, one disclosed embodiment comprises an optical device including a printed circuit board, and an image sensor package mounted on the printed circuit board, wherein the image sensor package includes an image sensor. The optical system further comprises a lens holder including a lens, and one or more alignment features arranged on the lens holder. The one or more alignment features are configured to contact the image sensor package to mechanically align the lens holder with the image sensor package.
Abstract: A digital photographing apparatus and a method of controlling the same. The method includes setting a division frame configured by dividing a display screen; displaying a first input image in a first region of the set division frame; displaying the first input image captured according to a first photographing signal in the first region; displaying a second input image in a second region of the set division frame; and displaying the second input image captured according to a second photographing signal in the second region.
Abstract: A camera includes: an image-capturing unit that captures an image of a subject to acquire image data; an image storing unit that stores a plurality of image data acquired by the image-capturing unit in a storage medium; an image specifying unit that specifies image data designated by a user from among the plurality of image data stored in the storage medium; and a transmission controlling unit that, when transmitting the plurality of image data to an external device using a wireless communication device, distinguishes the image data specified by the image specifying unit from non-specified image data and transmits the specified image data to the external device in priority to the non-specified image data.
Abstract: A technique of enhancing a scene containing one or more off-center peripheral regions within an initial distorted image captured with a large field of view includes determining and extracting an off-center region of interest (hereinafter “ROI”) within the image. Geometric correction is applied to reconstruct the off-center ROI into a rectangular or otherwise undistorted or less distorted frame of reference as a reconstructed ROI. A quality of reconstructed pixels is determined within the reconstructed ROI. One or more additional initially distorted images is/are acquired, and matching additional ROIs are extracted and reconstructed to combine with reduced quality pixels of the first reconstructed ROI using a super-resolution technique to provide one or more enhanced ROIs.
Type:
Grant
Filed:
April 11, 2011
Date of Patent:
November 25, 2014
Assignee:
Fotonation Limited
Inventors:
Peter Corcoran, Petronel Bigioi, Piotr Stec
Abstract: Certain embodiments provide methods and systems that link a video recording device and a processing device to enhance video media content development workflow and enable a variety of features. For example, a video camera may send all or a portion of a recorded video to a separate computer that can determine information to be associated with the recorded video. In some embodiments, the computer extracts information from the recorded video to be embedded or otherwise associated with the recorded video as metadata. In some cases, the computer retrieves information from other sources such as Internet websites for association with the recorded video. In some embodiments, the computer sends the information back to the camera where it is associated with the recorded video stored at the camera. In some embodiments, the computer provides the recorded video and information to other locations and parties, for example, to a director remotely overseeing filming.
Abstract: According to one embodiment, an image processing apparatus includes a refocus filter generating unit and a refocus image generating unit. The refocus image generating unit executes a filtering process using a refocus filter generated by the refocus filter generating unit. A filtering level of the refocus filter is adjusted according to the subject distance of the subject to be brought into focus among a plurality of subjects appearing in a first image and a second image.
Abstract: An imager includes an array of pixels arranged in rows and a control circuit for sequentially capturing first and second image frames from the array of pixels. The control circuit is configured to sequentially capture first and second pairs of adjacent rows of pixels during first and second exposure times, respectively, when capturing the first image frame. The control circuit is also configured to sequentially capture first and second pairs of adjacent rows of pixels during second and first exposure times, respectively, when capturing the second image frame. The first exposure times during the first and second frames are of similar duration; and the second exposure times during the first and second frames are of similar duration. The control circuit is configured to detect motion of an object upon combining the first and second image frames and, then, correct for the motion of the object.
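The final step of this abstract, detecting motion by comparing the two sequentially captured frames, can be sketched in simplified form. This illustration ignores the interleaved exposure bookkeeping and uses a hypothetical per-row difference threshold; it is not the patented circuit logic:

```python
def detect_motion_rows(frame1, frame2, threshold=20):
    """Compare two sequentially captured frames row by row and return the
    indices of rows whose mean absolute pixel difference exceeds
    `threshold`, indicating a moving object crossed those rows.
    Frames are lists of rows of pixel values."""
    moving = []
    for i, (row1, row2) in enumerate(zip(frame1, frame2)):
        diff = sum(abs(a - b) for a, b in zip(row1, row2)) / len(row1)
        if diff > threshold:
            moving.append(i)
    return moving
```

In the actual imager, the swapped first/second exposure times between the two frames would additionally let the control circuit separate exposure differences from true motion before applying a correction.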
Abstract: An imaging unit includes an imaging device that transmits image data to an external display device, and an installation portion that is attachable to and detachable from the display device. The imaging device includes a communication part that transmits image data to the display device, an outer barrel, and an imaging element. The installation portion includes a slider that is slidable with respect to the outer barrel, a first attached body that is connected to the slider, a second attached body whose gap with the first attached body is changeable, and a biasing member that biases the first attached body and the second attached body in a direction in which the gap therebetween narrows, whereby the display device is grasped between the first attached body and the second attached body.