Abstract: An imaging device includes a photodiode having a first electrode and a second electrode, a first transistor that controls electrical connection between the first electrode and a first wiring line through which a first voltage is supplied, and a second transistor that controls electrical connection between the first electrode and a second wiring line through which a second voltage different from the first voltage is supplied. The voltage at the second electrode is read with the first transistor and the second transistor turned off.
Abstract: Provided is an imaging apparatus including a pixel array in which a plurality of pixels are arranged in a matrix, each of the pixels comprising a photoelectric conversion portion. The pixel array includes a first pixel configured to output an imaging signal in accordance with incident light and a second pixel configured to output a correction signal used for correcting the imaging signal. The second pixel outputs the correction signal after a first reset, performed in a state where a first bias voltage is applied to the photoelectric conversion portion of the second pixel, and a second reset, performed in a state where a second bias voltage different from the first bias voltage is applied to the photoelectric conversion portion.
Abstract: Exemplary embodiments are directed to configurable demodulation of image data produced by an image sensor. In some aspects, a method includes receiving information indicating a configuration of the image sensor. In some aspects, the information may indicate a configuration of sensor elements and/or corresponding color filters for the sensor elements. A modulation function may then be generated based on the information. In some aspects, the method also includes demodulating the image data based on the generated modulation function to determine chrominance and luminance components of the image data, and generating a second image based on the determined chrominance and luminance components.
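The demodulation idea can be illustrated with a small frequency-domain sketch. This is not the patent's algorithm: the function names, the cosine carrier, and the box low-pass filter are illustrative assumptions, and the patent derives its modulation function from the reported sensor configuration. For a 2×2-periodic mosaic, chrominance rides on a carrier at half the sampling frequency in each direction, so multiplying by that carrier and low-pass filtering shifts it back to baseband:

```python
import numpy as np

def carrier(shape, fy, fx):
    """Modulation function: a cosine carrier at the pattern's spatial
    frequency. For a 2x2 mosaic period, (fy, fx) = (0.5, 0.5) gives
    (-1)**(y + x)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return np.cos(2 * np.pi * (fy * y + fx * x))

def box_lowpass(img, k=3):
    """Crude k x k box low-pass filter (k odd), reflect-padded."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    acc = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / (k * k)

def demodulate(mosaic, fy, fx, k=5):
    """Recover baseband luma and one chroma component from a raw mosaic.

    Low-pass filtering the mosaic directly estimates luma; multiplying by
    the carrier first shifts the modulated chroma to baseband.  For the
    (0.5, 0.5) carrier, c*c == 1 so no gain correction is needed (a general
    cosine carrier would need a factor of 2, since cos**2 averages to 1/2).
    """
    c = carrier(mosaic.shape, fy, fx)
    luma = box_lowpass(mosaic, k)
    chroma = box_lowpass(mosaic * c, k)
    return luma, chroma
```

A synthetic mosaic built as `luma + chroma * carrier` separates back into its two components, up to the leakage of the crude box filter.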
April 19, 2017
Date of Patent: June 4, 2019
Hasib Ahmed Siddiqui, Kalin Mitkov Atanassov, Sergiu Radu Goma
Abstract: There is provided a solid-state imaging device including an imaging unit including a plurality of image sensors, and an analog-to-digital (AD) conversion unit including a plurality of AD converters arranged in a row direction, each AD converter performing AD conversion of an electrical signal output by an image sensor. Each of the AD converters includes a comparator having a differential pair at an input stage, the differential pair including a first transistor and a second transistor. The first and second transistors are each divided into an equal number of division transistors, and the arrangement pattern of the division transistors constituting the comparator in a predetermined column differs from the arrangement pattern of the division transistors constituting the comparator in the column adjacent to the predetermined column.
Abstract: A solid-state image pickup element, including: a pixel array including a plurality of pixels; a first calculator that calculates a phase difference evaluation value for focus detection by a phase difference detection method based on signals from the pixels; and a second calculator that calculates a contrast evaluation value for focus detection by a contrast detection method based on signals from the pixels, wherein, when the first calculator completes calculation of the phase difference evaluation value, the phase difference evaluation value is output regardless of whether or not output of an image signal acquired by the pixel array is completed, and wherein, when the second calculator completes calculation of the contrast evaluation value, the contrast evaluation value is output regardless of whether or not output of the image signal acquired by the pixel array is completed.
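The two evaluation values themselves are standard autofocus quantities and can be sketched as follows. All names are illustrative assumptions, not the patent's implementation: the phase-difference evaluation is taken as the shift minimizing the sum of absolute differences between a pair of phase-detection signals, and the contrast evaluation as the energy of adjacent-sample differences.

```python
import numpy as np

def phase_difference_eval(left, right, max_shift=8):
    """Shift (in samples) that best aligns the two phase-detection
    signals, found by minimizing the sum of absolute differences;
    its sign and magnitude indicate the direction and amount of defocus."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: np.abs(np.roll(right, s) - left).sum())

def contrast_eval(signal):
    """Sum of squared adjacent differences: larger when edges are sharp,
    i.e. when the scene is in focus."""
    d = np.diff(np.asarray(signal, dtype=float))
    return float((d * d).sum())
```

For example, if the right-channel signal is the left-channel signal displaced by a few samples, `phase_difference_eval` recovers that displacement, while `contrast_eval` ranks a sharp edge above a blurred one.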
Abstract: An image processing apparatus includes: at least one processor for executing a program stored in at least one memory to perform functions of: a first acquiring unit configured to acquire RAW image data; a second acquiring unit configured to acquire first brightness information related to first brightness, which is display brightness of a display unit to be used for displaying an image based on the RAW image data; a setting unit configured to set a development parameter to be used for development processing, based on the first brightness information acquired by the second acquiring unit; and a developing unit configured to perform the development processing using the development parameter set by the setting unit, on the RAW image data acquired by the first acquiring unit.
Abstract: Systems and methods are described for generating a monochrome image from a color filter array. Image data from an image capturing device may be received having a color filter array comprising a plurality of filter positions. The image data may be interpolated to de-mosaic the image data into three sets of data representing red, blue, and green (RGB) data, respectively, for each of the plurality of filter positions. A weight may be calculated for each value of the RGB data based on a local gradient calculated for each value of the RGB data. A pixel value may be calculated for each pixel position for generating a monochrome image using the weight for each value of the RGB data.
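The weighting step described above might be sketched as below, assuming NumPy and taking the weight of each channel to be the inverse of its local gradient (so smoother channels dominate the combination). The function names and the exact gradient formula are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def local_gradient(channel, y, x):
    """Magnitude of horizontal plus vertical differences around (y, x),
    clamped at the image borders."""
    h, w = channel.shape
    gx = abs(channel[y, min(x + 1, w - 1)] - channel[y, max(x - 1, 0)])
    gy = abs(channel[min(y + 1, h - 1), x] - channel[max(y - 1, 0), x])
    return gx + gy

def mono_pixel(rgb, y, x, eps=1e-6):
    """Weighted average of the demosaiced R, G, B values at one position.

    Channels that are flat near (y, x) get larger weights, so an edge
    present in only one channel contributes less to the output pixel.
    """
    weights = np.array([1.0 / (local_gradient(c, y, x) + eps) for c in rgb])
    weights /= weights.sum()
    values = np.array([c[y, x] for c in rgb])
    return float(weights @ values)

def to_monochrome(rgb):
    """rgb: tuple of three HxW float arrays (demosaiced R, G, B planes)."""
    h, w = rgb[0].shape
    return np.array([[mono_pixel(rgb, y, x) for x in range(w)]
                     for y in range(h)])
```

A production version would vectorize the per-pixel loop, but the per-pixel form makes the weight computation explicit.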
Abstract: A digital camera is configured to display a continually updated preview image of an observed scene, wherein kinetic objects that appear in the observed scene do not appear in the continually updated preview image. An observed scene includes static objects and kinetic objects. The observed scene is recorded using a digital imaging sensor which forms part of a smartphone. A live camera feed results, the live camera feed comprising a plurality of frames, each depicting the observed scene at a specific time. A median color value is evaluated over m non-consecutive frames captured from the live camera feed. The median color values are used to generate an output feed that is displayed at a reduced frame rate as compared to the live camera feed. The resulting displayed scene includes the same static objects which appeared in the observed scene, but does not include the kinetic objects.
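The per-pixel median evaluation might be sketched like this (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def median_preview(frames):
    """Per-pixel median color over m non-consecutive frames.

    frames: list of HxWx3 uint8 arrays sampled from the live camera feed.
    A static object keeps its color in every sample, so the median
    preserves it; a kinetic object covers any given pixel in only a
    minority of the samples, so the median suppresses it.
    """
    stack = np.stack(frames, axis=0)            # m x H x W x 3
    return np.median(stack, axis=0).astype(np.uint8)
```

Sampling non-consecutive frames (for example one of every k live frames) widens the time window, making it more likely that a moving object occupies each pixel in fewer than half the samples; this also explains the reduced frame rate of the output feed.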
Abstract: An imaging element mounting substrate may include an insulating substrate and a metal substrate. The insulating substrate may include a first mounting region for mounting an imaging element on a top surface, and a second mounting region located a distance away from the first mounting region for mounting one or more electronic components. The insulating substrate may include a fixed region for securing a lens housing surrounding the first mounting region. A metal substrate may be bonded to a bottom surface of the insulating substrate. A third mounting region located between the first mounting region and the second mounting region in the fixed region of the insulating substrate may be positioned with respect to the center of the insulating substrate in a plan view. The metal substrate may be located to overlap with the third mounting region and bestride the first mounting region and the second mounting region.
Abstract: A video delivery terminal includes: an image capturer configured to perform image capture to generate an image of a subject; and a developing processor configured to crop an area of a portion of the image to generate a cropped image, perform first adjustment regarding the cropped image based on characteristics of the cropped image, and perform second adjustment regarding settings of the image based on characteristics of the image having undergone the first adjustment.
Abstract: The present embodiment relates to a camera module comprising: a first body; a second body coupled to the first body; a lens unit coupled to the second body; a circuit substrate unit located in an internal space formed by the first body and the second body and having an image sensor mounted thereon; and a focusing unit formed in the second body, and moving and fixing the lens unit or the circuit substrate unit in an optical axis direction of the lens unit, wherein a distance between the lens unit and the image sensor in the optical axis direction is adjusted through the focusing unit.
Abstract: In an example embodiment, a method, an apparatus, and a computer program product are provided. The method includes facilitating receipt of a plurality of VR contents and a plurality of video contents associated with an event, captured by a plurality of VR cameras and a plurality of user camera devices, respectively. Each of the plurality of VR cameras comprises a plurality of camera modules with respective fields of view (FOVs) associated with the event. The FOVs of the plurality of user camera devices are linked with respective FOVs of camera modules of the plurality of VR cameras based on at least a threshold degree of similarity between the FOV of a user camera device and the FOV of a camera module. The processor creates an event VR content by combining the plurality of VR contents and the plurality of video contents based on the linking of the FOVs.
Abstract: An image processing apparatus, comprising a memory that stores first image data, and a processor that includes an image associated information processing section. For image data of a single frame within the first image data stored in the memory that has been taken under a plurality of shooting conditions, the image associated information processing section acquires image region information, relating to an image region in which shooting is carried out under different shooting conditions, and image associated information of the image region; associates the image region information with the image associated information; subjects the first image data to image processing; and generates second image data.
Abstract: An electronic device can include a camera, a display, a processor coupled to the camera and the display, and a memory coupled to the processor. When executing instructions stored in the memory, the processor is configured to: display a preview image obtained from the camera on a first user interface on the display; receive a first input of a user; capture a plurality of images using the camera in response to the first input; generate, for the captured plurality of images, data corresponding to a processing of the captured plurality of images; receive a second input of the user during or after the data generation; process at least one of the captured plurality of images based on the generated data in response to the received second input; and display the processed at least one of the captured plurality of images on a second user interface on the display.
Abstract: An image pixel may include a shutter element that is operable in an open state during which a corresponding photodiode accumulates charge and a closed state during which charge is drained from the photodiode. During a first portion of an image frame, the image pixel may operate in a flicker mitigation mode in which a non-continuous exposure period is used. During a second portion of the image frame, the image pixel may operate in a high dynamic range mode in which images are obtained with exposures of varying lengths. To reduce memory requirements, the signal from the flicker mitigation mode may not be sampled until the end of the first exposure of the high dynamic range mode.
Abstract: A location-tagged data provision and display system. A personal communication device (PCD) with electromagnetic communication capability has a GPS receiver and a display. The PCD requests maps and location-tagged data from data providers and others for display on the PCD. The data providers respond to requests by using searching and sorting schemes to interrogate databases and then automatically transmit data responsive to the requests to the requesting PCD.
March 7, 2017
Date of Patent: March 19, 2019
Silver State Intellectual Technologies, Inc.
Abstract: The invention relates to an electronic device for analyzing a scene, including a plurality of pixels (101) connected to a readout circuit (103) by the same first conductive track (105), wherein: each pixel (101) is capable of detecting an occurrence of a first event characteristic of the scene and of transmitting an event indication signal on the first conductive track (105) when it detects an occurrence of the first event; and the readout circuit (103) is capable of reading from the first conductive track (105) the event indication signals transmitted by the pixels (101) and of deducing therefrom characteristics of the scene, without transmitting event acknowledgement signals to the pixels (101).
October 8, 2017
Date of Patent: March 5, 2019
COMMISSARIAT À L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Abstract: Methods and apparatus for providing a user of a camera information about the status of one or more lenses and/or providing a subject of a photograph an indicator of where to look during image capture are described. In some embodiments different sets of camera modules are used for taking photographs depending on user input and/or the distance to the objects to be captured. In various embodiments an indicator, e.g., a light emitting element, is associated with each of the different sets of camera modules and may be used to indicate where a subject should look during image capture by the corresponding set of camera modules. In some embodiments dirty lenses of the camera are detected and the user is notified of the dirty lenses through an audible and/or visual indication. In some embodiments the indication of which lens or lenses are dirty is provided by illuminating light emitting elements corresponding to the dirty lens(es).
Abstract: Various embodiments of the present technology may comprise methods and devices for improving the spectral response of an imaging device. The methods and devices may operate in conjunction with a pixel array and a color filter array. The color filter array may comprise a pattern of color filters arranged in groups. At least one group may comprise multiple same-color filters and at least one filter of a different color.
Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by the camera. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.