Patent Applications Published on November 27, 2014
-
Publication number: 20140347478
Abstract: Provided is a network camera, including: an event detector configured to detect an event; an image sensor configured to capture an image in response to the detected event; a storage configured to store image data of the captured image; a transceiver configured to transmit and receive the image data over a network; a controller configured to control the event detector, the image sensor, the storage, and the transceiver, to select a single network mode from among a plurality of network modes based on whether power is supplied from an outside, and to configure the network based on the selected network mode; and a power source configured to supply the power to the event detector, the image sensor, the storage, the transceiver, and the controller.
Type: Application
Filed: May 22, 2014
Publication date: November 27, 2014
Applicant: CENTER FOR INTEGRATED SMART SENSORS FOUNDATION
Inventors: Hyun Tae Cho, Chong Min Kyung
-
Publication number: 20140347479
Abstract: The present invention includes methods, systems, apparatuses, circuits and associated computer executable code for providing video based subject characterization, categorization, identification, tracking, monitoring, authentication and/or presence response. According to some embodiments, there may be provided one or more Image Based Biometric Extrapolation (IBBE) methods, systems and apparatuses adapted to extrapolate static and/or dynamic biometric parameters of one or more subjects, from one or more images or video segments including the subjects. According to some embodiments, extrapolated biometric parameters of subjects may be used to identify, track/monitor and/or authenticate the subjects. According to further embodiments, extrapolated biometric parameters may be used to determine physical positions of subjects and may further be used to identify one or more subjects exhibiting suspicious behavior based on their physical positions.
Type: Application
Filed: July 28, 2014
Publication date: November 27, 2014
Inventor: Dor Givon
-
Publication number: 20140347480
Abstract: An apparatus automatically detects an event occurring in sensor data. The apparatus has a recording device which is configured to receive the sensor data, a feature identification device which is configured to automatically identify a predetermined number of features of the sensor data in the recorded sensor data, an evaluation processing device which is configured to acquire from the predetermined number of features, for each of the features, an evaluation which relates to the event to be detected and which is based on a set of evaluation criteria, and a detection device which is configured to automatically acquire the event to be detected by the features identified by the feature identification device, based on the detected evaluations for the predetermined number of features.
Type: Application
Filed: October 23, 2012
Publication date: November 27, 2014
Inventors: Daniel Buschek, Thomas Riegel
-
Publication number: 20140347481
Abstract: Systems and methods for Region of Interest (ROI), or Frame, Segmentation can be provided within a video stream, in real-time, or within a few milliseconds of video frame duration of 30 msec, or even in the sub-millisecond range. This video frame segmentation is the basis of Pre-ATR-based Ultra-Real-Time (PATURT) video compression. Additionally, morphing compression and watermarking can be based on the PATURT. Example applications of the PATURT include ROI-based real-time video recording in "black-box" devices, recording aircraft accidents, or catastrophes.
Type: Application
Filed: May 30, 2014
Publication date: November 27, 2014
Applicant: Physical Optics Corporation
Inventors: Andrew Kostrzewski, Tomasz Jannson, Wenjian Wang
-
Publication number: 20140347482
Abstract: A system and method of acquiring information from an image of a vehicle in real time wherein at least one imaging device with advanced light metering capabilities is placed aboard an unmanned aerial vehicle, a computer processor means is provided to control the imaging device and the advanced light metering capabilities, the advanced light metering capabilities are used to capture an image of at least a portion of the unmanned aerial vehicle, and image recognition algorithms are used to identify the current state or position of the corresponding portion of the unmanned aerial vehicle.
Type: Application
Filed: June 9, 2014
Publication date: November 27, 2014
Applicant: Appareo Systems, LLC
Inventors: Robert V. Weinmann, Joshua N. Gelinske, Robert M. Allen, Johan A. Wiig, Joseph A. Heilman, Jeffrey L. Johnson, Jonathan L. Tolstedt
-
Publication number: 20140347483
Abstract: A work vehicle periphery monitoring system (10) is a system which monitors a periphery of a work vehicle with a vessel for loading a load thereon and includes: a plurality of radar devices (21 to 28), each of which is attached to the work vehicle and detects an object existing around the work vehicle, and a controller (100) which issues an alarm based on detection results of the radar devices (21 to 28) and switches, based on a state of the work vehicle, between a notification mode in which the alarm is issued and a restriction mode in which the alarm is suppressed.
Type: Application
Filed: November 29, 2012
Publication date: November 27, 2014
Inventors: Yukihiro Nakanishi, Shinji Mitsuta, Takeshi Kurihara
-
Publication number: 20140347484
Abstract: The present invention relates to an apparatus and method for providing the surrounding environment information of a vehicle. The apparatus includes a first information extraction unit for collecting sensing information about a surrounding environment of a vehicle and extracting lane information and object information based on the sensing information. A second information extraction unit acquires an image of the surrounding environment of the vehicle, and extracts lane information and object information based on the image. An information integration unit matches and compares the lane information and the object information extracted by the first information extraction unit with the lane information and the object information extracted by the second information extraction unit, determines ultimate lane information and ultimate object information based on the results of the comparison, and provides the ultimate lane information and the ultimate object information to a control unit of the vehicle.
Type: Application
Filed: April 16, 2014
Publication date: November 27, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jae-Min BYUN, Ki-In NA, Myung-Chan ROH, Joo-Chan SOHN, Sung-Hoon KIM
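The integration step described above, matching detections from a sensor-based extractor against those from an image-based extractor and deriving "ultimate" information, can be sketched as a simple nearest-neighbour gating fusion. This is a minimal illustration under stated assumptions, not the patent's method: the `fuse` name, the 1-D positions, and the `gate` threshold are all invented for the sketch.

```python
def fuse(sensor_positions, camera_positions, gate=2.0):
    """Hypothetical fusion: confirm each sensor detection with the
    closest camera detection and average the pair into an 'ultimate'
    position; unconfirmed detections are dropped."""
    fused = []
    for s in sensor_positions:
        # nearest camera detection to this sensor detection (1-D positions)
        best = min(camera_positions, key=lambda c: abs(c - s), default=None)
        if best is not None and abs(best - s) <= gate:
            fused.append((s + best) / 2.0)
    return fused
```

A real system would associate full lane geometry and object tracks, typically with 2-D or 3-D gating and covariance-weighted averaging rather than a plain mean.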
-
Publication number: 20140347485
Abstract: A system and method for determining when to display frontal curb view images to a driver of a vehicle, and what types of images to display. A variety of factors, such as vehicle speed, GPS/location data, the existence of a curb in forward-view images, and vehicle driving history, are evaluated as potential triggers for the curb view display, which is intended for situations where the driver is pulling the vehicle into a parking spot which is bounded in front by a curb or other structure. When forward curb-view display is triggered, a second evaluation is performed to determine what image or images to display which will provide the best view of the vehicle's position relative to the curb. The selected images are digitally synthesized or enhanced, and displayed on a console-mounted or in-dash display device.
Type: Application
Filed: May 16, 2014
Publication date: November 27, 2014
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wende Zhang, Jinsong Wang, Kent S. Lybecker, Jeffrey S. Piasecki, Bakhtiar Brian Litkouhi, Ryan M. Frakes
-
Publication number: 20140347486
Abstract: A camera calibration system of a vehicle includes a camera disposed at a vehicle and having a field of view exterior of the vehicle. The camera is operable to capture image data. An image processor is operable to process image data captured by the camera. The camera calibration system is operable to generate camera calibration parameters utilizing a bundle adjustment algorithm. Responsive to image processing of captured image data during movement of the vehicle along an arbitrary path, and responsive to the bundle adjustment algorithm, the camera calibration system is operable to calibrate the camera. The bundle adjustment algorithm may iteratively refine calibration parameters starting from a known initial estimation.
Type: Application
Filed: May 20, 2014
Publication date: November 27, 2014
Applicant: MAGNA ELECTRONICS INC.
Inventor: Galina Okouneva
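Bundle adjustment iteratively refines calibration parameters from a known initial estimate by minimizing reprojection error. As a toy illustration (not the patented algorithm), the sketch below refines a single focal-length parameter with Gauss-Newton steps on a pinhole model u = f·X/Z; the function name and data are assumptions for the sketch.

```python
def refine_focal(points, observed_u, f0, iters=10):
    """Gauss-Newton refinement of focal length f for u = f * X / Z,
    starting from the known initial estimate f0 (a one-parameter
    analogue of bundle adjustment's iterative refinement)."""
    f = f0
    ratios = [x / z for x, z in points]          # X/Z per 3-D point
    for _ in range(iters):
        residuals = [u - f * r for u, r in zip(observed_u, ratios)]
        num = sum(r * e for r, e in zip(ratios, residuals))
        den = sum(r * r for r in ratios)
        f += num / den                           # normal-equation step
    return f
```

Full bundle adjustment jointly refines camera poses, intrinsics, and 3-D point positions; the update step above is the same normal-equation idea restricted to one unknown.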
-
Publication number: 20140347487
Abstract: The invention relates to a method and a camera assembly for detecting raindrops (28) on a windscreen of a vehicle, in which at least one image (14) is captured by a camera (12), at least one reference object (20) is identified in a first image (18) captured by the camera (12), and the at least one identified object (20) is at least partially superimposed on at least one object extracted from a second image (16) captured by the camera. Raindrop (28) detection is performed within the second image (16).
Type: Application
Filed: September 7, 2011
Publication date: November 27, 2014
Applicant: VALEO SCHALTER UND SENSOREN GMBH
Inventors: Samia Ahiad, Caroline Robert
-
Publication number: 20140347488
Abstract: A video display mirror is provided with a half mirror, a monitor, and an interlocking mechanism. The half mirror is used so that a vehicle passenger can look toward the rear of the vehicle. The monitor is disposed near the half mirror toward the front of the vehicle. The interlocking mechanism moves in relation to a video image being displayed on the monitor and changes the angle of a reflection surface of the half mirror from the position of the half mirror when the rear of the vehicle is viewed.
Type: Application
Filed: October 29, 2012
Publication date: November 27, 2014
Applicant: Nissan Motor Co., Ltd.
Inventors: Yuichi Tazaki, Yuji Matsumoto
-
Publication number: 20140347489
Abstract: A vehicle rear monitoring system (1) includes a camera (40) that captures an image of an area to a rear of a vehicle, and a processing unit (10) that processes the image captured by the camera. The processing unit (10) creates a first vehicle rear image that is displayed in a first display area (22A) that is a portion of a display area of a display device, when traveling forward, and creates a second vehicle rear image that is displayed in a second display area (22) that is within the display area of the display device and that is an area that includes the first display area and is larger than the first display area, when traveling backward.
Type: Application
Filed: December 20, 2012
Publication date: November 27, 2014
Inventor: Hitoshi Kumon
-
Publication number: 20140347490
Abstract: Provided are a semiconductor device in which a layer to be peeled is attached to a base having a curved surface, and a method of manufacturing the same; more particularly, a display having a curved surface, and more specifically a light-emitting device having a light emitting element attached to a base with a curved surface. A layer to be peeled, which contains a light emitting element furnished to a substrate using a laminate of a first material layer which is a metallic layer or nitride layer, and a second material layer which is an oxide layer, is transferred onto a film, and then the film and the layer to be peeled are curved, to thereby produce a display having a curved surface.
Type: Application
Filed: August 12, 2014
Publication date: November 27, 2014
Inventors: Toru TAKAYAMA, Junya MARUYAMA, Yuugo GOTO, Hideaki KUWABARA, Shunpei YAMAZAKI
-
Publication number: 20140347491
Abstract: This invention is a device and system for monitoring a person's food consumption comprising: a wearable sensor that automatically collects data to detect probable eating events; an imaging member that is used by the person to take pictures of food, wherein the person is prompted to take pictures of food when an eating event is detected by the wearable sensor; and a data analysis component that analyzes these food pictures to estimate the types and amounts of foods, ingredients, nutrients, and/or calories that are consumed by the person. In an example, the wearable sensor can be part of a smart watch or smart bracelet. In an example, the imaging member can be part of a smart phone. The integrated operation of the wearable sensor and the imaging member disclosed in this invention offers accurate measurement of food consumption with low intrusion into the person's privacy.
Type: Application
Filed: May 23, 2013
Publication date: November 27, 2014
Inventor: Robert A. Connor
-
Publication number: 20140347492
Abstract: Image data from cameras can be used to detect structural components and furnishings of a venue using image processing. A venue map can be generated or updated accordingly. Image data may be obtained from existing cameras (e.g., security cameras) and/or specialized cameras (e.g., IR cameras). The updated or generated building map may then be transmitted to a mobile device and/or stored by a server for use by a positioning system.
Type: Application
Filed: May 24, 2013
Publication date: November 27, 2014
Applicant: QUALCOMM Incorporated
Inventor: Mary FALES
-
Publication number: 20140347493
Abstract: An image-capturing device including a lens system and an image-capturing unit upon which light having passed through the lens system is incident, wherein the image-capturing unit includes a plurality of first image-capturing elements configured to receive light in a first wavelength band and a plurality of second image-capturing elements configured to receive light in a second wavelength band which is different from the first wavelength band, and wherein the lens system or the image-capturing unit is provided with an optical element so that the light in the first wavelength band, whose light quantity is less than that of the light in the second wavelength band, reaches the image-capturing unit.
Type: Application
Filed: September 3, 2012
Publication date: November 27, 2014
Applicant: Sony Corporation
Inventors: Yoshihito Higashitsutsumi, Hideaki Mogi, Ken Ozawa, Jun Iwama
-
Publication number: 20140347494
Abstract: An imaging lens is provided with: a first lens with negative power; a second lens with negative power; a third lens with positive power; and a fourth lens with positive power. The cemented fourth lens is formed from an object side lens with negative power and an image side lens with positive power. The thickness of a resin adhesive layer that bonds the object side lens and the image side lens is 20 μm or greater on the optical axis, and when Sg1H is the amount of sag in the image side lens surface of the object side lens and Sg2H is the amount of sag in the object side lens surface of the image side lens. The bonding operation is easy without damage occurring to the cemented surfaces, with a design that takes into account thickness of the resin adhesive layer; therefore various forms of aberration can be corrected.
Type: Application
Filed: February 22, 2013
Publication date: November 27, 2014
Inventors: Masaki Yamazaki, Takashi Sugiyama
-
Publication number: 20140347495
Abstract: A fragmented lens system for creating an invisible light pattern useful to computer vision systems is disclosed. Random or semi-random dot patterns generated by the present system allow a computer to uniquely identify each patch of a pattern projected by a corresponding illuminator or light source. The computer may determine the position and distance of an object by identifying the illumination pattern on the object.
Type: Application
Filed: August 13, 2014
Publication date: November 27, 2014
Inventor: Matthew Bell
-
Publication number: 20140347496
Abstract: An imager including a self test mode. The imager includes a pixel array for providing multiple pixel output signals via multiple columns; and a test switch for (a) receiving a test signal from a test generator and (b) disconnecting a pixel output signal from a column of the pixel array. The test switch provides the test signal to the column of the pixel array. The test signal includes a test voltage that replaces the pixel output signal. The test signal is digitized by an analog-to-digital converter (ADC) and provided to a processor. The processor compares the digitized test signal to an expected pixel output signal. The processor also interpolates the output signal from a corresponding pixel using adjacent pixels, when the test switch disconnects the pixel output signal from the column of the pixel array.
Type: Application
Filed: August 11, 2014
Publication date: November 27, 2014
Inventors: Johannes Solhusvik, Tore Martinussen
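The self-test flow above, substituting a known test voltage for one column, checking its digitized value, and interpolating the disconnected column from its neighbours, can be sketched as follows. The helper name, integer pixel codes, and neighbour-averaging interpolation are assumptions for illustration, not the imager's actual circuitry.

```python
def self_test_column(frame, col, digitized_code, expected_code, tol=2):
    """Compare the digitized test code against the expected value, then
    fill the disconnected column by averaging the adjacent columns
    (a simple stand-in for the processor's interpolation)."""
    passed = abs(digitized_code - expected_code) <= tol
    for row in frame:
        row[col] = (row[col - 1] + row[col + 1]) // 2  # neighbour average
    return passed, frame
```

In hardware the comparison would run per test voltage step across the ADC's range; the sketch collapses that to a single tolerance check.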
-
Publication number: 20140347497
Abstract: A projector that projects an image includes a communication section that sends a projection request command that requests another projector connected to the projector to project a test image, an imaging section that captures an image of the test image projected in response to the projection request command by the another projector, and a layout recognition section that recognizes a relative layout relationship between the projector and the another projector based on the image captured by the imaging section.
Type: Application
Filed: May 14, 2014
Publication date: November 27, 2014
Applicant: SEIKO EPSON CORPORATION
Inventor: Hideo FUKUCHI
-
Publication number: 20140347498
Abstract: An imager including a self test mode. The imager includes a pixel array for providing multiple pixel output signals via multiple columns; and a test switch for (a) receiving a test signal from a test generator and (b) disconnecting a pixel output signal from a column of the pixel array. The test switch provides the test signal to the column of the pixel array. The test signal includes a test voltage that replaces the pixel output signal. The test signal is digitized by an analog-to-digital converter (ADC) and provided to a processor. The processor compares the digitized test signal to an expected pixel output signal. The processor also interpolates the output signal from a corresponding pixel using adjacent pixels, when the test switch disconnects the pixel output signal from the column of the pixel array.
Type: Application
Filed: August 11, 2014
Publication date: November 27, 2014
Inventors: Johannes Solhusvik, Tore Martinussen
-
Publication number: 20140347499
Abstract: Disclosed is a method and tool that performs glass-to-glass testing of a test AV system. The test AV system may be a transmitter device that senses AV stimuli and transmits an AV signal to a receiver device that displays video and provides an audio out/speaker of the audio. A light source and a sound source may be placed at the transmitter device. A light sensor and microphone/direct audio out connection may be placed at the receiver device. The automatic test tool may cycle synchronized light/sound stimuli to the transmitter device and measure the delay/latency times for audio, video, and AV synchronization at the receiver device. The automatic test tool may be comprised of a computer running user interface/test management software connected to a low cost FPGA that controls the video/sound sources and sensors to accurately measure both video and audio glass-to-glass latency/synchronization in a continuous, automatic, and self-calibrating manner.
Type: Application
Filed: May 21, 2013
Publication date: November 27, 2014
Applicant: AVAYA, INC.
Inventors: Dan Gluskin, Michael German, Itai Ephraim Zilbershtein, Yosef Goldberg, Michel Ivgi
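Once the stimulus and detection timestamps are captured by the FPGA-driven sensors, the glass-to-glass measurement reduces to timestamp arithmetic. A minimal sketch of that final step, with the function name and millisecond units assumed:

```python
def av_latency(stimulus_ms, video_detect_ms, audio_detect_ms):
    """Latency of each path from a synchronized light/sound stimulus,
    plus the AV skew (positive means audio lags video)."""
    video = video_detect_ms - stimulus_ms
    audio = audio_detect_ms - stimulus_ms
    return {"video_ms": video, "audio_ms": audio, "skew_ms": audio - video}
```

Cycling the stimulus and averaging many such measurements is what makes the tool continuous and self-calibrating in spirit.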
-
Publication number: 20140347500
Abstract: Embodiments of the present invention relate to classification of documents. A user is able to take a snapshot of a document using a smart device. The photo of the document is matched to one or more existing templates. The one or more existing templates are locally stored on the smart device. If the document in the photo is recognized based on pattern matching, then the photo is tagged with an existing classification. The tagged photo can be locally stored on the smart device, uploaded to and backed up in a cloud, or both. The user is able to perform a search for a particular document based on keywords rather than having to visually review all photos.
Type: Application
Filed: April 29, 2014
Publication date: November 27, 2014
Applicant: Synchronoss Technologies, Inc.
Inventor: Jeremi Kurzanski
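The classification step, matching a photo against locally stored templates and tagging it only on a sufficiently good match, can be sketched with a toy feature-overlap score. The scoring function, binary feature vectors, and threshold are all assumptions; the abstract does not specify the pattern-matching algorithm.

```python
def classify(photo_features, templates, threshold=0.8):
    """Match the photo's features against each stored template
    (name -> feature vector) and return the best classification,
    or None when no template matches well enough."""
    def score(a, b):
        # fraction of positions that agree, as a stand-in for matching
        return sum(1 for x, y in zip(a, b) if x == y) / len(a)
    best = max(templates, key=lambda name: score(photo_features, templates[name]))
    return best if score(photo_features, templates[best]) >= threshold else None
```

Returning None for an unrecognized document would leave the photo untagged, matching the abstract's "if recognized, then tagged" flow.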
-
Publication number: 20140347501
Abstract: An information processing apparatus includes a first optical system, a second optical system, and a casing. The first optical system is configured to input light into a first imaging device. The second optical system is configured to input light into a second imaging device. The casing includes one surface long in a specific direction with the first optical system and the second optical system being arranged in the one surface in an orthogonal direction almost orthogonal to the specific direction. The first optical system and the second optical system are arranged such that an optical axis of the first optical system and an optical axis of the second optical system form an angle in the specific direction.
Type: Application
Filed: May 15, 2014
Publication date: November 27, 2014
Applicant: SONY CORPORATION
Inventor: Minoru Ishida
-
Publication number: 20140347502
Abstract: A method and apparatus for digital image correction in which a plurality of received color component arrays received from a digital camera are each corrected for distortion dependent upon the color associated with the array. Other corrections may also be applied, such as for sensitivity non-uniformity in the sensing array or illumination non-uniformity. The corrected color component arrays for each of the plurality of color components are combined to form a corrected digital image. The method and apparatus may be integrated with digital cameras in a variety of applications including, but not limited to, digital document imaging.
Type: Application
Filed: May 21, 2013
Publication date: November 27, 2014
Applicant: STMicroelectronics, Inc.
Inventor: Francis C. STAFFORD
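Correcting each colour component array with its own distortion parameters is how lateral chromatic aberration can be compensated: red, green, and blue are warped by slightly different amounts and then recombined. A minimal per-channel radial model, where the r' = r(1 + k·r²) form, the function name, and the coefficients are assumptions rather than the patent's model:

```python
def correct_channel(points, k):
    """Apply a one-coefficient radial distortion model to normalized
    (x, y) coordinates; each colour channel gets its own k."""
    out = []
    for x, y in points:
        r2 = x * x + y * y
        scale = 1.0 + k * r2   # radial scaling grows with distance from centre
        out.append((x * scale, y * scale))
    return out
```

Running this once per colour array with a per-colour k and stacking the three results corresponds to the "combined to form a corrected digital image" step.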
-
Publication number: 20140347503
Abstract: A display control section causes a display section to display an image corresponding to imaging data generated by an imaging element before the wireless reception of imaging data which is wirelessly transmitted from a camera is started, and causes the display section to display an image corresponding to imaging data which is wirelessly received after the wireless reception of the imaging data which is wirelessly transmitted from the camera is started. A parameter generating section generates imaging parameters when an instruction for the imaging parameters is input in a state where the image corresponding to the imaging data generated by the imaging element is displayed on the display section. A communication control section causes a wireless communication circuit section to wirelessly transmit the imaging parameters generated by the parameter generating section to the camera, and to wirelessly receive the imaging data wirelessly transmitted from the camera.
Type: Application
Filed: March 31, 2014
Publication date: November 27, 2014
Applicant: Olympus Corporation
Inventor: Takahisa ENDO
-
Publication number: 20140347504
Abstract: A system and method is disclosed for enabling user friendly interaction with a camera system. Specifically, the inventive system and method have several aspects to improve the interaction with a camera system, including voice recognition, gaze tracking, touch sensitive inputs and others. The voice recognition unit is operable for, among other things, receiving multiple different voice commands, recognizing the vocal commands, associating the different voice commands to one camera command and controlling at least some aspect of the digital camera operation in response to these voice commands. The gaze tracking unit is operable for, among other things, determining the location on the viewfinder image that the user is gazing upon. One aspect of the touch sensitive inputs provides that the touch sensitive pad is mouse-like and is operable for, among other things, receiving user touch inputs to control at least some aspect of the camera operation.
Type: Application
Filed: August 6, 2014
Publication date: November 27, 2014
Inventor: Jeffrey C. Konicek
-
Publication number: 20140347505
Abstract: A focus detection sensor and an image pickup system are provided. The focus detection sensor includes photoelectric conversion units converting light into charges, memory units storing the charges generated by the photoelectric conversion units as pixel signals, transfer units transferring the charges generated by the photoelectric conversion units to the memory units, reset units resetting the photoelectric conversion units and the memory units, a detection unit outputting a first detection signal in accordance with the pixel signals stored in the memory units, and a mode switching determination unit performing switching from a first operation mode, in which the transfer units are set to a transfer state in a charge accumulation period after the photoelectric conversion units are reset, to a second operation mode, in which the transfer units are set to a non-transfer state.
Type: Application
Filed: May 21, 2014
Publication date: November 27, 2014
Applicant: CANON KABUSHIKI KAISHA
Inventors: Satoshi Suzuki, Yukihiro Kuroda
-
Publication number: 20140347506
Abstract: An image capturing apparatus detects the angular rotational shake and translational shake generated in the apparatus using an angular velocity sensor and an accelerometer. An angular rotational shake correction coefficient calculation unit calculates a first correction coefficient using a zoom lens position and a focus lens position. A translational shake correction coefficient calculation unit calculates a second correction coefficient using the imaging magnification of an imaging optical system.
Type: Application
Filed: August 6, 2014
Publication date: November 27, 2014
Applicant: CANON KABUSHIKI KAISHA
Inventor: Nobushige Wakamatsu
-
Publication number: 20140347507
Abstract: An imaging control terminal includes a wireless communication interface configured to wirelessly communicate with an imaging terminal, an imaging module configured to generate imaging data, a display interface configured to display an image corresponding to the imaging data generated by the imaging module, an operation interface configured to receive an operation of an operator designating an imaging area or an imaging target to be imaged by the imaging terminal for the image, an information generation unit configured to generate imaging area information representing the imaging area or imaging target information representing the imaging target, and a communication control unit configured to cause the wireless communication interface to wirelessly transmit the imaging area information or the imaging target information to the imaging terminal.
Type: Application
Filed: May 16, 2014
Publication date: November 27, 2014
Applicant: Olympus Corporation
Inventor: Masaharu YANAGIDATE
-
Publication number: 20140347508
Abstract: A method of synchronizing a remote device to image acquisition by a camera body, including detecting a predictor signal of the camera body that occurs a known time prior to shutter opening. The detected predictor signal is used to determine a time to synchronize the remote device to image acquisition via wireless communication. For example, the detected predictor signal may be used to predict when the shutter of the camera will be open. A wireless communication system for synchronizing a remote device to a camera body may include a memory having information used to synchronize the remote device to image acquisition based on the detection of the predictor signal occurring prior to a shutter opening.
Type: Application
Filed: August 11, 2014
Publication date: November 27, 2014
Applicant: LAB PARTNERS ASSOCIATES, INC.
Inventor: James E. Clark
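Once the predictor signal's lead time before shutter opening is known, synchronizing the remote device is a matter of subtracting the wireless link delay from the predicted shutter-open time. A sketch of that arithmetic, with the function name and millisecond units assumed:

```python
def remote_trigger_time(predictor_ms, lead_ms, link_delay_ms):
    """The predictor signal fires lead_ms before the shutter opens;
    transmit the wireless trigger early enough that it arrives at the
    remote device exactly as the shutter opens."""
    shutter_open_ms = predictor_ms + lead_ms
    return shutter_open_ms - link_delay_ms
```

The stored "information used to synchronize" in the abstract would plausibly include the per-camera lead time and the measured link delay used here.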
-
Publication number: 20140347509
Abstract: Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images, and lens stack arrays that can be utilized in array cameras, are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, in which each imager comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements; control circuitry configured to capture images formed on the light sensing elements of each of the imagers; and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
Type: Application
Filed: August 13, 2014
Publication date: November 27, 2014
Inventors: Kartik Venkataraman, Amandeep S. Jabbi, Robert H. Mullis, Jacques Duparre, Shane Ching-Feng Hu
-
Publication number: 20140347510
Abstract: An image processing device of the present invention comprises a storage section for storing first image data obtained by imaging in front of the imaging device body or using a telephoto lens, and second image data obtained by imaging behind the imaging device body or using a wide-angle lens; a movement pattern detection section for processing the first image data to detect a movement pattern of the first image represented by the first image data; a movement pattern determination section determining whether or not the movement pattern of the first image is unstable movement; and an image processing section for rewriting a part of the first image data, for which it has been determined by the movement pattern determination section that the movement pattern of the first image is unstable, using the second image data.
Type: Application
Filed: August 13, 2014
Publication date: November 27, 2014
Inventors: Osamu NONAKA, Naohiro KAGEYAMA
-
Publication number: 20140347511
Abstract: A system and method for triggering image re-capture in image processing by receiving a first image captured using a first mode, performing a computer vision task on the first image to produce a first result, generating a confidence score of the first result using a machine learning technique, triggering an image re-capture using a second mode in response to the confidence score of the first result, and performing the computer vision task on a result of the image re-capture using the second mode.
Type: Application
Filed: May 24, 2013
Publication date: November 27, 2014
Applicant: XEROX CORPORATION
Inventors: Jose A. Rodriguez-Serrano, Peter Paul, Florent Perronnin
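The trigger loop described above, run the vision task on a first-mode capture, score the result, and re-capture with a second mode only when confidence is low, can be sketched generically. The function names, the callable interfaces, and the 0.5 threshold are assumptions; the abstract leaves the confidence model and capture modes unspecified.

```python
def run_with_recapture(capture, task, confidence, threshold=0.5):
    """Run the computer vision task on a first-mode capture; if the
    confidence score of the result is below threshold, re-capture with
    the second mode and re-run the task on the new image."""
    image = capture(mode=1)
    result = task(image)
    if confidence(result) < threshold:
        image = capture(mode=2)   # e.g. different exposure or illumination
        result = task(image)
    return result
```

The confidence callable stands in for the "machine learning technique" of the abstract, which could be anything from a classifier margin to a learned regressor over task outputs.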
-
Publication number: 20140347512
Abstract: A diagnostic system for biometric mapping of facial skin includes a light filter, a light sensor, a non-transient memory, a correlation processor, and an output unit. The light filter filters light reflected from an object to a filtered light signal. The light sensor receives the filtered light signal and generates a first electronic image signal representative of an image of the object in accordance with the filtered light signal. The memory stores a first electronic diagnostic signal representative of a predetermined mal-condition of the object. The processor determines a correlation between the first electronic image signal and the first electronic diagnostic signal, generates a correlation signal representative of a strength of the correlation, determines a diagnosis of the associated object based on the correlation signal, and generates a diagnosis signal in accordance with the diagnosis. The output unit generates a diagnosis result signal in accordance with the diagnosis signal.
Type: Application
Filed: May 24, 2013
Publication date: November 27, 2014
Inventor: Rakesh SETHI
-
Publication number: 20140347513
Abstract: A feature point detection apparatus sets, in accordance with a shooting mode or a designation from a user, a detection parameter for each region of an image to detect a feature point, and detects the feature point based on the detection parameter.
Type: Application
Filed: May 15, 2014
Publication date: November 27, 2014
Inventor: Masaaki Kobayashi
-
Publication number: 20140347514
Abstract: A method and system for detecting facial expressions in digital images and applications therefor are disclosed. Analysis of a digital image determines whether or not a smile and/or blink is present on a person's face. Face recognition, and/or a pose or illumination condition determination, permits application of a specific, relatively small classifier cascade.
Type: Application
Filed: June 9, 2014
Publication date: November 27, 2014
Inventors: Catalina Neghina, Mihnea Gangea, Stefan Petrescu, Emilian David, Petronel Bigioi, Eric Zarakov, Eran Steinberg
-
Publication number: 20140347515
Abstract: An imaging lens includes, arranged in sequence from the object side to the imaging surface side, a first lens having a positive power and convex surfaces on both sides; an aperture diaphragm; a second lens being a meniscus lens having a negative power and a convex surface on the object side; a third lens being a meniscus lens having a positive power and a concave surface on the object side; and a fourth lens having a negative power and concave surfaces on both sides. With this structure, the imaging lens is well corrected for various aberrations in spite of being compact in the lens radial direction and thin in the optical axis direction.
Type: Application
Filed: October 10, 2012
Publication date: November 27, 2014
Inventors: Takumi Iba, Masatoshi Yamashita
-
Publication number: 20140347516
Abstract: There is provided a signal processor including a phase difference detection part configured to acquire a pixel value of one light-shielding pixel having a part of a light-receiving region shielded therein and pixel values of a peripheral pixel row of the light-shielding pixel in a light shielding direction. A corrected pixel value obtained by subjecting the pixel value of the light-shielding pixel to a reduced sensitivity correction is compared with the pixel values of the peripheral pixel row to detect a phase difference of the light-shielding pixel.
Type: Application
Filed: May 15, 2014
Publication date: November 27, 2014
Applicant: Sony Corporation
Inventors: Kenichi Sano, Masayuki Tachi
-
Publication number: 20140347517
Abstract: An image processing apparatus includes a setting unit configured to set, in a first range encompassing an area of a captured image acquired by an imaging unit, a first analysis graphic or a first line on a display screen displaying the first range, and set, in the first range, a display range of the captured image to be displayed on a display unit, and a determination unit configured, when the display range is such that the first graphic or the first line set in the first range is partially located outside the display range, to determine a second analysis graphic having a number of vertexes equal to or smaller than a number of vertexes of the first analysis graphic and contained within the display range, or a second line contained within the display range.
Type: Application
Filed: May 19, 2014
Publication date: November 27, 2014
Inventor: Yoichi Kamei
-
Publication number: 20140347518
Abstract: A write control unit selects, in a row or column direction, N storing units from N×N storing units for storing pixel data of N (N≥2) read lines of image pickup devices and writes the data in sets of N pixels thereto, and switches a selection direction for selecting the storing units each time writes of the data of N lines are completed. A read control unit selects, in a direction different from the selection direction, N storing units and starts parallel reads of the data of N lines during writes of the data of every N-th line. Each storing unit to be first selected in the writes of the data of every N-th line performs write and read operations using different terminals, and each of the remaining storing units performs write and read operations using a common terminal.
Type: Application
Filed: May 5, 2014
Publication date: November 27, 2014
Applicant: FUJITSU SEMICONDUCTOR LIMITED
Inventor: Masaki TANAKA
-
Publication number: 20140347519
Abstract: An image capturing apparatus includes an interval shooting section (51) that performs an interval shooting process, a lighten compositing section (54) that performs a lighten compositing process using images captured one by one by the interval shooting process, and a composite-image-in-progress displaying section (55) that, when a first operation is performed, causes a composite image in a first memory area, which is used as a compositing buffer for the lighten compositing process, to be displayed on an LCD monitor without causing the interval shooting process to be stopped.
Type: Application
Filed: September 12, 2012
Publication date: November 27, 2014
Inventor: Katsuya Yamamoto
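Lighten compositing, as used in this abstract, keeps the brighter value at each pixel position as successive interval shots arrive. A minimal grayscale sketch (an illustration of the technique only, not the patented apparatus, which composites into a memory buffer inside the camera):

```python
def lighten_composite(buffer, new_frame):
    """Pixel-wise lighten composite: each position keeps the brighter of
    the running composite's value and the newly captured frame's value.
    Images are lists of rows of grayscale values for simplicity."""
    return [[max(a, b) for a, b in zip(row_buf, row_new)]
            for row_buf, row_new in zip(buffer, new_frame)]

# Interval shooting feeds frames one by one into the compositing buffer;
# the buffer can be displayed at any point without stopping the shooting.
frames = [[[10, 200], [30, 40]],
          [[50, 60], [20, 250]]]
composite = frames[0]
for frame in frames[1:]:
    composite = lighten_composite(composite, frame)
# composite == [[50, 200], [30, 250]]
```

Because only bright pixels accumulate, this is the compositing mode typically used for star-trail and light-painting photography.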
-
Publication number: 20140347520
Abstract: An imaging apparatus for performing efficient signal processing depending on the operational mode. In the finder mode, a CCD interface 21a decimates horizontal components of image data supplied from an image generating unit 10 to one-third and moreover processes the decimated image data with data conversion and resolution conversion to produce Y, Cb and Cr image data which are routed to and written in an image memory 32 over a memory controller 22. In the recording mode, the CCD interface 21a causes the image data from the image generating unit 10 to be written in the image memory 32 via memory controller 22 after decimation and gamma correction etc. The camera DSP 21c reads out the image data via memory controller 22 from the image memory 32 to effect data conversion for writing the resulting data via memory controller 22 in the image memory 32.
Type: Application
Filed: August 6, 2014
Publication date: November 27, 2014
Applicant: SONY CORPORATION
Inventors: Yoichi MIZUTANI, Masayuki Takezawa, Hideki Matsumoto, Ken Nakajima, Toshihisa Yamamoto
-
Publication number: 20140347521
Abstract: A total exposure time (TET) may be selected. A plurality of images of a scene may be captured using respective TETs that are based on the selected TET. At least two of the images in the plurality of images may be combined to form a merged short-exposure image. A digital gain may be applied to the merged short-exposure image to form a virtual long-exposure image. The merged short-exposure image and the virtual long-exposure image may be combined to form an output image. More of the output image may be properly exposed than either of the merged short-exposure image or the virtual long-exposure image.
Type: Application
Filed: May 24, 2013
Publication date: November 27, 2014
Applicant: Google Inc.
Inventors: Samuel William HASINOFF, Ryan GEISS
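The digital-gain step in this abstract can be illustrated numerically: several short-TET frames are merged (a plain pixel-wise mean here; the actual method would align and merge more carefully) and a gain then simulates a longer exposure. The function name, the flat-list image representation, and the 8-bit clipping value are illustrative assumptions, not details from the patent.

```python
def synthesize_exposures(short_frames, gain, clip=255):
    """Merge short-exposure frames by pixel-wise averaging, then apply a
    digital gain to form a virtual long-exposure image.  Frames are flat
    lists of grayscale pixel values; a real pipeline would align frames
    and blend the two versions region by region into the output image."""
    merged = [sum(px) / len(px) for px in zip(*short_frames)]  # merged short exposure
    virtual_long = [min(p * gain, clip) for p in merged]       # gained and clipped
    return merged, virtual_long

merged, virtual_long = synthesize_exposures([[10, 100], [30, 100]], gain=4)
# merged == [20.0, 100.0]; virtual_long == [80.0, 255]
```

Dark pixels are lifted by the gain while bright ones saturate at the clip value, which is why the two images are then combined so that more of the output is properly exposed than either input alone.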
-
Publication number: 20140347522
Abstract: The zoom lens includes at least, in order from an object side: a first lens group G1 having positive refractive power; a second lens group G2 having negative refractive power; a third lens group G3 having positive refractive power; a fourth lens group G4 having negative refractive power; and a fifth lens group G5 having negative refractive power. In the zoom lens, focusing from infinity to a close object is achieved by movement of just the fourth lens group toward an image focusing side, and is characterized by satisfaction of the expressions below: [Expression 1] 2.1 < βrt < 3.5 … (1) −1.80 < β2t < −0.94 … (2) where "βrt" is the composite lateral magnification at a telephoto end of the lens groups located closer to the image focusing side than the third lens group at infinity focusing, and "β2t" is the lateral magnification at a telephoto end of the second lens group at infinity focusing.
Type: Application
Filed: May 21, 2014
Publication date: November 27, 2014
Applicant: Tamron Co., Ltd.
Inventor: Yoshito Iwasawa
-
Publication number: 20140347523
Abstract: A zoom lens includes, in order from object side to image side, a first lens unit having a negative refractive power and a second lens unit having a positive refractive power. During zooming, the first lens unit and the second lens unit move so that the distance between the first lens unit and the second lens unit changes. The first lens unit includes at least one positive lens and at least one negative lens. The total length of the zoom lens at the wide angle end, the back focal length at the wide angle end, the focal length of the zoom lens at the telephoto end, the focal length of the first lens unit, the focal length of the second lens unit, and the refractive index of the material of the at least one positive lens included in the first lens unit are each appropriately set according to mathematical conditions.
Type: Application
Filed: May 21, 2014
Publication date: November 27, 2014
Applicant: CANON KABUSHIKI KAISHA
Inventor: Shin Kuwashiro
-
Publication number: 20140347524
Abstract: The zoom lens is composed of: an object side lens group at least including, in order from an object side: a first lens group G1 having positive refractive power; and a second lens group G2 having negative refractive power; and an image focusing side lens group including, in order from the object side: a negative lens group A having negative refractive power; and a negative lens group B, arranged facing the negative lens group A across an air gap and having negative refractive power. In the zoom lens, focusing from infinity to a close object is achieved by moving just the negative lens group A toward an image focusing side so as to satisfy the conditional expressions below: [Expression 1] −1.80 < β2t < −0.94 … (1) (1 − βAt²) × βBt² < −4.
Type: Application
Filed: May 22, 2014
Publication date: November 27, 2014
Applicant: Tamron Co., Ltd.
Inventor: Yoshito Iwasawa
-
Publication number: 20140347525
Abstract: A zoom lens includes a first lens group having positive refracting power, a second lens group having negative refracting power, a third lens group having positive refracting power, a fourth lens group having negative refracting power and a fifth lens group having negative refracting power in order from an object side, in which the lens groups move in magnification change from a wide angle end to a telephoto end such that a gap between the first lens group and the second lens group increases and a gap between the second lens group and the third lens group decreases, a negative lens group disposed closer to an image focusing side than a diaphragm among all lens groups is set as a focusing lens group, and the focusing lens group moves toward the image focusing side at focusing from infinity to a close object, and the fifth lens group includes at least a single lens block of a meniscus shape provided with a concave surface at an object side, the single lens block of the meniscus shape has a negative focal distanc
Type: Application
Filed: May 22, 2014
Publication date: November 27, 2014
Applicant: Tamron Co., Ltd.
Inventor: Yasuhiko Obikane
-
Publication number: 20140347526
Abstract: An image processing apparatus includes a decimating unit configured to decimate pixels in a target image to obtain a decimated image containing a smaller number of pixels than the target image; an extracting unit configured to extract similar pixels, at each of which a similarity to a pixel of interest is a threshold or more, from a region containing the pixel of interest among pixels of the decimated image; a first calculating unit configured to calculate a correction candidate value based on pixel values of the similar pixels; a second calculating unit configured to calculate a correction candidate value for each decimated pixel, based on the correction candidate value calculated for each pixel of the decimated image; and a correcting unit configured to correct a target pixel value of a target pixel in the target image, based on the correction candidate value calculated by the first or second calculating unit.
Type: Application
Filed: January 4, 2013
Publication date: November 27, 2014
Inventors: Takayuki Hara, Kazuhiro Yoshida, Yoshikazu Watanabe, Akira Kataoka
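The decimation and similar-pixel steps can be sketched as below, under simplifying assumptions the abstract does not fix: grayscale pixels, a 3×3 region around the pixel of interest, and absolute difference as the (inverse) similarity measure.

```python
def decimate(image, step=2):
    """Keep every `step`-th pixel in both directions, giving a
    decimated image with fewer pixels than the target image."""
    return [row[::step] for row in image[::step]]

def correction_candidate(decimated, y, x, threshold=10):
    """Average the pixels in the 3x3 region around (y, x) whose
    similarity to the pixel of interest meets the threshold
    (absolute difference <= threshold is the assumption here)."""
    centre = decimated[y][x]
    similar = [decimated[j][i]
               for j in range(max(0, y - 1), min(len(decimated), y + 2))
               for i in range(max(0, x - 1), min(len(decimated[0]), x + 2))
               if abs(decimated[j][i] - centre) <= threshold]
    return sum(similar) / len(similar)

small = decimate([[10, 12, 100],
                  [11, 13, 90],
                  [14, 200, 15]])   # -> [[10, 100], [14, 15]]
# For (0, 0): 100 differs from 10 by more than the threshold, so it is
# excluded and the candidate is (10 + 14 + 15) / 3.
```

Working on the decimated image keeps the cost of the similarity search low; the candidate values are then propagated back to correct pixels of the full-resolution target image.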
-
Publication number: 20140347527
Abstract: An image processing device includes an acquisition section, first conversion section, determination section, filtering section, and second conversion section. The acquisition section acquires first image data defined by a plurality of color components. The first conversion section converts, for each pixel data, the plurality of color components into converted pixel data defined by a luminance and a color difference. The determination section determines whether or not each converted pixel data is target pixel data having a black character attribute. The filtering section performs a filtering process on a luminance of each target pixel data using a filter coefficient to obtain processed pixel data defined by the luminance and the color difference. The filter coefficient enhances a difference in the luminance between the each target pixel data and neighboring pixel data. The second conversion section converts each processed pixel data into updated pixel data defined by the plurality of color components.
Type: Application
Filed: May 22, 2014
Publication date: November 27, 2014
Applicant: BROTHER KOGYO KABUSHIKI KAISHA
Inventors: Toshihiro WATANABE, Atsushi YOKOCHI
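The color-space and luminance-filtering steps can be sketched as below. The BT.601 full-range matrix and the unsharp-mask-style filter are assumptions for illustration; the abstract fixes neither the exact conversion matrix nor the filter coefficients.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to luminance/color-difference form
    (BT.601 full-range coefficients, one common choice)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

def sharpen_luma(y_center, y_neighbors, k=0.5):
    """Enhance the luminance difference between a black-character pixel
    and the mean of its neighbors; the color-difference components are
    left untouched, as in the abstract."""
    mean = sum(y_neighbors) / len(y_neighbors)
    return y_center + k * (y_center - mean)
```

Filtering only the luminance channel sharpens the edges of black text without introducing color fringing, after which the pixel is converted back to the original color components.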