Patents Issued on September 25, 2014
-
Publication number: 20140285622
Abstract: A broadcast receiver and a 3D video data processing method thereof are disclosed herein. A 3D video data processing method of a broadcast receiver according to an embodiment of the present invention includes receiving, by a receiving unit, a broadcast signal including 3D video data and 3D complementary video information, wherein the 3D video data include half-resolution base video data and complementary video data for configuring a full-resolution image; parsing, by a 3D video information processing unit, the 3D complementary video information; decoding, by a base video decoder, the half-resolution base video data; decoding, by a complementary video decoder, the complementary video data for configuring a full-resolution image; and combining and formatting, by an output formatter, the base video data and the complementary video data using the 3D complementary video information, thereby outputting a full-resolution 3D image.
Type: Application
Filed: May 27, 2014
Publication date: September 25, 2014
Applicant: LG Electronics Inc.
Inventors: Nagaraj Nandhakumar, Jong Yeul Shu, Seung Jong Choi, Jin Seok Im, Jeong Hyu Yang
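As an editorial illustration of the combining-and-formatting step described above, here is a minimal Python sketch that interleaves a half-resolution base frame and a complementary frame into a full-resolution frame. The column interleaving, function name, and array shapes are assumptions for illustration; the actual arrangement is governed by the signaled 3D complementary video information.

```python
import numpy as np

def combine_full_resolution(base_half, complementary_half):
    """Interleave half-resolution base and complementary frames column by column.

    Column interleaving is an assumption for illustration; the real output
    formatter follows the signaled 3D complementary video information.
    """
    h, w = base_half.shape[:2]
    full = np.zeros((h, 2 * w) + base_half.shape[2:], dtype=base_half.dtype)
    full[:, 0::2] = base_half            # base samples in even columns
    full[:, 1::2] = complementary_half   # complementary samples in odd columns
    return full

base = np.arange(6, dtype=np.uint8).reshape(2, 3)
comp = base + 100
print(combine_full_resolution(base, comp))
```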
-
Publication number: 20140285623
Abstract: A depth map in a three dimensional [3D] video signal is processed. From the 3D video signal a first depth map (Z1) is derived. A second depth map (Z2) is generated by a multi-dimensional filter (22) that causes the second depth map to have spilling artifacts, whereas the first depth map, in corresponding locations, has less or no such artifacts. A depth difference is determined between the first depth map and the second depth map, a positive depth difference indicating that a depth in the second depth map is closer to a viewer. A final, third depth map is generated by combining first depth map values and second depth map values according to a combining function in dependence on the depth difference. The combining function gives preference to the first depth map values where the depth difference is positive.
Type: Application
Filed: October 5, 2012
Publication date: September 25, 2014
Inventor: Wilhelmus Hendrikus Alfonsus Bruls
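A minimal Python sketch of the combining step, assuming the stated sign convention (a positive difference means the second depth map is closer to the viewer) and a smooth weighting by the depth difference; the weighting curve is hypothetical, not the patented combining function.

```python
import numpy as np

def combine_depth_maps(z1, z2, softness=8.0):
    """Blend two depth maps, preferring z1 where the depth difference is positive."""
    diff = z2 - z1                               # assumed: positive where z2 is closer
    w = 1.0 / (1.0 + np.exp(-diff / softness))   # weight tends to 1 where diff > 0
    return w * z1 + (1.0 - w) * z2               # prefer z1 there, z2 otherwise

z1 = np.array([[10.0, 20.0], [30.0, 40.0]])      # e.g. depth map without spilling
z2 = np.array([[14.0, 18.0], [36.0, 40.0]])      # e.g. filtered depth map with spilling
print(combine_depth_maps(z1, z2))
```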
-
Publication number: 20140285624
Abstract: One exemplary embodiment involves receiving a plurality of three-dimensional (3D) track points for a plurality of frames of a video, wherein the 3D track points are extracted from a plurality of two-dimensional source points. The embodiment further involves rendering the 3D track points across a plurality of frames of the video on a two-dimensional (2D) display. Additionally, the embodiment involves coloring each of the 3D track points wherein the color of each 3D track point visually distinguishes the 3D track point from a plurality of surrounding 3D track points, and wherein the color of each 3D track point is consistent across the frames of the video. The embodiment also involves sizing each of the 3D track points based on a distance between a camera that captured the video and a location of the 2D source points referenced by the respective one of the 3D track points.
Type: Application
Filed: June 25, 2012
Publication date: September 25, 2014
Applicant: Adobe Systems Incorporated
Inventors: James Acquavella, David Simons, Daniel M. Wilk
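As a rough illustration of the coloring and sizing described above, the sketch below derives a track's color from its index so the color stays consistent across frames, and shrinks the drawn radius with camera distance; the hue spacing and pixel sizes are invented values.

```python
import colorsys

def track_color(track_index):
    """Deterministic per-track hue so a point keeps its color across frames
    and differs from neighbouring tracks (golden-ratio hue spacing)."""
    hue = (track_index * 0.61803398875) % 1.0
    return colorsys.hsv_to_rgb(hue, 0.9, 0.95)

def point_radius_px(distance, near=1.0, far=50.0, min_px=2.0, max_px=12.0):
    """Larger dots for 3D points near the camera, smaller dots for far ones."""
    d = min(max(distance, near), far)
    t = (d - near) / (far - near)
    return max_px - t * (max_px - min_px)

print(track_color(7), point_radius_px(25.0))
```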
-
Publication number: 20140285625
Abstract: A machine vision system may perform compressive sensing by aggregating signals from multiple pixels. The aggregation of signals may be based on a sampling function. The sampling function may be formed of a product of a random basis, which may be sparse, and a filtering function.
Type: Application
Filed: March 20, 2013
Publication date: September 25, 2014
Inventor: John McGarry
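One way to read the sampling function, sketched below under stated assumptions: a sparse random ±1 basis combined with a filtering operator (here, each basis row convolved with a small low-pass kernel); the matrix sizes and the kernel are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_measurements = 256, 32

# sparse random basis: mostly zeros, a few +/-1 entries per row
basis = rng.choice([0.0, 1.0, -1.0], size=(n_measurements, n_pixels), p=[0.9, 0.05, 0.05])

# sampling function = random basis filtered by a 3-tap low-pass kernel (assumed form)
kernel = np.array([0.25, 0.5, 0.25])
sampling = np.array([np.convolve(row, kernel, mode="same") for row in basis])

pixel_signals = rng.random(n_pixels)        # signals from individual pixels
measurements = sampling @ pixel_signals     # aggregated compressive measurements
print(measurements.shape)                   # (32,)
```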
-
Publication number: 20140285626
Abstract: The techniques and arrangements described herein provide for layered compression of depth image data. In some examples, an encoder may partition depth image data into a most significant bit (MSB) layer and a least significant bit (LSB) layer. The encoder may quantize the MSB layer and generate quantization difference data based at least in part on the quantization of the MSB layer. The encoder may apply the quantization difference data to the LSB layer to generate an adjusted LSB layer.
Type: Application
Filed: March 25, 2013
Publication date: September 25, 2014
Applicant: Microsoft Corporation
Inventor: Microsoft Corporation
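A toy Python sketch of the layered split, assuming 16-bit depth values, an 8/8 MSB/LSB split, and a simple uniform quantizer; how the quantization difference is applied to the LSB layer here is a guess, not the encoder's actual rule.

```python
import numpy as np

def split_layers(depth16, lsb_bits=8):
    """Partition 16-bit depth values into MSB and LSB layers."""
    msb = (depth16 >> lsb_bits).astype(np.int32)
    lsb = (depth16 & ((1 << lsb_bits) - 1)).astype(np.int32)
    return msb, lsb

def quantize_msb(msb, step=4):
    """Uniformly quantize the MSB layer and return the quantization difference."""
    q = (msb // step) * step
    return q, msb - q

depth = np.array([[1234, 40000], [512, 65535]], dtype=np.uint16)
msb, lsb = split_layers(depth)
msb_q, qdiff = quantize_msb(msb)
adjusted_lsb = lsb + (qdiff << 8)   # assumed way of folding the difference into the LSB layer
print(msb_q)
print(adjusted_lsb)
```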
-
Publication number: 20140285627
Abstract: Provided is a solid-state image pickup device that includes: a plurality of pixels each including a photoelectric conversion element; and a transmittance control element provided on a light incident side of the photoelectric conversion element of at least a part of the plurality of pixels, and configured to change a transmittance of incident light by an external input.
Type: Application
Filed: March 12, 2014
Publication date: September 25, 2014
Applicant: Sony Corporation
Inventor: Nobuyuki Kuboi
-
Publication number: 20140285628
Abstract: Optical apparatus includes an image sensor and objective optics, which are configured to collect and focus optical radiation over a range of wavelengths along a common optical axis toward a plane of the image sensor. A dispersive element is positioned to spread the optical radiation collected by the objective optics so that different wavelengths in the range are focused along different, respective optical axes toward the plane.
Type: Application
Filed: June 5, 2014
Publication date: September 25, 2014
Inventors: Alexander Shpunt, Niv Gilboa, Haim Bezdin
-
Publication number: 20140285629
Abstract: A color filter array has G filters, R filters, and B filters. A pair of phase difference pixels adjoining in a horizontal direction is provided with one of the G, R, and B filters. In the color filter array, a fundamental array pattern, including the G, R, and B filters, is repeatedly disposed in horizontal and vertical directions. The G filters, which contribute most to obtaining luminance information, are disposed in every line extending in the horizontal direction, the vertical direction, and slanting directions. Both the R filters and the B filters are disposed in every line extending in the slanting directions. The number of the G filters is larger than that of the R filters or the B filters.
Type: Application
Filed: June 5, 2014
Publication date: September 25, 2014
Inventors: Mitsuru OKIGAWA, Kenkichi HAYASHI, Seiji TANAKA
-
Publication number: 20140285630
Abstract: An indoor navigation system is based on a multi-beam laser projector, a set of calibrated cameras, and a processor that uses knowledge of the projector design and data on laser spot locations observed by the cameras to solve the space resection problem to find the location and orientation of the projector.
Type: Application
Filed: March 20, 2013
Publication date: September 25, 2014
Applicant: TRIMBLE NAVIGATION LIMITED
Inventor: Kevin A. I. Sharp
-
Publication number: 20140285631
Abstract: An indoor navigation system is based on a multi-beam laser projector, a set of calibrated cameras, and a processor that uses knowledge of the projector design and data on laser spot locations observed by the cameras to solve the space resection problem to find the location and orientation of the projector.
Type: Application
Filed: June 20, 2013
Publication date: September 25, 2014
Applicant: TRIMBLE NAVIGATION LIMITED
Inventors: James M. Janky, Kevin A. I. Sharp, Michael V. McCusker, Morrison Ulman
-
Publication number: 20140285632
Abstract: An imaging system comprising an image capture apparatus, arranged to capture a stereoscopic image of an operator work site, in communication with a display system; the display system arranged to receive and display said stereoscopic image on a display screen to said operator; wherein said display system is arranged such that the display screen is placed intermediate the operator's eyes and the work site.
Type: Application
Filed: December 26, 2013
Publication date: September 25, 2014
Applicant: NATIONAL UNIVERSITY OF SINGAPORE
Inventors: Beng Hai Lim, Timothy Poston, James Kolenchery Rappel
-
Publication number: 20140285633
Abstract: A robotic system includes a robot, a display section, and a control section adapted to operate the robot. An imaging range of a first taken image, obtained by imaging an operation object of the robot from a first direction, and an imaging range of a second taken image, obtained by imaging the operation object from a direction different from the first direction, are displayed on the display section.
Type: Application
Filed: March 5, 2014
Publication date: September 25, 2014
Applicant: Seiko Epson Corporation
Inventors: Kenichi Maruyama, Kenji Onda
-
Publication number: 20140285634
Abstract: Imagery from two or more users' different smartphones is streamed to a cloud processor, enabling creation of 3D model information about a scene being imaged. From this model, arbitrary views and streams can be synthesized. In one arrangement, a user of such a system is at a sports arena, and her view of the sporting event is blocked when another spectator rises to his feet in front of her. Nonetheless, the imagery presented on her headworn display continues uninterrupted, with the blocked imagery from that viewpoint seamlessly re-created based on imagery contributed by other system users in the arena. A great variety of other features and arrangements are also detailed.
Type: Application
Filed: March 18, 2014
Publication date: September 25, 2014
Applicant: Digimarc Corporation
Inventor: Geoffrey B. Rhoads
-
Publication number: 20140285635
Abstract: A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the second video frames based on one of the first video frames to generate a target single view video frame.
Type: Application
Filed: March 20, 2014
Publication date: September 25, 2014
Applicant: MEDIATEK INC.
Inventors: Chi-Cheng Ju, Ding-Yun Chen, Cheng-Tsai Ho, Chia-Ming Cheng, Po-Hao Huang, Yuan-Chung Lee, Chung-Hung Tsai
-
Publication number: 20140285637
Abstract: A three-dimensional (3D) image capture method, employed in an electronic device with a monocular camera and a 3D display, includes at least the following steps: while the electronic device is moving, deriving a 3D preview image from a first preview image and a second preview image generated by the monocular camera, and providing 3D preview on the 3D display according to the 3D preview image, wherein at least one of the first preview image and the second preview image is generated while the electronic device is moving; and when a capture event is triggered, outputting the 3D preview image as a 3D captured image.
Type: Application
Filed: February 6, 2014
Publication date: September 25, 2014
Applicant: MEDIATEK INC.
Inventors: Chia-Ming Cheng, Cheng-Che Chan, Po-Hao Huang
-
Publication number: 20140285638
Abstract: A method including providing a device that projects a pattern of coherent radiation. The method further includes capturing a reference image of the pattern using an image sensor by projecting the pattern of the coherent radiation onto a reference surface and engendering a relative motion between the reference surface and the image sensor while capturing the reference image. The method also includes storing the reference image in a memory associated with the device.
Type: Application
Filed: June 11, 2014
Publication date: September 25, 2014
Inventors: Alexander Shpunt, Dmitri Rais, Niv Galezer
-
Publication number: 20140285639
Abstract: The present invention relates to a transparent stereoscopic image display, and in particular, to a transparent stereoscopic image display without a barrier slit or lenticular sheet, which enables an observer to feel a sense of three-dimensionality, by assembling transparent display panels and alternately outputting a left eye image frame and a right eye image frame, and enables the resolution to be improved about two-fold when a 2-dimensional image is output.
Type: Application
Filed: October 29, 2012
Publication date: September 25, 2014
Applicant: NEOVIEW KOLON CO., LTD.
Inventors: Chung Hyoun Gyoung, Hui Chul An, Dae Yong Kim, Il Ho Park, Hee Sung Lim, Soo Chang Lee, Woo Bin Im
-
Publication number: 20140285640
Abstract: A 3D image display device includes: a display panel including pixels; a signal controller controlling driving of the display panel; and an image signal processor generating a 3D input image signal based on image information and outputting the generated 3D input image signal to the signal controller. The image signal processor includes a view point generating unit generating view points corresponding to the respective pixels, and a 3D input image signal generating unit generating the 3D input image signal for the respective pixels based on information on a position of the pixel, information on the generated view point for the respective pixel, and the image information.
Type: Application
Filed: July 29, 2013
Publication date: September 25, 2014
Applicant: Samsung Display Co., Ltd.
Inventors: Il Yong YOON, Hae Young Yun, Jin Hwan Kim
-
Publication number: 20140285641
Abstract: A three-dimensional display device for displaying a three-dimensional image, including: a gaze point obtaining unit which obtains a position of a gaze point of a viewer; a fusional area determination unit which determines a fusional area where binocular fusion is allowed, based on the obtained position of the gaze point; a correction unit which corrects the three-dimensional image so as to suppress display of an object that is included in the three-dimensional image outside the fusional area; and a display unit which displays the corrected three-dimensional image.
Type: Application
Filed: June 5, 2014
Publication date: September 25, 2014
Inventors: Yumiko KATO, Jun OZAWA, Tsuyoshi INOUE, Toru NAKADA
-
Publication number: 20140285642
Abstract: A non-glasses type stereoscopic image display device comprises: a display panel that comprises a plurality of sub-pixels and displays multi-view images in predetermined units; and an optical plate array that is formed side by side with the sub-pixels and divides the multi-view images into a plurality of multi-view areas, each of the sub-pixels comprising a first color sub-pixel, a second color sub-pixel, and a third color sub-pixel, which alternate in the same row along a horizontal direction in which gate lines extend and are formed side by side along a vertical direction in which data lines extend in different columns, wherein vertically neighboring sub-pixels displaying different colors partially overlap each other in the vertical direction.
Type: Application
Filed: December 10, 2013
Publication date: September 25, 2014
Applicant: LG Display Co., Ltd.
Inventors: Kwangjo HWANG, Euitae KIM, Hoonki KIM, Youyong JIN
-
Publication number: 20140285643
Abstract: A stereoscopic display device is provided that enables stereoscopy for various viewing positions. A stereoscopic display device (1) includes: a display panel (14) configured to display images for a plurality of viewpoints, the images having parallax and being arranged regularly; a light beam convertor (11) disposed adjacent the front side of the display panel (14) configured to form virtual lenticular lenses by controlling a voltage, the lenticular lenses adapted to the images on the display panel (14) and being arranged at a certain interval; and a controller configured to control the display panel (14) and the light beam convertor (11). The controller changes the focal length of the virtual lenticular lenses formed by the light beam convertor (11) depending on the distance between the display panel (14) and a viewer.
Type: Application
Filed: October 2, 2012
Publication date: September 25, 2014
Applicant: Sharp Kabushiki Kaisha
Inventor: Naru Usukura
-
Publication number: 20140285644
Abstract: The disclosure extends to endoscopic devices and systems for image correction of a rotating sensor. The disclosure allows for the distal image sensor to rotate as the user rotates the lumen with respect to a fixed handpiece. The system includes an angle sensor located at the junction of the rotating lumen and the fixed handpiece. Periodic measurements of angle are used in the system's image processing chain in order to effect suitable software image rotations, thereby providing a final displayed image or video stream with the desired orientation.
Type: Application
Filed: March 14, 2014
Publication date: September 25, 2014
Applicant: OLIVE MEDICAL CORPORATION
Inventors: John Richardson, Laurent Blanquart, Jeremiah D. Henley, Donald M. Wichern
-
Publication number: 20140285645
Abstract: The disclosure extends to systems and methods for reducing the area of an image sensor by reducing the imaging sensor pad count used for data transmission and clock generation.
Type: Application
Filed: March 15, 2014
Publication date: September 25, 2014
Applicant: OLIVE MEDICAL CORPORATION
Inventors: Laurent Blanquart, Donald M. Wichern
-
Publication number: 20140285646
Abstract: A system for providing a list of the best fitting shoes based on a comparison of the user's foot measurements and those of the shoes. The measurements are taken from a three-dimensional (3D) model, using automated photogrammetry techniques. The 3D model is generated from the images captured by cameras inside the apparatus. The captured data is stored in a remotely connected database. User preferences are also taken into consideration for the recommendation of the shoes.
Type: Application
Filed: November 7, 2013
Publication date: September 25, 2014
Inventor: Satwinder Kahlon
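The comparison step can be illustrated with a small ranking function: a weighted absolute difference between the foot measurements from the 3D model and each shoe's measurements. The measurement names, weights, and example values are invented for illustration.

```python
def rank_shoes(foot, shoes, weights=None):
    """Order shoes by how closely their measurements match the foot's (smaller error first)."""
    weights = weights or {key: 1.0 for key in foot}

    def fit_error(shoe):
        return sum(weights[key] * abs(shoe[key] - foot[key]) for key in foot)

    return sorted(shoes, key=fit_error)

foot = {"length_mm": 265, "width_mm": 102}
shoes = [
    {"name": "A", "length_mm": 270, "width_mm": 100},
    {"name": "B", "length_mm": 266, "width_mm": 104},
]
print([shoe["name"] for shoe in rank_shoes(foot, shoes)])   # ['B', 'A']
```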
-
Publication number: 20140285647
Abstract: A portable terminal apparatus for imaging diagnosis includes an object recognition unit for selectively specifying one object (body part or anatomical region) in a body, so as to output object information. A region of the object is changeable according to a terminal position or terminal orientation relative to the body. An image acquisition device acquires, from a server apparatus, a medical image relevant to the object according to the object information. A display unit displays the medical image. Preferably, furthermore, an optical camera optically images the object to create a local image. The object recognition unit specifies the object by image analysis of the local image. Also, the object recognition unit compares the local image with an optical body image created by optically imaging the body wholly in the optical camera, to detect the object in the local image.
Type: Application
Filed: February 27, 2014
Publication date: September 25, 2014
Applicant: FUJIFILM Corporation
Inventors: Kiyohisa Okusu, Satoshi Ueda, Yuya Kudo, Hironori Matsumasa, Yasunori Ohta
-
Publication number: 20140285648
Abstract: Provided is an optical imaging system capable of increasing diagnosis reliability and precision, and efficiently observing a target. The optical imaging system using multiple light sources according to an embodiment of the present invention includes a first light source generating a first light modulated with a first frequency, a second light source generating a second light modulated with a second frequency, a camera simultaneously detecting multiple lights output from an object after the first and second lights are illuminated on the object and outputting multiple image detecting signals, and an image processing unit processing the multiple image detecting signals to obtain a first image representing a shape of the object and a second image representing a desired target portion of the object.
Type: Application
Filed: March 6, 2014
Publication date: September 25, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eun-Ju JEONG, Bong Kyu KIM, Won Ick JANG, Chang-Geun AHN, Hyun Woo SONG
-
Publication number: 20140285649
Abstract: An image acquisition apparatus 1 includes: an image acquisition unit 43 that sequentially acquires images consecutively captured by an image capture unit 16; a face detection unit 44 that detects a predetermined subject in the acquired image each time an image is acquired; and an image acquisition stop unit 46 that stops the acquisition of target images for composition when the predetermined subject is detected.
Type: Application
Filed: March 19, 2014
Publication date: September 25, 2014
Applicant: CASIO COMPUTER CO., LTD.
Inventor: Kouichi SAITOU
-
Publication number: 20140285650
Abstract: A phase distribution measurement method inside a biological sample includes taking in an optical image of the sample formed by a microscope to form a plurality of images with different image contrasts; calculating a component corresponding to phase distribution of the sample and a component corresponding to other than the phase distribution, and dividing the component corresponding to the phase distribution by the component corresponding to other than the phase distribution to form a normalized phase component image; breaking down the phase component image into a plurality of frequency components; performing a deconvolution process on each of the frequency components using an optical response character corresponding to each, and calculating phase distribution of a refraction component and phase distribution of a structure component; and calculating phase distribution of the sample by compounding the phase distribution of the refraction component and the phase distribution of the structure component.
Type: Application
Filed: February 25, 2014
Publication date: September 25, 2014
Applicant: OLYMPUS CORPORATION
Inventor: Hiroshi ISHIWATA
-
Publication number: 20140285651
Abstract: A microscope has a light source for generating a light beam having a wavelength, λ, and beam-forming optics configured for receiving the light beam and generating a Bessel-like beam that is directed into a sample. The beam-forming optics include an excitation objective having an axis oriented in a first direction. Imaging optics are configured for receiving light from a position within the sample that is illuminated by the Bessel-like beam and for imaging the received light on a detector. The imaging optics include a detection objective having an axis oriented in a second direction that is non-parallel to the first direction.
Type: Application
Filed: March 17, 2014
Publication date: September 25, 2014
Applicant: HOWARD HUGHES MEDICAL INSTITUTE
Inventor: Robert E. Betzig
-
Publication number: 20140285652
Abstract: According to one embodiment, a method for measuring pattern misalignment includes: a first step obtaining image data; a second step specifying a measurement region; a third step calculating a first shift amount (x1, y1); a fourth step determining, after calculating the first shift amount, a first distribution; a fifth step executing a plurality of times the second step, the third step, and the fourth step; a seventh step calculating a second shift amount (x2, y2); an eighth step determining, after calculating the second shift amount, a second distribution; a ninth step executing a plurality of times the sixth step, the seventh step, and the eighth step; and a tenth step calculating a difference (x2−x1, y2−y1) between the second pattern misalignment and the first pattern misalignment.
Type: Application
Filed: July 29, 2013
Publication date: September 25, 2014
Applicant: Kabushiki Kaisha Toshiba
Inventors: Yosuke OKAMOTO, Yoshinori Hagio
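The tenth step is plain arithmetic; a toy Python example with invented shift amounts:

```python
# first shift amount (x1, y1) and second shift amount (x2, y2); values are illustrative
x1, y1 = 2.0, -1.0
x2, y2 = 2.5, -0.5
dx, dy = x2 - x1, y2 - y1   # difference between the second and first pattern misalignment
print(dx, dy)                # 0.5 0.5
```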
-
Publication number: 20140285653
Abstract: A microscope has a light source for generating a light beam having a wavelength, λ, and beam-forming optics configured for receiving the light beam and generating a Bessel-like beam that is directed into a sample. The beam-forming optics include an excitation objective having an axis oriented in a first direction. Imaging optics are configured for receiving light from a position within the sample that is illuminated by the Bessel-like beam and for imaging the received light on a detector. The imaging optics include a detection objective having an axis oriented in a second direction that is non-parallel to the first direction.
Type: Application
Filed: March 17, 2014
Publication date: September 25, 2014
Applicant: Howard Hughes Medical Institute
Inventor: Robert E. Betzig
-
Publication number: 20140285654
Abstract: Brightness information with a wide dynamic range is acquired and observed while preventing degradation of a detector.
Type: Application
Filed: March 17, 2014
Publication date: September 25, 2014
Inventors: Akinori ARAYA, Yohei KUWABARA
-
Publication number: 20140285655
Abstract: An apparatus for measuring a shape of an underwater object includes a laser emitter configured to irradiate a laser on a surface of a measurement target object under water; and a camera configured to capture a laser point generated at the surface of the measurement target object by the laser emitter. Further, the apparatus includes a shape restorer configured to convert a pixel coordinates value of the laser point captured by the camera into an absolute coordinates value to restore a three-dimensional (3D) shape of the measurement target object.
Type: Application
Filed: May 30, 2013
Publication date: September 25, 2014
Inventors: Seong-Ho SON, Hyuk Je KIM, Jong Moon LEE, Bo Ra KIM, Soon Ik JEON, Hyung Do CHOI
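A minimal sketch of converting a captured laser-point pixel into absolute coordinates, assuming a pinhole camera model and a known range to the laser plane; the intrinsic parameters are placeholders, not the apparatus's actual calibration.

```python
def pixel_to_absolute(u, v, fx, fy, cx, cy, range_m):
    """Back-project pixel (u, v) to 3D at a known range along the optical axis."""
    x = (u - cx) / fx * range_m
    y = (v - cy) / fy * range_m
    return (x, y, range_m)

# placeholder intrinsics and a 2 m range to the laser point
print(pixel_to_absolute(400, 300, fx=800.0, fy=800.0, cx=320.0, cy=240.0, range_m=2.0))
# (0.2, 0.15, 2.0)
```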
-
Publication number: 20140285656
Abstract: A camera skid and thrust nozzle assembly for supporting a camera while being positioned in a tubular pipe or conduit, spaced from any wall of the conduit, and permitting movement of the assembly along the conduit. The assembly includes a cage rotatably fastened to a high pressure fluid tractor nozzle, wherein the cage can rotate about an axis through the tractor nozzle. A camera support structure is fastened within the cage for rotation of the support structure about the axis independent of rotation of the cage. The camera support structure has a weight attached thereto for urging rotational orientation of the camera support structure within the cage according to a sensed external gravitational force on the assembly.
Type: Application
Filed: March 14, 2014
Publication date: September 25, 2014
Applicant: STONEAGE, INC.
Inventors: Gerald Zink, Rawlin Brown
-
Publication number: 20140285657
Abstract: The present disclosure is directed to a system for inspecting a sample with multiple wavelengths of illumination simultaneously via parallel imaging paths. The system may include at least a first detector or set of detectors configured to detect illumination reflected, scattered, or radiated along a first imaging path from a selected portion of the sample in response to the first wavelength of illumination and a second detector or set of detectors configured to concurrently detect illumination reflected, scattered, or radiated along a second imaging path from the selected portion of the sample (i.e. the same location on the sample) in response to the second wavelength of illumination, where the second imaging path may at least partially share illumination and/or detection optics with an autofocus channel.
Type: Application
Filed: March 17, 2014
Publication date: September 25, 2014
Applicant: KLA-Tencor Corporation
Inventors: Shiow-Hwei Hwang, Amir Bar, Grace Hsiu-Ling Chen, Daniel L. Cavan
-
Publication number: 20140285658
Abstract: A solution including a noncontact electronic measurement device is provided. The measurement device includes one or more imaging devices configured to acquire image data of a surface of an object located in a measurement region relative to the measurement device and one or more projected pattern generators configured to generate divergent pattern(s) of structured light, which impact the surface of the object within a field of view of the imaging device when the object is located in the measurement region. Using image data acquired by the imaging device(s), a computer system can measure a set of attributes of the surface of the object and/or automatically determine whether the measurement device is within the measurement region. An embodiment is configured to be held by a human user during operation.
Type: Application
Filed: March 19, 2014
Publication date: September 25, 2014
Applicant: INTERNATIONAL ELECTRONIC MACHINES CORPORATION
Inventors: Zahid F. Mian, Ryk E. Spoor, Ronald W. Gamache
-
Publication number: 20140285659
Abstract: The present invention relates to an intelligent central surveillance server system and a controlling method thereof, which are capable of determining whether or not a preset event occurs based on a sensor sense signal and/or a motion sense signal and transmitting/receiving image information obtained upon the occurrence of the event through a server-client system. The intelligent central surveillance server system comprises: a sensor unit configured to sense and transmit a sense signal; a camera unit configured to capture and transmit video information; and a digital video transmitter configured to determine whether or not a preset event occurs, based on the sense signal and/or the video information, and transmit first video information and first list information related to the first video information on the basis of the point of occurrence of the event when the preset event occurs.
Type: Application
Filed: April 22, 2013
Publication date: September 25, 2014
Applicant: 4NSYS Co., Ltd.
Inventors: Hak Tae KIM, Jason CHAE, Jae Cheol LEE, Chang Rok KIM, Yeon Chul JUNG, Jung Ho HWANG
-
Publication number: 20140285660
Abstract: According to some implementations, an estimate of a target's location can be calculated by correlating Wi-Fi and video location measurements. This spatio-temporal correlation combines the Wi-Fi and video measurements to determine an identity and location of an object. The accuracy of the video localization and the identity from the Wi-Fi network provide an accurate location of the Wi-Fi identified object.
Type: Application
Filed: October 25, 2013
Publication date: September 25, 2014
Applicant: NEARBUY SYSTEMS, INC.
Inventors: Mark Jamtgaard, Nathan Mueller
-
Publication number: 20140285661
Abstract: Methods and systems for improving presentation of terrain/obstacle alerting information in a composite (synthetic and sensor) image. An exemplary system includes an imaging device that obtains image data for a region exterior to the vehicle, a processing system, and a display device. The processing system is in data communication with the display device and the imaging device. The processing system receives information from an alerting system (e.g., a terrain awareness and warning system) and the obtained image data; colors or makes transparent at least a portion of the obtained image data if the received information indicates that an alert condition exists for that portion of the obtained image data; and generates a composite image comprising a previously generated synthetic image and the obtained image data with the enhanced portion. The display device displays the composite image. The obtained image data overlie the synthetic image.
Type: Application
Filed: March 22, 2013
Publication date: September 25, 2014
Applicant: Honeywell International Inc
Inventors: Thea L. Feyereisen, Gang He, John G. Suddreth
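A rough sketch of the coloring/transparency step: tint the sensor pixels flagged by the alerting system and lay the sensor image over the synthetic one. The blend weight, the red tint, and the no-data test are assumptions for illustration.

```python
import numpy as np

def composite_with_alerts(synthetic, sensor, alert_mask, alert_rgb=(255, 0, 0), alpha=0.5):
    """Overlay sensor imagery on a synthetic image, tinting alerted pixels red."""
    out = sensor.astype(np.float32)
    tint = np.array(alert_rgb, dtype=np.float32)
    out[alert_mask] = (1.0 - alpha) * out[alert_mask] + alpha * tint   # color the alerted portion
    no_data = sensor.sum(axis=-1) == 0              # assumed test for missing sensor pixels
    out[no_data] = synthetic[no_data]               # fall back to the synthetic image there
    return out.astype(np.uint8)

synthetic = np.full((2, 2, 3), 60, dtype=np.uint8)
sensor = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
print(composite_with_alerts(synthetic, sensor, mask)[0, 0])   # tinted alerted pixel
```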
-
Publication number: 20140285662
Abstract: An image processing apparatus includes: a memory; and a processor coupled to the memory and configured to: acquire image data, and extract a corner point from the image data based on brightness information of a plurality of pixels in the image data, the corner point corresponding to a pixel arranged in both a first edge in a horizontal direction and a second edge in a vertical direction, when a number of pixels arranged in each of the first and second edges is more than a certain value.
Type: Application
Filed: January 31, 2014
Publication date: September 25, 2014
Applicant: FUJITSU LIMITED
Inventor: Kimitaka MURASHITA
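One possible reading of the corner rule, sketched below: a pixel counts as a corner when it lies on both a horizontal edge and a vertical edge and each edge contains more than a certain number of pixels. The row/column edge representation and the threshold are assumptions.

```python
import numpy as np

def corner_mask(edge_h, edge_v, min_run=5):
    """Flag pixels lying on both a horizontal and a vertical edge, where each edge
    has more than min_run pixels (one illustrative reading of the abstract)."""
    h_counts = edge_h.sum(axis=1, keepdims=True)   # edge pixels per row
    v_counts = edge_v.sum(axis=0, keepdims=True)   # edge pixels per column
    return edge_h & edge_v & (h_counts > min_run) & (v_counts > min_run)

# toy 8x8 edge maps: a horizontal edge along row 3 and a vertical edge along column 5
edge_h = np.zeros((8, 8), dtype=bool); edge_h[3, :] = True
edge_v = np.zeros((8, 8), dtype=bool); edge_v[:, 5] = True
print(np.argwhere(corner_mask(edge_h, edge_v)))   # [[3 5]]
```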
-
Publication number: 20140285663
Abstract: A vision system for a vehicle includes a mirror mounting button attached at a first location at an in-cabin surface of a windshield and an attachment member attached at a second location at the in-cabin surface of the windshield. A light absorbing layer is disposed at the windshield and at least partially masks the presence of the attachment member from view by a viewer who is viewing through the windshield from outside the vehicle. An accessory module is releasably attached to the attachment member and accommodates a forward facing camera. With the accessory module releasably attached to the attachment member at the windshield, the forward facing camera views through the windshield. The forward facing camera includes a component of a camera-based driver assistance system of the vehicle.
Type: Application
Filed: June 9, 2014
Publication date: September 25, 2014
Applicant: MAGNA ELECTRONICS INC.
Inventors: Kenneth Schofield, Niall R. Lynam
-
Publication number: 20140285664
Abstract: A system for capturing image data for gestures from a passenger or a driver in a vehicle with a dynamic illumination level comprises a low-lux sensor equipped to capture image data in an environment with an illumination level below an illumination threshold, a high-lux sensor equipped to capture image data in the environment with the illumination level above the illumination threshold, and an object recognition module for activating the sensors. The object recognition module determines the illumination level of the environment and activates the low-lux sensor if the illumination level is below the illumination threshold. If the illumination level is above the threshold, the object recognition module activates the high-lux sensor.
Type: Application
Filed: June 10, 2014
Publication date: September 25, 2014
Applicants: EDGE 3 TECHNOLOGIES LLC, HONDA MOTOR CO., LTD.
Inventors: Pedram Vaghefinazari, Stuart Masakazu Yamamoto, Ritchie Winson Huang, Josh Tyler King, Tarek El Dokor
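The activation logic reduces to a threshold test; a minimal sketch with an invented threshold value:

```python
def select_sensor(illumination_lux, threshold_lux=50.0):
    """Activate the low-lux sensor below the threshold, the high-lux sensor above it.

    The 50 lux threshold is a placeholder, not a value from the publication.
    """
    return "low_lux_sensor" if illumination_lux < threshold_lux else "high_lux_sensor"

print(select_sensor(12.0))    # low_lux_sensor
print(select_sensor(300.0))   # high_lux_sensor
```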
-
Publication number: 20140285665
Abstract: The present invention relates to an apparatus and method for assisting parking, the apparatus including: an image sensor photographing front/rear view images of a vehicle; and an estimated vehicle trace generation and processing unit generating an estimated trace of the vehicle to a parking target area using steering angle information of the vehicle, and overlaying the estimated trace of the vehicle onto the photographed front/rear view images of the vehicle, wherein the estimated trace of the vehicle includes a first estimated trace of the vehicle based on a rear wheel of the vehicle and a second estimated trace of the vehicle based on a front wheel of the vehicle.
Type: Application
Filed: November 21, 2011
Publication date: September 25, 2014
Applicant: LG INNOTEK CO., LTD.
Inventors: Sang Yong Lee, Dae Seung Kim
-
Publication number: 20140285666
Abstract: A vision system for a vehicle includes an exterior rearview mirror assembly mounted at an exterior portion of a door of a vehicle, an imaging sensor having a field of view exterior the vehicle and a video display screen operable to display video images captured by the imaging sensor. The video display screen is disposed at an interior portion of the vehicle door at which the exterior rearview mirror assembly is mounted. The exterior rearview mirror assembly and the video display screen may be part of a mirror and display module that is mountable at the vehicle door as a unit. The camera may be disposed at the exterior rearview mirror assembly.
Type: Application
Filed: November 1, 2012
Publication date: September 25, 2014
Inventors: David P. O'Connell, Kenneth C. Peterson, Keith D. Foote
-
Publication number: 20140285667
Abstract: This vehicle periphery monitoring device appropriately determines whether or not an animal detected by an imaging device is a high-risk animal which may possibly contact the vehicle. For example, compared with an animal in a posture with the head facing downwards, an animal in a posture with the head facing upwards is determined to be a high-risk animal which may suddenly bolt, so the latter animal is enclosed in a thick red frame and highlighted as a warning, and an alarm is emitted from speakers.
Type: Application
Filed: October 5, 2012
Publication date: September 25, 2014
Inventor: Makoto Aimura
-
Publication number: 20140285668
Abstract: A weapons container for releasing a defense weapon includes a cabinet, a cabinet door, and an interior securing the defense weapon. Upon detection of a fingerprint input by a biometric sensor, a surveillance camera captures video and an audio intercom provides communication with a central command center. A recording system saves the video and audio to a storage database. A delay release lock secures the defense weapon for an amount of time and releases the defense weapon depending on a signal from the central command center. A communications system is connected over a network to the central command center, a local authority, and an emergency responder. The communications system notifies the local authority or the emergency responder upon detection of the fingerprint input and streams the video and audio to the central command center and at least one of the local authority and the emergency responder over the Internet.
Type: Application
Filed: June 4, 2014
Publication date: September 25, 2014
Inventors: TIMOTHY DEWEESE, BRANDON DELIBRO
-
Publication number: 20140285669
Abstract: A system and a method for goal recognition, in which a plurality of cameras monitor a goal line plane that is generated by a framework for a goal, wherein at least one camera module is provided which can be used to monitor a sector outside the goal laterally adjacent to the framework or in front of the framework in the direction of a playing field in order to determine a trajectory for a playing body in the region of the playing field in the direction of the goal, wherein the operation of the cameras for monitoring the goal line plane and/or the evaluation of the image data therefrom can be controlled on the basis of the image data from the camera module.
Type: Application
Filed: June 9, 2014
Publication date: September 25, 2014
Inventors: Bjoern LINDNER, Juergen PHILIPPS, Rene BEAUJEAN
-
Publication number: 20140285670
Abstract: A photographing device includes a photographing unit, a storage unit, a position obtaining unit, a position determining unit and a photographing control unit. The photographing unit photographs a subject and changes a photographing interval. In the storage unit, information of a designated position and a designated photographing interval that are related to each other are stored. The position obtaining unit obtains information of a present position. The position determining unit determines whether the present position is within an area including the designated position. The photographing control unit sets a photographing interval for the designated position and makes the photographing unit take photographs when the position determining unit determines that the present position is within an area including the designated position.
Type: Application
Filed: March 19, 2014
Publication date: September 25, 2014
Applicant: CASIO COMPUTER CO., LTD.
Inventor: Ken FUJITA
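A small sketch of the interval selection: when the present position falls inside the area around a designated position, use that position's designated interval, otherwise a default. The circular-area test and all numbers are assumptions.

```python
import math

def photographing_interval_s(present, designated, default_interval_s=60.0):
    """Return the designated interval when the present position lies in a designated area."""
    for (lat, lon), radius_deg, interval_s in designated:
        if math.hypot(present[0] - lat, present[1] - lon) <= radius_deg:
            return interval_s
    return default_interval_s

designated = [((35.6586, 139.7454), 0.01, 5.0)]   # e.g. near a landmark: shoot every 5 s
print(photographing_interval_s((35.6590, 139.7450), designated))   # 5.0
print(photographing_interval_s((34.0000, 135.0000), designated))   # 60.0
```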
-
Publication number: 20140285671
Abstract: An infrared imaging module according to an embodiment includes: an infrared imaging element including a semiconductor substrate having a recessed portion, and a pixel portion formed on the recessed portion, the pixel portion converting infrared rays to electrical signals; and a lid including a lens portion facing the pixel portion, and a flat plate portion surrounding the lens portion, the flat plate portion being bonded to the semiconductor substrate.
Type: Application
Filed: March 11, 2014
Publication date: September 25, 2014
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Koichi ISHII, Hideyuki FUNAKI, Ikuo FUJIWARA, Hiroto HONDA
-
Publication number: 20140285672
Abstract: Systems and methods disclosed herein provide for infrared camera systems and methods for dual sensor applications. For example, in one embodiment, an enhanced vision system comprises an image capture component having a visible light sensor to capture visible light images and an infrared sensor to capture infrared images. The system comprises a first control component adapted to provide a plurality of selectable processing modes to a user, receive a user input corresponding to a user selected processing mode, and generate a control signal indicative of the user selected processing mode, wherein the plurality of selectable processing modes includes a visible light only mode, an infrared only mode, and a combined visible-infrared mode.
Type: Application
Filed: June 9, 2014
Publication date: September 25, 2014
Inventors: Nicholas Högasten, Jeffrey S. Scott, Patrick B. Richardson, Jeffrey D. Frank, Austin A. Richards, James T. Woolaway