Abstract: Using the same image sensor to capture both a two-dimensional (2D) image of a three-dimensional (3D) object and 3D depth measurements for the object. A laser point-scans the surface of the object with light spots, which are detected by a pixel array in the image sensor to generate the 3D depth profile of the object using triangulation. Each row of pixels in the pixel array forms an epipolar line of the corresponding laser scan line. Timestamping provides a correspondence between the pixel location of a captured light spot and the respective scan angle of the laser to remove any ambiguity in triangulation. An Analog-to-Digital Converter (ADC) in the image sensor generates a multi-bit output in the 2D mode and a binary output in the 3D mode to generate timestamps. Strong ambient light is rejected by switching the image sensor to a 3D logarithmic mode from a 3D linear mode.
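The triangulation described above can be illustrated with a small sketch. This is a generic structured-light setup, not the patent's exact geometry: the laser and camera sit a baseline apart, the laser's scan angle is known from the timestamp, and the camera's viewing angle is recovered from the pixel column where the spot landed; depth follows from intersecting the two rays.

```python
import math

def triangulate_depth(baseline, scan_angle, view_angle):
    """Depth of a laser spot, given the laser scan angle and the camera
    viewing angle, both measured from the baseline joining laser and camera.

    Intersecting the laser ray and the camera ray gives
        Z = b * tan(theta) * tan(beta) / (tan(theta) + tan(beta))
    """
    t_s, t_v = math.tan(scan_angle), math.tan(view_angle)
    return baseline * t_s * t_v / (t_s + t_v)

# Symmetric 45-degree rays over a 1 m baseline meet at a depth of 0.5 m.
assert abs(triangulate_depth(1.0, math.radians(45), math.radians(45)) - 0.5) < 1e-9
```

Timestamping matters because it pins down which scan angle produced the spot seen at a given column; without it, several spots along the epipolar line would be ambiguous.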
Abstract: A thermal imaging system is provided. The thermal imaging system includes an explosion-proof housing with an optical window configured to contain an explosive pressure. The optical window allows electromagnetic thermal energy to pass. A thermal imaging sensor is disposed within the explosion-proof housing. Thermal imaging electronics are coupled to the thermal imaging sensor and configured to provide at least one thermal image based on a signal from the thermal imaging sensor. A lens assembly is disposed at least in front of the optical window external to the explosion-proof housing. A composite optical window for thermal imaging is also provided. In another embodiment, a thermal imaging system includes an explosion-proof housing having an optical window configured to contain an explosive pressure. An infrared (IR) camera is disposed within the explosion-proof housing. A reflector reflects electromagnetic thermal energy to the IR camera, and prevents an object from impacting the optical window.
Type:
Grant
Filed:
June 30, 2015
Date of Patent:
December 22, 2020
Assignee:
Rosemount Inc.
Inventors:
Jason H. Rud, Andrew J. Kitzman, Sascha Ulrich Kienitz, Ulrich Kienitz, Johannes Gentz
Abstract: According to one aspect of the present invention, the present invention may relate to a method for transmitting a 360 video. The method for transmitting a 360 video may include processing a plurality of circular images captured by a camera having at least one fisheye lens; encoding a picture to which the circular images are mapped; generating signaling information about the 360 video data; encapsulating the encoded picture and the signaling information into a file; and transmitting the file.
Abstract: A method of decoding video comprising: receiving an encoded block of video data; determining a transform for the encoded block of video data, wherein the transform has a size S that is not a power of two; rounding S to a power of two, creating a transform with a modified size S′; applying an inverse transform with the modified size S′ to the encoded block of video data to create residual video data; and decoding the residual video data to create a decoded block of video data.
Type:
Grant
Filed:
January 4, 2018
Date of Patent:
November 24, 2020
Assignee:
QUALCOMM Incorporated
Inventors:
Xiang Li, Xin Zhao, Li Zhang, Jianle Chen, Hsiao-Chiang Chuang, Marta Karczewicz
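The size-rounding step in the abstract above can be sketched in a few lines. The rounding direction (nearest, up, or down) is a normative design choice in a real codec; this sketch rounds to the nearest power of two in the log domain.

```python
import math

def round_to_pow2(size):
    """Round a transform size to the nearest power of two in the log
    domain, so e.g. 12 -> 16 and 5 -> 4.  One illustrative convention;
    a real decoder would fix the direction normatively."""
    return 1 << round(math.log2(size))

assert round_to_pow2(12) == 16  # log2(12) ~ 3.58 rounds to 4
assert round_to_pow2(5) == 4    # log2(5)  ~ 2.32 rounds to 2
assert round_to_pow2(8) == 8    # already a power of two
```

Substituting a power-of-two size lets the decoder reuse its existing fast transform kernels instead of implementing arbitrary-size transforms.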
Abstract: Provided are a method and device to visualize an image during an operation of a ride, and a method and device to manage the image visualizing device. The image visualizing device determines a sight direction with respect to a user terminal worn by a user, determines a moving direction of a ride which the user is on, and provides the user with a field of view (FOV) image determined from a contents image based on the sight direction and the moving direction.
Abstract: An imaging unit includes a light source and a pixel array. The light source projects a line of light that is scanned in a first direction across a field of view of the light source. The line of light is oriented in a second direction that is substantially perpendicular to the first direction. The pixel array is arranged in at least one row of pixels that extends in a direction that is substantially parallel to the second direction. At least one pixel in a row is capable of generating two-dimensional color information of an object in the field of view based on a first light reflected from the object and is capable of generating three-dimensional (3D) depth information of the object based on the line of light reflecting from the object. The 3D-depth information includes time-of-flight information.
Abstract: A method and apparatus of Inter/Intra prediction for a chroma component performed by a video encoder or video decoder are disclosed. According to this method, a current chroma prediction block (e.g. a prediction unit, PU) is divided into multiple chroma prediction sub-blocks (e.g. sub-PUs). A corresponding luma prediction block is identified for each chroma prediction sub-block. A chroma prediction mode for each chroma prediction sub-block is determined from a luma prediction mode associated with the corresponding luma prediction block. A local chroma predictor for the current chroma prediction block is generated by applying a prediction process to the multiple chroma prediction sub-blocks using respective chroma prediction modes. In other words, the prediction process is applied at the chroma prediction sub-block level. After the local chroma predictor is derived, a coding block associated with the current chroma prediction block is encoded or decoded using information comprising the local chroma predictor.
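The per-sub-block mode derivation described above can be sketched as a collocated-block lookup. The 4:2:0 position mapping and single-sample mode granularity here are illustrative assumptions, not details from the patent.

```python
def derive_chroma_sub_modes(luma_modes, n_rows, n_cols, sub_size):
    """For each chroma prediction sub-block, look up the intra mode of the
    collocated luma block.  Assumes 4:2:0 sampling, so chroma position
    (r, c) maps to luma position (2*r, 2*c); `luma_modes` is indexed at
    single-sample granularity for simplicity."""
    return [
        [luma_modes[2 * r * sub_size][2 * c * sub_size]
         for c in range(n_cols)]
        for r in range(n_rows)
    ]

# Toy luma mode field where the mode equals the luma row index:
# each 2x2 chroma sub-block inherits the mode at its collocated luma sample.
luma_modes = [[row for _ in range(8)] for row in range(8)]
assert derive_chroma_sub_modes(luma_modes, 2, 2, 2) == [[0, 0], [4, 4]]
```

The prediction process then runs once per sub-block with its own mode, rather than once for the whole chroma prediction block.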
Abstract: The invention relates to an exterior mirror simulation with image data recording and a display of the recorded and improved data for the driver of a motor vehicle. The display on a display device shows the data in a way chosen by the driver or the vehicle manufacturer.
Type:
Grant
Filed:
August 30, 2017
Date of Patent:
October 13, 2020
Assignee:
SMR Patents S.à r.l.
Inventors:
Andreas Herrmann, Martin Schwalb, Frank Linsenmaier, Oliver Eder, Firas Mualla
Abstract: Multiview displays include a backlight and a screen used to form a plurality of multiview pixels. Each multiview pixel includes a plurality of sets of light valves. The backlight includes a light source optically coupled to a plate light guide configured with a plurality of multibeam diffraction gratings. Each multibeam diffraction grating corresponds to a set of light valves and is spatially offset with respect to a center of the set of light valves toward a center of the multiview pixel. The plurality of multibeam diffraction gratings is also configured to diffractively couple out light beams from the plate light guide with different diffraction angles and angular offsets such that at least a portion of the coupled-out light beams interleave and propagate in different view directions of the multiview display.
Abstract: A vehicular display control device includes: a display control unit for controlling a display device to superimpose an image on a surrounding view of the vehicle; and a sensor operation detecting unit for detecting whether an obstacle sensor detecting an obstacle around the vehicle is in operation. When the obstacle sensor is in operation, the display control unit: transforms a detection target area, which falls in a detection range of the obstacle sensor and expands from the vehicle to a periphery along a road surface, into a detection target image represented from a viewpoint of a driver of the vehicle; and displays a transformed image to be superimposed on the surrounding view of the vehicle.
Abstract: Multiview displays include a backlight and a screen used to form a plurality of multiview pixels. Each multiview pixel includes a plurality of sets of light valves. The backlight includes a light source optically coupled to a plate light guide configured with a plurality of multibeam elements. Each multibeam element corresponds to a set of light valves and is spatially offset with respect to a center of the set of light valves toward a center of the multiview pixel. The plurality of multibeam elements are also configured to couple out light from the plate light guide with different angles and angular offsets such that at least a portion of the coupled-out light beams interleave and propagate in different view directions of the multiview display.
Abstract: The invention relates to a method for assisting a driver of a motor vehicle (1) in maneuvering the motor vehicle (1) with a trailer (3), wherein image data is captured from an environmental region (12) of the motor vehicle (1) by means of at least one vehicle-side camera (5) and by means of at least one trailer-side camera (10) and an image (B) of the environmental region (12) is created for displaying on a vehicle-side display device (14) depending on the captured image data, wherein a perspective (P1, P2), from which the environmental region (12) is displayed in the image (B), is determined depending on a pivot angle (17) between the trailer (3) and the motor vehicle (1). The invention additionally relates to a driver assistance system (2) as well as to a vehicle/trailer combination with a motor vehicle (1), a trailer (3) and a driver assistance system (2).
Type:
Grant
Filed:
July 27, 2017
Date of Patent:
October 6, 2020
Assignee:
Connaught Electronics Ltd.
Inventors:
Enda Ward, Mike Togher, Fergal O'Malley
Abstract: A method of forming a light modulating signal for displaying a 3D image includes preparing a plurality of data sets for 2D image data with different viewpoints; imposing a phase value on each of the plurality of data sets, by which each of the 2D images is seen at a corresponding viewpoint; and superposing the 2D images.
Type:
Grant
Filed:
February 25, 2016
Date of Patent:
September 15, 2020
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Hoon Song, Wontaek Seo, Hongseok Lee, Juwon Seo
Abstract: The image processing device includes a scale correction unit. In a case where a difference between a width of a lane calculated by a lane-width calculation unit and a width of the lane acquired from map data is greater than a predetermined value that serves as a criterion for correcting errors, the scale correction unit corrects a correlation between a length of a subject in a captured image and the number of pixels constituting the captured image.
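The scale correction described above reduces to a simple ratio update when the measured and map lane widths disagree. The function and parameter names below are illustrative, not from the patent.

```python
def corrected_scale(scale_m_per_px, lane_width_px, map_lane_width_m, tolerance_m):
    """Re-derive the metres-per-pixel scale when the lane width measured
    in the captured image disagrees with the map by more than
    `tolerance_m`; otherwise keep the current scale."""
    measured_m = lane_width_px * scale_m_per_px
    if abs(measured_m - map_lane_width_m) > tolerance_m:
        return map_lane_width_m / lane_width_px
    return scale_m_per_px

# 200 px at 0.02 m/px reads 4.0 m, but the map says 3.5 m: re-scale.
assert corrected_scale(0.02, 200, 3.5, 0.3) == 3.5 / 200
# Within tolerance: the existing scale is kept.
assert corrected_scale(0.0175, 200, 3.5, 0.3) == 0.0175
```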
Abstract: A method of forming a video signal by a decoder device is described wherein the method comprises receiving a bitstream and decoder information, the decoder information signaling the decoder device the presence of one or more resolution components in the bitstream for forming a video signal, a resolution component representing a spatially subsampled version, preferably a polyphase subsampled version, of a first video signal having a first resolution, the one or more resolution components being part of a resolution component group, the group comprising a plurality of resolution components on the basis of which the first video signal is reconstructable; and, the decoder device parsing the bitstream and decoding the one or more resolution components into video frames on the basis of the decoder information.
Type:
Grant
Filed:
January 24, 2017
Date of Patent:
September 1, 2020
Assignees:
Koninklijke KPN N.V., Nederlandse Organisatie voor Toegepast-Natuurwetenschappelijk Onderzoek TNO
Inventors:
Emmanuel Thomas, Omar Aziz Niamut, Robert Koenen
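The polyphase decomposition underlying the resolution components above can be sketched with plain array slicing; the bitstream encapsulation and signaling machinery are omitted, and the function names are illustrative.

```python
import numpy as np

def polyphase_split(frame, f=2):
    """Split a frame into f*f polyphase-subsampled resolution components,
    each holding every f-th sample at a distinct phase offset."""
    return [frame[i::f, j::f] for i in range(f) for j in range(f)]

def polyphase_merge(components, f=2):
    """Reconstruct the full-resolution frame by interleaving all f*f
    components back into their phase positions."""
    h, w = components[0].shape
    out = np.empty((h * f, w * f), dtype=components[0].dtype)
    for k, comp in enumerate(components):
        out[k // f :: f, k % f :: f] = comp
    return out

frame = np.arange(16).reshape(4, 4)
# Splitting and merging all components reconstructs the frame exactly.
assert (polyphase_merge(polyphase_split(frame)) == frame).all()
```

Because every component is a complete low-resolution picture, a decoder can render any single component on its own and only needs the full group to reconstruct the first resolution.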
Abstract: An oil leakage detector of the present invention includes an image processing unit wherein the image processing unit calculates the values of saturation and intensity of each pixel in the color image of the object after an ultra-violet light is irradiated thereon, draws an intensity-saturation characteristic line of the saturation expressed in an X-axis and the intensity expressed in a Y-axis, sets an upper limit and a lower limit of intensity of each saturation as a threshold value based on an area without oil adhesion on the surface of the object, and determines, in the intensity-saturation characteristic line, an area corresponding to a pixel group where the intensity exceeds the threshold value of the upper limit and a pixel group where the intensity falls below the threshold value of the lower limit, to be an oil leakage adhered area.
Type:
Grant
Filed:
January 12, 2018
Date of Patent:
August 25, 2020
Assignee:
Hitachi, Ltd.
Inventors:
Li Lu, Satoshi Ichimura, Tomohiro Moriyama, Jun Nukaga, Toshiaki Rokunohe, Akira Yamagishi, Yasutomo Saito
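The intensity-band thresholding described in the oil leakage abstract above can be sketched as a per-saturation-bin check. This is a simplification of the idea, with illustrative names; the patent works with a fitted intensity-saturation characteristic line rather than discrete bins.

```python
import numpy as np

def oil_mask(saturation, intensity, clean, n_bins=8):
    """Flag pixels whose intensity falls outside the intensity band
    observed in a known oil-free reference region, per saturation bin.
    `saturation` and `intensity` are arrays in [0, 1]; `clean` is a
    boolean mask of the oil-free area used to set the thresholds."""
    bins = np.minimum((saturation * n_bins).astype(int), n_bins - 1)
    lo = np.full(n_bins, -np.inf)  # bins with no clean samples flag nothing
    hi = np.full(n_bins, np.inf)
    for b in range(n_bins):
        sel = clean & (bins == b)
        if sel.any():
            lo[b] = intensity[sel].min()
            hi[b] = intensity[sel].max()
    return (intensity < lo[bins]) | (intensity > hi[bins])

saturation = np.full((2, 2), 0.1)
intensity = np.array([[0.5, 0.6], [0.9, 0.2]])
clean = np.array([[True, True], [False, False]])
# The clean region spans intensities 0.5-0.6; 0.9 and 0.2 fall outside.
assert oil_mask(saturation, intensity, clean).tolist() == [[False, False], [True, True]]
```

Pixels above the upper bound correspond to fluorescing oil under UV illumination; pixels below the lower bound capture darkened, oil-soaked areas.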
Abstract: In an example embodiment, an item listing process is run in an item listing application. Upon reaching a specified point in the item listing process, a camera application on the user device is triggered (or the camera directly accessed by the item listing application) to enable a user to capture images using the camera, wherein the triggering includes providing a wireframe overlay informing the user as to an angle at which to capture images from the camera.
Abstract: An index extraction unit detects indices from a sensed image sensed by a sensing unit which senses an image of a physical space on which a plurality of indices is laid out. A convergence arithmetic unit calculates position and orientation information of the sensing unit based on the detected indices. A CG rendering unit generates a virtual space image based on the position and orientation information. A sensed image clipping unit extracts, as a display image, an image in a display target region from the sensed image. An image composition unit generates a composite image by compositing the extracted display image and the generated virtual space image. A display unit displays the composite image.