Abstract: An endoscopy system includes: an image signal generating unit connected to one side of a cable, which includes a signal transmission unit that converts an image signal of the inside of a body according to a protocol for long-distance transmission and transmits the converted image signal through the cable; a signal processing unit connected to the other side of the cable, which receives the converted image signal transmitted through the cable and converts it back into the image signal; a CPU that outputs a user interface for the image signal and an operation control signal according to the user's handling of the user interface; and an image processing unit that overlays the user interface on the image signal output from the signal processing unit and processes the image signal according to the operation control signal.
Abstract: A vehicular vision system includes a camera configured to be disposed behind a windshield of a vehicle so as to have a field of view exterior of the vehicle through the windshield. The camera includes a primary lens, a secondary lens, and an imager, which includes a primary sensing area and a secondary sensing area. The primary sensing area captures image data representative of images focused at the imager by the primary lens and the secondary sensing area captures image data representative of images focused at the imager by the secondary lens. The secondary lens is between the primary lens and the imager. A control includes a processor that processes image data captured by the imager at the primary sensing area and at the secondary sensing area. The control, responsive to processing of image data captured at the secondary sensing area, determines presence of water droplets at the windshield.
Abstract: A method of calibrating a camera array comprising a plurality of cameras configured to capture a plurality of images to generate a panorama, wherein the relative positions among the plurality of cameras are constant, the method comprising: moving the camera array from a first position to a second position; measuring a homogeneous transformation matrix of a reference point on the camera array between the first position and the second position; capturing images at the first position and the second position by a first camera and a second camera on the camera array; and determining a homogeneous transformation matrix between the first camera and the second camera based on the images captured by the first camera and the second camera at the first position and the second position.
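The determination step in this abstract is an instance of the classical hand-eye relation A·X = X·B, where A and B are the motions of the two cameras between the positions and X is the fixed inter-camera transform. A minimal numpy sketch (all matrices are synthetic and illustrative; in practice A and B would be estimated from the captured images and the measured reference-point motion):

```python
import numpy as np

def transform(rz_deg, t):
    """Homogeneous 4x4 transform: rotation about z by rz_deg, then translation t."""
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# X: fixed (in practice unknown) transform from camera 1 to camera 2.
X = transform(90.0, [0.2, 0.0, 0.0])

# A: motion of camera 1 between the two capture positions
# (estimated from its image pairs in practice; synthesized here).
A = transform(30.0, [1.0, 0.5, 0.0])

# B: corresponding motion of camera 2, consistent with the rigid rig:
# B = X^-1 A X, i.e. the hand-eye relation A X = X B rearranged.
B = np.linalg.inv(X) @ A @ X

# The sought inter-camera transform X makes the residual vanish.
residual = np.linalg.norm(A @ X - X @ B)
```

With noisy real measurements, X would instead be solved from several (A, B) motion pairs by a least-squares hand-eye method rather than verified as here.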
Abstract: A method for filtering radiation on a CCD-based camera inspection video, the method including: capturing video signals via the camera; converting the video signals to a plurality of digital video frames; identifying radiation bright spots, defined as xnoids, in a pixel of at least one of the frames; and replacing the xnoids and surrounding pixels with corresponding pixels of another of the frames to create a filtered frame. A system for the inspection of a nuclear power plant comprising: a camera; and a computer, the computer configured to execute identifying xnoids in a pixel of at least one digitized video frame and replacing the xnoids and surrounding pixels with corresponding pixels of another of the frames to create a filtered frame.
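A minimal sketch of the bright-spot replacement step, assuming a simple fixed intensity threshold to identify xnoids and a 3×3 surrounding neighbourhood (the abstract specifies neither):

```python
import numpy as np

def filter_xnoids(frame, reference, threshold=200, pad=1):
    """Replace radiation bright spots ("xnoids") and their surrounding
    pixels with the corresponding pixels of another (reference) frame."""
    out = frame.copy()
    ys, xs = np.where(frame > threshold)      # candidate xnoid pixels
    h, w = frame.shape
    for y, x in zip(ys, xs):
        y0, y1 = max(y - pad, 0), min(y + pad + 1, h)
        x0, x1 = max(x - pad, 0), min(x + pad + 1, w)
        out[y0:y1, x0:x1] = reference[y0:y1, x0:x1]
    return out

frame = np.full((5, 5), 50, dtype=np.uint8)
frame[2, 2] = 255                             # simulated radiation hit
reference = np.full((5, 5), 50, dtype=np.uint8)
filtered = filter_xnoids(frame, reference)    # hit and neighbours replaced
```

A production filter would also guard against replacing genuine scene highlights, e.g. by requiring the bright spot to be absent in the reference frame.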
Abstract: A method of monitoring ultraviolet radiation reflectance is provided, including: activating an ultraviolet radiation reflectance digital sensor and display monitor; capturing ultraviolet radiation reflectance passing through a lens onto the digital sensor; analyzing the ultraviolet radiation reflectance against a preloaded and predetermined color palette; generating a video image; and outputting the video image to the display monitor. A device is also provided for an ultraviolet radiation reflectance monitoring application that receives data from an ultraviolet-radiation-sensitive digital imaging plate installed on the device, wherein the application processes data received from the digital imaging plate and generates an output image of ultraviolet radiation reflectance to a video monitor communicatively connected to the device.
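The palette-mapping step might be sketched as below (the four-entry palette, the band edges, and the `render_uv` name are all illustrative assumptions, not from the patent):

```python
import numpy as np

# Hypothetical 4-entry palette mapping UV reflectance bands to RGB.
PALETTE = np.array([[0, 0, 64],      # very low reflectance -> dark blue
                    [0, 128, 128],   # low -> teal
                    [255, 200, 0],   # medium -> amber
                    [255, 0, 0]],    # high -> red
                   dtype=np.uint8)
BAND_EDGES = [64, 128, 192]          # reflectance thresholds between bands

def render_uv(reflectance):
    """Map each sensor value to a palette colour, producing a video image."""
    bands = np.digitize(reflectance, BAND_EDGES)
    return PALETTE[bands]

frame = render_uv(np.array([[10, 200], [70, 130]], dtype=np.uint8))
```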
Abstract: A method and an encoder for encoding a video stream in a video coding format supporting auxiliary frames, where such auxiliary frames, in conjunction with the frames that reference them, can be used for rate control: the image data of an auxiliary frame comprises a downscaled version of the image data captured by a video capturing device, and the motion vectors of the frame referring to the auxiliary frame are calculated/determined so as to scale the downscaled image data back up to the intended resolution.
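A toy sketch of the downscale/upscale round trip (block averaging and pixel repetition stand in for the encoder's scaler and for the effect of the scaled motion vectors; the factor of 2 is an assumption):

```python
import numpy as np

def make_auxiliary(image, factor=2):
    """Downscale by simple block averaging (stand-in for the encoder's
    scaler that produces the auxiliary frame's image data)."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def reconstruct(aux, factor=2):
    """Scale the auxiliary frame back up, as the motion vectors of the
    referring frame would, restoring the intended resolution."""
    return np.repeat(np.repeat(aux, factor, axis=0), factor, axis=1)

image = np.arange(64, dtype=float).reshape(8, 8)
aux = make_auxiliary(image)   # 4x4 auxiliary frame: fewer bits to code
full = reconstruct(aux)       # referring frame restores 8x8 resolution
```

The rate saving comes from coding the 4×4 auxiliary data instead of the full 8×8 frame; the referring frame carries only scaling motion vectors.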
Abstract: An imaging apparatus includes a plurality of pixels each including a photoelectric conversion unit and a charge holding unit configured to hold an electric charge generated in the photoelectric conversion unit, a waveguide disposed above the photoelectric conversion unit, and a light blocking unit configured to cover the charge holding unit, wherein a width of a bottom surface of the waveguide is smaller than 1.1 μm.
Abstract: Operation of a data collection node in a network of data collection nodes includes acquiring data using at least one sensing device of the data collection node. The acquired data is stored in a memory of the data collection node and metadata is generated at the data collection node based on the acquired data. The metadata is analyzed at the data collection node and at least one of the metadata, the acquired data, and an alert is sent from the data collection node to another device on the network based on the analysis of the metadata.
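The acquire → store → generate metadata → analyze → send flow above can be sketched roughly as follows (the metadata fields, the 100.0 limit, and returning the alert instead of transmitting it over a network are all simplifying assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class CollectionNode:
    stored: list = field(default_factory=list)

    def acquire(self, sample: float) -> dict:
        """Acquire a reading, store it locally, and generate metadata."""
        self.stored.append(sample)                 # store acquired data
        return {"count": len(self.stored),         # metadata derived
                "max": max(self.stored)}           # from acquired data

    def analyze(self, metadata: dict, limit: float = 100.0):
        """Analyze metadata at the node; escalate only when warranted."""
        if metadata["max"] > limit:
            return ("alert", metadata)             # send alert + metadata
        return ("metadata", metadata)              # send metadata only

node = CollectionNode()
node.acquire(42.0)                                 # below limit
kind, _ = node.analyze(node.acquire(120.0))        # exceeds limit -> alert
```

Doing the analysis at the node, as the abstract describes, means only metadata or alerts cross the network unless the raw data is actually needed.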
Abstract: A solution for monitoring an area including one or more restricted zones is provided. The solution can include one or more monitoring assemblies deployed to acquire image data of the area and independently monitor operations within the area at each monitoring assembly. A monitoring assembly can include one or more local alert components to generate an audible or visual alarm to local personnel. Data regarding static features present in the area can be used to create a registration map of the field of view, which can subsequently enable accurate determination of the three-dimensional location of a target using two-dimensional image data and/or identify an extent of a restricted zone even when one or more of the static features are obscured. Monitoring a target over a series of images can be used to determine whether an alert condition is present.
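The "monitoring a target over a series of images" idea can be sketched with a rectangular zone and a persistence rule (the rectangle, the coordinates, and the three-frame threshold are illustrative assumptions; in the patent the zone extent comes from the registration map of static features):

```python
def in_zone(point, rect):
    """Is a 2-D target location inside an axis-aligned restricted zone?"""
    (x, y), (x0, y0, x1, y1) = point, rect
    return x0 <= x <= x1 and y0 <= y <= y1

def alert_condition(track, zone, min_frames=3):
    """Raise an alert only if the target stays in the zone for several
    consecutive frames, suppressing single-frame false positives."""
    run = 0
    for p in track:
        run = run + 1 if in_zone(p, zone) else 0
        if run >= min_frames:
            return True
    return False

zone = (0, 0, 10, 10)                               # restricted zone
track = [(20, 20), (5, 5), (6, 5), (6, 6), (7, 6)]  # per-frame positions
alarm = alert_condition(track, zone)                # sustained intrusion
```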
Abstract: The present disclosure describes methods and systems for integrating third party automation systems. One computer-implemented method includes calculating, by at least one of a first device and a second device, a risk level, and controlling, by the at least one of the first device and the second device, a switch based on the risk level. If the risk level is less than a predetermined value, the switch is controlled to power on an image display and capturing device. Data is displayed by the first device and received by the second device through visual recognition on the image display and capturing device. If the risk level is not less than the predetermined value, the switch is controlled to power off the image display and capturing device. Data transfer from the first device to the second device through visual recognition on the image display and capturing device is stopped.
July 10, 2018
Date of Patent: March 24, 2020
Saudi Arabian Oil Company
Hussain Al-Salem, Fouad M. Alkhabbaz, Srinidhi Mallur
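The switch logic in the third-party-automation abstract above reduces to a threshold comparison; a minimal sketch (the threshold value is an assumed placeholder for the patent's "predetermined value"):

```python
def control_switch(risk_level: float, threshold: float = 0.5) -> bool:
    """Return True (power the image display and capturing device on)
    when the calculated risk level is less than the predetermined value,
    enabling data transfer via visual recognition; otherwise return
    False (power off, stopping the transfer)."""
    return risk_level < threshold

display_on = control_switch(0.2)    # low risk: transfer allowed
```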
Abstract: A system includes an optical waveguide configured to receive multispectral radiation from a scene, a first optical component and a second optical component. The first optical component is configured to cause a first portion of the multispectral radiation with wavelengths in a first range to exit the optical waveguide at a first position, and a second portion of the multispectral radiation with wavelengths in a second range to travel through the optical waveguide from the first position to a second position via total internal reflection. The second optical component is configured to cause the second portion of the multispectral radiation to exit the optical waveguide at the second position.
Abstract: A method for examining a sample includes illuminating the sample in an illumination plane along an illumination strip by an illuminating light beam which propagates along the illumination strip. The illumination strip is projected into a detection plane by detection light originating from the illumination strip being focused in the detection plane. The detection light is detected by a detector. The detector is formed as a slit detector, and the direction of a slit width of the slit detector is oriented at an angle different from zero degrees with respect to the direction of a longitudinal extent of an image of the illumination strip projected into the detection plane.
Abstract: Techniques for improving the quality of images captured by a remote sensing overhead platform such as a satellite. Sensor shifting is employed in an open-loop fashion to compensate for relative motion of the remote sensing overhead platform to the Earth. Control signals are generated for the sensor shift mechanism by an orbital motion compensation calculation that uses the predicted ephemeris (including orbit dynamics) and image geometry (overhead platform to target). Optionally, the calculation may use attitude and rate errors that are determined from on-board sensors.
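The open-loop shift command can be illustrated with a back-of-envelope calculation (the numbers and the constant-ground-velocity simplification are illustrative; the patent derives the shift from predicted ephemeris and the platform-to-target geometry):

```python
def sensor_shift_pixels(ground_speed_m_s, integration_s, gsd_m):
    """Sensor shift (in pixels) needed to hold the scene still on the
    focal plane during one integration time: the ground image moves at
    ground_speed / GSD pixels per second relative to the sensor."""
    return ground_speed_m_s * integration_s / gsd_m

# e.g. ~7 km/s ground-track speed, 1 ms integration, 0.5 m GSD
shift = sensor_shift_pixels(7000.0, 0.001, 0.5)
```

Because the motion is predictable from orbit dynamics, the shift can be commanded open-loop, without measuring the image itself.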
Abstract: The present disclosure relates to a virtual reality display device and an adaptive parallax adjustment method for the virtual reality display device, which belong to the display technical field. The adaptive parallax adjustment method includes: obtaining pupil distance information of a user; and adjusting a position of a to-be-displayed image on a display screen according to the pupil distance information.
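The position adjustment might look like the following sketch (the 63 mm default IPD, the pixel pitch, and symmetric shifting of the two images are assumptions not stated in the abstract):

```python
def eye_image_offsets(ipd_mm, default_ipd_mm=63.0, px_per_mm=10.0):
    """Horizontal shifts (in pixels) applied to the left/right images so
    their on-screen separation matches the user's measured
    interpupillary distance (IPD)."""
    half_delta_px = (ipd_mm - default_ipd_mm) / 2.0 * px_per_mm
    return -half_delta_px, +half_delta_px    # (left image, right image)

left, right = eye_image_offsets(65.0)    # wider IPD: push images apart
```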
Abstract: An image processing apparatus includes an image processor and an encoder. The image processor enhances an edge of an input image, removes noise from the input image, synthesizes the edge-enhanced image and the noise-removed image, and removes a high frequency from the synthesized image. The encoder pre-encodes a downsized synthesized image, obtains a pre-bit rate of the pre-encoded image, sets a quantization parameter value based on a reference bit rate and the pre-bit rate, and compresses the high-frequency removed image based on the quantization parameter value.
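The QP-setting step can be sketched as a log-ratio rule (the "about 6 QP per doubling of bit rate" heuristic comes from typical H.264/HEVC behaviour and is an assumption here, as are all the constants):

```python
import math

def choose_qp(pre_bit_rate, reference_bit_rate,
              base_qp=30, step=6, qp_min=0, qp_max=51):
    """Raise QP when the pre-encode came out above the reference bit
    rate, lower it when below; roughly 'step' QP per rate doubling."""
    qp = base_qp + round(step * math.log2(pre_bit_rate / reference_bit_rate))
    return max(qp_min, min(qp_max, qp))

# Pre-encode of the downsized image produced twice the target rate,
# so the QP for the full compression pass is raised by one doubling.
qp = choose_qp(pre_bit_rate=8_000_000, reference_bit_rate=4_000_000)
```

Pre-encoding a downsized copy, as the abstract describes, gives a cheap rate estimate before committing to a QP for the full-size image.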
Abstract: Disclosed herein are systems and methods for constructing an image of a sample using a plurality of images acquired under multiple illumination conditions. In some cases, a microscope may include an image capture device, an illumination assembly, and a processor configured to acquire a plurality of images of a sample and a fiducial marker under a plurality of different illumination conditions and to reconstruct a high resolution image in response to the plurality of images. The disclosure also provides a method for generating a high resolution image of a sample comprising acquiring a plurality of images of a sample and a fiducial marker under a plurality of different illumination conditions and reconstructing the high resolution image in response to the plurality of images.
October 26, 2017
Date of Patent: February 11, 2020
Scopio Labs Ltd.
Ben Leshem, Itai Hayut, Erez Na'Aman, Eran Small
Abstract: A display control system to generate a virtual environment in a vehicle includes an electronic control unit (ECU) configured to receive an input that corresponds to a selection of a video to be displayed on one or more display mediums provided in the vehicle. A relevance factor is determined between a current travel route of the vehicle and a travel route associated with the selected video. The relevance factor is determined when the vehicle is in motion along the current travel route. One or more video parameters of the selected video are adjusted based on the determined relevance factor. Display of at least the selected video on the one or more display mediums is controlled in the vehicle in motion in accordance with the adjusted one or more video parameters of the selected video.
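One way the relevance factor and the parameter adjustment could be sketched (segment-overlap relevance and the playback-rate rule are illustrative assumptions; the abstract does not specify either formula):

```python
def relevance_factor(current_route, video_route):
    """Fraction of the current route's segments that also appear in the
    route associated with the selected video."""
    current, video = set(current_route), set(video_route)
    return len(current & video) / len(current) if current else 0.0

def adjust_playback_rate(relevance, base_rate=1.0):
    """One possible video-parameter adjustment: slow playback on highly
    relevant stretches so the displayed scenery stays in step with
    the actual drive."""
    return base_rate * (1.0 - 0.5 * relevance)

rel = relevance_factor(["A", "B", "C", "D"], ["B", "C", "X"])
rate = adjust_playback_rate(rel)
```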
Abstract: A signal processing apparatus 6 includes a mode selection unit 6124 that selects an operational mode from among a plurality of operational modes that include a regular operational mode, in which a video signal for display is output, and that are hierarchically organized in a tree form, the mode selection unit 6124 causing the signal processing apparatus 6 to operate in the selected operational mode. The operational modes include a plurality of lower-level operational modes having a parent-child relation with the regular operational mode. After the signal processing apparatus 6 has started up, the mode selection unit 6124 selects an operational mode from among the same-level operational modes at each level, proceeding from the upper levels toward the lower levels; once it has selected one of the lower-level operational modes, it can select an operational mode at an upper level relative to that lower-level mode only on condition that the signal processing apparatus 6 has started up again.
Abstract: A method of decoding JVET video, comprising defining a coding unit (CU) template within a decoded area of a video frame, the CU template being positioned above and/or to the left of a current decoding position for which data was intra predicted, defining a search window within the decoded area, the search window being adjacent to the CU template, generating a plurality of candidate prediction templates based on pixel values in the search window, each of the plurality of candidate prediction templates being generated using different intra prediction modes, calculating a matching cost between the CU template and each of the plurality of candidate prediction templates, selecting an intra prediction mode that generated the candidate prediction template that had the lowest matching cost relative to the CU template, and generating a prediction CU for the current decoding position based on the intra prediction mode.
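The cost computation and mode selection reduce to a sum-of-absolute-differences comparison; a toy sketch with two hypothetical candidate templates (in real JVET/VVC template matching the candidates are generated from search-window pixels using the actual intra prediction modes):

```python
import numpy as np

def matching_cost(cu_template, candidate):
    """Sum of absolute differences between the CU template and one
    candidate prediction template."""
    return int(np.abs(cu_template.astype(int) - candidate.astype(int)).sum())

def select_mode(cu_template, candidates):
    """Pick the intra mode whose candidate template matches the CU
    template at lowest cost; that mode then predicts the current CU."""
    costs = {mode: matching_cost(cu_template, c)
             for mode, c in candidates.items()}
    return min(costs, key=costs.get)

cu = np.full((2, 2), 10, dtype=np.uint8)                 # decoded CU template
candidates = {"DC": np.full((2, 2), 12, dtype=np.uint8),
              "planar": np.full((2, 2), 10, dtype=np.uint8)}
mode = select_mode(cu, candidates)
```

Because the decoder derives the mode from already-decoded pixels, the encoder need not signal it explicitly, which is the bit-saving point of the technique.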
Abstract: An image pickup apparatus capable of preventing degradation of the quality of an obtained image. An insertion-extraction unit inserts and extracts a filter whose transmittance for infrared light is higher than its transmittance for visible light. A computation unit computes an object distance between the image pickup apparatus and an object. An evaluation unit evaluates image quality based on at least a sharpness in an image including a picked-up object image. A control unit controls the insertion-extraction unit so as to insert or extract the filter based on at least one of the image quality and the object distance.
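The control decision might be sketched as below (the specific thresholds and the conjunction of the two criteria are assumptions; the abstract only requires "at least one of" the two inputs):

```python
def should_insert_ir_filter(sharpness, object_distance_m,
                            sharpness_limit=0.4, near_limit_m=1.0):
    """Insert the IR-pass filter only when doing so is unlikely to hurt
    image quality: here, when the visible-light image is already poor
    (low sharpness) and the object is not too close."""
    return sharpness < sharpness_limit and object_distance_m >= near_limit_m

insert = should_insert_ir_filter(sharpness=0.2, object_distance_m=3.0)
```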