Abstract: A surveillance camera system includes a surveillance camera having a plurality of cameras for capturing images in a plurality of different image capturing ranges, and a terminal device capable of communicating with the surveillance camera. The terminal device transmits, to the surveillance camera, arrangement pattern information and image capturing range information of the plurality of cameras that are input by an operation of a user. The surveillance camera calculates camera parameters of the plurality of cameras based on the arrangement pattern information and the image capturing range information transmitted from the terminal device, respectively sets the plurality of cameras based on the camera parameters, and transmits to the terminal device images respectively captured by the plurality of cameras after the setting. The terminal device displays the images captured by the plurality of cameras and transmitted from the surveillance camera.
Abstract: A wearable camera includes a capturing unit configured to capture video data, a memory configured to store the video data captured by the capturing unit, a plurality of sound collectors that are arranged at different positions of a casing and that are configured to collect a sound and output signals, and a controller that is configured to determine a direction from which the sound is emitted based on a deviation of output timings of the signals and add the direction as attribute information to the video data.
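Determining a sound's direction from the deviation between signal output timings of spaced sound collectors is a time-difference-of-arrival calculation. A minimal sketch for two collectors, assuming a far-field source, a nominal speed of sound, and hypothetical parameter names (the abstract does not specify the geometry or method):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C (assumed)

def direction_from_delay(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the arrival angle, in degrees from broadside, of a sound
    from the arrival-time deviation between two collectors on the casing.

    delay_s: output-timing deviation between the two collectors (seconds).
    mic_spacing_m: distance between the collectors (meters).
    """
    # Path-length difference implied by the delay, clamped to the
    # physically possible range [-spacing, +spacing].
    path_diff = max(-mic_spacing_m, min(mic_spacing_m, delay_s * SPEED_OF_SOUND))
    return math.degrees(math.asin(path_diff / mic_spacing_m))
```

A zero deviation yields a source directly broadside; the maximum deviation (spacing divided by the speed of sound) yields a source on the collector axis.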
Abstract: An endoscope includes two or more imaging modules each having a lens barrel containing an optical system, an image sensor, and a sensor holding member that fixes the lens barrel and the image sensor relative to each other; a sub-frame that fixes the two or more imaging modules relative to each other; and an outer shell portion that accommodates and fixes the sub-frame and the two or more imaging modules.
Abstract: A plug for an endoscope includes a flexible tube-shaped sheath through which a plurality of wires are inserted. The plurality of wires are connected to an insertion tip portion having an image capturing portion. The plug for the endoscope includes a housing having a substrate accommodation portion and a sheath introduction portion. The substrate accommodation portion accommodates a substrate; a base end aperture portion of the sheath, passed through an opening of the housing, is arranged in the sheath introduction portion; and a part of the plurality of wires led out from the base end aperture portion is connected to the substrate.
Abstract: A capturing device has a lens block that includes a lens for focusing light from a subject during the daytime, the subject including a vehicle. The capturing device further includes an image sensor that captures an image based on light from the subject focused by the lens, and a processor that generates a face image of an occupant riding in the vehicle based on a first number of captured images of the subject which are captured by the image sensor at different times. The processor generates the face image of the occupant further based on a second number of captured images, among the first number of captured images of the subject, in which a luminance value of a region of interest is equal to or smaller than a threshold value.
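The frame-selection step above, keeping only captured images whose region-of-interest luminance is at or below the threshold, can be sketched as follows. The function and parameter names are assumptions for illustration; the abstract does not define them:

```python
def select_low_luminance_frames(frames, roi_luma, threshold):
    """From the first number of captured frames, keep the second number of
    frames whose region-of-interest mean luminance is at or below the
    threshold, i.e. the frames in which the occupant's face is least
    washed out (for example, by windshield glare).

    frames: the captured images, in capture order.
    roi_luma: mean luminance of the region of interest for each frame.
    threshold: luminance cutoff for selecting frames.
    """
    return [f for f, luma in zip(frames, roi_luma) if luma <= threshold]
```

The face image would then be generated from the selected subset, e.g. by combining or choosing among those frames.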
Abstract: A vehicle monitoring system includes at least one camera, and a server that is communicably connected to a client terminal. The camera captures a vehicle while switching between a first capturing condition including an image parameter for capturing a number of a vehicle entering an angle of view of the camera and a second capturing condition including an image parameter for capturing a face of an occupant of the vehicle, and transmits a first video captured under the first capturing condition and a second video captured under the second capturing condition to the server. The server arranges reproduction screens for the first captured video and the second captured video that are reproducible in the client terminal, and displays the reproduction screens on the client terminal based on the first captured video and the second captured video.
Abstract: A monitoring device includes a receiver configured to receive a fisheye image of a bird's eye viewpoint captured by each of a plurality of monitoring cameras; an image transformer configured to transform the fisheye image into a rectangular image of the bird's eye viewpoint or a different viewpoint image, which is an image of a viewpoint different from the bird's eye viewpoint; and a controller configured to provide a user interface that arranges a plurality of image windows respectively corresponding to the plurality of monitoring cameras, displays the rectangular image in the image windows, and switches the displayed image to the different viewpoint image for each image window.
Abstract: There is provided a monitoring camera system including at least one monitoring camera and a recorder connected to the monitoring camera. The monitoring camera captures an image of an area of a monitoring target, detects a motion in a captured video of the area, associates information relating to the motion with the captured video, and transmits the associated result to the recorder. The recorder associates the captured video of the area captured by the monitoring camera and the information relating to the motion with the monitoring camera, and records the associated result. Based on the information relating to the motion, the recorder reproduces the recorded captured video of the area on a monitor such that a reproduction speed of the captured video in a section in which the motion is not detected is faster than a reproduction speed of the captured video in a section in which the motion is detected.
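The variable-speed reproduction above amounts to mapping each recorded section to a playback speed from its motion metadata. A minimal sketch, with the speed values and the section representation chosen purely for illustration:

```python
def reproduction_speeds(sections, motion_speed=1.0, no_motion_speed=4.0):
    """Assign a reproduction speed to each recorded section using the
    motion information associated with the video: sections in which no
    motion was detected play back faster than sections with motion.

    sections: list of dicts with a boolean "motion" flag (assumed shape).
    """
    return [motion_speed if s["motion"] else no_motion_speed for s in sections]
```

A player would then drive its playback clock section by section from the returned speeds.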
Abstract: This camera module comprises: an imaging element which is formed in a rectangular shape and which is provided with a plurality of pads on the rear surface, which is the reverse side from the image capture surface; a wire positioning/fixing body which fixes together a plurality of parallel wires extending in a direction approximately perpendicular to the rear surface, and causes each of the conductors of the wires to protrude from the opposing end surface, which is parallel to the rear surface, in correspondence with each of the pads; and electro-conductive materials which conduct electricity from the tips of the conductors to each of the plurality of pads.
Abstract: A capturing camera has a lens block that includes a lens for focusing light from a subject including a face of a person riding in a vehicle and a license plate of the vehicle, an image sensor that performs a capturing process based on the light from the subject focused by the lens, and a processor that generates a face image of the person riding in the vehicle and a license plate image of the vehicle, corresponding to a same vehicle, based on a captured image of the subject generated by the capturing process. The processor causes the image sensor to perform the capturing process using a plurality of capturing conditions at the time of capturing based on the light from the subject.
Abstract: An identification device includes a processor configured to acquire a result of determining, based on a part of an image of a moving object that does not include a face of an occupant on the moving object, whether the occupant is an identification target, and to identify the face of the occupant based on an image including the face of the occupant if the result indicates that the occupant is the identification target.
Abstract: This endoscope is provided with: a hard part that is provided at the distal end of a scope, is formed in a substantially cylindrical shape, and has a substantially circular distal end surface; and a plurality of cameras that are disposed on the left and right sides of the hard part to be located on both sides of a first virtual line perpendicular to the axis of the hard part on the distal end surface. The plurality of cameras include a first camera. The first camera is disposed so that the imaging axis is offset in the direction along the first virtual line from a second virtual line perpendicular to the axis and the first virtual line.
Abstract: A 3 MOS camera includes a first prism that has a first reflection film which reflects IR light and that causes a first image sensor to receive the IR light, a second prism that has a second reflection film which reflects A% (A: a predetermined real number) of visible light and that causes a second image sensor to receive the A% visible light, a third prism that causes a third image sensor to receive the (100−A)% visible light, and a video signal processor that combines a first video signal, a second video signal, and a third video signal of an observation part. The video signal processor performs pixel shifting on one of the second video signal and the third video signal, which have substantially the same brightness, to generate a fourth video signal, and outputs a video signal obtained by combining the fourth video signal and the first video signal.
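Pixel shifting combines two equal-brightness signals captured with a sub-pixel spatial offset to raise effective resolution. A deliberately simplified 1-D sketch of that combination step (real systems apply the offset optically and combine two-dimensional frames; this only illustrates the interleaving idea):

```python
def combine_half_pixel_shift(line_a, line_b):
    """Interleave two scan lines whose sample grids are offset by half a
    pixel horizontally, producing one line of doubled horizontal
    resolution. line_a and line_b are assumed to have equal length and
    substantially the same brightness, as in the abstract."""
    combined = []
    for a, b in zip(line_a, line_b):
        combined.extend([a, b])
    return combined
```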
Abstract: There is provided an image processing apparatus connected to a camera head capable of imaging a left eye image and a right eye image having parallax on one screen based on light at a target site incident on an optical instrument, the apparatus including: an image processor that performs signal processing of the left eye image and the right eye image imaged by the camera head; and an output controller that outputs the left eye image and the right eye image on which the signal processing is performed to a monitor via each of a first channel and a second channel, in which the output controller outputs one of the left eye image and the right eye image on which the signal processing is performed to the monitor via each of the first channel and the second channel in accordance with switching from a 3D mode to a 2D mode.
Abstract: A three-plate camera includes an IR prism that causes an IR image sensor to receive incident IR light of light from an observation part, a visible prism that causes a visible image sensor to receive incident visible light of light from the observation part, a specific prism that causes a specific image sensor to receive incident light of a specific wavelength band of light from the observation part, and a video signal processing unit that generates an IR video signal, a visible video signal, and a specific video signal of the observation part based on respective imaging outputs of the IR image sensor, the visible image sensor, and the specific image sensor, combines the IR video signal, the visible video signal, and the specific video signal, and outputs a combined video signal to a monitor.
Abstract: A wearable camera images a subject in front of a user, executes a first communication setting process for communicating with an in-vehicle communication device mounted in a vehicle on which the user rides, stores, in a memory, first communication setting information used for communication with the in-vehicle communication device based on the first communication setting process, and transmits a captured image of the subject to the in-vehicle communication device using the first communication setting information. The wearable camera deletes the first communication setting information from the memory after a lapse of a certain time from a last communication time.
Abstract: A client terminal displays, on a display device, a visual feature of each of a plurality of vehicles passing through an intersection at a location where an incident occurred, together with map data indicating a passing direction of each of the plurality of vehicles passing through the intersection, and sends, in response to a designation of any one of the plurality of vehicles, an instruction to set the designated vehicle as a tracking target vehicle to a server. When the instruction is received, the server specifies a camera of the intersection that the tracking target vehicle is highly likely to enter next, based on at least a current time and a passing direction of the tracking target vehicle at the intersection, and sends camera information of the specified camera to the client terminal. When the camera information is received, the client terminal displays, on the display device, a position of the camera corresponding to the camera information, identifiably superimposed on the map data.
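The server's next-camera selection can be sketched as a lookup keyed by the intersection and the vehicle's passing direction. The topology table and all names below are hypothetical; the abstract also factors in the current time, which a fuller version would use to weight candidates by expected travel times:

```python
from typing import Optional

# Hypothetical road topology: which camera covers the intersection a
# vehicle reaches when leaving a given intersection in a given direction.
NEXT_CAMERA = {
    ("intersection_A", "north"): "camera_B",
    ("intersection_A", "east"): "camera_C",
}

def next_camera(intersection: str, passing_direction: str) -> Optional[str]:
    """Return the camera the tracking target vehicle is most likely to
    enter next, or None if that direction leaves the covered area."""
    return NEXT_CAMERA.get((intersection, passing_direction))
```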
Abstract: A server analyzes feature information including a whole body and a face of a person appearing in a video image sent from a monitoring camera, and stores a whole body image obtained by cutting out the whole body of the person and a face image obtained by cutting out the face of the person. In response to designation of a person of interest, the server executes first collation processing targeted at the whole body image of the person of interest and second collation processing targeted at the face image of the person of interest. In response to identification of a person matching at least one of the whole body image and the face image of the person of interest by at least one of the first collation processing and the second collation processing, the server outputs a notification that the person of interest has been found.
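The notification condition above is a logical OR over the two collation processes. A minimal sketch, assuming similarity scores and a common threshold (none of which the abstract specifies):

```python
def person_of_interest_found(body_score: float, face_score: float,
                             threshold: float = 0.8) -> bool:
    """Notify when at least one collation process matches: whole-body
    similarity or face similarity at or above the threshold. Score scale
    and threshold value are illustrative assumptions."""
    return body_score >= threshold or face_score >= threshold
```

Running both collations in parallel and OR-ing the results lets a match succeed even when only one cue (face or whole body) is visible in the video.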