Abstract: An interchangeable lens (second interchangeable lens) is disclosed whose protruding amount from the mount reference surface is allowed to be increased. The second interchangeable lens has a protruding portion that protrudes from the mount reference surface toward the image plane; its protruding amount is larger than that of a first interchangeable lens, while its flange back is the same as that of the first interchangeable lens. A first camera prevents mounting of the second interchangeable lens because the protruding portion of the second interchangeable lens contacts a first wall portion. A second camera includes a second wall portion provided at a position retreated from the protruding portion. The second camera also includes a rotatable mirror member whose rotation center is positioned on the side opposite to a finder optical system with respect to a plane containing the in-plane direction of the mirror member.
Type: Grant
Filed: July 30, 2004
Date of Patent: July 10, 2012
Assignee: Canon Kabushiki Kaisha
Inventors: Eiri Tsukatani, Masahisa Tamura, Jun Sugita, Atsushi Koyama
Abstract: This invention reduces the load on a recording apparatus, allows the overall system to be designed efficiently, and accurately calculates the data amount recordable on a recording unit even when the settings of each communication apparatus change. A portable remote storage transmits, to digital cameras, information on the data amount storable in the portable remote storage. This information is based on the remaining area of a shared storage area commonly available to the digital cameras, excluding the exclusive storage area assigned to each digital camera.
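The reported capacity described above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function name, the byte units, and the assumption that a camera's own exclusive area is added to the remaining shared area are all illustrative.

```python
def storable_amount(total_shared, used_shared, exclusive_areas, camera_id):
    """Data amount (bytes) reported as storable to one camera.

    Assumed model: remaining shared area, plus the exclusive area
    assigned to this camera (other cameras' exclusive areas excluded).
    """
    remaining_shared = total_shared - used_shared
    return remaining_shared + exclusive_areas.get(camera_id, 0)
```

Recomputing this per camera whenever an exclusive-area assignment changes would keep the reported amount accurate after setting changes, as the abstract requires.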
Abstract: A lens barrel unit includes a holder configured to hold a lens; a cam cylinder including a cam groove that engages the holder and defines its movement in the optical axis direction, and a projection that carries a part of the cam groove and projects in the optical axis direction; and a straightforward-movement cylinder configured to guide the straightforward movement of the holder, the straightforward-movement cylinder including a flange forming a notch. In the transition from a retracted state, in which the projection is located in the notch, to an image pickup state, in which the projection is located outside the notch, the straightforward-movement cylinder moves relative to the cam cylinder in the optical axis direction.
Abstract: A video delivery apparatus delivers a video stream with properties according to a request from a client. The apparatus includes reception means for receiving a delivery request from one client; first estimation means for estimating the current processing load, upon reception of that request, by summing the processing loads for the other clients to which video streams are being delivered; second estimation means for estimating the processing load of delivering the requested video stream to the one client; and delivery control means for controlling delivery of the video stream on the basis of at least one of the load estimated by the first estimation means and the load estimated by the second estimation means.
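The two-stage estimation can be sketched as a simple admission check. The capacity constant and the idea of comparing the summed load against it are assumptions for illustration; the abstract only says delivery is controlled on the basis of the estimated loads.

```python
MAX_LOAD = 100  # assumed total processing capacity of the delivery apparatus

def can_deliver(active_stream_loads, requested_load):
    """Admission decision for a new delivery request.

    active_stream_loads: per-client loads of streams already being delivered.
    requested_load: estimated load of the newly requested stream.
    """
    current = sum(active_stream_loads)           # first estimation: current load
    return current + requested_load <= MAX_LOAD  # second estimation: added load
```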
Abstract: An image sensing apparatus includes an image sensor that performs photoelectric conversion and outputs an image signal; a subtraction circuit that subtracts a black image signal, obtained from the image sensor while it is shielded from light, from a subject image signal obtained while the image sensor is exposed; a setting unit that sets a shooting condition; and a control unit that controls the thinning rate used during thinning readout of the black image signal from the image sensor, in accordance with the shooting condition set by the setting unit.
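A minimal sketch of the dark-frame subtraction with thinned black readout, using nested lists as stand-in image rows. Reusing each read black row for the skipped rows it represents is an assumption; the abstract does not specify how thinned black data is applied.

```python
def thin(rows, rate):
    """Thinning readout: keep every `rate`-th row of the black image."""
    return rows[::rate]

def subtract_dark(subject_rows, black_rows, rate):
    """Subtract a thinned black (shielded) frame from the subject frame."""
    black = thin(black_rows, rate)
    return [
        [s - b for s, b in zip(row, black[i // rate])]  # reuse nearest black row
        for i, row in enumerate(subject_rows)
    ]
```

A higher thinning rate shortens the black readout (useful for, say, short exposures), at the cost of row-level accuracy of the correction.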
Abstract: A collision target map and a collision target LUT are generated, for each pixel to be subjected to rendering, as collision target information recording identification information of the CG data rendered on that pixel, while rendering of colliding object group CG data is carried out. Then, the collision target information corresponding to each rendering pixel is referenced while rendering of collided object group CG data is carried out. If the colliding object group CG data is contained in the collision target information, it is determined that collision detection should be carried out for the virtual object being rendered, and collision detection information is generated. Such collision detection information allows collisions between virtual objects to be detected at high speed.
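The per-pixel scheme can be sketched as follows. Pixel coordinates and object IDs are illustrative; a real renderer would build the map on the GPU during rasterization rather than with Python dictionaries.

```python
def render_colliding(pixels_by_object):
    """First pass: record IDs of colliding-group objects per covered pixel."""
    collision_map = {}  # pixel -> set of object IDs (collision target info)
    for obj_id, pixels in pixels_by_object.items():
        for p in pixels:
            collision_map.setdefault(p, set()).add(obj_id)
    return collision_map

def detect(collision_map, collided_obj_pixels):
    """Second pass: look up collision target info for a collided-group object.

    Returns the IDs of colliding objects sharing pixels with it, i.e. the
    pairs for which collision detection should be carried out.
    """
    hits = set()
    for p in collided_obj_pixels:
        hits |= collision_map.get(p, set())
    return hits
```

Because the candidate pairs come straight from pixel overlap, most object pairs are ruled out without any geometric test, which is the speed-up the abstract claims.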
Abstract: A valve for a pressurized dispensing container has a resilient annular grommet that surrounds a valve stem. The grommet has a lower segment which engages an elongate valve stem opening with a slight interference fit to provide user-controllable metering of the product being dispensed. A recess in the lower surface of the grommet contains the stem button from the closed state to the fully open state, providing stability for the stem. The upper portion of the grommet has a restoring boot to ensure that the valve returns to its closed state once manual force is removed from the valve. The engagement of a boot flange and stem recess, together with other dimensional relationships, assures that the boot provides the required restoring force throughout the dispensing of product.
Abstract: An image display device allows a user to check a focusing operation easily at the time of shooting, even when a small-sized monitor is used. A luminance-signal extracting unit extracts a luminance signal from a video signal. An amplitude change calculating unit calculates an amplitude change component of the extracted luminance signal. An edge-signal extracting unit extracts an edge signal from the extracted luminance signal. An amplitude calculating unit calculates the amplitude of the extracted edge signal. An amplitude-ratio calculating unit calculates the ratio of the amplitude of the edge signal to the amplitude change component of the luminance signal. A color conversion unit converts the edge signal to add colors according to the calculation result. An adder unit adds the color-converted edge signal to the luminance signal. A display unit displays the video signal to which the color-converted edge signal is added.
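The chain above resembles focus peaking, and can be sketched in one dimension. The difference-based edge extractor, the min-max swing as the "amplitude change component", and the threshold value are all illustrative assumptions.

```python
def edge_signal(luma):
    """Edge signal: absolute difference between neighboring luminance samples."""
    return [abs(luma[i + 1] - luma[i]) for i in range(len(luma) - 1)]

def peaking_mask(luma, threshold=0.5):
    """True where edge amplitude / luminance swing exceeds the threshold,
    i.e. where the display would add color to mark an in-focus edge."""
    swing = max(luma) - min(luma) or 1  # amplitude change component (avoid /0)
    return [e / swing > threshold for e in edge_signal(luma)]
```

Normalizing the edge amplitude by the luminance swing keeps the coloring decision consistent between high-contrast and low-contrast scenes, which is what makes the indication readable on a small monitor.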
Abstract: A communication control device in an audio visual device system has a disconnection detection unit for detecting that an audio visual device has been disconnected from the audio visual device system, a device detection unit for detecting an audio visual device which has not acquired a logical address according to its device type, and a control unit for causing the audio visual device without a logical address to acquire one when disconnection of an audio visual device is detected by the disconnection detection unit. With this configuration, in an audio visual device system in which an upper limit is set on the number of logical addresses per device type, an audio visual device that could not acquire a logical address for its device type can be made to acquire one as soon as an address becomes available.
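The reallocation logic can be sketched as follows. The device type, its address limit, and the first-come ordering of waiting devices are illustrative assumptions (the scheme resembles HDMI CEC logical addressing, where each device type has a fixed set of addresses).

```python
TYPE_LIMITS = {"recorder": 2}  # assumed per-type logical address limit

def on_disconnect(assigned, waiting, device_type):
    """Called when a device of `device_type` disconnects.

    assigned: devices of this type currently holding a logical address.
    waiting: devices of this type that failed to acquire one.
    If a slot is now free, the control unit causes the first waiting
    device to acquire a logical address.
    """
    if len(assigned) < TYPE_LIMITS[device_type] and waiting:
        assigned.append(waiting.pop(0))
    return assigned, waiting
```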
Abstract: An image processing method corrects both the image quality of an overall image and the image quality of a partial image in a well-balanced manner.
Abstract: An image capturing apparatus is provided that is capable of performing both object detection using image recognition and object detection using movement detection on successively captured images. In the image capturing apparatus, the reliability of the result of the object detection using image recognition is evaluated based on the previous detection results. If it is determined that the reliability is high, execution of the object detection using movement detection is determined. If it is determined that the reliability is low, non-execution of the object detection using movement detection is determined. With this configuration, the object region can be tracked appropriately.
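The gating rule above can be sketched as follows. The reliability measure (the hit ratio over recent recognition results) and its threshold are assumed stand-ins; the abstract only states that reliability is evaluated from previous detection results.

```python
def should_run_movement_detection(previous_hits, min_ratio=0.6):
    """Decide whether to run movement-based object detection.

    previous_hits: booleans, True where image recognition found the object
    in a previous frame. High reliability -> run movement detection;
    low reliability -> skip it.
    """
    if not previous_hits:
        return False  # no history yet: treat reliability as low
    reliability = sum(previous_hits) / len(previous_hits)
    return reliability >= min_ratio
```

Skipping movement detection when recognition has been unreliable avoids propagating a wrong region estimate, which is how the configuration keeps tracking of the object region appropriate.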
Abstract: A hub comprises a cylindrical, hollow hub member (2), which is mounted to rotate about its axis and in whose interior is a transmission system having an input (10), mounted to rotate about the axis, and an output connected to rotate with the hub member. The transmission system comprises first and second epicyclic gearsets (14, 16). The first gearset (14) comprises a first sun gear (18), which is mounted to rotate about the axis and is in mesh with a plurality of first planet gears (20) mounted to rotate about respective planet shafts (22) carried by a first common carrier (24), which is mounted to rotate about the axis. The second gearset (16) comprises a second sun gear (28), which is mounted to rotate about the axis and is in mesh with a plurality of second planet gears (30) mounted to rotate about respective planet shafts (32) carried by a second common carrier (34).
Abstract: An image forming apparatus equipped with an image forming unit capable of image formation using color toner and transparent toner includes a determining unit to determine whether each pixel of image data belongs to an image region or a text region; a control unit to control the image forming unit by switching between transfer of a toner image to the image region and transfer of a toner image to the text region; a fixing unit to fix the toner image formed on recording material; and a transport unit to discharge the recording material on which the toner image is fixed, or to re-feed the recording material to the image forming unit.
Abstract: If an image acquired from a video camera (113) contains a two-dimensional bar code as information unique to an operation input device (116), information unique to the video camera (113) and the information unique to the operation input device (116) are managed in a shared memory (107) in association with each other.
Abstract: An image processing apparatus is provided which corrects a shot image, shot by an image capturing apparatus and containing the shadow of a foreign substance on the shooting optical path of the image capturing apparatus, so as to reduce the influence of that shadow. The image processing apparatus includes a display unit which displays the shot image; a correction unit which corrects the shot image, on the basis of foreign substance information, so as to reduce the influence of the shadow of the foreign substance; an input unit with which a user, in accordance with the shot image displayed on the display unit, inputs unregistered foreign substance information that has not yet been registered in the foreign substance information; and an additional registration unit which additionally registers the unregistered foreign substance information.
Abstract: An image sensing apparatus comprises an image sensing unit configured to sense an object and generate image data, a face detection unit configured to detect a face region of a person contained in the image data generated by the image sensing unit, and a facial expression determination unit configured to determine a plurality of facial expressions in the face region detected by the face detection unit. When both a first facial expression and a second facial expression are to be determined, the facial expression determination unit corrects the condition for determining the second facial expression so that the determination becomes more difficult than under the condition used when the first facial expression is not determined and only the second facial expression is determined.
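The condition correction can be sketched as a threshold adjustment. The score scale, the base threshold, the penalty value, and the example expressions (e.g. treating a smile as the first expression and closed eyes as the second) are all illustrative assumptions.

```python
def second_expression_detected(score, first_detected,
                               base_threshold=0.5, penalty=0.2):
    """Determine the second facial expression from its detection score.

    When the first expression has been determined, the threshold is
    raised so the second expression becomes harder to determine, as
    the abstract describes.
    """
    threshold = base_threshold + (penalty if first_detected else 0.0)
    return score >= threshold
```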
Abstract: When a video output apparatus (100) is connected, a video display apparatus (200) obtains information on the content data stored in the video output apparatus (100). Based on the obtained information, the video display apparatus (200) determines whether it can itself decode the content data. If it cannot, it requests the video output apparatus (100) to decode the content data before transferring it using a specified data transfer method. In this way, an appropriate transfer protocol is selected automatically when content data is transferred between the video display apparatus (200) and the video output apparatus (100).
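The negotiation can be sketched as follows. The codec names and the two transfer-method labels are illustrative assumptions; the abstract specifies only that the display's decodability determines whether source-side decoding is requested.

```python
DISPLAY_DECODABLE = {"mpeg2", "h264"}  # assumed codecs the display can decode

def choose_transfer(content_codec):
    """Select the transfer method based on display-side decodability."""
    if content_codec in DISPLAY_DECODABLE:
        return "compressed"         # transfer as-is; the display decodes
    return "decoded-by-source"      # request the output apparatus to decode first
```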
Abstract: A display processing apparatus makes it possible to set auto bracketing values while confirming the whole range of shooting conditions configurable for correction, with possible shooting condition corrections taken into account. An exposure correction value is set based on an instruction from a user for correcting a preset exposure value. Auto bracketing values for auto bracketing shooting are set based on an instruction from the user. A process is carried out for displaying a scale indicating the exposure correction value and the auto bracketing values, first indicators indicative of the range of exposure correction values that can be set, arranged in association with the scale, and second indicators indicative of the range of auto bracketing values that can be set, arranged in association with the scale and the first indicators.
Abstract: This invention has as its object to solve a conventional problem in which the focal point position cannot be determined on either side of the in-focus point, even in a slow shutter mode. To this end, a camera control microcomputer of a video camera determines the drive amount of a focus lens for one field in accordance with the rotation angle of a manual focus dial when the shutter speed is 1/60 sec, 1/30 sec, or 1/15 sec, and controls a focus compensation lens driver and focus compensation lens motor to drive the focus lens by the determined drive amount in the direction corresponding to the detected rotation direction of the manual focus dial.
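The per-field drive rule can be sketched as follows. The gain table mapping shutter speed to drive steps per degree is purely an illustrative assumption; the abstract states only that the drive amount is determined from the dial's rotation angle at these shutter speeds.

```python
# Assumed drive steps per degree of dial rotation, per shutter speed.
GAIN = {1 / 60: 1.0, 1 / 30: 2.0, 1 / 15: 4.0}

def drive_amount(shutter_speed, dial_angle_deg):
    """Focus lens drive amount for one field; direction is handled separately
    from the detected rotation direction of the dial."""
    return GAIN[shutter_speed] * abs(dial_angle_deg)
```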