Patents by Inventor Katsuhiro Iwasa
Katsuhiro Iwasa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8289393
Abstract: In order to control a camera so that, when an image is displayed across multiple screens, an object being captured does not cross a border between the screens, a camera control unit is provided that includes a position information acquisition unit, a capturing area determination unit, an angle of view/capturing direction computation unit, a camera control signal generation unit, and a camera control signal sending unit. The capturing area determination unit sets the capturing area of the camera so that no object crosses the border between the screens and all captured objects are displayed in the screens. The angle of view/capturing direction computation unit computes the angle of view and the capturing direction based on the capturing area. The camera control signal generation unit generates a signal for controlling the camera to conform to the angle of view and the capturing direction.
Type: Grant
Filed: May 30, 2008
Date of Patent: October 16, 2012
Assignee: Panasonic Corporation
Inventors: Susumu Okada, Katsuhiro Iwasa
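A rough sketch of the computation this abstract describes, reducing each object to a horizontal angle and searching for a capturing direction (pan) and an angle of view that cover every object while keeping each one clear of the inter-screen borders. The search strategy, step size, and margin are invented for illustration and are not taken from the patent.

```python
def camera_params(object_angles, num_screens=2, margin=2.0):
    """Find a pan angle and an angle of view (degrees) such that all
    objects are covered and none lies within `margin` of a border
    between adjacent screens."""
    lo = min(object_angles) - margin
    hi = max(object_angles) + margin
    mid, base_half = (lo + hi) / 2.0, (hi - lo) / 2.0
    for step in range(80):                     # try growing pan offsets
        offset = 0.5 * step
        for pan in (mid + offset, mid - offset):
            half = base_half + offset          # widen so objects stay covered
            borders = [pan - half + 2 * half * k / num_screens
                       for k in range(1, num_screens)]
            covered = pan - half <= lo and hi <= pan + half
            clear = all(abs(a - b) > margin
                        for a in object_angles for b in borders)
            if covered and clear:
                return pan, 2 * half           # capturing direction, angle of view
    raise ValueError("no suitable capturing area found")

print(camera_params([10.0, 20.0, 31.0]))       # (22.5, 29.0)
```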
-
Patent number: 7783084
Abstract: A face detection device comprises: a judgment area determining unit operable to determine judgment areas in an inputted image; and a face judgment unit operable to judge the existence of a face image greater than a first predetermined size in the judgment areas. According to face directions, the judgment area determining unit determines a ratio of the inputted image to the judgment areas. The characteristics of face images can thereby be effectively utilized.
Type: Grant
Filed: January 18, 2006
Date of Patent: August 24, 2010
Assignee: Panasonic Corporation
Inventors: Kazuyuki Imagawa, Eiji Fukumiya, Yasunori Ishii, Katsuhiro Iwasa
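A minimal sketch of the direction-dependent ratio idea: the judgment area covers a different fraction (and side) of the input image depending on the face direction. The ratios and direction labels are illustrative assumptions, not values from the patent.

```python
def judgment_area(image_w, image_h, face_direction):
    """Return a judgment area (x, y, w, h) whose ratio to the input
    image depends on the face direction."""
    ratios = {"frontal": 0.6, "left": 0.8, "right": 0.8}  # illustrative
    r = ratios[face_direction]
    w, h = int(image_w * r), int(image_h * r)
    if face_direction == "frontal":
        x = (image_w - w) // 2        # frontal faces: centered area
    elif face_direction == "left":
        x = 0                         # profile faces: area shifted to one side
    else:
        x = image_w - w
    return (x, (image_h - h) // 2, w, h)

print(judgment_area(640, 480, "frontal"))  # (128, 96, 384, 288)
```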
-
Patent number: 7730505
Abstract: A stream reproducing device that performs automatic viewing of a non-viewed period of a stream. The stream reproducing device comprises a camera, a person detecting unit that detects viewers based on the output of the camera, a viewing information generating unit that generates viewing information based on the detection result of the person detecting unit, and a reproduction control unit that receives supply of a stream to control reproduction of the stream. The person detecting unit detects each of the viewers by classification based on the output of the camera. The viewing information generating unit generates the viewing information for each of the viewers based on the detection result for each viewer. The viewing information is related to a time stamp of the stream to identify a non-viewed period, during which a given viewer did not view the reproduced result of the stream.
Type: Grant
Filed: August 26, 2005
Date of Patent: June 1, 2010
Assignee: Panasonic Corporation
Inventors: Eiji Fukumiya, Kazuyuki Imagawa, Katsuhiro Iwasa, Yasunori Ishii
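The per-viewer viewing information can be pictured as time-stamped intervals during which that viewer was detected watching; a viewer's non-viewed periods are then the complement of those intervals over the stream's duration. A small sketch under that assumed interval format:

```python
def non_viewed_periods(viewed, duration):
    """Given (start, end) intervals during which a viewer watched,
    return the complementary non-viewed periods of the stream."""
    gaps, cursor = [], 0.0
    for start, end in sorted(viewed):
        if start > cursor:
            gaps.append((cursor, start))   # a stretch this viewer missed
        cursor = max(cursor, end)
    if cursor < duration:
        gaps.append((cursor, duration))
    return gaps

# Viewer watched 0-300 s and 450-900 s of a 1200 s stream:
print(non_viewed_periods([(0, 300), (450, 900)], 1200))  # [(300, 450), (900, 1200)]
```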
-
Publication number: 20080297601
Abstract: To control a camera so that, when an image from the camera is displayed on plural arranged screens, an object that is being captured does not cross the border between the screens. A camera control unit includes a position information acquisition unit, a capturing area determination unit, an angle of view/capturing direction computation unit, a camera control signal generation unit, and a camera control signal sending unit. The capturing area determination unit sets the capturing area of the camera so that an object does not cross the border between plural displays, and so that all captured objects are displayed in the plural displays, based on geometric position information. The angle of view/capturing direction computation unit then computes the angle of view and the capturing direction of the camera based on that capturing area. The camera control signal generation unit generates a signal for controlling the camera to conform to the angle of view and the capturing direction.
Type: Application
Filed: May 30, 2008
Publication date: December 4, 2008
Inventors: Susumu Okada, Katsuhiro Iwasa
-
Publication number: 20080022295
Abstract: A stream reproducing device operable to easily perform automatic viewing of the non-viewed period of a stream is provided. The stream reproducing device comprises a camera (1) that is arranged toward viewers, a person detecting unit (10) operable to detect the viewer based on the output of the camera, a viewing information generating unit (11) operable to generate viewing information based on the detection result of the person detecting unit, and a reproduction control unit (6) operable to receive supply of a stream, to control reproduction of the stream, and to output a reproduced signal. The person detecting unit detects each of the viewers by classification based on the output of the camera. The viewing information generating unit generates the viewing information for each of the viewers based on the detection result of each of the viewers by the person detecting unit.
Type: Application
Filed: August 26, 2005
Publication date: January 24, 2008
Inventors: Eiji Fukumiya, Kazuyuki Imagawa, Katsuhiro Iwasa
-
Patent number: 7227996
Abstract: An image processing method for detecting an object from an input image using a template image, including: inputting a specified image with respect to both a template image and an input image; calculating an edge normal direction vector of the specified image; generating an evaluation vector from the edge normal direction vector; subjecting the evaluation vector to orthogonal transformation; performing a product-sum calculation of the corresponding spectral data for each evaluation vector that has been subjected to orthogonal transformation, obtained for each of the template image and the input image; and subjecting the result to inverse orthogonal transformation to generate a similarity value map. The formula of the similarity value, the orthogonal transformation, and the inverse orthogonal transformation each have linearity.
Type: Grant
Filed: January 25, 2002
Date of Patent: June 5, 2007
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Kazuyuki Imagawa, Tetsuya Yoshimura, Katsuhiro Iwasa, Hideaki Matsuo, Yuji Takata
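Because every step is linear, the pipeline maps naturally onto FFT-based correlation: transform the evaluation vectors of both images, take product-sums of the spectra, and inverse-transform into a similarity map. In the NumPy sketch below, unit gradient vectors stand in for the patented edge-normal evaluation vectors; it illustrates the structure, not the patent's exact formula.

```python
import numpy as np

def similarity_map(image, template):
    """Correlate unit edge-gradient fields via the frequency domain:
    FFT both fields, multiply spectra (the product-sum step), and
    inverse-FFT into a similarity map."""
    def unit_gradients(img):
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        mag[mag == 0] = 1.0
        return gx / mag, gy / mag

    ix, iy = unit_gradients(image)
    h, w = image.shape
    tx = np.zeros((h, w)); ty = np.zeros((h, w))
    gx, gy = unit_gradients(template)
    th, tw = template.shape
    tx[:th, :tw], ty[:th, :tw] = gx, gy

    # Summing the two component correlations keeps the whole map linear.
    spec = (np.fft.fft2(ix) * np.conj(np.fft.fft2(tx)) +
            np.fft.fft2(iy) * np.conj(np.fft.fft2(ty)))
    return np.real(np.fft.ifft2(spec))

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
tpl = scene[20:36, 24:40].copy()
sim = similarity_map(scene, tpl)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(sim), sim.shape))
print(peak)  # expected (20, 24): where the template was cut from
```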
-
Patent number: 7133658
Abstract: The position of a body part area for a personal image in an input image is detected. According to the position of the detected body part area, the position of the origin of a coordinate system for an ornament image is defined. Based on the position of the defined origin, an ornament-arranged input image is outputted. When the personal image moves in the input image, the ornament image also moves by following the movement of the personal image. Even when both the personal image and ornament image move, the ornament image can be made not to interfere with the personal image. Therefore, the personal image can be clearly displayed. Moreover, the input image can be made to look more interesting by synchronizing the movement of the ornament image with the movement of the personal image.
Type: Grant
Filed: November 3, 2003
Date of Patent: November 7, 2006
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Kazuyuki Imagawa, Katsuhiro Iwasa, Takaaki Nishi, Eiji Fukumiya, Hideaki Matsuo, Tomonori Kataoka, Satoshi Kajita, Ikuo Fuchigami
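A minimal sketch of the coordinate-origin idea, assuming the detected body part arrives as a bounding box. The relative offset is an invented illustration value; a real system would also choose it so the ornament never overlaps the person.

```python
def ornament_origin(part_box, rel=(0.5, -0.3)):
    """Define the ornament's coordinate-system origin relative to the
    detected body-part box, so the ornament tracks the person."""
    x, y, w, h = part_box
    return (x + rel[0] * w, y + rel[1] * h)

# The origin follows the person from frame to frame:
for box in [(100, 120, 60, 60), (130, 118, 60, 60)]:
    print(ornament_origin(box))   # (130.0, 102.0) then (160.0, 100.0)
```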
-
Patent number: 7123754
Abstract: A face detection device includes a face learning dictionary, which holds learned information for identification between a facial image and a non-facial image. An image input unit inputs a subject image. An edge image extraction unit extracts an edge image from the subject image. A partial image extraction unit, based on the edge image, extracts partial images that are candidates to contain facial images from the subject image. A face/non-face identification unit references the learning dictionary to identify whether or not each extracted partial image contains a facial image. Face detection of high precision, which reflects learned results, is performed.
Type: Grant
Filed: May 22, 2002
Date of Patent: October 17, 2006
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata, Katsuhiro Iwasa, Toshirou Eshima, Naruatsu Baba
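A skeleton of the described pipeline with assumed details: gradient magnitude stands in for the edge image, a sliding window with a crude edge-density test produces the candidate partial images, and any scoring callable stands in for the face learning dictionary.

```python
import numpy as np

def detect_faces(image, classifier, win=24, stride=8, thr=0.5):
    """Edge image -> candidate partial images -> face/non-face scores."""
    gy, gx = np.gradient(image.astype(float))
    edges = np.hypot(gx, gy)                      # edge image extraction
    hits = []
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            patch = edges[y:y + win, x:x + win]   # candidate partial image
            if patch.mean() > edges.mean():       # crude candidate filter
                if classifier(patch) > thr:       # learning-dictionary stand-in
                    hits.append((x, y, win, win))
    return hits

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(len(detect_faces(img, classifier=lambda p: 0.9)))  # trivial stand-in classifier
```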
-
Publication number: 20060177110
Abstract: A face detection device comprises: a judgment area determining unit operable to determine judgment areas in an inputted image; and a face judgment unit operable to judge the existence of a face image greater than a first predetermined size in the judgment areas. According to face directions, the judgment area determining unit determines a ratio of the inputted image to the judgment areas. The characteristics of face images can thereby be effectively utilized.
Type: Application
Filed: January 18, 2006
Publication date: August 10, 2006
Inventors: Kazuyuki Imagawa, Eiji Fukumiya, Yasunori Ishii, Katsuhiro Iwasa
-
Publication number: 20060007135
Abstract: A viewing intention judging device comprises an input image control unit, an operation input unit, an element-image generating/selecting unit operable to generate a user interface element image and to handle selection, a composing unit operable to compose the user interface element image and an inputted image, a display unit operable to display the composed image, a person state acquiring unit operable to acquire information on a person near the display unit, a viewing intention judging unit operable to judge the viewing intention of the person, and an operation control unit operable to control the element-image generating/selecting unit and the operation input unit when a judgment result indicates a transit from a state with viewing intention to a state without viewing intention, or vice versa.
Type: Application
Filed: June 22, 2005
Publication date: January 12, 2006
Inventors: Kazuyuki Imagawa, Eiji Fukumiya, Katsuhiro Iwasa, Yasunori Ishii, Shogo Hamasaki
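A toy reading of the transit condition: a control action fires only when the judged intention flips between the with-intention and without-intention states, not on every frame. The intention test itself is an invented stand-in.

```python
class ViewingIntentionController:
    """Issue UI control actions only on intention-state transitions."""
    def __init__(self):
        self.watching = False

    def update(self, face_present, facing_screen):
        intends = face_present and facing_screen  # crude intention judgment
        if intends != self.watching:              # a transit occurred
            self.watching = intends
            return "show_ui" if intends else "hide_ui"
        return None                               # no transit, no action

ctl = ViewingIntentionController()
print(ctl.update(True, True))    # show_ui : gained viewing intention
print(ctl.update(True, True))    # None    : no state change
print(ctl.update(True, False))   # hide_ui : lost viewing intention
```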
-
Patent number: 6961446
Abstract: The present media editing device easily generates media including messages in a communication terminal such as a mobile terminal. A moving image data storage part stores moving image data recorded by a user. A region extraction part extracts any region including the user from the moving image data. A front determination part detects whether or not the user in the extracted region is facing the front. A sound detection part detects the presence or absence of a sound signal of a predetermined level or higher. A frame selection part determines starting and ending frames based on the results outputted from the front determination part and the sound detection part. An editing part performs, for example, an image conversion process by clipping out the media based on the thus determined starting and ending frames. A transmission data storage part stores the resulting edited media as transmission data.
Type: Grant
Filed: September 12, 2001
Date of Patent: November 1, 2005
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Kazuyuki Imagawa, Yuji Takata, Hideaki Matsuo, Katsuhiro Iwasa, Tetsuya Yoshimura
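A minimal sketch of the frame-selection step, assuming the front determination and sound detection results arrive as per-frame booleans; a real selector would likely tolerate short dropouts.

```python
def select_clip(front_flags, sound_flags):
    """Choose starting/ending frames spanning the frames where the user
    both faces the front and a sound signal is present."""
    good = [i for i, (f, s) in enumerate(zip(front_flags, sound_flags)) if f and s]
    if not good:
        return None                   # nothing worth clipping
    return good[0], good[-1]          # (starting frame, ending frame)

print(select_clip([0, 1, 1, 1, 0], [0, 0, 1, 1, 1]))  # (2, 3)
```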
-
Publication number: 20040207610
Abstract: The characteristic-detecting unit is operable to detect characteristics from a first or second information terminal unit-captured image. The first information terminal unit-captured image has been obtained by an image input unit; the second has been decoded by an image-decoding unit. The characteristic-detecting unit is further operable to control an image-encoding unit, an image-displaying unit, the image-decoding unit, or the image input unit in accordance with the characteristic information. The control system reduces the amount of processing required for image encoding; either stops displaying images or displays fewer images; either stops transferring images from the image-decoding unit to the image-displaying unit or transfers fewer images in the same transfer direction; and captures fewer image frames. This provides an information terminal unit characterized by thrifty power consumption.
Type: Application
Filed: March 12, 2004
Publication date: October 21, 2004
Inventors: Satoshi Kajita, Takayuki Ejima, Tomonori Kataoka, Ikuo Fuchigami, Kazuyuki Imagawa, Katsuhiro Iwasa
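The control can be pictured as a mapping from the detected characteristic (for instance, whether anyone is in front of the terminal) to encoding, display, and capture settings. The field names and values below are invented illustrations of that mapping, not the patent's parameters.

```python
def power_policy(person_detected):
    """Map a detected characteristic to power-saving control settings."""
    if person_detected:
        return {"encode_fps": 15, "display": "on", "capture_fps": 15}
    # Nobody is watching: encode, display, and capture less.
    return {"encode_fps": 1, "display": "off", "capture_fps": 1}

print(power_policy(False))  # {'encode_fps': 1, 'display': 'off', 'capture_fps': 1}
```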
-
Publication number: 20040170326
Abstract: The domain of an object detected by template matching is tracked in accordance with motion vector information included in compressed and encoded data. This eliminates template matching-based object detection for any image subject to object detection that contains motion vector information. As a result, object detection is achievable with less processing, compared with template matching-based detection of objects in all images subject to object detection.
Type: Application
Filed: January 23, 2004
Publication date: September 2, 2004
Inventors: Tomonori Kataoka, Satoshi Kajita, Ikuo Fuchigami, Kazuyuki Imagawa, Katsuhiro Iwasa
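A sketch of the tracking idea under an assumed input format (one motion vector per macroblock center): a box found once by template matching is advanced by the average motion vector inside it, so template matching need not run again while vectors are available.

```python
def track_with_motion_vectors(box, motion_vectors):
    """Shift `box` (x, y, w, h) by the mean of the compressed-stream
    motion vectors whose macroblock centers fall inside it."""
    x, y, w, h = box
    inside = [(dx, dy) for (mx, my), (dx, dy) in motion_vectors.items()
              if x <= mx < x + w and y <= my < y + h]
    if not inside:
        return box  # a real system would fall back to re-detection here
    dx = sum(v[0] for v in inside) / len(inside)
    dy = sum(v[1] for v in inside) / len(inside)
    return (x + dx, y + dy, w, h)

mvs = {(16, 16): (4, 0), (32, 16): (4, 2), (200, 200): (0, 0)}
print(track_with_motion_vectors((0, 0, 64, 64), mvs))  # (4.0, 1.0, 64, 64)
```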
-
Publication number: 20040131278
Abstract: The position of a body part area for a personal image in an input image is detected. According to the position of the detected body part area, the position of the origin of a coordinate system for an ornament is defined. Based on the position of the defined origin, an ornament-arranged input image is outputted. When the personal image moves in the input image, the ornament also moves following the movement of the personal image. Even when both the personal image and the ornament move, the ornament can be made not to interfere with the personal image. Therefore, the personal image can be clearly displayed. Moreover, the input image can be made to look more interesting by synchronizing the movement of the ornament with the movement of the personal image.
Type: Application
Filed: November 3, 2003
Publication date: July 8, 2004
Inventors: Kazuyuki Imagawa, Katsuhiro Iwasa, Takaaki Nishi, Eiji Fukumiya, Hideaki Matsuo, Tomonori Kataoka, Satoshi Kajita, Ikuo Fuchigami
-
Publication number: 20030091239
Abstract: An image series including one or more images, and control information that is related to the images of the image series and concerns disclosure/nondisclosure of the images, are communicated. The control information is related to either or both of a focused region and a non-focused region, which include part or all of an object within the image, and indicates whether or not the regions are to be disclosed. The control information also specifies disclosure/nondisclosure to specific or unspecified users.
Type: Application
Filed: November 12, 2002
Publication date: May 15, 2003
Inventors: Kazuyuki Imagawa, Hideaki Matsuo, Yuji Takata, Katsuhiro Iwasa, Takaaki Nishi, Eiji Fukumiya
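The control information can be pictured as one record per region. The sketch below invents field names for such a record, together with a visibility check covering disclosure to specific versus unspecified users; none of the names come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class RegionDisclosure:
    box: tuple                # (x, y, w, h) of the region in the image
    focused: bool             # focused region vs. non-focused region
    disclose: bool            # whether the region may be shown at all
    allowed_users: set = field(default_factory=set)  # empty = unspecified users

    def visible_to(self, user):
        return self.disclose and (not self.allowed_users or user in self.allowed_users)

face = RegionDisclosure(box=(40, 30, 80, 80), focused=True,
                        disclose=True, allowed_users={"alice"})
print(face.visible_to("alice"), face.visible_to("bob"))  # True False
```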
-
Publication number: 20030059117
Abstract: An image processing device has an edge extraction unit, which inputs an image and generates an edge image; a voting unit, which uses templates to carry out voting on the edge image and generate voting results; a maxima extraction unit, which extracts the maxima among the voting results and generates extraction results; and an object identifying unit, which identifies the position of an object based on the extraction results. The edge extraction unit has a filter processing unit that uses a filter for performing simultaneous noise elimination and edge extraction of the image.
Type: Application
Filed: September 26, 2002
Publication date: March 27, 2003
Applicant: Matsushita Electric Industrial Co., Ltd.
Inventors: Katsuhiro Iwasa, Hideaki Matsuo, Yuji Takata, Kazuyuki Imagawa, Eiji Fukumiya
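A crude version of the voting and maxima-extraction steps, in the spirit of a generalized Hough transform: the template is reduced to a list of edge-point offsets (an assumption made for illustration), each image edge point votes for the object positions it could imply, and the vote maximum marks the object.

```python
import numpy as np

def vote_for_object(edge_points, template_offsets, shape):
    """Accumulate votes from edge points against template offsets and
    extract the maximum as the object position."""
    votes = np.zeros(shape, dtype=int)
    for ey, ex in edge_points:
        for oy, ox in template_offsets:        # template-relative offsets
            y, x = ey - oy, ex - ox
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                votes[y, x] += 1
    peak = tuple(int(v) for v in np.unravel_index(np.argmax(votes), votes.shape))
    return votes, peak                          # maxima extraction

edge_points = [(10, 10), (10, 14), (14, 12)]
offsets = [(0, 0), (0, 4), (4, 2)]             # the template's edge layout
votes, peak = vote_for_object(edge_points, offsets, (20, 20))
print(peak)  # (10, 10): all three edge points agree on this origin
```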
-
Publication number: 20020191818
Abstract: A face detection device includes a face learning dictionary, which holds learned information for identification between a facial image and a non-facial image. An image input unit inputs a subject image. An edge image extraction unit extracts an edge image from the subject image. A partial image extraction unit, based on the edge image, extracts partial images that are candidates to contain facial images from the subject image. A face/non-face identification unit references the learning dictionary to identify whether or not each extracted partial image contains a facial image. Face detection of high precision, which reflects learned results, is performed.
Type: Application
Filed: May 22, 2002
Publication date: December 19, 2002
Applicant: Matsushita Electric Industrial Co., Ltd.
Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata, Katsuhiro Iwasa, Toshirou Eshima, Naruatsu Baba
-
Publication number: 20020136459
Abstract: An image processing method for detecting an object from an input image using a template image, including: inputting a specified image with respect to both a template image and an input image; calculating an edge normal direction vector of the specified image; generating an evaluation vector from the edge normal direction vector; subjecting the evaluation vector to orthogonal transformation; performing a product-sum calculation of the corresponding spectral data for each evaluation vector that has been subjected to orthogonal transformation, obtained for each of the template image and the input image; and subjecting the result to inverse orthogonal transformation to generate a similarity value map. The formula of the similarity value, the orthogonal transformation, and the inverse orthogonal transformation each have linearity.
Type: Application
Filed: January 25, 2002
Publication date: September 26, 2002
Inventors: Kazuyuki Imagawa, Tetsuya Yoshimura, Katsuhiro Iwasa, Hideaki Matsuo, Yuji Takata
-
Publication number: 20020031262
Abstract: The present media editing device easily generates media including messages in a communication terminal such as a mobile terminal. A moving image data storage part 14 stores moving image data recorded by a user. A region extraction part 17 extracts any region including the user from the moving image data. A front determination part 18 detects whether or not the user in the extracted region is facing the front. A sound detection part 19 detects the presence or absence of a sound signal of a predetermined level or higher. A frame selection part 20 determines starting and ending frames based on the results outputted from the front determination part 18 and the sound detection part 19. An editing part 21 performs, for example, an image conversion process by clipping out the media based on the thus determined starting and ending frames. A transmission data storage part 15 stores the resulting edited media as transmission data.
Type: Application
Filed: September 12, 2001
Publication date: March 14, 2002
Inventors: Kazuyuki Imagawa, Yuji Takata, Hideaki Matsuo, Katsuhiro Iwasa, Tetsuya Yoshimura
-
Publication number: 20010052928
Abstract: An image communication terminal comprises a face extraction part 7 for extracting the position and size of a face from an image picked up by a camera part 4, a display part 3 for displaying the image to a user, a communication part 9 for establishing two-way communication of the image with an information processor on the other party's side, and a transmitting data processing part 8 for outputting to the communication part 9 an image in a rectangular transmission region set so as to be movable within the image picked up by the camera part 4. An effective region that moves integrally with the transmission region is set in the image picked up by the camera part 4, and the position of the transmission region is moved in conformity with the position of the face region when the face region deviates from the effective region.
Type: Application
Filed: May 22, 2001
Publication date: December 20, 2001
Inventors: Kazuyuki Imagawa, Hideaki Matsuo, Yuji Takata, Masafumi Yoshizawa, Shogo Hamasaki, Tetsuya Yoshimura, Katsuhiro Iwasa
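A sketch of the effective-region behavior under assumed box formats: the transmission region holds still while the face stays inside an inner effective region (the transmission region shrunk by a slack), and re-centers on the face once the face leaves it. The slack value is illustrative.

```python
def update_transmission_region(tx_box, face_box, slack=20):
    """Move the transmission region only when the face region deviates
    from the effective region that moves integrally with it."""
    tx, ty, tw, th = tx_box
    fx, fy, fw, fh = face_box
    inside = (tx + slack <= fx and fx + fw <= tx + tw - slack and
              ty + slack <= fy and fy + fh <= ty + th - slack)
    if inside:
        return tx_box                      # face still in the effective region
    # Face escaped: re-center the transmission region on the face.
    return (fx + fw // 2 - tw // 2, fy + fh // 2 - th // 2, tw, th)

tx = (0, 0, 320, 240)
print(update_transmission_region(tx, (140, 100, 40, 40)))  # unchanged
print(update_transmission_region(tx, (300, 100, 40, 40)))  # (160, 0, 320, 240)
```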