Patents by Inventor Ryouichi Kawanishi
Ryouichi Kawanishi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Patent number: 8671344
Abstract: The present invention aims to provide an information display device for easily changing a scrolling speed and a display mode. The information display device receives an instruction for displaying a plurality of contents, via a move operation on a two-dimensional plane defined by first and second axes; determines a moving speed for moving the contents, based on a motion component of the move operation along the first axis, and determines a display mode for displaying the contents, based on a motion component of the move operation along the second axis; and displays the contents in the display mode on a screen, by scrolling through the contents at the moving speed.
Type: Grant
Filed: February 1, 2010
Date of Patent: March 11, 2014
Assignee: Panasonic Corporation
Inventors: Keiji Icho, Ryouichi Kawanishi, Teruyuki Kimata
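The split this abstract describes, with horizontal motion controlling scroll speed and vertical motion controlling display mode, can be sketched roughly as below. The function name, mode labels, thresholds, and scaling factor are illustrative assumptions, not taken from the patent:

```python
def interpret_move(dx, dy, speed_scale=0.5):
    """Map a 2D move gesture to (scroll_speed, display_mode).

    dx: motion component along the first (horizontal) axis -> scrolling speed
    dy: motion component along the second (vertical) axis  -> display mode
    The thresholds and mode names are illustrative assumptions.
    """
    speed = abs(dx) * speed_scale      # larger horizontal motion scrolls faster
    if abs(dy) < 10:
        mode = "thumbnail"             # small vertical motion: compact view
    elif abs(dy) < 50:
        mode = "list"
    else:
        mode = "detail"                # large vertical motion: expanded view
    return speed, mode
```

A single gesture thus sets both parameters at once, which is what lets the user change speed and mode without separate controls.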

CONTENT DISPLAY PROCESSING DEVICE, CONTENT DISPLAY PROCESSING METHOD, PROGRAM AND INTEGRATED CIRCUIT
Publication number: 20140055479
Abstract: A content display processing device includes a display unit, a content acquisition unit, a characteristic determination unit and a display control unit. The content acquisition unit acquires pieces of content. The attribute information acquisition unit acquires pieces of attribute information, each piece of attribute information being acquired from one or more of the pieces of content and indicating an attribute thereof. The characteristic information determination unit determines a piece of characteristic information based on the pieces of attribute information, the piece of characteristic information pertaining to pieces of target content among the pieces of content and indicating an attribute which is characteristic thereof. The display control unit controls the display unit to display the pieces of target content based on the piece of characteristic information. Through the above configuration, pieces of target content can be arranged and displayed on arrangement axes which change in accordance therewith.
Type: Application
Filed: January 11, 2013
Publication date: February 27, 2014
Applicant: Panasonic Corporation
Inventors: Ryouichi Kawanishi, Keiji Icho

Patent number: 8581700
Abstract: A wearable device is worn by a person participating in an event in which a plurality of other people are participating and wearing other wearable devices. The wearable device includes a request unit for transmitting a request signal to other wearable devices that are in a predetermined range, and receiving a response to the request signal from each of the other wearable devices, and a communication unit for determining, with use of the received responses, one or more of the other wearable devices to be a communication partner, and performing data communication with the determined one or more other wearable devices. The data received in the communication is data collected by the one or more other wearable devices determined to be communication partners, and the data is used as a profile component when creating a profile of the event.
Type: Grant
Filed: February 21, 2007
Date of Patent: November 12, 2013
Assignee: Panasonic Corporation
Inventors: Takashi Kawamura, Masayuki Misaki, Ryouichi Kawanishi, Masaki Yamauchi

Patent number: 8583647
Abstract: A data processing device provides a result of categorization that is satisfactory to a user. The data processing device: stores model data pieces indicating detection counts of feature amounts; judges, for each target data piece, whether the target data piece is a non-categorization data piece including an uncategorizable object, using the model data pieces and the detection count of each of at least two feature amounts detected in the target data piece; when two or more of the target data pieces are judged to be non-categorization data pieces, specifies at least two feature amounts that are included and detected the same number of times, in a predetermined number or more of the non-categorization data pieces; and newly creates a model data piece based on the at least two specified feature amounts, using a class creation method, and stores the model data piece into the storage unit.
Type: Grant
Filed: December 24, 2010
Date of Patent: November 12, 2013
Assignee: Panasonic Corporation
Inventors: Ryouichi Kawanishi, Tsutomu Uenoyama, Akira Ishida

Patent number: 8549431
Abstract: The invention aims to provide a user interface for efficiently displaying desired content from among a large number of contents. An operation location and an operation amount of an operation that has been made on an operation member are detected. Based on the operation location, one content is selected from among a plurality of contents that have been arranged in sequence, and a display unit displays the selected one content. The display unit displays another content when the operation location has moved during the display of the selected one content, an order of said another content being different from an order of the selected one content by a number based on the operation amount detected by a detection unit.
Type: Grant
Filed: May 18, 2010
Date of Patent: October 1, 2013
Assignee: Panasonic Corporation
Inventors: Hiroshi Yabu, Ryouichi Kawanishi, Tsutomu Uenoyama
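The location-to-selection and amount-to-jump behavior described in this abstract can be sketched as two small helpers; the linear mapping and the step size of one content per 10 units of operation amount are assumptions made for illustration:

```python
def select_content(contents, location, width):
    """Pick one content index from where the operation member is touched.

    location in [0, width) maps linearly onto the content sequence.
    """
    index = int(location / width * len(contents))
    return min(index, len(contents) - 1)

def next_index(current, amount, total):
    """Jump to a content whose order differs from the current one by a
    number based on the detected operation amount (1 step per 10 units
    here, an illustrative assumption); clamped to the valid range."""
    step = int(amount / 10)
    return max(0, min(current + step, total - 1))
```

For example, touching the middle of a 100-unit-wide member over ten contents selects the fifth content, and a further movement of 30 units jumps three contents ahead.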

Voice analysis device, voice analysis method, voice analysis program, and system integration circuit
Patent number: 8478587
Abstract: A sound analysis device comprises: a sound parameter calculation unit operable to acquire an audio signal and calculate a sound parameter for each of partial audio signals, the partial audio signals each being the acquired audio signal in a unit of time; a category determination unit operable to determine, from among a plurality of environmental sound categories, which environmental sound category each of the partial audio signals belongs to, based on a corresponding one of the calculated sound parameters; a section setting unit operable to sequentially set judgment target sections on a time axis as time elapses, each of the judgment target sections including two or more of the units of time, the two or more of the units of time being consecutive; and an environment judgment unit operable to judge, based on a number of partial audio signals in each environmental sound category determined in at least a most recent judgment target section, an environment that surrounds the sound analysis device in at least the most recent judgment target section.
Type: Grant
Filed: March 13, 2008
Date of Patent: July 2, 2013
Assignee: Panasonic Corporation
Inventors: Takashi Kawamura, Ryouichi Kawanishi
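The core of this abstract, classifying each unit of time into an environmental sound category and judging the surrounding environment from the counts in the most recent judgment target section, can be sketched as a majority vote over a sliding window. The window length and labels are illustrative assumptions:

```python
from collections import Counter, deque

def classify_environment(categories, window=5):
    """Judge the surrounding environment from per-unit-time sound categories.

    categories: sequence of environmental-sound labels, one per unit of time
    (e.g. produced by a per-second classifier). The judged environment for
    each judgment target section is the majority label over the most recent
    `window` consecutive units of time.
    """
    recent = deque(maxlen=window)   # the most recent judgment target section
    judgments = []
    for c in categories:
        recent.append(c)
        if len(recent) == window:   # section is full: count votes per category
            judgments.append(Counter(recent).most_common(1)[0][0])
    return judgments
```

Because the judgment uses counts over a whole section rather than a single unit of time, brief misclassifications of individual units do not flip the judged environment.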

Publication number: 20130159921
Abstract: A display control device includes a first-contents-list displaying unit which displays a first contents list on a display screen, a base-point-content identifying unit which identifies a base-point content on the first contents list, a base-point-content-position obtaining unit which obtains a first base-point-content position on the first contents list, a focus position obtaining unit which obtains a focus position on the first contents list, a positional difference calculating unit which calculates a positional difference between the first base-point-content position and the focus position, and a first scrolling unit which scrolls the first contents list displayed on the display screen so that the positional difference decreases.
Type: Application
Filed: July 18, 2012
Publication date: June 20, 2013
Inventors: Keiji Icho, Yuichi Kobayakawa, Ryuji Inoue, Ryouichi Kawanishi
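The "scroll so that the positional difference decreases" behavior amounts to moving the base-point position toward the focus position by a fraction each update. A minimal sketch, where the gain factor is an assumed smoothing constant rather than anything specified in the publication:

```python
def scroll_step(base_point_pos, focus_pos, gain=0.2):
    """One scrolling update: move the list so the positional difference
    between the base-point content and the focus position shrinks.

    gain (an assumed smoothing factor) controls how quickly the list
    converges on the focus position.
    """
    diff = focus_pos - base_point_pos
    return base_point_pos + gain * diff

# Repeated updates converge the base-point content onto the focus position.
pos, focus = 0.0, 100.0
for _ in range(20):
    pos = scroll_step(pos, focus)
```

Applying the step repeatedly gives the familiar ease-out scrolling feel: large jumps at first, then smaller ones as the difference shrinks.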

Publication number: 20130111373
Abstract: To provide a presentation content generation device that generates various types of presentation contents by dynamically generating a template appropriate for the substance of each content set. The presentation content generation device includes an attribute information extraction unit 2 that extracts attribute information indicating image features from a content set stored in a local data storage unit 1, a design type determination unit 4 that determines a base land pattern and a color of a template based on the extracted attribute information, a selection index type determination unit 5 that, based on the extracted attribute information, selects one or more contents to be placed on the template and respective placement positions of the selected contents on the template, and a view format conversion unit 6 that places the selected contents on the respective placement positions to generate a presentation content.
Type: Application
Filed: November 21, 2011
Publication date: May 2, 2013
Inventors: Ryouichi Kawanishi, Tomoyuki Karibe, Tomohiro Konuma

Publication number: 20130108244
Abstract: An interesting section identifying device for identifying an interesting section of a video file based on an audio signal included in the video file, the interesting section being a section in which a user is estimated to express interest, includes an interesting section candidate extracting unit that extracts an interesting section candidate from the video file, the interesting section candidate being a candidate for the interesting section, a detailed structure determining unit that determines whether the interesting section candidate includes a specific detailed structure, and an interesting section identifying unit that identifies the interesting section by analyzing a specific section when the detailed structure determining unit determines that the interesting section candidate includes the detailed structure, the specific section including the detailed structure and being shorter than the interesting section candidate.
Type: Application
Filed: April 24, 2012
Publication date: May 2, 2013
Inventors: Tomohiro Konuma, Ryouichi Kawanishi, Tomoyuki Karibe, Tsutomu Uenoyama

Publication number: 20130101223
Abstract: Provided is an image processing device for associating images with objects appearing in the images, while reducing burden on the user. The image processing device: stores, for each of events, a photographic attribute indicating a photographic condition predicted to be met with respect to an image photographed in the event; stores an object predicted to appear in an image photographed in the event; extracts from a collection of photographed images a photographic attribute that is common among a predetermined number of photographed images in the collection, based on pieces of photography-related information of the respective photographed images; specifies an object stored for an event corresponding to the extracted photographic attribute; and conducts a process on the collection of photographed images to associate each photographed image containing the specified object with the object.
Type: Application
Filed: February 29, 2012
Publication date: April 25, 2013
Inventors: Ryouichi Kawanishi, Tsutomu Uenoyama
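The first step this abstract describes, extracting a photographic attribute common to a predetermined share of a collection from per-image photography-related information, can be sketched with simple counting. The dict-based attribute representation and the 50% threshold are assumptions for illustration:

```python
from collections import Counter

def common_attributes(photos, threshold=0.5):
    """Find photographic attributes shared by at least `threshold` of a
    collection, based on photography-related information per image.

    photos: list of dicts of simplified per-image conditions, e.g.
    {"flash": False, "outdoor": True}. Returns the set of (key, value)
    pairs met by enough of the photographed images.
    """
    counts = Counter()
    for p in photos:
        counts.update(p.items())        # count each (attribute, value) pair
    needed = threshold * len(photos)
    return {kv for kv, n in counts.items() if n >= needed}
```

An attribute set extracted this way could then be matched against the stored per-event photographic conditions to pick the event, and hence the objects to look for.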

Publication number: 20130097542
Abstract: A categorizing apparatus according to the present invention includes: a selected object position determining unit which determines a first position of an object selected by a user in a first region; an identifying unit which identifies one or more objects which are related to the selected object; a parameter assigning unit which assigns a parameter to each of the related objects according to a degree of relatedness between each of the related objects and the selected object, the parameter contributing to a predetermined relationship which defines tracking property of the related object to the selected object when the selected object is moved from the first position. Hence the categorizing apparatus allows the user to intuitively categorize content items through his or her operation, so that the content items are categorized as the user desires.
Type: Application
Filed: April 20, 2012
Publication date: April 18, 2013
Applicant: PANASONIC CORPORATION
Inventors: Keiji Icho, Ryouichi Kawanishi

Publication number: 20130071031
Abstract: The present invention provides an image processing device capable of combining a plurality of contents (e.g. videos) with a story line retained as much as possible, while reducing the viewer's discomfort. The image processing device compares one of the contents, which contains a first partial content and a second partial content subsequent to the first partial content, with another one of the contents, which contains a plurality of consecutive partial contents, so as to detect, as a third partial content, a partial content with the highest similarity value from among the plurality of partial contents, and generates relational information by using the highest similarity value obtained by the first processing unit, the relational information being used for merging the first partial content, the second partial content and the third partial content.
Type: Application
Filed: April 4, 2012
Publication date: March 21, 2013
Inventors: Zhongyang Huang, Yang Hua, Shuicheng Yan, Qiang Chen, Ryouichi Kawanishi

Publication number: 20130058579
Abstract: An image information processing apparatus comprising: an extraction unit that extracts an object from a photographed image; a calculation unit that calculates an orientation of the object as exhibited in the image; and a provision unit that provides a tag to the image according to the orientation of the object.
Type: Application
Filed: April 15, 2011
Publication date: March 7, 2013
Inventors: Ryouichi Kawanishi, Tsutomu Uenoyama, Tomohiro Konuma

Publication number: 20120321282
Abstract: An interesting section extracting device 104 extracts an interesting section, i.e. a section estimated to interest the user, from a video file with reference to an audio signal included in the video file, such that a specified time T0 is included in the interesting section. The interesting section extracting device 104 includes: an interface device 109 configured to obtain the specified time T0; a likelihood vector generating unit 202 configured to calculate, in one-to-one correspondence with first unit sections of the audio signal, likelihoods for anchor models Ar that respectively represent features of a plurality of types of sound pieces, and generate likelihood vectors having the calculated likelihoods as components thereof; and an interesting section extracting unit 209 configured to calculate a first feature section as a candidate section, which is a candidate for the interesting section to be extracted, by using the likelihood vectors F, and extract, as the interesting section, the part of the first feature section including the specified time T0.
Type: Application
Filed: October 28, 2011
Publication date: December 20, 2012
Inventors: Tomohiro Konuma, Ryouichi Kawanishi, Tsutomu Uenoyama

Publication number: 20120301032
Abstract: The image classification apparatus extracts first features of each received image (S22) and second features of a relevant image relevant to each received image (S25). Subsequently, the image classification apparatus obtains a third feature by calculation using locality of the extracted first and second features, the third feature being distinctive of a target object of each received image (S26), and creates model data based on the obtained third feature (S27).
Type: Application
Filed: October 6, 2011
Publication date: November 29, 2012
Inventors: Ryouichi Kawanishi, Tomohiro Konuma, Tsutomu Uenoyama

Publication number: 20120272171
Abstract: The information appliance displays content in a visually perceptible grid from which the user selects a target content, and upon selection the processor automatically identifies related content, each with an associated relatedness score. Movement by the user of the target content causes the related content to move and follow the target content as if attracted by an invisible spring force or tensile force. The system thus presents the user with a graphical representation of moving items of content which are attracted to the related content based on the degree of relatedness. In this way the user quickly learns how to control selection and organization of related content by mimicking movement of physical objects acting under kinematic forces that mimic natural objects.
Type: Application
Filed: April 21, 2011
Publication date: October 25, 2012
Applicant: PANASONIC CORPORATION
Inventors: Keiji Icho, Ryouichi Kawanishi
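The spring-like following described above can be sketched as an overdamped spring update, where stiffness grows with the relatedness score so that more related items track the dragged target more tightly. The stiffness scaling and time step are assumptions, not values from the publication:

```python
def follow(target_pos, item_pos, relatedness, dt=0.1):
    """Move a related item toward the dragged target as if pulled by an
    invisible spring whose stiffness grows with the relatedness score.

    Positions are (x, y) tuples; relatedness is in [0, 1]. This is a
    simple overdamped spring: velocity proportional to displacement,
    with an assumed stiffness scaling of 5.0.
    """
    tx, ty = target_pos
    x, y = item_pos
    k = relatedness * 5.0   # stronger relatedness -> stiffer spring
    return (x + k * (tx - x) * dt, y + k * (ty - y) * dt)
```

Calling this once per frame for every related item produces the trailing, elastic motion the abstract describes; items with relatedness 0 simply stay put.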

Publication number: 20120230546
Abstract: The present invention provides an image recognition apparatus with enhanced performance and robustness. In an image recognition apparatus 1, an image classification information accumulating unit 20 stores therein feature information defining visual features of various objects obtained through a learning process. For classification of input images, an image feature obtaining unit 18 extracts descriptors indicating features from each input image, image vocabularies corresponding to the descriptors are voted, and a classifying unit 19 calculates existence probabilities of one or more candidate objects, based on the result of the voting. According to the existence probabilities, objects contained in the image are identified. Through the calculation, the existence probabilities are adjusted by an exclusive classifier, based on exclusive relationship information defining exclusive object sets each containing different objects (object labels) predicted not to coexist in a same image.
Type: Application
Filed: September 9, 2011
Publication date: September 13, 2012
Inventors: Yang Hua, Shuicheng Yan, Zhongyang Huang, Sheng Mei Shen, Ryouichi Kawanishi
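The final adjustment step, using exclusive relationship information to suppress objects predicted not to coexist in the same image, can be sketched as follows. Keeping only the strongest candidate in each exclusive set and zeroing the rest is a simplifying assumption; the publication only says probabilities are adjusted:

```python
def adjust_with_exclusions(probs, exclusive_sets):
    """Adjust per-object existence probabilities using exclusive
    relationship information.

    probs: {label: existence probability} from the voting classifier.
    exclusive_sets: list of sets of labels predicted not to coexist in
    one image. Within each set, the strongest candidate is kept and the
    others are suppressed (set to 0.0 here, an illustrative choice).
    """
    adjusted = dict(probs)
    for group in exclusive_sets:
        present = [label for label in group if label in adjusted]
        if len(present) > 1:
            best = max(present, key=lambda label: adjusted[label])
            for label in present:
                if label != best:
                    adjusted[label] = 0.0
    return adjusted
```

For instance, if "beach" and "ski slope" are declared mutually exclusive, a high beach probability suppresses the ski-slope candidate while unrelated labels are untouched.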

Publication number: 20120117069
Abstract: The present invention aims to provide a data processing device that provides a result of categorization that is satisfactory to a user, even when user data includes an object specific to the user.
Type: Application
Filed: December 24, 2010
Publication date: May 10, 2012
Inventors: Ryouichi Kawanishi, Tsutomu Uenoyama, Akira Ishida

Patent number: 8154616
Abstract: A data processing apparatus 2 acquires images captured by cameras, and calculates the relative positional relationship between the camera that captured an image and a camera that appears in the captured image, by image recognition of the captured image. The data processing apparatus 2 aggregates the positional relationships calculated for the respective captured images and, by taking one camera as a reference, calculates the relative positions and directions of the other cameras. Based on the calculated results, a user interface for supporting the user's selection of a captured image is generated.
Type: Grant
Filed: January 16, 2008
Date of Patent: April 10, 2012
Assignee: Panasonic Corporation
Inventors: Keiji Icho, Masayuki Misaki, Takashi Kawamura, Kuniaki Isogai, Ryouichi Kawanishi, Jun Ohmiya, Hiromichi Nishiyama
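The aggregation step, turning pairwise camera-to-camera relationships into positions relative to one reference camera, can be sketched by walking the observation graph from the reference. Using 2D offset vectors (rather than full poses with directions) is a simplifying assumption:

```python
def absolute_positions(pairwise, reference=0):
    """Aggregate pairwise relative positions into positions relative to
    one reference camera.

    pairwise: {(a, b): (dx, dy)} meaning camera b sits at offset (dx, dy)
    from camera a, as obtained (for example) by image recognition of a's
    captured image. 2D offsets are a simplifying assumption; the patent
    also handles camera directions.
    """
    positions = {reference: (0.0, 0.0)}
    frontier = [reference]
    while frontier:
        a = frontier.pop()
        for (p, q), (dx, dy) in pairwise.items():
            if p == a and q not in positions:
                x, y = positions[a]          # chain the offset from a known camera
                positions[q] = (x + dx, y + dy)
                frontier.append(q)
    return positions
```

Each camera's position is reached by chaining offsets along a path of observations from the reference camera, which is enough to lay the cameras out in a selection UI.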

Publication number: 20110167384
Abstract: The invention aims to provide a user interface for efficiently displaying desired content from among a large number of contents. An operation location and an operation amount of an operation that has been made on an operation member are detected. Based on the operation location, one content is selected from among a plurality of contents that have been arranged in sequence, and a display unit displays the selected one content. The display unit displays another content when the operation location has moved during the display of the selected one content, an order of said another content being different from an order of the selected one content by a number based on the operation amount detected by a detection unit.
Type: Application
Filed: May 18, 2010
Publication date: July 7, 2011
Inventors: Hiroshi Yabu, Ryouichi Kawanishi, Tsutomu Uenoyama