Patents by Inventor Yasuhiro Okuno
Yasuhiro Okuno has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20170344881
Abstract: An information processing apparatus includes a learning unit configured to learn a plurality of multi-layer neural networks configured to carry out a plurality of tasks, a generation unit configured to generate a shared layer candidate at a predetermined layer between or among the plurality of multi-layer neural networks, a first relearning unit configured to relearn the plurality of multi-layer neural networks in a structure using the shared layer candidate, and a determination unit configured to determine whether to share the shared layer candidate at the predetermined layer with respect to each of the plurality of tasks based on an evaluation of the relearning.
Type: Application
Filed: May 23, 2017
Publication date: November 30, 2017
Inventors: Yasuhiro Okuno, Shunta Tate, Yasuhiro Komori
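The abstract describes generating a shared-layer candidate between task networks, relearning with the candidate in place, and deciding per task whether to share it based on an evaluation. A minimal numpy sketch of that decision rule follows; the synthetic data, random-feature hidden layers, averaging rule for the candidate, and acceptance tolerance are all illustrative assumptions, not the patent's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def train_head(H, y):
    # least-squares fit of the output layer on hidden activations H
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def task_loss(W_hidden, w_head, X, y):
    return float(np.mean((relu(X @ W_hidden) @ w_head - y) ** 2))

# two related tasks over the same inputs (synthetic data)
X = rng.normal(size=(200, 5))
y1 = relu(X @ rng.normal(size=(5,))) + 0.01 * rng.normal(size=200)
y2 = relu(X @ rng.normal(size=(5,))) + 0.01 * rng.normal(size=200)

# step 1: learn a private hidden layer per task (random features + fitted head)
W1, W2 = rng.normal(size=(5, 16)), rng.normal(size=(5, 16))
w1 = train_head(relu(X @ W1), y1)
w2 = train_head(relu(X @ W2), y2)
base = [task_loss(W1, w1, X, y1), task_loss(W2, w2, X, y2)]

# step 2: generate a shared-layer candidate (here: average of the two layers)
W_shared = 0.5 * (W1 + W2)

# step 3: relearn each network with the candidate in place (refit the heads)
H = relu(X @ W_shared)
shared = [task_loss(W_shared, train_head(H, y1), X, y1),
          task_loss(W_shared, train_head(H, y2), X, y2)]

# step 4: share the layer for a task only if its loss does not degrade much
TOLERANCE = 1.10  # accept up to 10% higher loss (an arbitrary choice)
share = [s <= b * TOLERANCE for s, b in zip(shared, base)]
print(share)
```

The per-task decision list is the point of the sketch: sharing is accepted independently for each task, so one network may adopt the candidate layer while the other keeps its private layer.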
-
Patent number: 9189693
Abstract: An information processing apparatus encodes an input pattern into a code including a plurality of bits, calculates reliabilities for the respective bits of the code, generates similar codes, each similar to the code, based on the reliabilities, and recognizes the input pattern based on the code and the similar codes.
Type: Grant
Filed: November 28, 2012
Date of Patent: November 17, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Yasuhiro Okuno, Yusuke Mitarai
-
Patent number: 9008437
Abstract: An information processing apparatus sets a plurality of reference locations of data in information as one reference location pattern and acquires a feature amount obtained from a value of data of one of the plurality of pieces of reference information in one reference location pattern for each of a plurality of reference location patterns and the plurality of pieces of reference information. The apparatus extracts data included in the input information according to each of the plurality of reference location patterns, selects the reference location pattern for classification of the input information from the plurality of reference location patterns based on a value of data included in the extracted input information, and executes classification of the input information by using the feature amount in the selected reference location pattern and data included in the input information at a reference location indicated by the reference location pattern.
Type: Grant
Filed: November 26, 2012
Date of Patent: April 14, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Yasuhiro Okuno, Katsuhiko Mori
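The mechanism resembles fern-style classifiers: each reference location pattern is a fixed set of data positions, features are precomputed from reference information at those positions, and the input's own values at those positions drive classification. A loose approximation of that idea (the data, the binarised feature, and the use of all patterns rather than a data-driven selection are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# reference information: labeled example "images" (flattened, hypothetical)
refs = rng.normal(size=(50, 64))
labels = rng.integers(0, 3, size=50)

# several reference location patterns, each a set of data positions
patterns = [rng.choice(64, size=4, replace=False) for _ in range(8)]

def feature(vec, locs):
    """Feature amount for one pattern: binarised values at the locations,
    packed into a table index (a fern-style simplification)."""
    bits = vec[locs] > 0
    return int(np.dot(bits, 1 << np.arange(len(locs))))

# precompute, per pattern, a table: feature value -> class histogram
tables = []
for locs in patterns:
    t = np.zeros((16, 3))
    for v, y in zip(refs, labels):
        t[feature(v, locs), y] += 1
    tables.append(t)

def classify(vec):
    # accumulate the class histograms selected by the input's own data
    votes = np.zeros(3)
    for locs, t in zip(patterns, tables):
        votes += t[feature(vec, locs)]
    return int(np.argmax(votes))

print(classify(refs[0]))
```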
-
Patent number: 8866811
Abstract: Position and orientation information of a specific part of an observer is acquired (S403). It is determined whether or not a region of a specific part virtual object that simulates the specific part and that of another virtual object overlap each other on an image of a virtual space after the specific part virtual object is laid out based on the position and orientation information on the virtual space on which one or more virtual objects are laid out (S405). When it is determined that the regions overlap each other, an image of the virtual space on which the other virtual object and the specific part virtual object are laid out is generated; when it is determined that the regions do not overlap each other, an image of the virtual space on which only the other virtual object is laid out is generated (S409).
Type: Grant
Filed: October 30, 2008
Date of Patent: October 21, 2014
Assignee: Canon Kabushiki Kaisha
Inventor: Yasuhiro Okuno
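The render-only-when-overlapping rule can be illustrated with axis-aligned bounding boxes. The patent does not specify how region overlap is tested; AABBs and the object names below are assumptions made for the sketch:

```python
def aabb_overlap(a, b):
    """Overlap test for axis-aligned boxes given as (min_xyz, max_xyz) pairs."""
    (amin, amax), (bmin, bmax) = a, b
    return all(lo1 <= hi2 and lo2 <= hi1
               for lo1, hi1, lo2, hi2 in zip(amin, amax, bmin, bmax))

def objects_to_render(hand_box, other_box):
    # draw the hand's stand-in virtual object only while it overlaps another
    if aabb_overlap(hand_box, other_box):
        return ["other", "hand"]
    return ["other"]

hand = ((0, 0, 0), (1, 1, 1))
near = ((0.5, 0.5, 0.5), (2, 2, 2))
far = ((5, 5, 5), (6, 6, 6))
print(objects_to_render(hand, near))  # ['other', 'hand']
print(objects_to_render(hand, far))   # ['other']
```

The effect matches the abstract's two branches: the observer's part is visualised only while it intersects some other virtual object; otherwise only the other objects are drawn.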
-
Patent number: 8782126
Abstract: An end user is provided with an environment for easily remote-controlling a video camera via a general network such as the Internet. For this purpose, on the client side, the content of camera control is described in a file-transfer protocol description, and the description is transferred to a camera server on the Internet via a browser. The camera server interprets the description, controls a camera in accordance with the designated content to perform image sensing, and returns the obtained video image to the client as the content of a file. The client performs various controls while observing the video image. When a desired angle has been found, the client issues an instruction to register the angle in a bookmark, and the angle information displayed at that time is registered. Thereafter, the user of the client can see a video image obtained under the same image-sensing conditions by merely selecting the angle information registered in the bookmark.
Type: Grant
Filed: July 28, 2011
Date of Patent: July 15, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Hirokazu Ohi, Yasutomo Suzuki, Tadashi Yamakawa, Mamoru Sato, Yasuhiro Okuno
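The bookmark mechanism amounts to snapshotting the current angle parameters at registration time and replaying them on selection. A minimal sketch; the class, parameter names, and pan/tilt/zoom model are hypothetical, not taken from the patent:

```python
class CameraClient:
    """Sketch of the angle-bookmark idea: store the parameters shown at
    registration time and replay them later."""

    def __init__(self):
        self.angle = {"pan": 0, "tilt": 0, "zoom": 1}
        self.bookmarks = {}

    def control(self, **params):
        self.angle.update(params)

    def bookmark(self, name):
        self.bookmarks[name] = dict(self.angle)  # snapshot, not a reference

    def recall(self, name):
        self.angle = dict(self.bookmarks[name])
        return self.angle

cam = CameraClient()
cam.control(pan=30, tilt=-10, zoom=4)
cam.bookmark("doorway")
cam.control(pan=0, tilt=0, zoom=1)
print(cam.recall("doorway"))  # {'pan': 30, 'tilt': -10, 'zoom': 4}
```

Copying the dict at bookmark time is the important detail: a stored reference would silently track later camera movements instead of preserving the registered angle.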
-
Patent number: 8755611
Abstract: An information processing apparatus that discriminates the orientation of a target includes a calculation unit that calculates a distribution of a difference in feature amount between a plurality of learning patterns each showing the orientation of a target, a determination unit that determines, using a probability distribution obtained from the distribution of differences calculated by the calculation unit, a pixel that is in an input pattern and is to be referred to in order to discriminate the orientation of a target in the input pattern, and a discrimination unit that performs discrimination for obtaining the orientation of the target in the input pattern by comparing a feature amount of the pixel determined by the determination unit and a threshold set in advance.
Type: Grant
Filed: August 10, 2011
Date of Patent: June 17, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Shunta Tate, Yusuke Mitarai, Hiroto Yoshii, Yasuhiro Okuno, Katsuhiko Mori, Masakazu Matsugu
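The pipeline is: compute the distribution of feature differences across learning patterns, use it to pick the pixel worth referring to, then discriminate by comparing that pixel against a threshold. A simplified sketch using mean differences in place of the patent's full probability distribution (the data, the selection rule, and the midpoint threshold are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# learning patterns for two orientations (flattened hypothetical images)
ori_a = rng.normal(0.0, 1.0, size=(40, 25))
ori_b = rng.normal(0.0, 1.0, size=(40, 25))
ori_b[:, 7] += 3.0  # pixel 7 genuinely separates the two orientations

# distribution of feature differences between the two sets of patterns
diff = ori_a.mean(axis=0) - ori_b.mean(axis=0)

# determine the pixel with the largest separation; set the threshold
# halfway between the two class means at that pixel
pixel = int(np.argmax(np.abs(diff)))
threshold = 0.5 * (ori_a[:, pixel].mean() + ori_b[:, pixel].mean())

def discriminate(pattern):
    """Compare the chosen pixel's feature amount against the preset threshold."""
    return "A" if pattern[pixel] < threshold else "B"

print(pixel)  # index of the most discriminative pixel
```

A single pixel-versus-threshold comparison is of course a weak classifier on its own; in practice such tests are the split nodes of a tree or cascaded into an ensemble.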
-
Patent number: 8633871
Abstract: If an image acquired from a video camera (113) contains a two-dimensional bar code as information unique to an operation input device (116), information unique to the video camera (113) and the information unique to the operation input device (116) are managed in a shared memory (107) in association with each other.
Type: Grant
Filed: May 31, 2012
Date of Patent: January 21, 2014
Assignee: Canon Kabushiki Kaisha
Inventor: Yasuhiro Okuno
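The core of this scheme is an association table: when a camera's image contains a device's bar code, the two units' unique identifiers become linked. A toy sketch in which a dict plays the role of the shared memory (107); the identifiers and function name are hypothetical:

```python
# association table playing the role of the shared memory (107)
shared_memory = {}

def on_frame(camera_id, barcodes_in_frame):
    """If a camera's image contains a device's 2-D bar code, associate the
    camera's unique information with that device's in the shared table."""
    for device_id in barcodes_in_frame:
        shared_memory[camera_id] = device_id

on_frame("camera-113", ["device-116"])
print(shared_memory)  # {'camera-113': 'device-116'}
```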
-
Publication number: 20120236179
Abstract: If an image acquired from a video camera (113) contains a two-dimensional bar code as information unique to an operation input device (116), information unique to the video camera (113) and the information unique to the operation input device (116) are managed in a shared memory (107) in association with each other.
Type: Application
Filed: May 31, 2012
Publication date: September 20, 2012
Applicant: Canon Kabushiki Kaisha
Inventor: Yasuhiro Okuno
-
Patent number: 8207909
Abstract: If an image acquired from a video camera (113) contains a two-dimensional bar code as information unique to an operation input device (116), information unique to the video camera (113) and the information unique to the operation input device (116) are managed in a shared memory (107) in association with each other.
Type: Grant
Filed: June 18, 2007
Date of Patent: June 26, 2012
Assignee: Canon Kabushiki Kaisha
Inventor: Yasuhiro Okuno
-
Patent number: 8139087
Abstract: An image presentation system which enables a user observing an image on a first display to recognize an instruction sent from a person seeing an image on a second display to the user. An HMD as the first display detects the position and orientation of the HMD and presents an image to the user. The second display presents an image to the person who gives an instruction to the user. A three-dimensional CG drawing device draws a virtual space image seen from a point of view corresponding to the position and orientation of the HMD and displays the virtual space image on the HMD and the second display. When receiving the instruction, the three-dimensional CG drawing device draws a virtual space image to be displayed on the HMD such that the virtual space image reflects the input instruction.
Type: Grant
Filed: June 28, 2006
Date of Patent: March 20, 2012
Assignee: Canon Kabushiki Kaisha
Inventors: Tsuyoshi Kuroki, Sonoko Maeda, Kazuki Takemoto, Yasuo Katano, Yasuhiro Okuno
-
Publication number: 20120045120
Abstract: An information processing apparatus that discriminates the orientation of a target includes a calculation unit that calculates a distribution of a difference in feature amount between a plurality of learning patterns each showing the orientation of a target, a determination unit that determines, using a probability distribution obtained from the distribution of differences calculated by the calculation unit, a pixel that is in an input pattern and is to be referred to in order to discriminate the orientation of a target in the input pattern, and a discrimination unit that performs discrimination for obtaining the orientation of the target in the input pattern by comparing a feature amount of the pixel determined by the determination unit and a threshold set in advance.
Type: Application
Filed: August 10, 2011
Publication date: February 23, 2012
Applicant: Canon Kabushiki Kaisha
Inventors: Shunta Tate, Yusuke Mitarai, Hiroto Yoshii, Yasuhiro Okuno, Katsuhiko Mori, Masakazu Matsugu
-
Publication number: 20110279694
Abstract: An end user is provided with an environment for easily remote-controlling a video camera via a general network such as the Internet. For this purpose, on the client side, the content of camera control is described in a file-transfer protocol description, and the description is transferred to a camera server on the Internet via a browser. The camera server interprets the description, controls a camera in accordance with the designated content to perform image sensing, and returns the obtained video image to the client as the content of a file. The client performs various controls while observing the video image. When a desired angle has been found, the client issues an instruction to register the angle in a bookmark, and the angle information displayed at that time is registered. Thereafter, the user of the client can see a video image obtained under the same image-sensing conditions by merely selecting the angle information registered in the bookmark.
Type: Application
Filed: July 28, 2011
Publication date: November 17, 2011
Inventors: Hirokazu Ohi, Yasutomo Suzuki, Tadashi Yamakawa, Mamoru Sato, Yasuhiro Okuno
-
Patent number: 8022967
Abstract: An image processing method includes the steps of acquiring an image of a physical space, acquiring a position and orientation of a viewpoint of the image, generating an image of a virtual object, detecting an area which consists of pixels each having a predetermined pixel value, and superimposing the image of the virtual object on the image of the physical space. The superimposition step includes calculating a distance between a position of the virtual object and a position of the viewpoint, acquiring an instruction indicating whether or not the virtual object is emphasis-displayed, and setting a flag indicating whether or not the image of the virtual object is to be set as a masked target. In the masking process, the image of the virtual object is or is not superimposed on the image of the physical space, depending on whether the image of the virtual object is set as the masked target.
Type: Grant
Filed: June 1, 2005
Date of Patent: September 20, 2011
Assignee: Canon Kabushiki Kaisha
Inventors: Yasuhiro Okuno, Toshikazu Ohshima, Kaname Tanimura
-
Patent number: 8015242
Abstract: An end user is provided with an environment for easily remote-controlling a video camera via a general network such as the Internet. For this purpose, on the client side, the content of camera control is described in a file-transfer protocol description, and the description is transferred to a camera server on the Internet via a browser. The camera server interprets the description, controls a camera in accordance with the designated content to perform image sensing, and returns the obtained video image to the client as the content of a file. The client performs various controls while observing the video image. When a desired angle has been found, the client issues an instruction to register the angle in a bookmark, and the angle information displayed at that time is registered. Thereafter, the user of the client can see a video image obtained under the same image-sensing conditions by merely selecting the angle information registered in the bookmark.
Type: Grant
Filed: July 7, 2006
Date of Patent: September 6, 2011
Assignee: Canon Kabushiki Kaisha
Inventors: Hirokazu Ohi, Yasutomo Suzuki, Tadashi Yamakawa, Mamoru Sato, Yasuhiro Okuno
-
Patent number: 7952594
Abstract: The position and orientation of the viewpoint of an observer (100) are acquired. The position and orientation of a stylus (120) are acquired. A list image is laid out near the position of the stylus (120). An image of the virtual space after laying out the list image, as seen in accordance with the position and orientation of the viewpoint, is generated. The generated image is output to the display screen of an HMD (110).
Type: Grant
Filed: May 23, 2005
Date of Patent: May 31, 2011
Assignee: Canon Kabushiki Kaisha
Inventors: Kenji Morita, Yasuhiro Okuno, Hideo Noro, Akihiro Katayama
-
Publication number: 20100265164
Abstract: The positional relationship among a physical object, virtual object, and viewpoint is calculated using the position information of the physical object, that of the virtual object, and that of the viewpoint, and it is determined whether or not the calculated positional relationship satisfies a predetermined condition (S402). When it is determined that the positional relationship satisfies the predetermined condition, sound data is adjusted to adjust a sound indicated by the sound data (S404), and a sound signal based on the adjusted sound data is generated and output.
Type: Application
Filed: November 5, 2008
Publication date: October 21, 2010
Applicant: Canon Kabushiki Kaisha
Inventor: Yasuhiro Okuno
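The flow of S402 to S404 is: compute a positional relationship, test it against a predetermined condition, and adjust the sound data when the condition holds. A sketch with distance as the relationship and a volume gain as the adjustment; the patent does not specify the condition or the adjustment, so the proximity threshold and gain curves below are assumptions:

```python
import math

def adjusted_volume(physical_pos, virtual_pos, base_volume=1.0, near=2.0):
    """Boost the sound while the physical and virtual objects are within
    `near` units of each other; attenuate with distance beyond that."""
    d = math.dist(physical_pos, virtual_pos)
    if d <= near:                               # predetermined condition met
        return base_volume * (2.0 - d / near)   # up to 2x gain at contact
    return base_volume / (1.0 + d - near)       # fall-off outside the range

print(adjusted_volume((0, 0, 0), (1, 0, 0)))  # 1.5
print(adjusted_volume((0, 0, 0), (5, 0, 0)))  # 0.25
```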
-
Patent number: 7773098
Abstract: A virtual reality presentation apparatus (100) has a CPU (101) which controls the overall apparatus, a memory (103) which stores various programs, a memory (104) which stores various data, an HMD (105) as a video display device, and a position and orientation controller (106) having position and orientation sensors (107, 108, 109), which are connected to each other via a bus (102). After a position and orientation required to render a CG scene for one frame are determined, when CG objects are in an interference state immediately before this processing and the difference between the current time and an interference time (128) set upon determining the interference state does not exceed an interference detection skip time (124), CG image generation means (112) generates a CG scene for one frame based on the determined position and orientation without checking if an interference has occurred between the CG objects.
Type: Grant
Filed: May 5, 2006
Date of Patent: August 10, 2010
Assignee: Canon Kabushiki Kaisha
Inventors: Yasuhiro Okuno, Hideo Noro, Taichi Matsui
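The optimisation described is a skip window: while a recent interference is still fresh, the per-frame interference test is bypassed. A sketch of that gate with an injectable clock for testing; the class and method names are hypothetical:

```python
import time

class InterferenceChecker:
    """Gate the expensive per-frame interference test: skip it while a
    recent interference is inside the skip window, as the frame loop
    described in the abstract does."""

    def __init__(self, skip_time=0.5, clock=time.monotonic):
        self.skip_time = skip_time
        self.clock = clock
        self.interfering = False
        self.interference_time = None

    def should_check(self):
        if self.interfering and self.interference_time is not None:
            return self.clock() - self.interference_time > self.skip_time
        return True

    def record(self, interfering):
        self.interfering = interfering
        if interfering:
            self.interference_time = self.clock()

# demo with a fake clock so the window is deterministic
t = [0.0]
chk = InterferenceChecker(skip_time=0.5, clock=lambda: t[0])
chk.record(True)           # interference detected at t = 0
t[0] = 0.2
print(chk.should_check())  # False: still inside the skip window
t[0] = 0.8
print(chk.should_check())  # True: window elapsed, check again
```

Skipping redundant checks while objects are known to interfere keeps the frame rate stable, at the cost of reacting to separation at most one skip window late.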
-
Patent number: 7594853
Abstract: There is provided a control apparatus for a game or the like that is capable of automatically carrying out a game or simulation starting/terminating process according to operations necessarily performed in executing the game or simulation. Controllers are mounted on or gripped by a player for operation. Position sensors detect the positional relationship of the controllers with respect to a space for executing the game. A control circuit controls the start/termination of the game, and the turning on/off of the power sources of the controllers, according to the results of the detection by the position sensors.
Type: Grant
Filed: June 12, 2006
Date of Patent: September 29, 2009
Assignee: Canon Kabushiki Kaisha
Inventors: Hideo Noro, Yasuhiro Okuno
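The control circuit behaves like a small state machine keyed on whether the controllers are inside the game space. A sketch under assumed semantics (the patent does not define the space bounds or the all-controllers-inside rule; both are illustrative choices):

```python
def in_play_space(pos, space_min=(0, 0, 0), space_max=(3, 3, 3)):
    """Is a controller position inside the (hypothetical) game space?"""
    return all(lo <= p <= hi for p, lo, hi in zip(pos, space_min, space_max))

class GameController:
    """Start the game when every controller is inside the play space;
    terminate it when any controller leaves."""

    def __init__(self):
        self.running = False

    def update(self, controller_positions):
        inside = all(in_play_space(p) for p in controller_positions)
        if inside and not self.running:
            self.running = True   # also: power the controllers on, start game
        elif not inside and self.running:
            self.running = False  # also: power off, terminate game
        return self.running

game = GameController()
print(game.update([(1, 1, 1), (2, 2, 2)]))  # True: both inside, game starts
print(game.update([(9, 9, 9), (2, 2, 2)]))  # False: one left, game stops
```

Tying start/stop to position makes the process automatic, matching the abstract's point that the player never issues an explicit start or quit command.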
-
Publication number: 20090128564
Abstract: Position and orientation information of a specific part of an observer is acquired (S403). It is determined whether or not a region of a specific part virtual object that simulates the specific part and that of another virtual object overlap each other on an image of a virtual space after the specific part virtual object is laid out based on the position and orientation information on the virtual space on which one or more virtual objects are laid out (S405). When it is determined that the regions overlap each other, an image of the virtual space on which the other virtual object and the specific part virtual object are laid out is generated; when it is determined that the regions do not overlap each other, an image of the virtual space on which only the other virtual object is laid out is generated (S409).
Type: Application
Filed: October 30, 2008
Publication date: May 21, 2009
Applicant: Canon Kabushiki Kaisha
Inventor: Yasuhiro Okuno
-
Publication number: 20080106488
Abstract: If an image acquired from a video camera (113) contains a two-dimensional bar code as information unique to an operation input device (116), information unique to the video camera (113) and the information unique to the operation input device (116) are managed in a shared memory (107) in association with each other.
Type: Application
Filed: June 18, 2007
Publication date: May 8, 2008
Inventor: Yasuhiro Okuno