Patents by Inventor Shuhei TERAHATA
Shuhei TERAHATA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10776991
Abstract: A method of providing a virtual experience to a user includes identifying a plurality of virtual objects. The method further includes detecting a position of a part of the user's body other than the user's head. The method further includes detecting a reference line of sight of the user. The method further includes setting an extension direction for a first virtual object of the plurality of virtual objects based on a direction of the reference line of sight. The method further includes setting a region for the first virtual object, wherein the region comprises a part extending in the extension direction. The method further includes determining whether the first virtual object and a virtual representation of the part of the body have touched based on a positional relationship between the region and a position of the virtual representation of the part of the body.
Type: Grant
Filed: December 17, 2018
Date of Patent: September 15, 2020
Assignee: COLOPL, INC.
Inventor: Shuhei Terahata
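The touch determination described in this abstract — a region anchored to a virtual object and extended along the reference line of sight, tested against the tracked body part — can be sketched roughly as a capsule test. All names, dimensions, and default values below are illustrative assumptions, not taken from the patent:

```python
import math

def set_extension_region(obj_pos, gaze_dir, length=1.0, radius=0.3):
    """Build a capsule-like region: a segment starting at the object's
    position and extending along the normalized gaze direction."""
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    d = tuple(c / norm for c in gaze_dir)
    end = tuple(p + length * c for p, c in zip(obj_pos, d))
    return (obj_pos, end, radius)

def touches(region, hand_pos):
    """True if hand_pos lies within `radius` of the segment [start, end],
    i.e. the virtual hand and the extended region overlap."""
    start, end, radius = region
    seg = tuple(e - s for s, e in zip(start, end))
    to_hand = tuple(h - s for s, h in zip(start, hand_pos))
    seg_len2 = sum(c * c for c in seg)
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, sum(a * b for a, b in zip(seg, to_hand)) / seg_len2))
    closest = tuple(s + t * c for s, c in zip(start, seg))
    dist2 = sum((h - c) ** 2 for h, c in zip(hand_pos, closest))
    return dist2 <= radius * radius
```

A hand position near the extension axis registers as a touch even when it is not touching the object's own mesh, which is the point of extending the region toward the gaze direction.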
-
Patent number: 10496158
Abstract: A communication load is reduced in a virtual-reality-space sharing system. An image generation device for generating a virtual reality space image includes a memory for storing an instruction and a processor coupled to the memory for executing the instruction. The instruction, when executed by the processor, causes the processor to acquire, from a second terminal sharing a virtual reality space with a first terminal used by a first user, line-of-sight information including a position and a line-of-sight direction of a second user using the second terminal in the virtual reality space, generate a virtual reality space image to be displayed on the first terminal based on the line-of-sight information acquired from the second terminal, and supply the generated virtual reality space image to the first terminal.
Type: Grant
Filed: October 5, 2016
Date of Patent: December 3, 2019
Assignee: COLOPL, INC.
Inventor: Shuhei Terahata
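The load reduction here comes from exchanging only a small line-of-sight record per remote user and reconstructing their presence locally, rather than streaming rendered frames. A minimal sketch of that record and its local use, with all field names and types assumed for illustration:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LineOfSight:
    # compact per-user state exchanged between terminals
    position: Tuple[float, float, float]   # remote user's position in the shared space
    direction: Tuple[float, float, float]  # remote user's unit gaze vector

def render_remote_user(los: LineOfSight) -> dict:
    """Reconstruct the remote user locally from the small record above;
    only this record crosses the network, not image data."""
    return {"avatar_pos": los.position, "avatar_facing": los.direction}
```

Six floats per user per update is a far smaller payload than any per-frame image transfer, which is the communication-load saving the abstract describes.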
-
Publication number: 20190122420
Abstract: A method of providing a virtual experience to a user includes identifying a plurality of virtual objects. The method further includes detecting a position of a part of the user's body other than the user's head. The method further includes detecting a reference line of sight of the user. The method further includes setting an extension direction for a first virtual object of the plurality of virtual objects based on a direction of the reference line of sight. The method further includes setting a region for the first virtual object, wherein the region comprises a part extending in the extension direction. The method further includes determining whether the first virtual object and a virtual representation of the part of the body have touched based on a positional relationship between the region and a position of the virtual representation of the part of the body.
Type: Application
Filed: December 17, 2018
Publication date: April 25, 2019
Inventor: Shuhei TERAHATA
-
Patent number: 10198855
Abstract: A method of providing a virtual experience to a user includes identifying a plurality of virtual objects. The method further includes detecting a position of a part of the user's body other than the user's head. The method further includes detecting a reference line of sight of the user. The method further includes setting an extension direction for a first virtual object of the plurality of virtual objects based on a direction of the reference line of sight. The method further includes setting a region for the first virtual object, wherein the region comprises a part extending in the extension direction. The method further includes determining whether the first virtual object and a virtual representation of the part of the body have touched based on a positional relationship between the region and a position of the virtual representation of the part of the body.
Type: Grant
Filed: July 19, 2017
Date of Patent: February 5, 2019
Assignee: COLOPL, INC.
Inventor: Shuhei Terahata
-
Publication number: 20180329487
Abstract: A method includes defining a virtual space, the virtual space containing a virtual viewpoint, an operation object, and a target object. The method further includes defining a visual field in the virtual space in accordance with a position of the virtual viewpoint in the virtual space. The method further includes generating a visual-field image corresponding to the visual field. The method further includes displaying the visual-field image on a head-mounted device (HMD). The method further includes detecting a position of a part of a body of a user associated with the HMD. The method further includes determining that a state of the part of the body satisfies a first condition. The method further includes determining, on condition that the first condition is satisfied, that a state of the part of the body satisfies a second condition different from the first condition, the second condition including a condition that the operation object has yet to select the target object.
Type: Application
Filed: May 11, 2018
Publication date: November 15, 2018
Inventors: Masaya AOYAMA, Shuhei TERAHATA
-
Publication number: 20180321817
Abstract: A method includes defining a virtual space, wherein the virtual space comprises a virtual viewpoint and an operation object. The method further includes detecting a motion of a head-mounted device (HMD). The method further includes generating a visual-field image in the virtual space in accordance with a position of the virtual viewpoint in the virtual space and the motion of the HMD. The method further includes detecting, in a real space, a motion of a part of a body of a user associated with the HMD. The method further includes moving the operation object in accordance with the motion of the part of the body. The method further includes displaying the visual-field image on a display associated with the HMD. The method further includes setting the virtual viewpoint as a first viewpoint. The method further includes displaying a first object in a field-of-view image from the first viewpoint.
Type: Application
Filed: May 1, 2018
Publication date: November 8, 2018
Inventor: Shuhei TERAHATA
-
Publication number: 20180314326
Abstract: In methods that determine a point of gaze from the operator's actual line of sight and display a cursor or pointer at that point to designate a position in a virtual space, a problem arises: because a normal line of sight points slightly downward, it is difficult to designate a position on a surface that has only a small apparent area as viewed from the operator, such as the upper or lower surface of an object. To address this, a provisional line of sight for designating the object being a target of operation is cast not from the position of the operator's eye in the virtual space but from a position separated from the eye by a certain first distance in the up-down direction.
Type: Application
Filed: June 6, 2016
Publication date: November 1, 2018
Inventor: Shuhei TERAHATA
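The offset-origin idea can be illustrated with a small sketch: the selection ray starts from a point displaced vertically from the eye but still aims at the same gaze point. The offset value and coordinate convention (y-up) are assumptions for illustration:

```python
def provisional_ray(eye_pos, gaze_point, vertical_offset=0.5):
    """Cast the object-designation ray from a point displaced vertically
    from the eye, aimed at the same gaze point, so that upper and lower
    surfaces present a larger apparent area to the ray."""
    origin = (eye_pos[0], eye_pos[1] + vertical_offset, eye_pos[2])
    direction = tuple(g - o for g, o in zip(gaze_point, origin))
    return origin, direction
```

With a positive offset the ray approaches the object from above, so an object's upper surface — nearly edge-on to the real line of sight — becomes an easy target.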
-
Patent number: 9971157
Abstract: A method includes defining a virtual space including a virtual camera, a character object, and a field in which the character object is movable, the field defining a peripheral direction and surrounding the virtual camera along the peripheral direction. The method includes defining a visual field of the virtual camera. The method includes displaying a visual-field image on a head-mounted display based on the visual field. The method includes detecting a movement input including a lateral-direction component, the lateral direction being different from the peripheral direction. The method includes moving the character object in the field along the peripheral direction in response to the lateral-direction component of the movement input. The method includes detecting movement of the head-mounted display and changing a direction of the virtual camera based on detected movement of the head-mounted display. The method includes updating the visual-field image based on the direction of the virtual camera.
Type: Grant
Filed: August 11, 2017
Date of Patent: May 15, 2018
Assignee: COLOPL, INC.
Inventors: Takanori Yoshioka, Takeshi Kobayashi, Takao Kashihara, Shuhei Terahata
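The mapping from a lateral input to movement along the peripheral direction can be sketched by storing the character's position as an angle on a ring centered on the virtual camera. The speed constant and ring parameterization are assumptions, not details from the patent:

```python
import math

def move_along_periphery(angle, lateral_input, radius, speed=0.1):
    """Convert a left/right input component into angular motion around the
    virtual camera; the character stays on a ring of the given radius in
    the x-z-like plane."""
    new_angle = angle + lateral_input * speed / radius
    position = (radius * math.cos(new_angle), radius * math.sin(new_angle))
    return new_angle, position
```

Dividing by the radius keeps the character's linear speed along the ring roughly constant regardless of how far the field is from the camera.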
-
Patent number: 9959679
Abstract: A method of controlling a widget in a virtual space is disclosed, comprising: moving a field-of-view and a point of gaze in the virtual space; determining if the widget and the point of gaze overlap each other, and providing an input to the widget if the widget and the point of gaze overlap; determining if at least a part of the widget is positioned outside the field of view; and moving the widget so that the part of the widget is positioned inside the field of view if it is determined that at least a part of the widget is positioned outside the field of view.
Type: Grant
Filed: August 8, 2016
Date of Patent: May 1, 2018
Assignee: COLOPL, Inc.
Inventor: Shuhei Terahata
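The two determinations in this abstract — gaze/widget overlap triggers an input, and a widget partly outside the field of view is moved back inside — can be sketched in 2D screen coordinates. The rectangle representation and clamping behavior are illustrative assumptions:

```python
def update_widget(widget_rect, gaze, fov_rect):
    """widget_rect and fov_rect are (x, y, w, h); gaze is an (x, y) point.
    Returns (input_fired, new_widget_rect)."""
    x, y, w, h = widget_rect
    # gaze overlapping the widget provides an input to it
    fired = x <= gaze[0] <= x + w and y <= gaze[1] <= y + h
    # if any part of the widget lies outside the field of view,
    # clamp it back so the whole widget is inside
    fx, fy, fw, fh = fov_rect
    nx = min(max(x, fx), fx + fw - w)
    ny = min(max(y, fy), fy + fh - h)
    return fired, (nx, ny, w, h)
```

Calling this once per frame keeps the widget reachable even as head movement sweeps the field of view away from it.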
-
Publication number: 20180031845
Abstract: A method includes defining a virtual space including a virtual camera, a character object, and a field in which the character object is movable, the field defining a peripheral direction and surrounding the virtual camera along the peripheral direction. The method includes defining a visual field of the virtual camera. The method includes displaying a visual-field image on a head-mounted display based on the visual field. The method includes detecting a movement input including a lateral-direction component, the lateral direction being different from the peripheral direction. The method includes moving the character object in the field along the peripheral direction in response to the lateral-direction component of the movement input. The method includes detecting movement of the head-mounted display and changing a direction of the virtual camera based on detected movement of the head-mounted display. The method includes updating the visual-field image based on the direction of the virtual camera.
Type: Application
Filed: August 11, 2017
Publication date: February 1, 2018
Inventors: Takanori YOSHIOKA, Takeshi KOBAYASHI, Takao KASHIHARA, Shuhei TERAHATA
-
Publication number: 20180025531
Abstract: A method of providing a virtual experience to a user includes identifying a plurality of virtual objects. The method further includes detecting a position of a part of the user's body other than the user's head. The method further includes detecting a reference line of sight of the user. The method further includes setting an extension direction for a first virtual object of the plurality of virtual objects based on a direction of the reference line of sight. The method further includes setting a region for the first virtual object, wherein the region comprises a part extending in the extension direction. The method further includes determining whether the first virtual object and a virtual representation of the part of the body have touched based on a positional relationship between the region and a position of the virtual representation of the part of the body.
Type: Application
Filed: July 19, 2017
Publication date: January 25, 2018
Inventor: Shuhei TERAHATA
-
Publication number: 20180015362
Abstract: An information processing method including generating virtual space data for defining a virtual space comprising a virtual camera and a sound source object. The virtual space includes first and second regions. The method further includes determining a visual field of the virtual camera in accordance with a detected movement of a first head mounted display (HMD). The method further includes generating visual-field image data based on the visual field of the virtual camera and the virtual space data. The method further includes instructing the first HMD to display a visual-field image based on the visual-field image data. The method further includes setting an attenuation coefficient for defining an attenuation amount of a sound propagating through the virtual space, wherein the attenuation coefficient is set based on the visual field of the virtual camera. The method further includes processing the sound based on the attenuation coefficient.
Type: Application
Filed: July 12, 2017
Publication date: January 18, 2018
Inventor: Shuhei TERAHATA
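Setting an attenuation coefficient from the camera's visual field can be sketched as follows: a source inside the field of view attenuates less than one outside it, and the signal is scaled by a gain derived from the coefficient. The angular FOV test, coefficient values, and exponential gain are all assumptions for illustration:

```python
import math

def attenuate(camera_dir, source_dir, fov_deg=60.0,
              in_fov_coeff=0.0, out_fov_coeff=0.5):
    """Choose an attenuation coefficient from whether the source direction
    falls inside the camera's visual field, then return the gain exp(-coeff).
    Both direction arguments are assumed to be unit vectors."""
    dot = sum(a * b for a, b in zip(camera_dir, source_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    coeff = in_fov_coeff if angle <= fov_deg / 2.0 else out_fov_coeff
    return math.exp(-coeff)
```

A sound sample would then be multiplied by the returned gain before playback, so off-screen sources are audibly dampened.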
-
Patent number: 9741167
Abstract: A method includes defining a virtual space including a display object and a three-dimensional object. The method includes defining a first pair of virtual cameras in the virtual space. The method includes defining a second pair of virtual cameras in the virtual space. The method includes generating a first right-eye image and a first left-eye image. Both the first right-eye and left-eye images include a first part of the virtual space and the display object. The method further includes generating a second right-eye image and a second left-eye image. Both the second right-eye and left-eye images include a second part of the virtual space and the three-dimensional object. The method further includes superimposing the first and second right-eye images to overlap the three-dimensional object with the display object. The method further includes superimposing the first and second left-eye images to overlap the three-dimensional object with the display object.
Type: Grant
Filed: February 7, 2017
Date of Patent: August 22, 2017
Assignee: COLOPL, INC.
Inventor: Shuhei Terahata
-
Publication number: 20170228928
Abstract: A method includes defining a virtual space including a display object and a three-dimensional object. The method includes defining a first pair of virtual cameras in the virtual space. The method includes defining a second pair of virtual cameras in the virtual space. The method includes generating a first right-eye image and a first left-eye image. Both the first right-eye and left-eye images include a first part of the virtual space and the display object. The method further includes generating a second right-eye image and a second left-eye image. Both the second right-eye and left-eye images include a second part of the virtual space and the three-dimensional object. The method further includes superimposing the first and second right-eye images to overlap the three-dimensional object with the display object. The method further includes superimposing the first and second left-eye images to overlap the three-dimensional object with the display object.
Type: Application
Filed: February 7, 2017
Publication date: August 10, 2017
Inventor: Shuhei TERAHATA
-
Publication number: 20170108922
Abstract: A communication load is reduced in a virtual-reality-space sharing system. An image generation device for generating a virtual reality space image includes a memory for storing an instruction and a processor coupled to the memory for executing the instruction. The instruction, when executed by the processor, causes the processor to acquire, from a second terminal sharing a virtual reality space with a first terminal used by a first user, line-of-sight information including a position and a line-of-sight direction of a second user using the second terminal in the virtual reality space, generate a virtual reality space image to be displayed on the first terminal based on the line-of-sight information acquired from the second terminal, and supply the generated virtual reality space image to the first terminal.
Type: Application
Filed: October 5, 2016
Publication date: April 20, 2017
Inventor: Shuhei TERAHATA
-
Publication number: 20160364916
Abstract: A method of controlling a widget in a virtual space is disclosed, comprising: moving a field-of-view and a point of gaze in the virtual space; determining if the widget and the point of gaze overlap each other, and providing an input to the widget if the widget and the point of gaze overlap; determining if at least a part of the widget is positioned outside the field of view; and moving the widget so that the part of the widget is positioned inside the field of view if it is determined that at least a part of the widget is positioned outside the field of view.
Type: Application
Filed: August 8, 2016
Publication date: December 15, 2016
Applicant: COLOPL, Inc.
Inventor: Shuhei TERAHATA