Patents by Inventor Toshiyuki Inoko
Toshiyuki Inoko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11868834
Abstract: A digital signage system displays a plurality of sets of electronic content on a touch panel display so as to be selectable. The system sets a session for accepting the selection of content, accepts the selection of content by the user, and temporarily holds the information about the content selected by the user during the session. After the selection of content, the system creates access information for collectively accessing the held content, issues a two-dimensional code that is based on the access information and can be read using the user's mobile terminal, and displays the two-dimensional code on the touch panel display.
Type: Grant
Filed: March 11, 2021
Date of Patent: January 9, 2024
Assignee: TEAMLAB INC.
Inventor: Toshiyuki Inoko
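The session flow in this abstract can be sketched in code. This is an illustrative outline only, not the patented implementation: the class, the URL, and the token scheme are all assumptions; a real system would persist the token-to-content mapping server-side and render the access URL as a QR code on the touch panel.

```python
import uuid

class SignageSession:
    """Temporarily holds the content a user selects during one session."""

    def __init__(self):
        self.selected_ids = []

    def select(self, content_id):
        # Hold the selection only for the duration of the session.
        self.selected_ids.append(content_id)

    def finish(self, base_url="https://signage.example/c/"):
        # Create one piece of access information that points to ALL
        # selected content collectively; the URL would then be encoded
        # as a two-dimensional code readable by the user's terminal.
        token = uuid.uuid4().hex
        access_url = base_url + token
        store = {token: list(self.selected_ids)}
        return access_url, store

session = SignageSession()
session.select("artwork-01")
session.select("artwork-07")
url, store = session.finish()
```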
-
Patent number: 11844161
Abstract: To more effectively perform staging using light and sound. A staging apparatus 1 is provided with a ball 2 in which a gas is contained, and an internal device 200 including a sound output unit 270 and a light-emitting unit 240 provided in the ball 2. In the internal device 200, sound is outputted from the sound output unit 270 when an operation of the ball 2 is sensed. The staging apparatus 1 preferably receives, from an external control device or the like, a synchronization signal for synchronizing the timing at which sound is output, and outputs sound at a timing that is in accordance with the synchronization signal.
Type: Grant
Filed: November 28, 2018
Date of Patent: December 12, 2023
Assignee: TeamLab Inc.
Inventor: Toshiyuki Inoko
-
Patent number: 11720166
Abstract: To provide a display system which changes a video in accordance with the position of a real object on a display screen, and which provides a strong sense of immersion and a strong sense of presence in a scene. A video display system 100 comprises: a rendering device 10 which generates a two-dimensional video as seen from a specific camera position; a display device 20 which displays the video on a display screen; and a sensor device 30 which detects the position of a real object on the display screen. The rendering device 10 changes the camera position and renders a predetermined object within the video on the basis of the position of the real object on the display screen.
Type: Grant
Filed: February 27, 2019
Date of Patent: August 8, 2023
Assignee: teamLab Inc.
Inventor: Toshiyuki Inoko
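One way the rendering device might derive a camera position from the detected object position can be sketched as follows. This is an assumption for illustration (the abstract does not specify the mapping); the function name, the `gain` parameter, and the linear mapping are all hypothetical.

```python
def camera_position(object_xy, screen_size, base_cam=(0.0, 0.0, 10.0), gain=2.0):
    """Map a real object's position on the screen (pixels) to a camera position.

    object_xy   : (x, y) of the detected real object, in pixels
    screen_size : (width, height) of the display screen, in pixels
    base_cam    : camera position used when the object is centred (assumed)
    gain        : how strongly the camera reacts to the object (assumed)
    """
    w, h = screen_size
    # Normalise to [-1, 1] with the screen centre at the origin.
    nx = (object_xy[0] - w / 2) / (w / 2)
    ny = (object_xy[1] - h / 2) / (h / 2)
    # Shift the virtual camera in proportion to the object's offset,
    # so the rendered 2D view reacts to where the object sits.
    return (base_cam[0] + gain * nx, base_cam[1] + gain * ny, base_cam[2])

cam_centre = camera_position((960, 540), (1920, 1080))   # object at centre
cam_right = camera_position((1920, 540), (1920, 1080))   # object at right edge
```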
-
Publication number: 20230169296
Abstract: A digital signage system displays a plurality of sets of electronic content on a touch panel display so as to be selectable. The system sets a session for accepting the selection of content, accepts the selection of content by the user, and temporarily holds the information about the content selected by the user during the session. After the selection of content, the system creates access information for collectively accessing the held content, issues a two-dimensional code that is based on the access information and can be read using the user's mobile terminal, and displays the two-dimensional code on the touch panel display.
Type: Application
Filed: March 11, 2021
Publication date: June 1, 2023
Applicant: TEAMLAB INC.
Inventor: Toshiyuki INOKO
-
Publication number: 20230166198
Abstract: [Problem] To provide a presentation system that effectively uses the characteristics of a float body such as a bubble. [Solution] This presentation system 100 comprises: a tornado fan in which a plurality of blowers each perform blowing so that a clockwise or counterclockwise air flow is generated in a presentation space surrounded by exhaust ports 114 of the plurality of blowers when viewed in a plan view; and a supply device 12 that supplies a float body such as a bubble into the presentation space. It is possible to cause an aggregate of bubbles to float near the center of the presentation space by means of a vortex air flow formed in the presentation space.
Type: Application
Filed: March 17, 2021
Publication date: June 1, 2023
Applicant: TEAMLAB INC.
Inventor: Toshiyuki INOKO
-
Publication number: 20230127443
Abstract: To provide a display control system in which a terminal device and a display device operate in tandem. The display control system comprises a terminal device 10 having an image-capture unit, a display device 50 for displaying video, and a control device 20 for controlling the display device 50. The terminal device 10 or the control device 20 identifies a capture range in video displayed by the display device 50 on the basis of an image captured by the image-capture unit. The control device 20 controls the display state of the video displayed by the display device 50 on the basis of the identified capture range. This configuration allows for interactive special effects such as making an object in the capture range of the terminal device 10 disappear from the video or making objects in the capture range of the terminal device 10 appear in the video.
Type: Application
Filed: December 22, 2022
Publication date: April 27, 2023
Applicant: TEAMLAB INC.
Inventor: Toshiyuki INOKO
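Once the capture range has been identified, the "make objects in the range disappear" effect reduces to a point-in-rectangle test over the video's objects. The sketch below illustrates that step only; the object ids, coordinate convention, and rectangle format are assumptions, and the hard part (matching the captured photo to a rectangle of the displayed video) is not shown.

```python
def objects_in_range(objects, capture_rect):
    """Return ids of video objects whose position lies inside the capture range.

    objects      : dict of id -> (x, y) position in video coordinates
    capture_rect : (x0, y0, x1, y1) rectangle identified from the captured image
    """
    x0, y0, x1, y1 = capture_rect
    # Objects inside the rectangle are the ones the terminal "captured";
    # the control device could then hide them from the displayed video.
    return sorted(oid for oid, (x, y) in objects.items()
                  if x0 <= x <= x1 and y0 <= y <= y1)

objs = {"fish": (100, 80), "crow": (400, 300), "flower": (120, 90)}
captured = objects_in_range(objs, (50, 50, 200, 200))
```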
-
Publication number: 20220256677
Abstract: To more effectively perform staging using light and sound. A staging apparatus 1 is provided with a ball 2 in which a gas is contained, and an internal device 200 including a sound output unit 270 and a light-emitting unit 240 provided in the ball 2. In the internal device 200, sound is outputted from the sound output unit 270 when an operation of the ball 2 is sensed. The staging apparatus 1 preferably receives, from an external control device or the like, a synchronization signal for synchronizing the timing at which sound is output, and outputs sound at a timing that is in accordance with the synchronization signal.
Type: Application
Filed: November 28, 2018
Publication date: August 11, 2022
Inventor: Toshiyuki INOKO
-
Publication number: 20210263583
Abstract: To provide a display system which changes a video in accordance with the position of a real object on a display screen, and which provides a strong sense of immersion and a strong sense of presence in a scene. A video display system 100 comprises: a rendering device 10 which generates a two-dimensional video as seen from a specific camera position; a display device 20 which displays the video on a display screen; and a sensor device 30 which detects the position of a real object on the display screen. The rendering device 10 changes the camera position and renders a predetermined object within the video on the basis of the position of the real object on the display screen.
Type: Application
Filed: February 27, 2019
Publication date: August 26, 2021
Inventor: Toshiyuki INOKO
-
Publication number: 20210152730
Abstract: To provide a display control system in which a terminal device and a display device operate in tandem. The display control system comprises a terminal device 10 having an image-capture unit, a display device 50 for displaying video, and a control device 20 for controlling the display device 50. The terminal device 10 or the control device 20 identifies a capture range in video displayed by the display device 50 on the basis of an image captured by the image-capture unit. The control device 20 controls the display state of the video displayed by the display device 50 on the basis of the identified capture range. This configuration allows for interactive special effects such as making an object in the capture range of the terminal device 10 disappear from the video or making objects in the capture range of the terminal device 10 appear in the video.
Type: Application
Filed: October 17, 2018
Publication date: May 20, 2021
Inventor: Toshiyuki INOKO
-
Patent number: 10965915
Abstract: To identify the type of subject photographed by a terminal device without carrying out an analysis process on the photographic image. Based on data in a timetable format, a specified display device 40 displays the video of a subject of a specified type for a specified display time. A terminal device 10 records a photography location and a photography time using a camera 14. Based on the timetable data, an assessment is made as to whether the photography location belongs to the same area as the display device 40 and whether the photography time matches the display time of the video of the specified subject type on the display device. If both conditions are met, it is determined that the video of the specified subject type has been photographed by the terminal device.
Type: Grant
Filed: December 4, 2017
Date of Patent: March 30, 2021
Assignee: TEAMLAB INC.
Inventors: Toshiyuki Inoko, Wataru Sakashita, Hiroki Takizawa
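The matching logic described here (identify the subject from where and when the photo was taken, with no image analysis) can be sketched as a lookup over a timetable. The field names, areas, and time values below are invented for illustration; the patent only specifies that location and time are matched against timetable-format data.

```python
TIMETABLE = [
    # (display_id, area, subject_type, start, end) -- times in minutes,
    # all values illustrative
    ("display-40", "room-A", "whale", 600, 615),
    ("display-40", "room-A", "butterfly", 615, 630),
    ("display-41", "room-B", "whale", 600, 630),
]

def identify_subject(photo_area, photo_time, timetable=TIMETABLE):
    """Return the subject type shown where and when the photo was taken."""
    for _display, area, subject, start, end in timetable:
        # The photo location must belong to the same area as the display,
        # and the photo time must fall within that subject's display slot.
        if area == photo_area and start <= photo_time < end:
            return subject
    return None  # no match: no scheduled subject was being displayed

subject = identify_subject("room-A", 620)
no_match = identify_subject("room-C", 620)
```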
-
Patent number: 10841895
Abstract: To efficiently measure the positional relationship between a host terminal and a wireless tag in a terminal device. A terminal device 30: measures a first distance between a plurality of base stations 20 provided at individual reference positions and a host terminal on the basis of wireless signals emitted by the base stations 20; specifies the position of the host terminal on the basis of the first distance; and specifies the position of a wireless tag 10 on the basis of information pertaining to a second distance between the wireless tag 10 and the base stations 20, said information being transmitted by base stations 20 that have received a wireless signal emitted by the wireless tag 10.
Type: Grant
Filed: May 29, 2018
Date of Patent: November 17, 2020
Assignee: TeamLab Inc.
Inventors: Toshiyuki Inoko, Wataru Sakashita
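Specifying a position from distances to base stations at known reference positions is the classic trilateration problem. The sketch below shows a standard 2D linearisation of it, as one plausible reading of "specifies the position on the basis of the first distance"; the patent text does not name a particular solver, so this is an assumption.

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve for (x, y) from three base-station positions and distances.

    Subtracting the circle equations pairwise yields two linear
    equations in x and y, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero only if the stations are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Terminal actually at (3, 4); base stations at corners of a 10 x 10 area.
pos = trilaterate((0, 0), (10, 0), (0, 10),
                  math.hypot(3, 4), math.hypot(7, 4), math.hypot(3, 6))
```

The same computation applied to the tag's distances (the "second distance" reported back by the base stations) would yield the tag's position, after which the terminal-to-tag relationship is a simple vector difference.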
-
Publication number: 20200296684
Abstract: To efficiently measure the positional relationship between a host terminal and a wireless tag in a terminal device. A terminal device 30: measures a first distance between a plurality of base stations 20 provided at individual reference positions and a host terminal on the basis of wireless signals emitted by the base stations 20; specifies the position of the host terminal on the basis of the first distance; and specifies the position of a wireless tag 10 on the basis of information pertaining to a second distance between the wireless tag 10 and the base stations 20, said information being transmitted by base stations 20 that have received a wireless signal emitted by the wireless tag 10.
Type: Application
Filed: May 29, 2018
Publication date: September 17, 2020
Inventors: Toshiyuki INOKO, Wataru SAKASHITA
-
Publication number: 20190373224
Abstract: To identify the type of subject photographed by a terminal device without carrying out an analysis process on the photographic image. On the basis of subject appearance information which specifies a display device 40 on which a subject video is to be displayed, the type of subject to be displayed on the display device, and a display time for the video of that subject type, a management device 20 causes the specified display device 40 to display the video of the subject of the specified type for the specified display time. A terminal device 10 records the photography location and the photography time at which an image was captured by a photography unit 14.
Type: Application
Filed: December 4, 2017
Publication date: December 5, 2019
Inventors: Toshiyuki INOKO, Wataru SAKASHITA, Hiroki TAKIZAWA
-
Patent number: 10140759
Abstract: Provided is a method for generating light emission data for a three-dimensional display provided with a plurality of multicolor light emitting elements arranged in three-dimensional directions, the method comprising: a modeling step for acquiring a 3D polygon model; a voxelization step for representing the 3D polygon model by a plurality of voxels and calculating position information of each of the voxels; a surface color calculation step for calculating, for the 3D polygon model, color information of a front-side surface with respect to a specific point of view and color information of a back-side surface with respect to the specific point of view; an interior color calculation step for referring to the position information and calculating, on the basis of the color information of the front-side surface and the color information of the back-side surface, color information of voxels located between the front-side surface and the back-side surface; and a mapping step for referring to the position information…
Type: Grant
Filed: May 11, 2015
Date of Patent: November 27, 2018
Assignee: TEAMLAB INC.
Inventors: Akinori Hamada, Haozhe Li, Toshiyuki Inoko
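The interior color calculation step can be illustrated with a minimal sketch. The claim only says interior voxel colors are calculated "on the basis of" the front-side and back-side surface colors; linear interpolation along the depth axis, used below, is one natural reading and is an assumption, as are the function name and RGB representation.

```python
def interior_colors(front_rgb, back_rgb, depth_count):
    """Colors for `depth_count` voxels strictly between front and back surfaces.

    front_rgb, back_rgb : (r, g, b) surface colors as seen from the
                          specific point of view
    depth_count         : number of interior voxels along the depth axis
    """
    colors = []
    for i in range(1, depth_count + 1):
        # t runs from 0 (front surface) to 1 (back surface).
        t = i / (depth_count + 1)
        colors.append(tuple(round((1 - t) * f + t * b)
                            for f, b in zip(front_rgb, back_rgb)))
    return colors

# Three interior voxels between a red front face and a blue back face.
cols = interior_colors((255, 0, 0), (0, 0, 255), 3)
```

Each interior voxel's color would then be mapped to the multicolor light-emitting element at the corresponding voxel position.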
-
Publication number: 20170084079
Abstract: Provided is a method for generating light emission data for a three-dimensional display provided with a plurality of multicolor light emitting elements arranged in three-dimensional directions, the method comprising: a modeling step for acquiring a 3D polygon model; a voxelization step for representing the 3D polygon model by a plurality of voxels and calculating position information of each of the voxels; a surface color calculation step for calculating, for the 3D polygon model, color information of a front-side surface with respect to a specific point of view and color information of a back-side surface with respect to the specific point of view; an interior color calculation step for referring to the position information and calculating, on the basis of the color information of the front-side surface and the color information of the back-side surface, color information of voxels located between the front-side surface and the back-side surface; and a mapping step for referring to the position information…
Type: Application
Filed: May 11, 2015
Publication date: March 23, 2017
Inventors: Akinori HAMADA, Koutetsu LEE, Toshiyuki INOKO
-
Publication number: 20160343166
Abstract: [Problem] To generate a highly realistic composite image. [Solution] This image-capturing system is provided with a camera (10) for capturing an image of a subject, a tracker (20) for detecting the position and orientation of the camera, a space image storage unit (30) in which an image of a three-dimensional virtual space is stored, and an image-forming unit (40) for generating a composite image in which an image of the subject captured using the camera and an image of the three-dimensional virtual space are combined. The image-forming unit (40) projects the three-dimensional virtual space specified by a world coordinate system (X, Y, Z) onto screen coordinates (U, V), in which the camera coordinate system (U, V, N) of the camera is taken as a reference, and combines the images of the three-dimensional virtual space and the subject on a screen specified by the screen coordinates (U, V).
Type: Application
Filed: December 22, 2014
Publication date: November 24, 2016
Applicant: TEAMLAB INC.
Inventor: Toshiyuki INOKO
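The projection from the world coordinate system (X, Y, Z) through the camera coordinate system (U, V, N) onto screen coordinates (U, V) can be sketched in simplified form. The version below assumes a camera looking straight down the world Z axis with no rotation; a real system would also apply the orientation reported by the tracker, so treat the function, its parameters, and the focal-length convention as illustrative assumptions.

```python
def world_to_screen(point, cam_pos, focal=1.0):
    """Project a world point (X, Y, Z) to screen coordinates (u, v).

    point   : (X, Y, Z) in the world coordinate system
    cam_pos : camera position in world coordinates (orientation omitted)
    focal   : distance from camera centre to the screen plane (assumed)
    """
    # Transform into camera coordinates (U, V, N); translation only here,
    # whereas a full system would also apply the tracker's rotation.
    u = point[0] - cam_pos[0]
    v = point[1] - cam_pos[1]
    n = cam_pos[2] - point[2]  # depth along the viewing direction
    if n <= 0:
        return None            # at or behind the camera: not projectable
    # Perspective divide maps camera coordinates onto the screen plane,
    # where the virtual-space image and the subject image are combined.
    return (focal * u / n, focal * v / n)

uv = world_to_screen((2.0, 1.0, 0.0), cam_pos=(0.0, 0.0, 4.0))
```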
-
Patent number: D867202
Type: Grant
Filed: July 31, 2018
Date of Patent: November 19, 2019
Assignee: TEAMLAB INC.
Inventor: Toshiyuki Inoko
-
Patent number: D867921
Type: Grant
Filed: July 31, 2018
Date of Patent: November 26, 2019
Assignee: TEAMLAB INC.
Inventor: Toshiyuki Inoko
-
Patent number: D882456
Type: Grant
Filed: January 31, 2019
Date of Patent: April 28, 2020
Assignee: teamLab Inc.
Inventor: Toshiyuki Inoko
-
Patent number: D914959
Type: Grant
Filed: November 28, 2018
Date of Patent: March 30, 2021
Assignee: TEAMLAB INC.
Inventor: Toshiyuki Inoko