Patents by Inventor Tomoyuki Yamamoto

Tomoyuki Yamamoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10129548
    Abstract: To reduce the amount of processing required for coding and decoding transform coefficients, a sub-block coefficient presence/absence flag, indicating whether at least one non-zero transform coefficient is included, is decoded for each of two or more sub-blocks obtained by dividing a unit region. The context index of a target sub-block is then derived on the basis of transform coefficient presence/absence flags, each indicating whether a transform coefficient is 0, and in accordance with the sub-block coefficient presence/absence flags of the sub-blocks adjacent to the target sub-block. (A minimal sketch of this context derivation follows this entry.)
    Type: Grant
    Filed: December 26, 2012
    Date of Patent: November 13, 2018
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Takeshi Tsukuba, Tomohiro Ikai, Tomoyuki Yamamoto, Yukinobu Yasugi
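A minimal sketch of the context derivation described in the entry above, not the patented implementation: the context index for a target sub-block's coefficient presence flag is taken from the flags of two adjacent sub-blocks (here the right and lower neighbours, a common scan-order choice; the two-context rule is an illustrative assumption).

```python
def subblock_ctx(flags, x, y):
    """flags: 2-D list of sub-block coefficient presence flags
    (1 = sub-block contains at least one non-zero coefficient).
    Out-of-range neighbours are treated as 0."""
    h, w = len(flags), len(flags[0])
    right = flags[y][x + 1] if x + 1 < w else 0
    below = flags[y + 1][x] if y + 1 < h else 0
    # Two contexts: no coded neighbour vs. at least one coded neighbour.
    return min(1, right + below)
```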
  • Publication number: 20180311825
    Abstract: An operation device that reliably prevents any motion of a robot in real space that is unintended by the user. The operation device includes a touch screen that displays an image of a robot model and receives touch input, a model motion execution section that causes the robot model to move in response to a touch input made on the surface of the touch screen, a robot motion button for causing the robot to move in real space, a motion input detection section that detects input to the robot motion button, and a real machine motion command section that outputs a command causing the robot to perform, in real space, the same motion as the robot model executed by the model motion execution section, as long as inputs to the robot motion button are continuously detected. (A sketch of this hold-to-run behavior follows this entry.)
    Type: Application
    Filed: April 20, 2018
    Publication date: November 1, 2018
    Applicant: FANUC CORPORATION
    Inventors: Tomoyuki Yamamoto, Keita Maeda
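A minimal sketch of the hold-to-run behavior described above: real-machine commands mirroring the model motion are sent only while input to the robot motion button is still detected. `button_pressed` and `send_command` are assumed callbacks, not names from the patent.

```python
def replay_on_real_machine(model_motions, button_pressed, send_command):
    """Mirror the model's motions on the real robot while the button is held."""
    for motion in model_motions:
        if not button_pressed():   # input no longer detected -> stop issuing commands
            break
        send_command(motion)       # make the real robot perform the identical motion
```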
  • Patent number: 10116966
    Abstract: A hierarchical moving image decoding device (1) includes a profile information decoding unit (1221a) that decodes/configures sublayer profile information after decoding a sublayer profile present flag for each sublayer, and a level information decoding unit (1221b) that decodes/configures sublayer level information. (An illustrative parsing sketch follows this entry.)
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: October 30, 2018
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Takeshi Tsukuba, Tomoyuki Yamamoto, Tomohiro Ikai, Yukinobu Yasugi
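A minimal sketch of the parsing flow described above, under assumed bitstream-reader helpers (`read_flag`, `read_profile`, `read_level` are hypothetical): per-sublayer profile and level data are only parsed when the corresponding present flag was decoded as 1.

```python
def parse_sublayer_ptl(reader, max_sublayers_minus1):
    # First decode the per-sublayer present flags.
    present = [(reader.read_flag(), reader.read_flag())   # (profile_present, level_present)
               for _ in range(max_sublayers_minus1)]
    sublayer_ptl = []
    for profile_present, level_present in present:
        profile = reader.read_profile() if profile_present else None
        level = reader.read_level() if level_present else None
        sublayer_ptl.append((profile, level))
    return sublayer_ptl
```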
  • Patent number: 10110891
    Abstract: To reduce the amount of coding required when an asymmetric partition is used, and to implement efficient encoding/decoding processes that exploit the characteristics of the asymmetric partition. In a case that a CU information decoding unit decodes information for specifying an asymmetric partition (AMP; Asymmetric Motion Partition) as a partition type, an arithmetic decoding unit is configured to decode binary values by switching between arithmetic decoding that uses contexts and arithmetic decoding that does not use contexts, in accordance with the position of each binary value. (A sketch of this context/bypass switching follows this entry.)
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: October 23, 2018
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Tomoyuki Yamamoto, Tomohiro Ikai, Yukinobu Yasugi
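A minimal sketch of the switching described above, with a hypothetical `decoder` object: bins at the positions listed in `bypass_positions` are decoded without a context model, while the remaining bins use context-based arithmetic decoding.

```python
def decode_part_mode_bins(decoder, num_bins, bypass_positions):
    bins = []
    for pos in range(num_bins):
        if pos in bypass_positions:
            bins.append(decoder.decode_bypass())              # no context model
        else:
            bins.append(decoder.decode_context(ctx_idx=pos))  # context-coded bin
    return bins
```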
  • Patent number: 10105846
    Abstract: A robot control system includes an operation command output unit that outputs an operation command for a motor, a position detection unit provided on the motor to detect the position of a control shaft, a stop signal output unit that outputs a stop signal to stop the robot when the speed of the control shaft, acquired from the position detection unit, exceeds a speed threshold value, and an operation command interruption unit that interrupts the operation command output from the operation command output unit when the stop signal is output. (A sketch of this monitoring logic follows this entry.)
    Type: Grant
    Filed: July 16, 2015
    Date of Patent: October 23, 2018
    Assignee: FANUC CORPORATION
    Inventors: Takumi Oyama, Tomoyuki Yamamoto
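A minimal sketch of the monitoring logic described above: the control-shaft speed is estimated from successive position readings, and exceeding the threshold raises the stop signal and interrupts the operation command. All names and the differencing scheme are illustrative assumptions.

```python
def check_shaft_speed(prev_position, curr_position, dt, speed_threshold):
    speed = abs(curr_position - prev_position) / dt
    stop_signal = speed > speed_threshold
    command_enabled = not stop_signal   # operation command is interrupted on stop
    return stop_signal, command_enabled
```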
  • Patent number: 10104362
    Abstract: An image decoding device is realized that can extract an independent layer without rewriting syntax and cause a non-scalable decoder to reproduce the extracted layer. An image decoding device (1) includes a NAL-unit header decoding unit (211) that decodes the layer ID of an SPS, a dependency layer information decoding unit (2101) that decodes dependency layer information, and a profile level information decoding unit (2102) that decodes profile level information from a VPS. The decoding unit (2102) also decodes the profile level information from the SPS in a case where it is determined, based on the dependency layer information, that the layer indicated by the layer ID is an independent layer. (A sketch of this condition follows this entry.)
    Type: Grant
    Filed: October 7, 2014
    Date of Patent: October 16, 2018
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Tomohiro Ikai, Tomoyuki Yamamoto, Takeshi Tsukuba
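A minimal sketch of the condition described above; `parse_ptl` and the dependency map are assumptions for illustration. Profile/level information is always decoded from the VPS, and decoded again from the SPS only when the SPS's layer has no dependency layers, i.e. it is an independent layer.

```python
def decode_profile_level(sps_layer_id, dependency_layers, parse_ptl):
    vps_ptl = parse_ptl("VPS")
    is_independent = len(dependency_layers.get(sps_layer_id, [])) == 0
    sps_ptl = parse_ptl("SPS") if is_independent else None
    return vps_ptl, sps_ptl
```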
  • Publication number: 20180295360
    Abstract: A video image decoding device (1) is equipped with a TT information decoder (14) that, in the case where encoded data includes merge/skip information that merges or skips presence information indicating whether or not frequency-domain transform coefficients are included in the quantized transform coefficients, does not decode the presence information, and a TT information inference unit (33) that, in the case where the encoded data includes merge/skip information that merges or skips the presence information, infers the presence information. The TT information decoder (14) uses presence information inferred by the TT information inference unit (33) to decode the encoded and quantized transform coefficients. (An illustrative sketch of this inference follows this entry.)
    Type: Application
    Filed: June 7, 2018
    Publication date: October 11, 2018
    Inventor: Tomoyuki YAMAMOTO
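A minimal sketch of the inference described above: when the merge/skip information says the presence flag is not coded, the flag is inferred rather than read. The reader object and the inferred default value are assumptions for illustration.

```python
def get_presence_flag(reader, merged_or_skipped, inferred_value=1):
    if merged_or_skipped:
        return inferred_value      # flag absent from the encoded data; infer it
    return reader.read_flag()      # otherwise decode it from the bitstream
```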
  • Publication number: 20180293768
    Abstract: A system for detecting and displaying an external force applied to a robot, in which the magnitude and direction of the detected external force are displayed as an image for visual and intuitive understanding. A robot system includes a robot; a detection section for detecting an external force applied to the robot; a conversion section for converting the magnitude and direction of the external force detected by the detection section into a coordinate value of a three-dimensional rectangular coordinate system; an image generating section for generating a force model image representing the magnitude and direction of the external force by a graphic, with use of the coordinate value obtained by the conversion section; and a display section for three-dimensionally displaying the force model image generated by the image generating section. (A sketch of this conversion follows this entry.)
    Type: Application
    Filed: April 9, 2018
    Publication date: October 11, 2018
    Inventors: Nao OOSHIMA, Keita MAEDA, Tomoyuki YAMAMOTO
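A minimal sketch of the conversion described above: a detected force vector is converted into a direction (unit vector in a three-dimensional rectangular coordinate system) and a magnitude, which a display section could render as an arrow whose length encodes the magnitude. The decomposition into direction and magnitude is an illustrative assumption.

```python
import math

def force_to_model(fx, fy, fz):
    magnitude = math.sqrt(fx * fx + fy * fy + fz * fz)
    if magnitude == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    direction = (fx / magnitude, fy / magnitude, fz / magnitude)
    return direction, magnitude
```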
  • Publication number: 20180288408
    Abstract: A predicted image is generated by a method that makes it easy to execute parallel processing for a plurality of pixels, in a case where each pixel of the predicted image is derived according to a distance from a reference region and with reference to an unfiltered reference pixel. A predicted pixel value constituting the predicted image is derived by applying a weighted sum, using weighting coefficients, of a filtered predicted pixel value at a target pixel within a prediction block and at least one unfiltered reference pixel value. The weighting coefficient for an unfiltered reference pixel value is derived as the product of a reference intensity coefficient, determined according to the prediction direction indicated by the prediction mode, and a distance weight that monotonically decreases as the reference distance of the target pixel increases. (A worked sketch of this weighting follows this entry.)
    Type: Application
    Filed: August 24, 2016
    Publication date: October 4, 2018
    Inventors: Tomohiro IKAI, Takeshi TSUKUBA, Tomoyuki YAMAMOTO
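A minimal sketch of the weighting described above, using two unfiltered reference pixels (one from the left column, one from the top row) and right-shifts as the monotonically decreasing distance weights. The fixed-point layout, the specific halving rule, and the assumption that the intensity coefficients never sum above `1 << shift` are illustrative choices, not the patented formula; the per-pixel form makes the parallel-processing point, since each output depends only on its own inputs.

```python
def predict_pixel(filtered_pred, ref_left, ref_top, coeff_left, coeff_top,
                  dist_x, dist_y, shift=6):
    # Distance weights decrease monotonically (here: halve) as the target pixel
    # moves away from the left column / top row of unfiltered reference pixels.
    w_left = coeff_left >> min(dist_x, 31)
    w_top = coeff_top >> min(dist_y, 31)
    w_filtered = (1 << shift) - w_left - w_top   # remaining weight for the filtered value
    total = w_left * ref_left + w_top * ref_top + w_filtered * filtered_pred
    return (total + (1 << (shift - 1))) >> shift  # round and normalize
```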
  • Publication number: 20180288414
    Abstract: An adaptive offset filter (60) adds an offset to the pixel value of each pixel forming an input image. The adaptive offset filter (60) refers to offset-type specifying information, sets offset attributes for a subject unit area of the input image, decodes an offset having a bit width corresponding to the offset value range included in the set offset attributes, and adds the offset to the pixel value of each pixel forming the input image. (A minimal sketch follows this entry.)
    Type: Application
    Filed: June 4, 2018
    Publication date: October 4, 2018
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Takanori Yamazaki, Tomohiro Ikai, Tomoyuki Yamamoto, Yukinobu Yasugi
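A minimal sketch of the two steps described above: the offset bit width follows from the offset value range in the configured attributes, and the decoded offsets are then added to every pixel of the unit area. The `classify` callback, the offsets table, and the clipping range are illustrative assumptions.

```python
def offset_bit_width(max_abs_offset):
    # e.g. an offset value range of [-7, 7] (max_abs_offset = 7) needs 3 magnitude bits
    return max(1, max_abs_offset.bit_length())

def apply_offsets(pixels, offsets, classify, max_value=255):
    # classify(pixel) -> offset class index; add the class's offset and clip.
    return [min(max_value, max(0, p + offsets[classify(p)])) for p in pixels]
```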
  • Publication number: 20180281172
    Abstract: Provided is a robot system including a robot; a control device configured to control the robot; a portable teach pendant connected to the control device; and a teaching handle attached to the robot and connected to the control device. The teach pendant is provided with a first enable switch that permits operation of the robot by the teach pendant, and the teaching handle is provided with a second enable switch that permits operation of the robot by the teaching handle. The control device enables operation of the robot by the teaching handle only when the first enable switch is in the off state and the second enable switch is switched to the on state, and enables operation of the robot by the teach pendant only when the second enable switch is in the off state and the first enable switch is switched to the on state. (A sketch of this interlock follows this entry.)
    Type: Application
    Filed: March 2, 2018
    Publication date: October 4, 2018
    Applicant: Fanuc Corporation
    Inventors: Gou Inaba, Tomoyuki Yamamoto, Hiromitsu Takahashi
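A minimal sketch of the interlock described above: each device can operate the robot only while its own enable switch is on and the other device's enable switch is off. The return labels are illustrative.

```python
def enabled_device(pendant_enable_on, handle_enable_on):
    if handle_enable_on and not pendant_enable_on:
        return "teaching_handle"
    if pendant_enable_on and not handle_enable_on:
        return "teach_pendant"
    return None   # neither device may move the robot
```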
  • Publication number: 20180281180
    Abstract: To provide an action information learning device, robot control system, and action information learning method that make it easier for an operator to perform cooperative work with a robot. An action information learning device includes: a state information acquisition unit that acquires the state of a robot; an action information output unit that outputs an action, which is adjustment information for the state; a reward calculation section that acquires determination information, which is information about the handover time related to handover of a workpiece, and calculates a reward value for reinforcement learning based on the acquired determination information; and a value function update section that updates a value function by performing reinforcement learning based on the reward value calculated by the reward calculation section, the state, and the action. (A sketch of this reward and update step follows this entry.)
    Type: Application
    Filed: March 26, 2018
    Publication date: October 4, 2018
    Inventors: Tomoyuki YAMAMOTO, Yuusuke KURIHARA
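A minimal sketch of the reward and value-function update described above, not the patented method: the reward is derived from the measured handover time (shorter is better) and a simple tabular Q-learning update is applied. The target time, reward values, action set, and learning constants are illustrative assumptions.

```python
def reward_from_handover(handover_time, target_time=2.0):
    return 1.0 if handover_time <= target_time else -1.0

def update_q(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """q: dict mapping (state, action) -> value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```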
  • Publication number: 20180257232
    Abstract: A robot system including a robot and a controller. The controller is configured to conduct: a region generating process that generates a robot inclusion region, which includes the robot and the like and whose area increases as the speed of the robot increases, an entry prohibited region near the robot, and a speed limit region along the robot-side edge of the entry prohibited region; an entry detecting process that detects whether or not the generated robot inclusion region enters the entry prohibited region or the speed limit region; a speed limiting process that reduces the operating speed of the robot if the robot inclusion region enters the speed limit region; and a power cutoff unit that immediately stops the robot if the robot inclusion region enters the entry prohibited region. (An illustrative geometric sketch follows this entry.)
    Type: Application
    Filed: March 7, 2018
    Publication date: September 13, 2018
    Applicant: FANUC CORPORATION
    Inventors: Tomoyuki YAMAMOTO, Nao OOSHIMA
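A minimal 2-D sketch of the behavior described above, not the patented geometry: the robot inclusion region is modelled as a circle whose radius grows with speed; overlap with the entry prohibited region cuts power, overlap with the speed limit region reduces speed. The circle model, distances, and constants are illustrative assumptions.

```python
def inclusion_radius(base_radius, speed, growth=0.5):
    return base_radius + growth * speed

def region_response(dist_to_prohibited, dist_to_limit, speed, base_radius=0.5):
    r = inclusion_radius(base_radius, speed)
    if dist_to_prohibited <= r:
        return "power_cutoff"    # inclusion region touches the entry prohibited region
    if dist_to_limit <= r:
        return "reduce_speed"    # inclusion region touches the speed limit region
    return "normal"
```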
  • Patent number: 10075720
    Abstract: The processing amount and coding amount related to decoding/coding of profile information and level information are reduced. The profile/level information decoded by a PTL (profile/level) information decoder (1021) is specified by an i-th reference PTL information specifying index, whose semantics give the relative position between the i-th profile/level information and the referenced profile/level information. (A sketch of this relative referencing follows this entry.)
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: September 11, 2018
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Takeshi Tsukuba, Tomohiro Ikai, Tomoyuki Yamamoto
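A minimal sketch of the relative referencing described above: each profile/level entry either carries its own data or points back to an earlier entry through an index interpreted relative to its own position. The entry format (`ref_delta`, `ptl`) is an assumption for illustration.

```python
def resolve_ptl(entries):
    resolved = []
    for i, entry in enumerate(entries):
        if "ref_delta" in entry:
            # Relative reference; assumes it points to an earlier entry (ref_delta >= 1).
            resolved.append(resolved[i - entry["ref_delta"]])
        else:
            resolved.append(entry["ptl"])   # explicitly coded profile/level data
    return resolved
```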
  • Patent number: 10075733
    Abstract: To reduce the amount of coding required when an asymmetric partition is used, and to implement efficient encoding/decoding processes that exploit the characteristics of the asymmetric partition. An image decoding device includes a motion compensation parameter derivation unit configured to derive a motion compensation parameter indicating either a uni-prediction scheme or a bi-prediction scheme. In a case that a prediction unit has a size less than or equal to a predetermined value, the motion compensation parameter derivation unit is configured to derive the motion compensation parameter by switching between the prediction schemes. (A sketch of this size-dependent switch follows this entry.)
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: September 11, 2018
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Tomoyuki Yamamoto, Tomohiro Ikai, Yukinobu Yasugi
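A minimal sketch of the size-dependent switch described above: a bi-prediction request is converted to uni-prediction when the prediction unit is at or below a size threshold. The threshold value and the scheme labels are illustrative assumptions.

```python
def derive_prediction_scheme(pu_width, pu_height, signalled_scheme, size_threshold=4):
    if signalled_scheme == "bi" and min(pu_width, pu_height) <= size_threshold:
        return "uni"   # small prediction units are restricted to uni-prediction
    return signalled_scheme
```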
  • Publication number: 20180249173
    Abstract: While maintaining a high degree of freedom in choosing partition sizes and transformation sizes adapted to the local characteristics of videos, the amount of metadata is decreased. A video encoding apparatus (10) divides an input video into blocks of a prescribed size and encodes the video block by block. The video encoding apparatus is provided with: a prediction parameter determining portion (102) that decides the block partition structure; a predictive image producing portion (103) that generates predictive images, partition by partition, as prescribed by the partition structure; a transform coefficient producing portion (107) that applies one of the frequency transformations included in a prescribed transformation preset to the prediction residuals, i.e. the differences between the predictive images and the input video; and a transform restriction deriving portion (104) that generates the list of transform candidates. (A sketch of the candidate-list derivation follows this entry.)
    Type: Application
    Filed: April 30, 2018
    Publication date: August 30, 2018
    Applicant: Sharp Kabushiki Kaisha
    Inventors: Tomoyuki YAMAMOTO, Tomohiro IKAI
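A minimal sketch of the candidate-list derivation described above: starting from a preset of transform sizes, only the candidates that fit inside the current partition are kept. The square-transform model and the fit criterion are illustrative assumptions, not the apparatus's actual restriction rule.

```python
def transform_candidates(preset_sizes, part_width, part_height):
    return [size for size in preset_sizes if size <= min(part_width, part_height)]
```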
  • Publication number: 20180229363
    Abstract: A tablet terminal that establishes a wireless connection with one of a plurality of controllers, each having identification information, and transmits an operation signal commanding operation of a robot to the wirelessly connected controller. The tablet terminal is configured to prevent the robot from being operated by the operation signal when the identification information of the controller to which the base holding the tablet terminal is connected does not coincide with the identification information of the controller to which the tablet terminal is wirelessly connected. (A sketch of this check follows this entry.)
    Type: Application
    Filed: February 15, 2018
    Publication date: August 16, 2018
    Applicant: FANUC CORPORATION
    Inventors: Yuusuke Kurihara, Tomoyuki Yamamoto, Hiromitsu Takahashi
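A minimal sketch of the check described above: an operation signal is honoured only when the controller wired to the base holding the tablet and the wirelessly connected controller have the same identification information. Parameter names are illustrative.

```python
def operation_allowed(base_controller_id, wireless_controller_id):
    return base_controller_id == wireless_controller_id
```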
  • Patent number: 10027956
    Abstract: A video image decoding device (1) is equipped with a TT information decoder (14) that, in the case where encoded data includes merge/skip information that merges or skips presence information indicating whether or not frequency-domain transform coefficients are included in the quantized transform coefficients, does not decode the presence information, and a TT information inference unit (33) that, in the case where the encoded data includes merge/skip information that merges or skips the presence information, infers the presence information. The TT information decoder (14) uses presence information inferred by the TT information inference unit (33) to decode the encoded and quantized transform coefficients.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: July 17, 2018
    Assignee: SHARP KABUSHIKI KAISHA
    Inventor: Tomoyuki Yamamoto
  • Publication number: 20180176586
    Abstract: A moving image decoder (1) includes an intermediate estimated prediction mode deriving section (124) for transforming the prediction mode of each neighbor partition into an intermediate prediction mode included in an intermediate prediction set, which is a sum of prediction sets (PS); and an estimated prediction mode deriving section (125) for deriving an estimated prediction mode by estimating the prediction mode of a target partition based on the intermediate prediction mode of each neighbor partition obtained by the transform. (A sketch of this mapping follows this entry.)
    Type: Application
    Filed: February 15, 2018
    Publication date: June 21, 2018
    Applicant: SHARP KABUSHIKI KAISHA
    Inventors: Tomoyuki YAMAMOTO, Tomohiro IKAI
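A minimal sketch of the mapping described above: neighbor prediction modes, possibly defined over different prediction sets, are first mapped into a common intermediate prediction set, and the estimated mode for the target partition is derived from the mapped values. The `to_intermediate` callback and the "take the minimum" rule are illustrative assumptions, not the decoder's actual estimation rule.

```python
def estimate_prediction_mode(neighbor_modes, to_intermediate):
    """neighbor_modes: list of (prediction_set, mode) pairs for the neighbor partitions."""
    intermediate = [to_intermediate(pred_set, mode) for pred_set, mode in neighbor_modes]
    return min(intermediate)
```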
  • Publication number: 20180161988
    Abstract: To provide a robot system capable of reducing the burden on the setting operator, regardless of conditions such as the setting conditions of the robot and the complexity of the work space, when setting an operable/inoperable area of the robot. The robot system has a robot capable of detecting contact with an obstacle. The robot moves inside a predetermined search area, in a predetermined posture, along a previously determined scheduled search route, and sets the operable/inoperable area of the robot inside the search area based on the position/posture data recorded when the robot came into contact with an obstacle while moving. (A sketch of this search follows this entry.)
    Type: Application
    Filed: December 6, 2017
    Publication date: June 14, 2018
    Inventors: Toshiaki Kamoi, Tomoyuki Yamamoto
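A minimal sketch of the search described above: the robot visits poses along the scheduled search route and records the poses at which contact with an obstacle was detected; those poses can then delimit the inoperable area. `move_to` and `contact_detected` are assumed callbacks, not names from the patent.

```python
def survey_search_area(route, move_to, contact_detected):
    contact_poses = []
    for pose in route:
        move_to(pose)
        if contact_detected():
            contact_poses.append(pose)   # boundary sample of the inoperable area
    return contact_poses
```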