Patents by Inventor Naoki Ohtani

Naoki Ohtani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7995053
    Abstract: A drawing device including: a triangle detecting unit that specifies a triangle to be drawn and specifies a pixel block containing a pixel of the triangle; a B-edge detecting unit that judges whether or not the pixel block specified by the triangle detecting unit includes a pixel of another triangle connected to that triangle; a rasterizing unit that, when the B-edge detecting unit judges that the specified pixel block includes such a pixel, performs rasterization processing on the pixel block to generate pixel data; a memory R/W unit that writes the pixel data of the pixel block generated by the rasterizing unit into a memory; and a drawing engine that controls the display of an image in accordance with the pixel data written into the memory.
    Type: Grant
    Filed: August 2, 2005
    Date of Patent: August 9, 2011
    Assignee: Panasonic Corporation
    Inventor: Naoki Ohtani
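
The block-based rasterization this abstract describes can be sketched in a few lines: the screen is divided into fixed-size pixel blocks, and a block is rasterized only when the triangle actually touches it. This is a minimal illustration of the general idea, not the patented circuit design; the block size, function names, and the edge-function coverage test are all assumptions for the sketch.

```python
# Minimal sketch of block-based triangle rasterization. The screen is
# divided into BLOCK x BLOCK pixel blocks; a block is visited only if it
# overlaps the triangle's bounding box, and only covered pixels are kept.
# All names are illustrative, not from the patent.

BLOCK = 4  # block size in pixels (illustrative choice)

def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive if (px, py) lies left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covers(tri, px, py):
    # Point-in-triangle test for counter-clockwise vertex order.
    (ax, ay), (bx, by), (cx, cy) = tri
    return (edge(ax, ay, bx, by, px, py) >= 0 and
            edge(bx, by, cx, cy, px, py) >= 0 and
            edge(cx, cy, ax, ay, px, py) >= 0)

def rasterize(tri, width, height):
    # Returns the set of covered pixel coordinates, skipping blocks that
    # cannot contain any pixel of the triangle.
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    pixels = set()
    for by0 in range(0, height, BLOCK):
        for bx0 in range(0, width, BLOCK):
            # Skip blocks entirely outside the triangle's bounding box.
            if bx0 + BLOCK <= min(xs) or bx0 > max(xs):
                continue
            if by0 + BLOCK <= min(ys) or by0 > max(ys):
                continue
            for y in range(by0, min(by0 + BLOCK, height)):
                for x in range(bx0, min(bx0 + BLOCK, width)):
                    if covers(tri, x + 0.5, y + 0.5):  # sample at pixel center
                        pixels.add((x, y))
    return pixels
```

In hardware, skipping whole blocks in this way is what saves memory bus bandwidth: pixel data is generated and written out per block rather than per scanline.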
  • Publication number: 20090009518
    Abstract: A drawing device that makes effective use of memory bus bandwidth without requiring an expensive and complicated circuit configuration.
    Type: Application
    Filed: August 2, 2005
    Publication date: January 8, 2009
    Inventor: Naoki Ohtani
  • Patent number: 6909453
    Abstract: A communication unit 1 carries out voice communication, and a character background selection input unit 2 selects a CG character corresponding to a communication partner. A voice/music processing unit 5 performs voice/music processing required for the communication, a voice/music converting unit 6 converts voice and music, and a voice/music output unit outputs the voice and music. A voice input unit 8 acquires voice. A voice analyzing unit 9 analyzes the voice, and an emotion presuming unit 10 presumes an emotion based on the result of the voice analysis. A lips motion control unit 11, a body motion control unit 12 and an expression control unit 13 send control information to a 3-D image drawing unit 14 to generate an image, and a display unit 15 displays the image.
    Type: Grant
    Filed: December 19, 2002
    Date of Patent: June 21, 2005
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yoshiyuki Mochizuki, Katsunori Orimoto, Toshiki Hijiri, Naoki Ohtani, Toshiya Naka, Takeshi Yamamoto, Shigeo Asahara
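
The voice-to-animation pipeline in this abstract (voice analysis, emotion presumption, then lip/body/expression control parameters for the 3-D drawing stage) can be sketched as follows. This is a hypothetical illustration of the data flow only: the energy feature, the thresholds, and every function name are assumptions, not the patented method.

```python
# Sketch of the voice-driven character pipeline: analyze incoming audio,
# presume a coarse emotion, and emit control parameters for lip, body,
# and expression animation. Thresholds and labels are hypothetical.

import math

def analyze_voice(samples):
    # Root-mean-square energy as a crude loudness feature.
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def presume_emotion(energy):
    # Hypothetical mapping from loudness to an emotion label.
    if energy > 0.5:
        return "excited"
    if energy > 0.1:
        return "neutral"
    return "calm"

def control_parameters(emotion, energy):
    # Control information handed to the 3-D image drawing stage:
    # lip opening tracks loudness; body motion and expression presets
    # follow the presumed emotion.
    body = {"excited": "gesture", "neutral": "idle", "calm": "rest"}
    return {
        "lip_open": min(1.0, energy * 2.0),
        "body_motion": body[emotion],
        "expression": emotion,
    }
```

A caller would run these three stages per audio frame, so the displayed CG character reacts continuously to the communication partner's voice.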
  • Publication number: 20030214645
    Abstract: A method of evaluating free-space optical propagation characteristics includes emitting a plurality of laser beams from a corresponding plurality of laser sources, receiving laser beams at different target points, and measuring the time-based spatial fluctuations between the laser beams thus received. The respective distances from the laser sources to each target point are used to normalize the time-based spatial fluctuations. The difference between the normalized spatial positions of the laser beams at the target points is derived and used to obtain the frequency spectrum of time-based fluctuations of the spatial positions.
    Type: Application
    Filed: May 14, 2003
    Publication date: November 20, 2003
    Applicant: Communications Research Laboratory, Independent Administrative Institution
    Inventors: Moriya Nakamura, Makoto Akiba, Toshiaki Kuri, Naoki Ohtani
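
The measurement procedure in this abstract (normalize each beam's spatial position by its propagation distance, difference the normalized positions, then take the frequency spectrum of the time-based fluctuation) can be sketched numerically. A naive DFT is used so the sketch stays self-contained; all function names are illustrative, not from the application.

```python
# Sketch of the free-space propagation evaluation: per-beam position
# time series are normalized by range, differenced, and transformed to
# a fluctuation spectrum. Naive O(n^2) DFT for self-containment.

import cmath

def normalize(positions, distance):
    # Angular fluctuation: transverse displacement divided by range,
    # so beams arriving from different distances become comparable.
    return [p / distance for p in positions]

def dft_magnitude(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def fluctuation_spectrum(pos_a, dist_a, pos_b, dist_b):
    a = normalize(pos_a, dist_a)
    b = normalize(pos_b, dist_b)
    diff = [ai - bi for ai, bi in zip(a, b)]
    # Remove the mean so the spectrum reflects fluctuations only,
    # not any static offset between the two beam paths.
    mean = sum(diff) / len(diff)
    return dft_magnitude([d - mean for d in diff])
```

Differencing two beams before the transform suppresses fluctuations common to both paths (e.g. receiver vibration), leaving the atmospheric component of interest.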
  • Publication number: 20030117485
    Abstract: A communication unit 1 carries out voice communication, and a character background selection input unit 2 selects a CG character corresponding to a communication partner. A voice/music processing unit 5 performs voice/music processing required for the communication, a voice/music converting unit 6 converts voice and music, and a voice/music output unit outputs the voice and music. A voice input unit 8 acquires voice. A voice analyzing unit 9 analyzes the voice, and an emotion presuming unit 10 presumes an emotion based on the result of the voice analysis. A lips motion control unit 11, a body motion control unit 12 and an expression control unit 13 send control information to a 3-D image drawing unit 14 to generate an image, and a display unit 15 displays the image.
    Type: Application
    Filed: December 19, 2002
    Publication date: June 26, 2003
    Inventors: Yoshiyuki Mochizuki, Katsunori Orimoto, Toshiki Hijiri, Naoki Ohtani, Toshiya Naka, Takeshi Yamamoto, Shigeo Asahara