Patents by Inventor Eiko Yamada

Eiko Yamada has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240092980
    Abstract: A method is provided for producing a resin composition including an olefin-based polymer; an inorganic filler; and a lubricant, in which the olefin-based polymer is at least one polymer selected from a propylene-based polymer and an ethylene-based polymer having a melt flow rate of 0.5 g/10 minutes or more measured at a temperature of 190° C. and a load of 2.16 kg. The method includes a step (1) of mixing the inorganic filler and the lubricant; a step (2) of mixing a mixture obtained in the step (1) and the olefin-based polymer; and a step (3) of kneading a mixture obtained in the step (2).
    Type: Application
    Filed: December 14, 2021
    Publication date: March 21, 2024
    Inventors: Hidekazu Yamada, Nao Igawa, Eiko Ito
  • Publication number: 20140351709
    Abstract: An information processing device having a touch panel, which achieves an information processing function that interacts with a user's operation, includes a display which displays an operation screen, an operation part which receives a user's operation on the operation screen, an information acquisition part which acquires specific information from a display object area on the operation screen, and an information management part which manages the specific information in connection with a desired attribute set by the user in advance. The information processing device further includes a display control part, which displays a specific information operation area and an attribute operation area on the operation screen, and an operation content determination part, which determines the content of a user's operation. In this way, the device can easily create a database of specific information in response to a user's operation. (See the illustrative sketch after this listing.)
    Type: Application
    Filed: September 10, 2012
    Publication date: November 27, 2014
    Applicant: NEC CASIO MOBILE COMMUNICATIONS, LTD.
    Inventors: Hiroyuki Uno, Eiko Yamada
  • Publication number: 20140250354
    Abstract: A terminal includes a display unit and a character conversion unit that recognizes a function related to entered characters in a character acceptable state, converts the entered characters to a symbol to be displayed on the display unit for starting the recognized function, and outputs the symbol. The terminal further includes a control unit that starts the function corresponding to the symbol displayed on the display unit. (See the illustrative sketch after this listing.)
    Type: Application
    Filed: April 14, 2014
    Publication date: September 4, 2014
    Applicant: NEC CORPORATION
    Inventors: Eiko Yamada, Yoshihiro Iwaki
  • Patent number: 8270946
    Abstract: Provided are a mobile terminal capable of switching its functions depending on whether or not a sensor is being contacted, a mobile terminal control method, a mobile terminal control program, and a recording medium. A mobile phone 1 includes a touch sensor 9 at a position that a user generally contacts when holding the mobile phone 1. While the user is in contact with the touch sensor 9, the mobile phone 1 performs normal operation. If the mobile phone 1 remains out of contact with the touch sensor 9 for a certain time, it switches out of normal operation. For example, assume that a lock mode setting is made: the mobile phone 1 switches to a lock mode while it is not in contact with the touch sensor 9. When the touch sensor 9 is contacted again, the mobile phone 1 shifts to a state for unlock processing, such as password entry, and if the entry is made correctly, the mobile phone 1 is unlocked. (See the illustrative sketch after this listing.)
    Type: Grant
    Filed: January 26, 2007
    Date of Patent: September 18, 2012
    Assignee: NEC Corporation
    Inventor: Eiko Yamada
  • Patent number: 7835728
    Abstract: A client (10) transmits a service request signal to a Web server (20). The Web server which has received the service request signal generates an ID for each session, and transmits the ID to the client together with window information. The client then transmits input voice information to a voice processing server (30) together with the ID. The voice processing server which has received the voice information and the ID processes the voice information, and transmits the processing result to the Web server together with the ID. The Web server prepares information reflecting the voice processing result obtained by the voice processing server in correspondence with the ID from the voice processing server, and transmits the information to the client. (See the illustrative sketch after this listing.)
    Type: Grant
    Filed: March 18, 2005
    Date of Patent: November 16, 2010
    Assignee: NEC Corporation
    Inventor: Eiko Yamada
  • Publication number: 20100182230
    Abstract: A terminal includes a display unit and a character conversion unit that recognizes a function related to entered characters in a character acceptable state, converts the entered characters to a symbol to be displayed on the display unit for starting the recognized function, and outputs the symbol. The terminal further includes a control unit that starts the function corresponding to the symbol displayed on the display unit.
    Type: Application
    Filed: June 24, 2008
    Publication date: July 22, 2010
    Inventors: Eiko Yamada, Yoshihiro Iwaki
  • Publication number: 20100167693
    Abstract: Provided are a mobile terminal capable of switching its functions depending on whether or not a sensor is being contacted, a mobile terminal control method, a mobile terminal control program, and a recording medium. A mobile phone 1 includes a touch sensor 9 at a position that a user generally contacts when holding the mobile phone 1. While the user is in contact with the touch sensor 9, the mobile phone 1 performs normal operation. If the mobile phone 1 remains out of contact with the touch sensor 9 for a certain time, it switches out of normal operation. For example, assume that a lock mode setting is made: the mobile phone 1 switches to a lock mode while it is not in contact with the touch sensor 9. When the touch sensor 9 is contacted again, the mobile phone 1 shifts to a state for unlock processing, such as password entry, and if the entry is made correctly, the mobile phone 1 is unlocked.
    Type: Application
    Filed: January 26, 2007
    Publication date: July 1, 2010
    Inventor: Eiko Yamada
  • Patent number: 7478046
    Abstract: A speech recognition apparatus is provided that reduces transmission time and costs. A terminal-side apparatus (100) includes a speech detection portion (101) for detecting a speech interval of inputted data, a waveform compression portion (102) for compressing waveform data at the detected speech interval, and a waveform transmission portion (103) for transmitting the compressed waveform data. A server-side apparatus (200) includes a waveform reception portion (201) for receiving the waveform data transmitted from the terminal-side apparatus, a waveform decompression portion (202) for decompressing the received waveform data, an analyzing portion (203) for analyzing the decompressed waveform data, and a recognizing portion (204) for performing recognition processing to produce a recognition result. (See the illustrative sketch after this listing.)
    Type: Grant
    Filed: June 20, 2002
    Date of Patent: January 13, 2009
    Assignee: NEC Corporation
    Inventors: Eiko Yamada, Hiroshi Hagane, Kazunaga Yoshida
  • Publication number: 20070143102
    Abstract: A client (10) transmits a service request signal to a Web server (20). The Web server which has received the service request signal generates an ID for each session, and transmits the ID to the client together with window information. The client then transmits input voice information to a voice processing server (30) together with the ID. The voice processing server which has received the voice information and the ID processes the voice information, and transmits the processing result to the Web server together with the ID. The Web server prepares information reflecting the voice processing result obtained by the voice processing server in correspondence with the ID from the voice processing server, and transmits the information to the client.
    Type: Application
    Filed: March 18, 2005
    Publication date: June 21, 2007
    Inventor: Eiko Yamada
  • Publication number: 20040243414
    Abstract: A speech recognition apparatus is provided that reduces transmission time and costs. A terminal-side apparatus (100) includes a speech detection portion (101) for detecting a speech interval of inputted data, a waveform compression portion (102) for compressing waveform data at the detected speech interval, and a waveform transmission portion (103) for transmitting the compressed waveform data. A server-side apparatus (200) includes a waveform reception portion (201) for receiving the waveform data transmitted from the terminal-side apparatus, a waveform decompression portion (202) for decompressing the received waveform data, an analyzing portion (203) for analyzing the decompressed waveform data, and a recognizing portion (204) for performing recognition processing to produce a recognition result.
    Type: Application
    Filed: July 13, 2004
    Publication date: December 2, 2004
    Inventors: Eiko Yamada, Hiroshi Hagane, Kazunaga Yoshida
  • Publication number: 20040162731
    Abstract: In a voice recognition dialogue system having a plurality of recognition dialogue servers, a framework is needed to select and determine one recognition dialogue server. To this end, a client 10 transmits its ability information, stored in a terminal information storage 140, to a recognition dialogue selecting server 20. The ability information of the client 10 includes a CODEC ability (CODEC type, CODEC compression mode, etc.), a voice data format (compressed voice data, feature vector, etc.), a recorded voice I/O function, a synthesized voice I/O function (without synthesizing engine, with intermediate representation input engine, with character string input engine, etc.), and service contents. (See the illustrative sketch after this listing.)
    Type: Application
    Filed: November 4, 2003
    Publication date: August 19, 2004
    Inventors: Eiko Yamada, Hiroshi Hagane
  • Patent number: 6341263
    Abstract: A voice recognition system, method, and storage medium are provided. The system includes a plurality of storage sections, a selection section, an adaptation section, a plurality of calculation sections, an adaptation section, a normalization section, and a decision section. The method includes steps for performing the functions associated with these sections.
    Type: Grant
    Filed: May 17, 1999
    Date of Patent: January 22, 2002
    Assignee: NEC Corporation
    Inventors: Eiko Yamada, Hiroaki Hattori
  • Patent number: 6006184
    Abstract: In a speaker recognition system, a tree-structured reference pattern storing unit has first through M-th node stages, each of which has nodes that respectively store a reference pattern of inhibiting speakers. The reference pattern of each node of the (N-1)-th node stage represents acoustic features in the reference patterns of predetermined ones of the nodes of the N-th node stage. An analysis unit analyzes input speech and converts the input speech into feature vectors. A similarities calculating unit calculates similarities between the feature vectors and the reference patterns of all of the inhibiting speakers. An inhibiting speaker selecting unit sorts the similarities and selects a predetermined number of inhibiting speakers. (See the illustrative sketch after this listing.)
    Type: Grant
    Filed: January 28, 1998
    Date of Patent: December 21, 1999
    Assignee: NEC Corporation
    Inventors: Eiko Yamada, Hiroaki Hattori
  • Patent number: 5712956
    Abstract: Speech data is converted into logarithmic spectrum data and orthogonally transformed to develop feature vectors. Normalization coefficient data and unit vector data are stored. An inner product of the feature vector data and the unit vector data is calculated. The inner product may be the average of inner products for a word or a sentence, or may be a regressive average of them. A normalization vector, which corresponds to a second or higher order curve obtained by least-square error approximation of the speech data on logarithmic spectrum space, is calculated on the transformed feature vector space by using the inner product, the normalization coefficient data, and the unit vector data. Normalization of the feature vectors is performed by subtracting the normalization vector from the feature vectors on the transformed feature vector space. Recognition is then performed based on the normalized feature vectors. (See the illustrative sketch after this listing.)
    Type: Grant
    Filed: January 31, 1995
    Date of Patent: January 27, 1998
    Assignee: NEC Corporation
    Inventors: Eiko Yamada, Hiroaki Hattori
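
Illustrative sketches of selected inventions

The Python sketches below are editorial additions, not code taken from the patents or applications above. Each one is a minimal, simplified reading of the corresponding abstract, and every class, function, and parameter name in them is an assumption made for illustration.

For publication 20140351709, one plausible reading of the described flow is: text is acquired from a selected display object area on the operation screen, tagged with an attribute the user set in advance, and accumulated into a simple per-attribute store.

```python
from collections import defaultdict

class InformationManager:
    """Groups captured text by the attribute the user assigned to it (hypothetical names)."""
    def __init__(self):
        self.records = defaultdict(list)

    def register(self, text: str, attribute: str) -> None:
        self.records[attribute].append(text)

def acquire_from_display_area(screen_text: str, start: int, end: int) -> str:
    """Stand-in for the information acquisition part: return the selected span of screen text."""
    return screen_text[start:end]

if __name__ == "__main__":
    screen = "Dinner with A. Tanaka, 03-1234-5678, Friday 19:00"
    manager = InformationManager()
    # The user selects the phone number on the operation screen and assigns the attribute "phone".
    start = screen.find("03-1234-5678")
    manager.register(acquire_from_display_area(screen, start, start + 12), "phone")
    # The user then selects the date and time and assigns the attribute "schedule".
    start = screen.find("Friday 19:00")
    manager.register(acquire_from_display_area(screen, start, start + 12), "schedule")
    print(dict(manager.records))  # {'phone': ['03-1234-5678'], 'schedule': ['Friday 19:00']}
```

Running the sketch prints the captured strings grouped by attribute, which stands in for the database the abstract mentions.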
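
For publications 20140250354 and 20100182230 (which share an abstract), a toy version of the character conversion might work as follows: entered characters that match a known pattern are converted to a symbol shown on the display, and selecting that symbol starts the recognized function. The patterns, symbols, and function names are assumptions.

```python
import re

# Hypothetical mapping from recognized input patterns to a display symbol and
# the function that the symbol starts.
RULES = [
    (re.compile(r"^\d[\d-]{8,}$"), "\u260e", "dial"),         # phone number -> telephone symbol
    (re.compile(r"^https?://\S+$"), "\U0001f310", "browse"),  # URL -> globe symbol
]

def convert(entered: str):
    """Return (symbol, function_name) if a function is recognized for the entered characters."""
    for pattern, symbol, function in RULES:
        if pattern.match(entered):
            return symbol, function
    return None

def start(function: str, argument: str) -> str:
    # Stand-in for the control unit starting the recognized function.
    return f"starting {function} with {argument}"

if __name__ == "__main__":
    text = "03-1234-5678"
    symbol, function = convert(text)
    print(f"display: {symbol}")    # the symbol shown on the display unit
    print(start(function, text))   # selecting the symbol starts the function
```

The same structure extends to other recognizable inputs (mail addresses, dates, and so on) by adding rules.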
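
For patent 8270946 and publication 20100167693, the lock-mode switching reads naturally as a small state machine: the phone stays in normal operation while the grip sensor is touched, locks after the sensor has gone untouched for a timeout, and moves to unlock processing when touched again. The timeout value, password handling, and method names are assumptions.

```python
import time

class GripLock:
    """Toy model of lock-mode switching driven by a grip touch sensor."""
    def __init__(self, timeout_s: float, password: str):
        self.timeout_s = timeout_s
        self.password = password
        self.locked = False
        self.last_touch = time.monotonic()

    def on_touch(self) -> str:
        self.last_touch = time.monotonic()
        # Contact after a lock period moves the phone to unlock processing.
        return "awaiting password" if self.locked else "normal operation"

    def on_tick(self) -> str:
        # Called periodically; lock if the sensor has been untouched for too long.
        if not self.locked and time.monotonic() - self.last_touch > self.timeout_s:
            self.locked = True
        return "locked" if self.locked else "normal operation"

    def enter_password(self, attempt: str) -> str:
        if self.locked and attempt == self.password:
            self.locked = False
            return "unlocked"
        return "locked"

if __name__ == "__main__":
    phone = GripLock(timeout_s=0.1, password="1234")
    print(phone.on_tick())               # normal operation (still held)
    time.sleep(0.2)                      # user puts the phone down
    print(phone.on_tick())               # locked
    print(phone.on_touch())              # awaiting password
    print(phone.enter_password("1234"))  # unlocked
```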
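
For patent 7835728 and publication 20070143102, the session handling can be sketched with three cooperating roles, where in-process method calls stand in for the network exchanges: the Web server issues an ID per session, the client sends voice data tagged with that ID to the voice processing server, and the result is matched back to the session on the Web server by the same ID.

```python
import uuid

class WebServer:
    """Issues one ID per session and assembles pages from voice results (illustrative)."""
    def __init__(self):
        self.results: dict[str, str] = {}

    def handle_service_request(self) -> tuple[str, str]:
        session_id = uuid.uuid4().hex
        return session_id, "<initial window>"   # ID sent to the client with window information

    def receive_result(self, session_id: str, result: str) -> None:
        self.results[session_id] = result       # result matched to the session by ID

    def page_for(self, session_id: str) -> str:
        return f"<window reflecting: {self.results.get(session_id, 'no result yet')}>"

class VoiceProcessingServer:
    def __init__(self, web: WebServer):
        self.web = web

    def receive_voice(self, session_id: str, voice: bytes) -> None:
        result = f"recognized {len(voice)} bytes of audio"  # stand-in for real recognition
        self.web.receive_result(session_id, result)         # result goes back with the ID

if __name__ == "__main__":
    web = WebServer()
    voice_srv = VoiceProcessingServer(web)
    session_id, _window = web.handle_service_request()   # client requests a service
    voice_srv.receive_voice(session_id, b"\x00\x01\x02")  # client sends voice + ID
    print(web.page_for(session_id))                       # client receives the updated window
```

Because every message carries the session ID, the Web server can serve many clients concurrently and still attach each recognition result to the right window.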
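
For patent 7478046 and publication 20040243414, the point of the split is that only the detected speech interval is compressed and transmitted, which is what saves transmission time. The energy threshold, zlib codec, and dummy recognizer below are placeholders for whatever detection, compression, and recognition components a real system would use.

```python
import zlib

def detect_speech_interval(samples: list[int], threshold: int = 100) -> list[int]:
    """Terminal side: keep only samples whose magnitude exceeds a crude energy threshold."""
    return [s for s in samples if abs(s) > threshold]

def terminal_send(samples: list[int]) -> bytes:
    speech = detect_speech_interval(samples)
    payload = b"".join(s.to_bytes(2, "big", signed=True) for s in speech)
    return zlib.compress(payload)   # only the compressed speech interval is transmitted

def server_recognize(packet: bytes) -> str:
    payload = zlib.decompress(packet)
    speech = [int.from_bytes(payload[i:i + 2], "big", signed=True)
              for i in range(0, len(payload), 2)]
    # Stand-ins for the analyzing and recognizing portions.
    mean_magnitude = sum(abs(s) for s in speech) / max(len(speech), 1)
    return f"{len(speech)} speech samples, mean magnitude {mean_magnitude:.1f}"

if __name__ == "__main__":
    waveform = [0, 3, -2, 400, -350, 520, -480, 5, 1, 0]  # silence around a burst of speech
    print(server_recognize(terminal_send(waveform)))
```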
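
For publication 20040162731, the selection step can be sketched as a match between the client's reported abilities and what each recognition dialogue server supports. The abstract lists the kinds of abilities exchanged; the subset-matching rule and the field names here are purely assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Ability:
    """Client ability information, simplified to a few illustrative fields."""
    codec: str                   # e.g. "AMR" or "G.711"
    voice_format: str            # "compressed" or "feature-vector"
    services: set[str] = field(default_factory=set)

@dataclass
class DialogueServer:
    name: str
    codecs: set[str]
    voice_formats: set[str]
    services: set[str]

    def supports(self, client: Ability) -> bool:
        return (client.codec in self.codecs
                and client.voice_format in self.voice_formats
                and client.services <= self.services)

def select_server(client: Ability, servers: list[DialogueServer]) -> str | None:
    """The selecting server returns the first dialogue server able to serve this client."""
    for server in servers:
        if server.supports(client):
            return server.name
    return None

if __name__ == "__main__":
    client = Ability(codec="AMR", voice_format="compressed", services={"weather"})
    servers = [
        DialogueServer("srv-a", {"G.711"}, {"feature-vector"}, {"news"}),
        DialogueServer("srv-b", {"AMR", "G.711"}, {"compressed"}, {"weather", "news"}),
    ]
    print(select_server(client, servers))  # -> srv-b
```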
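
For patent 6006184, the tree structure can be pictured as upper-stage nodes that summarize the reference patterns of the speakers grouped beneath them. The sketch below descends one stage before scoring individual speakers, which is only one way such a tree might be used; the similarity measure (negative Euclidean distance between toy vectors) is a stand-in for real acoustic scoring.

```python
import math

def similarity(x: list[float], pattern: list[float]) -> float:
    """Higher is more similar; negative Euclidean distance stands in for acoustic scoring."""
    return -math.dist(x, pattern)

def mean(patterns: list[list[float]]) -> list[float]:
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def select_inhibiting_speakers(feature: list[float],
                               tree: dict[str, dict[str, list[float]]],
                               n: int) -> list[str]:
    """Descend one node stage, then sort the speakers under the best node and keep the top n."""
    # Upper-stage nodes summarize the reference patterns of the speakers beneath them.
    best_node = max(tree, key=lambda node: similarity(feature, mean(list(tree[node].values()))))
    scored = sorted(tree[best_node].items(),
                    key=lambda item: similarity(feature, item[1]),
                    reverse=True)
    return [speaker for speaker, _ in scored[:n]]

if __name__ == "__main__":
    # Two upper-stage nodes, each grouping two inhibiting speakers (toy reference patterns).
    tree = {
        "node-1": {"spk-a": [0.0, 0.1], "spk-b": [0.2, 0.0]},
        "node-2": {"spk-c": [2.0, 2.1], "spk-d": [1.9, 2.3]},
    }
    print(select_inhibiting_speakers([2.0, 2.0], tree, n=1))  # -> ['spk-c']
```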
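
For patent 5712956, a simplified version of the normalization (using NumPy) is: average the inner products of the utterance's feature vectors with stored unit vectors, build a normalization vector from those averages and stored coefficients, and subtract it from every frame. The least-squares curve fitting on the logarithmic spectrum that the abstract describes is folded into the stored unit vectors and coefficients here, which is a simplifying assumption.

```python
import numpy as np

def normalize_features(features: np.ndarray,
                       unit_vectors: np.ndarray,
                       coefficients: np.ndarray) -> np.ndarray:
    """Subtract a normalization vector built from averaged inner products (illustrative).

    features:      (frames, dim) feature vectors from an orthogonal transform of the log spectrum
    unit_vectors:  (k, dim) stored unit vector data
    coefficients:  (k,) stored normalization coefficient data
    """
    # Average, over the utterance, of the inner products with each unit vector.
    inner = features @ unit_vectors.T                      # (frames, k)
    mean_inner = inner.mean(axis=0)                        # (k,)
    # Normalization vector on the transformed feature space.
    norm_vec = (coefficients * mean_inner) @ unit_vectors  # (dim,)
    return features - norm_vec

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(50, 12)) + 3.0   # toy cepstral features with a constant bias
    units = np.eye(12)[:2]                    # two assumed unit directions
    coeffs = np.ones(2)
    normalized = normalize_features(feats, units, coeffs)
    print(normalized[:, :2].mean(axis=0))     # bias along the unit directions is removed
```

With identity unit vectors and unit coefficients, this reduces to removing the mean along the chosen dimensions, a cepstral-mean-subtraction-like effect.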