Patents by Inventor Koji Okabe
Koji Okabe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11968861
Abstract: An organic EL display (1) has a bend (B) where a slit (81) is bored in a base coat film (23), gate insulating film (27), first interlayer insulating film (31), and second interlayer insulating film (35). The bend is provided with a filler layer (83) filling the slit and covering both edges of the slit. The filler layer has a protrusion (85) overlapping each edge in the width direction of the slit. A wire (7) routed from the display region (D) over the filler layer to reach a terminal section (T) extends over the protrusion.
Type: Grant
Filed: April 4, 2019
Date of Patent: April 23, 2024
Assignee: SHARP KABUSHIKI KAISHA
Inventors: Shinji Ichikawa, Shinsuke Saida, Ryosuke Gunji, Hiroki Taniyama, Tohru Okabe, Akira Inoue, Hiroharu Jinmura, Yoshihiro Nakada, Koji Tanimura
-
Patent number: 11957014
Abstract: A display device includes: a plurality of control lines; a plurality of power supply lines; a plurality of data signal lines; an oxide semiconductor layer; a first metal layer; a gate insulation film; a first inorganic insulation film; a second metal layer; a second inorganic insulation film; and a third metal layer. The oxide semiconductor layer, in a plan view, contains semiconductor lines formed as isolated regions between a plurality of drivers and a display area. The semiconductor lines cross the plurality of control lines and the plurality of power supply lines, are in contact with the plurality of control lines via an opening in the gate insulation film, are in contact with the plurality of power supply lines via an opening in the first inorganic insulation film, and have a plurality of narrowed portions, such that thicker and thinner regions exist along the same line.
Type: Grant
Filed: July 30, 2018
Date of Patent: April 9, 2024
Assignee: SHARP KABUSHIKI KAISHA
Inventors: Tohru Okabe, Shinsuke Saida, Shinji Ichikawa, Hiroki Taniyama, Ryosuke Gunji, Kohji Ariga, Yoshihiro Nakada, Koji Tanimura, Yoshihiro Kohara, Hiroharu Jinmura, Akira Inoue
-
Patent number: 11950462
Abstract: A first conductive layer in the same layer as that of a first electrode is coupled to a third conductive layer and a second electrode in the same layer as that of a third metal layer through a slit formed in a flattening film of a non-display area. Second conductive layers in the same layer as that of a second metal layer are provided to overlap with the slit.
Type: Grant
Filed: March 30, 2018
Date of Patent: April 2, 2024
Assignee: SHARP KABUSHIKI KAISHA
Inventors: Tohru Okabe, Shinsuke Saida, Shinji Ichikawa, Hiroki Taniyama, Ryosuke Gunji, Kohji Ariga, Yoshihiro Nakada, Koji Tanimura, Yoshihiro Kohara, Akira Inoue, Hiroharu Jinmura, Takeshi Yaneda
-
Patent number: 11928198
Abstract: An authentication device is provided with: a plurality of attribute-dependent score calculation units, each calculating an attribute-dependent score dependent on a prescribed attribute for input data; an attribute-independent score calculation unit for calculating an attribute-independent score independent of the attribute for the input data; an attribute estimation unit for performing attribute estimation for the input data; and a score integration unit for determining a score weight for each of the plurality of attribute-dependent scores and for the attribute-independent score using the result of the attribute estimation, and calculating an output score using the attribute-dependent scores, the attribute-independent score, and the determined score weights.
Type: Grant
Filed: June 23, 2021
Date of Patent: March 12, 2024
Assignee: NEC CORPORATION
Inventors: Koji Okabe, Hitoshi Yamamoto, Takafumi Koshinaka
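The score-integration idea in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: the function name, the use of attribute posteriors as weights, and the fixed share given to the attribute-independent score are all assumptions made for the example.

```python
def integrate_scores(dependent_scores, independent_score, attribute_posteriors,
                     independent_weight=0.5):
    """Combine attribute-dependent and attribute-independent scores.

    dependent_scores:      one score per attribute value
    attribute_posteriors:  estimated probability of each attribute value
                           (stands in for the attribute estimation result)
    independent_weight:    share given to the attribute-independent score
                           (an illustrative weighting rule, not the patent's)
    """
    assert len(dependent_scores) == len(attribute_posteriors)
    total_p = sum(attribute_posteriors)
    # Normalize posteriors so dependent scores share (1 - independent_weight).
    weights = [(1.0 - independent_weight) * p / total_p
               for p in attribute_posteriors]
    output = independent_weight * independent_score
    output += sum(w * s for w, s in zip(weights, dependent_scores))
    return output
```

With equal posteriors the dependent scores are averaged; a confident attribute estimate shifts the output toward that attribute's score.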
-
Publication number: 20240038240
Abstract: A biometric authentication device is provided with: a replay unit for reproducing a sound; an ear authentication unit for acquiring a reverberation sound of the sound in an ear of a user to be authenticated, extracting an ear acoustic feature from the reverberation sound, and calculating an ear authentication score by comparing the extracted ear acoustic feature with an ear acoustic feature stored in advance; a voice authentication unit for extracting a talker feature from a voice of the user that has been input, and calculating a voice authentication score by comparing the extracted talker feature with a talker feature stored in advance; and an authentication integration unit for outputting an authentication integration result calculated based on the ear authentication score and the voice authentication score. After the sound is output into the ear, a recording unit inputs the voice of the user.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Applicant: NEC Corporation
Inventors: Koji Okabe, Takayuki Arakawa, Takafumi Koshinaka
-
Publication number: 20240038243
Abstract: A biometric authentication device is provided with: a replay unit for reproducing a sound; an ear authentication unit for acquiring a reverberation sound of the sound in an ear of a user to be authenticated, extracting an ear acoustic feature from the reverberation sound, and calculating an ear authentication score by comparing the extracted ear acoustic feature with an ear acoustic feature stored in advance; a voice authentication unit for extracting a talker feature from a voice of the user that has been input, and calculating a voice authentication score by comparing the extracted talker feature with a talker feature stored in advance; and an authentication integration unit for outputting an authentication integration result calculated based on the ear authentication score and the voice authentication score. After the sound is output into the ear, a recording unit inputs the voice of the user.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Applicant: NEC Corporation
Inventors: Koji Okabe, Takayuki Arakawa, Takafumi Koshinaka
-
Publication number: 20240038241
Abstract: A biometric authentication device is provided with: a replay unit for reproducing a sound; an ear authentication unit for acquiring a reverberation sound of the sound in an ear of a user to be authenticated, extracting an ear acoustic feature from the reverberation sound, and calculating an ear authentication score by comparing the extracted ear acoustic feature with an ear acoustic feature stored in advance; a voice authentication unit for extracting a talker feature from a voice of the user that has been input, and calculating a voice authentication score by comparing the extracted talker feature with a talker feature stored in advance; and an authentication integration unit for outputting an authentication integration result calculated based on the ear authentication score and the voice authentication score. After the sound is output into the ear, a recording unit inputs the voice of the user.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Applicant: NEC Corporation
Inventors: Koji Okabe, Takayuki Arakawa, Takafumi Koshinaka
-
Publication number: 20240038242
Abstract: A biometric authentication device is provided with: a replay unit for reproducing a sound; an ear authentication unit for acquiring a reverberation sound of the sound in an ear of a user to be authenticated, extracting an ear acoustic feature from the reverberation sound, and calculating an ear authentication score by comparing the extracted ear acoustic feature with an ear acoustic feature stored in advance; a voice authentication unit for extracting a talker feature from a voice of the user that has been input, and calculating a voice authentication score by comparing the extracted talker feature with a talker feature stored in advance; and an authentication integration unit for outputting an authentication integration result calculated based on the ear authentication score and the voice authentication score. After the sound is output into the ear, a recording unit inputs the voice of the user.
Type: Application
Filed: October 10, 2023
Publication date: February 1, 2024
Applicant: NEC Corporation
Inventors: Koji Okabe, Takayuki Arakawa, Takafumi Koshinaka
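The authentication integration step shared by the four publications above can be sketched as a simple weighted fusion of the two modality scores. This is an assumed illustration only; the actual integration rule, the function name, and the threshold are not taken from the publications.

```python
def integrate_authentication(ear_score, voice_score, ear_weight=0.5,
                             threshold=0.0):
    """Fuse the ear authentication score and voice authentication score with a
    weighted sum, and compare the fused score against an acceptance threshold.
    Returns (fused_score, accepted)."""
    fused = ear_weight * ear_score + (1.0 - ear_weight) * voice_score
    return fused, fused >= threshold
```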
-
Publication number: 20230401784
Abstract: An information processing apparatus 100 includes an acquisition unit 105 configured to acquire material data, represented in a first format, that is to be used for generating a virtual viewpoint image based on a plurality of images of a subject captured by a plurality of cameras 101; a conversion unit 107 configured to convert the format of the acquired material data from the first format into a second format, based on information identifying the format of material data processable by a different apparatus that is the output destination of the material data; and a transmission/reception unit 108 configured to output the material data converted into the second format to the different apparatus.
Type: Application
Filed: August 16, 2023
Publication date: December 14, 2023
Inventors: Mitsuru Maeda, Hinako Funaki, Koji Okabe, Hidekazu Kamei, Yuya Ota, Kazufumi Onuma, Taku Ogasawara
-
Patent number: 11842741
Abstract: A feature vector having high class identification capability is generated. A signal processing system is provided with: a first generation unit for generating a first feature vector on the basis of one of time-series voice data, meteorological data, sensor data, and text data, or on the basis of a feature quantity of one of these; a weight calculation unit for calculating a weight for the first feature vector; a statistical amount calculation unit for calculating a weighted average vector and a weighted high-order statistical vector of second or higher order using the first feature vector and the weight; and a second generation unit for generating a second feature vector using the weighted high-order statistical vector.
Type: Grant
Filed: March 13, 2019
Date of Patent: December 12, 2023
Assignee: NEC CORPORATION
Inventors: Koji Okabe, Takafumi Koshinaka
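The pipeline this abstract describes, per-frame weights feeding a weighted mean and a weighted second-order statistic that together form a new feature vector, can be sketched as below. The function name and the choice of the weighted standard deviation as the second-order statistic are illustrative assumptions, not details from the patent.

```python
import math

def weighted_statistics_pooling(frames, weights):
    """frames:  list of first feature vectors (lists of floats), one per step
    weights: non-negative importance weight per frame

    Returns the concatenation of the weighted mean and the weighted standard
    deviation (a second-order statistic) per dimension, i.e. one candidate
    form of the 'second feature vector'."""
    total = sum(weights)
    dim = len(frames[0])
    mean = [sum(w * f[d] for w, f in zip(weights, frames)) / total
            for d in range(dim)]
    var = [sum(w * (f[d] - mean[d]) ** 2 for w, f in zip(weights, frames)) / total
           for d in range(dim)]
    std = [math.sqrt(max(v, 0.0)) for v in var]
    return mean + std  # [weighted mean ; weighted std]
```

Pooling with weights rather than a plain average lets informative frames dominate the utterance-level statistics.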
-
Publication number: 20230394701
Abstract: An information processing apparatus identifies a sub region of an object, displayed in a virtual viewpoint image representing a view from a virtual viewpoint, based on virtual viewpoint information, and outputs division model data corresponding to the identified sub region out of foreground model data.
Type: Application
Filed: August 16, 2023
Publication date: December 7, 2023
Inventors: Mitsuru Maeda, Koji Okabe, Hidekazu Kamei, Hinako Funaki, Yuya Ota, Taku Ogasawara, Kazufumi Onuma
-
Publication number: 20230386127
Abstract: An information processing apparatus includes a management unit configured to acquire a plurality of material data for use in generating a virtual viewpoint image based on a plurality of images of an object captured with a plurality of cameras, the plurality of material data including first material data and second material data different from the first material data, and a transmission/reception unit configured to output, to a device, material data selected from among the plurality of acquired material data based on information identifying a format of material data processable by the device.
Type: Application
Filed: August 14, 2023
Publication date: November 30, 2023
Inventors: Mitsuru Maeda, Koji Okabe, Hidekazu Kamei, Hinako Funaki, Yuya Ota, Taku Ogasawara, Kazufumi Onuma
-
Publication number: 20230282217
Abstract: The voice registration device 1X mainly includes a noise reproduction means 220X, a voice data acquisition means 200X, and a voice registration means 210X. The noise reproduction means 220X is configured to reproduce noise data during a time period in which voice input from a user is performed. The voice data acquisition means 200X is configured to acquire the voice data based on the voice input. The voice registration means 210X is configured to register the voice data, or data generated based on the voice data, as data to be used for verification relating to a voice of the user.
Type: Application
Filed: July 27, 2020
Publication date: September 7, 2023
Applicant: NEC Corporation
Inventors: Koji Okabe, Takafumi Koshinaka
-
Patent number: 11580967
Abstract: A speech feature extraction apparatus 100 includes a voice activity detection unit 103 that drops non-voice frames from the frames corresponding to an input speech utterance and calculates a posterior of being voiced for each frame, a voice activity detection process unit 106 that calculates function values, used as weights when pooling frames to produce an utterance-level feature, from a given voice activity detection posterior, and an utterance-level feature extraction unit 112 that extracts an utterance-level feature from the frames on the basis of multiple frame-level features, using the function values.
Type: Grant
Filed: June 29, 2018
Date of Patent: February 14, 2023
Assignee: NEC CORPORATION
Inventors: Qiongqiong Wang, Koji Okabe, Kong Aik Lee, Takafumi Koshinaka
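The two-step idea above, drop frames the voice activity detector considers non-voice, then pool the rest with the voiced posteriors as weights, can be sketched as follows. The function name, the threshold value, and the use of the raw posterior as the pooling weight are assumptions for illustration; the patent's function values may be computed differently.

```python
def utterance_feature(frame_features, vad_posteriors, threshold=0.5):
    """Drop frames whose voiced posterior falls below the threshold, then pool
    the surviving frame-level features into one utterance-level feature using
    the posteriors as weights."""
    kept = [(f, p) for f, p in zip(frame_features, vad_posteriors)
            if p >= threshold]
    total = sum(p for _, p in kept)
    dim = len(frame_features[0])
    return [sum(p * f[d] for f, p in kept) / total for d in range(dim)]
```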
-
Publication number: 20220383113
Abstract: The information processing device is provided in a feature extraction block in a neural network. The information processing device acquires a local feature quantity group constituting one unit of information, and computes a weight corresponding to the degree of importance of each local feature quantity. Next, the information processing device computes a weighted statistic for the whole of the local feature quantity group using the computed weights, and deforms and outputs the local feature quantity group using the computed weighted statistic.
Type: Application
Filed: November 12, 2019
Publication date: December 1, 2022
Applicant: NEC Corporation
Inventors: Koji Okabe, Takafumi Koshinaka
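The deformation step described here can be illustrated with one simple choice of weighted statistic and deformation: compute a weighted mean over the group and subtract it from every local feature. Both choices are assumptions for the sketch; the publication does not specify them here.

```python
def deform_by_weighted_stat(local_features, weights):
    """Compute a weighted mean over the local feature quantity group and
    subtract it from each local feature (one possible 'deformation')."""
    total = sum(weights)
    dim = len(local_features[0])
    mean = [sum(w * f[d] for w, f in zip(weights, local_features)) / total
            for d in range(dim)]
    return [[f[d] - mean[d] for d in range(dim)] for f in local_features]
```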
-
Publication number: 20220130397
Abstract: A speaker recognition system includes a non-transitory computer readable medium configured to store instructions and a processor connected to the non-transitory computer readable medium. The processor is configured to execute the instructions for: extracting acoustic features from each frame of a plurality of frames in input speech data; calculating a saliency value for each frame of the plurality of frames using a first neural network (NN) based on the extracted acoustic features, wherein the first NN is trained using speaker posteriors; and extracting a speaker feature using the saliency value for each frame of the plurality of frames.
Type: Application
Filed: February 5, 2020
Publication date: April 28, 2022
Inventors: Qiongqiong Wang, Koji Okabe, Takafumi Koshinaka
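The flow this abstract describes can be sketched with a one-layer scorer standing in for the trained saliency NN: project each frame's acoustic features to a scalar, softmax over frames to get saliency values, then pool frames with those values. All names, the single-layer form, and the softmax normalization are assumptions for the example.

```python
import math

def saliency_weights(acoustic_features, w, b):
    """Toy stand-in for the trained saliency NN: a linear projection of each
    frame's features to a scalar score, followed by a softmax over frames."""
    scores = [sum(wi * x for wi, x in zip(w, frame)) + b
              for frame in acoustic_features]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def speaker_feature(acoustic_features, saliency):
    """Pool frame-level acoustic features into a speaker feature using the
    per-frame saliency values as weights."""
    dim = len(acoustic_features[0])
    return [sum(s * f[d] for s, f in zip(saliency, acoustic_features))
            for d in range(dim)]
```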
-
Publication number: 20220093120
Abstract: Provided are: a first acoustic information acquisition unit configured to acquire first acoustic information obtained by a wearable device worn by a user receiving a sound wave emitted from a first sound source; a second acoustic information acquisition unit configured to acquire second acoustic information obtained by the wearable device receiving a sound wave emitted from a second sound source different from the first sound source; and a third acoustic information acquisition unit configured to acquire third acoustic information, used for biometric matching of the user, based on the first acoustic information and the second acoustic information.
Type: Application
Filed: January 7, 2020
Publication date: March 24, 2022
Applicant: NEC Corporation
Inventors: Koji Okabe, Takayuki Arakawa, Takafumi Koshinaka
-
Publication number: 20210319087
Abstract: An authentication device is provided with: a plurality of attribute-dependent score calculation units, each calculating an attribute-dependent score dependent on a prescribed attribute for input data; an attribute-independent score calculation unit for calculating an attribute-independent score independent of the attribute for the input data; an attribute estimation unit for performing attribute estimation for the input data; and a score integration unit for determining a score weight for each of the plurality of attribute-dependent scores and for the attribute-independent score using the result of the attribute estimation, and calculating an output score using the attribute-dependent scores, the attribute-independent score, and the determined score weights.
Type: Application
Filed: June 23, 2021
Publication date: October 14, 2021
Applicant: NEC Corporation
Inventors: Koji Okabe, Hitoshi Yamamoto, Takafumi Koshinaka
-
Publication number: 20210256970
Abstract: A speech feature extraction apparatus 100 includes a voice activity detection unit 103 that drops non-voice frames from the frames corresponding to an input speech utterance and calculates a posterior of being voiced for each frame, a voice activity detection process unit 106 that calculates function values, used as weights when pooling frames to produce an utterance-level feature, from a given voice activity detection posterior, and an utterance-level feature extraction unit 112 that extracts an utterance-level feature from the frames on the basis of multiple frame-level features, using the function values.
Type: Application
Filed: June 29, 2018
Publication date: August 19, 2021
Applicant: NEC Corporation
Inventors: Qiongqiong Wang, Koji Okabe, Kong Aik Lee, Takafumi Koshinaka
-
Patent number: 11091125
Abstract: A wiper apparatus 4 is configured to wipe a rear windshield of a vehicle body rear portion. On the vehicle body rear portion, the wiper apparatus is arranged rearward of and below an ornamental member arranged to form a space for air to flow between the ornamental member and an upper surface portion of the vehicle body rear portion. The wiper apparatus is provided with a flow straightening portion standing on the wiper apparatus and including a surface portion that inclines toward the vehicle body rear side from its base portion toward its top portion.
Type: Grant
Filed: January 24, 2019
Date of Patent: August 17, 2021
Assignee: HONDA MOTOR CO., LTD.
Inventors: Koji Okabe, Masashi Miyazawa, Masaki Kawamura