Patents by Inventor GREGORY SENAY

GREGORY SENAY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10796184
    Abstract: An object detection method includes: inputting an image to a neural network; performing convolution on a current frame of the image to calculate a current feature map, that is, the feature map at the present time; combining the current feature map with a past feature map obtained by performing convolution on a past frame of the image; estimating an object candidate area using the combined past and current feature maps; estimating positional information and identification information for the one or more objects in the current frame using the combined feature maps and the estimated object candidate area; and outputting the estimated positional and identification information as object detection results. (A minimal illustrative sketch of this pipeline follows this entry.)
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: October 6, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Gregory Senay, Sotaro Tsukizawa, Min Young Kim, Luca Rigazio
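
The abstract above (shared with the related application below) describes a detector that fuses a feature map retained from a past frame with the feature map of the current frame before estimating candidate areas and predicting positions and classes. The following is a minimal sketch of that flow, not the patented implementation: the backbone layout, channel-wise concatenation as the combining step, and the simple per-location heads are all assumptions made for illustration.

```python
# Minimal sketch of past/current feature-map fusion for video object detection.
# The architecture, fusion-by-concatenation, and head design are assumptions,
# not the patented implementation.
import torch
import torch.nn as nn

class TemporalDetector(nn.Module):
    def __init__(self, in_channels=3, feat_channels=64, num_classes=10):
        super().__init__()
        # Convolution over the current frame -> current feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        fused = feat_channels * 2  # past + current maps concatenated on channels
        self.candidate_head = nn.Conv2d(fused, 1, kernel_size=1)      # object candidate areas
        self.box_head = nn.Conv2d(fused, 4, kernel_size=1)            # positional information
        self.cls_head = nn.Conv2d(fused, num_classes, kernel_size=1)  # identification information
        self.past_feature_map = None  # feature map kept from the previous frame

    def forward(self, frame):
        current = self.backbone(frame)                        # current feature map
        past = self.past_feature_map if self.past_feature_map is not None else torch.zeros_like(current)
        combined = torch.cat([past, current], dim=1)          # combine past and current maps
        candidates = torch.sigmoid(self.candidate_head(combined))
        boxes, classes = self.box_head(combined), self.cls_head(combined)
        self.past_feature_map = current.detach()              # reused when the next frame arrives
        return candidates, boxes, classes

# Feed frames one at a time; each call reuses the previous frame's feature map.
detector = TemporalDetector()
for frame in torch.rand(5, 1, 3, 128, 128):                  # five dummy video frames
    candidates, boxes, classes = detector(frame)
```

Concatenation is only one plausible combining step; the claims describe the fusion and the use of the candidate area more generally than this sketch does.
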
  • Publication number: 20190251383
    Abstract: An object detection method includes: inputting an image to a neural network; performing convolution on a current frame of the image to calculate a current feature map, that is, the feature map at the present time; combining the current feature map with a past feature map obtained by performing convolution on a past frame of the image; estimating an object candidate area using the combined past and current feature maps; estimating positional information and identification information for the one or more objects in the current frame using the combined feature maps and the estimated object candidate area; and outputting the estimated positional and identification information as object detection results.
    Type: Application
    Filed: April 25, 2019
    Publication date: August 15, 2019
    Inventors: GREGORY SENAY, SOTARO TSUKIZAWA, MIN YOUNG KIM, LUCA RIGAZIO
  • Publication number: 20190215249
    Abstract: A system and method to receive user input from a human user in a communication session between the human user and a first machine; autonomously determine a sentiment metric of the user input based on one or more sentiment criteria, wherein the sentiment metric represents an attitude of the human user; autonomously determine a quality metric associated with the communication session; autonomously rank the communication session between the human user and the first machine based on one or more of the sentiment metric and the quality metric; and determine, based on the ranking, whether to recommend human-agent intervention in the communication session between the human user and the first machine. (A minimal sketch of this decision flow follows this entry.)
    Type: Application
    Filed: December 28, 2018
    Publication date: July 11, 2019
    Inventors: Gregory Renard, Mathias Herbaux, Aurelien Verla, Audrey Duet, Anne Mervaillie, Gregory Senay
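
The abstract describes scoring a human/machine conversation on a sentiment metric and a quality metric, ranking the session from those scores, and using the ranking to decide whether to recommend a human agent. Below is a minimal sketch of that decision flow; the keyword-based sentiment, the ratio-based quality metric, the equal weighting, and the threshold are all assumptions, not the patented method.

```python
# Minimal sketch of sentiment/quality ranking for human-agent hand-off.
# Criteria, weights, and threshold are assumptions, not the patented method.
from dataclasses import dataclass

NEGATIVE_WORDS = {"angry", "useless", "cancel", "terrible", "frustrated"}

@dataclass
class SessionAssessment:
    sentiment: float  # attitude of the human user, -1 (negative) to +1 (positive)
    quality: float    # quality of the machine's responses, 0 to 1
    rank: float       # combined ranking driving the hand-off decision

def assess_session(user_messages, unanswered):
    """Score one human/machine communication session."""
    words = [w.lower().strip(".,!?") for m in user_messages for w in m.split()]
    hits = sum(w in NEGATIVE_WORDS for w in words)
    sentiment = 1.0 - 2.0 * min(hits / max(len(user_messages), 1), 1.0)  # crude sentiment metric
    quality = 1.0 - min(unanswered / max(len(user_messages), 1), 1.0)    # crude quality metric
    rank = 0.5 * sentiment + 0.5 * quality                               # rank the session
    return SessionAssessment(sentiment, quality, rank)

def recommend_human_agent(assessment, threshold=0.25):
    """Recommend human-agent intervention when the ranking falls below a threshold."""
    return assessment.rank < threshold

messages = ["I already tried that, this is useless", "Can I just cancel my order?"]
assessment = assess_session(messages, unanswered=1)
print(recommend_human_agent(assessment))  # True: frustrated user, low-quality session
```
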
  • Publication number: 20170235361
    Abstract: Exemplary embodiments of the present invention relate to an interaction system for a vehicle that can be configured by a user. For example, the interaction system can include an image capture resource to receive eye image data of a user and a processor to identify the direction of the user's eye gaze based on that data. The processor can correlate the eye gaze with a driving experience function (DEF) and transmit a DEF communication. (A minimal sketch of this mapping follows this entry.)
    Type: Application
    Filed: January 20, 2017
    Publication date: August 17, 2017
    Inventors: LUCA RIGAZIO, CASEY JOSEPH CARLIN, ANGELIQUE CAMILLE LANG, MIKI NOBUMORI, GREGORY SENAY, AKIHIKO SUGIURA
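
The abstract describes capturing eye images, identifying the gaze direction, correlating the gaze with a driving experience function (DEF), and transmitting a DEF communication. The sketch below shows one way such a mapping could be wired up; the gaze regions, DEF names, and message format are assumptions for illustration only.

```python
# Minimal sketch of correlating eye gaze to a driving experience function (DEF).
# Gaze regions, DEF names, and the message format are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    x: float  # normalised horizontal gaze direction, -1 (far left) to +1 (far right)
    y: float  # normalised vertical gaze direction, -1 (down) to +1 (up)

# User-configurable mapping from gaze regions (x_min, x_max, y_min, y_max) to DEFs.
GAZE_REGIONS = {
    "instrument_cluster": (-0.2, 0.2, -1.0, -0.3),
    "center_display": (0.2, 0.8, -0.6, 0.2),
    "rearview_mirror": (-0.1, 0.3, 0.5, 1.0),
}
DEF_BY_REGION = {
    "instrument_cluster": "SHOW_TRIP_INFO",
    "center_display": "OPEN_MEDIA_MENU",
    "rearview_mirror": "DIM_MIRROR",
}

def correlate_gaze_to_def(sample: GazeSample) -> Optional[str]:
    """Return the DEF for the region the user is looking at, if any."""
    for region, (x_min, x_max, y_min, y_max) in GAZE_REGIONS.items():
        if x_min <= sample.x <= x_max and y_min <= sample.y <= y_max:
            return DEF_BY_REGION[region]
    return None

def transmit_def_communication(def_name):
    """Package the DEF as a message for the rest of the vehicle (format assumed)."""
    return {"type": "DEF", "name": def_name}

gaze = GazeSample(x=0.4, y=-0.1)                 # driver glances at the center display
def_name = correlate_gaze_to_def(gaze)
if def_name is not None:
    print(transmit_def_communication(def_name))  # {'type': 'DEF', 'name': 'OPEN_MEDIA_MENU'}
```
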
  • Publication number: 20160288708
    Abstract: Exemplary embodiments of the present invention relate to an interaction system for a vehicle. The interaction system includes one or more input devices configured to receive data comprising a characteristic of a driver of the vehicle, one or more output devices configured to deliver an output to the driver, and a human machine interface controller. The controller receives and analyzes the data to identify an emotional state of the driver, generates the output based, at least in part, on that emotional state, and sends the output to the one or more output devices. The output includes a simulated emotional state of the human machine interface. (A minimal sketch of this pipeline follows this entry.)
    Type: Application
    Filed: March 30, 2016
    Publication date: October 6, 2016
    Inventors: WOOSUK CHANG, ANGEL CAMILLE LANG, MIKI NOBUMORI, LUCA RIGAZIO, GREGORY SENAY, AKIHIKO SUGIURA
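
The abstract describes a human machine interface controller that analyzes driver data to identify an emotional state and then generates output that itself carries a simulated emotional state. The sketch below shows that data flow under assumed inputs; the chosen driver characteristics, the rule-based classifier, and the response texts are illustrative stand-ins, not the patented controller.

```python
# Minimal sketch of an emotion-aware HMI controller: input devices -> emotional
# state -> output with a simulated emotional state. All rules and responses are
# assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class DriverData:
    heart_rate_bpm: float    # e.g. from a seat or wearable sensor (input device)
    voice_pitch_hz: float    # e.g. from the cabin microphone (input device)
    steering_reversals: int  # abrupt corrections over the last minute

def identify_emotional_state(data):
    """Coarse rule-based stand-in for the controller's analysis of driver data."""
    if data.heart_rate_bpm > 100 and data.steering_reversals > 5:
        return "stressed"
    if data.voice_pitch_hz > 220:
        return "excited"
    return "calm"

def generate_output(emotional_state):
    """Build the HMI output, including a simulated emotional state for the interface."""
    simulated = {"stressed": "soothing", "excited": "cheerful", "calm": "neutral"}[emotional_state]
    speech = {
        "stressed": "Let's take it easy. Want a calmer route?",
        "excited": "Nice drive! Shall I queue up your playlist?",
        "calm": "All good. Let me know if you need anything.",
    }[emotional_state]
    return {"simulated_emotion": simulated, "speech": speech}

# One pass through the pipeline: input devices -> controller -> output devices.
data = DriverData(heart_rate_bpm=112, voice_pitch_hz=180, steering_reversals=7)
output = generate_output(identify_emotional_state(data))
print(output["simulated_emotion"], "-", output["speech"])
```
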