Patents by Inventor Julien van Hout

Julien van Hout has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220116743
    Abstract: The present application discloses systems, methods, and computer-readable media that can utilize a handheld-movement-detection model to detect whether a computing device is moved by hand or otherwise by a person within a vehicle. For instance, the disclosed systems can receive movement data from a computing device and generate filtered signals. Subsequently, the disclosed systems can utilize the handheld-movement-detection model to convert the filtered signals into a binary movement-classification signal (based on a signal threshold) to indicate the presence of handheld movement of a device. Furthermore, the disclosed systems can also utilize movement data from a computing device to detect whether the computing device is mounted and/or to detect vehicular movements.
    Type: Application
    Filed: October 14, 2020
    Publication date: April 14, 2022
    Inventors: Alya Abbott, Alexander Wesley Contryman, Devjit Chakravarti, Julien van Hout, Zhan Zhang, Gautam Kedia
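    A minimal sketch of the threshold-based classification step this abstract describes, assuming a simple moving-average filter and an arbitrary threshold; the function names, window size, and threshold value are illustrative assumptions, not the claimed handheld-movement-detection model:
    ```python
    import numpy as np

    HANDHELD_THRESHOLD = 0.5  # hypothetical signal threshold (arbitrary units)

    def moving_average(signal: np.ndarray, window: int = 25) -> np.ndarray:
        """Simple low-pass filter standing in for the 'filtered signals' step."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def classify_handheld(accel_magnitude: np.ndarray) -> np.ndarray:
        """Convert a filtered signal into a binary movement-classification signal."""
        filtered = moving_average(accel_magnitude)
        return (filtered > HANDHELD_THRESHOLD).astype(int)  # 1 = handheld movement detected

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        magnitudes = np.abs(rng.normal(0.0, 0.6, size=500))  # synthetic accelerometer magnitudes
        print(classify_handheld(magnitudes)[:20])
    ```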
  • Patent number: 11217228
    Abstract: Systems and methods for speech recognition are provided. In some aspects, the method comprises receiving, using an input, an audio signal. The method further comprises splitting the audio signal into auditory test segments. The method further comprises extracting, from each of the auditory test segments, a set of acoustic features. The method further comprises applying the set of acoustic features to a deep neural network to produce a hypothesis for the corresponding auditory test segment. The method further comprises selectively performing one or more of: indirect adaptation of the deep neural network and direct adaptation of the deep neural network.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: January 4, 2022
    Assignee: SRI International
    Inventors: Vikramjit Mitra, Horacio E. Franco, Chris D. Bartels, Dimitra Vergyri, Julien van Hout, Martin Graciarena
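    A minimal sketch of the segment-split, feature-extraction, and per-segment hypothesis steps described in this abstract, assuming toy spectral features and a stand-in feedforward network; it does not implement the patent's indirect or direct adaptation:
    ```python
    import numpy as np

    def split_into_segments(audio: np.ndarray, segment_len: int = 1600) -> list[np.ndarray]:
        """Split an audio signal into fixed-length auditory test segments."""
        return [audio[i:i + segment_len]
                for i in range(0, len(audio) - segment_len + 1, segment_len)]

    def extract_features(segment: np.ndarray, n_bins: int = 40) -> np.ndarray:
        """Toy acoustic features: log magnitude spectrum pooled into n_bins bands."""
        spectrum = np.abs(np.fft.rfft(segment))
        bands = np.array_split(spectrum, n_bins)
        return np.log1p(np.array([b.mean() for b in bands]))

    class ToyDNN:
        """Stand-in feedforward network producing a per-segment hypothesis."""
        def __init__(self, n_in: int = 40, n_hidden: int = 64, n_classes: int = 10, seed: int = 0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
            self.w2 = rng.normal(0, 0.1, (n_hidden, n_classes))

        def hypothesis(self, features: np.ndarray) -> int:
            hidden = np.maximum(0, features @ self.w1)   # ReLU hidden layer
            return int(np.argmax(hidden @ self.w2))      # most likely class index

    if __name__ == "__main__":
        audio = np.random.default_rng(1).normal(size=16000)  # one second of fake 16 kHz audio
        dnn = ToyDNN()
        print([dnn.hypothesis(extract_features(seg)) for seg in split_into_segments(audio)])
    ```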
  • Publication number: 20210287262
    Abstract: This disclosure describes a vehicle-motion-analysis system that can align axes for a provider device and a corresponding transportation vehicle based on the provider device's location and motion data as a basis for generating driving-event scores for particular driving events. In particular, the disclosed systems can generate axes-rotation parameters that align axes of a provider device with axes of a transportation vehicle. In addition, the disclosed systems can identify motion paths, motion patterns, or other driving behaviors that occur during particular driving events. Further, the disclosed systems can generate driving-event scores for such driving events and can customize a graphical user interface based on the driving-event scores to reflect a provider rating and/or a location rating.
    Type: Application
    Filed: March 16, 2020
    Publication date: September 16, 2021
    Inventors: Alya Abbott, Devjit Chakravarti, Alexander Wesley Contryman, Michael Jonathan DiCarlo, Julien van Hout, James Kevin Murphy, Renee Hei-kyung Park, Ashivni Shekhawat, Zhan Zhang
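    A minimal sketch of one common way to obtain a device-to-vehicle rotation like the axes-rotation parameters this abstract mentions: estimate the gravity direction from accelerometer samples and rotate it onto the vehicle's vertical axis. The Rodrigues-formula approach and all names here are illustrative assumptions, not the patented method:
    ```python
    import numpy as np

    def rotation_aligning(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Rotation matrix sending unit vector a onto unit vector b (Rodrigues formula)."""
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        v = np.cross(a, b)
        c = float(np.dot(a, b))
        if np.isclose(c, -1.0):                      # anti-parallel: rotate 180 degrees
            n = np.cross(a, [1.0, 0.0, 0.0])
            if np.linalg.norm(n) < 1e-8:             # a is along x; pick y instead
                n = np.cross(a, [0.0, 1.0, 0.0])
            n = n / np.linalg.norm(n)
            return 2.0 * np.outer(n, n) - np.eye(3)
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    def device_to_vehicle_rotation(accel_samples: np.ndarray) -> np.ndarray:
        """Align the device's measured gravity direction with the vehicle's 'up' axis."""
        gravity_device = accel_samples.mean(axis=0)  # average accel while roughly at rest
        vehicle_up = np.array([0.0, 0.0, 1.0])
        return rotation_aligning(gravity_device, vehicle_up)

    if __name__ == "__main__":
        samples = np.array([[0.1, 9.7, 0.2], [0.0, 9.8, 0.1], [0.2, 9.8, 0.0]])  # device y-axis near 'up'
        R = device_to_vehicle_rotation(samples)
        g = samples.mean(axis=0)
        print(np.round(R @ (g / np.linalg.norm(g)), 3))  # close to [0, 0, 1]
    ```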
  • Patent number: 10777188
    Abstract: A computing system determines whether a reference audio signal contains a query. A time-frequency convolutional neural network (TFCNN) comprises time and frequency convolutional layers and a series of additional layers, which include a bottleneck layer. A computation engine applies the TFCNN to samples of a query utterance at least through the bottleneck layer. A query feature vector comprises output values of the bottleneck layer generated when the computation engine applies the TFCNN to the samples of the query utterance. The computation engine also applies the TFCNN to samples of the reference audio signal at least through the bottleneck layer. A reference feature vector comprises output values of the bottleneck layer generated when the computation engine applies the TFCNN to the samples of the reference audio signal. The computation engine determines at least one detection score based on the query feature vector and the reference feature vector.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: September 15, 2020
    Assignee: SRI International
    Inventors: Julien van Hout, Vikramjit Mitra, Horacio Franco, Emre Yilmaz
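    A minimal sketch of the final scoring step this abstract describes: compare a query feature vector to a reference feature vector. The random-projection "embedding" merely stands in for applying the TFCNN through its bottleneck layer, and cosine similarity is an assumed choice of detection score; neither is taken from the patent:
    ```python
    import numpy as np

    def bottleneck_embedding(samples: np.ndarray, dim: int = 64, seed: int = 0) -> np.ndarray:
        """Placeholder for 'apply the TFCNN at least through the bottleneck layer'."""
        rng = np.random.default_rng(seed)
        projection = rng.normal(0, 1.0 / np.sqrt(len(samples)), (len(samples), dim))
        return samples @ projection

    def detection_score(query_vec: np.ndarray, reference_vec: np.ndarray) -> float:
        """Cosine similarity between query and reference bottleneck feature vectors."""
        return float(np.dot(query_vec, reference_vec) /
                     (np.linalg.norm(query_vec) * np.linalg.norm(reference_vec)))

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        query = rng.normal(size=1600)        # fake query utterance samples
        reference = rng.normal(size=1600)    # fake reference audio samples
        print(detection_score(bottleneck_embedding(query), bottleneck_embedding(reference)))
    ```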
  • Publication number: 20200168208
    Abstract: Systems and methods for speech recognition are provided. In some aspects, the method comprises receiving, using an input, an audio signal. The method further comprises splitting the audio signal into auditory test segments. The method further comprises extracting, from each of the auditory test segments, a set of acoustic features. The method further comprises applying the set of acoustic features to a deep neural network to produce a hypothesis for the corresponding auditory test segment. The method further comprises selectively performing one or more of: indirect adaptation of the deep neural network and direct adaptation of the deep neural network.
    Type: Application
    Filed: March 22, 2017
    Publication date: May 28, 2020
    Inventors: Vikramjit Mitra, Horacio E. Franco, Chris D. Bartels, Dimitra Vergyri, Julien van Hout, Martin Graciarena
  • Publication number: 20200152179
    Abstract: A computing system determines whether a reference audio signal contains a query. A time-frequency convolutional neural network (TFCNN) comprises time and frequency convolutional layers and a series of additional layers, which include a bottleneck layer. A computation engine applies the TFCNN to samples of a query utterance at least through the bottleneck layer. A query feature vector comprises output values of the bottleneck layer generated when the computation engine applies the TFCNN to the samples of the query utterance. The computation engine also applies the TFCNN to samples of the reference audio signal at least through the bottleneck layer. A reference feature vector comprises output values of the bottleneck layer generated when the computation engine applies the TFCNN to the samples of the reference audio signal. The computation engine determines at least one detection score based on the query feature vector and the reference feature vector.
    Type: Application
    Filed: November 14, 2018
    Publication date: May 14, 2020
    Inventors: Julien van Hout, Vikramjit Mitra, Horacio Franco, Emre Yilmaz