Patents by Inventor Yedid Hoshen

Yedid Hoshen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230281959
    Abstract: A method comprising: receiving, as input, training images, wherein at least a majority of the training images represent normal data instances; receiving, as input, a target image; extracting (i) a set of feature representations from a plurality of image locations within each of the training images, and (ii) target feature representations from a plurality of target image locations within the target image; calculating, with respect to a target image location of the plurality of target image locations in the target image, a distance between (iii) the target feature representation of the target image location, and (iv) a subset from the set of feature representations comprising the k nearest feature representations to the target feature representation; and determining that the target image location is anomalous, when the calculated distance exceeds a predetermined threshold. (An illustrative code sketch of this approach follows the listing.)
    Type: Application
    Filed: March 25, 2021
    Publication date: September 7, 2023
    Inventors: Yedid HOSHEN, Liron BERGMAN, Niv COHEN, Tal REISS
  • Publication number: 20220253699
    Abstract: A system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, as input, a plurality of data instances representing, at least in part, normal data, apply, to each of the data instances, one or more transformations selected from a set of transformations, to generate a set of transformed data instances, and at a training stage, train a machine learning model on a training set comprising: (i) the set of transformed data instances, and (ii) labels indicating the transformation applied to each of the transformed data instances in the set, to predict a transformation from the set applied to a target data instance. (An illustrative code sketch of this training setup follows the listing.)
    Type: Application
    Filed: June 18, 2020
    Publication date: August 11, 2022
    Inventors: Yedid HOSHEN, Liron BERGMAN
  • Patent number: 9697826
    Abstract: Methods, including computer programs encoded on a computer storage medium, for enhancing the processing of audio waveforms for speech recognition using various neural network processing techniques. In one aspect, a method includes: receiving multiple channels of audio data corresponding to an utterance; convolving each of multiple filters, in a time domain, with each of the multiple channels of audio waveform data to generate convolution outputs, wherein the multiple filters have parameters that have been learned during a training process that jointly trains the multiple filters and trains a deep neural network as an acoustic model; combining, for each of the multiple filters, the convolution outputs for the filter for the multiple channels of audio waveform data; inputting the combined convolution outputs to the deep neural network trained jointly with the multiple filters; and providing a transcription for the utterance that is determined. (An illustrative code sketch of this processing pipeline follows the listing.)
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: July 4, 2017
    Assignee: Google Inc.
    Inventors: Tara N. Sainath, Ron J. Weiss, Kevin William Wilson, Andrew W. Senior, Arun Narayanan, Yedid Hoshen, Michiel A. U. Bacchiani
  • Publication number: 20160322055
    Abstract: Methods, including computer programs encoded on a computer storage medium, for enhancing the processing of audio waveforms for speech recognition using various neural network processing techniques. In one aspect, a method includes: receiving multiple channels of audio data corresponding to an utterance; convolving each of multiple filters, in a time domain, with each of the multiple channels of audio waveform data to generate convolution outputs, wherein the multiple filters have parameters that have been learned during a training process that jointly trains the multiple filters and trains a deep neural network as an acoustic model; combining, for each of the multiple filters, the convolution outputs for the filter for the multiple channels of audio waveform data; inputting the combined convolution outputs to the deep neural network trained jointly with the multiple filters; and providing a transcription for the utterance that is determined.
    Type: Application
    Filed: July 8, 2016
    Publication date: November 3, 2016
    Inventors: Tara N. Sainath, Ron J. Weiss, Kevin William Wilson, Andrew W. Senior, Arun Narayanan, Yedid Hoshen, Michiel A.U. Bacchiani
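
Illustrative code sketches

The method in publication 20230281959 scores individual image locations by how far their feature representations lie from the nearest feature representations gathered from normal training images. The snippet below is a minimal NumPy sketch of that kind of k-nearest-neighbor scoring: the random stand-in features, the Euclidean distance, the averaging over the k nearest neighbors, and the threshold value are illustrative assumptions rather than details taken from the filing (in practice the features would come from a pretrained image backbone evaluated at many image locations).

    import numpy as np

    def knn_anomaly_scores(train_feats, target_feats, k=5):
        """Score each target location by its mean distance to the k nearest
        training feature representations (larger = more anomalous)."""
        # Pairwise Euclidean distances: (num_target_locations, num_train_locations).
        dists = np.linalg.norm(target_feats[:, None, :] - train_feats[None, :, :], axis=-1)
        # Keep the k smallest distances per target location and average them.
        knn = np.sort(dists, axis=1)[:, :k]
        return knn.mean(axis=1)

    # Random stand-in features; real ones would be extracted from the training
    # images and the target image at a plurality of image locations.
    rng = np.random.default_rng(0)
    train_feats = rng.normal(size=(1000, 64))   # locations from normal training images
    target_feats = rng.normal(size=(49, 64))    # locations from the target image
    scores = knn_anomaly_scores(train_feats, target_feats, k=5)
    threshold = 12.0                            # hypothetical, tuned on held-out data
    anomalous_locations = scores > threshold    # True where a location is flagged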
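
Publication 20220253699 describes applying transformations from a fixed set to normal data instances and training a model to predict which transformation was applied. The sketch below assumes image rotations as the transformation set and a scikit-learn logistic-regression classifier as a stand-in for the machine learning model; the final anomaly-scoring step is one common way such a model is used afterwards and is not spelled out in the abstract.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed transformation set: rotations by 0, 90, 180, and 270 degrees.
    def transform(img, label):
        return np.rot90(img, k=label)

    rng = np.random.default_rng(0)
    normal_images = rng.normal(size=(200, 8, 8))   # stand-in "normal" data instances

    # Training set: every transformed instance paired with the label of the
    # transformation that produced it.
    X, y = [], []
    for img in normal_images:
        for label in range(4):
            X.append(transform(img, label).ravel())
            y.append(label)

    clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

    # At test time, low confidence in recovering the applied transformation
    # suggests the instance differs from the normal training data.
    test_img = rng.normal(size=(8, 8))
    probs = clf.predict_proba(transform(test_img, 1).ravel()[None, :])
    anomaly_score = -np.log(probs[0, 1] + 1e-12)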
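
Patent 9697826 and publication 20160322055 share an abstract describing convolving multiple time-domain filters with each channel of multi-channel audio, combining the per-channel outputs for each filter, and feeding the result to a deep neural network acoustic model trained jointly with the filters. The sketch below only mimics that forward data flow with random weights; the filter shapes, the use of summation to combine channels, and the tiny two-layer network are assumptions for illustration, not the trained system the patent covers.

    import numpy as np

    rng = np.random.default_rng(0)

    num_channels, num_samples = 2, 400      # e.g., two microphone channels
    num_filters, filter_len = 4, 25         # time-domain filters (learned jointly in practice)

    waveforms = rng.normal(size=(num_channels, num_samples))
    filters = rng.normal(size=(num_filters, num_channels, filter_len))

    # Convolve each filter with each channel in the time domain, then combine
    # (here: sum) the per-channel convolution outputs for each filter.
    combined = []
    for f in range(num_filters):
        per_channel = [np.convolve(waveforms[c], filters[f, c], mode="valid")
                       for c in range(num_channels)]
        combined.append(np.sum(per_channel, axis=0))
    combined = np.stack(combined)           # shape: (num_filters, num_frames)

    # A small feed-forward network with random weights stands in for the
    # jointly trained deep acoustic model (forward pass only).
    x = combined.ravel()
    W1, b1 = rng.normal(size=(128, x.size)) * 0.01, np.zeros(128)
    W2, b2 = rng.normal(size=(40, 128)) * 0.01, np.zeros(40)   # e.g., 40 output classes
    hidden = np.maximum(0.0, W1 @ x + b1)
    logits = W2 @ hidden + b2
    posteriors = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()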