Patents by Inventor Mark Grobman

Mark Grobman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11461614
    Abstract: A novel and useful system and method of data driven quantization optimization of weights and input data in an artificial neural network (ANN). The system reduces quantization implications (i.e. error) in a limited resource system by employing the information available in the data actually observed by the system. Data counters in the layers of the network observe the data input thereto. The distribution of the data is used to determine an optimum quantization scheme to apply to the weights, input data, or both. The mechanism is sensitive to the data observed at the input layer of the network. As a result, the network auto-tunes to optimize the instance specific representation of the network. The network becomes customized (i.e. specialized) to the inputs it observes and better fits itself to the subset of the sample space that is applicable to its actual data flow. As a result, nominal process noise is reduced and detection accuracy improves. (A minimal code sketch illustrating this data-driven quantization idea appears after the listing below.)
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: October 4, 2022
    Inventors: Avi Baum, Or Danon, Daniel Ciubotariu, Mark Grobman, Alex Finkelstein
  • Patent number: 10387298
    Abstract: A novel and useful artificial neural network that incorporates emphasis and focus techniques to extract more information from one or more portions of an input image compared to the rest of the image. The ANN recognizes that valuable information in an input image is typically not distributed throughout the image but rather is concentrated in one or more regions. Rather than implement CNN layers sequentially (i.e. row by row) on the input domain of each layer, the present invention leverages the fact that valuable information is focused in one or more regions of the image where it is desirable to apply more attention and for which it is desired to apply more elaborate evaluation. Precision dilution can be applied to those portions of the input image that are not the center of focus and emphasis. A spatially aware function determines the location(s) of the areas of focus and is applied to the first convolutional layer. (A minimal code sketch illustrating this emphasis-and-focus idea appears after the listing below.)
    Type: Grant
    Filed: August 6, 2017
    Date of Patent: August 20, 2019
    Assignee: Hailo Technologies Ltd.
    Inventors: Avi Baum, Or Danon, Mark Grobman, Hadar Zeitlin
  • Publication number: 20180285678
    Abstract: A novel and useful artificial neural network that incorporates emphasis and focus techniques to extract more information from one or more portions of an input image compared to the rest of the image. The ANN recognizes that valuable information in an input image is typically not distributed throughout the image but rather is concentrated in one or more regions. Rather than implement CNN layers sequentially (i.e. row by row) on the input domain of each layer, the present invention leverages the fact that valuable information is focused in one or more regions of the image where it is desirable to apply more attention and for which it is desired to apply more elaborate evaluation. Precision dilution can be applied to those portions of the input image that are not the center of focus and emphasis. A spatially aware function determines the location(s) of the areas of focus and is applied to the first convolutional layer.
    Type: Application
    Filed: August 6, 2017
    Publication date: October 4, 2018
    Applicant: Hailo Technologies Ltd.
    Inventors: Avi Baum, Or Danon, Mark Grobman, Hadar Zeitlin
  • Publication number: 20180285736
    Abstract: A novel and useful system and method of data driven quantization optimization of weights and input data in an artificial neural network (ANN). The system reduces quantization implications (i.e. error) in a limited resource system by employing the information available in the data actually observed by the system. Data counters in the layers of the network observe the data input thereto. The distribution of the data is used to determine an optimum quantization scheme to apply to the weights, input data, or both. The mechanism is sensitive to the data observed at the input layer of the network. As a result, the network auto-tunes to optimize the instance specific representation of the network. The network becomes customized (i.e. specialized) to the inputs it observes and better fits itself to the subset of the sample space that is applicable to its actual data flow. As a result, nominal process noise is reduced and detection accuracy improves.
    Type: Application
    Filed: December 12, 2017
    Publication date: October 4, 2018
    Applicant: Hailo Technologies Ltd.
    Inventors: Avi Baum, Or Danon, Daniel Ciubotariu, Mark Grobman, Alex Finkelstein
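
Illustrative sketch: data-driven quantization (patent 11461614 / publication 20180285736). The abstract describes per-layer data counters whose observed distribution determines the quantization scheme applied to weights, input data, or both. The Python sketch below only illustrates that general idea under simple assumptions; the class name, the percentile-based clipping, and the symmetric 8-bit uniform scheme are invented here for illustration and do not reproduce the patented mechanism.

import numpy as np

class DataDrivenQuantizer:
    """Illustrative per-layer quantizer: a counter records the data observed
    at a layer, and the observed distribution (rather than a fixed worst-case
    range) sets the uniform quantization scale."""

    def __init__(self, num_bits=8, clip_percentile=99.9):
        self.num_bits = num_bits
        self.clip_percentile = clip_percentile
        self.observed = []  # plays the role of a per-layer data counter

    def observe(self, activations):
        # Record the magnitudes of data actually flowing into this layer.
        self.observed.append(np.abs(np.asarray(activations)).ravel())

    def calibrate(self):
        # Choose the clipping range from the observed distribution.
        samples = np.concatenate(self.observed)
        max_val = np.percentile(samples, self.clip_percentile)
        self.scale = max_val / (2 ** (self.num_bits - 1) - 1)

    def quantize(self, x):
        # Symmetric uniform quantization with the data-derived scale,
        # returned in de-quantized form so the error is easy to inspect.
        qmax = 2 ** (self.num_bits - 1) - 1
        q = np.clip(np.round(x / self.scale), -qmax, qmax)
        return q * self.scale

# Calibrate on observed inputs, then quantize the layer's weights.
rng = np.random.default_rng(0)
quantizer = DataDrivenQuantizer(num_bits=8)
for _ in range(16):
    quantizer.observe(rng.normal(0.0, 0.5, size=(1, 256)))
quantizer.calibrate()
weights = rng.normal(0.0, 0.5, size=(256, 128))
error = np.mean((weights - quantizer.quantize(weights)) ** 2)
print(f"mean squared quantization error: {error:.6f}")

Because the scale is derived from the data the network actually observes, repeating this calibration per layer specializes the representation to the inputs seen in deployment, which is the auto-tuning behavior the abstract describes.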
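Illustrative sketch: emphasis and focus with precision dilution (patent 10387298 / publication 20180285678). The abstract describes concentrating precision on the areas of focus and diluting it elsewhere before the first convolutional layer. The sketch below is a minimal illustration under assumed details: the circular focus mask, the bit-widths, and the function names are hypothetical and stand in for the patent's spatial-aware function.

import numpy as np

def spatial_focus_mask(height, width, center, radius):
    """Hypothetical spatial-aware function: mark a circular region of the
    input image as the area of focus."""
    ys, xs = np.mgrid[0:height, 0:width]
    return (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2

def quantize_uniform(x, num_bits):
    # Uniform quantization of values in [0, 1] to the given bit-width.
    levels = 2 ** num_bits - 1
    return np.round(x * levels) / levels

def emphasize(image, focus_mask, focus_bits=8, dilute_bits=3):
    """Keep higher precision inside the focus region and dilute precision
    elsewhere before the image reaches the first convolutional layer."""
    out = quantize_uniform(image, dilute_bits)              # precision dilution
    out[focus_mask] = quantize_uniform(image[focus_mask], focus_bits)
    return out

# Emphasize the center of a synthetic 64x64 image.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
mask = spatial_focus_mask(64, 64, center=(32, 32), radius=12)
emphasized = emphasize(image, mask)
print(emphasized.shape, int(mask.sum()), "pixels kept at full precision")

Applying the dilution only to the input of the first convolutional layer keeps the rest of the network unchanged while spending less precision on regions that are not the center of focus and emphasis.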