Patents by Inventor Raizy Kellermann

Raizy Kellermann has filed for patents to protect the following inventions. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11574175
    Abstract: Embodiments are directed to security optimizing compute distribution in a hybrid deep learning environment. An embodiment of an apparatus includes one or more processors to determine security capabilities and compute capabilities of a client machine requesting to use a machine learning (ML) model hosted by the apparatus; determine, based on the security capabilities and based on exposure criteria of the ML model, that one or more layers of the ML model can be offloaded to the client machine for processing; define, based on the compute capabilities of the client machine, a split level of the one or more layers of the ML model for partition of the ML model, the partition comprising offload layers of the one or more layers of the ML model to be processed at the client machine; and cause the offload layers of the ML model to be downloaded to the client machine.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: February 7, 2023
    Assignee: INTEL CORPORATION
    Inventors: Oleg Pogorelik, Alex Nayshtut, Michael E. Kounavis, Raizy Kellermann, David M. Durham
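
    A minimal Python sketch of the split-level selection this abstract describes, assuming hypothetical per-layer exposure scores, per-layer compute costs, and scalar client capability values; none of these names or thresholds come from the patent:

    ```python
    # Hypothetical sketch: choose how many leading layers of an ML model can be
    # offloaded to a client, given the client's security and compute capabilities.
    from dataclasses import dataclass


    @dataclass
    class Layer:
        name: str
        exposure_risk: int   # sensitivity of the layer's parameters (higher = more sensitive)
        compute_cost: float  # relative cost of running the layer on the client


    def choose_split_level(layers, client_security_level, client_compute_budget):
        """Return the number of leading layers to offload to the client.

        A layer is offloadable only if its exposure risk is within what the
        client's security capabilities allow, and the cumulative compute cost
        of all offloaded layers stays within the client's compute budget.
        """
        split, spent = 0, 0.0
        for layer in layers:
            if layer.exposure_risk > client_security_level:
                break                       # exposure criteria stop the offload here
            if spent + layer.compute_cost > client_compute_budget:
                break                       # client cannot afford further layers
            spent += layer.compute_cost
            split += 1
        return split


    if __name__ == "__main__":
        model = [
            Layer("conv1", exposure_risk=1, compute_cost=1.0),
            Layer("conv2", exposure_risk=1, compute_cost=2.0),
            Layer("fc1",   exposure_risk=3, compute_cost=0.5),  # too sensitive for weak clients
        ]
        # Offloads the first two layers; the sensitive fc1 layer stays on the server.
        print(choose_split_level(model, client_security_level=2, client_compute_budget=4.0))
    ```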
  • Publication number: 20220121944
    Abstract: Adversarial sample protection for machine learning is described. An example of a storage medium includes instructions for initiating processing of examples for training of an inference engine in a system; dynamically selecting a subset of defensive preprocessing methods from a repository of defensive preprocessing methods for a current iteration of processing, wherein a subset of defensive preprocessing methods is selected for each iteration of processing; performing training of the inference engine with a plurality of examples, wherein the training of the inference engine includes operation of the selected subset of defensive preprocessing methods; and performing an inference operation with the inference engine, including utilizing the selected subset of preprocessing defenses for the current iteration of processing.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Applicant: Intel Corporation
    Inventors: Alex Nayshtut, Raizy Kellermann, Omer Ben-Shalom, Dor Levy
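
    A minimal Python sketch of the dynamic defense rotation described above, with placeholder preprocessing methods and a toy training loop; the method names, subset size, and random-sampling policy are illustrative assumptions rather than the patent's implementation:

    ```python
    # Hypothetical sketch: rotate a random subset of defensive preprocessing methods
    # on each training iteration, then reuse the current subset at inference time.
    import random

    import numpy as np


    # Repository of defensive preprocessing methods (illustrative placeholders).
    def quantize(x):
        return np.round(x * 16) / 16


    def add_noise(x):
        return x + np.random.normal(0.0, 0.01, size=x.shape)


    def reduce_bit_depth(x):
        return np.floor(x * 8) / 8


    DEFENSE_REPOSITORY = [quantize, add_noise, reduce_bit_depth]


    def select_defense_subset(repository, subset_size):
        """Dynamically pick a subset of defenses for the current iteration."""
        return random.sample(repository, subset_size)


    def apply_defenses(x, defenses):
        for defense in defenses:
            x = defense(x)
        return x


    def train(model_step, examples, iterations=10, subset_size=2):
        """Train with a freshly selected defense subset on every iteration."""
        current_subset = []
        for _ in range(iterations):
            current_subset = select_defense_subset(DEFENSE_REPOSITORY, subset_size)
            for x, y in examples:
                model_step(apply_defenses(x, current_subset), y)
        return current_subset  # subset reused for inference in this sketch


    if __name__ == "__main__":
        examples = [(np.random.rand(8), 0), (np.random.rand(8), 1)]
        subset = train(lambda x, y: None, examples)          # dummy training step
        protected_input = apply_defenses(np.random.rand(8), subset)
    ```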
  • Publication number: 20220114255
    Abstract: Machine learning fraud resiliency using perceptual descriptors is described. An example of a computer-readable storage medium includes instructions for accessing multiple examples in a training dataset for a classifier system; calculating one or more perceptual hashes for each of the examples; generating clusters of perceptual hashes for the multiple examples based on the calculation of the one or more perceptual hashes for each of the plurality of examples; obtaining an inference sample for classification by the classifier system; generating a first classification result for the inference sample utilizing a neural network classifier and generating a second classification result utilizing the generated clusters of perceptual hashes; comparing the first classification result with the second classification result; and, upon a determination that the first classification result does not match the second classification result, determining a suspicion of an adversarial attack.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Raizy Kellermann, Omer Ben-Shalom, Alex Nayshtut
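
    A minimal Python sketch of the described cross-check, using a simple block-average hash and nearest-cluster matching as stand-ins; the hashing scheme, clustering, and comparison rule are assumptions for illustration, not the patent's method:

    ```python
    # Hypothetical sketch: cross-check a neural-network classification against a
    # classification derived from clusters of perceptual hashes, and flag a
    # possible adversarial sample when the two disagree.
    import numpy as np


    def average_hash(image, hash_size=8):
        """Simple perceptual hash: block-average downscale, threshold at the mean."""
        h, w = image.shape
        bh, bw = h // hash_size, w // hash_size
        small = image[:bh * hash_size, :bw * hash_size] \
            .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
        return (small > small.mean()).astype(np.uint8).ravel()


    def build_hash_clusters(training_examples):
        """Group perceptual hashes of training examples by their labels."""
        clusters = {}
        for image, label in training_examples:
            clusters.setdefault(label, []).append(average_hash(image))
        return clusters


    def classify_by_hash(image, clusters):
        """Pick the label whose cluster contains the closest hash (Hamming distance)."""
        sample_hash = average_hash(image)
        best_label, best_dist = None, float("inf")
        for label, hashes in clusters.items():
            dist = min(int(np.sum(sample_hash != h)) for h in hashes)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label


    def check_sample(image, nn_classifier, clusters):
        nn_label = nn_classifier(image)
        hash_label = classify_by_hash(image, clusters)
        return {"label": nn_label, "suspected_adversarial": nn_label != hash_label}


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        train_set = [(rng.random((32, 32)), i % 2) for i in range(10)]
        clusters = build_hash_clusters(train_set)
        print(check_sample(rng.random((32, 32)), lambda img: 0, clusters))
    ```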
  • Publication number: 20220116513
    Abstract: Privacy-preserving reconstruction for compressed sensing is described. An example of a method includes capturing raw image data for a scene with a compressed sensing image sensor; performing reconstruction of the raw image data, including performing an enhancement reconstruction of the raw image data; and generating a masked image from the reconstruction of the raw image data, wherein the enhancement reconstruction includes applying enhancement utilizing a neural network trained with examples including image data in which private content is masked.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Raizy Kellermann, Omer Ben-Shalom, Alex Nayshtut
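
    A minimal Python sketch of the described pipeline, simulating the compressed-sensing capture with a random measurement matrix, a least-squares coarse reconstruction, and a placeholder enhancement model standing in for the trained network; all of these specifics are assumptions for illustration:

    ```python
    # Hypothetical sketch: reconstruct a compressed-sensing capture and apply an
    # enhancement model assumed to have been trained only on examples in which
    # private content was masked, so its output is a masked image.
    import numpy as np


    def capture(scene, sensing_matrix):
        """Simulate a compressed-sensing image sensor: y = A @ x (flattened scene)."""
        return sensing_matrix @ scene.ravel()


    def coarse_reconstruction(measurements, sensing_matrix, shape):
        """Minimum-norm least-squares reconstruction of the raw measurements."""
        x, *_ = np.linalg.lstsq(sensing_matrix, measurements, rcond=None)
        return x.reshape(shape)


    def enhancement_reconstruction(coarse, enhancement_net):
        """Enhance the coarse image; the net's training data had private regions masked."""
        return enhancement_net(coarse)


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        scene = rng.random((16, 16))
        A = rng.normal(size=(128, 16 * 16))        # under-sampled measurement matrix
        y = capture(scene, A)
        coarse = coarse_reconstruction(y, A, scene.shape)

        def placeholder_enhancement_net(img):      # stands in for the trained network
            masked = img.copy()
            masked[4:12, 4:12] = 0.0               # region the net learned to mask
            return masked

        masked_image = enhancement_reconstruction(coarse, placeholder_enhancement_net)
    ```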
  • Publication number: 20220114500
    Abstract: An apparatus is disclosed. The apparatus comprises one or more processors to receive trained model update data from each of a plurality of collaborators, apply an auxiliary machine learning model to the trained model update data to generate a risk score for the trained model update data associated with each collaborator, and apply one or more policies based on the risk scores to generate adjusted trained model update data associated with each collaborator.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Alex Nayshtut, Raizy Kellermann, Omer Ben-Shalom
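
    A minimal Python sketch of the described flow, using a toy deviation-based risk score and two illustrative policies (drop or down-weight); the auxiliary model, thresholds, and aggregation are assumptions, not the patent's method:

    ```python
    # Hypothetical sketch: score each collaborator's trained-model update with an
    # auxiliary model and apply simple policies before aggregation.
    import numpy as np


    def auxiliary_risk_model(update, reference):
        """Toy risk score: relative deviation of the update from a reference model."""
        return float(np.linalg.norm(update - reference) / (np.linalg.norm(reference) + 1e-9))


    def apply_policies(update, risk, drop_threshold=1.0, damp_threshold=0.5):
        """Return the adjusted update, or None if the policy rejects it."""
        if risk >= drop_threshold:
            return None                      # policy: discard clearly anomalous updates
        if risk >= damp_threshold:
            return update * 0.5              # policy: down-weight suspicious updates
        return update


    def aggregate(collaborator_updates, reference):
        adjusted = []
        for update in collaborator_updates:
            risk = auxiliary_risk_model(update, reference)
            result = apply_policies(update, risk)
            if result is not None:
                adjusted.append(result)
        return np.mean(adjusted, axis=0) if adjusted else reference


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        reference = rng.normal(size=10)
        updates = [reference + rng.normal(scale=s, size=10) for s in (0.05, 0.1, 5.0)]
        new_model = aggregate(updates, reference)   # the noisy third update is dropped
    ```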
  • Publication number: 20210406652
    Abstract: Embodiments are directed to security optimizing compute distribution in a hybrid deep learning environment. An embodiment of an apparatus includes one or more processors to determine security capabilities and compute capabilities of a client machine requesting to use a machine learning (ML) model hosted by the apparatus; determine, based on the security capabilities and based on exposure criteria of the ML model, that one or more layers of the ML model can be offloaded to the client machine for processing; define, based on the compute capabilities of the client machine, a split level of the one or more layers of the ML model for partition of the ML model, the partition comprising offload layers of the one or more layers of the ML model to be processed at the client machine; and cause the offload layers of the ML model to be downloaded to the client machine.
    Type: Application
    Filed: June 25, 2020
    Publication date: December 30, 2021
    Applicant: Intel Corporation
    Inventors: Oleg Pogorelik, Alex Nayshtut, Michael E. Kounavis, Raizy Kellermann, David M. Durham
  • Publication number: 20210319098
    Abstract: Techniques and apparatuses to harden AI systems against various attacks are provided. Among them are techniques and apparatuses that expand the domain of an inference model to include both visible classes and hidden classes. The hidden classes can be used to detect possible probing attacks against the model.
    Type: Application
    Filed: April 23, 2019
    Publication date: October 14, 2021
    Applicant: Intel Corporation
    Inventors: Oleg Pogorelik, Alex Nayshtut, Omer Ben-Shalom, Denis Klimov, Raizy Kellermann, Guy Barnhart-Magen, Vadim Sukhomlinov
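
    A minimal Python sketch of the expanded label domain described above; the class names, the logits format, and the flagging rule are illustrative assumptions:

    ```python
    # Hypothetical sketch: an inference wrapper whose label space contains both
    # visible classes and hidden classes; a prediction that lands in a hidden
    # class is treated as a sign of possible probing.
    import numpy as np

    VISIBLE_CLASSES = ["cat", "dog"]
    HIDDEN_CLASSES = ["hidden_0", "hidden_1"]       # never exposed to callers
    ALL_CLASSES = VISIBLE_CLASSES + HIDDEN_CLASSES


    def classify(logits):
        """Map expanded-domain logits to a visible label plus a probing flag."""
        predicted = ALL_CLASSES[int(np.argmax(logits))]
        if predicted in HIDDEN_CLASSES:
            # Inputs resembling the hidden-class training data suggest probing.
            return {"label": None, "probing_suspected": True}
        return {"label": predicted, "probing_suspected": False}


    if __name__ == "__main__":
        print(classify(np.array([2.1, 0.3, 0.1, -1.0])))   # normal input: cat
        print(classify(np.array([0.2, 0.1, 3.7, 0.4])))    # hidden class wins: flagged
    ```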
  • Publication number: 20190311248
    Abstract: A system and method for random sampled convolutions are disclosed to efficiently boost the expressive power of a convolutional neural network (CNN) without adding computation cost. The method for random sampled convolutions selects a receptive field size and generates filters in which only a subset of the receptive field elements, the learnable parameters, is active, wherein the number of learnable parameters corresponds to computing characteristics, such as SIMD capability, of the processing system upon which the CNN is executed. Several random filters may be generated, with each being run separately on the CNN. The random filter that causes the fastest convergence is selected over the others. The placement of the random filter in the CNN may be per layer, per channel, or per convolution operation. The CNN employing the random sampled convolutions method performs as well as other CNNs utilizing the same receptive field size.
    Type: Application
    Filed: June 21, 2019
    Publication date: October 10, 2019
    Applicant: Intel Corporation
    Inventors: Shahar Fleishman, Raizy Kellermann, Rana Hanocka
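
    A minimal Python sketch of generating random sampled convolution filters whose large receptive field has only a fixed number of active (learnable) elements; the sizes, candidate count, and naive convolution loop are illustrative assumptions, and the convergence-based selection of the best mask is only indicated in a comment:

    ```python
    # Hypothetical sketch: build "random sampled" convolution filter masks that
    # cover a large receptive field but keep only a fixed number of elements
    # active, matching a compute budget such as a SIMD width.
    import numpy as np


    def random_sampled_mask(receptive_field=5, active_elements=8, rng=None):
        """Binary mask over a receptive_field x receptive_field kernel with a
        fixed number of active (learnable) positions."""
        rng = rng or np.random.default_rng()
        mask = np.zeros(receptive_field * receptive_field, dtype=np.float32)
        active = rng.choice(mask.size, size=active_elements, replace=False)
        mask[active] = 1.0
        return mask.reshape(receptive_field, receptive_field)


    def masked_conv2d(image, weights, mask):
        """Valid 2-D convolution in which only masked weight positions contribute."""
        k = weights.shape[0]
        active_weights = weights * mask
        out_h = image.shape[0] - k + 1
        out_w = image.shape[1] - k + 1
        out = np.zeros((out_h, out_w), dtype=np.float32)
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = float(np.sum(image[i:i + k, j:j + k] * active_weights))
        return out


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        candidates = [random_sampled_mask(5, 8, rng) for _ in range(4)]
        image = rng.random((16, 16)).astype(np.float32)
        weights = rng.normal(size=(5, 5)).astype(np.float32)
        # In the described method, each candidate mask would be trained separately
        # and the fastest-converging one kept; here we only run the forward pass.
        outputs = [masked_conv2d(image, weights, m) for m in candidates]
    ```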
  • Publication number: 20190188386
    Abstract: Methods and apparatus relating to protecting Artificial Intelligence (AI) payloads running on a Graphics Processing Unit (GPU) against adversaries residing on the main Central Processing Unit (CPU) are described. In an embodiment, memory stores data corresponding to one or more Artificial Intelligence (AI) tasks. The memory comprises at least a shared memory partition and a Graphics Processing Unit (GPU) only memory partition. Logic circuitry performs one or more operations in a protected environment to cause transmission of the stored data from the shared memory partition of the memory to the GPU only memory partition of the memory. The shared memory partition is accessible by both a GPU and a Central Processing Unit (CPU), and the GPU only memory partition is only accessible by the GPU. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: December 27, 2018
    Publication date: June 20, 2019
    Applicant: Intel Corporation
    Inventors: Oleg Pogorelik, Alex Nayshtut, Raizy Kellermann, Venkat Gokulrangan
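
    A minimal Python sketch that models only the access policy described above (a shared partition readable by CPU and GPU, and a GPU-only partition readable by the GPU alone); the patent concerns hardware- and driver-level enforcement, so the class names and staging step here are purely illustrative assumptions:

    ```python
    # Hypothetical sketch: a toy model of shared vs. GPU-only memory partitions
    # with a simple access check, and a staging step that moves AI task data out
    # of the CPU-visible shared partition.
    class MemoryPartition:
        def __init__(self, name, allowed_agents):
            self.name = name
            self.allowed_agents = set(allowed_agents)
            self.data = {}

        def write(self, agent, key, value):
            self._check(agent)
            self.data[key] = value

        def read(self, agent, key):
            self._check(agent)
            return self.data[key]

        def _check(self, agent):
            if agent not in self.allowed_agents:
                raise PermissionError(f"{agent} may not access partition {self.name}")


    def stage_ai_payload(shared, gpu_only, payload):
        """Protected-environment step: move AI task data from the shared partition
        into the GPU-only partition so a CPU-resident adversary cannot read it."""
        shared.write("cpu", "ai_task", payload)           # CPU places the task data
        gpu_only.write("gpu", "ai_task", shared.read("gpu", "ai_task"))
        del shared.data["ai_task"]                        # remove the shared copy


    if __name__ == "__main__":
        shared = MemoryPartition("shared", {"cpu", "gpu"})
        gpu_only = MemoryPartition("gpu_only", {"gpu"})
        stage_ai_payload(shared, gpu_only, payload=b"model weights")
        print(gpu_only.read("gpu", "ai_task"))            # GPU can read the payload
        try:
            gpu_only.read("cpu", "ai_task")               # CPU access is rejected
        except PermissionError as exc:
            print(exc)
    ```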