Patents by Inventor Dmitry Belenko

Dmitry Belenko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11720789
    Abstract: In one embodiment, a method includes receiving an input vector corresponding to a query at a neural network model comprising a plurality of layers, wherein the plurality of layers comprise a last layer associated with a mapping matrix, generating a binary matrix based on the mapping matrix, an identity matrix, and one or more Gaussian vectors, generating an integer vector based on the binary matrix and a binary vector associated with the input vector, identifying a plurality of indices corresponding to a plurality of top values of the integer vector, generating an output vector based on the input vector and a plurality of rows of the mapping matrix, wherein the plurality of rows is associated with the plurality of identified indices, respectively, and determining the query is associated with one or more classes based on the output vector.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: August 8, 2023
    Assignee: Apple Inc.
    Inventors: Hessam Bagherinezhad, Dmitry Belenko
  • Patent number: 11657124
    Abstract: In one embodiment, a method includes receiving a user request from a client device associated with a user, accessing an instructional file comprising one or more binary inference engines and one or more sets of encrypted model data corresponding to the one or more binary inference engines, respectively, selecting a binary inference engine from the one or more binary inference engines in the accessed instructional file based on the user request, sending a validation request for permission to execute the binary inference engine to a licensing server, receiving the permission from the licensing server, decrypting the encrypted model data corresponding to the binary inference engine with a decryption key, executing the binary inference engine based on the user request and the decrypted model data, and sending one or more execution results responsive to the execution of the binary inference engine to the client device.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: May 23, 2023
    Assignee: Apple Inc.
    Inventors: Peter Zatloukal, Matthew Weaver, Alexander Kirchhoff, Dmitry Belenko, Ali Farhadi, Mohammad Rastegari, Andrew Luke Chronister, Keith Patrick Wyss, Chenfan Sun
  • Publication number: 20200387783
    Abstract: In one embodiment, a method includes receiving an input vector corresponding to a query at a neural network model comprising a plurality of layers, wherein the plurality of layers comprise a last layer associated with a mapping matrix, generating a binary matrix based on the mapping matrix, an identity matrix, and one or more Gaussian vectors, generating an integer vector based on the binary matrix and a binary vector associated with the input vector, identifying a plurality of indices corresponding to a plurality of top values of the integer vector, generating an output vector based on the input vector and a plurality of rows of the mapping matrix, wherein the plurality of rows is associated with the plurality of identified indices, respectively, and determining the query is associated with one or more classes based on the output vector.
    Type: Application
    Filed: November 1, 2019
    Publication date: December 10, 2020
    Inventors: Hessam Bagherinezhad, Dmitry Belenko
  • Publication number: 20200184037
    Abstract: In one embodiment, a method includes receiving a user request from a client device associated with a user, accessing an instructional file comprising one or more binary inference engines and one or more sets of encrypted model data corresponding to the one or more binary inference engines, respectively, selecting a binary inference engine from the one or more binary inference engines in the accessed instructional file based on the user request, sending a validation request for permission to execute the binary inference engine to a licensing server, receiving the permission from the licensing server, decrypting the encrypted model data corresponding to the binary inference engine with a decryption key, executing the binary inference engine based on the user request and the decrypted model data, and sending one or more execution results responsive to the execution of the binary inference engine to the client device.
    Type: Application
    Filed: December 10, 2018
    Publication date: June 11, 2020
    Inventors: Peter Zatloukal, Matthew Weaver, Alexander Kirchhoff, Dmitry Belenko, Ali Farhadi, Mohammad Rastegari, Andrew Luke Chronister, Keith Patrick Wyss, Chenfan Sun
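
Read loosely, the abstract shared by patent 11720789 and publication 20200387783 describes a way to avoid computing a full last-layer matrix product: both the mapping-matrix rows and the input are binarized via random Gaussian projections (a random-hyperplane hash), bit-agreement counts give an integer score per class, and exact dot products are then computed only for the top-scoring rows. The following NumPy sketch illustrates that reading under illustrative assumptions; all dimensions and variable names are hypothetical and not taken from the patent claims.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, n_bits, k = 64, 1000, 128, 10  # illustrative sizes

# Mapping matrix of the last layer: one row per output class.
W = rng.standard_normal((n_classes, d))

# Gaussian projection vectors, used to binarize both the matrix
# rows and the input vector (random-hyperplane hashing).
G = rng.standard_normal((d, n_bits))

# Binary matrix derived from the mapping matrix and Gaussian vectors.
B = (W @ G) > 0                      # shape: (n_classes, n_bits)

# Input vector for a query, and its binary signature.
x = rng.standard_normal(d)
b = (x @ G) > 0                      # shape: (n_bits,)

# Integer vector: per-class count of agreeing bits (Hamming similarity).
scores = (B == b).sum(axis=1)        # shape: (n_classes,)

# Indices corresponding to the top values of the integer vector.
top_idx = np.argpartition(scores, -k)[-k:]

# Output vector: exact dot products, but only for the selected rows.
out = W[top_idx] @ x                 # shape: (k,)
pred_class = top_idx[np.argmax(out)]
```

The point of the sketch is the cost shift: the exact `W @ x` product touches all `n_classes × d` weights, while the hashed pre-screen does cheap bitwise comparisons and reserves exact arithmetic for `k` candidate rows.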