Patents by Inventor Vadim Makhervaks

Vadim Makhervaks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240103721
    Abstract: Embodiments of the present disclosure include systems and methods for providing a scalable controller for managing data storages. A system includes a non-volatile memory controller comprising a set of data queues and a set of administrative queues. The system also includes a set of physical storages communicatively coupled to the non-volatile memory controller. A set of logical storages is created from the set of physical storages. A primary non-volatile memory controller is created from the non-volatile memory controller. The primary non-volatile memory controller comprises an administrative queue in the set of administrative queues, a first subset of the set of data queues, and a first subset of the set of logical storages. An extended non-volatile memory controller is created from the non-volatile memory controller. The extended non-volatile memory controller comprises a second subset of the set of data queues and a second subset of the set of logical storages.
    Type: Application
    Filed: September 22, 2022
    Publication date: March 28, 2024
    Inventors: Jacob Kappeler OSHINS, Hari Daas ANGEPAT, Yi YUAN, Vadim MAKHERVAKS
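
The abstract above describes splitting one physical NVMe controller's pooled queues and logical storages between a primary controller (which also owns an administrative queue) and an extended controller (which does not). The sketch below models that partitioning; the class names, fields, and index-based split are hypothetical and are not taken from the patent.

```python
# Hypothetical model of the controller split described in publication 20240103721;
# names and the simple index-based split are illustrative only.
from dataclasses import dataclass

@dataclass
class NvmeController:
    data_queues: list[int]        # pooled I/O (data) queue IDs
    admin_queues: list[int]       # pooled administrative queue IDs
    logical_storages: list[str]   # logical storages created from the physical storages

@dataclass
class DerivedController:
    data_queues: list[int]
    logical_storages: list[str]
    admin_queue: int | None = None   # only the primary controller receives one

def partition(ctrl: NvmeController, split: int) -> tuple[DerivedController, DerivedController]:
    """Carve a primary and an extended controller out of one physical controller."""
    primary = DerivedController(ctrl.data_queues[:split], ctrl.logical_storages[:split],
                                admin_queue=ctrl.admin_queues[0])
    extended = DerivedController(ctrl.data_queues[split:], ctrl.logical_storages[split:])
    return primary, extended

ctrl = NvmeController(data_queues=[0, 1, 2, 3], admin_queues=[0],
                      logical_storages=["ns1", "ns2", "ns3", "ns4"])
primary, extended = partition(ctrl, split=2)
```
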
  • Publication number: 20240007268
    Abstract: A computing system uses Advanced Encryption Standard XEX Based Tweaked Codebook Mode with Ciphertext Stealing (AES-XTS) encryption to encrypt a block of data using a tweak key, a data key, a modified tweak value, and the block of data to thereby generate an encrypted block of data. The modified tweak value is computed according to the expression DEC(0, CONST KEY), where DEC is an AES decryption algorithm, and CONST KEY is the tweak key. The encrypted block of data is thereby formatted according to the Advanced Encryption Standard with no extended mode and not according to the XEX Based Tweaked Codebook Mode with Ciphertext Stealing.
    Type: Application
    Filed: December 15, 2022
    Publication date: January 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yevgeny YANKILEVICH, Vadim MAKHERVAKS, Yi YUAN, Robert GROZA, Jr., Oren ISH-AM
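
The tweak trick in this abstract can be checked directly: in AES-XTS the per-unit tweak is first encrypted under the tweak key, so choosing the tweak as the decryption of an all-zero block under that same key makes the internal encrypted tweak zero, and the XTS output collapses to plain AES under the data key. The snippet below is a sketch of that check using the third-party cryptography package; the key sizes and variable names are assumptions, not the patent's.

```python
# Sketch (not the patented implementation): with tweak = AES_dec(tweak_key, 0),
# the internal XTS tweak T = AES_enc(tweak_key, tweak) is zero, so every block is
# effectively encrypted in plain AES (ECB) under the data key alone.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data_key, tweak_key = os.urandom(32), os.urandom(32)   # assumed AES-256 keys
block = os.urandom(16)                                  # one 16-byte block of data

# Modified tweak value: DEC(0, CONST KEY) from the abstract.
modified_tweak = Cipher(algorithms.AES(tweak_key), modes.ECB()).decryptor().update(bytes(16))

# AES-XTS encryption (the library expects data_key || tweak_key as the XTS key).
xts = Cipher(algorithms.AES(data_key + tweak_key), modes.XTS(modified_tweak)).encryptor()
xts_ciphertext = xts.update(block) + xts.finalize()

# Plain AES of the same block under the data key only.
ecb = Cipher(algorithms.AES(data_key), modes.ECB()).encryptor()
plain_aes_ciphertext = ecb.update(block) + ecb.finalize()

assert xts_ciphertext == plain_aes_ciphertext
```
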
  • Publication number: 20230393998
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Application
    Filed: August 21, 2023
    Publication date: December 7, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vadim MAKHERVAKS, Aaron William OGUS, Jason David ADRIAN
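
A rough way to picture the topology in this abstract (and in the related entries below) is as a visibility split: the non-transparent switch exposes only the LNV device to each compute node's processor, while the SSDs stay reachable behind it for DMA into host data buffers. The toy model below is purely illustrative; none of the names come from the patent.

```python
# Toy visibility model of the NT-switch arrangement; illustrative names only.
from dataclasses import dataclass

@dataclass
class PcieDevice:
    name: str

@dataclass
class NonTransparentSwitch:
    exposed: list[PcieDevice]   # devices the compute node's processor can enumerate
    hidden: list[PcieDevice]    # SSDs reachable only through the switch's address translation

    def enumerate_from_host(self) -> list[str]:
        return [d.name for d in self.exposed]

lnv = PcieDevice("LNV device")                      # virtualizes the SSDs' hardware resources
ssds = [PcieDevice(f"SSD{i}") for i in range(4)]    # can still DMA into host data buffers
switch = NonTransparentSwitch(exposed=[lnv], hidden=ssds)

assert switch.enumerate_from_host() == ["LNV device"]   # the SSDs are not visible to the host
```
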
  • Publication number: 20230385204
    Abstract: A computing system uses AES-XTS encryption to encrypt a first part of a first data stream using a tweak key, a data key, and an initial tweak value in a first encryption session, stores the encrypted first part in an encrypted data store, encrypts a second part of the first data stream in a second encryption session commenced after the termination of the first encryption session, and stores the encrypted second part in the encrypted data store. The second part of the first data stream is encrypted using a modified tweak value computed based on the initial tweak value, the tweak key, and a block index of a last cipher block of the first part of the first data stream.
    Type: Application
    Filed: May 25, 2022
    Publication date: November 30, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yevgeny YANKILEVICH, Vadim MAKHERVAKS, Robert GROZA, JR., Yi YUAN, Oren ISH-AM
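
One way to read the "modified tweak value" in this abstract: encrypt the initial tweak under the tweak key, advance it with the standard XTS multiplication by alpha once per block already encrypted, and decrypt the result back into a tweak for the second session. The sketch below checks that reading against the cryptography package's AES-XTS; it is an assumption about the construction, not the patent's implementation.

```python
# A minimal sketch (assumed construction, not the patent's) of resuming AES-XTS across
# two sessions: the second session's tweak is derived from the initial tweak, the tweak
# key, and the number of blocks already encrypted, so the block-tweak sequence continues.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_xts(data_key: bytes, tweak_key: bytes, tweak: bytes, data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(data_key + tweak_key), modes.XTS(tweak)).encryptor()
    return enc.update(data) + enc.finalize()

def xts_mul_alpha(t: bytes) -> bytes:
    """Multiply a 128-bit XTS tweak by alpha in GF(2^128), little-endian convention."""
    out, carry = bytearray(16), 0
    for i in range(16):
        out[i] = ((t[i] << 1) | carry) & 0xFF
        carry = t[i] >> 7
    if carry:
        out[0] ^= 0x87
    return bytes(out)

data_key, tweak_key = os.urandom(32), os.urandom(32)
initial_tweak = os.urandom(16)
stream = os.urandom(64)                      # four 16-byte cipher blocks

# Session 1: encrypt the first part (blocks 0 and 1).
part1 = aes_xts(data_key, tweak_key, initial_tweak, stream[:32])

# Derive the modified tweak: encrypt the initial tweak, advance it past the last
# cipher block of part 1, then decrypt it back into a tweak value for session 2.
ecb = Cipher(algorithms.AES(tweak_key), modes.ECB())
t = ecb.encryptor().update(initial_tweak)
for _ in range(2):                           # block index of the first block of part 2
    t = xts_mul_alpha(t)
modified_tweak = ecb.decryptor().update(t)

# Session 2: encrypt the second part with the modified tweak.
part2 = aes_xts(data_key, tweak_key, modified_tweak, stream[32:])

# The two sessions together match a single-session encryption of the whole stream.
assert part1 + part2 == aes_xts(data_key, tweak_key, initial_tweak, stream)
```
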
  • Publication number: 20230342028
    Abstract: Zone hints for use with a zoned namespace (ZNS) storage device. Zone hints include one or more of a first hint indicating that a zone is part of a group of a plurality of zones, a second hint indicating that the zone is to be fast-filled, or a third hint indicating that the zone is associated with a background operation. The first hint is structured to instruct the ZNS storage device to allocate to the zone first storage resources that are physically adjacent to second storage resources reserved for others of the plurality of zones. The second hint is structured to instruct the ZNS storage device to bypass a staging area when writing to the zone. The third hint is structured to instruct the ZNS storage device to deprioritize at least one operation writing to the zone, or to bypass the staging area when writing to the zone.
    Type: Application
    Filed: September 28, 2021
    Publication date: October 26, 2023
    Inventors: Scott Chao-Chueh LEE, Vadim MAKHERVAKS, Madhav Himanshubhai PANDYA, Ioan OLTEAN, Laura Marie CAULFIELD, Lee Edward PREWITT
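
For a concrete feel of how such hints might be carried, the fragment below encodes the three hints from the abstract as flags on a per-zone descriptor. The names, values, and layout are hypothetical; they are not from the patent or the NVMe ZNS specification.

```python
# Hypothetical encoding of the three zone hints; not an NVMe ZNS or patent format.
from dataclasses import dataclass
from enum import Flag, auto

class ZoneHint(Flag):
    GROUPED = auto()      # zone is part of a group: allocate physically adjacent resources
    FAST_FILL = auto()    # zone will be fast-filled: bypass the staging area when writing
    BACKGROUND = auto()   # zone backs a background operation: deprioritize or bypass staging

@dataclass
class ZoneDescriptor:
    zone_id: int
    hints: ZoneHint
    group_id: int | None = None   # meaningful only when GROUPED is set

desc = ZoneDescriptor(zone_id=7, hints=ZoneHint.GROUPED | ZoneHint.BACKGROUND, group_id=2)
```
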
  • Patent number: 11768783
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: September 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vadim Makhervaks, Aaron William Ogus, Jason David Adrian
  • Patent number: 11755527
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: September 12, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Eliezer Tamir, Vadim Makhervaks, Ben-Zion Friedman, Phil Cayton, Theodore L. Willke
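
This abstract (repeated in several related entries below) boils down to a handshake: the network I/O device allocates a buffer, a doorbell, and a queue pair in server memory, assigns an NVMe namespace context, and tells the remote client where everything is. The record below is a guess at what such an indication might carry; every field name is hypothetical and not taken from the patent or the NVMe specification.

```python
# Hypothetical "here are your resources" indication sent to the remote client.
from dataclasses import dataclass

@dataclass
class RemoteAccessGrant:
    namespace_id: int        # NVMe namespace context assigned to this client
    buffer_addr: int         # server-memory address of the allocated data buffer
    buffer_len: int
    doorbell_addr: int       # address the client rings to signal new submissions
    sq_addr: int             # submission-queue half of the queue pair
    cq_addr: int             # completion-queue half of the queue pair
    queue_depth: int

grant = RemoteAccessGrant(
    namespace_id=3,
    buffer_addr=0x1000_0000, buffer_len=1 << 20,
    doorbell_addr=0x2000_0000,
    sq_addr=0x3000_0000, cq_addr=0x3001_0000,
    queue_depth=128,
)
```
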
  • Publication number: 20230259656
    Abstract: Methods, systems, apparatuses, and computer program products are provided herein for rendering secured content. For instance, a computing device may be utilized to view content that is to be displayed via a display device coupled thereto. However, rather than rendering the content, the computing device generates and/or provides a graphical representation of the content to a rendering device coupled between the computing device and the display device. The rendering device analyzes the graphical representation to determine characteristics of the graphical representation, characteristics of a display region of an application window in which the content is to be rendered, and a network address at which the actual content is located. The rendering device retrieves the content using the network address and renders the retrieved content over the display region of the application window in accordance with the characteristics determined for the graphical representation and the display region of the application window.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Orr SROUR, Vadim MAKHERVAKS
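
The flow in this abstract can be summarized as: extract from the placeholder graphic the real content's network address, the display region, and the rendering characteristics, then fetch and draw the content over that region. The sketch below only models that hand-off; all names are hypothetical and no image analysis is actually performed.

```python
# Illustrative model of the secured-rendering hand-off; hypothetical names only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlaceholderInfo:
    content_url: str                      # network address where the actual content lives
    region: tuple[int, int, int, int]     # x, y, width, height of the display region
    scale: float                          # characteristic derived from the graphical representation

def render_secured(info: PlaceholderInfo,
                   fetch: Callable[[str], bytes],
                   draw_overlay: Callable[[bytes, tuple[int, int, int, int], float], None]) -> None:
    """Retrieve the real content by its network address and overlay it on the display region."""
    content = fetch(info.content_url)
    draw_overlay(content, info.region, info.scale)
```
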
  • Publication number: 20230185759
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Application
    Filed: August 15, 2022
    Publication date: June 15, 2023
    Applicant: Tahoe Research, Ltd.
    Inventors: Eliezer TAMIR, Vadim MAKHERVAKS, Ben-Zion FRIEDMAN, Phil CAYTON, Theodore L. WILLKE
  • Patent number: 11500810
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: November 15, 2022
    Assignee: Tahoe Research, Ltd.
    Inventors: Eliezer Tamir, Vadim Makhervaks, Ben-Zion Friedman, Phil Cayton, Theodore L. Willke
  • Publication number: 20220283967
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vadim MAKHERVAKS, Aaron William OGUS, Jason David ADRIAN
  • Patent number: 11372785
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vadim Makhervaks, Aaron William Ogus, Jason David Adrian
  • Publication number: 20220100696
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Application
    Filed: September 3, 2021
    Publication date: March 31, 2022
    Applicant: INTEL CORPORATION
    Inventors: Eliezer TAMIR, Vadim MAKHERVAKS, Ben-Zion FRIEDMAN, Phil CAYTON, Theodore L. WILLKE
  • Publication number: 20210349841
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 11, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vadim MAKHERVAKS, Aaron William OGUS, Jason David ADRIAN
  • Patent number: 11138143
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: October 5, 2021
    Assignee: INTEL CORPORATION
    Inventors: Eliezer Tamir, Vadim Makhervaks, Ben-Zion Friedman, Phil Cayton, Theodore L. Willke
  • Patent number: 11068412
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior when ordering is not required by the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and reduces latency jitter.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: July 20, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Graham Humphrey, Vadim Makhervaks, Michael Konstantinos Papamichael
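
The "out of order placement and delivery" phrase in this abstract (shared by the related entries below) separates two events: the RNIC placing a request's data into consumer memory, and the consumer being told it may act on it. A storage-style ULP that does not care about ordering can be handed each request as soon as it is placed, even over a lossy network. The tracker below is a conceptual sketch of that split; the interface is invented for illustration and is not the patent's.

```python
# Conceptual placement/delivery tracker; hypothetical interface, not the patented one.
class UlpRequestTracker:
    def __init__(self, ordered: bool):
        self.ordered = ordered          # does the ULP require in-order delivery?
        self.placed = {}                # request sequence number -> payload
        self.next_to_deliver = 0

    def on_placement(self, seq: int, payload: bytes) -> list[tuple[int, bytes]]:
        """Called when the RNIC has placed a request's data in consumer memory."""
        self.placed[seq] = payload
        if not self.ordered:
            # Unordered ULP (e.g. storage): deliver immediately, even if earlier
            # requests are still outstanding on the network.
            return [(seq, self.placed.pop(seq))]
        delivered = []
        while self.next_to_deliver in self.placed:
            delivered.append((self.next_to_deliver, self.placed.pop(self.next_to_deliver)))
            self.next_to_deliver += 1
        return delivered

tracker = UlpRequestTracker(ordered=False)
assert tracker.on_placement(2, b"late-arriving request") == [(2, b"late-arriving request")]
```
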
  • Patent number: 11025564
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior when ordering is not required by the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and reduces latency jitter.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Graham Humphrey, Vadim Makhervaks, Michael Konstantinos Papamichael
  • Patent number: 10795718
    Abstract: A technique is described herein for updating the logic used by a hardware accelerator provided by a computing device. In one implementation, the technique provides a pass-through mode which allows a virtual machine (provided by the computing device) to directly interact with the hardware accelerator. Upon the commencement of an updating operation, the technique instructs an emulator to begin emulating the function(s) of the hardware accelerator and the resultant effects of these functions, without interaction with the actual hardware accelerator. When the updating operation finishes, the technique re-enables the pass-through mode. By virtue of the above-summarized manner of operation, the technique allows the computing device to perform the function(s) associated with the hardware accelerator while the hardware accelerator is being updated. In one case, the technique disables the pass-through mode by modifying address-mapping information used by the virtual machine to access system physical addresses.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: October 6, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jacob Kappeler Oshins, Vadim Makhervaks
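
The update flow in this abstract amounts to a switchable target: while the accelerator's logic is being reprogrammed, the virtual machine's accesses are redirected to an emulator of the accelerator's functions, and pass-through is restored afterwards. The sketch below reduces that to a toggle; the classes, and modeling the address-mapping change as a simple pointer swap, are assumptions made for illustration.

```python
# Minimal toggle model of pass-through vs. emulation; hypothetical classes only.
class Echo:
    """Stand-in for either the hardware accelerator or its emulator."""
    def __init__(self, tag: str):
        self.tag = tag

    def process(self, request: str) -> str:
        return f"{self.tag}:{request}"

class AcceleratorSlot:
    def __init__(self, hardware: Echo, emulator: Echo):
        self.hardware = hardware
        self.emulator = emulator
        self.target = hardware            # pass-through mode: VM talks to the device

    def begin_update(self):
        self.target = self.emulator       # emulate functions while the logic is updated

    def end_update(self):
        self.target = self.hardware       # re-enable pass-through after the update

    def handle(self, request: str) -> str:
        return self.target.process(request)

slot = AcceleratorSlot(hardware=Echo("accelerator"), emulator=Echo("emulator"))
slot.begin_update()
assert slot.handle("compress").startswith("emulator")
slot.end_update()
assert slot.handle("compress").startswith("accelerator")
```
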
  • Publication number: 20200274832
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior when ordering is not required by the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and reduces latency jitter.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 27, 2020
    Inventors: Matthew Graham HUMPHREY, Vadim MAKHERVAKS, Michael Konstantinos PAPAMICHAEL
  • Publication number: 20200272579
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior when ordering is not required by the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and reduces latency jitter.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 27, 2020
    Inventors: Matthew Graham HUMPHREY, Vadim MAKHERVAKS, Michael Konstantinos PAPAMICHAEL