Patents by Inventor Vadim Makhervaks

Vadim Makhervaks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230393998
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Application
    Filed: August 21, 2023
    Publication date: December 7, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vadim MAKHERVAKS, Aaron William OGUS, Jason David ADRIAN
  • Publication number: 20230385204
    Abstract: A computing system uses AES-XTS encryption to encrypt a first part of a first data stream using a tweak key, a data key, and an initial tweak value in a first encryption session; stores the encrypted first part in an encrypted data store; encrypts a second part of the first data stream in a second encryption session commenced after the termination of the first encryption session; and stores the encrypted second part in the encrypted data store. The second part of the first data stream is encrypted using a modified tweak value computed based on the initial tweak value, the tweak key, and a block index of a last cipher block of the first part of the first data stream.
    Type: Application
    Filed: May 25, 2022
    Publication date: November 30, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yevgeny YANKILEVICH, Vadim MAKHERVAKS, Robert GROZA, JR., Yi YUAN, Oren ISH-AM
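The modified-tweak computation this abstract describes can be illustrated with a short sketch. In AES-XTS (IEEE 1619), the per-block tweak is obtained by encrypting the initial tweak value with the tweak key and then multiplying by α in GF(2^128) once per cipher block, so resuming a stream in a new session needs only the initial tweak and the last block index. The sketch below shows only the GF(2^128) advancement; the AES step and all function names are illustrative, not taken from the patent.

```python
def xts_double(tweak: bytes) -> bytes:
    """Multiply a 16-byte XTS tweak by alpha in GF(2^128), per IEEE 1619
    (little-endian, reduction polynomial x^128 + x^7 + x^2 + x + 1)."""
    t = int.from_bytes(tweak, "little")
    carry = t >> 127
    t = (t << 1) & ((1 << 128) - 1)
    if carry:
        t ^= 0x87  # fold the overflow bit back in via the reduction polynomial
    return t.to_bytes(16, "little")

def tweak_for_block(encrypted_initial_tweak: bytes, block_index: int) -> bytes:
    """Advance the (already AES-encrypted) initial tweak to a given
    cipher-block index, e.g. one past the last block of the first part."""
    t = encrypted_initial_tweak
    for _ in range(block_index):
        t = xts_double(t)
    return t
```

With this, a second encryption session can recompute the tweak for the block that follows the first part, rather than re-reading or re-encrypting the already-stored data.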
  • Publication number: 20230342028
    Abstract: Zone hints for use with a zoned namespace (ZNS) storage device. Zone hints include one or more of a first hint indicating that a zone is part of a group of a plurality of zones, a second hint indicating that the zone is to be fast-filled, or a third hint indicating that the zone is associated with a background operation. The first hint is structured to instruct the ZNS storage device to allocate to the zone first storage resources that are physically adjacent to second storage resources reserved for others of the plurality of zones. The second hint is structured to instruct the ZNS storage device to bypass a staging area when writing to the zone. The third hint is structured to instruct the ZNS storage device to deprioritize at least one operation writing to the zone, or to bypass the staging area when writing to the zone.
    Type: Application
    Filed: September 28, 2021
    Publication date: October 26, 2023
    Inventors: Scott Chao-Chueh LEE, Vadim MAKHERVAKS, Madhav Himanshubhai PANDYA, Ioan OLTEAN, Laura Marie CAULFIELD, Lee Edward PREWITT
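The three hints in this abstract are independent and combinable, which suggests a flag-style encoding. The sketch below is a minimal model of that idea; the names and the action strings are illustrative, and the patent does not specify any particular encoding.

```python
from enum import Flag, auto

class ZoneHint(Flag):
    """Hypothetical encoding of the three zone hints described in
    publication 20230342028; names are illustrative."""
    GROUPED = auto()      # first hint: zone belongs to a group of zones
    FAST_FILL = auto()    # second hint: zone will be filled quickly
    BACKGROUND = auto()   # third hint: zone serves a background operation

def device_actions(hints: ZoneHint) -> list[str]:
    """Map hints to the device-side behaviors the abstract describes."""
    actions = []
    if ZoneHint.GROUPED in hints:
        actions.append("allocate storage physically adjacent to the group's zones")
    if ZoneHint.FAST_FILL in hints:
        actions.append("bypass staging area on writes")
    if ZoneHint.BACKGROUND in hints:
        actions.append("deprioritize writes or bypass staging area")
    return actions
```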
  • Patent number: 11768783
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: September 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vadim Makhervaks, Aaron William Ogus, Jason David Adrian
  • Patent number: 11755527
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: September 12, 2023
    Assignee: Tahoe Research, Ltd.
    Inventors: Eliezer Tamir, Vadim Makhervaks, Ben-Zion Friedman, Phil Cayton, Theodore L. Willke
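The per-client resources this abstract enumerates (buffer, doorbell, queue pair, namespace context) can be sketched as a record the network I/O device hands back to the remote client. Everything here is illustrative, assumed for the sketch; the patent does not specify a layout or wire format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RemoteNvmeGrant:
    """Hypothetical record of the per-client resources allocated in server
    memory, per the abstract."""
    buffer_addr: int    # server-memory address of the data buffer
    doorbell_addr: int  # address the client rings to submit work
    sq_addr: int        # submission-queue half of the queue pair
    cq_addr: int        # completion-queue half of the queue pair
    namespace_id: int   # NVMe namespace context assigned to the client

def grant_for_client(base: int, nsid: int) -> RemoteNvmeGrant:
    """Lay the regions out contiguously from a base address
    (an arbitrary illustrative layout, one page apart)."""
    return RemoteNvmeGrant(
        buffer_addr=base,
        doorbell_addr=base + 0x1000,
        sq_addr=base + 0x2000,
        cq_addr=base + 0x3000,
        namespace_id=nsid,
    )
```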
  • Publication number: 20230259656
    Abstract: Methods, systems, apparatuses, and computer program products are provided herein for rendering secured content. For instance, a computing device may be utilized to view content that is to be displayed via a display device coupled thereto. However, rather than rendering the content, the computing device generates and/or provides a graphical representation of the content to a rendering device coupled between the computing device and the display device. The rendering device analyzes the graphical representation to determine characteristics of the graphical representation, characteristics of a display region of an application window in which the content is to be rendered, and a network address at which the actual content is located. The rendering device retrieves the content using the network address and renders the retrieved content over the display region of the application window in accordance with the characteristics determined for the graphical representation and the display region of the application window.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Orr SROUR, Vadim MAKHERVAKS
  • Publication number: 20230185759
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Application
    Filed: August 15, 2022
    Publication date: June 15, 2023
    Applicant: Tahoe Research, Ltd.
    Inventors: Eliezer TAMIR, Vadim MAKHERVAKS, Ben-Zion FRIEDMAN, Phil CAYTON, Theodore L. WILLKE
  • Patent number: 11500810
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: November 15, 2022
    Assignee: Tahoe Research, Ltd.
    Inventors: Eliezer Tamir, Vadim Makhervaks, Ben-Zion Friedman, Phil Cayton, Theodore L. Willke
  • Publication number: 20220283967
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vadim MAKHERVAKS, Aaron William OGUS, Jason David ADRIAN
  • Patent number: 11372785
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vadim Makhervaks, Aaron William Ogus, Jason David Adrian
  • Publication number: 20220100696
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Application
    Filed: September 3, 2021
    Publication date: March 31, 2022
    Applicant: INTEL CORPORATION
    Inventors: Eliezer TAMIR, Vadim MAKHERVAKS, Ben-Zion FRIEDMAN, Phil CAYTON, Theodore L. WILLKE
  • Publication number: 20210349841
    Abstract: A server system is provided that includes one or more compute nodes that include at least one processor and a host memory device. The server system further includes a plurality of solid-state drive (SSD) devices, a local non-volatile memory express virtualization (LNV) device, and a non-transparent (NT) switch for a peripheral component interconnect express (PCIe) bus that interconnects the plurality of SSD devices and the LNV device to the at least one processor of each compute node. The LNV device is configured to virtualize hardware resources of the plurality of SSD devices. The plurality of SSD devices are configured to directly access data buffers of the host memory device. The NT switch is configured to hide the plurality of SSD devices such that the plurality of SSD devices are not visible to the at least one processor of each compute node.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 11, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vadim MAKHERVAKS, Aaron William OGUS, Jason David ADRIAN
  • Patent number: 11138143
    Abstract: Examples are disclosed for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may allocate, in a memory of the server, a buffer, a doorbell, and a queue pair accessible to a client remote to the server. For these examples, the network input/output device may assign a Non-Volatile Memory Express (NVMe) namespace context to the client. For these examples, indications of the allocated buffer, doorbell, queue pair, and namespace context may be transmitted to the client. Other examples are described and claimed.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: October 5, 2021
    Assignee: INTEL CORPORATION
    Inventors: Eliezer Tamir, Vadim Makhervaks, Ben-Zion Friedman, Phil Cayton, Theodore L. Willke
  • Patent number: 11068412
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie-detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP requests without constraining the RNIC Consumer to ordered networking behavior when the ULP (e.g., storage) does not require it. This allows efficient deployment of RDMA-accelerated storage workloads in a lossy network configuration and reduces latency jitter.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: July 20, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Graham Humphrey, Vadim Makhervaks, Michael Konstantinos Papamichael
  • Patent number: 11025564
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie-detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP requests without constraining the RNIC Consumer to ordered networking behavior when the ULP (e.g., storage) does not require it. This allows efficient deployment of RDMA-accelerated storage workloads in a lossy network configuration and reduces latency jitter.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Graham Humphrey, Vadim Makhervaks, Michael Konstantinos Papamichael
  • Patent number: 10795718
    Abstract: A technique is described herein for updating the logic used by a hardware accelerator provided by a computing device. In one implementation, the technique provides a pass-through mode which allows a virtual machine (provided by the computing device) to directly interact with the hardware accelerator. Upon the commencement of an updating operation, the technique instructs an emulator to begin emulating the function(s) of the hardware accelerator and the resultant effects of these functions, without interaction with the actual hardware accelerator. When the updating operation finishes, the technique re-enables the pass-through mode. By virtue of the above-summarized manner of operation, the technique allows the computing device to perform the function(s) associated with the hardware accelerator while the hardware accelerator is being updated. In one case, the technique disables the pass-through mode by modifying address-mapping information used by the virtual machine to access system physical addresses.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: October 6, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jacob Kappeler Oshins, Vadim Makhervaks
  • Publication number: 20200274832
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie-detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP requests without constraining the RNIC Consumer to ordered networking behavior when the ULP (e.g., storage) does not require it. This allows efficient deployment of RDMA-accelerated storage workloads in a lossy network configuration and reduces latency jitter.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 27, 2020
    Inventors: Matthew Graham HUMPHREY, Vadim MAKHERVAKS, Michael Konstantinos PAPAMICHAEL
  • Publication number: 20200272579
    Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie-detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are also disclosed for out-of-order placement and delivery of ULP requests without constraining the RNIC Consumer to ordered networking behavior when the ULP (e.g., storage) does not require it. This allows efficient deployment of RDMA-accelerated storage workloads in a lossy network configuration and reduces latency jitter.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 27, 2020
    Inventors: Matthew Graham HUMPHREY, Vadim MAKHERVAKS, Michael Konstantinos PAPAMICHAEL
  • Patent number: 10754549
    Abstract: An append-only streams capability may be implemented that allows the host (e.g., the file system) to determine an optimal stream size based on the data to be stored in that stream. The storage device may expose to the host one or more characteristics of the available streams on the device, including but not limited to the maximum number of inactive and active streams on the device, the erase block size, the maximum number of erase blocks that can be written in parallel, and an optimal write size of the data. Using this information, the host may determine which particular stream offered by the device is best suited for the data to be stored.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: August 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bryan S. Matthew, Aaron W. Ogus, Vadim Makhervaks, Laura M. Caulfield, Rajsekhar Das, Scott Chao-Chueh Lee, Omar Carey, Madhav Pandya, Ioan Oltean, Garret Buban, Lee Prewitt
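The host-side sizing decision this abstract describes (pick a stream size from device-reported characteristics) can be sketched as follows. The field names and the sizing policy are illustrative assumptions, not taken from the patent, which leaves the host's policy open.

```python
from dataclasses import dataclass

@dataclass
class StreamCharacteristics:
    """Hypothetical names for the device-reported values in the abstract."""
    max_active_streams: int
    max_inactive_streams: int
    erase_block_size: int           # bytes
    max_parallel_erase_blocks: int  # erase blocks writable in parallel
    optimal_write_size: int         # bytes

def optimal_stream_size(dev: StreamCharacteristics, data_size: int) -> int:
    """One illustrative policy: round the expected data size up to whole
    erase blocks, capped at how many erase blocks the device can write in
    parallel."""
    blocks = -(-data_size // dev.erase_block_size)  # ceiling division
    blocks = min(blocks, dev.max_parallel_erase_blocks)
    return blocks * dev.erase_block_size
```

A file system could run this over the streams the device exposes and pick the one whose size best matches the data it is about to append.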
  • Publication number: 20200257547
    Abstract: A technique is described herein for updating the logic used by a hardware accelerator provided by a computing device. In one implementation, the technique provides a pass-through mode which allows a virtual machine (provided by the computing device) to directly interact with the hardware accelerator. Upon the commencement of an updating operation, the technique instructs an emulator to begin emulating the function(s) of the hardware accelerator and the resultant effects of these functions, without interaction with the actual hardware accelerator. When the updating operation finishes, the technique re-enables the pass-through mode. By virtue of the above-summarized manner of operation, the technique allows the computing device to perform the function(s) associated with the hardware accelerator while the hardware accelerator is being updated. In one case, the technique disables the pass-through mode by modifying address-mapping information used by the virtual machine to access system physical addresses.
    Type: Application
    Filed: February 8, 2019
    Publication date: August 13, 2020
    Inventors: Jacob Kappeler OSHINS, Vadim MAKHERVAKS