Patents by Inventor Kirill Malkin
Kirill Malkin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11392417
Abstract: An ultraconverged architecture has multiple availability zones within a single server. The functionality in each of the availability zones is independently controlled, such that resetting and/or disconnecting any component in any availability zone from power and replacing said component does not affect availability of any other component in any other availability zone. A manager of availability zones controls reset functionality in each of a plurality of availability zones. The manager of availability zones generates a requested reset type in the requested availability zone. The manager of availability zones generates reset signals or requests for some or all components located in multiple availability zones. The reset signal or request is generated upon external request to the manager of availability zones that specifies the reset type, the availability zone, and optionally the list of components to be reset. The manager of availability zones discovers and enumerates the components in each availability zone.
Type: Grant
Filed: June 11, 2019
Date of Patent: July 19, 2022
Assignee: Quantaro, LLC
Inventors: Vladislav Nikolayevich Bolkhovitin, Kirill Malkin
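The reset manager the abstract describes can be illustrated with a minimal sketch. The class and method names here (`ZoneManager`, `reset_zone`) are illustrative assumptions, not taken from the patent; the point is only that resets are scoped to one zone's components while other zones stay untouched.

```python
# Hypothetical sketch of a per-zone reset manager; names are assumptions.
class ZoneManager:
    """Tracks components per availability zone and scopes resets to one zone."""

    def __init__(self):
        self.zones = {}  # zone id -> {component name: reset handler callable}

    def enumerate(self, zone, components):
        """Discover and register the components found in one zone."""
        self.zones.setdefault(zone, {}).update(components)

    def reset_zone(self, zone, reset_type, components=None):
        """Reset some or all components in one zone; other zones are unaffected."""
        targets = self.zones[zone]
        names = components if components is not None else list(targets)
        return [targets[name](reset_type) for name in names]

mgr = ZoneManager()
mgr.enumerate("zone-a", {"nvme0": lambda t: f"nvme0:{t}"})
mgr.enumerate("zone-b", {"nvme1": lambda t: f"nvme1:{t}"})
print(mgr.reset_zone("zone-a", "cold"))  # prints ['nvme0:cold']; zone-b untouched
```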
-
Patent number: 11068420
Abstract: A scalable software stack is disclosed. In particular, the present disclosure provides a system and a method directed at allocating logical ownership of memory locations in a shared storage device among two or more associated compute devices that have access to the storage device. The logical ownership allocation can minimize potential conflicts between two simultaneous accesses occurring within the same memory location of the storage device.
Type: Grant
Filed: May 12, 2015
Date of Patent: July 20, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Kirill Malkin
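One simple way to picture such a logical ownership allocation is a static partition of the device's block address space, so that no two compute devices ever own the same extent. This is a sketch under assumed parameters (`blocks_per_extent`, round-robin assignment), not the patented method itself.

```python
# Illustrative sketch: statically partition a shared device's block address
# space among compute nodes so two nodes never write the same extent.
def owner_of(block, num_nodes, blocks_per_extent=1024):
    """Map a block address to the node that logically owns its extent."""
    return (block // blocks_per_extent) % num_nodes

# Two nodes share one device; each node only writes blocks it owns.
assert owner_of(100, num_nodes=2) == 0    # extent 0 -> node 0
assert owner_of(1500, num_nodes=2) == 1   # extent 1 -> node 1
assert owner_of(2048, num_nodes=2) == 0   # extent 2 -> node 0
```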
-
Patent number: 11029847
Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Limitations caused by hierarchical memory architectures, in which data is temporarily stored in slower or less available memories, will increasingly prevent high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, time spent and power consumed copying data into and out of a slower memory tier will increase costs associated with high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization and computational steering without the need for a data copy or movement.
Type: Grant
Filed: November 16, 2016
Date of Patent: June 8, 2021
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
-
Patent number: 10521260
Abstract: A high performance computing (HPC) system has an architecture that separates data paths used by compute nodes exchanging computational data from the data paths used by compute nodes to obtain computational work units and save completed computations. The system enables an improved method of saving checkpoint data, and an improved method of using an analysis of the saved data to assign particular computational work units to particular compute nodes. The system includes a compute fabric and compute nodes that cooperatively perform a computation by mutual communication using the compute fabric. The system also includes a local data fabric that is coupled to the compute nodes, a memory, and a data node. The data node is configured to retrieve data for the computation from an external bulk data storage, and to store its work units in the memory for access by the compute nodes.
Type: Grant
Filed: July 14, 2017
Date of Patent: December 31, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Steven J. Dean, Michael Woodacre, Randal S. Passint, Eric C. Fromm, Thomas E. McGee, Michael E. Malewicki, Kirill Malkin
-
Patent number: 10515027
Abstract: According to examples, an apparatus may include a memory to which a first queue and a second queue are assigned, in which a storage device is to access data task requests stored in the first queue and the second queue, in which the apparatus is to transfer the first queue to a second apparatus. The apparatus may also include a central processing unit (CPU), the CPU to input data task requests for the storage device into the second queue, in which the second apparatus is to store the first queue in a second memory of the second apparatus, and the storage device is to access data task requests from the first queue stored in the second memory of the second apparatus and data task requests from the second queue stored in the memory to cause the apparatus and the second apparatus to share access to the storage device.
Type: Grant
Filed: October 25, 2017
Date of Patent: December 24, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Kirill Malkin, Alan Poston, Matthew Jacob
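The claim language is dense, but the shared-access idea can be pictured with a small sketch: each host keeps a request queue in its own memory, and the storage device drains both queues, so the hosts share the device. All names here (`Host`, `drain`) are illustrative assumptions.

```python
from collections import deque

# Hedged sketch: two hosts each hold a request queue; the device pulls
# requests from both, giving shared access. Names are not from the patent.
class Host:
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def submit(self, req):
        self.queue.append((self.name, req))

def drain(device_log, hosts):
    """Device-side view: pull requests from every host's queue in turn."""
    while any(h.queue for h in hosts):
        for h in hosts:
            if h.queue:
                device_log.append(h.queue.popleft())

a, b = Host("a"), Host("b")
a.submit("read-0")
b.submit("write-7")
log = []
drain(log, [a, b])
# log now interleaves the requests the device serviced from both hosts
```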
-
Publication number: 20190384642
Abstract: An ultraconverged architecture has multiple availability zones within a single server. The functionality in each of the availability zones is independently controlled, such that resetting and/or disconnecting any component in any availability zone from power and replacing said component does not affect availability of any other component in any other availability zone. A manager of availability zones controls reset functionality in each of a plurality of availability zones. The manager of availability zones generates a requested reset type in the requested availability zone. The manager of availability zones generates reset signals or requests for some or all components located in multiple availability zones. The reset signal or request is generated upon external request to the manager of availability zones that specifies the reset type, the availability zone, and optionally the list of components to be reset. The manager of availability zones discovers and enumerates the components in each availability zone.
Type: Application
Filed: June 11, 2019
Publication date: December 19, 2019
Inventors: Vladislav Nikolayevich Bolkhovitin, Kirill Malkin
-
Patent number: 10296222
Abstract: The present system enables more efficient I/O processing by providing a mechanism for maintaining data within the locality of reference. One or more accelerator modules may be implemented within a solid state storage device (SSD). The accelerator modules form a caching storage tier that can receive, store and reproduce data. The one or more accelerator modules may place data into the SSD or hard disk drives based on parameters associated with the data.
Type: Grant
Filed: October 31, 2016
Date of Patent: May 21, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Kirill Malkin
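A tiering decision "based on parameters associated with the data" could look like the sketch below. The thresholds and field names are invented for illustration; the patent does not specify this policy.

```python
# Minimal sketch (thresholds are assumptions): route data to the SSD cache
# tier or to hard disk drives based on parameters of the I/O.
def place(size_bytes, access_count, ssd_max=1 << 20, hot_after=3):
    """Small or frequently accessed data goes to the SSD tier."""
    if size_bytes <= ssd_max or access_count >= hot_after:
        return "ssd"
    return "hdd"

assert place(4096, 1) == "ssd"        # small write cached on SSD
assert place(1 << 30, 5) == "ssd"     # large but hot -> SSD
assert place(1 << 30, 1) == "hdd"     # large cold stream -> disks
```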
-
Publication number: 20190124180
Abstract: A transmitting device can compress a packet prior to transmitting the packet to a receiving device, which then decompresses the packet. The packet can be combined into a single combined packet with other packets within a transmission queue of the same type and that refer to consecutive memory block addresses. A header of the packet can be replaced with a reduced-size header including a sequence number and a flag indicating the header has been replaced with the reduced-size header, if the packet has a consecutive memory block address to that of the most recently transmitted packet. A payload of the packet may also be compressed.
Type: Application
Filed: October 20, 2017
Publication date: April 25, 2019
Inventors: Frank Dropps, Russell Nicol, Kirill Malkin
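The header-replacement rule can be sketched as follows: when a packet's memory block address immediately follows the previously transmitted one, its full header is swapped for a small (flag, sequence number) header. The encoding and block size here are assumptions for illustration only.

```python
# Sketch under assumptions: consecutive-address packets get a reduced-size
# header (flag + sequence number); others keep the full header (address).
FULL, REDUCED = 0, 1

def compress(packets, block_size=64):
    out, seq, last_addr = [], 0, None
    for addr, payload in packets:
        if last_addr is not None and addr == last_addr + block_size:
            out.append((REDUCED, seq, payload))   # reduced-size header
        else:
            out.append((FULL, addr, payload))     # full header kept
        seq += 1
        last_addr = addr
    return out

pkts = [(0, b"a"), (64, b"b"), (128, b"c"), (512, b"d")]
print(compress(pkts))
# prints [(0, 0, b'a'), (1, 1, b'b'), (1, 2, b'c'), (0, 512, b'd')]
```

The receiver can reverse the mapping because the flag tells it whether the second field is an address or a sequence number, and consecutive addresses are reconstructible from the last full header seen.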
-
Publication number: 20190121753
Abstract: According to examples, an apparatus may include a memory to which a first queue and a second queue are assigned, in which a storage device is to access data task requests stored in the first queue and the second queue, in which the apparatus is to transfer the first queue to a second apparatus. The apparatus may also include a central processing unit (CPU), the CPU to input data task requests for the storage device into the second queue, in which the second apparatus is to store the first queue in a second memory of the second apparatus, and the storage device is to access data task requests from the first queue stored in the second memory of the second apparatus and data task requests from the second queue stored in the memory to cause the apparatus and the second apparatus to share access to the storage device.
Type: Application
Filed: October 25, 2017
Publication date: April 25, 2019
Applicant: Hewlett Packard Enterprise Development LP
Inventors: Kirill Malkin, Alan Poston, Matthew Jacob
-
Publication number: 20180018196
Abstract: A high performance computing (HPC) system has an architecture that separates data paths used by compute nodes exchanging computational data from the data paths used by compute nodes to obtain computational work units and save completed computations. The system enables an improved method of saving checkpoint data, and an improved method of using an analysis of the saved data to assign particular computational work units to particular compute nodes. The system includes a compute fabric and compute nodes that cooperatively perform a computation by mutual communication using the compute fabric. The system also includes a local data fabric that is coupled to the compute nodes, a memory, and a data node. The data node is configured to retrieve data for the computation from an external bulk data storage, and to store its work units in the memory for access by the compute nodes.
Type: Application
Filed: July 14, 2017
Publication date: January 18, 2018
Inventors: Steven J. Dean, Michael Woodacre, Randal S. Passint, Eric C. Fromm, Thomas E. McGee, Michael E. Malewicki, Kirill Malkin
-
Publication number: 20170139607
Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Limitations caused by hierarchical memory architectures, in which data is temporarily stored in slower or less available memories, will increasingly prevent high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, time spent and power consumed copying data into and out of a slower memory tier will increase costs associated with high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization and computational steering without the need for a data copy or movement.
Type: Application
Filed: November 16, 2016
Publication date: May 18, 2017
Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
-
Publication number: 20170115881
Abstract: The present system enables more efficient I/O processing by providing a mechanism for maintaining data within the locality of reference. One or more accelerator modules may be implemented within a solid state storage device (SSD). The accelerator modules form a caching storage tier that can receive, store and reproduce data. The one or more accelerator modules may place data into the SSD or hard disk drives based on parameters associated with the data.
Type: Application
Filed: October 31, 2016
Publication date: April 27, 2017
Inventor: Kirill Malkin
-
Patent number: 9619180
Abstract: The present system enables more efficient I/O processing by providing a mechanism for maintaining data within the locality of reference. One or more accelerator modules may be implemented within a solid state storage device (SSD). The accelerator modules form a caching storage tier that can receive, store and reproduce data. The one or more accelerator modules may place data into the SSD or hard disk drives based on parameters associated with the data.
Type: Grant
Filed: July 18, 2014
Date of Patent: April 11, 2017
Assignee: Silicon Graphics International Corp.
Inventor: Kirill Malkin
-
Patent number: 9513844
Abstract: The present system enables more efficient I/O processing by providing a mechanism for maintaining data within the locality of reference. One or more accelerator modules may be implemented within a solid state storage device (SSD). The accelerator modules form a caching storage tier that can receive, store and reproduce data. The one or more accelerator modules may place data into the SSD or hard disk drives based on parameters associated with the data.
Type: Grant
Filed: July 18, 2014
Date of Patent: December 6, 2016
Assignee: Silicon Graphics International Corp.
Inventor: Kirill Malkin
-
Publication number: 20160335002
Abstract: A scalable software stack is disclosed. In particular, the present disclosure provides a system and a method directed at allocating logical ownership of memory locations in a shared storage device among two or more associated compute devices that have access to the storage device. The logical ownership allocation can minimize potential conflicts between two simultaneous accesses occurring within the same memory location of the storage device.
Type: Application
Filed: May 12, 2015
Publication date: November 17, 2016
Inventor: Kirill Malkin
-
Publication number: 20150032921
Abstract: The present system enables more efficient I/O processing by providing a mechanism for maintaining data within the locality of reference. One or more accelerator modules may be implemented within a solid state storage device (SSD). The accelerator modules form a caching storage tier that can receive, store and reproduce data. The one or more accelerator modules may place data into the SSD or hard disk drives based on parameters associated with the data.
Type: Application
Filed: July 18, 2014
Publication date: January 29, 2015
Inventor: Kirill Malkin
-
Publication number: 20150019807
Abstract: The present technology provides a two-step process for providing a linearized dynamic storage pool. First, physical storage devices are abstracted: the physical storage devices used for the pool are divided into extents, grouped by storage class, and stripes are created from data chunks of similarly classified devices. A virtual volume is then provisioned from the pool and divided into virtual stripes. A volume map is created to map the virtual stripes containing data to the physical stripes, linearly mapping the virtual layout to the physical capacity to maintain optimal performance.
Type: Application
Filed: July 10, 2014
Publication date: January 15, 2015
Inventors: Kirill Malkin, Yann Livis
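The two steps above can be sketched in miniature: build stripes across same-class devices, then keep a volume map that linearly assigns virtual stripes to physical stripes. The data structures and names below are illustrative assumptions, not the patented layout.

```python
# Hedged sketch: a volume map that linearly maps virtual stripes onto
# physical stripes built from chunks of same-class devices.
def build_volume_map(num_virtual_stripes, physical_stripes):
    """Linearly map virtual stripes onto available physical stripes."""
    if num_virtual_stripes > len(physical_stripes):
        raise ValueError("pool has too few physical stripes")
    return {v: physical_stripes[v] for v in range(num_virtual_stripes)}

# Three physical stripes built from chunks of two same-class devices.
phys = [("devA", 0, "devB", 0), ("devA", 1, "devB", 1), ("devA", 2, "devB", 2)]
vmap = build_volume_map(2, phys)
assert vmap[0] == ("devA", 0, "devB", 0)   # virtual stripe 0 -> first stripe
assert vmap[1] == ("devA", 1, "devB", 1)   # layout stays linear
```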
-
Patent number: 8429274
Abstract: Various information about storage resources in a UNIX or UNIX derivative operating system computing environment is gathered from various sources in response to scan requests. Where a given type of information for a given storage resource is gathered from multiple sources, the information is verified for consistency, and placed in a single file in an industry standard hierarchical format. Scan threads are timed to provide reliable performance.
Type: Grant
Filed: September 6, 2006
Date of Patent: April 23, 2013
Assignee: RELDATA, Inc.
Inventor: Kirill Malkin
-
Publication number: 20110225382
Abstract: A first snapshot is taken of a first block storage resource that is initially identical in content to a second block storage resource. A second snapshot of the first block storage resource is taken at a later time. A record is kept of all blocks modified on the first block storage resource. Only those blocks modified between the time of the first and second snapshots are written to the second block storage resource. After all the modified blocks are written to the second block storage resource, a snapshot is taken of the second block storage resource to maintain a consistent snapshot of the second block storage resource in case of communication failure during the next round. The first snapshot is then deleted, the second takes the role of the first, and the next round of replication begins.
Type: Application
Filed: May 23, 2011
Publication date: September 15, 2011
Applicant: RELDATA, Inc.
Inventors: Kirill Malkin, Yann Livis
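One replication round of the scheme above can be sketched as follows: only blocks that changed between two snapshots of the source are written to the target, and the new snapshot takes over for the next round. Snapshots are modeled as plain dicts and the function name is an assumption; real snapshots would be copy-on-write, not full copies.

```python
# Sketch of one snapshot-delta replication round; names are illustrative.
def replicate_round(source, target, snap1):
    """Copy blocks modified since snap1 from source to target.

    Returns the new snapshot, which takes the role of snap1 next round.
    """
    snap2 = dict(source)                  # second snapshot of the source
    for block, data in snap2.items():
        if snap1.get(block) != data:      # block modified between snapshots
            target[block] = data          # write only modified blocks
    return snap2                          # old snap1 can now be deleted

src = {0: "a", 1: "b"}
dst = dict(src)                           # initially identical resources
snap1 = dict(src)                         # first snapshot
src[1] = "B"                              # source changes after snap1
snap1 = replicate_round(src, dst, snap1)
assert dst == {0: "a", 1: "B"}            # target caught up via one block
```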
-
Patent number: 8015270
Abstract: A configuration of a first storage resource is written to a first instance of a single file in a standard hierarchical format that is stored locally in nonvolatile memory and updatable by the first resource. A configuration of a second storage resource is written to a second instance of the single file in the standard hierarchical format stored locally in nonvolatile memory and updatable by the second resource. The first instance and second instance of the single file are updated so that all configurations are present and identical in all instances of the single file.
Type: Grant
Filed: September 6, 2006
Date of Patent: September 6, 2011
Assignee: RELDATA, Inc.
Inventors: Kirill Malkin, Mikhail Litvin