Patents by Inventor John Cardente
John Cardente has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10659329
Abstract: An apparatus in one embodiment comprises a plurality of container host devices of at least one processing platform. The container host devices implement a plurality of containers for executing applications on behalf of one or more tenants of cloud infrastructure. One or more of the container host devices are each configured to compute distance measures between respective pairs of the containers and to assign the containers to container clusters based at least in part on the distance measures. The distance measures may be computed as respective content-based distance measures between hash identifiers of respective layers of layer structures of the corresponding containers. The apparatus may further comprise an interface configured to present a visualization of the container clusters. User feedback received via the interface is utilized to alter at least one parameter of the computation of distance measures and the assignment of containers to container clusters.
Type: Grant
Filed: April 28, 2017
Date of Patent: May 19, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Junping Zhao, Kevin Xu, Sanping Li, Kun Wang, John Cardente
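A content-based distance between container layer structures, as described in this abstract, could look roughly like the following sketch. The Jaccard distance over layer hash sets and the greedy threshold-based cluster assignment are illustrative assumptions, not the patented implementation.

```python
# Sketch: content-based distance between containers, where each
# container is described by the set of hash identifiers of its image
# layers. Function names and the threshold are illustrative.

def jaccard_distance(layers_a, layers_b):
    """Distance in [0, 1]: 0 when the layer sets match, 1 when disjoint."""
    a, b = set(layers_a), set(layers_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def assign_clusters(containers, threshold=0.5):
    """Greedy single-pass clustering: join the first cluster whose
    representative is within `threshold`, else start a new cluster."""
    clusters = []  # list of (representative_layers, member_names)
    for name, layers in containers.items():
        for rep, members in clusters:
            if jaccard_distance(rep, layers) <= threshold:
                members.append(name)
                break
        else:
            clusters.append((layers, [name]))
    return [members for _, members in clusters]

containers = {
    "web-1": ["h1", "h2", "h3"],
    "web-2": ["h1", "h2", "h4"],
    "db-1":  ["h9", "h8"],
}
print(assign_clusters(containers))  # web-1/web-2 share layers; db-1 is alone
```

The user-feedback loop mentioned in the abstract would amount to adjusting parameters such as `threshold` and re-running the assignment.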
-
Patent number: 10628079
Abstract: A time-series data cache is operatively coupled between a time-series analytics application program and a time-series data store, and configured to temporarily store portions of the time-series data. The time-series data store is configured to persistently store time-series data. The time-series data cache is further configured to be responsive to one or more data read requests received from the time-series analytics application program.
Type: Grant
Filed: May 27, 2016
Date of Patent: April 21, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Sanping Li, Yu Cao, Junping Zhao, Zhe Dong, Accela Zhao, John Cardente
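The arrangement described here, a cache interposed between an analytics application and a persistent store, can be sketched as a read-through cache. The LRU eviction policy and all names below are assumptions for illustration; the patent does not specify them.

```python
from collections import OrderedDict

class TimeSeriesCache:
    """Read-through LRU cache sitting between a time-series analytics
    application and a persistent time-series store (a minimal sketch)."""

    def __init__(self, store, capacity=4):
        self.store = store          # dict-like: (series, window) -> points
        self.capacity = capacity
        self._lru = OrderedDict()

    def read(self, series, window):
        key = (series, window)
        if key in self._lru:                 # cache hit
            self._lru.move_to_end(key)
            return self._lru[key]
        points = self.store[key]             # miss: fetch from the store
        self._lru[key] = points
        if len(self._lru) > self.capacity:   # evict least recently used
            self._lru.popitem(last=False)
        return points

store = {("cpu", "09:00"): [0.2, 0.4], ("cpu", "09:05"): [0.3]}
cache = TimeSeriesCache(store, capacity=1)
assert cache.read("cpu", "09:00") == [0.2, 0.4]   # served from the store
assert cache.read("cpu", "09:00") == [0.2, 0.4]   # served from the cache
```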
-
Patent number: 10467725
Abstract: A graphics processing unit (GPU) service platform includes a control server, and a cluster of GPU servers each having one or more GPU devices. The control server receives a service request from a client system for GPU processing services, allocates multiple GPU servers within the cluster to handle GPU processing tasks specified by the service request by logically binding the allocated GPU servers and designating one of the at least two GPU servers as a master server, and sends connection information to the client system to enable the client system to connect to the master server. The master GPU server receives a block of GPU program code transmitted from the client system, which is associated with the GPU processing tasks specified by the service request, processes the block of GPU program code using the GPU devices of the logically bound GPU servers, and returns processing results to the client system.
Type: Grant
Filed: February 27, 2019
Date of Patent: November 5, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Yifan Sun, Layne Peng, Robert A. Lincourt, Jr., John Cardente, Junping Zhao
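The control-server allocation step described above can be sketched as follows: pick free GPU servers for a request, logically bind them under one group, designate a master, and return the master's connection information. All names and data structures here are illustrative assumptions.

```python
# Sketch of the control-server allocation step (illustrative only):
# select GPU servers, bind them under one group id, designate the
# first as master, and return the master's address to the client.
import itertools

_group_ids = itertools.count(1)

def allocate(servers, num_needed):
    """servers: dict name -> {"free": bool, "addr": str}."""
    free = [n for n, s in servers.items() if s["free"]]
    if len(free) < num_needed:
        raise RuntimeError("not enough free GPU servers")
    group = free[:num_needed]
    gid = next(_group_ids)
    for name in group:
        servers[name]["free"] = False
        servers[name]["group"] = gid       # logical binding
    master = group[0]                      # designate a master server
    return {"master_addr": servers[master]["addr"], "group": gid}

servers = {
    "gpu-a": {"free": True, "addr": "10.0.0.1"},
    "gpu-b": {"free": True, "addr": "10.0.0.2"},
    "gpu-c": {"free": False, "addr": "10.0.0.3"},
}
info = allocate(servers, 2)
print(info["master_addr"])  # the client connects to this master server
```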
-
Publication number: 20190197655
Abstract: A graphics processing unit (GPU) service platform includes a control server, and a cluster of GPU servers each having one or more GPU devices. The control server receives a service request from a client system for GPU processing services, allocates multiple GPU servers within the cluster to handle GPU processing tasks specified by the service request by logically binding the allocated GPU servers and designating one of the at least two GPU servers as a master server, and sends connection information to the client system to enable the client system to connect to the master server. The master GPU server receives a block of GPU program code transmitted from the client system, which is associated with the GPU processing tasks specified by the service request, processes the block of GPU program code using the GPU devices of the logically bound GPU servers, and returns processing results to the client system.
Type: Application
Filed: February 27, 2019
Publication date: June 27, 2019
Inventors: Yifan Sun, Layne Peng, Robert A. Lincourt, Jr., John Cardente, Junping Zhao
-
Patent number: 10262390
Abstract: A graphics processing unit (GPU) service platform includes a control server, and a cluster of GPU servers each having one or more GPU devices. The control server receives a service request from a client system for GPU processing services, allocates multiple GPU server nodes within the cluster to handle GPU processing tasks specified by the service request by logically binding the allocated GPU server nodes and designating one of the at least two GPU servers as a master server, and sends connection information to the client system to enable the client system to connect to the master server. The master GPU server node receives a block of GPU program code transmitted from the client system, which is associated with the GPU processing tasks specified by the service request, processes the block of GPU program code using the GPU devices of the logically bound GPU servers, and returns processing results to the client system.
Type: Grant
Filed: April 14, 2017
Date of Patent: April 16, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Yifan Sun, Layne Peng, Robert A. Lincourt, Jr., John Cardente, Junping Zhao
-
Patent number: 10109030
Abstract: A method implemented by a server enables sharing of GPU resources by multiple clients. The server receives a request from a first client for GPU services. The request includes a first block of GPU code of an application executing on the first client. A first task corresponding to the first block of GPU code is enqueued in a task queue. The task queue includes a second task that corresponds to a second block of GPU code of an application executing on a second client. The server schedules a time for executing the first task using a GPU device that is assigned to the first client, and dispatches the first task to a GPU worker process to execute the first task at the scheduled time using the GPU device. The GPU device is shared, either temporally or spatially, by the first and second clients for executing the first and second tasks.
Type: Grant
Filed: December 27, 2016
Date of Patent: October 23, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Yifan Sun, Layne Peng, Robert A. Lincourt, Jr., John Cardente, John S. Harwood
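The temporal-sharing case described above can be sketched with a simple FIFO task queue: each client's GPU code block becomes a task, and a worker drains the queue so tasks run one at a time on the shared device. The class and method names are illustrative, not from the patent.

```python
from collections import deque

class GPUServer:
    """Minimal sketch of temporally sharing one GPU device: tasks from
    different clients are enqueued and dispatched in FIFO order."""

    def __init__(self):
        self.queue = deque()
        self.log = []                     # records execution order

    def submit(self, client, gpu_code):
        self.queue.append((client, gpu_code))   # enqueue the task

    def run_worker(self):
        while self.queue:                 # dispatch tasks one at a time
            client, gpu_code = self.queue.popleft()
            self.log.append((client, gpu_code()))

server = GPUServer()
server.submit("client-1", lambda: "result-1")
server.submit("client-2", lambda: "result-2")
server.run_worker()
print(server.log)  # [('client-1', 'result-1'), ('client-2', 'result-2')]
```

Spatial sharing would instead partition the device's resources so both tasks run concurrently; that is not shown here.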
-
Patent number: 10089308
Abstract: Embodiments are directed to methods and apparatus for making available to at least one scanning tool, information about at least one data unit shared among multiple storage objects of a plurality of storage objects stored on a storage system. The at least one scanning tool can use the information to influence at least one scanning operation on at least some of the plurality of storage objects. Embodiments may be implemented in a computer system comprising at least one application program, at least one mapping layer that makes available to the at least one application program a plurality of storage objects, and a storage system that stores data in each of the plurality of storage objects in one or more data units.
Type: Grant
Filed: September 30, 2008
Date of Patent: October 2, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Christopher H. E. Stacey, John Cardente
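One natural use of shared-data-unit information, sketched below, is letting a scanner avoid redundant work: if two storage objects map to the same underlying data unit, that unit only needs to be scanned once. This is an illustrative interpretation, not the patent's stated implementation.

```python
# Sketch: a scanner that uses object-to-data-unit mappings to scan
# each shared data unit only once. All names are illustrative.

def scan_objects(object_maps, scan_unit):
    """object_maps: dict object_name -> list of data-unit ids.
    Returns per-object results plus the count of units scanned."""
    cache = {}                      # data-unit id -> scan result
    results = {}
    for obj, units in object_maps.items():
        for unit in units:
            if unit not in cache:   # first (and only) scan of this unit
                cache[unit] = scan_unit(unit)
        results[obj] = [cache[u] for u in units]
    return results, len(cache)

calls = []
def scan_unit(u):
    calls.append(u)                 # record actual scan work
    return f"ok:{u}"

maps = {"fileA": ["u1", "u2"], "fileB": ["u2", "u3"]}  # u2 is shared
results, scanned = scan_objects(maps, scan_unit)
print(scanned, len(calls))  # 3 unique units scanned, not 4 naive scans
```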
-
Patent number: 10067949
Abstract: Systems and methods are provided for adopting and controlling storage resources of a distributed file system using an acquired namespace metadata service. For example, a computing system includes a first file system, and a distributed file system, which is separate from the first file system. The distributed file system includes storage nodes for storing data. The first file system includes an acquired namespace metadata server that is configured to execute on one or more nodes of the first file system. To adopt and control storage resources of the distributed file system, the first file system acquires a namespace of the distributed file system and uses the acquired namespace metadata server to manage the acquired namespace of the distributed file system. Moreover, the first file system uses the acquired namespace metadata server to directly communicate with and control access to the storage nodes of the distributed file system.
Type: Grant
Filed: December 23, 2013
Date of Patent: September 4, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Chris Stacey, Jr., John Cardente
-
Patent number: 9621431
Abstract: Classification techniques are employed in computer networks. For example, network activity is monitored in a computer network and the monitored network activity is used to discover an endpoint of unknown type. A first set of classification models is utilized to identify an endpoint type of the discovered endpoint based on the monitored network activity. In addition, communication patterns between different endpoints of known types are monitored in the computer network, and a second set of classification models is utilized to determine a logical topology of the computer network based on the monitored communication patterns.
Type: Grant
Filed: December 23, 2014
Date of Patent: April 11, 2017
Assignee: EMC IP Holding Company LLC
Inventors: John Cardente, Kenneth Durazzo, Jack Harwood
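The first step, applying a set of classification models to monitored activity to identify an unknown endpoint's type, might look like the sketch below. The rule-based "models" are toy stand-ins; the patent does not specify the model family.

```python
# Sketch: score monitored-activity features with one model per
# candidate endpoint type and pick the best-scoring type.
# The feature names and scoring rules are illustrative assumptions.

def storage_model(features):
    # storage arrays tend to move large payloads per connection
    return 0.9 if features["avg_bytes"] > 100_000 else 0.1

def client_model(features):
    # client hosts tend to open many short-lived connections
    return 0.9 if features["conn_rate"] > 50 else 0.1

def classify_endpoint(features, models):
    """Return the endpoint type whose model scores the activity highest."""
    scores = {name: model(features) for name, model in models.items()}
    return max(scores, key=scores.get)

models = {"storage-array": storage_model, "client-host": client_model}
activity = {"avg_bytes": 500_000, "conn_rate": 3}
print(classify_endpoint(activity, models))  # -> storage-array
```

The second set of models would analogously score pairwise communication patterns to infer the network's logical topology.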
-
Patent number: 9582328
Abstract: A specification of resource requirements is received. One or more resource configurations for a computing environment that satisfy the specification of resource requirements are generated utilizing a description of available resources in the computing environment. A model is utilized to estimate a level of service for each of the resource configurations, wherein the model predicts behavioral dependencies between attributes of the resources in the computing environment. A given one of the resource configurations is selected based at least in part on the estimated levels of service, and resources in the computing environment are assigned according to the selected configuration of resources.
Type: Grant
Filed: June 19, 2015
Date of Patent: February 28, 2017
Assignee: EMC IP Holding Company LLC
Inventors: Simon Tao, Yu Cao, Xiaoyan Guo, Kenneth Durazzo, John Cardente
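The generate-score-select loop described in this abstract can be sketched as follows. The single-dimension configurations and the toy service-level model are assumptions; the patent's model captures richer behavioral dependencies between resource attributes.

```python
# Sketch: generate candidate configurations that satisfy the
# requirements, score each with a service-level model, pick the best.
# Configurations here are just CPU counts; all names are illustrative.

def generate_configs(available_cpu_counts, required_cpus):
    """Keep only configurations meeting the stated requirement."""
    return [c for c in available_cpu_counts if c >= required_cpus]

def estimate_service_level(cpus, base_latency_ms=100.0):
    """Toy model: estimated latency shrinks with CPUs; higher score
    means better predicted service."""
    return 1000.0 / (base_latency_ms / cpus)

def select_config(available_cpu_counts, required_cpus):
    candidates = generate_configs(available_cpu_counts, required_cpus)
    return max(candidates, key=estimate_service_level)

available_cpu_counts = [2, 4, 8]
print(select_config(available_cpu_counts, required_cpus=4))  # -> 8
```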
-
Patent number: 8935751
Abstract: Extensions to the Fragment Mapping Protocol are introduced which protect a disk array from malicious client access by exporting file system access information to the storage device. FMP requests received at the storage device can be authorized at a block granularity prior to completion, thereby limiting the exposure of the disk array to malicious clients. Client authorizations can be cached at the storage device to enable the permissions to be quickly extracted for subsequent client accesses to pre-authorized volumes.
Type: Grant
Filed: September 29, 2006
Date of Patent: January 13, 2015
Assignee: EMC Corporation
Inventors: John Cardente, Stephen Fridella, Uday Gupta
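Block-granular authorization with a cache of prior grants, as described above, can be sketched like this. The data structures and names are illustrative, not the FMP extension's actual wire format or semantics.

```python
# Sketch: authorize each (client, block) access against exported
# file-system access information, caching decisions so repeat
# accesses to pre-authorized blocks skip the full lookup.

class BlockAuthorizer:
    def __init__(self, acl):
        self.acl = acl            # (client, block) -> allowed?
        self.cache = {}           # cached authorization decisions
        self.acl_lookups = 0      # counts slow-path lookups

    def authorize(self, client, block):
        key = (client, block)
        if key in self.cache:     # fast path: previously authorized
            return self.cache[key]
        self.acl_lookups += 1     # slow path: consult exported ACL
        allowed = self.acl.get(key, False)
        self.cache[key] = allowed
        return allowed

auth = BlockAuthorizer({("nfs-client-1", 42): True})
assert auth.authorize("nfs-client-1", 42) is True   # ACL lookup
assert auth.authorize("nfs-client-1", 42) is True   # served from cache
assert auth.authorize("rogue-client", 42) is False  # denied
print(auth.acl_lookups)  # 2
```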
-
Patent number: 8489816
Abstract: A predictive model specifies a workload to be applied to a hierarchy of caches having multiple levels of caches. The predictive model defines a configuration for the hierarchy of caches by specifying cache characteristics of each level of the hierarchy of caches and the underlying storage pool and applies the workload to the configuration. For each level of the configuration, the predictive model computes a performance metric based on a portion of the workload satisfied at the level and the cache characteristics of the level. The predictive model computes resource allocation metrics based on the performance metric for the levels and a cost associated with the configuration. Based on the workload, the configuration, performance metrics, and resource allocation metrics, the predictive model creates a design time recommendation, a configuration time recommendation, and a run time recommendation for the hierarchy of caches.
Type: Grant
Filed: January 12, 2012
Date of Patent: July 16, 2013
Assignee: EMC Corporation
Inventors: David Reiner, John Cardente
-
Patent number: 8112586
Abstract: A predictive model specifies a workload to be applied to a hierarchy of caches having multiple levels of caches. The predictive model defines a configuration for the hierarchy of caches by specifying cache characteristics of each level of the hierarchy of caches and the underlying storage pool and applies the workload to the configuration. For each level of the configuration, the predictive model computes a performance metric based on a portion of the workload satisfied at the level and the cache characteristics of the level. The predictive model computes resource allocation metrics based on the performance metric for the levels and a cost associated with the configuration. Based on the workload, the configuration, performance metrics, and resource allocation metrics, the predictive model creates a design time recommendation, a configuration time recommendation, and a run time recommendation for the hierarchy of caches.
Type: Grant
Filed: August 12, 2008
Date of Patent: February 7, 2012
Assignee: EMC Corporation
Inventors: David Reiner, John Cardente
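The per-level computation these two related abstracts describe, the portion of the workload satisfied at each level driving a performance metric, can be sketched as follows. The hit ratios and latencies are made-up inputs, and average latency stands in for the patent's unspecified performance metric.

```python
# Sketch: given a workload of N requests and each level's hit ratio
# and access latency, compute the portion satisfied at each level and
# the resulting average latency per request. Illustrative values only.

def model_hierarchy(requests, levels):
    """levels: list of (name, hit_ratio, latency_us), ordered from the
    fastest cache down to the backing storage pool (hit_ratio 1.0)."""
    remaining = requests
    total_time = 0.0
    per_level = []
    for name, hit_ratio, latency_us in levels:
        served = remaining * hit_ratio          # portion satisfied here
        total_time += served * latency_us
        per_level.append((name, served))
        remaining -= served                     # misses fall through
    return per_level, total_time / requests     # avg latency per request

levels = [
    ("dram-cache", 0.8, 1.0),
    ("ssd-cache", 0.5, 100.0),
    ("disk-pool", 1.0, 10000.0),
]
per_level, avg_latency_us = model_hierarchy(1000, levels)
print(per_level, avg_latency_us)
```

Comparing such outputs across candidate configurations, weighted by their costs, is what would drive the design-time, configuration-time, and run-time recommendations.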
-
Patent number: 7359927
Abstract: A method for transferring a copy of data stored at a source to a remote location. The method includes storing at the source a sequence of sets of changes in the data stored at the source. The method transfers to a first one of a pair of storage volumes at the remote location the most recent pair of the stored sets of changes in the sequence of data stored at the source at a time prior to such transfer. The method subsequently transfers to a second one of the pair of storage volumes at the remote location the most recent pair of the stored sets of changes in the sequence of data stored at the source at a time prior to such subsequent transfer.
Type: Grant
Filed: December 1, 2004
Date of Patent: April 15, 2008
Assignee: EMC Corporation
Inventor: John Cardente
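The alternation between the two remote volumes can be sketched as below: each transfer targets the volume not currently being updated, so one volume of the pair always holds a consistent earlier copy. The class and its bookkeeping are illustrative assumptions.

```python
# Sketch: alternate change-set transfers between a pair of remote
# volumes so that one volume is always a stable fallback copy while
# the other is being updated. Names are illustrative.

class RemotePair:
    def __init__(self):
        self.volumes = [[], []]   # change sets applied to each volume
        self.next_target = 0

    def transfer(self, change_set):
        """Apply a change set to the current target volume and
        toggle targets; returns the volume index that was updated."""
        target = self.next_target
        self.volumes[target].append(change_set)
        self.next_target = 1 - target        # alternate volumes
        return target

pair = RemotePair()
assert pair.transfer("delta-1") == 0
assert pair.transfer("delta-2") == 1
assert pair.transfer("delta-3") == 0
print(pair.volumes)  # [['delta-1', 'delta-3'], ['delta-2']]
```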
-
Patent number: 7254685
Abstract: A remote replication solution for a storage system receives a stream of data including independent streams of dependent writes. The method is able to discern dependent from independent writes. The method discerns dependent from independent writes by assigning a sequence number to each write, the sequence number indicating a time interval in which the write began. It then assigns a horizon number to each write request, the horizon number indicating a time interval in which the first write that started at a particular sequence number ends. A write is caused to be stored on a storage device, and its horizon number is assigned as a replication number. Further writes are caused to be stored on the storage device if the sequence number associated with the writes is less than the replication number.
Type: Grant
Filed: June 15, 2004
Date of Patent: August 7, 2007
Assignee: EMC Corporation
Inventor: John Cardente
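The admission rule in this abstract, apply a write, take its horizon number as the replication number, and admit further writes only while their sequence numbers stay below it, can be sketched as follows. The data layout and round structure are illustrative assumptions.

```python
# Sketch of the sequence/horizon admission rule: each write carries a
# sequence number (the interval in which it began) and a horizon
# number (the interval in which the first write of that sequence
# number ended). Writes with sequence numbers below the current
# replication number are independent and may be stored together.

def replicate(writes):
    """writes: list of (name, sequence, horizon), in arrival order.
    Returns the names admitted in the first replication round."""
    applied = []
    first_name, _seq, first_horizon = writes[0]
    applied.append(first_name)
    replication_number = first_horizon   # horizon of the applied write
    for name, sequence, _horizon in writes[1:]:
        if sequence < replication_number:   # independent of pending write
            applied.append(name)
        else:
            break                           # possibly dependent: wait
    return applied

writes = [("w1", 1, 3), ("w2", 2, 4), ("w3", 3, 5)]
print(replicate(writes))  # w3 began at w1's horizon, so it must wait
```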
-
Patent number: 7174422
Abstract: In general, in one aspect, the disclosure describes a data storage device that includes a device interface for receiving data access requests, more than two disk drives having platter sizes less than 3.5 inches in diameter, and a controller that accesses the disk drives in response to the received data access requests.
Type: Grant
Filed: October 23, 2001
Date of Patent: February 6, 2007
Assignee: EMC Corporation
Inventors: Michael Kowalchik, John Cardente
-
Patent number: 6973537
Abstract: In general, in one aspect, the disclosure describes a cache that includes a front-end interface that receives data access requests that specify respective data storage addresses, a back-end interface that can retrieve data identified by the data storage addresses, cache storage formed by at least two disks, and a cache manager that services at least some of the requests received at the front-end interface using data stored in the cache storage.
Type: Grant
Filed: October 23, 2001
Date of Patent: December 6, 2005
Assignee: EMC Corporation
Inventors: Michael Kowalchik, John Cardente