Patents by Inventor Roy Kim
Roy Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 12008404
  Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
  Type: Grant
  Filed: April 14, 2022
  Date of Patent: June 11, 2024
  Assignee: Pure Storage, Inc.
  Inventors: Ivan Jibaja, Prashant Jaikumar, Stefan Dorsett, Curtis Pullen, Roy Kim
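The claimed flow is easy to sketch: receive, store, allocate, execute. Below is a minimal Python sketch of that pipeline; the names (SharedStorage, run_pipeline) are illustrative assumptions, and a thread pool merely stands in for the storage system's compute resources.

```python
# A minimal sketch of the claimed pipeline flow; nothing here is taken
# from the patent's actual implementation.
from concurrent.futures import ThreadPoolExecutor

class SharedStorage:
    """Stands in for the storage system's shared storage resources."""
    def __init__(self):
        self._objects = {}

    def put(self, key, dataset):
        self._objects[key] = dataset

    def get(self, key):
        return self._objects[key]

def run_pipeline(storage, dataset, analytics_fn, num_workers=4):
    # Receive the dataset from a data producer and store it in the storage system.
    storage.put("incoming", dataset)
    # Allocate processing resources to the analytics application; a thread
    # pool stands in for the storage system's compute resources.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        # Execute the analytics application, ingesting the dataset from storage.
        return list(pool.map(analytics_fn, storage.get("incoming")))

print(run_pipeline(SharedStorage(), [1, 2, 3, 4], lambda x: x * x))
```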
- Patent number: 11803338
  Abstract: Executing a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: receiving, by a graphical processing unit (‘GPU’) server, a dataset transformed by a storage system that is external to the GPU server; and executing, by the GPU server, one or more machine learning algorithms using the transformed dataset as input.
  Type: Grant
  Filed: November 30, 2021
  Date of Patent: October 31, 2023
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
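A toy illustration of the division of labor this claims: transformation happens on the storage side, and the "GPU server" only runs a learning algorithm on the already-transformed data. The least-squares fit below is a stand-in for the machine learning algorithms; all names are assumptions, not the patented implementation.

```python
def storage_side_transform(raw_records):
    # The transformation runs on the storage system, external to the GPU server.
    return [(float(x), float(y)) for x, y in raw_records]

def gpu_server_train(transformed, epochs=2000, lr=0.05):
    # Stand-in for "executing one or more machine learning algorithms":
    # a least-squares line fit by gradient descent over the transformed data.
    w = b = 0.0
    n = len(transformed)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in transformed) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in transformed) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

raw = [("0", "1"), ("1", "3"), ("2", "5")]
print(gpu_server_train(storage_side_transform(raw)))  # approaches (2.0, 1.0)
```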
- Patent number: 11768636
  Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
  Type: Grant
  Filed: December 27, 2022
  Date of Patent: September 26, 2023
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
- Patent number: 11556280
  Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
  Type: Grant
  Filed: May 29, 2020
  Date of Patent: January 17, 2023
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
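The store-once, serve-many flow described here (and in the related 11768636 entry above) amounts to memoizing the transformed dataset. A minimal sketch, assuming an illustrative TransformCache class:

```python
class TransformCache:
    """Memoizes transformed datasets so repeated requests skip the transform."""
    def __init__(self):
        self._cache = {}   # (dataset_id, transform_name) -> transformed records

    def get_transformed(self, dataset_id, dataset, transform):
        key = (dataset_id, transform.__name__)
        if key not in self._cache:
            # First request: apply the transformation and store the result.
            self._cache[key] = [transform(rec) for rec in dataset]
        # Later requests are served without re-performing the transformation.
        return self._cache[key]

calls = {"n": 0}
def normalize(x):
    calls["n"] += 1
    return x / 10.0

cache = TransformCache()
cache.get_transformed("ds1", [10, 20], normalize)
cache.get_transformed("ds1", [10, 20], normalize)   # served from the cache
print(calls["n"])   # 2: the transform ran once per record, not once per request
```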
- Patent number: 11403290
  Abstract: Ensuring reproducibility in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, by a unified management plane, one or more transformations applied to a dataset by the artificial intelligence infrastructure, wherein applying the one or more transformations to the dataset causes the artificial intelligence infrastructure to generate a transformed dataset; storing, within the one or more storage systems, information describing the dataset, the one or more transformations applied to the dataset, and the transformed dataset; identifying, by the unified management plane, one or more machine learning models executed by the artificial intelligence infrastructure using the transformed dataset as input; and storing, within the one or more storage systems, information describing one or more machine learning models executed using the transformed dataset as input.
  Type: Grant
  Filed: July 18, 2019
  Date of Patent: August 2, 2022
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
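The reproducibility claim boils down to recording lineage: which transformations produced which dataset, and which models consumed it. A hedged sketch of such a record, with field names invented purely for illustration:

```python
# Sketch of the lineage record the abstract describes; the field names are
# illustrative, not taken from the patent.
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    dataset_id: str
    transformations: list                    # transforms applied to the dataset
    transformed_dataset_id: str
    models_executed: list = field(default_factory=list)

registry = {}

def record_transformation(dataset_id, transforms, transformed_id):
    registry[transformed_id] = LineageRecord(dataset_id, transforms, transformed_id)

def record_model_run(transformed_id, model_name):
    registry[transformed_id].models_executed.append(model_name)

record_transformation("raw-001", ["normalize", "tokenize"], "xf-001")
record_model_run("xf-001", "resnet50-v2")
print(registry["xf-001"])
```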
- Patent number: 11307894
  Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
  Type: Grant
  Filed: October 22, 2019
  Date of Patent: April 19, 2022
  Assignee: Pure Storage, Inc.
  Inventors: Ivan Jibaja, Stefan Dorsett, Prashant Jaikumar, Roy Kim, Curtis Pullen
- Patent number: 11210140
  Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
  Type: Grant
  Filed: May 29, 2020
  Date of Patent: December 28, 2021
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
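A small sketch of the offloading idea: the storage side selects transformations from the target model's declared input format and applies them itself, so the GPU servers receive ready-to-consume data. The EXPECTED_INPUT and TRANSFORMS tables below are assumptions for illustration only.

```python
# Illustrative registries: what each model expects, and the transforms we
# know how to run on the storage side.
EXPECTED_INPUT = {
    "image-classifier": ["decode_jpeg", "resize_224", "normalize"],
    "text-model": ["lowercase", "tokenize"],
}

TRANSFORMS = {
    "lowercase": str.lower,
    "tokenize": str.split,
}

def offload_transform(dataset, model_name):
    # Identify the transformations in dependence upon the target model ...
    pipeline = [TRANSFORMS[name] for name in EXPECTED_INPUT[model_name]
                if name in TRANSFORMS]
    # ... and generate the transformed dataset on the storage system.
    out = dataset
    for fn in pipeline:
        out = [fn(rec) for rec in out]
    return out

print(offload_transform(["Hello World", "Foo Bar"], "text-model"))
# [['hello', 'world'], ['foo', 'bar']]
```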
- Patent number: 10671434
  Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
  Type: Grant
  Filed: July 20, 2018
  Date of Patent: June 2, 2020
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
- Patent number: 10671435
  Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
  Type: Grant
  Filed: July 20, 2018
  Date of Patent: June 2, 2020
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
- Patent number: 10649988
  Abstract: An artificial intelligence and machine learning infrastructure system, including: one or more storage systems comprising, respectively, one or more storage devices; and one or more graphical processing units, wherein the graphical processing units are configured to communicate with the one or more storage systems over a communication fabric; where the one or more storage systems, the one or more graphical processing units, and the communication fabric are implemented within a single chassis.
  Type: Grant
  Filed: July 27, 2018
  Date of Patent: May 12, 2020
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
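As a rough illustration only, the claimed topology can be written down as a declarative configuration; every field name and value below is invented, not drawn from the patent:

```python
# Hypothetical single-chassis description: storage systems, GPUs, and the
# communication fabric all live in one enclosure.
chassis = {
    "storage_systems": [
        {"id": "ss0", "storage_devices": ["nvme0", "nvme1", "nvme2"]},
    ],
    "gpu_servers": [{"id": "gpu0"}, {"id": "gpu1"}],
    "communication_fabric": "shared-internal-fabric",
}

def validate_single_chassis(cfg):
    # Per the claim: at least one storage system, at least one GPU, and a
    # fabric connecting them, all within the one chassis described by cfg.
    return bool(cfg["storage_systems"] and cfg["gpu_servers"]
                and cfg["communication_fabric"])

print(validate_single_chassis(chassis))  # True
```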
- Patent number: 10554572
  Abstract: Approaches, techniques, and mechanisms are disclosed for improving the efficiency with which data units are handled within a device, such as a networking device. Received data units, or portions thereof, are temporarily stored within one or more memories of a merging component, while the merging component waits to receive control information for the data units. Once received, the merging component merges the control information with the associated data units. The merging component dispatches the merged data units, or portions thereof, to an interconnect component, which forwards the merged data units to destinations indicated by the control information. The device is configured to intelligently schedule the dispatching of merged data units to the interconnect component. To this end, the device includes a scheduler configured to select which merged data units to dispatch at which times based on a variety of factors described herein.
  Type: Grant
  Filed: February 15, 2017
  Date of Patent: February 4, 2020
  Assignee: Innovium, Inc.
  Inventors: William Brad Matthews, Paul Roy Kim, Puneet Agarwal
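The merge-then-dispatch behavior can be sketched as a component that buffers data units until their control information arrives, merges the two, and then schedules dispatch. The FIFO queue below stands in for the patent's multi-factor scheduler; all names are illustrative.

```python
from collections import deque

class MergingComponent:
    def __init__(self):
        self._pending = {}     # data_unit_id -> payload awaiting control info
        self._ready = deque()  # merged units awaiting dispatch

    def receive_data_unit(self, unit_id, payload):
        # Temporarily store the data unit while waiting for control information.
        self._pending[unit_id] = payload

    def receive_control_info(self, unit_id, control):
        # Merge the control information with the associated data unit.
        payload = self._pending.pop(unit_id)
        self._ready.append({"id": unit_id, "payload": payload,
                            "dest": control["dest"]})

    def dispatch(self):
        # The real scheduler weighs a variety of factors; FIFO stands in here.
        while self._ready:
            unit = self._ready.popleft()
            print(f"forwarding unit {unit['id']} to {unit['dest']}")

mc = MergingComponent()
mc.receive_data_unit(1, b"\x00\x01")
mc.receive_control_info(1, {"dest": "port-7"})
mc.dispatch()
```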
- Patent number: 10452444
  Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
  Type: Grant
  Filed: January 30, 2018
  Date of Patent: October 22, 2019
  Assignee: Pure Storage, Inc.
  Inventors: Ivan Jibaja, Stefan Dorsett, Prashant Jaikumar, Roy Kim, Curtis Pullen
- Patent number: 10360214
  Abstract: Ensuring reproducibility in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, by a unified management plane, one or more transformations applied to a dataset by the artificial intelligence infrastructure, wherein applying the one or more transformations to the dataset causes the artificial intelligence infrastructure to generate a transformed dataset; storing, within the one or more storage systems, information describing the dataset, the one or more transformations applied to the dataset, and the transformed dataset; identifying, by the unified management plane, one or more machine learning models executed by the artificial intelligence infrastructure using the transformed dataset as input; and storing, within the one or more storage systems, information describing one or more machine learning models executed using the transformed dataset as input.
  Type: Grant
  Filed: July 26, 2018
  Date of Patent: July 23, 2019
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
- Patent number: 10275176
  Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
  Type: Grant
  Filed: July 26, 2018
  Date of Patent: April 30, 2019
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
- Patent number: 10275285
  Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
  Type: Grant
  Filed: July 26, 2018
  Date of Patent: April 30, 2019
  Assignee: Pure Storage, Inc.
  Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
- Patent number: 10200508
  Abstract: A special-purpose processing system, a method of sharing special-purpose processing resources, and a graphics processing system. In one embodiment, the special-purpose processing system includes: (1) a special-purpose processing resource and (2) a Representational State Transfer (ReST) application programming interface operable to process data using the special-purpose processing resource in response to stateless commands based on a standard protocol selected from the group consisting of: (2a) a standard network protocol and (2b) a standard database query protocol.
  Type: Grant
  Filed: January 7, 2014
  Date of Patent: February 5, 2019
  Assignee: Nvidia Corporation
  Inventors: Jonathan Cohen, Michael Houston, Frank Jargstorff, Eric Young, Roy Kim
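A minimal sketch of a stateless ReST-style front end to a shared special-purpose resource, using only the Python standard library; the route, payload format, and the squaring "workload" are assumptions for illustration, not NVIDIA's API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def special_purpose_process(values):
    # Stand-in for work done on the shared special-purpose resource (e.g. a GPU).
    return [v * v for v in values]

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Each request is self-contained (stateless), per the ReST constraint.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = special_purpose_process(body["values"])
        payload = json.dumps({"result": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Try it with: curl -X POST localhost:8080 -d '{"values": [1, 2, 3]}'
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```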
- Publication number: 20150081866
  Abstract: A special-purpose processing system, a method of sharing special-purpose processing resources, and a graphics processing system. In one embodiment, the special-purpose processing system includes: (1) a special-purpose processing resource and (2) a Representational State Transfer (ReST) application programming interface operable to process data using the special-purpose processing resource in response to stateless commands based on a standard protocol selected from the group consisting of: (2a) a standard network protocol and (2b) a standard database query protocol.
  Type: Application
  Filed: January 7, 2014
  Publication date: March 19, 2015
  Applicant: Nvidia Corporation
  Inventors: Jonathan Cohen, Michael Houston, Frank Jargstorff, Eric Young, Roy Kim
- Publication number: 20060015689
  Abstract: The present invention provides parallel processing of write-back and reload operations in a cache system, and optimal circuit utilization, by implementing movable buffers in cache storage. Data and its associated pointers are not permanently assigned to a particular buffer, so the buffers can move logically around the facility. The reload pointer always points to an empty entry, so that data retrieved from main memory or from a cache at the same hierarchy level on a cache miss can always be accommodated. The victim pointer always points to a modified entry, the next candidate for a write-back operation. A write-back operation must accompany a reload operation to free an entry for further cache-miss handling, unless a free entry already exists. Because of these movable pointers for the reload and victim buffers, and the write-back buffer integrated in the cache, intra-cache data movement is unnecessary, which improves cache-miss handling performance.
  Type: Application
  Filed: July 15, 2004
  Publication date: January 19, 2006
  Applicants: International Business Machines Corporation, Sony Computer Entertainment Inc.
  Inventors: Yasukichi Okawa, Roy Kim, Peichun Liu, Thuong Truong
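A toy model of the movable pointers: the reload pointer always names an empty entry and the victim pointer a modified one, so freeing a slot is a pointer move rather than a data copy. The states and policy below are simplified assumptions, not the patented design.

```python
EMPTY, CLEAN, MODIFIED = "empty", "clean", "modified"

class CacheArray:
    def __init__(self, n):
        self.state = [EMPTY] * n

    def reload_pointer(self):
        # Always names an empty entry, so reload data arriving on a cache
        # miss can be accommodated immediately.
        return self.state.index(EMPTY)

    def victim_pointer(self):
        # Always names a modified entry: the next write-back candidate.
        return self.state.index(MODIFIED) if MODIFIED in self.state else None

    def handle_miss(self):
        slot = self.reload_pointer()
        self.state[slot] = CLEAN          # reload data lands here
        # Keep an empty entry available for the next miss: write back a
        # victim if no free entry remains. Only the pointers move, so no
        # intra-cache data copy is needed.
        if EMPTY not in self.state and self.victim_pointer() is not None:
            self.state[self.victim_pointer()] = EMPTY   # modeled write-back
        return slot

cache = CacheArray(4)
cache.state[1] = MODIFIED                 # a dirty line awaiting write-back
print(cache.handle_miss(), cache.state)   # 0 ['clean', 'modified', 'empty', 'empty']
```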
- Publication number: 20050289300
  Abstract: The present invention provides for managing an atomic facility cache write-back state machine. A first write-back selection is made. A reservation pointer pointing to the reserved line in the atomic facility data array is established. A next write-back selection is made. The entry for the reservation pointer is removed from the next write-back selection, whereby the valid reserved line is precluded from being selected for the write-back. This prevents a modified command from being invalidated.
  Type: Application
  Filed: June 24, 2004
  Publication date: December 29, 2005
  Applicants: International Business Machines Corporation, Sony Computer Entertainment Inc.
  Inventors: Roy Kim, Yasukichi Okawa, Thuong Truong
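Reduced to its selection rule, the idea is that a line under reservation is removed from the pool of write-back candidates, so it cannot be invalidated mid-operation. A hedged sketch with illustrative names:

```python
def select_write_back(modified_lines, reserved_line):
    # The reserved line is excluded from the write-back candidate pool, so
    # the write-back selection can never invalidate the line under reservation.
    candidates = [line for line in modified_lines if line != reserved_line]
    return candidates[0] if candidates else None

print(select_write_back([3, 7, 9], reserved_line=7))  # 3, never the reserved 7
```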
- Publication number: 20050273563
  Abstract: A cache write-back operation, which writes modified data from the cache data array back to memory to resolve the inconsistency between them, can be cancelled based on a comparison of the progress of the write-back against a snoop push or snoop kill operation. A write-back is intended to free a slot to accommodate reload data after a cache miss; since a snoop push or snoop kill operation itself creates an invalid entry in the cache, the write-back is not needed. If a snoop push or kill coincides with a write-back operation, the write-back machine is late-cancelled. System performance improves because more cache lines are preserved in the cache data array for possible future reuse.
  Type: Application
  Filed: June 3, 2004
  Publication date: December 8, 2005
  Applicants: International Business Machines Corporation, Sony Computer Entertainment Inc.
  Inventors: Roy Kim, Yasukichi Okawa, Thuong Truong
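The late-cancel decision can be sketched as a simple check: if a snoop push or kill overlaps the write-back, the write-back is cancelled, because the snoop already frees an entry. The progress comparison is reduced to a presence test here, purely for illustration.

```python
def resolve_write_back(write_back_line, snoop_ops):
    # A snoop push or snoop kill already creates an invalid (free) entry in
    # the cache, so a write-back issued only to free a slot is redundant.
    if any(op in ("push", "kill") for op, _line in snoop_ops):
        return None                 # late-cancel: keep the line for future reuse
    return write_back_line          # no overlapping snoop: write-back proceeds

print(resolve_write_back(5, [("push", 2)]))  # None: write-back cancelled
print(resolve_write_back(5, []))             # 5: write-back proceeds
```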