Patents by Inventor Roy Kim
Roy Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11950158
Abstract: The present invention relates to a method and device for distributing idle UE by a carrier in an eNB of a multi-carrier based mobile communication system. The method of distributing idle UE in a multi-carrier based mobile communication system according to the present invention includes a process of determining a search rate for each carrier on the basis of information representing the load on the carrier, a process of determining a cell reselection priority for the idle UE on the basis of the determined search rate, and a process of transmitting the determined cell reselection priority to the idle UE.
Type: Grant
Filed: August 23, 2021
Date of Patent: April 2, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jeong-Jae Won, Dae-Joong Kim, Han-Seok Kim, Abhishek Roy, Hwa-Jin Cha, Jung-Min Choi
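The load-balancing idea in this abstract can be sketched in a few lines: derive a per-carrier "search rate" from reported load, then rank carriers so idle UEs are steered toward lightly loaded ones. The function names, the headroom formula, and the priority encoding below are all illustrative assumptions, not taken from the patent claims.

```python
# Hypothetical sketch of the idle-UE distribution method: each carrier's
# search rate is derived from its load, and cell reselection priorities
# are assigned so idle UEs favor lightly loaded carriers.

def search_rates(loads: dict[str, float]) -> dict[str, float]:
    """Map per-carrier load (0.0..1.0) to a normalized search rate,
    giving lightly loaded carriers a larger share of idle UEs."""
    headroom = {c: max(0.0, 1.0 - load) for c, load in loads.items()}
    total = sum(headroom.values()) or 1.0
    return {c: h / total for c, h in headroom.items()}

def reselection_priorities(loads: dict[str, float]) -> dict[str, int]:
    """Rank carriers by search rate; a higher value means a more
    preferred carrier for idle-mode cell reselection."""
    rates = search_rates(loads)
    ordered = sorted(rates, key=rates.get)   # ascending rate
    return {carrier: rank for rank, carrier in enumerate(ordered)}

priorities = reselection_priorities({"f1": 0.9, "f2": 0.3, "f3": 0.6})
# The least-loaded carrier (f2) receives the highest priority value.
```

In a real eNB the priorities would be signaled to idle UEs via RRC release or system information; here the dictionary simply stands in for that message.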
-
Publication number: 20240028266
Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
Type: Application
Filed: September 12, 2023
Publication date: January 25, 2024
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
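The flow this abstract describes, transform a dataset once into the format a model expects, store the result, and transmit the stored copy rather than reapplying the transformation, can be sketched as follows. The class, the normalization transform, and the dataset names are invented for illustration and are not the patented implementation.

```python
# Illustrative sketch: a dataset is transformed once into a model's
# expected input format, stored, and later transmissions reuse the
# stored copy instead of reapplying the transformation.

def to_normalized_floats(raw):
    """Example transformation: scale integer samples into [0.0, 1.0]."""
    peak = max(raw) or 1
    return [x / peak for x in raw]

class TransformedStore:
    def __init__(self):
        self._store = {}

    def put(self, name, raw, transform):
        """Apply the expected-format transformation once and store it."""
        self._store[name] = transform(raw)

    def send(self, name):
        """Transmit the stored, already-transformed dataset as-is."""
        return self._store[name]

store = TransformedStore()
store.put("images", [0, 128, 255], to_normalized_floats)
batch = store.send("images")   # no transformation re-applied here
```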
-
Patent number: 11803338
Abstract: Executing a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: receiving, by a graphical processing unit (‘GPU’) server, a dataset transformed by a storage system that is external to the GPU server; and executing, by the GPU server, one or more machine learning algorithms using the transformed dataset as input.
Type: Grant
Filed: November 30, 2021
Date of Patent: October 31, 2023
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 11768636
Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
Type: Grant
Filed: December 27, 2022
Date of Patent: September 26, 2023
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Publication number: 20230126789
Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
Type: Application
Filed: December 27, 2022
Publication date: April 27, 2023
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 11556280
Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
Type: Grant
Filed: May 29, 2020
Date of Patent: January 17, 2023
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
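The caching behavior in this abstract, where many requests for the transformed dataset are served without re-performing the transformation, can be sketched with a counter that proves the transform runs exactly once. The cache class, keys, and stand-in transform are assumptions for illustration only.

```python
# Minimal cache sketch matching the abstract's flow: the transformation
# runs once, its result is stored, and each subsequent request is served
# from the stored copy without re-performing the work.

transform_runs = 0

def expensive_transform(dataset):
    global transform_runs
    transform_runs += 1
    return sorted(dataset)          # stand-in for a costly transformation

class TransformCache:
    def __init__(self, transform):
        self._transform = transform
        self._cache = {}

    def request(self, key, dataset):
        if key not in self._cache:            # transform exactly once
            self._cache[key] = self._transform(dataset)
        return self._cache[key]               # serve the stored result

cache = TransformCache(expensive_transform)
for _ in range(3):                            # three requests...
    result = cache.request("train-set", [3, 1, 2])
# ...but the transformation ran only once.
```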
-
Publication number: 20220253443
Abstract: Improving machine learning models in an artificial intelligence infrastructure includes: storing, within one or more storage systems of an artificial intelligence infrastructure, information describing a dataset and one or more transformations applied to the dataset resulting in a transformed dataset; and storing, within the one or more storage systems, information describing only portions of previous versions of a machine learning model that differ from a current version of the machine learning model, wherein the previous versions used the transformed dataset as input during one or more prior executions by the artificial intelligence infrastructure.
Type: Application
Filed: April 26, 2022
Publication date: August 11, 2022
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
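The versioning idea above, storing only the portions of a previous model version that differ from the current one, can be sketched with a simple dict-based diff. Treating a model as a flat dict of named weights is an assumption made purely for illustration; the sketch also assumes versions share the same set of parameter names.

```python
# Sketch: rather than storing every previous model checkpoint in full,
# keep only the parameters that differ from the current version, and
# rebuild a previous version on demand from current + diff.

def diff_from_current(previous: dict, current: dict) -> dict:
    """Keep only entries of `previous` that differ from `current`."""
    return {k: v for k, v in previous.items() if current.get(k) != v}

def reconstruct(current: dict, diff: dict) -> dict:
    """Rebuild a previous version from the current one plus its diff
    (assumes no parameters were added or removed between versions)."""
    return {**current, **diff}

current_model = {"w1": 0.5, "w2": -1.2, "bias": 0.1}
previous_model = {"w1": 0.5, "w2": -0.9, "bias": 0.1}

stored_diff = diff_from_current(previous_model, current_model)
# Only the changed weight ("w2") needs to be stored for the old version.
```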
-
Patent number: 11403290
Abstract: Ensuring reproducibility in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, by a unified management plane, one or more transformations applied to a dataset by the artificial intelligence infrastructure, wherein applying the one or more transformations to the dataset causes the artificial intelligence infrastructure to generate a transformed dataset; storing, within the one or more storage systems, information describing the dataset, the one or more transformations applied to the dataset, and the transformed dataset; identifying, by the unified management plane, one or more machine learning models executed by the artificial intelligence infrastructure using the transformed dataset as input; and storing, within the one or more storage systems, information describing one or more machine learning models executed using the transformed dataset as input.
Type: Grant
Filed: July 18, 2019
Date of Patent: August 2, 2022
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
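The reproducibility record described above ties a dataset, the transformations applied to it, and the models that consumed the result into one stored description. A minimal sketch, with the class name, fingerprinting scheme, and record fields all chosen as assumptions for illustration:

```python
# Sketch of a lineage log: a stand-in for the "unified management plane"
# records the dataset (by content fingerprint), the transformations
# applied, and the model run that used the transformed result.
import hashlib
import json

class LineageLog:
    def __init__(self):
        self.records = []

    @staticmethod
    def fingerprint(obj) -> str:
        """Content hash so the exact dataset can be identified later."""
        blob = json.dumps(obj, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    def record_run(self, dataset, transformations, model_name):
        self.records.append({
            "dataset": self.fingerprint(dataset),
            "transformations": list(transformations),
            "model": model_name,
        })

log = LineageLog()
log.record_run([1, 2, 3], ["normalize", "shuffle"], "resnet-demo")
# The stored record ties the exact dataset to the transforms and model.
```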
-
Publication number: 20220237037
Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
Type: Application
Filed: April 14, 2022
Publication date: July 28, 2022
Inventors: Ivan Jibaja, Prashant Jaikumar, Stefan Dorsett, Curtis Pullen, Roy Kim
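The four pipeline steps in this abstract (receive, store, allocate, execute-with-ingest) map directly onto a small sketch. The storage class, worker naming, and toy analytics function are illustrative assumptions, not the patented system.

```python
# Minimal sketch of the pipeline: receive a dataset from a producer,
# store it in shared storage, allocate processing resources, and run the
# analytics application by ingesting the dataset from storage.

class SharedStorage:
    def __init__(self):
        self._data = {}

    def store(self, name, dataset):
        self._data[name] = dataset

    def ingest(self, name):
        return self._data[name]

def run_pipeline(storage, name, dataset, analytics, workers=2):
    storage.store(name, dataset)                     # 1) receive + store
    pool = [f"worker-{i}" for i in range(workers)]   # 2) allocate resources
    ingested = storage.ingest(name)                  # 3) ingest from storage
    return analytics(ingested), pool                 # 4) execute analytics

result, pool = run_pipeline(SharedStorage(), "events",
                            [4, 8, 15, 16], lambda d: sum(d))
```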
-
Patent number: 11307894
Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
Type: Grant
Filed: October 22, 2019
Date of Patent: April 19, 2022
Assignee: Pure Storage, Inc.
Inventors: Ivan Jibaja, Stefan Dorsett, Prashant Jaikumar, Roy Kim, Curtis Pullen
-
Publication number: 20220091893
Abstract: Executing a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: receiving, by a graphical processing unit (‘GPU’) server, a dataset transformed by a storage system that is external to the GPU server; and executing, by the GPU server, one or more machine learning algorithms using the transformed dataset as input.
Type: Application
Filed: November 30, 2021
Publication date: March 24, 2022
Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 11210140
Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
Type: Grant
Filed: May 29, 2020
Date of Patent: December 28, 2021
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
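The distinguishing point of this abstract is where the work happens: the storage system itself, not the GPU server, selects the transformation based on the model's expected input format and produces the transformed dataset. A sketch of that division of labor, with format names, transforms, and the model-spec shape all invented for illustration:

```python
# Sketch of transformation offloading: the storage system inspects which
# input format a model expects, picks the matching transformation, and
# applies it on the storage side before anything reaches a GPU server.

TRANSFORMS = {
    "float-normalized": lambda xs: [x / 255 for x in xs],
    "int-passthrough": lambda xs: list(xs),
}

class OffloadingStorage:
    def __init__(self):
        self.datasets = {}

    def store(self, name, dataset):
        self.datasets[name] = dataset

    def transform_for(self, name, model_spec):
        """Select and apply the transform on the storage side."""
        transform = TRANSFORMS[model_spec["expects"]]
        return transform(self.datasets[name])

storage = OffloadingStorage()
storage.store("pixels", [0, 51, 255])
out = storage.transform_for("pixels", {"expects": "float-normalized"})
```

A GPU server in this sketch would only ever see `out`, never the raw dataset or the transformation logic.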
-
Publication number: 20200293378
Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
Type: Application
Filed: May 29, 2020
Publication date: September 17, 2020
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 10671434
Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
Type: Grant
Filed: July 20, 2018
Date of Patent: June 2, 2020
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 10671435
Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
Type: Grant
Filed: July 20, 2018
Date of Patent: June 2, 2020
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 10649988
Abstract: An artificial intelligence and machine learning infrastructure system, including: one or more storage systems comprising, respectively, one or more storage devices; and one or more graphical processing units, wherein the graphical processing units are configured to communicate with the one or more storage systems over a communication fabric; where the one or more storage systems, the one or more graphical processing units, and the communication fabric are implemented within a single chassis.
Type: Grant
Filed: July 27, 2018
Date of Patent: May 12, 2020
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Publication number: 20200125941
Abstract: An artificial intelligence and machine learning infrastructure system, including: one or more storage systems comprising, respectively, one or more storage devices; and one or more graphical processing units, wherein the graphical processing units are configured to communicate with the one or more storage systems over a communication fabric; where the one or more storage systems, the one or more graphical processing units, and the communication fabric are implemented within a single chassis.
Type: Application
Filed: July 27, 2018
Publication date: April 23, 2020
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
-
Patent number: 10554572
Abstract: Approaches, techniques, and mechanisms are disclosed for improving the efficiency with which data units are handled within a device, such as a networking device. Received data units, or portions thereof, are temporarily stored within one or more memories of a merging component, while the merging component waits to receive control information for the data units. Once received, the merging component merges the control information with the associated data units. The merging component dispatches the merged data units, or portions thereof, to an interconnect component, which forwards the merged data units to destinations indicated by the control information. The device is configured to intelligently schedule the dispatching of merged data units to the interconnect component. To this end, the device includes a scheduler configured to select which merged data units to dispatch at which times based on a variety of factors described herein.
Type: Grant
Filed: February 15, 2017
Date of Patent: February 4, 2020
Assignee: Innovium, Inc.
Inventors: William Brad Matthews, Paul Roy Kim, Puneet Agarwal
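The merge-then-dispatch flow in this abstract can be sketched as follows: payloads are buffered until their control information arrives, then merged and queued; a scheduler picks the next merged unit to forward. The field names and the simple FIFO dispatch policy are assumptions for illustration; the patent describes a scheduler based on several factors, not plain FIFO.

```python
# Hedged sketch of the merging component: buffer each data unit's payload
# until its control information arrives, merge the two, then let a
# (here, trivially FIFO) scheduler dispatch merged units in order.
from collections import deque

class MergingComponent:
    def __init__(self):
        self.pending = {}        # unit id -> payload awaiting control info
        self.ready = deque()     # merged units awaiting dispatch

    def receive_payload(self, uid, payload):
        self.pending[uid] = payload

    def receive_control(self, uid, destination):
        payload = self.pending.pop(uid)        # merge control + payload
        self.ready.append({"id": uid, "dest": destination, "data": payload})

    def dispatch(self):
        """Scheduler stand-in: forward the oldest ready unit to the
        interconnect, which would route it to its destination."""
        return self.ready.popleft()

merger = MergingComponent()
merger.receive_payload(7, b"\x01\x02")
merger.receive_control(7, destination="port-3")
unit = merger.dispatch()
```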
-
Patent number: 10452444
Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
Type: Grant
Filed: January 30, 2018
Date of Patent: October 22, 2019
Assignee: Pure Storage, Inc.
Inventors: Ivan Jibaja, Stefan Dorsett, Prashant Jaikumar, Roy Kim, Curtis Pullen
-
Patent number: 10360214
Abstract: Ensuring reproducibility in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, by a unified management plane, one or more transformations applied to a dataset by the artificial intelligence infrastructure, wherein applying the one or more transformations to the dataset causes the artificial intelligence infrastructure to generate a transformed dataset; storing, within the one or more storage systems, information describing the dataset, the one or more transformations applied to the dataset, and the transformed dataset; identifying, by the unified management plane, one or more machine learning models executed by the artificial intelligence infrastructure using the transformed dataset as input; and storing, within the one or more storage systems, information describing one or more machine learning models executed using the transformed dataset as input.
Type: Grant
Filed: July 26, 2018
Date of Patent: July 23, 2019
Assignee: Pure Storage, Inc.
Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim