Patents by Inventor Roy Kim

Roy Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11950158
    Abstract: The present invention relates to a method and device for distributing idle UEs across carriers in an eNB of a multi-carrier-based mobile communication system. The method of distributing idle UEs in a multi-carrier-based mobile communication system according to the present invention includes a step of determining a search rate for each carrier on the basis of information representing the load on that carrier, a step of determining a cell reselection priority for the idle UE on the basis of the determined search rate, and a step of transmitting the determined cell reselection priority to the idle UE.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: April 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jeong-Jae Won, Dae-Joong Kim, Han-Seok Kim, Abhishek Roy, Hwa-Jin Cha, Jung-Min Choi
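    The abstract above sketches a load-aware way to spread idle UEs across carriers: derive a per-carrier search rate from load, turn the rates into cell reselection priorities, and push those priorities to idle UEs. Below is a minimal Python sketch of that flow; the headroom-based rate formula, the 0-7 priority scale, and all names are illustrative assumptions, not details taken from the patent.

        # Minimal sketch of load-based idle-UE distribution. The load model, the
        # search-rate formula, and the priority scale are illustrative assumptions.
        import random

        def search_rates(carrier_loads):
            """Give lightly loaded carriers a proportionally higher search rate."""
            headroom = {c: max(0.0, 1.0 - load) for c, load in carrier_loads.items()}
            total = sum(headroom.values()) or 1.0
            return {c: h / total for c, h in headroom.items()}

        def reselection_priorities(rates, max_priority=7):
            """Map search rates to 3GPP-style cell reselection priorities (0..7)."""
            ranked = sorted(rates, key=rates.get)              # lowest rate first
            step = max_priority / max(1, len(ranked) - 1)
            return {c: round(i * step) for i, c in enumerate(ranked)}

        def assign_idle_ue(rates):
            """Pick the carrier an idle UE should camp on, weighted by search rate."""
            carriers = list(rates)
            return random.choices(carriers, weights=[rates[c] for c in carriers])[0]

        if __name__ == "__main__":
            loads = {"carrier_a": 0.9, "carrier_b": 0.4, "carrier_c": 0.1}
            rates = search_rates(loads)
            print(reselection_priorities(rates))
            print(assign_idle_ue(rates))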
  • Publication number: 20240028266
    Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
    Type: Application
    Filed: September 12, 2023
    Publication date: January 25, 2024
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
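    The application above centres on deriving transformations from the input format a machine learning model expects, materializing the transformed dataset inside the storage layer, and then serving it without reapplying the transformations. The sketch below illustrates that idea only; the TRANSFORMS registry, the expected_format keys, and the in-memory dictionary are hypothetical stand-ins rather than the claimed implementation.

        # Hedged sketch: the storage layer picks a transformation from the input
        # format a model expects, materializes the result once, and hands it out
        # as-is afterwards. Registry keys and class names are illustrative.
        import numpy as np

        TRANSFORMS = {
            "float32_normalized": lambda x: x.astype("float32") / 255.0,
            "float32_raw": lambda x: x.astype("float32"),
        }

        class StorageSystem:
            def __init__(self):
                self._transformed = {}                 # (dataset_id, format) -> array

            def store_transformed(self, dataset_id, data, expected_format):
                fn = TRANSFORMS[expected_format]       # transformation implied by the format
                self._transformed[(dataset_id, expected_format)] = fn(data)

            def fetch(self, dataset_id, expected_format):
                # Served as stored; the transformation is not reapplied here.
                return self._transformed[(dataset_id, expected_format)]

        if __name__ == "__main__":
            raw = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)
            storage = StorageSystem()
            storage.store_transformed("mnist-sample", raw, "float32_normalized")
            batch = storage.fetch("mnist-sample", "float32_normalized")
            print(batch.dtype, float(batch.max()) <= 1.0)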
  • Patent number: 11803338
    Abstract: Executing a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: receiving, by a graphical processing unit (‘GPU’) server, a dataset transformed by a storage system that is external to the GPU server; and executing, by the GPU server, one or more machine learning algorithms using the transformed dataset as input.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: October 31, 2023
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
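    This grant covers the consumer side of the same split: the GPU server receives a dataset already transformed by an external storage system and only runs the learning step. The short sketch below mirrors that division of labour, using numpy as a stand-in for a GPU framework; the function names and the least-squares "model" are assumptions made for illustration.

        # Sketch of the GPU-server side: the dataset arrives already transformed,
        # so this process only executes the ML algorithm. numpy stands in for a
        # GPU framework; nothing here is the patented implementation.
        import numpy as np

        def receive_transformed_dataset():
            """Stand-in for pulling an already-transformed dataset from storage."""
            rng = np.random.default_rng(0)
            X = rng.normal(size=(256, 8)).astype("float32")   # already in the expected format
            y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=256)
            return X, y

        def run_training(X, y):
            """Execute the ML algorithm; no data transformation happens here."""
            weights, *_ = np.linalg.lstsq(X, y, rcond=None)
            return weights

        if __name__ == "__main__":
            X, y = receive_transformed_dataset()
            print(run_training(X, y).shape)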
  • Patent number: 11768636
    Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: September 26, 2023
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Publication number: 20230126789
    Abstract: Generating a transformed dataset for use by a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within one or more storage systems, a transformed dataset generated by applying one or more transformations to a dataset that are identified based on one or more expected input formats of data received as input data by one or more machine learning models to be executed on one or more servers; and transmitting, from the one or more storage systems to the one or more servers without reapplying the one or more transformations on the dataset, the transformed dataset including data in the one or more expected formats of data to be received as input data by the one or more machine learning models.
    Type: Application
    Filed: December 27, 2022
    Publication date: April 27, 2023
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Patent number: 11556280
    Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: January 17, 2023
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
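    The caching behaviour in this grant, transform once and then answer repeated requests from the stored copy, can be pictured with the small sketch below. The in-memory dictionary, the transform_runs counter, and the lowercasing transform are simplifications chosen for illustration, not details from the patent.

        # Transform-once caching sketch: the first request materializes the
        # transformed dataset; later requests are served without re-transforming.
        class TransformationCache:
            def __init__(self, transform):
                self.transform = transform
                self.cache = {}
                self.transform_runs = 0

            def get(self, dataset_id, raw_data):
                if dataset_id not in self.cache:         # first request: transform and store
                    self.cache[dataset_id] = self.transform(raw_data)
                    self.transform_runs += 1
                return self.cache[dataset_id]            # later requests: cached copy only

        if __name__ == "__main__":
            cache = TransformationCache(lambda rows: [r.lower() for r in rows])
            data = ["A", "B", "C"]
            for _ in range(3):                           # three GPU-server requests
                cache.get("corpus-v1", data)
            print(cache.transform_runs)                  # 1: the transform ran only once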
  • Publication number: 20220253443
    Abstract: Improving machine learning models in an artificial intelligence infrastructure includes: storing, within one or more storage systems of an artificial intelligence infrastructure, information describing a dataset and one or more transformations applied to the dataset resulting in a transformed dataset; and storing, within the one or more storage systems, information describing only portions of previous versions of a machine learning model that differ from a current version of the machine learning model, wherein the previous versions used the transformed dataset as input during one or more prior executions by the artificial intelligence infrastructure.
    Type: Application
    Filed: April 26, 2022
    Publication date: August 11, 2022
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
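    This application describes keeping only the portions of earlier model versions that differ from the current one. The sketch below is one plausible reading of that delta storage, assuming models are flat parameter dictionaries; the parameter names and the exact-equality diff are assumptions, not the claimed mechanism.

        # Delta storage sketch: the current model is kept in full, each previous
        # version only as the parameters that differ from the current one.
        class ModelVersionStore:
            def __init__(self, current_params):
                self.current = dict(current_params)
                self.deltas = {}                                  # version -> differing params

            def archive_previous(self, version, previous_params):
                self.deltas[version] = {
                    k: v for k, v in previous_params.items() if self.current.get(k) != v
                }

            def reconstruct(self, version):
                return {**self.current, **self.deltas[version]}

        if __name__ == "__main__":
            v2 = {"conv1.w": 0.31, "conv2.w": 0.12, "fc.w": 0.88}    # current version
            v1 = {"conv1.w": 0.31, "conv2.w": 0.10, "fc.w": 0.80}    # older version
            store = ModelVersionStore(v2)
            store.archive_previous("v1", v1)
            print(store.deltas["v1"])                                # only the differing portions
            print(store.reconstruct("v1") == v1)                     # True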
  • Patent number: 11403290
    Abstract: Ensuring reproducibility in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, by a unified management plane, one or more transformations applied to a dataset by the artificial intelligence infrastructure, wherein applying the one or more transformations to the dataset causes the artificial intelligence infrastructure to generate a transformed dataset; storing, within the one or more storage systems, information describing the dataset, the one or more transformations applied to the dataset, and the transformed dataset; identifying, by the unified management plane, one or more machine learning models executed by the artificial intelligence infrastructure using the transformed dataset as input; and storing, within the one or more storage systems, information describing one or more machine learning models executed using the transformed dataset as input.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: August 2, 2022
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
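    For the reproducibility claims above, the key artifact is a lineage record tying together the dataset, the transformations applied to it, the transformed result, and the models that consumed it. Below is a hedged sketch of such a record; the LineageLog class, the hash-based fingerprints, and the field names are illustrative and are not the patent's unified management plane.

        # Lineage-record sketch: fingerprint the raw and transformed datasets and
        # note which transformations were applied and which models consumed them.
        import hashlib
        import json

        def fingerprint(payload: bytes) -> str:
            return hashlib.sha256(payload).hexdigest()[:16]

        class LineageLog:
            def __init__(self):
                self.records = []

            def record(self, dataset, transformations, transformed, model_ids):
                self.records.append({
                    "dataset": fingerprint(dataset),
                    "transformations": transformations,
                    "transformed_dataset": fingerprint(transformed),
                    "models_executed": model_ids,
                })

        if __name__ == "__main__":
            log = LineageLog()
            log.record(b"raw rows", ["lowercase", "normalize"], b"normalized rows",
                       ["resnet50-run-17"])
            print(json.dumps(log.records, indent=2))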
  • Publication number: 20220237037
    Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
    Type: Application
    Filed: April 14, 2022
    Publication date: July 28, 2022
    Inventors: Ivan Jibaja, Prashant Jaikumar, Stefan Dorsett, Curtis Pullen, Roy Kim
  • Patent number: 11307894
    Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 19, 2022
    Assignee: Pure Storage, Inc.
    Inventors: Ivan Jibaja, Stefan Dorsett, Prashant Jaikumar, Roy Kim, Curtis Pullen
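    The two entries above (and patent 10452444 further down) share the same pipeline shape: receive a dataset from a producer, persist it in shared storage, allocate processing resources, and execute the analytics application against the stored copy. The sketch below walks those steps with a thread pool standing in for the allocated compute; every name and the toy "analytics" computation are assumptions made for illustration.

        # Pipeline sketch: receive, store, allocate compute, then run the analytics
        # application, which ingests the dataset back out of shared storage.
        from concurrent.futures import ThreadPoolExecutor

        class SharedStorage:
            def __init__(self):
                self._objects = {}

            def put(self, key, rows):
                self._objects[key] = list(rows)

            def get(self, key):
                return self._objects[key]

        def analytics_app(storage, key):
            rows = storage.get(key)                          # ingest the dataset from storage
            return sum(rows) / len(rows)

        if __name__ == "__main__":
            dataset = range(1, 101)                          # received from a data producer
            storage = SharedStorage()
            storage.put("events-2024-01", dataset)           # stored within the storage system
            with ThreadPoolExecutor(max_workers=2) as pool:  # allocated processing resources
                result = pool.submit(analytics_app, storage, "events-2024-01").result()
            print(result)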
  • Publication number: 20220091893
    Abstract: Executing a machine learning model in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: receiving, by a graphical processing unit (‘GPU’) server, a dataset transformed by a storage system that is external to the GPU server; and executing, by the GPU server, one or more machine learning algorithms using the transformed dataset as input.
    Type: Application
    Filed: November 30, 2021
    Publication date: March 24, 2022
    Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Patent number: 11210140
    Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 28, 2021
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Potyraj, Ivan Jibaja, Igor Ostrovsky, Roy Kim
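    Offloading here means the storage system itself, rather than the GPU server, generates the transformed dataset, choosing transformations based on the model that will run. The sketch below assumes a hypothetical mapping from model name to transformation list; that mapping and the toy string "dataset" are purely illustrative.

        # Offloading sketch: the storage system applies the transformations implied
        # by the model to be executed and returns the transformed dataset.
        MODEL_TRANSFORMS = {
            "image-classifier": [str.strip, str.lower],
            "tokenizer-lm": [str.strip],
        }

        class OffloadingStorage:
            def __init__(self, records):
                self.records = records                        # the stored dataset

            def transformed_for(self, model_name):
                rows = self.records
                for fn in MODEL_TRANSFORMS[model_name]:       # transforms chosen per model
                    rows = [fn(r) for r in rows]
                return rows

        if __name__ == "__main__":
            storage = OffloadingStorage(["  Cat ", "  DOG "])
            print(storage.transformed_for("image-classifier"))   # ['cat', 'dog']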
  • Publication number: 20200293378
    Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
    Type: Application
    Filed: May 29, 2020
    Publication date: September 17, 2020
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Patent number: 10671434
    Abstract: Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: June 2, 2020
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Patent number: 10671435
    Abstract: Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset; generating, in dependence upon the one or more transformations, a transformed dataset; storing, within one or more of the storage systems, the transformed dataset; receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: June 2, 2020
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Patent number: 10649988
    Abstract: An artificial intelligence and machine learning infrastructure system, including: one or more storage systems comprising, respectively, one or more storage devices; and one or more graphical processing units, wherein the graphical processing units are configured to communicate with the one or more storage systems over a communication fabric; where the one or more storage systems, the one or more graphical processing units, and the communication fabric are implemented within a single chassis.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: May 12, 2020
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Publication number: 20200125941
    Abstract: An artificial intelligence and machine learning infrastructure system, including: one or more storage systems comprising, respectively, one or more storage devices; and one or more graphical processing units, wherein the graphical processing units are configured to communicate with the one or more storage systems over a communication fabric; where the one or more storage systems, the one or more graphical processing units, and the communication fabric are implemented within a single chassis.
    Type: Application
    Filed: July 27, 2018
    Publication date: April 23, 2020
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim
  • Patent number: 10554572
    Abstract: Approaches, techniques, and mechanisms are disclosed for improving the efficiency with which data units are handled within a device, such as a networking device. Received data units, or portions thereof, are temporarily stored within one or more memories of a merging component, while the merging component waits to receive control information for the data units. Once received, the merging component merges the control information with the associated data units. The merging component dispatches the merged data units, or portions thereof, to an interconnect component, which forwards the merged data units to destinations indicated by the control information. The device is configured to intelligently schedule the dispatching of merged data units to the interconnect component. To this end, the device includes a scheduler configured to select which merged data units to dispatch at which times based on a variety of factors described herein.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: February 4, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Paul Roy Kim, Puneet Agarwal
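    This Innovium grant is the one networking entry in the list: payload data units are buffered in a merging component until their control information arrives, the two are merged, and merged units are dispatched to an interconnect under a scheduler. The sketch below uses a plain FIFO as the scheduling policy purely for illustration; the patent describes richer, multi-factor scheduling, and all field names here are assumptions.

        # Merge-then-dispatch sketch: buffer data units until control info arrives,
        # merge the two, and let a (here FIFO) scheduler pick the next dispatch.
        from collections import deque

        class MergingComponent:
            def __init__(self):
                self.waiting = {}                     # unit_id -> buffered payload
                self.ready = deque()                  # merged units awaiting dispatch

            def receive_data_unit(self, unit_id, payload):
                self.waiting[unit_id] = payload       # held until control info shows up

            def receive_control_info(self, unit_id, destination):
                payload = self.waiting.pop(unit_id)
                self.ready.append({"id": unit_id, "dest": destination, "payload": payload})

            def dispatch_next(self):
                return self.ready.popleft() if self.ready else None

        if __name__ == "__main__":
            m = MergingComponent()
            m.receive_data_unit(1, b"\x01\x02")
            m.receive_control_info(1, destination="port-7")
            print(m.dispatch_next())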
  • Patent number: 10452444
    Abstract: Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: October 22, 2019
    Assignee: Pure Storage, Inc.
    Inventors: Ivan Jibaja, Stefan Dorsett, Prashant Jaikumar, Roy Kim, Curtis Pullen
  • Patent number: 10360214
    Abstract: Ensuring reproducibility in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: identifying, by a unified management plane, one or more transformations applied to a dataset by the artificial intelligence infrastructure, wherein applying the one or more transformations to the dataset causes the artificial intelligence infrastructure to generate a transformed dataset; storing, within the one or more storage systems, information describing the dataset, the one or more transformations applied to the dataset, and the transformed dataset; identifying, by the unified management plane, one or more machine learning models executed by the artificial intelligence infrastructure using the transformed dataset as input; and storing, within the one or more storage systems, information describing one or more machine learning models executed using the transformed dataset as input.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: July 23, 2019
    Assignee: Pure Storage, Inc.
    Inventors: Brian Gold, Emily Watkins, Ivan Jibaja, Igor Ostrovsky, Roy Kim