Patents by Inventor Andrew Chang

Andrew Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12136138
Abstract: A system and method for training a neural network. In some embodiments, the system includes: a graphics processing unit cluster; and a computational storage cluster connected to the graphics processing unit cluster by a cache-coherent system interconnect. The graphics processing unit cluster may include one or more graphics processing units. The computational storage cluster may include one or more computational storage devices. A first computational storage device of the one or more computational storage devices may be configured to (i) store an embedding table, (ii) receive an index vector including a first index and a second index, and (iii) calculate an embedded vector based on: a first row of the embedding table, corresponding to the first index, and a second row of the embedding table, corresponding to the second index.
    Type: Grant
    Filed: February 11, 2022
    Date of Patent: November 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Shiyu Li, Krishna T. Malladi, Andrew Chang, Yang Seok Ki
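The embedding lookup described in the abstract above can be modeled in a few lines. This is a hypothetical Python sketch of the general technique (the function names and the element-wise sum reduction are assumptions), not the patented computational-storage implementation:

```python
# Sketch of the embedding lookup described above: gather the rows of an
# embedding table named by an index vector and combine them (here by
# element-wise sum) into a single embedded vector. Names are illustrative.

def embed(table, index_vector):
    """Return one embedded vector from the rows selected by index_vector."""
    rows = [table[i] for i in index_vector]   # e.g. the first and second rows
    return [sum(col) for col in zip(*rows)]   # element-wise combination

# Example: a 4-row, 3-column embedding table and an index vector (1, 3).
table = [
    [0.0, 0.1, 0.2],
    [1.0, 1.1, 1.2],
    [2.0, 2.1, 2.2],
    [3.0, 3.1, 3.2],
]
print(embed(table, [1, 3]))  # rows 1 and 3 combined
```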
  • Patent number: 12135722
    Abstract: A method includes receiving, at a hardware circuit of a device, a target value corresponding to a target data. The method further includes outputting, from the hardware circuit, a first indicator that source data corresponds to the target value. The method further includes, based on the first indicator, outputting, from software executing at the device, a result indicator that the source data corresponds to the target data.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: November 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Praveen Krishnamoorthy, Changho Choi, Andrew Chang
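The two-stage match in the abstract above can be sketched as a cheap first-pass check followed by a full software comparison. The hash-based stand-in for the hardware circuit below is purely illustrative, not the patented design:

```python
# Sketch of the two-stage match described above: a cheap "hardware" check
# emits a first indicator that source data may correspond to the target
# value, and a "software" stage confirms with a full comparison.

def hardware_indicator(source, target_value):
    """Cheap check (stand-in for the hardware circuit): compare hash values."""
    return hash(source) % 256 == target_value

def software_result(source, target_data):
    """Full comparison, performed only when the first indicator fires."""
    return source == target_data

def match(source, target_data):
    target_value = hash(target_data) % 256       # value derived from target data
    if not hardware_indicator(source, target_value):
        return False                             # filtered out early
    return software_result(source, target_data)  # confirmed (or false positive)

print(match("alpha", "alpha"))  # True: indicator fires and software confirms
print(match("alpha", "beta"))   # False: either stage rejects
```

The point of the split is that most non-matching data is rejected by the cheap check, so the expensive comparison runs only on candidates.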
  • Patent number: 12001427
    Abstract: A method of processing data may include receiving a stream of first keys associated with first data, receiving a stream of second keys associated with second data, comparing, in parallel, a batch of the first keys and a batch of the second keys, collecting one or more results from the comparing, and gathering one or more results from the collecting. The collecting may include reducing an index matrix and a mask matrix. Gathering one or more results may include storing, in a leftover vector, at least a portion of the one or more results from the collecting. Gathering one or more results further may include combining at least a portion of the leftover vector from a first cycle with at least a portion of the one or more results from the collecting from a second cycle.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: June 4, 2024
Assignee: Samsung Electronics Co., Ltd.
    Inventors: Shiyu Li, Yiqun Zhang, Joo Hwan Lee, Yang Seok Ki, Andrew Chang
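The batched comparison with a leftover vector described above can be modeled as follows. This is a simplified Python sketch (the batch width and the all-pairs comparison are assumptions), not the patented hardware data path:

```python
# Sketch of the batched key comparison described above: each "cycle"
# compares one batch of first keys against one batch of second keys,
# collects the matches, and carries results that do not fill a complete
# output batch in a leftover vector until the next cycle.

BATCH = 4  # output batch width (illustrative)

def compare_batches(first_batch, second_batch):
    """All-pairs comparison of two key batches; returns the matching keys."""
    second = set(second_batch)
    return [k for k in first_batch if k in second]

def gather(stream_a, stream_b, batch=BATCH):
    leftover, out = [], []
    for a, b in zip(stream_a, stream_b):      # one (a, b) batch pair per cycle
        leftover += compare_batches(a, b)     # collect this cycle's matches
        while len(leftover) >= batch:         # emit only full batches
            out.append(leftover[:batch])
            leftover = leftover[batch:]
    if leftover:                              # flush the final partial batch
        out.append(leftover)
    return out

a_batches = [[1, 2, 3, 5], [7, 8, 9, 11]]
b_batches = [[2, 3, 4, 5], [7, 9, 10, 12]]
print(gather(a_batches, b_batches))
```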
  • Patent number: 11994938
    Abstract: Systems and methods for error detection for an address channel are disclosed. The method includes generating a token, applying the token to a request at a source, and generating a first result. The request with the first result is transmitted to a destination over the address channel. A determination is made, at the destination, whether an error associated with the request has occurred. The determining whether the error has occurred includes: receiving a received request corresponding to the request over the address channel; receiving the first result with the received request; applying the token to the received request and generating a second result; comparing the first result with the second result; and transmitting a signal in response to the comparing.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: May 28, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yukui Luo, Andrew Chang
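The source-side/destination-side check in the abstract above can be sketched with a simple checksum. The XOR-fold "token" function below is an illustrative stand-in, not the patented check:

```python
# Sketch of the address-channel error detection described above: the source
# applies a token to the request to produce a first result and sends both;
# the destination recomputes a second result from the received request and
# compares it with the first.

TOKEN = 0xA5  # shared token (illustrative)

def apply_token(request_addr, token):
    """Fold the address bytes with the token into a one-byte result."""
    result = token
    for byte in request_addr.to_bytes(8, "little"):
        result ^= byte
    return result

def send(addr):
    return addr, apply_token(addr, TOKEN)       # request + first result

def receive(received_addr, first_result):
    second_result = apply_token(received_addr, TOKEN)
    return second_result == first_result        # signal: no error detected

addr, first = send(0x1000_2000)
print(receive(addr, first))        # True: the two results agree
print(receive(addr ^ 0x4, first))  # False: a flipped address bit is caught
```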
  • Patent number: 11995727
    Abstract: A claim adjudication system including an automatic adjudication pipeline that uses pipeline rules to automatically adjudicate a claim associated with a benefit plan at one or more stages. The automatic adjudication pipeline can redirect the claim to a user interface for manual review when a pipeline rule associated with a manual review condition is triggered. The user interface can present information about the claim relevant to the manual review condition in an integrated format, including highlighting the reasons for the manual review. A user can evaluate the claim in the user interface and provide user input that addresses the manual review condition, and the claim can be routed back to the automatic adjudication pipeline. The user input can also be used as training data for machine learning to adjust pipeline rules that are used to automatically process claims and to redirect future claims for manual review.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: May 28, 2024
    Assignee: CollectiveHealth, Inc.
    Inventors: Nicholas Halpern-Manners, Thomas Bedington, Andrew Chang, Yulia Eskin, Erica Leigh Horowitz, Chetan Subramanya Ithal, Asif Khalak, Sergio Martinez-Ortuno, Raphael N'Gouan, John George O'Leary, Izac Benjamin Milstein Ross, Xiaowen Ye, Heather Grates, Irene Victoria Tollinger, Henning Chiv
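The pipeline-rule flow described above can be sketched as a list of rules that each either pass a claim through or trigger a manual-review condition. The rules and claim fields below are hypothetical:

```python
# Sketch of the adjudication pipeline described above: each pipeline rule
# either passes a claim or triggers a manual-review condition, in which
# case the claim is redirected with the triggering reasons recorded so the
# user interface can highlight them.

def high_amount(claim):
    return "amount exceeds auto-approval limit" if claim["amount"] > 10_000 else None

def out_of_network(claim):
    return "provider is out of network" if not claim["in_network"] else None

PIPELINE_RULES = [high_amount, out_of_network]

def adjudicate(claim):
    reasons = [r for rule in PIPELINE_RULES if (r := rule(claim))]
    if reasons:
        return {"status": "manual_review", "reasons": reasons}
    return {"status": "auto_approved"}

print(adjudicate({"amount": 250, "in_network": True}))
print(adjudicate({"amount": 50_000, "in_network": False}))
```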
  • Publication number: 20240095219
Abstract: Techniques for discovering the semantic meaning of data in fields included in one or more data sets, the method including: identifying a first field having a previously-assigned label that indicates a semantic meaning of the first field; identifying a set of one or more candidate labels for potential assignment to the first field instead of the previously-assigned label; and evaluating, using a previously-determined label score and a first candidate label score, whether to assign a first candidate label to the first field, the evaluating comprising: when the first candidate label score is at least a first threshold amount greater than the previously-determined label score, presenting the first candidate label to a user by generating an interface through which the user can provide input indicating whether to assign the first candidate label to the first field instead of the previously-assigned label.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 21, 2024
    Inventors: John Joyce, David Huang, Andrew Chang, Niel Morrison
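The threshold test at the heart of the evaluation step above is easy to illustrate. The scores and threshold below are invented for the example; this is not the published method's scoring function:

```python
# Sketch of the label-evaluation step described above: a candidate label is
# surfaced to the user only when its score beats the previously-assigned
# label's score by at least a threshold margin.

THRESHOLD = 0.1  # the "first threshold amount" (illustrative)

def should_present(candidate_score, previous_score, threshold=THRESHOLD):
    """Present the candidate label only on a sufficiently large improvement."""
    return candidate_score >= previous_score + threshold

print(should_present(0.85, 0.70))  # True: margin 0.15 >= 0.1
print(should_present(0.75, 0.70))  # False: margin 0.05 < 0.1
```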
  • Patent number: 11927634
    Abstract: A method and a memory device are provided. Data is obtained for a scan operation at an input buffer of a scan kernel in the memory device. The input buffer is adaptable to a first mode and a second mode of the scan kernel. Preprocessing of the data from the input buffer is performed to generate preprocessed data. A different type of preprocessing is performed for the first mode and the second mode. The preprocessed data is filtered to generate a filtered result. The filtered result is provided from the scan kernel to a controller of the memory device.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: March 12, 2024
Assignee: Samsung Electronics Co., Ltd.
    Inventors: Andrew Chang, Jingchi Yang, Vinit Apte, Brian Luu
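The scan-kernel data path above (mode-dependent preprocessing, then filtering) can be sketched as follows. The two modes and the delta-decoding step are assumptions for illustration, not the patented modes:

```python
# Sketch of the scan-kernel data path described above: data from an input
# buffer is preprocessed differently depending on the kernel mode, then
# filtered, and the filtered result is returned to the controller.

def preprocess(buffer, mode):
    if mode == "raw":            # first mode: pass the values through
        return list(buffer)
    if mode == "decoded":        # second mode: e.g. delta-decode the buffer
        out, acc = [], 0
        for delta in buffer:
            acc += delta
            out.append(acc)
        return out
    raise ValueError(f"unknown mode: {mode}")

def scan(buffer, mode, predicate):
    """Filter the preprocessed data; the result goes back to the controller."""
    return [v for v in preprocess(buffer, mode) if predicate(v)]

deltas = [5, 3, 2, 10]
print(scan(deltas, "raw", lambda v: v > 4))      # raw values above 4
print(scan(deltas, "decoded", lambda v: v > 9))  # running sums above 9
```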
  • Publication number: 20240045823
    Abstract: A system and method for managing memory resources. In some embodiments the system includes a first server, a second server, and a server-linking switch connected to the first server and to the second server. The first server may include a stored-program processing circuit, a cache-coherent switch, and a first memory module. In some embodiments, the first memory module is connected to the cache-coherent switch, the cache-coherent switch is connected to the server-linking switch, and the stored-program processing circuit is connected to the cache-coherent switch.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Inventors: Krishna Teja Malladi, Andrew Chang, Ehsan M. Najafabadi
  • Publication number: 20240020307
    Abstract: A method includes receiving, at a hardware circuit of a device, a target value corresponding to a target data. The method further includes outputting, from the hardware circuit, a first indicator that source data corresponds to the target value. The method further includes, based on the first indicator, outputting, from software executing at the device, a result indicator that the source data corresponds to the target data.
    Type: Application
    Filed: December 30, 2022
    Publication date: January 18, 2024
    Inventors: Praveen Krishnamoorthy, Changho Choi, Andrew Chang
  • Patent number: 11853210
    Abstract: Provided are systems, methods, and apparatuses for providing a storage resource. The method can include: operating a first controller coupled to a network interface in accordance with a cache coherent protocol; performing at least one operation on data associated with a cache using a second controller coupled to the first controller and coupled to a first memory; and storing the data on a second memory coupled to one of the first controller or the second controller.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: December 26, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna T. Malladi, Andrew Chang, Ehsan Najafabadi
  • Patent number: 11841814
    Abstract: A system and method for managing memory resources. In some embodiments the system includes a first server, a second server, and a server-linking switch connected to the first server and to the second server. The first server may include a stored-program processing circuit, a cache-coherent switch, and a first memory module. In some embodiments, the first memory module is connected to the cache-coherent switch, the cache-coherent switch is connected to the server-linking switch, and the stored-program processing circuit is connected to the cache-coherent switch.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: December 12, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna Teja Malladi, Andrew Chang, Ehsan M. Najafabadi
  • Publication number: 20230367711
    Abstract: Provided are systems, methods, and apparatuses for providing a storage resource. The method can include: operating a first controller coupled to a network interface in accordance with a cache coherent protocol; performing at least one operation on data associated with a cache using a second controller coupled to the first controller and coupled to a first memory; and storing the data on a second memory coupled to one of the first controller or the second controller.
    Type: Application
    Filed: July 24, 2023
    Publication date: November 16, 2023
    Inventors: Krishna T. Malladi, Andrew Chang, Ehsan Najafabadi
  • Publication number: 20230314511
    Abstract: A method and a memory device are provided. Data is obtained for a scan operation at an input buffer of a scan kernel in the memory device. The input buffer is adaptable to a first mode and a second mode of the scan kernel. Preprocessing of the data from the input buffer is performed to generate preprocessed data. A different type of preprocessing is performed for the first mode and the second mode. The preprocessed data is filtered to generate a filtered result. The filtered result is provided from the scan kernel to a controller of the memory device.
    Type: Application
    Filed: June 2, 2022
    Publication date: October 5, 2023
Inventors: Andrew Chang, Jingchi Yang, Vinit Apte, Brian Luu
  • Publication number: 20230281128
    Abstract: A memory system is disclosed. The memory system may include a first cache-coherent interconnect memory module and a second cache-coherent interconnect memory module. A cache-coherent interconnect switch may connect the first cache-coherent interconnect memory module, the second cache-coherent interconnect memory module, and a processor. A processing element may process a data stored on at least one of the first cache-coherent interconnect memory module and the second cache-coherent interconnect memory module.
    Type: Application
    Filed: June 7, 2022
    Publication date: September 7, 2023
Inventors: Wenqin Huangfu, Krishna T. Malladi, Andrew Chang
  • Publication number: 20230147472
Abstract: A system and method for training a neural network. In some embodiments, the system includes: a graphics processing unit cluster; and a computational storage cluster connected to the graphics processing unit cluster by a cache-coherent system interconnect. The graphics processing unit cluster may include one or more graphics processing units. The computational storage cluster may include one or more computational storage devices. A first computational storage device of the one or more computational storage devices may be configured to (i) store an embedding table, (ii) receive an index vector including a first index and a second index, and (iii) calculate an embedded vector based on: a first row of the embedding table, corresponding to the first index, and a second row of the embedding table, corresponding to the second index.
    Type: Application
    Filed: February 11, 2022
    Publication date: May 11, 2023
Inventors: Shiyu Li, Krishna T. Malladi, Andrew Chang, Yang Seok Ki
  • Publication number: 20230144843
    Abstract: Systems and methods for error detection for an address channel are disclosed. The method includes generating a token, applying the token to a request at a source, and generating a first result. The request with the first result is transmitted to a destination over the address channel. A determination is made, at the destination, whether an error associated with the request has occurred. The determining whether the error has occurred includes: receiving a received request corresponding to the request over the address channel; receiving the first result with the received request; applying the token to the received request and generating a second result; comparing the first result with the second result; and transmitting a signal in response to the comparing.
    Type: Application
    Filed: April 8, 2022
    Publication date: May 11, 2023
    Inventors: Yukui Luo, Andrew Chang
  • Publication number: 20230146611
    Abstract: A system and method for training a neural network. In some embodiments, the system includes a computational storage device including a backing store. The computational storage device may be configured to: store, in the backing store, an embedding table for a neural network embedding operation; receive a first index vector including a first index and a second index; retrieve, from the backing store: a first row of the embedding table, corresponding to the first index, and a second row of the embedding table, corresponding to the second index; and calculate a first embedded vector based on the first row and the second row.
    Type: Application
    Filed: February 9, 2022
    Publication date: May 11, 2023
Inventors: Shiyu Li, Krishna T. Malladi, Andrew Chang, Yang Seok Ki
  • Publication number: 20230069786
    Abstract: A system for computing. In some embodiments, the system includes: a memory, the memory including one or more function-in-memory circuits; and a cache coherent protocol interface circuit having a first interface and a second interface. A function-in-memory circuit of the one or more function-in-memory circuits may be configured to perform an operation on operands including a first operand retrieved from the memory, to form a result. The first interface of the cache coherent protocol interface circuit may be connected to the memory, and the second interface of the cache coherent protocol interface circuit may be configured as a cache coherent protocol interface on a bus interface.
    Type: Application
    Filed: October 10, 2022
    Publication date: March 2, 2023
    Inventors: Krishna T. Malladi, Andrew Chang
  • Patent number: 11567971
    Abstract: A method of processing data in a system having a host and a storage node may include performing a shuffle operation on data stored at the storage node, wherein the shuffle operation may include performing a shuffle write operation, and performing a shuffle read operation, wherein at least a portion of the shuffle operation is performed by an accelerator at the storage node. A method for partitioning data may include sampling, at a device, data from one or more partitions based on a number of samples, transferring the sampled data from the device to a host, determining, at the host, one or more splitters based on the sampled data, communicating the one or more splitters from the host to the device, and partitioning, at the device, data for the one or more partitions based on the one or more splitters.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: January 31, 2023
    Inventors: Hui Zhang, Joo Hwan Lee, Yiqun Zhang, Armin Haj Aboutalebi, Xiaodong Zhao, Praveen Krishnamoorthy, Andrew Chang, Yang Seok Ki
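The splitter-based partitioning in the abstract above follows a common sample-then-split pattern, sketched below in Python. The sample size, splitter selection, and data are illustrative assumptions, not the patented method:

```python
# Sketch of the splitter-based partitioning described above: the device
# samples its data, the host derives splitter keys from the sample, and
# the device then routes each record to a partition by comparing it
# against the splitters.

import bisect
import random

def sample_data(data, num_samples, seed=0):
    """Device side: draw a fixed-size sample to transfer to the host."""
    return random.Random(seed).sample(data, num_samples)

def choose_splitters(sampled, num_partitions):
    """Host side: pick evenly spaced keys from the sorted sample."""
    s = sorted(sampled)
    step = len(s) // num_partitions
    return [s[i * step] for i in range(1, num_partitions)]

def partition(data, splitters):
    """Device side: route each key to a partition via binary search."""
    parts = [[] for _ in range(len(splitters) + 1)]
    for key in data:
        parts[bisect.bisect_right(splitters, key)].append(key)
    return parts

data = list(range(100))
splitters = choose_splitters(sample_data(data, 20), num_partitions=4)
parts = partition(data, splitters)
print([len(p) for p in parts])  # roughly balanced partition sizes
```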
  • Publication number: 20230020462
Abstract: A system and method for managing memory resources. In some embodiments, the system includes a first memory server, a second memory server, and a server-linking switch connected to the first memory server and to the second memory server. The first memory server may include a cache-coherent switch and a first memory module. In some embodiments, the first memory module is connected to the cache-coherent switch, and the cache-coherent switch is connected to the server-linking switch.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventors: Krishna Teja Malladi, Byung Hee Choi, Andrew Chang, Ehsan M. Najafabadi