Patents by Inventor In S. Chung

In S. Chung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127107
    Abstract: Embodiments of the present disclosure include techniques for machine language processing. In one embodiment, the present disclosure includes commands with data structures comprising fields describing multi-dimensional data and fields describing synchronization. Large volumes of data may be processed and automatically synchronized by execution of a single command.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: Haishan ZHU, Eric S. CHUNG
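
The command/data-structure idea in the abstract above can be sketched as a minimal Python data model. All field names here (`opcode`, `shape`, `strides`, `wait_on`, `signal`) are illustrative assumptions, not the patent's actual encoding:

```python
from dataclasses import dataclass

@dataclass
class TensorField:
    """Fields describing multi-dimensional data."""
    shape: tuple    # extent of each dimension
    strides: tuple  # element stride per dimension

@dataclass
class SyncField:
    """Fields describing synchronization."""
    wait_on: list  # semaphore IDs to wait on before executing
    signal: list   # semaphore IDs to signal on completion

@dataclass
class Command:
    """One command that both describes multi-dimensional data
    and carries its own synchronization."""
    opcode: str
    data: TensorField
    sync: SyncField

cmd = Command(
    opcode="DMA_COPY",
    data=TensorField(shape=(64, 128), strides=(128, 1)),
    sync=SyncField(wait_on=[0], signal=[1]),
)
```

Because the synchronization fields travel with the command, a runtime could dispatch a large transfer and have it synchronized by that single command, as the abstract describes.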
  • Publication number: 20240126617
    Abstract: Embodiments of the present disclosure include techniques for machine language processing. In one embodiment, the present disclosure includes configuring functional modules on a machine learning processor to execute a plurality of machine learning (ML) operations during a plurality of time segments. During the time segments, a first portion of the ML operations execute serially and at least one other ML operation executes during at least a majority of the time of each of the time segments. Serial ML operations may be processed simultaneously with the at least one other ML operation.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: Haishan ZHU, Preyas Janak SHAH, Tiyasa MITRA, Eric S. CHUNG
  • Publication number: 20240120371
    Abstract: Methods and apparatus for a device that includes a circuit cell, such as a memory cell, and an isolation structure to electrically isolate the circuit cell. The isolation structure can include a p-type substrate, a first series of p-type material extending to the p-type substrate, and a second series of p-type material extending to the p-type substrate. The first series of p-type material, the p-type substrate, and the second series of p-type material surround a first side, a second side, and a bottom of the circuit cell to electrically isolate the circuit cell with continuous p-type material. In some embodiments, the first series of p-type material comprises p-type well regions. In some embodiments, the first series of p-type material comprises deep trench isolation.
    Type: Application
    Filed: October 11, 2022
    Publication date: April 11, 2024
    Applicant: Allegro MicroSystems, LLC
    Inventors: James McClay, Maxim Klebanov, Sundar Chetlur, Thomas S. Chung
  • Patent number: 11951130
    Abstract: The present invention relates to an antigen-binding molecule comprising a heavy chain variable region comprising a heavy-chain complementarity-determining region 1 (HCDR1) comprising an amino acid sequence represented by Sequence No. 1, an HCDR2 comprising an amino acid sequence represented by Sequence No. 2, and an HCDR3 comprising an amino acid sequence represented by Sequence No. 3; a light-chain variable region comprising a light-chain complementarity-determining region 1 (LCDR1) comprising an amino acid sequence represented by Sequence No. 4, an LCDR2 comprising an amino acid sequence represented by Sequence No. 5, and an LCDR3 comprising an amino acid sequence represented by Sequence No. 6; wherein the antigen-binding molecule is a T cell receptor (TCR); and to a cell line expressing the same.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: April 9, 2024
    Assignee: Eutilex Co., Ltd.
    Inventors: Byoung S. Kwon, Young Ho Kim, Kwang Hee Kim, Ji Won Chung, Young Gyoon Chang, Bo Rim Yi, Jung Yun Lee, Seung Hyun Lee, Sun Woo Im, Jin Kyung Choi, Hyun Tae Son, Eun Hye Yoo
  • Patent number: 11934327
    Abstract: A field programmable gate array (FPGA) including a configurable interconnect fabric connecting a plurality of logic blocks, the configurable interconnect fabric and the logic blocks being configured to implement a data masking circuit configured to: receive input data including data values at a plurality of indices of the input data; select between a data value of the data values and an alternative value using a masking multiplexer to generate masked data, the masking multiplexer being controlled by a mask value of a plurality of mask values at indices corresponding to the indices of the input data; and output the masked data. In some examples, the configurable interconnect fabric and the logic blocks are further configured to implement a mask generation circuit configured to generate the mask values. In some examples, the mask values are received from external memory.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: March 19, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jinwen Xi, Ming Gang Liu, Eric S. Chung
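
The masking multiplexer in that abstract has a straightforward software analogue. The sketch below is a minimal Python model of the select-between-data-and-alternative behavior, not the FPGA implementation itself:

```python
def mask_select(data, mask, alternative=0):
    """Per-index 2:1 multiplexer: where the mask value is 1 the data
    value passes through; otherwise the alternative value is emitted."""
    return [d if m else alternative for d, m in zip(data, mask)]

# Mask out every other value. Per the abstract, the mask values could
# come from a mask generation circuit or from external memory.
masked = mask_select([10, 20, 30, 40], [1, 0, 1, 0])
# masked == [10, 0, 30, 0]
```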
  • Patent number: 11933786
    Abstract: Antibodies that selectively bind to glycosylated PD-1 relative to unglycosylated PD-1 are provided. In some aspects, PD-1 polypeptides comprising glycosylated amino acid positions are also provided. Methods for making and using such antibodies and polypeptides (e.g., for the treatment of cancer) are also provided.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: March 19, 2024
    Assignees: STCUBE, INC., BOARD OF REGENTS, THE UNIVERSITY OF TEXAS
    Inventors: Stephen S. Yoo, Ezra M. Chung, Yong-Soo Kim, Mien-Chie Hung, Chia-Wei Li, Seung-Oe Lim
  • Publication number: 20240086233
    Abstract: Embodiments of the present disclosure include systems and methods for providing a hierarchical programming model for AI hardware. A system includes a set of lower-level control threads. The system also includes a higher-level control thread configured to receive a command from a device, generate a set of commands based on the command, and provide the set of commands to a subset of the set of lower-level control threads. A lower-level control thread in the subset of the set of lower-level control threads is configured to instruct, based on a particular command in the set of commands, a subset of a plurality of processing threads to perform a set of operations.
    Type: Application
    Filed: September 9, 2022
    Publication date: March 14, 2024
    Inventors: Haishan ZHU, Eric S. CHUNG
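
The hierarchical control pattern above, in which a higher-level control thread expands one device command into a set of commands for lower-level threads, can be sketched in Python. The command fields (`op`, `elements`, `offset`, `count`) are hypothetical:

```python
def expand(command, num_workers):
    """Higher-level control thread: split one device command into a set
    of sub-commands, one per lower-level control thread."""
    chunk = command["elements"] // num_workers
    return [
        {"op": command["op"], "offset": w * chunk, "count": chunk}
        for w in range(num_workers)
    ]

# Fan a 1024-element operation out to 4 lower-level control threads.
subcommands = expand({"op": "REDUCE", "elements": 1024}, num_workers=4)
```

Each lower-level thread would then instruct its own subset of processing threads based on the sub-command it receives.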
  • Publication number: 20240065390
    Abstract: Bands for wearable devices include multiple band retainers used to maintain engagement between an assembly (e.g., a pair) of bands. Some band retainers may be permanently affixed with the band at a certain location of the band, while other band retainers can be removable. The removable band retainers can be moved to different locations of the band, thus allowing the band retainer to retain another band at different locations. As a result, the assembly of bands can be used with different users, and in particular, users with different wrist sizes. Moreover, using multiple band retainers can provide an engagement force between the bands to withstand higher-impact events, such as swimming and diving. Additionally, bands and band retainers may include one or more liquid-resistant and corrosion-resistant materials.
    Type: Application
    Filed: August 18, 2023
    Publication date: February 29, 2024
    Inventors: Nicholas S. Brodine, Molly J. Anderson, Clement C. Tissandier, Osamu Yabe, Mengxi Zhao, Timothy S. Lui, Chia Tse Yeh, Kai-Yu Chung, Jen-Chun Hsu, Tatsuya Sano, Peng Li
  • Publication number: 20240063310
    Abstract: A Schottky diode includes a substrate having a first type dopant, a buried layer within the substrate and having a second type dopant, an epitaxial layer above the buried layer and having the second type dopant, a plurality of rings within the epitaxial layer and having the first type dopant, wherein the plurality of rings comprises an L-shaped ring, a shallow trench isolation (STI) layer at the top region of the epitaxial layer, an anode, a cathode spaced from the anode by the STI layer, and wherein the buried layer has an open region substantially vertically aligned with the anode.
    Type: Application
    Filed: August 16, 2022
    Publication date: February 22, 2024
    Applicant: Allegro MicroSystems, LLC
    Inventors: Yu-Chun Li, Felix Palumbo, Chung C. Kuo, Thomas S. Chung, Maxim Klebanov
  • Patent number: 11886833
    Abstract: Embodiments of the present disclosure include systems and methods for providing hierarchical and shared exponent floating point data types. First and second shared exponent values are determined based on exponent values of a plurality of floating point values. A third shared exponent value is determined based on the first shared exponent value and the second shared exponent value. First and second difference values are determined based on the first shared exponent value, the second shared exponent value, and the third shared exponent value. Sign values and mantissa values are determined for the plurality of floating point values. The sign value and the mantissa value for each floating point value in the plurality of floating point values, the third shared exponent value, the first difference value, and the second difference value are stored in a data structure for a shared exponent floating point data type.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: January 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bita Darvish Rouhani, Venmugil Elango, Rasoul Shafipour, Jeremy Fowers, Ming Gang Liu, Jinwen Xi, Douglas C. Burger, Eric S. Chung
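
The two-level shared-exponent scheme above admits a rough Python sketch. Using each group's maximum exponent as its shared exponent, and the maximum of the two as the third shared exponent, is an assumption made here for illustration:

```python
import math

def shared_exponent(values):
    # Power-of-two exponent of the largest magnitude in the group.
    return max(math.frexp(v)[1] for v in values if v != 0)

def hierarchical_encode(group1, group2):
    e1, e2 = shared_exponent(group1), shared_exponent(group2)
    e3 = max(e1, e2)            # third (top-level) shared exponent
    d1, d2 = e3 - e1, e3 - e2   # small per-group difference values

    def sign_mantissa(vals, e):
        # (sign bit, mantissa scaled by the group's own shared exponent)
        return [(0 if v >= 0 else 1, abs(v) / 2 ** e) for v in vals]

    return {
        "shared_exp": e3,
        "deltas": (d1, d2),
        "groups": (sign_mantissa(group1, e1), sign_mantissa(group2, e2)),
    }

enc = hierarchical_encode([1.5, -0.5], [6.0, 2.0])
```

Storing one full-width exponent plus two narrow difference values costs fewer bits than storing two independent full-width exponents, which is the point of the hierarchy.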
  • Publication number: 20230413687
    Abstract: In one aspect, a Hall effect device includes an implantation layer; an epitaxial layer located above the implantation layer; a trench filled with a dielectric material and extending from a top surface of the epitaxial layer into the implantation layer and defining an enclosed region; a buried layer separating the epitaxial layer from the implantation layer within the enclosed region; and a contact pad located on the epitaxial layer. The trench inhibits current from the contact pad from traveling in a lateral direction orthogonal to a vertical direction and enables the current to travel in the vertical direction.
    Type: Application
    Filed: June 16, 2022
    Publication date: December 21, 2023
    Applicant: Allegro MicroSystems, LLC
    Inventors: Thomas S. Chung, Maxim Klebanov, Sundar Chetlur
  • Publication number: 20230385374
    Abstract: A method for sparse matrix multiplication comprises receiving a first block having M elements in a first dimension, and parsing the first block of M elements into a first set of B sub-blocks including M/B elements in the first dimension. A first sparsity mask having S% sparsity is applied to the first block of elements, such that each of the first set of B sub-blocks has S% sparsity. A second block is received having M elements in a second dimension, and is parsed into a second set of B sub-blocks that include M/B elements in the second dimension. A second sparsity mask having S′% sparsity is applied to the second block of elements, such that S′% of the second set of B sub-blocks have 100% sparsity and (100−S′)% of the second set of B sub-blocks have 0% sparsity. The first and second blocks are then matrix multiplied.
    Type: Application
    Filed: April 4, 2022
    Publication date: November 30, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Venmugil ELANGO, Bita DARVISH ROUHANI, Eric S CHUNG, Douglas Christopher BURGER
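
The balanced per-sub-block sparsity applied to the first block can be sketched in Python. Keeping the largest-magnitude entries within each sub-block is an illustrative mask-selection rule, not necessarily the one used in the application:

```python
def apply_balanced_mask(block, num_subblocks, keep_per_subblock):
    """Split a 1-D block into equal sub-blocks and, within each, zero
    all but the `keep_per_subblock` largest-magnitude entries, so every
    sub-block ends up with the same S% sparsity."""
    n = len(block) // num_subblocks
    out = []
    for i in range(num_subblocks):
        sub = block[i * n:(i + 1) * n]
        keep = set(sorted(range(n), key=lambda j: -abs(sub[j]))[:keep_per_subblock])
        out.extend(v if j in keep else 0 for j, v in enumerate(sub))
    return out

# 50% sparsity in each of 2 sub-blocks of 4 elements.
sparse = apply_balanced_mask([3, -1, 4, 1, 5, 9, 2, 6], 2, 2)
# sparse == [3, 0, 4, 0, 0, 9, 0, 6]
```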
  • Publication number: 20230376725
    Abstract: Embodiments of the present disclosure include systems and methods for providing model customizations of transformers for improved efficiency. A first set of settings for a transformer model is received. Based on the first set of settings, a second set of settings for the transformer model is determined. The first set of settings and the second set of settings are used to configure and train the transformer model.
    Type: Application
    Filed: May 19, 2022
    Publication date: November 23, 2023
    Inventors: Maral Mesmakhosroshahi, Bita Darvish Rouhani, Eric S. Chung, Douglas C. Burger, Maximilian Taylor Golub
  • Publication number: 20230335197
    Abstract: A memory device includes a first storage transistor and a first select transistor. The first storage transistor is configured to store a first data bit. The first select transistor is configured to change the resistance of a gate of the first storage transistor, to write the first data bit into the first storage transistor, a first terminal of the first select transistor being coupled to the gate of the first storage transistor. A method of operating a memory device is also disclosed herein.
    Type: Application
    Filed: March 27, 2023
    Publication date: October 19, 2023
    Inventor: Steve S. CHUNG
  • Patent number: 11790212
    Abstract: Quantization-aware neural architecture search (“QNAS”) can be utilized to learn optimal hyperparameters for configuring an artificial neural network (“ANN”) that quantizes activation values and/or weights. The hyperparameters can include model topology parameters, quantization parameters, and hardware architecture parameters. Model topology parameters specify the structure and connectivity of an ANN. Quantization parameters can define a quantization configuration for an ANN such as, for example, a bit width for a mantissa for storing activation values or weights generated by the layers of an ANN. The activation values and weights can be represented using a quantized-precision floating-point format, such as a block floating-point format (“BFP”) having a mantissa that has fewer bits than a mantissa in a normal-precision floating-point representation and a shared exponent.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: October 17, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Kalin Ovtcharov, Eric S. Chung, Vahideh Akhlaghi, Ritchie Zhao
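
A block floating-point (BFP) representation like the one the abstract describes, with one shared exponent and short per-value mantissas, can be sketched in Python. The rounding rule and the bit width chosen here are illustrative:

```python
import math

def to_bfp(values, mantissa_bits):
    """Quantize a group of floats to one shared exponent plus short
    signed integer mantissas."""
    shared_exp = max(math.frexp(v)[1] for v in values if v != 0)
    scale = 2 ** (mantissa_bits - 1)
    mantissas = [round(v / 2 ** shared_exp * scale) for v in values]
    return shared_exp, mantissas

def from_bfp(shared_exp, mantissas, mantissa_bits):
    """Reconstruct approximate floats from the BFP encoding."""
    scale = 2 ** (mantissa_bits - 1)
    return [m / scale * 2 ** shared_exp for m in mantissas]

exp, mans = to_bfp([1.0, 0.5, -0.25], mantissa_bits=4)
# These particular values round-trip exactly; in general BFP is lossy.
```

A QNAS search as described above would treat `mantissa_bits` (among other topology and hardware parameters) as a hyperparameter to optimize.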
  • Publication number: 20230316042
    Abstract: A method is presented for operating a machine learning model including one or more mixture of experts layers. The method comprises receiving one or more input data shards at a routing gate network for a mixture of experts layer comprising a plurality of neural network experts. One or more neural network experts in the mixture of experts layer are designated to evaluate each input data shard. For each designated neural network expert, a weight matrix is retrieved having a predetermined sparsity to generate a sparsified designated neural network expert. Each input data shard is evaluated with a respective sparsified designated neural network expert.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bita DARVISH ROUHANI, Douglas Christopher BURGER, Eric S CHUNG
  • Publication number: 20230316080
    Abstract: A method is presented for training a neural network. For a weight matrix having an integer dimension M1 in a first dimension and an integer dimension M2 in a second dimension, a first balanced sparsity mask is generated that is an N1 of M1 mask in the first dimension. The first balanced sparsity mask is applied to the weight matrix during inference. A second balanced sparsity mask is generated for a transpose of the weight matrix. The second balanced sparsity mask is an N2 of M2 mask in the second dimension. The second balanced sparsity mask is applied to the transpose of the weight matrix during backpropagation.
    Type: Application
    Filed: March 29, 2022
    Publication date: October 5, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Maximilian Taylor GOLUB, Bita DARVISH ROUHANI, Eric S CHUNG, Douglas Christopher BURGER
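
An N-of-M balanced sparsity mask like the ones above admits a compact sketch. Selecting the N largest-magnitude weights per group of M is the customary rule for such masks, though the application's rule may differ:

```python
def n_of_m_mask(weights, n, m):
    """For each consecutive group of m weights, set the mask to 1 for
    the n largest-magnitude entries and to 0 for the rest."""
    mask = []
    for i in range(0, len(weights), m):
        group = weights[i:i + m]
        keep = set(sorted(range(len(group)), key=lambda j: -abs(group[j]))[:n])
        mask.extend(1 if j in keep else 0 for j in range(len(group)))
    return mask

# A 2-of-4 mask over one group of weights.
mask = n_of_m_mask([0.1, -0.9, 0.3, 0.05], n=2, m=4)
# mask == [0, 1, 1, 0]
```

Generating one such mask along the rows for inference and a second along the columns of the transpose for backpropagation keeps both the forward and backward passes balanced, as the abstract describes.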
  • Publication number: 20230316043
    Abstract: A method for operating a machine learning model is presented. The machine learning model includes a plurality of sequential transformer blocks. The method comprises receiving input data at a transformer block and processing the input data via a mixture of experts layer. At an auxiliary classifier, a measure of perplexity of the processed input data is determined. Based on the determined measure of perplexity, one or more experts in a downstream transformer block that will subsequently process the input data are indicated. Weight matrices are then fetched for the indicated one or more experts.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bita DARVISH ROUHANI, Douglas Christopher BURGER, Eric S. CHUNG
  • Publication number: 20230316039
    Abstract: A computing system is configured to implement a deep neural network comprising an input layer for receiving inputs applied to the deep neural network, an output layer for outputting inferences based on the received inputs, and a plurality of hidden layers interposed between the input layer and the output layer. A plurality of nodes selectively operate on the inputs to generate and cause outputting of the inferences, wherein operation of the nodes is controlled based on parameters of the deep neural network. A sparsity controller is configured to selectively apply a plurality of different sparsity states to control parameter density of the deep neural network. A quantization controller is configured to selectively quantize the parameters of the deep neural network in a manner that is sparsity-dependent, such that quantization applied to each parameter is based on which of the plurality of different sparsity states applies to the parameter.
    Type: Application
    Filed: May 23, 2022
    Publication date: October 5, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Rasoul SHAFIPOUR, Bita DARVISH ROUHANI, Douglas Christopher BURGER, Ming Gang LIU, Eric S. CHUNG, Ritchie Zhao
  • Publication number: 20230299195
    Abstract: In one aspect, a double-diffused metal oxide semiconductor (DMOS) includes a first region of a semiconductor having a first-type dopant, a first well having a second-type dopant, a dielectric within the first well, the dielectric having a bottom surface and a top surface opposite the bottom surface, and a gate disposed on the top surface of the dielectric. The gate, the dielectric and the first well are configured to form a first reduced surface field (RESURF). The bottom surface of the dielectric has a first portion and a second portion, and the first portion of the bottom surface of the dielectric is closer to the top surface of the dielectric than the second portion of the bottom surface of the dielectric.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Applicant: Allegro MicroSystems, LLC
    Inventors: Thomas S. Chung, Chung C. Kuo, Maxim Klebanov, Sundar Chetlur