Patents Examined by Paul M Knight
  • Patent number: 12366960
    Abstract: An integrated circuit device includes a memory controller coupleable to a memory. The memory controller schedules memory accesses to regions of the memory based on memory timing parameters specific to the regions. A method includes receiving a memory access request at a memory device. The method further includes accessing, from a timing data store of the memory device, data representing a memory timing parameter specific to a region of the memory cell circuitry targeted by the memory access request. The method also includes scheduling, at the memory controller, the memory access request based on the data.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: July 22, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Yi Xu, Nuwan S. Jayasena, Yuan Xie
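A minimal Python sketch of the per-region scheduling idea in patent 12366960 above, assuming a hypothetical timing store keyed by region name and a single timing parameter (a minimum gap between accesses to the same region); the region names, parameters, and request format are illustrative and not taken from the patent.

```python
# Hypothetical sketch: schedule memory requests using timing parameters
# looked up per region of the memory, as the abstract describes.

TIMING_STORE = {            # region id -> minimum cycles between accesses (illustrative)
    "region_fast": 10,
    "region_slow": 25,
}

last_access_cycle = {}      # region id -> cycle of the most recent access

def schedule(request, current_cycle):
    """Return the earliest cycle at which the request may be issued."""
    region = request["region"]
    gap = TIMING_STORE[region]                      # region-specific timing parameter
    earliest = last_access_cycle.get(region, -gap) + gap
    issue_cycle = max(current_cycle, earliest)
    last_access_cycle[region] = issue_cycle
    return issue_cycle

print(schedule({"region": "region_fast", "addr": 0x100}, current_cycle=0))   # 0
print(schedule({"region": "region_fast", "addr": 0x140}, current_cycle=1))   # 10
```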
  • Patent number: 12367377
    Abstract: A computer-implemented method for making time-series predictions used in controlling and/or monitoring a computer-controlled system, such as a semi-autonomous vehicle. The method uses a time series of one or more observed states. A state comprises values of measurable quantities of multiple interacting objects. Based on the observed states, values of time-invariant latent features for the multiple objects are determined, for example, according to an encoder model. A decoder model is then used to predict at least one next state. This involves applying a trained graph model to obtain a first prediction contribution based on an object's interactions with other objects, and applying a trained function to obtain a second prediction contribution based only on information about the object itself. Based on the predicted next state, output data is generated for use in controlling and/or monitoring the computer-controlled system.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: July 22, 2025
    Assignee: ROBERT BOSCH GMBH
    Inventors: Claudia Blaiotta, Sebastian Ziesche
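A minimal Python sketch of the two-part decoder structure described in patent 12367377 above: each object's next state combines an interaction contribution and a per-object contribution scaled by a time-invariant latent feature. The two stand-in functions below are illustrative placeholders for the trained graph model and trained function, not the patented models.

```python
# Hypothetical sketch of the two-part decoder described in the abstract:
# the next state of each object is predicted from (a) its interactions with
# the other objects and (b) its own state plus a time-invariant latent feature.

def interaction_term(state, i):
    # Stand-in for the trained graph model: pull object i toward the mean of the others.
    others = [s for j, s in enumerate(state) if j != i]
    return 0.1 * (sum(others) / len(others) - state[i])

def self_term(state, latent, i):
    # Stand-in for the trained per-object function: drift scaled by the latent feature.
    return latent[i] * state[i]

def predict_next_state(state, latent):
    return [state[i] + interaction_term(state, i) + self_term(state, latent, i)
            for i in range(len(state))]

observed = [1.0, 2.0, 3.0]      # one measurable quantity per object (illustrative)
latents = [0.01, -0.02, 0.0]    # time-invariant latent features from the encoder
print(predict_next_state(observed, latents))
```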
  • Patent number: 12361271
    Abstract: A computer-implemented method, computer program product, and/or computer system that performs the following operations: (i) receiving a deep neural network (DNN) having a set of input nodes, a set of output nodes, and a set of weight parameters; (ii) configuring the DNN for application to a set of analog resistive processing unit (RPU) arrays, the configuring including applying a set of modifiers to respective outputs of the set of output nodes, the set of modifiers corresponding to the set of analog RPU arrays; (iii) training the DNN using a training process, the training yielding an updated set of weight parameters and an updated set of modifiers; and (iv) transferring the updated set of weight parameters and the updated set of modifiers to the set of analog RPU arrays.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: July 15, 2025
    Assignee: International Business Machines Corporation
    Inventor: Malte Johannes Rasch
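A minimal Python sketch of the configuration step described in patent 12361271 above: a linear layer whose outputs are multiplied by trainable per-output modifiers, with both weights and modifiers updated during training. The layer shape, learning rate, and training loop are illustrative assumptions, and the "transfer" to analog RPU arrays is only indicated by a comment.

```python
# Hypothetical sketch: a single linear layer whose outputs are multiplied by
# trainable per-output-node modifiers; after training, the updated weights and
# modifiers together would be transferred to the analog RPU arrays.

import random

n_in, n_out = 4, 2
weights = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
modifiers = [1.0] * n_out                      # one modifier per output node

def forward(x):
    raw = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return [m * r for m, r in zip(modifiers, raw)]   # apply the output modifiers

def train_step(x, target, lr=0.01):
    raw = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    out = [m * r for m, r in zip(modifiers, raw)]
    for o in range(n_out):
        err = out[o] - target[o]
        modifiers[o] -= lr * err * raw[o]            # update the modifier
        for i in range(n_in):
            weights[o][i] -= lr * err * modifiers[o] * x[i]   # update the weight

for _ in range(200):
    train_step([1.0, 0.5, -0.5, 2.0], [1.0, -1.0])

# "Transfer": the updated weights and modifiers jointly describe the analog arrays.
print([round(m, 3) for m in modifiers], [round(v, 3) for v in forward([1.0, 0.5, -0.5, 2.0])])
```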
  • Patent number: 12346250
    Abstract: The embodiments of the disclosed technology relate to a controller and an operating method thereof. According to some embodiments of the disclosed technology, the controller may include i) a first memory configured to store map data including a plurality of map data entries, ii) a second memory configured to store map search data indicating a first map data entry, which corresponds to a first logical address, among the plurality of map data entries, and iii) a core configured to search for information on a physical address mapped to a second logical address from the map data, based on whether the map search data is stored in the second memory.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: July 1, 2025
    Assignee: SK HYNIX INC.
    Inventor: Jeen Park
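A minimal Python sketch of the lookup path suggested by patent 12346250 above, assuming a hypothetical logical-to-physical map in a "first memory" and a single cached map-search entry in a "second memory"; the addresses and data layout are illustrative.

```python
# Hypothetical sketch: a full logical-to-physical map lives in a slower first memory,
# a small "map search data" entry is cached in a faster second memory, and the core
# consults the cached entry before searching the full map.

map_data = {0x10: 0xA000, 0x11: 0xA100, 0x20: 0xB000}   # first memory: logical -> physical
map_search_data = {"logical": 0x10, "physical": 0xA000} # second memory: one cached entry

def lookup(logical_addr):
    # Fast path: the cached map search data already covers this logical address.
    if map_search_data and map_search_data["logical"] == logical_addr:
        return map_search_data["physical"]
    # Slow path: search the full map data, then refresh the cached entry.
    physical = map_data[logical_addr]
    map_search_data.update(logical=logical_addr, physical=physical)
    return physical

print(hex(lookup(0x10)))   # served from the second memory
print(hex(lookup(0x20)))   # found in the map data, cache refreshed
```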
  • Patent number: 12340316
    Abstract: Techniques disclosed herein relate generally to constructing a customized knowledge graph. In one embodiment, entities and relations among entities are extracted from a user dataset based on certain rules to generate a seed graph. Large-scale knowledge graphs are then traversed using a finite state machine to identify candidate entities and/or relations to add to the seed graph. A priority function is used to select entities and/or relations from the candidate entities and/or relations. The selected entities and/or relations are then added to the seed graph to generate the customized knowledge graph.
    Type: Grant
    Filed: March 3, 2023
    Date of Patent: June 24, 2025
    Assignee: Oracle International Corporation
    Inventors: Gautam Singaraju, Prithviraj Venkata Ammanabrolu
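A minimal Python sketch of the graph-growing loop described in patent 12340316 above: candidate triples from a larger knowledge graph are ranked by a priority function and the best ones are added to the seed graph. The triples, the priority rule, and the cutoff are illustrative stand-ins, and the finite-state-machine traversal is reduced to a simple candidate scan.

```python
# Hypothetical sketch: grow a seed graph by pulling candidate edges out of a larger
# knowledge graph and keeping only the highest-priority ones.

large_kg = [  # (head, relation, tail) triples
    ("oracle", "sells", "database"),
    ("database", "stores", "tables"),
    ("tables", "contain", "rows"),
    ("oracle", "located_in", "austin"),
]

seed_graph = {("oracle", "sells", "database")}

def priority(triple, seed):
    # Prefer candidates that attach to entities already present in the seed graph.
    entities = {e for h, _, t in seed for e in (h, t)}
    return int(triple[0] in entities) + int(triple[2] in entities)

candidates = [t for t in large_kg if t not in seed_graph]
candidates.sort(key=lambda t: priority(t, seed_graph), reverse=True)
seed_graph.update(candidates[:2])         # add the top-priority candidates
print(sorted(seed_graph))                 # the customized knowledge graph
```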
  • Patent number: 12333442
    Abstract: Particular embodiments can update a deployed machine learning model with actual entity data depending on anomalies detected in stream data, which can be stored to a computer object, such as a journal. Various embodiments map particular subsets of a larger pool of raw input data to the particular models that need the input data and store the raw input data to computer objects so that the corresponding machine learning models can make predictions according to any suitable policy or triggering event on any of the data located in the computer objects. Such mapping allows each machine learning model to continuously make predictions based on the data it needs.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: June 17, 2025
    Assignee: Cerner Innovation, Inc.
    Inventors: Uttam B. Ramamurthy, David Dellsperger, Christopher S. Finn, Brandon Davis, James Gritter, Stephen Patel, Maulik Gandhi, Adam Sabaliauskas
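A minimal Python sketch of the routing idea described in patent 12333442 above: fields of each raw stream record are mapped to the models that need them, appended to per-model computer objects (journals), and a model update is triggered when a simple anomaly test fires. The field names, anomaly rule, and update step are illustrative assumptions.

```python
# Hypothetical sketch: route subsets of raw input data to the models that need them,
# journal the routed data, and update a model when an anomaly is detected in the stream.

model_inputs = {"vitals_model": ["heart_rate"], "lab_model": ["glucose"]}
journals = {name: [] for name in model_inputs}          # one computer object per model

def ingest(record):
    for model, fields in model_inputs.items():
        subset = {f: record[f] for f in fields if f in record}
        if subset:
            journals[model].append(subset)              # store for later predictions
            if is_anomalous(model, subset):
                update_model(model, journals[model])    # retrain on journaled data

def is_anomalous(model, subset):
    return model == "vitals_model" and subset["heart_rate"] > 150

def update_model(model, journal):
    print(f"updating {model} on {len(journal)} journaled records")

ingest({"heart_rate": 80, "glucose": 5.2})
ingest({"heart_rate": 170, "glucose": 5.0})             # triggers an update
```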
  • Patent number: 12333443
    Abstract: A server for updating a current version of a machine learning model resident in implanted medical devices includes an interface, a memory, and a processor. The interface is configured to receive a plurality of updated versions of the machine learning model from a plurality of remote sources remote from the server. The remote sources may be, e.g., implanted medical devices and/or subservers. The processor is coupled to the memory and the interface and is configured to aggregate the plurality of updated versions to derive a server-updated version of the machine learning model, and to transmit the server-updated version of the machine learning model to one or more of the plurality of remote sources as a replacement for the current version of the machine learning model.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: June 17, 2025
    Assignee: NeuroPace, Inc.
    Inventors: Sharanya Arcot Desai, Thomas K. Tcheng
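A minimal Python sketch of the server-side aggregation described in patent 12333443 above. Plain element-wise averaging is an assumption here (the abstract does not specify the aggregation rule), and the remote sources are represented as simple dictionaries.

```python
# Hypothetical sketch: aggregate updated model versions received from remote sources
# and distribute the server-updated version back as a replacement for the current one.

def aggregate(updated_versions):
    n = len(updated_versions)
    return [sum(ws) / n for ws in zip(*updated_versions)]   # element-wise mean (assumed rule)

def distribute(server_version, remote_sources):
    for source in remote_sources:
        source["model"] = server_version                     # replace the current version

remotes = [{"id": "device_a", "model": [0.2, 0.4]},
           {"id": "device_b", "model": [0.4, 0.0]}]
server_updated = aggregate([r["model"] for r in remotes])
distribute(server_updated, remotes)
print(server_updated, remotes[0]["model"])
```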
  • Patent number: 12332778
    Abstract: Embodiments of the present disclosure relate to a memory system, a memory controller, and a method for operating the same. Garbage collection is performed with regard to the memory device on the basis of a first amount of time and a second amount of time, the first amount of time being a period of time between triggering of first garbage collection and triggering of second garbage collection, and the second amount of time being an amount of time necessary to perform the second garbage collection. A ratio of the first amount of time to the second amount of time is determined as a target ratio value, and the second amount of time is determined to be equal to or longer than a minimum garbage collection operation time. Accordingly, efficient garbage collection can be performed, and the optimal time to perform garbage collection can be determined with regard to a configured performance drop value.
    Type: Grant
    Filed: March 1, 2023
    Date of Patent: June 17, 2025
    Assignee: SK hynix Inc.
    Inventors: Min Jun Jang, Hyoung Pil Choi
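A minimal Python sketch of the timing rule in patent 12332778 above: given a target ratio of "time between garbage collections" to "time spent performing the next garbage collection", derive both amounts while enforcing a minimum collection time. The ratio, minimum, and units are illustrative numbers.

```python
# Hypothetical sketch: plan garbage collection so that the first amount of time
# (between GC triggers) and the second amount of time (duration of the next GC)
# follow a target ratio, with the second clamped to a minimum GC operation time.

TARGET_RATIO = 4.0        # first amount of time : second amount of time
MIN_GC_TIME_MS = 20.0     # minimum garbage collection operation time

def plan_gc(requested_gc_time_ms):
    second = max(requested_gc_time_ms, MIN_GC_TIME_MS)   # time to perform the next GC
    first = TARGET_RATIO * second                        # time before it is triggered
    return first, second

print(plan_gc(10.0))   # (80.0, 20.0) - clamped to the minimum GC time
print(plan_gc(50.0))   # (200.0, 50.0)
```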
  • Patent number: 12332798
    Abstract: An intelligent processing device includes a first memory, a second memory, a memory management circuit and a convolution operation circuit. The memory management circuit transfers input data from an external memory to the first memory. The convolution operation circuit reads the input data from the first memory, and performs multiple stages of calculations to generate multiple sets of feature map data. After a first data tile of a first feature map data is generated, the memory management circuit stores the first data tile to the second memory. When the amount of data stored for the first data tile reaches a predetermined value, the memory management circuit transfers the first data tile from the second memory to the first memory, and the convolution operation circuit reads the first data tile from the first memory and accordingly performs a second-stage calculation to generate a second data tile of a second feature map data.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: June 17, 2025
    Assignee: SIGMASTAR TECHNOLOGY LTD.
    Inventors: Hu He, Shi-Jie Zhou
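A minimal Python sketch of the data movement described in patent 12332798 above: first-stage results accumulate tile by tile in a second memory, and once a tile reaches a predetermined size it is moved back to the first memory for the second-stage calculation. The tile size and the two "calculations" are trivial illustrative stand-ins for the convolution stages.

```python
# Hypothetical sketch of the two-memory tile flow between calculation stages.

PREDETERMINED_VALUE = 4                 # rows per data tile (illustrative)
first_memory, second_memory = [], []

def stage1(row):                        # stand-in for the first-stage convolution
    return [x * 2 for x in row]

def stage2(tile):                       # stand-in for the second-stage convolution
    return [sum(r) for r in tile]

def process(input_rows):
    outputs = []
    for row in input_rows:
        second_memory.append(stage1(row))                # store first feature map tile data
        if len(second_memory) == PREDETERMINED_VALUE:    # tile reached the predetermined size
            first_memory.append(list(second_memory))     # transfer the tile to the first memory
            second_memory.clear()
            outputs.append(stage2(first_memory.pop()))   # second-stage calculation
    return outputs

print(process([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]))
```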
  • Patent number: 12326820
    Abstract: In various examples, a memory model may support multicasting where a single request for a memory access operation may be propagated to multiple physical addresses associated with multiple processing elements (e.g., corresponding to respective local memory). Thus, the request may cause data to be read from and/or written to memory for each of the processing elements. In some examples, a memory model exposes multicasting to processes. This may include providing for separate multicast and unicast instructions or shared instructions with one or more parameters (e.g., indicating a virtual address) being used to indicate multicasting or unicasting. Additionally or alternatively, whether a request(s) is processed using multicasting or unicasting may be opaque to a process and/or application or may otherwise be determined by the system. One or more constraints may be imposed on processing requests using multicasting to maintain a coherent memory interface.
    Type: Grant
    Filed: January 18, 2022
    Date of Patent: June 10, 2025
    Assignee: NVIDIA Corporation
    Inventors: Glenn Alan Dearth, Mark Hummel, Daniel Joseph Lustig
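A minimal Python sketch of the multicast-versus-unicast distinction described in patent 12326820 above: one store request is fanned out to the local memory of several processing elements when the address belongs to a multicast group. The address map, group table, and memory model are illustrative assumptions, not NVIDIA's memory model.

```python
# Hypothetical sketch: a single memory access request propagated to multiple
# physical copies (multicast) or serviced by one local memory (unicast).

local_memories = {pe: {} for pe in ("pe0", "pe1", "pe2")}
multicast_groups = {0x9000: ["pe0", "pe1", "pe2"]}       # virtual addr -> member PEs

def store(virtual_addr, value, pe="pe0"):
    members = multicast_groups.get(virtual_addr)
    if members:                                          # multicast: write every member copy
        for target in members:
            local_memories[target][virtual_addr] = value
    else:                                                # unicast: write the local copy only
        local_memories[pe][virtual_addr] = value

store(0x9000, 42)          # multicast request
store(0x1000, 7, pe="pe1") # unicast request
print(local_memories)
```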
  • Patent number: 12327202
    Abstract: The present disclosure provides an entity tag association prediction method, device, system, and a computer readable storage medium.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: June 10, 2025
    Assignee: CHINA UNIONPAY CO., LTD.
    Inventors: Lizhi Liu, Yu Wang, Haijian Jiang, Qing Min
  • Patent number: 12314858
    Abstract: The learning unit 81 learns a neural network. The linearization quantity determination unit 82 determines a linearization quantity, which is a parameter included in an activation function used in the neural network and which brings the activation function closer to a linear function as it is increased or decreased. The aggregation unit 83 replaces an activation function that is determined to converge to a linear function as the linearization quantity is increased or decreased with that linear function, and aggregates weights among layers using the replaced linear function. The learning unit 81 calculates an evaluation value based on the output of the neural network while learning the neural network, and the linearization quantity determination unit 82 changes the linearization quantity when the evaluation value satisfies a predetermined criterion.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: May 27, 2025
    Assignee: NEC CORPORATION
    Inventor: Yasuhiro Mizukoshi
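A minimal Python sketch of the linearization idea in patent 12314858 above: an activation blended between ReLU and the identity by a linearization quantity `a`, so that once the activation becomes linear the two surrounding weight matrices can be aggregated into one. The blend form, 2x2 weights, and inputs are illustrative assumptions.

```python
# Hypothetical sketch: an activation that becomes linear as the linearization
# quantity a approaches 1, allowing the weights of adjacent layers to be aggregated.

def activation(x, a):
    relu = x if x > 0 else 0.0
    return a * x + (1.0 - a) * relu        # a=0: ReLU, a=1: linear (identity)

def matmul(w, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in w]

w1 = [[1.0, -2.0], [0.5, 0.0]]
w2 = [[2.0, 1.0], [-1.0, 3.0]]

def forward(v, a):
    hidden = [activation(h, a) for h in matmul(w1, v)]
    return matmul(w2, hidden)

# When a == 1 the activation is the identity, so the two layers collapse to w2 @ w1.
aggregated = [[sum(w2[i][k] * w1[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
x = [1.0, 2.0]
print(forward(x, a=1.0), matmul(aggregated, x))   # identical outputs
```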
  • Patent number: 12292825
    Abstract: A memory control method, a memory storage device, and a memory control circuit unit are disclosed. The method includes: generating a first operation command via one of a plurality of processing circuits, wherein the first operation command instructs to access a first memory group in a plurality of memory groups; and in response to a first state information, sending a first command sequence to the first memory group according to the first operation command to instruct the first memory group to perform an access operation. The first state information reflects a first activation state of the plurality of memory groups, and the first command sequence does not include a control command sequence configured to activate the first memory group.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: May 6, 2025
    Assignee: PHISON ELECTRONICS CORP.
    Inventors: Sheng-Min Huang, Shih-Ying Song
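A minimal Python sketch of the control flow described in patent 12292825 above: the command sequence sent to a memory group omits the activation command when the state information says the group is already active. The command names and state table are illustrative.

```python
# Hypothetical sketch: build a command sequence that skips the activation command
# for memory groups the state information reports as already activated.

activation_state = {"group0": True, "group1": False}   # first state information

def build_command_sequence(group, operation):
    sequence = []
    if not activation_state[group]:          # only activate groups that are not yet active
        sequence.append(("ACTIVATE", group))
        activation_state[group] = True
    sequence.append((operation, group))
    return sequence

print(build_command_sequence("group0", "READ"))   # no activation command included
print(build_command_sequence("group1", "READ"))   # activation command prepended
```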
  • Patent number: 12282422
    Abstract: Disclosed is a method of operating a storage device which includes a non-volatile memory device. The method includes informing a host that a designation functionality for designating a data criticality and a priority to namespaces of the non-volatile memory device is available, enabling the designation functionality in response to receiving an approval of the designation functionality, receiving, from the host, a first request for designating a data criticality and a priority for a first namespace of the namespaces, and generating a namespace mapping table in response to the first request.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: April 22, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Karan Singh, Rajendra Singh
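A minimal Python sketch of the handshake described in patent 12282422 above: the device advertises the designation functionality, the host approves it, and per-namespace criticality and priority requests populate a namespace mapping table. The class, method, and field names are illustrative assumptions, not an NVMe interface.

```python
# Hypothetical sketch: advertise, enable, and apply namespace criticality/priority designation.

class StorageDevice:
    def __init__(self):
        self.designation_enabled = False
        self.namespace_table = {}                          # namespace -> criticality/priority

    def advertise(self):
        return {"designation_functionality": True}         # inform the host it is available

    def approve_designation(self):
        self.designation_enabled = True                     # host approval received

    def designate(self, namespace, criticality, priority):
        if not self.designation_enabled:
            raise RuntimeError("designation functionality not enabled")
        self.namespace_table[namespace] = {"criticality": criticality, "priority": priority}

device = StorageDevice()
print(device.advertise())
device.approve_designation()
device.designate("ns1", criticality="high", priority=0)
print(device.namespace_table)                              # the namespace mapping table
```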
  • Patent number: 12282859
    Abstract: The computing device trains a first model on a first data set using a first graph to predict relevant links between a plurality of nodes. The computing device obtains the first data set or a second data set associated with the plurality of nodes. The computing device determines the one or more features for the one or more links between the plurality of nodes, applies the trained first model to the one or more links between the plurality of nodes, outputs the relevant links and non-relevant links of the one or more links between the plurality of nodes, removes the non-relevant links between the plurality of nodes, connects each node of the plurality of nodes with the relevant links to generate one or more first sets of networks, and outputs the one or more first sets of generated networks.
    Type: Grant
    Filed: July 25, 2024
    Date of Patent: April 22, 2025
    Assignee: SAS INSTITUTE INC.
    Inventors: Nicholas Akbar Ablitt, James Byron Morris
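A minimal Python sketch of the pruning step described in patent 12282859 above: each link is scored by a stand-in relevance model, non-relevant links are removed, and the generated networks are read off as connected components. The scores and threshold are illustrative.

```python
# Hypothetical sketch: keep only relevant links and connect the remaining nodes
# into one or more generated networks.

links = {("a", "b"): 0.9, ("b", "c"): 0.2, ("c", "d"): 0.8, ("d", "e"): 0.95}

def relevant(score, threshold=0.5):        # stand-in for the trained first model
    return score >= threshold

kept = [edge for edge, score in links.items() if relevant(score)]

def connected_components(edges):
    adjacency, networks, seen = {}, [], set()
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    for start in adjacency:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                component.add(node)
                stack.extend(adjacency[node])
        networks.append(component)
    return networks

print(kept, connected_components(kept))    # two generated networks: {a,b} and {c,d,e}
```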
  • Patent number: 12277065
    Abstract: Methods, systems, and devices for shared virtual address spaces are described. In some examples, a globally shared address space may be shared across a plurality of memory devices that are included in one or more domains. A host system may set parameters for determining whether an address (e.g., a virtual address) is included within the globally shared address space, and whether the address is associated with a memory device. When a memory device receives a memory request (e.g., a data packet), a processing unit of the memory device may determine whether an address included in the memory request is associated with the memory device. The processing unit may either initiate an access operation on a physical address of the memory device or transmit the memory request to another memory device.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: April 15, 2025
    Assignee: Micron Technology, Inc.
    Inventors: Bryan Hornung, Tony M. Brewer
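A minimal Python sketch of the routing decision described in patent 12277065 above: each memory device checks whether the virtual address in a request falls inside the globally shared space and within its own range, then either services the access locally or forwards the request. The address ranges and device list are illustrative assumptions.

```python
# Hypothetical sketch: decide per device whether to service or forward a memory request
# addressed into a globally shared virtual address space.

SHARED_BASE, SHARED_LIMIT = 0x1000_0000, 0x3000_0000     # host-configured parameters

devices = [                                               # each owns a slice of the shared space
    {"name": "dev0", "base": 0x1000_0000, "limit": 0x2000_0000, "storage": {}},
    {"name": "dev1", "base": 0x2000_0000, "limit": 0x3000_0000, "storage": {}},
]

def handle(device, request):
    addr = request["virtual_addr"]
    if not (SHARED_BASE <= addr < SHARED_LIMIT):
        raise ValueError("address outside the globally shared space")
    if device["base"] <= addr < device["limit"]:          # local: perform the access operation
        device["storage"][addr - device["base"]] = request["value"]
        return device["name"]
    for other in devices:                                  # remote: forward the memory request
        if other["base"] <= addr < other["limit"]:
            return handle(other, request)

print(handle(devices[0], {"virtual_addr": 0x2000_0040, "value": 99}))   # forwarded to dev1
```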
  • Patent number: 12277511
    Abstract: The computing device trains a first model on a first data set using a first graph to predict relevant links between a plurality of nodes. The computing device applies the trained first model to the one or more links between the plurality of nodes from a first node, iteratively connects each node to the one or more first sets of generated networks for each of the relevant links until the relevant links for connection to the plurality of nodes are not present, and outputs the one or more first sets of generated networks. The computing device also applies the trained first model to the one or more links between the plurality of nodes, removes the non-relevant links, connects each node of the plurality of nodes with the relevant links to generate one or more second sets of networks, and outputs the one or more second sets of generated networks.
    Type: Grant
    Filed: July 19, 2024
    Date of Patent: April 15, 2025
    Assignee: SAS INSTITUTE INC.
    Inventors: Nicholas Akbar Ablitt, James Byron Morris
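A minimal Python sketch of the iterative growth described in patent 12277511 above: starting from a first node, links accepted by a stand-in relevance model are followed repeatedly until no relevant link connects a new node. The link scores, threshold, and starting node are illustrative.

```python
# Hypothetical sketch: iteratively connect nodes to the generated network along relevant
# links until no relevant link to a new node remains.

scored_links = {("a", "b"): 0.9, ("b", "c"): 0.7, ("c", "d"): 0.1, ("a", "e"): 0.3}

def grow_network(first_node, threshold=0.5):
    network, frontier = {first_node}, [first_node]
    while frontier:                                   # stop when no relevant link is left
        node = frontier.pop()
        for (u, v), score in scored_links.items():
            if score < threshold or node not in (u, v):
                continue
            neighbor = v if node == u else u
            if neighbor not in network:
                network.add(neighbor)
                frontier.append(neighbor)
    return network

print(grow_network("a"))    # {'a', 'b', 'c'}; d and e are not reached by relevant links
```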
  • Patent number: 12277493
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting action slates using reinforcement learning. One of the methods includes receiving an observation characterizing a current state of an environment; selecting an action slate by processing the observation and a plurality of candidate action slates using a deep neural network, wherein each candidate action slate comprises a respective plurality of actions from the set of actions, and wherein the deep neural network is configured to, for each of the candidate action slates, process the observation and the actions in the candidate action slate to generate a slate Q value for the candidate action slate that is an estimate of a long-term reward resulting from the candidate action slate being provided to the action selector in response to the observation; and providing the selected action slate to an action selector in response to the observation.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: April 15, 2025
    Assignee: DeepMind Technologies Limited
    Inventor: Peter Goran Sunehag
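A minimal Python sketch of slate selection with slate Q values as described in patent 12277493 above. A simple scoring function stands in for the deep neural network; the observation, candidate slates, and scores are illustrative.

```python
# Hypothetical sketch: score each candidate action slate with a slate Q value given
# the observation, and provide the highest-scoring slate to the action selector.

def slate_q_value(observation, slate):
    # Stand-in for the trained deep neural network's long-term reward estimate.
    return sum(observation.get(action, 0.0) for action in slate)

def select_slate(observation, candidate_slates):
    return max(candidate_slates, key=lambda slate: slate_q_value(observation, slate))

observation = {"news": 0.8, "sports": 0.3, "music": 0.6}      # current state of the environment
candidates = [("news", "sports"), ("news", "music"), ("sports", "music")]
print(select_slate(observation, candidates))                  # ('news', 'music')
```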
  • Patent number: 12271318
    Abstract: Method and apparatus monitor eviction conflicts among cache directory entries in a cache directory and produce cache directory victim entry information for a memory manager. In some examples, the memory manager reduces future cache directory conflicts by changing a page level physical address assignment for a page of memory based on the produced cache directory victim entry information. In some examples, a scalable data fabric includes hardware control logic that performs the monitoring of the eviction conflicts among cache directory entries in the cache directory and produces the cache directory victim entry information.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: April 8, 2025
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Brandon K. Potter, Marko Scrbak, Sergey Blagodurov, Kishore Punniyamurthy, Nathaniel Morris
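A minimal Python sketch of the feedback loop described in patent 12271318 above: eviction conflicts are counted per cache directory set, the worst offender serves as the victim entry information, and a memory manager reassigns a page to a physical address that maps to a different set. The set count, hash, and page table are illustrative assumptions.

```python
# Hypothetical sketch: monitor cache directory eviction conflicts and change a page-level
# physical address assignment based on the produced victim entry information.

NUM_SETS = 4
conflict_counts = [0] * NUM_SETS
page_table = {0x10: 0x10, 0x14: 0x14, 0x18: 0x18}   # virtual page -> physical page

def directory_set(physical_page):
    return physical_page % NUM_SETS

def record_eviction_conflict(physical_page):
    conflict_counts[directory_set(physical_page)] += 1

def rebalance():
    hot_set = max(range(NUM_SETS), key=lambda s: conflict_counts[s])   # victim entry info
    for vpage, ppage in page_table.items():
        if directory_set(ppage) == hot_set:
            page_table[vpage] = ppage + 1              # reassign to a page in another set
            break

record_eviction_conflict(0x10)
record_eviction_conflict(0x14)
record_eviction_conflict(0x10)
rebalance()
print(conflict_counts, page_table)
```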
  • Patent number: 12265914
    Abstract: A method includes training a recurrent neural network by monitoring data in a memory of a first server as the first server executes jobs and by determining an amount of computing resources used by the first server while executing the jobs and applying the recurrent neural network to data in the memory to predict an amount of computing resources that the first server will use when executing a first future job. The method also includes, in response to determining that execution of the first future job did not meet a performance criterion, making a change to the first server. The method further includes further training the recurrent neural network using a reinforcement learning technique, applying the recurrent neural network to determine that the change should be made to a second server, and in response, making the change to the second server before the second server executes a second future job.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: April 1, 2025
    Assignee: Kyndryl, Inc.
    Inventors: Robert Bradley Desaulniers, Clea Anne Zolotow, Mihai Criveti, Ana Maria Bezerra Maimoni
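A minimal Python sketch of the control loop described in patent 12265914 above: a stand-in predictor estimates the resources a future job will need, a change is applied to the server whose job missed its performance criterion, and the same change is applied to a second server before it runs a future job. The predictor, the criterion, and the "change" are illustrative stand-ins for the trained recurrent neural network and reinforcement-learning step.

```python
# Hypothetical sketch: predict resource use, react to a missed performance criterion,
# and proactively apply the same change to a second server.

def predict_resources(memory_samples):
    # Stand-in for the trained recurrent neural network.
    return 1.2 * max(memory_samples)

def met_criterion(used, allocated):
    return used <= allocated

def apply_change(server):
    server["allocated_gb"] *= 2                      # the "change" made to the server

server1 = {"name": "s1", "allocated_gb": 8}
server2 = {"name": "s2", "allocated_gb": 8}

predicted = predict_resources([4.0, 6.5, 7.0])       # monitor memory while jobs execute
used_by_job = 11.0
if not met_criterion(used_by_job, server1["allocated_gb"]):
    apply_change(server1)                            # react to the missed criterion
    apply_change(server2)                            # change the second server before its job
print(predicted, server1, server2)
```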