Patents Examined by Titus Wong
  • Patent number: 11372800
    Abstract: A SoC including a first CPU, a first tightly-coupled memory, a second CPU and a second tightly-coupled memory is disclosed. The first CPU includes a first core circuit, a first level one memory interface and a first level two memory interface. The first tightly-coupled memory is directly coupled to the first level one memory interface, and the first tightly-coupled memory includes a first mailbox. The second CPU includes a second core circuit, a second level one memory interface and a second level two memory interface. The second tightly-coupled memory is directly coupled to the second level one memory interface, and the second tightly-coupled memory includes a second mailbox. When the first CPU sends a command to the second mailbox within the second tightly-coupled memory, the second core circuit directly reads the command from the second mailbox, without going through the second level two memory interface.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: June 28, 2022
    Assignee: Silicon Motion, Inc.
    Inventor: An-Pang Li
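The mailbox scheme this abstract describes can be approximated in a short sketch. This is a hypothetical model, not the patented hardware: all class and method names are invented, and Python lists stand in for the mailbox region inside each tightly-coupled memory (TCM).

```python
# Sketch of the inter-CPU mailbox scheme: each CPU's TCM holds a mailbox
# that the peer CPU writes commands into, and the owning core reads the
# mailbox directly over its L1 path, bypassing its L2 memory interface.

class TightlyCoupledMemory:
    def __init__(self, size):
        self.data = bytearray(size)
        self.mailbox = []          # mailbox region inside the TCM

class Cpu:
    def __init__(self, name, tcm):
        self.name = name
        self.tcm = tcm             # directly coupled via the L1 interface

    def send_command(self, peer, command):
        # Write a command into the peer CPU's mailbox.
        peer.tcm.mailbox.append(command)

    def read_command(self):
        # The owning core reads its own mailbox directly (no L2 access).
        return self.tcm.mailbox.pop(0) if self.tcm.mailbox else None

cpu0 = Cpu("cpu0", TightlyCoupledMemory(1024))
cpu1 = Cpu("cpu1", TightlyCoupledMemory(1024))
cpu0.send_command(cpu1, "FLUSH")
```

The point of the arrangement is latency: because the mailbox lives in tightly-coupled memory on the receiver's L1 interface, the receiving core never pays an L2 round trip to pick up a command.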
  • Patent number: 11360671
    Abstract: An adjacent track interference (ATI) metric is determined for each of a plurality of regions of a single surface of a magnetic disk. Based on the ATI metrics, each of the regions is assigned a region-specific directed offline scan (DOS) criterion, at least two of the DOS criteria being different from one another. Based on a write count of a track within one of the regions satisfying the associated region-specific DOS criterion, a DOS remediation of the track is performed.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: June 14, 2022
    Assignee: Seagate Technology LLC
    Inventors: Jian Qiang, Jose Mari Toribio, Teck Khoon Lim, Wenxiang Xie, Xiong Liu
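The region-specific policy in the abstract can be illustrated with a small sketch. The threshold formula and all numbers here are invented for illustration; the patent only says that each region's ATI metric yields its own directed offline scan (DOS) criterion.

```python
# Sketch: each disk-surface region gets a write-count threshold derived
# from its adjacent-track-interference (ATI) metric, and a track triggers
# DOS remediation when its write count satisfies its region's threshold.

def dos_threshold(ati_metric):
    # Higher ATI susceptibility -> remediate sooner (lower write limit).
    # The 100000 scale factor and 1000 floor are arbitrary placeholders.
    return max(1000, int(100000 / ati_metric))

def needs_remediation(write_count, ati_metric):
    return write_count >= dos_threshold(ati_metric)
```

Because the thresholds differ per region, lightly affected regions are scanned rarely while ATI-prone regions are remediated aggressively, which is the "at least two of the DOS criteria being different" language in the claim.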
  • Patent number: 11341066
    Abstract: Disclosed is a cache including a dataflow controller for transmitting first data to a first processor and receiving second data from the first processor, an external direct memory access (DMA) controller for receiving the first data from an external memory to transmit the first data to the dataflow controller and receiving the second data from the dataflow controller to transmit the second data to the external memory, a scratchpad memory for storing the first data or the second data transmitted between the dataflow controller and the external DMA controller, a compression/decompression device for compressing data to be transmitted from the scratchpad memory to the external memory and decompressing data transmitted from the external memory to the scratchpad memory, and a transfer state buffer for storing transfer state information associated with data transfer between the dataflow controller and the external DMA controller.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: May 24, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Ho Han, Min-Seok Choi, Young-Su Kwon
  • Patent number: 11341079
    Abstract: A device includes a transmitter coupled to a node, where the node is to couple to a wired link. The transmitter has a plurality of modes of operation including a calibration mode in which a range of communication data rates over the wired link is determined in accordance with a voltage margin corresponding to the wired link at a predetermined error rate. The range of communication data rates includes a maximum data rate, which can be a non-integer multiple of an initial data rate.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: May 24, 2022
    Assignee: RAMBUS INC.
    Inventors: Yohan U. Frans, Hae-Chang Lee, Brian S. Leibowitz, Simon Li, Nhat M. Nguyen
  • Patent number: 11334484
    Abstract: Determining and using the ideal size of memory to be transferred from a high-speed memory to a low-speed memory may result in faster saves to the low-speed memory and a longer life for the low-speed memory.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael R. Fortin, Robert L. Reinauer
  • Patent number: 11334516
    Abstract: This disclosure relates generally to systems and methods of translating between Universal Serial Bus (USB) and synchronous serial protocols. In one embodiment, an apparatus includes an application-specific integrated circuit (ASIC) configured to operate in a multi-protocol generic mode and in an adaptive clock mode. The ASIC is configured to implement a multi-protocol generic command processor in the multi-protocol generic mode where the ASIC is operable to be commanded so as to execute a generic bus command that converts between the USB protocol and any commanded synchronous serial protocol. Furthermore, the ASIC can synchronize the execution of the generic bus command with an externally generated clock signal on the synchronous serial side when the ASIC is also provided in the adaptive clock mode. In this manner, a computer device with a USB port can use the ASIC as a bridge for data communications with a radio having a synchronous serial port.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: May 17, 2022
    Assignee: Venturi, LLC
    Inventor: Benjamin Victor Payment
  • Patent number: 11334291
    Abstract: Embodiments of a method and device are disclosed. In an embodiment, a controller includes a plurality of memories each having registers that are accessible using an address, a plurality of memory controllers each coupled to a memory and configured to control read and write operations to the respective coupled memory, a bus coupled to each of the memory controllers configured to communicate data and commands to each of the memory controllers, a plurality of processing cores coupled to the bus and configured to read and write data to the memories through the memory controllers, and a plurality of isolation stages, each isolation stage being coupled between a memory controller and a memory and configured to isolate the respective memory from receiving a memory clock signal when the memory is not addressed by the memory controller.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: May 17, 2022
    Assignee: NXP B.V.
    Inventor: Jo Frisson
  • Patent number: 11301401
    Abstract: An apparatus includes a memory component having a plurality of ball grid array (BGA) components, wherein each respective one of the BGA components includes a plurality of memory blocks and a BGA component controller and firmware adjacent the plurality of memory blocks to manage the plurality of memory blocks. The apparatus further includes a processing device, included in the memory component, to perform memory operations on the BGA components.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: April 12, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Suresh Rajgopal, Balint Fleischer
  • Patent number: 11303472
    Abstract: A new processing architecture is described in which a data processing unit (DPU) is utilized within a device. Unlike conventional compute models that are centered around a central processing unit (CPU), example implementations described herein leverage a DPU that is specially designed and optimized for a data-centric computing model in which the data processing tasks are centered around, and the primary responsibility of, the DPU. For example, various data processing tasks, such as networking, security, and storage, as well as related work acceleration, distribution and scheduling, and other such tasks are the domain of the DPU. The DPU may be viewed as a highly programmable, high-performance input/output (I/O) and data-processing hub designed to aggregate and process network and storage I/O to and from multiple other components and/or devices. This frees resources of the CPU, if present, for computing-intensive tasks.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: April 12, 2022
    Assignee: Fungible, Inc.
    Inventors: Pradeep Sindhu, Jean-Marc Frailong, Bertrand Serlet, Wael Noureddine, Felix A. Marti, Deepak Goel, Rajan Goyal
  • Patent number: 11281616
    Abstract: A device includes a driver circuit to send data bits onto a data bus that is partitioned into a DC component and an AC component. The driver circuit is to, for some data bits, retrieve a value of a DC power ratio of the data bus. The driver circuit is further to determine, using the value of the DC power ratio, a first value for a first portion of total power to be dissipated over the DC component to transmit the data bits, and determine, using one minus the value of the DC power ratio, a second value for a second portion of total power to be dissipated over the AC component to transmit the data bits. The driver circuit is to determine whether to send the data bits onto the data bus using data bus inversion dependent on a combination of the first value and the second value.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: March 22, 2022
    Assignee: Intel Corporation
    Inventors: Melin Dadual, Vivek Joy Kozhikkottu, Shankar Ganesh Ramasubramanian
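The DC/AC split described in the abstract can be sketched as a weighted cost function. The cost model below is a simplification I am assuming for illustration (ones on the bus as the DC component, bit toggles versus the previous word as the AC component); the patent does not disclose the exact formula.

```python
# Sketch of a power-aware data bus inversion (DBI) decision: total
# transmit cost is split into a DC part weighted by the DC power ratio
# and an AC part weighted by (1 - ratio); the word is sent inverted if
# inversion lowers the combined cost.

def combined_cost(word, prev, dc_ratio):
    dc = bin(word).count("1")           # static (DC) component: ones driven
    ac = bin(word ^ prev).count("1")    # toggling (AC) component: bit flips
    return dc_ratio * dc + (1.0 - dc_ratio) * ac

def use_dbi(word, prev, dc_ratio, width=8):
    mask = (1 << width) - 1
    inverted_cost = combined_cost(word ^ mask, prev, dc_ratio)
    return inverted_cost < combined_cost(word, prev, dc_ratio)
```

On a heavily DC-terminated bus (high ratio) the decision is dominated by how many ones are driven; on a mostly capacitive bus (low ratio) it is dominated by toggle count, which matches the abstract's use of the ratio and one-minus-the-ratio as the two weights.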
  • Patent number: 11275526
    Abstract: The technology disclosed in this patent document can be implemented in embodiments to provide a memory controller configured to control a memory device and a method of operating the memory controller and the memory device. The memory controller may control a memory device including a plurality of pages, and may include a command analysis unit configured to generate command information indicating a type of read command for a page selected from among the plurality of pages, and an initialization time decision unit configured to decide on a channel initialization time for initializing channels of a plurality of memory cells included in the selected page based on the command information.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: March 15, 2022
    Assignee: SK hynix Inc.
    Inventor: Se Chang Park
  • Patent number: 11269782
    Abstract: Embodiment of this disclosure provides a mechanism to extend a workload instruction to include both untranslated and translated address space identifiers (ASIDs). In one embodiment, a processing device comprising a translation manager is provided. The translation manager receives a workload instruction from a guest application. The workload instruction comprises an untranslated (ASID) and a workload for an input/output (I/O) device. The untranslated ASID is translated to a translated ASID. The translated ASID inserted into a payload of the workload instruction. Thereupon, the payload is provided to a work queue of the I/O device to execute the workload based in part on at least one of: the translated ASID or the untranslated ASID.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: March 8, 2022
    Assignee: Intel Corporation
    Inventors: Kun Tian, Xiao Zheng, Ashok Raj, Sanjay Kumar, Rajesh Sankaran
  • Patent number: 11262925
    Abstract: Techniques for configuring paths for transmitting I/O operations may include: configuring a first path over which logical devices are exposed over a first port of a data storage system to a second port of a host, wherein the logical devices include a first logical device having a first service level objective and a second logical device having a second service level objective denoting a lower service level than the first service level objective; determining whether there is a service level objective violation of the first service level for the first logical device; and responsive to determining there is a service level objective violation for the first logical device, performing first processing that exposes the first logical device and the second logical device over different ports of the data storage system. Masking information may indicate which logical devices are exposed over which data storage system ports to which host ports.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: March 1, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Violet S. Beckett, Jaeyoo Jung, Arieh Don
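The reconfiguration step in the abstract — splitting two logical devices onto different ports after a service level objective (SLO) violation — can be sketched with a toy masking table. The dictionary layout and function name are invented; real masking information is far richer.

```python
# Sketch: masking maps array port -> set of logical devices exposed on
# it. On an SLO violation of a higher-SLO device, devices sharing its
# port are moved to a spare port so the violated device no longer
# competes for that path.

def rebalance(masking, violated_dev, spare_port):
    for port, devs in list(masking.items()):
        if violated_dev in devs and len(devs) > 1:
            movers = devs - {violated_dev}       # lower-SLO neighbors
            masking[port] = {violated_dev}       # violated device keeps port
            masking.setdefault(spare_port, set()).update(movers)
    return masking
```

After the move, the host's masking view shows the two devices over different data storage system ports, which is the condition the claim's "first processing" establishes.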
  • Patent number: 11249918
    Abstract: A memory access system may include a first memory address translator, a second memory address translator and a mapping entry invalidator. The first memory address translator translates a first virtual address in a first protocol of a memory access request to a second virtual address in a second protocol and tracks memory access request completions. The second memory address translator translates the second virtual address to a physical address of a memory. The mapping entry invalidator requests invalidation of a first mapping entry of the first memory address translator and, following invalidation of the first mapping entry and based upon the tracked memory access request completions, requests invalidation of a corresponding second mapping entry of the second memory address translator.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: February 15, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Shawn K. Walker, Christopher Shawn Kroeger, Derek A. Sherlock
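The ordering constraint in this abstract — invalidate the stage-1 mapping first, then the stage-2 mapping only after outstanding accesses drain — can be modeled in a few lines. The class below is an invented illustration; `inflight` stands in for the tracked memory access request completions.

```python
# Sketch of ordered two-stage mapping invalidation: stage1 maps a first
# virtual address to a second virtual address, stage2 maps the second
# virtual address to a physical address. The stage-2 entry may only be
# dropped after the stage-1 entry is gone and in-flight accesses that
# may still use the mapping have completed.

class TwoStageTranslator:
    def __init__(self):
        self.stage1 = {}     # first VA -> second VA
        self.stage2 = {}     # second VA -> physical address
        self.inflight = 0    # outstanding accesses using the mapping

    def invalidate(self, va1):
        va2 = self.stage1.pop(va1)     # step 1: stage-1 entry goes first
        if self.inflight == 0:         # step 2: wait for completions
            self.stage2.pop(va2, None)
        return va2
```

Removing the stage-1 entry first guarantees no new request can reach the stage-2 entry while it is being retired, so stale translations cannot be issued during the teardown window.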
  • Patent number: 11232052
    Abstract: A timing control method and system applied to a network simulator platform are disclosed. When at least one subprocess calls a system call to enter a blocking I/O, a marking operation is performed and the kernel issues a first notification event to the network simulator, requesting the simulator to pause until the subprocess leaves the blocking I/O. When the kernel detects that the subprocess has left the blocking I/O, it issues a second notification event so that the network simulator resumes simulating. The present invention prevents the subprocess from continuously occupying resources, and the first and second notification events keep the simulator timer from running during the subprocess execution period, which would otherwise cause abnormal timing in the simulation result.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: January 25, 2022
    Assignee: Estinet Technologies Incorporation
    Inventors: Chih-Che Lin, Ting-Wei Ho
  • Patent number: 11221979
    Abstract: Synchronization of a plurality of aggregate DMA transfers on a large number of DMA queues can be achieved using a small number of semaphores. One or more semaphores from M semaphores can be assigned to each aggregate DMA transfer in round-robin fashion or by another suitable method. Each aggregate DMA transfer can comprise N DMA transfers, where M is smaller than N. Each DMA transfer can be assigned to one of the assigned one or more semaphores from the M semaphores. Each DMA engine of N DMA engines can increment the assigned semaphore after performing a respective DMA transfer of the N DMA transfers. A computational engine waiting on completion of a certain aggregate DMA transfer can perform an operation based upon the one or more assigned semaphores reaching respective threshold values.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: January 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: Drazen Borkovic
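The round-robin assignment and threshold test in the abstract can be sketched directly. Plain counters stand in for hardware semaphores here; the function names are invented.

```python
# Sketch: N DMA transfers share M semaphores (M < N) assigned
# round-robin. Each engine increments its semaphore on completion; the
# waiter declares the aggregate transfer done when every semaphore has
# reached its expected count (the number of transfers mapped onto it).

def assign_semaphores(n_transfers, m_semaphores):
    # transfer i -> semaphore i % M
    return [i % m_semaphores for i in range(n_transfers)]

def aggregate_done(counts, assignment, m_semaphores):
    expected = [assignment.count(s) for s in range(m_semaphores)]
    return all(counts[s] >= expected[s] for s in range(m_semaphores))
```

The appeal is resource economy: a waiter polls M counters rather than N queue states, so a large fan-out of DMA engines needs only a handful of synchronization primitives.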
  • Patent number: 11216316
    Abstract: Facilitating object deletion based on delete lock contention in distributed file systems is provided herein. A first node device of a cluster of node devices can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise determining whether a contention callback is assigned to an object scheduled to be removed from cache of the first node device. The operations also can comprise, based on the contention callback being assigned to the object, granting a write lock to a second node device of the cluster of node devices and removing from the cache a link to the object.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: January 4, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Lisa Sproat, Ron Steinke, Douglas Kilpatrick
  • Patent number: 11200186
    Abstract: Systems, methods, and apparatuses relating to operations in a configurable spatial accelerator are described. In one embodiment, a configurable spatial accelerator includes a first processing element that includes a configuration register within the first processing element to store a configuration value that causes the first processing element to perform an operation according to the configuration value, a plurality of input queues, an input controller to control enqueue and dequeue of values into the plurality of input queues according to the configuration value, a plurality of output queues, and an output controller to control enqueue and dequeue of values into the plurality of output queues according to the configuration value.
    Type: Grant
    Filed: June 30, 2018
    Date of Patent: December 14, 2021
    Assignee: Intel Corporation
    Inventors: Kermin E. Fleming, Jr., Simon C. Steely, Jr., Kent D. Glossop, Mitchell Diamond, Benjamin Keen, Dennis Bradford, Fabrizio Petrini, Barry Tannenbaum, Yongzhi Zhang
  • Patent number: 11200183
    Abstract: Implementations of the disclosure provide a processing device comprising an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: December 14, 2021
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Utkarsh Y. Kakaiya, Kun Tian
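The lookup-and-match flow in the abstract lends itself to a small sketch. The message dictionary and table layout below are invented stand-ins for the hardware data structure.

```python
# Sketch of the interrupt-message handling: a message carries an ASID,
# an interrupt handle, and a flag distinguishing it from a DMA message.
# The handle selects an entry from a table; the interrupt is forwarded
# only if the message ASID matches the entry's ASID.

def handle_message(table, msg):
    if not msg.get("is_interrupt"):      # flag: DMA messages are ignored
        return None
    entry = table[msg["handle"]]         # select entry by interrupt handle
    if entry["asid"] != msg["asid"]:     # ASID check isolates containers
        return None
    return entry["interrupt"]            # forward to the app container
```

The ASID comparison is the isolation mechanism: a device (or misbehaving guest) cannot steer an interrupt into another container's entry, because the handle alone is not sufficient without a matching ASID.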
  • Patent number: 11200173
    Abstract: Techniques are disclosed relating to controlling cache size and priority of data stored in the cache using machine learning techniques. A software cache may store data for a plurality of different user accounts using one or more hardware storage elements. In some embodiments, a machine learning module generates, based on access patterns to the software cache, a control value that specifies a size of the cache and generates time-to-live values for entries in the cache. In some embodiments, the system evicts data based on the time-to-live values. The disclosed techniques may reduce cache access times and/or improve cache hit rate.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: December 14, 2021
    Assignee: PayPal, Inc.
    Inventor: Shanmugasundaram Alagumuthu
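The TTL-driven eviction described in the abstract can be shown with a minimal cache. The machine-learning module that generates the cache-size control value and per-entry time-to-live values is not reproduced; a caller-supplied TTL stands in for it, and the class name is invented.

```python
# Sketch: each entry carries a time-to-live assigned at insertion (in
# the patent, by a machine-learning module observing access patterns);
# entries whose TTL has elapsed are evicted on access.

class TtlCache:
    def __init__(self):
        self.store = {}    # key -> (value, expiry time)

    def put(self, key, value, now, ttl):
        self.store[key] = (value, now + ttl)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is None or now >= entry[1]:
            self.store.pop(key, None)    # evict expired entry
            return None
        return entry[0]
```

Learning the TTLs (and the overall cache size) from access patterns is what distinguishes the claimed system from a fixed-TTL cache: hot accounts get long-lived entries, cold ones expire quickly, improving hit rate for the same footprint.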