Patents Examined by Cheng-Yuan Tseng
  • Patent number: 11720509
Abstract: A system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: August 8, 2023
    Inventors: Sompong Paul Olarig, Fred Worley, Son Pham
  • Patent number: 11704271
Abstract: A system-in-package architecture in accordance with aspects includes a logic die and one or more memory dice coupled together in a three-dimensional stack. The logic die can include one or more global building blocks and a plurality of local building blocks. The number of local building blocks can be scalable. The local building blocks can include a plurality of engines and memory controllers. The memory controllers can be configured to directly couple one or more of the engines to the one or more memory dice. The number and type of local building blocks, and the number and types of engines and memory controllers, can be scalable.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: July 18, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Lide Duan, Wei Han, Yuhao Wang, Fei Xue, Yuanwei Fang, Hongzhong Zheng
  • Patent number: 11704124
    Abstract: Disclosed embodiments relate to executing a vector multiplication instruction. In one example, a processor includes fetch circuitry to fetch the vector multiplication instruction having fields for an opcode, first and second source identifiers, and a destination identifier, decode circuitry to decode the fetched instruction, execution circuitry to, on each of a plurality of corresponding pairs of fixed-sized elements of the identified first and second sources, execute the decoded instruction to generate a double-sized product of each pair of fixed-sized elements, the double-sized product being represented by at least twice a number of bits of the fixed size, and generate an unsigned fixed-sized result by rounding the most significant fixed-sized portion of the double-sized product to fit into the identified destination.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: July 18, 2023
    Assignee: Intel Corporation
    Inventors: Venkateswara R. Madduri, Carl Murray, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney, Robert Valentine, Jesus Corbal
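    The scalar operation at the core of this instruction can be sketched in Python. This is a hedged illustration for unsigned 16-bit elements (the patent covers the general fixed-size case); the function names are mine, not Intel's:

    ```python
    def mulhi_round_u16(a: int, b: int) -> int:
        """Multiply two unsigned 16-bit elements into a 32-bit double-sized
        product, then round its most significant 16 bits to nearest."""
        product = (a & 0xFFFF) * (b & 0xFFFF)    # double-sized (32-bit) product
        rounding_bias = 1 << 15                  # half the weight of the kept LSB
        return (product + rounding_bias) >> 16   # rounded high half; fits in 16 bits

    def vec_mulhi_round_u16(src1, src2):
        """Element-wise form over corresponding pairs of fixed-sized elements."""
        return [mulhi_round_u16(a, b) for a, b in zip(src1, src2)]
    ```

    For example, `0x8000 * 0x8000` yields the double-sized product `0x40000000`, whose rounded high half is `0x4000`.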
  • Patent number: 11698790
Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution, the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines, the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved.
    Type: Grant
    Filed: November 10, 2021
    Date of Patent: July 11, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower
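    A minimal sketch of the counter mechanism the abstract describes. The class and method names are illustrative assumptions, not Imagination's design:

    ```python
    from collections import defaultdict

    class HazardCounters:
        """Counters link a primary instruction to the secondary
        instruction(s) that must wait for it to resolve."""
        def __init__(self):
            self.counters = defaultdict(int)

        def issue_primary(self, counter_id: int):
            # Decoder outputs the primary instruction: mark its hazard pending.
            self.counters[counter_id] += 1

        def resolve_primary(self, counter_id: int):
            # A pipeline resolved the primary instruction: clear the hazard.
            self.counters[counter_id] -= 1

        def secondary_may_issue(self, counter_id: int) -> bool:
            # A linked secondary instruction proceeds only when no hazard is pending.
            return self.counters[counter_id] == 0
    ```

    Because the counter is adjusted rather than simply set, several in-flight primary instructions can share one counter and the secondary waits for all of them.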
  • Patent number: 11687820
Abstract: Methods, systems, and apparatus for operating a system of qubits. In one aspect, a method includes operating a first qubit from a first plurality of qubits at a first qubit frequency from a first qubit frequency region, and operating a second qubit from the first plurality of qubits at a second qubit frequency from a second qubit frequency region, the second qubit frequency and the second qubit frequency region being different from the first qubit frequency and the first qubit frequency region, respectively, wherein the second qubit is diagonal to the first qubit in a two-dimensional grid of qubits.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: June 27, 2023
    Assignee: Google LLC
    Inventors: John Martinis, Rami Barends, Austin Greig Fowler
  • Patent number: 11687764
    Abstract: A method of flattening channel data of an input feature map in an inference system includes retrieving pixel values of a channel of a plurality of channels of the input feature map from a memory and storing the pixel values in a buffer, extracting first values of a first region having a first size from among the pixel values stored in the buffer, the first region corresponding to an overlap region of a kernel of the inference system with channel data of the input feature map, rearranging second values corresponding to the overlap region of the kernel from among the first values in the first region, and identifying a first group of consecutive values from among the rearranged second values for supplying to a first dot-product circuit of the inference system.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: June 27, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ali Shafiee Ardestani, Joseph Hassoun
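    The extraction-and-rearrangement step the abstract describes resembles the familiar im2col transform. A simplified sketch for one channel, assuming stride 1 and no padding (the function name and these assumptions are mine):

    ```python
    def flatten_overlap_regions(channel, kernel_h, kernel_w):
        """Flatten one channel of an input feature map: for each kernel
        position, extract the overlap region and rearrange its pixel values
        into one consecutive row, ready for a dot-product circuit.
        `channel` is a 2D list of pixel values."""
        rows, cols = len(channel), len(channel[0])
        flattened = []
        for r in range(rows - kernel_h + 1):
            for c in range(cols - kernel_w + 1):
                # Rearrange the kernel-overlap region into consecutive values.
                region = [channel[r + i][c + j]
                          for i in range(kernel_h) for j in range(kernel_w)]
                flattened.append(region)
        return flattened
    ```

    Each returned row is a group of consecutive values that can be streamed directly into a dot-product circuit against the flattened kernel weights.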
  • Patent number: 11681640
    Abstract: Enabling multi-channel communications between controllers in a storage array, including: creating a plurality of logical communications channels between two or more storage array controllers; inserting, into a buffer utilized by a direct memory access (‘DMA’) engine of a first storage array controller, a data transfer descriptor describing data stored in memory of the first storage array controller and a location to write the data to memory of a second storage array controller; retrieving, in dependence upon the data transfer descriptor, the data stored in memory of the first storage array controller; and writing, via a predetermined logical communications channel, the data into the memory of the second storage array controller in dependence upon the data transfer descriptor.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: June 20, 2023
    Assignee: PURE STORAGE, INC.
    Inventors: Roland Dreier, Yan Liu, Sandeep Mann
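    The data transfer descriptor at the center of this scheme can be sketched as a plain record. The field names are assumptions read off the abstract, not Pure Storage's actual layout:

    ```python
    from dataclasses import dataclass

    @dataclass
    class DataTransferDescriptor:
        """Descriptor a first controller inserts into its DMA engine's
        buffer to describe an inter-controller transfer."""
        source_address: int   # where the data sits in the first controller's memory
        length: int           # number of bytes to transfer
        dest_address: int     # where to write it in the second controller's memory
        channel: int          # predetermined logical communications channel

    # The DMA engine retrieves the data and writes it over the chosen channel.
    desc = DataTransferDescriptor(source_address=0x1000, length=4096,
                                  dest_address=0x2000, channel=2)
    ```

    Keeping the channel in the descriptor is what lets one DMA engine serve several logical communications channels between the same pair of controllers.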
  • Patent number: 11681529
Abstract: Systems, methods, and apparatuses relating to access synchronization in a shared memory are described. In one embodiment, a processor includes a decoder to decode an instruction into a decoded instruction, and an execution unit to execute the decoded instruction to: receive a first input operand of a memory address to be tracked and a second input operand of an allowed sequence of memory accesses to the memory address, and cause a block of a memory access that violates the allowed sequence of memory accesses to the memory address. In one embodiment, a circuit separate from the execution unit compares a memory address for a memory access request to one or more memory addresses in a tracking table, and blocks a memory access for the memory access request when the type of access violates the corresponding allowed sequence of memory accesses to the memory address for the memory access request.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: June 20, 2023
    Assignee: Intel Corporation
    Inventors: Swagath Venkataramani, Dipankar Das, Sasikanth Avancha, Ashish Ranjan, Subarno Banerjee, Bharat Kaul, Anand Raghunathan
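    The tracking-table behaviour can be sketched in software. The class shape and the "W"/"R" encoding of access types are illustrative assumptions:

    ```python
    class AccessTracker:
        """Tracking table mapping a memory address to an allowed sequence
        of accesses; a request that deviates from the sequence is blocked."""
        def __init__(self):
            self.table = {}   # address -> (allowed sequence, next index)

        def track(self, address, allowed_sequence):
            # e.g. allowed_sequence = ["W", "R"]: a write must precede the read.
            self.table[address] = (list(allowed_sequence), 0)

        def access(self, address, kind) -> bool:
            """Return True if the access proceeds, False if it is blocked."""
            if address not in self.table:
                return True                  # untracked addresses are unrestricted
            seq, idx = self.table[address]
            if idx < len(seq) and seq[idx] == kind:
                self.table[address] = (seq, idx + 1)
                return True
            return False                     # violates the allowed sequence: block
    ```

    The hardware analogue blocks (rather than rejects) the access until the sequence catches up, which is how producer/consumer synchronization falls out of the same mechanism.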
  • Patent number: 11675326
    Abstract: In one embodiment, an apparatus comprises a fabric controller of a first computing node. The fabric controller is to receive, from a second computing node via a network fabric that couples the first computing node to the second computing node, a request to execute a kernel on a field-programmable gate array (FPGA) of the first computing node; instruct the FPGA to execute the kernel; and send a result of the execution of the kernel to the second computing node via the network fabric.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: June 13, 2023
    Assignee: Intel Corporation
    Inventors: Nicolas A. Salhuana, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Narayan Ranganathan
  • Patent number: 11669470
    Abstract: The present disclosure provides a storage system including a first storage device (e.g., a main storage device) and one or more additional storage devices (e.g., sub storage devices). The first storage device includes a host interface for communicating with a host device and is directly connected to the host device. The additional storage devices may be directly connected to the first storage device and may communicate with the host device through the host interface included in the first storage device. The storage system thus has a total combined capacity of both the capacity of the first storage device and the capacity of the one or more additional storage devices. Further, the one or more additional storage devices may be added or removed to increase or decrease the total capacity of the storage system, and the one or more additional storage devices may not necessarily themselves include a host interface.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: June 6, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sungwon Jeong, Jinhyuk Lee, Younghoi Heo, Jaeshin Lee
  • Patent number: 11669471
Abstract: A method, computer program product, and computing system for receiving an input/output (IO) command for processing data within a storage system. An IO command-specific entry may be generated in a register based upon, at least in part, the IO command. A compare-and-swap operation may be performed on the IO command-specific entry to determine an IO command state associated with the IO command. The IO command may be processed based upon, at least in part, the IO command state associated with the IO command.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: June 6, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Eldad Zinger, Ran Anner, Amit Engel
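    The compare-and-swap step can be sketched as follows. The state names and register shape are assumptions (the abstract does not enumerate states); the lock merely models the atomicity that hardware CAS provides:

    ```python
    import threading

    # Hypothetical IO command states, not taken from the patent.
    FREE, PENDING, PROCESSING, DONE = range(4)

    class CommandRegister:
        """Register of per-command entries updated via compare-and-swap."""
        def __init__(self):
            self._entries = {}
            self._lock = threading.Lock()   # stands in for hardware CAS atomicity

        def compare_and_swap(self, cmd_id, expected, new) -> bool:
            """Atomically move cmd_id from `expected` to `new`; the boolean
            result reveals the command's current state to the caller."""
            with self._lock:
                if self._entries.get(cmd_id, FREE) == expected:
                    self._entries[cmd_id] = new
                    return True
                return False
    ```

    A failed CAS tells the caller another path already owns the command, so the IO can be processed according to whichever state won the race, without a separate read-then-lock step.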
  • Patent number: 11656668
    Abstract: A method is disclosed for synchronizing operating modes across a plurality of peripheral devices. The method includes each of the plurality of peripheral devices transmitting requests to change a rate of power consumption in response to a set of criteria for each peripheral device. A host computing device determines if one or more peripheral devices should change power modes and transmits corresponding commands to synchronize the plurality of peripheral devices into a single operating mode. Each peripheral device may have a unique power mode for a given operating mode and each peripheral device includes one or more common features, such as a lighting display, that is synchronized across all peripheral devices for a given operating mode.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: May 23, 2023
    Assignee: Logitech Europe S.A.
    Inventor: David Tarongi Vanrell
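    The request-then-broadcast flow can be sketched as below. The class names, the "active"/"idle" modes, and the always-grant policy are illustrative assumptions:

    ```python
    class Peripheral:
        """A peripheral with its own device-specific power mode for each
        shared operating mode."""
        def __init__(self, name, mode_map):
            self.name = name
            self.mode_map = mode_map      # operating mode -> device power mode
            self.set_operating_mode("active")

        def set_operating_mode(self, mode):
            self.operating_mode = mode
            self.power_mode = self.mode_map[mode]

    class Host:
        """The host grants a device's request to change its rate of power
        consumption by broadcasting one operating mode to every device."""
        def __init__(self, devices):
            self.devices = devices

        def handle_request(self, requested_mode):
            for d in self.devices:        # synchronize all peripherals
                d.set_operating_mode(requested_mode)
    ```

    The point of the indirection through `mode_map` is that devices stay in a single shared operating mode (so common features such as lighting match) while each still applies its own power mode.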
  • Patent number: 11657015
    Abstract: A device is provided with two or more uplink ports to connect the device via two or more links to one or more sockets, where each of the sockets includes one or more processing cores, and each of the two or more links is compliant with a particular interconnect protocol. The device further includes I/O logic to identify data to be sent to the one or more processing cores for processing, determine an affinity attribute associated with the data, and determine which of the two or more links to use to send the data to the one or more processing cores based on the affinity attribute.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: May 23, 2023
    Assignee: Intel Corporation
    Inventors: Debendra Das Sharma, Anil Vasudevan, David Harriman
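    The affinity-based link choice can be sketched as a small routing table. The class shape and the socket-id affinity attribute are assumptions for illustration:

    ```python
    class MultiUplinkDevice:
        """Device with two or more uplink ports; I/O logic picks the link
        whose socket will process the data, based on an affinity attribute."""
        def __init__(self, link_by_socket):
            # link_by_socket: socket id -> uplink port reaching that socket's cores.
            self.link_by_socket = link_by_socket
            self.default_link = next(iter(link_by_socket.values()))

        def route(self, data, affinity_socket=None):
            # Data with no affinity attribute takes the default uplink.
            link = self.link_by_socket.get(affinity_socket, self.default_link)
            return (link, data)
    ```

    Steering by affinity keeps the data off the inter-socket interconnect, which is the latency win the multi-uplink arrangement is after.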
  • Patent number: 11657267
    Abstract: A neural network apparatus (20) includes a storage unit (24) storing a neural network model, and an arithmetic unit (22) inputting input information into an input layer of the neural network and outputting an output layer. A weight matrix (W) of an FC layer of the neural network model is constituted by a product of a weight basis matrix (Mw) of integers and a weight coefficient matrix (Cw) of real numbers. In the FC layer, the arithmetic unit (22) uses an output vector from a previous layer as an input vector (x) to decompose the input vector (x) into a product of a binary input basis matrix (Mx) and an input coefficient vector (cx) of real numbers and an input bias (bx) and derives a product of the input vector (x) and a weight matrix (W).
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: May 23, 2023
    Assignee: DENSO IT LABORATORY, INC.
    Inventor: Mitsuru Ambai
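    The payoff of the decomposition is that the expensive inner product reduces to integer-only arithmetic. A toy numeric sketch with shapes chosen here for illustration (not taken from the patent), and the input bias bx taken as 0 so its term vanishes:

    ```python
    def matmul(A, B):
        """Plain matrix product over lists of lists."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    # FC-layer weight matrix W stored as integer basis Mw times real coefficients Cw.
    Mw = [[1], [-1]]        # weight basis matrix (integers)
    Cw = [[0.5, 2.0]]       # weight coefficient matrix (real numbers)
    W  = matmul(Mw, Cw)     # full weight matrix W = Mw @ Cw

    # Input vector x decomposed as Mx @ cx (binary basis, real coefficients).
    Mx = [[1], [0]]         # binary input basis matrix
    cx = [[3.0]]            # input coefficient vector (real)

    # x^T @ W reduces to cx^T @ (Mx^T @ Mw) @ Cw, where Mx^T @ Mw is a
    # cheap binary-times-integer product.
    MxT_Mw = matmul([list(r) for r in zip(*Mx)], Mw)
    y = matmul(matmul(cx, MxT_Mw), Cw)
    ```

    Only the tiny `MxT_Mw` product touches the basis matrices; the real-valued coefficients rescale the result afterwards, matching the direct computation `x^T @ W`.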
  • Patent number: 11650949
Abstract: A system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: May 16, 2023
    Inventors: Sompong Paul Olarig, Fred Worley, Son Pham
  • Patent number: 11645212
Abstract: Processing elements include interfaces that allow direct access to memory banks on one or more DRAMs in an integrated circuit stack. These additional (e.g., per processing element) direct interfaces may allow the processing elements to have direct access to the data in the DRAM stack. Based on the size/type of operands being processed, and the memory bandwidth of the direct interfaces, rate calculation circuitry on the processor die determines the speed at which each processing element and/or the processing nodes within each processing element operate.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: May 9, 2023
    Assignee: Rambus Inc.
    Inventors: Steven C. Woo, Thomas Vogelsang, Joseph James Tringali, Pooneh Safayenikoo
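    One plausible form of such a rate calculation, as a hedged sketch (the formula is an assumption; the patent does not give one in the abstract): a processing element cannot usefully run faster than its direct interface delivers operands, so its rate is capped at bandwidth divided by operand size.

    ```python
    def processing_rate(operand_bytes: int, bandwidth_bytes_per_s: float,
                        ops_per_operand: int = 1) -> float:
        """Operations per second sustainable by a processing element whose
        direct memory interface supplies `bandwidth_bytes_per_s`."""
        operands_per_s = bandwidth_bytes_per_s / operand_bytes
        return operands_per_s * ops_per_operand
    ```

    For example, a 8 GB/s direct interface feeding 4-byte operands sustains at most 2 G operations per second, so larger operand types imply a slower clock for that element.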
  • Patent number: 11635969
Abstract: Systems and methods described herein are directed to upgrading one or more of add-on firmware and disk firmware for a server, which can involve connecting a port of the server to an isolated network, the isolated network dedicated to firmware upgrades for the server; caching onto cache memory of the server an operating system received through the isolated network; booting the operating system on the server from the cache memory; conducting a Network File System (NFS) mount on the server to determine hardware information associated with the upgrading of the one or more of the add-on firmware and the disk firmware; and upgrading the one or more of the add-on firmware and the disk firmware based on the hardware information.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: April 25, 2023
    Assignee: HITACHI VANTARA LLC
    Inventors: Francis Kin-Wing Hong, Arturo Cruz, Liren Zhao
  • Patent number: 11630785
    Abstract: The present disclosure generally relates to improving data transfer speed. A data storage device includes both a controller and a memory device. The controller provides instructions regarding read and/or write commands to the memory device through the use of control lines. The data to be written/read is transferred between the controller and the memory device along data lines. The control lines typically are not used during data transfer. During data transfer, the control lines can be used to increase data transfer speed by utilizing the otherwise idle control lines for data transfer in addition to the data lines. Hence, data transfer speed is increased by using not only the data lines, but additionally the control lines. Once the data transfer is complete, the control lines return to their legacy function.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: April 18, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Refael Ben-Rubi, Moshe Cohen
  • Patent number: 11614947
    Abstract: An example device includes a plurality of computational memory banks. Each computational memory bank of the plurality of computational memory banks includes an array of memory units and a plurality of processing elements connected to the array of memory units. The device further includes a plurality of single instruction, multiple data (SIMD) controllers. Each SIMD controller of the plurality of SIMD controllers is contained within at least one computational memory bank of the plurality of computational memory banks. Each SIMD controller is to provide instructions to the at least one computational memory bank.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: March 28, 2023
    Assignee: UNTETHER AI CORPORATION
    Inventors: William Martin Snelgrove, Darrick Wiebe
  • Patent number: 11615334
    Abstract: Quantum memory management is becoming a pressing problem, especially given the recent research effort to develop new and more complex quantum algorithms. The disclosed technology concerns various example memory management schemes for quantum computing. For example, certain embodiments concern methods for managing quantum memory based on reversible pebbling games constructed from SAT-encodings.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: March 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Martin Roetteler, Giulia Meuli