Patents Examined by Scott C Sun
  • Patent number: 11386022
    Abstract: A storage device includes: a host interface to receive a host command from a host device over a storage interface; one or more memory translation layers to execute one or more operations associated with the host command to retrieve one or more chunks of data associated with the host command from storage memory; a bitmap circuit including a bitmap to track a constrained order of the one or more chunks of data to be transferred to the host device; and a transfer trigger to trigger a data transfer to the host device for the one or more chunks of data in the constrained order according to a state of one or more bits of the bitmap.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: July 12, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Richard N. Deglin, Atrey Hosmane, Srinivasa Raju Nadakuditi
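The bitmap-tracked, constrained-order transfer described in the abstract above can be sketched in Python. Everything here (class name, methods, the in-memory stand-ins for the bitmap circuit and DMA path) is illustrative, not the patented implementation:

```python
class BitmapTransfer:
    """Sketch: track chunk readiness in a bitmap and release chunks to
    the host strictly in the constrained order (hypothetical names)."""

    def __init__(self, num_chunks):
        self.num_chunks = num_chunks
        self.bitmap = 0        # bit i set => chunk i has been retrieved
        self.chunks = {}       # chunk index -> data
        self.next_to_send = 0  # next index allowed to go to the host
        self.sent = []         # stands in for the host-bound transfer

    def chunk_ready(self, index, data):
        """Called when a memory translation layer delivers a chunk,
        possibly out of order."""
        self.chunks[index] = data
        self.bitmap |= (1 << index)
        self._trigger()

    def _trigger(self):
        """Transfer trigger: release every consecutive ready chunk,
        gated by the state of the corresponding bitmap bits."""
        while self.bitmap & (1 << self.next_to_send):
            self.sent.append(self.chunks.pop(self.next_to_send))
            self.next_to_send += 1
```

A chunk that arrives early is simply held until the bits for all earlier chunks are set.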
  • Patent number: 11379234
    Abstract: An arithmetic unit performs store-to-load forwarding based on predicted dependencies between store instructions and load instructions. In some embodiments, the arithmetic unit maintains a table of store instructions that are awaiting movement to a load/store unit of the instruction pipeline. In response to receiving a load instruction that is predicted to be dependent on a store instruction stored at the table, the arithmetic unit causes the data associated with the store instruction to be placed into the physical register targeted by the load instruction. In some embodiments, the arithmetic unit performs the forwarding by mapping the physical register targeted by the load instruction to the physical register where the data associated with the store instruction is located.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: July 5, 2022
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Gregory W. Smaus, Francesco Spadini, Matthew A. Rafacz, Michael Achenbach, Christopher J. Burke, Emil Talpes, Matthew M. Crum
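The register-mapping flavor of store-to-load forwarding in the abstract above can be sketched as follows; the table structure and all names are hypothetical, and the dependence prediction itself is assumed to happen elsewhere:

```python
class ForwardingUnit:
    """Sketch of predicted store-to-load forwarding via register
    mapping (all names hypothetical)."""

    def __init__(self):
        self.pending_stores = {}  # store id -> physical reg holding its data
        self.reg_map = {}         # load's target reg -> forwarded physical reg

    def track_store(self, store_id, data_preg):
        """Record a store awaiting movement to the load/store unit."""
        self.pending_stores[store_id] = data_preg

    def forward_load(self, load_dest_reg, predicted_store_id):
        """If the load is predicted dependent on a tracked store, map
        its target register to the register that already holds the
        store data, skipping the memory round trip."""
        if predicted_store_id in self.pending_stores:
            self.reg_map[load_dest_reg] = self.pending_stores[predicted_store_id]
            return True
        return False
```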
  • Patent number: 11372677
    Abstract: When scheduling instructions for execution on a computing device, load instructions are processed before their dependent computational instructions. This can result in the load instructions being scheduled in a non-optimal order. To schedule the load instructions in a preferred order, a scheduler can speculatively schedule the load instructions without committing to their order. Subsequently, when the scheduler encounters the dependent computational instructions, the scheduler can reorder the speculatively scheduled load instructions according to the execution order of the dependent computational instructions.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: June 28, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: Robert Geva
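The reordering step above can be sketched in a few lines: loads are collected speculatively, and once the dependent computational instructions are seen in execution order, the loads are reordered to match. The function and its inputs are hypothetical simplifications:

```python
def reorder_loads(speculative_loads, compute_order, deps):
    """Sketch: reorder speculatively scheduled loads to match the
    execution order of their dependent computations. `deps` maps each
    compute instruction to the load it consumes (illustrative only)."""
    ordered = [deps[c] for c in compute_order if c in deps]
    # loads with no observed dependent keep their speculative positions
    leftovers = [ld for ld in speculative_loads if ld not in ordered]
    return ordered + leftovers
```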
  • Patent number: 11347994
    Abstract: The present disclosure is directed to systems and methods of bit-serial, in-memory execution of at least an nth layer of a multi-layer neural network in a first on-chip processor memory circuitry portion contemporaneous with prefetching and storing layer weights associated with the (n+1)st layer of the multi-layer neural network in a second on-chip processor memory circuitry portion. The storage of layer weights in on-chip processor memory circuitry beneficially decreases the time required to transfer the layer weights upon execution of the (n+1)st layer of the multi-layer neural network by the first on-chip processor memory circuitry portion. In addition, the on-chip processor memory circuitry may include a third on-chip processor memory circuitry portion used to store intermediate and/or final input/output values associated with one or more layers included in the multi-layer neural network.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: May 31, 2022
    Assignee: Intel Corporation
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
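The layer-weight prefetch described above is essentially double buffering: while layer n executes out of one on-chip buffer, the weights for layer n+1 land in the other. A sequential sketch (the prefetch would really overlap the execute; all callables are hypothetical):

```python
def run_network(layers, fetch_weights, execute):
    """Sketch of overlapping layer-n execution with layer-(n+1) weight
    prefetch using two on-chip buffers (hypothetical callables)."""
    buffers = [None, None]
    buffers[0] = fetch_weights(0)          # weights for the first layer
    x = None
    for n in range(len(layers)):
        if n + 1 < len(layers):
            # prefetch the next layer's weights into the other buffer;
            # conceptually concurrent with the execute() below
            buffers[(n + 1) % 2] = fetch_weights(n + 1)
        x = execute(layers[n], buffers[n % 2], x)
    return x
```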
  • Patent number: 11334786
    Abstract: A method (and structure and computer product) to optimize an operation in a Neural Network Accelerator (NNAccel) that includes a hierarchy of neural network layers as computational stages for the NNAccel and a configurable hierarchy of memory modules including one or more on-chip Static Random-Access Memory (SRAM) modules and one or more Dynamic Random-Access Memory (DRAM) modules, where each memory module is controlled by a plurality of operational parameters that are adjustable by a controller of the NNAccel. The method includes detecting bit error rates of memory modules currently being used by the NNAccel and determining, by the controller, whether the detected bit error rates allow the accuracy of the NNAccel's processing to meet a predetermined threshold value. One or more operational parameters of one or more memory modules are dynamically changed by the controller to move to a higher-accuracy state when the accuracy is below the predetermined threshold value.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: May 17, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alper Buyuktosunoglu, Nandhini Chandramoorthy, Prashant Jayaprakash Nair, Karthik V. Swaminathan
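The feedback loop above (measure bit error rate, compare against what the accuracy target tolerates, nudge a memory module to a higher-accuracy operating point) can be sketched as below. The choice of supply voltage as the adjusted parameter, and the tenfold BER drop, are illustrative assumptions, not claims from the patent:

```python
def adjust_memory(modules, ber_threshold):
    """Sketch: if a module's measured bit error rate exceeds what the
    accuracy target tolerates, move it to a higher-accuracy state
    (here, a higher supply voltage; names are hypothetical)."""
    for m in modules:
        if m["bit_error_rate"] > ber_threshold:
            m["voltage"] += 0.05       # higher-accuracy, higher-power state
            m["bit_error_rate"] /= 10  # stand-in for the resulting BER drop
    return modules
```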
  • Patent number: 11327908
    Abstract: A memory management system for facilitating communication between an interconnect and a system memory of a system-on-chip includes a plurality of memory controllers coupled with the system memory, and processing circuitry coupled with the interconnect and the plurality of memory controllers. The processing circuitry is configured to receive a transaction request from the interconnect, and identify a memory controller of the plurality of memory controllers that is associated with the received transaction request. Further, the processing circuitry is configured to provide the transaction request to the identified memory controller for an execution of a transaction associated with the received transaction request.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 10, 2022
    Assignee: NXP USA, Inc.
    Inventor: Ankur Behl
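The identify-and-dispatch step above amounts to routing each transaction to the memory controller that owns its address. A minimal sketch, assuming each controller owns one contiguous address range (a hypothetical layout; real interleavings can be far more elaborate):

```python
def route_transaction(request, controllers):
    """Sketch: identify which memory controller is associated with the
    request's address and hand the transaction to it."""
    for ctrl in controllers:
        if ctrl["base"] <= request["addr"] < ctrl["base"] + ctrl["size"]:
            return ctrl["id"]
    raise ValueError("address not mapped to any controller")
```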
  • Patent number: 11321133
    Abstract: Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to the storage unit is provided to a machine learning module. The machine learning module receives a computed number of stage tasks and a computed number of destage tasks. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: May 3, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
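The adjustment loop described above (feed performance information to a machine learning module, adopt the stage/destage task counts it computes) can be sketched as follows; `ml_module` is a hypothetical callable standing in for the trained model:

```python
def retune_tasks(pool, ml_module, perf_info):
    """Sketch: feed storage performance information to an ML module and
    adjust the allocated task counts toward what it computes."""
    computed_stage, computed_destage = ml_module(perf_info)
    pool["stage_tasks"] = computed_stage      # storage-unit -> cache staging
    pool["destage_tasks"] = computed_destage  # cache -> storage-unit writeback
    return pool
```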
  • Patent number: 11314663
    Abstract: An electronic apparatus includes a connection port having a plurality of pins, the connection port being configured to receive a signal through a first pin, the first pin being predefined to correspond to any one of signals of a plurality of protocols receivable through the connection port; and a processor configured to: identify, based on a connection between a connector of an external apparatus and the connection port, whether the signal has a characteristic defined for the first pin, identify, based on the identification that the signal has the characteristic, a protocol corresponding to the characteristic among the plurality of protocols, and control to communicate with the external apparatus based on the identified protocol.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: April 26, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyunjong Shin, Minsang Kim
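The protocol identification above reduces to matching the observed signal on the first pin against the characteristic defined for each supported protocol. A sketch, with the characteristics modeled as hypothetical predicates:

```python
def identify_protocol(signal, characteristics):
    """Sketch: return the protocol whose defined characteristic the
    observed signal matches, or None if none match."""
    for protocol, matches in characteristics.items():
        if matches(signal):
            return protocol
    return None  # no defined characteristic: do not communicate
```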
  • Patent number: 11314675
    Abstract: A data processing system comprises a master node to initiate data transmissions; one or more slave nodes to receive the data transmissions; and a home node to control coherency amongst data stored by the data processing system; in which at least one data transmission from the master node to one of the one or more slave nodes bypasses the home node.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: April 26, 2022
    Assignee: Arm Limited
    Inventors: Guanghui Geng, Andrew David Tune, Daniel Adam Sara, Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal
  • Patent number: 11308013
    Abstract: A data acquisition system includes a receptacle and a data acquisition device. The receptacle has a housing, sensor inputs to receive data signals from sensors coupled to an object, and a rib to block insertion of a standard Universal Serial Bus (USB) plug and facilitate insertion of a modified USB plug having a slot that mates with the rib. The data acquisition device includes circuitry to receive, store and process data, a USB plug having pins operatively coupled to the circuitry, a first subset of pins configured to receive data signals from the receptacle and a second subset of pins configured to support standard USB communication with USB-compliant devices, and a slot formed in the USB plug such that the slot facilitates interconnection of the USB plug both with standard USB-compliant devices and with the receptacle, the slot mating with the rib to facilitate interconnection.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: April 19, 2022
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Erich Vlach
  • Patent number: 11288388
    Abstract: A secure proxy-free data store access system includes a plurality of hierarchically privileged nested tuple-space partitions in a content addressable memory, a plurality of hierarchically contained programming interface functions defined within each of the plurality of hierarchically privileged nested tuple-space partitions, and a plurality of virtual machines each associated with a processor core associated with at least one tuple-space partition. The system further includes logic for reading and writing data from the content addressable memory via a transactional read pipeline and a transactional write pipeline.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: March 29, 2022
    Assignee: Substrate Inc.
    Inventors: Christian Beaumont, Behnaz Beaumont, Jouke van der Maas, Jan Drake
  • Patent number: 11281597
    Abstract: Embodiments of the present disclosure are directed toward a universal serial bus (USB) device and a USB host controller. The USB device and USB host controller may be configured to couple to one another via a USB link that may include a high-speed data line and a low-speed data line. The USB device may then transmit, via the high-speed data line, an indication of a digital image to the USB host controller. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: March 22, 2022
    Assignee: Intel Corporation
    Inventors: Huimin Chen, Karthi R. Vadivelu, Abdul R. Ismail, Raul Gutierrez
  • Patent number: 11281497
    Abstract: Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to the storage unit is provided to a machine learning module. The machine learning module receives a computed number of stage tasks and a computed number of destage tasks. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
  • Patent number: 11275612
    Abstract: Systems, apparatuses, and methods for efficient parallel execution of multiple work units in a processor by reducing a number of memory accesses are disclosed. A computing system includes a processor core with a parallel data architecture. One or more of a software application and firmware implement matrix operations and support the broadcast of shared data to multiple compute units of the processor core. The application creates thread groups by matching compute kernels of the application with data items, and grouping the resulting work units into thread groups. The application assigns the thread groups to compute units based on detecting shared data among the compute units. Rather than sending multiple read accesses to the memory subsystem for the shared data, a single access request is generated. The single access request includes information to identify the multiple compute units that will receive the shared data when it is broadcast.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: March 15, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Li Peng, Jian Yang, Chi Tang
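The coalescing step above (replace per-unit reads of the same data with one broadcast request naming every recipient) can be sketched as below; the request format and names are hypothetical:

```python
def coalesce_reads(unit_needs):
    """Sketch: when several compute units need the same address, emit
    one broadcast read request listing all recipients instead of one
    request per unit. `unit_needs` maps compute-unit id -> address."""
    by_addr = {}
    for unit, addr in unit_needs.items():
        by_addr.setdefault(addr, []).append(unit)
    # one request per distinct address, identifying every target unit
    return [{"addr": addr, "broadcast_to": sorted(units)}
            for addr, units in sorted(by_addr.items())]
```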
  • Patent number: 11269798
    Abstract: Systems, methods, apparatuses, and software for computing systems are provided herein. In one example, a system includes a plurality of first modules each having a Peripheral Component Interconnect Express (PCIe) interface and a processor, and a plurality of second modules each having a PCIe interface. PCIe switch circuitry is coupled to the PCIe interfaces of the first modules and the PCIe interfaces of the second modules, wherein the PCIe switch circuitry is configured to establish logical isolation in the PCIe switch circuitry between one or more first modules and one or more second modules. At least one processor instantiates access to the one or more second modules for the one or more first modules over at least the logical isolation in the PCIe switch circuitry.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: March 8, 2022
    Assignee: Liqid Inc.
    Inventors: Christopher Long, Jason Breakstone
  • Patent number: 11271720
    Abstract: The present disclosure includes apparatuses, methods, and systems for validating data stored in memory using cryptographic hashes. An embodiment includes a memory, and circuitry configured to divide the memory into a plurality of segments, wherein each respective segment is associated with a different cryptographic hash, validate, during a powering of the memory, data stored in each respective one of a first number of the plurality of segments using the cryptographic hash associated with that respective segment, and validate, after the powering of the memory, data stored in each respective one of a second number of the plurality of segments using the cryptographic hash associated with that respective segment.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: March 8, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Alberto Troia, Antonino Mondello
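The split validation above (check the hashes of a first group of segments during power-on, and the remaining segments afterwards) can be sketched as follows. SHA-256 and the flat byte-string memory model are illustrative assumptions:

```python
import hashlib

def validate_segments(memory, segment_size, hashes, first_count):
    """Sketch: split memory into fixed-size segments, check the first
    `first_count` segment hashes at power-on and the rest afterwards
    (`hashes` are precomputed SHA-256 hex digests; names hypothetical)."""
    segments = [memory[i:i + segment_size]
                for i in range(0, len(memory), segment_size)]
    check = lambda i: hashlib.sha256(segments[i]).hexdigest() == hashes[i]
    during_power_on = all(check(i) for i in range(first_count))
    after_power_on = all(check(i) for i in range(first_count, len(segments)))
    return during_power_on, after_power_on
```

Deferring part of the validation keeps the power-on critical path short while still covering all of memory.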
  • Patent number: 11263042
    Abstract: A control processing device includes: a determining unit that determines, upon receiving a write command of a transaction, whether the write of the transaction is to be executed using a write lock or without using the write lock; a writing unit that acquires the write lock for the transaction and executes the write of the transaction when the determining unit has determined that the write is to be executed using the write lock; an optimizing unit that discards the write command of the transaction when the determining unit has determined that the write is to be executed without using the write lock; and a return unit that returns to a command source that the write of the transaction has succeeded, either after the write by the writing unit or after the discarding of the write command by the optimizing unit.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: March 1, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Sho Nakazono, Hiroki Kumazaki, Hiroyuki Uchiyama
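The four units described above (determine, write with lock, discard, acknowledge) can be sketched as one small class. The decision predicate is a hypothetical placeholder; the patent's actual criterion for skipping the lock is not stated in the abstract:

```python
class WriteProcessor:
    """Sketch of the determine / write-with-lock / discard-and-ack flow
    (the decision rule here is a hypothetical stand-in)."""

    def __init__(self, needs_lock):
        self.needs_lock = needs_lock  # determining unit: txn -> bool
        self.store = {}
        self.locked = set()

    def handle_write(self, txn, key, value):
        if self.needs_lock(txn):
            self.locked.add(key)      # writing unit: acquire the write lock
            self.store[key] = value
            self.locked.discard(key)
        # else: optimizing unit discards the write command entirely
        return "success"              # return unit acks either way
```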
  • Patent number: 11258676
    Abstract: An electronic meeting tool and method for communicating arbitrary media content from users at a meeting includes a node configuration adapted to operate a display node of a communications network, the display node being coupled to a first display. The node configuration is adapted to receive user-selected arbitrary media content and to control display of that content on the first display. A peripheral device adapted to communicate the user-selected arbitrary media content via the communications network is a connection unit that includes a transmitter and a connector adapted to couple to a port of a processing device having a second display, a memory, and an operating system. A program is adapted to obtain the user-selected arbitrary media content, leaving a zero footprint on termination. The user may trigger a transfer of the user-selected arbitrary media content to the transmitter.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: February 22, 2022
    Assignee: BARCO N.V.
    Inventors: Koen Simon Herman Beel, Yoav Nir, Filip Josephine Johan Louwet, Guy Coen
  • Patent number: 11249939
    Abstract: An Execution Array Memory Array (XarMa©) processor (pronounced "sharma," which means happiness in Sanskrit) is described for signal processing and Internet of Things (IoT) applications. The XarMa© processor uses a 1 to K+1 adjacency network in an array of execution units. The 1 to K+1 adjacency refers to connections made separately in rows and in columns of execution unit and local file nodes, where the number of Rows ≥ K > 1, the number of Columns ≥ K > 1, and K is an odd integer. Instead of a large central multi-ported register file, a distributed set of storage files local to each execution unit is used. The instruction set architecture uses instructions that specify forwarding of execution results to the execution units associated with destination instructions. This execution array is scalable to support cost-effective, low-power, high-performance application-specific processing focused on target product requirements.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: February 15, 2022
    Inventor: Gerald George Pechanek
  • Patent number: 11244130
    Abstract: An interim charging system includes a docking station and a case for a mobile device. The case is magnetically secured to the docking station. The docking station includes a power source and a charger that charges the mobile device in the case.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: February 8, 2022
    Assignee: The Code Corporation
    Inventor: Phil Utykanski