Patents Examined by Cheng-Yuan Tseng
-
Patent number: 12086599
Abstract: A method of operating an accelerator includes receiving, from a central processing unit (CPU), commands for the accelerator and a peripheral device of the accelerator, processing the received commands according to a subject of performance of each of the commands, and transmitting a completion message indicating that performance of the commands is completed to the CPU after the performance of the commands is completed.
Type: Grant
Filed: April 11, 2022
Date of Patent: September 10, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jaehyung Ahn, Wooseok Chang, Yongha Park
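As a rough illustration of the flow this abstract describes (the names and data layout below are hypothetical, not taken from the patent), each command in a batch is handed to whichever unit is its subject of performance, and a single completion message goes back to the CPU only after every command has finished:

```python
# Hypothetical sketch of the batch-dispatch / single-completion flow; the class
# and field names are illustrative only.

class Unit:
    def __init__(self, name):
        self.name = name
        self.done = []

    def execute(self, cmd):
        # Stand-in for real work on the accelerator core or its peripheral device.
        self.done.append(cmd["op"])

def run_command_batch(commands, accelerator_core, peripheral):
    for cmd in commands:
        # The subject of performance of each command selects the unit that runs it.
        unit = accelerator_core if cmd["target"] == "accelerator" else peripheral
        unit.execute(cmd)
    # A single completion message is returned to the CPU only after all commands finish.
    return {"type": "completion", "commands_completed": len(commands)}

core, dma = Unit("core"), Unit("peripheral-dma")
print(run_command_batch(
    [{"target": "accelerator", "op": "matmul"}, {"target": "peripheral", "op": "copy"}],
    core, dma))
```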
-
Patent number: 12086591
Abstract: Techniques and mechanisms for determining a relative order in which a load instruction and a store instruction are to be executed. In an embodiment, a processor detects an address collision event wherein two instructions, corresponding to different respective instruction pointer values, target the same memory address. Based on the address collision event, the processor identifies respective instruction types of the two instructions as an aliasing instruction type pair. The processor further determines a count of decisions each to forego a reversal of an order of execution of instructions. Each decision represented in the count is based on instructions which are each of a different respective instruction type of the aliasing instruction type pair. In another embodiment, the processor determines, based on the count of decisions, whether a later load instruction is to be advanced in an order of instruction execution.
Type: Grant
Filed: March 26, 2021
Date of Patent: September 10, 2024
Assignee: Intel Corporation
Inventors: Sudhanshu Shukla, Jayesh Gaur, Stanislav Shwartsman, Pavel I. Kryukov
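A toy sketch of the counting idea (the threshold and instruction-type names are assumptions, not values from the patent): decisions to forego reordering are tallied per aliasing instruction-type pair, and that tally feeds the decision of whether a later load of the same type may be advanced:

```python
# Hypothetical sketch: count decisions to forego reordering per aliasing
# instruction-type pair and use the count to gate advancing a later load.
from collections import defaultdict

FOREGO_THRESHOLD = 4                      # illustrative value, not from the patent
forego_counts = defaultdict(int)          # (load_type, store_type) -> count

def record_foregone_reversal(load_type, store_type):
    # An address collision was seen, so reordering was foregone for this pair.
    forego_counts[(load_type, store_type)] += 1

def may_advance_load(load_type, store_type):
    # Advance the later load only while the pair has rarely needed ordering.
    return forego_counts[(load_type, store_type)] < FOREGO_THRESHOLD

record_foregone_reversal("LD.W", "ST.W")
print(may_advance_load("LD.W", "ST.W"))   # True until the counter reaches the threshold
```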
-
Patent number: 12079494
Abstract: A storage system has a first storage and a second storage. The first storage has a first plurality of blades with first computing resources, first RAM resources and first solid-state storage resources. The second storage has a second plurality of blades with second computing resources, second RAM resources and second solid-state storage resources. The first computing resources and the second computing resources cooperate to determine on which blades of the first and second pluralities of blades, and in which storage of the first and second storage, to perform compute processes and memory controller processes, using which of the first and second computing resources.
Type: Grant
Filed: December 28, 2021
Date of Patent: September 3, 2024
Assignee: PURE STORAGE, INC.
Inventors: Hari Kannan, Ying Gao, Peter E. Kirkpatrick
-
Patent number: 12079158
Abstract: An integrated circuit includes a plurality of kernels and a virtual machine coupled to the plurality of kernels. The virtual machine is configured to interpret instructions directed to different ones of the plurality of kernels. The virtual machine is configured to control operation of the different ones of the plurality of kernels responsive to the instructions.
Type: Grant
Filed: July 25, 2022
Date of Patent: September 3, 2024
Assignee: Xilinx, Inc.
Inventors: Sanket Pandit, Jorn Tuyls, Xiao Teng, Rajeev Patwari, Ehsan Ghasemi, Elliott Delaye, Aaron Ng
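A minimal sketch of the idea of a virtual machine interpreting instructions directed at different kernels (the instruction encoding and class names are invented for illustration; they are not the patented design):

```python
# Hypothetical sketch: a tiny interpreter that directs each instruction to the
# kernel it names and drives that kernel's operation.

class Kernel:
    def __init__(self, name):
        self.name = name
        self.log = []

    def control(self, op, args):
        # Stand-in for configuring or starting the hardware kernel.
        self.log.append((op, args))

class KernelVM:
    def __init__(self, kernels):
        self.kernels = kernels                  # kernel id -> Kernel

    def run(self, program):
        # Each instruction names the kernel it is directed to.
        for kernel_id, op, args in program:
            self.kernels[kernel_id].control(op, args)

vm = KernelVM({0: Kernel("conv"), 1: Kernel("pool")})
vm.run([(0, "load_weights", "w0"), (0, "start", None), (1, "start", None)])
print(vm.kernels[0].log)
```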
-
Patent number: 12061969
Abstract: A system and method for enhancing C*RAM, improving its performance for known applications such as video processing but also making it well suited to low-power implementation of neural nets. The required computing engine is decomposed into banks of enhanced C*RAM each having a SIMD controller, thus allowing operations at several scales simultaneously. Several configurations of suitable controllers are discussed, along with communication structures and enhanced processing elements.
Type: Grant
Filed: November 10, 2022
Date of Patent: August 13, 2024
Assignee: UNTETHER AI CORPORATION
Inventors: William Martin Snelgrove, Darrick Wiebe
-
Patent number: 12056576
Abstract: Methods, systems, and apparatus for operating a system of qubits. In one aspect, a method includes operating a first qubit from a first plurality of qubits at a first qubit frequency from a first qubit frequency region, and operating a second qubit from the first plurality of qubits at a second qubit frequency from a second qubit frequency region, the second qubit frequency and the second qubit frequency region being different to the first qubit frequency and the first qubit frequency region, respectively, wherein the second qubit is diagonal to the first qubit in a two-dimensional grid of qubits.
Type: Grant
Filed: May 25, 2023
Date of Patent: August 6, 2024
Assignee: Google LLC
Inventors: John Martinis, Rami Barends, Austin Greig Fowler
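A toy illustration of the diagonal arrangement in the abstract (not the patented frequency plan; the regions, values, and parity rule are assumptions): frequencies are drawn from alternating regions so that diagonally adjacent qubits operate in different regions:

```python
# Toy sketch: assign frequencies on a 2D grid of qubits so that diagonal
# neighbours draw from different frequency regions. Values are illustrative.

REGION_A = (5.0, 5.5)   # GHz, made-up ranges
REGION_B = (6.0, 6.5)

def frequency_region(row, col):
    # Alternating by column puts (row, col) and (row + 1, col + 1), which differ
    # in column parity, into different regions.
    return REGION_A if col % 2 == 0 else REGION_B

def assign_frequencies(rows, cols):
    plan = {}
    for r in range(rows):
        for c in range(cols):
            lo, hi = frequency_region(r, c)
            plan[(r, c)] = (lo + hi) / 2.0      # pick the middle of the region
    return plan

plan = assign_frequencies(3, 3)
print(plan[(0, 0)], plan[(1, 1)])   # diagonal neighbours, different regions
```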
-
Patent number: 12050810
Abstract: Systems and methods for hardware-based asynchronous logging include: initiating first and second atomic regions on first and second cores of a central processing unit (CPU); and asynchronously logging data for the first atomic region and the second atomic region using the CPU by: asynchronously performing log persist operations (LPOs) to log an old data value from each atomic region; updating the old data value to a new data value from each atomic region; tracking dependencies between the first atomic region and the second atomic region using a memory controller; asynchronously performing data persist operations (DPOs) to persist the new data value for each atomic region; and committing the first atomic region and the second atomic region based on the dependencies using the memory controller of the CPU.
Type: Grant
Filed: September 27, 2022
Date of Patent: July 30, 2024
Assignee: The Board of Trustees of the University of Illinois
Inventors: Ahmed Abulila, Nam Sung Kim, Izzat El Hajj
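A software-level sketch of the sequence the abstract lists (this is an illustrative simulation, not the hardware mechanism): log the old value (LPO), update in place, record cross-region dependencies, persist the new value (DPO), then commit regions in dependency order:

```python
# Hypothetical sketch of the asynchronous-logging flow; names are illustrative.

memory = {"x": 1, "y": 2}
log, persisted, commit_order = [], [], []
dependencies = {}                      # region -> set of regions it depends on

def atomic_update(region, addr, new_value, depends_on=()):
    log.append((region, addr, memory[addr]))         # log persist operation (old value)
    memory[addr] = new_value                         # update the value in place
    dependencies.setdefault(region, set()).update(depends_on)
    persisted.append((region, addr, new_value))      # data persist operation (new value)

def commit(region):
    # A region commits only after the regions it depends on have committed.
    for dep in dependencies.get(region, ()):
        if dep not in commit_order:
            commit(dep)
    if region not in commit_order:
        commit_order.append(region)

atomic_update("R1", "x", 10)
atomic_update("R2", "y", 20, depends_on=("R1",))
commit("R2")
print(commit_order)    # ['R1', 'R2']
```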
-
Patent number: 12045155
Abstract: The present disclosure involves systems, software, and computer implemented methods for efficient memory leak detection in database systems. One example method includes receiving a query at a database system. Memory allocations and deallocations are traced during processing of the query. Each memory allocation entry in a tracing file can be processed, including determining, for each allocation, whether a memory deallocation entry exists in the tracing file. A determination can be made as to whether a memory leak has occurred in response to determining whether a memory deallocation entry corresponding to a memory allocation entry exists in the tracing file. For example, a determination can be made that a memory leak has occurred in response to determining that no memory deallocation entry corresponding to an allocated memory address exists in the tracing file. One or more actions can be performed in response to determining that a memory leak has occurred.
Type: Grant
Filed: February 15, 2023
Date of Patent: July 23, 2024
Assignee: SAP SE
Inventors: Yinghua Ouyang, Zhen Tian
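The trace-matching step lends itself to a short sketch (the trace-file format below is made up for illustration): every allocation entry without a corresponding deallocation entry is reported as a possible leak:

```python
# Hypothetical sketch: walk the tracing file produced while the query ran and
# flag any allocation that has no matching deallocation entry.

def find_leaks(trace_lines):
    allocated = {}                              # address -> size still outstanding
    for line in trace_lines:
        kind, address, *rest = line.split()
        if kind == "ALLOC":
            allocated[address] = int(rest[0])
        elif kind == "FREE":
            allocated.pop(address, None)        # matched deallocation entry
    return allocated                            # whatever remains was never freed

trace = ["ALLOC 0x1000 64", "ALLOC 0x2000 128", "FREE 0x1000"]
leaks = find_leaks(trace)
if leaks:
    print("possible memory leak:", leaks)       # {'0x2000': 128}
```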
-
Patent number: 12045179
Abstract: A method for handling configuration data for an interconnection protocol within hibernation operation, a controller and an electronic device are provided. The method includes the following steps. In an electronic device, a hibernation entering indication signal indicating entering a hibernation state of the interconnection protocol is received. The electronic device has a memory and an index table, wherein the index table includes attribute identifiers corresponding to management information base (MIB) attributes, which belong to sub-layers of a link layer of the interconnection protocol and are required to be retained during hibernation. In response to the hibernation entering indication signal, MIB attribute storing is performed by a hardware protocol engine for implementing the link layer to read, for each one of the sub-layers, attribute data from the sub-layers according to the attribute identifiers from the index table sequentially and to write the attribute data sequentially to the memory.
Type: Grant
Filed: March 1, 2023
Date of Patent: July 23, 2024
Assignee: SK hynix inc.
Inventors: Fu Hsiung Lin, Lan Feng Wang
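A small sketch of the attribute-saving loop on hibernation entry (the sub-layer names, attribute identifiers, and values are invented for illustration): the index table is walked sub-layer by sub-layer, each listed MIB attribute is read, and the results are written sequentially to a retention buffer:

```python
# Hypothetical sketch of sequential MIB-attribute saving driven by an index table.

index_table = {
    "PHY_adapter": [0x1560, 0x1561],
    "data_link":   [0x2040, 0x2041, 0x2044],
}

def read_attribute(sub_layer, attr_id):
    # Stand-in for the hardware protocol engine reading one MIB attribute.
    return (attr_id * 7) & 0xFF

def save_mib_attributes(index_table):
    retention_memory = []
    for sub_layer, attr_ids in index_table.items():
        for attr_id in attr_ids:                      # sequential per the index table
            value = read_attribute(sub_layer, attr_id)
            retention_memory.append((attr_id, value)) # written sequentially to memory
    return retention_memory

print(save_mib_attributes(index_table))
```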
-
Patent number: 12039329
Abstract: Systems, methods, and apparatuses relating to circuitry to implement dynamic two-pass execution of a partial flag updating instruction in a processor are described.
Type: Grant
Filed: December 24, 2020
Date of Patent: July 16, 2024
Assignee: Intel Corporation
Inventors: Wing Shek Wong, Vikash Agarwal, Charles Vitu, Mihir Shah
-
Patent number: 12026106
Abstract: The present disclosure provides an interconnect for a non-uniform memory architecture platform to provide remote access where data can dynamically and adaptively be compressed and decompressed at the interconnect link. A requesting interconnect link can add a delay before transmitting requested data onto an interconnect bus, compress the data before transmission, or packetize and compress data before transmission. Likewise, a remote interconnect link can decompress request data.
Type: Grant
Filed: March 30, 2020
Date of Patent: July 2, 2024
Assignee: INTEL CORPORATION
Inventors: Keqiang Wu, Zhidong Yu, Cheng Xu, Samuel Ortiz, Weiting Chen
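A rough software analogue of the per-link choice (zlib stands in for whatever compressor the hardware would use; the modes and packet size are assumptions): the sending link either forwards raw data, compresses it, or packetizes and compresses it, and the receiving link undoes the compression:

```python
# Hypothetical sketch of adaptive compression at an interconnect link.
import zlib

def send_over_link(data: bytes, mode: str):
    if mode == "raw":
        return ("raw", data)                       # possibly after an added delay
    if mode == "compress":
        return ("compress", zlib.compress(data))
    if mode == "packetize+compress":
        packets = [data[i:i + 64] for i in range(0, len(data), 64)]
        return ("packetize+compress", [zlib.compress(p) for p in packets])
    raise ValueError(mode)

def receive_over_link(kind, payload) -> bytes:
    if kind == "raw":
        return payload
    if kind == "compress":
        return zlib.decompress(payload)
    return b"".join(zlib.decompress(p) for p in payload)

msg = b"remote memory line" * 8
assert receive_over_link(*send_over_link(msg, "packetize+compress")) == msg
```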
-
Patent number: 12020067
Abstract: A method of scheduling instructions within a parallel processing unit is described. The method comprises decoding, in an instruction decoder, an instruction in a scheduled task in an active state, and checking, by an instruction controller, if an ALU targeted by the decoded instruction is a primary instruction pipeline. If the targeted ALU is a primary instruction pipeline, a list associated with the primary instruction pipeline is checked to determine whether the scheduled task is already included in the list. If the scheduled task is already included in the list, the decoded instruction is sent to the primary instruction pipeline.
Type: Grant
Filed: May 17, 2022
Date of Patent: June 25, 2024
Assignee: Imagination Technologies Limited
Inventors: Simon Nield, Yoong-Chert Foo, Adam de Grasse, Luca Iuliano
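The two checks in the abstract can be sketched directly (the data structures and ALU names are invented; the abstract does not say what happens in the other cases, so they are left as a placeholder):

```python
# Hypothetical sketch: forward the decoded instruction only when the targeted ALU
# is a primary pipeline and its task is already on that pipeline's list.

primary_pipelines = {"ALU0": ["taskA"], "ALU1": []}   # pipeline -> scheduled-task list

def schedule(decoded_instruction, task):
    alu = decoded_instruction["target_alu"]
    if alu in primary_pipelines:                      # targeted ALU is a primary pipeline
        if task in primary_pipelines[alu]:            # task already included in the list
            return ("send_to_pipeline", alu)
    return ("other_handling", alu)                    # cases the abstract leaves open

print(schedule({"target_alu": "ALU0", "op": "fmul"}, "taskA"))   # sent to ALU0
print(schedule({"target_alu": "ALU1", "op": "fmul"}, "taskA"))   # not on ALU1's list
```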
-
Patent number: 12013808
Abstract: Embodiments are generally directed to a multi-tile architecture for graphics operations. An embodiment of an apparatus includes a multi-tile architecture for graphics operations including a multi-tile graphics processor, the multi-tile processor including one or more dies; multiple processor tiles installed on the one or more dies; and a structure to interconnect the processor tiles on the one or more dies, wherein the structure is to enable communications between the processor tiles.
Type: Grant
Filed: March 14, 2020
Date of Patent: June 18, 2024
Assignee: INTEL CORPORATION
Inventors: Altug Koker, Ben Ashbaugh, Scott Janus, Aravindh Anantaraman, Abhishek R. Appu, Niranjan Cooray, Varghese George, Arthur Hunter, Brent E. Insko, Elmoustapha Ould-Ahmed-Vall, Selvakumar Panneer, Vasanth Ranganathan, Joydeep Ray, Kamal Sinha, Lakshminarayanan Striramassarma, Prasoonkumar Surti, Saurabh Tangri
-
Patent number: 12008371
Abstract: Systems, apparatuses, and methods for implementing as part of a processor pipeline a reprogrammable execution unit capable of executing specialized instructions are disclosed. A processor includes one or more reprogrammable execution units which can be programmed to execute different types of customized instructions. When the processor loads a program for execution, the processor loads a bitfile associated with the program. The processor programs a reprogrammable execution unit with the bitfile so that the reprogrammable execution unit is capable of executing specialized instructions associated with the program. During execution, a dispatch unit dispatches the specialized instructions to the reprogrammable execution unit for execution.
Type: Grant
Filed: August 12, 2022
Date of Patent: June 11, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Andrew G. Kegel
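A sketch of the load-and-dispatch flow (bitfile contents, opcode names, and unit names are hypothetical): loading a program also programs a reprogrammable execution unit from the program's bitfile, after which the dispatcher steers that program's specialized opcodes to the unit:

```python
# Hypothetical sketch: program a reprogrammable unit from a bitfile and dispatch
# specialized instructions to it; ordinary instructions go to fixed units.

class ReprogrammableUnit:
    def __init__(self):
        self.supported_opcodes = set()

    def program(self, bitfile):
        # Programming the unit teaches it the program's specialized instructions.
        self.supported_opcodes = set(bitfile["opcodes"])

def load_program(program, unit):
    unit.program(program["bitfile"])

def dispatch(instruction, unit, standard_units):
    if instruction["opcode"] in unit.supported_opcodes:
        return "reprogrammable-unit"
    return standard_units[0]

unit = ReprogrammableUnit()
load_program({"name": "codec", "bitfile": {"opcodes": {"VLD_DECODE"}}}, unit)
print(dispatch({"opcode": "VLD_DECODE"}, unit, ["int-alu"]))   # reprogrammable-unit
print(dispatch({"opcode": "ADD"}, unit, ["int-alu"]))          # int-alu
```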
-
Patent number: 12001376
Abstract: An adaptive interface high availability storage device. In some embodiments, the adaptive interface high availability storage device includes: a rear storage interface connector; a rear multiplexer, connected to the rear storage interface connector; an adaptable circuit connected to the rear multiplexer; a front multiplexer, connected to the adaptable circuit; and a front storage interface connector, connected to the front multiplexer. The adaptive interface high availability storage device may be configured to operate in a single-port state or in a dual-port state. The adaptive interface high availability storage device may be configured: in the single-port state, to present a single-port host side storage interface according to a first storage protocol at the rear storage interface connector, and in the dual-port state, to present a dual-port host side storage interface according to the first storage protocol at the rear storage interface connector.
Type: Grant
Filed: January 14, 2022
Date of Patent: June 4, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Sompong Paul Olarig
-
Patent number: 11989142
Abstract: An accelerator is disclosed. A circuit may process a data to produce a processed data. A first tier storage may include a first capacity and a first latency. A second tier storage may include a second capacity and a second latency. The second capacity may be larger than the first capacity, and the second latency may be slower than the first latency. A bus may be used to transfer at least one of the data or the processed data between the first tier storage and the second tier storage.
Type: Grant
Filed: January 27, 2022
Date of Patent: May 21, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Marie Mai Nguyen, Rekha Pitchumani, Zongwang Li, Yang Seok Ki, Krishna Teja Malladi
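A toy illustration of the two-tier idea (the capacities and the spill rule are assumptions, not from the patent): data stays in the small, fast first tier while it fits and is transferred over the bus to the larger, slower second tier otherwise:

```python
# Hypothetical sketch: place data in the low-latency tier when it fits,
# otherwise spill to the high-capacity tier over the bus.

class Tier:
    def __init__(self, capacity):
        self.capacity, self.used, self.items = capacity, 0, {}

    def fits(self, size):
        return self.used + size <= self.capacity

    def put(self, key, size):
        self.items[key] = size
        self.used += size

first_tier, second_tier = Tier(capacity=4), Tier(capacity=64)

def store(key, size):
    tier = first_tier if first_tier.fits(size) else second_tier
    tier.put(key, size)
    return "first" if tier is first_tier else "second"

print(store("input", 3), store("processed", 3))   # first second
```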
-
Patent number: 11977892
Abstract: A stream of data is accessed from a memory system by an autonomous memory access engine, converted on the fly by the memory access engine, and then presented to a processor for data processing. A portion of a lookup table (LUT) containing converted data elements is preloaded into a lookaside buffer associated with the memory access engine. As the stream of data elements is fetched from the memory system, each data element in the stream of data elements is replaced with a respective converted data element obtained from the LUT in the lookaside buffer according to a content of each data element to thereby form a stream of converted data elements. The stream of converted data elements is then propagated from the memory access engine to a data processor.
Type: Grant
Filed: May 19, 2022
Date of Patent: May 7, 2024
Assignee: Texas Instruments Incorporated
Inventor: Joseph Raymond Michael Zbiciak
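The on-the-fly conversion can be sketched in a few lines (the table contents and miss handling are invented for illustration): a slice of the LUT is preloaded into a small buffer, and each fetched element is replaced by the table entry selected by its own value before reaching the processor:

```python
# Hypothetical sketch: replace each fetched element with its LUT-converted value.

FULL_LUT = {i: (i * i) & 0xFF for i in range(256)}        # illustrative table contents

def preload(lut, keys):
    return {k: lut[k] for k in keys}                      # portion held in the lookaside buffer

def convert_stream(elements, lookaside_buffer, full_lut):
    converted = []
    for element in elements:
        if element not in lookaside_buffer:               # miss: pull the entry on demand
            lookaside_buffer[element] = full_lut[element]
        converted.append(lookaside_buffer[element])       # element content indexes the LUT
    return converted

buffer = preload(FULL_LUT, range(16))
print(convert_stream([1, 2, 3, 200], buffer, FULL_LUT))   # [1, 4, 9, 64]
```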
-
Patent number: 11966358
Abstract: A processing device comprises a first set of processors comprising a first processor and a second processor, each of which comprises at least one controllable port, a first memory operably coupled to the first set of processors, at least one forward data line configured for one-way transmission of data in a forward direction between the first set of processors, and at least one backward data line configured for one-way transmission of data in a backward direction between the first set of processors, wherein the first set of processors are operably coupled in series via the at least one forward data line and the at least one backward data line.
Type: Grant
Filed: August 9, 2023
Date of Patent: April 23, 2024
Assignee: Rebellions Inc.
Inventors: Wongyu Shin, Juyeong Yoon, Sangeun Je
-
Patent number: 11966343
Abstract: A storage device is disclosed. The storage device may include a storage for a data and a controller to process an input/output (I/O) request from a host processor on the data in the storage. A computational storage unit may implement at least one service for execution on the data in the storage. A command router may route a command received from the host processor to the controller or the computational storage unit based at least in part on the command.
Type: Grant
Filed: September 22, 2021
Date of Patent: April 23, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ramzi Ammari, Changho Choi
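The command-routing idea reduces to a small dispatch (the opcode names are made up; the patent does not specify the command format): ordinary I/O commands go to the storage controller, while commands naming a computational service go to the computational storage unit:

```python
# Hypothetical sketch of a command router for a computational storage device.

IO_OPCODES = {"READ", "WRITE", "FLUSH"}
COMPUTE_OPCODES = {"FILTER", "COMPRESS", "CHECKSUM"}

def route(command):
    opcode = command["opcode"]
    if opcode in IO_OPCODES:
        return "controller"
    if opcode in COMPUTE_OPCODES:
        return "computational-storage-unit"
    raise ValueError("unknown opcode: %s" % opcode)

print(route({"opcode": "READ", "lba": 42}))                # controller
print(route({"opcode": "FILTER", "predicate": "x > 7"}))   # computational-storage-unit
```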
-
Patent number: 11966633
Abstract: An NVM algorithm generator that evaluates a Liberty file characterizing an NVM module and a memory view of the NVM module that identifies ports and associated operations of the NVM module to generate a control algorithm. The control algorithm includes a read algorithm that includes an order of operations for assigning values to ports of the NVM module to assert a read condition of a strobe port, executing a memory read on the NVM module and setting values to the ports on the NVM module to assert a complement of a program condition. The control algorithm also includes a program algorithm that includes an order of operations for assigning values to ports of the NVM module to assert the program condition of the strobe port, executing a memory write and setting values to the ports on the NVM module to assert the complement of the program condition.
Type: Grant
Filed: July 13, 2022
Date of Patent: April 23, 2024
Assignee: Cadence Design Systems, Inc.
Inventors: Steven L. Gregor, Puneet Arora
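One way to picture the generated control algorithm is as ordered lists of port assignments and operations (the port names, logic levels, and step encoding below are illustrative, not drawn from any Liberty file or from the patent):

```python
# Hypothetical sketch: represent the generated read and program algorithms as
# ordered sequences of port assignments and memory operations.

def build_read_algorithm(strobe_port, read_level, program_level):
    return [
        ("assign", strobe_port, read_level),          # assert the read condition
        ("op", "memory_read"),                        # execute the read on the NVM module
        ("assign", strobe_port, 1 - program_level),   # assert the complement of the program condition
    ]

def build_program_algorithm(strobe_port, program_level):
    return [
        ("assign", strobe_port, program_level),       # assert the program condition
        ("op", "memory_write"),                       # execute the write
        ("assign", strobe_port, 1 - program_level),   # return to the complemented condition
    ]

control_algorithm = {
    "read": build_read_algorithm("PROG_STROBE", read_level=0, program_level=1),
    "program": build_program_algorithm("PROG_STROBE", program_level=1),
}
print(control_algorithm["read"])
```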