Patent Applications Published on April 14, 2022
  • Publication number: 20220114086
    Abstract: Examples include techniques to expand system memory via use of available device memory. Circuitry at a device coupled to a host device partitions a portion of the memory capacity of a memory configured for use by compute circuitry resident at the device to execute a workload. The partitioned portion of memory capacity is reported to the host device as being available for use as a portion of system memory. An indication is received from the host device if the portion of memory capacity has been identified for use as a first portion of pooled system memory. The circuitry monitors usage of the memory capacity used by the compute circuitry to execute the workload to decide whether to place a request to the host device to reclaim the memory capacity from the first portion of pooled system memory.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Chace A. CLARK, James A. BOYD, Chet R. DOUGLAS, Andrew M. RUDOFF, Dan J. WILLIAMS
  • Publication number: 20220114087
    Abstract: The present technology relates to an electronic device. According to the present technology, a storage device that manages map data using a volatile memory device having a limited capacity may include a nonvolatile memory device and a memory controller which includes a map chunk buffer, a map chunk status table, a journal buffer, and a meta slice buffer.
    Type: Application
    Filed: April 9, 2021
    Publication date: April 14, 2022
    Inventors: Ju Hyun KIM, Jin Yeong KIM, Jae Wan YEON
  • Publication number: 20220114088
    Abstract: The present disclosure generally relates to more efficient use of a delta buffer. To utilize the delta buffer, an efficiency can be gained by utilizing absolute delta entries and relative delta entries. The absolute delta entry will include the type of delta entry, the L2P table index, the L2P table offset, and the PBA. The relative delta entry will include the type of delta entry, the L2P table offset, and the PBA offset. The relative delta entry will utilize about half of the storage space of the absolute delta entry. The relative delta entry can be used after an absolute delta entry so long as the relative delta entry is for data stored in the same block as the previous delta entry. If data is stored in a different block, then the delta entry will be an absolute delta entry.
    Type: Application
    Filed: February 23, 2021
    Publication date: April 14, 2022
    Inventors: Amir SHAHARABANY, Shay VAZA
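The absolute/relative delta-entry scheme above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the field names, the dictionary encoding, and the use of a shared L2P table index as the "same block" test are all assumptions.

```python
# Hypothetical sketch of absolute vs. relative L2P delta entries: a relative
# entry (roughly half the size) is used only when the new entry targets the
# same block as the previous entry; otherwise an absolute entry is written.

ABSOLUTE, RELATIVE = 0, 1

def append_delta(buffer, table_index, table_offset, pba):
    """Append a delta entry, preferring the compact relative form when the
    entry is for data in the same block as the previous delta entry."""
    if buffer and buffer[-1]["table_index"] == table_index:
        prev = buffer[-1]
        buffer.append({
            "type": RELATIVE,
            "table_index": table_index,       # bookkeeping only; not stored on media
            "table_offset": table_offset,
            "pba_offset": pba - prev["pba"],  # small offset instead of a full PBA
            "pba": pba,
        })
    else:
        buffer.append({
            "type": ABSOLUTE,
            "table_index": table_index,
            "table_offset": table_offset,
            "pba": pba,
        })

buf = []
append_delta(buf, table_index=7, table_offset=0, pba=1000)   # absolute
append_delta(buf, table_index=7, table_offset=1, pba=1001)   # relative (same block)
append_delta(buf, table_index=8, table_offset=0, pba=2048)   # absolute (new block)
```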
  • Publication number: 20220114089
    Abstract: A storage device includes: a memory device including a map data block containing mapping information between a logical address and a physical address; a buffer memory device for storing a block state table including block state information; and a memory controller for determining valid data of a source block among the plurality of memory blocks based on mapping information and block state information corresponding to the source block, and moving the valid data to an open memory block. The memory controller may generate a valid page list in which information of the valid data is arranged in a stripe page unit according to an order of logical addresses, and control the memory device to move the valid data to the open memory block based on the valid page list.
    Type: Application
    Filed: April 16, 2021
    Publication date: April 14, 2022
    Inventor: Hyeong Ju NA
  • Publication number: 20220114090
    Abstract: According to one embodiment, a memory system includes a non-volatile memory and a data map configured to manage validity of data written in the non-volatile memory. The data map includes a plurality of first fragment tables corresponding to a first hierarchy and a second fragment table corresponding to a second hierarchy higher than the first hierarchy. Each of the first fragment tables is used to manage the validity of each data having a predetermined size written in a range of physical address in the non-volatile memory allocated to the first fragment table. The second fragment table is used for each of the first fragment tables to manage reference destination information for referencing the first fragment table.
    Type: Application
    Filed: June 11, 2021
    Publication date: April 14, 2022
    Applicant: Kioxia Corporation
    Inventors: Yuki SASAKI, Shinichi KANNO, Takahiro KURITA
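The two-level data map above can be modeled roughly as below. All names, the range size, and the cluster granularity are assumptions for illustration: first-level tables track validity bits over a fixed physical-address range, and a second-level structure holds references to them.

```python
# Minimal sketch of a hierarchical validity map: the second level maps a
# fragment id to a first-level table; each first-level table tracks one
# validity bit per fixed-size data cluster within its address range.

FRAG_RANGE = 1024   # physical addresses covered by one first-level table (assumed)
CLUSTER = 4         # data granularity tracked by one validity bit (assumed)

class DataMap:
    def __init__(self):
        self.second_level = {}   # fragment id -> first-level table (dict of bits)

    def _locate(self, phys_addr):
        frag_id, offset = divmod(phys_addr, FRAG_RANGE)
        return frag_id, offset // CLUSTER

    def set_valid(self, phys_addr, valid=True):
        frag_id, bit = self._locate(phys_addr)
        # second level lazily references the first-level table for this range
        table = self.second_level.setdefault(frag_id, {})
        table[bit] = valid

    def is_valid(self, phys_addr):
        frag_id, bit = self._locate(phys_addr)
        return self.second_level.get(frag_id, {}).get(bit, False)

m = DataMap()
m.set_valid(4100)   # lands in fragment 4 (4100 // 1024), bit 1 within it
```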
  • Publication number: 20220114091
    Abstract: A data storage device may include a storage including a plurality of memory blocks composed of system memory blocks for storing system data and user memory blocks for storing user data; and a controller configured to: control exchange of the system and user data with the storage in response to a request of a host device; and determine whether a start condition for performing a garbage collection operation on the storage is satisfied, based on a number of bad memory blocks in the plurality of memory blocks.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Inventor: Gun Wook LEE
  • Publication number: 20220114092
    Abstract: A technology is disclosed for estimating the impact that heap memory allocations have on the behavior of garbage collection activities. A sampling mechanism randomly and without bias selects a subset of allocations for detailed analysis. A detailed analysis is performed for the selected allocation activities. Allocation monitoring data, including the type and size of the allocated object and data describing the code location at which the allocation was performed, are gathered. Further, the point in time when the allocated object is later reclaimed by garbage collection is recorded. Gathered object allocation and reclaim data are used to estimate, for individual allocation sites or types of allocated objects, the number of bytes that are allocated and the number of bytes that survive a garbage collection run. Allocation activity causing frequent garbage collection runs is identified using allocation size data, and the survived-byte counts are used to identify allocation activity causing long garbage collection runs.
    Type: Application
    Filed: October 8, 2021
    Publication date: April 14, 2022
    Applicant: Dynatrace LLC
    Inventor: Philipp LENGAUER
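The sampling-and-attribution idea can be sketched as below. This is an illustrative model, not Dynatrace's implementation; the class, the callback names, and the per-site bookkeeping are assumptions (the sampling rate is set to 1.0 here only to make the example deterministic).

```python
import random

# Sketch: randomly select an unbiased subset of allocations, record site and
# size, then credit "survived bytes" to sites whose sampled objects outlive
# a garbage collection run and drop tracking for reclaimed objects.

class AllocationSampler:
    def __init__(self, rate=0.1, seed=0):
        self.rate = rate
        self.rng = random.Random(seed)
        self.samples = {}     # object id -> (allocation site, size)
        self.allocated = {}   # site -> sampled bytes allocated
        self.survived = {}    # site -> sampled bytes that survived a GC run

    def on_alloc(self, obj_id, site, size):
        if self.rng.random() < self.rate:   # unbiased random selection
            self.samples[obj_id] = (site, size)
            self.allocated[site] = self.allocated.get(site, 0) + size

    def on_gc(self, live_ids):
        for obj_id, (site, size) in list(self.samples.items()):
            if obj_id in live_ids:
                # object survived this collection: attribute its bytes to the site
                self.survived[site] = self.survived.get(site, 0) + size
            else:
                del self.samples[obj_id]    # reclaimed; record nothing further

s = AllocationSampler(rate=1.0)
s.on_alloc(1, "siteA", 100)
s.on_alloc(2, "siteA", 50)
s.on_gc(live_ids={1})
```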
  • Publication number: 20220114093
    Abstract: Described apparatuses and methods balance memory-portion accessing. Some memory architectures are designed to accelerate memory accesses using schemes that may be at least partially dependent on memory access requests being distributed roughly equally across multiple memory portions of a memory. Examples of such memory portions include cache sets of cache memories and memory banks of multibank memories. Some code, however, may execute in a manner that concentrates memory accesses in a subset of the total memory portions, which can reduce memory responsiveness in these memory types. To account for such behaviors, described techniques can shuffle memory addresses based on a shuffle map to produce shuffled memory addresses. The shuffle map can be determined based on a count of the occurrences of a reference bit value at bit positions of the memory addresses. Using the shuffled memory address for memory requests can substantially balance the accesses across the memory portions.
    Type: Application
    Filed: October 14, 2020
    Publication date: April 14, 2022
    Applicant: Micron Technology, Inc.
    Inventor: David Andrew Roberts
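One plausible reading of the shuffle-map derivation above can be sketched as follows; this is an assumption-laden illustration, not Micron's actual mapping. Here the map is a bit permutation that moves the most "balanced" address bits (1-counts closest to 50% over recent requests) into the low-order positions that select a memory portion.

```python
# Sketch: count occurrences of the reference bit value (1) at each bit
# position of observed addresses, then build a shuffle map that permutes
# bits so well-balanced bits pick the memory portion.

def derive_shuffle_map(addresses, n_bits):
    half = len(addresses) / 2
    counts = [sum((a >> b) & 1 for a in addresses) for b in range(n_bits)]
    # order bit positions by how close their 1-count is to 50% of requests
    return sorted(range(n_bits), key=lambda b: abs(counts[b] - half))

def shuffle_address(addr, shuffle_map):
    out = 0
    for new_pos, old_pos in enumerate(shuffle_map):
        out |= ((addr >> old_pos) & 1) << new_pos
    return out

# bit 0 is always 0 (badly skewed); bit 1 alternates (well balanced),
# so the map places bit 1 into the portion-selecting position 0
shuffle_map = derive_shuffle_map([0b00, 0b10, 0b00, 0b10], n_bits=2)
```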
  • Publication number: 20220114094
    Abstract: A storage system with a controller having a persistent memory interface to local memory is provided. The persistent memory can be used to store a logical-to-physical address table. A logical-to-physical address table manager, local to the controller or remote in a secondary controller, can be used to access the logical-to-physical address table. The manager can be configured to improve bandwidth and performance in the storage system.
    Type: Application
    Filed: December 17, 2021
    Publication date: April 14, 2022
    Inventors: Daniel HELMICK, Richard S. LUCKY, Stephen GOLD, Ryan R. JONES
  • Publication number: 20220114095
    Abstract: A system includes memory and one or more processors programmed to operate a logical layer, a media link layer, and a slot layer. The logical layer is configured to send object data to and receive object data from a host according to an object storage protocol. The media link layer is configured to map the object data to virtual media addresses. The slot layer is configured to map the virtual media addresses to physical addresses of data storage devices.
    Type: Application
    Filed: October 12, 2020
    Publication date: April 14, 2022
    Inventors: Deepak Nayak, Hemant Mohan
  • Publication number: 20220114096
    Abstract: Multi-tile memory management for detecting cross-tile access, providing multi-tile inference scaling with multicasting of data via a copy operation, and providing page migration are disclosed herein. In one embodiment, a graphics processor for a multi-tile architecture includes a first graphics processing unit (GPU) having a memory and a memory controller, a second graphics processing unit (GPU) having a memory, and a cross-GPU fabric to communicatively couple the first and second GPUs. The memory controller is configured to determine whether frequent cross-tile memory accesses occur from the first GPU to the memory of the second GPU in the multi-GPU configuration and to send a message to initiate a data transfer mechanism when such accesses occur.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Lakshminarayanan Striramassarma, Prasoonkumar Surti, Varghese George, Ben Ashbaugh, Aravindh Anantaraman, Valentin Andrei, Abhishek Appu, Nicolas Galoppo Von Borries, Altug Koker, Mike Macpherson, Subramaniam Maiyuran, Nilay Mistry, Elmoustapha Ould-Ahmed-Vall, Selvakumar Panneer, Vasanth Ranganathan, Joydeep Ray, Ankur Shah, Saurabh Tangri
  • Publication number: 20220114097
    Abstract: Methods, devices, and systems for managing performance of a processor having multiple compute units. An effective number of the multiple compute units may be determined to designate as having priority. On a condition that the effective number is nonzero, the effective number of the multiple compute units may each be designated as a priority compute unit. Priority compute units may have access to a shared cache whereas non-priority compute units may not. Workgroups may be preferentially dispatched to priority compute units. Memory access requests from priority compute units may be served ahead of requests from non-priority compute units.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Zhe Wang, Sooraj Puthoor, Bradford M. Beckmann
  • Publication number: 20220114098
    Abstract: In an embodiment, an apparatus for memory access may include: a memory comprising at least one atomic memory region, and a control circuit coupled to the memory. The control circuit may be to: for each submission queue of a plurality of submission queues, identify an atomic memory location specified in a first entry of the submission queue, wherein each submission queue is to store access requests from a different requester; determine whether the atomic memory location includes existing requester information; and in response to a determination that the atomic memory location does not include existing requester information, perform an atomic operation for the atomic memory location based at least in part on the first entry of the submission queue. Other embodiments are described and claimed.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Debendra Das Sharma, Robert Blankenship
  • Publication number: 20220114099
    Abstract: In an embodiment, a system may include an interconnect device comprising first, second, and third ports; a first processor coupled to the first port; a second processor coupled to the second port; and a system memory coupled to the third port. The interconnect device may be to: receive, from the first processor via the first port, a speculative read request for a data element stored in the system memory, where coherence of the data element is managed by the second processor, receive a direct read request for the data element, merge the direct read request with the speculative read request, and transmit the data element directly to the first processor via the first port. Other embodiments are described and claimed.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Robert Blankenship, Debendra Das Sharma
  • Publication number: 20220114100
    Abstract: A method, computer program product, and computing system for receiving, at a node of a multi-node storage system, one or more updates to a reference count associated with a metadata block. One or more reference count deltas associated with the metadata block may be stored in a cache memory system of the node. An existing copy of the metadata block in a cache memory system of each other node of the multi-node storage system may be retained.
    Type: Application
    Filed: October 12, 2020
    Publication date: April 14, 2022
    Inventors: Bar David, Bar Harel, Dror Zalstein
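The reference-count-delta idea above can be modeled with a small sketch. The class and field names are assumptions; the point is that a node records per-metadata-block deltas locally rather than invalidating the copies cached by other nodes, and folds the deltas into the base count on flush.

```python
# Sketch: accumulate reference-count deltas in a node's local cache while
# peer nodes retain their existing copies of the metadata block; merge the
# deltas into the base count when flushing.

class NodeCache:
    def __init__(self, base_counts):
        self.base = dict(base_counts)   # cached copy of metadata block refcounts
        self.deltas = {}                # metadata block id -> pending delta

    def update_refcount(self, block, delta):
        # record the update as a delta; no peer invalidation is required
        self.deltas[block] = self.deltas.get(block, 0) + delta

    def effective_count(self, block):
        return self.base.get(block, 0) + self.deltas.get(block, 0)

    def flush(self):
        # fold the accumulated deltas into the base counts
        for block, d in self.deltas.items():
            self.base[block] = self.base.get(block, 0) + d
        self.deltas.clear()

n = NodeCache({"mb1": 3})
n.update_refcount("mb1", +2)
n.update_refcount("mb1", -1)
n.flush()
```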
  • Publication number: 20220114101
    Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
    Type: Application
    Filed: November 18, 2021
    Publication date: April 14, 2022
    Inventors: Richard E. KESSLER, David ASHER, Shubhendu S. MUKHERJEE, Wilson P. SNYDER, II, David CARLSON, Jason ZEBCHUK, Isam AKKAWI
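The four-hash routing pipeline above can be sketched as below. This is an illustrative stand-in: the hash function (salted BLAKE2b), the sizes, and computing the miss-path hashes eagerly (the abstract only performs the third and fourth hashes on a cache miss) are all assumptions.

```python
import hashlib

# Sketch of the multi-hash routing: an address bit picks the cache group,
# hash 1 picks a cache within the group, hash 2 picks a set of cache lines,
# and (on a miss) hash 3 picks a memory controller and hash 4 a bank
# group/bank in memory.

def _hash(addr, salt, mod):
    digest = hashlib.blake2b(f"{salt}:{addr}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % mod

def route(addr, n_groups=2, caches_per_group=4, sets=64,
          n_controllers=2, bank_groups=4, banks=4):
    group = addr & (n_groups - 1)                    # group from address bit(s)
    cache = _hash(addr, "cache", caches_per_group)   # first hash
    cache_set = _hash(addr, "set", sets)             # second hash
    controller = _hash(addr, "ctrl", n_controllers)  # third hash (miss path)
    bank_group, bank = divmod(_hash(addr, "bank", bank_groups * banks), banks)
    return group, cache, cache_set, controller, bank_group, bank

r = route(0x1234)
```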
  • Publication number: 20220114102
    Abstract: An apparatus comprises a write buffer to buffer store requests issued by the processing circuitry, prior to the store data being written to at least one cache. Draining circuitry detects a draining trigger event having potential to cause loss of state stored in the at least one cache. In response to the draining trigger event, the draining circuitry performs a draining operation to identify whether the write buffer buffers any committed store requests requiring persistence, and when the write buffer buffers at least one committed store request requiring persistence, to cause the store data associated with the at least one committed store request to be written to persistent memory. This helps to eliminate barrier instructions from software, simplifying persistent programming and improving performance.
    Type: Application
    Filed: October 13, 2020
    Publication date: April 14, 2022
    Inventors: Wei WANG, Prakash S. RAMRAKHYANI, Gustavo Federico PETRI
  • Publication number: 20220114103
    Abstract: A system, method, and apparatus for graph memory. In one embodiment, the method includes traversing program instructions disposed in an associative memory for operating a computer, the method comprising: receiving input data to be processed; and identifying a next instruction to be fetched in the memory for processing the input data via: receiving a current node ID of a current state; performing a computational test on the input data resulting in a computed edge value; generating a search key by combining at least a portion of the computed edge value with the current node ID; and accessing the next instruction in associative memory via the search key.
    Type: Application
    Filed: July 16, 2021
    Publication date: April 14, 2022
    Applicant: MoSys, Inc.
    Inventor: Michael J. Miller
  • Publication number: 20220114104
    Abstract: A method comprises receiving, in a store buffer, at least a portion of a store instruction, the at least a portion of the store instruction comprising a data operand and a first object capability register operand which comprises a first object type identifier for a first object, obtaining, from a corresponding load instruction, a second object capability register operand which comprises a second object type identifier, and determining whether the first object type identifier matches the second object type identifier.
    Type: Application
    Filed: December 24, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventor: Michael LeMay
  • Publication number: 20220114105
    Abstract: A fabric controller to provide a coherent accelerator fabric, including: a host interconnect to communicatively couple to a host device; a memory interconnect to communicatively couple to an accelerator memory; an accelerator interconnect to communicatively couple to an accelerator having a last-level cache (LLC); and an LLC controller configured to provide a bias check for memory access operations.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Ritu Gupta, Aravindh V. Anantaraman, Stephen R. Van Doren, Ashok Jagannathan
  • Publication number: 20220114106
    Abstract: Aspects of the invention include receiving, at an operating system executing on a processor, a write request from a program to write data to a memory. The write request includes a virtual memory address and the data. It is determined that the virtual memory address is not assigned to a physical memory address. Based on the determining, the unassigned virtual memory address is assigned to a physical memory address in an overflow memory. The data is written to the physical memory address in the overflow memory and an indication that the write data was successfully written is returned to the program. Future requests by the program to access the virtual memory address are directed to the physical memory address in the overflow memory.
    Type: Application
    Filed: October 13, 2020
    Publication date: April 14, 2022
    Inventors: Michael Peter Lyons, Andrew C. M. Hicks, Tynan J. Garrett, Miles C. Pedrone
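The overflow-memory flow above can be sketched with a small model. The class, region names, and dictionary-backed "memories" are assumptions; the point is that a write to an unassigned virtual address is transparently assigned a frame in overflow memory, success is reported to the program, and later accesses are directed to that frame.

```python
# Sketch: a write to an unassigned virtual address is redirected to an
# overflow memory region; the page-table entry then routes all future
# accesses for that virtual address to the overflow frame.

class OverflowVM:
    def __init__(self):
        self.page_table = {}   # virtual addr -> (region name, frame key)
        self.main = {}         # ordinary physical memory (assumed dict-backed)
        self.overflow = {}     # overflow physical memory

    def write(self, vaddr, data):
        if vaddr not in self.page_table:
            # unassigned virtual address: assign a frame in overflow memory
            self.page_table[vaddr] = ("overflow", vaddr)
        region, frame = self.page_table[vaddr]
        (self.main if region == "main" else self.overflow)[frame] = data
        return True   # indicate to the program that the write succeeded

    def read(self, vaddr):
        region, frame = self.page_table[vaddr]
        return (self.main if region == "main" else self.overflow)[frame]

vm = OverflowVM()
ok = vm.write(0x1000, "hello")   # 0x1000 was never assigned
```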
  • Publication number: 20220114107
    Abstract: Embodiments are directed to providing a secure address translation service. An embodiment of a system includes a computer-readable memory for storage of data, the computer-readable memory comprising a first memory buffer and a second memory buffer, and an attack discovery unit device comprising processing circuitry to perform operations comprising: receiving a direct memory access (DMA) request from a remote device via a Peripheral Component Interconnect Express (PCIe) link, the DMA request comprising a host physical address and a header indicating that the target memory address has previously been translated to a host physical address (HPA); and blocking the direct memory access in response to a determination that at least one of the following holds: the remote device has not obtained a valid address translation from a translation agent, or the remote device has not obtained a valid translation for the target memory address from the translation agent.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventor: Przemyslaw Duda
  • Publication number: 20220114108
    Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache memory that is coupled to the processing resources. The cache controller is configured to set an initial aging policy using an aging field based on age of cache lines within the cache memory and to determine whether a hint or an instruction to indicate a level of aging has been received.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Altug Koker, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Abhishek Appu, Aravindh Anantaraman, Valentin Andrei, Durgaprasad Bilagi, Varghese George, Brent Insko, Sanjeev Jahagirdar, Scott Janus, Pattabhiraman K., SungYe Kim, Subramaniam Maiyuran, Vasanth Ranganathan, Lakshminarayanan Striramassarma, Xinmin Tian
  • Publication number: 20220114109
    Abstract: The data processing apparatus includes a memory protection setting storage unit capable of storing a plurality of address sections as memory protection setting targets, a plurality of first determination units provided for each of the address sections stored in the memory protection setting storage unit and provisionally determining whether or not an access request is permitted based on whether or not an access destination address specified by the access request corresponds to the address section acquired from the memory protection setting storage unit, and a second determination unit finally determining whether or not the access request is permitted based on the classification information and the results of provisional determinations by the first determination unit.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventor: Yasuhiro SUGITA
  • Publication number: 20220114110
    Abstract: Access to an array is efficiently performed without revealing an accessed position. A storage 10 stores an array of concealed values [x] of an array x and an array of addresses a corresponding to respective elements of the array of concealed values [x]. A refresh unit 11 determines a concealed value [F] of a random parameter F, an array of concealed values [x′] of an array x′ generated by permuting the array x with a random permutation π, and an array of public tags b calculated from respective elements of the array of addresses a with the function Tag_F. An access unit 12 performs a desired access to the element of the array of concealed values [x′] corresponding to a tag that is calculated from a concealed value [j] of an access position j with the function Tag and the concealed value [F] of the parameter F.
    Type: Application
    Filed: January 9, 2020
    Publication date: April 14, 2022
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Koki HAMADA, Atsunori ICHIKAWA
  • Publication number: 20220114111
    Abstract: An integrated chip and a data processing method are provided, to improve system security and service processing efficiency of a system. The integrated chip includes: an application processor, configured to write first data into an off-chip memory in a normal secure mode by using a storage controller, where an address of the first data in the off-chip memory is a first address; a security processor, configured to send a first read instruction to the storage controller in an enhanced secure mode, where the first read instruction is used to request to read the first data at the first address; and the storage controller, configured to control the security processor to read the first data from the off-chip memory.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Applicant: HUAWEI TECHNOLOGIES CO.,LTD.
    Inventors: Changjian Gao, Yu Liu
  • Publication number: 20220114112
    Abstract: A method comprises generating, for a cacheline, a first tag and a second tag, the first tag and the second tag generated as a function of the user data and metadata stored in the cacheline in a first memory device and a multiplication parameter derived from a secret key; storing the user data, the metadata, the first tag, and the second tag in the cacheline of the first memory device; generating, for the cacheline, a third tag and a fourth tag, the third tag and the fourth tag generated as a function of the user data and metadata stored in the cacheline in a second memory device and the multiplication parameter; storing the user data, the metadata, the third tag, and the fourth tag in the corresponding cacheline of the second memory device; receiving, from a requesting device, a read operation directed to the cacheline; and using the first tag, the second tag, the third tag, and the fourth tag to determine whether a read error occurred during the read operation.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Sergej Deutsch, Karanvir Grewal, David M. Durham, Rajat Agarwal
  • Publication number: 20220114113
    Abstract: An example of an apparatus includes an application engine to execute an application. The apparatus includes a first communication interface to communicate with a first peripheral device. The apparatus includes a second communication interface to communicate with a second peripheral device. The apparatus includes an orchestration engine in communication with the application engine, the first communication interface, and the second communication interface. The orchestration engine is to receive an application command from the application and to broadcast the application command to the first peripheral device and the second peripheral device. The orchestration engine is to receive a device command from the first peripheral device or the second peripheral device, wherein the device command is to control the application.
    Type: Application
    Filed: June 20, 2019
    Publication date: April 14, 2022
    Applicant: Hewlett-Packard Development Company, L.P.
    Inventors: Endrigo Nadin Pinheiro, Christopher Charles Mohrman, Roger Benson, Stephen Mark Hinton, Syed Azam
  • Publication number: 20220114114
    Abstract: A method for accessing data in an external memory of a microcontroller, the microcontroller having an internal memory. The method includes: providing a classification data record in the internal memory, the classification data record for data stored in segments in the external memory including a segment-data classification for each segment, the segment-data classification characterizing the data stored in the respective segment; and a read access in which data corresponding to a predetermined data classification are read from the external memory.
    Type: Application
    Filed: September 24, 2021
    Publication date: April 14, 2022
    Inventors: Axel Aue, Martin Assel
  • Publication number: 20220114115
    Abstract: An apparatus comprising a first memory interface of a first type to couple to at least one first memory device; a second memory interface of a second type to couple to at least one second memory device; and circuitry to interleave memory requests targeting contiguous memory addresses among the at least one first memory device and the at least one second memory device.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Anand K. Enamandram, Rita Deepak Gupta, Robert A. Branch, Kerry Vander Kamp
  • Publication number: 20220114116
    Abstract: An accelerator includes: a memory configured to store input data; a plurality of shift buffers each configured to shift input data received sequentially from the memory in each cycle, and in response to input data being stored in each of internal elements of the shift buffer, output the stored input data to a processing element (PE) array; a plurality of backup buffers each configured to store input data received sequentially from the memory and transfer the stored input data to one of the shift buffers; and the PE array configured to perform an operation on input data received from one or more of the shift buffers and on a corresponding kernel.
    Type: Application
    Filed: March 4, 2021
    Publication date: April 14, 2022
    Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
    Inventors: Seung Wook LEE, Hweesoo KIM, Jung Ho AHN
  • Publication number: 20220114117
    Abstract: A storage system is provided. The storage system includes a storage device including a plurality of nonvolatile memories configured to transmit storage throughput information, and a host device configured to change connection configurations for the storage device based on the storage throughput information, wherein the host device changes the connection configurations by changing configurations for transmitter and receiver paths between the storage device and the host device independently.
    Type: Application
    Filed: May 26, 2021
    Publication date: April 14, 2022
    Inventors: Young Min LEE, Soong-Mann SHIN, Kyung Phil YOO
  • Publication number: 20220114118
    Abstract: A method performed by a device connected to a host processor via a bus includes: providing a first read request including a first address to a memory; receiving a second address stored in a first region of the memory corresponding to the first address, from the memory; providing a second read request including the second address to the memory; and receiving first data stored in a second region of the memory corresponding to the second address, from the memory, wherein the first read request further includes information indicating that the first address is an indirect address of the first data.
    Type: Application
    Filed: July 16, 2021
    Publication date: April 14, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jeongho LEE, Ipoom JEONG, Younggeon YOO, Younho JEON
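The two-step indirect read above can be sketched as follows. The `Memory` stand-in and function names are assumptions for illustration: the first read request returns the pointer stored at the first address, and the second returns the data it points to.

```python
# Sketch of a device-side indirect read: the first address holds an indirect
# address (a pointer), so two read requests are issued back to back.

class Memory:
    def __init__(self, contents):
        self.contents = dict(contents)

    def read(self, addr):
        return self.contents[addr]

def indirect_read(mem, first_addr):
    # first read request: the first region holds the second (indirect) address
    second_addr = mem.read(first_addr)
    # second read request: fetch the actual data at the resolved address
    return mem.read(second_addr)

# address 0x10 stores a pointer to 0x80, which stores the data
mem = Memory({0x10: 0x80, 0x80: "payload"})
```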
  • Publication number: 20220114119
    Abstract: A method for implicit addressing includes: providing, within a first unit and a second unit respectively, a counter unit, a comparison unit, and a storing unit for the storage of an identifier; allocating a first identifier to the first unit and a second identifier to the second unit; setting the same counter value in the counter units of both units; after setting the counter values, comparing the counter value in the first unit to the first identifier and comparing the counter value in the second unit to the second identifier; based on equality of the comparison in the first unit, sending first data from the first unit or assigning first data to the first unit; based on inequality of the comparison in the second unit, sending or assigning no data to the second unit; and counting the counter value up or down in both units.
    Type: Application
    Filed: January 15, 2019
    Publication date: April 14, 2022
    Inventor: Christoph HELDEIS
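The counter-based scheme above can be modeled with a minimal sketch (all names assumed): each unit keeps an identifier and a counter that advance in lockstep, and the unit whose identifier equals the current counter value is the implicit target, so no explicit address needs to be transmitted.

```python
# Sketch of implicit addressing: data offered on a shared medium is taken
# only by the unit whose stored identifier matches its local counter value;
# all counters then count up together.

class Unit:
    def __init__(self, identifier):
        self.identifier = identifier   # allocated identifier (storing unit)
        self.counter = 0               # counter unit, kept in lockstep
        self.received = []

    def offer(self, data):
        # comparison unit: accept only when counter equals the identifier
        if self.counter == self.identifier:
            self.received.append(data)
            return True
        return False

    def tick(self):
        self.counter += 1              # count up in every unit

units = [Unit(0), Unit(1)]
hits = [u.offer("frame-A") for u in units]    # counter 0 -> unit 0 takes it
for u in units:
    u.tick()
hits2 = [u.offer("frame-B") for u in units]   # counter 1 -> unit 1 takes it
```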
  • Publication number: 20220114120
    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Mihir MODY, Niraj NANDAN, Hetul SANGHVI, Brian CHAE, Rajasekhar Reddy ALLU, Jason A.T. JONES, Anthony LELL, Anish REGHUNATH
  • Publication number: 20220114121
    Abstract: A processor package module comprises a substrate, one or more compute die mounted to the substrate, and one or more photonic die mounted to the substrate. The photonic die have N optical I/O links to transmit and receive optical I/O signals using a plurality of virtual optical channels, the N optical I/O links corresponding to different types of I/O interfaces excluding power and ground I/O. The substrate is mounted into a socket that supports the power and ground I/O and electrical connections between the one or more compute die and the one or more photonic die.
    Type: Application
    Filed: October 9, 2020
    Publication date: April 14, 2022
    Inventors: Anshuman THAKUR, Dheeraj SUBAREDDY, MD Altaf HOSSAIN, Ankireddy NALAMALPU, Mahesh KUMASHIKAR, Sandeep SANE
  • Publication number: 20220114122
    Abstract: A physical layer (PHY) is coupled to a serial, differential link that is to include a number of lanes. The PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes. The transmitter coupled to each lane is configured to embed a clock with data to be transmitted over the lane, and the PHY periodically issues a blocking link state (BLS) request to cause an agent to enter a BLS to hold off link layer flit transmission for a duration. The PHY utilizes the serial, differential link during the duration for a PHY associated task selected from a group including an in-band reset, an entry into low power state, and an entry into partial width state.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Robert J. Safranek, Robert G. Blankenship, Venkatraman Iyer, Jeff Willey, Robert Beers, Darren S. Jue, Arvind A. Kumar, Debendra Das Sharma, Jeffrey C. Swanson, Bahaa Fahim, Vedaraman Geetha, Aaron T. Spink, Fulvio Spagna, Rahul R. Shah, Sitaraman V. Iyer, William Harry Nale, Abhishek Das, Simon P. Johnson, Yuvraj S. Dhillon, Yen-Cheng Liu, Raj K. Ramanujan, Robert A. Maddox, Herbert H. Hum, Ashish Gupta
  • Publication number: 20220114123
    Abstract: A method of operating a processing unit includes storing a first copy of a first interrupt control value in a cache device of the processing unit, receiving from an interrupt controller a first interrupt message transmitted via an interconnect fabric, where the first interrupt message includes a second copy of the first interrupt control value, and if the first copy matches the second copy, servicing an interrupt specified in the first interrupt message.
    Type: Application
    Filed: October 12, 2020
    Publication date: April 14, 2022
    Inventors: Bryan P Broussard, Paul Moyer, Eric Christopher Morton, Pravesh Gupta
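    The control-value matching step in the abstract above can be sketched as a small model. This is a hedged illustration under assumed names (the `ProcessingUnit` class, the message fields), not the patented design: the processing unit caches a first copy of the interrupt control value, the incoming interrupt message carries a second copy, and the interrupt is serviced only when the two copies match.

    ```python
    # Illustrative sketch: service an interrupt only when the cached
    # control value matches the copy carried in the interrupt message.

    class ProcessingUnit:
        def __init__(self, control_value):
            self.cached_control = control_value  # first copy, held in cache
            self.serviced = []

        def on_interrupt_message(self, message):
            # The message carries the second copy of the control value.
            if message["control"] == self.cached_control:
                self.serviced.append(message["vector"])
                return True
            return False  # mismatched control value: do not service

    pu = ProcessingUnit(control_value=0xA5)
    ok = pu.on_interrupt_message({"control": 0xA5, "vector": 32})   # match
    stale = pu.on_interrupt_message({"control": 0x5A, "vector": 33})  # mismatch
    ```

    Comparing the two copies lets the processing unit reject interrupt messages built against an out-of-date control value without a round trip to the interrupt controller.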
  • Publication number: 20220114124
    Abstract: A device includes a first interface to receive a signal from a first communication link, wherein the received signal includes out-of-band (OOB) information. A detector coupled to the first interface detects the OOB information. An encoder coupled to the detector encodes the OOB information into one or more symbols (e.g., control characters). A second interface is coupled to the encoder and a second communication link (e.g., a serial transport path). The second interface transmits the symbols on the second communication link. The device also includes mechanisms for preventing false presence detection of terminating devices.
    Type: Application
    Filed: May 21, 2021
    Publication date: April 14, 2022
    Inventor: Michael J. Sobelman
  • Publication number: 20220114125
    Abstract: A processor having a system on a chip (SOC) architecture comprises one or more central processing units (CPUs) comprising multiple cores. An optical Compute Express Link (CXL) communication path incorporating a logical optical CXL protocol stack path transmits and receives an optical bit stream directly after the link layer, bypassing multiple levels of the CXL protocol stack. A CXL interface controller is connected to the one or more CPUs to enable communication between the CPUs and one or more CXL devices over the optical CXL communication path.
    Type: Application
    Filed: October 9, 2020
    Publication date: April 14, 2022
    Inventors: Anshuman THAKUR, Dheeraj SUBBAREDDY, MD Altaf HOSSAIN, Ankireddy NALAMALPU, Mahesh KUMASHIKAR
  • Publication number: 20220114126
    Abstract: Techniques for interfacing with a universal serial bus (USB) camera by a controller hub are disclosed. In one embodiment, a controller hub includes a USB multiplexer, allowing the USB camera connected to the multiplexer to be controlled by a component of the controller hub or by a host controller of a host system. In another embodiment, a USB camera is connected to a controller hub, and the controller hub includes USB video class (UVC) function circuitry to send images from the USB camera to a host controller of a host system. The images can also be processed by a component of the controller hub.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Applicant: Intel Corporation
    Inventors: Aruni P. Nelson, Ashok Mishra, John S. Howard
  • Publication number: 20220114127
    Abstract: An information handling system includes a processor that provides a USB-2 channel and a USB-3 channel to a device. The device provides the USB-2 and -3 channels to selected ports. Each port includes a USB-3 enable setting. When the USB-3 enable setting for each particular USB port is in a first state, the associated device USB-3 channel is active, and when the USB-3 enable setting for each particular USB port is in a second state, the associated device USB-3 channel is inactive. The USB-3 enable setting for at least one of the USB ports is placed into the second state to reduce electromagnetic interference between the associated USB-3 channel and an antenna.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Inventors: Richard Schaefer, Daniel W. Kehoe, Derric C. Hobbs
  • Publication number: 20220114128
    Abstract: Methods and apparatuses associated with a secure stream protocol for a serial interconnect are disclosed herein. In embodiments, an apparatus comprises a transmitter and a receiver. The transmitter and receiver are configured to transmit and receive transaction layer data packets through a link, the transaction layer data packets including indicators associated with an ordered set transmitted after a predetermined number of data blocks, when the transmission is during a header suppression mode. Additional features and other embodiments are also disclosed.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Michelle Jen, Debendra Das Sharma, Bruce Tennant, Prahladachar Jayaprakash Bharadwaj
  • Publication number: 20220114129
    Abstract: Methods and apparatus for implementing a bus in a resource-constrained system. In embodiments, a first FPGA is connected to a parallel bus and a second FPGA is connected to the first FPGA via a serial interface but not the parallel bus. The first FPGA processes a transaction request, which has a parallel bus protocol format, directed to the second FPGA by an initiator and converts the transaction request into a transaction on the serial interface between the first and second FPGAs. The first FPGA responds to the initiator via the parallel bus, indicating that the transaction request to the second FPGA is complete.
    Type: Application
    Filed: October 9, 2020
    Publication date: April 14, 2022
    Applicant: Raytheon Company
    Inventors: Hrishikesh Shinde, Daryl Coleman
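    The bridging behavior in the abstract above can be sketched in a few lines. This is an assumed model for illustration only (the `SerialLink` and `BridgeFPGA` names and the frame layout are invented, and real hardware would use HDL, not Python): the first FPGA accepts a parallel-bus transaction aimed at the second FPGA, re-emits it over the serial link, and completes the parallel-bus handshake on the second FPGA's behalf.

    ```python
    # Sketch of the parallel-to-serial bridge role of the first FPGA.

    class SerialLink:
        def __init__(self):
            self.frames = []  # frames delivered to the second FPGA

        def send(self, frame):
            self.frames.append(frame)

    class BridgeFPGA:
        def __init__(self, serial_link):
            self.link = serial_link

        def parallel_request(self, addr, data):
            # A transaction arrives in the parallel-bus protocol format
            # and is converted into a transfer on the serial interface.
            self.link.send({"addr": addr, "data": data})
            # The bridge completes the parallel-bus handshake for the
            # initiator, which never touches the serial interface.
            return "complete"

    link = SerialLink()
    fpga_a = BridgeFPGA(link)
    status = fpga_a.parallel_request(addr=0x10, data=0xFF)
    ```

    The initiator sees an ordinary parallel-bus completion, so the second FPGA can be reached without extending the parallel bus to it.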
  • Publication number: 20220114130
    Abstract: An information handling system with modular riser components for receiving expansion cards having various requirements. The system includes a riser body assembly having a common support structure for receiving expansion cards. The common support structure may be coupled to different expansion structures to provide support of expansion cards having requirements that would not be met by the common support structure alone.
    Type: Application
    Filed: October 8, 2020
    Publication date: April 14, 2022
    Applicant: Dell Products L.P.
    Inventors: Yu-Feng Lin, Hao-Cheng Ku, Yi-Wei Lu
  • Publication number: 20220114131
    Abstract: In one embodiment, a device includes: an interface circuit to couple the device to a host via a link, where in a first mode the interface circuit is to be configured as an integrated switch controller and in a second mode the interface circuit is to be configured as a link controller; and a fabric coupled to the interface circuit, the fabric to couple to a plurality of hardware circuits, where the fabric is to be dynamically configured for one of the first mode or the second mode based on link training of the link. Other embodiments are described and claimed.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Lakshminarayana Pappu, David J. Harriman, Ramadass Nagarajan, Mahesh S. Natu
  • Publication number: 20220114132
    Abstract: An artificial intelligence (AI) switch chip includes a first AI interface, a first network interface, and a controller. The first AI interface is used by the AI switch chip to couple to a first AI chip in a first server. The first network interface is used by the AI switch chip to couple to a second server. The controller receives, through the first AI interface, data from the first AI chip, and then sends the data to the second server through the first network interface. By using the AI switch chip, when a server needs to send data in an AI chip to another server, an AI interface may be used to directly receive the data from the AI chip, and then the data is sent to the other server through one or more network interfaces coupled to the controller.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Inventors: Xinyu Hou, Qun Jia, Weibin Liin
  • Publication number: 20220114133
    Abstract: A fractal computing device according to an embodiment of the present application may be included in an integrated circuit device. The integrated circuit device includes a universal interconnect interface and other processing devices. The computing device interacts with the other processing devices to jointly complete a user-specified computation operation. The integrated circuit device may also include a storage device. The storage device is connected to the computing device and the other processing devices, respectively, and is used for data storage of the computing device and the other processing devices.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Inventors: Shaoli LIU, Guang JIANG, Yongwei ZHAO, Jun LIANG
  • Publication number: 20220114134
    Abstract: Apparatuses, methods and storage medium for providing access from outside a multicore processor System on Chip (SoC) are disclosed herein. In embodiments, an SoC may include a memory to store a plurality of embedded values correspondingly associated with a plurality of architecturally identical cores. Each embedded value may indicate a default voltage for a respective one of the plurality of architecturally identical cores. In embodiments, an apparatus may include one or more processors, devices, and/or circuitry to provide access from outside the multicore processor SoC to individually configure voltages of the plurality of architecturally identical cores to values that are different than the values of the default voltages. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Inventors: DANIEL J. RAGLAND, GUY M. THERIEN, KIRK PFAENDER
  • Publication number: 20220114135
    Abstract: A reconfigurable computer architecture includes a reconfigurable chip. The reconfigurable chip includes learning computing blocks that are interconnected virtually. The learning computing blocks each store a source address and a destination address and communicate with one another using message passing.
    Type: Application
    Filed: September 21, 2021
    Publication date: April 14, 2022
    Inventor: Mostafizur Rahman