Intel Patents

Intel Corporation designs and manufactures microprocessors and chipsets for computing and communications equipment manufacturers. Its products may be found in desktops, servers, tablets, smartphones and other devices.

Intel Patents by Type
  • Intel Patents Granted: Intel patents that have been granted by the United States Patent and Trademark Office (USPTO).
  • Intel Patent Applications: Intel patent applications that are pending before the United States Patent and Trademark Office (USPTO).
  • Publication number: 20250118641
    Abstract: Generally discussed herein are systems, methods, and apparatuses that include conductive pillars that are about co-planar. According to an example, a technique can include growing conductive pillars on respective exposed landing pads of a substrate, situating molding material around and on the grown conductive pillars, removing, simultaneously, a portion of the grown conductive pillars and the molding material to make the grown conductive pillars and the molding material about planar, and electrically coupling a die to the conductive pillars.
    Type: Application
    Filed: December 19, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Robert L. Sankman, Sanka Ganesan
  • Publication number: 20250120143
    Abstract: Described herein are gate-all-around (GAA) transistors with extended drains, where the drain region extends through a well region below the GAA transistor. A high voltage can be applied to the drain, and the extended drain region provides a voltage drop. The transistor length (and, specifically length of the extended drain) can be varied based on the input voltage to the device, e.g., providing a longer drain for higher input voltages. The extended drain transistors can be implemented in devices that include CFETs, either by implementing the extended drain transistor across both CFET layers, or by providing a sub-fin pedestal with the well regions in the lower layer.
    Type: Application
    Filed: October 6, 2023
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Sanjay Rangan, Adam Brand, Chen-Guan Lee, Rahul Ramaswamy, Hsu-Yu Chang, Adithya Shankar, Marko Radosavljevic
  • Publication number: 20250119733
    Abstract: This disclosure describes systems, methods, and devices related to using encrypted 802.11 association. A device may identify a beacon received from an access point (AP), the beacon including an indication of an authentication and key manager (AKM); transmit, to the AP, an 802.11 authentication request including an indication of parameters associated with the AKM; identify an 802.11 authentication response received from the AP based on the 802.11 authentication request, the 802.11 authentication response including a message integrity check (MIC) using a key confirmation key (KCK) and an indication that the parameters have been selected by the AP; transmit, to the AP, an 802.11 association request encrypted by a security key based on an authenticator address of the AP; and identify an 802.11 association response received from the AP based on the 802.11 association request, the 802.11 association response encrypted by the security key.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Po-Kai HUANG, Ilan PEER, Johannes BERG, Ido OUZIELI, Elad OREN, Emily QI
  • Publication number: 20250120102
    Abstract: Package substrates with components included in cavities of glass cores are disclosed. An example apparatus includes: a glass core having a first opening and a second opening spaced apart from the first opening, the second opening having a greater width than the first opening. The example apparatus further includes a conductive material adjacent a first wall of the first opening; and a dielectric material adjacent a second wall of the second opening.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Brandon Christian Marin, Whitney Bryks, Gang Duan, Jeremy Ecton, Jason Gamba, Haifa Hariri, Sashi Shekhar Kandanur, Joseph Peoples, Srinivas Venkata Ramanuja Pietambaram, Mohammad Mamunur Rahman, Bohan Shan, Joshua James Stacey, Hiroki Tanaka, Jacob Ryan Vehonsky
  • Publication number: 20250117639
    Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
    Type: Application
    Filed: September 16, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Anbang Yao, Aojun Zhou, Kuan Wang, Hao Zhao, Yurong Chen
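
The loss-error-aware quantization flow above (publication 20250117639) partitions weights, quantizes one group, and retrains the other group based on a loss difference. Below is a minimal NumPy sketch of that loop; the linear model, uniform quantizer, magnitude-based partition, and gap-scaled update rule are illustrative assumptions, not the filed implementation.

```python
import numpy as np

def quantize_low_bit(w, bits=2):
    """Uniform symmetric quantizer used as a stand-in low-bit quantizer."""
    scale = max(float(np.max(np.abs(w))) / (2 ** (bits - 1) - 1), 1e-12)
    return np.round(w / scale) * scale

def loss_fn(w, x, y):
    """Toy mean-squared-error loss for a linear model (the patent is model-agnostic)."""
    return float(np.mean((x @ w - y) ** 2))

def loss_aware_quantize(weights, x, y, bits=2, ratio=0.5, lr=0.01, steps=100):
    # Partition the unquantized weights: the larger-magnitude half is quantized,
    # the rest stays full precision for retraining (an assumed partition rule).
    order = np.argsort(-np.abs(weights))
    split = int(ratio * len(order))
    q_idx, r_idx = order[:split], order[split:]

    first_loss = loss_fn(weights, x, y)           # loss before quantization
    w = weights.astype(float).copy()
    w[q_idx] = quantize_low_bit(w[q_idx], bits)   # low-bit second network weights

    for _ in range(steps):                        # retrain the remaining group
        second_loss = loss_fn(w, x, y)
        gap = second_loss - first_loss            # loss-error awareness
        grad = 2.0 * x.T @ (x @ w - y) / len(y)
        w[r_idx] -= lr * (1.0 + abs(gap)) * grad[r_idx]   # update scaled by the gap
    return w                                      # weights for the low-bit model

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 16))
y = x @ rng.normal(size=16)
w_q = loss_aware_quantize(rng.normal(size=16), x, y)
print(round(loss_fn(w_q, x, y), 4))
```
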
  • Publication number: 20250117063
    Abstract: Methods and apparatus to improve user experience on computing devices are disclosed. An example computing device includes a microphone to capture audio corresponding to spoken words. The example computing device further includes a speech analyzer to: detect a keyword prompt from among the spoken words, the keyword prompt to precede a query statement of a user of the computing device; and identify topics associated with a subset of the spoken words, the subset of the spoken words captured by the microphone before the keyword prompt. The example computing device also includes a communications interface to, in response to detection of the keyword prompt, transmit information indicative of the query statement and ones of the identified topics to a remote server.
    Type: Application
    Filed: December 6, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Kristoffer Fleming, Melanie Daniels, Paul Diefenbaugh, Aleksander Magi, Lawrence Falkenstein, Raoul Rivas Toledano, Vishal Sinha, Deepak Samuel Kirubakaran, Venkateshan Udhayan, Marko Bartscherer, Kathy Bui
  • Publication number: 20250118698
    Abstract: Microelectronic assemblies, related devices and methods, are disclosed herein. In some embodiments, a microelectronic assembly may include a first die, having a first surface and an opposing second surface, in a first layer; a redistribution layer (RDL) on the first layer, wherein the RDL is electrically coupled to the second surface of the first die by solder interconnects, and a second die in a second layer on the RDL, wherein the second die is electrically coupled to the RDL by non-solder interconnects.
    Type: Application
    Filed: December 19, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Xavier Francois Brun, Sanka Ganesan, Holly Sawyer, William J. Lambert, Timothy A. Gosselin, Yuting Wang
  • Publication number: 20250120036
    Abstract: A datacenter including a plurality of racks. The racks are associated with a motorized and/or automated system that moves the racks between first and second positions. In the first position, the racks are arranged in a side-by-side fashion in one or more rows. In the second position, a rack is moved so that a lateral side of the rack is accessible. In some embodiments, the racks include a motor and gear system for interacting with tracks. In some embodiments, each of the racks includes a plurality of chassis, each chassis including a plurality of input/output (I/O) connectors to receive a connector of a cable, the plurality of I/O connectors being arranged along a lateral side of the chassis so that they are accessible when the rack is in the second position. In use, the racks may be moved between the first and second positions while the chassis remain in normal operation.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Ralph Jensen, Michael Crocker, Carl Williams
  • Publication number: 20250117633
    Abstract: Predictive uncertainty of a generative machine learning model may be estimated. The generative machine learning model may be a large language model or large multi-modal model. A datum may be input into the generative machine learning model. The generative machine learning model may generate outputs from the datum. Latent embeddings for the outputs may be extracted from the generative machine learning model. A covariance matrix with respect to the latent embeddings may be computed. The covariance matrix may be a two-dimensional matrix, such as a square matrix. The predictive uncertainty of the generative machine learning model may be estimated using the covariance matrix. For instance, the matrix entropy of the covariance matrix may be determined. The matrix entropy may be an approximated dimension of a latent semantic manifold spanned by the outputs of the generative machine learning model and may indicate the predictive uncertainty of the generative machine learning model.
    Type: Application
    Filed: December 19, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Anthony Daniel Rhodes, Ramesh Radhakrishna Manuvinakurike, Sovan Biswas, Giuseppe Raffa, Lama Nachman
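
Publication 20250117633 estimates predictive uncertainty from the matrix entropy of a covariance matrix computed over latent embeddings of several sampled outputs. The NumPy sketch below shows one plausible form of that computation; the eigenvalue-spectrum entropy and the random stand-in embeddings are assumptions for illustration.

```python
import numpy as np

def matrix_entropy_uncertainty(embeddings: np.ndarray) -> float:
    """Estimate predictive uncertainty from N latent embeddings (shape N x D).

    Builds the D x D covariance matrix of the embeddings and computes a matrix
    entropy from its normalized eigenvalue spectrum. A flatter spectrum (outputs
    spread over more latent directions) gives higher entropy, i.e. more
    estimated uncertainty; identical outputs give zero.
    """
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(embeddings) - 1, 1)   # square D x D
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    total = eigvals.sum()
    if total == 0.0:
        return 0.0
    p = eigvals / total               # normalize spectrum to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Example: latent embeddings for 8 sampled outputs (random stand-ins here).
rng = np.random.default_rng(0)
print(matrix_entropy_uncertainty(rng.normal(size=(8, 64))))
```
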
  • Publication number: 20250119773
    Abstract: This disclosure describes systems, methods, and devices related to high throughput (HT) control information. A device may determine a frame comprising HT control information. The device may determine to extend a size of the HT control information. The device may cause to generate a management or data frame for sending to a first station device of one or more station devices, the management or data frame comprising extended high throughput (HT) control information, define a new control identification (ID) associated with the extended HT control information, and cause to send the management or data frame to the first station device.
    Type: Application
    Filed: November 4, 2024
    Publication date: April 10, 2025
    Applicant: INTEL CORPORATION
    Inventors: Po-Kai Huang, Daniel F. Bravo, Danny Alexander, Arik Klein, Danny Ben-Ari, Laurent Cariou, Robert Stacey
  • Publication number: 20250118003
    Abstract: An apparatus to facilitate exception handling for debugging in a graphics environment is disclosed. The apparatus includes load store pipeline hardware circuitry to: in response to a page fault exception being enabled for a memory access request received from a thread of the plurality of threads, allocate a memory dependency token correlated to a scoreboard identifier (SBID) that is included with the memory access request; send, to memory fabric of the graphics processor, the memory access request comprising the memory dependency token; receive, from the memory fabric in response to the memory access request, a memory access response comprising the memory dependency token and indicating occurrence of a page fault error condition and fault details associated with the page fault error condition; and return the SBID associated with the memory access response and fault details of the page fault error condition to a debug register of the thread.
    Type: Application
    Filed: October 18, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: John Wiegert, Joydeep Ray, Fabian Schnell, Kelvin Thomas Gardiner
  • Publication number: 20250117264
    Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.
    Type: Application
    Filed: November 1, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Utkarsh Y. KAKAIYA, Rajesh M. SANKARAN, Sanjay KUMAR, Kun TIAN, Philip LANTZ
  • Publication number: 20250117060
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to collect user information for a user of a data processing device, generate a user profile for the user of the data processing device from the user information, and set a power profile for a processor in the data processing device using the user profile. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: September 12, 2024
    Publication date: April 10, 2025
    Applicant: INTEL CORPORATION
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray, Balaji Vembu, Prasoonkumar Surti, Kamal Sinha, Eric J. Hoekstra, Wenyin Fu, Nikos Kaburlasos, Bhushan M. Borole, Travis T. Schluessler, Ankur N. Shah, Jonathan Kennedy
  • Publication number: 20250119340
    Abstract: Logic may monitor quality of communication of data to a wireless receiver device based on transport characteristics at a wireless source device. Logic may evaluate the transport characteristics to identify indication(s) of a problem with the quality of the communication. Logic may identify a root cause associated with the indication(s). Logic may associate the root cause with one or more actions to mitigate the degradation of the quality. And logic may cause performance of an operation to mitigate the degradation of the quality based on the one or more actions. The logic to evaluate the transport characteristics may determine an upper limit for an achievable mean opinion score (MOS) based on the transport characteristics; and, based on the upper limit for the achievable MOS being less than a threshold MOS, may identify the indication(s) associated with the upper limit for the achievable MOS.
    Type: Application
    Filed: December 19, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Balvinder Pal Singh, Kobi Guetta, Yoni Kahana, Amichay Israel, Ehud Apsel, Anubhav David, Gila Kamhi
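
Publication 20250119340 derives an upper limit for the achievable mean opinion score (MOS) from transport characteristics and, when that limit falls below a threshold, maps indications to a root cause and a mitigating action. The sketch below captures only that control flow; the bound formula, root-cause rule, and mitigation table are hypothetical placeholders.

```python
def achievable_mos_upper_bound(loss_rate: float, jitter_ms: float) -> float:
    """Hypothetical bound: start from a perfect score and subtract penalties."""
    return max(1.0, 5.0 - 30.0 * loss_rate - 0.02 * jitter_ms)

MITIGATIONS = {   # root cause -> mitigating action (illustrative only)
    "congestion": "lower video bitrate",
    "interference": "switch wireless channel",
}

def evaluate_link(loss_rate: float, jitter_ms: float, mos_threshold: float = 3.5):
    upper = achievable_mos_upper_bound(loss_rate, jitter_ms)
    if upper >= mos_threshold:
        return None                      # quality acceptable, nothing to mitigate
    root_cause = "congestion" if loss_rate > 0.02 else "interference"
    return root_cause, MITIGATIONS[root_cause]

print(evaluate_link(loss_rate=0.05, jitter_ms=40.0))
```
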
  • Publication number: 20250116812
    Abstract: Described herein are stacked photonic integrated circuit (PIC) assemblies that include multiple layers of waveguides. The waveguides are formed of substantially monocrystalline materials, which cannot be repeatedly deposited. Layers of monocrystalline material are fabricated and repeatedly transferred onto the PIC structure using a layer transfer process, which involves bonding a monocrystalline material using a non-monocrystalline bonding material. Layers of isolation materials are also deposited or layer transferred onto the PIC assembly.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Abhishek A. Sharma, Wilfred Gomes
  • Publication number: 20250117318
    Abstract: Memory management for wireless networks is described. A method, includes accessing an operational parameter for a network slice of a wireless network, determining a first memory region of a plurality of memory regions in the memory pool based on the operational parameter, and encoding configuration information to allocate the first memory region to the network slice. Other embodiments are described and claimed.
    Type: Application
    Filed: December 19, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Sunku Ranganath, John Browne, Hassnaa Moustafa, Mandar Chincholkar, Amar Srivastava
  • Publication number: 20250117503
    Abstract: The disclosed embodiments are generally directed to inline encryption of data at line speed at a chip interposed between two memory components. The inline encryption may be implemented at a System-on-Chip ("SOC" or "SoC"). The memory components may comprise Non-Volatile Memory express (NVMe) and a dynamic random access memory (DRAM). An exemplary device includes an SOC to communicate with Non-Volatile Memory express (NVMe) circuitry to provide direct memory access (DMA) to an external memory component. The SOC may include: a cryptographic controller circuitry; a cryptographic memory circuitry in communication with the cryptographic controller, the cryptographic memory circuitry configured to store instructions to encrypt or decrypt data transmitted through the SOC; and an encryption engine in communication with the crypto controller circuitry, the encryption engine configured to encrypt or decrypt data according to instructions stored at the crypto memory circuitry. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: October 29, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Prashant Dewan, Baiju Patel
  • Publication number: 20250117285
    Abstract: In one embodiment, an apparatus includes: an integrity circuit to receive data and generate a protection code based at least in part on the data; a cryptographic circuit coupled to the integrity circuit to encrypt the data into encrypted data and encrypt the protection code into an encrypted protection code; a message authentication code (MAC) circuit coupled to the cryptographic circuit to compute a MAC comprising a tag using header information, the encrypted data, and the encrypted protection code; and an output circuit to send the header information, the encrypted data, and the tag to a receiver via a link. Other embodiments are described and claimed.
    Type: Application
    Filed: December 9, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Raghunandan MAKARAM, Kirk S. YAP
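
Publication 20250117285 chains an integrity protection code, encryption of both data and code, and a MAC over the header plus the encrypted payload. The standard-library sketch below mirrors that ordering; the CRC-32 protection code, SHA-256 keystream cipher, and key handling are insecure stand-ins chosen only to show the structure, not the patented circuits.

```python
import hashlib, hmac, os, zlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from SHA-256 (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect_flit(header: bytes, data: bytes, enc_key: bytes, mac_key: bytes):
    # 1) Integrity circuit: derive a protection code from the plaintext data.
    code = zlib.crc32(data).to_bytes(4, "big")
    # 2) Cryptographic circuit: encrypt the data and the protection code.
    nonce = os.urandom(12)
    stream = _keystream(enc_key, nonce, len(data) + len(code))
    ciphertext = bytes(a ^ b for a, b in zip(data + code, stream))
    enc_data, enc_code = ciphertext[:len(data)], ciphertext[len(data):]
    # 3) MAC circuit: tag covers header, encrypted data, and encrypted code.
    tag = hmac.new(mac_key, header + enc_data + enc_code, hashlib.sha256).digest()
    # 4) Output circuit: header, encrypted data, and tag go out on the link.
    return header, nonce, enc_data, enc_code, tag

print(protect_flit(b"\x01\x02", b"payload", os.urandom(32), os.urandom(32))[-1].hex())
```
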
  • Publication number: 20250117329
    Abstract: Methods and apparatus relating to an instruction and/or micro-architecture support for decompression on core are described. In an embodiment, decode circuitry decodes a decompression instruction into a first micro operation and a second micro operation. The first micro operation causes one or more load operations to fetch data into one or more cachelines of a cache of a processor core. Decompression Engine (DE) circuitry decompresses the fetched data from the one or more cachelines of the cache of the processor core in response to the second micro operation. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: November 14, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Jayesh Gaur, Adarsh Chauhan, Vinodh Gopal, Vedvyas Shanbhogue, Sreenivas Subramoney, Wajdi Feghali
  • Publication number: 20250117360
    Abstract: A processing apparatus includes a processing resource including a general-purpose parallel processing engine and a matrix accelerator. The matrix accelerator includes first circuitry to receive a command to perform operations associated with an instruction, second circuitry to configure the matrix accelerator according to a physical depth of a systolic array within the matrix accelerator and a logical depth associated with the instruction, third circuitry to read operands for the instruction from a register file associated with the systolic array, fourth circuitry to perform operations for the instruction via one or more passes through one or more physical pipeline stages of the systolic array based on a configuration performed by the second circuitry, and fifth circuitry to write output of the operations to the register file associated with the systolic array.
    Type: Application
    Filed: October 30, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Jorge Parra, Wei-yu Chen, Kaiyu Chen, Varghese George, Junjie Gu, Chandra Gurram, Guei-Yuan Lueh, Stephen Junkins, Subramaniam Maiyuran, Supratim Pal
  • Publication number: 20250117356
    Abstract: Methods and apparatus relating to techniques for multi-tile memory management. In an example, a graphics processor includes an interposer, a first chiplet coupled with the interposer, the first chiplet including a graphics processing resource and an interconnect network coupled with the graphics processing resource, cache circuitry coupled with the graphics processing resource via the interconnect network, and a second chiplet coupled with the first chiplet via the interposer, the second chiplet including a memory-side cache and a memory controller coupled with the memory-side cache. The memory controller is configured to enable access to a high-bandwidth memory (HBM) device, the memory-side cache is configured to cache data associated with a memory access performed via the memory controller, and the cache circuitry is logically positioned between the graphics processing resource and a chiplet interface.
    Type: Application
    Filed: October 15, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Aravindh Anantaraman, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Mike Macpherson, Subramaniam Maiyuran, Joydeep Ray, Lakshminarayanan Striramassarma, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, Prasoonkumar Surti, David Puffer, James Valerio, Ankur N. Shah
  • Publication number: 20250117359
    Abstract: A processing apparatus described herein includes a general-purpose parallel processing engine comprising a systolic array having multiple pipelines, each of the multiple pipelines including multiple pipeline stages, wherein the multiple pipelines include a first pipeline, a second pipeline, and a common input shared between the first pipeline and the second pipeline.
    Type: Application
    Filed: October 11, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Jorge Parra, Jiasheng Chen, Supratim Pal, Fangwen Fu, Sabareesh Ganapathy, Chandra Gurram, Chunhui Mei, Yue Qi
  • Publication number: 20250117873
    Abstract: Techniques to improve performance of matrix multiply operations are described in which a compute kernel can specify one or more element-wise operations to perform on output of the compute kernel before the output is transferred to higher levels of a processor memory hierarchy.
    Type: Application
    Filed: October 4, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Eriko Nurvitadhi, Balaji Vembu, Tsung-Han Lin, Kamal Sinha, Rajkishore Barik, Nicolas C. Galoppo Von Borries
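
Publication 20250117873 lets a compute kernel apply element-wise operations to its matrix-multiply output before that output moves up the memory hierarchy. The NumPy sketch below illustrates the idea as a tiled matmul that applies a caller-supplied element-wise function per tile before the writeback; the tiling and API are assumptions.

```python
import numpy as np

def matmul_fused(a, b, elementwise=None, tile=64):
    """Tiled matrix multiply that applies `elementwise` to each output tile
    before it is stored to the result array, mimicking fusion of element-wise
    work ahead of the writeback to higher levels of the memory hierarchy."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            block = a[i:i + tile] @ b[:, j:j + tile]   # compute-kernel output
            if elementwise is not None:
                block = elementwise(block)             # fused element-wise op
            out[i:i + tile, j:j + tile] = block        # single writeback
    return out

a = np.random.rand(128, 96).astype(np.float32)
b = np.random.rand(96, 160).astype(np.float32)
relu = lambda t: np.maximum(t, 0.0)
np.testing.assert_allclose(matmul_fused(a, b, relu), relu(a @ b), rtol=1e-4)
print("fused result matches the unfused reference")
```
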
  • Publication number: 20250117673
    Abstract: Techniques described herein address the above challenges that arise when using host executed software to manage vector databases by providing a vector database accelerator and shard management offload logic that is implemented within hardware and by software executed on device processors and programmable data planes of a programmable network interface device. In one embodiment, a programmable network interface device includes infrastructure management circuitry configured to facilitate data access for a neural network inference engine having a distributed data model via dynamic management of a node associated with the neural network inference engine, the node including a database shard of a vector database.
    Type: Application
    Filed: December 16, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Anjali Singhai Jain, Tamar Bar-Kanarik, Marcos Carranza, Karthik Kumar, Cristian Florin Dumitrescu, Keren Guy, Patrick Connor
  • Publication number: 20250117501
    Abstract: Technologies for trusted I/O include a computing device having a hardware cryptographic agent, a cryptographic engine, and an I/O controller. The hardware cryptographic agent intercepts a message from the I/O controller and identifies boundaries of the message. The message may include multiple DMA transactions, and the start of message is the start of the first DMA transaction. The cryptographic engine encrypts the message and stores the encrypted data in a memory buffer. The cryptographic engine may skip and not encrypt header data starting at the start of message or may read a value from the header to determine the skip length. In some embodiments, the cryptographic agent and the cryptographic engine may be an inline cryptographic engine. In some embodiments, the cryptographic agent may be a channel identifier filter, and the cryptographic engine may be processor-based. Other embodiments are described and claimed.
    Type: Application
    Filed: October 1, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Soham Jayesh Desai, Siddhartha Chhabra, Bin Xing, Pradeep M. Pappachan, Reshma Lal
  • Publication number: 20250117874
    Abstract: One embodiment provides an apparatus comprising a memory stack including multiple memory dies and a parallel processor including a plurality of multiprocessors. Each multiprocessor has a single instruction, multiple thread (SIMT) architecture, the parallel processor coupled to the memory stack via one or more memory interfaces. At least one multiprocessor comprises a multiply-accumulate circuit to perform multiply-accumulate operations on matrix data in a stage of a neural network implementation to produce a result matrix comprising a plurality of matrix data elements at a first precision, precision tracking logic to evaluate metrics associated with the matrix data elements and indicate if an optimization is to be performed for representing data at a second stage of the neural network implementation, and a numerical transform unit to dynamically perform a numerical transform operation on the matrix data elements based on the indication to produce transformed matrix data elements at a second precision.
    Type: Application
    Filed: October 7, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Sara S. Baghsorkhi, Anbang Yao, Kevin Nealis, Xiaoming Chen, Altug Koker, Abhishek R. Appu, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Ben J. Ashbaugh, Barath Lakshmanan, Liwei Ma, Joydeep Ray, Ping T. Tang, Michael S. Strickland
  • Publication number: 20250117875
    Abstract: In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive an input from one or more detectors proximate a display to present an output from a graphics pipeline, determine that a user is not interacting with the display, and in response to a determination that the user is not interacting with the display, to reduce a frame rendering rate of the graphics pipeline. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: December 9, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Balaji Vembu, Nikos Kaburlasos, Josh B. Mastronarde
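
Publication 20250117875 lowers the graphics pipeline's frame rendering rate when detectors near the display indicate the user is not interacting. A minimal sketch of that policy loop follows; the detector callable, frame rates, and timing are hypothetical.

```python
import time

ACTIVE_FPS, IDLE_FPS = 60.0, 15.0

def render_loop(user_present, render_frame, seconds=2.0):
    """Render at ACTIVE_FPS while the detectors report interaction, otherwise
    drop to IDLE_FPS. `user_present` and `render_frame` are caller-supplied
    callables standing in for the detectors and the graphics pipeline."""
    deadline = time.monotonic() + seconds
    frames = 0
    while time.monotonic() < deadline:
        fps = ACTIVE_FPS if user_present() else IDLE_FPS   # reduce rate if idle
        render_frame()
        frames += 1
        time.sleep(1.0 / fps)
    return frames

# Example with stand-in callables: the "user" walks away after one second.
start = time.monotonic()
print(render_loop(lambda: time.monotonic() - start < 1.0, lambda: None))
```
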
  • Patent number: 12271735
    Abstract: Systems, methods, and apparatuses relating to circuitry to precisely monitor memory store accesses are described.
    Type: Grant
    Filed: January 22, 2024
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Ahmad Yasin, Raanan Sade, Liron Zur, Igor Yanover, Joseph Nuzman
  • Patent number: 12270657
    Abstract: Various aspects of techniques, systems, and use cases include providing instructions for operating an autonomous mobile robot (AMR). A technique may include capturing audio or video data using a sensor of the AMR, performing a classification of the audio or video data using a trained classifier, and identifying a coordinate of an environmental map corresponding to a location of the audio or video data. The technique may include updating the environmental map to include the classification as metadata corresponding to the coordinate. The technique may include communicating the updated environmental map to an edge device.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Ruchika Singh, Mandar Chincholkar, Hassnaa Moustafa, Francesc Guim Bernat, Rita Chattopadhyay
  • Patent number: 12270886
    Abstract: Methods, apparatus, systems and articles of manufacture to trigger calibration of a sensor node using machine learning are disclosed. An example apparatus includes a machine learning model trainer to train a machine learning model using first sensor data collected from a sensor node. A disturbance forecaster is to, using the machine learning model and second sensor data, forecast a temporal disturbance to a communication of the sensor node. A communications processor is to transmit a first calibration trigger in response to a determination that a start of the temporal disturbance is forecasted and a determination that a first calibration trigger has not been sent.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Yatish Mishra, Mats Agerstam, Mateo Guzman, Sindhu Pandian, Shubhangi Rajasekhar, Pranav Sanghadia, Troy Willes
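
Patent 12270886 transmits a calibration trigger when a forecast predicts the start of a temporal disturbance and no trigger has been sent yet. The sketch below shows that trigger condition with a trivial threshold "forecaster" standing in for the trained machine learning model described in the abstract.

```python
def forecast_disturbance(recent_rssi, threshold_dbm=-80.0):
    """Stand-in forecaster: predict a disturbance when the recent signal trend
    drops below a threshold. The patent trains a machine learning model here."""
    return sum(recent_rssi[-3:]) / 3 < threshold_dbm

def maybe_trigger_calibration(recent_rssi, already_triggered, send_trigger):
    """Send the first calibration trigger only when a disturbance start is
    forecast and no trigger has been sent before."""
    if forecast_disturbance(recent_rssi) and not already_triggered:
        send_trigger()
        return True
    return already_triggered

triggered = False
for window in ([-70, -72, -71], [-79, -83, -85], [-84, -86, -88]):
    triggered = maybe_trigger_calibration(
        window, triggered, lambda: print("calibration trigger sent"))
```
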
  • Patent number: 12271306
    Abstract: Three-dimensional (3D) DRAM integrated in the same package as compute logic enables forming high-density caches. In one example, an integrated 3D DRAM includes a large on-die cache (such as a level 4 (L4) cache), a large on-die memory-side cache, or both an L4 cache and a memory-side cache. One or more tag caches cache recently accessed tags from the L4 cache, the memory-side cache, or both. A cache controller in the compute logic is to receive a request from one of the processor cores to access an address and compare tags in the tag cache with the address. In response to a hit in the tag cache, the cache controller accesses data from the cache at a location indicated by an entry in the tag cache, without performing a tag lookup in the cache.
    Type: Grant
    Filed: March 27, 2021
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Wilfred Gomes, Adrian C. Moga, Abhishek Sharma
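
Patent 12271306 fronts a large in-package DRAM cache with a small tag cache so that a tag-cache hit returns data without a tag lookup in the large cache. The direct-mapped, dictionary-based model below is a simplified stand-in for those structures; sizes and eviction policy are assumptions.

```python
class TagCachedL4:
    """Toy direct-mapped L4 cache fronted by a small cache of recent tags.
    On a tag-cache hit, data is read at the location the entry points to,
    with no tag lookup in the (slower, larger) L4 tag array."""

    def __init__(self, num_sets=1024, tag_cache_size=64, line_bytes=64):
        self.num_sets, self.line_bytes = num_sets, line_bytes
        self.l4_tags, self.l4_data = {}, {}   # the big cache: set -> tag / data
        self.tag_cache = {}                   # small cache of recent tags
        self.tag_cache_size = tag_cache_size

    def _split(self, addr):
        line = addr // self.line_bytes
        return line % self.num_sets, line // self.num_sets   # (set index, tag)

    def read(self, addr, backing_memory):
        s, tag = self._split(addr)
        if self.tag_cache.get(s) == tag:              # tag-cache hit:
            return self.l4_data[s], "tag-cache hit"   # no L4 tag lookup needed
        if self.l4_tags.get(s) == tag:                # full L4 tag lookup
            self._remember(s, tag)
            return self.l4_data[s], "L4 hit"
        data = backing_memory(addr)                   # miss: fill from memory
        self.l4_tags[s], self.l4_data[s] = tag, data
        self._remember(s, tag)
        return data, "miss"

    def _remember(self, s, tag):
        if len(self.tag_cache) >= self.tag_cache_size:
            self.tag_cache.pop(next(iter(self.tag_cache)))   # crude FIFO evict
        self.tag_cache[s] = tag

cache = TagCachedL4()
print(cache.read(0x1000, lambda a: f"mem[{a:#x}]"))   # miss
print(cache.read(0x1000, lambda a: f"mem[{a:#x}]"))   # tag-cache hit
```
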
  • Patent number: 12271308
    Abstract: Examples provide an application program interface or manner by which an application, software, or hardware negotiates locking or pinning or unlocking or unpinning of a cache region. A cache region, which can be part of a level-1, level-2, lower or last level cache (LLC), or translation lookaside buffer (TLB), is locked (e.g., pinned) or unlocked (e.g., unpinned). A cache lock controller can respond to a request to lock or unlock a region of cache or TLB by indicating that the request is successful or not successful. If a request is not successful, the controller can provide feedback indicating one or more aspects of the request that are not permitted. The application, software, or hardware can submit another request, a modified request, based on the feedback to attempt to lock a portion of the cache or TLB.
    Type: Grant
    Filed: December 28, 2023
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Andrew J. Herdrich, Priya Autee, Abhishek Khade, Patrick Lu, Edwin Verplanke, Vivekananthan Sanjeepan
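
Patent 12271308 has a controller accept or reject cache/TLB pin requests and, on rejection, return feedback that lets the requester submit a modified request. The sketch below captures that negotiate-and-retry shape; the request fields and the capacity rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LockRequest:
    region: str      # e.g. "LLC" or "TLB"
    ways: int        # how many ways the requester wants pinned

class CacheLockController:
    """Grants or rejects pin requests and, on rejection, explains why."""

    def __init__(self, total_ways=12, max_pinned=8):
        self.total_ways, self.max_pinned, self.pinned = total_ways, max_pinned, 0

    def lock(self, req: LockRequest):
        if self.pinned + req.ways > self.max_pinned:
            free = self.max_pinned - self.pinned
            return False, f"only {free} of {self.total_ways} ways may still be pinned"
        self.pinned += req.ways
        return True, "granted"

    def unlock(self, ways: int):
        self.pinned = max(0, self.pinned - ways)

ctrl = CacheLockController()
ok, feedback = ctrl.lock(LockRequest("LLC", ways=10))     # rejected with feedback
if not ok:
    ok, feedback = ctrl.lock(LockRequest("LLC", ways=6))  # modified request
print(ok, feedback)
```
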
  • Patent number: 12271305
    Abstract: A two-level main memory in which both volatile memory and persistent memory are exposed to the operating system in a flat manner and data movement and management are performed at cache line granularity is provided. The operating system can allocate pages in the two-level main memory randomly across the first level main memory and the second level main memory in a memory-type agnostic manner, or, in a more intelligent manner by allocating predicted hot pages in first level main memory and predicted cold pages in second level main memory. The cache line granularity movement is performed in a "swap" manner, that is, a hot cache line in the second level main memory is swapped with a cold cache line in first level main memory because data is stored in either first level main memory or second level main memory, not in both first level main memory and second level main memory.
    Type: Grant
    Filed: March 27, 2021
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Sai Prashanth Muralidhara, Alaa R. Alameldeen, Rajat Agarwal, Wei P. Chen, Vivek Kozhikkottu
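
Patent 12271305 keeps each cache line in exactly one of the two memory levels and promotes a hot second-level line by swapping it with a cold first-level line. A minimal sketch of that swap follows; the hotness counter and fixed capacity are assumptions.

```python
class TwoLevelMemory:
    """Cache-line-granularity placement across a fast first level and a slower
    second level. A line lives in exactly one level; a hot second-level line is
    swapped with a cold first-level line rather than copied."""

    def __init__(self, level1_capacity=2):
        self.level1, self.level2 = {}, {}     # line address -> data
        self.hits = {}                        # line address -> access count
        self.level1_capacity = level1_capacity

    def access(self, line, data=None):
        self.hits[line] = self.hits.get(line, 0) + 1
        if line in self.level1:
            return self.level1[line]
        if line not in self.level2:           # first touch: fill second level
            self.level2[line] = data
        if len(self.level1) < self.level1_capacity:
            self.level1[line] = self.level2.pop(line)      # room: just promote
        elif self.hits[line] >= 3:            # hot enough: swap with coldest
            cold = min(self.level1, key=lambda l: self.hits[l])
            self.level1[line], self.level2[cold] = (
                self.level2.pop(line), self.level1.pop(cold))
        return self.level1.get(line, self.level2.get(line))

mem = TwoLevelMemory(level1_capacity=2)
for line in (0x00, 0x40):                     # fill the first level
    mem.access(line, data=f"line {line:#x}")
for _ in range(3):                            # make a second-level line hot
    mem.access(0x80, data="hot line")
print(sorted(hex(l) for l in mem.level1), sorted(hex(l) for l in mem.level2))
```
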
  • Patent number: 12271319
    Abstract: Systems, methods, and computer-readable media are provided for variable precision first in, first out (FIFO) buffers (VPFBs) that dynamically change the amount of data to be stored in the VPFB based on a current amount of data stored in the VPFB and/or based on a current amount of available memory space of the VPFB. The currently unavailable memory space (or the current available memory space) is used to select the size of a next data block to be stored in the VPFB. Other embodiments are disclosed and/or claimed.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Yanjie Pan, Yong Jiang, Yuanyuan Li, Yong Zhang
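
Patent 12271319 selects the size of the next data block written to the FIFO from the memory space currently available in it. The sketch below shows one way such a buffer could behave; the three size tiers and thresholds are illustrative assumptions.

```python
from collections import deque

class VariablePrecisionFIFO:
    """FIFO whose producer picks the next block size from current occupancy:
    lots of free space -> large (high-precision) blocks, nearly full ->
    small (low-precision) blocks so the stream keeps flowing."""

    def __init__(self, capacity=1024):
        self.capacity, self.used = capacity, 0
        self.blocks = deque()

    def next_block_size(self):
        free = self.capacity - self.used      # available memory space
        if free > self.capacity // 2:
            return 256                        # high precision
        if free > self.capacity // 8:
            return 64                         # medium precision
        return 16                             # low precision

    def push(self, produce_block):
        size = self.next_block_size()
        if size > self.capacity - self.used:
            return False                      # no room even for the smallest block
        self.blocks.append(produce_block(size))
        self.used += size
        return True

    def pop(self):
        block = self.blocks.popleft()
        self.used -= len(block)
        return block

fifo = VariablePrecisionFIFO()
while fifo.push(lambda n: bytes(n)):          # block sizes shrink as it fills
    pass
print([len(b) for b in fifo.blocks])
```
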
  • Patent number: 12271327
    Abstract: Techniques and mechanisms for determining an operation to be performed with a direct memory access (DMA) request. An inspection unit (105) is coupled between an input-output memory management unit (IOMMU) (120) and an endpoint device (118). The inspection unit (105) stores a registry (330) comprising entries (332) which each correspond to a respective address, and a respective one or more resources of the endpoint device (118). A given entry (332) of the registry (330) is created based on a message from the IOMMU (120) which indicates the successful completion of an address translation to facilitate a DMA request. The endpoint device (118) performs a search, based on a DMA request, to determine if any registry (330) entry (332) indicates a combination of an address and an endpoint resource, where said combination matches a corresponding combination indicated by the DMA request. Communication of the DMA request to the IOMMU (120) is contingent on a result of the search.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Kaijie Guo, Xin Zeng, Ned Smith, Weigang Li, Junyuan Wang, Songwu Shen, Zijuan Fan, Yao Huo, Maksim Lukoshkov, Laurent Coquerel
  • Patent number: 12273274
    Abstract: System and techniques for network flow-based hardware allocation are described herein. A workload is obtained for execution. Here, the workload includes a flow that has a processing component and a network component. Then, during execution of the workload, the flow is repeatedly profiled and assigned a network service and a processing service during a next execution based on a network metric and a processing metric obtained from the profiling.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Akhilesh Thyagaturu, Hassnaa Moustafa, Lavanya Gubbala
  • Patent number: 12271249
    Abstract: An apparatus comprising circuitry to buffer video data; and a DisplayPort Transmitter to communicate the video data to a DisplayPort Receiver via a virtual channel through at least one intermediate device between the DisplayPort Transmitter and the DisplayPort Receiver, wherein the virtual channel comprises a unidirectional Main-Link and a bidirectional auxiliary channel (AUX_CH); and communicate a power down signal over the Main-Link to the at least one intermediate device and the DisplayPort Receiver in conjunction with turning off the Main-Link to place the at least one intermediate device and the DisplayPort Receiver in respective low power states.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Nausheen Ansari, Ziv Kabiry, Gal Yedidia
  • Patent number: 12270831
    Abstract: An apparatus includes a thermal heat sensor trace including conductive metal and disposed in a space transformer, the thermal heat sensor trace being configured to form a resistance, and a controller configured to sense a voltage across the resistance formed by the thermal heat sensor trace, the voltage positively correlating to a temperature of the space transformer. The controller is further configured to determine whether the sensed voltage is greater than or equal to a predetermined threshold voltage, and based on the sensed voltage being determined to be greater than or equal to the predetermined threshold voltage, output an alert signal for reducing and/or warning of the temperature of the space transformer.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventor: Arthur Isakharov
  • Patent number: 12272484
    Abstract: An inductor can be formed in a coreless electronic substrate from magnetic materials and/or fabrication processes that do not result in the magnetic materials leaching into plating and/or etching solutions/chemistries, and results in a unique inductor structure. This may be achieved by forming the inductors from magnetic ferrites. The formation of the electronic substrates may also include process sequences that prevent exposure of the magnetic ferrites to the plating and/or etching solutions/chemistries.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Srinivas Pietambaram, Pooya Tadayon, Kristof Darmawikarta, Tarek Ibrahim, Prithwish Chatterjee
  • Patent number: 12271198
    Abstract: Various systems and methods for providing autonomous driving within a restricted area are discussed. In an example, an autonomous vehicle control system can include a sensor interface for receiving data from multiple sensors for detecting an environment about the vehicle, a security processor coupled to the sensor interface and configured to receive sensor information from the sensor interface, and an autonomous driving system including one or more virtual machines configured to selectively receive information from the security processor based on a security request from infrastructure of the restricted area.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Ralf Graefe, Michael Paulitsch
  • Patent number: 12272614
    Abstract: Disclosed herein are integrated circuit (IC) packages with solder thermal interface materials (STIMs) with embedded particles, as well as related methods and devices. For example, in some embodiments, an IC package may include a package substrate, a lid, a die between the package substrate and the lid and a STIM between the die and the lid. The STIM may include embedded particles, and at least some of the embedded particles may have a diameter equal to a distance between the die and the lid.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Amitesh Saha, Sergio Antonio Chan Arguedas, Marco Aurelio Cartas, Ken Hackenberg, Peng Li
  • Patent number: 12270941
    Abstract: A light detection and ranging system is provided using a first electromagnetic radiation of a first emitting structure as a local oscillator signal for a second electromagnetic radiation received from the outside of the light detection and ranging system, wherein the first and second electromagnetic radiations are coherent and the resulting signal is detected by a detecting structure. The resulting signal corresponds to information about a target outside the light detection and ranging system.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventor: George Rakuljic
  • Patent number: 12273327
    Abstract: An improved AR/VR operation includes receiving, by a server computing device, encrypted AR/VR user data and cleartext metadata associated with the encrypted AR/VR user data from a client computing device; getting server data based at least in part on cleartext metadata; encoding the server data; performing an AR/VR process on the encrypted AR/VR user data and the encoded server data to generate encrypted AR/VR results; and sending the encrypted AR/VR results to the client computing device.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Kylan Race, Ernesto Zamora Ramos, Jeremy Bottleson, Bradley Smith
  • Patent number: 12271616
    Abstract: An embodiment of an integrated circuit comprises circuitry to share page tables associated with a page between a processor memory management unit (MMU) and an input/output memory management unit (IOMMU), store a page table entry in the memory associated with the page, and separately control access to the page from a processor and from a direct memory access (DMA) request based on one or more fields of the stored page table entry. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, David Koufaty, Rajesh Sankaran, Vedvyas Shanbhogue
  • Patent number: 12273964
    Abstract: This disclosure describes systems, methods, and devices related to multi-link operation. A device may configure a single N×N transmit (TX)/receive (RX) radio to a plurality of 1×1 TX/RX radios, where N is a positive integer. The device may monitor a first channel of a plurality of channels to determine its availability. The device may monitor a second channel of the plurality of channels to determine its availability. The device may identify a first control frame received from an access point (AP) multi-link device (MLD) on the second channel. The device may cause to send a second control frame to the AP MLD on the second channel. The device may configure back to a single N×N TX/RX radio to receive a data frame.
    Type: Grant
    Filed: March 13, 2023
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Minyoung Park, Po-Kai Huang, Thomas J. Kenney, Daniel Bravo, Ehud Reshef, Laurent Cariou, Dibakar Das, Dmitry Akhmetov
  • Patent number: 12272656
    Abstract: Embodiments disclosed herein include electronic packages and methods of fabricating electronic packages. In an embodiment, an electronic package comprises an interposer, where a cavity passes through the interposer, and a nested component in the cavity. In an embodiment, the electronic package further comprises a die coupled to the interposer by a first interconnect and coupled to the nested component by a second interconnect. In an embodiment, the first and second interconnects comprise a first bump, a bump pad over the first bump, and a second bump over the bump pad.
    Type: Grant
    Filed: October 13, 2023
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Debendra Mallik, Ravindranath Mahajan, Robert Sankman, Shawna Liff, Srinivas Pietambaram, Bharat Penmecha
  • Patent number: 12273856
    Abstract: An apparatus and system for a multiple universal subscriber identity module (MUSIM) user equipment (UE) are described. Due to activity on another USIM, the MUSIM UE transmits a service request message to release an N1 non-access stratum (NAS) signaling connection or reject a paging request received from the network, dependent on whether the MUSIM UE is in a 5th generation (5G) Mobility Management (5GMM)-CONNECTED mode or a 5GMM-IDLE mode. The service request message may contain a paging restriction that restrict paging to: all paging, all paging except for paging for voice service, all paging except for packet data unit (PDU) session(s), and all paging except for voice service and PDU session(s).
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventor: Thomas Luetzenkirchen
  • Patent number: 12271329
    Abstract: Systems, apparatuses and methods may provide for technology that collects, by a BIOS (basic input output system), memory information from a first host path to a coherent device memory on a memory expander, wherein the memory expander includes a plurality of host paths, transfers the memory information from the BIOS to an OS (operating system) via one or more OS interface tables, and initializes, by the OS, the memory expander based on the memory information, wherein the memory information includes memory capabilities and configuration settings associated with the memory expander.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Zhuangzhi Li, Jie Bai, Di Zhang, Changcheng Liu, Zhonghua Sun
  • Patent number: 12273740
    Abstract: Examples relate to processing circuitry, processing means, methods and computer programs for a base station and a user equipment. The processing circuitry for the base station is configured to select one of a first uplink beamforming management mode and a second uplink beamforming management mode for a beamformed uplink communication between a user equipment and the base station. The selection is based on a path loss on a first wireless channel between the base station and the user equipment and based on a path loss on a second wireless channel between the user equipment and the base station. The processing circuitry is configured to provide an instruction related to the selection of the first or second uplink beamforming management mode to the user equipment.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventor: Zhibin Yu
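
Patent 12273740 selects between two uplink beamforming management modes using the path losses measured on two wireless channels between the base station and the user equipment. The rule below is a guessed placeholder to illustrate that selection step; the abstract does not disclose the actual decision logic.

```python
def select_uplink_beamforming_mode(path_loss_ch1_db: float,
                                   path_loss_ch2_db: float,
                                   imbalance_threshold_db: float = 6.0) -> str:
    """Pick the first mode when the two channels show similar path loss and the
    second mode otherwise. The threshold and the rule itself are illustrative
    guesses, not taken from the patent."""
    if abs(path_loss_ch1_db - path_loss_ch2_db) <= imbalance_threshold_db:
        return "first uplink beamforming management mode"
    return "second uplink beamforming management mode"

# The base station would then signal this selection to the user equipment.
print(select_uplink_beamforming_mode(98.5, 103.0))
```
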
  • Patent number: 12271991
    Abstract: Systems, apparatuses and methods may provide for technology that receives, at a topology shader in a graphics pipeline, an object description and generates, at the topology shader, a set of polygons based on the object description. Additionally, the set of polygons may be sent to a vertex shader.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Hugues Labbe, Tomer Bar-On, Gabor Liktor, Andrew T. Lauritzen, John G. Gierach