Patents Assigned to Intel Corporation
-
Publication number: 20250117264
Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.
Type: Application
Filed: November 1, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Utkarsh Y. KAKAIYA, Rajesh M. SANKARAN, Sanjay KUMAR, Kun TIAN, Philip LANTZ
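The fast-path/slow-path split described above can be sketched as a simple dispatcher. This is a minimal illustration, not the patented implementation; the operation names and the `ai_id` field are assumptions.

```python
# Illustrative fast-path operations: work submissions that can bypass
# software emulation and go straight to an assignable interface (AI) instance.
FAST_PATH_OPS = {"submit_descriptor", "ring_doorbell"}

def dispatch(request):
    """Route a guest request: fast path to an AI instance, or slow path
    to software emulation of the device resource (names illustrative)."""
    if request["op"] in FAST_PATH_OPS:
        return ("fast", request["ai_id"])
    # Slow path: the request is at least partially serviced in software.
    return ("slow", None)
```

A configuration access, for example, would fall back to emulation, while a doorbell ring is forwarded directly.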
-
Publication number: 20250116812
Abstract: Described herein are stacked photonic integrated circuit (PIC) assemblies that include multiple layers of waveguides. The waveguides are formed of substantially monocrystalline materials, which cannot be repeatedly deposited. Layers of monocrystalline material are fabricated and repeatedly transferred onto the PIC structure using a layer transfer process, which involves bonding a monocrystalline material using a non-monocrystalline bonding material. Layers of isolation materials are also deposited or layer transferred onto the PIC assembly.
Type: Application
Filed: December 17, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Abhishek A. Sharma, Wilfred Gomes
-
Publication number: 20250118641
Abstract: Generally discussed herein are systems, methods, and apparatuses that include conductive pillars that are about co-planar. According to an example, a technique can include growing conductive pillars on respective exposed landing pads of a substrate, situating molding material around and on the grown conductive pillars, removing, simultaneously, a portion of the grown conductive pillars and the molding material to make the grown conductive pillars and the molding material about planar, and electrically coupling a die to the conductive pillars.
Type: Application
Filed: December 19, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Robert L. Sankman, Sanka Ganesan
-
Publication number: 20250119733
Abstract: This disclosure describes systems, methods, and devices related to using encrypted 802.11 association. A device may identify a beacon received from an access point (AP), the beacon including an indication of an authentication and key manager (AKM); transmit, to the AP, an 802.11 authentication request including an indication of parameters associated with the AKM; identify an 802.11 authentication response received from the AP based on the 802.11 authentication request, the 802.11 authentication response including a message integrity check (MIC) using a key confirmation key (KCK) and an indication that the parameters have been selected by the AP; transmit, to the AP, an 802.11 association request encrypted by a security key based on an authenticator address of the AP; and identify an 802.11 association response received from the AP based on the 802.11 association request, the 802.11 association response encrypted by the security key.
Type: Application
Filed: December 17, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Po-Kai HUANG, Ilan PEER, Johannes BERG, Ido OUZIELI, Elad OREN, Emily QI
-
Publication number: 20250117873
Abstract: Techniques to improve performance of matrix multiply operations are described in which a compute kernel can specify one or more element-wise operations to perform on output of the compute kernel before the output is transferred to higher levels of a processor memory hierarchy.
Type: Application
Filed: October 4, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Eriko Nurvitadhi, Balaji Vembu, Tsung-Han Lin, Kamal Sinha, Rajkishore Barik, Nicolas C. Galoppo Von Borries
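The fused-epilogue idea above can be shown in a few lines: the element-wise operation is applied to the matmul output before it would be written out. A toy sketch, not the kernel mechanism the patent claims:

```python
def matmul(a, b):
    # Plain row-by-column matrix multiply over lists of lists.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def matmul_with_epilogue(a, b, op):
    # Apply the element-wise op to each output element before it would be
    # transferred to higher levels of the memory hierarchy.
    return [[op(x) for x in row] for row in matmul(a, b)]
```

Fusing, say, a ReLU this way avoids a second pass over the output tile in memory.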
-
Publication number: 20250117875
Abstract: In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive an input from one or more detectors proximate a display to present an output from a graphics pipeline, determine that a user is not interacting with the display, and in response to a determination that the user is not interacting with the display, to reduce a frame rendering rate of the graphics pipeline. Other embodiments are also disclosed and claimed.
Type: Application
Filed: December 9, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Balaji Vembu, Nikos Kaburlasos, Josh B. Mastronarde
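The policy above reduces to a small decision function. The frame-rate values here are illustrative, not taken from the disclosure:

```python
def target_frame_rate(user_interacting, active_fps=60, idle_fps=10):
    # Drop the graphics pipeline's rendering rate when the detectors near
    # the display report no user interaction (fps values are assumptions).
    return active_fps if user_interacting else idle_fps
```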
-
Publication number: 20250120036
Abstract: A datacenter including a plurality of racks. The racks are associated with a motorized and/or automated system to move the racks between first and second positions. In the first position, the racks are arranged in a side-by-side fashion in one or more rows. In the second position, a rack is moved so that a lateral side of the rack is accessible. In some embodiments, the racks include a motor and gear system for interacting with tracks. In some embodiments, each of the racks includes a plurality of chassis, each chassis including a plurality of input/output (I/O) connectors to receive a connector of a cable, the plurality of I/O connectors are arranged along a lateral side of the chassis so that they are accessible when the rack is in the second position. In use, the racks may be moved between the first and second positions while the chassis remain in normal operation.
Type: Application
Filed: December 17, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Ralph Jensen, Michael Crocker, Carl Williams
-
Publication number: 20250117356
Abstract: Methods and apparatus relating to techniques for multi-tile memory management. In an example, a graphics processor includes an interposer, a first chiplet coupled with the interposer, the first chiplet including a graphics processing resource and an interconnect network coupled with the graphics processing resource, cache circuitry coupled with the graphics processing resource via the interconnect network, and a second chiplet coupled with the first chiplet via the interposer, the second chiplet including a memory-side cache and a memory controller coupled with the memory-side cache. The memory controller is configured to enable access to a high-bandwidth memory (HBM) device, the memory-side cache is configured to cache data associated with a memory access performed via the memory controller, and the cache circuitry is logically positioned between the graphics processing resource and a chiplet interface.
Type: Application
Filed: October 15, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Aravindh Anantaraman, Elmoustapha Ould-Ahmed-Vall, Valentin Andrei, Nicolas Galoppo Von Borries, Varghese George, Mike Macpherson, Subramaniam Maiyuran, Joydeep Ray, Lakshminarayanan Striramassarma, Scott Janus, Brent Insko, Vasanth Ranganathan, Kamal Sinha, Arthur Hunter, Prasoonkumar Surti, David Puffer, James Valerio, Ankur N. Shah
-
Publication number: 20250120143
Abstract: Described herein are gate-all-around (GAA) transistors with extended drains, where the drain region extends through a well region below the GAA transistor. A high voltage can be applied to the drain, and the extended drain region provides a voltage drop. The transistor length (and, specifically, the length of the extended drain) can be varied based on the input voltage to the device, e.g., providing a longer drain for higher input voltages. The extended drain transistors can be implemented in devices that include CFETs, either by implementing the extended drain transistor across both CFET layers, or by providing a sub-fin pedestal with the well regions in the lower layer.
Type: Application
Filed: October 6, 2023
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Sanjay Rangan, Adam Brand, Chen-Guan Lee, Rahul Ramaswamy, Hsu-Yu Chang, Adithya Shankar, Marko Radosavljevic
-
Publication number: 20250118698
Abstract: Microelectronic assemblies, related devices and methods, are disclosed herein. In some embodiments, a microelectronic assembly may include a first die, having a first surface and an opposing second surface, in a first layer; a redistribution layer (RDL) on the first layer, wherein the RDL is electrically coupled to the second surface of the first die by solder interconnects, and a second die in a second layer on the RDL, wherein the second die is electrically coupled to the RDL by non-solder interconnects.
Type: Application
Filed: December 19, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Xavier Francois Brun, Sanka Ganesan, Holly Sawyer, William J. Lambert, Timothy A. Gosselin, Yuting Wang
-
Publication number: 20250117503
Abstract: The disclosed embodiments are generally directed to inline encryption of data at line speed at a chip interposed between two memory components. The inline encryption may be implemented at a System-on-Chip ("SoC" or "SOC"). The memory components may comprise Non-Volatile Memory express (NVMe) and a dynamic random access memory (DRAM). An exemplary device includes an SOC to communicate with Non-Volatile Memory express (NVMe) circuitry to provide direct memory access (DMA) to an external memory component. The SOC may include: a cryptographic controller circuitry; a cryptographic memory circuitry in communication with the cryptographic controller, the cryptographic memory circuitry configured to store instructions to encrypt or decrypt data transmitted through the SOC; and an encryption engine in communication with the cryptographic controller circuitry, the encryption engine configured to encrypt or decrypt data according to instructions stored at the cryptographic memory circuitry. Other embodiments are also disclosed and claimed.
Type: Application
Filed: October 29, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Prashant Dewan, Baiju Patel
-
Publication number: 20250117318
Abstract: Memory management for wireless networks is described. A method includes accessing an operational parameter for a network slice of a wireless network, determining a first memory region of a plurality of memory regions in a memory pool based on the operational parameter, and encoding configuration information to allocate the first memory region to the network slice. Other embodiments are described and claimed.
Type: Application
Filed: December 19, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Sunku Ranganath, John Browne, Hassnaa Moustafa, Mandar Chincholkar, Amar Srivastava
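The selection step above (pick a region from the pool that satisfies the slice's operational parameter) can be sketched as follows. The bandwidth-based criterion is an assumption for illustration:

```python
def pick_region(regions, required_bandwidth):
    """Choose the first memory region whose bandwidth satisfies the
    slice's operational parameter (a stand-in for the selection logic).

    regions: list of (name, bandwidth) pairs describing the memory pool.
    """
    for name, bandwidth in regions:
        if bandwidth >= required_bandwidth:
            return name
    raise ValueError("no region in the pool satisfies the slice requirement")
```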
-
Publication number: 20250117673
Abstract: Techniques described herein address the challenges that arise when using host-executed software to manage vector databases by providing a vector database accelerator and shard management offload logic that is implemented within hardware and by software executed on device processors and programmable data planes of a programmable network interface device. In one embodiment, a programmable network interface device includes infrastructure management circuitry configured to facilitate data access for a neural network inference engine having a distributed data model via dynamic management of a node associated with the neural network inference engine, the node including a database shard of a vector database.
Type: Application
Filed: December 16, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Anjali Singhai Jain, Tamar Bar-Kanarik, Marcos Carranza, Karthik Kumar, Cristian Florin Dumitrescu, Keren Guy, Patrick Connor
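Shard management at its core is routing a key to one of N shards. A minimal hash-routing sketch (one plausible scheme; the patent offloads this kind of management to the network interface device rather than prescribing this function):

```python
import hashlib

def shard_for_key(key, num_shards):
    # Deterministic hash routing of a vector key to a database shard.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards
```

Because the mapping is deterministic, any data plane holding the same function routes a given vector to the same shard without coordination.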
-
Publication number: 20250120102
Abstract: Package substrates with components included in cavities of glass cores are disclosed. An example apparatus includes: a glass core having a first opening and a second opening spaced apart from the first opening, the second opening having a greater width than the first opening. The example apparatus further includes a conductive material adjacent a first wall of the first opening; and a dielectric material adjacent a second wall of the second opening.
Type: Application
Filed: December 17, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Brandon Christian Marin, Whitney Bryks, Gang Duan, Jeremy Ecton, Jason Gamba, Haifa Hariri, Sashi Shekhar Kandanur, Joseph Peoples, Srinivas Venkata Ramanuja Pietambaram, Mohammad Mamunur Rahman, Bohan Shan, Joshua James Stacey, Hiroki Tanaka, Jacob Ryan Vehonsky
-
Publication number: 20250117633
Abstract: Predictive uncertainty of a generative machine learning model may be estimated. The generative machine learning model may be a large language model or large multi-modal model. A datum may be input into the generative machine learning model. The generative machine learning model may generate outputs from the datum. Latent embeddings for the outputs may be extracted from the generative machine learning model. A covariance matrix with respect to the latent embeddings may be computed. The covariance matrix may be a two-dimensional matrix, such as a square matrix. The predictive uncertainty of the generative machine learning model may be estimated using the covariance matrix. For instance, the matrix entropy of the covariance matrix may be determined. The matrix entropy may be an approximated dimension of a latent semantic manifold spanned by the outputs of the generative machine learning model and may indicate the predictive uncertainty of the generative machine learning model.
Type: Application
Filed: December 19, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Anthony Daniel Rhodes, Ramesh Radhakrishna Manuvinakurike, Sovan Biswas, Giuseppe Raffa, Lama Nachman
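The covariance-then-entropy pipeline above can be demonstrated with tiny 2-D "embeddings" (2-D keeps the eigenvalue computation analytic; a real model's latent embeddings are much wider, and this sketch is not the patented method):

```python
import math

def covariance2d(embs):
    # Sample covariance matrix of a list of 2-D latent embeddings.
    n = len(embs)
    mx = sum(x for x, _ in embs) / n
    my = sum(y for _, y in embs) / n
    cxx = sum((x - mx) ** 2 for x, _ in embs) / (n - 1)
    cyy = sum((y - my) ** 2 for _, y in embs) / (n - 1)
    cxy = sum((x - mx) * (y - my) for x, y in embs) / (n - 1)
    return [[cxx, cxy], [cxy, cyy]]

def matrix_entropy(c):
    # Entropy over the normalized eigenvalue spectrum of the covariance
    # matrix: near 0 when outputs span a line (low uncertainty about the
    # semantic direction), larger when they spread over more dimensions.
    tr = c[0][0] + c[1][1]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    probs = [e / tr for e in eigs if e > 1e-12]
    return -sum(p * math.log(p) for p in probs)
```

Collinear outputs give entropy 0 (the spanned manifold is effectively one-dimensional); isotropically scattered outputs give the maximum, log 2 for two dimensions.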
-
Publication number: 20250117360
Abstract: A processing apparatus includes a processing resource including a general-purpose parallel processing engine and a matrix accelerator. The matrix accelerator includes first circuitry to receive a command to perform operations associated with an instruction, second circuitry to configure the matrix accelerator according to a physical depth of a systolic array within the matrix accelerator and a logical depth associated with the instruction, third circuitry to read operands for the instruction from a register file associated with the systolic array, fourth circuitry to perform operations for the instruction via one or more passes through one or more physical pipeline stages of the systolic array based on a configuration performed by the second circuitry, and fifth circuitry to write output of the operations to the register file associated with the systolic array.
Type: Application
Filed: October 30, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Jorge Parra, Wei-yu Chen, Kaiyu Chen, Varghese George, Junjie Gu, Chandra Gurram, Guei-Yuan Lueh, Stephen Junkins, Subramaniam Maiyuran, Supratim Pal
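One plausible reading of the physical-depth/logical-depth configuration step is that an instruction whose logical depth exceeds the systolic array's physical depth is served by multiple passes. A hedged arithmetic sketch of that interpretation (not the claimed circuitry):

```python
def passes_needed(logical_depth, physical_depth):
    # Ceiling division: how many passes through the physical pipeline
    # stages cover the instruction's logical depth.
    return -(-logical_depth // physical_depth)
```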
-
Publication number: 20250118003
Abstract: An apparatus to facilitate exception handling for debugging in a graphics environment is disclosed. The apparatus includes load store pipeline hardware circuitry to: in response to a page fault exception being enabled for a memory access request received from a thread of the plurality of threads, allocate a memory dependency token correlated to a scoreboard identifier (SBID) that is included with the memory access request; send, to memory fabric of the graphics processor, the memory access request comprising the memory dependency token; receive, from the memory fabric in response to the memory access request, a memory access response comprising the memory dependency token and indicating occurrence of a page fault error condition and fault details associated with the page fault error condition; and return the SBID associated with the memory access response and fault details of the page fault error condition to a debug register of the thread.
Type: Application
Filed: October 18, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: John Wiegert, Joydeep Ray, Fabian Schnell, Kelvin Thomas Gardiner
-
Publication number: 20250117063
Abstract: Methods and apparatus to improve user experience on computing devices are disclosed. An example computing device includes a microphone to capture audio corresponding to spoken words. The example computing device further includes a speech analyzer to: detect a keyword prompt from among the spoken words, the keyword prompt to precede a query statement of a user of the computing device; and identify topics associated with a subset of the spoken words, the subset of the spoken words captured by the microphone before the keyword prompt. The example computing device also includes a communications interface to, in response to detection of the keyword prompt, transmit information indicative of the query statement and ones of the identified topics to a remote server.
Type: Application
Filed: December 6, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Kristoffer Fleming, Melanie Daniels, Paul Diefenbaugh, Aleksander Magi, Lawrence Falkenstein, Raoul Rivas Toledano, Vishal Sinha, Deepak Samuel Kirubakaran, Venkateshan Udhayan, Marko Bartscherer, Kathy Bui
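The split the abstract describes (words before the keyword prompt feed topic identification, words after it form the query) can be sketched directly. The keyword itself is an assumption for illustration:

```python
def split_at_keyword(words, keyword="computer"):
    """Partition a transcript at the keyword prompt: preceding words are
    topic context, following words are the query (keyword illustrative)."""
    if keyword not in words:
        return None  # no prompt detected; nothing is transmitted
    i = words.index(keyword)
    return {"topic_words": words[:i], "query_words": words[i + 1:]}
```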
-
Publication number: 20250117639
Abstract: Methods, apparatus, systems and articles of manufacture for loss-error-aware quantization of a low-bit neural network are disclosed. An example apparatus includes a network weight partitioner to partition unquantized network weights of a first network model into a first group to be quantized and a second group to be retrained. The example apparatus includes a loss calculator to process network weights to calculate a first loss. The example apparatus includes a weight quantizer to quantize the first group of network weights to generate low-bit second network weights. In the example apparatus, the loss calculator is to determine a difference between the first loss and a second loss. The example apparatus includes a weight updater to update the second group of network weights based on the difference. The example apparatus includes a network model deployer to deploy a low-bit network model including the low-bit second network weights.
Type: Application
Filed: September 16, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Anbang Yao, Aojun Zhou, Kuan Wang, Hao Zhao, Yurong Chen
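The partition-then-quantize steps above can be sketched on a flat weight list. The magnitude-based partition heuristic and the ternary level set are assumptions for illustration; the abstract does not fix either:

```python
def partition_by_magnitude(weights, frac):
    # First group: the largest-magnitude fraction of weights, to be
    # quantized; the rest stay full precision for retraining.
    k = int(len(weights) * frac)
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    return set(order[:k])

def quantize_group(weights, idx, levels=(-1.0, 0.0, 1.0)):
    # Snap each selected weight to the nearest low-bit level, scaled by
    # the group's maximum magnitude.
    scale = max(abs(weights[i]) for i in idx)
    out = list(weights)
    for i in idx:
        out[i] = scale * min(levels, key=lambda l: abs(weights[i] / scale - l))
    return out
```

The retraining loop (updating the second group to close the loss gap introduced by quantization) would iterate these steps until all weights are low-bit.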
-
Publication number: 20250117285
Abstract: In one embodiment, an apparatus includes: an integrity circuit to receive data and generate a protection code based at least in part on the data; a cryptographic circuit coupled to the integrity circuit to encrypt the data into encrypted data and encrypt the protection code into an encrypted protection code; a message authentication code (MAC) circuit coupled to the cryptographic circuit to compute a MAC comprising a tag using header information, the encrypted data, and the encrypted protection code; and an output circuit to send the header information, the encrypted data, and the tag to a receiver via a link. Other embodiments are described and claimed.
Type: Application
Filed: December 9, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Raghunandan MAKARAM, Kirk S. YAP