Patents by Inventor Marcos E. Carranza

Marcos E. Carranza has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230273821
    Abstract: A method is described. The method includes dispatching jobs across electronic hardware components. The electronic hardware components are to process the jobs. The electronic hardware components are coupled to respective cooling systems. The respective cooling systems are each capable of cooling according to different cooling mechanisms. The different cooling mechanisms have different performance and cost operating realms. The dispatching of the jobs includes assigning the jobs to specific ones of the electronic hardware components to keep the cooling systems operating in one or more of the realms having lower performance and cost than another one of the realms.
    Type: Application
    Filed: April 18, 2023
    Publication date: August 31, 2023
    Inventors: Amruta MISRA, Francesc GUIM BERNAT, Kshitij A. DOSHI, Marcos E. CARRANZA, John J. BROWNE, Arun HODIGERE
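    A minimal Python sketch of the cooling-aware dispatch policy summarized in publication 20230273821 above, assuming hypothetical realm names, Component fields, and a simple load-to-realm model that are not taken from the filing:

    ```python
    from dataclasses import dataclass

    # Hypothetical cooling realms, ordered from cheapest/lowest-performance
    # to most expensive/highest-performance.
    REALMS = ["passive", "air", "liquid", "immersion"]

    @dataclass
    class Component:
        name: str
        load: float        # current utilization, 0.0 .. 1.0
        realm_index: int   # realm the cooling system currently operates in

    def realm_after(component: Component, job_load: float) -> int:
        """Estimate which cooling realm the component needs after taking the job."""
        projected = component.load + job_load
        # Assume each 25% of utilization pushes the cooling system one realm up.
        return min(int(projected / 0.25), len(REALMS) - 1)

    def dispatch(job_load: float, components: list) -> Component:
        """Assign the job to the component that stays in the cheapest cooling realm."""
        best = min(components, key=lambda c: (realm_after(c, job_load), c.load))
        best.load += job_load
        best.realm_index = realm_after(best, 0.0)
        return best

    pool = [Component("node-a", 0.40, 1), Component("node-b", 0.10, 0)]
    target = dispatch(0.20, pool)
    print(f"job dispatched to {target.name}, cooling realm {REALMS[target.realm_index]}")
    ```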
  • Publication number: 20230222025
    Abstract: Reliability, availability, and serviceability (RAS)-based memory domains can enable applications to store data in memory domains having different degrees of reliability to reduce downtime and data corruption due to memory errors. In one example, memory resources are classified into different RAS-based memory domains based on their expected likelihood of encountering errors. The mapping of memory resources into RAS-based memory domains can be dynamically managed and updated when information indicative of reliability (such as the occurrence of errors or other information) suggests that a memory resource is becoming less reliable. The RAS-based memory domains can be exposed to applications to enable them to allocate critical data in high-reliability memory.
    Type: Application
    Filed: March 21, 2023
    Publication date: July 13, 2023
    Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Mark A. SCHMISSEUR, Thomas WILLHALM, Marcos E. CARRANZA
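    A minimal Python sketch of the RAS-based memory-domain classification described in publication 20230222025 above, assuming invented error-count thresholds and DIMM identifiers:

    ```python
    from collections import defaultdict

    # Hypothetical error-count thresholds (corrected errors observed per day)
    # that divide memory resources into RAS-based domains.
    DOMAIN_THRESHOLDS = [("high_reliability", 1), ("standard", 10), ("best_effort", float("inf"))]

    class RasDomainMap:
        def __init__(self):
            self.errors = defaultdict(int)   # DIMM id -> observed error count
            self.domains = {}                # DIMM id -> domain name

        def classify(self, dimm: str) -> str:
            """Map a resource to a domain based on its expected likelihood of errors."""
            for name, limit in DOMAIN_THRESHOLDS:
                if self.errors[dimm] <= limit:
                    self.domains[dimm] = name
                    return name

        def record_error(self, dimm: str) -> None:
            """Dynamically re-classify a resource that is becoming less reliable."""
            self.errors[dimm] += 1
            self.classify(dimm)

        def allocate(self, domain: str):
            """Return a DIMM from the requested domain, e.g. high_reliability for critical data."""
            return next((d for d, name in self.domains.items() if name == domain), None)

    ras = RasDomainMap()
    for dimm in ("dimm0", "dimm1"):
        ras.classify(dimm)
    for _ in range(5):
        ras.record_error("dimm1")            # dimm1 drifts into the 'standard' domain
    print(ras.allocate("high_reliability"))  # -> dimm0
    ```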
  • Publication number: 20230156826
    Abstract: Various approaches for the integration and use of edge computing operations in satellite communication environments are discussed herein. For example, connectivity and computing approaches are discussed with reference to: identifying satellite coverage and compute operations available in low earth orbit (LEO) satellites, establishing connection streams via LEO satellite networks, identifying and implementing geofences for LEO satellites, coordinating and planning data transfers across ephemeral satellite connected devices, service orchestration via LEO satellites based on data cost, handover of compute and data operations in LEO satellite networks, and managing packet processing, among other aspects.
    Type: Application
    Filed: December 24, 2020
    Publication date: May 18, 2023
    Inventors: Stephen T. Palermo, Francesc Guim Bernat, Marcos E. Carranza, Kshitij Arun Doshi, Cesar Martinez-Spessot, Thijs Metsch, Ned M. Smith, Srikathyayani Srikanteswara, Timothy Verrall, Rita H. Wouhaybi, Yi Zhang, Weiqiang MA, Atul Kwatra
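    The abstract of publication 20230156826 above spans several connectivity and compute aspects; as one narrow illustration, here is a Python sketch of a geofence check that gates whether a LEO satellite's compute may serve a region. The coordinates and radius are hypothetical.

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2) -> float:
        """Great-circle distance between two points, in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def inside_geofence(sat_ground_track, fence_center, fence_radius_km) -> bool:
        """Allow compute/data operations only while the ground track is inside the fence."""
        return haversine_km(*sat_ground_track, *fence_center) <= fence_radius_km

    print(inside_geofence((48.9, 2.3), fence_center=(48.85, 2.35), fence_radius_km=100))  # True
    print(inside_geofence((10.0, 2.3), fence_center=(48.85, 2.35), fence_radius_km=100))  # False
    ```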
  • Publication number: 20230138094
    Abstract: Methods and apparatus for opportunistic memory pools. The memory architecture is extended with logic that divides and tracks the memory fragmentation in each of a plurality of smart devices in two virtual memory partitions: (1) the allocated-unused partition, containing memory that is earmarked for (allocated to), but remains unutilized by, the actual workloads running or by the device itself (bit-streams, applications, etc.); and (2) the unallocated partition, which collects unused memory ranges and pushes them into an Opportunistic Memory Pool (OMP) that is exposed to the platform's memory controller and operating system. The two partitions of the OMP allow temporary utilization of otherwise unused memory. Under alternate configurations, the total amount of memory resources is presented as a monolithic resource or two monolithic memory resources (unallocated and allocated-but-unused) available for utilization by the devices and applications running in the platform.
    Type: Application
    Filed: December 28, 2022
    Publication date: May 4, 2023
    Inventors: Francesc GUIM BERNAT, Marcos E. CARRANZA, Cesar Ignacio MARTINEZ SPESSOT, Kshitij A. DOSHI, Ned SMITH
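    A minimal Python sketch of the two virtual partitions tracked per smart device in publication 20230138094 above, with hypothetical device names and capacities:

    ```python
    class DeviceMemory:
        """Tracks memory on one smart device as two virtual partitions:
        allocated-but-unused and unallocated."""
        def __init__(self, total_mb: int):
            self.total = total_mb
            self.allocated = 0   # earmarked for workloads or the device itself
            self.used = 0        # actually touched by workloads

        @property
        def allocated_unused(self) -> int:
            return self.allocated - self.used

        @property
        def unallocated(self) -> int:
            return self.total - self.allocated

    class OpportunisticMemoryPool:
        """Collects unused ranges from all devices for temporary platform use."""
        def __init__(self, devices: list):
            self.devices = devices

        def available(self) -> int:
            # Both partitions can be lent out temporarily.
            return sum(d.allocated_unused + d.unallocated for d in self.devices)

    fpga, nic = DeviceMemory(4096), DeviceMemory(2048)
    fpga.allocated, fpga.used = 3072, 1024
    nic.allocated, nic.used = 512, 512
    omp = OpportunisticMemoryPool([fpga, nic])
    print(f"{omp.available()} MB exposed to the platform memory controller")
    ```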
  • Publication number: 20230135645
    Abstract: Various approaches for deploying and controlling distributed compute operations with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Kshitij Arun Doshi, Marcos E. Carranza
  • Publication number: 20230135938
    Abstract: Various approaches for service mesh switching, including the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. For example, a packet that includes a service request for a service may be received at a networking infrastructure device. The service may include an application that spans multiple nodes in a network. An outbound interface of the networking infrastructure device may be selected through which to route the packet. The selection of the outbound interface may be based on a service component of the service request in the packet and network metrics that correspond to the service. The packet may then be transmitted using the outbound interface.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Marcos E. Carranza, Francesc Guim Bernat, Kshitij Arun Doshi, Karthik Kumar, Srikathyayani Srikanteswara, Mateo Guzman
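    A minimal Python sketch of the outbound-interface selection described in publication 20230135938 above, assuming invented per-service network metrics and interface names:

    ```python
    # Hypothetical per-interface network metrics for each service component,
    # e.g. measured latency in milliseconds toward each next hop.
    SERVICE_METRICS = {
        "inference": {"eth0": 4.2, "eth1": 1.8},
        "storage":   {"eth0": 0.9, "eth1": 3.5},
    }

    def select_outbound_interface(packet: dict) -> str:
        """Pick the interface with the best metric for the packet's service component."""
        metrics = SERVICE_METRICS[packet["service_component"]]
        return min(metrics, key=metrics.get)

    def route(packet: dict) -> None:
        iface = select_outbound_interface(packet)
        print(f"forwarding {packet['service_component']} request via {iface}")  # stand-in for transmit

    route({"service_component": "inference", "payload": b"..."})  # -> eth1
    route({"service_component": "storage", "payload": b"..."})    # -> eth0
    ```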
  • Publication number: 20230136615
    Abstract: Various approaches for deploying and using virtual pools of compute resources with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A host computing system may be configured to operate a virtual pool of resources, with operations including: identifying, at the host computing system, availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool that is provided on behalf of a client computing system, based on the request being coordinated via the network infrastructure device and including at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Kshitij Arun Doshi
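    A minimal Python sketch of the host-side flow in publication 20230136615 above (advertise a free resource to the network infrastructure device, then service a QoS-tagged request); the class names, messages, and QoS field are assumptions:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ResourceRequest:
        resource: str
        client: str
        qos: dict  # e.g. {"max_latency_ms": 10}

    class InfraDevice:
        """Network infrastructure device coordinating the virtual resource pool."""
        def __init__(self):
            self.pool = []  # (host, resource) pairs forming the virtual pool
        def register(self, host: str, resource: str) -> None:
            self.pool.append((host, resource))

    class HostNode:
        def __init__(self, name: str, infra: InfraDevice):
            self.name, self.infra = name, infra
            self.free_resources = {"gpu0": {"max_latency_ms": 5}}

        def advertise(self) -> None:
            """Notify the infrastructure device which local resources are available."""
            for res in self.free_resources:
                self.infra.register(self.name, res)

        def service(self, req: ResourceRequest) -> bool:
            """Honor the request only if the resource can meet the QoS requirement."""
            caps = self.free_resources.get(req.resource)
            return caps is not None and caps["max_latency_ms"] <= req.qos["max_latency_ms"]

    infra = InfraDevice()
    host = HostNode("edge-host-1", infra)
    host.advertise()
    print(host.service(ResourceRequest("gpu0", "client-7", {"max_latency_ms": 10})))  # True
    ```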
  • Publication number: 20230134683
    Abstract: Various approaches for configuring interleaving in a memory pool used in an edge computing arrangement, including with the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. An example system may discover and map disaggregated memory resources at respective compute locations connected to one another via at least one interconnect. The system may identify workload requirements for use of the compute locations by respective workloads provided by client devices to the compute locations. The system may determine an interleaving arrangement for a memory pool that fulfills the workload requirements, and use the interleaving arrangement to distribute data for the respective workloads among the disaggregated memory resources. The system may configure the memory pool for use by the client devices of the network, such that the memory pool causes the disaggregated memory resources to host data based on the interleaving arrangement.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Marcos E. Carranza, Francesc Guim Bernat, Karthik Kumar, Kshitij Arun Doshi
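    A minimal Python sketch of choosing an interleaving arrangement over discovered disaggregated memory, in the spirit of publication 20230134683 above; the locations, capacities, and 4 GB stripe size are hypothetical, and capacity checks are omitted:

    ```python
    from itertools import cycle

    # Hypothetical discovered memory resources: location -> capacity in GB.
    DISCOVERED = {"rack1/cxl0": 256, "rack2/cxl1": 256, "rack3/cxl2": 128}

    def interleave_plan(workload_gb: int, stripe_gb: int = 4) -> list:
        """Round-robin stripes of the workload's data across the discovered resources."""
        plan, remaining = [], workload_gb
        for location in cycle(DISCOVERED):
            if remaining <= 0:
                break
            chunk = min(stripe_gb, remaining)
            plan.append((location, chunk))
            remaining -= chunk
        return plan

    print(interleave_plan(10))
    # [('rack1/cxl0', 4), ('rack2/cxl1', 4), ('rack3/cxl2', 2)]
    ```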
  • Publication number: 20230140252
    Abstract: Various approaches for deploying and controlling distributed compute operations with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. For example, a request to verify integrity of a device is received at a networking infrastructure device. A representation of device components of the device may be obtained. The representation of the device components may be compared with a reference value held by the networking infrastructure device. A response to the request may be transmitted based on matching the representation of the device components and the reference value. Here, the response indicates that the integrity of the device is intact.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Marcos E. Carranza, Dario Nicolas Oliver, Francesc Guim Bernat, Mateo Guzman, Cesar Martinez-Spessot
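    A minimal Python sketch of the integrity-verification flow in publication 20230140252 above, assuming a hypothetical component inventory that is hashed and compared against a reference value held by the networking infrastructure device:

    ```python
    import hashlib
    import json

    def component_digest(components: dict) -> str:
        """Deterministic digest over a device's component inventory (firmware versions, etc.)."""
        canonical = json.dumps(components, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    # Reference value provisioned on the networking infrastructure device.
    REFERENCE = component_digest({"bios": "1.4.2", "nic_fw": "22.31", "os": "linux-6.1"})

    def verify_integrity(reported_components: dict) -> dict:
        """Build the response to an integrity-verification request."""
        intact = component_digest(reported_components) == REFERENCE
        return {"integrity": "intact" if intact else "compromised"}

    print(verify_integrity({"bios": "1.4.2", "nic_fw": "22.31", "os": "linux-6.1"}))
    print(verify_integrity({"bios": "1.4.2", "nic_fw": "99.0", "os": "linux-6.1"}))
    ```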
  • Publication number: 20230022544
    Abstract: In one embodiment, an apparatus couples to a host processor over a Compute Express Link (CXL)-based link. The apparatus includes a transaction queue to queue memory transactions to be completed in an addressable memory coupled to the apparatus, a transaction cache, conflict detection circuitry to determine whether a conflict exists between memory transactions, and transaction execution circuitry. The transaction execution circuitry may access a transaction from the transaction queue, the transaction to implement one or more memory operations in the memory, store data from the memory to be accessed by the transaction operations in the transaction cache, execute operations of the transaction, including modifying data from the memory location stored in the transaction cache, and based on completion of the transaction, cause the modified data from the transaction cache to be stored in the memory.
    Type: Application
    Filed: September 30, 2022
    Publication date: January 26, 2023
    Applicant: Intel Corporation
    Inventors: Thomas J. Willhalm, Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza
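    A toy Python model of the queue / conflict-detection / transaction-cache / commit flow described in publication 20230022544 above; it is not CXL-accurate, and the data structures are assumptions:

    ```python
    from collections import deque

    class TransactionalMemoryDevice:
        """Toy model: queue transactions, detect conflicts, buffer in a cache, commit."""
        def __init__(self):
            self.memory = {}           # address -> value (the addressable memory)
            self.queue = deque()       # pending transactions
            self.active_addrs = set()  # addresses touched by in-flight transactions

        def submit(self, ops: list) -> None:
            """ops: list of ('read' | 'write', address, value) tuples."""
            self.queue.append(ops)

        def execute_next(self) -> bool:
            ops = self.queue.popleft()
            addrs = {a for _, a, _ in ops}
            if addrs & self.active_addrs:          # conflict detection
                self.queue.append(ops)             # retry later
                return False
            self.active_addrs |= addrs
            cache = {a: self.memory.get(a, 0) for a in addrs}   # transaction cache
            for kind, addr, value in ops:
                if kind == "write":
                    cache[addr] = value            # modify only the cached copy
            self.memory.update(cache)              # commit on completion
            self.active_addrs -= addrs
            return True

    dev = TransactionalMemoryDevice()
    dev.submit([("write", 0x10, 7), ("read", 0x20, 0)])
    dev.execute_next()
    print(dev.memory)   # {16: 7, 32: 0}
    ```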
  • Publication number: 20230013452
    Abstract: System and techniques for an environmental control loop are described herein. A device for an environmental control loop can include a memory including instructions and processing circuitry that, when in operation, can be configured by the instructions to receive environmental sensor data from a first component in a set of heterogeneous components installed in an environment with a controller. The environmental sensor data can indicate a service level value sensed by the first component. The controller can also measure a violation of a service level objective based on comparing the environmental sensor data to a threshold. The controller can also transmit an adjustment to an operating parameter of a second component of the set of heterogeneous components. The adjustment can be operative to attenuate the violation of the service level objective when implemented by the second component.
    Type: Application
    Filed: September 27, 2022
    Publication date: January 19, 2023
    Inventors: S M Iftekharul Alam, Marcos E. Carranza, Francesc Guim Bernat, Mateo Guzman, Satish Chandra Jha, Cesar Martinez-Spessot, Arvind Merwaday, Rajesh Poornachandran, Vesh Raj Sharma Banjade, Kathiravetpillai Sivanesan, Ned M. Smith, Liuyang Lily Yang, Mario Jose Divan Koller
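    A minimal Python sketch of one pass of the control loop in publication 20230013452 above, assuming a temperature SLO sensed by one component and a fan duty cycle as the adjustable parameter of a second component (both invented):

    ```python
    def control_step(sensed_temp_c: float, slo_max_c: float, fan_duty: float) -> float:
        """One pass of the loop: measure the SLO violation and adjust a second component."""
        violation = sensed_temp_c - slo_max_c
        if violation <= 0:
            return fan_duty                      # SLO met, no adjustment needed
        # Proportional response: raise fan duty cycle 5 percentage points per degree over.
        return min(1.0, fan_duty + 0.05 * violation)

    duty = 0.30
    for reading in (24.0, 27.5, 31.0):           # environmental sensor data from component 1
        duty = control_step(reading, slo_max_c=26.0, fan_duty=duty)
        print(f"sensor {reading:.1f} C -> fan duty {duty:.2f}")  # adjustment sent to component 2
    ```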
  • Publication number: 20220408401
    Abstract: An apparatus and system are described to provide indoor positioning and movement information using a private next generation (NG) network. A heatmap of pathloss vs. distance from a remote radio unit (RRU) is provided from the UE and federated with other heatmaps from different UEs under similar conditions. The federated heatmap is provided to the UE. A private location server containing an AI module is trained using data from the UEs. The location and movement of the UE are determined to a particular pixel based on the heatmap. WiFi reference points (RPs) are used if multiple pixels satisfy the data of the heatmap.
    Type: Application
    Filed: June 28, 2022
    Publication date: December 22, 2022
    Inventors: Majdi Abdulqader, Marcos E. Carranza, Francesc Guim Bernat, Cesar Martinez-Spessot
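    A minimal Python sketch of federating pathloss heatmaps and resolving a UE to a pixel, in the spirit of publication 20220408401 above; the pixel names, pathloss values, and WiFi tie-break are invented:

    ```python
    import statistics

    def federate(heatmaps: list) -> dict:
        """Average pathloss per pixel across heatmaps reported by UEs under similar conditions."""
        pixels = heatmaps[0].keys()
        return {p: statistics.mean(h[p] for h in heatmaps) for p in pixels}

    def locate(measured_db: float, heatmap: dict, wifi_hint: str = None, tol: float = 1.0):
        """Return the pixel whose pathloss matches the measurement; use WiFi RPs to break ties."""
        candidates = [p for p, loss in heatmap.items() if abs(loss - measured_db) <= tol]
        if len(candidates) == 1:
            return candidates[0]
        return wifi_hint if wifi_hint in candidates else (candidates[0] if candidates else None)

    ue_maps = [{"p0": 60.0, "p1": 72.0, "p2": 72.4}, {"p0": 61.0, "p1": 71.0, "p2": 72.0}]
    fmap = federate(ue_maps)
    print(locate(71.8, fmap, wifi_hint="p2"))   # multiple pixels match; the WiFi RP picks p2
    ```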
  • Publication number: 20220334878
    Abstract: System and techniques for generating a virtual shared resource pool are described herein. The system may include means for reserving, by a controller of a first computing device, a resource on a second computing device, and means for instantiating, by the controller of the first computing device, a local service including a virtual function for the resource. The system may also include means for executing a process on the first computing device using the resource from the second computing device via the virtual function.
    Type: Application
    Filed: June 30, 2022
    Publication date: October 20, 2022
    Inventors: Francesc Guim Bernat, Marcos E. Carranza, Akhilesh Thyagaturu
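    A minimal Python sketch of the three "means" in publication 20220334878 above (reserve a remote resource, instantiate a local virtual-function service, execute through it); all class and resource names are assumptions:

    ```python
    class RemoteDevice:
        def __init__(self):
            self.resources = {"fpga0": "idle"}
        def reserve(self, name: str) -> bool:
            if self.resources.get(name) == "idle":
                self.resources[name] = "reserved"
                return True
            return False
        def run(self, name: str, data: bytes) -> bytes:
            return data.upper()      # stand-in for work done on the remote resource

    class LocalVirtualFunction:
        """Local service that proxies calls to the reserved remote resource."""
        def __init__(self, device: RemoteDevice, resource: str):
            self.device, self.resource = device, resource
        def __call__(self, data: bytes) -> bytes:
            return self.device.run(self.resource, data)

    remote = RemoteDevice()
    if remote.reserve("fpga0"):                       # means for reserving
        vf = LocalVirtualFunction(remote, "fpga0")    # means for instantiating
        print(vf(b"payload"))                         # means for executing -> b'PAYLOAD'
    ```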
  • Publication number: 20220317749
    Abstract: A method is described. The method includes performing the following within a data center: a) recognizing that excess power derived from one or more ambient sources is available; b) determining allocations of respective portions of the excess power for different units of hardware within the data center; c) determining respective higher performance and higher power operational states for certain functional blocks within the different units of the hardware to utilize the excess power.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 6, 2022
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Marcos E. CARRANZA, Cesar Ignacio MARTINEZ SPESSOT, Trevor COOPER
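    A minimal Python sketch of allocating detected excess ambient power across hardware units, in the spirit of publication 20220317749 above; the per-block step-up costs are hypothetical:

    ```python
    # Hypothetical extra power cost (watts) to move a functional block one operational state up.
    STEP_UP_COST = {"cpu": 40, "accelerator": 60, "nic": 10}

    def allocate_excess_power(excess_watts: float, units: list) -> dict:
        """Greedily grant higher-performance, higher-power states while excess power lasts."""
        plan = {}
        for unit in sorted(units, key=lambda u: STEP_UP_COST[u]):
            cost = STEP_UP_COST[unit]
            if cost <= excess_watts:
                plan[unit] = "higher_performance_state"
                excess_watts -= cost
            else:
                plan[unit] = "nominal_state"
        return plan

    print(allocate_excess_power(80, ["cpu", "accelerator", "nic"]))
    # {'nic': 'higher_performance_state', 'cpu': 'higher_performance_state',
    #  'accelerator': 'nominal_state'}
    ```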
  • Publication number: 20220317692
    Abstract: A computation offloading device includes a plurality of communication nodes, each configured to wirelessly send and receive data; a processor, configured to instruct a computation node of a plurality of computation nodes to process a data payload received from a robot at a first communication node of the plurality of communication nodes; select one or more second communication nodes of the plurality of communication nodes, different from the first communication node, based on a predicted location of the robot; and instruct the one or more second communication nodes to send a result of the processed data payload to the robot at the predicted location.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 6, 2022
    Inventors: Francesc GUIM BERNAT, Marcos E. CARRANZA, Akhilesh THYAGATURU, Rony FERZLI, Teemu KAERKKAEINEN
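    A minimal Python sketch of the flow in publication 20220317692 above: process a payload received at one communication node, predict where the robot will be, and return the result via the nearest communication node; the coordinates and node names are invented:

    ```python
    from math import hypot

    COMM_NODES = {"node-a": (0.0, 0.0), "node-b": (50.0, 0.0), "node-c": (100.0, 0.0)}

    def predict_location(pos: tuple, velocity: tuple, dt: float) -> tuple:
        return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

    def offload(payload: bytes, robot_pos: tuple, robot_vel: tuple, compute_time_s: float):
        result = payload[::-1]                       # stand-in for processing on a compute node
        predicted = predict_location(robot_pos, robot_vel, compute_time_s)
        # Pick the communication node nearest to where the robot will be when the result is ready.
        nearest = min(COMM_NODES, key=lambda n: hypot(COMM_NODES[n][0] - predicted[0],
                                                      COMM_NODES[n][1] - predicted[1]))
        return nearest, result

    print(offload(b"sensor-frame", robot_pos=(10.0, 0.0), robot_vel=(20.0, 0.0), compute_time_s=2.0))
    # ('node-b', b'emarf-rosnes')
    ```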
  • Publication number: 20220222010
    Abstract: Methods and apparatus for advanced interleaving techniques for fabric-based pooling architectures. The method is implemented in an environment including a switch connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers are enabled to access data stored in the memory pools via memory read and write requests issued by the applications specifying address endpoints within the global memory space. The switch generates multi-cast or multiple unicast messages associated with the memory read and write requests that are sent to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application using one or more packets as a single response.
    Type: Application
    Filed: March 31, 2022
    Publication date: July 14, 2022
    Inventors: Alexander BACHMUTSKY, Francesc GUIM BERNAT, Karthik KUMAR, Marcos E. CARRANZA
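    A minimal Python sketch of how a switch might fan a read over the global address space out into per-pool requests and aggregate the responses, in the spirit of publication 20220222010 above; the 256-byte interleaving unit and pool names are assumptions:

    ```python
    INTERLEAVE_UNIT = 256                    # bytes per interleaving unit (hypothetical)
    POOLS = ["pool0", "pool1", "pool2"]      # memory pools behind the switch

    def split_read(addr: int, length: int) -> list:
        """Translate one global read into per-pool (pool, pool_offset, size) unicast requests."""
        requests = []
        while length > 0:
            unit = addr // INTERLEAVE_UNIT
            pool = POOLS[unit % len(POOLS)]
            offset_in_unit = addr % INTERLEAVE_UNIT
            size = min(INTERLEAVE_UNIT - offset_in_unit, length)
            requests.append((pool, (unit // len(POOLS)) * INTERLEAVE_UNIT + offset_in_unit, size))
            addr, length = addr + size, length - size
        return requests

    def aggregate(responses: list) -> bytes:
        """Switch-side aggregation of per-pool data into a single response packet."""
        return b"".join(responses)

    print(split_read(addr=0x180, length=512))
    # [('pool1', 128, 128), ('pool2', 0, 256), ('pool0', 256, 128)]
    ```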
  • Publication number: 20220197819
    Abstract: Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
    Type: Application
    Filed: March 10, 2022
    Publication date: June 23, 2022
    Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Thomas WILLHALM, Marcos E. CARRANZA, Cesar Ignacio MARTINEZ SPESSOT
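    A minimal Python sketch of matching an address range's service level parameters against per-pool capabilities, in the spirit of publication 20220197819 above; the pools and parameter fields are invented:

    ```python
    POOLS = {
        "local_ddr":  {"latency_ns": 90,  "bandwidth_gbps": 50, "encryption": True,  "free_gb": 16},
        "cxl_pool":   {"latency_ns": 250, "bandwidth_gbps": 30, "encryption": True,  "free_gb": 512},
        "far_memory": {"latency_ns": 900, "bandwidth_gbps": 10, "encryption": False, "free_gb": 2048},
    }

    def allocate(range_gb: int, params: dict):
        """Pick the lowest-latency pool whose capabilities satisfy the service level parameters."""
        for name, caps in sorted(POOLS.items(), key=lambda kv: kv[1]["latency_ns"]):
            if (caps["free_gb"] >= range_gb
                    and caps["latency_ns"] <= params["max_latency_ns"]
                    and caps["bandwidth_gbps"] >= params["min_bandwidth_gbps"]
                    and (caps["encryption"] or not params["require_encryption"])):
                return name
        return None

    print(allocate(64, {"max_latency_ns": 400, "min_bandwidth_gbps": 20, "require_encryption": True}))
    # cxl_pool: local_ddr lacks capacity, far_memory misses both latency and encryption
    ```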
  • Publication number: 20220121455
    Abstract: Various systems and methods for implementing intent-based cluster administration are described herein. An orchestrator system includes: a processor; and memory to store instructions, which when executed by the processor, cause the orchestrator system to: receive, at the orchestrator system, an administrative intent-based service level objective (SLO) for an infrastructure configuration of an infrastructure; map the administrative intent-based SLO to a set of imperative policies; deploy the set of imperative policies to the infrastructure; monitor performance of the infrastructure; detect non-compliance with the set of imperative policies; and modify the administrative intent-based SLO to generate a revised set of imperative policies that cause the performance of the infrastructure to be compliant with the revised set of imperative policies.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Adrian Hoban, Thijs Metsch, Francesc Guim Bernat, John J. Browne, Kshitij Arun Doshi, Mark Yarvis, Bin Li, Susanne M. Balle, Benjamin Walker, David Cremins, Mats Gustav Agerstam, Marcos E. Carranza, Mikko Ylinen, Dario Nicolas Oliver, John Mangan
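    A minimal Python sketch of the SLO-to-imperative-policy loop in publication 20220121455 above, with invented intent fields and policy knobs:

    ```python
    def map_slo_to_policies(slo: dict) -> dict:
        """Translate an administrative intent (e.g. availability 99.9%) into imperative knobs."""
        replicas = 2 if slo["availability"] < 0.999 else 3
        return {"min_replicas": replicas, "cpu_limit": slo["max_cpu_per_service"]}

    def reconcile(slo: dict, observed: dict) -> dict:
        """Monitor the infrastructure, detect non-compliance, and revise the policies."""
        policies = map_slo_to_policies(slo)
        if observed["availability"] < slo["availability"]:       # non-compliance detected
            slo = {**slo, "max_cpu_per_service": slo["max_cpu_per_service"] * 1.5}
            policies = map_slo_to_policies(slo)
        return policies

    slo = {"availability": 0.999, "max_cpu_per_service": 2.0}
    print(reconcile(slo, observed={"availability": 0.995}))
    # {'min_replicas': 3, 'cpu_limit': 3.0}
    ```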
  • Publication number: 20220124005
    Abstract: Various systems and methods for reactive intent-driven end-to-end (E2E) orchestration are described herein.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Kshitij Arun Doshi, John J. Browne, Marcos E. Carranza, Francesc Guim Bernat, Mats Gustav Agerstam, Adrian Hoban, Thijs Metsch
  • Publication number: 20220121481
    Abstract: Examples described herein relate to offloading, to a switch, service mesh management and the selection of the memory pool accessed by services associated with the service mesh. Based on telemetry data of one or more nodes and network traffic, one or more processes can be allocated to execute on the one or more nodes and a memory pool can be selected to store data generated by the one or more processes.
    Type: Application
    Filed: December 24, 2021
    Publication date: April 21, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Marcos E. Carranza, Cesar Ignacio Martinez Spessot
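    A minimal Python sketch of the switch-side placement and memory-pool selection in publication 20220121481 above, assuming invented node and pool telemetry:

    ```python
    NODE_TELEMETRY = {"node1": {"cpu_free": 0.7, "net_util": 0.2},
                      "node2": {"cpu_free": 0.3, "net_util": 0.6}}
    POOL_TELEMETRY = {"near_pool": {"free_gb": 32, "latency_ns": 150},
                      "far_pool":  {"free_gb": 512, "latency_ns": 600}}

    def place(processes: list, data_gb: int):
        """Switch-side decision: spread processes onto the least loaded nodes and
        pick the lowest-latency pool with room for their data."""
        nodes = sorted(NODE_TELEMETRY, key=lambda n: -NODE_TELEMETRY[n]["cpu_free"])
        placement = {p: nodes[i % len(nodes)] for i, p in enumerate(processes)}
        pool = min((p for p, t in POOL_TELEMETRY.items() if t["free_gb"] >= data_gb),
                   key=lambda p: POOL_TELEMETRY[p]["latency_ns"])
        return placement, pool

    print(place(["svc-a", "svc-b", "svc-c"], data_gb=16))
    # ({'svc-a': 'node1', 'svc-b': 'node2', 'svc-c': 'node1'}, 'near_pool')
    ```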