Patents by Inventor Marcos E. Carranza

Marcos E. Carranza has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220408401
    Abstract: An apparatus and system are described to provide indoor positioning and movement information using a private next generation (NG) network. A heatmap of pathloss versus distance from a remote radio unit (RRU) is provided by the UE and federated with other heatmaps from different UEs under similar conditions. The federated heatmap is provided to the UE. A private location server containing an AI module is trained using data from the UEs. The location and movement of the UE are determined to a particular pixel based on the heatmap. WiFi reference points (RPs) are used if multiple pixels satisfy the data of the heatmap.
    Type: Application
    Filed: June 28, 2022
    Publication date: December 22, 2022
    Inventors: Majdi Abdulqader, Marcos E. Carranza, Francesc Guim Bernat, Cesar Martinez-Spessot
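    A minimal sketch of the heatmap-federation step described in the entry above, assuming each per-UE heatmap is an equal-length list of pathloss samples indexed by distance bin; all names and values are illustrative, not taken from the patent.
    ```python
    # Federate per-UE pathloss-vs-distance heatmaps by averaging each distance bin.
    # Each heatmap is a list of pathloss values (dB), one per distance bin from the RRU.

    def federate_heatmaps(heatmaps):
        """Combine heatmaps collected under similar conditions into one federated map."""
        if not heatmaps:
            return []
        bins = len(heatmaps[0])
        federated = []
        for b in range(bins):
            samples = [hm[b] for hm in heatmaps if len(hm) == bins]
            federated.append(sum(samples) / len(samples))
        return federated

    # Example: three UEs report pathloss for four distance bins.
    ue_maps = [
        [60.0, 72.5, 81.0, 90.0],
        [61.5, 71.0, 80.5, 91.2],
        [59.8, 73.0, 82.1, 89.5],
    ]
    print(federate_heatmaps(ue_maps))  # federated map returned to each UE
    ```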
  • Publication number: 20220334878
    Abstract: System and techniques for generating a virtual shared resource pool are described herein. The system may include means for reserving, by a controller of a first computing device, a resource on a second computing device, and means for instantiating, by the controller of the first computing device, a local service including a virtual function for the resource. The system may also include means for executing a process on the first computing device using the resource from the second computing device via the virtual function.
    Type: Application
    Filed: June 30, 2022
    Publication date: October 20, 2022
    Inventors: Francesc Guim Bernat, Marcos E. Carranza, Akhilesh Thyagaturu
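    The entry above describes a controller that reserves a remote resource and exposes it locally through a virtual function. Below is a minimal sketch using assumed in-memory stand-ins (no real device drivers); all class and method names are illustrative.
    ```python
    # Sketch: a controller on device A reserves a resource on device B and wraps it
    # in a local "virtual function" that forwards work to the remote resource.

    class RemoteDevice:
        def __init__(self, name, resources):
            self.name = name
            self.resources = set(resources)

        def reserve(self, resource):
            if resource in self.resources:
                self.resources.remove(resource)
                return True
            return False

    class Controller:
        def __init__(self, local_name):
            self.local_name = local_name

        def reserve_and_virtualize(self, remote, resource):
            if not remote.reserve(resource):
                raise RuntimeError(f"{resource} unavailable on {remote.name}")
            # The returned callable stands in for the instantiated local service.
            def virtual_function(payload):
                return f"{payload} processed by {resource}@{remote.name}"
            return virtual_function

    device_b = RemoteDevice("device-b", {"accelerator-0"})
    vf = Controller("device-a").reserve_and_virtualize(device_b, "accelerator-0")
    print(vf("frame-42"))  # local process uses the remote resource via the virtual function
    ```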
  • Publication number: 20220317692
    Abstract: A computation offloading device includes a plurality of communication nodes, each configured to wirelessly send and receive data; a processor, configured to instruct a computation node of a plurality of computation nodes to process a data payload received from a robot at a first communication node of the plurality of communication nodes; select one or more second communication nodes of the plurality of communication nodes, different from the first communication node, based on a predicted location of the robot; and instruct the one or more second communication nodes to send a result of the processed data payload to the robot at the predicted location.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 6, 2022
    Inventors: Francesc GUIM BERNAT, Marcos E. CARRANZA, Akhilesh THYAGATURU, Rony FERZLI, Teemu KAERKKAEINEN
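    A small sketch of the node-selection step in the entry above: pick the communication node nearest the robot's predicted location to deliver the computed result. The Euclidean-distance criterion and the data layout are assumptions for illustration.
    ```python
    import math

    # Communication nodes and their fixed positions (x, y); values are illustrative.
    NODES = {"node-1": (0.0, 0.0), "node-2": (10.0, 0.0), "node-3": (5.0, 8.0)}

    def select_delivery_node(predicted_location, first_node):
        """Choose a node other than the ingress node, nearest the robot's predicted location."""
        candidates = {n: p for n, p in NODES.items() if n != first_node}
        return min(candidates, key=lambda n: math.dist(candidates[n], predicted_location))

    # Payload arrived at node-1; the robot is predicted to be near (6, 7) when results are ready.
    print(select_delivery_node((6.0, 7.0), "node-1"))  # -> node-3
    ```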
  • Publication number: 20220317749
    Abstract: A method is described. The method includes performing the following within a data center: a) recognizing that excess power derived from one or more ambient sources is available; b) determining allocations of respective portions of the excess power for different units of hardware within the data center; c) determining respective higher performance and higher power operational states for certain functional blocks within the different units of the hardware to utilize the excess power.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 6, 2022
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Marcos E. CARRANZA, Cesar Ignacio MARTINEZ SPESSOT, Trevor COOPER
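    A toy illustration of step (b) in the abstract above: splitting a measured excess-power budget across hardware units in proportion to assumed per-unit weights. The weights, unit names, and numbers are illustrative only.
    ```python
    # Allocate excess ambient power (watts) across data-center hardware units
    # proportionally to per-unit weights, e.g. derived from demand or priority.

    def allocate_excess_power(excess_watts, weights):
        total = sum(weights.values())
        return {unit: excess_watts * w / total for unit, w in weights.items()}

    allocation = allocate_excess_power(500.0, {"cpu-sled-1": 3.0, "accel-sled-2": 5.0, "storage-1": 2.0})
    print(allocation)  # each unit can then raise blocks to higher-power operational states
    ```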
  • Publication number: 20220222010
    Abstract: Methods and apparatus for advanced interleaving techniques for fabric based pooling architectures. The method is implemented in an environment including a switch connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers are enabled to access data stored in the memory pools via memory read and write requests issued by the applications specifying address endpoints within the global memory space. The switch generates multi-cast or multiple unicast messages associated with the memory read and write requests that are sent to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application using one or more packets as a single response.
    Type: Application
    Filed: March 31, 2022
    Publication date: July 14, 2022
    Inventors: Alexander BACHMUTSKY, Francesc GUIM BERNAT, Karthik KUMAR, Marcos E. CARRANZA
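    A compact sketch of the address-to-pool mapping implied by the abstract above: with a fixed interleaving unit, a global address decomposes into a target memory pool and a local offset. The unit size and pool count are assumptions.
    ```python
    INTERLEAVE_UNIT = 4096   # bytes per interleaving unit (assumed)
    NUM_POOLS = 4            # pooled memory nodes behind the switch (assumed)

    def map_global_address(addr):
        """Map a global-memory-space address to (pool_id, local_offset)."""
        unit_index = addr // INTERLEAVE_UNIT
        pool_id = unit_index % NUM_POOLS
        local_unit = unit_index // NUM_POOLS
        return pool_id, local_unit * INTERLEAVE_UNIT + addr % INTERLEAVE_UNIT

    # A read spanning several units touches multiple pools; the switch would fan the
    # request out (multicast or multiple unicasts) and aggregate the responses.
    for a in (0, 4096, 8192, 12288, 16384):
        print(hex(a), "->", map_global_address(a))
    ```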
  • Publication number: 20220197819
    Abstract: Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and the performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
    Type: Application
    Filed: March 10, 2022
    Publication date: June 23, 2022
    Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Thomas WILLHALM, Marcos E. CARRANZA, Cesar Ignacio MARTINEZ SPESSOT
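    A minimal sketch of the selection logic suggested above: choose the first memory pool whose capabilities satisfy the service-level parameters attached to an address range. Field names, thresholds, and pool descriptions are illustrative.
    ```python
    # Each pool advertises capabilities; an allocation request carries service-level parameters.
    POOLS = [
        {"name": "pool-near", "latency_ns": 150, "bandwidth_gbps": 40, "encryption": True},
        {"name": "pool-far",  "latency_ns": 600, "bandwidth_gbps": 100, "encryption": False},
    ]

    def select_pool(slo):
        for pool in POOLS:
            if (pool["latency_ns"] <= slo["max_latency_ns"]
                    and pool["bandwidth_gbps"] >= slo["min_bandwidth_gbps"]
                    and (not slo["require_encryption"] or pool["encryption"])):
                return pool["name"]
        return None

    print(select_pool({"max_latency_ns": 200, "min_bandwidth_gbps": 20, "require_encryption": True}))
    ```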
  • Publication number: 20220121481
    Abstract: Examples described herein relate to offloading, to a switch, service mesh management and the selection of a memory pool accessed by services associated with the service mesh. Based on telemetry data of one or more nodes and network traffic, one or more processes can be allocated to execute on the one or more nodes, and a memory pool can be selected to store data generated by the one or more processes.
    Type: Application
    Filed: December 24, 2021
    Publication date: April 21, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Marcos E. Carranza, Cesar Ignacio Martinez Spessot
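    A rough sketch of the placement decision described above: given node telemetry, pick the least-loaded node for a process and a memory pool with headroom for its data. All fields and thresholds are assumptions.
    ```python
    def place(process, nodes, pools):
        """Pick an execution node by lowest CPU load and a memory pool by free capacity."""
        node = min(nodes, key=lambda n: n["cpu_load"])
        pool = max(pools, key=lambda p: p["free_gb"])
        return node["name"], pool["name"]

    nodes = [{"name": "node-a", "cpu_load": 0.7}, {"name": "node-b", "cpu_load": 0.3}]
    pools = [{"name": "pool-1", "free_gb": 12}, {"name": "pool-2", "free_gb": 48}]
    print(place("svc-frontend", nodes, pools))  # -> ('node-b', 'pool-2')
    ```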
  • Publication number: 20220121455
    Abstract: Various systems and methods for implementing intent-based cluster administration are described herein. An orchestrator system includes: a processor; and memory to store instructions, which when executed by the processor, cause the orchestrator system to: receive, at the orchestrator system, an administrative intent-based service level objective (SLO) for an infrastructure configuration of an infrastructure; map the administrative intent-based SLO to a set of imperative policies; deploy the set of imperative policies to the infrastructure; monitor performance of the infrastructure; detect non-compliance with the set of imperative policies; and modify the administrative intent-based SLO to generate a revised set of imperative policies that cause the performance of the infrastructure to be compliant with the revised set of imperative policies.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Adrian Hoban, Thijs Metsch, Francesc Guim Bernat, John J. Browne, Kshitij Arun Doshi, Mark Yarvis, Bin Li, Susanne M. Balle, Benjamin Walker, David Cremins, Mats Gustav Agerstam, Marcos E. Carranza, Mikko Ylinen, Dario Nicolas Oliver, John Mangan
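    A schematic control loop for the orchestrator behavior in the entry above: map an intent-based SLO to imperative policies, check compliance against monitored data, and revise when non-compliant. The policy shape, metrics, and revision rule are assumptions for illustration.
    ```python
    def slo_to_policies(slo):
        """Translate an intent-based SLO into imperative policies (illustrative mapping)."""
        return {"max_p99_latency_ms": slo["p99_latency_ms"], "min_replicas": slo["min_replicas"]}

    def compliant(measured, policies):
        return (measured["p99_latency_ms"] <= policies["max_p99_latency_ms"]
                and measured["replicas"] >= policies["min_replicas"])

    slo = {"p99_latency_ms": 50, "min_replicas": 2}
    policies = slo_to_policies(slo)
    measured = {"p99_latency_ms": 80, "replicas": 2}   # stand-in for monitoring data

    if not compliant(measured, policies):
        # Revise the intent (here: scale out) and regenerate the imperative policies.
        slo["min_replicas"] += 1
        policies = slo_to_policies(slo)
    print(policies)
    ```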
  • Publication number: 20220124009
    Abstract: Various systems and methods for implementing intent-based orchestration in heterogenous compute platforms are described herein. An orchestration system is configured to: receive, at the orchestration system, a workload request for a workload, the workload request including an intent-based service level objective (SLO); generate rules for resource allocation based on the workload request; generate a deployment plan using the rules for resource allocation and the intent-based SLO; deploy the workload using the deployment plan; monitor performance of the workload using real-time telemetry; and modify the rules for resource allocation and the deployment plan based on the real-time telemetry.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Thijs Metsch, Susanne M. Balle, Patrick Koeberl, Bin Li, Mark Yarvis, Adrian Hoban, Kshitij Arun Doshi, Francesc Guim Bernat, Cesar Martinez-Spessot, Mats Gustav Agerstam, Dario Nicolas Oliver, Marcos E. Carranza, John J. Browne, Mikko Ylinen, David Cremins
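    A compact sketch of the request-to-plan flow in the entry above: derive resource-allocation rules from a workload request, build a deployment plan, then adjust the rules when telemetry violates the intent. The rule of thumb and data shapes are assumptions.
    ```python
    def generate_rules(request):
        # Illustrative rule: one CPU core per 100 requests/s of expected load.
        return {"cpu_cores": max(1, request["expected_rps"] // 100)}

    def deployment_plan(rules, slo):
        return {"replicas": rules["cpu_cores"], "target_p95_ms": slo["p95_latency_ms"]}

    request = {"expected_rps": 350, "slo": {"p95_latency_ms": 40}}
    rules = generate_rules(request)
    plan = deployment_plan(rules, request["slo"])

    telemetry = {"p95_latency_ms": 55}          # real-time telemetry stand-in
    if telemetry["p95_latency_ms"] > plan["target_p95_ms"]:
        rules["cpu_cores"] += 1                 # modify rules and re-plan
        plan = deployment_plan(rules, request["slo"])
    print(plan)
    ```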
  • Publication number: 20220124005
    Abstract: Various systems and methods for reactive intent-driven end-to-end (E2E) orchestration are described herein.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Kshitij Arun Doshi, John J. Browne, Marcos E. Carranza, Francesc Guim Bernat, Mats Gustav Agerstam, Adrian Hoban, Thijs Metsch
  • Publication number: 20220114251
    Abstract: Various systems and methods for implementing reputation management and intent-based security mechanisms are described herein.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 14, 2022
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Adrian Hoban, Thijs Metsch, Dario Nicolas Oliver, Marcos E. Carranza, Mats Gustav Agerstam, Bin Li, Patrick Koeberl, Susanne M. Balle, John J. Browne, Cesar Martinez-Spessot, Ned M. Smith
  • Publication number: 20220114032
    Abstract: System and techniques for infrastructure managed workload distribution are described herein. An infrastructure processing unit (IPU) receives a workload that includes a workload definition. The workload definition includes stages of the workload and a performance expectation. The IPU provides the workload, for execution, to a processing unit of a compute node to which the IPU belongs. The IPU monitors execution of the workload to determine that a stage of the workload is performing outside of the performance expectation from the workload definition. In response, the IPU modifies the execution of the workload.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Marcos E. Carranza, Rita H. Wouhaybi
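    A minimal data-shape sketch for the workload definition described above (stages plus a performance expectation) and the per-stage check an IPU might apply during monitoring. Stage names and thresholds are illustrative.
    ```python
    workload = {
        "stages": ["decode", "infer", "encode"],
        "expectation": {"max_stage_ms": 20.0},   # performance expectation per stage
    }

    def check_stage(stage, measured_ms, expectation):
        """Return True if the stage meets the expectation; otherwise the IPU would intervene."""
        return measured_ms <= expectation["max_stage_ms"]

    measured = {"decode": 8.0, "infer": 35.0, "encode": 12.0}
    for stage in workload["stages"]:
        if not check_stage(stage, measured[stage], workload["expectation"]):
            # e.g. move the stage to another processing unit or raise its resource share
            print(f"stage '{stage}' out of expectation; modify execution")
    ```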
  • Publication number: 20220012088
    Abstract: Techniques for expanded trusted domains are disclosed. In the illustrative embodiment, a trusted domain can be established that includes hardware components from a processor as well as an off-load device. The off-load device may provide compute resources for the trusted domain. The trusted domain can be expanded and contracted on-demand, allowing for a flexible approach to creating and using trusted domains.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ravi L. Sahita, Marcos E. Carranza
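    A very rough sketch of the on-demand expansion and contraction of a trusted domain's membership described above, abstracting away the actual hardware attestation; everything here is illustrative.
    ```python
    class TrustedDomain:
        def __init__(self, members):
            self.members = set(members)          # e.g. CPU package, off-load device

        def expand(self, component):
            # Real hardware would attest the component before admitting it.
            self.members.add(component)

        def contract(self, component):
            self.members.discard(component)

    td = TrustedDomain({"cpu-0"})
    td.expand("offload-dev-1")   # off-load device now contributes compute to the domain
    td.contract("offload-dev-1")
    print(td.members)
    ```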
  • Publication number: 20220012490
    Abstract: System and techniques for abandoned object detection are described herein. A fence is established about a person and an object is detected within the fence. An entry is created in an object-person relationship data structure to establish a relationship between the person and the object within the fence. Then, the position of the object is monitored until an indication that the fence is terminated is received. If the object is detected outside the fence during the monitoring, the person is alerted.
    Type: Application
    Filed: September 23, 2021
    Publication date: January 13, 2022
    Inventors: Charmaine Rui Qin Chan, Chia Chuan Wu, Marcos E. Carranza, Ignacio Javier Alvarez Martinez, Wei Seng Yeap, Tung Lun Loo
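    A small sketch of the fence check in the entry above: relate an object to a person, then flag the object if its monitored position leaves the fence. The geometry is simplified to a circular fence and all values are illustrative.
    ```python
    import math

    class Fence:
        def __init__(self, center, radius):
            self.center, self.radius = center, radius

        def contains(self, point):
            return math.dist(self.center, point) <= self.radius

    # Object-person relationship entry created when the object is first seen inside the fence.
    relationship = {"person": "person-17", "object": "bag-3"}
    fence = Fence(center=(0.0, 0.0), radius=3.0)

    for object_position in [(1.0, 1.0), (2.5, 0.5), (4.2, 0.0)]:   # monitored positions
        if not fence.contains(object_position):
            print(f"alert {relationship['person']}: {relationship['object']} left the fence")
    ```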
  • Publication number: 20210326221
    Abstract: Examples described herein relate to a network interface device that comprises circuitry, when operational, to select a platform to execute a function and, based on load of the platform, selectively cause the function to execute on one or more other platforms to attempt to finish execution before the time-to-completion. In some examples, the circuitry is to detect progress of function execution to determine whether completion of execution of the function is predicted to not finish within the time-to-completion and cause the function to execute on one or more other platforms based on completion of execution of the function predicted to not finish within the time-to-completion. In some examples, the circuitry is to select the one or more other platforms to execute the function based on one or more of: processor computing utilization, available memory capacity, available cache capacity, network availability, or malfunction of a processor, memory, and/or cache.
    Type: Application
    Filed: June 26, 2021
    Publication date: October 21, 2021
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Alexander BACHMUTSKY, Patrick G. KUTCH, Marcos E. CARRANZA
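    A toy version of the progress check described above: predict whether a running function will finish within its time-to-completion and, if not, pick another platform. The prediction model (linear extrapolation) and the platform metric are assumptions.
    ```python
    def predicted_finish_s(elapsed_s, fraction_done):
        """Linear extrapolation of total runtime from progress so far."""
        return elapsed_s / max(fraction_done, 1e-6)

    def maybe_reassign(elapsed_s, fraction_done, ttc_s, platforms):
        if predicted_finish_s(elapsed_s, fraction_done) <= ttc_s:
            return None   # on track, keep executing where it is
        # Otherwise pick the platform with the most available compute headroom.
        return max(platforms, key=lambda p: p["free_cpu"])["name"]

    platforms = [{"name": "plat-b", "free_cpu": 0.2}, {"name": "plat-c", "free_cpu": 0.6}]
    print(maybe_reassign(elapsed_s=6.0, fraction_done=0.3, ttc_s=15.0, platforms=platforms))
    ```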
  • Publication number: 20210325954
    Abstract: System and techniques for power-based adaptive hardware reliability on a device are described herein. A hardware platform is divided into multiple partitions. Here, each partition includes a hardware component with an adjustable reliability feature. Each partition is placed into one of multiple reliability categories. A workload with a reliability requirement is obtained and executed on a partition in a reliability category that satisfies the reliability requirement. A change in operating parameters for the device is detected and the adjustable reliability feature for the partition is modified based on the change in the operating parameters of the device.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 21, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Mustafa Hajeer
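    A minimal sketch of matching a workload's reliability requirement to a partition's reliability category, with an adjustable feature toggled when operating conditions change. Category names, the ECC feature, and the trigger are assumptions.
    ```python
    PARTITIONS = [
        {"id": "p0", "category": "high", "ecc_strength": "strong"},
        {"id": "p1", "category": "standard", "ecc_strength": "basic"},
    ]

    def pick_partition(required_category):
        return next(p for p in PARTITIONS if p["category"] == required_category)

    part = pick_partition("high")          # workload with a high reliability requirement

    def on_power_budget_change(partition, low_power):
        # Adjustable reliability feature: relax ECC strength when power is constrained.
        partition["ecc_strength"] = "basic" if low_power else "strong"

    on_power_budget_change(part, low_power=True)
    print(part)
    ```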
  • Publication number: 20210329354
    Abstract: Examples described herein relate to a network interface device that is configured to identify a trigger condition to cause transmission of a request to a next node to request the next node to pre-load a telemetry collection service prior to performance of a service and to collect specific telemetry data during performance of the service. In some examples, the request is transmitted using a connection with a particular quality of service. In some examples, the next node comprises a computing platform and a second network interface device and wherein the second network interface device is to transmit telemetry related to performance of the service to a target destination. In some examples, the network interface device comprises one or more of: network interface controller (NIC), SmartNIC, infrastructure processing unit (IPU), or data processing unit (DPU).
    Type: Application
    Filed: June 26, 2021
    Publication date: October 21, 2021
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Alexander BACHMUTSKY, Cesar Ignacio MARTINEZ SPESSOT, Marcos E. CARRANZA
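    A schematic of the trigger flow above: when a condition fires, ask the next node to pre-load a telemetry-collection service and name the metrics to gather during the upcoming service. The message shape, trigger, and metric names are assumptions (no real NIC/IPU API).
    ```python
    def build_preload_request(service, metrics, qos):
        """Request the next node to pre-load telemetry collection before the service runs."""
        return {"action": "preload_telemetry", "service": service,
                "metrics": metrics, "qos_class": qos}

    def on_trigger(queue_depth, threshold=100):
        if queue_depth > threshold:                      # illustrative trigger condition
            return build_preload_request("video-transcode",
                                         ["cache_miss_rate", "nic_rx_drops"],
                                         qos="low-latency")
        return None

    print(on_trigger(queue_depth=250))   # message the network interface device would send
    ```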
  • Publication number: 20210327018
    Abstract: In one embodiment, a compute device implements a node of a computing infrastructure used to execute a computer vision (CV) pipeline. The compute device receives a video stream to be processed through the CV pipeline and performs a first portion of the CV pipeline. The compute device then receives a peer node availability indicating peer nodes in the computing infrastructure that are available to perform a second portion of the CV pipeline. Based on the peer node availability, the compute device partitions the second portion of the CV pipeline into one or more partial CV pipelines and offloads the partial CV pipeline(s) to a subset of the peer nodes.
    Type: Application
    Filed: June 26, 2021
    Publication date: October 21, 2021
    Applicant: Intel Corporation
    Inventors: Marcos E. Carranza, Francesc Guim Bernat, Dario N. Oliver, Mateo Guzman, Cesar I. Martinez Spessot
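    A bare-bones sketch of partitioning the remaining CV pipeline stages across available peer nodes, as in the entry above; the round-robin assignment and stage list are assumptions.
    ```python
    def partition_pipeline(remaining_stages, available_peers):
        """Split the second portion of the CV pipeline into partial pipelines, one per peer."""
        partials = {peer: [] for peer in available_peers}
        for i, stage in enumerate(remaining_stages):
            peer = available_peers[i % len(available_peers)]
            partials[peer].append(stage)
        return partials

    remaining = ["detect", "track", "classify", "annotate"]
    peers = ["peer-1", "peer-2"]            # from the peer node availability report
    print(partition_pipeline(remaining, peers))
    # {'peer-1': ['detect', 'classify'], 'peer-2': ['track', 'annotate']}
    ```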
  • Publication number: 20210318929
    Abstract: Methods and apparatus for application aware memory patrol scrubbing techniques. The method may be performed on a computing system including one or more memory devices and running multiple applications with associated processes. The computing system may be implemented in a multi-tenant environment, where virtual instances of physical resources provided by the system are allocated to separate tenants, such as through virtualization schemes employing virtual machines or containers. Quality of Service (QoS) scrubbing logic and novel interfaces are provided to enable memory scrubbing QoS policies to be applied at the tenant, application, and/or process level. These QoS policies may include memory ranges for which specific policies are applied, as well as bandwidth allocations for performing scrubbing operations. A pattern generator is also provided for generating scrubbing patterns based on observed or predicted memory access patterns and/or predefined patterns.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 14, 2021
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Mark A. SCHMISSEUR, Thomas WILLHALM, Marcos E. CARRANZA
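    A simplified sketch of the per-tenant scrub-QoS bookkeeping implied above: each policy names a memory range and a scrub-bandwidth allocation, and a planner budgets how much of each range to patrol per interval. Entirely illustrative; no real memory-controller interface.
    ```python
    # Per-tenant scrubbing QoS policies: address range plus bandwidth allocation.
    POLICIES = [
        {"tenant": "tenant-a", "range": (0x0000, 0x8000), "bandwidth_mb_s": 50},
        {"tenant": "tenant-b", "range": (0x8000, 0x10000), "bandwidth_mb_s": 10},
    ]

    def scrub_plan(interval_s):
        """Bytes of each tenant's range to patrol-scrub in one interval, per its allocation."""
        plan = []
        for p in POLICIES:
            budget = int(p["bandwidth_mb_s"] * 1_000_000 * interval_s)
            start, end = p["range"]
            plan.append({"tenant": p["tenant"], "start": hex(start),
                         "bytes": min(budget, end - start)})
        return plan

    print(scrub_plan(interval_s=0.001))
    ```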
  • Publication number: 20210116261
    Abstract: Disclosed herein are systems and methods for vehicle-occupancy-based and user-preference-based smart routing, and autonomous volumetric-occupancy measurement. In an embodiment, a system is configured to receive from a user device associated with a user, a routing-options request for routing options between two locations, and to responsively identify one or more routing options between the two locations based at least in part on occupancy data for a vehicle that would be utilized for at least a portion of at least one of the identified routing options. The occupancy data is based on an output of an automated occupancy-measurement system onboard the vehicle. The system is also configured to provide the one or more identified routing options to the user device. In some embodiments, the occupancy data is obtained using volumetric-occupancy measurement. Some embodiments relate to volumetric-occupancy measurement conducted by autonomous mesh nodes.
    Type: Application
    Filed: December 26, 2020
    Publication date: April 22, 2021
    Inventors: Francesc Guim Bernat, Marcos E. Carranza, Satish Chandra Jha, Sindhu Pandian, Lakshmi Talluru, Cesar Martinez-Spessot, Mateo Guzman, Dario Nicolas Oliver, Ignacio J. Alvarez, David Gonzalez Aguirre, Javier Felip Leon, S M Iftekharul Alam
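    A minimal sketch of the occupancy-aware filter in the entry above: rank routing options by a blend of travel time and the measured occupancy of the vehicles involved, weighted by a user preference. The scoring weights and data shapes are assumptions.
    ```python
    def rank_routes(routes, crowd_aversion):
        """Sort routing options by travel time plus an occupancy penalty.

        crowd_aversion is a user preference in [0, 1]; occupancy is the vehicle's
        measured volumetric occupancy as a fraction of capacity.
        """
        def score(route):
            return route["minutes"] + crowd_aversion * 60 * route["occupancy"]
        return sorted(routes, key=score)

    routes = [
        {"name": "bus-12", "minutes": 25, "occupancy": 0.9},
        {"name": "tram-3", "minutes": 32, "occupancy": 0.2},
    ]
    print(rank_routes(routes, crowd_aversion=0.5))   # tram-3 first for a crowd-averse user
    ```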