Patents by Inventor Amir Roozbeh

Amir Roozbeh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11140594
    Abstract: Source and target service nodes and methods are described, for transferring a service session for a wireless device from the source node to the target node when the wireless device is handed over from a first base station associated with the source node to a second base station associated with the target node. A first data amount indication is obtained, which indicates how much downlink data is pending at the first base station. The first base station is requested to delete some or all pending downlink data from a downlink buffer, and the source node transfers to the target node application data and a second data amount indication related to the first data amount indication. The target node can then recreate the downlink buffer at the second base station by sending a first part of the application data, corresponding to the second data amount indication, to the second base station.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: October 5, 2021
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Vinay Yadhav, Jonas Pettersson, Amir Roozbeh, Johan Rune
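The buffer-recreation idea in this abstract can be illustrated with a toy sketch; the function names, byte-level data shapes, and the interpretation of the indications as byte counts are assumptions for illustration, not the patented implementation:

```python
# Toy sketch: the source tells the target how much deleted downlink data
# must be rebuilt, and the target replays the first part of the
# application data into the second base station's downlink buffer.

def transfer_session(app_data: bytes, pending_at_bs1: int):
    """Source node: derive the second data amount indication from the
    first one (how much downlink data was pending, and then deleted,
    at the first base station)."""
    second_indication = min(pending_at_bs1, len(app_data))
    return app_data, second_indication

def recreate_buffer(app_data: bytes, second_indication: int) -> bytes:
    """Target node: the first part of the application data, sized by the
    second data amount indication, recreates the downlink buffer at the
    second base station."""
    return app_data[:second_indication]

buf = recreate_buffer(*transfer_session(b"ABCDEFGH", 3))
# buf == b"ABC": the three deleted bytes are recreated at the new base station
```

Truncating the transfer this way avoids shipping data that the handover procedure already moves between base stations.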
  • Publication number: 20210191777
    Abstract: A memory allocator in a computer system comprising a plurality of CPU cores (5101-5104) and a first (530) and a second (5120) memory unit having different data access times. Each of the first and second memory units is divided into memory portions, each memory portion (SLICE 0-3) in the second memory unit is associated with at least one memory portion (A-G) in the first memory unit, and each memory portion in the second memory unit is associated with a CPU core. If at least a predetermined number of the memory portions in the first memory unit that are part of the available requested memory are associated with the memory portion in the second memory unit that is associated with the CPU core on which the requesting application is running, the requested available memory is allocated to the requesting application.
    Type: Application
    Filed: June 20, 2019
    Publication date: June 24, 2021
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Amir ROOZBEH, Alireza FARSHIN, Dejan KOSTIC, Gerald Q MAGUIRE
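The allocation condition above can be sketched as a small model in which first-memory-unit portions A-G map to second-memory-unit slices 0-3, each tied to a CPU core. The mapping tables, portion names, and the preference order are invented for the sketch:

```python
# Illustrative slice-aware allocation model (all mappings are assumed).

SLICE_OF_CORE = {0: 0, 1: 1, 2: 2, 3: 3}             # CPU core -> slice
SLICE_OF_PORTION = {"A": 0, "B": 0, "C": 1, "D": 1,  # memory portion -> slice
                    "E": 2, "F": 3, "G": 3}

def allocate(free_portions, core, n_needed, min_local):
    """Grant the request only when at least `min_local` of the free
    portions map to the slice associated with the requesting core,
    and prefer those slice-local portions."""
    slice_id = SLICE_OF_CORE[core]
    local = [p for p in free_portions if SLICE_OF_PORTION[p] == slice_id]
    if len(local) < min_local:
        return None                      # not enough slice-local memory
    rest = [p for p in free_portions if p not in local]
    chosen = (local + rest)[:n_needed]
    return chosen if len(chosen) == n_needed else None
```

For example, `allocate(["A", "B", "C"], core=0, n_needed=2, min_local=2)` returns the slice-local portions `["A", "B"]`, while the same request over `["C", "D"]` fails because no free portion maps to slice 0.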
  • Publication number: 20210105682
    Abstract: Source and target service nodes and methods are described, for transferring a service session for a wireless device from the source node to the target node when the wireless device is handed over from a first base station associated with the source node to a second base station associated with the target node. A first data amount indication is obtained, which indicates how much downlink data is pending at the first base station. The first base station is requested to delete some or all pending downlink data from a downlink buffer, and the source node transfers to the target node application data and a second data amount indication related to the first data amount indication. The target node can then recreate the downlink buffer at the second base station by sending a first part of the application data, corresponding to the second data amount indication, to the second base station.
    Type: Application
    Filed: April 21, 2017
    Publication date: April 8, 2021
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Vinay YADHAV, Jonas PETTERSSON, Amir ROOZBEH, Johan RUNE
  • Publication number: 20210099929
    Abstract: Source and target service nodes and methods are described for transferring a service session for a wireless device from the source node to the target node when a service application is executed in the source node for the wireless device by the service session. In particular, the amount of application data to be transferred from the source node to the target node for a wireless device that is handed over from a first base station to a second base station can be reduced by truncating the application data by an amount corresponding to the data pending in a downlink buffer at the first base station. Thereby, the target node is able to recreate the complete application buffer from the truncated application data and data from the downlink buffer, the latter data being transferred from the first base station (206) to the second base station (208), according to a handover procedure.
    Type: Application
    Filed: April 21, 2017
    Publication date: April 1, 2021
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Vinay YADHAV, Jonas PETTERSSON, Amir ROOZBEH, Johan RUNE
  • Publication number: 20200387402
    Abstract: A method and a resource scheduler for enabling a computing unit to use memory resources in a remote memory pool. The resource scheduler allocates a memory unit in the remote memory pool to the computing unit for usage of memory resources in the allocated memory unit, and assigns an optical wavelength for communication between the computing unit and the allocated memory unit over an optical network. The resource scheduler further configures at least the computing unit with a first mapping between the assigned optical wavelength and the allocated memory unit. Thereby, the optical network can be utilized efficiently to achieve rapid and reliable communication of messages from the computing unit to the allocated memory unit.
    Type: Application
    Filed: December 20, 2017
    Publication date: December 10, 2020
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Joao MONTEIRO SOARES, Chakri PADALA, Amir ROOZBEH, Mozhgan MAHLOO
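A hypothetical sketch of the scheduler's bookkeeping: it pairs a free memory unit with a free optical wavelength and records the "first mapping" that is pushed to the computing unit. The wavelength labels, unit names, and class layout are invented:

```python
# Assumed bookkeeping for wavelength-to-memory-unit allocation.

class ResourceScheduler:
    def __init__(self, memory_units, wavelengths):
        self.free_units = list(memory_units)
        self.free_lambdas = list(wavelengths)
        self.mapping = {}    # computing unit -> (wavelength, memory unit)

    def allocate(self, computing_unit):
        unit = self.free_units.pop(0)    # allocate a pool memory unit
        lam = self.free_lambdas.pop(0)   # assign an optical wavelength
        # The "first mapping" configured into the computing unit: which
        # wavelength reaches which allocated memory unit.
        self.mapping[computing_unit] = (lam, unit)
        return lam, unit

sched = ResourceScheduler(["mem0", "mem1"], ["1550.12nm", "1550.92nm"])
lam, unit = sched.allocate("cpu0")   # -> ("1550.12nm", "mem0")
```

Because the wavelength itself identifies the destination memory unit, messages need no per-hop routing decisions on the optical network.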
  • Publication number: 20200379811
    Abstract: A computing unit, a memory pool and methods therein, for enabling the computing unit to use memory resources in the memory pool, e.g. as configured by a resource scheduler. When a memory unit in the memory pool is allocated to the computing unit and an optical wavelength is assigned for communication between the computing unit and the allocated memory unit over an optical network, the computing unit is configured with a first mapping between the assigned optical wavelength and the allocated memory unit. Thereby, the optical network can be utilized efficiently to achieve rapid and reliable communication of messages from the computing unit to the allocated memory unit.
    Type: Application
    Filed: December 20, 2017
    Publication date: December 3, 2020
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Joao MONTEIRO SOARES, Amir ROOZBEH, Mozhgan MAHLOO, Chakri PADALA
  • Publication number: 20200334070
    Abstract: A resource sharing manager, RSM, is disclosed, operative to provide efficient utilization of central processing units, CPUs, within virtual servers, each virtual server having an operating system, OS. The RSM dynamically obtains (902) information about ownership and sharable status of said CPUs, and dynamically determines (904) which CPUs are sharable to which virtual servers. The RSM obtains (906) information that one or more sharable CPUs are available, and obtains (908) information that one or more virtual servers require more processing resources. The RSM also assigns (910) a first CPU of said sharable CPUs, when available, to a first virtual server of said virtual servers. Information about ownership and sharable status of the first CPU is hence provided to the OS of the first virtual server. Overhead is reduced by circumventing a hypervisor when sharing CPUs in virtual servers. An increase in efficiency of task execution is provided.
    Type: Application
    Filed: January 15, 2018
    Publication date: October 22, 2020
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Amir ROOZBEH, Mozhgan MAHLOO, Joao MONTEIRO SOARES, Daniel TURULL
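The numbered steps (902)-(910) can be modelled as a toy assignment loop; the method and field names are assumptions, not the claimed interface:

```python
# Toy model of the RSM's bookkeeping and assignment steps.

class RSM:
    def __init__(self):
        self.sharable = {}      # CPU -> virtual servers it may serve
        self.available = set()  # sharable CPUs currently idle

    def mark_sharable(self, cpu, servers):   # steps (902)/(904)
        self.sharable[cpu] = set(servers)

    def mark_available(self, cpu):           # step (906)
        self.available.add(cpu)

    def assign(self, server):                # steps (908)/(910)
        for cpu in sorted(self.available):
            if server in self.sharable.get(cpu, ()):
                self.available.remove(cpu)
                # Ownership/sharable status would then be reported to the
                # server's OS directly, bypassing the hypervisor.
                return cpu
        return None

rsm = RSM()
rsm.mark_sharable("cpu1", ["vs1", "vs2"])
rsm.mark_available("cpu1")
rsm.assign("vs1")   # -> "cpu1"; a second request now finds nothing free
```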
  • Publication number: 20200319940
    Abstract: A resource sharing manager, RSM, is disclosed, operative to provide efficient utilization of central processing units, CPUs, within virtual servers. The RSM dynamically obtains (102) information about which CPUs are sharable to which virtual servers, (104) information that one or more of said sharable CPUs are available, and (106) information that one or more of said virtual servers require more processing resources. It also dynamically assigns (108) a first CPU of said one or more sharable CPUs, when available, to a first virtual server of said one or more virtual servers. This enables the first virtual server to use the first CPU until the RSM receives information that the first CPU is no longer sharable to, or needed by, the first virtual server. Overhead is reduced by circumventing a hypervisor when sharing CPUs in virtual servers.
    Type: Application
    Filed: December 21, 2017
    Publication date: October 8, 2020
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Amir ROOZBEH, Mozhgan MAHLOO, Joao MONTEIRO SOARES, Daniel TURULL
  • Publication number: 20200285587
    Abstract: To manage memory, a computer system, responsive to receiving a message indicating an availability of a memory page of the computer system, generates a mapping between a logical address of the memory page and at least two physical memory addresses at which respective copies of the memory page are available. The computer system provides one of the at least two physical memory addresses in response to a request for access to the memory page.
    Type: Application
    Filed: May 26, 2020
    Publication date: September 10, 2020
    Inventors: Amir Roozbeh, Joao Monteiro Soares, Daniel Turull
  • Patent number: 10713175
    Abstract: A method and a Memory Availability Managing Module (110) “MAMM” for managing availability of memory pages (130) are disclosed. A disaggregated hardware system (100) comprises sets of memory blades (105, 106, 107) and computing pools (102, 103, 104). The MAMM (110) receives (A010) a message relating to allocation of at least one memory page to at least one operating system (120). The message comprises an indication about availability for said at least one memory page. The MAMM (110) translates (A020) the indication about availability to a set of memory blade parameters, identifying at least one memory blade (105, 106, 107). The MAMM (110) generates (A030) address mapping information for said at least one memory page, including a logical address of said at least one memory page mapped to at least two physical memory addresses of said at least one memory blade (105, 106, 107).
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: July 14, 2020
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Amir Roozbeh, Joao Monteiro Soares, Daniel Turull
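A minimal sketch of the availability-driven mapping step: the availability indication is translated into a replica count, and the logical page address is mapped to that many (blade, address) pairs. The availability levels, replica counts, and address format are assumptions for illustration:

```python
# Assumed translation from an availability indication to a mapping with
# at least two physical addresses on distinct memory blades.

def generate_mapping(logical_addr, availability, blades):
    replicas = {"high": 3, "medium": 2}.get(availability, 2)  # >= two copies
    return {logical_addr: [(b, logical_addr) for b in blades[:replicas]]}

m = generate_mapping(0x1000, "medium", ["blade105", "blade106", "blade107"])
# m[0x1000] -> [("blade105", 4096), ("blade106", 4096)]
```

Keeping at least two physical copies per logical page is what lets an access be served even if one blade is unavailable.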
  • Publication number: 20200174926
    Abstract: There is provided a method performed by a memory allocator, MA, and a MA, for allocating memory to an application on a logical server having a memory block allocated from at least one memory pool. In one action of the method, the MA obtains performance characteristics associated with a first portion of the memory block and obtains performance characteristics associated with a second portion of the memory block. The MA further receives information associated with the application and selects one of the first portion and the second portion of the memory block for allocation of memory to the application, based on the received information and at least one of the performance characteristics associated with the first portion of the memory block and the performance characteristics associated with the second portion of the memory block. An arrangement and methods performed therein, computer programs, computer program products and carriers are also provided.
    Type: Application
    Filed: June 22, 2017
    Publication date: June 4, 2020
    Applicant: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Amir ROOZBEH, Mozhgan MAHLOO
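The selection step can be sketched as a simple policy that matches portion characteristics against what the application declares; the characteristic fields and the latency/cost trade-off are invented for illustration:

```python
# Assumed selection policy between two portions of a memory block.

def select_portion(first, second, app_info):
    """Pick the portion whose characteristics best match the
    information received about the application."""
    if app_info.get("latency_sensitive", False):
        best = min(first, second, key=lambda p: p["latency_ns"])
    else:
        best = min(first, second, key=lambda p: p["cost"])
    return best["name"]

fast = {"name": "portion1", "latency_ns": 80, "cost": 3}
slow = {"name": "portion2", "latency_ns": 400, "cost": 1}
select_portion(fast, slow, {"latency_sensitive": True})   # -> "portion1"
```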
  • Publication number: 20200117596
    Abstract: A memory allocation manager and a method performed thereby for managing memory allocation, within a data centre, to an application are provided. The data centre comprises at least a Central Processing Unit, CPU, pool and at least one memory pool. The method comprises receiving (210) information associated with a plurality of instances associated with an application to be initiated, wherein individual instances are associated with individual memory requirements, the information further comprising information about an internal relationship between the instances; and determining (230) for a plurality of instances, a minimum number of memory blocks and associated sizes required based on the received information, by identifying parts of memory blocks and associated sizes that may be shared by two or more instances based on their individual memory requirements and/or the internal relationship between the instances.
    Type: Application
    Filed: March 23, 2017
    Publication date: April 16, 2020
    Inventors: Mozhgan MAHLOO, Amir ROOZBEH
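The sizing step above can be approximated with a back-of-envelope sketch: each instance declares private memory plus named shared regions, and shared regions are counted once across instances. The units, field names, and fixed block size are invented:

```python
# Assumed minimum-block computation with de-duplicated shared regions.

def required_blocks(instances, block_size):
    total, shared_seen = 0, set()
    for inst in instances:
        total += inst["private"]
        for region, size in inst.get("shared", {}).items():
            if region not in shared_seen:    # count each shared region once
                shared_seen.add(region)
                total += size
    return -(-total // block_size)           # ceiling division over blocks

reqs = [{"private": 3, "shared": {"cfg": 2}},
        {"private": 1, "shared": {"cfg": 2}}]
required_blocks(reqs, 4)   # -> 2: the shared "cfg" region is counted once
```

Without the de-duplication, the same request would need ceil(8/4) = 2 blocks here as well, but with larger shared regions the savings grow with the number of instances.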
  • Publication number: 20200012508
    Abstract: A performance manager (400, 500) and a method (200) performed thereby are provided, for managing the performance of a logical server of a data center. The data center comprises at least one memory pool in which a memory block has been allocated to the logical server. The method (200) comprises determining (230) performance characteristics associated with a first portion of the memory block, comprised in a first memory unit of the at least one memory pool; and identifying (240) a second portion of the memory block, comprised in a second memory unit of the at least one memory pool, to which data of the first portion of the memory block may be migrated to apply performance characteristics associated with the second portion. The method (200) further comprises initiating migration (250) of the data to the second portion of the memory block.
    Type: Application
    Filed: March 31, 2017
    Publication date: January 9, 2020
    Inventors: Mozhgan Mahloo, Amir Roozbeh
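A simplified sketch of the migration decision: if the portion currently holding the data misses a latency target, find another portion in the pool that meets it. The threshold-based trigger and the field names are assumptions:

```python
# Assumed migration-planning step for the performance manager.

def plan_migration(portions, data_portion, needed_latency_ns):
    if data_portion["latency_ns"] <= needed_latency_ns:
        return None                       # current portion is good enough
    for p in portions:
        if p is not data_portion and p["latency_ns"] <= needed_latency_ns:
            return p["name"]              # initiate migration to this portion
    return None

pool = [{"name": "unit1", "latency_ns": 500},
        {"name": "unit2", "latency_ns": 90}]
plan_migration(pool, pool[0], 100)   # -> "unit2"
```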
  • Patent number: 10416916
    Abstract: A Memory Merging Function "MMF" for merging memory pages. A hardware system comprises a set of memory blades and a set of computing pools. At least one instance of an operating system executes on the hardware system. The MMF is independent of the operating system. The MMF finds a first and a second memory page. The first and second memory pages include identical information. The first and second memory pages are associated with at least one computing unit. The MMF obtains a respective memory blade parameter relating to the memory blade of the first and second memory pages and a respective latency parameter relating to the latency for accessing the first and second memory pages. The MMF releases at least one of the first and second memory pages based on the respective memory blade and latency parameters.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: September 17, 2019
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Amir Roozbeh, Joao Monteiro Soares, Daniel Turull
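The merge pass can be sketched as a toy loop in which content equality stands in for the page comparison and access latency stands in for the blade/latency parameters that pick the page to release. All fields are illustrative:

```python
# Toy merge pass over duplicate pages, keeping the cheaper-to-access copy.

def merge_pages(pages):
    kept, released = {}, []
    for page in pages:
        other = kept.get(page["content"])
        if other is None:
            kept[page["content"]] = page
        elif page["latency_ns"] >= other["latency_ns"]:
            released.append(page)            # existing copy is cheaper
        else:
            kept[page["content"]] = page     # new copy is cheaper
            released.append(other)
    return list(kept.values()), released

pages = [{"content": "X", "blade": 105, "latency_ns": 100},
         {"content": "X", "blade": 106, "latency_ns": 300}]
kept, released = merge_pages(pages)
# kept holds the blade-105 page; the blade-106 copy is released
```

Because the MMF sits below the operating system, this deduplication works even across OS instances that share the disaggregated memory.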
  • Publication number: 20180341482
    Abstract: A processing arrangement utilizing function, capable of efficiently utilizing a processing arrangement, and a method performed therein are disclosed, where the processing arrangement comprises a first and a second physical processing unit, PU, and where the first physical PU is adapted to be assigned to a first logical PU. When the first physical PU is turned off, it may be assigned to another logical PU that needs to be activated. When the first logical PU needs to be activated, it can be assigned to a second physical PU. A notification is sent to a power management unit to activate the physical PU to which a logical PU is assigned. Embodiments of this disclosure increase the utilization of processing arrangements, taking advantage of statistical multiplexing.
    Type: Application
    Filed: December 18, 2015
    Publication date: November 29, 2018
    Inventors: Amir ROOZBEH, Daniel TURULL, Joao MONTEIRO SOARES
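The statistical-multiplexing idea can be sketched as a small pool in which a logical PU binds to whichever physical PU is currently off, with power management notified to switch it on. Class and method names are invented:

```python
# Hypothetical logical-to-physical PU binding with power notification.

class PUPool:
    def __init__(self, physical):
        self.free = list(physical)   # powered-off physical PUs
        self.binding = {}            # logical PU -> physical PU

    def activate(self, logical, notify_power_mgmt):
        phys = self.free.pop()           # any currently unused physical PU
        self.binding[logical] = phys
        notify_power_mgmt(phys)          # ask power management to turn it on
        return phys

    def deactivate(self, logical):
        self.free.append(self.binding.pop(logical))

pool = PUPool(["pu0"])
powered_on = []
pool.activate("logical1", powered_on.append)   # -> "pu0"
pool.deactivate("logical1")
pool.activate("logical2", powered_on.append)   # -> "pu0" again, reused
```

One physical PU thus serves two logical PUs whose active periods do not overlap, which is the multiplexing gain the abstract refers to.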
  • Publication number: 20180314453
    Abstract: A Memory Merging Function "MMF" for merging memory pages. A hardware system comprises a set of memory blades and a set of computing pools. At least one instance of an operating system executes on the hardware system. The MMF is independent of the operating system. The MMF finds a first and a second memory page. The first and second memory pages include identical information. The first and second memory pages are associated with at least one computing unit. The MMF obtains a respective memory blade parameter relating to the memory blade of the first and second memory pages and a respective latency parameter relating to the latency for accessing the first and second memory pages. The MMF releases at least one of the first and second memory pages based on the respective memory blade and latency parameters.
    Type: Application
    Filed: October 19, 2015
    Publication date: November 1, 2018
    Inventors: Amir ROOZBEH, Joao MONTEIRO SOARES, Daniel TURULL
  • Publication number: 20180314641
    Abstract: A method and a Memory Availability Managing Module (110) “MAMM” for managing availability of memory pages (130) are disclosed. A disaggregated hardware system (100) comprises sets of memory blades (105, 106, 107) and computing pools (102, 103, 104). The MAMM (110) receives (A010) a message relating to allocation of at least one memory page to at least one operating system (120). The message comprises an indication about availability for said at least one memory page. The MAMM (110) translates (A020) the indication about availability to a set of memory blade parameters, identifying at least one memory blade (105, 106, 107). The MAMM (110) generates (A030) address mapping information for said at least one memory page, including a logical address of said at least one memory page mapped to at least two physical memory addresses of said at least one memory blade (105, 106, 107).
    Type: Application
    Filed: December 2, 2015
    Publication date: November 1, 2018
    Inventors: Amir Roozbeh, Joao Monteiro Soares, Daniel Turull
  • Patent number: 9875057
    Abstract: A method of migrating an application from a source host to a destination host, wherein the application is associated with a plurality of memory pages, the source host comprises a first instance of the application and a source memory region, and each memory page has an associated source memory block in the source memory region, the method comprising: at the destination host, reserving a destination memory region such that each memory page has an associated destination memory block in the destination memory region, and creating a second instance of the application at the destination host; on receipt of an input to the application, processing the input in parallel at the first and second instances at the respective source and destination hosts: at the source host, if the processing requires a read or a write call to a memory page, respectively reading from or writing to the associated source memory block; at the destination host, if the processing requires a write call to a memory page, writing to the associated destination memory block.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: January 23, 2018
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Amir Roozbeh, Joao Monteiro Soares, Daniel Turull
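The dual-execution rule in the abstract condenses to: during migration, reads hit only the source copy while writes land on both, so destination memory converges as the application keeps running. A minimal model, with invented class and field names:

```python
# Condensed model of the read/write rule during parallel execution.

class MigratingApp:
    def __init__(self, pages):
        self.src = dict(pages)                 # source memory blocks
        self.dst = {k: None for k in pages}    # reserved destination blocks

    def read(self, page):
        return self.src[page]                  # source host answers reads

    def write(self, page, value):
        self.src[page] = value                 # first instance (source host)
        self.dst[page] = value                 # second instance (destination)

app = MigratingApp({"p0": b"old"})
app.write("p0", b"new")
app.read("p0")     # -> b"new"; app.dst["p0"] now also holds b"new"
```

Pages never written during the overlap period would still need a copy step, but every written page arrives at the destination for free.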
  • Publication number: 20170139637
    Abstract: A method of migrating an application from a source host to a destination host, wherein the application is associated with a plurality of memory pages, the source host comprises a first instance of the application and a source memory region, and each memory page has an associated source memory block in the source memory region, the method comprising: at the destination host, reserving a destination memory region such that each memory page has an associated destination memory block in the destination memory region, and creating a second instance of the application at the destination host; on receipt of an input to the application, processing the input in parallel at the first and second instances at the respective source and destination hosts: at the source host, if the processing requires a read or a write call to a memory page, respectively reading from or writing to the associated source memory block; at the destination host, if the processing requires a write call to a memory page, writing to the associated destination memory block.
    Type: Application
    Filed: June 16, 2015
    Publication date: May 18, 2017
    Inventors: Amir Roozbeh, Joao Monteiro Soares, Daniel Turull