Patents by Inventor Kei Fujimoto

Kei Fujimoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250150964
    Abstract: A resource scheduling unit of a base station assigns resource elements (REs), multiplexed in the time and frequency domains, to each UE. The resource scheduling unit includes: a resource assignment calculation unit that, when traffic between a UE and the base station is equal to or less than a predetermined value, schedules resources by assigning them toward one side of the time axis; and a sleep control unit that distributes the scheduling result of the resource assignment calculation unit, as resource assignment information or sleepable time information, to each functional unit of the base station that is capable of sleeping.
    Type: Application
    Filed: February 24, 2022
    Publication date: May 8, 2025
    Inventors: Kei FUJIMOTO, Akinori SHIRAGA, Ikuo OTANI, Shogo SAITO, Ko NATORI
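A minimal Python sketch of the packing idea described in this abstract: when traffic is light, resource demands are packed into the earliest time slots so the remaining slots can be reported as sleepable. The grid model and all names (schedule_low_traffic, demands) are illustrative assumptions, not the patented implementation.
```python
# Illustrative sketch: pack per-UE resource demands into the earliest time
# slots of a time-frequency grid when total traffic is below a threshold,
# so trailing slots can be reported as sleepable time.

def schedule_low_traffic(demands, n_time_slots, n_freq_blocks, traffic_threshold):
    """demands: dict of UE id -> number of resource blocks requested."""
    total = sum(demands.values())
    if total > traffic_threshold:
        return None  # fall back to normal scheduling (out of scope here)

    assignment = {}          # (time_slot, freq_block) -> UE id
    slot, freq = 0, 0
    for ue, need in demands.items():
        for _ in range(need):
            assignment[(slot, freq)] = ue
            freq += 1
            if freq == n_freq_blocks:    # move to the next time slot
                freq, slot = 0, slot + 1
    used_slots = slot + (1 if freq else 0)
    sleepable_slots = list(range(used_slots, n_time_slots))
    return assignment, sleepable_slots

if __name__ == "__main__":
    assign, sleepable = schedule_low_traffic({"ue1": 3, "ue2": 2}, n_time_slots=10,
                                             n_freq_blocks=4, traffic_threshold=20)
    print(assign)      # REs packed into the first two time slots
    print(sleepable)   # remaining slots a functional unit could sleep through
```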
  • Publication number: 20250110901
    Abstract: A server (30) includes a timing estimation unit (33) that estimates the processing time during which a dedicated processing unit (32) executes dedicated processing on the current target data offloaded to it, and a polling control unit (38) that defers the polling processing that requests the dedicated processing unit (32) to transmit the execution result until the estimated processing time has elapsed, and then executes the polling processing on the dedicated processing unit (32).
    Type: Application
    Filed: February 8, 2022
    Publication date: April 3, 2025
    Inventors: Shogo SAITO, Kei FUJIMOTO
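A minimal sketch of the polling-deferral idea in this abstract, using a simulated accelerator. The FakeAccelerator class and the cost model (items multiplied by a per-item cost) are assumptions for illustration, not the patented estimator.
```python
# Instead of busy-polling from the moment work is offloaded, the caller
# sleeps for an estimated processing time and only then starts polling,
# saving CPU cycles during the wait.
import time

class FakeAccelerator:
    def __init__(self):
        self._done_at = None
    def submit(self, n_items, per_item_cost):
        self._done_at = time.monotonic() + n_items * per_item_cost
    def poll(self):
        return time.monotonic() >= self._done_at  # True when the result is ready

def offload_and_wait(acc, n_items, per_item_cost):
    acc.submit(n_items, per_item_cost)
    estimate = n_items * per_item_cost           # estimated processing time
    time.sleep(estimate)                         # no polling during this window
    while not acc.poll():                        # short residual polling only
        time.sleep(0.0001)
    return "result"

if __name__ == "__main__":
    print(offload_and_wait(FakeAccelerator(), n_items=100, per_item_cost=0.001))
```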
  • Publication number: 20250103382
    Abstract: A server delay controller includes: a sleep management unit that puts a thread to sleep when no packet arrives for a predetermined period and cancels the sleep with a hardware interrupt when a packet arrives; a HW interrupt frequency management table that stores the number of hardware interrupts; and a HW interrupt frequency control unit that calculates the HW interrupt frequency from that count and, based on the calculated frequency, permits or prohibits HW interrupts during the sleep managed by the sleep management unit.
    Type: Application
    Filed: January 25, 2022
    Publication date: March 27, 2025
    Inventors: Kei FUJIMOTO, Ko NATORI
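A simplified sketch of the decision logic described above: count recent hardware interrupts in a sliding window and prohibit interrupt-driven wakeups when the rate exceeds a threshold, since frequent wakeups can cost more than staying in polling mode. The class, window, and threshold values are hypothetical.
```python
from collections import deque
import time

class HwInterruptFrequencyControl:
    def __init__(self, window_s=1.0, max_rate_hz=1000):
        self.window_s = window_s
        self.max_rate_hz = max_rate_hz
        self.events = deque()            # timestamps of recent HW interrupts

    def _prune(self, now):
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()

    def record_interrupt(self):
        now = time.monotonic()
        self.events.append(now)
        self._prune(now)

    def interrupts_permitted(self):
        self._prune(time.monotonic())
        rate = len(self.events) / self.window_s
        return rate <= self.max_rate_hz  # False -> keep busy-polling instead

if __name__ == "__main__":
    ctl = HwInterruptFrequencyControl(window_s=1.0, max_rate_hz=5)
    for _ in range(10):
        ctl.record_interrupt()
    print(ctl.interrupts_permitted())    # False: too many wakeups, stay polling
```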
  • Publication number: 20250097134
    Abstract: A server delay control device includes: a data arrival notifier that notifies a data processing application (APL) of data acquired by a data arrival monitor and passes the data to the data processing APL; and a manager that puts a polling thread to sleep and cancels the sleep with a hardware interrupt when a packet arrives. The manager controls the timing of the sleep by permitting the hardware interrupt on the basis of the characteristics of the application program.
    Type: Application
    Filed: January 27, 2022
    Publication date: March 20, 2025
    Inventors: Ko NATORI, Kei FUJIMOTO
  • Publication number: 20250055779
    Abstract: Provided is a device including: computing hardware including one or more central processing units (CPUs); and an operating system (OS) implemented on the computing hardware and comprising a kernel. The device is configured to perform packet processing according to the New API (NAPI) polling model in the kernel of the OS.
    Type: Application
    Filed: October 21, 2024
    Publication date: February 13, 2025
    Inventor: Kei FUJIMOTO
  • Patent number: 12225103
    Abstract: In a time synchronization system, a network controller sets, in the receiving server, transmission distance information for the distance over which an optical signal is transmitted between the transmitting and receiving servers via a queueless network, based on network topology information. The transmitting server transmits a transmitting-side current time, synchronized with a reference time, to the receiving server via the queueless network. The receiving server calculates the transmission delay time between the two servers by dividing the transmission distance given by the set transmission distance information by the speed of light in the queueless network. The receiving-side current time is calculated by adding the transmission delay time to the transmitting-side current time.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: February 11, 2025
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Kei Fujimoto, Masashi Kaneko, Takeshi Fukumoto
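The delay calculation in this abstract can be illustrated with a short worked example: propagation delay equals the fiber distance divided by the light speed in the medium, and the receiving-side time is the transmitting-side timestamp plus that delay. The light-speed constant and the 50 km distance are illustrative numbers, not values from the patent.
```python
# Worked example of the abstract's calculation with illustrative numbers.

C_FIBER_M_PER_S = 2.0e8   # approximate light speed in optical fiber (~2/3 c)

def receiving_side_time(transmit_time_s, distance_m, light_speed=C_FIBER_M_PER_S):
    delay_s = distance_m / light_speed            # transmission delay
    return transmit_time_s + delay_s              # receiving-side current time

if __name__ == "__main__":
    # 50 km of fiber -> 250 microseconds of one-way propagation delay
    t_rx = receiving_side_time(transmit_time_s=1000.000000, distance_m=50_000)
    print(f"{t_rx:.6f}")   # 1000.000250
```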
  • Patent number: 12160359
    Abstract: Provided is a server delay control device deployed in a kernel of an OS of a server. The OS includes: the kernel; a ring buffer managed by the kernel, in the memory space in which the server deploys the OS; and a poll list in which packet arrival information indicating the presence of a packet in the ring buffer is to be registered. The server delay control device spawns a thread configured to monitor packet arrivals according to a polling model. The thread includes: a packet arrival monitoring part configured to monitor whether the packet arrival information has been registered in the poll list, and a packet dequeuer configured to, when the packet arrival information has been registered in the poll list, dequeue the packet from the ring buffer on the basis of the packet arrival information.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: December 3, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventor: Kei Fujimoto
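A user-space Python simulation (not kernel code) of the polling model described above: a dedicated thread watches a stand-in poll list and, when arrival information appears, dequeues packets from a stand-in ring buffer without waiting for an interrupt-driven scheduler. The queue objects are illustrative substitutes for the kernel structures.
```python
import queue
import threading
import time

poll_list = queue.Queue()      # stands in for the kernel poll list
ring_buffer = queue.Queue()    # stands in for the driver ring buffer

def polling_thread(stop):
    while not stop.is_set():
        try:
            poll_list.get_nowait()             # packet arrival information?
        except queue.Empty:
            continue                           # keep polling (busy loop)
        while not ring_buffer.empty():
            pkt = ring_buffer.get_nowait()     # dequeue and hand off the packet
            print("dequeued", pkt)

if __name__ == "__main__":
    stop = threading.Event()
    t = threading.Thread(target=polling_thread, args=(stop,), daemon=True)
    t.start()
    for i in range(3):
        ring_buffer.put(f"pkt-{i}")
    poll_list.put("netdev0")                   # signal that packets are present
    time.sleep(0.1)
    stop.set()
```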
  • Publication number: 20240380707
    Abstract: NIC hardware includes: an accelerator function and argument data parsing unit that deserializes packet data input from the client side according to a predetermined protocol format and obtains a function name and multiple arguments; an accelerator function and return value data packetization unit that serializes a function name and arguments input from an accelerator according to a predetermined protocol format and packetizes them as a payload; and a data transfer unit that transfers data from the accelerator function and argument data parsing unit to a server.
    Type: Application
    Filed: October 1, 2021
    Publication date: November 14, 2024
    Inventors: Shogo SAITO, Kei FUJIMOTO
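A hedged sketch of the packetization and parsing roles described above, using a simple length-prefixed wire format chosen purely for illustration; the actual protocol format is not specified in this listing, and the function names here are hypothetical.
```python
import struct

def packetize(function_name, args):
    """Serialize a function name and string arguments into one payload."""
    parts = [function_name] + list(args)
    payload = struct.pack("!H", len(parts))             # field count
    for p in parts:
        data = p.encode()
        payload += struct.pack("!H", len(data)) + data  # length-prefixed field
    return payload

def parse(payload):
    """Deserialize the payload back into (function_name, args)."""
    (count,) = struct.unpack_from("!H", payload, 0)
    off, fields = 2, []
    for _ in range(count):
        (length,) = struct.unpack_from("!H", payload, off)
        off += 2
        fields.append(payload[off:off + length].decode())
        off += length
    return fields[0], fields[1:]

if __name__ == "__main__":
    pkt = packetize("fft", ["1024", "float32"])
    print(parse(pkt))    # ('fft', ['1024', 'float32'])
```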
  • Publication number: 20240333541
    Abstract: An on-server data transmission device (200) that performs data transfer control on an interface part in a user space includes: a data transfer part (220) configured to launch a thread that monitors packet arrivals using a polling model; and a sleep control manager (210) configured to manage data arrival schedule information and deliver it to the data transfer part (220) in order to perform sleep control on the data transfer part (220). The data transfer part (220) is configured to put the thread into a sleep state based on the data arrival schedule information delivered from the sleep control manager (210) and to set a timer that expires immediately before data arrives, cancelling the sleep state and waking the thread.
    Type: Application
    Filed: July 19, 2021
    Publication date: October 3, 2024
    Inventors: Kei FUJIMOTO, Shogo SAITO, Tetsuro NAKAMURA
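A minimal sketch of schedule-driven sleep control as described above: the thread sleeps instead of busy-waiting, and a timer is armed to fire slightly before the known next data arrival so the thread is already polling when the data lands. The wake margin and timings are illustrative assumptions.
```python
import threading
import time

def data_transfer_thread(next_arrival_in_s, wake_margin_s=0.002):
    wake_event = threading.Event()
    timer = threading.Timer(max(next_arrival_in_s - wake_margin_s, 0.0),
                            wake_event.set)
    timer.start()
    print("sleeping until just before the scheduled arrival")
    wake_event.wait()                  # thread sleeps; no CPU spent polling
    print("awake, start polling for the data")
    deadline = time.monotonic() + 0.05 # poll briefly around the expected time
    while time.monotonic() < deadline:
        pass                           # placeholder for the real poll loop

if __name__ == "__main__":
    data_transfer_thread(next_arrival_in_s=0.05)
```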
  • Publication number: 20240296073
    Abstract: A resource dynamic allocation device includes: a unit thread ID acquisitor that acquires identification information of an application process initialized at activation, identification information of each thread in the application process to which an available physical resource on a physical server is allocated, and identification information of the physical resource allocated to the thread; and a resource recorder/allocator that freezes all threads of the initialized application process and then freezes or unfreezes individual threads in accordance with any one of the transfer traffic, the resource load, and the load of the application process.
    Type: Application
    Filed: June 24, 2021
    Publication date: September 5, 2024
    Inventors: Tetsuro NAKAMURA, Kei FUJIMOTO, Shogo SAITO
  • Publication number: 20240291767
    Abstract: An OS (120) of a client (100) includes: an L3/L4 protocol/ACC function/argument data packetizing unit (121) that serializes a function name/argument input from an application side according to a format of a predetermined protocol and packetizes the function name/argument as a payload; and an L3/L4 protocol/ACC function/return value data parsing unit (122) that deserializes packet data input from a server (200) side according to a format of a predetermined protocol and acquires a function name/execution result.
    Type: Application
    Filed: July 5, 2021
    Publication date: August 29, 2024
    Inventors: Shogo SAITO, Kei FUJIMOTO, Tetsuro NAKAMURA
  • Patent number: 12039375
    Abstract: The processing performance of an entire system is enhanced by efficiently using CPU resources shared by a plurality of guests. A server 10 includes a host OS 104 and a plurality of guest OSs 110A and 110B running on a plurality of virtual machines 108A and 108B, respectively, which are virtually constructed on the host OS 104. The plurality of virtual machines 108A and 108B shares CPU resources implemented by hardware 102. A guest priority calculation unit 202 of a resource management device (resource management unit 20) calculates a processing priority of at least one of the guest OSs 110 based on at least one of a packet transfer rate from the host OS 104 to the guest OS 110 and an available capacity status of a kernel buffer of the host OS 104. A resource utilization control unit 204 controls allocation of a utilization time for CPU resources to be used by the plurality of guest OSs 110 based on the calculated processing priority.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: July 16, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Kei Fujimoto, Kohei Matoba, Makoto Araoka
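An illustrative priority calculation in the spirit of this abstract: derive a per-guest priority from the packet transfer rate toward the guest and the host-side kernel buffer availability, then split CPU time in proportion to priority. The specific weighting is an assumption for illustration, not taken from the patent.
```python
def guest_priority(pkt_rate_pps, buffer_free_ratio):
    # Busier guests, and guests whose host-side buffer is filling up, get more CPU.
    return pkt_rate_pps * (1.0 + (1.0 - buffer_free_ratio))

def cpu_time_shares(guests):
    prios = {g: guest_priority(rate, free) for g, (rate, free) in guests.items()}
    total = sum(prios.values())
    return {g: p / total for g, p in prios.items()}

if __name__ == "__main__":
    shares = cpu_time_shares({
        "guestA": (50_000, 0.2),   # high traffic, nearly full buffer
        "guestB": (5_000, 0.9),    # light traffic, mostly empty buffer
    })
    print(shares)                  # guestA receives most of the CPU time
```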
  • Publication number: 20240231904
    Abstract: A scheduling device includes: a controller unit that acquires the model used by each task; an FPGA control unit that switches the FPGA configuration so that the model acquired by the controller unit becomes processable; and a scheduler unit that refers to a queue storing tasks for each model, reads a task whose model has become processable through the switching by the FPGA control unit, and causes the FPGA to execute the task.
    Type: Application
    Filed: March 5, 2021
    Publication date: July 11, 2024
    Inventors: Tetsuro NAKAMURA, Kei FUJIMOTO, Shogo SAITO
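A simplified scheduler sketch of the per-model queueing described above, with hypothetical hooks (reconfigure_fpga, run_on_fpga) standing in for the FPGA control unit and task execution: tasks are queued per model, and the FPGA is switched to a model only when that model's queue is being drained.
```python
from collections import defaultdict, deque

def reconfigure_fpga(model):          # placeholder for the FPGA control unit
    print(f"[fpga] loading configuration for model '{model}'")

def run_on_fpga(task):                # placeholder for task execution
    print(f"[fpga] running {task}")

def schedule(tasks):
    queues = defaultdict(deque)       # one queue per model
    for model, task in tasks:
        queues[model].append(task)
    for model, q in queues.items():
        reconfigure_fpga(model)       # switch once per model
        while q:
            run_on_fpga(q.popleft())  # drain every task that uses this model

if __name__ == "__main__":
    schedule([("resnet", "t1"), ("bert", "t2"), ("resnet", "t3")])
```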
  • Patent number: 12001895
    Abstract: Provided is a server delay control system for performing, on a server including a Host OS, packet transfer between a physical NIC connected to the Host OS and an application deployed in a user space. A server delay control device configured to perform polling for packet transfer on behalf of the application is deployed in the user space. The server delay control device creates, between the application and the physical NIC, a communication path for communication via socket communication. The communication path includes a first queue and a second queue. The server delay control device includes: a packet dequeuer configured to poll whether a packet has been enqueued into the first queue and to dequeue the enqueued packet from the first queue; and a packet enqueuer configured to enqueue the dequeued packet into the second queue in the same context as the polling and dequeuing without causing a context switch.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: June 4, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Kei Fujimoto, Maiko Arai
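A user-space sketch of the two-queue, same-context transfer described above: a single loop polls the first queue, dequeues a packet, and immediately enqueues it into the second queue within the same call chain, so no context switch sits between the two steps. The queue names are illustrative stand-ins.
```python
from collections import deque

rx_queue = deque()     # first queue: physical NIC side
app_queue = deque()    # second queue: application side

def poll_once():
    if not rx_queue:                 # polling: has anything been enqueued?
        return False
    pkt = rx_queue.popleft()         # dequeue...
    app_queue.append(pkt)            # ...and enqueue in the same context
    return True

if __name__ == "__main__":
    rx_queue.extend([b"pkt0", b"pkt1"])
    while poll_once():
        pass
    print(list(app_queue))           # [b'pkt0', b'pkt1']
```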
  • Publication number: 20240160468
    Abstract: An OS (70) includes a ring buffer (72) and a poll list (186). A kernel (171) includes a server delay control device (100) that spawns a thread configured to monitor packet arrivals according to a polling model. Provided are: a packet arrival monitoring part (110) configured to monitor the poll list (186); a packet dequeuer (120) configured to, when a packet has arrived, reference the packet held in the ring buffer (72) and, on the basis of the processing to be performed next, dequeue the corresponding queue entry from the ring buffer (72); and a sleep management part (130) configured to cause the thread to sleep when there is no packet arrival over a predetermined period of time and to cancel the sleep with a hardware interrupt of the thread when a packet arrives.
    Type: Application
    Filed: March 18, 2021
    Publication date: May 16, 2024
    Inventors: Kei FUJIMOTO, Masashi KANEKO
  • Publication number: 20240129255
    Abstract: Provided is a server delay control device for a server in which an OS having a kernel is deployed. The OS includes: a ring buffer managed by the kernel; and a poll list in which information on the net device of a hardware interrupt from an NIC is registered. The server delay control device, deployed in the server, receives a timer interrupt at predetermined intervals to monitor packet arrivals and includes: a packet arrival monitoring part configured to check for the presence or absence of a packet in the poll list upon being triggered by the timer interrupt; and a packet dequeuer configured to, when a packet has arrived, reference the packet held in the ring buffer and dequeue the corresponding queue entry from the ring buffer.
    Type: Application
    Filed: February 10, 2021
    Publication date: April 18, 2024
    Inventors: Kei FUJIMOTO, Masashi KANEKO
  • Publication number: 20240098318
    Abstract: A communication system in which a server-side system and a client-side system are communicably connected via a network, the server-side system including: an information reception unit that receives an input information packet including input information and loss chunk numbers from the client-side system; a video processing unit that generates video data; a chunking function unit that chunks the video data and assigns a number to each chunk; a chunk loss determination unit that calculates a chunk loss rate and determines a transmission interval on the basis of a transmission interval determination logic; and a transmission control unit that transmits the chunked video data to the client-side system.
    Type: Application
    Filed: March 22, 2021
    Publication date: March 21, 2024
    Inventors: Mizuki IKEGAYA, Kei FUJIMOTO, Shogo SAITO, Tetsuro NAKAMURA
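One possible transmission-interval determination logic matching the roles described above: compute a chunk loss rate from the reported lost chunk numbers and back off or speed up accordingly. The loss threshold and step size are assumptions for illustration, not taken from the filing.
```python
def chunk_loss_rate(lost_chunk_numbers, sent_chunk_count):
    return len(set(lost_chunk_numbers)) / max(sent_chunk_count, 1)

def next_transmission_interval(current_interval_ms, loss_rate,
                               target_loss=0.02, step_ms=1.0):
    if loss_rate > target_loss:
        return current_interval_ms + step_ms          # back off on chunk loss
    return max(current_interval_ms - step_ms, 1.0)    # speed up when clean

if __name__ == "__main__":
    rate = chunk_loss_rate(lost_chunk_numbers=[12, 15, 15], sent_chunk_count=100)
    print(rate)                                   # 0.02
    print(next_transmission_interval(5.0, rate))  # 4.0: loss is within target
```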
  • Publication number: 20240014999
    Abstract: In a time synchronization system, a network controller sets, in the receiving server, transmission distance information for the distance over which an optical signal is transmitted between the transmitting and receiving servers via a queueless network, based on network topology information. The transmitting server transmits a transmitting-side current time, synchronized with a reference time, to the receiving server via the queueless network. The receiving server calculates the transmission delay time between the two servers by dividing the transmission distance given by the set transmission distance information by the speed of light in the queueless network. The receiving-side current time is calculated by adding the transmission delay time to the transmitting-side current time.
    Type: Application
    Filed: September 15, 2020
    Publication date: January 11, 2024
    Inventors: Kei FUJIMOTO, Masashi KANEKO, Takeshi FUKUMOTO
  • Publication number: 20230359501
    Abstract: A management device of an accelerator management mechanism includes: a workload management unit that manages correspondence information associating each workload with an accelerator of a calculation node that can support the workload; and an accelerator management unit that receives an inquiry about a workload from a user terminal, acquires, with reference to the correspondence information, information on an accelerator of a calculation node that can support the workload in the inquiry, responds to the user terminal with the acquired information, and thereby offloads the workload from the user terminal to the accelerator of the calculation node indicated in the response.
    Type: Application
    Filed: September 17, 2020
    Publication date: November 9, 2023
    Inventors: Kei FUJIMOTO, Shogo SAITO, Tetsuro NAKAMURA
  • Publication number: 20230083604
    Abstract: An in-server frequency control apparatus includes: a preliminary grasp unit that detects in advance the start of a service requiring low delay in a server and detects the termination of the service by a predetermined method; and an operating frequency change unit that changes the operating frequency of the target CPU, which is allocated in advance to the receiver of the service in the server, at the detected start of the service and at its termination. The operating frequency change unit raises the operating frequency of the target CPU above a predetermined value at the start of the service and lowers it below the predetermined value at the termination of the service.
    Type: Application
    Filed: February 27, 2020
    Publication date: March 16, 2023
    Inventors: Maiko ARAI, Kei FUJIMOTO, Yusuke OGATA
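A minimal sketch of the start/stop frequency control described above, assuming a Linux host where the cpufreq "userspace" governor exposes scaling_setspeed for the target CPU. With dry_run=True the script only prints the intended writes; actually writing requires root. The frequency values are illustrative.
```python
HIGH_KHZ = 3_000_000   # raised frequency for the low-delay service
LOW_KHZ = 1_200_000    # lowered frequency once the service ends

def set_cpu_freq(cpu, khz, dry_run=True):
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"
    if dry_run:
        print(f"would write {khz} to {path}")
        return
    with open(path, "w") as f:         # requires root and the userspace governor
        f.write(str(khz))

def on_service_start(cpu):
    set_cpu_freq(cpu, HIGH_KHZ)        # raise frequency before low-delay traffic

def on_service_end(cpu):
    set_cpu_freq(cpu, LOW_KHZ)         # drop frequency once the service terminates

if __name__ == "__main__":
    on_service_start(2)
    on_service_end(2)
```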