Patents Examined by Adam Lee
  • Patent number: 11675631
    Abstract: In an approach for balancing mainframe and distributed workloads, a processor receives a request to allocate an application workload to a mainframe platform and a distributed computing platform. The application workload includes a plurality of work units. A processor collects performance and cost data associated with the application workload, the mainframe platform, and the distributed computing platform. A processor determines the mainframe platform and the distributed computing platform for the plurality of work units of the application workload, based on an analysis of the performance and cost data. A processor allocates the plurality of work units of the application workload to run on the mainframe platform and the distributed computing platform respectively to balance performance and cost in real time.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: June 13, 2023
    Assignee: Kyndryl, Inc.
    Inventors: Allan Douglas Moreira Martins, Tiago Battiva Ferreira, Jose Gilberto Biondo Junior, Tiago Dias Generoso, Robert Justiniano Ferreira
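    Illustrative sketch: a minimal Python example of splitting work units between two platforms by a weighted cost/latency score, in the spirit of the abstract above; the data-class fields, weights, and numbers are invented assumptions, not the patented method.
      # Hypothetical sketch: assign each work unit to whichever platform gives the
      # lower weighted score of cost and latency; weights and numbers are made up.
      from dataclasses import dataclass

      @dataclass
      class WorkUnit:
          name: str
          mainframe_cost: float      # $ per run on the mainframe
          mainframe_latency: float   # seconds on the mainframe
          distributed_cost: float    # $ per run on the distributed platform
          distributed_latency: float # seconds on the distributed platform

      def allocate(units, cost_weight=0.5, perf_weight=0.5):
          """Return a mapping of work-unit name -> chosen platform."""
          placement = {}
          for u in units:
              mf_score = cost_weight * u.mainframe_cost + perf_weight * u.mainframe_latency
              dc_score = cost_weight * u.distributed_cost + perf_weight * u.distributed_latency
              placement[u.name] = "mainframe" if mf_score <= dc_score else "distributed"
          return placement

      if __name__ == "__main__":
          units = [WorkUnit("batch-report", 2.0, 5.0, 1.2, 9.0),
                   WorkUnit("web-lookup", 3.5, 0.4, 0.8, 0.6)]
          print(allocate(units))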
  • Patent number: 11674904
    Abstract: This disclosure describes various systems and methods for monitoring photons emitted by a heat source of an additive manufacturing device. Sensor data recorded while monitoring the photons can be used to predict metallurgical, mechanical, and geometrical properties of a part produced during an additive manufacturing operation. In some embodiments, a test pattern can be used to calibrate an additive manufacturing device.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 13, 2023
    Assignee: Sigma Additive Solutions, Inc.
    Inventors: Vivek R. Dave, Mark J. Cola, R. Bruce Madigan, Alberto Castro, Glenn Wikle, Lars Jacquemetton, Peter Campbell
  • Patent number: 11663052
    Abstract: A method for allocating resources to applications in a distributed datacenter based on generated contact lists is described. The method includes receiving, by a first resource manager, a placement request, which identifies resources needed for execution of an application; determining a policy associated with the application; generating a first contact list for the first resource manager based on the determined policy for the application; and searching resources in the distributed datacenter, based on the first contact list, to attempt to meet the identified resources of the placement request.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: May 30, 2023
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Joacim Halén, Chunyan Fu, Mina Sedaghat, Wolfgang John
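    Illustrative sketch: a toy Python version of policy-driven placement, generating an ordered contact list of peer resource managers and querying each until the request is met; the policy fields and manager records are invented assumptions, not the patented implementation.
      # Hypothetical sketch: build a contact list of peer resource managers from a
      # policy, then query each contact until the requested resources are found.
      def generate_contact_list(policy, managers):
          """Order peer managers by how well they match the policy's preferred zones."""
          preferred = policy.get("preferred_zones", [])
          return sorted(managers, key=lambda m: preferred.index(m["zone"])
                        if m["zone"] in preferred else len(preferred))

      def place(request, policy, managers):
          """Return the first contact that can satisfy the requested CPU/memory."""
          for manager in generate_contact_list(policy, managers):
              free = manager["free"]
              if free["cpu"] >= request["cpu"] and free["mem"] >= request["mem"]:
                  return manager["name"]
          return None  # placement request could not be met

      managers = [{"name": "rm-a", "zone": "eu", "free": {"cpu": 8, "mem": 32}},
                  {"name": "rm-b", "zone": "us", "free": {"cpu": 64, "mem": 256}}]
      print(place({"cpu": 16, "mem": 64}, {"preferred_zones": ["eu", "us"]}, managers))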
  • Patent number: 11650848
    Abstract: Controlling allocation of resources in network function virtualization. Data defining a pool of available physical resources is maintained. Data defining one or more resource allocation rules is identified. An application request is received. Physical resources from the pool are allocated to virtual resources to implement the application request, on the basis of the maintained data, the identified data and the received application request.
    Type: Grant
    Filed: January 21, 2016
    Date of Patent: May 16, 2023
    Inventors: Ignacio Aldama, Ruben Sevilla Giron, Javier Garcia-Lopez
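    Illustrative sketch: a minimal Python example of allocating physical resources from a maintained pool to a virtual resource under a simple allocation rule; the pool layout, rule, and field names are invented assumptions, not the patented method.
      # Hypothetical sketch: allocate physical hosts from a pool to virtual resources,
      # subject to a simple allocation rule; field names are illustrative only.
      pool = [{"host": "h1", "cpu": 16}, {"host": "h2", "cpu": 64}]
      rules = {"max_cpu_per_request": 32}

      def allocate_virtual_resource(request_cpu):
          """Pick the first host with enough spare CPU, honouring the rule set."""
          if request_cpu > rules["max_cpu_per_request"]:
              raise ValueError("request exceeds allocation rule")
          for host in pool:
              if host["cpu"] >= request_cpu:
                  host["cpu"] -= request_cpu          # reserve capacity on this host
                  return {"virtual_resource": f"vr-on-{host['host']}", "cpu": request_cpu}
          return None

      print(allocate_virtual_resource(24))   # lands on h2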
  • Patent number: 11650858
    Abstract: A method for maintaining version consistency of resources. The method provides for one or more processors to receive a submitted request to run a job in which the job includes a processing element and a timestamp associated with running the job. Identification of a resource type associated with the processing element is determined, based on a tag included in the job, associated with the processing element. A version of the resource type of the processing element is determined, based on a mapping of the tag associated with the identified resource type and the timestamp of the job. The resource type of the determined version is requested from a resource manager, and responsive to a confirmation of assigning the version of the resource type from the resource manager, the processing element of the job is performed on the version of the resource type assigned by the resource manager.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: May 16, 2023
    Assignee: International Business Machines Corporation
    Inventors: Bradley William Fawcett, Jingdong Sun, Henry Chiu, Jason A. Nikolai
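    Illustrative sketch: a small Python example of resolving a resource version from a tag-to-version history and a job timestamp, along the lines of the abstract above; the tag names, dates, and versions are invented assumptions.
      # Hypothetical sketch: resolve the resource version for a job's processing
      # element from a (tag -> dated versions) mapping and the job's timestamp.
      from datetime import date

      # Each tag maps to versions with the date they became current (illustrative data).
      version_history = {
          "parser": [(date(2020, 1, 1), "v1"), (date(2020, 6, 1), "v2")],
      }

      def resolve_version(tag, job_timestamp):
          """Return the latest version of the tagged resource not newer than the job."""
          candidates = [v for effective, v in version_history[tag] if effective <= job_timestamp]
          return candidates[-1] if candidates else None

      print(resolve_version("parser", date(2020, 3, 15)))   # -> v1
      print(resolve_version("parser", date(2020, 9, 1)))    # -> v2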
  • Patent number: 11625600
    Abstract: A neural network system for predicting a polling time and a neural network model processing method using the neural network system are provided. The neural network system includes a first resource to generate a first calculation result obtained by performing at least one calculation operation corresponding to a first calculation processing graph and a task manager to calculate a first polling time taken for the first resource to perform the at least one calculation operation and to poll the first calculation result from the first resource based on the calculated first polling time.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: April 11, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Seung-soo Yang
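    Illustrative sketch: a toy Python example of a task manager that predicts how long a resource will need, waits that long, and then collects the result rather than busy-polling; the linear timing model and numbers are invented assumptions, not the patented neural network system.
      # Hypothetical sketch: estimate how long a resource will take, wait that long,
      # then poll once for the result instead of busy-polling; numbers are invented.
      import time, threading

      result = {}

      def accelerator_job(num_ops, ops_per_second=1_000_000):
          time.sleep(num_ops / ops_per_second)           # pretend to compute
          result["value"] = 42

      def task_manager(num_ops, ops_per_second=1_000_000):
          predicted_polling_time = num_ops / ops_per_second   # simple linear model
          worker = threading.Thread(target=accelerator_job, args=(num_ops,))
          worker.start()
          time.sleep(predicted_polling_time)             # wait the predicted time
          worker.join()                                  # then collect the result
          return result["value"]

      print(task_manager(500_000))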
  • Patent number: 11625267
    Abstract: There is provided an information processing apparatus including a processing unit that performs a series of processes with an external device, and a detection unit that detects an interrupt of a process other than the series of processes after the series of processes is started, in which the processing unit changes contents of a process to be performed after the interrupt is detected on the basis of a detection state of the interrupt.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: April 11, 2023
    Assignee: FELICA NETWORKS, INC.
    Inventors: Yasumasa Nakatsugawa, Seiji Kawamura, Naofumi Hanaki
  • Patent number: 11620510
    Abstract: Computing resources may be optimally allocated for a multipath neural network using a multipath neural network analyzer that includes an interface and a processing device. The interface receives a multipath neural network. The processing device generates the multipath neural network to include one or more layers of a critical path through the multipath neural network that are allocated a first allocation of computing resources that are available to execute the multipath neural network. The critical path limits throughput of the multipath neural network. The first allocation of computing resources reduces an execution time of the multipath neural network to be less than a baseline execution time of a second allocation of computing resources for the multipath neural network. The first allocation of computing resources for a first layer of the critical path is different than the second allocation of computing resources for the first layer of the critical path.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: April 4, 2023
    Inventors: Behnam Pourghassemi Najafabadi, Joo Hwan Lee, Yang Seok Ki
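    Illustrative sketch: a small Python example that identifies the slowest (critical) path of a multipath network and gives it a larger share of compute; the layer timings, core counts, and boost factor are invented assumptions, not the patented allocation.
      # Hypothetical sketch: give the slowest (critical) path of a multipath network a
      # larger share of compute so it no longer limits throughput; times are invented.
      paths = {
          "path_a": [("conv1", 4.0), ("conv2", 6.0)],   # (layer, baseline ms)
          "path_b": [("fc1", 2.0)],
      }

      def critical_path(paths):
          """The path with the largest total baseline execution time."""
          return max(paths, key=lambda p: sum(t for _, t in paths[p]))

      def allocate(paths, total_cores=8, boost=0.75):
          """Give `boost` of the cores to the critical path, split the rest evenly."""
          crit = critical_path(paths)
          others = [p for p in paths if p != crit]
          alloc = {crit: int(total_cores * boost)}
          remaining = total_cores - alloc[crit]
          for p in others:
              alloc[p] = max(1, remaining // max(1, len(others)))
          return alloc

      print(allocate(paths))   # e.g. {'path_a': 6, 'path_b': 2}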
  • Patent number: 11609776
    Abstract: An elastic Internet Protocol (IP) address for hypervisor and virtual router management in a branch environment may be provided. First, an IP address may be assigned to a hypervisor associated with a virtual branch. Next, it may be determined that a virtual machine (VM) has been instantiated at the virtual branch. In response to determining that the VM has been instantiated at the virtual branch, the IP address may then be released. It may next be determined that the VM is in a failed state and then, in response to determining that the VM is in the failed state, the IP address may be reassigned to the hypervisor.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: March 21, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Yanping Qu, Sabita Jasty, Yegappan Lakshmanan, Kaushik Pratap Biswas
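    Illustrative sketch: a minimal Python state machine for the assign/release/reassign cycle of a shared branch IP address described above; the class, method names, and example address are invented assumptions.
      # Hypothetical sketch of the assign/release/reassign cycle for a shared branch
      # IP address; the VM events and class names are illustrative, not from the patent.
      class VirtualBranch:
          def __init__(self, ip):
              self.ip = ip
              self.holder = "hypervisor"        # IP starts on the hypervisor

          def on_vm_instantiated(self):
              self.holder = None                # release so the virtual router can claim it
              print(f"{self.ip} released for the virtual router")

          def on_vm_failed(self):
              self.holder = "hypervisor"        # reassign so the branch stays manageable
              print(f"{self.ip} reassigned to the hypervisor")

      branch = VirtualBranch("203.0.113.10")
      branch.on_vm_instantiated()
      branch.on_vm_failed()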
  • Patent number: 11593233
    Abstract: Techniques are provided for data synchronization. For example, such a technique may involve: obtaining respective synchronization characteristics of a group of synchronization jobs to be processed, each synchronization characteristic indicating at least one of an expected completion time instant and an amount of data to be synchronized of a corresponding synchronization job; prioritizing the group of the synchronization jobs based on the synchronization characteristics; and controlling execution of the group of the synchronization jobs based on a result of the prioritizing. Accordingly, high priority is given to the synchronization jobs which can be rapidly completed, thereby improving the Recovery Point Objective (RPO) achievement rate before occurrence of a failure.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: February 28, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Fang Du, Pan Xiao, Xu Chen, Peilei Chen
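    Illustrative sketch: a small Python example that orders synchronization jobs by their expected completion time so the quickest jobs run first; the job records and throughput figures are invented assumptions.
      # Hypothetical sketch: run the synchronization jobs that can finish soonest
      # first, so more jobs complete before their RPO deadlines; fields are invented.
      sync_jobs = [
          {"name": "db",    "bytes_remaining": 50_000_000, "throughput": 10_000_000},
          {"name": "logs",  "bytes_remaining":  2_000_000, "throughput": 10_000_000},
          {"name": "media", "bytes_remaining": 80_000_000, "throughput": 20_000_000},
      ]

      def expected_completion_seconds(job):
          return job["bytes_remaining"] / job["throughput"]

      # Highest priority to the jobs that can be completed most quickly.
      for job in sorted(sync_jobs, key=expected_completion_seconds):
          print(job["name"], round(expected_completion_seconds(job), 1), "s")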
  • Patent number: 11593180
    Abstract: In an approach, a processor receives a request to deploy a workload in a container environment, where: the container environment comprises a plurality of external providers running container environment clusters; and the request (i) includes one or more requirements of the workload and (ii) does not specify a particular external provider of the plurality of external providers. A processor determines a cluster, from the plurality of external providers running the container environment clusters, that meets the one or more requirements of the workload. A processor deploys the workload on the determined cluster.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: February 28, 2023
    Assignee: KYNDRYL, INC.
    Inventors: Manish Gupta, Gopal S Pingali, Kiranmai Bhagavatula
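    Illustrative sketch: a minimal Python example of choosing a cluster that meets a workload's requirements without the request naming a provider; the cluster records and requirement fields are invented assumptions.
      # Hypothetical sketch: pick any provider cluster that satisfies the workload's
      # requirements without the request naming a provider; data is illustrative.
      clusters = [
          {"provider": "provider-a", "region": "eu", "gpus": 0, "free_cpu": 32},
          {"provider": "provider-b", "region": "us", "gpus": 4, "free_cpu": 64},
      ]

      def pick_cluster(requirements):
          for c in clusters:
              if (c["free_cpu"] >= requirements["cpu"]
                      and c["gpus"] >= requirements.get("gpus", 0)
                      and requirements.get("region", c["region"]) == c["region"]):
                  return c
          return None

      print(pick_cluster({"cpu": 16, "gpus": 2}))   # matches provider-b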
  • Patent number: 11579940
    Abstract: A publish and subscribe architecture can be utilized to manage records, which can be used to accomplish various functional goals. At least one template having definitions for managing production and consumption of data within an unconfigured group of computing resources is maintained. Records organized by topic, collected from multiple disparate previously configured producers, are utilized to initiate configuration of the unconfigured group of computing resources. Records within a topic are organized by a corresponding topic sequence. A first portion of the computing resources is configured as consumers based on the at least one template; the consumers consume records at a pace independent of record production. A second portion of the computing resources is configured as producers based on the at least one template; the producers produce records at a pace independent of record consumption.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: February 14, 2023
    Assignee: salesforce.com, inc.
    Inventors: Seamus Carroll, Morgan Galpin, Adam Matthew Elliott, Chris Mueller, Graham Campbell
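    Illustrative sketch: a toy Python example in which a template designates producers and consumers of a topic and a queue lets each side run at its own pace; the template fields and node names are invented assumptions, not the patented architecture.
      # Hypothetical sketch: a template declares which resources become producers and
      # which become consumers of a topic; a queue decouples their paces.
      import queue, itertools

      template = {"topic": "orders", "producers": ["node-1"], "consumers": ["node-2"]}
      topic_queue = queue.Queue()
      sequence = itertools.count()                    # per-topic record sequence numbers

      def produce(node, record):
          assert node in template["producers"]        # only configured producers may write
          topic_queue.put((next(sequence), record))   # producers never wait on consumers

      def consume(node):
          assert node in template["consumers"]        # only configured consumers may read
          return topic_queue.get()                    # consumers drain at their own pace

      for i in range(3):
          produce("node-1", {"order": i})
      print([consume("node-2") for _ in range(3)])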
  • Patent number: 11579933
    Abstract: A method for establishing system resource prediction and resource management model through multi-layer correlations is provided. The method builds an estimation model by analyzing the relationship between a main application workload, resource usage of the main application, and resource usage of sub-application resources, and prepares in advance the specific resources needed to meet future requirements. This multi-layer analysis, prediction, and management method is different from the prior art, which only focuses on single-level estimation and resource deployment. The present invention can utilize more interactive relationships at different layers to effectively perform predictions, thereby achieving the advantage of reducing hidden resource management costs when operating application services.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: February 14, 2023
    Assignee: ProphetStor Data Services, Inc.
    Inventors: Wen-Shyen Chen, Wan-Chi Chang
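    Illustrative sketch: a two-layer Python estimate that predicts the main application's resource usage from its workload and then predicts sub-application usage from that result; the linear coefficients are invented placeholders, not the patented model.
      # Hypothetical sketch of a two-layer estimate: workload -> main application usage,
      # then main application usage -> sub-application usage; coefficients are invented.
      def predict_main_usage(requests_per_second, cpu_per_request=0.002):
          return requests_per_second * cpu_per_request          # main app CPU cores

      def predict_sub_usage(main_cpu, db_cpu_per_app_cpu=0.5, cache_cpu_per_app_cpu=0.2):
          return {"database": main_cpu * db_cpu_per_app_cpu,
                  "cache": main_cpu * cache_cpu_per_app_cpu}

      forecast_rps = 4_000                       # forecast workload for the next hour
      main_cpu = predict_main_usage(forecast_rps)
      print(main_cpu, predict_sub_usage(main_cpu))   # provision these ahead of time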
  • Patent number: 11573834
    Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an asynchronous packet network; a plurality of configurable circuits arranged in an array, each configurable circuit coupled to the asynchronous packet network and adapted to perform a plurality of computations; and a dispatch interface circuit adapted to partition the plurality of configurable circuits into one or more separate partitions of configurable circuits and to load one or more computation kernels into each partition of configurable circuits. The dispatch interface circuit may load balance across the partitions of configurable circuits by starting threads for execution in the partition having the highest number of available thread identifiers. The dispatch interface may also assert a partition enable signal to merge the one or more separate partitions and assert a stop signal to all configurable circuits of the one or more separate partitions of configurable circuits.
    Type: Grant
    Filed: August 16, 2020
    Date of Patent: February 7, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
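    Illustrative sketch: a minimal Python example of dispatching a new thread to the partition with the most free thread identifiers; the partition names and counts are invented assumptions, not the patented circuit.
      # Hypothetical sketch: dispatch a new thread to whichever partition of
      # configurable circuits currently has the most free thread identifiers.
      partitions = {
          "partition-0": {"free_thread_ids": set(range(0, 16))},
          "partition-1": {"free_thread_ids": set(range(0, 4))},
      }

      def dispatch_thread():
          """Start a thread in the partition with the highest number of free IDs."""
          name = max(partitions, key=lambda p: len(partitions[p]["free_thread_ids"]))
          tid = partitions[name]["free_thread_ids"].pop()
          return name, tid

      print(dispatch_thread())   # lands on partition-0 while it has more free IDs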
  • Patent number: 11561840
    Abstract: The present disclosure provides a system comprising: a first group of computing nodes and a second group of computing nodes, wherein the first and second groups are neighboring devices and each of the first and second groups comprising: a set of computing nodes A-D, and a set of intra-group interconnects, wherein the set of intra-group interconnects communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects, wherein the set of inter-group interconnects communicatively couple computing node A of the first group with computing node A of the second group, computing node B of the first group with computing node B of the second group, computing node C of the first group with computing node C of the second group, and computing node D of the first group with computing node D of the second group.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: January 24, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Liang Han, Yang Jiao
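    Illustrative sketch: a short Python routine that enumerates the intra-group and inter-group links described in the abstract above for two groups of nodes A-D; the group labels are invented assumptions.
      # Hypothetical sketch of the wiring described above: within each group of four
      # nodes A-D, A and D each connect to B and C; matching nodes of neighboring
      # groups are connected to each other. Node labels are taken from the abstract.
      def build_links(groups=("g1", "g2")):
          links = set()
          for g in groups:                                   # intra-group interconnects
              for a, b in [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")]:
                  links.add(frozenset({f"{g}.{a}", f"{g}.{b}"}))
          g1, g2 = groups                                    # inter-group interconnects
          for n in "ABCD":
              links.add(frozenset({f"{g1}.{n}", f"{g2}.{n}"}))
          return links

      for link in sorted(build_links(), key=sorted):
          print(sorted(link))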
  • Patent number: 11550505
    Abstract: A data stream may include a plurality of records that are ordered, and the plurality of records may be assigned to a processing shard. A first set of virtual shards may be formed, the first set of virtual shards having a first quantity of virtual shards that perform parallel processing operations on behalf of the processing shard. First records of the plurality of records may be processed using the first set of virtual shards. The first quantity of virtual shards may be modified, based at least in part on an observed record age, to a second quantity of virtual shards that perform parallel processing operations on behalf of the processing shard. A second set of virtual shards may be formed having the second quantity of virtual shards. Second records of the plurality of records may be processed using the second set of virtual shards.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: January 10, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Dinesh Saheblal Gupta, Deepak Verma, Jiaxuan Lu
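    Illustrative sketch: a small Python function that grows or shrinks the number of virtual shards based on an observed record age; the thresholds and shard limits are invented assumptions, not the patented scaling policy.
      # Hypothetical sketch: grow or shrink the number of virtual shards working for a
      # single processing shard based on how old the newest unprocessed record is.
      def next_virtual_shard_count(current, observed_record_age_s,
                                   target_age_s=30, min_shards=1, max_shards=16):
          """Scale up when records are getting stale, scale down when we're ahead."""
          if observed_record_age_s > 2 * target_age_s:
              current *= 2                       # falling behind: double the workers
          elif observed_record_age_s < target_age_s / 2:
              current //= 2                      # well ahead: halve the workers
          return max(min_shards, min(max_shards, current))

      print(next_virtual_shard_count(4, observed_record_age_s=90))   # -> 8
      print(next_virtual_shard_count(4, observed_record_age_s=5))    # -> 2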
  • Patent number: 11550372
    Abstract: An information processing apparatus includes a fan that cools a first processor, a dust-proof bezel that prevents dust from entering a casing, a memory, and a second processor coupled to the memory. The second processor is configured to measure a temperature of the first processor and an air volume of an air flow which passes through the dust-proof bezel, and to compare a registered air volume to the measured air volume when the temperature matches a registered temperature included in comparison information stored in the memory. The registered air volume is included in the comparison information in association with the matched temperature, and the comparison information includes a registered temperature of the first processor and a registered air volume of an air flow generated by the fan in association with each other. The second processor determines an abnormality in the dust-proof bezel based on a comparison result.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: January 10, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Masakazu Matsubara, Kohei Kida, Hiromichi Okabe, Minoru Hirano
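    Illustrative sketch: a minimal Python check that compares a measured air volume against the air volume registered for the matching temperature and flags the bezel when airflow drops too far; the table values and tolerance are invented assumptions.
      # Hypothetical sketch: look up the air volume registered for the current CPU
      # temperature and flag the dust-proof bezel when the measured volume is far lower.
      registered = {40: 30.0, 50: 45.0, 60: 60.0}   # temperature C -> expected m^3/h

      def bezel_abnormal(measured_temp, measured_air_volume, tolerance=0.8):
          """True when airflow at a registered temperature drops below tolerance."""
          expected = registered.get(measured_temp)
          if expected is None:
              return False                      # no registered entry for this temperature
          return measured_air_volume < tolerance * expected

      print(bezel_abnormal(50, 20.0))   # True: likely clogged bezel
      print(bezel_abnormal(50, 44.0))   # False: airflow close to the registered value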
  • Patent number: 11544099
    Abstract: At an interface, an analytic model for processing data is received. The analytic model is inspected to determine a language, an action, an input type, and an output type. A virtualized execution environment is generated for an analytic engine that includes executable code to implement the analytic model for processing an input data stream.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 3, 2023
    Assignee: ModelOp, Inc.
    Inventors: Stuart Bailey, Matthew Mahowald, Maksym Kharchenko
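    Illustrative sketch: a toy Python example that inspects a model descriptor for its language, action, and I/O types and emits a container-style execution environment spec; the descriptor fields and image names are invented assumptions, not the patented system.
      # Hypothetical sketch: inspect a model descriptor for its language and I/O types,
      # then emit a container spec for an engine that can run it; all fields invented.
      def inspect_model(descriptor):
          return {"language": descriptor["language"],
                  "action": descriptor.get("action", "score"),
                  "input_type": descriptor["input"], "output_type": descriptor["output"]}

      def execution_environment(meta):
          image = {"python": "analytic-engine:python3", "r": "analytic-engine:r4"}[meta["language"]]
          return {"image": image, "entrypoint": meta["action"],
                  "stream_in": meta["input_type"], "stream_out": meta["output_type"]}

      meta = inspect_model({"language": "python", "input": "json", "output": "json"})
      print(execution_environment(meta))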
  • Patent number: 11513855
    Abstract: A method, computer program product, and computing system for allocating a first set of cores of a plurality of cores of a multicore central processing unit (CPU) for processing host input-output (IO) operations of a plurality of operations on a storage system. A second set of cores of the plurality of cores may be allocated for processing flush operations of the plurality of operations on the storage system. A third set of cores of the plurality of cores may be allocated for processing rebuild operations of the plurality of operations on the storage system. At least one of one or more host IO operations, one or more rebuild operations, and one or more flush operations may be processed, via the plurality of cores and based upon, at least in part, the allocation of the plurality of cores for processing the plurality of operations.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: November 29, 2022
    Assignee: EMC IP Holding Company, LLC
    Inventors: Jian Gao, Vamsi K. Vankamamidi, Hongpo Gao, Jamin Kang
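    Illustrative sketch: a small Python example that splits the CPU's cores into three sets and routes host IO, flush, and rebuild operations to their own sets; the split ratios are invented assumptions, not the patented allocation.
      # Hypothetical sketch: split the cores of a multicore CPU into three sets and
      # route each class of storage operation to its own set; ratios are invented.
      import os

      cores = list(range(os.cpu_count() or 8))
      core_sets = {
          "host_io": cores[: len(cores) // 2],          # half the cores for host IO
          "flush":   cores[len(cores) // 2 : 3 * len(cores) // 4],
          "rebuild": cores[3 * len(cores) // 4 :],
      }

      def cores_for(operation_type):
          """Return the cores allocated to this class of operation."""
          return core_sets[operation_type]

      for op in ("host_io", "flush", "rebuild"):
          print(op, cores_for(op))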
  • Patent number: 11500626
    Abstract: Methods for intelligent automatic merging of source control queue items are performed by systems and apparatuses. Project changes are submitted in build requests to a gated check-in build queue requiring successful builds to commit changes to a code repository according to source control. Multiple pending build requests in the build queue are intelligently and automatically merged into a single, pending merged request based on risk factor values associated with the build requests. For merged requests successfully built, files in the build requests are committed and the build requests are removed from the queue. Merged requests unsuccessfully built are divided into equal subsets based on updated risk factor values using information from the unsuccessful build. Successful builds of subsets allow for committing of files and removal from the build queue, while unsuccessful builds are further divided and processed until single build requests are processed to identify root cause errors.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: November 15, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Elad Iwanir, Gal Tamir, Mario A. Rodriguez, Chen Lahav
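    Illustrative sketch: a toy Python example that merges pending gated check-ins into one build and, on failure, splits them (ordered here by an invented risk value) until the failing request is isolated; the request records and build stub are invented assumptions, not the patented method.
      # Hypothetical sketch: merge pending gated check-ins into one build; if the
      # merged build fails, split the requests and retry until a single failing
      # request isolates the root cause.
      def process(requests, build):
          """`requests` is a list of change sets; `build` returns True on success."""
          if not requests:
              return []
          merged = [change for r in requests for change in r["changes"]]
          if build(merged):
              return [r["id"] for r in requests]          # commit every merged request
          if len(requests) == 1:
              print("root cause:", requests[0]["id"])     # single failing request found
              return []
          requests = sorted(requests, key=lambda r: r["risk"])
          mid = len(requests) // 2
          return process(requests[:mid], build) + process(requests[mid:], build)

      reqs = [{"id": "pr-1", "risk": 0.1, "changes": ["a.py"]},
              {"id": "pr-2", "risk": 0.9, "changes": ["bad.py"]},
              {"id": "pr-3", "risk": 0.2, "changes": ["c.py"]}]
      committed = process(reqs, build=lambda files: "bad.py" not in files)
      print("committed:", committed)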