Patents Examined by Bradley Teets
-
Patent number: 12265819
Abstract: Disclosed are a code updating method and apparatus, an electronic device, and a non-volatile computer-readable storage medium. The method comprises: acquiring a target character string, wherein the target character string is a common code of the source codes of at least two products, and first line spacings in the source codes of the at least two products are the same; matching the target character string with the source codes of a first product, and taking the line where a character string that matches the target character string is located in the first product as the first line of first target codes of the first product; generating a patch file for the first target codes of the first product; and respectively applying the patch file to the first target codes of the at least two products.
Type: Grant
Filed: December 27, 2022
Date of Patent: April 1, 2025
Assignee: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventor: Bo Liu
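The anchoring idea in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the patented method: the anchor string, helper names, and sample sources are all hypothetical.

```python
def find_anchor_line(source_lines, anchor):
    """Return the index of the first line containing the shared anchor string."""
    for i, line in enumerate(source_lines):
        if anchor in line:
            return i
    raise ValueError("anchor not found")

def apply_patch(source_lines, anchor, old_len, new_lines):
    """Replace old_len lines starting at the anchor line with new_lines."""
    start = find_anchor_line(source_lines, anchor)
    return source_lines[:start] + new_lines + source_lines[start + old_len:]

# Because the patch is located relative to the common anchor, the same
# patch can be applied to each product's source even when the code sits
# at different absolute line numbers.
product_a = ["import util", "def shared(): pass", "run_a()"]
product_b = ["# product B", "import util", "def shared(): pass", "run_b()"]
patch = ["def shared(): return 1"]

patched_a = apply_patch(product_a, "def shared", 1, patch)
patched_b = apply_patch(product_b, "def shared", 1, patch)
```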
-
Patent number: 12265800
Abstract: Some embodiments provide a notebook tool enhanced with minimoremap functionality. Notebooks are multicell documents with varied mime types present in the content of various cells of a given notebook, including executable source code such as scripts, non-executable content, markdown text, natural language text, photos, videos, maps, and more. The notebook tool includes a main view and a superimposed minimoremap view which is functionally coordinated with the main view, e.g., for navigation and cell selection. Notebook cell operations are commanded via the minimoremap. Cells outside the main view viewport are modified without changing the user interface main view focus. Tool-human interaction commands cause the performance of notebook operations such as cell execution, cell rearrangement, cell collapse, cells merge, cells grouping, or cell dependency analysis. Minimoremap cell images are rendered as icons, as graphics, or as scaled-down versions of previously rendered full-size main view images.
Type: Grant
Filed: November 15, 2022
Date of Patent: April 1, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Peng Lyu, Kai-Uwe Maetzel
-
Patent number: 12265802
Abstract: A platform as a service ("PaaS") automation engine receives a request to generate a digital twin of a source computing system. The source computing system is mounted for on demand access to at least part of the source computing system. Parameter information representing aspects of the source computing system and the digital twin is accessed and application programming comprised in the source computing system is replicated. The replicated application programming is migrated to a digital twin staging store associated with the digital twin and provided to the digital twin. A delta between the replicated application programming and the source computing system is identified and the application programming of the digital twin is updated. Data associated with at least part of the source computing system are provided to the digital twin and the source computing system is unmounted.
Type: Grant
Filed: April 14, 2023
Date of Patent: April 1, 2025
Assignee: Saudi Arabian Oil Company
Inventors: AlAlaa N. Tashkandi, Ali H. Khatam, Dlaim M. Qahtani, Turki I. Mohammed
-
Patent number: 12254314
Abstract: A computing platform may configure a dependency knowledge graph indicating file dependencies for mainframe applications, and an error knowledge graph indicating errors and corresponding solutions for the mainframe applications. The computing platform may receive mainframe source code. The computing platform may analyze, using the knowledge graphs, the mainframe source code to identify potential errors and corresponding solutions. Based on identifying an error in the mainframe source code, the computing platform may cause the mainframe source code to be updated according to the corresponding solution. The computing platform may analyze, using the dependency knowledge graph and the error knowledge graph, the updated mainframe source code to identify remaining errors.
Type: Grant
Filed: April 13, 2023
Date of Patent: March 18, 2025
Assignee: Bank of America Corporation
Inventors: John Iruvanti, Komuraiah Kannaveni, Panduranga Dongle
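The identify-fix-reanalyze loop in this abstract can be illustrated with a deliberately reduced sketch: the error knowledge graph shrinks to a pattern-to-solution mapping, and the error/fix pairs shown are invented, not taken from the patent.

```python
# Hypothetical error-pattern -> solution mapping standing in for the
# error knowledge graph.
error_graph = {
    "MOVE X TO": "MOVE X-FIELD TO",
    "GOTO ": "GO TO ",
}

def find_errors(source, graph):
    """Analyze the source and return the known error patterns it contains."""
    return [pat for pat in graph if pat in source]

def fix_errors(source, graph):
    """Apply each corresponding solution, then re-analyze for remaining errors."""
    while True:
        errors = find_errors(source, graph)
        if not errors:
            return source
        for pat in errors:
            source = source.replace(pat, graph[pat])

code = "GOTO PARA-1. MOVE X TO Y."
fixed = fix_errors(code, error_graph)
```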
-
Patent number: 12254334
Abstract: A system may include a memory and a processor in communication with the memory. The processor may be configured to perform operations. The operations may include receiving data and generating a contextual execution dependency graph with said data. The operations may include producing agents with said data and calculating an agent sequence for said agents based at least in part on said contextual execution dependency graph. The operations may include executing an automation script using said agent sequence and said contextual execution dependency graph.
Type: Grant
Filed: May 10, 2022
Date of Patent: March 18, 2025
Assignee: International Business Machines Corporation
Inventors: Sampath Dechu, Kushal Mukherjee, Neelamadhav Gantayat, Naveen Eravimangalath Purushothaman
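Deriving an agent sequence from a dependency graph is, at minimum, a topological sort. A minimal sketch under that assumption (the agent names and graph are illustrative, not from the patent):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each agent maps to the agents it depends on.
dependencies = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}

# Topological sort yields a sequence in which every agent runs only after
# its dependencies have run.
sequence = list(TopologicalSorter(dependencies).static_order())

def run_automation(sequence, actions):
    """Execute each agent's action in dependency order."""
    return [actions[agent]() for agent in sequence]
```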
-
Patent number: 12254343
Abstract: A controller may control input to, output from, and execution of a component of a data-processing pipeline via an interface. The interface facilitates replacing the component with a different and/or updated component and/or changing a type of controller that controls the component via the interface. For example, the different types of controllers may facilitate communication between components controlled by other controllers (and/or that aren't controlled by a controller), controllers that generate reproducibility data so that component behavior may be reproduced, controllers that reproduce component behavior, and/or controllers that tune performance of the data-processing pipeline, e.g., by varying input, output, and execution of respective component(s).
Type: Grant
Filed: November 21, 2018
Date of Patent: March 18, 2025
Assignee: Zoox, Inc.
Inventors: Jacob Lee Askeland, Ryan Martin Cahoon, Christopher Yeates Brown
-
Patent number: 12236263
Abstract: In some aspects, finer grained parallelism is achieved by segmenting programmatic workloads into smaller discretized portions, where a first element can be indicative both of a configuration or program to be executed, and a first data set to be used in such execution, while a second element can be indicative of a second data element or group. The discretized portions can cause programs to execute on distributed processors. Approaches to selecting processors, and allocating local memory associated with those processors, are disclosed. In one example, discretized portions that share a program have an anti-affinity to cause dispersion, for initial execution assignment. Flags, such as programmer and compiler generated flags, can be used in determining such allocations. Workloads can be grouped according to compatibility of memory usage requirements.
Type: Grant
Filed: August 12, 2016
Date of Patent: February 25, 2025
Assignee: Imagination Technologies Limited
Inventors: Stephen John Clohset, James Alexander McCombe, Luke Tilman Peterson
-
Patent number: 12217083
Abstract: An HTM-assisted Combining Framework (HCF) may enable multiple (combiner and non-combiner) threads to access a shared data structure concurrently using hardware transactional memory (HTM). As long as a combiner executes in a hardware transaction and ensures that the lock associated with the data structure is available, it may execute concurrently with other threads operating on the data structure. HCF may include attempting to apply operations to a concurrent data structure utilizing HTM and if the HTM attempt fails, utilizing flat combining within HTM transactions. Publication lists may be used to announce operations to be applied to a concurrent data structure. A combiner thread may select a subset of the operations in the publication list and attempt to apply the selected operations using HTM. If the thread fails in these HTM attempts, it may acquire a lock associated with the data structure and apply the selected operations without HTM.
Type: Grant
Filed: May 5, 2021
Date of Patent: February 4, 2025
Assignee: Oracle International Corporation
Inventors: Alex Kogan, Yosef Lev
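The flat-combining fallback path mentioned in the abstract can be sketched without the hardware-transactional part (which Python cannot express): threads publish operations to a shared list, and whichever thread acquires the lock becomes the combiner and applies everyone's pending operations. The class and its fields are illustrative only.

```python
import threading

class FlatCombiningCounter:
    def __init__(self):
        self.value = 0
        self.publication_list = []   # pending (delta, done-event) records
        self.lock = threading.Lock()

    def add(self, delta):
        # Announce the operation on the publication list...
        done = threading.Event()
        self.publication_list.append((delta, done))
        # ...then spin until some combiner (possibly us) applies it.
        while not done.is_set():
            if self.lock.acquire(blocking=False):
                try:
                    # Combiner: apply all published operations, not just ours.
                    while self.publication_list:
                        d, ev = self.publication_list.pop()
                        self.value += d
                        ev.set()
                finally:
                    self.lock.release()

counter = FlatCombiningCounter()
threads = [threading.Thread(target=counter.add, args=(1,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```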
-
Patent number: 12204942
Abstract: An apparatus may include first and second processors. A first user may be bound to the first processor such that processes of the first user execute on the first processor and do not execute on the second processor. A second user may be bound to the second processor such that processes of the second user execute on the second processor and do not execute on the first processor.
Type: Grant
Filed: July 6, 2021
Date of Patent: January 21, 2025
Assignee: CFPH, LLC
Inventor: Jacob Loveless
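One way such per-user processor binding could be realized on Linux is CPU affinity: each user's processes are pinned to a disjoint CPU set. This is a hypothetical sketch, not the patented apparatus; the user-to-CPU mapping is invented.

```python
import os

# Hypothetical assignment: disjoint CPU sets, so one user's processes
# never execute on the other user's processor.
USER_CPUS = {"alice": {0}, "bob": {1}}

def cpus_for(user):
    return USER_CPUS[user]

def bind_current_process(user):
    """Pin the calling process to the user's CPUs (Linux only)."""
    os.sched_setaffinity(0, cpus_for(user))  # 0 = the current process
```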
-
Patent number: 12190112
Abstract: Techniques are disclosed for eliding load and store barriers while maintaining garbage collection invariants. Embodiments described herein include techniques for identifying an instruction, such as a safepoint poll, that checks whether to pause a thread between execution of a dominant and dominated access to the same data field. If a poll instruction is identified between the two data accesses, then a pointer for the data field may be recorded in an entry associated with the poll instruction. When the thread is paused to execute a garbage collection operation, the recorded information may be used to update values associated with the data field in memory such that the dominated access may be executed without any load or store barriers.
Type: Grant
Filed: January 24, 2022
Date of Patent: January 7, 2025
Assignee: Oracle International Corporation
Inventors: Erik Österlund, Nils Erik Eliasson
-
Patent number: 12175298
Abstract: A method and apparatus are disclosed of monitoring a number of virtual machines operating in an enterprise network. One example method of operation may include identifying a number of virtual machines currently operating in an enterprise network and determining performance metrics for each of the virtual machines. The method may also include identifying at least one candidate virtual machine from the virtual machines to optimize its active application load and modifying the candidate virtual machine to change its active application load.
Type: Grant
Filed: November 28, 2021
Date of Patent: December 24, 2024
Assignee: Google LLC
Inventor: John Michael Suit
-
Patent number: 12174722
Abstract: An aspect of the present disclosure facilitates characterizing the operation of software applications having a large number of components. In one embodiment, a digital processing system receives a first data indicating invocation types and corresponding invocation counts at an entry component for multiple block durations, where the entry component causes execution of internal components of the software application. The system also receives a second data indicating values for a processing metric at the internal components for the same block durations. The system then constructs, for each internal component, a corresponding component model correlating the values for the processing metric at the internal component indicated in the second data to the invocation types and invocation counts of the entry component indicated in the first data. The component models can aid in the performance management of the software application.
Type: Grant
Filed: September 8, 2020
Date of Patent: December 24, 2024
Assignee: APPNOMIC SYSTEMS PRIVATE LIMITED
Inventors: Padmanabhan Desikachari, Pranav Kumar Jha
-
Patent number: 12155210
Abstract: Disclosed techniques relate to orchestrating power consumption reductions across a number of hosts. A number of response levels may be utilized, each having an association to a corresponding set of reduction actions. The impact to customers, hosts, and/or workloads can be computed at run time based on current and/or predicted conditions and workloads, and a particular response level can be selected based on the computed impact. These techniques enable a sufficient, but least impactful response to be employed.
Type: Grant
Filed: June 21, 2023
Date of Patent: November 26, 2024
Assignee: Oracle International Corporation
Inventors: Roy Mehdi Zeighami, Sumeet Kochar, Jonathan Luke Herman, Mark Lee Huang
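The "sufficient, but least impactful" selection can be sketched as picking the lowest-impact response level whose reduction still meets the required target. The level names, wattage figures, and impact scores below are invented for illustration, not taken from the patent.

```python
RESPONSE_LEVELS = [
    # (name, watts saved, customer-impact score), least to most impactful
    ("throttle-background-jobs", 50, 1),
    ("cap-cpu-frequency", 120, 2),
    ("pause-low-priority-hosts", 300, 3),
    ("shed-workloads", 600, 4),
]

def select_level(required_reduction_watts):
    """Return the least impactful level whose reduction is sufficient."""
    for name, saved, impact in sorted(RESPONSE_LEVELS, key=lambda l: l[2]):
        if saved >= required_reduction_watts:
            return name
    # No single level suffices: fall back to the strongest response.
    return RESPONSE_LEVELS[-1][0]

chosen = select_level(100)
```

In a real orchestrator the `saved` and `impact` values would be computed at run time from current and predicted conditions rather than fixed in a table.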
-
Patent number: 12147693
Abstract: A data management and storage (DMS) cluster of peer DMS nodes manages data of a tenant of a multi-tenant compute infrastructure. The compute infrastructure includes an envoy connecting the DMS cluster to virtual machines of the tenant executing on the compute infrastructure. The envoy provides the DMS cluster with access to the virtual tenant network and the virtual machines of the tenant connected via the virtual tenant network for DMS services such as data fetch jobs to generate snapshots of the virtual machines. The envoy sends the snapshot from the virtual machine to a peer DMS node via the connection for storage within the DMS cluster. The envoy provides the DMS cluster with secure access to authorized tenants of the compute infrastructure while maintaining data isolation of tenants within the compute infrastructure.
Type: Grant
Filed: August 22, 2022
Date of Patent: November 19, 2024
Assignee: Rubrik, Inc.
Inventors: Abdul Jabbar Abdul Rasheed, Soham Mazumdar, Hardik Vohra, Mudit Malpani
-
Patent number: 12113361
Abstract: Disclosed techniques relate to orchestrating power consumption reductions across a number of hosts. A number of response levels may be utilized, each having an association to a corresponding set of reduction actions. The impact to customers, hosts, and/or workloads can be computed at run time based on current and/or predicted conditions and workloads, and a particular response level can be selected based on the computed impact. These techniques enable a sufficient, but least impactful response to be employed.
Type: Grant
Filed: June 21, 2023
Date of Patent: October 8, 2024
Assignee: Oracle International Corporation
Inventors: Roy Mehdi Zeighami, Sumeet Kochar, Jonathan Luke Herman, Mark Lee Huang
-
Patent number: 12079666
Abstract: Systems and methods for inter-cluster deployment of compute services using federated operator components are generally described. In some examples, a first request to deploy a compute service may be received by a federated operator component. In various examples, the federated operator component may send a second request to provision a first compute resource for the compute service to a first cluster of compute nodes. In various examples, the first cluster of compute nodes may be associated with a first hierarchical level of a computing network. In some examples, the federated operator component may send a third request to provision a second compute resource for the compute service to a second cluster of compute nodes. The second cluster of compute nodes may be associated with a second hierarchical level of the computing network that is different from the first hierarchical level.
Type: Grant
Filed: April 10, 2023
Date of Patent: September 3, 2024
Assignee: Red Hat, Inc.
Inventor: Huamin Chen
-
Patent number: 12068979
Abstract: A method, computer program product, and computer system for dividing a physical Ethernet port is provided. The method may include dividing, by a computing device, a first physical Ethernet port of a plurality of physical Ethernet ports into a plurality of partitions. A first partition of the plurality of partitions for the first Ethernet port may be assigned to a N-virtual distributed switch. A second partition of the plurality of partitions for the first Ethernet port may be assigned with a plurality of functions. Ethernet packets may be switched between the plurality of functions in the second partition.
Type: Grant
Filed: May 1, 2020
Date of Patent: August 20, 2024
Assignee: EMC IP Holding Company, LLC
Inventors: Mukesh Gupta, Daniel E. Cummins
-
Patent number: 12045669
Abstract: A method for execution of a synchronous operation in an asynchronous operational environment includes receiving, by a processor, a first operation from program code executing within the asynchronous operational environment with the program code being run on an execution thread and a communication thread. The method also includes determining, by the processor, if the first operation is a synchronous operation. The method further includes that if the first operation is a synchronous operation, sending a request from the execution thread to the communication thread to perform the first operation and blocking execution of a subsequent operation until a response to the request from the communication thread for the first operation has been completed.
Type: Grant
Filed: December 18, 2020
Date of Patent: July 23, 2024
Assignee: Micro Focus LLC
Inventors: Boris Kozorovitzky, Kobi Gana, Marina Gofman
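The blocking hand-off described in this abstract can be sketched with two threads and a pair of queues: the execution thread posts a request to the communication thread and blocks on the reply, so no subsequent operation runs in between. The queue protocol and sentinel here are illustrative assumptions, not the patented design.

```python
import queue
import threading

requests = queue.Queue()

def communication_thread():
    """Serve requests; each reply unblocks the waiting execution thread."""
    while True:
        item = requests.get()
        if item == "stop":
            break
        op, args, reply = item
        reply.put(op(*args))          # perform the operation, send the response

def run_synchronously(op, *args):
    """Called on the execution thread: block until the response arrives."""
    reply = queue.Queue(maxsize=1)
    requests.put((op, args, reply))
    return reply.get()                # blocks subsequent operations until done

worker = threading.Thread(target=communication_thread, daemon=True)
worker.start()
result = run_synchronously(lambda a, b: a + b, 2, 3)
requests.put("stop")
worker.join()
```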
-
Patent number: 12008394
Abstract: A method comprises receiving from a distributed app (dApp), a shard creation transaction in a blockchain block of a blockchain, the block comprising multiple shards; collecting, with a join block in the blockchain, transactions, the join block adjacent the blockchain block; encapsulating the shard creation transaction; applying the block including the shard creation transaction to yield a new shard in the block; and broadcasting the block.
Type: Grant
Filed: February 21, 2023
Date of Patent: June 11, 2024
Assignee: EZBLOCK LTD.
Inventor: Rundong Huang
-
Patent number: 11995479
Abstract: A computer-implemented method according to one aspect includes determining and storing characteristics of a plurality of cloud vendors; dividing a workload into a plurality of logical stages; determining characteristics of each of the plurality of logical stages; and for each of the plurality of logical stages, assigning the logical stage to one of the plurality of cloud vendors, based on a comparison of the characteristics of the plurality of cloud vendors to the characteristics of the logical stage. Data migration between the cloud vendors is performed during an implementation of the workload to ensure data is located at necessary cloud vendors during the corresponding tasks of the workload.
Type: Grant
Filed: January 2, 2020
Date of Patent: May 28, 2024
Assignee: International Business Machines Corporation
Inventors: Abhishek Jain, Sasikanth Eda, Dileep Dixith, Sandeep Ramesh Patil, Anbazhagan Mani
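The stage-to-vendor comparison step can be sketched as a simple scoring match between stage requirements and vendor characteristics. The vendor names, characteristic keys, and weights below are hypothetical, not from the patent.

```python
# Hypothetical vendor characteristics: higher means stronger capability.
VENDORS = {
    "vendor-a": {"gpu": 3, "storage": 1, "network": 2},
    "vendor-b": {"gpu": 1, "storage": 3, "network": 2},
}

def match_score(vendor_traits, stage_needs):
    """Compare a vendor's characteristics against a stage's weighted needs."""
    return sum(vendor_traits.get(k, 0) * w for k, w in stage_needs.items())

def assign_stages(stages, vendors):
    """Assign each logical stage to the best-matching cloud vendor."""
    return {
        name: max(vendors, key=lambda v: match_score(vendors[v], needs))
        for name, needs in stages.items()
    }

stages = {
    "train-model": {"gpu": 5},
    "archive-results": {"storage": 5},
}
assignment = assign_stages(stages, VENDORS)
```

When consecutive stages land on different vendors, the data migration step described above would move the stage's inputs to the assigned vendor before that stage runs.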