Patents Examined by Dong U Kim
  • Patent number: 12135994
    Abstract: An electronic device includes at least one processor, including a first processor and a second processor separate from the first processor, and a memory electrically connected to the at least one processor and storing instructions, wherein the at least one processor is further configured to execute the instructions to assign foreground tasks to a boosting foreground control group and a non-boosting foreground control group in response to a user input, based on completion of booting of the electronic device, schedule at least one task assigned to the boosting foreground control group for the first processor, and schedule at least one task assigned to the non-boosting foreground control group for the second processor, and performance of the second processor may be lower than performance of the first processor.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: November 5, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kiljae Kim, Byungsoo Kwon, Jaeho Kim, Daehyun Cho
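The split described in this abstract can be pictured with a small sketch: boosting-group tasks land on the faster processor, non-boosting foreground tasks on the slower one. The two-CPU model, group names, and assignment policy below are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch: foreground tasks are split into a "boosting" group pinned
# to a fast core and a "non-boosting" group pinned to a slower core.

FAST_CPU = 0   # higher-performance processor
SLOW_CPU = 1   # lower-performance processor

class ForegroundScheduler:
    def __init__(self):
        self.groups = {"boost_fg": [], "nonboost_fg": []}

    def assign(self, task, user_visible):
        # Tasks the user is directly interacting with go to the boosting group.
        group = "boost_fg" if user_visible else "nonboost_fg"
        self.groups[group].append(task)

    def schedule(self):
        # Boosting group runs on the fast CPU, non-boosting group on the slow CPU.
        placement = {}
        for task in self.groups["boost_fg"]:
            placement[task] = FAST_CPU
        for task in self.groups["nonboost_fg"]:
            placement[task] = SLOW_CPU
        return placement

sched = ForegroundScheduler()
sched.assign("ui_render", user_visible=True)
sched.assign("thumbnail_refresh", user_visible=False)
print(sched.schedule())
```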
  • Patent number: 12124865
    Abstract: Methods and apparatus for providing page migration of pages among tiered memories identify frequently accessed memory pages in each memory tier and generate page hotness ranking information indicating how frequently memory pages are being accessed. Methods and apparatus provide the page hotness ranking information to an operating system or hypervisor, depending on which is used in the system. The operating system or hypervisor issues a page move command to a hardware data mover based on the page hotness ranking information, and the hardware data mover moves a memory page to a different memory tier in response to the page move command.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 22, 2024
    Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
    Inventors: Sean T. White, Philip Ng
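A toy sketch of the hotness-driven promotion loop, assuming invented page identifiers, tier names, a fixed top-N threshold, and a `move_page` stub in place of the hardware data mover.

```python
# Rank pages by access frequency and ask a data-mover stub to promote the
# hottest pages from the slow tier to the fast tier.

access_counts = {            # page id -> access count observed this interval
    "p1": 950, "p2": 12, "p3": 480, "p4": 3,
}
page_tier = {"p1": "slow", "p2": "fast", "p3": "slow", "p4": "fast"}

def move_page(page, dst_tier):
    # Stand-in for a page move command issued to a hardware data mover.
    print(f"moving {page} to {dst_tier} tier")
    page_tier[page] = dst_tier

# Build a hotness ranking and promote the hottest pages that live in the slow tier.
ranking = sorted(access_counts, key=access_counts.get, reverse=True)
HOT_TOP_N = 2
for page in ranking[:HOT_TOP_N]:
    if page_tier[page] == "slow":
        move_page(page, "fast")
```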
  • Patent number: 12118400
    Abstract: A computer-implemented method according to one embodiment includes identifying a machine learning pipeline and a plurality of training data batches; creating a plurality of tasks, based on the machine learning pipeline; and determining an order in which the plurality of tasks is executed, utilizing a resource usage-aware approach.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: October 15, 2024
    Assignee: International Business Machines Corporation
    Inventors: Martin Hirzel, Kiran A. Kate, Avraham Ever Shinnar
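One way to picture a resource usage-aware ordering is a greedy pick among ready tasks; the task graph, memory estimates, and greedy policy below are assumptions for illustration, not the claimed method.

```python
# Among tasks whose dependencies are satisfied, always run the one with the
# smallest estimated memory footprint.

tasks = {
    # name: (estimated peak memory in GB, set of prerequisite tasks)
    "load_batch":  (2, set()),
    "featurize":   (4, {"load_batch"}),
    "train_small": (3, {"featurize"}),
    "train_large": (8, {"featurize"}),
}

def resource_aware_order(tasks):
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t, (_, deps) in tasks.items()
                 if t not in done and deps <= done]
        # Greedy choice: cheapest ready task first.
        pick = min(ready, key=lambda t: tasks[t][0])
        order.append(pick)
        done.add(pick)
    return order

print(resource_aware_order(tasks))
```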
  • Patent number: 12112191
    Abstract: The present disclosure relates to devices and methods for creating one or more proxy devices in a guest device mirroring the devices hosted by a host device. The proxy devices may provide full device access functionality to applications running in the guest device. The devices and methods may load a proxy driver inside the guest device, which communicates with the host device. When applications running on the guest device interact with the proxy devices, the proxy driver communicates the interaction to the host device, which communicates with the device driver managing the device. The devices and methods allow applications running on the host and applications running on the guest to share access to the same device.
    Type: Grant
    Filed: October 26, 2023
    Date of Patent: October 8, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Alessandro Domenico Scarpantoni, Shyamal Kaushik Varma, Ajay Preetham Barboza, Jason Christopher Knichel, Adam Joseph Lenart, Samuel David Adams
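A minimal sketch of the proxy-device flow, assuming invented class names and a simplified request format; the real driver stack is far more involved.

```python
# A guest-side proxy driver forwards device requests to the host, which
# dispatches them to the real device driver.

class HostDeviceDriver:
    def handle(self, request):
        return f"host driver handled {request!r}"

class HostBroker:
    # Host-side endpoint that owns the real driver and shares it with guests.
    def __init__(self, driver):
        self.driver = driver
    def dispatch(self, request):
        return self.driver.handle(request)

class GuestProxyDriver:
    # Loaded inside the guest; looks like a local device to guest applications.
    def __init__(self, broker):
        self.broker = broker
    def request(self, op, **args):
        return self.broker.dispatch({"op": op, **args})

proxy = GuestProxyDriver(HostBroker(HostDeviceDriver()))
print(proxy.request("read", offset=0, length=64))
```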
  • Patent number: 12112185
    Abstract: According to an embodiment, a communication apparatus includes a task and a notification unit. The task stores, in a storage unit, notification information to be notified to a virtual machine as a notification destination via a virtual machine monitor after execution of predetermined processing. The notification unit collectively notifies the virtual machine monitor of a plurality of pieces of notification information stored in the storage unit.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: October 8, 2024
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yuta Kobayashi, Takahiro Yamaura
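The batched-notification idea can be sketched as a buffer that is flushed to the virtual machine monitor in one call instead of one exit per event; the `vmm_notify_batch` stub and record format are assumptions.

```python
pending = []  # storage unit holding notification records

def vmm_notify_batch(batch):
    # Stand-in for a single VMM notification covering many records.
    print(f"VMM notified of {len(batch)} events: {batch}")

def task_finished(task_id, result):
    # A task records its notification instead of signalling the VMM directly.
    pending.append({"task": task_id, "result": result})

def flush_notifications():
    # Notification unit: collectively notify the VMM, then clear the buffer.
    if pending:
        vmm_notify_batch(list(pending))
        pending.clear()

task_finished("tx-queue-0", "ok")
task_finished("tx-queue-1", "ok")
flush_notifications()
```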
  • Patent number: 12112155
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: examining target application container configuration data to identify one or more target container base images referenced in the target application container configuration; subjecting script data associated with the one or more target container base images to text based processing for evaluation of security risk associated with the one or more container base images, the script data obtained from at least one candidate hosting computing environment; and selecting a hosting computing environment from the at least one candidate hosting computing environment for hosting the target application container, the selecting in dependence on the text based processing.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: October 8, 2024
    Assignee: Kyndryl, Inc.
    Inventors: Igor Monteiro Vieira, Marcelo Mota Manhaes, Thiago Bianchi, Suellen Caroline Da Silva
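A rough sketch of text-based risk scoring over base-image script data, assuming a made-up pattern list, scoring rule, and candidate environments.

```python
# Score the script text associated with a container base image by counting
# risky patterns, then pick the candidate hosting environment whose scripts
# score lowest.

RISKY_PATTERNS = ["curl | sh", "chmod 777", "ADD http://", "--privileged"]

def risk_score(script_text):
    return sum(script_text.count(p) for p in RISKY_PATTERNS)

candidate_envs = {
    "env-a": "FROM base\nRUN curl | sh\nRUN chmod 777 /opt\n",
    "env-b": "FROM base\nRUN apt-get install -y openssl\n",
}

scores = {env: risk_score(text) for env, text in candidate_envs.items()}
chosen = min(scores, key=scores.get)
print(scores, "->", chosen)
```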
  • Patent number: 12113678
    Abstract: Some embodiments provide various methods for offloading operations in an O-RAN (Open Radio Access Network) onto control plane (CP) or edge applications that execute on host computers with hardware accelerators in software defined datacenters (SDDCs). At the CP or edge application operating on a machine executing on a host computer with a hardware accelerator, the method of some embodiments receives data, from an O-RAN E2 unit, to perform an operation. The method uses a driver of the machine to communicate directly with the hardware accelerator to direct the hardware accelerator to perform a set of computations associated with the operation. This driver allows the communication with the hardware accelerator to bypass an intervening set of drivers executing on the host computer between the machine's driver and the hardware accelerator. Through this driver, the application in some embodiments receives the computation results, which it then provides to one or more O-RAN components (e.g., …).
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: October 8, 2024
    Assignee: VMware LLC
    Inventors: Giridhar Subramani Jayavelu, Aravind Srinivasan, Amit Singh
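The bypass path can be pictured as an application-side driver submitting work to the accelerator directly; the classes below are hypothetical and omit the real O-RAN plumbing and the intervening host drivers that the passthrough path avoids.

```python
class Accelerator:
    def compute(self, payload):
        return f"accelerated({payload})"

class PassthroughDriver:
    # Driver inside the application's machine that talks to the device directly,
    # rather than through a chain of host-side drivers.
    def __init__(self, accel):
        self.accel = accel
    def submit(self, payload):
        return self.accel.compute(payload)

accel = Accelerator()
result = PassthroughDriver(accel).submit("e2-unit-data")
print(result)  # computation result returned to the requesting component
```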
  • Patent number: 12099871
    Abstract: A method of batch and scheduler migration assesses a batch job, scans its scheduling mechanism and components, ascertains a quantum change for migrating the batch job to a target batch service, and forecasts an assessment statistic that provides at least one functional readiness and a timeline to complete the migration of the batch job. The method generates a transformed batch job structure by breaking the batch job according to the target batch service while retaining the scheduling mechanism. Further, it updates containerized batch service components of the target batch service as per the forecasted assessment statistic and the transformed batch job structure, and migrates the batch job to the target batch service by re-platforming the updated containerized batch service components.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: September 24, 2024
    Assignee: HEXAWARE TECHNOLOGIES LIMITED
    Inventors: Chirodip Pal, Natarajan Ganapathi, Meenakshisundaram Padmanaban
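A toy sketch of the assessment step only, assuming an invented component model and effort weights; the real assessment statistic is not described here.

```python
# Scan a batch job's components, estimate how much must change for the target
# batch service, and forecast a rough timeline.

EFFORT_DAYS = {"scheduler": 2, "script": 1, "dependency": 0.5}

batch_job = {
    "scheduler": ["cron-trigger"],
    "script": ["extract.sh", "load.sh"],
    "dependency": ["db-link", "sftp-drop"],
}

def assess(job):
    per_kind = {kind: len(items) * EFFORT_DAYS[kind] for kind, items in job.items()}
    return {"quantum_change": per_kind, "timeline_days": sum(per_kind.values())}

print(assess(batch_job))
```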
  • Patent number: 12099884
    Abstract: Provided are a cloud management method and a cloud management apparatus for rapidly scheduling arrangements of service resources by considering equal distribution of resources in a large-scale container environment of a distributed collaboration type. The cloud management method according to an embodiment includes: receiving, by a cloud management apparatus, a resource allocation request for a specific service; monitoring, by the cloud management apparatus, available resource current statuses of a plurality of clusters, and selecting clusters that are able to be allocated the requested resource; calculating, by the cloud management apparatus, a suitability score with respect to each of the selected clusters; and selecting, by the cloud management apparatus, a cluster that is most suitable to the requested resource for executing a requested service from among the selected clusters, based on the respective suitability scores.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: September 24, 2024
    Assignee: Korea Electronics Technology Institute
    Inventors: Jae Hoon An, Young Hwan Kim
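The filter-then-score selection can be sketched as follows; the suitability formula here (favoring balanced leftover CPU and memory) is an assumption, not the claimed scoring.

```python
clusters = {
    "c1": {"cpu_free": 16, "mem_free": 64},
    "c2": {"cpu_free": 4,  "mem_free": 128},
    "c3": {"cpu_free": 32, "mem_free": 32},
}
request = {"cpu": 8, "mem": 16}

def suitability(cluster):
    cpu_left = cluster["cpu_free"] - request["cpu"]
    mem_left = cluster["mem_free"] - request["mem"]
    # Prefer placements that leave resources and keep them evenly distributed.
    return min(cpu_left, mem_left) - abs(cpu_left - mem_left) * 0.1

# Keep only clusters that can actually be allocated the requested resource.
feasible = {n: c for n, c in clusters.items()
            if c["cpu_free"] >= request["cpu"] and c["mem_free"] >= request["mem"]}
best = max(feasible, key=lambda n: suitability(feasible[n]))
print(best)
```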
  • Patent number: 12099862
    Abstract: Example methods are provided to identify unused memory regions in pages that are allocated for storing executable code. One or more of the unused memory regions are usable as a secure location to store confidential information shared between a hypervisor on the host and a guest (such as a guest virtual computing instance) that runs on the host. The one or more unused memory regions may also be used to store executable code (such as valid executable code of antivirus software or other security program) that has been prevented/delayed in its execution by malicious code that has occupied the pages, thereby providing the executable code with sufficient memory resources to enable the executable code to at least partially complete execution.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: September 24, 2024
    Assignee: VMware LLC
    Inventors: Goresh Musalay, Sachin Shinde, Zubraj Singha, Tanay Ganguly, Kashish Bhatia
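Locating unused regions inside a code page can be pictured as scanning for long runs of padding bytes; the padding values and minimum run length below are assumptions for illustration.

```python
# Scan a page image for runs of padding bytes (0x00/0xCC here) long enough to
# hold shared data or relocated executable code.

PAGE = bytes([0x90] * 100 + [0x00] * 48 + [0x90] * 80 + [0xCC] * 64)
PADDING = {0x00, 0xCC}
MIN_RUN = 32  # bytes needed for the hidden/shared data

def unused_regions(page):
    regions, start = [], None
    for i, b in enumerate(page):
        if b in PADDING:
            start = i if start is None else start
        else:
            if start is not None and i - start >= MIN_RUN:
                regions.append((start, i))
            start = None
    if start is not None and len(page) - start >= MIN_RUN:
        regions.append((start, len(page)))
    return regions

print(unused_regions(PAGE))  # offsets of candidate unused regions
```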
  • Patent number: 12093743
    Abstract: Embodiments operate a machine learning (“ML”) notebook in a cloud infrastructure executing a plurality of ML notebooks. Embodiments receive a plurality of previously executed ML notebook feature engineering commands from the plurality of ML notebooks. Embodiments store the plurality of previously executed ML notebook feature engineering commands, including a relationship between the feature engineering commands. Embodiments mine the stored commands to generate feature engineering sets of feature engineering commands, the feature engineering sets comprising feature engineering commands that are frequently used together and an order of use of the feature engineering commands. Embodiments then receive a context of a current feature engineering command and data used in the context and recommend a next feature engineering command to be executed after the current feature engineering command.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: September 17, 2024
    Assignee: Oracle International Corporation
    Inventors: Hari Bhaskar Sankaranarayanan, Viral Rathod
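A small sketch of mining command sequences and recommending a likely next command, assuming made-up notebook histories and a simple bigram count standing in for the described mining and context handling.

```python
from collections import Counter, defaultdict

# Previously executed feature engineering command sequences from other notebooks.
histories = [
    ["fillna", "one_hot_encode", "scale", "train"],
    ["fillna", "scale", "train"],
    ["fillna", "one_hot_encode", "train"],
]

# Count which command usually follows which.
followers = defaultdict(Counter)
for seq in histories:
    for cur, nxt in zip(seq, seq[1:]):
        followers[cur][nxt] += 1

def recommend_next(current_command):
    ranked = followers[current_command].most_common(1)
    return ranked[0][0] if ranked else None

print(recommend_next("fillna"))  # -> 'one_hot_encode'
```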
  • Patent number: 12086624
    Abstract: Live mounting a virtual machine (VM) causes the VM to run off a backup copy or snapshot previously taken of a “live” production VM. The live-mounted VM is generally intended for temporary use such as to validate the integrity and contents of the backup copy for disaster recovery validation, or to access some contents of the backup copy from the live-mounted VM without restoring all backed up files. These uses contemplate that changes occurring during live mount are not preserved after the live-mounted VM expires or is taken down. Thus, live mounting a VM is not a restore operation and usually does not involve access to every block of data in the backup copy. However, live mounting provides live VM service in the cloud sooner than waiting for all of the backup copy/snapshot to be restored.
    Type: Grant
    Filed: July 12, 2023
    Date of Patent: September 10, 2024
    Assignee: Commvault Systems, Inc.
    Inventors: Sanjay Kumar, Sumedh Pramod Degaonkar
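The run-off-the-backup behavior can be pictured as a read-only backup image plus a discardable write overlay; this is a conceptual sketch only, not the assignee's implementation.

```python
class LiveMountDisk:
    def __init__(self, backup_blocks):
        self.backup = backup_blocks   # read-only backup copy / snapshot
        self.overlay = {}             # ephemeral writes made while live-mounted

    def read(self, block):
        return self.overlay.get(block, self.backup.get(block, b""))

    def write(self, block, data):
        self.overlay[block] = data    # never written back to the backup copy

    def teardown(self):
        self.overlay.clear()          # changes are not preserved after expiry

disk = LiveMountDisk({0: b"boot", 1: b"data"})
disk.write(1, b"scratch")
print(disk.read(1))   # b'scratch' while mounted
disk.teardown()
print(disk.read(1))   # b'data' again from the backup copy
```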
  • Patent number: 12073218
    Abstract: A system and method for the storage, within one or more virtual execution context registers, of tracing information indicative of process/code flow within a processor system. This stored information can include a time stamp, information indicative of where the instruction pointer of the system was pointing prior to any process discontinuity, information indicative of where the instruction pointer of the system was pointing after any process discontinuity, and the number of times a specific instruction or sub-process is executed during a particular process. The data collected and stored can be utilized within such a system for the identification and analysis of processing hot-spots.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: August 27, 2024
    Assignee: Unisys Corporation
    Inventors: Andrew Ward Beale, David Strong
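A hypothetical sketch of the trace records just described: a timestamp, the instruction pointer before and after a discontinuity, and a per-target execution count later used to spot hot spots. The addresses and record layout are invented.

```python
import time
from collections import Counter

trace_records = []         # stand-in for virtual execution context registers
exec_counts = Counter()    # how often each branch target is reached

def record_discontinuity(ip_before, ip_after):
    trace_records.append({"ts": time.monotonic(),
                          "ip_before": ip_before,
                          "ip_after": ip_after})
    exec_counts[ip_after] += 1

for _ in range(3):
    record_discontinuity(0x4010, 0x7000)   # e.g. repeated call into a helper
record_discontinuity(0x4020, 0x8000)

hot_spot, hits = exec_counts.most_common(1)[0]
print(hex(hot_spot), hits)   # hottest target and its execution count
```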
  • Patent number: 12073253
    Abstract: Bitmaps for managing computing resources are described. Example bitmaps described in this application use less memory space by varying the sizes of the nodes in the bitmap's tree structure, and/or by limiting the number of nodes in the bitmap's leaf layer. Other example bitmaps described in this application reduce the time needed to traverse the bitmap by tailoring the search direction according to the bitmap's configuration.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: August 27, 2024
    Assignee: Amazon Technologies, Inc.
    Inventor: Meher Aditya Kumar Addepalli
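A simplified two-level bitmap sketch: a summary word marks which leaf words are already full so a search can skip them entirely. The word sizes and layout are assumptions, not the patented structure.

```python
LEAF_BITS = 8
leaves = [0b11111111, 0b11101111, 0b00000000]   # 1 = allocated, 0 = free
summary = 0
for i, leaf in enumerate(leaves):               # summary bit set => leaf is full
    if leaf == (1 << LEAF_BITS) - 1:
        summary |= 1 << i

def alloc():
    global summary
    for i, leaf in enumerate(leaves):
        if summary & (1 << i):
            continue                            # leaf has no free bit, skip it
        bit = (~leaf & (leaf + 1)).bit_length() - 1   # lowest clear bit
        leaves[i] |= 1 << bit
        if leaves[i] == (1 << LEAF_BITS) - 1:
            summary |= 1 << i
        return i * LEAF_BITS + bit

print(alloc())   # -> 12: first free slot, found without scanning the full leaf 0
```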
  • Patent number: 12073265
    Abstract: The present disclosure generally discloses an event handling capability configured to support handling of events. The event handling capability may be configured to support handling of events in a distributed event handling system, which may use distributed queuing of events, distributed processing of events, and so forth. The distributed event handling system may be a serverless cloud system or other type of distributed event handling system. The event handling capability may be configured to support handling of events in a distributed event handling system based on use of a message bus for queuing of events and based on use of hosts for queuing and processing of events.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: August 27, 2024
    Assignee: NOKIA SOLUTIONS AND NETWORKS OY
    Inventors: Ivica Rimac, Istemi Ekin Akkus, Ruichuan Chen, Manuel Stein, Volker Hilt
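The bus-plus-hosts shape can be sketched with in-memory queues; the round-robin pull and the handler below are assumptions made for illustration.

```python
from collections import deque

message_bus = deque()                 # shared message bus queuing events
hosts = {"host-a": deque(), "host-b": deque()}

def publish(event):
    message_bus.append(event)

def distribute():
    # Pull events off the bus and spread them across host-local queues.
    host_names = list(hosts)
    i = 0
    while message_bus:
        hosts[host_names[i % len(host_names)]].append(message_bus.popleft())
        i += 1

def process_all():
    for name, queue in hosts.items():
        while queue:
            print(f"{name} handled {queue.popleft()}")

publish({"type": "order_created", "id": 1})
publish({"type": "order_paid", "id": 1})
distribute()
process_all()
```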
  • Patent number: 12067428
    Abstract: An apparatus to facilitate thread synchronization is disclosed. The apparatus comprises one or more processors to execute a producer thread to generate a plurality of commands, execute a consumer thread to process the plurality of commands and synchronize the producer thread with the consumer thread, including updating a producer fence value upon generation of in-order commands, updating a consumer fence value upon processing of the in-order commands and performing a synchronization operation based on the consumer fence value, wherein the producer fence value and the consumer fence value each correspond to an order position of an in-order command.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Stav Gurtovoy, Mateusz Maria Przybylski, Michael Apodaca, Manjunath D S
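A small producer/consumer fence sketch: each side advances its fence to the order position it has reached, and synchronization waits on the consumer fence. The queue-based transport and polling wait are assumptions, not the claimed hardware mechanism.

```python
import threading, queue, time

commands = queue.Queue()
producer_fence = 0          # last in-order command generated
consumer_fence = 0          # last in-order command processed

def producer(n):
    global producer_fence
    for i in range(1, n + 1):
        commands.put(i)
        producer_fence = i              # update fence after generating command i

def consumer(n):
    global consumer_fence
    for _ in range(n):
        cmd = commands.get()
        consumer_fence = cmd            # update fence after processing command cmd

def wait_for(position):
    while consumer_fence < position:    # synchronize on the consumer fence value
        time.sleep(0.001)

t1 = threading.Thread(target=producer, args=(5,))
t2 = threading.Thread(target=consumer, args=(5,))
t1.start(); t2.start()
wait_for(5)
print("commands up to order position 5 are complete")
t1.join(); t2.join()
```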
  • Patent number: 12067427
    Abstract: Examples include registering a device driver with an operating system, including registering available hardware offloads. The operating system receives a call to a hardware offload, inserts a binary filter representing the hardware offload into a hardware component and causes the execution of the binary filter by the hardware component when the hardware offload is available, and executes the binary filter in software when the hardware offload is not available.
    Type: Grant
    Filed: July 19, 2022
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Eliezer Tamir, Johannes Berg, Andrew Cunningham, Peter Waskiewicz, Jr., Andrey Chilikin
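The register-then-fall-back behavior can be sketched as below; the driver class, offload names, and the use of Python callables in place of binary filters are invented for illustration.

```python
class NicDriver:
    def __init__(self, hw_offloads):
        self.hw_offloads = set(hw_offloads)   # offloads advertised at registration
        self.hw_filters = []
        self.sw_filters = []

    def install_filter(self, name, predicate):
        if name in self.hw_offloads:
            self.hw_filters.append(predicate)  # "insert filter into hardware"
            return "hardware"
        self.sw_filters.append(predicate)      # offload unavailable: run in software
        return "software"

    def receive(self, packet):
        return all(check(packet) for check in self.hw_filters + self.sw_filters)

drv = NicDriver(hw_offloads={"vlan_strip"})
print(drv.install_filter("vlan_strip", lambda p: p.get("vlan") != 999))   # hardware
print(drv.install_filter("drop_fragment", lambda p: not p.get("frag")))  # software
print(drv.receive({"vlan": 10, "frag": False}))
```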
  • Patent number: 12050925
    Abstract: Aspects of the subject disclosure may include, for example, instantiating a virtual smartphone in a cloud infrastructure, installing a smartphone application on the virtual smartphone, receiving input sensor data from a physical user device, providing the input sensor data to the smartphone application on the virtual smartphone, receiving output data from the smartphone application on the virtual smartphone, and providing the output data to the physical user device. Other embodiments are disclosed.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: July 30, 2024
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Yaron Kanza, Arun Jotshi, Raghvendra Savoor
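A minimal sketch of the relay loop: sensor data from the physical device is fed to an application installed on the cloud-hosted virtual smartphone, and the output is returned to the device. The class, app handler, and threshold are invented.

```python
class VirtualSmartphone:
    def __init__(self):
        self.apps = {}
    def install(self, name, handler):
        self.apps[name] = handler
    def feed_sensor_data(self, name, reading):
        return self.apps[name](reading)       # output data from the app

def step_counter_app(reading):
    # Hypothetical app: counts a step when acceleration exceeds a threshold.
    return {"step": reading["accel"] > 1.2}

phone = VirtualSmartphone()                   # instantiated in the cloud
phone.install("step_counter", step_counter_app)

sensor_reading = {"accel": 1.5}               # arrives from the physical device
output = phone.feed_sensor_data("step_counter", sensor_reading)
print(output)                                 # returned to the physical device
```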
  • Patent number: 12050931
    Abstract: A system and computer-implemented method for migrating partial tree structures of virtual disks for virtual computing instances between sites in a computer system use a compressed trie, which is created from target tree structures of virtual disks at a plurality of target sites in the computer system. For a selected virtual computing instance, the compressed trie is used to find candidate target sites based on a disk chain string of the virtual computing instance. For each candidate target site, a cost value for migrating the virtual computing instance along with a partial source tree structure of virtual disks corresponding to the virtual computing instance from the source site to the candidate target site is calculated, and the target site with the lowest cost value is selected as the migration option to reduce storage resource usage in the computer system.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: July 30, 2024
    Assignee: VMware, Inc.
    Inventors: Vipin Balachandran, Hemanth Kumar Pannem
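The selection step can be pictured with a prefix match over disk chains standing in for the compressed trie; the sites, chains, and cost formula below are assumptions for illustration.

```python
# Each target site is indexed by the disk chains it already holds; the cost of
# migrating a VM to a site is the number of disks still to be transferred.

target_chains = {
    "site-1": [["base", "snap1"]],
    "site-2": [["base", "snap1", "snap2"]],
    "site-3": [["other-base"]],
}
vm_chain = ["base", "snap1", "snap2", "delta-vm"]

def shared_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def migration_cost(site):
    best = max(shared_prefix_len(vm_chain, chain) for chain in target_chains[site])
    return len(vm_chain) - best          # disks that must still be copied

costs = {site: migration_cost(site) for site in target_chains}
print(costs, "->", min(costs, key=costs.get))   # site-2 needs the fewest copies
```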
  • Patent number: 12045664
    Abstract: Techniques for a cloud-based workload optimization service to identify customer workloads that are optimized to run on burstable instance types. The techniques include identifying workloads that are successfully running on burstable instance types, and using historical-utilization data for those workloads to train classification models. The optimization service can extract feature data from the historical-utilization data, where the feature data represents utilization characteristics that are indicative of burstable workloads. The feature data is then used to train classification models to receive utilization data for candidate workloads, and determine whether the candidate workloads would be optimized for burstable instance types. The optimization service can then migrate suitable workloads to burstable instance types, and/or provide users with recommendations that their workloads are optimized or suitable for burstable instance types.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: July 23, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Siyu Wang, Chia-Yu Kao, Leslie Johann Lamprecht, Qijia Chen, Letian Feng
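A toy sketch of burstability features extracted from a workload's CPU history; the feature set and thresholds below are assumptions, and the service described above trains classification models on such features rather than using fixed rules.

```python
def extract_features(cpu_samples):
    mean = sum(cpu_samples) / len(cpu_samples)
    peak = max(cpu_samples)
    time_above_50 = sum(1 for s in cpu_samples if s > 50) / len(cpu_samples)
    return {"mean": mean, "peak": peak, "time_above_50": time_above_50}

def looks_burstable(features):
    # Mostly idle with only short spikes -> candidate for a burstable instance type.
    return features["mean"] < 20 and features["time_above_50"] <= 0.05

# CPU utilization samples for a candidate workload (percent, one per interval).
candidate = [5, 4, 6, 90, 5, 3, 4, 5, 6, 4, 5, 3, 4, 5, 6, 4, 3, 5, 4, 6]
feats = extract_features(candidate)
print(feats, "->",
      "recommend burstable" if looks_burstable(feats) else "keep current type")
```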