Patents Examined by Dong U Kim
-
Patent number: 12236274
Abstract: A method for the automatic scheduling of tasks obtains data processing tasks and data sources. A job queue is formed based on the data processing tasks. The job tasks are extracted in order from the job queue. Computing resources are distributed based on the extracted job tasks. A result of the data processing task is obtained by a pre-trained model based on the data source. An electronic device and a computer readable storage medium applying the method are also provided.
Type: Grant
Filed: November 29, 2021
Date of Patent: February 25, 2025
Assignee: Fulian Precision Electronics (Tianjin) Co., LTD.
Inventors: Chi-Chun Chiang, Shao-Hsuan Fan
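As a rough illustration of the queue-then-dispatch flow in this abstract, the following Python sketch forms a job queue, pops jobs in order, assigns a compute resource, and runs a stand-in pre-trained model. All names (JobQueue-style structures, `PretrainedModel`, `distribute_resources`) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: form a job queue from data processing tasks, pop jobs in
# order, assign compute resources, and obtain results from a pre-trained model.
from collections import deque

class PretrainedModel:                      # stand-in for the patent's pre-trained model
    def infer(self, data_source):
        return f"result({data_source})"

def distribute_resources(task, pool):
    # Naive resource assignment: hand each extracted job the next free worker slot.
    return pool.pop() if pool else None

def schedule(tasks, data_sources, workers):
    queue = deque(tasks)                    # job queue formed from the tasks
    model, results = PretrainedModel(), []
    while queue:
        task = queue.popleft()              # extract jobs in order
        worker = distribute_resources(task, workers)
        results.append((task, worker, model.infer(data_sources[task])))
    return results

print(schedule(["t1", "t2"], {"t1": "db_a", "t2": "db_b"}, ["gpu0", "gpu1"]))
```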
-
Patent number: 12236278
Abstract: Methods, systems, and apparatuses for optimizing the configuration of virtual warehouses for execution of queries on one or more data warehouses are described herein. A plurality of different events associated with a data sharing platform may be logged. The data sharing platform may enable users to access one or more databases managed by the data sharing platform. The data sharing platform may be configured to provide access to the data stored by the data sharing platform via one or more of a plurality of virtual warehouses. A testing database may be generated. An optimized virtual warehouse configuration may be predicted for a first virtual warehouse by selecting a plurality of different warehouse configurations for the first virtual warehouse, measuring performance parameters of each of the plurality of different warehouse configurations by emulation, and selecting the optimized virtual warehouse configuration based on the performance parameters.
Type: Grant
Filed: February 1, 2022
Date of Patent: February 25, 2025
Assignee: Capital One Services, LLC
Inventors: Syed Shamaz Salim, Ganesh Bharathan
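A minimal sketch of the selection step described above: try several candidate warehouse configurations, emulate logged queries against a testing database, and keep the best performer. The configuration fields and `emulate_queries` helper are assumptions for illustration only.

```python
# Hypothetical sketch: measure candidate virtual-warehouse configurations by
# emulating logged queries, then select the configuration with the best result.
import random

CANDIDATE_CONFIGS = [
    {"size": "small", "clusters": 1},
    {"size": "medium", "clusters": 2},
    {"size": "large", "clusters": 4},
]

def emulate_queries(config, logged_queries):
    # Placeholder for replaying logged queries against a testing database;
    # returns a total latency estimate for the given configuration.
    return sum(random.uniform(0.5, 2.0) / config["clusters"] for _ in logged_queries)

def pick_optimized_config(logged_queries):
    measured = [(emulate_queries(c, logged_queries), c) for c in CANDIDATE_CONFIGS]
    return min(measured, key=lambda m: m[0])[1]   # lowest latency wins

print(pick_optimized_config(["SELECT 1", "SELECT 2", "SELECT 3"]))
```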
-
Patent number: 12236281
Abstract: A method for providing bare-metal system (BMS) resources includes receiving, by a system control processor, a partitioning request from a system control processor manager that specifies a quantity of physical partitions, and, in response to the partitioning request: identifying a set of physical partitions of a bare-metal system (BMS) that are capable of servicing the partitioning request based on the quantity of physical partitions, updating a partitioning configuration data structure based on the set of physical partitions and the virtual partition, and implementing the virtual partition based on the set of resources using the partitioning configuration data structure.
Type: Grant
Filed: September 10, 2021
Date of Patent: February 25, 2025
Assignee: Dell Products L.P.
Inventors: Sumedh Wasudeo Sathaye, Gaurav Chawla, John S. Harwood
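The sketch below illustrates the partitioning flow under simplifying assumptions: pick enough free physical partitions to satisfy the requested quantity and record the mapping in a configuration data structure keyed by virtual partition. The `PartitionConfig` structure and field names are hypothetical.

```python
# Hypothetical sketch of servicing a partitioning request against a bare-metal system.
from dataclasses import dataclass, field

@dataclass
class PartitionConfig:
    # virtual partition id -> list of physical partition ids backing it
    mapping: dict = field(default_factory=dict)

def handle_partitioning_request(available, quantity, virtual_id, config):
    candidates = [p for p in available if p["free"]]
    if len(candidates) < quantity:
        raise RuntimeError("not enough physical partitions to service request")
    chosen = candidates[:quantity]
    for p in chosen:
        p["free"] = False                       # reserve the physical partitions
    config.mapping[virtual_id] = [p["id"] for p in chosen]
    return config.mapping[virtual_id]

bms = [{"id": i, "free": True} for i in range(4)]
cfg = PartitionConfig()
print(handle_partitioning_request(bms, 2, "vp-0", cfg))
```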
-
Patent number: 12223339
Abstract: Techniques are described for deploying a virtualized computing environment in a user-specific configuration, the virtualized computing environment being a virtualized network function comprising a plurality of virtual machines. A solution definition file (SDF) identifies a configuration for the deployment. The SDF replaces each secret needed for the deployment with an identifier for the secret. A schema defines a format for each identifier for each secret included in the SDF and a format of the secrets. The secrets and corresponding identifiers are stored in a secure storage. The identifiers are sent to the deployed virtual machines, the identifiers being usable by the virtual machines to obtain the secrets from the secure storage.
Type: Grant
Filed: November 30, 2021
Date of Patent: February 11, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: James Duncan Parsons, Peter John Whiting
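A small sketch of the secret-indirection idea: the deployment file carries only identifiers, while the real values live in a secure store that the deployed VMs query at runtime. The in-memory `secure_store`, the `_secret` naming convention, and the SDF layout are illustrative assumptions.

```python
# Hypothetical sketch: replace secrets in an SDF with identifiers, keep values in
# a secure store, and let the deployed VM resolve identifiers back to secrets.
import uuid

secure_store = {}                            # stand-in for a real secrets service

def scrub_sdf(sdf):
    """Replace each secret value in the SDF with an opaque identifier."""
    scrubbed = {}
    for key, value in sdf.items():
        if key.endswith("_secret"):
            ident = f"secret-id-{uuid.uuid4()}"
            secure_store[ident] = value      # keep the real value out of the SDF
            scrubbed[key] = ident
        else:
            scrubbed[key] = value
    return scrubbed

def vm_fetch_secret(identifier):
    return secure_store[identifier]          # VM resolves the identifier at runtime

sdf = {"vm_count": 3, "db_password_secret": "hunter2"}
clean = scrub_sdf(sdf)
print(clean, vm_fetch_secret(clean["db_password_secret"]))
```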
-
Patent number: 12217039
Abstract: In one embodiment, a system for managing a virtualization environment includes a set of host machines, each of which includes a hypervisor, virtual machines, and a virtual machine controller, and a first virtualized file server configured to receive a request to access a storage item located at a second virtualized file server, determine that the storage item is designated as being accessible by other virtualized file servers, identify an FSVM of the second virtualized file server at which the storage item is located, and forward the request to the FSVM of the second virtualized file server. The storage item may be designated as being accessible by other virtualized file servers when the storage item is associated with a predetermined tag value indicating that the storage item is shared among virtualized file servers. The predetermined tag value may be stored in a sharding map in association with the storage item.
Type: Grant
Filed: February 5, 2021
Date of Patent: February 4, 2025
Assignee: Nutanix, Inc.
Inventors: Anil Kumar Gopalapura Venkatesh, Richard James Sharpe, Durga Mahesh Arikatla, Kalpesh Ashok Bafna
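The sketch below shows the lookup-and-forward behavior described in this abstract: a sharding map records which FSVM owns each storage item and whether the item carries the shared tag, and requests for shared items are forwarded to the owning FSVM. Map layout and FSVM names are illustrative assumptions.

```python
# Hypothetical sketch of cross-file-server access via a sharding map with a shared tag.
SHARDING_MAP = {
    "projects/report.doc": {"fsvm": "fsvm-7", "shared": True},
    "home/alice/notes.txt": {"fsvm": "fsvm-2", "shared": False},
}

def forward_to(fsvm, path):
    return f"forwarded {path} to {fsvm}"

def access_storage_item(path, local_fsvms):
    entry = SHARDING_MAP[path]
    if entry["fsvm"] in local_fsvms:
        return f"served {path} locally by {entry['fsvm']}"
    if not entry["shared"]:                  # only shared-tagged items may cross servers
        raise PermissionError(f"{path} is not shared across file servers")
    return forward_to(entry["fsvm"], path)

print(access_storage_item("projects/report.doc", local_fsvms={"fsvm-1", "fsvm-2"}))
```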
-
Patent number: 12197397
Abstract: Systems and methods are provided for handling file operations from a hosted computing instance via a secure compute layer. The secure compute layer is presented to the instance as a virtualized service device that is locally addressable by the instance. Software within the instance can submit file operations to the virtualized service device, which the secure compute layer can translate into calls to a network-accessible storage service. Results from the calls can then be passed back to the instance through the virtualized service device. As a result, the instance can communicate with a variety of different network services, without itself implementing network communications for those services.
Type: Grant
Filed: December 10, 2021
Date of Patent: January 14, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Christopher Magee Greenwood, Marc Stephen Olson, Jacob Wires, Andrew Kent Warfield
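A minimal sketch of the translation step: the instance writes a file operation to what it sees as a local device, and a secure-compute-layer stand-in turns that into a call against a network storage service. Both classes and the operation format are assumptions, not the patent's or any AWS API.

```python
# Hypothetical sketch: a virtualized service device translates local file operations
# into calls to a network-accessible storage service and returns the results.
class NetworkStorageService:
    def __init__(self):
        self.objects = {}
    def put(self, key, data):
        self.objects[key] = data
        return {"status": "ok", "key": key}

class VirtualizedServiceDevice:
    """What the hosted instance addresses locally; backed by the secure compute layer."""
    def __init__(self, service):
        self.service = service
    def submit(self, op):
        if op["type"] == "write":            # translate the local op into a service call
            return self.service.put(op["path"], op["data"])
        raise NotImplementedError(op["type"])

device = VirtualizedServiceDevice(NetworkStorageService())
print(device.submit({"type": "write", "path": "/logs/app.log", "data": b"hello"}))
```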
-
Patent number: 12199759
Abstract: Described herein is a graphics processor configured to perform asynchronous input dependency resolution among a group of interdependent workloads. The graphics processor can dynamically resolve input dependencies among the workloads according to a dependency relationship defined for the workloads. Dependency resolution can be performed via a deferred submission mode, which resolves input dependencies prior to thread dispatch to the processing resources, or via an immediate submission mode, which resolves input dependencies at the processing resources.
Type: Grant
Filed: March 7, 2022
Date of Patent: January 14, 2025
Assignee: Intel Corporation
Inventors: Michal Mrozek, Vinod Tipparaju
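The following sketch contrasts the two submission modes in plain Python: deferred mode holds a workload back until its inputs are complete, while immediate mode dispatches it and lets the processing resource wait. Workload structure and function names are illustrative, not Intel interfaces.

```python
# Hypothetical sketch of deferred vs. immediate dependency resolution.
completed = set()

def dispatch(workload):
    completed.add(workload["name"])
    return f"dispatched {workload['name']}"

def submit_immediate(workload):
    # Dispatch right away; the processing resource itself waits on inputs.
    return f"{dispatch(workload)} (resource will wait on {workload['deps']})"

def submit_deferred(workload):
    # Resolve dependencies up front; dispatch only when every input is ready.
    if all(dep in completed for dep in workload["deps"]):
        return dispatch(workload)
    return f"held back {workload['name']} until {workload['deps']} complete"

print(submit_immediate({"name": "decode", "deps": []}))
print(submit_deferred({"name": "blur", "deps": ["decode"]}))
```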
-
Patent number: 12197942
Abstract: One embodiment of the invention provides an apparatus that prevents degradation of the accuracy of a search result while reducing the total required period of a series of processing, in a case where input parameters for a series of processing including a plurality of processing with different required periods are searched for. The apparatus includes a calculator and a generator. The calculator calculates evaluation values for output parameters of a series of processing including first processing and second processing. The first processing uses a first input parameter. The second processing uses a second input parameter. The generator regenerates first and second input parameters corresponding to one time of a series of processing based on first and second input parameters corresponding to selected output parameters. The number of input parameters for the shorter of the first and second processing is larger than the number of input parameters for the other.
Type: Grant
Filed: September 9, 2021
Date of Patent: January 14, 2025
Assignees: Kabushiki Kaisha Toshiba, Kioxia Corporation
Inventors: Masashi Tomita, Daiki Kiribuchi, Satoru Yokota, Soh Koike
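A loose sketch of the sampling imbalance the abstract describes: the shorter processing stage receives more candidate input parameters per search iteration than the longer one, keeping the total required period bounded. The cost model, budget split, and evaluation function are all illustrative assumptions.

```python
# Hypothetical sketch: allocate more candidate parameters to the shorter processing stage.
import random

def evaluate(stage_cost, param):
    # Pretend evaluation: cost models the stage's required period, param its quality.
    return param - 0.01 * stage_cost

def search_iteration(short_cost=1.0, long_cost=10.0, budget=20.0):
    # Give the cheap stage proportionally more candidates than the expensive one.
    n_short = int(budget * 0.8 / short_cost)        # many candidates for the short stage
    n_long = max(1, int(budget * 0.2 / long_cost))  # few candidates for the long stage
    short_params = [random.random() for _ in range(n_short)]
    long_params = [random.random() for _ in range(n_long)]
    best_short = max(short_params, key=lambda p: evaluate(short_cost, p))
    best_long = max(long_params, key=lambda p: evaluate(long_cost, p))
    return best_short, best_long

print(search_iteration())
```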
-
Patent number: 12190210
Abstract: A method of using a computing device to manage a lifecycle of machine learning models includes receiving, by a computing device, multiple pre-defined machine learning lifecycle tasks. The computing device manages execution of the multiple pre-defined machine learning lifecycle tasks through a management software layer. The computing device further generates and updates a machine learning pipeline using the management software layer.
Type: Grant
Filed: December 17, 2021
Date of Patent: January 7, 2025
Assignee: International Business Machines Corporation
Inventors: Benjamin Herta, Darrell Christopher Reimer, Evelyn Duesterwald, Gaodan Fang, Punleuk Oum, Debashish Saha, Archit Verma
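As a rough illustration, the sketch below shows a management layer that runs pre-defined lifecycle tasks and assembles them into a pipeline that can later be updated. The `Pipeline` and `LifecycleManager` classes and the task names are hypothetical, not the patent's interfaces.

```python
# Hypothetical sketch of a management layer for machine learning lifecycle tasks.
class Pipeline:
    def __init__(self):
        self.stages = []
    def add(self, stage):
        self.stages.append(stage)

class LifecycleManager:
    def __init__(self, tasks):
        self.tasks = tasks                      # pre-defined lifecycle tasks
        self.pipeline = Pipeline()
    def build_pipeline(self):
        for name, fn in self.tasks:             # generate the pipeline stage by stage
            self.pipeline.add(name)
            fn()
        return self.pipeline
    def update_pipeline(self, name, fn):        # later lifecycle updates extend the pipeline
        self.pipeline.add(name)
        fn()

mgr = LifecycleManager([("ingest", lambda: None), ("train", lambda: None)])
print(mgr.build_pipeline().stages)
```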
-
Patent number: 12182607
Abstract: A method and a system to perform the method are disclosed. The method includes receiving, by a virtualization server communicatively coupled with a client device, a request to provide a virtual machine (VM) to the client device; accessing a profile associated with the client device; instantiating a VM on the virtualization server, wherein the VM is a linked clone VM of a base VM, wherein the linked clone VM has (1) a read-only access to a shared range of a persistent memory associated with the base VM, wherein the shared range of the persistent memory is determined in view of the profile associated with the client device and stores at least one application installed on the virtualization server, and (2) a write access to a private range of the persistent memory, wherein the private range is associated with the VM; and providing the VM to the client device.
Type: Grant
Filed: June 14, 2023
Date of Patent: December 31, 2024
Assignee: Parallels International GmbH
Inventors: Ivan Korobov, Nikolay Dobrovolskiy
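The sketch below illustrates the linked-clone memory layout in miniature: reads fall through to the base VM's shared, read-only range unless the clone has written the address privately. The address-keyed dictionary model is a simplification for illustration only.

```python
# Hypothetical sketch: a linked clone with a read-only shared range and a private
# writable range that shadows it.
class LinkedCloneVM:
    def __init__(self, shared_range):
        self.shared = shared_range              # read-only view of the base VM's memory
        self.private = {}                       # clone-local writable range

    def read(self, addr):
        # Private writes shadow the shared, read-only base content.
        return self.private.get(addr, self.shared.get(addr))

    def write(self, addr, value):
        # Writes always land in the clone's private range; the shared range stays read-only.
        self.private[addr] = value

base = {0x10: "app_binary", 0x20: "libs"}
clone = LinkedCloneVM(base)
clone.write(0x30, "user_data")
print(clone.read(0x10), clone.read(0x30))
```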
-
Patent number: 12175279
Abstract: Systems and methods for controlling access to services. Methods may comprise receiving, from a first client of a plurality of clients, a first request to access a first service of a plurality of services. The first service may be associated with a first bulkhead. A first count of concurrent active requests to the first service via the first bulkhead may be determined. If the first count is equal to a first bulkhead maximum value, access to the first service via the first bulkhead may consequently be refused. A second count of concurrent active requests via a shared burst bulkhead may be determined. The second count may correspond to concurrent active requests to any of the plurality of services via the shared burst bulkhead. If the second count is less than a shared burst maximum value, the first request to the first service may be routed via the shared burst bulkhead.
Type: Grant
Filed: March 16, 2022
Date of Patent: December 24, 2024
Assignee: Shopify Inc.
Inventors: Damian Arpad Polan, Justin Li
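A compact sketch of the routing rule described in this abstract: a request first tries its service's own bulkhead, and only spills into the shared burst bulkhead when the per-service limit is reached. Limits and counter bookkeeping are simplified assumptions.

```python
# Hypothetical sketch of per-service bulkheads backed by a shared burst bulkhead.
BULKHEAD_MAX = 10          # per-service bulkhead limit
BURST_MAX = 5              # shared burst bulkhead limit across all services

bulkhead_active = {"checkout": 10, "search": 3}   # concurrent requests per service
burst_active = 2                                  # concurrent requests in the burst pool

def route(service):
    global burst_active
    if bulkhead_active[service] < BULKHEAD_MAX:
        bulkhead_active[service] += 1
        return f"{service}: routed via its own bulkhead"
    if burst_active < BURST_MAX:                  # own bulkhead full, try the shared burst
        burst_active += 1
        return f"{service}: routed via shared burst bulkhead"
    return f"{service}: refused, both bulkheads at capacity"

print(route("checkout"))   # own bulkhead is full -> shared burst
print(route("search"))     # own bulkhead has room
```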
-
Patent number: 12175267
Abstract: Hot restart of a hypervisor by replacing a running first hypervisor by a second hypervisor with minimally perceptible downtime to guest partitions. A first hypervisor is executed on a computing system. The first hypervisor is configured to create one or more guest partitions. During the hot restart, a service partition is generated and initialized with a second hypervisor. At least a portion of runtime state of the first hypervisor is migrated and synchronized to the second hypervisor using inverse hypercalls. After the synchronization, the second hypervisor is devirtualized from the service partition to replace the first hypervisor. Devirtualizing includes transferring control of hardware resources from the first hypervisor to the second hypervisor, using the previously migrated and synchronized runtime state.
Type: Grant
Filed: December 13, 2023
Date of Patent: December 24, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Bruce J. Sherwin, Jr., Sai Ganesh Ramachandran
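The sketch below restates the hot-restart sequence as a plain state transition: initialize the second hypervisor in a service partition, mirror runtime state into it, then hand over the hardware. Every structure and function here is an illustrative stand-in; the real mechanism uses inverse hypercalls, which this sketch only imitates with a copy loop.

```python
# Hypothetical sketch of the hot-restart sequence described in the abstract.
def hot_restart(old_hv):
    service_partition = {"hypervisor": "v2", "state": {}}   # initialize second hypervisor

    # Migrate and synchronize runtime state (the patent does this via inverse hypercalls).
    for key, value in old_hv["runtime_state"].items():
        service_partition["state"][key] = value

    # Devirtualize: transfer control of hardware resources to the new hypervisor.
    new_hv = {
        "hypervisor": service_partition["hypervisor"],
        "runtime_state": service_partition["state"],
        "owns_hardware": True,
    }
    old_hv["owns_hardware"] = False
    return new_hv

old = {"hypervisor": "v1", "runtime_state": {"guests": ["vm0", "vm1"]}, "owns_hardware": True}
print(hot_restart(old))
```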
-
Patent number: 12159166
Abstract: Methods and systems disclosed herein relate generally to evaluating resource loads to determine when to transform queues and to specific techniques for transforming at least part of queues so as to correspond to alternative resources.
Type: Grant
Filed: November 16, 2023
Date of Patent: December 3, 2024
Assignee: Live Nation Entertainment, Inc.
Inventors: Debbie Hsu, Gary Yu, Jonathan Philpott, Suzanne Lai, Hong Zhou
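A brief sketch of the general idea: when a resource's load crosses a threshold, part of its queue is reassigned to an alternative resource. The threshold, queue layout, and alternative mapping are illustrative assumptions, since the abstract gives no specifics.

```python
# Hypothetical sketch: transform part of a queue to correspond to an alternative resource.
LOAD_THRESHOLD = 0.8

def transform_queues(queues, loads, alternatives):
    for resource, load in loads.items():
        if load <= LOAD_THRESHOLD:
            continue                               # resource is healthy, leave its queue
        alt = alternatives.get(resource)
        if alt is None:
            continue
        half = len(queues[resource]) // 2          # move part of the queue elsewhere
        queues[alt].extend(queues[resource][half:])
        del queues[resource][half:]
    return queues

queues = {"venue_a": ["u1", "u2", "u3", "u4"], "venue_b": []}
print(transform_queues(queues, {"venue_a": 0.95, "venue_b": 0.2}, {"venue_a": "venue_b"}))
```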
-
Patent number: 12135994
Abstract: An electronic device includes at least one processor including a first processor and a second processor separate from the first processor, and a memory electrically connected to the at least one processor and storing instructions, wherein the at least one processor is further configured to execute the instructions to: assign foreground tasks to a boosting foreground control group and a non-boosting foreground control group in response to a user input, based on completion of booting of the electronic device; schedule at least one task assigned to the boosting foreground control group for the first processor; and schedule at least one task assigned to the non-boosting foreground control group for the second processor. The performance of the second processor may be lower than the performance of the first processor.
Type: Grant
Filed: December 23, 2021
Date of Patent: November 5, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kiljae Kim, Byungsoo Kwon, Jaeho Kim, Daehyun Cho
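The sketch below shows the control-group split in miniature: boosted foreground tasks are placed on the faster processor, non-boosted ones on the slower processor. Group names, processor labels, and the user-visibility heuristic are illustrative assumptions.

```python
# Hypothetical sketch: boosting vs. non-boosting foreground control groups scheduled
# onto a faster and a slower processor.
FAST_CPU, SLOW_CPU = "cpu_big", "cpu_little"

control_groups = {"boosting_fg": [], "non_boosting_fg": []}

def assign_foreground_task(task, user_visible):
    group = "boosting_fg" if user_visible else "non_boosting_fg"
    control_groups[group].append(task)

def schedule():
    placement = {}
    for task in control_groups["boosting_fg"]:
        placement[task] = FAST_CPU              # boosting group -> first (faster) processor
    for task in control_groups["non_boosting_fg"]:
        placement[task] = SLOW_CPU              # non-boosting group -> second processor
    return placement

assign_foreground_task("ui_render", user_visible=True)
assign_foreground_task("sync_service", user_visible=False)
print(schedule())
```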
-
Patent number: 12124865
Abstract: Methods and apparatus for providing page migration of pages among tiered memories identify frequently accessed memory pages in each memory tier and generate page hotness ranking information indicating how frequently memory pages are being accessed. The methods and apparatus provide the page hotness ranking information to an operating system or hypervisor, depending on which is used in the system; the operating system or hypervisor issues a page move command to a hardware data mover based on the page hotness ranking information, and the hardware data mover moves a memory page to a different memory tier in response to the page move command.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 22, 2024
Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
Inventors: Sean T. White, Philip Ng
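A small sketch of the hotness-driven migration loop: rank pages by access count, then ask a data-mover stand-in to promote the hottest pages to the fast tier. Tier names, access counters, and the mover interface are illustrative assumptions.

```python
# Hypothetical sketch: rank page hotness and issue move commands to a data-mover stand-in.
def rank_page_hotness(access_counts):
    # Highest access count first = hottest pages.
    return sorted(access_counts, key=access_counts.get, reverse=True)

def issue_page_move(page, target_tier, page_tiers):
    page_tiers[page] = target_tier              # the hardware data mover would do the copy
    return f"moved page {page:#x} to {target_tier}"

page_tiers = {0x1000: "slow_tier", 0x2000: "slow_tier", 0x3000: "fast_tier"}
access_counts = {0x1000: 900, 0x2000: 12, 0x3000: 40}

for page in rank_page_hotness(access_counts)[:1]:   # promote only the hottest page
    if page_tiers[page] != "fast_tier":
        print(issue_page_move(page, "fast_tier", page_tiers))
```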
-
Patent number: 12118400
Abstract: A computer-implemented method according to one embodiment includes identifying a machine learning pipeline and a plurality of training data batches; creating a plurality of tasks, based on the machine learning pipeline; and determining an order in which the plurality of tasks is executed, utilizing a resource usage-aware approach.
Type: Grant
Filed: November 29, 2021
Date of Patent: October 15, 2024
Assignee: International Business Machines Corporation
Inventors: Martin Hirzel, Kiran A. Kate, Avraham Ever Shinnar
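The sketch below gives one plausible reading of a resource-usage-aware ordering: derive one task per pipeline step and batch, tag each with an estimated memory cost, and order the heavier tasks first. The heuristic, task fields, and cost estimates are all assumptions, not the patent's actual approach.

```python
# Hypothetical sketch: create tasks from a pipeline and order them by estimated memory use.
def create_tasks(pipeline_steps, training_batches):
    # One task per (step, batch) pair, tagged with an estimated memory cost.
    return [
        {"step": step, "batch": b, "mem_gb": cost}
        for step, cost in pipeline_steps
        for b in range(training_batches)
    ]

def order_tasks(tasks):
    # Simple usage-aware heuristic: schedule memory-heavy tasks first so they do
    # not pile up concurrently at the end of the run.
    return sorted(tasks, key=lambda t: t["mem_gb"], reverse=True)

tasks = create_tasks([("featurize", 2.0), ("train", 8.0)], training_batches=2)
print([t["step"] for t in order_tasks(tasks)])
```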
-
Patent number: 12113678
Abstract: Some embodiments provide various methods for offloading operations in an O-RAN (Open Radio Access Network) onto control plane (CP) or edge applications that execute on host computers with hardware accelerators in software defined datacenters (SDDCs). At the CP or edge application operating on a machine executing on a host computer with a hardware accelerator, the method of some embodiments receives data, from an O-RAN E2 unit, to perform an operation. The method uses a driver of the machine to communicate directly with the hardware accelerator to direct the hardware accelerator to perform a set of computations associated with the operation. This driver allows the communication with the hardware accelerator to bypass an intervening set of drivers executing on the host computer between the machine's driver and the hardware accelerator. Through this driver, the application in some embodiments receives the computation results, which it then provides to one or more O-RAN components (e.g.
Type: Grant
Filed: July 15, 2021
Date of Patent: October 8, 2024
Assignee: VMware LLC
Inventors: Giridhar Subramani Jayavelu, Aravind Srinivasan, Amit Singh
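As a rough illustration of the bypass path, the sketch below has the application's own driver object hand work directly to an accelerator stand-in instead of going through an intermediate host driver stack. Every class and method here is a plain illustrative stand-in, not a VMware or O-RAN API.

```python
# Hypothetical sketch: a pass-through driver sends E2 data straight to the accelerator.
class HardwareAccelerator:
    def compute(self, data):
        return [x * x for x in data]            # stand-in for the offloaded computation

class PassthroughDriver:
    """Driver inside the machine with direct access to the accelerator."""
    def __init__(self, accel):
        self.accel = accel
    def offload(self, e2_data):
        return self.accel.compute(e2_data)      # no intervening host drivers involved

def handle_e2_request(driver, e2_data):
    results = driver.offload(e2_data)           # direct path to the hardware accelerator
    return {"to_oran_components": results}

driver = PassthroughDriver(HardwareAccelerator())
print(handle_e2_request(driver, [1, 2, 3]))
```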
-
Patent number: 12112155
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: examining target application container configuration data to identify one or more target container base images referenced in the target application container configuration; subjecting script data associated with the one or more target container base images to text-based processing for evaluation of security risk associated with the one or more container base images, the script data obtained from at least one candidate hosting computing environment; and selecting a hosting computing environment from the at least one computing environment for hosting the target application container, the selecting in dependence on the text-based processing.
Type: Grant
Filed: May 19, 2021
Date of Patent: October 8, 2024
Assignee: Kyndryl, Inc.
Inventors: Igor Monteiro Vieira, Marcelo Mota Manhaes, Thiago Bianchi, Suellen Caroline Da Silva
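The sketch below shows one way the selection step could look: score the script data tied to each candidate hosting environment by scanning for risky patterns, then pick the lowest-risk environment. The pattern list, scoring rule, and environment names are illustrative assumptions.

```python
# Hypothetical sketch: text-based risk scoring of image script data to choose a host.
RISKY_PATTERNS = ["curl | sh", "chmod 777", "--privileged"]

def score_script(script_text):
    # One point per risky pattern found in the image's build/startup scripts.
    return sum(script_text.count(p) for p in RISKY_PATTERNS)

def select_hosting_environment(candidates):
    # candidates: {environment name: script data obtained from that environment}
    scored = {env: score_script(scripts) for env, scripts in candidates.items()}
    return min(scored, key=scored.get), scored

candidates = {
    "cluster_a": "RUN chmod 777 /app && curl | sh install.sh",
    "cluster_b": "RUN pip install -r requirements.txt",
}
print(select_hosting_environment(candidates))
```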
-
Patent number: 12112185
Abstract: According to an embodiment, a communication apparatus includes a task and a notification unit. The task stores, in a storage unit, notification information to be notified to a virtual machine as a notification destination via a virtual machine monitor after execution of predetermined processing. The notification unit collectively notifies the virtual machine monitor of a plurality of pieces of notification information stored in the storage unit.
Type: Grant
Filed: August 26, 2021
Date of Patent: October 8, 2024
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Yuta Kobayashi, Takahiro Yamaura
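A small sketch of the batching idea: tasks append notification records to a shared buffer, and the notification unit later hands the whole batch to the virtual machine monitor in a single call. The buffer, record fields, and VMM callback are illustrative assumptions.

```python
# Hypothetical sketch: tasks store notification records; a notification unit flushes
# them to the virtual machine monitor collectively.
notification_buffer = []                         # the "storage unit" in the abstract

def task_finish(task_id, target_vm):
    # After its processing, the task records what the destination VM should be told.
    notification_buffer.append({"task": task_id, "vm": target_vm, "event": "done"})

def notify_vmm(vmm_call):
    # Collectively notify the VMM of every pending record, then clear the buffer.
    batch = list(notification_buffer)
    notification_buffer.clear()
    return vmm_call(batch)

task_finish("tx-queue-0", "vm1")
task_finish("rx-queue-1", "vm1")
print(notify_vmm(lambda batch: f"VMM received {len(batch)} notifications"))
```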
-
Patent number: 12112191
Abstract: The present disclosure relates to devices and methods for creating one or more proxy devices in a guest device mirroring the devices hosted by a host device. The proxy devices may provide full device access functionality to applications running in the guest device. The devices and methods may load a proxy driver inside the guest device, which communicates with the host device. When applications running on the guest device interact with the proxy devices, the proxy driver communicates the interaction to the host device, which communicates with the device driver managing the device. The devices and methods allow applications running on the host and applications running on the guest to share access to the same device.
Type: Grant
Filed: October 26, 2023
Date of Patent: October 8, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Alessandro Domenico Scarpantoni, Shyamal Kaushik Varma, Ajay Preetham Barboza, Jason Christopher Knichel, Adam Joseph Lenart, Samuel David Adams
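The sketch below illustrates the forwarding path: the guest's proxy driver relays each device interaction to the host, which dispatches it to the real device driver. Class and method names (`GuestProxyDriver`, `ioctl`, the device id) are illustrative assumptions, not Microsoft interfaces.

```python
# Hypothetical sketch: a guest proxy driver forwards device interactions to the host,
# which hands them to the real device driver.
class RealDeviceDriver:
    def handle(self, request):
        return f"device handled {request!r}"

class Host:
    def __init__(self):
        self.drivers = {"camera0": RealDeviceDriver()}
    def dispatch(self, device_id, request):
        return self.drivers[device_id].handle(request)

class GuestProxyDriver:
    """Loaded inside the guest; mirrors a host device as a local proxy device."""
    def __init__(self, host, device_id):
        self.host, self.device_id = host, device_id
    def ioctl(self, request):
        # Forward the guest application's interaction to the host-side driver.
        return self.host.dispatch(self.device_id, request)

proxy = GuestProxyDriver(Host(), "camera0")
print(proxy.ioctl("capture_frame"))
```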