Patents Examined by Gregory Kessler
-
Patent number: 11487578
Abstract: Methods and systems for dynamically scheduling data processing are disclosed. An example method includes: identifying a data model to be built, the data model being associated with a data model definition defining input data to be used in building that data model; determining a size of the input data; obtaining an expected access time for the data model; estimating a total time required for building the data model based on the size of the input data and the definition of the data model; determining a time to start building the data model based on the expected access time for the data model and the estimated total time required to build the data model; and scheduling the building of the data model to start at the determined time.
Type: Grant
Filed: September 14, 2020
Date of Patent: November 1, 2022
Assignee: SHOPIFY INC.
Inventor: Mohammad Zeeshan Qureshi
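The abstract above describes working backwards from an expected access time: estimate the build duration, then subtract it from the access time to get the start time. A minimal sketch of that arithmetic is below; the throughput constant, `complexity_factor`, and safety margin are illustrative assumptions, not details from the patent.

```python
from datetime import datetime, timedelta

# Assumed per-row build throughput; a real system would derive this
# from the data model definition and historical build runs.
ROWS_PER_SECOND = 50_000

def estimate_build_seconds(input_rows: int, complexity_factor: float = 1.0) -> float:
    """Estimate the total build time from the input size and model definition."""
    return (input_rows / ROWS_PER_SECOND) * complexity_factor

def schedule_build_start(expected_access_time: datetime,
                         input_rows: int,
                         complexity_factor: float = 1.0,
                         safety_margin: timedelta = timedelta(minutes=10)) -> datetime:
    """Work backwards from the expected access time so the model is ready on time."""
    build_time = timedelta(seconds=estimate_build_seconds(input_rows, complexity_factor))
    return expected_access_time - build_time - safety_margin
```
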
-
Patent number: 11474881
Abstract: Aspects of the disclosure relate to providing and maintaining efficient and effective processing of sets of work items in enterprise computing environments by optimizing distributed and parallelized batch data processing. A computing platform may initialize at least two processing workers. Subsequently, the computing platform may cause a first processing worker to perform a first query on a work queue database and initiate parallel processing of a first set of work items. Thereafter, the computing platform may cause the second processing worker to perform a second query on the work queue database and initiate parallel processing of a second set of work items. In some instances, performing the second query on the work queue database comprises reading at least one work item that was read and locked by the first processing worker.
Type: Grant
Filed: March 31, 2021
Date of Patent: October 18, 2022
Assignee: Bank of America Corporation
Inventor: Marcus Matos
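The claim-and-lock pattern above, including the detail that the second worker's query may read items already locked by the first, can be sketched with an in-memory stand-in for the work queue database. All names here are illustrative; the real system queries a database, not a dictionary.

```python
import threading

class WorkQueue:
    """Toy in-memory stand-in for the work queue database."""

    def __init__(self, items):
        self._lock = threading.Lock()
        self._items = {i: {"payload": p, "locked_by": None}
                       for i, p in enumerate(items)}

    def claim(self, worker_id, batch_size, allow_locked=False):
        """Read (and lock, if unlocked) a batch of work items.

        With allow_locked=True the query may also return items already
        read and locked by another worker, mirroring the second worker's
        query described in the abstract."""
        claimed = []
        with self._lock:
            for item_id, item in self._items.items():
                if len(claimed) >= batch_size:
                    break
                if item["locked_by"] is None or allow_locked:
                    if item["locked_by"] is None:
                        item["locked_by"] = worker_id
                    claimed.append(item_id)
        return claimed
```
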
-
Patent number: 11474882
Abstract: The system of the present technology includes an embodiment that provides a host audio, video and control operating system configured to establish or interact with one or more virtual machines, each with a guest operating system.
Type: Grant
Filed: January 11, 2021
Date of Patent: October 18, 2022
Assignee: QSC, LLC
Inventor: Gerrit Eimbertus Rosenboom
-
Patent number: 11467870
Abstract: Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task.
Type: Grant
Filed: July 24, 2020
Date of Patent: October 11, 2022
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Anirudh R. Acharya, Michael J. Mantor, Rex Eldon McCrary, Anthony Asaro, Jeffrey Gongxian Cheng, Mark Fowler
-
Patent number: 11467872
Abstract: Availability of future capacity can be determined for various categories of resources. In some embodiments, logistic regression models are used to obtain binary predictions of availability. In this way, a customer or other entity can obtain a determination as to the availability of a specific number and category of resources, or resource instances, at a future period of time. In some embodiments, alternatives may be provided that offer improved characteristics or a higher likelihood of availability. Such approaches can also be used to monitor current and future capacity demands and enable a provider to better plan appropriate adjustments to the physical resources that provide that capacity.
Type: Grant
Filed: August 1, 2019
Date of Patent: October 11, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Gustav Mauer, James Michael Braunstein, Jin Zhang, Scott Sikora
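The binary availability prediction described above reduces to evaluating a trained logistic regression model and thresholding the probability. A minimal sketch, assuming coefficients learned offline from historical capacity data; the feature encoding and threshold are illustrative, not from the patent.

```python
import math

def predict_available(weights, bias, features, threshold=0.5):
    """Binary availability prediction from a logistic regression model.

    `weights`/`bias` stand in for coefficients trained on historical
    capacity data; `features` might encode the resource category, the
    requested instance count, and how far in the future the request is.
    Returns (available, probability)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability >= threshold, probability
```
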
-
Patent number: 11467871
Abstract: A pipeline task verification method and system is disclosed, and may use one or more processors. The method may comprise providing a data processing pipeline specification, wherein the data processing pipeline specification defines a plurality of data elements of a data processing pipeline. The method may further comprise identifying from the data processing pipeline specification one or more tasks defining a relationship between a first data element and a second data element. The method may further comprise receiving for a given task one or more data processing elements intended to receive the first data element and to produce the second data element. The method may further comprise verifying that the received one or more data processing elements receive the first data element and produce the second data element according to the defined relationship.
Type: Grant
Filed: November 30, 2020
Date of Patent: October 11, 2022
Assignee: Palantir Technologies Inc.
Inventor: Kaan Tekelioglu
-
Patent number: 11461135
Abstract: In an approach to dynamically identifying and modifying the parallelism of a particular task in a pipeline, the optimal execution time of each stage in a dynamic pipeline is calculated. The actual execution time of each stage in the dynamic pipeline is measured. Whether the actual time of completion of the data processing job will exceed a threshold is determined. If it is determined that the actual time of completion of the data processing job will exceed the threshold, then additional instances of the stages are created.
Type: Grant
Filed: October 25, 2019
Date of Patent: October 4, 2022
Assignee: International Business Machines Corporation
Inventors: Yannick Saillet, Namit Kabra, Ritesh Kumar Gupta
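The scale-out decision above (project completion time from measured execution rates, add stage instances if the projection exceeds a threshold) can be sketched as follows. The perfect-scaling assumption and all names are illustrative; the patent does not specify how the new instance count is computed.

```python
import math

def scale_stage(items_remaining, secs_per_item, instances, deadline_secs):
    """Return the number of instances a pipeline stage should run so the
    projected completion time stays under the deadline.

    Assumes throughput scales linearly with the instance count (a
    simplification); `secs_per_item` is the measured per-item execution
    time of one instance."""
    projected = items_remaining * secs_per_item / instances
    if projected <= deadline_secs:
        return instances  # on track, no change
    # Smallest instance count that brings the projection under the deadline.
    return math.ceil(items_remaining * secs_per_item / deadline_secs)
```
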
-
Patent number: 11461136
Abstract: In a substrate processing device, a technique that achieves OR transfer satisfying a specified condition is provided. The substrate processing device includes an identifying unit configured to refer to usage states of a plurality of process modules at process time slots to identify one or more process modules that can execute processes of substrates included in a control job to be processed among process modules that can be used at the process time slots, a calculating unit configured to assign the processes of the substrates to respective process time slots of the one or more process modules identified by the identifying unit to calculate a time duration from a start to an end of the processes of the substrates, and a determining unit configured to determine start timing of starting the processes of the substrates so that the time duration calculated by the calculating unit satisfies a specified condition.
Type: Grant
Filed: June 20, 2019
Date of Patent: October 4, 2022
Assignee: Tokyo Electron Limited
Inventor: Ken Watanabe
-
Patent number: 11461134
Abstract: Provided is a method for scheduling of tasks for an operating system on a multi-core processor. The method includes receiving a system call for initiating a scheduling operation on a second core and invoking a scheduling instance on the second core, where the scheduling instance notifies the scheduling operation of an incoming high priority task. Further, the method includes deferring a switching context instance at the second core, and deferring the switching context instance at the second core includes unblocking the first core to perform other tasks.
Type: Grant
Filed: January 22, 2021
Date of Patent: October 4, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Anup Manohar Kaveri, Vinayak Hanagandi, Nischal Jain, Rohit Kumar Saraf, Shwetang Singh, Samarth Varshney, Srinivasa Rao Kola, Younjo Oh
-
Patent number: 11461126
Abstract: A method for exchanging data among several enterprise management systems includes receiving and processing data sent by a first system, recording a task of writing data to a second system in a database of an electronic device, and setting the recorded task as an unfinished task. A list of unfinished tasks is acquired from the database at a predetermined time period, and a result of the query can be generated by searching or interrogating the second system as to the list of unfinished tasks. When the second system has finished the task, the task for writing data in the second system is set as finished.
Type: Grant
Filed: October 28, 2019
Date of Patent: October 4, 2022
Assignee: Goldtek Technology Co., Ltd.
Inventors: Yen-Ching Lee, Chun-Chi Chen, Po-Sheng Wang, Chih-Yung Chang
-
Patent number: 11455192
Abstract: A serverless query processing system receives a query and determines whether the query is a recurring query or a non-recurring query. The system may predict, in response to determining that the query is the recurring query, a peak resource requirement during an execution of the query. The system may compute, in response to determining that the query is the non-recurring query, a tight resource requirement corresponding to an amount of resources that satisfy a performance requirement over the execution of the query, where the tight resource requirement is less than the peak resource requirement. The system allocates resources to the query based on an applicable one of the peak resource requirement or the tight resource requirement. The system then starts the execution of the query using the resources.
Type: Grant
Filed: November 27, 2019
Date of Patent: September 27, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Hiren Shantilal Patel, Shi Qiao, Alekh Jindal, Malay Kumar Bag, Rathijit Sen, Carlo Aldo Curino
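The allocation branch above (recurring queries get a predicted peak, non-recurring queries get a tighter allowance) can be sketched as below. The query fingerprint map, the cluster cap, and the 0.6 fraction are illustrative assumptions; the patent computes the tight requirement from a performance requirement, which is stood in for here by a fixed fraction.

```python
# Assumed per-query cap on resource units (containers/tokens).
CLUSTER_CAP = 100

def allocate_resources(query_hash, recurring_peaks, tight_fraction=0.6):
    """Allocate resources to a query.

    `recurring_peaks` maps fingerprints of recurring queries to their
    predicted peak requirement; queries not in the map are treated as
    non-recurring and get the (smaller) tight allowance."""
    if query_hash in recurring_peaks:
        return min(recurring_peaks[query_hash], CLUSTER_CAP)
    return int(CLUSTER_CAP * tight_fraction)
```
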
-
Patent number: 11455194
Abstract: An indication that an allocation unit of a memory sub-system has become unmapped can be received. In response to receiving the indication that the allocation unit of the memory sub-system has become unmapped, the allocation unit can be programmed with a data pattern. Data to be written to the unmapped allocation unit can be received. A write operation can be performed to program the received data at the unmapped allocation unit by using a read voltage that is based on the data pattern.
Type: Grant
Filed: July 12, 2019
Date of Patent: September 27, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Tingjun Xie, Zhengang Chen, Zhenlei Shen
-
Patent number: 11453131
Abstract: A computing device for compatibility in robotic process automation (RPA) includes a memory that includes a plurality of RPA tool driver versions, and a processor communicatively coupled with the memory. Upon the processor receiving a request for a first RPA tool driver version of the plurality of RPA tool driver versions, the processor loads the first RPA tool driver version for processing.
Type: Grant
Filed: December 30, 2019
Date of Patent: September 27, 2022
Assignee: UIPATH, INC.
Inventors: Alexandru Caba, Marius Stan
-
Patent number: 11442784
Abstract: Methods, apparatus, systems, and articles of manufacture to handle dependencies associated with resource deployment requests are disclosed herein. An example apparatus includes a dependency graph generator to generate a dependency graph based on a resource request file specifying a first resource and a second resource to deploy to a resource-based service, the dependency graph representative of the first resource being dependent on the second resource, and a resource controller to send a first request and a second request to the resource-based service based on the dependency graph and, in response to a verification controller determining that a time-based ordering of the first request relative to the second request satisfies the dependency graph, send the first request and the second request to a user device.
Type: Grant
Filed: July 17, 2020
Date of Patent: September 13, 2022
Assignee: VMware, Inc.
Inventors: Sergio Sanchez, Georgi Muleshkov, Stanislav Asenov Hadjiiski, Miroslav Shipkovenski, Radostin Georgiev
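The two pieces of the apparatus above — ordering requests from a dependency graph, and verifying that a time-based ordering satisfies that graph — map naturally onto a topological sort plus a precedence check. A minimal sketch using the standard library's `graphlib` (Python 3.9+); the dict-of-dependencies representation is an assumption.

```python
from graphlib import TopologicalSorter

def deployment_order(resources):
    """Order deployment requests so each resource is requested only after
    the resources it depends on. `resources` maps a resource name to the
    set of resource names it depends on."""
    return list(TopologicalSorter(resources).static_order())

def satisfies_dependencies(order, resources):
    """Verify that a time-based ordering of requests respects the graph:
    every dependency must appear before the resource that needs it."""
    position = {name: i for i, name in enumerate(order)}
    return all(position[dep] < position[name]
               for name, deps in resources.items() for dep in deps)
```
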
-
Patent number: 11429437
Abstract: An arbitration between a plurality of flows for access to a shared resource is disclosed. The plurality of flows may be associated with a single channel or multiple channels. When the plurality of flows are associated with a single channel, one flow is selected from the plurality of flows for accessing the shared resource based on flow priority levels associated with flows that are currently arbitrating for the access. Flow data associated with the selected flow is then outputted for granting the access. When the plurality of flows are associated with multiple channels, a flow associated with each channel is selected based on the flow priority levels. Further, a channel is selected based on channel priority levels of channels that are currently arbitrating for the access. Based on the selected channel, flow data associated with one of the selected flows is outputted for granting the access to the shared resource.
Type: Grant
Filed: August 25, 2020
Date of Patent: August 30, 2022
Assignee: NXP USA, Inc.
Inventors: Arvind Kaushik, Puneet Khandelwal, Pradeep Singh
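The multi-channel case above is a two-level priority selection: pick the highest-priority flow within each channel, then pick among channels by channel priority. A minimal sketch, assuming all listed flows and channels are currently arbitrating and that higher numbers mean higher priority (the hardware arbiter's tie-breaking rules are not modeled).

```python
def arbitrate(channels):
    """Two-level priority arbitration.

    `channels` maps a channel name to (channel_priority, flows), where
    `flows` maps a flow name to its flow priority. Returns the
    (channel, flow) pair granted access to the shared resource."""
    # Per-channel selection: highest-priority flow in each channel.
    winners = {name: max(flows, key=flows.get)
               for name, (_, flows) in channels.items()}
    # Cross-channel selection: highest-priority channel wins.
    best_channel = max(channels, key=lambda c: channels[c][0])
    return best_channel, winners[best_channel]
```
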
-
Patent number: 11422857
Abstract: Embodiments described herein provide multi-level scheduling for threads in a data processing system. One embodiment provides a data processing system comprising one or more processors, a computer-readable memory coupled to the one or more processors, the computer-readable memory to store instructions which, when executed by the one or more processors, configure the one or more processors to receive execution threads for execution on the one or more processors, map the execution threads into a first plurality of buckets based at least in part on a quality of service class of the execution threads, schedule the first plurality of buckets for execution using a first scheduling algorithm, schedule a second plurality of thread groups within the first plurality of buckets for execution using a second scheduling algorithm, and schedule a third plurality of threads within the second plurality of thread groups using a third scheduling algorithm.
Type: Grant
Filed: May 22, 2020
Date of Patent: August 23, 2022
Assignee: Apple Inc.
Inventors: Kushal Dalmia, Jeremy C. Andrus, Daniel A. Chimene, Nigel R. Gamble, James M. Magee, Daniel A. Steffen
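The three-level hierarchy above (QoS buckets, thread groups within buckets, threads within groups) can be illustrated with the sketch below. The QoS class names, the bucket mapping, and the per-level policies (strict bucket priority, smallest group first, FIFO within a group) are all stand-ins; the patent only requires that each level use its own scheduling algorithm.

```python
from collections import defaultdict

# Assumed QoS classes, mapped to bucket indices (0 = highest priority).
QOS_TO_BUCKET = {"user-interactive": 0, "user-initiated": 1,
                 "utility": 2, "background": 3}

def build_buckets(threads):
    """Level 1: map threads into buckets by QoS class, keeping the
    thread-group structure within each bucket."""
    buckets = defaultdict(lambda: defaultdict(list))
    for t in threads:
        buckets[QOS_TO_BUCKET[t["qos"]]][t["group"]].append(t["name"])
    return buckets

def pick_next(buckets):
    """Pick one thread: strict priority across buckets (level 1), then
    the thread group with the fewest queued threads (stand-in level-2
    policy), then FIFO within the group (stand-in level-3 policy)."""
    bucket = buckets[min(buckets)]
    group = min(bucket, key=lambda g: len(bucket[g]))
    return bucket[group][0]
```
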
-
Patent number: 11416312
Abstract: Embodiments disclosed herein are related to implementing a near-real-time stream processing system using the same distributed file system as a batch processing system. A data container and partition files are generated according to a partition window that specifies a time range that controls when data is to be included in the partition files. The data container is scanned to determine if the partition files are within a partition lifetime window that specifies a time range that controls how long the partition files are active for processing. For each partition file within the lifetime window, processing tasks are created based on an amount of data included in the partition files. The data in the partition files is accessed and the processing tasks are performed. Information about the partition files is recorded in a configuration data store.
Type: Grant
Filed: February 12, 2021
Date of Patent: August 16, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Xu Liu, Steve Chun-Hao Hu, Abhishank Sahu, Yingji Ju, Gunter Leeb, Jose Fernandez, Swadhin Ajay Thakkar, William Edward Miao, Sravanthi Pereddy, Jordan Robert Fitzgibbon, Raveena Dayani
-
Patent number: 11416311
Abstract: Approaches in accordance with various embodiments can reduce scheduling delays due to concurrent processing requests, as may involve VSyncs in multi-streaming systems. The software synchronization signals can be staggered relative to each other by offsetting an initial synchronization signal. These software synchronization signals can be readjusted over time such that each synchronization signal maintains the same relative offset with respect to other applications or containers.
Type: Grant
Filed: November 5, 2020
Date of Patent: August 16, 2022
Assignee: NVIDIA CORPORATION
Inventors: Bimal Poddar, Donghan Ryu, Michael Gold, Samuel Reed Koser, Xiao Bo Zhao Zhang
-
Patent number: 11409578
Abstract: A computer-implemented method and system for resilient adaptive biased locking. The method includes adding, in a system including an adaptive lock reservation scheme having a learning state, a component comprising a per class counter that counts, collectively, a number of learning failures and a number of revocation failures. An embodiment includes initializing the per class counter upon loading a class with a predetermined value representing at least one of a maximum number of learning failures and cancellation instances associated with the class. An embodiment includes initializing, based on a determination of an operational state of the per class counter for an object transitioning from one of the learning state and a biased state to a flatlock state, a lock word of the object directly to the flatlock state while bypassing the biased state.
Type: Grant
Filed: November 27, 2019
Date of Patent: August 9, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Andrew James Craik
-
Patent number: 11409566
Abstract: A process control unit (41) causes a plurality of control-target processes to operate in a memory area of a size equal to or smaller than a limiting value x. When a stopping process is detected, a resource allocation unit (43) allocates a size of a usable memory area for each of the control-target processes as a relaxed limiting value. When the stopping process is detected, the process control unit (41) causes each of the control-target processes to perform fallback operation in a memory area of a size equal to or smaller than the relaxed limiting value allocated to the process by the resource allocation unit (43).
Type: Grant
Filed: February 28, 2018
Date of Patent: August 9, 2022
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventors: Yuya Ono, Hirotaka Motai, Masahiro Deguchi, Shinichi Ochiai, Hiroki Konaka, Shunsuke Nishio, Toshiaki Tomisawa
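The limit-relaxation step above can be sketched as redistributing a stopped process's memory allowance across the surviving control-target processes. The proportional split used here is an assumption; the abstract only says a relaxed limiting value is allocated to each process.

```python
def relax_limits(limits, stopped):
    """Recompute per-process memory limits after `stopped` exits.

    `limits` maps process name to its current limiting value; the
    stopped process's allowance is redistributed across the survivors
    in proportion to their current limits (an illustrative policy),
    yielding each survivor's relaxed limiting value."""
    freed = limits[stopped]
    survivors = {p: v for p, v in limits.items() if p != stopped}
    total = sum(survivors.values())
    return {p: v + freed * v / total for p, v in survivors.items()}
```
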