Patents Examined by Gregory A Kessler
  • Patent number: 11455194
    Abstract: An indication that an allocation unit of a memory sub-system has become unmapped can be received. In response to receiving the indication that the allocation unit of the memory sub-system has become unmapped, the allocation unit can be programmed with a data pattern. Data to be written to the unmapped allocation unit can be received. A write operation can be performed to program the received data at the unmapped allocation unit by using a read voltage that is based on the data pattern.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: September 27, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Tingjun Xie, Zhengang Chen, Zhenlei Shen
  • Patent number: 11453131
    Abstract: A computing device for compatibility in robotic process automation (RPA) includes a memory that includes a plurality of RPA tool driver versions, and a processor communicatively coupled with the memory. Upon the processor receiving a request for a first RPA tool driver version of the plurality of RPA tool driver versions, the processor loads the first RPA tool driver version for processing.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: September 27, 2022
    Assignee: UIPATH, INC.
    Inventors: Alexandru Caba, Marius Stan
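    The versioned-driver idea above can be sketched as a small registry that keeps several driver versions loaded and returns whichever one a request names. This is an illustrative sketch, not UiPath's implementation; the class and method names are assumptions.

    ```python
    # Hypothetical registry of RPA tool driver versions (names are illustrative).
    class DriverRegistry:
        def __init__(self):
            self._versions = {}  # version string -> driver object

        def register(self, version, driver):
            # Keep multiple driver versions available in memory.
            self._versions[version] = driver

        def load(self, version):
            # Upon a request for a specific driver version, load that
            # version for processing.
            if version not in self._versions:
                raise KeyError(f"driver version {version} not available")
            return self._versions[version]

    registry = DriverRegistry()
    registry.register("19.4", "driver-19.4")
    registry.register("20.10", "driver-20.10")
    assert registry.load("19.4") == "driver-19.4"
    ```
    
    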
  • Patent number: 11455192
    Abstract: A serverless query processing system receives a query and determines whether the query is a recurring query or a non-recurring query. The system may predict, in response to determining that the query is the recurring query, a peak resource requirement during an execution of the query. The system may compute, in response to determining that the query is the non-recurring query, a tight resource requirement corresponding to an amount of resources that satisfy a performance requirement over the execution of the query, where the tight resource requirement is less than the peak resource requirement. The system allocates resources to the query based on an applicable one of the peak resource requirement or the tight resource requirement. The system then starts the execution of the query using the resources.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: September 27, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hiren Shantilal Patel, Shi Qiao, Alekh Jindal, Malay Kumar Bag, Rathijit Sen, Carlo Aldo Curino
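    The allocation decision in the abstract above can be illustrated with a toy sketch: a recurring query gets its predicted peak requirement from history, while a non-recurring query gets a tighter estimate below the peak. The function name, the history shape, and the numbers are assumptions of this sketch, not Microsoft's implementation.

    ```python
    # Illustrative resource-allocation choice for a serverless query system.
    def allocate_resources(query, history):
        """Return the number of resource units to allocate for `query`.

        history: {query_name: [past resource usages]} for recurring queries.
        """
        if query in history:
            # Recurring query: predict the peak requirement from past runs.
            return max(history[query])
        # Non-recurring query: a tight requirement below the peak, sized
        # (hypothetically) to still satisfy the performance target.
        return 4

    history = {"daily_report": [6, 8, 7]}
    assert allocate_resources("daily_report", history) == 8  # peak
    assert allocate_resources("ad_hoc_scan", history) == 4   # tight default
    ```
    
    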
  • Patent number: 11442784
    Abstract: Methods, apparatus, systems, and articles of manufacture to handle dependencies associated with resource deployment requests are disclosed herein. An example apparatus includes a dependency graph generator to generate a dependency graph based on a resource request file specifying a first resource and a second resource to deploy to a resource-based service, the dependency graph representative of the first resource being dependent on the second resource, and a resource controller to send a first request and a second request to the resource-based service based on the dependency graph and, in response to a verification controller determining that a time-based ordering of the first request relative to the second request satisfies the dependency graph, send the first request and the second request to a user device.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: September 13, 2022
    Assignee: VMware, Inc.
    Inventors: Sergio Sanchez, Georgi Muleshkov, Stanislav Asenov Hadjiiski, Miroslav Shipkovenski, Radostin Georgiev
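    The verification step in the abstract above can be sketched as a check that a proposed time-based ordering of requests respects the dependency graph built from the request file. The graph encoding and names are assumptions of this sketch.

    ```python
    # Illustrative dependency check for resource deployment requests.
    def build_graph(request_file):
        # {"lb": ["vm"]} means resource "lb" depends on resource "vm".
        return request_file

    def ordering_satisfies(graph, order):
        """True iff every resource appears after all of its dependencies."""
        position = {r: i for i, r in enumerate(order)}
        return all(position[dep] < position[res]
                   for res, deps in graph.items() for dep in deps)

    graph = build_graph({"vm": [], "lb": ["vm"]})
    assert ordering_satisfies(graph, ["vm", "lb"]) is True
    assert ordering_satisfies(graph, ["lb", "vm"]) is False
    ```
    
    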
  • Patent number: 11429437
    Abstract: An arbitration between a plurality of flows for access to a shared resource is disclosed. The plurality of flows may be associated with a single channel or multiple channels. When the plurality of flows are associated with a single channel, one flow is selected from the plurality of flows for accessing the shared resource based on flow priority levels associated with flows that are currently arbitrating for the access. Flow data associated with the selected flow is then outputted for granting the access. When the plurality of flows are associated with multiple channels, a flow associated with each channel is selected based on the flow priority levels. Further, a channel is selected based on channel priority levels of channels that are currently arbitrating for the access. Based on the selected channel, flow data associated with one of the selected flows is outputted for granting the access to the shared resource.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: August 30, 2022
    Assignee: NXP USA, Inc.
    Inventors: Arvind Kaushik, Puneet Khandelwal, Pradeep Singh
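    The two-level arbitration described above can be modeled in a few lines: select the highest-priority flow within each channel, then select among channels by channel priority. The priority encoding (lower number = higher priority) and the data shape are assumptions of this sketch.

    ```python
    # Illustrative two-level arbitration among flows and channels.
    def arbitrate(channels):
        """channels: {channel: (channel_priority, {flow: flow_priority})}.

        Returns (channel, flow) granted access to the shared resource.
        """
        # Select the channel with the highest (numerically lowest) priority.
        ch = min(channels, key=lambda c: channels[c][0])
        # Within that channel, select the highest-priority flow.
        flows = channels[ch][1]
        return ch, min(flows, key=flows.get)

    grant = arbitrate({
        "ch0": (1, {"f0": 2, "f1": 0}),
        "ch1": (0, {"f2": 1, "f3": 3}),
    })
    assert grant == ("ch1", "f2")
    ```
    
    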
  • Patent number: 11422857
    Abstract: Embodiments described herein provide multi-level scheduling for threads in a data processing system. One embodiment provides a data processing system comprising one or more processors, a computer-readable memory coupled to the one or more processors, the computer-readable memory to store instructions which, when executed by the one or more processors, configure the one or more processors to receive execution threads for execution on the one or more processors, map the execution threads into a first plurality of buckets based at least in part on a quality of service class of the execution threads, schedule the first plurality of buckets for execution using a first scheduling algorithm, schedule a second plurality of thread groups within the first plurality of buckets for execution using a second scheduling algorithm, and schedule a third plurality of threads within the second plurality of thread groups using a third scheduling algorithm.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 23, 2022
    Assignee: Apple Inc.
    Inventors: Kushal Dalmia, Jeremy C. Andrus, Daniel A. Chimene, Nigel R. Gamble, James M. Magee, Daniel A. Steffen
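    A toy model of the three-level structure above (not Apple's scheduler): threads map into QoS buckets, buckets are ordered first, then thread groups within each bucket, then threads within each group. Sorting stands in for the three scheduling algorithms, and all names are illustrative.

    ```python
    # Illustrative bucket -> thread group -> thread scheduling.
    from collections import defaultdict

    def schedule(threads, qos_rank):
        """threads: list of (thread_id, qos_class, group).

        Returns an execution order from three nested selection levels.
        """
        buckets = defaultdict(lambda: defaultdict(list))
        for tid, qos, group in threads:
            buckets[qos][group].append(tid)            # map into QoS buckets
        order = []
        for qos in sorted(buckets, key=qos_rank.get):   # level 1: buckets
            for group in sorted(buckets[qos]):          # level 2: groups
                order.extend(sorted(buckets[qos][group]))  # level 3: threads
        return order

    qos_rank = {"user-interactive": 0, "utility": 1, "background": 2}
    threads = [("t1", "background", "g1"), ("t2", "user-interactive", "g1"),
               ("t3", "user-interactive", "g2"), ("t4", "utility", "g1")]
    assert schedule(threads, qos_rank) == ["t2", "t3", "t4", "t1"]
    ```
    
    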
  • Patent number: 11416312
    Abstract: Embodiments disclosed herein are related to implementing a near-real-time stream processing system using the same distributed file system as a batch processing system. A data container and partition files are generated according to a partition window that specifies a time range that controls when data is to be included in the partition files. The data container is scanned to determine if the partition files are within a partition lifetime window that specifies a time range that controls how long the partition files are active for processing. For each partition file within the lifetime window, processing tasks are created based on an amount of data included in the partition files. The data in the partition files is accessed and the processing tasks are performed. Information about the partition files is recorded in a configuration data store.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: August 16, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Xu Liu, Steve Chun-Hao Hu, Abhishank Sahu, Yingji Ju, Gunter Leeb, Jose Fernandez, Swadhin Ajay Thakkar, William Edward Miao, Sravanthi Pereddy, Jordan Robert Fitzgibbon, Raveena Dayani
  • Patent number: 11416311
    Abstract: Approaches in accordance with various embodiments can reduce scheduling delays due to concurrent processing requests, such as VSyncs in multi-streaming systems. The software synchronization signals can be staggered relative to each other by offsetting an initial synchronization signal. These software synchronization signals can be readjusted over time such that each synchronization signal maintains the same relative offset with respect to other applications or containers.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: August 16, 2022
    Assignee: NVIDIA CORPORATION
    Inventors: Bimal Poddar, Donghan Ryu, Michael Gold, Samuel Reed Koser, Xiao Bo Zhao Zhang
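    The staggering idea above can be sketched numerically: each stream's software VSync receives a distinct initial offset within the frame interval so the signals do not fire concurrently, and every subsequent signal keeps the same relative offset. The even spacing and the numbers are assumptions of this sketch.

    ```python
    # Illustrative staggering of software VSync signals across streams.
    def vsync_times(num_streams, interval_ms, frames):
        """Return {stream: [firing times in ms]} with staggered offsets."""
        stagger = interval_ms / num_streams   # offset between streams
        return {s: [s * stagger + f * interval_ms for f in range(frames)]
                for s in range(num_streams)}

    times = vsync_times(num_streams=2, interval_ms=16.0, frames=2)
    # Stream 1 is offset by half a frame, so no two signals coincide.
    assert times == {0: [0.0, 16.0], 1: [8.0, 24.0]}
    ```
    
    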
  • Patent number: 11409578
    Abstract: A computer-implemented method and system for resilient adaptive biased locking. The method includes adding, in a system including an adaptive lock reservation scheme having a learning state, a component comprising a per class counter that counts, collectively, a number of learning failures and a number of revocation failures. An embodiment includes initializing the per class counter upon loading a class with a predetermined value representing at least one of a maximum number of learning failures and cancellation instances associated with the class. An embodiment includes initializing, based on a determination of an operational state of the per class counter for an object transitioning from one of the learning state and a biased state to a flatlock state, a lock word of the object directly to the flatlock state while bypassing the biased state.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: August 9, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Andrew James Craik
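    A schematic sketch of the per-class counter idea above (names and states are illustrative): learning and revocation failures are counted together per class, and once the budget is exhausted, new lock words for that class are initialized directly to the flatlock state, bypassing the biased state.

    ```python
    # Illustrative per-class failure counter for adaptive biased locking.
    class PerClassCounter:
        def __init__(self, budget):
            # Collective budget for learning + revocation failures.
            self.remaining = budget

        def record_failure(self):
            if self.remaining > 0:
                self.remaining -= 1

        def initial_lock_state(self):
            # Bypass the biased state once the failure budget is spent.
            return "biased" if self.remaining > 0 else "flatlock"

    counter = PerClassCounter(budget=2)
    assert counter.initial_lock_state() == "biased"
    counter.record_failure()
    counter.record_failure()
    assert counter.initial_lock_state() == "flatlock"
    ```
    
    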
  • Patent number: 11409566
    Abstract: A process control unit (41) causes a plurality of control-target processes to operate in a memory area of a size equal to or smaller than a limiting value x. When a stopping process is detected, a resource allocation unit (43) allocates a size of a usable memory area for each of the control-target processes as a relaxed limiting value. When the stopping process is detected, the process control unit (41) causes each of the control-target processes to perform fallback operation in a memory area of a size equal to or smaller than the relaxed limiting value allocated to the process by the resource allocation unit (43).
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: August 9, 2022
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Yuya Ono, Hirotaka Motai, Masahiro Deguchi, Shinichi Ochiai, Hiroki Konaka, Shunsuke Nishio, Toshiaki Tomisawa
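    The fallback operation above can be illustrated with a simple redistribution: while all processes run, each is capped at the limiting value x; when one process stops, its budget is reallocated and the survivors continue under relaxed limits. The even split is an assumption of this sketch.

    ```python
    # Illustrative relaxed-limit allocation after a process stops.
    def relaxed_limits(limit_x, processes, stopped):
        """Return {process: relaxed limiting value} for the survivors."""
        survivors = [p for p in processes if p != stopped]
        freed = limit_x                       # budget of the stopped process
        extra = freed // len(survivors)       # redistribute evenly (assumed)
        return {p: limit_x + extra for p in survivors}

    limits = relaxed_limits(limit_x=100, processes=["p1", "p2", "p3"],
                            stopped="p3")
    assert limits == {"p1": 150, "p2": 150}
    ```
    
    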
  • Patent number: 11403139
    Abstract: An information processing device includes: a plurality of threads, each of the plurality of threads being configured to process any of a plurality of tasks, the plurality of tasks being obtained by dividing a job; and a control circuit configured to execute processing when designating a next task in scheduling for the plurality of threads, the processing including inquiring of an assignment destination thread out of the plurality of threads as to whether the next task is to be completed by a scheduled time, and preferentially assigning a task supposed to be completed by the scheduled time in the assignment destination thread, as the next task from among the plurality of tasks.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: August 2, 2022
    Assignee: Fujitsu Limited
    Inventor: Ken Iizawa
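    The inquiry step above can be sketched as follows: before assigning, ask the destination thread whether a candidate task would finish by the scheduled time, and prefer a task it expects to complete. The cost model and all numbers are assumptions of this sketch.

    ```python
    # Illustrative deadline-aware task selection for an assignment thread.
    def pick_next_task(tasks, thread_load, scheduled_time):
        """tasks: {task: duration}. thread_load: work already queued.

        Prefer a task the thread expects to complete by scheduled_time;
        otherwise fall back to the shortest remaining task.
        """
        for task, duration in sorted(tasks.items(), key=lambda kv: kv[1]):
            if thread_load + duration <= scheduled_time:  # thread's answer
                return task
        return min(tasks, key=tasks.get)

    tasks = {"a": 7, "b": 3, "c": 5}
    assert pick_next_task(tasks, thread_load=2, scheduled_time=6) == "b"
    # Nothing fits the deadline: fall back to the shortest task.
    assert pick_next_task(tasks, thread_load=6, scheduled_time=6) == "b"
    ```
    
    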
  • Patent number: 11392426
    Abstract: Embodiments of the present disclosure provide multitask parallel processing method and apparatus, a computer device and a storage medium. The method is applied to a neural network consisting of a plurality of nodes, the neural network including at least one closed-loop path, and the method includes: inputting a data sequence to be computed into the neural network in a form of data packets, each of the data packets including multiple pieces of data; and computing, by the nodes in the closed-loop path, all the data in a currently received data packet each time a computation flow is started.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: July 19, 2022
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Yangshu Shen, Yaolong Zhu, Wei He, Luping Shi
  • Patent number: 11392388
    Abstract: Provided is a process for determining a number of parallel threads for a request. The process involves receiving availability data regarding processing resources, wherein the availability data indicates which processing resources are idle or are to become idle. Based on the availability data, a number of parallel threads for the request is determined.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: July 19, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Mahesh Kumar Behera, Prasanna Venkatesh Ramamurthi, Antoni Wolski
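    The determination above can be sketched directly: count the processing resources the availability data marks as idle or soon-to-be idle, and size the degree of parallelism from that count. The state labels and the cap are assumptions of this sketch.

    ```python
    # Illustrative thread-count selection from resource availability data.
    def parallel_threads(availability, max_threads=8):
        """availability: {resource: state}. Returns a thread count >= 1."""
        usable = [c for c, state in availability.items()
                  if state in ("idle", "idle-soon")]
        return max(1, min(len(usable), max_threads))

    availability = {"cpu0": "busy", "cpu1": "idle",
                    "cpu2": "idle-soon", "cpu3": "busy"}
    assert parallel_threads(availability) == 2
    assert parallel_threads({"cpu0": "busy"}) == 1  # always at least one
    ```
    
    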
  • Patent number: 11385935
    Abstract: Various embodiments provide an electronic device and a method, the electronic device comprising: a memory; a first processor; a second processor which has attributes different from those of the first processor; and a control unit, wherein the control unit is configured to identify a task loaded into the memory, select which of the first processor and the second processor is to execute the task, on the basis of attribute information corresponding to a user interaction associated with the task, and allocate the task to the selected processor. Other embodiments are also possible.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: July 12, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kiljae Kim, Jaeho Kim, Daehyun Cho
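    The processor-selection step above can be illustrated with a small rule: route a task to the first or second processor based on attribute information tied to the user interaction. The attribute names and the routing rule are assumptions of this sketch.

    ```python
    # Illustrative task routing between two processors with different attributes.
    def select_processor(task_attrs):
        # Hypothetical rule: latency-sensitive, user-facing interactions go
        # to the first processor; everything else to the second.
        if task_attrs.get("user_facing") and task_attrs.get("latency_sensitive"):
            return "first_processor"
        return "second_processor"

    assert select_processor({"user_facing": True,
                             "latency_sensitive": True}) == "first_processor"
    assert select_processor({"user_facing": False}) == "second_processor"
    ```
    
    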
  • Patent number: 11385932
    Abstract: An electronic apparatus includes: a memory; a storage; and a processor, wherein: the electronic apparatus is configured to execute a plurality of processes as data of the plurality of processes is loaded into the memory based on execution of at least one program stored in the storage, the processor is configured to: identify a function currently running among a plurality of functions providable by the electronic apparatus, and based on a relationship between the plurality of processes and the identified function, terminate at least one process among the plurality of running processes, and allow a storage area of the memory loaded with the data of the terminated process to be available for another process.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: July 12, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Changhyeon Chae, Jusun Song, Jaehoon Jeong, Jihun Jung, Jaeook Kwon, Seokjae Jeong, Youngho Choi, Cheulhee Hahm
  • Patent number: 11379272
    Abstract: The allocation system comprises an interface and a processor. The interface is configured to receive an indication to deactivate idle cluster machines of a set of cluster machines. The processor is configured to determine a list of cluster machines storing one or more intermediate data files of a set of intermediate data files; determine a set of idle cluster machines of the set of cluster machines that are neither running one or more tasks of a set of tasks executing or pending on the set of cluster machines nor storing the one or more intermediate data files of the set of intermediate data files, where the set of intermediate data files is associated with the set of tasks executing or pending on the cluster machines; and deactivate each cluster machine of the set of idle cluster machines.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: July 5, 2022
    Assignee: Databricks Inc.
    Inventors: Srinath Shankar, Eric Keng-Hao Liang
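    The deactivation rule above reduces to set logic: a machine is deactivated only if it is neither running a task nor storing an intermediate data file associated with the executing or pending tasks. A minimal sketch, with all names assumed:

    ```python
    # Illustrative selection of idle cluster machines to deactivate.
    def machines_to_deactivate(cluster, running, storing_intermediate):
        """All arguments are sets of machine ids.

        A machine is idle (deactivatable) iff it is neither running a task
        nor storing an intermediate data file for those tasks.
        """
        return cluster - (running | storing_intermediate)

    cluster = {"m1", "m2", "m3", "m4"}
    running = {"m1"}
    storing = {"m2"}
    assert machines_to_deactivate(cluster, running, storing) == {"m3", "m4"}
    ```
    
    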
  • Patent number: 11372691
    Abstract: In a data processor that executes programs to perform data processing operations for groups of execution threads, when the threads of a thread group are all to process a same, common input data value, different portions of the common input data value are loaded into respective registers of different threads of the group of threads, such that the common input data value is stored in a distributed fashion across registers of plural different threads of the thread group. Then, when the threads of the thread group are to process a portion of the common input data value, the portion is provided from the thread that stores it to all the threads in the thread group.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: June 28, 2022
    Assignee: Arm Limited
    Inventors: Tord Kvestad Oygard, Samuel Martin
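    A toy model of the register-distribution idea above: a common input value is split into portions stored one per thread, and when the group needs portion i, the owning thread broadcasts its copy to all threads. The round-robin layout is an assumption of this sketch.

    ```python
    # Illustrative distribution of a common value across per-thread registers.
    def distribute(common_value, num_threads):
        """Split a list of words round-robin across per-thread registers."""
        return {t: common_value[t::num_threads] for t in range(num_threads)}

    def broadcast(registers, portion_index, num_threads):
        """Return portion `portion_index` from the thread that stores it."""
        owner = portion_index % num_threads
        slot = portion_index // num_threads
        return registers[owner][slot]   # provided to every thread in the group

    regs = distribute([10, 20, 30, 40, 50, 60], num_threads=2)
    assert regs == {0: [10, 30, 50], 1: [20, 40, 60]}
    assert broadcast(regs, portion_index=3, num_threads=2) == 40
    ```
    
    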
  • Patent number: 11366698
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for efficiently using computing resources when responding to content requests. Methods include using a prioritization model and a specified threshold specifying the maximum allowable negative outcome for a content provider, to determine whether a received content request is a low priority request. Methods further include throttling access to computing resources to respond to low priority requests, while providing access to computing resources for other content requests. Methods also include regularly updating the prioritization model and the specified threshold based on data for a new set of content requests.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: June 21, 2022
    Assignee: Google LLC
    Inventors: Jiefu Zheng, Hossein Karkeh Abadi
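    The throttling behavior above can be sketched with a scoring model and a threshold: requests the model scores below the threshold are treated as low priority, and their access to computing resources is throttled once a budget is exhausted. The scores, threshold, and budget mechanism are assumptions of this sketch.

    ```python
    # Illustrative throttling of low-priority content requests.
    def handle_request(request, model_score, threshold, budget):
        """budget: remaining slots for low-priority work this interval."""
        if model_score(request) >= threshold:
            return "serve"                 # normal access to resources
        if budget["low_priority"] > 0:
            budget["low_priority"] -= 1    # limited access for low priority
            return "serve"
        return "throttle"

    score = {"r1": 0.9, "r2": 0.1, "r3": 0.2}.get
    budget = {"low_priority": 1}
    assert handle_request("r1", score, 0.5, budget) == "serve"
    assert handle_request("r2", score, 0.5, budget) == "serve"
    assert handle_request("r3", score, 0.5, budget) == "throttle"
    ```
    
    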
  • Patent number: 11361253
    Abstract: A modularized model interaction system and method of use, including an orchestrator, a set of hardware modules each including a standard set of hardware submodules with hardware-specific logic, and a set of model modules each including a standard set of model submodules with model-specific logic. In operation, the orchestrator determines a standard set of submodule calls to the standard submodules of a given hardware module and model module to implement model interaction on hardware associated with the hardware module.
    Type: Grant
    Filed: August 18, 2021
    Date of Patent: June 14, 2022
    Assignee: Grid.ai, Inc.
    Inventors: Williams Falcon, Adrian Wälchli, Thomas Henri Marceau Chaton, Sean Presley Narenthiran
  • Patent number: 11360815
    Abstract: An electronic apparatus includes: a memory; a storage; and a processor, wherein: the electronic apparatus is configured to execute a plurality of processes as data of the plurality of processes is loaded into the memory based on execution of at least one program stored in the storage, the processor is configured to: identify a function currently running among a plurality of functions providable by the electronic apparatus, and based on a relationship between the plurality of processes and the identified function, terminate at least one process among the plurality of running processes, and allow a storage area of the memory loaded with the data of the terminated process to be available for another process.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: June 14, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Changhyeon Chae, Jusun Song, Jaehoon Jeong, Jihun Jung, Jaeook Kwon, Seokjae Jeong, Youngho Choi, Cheulhee Hahm