Patents Examined by Jonathan R Labud
  • Patent number: 12346724
    Abstract: Disclosed here are systems and methods that allow users, upon detecting errors within a running workflow, to either 1) pause the workflow and directly correct its design before resuming the workflow, or 2) pause the workflow, correct the erroneous action within the workflow, resume running the workflow, and afterwards apply the corrections to the design of the workflow. The disclosure comprises functionality that pauses a single workflow and other relevant workflows as soon as the error is detected and while it is corrected. The disclosed systems and methods improve communication technology between the networks and servers of separate parties that are relevant to and/or dependent on successful execution of other workflows.
    Type: Grant
    Filed: February 22, 2024
    Date of Patent: July 1, 2025
    Assignee: Nintex USA, Inc.
    Inventors: Joshua Joo Hou Tan, Alain Marie Patrice Gentilhomme
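Not part of the patent text: a minimal Python sketch of the pause-correct-resume flow the abstract above describes. The names (Workflow, run_with_repair) and the error-handling details are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: run workflow actions in order; on an error, pause,
# obtain a corrected action, resume with it, and afterwards fold the
# correction back into the workflow's design.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Workflow:
    name: str
    actions: List[Callable[[dict], None]] = field(default_factory=list)
    paused: bool = False


def run_with_repair(wf: Workflow, ctx: dict,
                    repair: Callable[[Exception, int], Callable[[dict], None]]):
    corrections: Dict[int, Callable[[dict], None]] = {}
    for i, action in enumerate(wf.actions):
        try:
            action(ctx)
        except Exception as exc:        # error detected in the running workflow
            wf.paused = True            # pause while the action is corrected
            fixed = repair(exc, i)      # user supplies the corrected action
            fixed(ctx)                  # resume by running the correction
            corrections[i] = fixed
            wf.paused = False
    for i, fixed in corrections.items():    # afterwards, update the design
        wf.actions[i] = fixed
    return ctx


wf = Workflow("demo", [lambda c: c.update(a=1),
                       lambda c: c.update(b=1 / c["divisor"])])
print(run_with_repair(wf, {"divisor": 0},
                      repair=lambda exc, i: (lambda c: c.update(b=0.0))))
```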
  • Patent number: 12340247
    Abstract: An electronic device is provided. The electronic device includes a memory, and a processor including a resource management unit and a neural processing unit. The processor may be configured to obtain an execution request for a specific function operating based on a specific neural network model, identify an available bandwidth of the memory through the resource management unit, and quantize the specific neural network model based on the available bandwidth of the memory through the neural processing unit.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: June 24, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Junhyuk Lee, Hyunbin Park, Seungjin Yang, Jin Choi
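Not part of the patent text: a minimal Python sketch of bandwidth-driven quantization as the abstract above describes it. The bandwidth thresholds and bit-widths are invented for illustration.

```python
# Hypothetical sketch: pick a quantization level for a model based on the
# memory bandwidth currently available, as reported by a resource manager.
from dataclasses import dataclass


@dataclass
class ResourceManager:
    total_bandwidth_gbps: float
    used_bandwidth_gbps: float

    def available_bandwidth(self) -> float:
        return self.total_bandwidth_gbps - self.used_bandwidth_gbps


def choose_quantization(rm: ResourceManager) -> int:
    """Return a weight bit-width: less available bandwidth means more
    aggressive quantization so the model's memory traffic fits the budget."""
    avail = rm.available_bandwidth()
    if avail > 20.0:
        return 16      # plenty of headroom: keep fp16 weights
    if avail > 8.0:
        return 8       # moderate pressure: int8
    return 4           # tight budget: int4


def run_model(model_name: str, rm: ResourceManager) -> str:
    bits = choose_quantization(rm)
    return f"executing {model_name} with {bits}-bit weights"


print(run_model("keyword_spotter", ResourceManager(25.6, 20.0)))
```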
  • Patent number: 12326913
    Abstract: A computer-implemented method, computer program product and computing system for defining a data description model and a function description model corresponding to a website on one or more of a plurality of machine-accessible public computing platforms; processing a complex task to define a plurality of discrete tasks each having a discrete goal; executing the plurality of discrete tasks on the plurality of machine-accessible public computing platforms; determining if any of the plurality of discrete tasks failed to achieve its discrete goal; and if a specific discrete task failed to achieve its discrete goal, defining a substitute discrete task having a substitute discrete goal.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: June 10, 2025
    Assignee: GROKIT DATA, INC.
    Inventors: James A. Harding, Anthony J. Paquin, Scott Thibault, Jason A. Boatman
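Not part of the patent text: a minimal Python sketch of the decompose-execute-substitute loop in the abstract above. DiscreteTask and execute_plan are hypothetical names, and the web-automation details are reduced to booleans.

```python
# Hypothetical sketch: split a complex task into discrete tasks with discrete
# goals, run each, and swap in a substitute task when a goal is not achieved.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class DiscreteTask:
    goal: str
    run: Callable[[], bool]                 # returns True if the goal was achieved
    substitute: Optional["DiscreteTask"] = None


def execute_plan(tasks: List[DiscreteTask]) -> List[str]:
    achieved = []
    for task in tasks:
        if task.run():
            achieved.append(task.goal)
        elif task.substitute and task.substitute.run():
            achieved.append(task.substitute.goal)   # substitute goal reached instead
        else:
            achieved.append(f"FAILED: {task.goal}")
    return achieved


plan = [
    DiscreteTask("fetch price from site A", run=lambda: False,
                 substitute=DiscreteTask("fetch price from site B", run=lambda: True)),
    DiscreteTask("submit order form", run=lambda: True),
]
print(execute_plan(plan))
```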
  • Patent number: 12327144
    Abstract: Techniques described herein relate to a method for managing a distributed multi-tiered computing (DMC) environment. The method includes obtaining, by an endpoint controller associated with a device, an initial resource buffer from a local controller; in response to obtaining the initial resource buffer: maintaining the initial resource buffer during task provision for the device; obtaining device metrics based on performance of tasks on the device; making a determination that a resource buffer change event is identified; and in response to the determination: updating the initial resource buffer based on the resource buffer change event.
    Type: Grant
    Filed: April 15, 2022
    Date of Patent: June 10, 2025
    Assignee: Dell Products L.P.
    Inventors: William Jeffery White, Said Tabet
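Not part of the patent text: a minimal Python sketch of an endpoint controller that maintains a resource buffer and resizes it on a buffer-change event, per the abstract above. The metric and the doubling policy are illustrative assumptions.

```python
# Hypothetical sketch: keep the initial resource buffer during task provision,
# record device metrics, and update the buffer when a change event is detected.
from dataclasses import dataclass


@dataclass
class ResourceBuffer:
    cpu_cores: float
    memory_mb: int


class EndpointController:
    def __init__(self, initial: ResourceBuffer):
        self.buffer = initial                 # maintained during task provision
        self.metrics = []                     # device metrics from executed tasks

    def record_metric(self, cpu_utilization: float):
        self.metrics.append(cpu_utilization)

    def buffer_change_event(self) -> bool:
        # e.g. sustained high utilization means the buffer is too small
        recent = self.metrics[-3:]
        return len(recent) == 3 and min(recent) > 0.9

    def maybe_update_buffer(self):
        if self.buffer_change_event():
            self.buffer = ResourceBuffer(self.buffer.cpu_cores * 2,
                                         self.buffer.memory_mb * 2)


ctl = EndpointController(ResourceBuffer(cpu_cores=1.0, memory_mb=512))
for util in (0.95, 0.97, 0.93):
    ctl.record_metric(util)
ctl.maybe_update_buffer()
print(ctl.buffer)
```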
  • Patent number: 12314775
    Abstract: Embodiments described herein relate to methods and apparatuses for selecting a first virtualisation engine to execute an application deployment request. A method in a selection engine (104, 700) comprises receiving (300) an application deployment request (101) comprising an identification of an application image (102); selecting (306) the first virtualisation engine from a plurality of virtualisation engines based on a plurality of respective values of at least one characteristic associated with execution of the application image by each of the plurality of virtualisation engines; and initiating (308) execution of the application image by the first virtualisation engine.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: May 27, 2025
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Dániel Géhberger, András Császár, Dávid Kovács
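Not part of the patent text: a minimal Python sketch of selecting a virtualisation engine by a per-engine characteristic value, as the abstract above outlines. The characteristic used here (expected startup latency) and the numbers are assumptions.

```python
# Hypothetical sketch: pick the virtualisation engine whose measured
# characteristic (expected startup latency for this image) is best, then
# report where execution of the application image would be initiated.
from typing import Dict


def select_engine(image_id: str, startup_ms: Dict[str, float]) -> str:
    """Choose the engine with the lowest expected startup latency
    for the given application image."""
    return min(startup_ms, key=startup_ms.get)


def deploy(image_id: str, engines: Dict[str, float]) -> str:
    engine = select_engine(image_id, engines)
    return f"starting {image_id} on {engine}"


# characteristic values per engine for this image (illustrative numbers)
print(deploy("web-frontend:1.4", {"kvm": 900.0, "containerd": 120.0, "firecracker": 150.0}))
```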
  • Patent number: 12293208
    Abstract: The disclosure provides an approach for device redirection in a remote computing environment. Embodiments include receiving, at a remote device from a client device over a network, input data of a peripheral device associated with the client device. Embodiments include receiving, at an emulated device running on the remote device, a request for device data from an application running on the remote device. Embodiments include responding, by the emulated device to the application, to the request with a response message having a format associated with the request, the response message being based on the input data. Embodiments include transmitting, from the remote device to the client device over the network, image data representing the application running on the remote device as controlled based on the input data.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: May 6, 2025
    Assignee: Omnissa, LLC
    Inventors: Zhongzheng Tu, Joe Huiyong Huo, Mingsheng Zang, Jinxing Hu, Yueting Zhang
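Not part of the patent text: a minimal Python sketch of the emulated-device side of the redirection described in the abstract above. The event queue, request format field, and encodings are illustrative assumptions.

```python
# Hypothetical sketch: a remote-side emulated device buffers peripheral input
# forwarded from the client, and answers an application's read request in the
# format the request expects.
import json
from collections import deque


class EmulatedDevice:
    def __init__(self):
        self._events = deque()

    def on_client_input(self, raw_event: dict):
        """Called when input data arrives from the client over the network."""
        self._events.append(raw_event)

    def handle_request(self, request: dict) -> bytes:
        """Respond to the application in the format associated with the request."""
        event = self._events.popleft() if self._events else {}
        if request.get("format") == "json":
            return json.dumps(event).encode()
        # fall back to a simple key=value byte encoding
        return ";".join(f"{k}={v}" for k, v in event.items()).encode()


dev = EmulatedDevice()
dev.on_client_input({"x": 10, "y": 24, "button": "left"})
print(dev.handle_request({"format": "json"}))
```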
  • Patent number: 12293055
    Abstract: Methods and systems for application publishing in a virtualized environment are described herein. A system may facilitate publishing of one or more shortcuts based on inputs made in the virtual desktop environment (e.g., when a user “drag-and-drops” a shortcut onto a publishing icon on a desktop). The system may determine application information and instance information for the application, and may publish a shortcut for that application to the storefront. As a result, users may be permitted to self-publish shortcuts for preferred applications onto personalized storefronts, which may be unique to each user.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: May 6, 2025
    Inventors: Yedong Yu, Yajun Yao
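Not part of the patent text: a minimal Python sketch of the self-publishing flow from the abstract above. How application and instance information are derived here is a stand-in assumption.

```python
# Hypothetical sketch: when a user drops a shortcut on a publishing icon,
# resolve the application and instance information and add a shortcut to
# that user's personalized storefront.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Shortcut:
    app_name: str
    instance_id: str


@dataclass
class Storefront:
    user: str
    shortcuts: List[Shortcut] = field(default_factory=list)


def publish_shortcut(user: str, dropped_path: str,
                     storefronts: Dict[str, Storefront]) -> Shortcut:
    app_name = dropped_path.rsplit("/", 1)[-1]          # application information
    instance_id = f"{user}-{app_name}"                  # instance information
    shortcut = Shortcut(app_name, instance_id)
    storefronts.setdefault(user, Storefront(user)).shortcuts.append(shortcut)
    return shortcut


stores: Dict[str, Storefront] = {}
publish_shortcut("alice", "/apps/office/editor", stores)
print(stores["alice"])
```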
  • Patent number: 12293218
    Abstract: Aspects of the disclosure provide methods and an apparatus including processing circuitry configured to receive workflow information of a workflow. The processing circuitry generates, based on the workflow information, the workflow to process input data. The workflow includes a first processing task, a second processing task, and a first buffering task. The first processing task is caused to enter a running state where a subset of the input data is processed and output to the first buffering task as first processed subset data. The first processing task is caused to transition to a paused state based on an amount of the first processed subset data in the first buffering task being equal to a first threshold. State information of the first processing task is stored in the paused state. Subsequently, the second processing task is caused to enter a running state where the first processed subset data is processed.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: May 6, 2025
    Assignee: TENCENT AMERICA LLC
    Inventor: Iraj Sodagar
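Not part of the patent text: a minimal Python sketch of the buffered, pause-and-resume task scheduling the abstract above describes. The two tasks, the threshold, and the saved state are reduced to simple list operations for illustration.

```python
# Hypothetical sketch: task 1 fills a buffering task until a threshold is hit,
# then pauses (saving its state); task 2 then runs over the buffered data.
from typing import List


def run_stepwise(input_data: List[int], threshold: int) -> List[int]:
    buffer: List[int] = []                  # the first buffering task
    state = {"next_index": 0}               # state saved when task 1 pauses
    output: List[int] = []

    def task1() -> None:                    # runs until the buffer reaches the threshold
        i = state["next_index"]
        while i < len(input_data) and len(buffer) < threshold:
            buffer.append(input_data[i] * 2)
            i += 1
        state["next_index"] = i             # store state for the paused task

    def task2() -> None:                    # consumes what task 1 buffered
        while buffer:
            output.append(buffer.pop(0) + 1)

    while state["next_index"] < len(input_data):
        task1()                             # running -> paused at threshold
        task2()                             # second task enters running state
    return output


print(run_stepwise([1, 2, 3, 4, 5], threshold=2))
```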
  • Patent number: 12293216
    Abstract: Some embodiments provide a system and method to receive, as an input, configuration properties of a group of operators of a data pipeline, the data pipeline including a specified multiplicity greater than one (1); generate, as an output, a configuration for two new operators, including a first new operator and a second new operator; and automatically insert the first new operator and the second new operator into a deployment of the data pipeline, the first new operator being inserted before a number of replicas of the group of operators of the data pipeline corresponding to the specified multiplicity and the second new operator being inserted after the number of replicas of the group of operators of the data pipeline corresponding to the specified multiplicity.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: May 6, 2025
    Assignee: SAP SE
    Inventor: Eric Simon
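Not part of the patent text: a minimal Python sketch of wrapping a replicated operator group with two generated operators, following the abstract above. The splitter/merger names and the replication layout are assumptions.

```python
# Hypothetical sketch: given a pipeline group with multiplicity > 1, generate a
# splitter to insert before the replicas and a merger to insert after them.
from dataclasses import dataclass
from typing import List


@dataclass
class Operator:
    name: str


def expand_group(group: List[Operator], multiplicity: int) -> List[Operator]:
    """Return the deployed operator chain: splitter, replicas, merger."""
    splitter = Operator(f"split_into_{multiplicity}")      # first new operator
    merger = Operator(f"merge_from_{multiplicity}")        # second new operator
    replicas = [Operator(f"{op.name}#r{i}")                # replicated group
                for i in range(multiplicity) for op in group]
    return [splitter, *replicas, merger]


deployed = expand_group([Operator("clean"), Operator("enrich")], multiplicity=3)
print([op.name for op in deployed])
```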
  • Patent number: 12277456
    Abstract: An apparatus comprising neural processors, a command processor, and a shared memory. The command processor receives a context start signal indicating a start of a context of a neural network model from a host system. The command processor determines whether neural network model data is entirely or partially updated based on the context start signal. The command processor updates the neural network model data in the shared memory based on a determination on whether neural network model data is entirely or partially updated based on the context start signal. The command processor generates a plurality of task descriptors describing neural network model tasks based on the neural network model data, and distributes the plurality of task descriptors to the neural processors.
    Type: Grant
    Filed: March 29, 2024
    Date of Patent: April 15, 2025
    Assignee: REBELLIONS INC.
    Inventor: Hongyun Kim
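Not part of the patent text: a minimal Python sketch of the command processor behavior in the abstract above, with shared memory modeled as a dictionary and task distribution as round-robin. All of that modeling is an assumption.

```python
# Hypothetical sketch: on a context-start signal, decide whether the model data
# in shared memory needs a full or partial update, then fan task descriptors
# out to the neural processors.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TaskDescriptor:
    layer: str
    processor_id: int


class CommandProcessor:
    def __init__(self, num_processors: int):
        self.num_processors = num_processors
        self.shared_memory: Dict[str, bytes] = {}        # model data by layer

    def on_context_start(self, signal: dict) -> List[TaskDescriptor]:
        layers = signal["layers"]                        # layer name -> weights
        if signal.get("full_update", False):
            self.shared_memory = dict(layers)            # replace everything
        else:
            self.shared_memory.update(layers)            # patch only changed layers
        # one descriptor per layer, round-robin across neural processors
        return [TaskDescriptor(layer, i % self.num_processors)
                for i, layer in enumerate(self.shared_memory)]


cp = CommandProcessor(num_processors=4)
print(cp.on_context_start({"full_update": True,
                           "layers": {"conv1": b"\x00", "fc": b"\x01"}}))
```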
  • Patent number: 12265857
    Abstract: A method of managing resources is provided in embodiments of the present disclosure. The method includes determining a set of candidate historical requests associated with a target request. Here, the set of candidate historical requests has the same request type and target resource as the target request. The method further includes determining a target request pattern of the target request based on at least one previous request of the target request. The method includes determining a target historical request from the set of candidate historical requests based on the target request pattern. The method includes generating a target response to the target request based on a historical response to the target historical request. In this way, by determining a response to a historical request that has the most similar request pattern to the target request, a simulated response that is more in line with the context can be generated.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: April 1, 2025
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Qi Wang, Ren Wang, Yun Zhang, Ming Zhang, Weiyang Liu
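Not part of the patent text: a minimal Python sketch of choosing a historical request by request-pattern similarity and replaying its response, per the abstract above. The trailing-overlap similarity measure is an assumption.

```python
# Hypothetical sketch: among historical requests with the same type and target
# resource, pick the one whose preceding-request pattern best matches the
# target request's pattern, and reuse its recorded response.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class HistoricalRequest:
    pattern: Tuple[str, ...]       # types of the requests that preceded it
    response: str


def simulate_response(target_pattern: Tuple[str, ...],
                      candidates: List[HistoricalRequest]) -> str:
    def overlap(c: HistoricalRequest) -> int:
        # count matching trailing positions between the two patterns
        return sum(a == b for a, b in zip(reversed(target_pattern),
                                          reversed(c.pattern)))
    best = max(candidates, key=overlap)
    return best.response


history = [
    HistoricalRequest(("LOGIN", "LIST"), response="200 directory listing"),
    HistoricalRequest(("LOGIN", "WRITE"), response="201 created"),
]
print(simulate_response(("LOGIN", "LIST"), history))
```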
  • Patent number: 12260242
    Abstract: Examples for managing virtual infrastructure resources in cloud environments can include (1) instantiating an orchestration node for managing local control planes at multiple clouds, (2) instantiating first and second local control planes at different respective clouds, the first and second local control planes interfacing with different respective virtualized infrastructure managers (“VIMs”), where the first and second local control planes establish secure communication with the orchestration node, and (3) deploying, by the orchestration node, services to the first and second local control planes. Further, the first and second local control planes can cause the respective VIMs to manage the services at the different respective clouds.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: March 25, 2025
    Assignee: VMware LLC
    Inventors: Shruti Parihar, Mark Whipple, Sachin Thakkar, Akshatha Sathyanarayan
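Not part of the patent text: a minimal Python sketch of an orchestration node deploying a service through per-cloud local control planes, as in the abstract above. The secure channel and the VIM interaction are only indicated in comments.

```python
# Hypothetical sketch: an orchestration node registers one local control plane
# per cloud and deploys a service to each; every control plane forwards the
# request to its own virtualized infrastructure manager (VIM).
from typing import Dict, List


class LocalControlPlane:
    def __init__(self, cloud: str, vim: str):
        self.cloud, self.vim = cloud, vim

    def deploy(self, service: str) -> str:
        # in the patent this is carried over a secure channel to the VIM
        return f"{self.vim}@{self.cloud}: managing {service}"


class OrchestrationNode:
    def __init__(self):
        self.planes: Dict[str, LocalControlPlane] = {}

    def register(self, plane: LocalControlPlane) -> None:
        self.planes[plane.cloud] = plane

    def deploy_everywhere(self, service: str) -> List[str]:
        return [plane.deploy(service) for plane in self.planes.values()]


orchestrator = OrchestrationNode()
orchestrator.register(LocalControlPlane("cloud-a", "vim-openstack"))
orchestrator.register(LocalControlPlane("cloud-b", "vim-vsphere"))
print(orchestrator.deploy_everywhere("edge-cache"))
```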
  • Patent number: 12248798
    Abstract: A method and system for determining whether a deployment has been prepared for launch on a cloud. The method includes receiving, by a server computer, a set of associated image templates into a template repository. The method further includes receiving, in the template repository by a processing device of the server computer, a compatible deployable template that is compatible with, and distinct from, the set of associated image templates, wherein the compatible deployable template comprises information for launching a cloud server by starting a plurality of virtual machines from a plurality of virtual machine images together to create the cloud server. The method further includes providing the compatible deployable template.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: March 11, 2025
    Assignee: Red Hat, Inc.
    Inventors: Dan Macpherson, Scott Wayne Seago
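Not part of the patent text: a minimal Python sketch of checking a deployable template against the image templates in a repository, following the abstract above. The readiness check shown (all referenced images present) is an illustrative simplification.

```python
# Hypothetical sketch: check that a deployable template references only image
# templates that exist in the repository before declaring it launch-ready.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ImageTemplate:
    name: str


@dataclass
class DeployableTemplate:
    name: str
    required_images: List[str]     # VM images to start together as one cloud server


def ready_for_launch(deployable: DeployableTemplate,
                     repository: Dict[str, ImageTemplate]) -> bool:
    """A deployable is launch-ready when every image it needs is in the repo."""
    return all(image in repository for image in deployable.required_images)


repo = {"web": ImageTemplate("web"), "db": ImageTemplate("db")}
bundle = DeployableTemplate("shop-stack", required_images=["web", "db"])
print(ready_for_launch(bundle, repo))
```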
  • Patent number: 12229587
    Abstract: A command processor determines whether a command descriptor describing a current command is in a first format or in a second format, wherein the first format includes a source memory address pointing to a memory area in a shared memory having a binary code to be accessed according to direct memory access (DMA) scheme, and the second format includes one or more object indices, a respective one of the one or more object indices indicating an object in an object database. If the command descriptor describing the current command is in the second format, the command processor converts a format of the command descriptor to the first format, generates one or more task descriptors describing neural network model tasks based on the command descriptor in the first format, and distributes the one or more task descriptors to the one or more neural processors.
    Type: Grant
    Filed: March 29, 2024
    Date of Patent: February 18, 2025
    Assignee: REBELLIONS INC.
    Inventors: Hongyun Kim, Chang-Hyo Yu, Yoonho Boo
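Not part of the patent text: a minimal Python sketch of converting an index-based command descriptor into the address-based format described in the abstract above. The object database, addresses, and field names are assumptions.

```python
# Hypothetical sketch: if a command descriptor carries object indices instead
# of a DMA source address, resolve the indices against an object database and
# rewrite the descriptor into the address-based format before building tasks.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class CommandDescriptor:
    source_address: Optional[int] = None         # first format: DMA address
    object_indices: Optional[List[int]] = None   # second format: object DB indices


def to_first_format(cmd: CommandDescriptor,
                    object_db: Dict[int, int]) -> CommandDescriptor:
    """Convert an index-based descriptor to the address-based format."""
    if cmd.source_address is not None:
        return cmd                                # already in the first format
    base = object_db[cmd.object_indices[0]]       # look up the binary's address
    return CommandDescriptor(source_address=base)


object_db = {7: 0x4000_0000}                      # object index -> shared-memory address
converted = to_first_format(CommandDescriptor(object_indices=[7]), object_db)
print(hex(converted.source_address))
```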
  • Patent number: 12229583
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for transaction management. One of the methods includes, for a first transaction from a plurality of transactions, in response to determining that the first transaction for a corresponding user account at a first entity satisfies the threshold criteria for a corresponding second entity, accessing account data for the corresponding user account, first data for the first entity, and second data for the second entity to complete the first transaction. For a second transaction, in response to determining that the second transaction for a corresponding user account at the first entity does not satisfy the threshold criteria for the corresponding second entity, a determination is made to not access second data for the corresponding second entity, and account data is accessed for the corresponding user account and the first data for the first entity to complete the second transaction.
    Type: Grant
    Filed: January 5, 2024
    Date of Patent: February 18, 2025
    Assignee: Jane Technologies, Inc.
    Inventors: Socrates Munaf Rosenfeld, Abraham Munaf Rosenfeld, Howard Hong, Simon James Roddy, Benjamin Aaron Green, Andrew Michael Livingston, Harry Kainen, Scott Bramble
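Not part of the patent text: a minimal Python sketch of the conditional data access in the abstract above. Modeling the threshold criteria as a simple amount threshold is an assumption.

```python
# Hypothetical sketch: only pull the second entity's data into a transaction
# when the transaction meets that entity's threshold criteria; otherwise
# complete it with the account and first-entity data alone.
from dataclasses import dataclass
from typing import List


@dataclass
class Transaction:
    account_id: str
    amount: float


def complete_transaction(txn: Transaction, threshold: float) -> List[str]:
    data_accessed = [f"account:{txn.account_id}", "entity1:data"]
    if txn.amount >= threshold:            # threshold criteria for the second entity
        data_accessed.append("entity2:data")
    return data_accessed


print(complete_transaction(Transaction("acct-42", 150.0), threshold=100.0))
print(complete_transaction(Transaction("acct-42", 40.0), threshold=100.0))
```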
  • Patent number: 12210899
    Abstract: Data payloads from an external data storage system are processed in an observability pipeline system. In some aspects, the observability pipeline system defines a leader role and worker roles. The leader role generates a data discovery task based on configuration information for a data collection task. A worker role executes the data discovery task, which includes communicating with an external data storage system to identify a data payload that is stored on the external data storage system and contains event data that meet event filter criteria. The leader role generates data collection tasks based on the data payload. Worker roles execute the data collection tasks. Executing a data collection task includes: communicating with the external data storage system to obtain a subset of filtered event data from the data payload; and streaming the subset of filtered event data to an observability pipeline process.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: January 28, 2025
    Assignee: Cribl, Inc.
    Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito, Clint Sharp
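Not part of the patent text: a minimal Python sketch of the leader/worker discovery-then-collection flow in the abstract above. The in-memory stand-in for the external data storage system and the level-based filter are assumptions.

```python
# Hypothetical sketch: a leader builds a discovery task from the collection
# config, a worker finds matching payloads on external storage, and one
# collection task per payload streams only the filtered subset of events.
from typing import Dict, List

EXTERNAL_STORE = {                    # stand-in for an external data storage system
    "payload-1": [{"level": "error", "msg": "disk full"},
                  {"level": "info", "msg": "ok"}],
    "payload-2": [{"level": "error", "msg": "timeout"}],
}


def leader_discovery_task(config: Dict) -> Dict:
    return {"filter": config["event_filter"]}


def worker_discover(task: Dict) -> List[str]:
    """Identify payloads that contain events matching the filter criteria."""
    return [name for name, events in EXTERNAL_STORE.items()
            if any(e["level"] == task["filter"] for e in events)]


def worker_collect(payload: str, event_filter: str) -> List[Dict]:
    """Stream only the filtered subset of events from one payload."""
    return [e for e in EXTERNAL_STORE[payload] if e["level"] == event_filter]


config = {"event_filter": "error"}
payloads = worker_discover(leader_discovery_task(config))
for p in payloads:                    # one collection task per payload
    print(p, worker_collect(p, config["event_filter"]))
```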
  • Patent number: 12197359
    Abstract: Methods, systems, and computer program products for high-performance cluster computing. Multiple components are operatively interconnected to carry out operations for high-performance RDMA I/O transfers over an RDMA NIC. A virtual machine of a virtualization environment initiates a first I/O call to an HCI storage pool controller using RDMA. Responsive to the first I/O call, a second I/O call is initiated from the HCI storage pool controller to a storage device of an HCI storage pool. The first I/O call to the HCI storage pool controller is implemented through a first virtual function of an RDMA NIC that is exposed in the user space of the virtualization environment. Prior to the first RDMA I/O call, a contiguous unit of memory to use in an RDMA I/O transfer is registered with the RDMA NIC. The contiguous unit of memory comprises memory that is registered using non-RDMA paths such as TCP or iSCSI.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: January 14, 2025
    Assignee: Nutanix, Inc.
    Inventors: Hema Venkataramani, Felipe Franciosi, Sreejith Mohanan, Alok Nemchand Kataria, Umang Sureshkumar Patel
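Not part of the patent text: a pure-Python simulation (no real RDMA) of the two-hop I/O flow in the abstract above: register a contiguous buffer with a NIC virtual function, then an I/O call to the storage pool controller triggers a second call to a pool device. Every class and field here is a hypothetical stand-in.

```python
# Hypothetical sketch (simulation only): buffer registration before the first
# RDMA I/O call, then the controller performs the second I/O call to a device.
from dataclasses import dataclass


@dataclass
class RdmaVirtualFunction:
    registered: dict

    def register_memory(self, buffer_id: str, size: int) -> None:
        # must happen before the first RDMA I/O call
        self.registered[buffer_id] = bytearray(size)


class StoragePoolController:
    def __init__(self, devices: dict):
        self.devices = devices

    def handle_io(self, device: str, offset: int, data: bytes) -> str:
        self.devices[device][offset:offset + len(data)] = data   # second I/O call
        return f"wrote {len(data)} bytes to {device}@{offset}"


vf = RdmaVirtualFunction(registered={})
vf.register_memory("io-buf", size=4096)              # contiguous unit of memory
controller = StoragePoolController({"dev0": bytearray(8192)})
vf.registered["io-buf"][:5] = b"hello"
print(controller.handle_io("dev0", 0, bytes(vf.registered["io-buf"][:5])))
```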
  • Patent number: 12169728
    Abstract: Technology is disclosed for non-fragmenting memory ballooning. An example method may involve: receiving, by a processing device, a request associated with a memory balloon; searching for available memory chunks in a memory, wherein the memory is fragmented and comprises a set of available chunks that are separate from each other; selecting, by the processing device, a first chunk and a second chunk of the set of available chunks, wherein the first chunk is smaller than the second chunk and is selected before the second chunk; and associating the first chunk and the second chunk with the memory balloon.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: December 17, 2024
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, David Hildenbrand
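Not part of the patent text: a minimal Python sketch of the smallest-chunks-first selection described in the abstract above. Representing free memory as (start, size) pairs is an illustrative simplification.

```python
# Hypothetical sketch: satisfy a balloon inflation request from a fragmented
# free list by taking smaller available chunks before larger ones, so the big
# contiguous chunks stay intact for other allocations.
from typing import List, Tuple


def inflate_balloon(free_chunks: List[Tuple[int, int]],
                    request: int) -> List[Tuple[int, int]]:
    """free_chunks: (start, size) pairs. Returns chunks given to the balloon."""
    taken, total = [], 0
    for chunk in sorted(free_chunks, key=lambda c: c[1]):   # smallest first
        if total >= request:
            break
        taken.append(chunk)
        total += chunk[1]
    return taken


# fragmented memory: three separate free chunks
print(inflate_balloon([(0x1000, 64), (0x9000, 16), (0x20000, 256)], request=70))
```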
  • Patent number: 12159165
    Abstract: The invention relates to an electronic system comprising components and/or units of various kinds (hence the electronic system can be called a heterogeneous system) and special interfaces between them. The invented electronic system can be applied in the digital control domain for electric systems; in particular, it targets (but is not limited to) control of the power train of pure electric or hybrid vehicle electric motors, which requires hard real-time and safe control.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: December 3, 2024
    Assignee: Silicon Mobility SAS
    Inventors: Loïc Jean Dominique Vezier, Anselme Joseph Francis Lebrun
  • Patent number: 12153900
    Abstract: Sparse data handling and/or buffer sharing are implemented. Data may be buffered in reusable buffer arrays. Data may comprise fixed or variable length vectors, which may be represented as sparse or dense vectors in a values array and indices array. Data may be materialized from a dataview comprising a non-materialized view of data in a machine-learning pipeline by cursoring over rows of the dataview and calling delegate functions to compute data for rows in an active column. A buffer and/or its set of arrays storing a first vector may be reused for a second and additional vectors, for example, when the length of buffer arrays is equal to or greater than the length of the second and additional vectors, which may be selectively stored as sparse or dense vectors to fit the array set. Shared buffers may be passed as references between delegate functions for reuse.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: November 26, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Gary Shon Katzenberger, Petro Luferenko, Costin I. Eseanu, Eric Anthony Erhardt, Ivan Matantsev
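Not part of the patent text: a minimal Python sketch of a reusable values/indices buffer that stores vectors sparsely or densely and is reused when its arrays are already long enough, following the abstract above. The sparsity heuristic and growth policy are assumptions.

```python
# Hypothetical sketch: a reusable buffer (values + indices arrays) that holds a
# vector as sparse or dense, and is reused for the next vector whenever its
# arrays can hold the new entries.
from typing import List


class VectorBuffer:
    def __init__(self, capacity: int):
        self.values: List[float] = [0.0] * capacity
        self.indices: List[int] = [0] * capacity
        self.length = 0                     # logical length of the stored vector
        self.is_sparse = False

    def can_reuse(self, count_needed: int) -> bool:
        return count_needed <= len(self.values)

    def store(self, vector: List[float]) -> None:
        nonzero = [(i, v) for i, v in enumerate(vector) if v != 0.0]
        # store sparsely when that clearly saves space
        self.is_sparse = len(nonzero) * 2 < len(vector)
        entries = nonzero if self.is_sparse else list(enumerate(vector))
        if not self.can_reuse(len(entries)):
            self.values = [0.0] * len(entries)       # grow only when necessary
            self.indices = [0] * len(entries)
        for slot, (i, v) in enumerate(entries):
            self.indices[slot], self.values[slot] = i, v
        self.length = len(entries)


buf = VectorBuffer(capacity=8)
buf.store([0.0, 0.0, 3.5, 0.0, 0.0, 0.0])             # stored sparse
buf.store([1.0, 2.0, 3.0])                            # same buffer reused, dense
print(buf.is_sparse, buf.values[:buf.length], buf.indices[:buf.length])
```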