Patents Examined by Jonathan R Labud
-
Patent number: 12293216
Abstract: Some embodiments provide a system and method to receive, as an input, configuration properties of a group of operators of a data pipeline, the data pipeline including a specified multiplicity greater than one (1); generate, as an output, a configuration for two new operators, including a first new operator and a second new operator; and automatically insert the first new operator and the second new operator into a deployment of the data pipeline, the first new operator being inserted before a number of replicas of the group of operators of the data pipeline corresponding to the specified multiplicity and the second new operator being inserted after the number of replicas of the group of operators of the data pipeline corresponding to the specified multiplicity.
Type: Grant
Filed: December 17, 2021
Date of Patent: May 6, 2025
Assignee: SAP SE
Inventor: Eric Simon
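A minimal Python sketch of the replication idea described in this abstract, assuming hypothetical names (Operator, build_replication, fan_out, fan_in); it is an illustration of the general pattern, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operator:
    name: str
    config: dict = field(default_factory=dict)

def build_replication(group: List[Operator], multiplicity: int) -> List[Operator]:
    """Insert a fan-out operator before, and a fan-in operator after,
    `multiplicity` replicas of the given operator group."""
    if multiplicity <= 1:
        raise ValueError("multiplicity must be greater than one")

    # Generate the configuration for the two new operators from the group's properties.
    fan_out = Operator("fan_out", {"targets": multiplicity})
    fan_in = Operator("fan_in", {"sources": multiplicity})

    # Replicate the original group the specified number of times.
    replicas = [Operator(f"{op.name}_r{i}", dict(op.config))
                for i in range(multiplicity) for op in group]

    # Deployment order: first new operator, replicas, second new operator.
    return [fan_out, *replicas, fan_in]

# Example: a two-operator group replicated three times.
deployment = build_replication([Operator("parse"), Operator("enrich")], multiplicity=3)
```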
-
Patent number: 12293218
Abstract: Aspects of the disclosure provide methods and an apparatus including processing circuitry configured to receive workflow information of a workflow. The processing circuitry generates, based on the workflow information, the workflow to process input data. The workflow includes a first processing task, a second processing task, and a first buffering task. The first processing task is caused to enter a running state where a subset of the input data is processed and output to the first buffering task as first processed subset data. The first processing task is caused to transition to a paused state based on an amount of the first processed subset data in the first buffering task being equal to a first threshold. State information of the first processing task is stored in the paused state. Subsequently, the second processing task is caused to enter a running state where the first processed subset data is processed.
Type: Grant
Filed: September 21, 2020
Date of Patent: May 6, 2025
Assignee: TENCENT AMERICA LLC
Inventor: Iraj Sodagar
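A rough Python sketch of the pause-and-resume pattern this abstract describes: a first task fills a buffering task up to a threshold, pauses while preserving its state, and a second task then consumes the buffered subset. Class and method names are illustrative assumptions.

```python
from collections import deque

class BufferingTask:
    """Hypothetical buffer sitting between two processing tasks."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.items = deque()

    def full(self):
        return len(self.items) >= self.threshold

class ProcessingTask:
    """Runs until its output buffer is full, then pauses and saves its state."""
    def __init__(self, name, func):
        self.name = name
        self.func = func
        self.state = "idle"
        self.saved_position = 0          # state information preserved across pause/resume

    def run(self, data, out_buffer):
        self.state = "running"
        for i in range(self.saved_position, len(data)):
            if out_buffer.full():
                self.saved_position = i  # store state of the paused task
                self.state = "paused"
                return
            out_buffer.items.append(self.func(data[i]))
        self.state = "done"

# One possible schedule: task1 fills the buffer, pauses, then task2 drains it.
buffer1 = BufferingTask(threshold=4)
task1 = ProcessingTask("decode", lambda x: x * 2)
task2 = ProcessingTask("encode", lambda x: x + 1)

input_data = list(range(10))
task1.run(input_data, buffer1)               # runs until buffer1 holds 4 items, then pauses
stage2_out = BufferingTask(threshold=100)
task2.run(list(buffer1.items), stage2_out)   # second task processes the buffered subset
```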
-
Patent number: 12293055
Abstract: Methods and systems for application publishing in a virtualized environment are described herein. A system may facilitate publishing of one or more shortcuts based on inputs made in the virtual desktop environment (e.g., when a user "drag-and-drops" a shortcut onto a publishing icon on a desktop). The system may determine application information and instance information for the application, and may publish a shortcut for that application to the storefront. As a result, users may be permitted to self-publish shortcuts for preferred applications onto personalized storefronts, which may be unique to each user.
Type: Grant
Filed: January 4, 2019
Date of Patent: May 6, 2025
Inventors: Yedong Yu, Yajun Yao
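A toy Python sketch of the self-publishing flow: given an application dropped onto a publishing target, the system collects application and instance information and records a per-user shortcut in a storefront. The data shapes and names are assumptions for illustration only.

```python
def publish_shortcut(storefront: dict, user: str, app_path: str) -> dict:
    """Publish a shortcut for `app_path` to the given user's personalized storefront."""
    # Determine application information and instance information (hypothetical fields).
    app_info = {
        "name": app_path.rsplit("\\", 1)[-1],
        "path": app_path,
        "instance": f"{user}-desktop-01",   # which virtual desktop hosts the application
    }
    # Each user gets their own storefront entry, so shortcuts stay personal.
    storefront.setdefault(user, []).append(app_info)
    return app_info

storefront = {}
publish_shortcut(storefront, "alice", r"C:\Program Files\Editor\editor.exe")
```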
-
Patent number: 12293208
Abstract: The disclosure provides an approach for device redirection in a remote computing environment. Embodiments include receiving, at a remote device from a client device over a network, input data of a peripheral device associated with the client device. Embodiments include receiving, at an emulated device running on the remote device, a request for device data from an application running on the remote device. Embodiments include responding, by the emulated device to the application, to the request with a response message having a format associated with the request, the response message being based on the input data. Embodiments include transmitting, from the remote device to the client device over the network, image data representing the application running on the remote device as controlled based on the input data.
Type: Grant
Filed: December 16, 2021
Date of Patent: May 6, 2025
Assignee: Omnissa, LLC
Inventors: Zhongzheng Tu, Joe Huiyong Huo, Mingsheng Zang, Jinxing Hu, Yueting Zhang
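A simplified Python sketch of this redirection loop: the client forwards peripheral input over the network, an emulated device on the remote side answers the application's requests for device data in the expected format, and the resulting screen image is sent back. Every name here is a hypothetical stand-in.

```python
class EmulatedDevice:
    """Emulated peripheral on the remote machine, fed by input data from the client."""
    def __init__(self):
        self.latest_input = None

    def receive_input(self, input_data: dict):
        # Input data of the physical peripheral, received from the client over the network.
        self.latest_input = input_data

    def handle_request(self, request_format: str) -> dict:
        # Respond to the application's request in the format the request expects,
        # using the most recent input data from the client.
        return {"format": request_format, "payload": self.latest_input}

device = EmulatedDevice()
device.receive_input({"x": 10, "y": 42, "button": "left"})            # from the client device
response = device.handle_request("pointer_report")                    # from the application
image_data = f"frame rendered from app state {response['payload']}"   # sent back to the client
```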
-
Patent number: 12277456
Abstract: An apparatus comprising neural processors, a command processor, and a shared memory. The command processor receives a context start signal indicating a start of a context of a neural network model from a host system. The command processor determines whether neural network model data is entirely or partially updated based on the context start signal. The command processor updates the neural network model data in the shared memory based on a determination of whether the neural network model data is entirely or partially updated based on the context start signal. The command processor generates a plurality of task descriptors describing neural network model tasks based on the neural network model data, and distributes the plurality of task descriptors to the neural processors.
Type: Grant
Filed: March 29, 2024
Date of Patent: April 15, 2025
Assignee: REBELLIONS INC.
Inventor: Hongyun Kim
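A small Python sketch of the dispatch flow this abstract outlines: a context start signal indicates whether the model data is entirely or partially updated, shared memory is updated accordingly, and task descriptors derived from the model data are distributed round-robin to the neural processors. Names and data shapes are assumptions, not the described hardware.

```python
def handle_context_start(signal: dict, shared_memory: dict, num_processors: int):
    """Update shared model data per the context start signal and distribute task descriptors."""
    model_data = signal["model_data"]

    # Decide between a full and a partial update of the neural network model data.
    if signal.get("full_update", False):
        shared_memory.clear()
        shared_memory.update(model_data)
    else:
        shared_memory.update(model_data)          # only the changed entries

    # Generate one task descriptor per model task and distribute them to the processors.
    descriptors = [{"task": name, "weights_key": name} for name in shared_memory]
    per_processor = [[] for _ in range(num_processors)]
    for i, desc in enumerate(descriptors):
        per_processor[i % num_processors].append(desc)
    return per_processor

shared = {}
queues = handle_context_start(
    {"full_update": True, "model_data": {"conv1": b"...", "fc1": b"..."}},
    shared, num_processors=2)
```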
-
Patent number: 12265857
Abstract: A method of managing resources is provided in embodiments of the present disclosure. The method includes determining a set of candidate historical requests associated with a target request. Here, the set of candidate historical requests has the same request type and target resource as the target request. The method further includes determining a target request pattern of the target request based on at least one previous request of the target request. The method includes determining a target historical request from the set of candidate historical requests based on the target request pattern. The method includes generating a target response to the target request based on a historical response to the target historical request. In this way, by determining a response to a historical request that has the most similar request pattern to the target request, a simulated response that is more in line with the context can be generated.
Type: Grant
Filed: November 9, 2021
Date of Patent: April 1, 2025
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Qi Wang, Ren Wang, Yun Zhang, Ming Zhang, Weiyang Liu
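An illustrative Python sketch of the matching step: among candidate historical requests with the same type and target resource, pick the one whose preceding-request pattern is most similar to the target's, and derive a simulated response from its historical response. The similarity measure and field names are assumptions.

```python
def simulate_response(target, candidates):
    """Pick the candidate historical request whose request pattern best matches the target."""
    # Candidates are assumed to already share the target's request type and target resource.
    def similarity(pattern_a, pattern_b):
        # Hypothetical measure: number of matching previous-request types, position by position.
        return sum(a == b for a, b in zip(pattern_a, pattern_b))

    best = max(candidates, key=lambda c: similarity(c["pattern"], target["pattern"]))
    # Generate the target response from the historical response of the best match.
    return dict(best["response"], simulated=True)

target = {"type": "READ", "resource": "/vol1", "pattern": ["OPEN", "STAT"]}
history = [
    {"pattern": ["OPEN", "STAT"], "response": {"status": 200, "bytes": 4096}},
    {"pattern": ["WRITE"],        "response": {"status": 200, "bytes": 0}},
]
print(simulate_response(target, history))   # -> {'status': 200, 'bytes': 4096, 'simulated': True}
```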
-
Patent number: 12260242
Abstract: Examples for managing virtual infrastructure resources in cloud environments can include (1) instantiating an orchestration node for managing local control planes at multiple clouds, (2) instantiating first and second local control planes at different respective clouds, the first and second local control planes interfacing with different respective virtualized infrastructure managers ("VIMs"), where the first and second local control planes establish secure communication with the orchestration node, and (3) deploying, by the orchestration node, services to the first and second local control planes. Further, the first and second local control planes can cause the respective VIMs to manage the services at the different respective clouds.
Type: Grant
Filed: December 22, 2021
Date of Patent: March 25, 2025
Assignee: VMware LLC
Inventors: Shruti Parihar, Mark Whipple, Sachin Thakkar, Akshatha Sathyanarayan
-
Patent number: 12248798
Abstract: A method and system for determining whether a deployment has been prepared for launch on a cloud. The method includes receiving, by a server computer, a set of associated image templates into a template repository. The method further includes receiving, in the template repository by a processing device of the server computer, a compatible deployable template that is compatible with, and distinct from, the set of associated image templates, wherein the compatible deployable template comprises information for launching a cloud server by starting a plurality of virtual machines from a plurality of virtual machine images together to create the cloud server. The method further includes providing the compatible deployable template.
Type: Grant
Filed: July 26, 2021
Date of Patent: March 11, 2025
Assignee: Red Hat, Inc.
Inventors: Dan Macpherson, Scott Wayne Seago
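A hedged Python sketch of the readiness check this abstract describes: a template repository holds image templates and receives a deployable template, and a deployment is considered prepared only when the deployable template is distinct from the image templates and references only images that are present. All structures are illustrative.

```python
def ready_for_launch(repository: dict, deployable: dict) -> bool:
    """Return True if the deployable template can launch a cloud server from the
    virtual machine images already stored in the template repository."""
    image_names = set(repository["image_templates"])

    # The deployable template must be distinct from the image templates themselves
    # and must only reference images that exist in the repository.
    if deployable["name"] in image_names:
        return False
    return set(deployable["vm_images"]) <= image_names

repo = {"image_templates": ["web-image", "db-image"]}
deployable = {"name": "three-tier-stack", "vm_images": ["web-image", "db-image"]}
print(ready_for_launch(repo, deployable))   # True: the deployment is prepared for launch
```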
-
Patent number: 12229587
Abstract: A command processor determines whether a command descriptor describing a current command is in a first format or in a second format, wherein the first format includes a source memory address pointing to a memory area in a shared memory having a binary code to be accessed according to direct memory access (DMA) scheme, and the second format includes one or more object indices, a respective one of the one or more object indices indicating an object in an object database. If the command descriptor describing the current command is in the second format, the command processor converts a format of the command descriptor to the first format, generates one or more task descriptors describing neural network model tasks based on the command descriptor in the first format, and distributes the one or more task descriptors to the one or more neural processors.
Type: Grant
Filed: March 29, 2024
Date of Patent: February 18, 2025
Assignee: REBELLIONS INC.
Inventors: Hongyun Kim, Chang-Hyo Yu, Yoonho Boo
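A compact Python sketch of the format conversion: a descriptor in the second format carries object indices, which are resolved through an object database into the memory addresses the first (DMA-style) format expects before task descriptors are generated. Field names and the address scheme are assumptions.

```python
OBJECT_DB = {0: {"address": 0x1000}, 1: {"address": 0x2000}}   # index -> object metadata

def to_first_format(descriptor: dict) -> dict:
    """Convert a second-format descriptor (object indices) to the first format (DMA addresses)."""
    if descriptor["format"] == 1:
        return descriptor                                       # already addressable via DMA
    addresses = [OBJECT_DB[i]["address"] for i in descriptor["object_indices"]]
    return {"format": 1, "source_addresses": addresses}

def make_task_descriptors(descriptor: dict):
    """Generate neural network task descriptors from a first-format command descriptor."""
    descriptor = to_first_format(descriptor)
    return [{"task": f"run_kernel@{hex(addr)}"} for addr in descriptor["source_addresses"]]

tasks = make_task_descriptors({"format": 2, "object_indices": [0, 1]})
# tasks would then be distributed to the neural processors
```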
-
Patent number: 12229583
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for transaction management. One of the methods includes, for a first transaction from a plurality of transactions, in response to determining that the first transaction for a corresponding user account at a first entity satisfies the threshold criteria for a corresponding second entity, accessing account data for the corresponding user account, first data for the first entity, and second data for the second entity to complete the first transaction. For a second transaction, in response to determining that the second transaction for a corresponding user account at the first entity does not satisfy the threshold criteria for the corresponding second entity, a determination is made to not access second data for the corresponding second entity, and account data is accessed for the corresponding user account and the first data for the first entity to complete the second transaction.
Type: Grant
Filed: January 5, 2024
Date of Patent: February 18, 2025
Assignee: Jane Technologies, Inc.
Inventors: Socrates Munaf Rosenfeld, Abraham Munaf Rosenfeld, Howard Hong, Simon James Roddy, Benjamin Aaron Green, Andrew Michael Livingston, Harry Kainen, Scott Bramble
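A small Python sketch of the conditional data-access rule: second-entity data is consulted only when a transaction meets that entity's threshold criteria; otherwise the transaction completes with account data and first-entity data alone. The threshold field and record layout are illustrative.

```python
def complete_transaction(txn, account_data, first_entity_data, second_entity_data):
    """Complete a transaction, touching second-entity data only when the threshold is met."""
    meets_threshold = txn["amount"] >= txn["second_entity_threshold"]

    accessed = {"account": account_data, "first_entity": first_entity_data}
    if meets_threshold:
        # Only transactions satisfying the second entity's criteria pull its data.
        accessed["second_entity"] = second_entity_data
    return {"transaction": txn["id"], "data_accessed": sorted(accessed)}

print(complete_transaction({"id": 1, "amount": 75, "second_entity_threshold": 50}, {}, {}, {}))
print(complete_transaction({"id": 2, "amount": 20, "second_entity_threshold": 50}, {}, {}, {}))
```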
-
Patent number: 12210899
Abstract: Data payloads from an external data storage system are processed in an observability pipeline system. In some aspects, the observability pipeline system defines a leader role and worker roles. The leader role generates a data discovery task based on configuration information for a data collection task. A worker role executes the data discovery task, which includes communicating with an external data storage system to identify a data payload that is stored on the external data storage system and contains event data that meet event filter criteria. The leader role generates data collection tasks based on the data payload. Worker roles execute the data collection tasks. Executing a data collection task includes: communicating with the external data storage system to obtain a subset of filtered event data from the data payload; and streaming the subset of filtered event data to an observability pipeline process.
Type: Grant
Filed: June 14, 2021
Date of Patent: January 28, 2025
Assignee: Cribl, Inc.
Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito, Clint Sharp
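An illustrative Python sketch of the leader/worker split: the leader turns collection configuration into a discovery task, workers discover payloads matching the event filter, the leader fans those out as collection tasks, and workers stream the filtered events onward. Everything here, including the in-memory "external store", is a stand-in.

```python
EXTERNAL_STORE = {                      # stand-in for the external data storage system
    "payload-a": [{"level": "error", "msg": "disk full"}, {"level": "info", "msg": "ok"}],
    "payload-b": [{"level": "error", "msg": "timeout"}],
}

def leader_discovery_task(config):
    # Leader role: build a discovery task from the data collection configuration.
    return {"filter": config["filter"]}

def worker_discover(task):
    # Worker role: ask the external store which payloads contain matching event data.
    return [name for name, events in EXTERNAL_STORE.items()
            if any(task["filter"](e) for e in events)]

def worker_collect(payload_name, event_filter, sink):
    # Worker role: pull only the filtered subset and stream it to the pipeline process.
    for event in EXTERNAL_STORE[payload_name]:
        if event_filter(event):
            sink.append(event)

config = {"filter": lambda e: e["level"] == "error"}
payloads = worker_discover(leader_discovery_task(config))
pipeline_input = []
for p in payloads:                      # leader generates one collection task per payload
    worker_collect(p, config["filter"], pipeline_input)
```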
-
Patent number: 12197359
Abstract: Methods, systems, and computer program products for high-performance cluster computing. Multiple components are operatively interconnected to carry out operations for high-performance RDMA I/O transfers over an RDMA NIC. A virtual machine of a virtualization environment initiates a first I/O call to an HCI storage pool controller using RDMA. Responsive to the first I/O call, a second I/O call is initiated from the HCI storage pool controller to a storage device of an HCI storage pool. The first I/O call to the HCI storage pool controller is implemented through a first virtual function of an RDMA NIC that is exposed in the user space of the virtualization environment. Prior to the first RDMA I/O call, a contiguous unit of memory to use in an RDMA I/O transfer is registered with the RDMA NIC. The contiguous unit of memory comprises memory that is registered using non-RDMA paths such as TCP or iSCSI.
Type: Grant
Filed: January 29, 2021
Date of Patent: January 14, 2025
Assignee: Nutanix, Inc.
Inventors: Hema Venkataramani, Felipe Franciosi, Sreejith Mohanan, Alok Nemchand Kataria, Umang Sureshkumar Patel
-
Patent number: 12169728
Abstract: Technology is disclosed for non-fragmenting memory ballooning. An example method may involve: receiving, by a processing device, a request associated with a memory balloon; searching for available memory chunks in a memory, wherein the memory is fragmented and comprises a set of available chunks that are separate from each other; selecting, by the processing device, a first chunk and a second chunk of the set of available chunks, wherein the first chunk is smaller than the second chunk and is selected before the second chunk; and associating the first chunk and the second chunk with the memory balloon.
Type: Grant
Filed: March 1, 2021
Date of Patent: December 17, 2024
Assignee: Red Hat, Inc.
Inventors: Michael Tsirkin, David Hildenbrand
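A short Python sketch of the non-fragmenting selection order: among the available chunks of a fragmented memory, smaller chunks are chosen before larger ones and associated with the balloon. The chunk representation and request size are assumptions.

```python
def inflate_balloon(available_chunks, requested_pages):
    """Pick available chunks smallest-first until the balloon request is satisfied."""
    balloon = []
    taken = 0
    # Smaller chunks are selected before larger ones, leaving big chunks intact
    # for allocations that actually need contiguous memory.
    for chunk in sorted(available_chunks, key=lambda c: c["pages"]):
        if taken >= requested_pages:
            break
        balloon.append(chunk)            # associate this chunk with the memory balloon
        taken += chunk["pages"]
    return balloon

chunks = [{"start": 0x1000, "pages": 64}, {"start": 0x9000, "pages": 4}, {"start": 0x5000, "pages": 16}]
print(inflate_balloon(chunks, requested_pages=20))   # picks the 4-page then the 16-page chunk
```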
-
Patent number: 12159165
Abstract: The invention relates to an electronic system comprising components and/or units of various kinds, and special interfaces between them; because of this mix, the electronic system can be called a heterogeneous system. The invented electronic system can be applied in the electric system digital control domain, and in particular it targets (but is not limited to) control of the power train of pure electric or hybrid vehicle electric motors that require hard real-time and safe control.
Type: Grant
Filed: December 14, 2018
Date of Patent: December 3, 2024
Assignee: Silicon Mobility SAS
Inventors: Loïc Jean Dominique Vezier, Anselme Joseph Francis Lebrun
-
Patent number: 12153900
Abstract: Sparse data handling and/or buffer sharing are implemented. Data may be buffered in reusable buffer arrays. Data may comprise fixed or variable length vectors, which may be represented as sparse or dense vectors in a values array and indices array. Data may be materialized from a dataview comprising a non-materialized view of data in a machine-learning pipeline by cursoring over rows of the dataview and calling delegate functions to compute data for rows in an active column. A buffer and/or its set of arrays storing a first vector may be reused for a second and additional vectors, for example, when the length of buffer arrays is equal to or greater than the length of the second and additional vectors, which may be selectively stored as sparse or dense vectors to fit the array set. Shared buffers may be passed as references between delegate functions for reuse.
Type: Grant
Filed: October 31, 2019
Date of Patent: November 26, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Gary Shon Katzenberger, Petro Luferenko, Costin I. Eseanu, Eric Anthony Erhardt, Ivan Matantsev
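A rough Python sketch of the buffer-reuse rule: a values/indices array pair allocated for one vector is reused for the next vector whenever it is long enough, and the new vector is stored sparse or dense depending on what fits. This is an illustrative simplification of the dataview/buffer machinery, not the described library.

```python
class VectorBuffer:
    """Reusable buffer holding one vector as parallel values/indices arrays."""
    def __init__(self, capacity):
        self.values = [0.0] * capacity
        self.indices = [0] * capacity
        self.length = 0
        self.is_sparse = False

    def store(self, vector):
        nonzero = [(i, v) for i, v in enumerate(vector) if v != 0.0]
        # Reuse the existing arrays only if they are long enough; otherwise grow them.
        needed = min(len(vector), len(nonzero))
        if needed > len(self.values):
            self.values = [0.0] * needed
            self.indices = [0] * needed
        # Store sparse when that is the smaller representation, dense otherwise.
        self.is_sparse = len(nonzero) < len(vector)
        if self.is_sparse:
            for slot, (i, v) in enumerate(nonzero):
                self.indices[slot], self.values[slot] = i, v
            self.length = len(nonzero)
        else:
            self.values[:len(vector)] = vector
            self.length = len(vector)

buf = VectorBuffer(capacity=8)
buf.store([0.0, 3.5, 0.0, 0.0])   # stored sparse: indices=[1], values=[3.5]
buf.store([1.0, 2.0, 3.0])        # same buffer reused, now stored dense
```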
-
Patent number: 12141626
Abstract: The disclosure relates to an interprocessor synchronization system, comprising a plurality of processors; a plurality of unidirectional notification lines connecting the processors in a chain; in each processor: a synchronization register having bits respectively associated with the notification lines, connected to record the respective states of upstream notification lines, propagated by an upstream processor, and a gate controlled by a configuration register to propagate the states of the upstream notification lines on downstream notification lines to a downstream processor.
Type: Grant
Filed: December 27, 2019
Date of Patent: November 12, 2024
Assignee: Kalray
Inventors: Benoit Dupont de Dinechin, Arnaud Odinot, Vincent Ray
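A small Python simulation of the chained-notification idea: each processor records the states of its upstream notification lines in a synchronization register, and a gate controlled by a configuration register decides which of those states propagate to the downstream processor. Bit layout and naming are assumptions.

```python
class Processor:
    """One element in the notification chain."""
    def __init__(self, propagate_mask: int):
        self.sync_register = 0                   # records states of upstream notification lines
        self.config_register = propagate_mask    # which lines the gate lets through

    def receive_upstream(self, line_states: int) -> int:
        self.sync_register |= line_states
        # The gate forwards only the lines enabled in the configuration register.
        return line_states & self.config_register

# Three processors in a chain; the middle one blocks the second notification line.
chain = [Processor(0b11), Processor(0b01), Processor(0b11)]
states = 0b11                                    # upstream processor asserts both lines
for cpu in chain:
    states = cpu.receive_upstream(states)        # propagate down the chain through each gate
print([bin(cpu.sync_register) for cpu in chain])   # ['0b11', '0b11', '0b1']
```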
-
Patent number: 12141596
Abstract: Remote desktop services are accessed by a first remote desktop from a pool of remote desktops. When the first remote desktop detects a user request to launch an application and determines that the application to be launched is from a second remote desktop, the first remote desktop establishes a connection with the second remote desktop to launch and display the application seamlessly. In addition, the first remote desktop retrieves drive configuration data indicating drives or folders that are shared by each of the remote desktops in the pool and creates a mapping of the shared drives and folders based on the drive configuration data. In response to a user request to open a shared drive or folder of the second remote desktop, a connection is established between the first remote desktop and the second remote desktop to acquire contents of the shared drive or folder.
Type: Grant
Filed: June 10, 2021
Date of Patent: November 12, 2024
Assignee: Omnissa, LLC
Inventors: Lin Lv, Yanchao Zhang, Yang Liu
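A brief Python sketch of the drive-mapping step: configuration data lists the drives and folders each desktop in the pool shares, and the current desktop builds a mapping it can consult when the user opens a share belonging to another desktop. Field names are illustrative.

```python
def build_share_map(drive_config: list) -> dict:
    """Map (desktop, share name) -> share path for every shared drive or folder in the pool."""
    mapping = {}
    for entry in drive_config:
        for share in entry["shares"]:
            mapping[(entry["desktop"], share["name"])] = share["path"]
    return mapping

def open_share(mapping, current_desktop, target_desktop, share_name):
    # Opening a share hosted on another desktop requires a connection to that desktop first.
    path = mapping[(target_desktop, share_name)]
    if target_desktop != current_desktop:
        return f"connect to {target_desktop}, then open {path}"
    return f"open local {path}"

config = [
    {"desktop": "rd-1", "shares": [{"name": "Projects", "path": "D:\\Projects"}]},
    {"desktop": "rd-2", "shares": [{"name": "Media", "path": "E:\\Media"}]},
]
shares = build_share_map(config)
print(open_share(shares, "rd-1", "rd-2", "Media"))
```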
-
Patent number: 12141607
Abstract: A machine learning device is provided in a vehicle able to supply electric power to the outside, and includes a processor configured to perform processing relating to training a machine learning model used in the vehicle. The processor is configured to lower an electric power consumption amount of the processing relating to training when disaster information is acquired, compared with when the disaster information is not acquired.
Type: Grant
Filed: June 16, 2021
Date of Patent: November 12, 2024
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Daiki Yokoyama, Norimi Asahara
-
Patent number: 12131183
Abstract: Technologies for providing efficient message polling include a compute device. The compute device includes circuitry to determine a memory location to monitor for a change indicative of a message from a device connected to a local bus of the compute device. The circuitry is also to determine whether data at the memory location satisfies reference data. Additionally, the circuitry is to process, in response to a determination that the data at the memory location satisfies the reference data, one or more messages in a message queue associated with the memory location.
Type: Grant
Filed: June 28, 2019
Date of Patent: October 29, 2024
Assignee: Intel Corporation
Inventors: Anjaneya Reddy Chagam Reddy, Scott D. Peterson
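A minimal Python sketch of the polling pattern: a monitored memory location is compared against reference data, and only when it satisfies the reference are the queued messages for that location processed. A plain bytearray stands in for device-visible memory.

```python
from collections import deque

memory = bytearray(8)                      # stand-in for the monitored memory location
message_queue = deque()                    # message queue associated with that location
REFERENCE = b"\x01"                        # reference data indicating "message available"

def device_writes_message(msg: bytes):
    # The device posts a message and flips the doorbell byte at the monitored location.
    message_queue.append(msg)
    memory[0:1] = REFERENCE

def poll_once(handled: list):
    # Check whether data at the memory location satisfies the reference data.
    if bytes(memory[0:1]) == REFERENCE:
        while message_queue:
            handled.append(message_queue.popleft())   # process the queued messages
        memory[0:1] = b"\x00"                         # re-arm the doorbell

handled = []
device_writes_message(b"hello")
poll_once(handled)
print(handled)   # [b'hello']
```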
-
Patent number: 12099853
Abstract: A process for invoking a robot from an application may include launching the application from a computing system to invoke a robot link embedded within the application. The process may also include initiating from the application a port discovery process to identify a port, port details, and a token. The process may further include generating by the application a randomized code and invoking a consent application requesting approval from a user of the computing system to invoke the robot from the application. The process may also include registering the randomized code with a local listener module and passing user information and the token to a global listener module. The process may further include receiving from the global listener module the token and port identification, allowing the application to authenticate itself with, and communicate with, the robot, thereby completing the robot invoking process.
Type: Grant
Filed: April 23, 2021
Date of Patent: September 24, 2024
Assignee: UiPath, Inc.
Inventors: Evan Cohen, Ankit Saraf, Naren Venkateswaran, Sankara Narayanan Venkataraman
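A simplified Python walkthrough of the handshake this abstract outlines: the application discovers a port and token, generates a randomized code, registers it with a local listener, checks user consent, and receives the token and port back from a global listener before talking to the robot. Every component here is a toy stand-in, not UiPath's actual API.

```python
import secrets

def discover_port():
    # Hypothetical port discovery: return a port, port details, and a token.
    return 5123, {"protocol": "http"}, secrets.token_hex(8)

def invoke_robot():
    port, port_details, token = discover_port()
    code = secrets.token_hex(4)                          # randomized code from the application

    local_listener = {"registered_codes": {code}}        # register the code with a local listener
    consent_granted = True                               # stand-in for the consent application

    if not consent_granted or code not in local_listener["registered_codes"]:
        raise PermissionError("user declined or code mismatch")

    global_listener = {"token": token, "port": port}     # receives user information and the token
    # The application authenticates with the robot using what the global listener returns.
    return f"robot invoked on port {global_listener['port']} with token {global_listener['token']}"

print(invoke_robot())
```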