Patents Examined by Qing-Yuan Wu
  • Patent number: 11801643
    Abstract: A method of enhancing a performance characteristic of an additive manufacturing apparatus, the method including: (a) dispensing a batch of a light polymerizable resin into the additive manufacturing apparatus, the batch characterized by at least one physical characteristic; (b) determining the unique identity of the batch; (c) sending the unique identity of the batch to a database; then (d) either: (i) receiving on the controller from the database modified operating instructions for the resin batch, which modified operating instructions have been modified based on the at least one physical characteristic, or (ii) receiving on the controller from the database the at least one physical characteristic for the specific resin batch and modifying the operating instructions based on the at least one physical characteristic; and then (e) producing the object from the batch of light polymerizable resin on the additive manufacturing apparatus with the modified operating instructions.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: October 31, 2023
    Assignee: Carbon, Inc.
    Inventors: John R. Tumbleston, Clarissa Gutierrez, Ronald Truong, Kyle Laaker, Craig B. Carlson, Roy Goldman, Abhishek Parmar
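    A minimal sketch of the batch-aware workflow this abstract describes, assuming a hypothetical controller that looks up a resin batch's measured viscosity by its identity and scales default exposure and lift-speed settings accordingly; the database contents, the characteristic, and the adjustment rule are illustrative, not Carbon's actual method.
    ```python
    # Illustrative sketch only; batch IDs, characteristics, and scaling rule are hypothetical.
    BATCH_DATABASE = {
        "LOT-2023-0142": {"viscosity_cps": 1250.0},
        "LOT-2023-0198": {"viscosity_cps": 980.0},
    }

    DEFAULT_INSTRUCTIONS = {"exposure_time_s": 2.0, "lift_speed_mm_s": 1.5}
    REFERENCE_VISCOSITY_CPS = 1000.0


    def modified_instructions(batch_id: str) -> dict:
        """Return operating instructions adjusted for the batch's physical characteristic."""
        characteristics = BATCH_DATABASE[batch_id]           # query the database by batch identity
        scale = characteristics["viscosity_cps"] / REFERENCE_VISCOSITY_CPS
        instructions = dict(DEFAULT_INSTRUCTIONS)
        instructions["exposure_time_s"] *= scale             # thicker resin -> longer exposure
        instructions["lift_speed_mm_s"] /= scale             # thicker resin -> slower lift
        return instructions


    if __name__ == "__main__":
        print(modified_instructions("LOT-2023-0142"))        # instructions used to produce the object
    ```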
  • Patent number: 11782754
    Abstract: The disclosure provides for repositioning applications from physical devices to a cloud location without removing the applications from the physical devices. This provides advantages of cloud-based availability for the applications while preserving device configurations. Thus, a user may continue to use the local version during transition to cloud usage so that if a problem arises during transition, adverse effects on user productivity are mitigated. Examples include generating, on a device, a first virtualization layer, and uninstalling an application from the first virtualization layer while capturing uninstallation traffic within the first virtualization layer. Examples further include generating, on the device, a second virtualization layer, installing the application in the second virtualization layer, and generating, from the second virtualization layer with the installed application, an application package. Examples are able to position the application package on a remote node for execution.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: October 10, 2023
    Assignee: VMware, Inc.
    Inventors: Vignesh Raja Jayaraman, Sisimon Soman
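    A hedged sketch of the two-layer repositioning flow described above: the uninstall is captured inside one virtualization layer rather than applied to the device, a fresh install is captured inside a second layer, and that second layer becomes the application package for the remote node. The classes and method names are hypothetical stand-ins, not VMware's implementation.
    ```python
    # Illustrative sketch; layer/package structures are invented for the example.
    class VirtualizationLayer:
        def __init__(self, name: str):
            self.name = name
            self.captured_ops = []   # file/registry operations captured in this layer
            self.installed = set()

        def uninstall(self, app: str) -> None:
            # The removal is recorded in the layer instead of touching the device.
            self.captured_ops.append(("remove", app))

        def install(self, app: str) -> None:
            self.captured_ops.append(("add", app))
            self.installed.add(app)

        def to_package(self) -> dict:
            return {"layer": self.name, "apps": sorted(self.installed),
                    "ops": list(self.captured_ops)}


    def reposition_to_cloud(app: str) -> dict:
        layer1 = VirtualizationLayer("capture-uninstall")
        layer1.uninstall(app)                      # local copy stays usable on the device
        layer2 = VirtualizationLayer("capture-install")
        layer2.install(app)
        return layer2.to_package()                 # positioned on a remote node for execution


    if __name__ == "__main__":
        print(reposition_to_cloud("AcmeEditor"))
    ```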
  • Patent number: 11775334
    Abstract: Methods and apparatuses are described for provisioning and managing data orchestration platforms in a cloud computing environment. A server provisions in a first region a first data orchestration platform comprising (i) a first data transformation instance, (ii) first endpoints, and (iii) a first data integration instance. The server provisions in a second region a second data orchestration platform comprising (i) a second data transformation instance, (ii) second endpoints, and (iii) a second data integration instance. The server integrates the first data integration instance and the second data integration instance with an identity authentication service. The server monitors operational status of the first orchestration platform and the second orchestration platform using a monitoring service. The server refreshes virtual computing resources in each of the first orchestration platform and the second orchestration platform using a rehydration service.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: October 3, 2023
    Assignee: FMR LLC
    Inventors: Terence Doherty, Saurabh Singh, Aniruththan Somu Duraisamy, Digvijay Narayan Singh, Avinash Mysore Geethananda, Aravind Ganesan
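    A rough sketch of the multi-region provisioning flow in the abstract, assuming invented platform fields, an identity-provider URL, and a token "rehydration" step; none of these names come from FMR's system.
    ```python
    # Illustrative sketch; region names, endpoints, and statuses are placeholders.
    def provision_platform(region: str) -> dict:
        return {
            "region": region,
            "transformation_instance": f"transform-{region}",
            "endpoints": [f"https://{region}.example.internal/api"],
            "integration_instance": f"integrate-{region}",
            "status": "provisioned",
        }


    def integrate_identity(platforms: list, idp_url: str) -> None:
        for p in platforms:
            p["identity_provider"] = idp_url       # both integration instances share one identity service


    def monitor(platforms: list) -> dict:
        return {p["region"]: p["status"] for p in platforms}


    def rehydrate(platform: dict) -> None:
        # Stand-in for refreshing virtual computing resources with freshly built ones.
        platform["transformation_instance"] += "-rehydrated"


    if __name__ == "__main__":
        plats = [provision_platform("us-east"), provision_platform("eu-west")]
        integrate_identity(plats, "https://idp.example.internal")
        for p in plats:
            rehydrate(p)
        print(monitor(plats))
    ```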
  • Patent number: 11774121
    Abstract: An actuator in an HVAC system includes a mechanical transducer, a processing circuit, a wireless transceiver, and a power circuit. The processing circuit includes a processor and memory and is configured to operate the mechanical transducer according to a control program stored in the memory. The wireless transceiver is configured to facilitate bidirectional wireless data communications between the processing circuit and an external device. The power circuit is configured to draw power from a wireless signal received via the wireless transceiver and power the processing circuit and the wireless transceiver using the drawn power. The processing circuit is configured to use the power drawn from the wireless signal to wirelessly transmit data stored in the memory of the actuator to the external device via the wireless transceiver, wirelessly receive data from the external device via the wireless transceiver, and store the data received from the external device in the memory.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: October 3, 2023
    Assignee: Johnson Controls Technology Company
    Inventors: Robert K. Alexander, Christopher Merkl, Gary A. Romanowich, Bernard Clement, Kevin Weiss
  • Patent number: 11775335
    Abstract: Disclosed are various examples for platform independent graphics processing unit (GPU) profiles for more efficient utilization of GPU resources. A virtual machine configuration can be identified to include a platform independent graphics computing requirement. Hosts can be identified as available in a computing environment based on the platform independent graphics computing requirement. The virtual machines can be migrated and placed to maximize usage of the total memory of GPU resources of the hosts.
    Type: Grant
    Filed: January 19, 2023
    Date of Patent: October 3, 2023
    Assignee: VMware, Inc.
    Inventors: Akshay Bhandari, Muralidhara Gupta, Nidhin Urmese
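    A minimal placement sketch for the idea above, assuming each VM's platform-independent GPU requirement reduces to a GPU-memory figure and that a best-fit heuristic is an acceptable stand-in for the patent's placement logic.
    ```python
    # Illustrative heuristic; requirement and capacity figures are invented.
    def place_vms(vms: dict, hosts: dict) -> dict:
        """vms: {vm: required_gpu_mem_gb}, hosts: {host: free_gpu_mem_gb}."""
        placement = {}
        free = dict(hosts)
        # Place the largest requirements first so they are not stranded.
        for vm, need in sorted(vms.items(), key=lambda kv: -kv[1]):
            candidates = [h for h, cap in free.items() if cap >= need]
            if not candidates:
                placement[vm] = None                              # no host satisfies the requirement
                continue
            best = min(candidates, key=lambda h: free[h] - need)  # tightest remaining fit
            free[best] -= need
            placement[vm] = best
        return placement


    if __name__ == "__main__":
        print(place_vms({"vm1": 8, "vm2": 4, "vm3": 12}, {"hostA": 16, "hostB": 12}))
    ```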
  • Patent number: 11775340
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for dynamically modeling a page using dynamic data. One of the methods includes obtaining, from a user device associated with a first resource of a dynamic modeling system, a dynamic final event comprising data representing a transaction of the dynamic modeling system; generating, by a rule monitor of the dynamic modeling system and using the data representing the transaction, a task chain for the plurality of tasks, comprising: generating a plurality of tasks in the task chain, and determining, for each task in the task chain, one or more criteria for executing the task; and for each task of the plurality of tasks: determining, by the rule monitor, that the one or more criteria for executing the task are satisfied; and in response to determining that the one or more criteria are satisfied, executing the task.
    Type: Grant
    Filed: January 14, 2022
    Date of Patent: October 3, 2023
    Inventor: Hayssam Hamze
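    An illustrative sketch of the rule-monitor flow described above: a final event carrying transaction data is turned into a task chain, and each task runs only once its criteria are satisfied. The task names and criteria are invented for the example.
    ```python
    # Illustrative sketch; the chain derivation and criteria are hypothetical.
    from dataclasses import dataclass, field
    from typing import Callable, List


    @dataclass
    class Task:
        name: str
        criteria: List[Callable[[dict], bool]] = field(default_factory=list)

        def ready(self, event: dict) -> bool:
            return all(check(event) for check in self.criteria)


    def build_task_chain(event: dict) -> List[Task]:
        # The rule monitor derives the chain from the transaction data in the final event.
        return [
            Task("validate", [lambda e: "amount" in e]),
            Task("post_ledger", [lambda e: e.get("amount", 0) > 0]),
            Task("notify_user", [lambda e: e.get("user") is not None]),
        ]


    def run_chain(event: dict) -> list:
        executed = []
        for task in build_task_chain(event):
            if task.ready(event):              # execute only when the task's criteria are met
                executed.append(task.name)
        return executed


    if __name__ == "__main__":
        print(run_chain({"amount": 42.0, "user": "alice"}))
    ```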
  • Patent number: 11768702
    Abstract: An apparatus and a method for scheduling a task in an electronic device including a heterogeneous multi-processor are provided. The electronic device includes a memory and a processor operatively connected to the memory and including a plurality of heterogeneous cores. The processor may be configured to identify, when a task to be scheduled occurs, a scheduling group having the task among a plurality of predefined scheduling groups, and to perform scheduling for the task, based on the identified scheduling group having the task and a priority of the task.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: September 26, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyunchul Seok, Choonghoon Park, Byungsoo Kwon, Bumgyu Park, Jonglae Park, Junhwa Seo, Youngcheol Shin, Youngtae Lee
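    A hedged sketch of the scheduling idea above for a heterogeneous (big/little) processor: the task's predefined scheduling group plus its priority pick the core cluster. Group names, the priority threshold, and the cluster layout are assumptions.
    ```python
    # Illustrative sketch; groups, threshold, and clusters are hypothetical.
    SCHEDULING_GROUPS = {
        "foreground": {"preferred": "big", "fallback": "little"},
        "background": {"preferred": "little", "fallback": "little"},
        "system":     {"preferred": "big", "fallback": "big"},
    }

    HIGH_PRIORITY_THRESHOLD = 100   # lower numeric value == higher priority


    def pick_cluster(task_group: str, priority: int) -> str:
        group = SCHEDULING_GROUPS[task_group]
        # High-priority tasks get the group's preferred cluster; others fall back.
        if priority < HIGH_PRIORITY_THRESHOLD:
            return group["preferred"]
        return group["fallback"]


    if __name__ == "__main__":
        print(pick_cluster("foreground", 90))    # -> big
        print(pick_cluster("background", 120))   # -> little
    ```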
  • Patent number: 11762711
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: September 19, 2023
    Assignee: Oracle International Corporation
    Inventors: David Dice, Virendra J. Marathe
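    A much-reduced sketch of the lock structure the abstract describes, assuming Python threading primitives: writers take a node-level writer lock, then the synthetic global writer lock, then the top-level lock; readers take the global reader lock and the top-level lock only long enough to bump a reader count. The cohort hand-off and fairness policy are omitted.
    ```python
    # Illustrative sketch; not Oracle's implementation and not NUMA-aware in any real sense.
    import threading


    class SimplifiedNumaRWLock:
        def __init__(self, num_nodes: int):
            self.node_writer_locks = [threading.Lock() for _ in range(num_nodes)]
            self.global_writer_lock = threading.Lock()
            self.global_reader_lock = threading.Lock()
            self.top_level_lock = threading.Lock()
            self.count_lock = threading.Lock()
            self.no_readers = threading.Condition(self.count_lock)
            self.reader_count = 0

        def acquire_read(self):
            with self.global_reader_lock:          # synthetic-level global reader lock
                with self.top_level_lock:          # held only long enough to register the reader
                    with self.count_lock:
                        self.reader_count += 1

        def release_read(self):
            with self.count_lock:
                self.reader_count -= 1
                if self.reader_count == 0:
                    self.no_readers.notify_all()   # wake a writer waiting for readers to drain

        def acquire_write(self, node: int):
            self.node_writer_locks[node].acquire()  # node-level (cohort) writer lock
            self.global_writer_lock.acquire()       # synthetic-level global writer lock
            self.top_level_lock.acquire()           # blocks new readers from registering
            with self.count_lock:
                while self.reader_count > 0:        # wait for in-flight readers
                    self.no_readers.wait()

        def release_write(self, node: int):
            self.top_level_lock.release()
            self.global_writer_lock.release()
            self.node_writer_locks[node].release()


    if __name__ == "__main__":
        lock = SimplifiedNumaRWLock(num_nodes=2)
        lock.acquire_write(node=0); lock.release_write(node=0)
        lock.acquire_read(); lock.release_read()
    ```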
  • Patent number: 11747776
    Abstract: A fault prediction system for building equipment includes one or more memory devices configured to store instructions that, when executed on one or more processors, cause the one or more processors to receive device data for a plurality of devices of the building equipment, the device data indicating performance of the plurality of devices; generate, based on the received device data, a plurality of prediction models comprising at least one of single device prediction models generated for each of the plurality of devices or cluster prediction models generated for device clusters of the plurality of devices; label each of the plurality of prediction models as an accurately predicting model or an inaccurately predicting model based on a performance of each of the plurality of prediction models; and predict a device fault with each of the plurality of prediction models labeled as an accurately predicting model.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: September 5, 2023
    Assignee: JOHNSON CONTROLS TYCO IP HOLDINGS LLP
    Inventors: Young M. Lee, Sugumar Murugesan, ZhongYi Jin, Jaume Amores, Kelsey Carle Schuster, Steven R. Vitullo, Henan Wang
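    A simplified sketch of the model-labeling step above: candidate prediction models are scored on held-out device data, labeled accurate or inaccurate against a threshold, and only accurate models are consulted for fault prediction. The rule-based models and the 0.8 threshold are illustrative.
    ```python
    # Illustrative sketch; models here are plain callables, not trained estimators.
    ACCURACY_THRESHOLD = 0.8


    def label_models(models: dict, validation: list) -> dict:
        """validation: list of (features, fault_observed) pairs."""
        labels = {}
        for name, model in models.items():
            correct = sum(1 for x, y in validation if model(x) == y)
            accuracy = correct / len(validation)
            labels[name] = "accurate" if accuracy >= ACCURACY_THRESHOLD else "inaccurate"
        return labels


    def predict_fault(models: dict, labels: dict, features: dict) -> bool:
        # Only models labeled accurate vote; a fault is predicted if any of them fires.
        return any(model(features)
                   for name, model in models.items() if labels[name] == "accurate")


    if __name__ == "__main__":
        models = {
            "chiller-42": lambda x: x["vibration"] > 0.7,
            "cluster-ahu": lambda x: x["temp_delta"] > 5.0,
        }
        validation = [({"vibration": 0.9, "temp_delta": 1.0}, True),
                      ({"vibration": 0.2, "temp_delta": 6.0}, False),
                      ({"vibration": 0.1, "temp_delta": 0.5}, False)]
        labels = label_models(models, validation)
        print(labels, predict_fault(models, labels, {"vibration": 0.8, "temp_delta": 2.0}))
    ```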
  • Patent number: 11748107
    Abstract: An apparatus, and corresponding method, for input/output (I/O) value determination, generates an I/O instruction for an I/O device, the I/O device including a state machine with state transition logic. The apparatus comprises a controller that includes a simplified state machine with a reduced version of the state transition logic of the state machine of the I/O device. The controller is configured to improve instruction execution performance of a processor core by employing the simplified state machine to predict at least one state value of at least one I/O device true state value to be affected by the I/O instruction at the I/O device.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: September 5, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Jason D. Zebchuk, Wilson P. Snyder, II, Michael S. Bertone
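    A hedged sketch of the prediction idea above: the controller keeps a reduced copy of the I/O device's state-transition table and uses it to predict the device's response before the real device answers. The device, its states, and the reduced table are invented for illustration.
    ```python
    # Illustrative sketch; the transition tables are hypothetical.
    FULL_TRANSITIONS = {       # the device's true state machine (abridged)
        ("IDLE", "start"): "BUSY",
        ("BUSY", "poll"): "BUSY",
        ("BUSY", "complete"): "DONE",
        ("DONE", "read_status"): "IDLE",
    }

    REDUCED_TRANSITIONS = {    # controller's simplified copy: only what prediction needs
        ("IDLE", "start"): "BUSY",
        ("BUSY", "poll"): "BUSY",
    }


    def predict_io_value(state: str, instruction: str):
        """Predict the device's next state, or None if the reduced machine cannot say."""
        return REDUCED_TRANSITIONS.get((state, instruction))


    def issue_io(state: str, instruction: str) -> str:
        """The device's true behavior, used later to confirm or correct the prediction."""
        return FULL_TRANSITIONS[(state, instruction)]


    if __name__ == "__main__":
        predicted = predict_io_value("IDLE", "start")   # the core continues using this value
        actual = issue_io("IDLE", "start")              # the device's reply arrives later
        print(predicted, actual, predicted == actual)
    ```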
  • Patent number: 11741413
    Abstract: The disclosed techniques generally relate to the use of action paths comprising sequences of steps performed by a user to efficiently perform tasks or resolve incidents. Action paths as discussed herein may be used to achieve more efficient outcomes, to train new employees, or to anticipate the future needs of a user.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: August 29, 2023
    Assignee: ServiceNow, Inc.
    Inventors: Ivan Rodrigo Garay, Erick Koji Hasegawa
  • Patent number: 11740673
    Abstract: Methods and apparatus to provide holistic global performance and power management are described. In an embodiment, logic (e.g., coupled to each compute node of a plurality of compute nodes) causes determination of a policy for power and performance management across the plurality of compute nodes. The policy is coordinated across the plurality of compute nodes to manage a job to one or more objective functions, where the job includes a plurality of tasks that are to run concurrently on the plurality of compute nodes. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: August 29, 2023
    Assignee: Intel Corporation
    Inventors: Jonathan M. Eastep, Richard J. Greco
  • Patent number: 11734062
    Abstract: Various techniques are used to schedule computing jobs for execution by a computing resource. In an example method, a schedule is generated by selecting, for a first slot in the schedule, a first computing job based on a first priority of the first computing job with respect to a first characteristic. A second computing job is selected for a second slot in the schedule based on a second priority of the second computing job with respect to a second characteristic. The second slot occurs after the first slot in the schedule, and the second characteristic is different than the first characteristic. The first characteristic or the second characteristic includes an execution frequency. The computing jobs are executed based on the schedule.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 22, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Rohit Bahl, Stephen Williams, Debashish Ghosh
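    An illustrative sketch of the scheduling idea above: consecutive slots are filled by ranking the remaining jobs on different characteristics, here alternating between execution frequency and time since last run. The two characteristics and the alternation rule are assumptions, not Cisco's actual policy.
    ```python
    # Illustrative sketch; job attributes and the alternation rule are hypothetical.
    def build_schedule(jobs: dict, slots: int) -> list:
        """jobs: {name: {"frequency": runs_per_day, "staleness": hours_since_run}}."""
        remaining = dict(jobs)
        schedule = []
        for slot in range(slots):
            if not remaining:
                break
            # Even slots favor high execution frequency, odd slots favor staleness.
            key = "frequency" if slot % 2 == 0 else "staleness"
            pick = max(remaining, key=lambda j: remaining[j][key])
            schedule.append(pick)
            del remaining[pick]
        return schedule


    if __name__ == "__main__":
        jobs = {
            "metrics-rollup": {"frequency": 96, "staleness": 1},
            "config-backup":  {"frequency": 4,  "staleness": 30},
            "log-compaction": {"frequency": 24, "staleness": 6},
        }
        print(build_schedule(jobs, slots=3))
    ```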
  • Patent number: 11734014
    Abstract: Embodiments of the present disclosure provide a device for implementing resource index replacement, comprising: an instruction scheduling unit configured to receive a first type resource index from a resource allocating unit and then issue an instruction to an instruction executing unit for execution; the instruction executing unit configured to receive a second type resource index from the resource allocating unit, to execute the instruction from the instruction scheduling unit, and to issue a result of the instruction execution and the second type resource index to a result storing unit, the result storing unit comprising a plurality of resources for storing instruction execution results. The resource allocating unit is configured to allocate the first type resource index to an instruction entering the instruction scheduling unit and to allocate the second type resource index to an instruction entering the instruction executing unit.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: August 22, 2023
    Assignee: C-SKY Microsystems Co., Ltd.
    Inventor: Chang Liu
  • Patent number: 11734079
    Abstract: The present disclosure relates to a processor that includes one or more processing elements associated with one or more instruction set architectures. The processor is configured to receive a request from an application executed by a first processing element of the one or more processing elements to enable a feature associated with an instruction set architecture. Additionally, the processor is configured to enable the application to utilize the feature without a system call occurring when the feature is associated with an instruction set architecture associated with the first processing element.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: August 22, 2023
    Assignee: Intel Corporation
    Inventors: Toby Opferman, Eliezer Weissmann, Robert Valentine, Russell Cameron Arnold
  • Patent number: 11726836
    Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: August 15, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shandan Zhou, Saurabh Agarwal, Karthikeyan Subramanian, Thomas Moscibroda, Paul Naveen Selvaraj, Sandeep Ramji, Sorin Iftimie, Nisarg Sheth, Wanghai Gu, Ajay Mani, Si Qin, Yong Xu, Qingwei Lin
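    A simplified sketch of the control loop above: a stubbed failure-prediction model estimates the likelihood of expansion failures on a node cluster, and that likelihood maps to a defragmentation severity level. The feature weights and severity thresholds are illustrative assumptions.
    ```python
    # Illustrative sketch; the "model" is a stub, not Microsoft's prediction model.
    def predict_expansion_failure(cluster: dict) -> float:
        """Higher utilization and fragmentation -> higher estimated failure likelihood."""
        score = 0.6 * cluster["core_utilization"] + 0.4 * cluster["fragmentation_index"]
        return min(1.0, max(0.0, score))


    def defragmentation_severity(likelihood: float) -> str:
        # Higher predicted risk justifies a more disruptive defragmentation pass.
        if likelihood >= 0.8:
            return "aggressive"     # consolidate aggressively, accepting more migrations
        if likelihood >= 0.5:
            return "moderate"       # defragment only with low-impact migrations
        return "none"


    if __name__ == "__main__":
        cluster = {"core_utilization": 0.92, "fragmentation_index": 0.7}
        p = predict_expansion_failure(cluster)
        print(round(p, 2), defragmentation_severity(p))
    ```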
  • Patent number: 11714672
    Abstract: A system is provided that includes one management cluster to manage network function virtualization infrastructure (NFVI) resource lifecycles in more than one edge POD location, where resources include hardware and/or software, and where the software resource lifecycle includes software development, upgrades, downgrades, logging, monitoring, etc. Methods are provided for decoupling storage from compute and network functions in each virtual machine (VM)-based NFVI deployment location and moving it to a centralized location. Centralized storage can simultaneously interact with more than one edge POD, and security is built in with periodic key rotation. Methods are provided for increasing NFVI system viability by dedicating (fencing) CPU core pairs for specific controller operations and workload operations, and sharing the CPU cores for specific tasks.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: August 1, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Santanu Dasgupta, Chandragupta Ganguly, Ian Wells, Rajiv Asati, Om Prakash Suthar, Vinod Pandarinathan, Ajay Kalambur, Yichen Wang, John Wei-I Wu
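    A small sketch of the core "fencing" idea mentioned above, assuming CPU core pairs (e.g., hyperthread siblings) are dedicated to controller operations or workloads and the remainder is shared; the core counts and category names are illustrative.
    ```python
    # Illustrative sketch; the pairing scheme and pool sizes are hypothetical.
    def fence_core_pairs(total_cores: int, controller_pairs: int, workload_pairs: int) -> dict:
        pairs = [(c, c + 1) for c in range(0, total_cores, 2)]   # sibling pairs (0,1), (2,3), ...
        return {
            "controller": pairs[:controller_pairs],                               # fenced for controller ops
            "workload": pairs[controller_pairs:controller_pairs + workload_pairs],  # fenced for workloads
            "shared": pairs[controller_pairs + workload_pairs:],                  # shared for other tasks
        }


    if __name__ == "__main__":
        print(fence_core_pairs(total_cores=16, controller_pairs=2, workload_pairs=4))
    ```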
  • Patent number: 11714681
    Abstract: A method for dynamically assigning an inference request is disclosed. A method for dynamically assigning an inference request may include determining at least one model to process an inference request on a plurality of computing platforms, the plurality of computing platforms including at least one Central Processing Unit (CPU) and at least one Graphics Processing Unit (GPU), obtaining, with at least one processor, profile information of the at least one model, the profile information including measured characteristics of the at least one model, dynamically determining a selected computing platform from between the at least one CPU and the at least one GPU for responding to the inference request based on an optimized objective associated with a status of the computing platform and the profile information, and routing, with at least one processor, the inference request to the selected computing platform. A system and computer program product are also disclosed.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: August 1, 2023
    Assignee: Visa International Service Association
    Inventors: Hao Yang, Biswajit Das, Yu Gu, Peter Walker, Igor Karpenko, Robert Brian Christensen
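    An illustrative sketch of the routing decision above: a model's profiled per-request latency on each platform is combined with that platform's current queue depth, and the request goes to the platform with the lower estimated completion time. The profile numbers and the cost formula are assumptions for the sketch.
    ```python
    # Illustrative sketch; profile and status values are invented.
    def estimated_latency_ms(profile: dict, status: dict, platform: str) -> float:
        per_request = profile[platform]["latency_ms"]
        queued = status[platform]["queue_depth"]
        return per_request * (queued + 1)        # our request waits behind the current queue


    def route_inference(profile: dict, status: dict) -> str:
        costs = {p: estimated_latency_ms(profile, status, p) for p in ("cpu", "gpu")}
        return min(costs, key=costs.get)


    if __name__ == "__main__":
        profile = {"cpu": {"latency_ms": 12.0}, "gpu": {"latency_ms": 3.0}}
        status = {"cpu": {"queue_depth": 0}, "gpu": {"queue_depth": 10}}
        print(route_inference(profile, status))   # a GPU backlog makes the CPU the better choice
    ```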
  • Patent number: 11714668
    Abstract: An implementation of the disclosure provides identifying an amount of a resource associated with a virtual machine (VM) hosted by a first host machine of a plurality of host machines that are coupled to and are managed by a host controller, wherein a part of a quality manager is executed at the first host machine and another part of the quality manager is executed in the host controller. A requirement of an additional amount of resource by the VM is determined in view of an occurrence of an event associated with the VM. The VM may be migrated to a second host machine of the plurality of host machines for a duration of the event in view of the additional amount of the resource.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: August 1, 2023
    Assignee: Red Hat Israel, Ltd.
    Inventor: Yaniv Kaul
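    A simplified sketch of the flow above: when an event makes a VM need more of a resource than its current host can spare, the VM is moved for the event's duration to a host with enough headroom and is returned afterwards. Host capacities and the event model are illustrative.
    ```python
    # Illustrative sketch; hosts, capacities, and the decision rule are hypothetical.
    def pick_host_for_event(hosts: dict, needed: int, current: str):
        """hosts: {host: free_memory_gb}. Return a host that can absorb the extra need."""
        candidates = {h: free for h, free in hosts.items() if h != current and free >= needed}
        return max(candidates, key=candidates.get) if candidates else None


    def handle_event(vm: dict, hosts: dict, extra_need_gb: int) -> dict:
        current = vm["host"]
        if hosts[current] >= extra_need_gb:
            return {"action": "stay", "host": current}       # the current host can cover the event
        target = pick_host_for_event(hosts, extra_need_gb, current)
        if target is None:
            return {"action": "stay", "host": current}       # nowhere better; do not migrate
        return {"action": "migrate_for_event", "host": target, "return_to": current}


    if __name__ == "__main__":
        vm = {"name": "db01", "host": "host-a"}
        hosts = {"host-a": 2, "host-b": 16, "host-c": 8}
        print(handle_event(vm, hosts, extra_need_gb=6))
    ```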
  • Patent number: 11714674
    Abstract: A method of handling a first input/output operation (IO) from a first virtual machine (VM), wherein the first VM is located in a first data center and the first IO is directed to a data store in a second data center, includes the steps of: connecting, by a proxy located in the first data center, to the data store; after connecting to the data store, caching, by the proxy, data of the first VM stored in the data store, wherein caching the data of the first VM comprises storing the data of the first VM in a cache located in the first data center; redirecting, by a redirection filter to the proxy, the first IO; and performing, by the proxy, the first IO on the cache in the first data center.
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: August 1, 2023
    Assignee: VMware, Inc.
    Inventor: Brian Forney
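    A hedged sketch of the IO path described above: a redirection filter hands the VM's IO to a local proxy, which serves reads from a cache in the first data center and falls back to the data store in the second data center on a miss. The classes and the write-through policy are illustrative, not VMware's implementation.
    ```python
    # Illustrative sketch; the block-level interface and caching policy are hypothetical.
    class RemoteDatastore:
        def __init__(self):
            self.blocks = {}

        def read(self, block_id):
            return self.blocks.get(block_id)

        def write(self, block_id, data):
            self.blocks[block_id] = data


    class LocalProxy:
        def __init__(self, datastore: RemoteDatastore):
            self.datastore = datastore   # connection to the data store in the second data center
            self.cache = {}              # cache located in the first data center

        def handle_read(self, block_id):
            if block_id in self.cache:                    # fast path: local cache hit
                return self.cache[block_id]
            data = self.datastore.read(block_id)          # slow path: cross-site read
            if data is not None:
                self.cache[block_id] = data
            return data

        def handle_write(self, block_id, data):
            self.cache[block_id] = data                   # keep the local copy current
            self.datastore.write(block_id, data)          # write through to the remote site


    if __name__ == "__main__":
        proxy = LocalProxy(RemoteDatastore())             # the redirection filter targets this proxy
        proxy.handle_write("blk-7", b"hello")
        print(proxy.handle_read("blk-7"))
    ```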