Patents by Inventor Naresh Patel
Naresh Patel has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240113950
Abstract: This disclosure describes techniques for monitoring, scheduling, and performance management for virtualization infrastructures within networks. In one example, a computing system includes a plurality of different cloud-based compute clusters (e.g., different cloud projects), each comprising a set of compute nodes. Policy agents execute on the compute nodes to monitor performance and usage metrics relating to resources of the compute nodes. Policy controllers within each cluster deploy policies to the policy agents and evaluate performance and usage metrics from the policy agents by application of one or more rulesets for infrastructure elements of the compute cluster. Each of the policy controllers outputs data to a multi-cluster dashboard software system indicative of a current health status for the infrastructure elements based on the evaluation of the performance and usage metrics for the cluster.
Type: Application
Filed: December 7, 2023
Publication date: April 4, 2024
Inventors: Harshit Naresh Chitalia, Avi K. Patel, Parantap Roy, Travis Gregory Newhouse, Sumeet Singh, Neeren Shripad Patki
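A minimal sketch of the per-cluster evaluation loop this abstract describes: agents report usage metrics, a policy controller applies a ruleset of thresholds, and the result is pushed to a shared dashboard view. All names here (`evaluate_cluster`, the ruleset shape, the thresholds) are illustrative assumptions, not taken from the patent.

```python
def evaluate_cluster(metrics, ruleset):
    """Apply threshold rules to per-node metrics; return per-node health."""
    health = {}
    for node, usage in metrics.items():
        # A rule fires when the monitored value exceeds its limit.
        violations = [name for name, (key, limit) in ruleset.items()
                      if usage.get(key, 0) > limit]
        health[node] = "unhealthy" if violations else "healthy"
    return health

# Each cluster's controller contributes its result to a multi-cluster view.
dashboard = {}
agent_metrics = {"node-1": {"cpu": 0.95, "mem": 0.40},
                 "node-2": {"cpu": 0.30, "mem": 0.55}}
ruleset = {"cpu_high": ("cpu", 0.90), "mem_high": ("mem", 0.85)}
dashboard["cluster-a"] = evaluate_cluster(agent_metrics, ruleset)
```

In the patented design the rulesets are deployed to agents and evaluated per cluster; this sketch only shows the shape of the rule-evaluation step.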
-
Patent number: 11820290
Abstract: Systems and methods for determining an alignment of a trailer relative to a docking bay or a vehicle bay door using dynamic depth filtering. Image data and position data are captured by a 3D camera system with an at least partially downward-facing field of view. When a trailer is approaching the docking bay or door, the captured image data includes a top surface of the trailer. A dynamic height range is determined based on an estimated height of the top surface of the trailer in the image data, and a dynamic depth filter is applied to filter out image data corresponding to heights outside of the dynamic height range. An angular position and/or lateral offset of the trailer is determined based on the depth-filtered image data.
Type: Grant
Filed: April 14, 2021
Date of Patent: November 21, 2023
Assignee: Niagara Bottling, LLC
Inventors: Jimmy Erik Penaloza, Parth Naresh Patel
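The core filtering step can be sketched as follows, assuming per-pixel heights from a downward-facing depth camera. The top-surface estimate (a high percentile) and the band width are illustrative choices, not the patented method's actual parameters.

```python
import numpy as np

def dynamic_depth_filter(heights, band=0.25):
    """heights: 2-D array of per-pixel heights seen by a downward camera.
    Estimate the trailer's top-surface height, build a dynamic height range
    around it, and mask out pixels outside that range."""
    top = np.percentile(heights, 95)       # estimated top-surface height
    mask = np.abs(heights - top) <= band   # dynamic height range
    return np.where(mask, heights, np.nan)

# Toy depth image: floor pixels near 0, trailer roof near 3.0 meters.
heights = np.array([[0.0, 0.1, 2.9],
                    [3.0, 3.1, 0.2],
                    [2.95, 3.05, 0.0]])
filtered = dynamic_depth_filter(heights)
```

The surviving (non-NaN) pixels would then feed the angular-position and lateral-offset estimate described in the abstract.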
-
Publication number: 20230186601
Abstract: Apparatus having model-based machine learning and inferencing logic for controlling object transfer, comprises: an image input component to receive image data derived from a captured image of an object; a captured image classifier to generate a first classification of the object by activating a trained model to analyse the image data; an input component to receive an object identifier for the object; an object identification classifier to generate a second classification of the object according to the object identifier; matching logic to detect failure to reconcile the first and second classifications; heuristic logic responsive to the matching logic to determine a causal factor in the failure; and training logic, operable when the heuristic logic determines that a causal factor in the failure to reconcile is a deficient first classification, to provide model training input comprising the image data and the object identifier to the model-based machine learning logic.
Type: Application
Filed: March 17, 2021
Publication date: June 15, 2023
Inventors: Timothy David HARTLEY, David Charles MCCAFFREY, Michael Andrew PALLISTER, Nimesh Naresh PATEL
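A sketch of the reconcile-and-retrain flow, with stub classifiers: when the image-based classification disagrees with the one derived from the object identifier, a heuristic decides whether the image classifier was at fault, and if so the image plus identifier is queued as training input. The confidence threshold is an assumed stand-in for the patent's heuristic logic.

```python
training_queue = []

def reconcile(image_class, id_class):
    """Matching logic: do the two classifications agree?"""
    return image_class == id_class

def handle_object(image_data, image_class, image_confidence, id_class):
    if reconcile(image_class, id_class):
        return "matched"
    # Heuristic (assumed): low model confidence suggests a deficient
    # first classification, so feed the example back for training.
    if image_confidence < 0.6:
        training_queue.append((image_data, id_class))
        return "queued-for-training"
    return "flagged-for-review"
```

A usage example: `handle_object("img-1", "apple", 0.4, "pear")` queues the image with the identifier-derived label `"pear"` as new training input.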
-
Publication number: 20230177391
Abstract: Provided is a machine learning apparatus comprising: a dataset for input to a training procedure of a machine learning model; data capture logic operable to capture from an object at least one datum for inclusion in the dataset; association logic operable to derive an additional characteristic of the object; annotator logic operable in response to the data capture logic and the association logic to create an annotation linking the additional characteristic with the at least one datum; storage logic operable to store the or each datum with an associated annotation in the dataset; and input logic to supply the dataset as machine learning input.
Type: Application
Filed: March 17, 2021
Publication date: June 8, 2023
Inventors: David PACKWOOD, Michael Andrew PALLISTER, Ariel Edgar RUIZ-GARCIA, Nimesh Naresh PATEL, Eleftherios FANIOUDAKIS
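A minimal sketch of the capture-annotate-store pipeline: a datum is captured from an object, an additional characteristic is derived (here via a hypothetical registry lookup standing in for the association logic), and the two are stored together as an annotated dataset entry.

```python
dataset = []

def derive_characteristic(object_id, registry):
    # Association logic (assumed form): look up a known property.
    return registry.get(object_id, "unknown")

def capture_and_annotate(object_id, datum, registry):
    # Annotator logic: link the derived characteristic with the datum.
    annotation = {"object_id": object_id,
                  "characteristic": derive_characteristic(object_id, registry)}
    # Storage logic: keep datum and annotation together in the dataset.
    dataset.append({"datum": datum, "annotation": annotation})

registry = {"obj-7": "damaged"}
capture_and_annotate("obj-7", b"raw-image-bytes", registry)
```

The resulting `dataset` is what the abstract's input logic would supply to model training.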
-
Publication number: 20220332250
Abstract: Systems and methods for determining an alignment of a trailer relative to a docking bay or a vehicle bay door using dynamic depth filtering. Image data and position data are captured by a 3D camera system with an at least partially downward-facing field of view. When a trailer is approaching the docking bay or door, the captured image data includes a top surface of the trailer. A dynamic height range is determined based on an estimated height of the top surface of the trailer in the image data, and a dynamic depth filter is applied to filter out image data corresponding to heights outside of the dynamic height range. An angular position and/or lateral offset of the trailer is determined based on the depth-filtered image data.
Type: Application
Filed: April 14, 2021
Publication date: October 20, 2022
Inventors: Jimmy Erik Penaloza, Parth Naresh Patel
-
Patent number: 10686906
Abstract: A method, non-transitory computer readable medium, and storage controller computing device that receives a read request from a client device. Data corresponding to the read request is retrieved from a flash cache comprising local flash memory. The data is returned to the client device in response to the read request. A determination is made as to whether the data is stored in a flash pool, which comprises a plurality of solid state drives (SSDs). The data is inserted into the flash pool when the determination indicates that the data is not stored in the flash pool. With this technology, a flash pool is populated based on hits in a flash cache. Accordingly, the flash cache is utilized to provide low-latency reads while the most important data is preserved in the flash pool to be used by another storage controller computing device in the event of a failover.
Type: Grant
Filed: May 2, 2016
Date of Patent: June 16, 2020
Assignee: NetApp, Inc.
Inventors: Mark Smith, Brian Naylor, Naresh Patel
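The read path described above can be sketched as follows (names and data structures are illustrative, not NetApp's implementation): serve the read from the local flash cache, then ensure the block is also present in the shared SSD flash pool so a partner controller can use it after a failover.

```python
flash_cache = {"blk-1": b"hot-data"}   # local flash memory
flash_pool = {}                        # shared SSDs, survives failover

def read(block_id):
    data = flash_cache.get(block_id)
    if data is None:
        return None        # a real system would fall through to disk here
    # Populate the flash pool based on the flash-cache hit.
    if block_id not in flash_pool:
        flash_pool[block_id] = data
    return data
```

The design choice the abstract highlights is that pool membership is driven by cache hits, so only data that has proven hot gets the failover-protected copy.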
-
Publication number: 20170318114
Abstract: A method, non-transitory computer readable medium, and storage controller computing device that receives a read request from a client device. Data corresponding to the read request is retrieved from a flash cache comprising local flash memory. The data is returned to the client device in response to the read request. A determination is made as to whether the data is stored in a flash pool, which comprises a plurality of solid state drives (SSDs). The data is inserted into the flash pool when the determination indicates that the data is not stored in the flash pool. With this technology, a flash pool is populated based on hits in a flash cache. Accordingly, the flash cache is utilized to provide low-latency reads while the most important data is preserved in the flash pool to be used by another storage controller computing device in the event of a failover.
Type: Application
Filed: May 2, 2016
Publication date: November 2, 2017
Inventors: Mark Smith, Brian Naylor, Naresh Patel
-
Patent number: 9405695
Abstract: A system and method for determining an optimal cache size of a computing system is provided. In some embodiments, the method comprises selecting a portion of an address space of a memory structure of the computing system. A workload of data transactions is monitored to identify a transaction of the workload directed to the portion of the address space. An effect of the transaction on a cache of the computing system is determined, and, based on the determined effect of the transaction, an optimal cache size satisfying a performance target is determined. In one such embodiment, the determining of the effect of the transaction on a cache of the computing system includes determining whether the effect would include a cache hit for a first cache size and determining whether the effect would include a cache hit for a second cache size different from the first cache size.
Type: Grant
Filed: November 5, 2013
Date of Patent: August 2, 2016
Assignee: NETAPP, INC.
Inventors: Koling Chang, Ravikanth Dronamraju, Mark Smith, Naresh Patel
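The two-size comparison at the heart of this abstract can be sketched by replaying a sampled workload through caches of two candidate sizes and counting which accesses would have hit. The LRU policy here is an assumption for illustration; the patent does not commit to a specific replacement policy.

```python
from collections import OrderedDict

def lru_hits(trace, size):
    """Count hits a size-limited LRU cache would see on this trace."""
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)   # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > size:
                cache.popitem(last=False)   # evict least recently used
    return hits

# Sampled transactions directed at the monitored portion of the address space.
trace = ["a", "b", "c", "a", "b", "c", "a"]
small, large = lru_hits(trace, 2), lru_hits(trace, 3)
```

On this cyclic trace a 2-entry cache thrashes while a 3-entry cache captures the working set, which is exactly the kind of difference the sizing method uses to pick a cache size satisfying a performance target.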
-
Publication number: 20150127905
Abstract: A system and method for determining an optimal cache size of a computing system is provided. In some embodiments, the method comprises selecting a portion of an address space of a memory structure of the computing system. A workload of data transactions is monitored to identify a transaction of the workload directed to the portion of the address space. An effect of the transaction on a cache of the computing system is determined, and, based on the determined effect of the transaction, an optimal cache size satisfying a performance target is determined. In one such embodiment, the determining of the effect of the transaction on a cache of the computing system includes determining whether the effect would include a cache hit for a first cache size and determining whether the effect would include a cache hit for a second cache size different from the first cache size.
Type: Application
Filed: November 5, 2013
Publication date: May 7, 2015
Applicant: NETAPP, INC.
Inventors: Koling Chang, Ravikanth Dronamraju, Mark Smith, Naresh Patel
-
Patent number: 8255630
Abstract: The present invention includes storing in a main memory data block tags corresponding to blocks of data previously inserted into a buffer cache memory and then evicted from the buffer cache memory or written over in the buffer cache memory. Counters associated with the tags are updated when look-up requests to look up data block tags are received from a cache look-up algorithm.
Type: Grant
Filed: August 13, 2008
Date of Patent: August 28, 2012
Assignee: Network Appliance, Inc.
Inventors: Naveen Bali, Naresh Patel
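A sketch of the bookkeeping described: tags of blocks evicted from (or overwritten in) the buffer cache are retained in main memory, and a counter is bumped each time the cache look-up algorithm asks about such a tag, a signal that a larger cache would have kept the block. Function names are illustrative.

```python
evicted_tags = {}      # tag -> look-up counter, kept after eviction

def on_evict(tag):
    """Record the tag of a block leaving the buffer cache."""
    evicted_tags.setdefault(tag, 0)

def on_lookup(tag):
    """Called by the cache look-up algorithm for every tag it checks."""
    if tag in evicted_tags:
        evicted_tags[tag] += 1   # would have hit in a larger cache

on_evict("tag-42")
on_lookup("tag-42")
on_lookup("tag-42")
on_lookup("tag-99")              # never evicted: not tracked
```

Storing only tags and counters (not the data blocks themselves) keeps this metadata small enough to hold in main memory.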
-
Patent number: 8176251
Abstract: The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache-related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.
Type: Grant
Filed: August 5, 2008
Date of Patent: May 8, 2012
Assignee: Network Appliance, Inc.
Inventors: Naveen Bali, Naresh Patel, Yasuhiro Endo
-
Patent number: 8112585
Abstract: A method implements a cache-policy switching module in a storage system. The storage system includes a cache memory to cache storage data. The cache memory uses a first cache configuration. The cache-policy switching module emulates the caching of the storage data with a plurality of cache configurations. Upon a determination that one of the plurality of cache configurations performs better than the first cache configuration, the cache-policy switching module automatically applies the better performing cache configuration to the cache memory for caching the storage data.
Type: Grant
Filed: April 30, 2009
Date of Patent: February 7, 2012
Assignee: NetApp, Inc.
Inventors: Naresh Patel, Jeffrey S. Kimmel, Garth Goodson
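One way to emulate alternative cache configurations, sketched here under assumptions the abstract does not spell out, is to run lightweight shadow caches over the same reference stream as the live cache and adopt whichever configuration wins on hits. The LRU shadow and the size-only configurations are stand-ins for whatever parameters the real module varies.

```python
from collections import OrderedDict

class ShadowLRU:
    """Tag-only LRU emulation of one candidate cache configuration."""
    def __init__(self, size):
        self.size, self.data, self.hits = size, OrderedDict(), 0
    def access(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)
        else:
            self.data[key] = True
            if len(self.data) > self.size:
                self.data.popitem(last=False)

def pick_best(trace, configs):
    """Replay the reference stream through every shadow; return the winner."""
    shadows = {name: ShadowLRU(size) for name, size in configs.items()}
    for key in trace:
        for shadow in shadows.values():
            shadow.access(key)
    return max(shadows, key=lambda name: shadows[name].hits)

best = pick_best(["a", "b", "a", "c", "a", "b"], {"small": 1, "large": 3})
```

Because shadows track only tags, the emulation is cheap enough to run continuously alongside the live cache before a switch is applied.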
-
Patent number: 8015427
Abstract: A system and method for prioritization of clock rates in a multi-core processor is provided. Instruction arrival rates are measured during a time interval Ti-1 to Ti by a monitoring module either internal to the processor or operatively interconnected with the processor. Using the measured instruction arrival rates, the monitoring module calculates an optimal instruction arrival rate for each core of the processor. For processors that support continuous frequency changes for cores, each core is then set to an optimal service rate. For processors that only support a discrete set of arrival rates, the optimal rates are mapped to a closest supported rate and the cores are set to the closest supported rate. This procedure is then repeated for each time interval.
Type: Grant
Filed: April 23, 2007
Date of Patent: September 6, 2011
Assignee: NetApp, Inc.
Inventors: Steven C. Miller, Naresh Patel
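The per-interval adjustment loop can be sketched as follows. The headroom factor used to turn an arrival rate into a target service rate is an assumption for illustration; the patent's actual calculation is not reproduced here.

```python
SUPPORTED_GHZ = [0.8, 1.2, 1.6, 2.0]   # hypothetical discrete frequency steps

def target_rate(arrival_rate, headroom=1.25):
    """Derive a service rate from the measured instruction arrival rate."""
    return arrival_rate * headroom

def nearest_supported(rate):
    """Map a continuous target rate to the closest supported frequency."""
    return min(SUPPORTED_GHZ, key=lambda f: abs(f - rate))

def set_core_rates(arrival_rates):
    """One interval's update: a clock rate per core."""
    return [nearest_supported(target_rate(r)) for r in arrival_rates]

# Arrival rates measured over the last interval, one entry per core.
rates = set_core_rates([0.7, 1.5])
```

On a processor with continuous frequency scaling, `nearest_supported` would be skipped and the target rate applied directly, as the abstract notes.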
-
Publication number: 20100281216
Abstract: A method implements a cache-policy switching module in a storage system. The storage system includes a cache memory to cache storage data. The cache memory uses a first cache configuration. The cache-policy switching module emulates the caching of the storage data with a plurality of cache configurations. Upon a determination that one of the plurality of cache configurations performs better than the first cache configuration, the cache-policy switching module automatically applies the better performing cache configuration to the cache memory for caching the storage data.
Type: Application
Filed: April 30, 2009
Publication date: November 4, 2010
Applicant: NetApp, Inc.
Inventors: Naresh Patel, Jeffrey S. Kimmel, Garth Goodson
-
Publication number: 20080294846
Abstract: The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache-related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.
Type: Application
Filed: August 5, 2008
Publication date: November 27, 2008
Inventors: Naveen Bali, Naresh Patel, Yasuhiro Endo
-
Publication number: 20080263384
Abstract: A system and method for prioritization of clock rates in a multi-core processor is provided. Instruction arrival rates are measured during a time interval Ti-1 to Ti by a monitoring module either internal to the processor or operatively interconnected with the processor. Using the measured instruction arrival rates, the monitoring module calculates an optimal instruction arrival rate for each core of the processor. For processors that support continuous frequency changes for cores, each core is then set to an optimal service rate. For processors that only support a discrete set of arrival rates, the optimal rates are mapped to a closest supported rate and the cores are set to the closest supported rate. This procedure is then repeated for each time interval.
Type: Application
Filed: April 23, 2007
Publication date: October 23, 2008
Inventors: Steven C. Miller, Naresh Patel
-
Patent number: 7430639
Abstract: The present invention includes storing in a main memory data block tags corresponding to blocks of data previously inserted into a buffer cache memory and then evicted from the buffer cache memory or written over in the buffer cache memory. Counters associated with the tags are updated when look-up requests to look up data block tags are received from a cache look-up algorithm.
Type: Grant
Filed: August 26, 2005
Date of Patent: September 30, 2008
Assignee: Network Appliance, Inc.
Inventors: Naveen Bali, Naresh Patel
-
Patent number: 7424577
Abstract: The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache-related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.
Type: Grant
Filed: August 26, 2005
Date of Patent: September 9, 2008
Assignee: Network Appliance, Inc.
Inventors: Naveen Bali, Naresh Patel, Yasuhiro Endo
-
Publication number: 20070050548
Abstract: The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache-related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.
Type: Application
Filed: August 26, 2005
Publication date: March 1, 2007
Inventors: Naveen Bali, Naresh Patel, Yasuhiro Endo
-
Publication number: 20060204179
Abstract: An optical fiber connector plug includes a housing through which extends a cable containing at least one optical fiber. A ferrule, which is supported by the housing, is provided for receiving the optical fiber. The ferrule has a mating facet and an opposing rear facet located in the housing. The ferrule has at least one guide pin thru-hole and at least one optical fiber thru-hole extending between the mating facet and the opposing rear facet. The guide pin thru-hole has an opening portion extending inward from the mating facet. The opening portion is tapered outward to meet the mating facet in an oblique manner such that the opening portion has a diameter in the plane of the mating facet that is greater than a diameter of a remainder of the guide pin thru-hole.
Type: Application
Filed: May 3, 2006
Publication date: September 14, 2006
Inventors: Naresh Patel, Glenn Wilder, Gerald Nykolak, Lars Eskildsen