Patents by Inventor Anand Kulkarni

Anand Kulkarni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250144386
    Abstract: A fluid applicator (100) with precise volume dispensing control for dispensing fluids, such as a disinfectant, in a controlled manner, utilizing a pumping device (180) at least partially inserted into a housing (110) having an applicator sponge (122).
    Type: Application
    Filed: November 8, 2022
    Publication date: May 8, 2025
    Applicant: BECTON, DICKINSON AND COMPANY
    Inventors: Kevin M. Ryan, Shishir Prasad, Aniket Anand Kulkarni, Ruben Rodrigues, Praveen Nalawade, Rahul Malviya, Amarsinh Deeliprao Jadhav
  • Publication number: 20240393858
    Abstract: In one example in accordance with the present disclosure, an electronic device is described. An example electronic device includes a processor, a port to receive a connector of a peripheral device, and a power supply. An example power supply is to supply power to the processor and the peripheral device when connected to the electronic device. The example electronic device also includes an electronic device controller. The electronic device controller is to alter an amount of power from a first value to a second value to supply to the peripheral device based on an amount of power available from the power supply and a lower threshold power level that enables data communication from the peripheral device. The example electronic device also includes a power delivery controller. The power delivery controller is to provide the amount of power having the second value to the peripheral device based on an output from the electronic device controller.
    Type: Application
    Filed: February 10, 2021
    Publication date: November 28, 2024
    Applicant: Hewlett-Packard Development Company, L.P.
    Inventors: Fangyong Dai, Anand Kulkarni, Qijun Chen, Asjad Shamim
  • Publication number: 20240193088
    Abstract: A memory device includes a first memory and a second memory that caches data stored in the first memory. At least one controller of the memory device receives page fault information from a host. The page fault information results from a request for data by the host that is stored in the first memory but is not cached in the second memory when requested by the host. The memory device uses the received page fault information for one or more inputs into a prefetch model trained by Machine Learning (ML) to generate at least one inference. Based at least in part on the at least one inference, prefetch data is cached in the second memory. In one aspect, the page fault information is used to train the prefetch model. In another aspect, the page fault information includes at least one virtual address used by the host for the requested data.
    Type: Application
    Filed: August 8, 2023
    Publication date: June 13, 2024
    Inventors: Chao Sun, Qingbo Wang, Minghai Qin, Jaco Hofmann, Anand Kulkarni, Dejan Vucinic, Zvonimir Bandic
  • Publication number: 20240004719
    Abstract: Certain aspects of the present disclosure provide techniques for partitioning feature maps to improve machine learning model processing. In one aspect, a method, includes partitioning a feature map row-wise into a plurality of feature sub-maps such that: each respective feature sub-map of the plurality of feature sub-maps is defined with respect to a split row determined based on a dense data element count for each row of the feature map; and each feature sub-map of the plurality of feature sub-maps has a same column dimensionality as the feature map; and assigning each of the plurality of feature sub-maps to one of a plurality of tensor compute units and one of a plurality of tensor feature map memory units for processing in parallel.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Western Digital Technologies, Inc.
    Inventors: Kiran Kumar GUNNAM, Vikram Varadarajan KALKUNTE, Matheus Almeida OGLEARI, Anand KULKARNI, Zvonimir Z. BANDIC
  • Patent number: 11797830
    Abstract: An apparatus includes a tensor compute cluster having a plurality of tensor compute units to process a plurality of sub-feature maps in a machine learning application and a tensor memory cluster having a plurality of tensor feature map memory units to store the plurality of sub-feature maps. The apparatus also includes circuitry to partition an input feature map into the plurality of sub-feature maps such that sparsity in each of the plurality of sub-feature maps satisfies a predetermined threshold, and assign each of the plurality of sub-feature maps to one of the plurality of tensor compute units and one of the plurality of tensor feature map memory units for processing in parallel.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: October 24, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Kiran Gunnam, Anand Kulkarni, Zvonimir Bandic
  • Patent number: 11755683
    Abstract: An apparatus includes a first tensor compute cluster configured to receive first input feature tensors, a second tensor compute cluster configured to receive second input feature tensors more sparse than the first input feature tensors, and a vector accelerator. The apparatus also includes circuitry configured to partition an input feature map into a plurality of input feature tensors based on a compression criteria and assign each of the plurality of input feature tensors to one of the first tensor compute cluster, the second tensor compute cluster, or the vector accelerator based upon at least one of parameters including a sparsity and an optimization parameter.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: September 12, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Kiran Gunnam, Anand Kulkarni, Zvonimir Bandic
  • Publication number: 20230129307
    Abstract: In some examples, the disclosure describes a device that includes a docking station and a processor. The processor may determine that an error condition involving disconnection of a computing device from the docking station couplable to the computing device has occurred and receive, responsive to the determination, a signal indicative of performance of an operation to re-establish communication with the docking station. The processor may further perform, responsive to receipt of the signal, the operation to re-establish communication with the docking station.
    Type: Application
    Filed: October 27, 2021
    Publication date: April 27, 2023
    Inventors: Chun Chang, Ming-Hong Lee, Anand Kulkarni, Li-Pin Lu, Rajesh Shah
  • Patent number: 11544547
    Abstract: A non-volatile memory device includes an array of non-volatile memory cells that are configured to store weights of a neural network. Associated with the array is a data latch structure that includes a page buffer, which can store weights for a layer of the neural network that is read out of the array, and a transfer buffer, which can store inputs for the neural network. The memory device can perform multiply and accumulate operations between inputs and weights of the neural network within the latch structure, avoiding the need to transfer data out of the array and associated latch structure for portions of an inference operation. By using binary weights and inputs, multiplication can be performed by bit-wise XNOR operations. The results can then be summed and activation applied, all within the latch structure.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: January 3, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Anand Kulkarni, Won Ho Choi, Martin Lueker-Boden
  • Patent number: 11462003
    Abstract: A system with a multiplication circuit having a plurality of multipliers is disclosed. Each of the plurality of multipliers is configured to receive a data value and a weight value to generate a product value in a convolution operation of a machine learning application. The system also includes an accumulator configured to receive the product value from each of the plurality of multipliers and a register bank configured to store an output of the convolution operation. The accumulator is further configured to receive a portion of values stored in the register bank and combine the received portion of values with the product values to generate combined values. The register bank is further configured to replace the portion of values with the combined values.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: October 4, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Kiran Gunnam, Anand Kulkarni, Zvonimir Bandic
  • Publication number: 20220290999
    Abstract: A method of implementing a mobility as a service policy may include associating an identifier with a mobility service policy. The method may include assigning the mobility service policy to at least one mobility service provider of a plurality of mobility service providers. The method may include setting at least one usage restriction for the mobility service policy. The at least one usage restriction may limit operation of the at least one mobility service provider when the policy is activated. The method may include setting a geographical restriction associated with the mobility service policy. The method may include setting a time restriction associated with when the mobility service policy is to be active. The method may include enabling the mobility service policy.
    Type: Application
    Filed: April 1, 2022
    Publication date: September 15, 2022
    Inventors: Aravind Asam, Alexander Wilhelm, Carina Nicklaw, Frederick Rodolfo, Dongwook Kim, Anand Kulkarni, Bruno Alves, Aaron Bannister, Rakesh Prasad, Chintan Gokani, Santhosh Kumar Doddi, Divya Komadam, Katherine Aurelia
  • Patent number: 11397886
    Abstract: A non-volatile memory structure capable of storing layers of a deep neural network (DNN) and performing an inferencing operation within the structure is presented. A stack of bonded die pairs is connected by through silicon vias. Each bonded die pair includes a memory die, having one or more memory arrays onto which layers of the neural network are mapped, and a peripheral circuitry die, including the control circuits for performing the convolution or multiplication for the bonded die pair. The multiplications can either be done in-array on the memory die or in-logic on the peripheral circuitry die. The arrays can be formed into columns along the vias, allowing an inferencing operation to be performed by propagating an input up and down the columns, with the output of one layer being the input of the subsequent layer.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: July 26, 2022
    Assignee: SanDisk Technologies LLC
    Inventors: Tung Thanh Hoang, Martin Lueker-Boden, Anand Kulkarni
  • Patent number: 11397885
    Abstract: A non-volatile memory structure capable of storing layers of a deep neural network (DNN) and performing an inferencing operation within the structure is presented. A stack of bonded die pairs is connected by through silicon vias. Each bonded die pair includes a memory die, having one or more memory arrays onto which layers of the neural network are mapped, and a peripheral circuitry die, including the control circuits for performing the convolution or multiplication for the bonded die pair. The multiplications can either be done in-array on the memory die or in-logic on the peripheral circuitry die. The arrays can be formed into columns along the vias, allowing an inferencing operation to be performed by propagating an input up and down the columns, with the output of one layer being the input of the subsequent layer.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: July 26, 2022
    Assignee: SanDisk Technologies LLC
    Inventors: Tung Thanh Hoang, Martin Lueker-Boden, Anand Kulkarni
  • Publication number: 20220118131
    Abstract: The present invention discloses a device for a urinary catheter employing electromagnetic radiation and/or a vibration transducer. The device comprises a clip-on (4), including a source of electromagnetic radiation (5) and/or a vibration transducer, and a coupler (3) that allows electromagnetic radiation access from the clip-on to the inside of the coupler. The combination of electromagnetic radiation and photocatalyst material may increase effectiveness for antimicrobial activity. The device aims at preventing catheter-associated urinary tract infection caused by both intraluminal and extraluminal routes.
    Type: Application
    Filed: December 3, 2019
    Publication date: April 21, 2022
    Inventors: Nirmal KUMAR, Aniket Anand KULKARNI, Deepika DIXIT, Yasuyuki MATSUURA, Prashant JHA, Harpal SINGH, Hitender GAUTAM
  • Patent number: 11281601
    Abstract: Example multi-device storage systems, storage devices, and methods provide hosted services on peer storage devices. Storage devices include a storage medium, a logical mapping memory, and a processor for executing hosted services using the logical mapping memory. Each storage device is configured to communicate with peer storage devices over an interconnect fabric. The logical mapping memory includes storage device media logical mapping information configured in continuous logical blocks with a media block size equal to a page programming size of the storage medium. The logical mapping memory also includes host logical mapping information, configured in host logical blocks with a host block size smaller than the media block size, for the peer storage devices.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: March 22, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Sanjay Subbarao, Vladislav Bolkhovitin, Anand Kulkarni, Brian Walter O'Krafka
  • Publication number: 20220031237
    Abstract: The invention provides systems, methods and computer program products for monitoring vascular perfusion in replanted tissue flaps. Specifically, the invention provides a non-invasive solution for monitoring vascular perfusion at a site of tissue replantation that is capable of detecting problems in vascular perfusion and raising alarms in real time. The invention achieves the above functional objectives by means of non-invasive sensors that continuously monitor selected parameters related to tissue condition. The monitored data parameters are used to determine a real-time condition of replanted tissue.
    Type: Application
    Filed: December 2, 2019
    Publication date: February 3, 2022
    Inventors: Nirmal Kumar, Aniket Anand Kulkarni, Deepika Dixit, Yasuyuki Matsuura, Prashant Jha, Ashish Bichpuriya
  • Publication number: 20210397930
    Abstract: A non-volatile memory device includes an array of non-volatile memory cells that are configured to store weights of a neural network. Associated with the array is a data latch structure that includes a page buffer, which can store weights for a layer of the neural network that is read out of the array, and a transfer buffer, which can store inputs for the neural network. The memory device can perform multiply and accumulate operations between inputs and weights of the neural network within the latch structure, avoiding the need to transfer data out of the array and associated latch structure for portions of an inference operation. By using binary weights and inputs, multiplication can be performed by bit-wise XNOR operations. The results can then be summed and activation applied, all within the latch structure.
    Type: Application
    Filed: June 22, 2020
    Publication date: December 23, 2021
    Applicant: Western Digital Technologies, Inc.
    Inventors: Anand Kulkarni, Won Ho Choi, Martin Lueker-Boden
  • Publication number: 20210342671
    Abstract: A non-volatile memory structure capable of storing layers of a deep neural network (DNN) and performing an inferencing operation within the structure is presented. A stack of bonded die pairs is connected by through silicon vias. Each bonded die pair includes a memory die, having one or more memory arrays onto which layers of the neural network are mapped, and a peripheral circuitry die, including the control circuits for performing the convolution or multiplication for the bonded die pair. The multiplications can either be done in-array on the memory die or in-logic on the peripheral circuitry die. The arrays can be formed into columns along the vias, allowing an inferencing operation to be performed by propagating an input up and down the columns, with the output of one layer being the input of the subsequent layer.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 4, 2021
    Applicant: SanDisk Technologies LLC
    Inventors: Tung Thanh Hoang, Martin Lueker-Boden, Anand Kulkarni
  • Publication number: 20210342676
    Abstract: A non-volatile memory structure capable of storing layers of a deep neural network (DNN) and performing an inferencing operation within the structure is presented. A stack of bonded die pairs is connected by through silicon vias. Each bonded die pair includes a memory die, having one or more memory arrays onto which layers of the neural network are mapped, and a peripheral circuitry die, including the control circuits for performing the convolution or multiplication for the bonded die pair. The multiplications can either be done in-array on the memory die or in-logic on the peripheral circuitry die. The arrays can be formed into columns along the vias, allowing an inferencing operation to be performed by propagating an input up and down the columns, with the output of one layer being the input of the subsequent layer.
    Type: Application
    Filed: June 12, 2020
    Publication date: November 4, 2021
    Applicant: SanDisk Technologies LLC
    Inventors: Tung Thanh Hoang, Martin Lueker-Boden, Anand Kulkarni
  • Publication number: 20210303976
    Abstract: An apparatus includes a tensor compute cluster having a plurality of tensor compute units to process a plurality of sub-feature maps in a machine learning application and a tensor memory cluster having a plurality of tensor feature map memory units to store the plurality of sub-feature maps. The apparatus also includes circuitry to partition an input feature map into the plurality of sub-feature maps such that sparsity in each of the plurality of sub-feature maps satisfies a predetermined threshold, and assign each of the plurality of sub-feature maps to one of the plurality of tensor compute units and one of the plurality of tensor feature map memory units for processing in parallel.
    Type: Application
    Filed: March 25, 2020
    Publication date: September 30, 2021
    Applicant: Western Digital Technologies, Inc.
    Inventors: Kiran Gunnam, Anand Kulkarni, Zvonimir Bandic
  • Publication number: 20210303909
    Abstract: A system with a multiplication circuit having a plurality of multipliers is disclosed. Each of the plurality of multipliers is configured to receive a data value and a weight value to generate a product value in a convolution operation of a machine learning application. The system also includes an accumulator configured to receive the product value from each of the plurality of multipliers and a register bank configured to store an output of the convolution operation. The accumulator is further configured to receive a portion of values stored in the register bank and combine the received portion of values with the product values to generate combined values. The register bank is further configured to replace the portion of values with the combined values.
    Type: Application
    Filed: March 25, 2020
    Publication date: September 30, 2021
    Applicant: Western Digital Technologies, Inc.
    Inventors: Kiran Gunnam, Anand Kulkarni, Zvonimir Bandic
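
To illustrate the row-wise partitioning idea described in publication 20240004719 above (splitting a feature map at rows chosen from per-row dense element counts, with every sub-map keeping the full column dimensionality), here is a minimal sketch. This is not the patented implementation; the function name and the equal-dense-count balancing heuristic are hypothetical illustrations of the general technique.

```python
import numpy as np

def partition_rowwise(feature_map: np.ndarray, num_units: int) -> list:
    """Split a 2-D feature map into row-wise sub-maps so that each sub-map
    carries roughly the same number of dense (non-zero) elements and keeps
    the same column dimensionality as the original map."""
    dense_per_row = np.count_nonzero(feature_map, axis=1)  # dense count per row
    cumulative = np.cumsum(dense_per_row)
    total = cumulative[-1]
    sub_maps, start = [], 0
    for k in range(1, num_units):
        # split row: first row where the cumulative dense count reaches
        # the k-th fraction of the total dense elements
        split = int(np.searchsorted(cumulative, k * total / num_units)) + 1
        split = max(split, start + 1)  # keep at least one row per sub-map
        sub_maps.append(feature_map[start:split])
        start = split
    sub_maps.append(feature_map[start:])
    return sub_maps
```

Each returned sub-map could then be assigned to its own tensor compute unit and feature-map memory unit for parallel processing, as the abstract describes.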
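
The in-latch compute described in patent 11544547 above relies on a standard binary-neural-network identity: with +1/-1 values encoded as single bits, elementwise multiplication becomes bitwise XNOR and the dot product is recovered from a population count. A small sketch of that arithmetic (the function name is hypothetical; the hardware performs this inside the data latch structure rather than in software):

```python
def xnor_mac(weights: int, inputs: int, width: int) -> int:
    """Binary multiply-accumulate: encode +1 as bit 1 and -1 as bit 0.
    Bitwise XNOR yields 1 wherever the operands agree (product = +1),
    so the dot product equals 2 * popcount(xnor) - width."""
    mask = (1 << width) - 1
    xnor = ~(weights ^ inputs) & mask   # 1 where bits agree
    matches = bin(xnor).count("1")      # popcount of agreeing positions
    return 2 * matches - width          # sum of the +1/-1 products
```

For example, `xnor_mac(0b1011, 0b1001, 4)` encodes the vectors (+1, +1, -1, +1) and (+1, -1, -1, +1), whose dot product is 2.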
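
The accumulator/register-bank interaction in patent 11462003 above (and publication 20210303909) amounts to a partial-sum update: products from the multipliers are combined with values already held in the register bank, and the combined values replace the old ones. A minimal software sketch of one such step, with a hypothetical function name:

```python
import numpy as np

def accumulate_partials(register_bank: np.ndarray,
                        data: np.ndarray,
                        weights: np.ndarray) -> np.ndarray:
    """One accumulation step of a convolution: each multiplier forms a
    data * weight product, the accumulator combines the products with the
    partial sums read from the register bank, and the combined values are
    written back in place of the old partial sums."""
    products = data * weights      # one product per multiplier
    register_bank += products      # combine and replace the stored partials
    return register_bank
```

Repeating the step over successive data/weight pairs leaves the completed convolution output in the register bank.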