Patents by Inventor Manoj Aggarwal

Manoj Aggarwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11550631
    Abstract: In some examples, a computing system receives an indication of an increased workload portion to be added to a workload of a storage system, the workload comprising buckets of operations of different characteristics. The computing system computes, based on quantities of operations of the different characteristics in the workload, factor values that indicate distribution of operations of the increased workload portion to the buckets of operations of the different characteristics, and distributes, according to the factor values, the operations of the increased workload portion into the buckets of operations of the different characteristics.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: January 10, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Mayukh Dutta, Manoj Srivatsav, Jharna Aggarwal, Manu Sharma
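
A minimal sketch of the proportional distribution described in the abstract for patent 11550631, assuming the factor values are simply each bucket's share of the existing operations; the bucket names, counts, and rounding rule are illustrative and not taken from the patent:

```python
# A toy proportional split: factor values are each bucket's share of current
# operations, and the added workload is divided according to those shares.
def distribute_increase(buckets: dict[str, int], added_ops: int) -> dict[str, int]:
    total = sum(buckets.values())
    if total == 0:
        # Assumption: with no history, spread the increase evenly.
        return {name: added_ops // len(buckets) for name in buckets}
    factors = {name: count / total for name, count in buckets.items()}
    # Rounding may leave a small remainder undistributed; a real system would
    # reconcile it, but that detail is omitted here.
    return {name: round(added_ops * factor) for name, factor in factors.items()}

if __name__ == "__main__":
    current = {"small_reads": 6000, "large_reads": 1000, "writes": 3000}
    print(distribute_increase(current, added_ops=500))
    # {'small_reads': 300, 'large_reads': 50, 'writes': 150}
```
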
  • Patent number: 11537813
    Abstract: During a training phase, a first machine learning system is trained using actual data, such as multimodal images of a hand, to generate synthetic image data. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine suitability for inclusion into a set of synthetic image data that may subsequently be used to train a second machine learning system to recognize an identity of a hand presented by a user. For example, the candidate synthetic image data is compared to previously generated synthetic image data to avoid duplicative synthetic identities. The second machine learning system is then trained using the approved candidate synthetic image data.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: December 27, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Igor Kviatkovsky, Nadav Israel Bhonker, Alon Shoshan, Manoj Aggarwal, Gerard Guy Medioni
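
One way the duplicate screening described for patent 11537813 could look, assuming candidate and accepted synthetic identities are compared as embedding vectors with a cosine-similarity cutoff; the generator itself is omitted, and the random vectors and threshold are stand-ins:

```python
# Screen candidate synthetic identities so near-duplicates of already accepted
# ones are rejected; embeddings here are random stand-ins for generator output.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def accept_candidates(candidates, accepted, threshold: float = 0.9):
    """Keep candidates whose similarity to every retained identity stays below threshold."""
    kept = list(accepted)
    new = []
    for emb in candidates:
        if all(cosine(emb, prev) < threshold for prev in kept):
            kept.append(emb)
            new.append(emb)
    return new

rng = np.random.default_rng(0)
accepted = [rng.normal(size=128) for _ in range(10)]
candidates = [rng.normal(size=128) for _ in range(5)]
print(len(accept_candidates(candidates, accepted)))  # random vectors are rarely similar, so typically 5
```
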
  • Publication number: 20220398021
    Abstract: In some examples, a system creates a training data set based on features of sample workloads, the training data set comprising labels associated with the features of the sample workloads, where the labels are based on load indicators generated in a computing environment relating to load conditions of the computing environment resulting from execution of the sample workloads. The system groups selected workloads into a plurality of workload clusters based on features of the selected workloads, and computes, using a model trained based on the training data set, parameters representing contributions of respective workload clusters of the plurality of workload clusters to a load in the computing environment. The system performs workload management in the computing environment based on the computed parameters.
    Type: Application
    Filed: June 9, 2021
    Publication date: December 15, 2022
    Inventors: Mayukh Dutta, Aesha Dhar Roy, Manoj Srivatsav, Ganesha Devadiga, Geethanjali N. Rao, Prasenjit Saha, Jharna Aggarwal
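
A rough sketch of the clustering-plus-model step described in publication 20220398021, using scikit-learn's KMeans and LinearRegression for brevity; the application does not name specific algorithms, and the features, counts, and load values below are synthetic:

```python
# Cluster workloads by their features, then fit a model whose coefficients
# estimate each cluster's contribution to observed load.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Step 1: cluster workloads by their features (dimensions are placeholders).
workload_features = rng.normal(size=(200, 4))
cluster_of = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(workload_features)

# Step 2: per observation interval, count active workloads from each cluster
# and pair that with a load indicator (synthesized here from hidden weights).
intervals, n_clusters = 60, 3
counts = np.zeros((intervals, n_clusters))
for i in range(intervals):
    active = rng.choice(len(workload_features), size=rng.integers(5, 40), replace=False)
    counts[i] = np.bincount(cluster_of[active], minlength=n_clusters)
hidden_contribution = np.array([0.5, 2.0, 1.2])
load = counts @ hidden_contribution + rng.normal(scale=0.5, size=intervals)

# Step 3: the fitted coefficients estimate each cluster's contribution to load.
model = LinearRegression(positive=True).fit(counts, load)
print("estimated per-cluster contributions:", model.coef_.round(2))  # typically near [0.5, 2.0, 1.2]
```
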
  • Patent number: 11527092
    Abstract: Images of a hand may be used to identify users. Quality, detail, and so forth of these images may vary. An image is processed to determine a first spatial mask. A first neural network comprising many layers uses the first spatial mask at a first layer and a second spatial mask at a second layer to process images and produce an embedding vector representative of features in the image. The first spatial mask provides information about particular portions of the input image, and is determined by processing the image with an algorithm such as an orientation certainty level (OCL) algorithm. The second spatial mask is determined using unsupervised training and represents weights of particular portions of the input image as represented at the second layer. The use of the masks allows the first neural network to learn to use or disregard particular portions of the image, improving overall accuracy.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: December 13, 2022
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Miriam Farber, Manoj Aggarwal, Gerard Guy Medioni
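
A toy PyTorch version of the two-mask idea in patent 11527092: an externally computed spatial mask is applied at an early layer and a learned mask at a later one. The layer sizes, embedding dimension, and thresholded stand-in for the OCL mask are assumptions, not the patented network:

```python
# A tiny network applying an image-derived mask at an early layer and a
# learned mask at a later layer; shapes and masks are illustrative only.
import torch
import torch.nn as nn

class MaskedNet(nn.Module):
    def __init__(self, size: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)
        # Second mask: a trainable weighting over spatial positions.
        self.learned_mask = nn.Parameter(torch.ones(1, 1, size, size))
        self.head = nn.Linear(16, 64)  # 64-d embedding vector

    def forward(self, image: torch.Tensor, quality_mask: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv1(image)) * quality_mask      # externally computed mask
        x = torch.relu(self.conv2(x)) * self.learned_mask     # learned mask
        x = x.mean(dim=(2, 3))                                # global average pool
        return self.head(x)

net = MaskedNet()
img = torch.rand(2, 1, 32, 32)
mask = (img > 0.2).float()   # crude stand-in for an orientation-certainty-level mask
print(net(img, mask).shape)  # torch.Size([2, 64])
```
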
  • Patent number: 11288490
    Abstract: This disclosure describes techniques for identifying users that are enrolled for use of a user-recognition system and updating enrollment data of these users over time. To enroll in the user-recognition system, the user may initially scan his or her palm. The resulting image data may later be used when the user requests to be identified by the system by again scanning his or her palm. However, because the characteristics of user palms may change over time, the user-recognition system may continue to build more and more data for use in recognizing the user, in addition to removing older data that may no longer accurately represent current characteristics of respective user palms.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: March 29, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Manoj Aggarwal, Jason Garfield, Korwin Jon Smith, Jordan Tyler Williams
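
A minimal sketch of the enrollment-update loop described for patent 11288490, assuming a fixed-size gallery of embeddings per user where confident new matches are added and the oldest entries age out; the similarity measure, cap, and threshold are assumptions:

```python
# Keep a bounded, rolling gallery of palm embeddings per user: confident new
# matches are added and the oldest entries age out automatically.
from collections import deque
import numpy as np

class EnrollmentGallery:
    def __init__(self, max_entries: int = 10, update_threshold: float = 0.85):
        self.entries: deque[np.ndarray] = deque(maxlen=max_entries)
        self.update_threshold = update_threshold

    def enroll(self, embedding: np.ndarray) -> None:
        self.entries.append(embedding)

    def match_score(self, embedding: np.ndarray) -> float:
        sims = [float(embedding @ e / (np.linalg.norm(embedding) * np.linalg.norm(e)))
                for e in self.entries]
        return max(sims) if sims else 0.0

    def maybe_update(self, embedding: np.ndarray) -> None:
        # High-confidence matches refresh the gallery; deque(maxlen=...) evicts
        # the oldest entry once the cap is reached.
        if self.match_score(embedding) >= self.update_threshold:
            self.entries.append(embedding)

gallery = EnrollmentGallery()
gallery.enroll(np.ones(128) / np.sqrt(128))
gallery.maybe_update(np.ones(128) / np.sqrt(128))  # identical vector scores 1.0, so it is added
print(len(gallery.entries))  # 2
```
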
  • Publication number: 20210196406
    Abstract: Described are methods and systems for operating devices in an operating room (OR), according to some embodiments. An OR hub can provide an operations user interface (UI) that is provisioned by a hub software developer to enable authorized users to access permitted software functions run by the system software on the OR hub to operate one or more medical devices in the OR. The operations UI can be configured to prevent an interaction of the one or more medical devices and the OR hub with a user until that user is authenticated through the operations UI. In some embodiments, the operations UI of the OR hub implements role-based security in which the operations UI provides an authenticated user with different sets of permitted software and/or security functions based on a type of credential possessed by the authenticated user.
    Type: Application
    Filed: December 30, 2020
    Publication date: July 1, 2021
    Applicant: Stryker Corporation
    Inventors: Amit A. Mahadik, Ramanan Paramasivan, Suraj Bhat, Afshin Jila, Manoj Aggarwal, Sourabh Choudhary
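
A small sketch of the role-based gating described in publication 20210196406, in which device functions stay locked until a user authenticates and then depend on that user's role; the roles, credentials, and functions are invented placeholders:

```python
# Role-based gating: functions are denied until authentication, then limited
# to what the authenticated user's role permits.
ROLE_PERMISSIONS = {
    "surgeon": {"adjust_camera", "capture_image", "control_insufflator"},
    "nurse": {"capture_image"},
}

class OperationsUI:
    def __init__(self):
        self.authenticated_role: str | None = None

    def authenticate(self, credential: str) -> None:
        # Stand-in for real credential validation (badge, biometric, etc.).
        self.authenticated_role = {"badge-123": "surgeon", "badge-456": "nurse"}.get(credential)

    def invoke(self, function: str) -> str:
        if self.authenticated_role is None:
            return "denied: authenticate first"
        if function not in ROLE_PERMISSIONS[self.authenticated_role]:
            return f"denied: {self.authenticated_role} cannot {function}"
        return f"ok: {function} executed"

ui = OperationsUI()
print(ui.invoke("capture_image"))   # denied: authenticate first
ui.authenticate("badge-456")
print(ui.invoke("capture_image"))   # ok: capture_image executed
print(ui.invoke("adjust_camera"))   # denied: nurse cannot adjust_camera
```
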
  • Patent number: 11017203
    Abstract: This disclosure describes techniques for identifying users that are enrolled for use of a user-recognition system and updating enrollment data of these users over time. To enroll in the user-recognition system, the user may initially scan his or her palm. The resulting image data may later be used when the user requests to be identified by the system by again scanning his or her palm. However, because the characteristics of user palms may change over time, the user-recognition system may continue to build more and more data for use in recognizing the user, in addition to removing older data that may no longer accurately represent current characteristics of respective user palms.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: May 25, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Manoj Aggarwal, Jason Garfield, Korwin Jon Smith, Jordan Tyler Williams
  • Patent number: 10902237
    Abstract: This disclosure describes techniques for identifying users that are enrolled for use of a user-recognition system and updating enrollment data of these users over time. To enroll in the user-recognition system, the user may initially scan his or her palm. The resulting image data may later be used when the user requests to be identified by the system by again scanning his or her palm. However, because the characteristics of user palms may change over time, the user-recognition system may continue to build more and more data for use in recognizing the user, in addition to removing older data that may no longer accurately represent current characteristics of respective user palms.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: January 26, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Manoj Aggarwal, Jason Garfield, Korwin Jon Smith, Jordan Tyler Williams
  • Patent number: 10872221
    Abstract: A non-contact biometric identification system includes a hand scanner that generates images of a user's palm. Images acquired using light of a first polarization at a first time show surface characteristics such as wrinkles in the palm, while images acquired using light of a second polarization at a second time show deeper characteristics such as veins. Within the images, the palm is identified and subdivided into sub-images. The sub-images are subsequently processed to determine feature vectors present in each sub-image. A current signature is determined using the feature vectors. A user may be identified based on a comparison of the current signature with a previously stored reference signature that is associated with a user identifier.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: December 22, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Dilip Kumar, Manoj Aggarwal, George Leifman, Gerard Guy Medioni, Nikolai Orlov, Natan Peterfreund, Korwin Jon Smith, Dmitri Veikherman, Sora Kim
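
A simplified sketch of the signature pipeline in patent 10872221: subdivide a palm image into sub-images, compute a feature vector per sub-image, concatenate the vectors into a signature, and compare it against a stored reference. The histogram features and distance threshold are stand-ins for whatever the actual system computes:

```python
# Subdivide a palm image into tiles, build a per-tile feature vector, then
# concatenate into a signature and compare to a stored reference.
import numpy as np

def signature(image: np.ndarray, grid: int = 4, bins: int = 16) -> np.ndarray:
    h, w = image.shape
    th, tw = h // grid, w // grid
    feats = []
    for r in range(grid):
        for c in range(grid):
            tile = image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 1), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def matches(current: np.ndarray, reference: np.ndarray, threshold: float = 2.0) -> bool:
    return float(np.linalg.norm(current - reference)) < threshold

rng = np.random.default_rng(2)
reference_scan = rng.random((128, 128))
new_scan = np.clip(reference_scan + rng.normal(scale=0.01, size=(128, 128)), 0, 1)
print(matches(signature(new_scan), signature(reference_scan)))  # a lightly perturbed rescan matches
```
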
  • Publication number: 20190392189
    Abstract: A non-contact biometric identification system includes a hand scanner that generates images of a user's palm. Images acquired using light of a first polarization at a first time show surface characteristics such as wrinkles in the palm, while images acquired using light of a second polarization at a second time show deeper characteristics such as veins. Within the images, the palm is identified and subdivided into sub-images. The sub-images are subsequently processed to determine feature vectors present in each sub-image. A current signature is determined using the feature vectors. A user may be identified based on a comparison of the current signature with a previously stored reference signature that is associated with a user identifier.
    Type: Application
    Filed: June 21, 2018
    Publication date: December 26, 2019
    Inventors: Dilip Kumar, Manoj Aggarwal, George Leifman, Gerard Guy Medioni, Nikolai Orlov, Natan Peterfreund, Korwin Jon Smith, Dmitri Veikherman, Sora Kim
  • Patent number: 10332113
    Abstract: The present disclosure describes systems and methods for authorization. The method may include accessing, by an authorization engine for a transaction by a user, an activity pattern model of the user from a database. The activity pattern model of the user may be indicative of a geospatial behavior of the user over time. The authorization engine may determine a set of sensors available for facilitating the transaction, each of the sensors assigned with a usability value prior to the transaction. The authorization engine may access an activity pattern model of the sensors, the activity pattern model of the sensors indicative of geospatial characteristics of one or more of the sensors over time. The authorization engine may determine a convenience metric for each of a plurality of subsets of the sensors, using the activity pattern model of the user, the activity pattern model of the sensors, and usability values of corresponding sensors.
    Type: Grant
    Filed: November 18, 2015
    Date of Patent: June 25, 2019
    Assignee: Eyelock LLC
    Inventors: Keith Hanna, Mikhail Teverovskiy, Manoj Aggarwal, Sarvesh Makthal
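
One possible reading of the convenience metric in patent 10332113, scoring each subset of available sensors by combining per-sensor usability values with a crude proximity term between the user's typical location and each sensor's location; the scoring formula is invented, since the abstract above does not disclose one:

```python
# Score every subset of available sensors by averaging usability and a simple
# proximity term; higher scores suggest a more convenient combination.
from itertools import combinations
import math

def distance_km(a, b):
    # Rough planar approximation: ~111 km per degree latitude, ~85 km per
    # degree longitude at this latitude.
    return math.hypot((a[0] - b[0]) * 111.0, (a[1] - b[1]) * 85.0)

def convenience(subset, usability, sensor_locations, user_location):
    usab = sum(usability[s] for s in subset) / len(subset)
    proximity = sum(1.0 / (1.0 + distance_km(sensor_locations[s], user_location))
                    for s in subset) / len(subset)
    return usab * proximity

usability = {"phone_fingerprint": 0.9, "atm_camera": 0.6, "branch_iris": 0.8}
sensor_locations = {"phone_fingerprint": (40.71, -74.00),
                    "atm_camera": (40.72, -74.01),
                    "branch_iris": (40.75, -73.99)}
user_location = (40.71, -74.00)  # the user's typical location from the activity model

scores = {subset: convenience(subset, usability, sensor_locations, user_location)
          for r in (1, 2, 3) for subset in combinations(usability, r)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # the nearby, highly usable phone sensor wins here
```
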
  • Publication number: 20170244991
    Abstract: This disclosure describes methods and systems for managing video frame rate at a video production site. A video processor of a first video production site may process video frames received via a network with dynamic transmission properties. A frame rate controller may monitor, at an output buffer of the site, an average rate of processed video frames received from the video processor, and may detect that the average rate of processed video frames received from the video processor has decreased to a level below a predefined output frame rate for transmitting processed video frames to a third video production site. The frame rate controller may increase a rate of video frames being provided to the video processor to a level above the predefined output frame rate to restore the average rate of processed video frames received from the video processor to the predefined output frame rate.
    Type: Application
    Filed: December 9, 2016
    Publication date: August 24, 2017
    Inventors: Manoj Aggarwal, Keith J. Hanna
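
A minimal feedback-loop sketch of the controller described in publication 20170244991: when the averaged output rate falls below the target, the input rate is raised above the target until the average recovers. The gain and averaging window are assumptions:

```python
# Raise the input frame rate above the target while the averaged output rate
# lags, then return to the target once it recovers.
from collections import deque

class FrameRateController:
    def __init__(self, target_fps: float = 30.0, window: int = 30, gain: float = 0.5):
        self.target_fps = target_fps
        self.samples: deque[float] = deque(maxlen=window)
        self.gain = gain
        self.input_fps = target_fps

    def observe_output(self, measured_fps: float) -> float:
        """Record a measured output rate and return the adjusted input rate."""
        self.samples.append(measured_fps)
        average = sum(self.samples) / len(self.samples)
        if average < self.target_fps:
            # Feed frames faster than the target to refill the output buffer.
            self.input_fps = self.target_fps + self.gain * (self.target_fps - average)
        else:
            self.input_fps = self.target_fps
        return self.input_fps

ctrl = FrameRateController()
for fps in [30, 29, 26, 24, 27, 30, 31]:
    print(round(ctrl.observe_output(fps), 2))
```
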
  • Publication number: 20170244894
    Abstract: This disclosure describes methods and systems for managing video frame rate at a video production site. A video editing processor at one of a plurality of video production sites may monitor an instantaneous rate of video frames processed by a user, and may determine an average rate of video frames processed by the user based on instantaneous rates monitored over time. The video editing processor may detect that the average rate has dropped below a predefined output rate. A controller may determine a rate of change of an instantaneous input rate of video frames being presented to the user for processing, and may increase the input rate of video frames being presented to the user, over a period of time such that the rate of change of the instantaneous input rate is below a predetermined threshold, to restore the average rate to the predefined output rate.
    Type: Application
    Filed: November 18, 2016
    Publication date: August 24, 2017
    Inventors: Manoj Aggarwal, Keith J. Hanna
  • Publication number: 20170244984
    Abstract: This disclosure describes methods and systems for integrating video production over a network of video production sites. The video production sites may each provide a video production function as part of a distributed video production process across the network. An integration server may receive production quality metrics from at least a first video production site and a second video production site. Each production quality metric may be indicative of quality of a corresponding sequence of video frames produced at a corresponding video production site. The video production integration server may dynamically control selection or merging of video frames from the first and second video production sites to generate an integrated video production, by comparing production quality metrics received from the first and second video production sites that correspond to a same sequence of video frames, against a threshold value or against each other.
    Type: Application
    Filed: December 29, 2016
    Publication date: August 24, 2017
    Inventors: Manoj Aggarwal, Keith J. Hanna
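
A small sketch of the frame-selection logic described in publication 20170244984, comparing the two sites' quality metrics for the same frame against a threshold and against each other; the metric scale and threshold are invented:

```python
# Choose, frame by frame, between two production sites based on their reported
# quality metrics; fall back to whichever site clears the threshold.
def integrate(frames_a, quality_a, frames_b, quality_b, threshold=0.5):
    integrated = []
    for fa, qa, fb, qb in zip(frames_a, quality_a, frames_b, quality_b):
        if qa >= threshold and qb >= threshold:
            integrated.append(fa if qa >= qb else fb)   # compare against each other
        elif qa >= threshold:
            integrated.append(fa)
        elif qb >= threshold:
            integrated.append(fb)
        else:
            integrated.append(None)                     # neither site usable
    return integrated

print(integrate(["A0", "A1", "A2"], [0.9, 0.4, 0.3],
                ["B0", "B1", "B2"], [0.7, 0.8, 0.2]))
# ['A0', 'B1', None]
```
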
  • Publication number: 20170125064
    Abstract: This disclosure describes methods and systems for improving quality in video production. A video production device receives inputs from a user for only a subset of image frames presented for use in producing a video. Each of the inputs indicates a point of interest (POI) in a corresponding image frame, indicative of a region of interest for inclusion as a scene in the video. A video processor evaluates a spatial path of the indicated POIs relative to a bounding scene of interest (BSI), which represents an extent of a field of view of a corresponding camera, and dynamically adjusts the field of view to steer subsequently acquired image frames to be within the BSI. The video processor produces the video with all scenes arranged successively at a periodic interval. Use of the spatial path for scene or camera-viewpoint selection is designed to optimize quality of the produced video.
    Type: Application
    Filed: November 3, 2016
    Publication date: May 4, 2017
    Inventors: Manoj Aggarwal, Keith J. Hanna
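
A rough sketch of the steering idea in publication 20170125064: track the recent path of user-indicated POIs and recenter the field of view when that path drifts toward the edge, while keeping the view inside the bounding scene. The recentering rule and margins are assumptions:

```python
# Recenter the field of view on the running mean of recent POIs once that mean
# drifts past a margin, clamped to stay within the bounding scene of interest.
from collections import deque

class ViewSteering:
    def __init__(self, fov_width: float = 400.0, scene_width: float = 1920.0, history: int = 5):
        self.fov_width = fov_width
        self.scene_width = scene_width
        self.center = scene_width / 2
        self.recent_pois: deque[float] = deque(maxlen=history)

    def update(self, poi_x: float) -> float:
        """Record a POI x-coordinate and return the new field-of-view center."""
        self.recent_pois.append(poi_x)
        mean_poi = sum(self.recent_pois) / len(self.recent_pois)
        margin = self.fov_width * 0.25
        if abs(mean_poi - self.center) > margin:
            # Recenter, but keep the view inside the bounding scene of interest.
            half = self.fov_width / 2
            self.center = min(max(mean_poi, half), self.scene_width - half)
        return self.center

steer = ViewSteering()
for x in [960, 1000, 1100, 1250, 1400]:
    print(steer.update(x))  # the view recenters once the POI trend drifts past the margin
```
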
  • Publication number: 20160140567
    Abstract: The present disclosure describes systems and methods for authorization. The method may include accessing, by an authorization engine for a transaction by a user, an activity pattern model of the user from a database. The activity pattern model of the user may be indicative of a geospatial behavior of the user over time. The authorization engine may determine a set of sensors available for facilitating the transaction, each of the sensors assigned with a usability value prior to the transaction. The authorization engine may access an activity pattern model of the sensors, the activity pattern model of the sensors indicative of geospatial characteristics of one or more of the sensors over time. The authorization engine may determine a convenience metric for each of a plurality of subsets of the sensors, using the activity pattern model of the user, the activity pattern model of the sensors, and usability values of corresponding sensors.
    Type: Application
    Filed: November 18, 2015
    Publication date: May 19, 2016
    Inventors: Keith Hanna, Mikhail Teverovskiy, Manoj Aggarwal, Sarvesh Makthal
  • Publication number: 20140270404
    Abstract: This disclosure is directed to methods and systems for managing risk in a transaction with a user, which presents to the user, with sufficient detail for inspection, an image of the user blended with information about the transaction. A processor of a biometric device may blend an acquired image of a user of the device during a transaction with information about the transaction. The acquired image may comprise an image of the user suitable for manual or automatic recognition. The information may include a location determined via the device, an identifier of the device, and a timestamp for the image acquisition. The system may include a display, for presenting the blended image to the user. The presented image may show purposeful integration of the information about the transaction with the acquired image, to comprise a record of the transaction to be stored if the user agrees to proceed with the transaction.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Eyelock, Inc.
    Inventors: Keith J. Hanna, Manoj Aggarwal
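
One way the blending described in publication 20140270404 could be done, stamping the timestamp, device identifier, and location into the acquired image as a semi-transparent banner; Pillow is used here for convenience and is not named in the application:

```python
# Alpha-blend a text banner carrying transaction details into the acquired
# image, so the stored record visibly ties the image to the transaction.
from datetime import datetime, timezone
from PIL import Image, ImageDraw

def blend_transaction_record(photo: Image.Image, device_id: str, location: str) -> Image.Image:
    overlay = Image.new("RGBA", photo.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
    text = f"{stamp} | device {device_id} | {location}"
    draw.rectangle([(0, photo.height - 24), (photo.width, photo.height)], fill=(0, 0, 0, 160))
    draw.text((4, photo.height - 20), text, fill=(255, 255, 255, 255))
    return Image.alpha_composite(photo.convert("RGBA"), overlay)

record = blend_transaction_record(Image.new("RGB", (320, 240), "gray"), "dev-42", "New York, NY")
record.convert("RGB").save("transaction_record.png")
```
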
  • Publication number: 20140270409
    Abstract: This disclosure is directed to methods and systems for selective identification of biometric data for efficient compression. An evaluation module operating on a biometric device may determine if a set of acquired biometric data satisfies a quality threshold for subsequent automatic or manual recognition, while satisfying a set of predefined criteria for efficient compression of a corresponding type of biometric data, the determination performed prior to performing data compression on the acquired biometric data. The evaluation module may classify, decide, or identify, based on the determination, whether to retain the acquired set of biometric data for subsequent data compression.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Eyelock, Inc.
    Inventors: Keith J. Hanna, Manoj Aggarwal
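
A toy version of the pre-compression screening in publication 20140270409, checking a recognition-quality proxy (sharpness) and a compression-efficiency proxy (pixel variance) before any compression is run; both metrics and thresholds are invented stand-ins:

```python
# Decide whether an acquired biometric image is worth compressing and keeping,
# based on cheap quality and compressibility checks computed before compression.
import numpy as np

def sharpness(image: np.ndarray) -> float:
    # Mean absolute second difference along each axis: a crude focus measure.
    return float(np.abs(np.diff(image, n=2, axis=0)).mean()
                 + np.abs(np.diff(image, n=2, axis=1)).mean())

def should_retain(image: np.ndarray,
                  min_sharpness: float = 0.02,
                  max_variance: float = 0.2) -> bool:
    quality_ok = sharpness(image) >= min_sharpness      # usable for recognition
    compressible = float(image.var()) <= max_variance   # not too noisy to compress well
    return quality_ok and compressible

rng = np.random.default_rng(3)
blurry = np.full((64, 64), 0.5) + rng.normal(scale=0.001, size=(64, 64))
textured = np.clip(np.tile(np.linspace(0, 1, 64), (64, 1)) + rng.normal(scale=0.05, size=(64, 64)), 0, 1)
print(should_retain(blurry), should_retain(textured))  # blurry rejected, textured retained
```
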
  • Patent number: 8639046
    Abstract: A scalable method and apparatus are described to provide personalized interactive visualization of a plurality of compressed image data to a plurality of concurrent users. A plurality of image sources are digitally processed in the compressed domain to provide controllable, enhanced, user-specific interactive visualization with support for adjustment of viewing parameters such as frame rate, field of view, resolution, color format, viewpoint, and bandwidth.
    Type: Grant
    Filed: May 4, 2010
    Date of Patent: January 28, 2014
    Assignee: Mamigo Inc.
    Inventor: Manoj Aggarwal
  • Patent number: 8538082
    Abstract: The present invention provides a system and method for detecting and tracking a moving object. First, robust change detection is applied to find initial candidate regions in consecutive frames. These initial detections in consecutive frames are stacked to produce space-time bands, which are extracted by a band-detection algorithm based on the Hough transform and entropy minimization.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: September 17, 2013
    Assignee: SRI International
    Inventors: Tao Zhao, Manoj Aggarwal, Changjiang Yang
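
A toy sketch of the space-time band idea in patent 8538082: per-frame change detections are stacked along time, and a coarse Hough-style vote over velocity and intercept picks out the roughly linear band left by a consistently moving object; the entropy-minimization refinement is omitted:

```python
# Stack (time, position) detections and vote in a Hough accumulator over the
# line x = v * t + x0 to recover a consistently moving target amid clutter.
import numpy as np

rng = np.random.default_rng(4)
num_frames, width = 30, 100

# Synthetic detections: a target moving ~2 px/frame from x=10, plus clutter.
points = [(t, 10 + 2 * t + rng.integers(-1, 2)) for t in range(num_frames)]
points += [(int(rng.integers(0, num_frames)), int(rng.integers(0, width))) for _ in range(40)]

velocities = np.arange(-4, 4.25, 0.25)
accumulator = np.zeros((len(velocities), width), dtype=int)
for t, x in points:
    for vi, v in enumerate(velocities):
        x0 = int(round(x - v * t))
        if 0 <= x0 < width:
            accumulator[vi, x0] += 1

vi, x0 = np.unravel_index(accumulator.argmax(), accumulator.shape)
print(f"detected band: x ≈ {velocities[vi]:.2f} * t + {x0}")  # expect roughly 2.00 * t + 10
```
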