Patents by Inventor Nishitkumar Ashokkumar Desai

Nishitkumar Ashokkumar Desai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11922729
    Abstract: Commercial interactions with non-discretized items such as liquids in carafes or other dispensers are detected and associated with actors using images captured by one or more digital cameras including the carafes or dispensers within their fields of view. The images are processed to detect body parts of actors and other aspects therein, and to not only determine that a commercial interaction has occurred but also identify an actor that performed the commercial interaction. Based on information or data determined from such images, movements of body parts associated with raising, lowering or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: March 5, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Kaustav Kundu, Pahal Kamlesh Dalal, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Geoffrey A. Franz, Gerard Guy Medioni, Hoi Cheung Pang, Rakesh Ramakrishnan
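    Illustrative sketch: a minimal Python example of the actor-association step the abstract above describes, assuming hypothetical inputs (a carafe's pixel location and per-actor wrist keypoints detected from the images) and a simple nearest-wrist heuristic; the function name and data layout are illustrative, not the patented implementation.

      import math

      def associate_dispenser_event(carafe_xy, actors):
          """Associate a detected carafe raise/lower/rotate with the actor whose
          detected wrist keypoint is closest to the carafe in the image.

          carafe_xy: (x, y) pixel location of the carafe.
          actors: dict mapping actor_id -> list of (x, y) wrist keypoints.
          Returns (actor_id, distance) for the closest actor, or (None, None).
          """
          best_id, best_dist = None, float("inf")
          for actor_id, wrists in actors.items():
              for wx, wy in wrists:
                  d = math.hypot(wx - carafe_xy[0], wy - carafe_xy[1])
                  if d < best_dist:
                      best_id, best_dist = actor_id, d
          return (best_id, best_dist) if best_id is not None else (None, None)

      # Example: two actors, one with a wrist near the carafe.
      print(associate_dispenser_event((320, 240),
                                      {"actor_a": [(318, 250)], "actor_b": [(40, 60)]}))
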
  • Patent number: 11922728
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: March 5, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
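    Illustrative sketch: a minimal Python example of the server-side selection step described above, assuming each camera has already reduced its per-pixel vectors to (actor_id, confidence) candidates and reported an image-quality score for its view of the event; the quality-weighted voting shown here is an assumption, not the patented method.

      def select_actor(camera_reports):
          """camera_reports: one dict per camera with
             'quality'    - how favorably that camera's image depicts the event, and
             'candidates' - (actor_id, confidence) pairs derived from the vectors
                            the camera returned for the event pixel.
          Returns the actor_id with the highest quality-weighted confidence."""
          scores = {}
          for report in camera_reports:
              for actor_id, confidence in report["candidates"]:
                  scores[actor_id] = scores.get(actor_id, 0.0) + report["quality"] * confidence
          return max(scores, key=scores.get) if scores else None

      reports = [
          {"quality": 0.9, "candidates": [("actor_1", 0.8), ("actor_2", 0.3)]},
          {"quality": 0.4, "candidates": [("actor_2", 0.9)]},
      ]
      print(select_actor(reports))  # -> actor_1
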
  • Patent number: 11869065
    Abstract: This disclosure describes systems and techniques for identifying events that occur within an environment using image data captured at the environment. For example, one or more cameras may generate image data representative of a user interacting with an item on a shelf. This image data may be used to generate feature data associated with the user and the item, which may be analyzed by one or more classifiers for identifying an interaction between the user and the item. The systems and techniques may then generate interaction data, which in turn may be analyzed by one or more additional classifiers for identifying an event, such as the user picking a particular item from the shelf within the environment. Event data indicative of the event may then be used to update a virtual cart of the user.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jayakrishnan Kumar Eledath, Nikhil Chacko, Alessandro Bergamo, Kaustav Kundu, Marian Nasr Amin George, Jingjing Liu, Nishitkumar Ashokkumar Desai, Pahal Kamlesh Dalal, Keshav Nand Tripathi
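    Illustrative sketch: a minimal Python example of the two-stage flow the abstract above describes (feature data, then an interaction classifier, then an event classifier, then a virtual cart update); the StubClassifier, field names, and label values are hypothetical stand-ins for trained models.

      class StubClassifier:
          """Placeholder for a trained classifier; always returns a fixed prediction."""
          def __init__(self, prediction):
              self.prediction = prediction

          def predict(self, features):
              return self.prediction

      def process_features(feature_data, interaction_clf, event_clf, carts):
          """Classify an interaction from user/item feature data, classify an event
          from the resulting interaction data, then update the user's virtual cart."""
          interaction = interaction_clf.predict(feature_data)  # e.g. "reach", "pick"
          if interaction is None:
              return carts
          event = event_clf.predict({"interaction": interaction, **feature_data})
          if event and event.get("type") == "pick":
              carts.setdefault(event["user_id"], []).append(event["item_id"])
          return carts

      carts = process_features(
          {"user_id": "u1", "item_id": "sku42"},
          StubClassifier("pick"),
          StubClassifier({"type": "pick", "user_id": "u1", "item_id": "sku42"}),
          {},
      )
      print(carts)  # {'u1': ['sku42']}
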
  • Patent number: 11851279
    Abstract: Described is a system and method for collecting item and user information and utilizing that information to determine materials handling facility patterns, item trends and inventory location trends. User monitoring data for users located in a materials handling facility may include information identifying inventory locations approached by users, gaze directions of users, user dwell times, an identification of items picked by users, an identification of items placed by users, and/or in-transit times for items. This information may be aggregated and processed to determine materials handling facility patterns, item trends and/or inventory location trends.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ammar Chinoy, Sudarshan Narasimha Raghavan, Emilio Ian Maldonado, Daniel Bibireata, Nishitkumar Ashokkumar Desai
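    Illustrative sketch: a minimal Python example of aggregating per-user monitoring data into inventory-location trends, assuming hypothetical records containing a location identifier and a dwell time; the record format and the count/mean summary are illustrative only.

      from collections import defaultdict

      def location_dwell_trends(monitoring_records):
          """monitoring_records: iterable of dicts with 'location' and 'dwell_seconds',
          one per observed approach to an inventory location.
          Returns {location: (visit_count, mean_dwell_seconds)}."""
          totals = defaultdict(lambda: [0, 0.0])
          for rec in monitoring_records:
              entry = totals[rec["location"]]
              entry[0] += 1
              entry[1] += rec["dwell_seconds"]
          return {loc: (count, total / count) for loc, (count, total) in totals.items()}

      records = [
          {"location": "aisle-3-shelf-2", "dwell_seconds": 12.0},
          {"location": "aisle-3-shelf-2", "dwell_seconds": 8.0},
          {"location": "endcap-1", "dwell_seconds": 30.0},
      ]
      print(location_dwell_trends(records))
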
  • Publication number: 20230394831
    Abstract: This disclosure is directed to techniques in which a first user in an environment scans visual indicia associated with an item, such as a barcode, before handing the item to a second user. One or more computing devices may receive an indication of the scan, retrieve image data of the interaction from a camera within the environment, identify the user that received the item, and update a virtual cart associated with the second user to indicate addition of the item.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 7, 2023
    Inventors: Ivan Stankovic, Joseph M Alyea, Jiajun Zhao, Kartik Muktinutalapati, Waqas Ahmed, Dilip Kumar, Danny Guan, Nishitkumar Ashokkumar Desai, Longlong Zhu
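    Illustrative sketch: a minimal Python example of the scan-and-hand-off flow described above, where a stand-in identify_receiver callable represents the image-data lookup that determines which user received the item; the names and the event format are assumptions.

      def handle_scan_handoff(scan_event, identify_receiver, carts):
          """scan_event: dict with 'item_id', 'scanning_user', and 'timestamp'.
          identify_receiver: callable that, given the scan event, consults image data
          from cameras in the environment and returns the id of the receiving user.
          Adds the scanned item to the receiving user's virtual cart."""
          receiver = identify_receiver(scan_event)
          if receiver is None:
              return carts  # no receiver identified; leave carts unchanged
          carts.setdefault(receiver, []).append(scan_event["item_id"])
          return carts

      # Example with a stand-in receiver-identification function.
      print(handle_scan_handoff(
          {"item_id": "sku-123", "scanning_user": "associate_7", "timestamp": 1700000000},
          lambda event: "customer_42",
          {},
      ))
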
  • Patent number: 11810362
    Abstract: This disclosure describes techniques for updating planogram data associated with a facility. The planogram may indicate inventory locations within the facility for various types of items supported by product fixtures. In particular, an image of a product fixture is analyzed to identify image segments corresponding to product groups, where each product group consists of instances of the same product and each image segment corresponds to a group of image points. Image data is further analyzed to determine coordinates of the points of each image segment. A product space corresponding to the product group is then defined based on the coordinates of the points of the product group. In some cases, for example, a product space may be defined in terms of the coordinates of the corners of a rectangular bounding box or volume.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: November 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Jayakrishnan Kumar Eledath, Petko Tsonev, Nishitkumar Ashokkumar Desai, Gerard Guy Medioni, Jean Laurent Guigues, Chuhang Zou, Connor Spencer Blue Worley, Claire Law, Paul Ignatius Dizon Echevarria, Matthew Fletcher Harrison, Pahal Kamlesh Dalal
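    Illustrative sketch: a minimal Python example of defining a product space from the coordinates of an image segment's points, as in the last sentence of the abstract above; the axis-aligned bounding volume and the point format are assumptions.

      def product_space_from_points(points):
          """points: list of (x, y, z) coordinates belonging to one image segment,
          i.e. one group of instances of the same product on a fixture.
          Returns the corners of an axis-aligned bounding volume as
          ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
          xs, ys, zs = zip(*points)
          return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

      segment = [(0.42, 1.10, 0.30), (0.55, 1.12, 0.31), (0.48, 1.25, 0.29)]
      print(product_space_from_points(segment))
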
  • Patent number: 11790630
    Abstract: This disclosure describes techniques for updating planogram data associated with a facility. The planogram may indicate, for different shelves and other inventory locations within the facility, which items are on which shelves. For example, the planogram data may indicate that a particular item is located on a particular shelf. Therefore, when a system identifies that a user has taken an item from that shelf, the system may update a virtual cart of that user to indicate addition of the particular item. In some instances, however, a new item may be stocked on the example shelf instead of a previous item. The techniques described herein may use sensor data generated in the facility to identify this change and update the planogram data to indicate an association between the shelf and the new item.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: October 17, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Petko Tsonev, Claire Law, Connor Spencer Blue Worley, Jue Wang, Bharat Singh, Hue Tuan Thi, Jayakrishnan Kumar Eledath, Nishitkumar Ashokkumar Desai
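    Illustrative sketch: a minimal Python example of the planogram-update idea described above, assuming a simple shelf-to-item mapping and a confidence score from whatever sensor-data analysis identified the newly stocked item; the threshold and data shapes are hypothetical.

      def update_planogram(planogram, shelf_id, observed_item, confidence, threshold=0.8):
          """planogram: dict mapping shelf_id -> item_id believed to be stocked there.
          observed_item / confidence: what the sensor data indicates is now on the
          shelf and how sure the system is. Re-associates the shelf with the new item
          only when the observation is confident and disagrees with the record."""
          if observed_item != planogram.get(shelf_id) and confidence >= threshold:
              planogram[shelf_id] = observed_item
          return planogram

      plan = {"shelf-12": "sku-old"}
      print(update_planogram(plan, "shelf-12", "sku-new", confidence=0.93))
      # {'shelf-12': 'sku-new'}
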
  • Patent number: 11688170
    Abstract: This disclosure is directed to techniques in which a first user in an environment scans visual indicia associated with an item, such as a barcode, before handing the item to a second user. One or more computing devices may receive an indication of the scan, retrieve image data of the interaction from a camera within the environment, identify the user that received the item, and update a virtual cart associated with the second user to indicate addition of the item.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: June 27, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ivan Stankovic, Joseph M Alyea, Jiajun Zhao, Kartik Muktinutalapati, Waqas Ahmed, Dilip Kumar, Danny Guan, Nishitkumar Ashokkumar Desai, Longlong Zhu
  • Patent number: 11657617
    Abstract: A system may generate image data of users within a facility. Such users may be part of a group that is unknown to the system. The system can predict group data for two or more users based on a resemblance or the users being within a threshold distance of each other. However, if a confidence level associated with the predicted group data is below a threshold value, the group data is deemed unreliable and assistance from an associate is deemed necessary. A user interface that includes a portion of the image data, information about the predicted group data, and other interface elements is presented to the associate via a display. Based on the input data received from the associate, the group data can be confirmed or rejected. If the group data is confirmed, an association is made between the users and a group identifier.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: May 23, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas Meilandt Mathiesen, Maren Marie Costa, Nishitkumar Ashokkumar Desai, Christopher Richard Fescoe, Casey Louis Thurston, Jason Michael Famularo, Sudarshan Narasimha Raghavan, Waqas Ahmed, Danny Guan
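    Illustrative sketch: a minimal Python example of the confidence-gated escalation described above, where a stand-in ask_associate callable represents the user interface shown to the associate; the names, threshold, and data shapes are assumptions.

      def resolve_group(predicted_group, confidence, ask_associate, threshold=0.75):
          """predicted_group: dict with 'user_ids' and a proposed 'group_id'.
          confidence: confidence level of the predicted group data.
          ask_associate: callable that presents the prediction (with image data) to an
          associate and returns True if confirmed, False if rejected.
          Returns {user_id: group_id} associations, or {} if the grouping is rejected."""
          if confidence < threshold and not ask_associate(predicted_group):
              return {}
          return {uid: predicted_group["group_id"] for uid in predicted_group["user_ids"]}

      # Example: a low-confidence prediction that the associate confirms.
      print(resolve_group({"user_ids": ["u1", "u2"], "group_id": "g7"},
                          confidence=0.6,
                          ask_associate=lambda prediction: True))
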
  • Patent number: 11580785
    Abstract: Commercial interactions with non-discretized items such as liquids in carafes or other dispensers are detected and associated with actors using images captured by one or more digital cameras including the carafes or dispensers within their fields of view. The images are processed to detect body parts of actors and other aspects therein, and to not only determine that a commercial interaction has occurred but also identify an actor that performed the commercial interaction. Based on information or data determined from such images, movements of body parts associated with raising, lowering or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: February 14, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Kaustav Kundu, Pahal Kamlesh Dalal, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Geoffrey A. Franz, Gerard Guy Medioni, Hoi Cheung Pang, Rakesh Ramakrishnan
  • Patent number: 11494830
    Abstract: Described is a multiple-camera system and process for determining an item involved in an event. For example, when a user picks an item or places an item at an inventory location, image information for the item may be obtained and processed to identify the item involved in the event and associate that item with the user.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: November 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Hao Jiang, Yasser Baseer Asmi, Nishitkumar Ashokkumar Desai, Emilio Ian Maldonado, Ammar Chinoy, Daniel Bibireata, Sudarshan Narasimha Raghavan
  • Patent number: 11482045
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: October 25, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11468698
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11468400
    Abstract: One or more load cells measure the weight of items at a fixture. Weight changes occur as items are picked from or placed to the fixture and may be used to determine when the item was picked or placed, the quantity involved, and so forth. Individual weights for a type of item may vary. A set of data comprising weight changes associated with interactions involving a single one of a particular type of item is gathered. These may be weight changes due to picks, places, or both. A model, such as a probability distribution, may be created that relates a particular weight of that type of item to a probability. The model may then be used to process other weight changes and attempt to determine what type of item was involved in an interaction.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
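    Illustrative sketch: a minimal Python example of modeling per-item weight variation as a probability distribution and using it to decide which item type best explains a new weight change; the normal model and the sample numbers are assumptions, not the patented method.

      import math
      import statistics

      def fit_weight_model(observed_unit_weights):
          """Fit a simple normal model (mean, stdev) to weight changes attributed to
          single-item picks or places of one item type."""
          return statistics.mean(observed_unit_weights), statistics.stdev(observed_unit_weights)

      def weight_likelihood(weight_change, model):
          """Probability density of an observed weight change under the item's model."""
          mean, stdev = model
          z = (weight_change - mean) / stdev
          return math.exp(-0.5 * z * z) / (stdev * math.sqrt(2 * math.pi))

      # Example: decide which item type better explains a 452 g change.
      soda = fit_weight_model([443.0, 448.5, 451.0, 455.2, 449.8])
      soup = fit_weight_model([305.0, 309.5, 302.8, 307.1, 304.3])
      print("soda" if weight_likelihood(452.0, soda) > weight_likelihood(452.0, soup) else "soup")
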
  • Patent number: 11468681
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the sets of vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Jaechul Kim, Kushagra Srivastava, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11436557
    Abstract: One or more load cells measure the weight of items on a shelf or other fixture. Weight changes occur as items are picked from or placed to the fixture. Output from the load cells is processed to produce denoised data. The denoised data is processed to determine event data representative of a pick or a place of an item. Hypotheses are generated using information about where particular types of items are stowed, the weights of those particular types of items, and the event data. A high scoring hypothesis is used to determine interaction data indicative of the type and quantity of an item that was added to or removed from the fixture. If ambiguity exists between hypotheses, additional techniques such as data about locations of weight changes and fine-grained analysis may be used to determine the interaction data.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: September 6, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
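    Illustrative sketch: a minimal Python example of generating and scoring (item, quantity) hypotheses against a denoised weight change, as in the abstract above; the error-based score, the candidate quantities, and the data shapes are assumptions.

      def score_hypotheses(weight_delta, stowed_items, max_quantity=3, tolerance=0.1):
          """weight_delta: denoised weight change at the fixture (negative = pick).
          stowed_items: {item_id: unit_weight_grams} for items stowed at the fixture.
          Scores each (item_id, quantity) hypothesis by how closely
          quantity * unit_weight matches the observed change (1.0 = exact match)."""
          hypotheses = []
          for item_id, unit_weight in stowed_items.items():
              for qty in range(1, max_quantity + 1):
                  expected = qty * unit_weight
                  relative_error = abs(abs(weight_delta) - expected) / expected
                  score = max(0.0, 1.0 - relative_error / tolerance)
                  hypotheses.append((item_id, qty, score))
          return sorted(hypotheses, key=lambda h: h[2], reverse=True)

      # A 905 g drop is best explained by two 450 g items being picked.
      print(score_hypotheses(-905.0, {"sku-450g": 450.0, "sku-310g": 310.0})[0])
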
  • Patent number: 11412185
    Abstract: Sensors in a facility generate sensor data associated with a region of the facility, which can be used to determine a 3D location of an object in the facility. Some sensors may sense overlapping regions of the facility. For example, a first sensor may generate data associated with a first region of the facility, while a second sensor may generate data associated with a second region of the facility that partially overlaps the first region. Sensors may fail at times as determined from sensor output data or status data. In response to identifying a failed sensor, an undetected region corresponding to the failed sensor is identified, as well as a substitute sensor that partially senses the undetected region. Sensor data from the substitute sensor, such as 2D data, is acquired and used to estimate a 3D location of an object in the undetected region.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: August 9, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Emilio Ian Maldonado, Daniel Bibireata, Nishitkumar Ashokkumar Desai, Yasser Baseer Asmi, Xiaofeng Ren, Jaechul Kim
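    Illustrative sketch: a minimal Python example of picking substitute sensors when one fails, assuming the facility's coverage is tracked as a set of region cells per sensor; the cell representation and the ranking rule are illustrative only.

      def find_substitutes(failed_sensor, coverage):
          """coverage: {sensor_id: set of region cells the sensor observes}.
          Returns healthy sensors that partially cover the failed sensor's
          now-undetected region, ranked by how many missing cells each still sees."""
          undetected = coverage[failed_sensor]
          candidates = []
          for sensor_id, cells in coverage.items():
              if sensor_id == failed_sensor:
                  continue
              overlap = len(undetected & cells)
              if overlap:
                  candidates.append((sensor_id, overlap))
          return sorted(candidates, key=lambda c: c[1], reverse=True)

      coverage = {
          "cam_a": {"r1", "r2", "r3"},
          "cam_b": {"r2", "r3", "r4"},  # overlaps cam_a on r2 and r3
          "cam_c": {"r5"},
      }
      print(find_substitutes("cam_a", coverage))  # [('cam_b', 2)]
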
  • Publication number: 20220171972
    Abstract: This disclosure is directed to techniques in which a first user in an environment scans visual indicia associated with an item, such as a barcode, before handing the item to a second user. One or more computing devices may receive an indication of the scan, retrieve image data of the interaction from a camera within the environment, identify the user that received the item, and update a virtual cart associated with the second user to indicate addition of the item.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: Ivan Stankovic, Joseph M. Alyea, Jiajun Zhao, Kartik Muktinutalapati, Waqas Ahmed, Dilip Kumar, Danny Guan, Nishitkumar Ashokkumar Desai, Longlong Zhu
  • Patent number: 11308442
    Abstract: Sensor data from load cells at a shelf is processed using a first time window to produce first event data describing coarse events and sub-events. Location data is determined that indicates where on the shelf weight changes occurred at particular times. Hypotheses are generated using information about where items are stowed, the weights of those items, the type of event, and the location data. If confidence values of these hypotheses are below a threshold value, second event data is determined by merging adjacent sub-events. This second event data is then used to determine second hypotheses which are then assessed. A hypothesis with a high confidence value is used to generate interaction data indicative of picks or places of particular quantities of particular types of items from the shelf.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: April 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
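    Illustrative sketch: a minimal Python example of the second pass described above, in which adjacent sub-events are merged when the first-pass hypotheses are not confident enough; the event format, confidence threshold, and merge rule are assumptions.

      def merge_adjacent_subevents(sub_events, confidences, threshold=0.7):
          """sub_events: list of dicts with 'start', 'end', and 'weight_delta' from the
          fine-time-window segmentation. confidences: best hypothesis confidence per
          sub-event. If any falls below the threshold, merge adjacent sub-events into
          coarser events so hypotheses can be re-generated over the combined change."""
          if all(c >= threshold for c in confidences):
              return sub_events
          merged = []
          for event in sub_events:
              if merged and event["start"] <= merged[-1]["end"] + 1:
                  merged[-1]["end"] = event["end"]
                  merged[-1]["weight_delta"] += event["weight_delta"]
              else:
                  merged.append(dict(event))
          return merged

      subs = [{"start": 0, "end": 5, "weight_delta": -220.0},
              {"start": 6, "end": 9, "weight_delta": -230.0}]
      print(merge_adjacent_subevents(subs, confidences=[0.4, 0.5]))
      # -> one merged event with weight_delta -450.0
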
  • Patent number: 11301984
    Abstract: Sensors in a facility obtain sensor data about a user's interaction with a fixture, such as a shelf. The sensor data may include images such as obtained from overhead cameras, weight changes from weight sensors at the shelf, and so forth. Based on the sensor data, one or more hypotheses that indicate the items and quantity may be determined and assessed. The hypotheses may be based on information such as the location of a user and where their hands are, weight changes, physical layout data indicative of where items are stowed, cart data indicative of what items are in the possession of the user, and so forth. A hypothesis having a greatest confidence value may be deemed to be representative of the user's interaction, and interaction data indicative of the item and quantity may be stored.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: April 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Nikhil Chacko, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Kushagra Srivastava