Patents by Inventor Jayakrishnan Kumar Eledath

Jayakrishnan Kumar Eledath has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11922729
    Abstract: Commercial interactions with non-discretized items such as liquids in carafes or other dispensers are detected and associated with actors using images captured by one or more digital cameras including the carafes or dispensers within their fields of view. The images are processed to detect body parts of actors and other aspects therein, and to not only determine that a commercial interaction has occurred but also identify an actor that performed the commercial interaction. Based on information or data determined from such images, movements of body parts associated with raising, lowering or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: March 5, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Kaustav Kundu, Pahal Kamlesh Dalal, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Geoffrey A. Franz, Gerard Guy Medioni, Hoi Cheung Pang, Rakesh Ramakrishnan
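The abstract for patent 11,922,729 above describes detecting interactions with carafes or dispensers by spotting raising or lowering motions of tracked body parts and attributing them to a specific actor. A minimal sketch of one such heuristic is shown below; the wrist-track structure, dispenser position, and thresholds are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass

# Illustrative sketch only: a simple lift-detection heuristic over tracked
# wrist keypoints; thresholds and data shapes are assumptions for this example.

@dataclass
class WristSample:
    actor_id: str
    x: float      # horizontal position of the wrist
    y: float      # depth position of the wrist
    z: float      # height of the wrist

DISPENSER_XY = (2.0, 1.5)   # assumed planogram position of the carafe
NEAR_RADIUS = 0.4           # wrist must come this close to the carafe
LIFT_DELTA = 0.15           # vertical rise that counts as "raising" it

def detect_lift(track: list[WristSample]) -> str | None:
    """Return the actor id if the wrist track shows a lift near the carafe."""
    near = [s for s in track
            if (s.x - DISPENSER_XY[0]) ** 2 + (s.y - DISPENSER_XY[1]) ** 2
            <= NEAR_RADIUS ** 2]
    if len(near) < 2:
        return None
    rise = max(s.z for s in near) - near[0].z
    return near[0].actor_id if rise >= LIFT_DELTA else None

track = [WristSample("actor-7", 2.1, 1.4, 0.90),
         WristSample("actor-7", 2.0, 1.5, 1.02),
         WristSample("actor-7", 2.0, 1.5, 1.10)]
print(detect_lift(track))  # -> "actor-7"
```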
  • Patent number: 11922728
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: March 5, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
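The abstract for patent 11,922,728 above describes cameras that emit, for pixels of interest, vectors to the body parts of candidate actors along with confidence values, and a server that weighs those outputs by how well each camera viewed the event before selecting an actor. The sketch below shows only the server-side aggregation step under assumed field names and a simple quality-weighted vote; it is not the patented algorithm.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ActorVote:
    camera_id: str
    actor_id: str
    confidence: float    # camera's confidence in its pixel-to-body-part vector
    view_quality: float  # how favorably this camera saw the event location

def associate_event(votes: list[ActorVote]) -> str:
    """Pick the actor with the highest quality-weighted confidence mass."""
    scores = defaultdict(float)
    for v in votes:
        scores[v.actor_id] += v.confidence * v.view_quality
    return max(scores, key=scores.get)

votes = [
    ActorVote("cam-3", "actor-12", confidence=0.81, view_quality=0.9),
    ActorVote("cam-5", "actor-12", confidence=0.40, view_quality=0.7),
    ActorVote("cam-7", "actor-19", confidence=0.55, view_quality=0.3),
]
print(associate_event(votes))  # -> "actor-12"
```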
  • Patent number: 11869065
    Abstract: This disclosure describes systems and techniques for identifying events that occur within an environment using image data captured at the environment. For example, one or more cameras may generate image data representative of a user interacting with an item on a shelf. This image data may be used to generate feature data associated with the user and the item, which may be analyzed by one or more classifiers for identifying an interaction between the user and the item. The systems and techniques may then generate interaction data, which in turn may be analyzed by one or more additional classifiers for identifying an event, such as the user picking a particular item from the shelf within the environment. Event data indicative of the event may then be used to update a virtual cart of the user.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jayakrishnan Kumar Eledath, Nikhil Chacko, Alessandro Bergamo, Kaustav Kundu, Marian Nasr Amin George, Jingjing Liu, Nishitkumar Ashokkumar Desai, Pahal Kamlesh Dalal, Keshav Nand Tripathi
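The abstract for patent 11,869,065 above describes a staged flow: feature data is classified into an interaction, the interaction is classified into an event, and the event updates a virtual cart. The sketch below illustrates that staging with trivial stand-in classifiers; the feature names, thresholds, and SKU identifier are assumptions for illustration only.

```python
# Illustrative pipeline sketch: feature extraction, an interaction classifier,
# an event classifier, and a virtual-cart update. The classifiers are trivial
# stand-ins; the staging (not the models) is the point.

def extract_features(frame: dict) -> dict:
    # Assumed upstream step: pose / item features derived from image data.
    return {"hand_near_shelf": frame["hand_dist_m"] < 0.2,
            "item_moved": frame["item_displacement_m"] > 0.05}

def classify_interaction(features: dict) -> str:
    return "touch" if features["hand_near_shelf"] else "none"

def classify_event(interaction: str, features: dict) -> str:
    if interaction == "touch" and features["item_moved"]:
        return "pick"
    return "no_event"

def update_cart(cart: dict, event: str, item_id: str) -> dict:
    if event == "pick":
        cart[item_id] = cart.get(item_id, 0) + 1
    return cart

frame = {"hand_dist_m": 0.12, "item_displacement_m": 0.08}
features = extract_features(frame)
event = classify_event(classify_interaction(features), features)
print(update_cart({}, event, "sku-123"))  # -> {'sku-123': 1}
```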
  • Patent number: 11810362
    Abstract: This disclosure describes techniques for updating planogram data associated with a facility. The planogram may indicate inventory locations within the facility for various types of items supported by product fixtures. In particular, an image of a product fixture is analyzed to identify image segments corresponding to product groups, where each product group consists of instances of the same product and each image segment corresponds to a group of image points. Image data is further analyzed to determine coordinates of the points of each image segment. A product space corresponding to the product group is then defined based on the coordinates of the points of the product group. In some cases, for example, a product space may be defined in terms of the coordinates of the corners of a rectangular bounding box or volume.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: November 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Jayakrishnan Kumar Eledath, Petko Tsonev, Nishitkumar Ashokkumar Desai, Gerard Guy Medioni, Jean Laurent Guigues, Chuhang Zou, Connor Spencer Blue Worley, Claire Law, Paul Ignatius Dizon Echevarria, Matthew Fletcher Harrison, Pahal Kamlesh Dalal
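The abstract for patent 11,810,362 above describes defining a product space from the coordinates of the points in a product-group image segment, for example as the corners of a rectangular bounding volume. The sketch below shows that last step as a simple axis-aligned bounding box; the point array and its coordinate frame are assumptions for illustration.

```python
import numpy as np

# Sketch: given 3D coordinates of the points in one product-group segment,
# define the product space as an axis-aligned bounding volume.

def product_space(points_xyz: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (min_corner, max_corner) of the bounding volume for a segment."""
    return points_xyz.min(axis=0), points_xyz.max(axis=0)

segment = np.array([[0.10, 0.40, 1.20],   # x, y, z of points in one product group
                    [0.32, 0.45, 1.22],
                    [0.28, 0.41, 1.35]])
lo, hi = product_space(segment)
print(lo, hi)  # opposite corners of the rectangular volume assigned to this product
```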
  • Patent number: 11790630
    Abstract: This disclosure describes techniques for updating planogram data associated with a facility. The planogram may indicate, for different shelves and other inventory locations within the facility, which items are on which shelves. For example, the planogram data may indicate that a particular item is located on a particular shelf. Therefore, when a system identifies that a user has taken an item from that shelf, the system may update a virtual cart of that user to indicate addition of the particular item. In some instances, however, a new item may be stocked on the example shelf instead of a previous item. The techniques described herein may use sensor data generated in the facility to identify this change and update the planogram data to indicate an association between the shelf and the new item.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: October 17, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Petko Tsonev, Claire Law, Connor Spencer Blue Worley, Jue Wang, Bharat Singh, Hue Tuan Thi, Jayakrishnan Kumar Eledath, Nishitkumar Ashokkumar Desai
  • Patent number: 11625838
    Abstract: Devices and techniques are generally described for articulated three-dimensional pose tracking. In some examples, a plurality of frames of image data captured by one or more cameras may be received. First feature data representing the plurality of frames of image data may be determined using a backbone network. The first feature data may be projected into three-dimensional (3D) space. In some examples, 3D location data describing respective 3D locations of one or more persons represented by the first feature data projected in the 3D space may be determined. The first feature data and the 3D location data may be sent to a four-dimensional (4D) convolutional neural network (CNN). The 4D CNN may generate second feature data comprising respective 3D representations of the one or more persons. Three dimensional pose data representing articulated 3D pose information for the one or more persons may be generated.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: April 11, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Dinesh Reddy Narapureddy, Jean Laurent Guigues, Leonid Pishchulin, Jayakrishnan Kumar Eledath
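The abstract for patent 11,625,838 above describes projecting backbone feature data into three-dimensional space before a 4D convolutional network produces articulated pose estimates. The sketch below covers only the unprojection step, lifting a 2D feature location into world coordinates with a pinhole camera model; the intrinsics, pose, and depth value are made-up assumptions, and the later 4D CNN stage is not shown.

```python
import numpy as np

# Sketch of one step only: back-projecting a 2D feature location into 3D
# world coordinates with a pinhole model. All camera parameters are assumed.

K = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # assumed rotation (world -> camera)
t = np.array([0.0, 0.0, 0.0])         # assumed translation (world -> camera)

def backproject(u: float, v: float, depth: float) -> np.ndarray:
    """Lift pixel (u, v) at the given depth into world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    return R.T @ (ray_cam - t)

print(backproject(400.0, 260.0, depth=3.5))  # a 3D point for that feature pixel
```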
  • Patent number: 11580785
    Abstract: Commercial interactions with non-discretized items such as liquids in carafes or other dispensers are detected and associated with actors using images captured by one or more digital cameras including the carafes or dispensers within their fields of view. The images are processed to detect body parts of actors and other aspects therein, and to not only determine that a commercial interaction has occurred but also identify an actor that performed the commercial interaction. Based on information or data determined from such images, movements of body parts associated with raising, lowering or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: February 14, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Kaustav Kundu, Pahal Kamlesh Dalal, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Geoffrey A. Franz, Gerard Guy Medioni, Hoi Cheung Pang, Rakesh Ramakrishnan
  • Patent number: 11482045
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: October 25, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11468681
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the sets of vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Jaechul Kim, Kushagra Srivastava, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11468400
    Abstract: One or more load cells measure the weight of items at a fixture. Weight changes occur as items are picked from or placed to the fixture and may be used to determine when the item was picked or placed, the quantity involved, and so forth. Individual weights for a type of item may vary. A set of data comprising weight changes associated with interactions involving a single one of a particular type of item is gathered. These may be weight changes due to picks, places, or both. A model, such as a probability distribution, may be created that relates a particular weight of that type of item to a probability. The model may then be used to process other weight changes and attempt to determine what type of item was involved in an interaction.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
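The abstract for patent 11,468,400 above describes building a probability distribution from single-unit weight changes for a type of item and then using the model to interpret later weight changes. Below is a minimal sketch of that idea using a Gaussian fit per item type; the sample weights, the Gaussian assumption, and the item names are illustrative only.

```python
import math
from statistics import mean, stdev

# Sketch: fit a per-item weight model from observed single-unit weight
# changes, then ask which item type best explains a new weight change.

def fit(deltas: list[float]) -> tuple[float, float]:
    """Return (mean, std) of observed per-unit weight changes in grams."""
    return mean(deltas), max(stdev(deltas), 1e-3)

def likelihood(x: float, mu: float, sigma: float) -> float:
    """Gaussian density of observing weight change x under an item model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

models = {
    "soda-can": fit([355.1, 356.0, 354.6, 355.4]),       # grams per pick/place
    "juice-bottle": fit([502.3, 498.9, 501.0, 500.2]),
}
observed = 354.9
best = max(models, key=lambda item: likelihood(observed, *models[item]))
print(best)  # -> "soda-can"
```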
  • Patent number: 11468698
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras having the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11436557
    Abstract: One or more load cells measure the weight of items on a shelf or other fixture. Weight changes occur as items are picked from or placed to the fixture. Output from the load cells is processed to produce denoised data. The denoised data is processed to determine event data representative of a pick or a place of an item. Hypotheses are generated using information about where particular types of items are stowed, the weights of those particular types of items, and the event data. A high scoring hypothesis is used to determine interaction data indicative of the type and quantity of an item that was added to or removed from the fixture. If ambiguity exists between hypotheses, additional techniques such as data about locations of weight changes and fine grained analysis may be used to determine the interaction data.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: September 6, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
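The abstract for patent 11,436,557 above describes generating hypotheses from stowage information, per-item weights, and a denoised weight event, then keeping a high-scoring hypothesis. The sketch below enumerates (item, quantity) hypotheses for a measured weight change and scores them by fit; the catalog weights, stow map, and scoring rule are assumptions for illustration.

```python
# Sketch: generate (item, quantity) hypotheses for a measured weight change
# at a fixture and keep the best-scoring one.

CATALOG = {"soda-can": 355.0, "juice-bottle": 500.0}    # unit weights in grams
STOWED_AT_LANE = {"lane-4": ["soda-can", "juice-bottle"]}

def hypotheses(lane: str, measured_delta: float, max_qty: int = 3):
    for item in STOWED_AT_LANE[lane]:
        for qty in range(1, max_qty + 1):
            expected = -qty * CATALOG[item]              # negative delta = pick
            error = abs(measured_delta - expected)
            score = 1.0 / (1.0 + error)                  # closer fit -> higher score
            yield score, item, qty

measured = -709.0                                        # grams removed from lane-4
score, item, qty = max(hypotheses("lane-4", measured))
print(item, qty, round(score, 3))  # -> soda-can 2 0.5
```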
  • Patent number: 11393301
    Abstract: This disclosure describes, in part, systems for enabling physical retail stores and other facilities to implement both automated- and manual-checkout techniques for customers of the stores and/or facilities. For example, the described systems may enable a retail store to implement technology where users are able to pick items from shelves and other inventory locations and exit the store without performing manual checkout of the items, as well as technology to allow users to pay for their items using point-of-sale (POS) and/or other manual-checkout techniques. The systems described herein thus enable hybrid retail facilities, as opposed to a retail facility that is either entirely traditional or entirely enabled for automated checkout.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: July 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Shane Michael Wilson, Matthew Norman, Jayakrishnan Kumar Eledath
  • Patent number: 11308442
    Abstract: Sensor data from load cells at a shelf is processed using a first time window to produce first event data describing coarse and sub-events. Location data is determined that indicates where on the shelf weight changes occurred at particular times. Hypotheses are generated using information about where items are stowed, weights of those items, type of event, and the location data. If confidence values of these hypotheses are below a threshold value, second event data is determined by merging adjacent sub-events. This second event data is then used to determine second hypotheses which are then assessed. A hypothesis with a high confidence value is used to generate interaction data indicative of picks or places of particular quantities of particular types of items from the shelf.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: April 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
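The abstract for patent 11,308,442 above describes falling back to coarser events by merging adjacent sub-events when the fine-grained hypotheses are not confident enough. The sketch below shows only that merge step; the event tuples and the gap threshold are illustrative assumptions.

```python
# Sketch of the merge step only: collapse adjacent weight sub-events into
# coarser events when fine-grained hypotheses score below a threshold.

def merge_subevents(subevents, max_gap_s=0.5):
    """subevents: list of (start_s, end_s, weight_delta_g), time-ordered."""
    merged = []
    for start, end, delta in subevents:
        if merged and start - merged[-1][1] <= max_gap_s:
            prev_start, _, prev_delta = merged[-1]
            merged[-1] = (prev_start, end, prev_delta + delta)
        else:
            merged.append((start, end, delta))
    return merged

subevents = [(0.0, 0.2, -180.0), (0.4, 0.6, -175.0), (2.0, 2.1, -355.0)]
print(merge_subevents(subevents))
# -> [(0.0, 0.6, -355.0), (2.0, 2.1, -355.0)]
```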
  • Patent number: 11301684
    Abstract: This disclosure describes systems and techniques for detecting certain activity in image data, such as frames of video data. For example, the systems and techniques may create and utilize an activity classifier for detecting and classifying certain human activity in video data of a facility. In some instances, the classifier may be trained to identify, from the video data, certain predefined activity such as a user picking an item from a shelf, a user returning an item to a shelf, a first user passing an item to a second user, or the like. In some instances, the techniques enable activity detection using video data alone, rather than video data in combination with data acquired by other sensors.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: April 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Keunhong Park, Gerard Guy Medioni, Nikhil Chacko, Jayakrishnan Kumar Eledath, Nishitkumar Ashokkumar Desai
  • Patent number: 11301984
    Abstract: Sensors in a facility obtain sensor data about a user's interaction with a fixture, such as a shelf. The sensor data may include images such as obtained from overhead cameras, weight changes from weight sensors at the shelf, and so forth. Based on the sensor data, one or more hypotheses that indicate the items and quantity may be determined and assessed. The hypotheses may be based on information such as the location of a user and where their hands are, weight changes, physical layout data indicative of where items are stowed, cart data indicative of what items are in the possession of the user, and so forth. A hypothesis having a greatest confidence value may be deemed to be representative of the user's interaction, and interaction data indicative of the item and quantity may be stored.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: April 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Nikhil Chacko, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Kushagra Srivastava
  • Patent number: 11284041
    Abstract: In a materials handling facility, events may be associated with users based on imaging data captured from multiple fields of view. When an event is detected at a location within the fields of view of multiple cameras, two or more of the cameras may be identified as having captured images of the location at a time of the event. Users within the materials handling facility may be identified from images captured prior to, during or after the event, and visual representations of the respective actors may be generated from the images. The event may be associated with one of the users based on distances between the users' hands and the location of the event, as determined from the visual representations, or based on imaging data captured from the users' hands, which may be processed to determine which, if any, of such hands includes an item associated with the event.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: March 22, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Alessandro Bergamo, Pahal Kamlesh Dalal, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Marian Nasr Amin George, Jean Laurent Guigues, Gerard Guy Medioni, Kartik Muktinutalapati, Robert Matthias Steele, Lu Xia
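The abstract for patent 11,284,041 above describes associating an event with a user based in part on the distances between users' hands and the event location. The sketch below shows that distance test in isolation; the coordinates and the single-frame treatment are assumptions, and the patent also considers whether an item appears in a hand.

```python
import math

# Sketch: associate an event with the user whose tracked hand came closest
# to the event location.

def closest_user(event_xyz, hands):
    """hands: dict of user_id -> list of (x, y, z) hand positions."""
    def min_dist(positions):
        return min(math.dist(event_xyz, p) for p in positions)
    return min(hands, key=lambda user: min_dist(hands[user]))

hands = {
    "user-a": [(1.0, 0.4, 1.1), (1.2, 0.5, 1.0)],
    "user-b": [(2.3, 0.9, 1.2)],
}
print(closest_user((1.1, 0.45, 1.05), hands))  # -> "user-a"
```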
  • Patent number: 11263583
    Abstract: Load cells measure the weight of items on a shelf. Weight changes occur as items are picked from or placed to the fixture. Information about these weight changes is used to determine an estimated location on the shelf of a weight change. Hypotheses are generated using information about where particular types of items are stowed, the weights of those particular types of items, information about the weight changes, and the estimated locations of the weight changes. A model is used to produce confidence values in the hypotheses based on a change in weight measured at a first side and a change in weight measured at a second side of the shelf. A hypothesis with a confidence value that exceeds a threshold may be selected and used to determine interaction data indicative of a quantity picked or placed, type of item, and location on the shelf.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: March 1, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Liefeng Bo, Nikhil Chacko, Robert Crandall, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gopi Prashanth Gopal, Gerard Guy Medioni, Paul Eugene Munger, Kushagra Srivastava
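The abstract for patent 11,263,583 above describes estimating where along the shelf a weight change occurred from the changes measured at the shelf's two sides. A simple moment-balance sketch of that estimate follows; the shelf width and load-cell readings are illustrative assumptions.

```python
# Sketch: estimate where along a shelf a weight change occurred from the
# deltas measured by load cells at the two sides (a simple moment balance).

def change_position(delta_left_g: float, delta_right_g: float,
                    shelf_width_m: float = 0.9) -> float:
    """Distance from the left edge at which the change was centered."""
    total = delta_left_g + delta_right_g
    if total == 0:
        raise ValueError("no net weight change to localize")
    return shelf_width_m * delta_right_g / total

# An item removed near the right side shows up mostly on the right load cell.
print(change_position(delta_left_g=-80.0, delta_right_g=-275.0))  # ~0.70 m
```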
  • Patent number: 11195140
    Abstract: A user may pick an item from a first inventory location, such as in a lane on a shelf, and may return it to another location that is assigned to another type of item. Described are techniques to generate tidiness data that is indicative of whether an item has been returned to an inventory location assigned to that type of item. As items are taken, information about the type of item taken and its weight is stored. When an increase in weight at a lane indicates a return of an item to the lane, the weight of the return is compared to the stored weight of the items previously taken by a user. If the weights match to within a threshold value, the type of item associated with the stored weight is deemed to be returned and tidiness data indicative of a tidy return of the item to its appointed lane may be generated.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: December 7, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Paul Eugene Munger, Jayakrishnan Kumar Eledath, Daniel Bibireata, Gopi Prashanth Gopal, Liefeng Bo
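The abstract for patent 11,195,140 above describes comparing the weight of a returned item to the stored weights of items the user previously took, and declaring a tidy return when they match within a threshold. A minimal sketch of that comparison follows; the tolerance and the per-user ledger structure are illustrative assumptions.

```python
# Sketch: decide whether a returned item matches something the user took
# earlier by comparing weights within a tolerance.

def match_return(return_weight_g: float, taken: list[tuple[str, float]],
                 tolerance_g: float = 5.0):
    """taken: (item_type, recorded_weight_g) pairs previously picked by the user."""
    for item_type, weight in taken:
        if abs(return_weight_g - weight) <= tolerance_g:
            return item_type          # tidy return of this item type
    return None                       # weight matches nothing the user took

taken = [("soda-can", 355.2), ("granola-bar", 42.0)]
print(match_return(354.0, taken))   # -> "soda-can"
print(match_return(200.0, taken))   # -> None (possible untidy return)
```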
  • Patent number: 11093785
    Abstract: This disclosure describes techniques for updating planogram data associated with a facility. The planogram may indicate, for different shelves and other inventory locations within the facility, which items are on which shelves. For example, the planogram data may indicate that a particular item is located on a particular shelf. Therefore, when a system identifies that a user has taken an item from that shelf, the system may update a virtual cart of that user to indicate addition of the particular item. In some instances, however, a new item may be stocked on the example shelf instead of a previous item. The techniques described herein may use sensor data generated in the facility to identify this change and update the planogram data to indicate an association between the shelf and the new item.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: August 17, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Behjat Siddiquie, Petko Tsonev, Claire Law, Connor Spencer Blue Worley, Jue Wang, Bharat Singh, Hue Tuan Thi, Jayakrishnan Kumar Eledath, Nishitkumar Ashokkumar Desai