Patents by Inventor Sumedh Vilas Datar
Sumedh Vilas Datar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220414924
Abstract: A device configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
Type: Application
Filed: June 29, 2021
Publication date: December 29, 2022
Inventors: Sailesh Bharathwaaj Krishnamurthy, Sumedh Vilas Datar, Shantanu Yadunath Thakurdesai, Crystal Maung
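The pose-based camera selection described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the camera names, viewing directions, and the dot-product selection rule are hypothetical stand-ins for choosing cameras based on an item's pose.

```python
import numpy as np

# Hypothetical unit viewing directions for cameras around the platform.
cameras = {
    "cam_front": np.array([0.0, -1.0, 0.0]),
    "cam_side":  np.array([1.0,  0.0, 0.0]),
    "cam_top":   np.array([0.0,  0.0, -1.0]),
}

def best_camera(object_normal):
    """Pick the camera whose viewing direction is most opposed to the
    item's facing normal, i.e. the one that sees that face most directly."""
    return min(cameras, key=lambda name: cameras[name] @ object_normal)

# An item whose labeled face points along +y is seen best by cam_front.
choice = best_camera(np.array([0.0, 1.0, 0.0]))
```

In practice the object pose would come from the depth image rather than being given, and more than one camera could be selected per item.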
-
Publication number: 20220414899
Abstract: A device configured to identify a first pixel location within a first plurality of pixels corresponding with an item in a first image and to apply a first homography to the first pixel location to determine a first (x,y) coordinate. The device is further configured to identify a second pixel location within a second plurality of pixels corresponding with the item in a second image and to apply a second homography to the second pixel location to determine a second (x,y) coordinate. The device is further configured to determine that the distance between the first (x,y) coordinate and the second (x,y) coordinate is less than or equal to the distance threshold value, to associate the first plurality of pixels and the second plurality of pixels with a cluster for the item, and to output the first plurality of pixels and the second plurality of pixels.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Crystal Maung, Shahmeer Ali Mirza
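The homography step in this abstract can be sketched as below: a 3x3 homography maps a pixel location from each camera into a shared plane coordinate, and two detections are clustered when their mapped coordinates fall within a distance threshold. The matrices, pixel locations, and threshold here are hypothetical; identity homographies stand in for real calibrated ones.

```python
import numpy as np

def apply_homography(H, pixel):
    """Map a (u, v) pixel location to an (x, y) plane coordinate
    using a 3x3 homography matrix H (homogeneous coordinates)."""
    u, v = pixel
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical homographies for two cameras viewing the same plane.
H1 = np.eye(3)
H2 = np.eye(3)

p1 = apply_homography(H1, (100, 200))   # pixel from the first image
p2 = apply_homography(H2, (101, 199))   # pixel from the second image

# Cluster the two detections if their plane coordinates are close.
distance = np.hypot(p1[0] - p2[0], p1[1] - p2[1])
DISTANCE_THRESHOLD = 5.0
same_item = distance <= DISTANCE_THRESHOLD
```

With real calibration, H1 and H2 would differ per camera, but the mapping and threshold test take the same form.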
-
Publication number: 20220414375
Abstract: A device configured to capture a first image of an item on a platform using a camera and to determine a first number of pixels in the first image that corresponds with the item. The device is further configured to capture a first depth image of an item on the platform using a three-dimensional (3D) sensor and to determine a second number of pixels within the first depth image that corresponds with the item. The device is further configured to determine that the difference between the first number of pixels in the first image and the second number of pixels in the first depth image is less than the difference threshold value, to extract the plurality of pixels corresponding with the item in the first image from the first image to generate a second image, and to output the second image.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sailesh Bharathwaaj Krishnamurthy, Sumedh Vilas Datar, Crystal Maung, Shantanu Yadunath Thakurdesai
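The pixel-count cross-check in this abstract can be illustrated with a short sketch. The masks, sizes, and threshold are hypothetical; the point is only the shape of the logic: count item pixels in both modalities, compare against a threshold, then crop the item pixels when the counts agree.

```python
import numpy as np

def item_pixel_count(mask):
    """Count pixels labeled as belonging to the item."""
    return int(np.count_nonzero(mask))

# Hypothetical binary masks for the item in the camera image and the
# depth image (True where a pixel belongs to the item).
camera_mask = np.zeros((8, 8), dtype=bool)
camera_mask[2:6, 2:6] = True            # 16 item pixels

depth_mask = np.zeros((8, 8), dtype=bool)
depth_mask[2:6, 2:5] = True             # 12 item pixels

DIFF_THRESHOLD = 10
counts_agree = abs(item_pixel_count(camera_mask)
                   - item_pixel_count(depth_mask)) < DIFF_THRESHOLD

# When the counts agree, crop the item pixels into a second image.
if counts_agree:
    ys, xs = np.nonzero(camera_mask)
    cropped = camera_mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```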
-
Publication number: 20220414379
Abstract: A device configured to capture a first overhead depth image of the platform using a three-dimensional (3D) sensor at a first time instance and a second overhead depth image of a first object using the 3D sensor at a second time instance. The device is further configured to determine that a first portion of the first object is within a region-of-interest and a second portion of the first object is outside the region-of-interest in the second overhead depth image. The device is further configured to capture a third overhead depth image of a second object placed on the platform using the 3D sensor at a third time instance. The device is further configured to capture a first image of the second object using a camera in response to determining that the first object is outside of the region-of-interest and the second object is within the region-of-interest for the platform.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza, Sumedh Vilas Datar
-
Publication number: 20220414374
Abstract: A device configured to receive a first encoded vector and receive one or more feature descriptors for a first object. The device is further configured to remove one or more encoded vectors from an encoded vector library that are not associated with the one or more feature descriptors and to identify a second encoded vector in the encoded vector library that most closely matches the first encoded vector based on the numerical values within the first encoded vector. The device is further configured to identify a first item identifier in the encoded vector library that is associated with the second encoded vector and to output the first item identifier.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sailesh Bharathwaaj Krishnamurthy, Sumedh Vilas Datar, Crystal Maung, Tejas Pradip Rode, Shantanu Yadunath Thakurdesai, Shahmeer Ali Mirza
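The filter-then-match flow in this abstract can be sketched as follows. The library entries, item identifiers, descriptor sets, and Euclidean-distance matching are all hypothetical stand-ins; the abstract does not specify the distance metric.

```python
import numpy as np

# Hypothetical encoded vector library: each entry pairs an item
# identifier with an encoded vector and coarse feature descriptors.
library = [
    {"item_id": "soda_can",     "vector": np.array([0.9, 0.1, 0.0]),
     "descriptors": {"cylindrical"}},
    {"item_id": "candy_bar",    "vector": np.array([0.1, 0.9, 0.0]),
     "descriptors": {"rectangular"}},
    {"item_id": "water_bottle", "vector": np.array([0.8, 0.2, 0.1]),
     "descriptors": {"cylindrical"}},
]

def identify(query_vector, query_descriptors):
    """Remove library entries whose descriptors do not overlap the
    object's, then return the item identifier whose encoded vector is
    closest to the query vector."""
    candidates = [e for e in library if query_descriptors & e["descriptors"]]
    best = min(candidates,
               key=lambda e: np.linalg.norm(e["vector"] - query_vector))
    return best["item_id"]

match = identify(np.array([0.88, 0.12, 0.01]), {"cylindrical"})
```

Filtering by descriptors first shrinks the search so the nearest-vector lookup runs over far fewer candidates.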
-
Publication number: 20220414378
Abstract: A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on a platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sumedh Vilas Datar, Tejas Pradip Rode, Sailesh Bharathwaaj Krishnamurthy, Crystal Maung
-
Publication number: 20220414900
Abstract: A device configured to detect a triggering event corresponding with a user placing a first item on the platform, to capture a first image of the first item on the platform using a camera, and to input the first image into a machine learning model that is configured to output a first encoded vector based on features of the first item that are present in the first image. The device is further configured to identify a second encoded vector in an encoded vector library that most closely matches the first encoded vector and to identify a first item identifier in the encoded vector library that is associated with the second encoded vector. The device is further configured to identify the user, to identify an account that is associated with the user, and to associate the first item identifier with the account of the user.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Crystal Maung, Shahmeer Ali Mirza
-
Publication number: 20220414399
Abstract: A system for refining an item identification model detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on a platform. The system captures images of the item. The system extracts a set of features from at least one of the images. The system identifies the item based on the set of features. The system receives an indication that the item is not identified correctly. The system receives an identifier of the item. The system identifies the item based on the identifier of the item. The system feeds the identifier of the item and the images to the item identification model. The system retrains the item identification model to learn to associate the item to the images. The system updates the set of features based on the determined association between the item and the images.
Type: Application
Filed: November 19, 2021
Publication date: December 29, 2022
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shantanu Yadunath Thakurdesai, Shahmeer Ali Mirza
-
Publication number: 20220327837
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
Type: Application
Filed: June 23, 2022
Publication date: October 13, 2022
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
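The wrist-aggregation step in this abstract can be sketched as below. The per-frame detections, the mean as the aggregation method, and the shelf bounding box are all hypothetical; the abstract does not specify how the set of pixel positions is aggregated.

```python
# Hypothetical per-frame wrist detections (pixel coordinates) while a
# person is within the threshold distance of the rack.
wrist_positions = [(310, 118), (314, 122), (312, 120), (316, 124)]

# Aggregate the per-frame positions into a single wrist position,
# here with a simple mean (one plausible aggregation).
n = len(wrist_positions)
aggregated = (sum(u for u, _ in wrist_positions) / n,
              sum(v for _, v in wrist_positions) / n)

# Hypothetical pixel-space bounding box of one shelf of the rack.
shelf_box = (300, 100, 340, 140)   # (u_min, v_min, u_max, v_max)
u, v = aggregated
shelf_interaction = (shelf_box[0] <= u <= shelf_box[2]
                     and shelf_box[1] <= v <= shelf_box[3])
# When shelf_interaction is True, a trigger signal would be emitted.
```

Aggregating over several frames smooths out single-frame detection noise before the shelf test fires a trigger.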
-
Publication number: 20220327836
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates images of the items stored on the rack. A tracking subsystem receives image frames of the images and, over a period of time, tracks a pixel position of the wrist of a person interacting with items stored on the rack. The tracking subsystem determines whether an item was interacted with by a person and, if so, the identified item is assigned to the person.
Type: Application
Filed: June 15, 2022
Publication date: October 13, 2022
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Patent number: 11423657
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
Type: Grant
Filed: January 28, 2021
Date of Patent: August 23, 2022
Assignee: 7-ELEVEN, INC.
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Patent number: 11403852
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem receives image frames of the angled-view images. The tracking subsystem detects that a trigger event has occurred. A set of one or more image frames from the image feed is determined that is associated with the detected trigger event. A region-of-interest of the image frame is determined based on the pixel position of the wrist of the person. The region-of-interest includes a subset of the pixels of the image frame. A first item is identified in the determined region-of-interest using an object detection algorithm. The identified first item is assigned to the person.
Type: Grant
Filed: November 25, 2020
Date of Patent: August 2, 2022
Assignee: 7-ELEVEN, INC.
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
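The wrist-centered region-of-interest in this abstract can be sketched as a simple crop around the wrist pixel position. The frame contents, wrist coordinates, and window size are hypothetical; the real system would run a trained object detector on the cropped region rather than the toy array used here.

```python
import numpy as np

def wrist_roi(frame, wrist, half=2):
    """Crop a square region-of-interest around the wrist pixel
    position, clamped to the frame bounds."""
    h, w = frame.shape[:2]
    u, v = wrist
    u0, u1 = max(u - half, 0), min(u + half + 1, w)
    v0, v1 = max(v - half, 0), min(v + half + 1, h)
    return frame[v0:v1, u0:u1]

frame = np.arange(100).reshape(10, 10)   # stand-in for an image frame
roi = wrist_roi(frame, wrist=(4, 7))
# An object detection algorithm would then be run on `roi` to
# identify the item the person interacted with.
```

Restricting detection to a small window around the wrist keeps the search focused on where the interaction actually happened.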
-
Publication number: 20210287016
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a structure configured to store items. The image sensor generates angled-view images of the items stored on the structure. A tracking subsystem determines that a person has interacted with the structure and receives image frames of the angled-view images. The tracking subsystem determines that the person interacted with a first item stored on the structure. A first image is identified associated with a first time before the person interacted with the first item, and a second image is identified associated with a second time after the person interacted with the first item. If it is determined, based on a comparison of the first and second images, that the item was removed from the structure, the first item is assigned to the person.
Type: Application
Filed: June 2, 2021
Publication date: September 16, 2021
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Patent number: 11113541
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person has interacted with the rack and receives image frames of the angled-view images. The tracking subsystem determines that the person interacted with a first item stored on the rack. A first image is identified associated with a first time before the person interacted with the first item, and a second image is identified associated with a second time after the person interacted with the first item. If it is determined, based on a comparison of the first and second images, that the item was removed from the rack, the first item is assigned to the person.
Type: Grant
Filed: November 25, 2020
Date of Patent: September 7, 2021
Assignee: 7-ELEVEN, INC.
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Publication number: 20210158052
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
Type: Application
Filed: January 28, 2021
Publication date: May 27, 2021
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Patent number: 11003918
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
Type: Grant
Filed: November 25, 2020
Date of Patent: May 11, 2021
Assignee: 7-Eleven, Inc.
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Publication number: 20210124943
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
Type: Application
Filed: November 25, 2020
Publication date: April 29, 2021
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Publication number: 20210124944
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person has interacted with the rack and receives image frames of the angled-view images. The tracking subsystem determines that the person interacted with a first item stored on the rack. A first image is identified associated with a first time before the person interacted with the first item, and a second image is identified associated with a second time after the person interacted with the first item. If it is determined, based on a comparison of the first and second images, that the item was removed from the rack, the first item is assigned to the person.
Type: Application
Filed: November 25, 2020
Publication date: April 29, 2021
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Publication number: 20210124942
Abstract: An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem receives image frames of the angled-view images. The tracking subsystem detects that a trigger event has occurred. A set of one or more image frames from the image feed is determined that is associated with the detected trigger event. A region-of-interest of the image frame is determined based on the pixel position of the wrist of the person. The region-of-interest includes a subset of the pixels of the image frame. A first item is identified in the determined region-of-interest using an object detection algorithm. The identified first item is assigned to the person.
Type: Application
Filed: November 25, 2020
Publication date: April 29, 2021
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
-
Publication number: 20210124947
Abstract: An object tracking system that includes a sensor that is configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor. The first frame includes a region-of-interest (ROI) marker within the space. The system is further configured to identify pixel locations within the first frame corresponding with the ROI marker and to define a zone for subsequent frames from the sensor corresponding with the pixel locations. The system is further configured to receive a second frame from the sensor, to detect an object within the zone, and to identify the object. The system is further configured to determine a person is within the second frame and to modify a digital cart that is associated with the person based on the identified object.Type: Application
Filed: November 25, 2020
Publication date: April 29, 2021
Inventors: Sumedh Vilas Datar, Sailesh Bharathwaaj Krishnamurthy, Shahmeer Ali Mirza
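The marker-to-zone step in this abstract can be sketched as follows. The marker pixel locations and the bounding-box zone definition are hypothetical; the abstract does not say how the zone is derived from the marker pixels, so a bounding box is used as one plausible choice.

```python
# Hypothetical pixel locations of a region-of-interest (ROI) marker
# detected in the first frame from the sensor.
marker_pixels = [(40, 60), (40, 120), (160, 60), (160, 120)]

# Define the zone for subsequent frames as the bounding box of the
# marker pixel locations.
us = [u for u, _ in marker_pixels]
vs = [v for _, v in marker_pixels]
zone = (min(us), min(vs), max(us), max(vs))

def in_zone(pixel, zone):
    """True when a detected object's pixel location falls inside the zone."""
    u, v = pixel
    return zone[0] <= u <= zone[2] and zone[1] <= v <= zone[3]

# In a later frame, an object detected at (100, 90) lies in the zone,
# so it would be identified and used to update the person's digital cart.
hit = in_zone((100, 90), zone)
```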