Patents by Inventor Daniel Bibireata

Daniel Bibireata has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11922728
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras that include the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: March 5, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
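The abstract above describes cameras emitting candidate actor associations with confidence scores and a server weighting them by view quality. A minimal illustrative sketch of that voting idea, not the patented implementation (data shapes and the quality-weighting rule are assumptions):

```python
# Each camera reports, for the pixel where an event occurred, candidate
# actors with confidence scores; the server weights each camera's votes by
# that camera's view quality and picks the highest-scoring actor.
from collections import defaultdict

def associate_actor(camera_reports):
    """camera_reports: list of dicts with keys 'view_quality' (0..1) and
    'candidates' (list of (actor_id, confidence) pairs)."""
    scores = defaultdict(float)
    for report in camera_reports:
        for actor_id, confidence in report["candidates"]:
            # Weight each candidate's confidence by the camera's view quality.
            scores[actor_id] += report["view_quality"] * confidence
    return max(scores, key=scores.get)

reports = [
    {"view_quality": 0.9, "candidates": [("actor_a", 0.8), ("actor_b", 0.3)]},
    {"view_quality": 0.4, "candidates": [("actor_b", 0.9)]},
]
print(associate_actor(reports))  # actor_a: 0.72 beats actor_b: 0.27 + 0.36 = 0.63
```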
  • Patent number: 11851279
    Abstract: Described is a system and method for collecting item and user information and utilizing that information to determine materials handling facility patterns, item trends and inventory location trends. User monitoring data for users located in a materials handling facility may include information identifying inventory locations approached by users, gaze directions of users, user dwell times, an identification of items picked by users, an identification of items placed by users, and/or in-transit times for items. This information may be aggregated and processed to determine materials handling facility patterns, item trends and/or inventory location trends.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ammar Chinoy, Sudarshan Narasimha Raghavan, Emilio Ian Maldonado, Daniel Bibireata, Nishitkumar Ashokkumar Desai
  • Publication number: 20230136672
    Abstract: A model management system performs error analysis on results predicted by a machine learning model. The model management system identifies an incorrectly classified image output by a machine learning model and identifies, using the Neural Template Matching (NTM) algorithm, an additional image correlated to the selected image. The system outputs correlated images based on a given image and a selection by a user, through a user interface, of a region of interest (ROI) of the given image. The region is defined by a bounding polygon input, and the correlated images include features correlated to the features within the ROI. The system prompts a task associated with the additional image. The system receives a response that includes an indication that the additional image is incorrectly labeled and includes a replacement label, and instructs that the machine learning model be retrained using an updated training dataset that includes the replacement label.
    Type: Application
    Filed: October 21, 2022
    Publication date: May 4, 2023
    Inventors: Mark William Sabini, Kai Yang, Andrew Yan-Tak Ng, Daniel Bibireata, Dillon Laird, Whitney Blodgett, Yan Liu, Yazhou Cao, Yuxiang Zhang, Gregory Diamos, YuQing Zhou, Sanjay Boddhu, Quinn Killough, Shankaranand Jagadeesan, Camilo Zapata, Sebastian Rodriguez
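The relabel-and-retrain loop in this abstract can be sketched as follows. This is an illustrative outline, not the patented NTM implementation: `find_correlated`, `review`, and `retrain` are assumed stand-ins for the NTM correlation search, the human labeling task, and model training.

```python
# Given a known misclassification, review correlated images, apply any
# replacement labels, and trigger retraining only if labels changed.
def error_analysis_pass(dataset, misclassified_id, find_correlated, review, retrain):
    """dataset: {image_id: label}. review(image_id) returns a replacement
    label, or None if the existing label is correct."""
    changed = False
    for image_id in find_correlated(misclassified_id):
        replacement = review(image_id)  # None means the label stands
        if replacement is not None and dataset[image_id] != replacement:
            dataset[image_id] = replacement
            changed = True
    if changed:
        retrain(dataset)
    return changed

labels = {"img1": "scratch", "img2": "scratch", "img3": "ok"}
retrained = []
changed = error_analysis_pass(
    labels, "img1",
    find_correlated=lambda _: ["img2", "img3"],        # stub NTM lookup
    review=lambda i: "dent" if i == "img2" else None,  # stub human task
    retrain=retrained.append,
)
```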
  • Patent number: 11494830
    Abstract: Described is a multiple-camera system and process for determining an item involved in an event. For example, when a user picks an item or places an item at an inventory location, image information for the item may be obtained and processed to identify the item involved in the event and associate that item with the user.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: November 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Hao Jiang, Yasser Baseer Asmi, Nishitkumar Ashokkumar Desai, Emilio Ian Maldonado, Ammar Chinoy, Daniel Bibireata, Sudarshan Narasimha Raghavan
  • Patent number: 11482045
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras that include the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: October 25, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11468698
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras that include the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaechul Kim, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Kartik Muktinutalapati, Shaonan Zhang, Hoi Cheung Pang, Dilip Kumar, Kushagra Srivastava, Gerard Guy Medioni, Daniel Bibireata
  • Patent number: 11468681
    Abstract: Where an event is determined to have occurred at a location within a vicinity of a plurality of actors, imaging data captured using cameras that include the location within their fields of view is processed using one or more machine learning systems or techniques operating on the cameras to determine which of the actors is most likely associated with the event. For each relevant pixel of each image captured by a camera, the camera returns a set of vectors extending to pixels of body parts of actors who are most likely to have been involved with an event occurring at the relevant pixel, along with a measure of confidence in the respective vectors. A server receives the sets of vectors from the cameras, determines which of the images depicted the event in a favorable view, based at least in part on the quality of such images, and selects one of the actors as associated with the event accordingly.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dilip Kumar, Jaechul Kim, Kushagra Srivastava, Nishitkumar Ashokkumar Desai, Jayakrishnan Kumar Eledath, Gerard Guy Medioni, Daniel Bibireata
  • Publication number: 20220300855
    Abstract: A model management system adaptively refines a training dataset for more effective visual inspection. The system trains a machine learning model using the initial training dataset and sends the trained model to a client for deployment. The deployment process generates outputs that are sent back to the system. The system determines that the performance of predictions for noisy data points is inadequate and determines a cause of failure based on a mapping of the noisy data point to a distribution generated for the training dataset across multiple dimensions. The system determines the cause of failure based on an attribute of the noisy data point that deviates from the distribution of the training dataset and refines the training dataset based on the identified cause of failure. The system retrains the machine learning model with the refined training dataset and sends the retrained model back to the client for re-deployment.
    Type: Application
    Filed: September 9, 2021
    Publication date: September 22, 2022
    Inventors: Daniel Bibireata, Andrew Yan-Tak Ng, Pingyang He, Zeqi Qiu, Camilo Iral, Mingrui Zhang, Aldrin Leal, Junjie Guan, Ramesh Sampath, Dillion Anthony Laird, Yu Qing Zhou, Juan Camilo Fernancez, Camilo Zapata, Sebastian Rodriguez, Cristobal Silva, Sanjay Bodhu, Mark William Sabini, Seshu Reddy, Kai Yang, Yan Liu, Whit Blodgett, Ankur Rawat, Francisco Matias Cuenca-Acuna, Quinn Killough
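The failure-cause step above (an attribute of a noisy data point deviating from the training-set distribution) can be sketched with a simple per-dimension outlier test. The attribute names and the mean ± k·std rule are illustrative assumptions, not the patented method:

```python
# Flag which attributes of a noisy data point fall outside the training
# distribution, modeled per dimension as mean +/- k standard deviations.
from statistics import mean, stdev

def failure_causes(training_points, noisy_point, k=3.0):
    """training_points: list of {attr: value} dicts. Returns the attributes
    of noisy_point that deviate from the training distribution."""
    causes = []
    for attr in noisy_point:
        values = [p[attr] for p in training_points]
        mu, sigma = mean(values), stdev(values)
        if abs(noisy_point[attr] - mu) > k * sigma:
            causes.append(attr)
    return causes

train = [{"brightness": b, "blur": 0.1 + 0.01 * i} for i, b in enumerate(range(95, 106))]
print(failure_causes(train, {"brightness": 180.0, "blur": 0.12}))  # ['brightness']
```

A flagged attribute (here, an over-bright image) would then guide refinement, e.g. adding similar examples to the training set before retraining.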
  • Patent number: 11412185
    Abstract: Sensors in a facility generate sensor data associated with a region of the facility, which can be used to determine a 3D location of an object in the facility. Some sensors may sense overlapping regions of the facility. For example, a first sensor may generate data associated with a first region of the facility, while a second sensor may generate data associated with a second region of the facility that partially overlaps the first region. Sensors may fail at times as determined from sensor output data or status data. In response to identifying a failed sensor, an undetected region corresponding to the failed sensor is identified, as well as a substitute sensor that partially senses the undetected region. Sensor data from the substitute sensor, such as 2D data, is acquired and used to estimate a 3D location of an object in the undetected region.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: August 9, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Emilio Ian Maldonado, Daniel Bibireata, Nishitkumar Ashokkumar Desai, Yasser Baseer Asmi, Xiaofeng Ren, Jaechul Kim
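The substitution step this abstract describes (finding a sensor that partially covers a failed sensor's region) can be sketched as a coverage-overlap search. A hypothetical sketch only: modeling coverage as sets of grid-cell IDs, and all names, are assumptions for illustration.

```python
# When a sensor fails, pick the healthy sensor whose coverage overlaps the
# most of the now-undetected region, and report which cells it can cover.
def pick_substitute(failed_id, coverage, healthy_ids):
    """coverage: {sensor_id: set of grid cells sensed by that sensor}."""
    undetected = coverage[failed_id]
    best = max(healthy_ids, key=lambda s: len(coverage[s] & undetected))
    return best, coverage[best] & undetected

coverage = {
    "cam1": {(0, 0), (0, 1), (1, 0)},
    "cam2": {(0, 1), (1, 1)},
    "cam3": {(0, 0), (1, 0), (1, 1)},
}
sub, covered = pick_substitute("cam1", coverage, ["cam2", "cam3"])
print(sub)  # cam3 covers two of cam1's cells; cam2 covers only one
```

In the patent's setting, the substitute's 2D data would then be used to estimate 3D locations in the gap; this sketch only shows the selection step.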
  • Patent number: 11263795
    Abstract: Described are systems and techniques to generate data for display to present visualizations of data acquired from sensors in a facility. The data visualizations may be used to develop, configure, administer, or otherwise support operation of the facility. In one implementation, the visualization may include a view incorporating aggregated images acquired from multiple cameras, depth data, tracking information about objects in the facility, and so forth. An analyst may use the data visualization to determine the occurrence of an action in the facility, such as a pick of an item, a place of an item, what item was involved with an action, what user was involved with the action, and so forth. Based on the information presented by the data visualization, changes may be made to data processing parameters.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: March 1, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Nishitkumar Ashokkumar Desai, Frank Florian Liberato, Jr., I-Hung Wang, Daniel Bibireata, Muralidhar Koka
  • Patent number: 11195140
    Abstract: A user may pick an item from a first inventory location, such as a lane on a shelf, and may return it to another location that is assigned to another type of item. Described are techniques to generate tidiness data that is indicative of whether an item has been returned to an inventory location assigned to that type of item. As items are taken, information about the type of item taken and its weight is stored. When an increase in weight at a lane indicates a return of an item to the lane, the weight of the return is compared to the stored weights of the items previously taken by a user. If the weights match to within a threshold value, the type of item associated with the stored weight is deemed to be returned, and tidiness data indicative of a tidy return of the item to its appointed lane may be generated.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: December 7, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Paul Eugene Munger, Jayakrishnan Kumar Eledath, Daniel Bibireata, Gopi Prashanth Gopal, Liefeng Bo
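The weight-matching rule this abstract describes maps directly to a small comparison function. A minimal sketch under stated assumptions: the function name and the 5 g threshold are illustrative, not from the patent.

```python
# Match a returned item's weight against the stored weights of items the
# user previously took; a match within the threshold implies a tidy return.
def match_return(return_weight_g, taken_items, threshold_g=5.0):
    """taken_items: list of (item_type, weight_g) previously picked by the
    user. Returns the best-matching item type, or None (an untidy return)
    if no stored weight matches within the threshold."""
    best = None
    best_delta = threshold_g
    for item_type, weight_g in taken_items:
        delta = abs(return_weight_g - weight_g)
        if delta <= best_delta:
            best, best_delta = item_type, delta
    return best

taken = [("soda_can", 355.0), ("soup_can", 305.0)]
print(match_return(353.2, taken))  # within 5 g of the soda can -> soda_can
print(match_return(290.0, taken))  # no stored weight within 5 g -> None
```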
  • Patent number: 10963949
    Abstract: Described is a multiple-camera system and process for determining an item involved in an event. For example, when a user picks an item or places an item at an inventory location, image information for the item may be obtained and processed to identify the item involved in the event and associate that item with the user.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: March 30, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Hao Jiang, Yasser Baseer Asmi, Nishitkumar Ashokkumar Desai, Emilio Ian Maldonado, Ammar Chinoy, Daniel Bibireata, Sudarshan Narasimha Raghavan
  • Patent number: 10963835
    Abstract: Described is a multiple-camera system for use in capturing images of users within a materials handling facility and processing those images to monitor the movement of users. For large materials handling facilities, a large number of cameras may be required to monitor the facility. Processing of the data generated from a large number of cameras becomes difficult. The implementations described herein include a hierarchy that allows image data from any number of cameras within a materials handling facility to be processed without substantially increasing the processing time needed or sacrificing processing capabilities.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: March 30, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Frank Florian Liberato, Daniel Bibireata, Muralidhar Koka, Yasser Baseer Asmi, Nishitkumar Ashokkumar Desai
  • Patent number: 10873726
    Abstract: During operation, a facility may utilize many sensors, such as cameras, to generate sensor data. The sensor data may be processed by an inventory management system to track objects, determine the occurrence of events at the facility, and so forth. At any given time, some of these sensors may fail to provide timely data, may fail to provide any data, may generate inaccurate data, and so forth. Described are techniques to determine failure of sensors and adjust operation of the inventory management system to maintain operability during sensor failure.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: December 22, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Emilio Ian Maldonado, Daniel Bibireata, Nishitkumar Ashokkumar Desai, Yasser Baseer Asmi, Xiaofeng Ren, Jaechul Kim
  • Patent number: 10679177
    Abstract: Described is a multiple-camera system and process for detecting a user within a materials handling facility and tracking a position of the user as the user moves through the materials handling facility. In one implementation, a plurality of depth sensing cameras are positioned above a surface of the materials handling facility and oriented to obtain an overhead view of the surface of the materials handling facility, along with any objects (e.g., users) on the surface of the materials handling facility. The depth information from the cameras may be utilized to detect objects on the surface of the materials handling facility, track a movement of those objects and determine if those objects are users.
    Type: Grant
    Filed: March 25, 2015
    Date of Patent: June 9, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Frank Florian Liberato, Jr., Jaechul Kim, Daniel Bibireata, Chen Xu, Xiaofeng Ren
  • Patent number: 10552750
    Abstract: Described is a multiple-camera system and process for disambiguating between multiple users and identifying which of the multiple users performed an event. For example, when an event is detected, user patterns near the location of the event are determined, along with touch points at the location of the event. User pattern orientation and/or arm trajectories between the event location and the user patterns may be determined and processed to disambiguate between multiple users and determine which user pattern is involved in the event.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Sudarshan Narasimha Raghavan, Emilio Ian Maldonado, David Allen Smith, Min Xu, Nishitkumar Ashokkumar Desai, Daniel Bibireata, Kevin Kar Wai Lai, Pahal Kamlesh Dalal
  • Patent number: 10475185
    Abstract: Described is a multiple-camera system and process for identifying a user that performed an event and associating that user with the event. For example, when an event is detected, user patterns near the location of the event are determined, along with touch points at the location of the event. User pattern orientation and/or arm trajectories between the event location and the user pattern may be determined and processed to link the user pattern to the event, thereby confirming the association between the event and the user.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: November 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Sudarshan Narasimha Raghavan, Emilio Ian Maldonado, Dilip Kumar, Daniel Bibireata, Ammar Chinoy, Nishitkumar Ashokkumar Desai
  • Patent number: 10438277
    Abstract: Described is a multiple-camera system and process for determining an item involved in an event. For example, when a user picks an item or places an item at an inventory location, image information for the item may be obtained and processed to identify the item involved in the event and associate that item with the user.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: October 8, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Hao Jiang, Yasser Baseer Asmi, Nishitkumar Ashokkumar Desai, Emilio Ian Maldonado, Ammar Chinoy, Daniel Bibireata, Sudarshan Narasimha Raghavan
  • Patent number: 10332089
    Abstract: Frames of sensor data may be obtained from many sensors arranged throughout a facility. These frames may be time synchronized to support further processing. For example, frames containing image data obtained at about the same time from many cameras within the facility may be used to create an aggregate or “stitched” view of the facility at that time. The synchronization may involve storing the frames from several sensors in buffers. A time window may be specified and used in conjunction with timestamps of the frames to select a set of sensor data from the buffers that are deemed to be synchronized data. The synchronized data may then be used for further processing.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: June 25, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Yasser Baseer Asmi, Frank Florian Liberato, Jr., Daniel Bibireata, Bradley David Volen, Prafulla Jinendra Masalkar, Todd Nelson Schoepflin
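The buffering-and-window scheme in this abstract can be sketched as follows. An illustrative outline only: the data layout and the pick-closest-in-window rule are assumptions, not the patented implementation.

```python
# Each sensor buffers (timestamp, frame) pairs; a time window around a
# target timestamp selects at most one frame per sensor, forming a set of
# frames deemed synchronized.
def synchronized_set(buffers, target_ts, window):
    """buffers: {sensor_id: [(timestamp, frame), ...]}. Returns each
    sensor's frame closest to target_ts within +/- window; sensors with
    no frame in the window are omitted."""
    selected = {}
    for sensor_id, frames in buffers.items():
        in_window = [(abs(ts - target_ts), frame) for ts, frame in frames
                     if abs(ts - target_ts) <= window]
        if in_window:
            selected[sensor_id] = min(in_window)[1]
    return selected

buffers = {
    "camA": [(100.00, "A0"), (100.03, "A1")],
    "camB": [(100.05, "B0")],
    "camC": [(99.80, "C0")],  # stale frame, outside the window
}
print(synchronized_set(buffers, 100.02, window=0.05))
# {'camA': 'A1', 'camB': 'B0'}  (camC has nothing within 50 ms)
```

The synchronized frames could then be stitched into the aggregate facility view the abstract mentions.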
  • Patent number: 10291862
    Abstract: Described is a multiple-camera system for use in capturing images of users within a materials handling facility and processing those images to monitor the movement of users. For large materials handling facilities, a large number of cameras may be required to monitor the facility. Processing of the data generated from a large number of cameras becomes difficult. The implementations described herein include a hierarchy that allows image data from any number of cameras within a materials handling facility to be processed without substantially increasing the processing time needed or sacrificing processing capabilities.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: May 14, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Frank Florian Liberato, Sudarshan Narasimha Raghavan, Emilio Ian Maldonado, Muralidhar Koka, Ammar Chinoy, Daniel Bibireata, Yasser Baseer Asmi, Hao Jiang, Jaechul Kim, Nishitkumar Ashokkumar Desai