Patents by Inventor Christian Lee McDaniel

Christian Lee McDaniel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230252760
    Abstract: A single item image is captured of an item situated within a given zone of a transaction area; each zone for each given item is associated with a plurality of single item images captured by different cameras at different angles and perspectives of the transaction area. The single item images are passed to an existing segmentation Machine-Learning Model (MLM), and accurate masks for the items produced by the existing MLM are retained. A background image of an empty transaction area is obtained; each retained single item image is cropped and superimposed into the background image with one or more different cropped and superimposed single item images, creating a composite multi-item image. The composite multi-item images are labeled to identify the boundaries of each single item image, and the existing segmentation MLM is trained on the composite multi-item and labeled images, producing an enhanced segmentation MLM.
    Type: Application
    Filed: April 26, 2022
    Publication date: August 10, 2023
    Inventors: Stefan Bjelcevic, Hunter Blake Wilson Germundsen, Christian Lee McDaniel, Brent Vance Zucker
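The composite-image step described above can be illustrated with a minimal sketch. This is not the patented implementation; images are modeled as nested lists, and the function name and random-placement strategy are assumptions for illustration only:

```python
import random

def superimpose(background, patches, rng=None):
    """Paste cropped single-item patches onto a copy of an empty-background
    image, returning the composite image plus one (x, y, w, h) boundary
    label per patch for training a segmentation model."""
    rng = rng or random.Random(0)
    h, w = len(background), len(background[0])
    composite = [row[:] for row in background]
    labels = []
    for patch in patches:
        ph, pw = len(patch), len(patch[0])
        y = rng.randrange(0, h - ph + 1)   # random placement within background
        x = rng.randrange(0, w - pw + 1)
        for dy in range(ph):
            for dx in range(pw):
                composite[y + dy][x + dx] = patch[dy][dx]
        labels.append((x, y, pw, ph))      # boundary label for this item
    return composite, labels
```

In practice the patches would come from the masks produced by the existing segmentation MLM, and many such composites would be generated to enlarge the training set.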
  • Publication number: 20230252444
    Abstract: An apparatus is provided that executes instructions to move an item within a scan zone to specific X-Y coordinates of the scan zone. The item is placed on a platform and is rotated 360 degrees at each X-Y coordinate within the scan zone. Item images are captured by cameras at each X-Y coordinate and for each rotation at the corresponding X-Y coordinate. The item images are labeled and retained. The item images are used as input to a Machine-Learning Model (MLM) to train the MLM to recognize item codes for the items when subsequent images are captured for the item during a checkout. In an embodiment, during a checkout, unknown item images are flagged and labeled with the corresponding item code when the corresponding item's barcode is scanned during the checkout; the labeled item images are also retained for training the MLM for item recognition.
    Type: Application
    Filed: April 29, 2022
    Publication date: August 10, 2023
    Inventors: Stefan Bjelcevic, Hunter Blake Wilson Germundsen, Catherine Lee, Christian Lee McDaniel, Brent Vance Zucker
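The capture pattern described above amounts to enumerating every X-Y position crossed with every rotation angle. A minimal sketch (function name and parameters are hypothetical, not taken from the application):

```python
from itertools import product

def capture_schedule(x_coords, y_coords, rotation_step=45):
    """Enumerate every (x, y, angle) at which an image should be captured:
    each X-Y position in the scan zone, rotated through 360 degrees in
    rotation_step increments."""
    angles = range(0, 360, rotation_step)
    return [(x, y, a) for (x, y), a in product(product(x_coords, y_coords), angles)]
```

Each entry in the schedule would correspond to one set of labeled training images captured by the cameras.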
  • Publication number: 20230252343
    Abstract: Multiple images of multiple items are captured of a transaction area during a checkout. The Red-Green-Blue (RGB) data associated with each item image patch is collected across the images and provided as input to a Machine-Learning Model (MLM), which returns an item code for the item. When the given MLM is unable to satisfactorily predict an item code for a given set of image patches, the patches associated with the images are presented to an operator of a checkout, and the operator is asked to scan an item barcode for that item. The patches are labeled within the images with the item code, and additional images of the item are captured and labeled with the item code when the barcode is scanned by the operator. The labeled images are used in a subsequent training session with the MLM to improve its item recognition accuracy for the item.
    Type: Application
    Filed: April 29, 2022
    Publication date: August 10, 2023
    Inventors: Christian Lee McDaniel, Stefan Bjelcevic, Justin Paul, Georgiy Pyantkovs’ky, Brent Vance Zucker
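The fallback logic above (model prediction, operator barcode scan on low confidence, labeling for retraining) can be sketched as follows. The function name, the confidence-threshold convention, and the callback signatures are assumptions for illustration:

```python
def resolve_item(patches, classify, scan_barcode, threshold=0.9):
    """Return an item code for a set of image patches. If the model's
    confidence is below threshold, fall back to an operator barcode scan
    and label the patches for a later retraining session."""
    code, confidence = classify(patches)
    if confidence >= threshold:
        return code, []                        # confident: no new labels needed
    code = scan_barcode()                      # operator scans the item's barcode
    return code, [(p, code) for p in patches]  # labeled patches for retraining
```

The second return value accumulates labeled examples that would feed the subsequent training session described in the abstract.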
  • Publication number: 20230252542
    Abstract: Multiple images of a designated area are taken. The designated area comprises multiple items that are to be identified from the images. Depth information and Red, Green, Blue (RGB) data from each image are processed to create a point cloud for each image of the designated area. The point clouds are patched together or synchronized into a single point cloud for the designated area. Known background pixels associated with backgrounds for each image are removed from the single point cloud. The depth information and RGB data for the single point cloud are clustered together, and bounding boxes are placed around each item in the single point cloud. At least the RGB data for each bounding box is provided to a machine-learning model (MLM), and the MLM returns an item code for the corresponding item. The item codes are fed to a transaction manager for a transaction associated with a customer.
    Type: Application
    Filed: February 4, 2022
    Publication date: August 10, 2023
    Inventors: Stefan Bjelcevic, Christian Lee McDaniel, Brent Vance Zucker
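The background-removal, clustering, and bounding-box steps can be illustrated with a simplified 2-D sketch. Real systems would use a proper density-based clustering algorithm on 3-D points; this greedy proximity grouping is only a stand-in, and all names are hypothetical:

```python
def cluster_items(points, background, radius=1.5):
    """Drop known background points, greedily group remaining 2-D points
    whose distance to any cluster member is within radius, and return one
    (min_x, min_y, max_x, max_y) bounding box per cluster."""
    pts = [p for p in points if p not in background]
    clusters = []
    for p in pts:
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2 for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return [(min(x for x, _ in c), min(y for _, y in c),
             max(x for x, _ in c), max(y for _, y in c)) for c in clusters]
```

Each bounding box would then delimit the RGB data handed to the classification MLM.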
  • Publication number: 20230252609
    Abstract: Depth camera settings are adjusted based on characteristics of items presented in a scan zone and based on depth values returned for the items in depth images. Red-Green-Blue (RGB) images and depth images are captured of items within the scan zone. The quality of the depth values is assessed. Bad depth values are replaced with known good depth values. When the depth values are not replaced, one or more depth value interpolation algorithms are selectively processed to enhance the depth values. The depth values are processed to place each item within a specific location of the scan zone and map that location to pixel values in the corresponding RGB images. The pixel values from the RGB images are passed to a classification model and an item code is returned. The item codes are provided to check out a customer without any scanning of item barcodes of the items.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 10, 2023
    Inventors: Christian Lee McDaniel, Stefan Bjelcevic, Layton Christopher Hayes, Brent Vance Zucker
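As a minimal stand-in for the depth-repair and interpolation step described above (the abstract does not specify the algorithms; nearest-valid-neighbor filling along a scanline is one simple choice, and the function name is hypothetical):

```python
def repair_depth_row(row, invalid=0):
    """Replace invalid depth readings in one scanline with the nearest
    valid value, a simple form of depth-value interpolation."""
    valid = [i for i, v in enumerate(row) if v != invalid]
    if not valid:
        return list(row)  # nothing valid to interpolate from
    return [row[min(valid, key=lambda i: abs(i - j))] for j in range(len(row))]
```

Production systems would typically combine such filtering with camera-setting adjustments and 2-D (not per-row) interpolation.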
  • Publication number: 20230252750
    Abstract: Images of a transaction area comprising items are received during a checkout. Red-Green-Blue (RGB) data and Depth data are received with each image. Pixels captured in any given image by a given camera are pre-aligned with physical locations of a transaction area. Depth data provided by each camera and that camera's pre-alignment to the area are processed to map pixels in each image taken to X-Y coordinates within the area. X-Y coordinates for each item and for each image are grouped together as a single item within the area. RGB data for each image and item is used as a set of image patches per item. For each item, the corresponding patches are passed to a classification Machine-Learning Model (MLM) that returns an item code for each patch. A particular item code is selected for each set and the item codes are used to process the checkout.
    Type: Application
    Filed: April 29, 2022
    Publication date: August 10, 2023
    Inventors: Stefan Bjelcevic, Christian Lee McDaniel, Brent Vance Zucker
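The grouping-and-selection step above (detections from several cameras mapped to shared X-Y coordinates, then one item code chosen per item) can be sketched as a proximity grouping followed by a majority vote. The names and the simple first-member distance test are assumptions, not the claimed method:

```python
from collections import Counter

def vote_item_codes(detections, radius=1.0):
    """Group (x, y, predicted_code) detections from multiple cameras into
    single items by X-Y proximity, then pick the majority code per group."""
    groups = []
    for x, y, code in detections:
        for g in groups:
            gx, gy, _ = g[0]
            if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2:
                g.append((x, y, code))
                break
        else:
            groups.append([(x, y, code)])
    return [Counter(c for _, _, c in g).most_common(1)[0][0] for g in groups]
```

The selected codes would then be used to process the checkout.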
  • Publication number: 20230252443
    Abstract: A first Machine-Learning Model (MLM) is processed on multiple images of a scene, each image comprising a different perspective view of each of a plurality of items. The first MLM produces masks for the items within each image, each mask representing a portion of a given item within a given image. Depth information associated with the images and the masks is processed to isolate each portion of each item within each image. A single scene image is generated from the images by stitching each image’s pixel data for each portion of each item into a composite item image within the single scene image. Each item’s composite item image is passed to a second MLM, and the second MLM returns an item code for the corresponding item associated with the corresponding composite item image. The item codes for the items are passed to a transaction manager to process a transaction.
    Type: Application
    Filed: April 26, 2022
    Publication date: August 10, 2023
    Inventors: Christian Lee McDaniel, Stefan Bjelcevic, Brent Vance Zucker
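A toy illustration of the stitching step, in which the isolated per-view portions of one item are combined into a single composite image. Here views are simply placed side by side with zero padding; the actual patent would align them using the masks and depth information, so this is only a structural sketch with hypothetical names:

```python
def stitch_views(crops):
    """Stitch per-view pixel crops of one item into a single composite
    image by placing the views side by side, padding shorter views."""
    height = max(len(c) for c in crops)
    rows = []
    for r in range(height):
        row = []
        for c in crops:
            width = len(c[0])
            row.extend(c[r] if r < len(c) else [0] * width)  # pad short views
        rows.append(row)
    return rows
```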
  • Publication number: 20230005049
    Abstract: A list of items to pick for an order at a store is obtained. A hierarchical graph of the store is maintained based on regions within the store, endpoints within the store, and locations of items relative to the regions and endpoints. Each item in the list is connected to its nearest endpoint within the graph, and a path is found between the endpoints. An optimized and ordered list of the items is found based on an optimal path through each endpoint. For each segment within the path, a list of traversed endpoints is identified. The endpoints are grouped by region; a new navigation instruction is generated only when a given region is changed. The process is repeated for each pair of items in the list; the list is reduced and translated into text as an optimal path to pick the items of the order within the store.
    Type: Application
    Filed: July 2, 2021
    Publication date: January 5, 2023
    Inventors: Christian Lee McDaniel, Alexander Simon Lewin
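The instruction-reduction step above (grouping traversed endpoints by region and emitting a new instruction only on a region change) can be sketched directly; the function name and instruction wording are assumptions:

```python
def navigation_text(endpoint_path, region_of):
    """Collapse a path of traversed endpoints into navigation instructions,
    emitting a new instruction only when the region changes."""
    instructions, current = [], None
    for endpoint in endpoint_path:
        region = region_of[endpoint]
        if region != current:
            instructions.append(f"Proceed to {region}")
            current = region
    return instructions
```

Collapsing by region keeps the spoken or displayed guidance short even when the underlying path crosses many endpoints.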
  • Publication number: 20230005046
    Abstract: An optimized route to pick a list of items through a store is obtained. Sensor data for a mobile device of a user is evaluated in real time to provide fine-grain orientation, direction, location, and behaviors of the user along the route during a picking session. A determination is made based on the sensor data and a current portion of the route that the user has picked a current item from the store and a next item along with its route guidance is provided to the user without any user action being required. In an embodiment, tactile, speech, and/or audible feedback is provided from the device when the determination is made that an item was picked by the user. In an embodiment, predefined movements of the device are identified as user-provided route commands and processed on behalf of the user during the session.
    Type: Application
    Filed: April 28, 2022
    Publication date: January 5, 2023
    Inventor: Christian Lee McDaniel
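The hands-free advance described above (inferring a pick from sensor data and moving to the next item without user action) might be approximated by a dwell-based state machine over position readings. This is a deliberately simplified stand-in, with hypothetical names and a crude dwell heuristic in place of the real sensor fusion:

```python
def advance_on_pick(route, readings, pick_radius=1.0, dwell=3):
    """Walk sensor position readings against an ordered pick route: after
    `dwell` consecutive readings within pick_radius of the current item's
    location, treat the item as picked and advance to the next item.
    Returns the names of items inferred as picked."""
    picked, i, streak = [], 0, 0
    for x, y in readings:
        if i >= len(route):
            break                           # every item already picked
        name, ix, iy = route[i]
        if (x - ix) ** 2 + (y - iy) ** 2 <= pick_radius ** 2:
            streak += 1
            if streak >= dwell:             # lingered near the item: a pick
                picked.append(name)
                i, streak = i + 1, 0
        else:
            streak = 0                      # moved away; reset dwell counter
    return picked
```

A real implementation would also fuse orientation, direction, and gesture data, and trigger the tactile or audible feedback mentioned in the abstract when a pick is registered.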