Patents by Inventor Abhinav Yarlagadda
Abhinav Yarlagadda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250069362
Abstract: Systems and methods include extracting item parameters from images of items positioned at a POS system. The item parameters associated with each item are indicative as to an identification of each item thereby enabling the identification of each item based on the item parameters. The item parameters are analyzed to determine whether the item parameters match item parameters stored in a database. The database stores different combinations of item parameters to thereby identify each item based on each different combination of item parameters associated with each item. Each item positioned at the POS system is identified when the item parameters for each item match item parameters as stored in the database and fail to identify each item when the item parameters fail to match item parameters. The item parameters associated with the items that fail to match are streamed to the database thereby enabling the identification of each failed item.
Type: Application
Filed: November 11, 2024
Publication date: February 27, 2025
Inventors: Abhinav Yarlagadda, Aykut Dengi, Sai Krishna Bashetty, Rahul Santhosh Kumar Varma, Daniel King, Kamalesh Kalirathinam, Nathan Kelly, Sri Priyanka Madduluri, Thomas Strich
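The flow this abstract describes — extract parameters per item, compare them against stored parameter combinations, and stream unmatched parameters back to the database for later identification — can be sketched roughly as below. All class, field, and parameter names here are hypothetical illustrations, not the patented implementation:

```python
# Hypothetical sketch of the matching flow described in the abstract:
# extracted item parameters are checked against stored combinations,
# and unmatched parameter sets are queued ("streamed") for labeling.
from dataclasses import dataclass, field


@dataclass
class ParameterDatabase:
    # Each known item maps from one combination of parameters to a label.
    combinations: dict = field(default_factory=dict)
    unmatched_stream: list = field(default_factory=list)

    def identify(self, params: set):
        key = frozenset(params)
        item = self.combinations.get(key)
        if item is None:
            # Identification failed: stream the parameters for review,
            # enabling later identification of the failed item.
            self.unmatched_stream.append(params)
        return item


db = ParameterDatabase({frozenset({"red", "cylindrical", "12oz"}): "soda-can"})
print(db.identify({"red", "cylindrical", "12oz"}))  # matches a stored combination
print(db.identify({"green", "rectangular"}))        # no match; streamed for review
```

A real system would match on extracted visual features rather than string tags, but the lookup-with-fallback shape is the same.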
-
Patent number: 12236662
Abstract: Assisted checkout devices, including point-of-sale stations, can use computer vision and machine learning to speed the checkout process while maintaining human verification, assistance, and customer interaction provided by human clerks. A plurality of optical sensors, including cameras, can be arranged with different views of a checkout plane upon which items being purchased by a buyer are placed. Moreover, one or more support towers can be utilized to elevate the optical sensors to vertical heights at which the checkout plane, and items placed thereon, is within the field of view of the optical sensors. The information captured by the plurality of optical sensors can be analyzed using machine learning models to detect and identify the items placed on the checkout plane.
Type: Grant
Filed: January 16, 2024
Date of Patent: February 25, 2025
Assignee: RadiusAI, Inc.
Inventors: Abhinav Yarlagadda, Aykut Dengi, Sai Krishna Bashetty, Rahul Santhosh Kumar Varma, Daniel King, Kamalesh Kalirathinam, Nathan Kelly, Sri Priyanka Madduluri, Thomas Strich
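One natural consequence of arranging several cameras with different views of the same checkout plane is that their per-view detections must be merged into one item list. The sketch below shows a simple majority-vote merge; this is an illustrative assumption, not the method claimed in the patent:

```python
# Illustrative sketch: merging per-camera item detections on a checkout
# plane by majority vote across views. Labels are invented examples.
from collections import Counter


def merge_views(detections_per_camera):
    """Each camera reports the item labels it detects; keep labels
    seen by a majority of the cameras."""
    counts = Counter(
        label for view in detections_per_camera for label in set(view)
    )
    quorum = len(detections_per_camera) // 2 + 1
    return sorted(label for label, n in counts.items() if n >= quorum)


views = [["chips", "soda"], ["chips", "soda", "gum"], ["chips"]]
print(merge_views(views))  # ['chips', 'soda']
```

Voting across views is one way occlusion on a crowded checkout plane can be tolerated: an item hidden from one camera can still be confirmed by the others.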
-
Publication number: 20240394779
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Application
Filed: August 6, 2024
Publication date: November 28, 2024
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Krishna Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda
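The core mechanism this family of filings describes — maintaining a coherent track identity across cameras from a combination of motion data and visual featurization data — amounts to associating each new detection with the existing track that scores best on a blended motion-plus-appearance measure. A minimal sketch, with invented weights and data shapes, purely as an assumption about how such a score could be combined:

```python
# Hypothetical cross-camera track association: a new detection is
# assigned to the existing track with the best combined score of
# motion proximity and appearance-feature similarity.
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def associate(detection, tracks, w_motion=0.5, w_feat=0.5):
    """detection/tracks: dicts with 'pos' (x, y) and 'feat' (vector)."""
    def score(track):
        dist = math.dist(detection["pos"], track["pos"])
        motion_term = 1.0 / (1.0 + dist)  # closer -> higher score
        feat_term = cosine(detection["feat"], track["feat"])
        return w_motion * motion_term + w_feat * feat_term
    return max(tracks, key=score)


tracks = [
    {"id": 1, "pos": (0.0, 0.0), "feat": [1.0, 0.0]},
    {"id": 2, "pos": (5.0, 5.0), "feat": [0.0, 1.0]},
]
det = {"pos": (0.5, 0.2), "feat": [0.9, 0.1]}
print(associate(det, tracks)["id"])  # 1
```

Blending the two terms is what keeps an identity coherent when one signal fails: appearance carries the match across a camera handoff where motion continuity breaks, and motion disambiguates similarly dressed people.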
-
Publication number: 20240265434
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Application
Filed: April 12, 2024
Publication date: August 8, 2024
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Krishna Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda
-
Patent number: 12056752
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Grant
Filed: February 13, 2023
Date of Patent: August 6, 2024
Assignee: RadiusAI, Inc.
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Krishna Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda
-
Publication number: 20240249266
Abstract: Systems and methods include extracting item parameters from images of items positioned at a POS system. The item parameters associated with each item are indicative as to an identification of each item thereby enabling the identification of each item based on the item parameters. The item parameters are analyzed to determine whether the item parameters match item parameters stored in a database. The database stores different combinations of item parameters to thereby identify each item based on each different combination of item parameters associated with each item. Each item positioned at the POS system is identified when the item parameters for each item match item parameters as stored in the database and fail to identify each item when the item parameters fail to match item parameters. The item parameters associated with the items that fail to match are streamed to the database thereby enabling the identification of each failed item.
Type: Application
Filed: January 16, 2024
Publication date: July 25, 2024
Inventors: Abhinav Yarlagadda, Enis Dengi, Sai Krishna Bashetty, Rahul Santhosh Kumar Varma, Daniel King, Kamalesh Kalirathinam, Nathan Kelly, Sri Priyanka Madduluri, Thomas Strich
-
Publication number: 20240242578
Abstract: Assisted checkout devices, including point-of-sale stations, can use computer vision and machine learning to speed the checkout process while maintaining human verification, assistance, and customer interaction provided by human clerks. A plurality of optical sensors, including cameras, can be arranged with different views of a checkout plane upon which items being purchased by a buyer are placed. Moreover, one or more support towers can be utilized to elevate the optical sensors to vertical heights at which the checkout plane, and items placed thereon, is within the field of view of the optical sensors. The information captured by the plurality of optical sensors can be analyzed using machine learning models to detect and identify the items placed on the checkout plane.
Type: Application
Filed: January 16, 2024
Publication date: July 18, 2024
Inventors: Abhinav Yarlagadda, Enis Dengi, Sai Krishna Bashetty, Rahul Santhosh Kumar Varma, Daniel King, Kamalesh Kalirathinam, Nathan Kelly, Sri Priyanka Madduluri, Thomas Strich
-
Publication number: 20240242505
Abstract: Visual analytics systems can use video data from cameras placed throughout a location (e.g., a store or a hospital) to determine poses, gestures, tracks, journeys, and actions, and thereby to provide broader analytics useful in realizing efficiencies and improving human behaviors. Data from visual analytics systems can be used to generate alerts, make useful predictions, and run simulations to test hypotheses.
Type: Application
Filed: January 16, 2024
Publication date: July 18, 2024
Inventors: Abhinav Yarlagadda, Enis Dengi, Sai Krishna Bashetty, Alexander Magsam, Sharan Raja, Stephen McCracken, Xaio Liu, Raghavendra Nakka, Nathan Kelly, Tarik Temur, Siva Naga Raju Balusu, Uday Bhargav Reddy Murikinati, Girisha Lakshmi Nakka, Marek Niemyjski, Todd Ellering, Jeffrey Kershner, Dana Altier, Paul Mills
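One concrete example of turning tracks and journeys into an alert, as this abstract contemplates, is flagging long dwell times in a zone. The zone names, sampling format, and threshold below are invented for illustration only:

```python
# Hypothetical analytics rule: raise an alert when a tracked person's
# dwell time in one zone exceeds a threshold. A track is a list of
# (timestamp_seconds, zone) samples for a single person.
def dwell_alerts(track, threshold_s=120):
    alerts, start, current = [], None, None
    for ts, zone in track:
        if zone != current:
            # The person moved to a new zone; check how long they
            # stayed in the previous one.
            if current is not None and ts - start >= threshold_s:
                alerts.append((current, ts - start))
            start, current = ts, zone
    return alerts


samples = [(0, "entrance"), (30, "aisle-3"), (200, "checkout")]
print(dwell_alerts(samples))  # [('aisle-3', 170)]
```

The same per-track zone timeline could equally feed predictions (queue length at checkout) or simulations (staffing what-ifs), which is the broader use the abstract names.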
-
Publication number: 20240242470
Abstract: Systems and methods include extracting item parameters from images of items positioned at a POS system. The item parameters associated with each item are indicative as to an identification of each item thereby enabling the identification of each item based on the item parameters. The item parameters are analyzed to determine whether the item parameters match item parameters stored in a database. The database stores different combinations of item parameters to thereby identify each item based on each different combination of item parameters associated with each item. Each item positioned at the POS system is identified when the item parameters for each item match item parameters as stored in the database and fail to identify each item when the item parameters fail to match item parameters. The item parameters associated with the items that fail to match are streamed to the database thereby enabling the identification of each failed item.
Type: Application
Filed: January 16, 2024
Publication date: July 18, 2024
Inventors: Abhinav Yarlagadda, Enis Dengi, Sai Krishna Bashetty, Rahul Santhosh Kumar Varma, Daniel King, Kamalesh Kalirathinam, Nathan Kelly, Sri Priyanka Madduluri, Thomas Strich
-
Publication number: 20240029138
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Application
Filed: February 13, 2023
Publication date: January 25, 2024
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Krishna Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda
-
Patent number: 11580648
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Grant
Filed: May 3, 2021
Date of Patent: February 14, 2023
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda
-
Publication number: 20210304421
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Application
Filed: May 3, 2021
Publication date: September 30, 2021
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda
-
Patent number: 11024043
Abstract: A visual tracking system for tracking and identifying persons within a monitored location, comprising a plurality of cameras and a visual processing unit, each camera produces a sequence of video frames depicting one or more of the persons, the visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and further determine demographic data and sentiment data using the visual featurization data, the visual tracking system further having a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and generate an action recommendation for addressing the customer need, the visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
Type: Grant
Filed: March 27, 2020
Date of Patent: June 1, 2021
Inventors: Abraham Othman, Enis Aykut Dengi, Ishan Agrawal, Jeff Kershner, Peter Martinez, Paul Mills, Abhinav Yarlagadda