Patents by Inventor Mehul Nariyawala
Mehul Nariyawala has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240349961
Abstract: An autonomous vacuum implements a semi-waterproof waste bag capable of collecting dry waste from vacuum debris and liquid waste from mopping. The autonomous vacuum leverages a dual-roller cleaning head, one for dry cleaning and another for wet cleaning. The autonomous vacuum utilizes vacuum force to ingest both dry waste and liquid waste from the cleaning head into the semi-waterproof waste bag. The semi-waterproof waste bag includes a non-permeable portion formed of a waterproof material and a permeable portion formed of an air-permeable material. The semi-waterproof waste bag may further include a water-absorbent material, e.g., water-absorbing spherical beads.
Type: Application
Filed: April 22, 2024
Publication date: October 24, 2024
Inventors: Karthik Chandrashekaraiah, Anshuman Kumar, Yu Yang Liu, Shu Yan Liu, Mehul Nariyawala, Navneet Dalal
-
Publication number: 20240349962
Abstract: A muffler for an autonomous vacuum includes one or more chambers and one or more noise-absorbing elements for noise reduction of air exhausted into an external environment of the autonomous vacuum. The muffler is coupled to a vacuum stack inclusive of a vacuum motor. The muffler includes a main body, a foam layer, and a sealing plate. The main body includes an inlet for air exhausted by the vacuum motor and forms the one or more chambers fluidically connected to the inlet. The foam layer is disposed within one of the chambers of the main body. The foam layer absorbs sound waves incident on the foam layer. The sealing plate is affixed to the main body and includes an outlet to exhaust air from the chambers to an external environment of the autonomous vacuum.
Type: Application
Filed: April 22, 2024
Publication date: October 24, 2024
Inventors: Karthik Chandrashekaraiah, Anshuman Kumar, Zhen Bo Bian, Mehul Nariyawala, Navneet Dalal
-
Publication number: 20240349971
Abstract: An autonomous vacuum leverages a dual-roller cleaning head, one for dry cleaning and another for wet cleaning. The autonomous vacuum utilizes vacuum force to ingest both dry waste and liquid waste from the cleaning head into the waste bag. The autonomous vacuum further comprises a cleaning head coupled to the vacuum motor, the cleaning head forming a mop roller cavity including a first end and a second end opposite the first end, wherein a mop opening exposes the mop roller cavity to an external environment. The cleaning head comprises a mop motor positioned at the first end of the mop roller cavity and including a driver clutch. The cleaning head further comprises a mop roller core having a first end, a second end opposite to the first end, and an outer surface covered with a fabric material, the mop roller core comprising a spring clutch positioned at its first end and removably couplable to the driver clutch of the mop motor.
Type: Application
Filed: April 22, 2024
Publication date: October 24, 2024
Inventors: Anshuman Kumar, William George Plummer, Nathan Elio Madonia, Shu Yan Liu, Mehul Nariyawala, Navneet Dalal
-
Publication number: 20230062104
Abstract: Systems and methods for navigating an autonomous vacuum are disclosed. According to one method, the autonomous vacuum traverses a cleaning environment having a plurality of surfaces. As the autonomous vacuum is traversing the cleaning environment, sensors on the autonomous vacuum capture sensor data describing a first section of a surface on which the autonomous vacuum is currently traversing. Based on the received sensor data, the autonomous vacuum can determine that the first section is of a first surface type of a plurality of surface types. The autonomous vacuum can generate a user interface with a background displaying the determined first surface type to notify the user of where the autonomous vacuum is cleaning.
Type: Application
Filed: August 9, 2022
Publication date: March 2, 2023
Inventors: Anshuman Kumar, Karthik Chandrashekharaiah, Vishal Jain, Nathan Elio Madonia, William George Plummer, Tristan Pierre Gervais, Prabhakar Manoj Naik, Clayton Haight, Vivek Kumar Bagaria, Seungho Yang, Navneet Dalal, Mehul Nariyawala
-
Publication number: 20210378472
Abstract: An autonomous cleaning robot (e.g., an autonomous vacuum) may clean an environment using a cleaning head that is self-actuated. The cleaning head includes an actuator assembly comprising an actuator configured to control rotation and vertical movement of a cleaning roller, a controller, and a cleaning roller having an elongated cylindrical length connected to the actuator assembly. The cleaning head also includes a computer processor connected to the actuator assembly and a non-transitory computer-readable storage medium storing instructions that cause the computer processor to map the environment based on sensor data captured by the autonomous vacuum. The computer processor may determine an optimal height for the cleaning head based on the map and instruct the actuator assembly to adjust the height of the cleaning head.
Type: Application
Filed: August 23, 2021
Publication date: December 9, 2021
Inventors: Anshuman Kumar, Vishal Jain, Seungho Yang, Gavin Li, Mehul Nariyawala, Navneet Dalal
-
Publication number: 20210244254
Abstract: An autonomous cleaning robot (e.g., an autonomous vacuum) may use a sensor system to map an environment that may be used to determine where to clean. The autonomous vacuum receives visual data about the environment and determines a ground plane of the environment based on the visual data. The autonomous vacuum detects objects within the environment based on the ground plane. For each object, the autonomous vacuum segments a three-dimensional (3D) representation of the object out of the visual data and determines whether the object is static or dynamic. The autonomous vacuum adds static objects to a long-term level of a map of the environment and dynamic objects to an intermediate level of the map. The autonomous vacuum may further add virtual borders, flags, walls, and messes to the map.
Type: Application
Filed: February 9, 2021
Publication date: August 12, 2021
Inventors: Navneet Dalal, Seungho Yang, Gavin Li, Mehul Nariyawala
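The layered-map idea in this abstract (static objects in a long-term level, dynamic objects in an intermediate level, plus annotations such as virtual borders) can be sketched roughly as below. All class and attribute names are hypothetical illustrations, not taken from the patent.

```python
class LayeredMap:
    """Toy sketch of a two-level environment map: static objects persist
    in a long-term level, dynamic objects go to an intermediate level,
    and virtual borders/flags/messes are kept as annotations on top."""

    def __init__(self):
        self.long_term = set()     # static objects (e.g. walls, sofas)
        self.intermediate = set()  # dynamic objects (e.g. people, pets)
        self.annotations = set()   # virtual borders, flags, messes

    def add_object(self, obj, is_static):
        # Route the object to the level matching its static/dynamic call.
        (self.long_term if is_static else self.intermediate).add(obj)

    def add_annotation(self, note):
        self.annotations.add(note)
```

The point of the split is that the long-term level survives between cleaning runs while the intermediate level can be cheaply rebuilt as dynamic objects move.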
-
Patent number: 10957171
Abstract: A computing system obtains a first category for a first motion event. The system sends a first alert indicative of the first category to a user. After sending the first alert, it obtains a second category for a second motion event. In accordance with a determination that the second category is the same as the first category, the system determines whether a third motion event of the first category has been detected in a preceding predetermined amount of time before the second motion event. If the third motion event has not been detected in the preceding predetermined amount of time before the second motion event, the system sends a second alert associated with the second motion event indicative of the first category to the user. If the third motion event has been detected in the preceding predetermined amount of time before the second motion event, the system forgoes sending the second alert.
Type: Grant
Filed: July 11, 2016
Date of Patent: March 23, 2021
Assignee: Google LLC
Inventors: George Alban Heitz, III, Mehul Nariyawala, Akshay R. Bapat
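The suppression rule this abstract describes — forgo an alert when another event of the same category was already detected within a preceding window — can be sketched as follows. The class name, window length, and category labels are assumptions for illustration, not the patented implementation.

```python
from collections import defaultdict

class AlertGate:
    """Sketch of the window-based suppression rule: an alert for a motion
    event is sent only if no other event of the same category was detected
    in the preceding `window` seconds."""

    def __init__(self, window=300.0):
        self.window = window
        self.history = defaultdict(list)  # category -> detection timestamps

    def should_alert(self, category, ts):
        # Any same-category event within the preceding window suppresses this one.
        prior = [t for t in self.history[category]
                 if ts - self.window <= t < ts]
        self.history[category].append(ts)
        return not prior
```

With a 300-second window, a "person" event at t=100 following one at t=0 is suppressed, while another at t=500 alerts again because the prior detection has aged out of the window.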
-
Publication number: 20200211347
Abstract: A method at a computing system includes obtaining video of an environment including a plurality of objects; defining a zone including a portion of the environment; subsequent to the defining, detecting a motion event captured in the video occurring at least partially within the zone, wherein the motion event is associated with a first object of the plurality of objects; identifying an object type of the first object; and based on the object type of the first object, causing a notification of the motion event to be issued or not issued.
Type: Application
Filed: March 10, 2020
Publication date: July 2, 2020
Inventors: James Edward Stewart, George Alban Heitz, III, Joe Delone Venters, Seungho Yang, Mehul Nariyawala, Cameron Hill, Yohannes Berhanu Kifle, Sayed Yusef Shafi, Sahana Mysore
-
Patent number: 10586433
Abstract: A method at a computing system includes: obtaining video of an environment including a plurality of objects, wherein the video has a field of view; identifying one or more objects of the plurality of objects within the field of view; defining a zone of interest associated with a first object of the one or more objects, including identifying the zone of interest as one of an alerting zone or a suppression zone; subsequent to the defining, detecting one or more motion events captured in the video occurring at least partially within the zone of interest; when the zone of interest is an alerting zone, causing one or more notifications of the one or more motion events to be issued; and when the zone is a suppression zone, suppressing notifications of the one or more motion events.
Type: Grant
Filed: February 13, 2017
Date of Patent: March 10, 2020
Assignee: Google LLC
Inventors: James Edward Stewart, George Alban Heitz, III, Joe Delone Venters, Seungho Yang, Mehul Nariyawala, Cameron Hill, Yohannes Berhanu Kifle, Sayed Yusef Shafi, Sahana Mysore
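The alerting-zone vs. suppression-zone decision described above reduces to a small routing rule. The sketch below models zones as sets of grid cells and gives suppression priority over alerting when an event overlaps both; the data shapes and the priority choice are illustrative assumptions, not details from the patent.

```python
def process_event(event, zones, notify):
    """Notify for a motion event only when it overlaps an alerting zone
    and no suppression zone. `event` and each zone carry a `region` set
    of occupied cells; zones carry a `kind` of 'alerting'/'suppression'."""
    overlapping = [z for z in zones if z["region"] & event["region"]]
    if any(z["kind"] == "suppression" for z in overlapping):
        return False           # suppression zone squelches the notification
    if any(z["kind"] == "alerting" for z in overlapping):
        notify(event)          # alerting zone triggers the notification
        return True
    return False               # event outside all zones of interest
```

A doorway could be an alerting zone and a busy street visible through a window a suppression zone, so only the doorway motion produces notifications.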
-
Patent number: 10192415
Abstract: The various embodiments described herein include methods, devices, and systems for providing event alerts. In one aspect, a method includes: (1) receiving a plurality of video frames from a camera, the plurality of video frames including a motion event candidate; (2) categorizing the motion event candidate by processing the plurality of video frames, the categorizing including: (a) associating the motion event candidate with a first category of a plurality of motion event categories; and (b) generating a confidence level for the association of the motion event candidate with the first category; and (3) sending an alert indicative of the first category and the confidence level to a user associated with the camera.
Type: Grant
Filed: July 11, 2016
Date of Patent: January 29, 2019
Assignee: Google LLC
Inventors: George Alban Heitz, III, Akshay R. Bapat, Mehul Nariyawala
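The categorize-then-alert flow in this abstract can be sketched as below. The per-category scores stand in for a classifier's output, and the score-sum normalization used to derive a confidence level is a placeholder of my own, not the patented method.

```python
def categorize_event(scores):
    """Associate a motion-event candidate with its top-scoring category
    and derive a confidence level by normalizing over all scores."""
    best = max(scores, key=scores.get)
    confidence = scores[best] / sum(scores.values())
    return best, confidence

def build_alert(scores, camera_user):
    """Package the category and confidence into an alert for the user
    associated with the camera, mirroring step (3) of the abstract."""
    category, confidence = categorize_event(scores)
    return {"to": camera_user,
            "category": category,
            "confidence": round(confidence, 2)}
```

For scores of 6.0 (person), 3.0 (pet), and 1.0 (vehicle), the alert carries the "person" category with confidence 0.6.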
-
Patent number: 10139917
Abstract: Systems and methods are disclosed for gesture-initiated actions in videoconferences. In one implementation, a processing device receives content streams during a communication session, identifies a request for feedback within one of the content streams, based on an identification of the request for feedback, processes the content streams to identify one or more gestures within at least one of the content streams, and based on a determination that a first gesture of the one or more gestures is relatively more prevalent across the content streams than one or more other gestures, initiates an action with respect to the communication session.
Type: Grant
Filed: September 12, 2016
Date of Patent: November 27, 2018
Assignee: Google LLC
Inventors: Mehul Nariyawala, Rahul Garg, Navneet Dalal, Thor Carpenter, Gregory Burgess, Timothy Psiaki, Mark Chang, Antonio Bernardo Monteiro Costa, Christian Plagemann, Chee Chew
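The prevalence test in this abstract — act on whichever gesture is relatively more common across participants' streams — is essentially a vote tally. The sketch below assumes gesture detection has already produced per-stream gesture lists; gesture names are illustrative.

```python
from collections import Counter

def most_prevalent_gesture(streams_gestures):
    """Tally gestures detected across all content streams and return
    the most prevalent one, which would drive the session action
    (e.g. resolving a poll after a request for feedback)."""
    counts = Counter(g for gestures in streams_gestures for g in gestures)
    if not counts:
        return None  # no gestures detected in any stream
    gesture, _ = counts.most_common(1)[0]
    return gesture
```

If three participants give a thumbs-up and two wave, the thumbs-up wins and the corresponding action is initiated for the session.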
-
Publication number: 20180232592
Abstract: A method at a computing system includes: obtaining video of an environment including a plurality of objects, wherein the video has a field of view; identifying one or more objects of the plurality of objects within the field of view; defining a zone of interest associated with a first object of the one or more objects, including identifying the zone of interest as one of an alerting zone or a suppression zone; subsequent to the defining, detecting one or more motion events captured in the video occurring at least partially within the zone of interest; when the zone of interest is an alerting zone, causing one or more notifications of the one or more motion events to be issued; and when the zone is a suppression zone, suppressing notifications of the one or more motion events.
Type: Application
Filed: February 13, 2017
Publication date: August 16, 2018
Inventors: James Edward Stewart, George Alban Heitz, III, Joe Delone Venters, Seungho Yang, Mehul Nariyawala, Cameron Hill, Yohannes Berhanu Kifle, Sayed Yusef Shafi, Sahana Mysore
-
Publication number: 20180012462
Abstract: The various embodiments described herein include methods, devices, and systems for providing event alerts. In one aspect, a method includes: (1) obtaining a first category for a first motion event, the first motion event corresponding to a first plurality of video frames; (2) sending a first alert indicative of the first category to a user; (3) after sending the first alert, obtaining a second category for a second motion event corresponding to a second plurality of video frames; (4) in accordance with a determination that the second category is the same as the first category, determining whether a predetermined amount of time has elapsed since the sending of the first alert; (5) if the predetermined amount of time has elapsed, sending a second alert indicative of the second category to the user; and (6) if the predetermined amount of time has not elapsed, forgoing sending the second alert.
Type: Application
Filed: July 11, 2016
Publication date: January 11, 2018
Inventors: George Alban Heitz, III, Mehul Nariyawala, Akshay R. Bapat
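Unlike the preceding-window variant granted as 10957171, this application keys suppression off the time elapsed since the last *sent* alert, i.e. a per-category debounce. A minimal sketch, with the class name and gap length as assumptions:

```python
class AlertDebouncer:
    """Sketch of the elapsed-time rule: a second alert for the same
    category is sent only if at least `min_gap` seconds have passed
    since the previous alert for that category was sent."""

    def __init__(self, min_gap=600.0):
        self.min_gap = min_gap
        self.last_alert = {}  # category -> timestamp of last alert sent

    def maybe_alert(self, category, ts):
        last = self.last_alert.get(category)
        if last is not None and ts - last < self.min_gap:
            return False       # predetermined time not elapsed: forgo
        self.last_alert[category] = ts
        return True            # send the alert and restart the clock
```

Because the clock only restarts when an alert is actually sent, a burst of same-category events collapses into one alert per gap interval.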
-
Publication number: 20180012460
Abstract: The various embodiments described herein include methods, devices, and systems for providing event alerts. In one aspect, a method includes: (1) receiving a plurality of video frames from a camera, the plurality of video frames including a motion event candidate; (2) categorizing the motion event candidate by processing the plurality of video frames, the categorizing including: (a) associating the motion event candidate with a first category of a plurality of motion event categories; and (b) generating a confidence level for the association of the motion event candidate with the first category; and (3) sending an alert indicative of the first category and the confidence level to a user associated with the camera.
Type: Application
Filed: July 11, 2016
Publication date: January 11, 2018
Inventors: George Alban Heitz, III, Akshay R. Bapat, Mehul Nariyawala
-
Patent number: 9445048
Abstract: Systems and methods are disclosed for gesture-initiated actions in videoconferences. In one implementation, a processing device receives one or more content streams as part of a communication session. The processing device identifies, within the one or more content streams, a request for feedback. The processing device processes, based on an identification of a request for feedback within the one of the plurality of content streams, the one or more content streams to identify a presence of one or more gestures within at least one of the one or more content streams. The processing device initiates, based on an identification of the presence of one or more gestures within at least one of the one or more content streams, an action with respect to the communication session.
Type: Grant
Filed: July 29, 2014
Date of Patent: September 13, 2016
Assignee: Google Inc.
Inventors: Mehul Nariyawala, Rahul Garg, Navneet Dalal, Thor Carpenter, Greg Burgess, Tim Psiaki, Mark Chang, Antonio Bernardo Monteiro Costa, Christian Plagemann, Chee Chew
-
Publication number: 20140157209
Abstract: A system and method that includes detecting an application change within a multi-application operating framework; updating an application hierarchy model for gesture-to-action responses with the detected application change; detecting a gesture; according to the hierarchy model, mapping the detected gesture to an action of an application; and triggering the action.
Type: Application
Filed: March 12, 2013
Publication date: June 5, 2014
Applicant: Google Inc.
Inventors: Navneet Dalal, Mehul Nariyawala, Ankit Mohan, Varun Gulshan
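The hierarchy model this abstract outlines — update the gesture-to-action bindings on each application change, then route a detected gesture to the highest-priority application that handles it — can be sketched as below. Class, app, and gesture names are all hypothetical.

```python
class GestureRouter:
    """Sketch of an application hierarchy model for gesture-to-action
    responses: the most recently foregrounded app sits at the top, and a
    detected gesture maps to the first app in the hierarchy that binds it."""

    def __init__(self):
        self.hierarchy = []  # list of (app, bindings); front = highest priority

    def on_app_change(self, app, bindings):
        # An application change promotes the app (with its gesture bindings)
        # to the top of the hierarchy.
        self.hierarchy.insert(0, (app, bindings))

    def on_gesture(self, gesture):
        # Walk the hierarchy top-down; the first matching binding wins.
        for app, bindings in self.hierarchy:
            if gesture in bindings:
                return app, bindings[gesture]  # the action to trigger
        return None
```

A gesture bound by both a browser and a media player thus resolves to whichever app was foregrounded last, while gestures only the background app binds still fall through to it.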
-
Patent number: D1042999
Type: Grant
Filed: March 2, 2022
Date of Patent: September 17, 2024
Assignee: Matic Robots, Inc.
Inventors: Seungho Yang, Alexis De Stasio, Kendison Givens Ma, Navneet Dalal, Mehul Nariyawala