Patents by Inventor Mayank Gupta

Mayank Gupta is named as an inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11978267
    Abstract: A method and related system operations include obtaining a video stream with an image sensor of a camera device and detecting a plurality of target objects by executing a neural network model based on the video stream with a vision processor unit of the camera device. The method also includes generating a plurality of bounding boxes and determining a plurality of character sequences by, for each respective bounding box of the plurality of bounding boxes, performing a set of optical character recognition (OCR) operations to determine a respective character sequence of the plurality of character sequences. The method also includes updating a plurality of tracklets to indicate the plurality of bounding boxes and storing the plurality of tracklets in association with the plurality of character sequences in a memory of the camera device.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: May 7, 2024
    Assignee: Verkada Inc.
    Inventors: Mayank Gupta, Suraj Arun Vathsa, Song Cao, Yi Xu, Yuanyuan Chen, Yunchao Gong
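
A minimal Python sketch of the per-frame flow described in the entry above: detect target objects, run OCR on each bounding box, and update on-device tracklets alongside their character sequences. The helper callables (detect_objects, run_ocr, match_track) are hypothetical placeholders, not APIs from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Tracklet:
    track_id: int
    boxes: list = field(default_factory=list)       # bounding boxes over time
    sequences: list = field(default_factory=list)   # OCR character sequences per box

def process_frame(frame, tracklets, detect_objects, run_ocr, match_track):
    """Detect target objects, OCR each box, and update tracklets on-device."""
    boxes = detect_objects(frame)                   # neural-network detections
    for box in boxes:
        chars = run_ocr(frame, box)                 # character sequence for this box
        track = match_track(tracklets, box)         # associate the box with a tracklet
        track.boxes.append(box)
        track.sequences.append(chars)
    return tracklets                                # persisted in camera memory
```
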
  • Patent number: 11971741
    Abstract: Aspects of the present disclosure control aging of a signal path in an idle mode to mitigate aging. In one example, an input of the signal path is alternately parked low and high over multiple idle periods to balance the aging of devices (e.g., transistors) in the signal path. In another example, a clock signal (e.g., a clock signal with a low frequency) is input to the signal path during idle periods to balance the aging of devices (e.g., transistors) in the signal path. In another example, the input of the signal path is parked high or low during each idle period based on an aging pattern.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: April 30, 2024
    Assignee: QUALCOMM INCORPORATED
    Inventors: Mukund Narasimhan, Murali Krishna Ade, Arun David Arul Diraviyam, Mayank Gupta, Boris Dimitrov Andreev
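
A toy software model of the idle-period parking policies named in the abstract above (alternate the parked level each idle period, or follow a precomputed aging pattern). The real mechanism is circuit-level; this is illustration only, and the policy names are assumptions.

```python
def park_value(idle_period_index, policy="alternate", pattern=None):
    """Return 0 (park low) or 1 (park high) for a given idle period."""
    if policy == "alternate":
        # Park low on even idle periods and high on odd ones to balance device stress.
        return idle_period_index % 2
    if policy == "pattern" and pattern:
        # Follow a precomputed aging pattern, e.g. derived from duty-cycle history.
        return pattern[idle_period_index % len(pattern)]
    raise ValueError("unknown parking policy")
```
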
  • Patent number: 11973844
    Abstract: A machine implemented method and system, including: receiving at a near real-time processor module, one or more tenant-specific business objects from a message handler module; receiving at the near real-time processor module, contextual data related to the received one or more tenant-specific business objects from a platform analytics module; forming at the near real-time processor module, one or more events by applying one or more pre-defined analytic models to the received contextual data and the received one or more tenant-specific business objects; receiving at a message publisher module, one or more events from the near real-time processor module; and transmitting the received one or more events to one or more subscribers for the one or more events.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: April 30, 2024
    Assignee: GLOBALLOGIC, INC.
    Inventors: James Francis Walsh, Suhail Murtaza Khaki, Manu Sinha, Juan Manuel Caracoche, Artem Mygaiev, Francis Michael Borkin, Bhaskar Chaturvedi, Mayank Gupta, Biju Varghese
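
A hedged Python sketch of the module pipeline in the abstract above: a near real-time processor applies analytic models to tenant-specific business objects and contextual data to form events, and a publisher forwards those events to subscribers. The class shapes and event format are illustrative assumptions.

```python
class NearRealTimeProcessor:
    """Applies pre-defined analytic models to business objects and contextual data."""
    def __init__(self, analytic_models):
        self.analytic_models = analytic_models      # callables returning lists of event dicts

    def form_events(self, business_objects, contextual_data):
        events = []
        for model in self.analytic_models:
            events.extend(model(business_objects, contextual_data))
        return events

class MessagePublisher:
    """Delivers events to subscribers registered per event type."""
    def __init__(self):
        self.subscribers = {}                       # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self.subscribers.setdefault(event_type, []).append(callback)

    def publish(self, events):
        for event in events:
            for callback in self.subscribers.get(event["type"], []):
                callback(event)
```
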
  • Patent number: 11972201
    Abstract: In some embodiments, a computing system computes a hierarchical entity data model to facilitate autocompleting forms by generating an electronic schema extraction from an electronic form lacking data for one or more fields. The computing system generates an electronic schema including an input category and input field elements. The computing system accesses a hierarchical entity-data model including an entity category and entity-data elements. The computing system identifies associations between the entity category and input category based on semantic matching including text of an entity category label and an input field category label or matching a number of fields within an entity category to an input category. The computing system verifies the association by applying a natural language processing engine to the input field elements and the entity-data elements. The computing system autocompletes one or more input field elements with entity data from one or more of the entity-data elements.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: April 30, 2024
    Assignee: ADOBE INC.
    Inventors: Mayank Gupta, Mandeep Gandhi
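
A minimal sketch of the category-matching idea from the entry above: associate form input categories with entity-data categories by label similarity or by matching field counts, then autocomplete the matched fields. The similarity() helper (difflib-based) is a stand-in for the patent's semantic matching and NLP verification.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # crude string similarity as a placeholder for semantic matching
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def autocomplete(form_schema, entity_model, threshold=0.7):
    """form_schema: {input_category: [field names]}; entity_model: {entity_category: {key: value}}."""
    filled = {}
    for input_category, input_fields in form_schema.items():
        for entity_category, entity_data in entity_model.items():
            label_match = similarity(input_category, entity_category) >= threshold
            count_match = len(input_fields) == len(entity_data)
            if label_match or count_match:
                for field_name in input_fields:
                    # fill each input field whose name matches an entity-data key
                    for key, value in entity_data.items():
                        if similarity(field_name, key) >= threshold:
                            filled[field_name] = value
                break
    return filled
```
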
  • Patent number: 11948373
    Abstract: Automatic license plate recognition occurs when a light sensor that continually captures video detects motion as a vehicle is driven through a gate. The light sensor detects the vehicle and license plate in the video stream captured by the light sensor. An algorithm associated with the video stream of the light sensor is trained to detect license plates. The light sensor starts executing the recognition algorithm when it detects motion. Recognition of characters in the license plate is based upon an aggregation of several captured video frames in which a license plate is detected.
    Type: Grant
    Filed: December 12, 2022
    Date of Patent: April 2, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Yunchao Gong, Suraj Arun Vathsa, Mayank Gupta, Naresh Nagabushan
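
A hedged sketch of the motion-gated recognition loop described above: the recognizer runs only while motion is detected, and the final read aggregates several per-frame reads (here by majority vote over whole strings; the patent's aggregation may differ). detect_motion and recognize_plate are hypothetical helpers.

```python
from collections import Counter

def plate_from_stream(frames, detect_motion, recognize_plate):
    reads = []
    for frame in frames:
        if not detect_motion(frame):
            continue                          # stay idle until motion appears
        plate = recognize_plate(frame)        # may return None if no plate is visible
        if plate:
            reads.append(plate)
    # aggregate across captured frames: take the most common full read
    return Counter(reads).most_common(1)[0][0] if reads else None
```
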
  • Patent number: 11900688
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles while obviating network usage by the camera by predicting positions of the persons or vehicles using a Kalman Filter from the first hyperzoom. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storing in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: February 13, 2024
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
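
A simplified Python sketch of the on-camera tracking loop: predict each track's position with a Kalman-style filter, correct it with the detection from the next hyperzoom, and write compact track metadata to a local key-value store. The constant-velocity filter and record layout below are toy assumptions, not the patent's implementation.

```python
import numpy as np

class ConstantVelocityTrack:
    """Toy Kalman-style track: state is [x, y, vx, vy] with unit time steps."""
    def __init__(self, xy):
        self.state = np.array([xy[0], xy[1], 0.0, 0.0], dtype=float)
        self.history = [tuple(xy)]

    def predict(self, dt=1.0):
        x, y, vx, vy = self.state
        self.state = np.array([x + vx * dt, y + vy * dt, vx, vy])
        return self.state[:2]                      # predicted position

    def update(self, detection_xy, gain=0.5):
        predicted = self.state[:2]
        corrected = predicted + gain * (np.asarray(detection_xy, dtype=float) - predicted)
        self.state[2:] = corrected - np.array(self.history[-1])   # refresh velocity estimate
        self.state[:2] = corrected
        self.history.append(tuple(corrected))

def track_metadata(track, track_id):
    # compact record suitable for an on-camera key-value database
    return {f"track:{track_id}": {"last_xy": track.history[-1], "length": len(track.history)}}
```
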
  • Publication number: 20230401748
    Abstract: In some embodiments, a method includes receiving a first image and a second image from a stereo camera pair. The method includes selecting a first row of pixels from the rectified image and a set of rows of pixels from the second image and comparing the first row of pixels with each row of pixels from the set of rows of pixels to determine disparity values. The method includes determining a pair of rows of pixels having the first row of pixels and a second row of pixels from the set of rows of pixels. The pair of rows of pixels has an offset no greater than an offset between the first row of pixels and each row of pixels from remaining rows of pixels. The method includes adjusting, based on the offset, the relative rotational position between the first stereo camera and the second stereo camera.
    Type: Application
    Filed: August 14, 2023
    Publication date: December 14, 2023
    Inventors: Anurag GANGULI, Timothy P. DALY, JR., Mayank GUPTA, Wenbin WANG, Huan Yang CHANG
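
An illustrative sketch of the row-matching step described above: compare one row of the first (rectified) image against nearby rows of the second image, keep the pairing with the lowest mismatch, and use its vertical offset to drive the rotational adjustment. The sum-of-absolute-differences score is an assumption.

```python
import numpy as np

def best_row_offset(image1, image2, row_index, search_range=5):
    """Return the vertical offset of the second-image row that best matches image1[row_index]."""
    row1 = image1[row_index].astype(np.float32)
    best_offset, best_cost = 0, np.inf
    for offset in range(-search_range, search_range + 1):
        r = row_index + offset
        if not 0 <= r < image2.shape[0]:
            continue
        cost = np.abs(row1 - image2[r].astype(np.float32)).sum()   # row-to-row disparity score
        if cost < best_cost:
            best_cost, best_offset = cost, offset
    return best_offset                              # feeds the rotation correction
```
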
  • Publication number: 20230394850
    Abstract: A method and related system operations include determining, for each respective frame of a frame sequence, a respective bounding box that surrounds a respective sub-image of the respective frame, determining a respective string and respective confidence values associated with the respective string based on the respective sub-image, updating the tracklet to comprise the respective bounding box based on the respective string and at least one string generated by the object recognition model for a previous frame, and updating a voting table by adding the respective confidence values to the voting table. The method also includes generating an aggregated string based on the voting table by, for a set of positions of the aggregated string, determining a character associated with a maximum confidence value indicated by the voting table and associating the aggregated string with the tracklet in a data structure.
    Type: Application
    Filed: August 22, 2023
    Publication date: December 7, 2023
    Inventors: Mayank GUPTA, Suraj Arun VATHSA, Song CAO, Yi XU, Yuanyuan CHEN, Yunchao GONG
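
A minimal sketch of the per-position voting table: each frame's OCR read adds its character confidences to the table, and the aggregated string takes the highest-scoring character at each position. The data layout is an assumption for illustration.

```python
from collections import defaultdict

def add_read(voting_table, chars, confidences):
    """chars: the string read from one frame; confidences: per-character scores."""
    for pos, (ch, conf) in enumerate(zip(chars, confidences)):
        voting_table[pos][ch] += conf

def aggregate(voting_table):
    result = []
    for pos in sorted(voting_table):
        # pick the character with the maximum accumulated confidence at this position
        result.append(max(voting_table[pos], key=voting_table[pos].get))
    return "".join(result)

table = defaultdict(lambda: defaultdict(float))
add_read(table, "8ABC123", [0.4, 0.9, 0.8, 0.9, 0.7, 0.9, 0.8])
add_read(table, "BABC123", [0.6, 0.9, 0.8, 0.9, 0.7, 0.9, 0.8])
print(aggregate(table))   # -> "BABC123"
```
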
  • Publication number: 20230385134
    Abstract: An artificial intelligence (AI) based method to enhance launching of an application at a user equipment (UE) is provided. The method includes monitoring launch of the application on the UE. The method includes determining an event to be executed upon the launch of the application. The method includes categorizing the event into one of a UI updating event and a non-UI updating event. The method further includes generating an execution flow based on the categorization, wherein the execution flow is indicative of prioritizing the UI updating event for execution before the non-UI updating event such that prioritization prevents a mismatch of the UI components during the launch. The method includes storing the execution flow and executing the stored execution flow during a subsequent launch of the application, such that launching of the application is enhanced.
    Type: Application
    Filed: June 14, 2023
    Publication date: November 30, 2023
    Inventors: Sripurna Mutalik, Manith Shetty, Anuradha Kanukotla, Mayank Gupta, Sumeen Agrawal
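
A minimal sketch of the launch-flow idea in the entry above: classify queued launch events as UI-updating or not, run UI-updating events first, and cache the resulting order for the next launch. The is_ui_updating predicate and the event representation are placeholders.

```python
def build_execution_flow(events, is_ui_updating):
    ui_events = [e for e in events if is_ui_updating(e)]
    non_ui_events = [e for e in events if not is_ui_updating(e)]
    return ui_events + non_ui_events      # UI work first, to avoid UI mismatches at launch

def run_launch(events, is_ui_updating, cached_flow=None):
    flow = cached_flow or build_execution_flow(events, is_ui_updating)
    for event in flow:
        event()                           # execute each event callable in prioritized order
    return flow                           # store and replay on the next launch
```
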
  • Publication number: 20230367808
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from the server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 16, 2023
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
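
A hedged sketch of the query step described above: attribute analytics are kept as key-value records on the camera, and a server query with attribute constraints selects the matching slice of traffic patterns. The record layout is an assumption.

```python
def filter_attribute_analytics(records, query):
    """records: {track_id: {attribute: value}}; query: {attribute: required value}."""
    matches = {}
    for track_id, attributes in records.items():
        if all(attributes.get(key) == value for key, value in query.items()):
            matches[track_id] = attributes
    return matches

store = {"t1": {"vehicle_color": "red"}, "t2": {"clothing_color": "blue"}}
print(filter_attribute_analytics(store, {"vehicle_color": "red"}))   # -> {"t1": {...}}
```
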
  • Publication number: 20230350664
    Abstract: In some embodiments, a method comprises receiving, at a processor of an autonomous vehicle and from at least one sensor, sensor data distributed within a time window. A first event being a first event type occurring at a first time in the time window is identified by the processor using a software model based on the sensor data. At least one first attribute associated with the first event is extracted by the processor. A second event being the first event type occurring at a second time in the time window is identified by the processor based on the at least one first attribute. In response to determining that the second event is not yet recognized as being the first event type, a first label for the second event is generated by the processor.
    Type: Application
    Filed: June 19, 2023
    Publication date: November 2, 2023
    Inventors: Gael Gurvan COLAS, Mayank GUPTA, Anurag GANGULI, Timothy P. DALY, JR.
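
A hedged sketch of the auto-labeling idea above: a model flags a first event in the sensor-data window, its attributes are extracted, and a later segment that is not yet recognized but has similar attributes receives the same label. The helper callables and the window layout are placeholders.

```python
def auto_label(window, detect_event, extract_attributes, similar):
    first = detect_event(window)                      # model-identified first event
    if first is None:
        return []
    reference = extract_attributes(first)
    labels = [(first["t"], first["type"])]
    for segment in window["segments"]:
        if segment["t"] <= first["t"] or segment.get("type"):
            continue                                  # skip earlier or already-typed segments
        if similar(extract_attributes(segment), reference):
            labels.append((segment["t"], first["type"]))   # propagate the first event's label
    return labels
```
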
  • Publication number: 20230343114
    Abstract: A method and related system operations include obtaining a video stream with an image sensor of a camera device and detecting a plurality of target objects by executing a neural network model based on the video stream with a vision processor unit of the camera device. The method also includes generating a plurality of bounding boxes and determining a plurality of character sequences by, for each respective bounding box of the plurality of bounding boxes, performing a set of optical character recognition (OCR) operations to determine a respective character sequence of the plurality of character sequences. The method also includes updating a plurality of tracklets to indicate the plurality of bounding boxes and storing the plurality of tracklets in association with the plurality of character sequences in a memory of the camera device.
    Type: Application
    Filed: February 13, 2023
    Publication date: October 26, 2023
    Inventors: Mayank GUPTA, Suraj Arun VATHSA, Song CAO, Yi XU, Yuanyuan CHEN, Yunchao GONG
  • Publication number: 20230343113
    Abstract: Automatic license plate recognition occurs when a light sensor that continually captures video detects motion as a vehicle is driven through a gate. The light sensor detects the vehicle and license plate in the video stream captured by the light sensor. An algorithm associated with the video stream of the light sensor is trained to detect license plates. The light sensor starts executing the recognition algorithm when it detects motion. Recognition of characters in the license plate is based upon an aggregation of several captured video frames in which a license plate is detected.
    Type: Application
    Filed: December 12, 2022
    Publication date: October 26, 2023
    Inventors: Yi XU, Yunchao GONG, Suraj Arun VATHSA, Mayank GUPTA, Naresh NAGABUSHAN
  • Publication number: 20230336515
    Abstract: A method includes identifying, at an electronic device, a candidate name responsive to user input indicating a salutational trigger during composition of a body of a message of a messaging application. Identifying the candidate name includes at least one of: parsing a recipient-specific portion of a recipient message address of the message; parsing a display name associated with the recipient message address; parsing a content of the message body; parsing an attachment name associated with an attachment field of the message; identifying the candidate name from a contact record selected from a contacts database based on a recipient-specific portion of a recipient message address of the message; and parsing user-readable content of an application from which composition of the message was triggered. The method further includes facilitating composition of a recipient name in the body of the message based on the candidate name.
    Type: Application
    Filed: June 22, 2023
    Publication date: October 19, 2023
    Inventors: Amit Kumar Agrawal, Mayank Gupta, Rachit Mittal
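
A minimal sketch of two of the candidate-name sources listed above: derive a name from the recipient-specific part of the address or from the display name, then offer it after a salutational trigger such as "Hi ". The parsing rules below are illustrative assumptions.

```python
import re

def candidate_name(recipient_address, display_name=None):
    if display_name:
        return display_name.split()[0].capitalize()      # e.g. "Jane Doe" -> "Jane"
    local_part = recipient_address.split("@")[0]         # recipient-specific portion
    token = re.split(r"[._\-\d]+", local_part)[0]        # "jane.doe42" -> "jane"
    return token.capitalize() if token else None

def complete_salutation(body, recipient_address, display_name=None):
    name = candidate_name(recipient_address, display_name)
    if name and re.search(r"\b(Hi|Hello|Dear)\s*$", body):
        return body + name                               # "Hi " -> "Hi Jane"
    return body
```
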
  • Publication number: 20230319158
    Abstract: A machine implemented method and system, including: receiving at a near real-time processor module, one or more tenant-specific business objects from a message handler module; receiving at the near real-time processor module, contextual data related to the received one or more tenant-specific business objects from a platform analytics module; forming at the near real-time processor module, one or more events by applying one or more pre-defined analytic models to the received contextual data and the received one or more tenant-specific business objects; receiving at a message publisher module, one or more events from the near real-time processor module; and transmitting the received one or more events to one or more subscribers for the one or more events.
    Type: Application
    Filed: January 30, 2023
    Publication date: October 5, 2023
    Inventors: James Francis Walsh, Suhail Murtaza Khaki, Manu Sinha, Juan Manuel Caracoche, Artem Mygaiev, Francis Michael Borkin, Bhaskar Chaturvedi, Mayank Gupta, Biju Varghese
  • Publication number: 20230298359
    Abstract: A computer vision processor of a camera generates hyperzooms for persons or vehicles from image frames captured by the camera. The hyperzooms include a first hyperzoom associated with the persons or vehicles. The computer vision processor tracks traffic patterns of the persons or vehicles while obviating network usage by the camera by predicting positions of the persons or vehicles using a Kalman Filter from the first hyperzoom. The persons or vehicles are then detected in a second hyperzoom. The positions of the persons or vehicles are updated based on detecting the persons or vehicles in the second hyperzoom. The first hyperzoom is removed from the camera. Tracks of the persons or vehicles are generated based on the updated positions. The second hyperzoom is removed from the camera. Track metadata is generated from the tracks for storing in a key-value database located on a non-transitory computer-readable storage medium of the camera.
    Type: Application
    Filed: January 17, 2023
    Publication date: September 21, 2023
    Inventors: Yi XU, Mayank GUPTA, Xia YANG, Yuanyuan CHEN, Zixiao (Shawn) WANG, Qiang (Kevin) FU, Yunchao GONG, Naresh NAGABUSHAN
  • Patent number: 11763492
    Abstract: In some embodiments, a method includes receiving a first image and a second image from a stereo camera pair. The method includes selecting a first row of pixels from the rectified image and a set of rows of pixels from the second image and comparing the first row of pixels with each row of pixels from the set of rows of pixels to determine disparity values. The method includes determining a pair of rows of pixels having the first row of pixels and a second row of pixels from the set of rows of pixels. The pair of rows of pixels has an offset no greater than an offset between the first row of pixels and each row of pixels from remaining rows of pixels. The method includes adjusting, based on the offset, the relative rotational position between the first stereo camera and the second stereo camera.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: September 19, 2023
    Assignee: PlusAI, Inc.
    Inventors: Anurag Ganguli, Timothy P. Daly, Jr., Mayank Gupta, Wenbin Wang, Huan Yang Chang
  • Patent number: 11734343
    Abstract: A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) including an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from the server instance specifying one or more of the attributes. The attribute analytics are filtered using the one or more of the attributes to obtain a portion of the traffic patterns.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: August 22, 2023
    Assignee: Verkada Inc.
    Inventors: Yi Xu, Mayank Gupta, Xia Yang, Yuanyuan Chen, Zixiao (Shawn) Wang, Qiang (Kevin) Fu, Yunchao Gong, Naresh Nagabushan
  • Patent number: 11720343
    Abstract: In some embodiments, a method comprises receiving, at a processor of an autonomous vehicle and from at least one sensor, sensor data distributed within a time window. A first event being a first event type occurring at a first time in the time window is identified by the processor using a software model based on the sensor data. At least one first attribute associated with the first event is extracted by the processor. A second event being the first event type occurring at a second time in the time window is identified by the processor based on the at least one first attribute. In response to determining that the second event is not yet recognized as being the first event type, a first label for the second event is generated by the processor.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: August 8, 2023
    Assignee: PlusAI, Inc.
    Inventors: Gael Gurvan Colas, Mayank Gupta, Anurag Ganguli, Timothy P. Daly, Jr.
  • Patent number: 11722453
    Abstract: A method includes identifying, at an electronic device, a candidate name responsive to user input indicating a salutational trigger during composition of a body of a message of a messaging application. Identifying the candidate name includes at least one of: parsing a recipient-specific portion of a recipient message address of the message; parsing a display name associated with the recipient message address; parsing a content of the message body; parsing an attachment name associated with an attachment field of the message; identifying the candidate name from a contact record selected from a contacts database based on a recipient-specific portion of a recipient message address of the message; and parsing user-readable content of an application from which composition of the message was triggered. The method further includes facilitating composition of a recipient name in the body of the message based on the candidate name.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: August 8, 2023
    Assignee: GOOGLE TECHNOLOGY HOLDINGS LLC
    Inventors: Amit Kumar Agrawal, Mayank Gupta, Rachit Mittal