Patents by Inventor Preeti Pillai

Preeti Pillai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230086009
    Abstract: Embodiments are generally directed to generating and providing a normalized representation in a three-dimensional (3D) model.
    Type: Application
    Filed: September 23, 2022
    Publication date: March 23, 2023
    Inventors: Luke Andrew SCHOENFELDER, Saayuj DHANAK, Dhruva RAJENDRA, Ivan Almaral SOLE, Preeti PILLAI
  • Patent number: 10586118
    Abstract: A method receives situation data including images from vehicles; clusters, into an image cluster, the images included in the situation data of vehicle(s) located in a geographic region from among the vehicles; locates related situation object(s) in image(s) of the image cluster; matches images from different vehicles in the image cluster, the matched images having corresponding feature(s) of the related situation object(s); determines three-dimensional (3D) sensor coordinates of the related situation object(s) relative to a sensor position of a target vehicle associated with at least one matched image, using the corresponding feature(s) of the related situation object(s) in the matched images; converts the 3D sensor coordinates of the related situation object(s) to geolocation coordinates of the related situation object(s) using geolocation data of the different vehicles associated with the matched images; and determines a coverage area of a traffic situation based on the geolocation coordinates of the related situation object(s).
    Type: Grant
    Filed: January 13, 2018
    Date of Patent: March 10, 2020
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Rui Guo, Preeti Pillai, Kentaro Oguchi
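
    The final two steps of the method above, converting object coordinates from a vehicle-relative sensor frame to geolocation coordinates and deriving a coverage area, can be sketched roughly as follows. This is a minimal illustration only: the function names, the flat-earth approximation, and the use of a bounding box as the coverage area are assumptions, not details taken from the patent.

    ```python
    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

    def sensor_offset_to_geolocation(veh_lat, veh_lon, east_m, north_m):
        """Convert an object's (east, north) offset in meters, measured
        relative to a vehicle's sensor position, into latitude/longitude
        using a flat-earth approximation (adequate over short ranges)."""
        dlat = math.degrees(north_m / EARTH_RADIUS_M)
        dlon = math.degrees(
            east_m / (EARTH_RADIUS_M * math.cos(math.radians(veh_lat))))
        return veh_lat + dlat, veh_lon + dlon

    def coverage_bounds(points):
        """Axis-aligned bounding box of object geolocations, used here as
        a crude stand-in for the traffic situation's coverage area."""
        lats = [p[0] for p in points]
        lons = [p[1] for p in points]
        return (min(lats), min(lons)), (max(lats), max(lons))
    ```

    In practice each contributing vehicle's own geolocation anchors the conversion for the objects matched in its images, and the pooled geolocations then bound the situation.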
  • Publication number: 20190220678
    Abstract: A method receives situation data including images from vehicles; clusters, into an image cluster, the images included in the situation data of vehicle(s) located in a geographic region from among the vehicles; locates related situation object(s) in image(s) of the image cluster; matches images from different vehicles in the image cluster, the matched images having corresponding feature(s) of the related situation object(s); determines three-dimensional (3D) sensor coordinates of the related situation object(s) relative to a sensor position of a target vehicle associated with at least one matched image, using the corresponding feature(s) of the related situation object(s) in the matched images; converts the 3D sensor coordinates of the related situation object(s) to geolocation coordinates of the related situation object(s) using geolocation data of the different vehicles associated with the matched images; and determines a coverage area of a traffic situation based on the geolocation coordinates of the related situation object(s).
    Type: Application
    Filed: January 13, 2018
    Publication date: July 18, 2019
    Inventors: Rui Guo, Preeti Pillai, Kentaro Oguchi
  • Patent number: 10043084
    Abstract: In an example embodiment, a computer-implemented method receives image data from one or more sensors of a moving platform and detects one or more objects from the image data. The one or more objects potentially represent extremities of a user associated with the moving platform. The method processes the one or more objects using two or more context processors and context data retrieved from a context database. The processing produces at least two confidence values for each of the one or more objects. The method filters at least one of the one or more objects from consideration based on the confidence value of each of the one or more objects.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: August 7, 2018
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Tian Zhou, Preeti Pillai, Veeraganesh Yalla
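
    The confidence-based filtering described in the abstract above can be sketched as follows. Everything here is an assumed illustration: the two toy context processors, the mean-combination rule, and the 0.5 threshold are stand-ins for whatever context data and logic the patented method actually uses.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Detection:
        label: str
        bbox: tuple                 # (x, y, w, h) in image coordinates
        confidences: list = field(default_factory=list)

    def size_context(det):
        # Hypothetical processor: small boxes are more plausible extremities.
        _, _, w, h = det.bbox
        return 1.0 if w * h < 10_000 else 0.3

    def position_context(det):
        # Hypothetical processor: objects near the left image edge
        # (e.g. an open window region) score higher.
        x, _, _, _ = det.bbox
        return 0.9 if x < 100 else 0.4

    def filter_detections(dets, processors, threshold=0.5):
        """Run each detection through every context processor, producing
        one confidence value per processor, then drop detections whose
        combined (mean) confidence falls below the threshold."""
        kept = []
        for det in dets:
            det.confidences = [p(det) for p in processors]
            if sum(det.confidences) / len(det.confidences) >= threshold:
                kept.append(det)
        return kept
    ```

    Running two or more processors per object matches the abstract's requirement of at least two confidence values per candidate.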
  • Publication number: 20170344838
    Abstract: In an example embodiment, a computer-implemented method receives image data from one or more sensors of a moving platform and detects one or more objects from the image data. The one or more objects potentially represent extremities of a user associated with the moving platform. The method processes the one or more objects using two or more context processors and context data retrieved from a context database. The processing produces at least two confidence values for each of the one or more objects. The method filters at least one of the one or more objects from consideration based on the confidence value of each of the one or more objects.
    Type: Application
    Filed: May 27, 2016
    Publication date: November 30, 2017
    Inventors: Tian Zhou, Preeti Pillai, Veeraganesh Yalla
  • Patent number: 9129161
    Abstract: The disclosure describes novel technology for inferring scenes from images. In one example, the technology includes a system that can determine partition regions from one or more factors that are independent of the image data, for an image depicting a scene; receive image data including pixels forming the image; classify pixels of the image into one or more pixel types based on one or more pixel-level features; determine, for each partition region, a set of pixel characteristic data describing a portion of the image included in the partition region based on the one or more pixel types of pixels in the partition region; and classify a scene of the image based on the set of pixel characteristic data of each of the partition regions.
    Type: Grant
    Filed: February 3, 2014
    Date of Patent: September 8, 2015
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: John Mark Agosta, Preeti Pillai, Kentaro Oguchi, Ganesh Yalla
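
    The pipeline in the abstract above — image-independent partition regions, pixel-type classification, per-region pixel characteristics, then a scene decision — can be sketched as below. The dominant-channel pixel classifier, horizontal-band partitions, and single-rule scene classifier are toy assumptions chosen for brevity; the patented system is not limited to them.

    ```python
    import numpy as np

    PIXEL_TYPES = ("sky", "vegetation", "other")

    def classify_pixels(img):
        """Assign each pixel a type index from its dominant RGB channel:
        blue-dominant -> sky, green-dominant -> vegetation, else other."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        types = np.full(img.shape[:2], 2)      # default: other
        types[(b > r) & (b > g)] = 0           # sky
        types[(g > r) & (g >= b)] = 1          # vegetation
        return types

    def region_histograms(types, n_bands=3):
        """Split the image into horizontal bands (partition regions chosen
        independently of the image data) and compute each band's
        pixel-type distribution."""
        bands = np.array_split(types, n_bands, axis=0)
        return [np.bincount(b.ravel(), minlength=len(PIXEL_TYPES)) / b.size
                for b in bands]

    def classify_scene(hists):
        """Toy scene rule: a mostly-sky top band suggests an outdoor scene."""
        return "outdoor" if hists[0][0] > 0.5 else "indoor"
    ```

    Fixing the partition regions independently of the image keeps the per-region feature vector comparable across frames, which is what lets the final classifier operate on the region histograms alone.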
  • Publication number: 20140355879
    Abstract: The disclosure describes novel technology for inferring scenes from images. In one example, the technology includes a system that can determine partition regions from one or more factors that are independent of the image data, for an image depicting a scene; receive image data including pixels forming the image; classify pixels of the image into one or more pixel types based on one or more pixel-level features; determine, for each partition region, a set of pixel characteristic data describing a portion of the image included in the partition region based on the one or more pixel types of pixels in the partition region; and classify a scene of the image based on the set of pixel characteristic data of each of the partition regions.
    Type: Application
    Filed: February 3, 2014
    Publication date: December 4, 2014
    Inventors: John Mark Agosta, Preeti Pillai, Kentaro Oguchi, Ganesh Yalla