Patents by Inventor Veeraganesh Yalla

Veeraganesh Yalla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10970561
    Abstract: The disclosure includes a method that receives a real-time image of a road from a camera sensor communicatively coupled to an onboard computer of a vehicle. The method includes dividing the real-time image into superpixels. The method includes merging the superpixels to form superpixel regions. The method includes generating prior maps from a dataset of road scene images. The method includes drawing a set of bounding boxes where each bounding box surrounds one of the superpixel regions. The method includes comparing the bounding boxes in the set of bounding boxes to a road prior map to identify a road region in the real-time image. The method includes pruning bounding boxes from the set of bounding boxes to reduce the set to remaining bounding boxes. The method may include using a categorization module that identifies the presence of a road scene object in the remaining bounding boxes.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: April 6, 2021
    Inventors: Preeti Jayagopi Pillai, Veeraganesh Yalla, Kentaro Oguchi, Hirokazu Nomoto
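    A minimal Python sketch of the bounding-box pruning step described in the abstract above, assuming the merged superpixel regions are already available as an integer label image; the prior map values, the 0.5 overlap threshold, and the helper names are illustrative rather than taken from the patent.

        import numpy as np

        def boxes_from_regions(labels):
            """Return {region_id: (row0, col0, row1, col1)} bounding boxes."""
            boxes = {}
            for rid in np.unique(labels):
                rows, cols = np.nonzero(labels == rid)
                boxes[rid] = (rows.min(), cols.min(), rows.max(), cols.max())
            return boxes

        def prune_boxes(boxes, road_prior, min_overlap=0.5):
            """Keep boxes whose mean road-prior probability clears the threshold."""
            kept = {}
            for rid, (r0, c0, r1, c1) in boxes.items():
                if road_prior[r0:r1 + 1, c0:c1 + 1].mean() >= min_overlap:
                    kept[rid] = (r0, c0, r1, c1)
            return kept

        # Toy example: two superpixel regions, with a prior map in which only
        # the lower half of the frame is likely to be road.
        labels = np.zeros((6, 6), dtype=int)
        labels[3:, :] = 1
        road_prior = np.zeros((6, 6))
        road_prior[3:, :] = 0.9
        print(prune_boxes(boxes_from_regions(labels), road_prior))  # keeps region 1 only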
  • Patent number: 10611379
    Abstract: By way of example, the technology disclosed by this document is capable of receiving signal data from one or more sensors; inputting the signal data into an input layer of a deep neural network (DNN), the DNN including one or more layers; generating, using the one or more layers of the DNN, one or more spatial representations of the signal data; generating, using one or more hierarchical temporal memories (HTMs) respectively associated with the one or more layers of the DNN, one or more temporal predictions by the DNN based on the one or more spatial representations; and generating an anticipation of a future outcome by recognizing a temporal pattern based on the one or more temporal predictions.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: April 7, 2020
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Oluwatobi Olabiyi, Veeraganesh Yalla, Eric Martinson
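    A toy Python sketch of the spatial-plus-temporal pairing in the abstract above, with a simple first-order transition model standing in for a real hierarchical temporal memory and a mean-pooling function standing in for a DNN layer; every name here is invented for illustration.

        from collections import defaultdict

        class ToyTemporalMemory:
            """Learns first-order transitions between discrete spatial codes."""
            def __init__(self):
                self.counts = defaultdict(lambda: defaultdict(int))
                self.prev = None

            def observe(self, code):
                if self.prev is not None:
                    self.counts[self.prev][code] += 1
                self.prev = code

            def predict(self):
                """Most frequent successor of the last observed code, if any."""
                successors = self.counts.get(self.prev)
                return max(successors, key=successors.get) if successors else None

        def spatial_code(signal):
            # Stand-in for a DNN layer: bucket the mean sensor value.
            return round(sum(signal) / len(signal))

        tm = ToyTemporalMemory()
        for frame in ([1, 2, 3], [4, 5, 6], [1, 2, 3], [4, 5, 6], [1, 2, 3]):
            tm.observe(spatial_code(frame))
        print(tm.predict())  # anticipates the next code in the learned pattern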
  • Patent number: 10438076
    Abstract: The system includes a 3D biometric image sensor and processing module operable to generate a 3D surface map of a biometric object, wherein the 3D surface map includes a plurality of 3D coordinates. The system performs one or more anti-spoofing techniques to detect a fake biometric.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: October 8, 2019
    Assignee: Flashscan3D, LLC
    Inventors: Michael Spencer Troy, Raymond Charles Daley, Veeraganesh Yalla
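    One plausible anti-spoofing check, sketched in Python under the assumption that the 3D surface map is an (N, 3) array of coordinates: a flat spoof such as a printed image should be nearly planar, so a plane fit with tiny residuals is flagged. The planarity test and its threshold are illustrative, not the patented techniques.

        import numpy as np

        def is_fake(points_xyz, min_relief=0.5):
            """Flag a surface map whose depth relief is too small to be live."""
            xy1 = np.column_stack([points_xyz[:, :2], np.ones(len(points_xyz))])
            coeffs, *_ = np.linalg.lstsq(xy1, points_xyz[:, 2], rcond=None)
            residual = points_xyz[:, 2] - xy1 @ coeffs  # distance from best-fit plane
            return residual.std() < min_relief

        rng = np.random.default_rng(0)
        flat = np.column_stack([rng.random((100, 2)) * 10, np.zeros(100)])
        curved = flat.copy()
        curved[:, 2] = np.sin(curved[:, 0])    # a live surface has relief
        print(is_fake(flat), is_fake(curved))  # True False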
  • Patent number: 10043084
    Abstract: In an example embodiment, a computer-implemented method receives image data from one or more sensors of a moving platform and detects one or more objects from the image data. The one or more objects potentially represent extremities of a user associated with the moving platform. The method processes the one or more objects using two or more context processors and context data retrieved from a context database. The processing produces at least two confidence values for each of the one or more objects. The method filters at least one of the one or more objects from consideration based on the confidence values of each of the one or more objects.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: August 7, 2018
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Tian Zhou, Preeti Pillai, Veeraganesh Yalla
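    A minimal sketch of the multi-context confidence filtering described above: each detected object is scored by two context processors and dropped when the combined confidence falls below a threshold. The processors, object features, and threshold are hypothetical.

        def spatial_context(obj):
            # e.g., an extremity detected near a window region is more plausible
            return 0.9 if obj["near_window"] else 0.2

        def motion_context(obj):
            # e.g., a real extremity should move consistently with the platform
            return 0.8 if obj["consistent_motion"] else 0.3

        def filter_detections(objects, processors, threshold=0.5):
            kept = []
            for obj in objects:
                scores = [p(obj) for p in processors]
                if sum(scores) / len(scores) >= threshold:
                    kept.append(obj)
            return kept

        detections = [
            {"id": 1, "near_window": True, "consistent_motion": True},
            {"id": 2, "near_window": False, "consistent_motion": False},
        ]
        print(filter_detections(detections, [spatial_context, motion_context]))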
  • Publication number: 20180173969
    Abstract: The disclosure includes a method that receives a real-time image of a road from a camera sensor communicatively coupled to an onboard computer of a vehicle. The method includes dividing the real-time image into superpixels. The method includes merging the superpixels to form superpixel regions. The method includes generating prior maps from a dataset of road scene images. The method includes drawing a set of bounding boxes where each bounding box surrounds one of the superpixel regions. The method includes comparing the bounding boxes in the set of bounding boxes to a road prior map to identify a road region in the real-time image. The method includes pruning bounding boxes from the set of bounding boxes to reduce the set to remaining bounding boxes. The method may include using a categorization module that identifies the presence of a road scene object in the remaining bounding boxes.
    Type: Application
    Filed: January 30, 2018
    Publication date: June 21, 2018
    Inventors: Preeti Jayagopi Pillai, Veeraganesh Yalla, Kentaro Oguchi, Hirokazu Nomoto
  • Patent number: 9916508
    Abstract: The disclosure includes a method that receives a real-time image of a road from a camera sensor communicatively coupled to an onboard computer of a vehicle. The method includes dividing the real-time image into superpixels. The method includes merging the superpixels to form superpixel regions. The method includes generating prior maps from a dataset of road scene images. The method includes drawing a set of bounding boxes where each bounding box surrounds one of the superpixel regions. The method includes comparing the bounding boxes in the set of bounding boxes to a road prior map to identify a road region in the real-time image. The method includes pruning bounding boxes from the set of bounding boxes to reduce the set to remaining bounding boxes. The method may include using a categorization module that identifies the presence of a road scene object in the remaining bounding boxes.
    Type: Grant
    Filed: March 12, 2015
    Date of Patent: March 13, 2018
    Inventors: Preeti Jayagopi Pillai, Veeraganesh Yalla, Kentaro Oguchi, Hirokazu Nomoto
  • Publication number: 20180053093
    Abstract: By way of example, the technology disclosed by this document is capable of receiving signal data from one or more sensors; inputting the signal data into an input layer of a deep neural network (DNN), the DNN including one or more layers; generating, using the one or more layers of the DNN, one or more spatial representations of the signal data; generating, using one or more hierarchical temporal memories (HTMs) respectively associated with the one or more layers of the DNN, one or more temporal predictions by the DNN based on the one or more spatial representations; and generating an anticipation of a future outcome by recognizing a temporal pattern based on the one or more temporal predictions.
    Type: Application
    Filed: August 16, 2016
    Publication date: February 22, 2018
    Inventors: Oluwatobi Olabiyi, Veeraganesh Yalla, Eric Martinson
  • Publication number: 20170344838
    Abstract: In an example embodiment, a computer-implemented method receives image data from one or more sensors of a moving platform and detects one or more objects from the image data. The one or more objects potentially represent extremities of a user associated with the moving platform. The method processes the one or more objects using two or more context processors and context data retrieved from a context database. The processing produces at least two confidence values for each of the one or more objects. The method filters at least one of the one or more objects from consideration based on the confidence values of each of the one or more objects.
    Type: Application
    Filed: May 27, 2016
    Publication date: November 30, 2017
    Inventors: Tian Zhou, Preeti Pillai, Veeraganesh Yalla
  • Patent number: 9805276
    Abstract: In an example embodiment, a computer-implemented method is disclosed that generates a spectral signature describing one or more dynamic objects and a scene layout of a current road scene; identifies, from among one or more scene clusters included in a familiarity graph associated with a user, a road scene cluster corresponding to the current road scene; determines a position of the spectral signature relative to other spectral signatures comprising the identified road scene cluster; and generates a familiarity index estimating familiarity of the user with the current road scene based on the position of the spectral signature. The method can further include determining an assistance level based on the familiarity index of the user; and providing one or more of an auditory instruction, a visual instruction, and a tactile instruction to the user via one or more output devices of a vehicle at the determined assistance level.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: October 31, 2017
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Preeti J. Pillai, Veeraganesh Yalla, Rahul Ravi Parundekar, Kentaro Oguchi
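    A compact sketch under assumed representations: the scene's spectral signature is treated as a feature vector, scene clusters as centroids of previously seen scenes, and familiarity falls off with distance to the nearest centroid. The index formula and assistance-level cutoffs are illustrative only.

        import numpy as np

        def familiarity_index(signature, cluster_centroids):
            dists = np.linalg.norm(cluster_centroids - signature, axis=1)
            return 1.0 / (1.0 + dists.min())  # 1.0 = maximally familiar

        def assistance_level(index):
            return "minimal" if index > 0.8 else "moderate" if index > 0.4 else "full"

        centroids = np.array([[0.1, 0.9, 0.3],   # e.g., a familiar commute scene
                              [0.7, 0.2, 0.8]])  # e.g., a known highway scene
        scene = np.array([0.12, 0.88, 0.31])     # close to the first cluster
        idx = familiarity_index(scene, centroids)
        print(round(idx, 3), assistance_level(idx))  # high familiarity, minimal assistance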
  • Patent number: 9792821
    Abstract: In an example embodiment, a computer-implemented method is disclosed that receives road scene data and vehicle operation data from one or more sensors associated with a first vehicle on a road segment; receives situation ontology data; automatically generates a semantic road scene description of the road segment using the road scene data, the vehicle operation data, and the situation ontology data; and transmits, via a communication network, the semantic road scene description to one or more other vehicles associated with the road segment. Automatically generating the semantic road scene description of the road segment can include determining lane-level activity information for each lane based on lane information and dynamic road object information and determining a lane-level spatial layout for each lane based on the lane information and the dynamic road object information.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: October 17, 2017
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Veeraganesh Yalla, Rahul Ravi Parundekar, Preeti J. Pillai, Kentaro Oguchi
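    A short sketch of assembling a lane-level semantic description from lane information and dynamic road objects, then serializing it for transmission to nearby vehicles; the schema and field names are invented and do not reflect the patent's situation ontology.

        import json
        from collections import defaultdict

        def describe_road_segment(segment_id, lanes, objects):
            per_lane = defaultdict(list)
            for obj in objects:
                per_lane[obj["lane"]].append(obj)
            description = {"segment": segment_id, "lanes": []}
            for lane in lanes:
                occupants = per_lane.get(lane["id"], [])
                description["lanes"].append({
                    "id": lane["id"],
                    "activity": "congested" if len(occupants) > 2 else "flowing",
                    "layout": [o["type"] for o in occupants],  # lane-level spatial layout
                })
            return json.dumps(description)

        lanes = [{"id": "L1"}, {"id": "L2"}]
        objects = [{"lane": "L1", "type": "car"}, {"lane": "L1", "type": "truck"}]
        print(describe_road_segment("I-280:14.2", lanes, objects))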
  • Publication number: 20170286782
    Abstract: In an example embodiment, a computer-implemented method is disclosed that generates a spectral signature describing one or more dynamic objects and a scene layout of a current road scene; identifies, from among one or more scene clusters included in a familiarity graph associated with a user, a road scene cluster corresponding to the current road scene; determine a position of the spectral signature relative to other spectral signatures comprising the identified road scene cluster; and generates a familiarity index estimating familiarity of the user with the current road scene based on the position of the spectral signature. The method can further include determining an assistance level based on the familiarity index of the user; and providing one or more of an auditory instruction, a visual instruction, and a tactile instruction to the user via one or more output devices of a vehicle at the determined assistance level.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Preeti J. Pillai, Veeraganesh Yalla, Rahul Ravi Parundekar, Kentaro Oguchi
  • Publication number: 20170278402
    Abstract: In an example embodiment, a computer-implemented method is disclosed that receives road scene data and vehicle operation data from one or more sensors associated with a first vehicle on a road segment; receives situation ontology data; automatically generates a semantic road scene description of the road segment using the road scene data, the vehicle operation data, and the situation ontology data; and transmits, via a communication network, the semantic road scene description to one or more other vehicles associated with the road segment. Automatically generating the semantic road scene description of the road segment can include determining lane-level activity information for each lane based on lane information and dynamic road object information and determining a lane-level spatial layout for each lane based on the lane information and the dynamic road object information.
    Type: Application
    Filed: March 25, 2016
    Publication date: September 28, 2017
    Inventors: Veeraganesh Yalla, Rahul Ravi Parundekar, Preeti J. Pillai, Kentaro Oguchi
  • Patent number: 9694496
    Abstract: The disclosure includes methods for determining a current location for a user in an environment; detecting obstacles within the environment; estimating one or more physical capabilities of the user based on an electronic health record (EHR) associated with the user; generating, with a processor-based device that is programmed to perform the generating, instructions for a robot to perform a task based on the obstacles within the environment and one or more physical capabilities of the user; and instructing the robot to perform the task.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: July 4, 2017
    Inventors: Eric Martinson, Emrah Akin Sisbot, Veeraganesh Yalla, Kentaro Oguchi, Yusuke Nakano
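    An illustrative sketch of the planning step only: pick a hand-off point for the robot's task that avoids known obstacles and stays within a reach limit estimated from the user's EHR. The candidate points, reach figure, and scoring are assumptions.

        import math

        def plan_handoff(user_pos, reach_m, obstacles, candidates):
            """Return the closest obstacle-free point the user can reach."""
            feasible = [
                p for p in candidates
                if p not in obstacles and math.dist(p, user_pos) <= reach_m
            ]
            return min(feasible, key=lambda p: math.dist(p, user_pos), default=None)

        user = (0.0, 0.0)
        reach = 0.8                # e.g., reduced reach inferred from the EHR
        obstacles = {(0.5, 0.0)}   # detected within the environment
        candidates = [(0.5, 0.0), (0.0, 0.6), (1.5, 0.0)]
        print(plan_handoff(user, reach, obstacles, candidates))  # (0.0, 0.6)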
  • Patent number: 9686451
    Abstract: The disclosure includes a system and method for determining a real-time driving difficulty category. The method may include determining image feature vector data based on one or more features depicted in a real-time image of a road scene. The image feature vector data may describe an image feature vector for an edited version of the real-time image. The method may include determining offline road map data for the road scene, which includes a static label for a road included in the road scene and offline road information describing a regulatory speed limit for the road. The method may include selecting, based on the static label, a classifier for analyzing the image feature vector. The method may include executing the selected classifier to determine a real-time driving difficulty category describing the difficulty for a user of the client device to drive in the road scene.
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: June 20, 2017
    Inventors: Preeti Jayagopi Pillai, Veeraganesh Yalla, Kentaro Oguchi, Hirokazu Nomoto
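    A minimal sketch of the classifier-selection step, with invented road labels, features, and decision rules: the static label from the offline road map picks which classifier runs on the image feature vector.

        def highway_classifier(features):
            return "hard" if features["traffic_density"] > 0.7 else "easy"

        def residential_classifier(features):
            return "hard" if features["pedestrian_count"] > 3 else "moderate"

        CLASSIFIERS = {
            "highway": highway_classifier,
            "residential": residential_classifier,
        }

        def difficulty(static_label, feature_vector):
            classifier = CLASSIFIERS[static_label]  # selected by road type
            return classifier(feature_vector)

        print(difficulty("highway", {"traffic_density": 0.9}))     # hard
        print(difficulty("residential", {"pedestrian_count": 1}))  # moderate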
  • Patent number: 9542626
    Abstract: By way of example, the technology disclosed by this document receives image data; extracts a depth image and a color image from the image data; creates a mask image by segmenting the depth image; determines a first likelihood score from the depth image and the mask image using a layered classifier; determines a second likelihood score from the color image and the mask image using a deep convolutional neural network; and determines a class of at least a portion of the image data based on the first likelihood score and the second likelihood score. Further, the technology can pre-filter the mask image using the layered classifier and then use the pre-filtered mask image and the color image to calculate a second likelihood score using the deep convolutional neural network to speed up processing.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: January 10, 2017
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventors: Eric Martinson, Veeraganesh Yalla
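    A toy sketch of the two-stage fusion with stand-in scorers: a cheap depth-based score pre-filters candidates so the expensive color network only runs when the depth score is inconclusive. The scores, cutoffs, and class names are placeholders, not the patented classifiers.

        def depth_score(mask_mean_depth):
            # stand-in for the layered classifier on the depth image + mask
            return 0.9 if 0.5 < mask_mean_depth < 2.0 else 0.1

        def color_score(_color_patch):
            # stand-in for the deep convolutional neural network on the color image
            return 0.8

        def classify(mask_mean_depth, color_patch, lo=0.2, hi=0.85):
            s1 = depth_score(mask_mean_depth)
            if s1 <= lo:
                return "background", s1  # pre-filtered: the CNN never runs
            if s1 >= hi:
                return "person", s1      # decisive: the CNN is skipped
            s2 = color_score(color_patch)
            fused = (s1 + s2) / 2
            return ("person" if fused > 0.5 else "background"), fused

        print(classify(1.2, None))  # decisive depth score, CNN skipped
        print(classify(3.0, None))  # pre-filtered as background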
  • Publication number: 20160328622
    Abstract: The system includes a 3D biometric image sensor and processing module operable to generate a 3D surface map of a biometric object, wherein the 3D surface map includes a plurality of 3D coordinates. The system performs one or more anti-spoofing techniques to detect a fake biometric.
    Type: Application
    Filed: July 18, 2016
    Publication date: November 10, 2016
    Applicant: Flashscan3D, LLC
    Inventors: Michael Spencer Troy, Raymond Charles Daley, Veeraganesh Yalla
  • Publication number: 20160267331
    Abstract: The disclosure includes a method that receives a real-time image of a road from a camera sensor communicatively coupled to an onboard computer of a vehicle. The method includes dividing the real-time image into superpixels. The method includes merging the superpixels to form superpixel regions. The method includes generating prior maps from a dataset of road scene images. The method includes drawing a set of bounding boxes where each bounding box surrounds one of the superpixel regions. The method includes comparing the bounding boxes in the set of bounding boxes to a road prior map to identify a road region in the real-time image. The method includes pruning bounding boxes from the set of bounding boxes to reduce the set to remaining bounding boxes. The method may include using a categorization module that identifies the presence of a road scene object in the remaining bounding boxes.
    Type: Application
    Filed: March 12, 2015
    Publication date: September 15, 2016
    Inventors: Preeti Jayagopi Pillai, Veeraganesh Yalla, Kentaro Oguchi, Hirokazu Nomoto
  • Publication number: 20160250751
    Abstract: The disclosure includes methods for determining a current location for a user in an environment; detecting obstacles within the environment; estimating one or more physical capabilities of the user based on an electronic health record (EHR) associated with the user; generating, with a processor-based device that is programmed to perform the generating, instructions for a robot to perform a task based on the obstacles within the environment and one or more physical capabilities of the user; and instructing the robot to perform the task.
    Type: Application
    Filed: February 26, 2015
    Publication date: September 1, 2016
    Inventors: Eric Martinson, Emrah Akin Sisbot, Veeraganesh Yalla, Kentaro Oguchi, Yusuke Nakano
  • Patent number: 9409519
    Abstract: The disclosure includes a system and method for spatial information for a heads-up display. The system includes a processor and a memory storing instructions that, when executed, cause the system to: receive sensor data about an entity, assign the entity to a category, estimate a danger index for the entity based on vehicle data, category data, and a position of the entity, generate entity data that includes the danger index, identify a graphic that is a representation of the entity based on the entity data, determine a display modality for the graphic based on the danger index, and position the graphic to correspond to a user's eye frame.
    Type: Grant
    Filed: August 27, 2014
    Date of Patent: August 9, 2016
    Inventors: Emrah Akin Sisbot, Veeraganesh Yalla
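    An illustrative sketch, not the patented method: a danger index computed from the entity's category weight, distance, and closing speed, then mapped to a display modality for the heads-up display. All weights and thresholds are invented.

        CATEGORY_WEIGHT = {"pedestrian": 1.0, "cyclist": 0.9, "vehicle": 0.6}

        def danger_index(category, distance_m, closing_speed_mps):
            proximity = max(0.0, 1.0 - distance_m / 50.0)           # nearer = riskier
            closing = min(1.0, max(0.0, closing_speed_mps / 20.0))  # faster approach = riskier
            return CATEGORY_WEIGHT[category] * (0.6 * proximity + 0.4 * closing)

        def display_modality(index):
            if index > 0.6:
                return "flashing red outline"
            return "amber highlight" if index > 0.3 else "subtle gray marker"

        idx = danger_index("pedestrian", distance_m=10.0, closing_speed_mps=8.0)
        print(round(idx, 2), display_modality(idx))  # 0.64 flashing red outline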
  • Publication number: 20160207458
    Abstract: The disclosure includes a system and method for determining a real-time driving difficulty category. The method may include determining image feature vector data based on one or more features depicted in a real-time image of a road scene. The image feature vector data may describe an image feature vector for an edited version of the real-time image. The method may include determining offline road map data for the road scene, which includes a static label for a road included in the road scene and offline road information describing a regulatory speed limit for the road. The method may include selecting, based on the static label, a classifier for analyzing the image feature vector. The method may include executing the selected classifier to determine a real-time driving difficulty category describing the difficulty for a user of the client device to drive in the road scene.
    Type: Application
    Filed: January 21, 2015
    Publication date: July 21, 2016
    Inventors: Preeti Jayagopi Pillai, Veeraganesh Yalla, Kentaro Oguchi, Hirokazu Nomoto