Patents by Inventor Vidya Elangovan

Vidya Elangovan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10992860
    Abstract: Embodiments of the present disclosure are directed to providing a more accurate presentation of a vehicle's surroundings when the presentation combines images from multiple cameras. Cameras may capture the same portion of the environment. If an object is within view of at least two cameras, each camera produces a portion of an overall view, such as a 360 degree synthetic top-down view. The seam, angle, and/or geometry between any two images is dynamically determined, such as based on proximity (or lack thereof) to an object. As a result, the synthetic top-down view may present an image of an object more prominently and/or avoid having the seam between camera images fall on the image of the object, which may otherwise result in the image of the object being omitted from both camera images.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 27, 2021
    Assignee: NIO USA, Inc.
    Inventors: Anthony Tao Liang, Vidya Elangovan, Divya Ramakrishnan, Prashant Jain
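
To make the dynamic-seam idea in patent 10992860 above concrete, here is a minimal Python sketch of one way a seam angle between two overlapping cameras might be steered away from a detected object. The function name, angular representation, and margin are illustrative assumptions, not the patent's actual method.

```python
# Hypothetical sketch: pick a seam angle between two adjacent cameras in a
# synthetic top-down view so the seam avoids a nearby object. All names and
# thresholds are illustrative, not taken from the patent.

def choose_seam_angle(overlap_start, overlap_end, object_bearing=None, margin=5.0):
    """Return a seam angle (degrees) inside the two cameras' overlap region,
    steered away from a detected object's bearing if one is present."""
    default = (overlap_start + overlap_end) / 2.0   # midline seam
    if object_bearing is None or not (overlap_start <= object_bearing <= overlap_end):
        return default                              # no object in the overlap
    # Push the seam toward whichever edge of the overlap is farther from the object.
    if object_bearing - overlap_start > overlap_end - object_bearing:
        return overlap_start + margin               # object sits near the far edge
    return overlap_end - margin                     # object sits near the near edge

# Example: front and right cameras overlap between bearings 30° and 60°.
print(choose_seam_angle(30.0, 60.0))                        # 45.0 (no object)
print(choose_seam_angle(30.0, 60.0, object_bearing=50.0))   # 35.0 (seam moved off the object)
```
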
  • Patent number: 10982968
    Abstract: Embodiments of the present disclosure are directed to providing an Augmented Reality (AR) navigation display in a vehicle. More specifically, embodiments are directed to rendering AR indications of a navigation route over a camera video stream in perspective. According to one embodiment, visual tracking can be performed on features in the video data, and a camera pose, i.e., a matrix encapsulating position and orientation, can be determined for each frame of video based on both the visual tracking and navigation sensor data. These separately determined camera poses can then be merged, or fused, into a single camera pose that is more accurate and more stable, and that can then be used in rendering more realistic AR route indicators onto the video of the real-world route captured by the camera.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: April 20, 2021
    Assignee: NIO USA, Inc.
    Inventors: Vidya Elangovan, Prashant Jain, Anthony Tao Liang, Guan Wang
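
As a rough illustration of the pose fusion described in patent 10982968 above, the sketch below blends a visual-tracking pose with a navigation-sensor pose using a fixed-weight complementary blend; the weighting scheme and the (position, yaw) pose representation are simplifying assumptions, not the patent's actual formulation.

```python
import numpy as np

# Illustrative fusion of two per-frame camera pose estimates (visual tracking
# vs. navigation sensors) into a single, steadier pose.

def fuse_poses(visual_pose, sensor_pose, visual_weight=0.7):
    """Blend two poses given as ([x, y, z], yaw_degrees) tuples."""
    w = visual_weight
    pos = w * np.asarray(visual_pose[0]) + (1 - w) * np.asarray(sensor_pose[0])
    # Blend yaw on the unit circle so headings near ±180° do not wrap badly.
    v, s = np.radians(visual_pose[1]), np.radians(sensor_pose[1])
    yaw = np.degrees(np.arctan2(w * np.sin(v) + (1 - w) * np.sin(s),
                                w * np.cos(v) + (1 - w) * np.cos(s)))
    return pos, yaw

# Visual tracking says 178°, sensors say -179°; the blend stays near ±180°.
pos, yaw = fuse_poses(([10.0, 0.0, 1.5], 178.0), ([10.4, 0.1, 1.5], -179.0))
print(pos, round(float(yaw), 1))
```
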
  • Patent number: 10885712
    Abstract: Embodiments of the present disclosure are directed to an augmented reality based user's manual for a vehicle, implemented as an application on a mobile device, which allows the user to point a mobile phone, tablet, or augmented reality headset at any part of the vehicle interior or exterior and experience augmented annotations, overlays, popups, etc. displayed on images of real parts of the car captured by the user's mobile device. Embodiments provide for estimating the camera pose in six degrees of freedom based on the content of the captured image or video, and for using a neural network, trained on a dense sampling of a three-dimensional model of the car rendered with realistic textures, to identify and properly align the augmented reality presentation with the image of the vehicle being captured by the mobile device.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: January 5, 2021
    Assignee: NIO USA, Inc.
    Inventors: Vidya Elangovan, Prashant Jain, Anthony Tao Liang, Guan Wang
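
The retrieval intuition behind patent 10885712 above can be sketched without a trained network: densely sampled renders of a 3D car model, each labeled with the 6-DoF camera pose used to render it, are matched against a descriptor of the captured frame. The random descriptors below stand in for real image features and for the patent's neural network.

```python
import numpy as np

# Stand-in database: 500 rendered views of a car model, each with a 128-D
# descriptor and the 6-DoF pose (x, y, z, roll, pitch, yaw) it was rendered
# from. Real systems would use learned features; random data keeps this runnable.
rng = np.random.default_rng(0)
db_descriptors = rng.normal(size=(500, 128))
db_poses = rng.uniform(-1.0, 1.0, size=(500, 6))

def estimate_pose(frame_descriptor):
    """Return the pose of the rendered view whose descriptor is nearest."""
    distances = np.linalg.norm(db_descriptors - frame_descriptor, axis=1)
    return db_poses[np.argmin(distances)]

# A slightly noisy copy of view 42 should retrieve view 42's pose, which an
# AR layer could then use to align annotations with the captured image.
query = db_descriptors[42] + rng.normal(scale=0.01, size=128)
print(estimate_pose(query))
```
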
  • Publication number: 20200314333
    Abstract: Embodiments of the present disclosure are directed to providing a more accurate presentation of a vehicle's surroundings when the presentation combines images from multiple cameras. Cameras may capture the same portion of the environment. If an object is within view of at least two cameras, each camera produces a portion of an overall view, such as a 360 degree synthetic top-down view. The seam, angle, and/or geometry between any two images is dynamically determined, such as based on proximity (or lack thereof) to an object. As a result, the synthetic top-down view may present an image of an object more prominently and/or avoid having the seam between camera images fall on the image of the object, which may otherwise result in the image of the object being omitted from both camera images.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Inventors: Anthony Tao Liang, Vidya Elangovan, Divya Ramakrishnan, Prashant Jain
  • Publication number: 20190301886
    Abstract: Embodiments of the present disclosure are directed to providing an Augmented Reality (AR) navigation display in a vehicle. More specifically, embodiments are directed to rendering AR indications of a navigation route over a camera video stream in perspective. According to one embodiment, visual tracking can be performed on features in the video data, and a camera pose, i.e., a matrix encapsulating position and orientation, can be determined for each frame of video based on both the visual tracking and navigation sensor data. These separately determined camera poses can then be merged, or fused, into a single camera pose that is more accurate and more stable, and that can then be used in rendering more realistic AR route indicators onto the video of the real-world route captured by the camera.
    Type: Application
    Filed: March 29, 2018
    Publication date: October 3, 2019
    Inventors: Vidya Elangovan, Prashant Jain, Anthony Tao Liang, Guan Wang
  • Publication number: 20190019335
    Abstract: Embodiments of the present disclosure are directed to an augmented reality based user's manual for a vehicle, implemented as an application on a mobile device, which allows the user to point a mobile phone, tablet, or augmented reality headset at any part of the vehicle interior or exterior and experience augmented annotations, overlays, popups, etc. displayed on images of real parts of the car captured by the user's mobile device. Embodiments provide for estimating the camera pose in six degrees of freedom based on the content of the captured image or video, and for using a neural network, trained on a dense sampling of a three-dimensional model of the car rendered with realistic textures, to identify and properly align the augmented reality presentation with the image of the vehicle being captured by the mobile device.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 17, 2019
    Inventors: Vidya Elangovan, Prashant Jain, Anthony Tao Liang, Guan Wang
  • Patent number: 9473748
    Abstract: Video frames of a baseball game are analyzed to determine a track for the participants in the game, e.g., players, coaches and umpires, and to update a digital record of the game. The start and/or end of a half-inning can be indicated by factors such as: the participants leaving the dugout region and entering the playing field, including the outfield; a participant leaving the dugout region and entering the region of a base coach's box; a pitcher throwing pitches when no batter is present; and players on the playing field throwing the ball back and forth to one another. In one approach, the combination of detecting an event known to occur most often or always between half-innings, followed by detecting another event known to occur most often or always during a half-inning, can be used to signal that a half-inning has started.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: October 18, 2016
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Marlena Fecho, Timothy P. Heidmann
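
The two-stage cue in patent 9473748 above (a between-innings event followed by a during-play event) maps naturally onto a tiny state machine; the event names in this sketch are invented placeholders.

```python
# Toy state machine for the half-inning cue: an event that usually happens
# between half-innings, later followed by one that usually happens during
# play, signals that a new half-inning has started. Event labels are invented.

BETWEEN_INNING_EVENTS = {"players_exit_dugout", "warmup_pitches_no_batter"}
DURING_INNING_EVENTS = {"pitch_with_batter", "ball_in_play"}

def half_inning_starts(event_stream):
    """Yield indices at which a new half-inning is inferred to begin."""
    saw_between = False
    for i, event in enumerate(event_stream):
        if event in BETWEEN_INNING_EVENTS:
            saw_between = True
        elif event in DURING_INNING_EVENTS and saw_between:
            yield i             # between-innings cue, then in-play cue
            saw_between = False

events = ["pitch_with_batter", "warmup_pitches_no_batter",
          "players_exit_dugout", "pitch_with_batter", "ball_in_play"]
print(list(half_inning_starts(events)))   # [3]
```
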
  • Patent number: 9007463
    Abstract: Video frames of a baseball game are analyzed to determine a track for the participants in the game and to update a digital record of the game. The merging of participants in a video frame is resolved by associating the participants' tracks before and/or after the merging with a most likely participant role, such as a player, coach or umpire role. The role of one merged participant can be used to deduce the role of the other merged participant. In this way, the digital record can be completed even for the merged period. The role of a participant can be inferred, e.g., from the location of the participant relative to a base, a coach's box region, a pitcher's mound, a dugout, or a fielding position, or by determining that a participant is running along a path to a base or performing some other movement.
    Type: Grant
    Filed: December 22, 2010
    Date of Patent: April 14, 2015
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Marlena Fecho, Timothy P. Heidmann
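
A minimal rendering of the merge-resolution logic in patent 9007463 above: classify each merged track's location against known field regions, and let one confidently assigned role fix the other. The rectangular regions and coordinates are invented for illustration.

```python
# Role disambiguation for two merged tracks: classify each track's last known
# location against field regions, then deduce the second role from the first.
# Regions are axis-aligned boxes purely for illustration.

REGIONS = {
    "coach":  (85, 95, 20, 30),   # first-base coach's box (x0, x1, y0, y1)
    "umpire": (48, 52, -2, 2),    # home-plate area
}

def role_from_location(x, y):
    for role, (x0, x1, y0, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return role
    return "player"               # default for anyone on the open field

def resolve_merge(loc_a, loc_b, merged_roles=frozenset({"player", "coach"})):
    """If one merged participant's role is certain, the other participant
    must hold the remaining role from the merged pair."""
    role_a = role_from_location(*loc_a)
    if role_a in merged_roles:
        return role_a, next(iter(merged_roles - {role_a}))
    return role_a, role_from_location(*loc_b)

print(resolve_merge((90, 25), (88, 24)))   # ('coach', 'player')
```
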
  • Publication number: 20140125807
    Abstract: Video frames of a baseball game are analyzed to determine a track for the participants in the game, e.g., players, coaches and umpires, and to update a digital record of the game. The start and/or end of a half-inning can be indicated by factors such as: the participants leaving the dugout region and entering the playing field, including the outfield; a participant leaving the dugout region and entering the region of a base coach's box; a pitcher throwing pitches when no batter is present; and players on the playing field throwing the ball back and forth to one another. In one approach, the combination of detecting an event known to occur most often or always between half-innings, followed by detecting another event known to occur most often or always during a half-inning, can be used to signal that a half-inning has started.
    Type: Application
    Filed: January 9, 2014
    Publication date: May 8, 2014
    Inventors: Vidya Elangovan, Marlena Fecho, Timothy P. Heidmann
  • Patent number: 8659663
    Abstract: Video frames of a baseball game are analyzed to determine a track for the participants in the game, e.g., players, coaches and umpires, and to update a digital record of the game. The start and/or end of a half-inning can be indicated by factors such as: the participants leaving the dugout region and entering the playing field, including the outfield; a participant leaving the dugout region and entering the region of a base coach's box; a pitcher throwing pitches when no batter is present; and players on the playing field throwing the ball back and forth to one another. In one approach, the combination of detecting an event known to occur most often or always between half-innings, followed by detecting another event known to occur most often or always during a half-inning, can be used to signal that a half-inning has started.
    Type: Grant
    Filed: December 22, 2010
    Date of Patent: February 25, 2014
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Marlena Fecho, Timothy P. Heidmann
  • Patent number: 8558883
    Abstract: A video broadcast of a live event is enhanced by providing graphics in the video in real time to depict the fluid flow around a moving object in the event and to provide other informative graphics regarding aerodynamic forces on the object. A detailed flow field around the object is calculated before the event, on an offline basis, for different speeds of the object and different locations of other nearby objects. The fluid flow data is represented by baseline data and modification factors or adjustments which are based on the speed of the object and the locations of the other objects. During the event, the modification factors are applied to the baseline data to determine fluid flow in real time, as the event is captured on video. In an example implementation, the objects are race cars which transmit their location and/or speed to a processing facility which provides the video.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: October 15, 2013
    Assignee: Sportvision, Inc.
    Inventors: Richard H. Cavallaro, Alina Alt, Vidya Elangovan, James O. McGuffin, Timothy P. Heidmann, Reuben Halper
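
Patent 8558883's offline-baseline-plus-runtime-adjustment split might look like the sketch below, where a precomputed flow grid is scaled by the car's speed and damped when drafting. The grid, reference speed, and factors are invented placeholders with no aerodynamic validity.

```python
import numpy as np

# Precompute-then-adjust sketch: a baseline flow field solved offline at a
# reference speed is rescaled per frame by simple modification factors.
# Values here are placeholders, not real aerodynamics.

REF_SPEED = 180.0                        # mph assumed for the offline solve
baseline_flow = np.ones((20, 40, 2))     # (rows, cols, velocity-xy) grid

def realtime_flow(speed_mph, gap_to_leader_m=None):
    """Scale the offline field by current speed; damp it when drafting."""
    field = baseline_flow * (speed_mph / REF_SPEED)
    if gap_to_leader_m is not None and gap_to_leader_m < 20.0:
        # Crude drafting adjustment: less oncoming air close behind another car.
        field = field * (gap_to_leader_m / 20.0)
    return field

frame_field = realtime_flow(195.0, gap_to_leader_m=8.0)
print(frame_field[0, 0])   # per-cell velocity used to draw the flow graphic
```
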
  • Patent number: 8456527
    Abstract: An object is detected in images of a live event by storing and indexing templates based on representations of the object from previous images. For example, the object may be a vehicle which repeatedly traverses a course. A first set of images of the live event is captured when the object is at different locations in the live event. A representation of the object in each image is obtained, such as by image recognition techniques, and a corresponding template is stored. When the object again traverses the course, for each location, the stored template which is indexed to the location can be retrieved for use in detecting the object in a current image. The object's current location may be obtained from GPS data from the object, for instance, or from camera sensor data, e.g., pan, tilt and zoom, which indicates a direction in which the camera is pointed.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: June 4, 2013
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Richard H. Cavallaro, Timothy P. Heidmann, Marvin S. White, Kenneth A. Milnes
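
The location-indexed template store of patent 8456527 above reduces, in spirit, to a dictionary keyed by a quantized course position, as in this sketch; the 10 m cell size and the surrounding matching step are assumptions.

```python
import numpy as np

# Location-indexed appearance templates: store each observed patch of the car
# keyed by a quantized course position on lap one, then retrieve by reported
# (e.g. GPS) position on later laps to seed detection. Cell size is assumed.

template_store = {}

def location_key(x_m, y_m, cell_m=10.0):
    """Quantize a course position into the grid cell used as a dict key."""
    return (int(x_m // cell_m), int(y_m // cell_m))

def store_template(x_m, y_m, patch):
    template_store[location_key(x_m, y_m)] = patch

def retrieve_template(x_m, y_m):
    return template_store.get(location_key(x_m, y_m))

# First traversal: remember how the car looked at this spot.
store_template(123.0, 48.0, np.zeros((32, 48), dtype=np.uint8))
# Next traversal: a nearby position falls in the same 10 m cell.
print(retrieve_template(125.5, 44.1).shape)   # (32, 48)
```
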
  • Patent number: 8457392
    Abstract: A representation of an object in an image of a live event is obtained by determining a color profile of the object. The color profile may be determined from the image in real time and compared to stored color profiles to determine a best match. For example, the color profile of the representation of the object can be obtained by classifying color data of the representation of the object into different bins of a color space, in a histogram of color data. The stored color profiles may be indexed to object identifiers, object viewpoints, or object orientations. Color data which is common to different objects or to a background color may be excluded. Further, a template can be used as an additional aid in identifying the representation of the object. The template can include, e.g., a model of the object or pixel data of the object from a prior image.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: June 4, 2013
    Assignee: Sportvision, Inc.
    Inventors: Richard H. Cavallaro, Vidya Elangovan, Marvin S. White, Kenneth A. Milnes
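
One plausible reading of the color-profile matching in patent 8457392 above is a coarse hue histogram compared by intersection, sketched below; the bin count, the use of hue alone, and the synthetic pixel data are all simplifications.

```python
import numpy as np

# Color-profile matching sketch: bin an object's pixels into a coarse hue
# histogram and pick the stored profile with the largest intersection.

def color_profile(hues, bins=16):
    """Normalized histogram of hue values (0-360) for an object's pixels."""
    hist, _ = np.histogram(hues, bins=bins, range=(0, 360))
    return hist / max(hist.sum(), 1)

def best_match(profile, stored):
    """Return the stored object id whose profile intersects the most."""
    return max(stored, key=lambda k: np.minimum(profile, stored[k]).sum())

rng = np.random.default_rng(1)
stored = {
    "car_24": color_profile(rng.normal(220, 10, 500) % 360),   # mostly blue
    "car_48": color_profile(rng.normal(10, 10, 500) % 360),    # mostly red
}
observed = color_profile(rng.normal(218, 12, 300) % 360)       # blue-ish blob
print(best_match(observed, stored))    # car_24
```
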
  • Patent number: 8401304
    Abstract: A representation of an object in a live event is detected in an image of the event. A location of the object in the live event is translated to an estimated location in the image based on camera sensor and/or registration data. A search area is determined around the estimated location in the image. A direction of motion of the object in the image is also determined. A representation of the object is identified in the search area by detecting edges of the object, e.g., perpendicular to the direction of motion and parallel to the direction of motion, performing morphological processing, and matching against a model or other template of the object. Based on the position of the representation of the object, the camera sensor and/or registration data can be updated, and a graphic can be located in the image substantially in real time.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: March 19, 2013
    Assignee: Sportvision, Inc.
    Inventors: Richard H. Cavallaro, Vidya Elangovan
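
The edge-and-morphology pipeline described in patent 8401304 above could be roughed out as below with a synthetic frame: gradients inside a search window, morphological closing, and the densest connected component standing in for the matched object. The thresholds and the toy frame are illustrative.

```python
import numpy as np
from scipy import ndimage

# Edge + morphology detection sketch inside a search window derived from the
# camera model. The synthetic frame and all thresholds are illustrative.

frame = np.zeros((120, 160))
frame[50:70, 80:120] = 1.0            # bright car-like blob
search = frame[40:80, 60:140]         # search window around the estimate

gy, gx = np.gradient(search)
edges = (np.abs(gx) + np.abs(gy)) > 0.2                 # edges in both axes
filled = ndimage.binary_closing(edges, iterations=2)    # bridge small gaps
labels, n = ndimage.label(filled)
if n:
    # Take the largest connected component as the matched object.
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    best = 1 + int(np.argmax(sizes))
    ys, xs = np.nonzero(labels == best)
    # Map the component's centroid back into full-frame coordinates.
    print("object center in frame:", (ys.mean() + 40, xs.mean() + 60))
```
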
  • Patent number: 8385658
    Abstract: A representation of an object in an image of a live event is detected by matching potential representations of the object against multiple types of templates. For example, the templates can include monochrome data, chrominance and/or luminance data, pixel data of the object from an earlier image, e.g., as a video template, an edge and morphology based template, a model of the object, or a predetermined static texture which is based on an appearance of the object. A weighting function may also be used. In one possible approach, a first type of template is used in an initial search area, and a second type of template is used in a smaller region of the initial search area. Based on a position of the optimum representation of the object in the image, a graphic can be provided in the image, or sensor and/or registration data of a camera can be updated.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: February 26, 2013
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Richard H. Cavallaro, Marvin S. White, Kenneth A. Milnes
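
The coarse-to-fine use of two templates in patent 8385658 above is sketched here with plain sum-of-squared-differences matching: a downsampled template scores the wide search area, then the full-resolution template refines a small region around the coarse hit. SSD and the window sizes are illustrative choices.

```python
import numpy as np

# Coarse-to-fine matching: a cheap downsampled template searches widely,
# then a full-resolution template refines around the best coarse hit.

def ssd_match(area, template):
    """Return (row, col) minimizing the sum of squared differences."""
    th, tw = template.shape
    best, pos = np.inf, (0, 0)
    for r in range(area.shape[0] - th + 1):
        for c in range(area.shape[1] - tw + 1):
            score = np.sum((area[r:r + th, c:c + tw] - template) ** 2)
            if score < best:
                best, pos = score, (r, c)
    return pos

rng = np.random.default_rng(2)
scene = rng.normal(size=(60, 80))
target = scene[22:30, 40:52]                  # the object's true patch

coarse = target[::2, ::2]                     # cheap half-resolution template
r2, c2 = ssd_match(scene[::2, ::2], coarse)   # wide search, half resolution
r, c = 2 * r2, 2 * c2                         # back to full-resolution coords
region = scene[max(r - 4, 0):r + 12, max(c - 4, 0):c + 16]
print("coarse:", (r, c), "refined offset in region:", ssd_match(region, target))
```
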
  • Patent number: 8253799
    Abstract: An object is detected in images of a live event by storing and indexing camera registration-related data from previous images. For example, the object may be a vehicle which repeatedly traverses a course. A first set of images of the live event is captured when the object is at different locations in the live event. The camera registration-related data for each image is obtained and stored. When the object again traverses the course, for each location, the stored camera registration-related data which is indexed to the location can be retrieved for use in estimating a position of a representation of the object in a current image, such as by defining a search area in the image. An actual position of the object in the image is determined, in response to which the camera registration-related data may be updated, such as for use in a subsequent traversal of the course.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: August 28, 2012
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Kenneth A. Milnes, Timothy P. Heidmann
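
In the spirit of patent 8253799 above, stored camera registration can be keyed by course position and replayed on the next lap to center a search window; the data structures and the pan/tilt-to-pixel projection below are invented for the sketch.

```python
# Location-indexed camera registration: data recorded on one traversal is
# keyed by course position and replayed on the next lap to predict where the
# car will appear. The projection to pixels is a made-up placeholder.

registration_store = {}          # course cell -> (pan, tilt, zoom) that worked

def cell(course_pos_m, cell_m=25.0):
    return int(course_pos_m // cell_m)

def record(course_pos_m, pan, tilt, zoom):
    registration_store[cell(course_pos_m)] = (pan, tilt, zoom)

def predict_search_window(course_pos_m, half_size=40):
    """Use the stored registration to center a search box in a 1920x1080 image."""
    pan, tilt, zoom = registration_store[cell(course_pos_m)]
    cx = int(960 + pan * 20 * zoom)     # hypothetical pan/zoom-to-pixel mapping
    cy = int(540 - tilt * 20 * zoom)
    return (cx - half_size, cy - half_size, cx + half_size, cy + half_size)

record(812.0, pan=3.5, tilt=-1.0, zoom=2.0)   # lap 1: store what worked here
print(predict_search_window(805.0))           # lap 2: same 25 m course cell
```
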
  • Publication number: 20120162435
    Abstract: Video frames of a baseball game are analyzed to determine a track for the participants in the game, e.g., players, coaches and umpires, and to update a digital record of the game. The start and/or end of a half-inning can be indicated by factors such as: the participants leaving the dugout region and entering the playing field, including the outfield; a participant leaving the dugout region and entering the region of a base coach's box; a pitcher throwing pitches when no batter is present; and players on the playing field throwing the ball back and forth to one another. In one approach, the combination of detecting an event known to occur most often or always between half-innings, followed by detecting another event known to occur most often or always during a half-inning, can be used to signal that a half-inning has started.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 28, 2012
    Inventors: Vidya Elangovan, Marlena Fecho, Timothy P. Heidmann
  • Publication number: 20120162434
    Abstract: Video frames of a baseball game are analyzed to determine a track for the participants in the game and to update a digital record of the game. The merging of participants in a video frame is resolved by associating the participants' tracks before and/or after the merging with a most likely participant role, such as a player, coach or umpire role. The role of one merged participant can be used to deduce the role of the other merged participant. In this way, the digital record can be completed even for the merged period. The role of a participant can be inferred, e.g., from the location of the participant relative to a base, a coach's box region, a pitcher's mound, a dugout, or a fielding position, or by determining that a participant is running along a path to a base or performing some other movement.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 28, 2012
    Inventors: Vidya Elangovan, Marlena Fecho, Timothy P. Heidmann
  • Patent number: 8077981
    Abstract: Camera registration and/or sensor data is updated during a live event by determining a difference between an estimated position of an object in an image and an actual position of the object in the image. The estimated position of the object in the image can be based on an estimated position of the object in the live event, e.g., based on GPS or other location data. This position is transformed to the image space using current camera registration and/or sensor data. The actual position of the object in the image can be determined by template matching which accounts for an orientation of the object, a shape of the object, an estimated size of the representation of the object in the image, and the estimated position of the object in the image. The updated camera registration/sensor data can be used in detecting an object in a subsequent image.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: December 13, 2011
    Assignee: Sportvision, Inc.
    Inventors: Vidya Elangovan, Richard H. Cavallaro, Marvin S. White, Kenneth A. Milnes
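
Patent 8077981's closed loop (project the GPS-derived position into the image with the current registration, compare against where template matching actually found the object, and correct the registration) could be approximated as below; the degrees-per-pixel constant and the gain are invented calibration placeholders.

```python
# Closed-loop registration correction: nudge pan/tilt by the angular
# equivalent of the pixel error between the estimated and matched positions.
# DEG_PER_PIXEL and the gain are invented calibration placeholders.

DEG_PER_PIXEL = 0.01

def update_registration(reg, estimated_px, actual_px, gain=0.5):
    """Blend a pan/tilt correction derived from the image-space error."""
    dx = actual_px[0] - estimated_px[0]
    dy = actual_px[1] - estimated_px[1]
    return {
        "pan":  reg["pan"] + gain * dx * DEG_PER_PIXEL,
        "tilt": reg["tilt"] - gain * dy * DEG_PER_PIXEL,   # image y grows downward
    }

reg = {"pan": 12.40, "tilt": -3.10}
# Template matching found the car 30 px right and 8 px below the estimate.
print(update_registration(reg, estimated_px=(900, 500), actual_px=(930, 508)))
# approximately {'pan': 12.55, 'tilt': -3.14}
```
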
  • Publication number: 20090027494
    Abstract: A video broadcast of a live event is enhanced by providing graphics in the video in real time to depict the fluid flow around a moving object in the event and to provide other informative graphics regarding aerodynamic forces on the object. A detailed flow field around the object is calculated before the event, on an offline basis, for different speeds of the object and different locations of other nearby objects. The fluid flow data is represented by baseline data and modification factors or adjustments which are based on the speed of the object and the locations of the other objects. During the event, the modification factors are applied to the baseline data to determine fluid flow in real time, as the event is captured on video. In an example implementation, the objects are race cars which transmit their location and/or speed to a processing facility which provides the video.
    Type: Application
    Filed: December 19, 2007
    Publication date: January 29, 2009
    Applicant: Sportvision, Inc.
    Inventors: Richard H. Cavallaro, Alina Alt, Vidya Elangovan, James O. McGuffin, Timothy P. Heidmann, Reuben Halper