Patents by Inventor Varun Ganapathi

Varun Ganapathi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013564
    Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to implement one or more encoding and/or decoding techniques.
    Type: Application
    Filed: September 26, 2023
    Publication date: January 11, 2024
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi, Weiyao Wang
  • Patent number: 11861503
    Abstract: Embodiments relate to a system for automatically predicting payer responses to claims. In an embodiment, the system receives claim data associated with a claim. The system identifies a set of claim features of the claim data and generates an input vector from at least a portion of the set of claim features. The system applies the input vector to a trained neural network. A first portion of the neural network is configured to generate an embedding representing the input vector with a lower dimensionality than the input vector. A second portion of the neural network is configured to generate a prediction of whether the claim will be denied based on the embedding. The system provides the prediction for display on a user interface of a user device. The prediction may further include denial reason codes and a response date estimate to indicate if, when, and why a claim will be denied.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: January 2, 2024
    Assignee: AKASA, Inc.
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi, Andrew Atwal
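    A minimal sketch of the two-part architecture described in the abstract above, written as a hypothetical PyTorch model; the feature count, layer sizes, and embedding dimension are assumptions chosen for illustration, not details from the patent.

        import torch
        import torch.nn as nn

        class DenialPredictor(nn.Module):
            """First portion compresses the claim feature vector into a
            lower-dimensional embedding; second portion predicts the
            probability of denial from that embedding."""
            def __init__(self, num_features=128, embed_dim=16):
                super().__init__()
                # First portion: input vector -> lower-dimensional embedding.
                self.encoder = nn.Sequential(
                    nn.Linear(num_features, 64), nn.ReLU(),
                    nn.Linear(64, embed_dim))
                # Second portion: embedding -> denial probability.
                self.head = nn.Sequential(nn.Linear(embed_dim, 1), nn.Sigmoid())

            def forward(self, claim_features):
                embedding = self.encoder(claim_features)
                return self.head(embedding)

        # Usage: one synthetic claim with 128 numeric features.
        model = DenialPredictor()
        claim = torch.randn(1, 128)
        print(f"P(denied) = {model(claim).item():.3f}")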
  • Publication number: 20220383130
    Abstract: Example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to implement one or more self-supervised machine-learning techniques. In a particular implementation, first and second mappings may map features of an electronic document to associated first and second encoded domains.
    Type: Application
    Filed: January 14, 2022
    Publication date: December 1, 2022
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi
  • Publication number: 20220374993
    Abstract: Disclosed are a system, method and apparatus to generate service codes based, at least in part, on electronic documents.
    Type: Application
    Filed: May 23, 2022
    Publication date: November 24, 2022
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi, Zhongfen Deng
  • Publication number: 20220374709
    Abstract: Disclosed are a system, method and apparatus to generate service codes based, at least in part, on electronic documents.
    Type: Application
    Filed: May 23, 2022
    Publication date: November 24, 2022
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi
  • Publication number: 20220374710
    Abstract: Disclosed are a system, method and apparatus to generate service codes based, at least in part, on electronic documents.
    Type: Application
    Filed: May 23, 2022
    Publication date: November 24, 2022
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi
  • Publication number: 20220165373
    Abstract: Disclosed are a system, method and apparatus to generate service codes based, at least in part, on electronic documents. In an embodiment, tokens may be embedded in an electronic document based, at least in part, on a linguistic analysis of the electronic document. Likelihoods of applicability of service codes to the electronic document may be determined based, at least in part, on the embedding of tokens.
    Type: Application
    Filed: March 18, 2021
    Publication date: May 26, 2022
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi
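    As a rough illustration of the pipeline in the abstract above (tokens embedded from an electronic document, then per-code applicability likelihoods computed from those embeddings), here is a hypothetical multi-label sketch; the vocabulary size, code set, and pooling choice are invented for the example.

        import torch
        import torch.nn as nn

        class ServiceCodeScorer(nn.Module):
            """Embed document tokens, pool them, and score each service
            code with an independent sigmoid (multi-label likelihoods)."""
            def __init__(self, vocab_size, num_codes, dim=32):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, dim)   # token embeddings
                self.scorer = nn.Linear(dim, num_codes)      # one logit per code

            def forward(self, token_ids):
                pooled = self.embed(token_ids).mean(dim=1)   # mean-pool over tokens
                return torch.sigmoid(self.scorer(pooled))    # per-code likelihood

        # Usage: a toy 6-token document scored against 5 hypothetical codes.
        model = ServiceCodeScorer(vocab_size=1000, num_codes=5)
        doc = torch.randint(0, 1000, (1, 6))
        print(model(doc))  # tensor of 5 likelihoods in [0, 1]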
  • Patent number: 11170448
    Abstract: Embodiments relate to a system for automatically predicting payer responses to claims. In an embodiment, the system receives claim data associated with a claim. The system identifies a set of claim features of the claim data and generates an input vector from at least a portion of the set of claim features. The system applies the input vector to a trained neural network. A first portion of the neural network is configured to generate an embedding representing the input vector with a lower dimensionality than the input vector. A second portion of the neural network is configured to generate a prediction of whether the claim will be denied based on the embedding. The system provides the prediction for display on a user interface of a user device. The prediction may further include denial reason codes and a response date estimate to indicate if, when, and why a claim will be denied.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: November 9, 2021
    Assignee: AKASA, Inc.
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi, Andrew Atwal
  • Publication number: 20210342949
    Abstract: Embodiments relate to a system for automatically predicting payer responses to claims. In an embodiment, the system receives claim data associated with a claim. The system identifies a set of claim features of the claim data and generates an input vector from at least a portion of the set of claim features. The system applies the input vector to a trained neural network. A first portion of the neural network is configured to generate an embedding representing the input vector with a lower dimensionality than the input vector. A second portion of the neural network is configured to generate a prediction of whether the claim will be denied based on the embedding. The system provides the prediction for display on a user interface of a user device. The prediction may further include denial reason codes and a response date estimate to indicate if, when, and why a claim will be denied.
    Type: Application
    Filed: June 29, 2021
    Publication date: November 4, 2021
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi, Andrew Atwal
  • Publication number: 20210192635
    Abstract: Embodiments relate to a system for automatically predicting payer responses to claims. In an embodiment, the system receives claim data associated with a claim. The system identifies a set of claim features of the claim data and generates an input vector from at least a portion of the set of claim features. The system applies the input vector to a trained neural network. A first portion of the neural network is configured to generate an embedding representing the input vector with a lower dimensionality than the input vector. A second portion of the neural network is configured to generate a prediction of whether the claim will be denied based on the embedding. The system provides the prediction for display on a user interface of a user device. The prediction may further include denial reason codes and a response date estimate to indicate if, when, and why a claim will be denied.
    Type: Application
    Filed: March 13, 2020
    Publication date: June 24, 2021
    Inventors: Byung-Hak Kim, Hariraam Varun Ganapathi, Andrew Atwal
  • Patent number: 9857868
    Abstract: With the advent of touch-free interfaces such as those described in the present disclosure, it is no longer necessary for computer interfaces to be in predefined locations (e.g., desktops) or configurations (e.g., a rectangular keyboard). The present invention makes use of touch-free interfaces to encourage users to interact with a computer in an ergonomically sound manner. Among other things, the present invention implements a system for localizing human body parts such as hands, arms, shoulders, or even the full body, with a processing device such as a computer, along with a computer display that provides visual feedback encouraging a user to maintain an ergonomically preferred position with ergonomically preferred motions. For example, the present invention encourages a user to keep his motions within an ergonomically preferred range without having to reach out excessively or repetitively.
    Type: Grant
    Filed: March 19, 2011
    Date of Patent: January 2, 2018
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Christian Plagemann, Hendrik Dahlkamp, Hariraam Varun Ganapathi, Sebastian Thrun
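    One way to picture the feedback loop described above is a check of a tracked joint position against an ergonomically preferred range; the range values, coordinate convention, and hint messages below are made-up assumptions, not the patented method.

        # Hypothetical ergonomic-range check for a tracked hand position.
        # Coordinates are meters relative to the shoulder; ranges are illustrative.
        PREFERRED_REACH = {"x": (-0.35, 0.35), "y": (-0.20, 0.25), "z": (0.10, 0.45)}

        def ergonomic_feedback(hand_xyz):
            """Return per-axis hints nudging the user back into the preferred range."""
            hints = []
            for axis, value in zip("xyz", hand_xyz):
                lo, hi = PREFERRED_REACH[axis]
                if value < lo:
                    hints.append(f"move hand +{axis} ({value:.2f} < {lo:.2f})")
                elif value > hi:
                    hints.append(f"move hand -{axis} ({value:.2f} > {hi:.2f})")
            return hints or ["posture OK"]

        # Usage: a hand reaching out too far along z.
        print(ergonomic_feedback((0.10, 0.05, 0.60)))  # -> ['move hand -z (0.60 > 0.45)']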
  • Patent number: 9699217
    Abstract: A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only to control a device locally, the system may be in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the system may be in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy that determines whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
    Type: Grant
    Filed: April 7, 2016
    Date of Patent: July 4, 2017
    Assignee: Google Inc.
    Inventors: Christian Plagemann, Abraham Murray, Hendrik Dahlkamp, Alejandro Jose Kauffmann, Varun Ganapathi
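    The whitelist/blacklist/greylist decision described in the abstract above can be sketched as a small policy function; the command names, colors, and default-deny choice here are placeholders, not the patented behavior.

        # Hypothetical privacy policy: classify a command, pick an indicator color.
        WHITELIST = {"volume_up", "lights_off"}   # local-only: private mode
        BLACKLIST = {"upload_audio"}              # always blocked
        GREYLIST = {"cloud_transcribe"}           # non-private: data leaves device

        def evaluate(command):
            """Return (allowed, indicator_color) for a command under the policy."""
            if command in WHITELIST:
                return True, "green"    # private mode: data stays local
            if command in BLACKLIST:
                return False, "red"     # blocked outright
            if command in GREYLIST:
                return True, "amber"    # non-private mode: remote processing
            return False, "red"         # unknown commands blocked by default

        for cmd in ("volume_up", "cloud_transcribe", "format_disk"):
            print(cmd, evaluate(cmd))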
  • Patent number: 9477302
    Abstract: Aspects of the present disclosure relate to controlling the functions of various devices based on spatial relationships. In one example, a system may include a depth and visual camera and a computer (networked or local) for processing data from the camera. The computer may be connected (wired or wirelessly) to any number of devices that can be controlled by the system. A user may use a mobile device to define a volume of space relative to the camera. The volume of space may then be associated with a controlled device as well as one or more control commands. When the volume of space is subsequently occupied, the one or more control commands may be used to control the controlled device. In this regard, a user may switch a device on or off, increase volume or speed, etc. simply by occupying the volume of space.
    Type: Grant
    Filed: August 10, 2012
    Date of Patent: October 25, 2016
    Assignee: Google Inc.
    Inventors: Alejandro Kauffmann, Aaron Joseph Wheeler, Liang-Yu Chi, Hendrik Dahlkamp, Varun Ganapathi, Yong Zhao, Christian Plagemann
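    A minimal occupancy check for a user-defined volume of space, as in the abstract above, might look like the following; the box coordinates, point-count threshold, and command dispatch are illustrative assumptions.

        # Hypothetical control volume: an axis-aligned box in camera coordinates
        # (meters), bound to a command that fires when the box becomes occupied.
        CONTROL_BOX = ((0.5, 0.0, 1.0), (0.8, 0.4, 1.3))  # (min_xyz, max_xyz)

        def box_occupied(depth_points, box, min_points=20):
            """True if enough 3D points from the depth camera fall inside the box."""
            (x0, y0, z0), (x1, y1, z1) = box
            inside = sum(1 for (x, y, z) in depth_points
                         if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1)
            return inside >= min_points

        def on_frame(depth_points, was_occupied):
            """Fire the bound command on the empty-to-occupied transition."""
            occupied = box_occupied(depth_points, CONTROL_BOX)
            if occupied and not was_occupied:
                print("toggle lamp")  # placeholder for the bound control command
            return occupied

        # Usage: feed successive frames of (x, y, z) points.
        state = on_frame([(0.6, 0.2, 1.1)] * 25, was_occupied=False)  # prints "toggle lamp"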
  • Publication number: 20160226917
    Abstract: A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only to control a device locally, the system may be in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the system may be in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy that determines whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
    Type: Application
    Filed: April 7, 2016
    Publication date: August 4, 2016
    Inventors: Christian Plagemann, Abraham Murray, Hendrik Dahlkamp, Alejandro Jose Kauffmann, Varun Ganapathi
  • Patent number: 9392248
    Abstract: Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request as well as a projection of two or more video streams. The video streams can be captured using two-dimensional cameras or three-dimensional depth cameras and may capture different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified, and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
    Type: Grant
    Filed: June 11, 2013
    Date of Patent: July 12, 2016
    Assignee: Google Inc.
    Inventors: Aaron Joseph Wheeler, Christian Plagemann, Hendrik Dahlkamp, Liang-Yu Chi, Yong Zhao, Varun Ganapathi, Alejandro Jose Kauffmann
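    The virtual-camera idea above can be pictured with a generic pinhole-projection sketch. This is not the patented projection method; the focal length, image size, and camera pose are invented, and a real system would fuse points recovered from the multiple video streams.

        import numpy as np

        def project(points_3d, cam_pos, focal=500.0, size=(640, 480)):
            """Project world points into a hypothetical virtual pinhole camera
            looking down +z from cam_pos; returns integer pixel coordinates."""
            pts = np.asarray(points_3d, dtype=float) - np.asarray(cam_pos, dtype=float)
            pts = pts[pts[:, 2] > 0]                      # keep points in front of camera
            u = focal * pts[:, 0] / pts[:, 2] + size[0] / 2
            v = focal * pts[:, 1] / pts[:, 2] + size[1] / 2
            return np.stack([u, v], axis=1).astype(int)

        # Usage: two scene points seen from a virtual camera 2 m behind the origin.
        print(project([(0.1, 0.0, 1.0), (0.0, 0.2, 2.0)], cam_pos=(0, 0, -2)))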
  • Patent number: 9317721
    Abstract: A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only to control a device locally, the system may be in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the system may be in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy that determines whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: April 19, 2016
    Assignee: Google Inc.
    Inventors: Christian Plagemann, Abraham Murray, Hendrik Dahlkamp, Alejandro Kauffmann, Varun Ganapathi
  • Publication number: 20150220149
    Abstract: A location of a first portion of a hand and a location of a second portion of the hand are detected within a working volume, the first portion and the second portion being in a horizontal plane. A visual representation is positioned on a display based on the location of the first portion and the second portion. A selection input is initiated when a distance between the first portion and the second portion meets a predetermined threshold, to select an object presented on the display, the object being associated with the location of the visual representation. A movement of the first portion of the hand and the second portion of the hand also may be detected in the working volume while the distance between the first portion and the second portion remains below the predetermined threshold and, in response, the object on the display can be repositioned.
    Type: Application
    Filed: February 4, 2013
    Publication date: August 6, 2015
    Applicant: Google Inc.
    Inventors: Christian Plagemann, Hendrik Dahlkamp, Varun Ganapathi
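    The selection logic in the abstract above (the distance between two tracked portions of the hand crossing a threshold) reduces to a few lines; the tracker input, threshold value, and printed events are assumptions for illustration.

        import math

        PINCH_THRESHOLD = 0.03  # meters between the two hand portions (illustrative)

        def update(thumb_xyz, index_xyz, selecting):
            """Pinch to select; move while pinched to drag; open to release."""
            dist = math.dist(thumb_xyz, index_xyz)
            cursor = ((thumb_xyz[0] + index_xyz[0]) / 2,
                      (thumb_xyz[1] + index_xyz[1]) / 2)
            if not selecting and dist <= PINCH_THRESHOLD:
                print("select object at", cursor)   # selection input initiated
                selecting = True
            elif selecting and dist > PINCH_THRESHOLD:
                print("release object")             # pinch opened: drop
                selecting = False
            elif selecting:
                print("drag object to", cursor)     # still pinched: reposition
            return selecting, cursor

        # Usage: pinch closes, then the pinched hand moves.
        state = False
        state, _ = update((0.00, 0.10, 0.3), (0.02, 0.10, 0.3), state)  # select
        state, _ = update((0.05, 0.15, 0.3), (0.07, 0.15, 0.3), state)  # drag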
  • Publication number: 20150220150
    Abstract: In one implementation, a computer program product can be tangibly embodied on a non-transitory computer-readable storage medium and include instructions that, when executed, are configured to detect a gesture defined by an interaction of a user within a working volume defined above a surface. Based on the detected gesture, a gesture cursor control mode can be initiated within the computing device such that the user can manipulate the cursor by moving a portion of the hand of the user within the working volume. A location of the portion of the hand of the user relative to the surface can be identified within the working volume and a cursor can be positioned within a display portion of the computing device at a location corresponding to the identified location of the portion of the hand of the user within the working volume.
    Type: Application
    Filed: February 4, 2013
    Publication date: August 6, 2015
    Applicant: Google Inc.
    Inventors: Christian Plagemann, Hendrik Dahlkamp, Varun Ganapathi
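    Mapping a hand position in the working volume above the surface to a screen cursor, as this abstract describes, is essentially a coordinate rescale; the volume bounds and display size below are invented for the sketch.

        # Hypothetical working volume above the surface (meters) and target display.
        VOLUME = {"x": (0.0, 0.4), "y": (0.0, 0.3)}   # usable area over the surface
        SCREEN = (1920, 1080)

        def to_cursor(finger_x, finger_y):
            """Rescale a hand position in the working volume to pixel coordinates."""
            (x0, x1), (y0, y1) = VOLUME["x"], VOLUME["y"]
            nx = min(max((finger_x - x0) / (x1 - x0), 0.0), 1.0)  # clamp to volume
            ny = min(max((finger_y - y0) / (y1 - y0), 0.0), 1.0)
            return int(nx * (SCREEN[0] - 1)), int(ny * (SCREEN[1] - 1))

        # Usage: the middle of the working volume maps to the screen center.
        print(to_cursor(0.2, 0.15))  # -> (959, 539)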
  • Patent number: 9087241
    Abstract: A variety of methods, systems, devices and arrangements are implemented for use with motion capture. One such method is implemented for identifying salient points from three-dimensional image data. The method involves the execution of instructions on a computer system to generate a three-dimensional surface mesh from the three-dimensional image data. Lengths of possible paths from a plurality of points on the three-dimensional surface mesh to a common reference point are categorized. The categorized lengths of possible paths are used to identify a subset of the plurality of points as salient points.
    Type: Grant
    Filed: December 4, 2013
    Date of Patent: July 21, 2015
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Christian Plagemann, Hariraam Varun Ganapathi, Sebastian Thrun
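    The abstract above describes ranking mesh points by the lengths of paths to a common reference point; below is a toy version that treats the surface mesh as a graph, uses BFS hop counts in place of true geodesic lengths, and takes the farthest vertices as salient. The graph and that simplification are assumptions for illustration.

        from collections import deque

        def path_lengths(adjacency, reference):
            """BFS hop counts from a reference vertex over a surface-mesh graph."""
            dist = {reference: 0}
            queue = deque([reference])
            while queue:
                v = queue.popleft()
                for nbr in adjacency[v]:
                    if nbr not in dist:
                        dist[nbr] = dist[v] + 1
                        queue.append(nbr)
            return dist

        def salient_points(adjacency, reference, k=2):
            """Take the k vertices with the longest paths to the reference."""
            dist = path_lengths(adjacency, reference)
            return sorted(dist, key=dist.get, reverse=True)[:k]

        # Toy "mesh": a torso vertex 0 with two chains (arms) hanging off it.
        mesh = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0, 4], 4: [3]}
        print(salient_points(mesh, reference=0))  # -> [2, 4], the chain endpoints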
  • Patent number: 9063573
    Abstract: The present invention provides a system and computerized method for receiving image information and translating it into computer inputs. In an embodiment of the invention, image information is received for a predetermined action space to identify an active body part. From such image information, depth information is extracted to interpret the actions of the active body part. Predetermined gestures can then be identified to provide input to a computer; for example, gestures can be interpreted to mimic computerized touchscreen operation. Touchpad or mouse operations can also be mimicked.
    Type: Grant
    Filed: February 17, 2011
    Date of Patent: June 23, 2015
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Christian Plagemann, Hendrik Dahlkamp, Hariraam Varun Ganapathi, Sebastian Thrun
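    A cartoon of the depth-to-touch idea in the last abstract: when a tracked fingertip's measured depth comes within a small distance of a calibrated surface plane, synthesize a touchscreen-style event. The calibration values and event names are assumptions, not details from the patent.

        SURFACE_DEPTH = 0.80   # calibrated camera-to-surface distance (m)
        TOUCH_EPSILON = 0.01   # fingertip this close to the surface counts as touching

        def classify(fingertip_depth, prev_touching):
            """Turn fingertip depth readings into touchscreen-style events."""
            touching = fingertip_depth >= SURFACE_DEPTH - TOUCH_EPSILON
            if touching and not prev_touching:
                return "touch_down", touching
            if not touching and prev_touching:
                return "touch_up", touching
            return ("touch_move" if touching else "hover"), touching

        # Usage: a finger approaching, pressing, sliding, then lifting.
        state = False
        for depth in (0.70, 0.795, 0.798, 0.72):
            event, state = classify(depth, state)
            print(f"{depth:.3f} m -> {event}")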