Patents by Inventor Vijay Kamarshi

Vijay Kamarshi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230215129
    Abstract: Saliency regions are identified in a global scene depicted by volumetric video. Saliency video streams that track the saliency regions are generated. Each saliency video stream tracks a respective saliency region. A saliency-stream-based representation of the volumetric video is generated to include the saliency video streams. The saliency-stream-based representation of the volumetric video is transmitted to a video streaming client.
    Type: Application
    Filed: June 16, 2021
    Publication date: July 6, 2023
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Ajit NINAN, Shwetha RAM, Gregory John WARD, Domagoj BARICEVIC, Vijay KAMARSHI
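
A minimal sketch of the idea in the abstract above, under assumed data structures (the toy voxel-grid frames, per-frame bounding boxes, and the SaliencyRegion/SaliencyStream names are hypothetical, not the patented implementation): each tracked saliency region is cropped out of every frame and collected into its own stream, and the set of streams would form the saliency-stream-based representation sent to a streaming client.

```python
# Hypothetical sketch: packaging tracked saliency regions of a volumetric video
# into per-region "saliency streams". Names and data layout are illustrative
# assumptions, not the patented implementation.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class SaliencyRegion:
    region_id: int
    # Per-frame 3D bounding boxes: (frame_index, z0, z1, y0, y1, x0, x1)
    boxes: list


@dataclass
class SaliencyStream:
    region_id: int
    crops: list = field(default_factory=list)  # cropped voxel blocks, one per frame


def build_saliency_streams(frames, regions):
    """Crop each tracked saliency region out of every volumetric frame."""
    streams = {r.region_id: SaliencyStream(r.region_id) for r in regions}
    for region in regions:
        for (t, z0, z1, y0, y1, x0, x1) in region.boxes:
            crop = frames[t][z0:z1, y0:y1, x0:x1].copy()
            streams[region.region_id].crops.append(crop)
    return list(streams.values())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((16, 64, 64)) for _ in range(3)]      # toy voxel grids
    region = SaliencyRegion(region_id=1,
                            boxes=[(t, 0, 8, 10, 30, 20, 40) for t in range(3)])
    streams = build_saliency_streams(frames, [region])
    # The saliency-stream-based representation would bundle these streams
    # (plus metadata locating them in the global scene) for transmission.
    print(len(streams), streams[0].crops[0].shape)
```
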
  • Patent number: 11412108
    Abstract: Techniques for efficiently identifying objects of interest in an environment and, thereafter, determining the location and/or orientation of those objects. As described below, a system may analyze images captured by a camera to identify objects that may be represented by the images. These objects may be identified in the images based on their size, color, and/or other physical attributes. After identifying these potential objects, the system may define a region around each object for further inspection. Thereafter, portions of a depth map of the environment corresponding to these regions may be analyzed to determine whether any of the objects identified from the images are “objects of interest”—or objects that the system has previously been instructed to track. These objects of interest may include portable projection surfaces, a user's hand, or any other physical object. The techniques identify these objects with reference to the respective depth signatures of these objects.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: August 9, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Prasanna Venkatesh Krishnasamy, Amit Tikare
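
A rough sketch of the two-stage approach the abstract describes, with assumed thresholds and hypothetical helper names (find_candidate_regions, matches_depth_signature): candidate objects are picked out of a color image by simple physical attributes, and the depth map inside each candidate region is then checked against an expected depth signature.

```python
# Hypothetical sketch of the two-stage idea: find candidate objects in a color
# image by simple physical attributes, then confirm them against a depth map.
# Thresholds, the "depth signature", and helper names are illustrative assumptions.
import numpy as np


def find_candidate_regions(image, target_color, color_tol=30, min_pixels=50):
    """Return bounding boxes around blobs whose color is close to target_color."""
    mask = np.all(np.abs(image.astype(int) - target_color) < color_tol, axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) < min_pixels:               # too small to be the object
        return []
    return [(ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)]


def matches_depth_signature(depth_map, box, expected_depth_range_m=(0.5, 1.5),
                            max_depth_spread_m=0.05):
    """Check whether the depth values inside the box look like a flat, hand-held
    surface (one plausible 'depth signature' for a portable projection surface)."""
    y0, y1, x0, x1 = box
    patch = depth_map[y0:y1, x0:x1]
    median = np.median(patch)
    spread = np.percentile(patch, 90) - np.percentile(patch, 10)
    return (expected_depth_range_m[0] <= median <= expected_depth_range_m[1]
            and spread < max_depth_spread_m)


if __name__ == "__main__":
    image = np.zeros((120, 160, 3), dtype=np.uint8)
    image[40:80, 60:120] = (250, 250, 250)             # a bright white card
    depth = np.full((120, 160), 2.0)
    depth[40:80, 60:120] = 1.0                         # held about 1 m from the sensor
    for box in find_candidate_regions(image, target_color=(255, 255, 255)):
        print(box, "object of interest" if matches_depth_signature(depth, box) else "rejected")
```
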
  • Patent number: 10671846
    Abstract: Techniques for efficiently identifying objects of interest in an environment and, thereafter, determining the location and/or orientation of those objects. As described below, a system may analyze images captured by a camera to identify objects that may be represented by the images. These objects may be identified in the images based on their size, color, and/or other physical attributes. After identifying these potential objects, the system may define a region around each object for further inspection. Thereafter, portions of a depth map of the environment corresponding to these regions may be analyzed to determine whether any of the objects identified from the images are “objects of interest”—or objects that the system has previously been instructed to track. These objects of interest may include portable projection surfaces, a user's hand, or any other physical object. The techniques identify these objects with reference to the respective depth signatures of these objects.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: June 2, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Prasanna Venkatesh Krishnasamy, Amit Tikare
  • Patent number: 10514256
    Abstract: In some examples, a vision system includes multiple time of flight (ToF) cameras and a single illumination source. The illumination source and the multiple ToF cameras may be synchronized with each other, such as through a phase locked loop based on a generated control signal. A first one of the ToF cameras may be co-located with the illumination source, and a second one of the ToF cameras may be spaced away from the illumination source and the first ToF camera. For instance, the first ToF camera may have a wider field of view (FoV) for generating depth mapping of a scene, while the second ToF camera may have a narrower FoV for generating higher resolution depth mapping of a particular portion of the scene, such as for gesture recognition.
    Type: Grant
    Filed: May 6, 2013
    Date of Patent: December 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Robert Warren Sjoberg, Menashe Haskin, Amit Tikare
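
A toy timing sketch of the synchronization idea, under stated assumptions (the frame period, pulse width, and function names are illustrative; the abstract describes a phase-locked loop on a generated control signal, which is not modeled here): both cameras derive their exposure windows from the same control signal that gates the single illumination source.

```python
# Hypothetical timing sketch: one illumination source and two ToF cameras driven
# from a single control signal so their exposure windows line up with the
# illumination pulses. Numbers and names are illustrative assumptions.
FRAME_PERIOD_US = 10_000          # shared control signal period (assumed)
PULSE_WIDTH_US = 100              # illumination pulse width (assumed)


def pulse_times(n_frames, period_us=FRAME_PERIOD_US):
    """Rising edges of the shared control signal, which gate the illuminator."""
    return [i * period_us for i in range(n_frames)]


def exposure_windows(n_frames, phase_offset_us, integration_us):
    """Each camera opens its shutter at a fixed phase offset from the control signal."""
    return [(t + phase_offset_us, t + phase_offset_us + integration_us)
            for t in pulse_times(n_frames)]


if __name__ == "__main__":
    # Wide-FoV camera co-located with the illuminator: coarse depth map of the scene.
    wide = exposure_windows(3, phase_offset_us=0, integration_us=200)
    # Narrow-FoV camera spaced away: higher-resolution depth for gesture recognition.
    narrow = exposure_windows(3, phase_offset_us=0, integration_us=200)
    for pulse, w, n in zip(pulse_times(3), wide, narrow):
        assert w[0] <= pulse + PULSE_WIDTH_US <= w[1]   # both cameras see the pulse
        assert n[0] <= pulse + PULSE_WIDTH_US <= n[1]
        print(pulse, w, n)
```
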
  • Patent number: 9661328
    Abstract: A video processing system is provided to create quantization data parameters based on human eye attraction to provide to an encoder, enabling the encoder to compress data while taking human perceptual guidance into account. The system includes a perceptual video processor (PVP) to generate a perceptual significance pixel map for data to be input to the encoder. Companding is provided to reduce the pixel values to values ranging from zero to one, and decimation is performed to match the pixel values to the spatial resolution of quantization parameter (QP) values in a look-up table (LUT). The LUT values then provide the metadata supplied to the encoder so that compression of the original picture is performed by the encoder with bits allocated to pixels in a macroblock according to the predictions of eye tracking.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 23, 2017
    Assignee: ARRIS Enterprises, Inc.
    Inventors: Sean T. McCarthy, Peter A. Borgwardt, Vijay Kamarshi, Shiv Saxena
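
A minimal sketch of the described pipeline, assuming a 16x16 macroblock grid and an illustrative QP look-up table (the compander and LUT values are assumptions, not the patented ones): the perceptual significance map is companded to [0, 1], decimated to the macroblock grid, and mapped through the LUT to per-macroblock QP offsets for the encoder.

```python
# Hypothetical sketch of the described pipeline: compand a per-pixel perceptual
# significance map to [0, 1], decimate it to the macroblock grid, and map it
# through a look-up table to per-macroblock QP offsets handed to the encoder.
import numpy as np

MB = 16                                    # macroblock size (assumed 16x16)
QP_LUT = np.array([6, 3, 0, -3, -6])       # more significance -> lower QP (assumed)


def compand(significance):
    """Squash an arbitrary-range significance map into [0, 1]."""
    s = significance.astype(float)
    return (s - s.min()) / (s.max() - s.min() + 1e-9)


def decimate_to_macroblocks(per_pixel, mb=MB):
    """Average the per-pixel values over each macroblock."""
    h, w = per_pixel.shape
    cropped = per_pixel[:h - h % mb, :w - w % mb]
    return cropped.reshape(h // mb, mb, w // mb, mb).mean(axis=(1, 3))


def qp_offsets(significance):
    """Full pipeline: compand, decimate, then index into the QP look-up table."""
    mb_map = decimate_to_macroblocks(compand(significance))
    idx = np.minimum((mb_map * len(QP_LUT)).astype(int), len(QP_LUT) - 1)
    return QP_LUT[idx]                      # per-macroblock QP adjustments for the encoder


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    saliency = rng.random((64, 64))         # stand-in for the PVP's pixel map
    print(qp_offsets(saliency))             # 4x4 grid of QP offsets
```
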
  • Patent number: 9563955
    Abstract: Techniques for efficiently identifying objects of interest in an environment and, thereafter, tracking the location and/or orientation of those objects. As described below, a system may analyze images captured by a camera to identify objects that may be represented by the images. These objects may be identified in the images based on their size, color, and/or other physical attributes. After identifying these potential objects, the system may define a region around each object for further inspection. Thereafter, portions of a depth map of the environment corresponding to these regions may be analyzed to determine whether any of the objects identified from the images are “objects of interest”—or objects that the system has previously been instructed to track. These objects of interest may include portable projection surfaces, a user's hand, or any other physical object. The techniques identify these objects with reference to the respective depth signatures of these objects.
    Type: Grant
    Filed: May 15, 2013
    Date of Patent: February 7, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Prasanna Venkatesh Krishnasamy, Amit Tikare
  • Patent number: 9558563
    Abstract: In a system that monitors the positions and movements of objects within an environment, a depth camera may be configured to produce depth images based on configurable measurement parameters such as illumination intensity and sensing duration. A supervisory component may be configured to roughly identify objects within an environment and to specify observation goals with respect to the objects. The measurement parameters of the depth camera may then be configured in accordance with the goals, and subsequent analyses of the environment may be based on depth images obtained using the measurement parameters.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: January 31, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Amit Tikare, Ronald Joseph Degges, Jr., Eric Wang, Christopher David Coley
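
A small sketch of the supervisory idea, with hypothetical goal names and parameter values: an observation goal chosen for a roughly identified object selects the illumination intensity and sensing duration used when capturing the next depth images.

```python
# Hypothetical sketch: a supervisory component states an observation goal, and the
# depth camera's measurement parameters (illumination intensity, sensing duration)
# are configured to suit that goal. Goal names and values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MeasurementParams:
    illumination_intensity: float   # relative drive level, 0..1
    sensing_duration_ms: float      # integration time per depth frame


# Longer, brighter exposures for distant or fine detail; short ones for fast motion.
GOAL_PRESETS = {
    "coarse_scene_map":   MeasurementParams(illumination_intensity=0.4, sensing_duration_ms=8.0),
    "track_fast_gesture": MeasurementParams(illumination_intensity=0.6, sensing_duration_ms=2.0),
    "inspect_far_object": MeasurementParams(illumination_intensity=1.0, sensing_duration_ms=16.0),
}


def configure_depth_camera(goal: str) -> MeasurementParams:
    """Pick measurement parameters for the next depth images from the stated goal."""
    return GOAL_PRESETS.get(goal, GOAL_PRESETS["coarse_scene_map"])


if __name__ == "__main__":
    for goal in ("coarse_scene_map", "track_fast_gesture", "inspect_far_object"):
        print(goal, configure_depth_camera(goal))
```
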
  • Patent number: 9503756
    Abstract: Encoding a video signal including pictures includes generating perceptual representations based on the pictures. Reference pictures are selected and motion vectors are generated based on the perceptual representations and the reference pictures. The motion vectors and pointers for the reference pictures are provided in an encoded video signal. Decoding may include receiving pointers for reference pictures and motion vectors based on perceptual representations of the reference pictures. The decoding of the pictures in the encoded video signal may include selecting reference pictures using the pointers and determining predicted pictures, based on the motion vectors and the selected reference pictures. The decoding may include generating reconstructed pictures from the predicted pictures and the residual pictures.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: November 22, 2016
    Assignee: ARRIS Enterprises, Inc.
    Inventors: Sean T. McCarthy, Vijay Kamarshi
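
A compact sketch of encoder-side motion estimation on perceptual representations, under assumptions (the high-pass "perceptual representation", block size, and search range are stand-ins, not the patented model): motion vectors are found by block matching on the representations rather than on raw pixels, and an encoder would then signal those vectors together with reference-picture pointers.

```python
# Hypothetical sketch: estimate motion vectors on perceptual representations of
# the pictures (here, a crude high-pass "structure" map) instead of raw pixels.
import numpy as np

BLOCK = 8    # block size (assumed)
SEARCH = 4   # search range in pixels (assumed)


def perceptual_representation(picture):
    """Toy stand-in: emphasize local structure by subtracting a local mean."""
    p = picture.astype(float)
    blurred = (np.roll(p, 1, 0) + np.roll(p, -1, 0) + np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
    return p - blurred


def block_motion_vectors(current, reference):
    """Full-search block matching on the perceptual representations."""
    cur, ref = perceptual_representation(current), perceptual_representation(reference)
    h, w = cur.shape
    vectors = {}
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = cur[y:y + BLOCK, x:x + BLOCK]
            best, best_mv = None, (0, 0)
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - BLOCK and 0 <= xx <= w - BLOCK:
                        cost = np.abs(block - ref[yy:yy + BLOCK, xx:xx + BLOCK]).sum()
                        if best is None or cost < best:
                            best, best_mv = cost, (dy, dx)
            vectors[(y, x)] = best_mv
    return vectors


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref = rng.random((32, 32))
    cur = np.roll(ref, shift=(2, 1), axis=(0, 1))      # scene shifted by (2, 1)
    mvs = block_motion_vectors(cur, ref)
    print(mvs[(8, 8)])                                  # expected near (-2, -1)
```
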
  • Patent number: 9465484
    Abstract: A vision system associated with a projection system includes multiple optical pathways. For instance, when the projection system projects an image onto a generally vertical surface, the vision system may operate in a rear sensing mode, such as for detecting one or more gestures made by a user located behind the projection system. Alternatively, when the projection system projects the image onto a generally horizontal surface, the vision system may operate in a front sensing mode for detecting gestures made by a user located in front of the projection system. One or more thresholds may be established for switching between the front sensing mode and the rear sensing mode based on orientation information. As another example, the vision system may be operated in both the front sensing mode and the rear sensing mode contemporaneously.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: October 11, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Qiang Liu
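
A minimal sketch of threshold-based mode switching with hysteresis, using assumed tilt angles and thresholds (the orientation convention and values are hypothetical): the unit stays in its current sensing mode until the measured tilt crosses the opposite threshold.

```python
# Hypothetical sketch: switch between a rear-sensing mode (image on a wall, user
# behind the device) and a front-sensing mode (image on a table, user in front)
# using thresholds on a tilt angle, with hysteresis so the mode does not chatter.
FRONT_ENTER_DEG = 60.0   # tilted this far toward horizontal -> front sensing (assumed)
REAR_ENTER_DEG = 30.0    # tilted back toward vertical -> rear sensing (assumed)


def next_mode(current_mode: str, tilt_deg: float) -> str:
    """Return 'front' or 'rear' given the current mode and the projector tilt."""
    if current_mode == "rear" and tilt_deg > FRONT_ENTER_DEG:
        return "front"
    if current_mode == "front" and tilt_deg < REAR_ENTER_DEG:
        return "rear"
    return current_mode


if __name__ == "__main__":
    mode = "rear"
    for tilt in (10, 45, 75, 50, 20):        # degrees from vertical (assumed convention)
        mode = next_mode(mode, tilt)
        print(tilt, mode)
```
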
  • Patent number: 9304582
    Abstract: A system may utilize a projector, a camera, and a depth sensor to produce images within the environment of a user and to detect and respond to user actions. Depth data from the depth sensor may be analyzed to detect and identify items within the environment. A coordinate transformation may then be used to identify corresponding color values from camera data, which can in turn be analyzed to determine the colors of detected items. A similar coordinate transformation may be used to identify color values of a projected image that correspond to the detected items. In some cases, camera color values corresponding to an item may be corrected based on the corresponding color values of a projected image. In other cases, projected color values corresponding to an item may be corrected based on the corresponding camera color values.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: April 5, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Vijay Kamarshi, Helder Jesus Ramalho, Rahul Agrawal, Robert Ramsey Flenniken, Ning Yao, Paul Matthew Bombach, Yue Liu, Sowmya Gopalan
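
A small sketch of the coordinate-transformation idea, assuming calibrated 2x3 affine transforms between the depth, camera, and projector frames and a crude divide-out correction model (all hypothetical): item pixels found in the depth frame are mapped into the camera and projected images so the camera color can be corrected for the light the projector is casting onto the item.

```python
# Hypothetical sketch: map an item detected in depth-sensor coordinates into the
# camera image to read its apparent color, then divide out the color the projector
# is casting onto it to estimate the item's own color. Transforms are assumptions.
import numpy as np

# Assumed 2x3 affine transforms between coordinate frames (calibration output).
DEPTH_TO_CAMERA = np.array([[1.0, 0.0, 12.0],
                            [0.0, 1.0, -8.0]])
DEPTH_TO_PROJECTOR = np.array([[0.5, 0.0, 4.0],
                               [0.0, 0.5, 2.0]])


def transform(points, affine):
    """Apply a 2x3 affine transform to Nx2 (x, y) pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return (pts @ affine.T).astype(int)


def corrected_item_color(depth_pixels, camera_image, projected_image):
    """Average camera color over the item, corrected for the projected light."""
    cam_xy = transform(depth_pixels, DEPTH_TO_CAMERA)
    prj_xy = transform(depth_pixels, DEPTH_TO_PROJECTOR)
    cam = camera_image[cam_xy[:, 1], cam_xy[:, 0]].astype(float)
    prj = projected_image[prj_xy[:, 1], prj_xy[:, 0]].astype(float)
    return (cam / np.maximum(prj, 1.0)).mean(axis=0)    # crude reflectance estimate


if __name__ == "__main__":
    camera_image = np.full((120, 160, 3), 100.0)
    projected_image = np.full((80, 100, 3), 200.0)
    item_pixels = np.array([[40, 50], [41, 50], [40, 51]])   # (x, y) in the depth frame
    print(corrected_item_color(item_pixels, camera_image, projected_image))
```
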
  • Patent number: 9111338
    Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: August 18, 2015
    Assignee: ARRIS Technology, Inc.
    Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
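
A minimal sketch of the described flow, with a toy local-contrast "first layer" standing in for the model human visual system and an assumed blend (none of which is the patented model): the weighting map comes from the motion-compensated difference between the original and reference pictures, and it is combined with the first layer to produce a processed picture.

```python
# Hypothetical sketch: derive a first layer from a toy visual model, a weighting
# map from the motion-compensated difference, and combine them into a processed
# (pre-filtered) picture. The model, compensation, and blend are assumptions.
import numpy as np


def visual_model_layer(picture):
    """Toy first layer: local contrast, as a stand-in for the model visual system."""
    p = picture.astype(float)
    mean = (np.roll(p, 1, 0) + np.roll(p, -1, 0) + np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
    return np.abs(p - mean)


def weighting_map(original, reference_mc):
    """Large motion-compensated differences get low weight."""
    diff = np.abs(original.astype(float) - reference_mc.astype(float))
    return 1.0 / (1.0 + diff)


def processed_picture(original, reference_mc):
    """Attenuate local detail more where the weighting map is low."""
    layer = visual_model_layer(original)
    w = weighting_map(original, reference_mc)
    return original.astype(float) - layer * (1.0 - w)


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    original = rng.random((16, 16)) * 255
    reference_mc = original + rng.normal(0, 10, original.shape)   # pretend motion-compensated ref
    print(processed_picture(original, reference_mc).shape)
```
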
  • Patent number: 9041691
    Abstract: A passive projection screen presents images projected thereon by a projection system. A surface of the screen includes elements that are reflective to non-visible light, such as infrared (IR) light. When non-visible light is directed to the screen, it is reflected back by the reflective elements. Part of the reflected light may contact and reflect from a user's fingertip or hand (or other object, such as a stylus) while another part is reflected to the projection system. The projection system differentiates between distances to the surface and distances that include the additional travel to the fingertip. As the fingertip moves closer to the surface, the distances approach equality. When the distances are approximately equal, the finger is detected as touching the surface. In this manner, a projection surface equipped with reflective elements facilitates more accurate touch detection.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: May 26, 2015
    Assignee: Rawles LLC
    Inventors: Menashe Haskin, Kavitha Velusamy, Ning Yao, Robert Warren Sjoberg, Vijay Kamarshi, Kevin Wayne Arthur
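
A toy sketch of the touch test, with an assumed tolerance and a simplified path model (both hypothetical): the distance measured along the path that includes the fingertip approaches the baseline surface distance as the finger nears the screen, and a touch is reported when the two are approximately equal.

```python
# Hypothetical sketch of the touch test: compare the baseline distance to the
# screen surface with the distance measured along the path that also reaches the
# fingertip. Tolerance and the path model are illustrative assumptions.
TOUCH_TOLERANCE_M = 0.005    # 5 mm slack for sensor noise (assumed)


def finger_path_distance(surface_distance_m: float, finger_height_m: float) -> float:
    """Distance for light that reflects off the screen and then off the fingertip."""
    return surface_distance_m + 2.0 * finger_height_m   # extra round trip to the finger


def is_touching(surface_distance_m: float, measured_finger_path_m: float) -> bool:
    """Touch when the finger-path distance is approximately the surface distance."""
    return abs(measured_finger_path_m - surface_distance_m) <= TOUCH_TOLERANCE_M


if __name__ == "__main__":
    surface = 1.200                                   # projector-to-screen distance, meters
    for height in (0.050, 0.010, 0.001):              # fingertip height above the screen
        measured = finger_path_distance(surface, height)
        print(height, is_touching(surface, measured))
```
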
  • Publication number: 20140314335
    Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
    Type: Application
    Filed: June 27, 2014
    Publication date: October 23, 2014
    Applicant: General Instrument Corporation
    Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
  • Publication number: 20140269903
    Abstract: A video processing system is provided to create quantization data parameters based on human eye attraction to provide to an encoder, enabling the encoder to compress data while taking human perceptual guidance into account. The system includes a perceptual video processor (PVP) to generate a perceptual significance pixel map for data to be input to the encoder. Companding is provided to reduce the pixel values to values ranging from zero to one, and decimation is performed to match the pixel values to the spatial resolution of quantization parameter (QP) values in a look-up table (LUT). The LUT values then provide the metadata supplied to the encoder so that compression of the original picture is performed by the encoder with bits allocated to pixels in a macroblock according to the predictions of eye tracking.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: GENERAL INSTRUMENT CORPORATION
    Inventors: Sean T. McCarthy, Peter A. Borgwardt, Vijay Kamarshi, Shiv Saxena
  • Patent number: 8767127
    Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
    Type: Grant
    Filed: April 16, 2010
    Date of Patent: July 1, 2014
    Assignee: General Instrument Corporation
    Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
  • Publication number: 20130148731
    Abstract: Encoding a video signal including pictures includes generating perceptual representations based on the pictures. Reference pictures are selected and motion vectors are generated based on the perceptual representations and the reference pictures. The motion vectors and pointers for the reference pictures are provided in an encoded video signal. Decoding may include receiving pointers for reference pictures and motion vectors based on perceptual representations of the reference pictures. The decoding of the pictures in the encoded video signal may include selecting reference pictures using the pointers and determining predicted pictures, based on the motion vectors and the selected reference pictures. The decoding may include generating reconstructed pictures from the predicted pictures and the residual pictures.
    Type: Application
    Filed: December 9, 2011
    Publication date: June 13, 2013
    Applicant: GENERAL INSTRUMENT CORPORATION
    Inventors: Sean T. McCarthy, Vijay Kamarshi
  • Publication number: 20100265404
    Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
    Type: Application
    Filed: April 16, 2010
    Publication date: October 21, 2010
    Applicant: General Instrument Corporation
    Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
  • Patent number: 6871008
    Abstract: In a system for processing and displaying a DVD-video data stream, a system is provided for decoding and processing a subpicture data stream. The subpicture data stream comprises a subpicture pixel data stream and a subpicture display control data stream. The subpicture display control data stream preferably comprises one or more display control commands, one or more of which include subpicture display control information. The system comprises at least one processing unit executing software programmed to perform at least some subpicture data stream decoding and subpicture display control command execution. In addition, the system further comprises a subpicture hardware unit configured to receive the subpicture pixel data stream, subpicture control information extracted from a subpicture display control command executed by said at least one processing unit, and subpicture display control commands not executed by said at least one processing unit.
    Type: Grant
    Filed: January 3, 2000
    Date of Patent: March 22, 2005
    Assignee: Genesis Microchip Inc.
    Inventors: Sandro H. Pintz, Vijay Kamarshi, Teju J. Khubchandani
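
A minimal sketch of the software/hardware split described in the last abstract, with hypothetical command names and dispatch policy: some display control commands are executed by the processing unit, and the pixel data, the extracted control information, and the remaining commands are handed to the subpicture hardware unit.

```python
# Hypothetical sketch: a CPU-side decoder executes a subset of subpicture display
# control commands, extracts their control information, and forwards the pixel
# data plus the remaining commands to a subpicture hardware unit. Command names
# and the dispatch policy are illustrative assumptions, not the patented design.
SOFTWARE_COMMANDS = {"SET_COLOR", "SET_CONTRAST"}        # assumed CPU-executed subset


def execute_in_software(command):
    """Pretend to run a display control command on the CPU and return its control info."""
    name, args = command
    return {"command": name, "resolved": args}


def decode_subpicture(pixel_data, display_control_commands):
    """Split the control stream: run some commands in software, pass the rest through."""
    extracted_control_info = []
    passthrough_commands = []
    for command in display_control_commands:
        if command[0] in SOFTWARE_COMMANDS:
            extracted_control_info.append(execute_in_software(command))
        else:
            passthrough_commands.append(command)
    # Everything below would be handed to the subpicture hardware unit.
    return {
        "pixel_data": pixel_data,
        "control_info": extracted_control_info,
        "commands": passthrough_commands,
    }


if __name__ == "__main__":
    commands = [("SET_COLOR", (1, 2, 3, 4)), ("START_DISPLAY", ()), ("SET_CONTRAST", (15,))]
    print(decode_subpicture(b"\x00\x01\x02", commands))
```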