Patents by Inventor Amit Tikare
Amit Tikare has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11412108
  Abstract: Techniques for efficiently identifying objects of interest in an environment and, thereafter, determining the location and/or orientation of those objects. As described below, a system may analyze images captured by a camera to identify objects that may be represented by the images. These objects may be identified in the images based on their size, color, and/or other physical attributes. After identifying these potential objects, the system may define a region around each object for further inspection. Thereafter, portions of a depth map of the environment corresponding to these regions may be analyzed to determine whether any of the objects identified from the images are “objects of interest”—or objects that the system has previously been instructed to track. These objects of interest may include portable projection surfaces, a user's hand, or any other physical object. The techniques identify these objects with reference to the respective depth signatures of these objects.
  Type: Grant
  Filed: May 26, 2020
  Date of Patent: August 9, 2022
  Assignee: Amazon Technologies, Inc.
  Inventors: Vijay Kamarshi, Prasanna Venkatesh Krishnasamy, Amit Tikare
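The abstract describes a two-stage pipeline: cheap candidate detection in the color image, then a depth-map check against a known "depth signature" to confirm objects of interest. A minimal sketch of that idea follows; every function name, threshold, and the planar-patch signature are invented for illustration and are not taken from the patent itself.

```python
import numpy as np

def find_candidate_regions(image, target_color, tol=30, min_area=10):
    """Flag pixels near a target color, then group them into a crude
    bounding-box region (a stand-in for a real segmentation step)."""
    mask = np.all(np.abs(image.astype(int) - np.array(target_color)) <= tol,
                  axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) < min_area:
        return []
    # One crude bounding box around all matching pixels.
    return [(int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)]

def matches_depth_signature(depth_map, region, expected_depth, tol=0.1):
    """Check whether depth pixels inside the region fit the 'signature'
    of a tracked object (here: a roughly planar patch at a known depth)."""
    y0, x0, y1, x1 = region
    patch = depth_map[y0:y1, x0:x1]
    return abs(float(patch.mean()) - expected_depth) <= tol

# Tiny synthetic scene: a bright "projection surface" on a dark background.
image = np.zeros((8, 8, 3), dtype=np.uint8)
image[2:6, 2:6] = (250, 250, 250)
depth = np.full((8, 8), 3.0)
depth[2:6, 2:6] = 1.0  # the surface sits closer to the camera

regions = find_candidate_regions(image, target_color=(255, 255, 255))
hits = [r for r in regions
        if matches_depth_signature(depth, r, expected_depth=1.0)]
print(hits)  # [(2, 2, 6, 6)] — the candidate survives the depth check
```

The point of the split is economy: the color pass is fast but error-prone, and the depth check runs only inside the small candidate regions rather than over the whole depth map.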
- Patent number: 10671846
  Abstract: Techniques for efficiently identifying objects of interest in an environment and, thereafter, determining the location and/or orientation of those objects. As described below, a system may analyze images captured by a camera to identify objects that may be represented by the images. These objects may be identified in the images based on their size, color, and/or other physical attributes. After identifying these potential objects, the system may define a region around each object for further inspection. Thereafter, portions of a depth map of the environment corresponding to these regions may be analyzed to determine whether any of the objects identified from the images are “objects of interest”—or objects that the system has previously been instructed to track. These objects of interest may include portable projection surfaces, a user's hand, or any other physical object. The techniques identify these objects with reference to the respective depth signatures of these objects.
  Type: Grant
  Filed: February 6, 2017
  Date of Patent: June 2, 2020
  Assignee: Amazon Technologies, Inc.
  Inventors: Vijay Kamarshi, Prasanna Venkatesh Krishnasamy, Amit Tikare
- Patent number: 10514256
  Abstract: In some examples, a vision system includes multiple time of flight (ToF) cameras and a single illumination source. The illumination source and the multiple ToF cameras may be synchronized with each other, such as through a phase locked loop based on a generated control signal. A first one of the ToF cameras may be co-located with the illumination source, and a second one of the ToF cameras may be spaced away from the illumination source and the first ToF camera. For instance, the first ToF camera may have a wider field of view (FoV) for generating depth mapping of a scene, while the second ToF camera may have a narrower FoV for generating higher resolution depth mapping of a particular portion of the scene, such as for gesture recognition.
  Type: Grant
  Filed: May 6, 2013
  Date of Patent: December 24, 2019
  Assignee: Amazon Technologies, Inc.
  Inventors: Vijay Kamarshi, Robert Warren Sjoberg, Menashe Haskin, Amit Tikare
- Patent number: 9563955
  Abstract: Techniques for efficiently identifying objects of interest in an environment and, thereafter, tracking the location and/or orientation of those objects. As described below, a system may analyze images captured by a camera to identify objects that may be represented by the images. These objects may be identified in the images based on their size, color, and/or other physical attributes. After identifying these potential objects, the system may define a region around each object for further inspection. Thereafter, portions of a depth map of the environment corresponding to these regions may be analyzed to determine whether any of the objects identified from the images are “objects of interest”—or objects that the system has previously been instructed to track. These objects of interest may include portable projection surfaces, a user's hand, or any other physical object. The techniques identify these objects with reference to the respective depth signatures of these objects.
  Type: Grant
  Filed: May 15, 2013
  Date of Patent: February 7, 2017
  Assignee: Amazon Technologies, Inc.
  Inventors: Vijay Kamarshi, Prasanna Venkatesh Krishnasamy, Amit Tikare
- Patent number: 9558563
  Abstract: In a system that monitors the positions and movements of objects within an environment, a depth camera may be configured to produce depth images based on configurable measurement parameters such as illumination intensity and sensing duration. A supervisory component may be configured to roughly identify objects within an environment and to specify observation goals with respect to the objects. The measurement parameters of the depth camera may then be configured in accordance with the goals, and subsequent analyses of the environment may be based on depth images obtained using the measurement parameters.
  Type: Grant
  Filed: September 25, 2013
  Date of Patent: January 31, 2017
  Assignee: Amazon Technologies, Inc.
  Inventors: Vijay Kamarshi, Amit Tikare, Ronald Joseph Degges, Jr., Eric Wang, Christopher David Coley
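The supervisory loop described here amounts to a mapping from an observation goal to a camera configuration. A toy sketch of that mapping; the goal names, parameter ranges, and the distance-based intensity rule are all hypothetical choices for illustration, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class DepthCameraConfig:
    illumination_intensity: float  # relative power, 0..1
    sensing_duration_ms: float     # integration time per depth frame

def configure_for_goal(goal: str, object_distance_m: float) -> DepthCameraConfig:
    """Toy supervisory component: translate an observation goal into
    the camera's measurement parameters."""
    if goal == "coarse_scan":
        # Low power, short integration: enough to roughly place objects.
        return DepthCameraConfig(illumination_intensity=0.3,
                                 sensing_duration_ms=5.0)
    if goal == "fine_tracking":
        # Brighter illumination for more distant targets, longer sensing
        # for a cleaner depth image of the object being tracked.
        intensity = min(1.0, 0.5 + 0.1 * object_distance_m)
        return DepthCameraConfig(illumination_intensity=intensity,
                                 sensing_duration_ms=20.0)
    raise ValueError(f"unknown goal: {goal}")

cfg = configure_for_goal("fine_tracking", object_distance_m=3.0)
print(cfg)
```

The closed loop in the abstract then feeds depth images captured under `cfg` back to the supervisory component, which may revise the goal and reconfigure the camera.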
- Patent number: 9111338
  Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
  Type: Grant
  Filed: June 27, 2014
  Date of Patent: August 18, 2015
  Assignee: ARRIS Technology, Inc.
  Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
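The data flow in this abstract is: perceptual pixel map → first layer, original-vs-reference difference → weighting map, then combine both with the original picture. A sketch of that flow on toy arrays; the specific perceptual model, the assumption that motion compensation has already aligned the pictures, and the blend formula are all placeholders invented here:

```python
import numpy as np

def perceptual_pixel_map(picture):
    """Toy stand-in for the model human visual system: per-pixel
    contrast of the picture against its mean luminance."""
    return np.abs(picture - picture.mean())

def weighting_map(picture, reference):
    """Weights from the difference between the original picture and the
    reference picture (assumed already motion-compensated here)."""
    diff = np.abs(picture - reference)
    return diff / (diff.max() + 1e-9)

def process(picture, reference):
    layer = perceptual_pixel_map(picture)        # first layer
    weights = weighting_map(picture, reference)  # emphasis on changed areas
    # Toy blend: keep the original, but attenuate the perceptual layer
    # where little has changed between the two pictures.
    return picture - (1.0 - weights) * 0.1 * layer

orig = np.array([[10.0, 20.0], [30.0, 40.0]])
ref = np.array([[10.0, 20.0], [30.0, 50.0]])
out = process(orig, ref)
print(out)  # only the unchanged pixels are attenuated
```

In a preprocessing context this shape of pipeline spends fidelity where the pictures differ (motion, likely salient) and smooths where they do not, which is the intuition the abstract's weighting map captures.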
- Publication number: 20140314335
  Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
  Type: Application
  Filed: June 27, 2014
  Publication date: October 23, 2014
  Applicant: General Instrument Corporation
  Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
- Patent number: 8767127
  Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
  Type: Grant
  Filed: April 16, 2010
  Date of Patent: July 1, 2014
  Assignee: General Instrument Corporation
  Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare
- Publication number: 20100265404
  Abstract: A system includes a data storage configured to store a model human visual system, an input module configured to receive an original picture in a video sequence and to receive a reference picture, and a processor. The processor is configured to create a pixel map of the original picture using the model human visual system. A first layer is determined from the pixel map. A weighting map is determined from a motion compensated difference between the original picture and the reference picture. A processed picture is then determined from the original picture using the weighting map and the first layer.
  Type: Application
  Filed: April 16, 2010
  Publication date: October 21, 2010
  Applicant: General Instrument Corporation
  Inventors: Sean T. McCarthy, Vijay Kamarshi, Amit Tikare