Patents by Inventor Neel Suresh Joshi

Neel Suresh Joshi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160330399
    Abstract: Various technologies described herein pertain to creation of an output hyper-lapse video from an input video. Values indicative of overlaps between pairs of frames in the input video are computed. A value indicative of an overlap between a pair of frames can be computed based on a sparse set of points from each of the frames in the pair. Moreover, a subset of the frames from the input video is selected based on the values of the overlaps between the pairs of the frames in the input video and a target frame speed-up rate. Further, the output hyper-lapse video is generated based on the subset of the frames. The output hyper-lapse video can be generated without the remainder of the frames of the input video, i.e., the frames outside the selected subset.
    Type: Application
    Filed: May 8, 2015
    Publication date: November 10, 2016
    Inventors: Neel Suresh Joshi, Wolf Kienzle, Michael A. Toelle, Matthieu Uyttendaele, Michael F. Cohen
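    Illustrative sketch (not the patented algorithm): a minimal Python dynamic-programming frame selector that assumes a precomputed pairwise overlap matrix; the synthetic matrix and the cost weighting below are placeholders, not details from the patent.
      import numpy as np

      def select_frames(overlap, target_speedup, max_skip=8, lam=0.1):
          """overlap[i, j] in [0, 1]: estimated overlap between frames i and j."""
          n = overlap.shape[0]
          cost = np.full(n, np.inf)          # best path cost ending at frame j
          prev = np.full(n, -1, dtype=int)   # back-pointers for path recovery
          cost[0] = 0.0
          for j in range(1, n):
              for i in range(max(0, j - max_skip), j):
                  # Penalize low overlap and deviation from the target skip length.
                  c = cost[i] + (1.0 - overlap[i, j]) + lam * (j - i - target_speedup) ** 2
                  if c < cost[j]:
                      cost[j], prev[j] = c, i
          path, j = [], n - 1                # recover the chosen subset of frames
          while j >= 0:
              path.append(j)
              j = prev[j]
          return path[::-1]

      # Synthetic overlap matrix that decays with frame distance, for demonstration.
      n = 60
      idx = np.arange(n)
      overlap = np.clip(1.0 - np.abs(idx[:, None] - idx[None, :]) / 10.0, 0.0, 1.0)
      print(select_frames(overlap, target_speedup=4))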
  • Publication number: 20160275714
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: May 30, 2016
    Publication date: September 22, 2016
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
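    Illustrative sketch (a simplification, not the patented optimization): a brute-force per-pixel search in Python that picks, for each pixel independently, the (start time, loop period) minimizing the mismatch between the loop's first and wrap-around frames; the spatial-consistency terms a full method would need are omitted, and the input video is synthetic.
      import numpy as np

      def per_pixel_loops(video, min_period=8):
          """video: (T, H, W) grayscale volume. Returns per-pixel (start, period)."""
          T, H, W = video.shape
          best_cost = np.full((H, W), np.inf)
          best_start = np.zeros((H, W), dtype=int)
          best_period = np.full((H, W), min_period, dtype=int)
          for period in range(min_period, T):
              for start in range(0, T - period):
                  # Looping cost: mismatch between frame 'start' and frame 'start + period'.
                  cost = np.abs(video[start + period] - video[start])
                  better = cost < best_cost
                  best_cost[better] = cost[better]
                  best_start[better] = start
                  best_period[better] = period
          return best_start, best_period

      # Toy input: 40 frames of 16x16 noise.
      rng = np.random.default_rng(0)
      video = rng.random((40, 16, 16))
      start, period = per_pixel_loops(video)
      print(start.shape, int(period.min()), int(period.max()))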
  • Publication number: 20160202451
    Abstract: A lens assembly includes a plurality of component lens elements, and a fiber optic face plate having a back surface and a non-planar front surface. The plurality of component lens elements are configured to direct a focused image onto the non-planar front surface of the fiber optic face plate, and the fiber optic face plate is configured to transmit the focused image through the back surface.
    Type: Application
    Filed: March 24, 2016
    Publication date: July 14, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Brian Kevin Guenter, Neel Suresh Joshi, Changyin Zhou
  • Patent number: 9378578
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Grant
    Filed: February 15, 2016
    Date of Patent: June 28, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Publication number: 20160163085
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: February 15, 2016
    Publication date: June 9, 2016
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 9292956
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: March 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Publication number: 20150304560
    Abstract: The subject disclosure is directed towards modifying the apparent camera path of an existing video to produce a stylized video. Camera motion parameters such as horizontal and vertical translation, rotation, and zoom may be individually adjusted, for example via an equalizer-like set of interactive controls. Camera motion parameters may also be set by loading preset data, such as motion data acquired from another video clip.
    Type: Application
    Filed: April 21, 2014
    Publication date: October 22, 2015
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Daniel Scott Morris, Michael F. Cohen
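    Illustrative sketch (not the patented method): each motion parameter track is smoothed and its residual "shake" is rescaled by a per-parameter gain, mimicking an equalizer-like control; the motion tracks and gains below are made up for demonstration.
      import numpy as np

      def restyle_track(track, gain, smooth=9):
          """track: one motion parameter per frame; gain rescales the residual shake."""
          kernel = np.ones(smooth) / smooth
          base = np.convolve(track, kernel, mode="same")   # smoothed camera path
          return base + gain * (track - base)              # re-add scaled shake

      # Hypothetical per-frame motion parameters and per-parameter gains.
      frames = 120
      rng = np.random.default_rng(1)
      motion = {
          "tx": np.cumsum(rng.normal(0, 1, frames)),
          "ty": np.cumsum(rng.normal(0, 1, frames)),
          "rot": rng.normal(0, 0.01, frames),
          "zoom": 1.0 + rng.normal(0, 0.005, frames),
      }
      gains = {"tx": 0.0, "ty": 0.0, "rot": 2.0, "zoom": 1.0}   # e.g., remove translation shake, exaggerate rotation
      styled = {name: restyle_track(track, gains[name]) for name, track in motion.items()}
      print({name: round(float(t.std()), 3) for name, t in styled.items()})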
  • Publication number: 20150302158
    Abstract: Aspects of the subject disclosure are directed towards a video-based pulse/heart rate system that may use motion data to reduce or eliminate the effects of motion on pulse detection. Signal quality may be computed from (e.g., transformed) video signal data, such as by providing video signal feature data to a trained classifier that provides a measure of the quality of pulse information in each signal. Based upon the signal quality data, corresponding waveforms may be processed to select one for extracting pulse information therefrom. Heart rate data may be computed from the extracted pulse information, which may be smoothed into a heart rate value for a time window based upon confidence and/or prior heart rate data.
    Type: Application
    Filed: April 21, 2014
    Publication date: October 22, 2015
    Applicant: Microsoft Corporation
    Inventors: Daniel Scott Morris, Siddharth Khullar, Neel Suresh Joshi, Timothy Scott Saponas, Desney S. Tan
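    Illustrative sketch (not the patented system): the trained quality classifier is replaced here by a crude spectral signal-to-noise score, and smoothing is a fixed exponential blend; the candidate waveforms are synthetic.
      import numpy as np

      def band_spectrum(signal, fs, lo=0.7, hi=4.0):
          """Power spectrum restricted to the plausible heart-rate band."""
          freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
          power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
          band = (freqs >= lo) & (freqs <= hi)
          return freqs[band], power[band]

      def estimate_heart_rate(waveforms, fs, prev_bpm=None, alpha=0.7):
          """waveforms: candidate 1D pulse signals sampled at fs Hz."""
          best_score, best_bpm = -np.inf, None
          for w in waveforms:
              freqs, power = band_spectrum(w, fs)
              score = power.max() / (power.mean() + 1e-9)    # stand-in quality measure
              if score > best_score:
                  best_score = score
                  best_bpm = 60.0 * freqs[np.argmax(power)]
          if prev_bpm is not None:                           # temporal smoothing
              best_bpm = alpha * prev_bpm + (1 - alpha) * best_bpm
          return best_bpm

      # Toy example: one clean 1.2 Hz (72 bpm) signal among noise.
      fs = 30.0
      t = np.arange(0, 10, 1.0 / fs)
      rng = np.random.default_rng(2)
      clean = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size)
      noisy = rng.normal(size=t.size)
      print(round(estimate_heart_rate([noisy, clean], fs), 1))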
  • Publication number: 20150228062
    Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), and food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
    Type: Application
    Filed: February 12, 2014
    Publication date: August 13, 2015
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
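    Illustrative sketch (a toy stand-in, not the Food Logger itself): meals are represented by precomputed feature vectors, matched by nearest neighbour, and the predicted label indexes a small nutrition table; the features, labels, and nutrition values are invented for demonstration.
      import numpy as np

      # Hypothetical training data: feature vectors and their meal labels.
      train_features = np.array([[0.9, 0.1, 0.2],   # "burger"
                                 [0.1, 0.8, 0.3],   # "salad"
                                 [0.4, 0.2, 0.9]])  # "pasta"
      train_labels = ["burger", "salad", "pasta"]
      nutrition = {"burger": {"calories": 550, "protein_g": 25},
                   "salad": {"calories": 180, "protein_g": 6},
                   "pasta": {"calories": 420, "protein_g": 14}}

      def log_meal(features, portion_scale=1.0):
          """Classify a meal from its feature vector and scale nutrition by portion size."""
          dists = np.linalg.norm(train_features - features, axis=1)
          label = train_labels[int(np.argmin(dists))]
          return label, {k: v * portion_scale for k, v in nutrition[label].items()}

      print(log_meal(np.array([0.2, 0.75, 0.25]), portion_scale=1.5))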
  • Publication number: 20150002393
    Abstract: The mobile image viewing technique described herein provides a hands-free interface for viewing large imagery (e.g., 360 degree panoramas, parallax image sequences, and long multi-perspective panoramas) on mobile devices. The technique controls the imagery displayed on a display of a mobile device by movement of the mobile device. The technique uses sensors to track the mobile device's orientation and position, and a front-facing camera to track the user's viewing distance and viewing angle. The technique adjusts the view of the rendered imagery on the mobile device's display according to the tracked data. In one embodiment the technique can employ a sensor fusion methodology that combines viewer tracking using a front-facing camera with gyroscope data from the mobile device to produce a robust signal that defines the viewer's 3D position relative to the display.
    Type: Application
    Filed: September 16, 2014
    Publication date: January 1, 2015
    Inventors: Michael F. Cohen, Neel Suresh Joshi
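    Illustrative sketch (not the patented method): a simple complementary filter that blends a drift-free but noisy camera-based viewing angle with fast but biased gyroscope increments; the signals below are synthetic.
      import numpy as np

      def fuse_view_angle(camera_angles, gyro_rates, dt, alpha=0.98):
          """camera_angles: per-frame face-tracked angle (rad); gyro_rates: rad/s."""
          fused = np.empty(len(camera_angles))
          fused[0] = camera_angles[0]
          for i in range(1, len(camera_angles)):
              predicted = fused[i - 1] + gyro_rates[i] * dt                  # fast gyro update
              fused[i] = alpha * predicted + (1 - alpha) * camera_angles[i]  # correct drift
          return fused

      # Toy example: constant true rotation, noisy camera estimate, biased gyro.
      dt, n = 1.0 / 30.0, 300
      true = np.linspace(0, 0.5, n)
      rng = np.random.default_rng(3)
      camera = true + rng.normal(0, 0.02, n)          # noisy but unbiased
      gyro = np.gradient(true, dt) + 0.01             # smooth but biased
      print(round(float(np.abs(fuse_view_angle(camera, gyro, dt) - true).mean()), 4))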
  • Publication number: 20140327680
    Abstract: Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video.
    Type: Application
    Filed: May 3, 2013
    Publication date: November 6, 2014
    Applicant: Microsoft Corporation
    Inventors: Hugues Hoppe, Neel Suresh Joshi, Zicheng Liao
  • Patent number: 8872850
    Abstract: Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet.
    Type: Grant
    Filed: March 5, 2012
    Date of Patent: October 28, 2014
    Assignee: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sisil Sanjeev Mehta, Michael F. Cohen, Steven M. Drucker, Hugues Hoppe, Matthieu Uyttendaele
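    Illustrative sketch (not the patented pipeline): a still background frame is combined with a looping spatial region taken from the input video, with none of the spatial or temporal refinement described above; the video and region mask are synthetic.
      import numpy as np

      def make_cliplet(video, mask, start, end, num_out_frames):
          """video: (T, H, W); mask: (H, W) bool region; loop frames [start, end)."""
          background = video[start].copy()             # static layer from one frame
          loop_len = end - start
          out = np.empty((num_out_frames,) + background.shape, dtype=video.dtype)
          for t in range(num_out_frames):
              frame = background.copy()
              src = video[start + (t % loop_len)]      # looped dynamic layer
              frame[mask] = src[mask]                  # composite region over background
              out[t] = frame
          return out

      # Toy example: 20-frame 32x32 video, animate only a centred 10x10 window.
      rng = np.random.default_rng(4)
      video = rng.random((20, 32, 32))
      mask = np.zeros((32, 32), dtype=bool)
      mask[11:21, 11:21] = True
      print(make_cliplet(video, mask, start=2, end=10, num_out_frames=24).shape)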
  • Publication number: 20140293074
    Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene.
    Type: Application
    Filed: May 5, 2014
    Publication date: October 2, 2014
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sing Bing Kang, Michael F. Cohen, Kalyan Krishna Sunkavalli
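    Illustrative sketch (not the patented method): each pixel of each image is weighted by its similarity to the chosen reference image, so content that differs from the reference is down-weighted in the composite; the image stack is synthetic.
      import numpy as np

      def weighted_composite(images, reference_index, sigma=0.1):
          """images: (N, H, W) stack; returns a single composite image."""
          reference = images[reference_index]
          # Per-pixel weights favour agreement with the reference image.
          weights = np.exp(-((images - reference) ** 2) / (2 * sigma ** 2))
          return (weights * images).sum(axis=0) / weights.sum(axis=0)

      # Toy example: five noisy copies of a gradient image, one with a transient patch.
      rng = np.random.default_rng(5)
      base = np.tile(np.linspace(0, 1, 32), (32, 1))
      images = base + rng.normal(0, 0.05, (5, 32, 32))
      images[3, 10:20, 10:20] += 0.8                   # transient object in frame 3
      composite = weighted_composite(images, reference_index=0)
      print(round(float(np.abs(composite - base).mean()), 4))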
  • Patent number: 8750645
    Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene.
    Type: Grant
    Filed: December 10, 2009
    Date of Patent: June 10, 2014
    Assignee: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sing Bing Kang, Michael F. Cohen, Kalyan Krishna Sunkavalli
  • Publication number: 20130229581
    Abstract: Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet.
    Type: Application
    Filed: March 5, 2012
    Publication date: September 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sisil Sanjeev Mehta, Michael F. Cohen, Steven M. Drucker, Hugues Hoppe, Matthieu Uyttendaele
  • Patent number: 8428390
    Abstract: A “Blur Remover” provides various techniques for constructing deblurred images from a sequence of motion-blurred images such as a video sequence of a scene. Significantly, this deblurring is accomplished without requiring specialized side information or camera setups. In fact, the Blur Remover receives sequential images, such as a typical video stream captured using conventional digital video capture devices, and directly processes those images to generate or construct deblurred images for use in a variety of applications. No other input beyond the video stream is required for a variety of the embodiments enabled by the Blur Remover. More specifically, the Blur Remover uses joint global motion estimation and multi-frame deblurring with optional automatic video “duty cycle” estimation to construct deblurred images from video sequences for use in a variety of applications. Further, the automatically estimated video duty cycle is also separately usable in a variety of applications.
    Type: Grant
    Filed: June 14, 2010
    Date of Patent: April 23, 2013
    Assignee: Microsoft Corporation
    Inventors: Yunpeng Li, Sing Bing Kang, Neel Suresh Joshi, Steven Maxwell Seitz
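    Illustrative sketch (a heavy simplification, not the patented algorithm): global motion between two frames is estimated by phase correlation, a linear blur kernel of length proportional to the motion times an assumed duty cycle is built, and the frame is deblurred by Wiener deconvolution; the frames and duty cycle are synthetic.
      import numpy as np

      def estimate_shift(a, b):
          """Integer translation of b relative to a, via phase correlation."""
          F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
          corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
          peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
          shape = np.array(a.shape)
          peak[peak > shape / 2] -= shape[peak > shape / 2]   # wrap to signed shifts
          return peak

      def motion_kernel(shape, shift, duty_cycle):
          """Linear blur kernel along 'shift', scaled by the exposure duty cycle."""
          kernel = np.zeros(shape)
          cy, cx = shape[0] // 2, shape[1] // 2
          steps = max(int(round(np.hypot(*shift))), 1)
          for s in range(steps):
              t = s / steps
              kernel[cy + int(round(t * shift[0] * duty_cycle)),
                     cx + int(round(t * shift[1] * duty_cycle))] += 1.0
          return kernel / kernel.sum()

      def wiener_deblur(frame, kernel, snr=100.0):
          """Fourier-domain Wiener deconvolution of 'frame' by a centred 'kernel'."""
          K = np.fft.fft2(np.fft.ifftshift(kernel))
          H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
          return np.fft.ifft2(np.fft.fft2(frame) * H).real

      # Toy usage: estimate inter-frame motion, synthesize the implied blur, undo it.
      rng = np.random.default_rng(6)
      frame0 = rng.random((64, 64))
      frame1 = np.roll(frame0, (3, 5), axis=(0, 1))
      shift = estimate_shift(frame0, frame1)                  # approximately (3, 5)
      kernel = motion_kernel(frame1.shape, shift, duty_cycle=0.5)
      blurred = np.fft.ifft2(np.fft.fft2(frame1) * np.fft.fft2(np.fft.ifftshift(kernel))).real
      restored = wiener_deblur(blurred, kernel)
      print(shift, round(float(np.abs(restored - frame1).mean()), 4))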
  • Publication number: 20130003196
    Abstract: A lens assembly includes a plurality of component lens elements, and a fiber optic face plate having a back surface and a non-planar front surface. The plurality of component lens elements are configured to direct a focused image onto the non-planar front surface of the fiber optic face plate, and the fiber optic face plate is configured to transmit the focused image through the back surface.
    Type: Application
    Filed: June 29, 2011
    Publication date: January 3, 2013
    Applicant: Microsoft Corporation
    Inventors: Brian Kevin Guenter, Neel Suresh Joshi, Changyin Zhou
  • Publication number: 20120314899
    Abstract: The mobile image viewing technique described herein provides a hands-free interface for viewing large imagery (e.g., 360 degree panoramas, parallax image sequences, and long multi-perspective panoramas) on mobile devices. The technique controls the imagery displayed on a display of a mobile device by movement of the mobile device. The technique uses sensors to track the mobile device's orientation and position, and a front-facing camera to track the user's viewing distance and viewing angle. The technique adjusts the view of the rendered imagery on the mobile device's display according to the tracked data. In one embodiment the technique can employ a sensor fusion methodology that combines viewer tracking using a front-facing camera with gyroscope data from the mobile device to produce a robust signal that defines the viewer's 3D position relative to the display.
    Type: Application
    Filed: June 13, 2011
    Publication date: December 13, 2012
    Applicant: Microsoft Corporation
    Inventors: Michael F. Cohen, Neel Suresh Joshi
  • Publication number: 20110304687
    Abstract: A “Blur Remover” provides various techniques for constructing deblurred images from a sequence of motion-blurred images such as a video sequence of a scene. Significantly, this deblurring is accomplished without requiring specialized side information or camera setups. In fact, the Blur Remover receives sequential images, such as a typical video stream captured using conventional digital video capture devices, and directly processes those images to generate or construct deblurred images for use in a variety of applications. No other input beyond the video stream is required for a variety of the embodiments enabled by the Blur Remover. More specifically, the Blur Remover uses joint global motion estimation and multi-frame deblurring with optional automatic video “duty cycle” estimation to construct deblurred images from video sequences for use in a variety of applications. Further, the automatically estimated video duty cycle is also separately usable in a variety of applications.
    Type: Application
    Filed: June 14, 2010
    Publication date: December 15, 2011
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Sing Bing Kang, Yunpeng Li, Steven Maxwell Seitz
  • Publication number: 20110211758
    Abstract: The multi-image sharpening and denoising technique described herein creates a clean (low-noise, high-contrast), detailed image of a scene from a temporal series of images of the scene. The technique employs image alignment to remove global and local camera motion, together with a novel weighted image averaging procedure that avoids sacrificing sharpness, to create a high-detail, low-noise image from the temporal series. In addition, the multi-image sharpening and denoising technique can employ a dehazing procedure that uses a spatially varying airlight model to dehaze an input image.
    Type: Application
    Filed: March 1, 2010
    Publication date: September 1, 2011
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Michael F. Cohen
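    Illustrative sketch of the dehazing step only (not the patented procedure): it inverts the common haze model I = J*t + A*(1 - t) per pixel, with a spatially varying airlight map A and transmission t supplied as inputs; estimating A and t is the hard part and is not shown, and the scene here is synthetic.
      import numpy as np

      def dehaze(image, airlight, transmission, t_min=0.1):
          """image, airlight: (H, W); transmission: (H, W) values in (0, 1]."""
          t = np.maximum(transmission, t_min)            # avoid amplifying noise
          return (image - airlight) / t + airlight       # recover scene radiance J

      # Toy example: synthesize haze over a gradient scene, then remove it.
      H, W = 32, 32
      scene = np.tile(np.linspace(0.2, 0.8, W), (H, 1))
      airlight = np.full((H, W), 0.9)                              # could vary spatially
      transmission = np.tile(np.linspace(1.0, 0.4, H), (W, 1)).T   # denser haze lower down
      hazy = scene * transmission + airlight * (1 - transmission)
      print(round(float(np.abs(dehaze(hazy, airlight, transmission) - scene).max()), 6))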