Patents by Inventor Michael F. Cohen
Michael F. Cohen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9146119
Abstract: Various embodiments provide techniques for scrubbing variable paths in content. By way of example and not limitation, scrubbing can include receiving user input that defines a scrub path and navigating a data path through content based on the scrub path. According to some embodiments, a data path can include one or more predefined paths (e.g., a travel route) through the content. One or more of the techniques can account for variations in a data path and provide ways of maintaining adjacency between a scrub path and navigation along the data path. In some embodiments, a data path can be associated with one or more types of data path content that can be presented in response to a navigation of the data path.
Type: Grant
Filed: June 5, 2009
Date of Patent: September 29, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Billy P. Chen, Eyal Ofek, Michael F. Cohen
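A toy sketch of the core mapping the abstract describes: a scrub fraction (0 to 1) from user input is converted to a position along a predefined data path, here a 2-D polyline standing in for a travel route. The function names and the linear arc-length mapping are illustrative assumptions, not the patent's actual scheme.

```python
import math

def arc_lengths(path):
    """Cumulative distance along a polyline of (x, y) points."""
    total = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        total.append(total[-1] + math.hypot(x1 - x0, y1 - y0))
    return total

def scrub_to_point(path, scrub):
    """Map a scrub fraction in [0, 1] to a point on the data path."""
    lengths = arc_lengths(path)
    target = scrub * lengths[-1]
    for i in range(1, len(lengths)):
        if target <= lengths[i]:
            seg = lengths[i] - lengths[i - 1]
            t = (target - lengths[i - 1]) / seg if seg else 0.0
            (x0, y0), (x1, y1) = path[i - 1], path[i]
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return path[-1]

route = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0)]  # an L-shaped travel route
midpoint = scrub_to_point(route, 0.5)          # halfway along the route
```

Because the mapping is by arc length rather than by vertex index, equal scrub increments move the view an equal distance along the route, which is one plausible way to keep the scrub path and the data-path navigation adjacent.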
-
Publication number: 20150248916
Abstract: Various technologies described herein pertain to generation of an output hyper-lapse video from an input video. A smoothed camera path can be computed based upon the input video. Further, output camera poses can be selected from the smoothed camera path for output frames of the output hyper-lapse video. One or more selected input frames from the input video can be chosen for an output frame. The selected input frames can be chosen based at least in part upon an output camera pose for the output frame. Moreover, the selected input frames can be combined to render the output frame. Choosing selected input frames from the input video and combining the selected input frames can be performed for each of the output frames of the output hyper-lapse video.
Type: Application
Filed: June 30, 2014
Publication date: September 3, 2015
Inventors: Johannes Peter Kopf, Michael F. Cohen, Richard Szeliski
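A simplified 1-D sketch of the two steps the abstract names: smooth the camera path, then choose input frames based on the output camera poses. The moving-average smoother, the nearest-pose selection rule, and all names are illustrative assumptions, not the optimization the application actually claims.

```python
def smooth_path(positions, radius=2):
    """Moving-average smoothing of a 1-D camera path."""
    smoothed = []
    for i in range(len(positions)):
        lo = max(0, i - radius)
        hi = min(len(positions), i + radius + 1)
        window = positions[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

def select_frames(input_positions, output_poses):
    """For each output camera pose, choose the closest input frame."""
    chosen = []
    for pose in output_poses:
        best = min(range(len(input_positions)),
                   key=lambda i: abs(input_positions[i] - pose))
        chosen.append(best)
    return chosen

# An 8x hyper-lapse: sample every 8th pose along the smoothed path.
raw = [0, 1, 1, 3, 3, 5, 8, 8, 9, 12, 12, 14, 17, 17, 18, 21]
path = smooth_path(raw)
poses = path[::8]
frames = select_frames(raw, poses)
```

The real system combines several selected input frames per output frame and renders them against the chosen pose; this sketch keeps only the path-then-selection structure.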
-
Publication number: 20150245216
Abstract: Systems and methods of a personal daemon, executing as a background process on a mobile computing device, for providing personal assistance to an associated user are presented. While the personal daemon maintains personal information corresponding to the associated user, the personal daemon is configured not to share the personal information of the associated user with any entity other than the associated user, except as permitted by rules established by the associated user. The personal daemon monitors and analyzes the actions of the associated user to determine additional personal information about the associated user. Additionally, upon receiving one or more notices of events from a plurality of sensors associated with the mobile computing device, the personal daemon executes a personal assistance action on behalf of the associated user.
Type: Application
Filed: February 24, 2014
Publication date: August 27, 2015
Applicant: Microsoft Corporation
Inventors: Michael F. Cohen, Douglas C. Burger, Asta Roseway, Andrew D. Wilson, Blaise Hilary Aguera y Arcas, Daniel Lee Massey
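A minimal sketch of the sharing policy described above: the daemon holds personal data and releases it only to the associated user, or when a user-established rule explicitly permits it. The class shape, the rule representation as predicates, and all names are illustrative assumptions.

```python
class PersonalDaemon:
    """Background agent that guards one user's personal information."""

    def __init__(self, owner):
        self.owner = owner
        self.personal_info = {}
        self.rules = []  # user-established predicates: (requester, key) -> bool

    def learn(self, key, value):
        """Record personal information observed about the owner."""
        self.personal_info[key] = value

    def request(self, requester, key):
        """Share only with the owner, or when a rule allows the requester."""
        if requester == self.owner:
            return self.personal_info.get(key)
        if any(rule(requester, key) for rule in self.rules):
            return self.personal_info.get(key)
        return None  # default: never share with other entities

daemon = PersonalDaemon("alice")
daemon.learn("home_city", "Seattle")
# The owner permits one specific app to read one specific field:
daemon.rules.append(lambda req, key: req == "calendar_app" and key == "home_city")
```

The default-deny shape mirrors the abstract's wording: no sharing with any other entity unless a rule established by the user says otherwise.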
-
Publication number: 20150002393
Abstract: The mobile image viewing technique described herein provides a hands-free interface for viewing large imagery (e.g., 360-degree panoramas, parallax image sequences, and long multi-perspective panoramas) on mobile devices. The technique controls the imagery displayed on a display of a mobile device by movement of the mobile device. The technique uses sensors to track the mobile device's orientation and position, and a front-facing camera to track the user's viewing distance and viewing angle. The technique adjusts the view of the rendered imagery on the mobile device's display according to the tracked data. In one embodiment the technique can employ a sensor fusion methodology that combines viewer tracking using a front-facing camera with gyroscope data from the mobile device to produce a robust signal that defines the viewer's 3D position relative to the display.
Type: Application
Filed: September 16, 2014
Publication date: January 1, 2015
Inventors: Michael F. Cohen, Neel Suresh Joshi
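An illustrative sketch of the sensor-fusion idea: blend fast but drifting gyroscope increments with slower absolute viewer-angle estimates from the front-facing camera. The complementary-filter form and the 0.98 weight are assumptions chosen for illustration; the publication does not commit to this exact filter.

```python
def fuse_angle(prev_angle, gyro_rate, dt, camera_angle, alpha=0.98):
    """Complementary filter: integrate the gyro, then pull gently
    toward the camera's absolute estimate to cancel drift."""
    integrated = prev_angle + gyro_rate * dt
    return alpha * integrated + (1.0 - alpha) * camera_angle

# Feed a short stream of (gyro_rate, dt, camera_angle) samples:
angle = 0.0
samples = [(0.5, 0.01, 0.0), (0.5, 0.01, 0.02)]
for rate, dt, cam in samples:
    angle = fuse_angle(angle, rate, dt, cam)
```

The high `alpha` keeps the responsive gyro signal dominant frame to frame, while the camera term bounds long-term drift, which is one common way to get the "robust signal" the abstract mentions.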
-
Patent number: 8872850
Abstract: Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet.
Type: Grant
Filed: March 5, 2012
Date of Patent: October 28, 2014
Assignee: Microsoft Corporation
Inventors: Neel Suresh Joshi, Sisil Sanjeev Mehta, Michael F. Cohen, Steven M. Drucker, Hugues Hoppe, Matthieu Uyttendaele
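A minimal sketch of the compositing step: a static background frame plus a short dynamic segment composited over a spatial region, with a looping temporal mapping. Frames are plain 2-D lists, and the modular loop is an illustrative stand-in for the patent's predefined temporal mapping function.

```python
def loop_mapping(t, segment_length):
    """Map an output frame index into the dynamic segment (a loop)."""
    return t % segment_length

def composite(background, dynamic_frames, region, num_output_frames):
    """Paste the looped dynamic segment over the background region."""
    (r0, c0, r1, c1) = region
    output = []
    for t in range(num_output_frames):
        frame = [row[:] for row in background]   # copy the still background
        src = dynamic_frames[loop_mapping(t, len(dynamic_frames))]
        for r in range(r0, r1):
            for c in range(c0, c1):
                frame[r][c] = src[r - r0][c - c0]
        output.append(frame)
    return output

bg = [[0] * 4 for _ in range(4)]                 # 4x4 static background
dyn = [[[1, 1], [1, 1]], [[2, 2], [2, 2]]]       # 2-frame dynamic segment
clip = composite(bg, dyn, (1, 1, 3, 3), 5)       # 5 output frames
```

Everything outside the selected region stays frozen while the region cycles, which is the still-plus-motion juxtaposition that defines a cliplet.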
-
Publication number: 20140293074
Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene.
Type: Application
Filed: May 5, 2014
Publication date: October 2, 2014
Applicant: Microsoft Corporation
Inventors: Neel Suresh Joshi, Sing Bing Kang, Michael F. Cohen, Kalyan Krishna Sunkavalli
-
Patent number: 8750645
Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene.
Type: Grant
Filed: December 10, 2009
Date of Patent: June 10, 2014
Assignee: Microsoft Corporation
Inventors: Neel Suresh Joshi, Sing Bing Kang, Michael F. Cohen, Kalyan Krishna Sunkavalli
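A hedged sketch of the per-pixel weighting idea: weight each pixel in each image of the sequence by its similarity to the reference image, then blend with those weights. The Gaussian similarity weight is an illustrative assumption; the patent does not specify this exact weighting function.

```python
import math

def pixel_weight(value, reference_value, sigma=10.0):
    """Higher weight for pixel values closer to the reference image."""
    d = value - reference_value
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def composite_images(images, reference):
    """Per-pixel weighted average across a sequence of images."""
    rows, cols = len(reference), len(reference[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            num = den = 0.0
            for img in images:
                w = pixel_weight(img[r][c], reference[r][c])
                num += w * img[r][c]
                den += w
            out[r][c] = num / den
    return out

# A 1x2-pixel sequence; the third image has an outlier at pixel 0.
seq = [[[100.0, 50.0]], [[102.0, 52.0]], [[200.0, 51.0]]]
ref = seq[0]
result = composite_images(seq, ref)
```

Because each pixel is weighted independently, an outlier (e.g., a passerby in one frame) gets near-zero weight at its pixels while consistent pixels still average together for noise reduction.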
-
Patent number: 8619071
Abstract: A novel image view may be synthesized using a three-dimensional reference model. In an example embodiment, a device-implemented method for synthesizing a novel image view includes acts of registering, selecting, applying and synthesizing. An image is registered to at least one reference model. A source block of visual data from the image is selected with regard to a destination block of the reference model based on a source depth associated with the source block and a destination depth associated with the destination block. The destination position of the destination block of the reference model is not visible in the image. The source block of visual data from the image is applied to the destination block of the reference model to produce an image-augmented model. A novel image view is synthesized using the image-augmented model.
Type: Grant
Filed: September 16, 2008
Date of Patent: December 31, 2013
Assignee: Microsoft Corporation
Inventors: Johannes P. Kopf, Michael F. Cohen, Daniel Lischinski, Matthieu T. Uyttendaele
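A hedged sketch of the depth-guided block transfer: a source block from the registered image is copied into a destination block of the reference model only when their depths are close enough. The flat block representation, the depth-gap threshold, and all names are illustrative assumptions.

```python
def select_source_block(source_blocks, destination_depth, max_depth_gap=1.0):
    """Pick the source block whose depth best matches the destination's."""
    best = None
    for block in source_blocks:  # each: {"depth": d, "pixels": [...]}
        gap = abs(block["depth"] - destination_depth)
        if gap <= max_depth_gap and (best is None or gap < best[0]):
            best = (gap, block)
    return None if best is None else best[1]

def apply_block(model, dest_index, block):
    """Write the chosen block's pixels into the image-augmented model."""
    model[dest_index] = list(block["pixels"])
    return model

blocks = [{"depth": 2.0, "pixels": [9, 9]},
          {"depth": 7.5, "pixels": [3, 3]}]
model = {0: [0, 0], 1: [0, 0]}           # destination blocks, initially empty
chosen = select_source_block(blocks, destination_depth=7.0)
augmented = apply_block(model, 1, chosen)
```

Matching on depth is what lets texture from a visible surface plausibly fill a model region the image never saw, which is the point of the image-augmented model.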
-
Patent number: 8610741
Abstract: Techniques and systems are disclosed for navigating human scale image data using aligned perspective images. A consecutive sequence of digital images is stacked together by aligning consecutive images laterally with an image offset between edges of consecutive images corresponding to a distance between respective view windows of the consecutive images. A view window of an image in the sequence is rendered, where the view window of the image corresponds to a desired location. Offset portions of the view window of a desired number of images in the sequence are rendered, for example, alongside the full view of the image at the desired location.
Type: Grant
Filed: June 2, 2009
Date of Patent: December 17, 2013
Assignee: Microsoft Corporation
Inventors: Richard S. Szeliski, Johannes P. Kopf, Michael F. Cohen, Eric J. Stollnitz
-
Patent number: 8581900
Abstract: Various embodiments provide a global approach for computing transitions between captured runs through an intersection. In accordance with one or more embodiments, a transition algorithm receives as input various runs that have been captured through an intersection and an input path through the intersection. The transition algorithm processes its inputs and provides, as an output, a set of points and data such as a direction associated with each of the points. The set of points includes points from different captured runs. The output set of points and associated data indicate which images to obtain from a database and which field of view to use to create a simulated turn for the user.
Type: Grant
Filed: June 10, 2009
Date of Patent: November 12, 2013
Assignee: Microsoft Corporation
Inventors: Billy P. Chen, Michael F. Cohen, Eyal Ofek, Blaise H. Aguera y Arcas
-
Publication number: 20130229581
Abstract: Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet.
Type: Application
Filed: March 5, 2012
Publication date: September 5, 2013
Applicant: Microsoft Corporation
Inventors: Neel Suresh Joshi, Sisil Sanjeev Mehta, Michael F. Cohen, Steven M. Drucker, Hugues Hoppe, Matthieu Uyttendaele
-
Patent number: 8489331
Abstract: A user interface is presented via which user inputs can be received and maps can be displayed. A user selection of a destination and a user specification of a region of interest on a map are received. The region of interest surrounds the destination on the map. In response to receiving the user specification of the region of interest, a destination map is displayed via the user interface. The destination map includes both the destination and the region of interest, and a layout of one or more roads that include one or more routes to the destination at multiple different scales.
Type: Grant
Filed: April 29, 2010
Date of Patent: July 16, 2013
Assignee: Microsoft Corporation
Inventors: Johannes P. Kopf, Michael F. Cohen
-
Publication number: 20120314899
Abstract: The mobile image viewing technique described herein provides a hands-free interface for viewing large imagery (e.g., 360-degree panoramas, parallax image sequences, and long multi-perspective panoramas) on mobile devices. The technique controls the imagery displayed on a display of a mobile device by movement of the mobile device. The technique uses sensors to track the mobile device's orientation and position, and a front-facing camera to track the user's viewing distance and viewing angle. The technique adjusts the view of the rendered imagery on the mobile device's display according to the tracked data. In one embodiment the technique can employ a sensor fusion methodology that combines viewer tracking using a front-facing camera with gyroscope data from the mobile device to produce a robust signal that defines the viewer's 3D position relative to the display.
Type: Application
Filed: June 13, 2011
Publication date: December 13, 2012
Applicant: Microsoft Corporation
Inventors: Michael F. Cohen, Neel Suresh Joshi
-
Patent number: 8330802
Abstract: The stereo movie editing technique described herein combines knowledge of both multi-view stereo algorithms and human depth perception. The technique creates a digital editor, specifically for stereographic cinema. The technique employs an interface that allows intuitive manipulation of the different parameters in a stereo movie setup, such as camera locations and screen position. Using the technique it is possible to reduce or enhance well-known stereo movie effects such as cardboarding and miniaturization. The technique also provides new editing techniques such as directing the user's attention and easier transitions between scenes.
Type: Grant
Filed: December 9, 2008
Date of Patent: December 11, 2012
Assignee: Microsoft Corp.
Inventors: Sanjeev J. Koppal, Sing Bing Kang, Charles Lawrence Zitnick, III, Michael F. Cohen, Bryan Kent Ressler
-
Patent number: 8290294
Abstract: An image may be dehazed using a three-dimensional reference model. In an example embodiment, a device-implemented method for dehazing includes acts of registering, estimating, and producing. An image that includes haze is registered to a reference model. A haze curve is estimated for the image based on a relationship between colors in the image and colors and depths of the reference model. A dehazed image is produced by using the estimated haze curve to reduce the haze of the image.
Type: Grant
Filed: September 16, 2008
Date of Patent: October 16, 2012
Assignee: Microsoft Corporation
Inventors: Johannes P. Kopf, Michael F. Cohen, Daniel Lischinski, Matthieu T. Uyttendaele
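A simplified sketch of the final "producing" step, once a haze curve is known: the standard haze model I = J·t + A·(1 − t) with transmission t = exp(−beta·depth), where depth comes from the registered reference model. The airlight A and the falloff beta are illustrative constants; the patent estimates the haze curve from image/model color-and-depth pairs rather than assuming it.

```python
import math

def transmission(depth, beta):
    """Exponential haze transmission as a function of scene depth."""
    return math.exp(-beta * depth)

def dehaze_pixel(observed, depth, airlight, beta):
    """Invert the haze model to recover the haze-free intensity."""
    t = transmission(depth, beta)
    return (observed - airlight * (1.0 - t)) / t

# Simulate a distant pixel washed out toward the airlight, then invert:
clear = 40.0
t = transmission(200.0, 0.005)            # depth 200, assumed beta
hazy = clear * t + 220.0 * (1.0 - t)      # forward haze model
recovered = dehaze_pixel(hazy, 200.0, 220.0, 0.005)
```

Because the reference model supplies a depth for each pixel, the inversion can vary the amount of haze removed with distance instead of applying one global correction.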
-
Publication number: 20110270516
Abstract: A user interface is presented via which user inputs can be received and maps can be displayed. A user selection of a destination and a user specification of a region of interest on a map are received. The region of interest surrounds the destination on the map. In response to receiving the user specification of the region of interest, a destination map is displayed via the user interface. The destination map includes both the destination and the region of interest, and a layout of one or more roads that include one or more routes to the destination at multiple different scales.
Type: Application
Filed: April 29, 2010
Publication date: November 3, 2011
Applicant: Microsoft Corporation
Inventors: Johannes P. Kopf, Michael F. Cohen
-
Publication number: 20110211758
Abstract: The multi-image sharpening and denoising technique described herein creates a clean (low-noise, high contrast), detailed image of a scene from a temporal series of images of the scene. The technique employs a process of image alignment to remove global and local camera motion plus a novel weighted image averaging procedure that avoids sacrificing sharpness to create a resultant high-detail, low-noise image from the temporal series. In addition, the multi-image sharpening and denoising technique can employ a dehazing procedure that uses a spatially varying airlight model to dehaze an input image.
Type: Application
Filed: March 1, 2010
Publication date: September 1, 2011
Applicant: Microsoft Corporation
Inventors: Neel Suresh Joshi, Michael F. Cohen
-
Publication number: 20110142370
Abstract: A method described herein includes acts of receiving a sequence of images of a scene and receiving an indication of a reference image in the sequence of images. The method further includes an act of automatically assigning one or more weights independently to each pixel in each image in the sequence of images of the scene. Additionally, the method includes an act of automatically generating a composite image based at least in part upon the one or more weights assigned to each pixel in each image in the sequence of images of the scene.
Type: Application
Filed: December 10, 2009
Publication date: June 16, 2011
Applicant: Microsoft Corporation
Inventors: Neel Suresh Joshi, Sing Bing Kang, Michael F. Cohen, Kalyan Krishna Sunkavalli
-
Publication number: 20110069089
Abstract: Embodiments of power management for OLED displays are described. In various embodiments, power consumption for an OLED display can be managed by adjusting brightness of individual pixels. An input image can be obtained and processed using an algorithm that reduces brightness and maintains perceived contrast. This can involve computing a difference value associated with individual pixels of the image to account for perceived contrast and computing a reduced brightness value for the pixel using the difference value. An ultra-low power mode in which power consumption of the OLED display is adjusted semantically can be employed for a low brightness range. The algorithm and the ultra-low power mode can be combined to provide a continuous range of adjustment for the OLED display.
Type: Application
Filed: November 2, 2009
Publication date: March 24, 2011
Applicant: Microsoft Corporation
Inventors: Johannes P. Kopf, Georg F. Petschnigg, Michael F. Cohen
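An illustrative sketch of the "difference value" idea: every pixel is dimmed (OLED power scales with emitted light), but pixels that differ strongly from their surroundings keep more brightness so perceived contrast survives. The local-mean difference measure and the two coefficients are assumptions for illustration only.

```python
def reduce_brightness(pixels, dim=0.6, contrast_boost=0.3):
    """Dim a row of pixel intensities while boosting local contrast."""
    mean = sum(pixels) / len(pixels)       # crude local-neighborhood mean
    out = []
    for p in pixels:
        difference = p - mean              # the per-pixel difference value
        reduced = dim * p + contrast_boost * difference
        out.append(min(255.0, max(0.0, reduced)))
    return out

row = [10.0, 200.0, 30.0, 220.0]           # a high-contrast pixel row
dimmed = reduce_brightness(row)
```

Total emitted intensity (and hence OLED power) drops, while bright pixels remain clearly brighter than their dark neighbors; the abstract's separate ultra-low "semantic" mode is not sketched here.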
-
Patent number: 7889949
Abstract: A "Joint Bilateral Upsampler" uses a high-resolution input signal to guide the interpolation of a low-resolution solution set (derived from a downsampled version of the input signal) from low- to high-resolution. The resulting high-resolution solution set is then saved or applied to the original input signal to produce a high-resolution output signal. The high-resolution solution set is close to what would be produced directly from the input signal without downsampling. However, since the high-resolution solution set is constructed in part from a downsampled version of the input signal, it is computed using significantly less computational overhead and memory than a solution set computed directly from a high-resolution signal. Consequently, the Joint Bilateral Upsampler is advantageous for use in near real-time operations, in applications where user wait times are important, and in systems where computational costs and available memory are limited.
Type: Grant
Filed: April 30, 2007
Date of Patent: February 15, 2011
Assignee: Microsoft Corporation
Inventors: Michael F. Cohen, Matthew T. Uyttendaele, Daniel Lischinski, Johannes Kopf
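A 1-D sketch of joint bilateral upsampling: a low-resolution solution (e.g., a depth or tone-adjustment map computed on a downsampled image) is interpolated up to the guidance signal's resolution, with range weights taken from the high-resolution guidance so the result respects its edges. The parameter values and names are illustrative assumptions.

```python
import math

def gaussian(d, sigma):
    return math.exp(-(d * d) / (2.0 * sigma * sigma))

def joint_bilateral_upsample(solution_lo, guide_hi, factor,
                             sigma_spatial=1.0, sigma_range=10.0):
    """Upsample solution_lo to len(guide_hi) samples, guided by guide_hi."""
    out = []
    for i, g in enumerate(guide_hi):
        x_lo = i / factor                  # this pixel's position in low-res space
        num = den = 0.0
        for j, s in enumerate(solution_lo):
            # Spatial weight in low-res coordinates; range weight compares
            # the HIGH-resolution guidance values (the "joint" part).
            g_j = guide_hi[min(j * factor, len(guide_hi) - 1)]
            w = gaussian(j - x_lo, sigma_spatial) * gaussian(g_j - g, sigma_range)
            num += w * s
            den += w
        out.append(num / den)
    return out

guide = [0.0, 0.0, 0.0, 100.0, 100.0, 100.0]   # sharp edge in the guidance
solution = [5.0, 50.0, 80.0]                   # blurry low-res solution
upsampled = joint_bilateral_upsample(solution, guide, factor=2)
```

Because the range weight is evaluated on the high-resolution guidance, low-resolution samples from the wrong side of the edge are suppressed, so the upsampled solution stays sharp where the guidance is sharp instead of blurring across the edge as plain interpolation would.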