Patents by Inventor Nitin Suri
Nitin Suri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230412855
Abstract: Techniques and solutions are described for executing a video processing task. A video processing task is received that includes one or more operations to be performed on a digital video file and an identifier of the digital video file. The video processing task is divided into subtasks of operations to be performed on fragments of the video, such as fragments having a particular duration. The duration can correspond to a duration used for video streaming. Compared with video processing that is performed as a single task, disclosed techniques can provide improved fault tolerance, as only failed tasks need to be reprocessed. Video processing subtasks can be distributed to a plurality of workers, which can further improve fault tolerance, and can increase the computing power available for video processing, including allowing for the use of heterogeneous or unreliable workers.
Type: Application
Filed: June 17, 2022
Publication date: December 21, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Nicholas Tegan Heckman, Steven Craig Peterson, Nitin Suri, Jason Allen Whitehouse
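The fragmentation and distribution steps the abstract describes can be sketched as follows. This is an illustrative outline only, not the claimed implementation; the fragment duration, the (start, end) task shape, and the round-robin worker assignment are all assumptions.

```python
# Illustrative sketch of splitting a video processing task into
# fixed-duration fragment subtasks and distributing them to workers.
def split_into_subtasks(total_duration, fragment_duration):
    """Yield (start, end) second ranges covering the whole video."""
    subtasks = []
    start = 0.0
    while start < total_duration:
        end = min(start + fragment_duration, total_duration)
        subtasks.append((start, end))
        start = end
    return subtasks

def distribute(subtasks, workers):
    """Round-robin assignment of subtask index to worker."""
    return {i: workers[i % len(workers)] for i in range(len(subtasks))}
```

Because each fragment is an independent subtask, a failure only requires reprocessing that fragment's range rather than the whole file, which is the fault-tolerance benefit the abstract claims.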
-
Publication number: 20230412866
Abstract: A method and system for uploading a media file container from a first device to a second device are described herein, including receiving an instruction to upload the media file container and in response, reading a metadata box of the media file container to locate a track box containing information about video data in a media data box, identifying sample frames of the video data throughout a duration of the video data in the media data box using information from the track box, packaging the identified sample frames, and uploading the packaged sample frames of the video data prior to completing upload of the media file container.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Inventors: Nicholas Tegan Heckman, Ohad Atia, Nitin Suri, Steven Craig Peterson
-
Publication number: 20230412669
Abstract: A method and system for uploading a media file container from a first device to a second device are described herein, including receiving an instruction to upload the media file container and in response, reading a metadata box of the media file container to locate a track box containing information about audio data, including a size and a location of the audio data, in a media data box of the media file container, identifying the audio data in the media data box using the information from the track box, packaging the identified audio data from the media data box into an audio byte stream separate from the media data box, and uploading the audio byte stream to the second device prior to completing upload of the media file container.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Inventors: Nicholas Tegan Heckman, Ohad Atia, Nitin Suri, Steven Craig Peterson
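The "reading a metadata box ... to locate a track box" step relies on walking the container's box structure. A minimal sketch of that walk for an ISO BMFF (MP4-style) container is below; it is simplified for illustration (it ignores 64-bit extended sizes and boxes whose size field is 0, which real files can use), and the synthetic container bytes are fabricated for the example.

```python
import struct

def iter_boxes(data):
    """Yield (box_type, offset, size) for top-level ISO BMFF boxes.
    Each box starts with a 4-byte big-endian size and a 4-char type."""
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii")
        yield box_type, offset, size
        offset += size

# Synthetic container: an 8-byte header-only 'ftyp' box, a 16-byte
# 'moov' (metadata) box, and a 24-byte 'mdat' (media data) box,
# with zero-filled payloads.
container = (
    struct.pack(">I4s", 8, b"ftyp")
    + struct.pack(">I4s", 16, b"moov") + bytes(8)
    + struct.pack(">I4s", 24, b"mdat") + bytes(16)
)
```

Once the `moov` box is located this way, its track boxes describe where the audio samples live inside `mdat`, which is what lets those bytes be packaged and uploaded ahead of the rest of the container.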
-
Publication number: 20230412901
Abstract: A method and system for uploading a media file container from a first device to a second device are described herein, including receiving an instruction to upload the media file container and in response, identifying a first portion of the media file container and a last portion of the media file container, each of the first and last portions having a size in bytes and including at least one box of the media file container, and uploading the first portion of the media file container and the last portion of the media file container before the intervening portions of the media file container between the first and last portions.
Type: Application
Filed: June 15, 2022
Publication date: December 21, 2023
Inventors: Nicholas Tegan Heckman, Ohad Atia, Nitin Suri, Steven Craig Peterson
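The upload ordering the abstract describes (first portion, then last portion, then everything in between) can be sketched as a byte-range schedule. The head and tail sizes here are hypothetical parameters; in practice they would be chosen to cover the container's metadata boxes.

```python
def upload_order(file_size, head_size, tail_size):
    """Return (start, end) byte ranges in upload order: the first
    portion, then the last portion, then the intervening middle."""
    head = (0, head_size)
    tail = (file_size - tail_size, file_size)
    middle = (head_size, file_size - tail_size)
    return [head, tail, middle]
```

Sending the head and tail first is useful because container metadata often sits at one end of the file, so the receiver can begin interpreting the media before the bulk of the data arrives.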
-
Publication number: 20230388515
Abstract: Techniques and solutions are described for encoding digital video files, such as for streaming applications. Data associated with the digital video file forms a dataset that can be characterized by a measure of the dataset's center, such as an average, and a spread of the dataset, such as a deviation, with respect to a bitrate over a duration of the digital video file. The measures of center and spread are used to calculate a deviation-adjusted bitrate. A deviation-adjusted bitrate can be calculated for the entire digital video file, or for particular subsets of the digital video file, such as for segments of a duration forming units of video streaming. Disclosed techniques can provide various advantages, including using a reduced bitrate for video or video portions as compared with an average or static bitrate, for lower-complexity video, or using a higher bitrate for video or video portions for higher-complexity video.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Nitin Suri, Carlos Alberto Lopez Servin
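One plausible reading of "measure of center plus spread" is a mean-plus-scaled-deviation combination over per-segment bitrates, sketched below. The specific combination (mean + k x population standard deviation) and the scale factor `k` are illustrative assumptions, not the formula from the filing.

```python
from statistics import mean, pstdev

def deviation_adjusted_bitrate(segment_bitrates, k=1.0):
    """Combine a measure of center (mean) with a measure of spread
    (population standard deviation), scaled by an assumed factor k."""
    return mean(segment_bitrates) + k * pstdev(segment_bitrates)
```

Applied per streaming segment rather than to the whole file, this kind of measure rises for high-complexity (high-variance) stretches and falls for steady, low-complexity ones, matching the advantage the abstract claims.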
-
Patent number: 10664687
Abstract: The importance of video sections of a video file may be determined from features of the video file. The video file may be decoded to obtain video frames and audio data associated with the video frames. Feature scores for each video frame may be obtained by analyzing features of the video frame or the audio data associated with the video frame based on a local rule, a global rule, or both. The feature scores are further combined to derive a frame importance score for the video frame. Based on the feature scores of the video frames in the video file, the video file may be segmented into video sections of different section importance values.
Type: Grant
Filed: June 12, 2014
Date of Patent: May 26, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nitin Suri, Tzong-Jhy Wang, Omkar Mehendale, Andrew S. Ivory, William D. Sproule
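The score-combination and sectioning steps can be sketched as below. A weighted sum and fixed-length section averaging are assumptions chosen for illustration; the patent's actual combination rules (local and global) are not specified in the abstract.

```python
def frame_importance(feature_scores, weights):
    """Combine per-feature scores for one frame into a single frame
    importance score (here, an assumed weighted sum)."""
    return sum(weights[name] * score for name, score in feature_scores.items())

def section_importance(frame_scores, section_length):
    """Average frame importance scores over fixed-length sections."""
    sections = []
    for i in range(0, len(frame_scores), section_length):
        chunk = frame_scores[i:i + section_length]
        sections.append(sum(chunk) / len(chunk))
    return sections
```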
-
Patent number: 9934558
Abstract: Technologies for a single-pass process for enhancing video quality with temporal smoothing. The process may include providing for user overrides of automatically enhanced video/frame characteristics and providing substantially immediate previews of enhanced video frames to a user. The process may also include detecting a degree of shakiness in a portion of the video, and performing or recommending stabilization based on the detected shakiness.
Type: Grant
Filed: September 8, 2016
Date of Patent: April 3, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nitin Suri, Andrew Shaun Ivory, Tzong-Jhy Wang, Bruce Justin Lindbloom, William David Sproule
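"Temporal smoothing" of per-frame enhancement values is commonly done with a recursive filter; the exponential moving average below is one standard way to sketch the idea. The filter choice and the smoothing factor `alpha` are assumptions, not taken from the patent.

```python
def temporally_smooth(values, alpha=0.3):
    """Exponential moving average over per-frame values (e.g. a
    brightness correction), so adjustments change gradually across
    frames instead of jumping frame to frame."""
    if not values:
        return []
    smoothed = [values[0]]
    for v in values[1:]:
        smoothed.append(alpha * v + (1 - alpha) * smoothed[-1])
    return smoothed
```

Because each output depends only on the current value and the previous output, this fits a single-pass pipeline: no second pass over the video is needed.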
-
Patent number: 9934423
Abstract: Techniques for identifying prominent subjects in video content based on feature point extraction are described herein. Video files may be processed to detect faces on video frames and extract feature points from the video frames. Some video frames may include detected faces and extracted feature points and other video frames may not include detected faces. Based on the extracted feature points, faces may be inferred on video frames where no face was detected. The inferring may be based on feature points. Additionally, video frames may be arranged into groups and two or more groups may be merged. The merging may be based on some groups including video frames having overlapping feature points. The resulting groups each may identify a subject. A frequency representing a number of video frames where the subject appears may be determined for calculating a prominence score for each of the identified subjects in the video file.
Type: Grant
Filed: July 29, 2014
Date of Patent: April 3, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tzong-Jhy Wang, Nitin Suri, Andrew S. Ivory, William D. Sproule
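The group-merging step ("merging may be based on ... overlapping feature points") amounts to repeatedly unioning groups that share members. A simple sketch, with groups modeled as sets of feature-point ids (the set representation is an assumption for illustration):

```python
def merge_groups(groups):
    """Merge groups (sets of feature-point ids) that share any point,
    repeating until no two remaining groups overlap."""
    merged = [set(g) for g in groups]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if merged[i] & merged[j]:      # overlapping feature points
                    merged[i] |= merged[j]     # union into one subject group
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```

Each resulting group then stands in for one subject, and counting the frames its points appear in gives the frequency used for the prominence score.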
-
Patent number: 9646227
Abstract: This disclosure describes techniques for training models from video data and applying the learned models to identify desirable video data. Video data may be labeled to indicate a semantic category and/or a score indicative of desirability. The video data may be processed to extract low and high level features. A classifier and a scoring model may be trained based on the extracted features. The classifier may estimate a probability that the video data belongs to at least one of the categories in a set of semantic categories. The scoring model may determine a desirability score for the video data. New video data may be processed to extract low and high level features, and feature values may be determined based on the extracted features. The learned classifier and scoring model may be applied to the feature values to determine a desirability score associated with the new video data.
Type: Grant
Filed: July 29, 2014
Date of Patent: May 9, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nitin Suri, Xian-Sheng Hua, Tzong-Jhy Wang, William D. Sproule, Andrew S. Ivory, Jin Li
-
Publication number: 20160379343
Abstract: Technologies for a single-pass process for enhancing video quality with temporal smoothing. The process may include providing for user overrides of automatically enhanced video/frame characteristics and providing substantially immediate previews of enhanced video frames to a user. The process may also include detecting a degree of shakiness in a portion of the video, and performing or recommending stabilization based on the detected shakiness.
Type: Application
Filed: September 8, 2016
Publication date: December 29, 2016
Inventors: Nitin Suri, Andrew Shaun Ivory, Tzong-Jhy Wang, Bruce Justin Lindbloom, William David Sproule
-
Patent number: 9460493
Abstract: Technologies for a single-pass process for enhancing video quality with temporal smoothing. The process may include providing for user overrides of automatically enhanced video/frame characteristics and providing substantially immediate previews of enhanced video frames to a user. The process may also include detecting a degree of shakiness in a portion of the video, and performing or recommending stabilization based on the detected shakiness.
Type: Grant
Filed: June 14, 2014
Date of Patent: October 4, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nitin Suri, Andrew Shaun Ivory, Tzong-Jhy Wang, Bruce Justin Lindbloom, William David Sproule
-
Publication number: 20160034748
Abstract: Techniques for identifying prominent subjects in video content based on feature point extraction are described herein. Video files may be processed to detect faces on video frames and extract feature points from the video frames. Some video frames may include detected faces and extracted feature points and other video frames may not include detected faces. Based on the extracted feature points, faces may be inferred on video frames where no face was detected. The inferring may be based on feature points. Additionally, video frames may be arranged into groups and two or more groups may be merged. The merging may be based on some groups including video frames having overlapping feature points. The resulting groups each may identify a subject. A frequency representing a number of video frames where the subject appears may be determined for calculating a prominence score for each of the identified subjects in the video file.
Type: Application
Filed: July 29, 2014
Publication date: February 4, 2016
Inventors: Tzong-Jhy Wang, Nitin Suri, Andrew S. Ivory, William D. Sproule
-
Publication number: 20160034786
Abstract: This disclosure describes techniques for training models from video data and applying the learned models to identify desirable video data. Video data may be labeled to indicate a semantic category and/or a score indicative of desirability. The video data may be processed to extract low and high level features. A classifier and a scoring model may be trained based on the extracted features. The classifier may estimate a probability that the video data belongs to at least one of the categories in a set of semantic categories. The scoring model may determine a desirability score for the video data. New video data may be processed to extract low and high level features, and feature values may be determined based on the extracted features. The learned classifier and scoring model may be applied to the feature values to determine a desirability score associated with the new video data.
Type: Application
Filed: July 29, 2014
Publication date: February 4, 2016
Inventors: Nitin Suri, Xian-Sheng Hua, Tzong-Jhy Wang, William D. Sproule, Andrew S. Ivory, Jin Li
-
Publication number: 20160035387
Abstract: Automatic story production is implemented by the utilization of theme scripts with user assets to generate a quality finished product with minimum user input or direction. A user chooses a predesigned theme script to be applied to the user's assets to automatically create a story with a particular look and feel. Metadata and feature information, when available, is automatically gathered from the user assets to personalize the generated story. A user can include additional information and/or alter any aspect of the generated story to further personalize the resultant finished product.
Type: Application
Filed: October 9, 2015
Publication date: February 4, 2016
Inventors: Nitin Suri, Sriram Subramanian, William David Sproule
-
Publication number: 20150363919
Abstract: Technologies for a single-pass process for enhancing video quality with temporal smoothing. The process may include providing for user overrides of automatically enhanced video/frame characteristics and providing substantially immediate previews of enhanced video frames to a user. The process may also include detecting a degree of shakiness in a portion of the video, and performing or recommending stabilization based on the detected shakiness.
Type: Application
Filed: June 14, 2014
Publication date: December 17, 2015
Inventors: Nitin Suri, Andrew Shaun Ivory, Tzong-Jhy Wang, Bruce Justin Lindbloom, William David Sproule
-
Publication number: 20150363635
Abstract: The importance of video sections of a video file may be determined from features of the video file. The video file may be decoded to obtain video frames and audio data associated with the video frames. Feature scores for each video frame may be obtained by analyzing features of the video frame or the audio data associated with the video frame based on a local rule, a global rule, or both. The feature scores are further combined to derive a frame importance score for the video frame. Based on the feature scores of the video frames in the video file, the video file may be segmented into video sections of different section importance values.
Type: Application
Filed: June 12, 2014
Publication date: December 17, 2015
Inventors: Nitin Suri, Tzong-Jhy Wang, Omkar Mehendale, Andrew S. Ivory, William D. Sproule
-
Patent number: 9208599
Abstract: Visual animation platforms may allow users to develop visual media projects, such as movies. Many visual animation platforms may provide animation effects that may be applied to visual elements of a visual media project. Unfortunately, current techniques for providing a preview of an animation effect may be limited. Accordingly, one or more systems and/or techniques for presenting a visual preview are disclosed herein. In particular, a snapshot of an original state of a selected visual element may be stored. A referenced animation effect may be applied to the selected visual element to generate an updated visual element that may be used to generate a visual preview of how the referenced animation effect may look as applied to the selected visual element. The snapshot may be applied to the updated visual element to non-destructively revert the updated visual element to the original state.
Type: Grant
Filed: June 17, 2010
Date of Patent: December 8, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sriram Subramanian, Nitin Suri, William D. Sproule
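The snapshot-apply-revert pattern the abstract describes can be sketched as below. Modeling a visual element as a plain dict, and an effect as a function returning property changes, are assumptions made for illustration only.

```python
import copy

def preview_effect(element, effect):
    """Non-destructive preview: snapshot the element's original state,
    apply the effect to produce the preview, then restore the snapshot
    so the element is reverted to its original state."""
    snapshot = copy.deepcopy(element)   # store the original state
    element.update(effect(element))     # updated element for the preview
    preview = dict(element)
    element.clear()
    element.update(snapshot)            # apply snapshot: revert in place
    return preview
```

The deep copy is the key design point: because the snapshot is independent of the element, applying the effect cannot corrupt the stored original state.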
-
Patent number: 9161007
Abstract: Automatic story production is implemented by the utilization of theme scripts with user assets to generate a quality finished product with minimum user input or direction. A user chooses a predesigned theme script to be applied to the user's assets to automatically create a story with a particular look and feel. Metadata and feature information, when available, is automatically gathered from the user assets to personalize the generated story. A user can include additional information and/or alter any aspect of the generated story to further personalize the resultant finished product.
Type: Grant
Filed: March 16, 2013
Date of Patent: October 13, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nitin Suri, Sriram Subramanian, William David Sproule
-
Patent number: 8422852
Abstract: Automatic story production is implemented by the utilization of theme scripts with user assets to generate a quality finished product with minimum user input or direction. A user chooses a predesigned theme script to be applied to the user's assets to automatically create a story with a particular look and feel. Metadata and feature information, when available, is automatically gathered from the user assets to personalize the generated story. A user can include additional information and/or alter any aspect of the generated story to further personalize the resultant finished product.
Type: Grant
Filed: April 9, 2010
Date of Patent: April 16, 2013
Assignee: Microsoft Corporation
Inventors: Nitin Suri, Sriram Subramanian, William David Sproule
-
Publication number: 20110310109
Abstract: Visual animation platforms may allow users to develop visual media projects, such as movies. Many visual animation platforms may provide animation effects that may be applied to visual elements of a visual media project. Unfortunately, current techniques for providing a preview of an animation effect may be limited. Accordingly, one or more systems and/or techniques for presenting a visual preview are disclosed herein. In particular, a snapshot of an original state of a selected visual element may be stored. A referenced animation effect may be applied to the selected visual element to generate an updated visual element that may be used to generate a visual preview of how the referenced animation effect may look as applied to the selected visual element. The snapshot may be applied to the updated visual element to non-destructively revert the updated visual element to the original state.
Type: Application
Filed: June 17, 2010
Publication date: December 22, 2011
Applicant: Microsoft Corporation
Inventors: Sriram Subramanian, Nitin Suri, William D. Sproule