Patents by Inventor Mrityunjay Kumar

Mrityunjay Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230188698
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 15, 2023
    Inventors: Jonathan Sean Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
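
To make the view-synthesis idea in the abstract above concrete, here is a minimal, hypothetical sketch of disparity-based intermediate view generation. It is not the patented method: the function name, the assumption of a precomputed per-pixel disparity map, and the simple forward-warp loop are all illustrative; only the disparity cap Td comes from the abstract.

```python
# Minimal sketch of disparity-based intermediate view synthesis (illustrative,
# not the patented method). Assumes a per-pixel disparity map is available.
import numpy as np

def synthesize_intermediate_views(left, disparity, Td, num_views=4):
    """left: H x W x 3 captured view; disparity: H x W horizontal disparity map,
    clipped to [0, Td]. Returns intermediate views at evenly spaced fractions."""
    h, w = disparity.shape
    cols = np.arange(w)
    disparity = np.clip(disparity, 0, Td)
    views = []
    for k in range(1, num_views + 1):
        alpha = k / (num_views + 1)          # fraction of the baseline
        out = np.zeros_like(left)
        for y in range(h):
            # shift each pixel by a fraction of its disparity (forward warp)
            x_new = np.clip(cols + np.round(alpha * disparity[y]).astype(int), 0, w - 1)
            out[y, x_new] = left[y, cols]
        views.append(out)
    return views
```

In a real pipeline the disparity map would itself be estimated from the captured views, and holes left by the forward warp would be filled from the opposite view.
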
  • Patent number: 11573860
    Abstract: A method for verifying a consistency of snapshot metadata maintained in an ordered data structure for a plurality of snapshots in a snapshot hierarchy is provided. The method includes identifying a first plurality of nodes maintained in a first ordered data structure for a first snapshot that is a child of a second snapshot; for a first node of the first plurality of nodes, verifying the first node by checking for the first node in a second node map maintained in memory for the second snapshot, wherein the second node map includes a plurality of verified nodes in a second ordered data structure; and based on whether the first node is in the second node map: adding the first node to a first node map maintained in memory for the first snapshot, wherein the first node map includes verified nodes of the first plurality of nodes; or triggering an alarm.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: February 7, 2023
    Assignee: VMware, Inc.
    Inventors: Enning Xiang, Wenguang Wang, Mrityunjay Kumar
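
As a rough illustration of the verification flow described in this abstract, the sketch below checks a child snapshot's nodes against the parent's map of already-verified nodes. Plain node ids stand in for the ordered (B-tree-like) metadata structures, and all names are hypothetical; the real handling of nodes absent from the parent map is more involved than the alarm shown here.

```python
# Much-simplified sketch of the snapshot-metadata consistency check; node ids
# stand in for entries of the ordered (B-tree-like) snapshot data structures.
def verify_child_snapshot(child_nodes, parent_node_map):
    """Check each node of a child snapshot against the parent's map of
    already-verified nodes. Returns (child_node_map, inconsistent_nodes)."""
    child_node_map = set()
    inconsistent = []
    for node in child_nodes:
        if node in parent_node_map:
            child_node_map.add(node)      # verified via the parent snapshot
        else:
            inconsistent.append(node)     # would trigger an alarm
    return child_node_map, inconsistent
```
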
  • Patent number: 11558600
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: January 17, 2023
    Assignee: Light Field Lab, Inc.
    Inventors: Jonathan Sean Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Patent number: 11166007
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: November 2, 2021
    Assignee: Light Field Lab, Inc.
    Inventors: Jonathan Sean Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Patent number: 11150940
    Abstract: A transaction recording method associated with a page-oriented non-volatile memory is disclosed. The method includes identifying a page requiring one or more updates for implementing a defined transaction. Further, the method includes replicating the page in a non-volatile buffer. The method further includes updating the identified page with the transaction contents. The transaction is committed if the page is updated without interruption; otherwise, the entire transaction is rolled back.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: October 19, 2021
    Assignee: IDEMIA
    Inventors: Mrityunjay Kumar, Mohammad Ovais Siddiqui
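
The back-up/update/roll-back cycle in the abstract above can be illustrated with a small, hypothetical sketch; an in-memory dictionary stands in for the page-oriented non-volatile memory and the replication buffer, and the function name is illustrative.

```python
# Illustrative sketch of the page-backup/rollback idea from the abstract,
# using an in-memory dict of dicts as a stand-in for page-oriented NVM.
def apply_transaction(nvm_pages, page_id, updates):
    """Back up the target page, apply the updates, and roll back to the
    backup copy if any update fails (simulating an interrupted write)."""
    backup = dict(nvm_pages[page_id])         # replicate the page in a buffer
    try:
        for offset, value in updates.items():
            nvm_pages[page_id][offset] = value
        return True                           # transaction committed
    except Exception:
        nvm_pages[page_id] = backup           # roll back the entire page
        return False
```
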
  • Publication number: 20210314552
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Application
    Filed: November 24, 2020
    Publication date: October 7, 2021
    Inventors: Jonathan Sean Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Publication number: 20190250945
    Abstract: A transaction recording method associated with a page-oriented non-volatile memory is disclosed. The method includes identifying a page requiring one or more updates for implementing a defined transaction. Further, the method includes replicating the page in a non-volatile buffer. The method further includes updating the identified page with the transaction contents. The transaction is committed if the page is updated without interruption; otherwise, the entire transaction is rolled back.
    Type: Application
    Filed: February 13, 2019
    Publication date: August 15, 2019
    Inventors: Mrityunjay Kumar, Mohammad Ovais Siddiqui
  • Publication number: 20180376132
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Application
    Filed: June 26, 2018
    Publication date: December 27, 2018
    Inventors: Jonathan Sean Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Patent number: 10009597
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: June 26, 2018
    Assignee: Light Field Lab, Inc.
    Inventors: Jon Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Publication number: 20170237970
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Application
    Filed: January 27, 2017
    Publication date: August 17, 2017
    Inventors: Jon Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Patent number: 9665775
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: May 30, 2017
    Assignee: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
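
For readers unfamiliar with sparse frame representation, the sketch below shows the general shape of the coefficient computation described in the abstract above: each frame's feature vector is reconstructed from the other frames' features, and large mutual coefficients indicate frames likely to belong to the same cluster. An ordinary Lasso from scikit-learn stands in for the group-sparsity algorithm, and the clustering and boundary steps are omitted, so this is an illustration rather than the patented method.

```python
# Rough sketch of the coefficient-based affinity computation; an ordinary
# Lasso stands in for the group-sparse solver used in the patent.
import numpy as np
from sklearn.linear_model import Lasso

def frame_affinity(features, alpha=0.1):
    """features: (num_frames, dim) array, one feature vector per frame.
    Represent each frame as a sparse combination of the other frames and
    return the matrix of absolute weighting coefficients."""
    n = features.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(features, i, axis=0)       # dictionary of other frames
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(others.T, features[i])              # columns are frame features
        W[i, np.arange(n) != i] = np.abs(model.coef_)
    return W
```

Temporally contiguous frames with large mutual coefficients would then be grouped into clusters, and scene boundaries placed between clusters.
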
  • Patent number: 9558421
    Abstract: Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels between temporal and/or perspective view imagery to determine imaging details that may be used to generate image data with increased signal-to-noise ratio.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: January 31, 2017
    Assignee: RealD Inc.
    Inventors: Jon Karafin, Mrityunjay Kumar
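
A minimal illustration of the signal-processing fact behind the abstract above: averaging N aligned frames with independent noise reduces the noise standard deviation by roughly the square root of N. The plain average below stands in for the patent's spatial-temporal analysis and is not the patented method.

```python
# Minimal illustration of raising SNR by combining temporally adjacent frames;
# a plain average stands in for the patent's spatial-temporal analysis.
import numpy as np

def temporal_average(frames):
    """frames: list of aligned H x W (or H x W x 3) arrays of the same scene.
    Averaging N frames of independent noise reduces noise std by sqrt(N)."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)
```
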
  • Publication number: 20160328615
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Application
    Filed: July 22, 2016
    Publication date: November 10, 2016
    Applicant: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Patent number: 9424473
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: August 23, 2016
    Assignee: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Patent number: 9076043
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A summary is formed based on the determined video frame clusters.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: July 7, 2015
    Assignee: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Publication number: 20150178585
    Abstract: Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels between temporal and/or perspective view imagery to determine imaging details that may be used to generate image data with increased signal-to-noise ratio.
    Type: Application
    Filed: September 26, 2014
    Publication date: June 25, 2015
    Inventors: Jon Karafin, Mrityunjay Kumar
  • Publication number: 20150161450
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Application
    Filed: February 13, 2015
    Publication date: June 11, 2015
    Applicant: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Patent number: 8989503
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: March 24, 2015
    Assignee: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Patent number: 8982958
    Abstract: A method for representing a video sequence including a time sequence of input video frames, the input video frames including some common scene content that is common to all of the input video frames and some dynamic scene content that changes between at least some of the input video frames. Affine transforms are determined to align the common scene content in the input video frames. A common video frame including the common scene content is determined by forming a sparse combination of a first set of basis functions. A dynamic video frame is determined for each input video frame by forming a sparse combination of a second set of basis functions, wherein the dynamic video frames can be combined with the respective affine transforms and the common video frame to provide reconstructed video frames.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: March 17, 2015
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: Mrityunjay Kumar, Abdolreza Abdolhosseini Moghadam, Alexander C. Loui, Jiebo Luo
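
A greatly simplified, hypothetical sketch of the common/dynamic split described above: the affine alignment and the sparse combinations over basis functions are replaced by a per-pixel median background and plain residuals, which still reconstruct each frame exactly.

```python
# Greatly simplified sketch of the common/dynamic decomposition; a per-pixel
# median and plain residuals stand in for the patent's affine alignment and
# sparse-coding steps.
import numpy as np

def decompose(frames):
    """frames: (T, H, W) array. Returns (common, dynamics) such that
    frames[t] == common + dynamics[t] for every t."""
    common = np.median(frames, axis=0)        # shared scene content
    dynamics = frames - common                # per-frame dynamic content
    return common, dynamics
```
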
  • Patent number: 8976299
    Abstract: A method for determining a scene boundary location dividing a first scene and a second scene in an input video sequence. The scene boundary location is determined responsive to a merit function value, which is a function of the candidate scene boundary location. The merit function value for a particular candidate scene boundary location is determined by representing the dynamic scene content for the input video frames before and after candidate scene boundary using sparse combinations of a set of basis functions, wherein the sparse combinations of the basis functions are determined by finding a sparse vector of weighting coefficients for each of the basis functions. The weighting coefficients determined for each of the input video frames are combined to determine the merit function value. The candidate scene boundary providing the smallest merit function value is designated to be the scene boundary location.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: March 10, 2015
    Assignee: Intellectual Ventures Fund 83 LLC
    Inventors: Mrityunjay Kumar, Abdolreza Abdolhosseini Moghadam, Alexander C. Loui, Jiebo Luo
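
As a toy illustration of the boundary search described in the last abstract, the sketch below evaluates every candidate boundary and keeps the one with the smallest merit value; a simple within-segment dispersion term stands in for the sparse-coding merit function of the patent, and the function names are hypothetical.

```python
# Toy version of the scene-boundary search: a within-segment dispersion term
# stands in for the patent's sparse-coding merit function.
import numpy as np

def find_scene_boundary(features):
    """features: (num_frames, dim) array of per-frame feature vectors.
    Try every candidate boundary and return the one with the smallest merit."""
    def merit(segment):
        # stand-in merit: total absolute deviation from the segment mean
        return np.abs(segment - segment.mean(axis=0)).sum()

    n = features.shape[0]
    candidates = range(1, n)                      # boundary placed before frame k
    scores = [merit(features[:k]) + merit(features[k:]) for k in candidates]
    return candidates[int(np.argmin(scores))]     # best boundary location
```
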