Patents by Inventor Kamil Wnuk

Kamil Wnuk has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10216984
    Abstract: An activity recognition system is disclosed. A plurality of temporal features is generated from a digital representation of an observed activity using a feature detection algorithm. An observed activity graph comprising one or more clusters of temporal features generated from the digital representation is established, wherein each one of the one or more clusters of temporal features defines a node of the observed activity graph. At least one contextually relevant scoring technique is selected from similarity scoring techniques for known activity graphs, the at least one contextually relevant scoring technique being associated with activity ingestion metadata that satisfies device context criteria defined based on device contextual attributes of the digital representation, and a similarity activity score is calculated for the observed activity graph as a function of the at least one contextually relevant scoring technique, the similarity activity score being relative to at least one known activity graph.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: February 26, 2019
    Assignee: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Nicholas J. Witchey
  • Patent number: 10217227
    Abstract: Image feature trackability ranking systems and methods are disclosed. A method of establishing a trackability ranking order from tracked image features within a training video sequence at a tracking analysis device includes establishing a tracking region within the training video sequence using a feature detection algorithm. Trajectories of tracked image features within the tracking region are compiled using a feature tracking algorithm. Saliency metrics are assigned to each one of the trajectories of tracked image features based on one or more feature property measurements within the tracking region, and a trackability ranking algorithm that is a function of the saliency metrics and a defined feature trajectory ranking associated with the training video sequence is determined, the trackability ranking algorithm being usable for ranking, based on trackability, tracked image features within another video sequence.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: February 26, 2019
    Assignee: Nant Holdings IP, LLC
    Inventor: Kamil Wnuk
  • Publication number: 20190057113
    Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
    Type: Application
    Filed: October 23, 2018
    Publication date: February 21, 2019
    Applicant: Nant Holdings IP, LLC
    Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
  • Patent number: 10192365
    Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: January 29, 2019
    Assignee: Nant Holdings IP, LLC
    Inventors: Matheen Siddiqui, Kamil Wnuk
  • Publication number: 20190012557
    Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
    Type: Application
    Filed: September 6, 2018
    Publication date: January 10, 2019
    Applicant: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
  • Patent number: 10140317
    Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: November 27, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
  • Patent number: 10095945
    Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: October 9, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
  • Publication number: 20180260962
    Abstract: Image feature trackability ranking systems and methods are disclosed. A method of establishing a trackability ranking order from tracked image features within a training video sequence at a tracking analysis device includes establishing a tracking region within the training video sequence using a feature detection algorithm. Trajectories of tracked image features within the tracking region are compiled using a feature tracking algorithm. Saliency metrics are assigned to each one of the trajectories of tracked image features based on one or more feature property measurements within the tracking region, and a trackability ranking algorithm that is a function of the saliency metrics and a defined feature trajectory ranking associated with the training video sequence is determined, the trackability ranking algorithm being usable for ranking, based on trackability, tracked image features within another video sequence.
    Type: Application
    Filed: May 8, 2018
    Publication date: September 13, 2018
    Applicant: Nant Holdings IP, LLC
    Inventor: Kamil Wnuk
  • Patent number: 10013612
    Abstract: A system for analyzing scene traits in an object recognition ingestion ecosystem is presented. In some embodiments, a trait analysis engine analyzes a digital representation of a scene to derive one or more features. The features are compiled into sets of similar features with respect to a feature space. The engine attempts to discover which traits of the scene (e.g., temperature, lighting, gravity, etc.) can be used to distinguish the features for purposes of object recognition. When such distinguishing traits are found, an object recognition database is populated with object information, possibly indexed according to the similar features and their corresponding distinguishing traits.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: July 3, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: David McKinnon, John Wiacek, Jeremi Sudol, Kamil Wnuk, Bing Song
  • Publication number: 20180165519
    Abstract: Systems and methods of quickly recognizing or differentiating many objects are presented. Contemplated systems include an object model database storing recognition models associated with known modeled objects. The object identifiers can be indexed in the object model database based on recognition features derived from key frames of the modeled object. Such objects are recognized by a recognition engine at a later time. The recognition engine can construct a recognition strategy based on a current context where the recognition strategy includes rules for executing one or more recognition algorithms on a digital representation of a scene. The recognition engine can recognize an object from the object model database, and then attempt to identify key frame bundles that are contextually relevant, which can then be used to track the object or to query a content database for content information.
    Type: Application
    Filed: January 26, 2018
    Publication date: June 14, 2018
    Applicant: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Bing Song, Matheen Siddiqui, David McKinnon, Jeremi Sudol, Patrick Soon-Shiong, Orang Dialameh
  • Patent number: 9984473
    Abstract: Image feature trackability ranking systems and methods are disclosed. A method of establishing a trackability ranking order from tracked image features within a training video sequence at a tracking analysis device includes establishing a tracking region within the training video sequence using a feature detection algorithm. Trajectories of tracked image features within the tracking region are compiled using a feature tracking algorithm. Saliency metrics are assigned to each one of the trajectories of tracked image features based on one or more feature property measurements within the tracking region, and a trackability ranking algorithm that is a function of the saliency metrics and a defined feature trajectory ranking associated with the training video sequence is determined, the trackability ranking algorithm being usable for ranking, based on trackability, tracked image features within another video sequence.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: May 29, 2018
    Assignee: Nant Holdings IP, LLC
    Inventor: Kamil Wnuk
  • Publication number: 20180144186
    Abstract: An activity recognition system is disclosed. A plurality of temporal features is generated from a digital representation of an observed activity using a feature detection algorithm. An observed activity graph comprising one or more clusters of temporal features generated from the digital representation is established, wherein each one of the one or more clusters of temporal features defines a node of the observed activity graph. At least one contextually relevant scoring technique is selected from similarity scoring techniques for known activity graphs, the at least one contextually relevant scoring technique being associated with activity ingestion metadata that satisfies device context criteria defined based on device contextual attributes of the digital representation, and a similarity activity score is calculated for the observed activity graph as a function of the at least one contextually relevant scoring technique, the similarity activity score being relative to at least one known activity graph.
    Type: Application
    Filed: January 19, 2018
    Publication date: May 24, 2018
    Applicant: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Nicholas J. Witchey
  • Publication number: 20180144261
    Abstract: Techniques are provided for predicting DNA accessibility. DNase-seq data files and RNA-seq data files for a plurality of cell types are paired by assigning DNase-seq data files to RNA-seq data files that are at least within a same biotype. A neural network is configured to be trained using batches of the paired data files, where configuring the neural network comprises configuring convolutional layers to process a first input comprising DNA sequence data from a paired data file to generate a convolved output, and fully connected layers following the convolutional layers to concatenate the convolved output with a second input comprising gene expression levels derived from RNA-seq data from the paired data file and process the concatenation to generate a DNA accessibility prediction output. The trained neural network is used to predict DNA accessibility in a genomic sample input comprising RNA-seq data and whole genome sequencing for a new cell type.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 24, 2018
    Applicants: NantOmics, LLC, Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Jeremi Sudol, Shahrooz Rabizadeh, Patrick Soon-Shiong, Christopher Szeto, Charles Vaske
  • Patent number: 9904850
    Abstract: Systems and methods of quickly recognizing or differentiating many objects are presented. Contemplated systems include an object model database storing recognition models associated with known modeled objects. The object identifiers can be indexed in the object model database based on recognition features derived from key frames of the modeled object. Such objects are recognized by a recognition engine at a later time. The recognition engine can construct a recognition strategy based on a current context where the recognition strategy includes rules for executing one or more recognition algorithms on a digital representation of a scene. The recognition engine can recognize an object from the object model database, and then attempt to identify key frame bundles that are contextually relevant, which can then be used to track the object or to query a content database for content information.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 27, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Bing Song, Matheen Siddiqui, David McKinnon, Jeremi Sudol, Patrick Soon-Shiong, Orang Dialameh
  • Publication number: 20180046648
    Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
    Type: Application
    Filed: October 26, 2017
    Publication date: February 15, 2018
    Applicant: Nant Holdings IP, LLC
    Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
  • Patent number: 9886625
    Abstract: An activity recognition system is disclosed. A plurality of temporal features is generated from a digital representation of an observed activity using a feature detection algorithm. An observed activity graph comprising one or more clusters of temporal features generated from the digital representation is established, wherein each one of the one or more clusters of temporal features defines a node of the observed activity graph. At least one contextually relevant scoring technique is selected from similarity scoring techniques for known activity graphs, the at least one contextually relevant scoring technique being associated with activity ingestion metadata that satisfies device context criteria defined based on device contextual attributes of the digital representation, and a similarity activity score is calculated for the observed activity graph as a function of the at least one contextually relevant scoring technique, the similarity activity score being relative to at least one known activity graph.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: February 6, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Nicholas J. Witchey
  • Publication number: 20180005081
    Abstract: A sensor data processing system and method is described. Contemplated systems and methods derive a first recognition trait of an object from a first data set that represents the object in a first environmental state. A second recognition trait of the object is then derived from a second data set that represents the object in a second environmental state. The sensor data processing systems and methods then identify a mapping of elements of the first and second recognition traits in a new representation space. The mapping of elements satisfies a variance criterion for corresponding elements, which allows the mapping to be used for object recognition. The sensor data processing systems and methods described herein provide new object recognition techniques that are computationally efficient and can be performed in real-time by the mobile phone technology that is currently available.
    Type: Application
    Filed: September 15, 2017
    Publication date: January 4, 2018
    Applicant: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, Jeremi Sudol, Bing Song, Matheen Siddiqui, David McKinnon
  • Publication number: 20180005453
    Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
    Type: Application
    Filed: September 18, 2017
    Publication date: January 4, 2018
    Applicant: Nant Holdings IP, LLC
    Inventors: Matheen Siddiqui, Kamil Wnuk
  • Patent number: 9817848
    Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: November 14, 2017
    Assignee: Nant Holdings IP, LLC
    Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
  • Patent number: 9805510
    Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
    Type: Grant
    Filed: May 13, 2015
    Date of Patent: October 31, 2017
    Inventors: Matheen Siddiqui, Kamil Wnuk
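
The activity-graph pipeline described in the abstract of patent 10216984 (cluster temporal features into graph nodes, select a contextually relevant scoring technique, score against a known activity graph) can be sketched as below. The abstract names no concrete algorithms, so every helper, data structure, and the greedy clustering here are invented stand-ins.

```python
# Illustrative sketch only; all names and algorithms are assumptions.
from dataclasses import dataclass, field

@dataclass
class ActivityGraph:
    # each node is a cluster of temporal features (here: lists of floats)
    nodes: list = field(default_factory=list)

def cluster_temporal_features(features, threshold=1.0):
    """Greedy 1-D clustering: a feature within `threshold` of the current
    cluster's mean joins it; each resulting cluster becomes a graph node."""
    clusters = []
    for f in sorted(features):
        if clusters and f - (sum(clusters[-1]) / len(clusters[-1])) <= threshold:
            clusters[-1].append(f)
        else:
            clusters.append([f])
    return ActivityGraph(nodes=clusters)

def node_count_score(observed, known):
    """One selectable scoring technique: similarity by matching node
    counts (a placeholder for the patent's graph-similarity scoring)."""
    a, b = len(observed.nodes), len(known.nodes)
    return min(a, b) / max(a, b)

def select_scoring_technique(device_context, techniques):
    """Pick the technique whose ingestion metadata satisfies the device
    context criteria (here modeled as a dictionary subset match)."""
    for criteria, technique in techniques:
        if criteria.items() <= device_context.items():
            return technique
    raise LookupError("no contextually relevant technique")

observed = cluster_temporal_features([0.1, 0.2, 5.0, 5.3, 9.9])
known = cluster_temporal_features([0.0, 5.1, 10.0])
score_fn = select_scoring_technique(
    {"sensor": "camera", "fps": 30},
    [({"sensor": "camera"}, node_count_score)],
)
print(score_fn(observed, known))  # both graphs have 3 nodes -> 1.0
```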
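
The trackability ranking of patents 10217227/9984473 (assign saliency metrics to tracked-feature trajectories, derive a ranking from them) might look like the following toy version. The abstract does not disclose the actual metrics or ranking function; the weighted sum of trajectory length and mean motion below is purely illustrative.

```python
# Hedged sketch: metric names and weights are invented, not the patent's.
def saliency_metrics(trajectory):
    """Per-trajectory measurements: length and mean inter-frame motion."""
    n = len(trajectory)
    motion = sum(abs(b - a) for a, b in zip(trajectory, trajectory[1:]))
    return {"length": n, "mean_motion": motion / max(n - 1, 1)}

def trackability_rank(trajectories, weights=None):
    """Rank feature indices best-first: long, stable trajectories score
    highest; large erratic motion lowers the score."""
    weights = weights or {"length": 1.0, "mean_motion": -0.5}
    def score(traj):
        return sum(weights[k] * v for k, v in saliency_metrics(traj).items())
    return sorted(range(len(trajectories)),
                  key=lambda i: score(trajectories[i]), reverse=True)

tracks = [
    [0.0, 0.1, 0.2, 0.3, 0.4],   # long, smooth -> most trackable
    [0.0, 4.0, 0.0],             # short, erratic -> least trackable
    [0.0, 0.2, 0.4, 0.6],        # in between
]
print(trackability_rank(tracks))  # -> [0, 2, 1]
```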
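
The tile-map idea in patents 10140317/9817848 (group AR content into clusters, tessellate the map, let a device instantiate only nearby content) can be illustrated as below. The patent tessellates based on experience clusters; a regular square grid stands in here as the simplest possible tessellation, and all names are assumptions.

```python
# Toy sketch of location-keyed AR content tiles; not the patented method.
from collections import defaultdict

TILE = 10.0  # tile edge length in map units (hypothetical)

def tile_of(x, y):
    """Map a location to its tile key on a regular square tessellation."""
    return (int(x // TILE), int(y // TILE))

def build_tile_map(ar_objects):
    """ar_objects: list of (x, y, content) anchors -> {tile: contents}."""
    tiles = defaultdict(list)
    for x, y, content in ar_objects:
        tiles[tile_of(x, y)].append(content)
    return dict(tiles)

tile_map = build_tile_map([
    (1.0, 2.0, "plaza-overlay"),
    (3.5, 8.0, "plaza-quest"),
    (42.0, 7.0, "museum-guide"),
])
# A device at (2.2, 4.0) instantiates only its own tile's AR content:
print(tile_map[tile_of(2.2, 4.0)])  # -> ['plaza-overlay', 'plaza-quest']
```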
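
The shading-transfer idea in patents 10192365/9805510 can be sketched under a strong simplification: with a known per-pixel albedo and the observed appearance, environmental shading is estimated as observed / albedo and then applied to AR content so it picks up the scene's lighting. Scalar per-pixel intensities stand in for the full 3D albedo model.

```python
# Simplified Lambertian sketch; the patent's 3D model is far richer.
def estimate_shading(observed, albedo):
    """Per-pixel estimated shading model: observed = albedo * shading."""
    return [o / a for o, a in zip(observed, albedo)]

def apply_shading(ar_content, shading):
    """Adjust AR content so it appears lit like the real scene."""
    return [c * s for c, s in zip(ar_content, shading)]

albedo   = [0.8, 0.8, 0.4, 0.4]   # a priori model of surface reflectance
observed = [0.4, 0.6, 0.2, 0.1]   # what the camera actually sees
shading  = estimate_shading(observed, albedo)
print([round(s, 2) for s in shading])  # -> [0.5, 0.75, 0.5, 0.25]
# White AR content rendered under the estimated scene lighting:
print([round(v, 3) for v in apply_shading([1.0, 1.0, 1.0, 1.0], shading)])
```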
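
The ingestion flow of patent 10095945 (match a captured object to a canonical shape, expand the shape's reference points of view into model PoVs, compute a descriptor per PoV, bundle everything into key frame bundles) could be mocked up as follows. Shapes, PoVs, and descriptors are toy stand-ins for the patent's unspecified representations.

```python
# Illustrative sketch only; matcher, PoV refinement, and descriptors are toys.
CANONICAL_SHAPES = {
    # shape name -> reference PoV azimuth angles (degrees)
    "cylinder": [0, 120, 240],
    "box": [0, 90, 180, 270],
}

def match_canonical_shape(aspect_ratio):
    """Toy matcher: tall objects map to cylinders, squat ones to boxes."""
    return "cylinder" if aspect_ratio > 1.5 else "box"

def model_povs(reference_povs, refinement=2):
    """Generate a denser set of model PoVs between the reference PoVs."""
    step = 360 // (len(reference_povs) * refinement)
    return list(range(0, 360, step))

def ingest(image_data, aspect_ratio):
    """Return key frame bundles: one descriptor record per model PoV."""
    shape = match_canonical_shape(aspect_ratio)
    povs = model_povs(CANONICAL_SHAPES[shape])
    # descriptor: placeholder hash of (image data, PoV) per viewpoint
    return [{"shape": shape, "pov": p, "descriptor": hash((image_data, p))}
            for p in povs]

bundles = ingest("bottle.jpg", aspect_ratio=3.0)
print(len(bundles), bundles[0]["shape"])  # -> 6 cylinder
```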
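
The "distinguishing trait" discovery of patent 10013612 can be sketched with invented data: for features that collide in feature space, test which captured scene trait (lighting, temperature, etc.) actually separates the objects. The separation test below is a simple means-gap threshold, not the patent's method.

```python
# Hedged sketch; data, trait names, and the margin test are illustrative.
from statistics import mean

def distinguishing_traits(samples, margin=1.0):
    """samples: list of (object_id, {trait: value}) for near-identical
    features. A trait distinguishes when the per-object trait means are
    separated by more than `margin`."""
    by_object = {}
    for obj, traits in samples:
        by_object.setdefault(obj, []).append(traits)
    result = []
    for t in samples[0][1]:
        means = [mean(obs[t] for obs in observations)
                 for observations in by_object.values()]
        if max(means) - min(means) > margin:
            result.append(t)
    return result

# Two objects whose image features collide; lighting separates them,
# temperature does not.
samples = [
    ("mug",  {"lighting": 200.0, "temperature": 21.0}),
    ("mug",  {"lighting": 210.0, "temperature": 22.0}),
    ("bowl", {"lighting": 80.0,  "temperature": 21.5}),
    ("bowl", {"lighting": 90.0,  "temperature": 22.5}),
]
print(distinguishing_traits(samples))  # -> ['lighting']
```

An object recognition database could then index colliding features by their lighting value to keep the two objects apart.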
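
The context-driven recognition strategy of patent 9904850 (the current context selects which recognition algorithms run, and in what order) could look like the sketch below. The algorithm names, the context rule, and the scene representation are all invented for illustration.

```python
# Toy recognizers standing in for real recognition algorithms.
def edge_recognizer(scene):      # cheap; works in low light (toy rule)
    return scene.get("edges")

def texture_recognizer(scene):   # richer; needs good lighting (toy rule)
    return scene.get("texture")

def build_strategy(context):
    """Return an ordered list of recognizers chosen by context rules."""
    if context["lighting"] == "low":
        return [edge_recognizer]
    return [texture_recognizer, edge_recognizer]

def recognize(scene, context):
    """Run the strategy's algorithms in order; first hit wins."""
    for algorithm in build_strategy(context):
        obj = algorithm(scene)
        if obj is not None:
            return obj
    return None

scene = {"edges": "mug-by-silhouette", "texture": None}
print(recognize(scene, {"lighting": "low"}))     # -> mug-by-silhouette
print(recognize(scene, {"lighting": "bright"}))  # falls back to edges
```

Once an object is recognized, the patent's next step (fetching contextually relevant key frame bundles for tracking or content lookup) would key off the returned identifier.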
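
The network shape in publication 20190057113 (convolutional layers over DNA sequence, output concatenated with gene expression levels, dense layer emitting an accessibility prediction) can be sketched at miniature scale. This shows the architecture only, not the trained network: the single filter, global max pooling, and all weights are fixed toy values, not learned parameters.

```python
# Architecture sketch with toy weights; not the publication's trained model.
import math

ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
           "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def conv1d(seq, kernel):
    """Slide a width-len(kernel) filter over one-hot encoded DNA."""
    x = [ONE_HOT[b] for b in seq]
    w = len(kernel)
    return [sum(x[i + j][c] * kernel[j][c]
                for j in range(w) for c in range(4))
            for i in range(len(x) - w + 1)]

def predict_accessibility(seq, expression, kernel, dense_w, bias=0.0):
    convolved = conv1d(seq, kernel)
    pooled = max(convolved)                # global max pooling
    features = [pooled] + expression       # concat with RNA-seq levels
    z = sum(w * f for w, f in zip(dense_w, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))      # sigmoid -> probability

# Toy GC-detecting filter and toy dense weights:
kernel = [ONE_HOT["G"], ONE_HOT["C"]]
p = predict_accessibility("ATGCGC", [0.4, 1.2], kernel, [0.5, 0.3, 0.2])
print(0.0 < p < 1.0)  # -> True
```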
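
The variance-criterion mapping of publication 20180005081 can be sketched with toy numbers: trait elements whose values stay nearly constant across environmental states are kept, volatile elements are dropped, and the surviving elements form an environment-stable representation usable for recognition. The threshold and data are invented.

```python
# Illustrative sketch; the publication's mapping is not specified this way.
def stable_mapping(trait_state_a, trait_state_b, max_variance=0.05):
    """Return indices of elements that satisfy the variance criterion
    across the two environmental states."""
    keep = []
    for i, (a, b) in enumerate(zip(trait_state_a, trait_state_b)):
        m = (a + b) / 2.0
        variance = ((a - m) ** 2 + (b - m) ** 2) / 2.0
        if variance <= max_variance:
            keep.append(i)
    return keep

def project(trait, mapping):
    """Map a trait into the new, environment-stable representation."""
    return [trait[i] for i in mapping]

daylight  = [0.90, 0.10, 0.52, 0.30]  # object trait, first env state
lamplight = [0.20, 0.12, 0.50, 0.31]  # same object, second env state
mapping = stable_mapping(daylight, lamplight)
print(mapping)                     # element 0 varies too much -> [1, 2, 3]
print(project(daylight, mapping))  # -> [0.1, 0.52, 0.3]
```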