Patents by Inventor Matheen Siddiqui
Matheen Siddiqui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240037857
Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Applicant: Nant Holdings IP, LLC
Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
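As a rough illustration of the idea in the abstract above (grouping views of interest into experience clusters, then tessellating the area into a tile map keyed by those clusters), here is a minimal Python sketch. The greedy clustering rule, square tiles, and all names are invented for illustration and are not taken from the patent:

```python
import math
from collections import defaultdict

def cluster_views(views, radius):
    """Greedily group views of interest into experience clusters:
    a view joins the first cluster whose centroid is within `radius`."""
    clusters = []  # each cluster is a list of (x, y) points
    for x, y in views:
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            if math.hypot(x - cx, y - cy) <= radius:
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    return clusters

def tile_map(clusters, tile_size):
    """Tessellate the area into square tiles and record which
    experience clusters touch each tile."""
    tiles = defaultdict(set)
    for cid, c in enumerate(clusters):
        for x, y in c:
            tiles[(int(x // tile_size), int(y // tile_size))].add(cid)
    return dict(tiles)

views = [(1, 1), (2, 1), (9, 9), (10, 9)]
clusters = cluster_views(views, radius=3.0)
tiles = tile_map(clusters, tile_size=5)
```

A device could then fetch only the AR content objects whose cluster ids appear in its current tile.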
-
Publication number: 20230377340
Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
Type: Application
Filed: August 3, 2023
Publication date: November 23, 2023
Applicant: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
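A toy Python sketch of the ingestion flow this abstract describes: match a captured object to a canonical shape, densify the shape's reference PoVs into model PoVs, and pack per-PoV descriptors into a key frame bundle. The shape table, aspect-ratio matching heuristic, and placeholder descriptors are all invented for illustration:

```python
CANONICAL_SHAPES = {
    "cylinder": {"aspect_hint": 2.0, "reference_povs": [(0, 0), (45, 0)]},
    "box":      {"aspect_hint": 1.0, "reference_povs": [(0, 0), (0, 90), (90, 0)]},
}

def match_canonical_shape(width, height):
    """Pick the canonical shape whose aspect hint best matches the
    observed bounding box (a stand-in for real shape matching)."""
    aspect = max(width, height) / min(width, height)
    return min(CANONICAL_SHAPES,
               key=lambda n: abs(CANONICAL_SHAPES[n]["aspect_hint"] - aspect))

def model_povs(shape_name, steps=4):
    """Densify reference PoVs into model PoVs by sweeping azimuth."""
    povs = []
    for elev, azim in CANONICAL_SHAPES[shape_name]["reference_povs"]:
        povs += [(elev, (azim + k * 360 / steps) % 360) for k in range(steps)]
    return povs

def key_frame_bundle(shape_name, image_id):
    """Bundle descriptors per PoV with context so another device can
    recognize the object later; descriptors are placeholders here."""
    return {"image": image_id, "shape": shape_name,
            "descriptors": {pov: f"desc@{pov}" for pov in model_povs(shape_name)}}

bundle = key_frame_bundle(match_canonical_shape(30, 62), "img-001")
```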
-
Patent number: 11748990
Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
Type: Grant
Filed: June 1, 2022
Date of Patent: September 5, 2023
Assignee: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
-
Patent number: 11710282
Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
Type: Grant
Filed: October 19, 2021
Date of Patent: July 25, 2023
Assignee: Nant Holdings IP, LLC
Inventors: Matheen Siddiqui, Kamil Wnuk
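The core idea in this abstract, estimating an environmental shading model by comparing an observed object against its known albedo, can be sketched in a few lines of Python. The per-pixel ratio model and the example values are simplifying assumptions, not the patent's actual formulation:

```python
def estimate_shading(observed, albedo, eps=1e-6):
    """Per-pixel environmental shading: how the scene's lighting
    transformed the known albedo into the observed appearance."""
    return [[o / (a + eps) for o, a in zip(orow, arow)]
            for orow, arow in zip(observed, albedo)]

def shade_ar_content(content, shading):
    """Apply the estimated shading so AR content picks up the
    scene's lighting and looks like a natural part of it."""
    return [[c * s for c, s in zip(crow, srow)]
            for crow, srow in zip(content, shading)]

albedo   = [[0.8, 0.8], [0.8, 0.8]]   # known reflectance of the object
observed = [[0.4, 0.4], [0.2, 0.2]]   # as seen: top half lit, bottom shadowed
shading  = estimate_shading(observed, albedo)
ar       = shade_ar_content([[1.0, 1.0], [1.0, 1.0]], shading)
```

The rendered AR content ends up darker where the real object sits in shadow, which is the "natural part of the scene" effect the abstract describes.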
-
Publication number: 20230117368
Abstract: A computer system may identify a first position of a physical camera corresponding to a first time period. The computer system may render a first virtual scene for the first time period. The system may project the first scene onto a display surface to determine a first rendered image for the first time period. The computer system may receive a first camera image of the display surface from the camera during the first time period. The system may determine a first corrected position of the camera by comparing the first rendered image to the first camera image. The system may predict a second position of the camera corresponding to a second time period. The computer system may render a second virtual scene for the second time period. The system may project the second virtual scene onto the display surface to determine a second rendered image for the second time period.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: NantHealth, Inc.
Inventor: Matheen Siddiqui
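The render/compare/correct/predict loop in this abstract can be illustrated with a one-dimensional toy in Python. The "image" is just a list of values derived from camera position, and the mean-difference correction and constant-velocity prediction are illustrative stand-ins for whatever the patent actually claims:

```python
def render(position):
    """Toy 'rendered image': the pattern a camera at `position` should see."""
    return [position + i for i in range(4)]

def correct_position(predicted, camera_image):
    """Shift the predicted position by the mean difference between the
    camera image and the image rendered from the prediction."""
    rendered = render(predicted)
    offset = sum(c - r for c, r in zip(camera_image, rendered)) / len(rendered)
    return predicted + offset

def predict_next(corrected, previous):
    """Constant-velocity prediction for the next time period."""
    return corrected + (corrected - previous)

# True camera position is 10.4, but we predicted 10.0 for period one.
p1 = correct_position(10.0, camera_image=render(10.4))
p2_pred = predict_next(p1, previous=10.0)
```

Each period the loop renders from the predicted pose, compares against what the camera actually saw, and carries the corrected pose forward.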
-
Patent number: 11554785
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a driving scenario machine learning network and providing a simulated driving environment. The operations include receiving video data that includes multiple video frames depicting an aerial view of vehicles moving about an area. The video data is processed and driving scenario data is generated which includes information about the dynamic objects identified in the video. A machine learning network is trained using the generated driving scenario data. A 3-dimensional simulated environment is provided which is configured to allow an autonomous vehicle to interact with one or more of the dynamic objects.
Type: Grant
Filed: September 11, 2019
Date of Patent: January 17, 2023
Assignee: Foresight AI Inc.
Inventors: Matheen Siddiqui, Cheng-Yi Lin, Chang Yuan
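A minimal Python sketch of the "driving scenario data" step in this abstract: turning per-frame aerial detections into per-object trajectories with velocities, the kind of structured record a network could then be trained on. The data layout and finite-difference velocities are assumptions for illustration:

```python
def driving_scenarios(frames):
    """Turn per-frame detections (object id -> (x, y)) into per-object
    trajectories with finite-difference velocities."""
    tracks = {}
    for t, dets in enumerate(frames):
        for oid, pos in dets.items():
            tracks.setdefault(oid, []).append((t, pos))
    scenarios = {}
    for oid, obs in tracks.items():
        traj = []
        for (t0, (x0, y0)), (t1, (x1, y1)) in zip(obs, obs[1:]):
            traj.append({"t": t0, "pos": (x0, y0),
                         "vel": ((x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0))})
        scenarios[oid] = traj
    return scenarios

# Three aerial frames tracking one car moving diagonally.
frames = [{"car1": (0.0, 0.0)}, {"car1": (1.0, 0.5)}, {"car1": (2.0, 1.0)}]
data = driving_scenarios(frames)
```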
-
Publication number: 20220292804
Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
Type: Application
Filed: June 1, 2022
Publication date: September 15, 2022
Applicant: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
-
Patent number: 11430145
Abstract: Methods that identify local motions in point cloud data generated by one or more rounds of LIDAR scans are disclosed. The point cloud data describes an environment as a set of points, each having a scanned time and scanned coordinates. From the point cloud data, a subset of points is selected. A surface is reconstructed at a common reference time using the subset of points. The reconstructed surface includes points that are moved from their scanned coordinates. The moved points are derived from a projected movement under a projected motion parameter over the duration between the scanned time and the common reference time. The surface quality of the reconstructed surface is determined. If the surface has high quality, the projected motion parameter is taken as the final motion parameter, which is used as an indication of whether an object is moving.
Type: Grant
Filed: May 31, 2019
Date of Patent: August 30, 2022
Assignee: Foresight AI Inc.
Inventors: Matheen Siddiqui, Chang Yuan
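The search described in this abstract, de-skewing timestamped points under a candidate motion and scoring the resulting surface, can be shown with a one-dimensional Python toy: a flat wall scanned over time collapses to a single coordinate only under the correct velocity. The variance-based quality score and the discrete candidate set are simplifications:

```python
def deskew(points, velocity, t_ref):
    """Move each (x, t) sample to the common reference time under a
    candidate constant velocity."""
    return [x - velocity * (t - t_ref) for x, t in points]

def surface_quality(xs):
    """Quality of the reconstructed surface: a flat wall should collapse
    to one x; use negative variance so higher is better."""
    mean = sum(xs) / len(xs)
    return -sum((x - mean) ** 2 for x in xs) / len(xs)

def estimate_motion(points, candidates, t_ref=0.0):
    """Pick the projected motion whose de-skewed surface looks best."""
    return max(candidates, key=lambda v: surface_quality(deskew(points, v, t_ref)))

# A wall at x=5 moving at 2 m/s, sampled over one second of scanning.
points = [(5.0 + 2.0 * t, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
v = estimate_motion(points, candidates=[0.0, 1.0, 2.0, 3.0])
moving = abs(v) > 0.5   # final motion parameter indicates a moving object
```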
-
Patent number: 11392636
Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
Type: Grant
Filed: April 30, 2020
Date of Patent: July 19, 2022
Assignee: Nant Holdings IP, LLC
Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
-
Patent number: 11380080
Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
Type: Grant
Filed: September 30, 2020
Date of Patent: July 5, 2022
Assignee: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
-
Publication number: 20220156314
Abstract: Apparatus, methods and systems of providing AR content are disclosed. Embodiments of the inventive subject matter can obtain an initial map of an area, derive views of interest, obtain AR content objects associated with the views of interest, establish experience clusters and generate a tile map tessellated based on the experience clusters. A user device could be configured to obtain and instantiate at least some of the AR content objects based on at least one of a location and a recognition.
Type: Application
Filed: January 28, 2022
Publication date: May 19, 2022
Applicant: Nant Holdings IP, LLC
Inventors: David McKinnon, Kamil Wnuk, Jeremi Sudol, Matheen Siddiqui, John Wiacek, Bing Song, Nicholas J. Witchey
-
Publication number: 20220036661
Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
Type: Application
Filed: October 19, 2021
Publication date: February 3, 2022
Applicant: Nant Holdings IP, LLC
Inventors: Matheen Siddiqui, Kamil Wnuk
-
Patent number: 11188786
Abstract: A sensor data processing system and method are described. Contemplated systems and methods derive a first recognition trait of an object from a first data set that represents the object in a first environmental state. A second recognition trait of the object is then derived from a second data set that represents the object in a second environmental state. The sensor data processing systems and methods then identify a mapping of elements of the first and second recognition traits in a new representation space. The mapping of elements satisfies a variance criterion for corresponding elements, which allows the mapping to be used for object recognition. The sensor data processing systems and methods described herein provide new object recognition techniques that are computationally efficient and can be performed in real-time by the mobile phone technology that is currently available.
Type: Grant
Filed: August 12, 2019
Date of Patent: November 30, 2021
Assignee: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, Jeremi Sudol, Bing Song, Matheen Siddiqui, David McKinnon
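The variance criterion in this abstract can be sketched in Python: compare trait vectors of the same object captured under two environmental states, and keep only the elements that stay nearly constant as the mapping into the new representation space. The threshold, the two-state setup, and the example values are illustrative assumptions:

```python
def stable_element_mapping(traits_a, traits_b, max_variance=0.01):
    """Keep indices of trait elements that stay nearly constant across
    the two environmental states (a simple variance criterion)."""
    mapping = []
    for i, (a, b) in enumerate(zip(traits_a, traits_b)):
        mean = (a + b) / 2
        var = ((a - mean) ** 2 + (b - mean) ** 2) / 2
        if var <= max_variance:
            mapping.append(i)
    return mapping

def project(traits, mapping):
    """Represent an object only by its environment-invariant elements."""
    return [traits[i] for i in mapping]

day   = [0.90, 0.10, 0.50, 0.70]   # traits derived in daylight
night = [0.88, 0.75, 0.52, 0.10]   # same object at night
mapping = stable_element_mapping(day, night)
signature = project(day, mapping)
```

Recognition then matches objects on the invariant signature, so lighting changes between captures matter less.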
-
Patent number: 11176406
Abstract: Edge-based recognition systems and methods are presented. Edges of the object are identified from the image data based on co-circularity of edgels, and edge-based descriptors are constructed based on the identified edges. The edge-based descriptors along with additional perception metrics are used to obtain a list of candidate objects matched with the edge-based descriptors. Through various filtering processes and verification processes, false positive candidate objects are further removed from the list to determine the final candidate object.
Type: Grant
Filed: August 24, 2018
Date of Patent: November 16, 2021
Assignee: Nant Holdings IP, LLC
Inventors: Bing Song, Matheen Siddiqui
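The co-circularity of edgels mentioned in this abstract has a compact geometric test: two edgels (a point plus a tangent orientation) lie on a common circle when their tangents make equal mirror angles with the chord joining them. A Python sketch of that test, under the assumption that this is the co-circularity criterion intended (the patent's exact formulation may differ):

```python
import math

def cocircular(p1, theta1, p2, theta2, tol=0.1):
    """Two edgels (position + tangent orientation, radians) are
    co-circular when their tangents make equal mirror angles with
    the chord joining them."""
    chord = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a1 = (theta1 - chord) % math.pi
    a2 = (chord - theta2) % math.pi
    diff = abs(a1 - a2)
    return diff < tol or diff > math.pi - tol

# Edgels on the unit circle: the tangent at angle t points along t + 90 deg.
p1, t1 = (1.0, 0.0), math.pi / 2        # edgel at angle 0
p2, t2 = (0.0, 1.0), math.pi            # edgel at angle 90 degrees
on_circle = cocircular(p1, t1, p2, t2)
off_circle = cocircular(p1, t1, p2, math.pi / 2)  # wrong tangent
```

Grouping edgels that pass pairwise tests like this yields the candidate edges from which descriptors are built.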
-
Patent number: 11176754
Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
Type: Grant
Filed: May 26, 2020
Date of Patent: November 16, 2021
Assignee: Nant Holdings IP, LLC
Inventors: Matheen Siddiqui, Kamil Wnuk
-
Patent number: 11094112
Abstract: The present invention generally relates to generating a three-dimensional representation of a physical environment, which includes dynamic scenarios.
Type: Grant
Filed: August 15, 2019
Date of Patent: August 17, 2021
Assignee: Foresight AI Inc.
Inventors: Shili Xu, Matheen Siddiqui, Chang Yuan
-
Patent number: 11062169
Abstract: Apparatus, methods and systems of object recognition are disclosed. Embodiments of the inventive subject matter generate map-altered image data according to an object-specific metric map, derive a metric-based descriptor set by executing an image analysis algorithm on the map-altered image data, and retrieve digital content associated with a target object as a function of the metric-based descriptor set.
Type: Grant
Filed: July 8, 2019
Date of Patent: July 13, 2021
Assignee: Nant Holdings IP, LLC
Inventor: Matheen Siddiqui
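A minimal Python sketch of the pipeline in this abstract: alter the image with an object-specific metric map, derive a descriptor from the altered image, and retrieve content by descriptor similarity. The per-pixel weighting, histogram descriptor, and content index are all invented stand-ins for whatever the patent actually specifies:

```python
def apply_metric_map(image, metric_map):
    """Alter the image with an object-specific metric map that weights
    regions by how informative they are for this object."""
    return [[p * w for p, w in zip(prow, wrow)]
            for prow, wrow in zip(image, metric_map)]

def descriptor(image, bins=4):
    """A toy metric-based descriptor: intensity histogram of the
    map-altered image."""
    hist = [0] * bins
    for row in image:
        for p in row:
            hist[min(int(p * bins), bins - 1)] += 1
    return hist

def retrieve(desc, content_index):
    """Return the content whose stored descriptor is closest (L1)."""
    return min(content_index,
               key=lambda name: sum(abs(a - b) for a, b in
                                    zip(desc, content_index[name])))

image      = [[0.2, 0.9], [0.4, 0.8]]
metric_map = [[1.0, 0.5], [1.0, 0.5]]   # down-weight the right half
desc = descriptor(apply_metric_map(image, metric_map))
content_index = {"poster": [2, 2, 0, 0], "mug": [0, 0, 2, 2]}
best = retrieve(desc, content_index)
```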
-
Patent number: 10970924
Abstract: A method of using a drone that is equipped with a camera and an inertial measurement unit (IMU) to survey an environment to reconstruct a 3D map is described. A key frame location is first identified. A first image of the environment is captured by the camera from the key frame location. The drone is then moved away from the key frame location to another location. A second image of the environment is captured from the other location. The drone then returns to the key frame location. The drone may perform additional rounds of scans and returns to the key frame location between each round. By constantly requiring the drone to return to the key frame location, the precise location of the drone may be determined by the acceleration data of the IMU because the location information may be recalibrated each time at the key frame location.
Type: Grant
Filed: May 31, 2019
Date of Patent: April 6, 2021
Assignee: Foresight AI Inc.
Inventors: Matheen Siddiqui, Chang Yuan
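Why returning to the key frame location helps can be shown with a small Python dead-reckoning toy: a constant bias in the IMU's acceleration samples makes the integrated position drift, but because the drone is physically back at the key frame after every round, the estimate can be reset there so drift never compounds across rounds. The 1D model and the bias value are illustrative assumptions:

```python
def integrate(accels, dt, x0=0.0, v0=0.0):
    """Dead-reckon position from IMU acceleration samples."""
    x, v = x0, v0
    for a in accels:
        v += a * dt
        x += v * dt
    return x, v

def survey_with_returns(rounds, dt, key_x=0.0):
    """After each scan round the drone returns to the key frame
    location, so accumulated IMU drift can be zeroed out there."""
    drift_log = []
    x, v = key_x, 0.0
    for accels in rounds:
        x, v = integrate(accels, dt, x, v)
        drift_log.append(x - key_x)   # drone is physically back at key_x
        x, v = key_x, 0.0             # recalibrate the estimate
    return drift_log

# Each round is out-and-back; a constant 0.01 bias corrupts every sample.
round_accels = [1.0 + 0.01, -1.0 + 0.01, -1.0 + 0.01, 1.0 + 0.01]
drift = survey_with_returns([round_accels] * 3, dt=1.0)
```

With the reset, each round's drift stays the same small amount instead of growing with every round.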
-
Publication number: 20210027084
Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
Type: Application
Filed: September 30, 2020
Publication date: January 28, 2021
Applicant: Nant Holdings IP, LLC
Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
-
Publication number: 20200353943
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a driving scenario machine learning network and providing a simulated driving environment. The operations include receiving video data that includes multiple video frames depicting an aerial view of vehicles moving about an area. The video data is processed and driving scenario data is generated which includes information about the dynamic objects identified in the video. A machine learning network is trained using the generated driving scenario data. A 3-dimensional simulated environment is provided which is configured to allow an autonomous vehicle to interact with one or more of the dynamic objects.
Type: Application
Filed: September 11, 2019
Publication date: November 12, 2020
Inventors: Matheen Siddiqui, Cheng-Yi Lin, Chang Yuan