Patents by Inventor Onur C. Hamsici
Onur C. Hamsici has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240378827
Abstract: Various implementations disclosed herein include devices, systems, and methods that track a position of a device within an object-based coordinate system. An example process may include, at a device and prior to movement of an object, acquiring first images of the object, identifying three-dimensional (3D) keypoints on surfaces of the object, and tracking positions of the device in an object-based coordinate system during acquisition of the first images. The process may further include, subsequent to a movement of the object, acquiring second images of the object, identifying the 3D keypoints on surfaces of the object, and tracking positions of the device in the object-based coordinate system during acquisition of the second images based on identifying the 3D keypoints. The process may further include generating a 3D model of the object based on the first and second images and the tracked positions of the device during acquisition of the images.
Type: Application
Filed: July 8, 2024
Publication date: November 14, 2024
Inventors: Thorsten Gernoth, Chen Huang, Onur C. Hamsici, Shuo Feng, Hao Tang, Tobias Rick
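The key idea in this abstract is that device poses are expressed relative to the object rather than the world, so tracking survives the object being moved. As a minimal sketch of that coordinate change only (not the patented method; the transform names and the keypoint re-anchoring step are assumed), re-expressing a world-frame device pose in an object-based frame is a single matrix operation:

```python
# Minimal sketch: expressing a device pose in an object-based coordinate
# system. T_* are 4x4 homogeneous transforms; all names are illustrative.
import numpy as np

def pose_in_object_frame(T_object_in_world: np.ndarray,
                         T_device_in_world: np.ndarray) -> np.ndarray:
    """Re-express the device pose relative to the object's own frame.

    If the object frame is re-anchored after the object moves (e.g. by
    re-detecting the same 3D keypoints), device poses from before and after
    the move become comparable in this shared object-based frame.
    """
    return np.linalg.inv(T_object_in_world) @ T_device_in_world

# Example: object rotated 90 degrees about Z and shifted; device pose in world.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T_obj = np.array([[c, -s, 0, 1.0],
                  [s,  c, 0, 0.0],
                  [0,  0, 1, 0.5],
                  [0,  0, 0, 1.0]])
T_dev = np.eye(4)
T_dev[:3, 3] = [2.0, 0.0, 0.5]
print(pose_in_object_frame(T_obj, T_dev))
```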
-
Patent number: 12002227
Abstract: Devices, systems, and methods are disclosed for partial point cloud registration. In some implementations, a method includes obtaining a first set of three-dimensional (3D) points corresponding to an object in a physical environment, the first set of 3D points having locations in a first coordinate system, obtaining a second set of 3D points corresponding to the object in the physical environment, the second set of 3D points having locations in a second coordinate system, predicting, via a machine learning model, locations of the first set of 3D points in the second coordinate system, and determining transform parameters relating the first set of 3D points and the second set of 3D points based on the predicted locations of the first set of 3D points in the second coordinate system.
Type: Grant
Filed: July 15, 2021
Date of Patent: June 4, 2024
Assignee: Apple Inc.
Inventors: Donghoon Lee, Thorsten Gernoth, Onur C. Hamsici, Shuo Feng
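Once a model has predicted where each point of the first set should lie in the second coordinate system, the remaining step the abstract describes is fitting transform parameters to those pairs. A common way to do that for a rigid transform is the SVD-based (Kabsch) solution; the sketch below uses it as a stand-in, with made-up toy data rather than real model output:

```python
# Sketch under assumptions: fit a rigid transform (R, t) to predicted point
# correspondences with the classic SVD-based (Kabsch) solution.
import numpy as np

def fit_rigid_transform(src: np.ndarray, pred: np.ndarray):
    """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - pred_i||."""
    mu_s, mu_p = src.mean(axis=0), pred.mean(axis=0)
    H = (src - mu_s).T @ (pred - mu_p)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
    t = mu_p - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
pred = src @ R_true.T + np.array([0.5, -0.2, 1.0])   # stands in for model output
R, t = fit_rigid_transform(src, pred)
print(np.allclose(R, R_true), np.round(t, 3))
```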
-
Patent number: 11915460Abstract: A device implementing a system for providing predicted RGB images includes at least one processor configured to obtain an infrared image of a subject, and to obtain a reference RGB image of the subject. The at least one processor is further configured to provide the infrared image and the reference RGB image to a machine learning model, the machine learning model having been trained to output predicted RGB images of subjects based on infrared images and reference RGB images of the subjects. The at least one processor is further configured to provide a predicted RGB image of the subject based on output by the machine learning model.Type: GrantFiled: July 7, 2022Date of Patent: February 27, 2024Assignee: Apple Inc.Inventors: Carlos E. Guestrin, Leon A. Gatys, Shreyas V. Joshi, Gustav M. Larsson, Kory R. Watson, Srikrishna Sridhar, Karla P. Vega, Shawn R. Scully, Thorsten Gernoth, Onur C Hamsici
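The claimed system uses a trained ML model; the stand-in below only illustrates the shape of the interface the abstract describes (infrared image plus reference RGB in, predicted RGB out), using a crude colour-statistics transfer in place of the model. All names and the transfer rule are assumptions, not the patented approach:

```python
# Illustrative stand-in only: transfer per-channel colour statistics from the
# reference RGB image onto the IR image to produce an "RGB-like" prediction.
import numpy as np

def predict_rgb(ir: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    """ir: HxW floats in [0, 1]; reference_rgb: hxwx3 floats in [0, 1]."""
    ir_norm = (ir - ir.mean()) / (ir.std() + 1e-8)
    out = np.empty(ir.shape + (3,), dtype=np.float64)
    for c in range(3):                       # match mean/std of each channel
        ref_c = reference_rgb[..., c]
        out[..., c] = ir_norm * ref_c.std() + ref_c.mean()
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
ir = rng.random((64, 64))
ref = rng.random((32, 32, 3))
print(predict_rgb(ir, ref).shape)            # (64, 64, 3)
```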
-
Publication number: 20240062488
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model of an object based on images and tracked positions of a device during acquisition of the images. An example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of the physical environment acquired via a camera on the device, identifying the object in at least some of the images, tracking positions of the device during acquisition of the images based on identifying the object in at least some of the images, the positions identifying positioning of the device with respect to a coordinate system defined based on a position and orientation of the object, and generating a 3D model of the object based on the images and positions of the device during acquisition of the images.
Type: Application
Filed: November 1, 2023
Publication date: February 22, 2024
Inventors: Thorsten Gernoth, Chen Huang, Onur C. Hamsici, Shuo Feng, Hao Tang, Tobias Rick
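With a per-frame device pose expressed in the object's coordinate system, one concrete step toward the 3D model described here is accumulating geometry in that shared object-centred frame. The sketch below shows that step only, under stated assumptions (a depth image per frame, known camera intrinsics, hypothetical names); it is not the patented pipeline:

```python
# Minimal sketch: back-project depth pixels into an object-centred point
# cloud using per-frame poses. A later meshing step could build the 3D model.
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray, T_cam_in_obj: np.ndarray) -> np.ndarray:
    """Lift valid depth pixels to 3D points expressed in the object frame."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                       # pixel row/column grids
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
    return (pts_cam @ T_cam_in_obj.T)[:, :3]

K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 32.0], [0.0, 0.0, 1.0]])
frames = [(np.full((64, 64), 1.0), np.eye(4))]      # toy: one flat depth map, identity pose
cloud = np.concatenate([backproject(d, K, T) for d, T in frames])
print(cloud.shape)                                   # (4096, 3)
```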
-
Publication number: 20220414543
Abstract: A device implementing a system for providing predicted RGB images includes at least one processor configured to obtain an infrared image of a subject, and to obtain a reference RGB image of the subject. The at least one processor is further configured to provide the infrared image and the reference RGB image to a machine learning model, the machine learning model having been trained to output predicted RGB images of subjects based on infrared images and reference RGB images of the subjects. The at least one processor is further configured to provide a predicted RGB image of the subject based on output by the machine learning model.
Type: Application
Filed: July 7, 2022
Publication date: December 29, 2022
Inventors: Carlos E. Guestrin, Leon A. Gatys, Shreyas V. Joshi, Gustav M. Larsson, Kory R. Watson, Srikrishna Sridhar, Karla P. Vega, Shawn R. Scully, Thorsten Gernoth, Onur C. Hamsici
-
Patent number: 11386355
Abstract: A device implementing a system for providing predicted RGB images includes at least one processor configured to obtain an infrared image of a subject, and to obtain a reference RGB image of the subject. The at least one processor is further configured to provide the infrared image and the reference RGB image to a machine learning model, the machine learning model having been trained to output predicted RGB images of subjects based on infrared images and reference RGB images of the subjects. The at least one processor is further configured to provide a predicted RGB image of the subject based on output by the machine learning model.
Type: Grant
Filed: December 6, 2019
Date of Patent: July 12, 2022
Assignee: Apple Inc.
Inventors: Carlos E. Guestrin, Leon A. Gatys, Shreyas V. Joshi, Gustav M. Larsson, Kory R. Watson, Srikrishna Sridhar, Karla P. Vega, Shawn R. Scully, Thorsten Gernoth, Onur C. Hamsici
-
Publication number: 20210279967
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model of an object based on images and tracked positions of a device during acquisition of the images. An example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of the physical environment acquired via a camera on the device, identifying the object in at least some of the images, tracking positions of the device during acquisition of the images based on identifying the object in at least some of the images, the positions identifying positioning of the device with respect to a coordinate system defined based on a position and orientation of the object, and generating a 3D model of the object based on the images and positions of the device during acquisition of the images.
Type: Application
Filed: February 19, 2021
Publication date: September 9, 2021
Inventors: Thorsten Gernoth, Chen Huang, Onur C. Hamsici, Shuo Feng, Hao Tang, Tobias Rick
-
Patent number: 10915734
Abstract: An image captured using a camera on a device (e.g., a mobile device) may be operated on by one or more processes to determine properties of a user's face in the image. A first process may determine one or more first properties of the user's face in the image. A second process operating downstream from the first process may determine at least one second property of the user's face in the image. The second process may use at least one of the first properties from the first process to determine the second property.
Type: Grant
Filed: February 15, 2019
Date of Patent: February 9, 2021
Assignee: Apple Inc.
Inventors: Atulit Kumar, Joerg A. Liebelt, Onur C. Hamsici, Feng Tang
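The abstract describes a two-stage pipeline where a downstream process reuses properties produced upstream. The sketch below shows only that wiring, with placeholder logic and hypothetical names (a bounding box as the "first property", a brightness statistic as the "second property"); it is not the patented processing:

```python
# Sketch of the two-stage idea: a first process yields coarse face properties,
# and a downstream second process reuses them to compute a further property.
import numpy as np

def first_process(image: np.ndarray) -> dict:
    """Placeholder detector: return a centred bounding box as a first property."""
    h, w = image.shape[:2]
    return {"bbox": (w // 4, h // 4, w // 2, h // 2)}

def second_process(image: np.ndarray, first_props: dict) -> dict:
    """Use the upstream bbox to compute a second property on the cropped face."""
    x, y, bw, bh = first_props["bbox"]
    crop = image[y:y + bh, x:x + bw]
    return {"mean_brightness": float(crop.mean())}

img = np.random.default_rng(2).random((128, 128))
props1 = first_process(img)
props2 = second_process(img, props1)
print(props1, props2)
```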
-
Patent number: 10762335
Abstract: Detection of a user paying attention to a device may be used to enable or support biometric security (e.g., facial recognition) enabled features on the device. Images captured by a camera on the device may be used to determine if the user is paying attention to the device. Facial features of the user's face in the images may be assessed to determine if the user is paying attention to the device. Facial features may be assessed through comparison of feature vectors generated from the captured images to a set of known feature vectors. The known feature vectors for attention may be generated using a machine learning process.
Type: Grant
Filed: March 23, 2018
Date of Patent: September 1, 2020
Assignee: Apple Inc.
Inventors: Thorsten Gernoth, Joerg A. Liebelt, Onur C. Hamsici, Kelsey Y. Ho
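The comparison step the abstract describes, matching a feature vector from the captured image against known "attending" feature vectors, can be sketched as a nearest-template similarity check. This is a minimal illustration with random stand-in vectors and an arbitrary threshold, not the device's implementation:

```python
# Minimal sketch: decide attention by comparing an embedding of the current
# face image against known attention feature vectors via cosine similarity.
import numpy as np

def is_paying_attention(feature: np.ndarray,
                        known_attention_vectors: np.ndarray,
                        threshold: float = 0.8) -> bool:
    f = feature / np.linalg.norm(feature)
    gallery = known_attention_vectors / np.linalg.norm(
        known_attention_vectors, axis=1, keepdims=True)
    return float(np.max(gallery @ f)) >= threshold

rng = np.random.default_rng(3)
gallery = rng.normal(size=(10, 128))        # stand-in for learned templates
query = gallery[0] + 0.05 * rng.normal(size=128)
print(is_paying_attention(query, gallery))  # True for a near-duplicate
```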
-
Patent number: 10713475
Abstract: A single network encodes and decodes an image captured using a camera on a device. The single network detects if a face is in the image. If a face is detected in the image, the single network determines properties of the face in the image and outputs the properties along with the face detection output. Properties of the face may be determined by sharing the task for face detection. Properties of the face that are output along with the face detection output include the location of the face, the pose of the face, and/or the distance of the face from the camera.
Type: Grant
Filed: March 2, 2018
Date of Patent: July 14, 2020
Assignee: Apple Inc.
Inventors: Thorsten Gernoth, Atulit Kumar, Ian R. Fasel, Haitao Guo, Onur C. Hamsici
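The "single network" idea here is a shared backbone feeding several output heads so that detection and face properties come out of one pass. The schematic below only shows that multi-task wiring; the layers are random linear maps with assumed head names, not the actual architecture:

```python
# Schematic only: one shared encoder feeds heads for face score, location,
# pose, and distance. Weights are random; this demonstrates wiring, not results.
import numpy as np

rng = np.random.default_rng(4)
W_backbone = rng.normal(size=(256, 64 * 64)) * 0.01      # shared encoder
heads = {
    "face_score": rng.normal(size=(1, 256)) * 0.01,
    "bbox": rng.normal(size=(4, 256)) * 0.01,            # face location
    "pose": rng.normal(size=(3, 256)) * 0.01,             # yaw, pitch, roll
    "distance": rng.normal(size=(1, 256)) * 0.01,
}

def single_network(image: np.ndarray) -> dict:
    feat = np.tanh(W_backbone @ image.ravel())             # shared representation
    out = {name: W @ feat for name, W in heads.items()}
    out["face_detected"] = bool(1 / (1 + np.exp(-out["face_score"][0])) > 0.5)
    return out

print(single_network(rng.random((64, 64)))["face_detected"])
```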
-
Publication number: 20200193328
Abstract: A device implementing a system for providing predicted RGB images includes at least one processor configured to obtain an infrared image of a subject, and to obtain a reference RGB image of the subject. The at least one processor is further configured to provide the infrared image and the reference RGB image to a machine learning model, the machine learning model having been trained to output predicted RGB images of subjects based on infrared images and reference RGB images of the subjects. The at least one processor is further configured to provide a predicted RGB image of the subject based on output by the machine learning model.
Type: Application
Filed: December 6, 2019
Publication date: June 18, 2020
Inventors: Carlos E. Guestrin, Leon A. Gatys, Shreyas V. Joshi, Gustav M. Larsson, Kory R. Watson, Srikrishna Sridhar, Karla P. Vega, Shawn R. Scully, Thorsten Gernoth, Onur C. Hamsici
-
Publication number: 20200104570
Abstract: An image captured using a camera on a device (e.g., a mobile device) may be operated on by one or more processes to determine properties of a user's face in the image. A first process may determine one or more first properties of the user's face in the image. A second process operating downstream from the first process may determine at least one second property of the user's face in the image. The second process may use at least one of the first properties from the first process to determine the second property.
Type: Application
Filed: February 15, 2019
Publication date: April 2, 2020
Inventors: Atulit Kumar, Joerg A. Liebelt, Onur C. Hamsici, Feng Tang
-
Publication number: 20190042833
Abstract: A single network encodes and decodes an image captured using a camera on a device. The single network detects if a face is in the image. If a face is detected in the image, the single network determines properties of the face in the image and outputs the properties along with the face detection output. Properties of the face may be determined by sharing the task for face detection. Properties of the face that are output along with the face detection output include the location of the face, the pose of the face, and/or the distance of the face from the camera.
Type: Application
Filed: March 2, 2018
Publication date: February 7, 2019
Inventors: Thorsten Gernoth, Atulit Kumar, Ian R. Fasel, Haitao Guo, Onur C. Hamsici
-
Publication number: 20180336399
Abstract: Detection of a user paying attention to a device may be used to enable or support biometric security (e.g., facial recognition) enabled features on the device. Images captured by a camera on the device may be used to determine if the user is paying attention to the device. Facial features of the user's face in the images may be assessed to determine if the user is paying attention to the device. Facial features may be assessed through comparison of feature vectors generated from the captured images to a set of known feature vectors. The known feature vectors for attention may be generated using a machine learning process.
Type: Application
Filed: March 23, 2018
Publication date: November 22, 2018
Inventors: Thorsten Gernoth, Joerg A. Liebelt, Onur C. Hamsici, Kelsey Y. Ho
-
Patent number: 9530073
Abstract: A local feature descriptor for a point in an image is generated over multiple levels of an image scale space. The image is gradually smoothened to obtain a plurality of scale spaces. A point may be identified as the point of interest within a first scale space from the plurality of scale spaces. A plurality of image derivatives is obtained for each of the plurality of scale spaces. A plurality of orientation maps is obtained (from the plurality of image derivatives) for each scale space in the plurality of scale spaces. Each of the plurality of orientation maps is then smoothened (e.g., convolved) to obtain a corresponding plurality of smoothed orientation maps. A local feature descriptor for the point may then be generated by sparsely sampling a plurality of smoothed orientation maps corresponding to two or more scale spaces from the plurality of scale spaces.
Type: Grant
Filed: April 19, 2011
Date of Patent: December 27, 2016
Assignee: QUALCOMM Incorporated
Inventors: Onur C. Hamsici, John H. Hong, Yuriy Reznik, Sundeep Vaddadi, Chong Uk Lee
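The abstract walks through a concrete pipeline: smooth the image into scale-space levels, split gradients into orientation maps, smooth those maps, then sparsely sample them around the point. The sketch below follows those steps with assumed parameters (number of orientation bins, sigmas, sample grid); it is a rough illustration, not the patented parameterization:

```python
# Rough sketch of the descriptor pipeline: Gaussian scale space -> gradient
# orientation maps -> smoothed orientation maps -> sparse samples near a point.
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_maps(img: np.ndarray, n_bins: int = 4):
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                # orientations in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return [np.where(bins == b, mag, 0.0) for b in range(n_bins)]

def descriptor(img, keypoint, sigmas=(1.0, 2.0), n_bins=4, offsets=(-4, 0, 4)):
    y, x = keypoint
    vec = []
    for sigma in sigmas:                                    # two scale-space levels
        level = gaussian_filter(img, sigma)
        for omap in orientation_maps(level, n_bins):
            smoothed = gaussian_filter(omap, sigma)         # smooth the orientation map
            for dy in offsets:                              # sparse sample grid
                for dx in offsets:
                    vec.append(smoothed[y + dy, x + dx])
    v = np.asarray(vec)
    return v / (np.linalg.norm(v) + 1e-8)

img = np.random.default_rng(5).random((64, 64))
print(descriptor(img, (32, 32)).shape)                      # (2 * 4 * 9,) = (72,)
```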
-
Patent number: 9153061
Abstract: Techniques for segmentation of three-dimensional (3D) point clouds are described herein. An example of a method for user-assisted segmentation of a 3D point cloud described herein includes obtaining a 3D point cloud of a scene containing a target object; receiving a seed input indicative of a location of the target object within the scene; and generating a segmented point cloud corresponding to the target object by pruning the 3D point cloud based on the seed input.
Type: Grant
Filed: September 14, 2012
Date of Patent: October 6, 2015
Assignee: QUALCOMM Incorporated
Inventors: Sundeep Vaddadi, Andrew Moore Ziegler, Onur C. Hamsici
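A toy version of the seed-and-prune idea: given a user-supplied seed location, keep only points reachable from the seed through small point-to-point steps and prune the rest. The step threshold and synthetic data below are arbitrary assumptions; the patent's pruning criteria may differ:

```python
# Toy sketch: crude flood fill from a seed point to segment a target cluster
# out of a 3D point cloud.
import numpy as np

def segment_from_seed(points: np.ndarray, seed: np.ndarray, step: float = 0.05):
    selected = np.zeros(len(points), dtype=bool)
    frontier = [int(np.argmin(np.linalg.norm(points - seed, axis=1)))]
    selected[frontier[0]] = True
    while frontier:
        idx = frontier.pop()
        near = np.linalg.norm(points - points[idx], axis=1) < step
        new = near & ~selected
        selected |= new
        frontier.extend(np.flatnonzero(new).tolist())
    return points[selected]

rng = np.random.default_rng(6)
object_pts = rng.normal(0.0, 0.02, size=(300, 3))     # tight cluster = target object
clutter = rng.normal(1.0, 0.02, size=(300, 3))         # far-away clutter
cloud = np.vstack([object_pts, clutter])
seg = segment_from_seed(cloud, seed=np.zeros(3))
print(len(seg))                                         # roughly 300 (the target cluster)
```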
-
Patent number: 9129189
Abstract: In general, techniques are described for performing a vocabulary-based visual search using multi-resolution feature descriptors. A device may comprise one or more processors configured to perform the techniques. The processors may generate a hierarchically arranged data structure to be used when classifying objects included within a query image based on a multi-resolution query feature descriptor extracted from the query image at a first scale space resolution and a second scale space resolution. The hierarchically arranged data structure may represent a first query feature descriptor of the multi-resolution feature descriptor extracted at the first scale space resolution and a second corresponding query feature descriptor of the multi-resolution feature descriptor extracted at the second scale space resolution, hierarchically arranged according to the first scale space resolution and the second scale space resolution. The processors may then perform a visual search based on the generated data structure.
Type: Grant
Filed: September 30, 2013
Date of Patent: September 8, 2015
Assignee: QUALCOMM Incorporated
Inventor: Onur C. Hamsici
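The hierarchy described here arranges a coarse-resolution descriptor above its corresponding fine-resolution descriptor so the search can proceed coarse-to-fine. The sketch below shows one simple coarse-to-fine search of that flavour under assumptions (the "coarse" view is just a truncated descriptor, centroids are random picks); it is not the patented index:

```python
# Sketch: two-level, coarse-to-fine search. Coarse descriptors pick a bucket;
# fine descriptors are compared only inside that bucket.
import numpy as np

rng = np.random.default_rng(7)
n_db, n_buckets, d_fine, d_coarse = 1000, 8, 32, 8
fine_db = rng.normal(size=(n_db, d_fine))                  # fine-scale descriptors
coarse_db = fine_db[:, :d_coarse]                          # stand-in "coarse" view

# Build the hierarchy: coarse centroids on top, member lists underneath.
centroids = coarse_db[rng.choice(n_db, n_buckets, replace=False)]
assign = np.argmin(
    np.linalg.norm(coarse_db[:, None, :] - centroids[None], axis=2), axis=1)
buckets = [np.flatnonzero(assign == b) for b in range(n_buckets)]

def search(query_fine: np.ndarray) -> int:
    query_coarse = query_fine[:d_coarse]
    b = int(np.argmin(np.linalg.norm(centroids - query_coarse, axis=1)))
    members = buckets[b]
    dists = np.linalg.norm(fine_db[members] - query_fine, axis=1)
    return int(members[np.argmin(dists)])

target = 123
print(search(fine_db[target] + 0.01 * rng.normal(size=d_fine)) == target)
```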
-
Patent number: 9117144
Abstract: In general, techniques are described for performing a vocabulary-based visual search using multi-resolution feature descriptors. A device may comprise one or more processors configured to perform the techniques. The one or more processors may apply a partitioning algorithm to a first subset of target feature descriptors to determine a first classifying data structure to be used when performing a visual search with respect to a query feature descriptor. The one or more processors may then apply the partitioning algorithm to a second subset of the target feature descriptors to determine a second classifying data structure to be used when performing the visual search with respect to the same query feature descriptor.
Type: Grant
Filed: September 30, 2013
Date of Patent: August 25, 2015
Assignee: QUALCOMM Incorporated
Inventor: Onur C. Hamsici
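The two-structure idea can be sketched as: split the target descriptors into two subsets, run a partitioning algorithm on each to build a classifying structure, then query both structures with the same descriptor and keep the better hit. In the sketch, plain k-means stands in for the patent's partitioning algorithm and the data is synthetic:

```python
# Sketch: build one classifying structure per subset of target descriptors,
# query both with the same descriptor, keep the best match.
import numpy as np

def kmeans(X, k, iters=10, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        C = np.array([X[lbl == j].mean(axis=0) if np.any(lbl == j) else C[j]
                      for j in range(k)])
    return C, lbl

def build(X, k=8, rng=None):
    C, lbl = kmeans(X, k, rng=rng)
    return {"X": X, "C": C, "buckets": [np.flatnonzero(lbl == j) for j in range(k)]}

def query(struct, q):
    b = int(np.argmin(np.linalg.norm(struct["C"] - q, axis=1)))
    idx = struct["buckets"][b]
    d = np.linalg.norm(struct["X"][idx] - q, axis=1)
    return int(idx[np.argmin(d)]), float(d.min())

rng = np.random.default_rng(8)
targets = rng.normal(size=(500, 16))
half = len(targets) // 2
s1, s2 = build(targets[:half], rng=rng), build(targets[half:], rng=rng)
q = targets[400] + 0.01 * rng.normal(size=16)           # query near target index 400
(i1, d1), (i2, d2) = query(s1, q), query(s2, q)
best = i1 if d1 < d2 else i2 + half                     # map back to the full index
print(best)                                              # likely 400
```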
-
Patent number: 9036925
Abstract: Techniques are disclosed for performing robust feature matching for visual search. An apparatus comprising an interface and a feature matching unit may implement these techniques. The interface receives a query feature descriptor. The feature matching unit then computes a distance between a query feature descriptor and reference feature descriptors and determines a first group of the computed distances and a second group of the computed distances in accordance with a clustering algorithm, where this second group of computed distances comprises two or more of the computed distances. The feature matching unit then determines whether the query feature descriptor matches one of the reference feature descriptors associated with a smallest one of the computed distances based on the determined first group and second group of the computed distances.
Type: Grant
Filed: December 6, 2011
Date of Patent: May 19, 2015
Assignee: QUALCOMM Incorporated
Inventors: Sundeep Vaddadi, Onur C. Hamsici, Yuriy Reznik, John H. Hong, Chong U. Lee
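A hedged sketch of the decision rule described here: cluster the query-to-reference distances into two one-dimensional groups and accept the closest reference only if it falls in the small-distance group and that group is well separated from the rest, with the second group holding at least two distances. The separation factor and the simple 1-D 2-means below are stand-ins, not the exact patented test:

```python
# Sketch: clustering-based match acceptance (a cousin of the ratio test).
import numpy as np

def robust_match(query, references, separation=1.5):
    d = np.linalg.norm(references - query, axis=1)
    lo, hi = d.min(), d.max()                        # initialize 1-D 2-means
    for _ in range(10):
        to_lo = np.abs(d - lo) < np.abs(d - hi)
        lo, hi = d[to_lo].mean(), d[~to_lo].mean()
    best = int(np.argmin(d))
    accepted = to_lo[best] and hi > separation * lo and np.sum(~to_lo) >= 2
    return (best, float(d[best])) if accepted else None

rng = np.random.default_rng(9)
refs = rng.normal(size=(200, 64))
q = refs[17] + 0.05 * rng.normal(size=64)            # genuine match to index 17
print(robust_match(q, refs))                          # (17, small distance)
q_bad = rng.normal(size=64)                           # impostor query
print(robust_match(q_bad, refs))                      # None
```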
-
Publication number: 20150049943
Abstract: In general, techniques are described for performing a vocabulary-based visual search using multi-resolution feature descriptors. A device may comprise one or more processors configured to perform the techniques. The one or more processors may apply a partitioning algorithm to a first subset of target feature descriptors to determine a first classifying data structure to be used when performing a visual search with respect to a query feature descriptor. The one or more processors may then apply the partitioning algorithm to a second subset of the target feature descriptors to determine a second classifying data structure to be used when performing the visual search with respect to the same query feature descriptor.
Type: Application
Filed: September 30, 2013
Publication date: February 19, 2015
Applicant: QUALCOMM Incorporated
Inventor: Onur C. Hamsici