Patents by Inventor Juan David Hincapie
Juan David Hincapie has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240410711
Abstract: Systems, methods, and/or algorithms facilitate navigation with augmented reality by determining which streets should be annotated, and/or determining the position and/or format of street annotations, based on factors such as user distance and orientation relative to streets, and/or the configuration of streets (e.g., whether nearby streets form a simple intersection, a complex intersection, or no intersection).
Type: Application
Filed: September 29, 2021
Publication date: December 12, 2024
Inventors: Juan David Hincapie, Mohamed Suhail Mohamed Yousuf Sait, Marek Gorecki, Andre Le, Michael Humphrey, Devin Nickoloff, Mirko Ranieri, Loran Briggs, Yan Wang, Carlos David Correa Ocampo, Wenli Zhao
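The street-selection logic this abstract describes can be sketched as a small filter over candidate streets, keeping only those close enough and within the camera's view. The distance and field-of-view thresholds, the coordinate convention, and all names below are illustrative assumptions, not values from the patent:

```python
import math

def streets_to_annotate(user_pos, user_heading_deg, streets,
                        max_dist_m=75.0, fov_deg=120.0):
    """Select streets to label in an AR view: keep those whose anchor
    point is within max_dist_m of the user and whose bearing falls
    inside the camera's horizontal field of view, nearest first."""
    chosen = []
    for name, (sx, sy) in streets:
        dx, dy = sx - user_pos[0], sy - user_pos[1]
        dist = math.hypot(dx, dy)
        if dist > max_dist_m:
            continue  # too far away to annotate legibly
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = north
        diff = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov_deg / 2.0:
            chosen.append((dist, name))
    return [name for _, name in sorted(chosen)]
```

A fuller version in the spirit of the abstract would also classify how the surviving streets meet (simple vs. complex intersection) before choosing an annotation format.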
-
Publication number: 20240320855
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Application
Filed: June 5, 2024
Publication date: September 26, 2024
Inventors: Juan David Hincapie, Andre Le
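The proximity query this abstract mentions (points of interest within a particular distance of the pose location) can be sketched with a list standing in for the mapping database; the radius, data shapes, and names are illustrative assumptions:

```python
import math

def nearby_pois(pose, poi_db, radius_m=100.0):
    """Query a toy 'mapping database' (a list of (name, (x, y)) pairs)
    for points of interest within radius_m of the device location
    carried in the pose; results are sorted nearest first."""
    (x, y), _heading = pose  # pose = ((x, y), heading_deg)
    hits = []
    for name, (px, py) in poi_db:
        d = math.hypot(px - x, py - y)
        if d <= radius_m:
            hits.append((name, round(d, 1)))
    return sorted(hits, key=lambda t: t[1])
```

A production system would run this as a spatial index query (e.g. against tiled map data) rather than a linear scan.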
-
Publication number: 20240303876
Abstract: A method for determining a location at which an unobstructed view of a target location is obtained, includes capturing an image of an environment which includes a target location; accessing target location data, wherein the target location data comprises information associated with the environment and the target location; determining, based on the target location data and a spatial relationship between the target location and one or more objects in the image which obstruct a view of the target location, at least one suitable location of a user device from which an unobstructed view of the target location is within a field of view of the user device and satisfies one or more criteria; and providing a guided human-machine interaction process to assist a user associated with the user device in re-locating the user device to the at least one suitable location.
Type: Application
Filed: May 20, 2024
Publication date: September 12, 2024
Inventors: Juan David Hincapie Ramos, Justin Paul Quimby, Marek Lech Gorecki
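The core geometric test behind this kind of method, whether an obstacle sits between the device and the target, reduces to 2-D line-of-sight checks. This sketch models obstacles as wall segments and scans hypothetical candidate positions; all names and the 2-D simplification are assumptions for illustration:

```python
def _ccw(a, b, c):
    """Signed area test: positive if a->b->c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def view_blocked(pos, target, wall):
    """True if the straight line of sight from pos to target crosses
    the obstacle segment wall = (q1, q2)."""
    q1, q2 = wall
    return (_ccw(pos, target, q1) * _ccw(pos, target, q2) < 0 and
            _ccw(q1, q2, pos) * _ccw(q1, q2, target) < 0)

def suitable_position(candidates, target, walls):
    """Return the first candidate position with an unobstructed view
    of the target, or None if every view is blocked."""
    for pos in candidates:
        if not any(view_blocked(pos, target, w) for w in walls):
            return pos
    return None
```

The guided human-machine interaction the abstract describes would then direct the user from their current position toward the returned candidate.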
-
Patent number: 12033351
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Grant
Filed: May 15, 2023
Date of Patent: July 9, 2024
Assignee: Google LLC
Inventors: Juan David Hincapie, Andre Le
-
Patent number: 12026805
Abstract: Methods, systems, devices, and tangible non-transitory computer readable media for generating geolocalized images are provided. The disclosed technology can access target location data. The target location data can include information associated with an environment from which a target location is within a field of view of a user device. A suitable position of the user device from which an unobstructed view of the target location is within the field of view of the user device and that satisfies one or more criteria can be determined based on the target location data. Furthermore, indications and images of the environment within the field of view of the user device can be generated. The indications can be associated with positioning the user device in the suitable position.
Type: Grant
Filed: December 31, 2020
Date of Patent: July 2, 2024
Assignee: GOOGLE LLC
Inventors: Juan David Hincapie Ramos, Justin Paul Quimby, Marek Lech Gorecki
-
Publication number: 20240177364
Abstract: To present augmented reality features without localizing a user, a client device receives a request for presenting augmented reality features in a camera view of a computing device of the user. Prior to localizing the user, the client device obtains sensor data indicative of a pose of the user, and determines the pose of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state. Then the client device presents one or more augmented reality features in the camera view in accordance with the determined pose of the user while in the low accuracy state.
Type: Application
Filed: February 6, 2024
Publication date: May 30, 2024
Inventors: Mohamed Suhail Mohamed Yousuf Sait, Andre Le, Juan David Hincapie, Mirko Ranieri, Marek Gorecki, Wenli Zhao, Tony Shih, Bo Zhang, Alan Sheridan, Matt Seegmiller
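The confidence-threshold gating this abstract describes can be sketched as a small state function: AR features are shown in a coarse mode once pose confidence clears a low-accuracy bar, before full localization completes. The threshold values and state names below are illustrative assumptions:

```python
# Assumed thresholds for illustration; the patent does not specify values.
LOW_ACCURACY_THRESHOLD = 0.4
LOCALIZED_THRESHOLD = 0.9

def accuracy_state(pose_confidence):
    """Map a pose-confidence score in [0, 1] to a rendering state.
    'low_accuracy' means coarse AR features are drawn even though
    the user is not yet fully localized."""
    if pose_confidence >= LOCALIZED_THRESHOLD:
        return "localized"
    if pose_confidence >= LOW_ACCURACY_THRESHOLD:
        return "low_accuracy"
    return "hidden"  # not enough confidence to draw anything
```

The point of the design is latency: the camera view gets useful annotations immediately from coarse sensor fusion, then refines once visual localization succeeds.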
-
Patent number: 11928756
Abstract: To present augmented reality features without localizing a user, a client device receives a request for presenting augmented reality features in a camera view of a computing device of the user. Prior to localizing the user, the client device obtains sensor data indicative of a pose of the user, and determines the pose of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state. Then the client device presents one or more augmented reality features in the camera view in accordance with the determined pose of the user while in the low accuracy state.
Type: Grant
Filed: September 22, 2021
Date of Patent: March 12, 2024
Assignee: GOOGLE LLC
Inventors: Mohamed Suhail Mohamed Yousuf Sait, Andre Le, Juan David Hincapie, Mirko Ranieri, Marek Gorecki, Wenli Zhao, Tony Shih, Bo Zhang, Alan Sheridan, Matt Seegmiller
-
Publication number: 20230360257
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Application
Filed: May 15, 2023
Publication date: November 9, 2023
Applicant: Google LLC
Inventors: Juan David Hincapie, Andre Le
-
Publication number: 20230274491
Abstract: A method including receiving (S605) a request for a depth map, generating (S625) a hybrid depth map based on a device depth map (110) and downloaded depth information (105), and responding (S630) to the request for the depth map with the hybrid depth map (415). The device depth map (110) can be depth data captured on a user device (515) using sensors and/or software. The downloaded depth information (105) can be associated with depth data, map data, image data, and/or the like stored on a remote (to the user device) server (505).
Type: Application
Filed: September 1, 2021
Publication date: August 31, 2023
Inventors: Eric Turner, Adarsh Prakash Murthy Kowdle, Bicheng Luo, Juan David Hincapie Ramos
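One plausible merge rule for the hybrid depth map this abstract describes is per-pixel fallback: trust the on-device sensor where it returned a valid reading, and fill the gaps from the downloaded depth. This is an assumption about the merge policy, sketched over plain nested lists for clarity:

```python
def hybrid_depth(device_depth, downloaded_depth, invalid=0.0):
    """Merge two equally sized depth maps (row-major lists of floats):
    prefer the on-device sensor value where it is valid, and fall
    back to the downloaded depth elsewhere. 'invalid' marks pixels
    where the device sensor produced no reading (an assumed sentinel)."""
    return [
        [dev if dev != invalid else dl
         for dev, dl in zip(dev_row, dl_row)]
        for dev_row, dl_row in zip(device_depth, downloaded_depth)
    ]
```

Device depth sensors are typically dense but short-range, while downloaded depth covers distant geometry, so this kind of merge extends the usable depth range of the combined map.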
-
Patent number: 11688096
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Grant
Filed: November 16, 2021
Date of Patent: June 27, 2023
Assignee: Google LLC
Inventors: Juan David Hincapie, Andre Le
-
Publication number: 20230154059
Abstract: Methods, systems, devices, and tangible non-transitory computer readable media for generating geolocalized images are provided. The disclosed technology can access target location data. The target location data can include information associated with an environment from which a target location is within a field of view of a user device. A suitable position of the user device from which an unobstructed view of the target location is within the field of view of the user device and that satisfies one or more criteria can be determined based on the target location data. Furthermore, indications and images of the environment within the field of view of the user device can be generated. The indications can be associated with positioning the user device in the suitable position.
Type: Application
Filed: December 31, 2020
Publication date: May 18, 2023
Inventors: Juan David Hincapie Ramos, Justin Paul Quimby, Marek Lech Gorecki
-
Publication number: 20230088884
Abstract: To present augmented reality features without localizing a user, a client device receives a request for presenting augmented reality features in a camera view of a computing device of the user. Prior to localizing the user, the client device obtains sensor data indicative of a pose of the user, and determines the pose of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state. Then the client device presents one or more augmented reality features in the camera view in accordance with the determined pose of the user while in the low accuracy state.
Type: Application
Filed: September 22, 2021
Publication date: March 23, 2023
Inventors: Mohamad Suhail Mohamad Yousuf Sait, Andre Le, Juan David Hincapie, Mirko Ranieri, Marek Gorecki, Wenli Zhao, Tony Shih, Bo Zhang, Alan Sheridan, Matt Seegmiller
-
Publication number: 20220178713
Abstract: The present disclosure provides systems and methods for determining the orientation of a device based on visible and/or non-visible orientation cues. The orientation cues may be geographically located objects, such as a park, body of water, monument, building, landmark, etc. The orientation cues may be visible or non-visible with respect to the location of the device. The device may use one or more image sensors to detect the visible orientation cues. Non-visible orientation cues may be associated with map data. Using the location of the orientation cue and the distance of the orientation cue to the device, the orientation of the device may be determined. The device may then provide an output indicating the orientation of the device.
Type: Application
Filed: July 6, 2020
Publication date: June 9, 2022
Inventors: Juan David Hincapie, Rachel Elizabeth Inman
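The geometry behind a visible orientation cue is straightforward: if the device knows its own location, the cue's location, and where the cue appears in the camera view, the device heading follows. This sketch uses an assumed flat x/y coordinate frame (y = north) and illustrative names:

```python
import math

def heading_from_cue(device_xy, cue_xy, cue_azimuth_in_view_deg):
    """Infer the device heading from one visible orientation cue:
    the world bearing from device to cue, minus where the cue
    appears in the camera view (0 = straight ahead, positive =
    to the right). Returns degrees clockwise from north."""
    dx = cue_xy[0] - device_xy[0]
    dy = cue_xy[1] - device_xy[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = north
    return (bearing - cue_azimuth_in_view_deg) % 360.0
```

A non-visible cue from map data could feed the same computation, with the in-view azimuth replaced by a user-confirmed pointing direction.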
-
Publication number: 20220076443
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Application
Filed: November 16, 2021
Publication date: March 10, 2022
Inventors: Juan David Hincapie, Andre Le
-
Patent number: 11232587
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Grant
Filed: November 6, 2019
Date of Patent: January 25, 2022
Assignee: Google LLC
Inventors: Juan David Hincapie, Andre Le
-
Patent number: 11113427
Abstract: The present disclosure provides a method for processing display contents, a first electronic device, and a second electronic device thereof. The method of displaying contents includes the steps of: providing a first electronic device configured to display one or more virtual contents to a user, wherein the first electronic device is communicable with and coupled to a second electronic device, which includes a physical display configured to display one or more non-virtual contents; determining the user's line of sight; and prohibiting the physical display of the second electronic device from displaying the one or more non-virtual contents, in response to the user's line of sight not being on the physical display of the second electronic device.
Type: Grant
Filed: December 24, 2018
Date of Patent: September 7, 2021
Assignee: LENOVO (BEIJING) CO., LTD.
Inventors: Xin Jiang, Juan David Hincapie-Ramos
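The gating step of this method, blanking the physical screen whenever the line of sight is elsewhere, can be sketched as a point-in-rectangle test on a tracked gaze point. The 2-D gaze model and all names below are simplifying assumptions for illustration:

```python
def gaze_on_display(gaze_point, display_rect):
    """True when the tracked gaze point lies inside the physical
    display's bounding rectangle (x0, y0, x1, y1), all in a shared
    screen-plane coordinate frame (an assumed simplification)."""
    x, y = gaze_point
    x0, y0, x1, y1 = display_rect
    return x0 <= x <= x1 and y0 <= y <= y1

def should_show_content(gaze_point, display_rect):
    """The physical display is prohibited from showing its content
    whenever the user's line of sight is not on it."""
    return gaze_on_display(gaze_point, display_rect)
```

In the patented arrangement the first (head-worn) device would supply the gaze estimate and the second device would act on the resulting show/hide decision.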
-
Publication number: 20210134003
Abstract: The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest.
Type: Application
Filed: November 6, 2019
Publication date: May 6, 2021
Inventors: Juan David Hincapie, Andre Le
-
Publication number: 20190310705
Abstract: An image processing method for a head mount display is provided. The image processing method comprises obtaining a first rendering pitch corresponding to a user's real interpupillary distance; and adjusting, according to the first rendering pitch, display positions of two rendering images to be rendered by two physical displays of the head mount display, such that the rendering images respectively rendered by the two physical displays correspond to the user's real interpupillary distance.Type: Application
Filed: April 4, 2019
Publication date: October 10, 2019
Inventor: Juan David HINCAPIE RAMOS
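The adjustment this abstract describes amounts to shifting the two rendered images apart or together so their separation matches the measured interpupillary distance (IPD) instead of the headset default. The default IPD, pixel density, and names below are illustrative assumptions:

```python
def render_offsets(ipd_mm, default_ipd_mm=63.0, pixels_per_mm=10.0):
    """Horizontal shifts (in pixels) applied to the left and right
    rendered images so their on-screen separation matches the user's
    measured interpupillary distance. Half the total correction goes
    to each eye, in opposite directions."""
    delta_px = (ipd_mm - default_ipd_mm) * pixels_per_mm / 2.0
    return (-delta_px, +delta_px)  # (left image shift, right image shift)
```

A user with a wider-than-default IPD thus gets both images pushed outward; a narrower IPD pulls them inward, reducing eye strain and stereo misalignment.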
-
Publication number: 20190197262
Abstract: The present disclosure provides a method for processing display contents, a first electronic device, and a second electronic device thereof. The method of displaying contents includes the steps of: providing a first electronic device configured to display one or more virtual contents to a user, wherein the first electronic device is communicable with and coupled to a second electronic device, which includes a physical display configured to display one or more non-virtual contents; determining the user's line of sight; and prohibiting the physical display of the second electronic device from displaying the one or more non-virtual contents, in response to the user's line of sight not being on the physical display of the second electronic device.
Type: Application
Filed: December 24, 2018
Publication date: June 27, 2019
Inventors: Xin JIANG, Juan David HINCAPIE-RAMOS
-
Publication number: 20190196710
Abstract: The present disclosure provides a display screen processing method. Application scenario information of one or more virtual display screens displayed by a first electronic device through an optical lens module is determined. The first electronic device is coupled with a second electronic device that includes a physical display screen. A display parameter of the one or more virtual display screens is adjusted based on the application scenario information.
Type: Application
Filed: December 21, 2018
Publication date: June 27, 2019
Inventors: Xin JIANG, Juan David HINCAPIE-RAMOS
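The scenario-driven adjustment this abstract describes can be sketched as a lookup from detected application scenario to display parameters for the virtual screen. The scenario names and parameter values below are invented for illustration; the patent does not enumerate them:

```python
# Assumed example mapping from application scenario to display parameters.
SCENARIO_PARAMS = {
    "reading": {"brightness": 0.6, "scale": 1.2},
    "video":   {"brightness": 0.9, "scale": 1.0},
    "ambient": {"brightness": 0.3, "scale": 0.8},
}

DEFAULT_PARAMS = {"brightness": 0.5, "scale": 1.0}

def adjust_virtual_screen(scenario, params=SCENARIO_PARAMS):
    """Pick display parameters for a virtual screen based on the
    detected application scenario; unknown scenarios keep defaults."""
    return params.get(scenario, DEFAULT_PARAMS)
```

In the patented arrangement the scenario would be inferred from what the coupled second device's physical screen is doing, and the chosen parameters applied through the first device's optical lens module.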