Patents by Inventor Ke HUO
Ke HUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11666224
Abstract: A lesion detection system for use with a patient, comprising an optoacoustic guide wire assembly configured to be insertable into a patient's tissue. The optoacoustic guide wire assembly can be comprised of an optical waveguide having a first end and a second end; a light source coupled to the second end of the optical waveguide, wherein the light source is configured to emit energy to the patient's tissue; at least one transducer configured to detect an ultrasound signal emitted from the patient's tissue in response to energy emitted from the light source; and a computer system.
Type: Grant
Filed: November 7, 2016
Date of Patent: June 6, 2023
Assignee: Purdue Research Foundation
Inventors: Ji-xin Cheng, Pu Wang, Lu Lan, Yan Xia, Ke Huo
-
Patent number: 11670082
Abstract: Systems and methods for providing a virtual space for multiple devices can include a first device having at least one sensor configured to acquire spatial information of a physical space of the first device. The first device may include at least one processor configured to establish, according to the acquired spatial information, a virtual space corresponding to the physical space that is accessible by a user of the first device via the first device. The at least one processor may further be configured to register a second device within the physical space, to allow a user of the second device to access the virtual space via the second device.
Type: Grant
Filed: December 6, 2021
Date of Patent: June 6, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Chengyuan Yan, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan, Ke Huo
-
Publication number: 20230019960
Abstract: A method and system for localizing a plurality of stationary devices, such as Internet of Things (IoT) devices, arranged in an environment is disclosed. A mobile device is configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile device and the stationary devices are equipped with wireless transceivers, such as ultra-wideband radios, for measuring distances between the devices using wireless ranging techniques. Based on the measured distances, the mobile device is configured to determine locations of the stationary devices in a reference frame of the three-dimensional map. In some embodiments, the determined locations can be used to enable a variety of spatially aware augmented reality features and interactions between the mobile device and the stationary devices.
Type: Application
Filed: September 20, 2022
Publication date: January 19, 2023
Inventors: Ke Huo, Karthik Ramani
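The localization step described in this abstract can be illustrated with a linearized least-squares trilateration: the SLAM-tracked mobile device records its own map-frame position at each range measurement, then solves for the stationary device's position. This is a hedged reconstruction of the general technique, not the patent's actual implementation; the function names and the solver choice are assumptions for the example.

```python
# Sketch: estimate a stationary device's 3D position from several mobile-device
# poses (known from SLAM) and the UWB-measured distances at those poses.
# Linearizing |x - p_i|^2 = r_i^2 by subtracting the first equation yields a
# linear system, solved here via normal equations and Cramer's rule.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def locate_stationary(poses, ranges):
    """Estimate the stationary device's position from >= 4 non-coplanar
    mobile poses and the corresponding measured distances."""
    p1, r1 = poses[0], ranges[0]
    n1 = sum(v * v for v in p1)
    rows, rhs = [], []
    for p, r in zip(poses[1:], ranges[1:]):
        rows.append([2 * (p[k] - p1[k]) for k in range(3)])
        rhs.append(r1 * r1 - r * r + sum(v * v for v in p) - n1)
    # Normal equations (A^T A) x = A^T b, then Cramer's rule for the 3x3 solve.
    ata = [[sum(row[i] * row[j] for row in rows) for j in range(3)] for i in range(3)]
    atb = [sum(row[i] * b for row, b in zip(rows, rhs)) for i in range(3)]
    d = det3(ata)
    x = []
    for col in range(3):
        m = [[atb[i] if j == col else ata[i][j] for j in range(3)] for i in range(3)]
        x.append(det3(m) / d)
    return x

# Example: recover a device at (1, 2, 0.5) from exact ranges.
import math
true_pos = (1.0, 2.0, 0.5)
poses = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 3), (4, 4, 1)]
ranges = [math.dist(p, true_pos) for p in poses]
print(locate_stationary(poses, ranges))  # ~[1.0, 2.0, 0.5]
```

With noisy real-world ranges, more measurement poses and an iterative refinement (e.g. Gauss-Newton) would typically replace this one-shot linear solve.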
-
Patent number: 11521356
Abstract: Systems and methods for maintaining a shared interactive environment include receiving, by a server, requests to register a first input device of a first user and a second input device of a second user with a shared interactive environment. The first input device may be for a first modality involving user input for an augmented reality (AR) environment, and the second input device may be for a second modality involving user input for a personal computer (PC) based virtual environment or a virtual reality (VR) environment. The server may register the first and second input devices with the shared interactive environment. The server may receive inputs from a first adapter for the first modality and from a second adapter for the second modality. The inputs may be for the first and second users to use the shared interactive environment.
Type: Grant
Filed: October 10, 2019
Date of Patent: December 6, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Chengyuan Yan, Ke Huo, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan
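The per-modality adapter architecture this abstract describes can be sketched as adapters that translate device-specific raw input into one shared event format the server consumes. All class, field, and event names below are invented for illustration; the patent does not specify this API.

```python
# Sketch: modality adapters normalize AR and PC input into shared events.

class InputAdapter:
    """Base adapter: turns raw device input into a shared-environment event."""
    modality = "generic"

    def to_event(self, raw):
        raise NotImplementedError

class ArAdapter(InputAdapter):
    modality = "ar"

    def to_event(self, raw):
        # e.g. a hand-pose hit from an AR headset becomes a 'select' at a 3D point
        return {"modality": self.modality, "action": "select", "target": raw["hit_point"]}

class PcAdapter(InputAdapter):
    modality = "pc"

    def to_event(self, raw):
        # a mouse click already projected by the PC client into shared coordinates
        return {"modality": self.modality, "action": "select", "target": raw["world_pos"]}

class SharedEnvironmentServer:
    """Registers one adapter per user and collects normalized events."""

    def __init__(self):
        self.registered = {}  # user id -> adapter
        self.events = []

    def register(self, user, adapter):
        self.registered[user] = adapter

    def receive(self, user, raw):
        self.events.append(self.registered[user].to_event(raw))

server = SharedEnvironmentServer()
server.register("alice", ArAdapter())
server.register("bob", PcAdapter())
server.receive("alice", {"hit_point": (0.2, 1.1, 0.5)})
server.receive("bob", {"world_pos": (0.2, 1.0, 0.4)})
print(len(server.events))  # 2
```

The design point is that the server only ever sees the normalized event shape, so adding a new modality (say, VR controllers) means adding one adapter, not changing the server.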
-
Patent number: 11475651
Abstract: A virtual reality system comprising an electronic 2D interface having a depth sensor, the depth sensor allowing a user to provide input to the system to instruct the system to create a virtual 3D object in a real-world environment. The virtual 3D object is created with reference to at least one external physical object in the real-world environment, with the external physical object concurrently displayed with the virtual 3D object by the interface. The virtual 3D object is based on physical artifacts of the external physical object.
Type: Grant
Filed: April 2, 2020
Date of Patent: October 18, 2022
Assignee: Purdue Research Foundation
Inventors: Ke Huo, Vinayak Raman Krishnamurthy, Karthik Ramani
-
Patent number: 11450102
Abstract: A method and system for localizing a plurality of stationary devices, such as Internet of Things (IoT) devices, arranged in an environment is disclosed. A mobile device is configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile device and the stationary devices are equipped with wireless transceivers, such as ultra-wideband radios, for measuring distances between the devices using wireless ranging techniques. Based on the measured distances, the mobile device is configured to determine locations of the stationary devices in a reference frame of the three-dimensional map. In some embodiments, the determined locations can be used to enable a variety of spatially aware augmented reality features and interactions between the mobile device and the stationary devices.
Type: Grant
Filed: February 27, 2019
Date of Patent: September 20, 2022
Assignee: Purdue Research Foundation
Inventors: Ke Huo, Karthik Ramani
-
Patent number: 11321929
Abstract: A method and system for enabling a self-localizing mobile device to localize other self-localizing mobile devices having different reference frames is disclosed. Multiple self-localizing mobile devices are configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile devices are equipped with wireless transceivers, such as ultra-wideband radios, for measuring distances between the mobile devices using wireless ranging techniques. Based on the measured distances and self-localized positions in the environment corresponding to each measured distance, at least one of the mobile devices is configured to determine relative rotational and translational transformations between the different reference frames of the mobile devices.
Type: Grant
Filed: February 27, 2019
Date of Patent: May 3, 2022
Assignee: Purdue Research Foundation
Inventors: Ke Huo, Karthik Ramani
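The final step this abstract describes, recovering the rotation and translation between two devices' map frames, can be illustrated in simplified form. The patent derives correspondences from distance measurements; the sketch below only shows the closed-form 2D least-squares alignment, assuming corresponding positions in both frames are already available. The function name and 2D restriction are assumptions for the example.

```python
import math

def relative_transform_2d(pts_a, pts_b):
    """Least-squares rigid transform (rotation theta, translation t) such that
    each frame-B point is R(theta) applied to its frame-A point plus t."""
    n = len(pts_a)
    # Centroids of both point sets.
    cax = sum(p[0] for p in pts_a) / n; cay = sum(p[1] for p in pts_a) / n
    cbx = sum(p[0] for p in pts_b) / n; cby = sum(p[1] for p in pts_b) / n
    # Accumulate dot and cross terms of the centered correspondences;
    # the optimal angle is atan2(cross_sum, dot_sum).
    sxx = sxy = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax -= cax; ay -= cay; bx -= cbx; by -= cby
        sxx += ax * bx + ay * by
        sxy += ax * by - ay * bx
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated frame-A centroid onto the frame-B centroid.
    tx = cbx - (c * cax - s * cay)
    ty = cby - (s * cax + c * cay)
    return theta, (tx, ty)
```

With noise-free correspondences this recovers the transform exactly; the full 3D version of the same idea is the Kabsch/Umeyama alignment.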
-
Patent number: 11195020
Abstract: Systems and methods for providing a virtual space for multiple devices can include a first device having at least one sensor configured to acquire spatial information of a physical space of the first device. The first device may include at least one processor configured to establish, according to the acquired spatial information, a virtual space corresponding to the physical space that is accessible by a user of the first device via the first device. The at least one processor may further be configured to register a second device within the physical space, to allow a user of the second device to access the virtual space via the second device.
Type: Grant
Filed: October 29, 2019
Date of Patent: December 7, 2021
Assignee: Facebook Technologies, LLC
Inventors: Chengyuan Yan, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan, Ke Huo
-
Publication number: 20210365681
Abstract: A method and system for localizing a plurality of stationary devices, such as Internet of Things (IoT) devices, arranged in an environment is disclosed. A mobile device is configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile device and the stationary devices are equipped with wireless transceivers, such as ultra-wideband radios, for measuring distances between the devices using wireless ranging techniques. Based on the measured distances, the mobile device is configured to determine locations of the stationary devices in a reference frame of the three-dimensional map. In some embodiments, the determined locations can be used to enable a variety of spatially aware augmented reality features and interactions between the mobile device and the stationary devices.
Type: Application
Filed: February 27, 2019
Publication date: November 25, 2021
Inventors: Ke Huo, Karthik Ramani
-
Publication number: 20210252699
Abstract: A system and method for authoring and performing Human-Robot-Collaborative (HRC) tasks is disclosed. The system and method adopt an embodied authoring approach in Augmented Reality (AR) for spatially editing the actions and programming the robots through demonstrative role-playing. The system and method utilize an intuitive workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. The system and method utilize a dynamic time warping (DTW) based collaboration model which takes the real-time captured motion as input, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration.
Type: Application
Filed: September 16, 2020
Publication date: August 19, 2021
Inventors: Karthik Ramani, Ke Huo, Yuanzhi Cao, Tianyi Wang
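Dynamic time warping, the core of the collaboration model named in this abstract, aligns a captured motion sequence to an authored one even when they differ in speed. The sketch below is a standard textbook DTW over 1-D values standing in for motion features; the real system would use multidimensional pose features and likely an online variant, so treat this as illustrative only.

```python
def dtw(seq_a, seq_b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping: returns (total cost, alignment path) between two
    sequences, where the path lists matched index pairs (i, j)."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # cost[i][j] = best cumulative cost aligning seq_a[:i] with seq_b[:j].
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # Backtrack the optimal warping path from the end cell.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        moves = [(cost[i - 1][j - 1], i - 1, j - 1),
                 (cost[i - 1][j], i - 1, j),
                 (cost[i][j - 1], i, j - 1)]
        _, i, j = min(moves)
    path.append((0, 0))
    return cost[n][m], path[::-1]

# A repeated sample in the second sequence aligns at zero extra cost.
print(dtw([1, 2, 3], [1, 2, 2, 3])[0])  # 0.0
```

In the collaboration setting, the warping path is what lets the system look up which authored human action the live motion currently corresponds to, and hence which robot action to emit.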
-
Publication number: 20210256765
Abstract: A method and system for enabling a self-localizing mobile device to localize other self-localizing mobile devices having different reference frames is disclosed. Multiple self-localizing mobile devices are configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile devices are equipped with wireless transceivers, such as ultra-wideband radios, for measuring distances between the mobile devices using wireless ranging techniques. Based on the measured distances and self-localized positions in the environment corresponding to each measured distance, at least one of the mobile devices is configured to determine relative rotational and translational transformations between the different reference frames of the mobile devices.
Type: Application
Filed: February 27, 2019
Publication date: August 19, 2021
Inventors: Ke Huo, Karthik Ramani
-
Publication number: 20210110609
Abstract: Systems and methods for maintaining a shared interactive environment include receiving, by a server, requests to register a first input device of a first user and a second input device of a second user with a shared interactive environment. The first input device may be for a first modality involving user input for an augmented reality (AR) environment, and the second input device may be for a second modality involving user input for a personal computer (PC) based virtual environment or a virtual reality (VR) environment. The server may register the first and second input devices with the shared interactive environment. The server may receive inputs from a first adapter for the first modality and from a second adapter for the second modality. The inputs may be for the first and second users to use the shared interactive environment.
Type: Application
Filed: October 10, 2019
Publication date: April 15, 2021
Inventors: Chengyuan Yan, Ke Huo, Amrutha Hakkare Arunachala, Chengyuan Lin, Anush Mohan
-
Publication number: 20210090349
Abstract: A virtual reality system comprising an electronic 2D interface having a depth sensor, the depth sensor allowing a user to provide input to the system to instruct the system to create a virtual 3D object in a real-world environment. The virtual 3D object is created with reference to at least one external physical object in the real-world environment, with the external physical object concurrently displayed with the virtual 3D object by the interface. The virtual 3D object is based on physical artifacts of the external physical object.
Type: Application
Filed: April 2, 2020
Publication date: March 25, 2021
Applicant: Purdue Research Foundation
Inventors: Ke Huo, Vinayak Raman Krishnamurthy, Karthik Ramani
-
Patent number: 10884505
Abstract: The disclosed computer-implemented method may include tracking, using a low-order degree-of-freedom (DOF) mode, an orientation of a device based on input from an inertial measurement unit (IMU) of the device. The method may also include determining, using a magnetometer, that the device has entered a magnetic tracking volume defined by at least one magnet, and, in response to determining that the device has entered the magnetic tracking volume, transitioning from the low-order DOF mode to a high-order DOF mode that tracks a higher number of DOFs than the low-order DOF mode. The method may also include tracking, using the high-order DOF mode, the position and orientation of the device based on input from both the IMU and the magnetometer. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: November 7, 2018
Date of Patent: January 5, 2021
Assignee: Facebook Technologies, LLC
Inventors: Ke Huo, Chengyuan Yan
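The mode transition this abstract describes can be sketched as a small state machine: stay in an orientation-only low-DOF mode until the magnetometer indicates the device has entered the magnetic tracking volume, then switch to the high-DOF mode that also tracks position. The magnitude-threshold detection, the threshold value, and the class API below are assumptions for illustration; the patent does not specify how volume entry is detected.

```python
# Sketch: low-order vs high-order DOF tracking modes keyed off the magnetometer.

class DofTracker:
    LOW, HIGH = "3dof", "6dof"

    def __init__(self, field_threshold_ut=80.0):
        # Assumption for the example: inside the tracking volume the magnet
        # dominates the ~25-65 uT ambient geomagnetic field, so the field
        # magnitude exceeding a threshold signals volume entry.
        self.threshold = field_threshold_ut
        self.mode = self.LOW

    def update(self, mag_sample_ut):
        """Feed one magnetometer sample (x, y, z) in microtesla; returns the mode."""
        magnitude = sum(c * c for c in mag_sample_ut) ** 0.5
        if magnitude >= self.threshold:
            self.mode = self.HIGH  # IMU + magnetometer: position and orientation
        else:
            self.mode = self.LOW   # IMU only: orientation
        return self.mode

tracker = DofTracker()
print(tracker.update((10.0, 20.0, 30.0)))   # 3dof (ambient-strength field)
print(tracker.update((100.0, 30.0, 10.0)))  # 6dof (strong field: inside volume)
```

A real implementation would add hysteresis or debouncing around the threshold so the tracker does not flap between modes near the volume boundary.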
-
Patent number: 10643469
Abstract: The present invention provides a traffic intersection driving assistance method and system, wherein the method comprises: acquiring driving information about vehicles within a current intersection area by way of vehicle interconnection; obtaining vehicle flows in various driving directions according to the driving information; determining whether a difference between vehicle flows in various driving directions exceeds a preset threshold and, if so, changing a driving direction of a variable lane; and sending information about the distribution of driving directions of current lanes to the vehicles within the current intersection area. The object of the present invention is to avoid traffic congestion by adjusting a variable lane in a timely manner.
Type: Grant
Filed: May 20, 2016
Date of Patent: May 5, 2020
Assignees: ZHEJIANG GEELY AUTOMOBILE RESEARCH INSTITUTE CO., LTD, ZHEJIANG GEELY HOLDING GROUP CO., LTD.
Inventors: Xuefeng Li, Ke Huo, Bo Li, Dayong Zhou, Weiguo Liu, Chengming Wu, Qingfeng Feng
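The decision step in this abstract, flipping the variable lane when the flow imbalance exceeds a preset threshold, can be sketched as a single comparison. The data shapes, direction names, and threshold value below are illustrative assumptions, not the patent's specification.

```python
# Sketch: pick the direction a variable lane should serve, flipping it only
# when the flow imbalance exceeds the preset threshold (which avoids
# flipping the lane for small, transient differences).

def adjust_variable_lane(flows, current_direction, threshold):
    """flows: dict mapping driving direction -> observed vehicles per interval.
    Returns the direction the variable lane should serve."""
    busiest = max(flows, key=flows.get)
    if busiest == current_direction:
        return current_direction
    if flows[busiest] - flows[current_direction] > threshold:
        return busiest           # imbalance large enough: change the lane
    return current_direction     # below threshold: keep the current direction

flows = {"northbound": 42, "southbound": 15}
print(adjust_variable_lane(flows, "southbound", threshold=20))  # northbound
print(adjust_variable_lane(flows, "southbound", threshold=50))  # southbound
```

In the described system, the resulting lane-direction distribution would then be broadcast back to the connected vehicles in the intersection area.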
-
Patent number: 10643397
Abstract: A virtual reality system comprising an electronic 2D interface having a depth sensor, the depth sensor allowing a user to provide input to the system to instruct the system to create a virtual 3D object in a real-world environment. The virtual 3D object is created with reference to at least one external physical object in the real-world environment, with the external physical object concurrently displayed with the virtual 3D object by the interface. The virtual 3D object is based on physical artifacts of the external physical object.
Type: Grant
Filed: March 19, 2018
Date of Patent: May 5, 2020
Assignee: Purdue Research Foundation
Inventors: Ke Huo, Vinayak Raman Krishnamurthy, Karthik Ramani
-
Publication number: 20190035160
Abstract: A virtual reality system comprising an electronic 2D interface having a depth sensor, the depth sensor allowing a user to provide input to the system to instruct the system to create a virtual 3D object in a real-world environment. The virtual 3D object is created with reference to at least one external physical object in the real-world environment, with the external physical object concurrently displayed with the virtual 3D object by the interface. The virtual 3D object is based on physical artifacts of the external physical object.
Type: Application
Filed: March 19, 2018
Publication date: January 31, 2019
Inventors: Ke Huo, Fnu Vinayak, Karthik Ramani
-
Publication number: 20180310831
Abstract: A lesion detection system for use with a patient, comprising an optoacoustic guide wire assembly configured to be insertable into a patient's tissue. The optoacoustic guide wire assembly can be comprised of an optical waveguide having a first end and a second end; a light source coupled to the second end of the optical waveguide, wherein the light source is configured to emit energy to the patient's tissue; at least one transducer configured to detect an ultrasound signal emitted from the patient's tissue in response to energy emitted from the light source; and a computer system.
Type: Application
Filed: November 7, 2016
Publication date: November 1, 2018
Inventors: Ji-xin Cheng, Pu Wang, Lu Lan, Yan Xia, Ke Huo
-
Publication number: 20180158331
Abstract: The present invention provides a traffic intersection driving assistance method and system, wherein the method comprises: acquiring driving information about vehicles within a current intersection area by way of vehicle interconnection; obtaining vehicle flows in various driving directions according to the driving information; determining whether a difference between vehicle flows in various driving directions exceeds a preset threshold and, if so, changing a driving direction of a variable lane; and sending information about the distribution of driving directions of current lanes to the vehicles within the current intersection area. The object of the present invention is to avoid traffic congestion by adjusting a variable lane in a timely manner.
Type: Application
Filed: May 20, 2016
Publication date: June 7, 2018
Inventors: Xuefeng LI, Ke HUO, Bo LI, Dayong ZHOU, Weiguo LIU, Chengming WU, Qingfeng FENG