Patents by Inventor Bo Robert Xiao
Bo Robert Xiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220011113
Abstract: Inertial measurement units with gyroscopic sensors are standard in mobile computers. The present invention shows that these sensors can be co-opted for vibroacoustic data reception, illustrating a new capability for an old sensor. Using the commodity gyroscope found in most smartphones together with a low-cost transducer, the present invention can transmit error-corrected data at 2,028 bits per second with the expectation that 95% of packets will be successfully received.
Type: Application
Filed: June 23, 2021
Publication date: January 13, 2022
Applicant: CARNEGIE MELLON UNIVERSITY
Inventors: Christopher Harrison, Bo Robert Xiao
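The core idea above is demodulating bits from gyroscope readings. A minimal sketch of how on/off-keyed reception might look; the sample rate, bit window, and threshold are invented for illustration and are not figures from the filing (only the 2,028 bit/s rate is):

```python
# Illustrative parameters -- not from the patent; only the 2028 bit/s
# rate is reported in the abstract.
SAMPLE_RATE = 8112        # gyroscope samples per second (assumed)
BIT_RATE = 2028           # reported raw data rate, bits per second
SAMPLES_PER_BIT = SAMPLE_RATE // BIT_RATE  # 4 samples per bit window

def demodulate(samples, threshold=0.5):
    """Recover bits by on/off keying: a bit is 1 when the mean
    vibration energy in its window exceeds the threshold."""
    bits = []
    for i in range(0, len(samples) - SAMPLES_PER_BIT + 1, SAMPLES_PER_BIT):
        window = samples[i:i + SAMPLES_PER_BIT]
        energy = sum(s * s for s in window) / SAMPLES_PER_BIT
        bits.append(1 if energy > threshold else 0)
    return bits

# A toy signal: strong oscillation encodes 1, near-silence encodes 0.
signal = [1.0, -1.0, 1.0, -1.0] + [0.01, -0.01, 0.01, -0.01] + [1.0, -1.0, 1.0, -1.0]
print(demodulate(signal))  # [1, 0, 1]
```

A real receiver would also need the error correction the abstract mentions; this sketch shows only the raw bit recovery step.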
-
Patent number: 10942550
Abstract: The present invention provides secure, instant, and anonymous connections between two devices. The invention pairs a “cap” device with a capacitive touchscreen to a “cam” device with a camera sensor. For example, typical smartphones and tablets can be paired with each other, and these devices can be paired to even larger touchscreens, such as smart whiteboards and touchscreen monitors. The invention uses the cap device's touchscreen to detect and track the cam device, and displays color-modulated pairing data directly underneath the camera once the camera is touching the screen. The pairing data is used as configuration data for a bidirectional link, such as an ad-hoc WiFi or Bluetooth link. These links are established without requiring user configuration. As such, the present invention provides a unidirectional communication mechanism from the touchscreen to the camera, which is used to bootstrap a full bidirectional, high-speed link.
Type: Grant
Filed: April 20, 2017
Date of Patent: March 9, 2021
Assignee: CARNEGIE MELLON UNIVERSITY
Inventors: Bo Robert Xiao, Christopher Harrison, Scott E. Hudson
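The "color-modulated pairing data" step could be sketched as mapping bits to screen colors shown under the camera. The 2-bit symbol alphabet below is purely a hypothetical mapping; the patent abstract does not publish a symbol table:

```python
# Hypothetical 2-bit symbol alphabet -- an assumption for illustration,
# not the modulation scheme from the patent.
COLORS = {(0, 0): (255, 0, 0), (0, 1): (0, 255, 0),
          (1, 0): (0, 0, 255), (1, 1): (255, 255, 255)}

def encode(data: bytes):
    """Turn bytes into a sequence of screen colors, two bits per frame,
    most-significant bits first."""
    frames = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            pair = ((byte >> (shift + 1)) & 1, (byte >> shift) & 1)
            frames.append(COLORS[pair])
    return frames

frames = encode(b"\xE4")  # 0b11100100 -> symbols 11, 10, 01, 00
print(frames)  # [(255, 255, 255), (0, 0, 255), (0, 255, 0), (255, 0, 0)]
```

The cam device would sample these colors with its touching camera and invert the mapping to recover the link configuration data.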
-
Patent number: 10942603
Abstract: Techniques that can improve efficiency of a touch sensitive device are presented. A touch controller (TC) can comprise a hover classification engine and an application processor (AP) can comprise a touch classification engine usable to classify touch or hover interactions of an object(s) with a touch sensitive surface (TSS) of the device. In response to classifying a touch or hover interaction with TSS as unintentional, AP can reject such interaction and can transition from an active state to an inactive state. TC can continue to monitor touch or hover interactions with TSS. In response to determining there is an intentional touch interaction with TSS or no unintentional face/ear interaction with TSS, TC can transmit a notification signal to AP. In response to the notification signal, AP can transition from the inactive state to active state, and can process the intentional touch interaction or monitor the TSS.
Type: Grant
Filed: May 6, 2019
Date of Patent: March 9, 2021
Assignee: QEEXO, CO.
Inventors: Joshua Dale Stone, Yanfei Chen, Shyama Purnima Dorbala, Bo Robert Xiao
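The AP's active/inactive flow described above is essentially a small state machine. A toy sketch of that flow; class and state names are illustrative, not taken from the patent claims:

```python
# Toy model of the described flow: the application processor (AP)
# sleeps after an unintentional interaction and the touch controller's
# notification wakes it on an intentional one. Names are illustrative.
class ApplicationProcessor:
    def __init__(self):
        self.state = "active"

    def on_classification(self, kind):
        if self.state == "active" and kind == "unintentional":
            self.state = "inactive"     # reject the interaction, sleep
        elif self.state == "inactive" and kind == "intentional":
            self.state = "active"       # TC notification wakes the AP

ap = ApplicationProcessor()
for event in ["unintentional", "unintentional", "intentional"]:
    ap.on_classification(event)
print(ap.state)  # active
```

Keeping the AP asleep while the TC filters unintentional face/ear contact is what yields the efficiency gain the abstract claims.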
-
Publication number: 20200357182
Abstract: A system and method using light sources as spatial anchors is provided. Augmented reality (AR) requires precise and instant overlay of digital information onto everyday objects. Embodiments disclosed herein provide a new method for displaying spatially-anchored data, also referred to as LightAnchors. LightAnchors takes advantage of pervasive point lights—such as light emitting diodes (LEDs) and light bulbs—for both in-view anchoring and data transmission. These lights are blinked at high speed to encode data. An example embodiment includes an application that runs on a mobile operating system without any hardware or software modifications, which has been demonstrated to perform well under various use cases.
Type: Application
Filed: May 6, 2020
Publication date: November 12, 2020
Inventors: Karan Ahuja, Sujeath Pareddy, Bo Robert Xiao, Christopher Harrison, Mayank Goel
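Blinking a light to broadcast data implies some framing so the phone can lock onto the anchor in its camera feed. A minimal sketch under assumed framing; the preamble and payload width are inventions for illustration, since the abstract does not fix a packet format:

```python
# Assumed framing -- the preamble and 8-bit payload are illustrative,
# not the format from the patent.
PREAMBLE = [1, 0, 1, 0, 1, 0]  # lets the phone lock onto the anchor

def to_blink_pattern(payload: int, width: int = 8):
    """Preamble followed by the payload bits, MSB first."""
    bits = [(payload >> i) & 1 for i in range(width - 1, -1, -1)]
    return PREAMBLE + bits

def find_payload(samples, width=8):
    """Scan a brightness-thresholded bit stream for the preamble and
    return the payload that follows it, or None if absent."""
    n = len(PREAMBLE)
    for i in range(len(samples) - n - width + 1):
        if samples[i:i + n] == PREAMBLE:
            value = 0
            for b in samples[i + n:i + n + width]:
                value = (value << 1) | b
            return value
    return None

pattern = to_blink_pattern(42)
print(find_payload([0, 0] + pattern))  # 42
```

In the real system the bit stream would come from sampling each candidate point light's brightness across camera frames, with the blink rate tied to the camera's frame rate.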
-
Publication number: 20200356210
Abstract: Techniques that can improve efficiency of a touch sensitive device are presented. A touch controller (TC) can comprise a hover classification engine and an application processor (AP) can comprise a touch classification engine usable to classify touch or hover interactions of an object(s) with a touch sensitive surface (TSS) of the device. In response to classifying a touch or hover interaction with TSS as unintentional, AP can reject such interaction and can transition from an active state to an inactive state. TC can continue to monitor touch or hover interactions with TSS. In response to determining there is an intentional touch interaction with TSS or no unintentional face/ear interaction with TSS, TC can transmit a notification signal to AP. In response to the notification signal, AP can transition from the inactive state to active state, and can process the intentional touch interaction or monitor the TSS.
Type: Application
Filed: May 6, 2019
Publication date: November 12, 2020
Inventors: Joshua Dale Stone, Yanfei Chen, Shyama Purnima Dorbala, Bo Robert Xiao
-
Patent number: 10657385
Abstract: The disclosure describes a sensor system that provides end users with intelligent sensing capabilities, and embodies both crowd sourcing and machine learning together. Further, a sporadic crowd assessment is used to ensure continued sensor accuracy when the system is relying on machine learning analysis. This sensor approach requires minimal and non-permanent sensor installation by utilizing any device with a camera as a sensor host, and provides human-centered and actionable sensor output.
Type: Grant
Filed: March 25, 2016
Date of Patent: May 19, 2020
Assignees: CARNEGIE MELLON UNIVERSITY, a Pennsylvania Non-Profit Corporation, UNIVERSITY OF ROCHESTER
Inventors: Gierad Laput, Christopher Harrison, Jeffrey P. Bigham, Walter S. Lasecki, Bo Robert Xiao, Jason Wiese
-
Publication number: 20200033163
Abstract: A sensing system includes a sensor assembly and a back end server system. The sensor assembly includes a collection of sensors in communication with a control circuit. The sensors are each configured to sense one or more physical phenomena in an environment of the sensor assembly. The control circuit of the sensor assembly is configured to identify one or more selected sensors of the collection of sensors whose data corresponds to an event occurring in the environment of the sensor assembly and transmit data to the back end server system. The back end server system is configured to generate a first order virtual sensor by training a machine learning model to detect the event based on the data from at least one of the selected sensors and detect the event using the trained first order virtual sensor and data from the selected sensors.
Type: Application
Filed: October 3, 2019
Publication date: January 30, 2020
Inventors: Yuvraj AGARWAL, Christopher HARRISON, Gierad LAPUT, Sudershan BOOVARAGHAVAN, Chen CHEN, Abhijit HOTA, Bo Robert XIAO, Yang ZHANG
-
Patent number: 10436615
Abstract: A sensing system includes a sensor assembly that is communicably connected to a computer system, such as a server or a cloud computing system. The sensor assembly includes a plurality of sensors that sense a variety of different physical phenomena. The sensor assembly featurizes the raw sensor data and transmits the featurized data to the computer system. Through machine learning, the computer system then trains a classifier to serve as a virtual sensor for an event that is correlated to the data from one or more sensor streams within the featurized sensor data. The virtual sensor can then subscribe to the relevant sensor feeds from the sensor assembly and monitor for subsequent occurrences of the event. Higher order virtual sensors can receive the outputs from lower order virtual sensors to infer nonbinary details about the environment in which the sensor assemblies are located.
Type: Grant
Filed: April 24, 2018
Date of Patent: October 8, 2019
Assignee: Carnegie Mellon University
Inventors: Yuvraj Agarwal, Christopher Harrison, Gierad Laput, Sudershan Boovaraghavan, Chen Chen, Abhijit Hota, Bo Robert Xiao, Yang Zhang
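The featurize-then-classify pipeline above can be sketched in miniature. The summary features and the nearest-centroid model here are stand-ins for whatever featurization and machine learning the actual system uses:

```python
# A minimal "virtual sensor" sketch: featurize raw sample windows, then
# train a classifier to recognize an event. Features and the
# nearest-centroid model are illustrative stand-ins.
def featurize(window):
    """Collapse a raw sample window into simple summary features."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, var, max(window) - min(window))

def train(labeled_windows):
    """Average the feature vectors per label (nearest-centroid model)."""
    sums, counts = {}, {}
    for window, label in labeled_windows:
        f = featurize(window)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in s) for lbl, s in sums.items()}

def classify(model, window):
    """Pick the label whose centroid is closest in feature space."""
    f = featurize(window)
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(f, model[lbl])))

# Toy data: low-variance "quiet" windows vs. high-variance "running" ones.
data = [([0.1, 0.0, 0.1, 0.0], "quiet"), ([5.0, -5.0, 4.0, -4.0], "running")]
model = train(data)
print(classify(model, [4.5, -4.5, 5.0, -5.0]))  # running
```

A higher-order virtual sensor, as the abstract describes, would take the outputs of several such classifiers as its own inputs.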
-
Publication number: 20190302963
Abstract: Touch tracking systems and methods are described, which employ depth image information and infrared image information to robustly and accurately detect finger touches on surfaces within the touch tracking system's field of view, with accuracy exceeding the noise level of the depth image sensor. The disclosed embodiments require no prior calibration to the surface, and are capable of adapting to changes in the sensing environment. Various described embodiments facilitate providing a reliable, low-cost touch tracking system for surfaces without requiring modification or instrumentation of the surface itself.
Type: Application
Filed: May 31, 2017
Publication date: October 3, 2019
Inventors: Christopher Harrison, Bo Robert Xiao, Scott E. Hudson
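Adapting to the surface without prior calibration suggests maintaining a running depth baseline and flagging pixels that sit just above it. A rough sketch under assumed thresholds and update rate (none of these numbers come from the filing):

```python
# Background-model touch detection sketch. The thresholds, update rate,
# and millimeter units are assumptions for illustration.
def update_background(background, depth, alpha=0.05):
    """Exponential moving average lets the model adapt to scene changes."""
    return [(1 - alpha) * b + alpha * d for b, d in zip(background, depth)]

def touch_mask(background, depth, near=5.0, far=20.0):
    """A pixel is a candidate touch when it sits between `near` and
    `far` millimeters above the modeled surface."""
    return [near <= (b - d) <= far for b, d in zip(background, depth)]

bg = [1000.0, 1000.0, 1000.0]   # surface ~1 m from the sensor
frame = [1000.0, 990.0, 900.0]  # middle pixel: fingertip ~10 mm above it
print(touch_mask(bg, frame))  # [False, True, False]
```

The infrared channel the abstract mentions would then be used to confirm actual contact, since depth noise alone cannot distinguish a hovering fingertip from a touching one.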
-
Patent number: 10290152
Abstract: Methods, computing devices and head-mounted display devices for displaying user interface elements with virtual objects are disclosed. In one example, a virtual object and one or more user interface elements are displayed within a physical environment. User input is received that moves one or more of the virtual object and the one or more user interface elements. One or more of the virtual object and the one or more user interface elements are determined to be within a predetermined distance of a physical surface. Based at least on this determination, the one or more user interface elements are displayed on the surface.
Type: Grant
Filed: April 3, 2017
Date of Patent: May 14, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Julia Schwarz, Bo Robert Xiao, Hrvoje Benko, Andrew Wilson
-
Publication number: 20190138063
Abstract: The present invention provides secure, instant, and anonymous connections between two devices. The invention pairs a “cap” device with a capacitive touchscreen to a “cam” device with a camera sensor. For example, typical smartphones and tablets can be paired with each other, and these devices can be paired to even larger touchscreens, such as smart whiteboards and touchscreen monitors. The invention uses the cap device's touchscreen to detect and track the cam device, and displays color-modulated pairing data directly underneath the camera once the camera is touching the screen. The pairing data is used as configuration data for a bidirectional link, such as an ad-hoc WiFi or Bluetooth link. These links are established without requiring user configuration. As such, the present invention provides a unidirectional communication mechanism from the touchscreen to the camera, which is used to bootstrap a full bidirectional, high-speed link.
Type: Application
Filed: April 20, 2017
Publication date: May 9, 2019
Inventors: Bo Robert Xiao, Christopher Harrison, Scott E. Hudson
-
Patent number: 10175487
Abstract: Various technologies described herein pertain to a head mounted display device having a display with a central portion and a periphery portion. Graphical content can be displayed on the central portion of the display. The central portion can be a primary display that provides a field of view and displays the graphical content, and the periphery portion can be a peripheral display. The peripheral display can be positioned relative to the primary display such that an overall field of view provided by the primary display and the peripheral display is extended compared to the field of view of the primary display. Further, complementary content can be rendered based on the graphical content and caused to be displayed on the periphery portion (e.g., the peripheral display). The complementary content can include a countervection visualization viewable in a far periphery region of a field of view of human vision.
Type: Grant
Filed: March 29, 2016
Date of Patent: January 8, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Hrvoje Benko, Bo Robert Xiao
-
Patent number: 10126123
Abstract: According to embodiments of the present invention are a system and method that use projected structured patterns of light and linear optical sensors for motion tracking. Sensors are capable of recovering two-dimensional location within the projection area, while several sensors can be combined for up to six degrees of freedom tracking. The structured patterns are based on m-sequences, in which any consecutive subsequence of m bits is unique. Both digital and static light sources can be used. The system and method of the present invention enable high-speed, high-precision, and low-cost motion tracking for a wide range of applications.
Type: Grant
Filed: September 21, 2015
Date of Patent: November 13, 2018
Assignees: CARNEGIE MELLON UNIVERSITY, DISNEY ENTERPRISES, INC.
Inventors: Christopher Harrison, Bo Robert Xiao, Scott E. Hudson, Ivan Poupyrev, Karl D. D. Willis
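An m-sequence's defining property, that every window of m consecutive bits is unique, is what lets a sensor recover its absolute position from a short local observation of the pattern. A sketch generating one with a linear-feedback shift register; the register length, taps, and seed are standard textbook choices, not values from the patent:

```python
# Generate an m-sequence with a 4-bit Fibonacci LFSR. The feedback taps
# and seed are standard choices for a maximal-length sequence, not
# values from the patent.
M = 4  # register length; this tap choice is specific to M = 4

def m_sequence(seed=0b0001):
    """One full period (2**M - 1 = 15 bits) of an m-sequence."""
    state, bits = seed, []
    for _ in range(2 ** M - 1):
        bits.append(state & 1)
        fb = (state ^ (state >> 1)) & 1        # maximal-length feedback
        state = (state >> 1) | (fb << (M - 1))
    return bits

seq = m_sequence()
# The defining property: every cyclic window of M bits occurs once.
windows = {tuple((seq + seq[:M - 1])[i:i + M]) for i in range(len(seq))}
print(len(seq), len(windows))  # 15 15
```

A sensor that reads any 4 consecutive bits of the projected pattern can therefore look up exactly where it sits along the sequence, which is the basis of the position recovery described above.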
-
Publication number: 20180306609
Abstract: A sensing system includes a sensor assembly that is communicably connected to a computer system, such as a server or a cloud computing system. The sensor assembly includes a plurality of sensors that sense a variety of different physical phenomena. The sensor assembly featurizes the raw sensor data and transmits the featurized data to the computer system. Through machine learning, the computer system then trains a classifier to serve as a virtual sensor for an event that is correlated to the data from one or more sensor streams within the featurized sensor data. The virtual sensor can then subscribe to the relevant sensor feeds from the sensor assembly and monitor for subsequent occurrences of the event. Higher order virtual sensors can receive the outputs from lower order virtual sensors to infer nonbinary details about the environment in which the sensor assemblies are located.
Type: Application
Filed: April 24, 2018
Publication date: October 25, 2018
Inventors: Yuvraj Agarwal, Christopher Harrison, Gierad Laput, Sudershan Boovaraghavan, Chen Chen, Abhijit Hota, Bo Robert Xiao, Yang Zhang
-
Publication number: 20180286126
Abstract: Methods, computing devices and head-mounted display devices for displaying user interface elements with virtual objects are disclosed. In one example, a virtual object and one or more user interface elements are displayed within a physical environment. User input is received that moves one or more of the virtual object and the one or more user interface elements. One or more of the virtual object and the one or more user interface elements are determined to be within a predetermined distance of a physical surface. Based at least on this determination, the one or more user interface elements are displayed on the surface.
Type: Application
Filed: April 3, 2017
Publication date: October 4, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Bo Robert Xiao, Hrvoje Benko, Andrew Wilson
-
Publication number: 20180173300
Abstract: Disclosed are an apparatus and a method of detecting a user interaction with a virtual object. In some embodiments, a depth sensing device of an NED device receives a plurality of depth values. The depth values correspond to depths of points in a real-world environment relative to the depth sensing device. The NED device overlays an image of a 3D virtual object on a view of the real-world environment, and identifies an interaction limit in proximity to the 3D virtual object. Based on depth values of points that are within the interaction limit, the NED device detects a body part or a user device of a user interacting with the 3D virtual object.
Type: Application
Filed: December 19, 2016
Publication date: June 21, 2018
Inventors: Julia Schwarz, Hrvoje Benko, Andrew D. Wilson, Robert Charles Johnstone Pengelly, Bo Robert Xiao
-
Publication number: 20180107879
Abstract: The disclosure describes a sensor system that provides end users with intelligent sensing capabilities, and embodies both crowd sourcing and machine learning together. Further, a sporadic crowd assessment is used to ensure continued sensor accuracy when the system is relying on machine learning analysis. This sensor approach requires minimal and non-permanent sensor installation by utilizing any device with a camera as a sensor host, and provides human-centered and actionable sensor output.
Type: Application
Filed: March 25, 2016
Publication date: April 19, 2018
Applicant: CARNEGIE MELLON UNIVERSITY, a Pennsylvania Non-Profit Corporation
Inventors: Gierad Laput, Christopher Harrison, Jeffrey P. Bigham, Walter S. Lasecki, Bo Robert Xiao, Jason Wiese
-
Publication number: 20170285344
Abstract: Various technologies described herein pertain to a head mounted display device having a display with a central portion and a periphery portion. Graphical content can be displayed on the central portion of the display. The central portion can be a primary display that provides a field of view and displays the graphical content, and the periphery portion can be a peripheral display. The peripheral display can be positioned relative to the primary display such that an overall field of view provided by the primary display and the peripheral display is extended compared to the field of view of the primary display. Further, complementary content can be rendered based on the graphical content and caused to be displayed on the periphery portion (e.g., the peripheral display). The complementary content can include a countervection visualization viewable in a far periphery region of a field of view of human vision.
Type: Application
Filed: March 29, 2016
Publication date: October 5, 2017
Inventors: Hrvoje Benko, Bo Robert Xiao
-
Publication number: 20160084960
Abstract: According to embodiments of the present invention are a system and method that use projected structured patterns of light and linear optical sensors for motion tracking. Sensors are capable of recovering two-dimensional location within the projection area, while several sensors can be combined for up to six degrees of freedom tracking. The structured patterns are based on m-sequences, in which any consecutive subsequence of m bits is unique. Both digital and static light sources can be used. The system and method of the present invention enable high-speed, high-precision, and low-cost motion tracking for a wide range of applications.
Type: Application
Filed: September 21, 2015
Publication date: March 24, 2016
Applicants: CARNEGIE MELLON UNIVERSITY, a Pennsylvania Non-Profit Corporation, The Walt Disney Company
Inventors: Christopher Harrison, Bo Robert Xiao, Scott E. Hudson, Ivan Poupyrev, Karl D.D. Willis