Patents by Inventor GIERAD LAPUT

GIERAD LAPUT has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11353965
    Abstract: Disclosed herein are a method and system that enable users to simply tap their smartphone or other electronic device against an object to discover and rapidly utilize contextual functionality. As described herein, the system and method provide for recognition of physical contact with uninstrumented objects and summon object-specific interfaces.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: June 7, 2022
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventors: Christopher Harrison, Robert Xiao, Gierad Laput
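    A minimal, hypothetical sketch of the dispatch step described in patent 11353965: a pre-trained classifier maps a tap's sensor signature to an object label, which then summons an object-specific interface. The classifier, feature shapes, and interface registry below are illustrative assumptions, not details from the patent.
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical registry mapping recognized objects to object-specific interfaces.
    INTERFACES = {
        "thermostat": "open thermostat control panel",
        "printer": "open print queue",
        "door_lock": "open lock/unlock screen",
    }

    def train_tap_classifier(signatures, labels):
        """Train a classifier on featurized tap signatures (toy stand-ins here)."""
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(signatures, labels)
        return clf

    def on_tap(clf, tap_features):
        """Classify a tap's sensor signature and summon the matching interface."""
        obj = clf.predict(tap_features.reshape(1, -1))[0]
        return INTERFACES.get(obj, "no interface registered")

    # Toy demonstration with random vectors standing in for real tap signatures.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 16))
    y = np.repeat(["thermostat", "printer", "door_lock"], 10)
    clf = train_tap_classifier(X, y)
    print(on_tap(clf, X[0]))
    ```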
  • Patent number: 11292169
    Abstract: Embodiments disclosed herein describe a method of fabricating soft, flexible fibers using a 3D printer having an extrusion head. Embodiments of the method further include termination techniques that allow a series of fibers to be fabricated on the same object. Aspects of certain embodiments offer a range of design parameters for controlling the properties of single strands as well as of bundles of fibers. The method extends the capabilities of 3D printing without requiring any new hardware.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: April 5, 2022
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventors: Gierad Laput, Christopher Harrison, Xiang Chen
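    Patent 11292169 covers a fabrication technique rather than software, but the printing process ultimately reduces to extrusion toolpaths. The snippet below is only an illustrative sketch of how per-strand parameters might be turned into G-code for drawing and terminating soft fibers; the feed rates, extrusion amounts, and the fast lateral termination move are assumptions, not values from the patent.
    ```python
    def fiber_gcode(x, y, z_base, height, extrude_mm=0.8, feed_slow=120, travel_fast=3000):
        """Emit G-code for one vertical soft fiber: extrude slowly upward, then
        make a fast lateral move to terminate the strand. Parameter values are
        illustrative placeholders, not settings from the patent."""
        return "\n".join([
            f"G0 X{x:.2f} Y{y:.2f} Z{z_base:.2f} F{travel_fast}",         # move to fiber base
            f"G1 Z{z_base + height:.2f} E{extrude_mm:.3f} F{feed_slow}",  # draw the fiber upward
            f"G0 X{x + 20:.2f} Y{y:.2f} F{travel_fast}",                  # quick move to terminate
        ])

    # A small 3x3 bundle of fibers on a 2 mm grid.
    bundle = "\n".join(
        fiber_gcode(10 + 2 * i, 10 + 2 * j, z_base=0.3, height=8.0)
        for i in range(3) for j in range(3)
    )
    print(bundle)
    ```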
  • Publication number: 20220030345
    Abstract: Systems and processes for user identification using headphones associated with a first device are provided. For example, first movement information corresponding to movement of a second electronic device is detected. Second movement information corresponding to movement of a third electronic device is detected. A similarity score is determined based on the first movement information and the second movement information. In accordance with a determination that the similarity score is above a threshold similarity score, a user is identified as an authorized user of the first electronic device and the second electronic device. Based on the identification, an output is provided to the second electronic device.
    Type: Application
    Filed: August 13, 2020
    Publication date: January 27, 2022
    Inventors: Jun GONG, Gierad LAPUT
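    Publication 20220030345 hinges on comparing the motion of two devices and thresholding a similarity score. Below is a minimal sketch of that comparison, assuming both devices report synchronized accelerometer-magnitude streams; the correlation-based score and the threshold value are assumptions for illustration, not the claimed method.
    ```python
    import numpy as np

    def similarity_score(motion_a, motion_b):
        """Pearson correlation of two synchronized motion-magnitude streams,
        mapped to [0, 1]. The choice of metric is illustrative only."""
        a = (motion_a - motion_a.mean()) / (motion_a.std() + 1e-9)
        b = (motion_b - motion_b.mean()) / (motion_b.std() + 1e-9)
        return (float(np.mean(a * b)) + 1.0) / 2.0

    def is_authorized(motion_headphones, motion_device, threshold=0.85):
        """Identify the user as authorized if both devices moved alike."""
        return similarity_score(motion_headphones, motion_device) > threshold

    # Toy data: the second stream is a noisy copy of the headphone stream.
    rng = np.random.default_rng(1)
    head = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.normal(size=200)
    other = head + 0.05 * rng.normal(size=200)
    print(is_authorized(head, other))  # True for well-correlated motion
    ```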
  • Publication number: 20220005454
    Abstract: Embodiments are provided to recognize features and activities from an audio signal. In one embodiment, a model is generated from sound effect data, which is augmented and projected into an audio domain to form a training dataset efficiently. Sound effect data is data that has been artificially created or derived from enhanced sounds or sound processes, providing a more accurate baseline of sound data than traditional training data. The sound effect data is augmented to create multiple variants that broaden the sound effect data. The augmented sound effects are projected into various audio domains, such as indoor, outdoor, or urban, by mixing in background sounds consistent with those domains. The model is installed on any computing device, such as a laptop or smartphone. Features and activities from an audio signal are then recognized by the computing device based on the model, without the need for in-situ training.
    Type: Application
    Filed: July 15, 2021
    Publication date: January 6, 2022
    Inventors: Gierad Laput, Karan Ahuja, Mayank Goel, Christopher Harrison
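    The pipeline in publication 20220005454 expands clean sound effects and mixes in domain-consistent backgrounds to build a training set without in-situ recordings. Below is a rough sketch of that augmentation-and-projection step, assuming in-memory waveforms as NumPy arrays; the specific augmentations, SNR target, and variant count are illustrative assumptions.
    ```python
    import numpy as np

    def augment(effect, rng):
        """Create a variant of a sound effect: random gain and a small time shift."""
        gain = rng.uniform(0.5, 1.5)
        shift = int(rng.integers(0, max(1, len(effect) // 10)))
        return np.roll(effect * gain, shift)

    def project_to_domain(effect, background, snr_db):
        """Mix an (augmented) effect with a domain background (indoor, outdoor,
        urban, ...) at a target signal-to-noise ratio."""
        bg = background[: len(effect)]
        scale = np.sqrt(np.mean(effect ** 2) / (np.mean(bg ** 2) * 10 ** (snr_db / 10) + 1e-12))
        return effect + scale * bg

    def build_training_set(effects, backgrounds, variants=5, seed=0):
        """Expand labeled sound effects into many domain-projected training clips."""
        rng = np.random.default_rng(seed)
        clips, labels = [], []
        for label, effect in effects.items():
            for bg in backgrounds:
                for _ in range(variants):
                    clips.append(project_to_domain(augment(effect, rng), bg, snr_db=10))
                    labels.append(label)
        return clips, labels
    ```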
  • Patent number: 11069334
    Abstract: Embodiments are provided to recognize features and activities from an audio signal. In one embodiment, a model is generated from sound effect data, which is augmented and projected into an audio domain to form a training dataset efficiently. Sound effect data is data that has been artificially created or derived from enhanced sounds or sound processes, providing a more accurate baseline of sound data than traditional training data. The sound effect data is augmented to create multiple variants that broaden the sound effect data. The augmented sound effects are projected into various audio domains, such as indoor, outdoor, or urban, by mixing in background sounds consistent with those domains. The model is installed on any computing device, such as a laptop or smartphone. Features and activities from an audio signal are then recognized by the computing device based on the model, without the need for in-situ training.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: July 20, 2021
    Assignee: Carnegie Mellon University
    Inventors: Gierad Laput, Karan Ahuja, Mayank Goel, Christopher Harrison
  • Patent number: 10942596
    Abstract: A method for a touch sensing system includes generating, by a first pair of electrodes at a first location in a conductive material, an electric field in the conductive material; generating measurement data by measuring, by one or more second pairs of electrodes, the electric field in the conductive material at one or more second locations in the conductive material, with each of the one or more second locations differing from the first location; generating, based on the measurement data, an approximation of the electric field in the conductive material; and classifying, based on the approximation, one or more regions of the interface into a given state.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: March 9, 2021
    Assignee: Carnegie Mellon University
    Inventors: Christopher Harrison, Yang Zhang, Gierad Laput
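    Patent 10942596 classifies regions of a conductive surface from an approximation of the electric field measured at several electrode pairs. The sketch below only illustrates the final classification step, treating a frame of per-pair measurements as a crude stand-in for the field approximation; the electrode count, the simulated touch effect, and the nearest-neighbor classifier are assumptions for illustration.
    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    N_PAIRS = 8  # second electrode pairs measuring the injected field

    # Toy calibration frames: 'no touch' vs a touch near electrode pair 2-3,
    # which locally depresses the measured field.
    rng = np.random.default_rng(2)
    no_touch = rng.normal(1.0, 0.02, size=(40, N_PAIRS))
    touch_a = rng.normal(1.0, 0.02, size=(40, N_PAIRS))
    touch_a[:, 2:4] -= 0.3

    X = np.vstack([no_touch, touch_a])   # crude stand-in for field approximations
    y = ["idle"] * 40 + ["touch_region_A"] * 40
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

    new_frame = rng.normal(1.0, 0.02, size=N_PAIRS)
    new_frame[2:4] -= 0.3                # simulate a touch in region A
    print(clf.predict(new_frame.reshape(1, -1))[0])  # -> "touch_region_A"
    ```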
  • Publication number: 20210063434
    Abstract: Individual health related events (e.g., handwashing events) can be detected based on multiple sensors including motion and audio sensors. Detecting a qualifying handwashing event can include detecting a qualifying scrubbing event based on motion data (e.g., accelerometer data) and a qualifying rinsing event based on audio data. In some examples, power consumption can be reduced by implementing one or more power saving mitigations.
    Type: Application
    Filed: August 14, 2020
    Publication date: March 4, 2021
    Inventors: Gierad LAPUT, Jared LeVan ZERBE, William C. ATHAS, Andreas Edgar SCHOBEL, Shawn R. SCULLY, Brian H. TSANG, Kevin LYNCH, Charles MAALOUF, Shiwen ZHAO
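    Publication 20210063434 gates a qualifying handwashing event on both a motion-based scrubbing detection and an audio-based rinsing detection. Below is a minimal sketch of that two-signal gating logic with placeholder detectors; the thresholds, window lengths, and features are assumptions, not the claimed models.
    ```python
    import numpy as np

    def qualifying_scrub(accel_mag, fs=100, min_seconds=3.0, energy_thresh=0.5):
        """Placeholder scrubbing detector: sustained high-variance wrist motion."""
        window = int(fs * min_seconds)
        return len(accel_mag) >= window and float(np.var(accel_mag[-window:])) > energy_thresh

    def qualifying_rinse(audio_frame_energies, min_frames=20, energy_thresh=0.2):
        """Placeholder rinsing detector: enough high-energy audio frames, standing
        in for a water-sound classifier."""
        return sum(e > energy_thresh for e in audio_frame_energies) >= min_frames

    def handwashing_event(accel_mag, audio_frame_energies):
        """A qualifying handwashing event requires both a qualifying scrub (motion
        data) and a qualifying rinse (audio data), as in the abstract above."""
        return qualifying_scrub(accel_mag) and qualifying_rinse(audio_frame_energies)
    ```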
  • Publication number: 20200264769
    Abstract: Internet of Things (“IoT”) appliances are gaining consumer traction, from smart thermostats to smart speakers. These devices generally have limited user interfaces, most often small buttons and touchscreens, or rely on voice control. Further, these devices know little about their surroundings—unaware of objects, people and activities around them. Consequently, interactions with these “smart” devices can be cumbersome and limited. The present invention presents an approach that enriches IoT experiences with rich touch and object sensing, offering a complementary input channel and increased contextual awareness. The present invention incorporates a range sensing technology into the computing devices, providing an expansive ad hoc plane of sensing just above the surface with which a device is associated. Additionally, the present invention can recognize and track a wide array of objects, including finger touches and hand gestures.
    Type: Application
    Filed: February 20, 2020
    Publication date: August 20, 2020
    Applicant: CARNEGIE MELLON UNIVERSITY
    Inventors: Christopher Harrison, Gierad Laput
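    Publication 20200264769 projects an ad hoc plane of range sensing just above a surface so an IoT device can track touches and nearby objects. The sketch below assumes a hypothetical 1-D sweep of range readings across that plane and clusters close-range returns into blobs (touches or objects); it illustrates the idea only, not the patented sensing hardware.
    ```python
    import numpy as np

    def detect_blobs(ranges_mm, max_touch_range=400, min_width=2):
        """Group consecutive close-range readings into blobs.
        Each blob is (start_index, end_index, mean_range_mm)."""
        close = np.asarray(ranges_mm) < max_touch_range
        blobs, start = [], None
        for i, is_close in enumerate(close):
            if is_close and start is None:
                start = i
            elif not is_close and start is not None:
                if i - start >= min_width:
                    blobs.append((start, i - 1, float(np.mean(ranges_mm[start:i]))))
                start = None
        if start is not None and len(close) - start >= min_width:
            blobs.append((start, len(close) - 1, float(np.mean(ranges_mm[start:]))))
        return blobs

    # Toy sweep: background at ~2 m with a hand at ~25 cm around indices 10-14.
    sweep = [2000] * 10 + [250, 240, 245, 250, 255] + [2000] * 10
    print(detect_blobs(sweep))  # one blob covering indices 10-14
    ```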
  • Patent number: 10657385
    Abstract: The disclosure describes a sensor system that provides end users with intelligent sensing capabilities by combining crowdsourcing and machine learning. Further, sporadic crowd assessments are used to ensure continued sensor accuracy once the system relies on machine learning analysis. This approach requires only minimal, non-permanent sensor installation, utilizing any device with a camera as a sensor host, and provides human-centered, actionable sensor output.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: May 19, 2020
    Assignees: CARNEGIE MELLON UNIVERSITY, a Pennsylvania Non-Profit Corporation, UNIVERSITY OF ROCHESTER
    Inventors: Gierad Laput, Christopher Harrison, Jeffrey P. Bigham, Walter S. Lasecki, Bo Robert Xiao, Jason Wiese
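    Patent 10657385 combines crowd answers with a classifier that gradually takes over, plus sporadic crowd checks that keep the learned sensor accurate. The loop below is only a schematic of that hand-off, with the crowd interface and classifier stubbed out as callables; the confidence threshold and audit rate are assumptions.
    ```python
    import random

    def answer_question(image_features, classifier, labeled_pool, ask_crowd,
                        confidence_thresh=0.9, audit_rate=0.05):
        """Return one sensor answer for a camera frame.

        The classifier is used when it is confident; otherwise, or on a sporadic
        audit, the crowd is asked and its label is kept for retraining.
        """
        label, confidence = classifier(image_features)
        if confidence < confidence_thresh or random.random() < audit_rate:
            label = ask_crowd(image_features)             # human answer
            labeled_pool.append((image_features, label))  # grows the training set
        return label
    ```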
  • Publication number: 20200117889
    Abstract: Systems and techniques for facilitating hand activity sensing are presented. In one example, a system obtains, from a wrist-worn computational device, hand activity data associated with a sustained series of hand motor actions in performance of a human task. The system also employs a machine learning technique to determine classification data indicative of a classification for the human task.
    Type: Application
    Filed: October 9, 2019
    Publication date: April 16, 2020
    Inventors: Gierad Laput, Christopher Harrison
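    Publication 20200117889 classifies a sustained series of hand motor actions from wrist-worn sensor data. A compact sketch of one plausible pipeline is below, windowing an accelerometer-magnitude stream into simple statistical features and training an off-the-shelf classifier; the features, window length, and model are illustrative, not the claimed technique.
    ```python
    import numpy as np
    from sklearn.svm import SVC

    def window_features(accel_mag, fs=50, win_s=2.0):
        """Split a 1-D accelerometer-magnitude stream into windows of simple
        statistics: mean, standard deviation, and dominant FFT bin."""
        win = int(fs * win_s)
        feats = []
        for start in range(0, len(accel_mag) - win + 1, win):
            w = accel_mag[start:start + win]
            spectrum = np.abs(np.fft.rfft(w - np.mean(w)))
            feats.append([np.mean(w), np.std(w), float(np.argmax(spectrum))])
        return np.array(feats)

    # Toy streams standing in for two hand activities with different motion profiles.
    rng = np.random.default_rng(3)
    typing = 0.2 * np.sin(np.linspace(0, 40 * np.pi, 1000)) + 0.05 * rng.normal(size=1000)
    washing = 0.8 * np.sin(np.linspace(0, 8 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)

    X = np.vstack([window_features(typing), window_features(washing)])
    y = ["typing"] * 10 + ["washing_dishes"] * 10
    clf = SVC().fit(X, y)
    print(clf.predict(window_features(typing)[:1])[0])  # classify one window
    ```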
  • Publication number: 20200051544
    Abstract: Embodiments are provided to recognize features and activities from an audio signal. In one embodiment, a model is generated from sound effect data, which is augmented and projected into an audio domain to form a training dataset efficiently. Sound effect data is data that has been artificially created or derived from enhanced sounds or sound processes, providing a more accurate baseline of sound data than traditional training data. The sound effect data is augmented to create multiple variants that broaden the sound effect data. The augmented sound effects are projected into various audio domains, such as indoor, outdoor, or urban, by mixing in background sounds consistent with those domains. The model is installed on any computing device, such as a laptop or smartphone. Features and activities from an audio signal are then recognized by the computing device based on the model, without the need for in-situ training.
    Type: Application
    Filed: August 13, 2019
    Publication date: February 13, 2020
    Inventors: Gierad Laput, Karan Ahuja, Mayank Goel, Christopher Harrison
  • Publication number: 20200033163
    Abstract: A sensing system includes a sensor assembly and a back end server system. The sensor assembly includes a collection of sensors in communication with a control circuit. The sensors are each configured to sense one or more physical phenomena in an environment of the sensor assembly. The control circuit of the sensor assembly is configured to identify one or more selected sensors of the collection of sensors whose data corresponds to an event occurring in the environment of the sensor assembly and transmit data to the back end server system. The back end server system is configured to generate a first order virtual sensor by training a machine learning model to detect the event based on the data from at least one of the selected sensors and detect the event using the trained first order virtual sensor and data from the selected sensors.
    Type: Application
    Filed: October 3, 2019
    Publication date: January 30, 2020
    Inventors: Yuvraj AGARWAL, Christopher HARRISON, Gierad LAPUT, Sudershan BOOVARAGHAVAN, Chen CHEN, Abhijit HOTA, Bo Robert XIAO, Yang ZHANG
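    Publication 20200033163 (and the related grant 10436615 below) trains a "virtual sensor": a classifier that detects an event from featurized streams of the physical sensors whose data correspond to that event. The sketch below shows the idea only, with a hypothetical featurization (per-window mean and standard deviation) and a generic classifier standing in for the machine learning model.
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def featurize_streams(raw_streams):
        """Featurize each raw sensor stream (per-window mean and std), then
        concatenate into one feature vector per time window."""
        feats = [np.stack([s.mean(axis=1), s.std(axis=1)], axis=1) for s in raw_streams]
        return np.concatenate(feats, axis=1)

    def train_virtual_sensor(raw_streams, event_labels):
        """First-order virtual sensor: a classifier over the featurized streams
        that fires when the event (e.g., 'kettle boiling') occurs."""
        return LogisticRegression(max_iter=1000).fit(featurize_streams(raw_streams), event_labels)

    # Toy example: 2 selected sensors (vibration, sound), 60 windows of 50 samples.
    rng = np.random.default_rng(4)
    labels = np.array([0] * 30 + [1] * 30)                    # 1 = event occurring
    vibration = rng.normal(0, 1, size=(60, 50)) + labels[:, None] * 2.0
    sound = rng.normal(0, 1, size=(60, 50)) + labels[:, None] * 1.5
    sensor = train_virtual_sensor([vibration, sound], labels)
    print(sensor.predict(featurize_streams([vibration[:1], sound[:1]])))  # first window
    ```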
  • Patent number: 10436615
    Abstract: A sensing system includes a sensor assembly that is communicably connected to a computer system, such as a server or a cloud computing system. The sensor assembly includes a plurality of sensors that sense a variety of different physical phenomena. The sensor assembly featurizes the raw sensor data and transmits the featurized data to the computer system. Through machine learning, the computer system then trains a classifier to serve as a virtual sensor for an event that is correlated to the data from one or more sensor streams within the featurized sensor data. The virtual sensor can then subscribe to the relevant sensor feeds from the sensor assembly and monitor for subsequent occurrences of the event. Higher order virtual sensors can receive the outputs from lower order virtual sensors to infer nonbinary details about the environment in which the sensor assemblies are located.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: October 8, 2019
    Assignee: Carnegie Mellon University
    Inventors: Yuvraj Agarwal, Christopher Harrison, Gierad Laput, Sudershan Boovaraghavan, Chen Chen, Abhijit Hota, Bo Robert Xiao, Yang Zhang
  • Publication number: 20190227667
    Abstract: A method for a touch sensing system includes generating, by a first pair of electrodes at a first location in a conductive material, an electric field in the conductive material; generating measurement data by measuring, by one or more second pairs of electrodes, the electric field in the conductive material at one or more second locations in the conductive material, with each of the one or more second locations differing from the first location; generating, based on the measurement data, an approximation of the electric field in the conductive material; and classifying, based on the approximation, one or more regions of the interface into a given state.
    Type: Application
    Filed: October 3, 2017
    Publication date: July 25, 2019
    Inventors: Christopher Harrison, Yang Zhang, Gierad Laput
  • Publication number: 20190129508
    Abstract: Disclosed herein is a method of interacting with a wearable electronic device. The wearable electronic device, comprising a vibration sensor, captures vibrations transmitted through the body part on which the device is worn. These vibrations can emanate from an object in contact with the user's body or from the motions of the body itself. Once received by the wearable electronic device, the vibrations are analyzed and identified as corresponding to a specific object, data message, or movement.
    Type: Application
    Filed: June 23, 2017
    Publication date: May 2, 2019
    Applicant: CARNEGIE MELLON UNIVERSITY
    Inventors: Christopher Harrison, Robert Xiao, Gierad Laput
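    Publication 20190129508 identifies objects, data messages, or movements from vibrations that travel through the wearer's body to the device. The snippet below sketches one common recipe for this kind of bio-acoustic sensing: take the spectrum of a short accelerometer window and match it against stored spectral templates. The template-matching approach, sample rate, and example labels are assumptions for illustration.
    ```python
    import numpy as np

    def spectrum(accel_window):
        """Normalized magnitude spectrum of a short, mean-removed accelerometer window."""
        w = np.asarray(accel_window, dtype=float)
        s = np.abs(np.fft.rfft(w - w.mean()))
        return s / (np.linalg.norm(s) + 1e-9)

    def identify(accel_window, templates):
        """Return the stored label whose spectral template is closest (cosine similarity)."""
        s = spectrum(accel_window)
        return max(templates, key=lambda name: float(np.dot(s, templates[name])))

    # Toy templates: a 120 Hz 'electric toothbrush' vs a 50 Hz 'power drill',
    # sampled at an assumed 4 kHz.
    t = np.arange(1024) / 4000
    templates = {
        "toothbrush": spectrum(np.sin(2 * np.pi * 120 * t)),
        "drill": spectrum(np.sin(2 * np.pi * 50 * t)),
    }
    print(identify(np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(1024), templates))
    ```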
  • Publication number: 20190101992
    Abstract: Disclosed herein are a method and system that enable users to simply tap their smartphone or other electronic device against an object to discover and rapidly utilize contextual functionality. As described herein, the system and method provide for recognition of physical contact with uninstrumented objects and summon object-specific interfaces.
    Type: Application
    Filed: April 21, 2017
    Publication date: April 4, 2019
    Applicant: CARNEGIE MELLON UNIVERSITY
    Inventors: Christopher Harrison, Robert Xiao, Gierad Laput
  • Publication number: 20180306609
    Abstract: A sensing system includes a sensor assembly that is communicably connected to a computer system, such as a server or a cloud computing system. The sensor assembly includes a plurality of sensors that sense a variety of different physical phenomena. The sensor assembly featurizes the raw sensor data and transmits the featurized data to the computer system. Through machine learning, the computer system then trains a classifier to serve as a virtual sensor for an event that is correlated to the data from one or more sensor streams within the featurized sensor data. The virtual sensor can then subscribe to the relevant sensor feeds from the sensor assembly and monitor for subsequent occurrences of the event. Higher order virtual sensors can receive the outputs from lower order virtual sensors to infer nonbinary details about the environment in which the sensor assemblies are located.
    Type: Application
    Filed: April 24, 2018
    Publication date: October 25, 2018
    Inventors: Yuvraj Agarwal, Christopher Harrison, Gierad Laput, Sudershan Boovaraghavan, Chen Chen, Abhijit Hota, Bo Robert Xiao, Yang Zhang
  • Publication number: 20180281275
    Abstract: Embodiments disclosed herein describe a method of fabricating soft, flexible fibers using a 3D printer having an extrusion head. Embodiments of the method further include termination techniques that allow a series of fibers to be fabricated on the same object. Aspects of certain embodiments offer a range of design parameters for controlling the properties of single strands as well as of bundles of fibers. The method extends the capabilities of 3D printing without requiring any new hardware.
    Type: Application
    Filed: October 31, 2016
    Publication date: October 4, 2018
    Applicant: CARNEGIE MELLON UNIVERSITY
    Inventors: Gierad Laput, Christopher Harrison, Xiang Chen
  • Publication number: 20180107879
    Abstract: The disclosure describes a sensor system that provides end users with intelligent sensing capabilities by combining crowdsourcing and machine learning. Further, sporadic crowd assessments are used to ensure continued sensor accuracy once the system relies on machine learning analysis. This approach requires only minimal, non-permanent sensor installation, utilizing any device with a camera as a sensor host, and provides human-centered, actionable sensor output.
    Type: Application
    Filed: March 25, 2016
    Publication date: April 19, 2018
    Applicant: CARNEGIE MELLON UNIVERSITY, a Pennsylvania Non-Profit Corporation
    Inventors: Gierad Laput, Christopher Harrison, Jeffrey P. Bigham, Walter S. Lasecki, Bo Robert Xiao, Jason Wiese
  • Patent number: 9881273
    Abstract: Disclosed herein is an object recognition device that senses electrical signals conducted by the body of a human user (e.g., as a result of direct contact with, or close proximity to, a device emitting or conducting electromagnetic noise), compares the sensed electrical signals to a plurality of signatures of electrical signals produced by a corresponding plurality of types of electrical and electromechanical devices to determine the type of electrical or electromechanical device that generated the sensed signals, and communicates information to the human user related to or triggered by that device.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: January 30, 2018
    Assignees: DISNEY ENTERPRISES, INC., CARNEGIE MELLON UNIVERSITY
    Inventors: Chouchang Yang, Gierad Laput, Robert Xiao, Christopher Harrison, Alanson Sample
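    Patent 9881273 compares body-conducted electrical noise against a library of signatures for known device types. The sketch below shows one simple way such a comparison could look, using normalized noise spectra and nearest-signature matching; the signature format, distance metric, and library contents are illustrative assumptions, not the patented method.
    ```python
    import numpy as np

    def em_signature(samples):
        """Normalized magnitude spectrum of body-conducted EM noise samples."""
        s = np.abs(np.fft.rfft(np.asarray(samples, dtype=float) - np.mean(samples)))
        return s / (np.linalg.norm(s) + 1e-9)

    def recognize_device(samples, signature_library):
        """Return the device type whose stored signature best matches the sensed
        signal, along with the match score, so the caller can surface related
        information to the user."""
        sig = em_signature(samples)
        scores = {name: float(np.dot(sig, ref)) for name, ref in signature_library.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]
    ```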