Patents by Inventor Peter Bull

Peter Bull has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11955765
    Abstract: Techniques are provided for controlling an output laser pulse signal of a medical device. A control device defines a time duration of capacitive discharge from a capacitor bank to a laser device. The time duration corresponds to an intended energy of the output laser pulse signal. The control device generates a plurality of sub-pulse control signals. The sub-pulse control signals define a series of capacitive discharge events of the capacitor bank. The control device modulates one or more of a sub-pulse control signal period or a sub-pulse time duration of the sub-pulse control signals to modify the capacitive discharge of the capacitor bank to the laser device during the time duration.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: April 9, 2024
    Assignee: Boston Scientific Scimed, Inc.
    Inventors: Jian James Zhang, Baocheng Yang, Xirong Yang, Hyun Wook Kang, Brian Cheng, Peter Bull, Rongwei Jason Xuan, Thomas C. Hasenberg
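The abstract above describes delivering one output laser pulse as a train of capacitive-discharge sub-pulses whose period and width are modulated to hit an intended energy. The following is a minimal, illustrative sketch of that scheduling idea; the `SubPulse` type, the microsecond units, and the duty-cycle parameter are assumptions for the example, not details taken from the patent.

```python
# Hypothetical sketch of sub-pulse scheduling for a capacitive-discharge laser driver.
from dataclasses import dataclass

@dataclass
class SubPulse:
    start_us: float   # start time of the discharge event, microseconds
    width_us: float   # how long the capacitor bank discharges into the laser

def schedule_sub_pulses(total_duration_us: float,
                        period_us: float,
                        duty_cycle: float) -> list[SubPulse]:
    """Split one output pulse of `total_duration_us` into a train of
    capacitive-discharge events with the given period and duty cycle."""
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty_cycle must be in (0, 1]")
    pulses = []
    t = 0.0
    while t < total_duration_us:
        width = min(period_us * duty_cycle, total_duration_us - t)
        pulses.append(SubPulse(start_us=t, width_us=width))
        t += period_us
    return pulses

# Example: a 500 us output pulse delivered as 50 us sub-pulses at 60% duty.
train = schedule_sub_pulses(total_duration_us=500.0, period_us=50.0, duty_cycle=0.6)
print(len(train), train[0])
```

Shortening the sub-pulse widths or lengthening the period lowers the total discharged energy without changing the overall pulse duration, which is the lever the abstract describes.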
  • Publication number: 20240062547
    Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
    Type: Application
    Filed: October 31, 2023
    Publication date: February 22, 2024
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
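Publication 20240062547 (and the related grant 11810357 below) covers segmenting a recorded multi-user interaction into per-speaker utterances and storing label data against each segment. A minimal sketch of that data flow, with invented field names and labels, might look like:

```python
# Illustrative sketch of storing per-utterance labels for a recorded conversation.
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker_id: str
    start_s: float
    end_s: float
    labels: dict = field(default_factory=dict)  # conversation features for this segment

def segment_by_speaker(turns: list[tuple[str, float, float]]) -> list[Utterance]:
    """Turn (speaker, start, end) tuples from a diarized video into Utterance records."""
    return [Utterance(speaker_id=s, start_s=a, end_s=b) for s, a, b in turns]

def attach_labels(utterances: list[Utterance], label_data: dict[int, dict]) -> None:
    """Store label data (keyed by utterance index) in association with each segment."""
    for idx, labels in label_data.items():
        utterances[idx].labels.update(labels)

segments = segment_by_speaker([("coach", 0.0, 4.2), ("member", 4.2, 9.8)])
attach_labels(segments, {0: {"question": True}, 1: {"sentiment": "reflective"}})
print(segments)
```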
  • Publication number: 20240019657
    Abstract: A laser system includes a first laser cavity to output a laser light along a first path, a first mirror to receive the laser light from the first laser cavity, and redirect the laser light along a second path that is different than the first path, a second mirror to receive the laser light from the first mirror, and redirect the laser light along a third path that is different than the first path and the second path, a beam splitter located at a first position on the third path, a beam combiner located at a second position on the third path; and a coupling lens assembly, the coupling lens assembly including a lens located at a third position on the third path, wherein the coupling lens assembly moves the lens in x-, y-, and z-directions.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 18, 2024
    Applicant: Boston Scientific Scimed, Inc.
    Inventors: Xirong YANG, Baocheng YANG, Brian CHENG, Peter BULL, Viju PANICKER, Yang-Te FAN, Rongwei Jason XUAN, Thomas Charles HASENBERG, Jian James ZHANG
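The coupling lens assembly in this abstract positions a lens on the beam path in three axes. A toy model of that three-axis adjustment is sketched below; the axis names, millimeter units, and step sizes are illustrative assumptions only.

```python
# Toy model of a three-axis coupling lens adjustment.
from dataclasses import dataclass

@dataclass
class LensPosition:
    x_mm: float = 0.0
    y_mm: float = 0.0
    z_mm: float = 0.0

class CouplingLensAssembly:
    """Moves a lens on the beam path in x-, y-, and z-directions."""
    def __init__(self) -> None:
        self.position = LensPosition()

    def move(self, dx_mm: float = 0.0, dy_mm: float = 0.0, dz_mm: float = 0.0) -> LensPosition:
        self.position = LensPosition(
            x_mm=self.position.x_mm + dx_mm,
            y_mm=self.position.y_mm + dy_mm,
            z_mm=self.position.z_mm + dz_mm,
        )
        return self.position

assembly = CouplingLensAssembly()
print(assembly.move(dx_mm=0.05, dz_mm=-0.10))  # nudge the lens to re-center coupling
```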
  • Patent number: 11810357
    Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: November 7, 2023
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Patent number: 11809011
    Abstract: A laser system includes a first laser cavity to output a laser light along a first path, a first mirror to receive the laser light from the first laser cavity, and redirect the laser light along a second path that is different than the first path, a second mirror to receive the laser light from the first mirror, and redirect the laser light along a third path that is different than the first path and the second path, a beam splitter located at a first position on the third path, a beam combiner located at a second position on the third path; and a coupling lens assembly, the coupling lens assembly including a lens located at a third position on the third path, wherein the coupling lens assembly moves the lens in x-, y-, and z-directions.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: November 7, 2023
    Assignee: Boston Scientific Scimed, Inc.
    Inventors: Xirong Yang, Baocheng Yang, Brian Cheng, Peter Bull, Viju Panicker, Yang-Te Fan, Rongwei Jason Xuan, Thomas Charles Hasenberg, Jian James Zhang
  • Publication number: 20230083298
    Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
    Type: Application
    Filed: November 1, 2022
    Publication date: March 16, 2023
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
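Publication 20230083298 (and grant 11521620 below) describes extracting per-modality features from each utterance and feeding them, optionally with previously synthesized features, to a model that produces additional synthesized conversation features. The sketch below illustrates that shape of pipeline; the summary statistics and the placeholder linear "model" are stand-ins, not the trained model the patent contemplates.

```python
# Minimal sketch: per-modality utterance features -> synthesized conversation features.
import numpy as np

def extract_features(acoustic: np.ndarray, video: np.ndarray, text: np.ndarray) -> np.ndarray:
    """Concatenate simple summary statistics from each modality for one utterance."""
    return np.concatenate([
        [acoustic.mean(), acoustic.std()],   # e.g. loudness statistics
        [video.mean(), video.std()],         # e.g. facial-movement statistics
        [text.mean(), text.std()],           # e.g. embedding statistics
    ])

def synthesize(features: np.ndarray, prior_synth: np.ndarray | None = None) -> np.ndarray:
    """Stand-in for a learned model mapping extracted (and previously synthesized)
    features to additional synthesized conversation features."""
    inputs = features if prior_synth is None else np.concatenate([features, prior_synth])
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, inputs.size))  # placeholder weights, not a trained model
    return weights @ inputs

utt_features = extract_features(np.random.rand(100), np.random.rand(30), np.random.rand(16))
print(synthesize(utt_features))
```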
  • Patent number: 11521620
    Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: December 6, 2022
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20220378505
    Abstract: A medical laser system for outputting laser pulses includes at least one laser cavity configured to generate at least one laser pulse, a rotating mirror configured to receive and reflect the at least one laser pulse, a beam splitter configured to receive and reflect a portion of the at least one laser pulse received from the rotating mirror, an energy-sensing device configured to detect the portion of the at least one laser pulse, an energy measurement assembly configured to generate a feedback signal based on the portion of the at least one laser pulse detected by the energy-sensing device, and a controller configured to generate an electronic control pulse based on the feedback signal received from the energy measurement assembly to generate at least one adjusted laser pulse.
    Type: Application
    Filed: May 20, 2022
    Publication date: December 1, 2022
    Applicant: Boston Scientific Scimed, Inc.
    Inventors: Baocheng YANG, Xirong YANG, Peter BULL, Brian CHENG, Rongwei Jason XUAN, Jian James ZHANG, Thomas Charles HASENBERG, David PIH
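This abstract describes a closed loop: a beam splitter picks off a portion of each pulse, an energy sensor measures it, and the controller adjusts the next electronic control pulse from that feedback. A hedged sketch of such a loop follows; the gain, split ratio, and target energy are invented values, and a real controller would be calibrated and bounded rather than purely proportional.

```python
# Illustrative proportional feedback on control-pulse width from a sampled pulse energy.

def adjust_control_pulse(target_mj: float,
                         measured_sample_mj: float,
                         split_ratio: float,
                         current_width_us: float,
                         gain: float = 0.5) -> float:
    """Return an adjusted control-pulse width based on the energy-sensor feedback."""
    estimated_output_mj = measured_sample_mj / split_ratio  # scale the sampled portion up
    error_mj = target_mj - estimated_output_mj
    # Proportional correction toward the target energy.
    return max(0.0, current_width_us * (1.0 + gain * error_mj / target_mj))

width = 200.0
for sample in (0.018, 0.019, 0.020):  # sampled energies (mJ) from the beam-splitter pickoff
    width = adjust_control_pulse(target_mj=2.0, measured_sample_mj=sample,
                                 split_ratio=0.01, current_width_us=width)
    print(f"next control pulse width: {width:.1f} us")
```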
  • Publication number: 20220370129
    Abstract: A medical laser system for outputting laser pulses includes at least one laser cavity, a rotating mirror, a user interface, and a controller. The controller is configured to receive at least one laser parameter associated with a laser pulse output by the system. The controller is configured to determine an average power level of the laser pulse based on the at least one laser parameter associated with the laser pulse. The controller is configured to determine a pulse width modulation (PWM) control signal based on at least one laser parameter. The controller is configured to generate the laser pulse based on the average power level and the PWM control signal, the laser pulse comprising at least one of a first shape, a second shape, or a third shape. Each of the first shape, the second shape, and the third shape of the laser pulse includes different pulse widths.
    Type: Application
    Filed: May 20, 2022
    Publication date: November 24, 2022
    Applicant: Boston Scientific Scimed, Inc.
    Inventors: Baocheng YANG, Xirong YANG, Brian CHENG, Peter BULL, Rongwei Jason XUAN, Jian James ZHANG, Thomas Charles HASENBERG, David PIH
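Here the controller derives a PWM control signal from the requested average power and uses it to produce pulses of different shapes, each with different pulse widths. The sketch below shows one way to read that: the average power fixes the mean duty cycle, and a shape profile varies the on-time across the pulse. The shape names and peak-power constant are assumptions for illustration.

```python
# Illustrative PWM shaping: average power sets mean duty, a profile sets the shape.

PEAK_POWER_W = 120.0  # assumed peak optical power when the drive is fully on

SHAPES = {
    "flat":  [1.0, 1.0, 1.0, 1.0],   # constant width
    "ramp":  [0.4, 0.7, 1.0, 1.3],   # increasing width across the pulse
    "spike": [1.6, 1.0, 0.7, 0.7],   # wide leading segment, narrower tail
}

def pwm_widths(average_power_w: float, period_us: float, shape: str) -> list[float]:
    """Per-segment PWM on-times (us) whose mean duty cycle matches the average power."""
    base_duty = average_power_w / PEAK_POWER_W
    profile = SHAPES[shape]
    scale = len(profile) / sum(profile)  # keep the mean duty equal to base_duty
    return [min(period_us, base_duty * w * scale * period_us) for w in profile]

print(pwm_widths(average_power_w=30.0, period_us=100.0, shape="ramp"))
```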
  • Publication number: 20220376459
    Abstract: A medical laser system for outputting laser pulses includes at least one laser cavity configured to generate at least one laser pulse, a rotating mirror configured to receive and reflect the at least one laser pulse, a beam splitter configured to receive and reflect a portion of the at least one laser pulse received from the rotating mirror, an energy-sensing device configured to detect the portion of the at least one laser pulse, an energy measurement assembly configured to generate a measurement signal based on the portion of the at least one laser pulse detected by the energy-sensing device, and a controller. The controller may include a calibration module. The calibration module may be configured to generate at least one categorized calibration table, determine calibration parameters, interpolate the calibration parameters, and cause the at least one laser cavity to generate at least one calibrated laser pulse.
    Type: Application
    Filed: May 20, 2022
    Publication date: November 24, 2022
    Applicant: Boston Scientific Scimed, Inc.
    Inventors: Baocheng YANG, David PIH, Xirong YANG, Peter BULL, Brian CHENG, Rongwei Jason XUAN, Jian James ZHANG, Thomas Charles HASENBERG
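The calibration module in this abstract builds categorized calibration tables, determines calibration parameters, and interpolates between them before generating calibrated pulses. A minimal sketch of the interpolation step follows; it shows a single category only, and the table values and "drive setting" parameter are made up for the example.

```python
# Linear interpolation of a calibration parameter from one category's table.
import bisect

# Calibration table for one operating category: requested pulse energy (mJ) -> drive setting.
CAL_TABLE = [(0.5, 110.0), (1.0, 190.0), (2.0, 350.0), (4.0, 660.0)]

def interpolate_drive(energy_mj: float) -> float:
    """Linearly interpolate the drive setting for a requested pulse energy."""
    energies = [e for e, _ in CAL_TABLE]
    if energy_mj <= energies[0]:
        return CAL_TABLE[0][1]
    if energy_mj >= energies[-1]:
        return CAL_TABLE[-1][1]
    i = bisect.bisect_left(energies, energy_mj)
    (e0, d0), (e1, d1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    frac = (energy_mj - e0) / (e1 - e0)
    return d0 + frac * (d1 - d0)

print(interpolate_drive(1.5))  # a request between two calibrated points -> 270.0
```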
  • Publication number: 20220343899
    Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, used to provide real-time alerts to signify how the conversation is going, etc.
    Type: Application
    Filed: July 11, 2022
    Publication date: October 27, 2022
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
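Publication 20220343899 (and grant 11417318 below) maps conversation-analysis indicators to actions and inferences such as scores, resource suggestions, or alerts. The sketch below shows one plausible shape for such a mapping; the indicator names, thresholds, and actions are invented for illustration.

```python
# Illustrative mapping from conversation-analysis indicators to actions.

INDICATOR_ACTIONS = [
    ("goal_progress",   lambda v: v < 0.3, "suggest resources on goal setting"),
    ("ownership_score", lambda v: v < 0.4, "flag low mentee ownership to the coach"),
    ("impact_score",    lambda v: v > 0.8, "surface conversation as a benchmark example"),
]

def actions_for(indicators: dict[str, float]) -> list[str]:
    """Apply the indicator-to-action mapping and return the triggered actions."""
    triggered = []
    for name, condition, action in INDICATOR_ACTIONS:
        value = indicators.get(name)
        if value is not None and condition(value):
            triggered.append(action)
    return triggered

print(actions_for({"goal_progress": 0.2, "ownership_score": 0.9, "impact_score": 0.85}))
```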
  • Publication number: 20220343911
    Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users, and each utterance representation is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective processing parts of a machine learning system to generate video-, text-, and acoustic-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
    Type: Application
    Filed: July 11, 2022
    Publication date: October 27, 2022
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
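Publication 20220343911 (and grant 11417330 below) feeds combined speaker features and the previous state into a sequential machine learning system to produce conversation-analysis indicators. The sketch below shows the shape of one such sequential step; the dimensions and the linear update are placeholder assumptions, not a trained model.

```python
# Minimal sequential step: combined speaker features + previous state -> new state -> indicators.
import numpy as np

STATE_DIM, FEATURE_DIM = 8, 6
rng = np.random.default_rng(1)
W_state = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))
W_input = rng.normal(scale=0.1, size=(STATE_DIM, FEATURE_DIM))
W_out = rng.normal(scale=0.1, size=(3, STATE_DIM))  # three conversation-analysis indicators

def step(state: np.ndarray, combined_speaker_features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """One sequential update: new state from previous state and combined features,
    then indicators read out from the new state."""
    new_state = np.tanh(W_state @ state + W_input @ combined_speaker_features)
    indicators = W_out @ new_state
    return new_state, indicators

state = np.zeros(STATE_DIM)
for _ in range(4):  # four utterance pairs in sequence
    features = rng.normal(size=FEATURE_DIM)  # stand-in for combined speaker features
    state, indicators = step(state, features)
print(indicators)
```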
  • Patent number: 11417330
    Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users, and each utterance representation is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective processing parts of a machine learning system to generate video-, text-, and acoustic-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: August 16, 2022
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Kellerman, Ryan Sonnek
  • Patent number: 11417318
    Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, used to provide real-time alerts to signify how the conversation is going, etc.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: August 16, 2022
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20220206239
    Abstract: A laser system includes a first laser cavity to output a laser light along a first path, a first mirror to receive the laser light from the first laser cavity, and redirect the laser light along a second path that is different than the first path, a second mirror to receive the laser light from the first mirror, and redirect the laser light along a third path that is different than the first path and the second path, a beam splitter located at a first position on the third path, a beam combiner located at a second position on the third path; and a coupling lens assembly, the coupling lens assembly including a lens located at a third position on the third path, wherein the coupling lens assembly moves the lens in x-, y-, and z-directions.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 30, 2022
    Applicant: Boston Scientific Scimed, Inc.
    Inventors: Xirong YANG, Baocheng YANG, Brian CHENG, Peter BULL, Viju PANICKER, Yang-Te FAN, Rongwei Jason XUAN, Thomas Charles HASENBERG, Jian James ZHANG
  • Publication number: 20210264900
    Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, used to provide real-time alerts to signify how the conversation is going, etc.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264162
    Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264909
    Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users, and each utterance representation is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective processing parts of a machine learning system to generate video-, text-, and acoustic-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264921
    Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Patent number: 11030076
    Abstract: A method of generating an output log for analysis of a computer program, the method comprising: receiving a recording of an execution of the program; receiving an additional print instruction to print a value of a data item and an indication of a point in the program at which the additional print instruction is to be evaluated; determining a corresponding point in the recording of the execution based upon the indication of the point in the program; and evaluating the additional print instruction based upon the recording of the execution and the determined corresponding point to determine an output of the additional print instruction for insertion into the output log.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: June 8, 2021
    Assignee: Undo Ltd.
    Inventors: Gregory Edward Warwick Law, Julian Philip Smith, Thomas Paul Perry, Nicholas Peter Bull, Geoffrey Finn Grimwood
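Patent 11030076 describes evaluating an "additional print instruction" against a recording of a program's execution, so the output log can be built without re-running the program. The sketch below illustrates that idea under a heavily simplified assumption that the recording is a list of per-line variable snapshots; a real execution recording is far richer than this.

```python
# Illustrative: evaluate an after-the-fact print instruction against a recorded execution.

# Recording: (source line number, snapshot of variable values after that line ran).
recording = [
    (10, {"total": 0}),
    (11, {"total": 0, "item": 3}),
    (12, {"total": 3, "item": 3}),
    (11, {"total": 3, "item": 7}),
    (12, {"total": 10, "item": 7}),
]

def evaluate_print(recording, at_line: int, expression: str) -> list[str]:
    """Evaluate `expression` at every recorded visit to `at_line` and return the
    log lines that would have been printed, without re-executing the program."""
    log = []
    for line, snapshot in recording:
        if line == at_line:
            value = eval(expression, {}, dict(snapshot))  # evaluate against the snapshot
            log.append(f"line {at_line}: {expression} = {value}")
    return log

# Insert a print of `total` at line 12 after the fact and build the output log.
print("\n".join(evaluate_print(recording, at_line=12, expression="total")))
```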