Patents by Inventor Adam SAMUELS

Adam SAMUELS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220091681
    Abstract: Techniques for improving the convenience of activating different computing applications on a mobile computing device are disclosed. Sensors associated with a mobile computing device (e.g., accelerometers, gyroscopes, light sensors, microphones, image capture sensors) may receive inputs of various physical conditions to which the mobile computing device is being subjected. Based on one or more of these inputs, the mobile computing device may automatically select a content presentation mode that is likely to improve the consumption of the content by the user. In other embodiments, image analysis may be used to access different mobile computing applications.
    Type: Application
    Filed: May 12, 2021
    Publication date: March 24, 2022
    Applicant: Oracle International Corporation
    Inventors: Jennifer Darmour, Adam Samuel Riddle, Loretta Marie Grande, Diego Pantoja-Navajas, Roberto Espinosa, Arunachalam Murugan
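
As a rough illustration of the mode-selection idea in publication 20220091681, the sketch below maps hypothetical sensor readings to presentation modes; the sensor fields, thresholds, and mode names are assumptions for illustration, not details from the filing.

    # Hypothetical sketch: choosing a content presentation mode from sensor readings.
    # Sensor names, thresholds, and mode labels are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class SensorReadings:
        ambient_lux: float        # light sensor
        motion_magnitude: float   # accelerometer/gyroscope activity
        noise_db: float           # microphone level

    def select_presentation_mode(readings: SensorReadings) -> str:
        """Pick a presentation mode likely to suit the current physical conditions."""
        if readings.motion_magnitude > 2.0:
            # Heavy motion (e.g., walking): favor audio playback over reading.
            return "audio"
        if readings.ambient_lux < 10.0:
            # Dark environment: use a dimmed, high-contrast reading mode.
            return "dark_reading"
        if readings.noise_db > 70.0:
            # Noisy environment: show captions instead of relying on audio.
            return "captioned_video"
        return "standard"

    print(select_presentation_mode(SensorReadings(ambient_lux=5.0, motion_magnitude=0.1, noise_db=40.0)))
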
  • Publication number: 20220019221
    Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
    Type: Application
    Filed: September 30, 2021
    Publication date: January 20, 2022
    Inventors: Andrea Allais, Micah Christopher Chambers, William Gongshu Xie, Adam Samuel Cadien, Elliot Branson
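
The lidar segmentation described in publication 20220019221 (and the related grants below) can be pictured roughly as the following sketch, in which a simple height threshold stands in for the unspecified deep neural network; the label set, range cutoff, and grouping rule are assumptions.

    # Hypothetical sketch: lidar points -> per-point labels -> object points near the vehicle.
    # The stand-in classifier and the range filter are illustrative, not the patented method.
    import numpy as np

    def stand_in_dnn_labels(points: np.ndarray) -> np.ndarray:
        """Stand-in for the DNN: label points above an assumed ground height as object (1), else ground (0)."""
        return (points[:, 2] > 0.3).astype(int)

    def segment_nearby_objects(points: np.ndarray, max_range: float = 30.0) -> np.ndarray:
        """Return the lidar points labeled as objects within max_range of the vehicle."""
        labels = stand_in_dnn_labels(points)
        dist = np.linalg.norm(points[:, :2], axis=1)
        return points[(labels == 1) & (dist < max_range)]

    cloud = np.array([[2.0, 1.0, 0.5], [5.0, -3.0, 0.1], [12.0, 4.0, 1.2]])
    print(segment_nearby_objects(cloud))
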
  • Publication number: 20210397274
    Abstract: Methods and systems are provided that are directed to automatically adjusting a user interface based on tilt position of a digital drawing board. The digital drawing board has a tiltable screen with a sensor. The tiltable screen may be fixed in a stable tilt position. A sensor is used to determine that the digital drawing board has a first tilt position. The digital drawing board displays a first user interface associated with the first tilt position. The first user interface may be associated with a first use mode. The first user interface may also be based on an application running on the digital drawing board. When the sensor senses that the digital drawing board has moved from the first tilt position to a second tilt position, it automatically displays a second user interface associated with a second tilt position. The second user interface has different functionality than the first user interface.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Paul HINCKLEY, Hugo Karl Denis ROMAT, Christopher Mervin COLLINS, Nathalie RICHE, Michel PAHUD, Adam Samuel RIDDLE, William Arthur Stewart BUXTON
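
A minimal sketch of the tilt-driven interface switching in publication 20210397274 follows; the angle thresholds, mode names, and simulated sensor readings are illustrative assumptions.

    # Hypothetical sketch: mapping a tilt-sensor reading to a user interface mode.
    # Angle thresholds and mode names are assumed; the filing does not specify them.
    def ui_for_tilt(tilt_degrees: float) -> str:
        """Choose a UI layout for the digital drawing board's current tilt position."""
        if tilt_degrees < 15.0:
            return "flat_drawing_ui"      # board lying nearly flat: full drawing canvas
        if tilt_degrees < 60.0:
            return "drafting_ui"          # angled like a drafting table: tools on the lower edge
        return "presentation_ui"          # near-vertical: viewing/presentation controls

    previous = None
    for angle in (5.0, 40.0, 80.0):       # simulated sensor readings over time
        mode = ui_for_tilt(angle)
        if mode != previous:              # only switch the UI when the tilt position changes
            print(f"tilt={angle} deg -> display {mode}")
            previous = mode
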
  • Patent number: 11204605
    Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: December 21, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Andrea Allais, Micah Christopher Chambers, William Gongshu Xie, Adam Samuel Cadien, Elliot Branson
  • Publication number: 20210358463
    Abstract: A switch lock apparatus for a guitar having a pickup selector switch is disclosed herein for locking a position of the pickup selector switch. The switch lock apparatus may include a body plate, a mounting hole, and a switch opening defined in the body plate which is configured to receive the pickup selector switch. The switch lock apparatus may also include at least one first side receptacle positioned along the first switch opening side and at least one second side receptacle positioned along the second switch opening side. The at least one second side receptacle may be offset from the at least one first side receptacle along the switch opening length to define a central passageway that enables free movement of the pickup selector switch. The switch lock apparatus may define a locked position when the pickup selector switch is received by one of the first or second side receptacles.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 18, 2021
    Inventors: Joshua John Misko, Adam Samuel Mendel
  • Publication number: 20210314704
    Abstract: A separate virtual (e.g. aural) location for one or more interaction or telephony call participants may provide an indication or clue for at least one of the call participants of who is speaking at any one time, reducing errors and misunderstandings during the call. Auditory localization may be used so that participants are heard from separate virtual locations. An audible user interface (AUI) may be produced such that audio presented to the listening user is location-specific, the location being relevant to the user, just as information presented in a graphical user interface (GUI) might be relevant. For example, a plurality of audio streams which are part of an interaction between communicating parties may be accepted, and based on the audio streams, a plurality of audio outputs may be provided, each located at a different location in three-dimensional space.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 7, 2021
    Applicant: INCONTACT, INC.
    Inventors: Adam Samuel HORROCKS, Matthew Lawrence PAGE, Nathan Edwin BODEN, Christopher Garn SEAMAN
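
The auditory localization in publication 20210314704 (and patent 11070916 below) might be approximated as in the sketch that follows; a simple stereo pan stands in for true spatialization (for example HRTF-based rendering), and the positions and sample rate are assumptions.

    # Hypothetical sketch: place each call participant at a distinct virtual location
    # by panning their audio stream across the stereo field.
    import numpy as np

    def tone(hz: float, seconds: float = 1.0, rate: int = 8000) -> np.ndarray:
        """Generate a mono test tone standing in for a participant's audio stream."""
        return np.sin(2 * np.pi * hz * np.arange(0, seconds, 1 / rate))

    def pan_stereo(mono: np.ndarray, azimuth: float) -> np.ndarray:
        """Pan a mono signal left/right; azimuth in [-1, 1], -1 = hard left, 1 = hard right."""
        left_gain = np.sqrt((1.0 - azimuth) / 2.0)
        right_gain = np.sqrt((1.0 + azimuth) / 2.0)
        return np.stack([mono * left_gain, mono * right_gain], axis=1)

    def mix_participants(streams: list) -> np.ndarray:
        """Give each participant a separate virtual location spread across the stereo field."""
        positions = np.linspace(-0.8, 0.8, num=len(streams))
        return sum(pan_stereo(s, a) for s, a in zip(streams, positions))

    mix = mix_participants([tone(220), tone(330), tone(440)])
    print(mix.shape)  # (8000, 2): stereo output with three spatially separated voices
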
  • Publication number: 20210223402
    Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
    Type: Application
    Filed: April 9, 2021
    Publication date: July 22, 2021
    Inventors: Andrea Allais, Adam Samuel Cadien, Elliot Branson, William Gongshu Xie, Micah Christopher Chambers
  • Patent number: 11070916
    Abstract: A separate virtual (e.g. aural) location for one or more interaction or telephony call participants may provide an indication or clue for at least one of the call participants of who is speaking at any one time, reducing errors and misunderstandings during the call. Auditory localization may be used so that participants are heard from separate virtual locations. An audible user interface (AUI) may be produced such that audio presented to the listening user is location-specific, the location being relevant to the user, just as information presented in a graphical user interface (GUI) might be relevant. For example, a plurality of audio streams which are part of an interaction between communicating parties may be accepted, and based on the audio streams, a plurality of audio outputs may be provided, each located at a different location in three-dimensional space.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: July 20, 2021
    Assignee: INCONTACT, INC.
    Inventors: Adam Samuel Horrocks, Matthew Lawrence Page, Nathan Edwin Boden, Christopher Garn Seaman
  • Publication number: 20210216476
    Abstract: Embodiments herein describe a memory controller that has an encryption path and a bypass path. Using an indicator (e.g., a dedicated address range), an outside entity can inform the memory controller whether to use the encryption path or the bypass path. For example, using the encryption path when performing a write request means the memory controller encrypts the data before it is stored, while using the bypass path means the data is written into memory without being encrypted. Similarly, using the encryption path when performing a read request means the controller decrypts the data before it is delivered to the requesting entity, while using the bypass path means the data is delivered without being decrypted.
    Type: Application
    Filed: January 15, 2020
    Publication date: July 15, 2021
    Inventors: Tony SAWAN, Adam Samuel HALE
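
The encryption/bypass routing in publications 20210216476 and 20210216645 can be sketched roughly as below; the address boundary and the XOR stand-in cipher are illustrative assumptions, not the hardware design.

    # Hypothetical sketch: a memory controller model that routes requests through an
    # encryption path or a bypass path based on a dedicated address range (the "indicator").
    BYPASS_BASE = 0x8000_0000   # assumed: addresses at or above this use the bypass path

    class MemoryControllerModel:
        def __init__(self, key: int = 0x5A):
            self.key = key
            self.mem = {}

        def _encrypt(self, byte: int) -> int:
            return byte ^ self.key                         # placeholder cipher, not a real engine

        def write(self, addr: int, byte: int) -> None:
            if addr >= BYPASS_BASE:
                self.mem[addr] = byte                      # bypass path: store plaintext
            else:
                self.mem[addr] = self._encrypt(byte)       # encryption path: store ciphertext

        def read(self, addr: int) -> int:
            data = self.mem[addr]
            return data if addr >= BYPASS_BASE else self._encrypt(data)  # XOR also decrypts

    mc = MemoryControllerModel()
    mc.write(0x1000, 0x42)            # encrypted at rest
    mc.write(0x8000_1000, 0x42)       # stored as-is
    print(hex(mc.read(0x1000)), hex(mc.read(0x8000_1000)))
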
  • Publication number: 20210216645
    Abstract: Embodiments herein describe a memory controller that has an encryption path and a bypass path. Using an indicator (e.g., a dedicated address range), an outside entity can inform the memory controller whether to use the encryption path or the bypass path. For example, using the encryption path when performing a write request means the memory controller encrypts the data before it is stored, while using the bypass path means the data is written into memory without being encrypted. Similarly, using the encryption path when performing a read request means the controller decrypts the data before it is delivered to the requesting entity, while using the bypass path means the data is delivered without being decrypted.
    Type: Application
    Filed: January 15, 2020
    Publication date: July 15, 2021
    Inventors: Tony SAWAN, Adam Samuel HALE
  • Patent number: 11022693
    Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: June 1, 2021
    Assignee: GM Global Technology Operations LLC
    Inventors: Andrea Allais, Adam Samuel Cadien, Elliot Branson, William Gongshu Xie, Micah Christopher Chambers
  • Patent number: 11023070
    Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism is displayed on the display screen. A hover mode touch input is detected, and a touch input on the touch sensing device is mirrored by corresponding movement of visual indicia on the focus area input mechanism on the display screen. Other touch gestures are used to perform operations within the focus area input mechanism.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
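
The hover-mode mirroring in patent 11023070 (and the related application and grant below) amounts to mapping touchpad coordinates into a smaller on-screen focus area; the sketch below assumes illustrative sizes and a linear mapping.

    # Hypothetical sketch: mirror a hover-mode touch position from a touch sensing
    # device onto the focus area input mechanism drawn on the display.
    from typing import Tuple

    def mirror_to_focus_area(
        touch_xy: Tuple[float, float],
        touchpad_size: Tuple[float, float],
        focus_origin: Tuple[float, float],
        focus_size: Tuple[float, float],
    ) -> Tuple[float, float]:
        """Scale a touchpad coordinate into the on-screen focus area input mechanism."""
        tx, ty = touch_xy
        tw, th = touchpad_size
        fx, fy = focus_origin
        fw, fh = focus_size
        return (fx + (tx / tw) * fw, fy + (ty / th) * fh)

    # A touch at the centre of the touchpad lands at the centre of the focus area.
    print(mirror_to_focus_area((50, 30), (100, 60), (400, 300), (200, 150)))  # (500.0, 375.0)
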
  • Patent number: 10884411
    Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN. The computing system is further configured to align a heightmap to lidar data output by the lidar sensor system based upon output of the DNN. The lidar segmentation system can identify objects in proximity to the autonomous vehicle based upon the aligned heightmap.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: January 5, 2021
    Assignee: GM Global Technology Operations LLC
    Inventors: Andrea Allais, Adam Samuel Cadien, Elliot Branson, William Gongshu Xie, Micah Christopher Chambers
  • Patent number: 10872199
    Abstract: Described herein is a system and method for modifying electronic documents. While a user is editing an electronic document on a canvas of an application, a trigger event related to an electronic pen is received (e.g., explicitly or inferred). The electronic pen has one or more associated attributes (e.g., type of pen, color of pen, thickness of line, transparency value). In response to the trigger event, which of a plurality of advanced productivity actions related to editing to apply to the electronic document is determined based upon at least one of the associated attributes. The advanced productivity actions can include, for example, styles, formatting, and/or themes. The electronic document is modified in accordance with the determined advanced productivity action.
    Type: Grant
    Filed: May 26, 2018
    Date of Patent: December 22, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elise Leigh Livingston, Daniel Yancy Parish, Adam Samuel Riddle
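
As a loose illustration of patent 10872199, the sketch below maps pen attributes to an editing action; the attribute-to-action table is entirely assumed.

    # Hypothetical sketch: select a document-editing action from electronic pen attributes.
    def action_for_pen(pen_type: str, color: str, thickness: float) -> str:
        """Map pen attributes to an advanced productivity action applied to the document."""
        if pen_type == "highlighter":
            return f"apply_emphasis_style:{color}"
        if pen_type == "ink" and thickness >= 3.0:
            return "apply_heading_format"
        if pen_type == "ink":
            return "apply_body_text_style"
        return "no_action"

    print(action_for_pen("highlighter", "yellow", 2.0))  # apply_emphasis_style:yellow
    print(action_for_pen("ink", "black", 4.0))           # apply_heading_format
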
  • Publication number: 20200272275
    Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism is displayed on the display screen. A hover mode touch input is detected, and a touch input on the touch sensing device is mirrored by corresponding movement of visual indicia on the focus area input mechanism on the display screen. Other touch gestures are used to perform operations within the focus area input mechanism.
    Type: Application
    Filed: May 11, 2020
    Publication date: August 27, 2020
    Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
  • Patent number: 10747949
    Abstract: Systems, methods, and software are disclosed herein for presenting an overlay canvas in response to receiving an editing gesture to existing text on a canvas. In an implementation, user input is received comprising an inking gesture associated with existing text displayed on a canvas in a user interface. The inking gesture is then determined to comprise any of a plurality of editing gestures. In response to the inking gesture comprising an editing gesture, an overlay canvas is presented above the canvas in the user interface. Additional user input is received comprising inking on the overlay canvas. The inking is then incorporated into the existing text on the canvas.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: August 18, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elise Livingston, Adam Samuel Riddle, L. Tucker Hatfield, Charles Cummins, Allison Smedley
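
The overlay-canvas flow of patent 10747949 might be sketched as below; the gesture names and the append-only merge rule are placeholders for the unspecified recognition and incorporation steps.

    # Hypothetical sketch: classify an inking gesture, present an overlay canvas above
    # the text canvas, and merge the new inking back into the existing text.
    EDITING_GESTURES = {"caret_insert", "strike_through", "circle_select"}

    def handle_inking(gesture: str, existing_text: str, get_overlay_ink) -> str:
        """If the gesture is an editing gesture, collect ink on an overlay canvas and merge it."""
        if gesture not in EDITING_GESTURES:
            return existing_text                   # plain ink: no overlay, text unchanged
        new_text = get_overlay_ink()               # overlay canvas presented above the canvas
        return existing_text + " " + new_text      # placeholder merge: append recognized ink

    result = handle_inking("caret_insert", "The quick fox", lambda: "brown")
    print(result)  # "The quick fox brown" (the placeholder merge simply appends)
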
  • Publication number: 20200249825
    Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism, which is smaller than a display screen controlled by the computing system, is displayed on the display screen. A maneuver touch input is detected, and the focus area input mechanism is moved, on the display screen, to a new position based upon the maneuver touch input. Other touch gestures are used to perform operations within the focus area input mechanism.
    Type: Application
    Filed: February 18, 2019
    Publication date: August 6, 2020
    Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
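
Publication 20200249825 adds repositioning of the focus area via a maneuver touch input; a minimal sketch, assuming a simple delta-and-clamp rule and illustrative sizes, follows.

    # Hypothetical sketch: reposition the focus area input mechanism on the display
    # in response to a maneuver touch input, keeping it within the screen bounds.
    def move_focus_area(origin, delta, focus_size, screen_size):
        """Shift the focus area by the maneuver gesture's delta, clamped to stay fully on screen."""
        x = min(max(origin[0] + delta[0], 0), screen_size[0] - focus_size[0])
        y = min(max(origin[1] + delta[1], 0), screen_size[1] - focus_size[1])
        return (x, y)

    print(move_focus_area((400, 300), (250, -50), (200, 150), (1920, 1080)))  # (650, 250)
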
  • Patent number: 10684725
    Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism, which is smaller than a display screen controlled by the computing system, is displayed on the display screen. A hover mode touch input is detected, and a touch input on the touch sensing device is mirrored by corresponding movement of visual indicia on the focus area input mechanism on the display screen. Other touch gestures are used to perform operations within the focus area input mechanism.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: June 16, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
  • Patent number: 10643482
    Abstract: Embodiments of the present invention are generally directed to an Audio-Story Engine that includes a repository of prerecorded audio files that, when played in a certain sequence, with user provided recordings placed throughout, tell a story. To obtain the user provided recordings, the Audio-Story Engine asks the user to make audio recordings of various words or phrases. For example, the Audio-Story Engine may ask the user a series of questions in order to record and store the user's audible responses. Upon completion, the Audio-Story Engine plays back a completed story that incorporates the user's audio recordings by playing an appropriate user recording after playing a prerecorded audio file. This is repeated several times in sequence to form a seamless, customized, audio story. In addition, the Audio-Story Engine may alter the pitch or sound of the user's recorded words to match the pitch of the prerecorded story.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: May 5, 2020
    Assignee: Hallmark Cards, Incorporated
    Inventors: Anne Catherine Bates, Jason Paul Gahr, Adam Samuel Scheff, Jason Blake Penrod, Stephane Farris Young, Timothy Jay Lien, Michael Anthony Monaco, Jr.
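
The playback sequencing of patent 10643482 might look roughly like the sketch below; the file names and the pitch_matched placeholder are assumptions standing in for the engine's actual audio processing.

    # Hypothetical sketch: interleave prerecorded story segments with the user's
    # recorded responses to form the customized audio story.
    def build_story(prerecorded: list, user_recordings: list) -> list:
        """Interleave prerecorded audio files with user recordings to form the final playlist."""
        playlist = []
        for i, segment in enumerate(prerecorded):
            playlist.append(segment)
            if i < len(user_recordings):
                playlist.append(f"pitch_matched({user_recordings[i]})")  # placeholder for pitch adjustment
        return playlist

    story = build_story(
        ["intro.wav", "middle.wav", "ending.wav"],
        ["user_name.wav", "user_place.wav"],
    )
    print(story)
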
  • Publication number: 20200137494
    Abstract: A separate virtual (e.g. aural) location for one or more interaction or telephony call participants may provide an indication or clue for at least one of the call participants of who is speaking at any one time, reducing errors and misunderstandings during the call. Auditory localization may be used so that participants are heard from separate virtual locations. An audible user interface (AUI) may be produced such that audio presented to the listening user is location-specific, the location being relevant to the user, just as information presented in a graphical user interface (GUI) might be relevant. For example, a plurality of audio streams which are part of an interaction between communicating parties may be accepted, and based on the audio streams, a plurality of audio outputs may be provided, each located at a different location in three-dimensional space.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Applicant: INCONTACT, INC.
    Inventors: Adam Samuel HORROCKS, Matthew Lawrence PAGE, Nathan Edwin BODEN, Christopher Garn SEAMAN