Patents by Inventor Adam SAMUELS
Adam SAMUELS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220091681
Abstract: Techniques for improving the convenience of activating different computing applications on a mobile computing device are disclosed. Sensors associated with a mobile computing device (e.g., accelerometers, gyroscopes, light sensors, microphones, image capture sensors) may receive inputs of various physical conditions to which the mobile computing device is being subjected. Based on one or more of these inputs, the mobile computing device may automatically select a content presentation mode that is likely to improve the consumption of the content by the user. In other embodiments, image analysis may be used to access different mobile computing applications.
Type: Application
Filed: May 12, 2021
Publication date: March 24, 2022
Applicant: Oracle International Corporation
Inventors: Jennifer Darmour, Adam Samuel Riddle, Loretta Marie Grande, Diego Pantoja-Navajas, Roberto Espinosa, Arunachalam Murugan
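The mode-selection idea in this abstract can be sketched as a simple decision rule over sensor readings. The thresholds, sensor names, and mode labels below are invented for illustration; the patent does not specify them.

```python
# Hypothetical sketch: choosing a content presentation mode from sensor inputs.
# Thresholds and mode names are illustrative, not from the patent.

def select_presentation_mode(ambient_lux: float, motion_g: float, is_muted: bool) -> str:
    """Pick a presentation mode likely to suit the current physical conditions."""
    if motion_g > 1.5:          # device is shaking, e.g. user is walking
        return "audio"          # read content aloud instead of displaying text
    if ambient_lux < 10:        # dark environment
        return "dark_theme"
    if is_muted:
        return "captions"       # show text instead of playing sound
    return "standard"
```

A real implementation would fuse several sensor streams over time rather than thresholding single readings.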
-
Publication number: 20220019221
Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
Type: Application
Filed: September 30, 2021
Publication date: January 20, 2022
Inventors: Andrea Allais, Micah Christopher Chambers, William Gongshu Xie, Adam Samuel Cadien, Elliot Branson
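The segmentation pipeline described here (classifier output per lidar return, then grouping returns into objects) can be sketched minimally. The stand-in rule-based classifier and the class names are placeholders; the patent's system uses a trained deep neural network.

```python
# Illustrative sketch of per-point lidar segmentation: a stand-in classifier
# labels each return, and returns sharing a label are grouped into objects.
# The toy rule below is only a placeholder for the patent's DNN so the
# grouping step can be shown end to end.

from collections import defaultdict

def toy_classifier(point):
    x, y, z = point
    if z > 2.0:
        return "overhead"      # e.g. foliage, signs
    if abs(x) < 1.0 and abs(y) < 1.0:
        return "vehicle"       # returns near an assumed query region
    return "ground"

def segment(points):
    """Group lidar returns by predicted class."""
    objects = defaultdict(list)
    for p in points:
        objects[toy_classifier(p)].append(p)
    return dict(objects)
```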
-
Publication number: 20210397274
Abstract: Methods and systems are provided that are directed to automatically adjusting a user interface based on tilt position of a digital drawing board. The digital drawing board has a tiltable screen with a sensor. The tiltable screen may be fixed in a stable tilt position. A sensor is used to determine that the digital drawing board has a first tilt position. The digital drawing board displays a first user interface associated with the first tilt position. The first user interface may be associated with a first use mode. The first user interface may also be based on an application running on the digital drawing board. When the sensor senses that the digital drawing board has moved from the first tilt position to a second tilt position, the digital drawing board automatically displays a second user interface associated with the second tilt position. The second user interface has different functionality than the first user interface.
Type: Application
Filed: June 19, 2020
Publication date: December 23, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Paul HINCKLEY, Hugo Karl Denis ROMAT, Christopher Mervin COLLINS, Nathalie RICHE, Michel PAHUD, Adam Samuel RIDDLE, William Arthur Stewart BUXTON
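The tilt-to-interface mapping can be sketched as below. The angle ranges (degrees from horizontal) and mode names are invented for illustration; the patent only states that different stable tilt positions map to user interfaces with different functionality.

```python
# Hypothetical sketch of tilt-based UI switching on a digital drawing board.
# Angle thresholds and mode names are assumptions, not from the patent.

def ui_for_tilt(angle_deg: float) -> str:
    """Return the user interface mode for a stable tilt position."""
    if angle_deg < 15:
        return "drafting"      # near-flat: full-canvas drawing tools
    if angle_deg < 60:
        return "review"        # inclined: annotation and navigation controls
    return "presentation"      # near-vertical: viewing-oriented interface

class DrawingBoard:
    def __init__(self):
        self.ui = ui_for_tilt(0.0)

    def on_tilt_sensor(self, angle_deg: float) -> str:
        new_ui = ui_for_tilt(angle_deg)
        if new_ui != self.ui:  # switch only when the mode actually changes
            self.ui = new_ui
        return self.ui
```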
-
Patent number: 11204605
Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
Type: Grant
Filed: August 3, 2018
Date of Patent: December 21, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Andrea Allais, Micah Christopher Chambers, William Gongshu Xie, Adam Samuel Cadien, Elliot Branson
-
Publication number: 20210358463
Abstract: A switch lock apparatus for a guitar having a pickup selector switch is disclosed herein for locking a position of the pickup selector switch. The switch lock apparatus may include a body plate, a mounting hole, and a switch opening defined in the body plate which is configured to receive the pickup selector switch. The switch lock apparatus may also include at least one first side receptacle positioned along the first switch opening side and at least one second side receptacle positioned along the second switch opening side. The at least one second side receptacle may be offset from the at least one first side receptacle along the switch opening length to define a central passageway that enables free movement of the pickup selector switch. The switch lock apparatus may define a locked position when the pickup selector switch is received by one of the first or second side receptacles.
Type: Application
Filed: May 18, 2021
Publication date: November 18, 2021
Inventors: Joshua John Misko, Adam Samuel Mendel
-
Publication number: 20210314704
Abstract: A separate virtual (e.g. aural) location for one or more interaction or telephony call participants may provide an indication or clue for at least one of the call participants of who is speaking at any one time, reducing errors and misunderstandings during the call. Auditory localization may be used so that participants are heard from separate virtual locations. An audible user interface (AUI) may be produced such that audio presented to the listening user is location-specific, the location being relevant to the user, just as information presented in a graphical user interface (GUI) might be relevant. For example, a plurality of audio streams which are part of an interaction between communicating parties may be accepted, and based on the audio streams, a plurality of audio outputs may be provided, each located at a different location in three-dimensional space.
Type: Application
Filed: June 17, 2021
Publication date: October 7, 2021
Applicant: INCONTACT, INC.
Inventors: Adam Samuel HORROCKS, Matthew Lawrence PAGE, Nathan Edwin BODEN, Christopher Garn SEAMAN
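The localization idea can be sketched by assigning each participant a distinct virtual azimuth and deriving stereo gains for that angle. The even angular spread and the two-channel equal-power panning model are assumptions for illustration; the patent describes locating outputs in three-dimensional space, not necessarily stereo.

```python
# Illustrative sketch: give each call participant a distinct virtual azimuth
# and derive equal-power stereo gains for their audio stream.

import math

def assign_azimuths(participants):
    """Spread participants evenly from -90 (hard left) to +90 (hard right) degrees."""
    n = len(participants)
    if n == 1:
        return {participants[0]: 0.0}
    return {p: -90.0 + 180.0 * i / (n - 1) for i, p in enumerate(participants)}

def stereo_gains(azimuth_deg):
    """Equal-power pan: map an azimuth to (left, right) channel gains."""
    pan = (azimuth_deg + 90.0) / 180.0   # 0 = hard left, 1 = hard right
    theta = pan * math.pi / 2.0
    return math.cos(theta), math.sin(theta)
```

Equal-power panning keeps perceived loudness constant as a voice moves across the virtual field, since the squared gains always sum to one.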
-
Publication number: 20210223402
Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
Type: Application
Filed: April 9, 2021
Publication date: July 22, 2021
Inventors: Andrea Allais, Adam Samuel Cadien, Elliot Branson, William Gongshu Xie, Micah Christopher Chambers
-
Patent number: 11070916
Abstract: A separate virtual (e.g. aural) location for one or more interaction or telephony call participants may provide an indication or clue for at least one of the call participants of who is speaking at any one time, reducing errors and misunderstandings during the call. Auditory localization may be used so that participants are heard from separate virtual locations. An audible user interface (AUI) may be produced such that audio presented to the listening user is location-specific, the location being relevant to the user, just as information presented in a graphical user interface (GUI) might be relevant. For example, a plurality of audio streams which are part of an interaction between communicating parties may be accepted, and based on the audio streams, a plurality of audio outputs may be provided, each located at a different location in three-dimensional space.
Type: Grant
Filed: October 29, 2018
Date of Patent: July 20, 2021
Assignee: INCONTACT, INC.
Inventors: Adam Samuel Horrocks, Matthew Lawrence Page, Nathan Edwin Boden, Christopher Garn Seaman
-
Publication number: 20210216476
Abstract: Embodiments herein describe a memory controller that has an encryption path and a bypass path. Using an indicator (e.g., a dedicated address range), an outside entity can inform the memory controller whether to use the encryption path or the bypass path. For example, using the encryption path when performing a write request means the memory controller encrypts the data before it is stored, while using the bypass path means the data is written into memory without being encrypted. Similarly, using the encryption path when performing a read request means the controller decrypts the data before it is delivered to the requesting entity, while using the bypass path means the data is delivered without being decrypted.
Type: Application
Filed: January 15, 2020
Publication date: July 15, 2021
Inventors: Tony SAWAN, Adam Samuel HALE
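The address-range indicator described here amounts to dispatching each request to one of two paths. A minimal sketch, with an invented address window and an XOR stand-in for the cryptographic engine (a real controller would use hardware encryption):

```python
# Hypothetical sketch of dispatch by address range: accesses falling inside a
# dedicated "encrypted" window take the encryption path, all others take the
# bypass path. Window bounds and the XOR "cipher" are placeholders.

ENC_BASE, ENC_LIMIT = 0x8000_0000, 0x9000_0000  # illustrative address window
KEY = 0x5A

def _xor(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)          # stand-in for real encryption

class MemoryController:
    def __init__(self):
        self.mem = {}

    def write(self, addr: int, data: bytes) -> None:
        if ENC_BASE <= addr < ENC_LIMIT:         # encryption path
            self.mem[addr] = _xor(data)
        else:                                    # bypass path: store as-is
            self.mem[addr] = data

    def read(self, addr: int) -> bytes:
        data = self.mem[addr]
        if ENC_BASE <= addr < ENC_LIMIT:         # decrypt on the way out
            return _xor(data)
        return data
```

Because the path is chosen purely from the address, the requesting entity selects encryption simply by targeting the dedicated range, with no extra control signals.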
-
Publication number: 20210216645
Abstract: Embodiments herein describe a memory controller that has an encryption path and a bypass path. Using an indicator (e.g., a dedicated address range), an outside entity can inform the memory controller whether to use the encryption path or the bypass path. For example, using the encryption path when performing a write request means the memory controller encrypts the data before it is stored, while using the bypass path means the data is written into memory without being encrypted. Similarly, using the encryption path when performing a read request means the controller decrypts the data before it is delivered to the requesting entity, while using the bypass path means the data is delivered without being decrypted.
Type: Application
Filed: January 15, 2020
Publication date: July 15, 2021
Inventors: Tony SAWAN, Adam Samuel HALE
-
Patent number: 11022693
Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN.
Type: Grant
Filed: August 3, 2018
Date of Patent: June 1, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Andrea Allais, Adam Samuel Cadien, Elliot Branson, William Gongshu Xie, Micah Christopher Chambers
-
Patent number: 11023070
Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism is displayed on the display screen. A hover mode touch input is detected, and a touch input on the touch sensing device is mirrored by corresponding movement of visual indicia on the focus area input mechanism on the display screen. Other touch gestures are used to perform operations within the focus area input mechanism.
Type: Grant
Filed: May 11, 2020
Date of Patent: June 1, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
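The mirroring step (a touch position on the sensing device moving visual indicia within the focus area) reduces to a proportional coordinate mapping. The function below is a sketch under invented dimensions; the patent does not specify the mapping.

```python
# Illustrative sketch: map a touchpad coordinate proportionally into the
# smaller on-screen focus area, so visual indicia track the finger.
# All geometry here is assumed for illustration.

def mirror_touch(touch_xy, pad_size, focus_origin, focus_size):
    """Map a touch-sensing-device coordinate into the focus area rectangle."""
    tx, ty = touch_xy
    pw, ph = pad_size
    fx, fy = focus_origin
    fw, fh = focus_size
    # Normalize the touch position to [0, 1], then scale into the focus area.
    return (fx + tx / pw * fw, fy + ty / ph * fh)
```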
-
Patent number: 10884411
Abstract: An autonomous vehicle is described herein. The autonomous vehicle includes a lidar sensor system. The autonomous vehicle additionally includes a computing system that executes a lidar segmentation system, wherein the lidar segmentation system is configured to identify objects that are in proximity to the autonomous vehicle based upon output of the lidar sensor system. The computing system further includes a deep neural network (DNN), where the lidar segmentation system identifies the objects in proximity to the autonomous vehicle based upon output of the DNN. The computing system is further configured to align a heightmap to lidar data output by the lidar sensor system based upon output of the DNN. The lidar segmentation system can identify objects in proximity to the autonomous vehicle based upon the aligned heightmap.
Type: Grant
Filed: August 3, 2018
Date of Patent: January 5, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Andrea Allais, Adam Samuel Cadien, Elliot Branson, William Gongshu Xie, Micah Christopher Chambers
-
Patent number: 10872199
Abstract: Described herein is a system and method for modifying electronic documents. While a user is editing an electronic document on a canvas of an application, a trigger event related to an electronic pen is received (e.g., explicitly or inferred). The electronic pen has one or more associated attributes (e.g., type of pen, color of pen, thickness of line, transparency value). In response to the trigger event, which of a plurality of advanced productivity actions related to editing to apply to the electronic document is determined based upon at least one of the associated attributes. The advanced productivity actions can include, for example, styles, formatting, and/or themes. The electronic document is modified in accordance with the determined advanced productivity action.
Type: Grant
Filed: May 26, 2018
Date of Patent: December 22, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Elise Leigh Livingston, Daniel Yancy Parish, Adam Samuel Riddle
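Selecting an action from pen attributes can be sketched as a small dispatch over the attribute set. The attribute values and action names below are invented; the patent only states that the action (styles, formatting, themes) is chosen from attributes such as pen type, color, thickness, and transparency.

```python
# Hypothetical sketch: choose an editing action from electronic-pen
# attributes. Attribute values and action names are assumptions.

def action_for_pen(pen: dict) -> str:
    """Pick a productivity action based on the pen's associated attributes."""
    if pen.get("type") == "highlighter":
        return "apply_highlight_style"
    if pen.get("transparency", 0.0) > 0.5:
        return "apply_watermark_theme"
    if pen.get("thickness", 1.0) >= 5.0:
        return "apply_heading_format"
    return "apply_body_format"
```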
-
Publication number: 20200272275
Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism is displayed on the display screen. A hover mode touch input is detected, and a touch input on the touch sensing device is mirrored by corresponding movement of visual indicia on the focus area input mechanism on the display screen. Other touch gestures are used to perform operations within the focus area input mechanism.
Type: Application
Filed: May 11, 2020
Publication date: August 27, 2020
Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
-
Patent number: 10747949
Abstract: Systems, methods, and software are disclosed herein for presenting an overlay canvas in response to receiving an editing gesture to existing text on a canvas. In an implementation, user input is received comprising an inking gesture associated with existing text displayed on a canvas in a user interface. The inking gesture is then determined to comprise any of a plurality of editing gestures. In response to the inking gesture comprising an editing gesture, an overlay canvas is presented above the canvas in the user interface. Additional user input is received comprising inking on the overlay canvas. The inking is then incorporated into the existing text on the canvas.
Type: Grant
Filed: April 13, 2018
Date of Patent: August 18, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Elise Livingston, Adam Samuel Riddle, L. Tucker Hatfield, Charles Cummins, Allison Smedley
-
Publication number: 20200249825
Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism, which is smaller than a display screen controlled by the computing system, is displayed on the display screen. A maneuver touch input is detected, and the focus area input mechanism is moved, on the display screen, to a new position based upon the maneuver touch input. Other touch gestures are used to perform operations within the focus area input mechanism.
Type: Application
Filed: February 18, 2019
Publication date: August 6, 2020
Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
-
Patent number: 10684725
Abstract: An input mode trigger is detected so that a computing system treats inputs from a touch sensing device as touch inputs. A focus area input mechanism, which is smaller than a display screen controlled by the computing system, is displayed on the display screen. A hover mode touch input is detected, and a touch input on the touch sensing device is mirrored by corresponding movement of visual indicia on the focus area input mechanism on the display screen. Other touch gestures are used to perform operations within the focus area input mechanism.
Type: Grant
Filed: February 18, 2019
Date of Patent: June 16, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jessica Chen, Jonathan Marc Holley, Christopher Court, Taylor Jordan Hartman, Adam Samuel Riddle
-
Patent number: 10643482
Abstract: Embodiments of the present invention are generally directed to an Audio-Story Engine that includes a repository of prerecorded audio files that, when played in a certain sequence, with user provided recordings placed throughout, tell a story. To obtain the user provided recordings, the Audio-Story Engine asks the user to make audio recordings of various words or phrases. For example, the Audio-Story Engine may ask the user a series of questions in order to record and store the user's audible responses. Upon completion, the Audio-Story Engine plays back a completed story that incorporates the user's audio recordings by playing an appropriate user recording after playing a prerecorded audio file. This is repeated several times in sequence to form a seamless, customized, audio story. In addition, the Audio-Story Engine may alter the pitch or sound of the user's recorded words to match the pitch of the prerecorded story.
Type: Grant
Filed: February 20, 2015
Date of Patent: May 5, 2020
Assignee: Hallmark Cards, Incorporated
Inventors: Anne Catherine Bates, Jason Paul Gahr, Adam Samuel Scheff, Jason Blake Penrod, Stephane Farris Young, Timothy Jay Lien, Michael Anthony Monaco, Jr.
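The playback sequencing described here (a user recording after each prerecorded file, repeated to form one story) can be sketched as a simple interleave. The clip names are invented for illustration.

```python
# Illustrative sketch of the Audio-Story Engine's sequencing: prerecorded
# segments interleaved with the user's recorded answers form one playlist.
# File names are placeholders.

def build_story(prerecorded, user_recordings):
    """Interleave clips: prerecorded[0], user[0], prerecorded[1], user[1], ..."""
    story = []
    for i, clip in enumerate(prerecorded):
        story.append(clip)
        if i < len(user_recordings):      # tolerate fewer user clips than slots
            story.append(user_recordings[i])
    return story
```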
-
Publication number: 20200137494
Abstract: A separate virtual (e.g. aural) location for one or more interaction or telephony call participants may provide an indication or clue for at least one of the call participants of who is speaking at any one time, reducing errors and misunderstandings during the call. Auditory localization may be used so that participants are heard from separate virtual locations. An audible user interface (AUI) may be produced such that audio presented to the listening user is location-specific, the location being relevant to the user, just as information presented in a graphical user interface (GUI) might be relevant. For example, a plurality of audio streams which are part of an interaction between communicating parties may be accepted, and based on the audio streams, a plurality of audio outputs may be provided, each located at a different location in three-dimensional space.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Applicant: INCONTACT, INC.
Inventors: Adam Samuel HORROCKS, Matthew Lawrence PAGE, Nathan Edwin BODEN, Christopher Garn SEAMAN