Patents by Inventor Amit Shahar
Amit Shahar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240187482
Abstract: A transmitting device and a receiving device are provided. The transmitting device is configured to: maintain a send queue, wherein the send queue comprises one or more WQEs, each WQE comprising destination information of the WQE; assign each of the one or more WQEs an XID, and translate each WQE into a respective packet, wherein each respective packet comprises the XID of the corresponding WQE and is associated with a PSN; transmit each packet to the receiving device identified by the destination information of the WQE; receive, for each transmitted packet, a notification message from the receiving device indicating whether the transmitted packet was received; and determine whether or not to generate a completion for each WQE, based on the notification message, on information carried in the WQE, and on information held by the transmitting device.
Type: Application
Filed: February 12, 2024
Publication date: June 6, 2024
Inventors: Ben-Shahar Belkar, Reuven Cohen, David Ganor, Amit Geron
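The queue-and-notification flow this abstract describes can be sketched roughly as below. Here WQE (work queue element), XID (transaction identifier), and PSN (packet sequence number) are read in their conventional RDMA senses; every class, field, and return value in this sketch is hypothetical and illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class WQE:
    destination: str
    payload: bytes
    xid: int = -1   # transaction ID, assigned when the WQE is posted

class Transmitter:
    def __init__(self):
        self.send_queue = deque()   # posted WQEs awaiting transmission
        self.next_xid = 0
        self.next_psn = 0
        self.pending = {}           # PSN -> XID, awaiting a notification

    def post(self, wqe):
        # Assign each posted WQE an XID.
        wqe.xid = self.next_xid
        self.next_xid += 1
        self.send_queue.append(wqe)

    def transmit_all(self):
        # Translate each WQE into a packet carrying its XID and a fresh PSN.
        packets = []
        while self.send_queue:
            wqe = self.send_queue.popleft()
            psn = self.next_psn
            self.next_psn += 1
            self.pending[psn] = wqe.xid
            packets.append({"psn": psn, "xid": wqe.xid,
                            "dst": wqe.destination, "data": wqe.payload})
        return packets

    def on_notification(self, psn, received):
        # Complete the WQE only when the receiver reports the packet arrived;
        # otherwise leave it to the caller to retry.
        xid = self.pending.pop(psn)
        return ("completed", xid) if received else ("retry", xid)
```

The per-packet notification (rather than a cumulative ACK) is what lets the transmitter decide completions WQE by WQE, as the abstract describes.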
-
Publication number: 20240121302
Abstract: An entity for RDMA is configured to maintain a time-based queue pair (QP). The time-based QP comprises a first area associated with a time-based indication and is configured to hold one or more first WQEs. The time-based indication indicates that the one or more WQEs in the first area are to be periodically processed. The entity is further configured to periodically process the one or more first WQEs in the first area according to the time-based indication.
Type: Application
Filed: December 18, 2023
Publication date: April 11, 2024
Inventors: Ben-Shahar Belkar, Sagiv Goren, Reuven Cohen, David Ganor, Amit Geron
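A minimal sketch of the periodic-processing behavior described here: a queue area whose WQEs are reprocessed whenever its time-based indication fires. The class name, the interval-based trigger, and the handler callback are all illustrative assumptions, not the patent's mechanism.

```python
class TimeBasedQP:
    """Sketch: a QP area whose WQEs are reprocessed every `interval` time units."""

    def __init__(self, interval):
        self.interval = interval    # the time-based indication, as a period
        self.first_area = []        # WQEs held for periodic processing
        self.last_run = None

    def post(self, wqe):
        self.first_area.append(wqe)

    def maybe_process(self, now, handler):
        # Process the whole first area when the period has elapsed;
        # WQEs stay in the area so they are processed again next period.
        if self.last_run is None or now - self.last_run >= self.interval:
            self.last_run = now
            for wqe in self.first_area:
                handler(wqe)
            return True
        return False
```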
-
Publication number: 20240105177
Abstract: Embodiments herein relate to a local assistant system responding to voice input using an ear-wearable device. The system detects a wake-up signal and receives a first voice input communicating a first query content. The system includes speech recognition circuitry to determine the first query content, speech generation circuitry, and an input database of locally-handled user inputs. If the first audio input matches one of the locally-handled user inputs, then the system takes a local responsive action. If the first audio input does not match one of the locally-handled user inputs, then the system transmits at least a portion of the first query content over a wireless network to a network resource.
Type: Application
Filed: December 8, 2023
Publication date: March 28, 2024
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Justin R. Burwinkel, Jeffrey Paul Solum, Thomas Howard Burns
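The local-versus-network routing this abstract describes (and which recurs in the related filings below) reduces to a lookup against a database of locally-handled inputs, with a network fallback. The example inputs and action names here are hypothetical placeholders, not from the patent.

```python
# Hypothetical database of locally-handled user inputs -> local action names.
LOCAL_INPUTS = {
    "what time is it": "say_time",
    "raise the volume": "volume_up",
}

def handle_query(query):
    """Route a recognized query: act locally if it matches a locally-handled
    input, otherwise forward the query content to a network resource."""
    key = query.strip().lower()
    if key in LOCAL_INPUTS:
        return ("local", LOCAL_INPUTS[key])
    return ("network", key)
```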
-
Patent number: 11893997
Abstract: A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device and input can be received comprising the user selecting one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate an output signal sound at the first transducer of the ear-wearable device based on the input audio signal.
Type: Grant
Filed: January 26, 2022
Date of Patent: February 6, 2024
Assignee: Starkey Laboratories, Inc.
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Clifford Anthony Tallman
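The select-then-transcribe flow above can be sketched as follows: the user picks one of the identified voices, and only that voice's segments are run through a speech-to-text step. The class, the segment representation, and the injected `recognize` callback are all illustrative assumptions standing in for the patent's recognition pipeline.

```python
class TranscriptionSession:
    """Sketch: user selects one identified voice; only it is transcribed."""

    def __init__(self, voices):
        self.voices = voices        # e.g. {"voice-1": [segment, segment, ...]}
        self.selected = None

    def select(self, voice_id):
        # The user picks one of the displayed voice representations.
        if voice_id not in self.voices:
            raise KeyError(voice_id)
        self.selected = voice_id

    def transcript(self, recognize):
        # `recognize` stands in for a speech-to-text engine (hypothetical).
        return [recognize(seg) for seg in self.voices[self.selected]]
```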
-
Patent number: 11869505
Abstract: Embodiments herein relate to a local assistant system responding to voice input using an ear-wearable device. The system detects a wake-up signal and receives a first voice input communicating a first query content. The system includes speech recognition circuitry to determine the first query content, speech generation circuitry, and an input database of locally-handled user inputs. If the first audio input matches one of the locally-handled user inputs, then the system takes a local responsive action. If the first audio input does not match one of the locally-handled user inputs, then the system transmits at least a portion of the first query content over a wireless network to a network resource.
Type: Grant
Filed: January 26, 2022
Date of Patent: January 9, 2024
Assignee: Starkey Laboratories, Inc.
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Justin R. Burwinkel, Jeffrey Paul Solum, Thomas Howard Burns
-
Publication number: 20240000315
Abstract: Embodiments herein relate to ear-wearable devices configured to detect aberrant patterns indicative of events related to the safety or health of a wearer and related methods. In a first aspect, an ear-wearable device is included having a control circuit, a microphone, a motion sensor, and a power supply. The ear-wearable device is configured to monitor signals from the microphone and/or the motion sensor to identify an aberrant pattern, and issue an alert when an aberrant pattern is detected. Other embodiments are also included herein.
Type: Application
Filed: November 11, 2021
Publication date: January 4, 2024
Inventors: Amit Shahar, David Alan Fabry
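As a toy illustration of the monitor-and-alert idea, one very simple aberrant-pattern test flags a motion sample that spikes far above the running baseline. The function, the threshold, and the sample values are invented for this sketch and say nothing about how the patent actually classifies events.

```python
def detect_aberrant(motion_samples, threshold=3.0):
    """Flag a possible aberrant event when the latest motion magnitude
    is far above the baseline of the preceding samples.

    Toy stand-in for the abstract's signal monitoring; the threshold
    and the baseline rule are illustrative assumptions.
    """
    if len(motion_samples) < 2:
        return False
    baseline = sum(motion_samples[:-1]) / (len(motion_samples) - 1)
    return motion_samples[-1] > baseline * threshold
```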
-
Publication number: 20230351064
Abstract: A method comprises obtaining ear modeling data, wherein the ear modeling data includes a 3D model of an ear canal; applying a shell-generation model to generate a shell shape based on the ear modeling data, wherein the shell-generation model is a machine learning model and the shell shape is a 3D representation of a shell of an ear-wearable device; applying a set of one or more component-placement models to determine, based on the ear modeling data, a position and orientation of a component of the ear-wearable device, wherein the component-placement models are independent of the shell-generation model and each of the component-placement models is a separate machine learning model; and generating an ear-wearable device model based on the shell shape and the 3D arrangement of the components of the ear-wearable device.
Type: Application
Filed: April 21, 2023
Publication date: November 2, 2023
Inventors: Amit Shahar, Lior Weizman, Deepak Kadetotad, Jinjun Xiao, Nitzan Bornstein
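The two-stage structure described here (one shell-generation model, then a set of independent per-component placement models, combined into a device model) can be sketched as a plain function that composes whatever models are supplied. The function signature and dictionary layout are illustrative assumptions; the models themselves are passed in as callables.

```python
def build_device_model(ear_scan, shell_model, placement_models):
    """Sketch of the two-stage pipeline: generate the shell, then run each
    independent component-placement model, and combine the results.

    `shell_model` and each value in `placement_models` stand in for
    separately trained ML models (hypothetical callables here)."""
    shell = shell_model(ear_scan)
    # Each placement model runs on the ear modeling data independently
    # of the shell-generation model, per the abstract.
    placements = {name: model(ear_scan)
                  for name, model in placement_models.items()}
    return {"shell": shell, "components": placements}
```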
-
Publication number: 20230301580
Abstract: Embodiments herein relate to ear-worn devices and related systems and methods that can be used to detect oropharyngeal events and related occurrences such as food and drink intake. In an embodiment, an ear-worn device system is included having a first ear-worn device that has a control circuit, a motion sensor, at least one microphone, an electroacoustic transducer, and a power supply circuit. The system can also include a second ear-worn device. The system can be configured to monitor signals from at least one of the motion sensor and the at least one microphone and evaluate the signals to identify oropharyngeal events and/or related occurrences. Other embodiments are also included herein.
Type: Application
Filed: July 28, 2021
Publication date: September 28, 2023
Inventors: Jinjun Xiao, Amit Shahar
-
Publication number: 20230292064
Abstract: A hearing assistance system is included having a first microphone and a second microphone. The system further includes a vision device having a display device, where the display device is configured to display visual information to the user when the user is wearing the vision device. The system is further configured to process audio signals to identify first audio content information and first audio direction information related to the first audio content information. The system further can transmit, to the vision device, first display information to cause the vision device to display a non-transcript content representation of the first audio content information and a direction representation of the first audio direction information using the display device.
Type: Application
Filed: February 10, 2023
Publication date: September 14, 2023
Inventors: William F. Austin, Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Majd Srour
-
Publication number: 20230051613
Abstract: Embodiments herein relate to ear-worn devices that can be used to locate other devices such as mobile electronic devices. In a first aspect, an ear-worn device is included having a control circuit, a microphone in electrical communication with the control circuit, an electroacoustic transducer for generating sound in electrical communication with the control circuit, a motion sensor in electrical communication with the control circuit, and a power supply circuit in electrical communication with the control circuit, wherein the ear-worn device is configured to issue a location command to a mobile electronic device and wherein the location command triggers the mobile electronic device to emit a locating signal. Other embodiments are also included herein.
Type: Application
Filed: January 8, 2021
Publication date: February 16, 2023
Inventors: Achintya Kumar Bhowmik, Amit Shahar, Majd Srour, Dagan Shtifman
-
Publication number: 20230016667
Abstract: Embodiments herein relate to hearing assistance systems and methods for monitoring a device wearer's emotional state and status. In an embodiment, a hearing assistance system is included having an ear-worn device that can include a control circuit and a microphone in electronic communication with the control circuit. The ear-worn device can be configured to monitor signals from the microphone, analyze the signals in order to identify speech, and transmit data based on the signals representing the identified speech to a separate device. Other embodiments are also included herein.
Type: Application
Filed: December 17, 2020
Publication date: January 19, 2023
Inventors: Majd Srour, Amit Shahar, Roy Talman
-
Publication number: 20220386959
Abstract: Embodiments herein relate to ear-wearable devices and systems that can detect a risk of infection in a device wearer. In a first aspect, an ear-wearable infection sensor device is included having a control circuit, a microphone, a sensor package, and an electroacoustic transducer, wherein the electroacoustic transducer is in electrical communication with the control circuit. The ear-wearable infection sensor device can be configured to analyze data from the sensor package to determine physiological parameters of a device wearer and evaluate the physiological parameters to detect the risk of an infection. Other embodiments are also included herein.
Type: Application
Filed: June 7, 2022
Publication date: December 8, 2022
Inventors: Archelle Georgiou, Krishna Chaithanya Vastare, Justin R. Burwinkel, Andy S. Lin, Kyle Olson, Michael Karl Sacha, Amit Shahar
-
Publication number: 20220157434
Abstract: Embodiments herein relate to ear-wearable device systems and methods for monitoring a device wearer's emotional state and status. In an embodiment, an ear-wearable device is included having a control circuit, a microphone, and a power supply circuit. The ear-wearable device is configured to monitor signals from the microphone, identify signs of anxiety in the microphone signals, and provide a wearer of the ear-wearable device with feedback related to identified anxiety. In another embodiment, a method of monitoring anxiety with an ear-wearable device is included, the method including monitoring signals from a microphone, identifying signs of anxiety in the microphone signals, and providing a wearer of the ear-wearable device with feedback related to the identified anxiety. Other embodiments are also included herein.
Type: Application
Filed: November 15, 2021
Publication date: May 19, 2022
Inventors: Majd Srour, Amit Shahar, Roy Talman
-
Publication number: 20220148597
Abstract: Embodiments herein relate to a local assistant system responding to voice input using an ear-wearable device. The system detects a wake-up signal and receives a first voice input communicating a first query content. The system includes speech recognition circuitry to determine the first query content, speech generation circuitry, and an input database of locally-handled user inputs. If the first audio input matches one of the locally-handled user inputs, then the system takes a local responsive action. If the first audio input does not match one of the locally-handled user inputs, then the system transmits at least a portion of the first query content over a wireless network to a network resource.
Type: Application
Filed: January 26, 2022
Publication date: May 12, 2022
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Justin R. Burwinkel, Jeffrey Paul Solum, Thomas Howard Burns
-
Publication number: 20220148599
Abstract: A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device and input can be received comprising the user selecting one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate an output signal sound at the first transducer of the ear-wearable device based on the input audio signal.
Type: Application
Filed: January 26, 2022
Publication date: May 12, 2022
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Clifford Anthony Tallman
-
Patent number: 11264029
Abstract: Embodiments herein relate to a local assistant system responding to voice input using an ear-wearable device. The system detects a wake-up signal and receives a first voice input communicating a first query content. The system includes speech recognition circuitry to determine the first query content, speech generation circuitry, and an input database of locally-handled user inputs. If the first audio input matches one of the locally-handled user inputs, then the system takes a local responsive action. If the first audio input does not match one of the locally-handled user inputs, then the system transmits at least a portion of the first query content over a wireless network to a network resource.
Type: Grant
Filed: January 2, 2020
Date of Patent: March 1, 2022
Assignee: Starkey Laboratories, Inc.
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Justin R. Burwinkel, Jeffrey Paul Solum, Thomas Howard Burns
-
Patent number: 11264035
Abstract: A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device and input can be received comprising the user selecting one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate an output signal sound at the first transducer of the ear-wearable device based on the input audio signal.
Type: Grant
Filed: January 2, 2020
Date of Patent: March 1, 2022
Assignee: Starkey Laboratories, Inc.
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Clifford Anthony Tallman
-
Publication number: 20200219515
Abstract: A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device and input can be received comprising the user selecting one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate an output signal sound at the first transducer of the ear-wearable device based on the input audio signal.
Type: Application
Filed: January 2, 2020
Publication date: July 9, 2020
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Clifford Anthony Tallman
-
Publication number: 20200219506
Abstract: Embodiments herein relate to a local assistant system responding to voice input using an ear-wearable device. The system detects a wake-up signal and receives a first voice input communicating a first query content. The system includes speech recognition circuitry to determine the first query content, speech generation circuitry, and an input database of locally-handled user inputs. If the first audio input matches one of the locally-handled user inputs, then the system takes a local responsive action. If the first audio input does not match one of the locally-handled user inputs, then the system transmits at least a portion of the first query content over a wireless network to a network resource.
Type: Application
Filed: January 2, 2020
Publication date: July 9, 2020
Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Justin R. Burwinkel, Jeffrey Paul Solum, Thomas Howard Burns
-
Publication number: 20200143703
Abstract: Embodiments herein relate to hearing assistance devices and related systems and methods for providing visual feedback to a subject undergoing fixed-gaze movement training. In an embodiment, a method of providing vestibular therapy to a subject is included. The method can include prompting the subject to execute an exercise, the exercise comprising a predetermined movement while maintaining a fixed point of eye gaze, tracking the point of gaze of the subject's eyes using a camera, and generating data representing a measured deviation between the fixed point of eye gaze and the tracked point of gaze. Other embodiments are included herein.
Type: Application
Filed: November 7, 2019
Publication date: May 7, 2020
Inventors: David Alan Fabry, Achintya Kumar Bhowmik, Justin R. Burwinkel, Jeffery Lee Crukley, Amit Shahar
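The "measured deviation" in this abstract is, at its simplest, a per-sample distance between the fixed gaze target and each camera-tracked gaze point. A minimal sketch under that assumption (2D coordinates and Euclidean distance are illustrative choices, not the patent's metric):

```python
import math

def gaze_deviation(fixed_point, tracked_points):
    """Per-sample Euclidean deviation between the fixed gaze target and
    each camera-tracked gaze point (2D coordinates assumed)."""
    fx, fy = fixed_point
    return [math.hypot(x - fx, y - fy) for x, y in tracked_points]
```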