Abstract: A sub-vocal speech recognition (SVSR) apparatus includes a headset worn over the ear, with electromyography (EMG) electrodes and an inertial measurement unit (IMU) in contact with the user's skin over the neck, under the chin, and behind the ear. When the user speaks or mouths words, the EMG and IMU signals are recorded by the sensors, amplified, and filtered before being divided into multi-millisecond time windows. These time windows are then transmitted to an interface computing device, where Mel-frequency cepstral coefficient (MFCC) conversion produces an aggregated vector representation (AVR). The AVR is the input to the SVSR system, which uses a neural network, a Connectionist Temporal Classification (CTC) function, and a language model to classify phonemes. The phonemes are then combined into words and sent back to the interface computing device, where they are rendered either as audible output, such as speech from a speaker, or non-audible output, such as text.
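Two of the stages described above lend themselves to a short sketch: dividing the sensor stream into fixed-length time windows, and collapsing the per-window network labels into a phoneme sequence via CTC decoding. This is a minimal illustration only; the window length, hop size, phoneme labels, and greedy decoding strategy below are assumptions, not details from the abstract.

```python
# Illustrative sketch of two pipeline stages from the abstract:
# (1) framing a 1-D sensor sample stream into fixed-length, overlapping
#     time windows, and
# (2) collapsing per-window CTC labels into a phoneme sequence.
# All concrete values (window length, hop, labels) are hypothetical.

def frame_signal(samples, window_len, hop):
    """Divide a 1-D sample stream into overlapping time windows."""
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, hop)]

BLANK = "_"  # CTC blank symbol

def ctc_greedy_collapse(labels):
    """Greedy CTC decode: merge consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for lab in labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return out

if __name__ == "__main__":
    # 20 samples framed into 8-sample windows with a 4-sample hop.
    windows = frame_signal(list(range(20)), window_len=8, hop=4)
    print(len(windows))  # prints 4

    # Hypothetical per-window phoneme labels emitted by the network.
    per_window = ["_", "h", "h", "_", "e", "l", "l", "_", "l", "o"]
    print(ctc_greedy_collapse(per_window))  # prints ['h', 'e', 'l', 'l', 'o']
```

In a real system the windows would first pass through MFCC feature extraction and the neural network before CTC decoding; the greedy collapse shown here stands in for the CTC-plus-language-model classification step.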