Patents by Inventor Abbas ATAYA
Abbas ATAYA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12229227
Abstract: Described herein is a non-transitory computer readable storage medium having computer-readable program code stored thereon for causing a computer system to perform a method for processing data, the method comprising: receiving data, processing the data at a fixed code processing engine, wherein operation of the fixed code processing engine is controlled according to stored parameters, and classifying processed data at a fixed code classification engine, wherein operation of the fixed code classification engine is controlled according to the stored parameters.
Type: Grant
Filed: August 26, 2022
Date of Patent: February 18, 2025
Assignee: TDK CORPORATION
Inventor: Abbas Ataya
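The abstract above describes a fixed-code processing engine and a fixed-code classification engine whose behavior is driven entirely by stored parameters rather than by code changes. A minimal Python sketch of that pattern, with hypothetical names (StoredParameters, FixedProcessingEngine, FixedClassificationEngine) that are not taken from the patent:

```python
import numpy as np

class StoredParameters:
    """Hypothetical parameter store: the fixed code never changes,
    only these values do (e.g. updated in the field)."""
    def __init__(self, filter_coeffs, class_weights, class_labels):
        self.filter_coeffs = np.asarray(filter_coeffs)   # controls processing
        self.class_weights = np.asarray(class_weights)   # controls classification
        self.class_labels = class_labels

class FixedProcessingEngine:
    """Fixed code path; behavior is determined by the stored parameters."""
    def process(self, data, params):
        # Simple FIR-style filtering driven entirely by stored coefficients.
        return np.convolve(data, params.filter_coeffs, mode="same")

class FixedClassificationEngine:
    """Fixed code path; the decision rule is parameterized, not hard-coded."""
    def classify(self, features, params):
        scores = params.class_weights @ features
        return params.class_labels[int(np.argmax(scores))]

# Usage: receive data, process it, classify the processed data.
params = StoredParameters(filter_coeffs=[0.25, 0.5, 0.25],
                          class_weights=[[1.0] * 8, [-1.0] * 8],
                          class_labels=["active", "idle"])
data = np.random.randn(8)
features = FixedProcessingEngine().process(data, params)
label = FixedClassificationEngine().classify(features, params)
```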
-
Publication number: 20240346380
Abstract: Disclosed embodiments provide data augmentation techniques in which collected sensor data and simulated sensor data created by transforming collected sensor data are used to train a machine learning model (MLM). The MLM is then deployed on an integrated circuit chip of an embedded device. Live sensor data received by the embedded device is then either transformed and input to the MLM or input to the MLM without transformation, and the MLM then performs a prediction by, for example, recognizing a gesture made by the user of the embedded device.
Type: Application
Filed: April 12, 2024
Publication date: October 17, 2024
Inventors: Juan S. Mejia SANTAMARIA, Abbas ATAYA, Rémi Louis Clément PONÇOT
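One way to read the augmentation scheme above: collected sensor windows are transformed (for example, rotated) to create simulated samples, both sets train the model, and at inference the live window can be fed with or without the same transform. A sketch under those assumptions, using a generic scikit-learn classifier as a stand-in for the MLM; the function names are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for the MLM

def rotate_xy(window, angle_rad):
    """Simulate a different device orientation by rotating the x/y axes."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    out = window.copy()
    out[:, :2] = window[:, :2] @ rot.T
    return out

def augment(windows, labels, angles=(0.5, 1.0, -0.5)):
    """Collected data plus simulated data created by transforming it."""
    aug_w, aug_y = list(windows), list(labels)
    for w, y in zip(windows, labels):
        for a in angles:
            aug_w.append(rotate_xy(w, a))
            aug_y.append(y)
    return np.array(aug_w), np.array(aug_y)

# Train on collected + simulated windows (features = flattened window here).
collected = np.random.randn(100, 50, 3)        # 100 windows, 50 samples, 3 axes
labels = np.random.randint(0, 2, size=100)     # e.g. "swipe" vs "shake"
X, y = augment(collected, labels)
model = RandomForestClassifier(n_estimators=20).fit(X.reshape(len(X), -1), y)

# At runtime the live window may be transformed, or used as-is.
live = np.random.randn(50, 3)
pred = model.predict(live.reshape(1, -1))
```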
-
Publication number: 20240346381
Abstract: Disclosed embodiments provide data augmentation techniques in which collected sensor data (for example, data from a motion sensor or a microphone) related to a gesture or an activity is used to simulate a unified data representation by using one or more transfer functions. The collected sensor data is for a particular condition. The unified representation is agnostic to the condition in which the gesture or activity is made. The unified representation is used to train a machine learning model (MLM). The MLM is then deployed on an integrated circuit chip of an embedded device. Live sensor data received by the embedded device is then transformed and input to the MLM, and the MLM then performs a prediction by, for example, recognizing a gesture made by the user of the embedded device.
Type: Application
Filed: April 12, 2024
Publication date: October 17, 2024
Inventors: Juan S. Mejia SANTAMARIA, Abbas ATAYA, Rémi Louis Clément PONÇOT
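A rough illustration of the unified-representation idea: each capture condition has a transfer function that maps its raw data into a single condition-agnostic form used for both training and inference. The transfer functions and condition names below are invented for illustration only:

```python
import numpy as np

# Hypothetical per-condition transfer functions that map raw sensor windows
# captured under a specific condition into one unified representation.
TRANSFER_FUNCTIONS = {
    "wrist_left":  lambda w: w * np.array([-1.0, 1.0, 1.0]),   # mirror x axis
    "wrist_right": lambda w: w,                                 # reference condition
    "pocket":      lambda w: w @ np.array([[0, 1, 0],
                                           [1, 0, 0],
                                           [0, 0, 1.0]]),       # swap x/y axes
}

def to_unified(window, condition):
    """Apply the transfer function for the capture condition so the result
    is agnostic to how/where the gesture was made."""
    return TRANSFER_FUNCTIONS[condition](window)

# Training uses the unified representation; inference transforms live data
# with the transfer function for the device's current condition.
window = np.random.randn(50, 3)                # 50 samples of a 3-axis sensor
unified = to_unified(window, "wrist_left")
```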
-
Publication number: 20240201793
Abstract: In a method for training a gesture recognition model, gesture data is collected from an inertial measurement unit (IMU) positioned on one side of a user, wherein the IMU is capable of collecting data when positioned on either side of the user. A transformation is applied to the gesture data, wherein the transformation generates transformed gesture data that is independent of either side of the user. A gesture recognition model is trained using the transformed gesture data.
Type: Application
Filed: December 15, 2023
Publication date: June 20, 2024
Applicant: TDK CORPORATION
Inventors: Rémi Louis Clément PONÇOT, Juan S. Mejia SANTAMARIA, Abbas ATAYA, Etienne De FORAS, Bruno FLAMENT
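One common way to make IMU gesture windows independent of which side of the body the device is worn is to mirror the lateral axes into a canonical orientation. The sketch below assumes that approach and an [ax, ay, az, gx, gy, gz] column layout; it is not the specific transformation claimed in the application:

```python
import numpy as np

def side_invariant(imu_window):
    """Map a left- or right-side IMU window to a side-independent form.
    Assumes columns [ax, ay, az, gx, gy, gz]; mirrors the lateral axis and
    the corresponding gyro axes. Hypothetical transformation."""
    w = imu_window.copy()
    # Force the lateral acceleration to a canonical sign so a gesture made
    # with either hand produces the same transformed trajectory.
    if np.mean(w[:, 1]) < 0:
        w[:, 1] *= -1.0          # mirror lateral acceleration
        w[:, 3] *= -1.0          # mirror roll rate
        w[:, 5] *= -1.0          # mirror yaw rate
    return w

# The gesture recognition model is then trained on transformed windows only.
window = np.random.randn(100, 6)
train_sample = side_invariant(window)
```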
-
Publication number: 20230267919
Abstract: In a method for human speech processing in an automatic speech recognition (ASR) system, human speech is received at a speech interface of the ASR system, wherein the ASR system comprises embedded componentry for onboard processing of the human speech and cloud-based componentry for remote processing of the human speech. A keyword is identified at the speech interface within a first portion of the human speech. Responsive to identifying the keyword, a second portion of the human speech is analyzed to identify at least one command, the second portion following the first portion. The at least one command is identified within the second portion of the human speech. The at least one command is selectively processed within at least one of the embedded componentry and the cloud-based componentry.
Type: Application
Filed: February 23, 2023
Publication date: August 24, 2023
Applicant: TDK CORPORATION
Inventors: Rémi Louis Clément PONÇOT, Abbas ATAYA, Peter George HARTWELL
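A simplified sketch of the keyword-then-command split described above: the keyword gates further analysis, and the command is handled on-device when it belongs to a small embedded vocabulary, otherwise it is sent to the cloud. The wake phrase and command set are hypothetical:

```python
KEYWORD = "hello device"                       # hypothetical wake phrase
EMBEDDED_COMMANDS = {"lights on", "lights off", "volume up", "volume down"}

def handle_utterance(first_portion: str, second_portion: str) -> str:
    """Route the command portion to embedded or cloud-based componentry."""
    if KEYWORD not in first_portion.lower():
        return "ignored"                       # no keyword, no further analysis
    command = second_portion.strip().lower()
    if command in EMBEDDED_COMMANDS:
        return f"embedded: executing '{command}' on-device"
    return f"cloud: sending '{command}' for remote processing"

print(handle_utterance("hello device", "lights on"))           # embedded path
print(handle_utterance("hello device", "what's the weather"))  # cloud path
```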
-
Patent number: 11680823
Abstract: A mobile robotic device has a motion sensor assembly configured to provide data for deriving a navigation solution for the mobile robotic device. The mobile robotic device temperature is determined for at least two different epochs so that an accumulated heading error of the navigation solution can be estimated based on the determined temperature at the at least two different epochs. A calibration procedure is then performed for at least one sensor of the motion sensor assembly when the estimated accumulated heading error is outside a desired range.
Type: Grant
Filed: July 31, 2020
Date of Patent: June 20, 2023
Assignee: InvenSense, Inc.
Inventors: Zhongyang Li, Joe Youssef, Abbas Ataya, Sinan Karahan, Sankalp Dayal
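A sketch of the trigger logic described above: temperature is sampled at two epochs, the accumulated heading error is estimated from the temperature change and the elapsed time, and a calibration is started when the estimate leaves the tolerated range. The sensitivity coefficient and threshold below are made-up values, not figures from the patent:

```python
from dataclasses import dataclass

# Hypothetical sensitivity: heading error per degree C of temperature change
# per second of elapsed time (degrees).
BIAS_PER_DEG_C = 0.002
MAX_HEADING_ERROR_DEG = 1.0

@dataclass
class Epoch:
    time_s: float
    temp_c: float

def estimated_heading_error(e0: Epoch, e1: Epoch) -> float:
    """Accumulated heading error estimated from temperature change over time."""
    dt = e1.time_s - e0.time_s
    d_temp = abs(e1.temp_c - e0.temp_c)
    return BIAS_PER_DEG_C * d_temp * dt

def should_calibrate(e0: Epoch, e1: Epoch) -> bool:
    """Return True when the motion sensor should be recalibrated."""
    return estimated_heading_error(e0, e1) > MAX_HEADING_ERROR_DEG

# Example: 3 C of warm-up over 10 minutes exceeds the tolerated heading error.
print(should_calibrate(Epoch(0.0, 25.0), Epoch(600.0, 28.0)))   # True
```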
-
Publication number: 20230066206
Abstract: Disclosed herein is a method for designing a processing chain of a sensor system, the method comprising receiving a desired application comprising at least one activity for a sensor system to monitor, the sensor system comprising at least one sensor capable of generating sensor data based on sensing the at least one activity, accessing a database comprising a plurality of raw sensor data and a plurality of annotations corresponding to the plurality of raw sensor data, the plurality of annotations identifying activities corresponding to the plurality of raw sensor data, and automatically generating a processing chain of the sensor system for executing the desired application based on the desired application and the plurality of raw sensor data, the processing chain for processing the sensor data and for extracting at least one feature from the sensor data for use in sensing the at least one activity.
Type: Application
Filed: August 26, 2022
Publication date: March 2, 2023
Applicant: TDK CORPORATION
Inventors: Abbas ATAYA, Mahdi HEYDARI
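One possible reading of the automatic chain generation: candidate feature extractors are scored against the annotated raw-data database for the requested activities, and the most discriminative ones form the processing chain. Everything in the sketch (the candidate set, the scoring rule) is an assumption for illustration, not the method claimed in the application:

```python
import numpy as np

# Candidate feature extractors the generator can place in a processing chain.
CANDIDATE_FEATURES = {
    "mean":   lambda w: np.mean(w),
    "std":    lambda w: np.std(w),
    "energy": lambda w: np.sum(w ** 2),
    "range":  lambda w: np.ptp(w),
}

def generate_chain(raw_windows, annotations, target_activities, top_k=2):
    """Pick the feature extractors that best separate the target activities
    in the annotated database, and return them as the processing chain."""
    scores = {}
    for name, fn in CANDIDATE_FEATURES.items():
        values = np.array([fn(w) for w in raw_windows])
        # Fisher-style score: between-class spread over within-class spread.
        groups = [values[annotations == a] for a in target_activities]
        between = np.var([g.mean() for g in groups])
        within = np.mean([g.var() for g in groups]) + 1e-9
        scores[name] = between / within
    chain = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [CANDIDATE_FEATURES[name] for name in chain], chain

# Usage against a toy annotated database of raw sensor windows.
raw = [np.random.randn(50) + (2.0 if i % 2 else 0.0) for i in range(40)]
ann = np.array(["walk" if i % 2 else "sit" for i in range(40)])
chain_fns, chain_names = generate_chain(raw, ann, ["walk", "sit"])
print("generated processing chain:", chain_names)
```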
-
Publication number: 20230063290
Abstract: Described herein is a method for modifying a trained classification model, the method comprising receiving feature data extracted from sensor data, classifying the feature data according to the trained classification model to identify a label corresponding to the feature data, wherein the trained classification model comprises a decision tree comprising a plurality of decision nodes for feature identification for a plurality of features, tracking identified features of the plurality of features over a predetermined amount of time, and responsive to determining that a feature of the trained classification model does not satisfy a frequency of usage threshold over the predetermined amount of time, deactivating a decision node of the decision tree of the trained classification model corresponding to the feature such that a subsequent instance of the classifying does not consider the deactivated decision node for subsequently received feature data.
Type: Application
Filed: August 26, 2022
Publication date: March 2, 2023
Applicant: TDK CORPORATION
Inventor: Abbas ATAYA
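A sketch of the pruning idea: count how often each feature's decision nodes are actually visited during classification and deactivate nodes whose features fall below a usage-frequency threshold. The node structure and the threshold value are hypothetical:

```python
from collections import Counter

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None,
                 label=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.label = left, right, label
        self.active = True                     # deactivated nodes are skipped

def classify(node, features, usage):
    """Walk the tree, counting which features are used along the way."""
    while node.label is None:
        if not node.active:
            node = node.left                   # fall through a deactivated node
            continue
        usage[node.feature] += 1
        node = node.left if features[node.feature] <= node.threshold else node.right
    return node.label

def prune_rare_features(root, usage, total_runs, min_usage_ratio=0.05):
    """Deactivate decision nodes whose feature fell below the usage threshold."""
    stack = [root]
    while stack:
        n = stack.pop()
        if n is None or n.label is not None:
            continue
        if usage[n.feature] / max(total_runs, 1) < min_usage_ratio:
            n.active = False
        stack.extend([n.left, n.right])

# Build a tiny tree, run it for a while, then prune rarely used feature nodes.
tree = Node(feature=0, threshold=0.5,
            left=Node(label="idle"),
            right=Node(feature=1, threshold=2.0,
                       left=Node(label="walk"), right=Node(label="run")))
usage, runs = Counter(), 1000
for _ in range(runs):
    classify(tree, [0.1, 1.0], usage)          # feature 1 is never reached
prune_rare_features(tree, usage, runs)
```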
-
Publication number: 20230068190
Abstract: Described herein is a non-transitory computer readable storage medium having computer-readable program code stored thereon for causing a computer system to perform a method for processing data, the method comprising: receiving data, processing the data at a fixed code processing engine, wherein operation of the fixed code processing engine is controlled according to stored parameters, and classifying processed data at a fixed code classification engine, wherein operation of the fixed code classification engine is controlled according to the stored parameters.
Type: Application
Filed: August 26, 2022
Publication date: March 2, 2023
Applicant: TDK CORPORATION
Inventor: Abbas ATAYA
-
Patent number: 10936841
Abstract: In a method for darkfield tracking at a sensor, it is determined whether an object is interacting with the sensor. Provided an object is not interacting with the sensor, a determination that a darkfield candidate image can be captured at the sensor is made. It is determined whether to capture a darkfield candidate image at the sensor based at least in part on the determination that a darkfield candidate image can be captured at the sensor. Responsive to making a determination to capture the darkfield candidate image, the darkfield candidate image is captured at the sensor, wherein the darkfield candidate image is an image absent an object interacting with the sensor. A darkfield estimate is updated with the darkfield candidate image.
Type: Grant
Filed: November 30, 2018
Date of Patent: March 2, 2021
Assignee: InvenSense, Inc.
Inventors: Abbas Ataya, Bruno Flament
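A sketch of the tracking loop: when no object is interacting with the sensor and enough frames have passed since the last capture, grab a darkfield candidate frame and blend it into a running darkfield estimate. The exponential-average update and the capture policy below are assumptions, not the patented method:

```python
import numpy as np

class DarkfieldTracker:
    """Maintains a running darkfield estimate from object-free frames."""
    def __init__(self, shape, alpha=0.1, min_interval=100):
        self.estimate = np.zeros(shape)
        self.alpha = alpha                     # blend weight for new candidates
        self.min_interval = min_interval       # frames between captures
        self.frames_since_capture = min_interval

    def step(self, sensor_frame, object_present: bool):
        self.frames_since_capture += 1
        if object_present:
            return                             # never capture darkfield with an object
        # Decide whether a candidate can/should be captured right now.
        if self.frames_since_capture < self.min_interval:
            return
        candidate = sensor_frame               # image absent any interacting object
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * candidate
        self.frames_since_capture = 0

tracker = DarkfieldTracker(shape=(64, 64))
tracker.step(np.random.rand(64, 64) * 0.01, object_present=False)
```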
-
Publication number: 20210033421
Abstract: A mobile robotic device has a motion sensor assembly configured to provide data for deriving a navigation solution for the mobile robotic device. The mobile robotic device temperature is determined for at least two different epochs so that an accumulated heading error of the navigation solution can be estimated based on the determined temperature at the at least two different epochs. A calibration procedure is then performed for at least one sensor of the motion sensor assembly when the estimated accumulated heading error is outside a desired range.
Type: Application
Filed: July 31, 2020
Publication date: February 4, 2021
Inventors: Zhongyang Li, Joe Youssef, Abbas Ataya, Sinan Karahan, Sankalp Dayal
-
Publication number: 20190188442
Abstract: In a method for correcting a fingerprint image, it is determined whether an object is interacting with the fingerprint sensor. Provided an object is not interacting with the fingerprint sensor, it is determined whether to capture a darkfield candidate image at the fingerprint sensor, wherein the darkfield candidate image is an image absent an object interacting with the fingerprint sensor. Responsive to making a determination to capture the darkfield candidate image, the darkfield candidate image is captured at the fingerprint sensor. Provided an object is interacting with the fingerprint sensor, it is determined whether to model a darkfield candidate image at the fingerprint sensor. Responsive to making a determination to model the darkfield candidate image, the darkfield candidate image is modeled at the fingerprint sensor. A darkfield estimate is updated with the darkfield candidate image. A fingerprint image is captured at the fingerprint sensor.
Type: Application
Filed: February 7, 2019
Publication date: June 20, 2019
Applicant: InvenSense, Inc.
Inventors: Bruno FLAMENT, Daniela HALL, Etienne DeForas, Harihar NARASIMHA-IYER, Romain FAYOLLE, Jonathan BAUDOT, Abbas ATAYA, Sina AKHBARI
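A sketch of the capture-or-model branch described above: with no object present a darkfield candidate is captured directly; with an object present a candidate is modeled instead (here from temperature, purely as an assumed example), and the updated estimate is subtracted from the raw frame to correct the fingerprint image:

```python
import numpy as np

def model_darkfield(previous_estimate, temperature_c, ref_temp_c=25.0,
                    gain_per_deg=0.002):
    """Hypothetical darkfield model used when an object blocks a real capture:
    scale the previous estimate by a temperature-dependent gain."""
    return previous_estimate * (1.0 + gain_per_deg * (temperature_c - ref_temp_c))

def update_and_correct(estimate, raw_frame, object_present, temperature_c,
                       alpha=0.2):
    """Update the darkfield estimate, then subtract it from the raw frame."""
    if object_present:
        candidate = model_darkfield(estimate, temperature_c)   # modeled candidate
    else:
        candidate = raw_frame                                  # captured candidate
    estimate = (1 - alpha) * estimate + alpha * candidate
    corrected = raw_frame - estimate           # darkfield-corrected image
    return estimate, corrected

estimate = np.zeros((64, 64))
frame = np.random.rand(64, 64)                 # frame with a finger present
estimate, fingerprint = update_and_correct(estimate, frame,
                                           object_present=True,
                                           temperature_c=30.0)
```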
-
Publication number: 20190171858
Abstract: In a method for darkfield tracking at a sensor, it is determined whether an object is interacting with the sensor. Provided an object is not interacting with the sensor, a determination that a darkfield candidate image can be captured at the sensor is made. It is determined whether to capture a darkfield candidate image at the sensor based at least in part on the determination that a darkfield candidate image can be captured at the sensor. Responsive to making a determination to capture the darkfield candidate image, the darkfield candidate image is captured at the sensor, wherein the darkfield candidate image is an image absent an object interacting with the sensor. A darkfield estimate is updated with the darkfield candidate image.
Type: Application
Filed: November 30, 2018
Publication date: June 6, 2019
Applicant: InvenSense, Inc.
Inventors: Abbas ATAYA, Bruno FLAMENT