Patents by Inventor Aki Sakari Harma

Aki Sakari Harma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11503424
    Abstract: An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703). (An illustrative sketch of this mode-selection idea follows this listing.)
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: November 15, 2022
    Assignee: Koninklijke Philips N.V.
    Inventors: Werner Paulus Josephus De Bruijn, Aki Sakari Harma, Arnoldus Werner Johannes Oomen
  • Publication number: 20220309115
    Abstract: A two-phase recommendation system for a recommendation device, employing both an external recommendation process and a recommendation process internal to the device. In particular, a processing unit uses a first data file, which is modifiable by an external source, and a second data file to recommend one or more content items to a user. Both data files are stored in a memory unit of the recommendation device. (An illustrative sketch follows this listing.)
    Type: Application
    Filed: June 18, 2020
    Publication date: September 29, 2022
    Inventors: Aki Sakari HÄRMÄ, Illapha Gustav Lars CUBA GYLLENSTEN, Arlette VAN WISSEN
  • Publication number: 20220183620
    Abstract: According to an embodiment of an aspect, there is provided a computer-implemented method for determining a sleep state of a user. The method comprises receiving (S11) a physiological signal from a physiological signal detector used by the user, determining (S12), based on the received physiological signal, the sleep state of the user, and calculating (S13) a reliability value associated with the determination. The reliability value is calculated based on a comparison of the received physiological signal with historic physiological signals of the same sleep state as the determined sleep state. There is further provided a device (20) and a computer-readable medium (30). In accordance with the present disclosure, the sleep state of a user may be determined with greater accuracy than with past methods. (An illustrative sketch follows this listing.)
    Type: Application
    Filed: December 13, 2021
    Publication date: June 16, 2022
    Inventors: Dimitrios MAVROEIDIS, Ulf GROSSEKATHOEFER, Aki Sakari HÄRMÄ
  • Patent number: 11355226
    Abstract: In an embodiment, an apparatus (16) is presented that classifies device-sensed movement along a path based on a score that characterizes a geometrical property of the movement.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: June 7, 2022
    Assignee: Koninklijke Philips N.V.
    Inventor: Aki Sakari Härmä
  • Publication number: 20220044148
    Abstract: A method and system for modifying a prediction model. In particular, an inaccuracy of the prediction model is categorized into one of at least three categories. Different modifications are made to the prediction model depending on the category of the inaccuracy. In particular examples, an inaccuracy category defines what training data is used to modify the prediction model.
    Type: Application
    Filed: October 10, 2019
    Publication date: February 10, 2022
    Inventors: Aki Sakari Härmä, Andrei Poliakov, Irina Fedulova
  • Patent number: 11197120
    Abstract: An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703).
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: December 7, 2021
    Assignee: Koninklijke Philips N.V.
    Inventors: Werner Paulus Josephus De Bruijn, Aki Sakari Harma, Arnoldus Werner Johannes Oomen
  • Publication number: 20210352176
    Abstract: A system for managing a call includes a virtual caregiver that assists callers of a monitoring service in response to data received from a monitoring device. The virtual caregiver includes a conversation analyzer that initiates a call to a user of the monitoring device and conducts a conversation with the user during the call. The conversation is conducted by generating audible comments in a synthesized voice to interact with the user. The audible comments are generated to elicit voice responses from the user containing information corresponding to the sensor data. The conversation analyzer also analyzes audible features of the voice responses using one or more models to interpret a condition of the user, generates a decision based on the interpreted condition of the user, and performs at least one action based on the decision.
    Type: Application
    Filed: April 1, 2021
    Publication date: November 11, 2021
    Inventors: Wilhelmus Andreas Marinus Arnoldus Maria VAN DEN DUNGEN, Warner Rudolph Theophile TEN KATE, Aki Sakari HÄRMÄ
  • Publication number: 20210335474
    Abstract: In an embodiment, an apparatus (16) is presented that classifies device-sensed movement along a path based on a score that characterizes a geometrical property of the movement.
    Type: Application
    Filed: August 4, 2017
    Publication date: October 28, 2021
    Inventor: Aki Sakari Härmä
  • Patent number: 11116424
    Abstract: The present invention relates to a device, system and method for reliable and fast fall detection. The device comprises a sensor input (11) for obtaining sensor data related to movement of a subject acquired by a body-worn sensor (20, 22, 61) worn by the subject, and a video input (12) for obtaining video data of the subject and/or the subject's environment. An analysis unit (13) analyzes the obtained sensor data to detect one or more signal features indicative of a potential fall of the subject and, if one or more such signal features have been detected, analyzes the detected signal features and/or related sensor data in combination with related video data to identify similarities between changes of the related sensor data and/or the signal features over time and changes of the video data over time. An output (14, 64, 65, 66) issues a fall detection indication if the level and/or amount of detected similarities exceeds a corresponding threshold. (An illustrative sketch follows this listing.)
    Type: Grant
    Filed: August 8, 2017
    Date of Patent: September 14, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Javier Espina Perez, Vincent Alexander Rudolf Aarts, Aki Sakari Harma
  • Publication number: 20210146082
    Abstract: In an embodiment, a method is described. The method comprises obtaining an indication of a speech pattern of a subject and using the indication to determine a predicted time of inspiration by the subject. A machine learning model is used for predicting the relationship between the speech pattern and a breathing pattern of the subject. The machine learning model can then be used to determine the predicted time of inspiration by the subject. The method further comprises controlling delivery of gas to the subject based on the predicted time of inspiration by the subject.
    Type: Application
    Filed: October 15, 2020
    Publication date: May 20, 2021
    Inventors: Aki Sakari Härmä, Francesco Vicario, Venkata Srikanth Nallanthighal
  • Publication number: 20210144507
    Abstract: An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703).
    Type: Application
    Filed: January 20, 2021
    Publication date: May 13, 2021
    Inventors: Werner Paulus Josephus De Bruijn, Aki Sakari Harma, Arnoldus Werner Johannes Oomen
  • Publication number: 20210136512
    Abstract: An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703).
    Type: Application
    Filed: January 14, 2021
    Publication date: May 6, 2021
    Inventors: Werner Paulus Josephus De Bruijn, Aki Sakari Harma, Arnoldus Werner Johannes Oomen
  • Publication number: 20200186956
    Abstract: An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703).
    Type: Application
    Filed: February 12, 2020
    Publication date: June 11, 2020
    Inventors: Werner Paulus Josephus De Bruijn, Aki Sakari Harma, Arnoldus Werner Johannes Oomen
  • Patent number: 10582330
    Abstract: An audio processing apparatus includes a receiver configured to receive audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers. A renderer is configured to generate audio transducer signals for the set of audio transducers from the audio data, and to render audio components in accordance with rendering modes. A render controller is configured to select a rendering mode for the renderer based on the audio transducer position data. The renderer is configured to employ different rendering modes for different subsets of the set of audio transducers, and the render controller is configured to independently select rendering modes for each of the different subsets of the set of audio transducers, including selecting the rendering mode for a first audio transducer in response to a position of the first audio transducer relative to a predetermined position for the first audio transducer.
    Type: Grant
    Filed: May 16, 2014
    Date of Patent: March 3, 2020
    Assignee: Koninklijke Philips N.V.
    Inventors: Werner Paulus Josephus De Bruijn, Aki Sakari Härmä, Arnoldus Werner Johannes Oomen
  • Publication number: 20190297033
    Abstract: Techniques described herein relate to applying reinforcement learning to improve engagement with counseling chatbots. In various embodiments, based on a first state of a subject and a decision model (109), a given natural language response may be selected (404) from a plurality of candidate natural language responses and provided to the user by the counseling chatbot. A free-form natural language input may be received (408) from the subject at one or more input components of one or more computing devices. A second state of the subject may be determined (410) based on speech recognition output generated from the free-form natural language input. The second state may be a positive, negative, or neutral valence towards a target behavior change. Based on the second state, an instant reward may be calculated (412) and used to train (414) the decision model. (An illustrative sketch follows this listing.)
    Type: Application
    Filed: March 21, 2019
    Publication date: September 26, 2019
    Inventors: Aki Sakari HARMA, Rim HELAOUI
  • Publication number: 20190167157
    Abstract: The present invention relates to a device, system and method for reliable and fast fall detection. The device comprises a sensor input (11) for obtaining sensor data related to movement of a subject acquired by a body-worn sensor (20, 22, 61) worn by the subject, and a video input (12) for obtaining video data of the subject and/or the subject's environment. An analysis unit (13) analyzes the obtained sensor data to detect one or more signal features indicative of a potential fall of the subject and, if one or more such signal features have been detected, analyzes the detected signal features and/or related sensor data in combination with related video data to identify similarities between changes of the related sensor data and/or the signal features over time and changes of the video data over time. An output (14, 64, 65, 66) issues a fall detection indication if the level and/or amount of detected similarities exceeds a corresponding threshold.
    Type: Application
    Filed: August 8, 2017
    Publication date: June 6, 2019
    Inventors: Javier ESPINA PEREZ, Vincent Alexander Rudolf AARTS, Aki Sakari HARMA
  • Publication number: 20190141418
    Abstract: The present disclosure pertains to a method and system configured for generating one or more statements. In some embodiments, the system comprises at least one processor; memory operatively connected with the at least one processor; and a communication component operatively connected to the at least one processor and configured to communicate with a user device via a network. The at least one processor is configured by machine-readable instructions to receive one or more measurements pertaining to a parameter of a user from the user device; generate one or more statements based on the one or more measurements and one or more templates; and transmit, via the network, the one or more statements for presentation on the user device.
    Type: Application
    Filed: April 7, 2017
    Publication date: May 9, 2019
    Inventors: Aki Sakari HARMA, Cliff Johannes Robert Hubertina LASCHET, Marinus Bastiaan VAN LEEUWEN, Murtaza BULUT
  • Publication number: 20190121803
    Abstract: In an embodiment, a computer-implemented method for managing plural micromodules over time, each micromodule comprising plural nodes corresponding to electronic interventions that follow logically from one another to form a narrative or story, the method comprising: receiving inputs from one or more input sources, the input sources monitoring user behavior and associated contexts; monitoring each of the plural micromodules, each of the plural micromodules beginning with an opportunity and ending with an assessment, each of the plural micromodules comprising a score; updating the scores based on the received inputs; selecting which of at least one of the plural micromodules to provide to a user at any given moment in time based on a comparison of the scores and historical data; and providing the selected micromodule to the user.
    Type: Application
    Filed: September 26, 2018
    Publication date: April 25, 2019
    Inventors: Silvia Bertagna De Marchi, Aki Sakari Harma
  • Publication number: 20190074076
    Abstract: Methods and systems for managing a user's health habits. Various embodiments of the invention allow the user to register habits to a habit registry. Relevant sensor and other data are collected contemporaneously with the registration of the behavior and stored in association with the registered habit. In normal use, a configured processor attempts to detect occurrences of a previously registered habit by comparing subsequently collected data to the previously stored data. When collected and stored data match, a registered health habit is detected, the occurrence may be added to a health habit log, and feedback may be provided to the user. The feedback assists the user in achieving his or her goals in health habit management. (An illustrative sketch follows this listing.)
    Type: Application
    Filed: February 17, 2016
    Publication date: March 7, 2019
    Inventors: Aki Sakari Harma, Jan Martijn Krans, Dietwig Jos Clement Lowet, Saskia van Dantzig
  • Publication number: 20180338709
    Abstract: In an embodiment, a wearable device (12) is disclosed that automates the detection and determination of a type of activity, measures physiological and behavioral parameters, and computes information corresponding to the measured physiological parameters based on the determined type of activity. An embodiment of the wearable device provides these features by wirelessly receiving a signal with information coded therein that enables the wearable device to automatically detect and identify the type of activity in which a person (82) is engaged. The determination of the type of activity enables a more accurate, activity-specific computation of information from the measured physiological and behavioral parameters.
    Type: Application
    Filed: December 1, 2016
    Publication date: November 29, 2018
    Inventors: Jan Martijn Krans, Aki Sakari Härmä, Saskia van Dantzig
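
The audio rendering patents above (11503424, 11197120, 10582330 and their related applications) describe a render controller that picks a rendering mode for each subset of loudspeakers from the reported transducer positions. The following is a minimal, hypothetical Python sketch of that idea; the mode names, the distance threshold, and the select_mode helper are illustrative assumptions, not the patented method.

```python
import math

# Hypothetical rendering modes; the patents describe per-subset mode
# selection, not these particular modes or thresholds.
MODES = ("vector_base_amplitude_panning", "least_squares", "beamforming")

def select_mode(actual_positions, nominal_positions, tolerance=0.25):
    """Pick a rendering mode for one transducer subset based on how far the
    measured positions deviate from the nominal (expected) layout."""
    deviation = max(
        math.dist(a, n) for a, n in zip(actual_positions, nominal_positions)
    )
    if deviation <= tolerance:
        return "vector_base_amplitude_panning"   # layout close to nominal
    elif deviation <= 4 * tolerance:
        return "least_squares"                   # moderately displaced
    return "beamforming"                         # heavily displaced subset

# Each subset of the speaker set can independently receive its own mode.
subsets = {
    "front": ([(0.0, 2.0), (1.1, 2.0)], [(0.0, 2.0), (1.0, 2.0)]),
    "surround": ([(2.5, -1.0), (-2.6, -0.8)], [(2.0, -2.0), (-2.0, -2.0)]),
}
modes = {name: select_mode(actual, nominal)
         for name, (actual, nominal) in subsets.items()}
print(modes)
```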
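
Publication 20220309115 combines an externally modifiable data file with a device-local data file to recommend content items. A minimal sketch, assuming both files simply hold per-item scores as JSON and that the two scores are averaged; the file layout and weighting are assumptions for illustration only.

```python
import json
from pathlib import Path

def recommend(first_file: Path, second_file: Path, top_n: int = 3):
    """Combine scores from an externally updatable file (first) and a
    device-local file (second) and return the highest-scoring items."""
    external = json.loads(first_file.read_text())   # modifiable by an external source
    internal = json.loads(second_file.read_text())  # maintained on the device
    combined = {
        item: 0.5 * external.get(item, 0.0) + 0.5 * internal.get(item, 0.0)
        for item in set(external) | set(internal)
    }
    return sorted(combined, key=combined.get, reverse=True)[:top_n]
```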
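
Publication 20220183620 attaches a reliability value to each sleep-state decision by comparing the current physiological signal with historic signals recorded for the same state. A sketch under the assumption that the comparison is a simple correlation against stored example signals; the similarity measure and the clipping to [0, 1] are illustrative choices.

```python
import numpy as np

def reliability(signal, historic_signals):
    """Reliability of a sleep-state decision: mean similarity between the
    current physiological signal and historic signals of the same state."""
    signal = np.asarray(signal, dtype=float)
    similarities = []
    for hist in historic_signals:
        hist = np.asarray(hist, dtype=float)[: len(signal)]
        current = signal[: len(hist)]
        # Pearson correlation as one possible similarity measure
        similarities.append(np.corrcoef(current, hist)[0, 1])
    return float(np.clip(np.mean(similarities), 0.0, 1.0))
```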
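
Patent 11116424 (and application 20190167157) issues a fall indication only when changes in the body-worn sensor data and changes in the video data agree. A sketch assuming both streams have already been reduced to one scalar feature per frame so their frame-to-frame changes can be correlated; the feature extraction and the 0.7 threshold are assumptions.

```python
import numpy as np

def fall_detected(sensor_feature, video_feature, threshold=0.7):
    """Issue a fall indication when changes in the sensor signal and changes
    in the video signal over the same window are sufficiently similar."""
    sensor_changes = np.abs(np.diff(np.asarray(sensor_feature, dtype=float)))
    video_changes = np.abs(np.diff(np.asarray(video_feature, dtype=float)))
    n = min(len(sensor_changes), len(video_changes))
    if n < 2:
        return False
    similarity = np.corrcoef(sensor_changes[:n], video_changes[:n])[0, 1]
    return bool(similarity > threshold)
```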
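
Application 20190297033 trains the chatbot's decision model with an instant reward derived from the valence (positive, neutral, or negative) of the subject's response toward the target behavior change. A minimal tabular sketch of that loop; the reward values, the epsilon-greedy selection, and the bandit-style update are illustrative assumptions rather than the patented decision model.

```python
import random
from collections import defaultdict

REWARD = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

class DecisionModel:
    """Toy bandit-style model: one estimated value per (state, response)."""

    def __init__(self, responses, epsilon=0.1, lr=0.2):
        self.q = defaultdict(float)
        self.responses, self.epsilon, self.lr = responses, epsilon, lr

    def select(self, state):
        # Occasionally explore; otherwise pick the highest-valued response.
        if random.random() < self.epsilon:
            return random.choice(self.responses)
        return max(self.responses, key=lambda r: self.q[(state, r)])

    def update(self, state, response, valence):
        # Instant reward derived from the valence of the subject's reply.
        reward = REWARD[valence]
        key = (state, response)
        self.q[key] += self.lr * (reward - self.q[key])
```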
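
Application 20190074076 detects a previously registered habit by comparing newly collected sensor data with the data stored when the habit was registered. A sketch assuming each habit is stored as a reference feature vector and matched by Euclidean distance; the threshold and the log format are assumptions.

```python
import math
from datetime import datetime

class HabitRegistry:
    def __init__(self, match_threshold=1.0):
        self.habits = {}            # habit name -> reference feature vector
        self.log = []               # detected occurrences
        self.match_threshold = match_threshold

    def register(self, name, features):
        """Store sensor-derived features captured when the user registers a habit."""
        self.habits[name] = list(features)

    def observe(self, features):
        """Compare new sensor data with stored habits; log and return any match."""
        for name, reference in self.habits.items():
            if math.dist(reference, features) <= self.match_threshold:
                self.log.append((datetime.now().isoformat(), name))
                return name         # feedback could be given to the user here
        return None
```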