Patents by Inventor Albert Garcia
Albert Garcia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20250040992
Abstract: The present invention relates to operating a camera-based tracking system. In order to provide accurate tracking at an earlier stage, a control device (10) for operation of a camera-based tracking system is provided. The control device comprises a controller (12) and a control signal output (14). The controller is configured to generate at least one operation signal for the tracking system that comprises, at least during a starting phase of the tracking system, at least one calibration compensation instruction. The at least one calibration compensation instruction comprises instructions for at least one of the group of: i) actively adjusting a temperature of the camera-based tracking system, and ii) adapting a calibration-related camera output of the camera-based tracking system, which camera output is used for a tracking calculation. The control signal output is configured to provide the at least one operation signal to the tracking system.
Type: Application
Filed: November 28, 2022
Publication date: February 6, 2025
Inventors: JARICH WILLEM SPLIETHOFF, BERNARDUS HENDRIKUS WILHELMUS HENDRIKS, ALBERT GARCIA TORMO, MANFRED MÜLLER
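The abstract describes compensating a calibration-related camera output during the start-up phase without specifying how. The minimal Python sketch below illustrates one way option ii) could look, assuming an exponential thermal-drift model; all function names, constants, and the drift model are illustrative assumptions, not taken from the patent.

```python
import math

def compensate_output(raw_position, elapsed_s, warmup_s=600.0,
                      drift_gain=0.15, tau_s=180.0):
    """Correct a camera-derived position reading for thermal drift during start-up.

    Assumes (purely for illustration) that the calibration error decays
    exponentially as the camera warms up; after `warmup_s` seconds no
    correction is applied.
    """
    if elapsed_s >= warmup_s:
        return raw_position  # steady state: the nominal calibration is assumed valid
    # Estimated residual drift, largest right after power-on.
    residual = drift_gain * math.exp(-elapsed_s / tau_s)
    return raw_position - residual

# Example: the same raw reading taken 10 s and 10 min after power-on.
print(compensate_output(12.50, elapsed_s=10))
print(compensate_output(12.50, elapsed_s=600))
```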
Publication number: 20240406096
Abstract: Multiple different transmission media may be available for transmitting a requested content item. A transmission medium may be selected based on various factors, such as predicted transmission qualities for the different transmission media and/or transmission requirements associated with the requested content item. The requested content item may be transmitted via a selected transmission medium.
Type: Application
Filed: May 21, 2024
Publication date: December 5, 2024
Inventors: Ross Gilson, David Urban, Albert Garcia
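As a rough illustration of the selection logic described above, the sketch below filters media by the content item's requirements and picks the highest predicted throughput. The dictionary layout, field names, and tie-breaking rule are assumptions for illustration only.

```python
def select_medium(predicted_quality, requirements):
    """Pick a transmission medium whose predicted qualities satisfy the
    content item's requirements; prefer the highest predicted throughput.

    `predicted_quality` maps medium name -> dict with 'throughput_mbps' and
    'latency_ms'; `requirements` carries the minimum throughput and maximum
    latency for the requested content item. All names are illustrative.
    """
    candidates = [
        (q["throughput_mbps"], name)
        for name, q in predicted_quality.items()
        if q["throughput_mbps"] >= requirements["min_throughput_mbps"]
        and q["latency_ms"] <= requirements["max_latency_ms"]
    ]
    if not candidates:
        return None  # no medium meets the requirements
    return max(candidates)[1]

media = {
    "docsis": {"throughput_mbps": 40.0, "latency_ms": 20.0},
    "lte":    {"throughput_mbps": 15.0, "latency_ms": 45.0},
    "wifi":   {"throughput_mbps": 60.0, "latency_ms": 8.0},
}
print(select_medium(media, {"min_throughput_mbps": 25.0, "max_latency_ms": 30.0}))
```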
Patent number: 12138089
Abstract: The present invention relates to spectral correction. A spectral correction apparatus is described that is configured to identify a high-voltage fluctuation in the X-ray tube and to parameterize the fluctuation in order to correct the effective X-ray spectrum per individual frame.
Type: Grant
Filed: December 9, 2020
Date of Patent: November 12, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Gereon Vogtemeir, Roger Steadman Booker, Albert Garcia I Tormo, Klaus Jürgen Engel
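The abstract does not say how the fluctuation is parameterized. The sketch below shows one simple reading: treat the measured-to-nominal tube-voltage ratio as the parameter and stretch the nominal spectrum accordingly for each frame. The linear rescaling and all names are illustrative assumptions, not the patented method.

```python
import numpy as np

def corrected_spectrum(nominal_spectrum, nominal_kvp, measured_kvp):
    """Rescale an X-ray spectrum for a per-frame deviation of the tube voltage.

    `nominal_spectrum` holds photon counts per energy bin defined up to
    `nominal_kvp`; the fluctuation is parameterised simply as the ratio of
    measured to nominal voltage, and the spectrum is stretched accordingly.
    """
    energies = np.linspace(0.0, nominal_kvp, len(nominal_spectrum))
    scale = measured_kvp / nominal_kvp
    # Energies shift with the voltage ratio; re-sample back onto the original grid.
    return np.interp(energies, energies * scale, nominal_spectrum, left=0.0, right=0.0)

# Example: frame acquired while the tube voltage sagged from 120 kVp to 118 kVp.
nominal = np.array([0.0, 5.0, 9.0, 7.0, 3.0, 0.0])
print(corrected_spectrum(nominal, nominal_kvp=120.0, measured_kvp=118.0).round(2))
```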
PROCESS FOR ENERGY-EFFICIENT DRYING OF GERMINATED SEEDS AND DEVICE FOR IMPLEMENTATION OF THE PROCESS
Publication number: 20240353177
Abstract: A process and device for energy-efficient drying of green malt in a drying kiln using a rotational speed-modulated fan for directing an air mass flow through a bed of drying material for the green malt. The drying is implemented for a specified maximum kiln time. To do this, a grain moisture content and a grain mass of a grain are detected, which constitute a basis for the bed of drying material of the green malt after steeping and germinating. Furthermore, an ambient air humidity of an ambient air used for the air mass flow and a plurality of weather forecast data are detected, at least with respect to the ambient air humidity, and, during drying, a moisture content of the drying material of the green malt is monitored up to a specified limit. A rotational speed modulation of the fan is controlled such that the specified maximum kiln time is reached.
Type: Application
Filed: August 11, 2022
Publication date: October 24, 2024
Applicant: Bühler GMBH
Inventors: Clêment LEFEBVRE, Michael RITTENAUER, Albert Garcia TORRENTÓ
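The sketch below illustrates, in broad strokes, a fan-speed setpoint rule that tries to meet the specified maximum kiln time from the monitored moisture, the moisture limit, and the current and forecast humidity. The linear drying-rate model and every constant are illustrative assumptions; the abstract does not give the actual control law.

```python
def fan_speed_setpoint(moisture_now, moisture_limit, hours_left,
                       ambient_rh, forecast_rh, max_rpm=1500.0, min_rpm=300.0):
    """Choose a fan speed so the green-malt bed reaches the target moisture
    within the remaining kiln time.

    Assumes moisture removed per hour scales linearly with fan speed and with
    how dry the air is; prefers the drier of the current and forecast humidity
    when estimating drying capacity.
    """
    if moisture_now <= moisture_limit or hours_left <= 0:
        return min_rpm
    required_rate = (moisture_now - moisture_limit) / hours_left  # %-points per hour
    effective_dryness = 1.0 - min(ambient_rh, forecast_rh)
    rate_per_rpm = 0.004 * effective_dryness  # assumed drying rate per rpm
    rpm = required_rate / max(rate_per_rpm, 1e-6)
    return max(min_rpm, min(max_rpm, rpm))

# Example: 12 %-points of moisture to remove in 18 h, with humid weather forecast.
print(round(fan_speed_setpoint(16.0, 4.0, 18.0, ambient_rh=0.55, forecast_rh=0.70)))
```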
Publication number: 20240324971
Abstract: A synchronisation system comprises a sensor arrangement to detect a trigger base event. An analysis module and an arithmetic unit are configured to access prior information on a time delay between the sensor arrangement's detection of the trigger base event and a starting point of an acquisition time interval for acquiring imaging data. The starting point of the acquisition time interval is computed from the detected trigger base event and the prior information on the time delay. The time delay between the sensor arrangement's detection of the trigger base event and the acquisition time interval may vary between individual subjects, but for each individual subject the time delay is well reproducible and hence may be calibrated for on a per-subject basis.
Type: Application
Filed: July 11, 2022
Publication date: October 3, 2024
Inventors: Steffen Weiss, Wenjin Wang, Albertus Cornelis Den Brinker, Albert Garcia Tormo
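Reduced to arithmetic, the computation described above is: acquisition start = detected trigger time + per-subject calibrated delay. A minimal sketch, with the calibration step included; function and variable names are illustrative.

```python
from statistics import mean

def calibrate_delay(observed_delays_s):
    """Per-subject delay estimated once from a few observed trigger-to-window delays."""
    return mean(observed_delays_s)

def acquisition_start(trigger_time_s, calibrated_delay_s):
    """Start of the acquisition time interval = detected trigger base event
    time plus the prior (calibrated) time delay, as described in the abstract."""
    return trigger_time_s + calibrated_delay_s

# Example: delay calibrated from three observations, then applied to a new trigger.
delay = calibrate_delay([0.17, 0.19, 0.18])
print(acquisition_start(12.84, delay))
```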
Publication number: 20240320834
Abstract: A method (100) is disclosed for determining a signal indicative of a state of a subject during a diagnostic imaging or therapeutic procedure based on camera observation. The method comprises acquiring (101) camera images from a camera configured to monitor a body part of the subject during the procedure, e.g. via a reflection thereof in a reflective surface. The method comprises detecting (102) a shape or contour of the reflective surface in at least one acquired camera image to define a region of interest in the image that contains image information corresponding to the body part of interest, and segmenting (103) the region of interest in one or more camera images to select pixels that correspond to a feature of the body part of interest. The method also comprises determining (105) the signal indicative of the state of the subject from the selection. The invention further relates to a corresponding device, system and computer-program.
Type: Application
Filed: July 22, 2022
Publication date: September 26, 2024
Inventors: Wenjin Wang, Albertus Cornelis Den Brinker, Albert Garcia Tormo, Ioannis Pappous, Steffen Weiss, Jan Hendrik Wuelbern, Peter Caesar Mazurkewitz, Julien Thomas Senegas, Thomas Netsch
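The steps (102)-(105) form a simple per-frame pipeline: locate the reflective surface, segment the body-part pixels inside it, and reduce them to one sample of the subject signal. The sketch below mimics that flow on a synthetic frame; the brightness-based surface detection, the intensity thresholds, and all names are illustrative stand-ins, not the patented detection or segmentation method.

```python
import numpy as np

def signal_from_frame(frame, surface_threshold=200, skin_low=90, skin_high=180):
    """Derive a per-frame sample of the subject signal from a camera image.

    Step 1 stands in for detecting the reflective surface (the bright region
    above `surface_threshold` defines the region of interest); step 2 segments
    pixels inside that region whose intensity falls in an assumed skin range;
    step 3 averages them into one signal sample.
    """
    roi_mask = frame >= surface_threshold            # mirror region in the image
    ys, xs = np.where(roi_mask)
    if ys.size == 0:
        return None                                  # no reflective surface found
    roi = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    body_pixels = roi[(roi >= skin_low) & (roi <= skin_high)]
    return float(body_pixels.mean()) if body_pixels.size else None

# Example with a synthetic 8-bit frame: a bright mirror containing a darker reflection.
frame = np.full((120, 160), 30, dtype=np.uint8)
frame[40:80, 60:120] = 220   # reflective surface
frame[50:70, 70:110] = 140   # reflected body part
print(signal_from_frame(frame))
```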
Patent number: 12028244
Abstract: Methods and systems for selecting routes from among multiple media and/or optimizing transmission across those media are described. A minimum data rate may be determined for transmitting a content item. Based on that minimum data rate, a device may determine whether to transmit the content item via a given medium, select a different medium for transmission, or adjust transmission to compensate for unfavorable network conditions. A device may select a medium based on ranking one or more routes from a content source to a user device. Further, a device may determine a data rate for transmission based on calculating an expected time of transmission that includes time spent performing retransmissions at a given data rate.
Type: Grant
Filed: January 26, 2021
Date of Patent: July 2, 2024
Assignee: Comcast Cable Communications, LLC
Inventors: Ross Gilson, David Urban, Albert Garcia
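The expected-time calculation mentioned above can be sketched with a simple retransmission model: if each transmission fails independently with probability p, the expected number of attempts is 1/(1 - p), so the expected delivery time is the single-pass time times that factor. The geometric failure model, route data, and names are assumptions the abstract does not spell out.

```python
def expected_transmission_time(size_mbits, rate_mbps, retransmit_prob):
    """Expected time to deliver a content item at a given data rate, counting
    time spent on retransmissions (geometric retry model, for illustration)."""
    single_pass = size_mbits / rate_mbps
    return single_pass / (1.0 - retransmit_prob)

def rank_routes(size_mbits, routes):
    """Rank candidate routes (medium name -> (rate_mbps, retransmit_prob))
    by expected transmission time, fastest first."""
    return sorted(routes,
                  key=lambda m: expected_transmission_time(size_mbits, *routes[m]))

# Example: an 800 Mbit content item over three candidate routes.
routes = {"moca": (100.0, 0.02), "wifi_5ghz": (300.0, 0.15), "lte": (40.0, 0.05)}
print(rank_routes(800.0, routes))
```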
Publication number: 20240206754
Abstract: A vital sign detection system comprises a camera (10) configured to acquire image frames from an examination zone (42). A signal processor (11) derives vital sign information from the acquired image frames. An illumination controller (12) controls illumination of the examination zone, generates temporal modulations of the illumination and synchronises the camera frame rate with the modulated illumination. The vital sign detection system of the invention thereby increases the dynamic range, and hence also the signal-to-noise ratio, of the vital sign signal.
Type: Application
Filed: July 26, 2022
Publication date: June 27, 2024
Inventors: Steffen Weiss, Wenjin Wang, Albert Garcia Tormo, Jan Hendrik Wuelbern, Albertus Cornelis Den Brinker
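One plausible way frame-synchronous illumination modulation can extend dynamic range is to alternate between two illumination levels and fuse each pair of frames, falling back to the dimly lit frame wherever the brightly lit one saturates. The fusion rule, gain ratio, and names below are illustrative assumptions, not the patented processing.

```python
import numpy as np

def fuse_modulated_frames(frame_low, frame_high, gain_ratio=4.0, saturation=250):
    """Fuse two consecutive frames captured under low and high illumination
    into one extended-dynamic-range frame.

    Assumes the illumination controller alternates between two levels that
    differ by `gain_ratio` and that the camera is frame-synchronous with the
    modulation; saturated pixels in the brightly lit frame fall back to the
    dimly lit frame.
    """
    frame_low = frame_low.astype(np.float32)
    frame_high = frame_high.astype(np.float32)
    use_high = frame_high < saturation
    return np.where(use_high, frame_high / gain_ratio, frame_low)

# Example on a tiny 2x2 frame pair; 255 marks a saturated pixel.
low = np.array([[10, 60], [3, 200]], dtype=np.uint8)
high = np.array([[40, 240], [12, 255]], dtype=np.uint8)
print(fuse_modulated_frames(low, high))
```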
Patent number: 11974063
Abstract: Transcribed text and physiological data of a remote video conference participant are transmitted to a local device separately from the video data, which depicts the remote party during a time interval. An image of the video data is captured at a time instant within the time interval. A value of a remote party feature is determined remotely using the video data. The remote party feature can be the remote party's heart rate at the time instant. The value of the feature is received onto the local device. Audio data captures sounds spoken by the remote party and is converted by the remote device into words of text. The audio data that was converted into a particular word was captured at the time instant. The particular word is received onto the local device. The particular word and the value of the feature are displayed in association with one another on the local device.
Type: Grant
Filed: July 27, 2022
Date of Patent: April 30, 2024
Assignee: KOA HEALTH DIGITAL SOLUTIONS S.L.U.
Inventors: Albert Garcia i Tormo, Nicola Hemmings, Aleksandar Matic, Johan Lantz
Patent number: 11922120
Abstract: An autocomplete function for textual input uses situational parameters to predict the next words the user is intending to type. Situational and temporal parameters are based on textual input and sensor data of the user. A past time window is based on the situational and temporal parameters. Historical textual input and sensor data during the time window relating to the situational parameters are retrieved from a storage device and aggregated. A pre-existing model that relates the situational parameter to the time window is used to select a situational value based on the textual input and sensor data. Words relating to the situational parameter are listed that the user is likely to input next based on the selected situational value. The words are ranked by the probability that the user is intending to type each of the words. The highest ranked word is displayed to the user on a user interface.
Type: Grant
Filed: March 17, 2023
Date of Patent: March 5, 2024
Assignee: Koa Health Digital Solutions S.L.U.
Inventors: Teodora Sandra Buda, Joao Guerreiro, Aleksandar Matic, Albert Garcia i Tormo
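The last ranking step can be pictured as scoring each candidate word by its probability under the currently selected situational value and showing the top-ranked word first. The per-situation probability table, floor value, and names below are illustrative data, not the patented model.

```python
def rank_next_words(candidates, situational_value):
    """Rank candidate next words by how likely the user is to type them given
    the currently selected situational value (e.g. 'commuting', 'at_work').

    `candidates` maps each word to per-situation probabilities learned from the
    user's historical text and sensor data; words unseen for the current
    situation get a small floor probability.
    """
    floor = 1e-3
    scored = {w: probs.get(situational_value, floor) for w, probs in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

candidates = {
    "train":   {"commuting": 0.42, "at_work": 0.03},
    "meeting": {"commuting": 0.05, "at_work": 0.38},
    "late":    {"commuting": 0.21, "at_work": 0.12},
}
ranked = rank_next_words(candidates, "commuting")
print(ranked[0])   # highest-ranked word, shown to the user on the interface
```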
Publication number: 20230285711
Abstract: A method determines the most effective motivator at inducing a user to engage in a digital mental health intervention. The user is exposed to a first motivator that prompts the user to perform the intervention. The motivator can be a video, audio tape, textual explanation or quiz-like game. Intervention and motivator parameters are monitored to assess user engagement both with the first motivator and in performing the intervention. An intervention delivery model is personalized to the user based on both parameters. The intervention delivery model is used to determine the efficacy of the first motivator at motivating the user to perform the intervention. The intervention and motivator parameters are compared to an intervention engagement threshold and a motivator engagement threshold. If either or both parameters are below the corresponding threshold, the intervention delivery model is used to select a second motivator. The user is then exposed to the second motivator.
Type: Application
Filed: June 8, 2022
Publication date: September 14, 2023
Inventors: Albert Garcia i Tormo, Nicola Hemmings, Claire Vowell, Teodora Sandra Buda, Remko Vermeulen
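The threshold logic in the abstract reduces to: keep the current motivator while both engagement parameters clear their thresholds, otherwise ask the personalized model for the next candidate. In the sketch below, a pre-ranked list stands in for that model's output; the rule and all names are illustrative assumptions.

```python
def choose_next_motivator(intervention_engagement, motivator_engagement,
                          intervention_threshold, motivator_threshold,
                          current_motivator, ranked_alternatives):
    """Keep the current motivator while both engagement parameters stay at or
    above their thresholds; otherwise fall back to the model's next-best
    motivator for this user (`ranked_alternatives` stands in for the
    personalized intervention delivery model)."""
    if (intervention_engagement >= intervention_threshold
            and motivator_engagement >= motivator_threshold):
        return current_motivator
    for candidate in ranked_alternatives:
        if candidate != current_motivator:
            return candidate
    return current_motivator

# Example: intervention engagement fell below threshold, so a second motivator is chosen.
print(choose_next_motivator(0.35, 0.80, 0.50, 0.60,
                            current_motivator="video",
                            ranked_alternatives=["video", "quiz_game", "audio"]))
```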
Publication number: 20230252236
Abstract: An autocomplete function for textual input uses situational parameters to predict the next words the user is intending to type. Situational and temporal parameters are based on textual input and sensor data of the user. A past time window is based on the situational and temporal parameters. Historical textual input and sensor data during the time window relating to the situational parameters are retrieved from a storage device and aggregated. A pre-existing model that relates the situational parameter to the time window is used to select a situational value based on the textual input and sensor data. Words relating to the situational parameter are listed that the user is likely to input next based on the selected situational value. The words are ranked by the probability that the user is intending to type each of the words. The highest ranked word is displayed to the user on a user interface.
Type: Application
Filed: March 17, 2023
Publication date: August 10, 2023
Inventors: Teodora Sandra Buda, Joao Guerreiro, Aleksandar Matic, Albert Garcia i Tormo
Publication number: 20230245659
Abstract: Video data captured during a time interval at the location of a remote party to a videoconference is received onto a remote device. The video data depicts the remote party. Audio data capturing sounds spoken by the remote party during the time interval is also received onto the remote device, which converts the audio data into words of text and captures prosodic information describing the sounds spoken by the remote party during the time interval. The words of text are received onto a local device. The prosodic information corresponding to the sounds spoken by the remote party during the time interval that were converted into the words of text is also received onto the local device. The words of text and prosodic information are stored in association with one another. A physiological parameter of the remote party is determined using the video data and is received onto the local device.
Type: Application
Filed: September 27, 2022
Publication date: August 3, 2023
Inventors: Albert Garcia i Tormo, Nicola Hemmings, Aleksandar Matic, Johan Lantz
Publication number: 20230247169
Abstract: Transcribed text and physiological data of a remote video conference participant are transmitted to a local device separately from the video data, which depicts the remote party during a time interval. An image of the video data is captured at a time instant within the time interval. A value of a remote party feature is determined remotely using the video data. The remote party feature can be the remote party's heart rate at the time instant. The value of the feature is received onto the local device. Audio data captures sounds spoken by the remote party and is converted by the remote device into words of text. The audio data that was converted into a particular word was captured at the time instant. The particular word is received onto the local device. The particular word and the value of the feature are displayed in association with one another on the local device.
Type: Application
Filed: July 27, 2022
Publication date: August 3, 2023
Inventors: Albert Garcia i Tormo, Nicola Hemmings, Aleksandar Matic, Johan Lantz
Publication number: 20230246868
Abstract: The intelligibility of a video conference is monitored using speech-to-text conversion and by comparing text as spoken to text converted from received audio. A first portion of audio data of speech of a user which is timestamped with a first time is input into a first audio and text analyzer. A second portion of the audio data, which is also timestamped with the first time, is received onto a remote audio and text analyzer. The first audio and text analyzer converts the first portion of audio data into a first text fragment. The remote audio and text analyzer converts the second portion of audio data into a second text fragment. The first audio and text analyzer receives the second text fragment. The first text fragment is compared to the second text fragment. Whether the first text fragment matches the second text fragment is indicated to the user on a display.
Type: Application
Filed: February 24, 2022
Publication date: August 3, 2023
Inventors: Albert Garcia i Tormo, Miguel González, Javier Acedo, Johan Lantz
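The core comparison step can be sketched as checking whether the locally converted fragment and the remotely converted fragment for the same timestamp agree. The exact word-level match below is a simplification for illustration; the abstract only states that the fragments are compared and the result is indicated to the user.

```python
def intelligibility_ok(local_fragment, remote_fragment):
    """Compare the text fragment converted from the speaker's own audio with
    the fragment converted from the same-timestamp audio received remotely;
    a mismatch suggests the speech did not arrive intelligibly."""
    normalise = lambda s: s.lower().split()
    return normalise(local_fragment) == normalise(remote_fragment)

local = "let's move the review to Thursday"
remote = "let's move the review to Thursday"
garbled = "let's move the review to"
print(intelligibility_ok(local, remote))   # True: the remote side heard it correctly
print(intelligibility_ok(local, garbled))  # False: flag the mismatch on the display
```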
Publication number: 20230104641
Abstract: A system for monitoring the reaction of a user and for adjusting output content based on the user's reaction includes an output unit, a monitoring unit, a synchronization unit, an analysis unit and a control unit. The output unit presents content to the user. The monitoring unit monitors a user parameter during a period during which a first content is presented to the user in order to obtain monitoring data from the user. The monitoring data is synchronized during the period with the first content so as to link in time the monitoring data and the first content. The analysis unit analyzes the monitoring data and links it to the first content in order to determine the user's reaction to the first content. The control unit controls the output unit to present a second content to the user that is selected based on the user's reaction to the first content.
Type: Application
Filed: October 5, 2021
Publication date: April 6, 2023
Inventors: Albert Garcia i Tormo, Nicola Hemmings, Teodora Sandra Buda, Roger Garriga Calleja, Federico Lucchesi, Giovanni Maffei
Patent number: 11620447
Abstract: An autocomplete function for textual input uses situational parameters to predict the next words the user is intending to type. Situational and temporal parameters are based on textual input and sensor data of the user. A past time window is based on the situational and temporal parameters. Historical textual input and sensor data during the time window relating to the situational parameters are retrieved from a storage device and aggregated. A pre-existing model that relates the situational parameter to the time window is used to select a situational value based on the textual input and sensor data. Words relating to the situational parameter are listed that the user is likely to input next based on the selected situational value. The words are ranked by the probability that the user is intending to type each of the words. The highest ranked word is displayed to the user on a user interface.
Type: Grant
Filed: March 30, 2022
Date of Patent: April 4, 2023
Assignee: Koa Health B.V.
Inventors: Teodora Sandra Buda, Joao Guerreiro, Aleksandar Matic, Albert Garcia i Tormo
Publication number: 20230008276
Abstract: The present invention relates to spectral correction. A spectral correction apparatus is described that is configured to identify a high-voltage fluctuation in the X-ray tube and to parameterize the fluctuation in order to correct the effective X-ray spectrum per individual frame.
Type: Application
Filed: December 9, 2020
Publication date: January 12, 2023
Inventors: GEREON VOGTEMEIR, ROGER STEADMAN BOOKER, ALBERT GARCIA I TORMO, KLAUS JÜRGEN ENGEL
Publication number: 20230005154
Abstract: The invention refers to an apparatus for monitoring a subject (121) during an imaging procedure, e.g. CT imaging. The apparatus (110) comprises a monitoring image providing unit (111) providing a first monitoring image and a second monitoring image acquired at different support positions, a monitoring position providing unit (112) providing a first monitoring position of a region of interest in the first monitoring image, a support position providing unit (113) providing support position data of the support positions, a position map providing unit (114) providing a position map mapping calibration support positions to calibration monitoring positions, and a region of interest position determination unit (115) determining a position of the region of interest in the second monitoring image based on the first monitoring position, the support position data, and the position map. This allows the position of the region of interest to be determined accurately and with low computational effort.
Type: Application
Filed: December 1, 2020
Publication date: January 5, 2023
Inventors: ALBERT GARCIA I TORMO, RINK SPRINGER, IHOR OLEHOVYCH KIRENKO, JULIEN SENEGAS, HOLGER SCHMITT
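One simple reading of the position-map idea is to interpolate the calibrated image coordinates over the calibration support positions and add the resulting offset between the two support positions to the observed first position. The linear interpolation and all data below are illustrative assumptions about how such a map could be applied.

```python
import numpy as np

def roi_position(first_roi_xy, first_support_mm, second_support_mm,
                 calib_support_mm, calib_roi_xy):
    """Predict where the region of interest appears in the second monitoring
    image from its position in the first image and the change in support
    position, using a calibrated support-to-image position map.

    The map is applied as a 1-D interpolation of image coordinates over the
    calibration support positions; the offset between the interpolated points
    for the two support positions is added to the observed first position.
    """
    calib_roi_xy = np.asarray(calib_roi_xy, dtype=float)
    mapped = lambda s: np.array([np.interp(s, calib_support_mm, calib_roi_xy[:, k])
                                 for k in range(2)])
    offset = mapped(second_support_mm) - mapped(first_support_mm)
    return np.asarray(first_roi_xy, dtype=float) + offset

# Example: three calibration couch positions and their ROI pixel positions.
calib_support = [0.0, 100.0, 200.0]
calib_roi = [[320.0, 240.0], [320.0, 190.0], [320.0, 140.0]]
print(roi_position([318.0, 236.0], 40.0, 160.0, calib_support, calib_roi))
```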
Patent number: 11523165
Abstract: A television remote finder assembly for locating a misplaced remote control includes a television with an integrated communication unit. The communication unit broadcasts an alert signal when it is actuated. A remote control is in wireless communication with the television for controlling operational parameters of the television. The remote control receives the alert signal when the communication unit in the television is actuated. Additionally, the remote control includes an integrated alarm. The alarm is actuated when the remote control receives the alert signal to emit an audible alert, thereby helping the user locate the remote control when it has been misplaced.
Type: Grant
Filed: July 29, 2021
Date of Patent: December 6, 2022
Inventor: Albert Garcia