Patents by Inventor Giuseppe Raffa
Giuseppe Raffa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230124495
Abstract: Disclosed is a technical solution to process a video that captures actions to be performed for completing a task based on a chronological sequence of stages within the task. An example system may identify an action sequence from an instruction for the task. The system inputs the action sequence into a trained model (e.g., a recurrent neural network), which outputs the chronological sequence of stages. The RNN may be trained through self-supervised learning. The system may input the video and the chronological sequence of stages into another trained model, e.g., a temporal convolutional network. The other trained model may include hidden layers arranged before an attention layer. The hidden layers may extract features from the video and feed the features into the attention layer. The attention layer may determine attention weights of the features based on the chronological sequence of stages.
Type: Application
Filed: October 28, 2022
Publication date: April 20, 2023
Applicant: Intel Corporation
Inventors: Sovan Biswas, Anthony Daniel Rhodes, Ramesh Radhakrishna Manuvinakurike, Giuseppe Raffa, Richard Beckwith
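The abstract does not give the exact attention formulation; the following is a minimal NumPy sketch of one plausible reading, in which per-frame video features are scored against embeddings of the chronological stages. The shapes, the scaled dot-product scoring, and the function name `stage_conditioned_attention` are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch (assumption): attention weights over per-frame features,
# conditioned on embeddings of the chronological sequence of stages.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stage_conditioned_attention(frame_feats, stage_embs):
    """frame_feats: (T, D) features from the hidden layers of a video model.
    stage_embs: (S, D) embeddings of the task's chronological stages.
    Returns (S, T) weights: how much each frame contributes to each stage."""
    scores = stage_embs @ frame_feats.T / np.sqrt(frame_feats.shape[1])  # (S, T)
    return softmax(scores, axis=1)

# Toy usage: 100 frames, 4 stages, 64-dimensional features.
rng = np.random.default_rng(0)
weights = stage_conditioned_attention(rng.normal(size=(100, 64)),
                                      rng.normal(size=(4, 64)))
print(weights.shape, weights.sum(axis=1))  # (4, 100), each row sums to 1
```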
-
Patent number: 11605179
Abstract: The systems and methods disclosed herein provide determination of an orientation of a feature towards a reference target. As a non-limiting example, a system consistent with the present disclosure may include a processor, a memory, and a single camera affixed to the ceiling of a room occupied by a person. The system may analyze images from the camera to identify any objects in the room and their locations. Once the system has identified an object and its location, the system may prompt the person to look directly at the object. The camera may then record an image of the user looking at the object. The processor may analyze the image to determine the location of the user's head and, combined with the known location of the object and the known location of the camera, determine the direction that the user is facing. This direction may be treated as a reference value, or “ground truth.” The captured image may be associated with the direction, and the combination may be used as training input into an application.
Type: Grant
Filed: June 14, 2022
Date of Patent: March 14, 2023
Assignee: Intel Corporation
Inventors: Glen J. Anderson, Giuseppe Raffa, Carl S. Marshall, Meng Shi
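A minimal sketch of the "ground truth" step the abstract describes, under the assumption that the head and object positions have already been recovered in a common room coordinate frame; the 2D geometry and the function name `facing_direction` are illustrative only.

```python
# Minimal sketch (assumption, not the patented method): derive a ground-truth
# facing direction from the known head and object positions in the room frame.
import math

def facing_direction(head_xy, object_xy):
    """Yaw angle (degrees) from the person's head toward the object the person
    was asked to look at, measured in the room's coordinate frame."""
    dx = object_xy[0] - head_xy[0]
    dy = object_xy[1] - head_xy[1]
    return math.degrees(math.atan2(dy, dx))

# Head at (2.0 m, 3.0 m), object at (4.0 m, 3.0 m) -> facing 0 degrees (along +x).
label = facing_direction((2.0, 3.0), (4.0, 3.0))
print(label)  # 0.0; pair this angle with the captured image as a training sample
```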
-
Publication number: 20230071760
Abstract: Disclosed is a technical solution to calibrate confidence scores of classification networks. A classification network has been trained to receive an input and output a label of the input that indicates a class of the input. The classification network also outputs a confidence score of the label, which indicates a likelihood of the input falling into the class, i.e., a confidence level of the classification network that the label is correct. To calibrate the confidence of the classification network, a logit transformation function may be added into the classification network. The logit transformation function may be an entropy-based function and have learnable parameters, which may be trained by inputting calibration samples into the classification network and optimizing a negative log likelihood based on the labels generated by the classification network and ground-truth labels of the calibration samples. The trained logit transformation function can be used to compute reliable confidence scores.
Type: Application
Filed: October 28, 2022
Publication date: March 9, 2023
Applicant: Intel Corporation
Inventors: Anthony Daniel Rhodes, Sovan Biswas, Giuseppe Raffa
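The abstract describes an entropy-based logit transformation with learnable parameters; as a stand-in illustration of the same recipe (transform logits, fit the transform's parameter by minimizing negative log-likelihood on held-out calibration samples), here is plain temperature scaling fit by grid search. Temperature scaling is a swapped-in technique, not the patented transform.

```python
# Illustrative stand-in: temperature scaling fit on calibration samples by
# minimizing the negative log-likelihood (NLL), mirroring the general recipe.
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels, temperature):
    probs = softmax(logits / temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(cal_logits, cal_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature that minimizes NLL on held-out calibration data."""
    return min(grid, key=lambda t: nll(cal_logits, cal_labels, t))

# Toy usage with deliberately overconfident logits and 30% wrong labels.
rng = np.random.default_rng(1)
logits = rng.normal(size=(500, 10)) * 5.0
labels = logits.argmax(axis=1)
flip = rng.random(len(labels)) < 0.3
labels[flip] = rng.integers(0, 10, flip.sum())
t = fit_temperature(logits, labels)
calibrated = softmax(logits / t)
print(t, calibrated.max(axis=1).mean())  # softened, better-calibrated confidences
```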
-
Publication number: 20230024803
Abstract: Systems, apparatuses, and methods include technology that generates final frame predictions for a first plurality of frames of a video, where the first plurality of frames is associated with unlabeled data. The technology predicts an ordered list of actions for the first plurality of frames based on the final frame predictions, and temporally aligns the ordered list of actions to the final frame predictions to generate labels.
Type: Application
Filed: September 30, 2022
Publication date: January 26, 2023
Inventors: Sovan Biswas, Anthony Rhodes, Ramesh Manuvinakurike, Giuseppe Raffa, Richard Beckwith
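The alignment step can be read as a monotonic assignment of an ordered action list to per-frame predictions. Below is a minimal dynamic-programming sketch of that idea; the exact alignment objective and the function name `align` are assumptions, not the patented algorithm.

```python
# Minimal sketch (assumed formulation): monotonically align an ordered list of
# actions to per-frame class probabilities, producing frame-level pseudo-labels.
import numpy as np

def align(frame_probs, action_list):
    """frame_probs: (T, C) per-frame class probabilities (the 'final frame predictions').
    action_list: ordered action indices, e.g. [3, 1, 4]. Returns length-T labels."""
    T, N = len(frame_probs), len(action_list)
    log_p = np.log(frame_probs[:, action_list] + 1e-12)      # (T, N)
    dp = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    dp[0, 0] = log_p[0, 0]
    for t in range(1, T):
        for n in range(min(t + 1, N)):
            stay = dp[t - 1, n]
            move = dp[t - 1, n - 1] if n > 0 else -np.inf
            back[t, n] = 0 if stay >= move else 1
            dp[t, n] = max(stay, move) + log_p[t, n]
    # Backtrack from the last action at the last frame.
    labels, n = [], N - 1
    for t in range(T - 1, -1, -1):
        labels.append(action_list[n])
        n -= back[t, n]
    return labels[::-1]

probs = np.full((6, 5), 0.1)
probs[:2, 3] = probs[2:4, 1] = probs[4:, 4] = 0.6
print(align(probs, [3, 1, 4]))  # -> [3, 3, 1, 1, 4, 4]
```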
-
Patent number: 11557098
Abstract: Technologies for time-delayed augmented reality (AR) presentations include determining a location of a plurality of user AR systems located within a presentation site and determining a time delay of an AR sensory stimulus event of an AR presentation to be presented in the presentation site for each user AR system based on the location of the corresponding user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with the corresponding user AR system. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines the time delay for the corresponding user AR system such that the generation of the AR sensory stimulus event is time-delayed based on the location of the user AR system within the presentation site.
Type: Grant
Filed: October 28, 2020
Date of Patent: January 17, 2023
Assignee: Intel Corporation
Inventors: Pete A. Denman, Glen J. Anderson, Giuseppe Raffa
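A minimal sketch of one way a per-user timing parameter could follow from location, assuming the stimulus is meant to appear to propagate outward from an origin point (for example, at the speed of sound); the propagation model, coordinates, and function name are illustrative assumptions.

```python
# Minimal sketch (assumption): per-user time delays derived from each AR
# system's distance to a chosen origin in the presentation site.
import math

SPEED_M_PER_S = 343.0  # speed of sound; any propagation speed could be used

def stimulus_delay(user_xy, origin_xy, speed=SPEED_M_PER_S):
    """Timing parameter (seconds) for one user AR system, based on its location."""
    return math.dist(user_xy, origin_xy) / speed

users = {"headset-1": (1.0, 2.0), "headset-2": (15.0, 40.0)}
delays = {uid: stimulus_delay(xy, origin_xy=(0.0, 0.0)) for uid, xy in users.items()}
print(delays)  # each AR system schedules the stimulus event after its own delay
```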
-
Publication number: 20230010230
Abstract: Systems, apparatuses, and methods include technology that identifies, with a neural network, that a predetermined amount of a first action is completed at a first portion of a plurality of portions. A subset of the plurality of portions collectively represents the first action. The technology generates a first loss based on the predetermined amount of the first action being identified as being completed at the first portion. The technology updates the neural network based on the first loss.
Type: Application
Filed: September 15, 2022
Publication date: January 12, 2023
Inventors: Anthony Rhodes, Sovan Biswas, Giuseppe Raffa
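One plausible reading of the loss: compare the predicted cumulative progress of the action at a chosen portion against the predetermined completion amount. The sketch below uses a squared error on that comparison; the formulation and names are assumptions, not the patented loss.

```python
# Minimal sketch (assumed reading): loss comparing predicted cumulative action
# progress at an anchor portion against a predetermined completion fraction.
import numpy as np

def completion_loss(portion_probs, action_portions, anchor_portion, target_fraction):
    """portion_probs: (P,) predicted probability that the action occurs in each portion.
    action_portions: indices of the portions that collectively represent the action.
    anchor_portion: portion at which `target_fraction` of the action should be done."""
    p = portion_probs[action_portions]
    progress = np.cumsum(p) / (p.sum() + 1e-12)          # fraction completed per portion
    predicted = progress[action_portions.index(anchor_portion)]
    return (predicted - target_fraction) ** 2

probs = np.array([0.1, 0.3, 0.4, 0.1, 0.1])
print(completion_loss(probs, [1, 2, 3], anchor_portion=2, target_fraction=0.5))
```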
-
Publication number: 20220392371
Abstract: Examples disclosed herein provide real-time language learning within a smart space. An example system includes a sensor; object detection software to identify a first object and a second object in an environment based on an output of the sensor, assign a first weight to the first object and a second weight to the second object, perform a comparison of the first weight and the second weight, and select the first object to be associated with a second language output based on the comparison; context determination software to determine a second language context based on the output of the sensor; linguistic analysis software to associate the first object with a second language based on the second language context; and prompt generation software to cause the second language output for the first object in the second language to be presented.
Type: Application
Filed: August 15, 2022
Publication date: December 8, 2022
Inventors: Carl S. Marshall, Giuseppe Raffa, Shi Meng, Lama Nachman, Ankur Agrawal, Selvakumar Panneer, Glen J. Anderson, Lenitra M. Durham
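A minimal sketch of the object-weighting and selection step, assuming each detected object carries a detector confidence and a context-dependent salience score; the weighting heuristic, the tiny lexicon, and the function name are illustrative assumptions.

```python
# Minimal sketch (assumption): weight detected objects and label the
# highest-weighted one in the second language.
def pick_object_for_prompt(detections, salience):
    """detections: {object_name: detector_confidence}; salience: {object_name: 0..1}."""
    weights = {name: conf * salience.get(name, 0.5) for name, conf in detections.items()}
    return max(weights, key=weights.get)

ITALIAN = {"cup": "la tazza", "book": "il libro"}  # toy second-language lexicon

detections = {"cup": 0.9, "book": 0.8}
chosen = pick_object_for_prompt(detections, salience={"cup": 0.4, "book": 0.9})
print(f"Ecco: {ITALIAN[chosen]} ({chosen})")  # prompt presented in the second language
```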
-
Publication number: 20220392100
Abstract: The systems and methods disclosed herein provide determination of an orientation of a feature towards a reference target. As a non-limiting example, a system consistent with the present disclosure may include a processor, a memory, and a single camera affixed to the ceiling of a room occupied by a person. The system may analyze images from the camera to identify any objects in the room and their locations. Once the system has identified an object and its location, the system may prompt the person to look directly at the object. The camera may then record an image of the user looking at the object. The processor may analyze the image to determine the location of the user's head and, combined with the known location of the object and the known location of the camera, determine the direction that the user is facing. This direction may be treated as a reference value, or “ground truth.” The captured image may be associated with the direction, and the combination may be used as training input into an application.
Type: Application
Filed: June 14, 2022
Publication date: December 8, 2022
Inventors: Glen J. Anderson, Giuseppe Raffa, Carl S. Marshall, Meng Shi
-
Publication number: 20220382787
Abstract: Systems, apparatuses, and methods include technology that extracts a plurality of features from input data. The technology generates a confidence metric for the plurality of features. The confidence metric corresponds to a degree that at least one feature of the plurality of features is relevant for classification of the input data.
Type: Application
Filed: August 1, 2022
Publication date: December 1, 2022
Inventors: Anthony Rhodes, Sovan Biswas, Giuseppe Raffa
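The abstract is high-level; one possible concrete reading is sketched below, where per-feature relevance to the predicted class is summarized as a single confidence score. The relevance heuristic (feature activation times class weight) and the entropy-based summary are assumptions.

```python
# Minimal sketch (assumption): score per-feature relevance for a linear
# classifier and summarize it as a confidence metric (1 minus normalized
# entropy: concentrated relevance -> high confidence).
import numpy as np

def feature_confidence(features, class_weights):
    """features: (D,) extracted feature vector; class_weights: (D,) weights of the
    predicted class. Returns (per-feature relevance, scalar confidence in [0, 1])."""
    relevance = np.abs(features * class_weights)
    p = relevance / (relevance.sum() + 1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum()
    return relevance, 1.0 - entropy / np.log(len(p))

feats = np.array([0.1, 2.5, 0.05, 0.2])
w = np.array([0.3, 1.2, 0.1, 0.4])
rel, conf = feature_confidence(feats, w)
print(rel.round(3), round(conf, 3))  # one dominant feature -> confidence near 1
```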
-
Publication number: 20220334620
Abstract: Methods and apparatus to operate closed-lid portable computers are disclosed. An example portable compute device includes: a microphone; a speaker; a first camera to face a first direction; and a second camera to face a second direction, the second direction opposite the first direction. The compute device further includes communications circuitry; a first display; a second display separate from the first display; and a hinge to enable the first display to rotate relative to the second display between an open position and a closed position. At least a portion of the second display is capable of being visible when the first display is rotated about the hinge to the closed position. The portion of the second display is multiple times longer in a third direction than in a fourth direction perpendicular to the third direction, the third direction extending parallel to an axis of rotation of the hinge.
Type: Application
Filed: July 1, 2022
Publication date: October 20, 2022
Inventors: Barnes Cooper, Aleksander Magi, Arvind Kumar, Giuseppe Raffa, Wendy March, Marko Bartscherer, Irina Lazutkina, Duck Young Kong, Meng Shi, Vivek Paranjape, Vinod Gomathi Nayagam, Glen J. Anderson
-
Patent number: 11423490
Abstract: Systems and methods may provide for conducting an interest analysis of data associated with a user, wherein the interest analysis distinguishes between abstract interests and social interests. Additionally, one or more recommendations may be generated for the user based on the interest analysis and a current context of the user, wherein the one or more recommendations may be presented to the user. In one example, the abstract interests identify types of topics and types of objects, and the social interests identify types of social groups.
Type: Grant
Filed: June 28, 2017
Date of Patent: August 23, 2022
Assignee: Intel Corporation
Inventors: Norma S. Savage, Lama Nachman, Saurav Sahay, Giuseppe Raffa
-
Patent number: 11417236
Abstract: Language education systems capable of integrating with a user's daily life and automatically producing educational prompts would be particularly advantageous. An example method includes determining a user's identity, detecting a language education subject, prompting the user with a language education message, receiving a user's response, and updating a user profile associated with the user based on the user's response. Methods may also include determining user state (including emotional, physical, social, etc.) and determining, based on the user state, whether to prompt the user with the language education prompt.
Type: Grant
Filed: December 28, 2018
Date of Patent: August 16, 2022
Assignee: Intel Corporation
Inventors: Carl S. Marshall, Giuseppe Raffa, Shi Meng, Lama Nachman, Ankur Agrawal, Selvakumar Panneer, Glen J. Anderson, Lenitra M. Durham
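A minimal sketch of the method flow the abstract lists (identify user, detect subject, prompt, score the response, update the profile, and optionally defer based on user state); the helper names, the "skip when stressed" check, and the dict-based profile are illustrative assumptions.

```python
# Minimal sketch (assumption): one lesson step with a dict-based user profile.
def language_lesson_step(user_id, detected_subject, user_state, profiles, translate, ask):
    """Prompt the identified user about a detected subject, score the reply,
    and update that user's profile; skip prompting in unsuitable user states."""
    if user_state in {"stressed", "in_conversation"}:
        return None                                   # defer the prompt
    prompt = f"How do you say '{detected_subject}' in your target language?"
    answer = ask(prompt)                               # present prompt, get response
    correct = answer.strip().lower() == translate(detected_subject)
    profile = profiles.setdefault(user_id, {"correct": 0, "seen": 0})
    profile["seen"] += 1
    profile["correct"] += int(correct)
    return correct

profiles = {}
ok = language_lesson_step("ada", "cup", "relaxed", profiles,
                          translate=lambda w: "taza", ask=lambda p: "taza")
print(ok, profiles)  # True {'ada': {'correct': 1, 'seen': 1}}
```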
-
Patent number: 11410326
Abstract: The systems and methods disclosed herein provide determination of an orientation of a feature towards a reference target. As a non-limiting example, a system consistent with the present disclosure may include a processor, a memory, and a single camera affixed to the ceiling of a room occupied by a person. The system may analyze images from the camera to identify any objects in the room and their locations. Once the system has identified an object and its location, the system may prompt the person to look directly at the object. The camera may then record an image of the user looking at the object. The processor may analyze the image to determine the location of the user's head and, combined with the known location of the object and the known location of the camera, determine the direction that the user is facing. This direction may be treated as a reference value, or “ground truth.” The captured image may be associated with the direction, and the combination may be used as training input into an application.
Type: Grant
Filed: January 20, 2020
Date of Patent: August 9, 2022
Assignee: Intel Corporation
Inventors: Glen J. Anderson, Giuseppe Raffa, Carl S. Marshall, Meng Shi
-
Patent number: 11379016
Abstract: Methods and apparatus to operate closed-lid portable computers are disclosed. An example apparatus includes a camera input analyzer to analyze image data captured by a world facing camera on a portable computer when a lid of the portable computer is in a closed position. The world facing camera is on a first side of the lid. The portable computer includes a primary display on a second side of the lid opposite the first side. The example apparatus further includes a secondary display controller to render content via a secondary display of the portable computer in response to the analysis of the image data. The secondary display controller is to render the content on the secondary display while the lid of the portable computer is in the closed position and the primary display is turned off.
Type: Grant
Filed: May 23, 2019
Date of Patent: July 5, 2022
Assignee: Intel Corporation
Inventors: Barnes Cooper, Aleksander Magi, Arvind Kumar, Giuseppe Raffa, Wendy March, Marko Bartscherer, Irina Lazutkina, Duck Young Kong, Meng Shi, Vivek Paranjape, Vinod Gomathi Nayagam, Glen J. Anderson
-
Patent number: 11320912
Abstract: Techniques for gesture-based device connections are described. For example, a method may comprise receiving video data corresponding to motion of a first computing device, receiving sensor data corresponding to motion of the first computing device, comparing, by a processor, the video data and the sensor data to one or more gesture models, and initiating establishment of a wireless connection between the first computing device and a second computing device if the video data and sensor data correspond to gesture models for the same gesture. Other embodiments are described and claimed.
Type: Grant
Filed: January 6, 2020
Date of Patent: May 3, 2022
Assignee: Intel Corporation
Inventors: Giuseppe Raffa, Sangita Sharma
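A minimal sketch of the matching step: classify the camera-observed motion and the device's own sensor motion against gesture models, and only initiate pairing when both agree on the same gesture. The nearest-template matcher and the toy templates stand in for whatever gesture models the patent contemplates.

```python
# Minimal sketch (assumption): agree-on-the-same-gesture check before pairing.
import numpy as np

GESTURE_MODELS = {"shake": np.array([1, -1, 1, -1, 1, -1], float),
                  "circle": np.array([1, 1, 0, -1, -1, 0], float)}

def classify(signal):
    """Return the gesture whose template is closest to the (normalized) signal."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    return min(GESTURE_MODELS, key=lambda g: np.linalg.norm(s - GESTURE_MODELS[g]))

def maybe_connect(video_motion, sensor_motion):
    g_video, g_sensor = classify(video_motion), classify(sensor_motion)
    return g_video if g_video == g_sensor else None   # None -> do not pair

video = np.array([0.9, -1.1, 1.0, -0.9, 1.1, -1.0])
sensor = np.array([1.2, -0.8, 0.9, -1.2, 1.0, -1.1])
print(maybe_connect(video, sensor))  # 'shake' -> initiate wireless pairing
```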
-
Patent number: 11320913
Abstract: Techniques for gesture-based device connections are described. For example, a method may comprise receiving video data corresponding to motion of a first computing device, receiving sensor data corresponding to motion of the first computing device, comparing, by a processor, the video data and the sensor data to one or more gesture models, and initiating establishment of a wireless connection between the first computing device and a second computing device if the video data and sensor data correspond to gesture models for the same gesture. Other embodiments are described and claimed.
Type: Grant
Filed: August 28, 2020
Date of Patent: May 3, 2022
Assignee: Intel Corporation
Inventors: Giuseppe Raffa, Sangita Sharma
-
Publication number: 20220086566
Abstract: Systems and methods may provide for sending a sound wave signal and measuring a body conduction characteristic of the sound wave signal. Additionally, a user authentication may be performed based at least in part on the body conduction characteristic. In one example, the body conduction characteristic includes one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue.
Type: Application
Filed: September 27, 2021
Publication date: March 17, 2022
Applicant: Intel Corporation
Inventors: John C. Weast, Glen J. Anderson, Giuseppe Raffa, Daniel S. Lake, Kathy Yuen, Lenitra M. Durham
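A minimal sketch of the comparison step, assuming an enrolled profile of the three characteristics named in the abstract (timing, frequency, amplitude) and per-characteristic tolerances; the feature extraction, tolerances, and function names are illustrative assumptions, not the patented matching logic.

```python
# Minimal sketch (assumption): compare timing, dominant frequency, and amplitude
# of a body-conducted probe signal against a stored enrollment profile.
import numpy as np

def conduction_features(received, sample_rate, emit_time, receive_time):
    """Features of the received probe: propagation delay, peak frequency, amplitude."""
    spectrum = np.abs(np.fft.rfft(received))
    peak_hz = np.fft.rfftfreq(len(received), 1.0 / sample_rate)[spectrum.argmax()]
    return {"delay_s": receive_time - emit_time,
            "peak_hz": float(peak_hz),
            "amplitude": float(np.abs(received).max())}

def authenticate(measured, enrolled,
                 tol={"delay_s": 0.002, "peak_hz": 50.0, "amplitude": 0.1}):
    return all(abs(measured[k] - enrolled[k]) <= tol[k] for k in tol)

fs = 8000
t = np.arange(0, 0.05, 1.0 / fs)
probe = 0.4 * np.sin(2 * np.pi * 900 * t)            # toy "after bone/tissue" signal
measured = conduction_features(probe, fs, emit_time=0.0, receive_time=0.004)
enrolled = {"delay_s": 0.005, "peak_hz": 900.0, "amplitude": 0.4}
print(measured, authenticate(measured, enrolled))    # True -> user authenticated
```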
-
Patent number: 11217126
Abstract: The disclosed embodiments generally relate to methods, systems and apparatuses to provide ad hoc digital signage for public or private display. In certain embodiments, the disclosure provides dynamically formed digital signage. In one application, one or more drones are used to project the desired signage. In another application, one or more drones are used to form a background to receive the projected image. In still another application, sensors are used to detect audience movement, line of sight or engagement level. The sensor information is then used to arrange the projecting drones or the surface-image drones to further the signage presentation.
Type: Grant
Filed: December 28, 2017
Date of Patent: January 4, 2022
Assignee: Intel Corporation
Inventors: Carl S. Marshall, John Sherry, Giuseppe Raffa, Glen J. Anderson, Selvakumar Panneer, Daniel Pohl
-
Patent number: 11134340
Abstract: Systems and methods may provide for sending a sound wave signal and measuring a body conduction characteristic of the sound wave signal. Additionally, a user authentication may be performed based at least in part on the body conduction characteristic. In one example, the body conduction characteristic includes one or more of a timing, a frequency or an amplitude of the sound wave signal after passing through one or more of bone or tissue.
Type: Grant
Filed: November 18, 2014
Date of Patent: September 28, 2021
Assignee: Intel Corporation
Inventors: John C. Weast, Glen J. Anderson, Giuseppe Raffa, Daniel S. Lake, Kathy Yuen, Lenitra M. Durham
-
Publication number: 20210272467
Abstract: In one embodiment, an apparatus comprises a memory and a processor. The memory is to store sensor data, wherein the sensor data is captured by a plurality of sensors within an educational environment. The processor is to: access the sensor data captured by the plurality of sensors; identify a student within the educational environment based on the sensor data; detect a plurality of events associated with the student based on the sensor data, wherein each event is indicative of an attention level of the student within the educational environment; generate a report based on the plurality of events associated with the student; and send the report to a third party associated with the student.
Type: Application
Filed: September 28, 2018
Publication date: September 2, 2021
Inventors: Shao-Wen Yang, Addicam V. Sanjay, Karthik Veeramani, Gabriel L. Silva, Marcos P. Da Silva, Jose A. Avalos, Stephen T. Palermo, Glen J. Anderson, Meng Shi, Benjamin W. Bair, Pete A. Denman, Reese L. Bowes, Rebecca A. Chierichetti, Ankur Agrawal, Mrutunjayya Mrutunjayya, Gerald A. Rogers, Shih-Wei Roger Chien, Lenitra M. Durham, Giuseppe Raffa, Irene Liew, Edwin Verplanke
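A minimal sketch of the event-detection and reporting flow, assuming per-student observations have already been extracted from the sensors; the gaze/phone heuristics, event names, and report fields are illustrative assumptions, not the patented pipeline.

```python
# Minimal sketch (assumption): turn per-student sensor observations into
# attention events and a simple report for a third party (e.g., a guardian).
from collections import Counter

def detect_events(observations):
    """observations: list of dicts like {'gaze_on_board': bool, 'phone_visible': bool}.
    Yields one coarse attention event per observation."""
    for obs in observations:
        if obs.get("phone_visible"):
            yield "distracted_by_device"
        elif obs.get("gaze_on_board"):
            yield "attentive"
        else:
            yield "off_task"

def build_report(student_id, observations):
    counts = Counter(detect_events(observations))
    total = sum(counts.values()) or 1
    return {"student": student_id,
            "attention_ratio": counts["attentive"] / total,
            "events": dict(counts)}

obs = [{"gaze_on_board": True}, {"phone_visible": True}, {"gaze_on_board": True}]
print(build_report("student-42", obs))  # this report would be sent to the third party
```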