Patents by Inventor Diogo Strube de Lima
Diogo Strube de Lima has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230195878
Abstract: An example first device includes a connection engine to establish a secure connection with a second device. The first device includes a security engine to determine a shared security state for the first device and the second device based on a security state of the first device and a security state of the second device. The security engine is to detect that a change in the security state of the first device should occur, change the shared security state at the first device, and indicate to the second device the change in the shared security state at the first device.
Type: Application
Filed: June 3, 2020
Publication date: June 22, 2023
Applicant: Hewlett-Packard Development Company, L.P.
Inventors: Carlos Haas Costa, Diogo Strube de Lima, Walter Flores Pereira, Andre Dafonte Lopes Da Silva
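To make the mechanism in this abstract concrete, here is a minimal Python sketch of a shared security state between two connected devices. The SecurityState levels, the weakest-link (minimum) rule for deriving the shared state, and the peer-notification call are illustrative assumptions; the abstract does not specify how the shared state is computed or communicated.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class SecurityState(IntEnum):
    """Hypothetical security levels; the patent abstract does not enumerate them."""
    COMPROMISED = 0
    DEGRADED = 1
    TRUSTED = 2


@dataclass
class Device:
    name: str
    local_state: SecurityState
    peers: list = field(default_factory=list)
    shared_state: SecurityState = SecurityState.TRUSTED

    def connect(self, other: "Device") -> None:
        """Connection engine: establish a (here, simulated) secure connection."""
        self.peers.append(other)
        other.peers.append(self)
        # Security engine: derive the shared state from both local states
        # (assumed here to be the weaker of the two).
        shared = min(self.local_state, other.local_state)
        self.shared_state = other.shared_state = shared

    def on_local_state_change(self, new_state: SecurityState) -> None:
        """Detect that a change in the local security state should occur, change
        the shared state at this device, and indicate the change to peers."""
        self.local_state = new_state
        self.shared_state = min(self.shared_state, new_state)
        for peer in self.peers:
            peer.receive_shared_state(self.shared_state)

    def receive_shared_state(self, state: SecurityState) -> None:
        self.shared_state = state


if __name__ == "__main__":
    a = Device("laptop", SecurityState.TRUSTED)
    b = Device("dock", SecurityState.TRUSTED)
    a.connect(b)
    a.on_local_state_change(SecurityState.DEGRADED)
    print(b.shared_state)  # SecurityState.DEGRADED
```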
-
Patent number: 10481733
Abstract: In an example implementation according to aspects of the present disclosure, a method may include receiving, on a touch-sensitive mat of a computing system, a touch input associated with a first event type. The method further includes determining whether to transform the touch input associated with the first event type to a different event type, and sending, to an application, the touch input associated with an event type based on the determination.
Type: Grant
Filed: March 26, 2019
Date of Patent: November 19, 2019
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Arman Alimian, Diogo Strube de Lima, Bradley Neal Suggs, Immanuel Amo, Nicholas P. Lyons, Evan Wilson, Ruth Ann Lim
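The event-type transformation described above can be illustrated with a short, hedged Python sketch. The event-type names, the Application stand-in, and the rule of transforming only when the application does not accept the incoming type are assumptions for illustration, not details taken from the patent.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class TouchInput:
    x: float
    y: float
    event_type: str  # e.g. "touch", "pen", "mouse" -- illustrative names only


class Application:
    """Stand-in for a consuming application; accepted_types is an assumption."""

    def __init__(self, name: str, accepted_types: set[str]):
        self.name = name
        self.accepted_types = accepted_types

    def handle(self, event: TouchInput) -> None:
        print(f"{self.name} received {event.event_type} at ({event.x}, {event.y})")


def dispatch(event: TouchInput, app: Application) -> None:
    """Determine whether to transform the first event type to a different one,
    then send the (possibly transformed) input to the application."""
    if event.event_type not in app.accepted_types:
        # Transform to a type the application understands (here: mouse).
        event = replace(event, event_type="mouse")
    app.handle(event)


if __name__ == "__main__":
    legacy_app = Application("legacy drawing app", accepted_types={"mouse"})
    dispatch(TouchInput(x=120.0, y=48.5, event_type="touch"), legacy_app)
```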
-
Publication number: 20190220146
Abstract: In an example implementation according to aspects of the present disclosure, a method may include receiving, on a touch-sensitive mat of a computing system, a touch input associated with a first event type. The method further includes determining whether to transform the touch input associated with the first event type to a different event type, and sending, to an application, the touch input associated with an event type based on the determination.
Type: Application
Filed: March 26, 2019
Publication date: July 18, 2019
Inventors: Arman Alimian, Diogo Strube de Lima, Bradley Neal Suggs, Immanuel Amo, Nicholas P. Lyons, Evan Wilson, Ruth Ann Lim
-
Patent number: 10275092
Abstract: In an example implementation according to aspects of the present disclosure, a method may include receiving, on a touch-sensitive mat of a computing system, a touch input associated with a first event type. The method further includes determining whether to transform the touch input associated with the first event type to a different event type, and sending, to an application, the touch input associated with an event type based on the determination.
Type: Grant
Filed: September 24, 2014
Date of Patent: April 30, 2019
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Arman Alimian, Diogo Strube de Lima, Bradley Neal Suggs, Immanuel Amo, Nicholas P. Lyons, Evan Wilson, Ruth Ann Lim
-
Patent number: 10091490
Abstract: In one implementation, a system for using a scan recommendation includes a receiver engine to receive a plurality of pictures of a three-dimensional (3D) object from a scanner, a model engine to generate a 3D model of the 3D object by aligning the plurality of pictures of the 3D object, an analysis engine to analyze the 3D model for a volume, a shape, and a color of the 3D object, wherein the volume, shape, and color analyses are used to generate scan recommendations, and a display engine to display information relating to the scan recommendations based on the volume, shape, and color analysis of the 3D model of the 3D object.
Type: Grant
Filed: October 8, 2015
Date of Patent: October 2, 2018
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Divya Sharma, Daniel R. Tretter, Diogo Strube de Lima, Ilya Gerasimets
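A rough Python sketch of the recommendation and display steps follows. The ScanAnalysis metrics, thresholds, and recommendation strings are invented for illustration; the patent only states that volume, shape, and color analysis feeds the scan recommendations.

```python
from dataclasses import dataclass


@dataclass
class ScanAnalysis:
    """Output of a (stubbed) analysis engine; the metrics and thresholds below are assumptions."""
    volume_cm3: float
    shape_complexity: float  # 0..1, higher = more concave/detailed
    color_variance: float    # 0..1, higher = more varied colors


def recommend(a: ScanAnalysis) -> list[str]:
    """Map analysis-engine output to scan recommendations (illustrative rules only)."""
    tips = []
    if a.volume_cm3 < 50:
        tips.append("Object is small: move it closer or use a higher-resolution pass.")
    if a.shape_complexity > 0.7:
        tips.append("Complex shape: capture extra pictures from oblique angles.")
    if a.color_variance < 0.1:
        tips.append("Uniform color: add texture or markers to help picture alignment.")
    return tips or ["Scan quality looks adequate; no changes recommended."]


def display(recommendations: list[str]) -> None:
    """Display engine: show information relating to the scan recommendations."""
    for line in recommendations:
        print(f"- {line}")


if __name__ == "__main__":
    display(recommend(ScanAnalysis(volume_cm3=32.0, shape_complexity=0.85, color_variance=0.05)))
```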
-
Patent number: 9858482
Abstract: Example embodiments relate to providing mobile augmented reality for an enclosed area. In example embodiments, a controller device receives a fixed video stream from a fixed camera and a mobile video stream of a current field of view of a mobile user device. The mobile user device comprises a reality augmentation module to project information on the current field of view. Further, the controller device includes a tracking module to identify a position and orientation of a mobile user of the mobile user device based on image processing of the fixed video stream, and a fuzzy map module to use a fuzzy map of the enclosed area and the position and orientation of the mobile user to identify items of interest in the current field of view of the mobile user device, where the fuzzy map is generated based on a floor plan of the enclosed area.
Type: Grant
Filed: May 28, 2013
Date of Patent: January 2, 2018
Assignee: Ent. Services Development Corporation LP
Inventors: Taciano Dreckmann Perez, Diogo Strube de Lima, Carlos Costa Haas, Saulo Saraiva Schuh
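The fuzzy-map lookup can be sketched in a few lines of Python. The field-of-view angle, the membership threshold, and the bearing test are assumptions standing in for whatever the actual fuzzy map module does with the tracked position and orientation.

```python
import math
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    x: float
    y: float
    membership: float  # fuzzy confidence (0..1) that the item is at this floor-plan cell


@dataclass
class UserPose:
    """Position and orientation as the tracking module would estimate from the fixed camera."""
    x: float
    y: float
    heading_deg: float  # direction the mobile device is facing


def items_in_view(pose: UserPose, fuzzy_map: list[Item],
                  fov_deg: float = 60.0, min_membership: float = 0.5) -> list[Item]:
    """Fuzzy-map module sketch: items whose bearing falls inside the current field
    of view and whose fuzzy membership is high enough. Thresholds are illustrative."""
    visible = []
    for item in fuzzy_map:
        if item.membership < min_membership:
            continue
        bearing = math.degrees(math.atan2(item.y - pose.y, item.x - pose.x))
        delta = (bearing - pose.heading_deg + 180) % 360 - 180  # shortest angular difference
        if abs(delta) <= fov_deg / 2:
            visible.append(item)
    return visible


if __name__ == "__main__":
    floor_plan_items = [Item("printer", 4.0, 1.0, 0.9), Item("exit", -3.0, 2.0, 0.8)]
    print([i.name for i in items_in_view(UserPose(0.0, 0.0, 0.0), floor_plan_items)])
    # -> ['printer'] (bearing ~14 degrees, inside the 60-degree field of view)
```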
-
Publication number: 20170300174
Abstract: In an example implementation according to aspects of the present disclosure, a method may include receiving, on a touch-sensitive mat of a computing system, a touch input associated with a first event type. The method further includes determining whether to transform the touch input associated with the first event type to a different event type, and sending, to an application, the touch input associated with an event type based on the determination.
Type: Application
Filed: September 24, 2014
Publication date: October 19, 2017
Inventors: Arman Alimian, Diogo Strube de Lima, Bradley Neal Suggs, Immanuel Amo, Nicholas P. Lyons, Evan Wilson, Ruth Ann Lim
-
Publication number: 20170103511
Abstract: In one implementation, a system for using a scan recommendation includes a receiver engine to receive a plurality of pictures of a three-dimensional (3D) object from a scanner, a model engine to generate a 3D model of the 3D object by aligning the plurality of pictures of the 3D object, an analysis engine to analyze the 3D model for a volume, a shape, and a color of the 3D object, wherein the volume, shape, and color analyses are used to generate scan recommendations, and a display engine to display information relating to the scan recommendations based on the volume, shape, and color analysis of the 3D model of the 3D object.
Type: Application
Filed: October 8, 2015
Publication date: April 13, 2017
Inventors: Divya Sharma, Daniel R. Tretter, Diogo Strube de Lima, Ilya Gerasimets
-
Patent number: 9324292
Abstract: Techniques for selecting an interaction scenario based on an object are described in various implementations. A method that implements the techniques may include receiving, at a computer system and from an image capture device, an image that depicts a viewing area proximate to a presentation device. The method may also include processing the image, using the computer system, to detect a user in the viewing area presenting an object in a manner that indicates desired interaction with the presentation device. The method may also include selecting, using the computer system, an interaction scenario for presentation on the presentation device based on the object.
Type: Grant
Filed: September 27, 2012
Date of Patent: April 26, 2016
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Roberto Bender, Diogo Strube de Lima, Otavio Correa Cordeiro, Rodrigo Menezes do Prado, Soma Sundaram Santhiveeran
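As a hedged illustration of the scenario-selection step, the Python sketch below maps a detected object class to an interaction scenario. The object classes, scenario names, and the stubbed detector are hypothetical; the patent abstract does not enumerate them.

```python
from typing import Optional

# Hypothetical mapping from a detected object class to an interaction scenario;
# the patent abstract does not list concrete classes or scenarios.
SCENARIOS = {
    "credit_card": "checkout",
    "product_box": "product_information",
    "phone": "pair_and_transfer",
}


def detect_presented_object(image: bytes) -> Optional[str]:
    """Stand-in for image processing that finds a user in the viewing area
    presenting an object toward the presentation device. A real system would
    run person/object detection here; this stub always reports a product box."""
    return "product_box" if image else None


def select_scenario(image: bytes) -> Optional[str]:
    """Select an interaction scenario for the presentation device based on the object."""
    obj = detect_presented_object(image)
    return SCENARIOS.get(obj) if obj else None


if __name__ == "__main__":
    frame = b"\x00" * 10  # placeholder for a captured camera frame
    print(select_scenario(frame))  # -> 'product_information'
```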
-
Publication number: 20150363647
Abstract: Example embodiments relate to providing mobile augmented reality for an enclosed area. In example embodiments, a controller device receives a fixed video stream from a fixed camera and a mobile video stream of a current field of view of a mobile user device. The mobile user device comprises a reality augmentation module to project information on the current field of view. Further, the controller device includes a tracking module to identify a position and orientation of a mobile user of the mobile user device based on image processing of the fixed video stream, and a fuzzy map module to use a fuzzy map of the enclosed area and the position and orientation of the mobile user to identify items of interest in the current field of view of the mobile user device, where the fuzzy map is generated based on a floor plan of the enclosed area.
Type: Application
Filed: May 28, 2013
Publication date: December 17, 2015
Inventors: Taciano Dreckmann Perez, Diogo Strube de Lima, Carlos Costa Haas, Saulo Saraiva Schuh
-
Publication number: 20140085180
Abstract: Techniques for selecting an interaction scenario based on an object are described in various implementations. A method that implements the techniques may include receiving, at a computer system and from an image capture device, an image that depicts a viewing area proximate to a presentation device. The method may also include processing the image, using the computer system, to detect a user in the viewing area presenting an object in a manner that indicates desired interaction with the presentation device. The method may also include selecting, using the computer system, an interaction scenario for presentation on the presentation device based on the object.
Type: Application
Filed: September 27, 2012
Publication date: March 27, 2014
Applicant: Hewlett-Packard Development Company, L.P.
Inventors: Roberto Bender, Diogo Strube de Lima, Otavio Correa Cordeiro, Rodrigo Menezes do Prado, Soma Sundaram Santhiveeran
-
Publication number: 20140035814
Abstract: Techniques for adjusting settings of a presentation system are described in various implementations. A method that implements the techniques may include receiving, at a computer system and from an image capture device, an image that depicts a viewing area proximate to the presentation system. The method may also include processing the image, using the computer system, to determine whether a viewer is present in the viewing area. The method may also include, in response to determining that a viewer is present in the viewing area, processing the image, using the computer system, to determine an ambient lighting value associated with the viewing area and to determine a distance from the presentation system to the viewer. The method may also include adjusting a presentation setting of the presentation system based on the ambient lighting value and the distance.
Type: Application
Filed: July 31, 2012
Publication date: February 6, 2014
Inventors: Diogo Strube de Lima, Soma Sundaram Santhiveeran, Walter Flores Pereira
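The adjustment step can be illustrated with a small Python sketch that maps an ambient lighting value and a viewer distance to presentation settings. The specific brightness and font-scale formulas are assumptions; the abstract only says both measurements drive the adjustment.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PresentationSettings:
    brightness_pct: int
    font_scale: float


def adjust_settings(viewer_present: bool, ambient_lighting: float,
                    viewer_distance_m: float) -> Optional[PresentationSettings]:
    """Map an ambient lighting value (0 = dark, 1 = bright) and a viewer distance
    to presentation settings. The curves below are illustrative assumptions."""
    if not viewer_present:
        return None  # the abstract only adjusts after a viewer is detected
    brightness = int(40 + 60 * max(0.0, min(ambient_lighting, 1.0)))   # brighter room -> brighter screen
    font_scale = 1.0 + 0.25 * max(0.0, viewer_distance_m - 1.0)        # farther viewer -> larger text
    return PresentationSettings(brightness_pct=brightness, font_scale=round(font_scale, 2))


if __name__ == "__main__":
    print(adjust_settings(viewer_present=True, ambient_lighting=0.8, viewer_distance_m=3.0))
    # PresentationSettings(brightness_pct=88, font_scale=1.5)
```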
-
Publication number: 20130290108
Abstract: Techniques for selecting a targeted content item for playback are described in various implementations. A method that implements the techniques may include receiving, at a computer system and from an image capture device, an image that includes a plurality of potential users of a presentation device. The method may also include processing the image, using the computer system, to determine an indication of a relationship between two or more of the plurality of potential users. The method may further include selecting, using the computer system, a targeted content item for playback on the presentation device based on the indication of the relationship.
Type: Application
Filed: April 26, 2012
Publication date: October 31, 2013
Inventors: Leonardo Alves Machado, Soma Sundaram Santhiveeran, Diogo Strube de Lima, Walter Flores Pereira
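A hedged Python sketch of the relationship-based selection follows. The relationship labels, the age/proximity heuristic, and the content mapping are illustrative stand-ins for the image-processing and selection logic the abstract describes only at a high level.

```python
from dataclasses import dataclass

# Illustrative mapping from an inferred relationship to a targeted content item;
# the abstract does not name concrete relationships or content items.
CONTENT_BY_RELATIONSHIP = {
    "family_with_children": "theme_park_ad",
    "couple": "restaurant_ad",
    "unrelated": "general_promo",
}


@dataclass
class DetectedViewer:
    estimated_age: int
    x: float  # horizontal position in the frame, used as a proximity proxy


def infer_relationship(viewers: list[DetectedViewer]) -> str:
    """Rough heuristic standing in for the image-processing step that determines
    an indication of a relationship between potential users."""
    if len(viewers) < 2:
        return "unrelated"
    ages = sorted(v.estimated_age for v in viewers)
    close_together = max(v.x for v in viewers) - min(v.x for v in viewers) < 1.0
    if close_together and ages[0] < 12 and ages[-1] > 25:
        return "family_with_children"
    if close_together and len(viewers) == 2:
        return "couple"
    return "unrelated"


def select_content(viewers: list[DetectedViewer]) -> str:
    """Select a targeted content item for playback based on the inferred relationship."""
    return CONTENT_BY_RELATIONSHIP[infer_relationship(viewers)]


if __name__ == "__main__":
    group = [DetectedViewer(8, 0.2), DetectedViewer(36, 0.6), DetectedViewer(34, 0.9)]
    print(select_content(group))  # -> 'theme_park_ad'
```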
-
Publication number: 20130290994
Abstract: Techniques for selecting a targeted content item for playback are described in various implementations. A method that implements the techniques may include receiving, from an image capture device, an image that includes a user who is viewing a first content item being displayed on a presentation device. The method may also include processing the image to identify a facial expression of the user, and determining an indication of user reaction to the first content item based on the identified facial expression of the user. The method may further include comparing the indication of user reaction to an indication of intended reaction associated with the first content item to determine an efficacy value of the first content item. The method may also include selecting a targeted content item for playback on the presentation device based on the efficacy value.
Type: Application
Filed: April 27, 2012
Publication date: October 31, 2013
Inventors: Leonardo Alves Machado, Soma Sundaram Santhiveeran, Diogo Strube de Lima, Walter Flores Pereira
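The efficacy comparison lends itself to a short Python sketch. The expression-to-score table, the efficacy formula, and the selection threshold are assumptions made for illustration; the patent abstract does not define them.

```python
from dataclasses import dataclass

# Illustrative reaction scores per detected facial expression (not from the patent).
REACTION_SCORE = {"smile": 1.0, "neutral": 0.5, "frown": 0.0}


@dataclass
class ContentItem:
    name: str
    intended_reaction: float  # 0 (negative) .. 1 (positive)


def efficacy(observed_expression: str, item: ContentItem) -> float:
    """Compare the user's reaction with the reaction the content item intended to evoke."""
    observed = REACTION_SCORE.get(observed_expression, 0.5)
    return 1.0 - abs(observed - item.intended_reaction)


def select_next(current: ContentItem, observed_expression: str,
                candidates: list[ContentItem], threshold: float = 0.7) -> ContentItem:
    """If the current item was effective, keep similar content; otherwise switch.
    The threshold and the notion of 'similar' are assumptions for this sketch."""
    if efficacy(observed_expression, current) >= threshold:
        return min(candidates, key=lambda c: abs(c.intended_reaction - current.intended_reaction))
    return max(candidates, key=lambda c: abs(c.intended_reaction - current.intended_reaction))


if __name__ == "__main__":
    current = ContentItem("upbeat_ad", intended_reaction=1.0)
    options = [ContentItem("another_upbeat_ad", 0.9), ContentItem("informational_spot", 0.4)]
    print(select_next(current, "frown", options).name)  # -> 'informational_spot'
```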