Patents by Inventor Gonzalo A Ramos
Gonzalo A Ramos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20110225541
Abstract: Various embodiments enable, in a mapping context, various visual entities to be clustered into groups that do not occlude one another. In at least some embodiments, individual clusters are represented on a map by a puddle defined by a computed contour line. Users can interact with the puddle to acquire more information about the puddle's content. In at least some embodiments, user interaction can include zooming operations, clicking operations, hovering operations, and the like.
Type: Application
Filed: March 9, 2010
Publication date: September 15, 2011
Applicant: Microsoft Corporation
Inventor: Gonzalo A. Ramos
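The grouping step in this abstract can be illustrated with a short, hypothetical Python sketch. This is not the patented method: the patent draws a computed contour line (a "puddle") around each cluster, while this sketch only forms the underlying non-occluding groups with a simple distance threshold. All names are invented for illustration.

```python
import math

def cluster_pins(points, radius):
    """Group 2D screen points so that any two points within `radius`
    of each other (directly or transitively) land in the same cluster."""
    clusters = []
    for p in points:
        # find every existing cluster this point touches
        near = [c for c in clusters if any(math.dist(p, q) <= radius for q in c)]
        merged = [p]
        for c in near:
            merged.extend(c)   # the new point may bridge several clusters
            clusters.remove(c)
        clusters.append(merged)
    return clusters

def cluster_centroid(cluster):
    """Screen position where a cluster's puddle/marker could be anchored."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

A contour-line renderer would then draw each cluster's outline; the centroid is one plausible anchor for the interactive marker.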
-
Publication number: 20110225546
Abstract: Various embodiments enable, in a mapping context, various regions containing points of interest to be spotlighted. In at least some embodiments, a map is displayed on a computing device and points of interest can be located on the map. One or more regions containing the points of interest can be visually spotlighted to draw the user's attention to associated regions.
Type: Application
Filed: March 9, 2010
Publication date: September 15, 2011
Inventor: Gonzalo A. Ramos
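The abstract does not specify a region shape, so as a minimal, hypothetical sketch, one could spotlight the padded bounding box of the points of interest:

```python
def spotlight_bounds(points, pad):
    """Axis-aligned bounding box (x0, y0, x1, y1) around the points of
    interest, expanded by `pad` screen units on every side. One plausible
    shape for a spotlighted region; the patent itself is shape-agnostic."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
```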
-
Publication number: 20110173565
Abstract: A system for displaying hybrid image data produced by embedding additional media objects within street-level panoramic images includes a user interface through which a user may view, search for, and/or navigate through additional media objects in the context of browsing a virtual environment of a location at street level. In response to user input indicating a request to view a geographic location and/or an additional media object, street-level panoramic image data associated with the geographic location, in which one or more additional media objects also associated with the geographic location have been embedded, may be provided for display through the user interface. The user interface may be provided by a client device including one or more processors that receive hybrid image data produced by one or more processors of a server and display the image data to the user.
Type: Application
Filed: March 4, 2010
Publication date: July 14, 2011
Applicant: Microsoft Corporation
Inventors: Eyal Ofek, Michael Kroepfl, Julian R. Walker, Gonzalo A. Ramos, Blaise Hilary Aguera y Arcas
-
Publication number: 20100080489
Abstract: The first image may be displayed adjacent to the second image, where the second image is a three-dimensional image. An element may be selected in the first image and a matching element may be selected in the second image. A selection may be permitted to view a merged view, where the merged view is the first image displayed over the second image by varying the opaqueness of the images. If the merged view is not acceptable, the method may repeat; if the merged view is acceptable, the first view may be aligned onto the second view and the merged view may be stored as a merged image.
Type: Application
Filed: September 30, 2008
Publication date: April 1, 2010
Applicant: Microsoft Corporation
Inventors: Billy Chen, Eyal Ofek, Gonzalo Ramos, Michael F. Cohen, Steven M. Drucker
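"Varying the opaqueness of the images" corresponds to a standard per-pixel linear blend. A minimal sketch, assuming equally sized grayscale images stored as nested lists (names are illustrative, not from the patent):

```python
def blend(img_a, img_b, alpha):
    """Per-pixel linear blend of two equally sized grayscale images:
    alpha=1.0 shows only img_a, alpha=0.0 shows only img_b."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return [
        [alpha * a + (1 - alpha) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]
```

Sweeping `alpha` with a slider lets the user judge whether the two views are acceptably registered before storing the merged result.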
-
Publication number: 20100073398
Abstract: A visual summarization of a web page is generated. This generally involves identifying at least one of: an image that is exemplary of the page content, text that is exemplary of the page content, and a logo associated with the web page. The exemplary image and logo, if identified, are scaled to prescribed sizes. The exemplary image can act as a background image for the summarization, or a scaled version of at least a portion of the web page can act as the background image. In the latter case, if an exemplary image was identified, it is overlaid onto the background image at a prescribed location. In either case, if a logo was identified, it is also overlaid onto the background image at a prescribed location. If exemplary text was identified, a text area in the background image is identified and at least some of the exemplary text is inserted.
Type: Application
Filed: September 22, 2008
Publication date: March 25, 2010
Applicant: Microsoft Corporation
Inventors: Danyel Fisher, Jaime B. Teevan, Steven M. Drucker, Edward Cutrell, Gonzalo A. Ramos, Joseph Pitt, Paul Andre
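Scaling an element "to a prescribed size" while keeping its aspect ratio is the one concrete computation in this abstract. A minimal sketch (the function name and no-upscaling rule are assumptions, not from the patent):

```python
def fit_within(width, height, max_w, max_h):
    """Largest size no bigger than (max_w, max_h) that preserves the
    aspect ratio of (width, height); never upscales."""
    scale = min(max_w / width, max_h / height, 1.0)
    return (round(width * scale), round(height * scale))
```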
-
Patent number: 7663620
Abstract: Providing axonometric views of layers containing objects while preserving the visual attributes of the objects is disclosed. A group of objects, e.g., overlapping objects, is determined. Layer dimensions are calculated such that each object in the group is encompassed by a layer. Objects are placed in the layers and the layers are displayed in axonometric views. Visual cues to indicate selected layers are provided. Controls to adjust the depth of the layers and to enable moving objects in the selected layer are also provided.
Type: Grant
Filed: December 5, 2005
Date of Patent: February 16, 2010
Assignee: Microsoft Corporation
Inventors: George G Robertson, Daniel C Robbins, Desney S Tan, Kenneth P Hinckley, Maneesh Agrawala, Mary P Czerwinski, Patrick Markus Baudisch, Gonzalo A Ramos
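The two computational steps named in this abstract, determining a group of overlapping objects and calculating layer dimensions that encompass the group, can be sketched as follows. This is a hypothetical illustration with invented names, assuming axis-aligned rectangles `(x0, y0, x1, y1)`:

```python
def rects_overlap(a, b):
    """True if two axis-aligned rects (x0, y0, x1, y1) overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def group_overlapping(rects):
    """Partition rects into transitively overlapping groups."""
    groups = []
    for r in rects:
        near = [g for g in groups if any(rects_overlap(r, s) for s in g)]
        merged = [r]
        for g in near:
            merged.extend(g)   # r may bridge several existing groups
            groups.remove(g)
        groups.append(merged)
    return groups

def layer_bounds(group):
    """Smallest rect (the 'layer') encompassing every object in the group."""
    return (min(r[0] for r in group), min(r[1] for r in group),
            max(r[2] for r in group), max(r[3] for r in group))
```

Each group's bounding rect would then be rendered as a separate layer in the axonometric view.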
-
Patent number: 7636794
Abstract: Methods and apparatus of the various embodiments allow the coordination of resources of devices to jointly execute tasks or perform actions on one of the devices. In the method, a first gesture input is received at a first mobile computing device. A second gesture input is received at a second mobile computing device. In response, a determination is made as to whether the second gesture is accepted at the initiating device. If it is determined that the second gesture input is accepted, then resources of the devices are combined to jointly execute a particular task associated with the shared resources.
Type: Grant
Filed: October 31, 2005
Date of Patent: December 22, 2009
Assignee: Microsoft Corporation
Inventors: Gonzalo A. Ramos, Kenneth P. Hinckley
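The two-device handshake can be sketched as a tiny state machine. The acceptance rule here (both devices must report the same gesture) is an assumption for illustration only; the patent leaves the acceptance test to the initiating device:

```python
class PairingSession:
    """Collects one gesture report per device and accepts a joint task
    only when two devices have reported matching gestures (hypothetical
    acceptance rule)."""

    def __init__(self):
        self.gestures = {}

    def receive(self, device_id, gesture):
        """Record a gesture; return True once the pairing is accepted."""
        self.gestures[device_id] = gesture
        return len(self.gestures) >= 2 and len(set(self.gestures.values())) == 1
```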
-
Patent number: 7523405
Abstract: Displaying the relative depth of 2D image objects while preserving the visual attributes of the objects is disclosed. After an object group is determined, the members of the object group are temporarily moved away from a center location while preserving the object group members' positions relative to each other in the X-Y plane. A depth well is displayed at the center location and each object group member is connected to a ring-beam in the depth well. In response to a control action indicating a relative depth adjustment of an object group member relative to the remaining object group members, the depth of the object relative to the remaining object group members is changed. In response to a control action indicating the depth adjustment is complete, object group members are returned to their original positions in the X-Y plane with the adjusted object displayed at the object's new relative depth.
Type: Grant
Filed: November 16, 2005
Date of Patent: April 21, 2009
Assignee: Microsoft Corporation
Inventors: George G Robertson, Daniel C Robbins, Desney S Tan, Kenneth P Hinckley, Maneesh Agrawala, Mary P Czerwinski, Patrick Markus Baudisch, Gonzalo A Ramos
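The underlying data operation, changing one object's depth relative to the rest of its group while leaving every other object's relative order (and X-Y position) untouched, reduces to a z-order reinsertion. A minimal sketch with invented names; the depth-well widget itself is purely the patent's UI layer on top of this:

```python
def set_relative_depth(z_order, obj, new_index):
    """Return a new back-to-front z-order with `obj` moved to `new_index`;
    all other objects keep their relative order."""
    rest = [o for o in z_order if o != obj]
    rest.insert(new_index, obj)
    return rest
```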
-
Patent number: 7454717
Abstract: A pen-based user interface (PBUI) that facilitates input of a delimiter to a scope in a substantially uninterrupted stroke for generating a selection-action gesture phrase. Four delimiter techniques are provided: a multi-stroke delimiter, a button delimiter, a timeout delimiter, and a pigtail delimiter. The pigtail delimiter uses a small loop to delimit the gesture. The delimiter techniques support integrated scope selection, command activation, and direct manipulation all in a single fluid pen gesture. The delimiter techniques can also be employed to terminate a complex scope consisting of a sequence of multiple pen strokes.
Type: Grant
Filed: October 20, 2004
Date of Patent: November 18, 2008
Assignee: Microsoft Corporation
Inventors: Kenneth P Hinckley, Patrick M Baudisch, Gonzalo A Ramos, Francois V Guimbretiere
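The pigtail delimiter's "small loop" can be detected as a self-intersection of the pen's polyline path. A minimal sketch, assuming the stroke is a list of sampled (x, y) points; the real recognizer would also bound the loop's size and recency, which this sketch omits:

```python
def segments_intersect(p, q, r, s):
    """True if segment p-q properly crosses segment r-s
    (ignores degenerate collinear-overlap cases)."""
    def ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return ccw(p, r, s) != ccw(q, r, s) and ccw(p, q, r) != ccw(p, q, s)

def has_pigtail(stroke):
    """True if any two non-adjacent segments of the stroke cross,
    i.e. the pen path closes a loop."""
    segs = list(zip(stroke, stroke[1:]))
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):  # skip segments sharing an endpoint
            if segments_intersect(*segs[i], *segs[j]):
                return True
    return False
```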
-
Publication number: 20070126732
Abstract: Providing axonometric views of layers containing objects while preserving the visual attributes of the objects is disclosed. A group of objects, e.g., overlapping objects, is determined. Layer dimensions are calculated such that each object in the group is encompassed by a layer. Objects are placed in the layers and the layers are displayed in axonometric views. Visual cues to indicate selected layers are provided. Controls to adjust the depth of the layers and to enable moving objects in the selected layer are also provided.
Type: Application
Filed: December 5, 2005
Publication date: June 7, 2007
Applicant: Microsoft Corporation
Inventors: George Robertson, Daniel Robbins, Desney Tan, Kenneth Hinckley, Maneesh Agrawala, Mary Czerwinski, Patrick Baudisch, Gonzalo Ramos
-
Publication number: 20070124503
Abstract: Methods and apparatus of the various embodiments allow the coordination of resources of devices to jointly execute tasks or perform actions on one of the devices. In the method, a first gesture input is received at a first mobile computing device. A second gesture input is received at a second mobile computing device. In response, a determination is made as to whether the second gesture is accepted at the initiating device. If it is determined that the second gesture input is accepted, then resources of the devices are combined to jointly execute a particular task associated with the shared resources.
Type: Application
Filed: October 31, 2005
Publication date: May 31, 2007
Applicant: Microsoft Corporation
Inventors: Gonzalo Ramos, Kenneth Hinckley
-
Publication number: 20070113198
Abstract: Displaying the relative depth of 2D image objects while preserving the visual attributes of the objects is disclosed. After an object group is determined, the members of the object group are temporarily moved away from a center location while preserving the object group members' positions relative to each other in the X-Y plane. A depth well is displayed at the center location and each object group member is connected to a ring-beam in the depth well. In response to a control action indicating a relative depth adjustment of an object group member relative to the remaining object group members, the depth of the object relative to the remaining object group members is changed. In response to a control action indicating the depth adjustment is complete, object group members are returned to their original positions in the X-Y plane with the adjusted object displayed at the object's new relative depth.
Type: Application
Filed: November 16, 2005
Publication date: May 17, 2007
Applicant: Microsoft Corporation
Inventors: George Robertson, Daniel Robbins, Desney Tan, Kenneth Hinckley, Maneesh Agrawala, Mary Czerwinski, Patrick Baudisch, Gonzalo Ramos
-
Publication number: 20060085767
Abstract: A pen-based user interface (PBUI) that facilitates input of a delimiter to a scope in a substantially uninterrupted stroke for generating a selection-action gesture phrase. Four delimiter techniques are provided: a multi-stroke delimiter, a button delimiter, a timeout delimiter, and a pigtail delimiter. The pigtail delimiter uses a small loop to delimit the gesture. The delimiter techniques support integrated scope selection, command activation, and direct manipulation all in a single fluid pen gesture. The delimiter techniques can also be employed to terminate a complex scope consisting of a sequence of multiple pen strokes.
Type: Application
Filed: October 20, 2004
Publication date: April 20, 2006
Applicant: Microsoft Corporation
Inventors: Kenneth Hinckley, Patrick Baudisch, Gonzalo Ramos, Francois Guimbretiere