Patents by Inventor Janet Galore
Janet Galore has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11170817
Abstract: Embodiments herein describe a video editor that can identify and track objects (e.g., products) in a video. The video editor identifies a particular object in one frame of the video and tracks the location of the object in the video. The video editor can update a position of an indicator that tracks the location of the object in the video. In addition, the video editor can identify an identification (ID) of the object, which the editor can use to suggest annotations that provide additional information about the object. Once modified, the video is displayed on a user device, and when the viewer sees an object she is interested in, she can pause the video, which causes the indicator to appear. The user can select the indicator, which prompts the user device to display the annotations corresponding to the object.
Type: Grant
Filed: April 27, 2020
Date of Patent: November 9, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Dominick Khanh Pham, Sven Daehne, Janet Galore, Damon Jon Ganem
-
Patent number: 11120490
Abstract: A video segmenting system identifies a product for sale in a video and determines one or more attributes of audio and video content within the video. The video segmenting system determines a video segment within the video that is associated with the product for sale, based on the attributes. The video segmenting system generates a tag that associates the product for sale with the video segment and sends an indication of the tag to a user device. Once the video is played on a user device, the user device detects a search query about the product for sale. Using the tag, the user device can display a marker on the user device corresponding to the location of the video segment within the video.
Type: Grant
Filed: June 5, 2019
Date of Patent: September 14, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Dominick Khanh Pham, Sven Daehne, Mike Dodge, Janet Galore
-
Patent number: 10897637
Abstract: Techniques described herein include systems and methods for synchronizing multiple content streams for an event. A reference time point for an event being live-streamed by a plurality of streaming devices may be maintained. A plurality of content streams and manifest files may be received from streaming devices for the event, where a manifest file includes a segment template and a timeline for a segment of content included in a content stream. The timeline may identify a capture time for the content generated by an associated streaming device of the streaming devices. The plurality of content streams may be synchronized by modifying the manifest files based on the capture times included in the manifest files and the reference time point. The plurality of content streams and the modified manifest files may be transmitted to a content viewing computer with instructions for synchronizing playback.
Type: Grant
Filed: September 20, 2018
Date of Patent: January 19, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Dominick Pham, Charles Dorner, Xerxes Irani, Janet Galore
-
Patent number: 10863230
Abstract: Techniques described herein include systems and methods for identifying areas of a user interface to position overlay content without obscuring primary content. A scene in a content stream may be identified based on one or more user interface elements included in the content stream. Boundaries and positions of the one or more user interface elements may be identified in the scene based on an edge detection algorithm. A prominence value may be determined for a container that corresponds to an area of a user interface that includes the one or more user interface elements, based on aggregate user input for the scene. Instructions for updating the scene may be transmitted to a user device to incorporate an overlay that includes containers that correspond to areas of the user interface, enabling a user to place an overlay user interface element in a particular container based on the prominence value.
Type: Grant
Filed: September 21, 2018
Date of Patent: December 8, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Dominick Pham, Janet Galore, Xerxes Irani
-
Publication number: 20200381018
Abstract: Embodiments herein describe a video editor that can identify and track objects (e.g., products) in a video. The video editor identifies a particular object in one frame of the video and tracks the location of the object in the video. The video editor can update a position of an indicator that tracks the location of the object in the video. In addition, the video editor can identify an identification (ID) of the object, which the editor can use to suggest annotations that provide additional information about the object. Once modified, the video is displayed on a user device, and when the viewer sees an object she is interested in, she can pause the video, which causes the indicator to appear. The user can select the indicator, which prompts the user device to display the annotations corresponding to the object.
Type: Application
Filed: April 27, 2020
Publication date: December 3, 2020
Inventors: Dominick Khanh Pham, Sven Daehne, Janet Galore, Damon Jon Ganem
-
Patent number: 10699750
Abstract: Embodiments herein describe a video editor that can identify and track objects (e.g., products) in a video. The video editor identifies a particular object in one frame of the video and tracks the location of the object in the video. The video editor can update a position of an indicator that tracks the location of the object in the video. In addition, the video editor can identify an identification (ID) of the object, which the editor can use to suggest annotations that provide additional information about the object. Once modified, the video is displayed on a user device, and when the viewer sees an object she is interested in, she can pause the video, which causes the indicator to appear. The user can select the indicator, which prompts the user device to display the annotations corresponding to the object.
Type: Grant
Filed: May 30, 2019
Date of Patent: June 30, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Dominick Khanh Pham, Sven Daehne, Janet Galore, Damon Jon Ganem
-
Patent number: 10657176
Abstract: A video tagging system that can generate tags corresponding to associations of object-related keywords mentioned in a video to time instances in the video is described. The video tagging system identifies a particular object associated with a video. Using a transcription of audio content within the video, the video tagging system determines a keyword mentioned in the audio content that is associated with the object and a time instance within a timeline of the video when the keyword is mentioned. The video tagging system generates a tag that associates the keyword with the time instance and sends an indication of the tag to a user device. Once the video is displayed on the user device, the user can search for the keyword. This prompts the user device to display a marker indicating the time instance when the keyword is mentioned.
Type: Grant
Filed: June 11, 2019
Date of Patent: May 19, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Dominick Khanh Pham, Sven Daehne, Mike Dodge, Janet Galore
-
Patent number: 9468848
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Grant
Filed: December 12, 2013
Date of Patent: October 18, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar E. Murillo, Andy Wilson, Alex A. Kipman, Janet Galore
-
Patent number: 8943420
Abstract: The claimed subject matter relates to an architecture that can enhance an experience associated with indicia related to a local environment. In particular, the architecture can receive an image that depicts a view of the local environment including a set of entities represented in the image. One or more of the entities can be matched or correlated to modeled entities included in a geospatial model of the environment, potentially based upon location and direction, in order to scope or frame the view depicted in the image to a modeled view. In addition, the architecture can select additional content that can be presented. The additional content typically relates to services or data associated with modeled entities included in the geospatial model or associated with modeled entities included in an image-based data store.
Type: Grant
Filed: June 18, 2009
Date of Patent: January 27, 2015
Assignee: Microsoft Corporation
Inventors: Flora P. Goldthwaite, Brett D. Brewer, Eric I-Chao Chang, Jonathan C. Cluts, Karim T. Farouki, Gary W. Flake, Janet Galore, Jason Garms, Abhiram G. Khune, Oscar Murillo, Sven Pleyer
-
Publication number: 20140109023
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Application
Filed: December 12, 2013
Publication date: April 17, 2014
Applicant: Microsoft Corporation
Inventors: Oscar E. Murillo, Andy Wilson, Alex A. Kipman, Janet Galore
-
Patent number: 8631355
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Grant
Filed: January 8, 2010
Date of Patent: January 14, 2014
Assignee: Microsoft Corporation
Inventors: Oscar E. Murillo, Andy Wilson, Alex A. Kipman, Janet Galore
-
Publication number: 20110173204
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Application
Filed: January 8, 2010
Publication date: July 14, 2011
Applicant: Microsoft Corporation
Inventors: Oscar E. Murillo, Andy Wilson, Alex A. Kipman, Janet Galore
-
Publication number: 20100332313
Abstract: The claimed subject matter provides a system and/or a method that facilitates user-selectable advertising networks. Advertising content can be formed into cohesive subsets of advertising. These subsets can be related to criteria to facilitate selection between available subsets of advertising content. A selection component can facilitate selection of the available subsets of advertising content based on these criteria. The criteria can be related to user preferences. Further, the criteria can relate to explicit user preferences such as opt-in or opt-out indicia. The user can be presented with more relevant advertising content where user selection of advertising networks occurs.
Type: Application
Filed: June 25, 2009
Publication date: December 30, 2010
Applicant: Microsoft Corporation
Inventors: John M. Miller, Janet Galore, Alexander Gounares, Eric Horvitz, Karim Farouki, Patrick Nguyen, Brett Brewer, Jayaram N.M. Nanduri, Milind Mahajan, Oscar Murillo
-
Publication number: 20100332496
Abstract: The claimed subject matter provides a system and/or a method that facilitates accessing information content based at least in part on relevancy to a user by leveraging user ambitions. User ambitions can take the form of to-do lists, calendar items, goals, or interests. These can be leveraged with or without contextual information, historical data, user profiles, and the like to determine the relevancy of content to a specific user. This can facilitate determining what content is accessible to a user based on relevance. A threshold relevance level can be dynamically adjusted.
Type: Application
Filed: June 26, 2009
Publication date: December 30, 2010
Applicant: Microsoft Corporation
Inventors: Eric Horvitz, Brett Brewer, Melissa W. Dunn, Janet Galore, Abhiram G. Khune, Sin Lew, Timothy D. Sharpe
-
Publication number: 20100325563
Abstract: The claimed subject matter relates to an architecture that can enhance an experience associated with indicia related to a local environment. In particular, the architecture can receive an image that depicts a view of the local environment including a set of entities represented in the image. One or more of the entities can be matched or correlated to modeled entities included in a geospatial model of the environment, potentially based upon location and direction, in order to scope or frame the view depicted in the image to a modeled view. In addition, the architecture can select additional content that can be presented. The additional content typically relates to services or data associated with modeled entities included in the geospatial model or associated with modeled entities included in an image-based data store.
Type: Application
Filed: June 18, 2009
Publication date: December 23, 2010
Applicant: Microsoft Corporation
Inventors: Flora P. Goldthwaite, Brett D. Brewer, Eric I-Chao Chang, Jonathan C. Cluts, Karim T. Farouki, Gary W. Flake, Janet Galore, Jason Garms, Abhiram G. Khune, Oscar Murillo, Sven Pleyer