Patents by Inventor Jae Pum Park
Jae Pum Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9143601
Abstract: Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated.
Type: Grant
Filed: November 9, 2011
Date of Patent: September 22, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Udiyan Padmanabhan, William Messing, Martin Shetter, Tatiana Gershanovich, Michael J. Ricker, Jannes Paul Peters, Raman Kumar Sarin, Joseph H. Matthews, III, Monica Gonzalez, Jae Pum Park
-
Patent number: 9031847
Abstract: A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command.
Type: Grant
Filed: November 15, 2011
Date of Patent: May 12, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raman Kumar Sarin, Joseph H. Matthews, III, James Kai Yu Lau, Monica Estela Gonzalez Veron, Jae Pum Park
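The command-matching flow the abstract describes can be illustrated with a short sketch. This is not the patented implementation; the function and command names are hypothetical, and real systems would match audio features rather than recognized text.

```python
# Hypothetical sketch: recognized speech is compared against an expected
# voice command and, on a match, an image-capture callback is invoked.

DEFAULT_COMMAND = "capture"  # assumed trigger word, not from the patent

def handle_audio_input(recognized_text, capture_fn, expected=DEFAULT_COMMAND):
    """Invoke capture_fn and return True when the input matches the command."""
    if recognized_text.strip().lower() == expected:
        capture_fn()
        return True
    return False

captured = []
handle_audio_input("Capture", lambda: captured.append("image-001"))  # matches
handle_audio_input("hello", lambda: captured.append("image-002"))    # ignored
```

In practice the comparison would run against audio input data with an audio matching algorithm, as the abstract notes, rather than an exact string test.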
-
Patent number: 9008639
Abstract: Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation.
Type: Grant
Filed: March 11, 2011
Date of Patent: April 14, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: James M. Lyon, James Kai Yu Lau, Raman Kumar Sarin, Jae Pum Park, Monica Estela Gonzalez Veron
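The correlation step the abstract mentions can be sketched as follows. The exemplar template values and the threshold here are invented for illustration; only the overall shape (correlate accelerometer data against exemplar whack data, then gate the audio on the result) comes from the abstract.

```python
import math

def correlation(signal, template):
    """Pearson correlation between a sensor window and an exemplar template."""
    n = len(template)
    ms, mt = sum(signal) / n, sum(template) / n
    num = sum((s - ms) * (t - mt) for s, t in zip(signal, template))
    den = math.sqrt(sum((s - ms) ** 2 for s in signal) *
                    sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

# Illustrative exemplar: a sharp spike followed by a rebound.
WHACK_TEMPLATE = [0.0, 0.2, 1.0, -0.8, 0.1]

def should_silence(accel_window, threshold=0.9):
    """Silence the ringer when the window correlates with a whack event."""
    return correlation(accel_window, WHACK_TEMPLATE) >= threshold
```

A window resembling the exemplar crosses the threshold and silences the audio signal; steady-state noise does not.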
-
Publication number: 20140333670
Abstract: Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations.
Type: Application
Filed: May 9, 2014
Publication date: November 13, 2014
Applicant: Amazon Technologies, Inc.
Inventors: Bryan Todd Agnetta, Venkata Nagesh Babu Balivada, Blair Harold Beebe, Joseph Robert Buchta, Vibhunandan Gavini, Catherine Ann Hendricks, Brian Peter Kralyevich, Santhosh Kumar Paraliyil Krishnankutty, Richard Leigh Mains, Garret Martin Miller Graaf, Jae Pum Park, Sean Anthony Rooney, Marc Anthony Salazar, Nino Yuniardi
-
Publication number: 20140333530
Abstract: Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations.
Type: Application
Filed: May 9, 2014
Publication date: November 13, 2014
Applicant: Amazon Technologies, Inc.
Inventors: Bryan Todd Agnetta, Brian Peter Kralyevich, Jason Phillip Kriese, Jae Pum Park, Sean Anthony Rooney
-
Publication number: 20140337800
Abstract: A computing device can utilize a recognition mode wherein an interface utilizes graphical elements, such as virtual fireflies or other such elements, to indicate objects that are recognized or identified. As objects are recognized, fireflies perform one or more specified actions to indicate recognition. A ribbon or other user-selectable icon is displayed that indicates a specific action that the device can perform with respect to the respective object. As additional objects are recognized, additional ribbons are created and older ribbons can be moved off screen and stored for subsequent retrieval or search. The fireflies disperse when the objects are no longer represented in captured sensor data, and can be animated to move towards representations of new objects as features of those objects are identified as potential object features, in order to communicate a level of recognition for a current scene or environment.
Type: Application
Filed: December 20, 2013
Publication date: November 13, 2014
Applicant: Amazon Technologies, Inc.
Inventors: Timothy Thomas Gray, Gray Anthony Salazar, Steven Steven Sommer, Charles Eugene Cummins, Sean Anthony Rooney, Bryan Todd Agnetta, Jae Pum Park, Richard Leigh Mains, Suzan Marashi
-
Publication number: 20140337791
Abstract: Electronic devices, interfaces for electronic devices, and techniques for interacting with such interfaces and electronic devices are described. For instance, this disclosure describes an example electronic device that includes sensors, such as multiple front-facing cameras to detect orientation and/or location of the electronic device relative to an object and one or more inertial sensors. Users of the device may perform gestures on the device by moving the device in-air and/or by moving their head, face, or eyes relative to the device. In response to these gestures, the device may perform operations.
Type: Application
Filed: May 9, 2014
Publication date: November 13, 2014
Applicant: Amazon Technologies, Inc.
Inventors: Bryan Todd Agnetta, Aaron Michael Donsbach, Catherine Ann Hendricks, Brian Peter Kralyevich, Richard Leigh Mains, Jae Pum Park, Sean Anthony Rooney, Marc Anthony Salazar, Jason Glenn Silvis, Nino Yuniardi
-
Publication number: 20130324194
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Application
Filed: August 8, 2013
Publication date: December 5, 2013
Applicant: Microsoft Corporation
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
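The sequence the abstract walks through (covered in a pocket, then uncovered, then covered again at the ear) is essentially a small state machine. The sketch below is illustrative only; names and the boolean sensor model are assumptions, not the patented method.

```python
# Illustrative state machine: answer only after the proximity sensor goes
# covered -> uncovered -> covered again while a call is ringing.

def auto_answer(proximity_states):
    """proximity_states: sequence of booleans (True = object near sensor).

    Returns True when the pocket/removed/ear pattern is observed."""
    expected = [True, False, True]  # in pocket, taken out, raised to ear
    phase = 0
    for near in proximity_states:
        if near == expected[phase]:
            phase += 1
            if phase == len(expected):
                return True  # answer the call without further user input
    return False
```

A real implementation would also gate on the call still ringing and debounce the sensor; this only shows the state transitions.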
-
Patent number: 8509842
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Grant
Filed: February 18, 2011
Date of Patent: August 13, 2013
Assignee: Microsoft Corporation
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
-
Publication number: 20130124207
Abstract: A computing device (e.g., a smart phone, a tablet computer, digital camera, or other device with image capture functionality) causes an image capture device to capture one or more digital images based on audio input (e.g., a voice command) received by the computing device. For example, a user's voice (e.g., a word or phrase) is converted to audio input data by the computing device, which then compares (e.g., using an audio matching algorithm) the audio input data to an expected voice command associated with an image capture application. In another aspect, a computing device activates an image capture application and captures one or more digital images based on a received voice command. In another aspect, a computing device transitions from a low-power state to an active state, activates an image capture application, and causes a camera device to capture digital images based on a received voice command.
Type: Application
Filed: November 15, 2011
Publication date: May 16, 2013
Applicant: Microsoft Corporation
Inventors: Raman Kumar Sarin, Joseph H. Matthews, III, James Kai Yu Lau, Monica Estela Gonzalez Veron, Jae Pum Park
-
Publication number: 20130117365
Abstract: Exemplary methods, apparatus, and systems are disclosed for capturing, organizing, sharing, and/or displaying media. For example, using embodiments of the disclosed technology, a unified playback and browsing experience for a collection of media can be created automatically. For instance, heuristics and metadata can be used to assemble and add narratives to the media data. Furthermore, this representation of media can recompose itself dynamically as more media is added to the collection. While a collection may use a single user's content, sometimes media that is desirable to include in the collection is captured by friends and/or others at the same event. In certain embodiments, media content related to the event can be automatically collected and shared among selected groups. Further, in some embodiments, new media can be automatically incorporated into a media collection associated with the event, and the playback experience dynamically updated.
Type: Application
Filed: November 9, 2011
Publication date: May 9, 2013
Applicant: Microsoft Corporation
Inventors: Udiyan Padmanabhan, William Messing, Martin Shetter, Tatiana Gershanovich, Michael J. Ricker, Jannes Paul Peters, Raman Kumar Sarin, Joseph H. Matthews, III, Monica Gonzalez, Jae Pum Park
-
Patent number: 8375007
Abstract: A method to expose status information is provided. The status information is associated with metadata extracted from multimedia files and stored in a metadata database. The metadata extracted from the multimedia files is stored in a read queue to allow a background thread to process the metadata and populate the metadata database. Additionally, the metadata database may be updated to include user-defined metadata, which is written back to the multimedia files. The user-defined metadata is included in a write queue and is written to the multimedia files associated with the user-defined metadata. The status of the read and write queues is exposed to a user through a graphical user interface. The status may include the list of multimedia files included in the read and write queues, the priority of each multimedia file, and the number of remaining multimedia files.
Type: Grant
Filed: June 21, 2011
Date of Patent: February 12, 2013
Assignee: Microsoft Corporation
Inventors: Alexander S. Brodie, Benjamin L. Perry, David R. Parlin, Jae Pum Park, Michael J. Gilmore, Scott E. Dart
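The read-queue pattern in the abstract (enqueue extracted metadata, drain it with a background thread into the database, expose the queue status) can be sketched with the standard library. All names here are illustrative, not the patented implementation, and the write queue is omitted for brevity.

```python
import queue
import threading

metadata_db = {}            # stands in for the metadata database
read_queue = queue.Queue()  # holds (path, metadata) pairs to process

def reader_worker():
    """Background thread: drain the read queue into the database."""
    while True:
        item = read_queue.get()
        if item is None:     # sentinel: stop the thread
            break
        path, metadata = item
        metadata_db[path] = metadata
        read_queue.task_done()

thread = threading.Thread(target=reader_worker, daemon=True)
thread.start()

read_queue.put(("a.jpg", {"title": "Beach"}))
read_queue.put(("b.mp4", {"title": "Party"}))
read_queue.join()            # wait until every queued file is processed

# Status exposed to the UI: remaining files and processed count.
status = {"remaining": read_queue.qsize(), "processed": len(metadata_db)}

read_queue.put(None)         # shut the worker down
thread.join()
```

The real system also tracks per-file priorities and the file list itself, per the abstract; those fields would simply be carried on the queued items.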
-
Publication number: 20120231838
Abstract: Techniques and tools are described for controlling an audio signal of a mobile device. For example, information indicative of acceleration of the mobile device can be received and correlation between the information indicative of acceleration and exemplar whack event data can be determined. An audio signal of the mobile device can be controlled based on the correlation.
Type: Application
Filed: March 11, 2011
Publication date: September 13, 2012
Applicant: Microsoft Corporation
Inventors: James M. Lyon, James Kai Yu Lau, Raman Kumar Sarin, Jae Pum Park, Monica Estela Gonzalez Veron
-
Publication number: 20120214542
Abstract: The present disclosure relates to a mobile phone and a method for answering such a phone automatically without user input. In one embodiment, the mobile phone detects that a call is being received. A proximity sensor is then used to detect the presence of a nearby object. For example, this allows a determination to be made whether the mobile phone is within a pocket of the user while the phone is ringing. Then a determination is made whether the proximity sensor changes states. For example, if a user removes the phone from their pocket, the proximity sensor switches from detecting something proximal to detecting that the phone is no longer in the user's pocket. Next, a determination is made whether the proximity sensor is again next to an object, such as an ear. If so, the mobile phone can be automatically answered without further user input.
Type: Application
Filed: February 18, 2011
Publication date: August 23, 2012
Applicant: Microsoft Corporation
Inventors: Raman Kumar Sarin, Monica Estela Gonzalez Veron, Kenneth Paul Hinckley, Sumit Kumar, James Kai Yu Lau, Joseph H. Matthews, III, Jae Pum Park
-
Publication number: 20120158290
Abstract: A navigation user interface displays a route to be navigated over a road view map. If a user selects a particular segment of the displayed route, text-based directions associated with the particular route segment are displayed, with a selectable area to return to the map view. Additionally, as the road view map is zoomed in, when the zoom level reaches a threshold zoom level, the navigation user interface automatically transitions to displaying a satellite view map overlaid with at least a portion of the route to be navigated.
Type: Application
Filed: December 17, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Aarti Bharathan, Liang Chen, Jae Pum Park
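The zoom-threshold transition in the abstract reduces to a simple comparison. The threshold value below is invented for illustration; the abstract does not specify one.

```python
# Sketch of the automatic road-to-satellite transition: below the
# threshold the road view map is shown, at or above it the UI switches
# to the satellite view with the route overlaid.

SATELLITE_ZOOM_THRESHOLD = 17  # assumed value, not from the publication

def map_view_for_zoom(zoom_level, threshold=SATELLITE_ZOOM_THRESHOLD):
    """Pick the map style for the current zoom level."""
    return "satellite" if zoom_level >= threshold else "road"
```

The interesting design point the abstract describes is that the transition is automatic, driven by zoom level alone, rather than a mode the user toggles.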
-
Publication number: 20110252069
Abstract: A method to expose status information is provided. The status information is associated with metadata extracted from multimedia files and stored in a metadata database. The metadata extracted from the multimedia files is stored in a read queue to allow a background thread to process the metadata and populate the metadata database. Additionally, the metadata database may be updated to include user-defined metadata, which is written back to the multimedia files. The user-defined metadata is included in a write queue and is written to the multimedia files associated with the user-defined metadata. The status of the read and write queues is exposed to a user through a graphical user interface. The status may include the list of multimedia files included in the read and write queues, the priority of each multimedia file, and the number of remaining multimedia files.
Type: Application
Filed: June 21, 2011
Publication date: October 13, 2011
Applicant: Microsoft Corporation
Inventors: Alexander S. Brodie, Benjamin L. Perry, David R. Parlin, Jae Pum Park, Michael J. Gilmore, Scott E. Dart
-
Patent number: 7987160
Abstract: A method to expose status information is provided. The status information is associated with metadata extracted from multimedia files and stored in a metadata database. The metadata extracted from the multimedia files is stored in a read queue to allow a background thread to process the metadata and populate the metadata database. Additionally, the metadata database may be updated to include user-defined metadata, which is written back to the multimedia files. The user-defined metadata is included in a write queue and is written to the multimedia files associated with the user-defined metadata. The status of the read and write queues is exposed to a user through a graphical user interface. The status may include the list of multimedia files included in the read and write queues, the priority of each multimedia file, and the number of remaining multimedia files.
Type: Grant
Filed: January 30, 2006
Date of Patent: July 26, 2011
Assignee: Microsoft Corporation
Inventors: Alexander S. Brodie, Benjamin L. Perry, David R. Parlin, Jae Pum Park, Michael J. Gilmore, Scott E. Dart
-
Patent number: 7979790
Abstract: A method and system to manage rendering of multimedia content are provided. A theme specifies a collection of layouts defining multimedia content placement. The multimedia content is processed to extract one or more characteristics. The layouts are selected and populated with multimedia content based on the one or more characteristics associated with the multimedia content. The multimedia content is rendered by transitioning through the selected layouts.
Type: Grant
Filed: February 28, 2006
Date of Patent: July 12, 2011
Assignee: Microsoft Corporation
Inventors: Benjamin Nicholas Truelove, Chunkit Chan, Jae Pum Park, Shabbir Abbas Shahpurwala, William Mountain Lewis
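The select-and-populate step the abstract describes can be sketched with a toy theme. The layouts, slot counts, and the single "item count" characteristic are all assumptions for illustration; a real system would extract richer characteristics (orientation, faces, duration, and so on).

```python
# Hypothetical theme: a collection of layouts, each with a number of
# content slots. A layout is selected from a characteristic of the
# media (here, simply how many items are available), then populated.

THEME = [
    {"name": "single", "slots": 1},
    {"name": "pair", "slots": 2},
    {"name": "grid", "slots": 4},
]

def pick_layout(items, theme=THEME):
    """Pick the largest layout whose slot count the items can fill."""
    fitting = [layout for layout in theme if layout["slots"] <= len(items)]
    return max(fitting, key=lambda layout: layout["slots"]) if fitting else None

def populate(layout, items):
    """Fill the layout's slots with content, in order."""
    return {"layout": layout["name"], "content": items[:layout["slots"]]}
```

Rendering would then transition through a sequence of such populated layouts, per the abstract.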
-
Patent number: 7451407
Abstract: A system, a method and computer-readable media for presenting groups of items to a user. Items are divided into groups, and a group header is associated with each group. The items and group headers are presented on a screen display, and the displayed content is subject to navigational requests from a user. When one of the group headers is located near an edge of the screen display, its position is fixed to prevent the header from being removed from the screen display.
Type: Grant
Filed: November 30, 2005
Date of Patent: November 11, 2008
Assignee: Microsoft Corporation
Inventors: Alexander Brodie, Benjamin Truelove, David Parlin, Jae Pum Park, Scott Dart
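This is the now-familiar "sticky header" behavior, and the position fix can be sketched as a clamp. The coordinate model (y grows downward, 0 is the top edge) and the push-off rule when the next group arrives are assumptions for illustration.

```python
# Illustrative sticky-header computation: a group header scrolls normally
# until it reaches the top edge, where its position is fixed; it is only
# pushed off-screen when its group's bottom catches up to it.

def header_y(group_top, group_bottom, scroll_y, header_height):
    """On-screen y of a group header for the current scroll offset."""
    y = group_top - scroll_y
    if y < 0:  # header would scroll off the top edge: pin it there,
               # unless the group's bottom is pushing it off-screen
        y = min(0, group_bottom - scroll_y - header_height)
    return y
```

For a group spanning 100..400 with a 20px header: early in the scroll the header moves normally, mid-group it sits pinned at the top edge, and at the end of the group it slides off as the next header takes its place.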
-
Patent number: 7441182
Abstract: Systems and methods for digital negatives are described. In one aspect, a digital negative is created on a computing device from a digital image. The digital image is linked to the digital negative. In response to a save operation associated with the digital image, a new digital image is generated and bi-directionally connected to the digital negative. In response to a revert operation associated with the new digital image, contents of the new digital image are replaced with contents of the digital negative.
Type: Grant
Filed: October 23, 2003
Date of Patent: October 21, 2008
Assignee: Microsoft Corporation
Inventors: Craig Beilinson, Benjamin L. Perry, Christopher A. Evans, Clint Jorgenson, Jae Pum Park, Linda Hong, Pritvinath Obla, Anthony T. Chor, Wei Feng, Alexander Castro
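The create/save/revert cycle in the abstract can be sketched with a small in-memory model. The class and function names are hypothetical, and image contents are stood in for by byte strings.

```python
# Sketch of the digital-negative lifecycle: the negative snapshots the
# original contents, saves yield new images linked to the same negative,
# and revert restores the negative's contents.

class DigitalImage:
    def __init__(self, contents):
        self.contents = contents
        self.negative = None  # link between image and its negative

def create_negative(image):
    """Snapshot the image's contents and link the two bi-directionally."""
    negative = DigitalImage(image.contents)
    image.negative = negative
    negative.negative = image
    return negative

def save(image, new_contents):
    """A save generates a new image connected to the same negative."""
    edited = DigitalImage(new_contents)
    edited.negative = image.negative
    return edited

def revert(image):
    """Replace the image's contents with the negative's contents."""
    image.contents = image.negative.contents
    return image

original = DigitalImage(b"raw-pixels")
create_negative(original)
edited = save(original, b"cropped-pixels")
revert(edited)  # edits are discarded; the original pixels come back
```

The key idea preserved here is that destructive edits remain reversible because the negative, not the edited file, is the durable copy of the original.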