Patents by Inventor Pernilla Qvarfordt

Pernilla Qvarfordt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10037312
    Abstract: A gaze annotation method for an image includes: receiving a first user command to capture and display a captured image; receiving a second user command to create an annotation for the displayed image; in response to the second user command, receiving from a gaze tracking device a point-of-regard estimating the user's gaze in the displayed image; displaying an annotation anchor on the image proximate to the point-of-regard; and receiving a spoken annotation from the user and associating the spoken annotation with the annotation anchor. A gaze annotation method for a real-world scene includes: receiving a field of view and location information; receiving from the gaze tracking device a point-of-regard from the user located within the field of view; capturing and displaying a captured image of the field of view; while capturing the image, receiving a spoken annotation from the user; and displaying an annotation anchor on the image.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: July 31, 2018
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Diako Mardanbegi, Pernilla Qvarfordt
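    Sketch: The two methods above outline a simple capture-then-annotate loop. A minimal Python sketch of that flow follows; GazeAnnotator and the tracker, camera, recorder, and display interfaces are illustrative assumptions, not the patented implementation or any real API.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class Annotation:
            anchor: Tuple[int, int]   # point-of-regard in image coordinates
            audio: bytes              # the spoken annotation

        class GazeAnnotator:
            """Hypothetical sketch: anchor spoken notes at the user's gaze point."""

            def __init__(self, tracker, camera, recorder, display):
                self.tracker, self.camera = tracker, camera
                self.recorder, self.display = recorder, display
                self.image = None
                self.annotations: List[Annotation] = []

            def capture(self):
                # First user command: capture and display an image.
                self.image = self.camera.capture()
                self.display.show(self.image)

            def annotate(self):
                # Second user command: read the point-of-regard, show an anchor
                # near it, then record a spoken note and attach it to the anchor.
                anchor = self.tracker.point_of_regard()
                self.display.draw_anchor(self.image, anchor)
                self.annotations.append(Annotation(anchor, self.recorder.record()))
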
  • Publication number: 20160283455
    Abstract: A gaze annotation method for an image includes: receiving a first user command to capture and display a captured image; receiving a second user command to create an annotation for the displayed image; in response to the second user command, receiving from a gaze tracking device a point-of-regard estimating the user's gaze in the displayed image; displaying an annotation anchor on the image proximate to the point-of-regard; and receiving a spoken annotation from the user and associating the spoken annotation with the annotation anchor. A gaze annotation method for a real-world scene includes: receiving a field of view and location information; receiving from the gaze tracking device a point-of-regard from the user located within the field of view; capturing and displaying a captured image of the field of view; while capturing the image, receiving a spoken annotation from the user; and displaying an annotation anchor on the image.
    Type: Application
    Filed: March 24, 2015
    Publication date: September 29, 2016
    Inventors: Diako Mardanbegi, Pernilla Qvarfordt
  • Patent number: 9384420
    Abstract: A computing device classifies user activities. The device receives eye tracking data for a person viewing a page having multiple contiguous regions. The eye tracking data comprises a temporal sequence of fixations, where each fixation has a duration and a location. The device partitions the fixations into clusters, where each cluster has a consecutive sub-sequence of the fixations. The device assigns a provisional user activity label to each fixation based on a set of characteristics of the fixation. The device also groups together consecutive fixations that have the same label to partition the fixations into groups. For each group that matches a respective cluster, the device retains the provisional label assignment as a final user activity label assigned to each of the fixations in the respective group. The device also reconciles non-matching groups with non-matching clusters, using the regions, to form a set of non-overlapping modified groups.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: July 5, 2016
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Pernilla Qvarfordt
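    Sketch: The grouping step in the abstract above (merging consecutive equally-labeled fixations into groups) is easy to make concrete. A minimal Python sketch follows, with made-up fixation data and an illustrative label set:

        from itertools import groupby

        def group_by_label(fixations, labels):
            """Split a temporal fixation sequence into runs of equal labels."""
            groups, i = [], 0
            for label, run in groupby(labels):
                n = len(list(run))
                groups.append((label, fixations[i:i + n]))
                i += n
            return groups

        # Hypothetical data: (duration_seconds, (x, y)) fixations with
        # provisional labels 'R' (reading) and 'S' (scanning).
        fixes = [(0.25, (10, 5)), (0.30, (14, 5)), (0.10, (40, 90)), (0.12, (44, 92))]
        print(group_by_label(fixes, ["R", "R", "S", "S"]))
        # -> [('R', [(0.25, (10, 5)), (0.30, (14, 5))]), ('S', [(0.10, ...), (0.12, ...)])]
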
  • Publication number: 20160132752
    Abstract: A computing device classifies user activities. The device receives eye tracking data for a person viewing a page having multiple contiguous regions. The eye tracking data comprises a temporal sequence of fixations, where each fixation has a duration and a location. The device partitions the fixations into clusters, where each cluster has a consecutive sub-sequence of the fixations. The device assigns a provisional user activity label to each fixation based on a set of characteristics of the fixation. The device also groups together consecutive fixations that have the same label to partition the fixations into groups. For each group that matches a respective cluster, the device retains the provisional label assignment as a final user activity label assigned to each of the fixations in the respective group. The device also reconciles non-matching groups with non-matching clusters, using the regions, to form a set of non-overlapping modified groups.
    Type: Application
    Filed: January 13, 2016
    Publication date: May 12, 2016
    Inventor: Pernilla Qvarfordt
  • Patent number: 9256785
    Abstract: A computing device classifies user activities for a person interacting with a computer user interface using one or more user interface devices. The computing device receives eye tracking data for the person, which includes a sequence of fixations ordered temporally. Each fixation corresponds to a plurality of consecutive measured gaze points. Each fixation has a duration and location based on the corresponding gaze points. For each fixation, the computing device determines a plurality of features for the fixation, including characteristics of the fixation, context features based on preceding or subsequent fixations, and user interaction features based on information from the user interface devices during the fixation. The computing device assigns a user activity label to the fixation according to the features. The label is selected from a predefined set. The computing device then analyzes the fixations and their assigned user activity labels to make recommendations.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: February 9, 2016
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Pernilla Qvarfordt
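    Sketch: As a rough illustration of the three feature families named above (fixation characteristics, context from neighboring fixations, and concurrent user interaction), here is a hedged Python sketch; the field names, label set, and threshold rule are assumptions, not the patented features or classifier.

        def fixation_features(fixations, i, ui_events):
            f = fixations[i]
            return {
                "duration": f["duration"],                                    # characteristic
                "prev_duration": fixations[i - 1]["duration"] if i else 0.0,  # context
                "next_duration": (fixations[i + 1]["duration"]
                                  if i + 1 < len(fixations) else 0.0),        # context
                "keys_pressed": sum(1 for e in ui_events                      # interaction
                                    if e["kind"] == "key"
                                    and f["start"] <= e["time"] < f["end"]),
            }

        def label_fixation(features):
            # Stand-in for a classifier choosing from the predefined label set.
            if features["keys_pressed"]:
                return "typing"
            return "reading" if features["duration"] > 0.25 else "scanning"
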
  • Patent number: 9177285
    Abstract: Described is a system and method for controlling the state and capabilities of a meeting room automatically based on the content being presented. The system can detect certain states (such as a transition to a demo or a question-and-answer session) based on the content of slides, and can automatically switch displays and other devices in the meeting room to accommodate these new states.
    Type: Grant
    Filed: May 6, 2008
    Date of Patent: November 3, 2015
    Assignee: FUJI XEROX CO., LTD.
    Inventors: William Van Melle, Anthony Dunnigan, Eugene Golovchinsky, Scott Carter, Pernilla Qvarfordt
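    Sketch: A hedged sketch of the detection idea above: scan each slide's text for cues and reconfigure the room on a state change. The cue keywords and the room.apply call are illustrative assumptions, not the patented detector.

        STATE_CUES = {
            "demo": ("demo", "demonstration", "live system"),
            "qa": ("questions", "q&a", "discussion"),
        }

        def detect_state(slide_text, current="presentation"):
            text = slide_text.lower()
            for state, cues in STATE_CUES.items():
                if any(cue in text for cue in cues):
                    return state
            return current

        def on_slide_change(slide_text, state, room):
            new_state = detect_state(slide_text, state)
            if new_state != state:
                room.apply(new_state)  # e.g., switch displays, adjust lighting
            return new_state
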
  • Publication number: 20150177967
    Abstract: In embodiments, a user interface provides for manipulating one or more physical devices for use in a conference room setting. The user interface includes a touch screen for presenting a variety of options to a user. The touch screen includes controllers, such as buttons, to enable the user to select any one of the options. Each controller presents goals-oriented information, enabling the user to select a goal with a single controller while insulating the user from the underlying complex processes required to carry out that goal.
    Type: Application
    Filed: August 24, 2011
    Publication date: June 25, 2015
    Inventors: Maribeth Joy Back, Gene Golovchinsky, John Steven Boreczky, Anthony Eric Dunnigan, Pernilla Qvarfordt, William J. van Melle, Laurent Denoue
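    Sketch: The core idea in the entry above, one goal per button with the device steps hidden behind it, can be sketched as a small dispatch table. The goal names and device commands below are hypothetical.

        GOALS = {
            "Present from laptop": [("projector", "on"), ("input", "laptop"), ("lights", "dim")],
            "Video conference":    [("camera", "on"), ("codec", "dial"), ("lights", "full")],
        }

        def press(goal, devices):
            """User picks a goal; the UI performs the underlying steps."""
            for device, action in GOALS[goal]:
                devices.send(device, action)  # complexity stays behind the button
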
  • Publication number: 20150131850
    Abstract: A computing device classifies user activities for a person interacting with a computer user interface using one or more user interface devices. The computing device receives eye tracking data for the person, which includes a sequence of fixations ordered temporally. Each fixation corresponds to a plurality of consecutive measured gaze points. Each fixation has a duration and location based on the corresponding gaze points. For each fixation, the computing device determines a plurality of features for the fixation, including characteristics of the fixation, context features based on preceding or subsequent fixations, and user interaction features based on information from the user interface devices during the fixation. The computing device assigns a user activity label to the fixation according to the features. The label is selected from a predefined set. The computing device then analyzes the fixations and their assigned user activity labels to make recommendations.
    Type: Application
    Filed: November 12, 2013
    Publication date: May 14, 2015
    Applicant: FUJI XEROX CO., LTD.
    Inventor: Pernilla Qvarfordt
  • Patent number: 8909702
    Abstract: A system is provided that coordinates the operation of hardware devices and software applications in support of specific tasks such as holding a meeting. The system includes one or more computers connected by a network, at least one configuration repository component, at least one room control component, and one or more devices and applications for each room control component. Meeting presenters can configure a meeting, or they may use a default configuration. A meeting includes one or more presenters' configurations of devices and applications to accommodate multiple presenters simultaneously. The meeting configurations are stored by the configuration repository component. Each presenter's configuration comprises a subset of the one or more devices and applications. The operation of devices and applications in the meeting is coordinated by the room control component based on the presenters' configurations for the meeting.
    Type: Grant
    Filed: September 14, 2007
    Date of Patent: December 9, 2014
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Gene Golovchinsky, John Steven Boreczky, William J. van Melle, Maribeth Joy Back, Anthony Eric Dunnigan, Pernilla Qvarfordt
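    Sketch: Read as a data structure, the abstract above describes a per-meeting map of presenter configurations that a room controller applies on demand. A minimal sketch under that assumption follows; the class names, default configuration, and devices.send call are illustrative.

        class ConfigurationRepository:
            def __init__(self):
                self._meetings = {}  # meeting_id -> {presenter: {device: setting}}

            def save(self, meeting_id, presenter, config):
                self._meetings.setdefault(meeting_id, {})[presenter] = config

            def configs(self, meeting_id):
                return self._meetings.get(meeting_id, {})

        class RoomControl:
            DEFAULT = {"display": "podium"}

            def __init__(self, repo, devices):
                self.repo, self.devices = repo, devices

            def activate(self, meeting_id, presenter):
                # Apply the active presenter's configuration, or the default.
                config = self.repo.configs(meeting_id).get(presenter, self.DEFAULT)
                for device, setting in config.items():
                    self.devices.send(device, setting)
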
  • Patent number: 8601538
    Abstract: An automated test to tell computers and humans apart is disclosed, comprising displaying on a computer screen an animation composed of a foreground and a background, one of the foreground and background comprising a plurality of typographical characters and the other comprising a partial obstruction of the typographical characters, wherein the animation comprises relative motion between the background and foreground. The automated test may comprise displaying an image on a computer screen and requiring the user to perform an operation on the image to resolve an encoded solution. The test may also comprise displaying a video clip on a computer screen and requiring the user to provide an input corresponding to subject matter presented in the video clip.
    Type: Grant
    Filed: August 22, 2006
    Date of Patent: December 3, 2013
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Pernilla Qvarfordt, Eleanor G. Rieffel, David M. Hilbert
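    Sketch: A hedged sketch of the first variant above: the characters sit on one layer while an occluding mask drifts across the other, so no single frame shows the whole word but the motion reveals it over time. Rendering is stubbed out; the speed and period parameters are assumptions.

        def occlusion_offset(frame, speed=3, period=120):
            """Horizontal offset of the occluding layer for a given frame."""
            return (frame * speed) % period

        def render_frame(word, frame, renderer):
            renderer.draw_text(word, x=0, y=0)                   # background: characters
            renderer.draw_mask(x=occlusion_offset(frame), y=0)   # foreground: obstruction

        def render_animation(word, n_frames, renderer):
            for frame in range(n_frames):
                render_frame(word, frame, renderer)
                renderer.commit()  # emit the frame
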
  • Patent number: 8243116
    Abstract: A method is described for modifying behavior for social appropriateness in computer-mediated communications. Data can be obtained representing the natural non-verbal behavior of a video conference participant. The cultural appropriateness of the behavior is calculated based on a cultural model and the previous behavior in the session. Upon detecting that the behavior of the user is culturally inappropriate, the system can calculate an alternative behavior based on the cultural model. Based on this alternative behavior, the video output stream can be modified to be more appropriate by altering the gaze and gestures of the conference participants. The output stream can be modified by using previously recorded images of the participant, by digitally synthesizing a virtual avatar display, or by switching the view displayed to the remote participant. Once the user's behavior is once again culturally appropriate, the video stream can be returned to its unmodified state.
    Type: Grant
    Filed: September 24, 2007
    Date of Patent: August 14, 2012
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Pernilla Qvarfordt, Gene Golovchinsky, Maribeth Joy Back
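    Sketch: The control loop in the abstract above (score behavior against a cultural model, substitute modified video while it is out of bounds) might look roughly like the following; the gaze-duration metric, thresholds, and synthesize hook are assumptions, not the disclosed model.

        def appropriateness(behavior, model, history):
            """Score in [0, 1]; lower means less culturally appropriate."""
            norm = model["max_gaze_seconds"]
            baseline = sum(history) / len(history) if history else norm
            excess = max(0.0, behavior["gaze_seconds"] - max(baseline, norm))
            return max(0.0, 1.0 - excess / norm)

        def next_output_frame(behavior, model, history, live_frame, synthesize):
            score = appropriateness(behavior, model, history)
            history.append(behavior["gaze_seconds"])
            if score < model["threshold"]:
                return synthesize(behavior, model)  # avatar or recorded imagery
            return live_frame  # behavior is appropriate again: pass video through
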
  • Publication number: 20110307800
    Abstract: In embodiments, a user interface provides for manipulating one or more physical devices for use in a conference room setting. The user interface includes a touch screen for presenting a variety of options to a user. The touch screen includes controllers, such as buttons, to enable the user to select any one of the options. Each controller presents goals-oriented information, enabling the user to select a goal with a single controller while insulating the user from the underlying complex processes required to carry out that goal.
    Type: Application
    Filed: August 24, 2011
    Publication date: December 15, 2011
    Inventors: Maribeth Joy Back, Gene Golovchinsky, John Steven Boreczky, Anthony Eric Dunnigan, Pernilla Qvarfordt, William J. van Melle, Laurent Denoue
  • Publication number: 20090295682
    Abstract: One challenge with using sensing devices to collect information about a person is that the person needs to stay within reach of the sensor. Several solutions exist to resolve this issue; some rely on improving the machinery of the sensor, while others rely on constraining the movements of the person being tracked. Both of these methods have limitations. This invention describes a method for improving data collection without unnecessarily restraining the user and without requiring a leap in sensor technology. By providing subtle feedback about the sensor's field of sensitivity in the form of a reflecting UI, the user can adjust his or her position in front of the sensor to improve the data collection.
    Type: Application
    Filed: May 30, 2008
    Publication date: December 3, 2009
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Pernilla Qvarfordt, Anthony Dunnigan
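    Sketch: A minimal sketch of the reflecting-UI idea above, assuming a circular field of sensitivity and an opacity-based cue; both are illustrative choices, not the disclosed design.

        def coverage(user_pos, field_center, field_radius):
            """1.0 at the center of the sensor's field, 0.0 at its edge."""
            dx = user_pos[0] - field_center[0]
            dy = user_pos[1] - field_center[1]
            dist = (dx * dx + dy * dy) ** 0.5
            return max(0.0, 1.0 - dist / field_radius)

        def update_feedback(user_pos, field_center, field_radius, ui):
            # Subtle cue: the reflection fades as the user drifts off-center.
            ui.set_opacity(coverage(user_pos, field_center, field_radius))
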
  • Publication number: 20090282339
    Abstract: Described is a system and method for controlling the state and capabilities of a meeting room automatically based on the content being presented. The system can detect certain states (such as a transition to a demo or a question-and-answer session) based on the content of slides, and can automatically switch displays and other devices in the meeting room to accommodate these new states.
    Type: Application
    Filed: May 6, 2008
    Publication date: November 12, 2009
    Applicant: FUJI XEROX CO., LTD.
    Inventors: William van Melle, Anthony Dunnigan, Eugene Golovchinsky, Scott Carter, Pernilla Qvarfordt
  • Publication number: 20090079816
    Abstract: A method is described for modifying behavior for social appropriateness in computer-mediated communications. Data can be obtained representing the natural non-verbal behavior of a video conference participant. The cultural appropriateness of the behavior is calculated based on a cultural model and the previous behavior in the session. Upon detecting that the behavior of the user is culturally inappropriate, the system can calculate an alternative behavior based on the cultural model. Based on this alternative behavior, the video output stream can be modified to be more appropriate by altering the gaze and gestures of the conference participants. The output stream can be modified by using previously recorded images of the participant, by digitally synthesizing a virtual avatar display, or by switching the view displayed to the remote participant. Once the user's behavior is once again culturally appropriate, the video stream can be returned to its unmodified state.
    Type: Application
    Filed: September 24, 2007
    Publication date: March 26, 2009
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Pernilla Qvarfordt, Gene Golovchinsky, Maribeth Joy Back
  • Publication number: 20080183820
    Abstract: A system is provided that coordinates the operation of hardware devices and software applications in support of specific tasks such as holding a meeting. The system includes one or more computers connected by a network, at least one configuration repository component, at least one room control component, and one or more devices and applications for each room control component. Meeting presenters can configure a meeting, or they may use a default configuration. A meeting includes one or more presenters' configurations of devices and applications to accommodate multiple presenters simultaneously. The meeting configurations are stored by the configuration repository component. Each presenter's configuration comprises a subset of the one or more devices and applications. The operation of devices and applications in the meeting is coordinated by the room control component based on the presenters' configurations for the meeting.
    Type: Application
    Filed: September 14, 2007
    Publication date: July 31, 2008
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Gene Golovchinsky, John Steven Boreczky, William J. van Melle, Maribeth Joy Back, Anthony Eric Dunnigan, Pernilla Qvarfordt
  • Publication number: 20080184115
    Abstract: In embodiments, a user interface provides for manipulating one or more physical devices for use in a conference room setting. The user interface includes a touch screen for presenting a variety of options to a user. The touch screen includes controllers, such as buttons, to enable the user to select any one of the options. Each controller presents goals-oriented information, enabling the user to select a goal with a single controller while insulating the user from the underlying complex processes required to carry out that goal.
    Type: Application
    Filed: July 19, 2007
    Publication date: July 31, 2008
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Maribeth Joy Back, Gene Golovchinsky, John Steven Boreczky, Anthony Eric Dunnigan, Pernilla Qvarfordt, William J. van Melle, Laurent Denoue
  • Publication number: 20080127302
    Abstract: An automated test to tell computers and humans apart is disclosed, comprising displaying on a computer screen an animation composed of a foreground and a background, one of the foreground and background comprising a plurality of typographical characters and the other comprising a partial obstruction of the typographical characters, wherein the animation comprises relative motion between the background and foreground. The automated test may comprise displaying an image on a computer screen and requiring the user to perform an operation on the image to resolve an encoded solution. The test may also comprise displaying a video clip on a computer screen and requiring the user to provide an input corresponding to subject matter presented in the video clip.
    Type: Application
    Filed: August 22, 2006
    Publication date: May 29, 2008
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Pernilla Qvarfordt, Eleanor G. Rieffel, David M. Hilbert