Patents by Inventor Jeffrey Margolis

Jeffrey Margolis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11218670
    Abstract: In a system for video data capture and sharing, client devices may include one or more video cameras and sensors to capture video data and to generate associated metadata. A cloud-based component may receive metadata from the client devices and requests for sharing video data captured by other client devices. Client devices with requested video data are identified by matching their provided metadata to the sharing request and by their response to an image search query for an object of interest specified in the request.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: January 4, 2022
    Assignee: Xirgo Technologies, LLC
    Inventors: Andrew Hodge, Nathan Ackerman, Jay Hamlin, Jeffrey Margolis
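
A minimal Python sketch of the cloud-side matching step this patent family describes: filter each client's reported metadata against a sharing request's time and location window, leaving the image-search confirmation as a stub. All names (`ClientMetadata`, `ShareRequest`, `matches_request`) are hypothetical, not drawn from the patent or any Xirgo/Owl API.

```python
from dataclasses import dataclass

@dataclass
class ClientMetadata:
    device_id: str
    latitude: float
    longitude: float
    timestamp: float  # seconds since epoch

@dataclass
class ShareRequest:
    latitude: float
    longitude: float
    start: float
    end: float
    radius_deg: float = 0.01  # coarse lat/lon window, illustrative only

def matches_request(meta: ClientMetadata, req: ShareRequest) -> bool:
    """Cloud-side filter: does this client's metadata overlap the request?"""
    near = (abs(meta.latitude - req.latitude) <= req.radius_deg
            and abs(meta.longitude - req.longitude) <= req.radius_deg)
    during = req.start <= meta.timestamp <= req.end
    return near and during

def candidate_devices(all_meta, req):
    """First-pass candidates; an image search query for the object of
    interest would then confirm each candidate (not modeled here)."""
    return [m.device_id for m in all_meta if matches_request(m, req)]
```
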
  • Publication number: 20210067743
    Abstract: In a system for video data capture and sharing, client devices may include one or more video cameras and sensors to capture video data and to generate associated metadata. A cloud-based component may receive metadata from the client devices and requests for sharing video data captured by other client devices. Client devices with requested video data are identified by matching their provided metadata to the sharing request and by their response to an image search query for an object of interest specified in the request.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 4, 2021
    Applicant: Xirgo Technologies, LLC
    Inventors: Andrew Hodge, Nathan Ackerman, Jay Hamlin, Jeffrey Margolis
  • Patent number: 10805577
    Abstract: In a system for video data capture and sharing, client devices may include one or more video cameras and sensors to capture video data and to generate associated metadata. A cloud-based component may receive metadata from the client devices and requests for sharing video data captured by other client devices. Client devices with requested video data are identified by matching their provided metadata to the sharing request and by their response to an image search query for an object of interest specified in the request.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: October 13, 2020
    Assignee: Owl Cameras, Inc.
    Inventors: Andrew Hodge, Nathan Ackerman, Jay Hamlin, Jeffrey Margolis
  • Publication number: 20190174099
    Abstract: In a system for video data capture and sharing, client devices may include one or more video cameras and sensors to capture video data and to generate associated metadata. A cloud-based component may receive metadata from the client devices and requests for sharing video data captured by other client devices. Client devices with requested video data are identified by matching their provided metadata to the sharing request and by their response to an image search query for an object of interest specified in the request.
    Type: Application
    Filed: September 11, 2017
    Publication date: June 6, 2019
    Applicant: Owl Cameras, Inc.
    Inventors: Andrew Hodge, Nathan Ackerman, Jay Hamlin, Jeffrey Margolis
  • Publication number: 20180220189
    Abstract: In a system for video data capture and sharing, client devices may include one or more video cameras and sensors to capture video data and a local buffer memory for storing the captured video data. The system uses inputs from various sources, including sensors, to determine an operating mode. Based on the operating mode, video recording settings are set for the video cameras to change the size of the video data that is generated and stored in the local buffer memory to optimize its use. When the operating mode corresponds to an event of interest, the data recorded is larger, with higher video quality parameters, and when the operating mode corresponds to video footage of no interest, the data recorded is smaller, with lower video quality parameters. Additionally, other actions can be taken based on the operating mode, such as over-writing the video recording parameters, notifying users of likely loss of recorded data, and the like.
    Type: Application
    Filed: January 5, 2018
    Publication date: August 2, 2018
    Applicant: 725-1 Corporation
    Inventors: Andrew Hodge, Nathan Ackerman, Jay Hamlin, Jeffrey Margolis
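
An illustrative sketch of the mode-dependent recording policy and local buffer described in the entry above, assuming a simple three-mode quality table and a fixed-capacity ring buffer; the mode names, quality values, and `LocalBuffer` class are assumptions, not the patented implementation.

```python
from collections import deque

QUALITY = {
    "event_of_interest": {"resolution": (1920, 1080), "bitrate_kbps": 8000},
    "normal":            {"resolution": (1280, 720),  "bitrate_kbps": 3000},
    "no_interest":       {"resolution": (640, 360),   "bitrate_kbps": 500},
}

def recording_settings(mode: str) -> dict:
    """Pick video recording settings based on the current operating mode."""
    return QUALITY.get(mode, QUALITY["normal"])

class LocalBuffer:
    """Fixed-capacity buffer; oldest segments are evicted first."""
    def __init__(self, capacity_segments: int = 120):
        self.segments = deque(maxlen=capacity_segments)

    def store(self, segment_bytes: bytes, mode: str):
        if len(self.segments) == self.segments.maxlen:
            # Hook for notifying users of likely loss of recorded data.
            print("warning: buffer full, oldest footage will be overwritten")
        self.segments.append((mode, segment_bytes))
```
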
  • Patent number: 9861886
    Abstract: A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions.
    Type: Grant
    Filed: July 14, 2014
    Date of Patent: January 9, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kathryn Stone Perez, Alex A. Kipman, Jeffrey Margolis
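
A minimal sketch of the per-portion blending this abstract describes: live capture drives one set of joints while a pre-recorded clip drives the rest. The joint names and the two-region split are assumptions for illustration only.

```python
LIVE_JOINTS = {"head", "left_arm", "right_arm", "spine"}   # animated live
PRERECORDED_JOINTS = {"left_leg", "right_leg", "hips"}     # from a clip

def animate_frame(live_pose: dict, clip_pose: dict) -> dict:
    """Compose one output pose: the live motion is applied to a first
    portion of the character, the pre-recorded motion to a second."""
    out = {}
    for joint in LIVE_JOINTS:
        out[joint] = live_pose[joint]
    for joint in PRERECORDED_JOINTS:
        out[joint] = clip_pose[joint]
    return out
```
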
  • Patent number: 9304594
    Abstract: Methods for recognizing gestures within a near-field environment are described. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may capture a first image of an environment while illuminating the environment using an IR light source with a first range (e.g., due to the exponential decay of light intensity) and capture a second image of the environment without illumination. The mobile device may generate a difference image based on the first image and the second image in order to eliminate background noise due to other sources of IR light within the environment (e.g., due to sunlight or artificial light sources). In some cases, object and gesture recognition techniques may be applied to the difference image in order to detect the performance of hand and/or finger gestures by an end user of the mobile device within a near-field environment of the mobile device.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: April 5, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Finocchio, Alexandru Balan, Nathan Ackerman, Jeffrey Margolis
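
A sketch of the difference-image step at the core of this patent: subtract the unlit frame from the IR-illuminated frame so that ambient IR (sunlight, artificial sources) cancels out, leaving pixels lit mainly by the device's own short-range source. The threshold value is an arbitrary illustrative choice.

```python
import numpy as np

def near_field_mask(lit: np.ndarray, unlit: np.ndarray,
                    threshold: int = 40) -> np.ndarray:
    """Return a boolean mask of pixels responding to the device's own IR
    illumination; nearby objects (hands/fingers) respond strongly because
    the source's intensity falls off rapidly with distance."""
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    return diff > threshold  # candidate pixels for gesture recognition
```
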
  • Patent number: 9215478
    Abstract: A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: December 15, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark J. Finocchio, Jeffrey Margolis
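
A hypothetical shape for the media feed interface in this family of patents: populate buffers from a source, expose them as a feed, and isolate the first frame whose metadata satisfies a request. All class and field names (`MediaFrame`, `MediaFeed`, `isolate`) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MediaFrame:
    data: bytes
    info: dict = field(default_factory=dict)  # e.g. {"timestamp": ..., "tags": [...]}

class MediaFeed:
    def __init__(self, source_frames):
        # Buffers populated from a capture device, file, or network resource.
        self.buffers = list(source_frames)

    def isolate(self, **request) -> MediaFrame | None:
        """Return the first buffered frame whose info satisfies the request."""
        for frame in self.buffers:
            if all(frame.info.get(k) == v for k, v in request.items()):
                return frame
        return None
```
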
  • Patent number: 9116220
    Abstract: Techniques are provided for synchronization of sensor signals between devices. One or more of the devices may collect sensor data. The device may create a sensor signal from the sensor data, which it may make available to other devices via a publisher/subscriber model. The other devices may subscribe to sensor signals they choose. A device could be a provider or a consumer of the sensor signals. A device may have a layer of code between an operating system and software applications that processes the data for the applications. The processing may include such actions as synchronizing the data in a sensor signal to a local time clock, predicting future values for data in a sensor signal, and providing data samples for a sensor signal at a frequency that an application requests, among other actions.
    Type: Grant
    Filed: December 27, 2010
    Date of Patent: August 25, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shao Liu, Mark Finocchio, Avi Bar-Zeev, Jeffrey Margolis, Jason Flaks, Robert Crocco, Jr., Alex Aben-Athar Kipman
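
An illustrative sketch of one duty of the middleware layer described above: given timestamped samples from a subscribed sensor signal (already shifted onto the local clock), produce values at the frequency an application requests. Linear interpolation here is an assumption standing in for whatever the real system uses.

```python
def resample(samples, start, stop, hz):
    """samples: sorted (timestamp, value) pairs in the local time base;
    returns (timestamp, value) pairs at the requested frequency."""
    out = []
    step = 1.0 / hz
    i = 0
    t = start
    while t <= stop:
        while i + 1 < len(samples) and samples[i + 1][0] <= t:
            i += 1
        t0, v0 = samples[i]
        if i + 1 < len(samples):
            t1, v1 = samples[i + 1]
            frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            frac = min(max(frac, 0.0), 1.0)
            out.append((t, v0 + frac * (v1 - v0)))
        else:
            out.append((t, v0))  # hold last sample; prediction could extend this
        t += step
    return out
```
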
  • Publication number: 20140306874
    Abstract: Methods for recognizing gestures within a near-field environment are described. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may capture a first image of an environment while illuminating the environment using an IR light source with a first range (e.g., due to the exponential decay of light intensity) and capture a second image of the environment without illumination. The mobile device may generate a difference image based on the first image and the second image in order to eliminate background noise due to other sources of IR light within the environment (e.g., due to sunlight or artificial light sources). In some cases, object and gesture recognition techniques may be applied to the difference image in order to detect the performance of hand and/or finger gestures by an end user of the mobile device within a near-field environment of the mobile device.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Mark Finocchio, Alexandru Balan, Nathan Ackerman, Jeffrey Margolis
  • Patent number: 8803889
    Abstract: A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: August 12, 2014
    Assignee: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex A. Kipman, Jeffrey Margolis
  • Patent number: 8762894
    Abstract: Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports from users or swapping virtual ports between two or more users.
    Type: Grant
    Filed: February 10, 2012
    Date of Patent: June 24, 2014
    Assignee: Microsoft Corporation
    Inventors: Kathryn Stone-Perez, Jeffrey Margolis, Mark J. Finocchio, Brian E. Keane, Rudy Jacobus Poot, Stephen G. Latta
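
A small sketch of the virtual-port bookkeeping this abstract describes: associating users with ports (each port carrying its own privileges), disassociating them, and swapping two users between ports. The `VirtualPorts` class and its methods are illustrative assumptions.

```python
class VirtualPorts:
    def __init__(self, privileges_by_port):
        self.privileges = privileges_by_port  # e.g. {0: {"primary"}, 1: set()}
        self.user_for_port = {}

    def associate(self, user_id):
        """Bind the user to the lowest free port; return it, or None."""
        for port in sorted(self.privileges):
            if port not in self.user_for_port:
                self.user_for_port[port] = user_id
                return port
        return None

    def disassociate(self, user_id):
        self.user_for_port = {p: u for p, u in self.user_for_port.items()
                              if u != user_id}

    def swap(self, port_a, port_b):
        """Exchange whichever users currently hold the two ports."""
        a = self.user_for_port.pop(port_a, None)
        b = self.user_for_port.pop(port_b, None)
        if b is not None:
            self.user_for_port[port_a] = b
        if a is not None:
            self.user_for_port[port_b] = a
```
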
  • Publication number: 20140160055
    Abstract: A wrist-worn input device that is used in augmented reality (AR) operates in three modes of operation. In a first mode of operation, the input device is curved so that it may be worn on a user's wrist. A touch surface receives letters or selections gestured by the user. In a second mode of operation, the input device is flat and used as a touch surface for more complex single or multi-hand interactions. A sticker defining one or more locations on the touch surface that correspond to a user's input, such as a character, number, or intended operation, may be affixed to the touch surface. The sticker may be interchanged with different stickers based on a mode of operation, user's preference and/or particular AR experience. In a third mode of operation, the input device receives biometric input from biometric sensors. The biometric input may provide contextual information in an AR experience while allowing the user to have their hands free.
    Type: Application
    Filed: December 12, 2012
    Publication date: June 12, 2014
    Inventors: Jeffrey Margolis, Nathan Ackerman, Sheridan Martin
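
A hypothetical sketch of the sticker idea above: a sticker is modeled as a map from touch-surface regions to inputs, and can be swapped per mode or AR experience. The region geometry and layout names are assumptions, not the published design.

```python
# Normalized (x0, y0)-(x1, y1) regions mapped to the input they represent.
STICKER_NUMPAD = {
    ((0.0, 0.0), (0.33, 0.5)): "1",
    ((0.33, 0.0), (0.66, 0.5)): "2",
    ((0.66, 0.0), (1.0, 0.5)): "3",
}

def resolve_touch(sticker, x, y):
    """Translate a normalized touch point into the sticker-defined input."""
    for ((x0, y0), (x1, y1)), value in sticker.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return value
    return None  # touch landed outside any sticker-defined location
```
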
  • Publication number: 20140085193
    Abstract: A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame.
    Type: Application
    Filed: November 27, 2013
    Publication date: March 27, 2014
    Inventors: Mark J. Finocchio, Jeffrey Margolis
  • Patent number: 8625837
    Abstract: A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame.
    Type: Grant
    Filed: June 16, 2009
    Date of Patent: January 7, 2014
    Assignee: Microsoft Corporation
    Inventors: Mark J. Finocchio, Jeffrey Margolis
  • Publication number: 20130311944
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as, for example, scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Application
    Filed: July 29, 2013
    Publication date: November 21, 2013
    Applicant: Microsoft Corporation
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
  • Patent number: 8509479
    Abstract: An image of a scene may be observed, received, or captured. The image may then be scanned to determine one or more signals emitted or reflected by an indicator that belongs to an input object. Upon determining the one or more signals, the signals may be grouped together into a cluster that may be used to generate a first vector that may indicate the orientation of the input object in the captured scene. The first vector may then be tracked, a virtual object and/or an avatar associated with the first vector may be rendered, and/or controls to perform in an application executing on the computing environment may be determined based on the first vector.
    Type: Grant
    Filed: June 16, 2009
    Date of Patent: August 13, 2013
    Assignee: Microsoft Corporation
    Inventor: Jeffrey Margolis
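
A sketch of deriving the "first vector" from the clustered indicator signals: take the cluster's principal axis as the input object's orientation. Using PCA (via SVD) for this is an assumption; the patent does not specify the method.

```python
import numpy as np

def orientation_vector(points: np.ndarray) -> np.ndarray:
    """points: (N, 2) pixel coordinates of detected indicator signals.
    Returns a unit vector along the cluster's dominant axis."""
    centered = points - points.mean(axis=0)
    # Top right-singular vector = direction of greatest spread (long axis).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return axis / np.linalg.norm(axis)
```
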
  • Patent number: 8504922
    Abstract: Described herein is technology for, among other things, performing navigation in a media environment. The technology involves presenting a user with only a portion of the previously visited pages or areas as he or she moves to previously visited pages or areas. As the user moves around the media environment, the movements are recorded for use when requests are received for previously visited areas or pages. As the user moves to previously visited areas, redundant pages or areas are skipped. Thus, the user's forward and backward navigation paths differ, and the user moves backward more easily, quickly, and efficiently.
    Type: Grant
    Filed: December 29, 2006
    Date of Patent: August 6, 2013
    Assignee: Microsoft Corporation
    Inventors: Mark Newell, Jeffrey Margolis, Will Vong, Bill Flora, Bojana Ostojic, Kristina Voros, Christen Coomer, Frederic Azera
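
A minimal sketch of back-navigation that skips redundant entries, in the spirit of the abstract above: when a page is revisited, the trail is truncated back to its first occurrence, so going backward never replays the intervening loop. The `NavigationHistory` class is an illustrative assumption.

```python
class NavigationHistory:
    def __init__(self):
        self.trail = []

    def visit(self, page: str):
        """Record a forward move; collapse the trail on a revisit."""
        if page in self.trail:
            self.trail = self.trail[: self.trail.index(page) + 1]
        else:
            self.trail.append(page)

    def back(self):
        """Step backward along the deduplicated trail."""
        if len(self.trail) > 1:
            self.trail.pop()
        return self.trail[-1]

# Example: visits A, B, C, B leave the trail as [A, B]; back() then returns A.
```
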
  • Patent number: 8499257
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as, for example, scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Grant
    Filed: February 9, 2010
    Date of Patent: July 30, 2013
    Assignee: Microsoft Corporation
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
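
A rough sketch of the handle concept from this family: a handle attached to an on-screen object declares which actions a user may perform on it, and interaction is routed through the handle. Class, action, and object names are assumptions for illustration.

```python
class Handle:
    def __init__(self, target, allowed_actions):
        self.target = target
        self.allowed_actions = set(allowed_actions)  # e.g. {"scroll", "select"}

    def perform(self, action, **kwargs):
        """Route an interaction through the handle, enforcing its contract."""
        if action not in self.allowed_actions:
            raise ValueError(f"{action!r} is not permitted on {self.target}")
        print(f"{action} on {self.target} with {kwargs}")  # dispatch stub

menu_handle = Handle("navigation_menu", {"scroll", "select"})
menu_handle.perform("scroll", delta=3)   # allowed by this handle
# menu_handle.perform("rotate")          # would raise: not defined by the handle
```
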
  • Patent number: 8416187
    Abstract: A system and method are provided for using motion-capture data to control navigation of a cursor in a user interface of a computing system. Movement of a user's hand or other object in a three-dimensional capture space is tracked and represented in the computing system as motion-capture model data. The method includes obtaining a plurality of positions for the object from the motion-capture model data. The method determines a curved-gesture center point based on at least some of the plurality of positions for the object. Using the curved-gesture center point as an origin, an angular property is determined for one of the plurality of positions for the object. The method further includes navigating the cursor in a sequential arrangement of selectable items based on the angular property.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: April 9, 2013
    Assignee: Microsoft Corporation
    Inventors: Jeffrey Margolis, Tricia Lee, Gregory A. Martinez, Alex Aben-Athar Kipman
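
A sketch of the angular-navigation idea in this last entry: estimate a center point from the hand's tracked positions, measure the swept angle about that center, and map accumulated angle to an index in a list of selectable items. Using the centroid as the center point and a fixed degrees-per-item constant are assumptions, not the patented method.

```python
import math

def center_point(positions):
    """Estimate the curved-gesture center point (here, the centroid)."""
    xs, ys = zip(*positions)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def angle_about(center, point):
    """Angular property of a position, measured about the center point."""
    return math.degrees(math.atan2(point[1] - center[1], point[0] - center[0]))

def item_for_sweep(positions, items, degrees_per_item=30.0):
    """Map the angle swept by the gesture to an item in the sequence."""
    c = center_point(positions)
    sweep = angle_about(c, positions[-1]) - angle_about(c, positions[0])
    steps = int(sweep // degrees_per_item)
    return items[steps % len(items)]
```
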