Patents by Inventor Abbas Rafii

Abbas Rafii has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150256813
    Abstract: A method for three-dimensional reconstruction of a scene includes: controlling a projection source to emit invisible light toward the scene; while the projection source is emitting light: controlling a first multi-channel image sensor to capture a first image, the first multi-channel image sensor including visible light detecting pixels and invisible light detecting pixels, the first image including a first invisible light channel and a first visible light channel; controlling a second multi-channel image sensor to capture a second image substantially simultaneously as the capture of the first image, the second multi-channel image sensor including visible light detecting pixels and invisible light detecting pixels, the second image including a second invisible light channel and a second visible light channel; performing stereo analysis of the first image and the second image in accordance with the invisible light channels and the visible light channels to generate a depth map.
    Type: Application
    Filed: March 6, 2015
    Publication date: September 10, 2015
    Inventors: Carlo Dal Mutto, Abbas Rafii, David Demirdjian
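The abstract above describes fusing stereo matching information from the invisible (IR) and visible channels of two multi-channel sensors to produce a depth map. As a rough illustration only (not the patented method; the sum-of-absolute-differences cost, the fixed fusion weight, and all function names are assumptions), the channel-fused disparity search and the standard disparity-to-depth conversion Z = f·B/d can be sketched as:

```python
import numpy as np

def fused_costs(ir_l, ir_r, vis_l, vis_r, max_disp, w_ir=0.5):
    """Per-pixel absolute-difference cost for each candidate disparity,
    fused across the invisible (IR) and visible channels of a rectified pair."""
    h, w = ir_l.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        sl = slice(d, w)        # columns of the left image
        sr = slice(0, w - d)    # matching columns of the right image
        costs[d, :, sl] = (w_ir * np.abs(ir_l[:, sl] - ir_r[:, sr])
                           + (1 - w_ir) * np.abs(vis_l[:, sl] - vis_r[:, sr]))
    return costs

def depth_from_costs(costs, focal_px, baseline_m):
    """Winner-takes-all disparity, then depth via Z = f * B / d."""
    disp = np.argmin(costs, axis=0).astype(float)
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1), np.inf)
```

A real implementation would aggregate costs over windows and filter outliers; this shows only the channel-fusion idea in its simplest per-pixel form.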
  • Publication number: 20150192991
    Abstract: Embodiments in accordance with this invention disclose systems and methods for implementing head tracking based graphical user interfaces that incorporate gesture reactive interface objects. The disclosed embodiments perform a method in which a GUI that includes interface objects is rendered and displayed. Image data of an interaction zone is captured. A targeting gesture targeting an interface object is detected in the captured image data and a set of 3D head interaction gestures is enabled. Additional image data is captured. Motion of at least a portion of a human head is detected and one of the 3D head interaction gestures is identified. The rendering of the interface is modified in response to the detected gesture and the modified interface is displayed.
    Type: Application
    Filed: January 7, 2015
    Publication date: July 9, 2015
    Inventors: Carlo Dal Mutto, Giulio Marin, Abbas Rafii, Tony Zuccarino
  • Publication number: 20150195443
    Abstract: A video transmitting system includes: a display configured to display an image in a first direction; cameras including: a first camera adjacent a first edge of the display; and a second camera adjacent a second edge of the display, at least a portion of the display being proximal a convex hull that includes the first camera and the second camera, the first camera and the second camera having substantially overlapping fields of view encompassing the first direction; and an image processor to: receive a position of a virtual camera relative to the cameras substantially within the convex hull and substantially on the display, the virtual camera having a field of view encompassing the first direction; receive raw images captured by the cameras at substantially the same time; and generate processed image data from the raw images for synthesizing a view in accordance with the position of the virtual camera.
    Type: Application
    Filed: January 2, 2015
    Publication date: July 9, 2015
    Inventors: Carlo Dal Mutto, Abbas Rafii
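Synthesizing a view for a virtual camera positioned between real cameras, as the abstract above describes, is in practice a geometry-aware warping problem. As a deliberately crude stand-in (an assumption of this note, not the patented pipeline), the simplest possible sketch blends the two real images with a weight given by where the virtual camera sits between them:

```python
import numpy as np

def synthesize_view(img_a, img_b, x_a, x_b, x_virtual):
    """Crude view synthesis: linearly blend two camera images with a weight
    given by the virtual camera's position between the two real cameras.
    A real system would warp per-pixel using depth/disparity, not blend."""
    t = float(np.clip((x_virtual - x_a) / (x_b - x_a), 0.0, 1.0))
    return (1.0 - t) * img_a + t * img_b
```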
  • Publication number: 20150089453
    Abstract: A system and method for providing a 3D gesture-based interaction system for a projected 3D user interface is disclosed. A user interface display is projected onto a user surface. Image data of the user interface display and an interaction medium are captured. The image data includes visible light data and IR data. The visible light data is used to register the user interface display on the projected surface with the Field of View (FOV) of at least one camera capturing the image data. The IR data is used to determine gesture recognition information for the interaction medium. The registration information and gesture recognition information are then used to identify interactions.
    Type: Application
    Filed: September 25, 2014
    Publication date: March 26, 2015
    Inventors: Carlo Dal Mutto, Abbas Rafii, Britta Hummel
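Registering a projected user-interface display with a camera's field of view, as described above, amounts to estimating a plane-to-plane mapping from visible-light correspondences. A minimal sketch of one standard way to do this (a direct linear transform homography fit; the function names and the four-corner setup are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT: solve A h = 0 for the 3x3 homography mapping src points to dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)   # null-space vector = homography entries

def map_point(H, p):
    """Apply the homography to a 2D point (homogeneous normalize)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Once the homography from camera pixels to UI coordinates is known, a fingertip located in the IR data can be mapped into the projected display to identify which interface element was touched.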
  • Publication number: 20150062004
    Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting a GUI on the display screen and creating user feedback, including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
    Type: Application
    Filed: September 3, 2014
    Publication date: March 5, 2015
    Inventor: Abbas Rafii
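The mapping described above, from (x,y,z) locations in an interaction zone to two-dimensional screen locations, with a z0 plane gating engagement, can be illustrated with a minimal sketch. The zone representation, field names, and threshold test are assumptions for illustration, not the patent's actual data structures:

```python
def map_to_screen(pt, zone, screen=(1920, 1080)):
    """Map an (x, y, z) hand position (metres) inside a rectangular
    interaction zone to a screen pixel; crossing the z0 plane decides
    whether the gesture is 'engaged'."""
    x, y, z = pt
    (x0, x1), (y0, y1) = zone["x"], zone["y"]
    sx = (x - x0) / (x1 - x0) * screen[0]
    sy = (y - y0) / (y1 - y0) * screen[1]
    return (sx, sy), z <= zone["z0"]
```

Because the zone bounds are parameters, resizing or relocating the zone, as the abstract emphasizes, is just a matter of updating the `zone` values between frames.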
  • Publication number: 20150062003
    Abstract: User wearable eye glasses include a pair of two-dimensional cameras that optically acquire information for user gestures made with an unadorned user object in an interaction zone responsive to viewing displayed imagery, with which the user can interact. Glasses systems intelligently signal-process and map acquired optical information to rapidly ascertain a sparse (x,y,z) set of locations adequate to identify user gestures. The displayed imagery can be created by glasses systems and presented with a virtual on-glasses display, or can be created and/or viewed off-glasses. In some embodiments the user can see local views directly, but augmented with imagery showing internet-provided tags identifying and/or providing information about viewed objects. On-glasses systems can communicate wirelessly with cloud servers and with off-glasses systems that the user can carry in a pocket or purse.
    Type: Application
    Filed: September 3, 2014
    Publication date: March 5, 2015
    Inventors: Abbas Rafii, Tony Zuccarino
  • Publication number: 20150057082
    Abstract: Natural three-dimensional (xw,yw,zw,tw) gesture player interaction with a two-dimensional game application rendered on a two or three dimensional display includes mapping acquired (xw,yw,zw,tw) gesture data to virtual game-world (xv,yv,zv,tv) coordinates or vice versa, and scaling if needed. The game application is caused to render at least one image on the display responsive to the mapped and scaled (xw,yw,zw) data, where the display and game interaction is rendered from the player's perception viewpoint. The (xw,yw,zw) data preferably is acquired using spaced-apart two-dimensional cameras coupled to software to reduce the acquired images to a relatively small number of landmark points, from which player gestures may be recognized. The invention may be implemented in a handheld device such as a smart phone or tablet, which device may include a gyroscope and/or accelerometer.
    Type: Application
    Filed: September 3, 2014
    Publication date: February 26, 2015
    Inventors: Nazim Kareemi, Abbas Rafii
  • Publication number: 20150009119
    Abstract: Systems and methods are disclosed for enabling a user to interact naturally, using gestures, with image(s) displayed on the surface of an integrated monitor whose display contents are governed by an appliance, perhaps a PC, smart phone, or tablet. Some embodiments include the display as well as the appliance in a single package, such as all-in-one computers. User interaction includes gestures that may occur within a three-dimensional hover zone spaced apart from the display surface.
    Type: Application
    Filed: January 29, 2013
    Publication date: January 8, 2015
    Applicant: Imimtek, Inc.
    Inventors: Tony Zuccarino, Abbas Rafii
  • Patent number: 8854433
    Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting a GUI on the display screen and creating user feedback, including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
    Type: Grant
    Filed: February 1, 2013
    Date of Patent: October 7, 2014
    Assignee: Aquifi, Inc.
    Inventor: Abbas Rafii
  • Patent number: 8840466
    Abstract: Natural three-dimensional (xw, yw, zw, tw) gesture player interaction with a two-dimensional game application rendered on a two or three dimensional display includes mapping acquired (xw, yw, zw, tw) gesture data to virtual game-world (xv, yv, zv, tv) coordinates or vice versa, and scaling if needed. The game application is caused to render at least one image on the display responsive to the mapped and scaled (xw, yw, zw) data, where the display and game interaction is rendered from the player's perception viewpoint. The (xw, yw, zw) data preferably is acquired using spaced-apart two-dimensional cameras coupled to software to reduce the acquired images to a relatively small number of landmark points, from which player gestures may be recognized. The invention may be implemented in a handheld device such as a smart phone or tablet, which device may include a gyroscope and/or accelerometer.
    Type: Grant
    Filed: April 20, 2012
    Date of Patent: September 23, 2014
    Assignee: Aquifi, Inc.
    Inventors: Nazim Kareemi, Abbas Rafii
  • Patent number: 8836768
    Abstract: User wearable eye glasses include a pair of two-dimensional cameras that optically acquire information for user gestures made with an unadorned user object in an interaction zone responsive to viewing displayed imagery, with which the user can interact. Glasses systems intelligently signal-process and map acquired optical information to rapidly ascertain a sparse (x,y,z) set of locations adequate to identify user gestures. The displayed imagery can be created by glasses systems and presented with a virtual on-glasses display, or can be created and/or viewed off-glasses. In some embodiments the user can see local views directly, but augmented with imagery showing internet-provided tags identifying and/or providing information about viewed objects. On-glasses systems can communicate wirelessly with cloud servers and with off-glasses systems that the user can carry in a pocket or purse.
    Type: Grant
    Filed: August 23, 2013
    Date of Patent: September 16, 2014
    Assignee: Aquifi, Inc.
    Inventors: Abbas Rafii, Tony Zuccarino
  • Publication number: 20140215407
    Abstract: A user-centric method and system to identify user-made gestures to control a remote device images the user using a three-dimensional image system, and defines at least one user-centric three-dimensional detection zone dynamically sized appropriately for the user, who is free to move about. Images made within the detection zone are compared to a library of stored gestures, and the thus identified gesture is mapped to an appropriate control command signal coupleable to the remote device. The method and system also provide for a first user to hand off control of the remote device to a second user.
    Type: Application
    Filed: April 3, 2014
    Publication date: July 31, 2014
    Applicant: Microsoft Corporation
    Inventors: Abdelrehim Ahmed, Abbas Rafii, Colin Tracy
  • Patent number: 8773512
    Abstract: A portable remote control device enables user interaction with an appliance by detecting user gestures made in a hover zone, and converting the gestures to commands that are wirelessly transmitted to the appliance. The remote control device includes at least two cameras whose intersecting FOVs define a three-dimensional hover zone within which user interactions are imaged. Separately and collectively, the image data is analyzed to identify a relatively small number of user landmarks. Substantially unambiguous correspondence is established between the same landmark on each acquired image, and a three-dimensional reconstruction is made in a common coordinate system. Preferably cameras are modeled to have characteristics of pinhole cameras, enabling rectified epipolar geometric analysis to facilitate more rapid disambiguation among potential landmark points. As a result, processing overhead and latency times are substantially reduced.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: July 8, 2014
    Assignee: Aquifi, Inc.
    Inventor: Abbas Rafii
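Several of the two-camera patents in this list rely on rectified epipolar geometry: after rectification, the same landmark appears on the same image row in both cameras, so correspondence search is one-dimensional and the 3D position follows from the disparity. As an illustrative sketch (standard pinhole-stereo triangulation, not text from any of the patents; all names are assumptions):

```python
def triangulate_rectified(pl, pr, f, cx, cy, baseline):
    """Recover the (X, Y, Z) position of a landmark seen at pixel pl in the
    left image and pr in the right image of a rectified stereo pair.
    f is the focal length in pixels, (cx, cy) the principal point,
    baseline the camera separation in metres."""
    d = pl[0] - pr[0]            # disparity: same row, shifted column
    z = f * baseline / d         # depth from disparity
    x = (pl[0] - cx) * z / f     # back-project through the pinhole model
    y = (pl[1] - cy) * z / f
    return (x, y, z)
```

Applying this only to a handful of landmark points (fingertips, hand centroid) rather than every pixel is what keeps the processing overhead and latency low, as the abstracts note.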
  • Publication number: 20140173440
    Abstract: Systems and methods for natural interaction with graphical user interfaces using gestural and vocal input in accordance with embodiments of the invention are disclosed. In one embodiment, a method for interpreting a command sequence that includes a gesture and a voice cue to issue an application command includes receiving image data, receiving an audio signal, selecting an application command from a command dictionary based upon a gesture identified using the image data, a voice cue identified using the audio signal, and metadata describing combinations of a gesture and a voice cue that form a command sequence corresponding to an application command, retrieving a list of processes running on an operating system, selecting at least one process based upon the selected application command and the metadata, where the metadata also includes information identifying at least one process targeted by the application command, and issuing an application command to the selected process.
    Type: Application
    Filed: May 21, 2013
    Publication date: June 19, 2014
    Applicant: IMIMTEK, INC.
    Inventors: Carlo Dal Mutto, Abbas Rafii
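The metadata-driven command selection described above, matching an identified gesture and voice cue against stored command sequences and their target processes, can be modeled at its simplest as a lookup. The dictionary layout and key names below are illustrative assumptions, not the patent's data structures:

```python
def select_command(gesture, cue, metadata):
    """Return the application command (and its target processes) whose
    metadata lists this (gesture, voice cue) combination, or (None, ())
    if the pair forms no known command sequence."""
    for entry in metadata:
        if (gesture, cue) in entry["sequences"]:
            return entry["command"], entry["targets"]
    return None, ()
```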
  • Patent number: 8723789
    Abstract: User interaction with a display is detected using at least two cameras whose intersecting FOVs define a three-dimensional hover zone within which user interactions can be imaged. Each camera substantially simultaneously acquires from its vantage point two-dimensional images of the user within the hover zone. Separately and collectively, the image data is analyzed to identify a relatively small number of landmarks definable on the user. A substantially unambiguous correspondence is established between the same landmark on each acquired image, and as to those landmarks a three-dimensional reconstruction is made in a common coordinate system. This landmark identification and position information can be converted into a command causing the display to respond appropriately to a gesture made by the user. Advantageously, the size of the hover zone can far exceed the size of the display, making the invention usable with smart phones as well as large size entertainment TVs.
    Type: Grant
    Filed: February 3, 2012
    Date of Patent: May 13, 2014
    Assignee: Imimtek, Inc.
    Inventor: Abbas Rafii
  • Patent number: 8693724
    Abstract: A user-centric method and system to identify user-made gestures to control a remote device images the user using a three-dimensional image system, and defines at least one user-centric three-dimensional detection zone dynamically sized appropriately for the user, who is free to move about. Images made within the detection zone are compared to a library of stored gestures, and the thus identified gesture is mapped to an appropriate control command signal coupleable to the remote device. The method and system also provide for a first user to hand off control of the remote device to a second user.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: April 8, 2014
    Assignee: Microsoft Corporation
    Inventors: Abdelrehim Ahmed, Abbas Rafii, Colin Tracy
  • Patent number: 8686943
    Abstract: User interaction with a display is detected substantially simultaneously using at least two cameras whose intersecting FOVs define a three-dimensional hover zone within which user interactions can be imaged. Separately and collectively, the image data is analyzed to identify a relatively small number of user landmarks. A substantially unambiguous correspondence is established between the same landmark on each acquired image, and a three-dimensional reconstruction is made in a common coordinate system. Preferably cameras are modeled to have characteristics of pinhole cameras, enabling rectified epipolar geometric analysis to facilitate more rapid disambiguation among potential landmark points. Consequently, processing overhead is substantially reduced, as are latency times. Landmark identification and position information is convertible into a command causing the display to respond appropriately to a user gesture.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: April 1, 2014
    Assignee: Imimtek, Inc.
    Inventor: Abbas Rafii
  • Patent number: 8589033
    Abstract: Time-of-flight (TOF) three-dimensional sensing systems are deployed on or in a motor vehicle to image contact zones associated with potential contact between an avoidable object and the vehicle or vehicle frame and/or remotely controllable motorized moving door or liftgate. An algorithm processes depth data acquired by each TOF system to determine whether an avoidable object is in the associated contact zone. If present, a control signal issues to halt or reverse the mechanism moving the door. A stored database preferably includes a depth image of the contact zone absent any object, an image of the door, and the volume of the door. Database images are compared to newly acquired depth images to identify pixel sensors whose depth values are statistically unlikely to represent background or the door. Pixels within the contact zone so identified are an object, and the control signal is issued.
    Type: Grant
    Filed: January 11, 2008
    Date of Patent: November 19, 2013
    Assignee: Microsoft Corporation
    Inventors: Abbas Rafii, Richard New, Sunil Acharya, Timothy Droz
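The core comparison in the abstract above, flagging depth pixels unlikely to belong to the stored background and deciding whether an object occupies the contact zone, can be sketched in a few lines. The tolerance, the minimum pixel count, and the simple thresholding stand in for the patent's statistical test and are assumptions of this illustration:

```python
import numpy as np

def object_in_contact_zone(depth, background, zone_mask,
                           tol=0.05, min_pixels=10):
    """Flag pixels whose measured depth is more than `tol` metres in front
    of the stored background depth image, and report an object if enough
    of them fall inside the contact zone mask."""
    closer = (background - depth) > tol
    return int(np.count_nonzero(closer & zone_mask)) >= min_pixels
```

In the described system, a positive result would trigger the control signal that halts or reverses the door mechanism.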
  • Publication number: 20120270653
    Abstract: Natural three-dimensional (xw,yw,zw,tw) gesture player interaction with a two-dimensional game application rendered on a two or three dimensional display includes mapping acquired (xw,yw,zw,tw) gesture data to virtual game-world (xv,yv,zv,tv) coordinates or vice versa, and scaling if needed. The game application is caused to render at least one image on the display responsive to the mapped and scaled (xw,yw,zw) data, where the display and game interaction is rendered from the player's perception viewpoint. The (xw,yw,zw) data preferably is acquired using spaced-apart two-dimensional cameras coupled to software to reduce the acquired images to a relatively small number of landmark points, from which player gestures may be recognized. The invention may be implemented in a handheld device such as a smart phone or tablet, which device may include a gyroscope and/or accelerometer.
    Type: Application
    Filed: April 20, 2012
    Publication date: October 25, 2012
    Applicant: IMIMTEK, INC.
    Inventors: Nazim Kareemi, Abbas Rafii
  • Patent number: 8139142
    Abstract: RGB-Z imaging systems acquire RGB data typically with a high X-Y resolution RGB pixel array, and acquire Z-depth data with an array of physically larger Z pixels having additive signal properties. In each acquired frame, RGB pixels are mapped to a corresponding Z pixel. Z image resolution is enhanced by identifying Z discontinuities and identifying corresponding RGB pixels where the Z discontinuities occur. Thus segmented data enables RGB background substitution, which preferably blends foreground pixel color and substitute background color. The segmented data also enables up-sampling in which a higher XY resolution Z image with accurate Z values is obtained. Up-sampling uses an equation set enabling assignment of accurate Z values to RGB pixels. Fixed acquisition frame rates are enabled by carefully culling bad Z data. Segmenting and up-sampling enhanced video effects and enable low cost, low Z resolution arrays to function comparably to higher quality, higher resolution Z arrays.
    Type: Grant
    Filed: December 20, 2007
    Date of Patent: March 20, 2012
    Assignee: Microsoft Corporation
    Inventors: Cyrus Bamji, Abbas Rafii, Ryan E. Crabb
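The RGB-Z abstract above turns on mapping a coarse Z array onto a finer RGB array and using the resulting segmentation for background substitution. A minimal sketch of that idea (nearest-neighbour up-sampling and a single depth cut; the patent's actual up-sampling solves an equation set and blends colors at boundaries, so everything here is a simplified assumption):

```python
import numpy as np

def substitute_background(rgb, z_coarse, new_bg, z_cut=1.5, scale=2):
    """Nearest-neighbour up-sample the coarse Z map to RGB resolution,
    keep RGB pixels closer than z_cut as foreground, and paint the rest
    with the substitute background image."""
    z = np.kron(z_coarse, np.ones((scale, scale)))  # each Z pixel covers a scale x scale RGB block
    fg = z < z_cut                                  # foreground: closer than the cut plane
    return np.where(fg[..., None], rgb, new_bg), fg
```

The hard-cut `np.where` shows the segmentation step only; the blending of foreground color with substitute background color at edges, which the abstract calls out, would replace it in a faithful implementation.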