Patents by Inventor H. Keith Nishihara

H. Keith Nishihara has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120133616
    Abstract: Creative design systems and methods are disclosed. In one embodiment, a creative design system is provided. The creative design system comprises a high resolution display, an interactive stylus that includes a transmitter at a tip of the interactive stylus that transmits an encoded signal associated with the interactive stylus and a plurality of sensors that track movement of the interactive stylus over the high resolution display by capturing and decoding the transmitted encoded signal as the stylus moves over the high resolution display. A creative design controller is configured to display sketches of context in response to the tracking of the movement of the interactive stylus over the high resolution display.
    Type: Application
    Filed: November 29, 2010
    Publication date: May 31, 2012
    Inventors: H. Keith Nishihara, Shi-Ping Hsu, Donald G. Lariviere
  • Publication number: 20120118973
    Abstract: A hand held device containing at least one camera can perform various functions. In some embodiments, the device may have a primary lens for focusing on distant objects, and a secondary lens that may be combined with the primary lens for focusing on up-close objects. The secondary lens may be movable in and out of the optical path to permit such a transition.
    Type: Application
    Filed: January 26, 2012
    Publication date: May 17, 2012
    Inventors: Bran Ferren, H. Keith Nishihara
  • Publication number: 20120120301
    Abstract: A hand held device containing at least one camera can perform various functions. In some embodiments, the optical path between the lens and the optical sensor may be redirected at least once to allow it to fit within the body of a narrow device. For example, at least one reflector or beam combiner may be used to redirect the light so that part of the optical path is perpendicular to another part of the optical path.
    Type: Application
    Filed: January 26, 2012
    Publication date: May 17, 2012
    Inventors: Bran Ferren, H. Keith Nishihara
  • Publication number: 20120118972
    Abstract: A small handheld device may perform multiple functions, including those of a camera, a display, a processor, and a radio. In various embodiments, the camera may be used to read a bar code, and the processor may be used to decode the bar code. Additionally, the device may contain a focusing aid to enable the device to properly focus on an up-close bar code image.
    Type: Application
    Filed: January 26, 2012
    Publication date: May 17, 2012
    Inventors: Bran Ferren, H. Keith Nishihara
  • Publication number: 20120118971
    Abstract: A small handheld device may perform multiple functions, including those of a camera, a display, a processor, and a radio. In various embodiments, the device may receive information representing a bar code through the radio, and either display the bar code or wirelessly retransmit it to another device. The bar code may represent various things, such as but not limited to a coupon, an admission ticket, a verification of a prior purchase, and/or other evidence of commerce.
    Type: Application
    Filed: January 26, 2012
    Publication date: May 17, 2012
    Inventors: Bran Ferren, H. Keith Nishihara
  • Patent number: 8180114
    Abstract: One embodiment of the invention includes a gesture recognition interface system. The system may comprise a substantially vertical surface configured to define a gesture recognition environment based on physical space in a foreground of the substantially vertical surface. The system may also comprise at least one light source positioned to provide illumination of the gesture recognition environment. The system also comprises at least two cameras configured to generate a plurality of image sets based on the illumination being reflected from an input object in the gesture recognition environment. The system further comprises a controller configured to determine a given input gesture based on changes in relative locations of the input object in each of the plurality of image sets. The controller may further be configured to initiate a device input associated with the given input gesture.
    Type: Grant
    Filed: June 5, 2008
    Date of Patent: May 15, 2012
    Assignee: Northrop Grumman Systems Corporation
    Inventors: H. Keith Nishihara, Shi-Ping Hsu
  • Publication number: 20120075489
    Abstract: In a digital picture created by combining an outer zone from a first lens and an inner zone from a second lens, the two zones may be blended together in an intermediate zone created by processing pixels from both the outer and inner zones. The blending may be performed by creating pixels in the intermediate zone that are progressively less influenced by pixels from the first lens and progressively more influenced by pixels from the second lens, as the location of the intermediate pixels transitions from the outer zone to the inner zone. Image registration may be used to achieve the same scale before blending.
    Type: Application
    Filed: September 24, 2010
    Publication date: March 29, 2012
    Inventor: H. Keith Nishihara
  • Patent number: 8139110
    Abstract: One embodiment of the invention includes a gesture recognition interface system that determines input gestures based on changes in relative locations of an input object to initiate device inputs associated with the input gestures. The system comprises at least one light source positioned to illuminate a background surface in a first light spectrum to generate a light contrast difference between the input object and the background surface. The system also comprises a display device configured to display at least one calibration pattern on the background surface in a second light spectrum in response to a calibration command and at least one camera configured to receive images of the background surface in both the first light spectrum and the second light spectrum. The system further comprises an automated calibration component configured to issue the calibration command and to associate features of the at least one calibration pattern in the received images with known physical locations.
    Type: Grant
    Filed: November 1, 2007
    Date of Patent: March 20, 2012
    Assignee: Northrop Grumman Systems Corporation
    Inventor: H. Keith Nishihara
  • Patent number: 7701439
    Abstract: A gesture recognition simulation system and method is provided. In one embodiment, a gesture recognition simulation system includes a three-dimensional display system that displays a three-dimensional image of at least one simulated object having at least one functional component. A gesture recognition interface system is configured to receive an input gesture associated with a sensorless input object from a user. The gesture recognition simulation system further comprises a simulation application controller configured to match a given input gesture with a predefined action associated with the at least one functional component. The simulation application controller could invoke the three-dimensional display system to display a simulated action on at least a portion of the at least one simulated object associated with an input gesture and a predefined action match.
    Type: Grant
    Filed: July 13, 2006
    Date of Patent: April 20, 2010
    Assignee: Northrop Grumman Corporation
    Inventors: William Daniel Hillis, H. Keith Nishihara, Shi-Ping Hsu, Neil Siegel
  • Publication number: 20100050133
    Abstract: One embodiment of the invention includes a method for executing and interpreting gesture inputs in a gesture recognition interface system. The method includes detecting and translating a first sub-gesture into a first device input that defines a given reference associated with a portion of displayed visual content. The method also includes detecting and translating a second sub-gesture into a second device input that defines an execution command for the portion of the displayed visual content to which the given reference refers.
    Type: Application
    Filed: August 22, 2008
    Publication date: February 25, 2010
    Inventors: H. Keith Nishihara, Shi-Ping Hsu, Adrian Kaehler, Bran Ferren, Lars Jangaard
  • Publication number: 20100026723
    Abstract: One embodiment of the invention includes a computer interface system. The system comprises a user interface screen configured to display visual content and an input system configured to detect a presence of an input object within a threshold distance along a normal axis of the user interface screen. The system further comprises a graphical controller configured to magnify a portion of the visual content that is located at an approximate location of a base of the normal axis on the user interface screen.
    Type: Application
    Filed: July 31, 2008
    Publication date: February 4, 2010
    Inventors: H. Keith Nishihara, Shi-Ping Hsu, Adrian Kaehler, Eric Gradman, Kjerstin Williams
  • Publication number: 20090316952
    Abstract: One embodiment of the invention includes a gesture recognition interface system. The interface system may comprise at least one light source positioned to illuminate a first side of a light-diffusive screen. The interface system may also comprise at least one camera positioned on a second side of the light-diffusive screen, the second side being opposite the first side, and configured to receive a plurality of images based on a brightness contrast difference between the light-diffusive screen and an input object. The interface system may further comprise a controller configured to determine a given input gesture based on changes in relative locations of the input object in the plurality of images. The controller may further be configured to initiate a device input associated with the given input gesture.
    Type: Application
    Filed: June 20, 2008
    Publication date: December 24, 2009
    Inventors: Bran Ferren, H. Keith Nishihara
  • Publication number: 20090115721
    Abstract: A system and method is provided for a gesture recognition interface system. The system comprises a projector configured to project colorless light and visible images onto a background surface. The projection of the colorless light can be interleaved with the projection of the visible images. The system also comprises at least one camera configured to receive a plurality of images based on a reflected light contrast difference between the background surface and a sensorless input object during projection of the colorless light. The system further comprises a controller configured to determine a given input gesture based on changes in relative locations of the sensorless input object in the plurality of images, and being further configured to initiate a device input associated with the given input gesture.
    Type: Application
    Filed: November 2, 2007
    Publication date: May 7, 2009
    Inventors: Kenneth W. Aull, H. Keith Nishihara, Shi-Ping Hsu
  • Publication number: 20090116742
    Abstract: One embodiment of the invention includes a gesture recognition interface system that determines input gestures based on changes in relative locations of an input object to initiate device inputs associated with the input gestures. The system comprises at least one light source positioned to illuminate a background surface in a first light spectrum to generate a light contrast difference between the input object and the background surface. The system also comprises a display device configured to display at least one calibration pattern on the background surface in a second light spectrum in response to a calibration command and at least one camera configured to receive images of the background surface in both the first light spectrum and the second light spectrum. The system further comprises an automated calibration component configured to issue the calibration command and to associate features of the at least one calibration pattern in the received images with known physical locations.
    Type: Application
    Filed: November 1, 2007
    Publication date: May 7, 2009
    Inventor: H. Keith Nishihara
  • Publication number: 20090103780
    Abstract: One embodiment of the invention includes a method of providing device inputs. The method includes illuminating hand gestures performed via a bare hand of a user in a foreground of a background surface with at least one infrared (IR) light source. The method also includes generating a first plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface and generating a second plurality of silhouette images associated with the bare hand based on an IR light contrast between the bare hand and the background surface. The method also includes determining a plurality of three-dimensional features of the bare hand relative to the background surface based on a parallax separation of the bare hand in the first plurality of silhouette images relative to the second plurality of silhouette images.
    Type: Application
    Filed: December 17, 2008
    Publication date: April 23, 2009
    Inventors: H. Keith Nishihara, Shi-Ping Hsu, Adrian Kaehler, Lars Jangaard
  • Publication number: 20080244468
    Abstract: One embodiment of the invention includes a gesture recognition interface system. The system may comprise a substantially vertical surface configured to define a gesture recognition environment based on physical space in a foreground of the substantially vertical surface. The system may also comprise at least one light source positioned to provide illumination of the gesture recognition environment. The system also comprises at least two cameras configured to generate a plurality of image sets based on the illumination being reflected from an input object in the gesture recognition environment. The system further comprises a controller configured to determine a given input gesture based on changes in relative locations of the input object in each of the plurality of image sets. The controller may further be configured to initiate a device input associated with the given input gesture.
    Type: Application
    Filed: June 5, 2008
    Publication date: October 2, 2008
    Inventors: H. Keith Nishihara, Shi-Ping Hsu
  • Publication number: 20080043106
    Abstract: A system and method is provided for an intrusion detection system. The intrusion detection system comprises a first camera configured to acquire first visual images of a monitored area and a second camera configured to acquire second visual images of the monitored area. The intrusion detection system also comprises a detection device configured to compare the first images with a background image of the monitored area. The detection device can mark differences between the first images and the background image as a potential intruder. The intrusion detection system further comprises a tracking device configured to evaluate each of the first images relative to each of the second images to determine three-dimensional characteristics associated with the potential intruder.
    Type: Application
    Filed: August 10, 2006
    Publication date: February 21, 2008
    Inventors: Chris Hassapis, H. Keith Nishihara
  • Publication number: 20080028325
    Abstract: A system and method is provided for a collaboration input/output (I/O) system. The collaboration I/O system may comprise a display screen and a gesture image system. The gesture image system may be configured to generate image data associated with a location and an orientation of a first sensorless input object relative to a background surface. The collaboration I/O system may also comprise a transceiver. The transceiver could be configured to transmit the image data to at least one additional collaboration I/O system at at least one remote location. The transceiver can be further configured to receive image data from each of the at least one additional collaboration I/O system, such that the display screen can be configured to display the image data associated with a location and orientation of a sensorless input object associated with each of the at least one additional collaboration I/O system superimposed over a common image of visual data.
    Type: Application
    Filed: July 25, 2006
    Publication date: January 31, 2008
    Inventors: Bran Ferren, H. Keith Nishihara
  • Publication number: 20080013793
    Abstract: A gesture recognition simulation system and method is provided. In one embodiment, a gesture recognition simulation system includes a three-dimensional display system that displays a three-dimensional image of at least one simulated object having at least one functional component. A gesture recognition interface system is configured to receive an input gesture associated with a sensorless input object from a user. The gesture recognition simulation system further comprises a simulation application controller configured to match a given input gesture with a predefined action associated with the at least one functional component. The simulation application controller could invoke the three-dimensional display system to display a simulated action on at least a portion of the at least one simulated object associated with an input gesture and a predefined action match.
    Type: Application
    Filed: July 13, 2006
    Publication date: January 17, 2008
    Inventors: William Daniel Hillis, H. Keith Nishihara, Shi-Ping Hsu, Neil Siegel
  • Publication number: 20080013826
    Abstract: A system and method is provided for a gesture recognition interface system. The interface system may comprise a first and second light source positioned to illuminate a background surface. The interface system may also comprise at least one camera operative to receive a first plurality of images based on a first reflected light contrast difference between the background surface and a sensorless input object caused by the first light source and a second plurality of images based on a second reflected light contrast difference between the background surface and the sensorless input object caused by the second light source. The interface system may further comprise a controller operative to determine a given input gesture based on changes in relative locations of the sensorless input object in the first plurality of images and the second plurality of images. The controller may further be operative to initiate a device input associated with the given input gesture.
    Type: Application
    Filed: July 13, 2006
    Publication date: January 17, 2008
    Inventors: William Daniel Hillis, H. Keith Nishihara, Shi-Ping Hsu
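
The zone-blending idea in publication 20120075489 can be sketched as follows: pixels in an intermediate annulus are a weighted mix of the outer-lens and inner-lens images, with the weight sliding from one source to the other as radius increases. The function name, the linear ramp, and the single-channel pixels are illustrative assumptions, not taken from the patent text.

```python
def blend_zones(outer_px, inner_px, r, r_inner, r_outer):
    """Blend one pixel value at radius r from the image center.

    Inside r_inner the inner-lens pixel is used unchanged; outside
    r_outer the outer-lens pixel is used; in between, a linear ramp
    mixes the two so the transition between zones is seamless.
    """
    if r <= r_inner:
        return inner_px
    if r >= r_outer:
        return outer_px
    # Weight is 0 at the inner edge of the annulus and 1 at the outer edge,
    # so influence shifts progressively from the inner lens to the outer lens.
    w = (r - r_inner) / (r_outer - r_inner)
    return (1 - w) * inner_px + w * outer_px
```

For example, halfway across an annulus spanning radii 10 to 20, `blend_zones(100, 200, 15, 10, 20)` returns 150.0, an even mix of the two sources. The abstract's note about image registration matters here: both images must be brought to the same scale before blending, or the ramp mixes misaligned content.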
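
The parallax-based depth recovery described in publication 20090103780 (two cameras see the same hand silhouette at slightly shifted image positions, and the separation gives distance) follows the standard pinhole-stereo relation. The formula and all parameter names below are an assumed textbook model, not quoted from the patent.

```python
def depth_from_parallax(x_left, x_right, focal_px, baseline_m):
    """Distance (metres) to a feature from its horizontal disparity.

    x_left / x_right: the feature's horizontal pixel coordinate in the
    left and right camera images; focal_px: focal length in pixels;
    baseline_m: separation between the two cameras in metres.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must appear shifted between the two views")
    # Standard rectified-stereo relation: depth is inversely
    # proportional to disparity.
    return focal_px * baseline_m / disparity
```

With an 800-pixel focal length and a 10 cm baseline, a 20-pixel disparity places the hand 4 m from the cameras; nearer hands produce larger disparities.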
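
The first stage of the intrusion detector in publication 20080043106 compares each new frame from the first camera against a stored background image and marks the differences as a potential intruder. Thresholding the absolute pixel difference, as below, is an assumption for illustration; the abstract does not specify the comparison method.

```python
def mark_differences(frame, background, threshold):
    """Return a binary mask: True where frame departs from background.

    frame and background are equal-sized 2-D lists of grayscale values;
    marked pixels are candidates for the potential intruder, to be
    confirmed by the second camera's three-dimensional evaluation.
    """
    return [
        [abs(f - b) > threshold for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]
```

In practice the marked regions would then be matched between the two cameras' views to estimate the intruder's three-dimensional characteristics, as the abstract describes, which helps reject flat disturbances such as shadows.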
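
Several of the entries above (patent 8180114, publication 20080244468) share one controller idea: a gesture is determined from changes in the tracked input object's relative location across successive image sets. A toy classifier for that idea might label a track as a swipe by its dominant displacement; the gesture labels, coordinate convention, and thresholding here are illustrative assumptions only.

```python
def classify_gesture(track):
    """Label a tracked motion as a swipe from its net displacement.

    track: list of (x, y) positions of the input object over time,
    with y increasing downward as in image coordinates.
    """
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    # Choose the axis with the larger net movement, then its direction.
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

A real controller would also gate on speed and path straightness before initiating the device input associated with the recognized gesture; this sketch keeps only the relative-location logic the abstracts emphasize.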