Patents by Inventor Johnny Lee

Johnny Lee has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9679390
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: June 13, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Craig Peeper, Johnny Lee, Tommer Leyvand, Szymon Stachniak
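
The voxel-grid downsampling and background-discard steps described in the abstract above can be sketched roughly as follows. This is an illustration only, not the patented implementation: the helper names are invented, and a 2D grid of averaged depths stands in for the full 3D voxel grid.

```python
# Sketch (assumed conventions): downsample a depth image into a coarse
# grid by averaging block x block patches, then discard "background"
# cells whose depth is at or beyond a threshold, isolating the foreground.

def downsample_to_grid(depth_image, block=2):
    """Average depth over block x block patches (one value per cell)."""
    rows, cols = len(depth_image), len(depth_image[0])
    grid = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            patch = [depth_image[rr][cc]
                     for rr in range(r, min(r + block, rows))
                     for cc in range(c, min(c + block, cols))]
            row.append(sum(patch) / len(patch))
        grid.append(row)
    return grid

def isolate_foreground(grid, background_depth):
    """Replace cells at or beyond the background depth with None."""
    return [[d if d < background_depth else None for d in row]
            for row in grid]

# A 4x4 depth image: a near object (depth 1.0) against a far wall (5.0).
depth = [
    [5.0, 5.0, 5.0, 5.0],
    [5.0, 5.0, 1.0, 1.0],
    [5.0, 5.0, 1.0, 1.0],
    [5.0, 5.0, 5.0, 5.0],
]
grid = downsample_to_grid(depth, block=2)       # 2x2 grid of averaged depths
fg = isolate_foreground(grid, background_depth=4.0)
```

Cells containing only wall pixels average to 5.0 and are discarded; cells containing the near object average below the threshold and survive for further processing.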
  • Publication number: 20170132580
    Abstract: A system and method for tracking a tire through a retread process include various checkpoints with networked touch screen computers at each checkpoint. At certain inspection and repair checkpoints the touch screen computers display tire cross section images with multiple different regions. Selecting a region allows a technician to change injury or repair data associated with that region of the tire. The computer automatically checks entered information against customer retread specifications and alerts the technician if the tire does not fit the customer's requirements for repairing or retreading the tire.
    Type: Application
    Filed: January 13, 2017
    Publication date: May 11, 2017
    Inventors: Johnny Lee McIntosh, Tony Lee Curtis, Anthony Neil Potts, Brandon Curtis Stewart, Bradley Dean Holmes, Harold Shane McWater
  • Patent number: 9646562
    Abstract: An image generating system includes an electromagnetic (“EM”) modulator, a camera module and a logic engine. The EM modulator is positioned to direct EM waves to a photoactive surface to stimulate the photoactive surface to generate an image. The camera module is positioned to monitor the photoactive surface to generate image data. The logic engine is communicatively coupled to the camera module and configured to receive the image data from the camera module and analyze the image data. The logic engine is communicatively coupled to the EM modulator to command the EM modulator where to direct the EM waves in response to the image data.
    Type: Grant
    Filed: October 22, 2012
    Date of Patent: May 9, 2017
    Assignee: X Development LLC
    Inventors: Johnny Lee, Eric Teller, William G. Patrick, Eric Peeters
  • Publication number: 20170123505
    Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
    Type: Application
    Filed: November 15, 2016
    Publication date: May 4, 2017
    Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
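
The filter-per-gesture recognizer engine described above can be sketched in miniature. The filter logic and gesture names here are invented for illustration; the abstract specifies only that each filter corresponds to one gesture and that filter outputs determine a perspective control.

```python
# Hedged sketch: each filter scores one gesture from captured motion data
# (here, a sequence of hand x-coordinates); the best-scoring gesture above
# a confidence threshold would drive a perspective change on the display.

def swipe_left_filter(frames):
    """Confidence that the tracked x-coordinate moved steadily left."""
    deltas = [b - a for a, b in zip(frames, frames[1:])]
    if not deltas:
        return 0.0
    return sum(1 for d in deltas if d < 0) / len(deltas)

def swipe_right_filter(frames):
    """Confidence that the tracked x-coordinate moved steadily right."""
    deltas = [b - a for a, b in zip(frames, frames[1:])]
    if not deltas:
        return 0.0
    return sum(1 for d in deltas if d > 0) / len(deltas)

FILTERS = {"pan_left": swipe_left_filter, "pan_right": swipe_right_filter}

def recognize(frames, threshold=0.8):
    """Run every filter; return the top-scoring gesture, or None."""
    scores = {name: f(frames) for name, f in FILTERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

hand_x = [0.9, 0.7, 0.5, 0.3, 0.1]   # hand moving left across the frame
```

In the patented system the capture device would feed such data continuously, and the selected perspective control would update the rendered viewpoint.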
  • Patent number: 9596443
    Abstract: Methods and systems for providing depth data and image data to an application processor on a mobile device are described. An example method involves receiving image data from at least one camera of the mobile device and receiving depth data from a depth processor of the mobile device. The method further involves generating a digital image that includes at least the image data and the depth data. The depth data may be embedded in pixels of the digital image, for instance. Further, the method then involves providing the digital image to an application processor of the mobile device using a camera bus interface. Thus, the depth data and the image data may be provided to the application processor in a single data structure.
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: March 14, 2017
    Assignee: Google Inc.
    Inventors: James Fung, Johnny Lee
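
The "single data structure" idea above, depth data embedded in the pixels of a digital image, can be sketched under invented byte-layout conventions (the patent does not specify this packing):

```python
# Sketch: split each 16-bit depth sample into two 8-bit values and append
# them to the RGB pixel bytes, so image and depth travel together in one
# buffer over a camera-bus-style interface.

def pack_frame(rgb_bytes, depth_samples):
    """Append each 16-bit depth sample as (high byte, low byte)."""
    frame = list(rgb_bytes)
    for d in depth_samples:
        frame.append((d >> 8) & 0xFF)   # high byte
        frame.append(d & 0xFF)          # low byte
    return bytes(frame)

def unpack_frame(frame, rgb_len):
    """Recover the image bytes and the embedded depth samples."""
    rgb = frame[:rgb_len]
    tail = frame[rgb_len:]
    depths = [(tail[i] << 8) | tail[i + 1] for i in range(0, len(tail), 2)]
    return rgb, depths

rgb = bytes([10, 20, 30])               # a tiny one-pixel RGB image
frame = pack_frame(rgb, [1000, 65535])  # depth samples, e.g. millimeters
img, depths = unpack_frame(frame, len(rgb))
```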
  • Patent number: 9578869
    Abstract: An improvement to a roadside sprayer fixates spray nozzles on a registration plate. The registration plate may include integral tabs depending on the application. Inclination of the tabs is adjustable to place streams and droplets in a desired swath coverage. With the nozzles rigidly mounted onto the plate, the entire plate is nutated. The use of the registration plate in a spray unit creates a uniform nutation among the nozzles, thereby reducing variability in droplet placement, and providing a more predictable spray from the nozzles. In other embodiments, the spray unit is utilized in a spray system adaptable to service vehicles, and may be utilized in conjunction with a vegetation engagement device, such as a cutter to engage multiple zones of a roadway right-of-way. Additionally, an improved spray unit includes an electromagnetic field and an attractor to generate plate motion.
    Type: Grant
    Filed: December 23, 2013
    Date of Patent: February 28, 2017
    Inventor: Johnny Lee Kubacak
  • Patent number: 9576551
    Abstract: A method and apparatus for gesture interaction with an image displayed on a painted wall is described. The method may include capturing image data of the image displayed on the painted wall and a user motion performed relative to the image. The method may also include analyzing the captured image data to determine a sequence of one or more physical movements of the user relative to the image displayed on the painted wall. The method may also include determining, based on the analysis, that the user motion is indicative of a gesture associated with the image displayed on the painted wall, and controlling a connected system in response to the gesture.
    Type: Grant
    Filed: October 20, 2015
    Date of Patent: February 21, 2017
    Assignee: X Development LLC
    Inventors: Johnny Lee, Eric Teller, William Graham Patrick, Eric Peeters
  • Patent number: 9561975
    Abstract: A method comprises steps for (a) providing a liquid in a container; (b) flowing a gas to a volume within the liquid, wherein the volume is at least partially submerged in the liquid; and (c) repeatedly increasing and decreasing the volume, wherein the cycles of increasing and decreasing generates a pulsed aerated flow, wherein at least one of the pulsed aerated flow is released within the container and the pulsed aerated flow is released outside the container.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: February 7, 2017
    Assignee: Stone WaterWorks, Inc.
    Inventors: Johnny Lee Stone, Avery Matthew Stone, Gerald Donald Richardson
  • Publication number: 20170019658
    Abstract: An electronic device (100) includes a depth sensor (120), a first imaging camera (114, 116), and a controller (802). The depth sensor (120) includes a modulated light projector (119) to project a modulated light pattern (500). The first imaging camera (114, 116) is to capture at least a reflection of the modulated light pattern (500). The controller (802) is to selectively modify (1004) at least one of a frequency, an intensity, and a duration of projections of the modulated light pattern by the modulated light projector responsive to at least one trigger event (1002). The trigger event can include, for example, a change (1092) in ambient light incident on the electronic device, detection (1094) of motion of the electronic device, or a determination (1096) that the electronic device has encountered a previously-unencountered environment.
    Type: Application
    Filed: June 15, 2016
    Publication date: January 19, 2017
    Inventor: Johnny Lee
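
The trigger-driven modulation described above can be sketched as a small controller policy. The trigger names and the specific adjustments below are assumptions for illustration; the abstract states only that frequency, intensity, or duration is modified in response to trigger events.

```python
# Illustrative sketch only: raise projector intensity in bright ambient
# light, increase projection frequency when the device moves, and extend
# projection duration when entering an unfamiliar environment.

DEFAULT = {"frequency_hz": 30, "intensity": 0.5, "duration_ms": 10}

def on_trigger(settings, event):
    """Return new projector settings for a trigger event."""
    s = dict(settings)
    if event == "ambient_light_high":
        s["intensity"] = min(1.0, s["intensity"] * 2)  # overcome washout
    elif event == "device_motion":
        s["frequency_hz"] = 60                         # denser depth sampling
    elif event == "new_environment":
        s["duration_ms"] = 20                          # longer capture window
    return s

s = on_trigger(DEFAULT, "device_motion")
```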
  • Publication number: 20170018092
    Abstract: Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the images.
    Type: Application
    Filed: August 15, 2016
    Publication date: January 19, 2017
    Inventors: Johnny Lee, Joel Hesch
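
Motion-based feature selection, the core step in the abstract above, can be sketched with an invented scoring rule (the patent does not disclose this particular criterion): when device motion predicts that image features will drift, prefer features whose predicted position stays well inside the frame so the tracker can keep following them.

```python
# Hedged sketch: score each feature by how far its motion-predicted next
# position is from the nearest frame edge, and keep the k best.

def select_features(features, motion, frame_w=640, frame_h=480, k=2):
    """features: [(x, y), ...]; motion: predicted pixel shift (dx, dy).
    Keep the k features whose predicted position is deepest in-frame."""
    dx, dy = motion

    def margin(f):
        nx, ny = f[0] + dx, f[1] + dy
        return min(nx, frame_w - nx, ny, frame_h - ny)  # edge distance

    in_frame = [f for f in features if margin(f) > 0]
    return sorted(in_frame, key=margin, reverse=True)[:k]

feats = [(10, 240), (320, 240), (630, 240)]
picked = select_features(feats, motion=(20, 0))  # features shifting right
```

The feature near the right edge is predicted to leave the frame and is dropped; the remaining features are ranked by how safely trackable they stay.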
  • Patent number: 9524024
    Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
    Type: Grant
    Filed: January 21, 2014
    Date of Patent: December 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
  • Patent number: 9485366
    Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: November 1, 2016
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
  • Patent number: 9465980
    Abstract: A method of tracking a subject includes receiving from a source a depth image of a scene including the subject. The depth image includes a depth for each of a plurality of pixels. The method further includes identifying pixels of the depth image that image the subject and deriving from the identified pixels of the depth image one or more machine readable data structures representing the subject as a model including a plurality of shapes.
    Type: Grant
    Filed: September 5, 2014
    Date of Patent: October 11, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert Matthew Craig, Tommer Leyvand, Craig Peeper, Momin M. Al-Ghosien, Matt Bronder, Oliver Williams, Ryan M. Geiss, Jamie Daniel Joseph Shotton, Johnny Lee, Mark Finocchio
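
The "model including a plurality of shapes" above can be sketched loosely; the shape records and data layout below are invented for illustration, not taken from the patent.

```python
# Sketch: pixels identified as the subject are reduced to simple shape
# records (here, one bounding box plus its centroid point).

def pixels_to_model(subject_pixels):
    """subject_pixels: [(x, y, depth), ...] -> a tiny shape-based model."""
    xs = [p[0] for p in subject_pixels]
    ys = [p[1] for p in subject_pixels]
    box = {"kind": "box", "min": (min(xs), min(ys)), "max": (max(xs), max(ys))}
    centroid = {"kind": "point",
                "at": (sum(xs) / len(xs), sum(ys) / len(ys))}
    return [box, centroid]

model = pixels_to_model([(2, 2, 1.0), (4, 2, 1.1), (3, 5, 1.0)])
```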
  • Patent number: 9445015
    Abstract: Example methods and systems for adjusting sensor viewpoint to a virtual viewpoint are provided. An example method may involve receiving data from a first camera; receiving data from a second camera; transforming, from the first viewpoint to a virtual viewpoint within the device, frames in a first plurality of frames based on an offset from the first camera to the virtual viewpoint; determining, in a second plurality of frames, one or more features and a movement, relative to the second viewpoint, of the one or more features; and transforming, from the second viewpoint to the virtual viewpoint, the movement of the one or more features based on an offset from the second camera to the virtual viewpoint; adjusting the transformed frames of the virtual viewpoint by an amount that is proportional to the transformed movement; and providing for display the adjusted and transformed frames of the first plurality of frames.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: September 13, 2016
    Assignee: Google Inc.
    Inventors: Joel Hesch, Ryan Hickman, Johnny Lee
  • Patent number: 9437000
    Abstract: Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the images.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: September 6, 2016
    Assignee: Google Inc.
    Inventors: Johnny Lee, Joel Hesch
  • Patent number: 9424619
    Abstract: Methods and systems for detecting frame tears are described. As one example, a mobile device may include at least one camera, a sensor, a co-processor, and an application processor. The co-processor is configured to generate a digital image including image data from the at least one camera and sensor data from the sensor. The co-processor is further configured to embed a frame identifier corresponding to the digital image in at least two corner pixels of the digital image. The application processor is configured to receive the digital image from the co-processor, determine a first value embedded in a first corner pixel of the digital image, and determine a second value embedded in a second corner pixel of the digital image. The application processor is also configured to provide an output indicative of a validity of the digital image based on a comparison between the first value and the second value.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: August 23, 2016
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
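
The corner-pixel tear check described above can be sketched under assumed conventions (grayscale buffer, IDs written to the first and last pixels; the patent's actual pixel layout may differ): the co-processor stamps the same frame ID into two corner pixels, and if a reader sees two different IDs, the buffer mixes rows from two frames, a tear.

```python
# Sketch: stamp a frame ID into two corners of a flat pixel buffer, then
# validate a received buffer by comparing the two embedded IDs.

def stamp_frame(pixels, frame_id):
    """Write frame_id into the first and last pixel of the buffer."""
    out = list(pixels)
    out[0] = frame_id    # top-left corner pixel
    out[-1] = frame_id   # bottom-right corner pixel
    return out

def is_torn(pixels):
    """A frame is valid only if both corner IDs agree."""
    return pixels[0] != pixels[-1]

frame_a = stamp_frame([0] * 16, 7)                    # clean 4x4 frame, ID 7
torn = frame_a[:8] + stamp_frame([0] * 16, 8)[8:]     # half of ID 7, half of ID 8
```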
  • Patent number: 9412053
    Abstract: Embodiments of an apparatus, system and method for creating light projection solutions for user guidance are described herein. A user may request that projected light be used to assist in a plurality of operations involving objects in the physical space around the user. A user can use voice commands and hand gestures to request that a projector project light or images on or near objects involved in one or more operations. Embodiments of the disclosure perform an image recognition process to scan the physical space around the user and to identify any user gesture performed (e.g., a user pointing at a plurality of objects, a user holding an object); a steerable projector may be actuated to project light or image data based on the user's request and a plurality of operations associated with the objects.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: August 9, 2016
    Assignee: Google Inc.
    Inventors: William Graham Patrick, Eric Teller, Johnny Lee
  • Patent number: 9398287
    Abstract: An electronic device (100) includes a depth sensor (120), a first imaging camera (114, 116), and a controller (802). The depth sensor (120) includes a modulated light projector (119) to project a modulated light pattern (500). The first imaging camera (114, 116) is to capture at least a reflection of the modulated light pattern (500). The controller (802) is to selectively modify (1004) at least one of a frequency, an intensity, and a duration of projections of the modulated light pattern by the modulated light projector responsive to at least one trigger event (1002). The trigger event can include, for example, a change (1092) in ambient light incident on the electronic device, detection (1094) of motion of the electronic device, or a determination (1096) that the electronic device has encountered a previously-unencountered environment.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: July 19, 2016
    Assignee: Google Technology Holdings LLC
    Inventor: Johnny Lee
  • Publication number: 20160191722
    Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
    Type: Application
    Filed: March 10, 2016
    Publication date: June 30, 2016
    Inventors: James Fung, Joel Hesch, Johnny Lee
  • Publication number: 20160173848
    Abstract: Methods and systems for providing depth data and image data to an application processor on a mobile device are described. An example method involves receiving image data from at least one camera of the mobile device and receiving depth data from a depth processor of the mobile device. The method further involves generating a digital image that includes at least the image data and the depth data. The depth data may be embedded in pixels of the digital image, for instance. Further, the method then involves providing the digital image to an application processor of the mobile device using a camera bus interface. Thus, the depth data and the image data may be provided to the application processor in a single data structure.
    Type: Application
    Filed: February 25, 2016
    Publication date: June 16, 2016
    Inventors: James Fung, Johnny Lee