Patents by Inventor Thomas Michael McLaughlin

Thomas Michael McLaughlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9443352
    Abstract: A computer generates one or more geometric rays at a predetermined height above one or more portions of an avatar's body in a virtual environment. Once the rays are generated, the computer computes the point of intersection of each ray with a collision mesh that defines a terrain of the virtual environment at the location of the avatar. Using the points of intersection, the computer calculates an offset in an elevation and/or an offset in an orientation of the avatar with respect to the terrain in the virtual environment. Further, the computer adjusts the elevation and/or the orientation of the avatar based on the calculated offset in the elevation and/or the offset in the orientation, respectively, such that the adjusted elevation and/or orientation compensates for the calculated offsets.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: September 13, 2016
    Assignee: Motion Reality, Inc.
    Inventors: Robert Michael Glover, Arris Eugene Ray, DJ Jonathan Cassel, Nels Howard Madsen, Thomas Michael McLaughlin
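The ray-casting step in this abstract can be illustrated with a minimal sketch. The function name and the heightfield terrain model below are assumptions for illustration only; the patented method intersects rays with a full collision mesh rather than a heightfield.

```python
def elevation_offset(sample_points, terrain_height, avatar_base_z):
    """Cast a vertical ray down from above each sampled body point and
    intersect it with the terrain. For a heightfield, the intersection
    under point (x, y) is simply terrain_height(x, y)."""
    hits = [terrain_height(x, y) for x, y in sample_points]
    avg = sum(hits) / len(hits)
    # Offset needed to place the avatar's base on the terrain surface.
    elev = avg - avatar_base_z
    # Spread of hit heights gives a crude orientation (tilt) cue.
    tilt = max(hits) - min(hits)
    return elev, tilt
```

On flat terrain of height 1.0 with the avatar's base at 0.0, the sketch reports an elevation offset of 1.0 and no tilt.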
  • Patent number: 9311742
    Abstract: A computer determines a modified location for an avatar of a first entity in a virtual environment based on a location of the first entity in a capture volume and a transformation used to map a second entity from the capture volume to the virtual environment. The modified location of the avatar of the first entity relative to a location of an avatar of the second entity is consistent with the location of the first entity relative to the location of the second entity in the capture volume. Once the modified location is determined, the computer displays a graphical cue corresponding to the first entity at the modified location of the avatar of the first entity, provided that the modified location differs from the avatar's current location in the virtual environment.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: April 12, 2016
    Assignee: Motion Reality, Inc.
    Inventors: Robert Michael Glover, Arris Eugene Ray, DJ Jonathan Cassel, Nels Howard Madsen, Thomas Michael McLaughlin
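The key property in this abstract is that applying one entity's capture-to-virtual transform to both entities preserves their relative positions. A minimal sketch, assuming the transform is a simple rotation-plus-translation pair (the function name is illustrative, not from the patent):

```python
import numpy as np

def modified_location(p_capture, transform):
    """Map a capture-volume position into the virtual environment using
    the rotation R and translation t applied to the second entity."""
    R, t = transform
    return R @ p_capture + t

# Applying the SAME transform to both entities preserves their relative
# offset (up to the rotation), so the avatars' spacing stays consistent
# with the participants' spacing in the capture volume.
transform = (np.eye(2), np.array([10.0, 0.0]))
a = modified_location(np.array([1.0, 2.0]), transform)
b = modified_location(np.array([4.0, 2.0]), transform)
```

With the identity rotation, the avatars' offset `b - a` equals the capture-volume offset `[3, 0]`.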
  • Patent number: 9223786
    Abstract: A participant in a capture volume can speak through a microphone. The microphone can capture the speech and transmit it to a wearable computing device of the participant. The wearable computing device can process the speech to generate audio data. The wearable computing device can transmit the audio data to a simulator engine. The simulator engine can receive and process the audio data to determine an attribute of the audio data (e.g., amplitude) at the location of a virtual character in a simulated virtual environment based on one or more attenuation factors. The attenuation factors can be calculated based on 3D motion data of the participant. Further, the simulator engine can drive a change in state of the virtual character in the simulated virtual environment based on the attribute of the audio data.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: December 29, 2015
    Assignee: Motion Reality, Inc.
    Inventors: Cameron Travis Hamrick, Nels Howard Madsen, Thomas Michael McLaughlin
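A distance-based attenuation factor of the kind this abstract describes can be sketched as follows. The inverse-distance model and the function name are assumptions for illustration; the patent covers attenuation factors computed from 3D motion data generally.

```python
import math

def attenuated_amplitude(amplitude, speaker_pos, listener_pos, min_dist=1.0):
    """Estimate a speech amplitude at a virtual character's location
    using inverse-distance attenuation, with positions taken from
    motion-capture data. Distances are clamped to min_dist so the
    amplitude does not blow up next to the source."""
    d = math.dist(speaker_pos, listener_pos)
    return amplitude / max(d, min_dist)
```

A character twice as far away hears half the amplitude under this model; a state change (e.g., the character reacting) could then be triggered when the attenuated amplitude crosses a threshold.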
  • Patent number: 9159152
    Abstract: A motion capture simulation system can include a capture volume. A participant disposed in the capture volume can be motion captured and immersed into a virtual environment. The virtual environment may be larger in size than the capture volume. In the virtual environment, the participant may be represented by an avatar. The avatar in the virtual environment can be moved in a first direction based on a motion of the participant in the first direction in the capture volume. As the participant moves in the first direction in the capture volume, the participant may approach a boundary of the capture volume, while the participant's avatar may have space to move further in the first direction in the larger virtual environment. Approaching the boundary, the participant can change direction, for example turning around to avoid the boundary. The redirected participant can continue driving the avatar to move in the first direction.
    Type: Grant
    Filed: July 17, 2012
    Date of Patent: October 13, 2015
    Assignee: Motion Reality, Inc.
    Inventors: Robert Michael Glover, Nels Howard Madsen, Thomas Michael McLaughlin
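The redirection described in this abstract amounts to applying a heading offset between the participant's motion in the capture volume and the avatar's motion in the virtual environment. A minimal 2D sketch (names and the rotation-only model are assumptions, not the patented method):

```python
import numpy as np

def avatar_step(participant_step, heading_offset):
    """Rotate the participant's capture-volume step by a heading offset
    so the avatar keeps moving in the original virtual direction after
    the participant turns near the capture-volume boundary."""
    c, s = np.cos(heading_offset), np.sin(heading_offset)
    R = np.array([[c, -s], [s, c]])
    return R @ participant_step

# After a 180-degree turn, the about-faced participant walking "back"
# across the capture volume still drives the avatar forward.
step = avatar_step(np.array([0.0, -1.0]), np.pi)
```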
  • Patent number: 8920172
    Abstract: A participant in a motion capture environment, such as a motion capture simulation, can utilize or interact with physical objects disposed in the environment, for example weapons, sporting goods, and wands. Markers attached to the physical objects can track movement of the objects in the motion capture environment as well as changes in operational state associated with participant interaction or usage. Markers on an object can be passive, active, or a combination of active and passive. A change in separation between two passive or active markers on a mechanized physical object can indicate that a participant has engaged a mechanism of the object, for example firing a semiautomatic weapon. One or more active markers can emit a pattern of light that is modulated spatially or temporally to report operational state of an object, such as when a participant has fired the weapon or turned a weapon safety off or on.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: December 30, 2014
    Assignee: Motion Reality, Inc.
    Inventors: Ronnie Johannes Hendrikus Wilmink, Nels Howard Madsen, Thomas Michael McLaughlin
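Detecting a mechanism engagement from a change in marker separation, as this abstract describes, can be sketched with a simple threshold test. The function name, rest-separation parameter, and tolerance are illustrative assumptions:

```python
import math

def mechanism_engaged(marker_a, marker_b, rest_separation, tolerance=0.005):
    """Infer a change in an object's operational state (e.g., a trigger
    pull on a tracked weapon prop) from a change in the separation
    between two tracked marker positions, in meters."""
    separation = math.dist(marker_a, marker_b)
    return abs(separation - rest_separation) > tolerance
```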
  • Patent number: 8825187
    Abstract: A wearable computing device of the listener entity can receive 3D motion data of a virtual representation of the listener entity, 3D motion data of a virtual representation of a sound emitter entity, and audio data. The audio data may be associated with an audio event triggered by the sound emitter entity in a capture volume. The wearable computing device of the listener entity can process the 3D motion data of the virtual representation of the listener entity, the 3D motion data of the virtual representation of the sound emitter entity, and the audio data to generate multi-channel audio output data customized to the perspective of the virtual representation of the listener entity. The multi-channel audio output data may be associated with the audio event. The multi-channel audio output data can be communicated to the listener entity through a surround sound audio output device.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: September 2, 2014
    Assignee: Motion Reality, Inc.
    Inventors: Cameron Travis Hamrick, Nels Howard Madsen, Thomas Michael McLaughlin
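The perspective-dependent audio output this abstract describes can be illustrated with a two-channel stand-in for the surround output: a constant-power stereo pan computed from the emitter's position relative to the listener's position and facing. The function name and panning law are assumptions, not the patented implementation.

```python
import math

def stereo_gains(listener_pos, listener_yaw, emitter_pos):
    """Constant-power left/right gains for a sound emitter, based on
    its bearing relative to the listener's forward direction (+y at
    yaw 0). pan = -1 is hard left, +1 is hard right."""
    dx = emitter_pos[0] - listener_pos[0]
    dy = emitter_pos[1] - listener_pos[1]
    angle = math.atan2(dx, dy) - listener_yaw
    pan = max(-1.0, min(1.0, math.sin(angle)))
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right
```

An emitter directly ahead of the listener yields equal gains in both channels; one directly to the right drives the right channel only.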