Patents by Inventor Ashraf Ayman Michail

Ashraf Ayman Michail has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954786
    Abstract: In various examples there is a method performed by a Head Mounted Display (HMD) comprising a high field rate display configured to display fields of rendered frames at a field rate. The method comprises receiving a stream of the rendered frames for display on the high field rate display, the stream of rendered frames having a frame rate. The process applies an early stage reprojection to the rendered frames of the stream of rendered frames at a rate which is lower than the field rate. The process applies a late stage reprojection to fields of the rendered frames at the field rate, wherein the early stage reprojection uses more computational resources than the late stage reprojection.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: April 9, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dag Birger Frommhold, Christian Voss-Wolff, Ashraf Ayman Michail
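The cadence this abstract describes pairs an expensive per-frame correction with a cheap per-field one. A minimal Python sketch of that scheduling, with illustrative function and counter names not taken from the patent:

```python
def reprojection_counts(fields_per_frame, n_frames):
    """Count how often each reprojection stage runs while displaying
    n_frames rendered frames on a field-sequential display.

    Early-stage reprojection (expensive) runs once per rendered frame;
    late-stage reprojection (cheap) runs once per displayed field."""
    early = late = 0
    for _ in range(n_frames):
        early += 1                       # heavy correction of the full frame
        for _ in range(fields_per_frame):
            late += 1                    # light correction of each field
    return early, late

# 60 rendered frames on a display showing 4 fields per frame:
# the early stage runs 60 times, the late stage 240 times.
```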
  • Publication number: 20230377241
    Abstract: In various examples there is a method performed by a Head Mounted Display (HMD) comprising a high field rate display configured to display fields of rendered frames at a field rate. The method comprises receiving a stream of the rendered frames for display on the high field rate display, the stream of rendered frames having a frame rate. The process applies an early stage reprojection to the rendered frames of the stream of rendered frames at a rate which is lower than the field rate. The process applies a late stage reprojection to fields of the rendered frames at the field rate, wherein the early stage reprojection uses more computational resources than the late stage reprojection.
    Type: Application
    Filed: May 20, 2022
    Publication date: November 23, 2023
    Inventors: Dag Birger Frommhold, Christian Voss-Wolff, Ashraf Ayman Michail
  • Publication number: 20220357779
    Abstract: A wearable device includes multiple subsystems including a processor and a memory device, multiple temperature sensors coupled to sense temperatures of the multiple subsystems, and programming, including an application, stored on the memory device for execution by the processor to perform operations. The operations include receiving temperature information from the multiple temperature sensors corresponding to temperatures associated with the multiple subsystems, processing the temperature information to identify a first subsystem of the multiple subsystems, and providing a notification to the application executing on the processor to moderate application performance in a manner that reduces heat generated by the first subsystem.
    Type: Application
    Filed: May 6, 2021
    Publication date: November 10, 2022
    Inventors: Sudeesh Reddy Pingili, Ashraf Ayman Michail, Jerome Raymond Halmans
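A minimal Python sketch of the mitigation flow described above: readings from several subsystem temperature sensors are processed to identify the subsystem most in need of relief, and the application is notified so it can shed the load that heats it. Subsystem names, thresholds, and the callback shape are hypothetical:

```python
def pick_mitigation_target(readings, limits):
    """readings/limits: dicts of subsystem name -> temperature (deg C).
    Return the subsystem exceeding its limit by the widest margin, or None."""
    over = {name: temp - limits[name]
            for name, temp in readings.items() if temp > limits[name]}
    return max(over, key=over.get) if over else None

def notify_application(on_mitigate, readings, limits):
    """Invoke the application's callback so it can reduce the work that
    heats the identified subsystem (e.g. lower render resolution)."""
    target = pick_mitigation_target(readings, limits)
    if target is not None:
        on_mitigate(target)
    return target
```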
  • Patent number: 11170579
    Abstract: One disclosed example provides a computing device comprising a processing device and a storage device storing instructions executable by the processing device to execute in a first local process an application that outputs digital content for rendering and display. During execution of the application, the instructions are executable to provide, to a second local or remote process, object information regarding an object to be rendered by the second local or remote process, receive, from the second local or remote process, a rendering of the object, output the rendering of the object to display the object, receive a manipulation made to the object, provide, to the second local or remote process, updated object information based on the manipulation made to the object, receive, from the second local or remote process, an updated rendering of the object, and output the updated rendering of the object to display the object.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: November 9, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dag Birger Frommhold, Jonathan Michael Lyons, Benjamin Markus Thaut, Ashraf Ayman Michail
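The round trip in this abstract (send object info, receive a rendering, send a manipulation, receive an updated rendering) can be sketched with two stand-in classes; class names, the string "rendering", and the delegation shape are illustrative, not from the patent:

```python
class RemoteRenderer:
    """Stands in for the second (local or remote) process: it holds the
    object state it has been told about and performs the rendering."""
    def __init__(self):
        self.objects = {}

    def render(self, obj_id, info):
        self.objects[obj_id] = info
        return f"rendering({obj_id}|{info})"   # placeholder for real pixels

class App:
    """First local process: outputs content for display but delegates
    rendering of some objects to the other process."""
    def __init__(self, renderer):
        self.renderer = renderer

    def show(self, obj_id, info):
        return self.renderer.render(obj_id, info)          # initial rendering

    def manipulate(self, obj_id, updated_info):
        return self.renderer.render(obj_id, updated_info)  # updated rendering
```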
  • Publication number: 20200327740
    Abstract: One disclosed example provides a computing device comprising a processing device and a storage device storing instructions executable by the processing device to execute in a first local process an application that outputs digital content for rendering and display. During execution of the application, the instructions are executable to provide, to a second local or remote process, object information regarding an object to be rendered by the second local or remote process, receive, from the second local or remote process, a rendering of the object, output the rendering of the object to display the object, receive a manipulation made to the object, provide, to the second local or remote process, updated object information based on the manipulation made to the object, receive, from the second local or remote process, an updated rendering of the object, and output the updated rendering of the object to display the object.
    Type: Application
    Filed: April 9, 2019
    Publication date: October 15, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dag Birger Frommhold, Jonathan Michael Lyons, Benjamin Markus Thaut, Ashraf Ayman Michail
  • Patent number: 10708597
    Abstract: Examples described herein generally relate to performing frame extrapolation in image frame rendering. A vertex mesh is generated as a set of vertices, and each vertex is mapped to a screen space position for defining a texture. One or more motion vectors for one or more regions in a first image frame of a stream of image frames can be determined. The screen space positions associated with at least a portion of the set of vertices within the texture can be modified based at least in part on the one or more motion vectors. A graphics processing unit (GPU) can render the first image frame into the texture, producing an extrapolated image frame. The extrapolated image frame is displayed after the first image frame and before a next image frame in the stream of image frames.
    Type: Grant
    Filed: February 1, 2018
    Date of Patent: July 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew Zicheng Yeung, Michael George Boulton, Ashraf Ayman Michail, Matt Bronder, Jack Andrew Elliott, Matthew David Sandy
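One way to read the mesh-warping step above is as a displacement of screen-space vertex positions by per-region motion vectors before the GPU re-renders the last frame through the warped mesh. A Python sketch under that reading, with a hypothetical region lookup:

```python
def extrapolate_mesh(vertices, motion_vectors, region_of):
    """vertices: list of (x, y) screen-space positions.
    motion_vectors: dict of region key -> (dx, dy) displacement.
    region_of: function mapping a vertex index to its region key.

    Returns the displaced vertex positions; rendering the previous
    frame through this warped mesh yields the extrapolated frame."""
    warped = []
    for i, (x, y) in enumerate(vertices):
        dx, dy = motion_vectors.get(region_of(i), (0.0, 0.0))
        warped.append((x + dx, y + dy))
    return warped
```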
  • Publication number: 20200137409
    Abstract: Examples are disclosed that relate to producing an extrapolated frame based on motion vectors. One example provides a computing device comprising a logic machine and a storage machine comprising instructions executable by the logic machine to, for each block of one or more blocks of pixels in rendered image data, generate a motion vector indicating motion between a current frame and a prior frame, and for each block of the one or more blocks, extrapolate a predicted block of pixels from the current frame based on the motion vector and one or more prior motion vectors for the block, the one or more prior motion vectors determined via one or more corresponding frames preceding the prior frame. The instructions are further executable to produce an extrapolated frame comprising the predicted block of pixels for each block of the one or more blocks, and display the extrapolated frame.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Michael George Boulton
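The abstract combines a block's current motion vector with prior-frame vectors to extrapolate one frame ahead, without fixing the blend. A Python sketch assuming a plain average (the averaging choice is an assumption, not the patent's formula):

```python
def predicted_motion(current_mv, prior_mvs):
    """Combine the block's current motion vector with its prior-frame
    motion vectors. A plain average is used here for illustration."""
    mvs = [current_mv] + list(prior_mvs)
    n = len(mvs)
    return (sum(dx for dx, _ in mvs) / n, sum(dy for _, dy in mvs) / n)

def extrapolate_block(block_pos, current_mv, prior_mvs):
    """Predict where a block of pixels lands one frame after the current one."""
    dx, dy = predicted_motion(current_mv, prior_mvs)
    return (block_pos[0] + dx, block_pos[1] + dy)
```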
  • Patent number: 10430983
    Abstract: Encoding pixel information for pixels of an image. A method includes accessing information defining high-frequency image data correlated with pixels. The method further includes for each pixel, identifying if a vertex from the high-frequency image data is located in that pixel based on analysis of the high-frequency data correlated with the pixel. The method further includes, for one or more pixels in which a vertex is located, identifying the location of the vertex. The method further includes encoding the vertex location into image pixel data.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: October 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Michael George Boulton
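A toy Python sketch of encoding a vertex location into pixel data as described above: a presence flag plus the sub-pixel location packed into 8-bit channels. The channel layout is an assumption for illustration; the patent only claims encoding the location into image pixel data:

```python
def encode_pixel(has_vertex, u=0.0, v=0.0):
    """Pack a 'vertex present' flag and the vertex's sub-pixel location
    (u, v in [0, 1)) into three 8-bit channels of a pixel."""
    if not has_vertex:
        return (0, 0, 0)
    return (255, int(u * 255), int(v * 255))

def decode_pixel(rgb):
    """Recover the sub-pixel vertex location, or None if no vertex."""
    flag, qu, qv = rgb
    return None if flag == 0 else (qu / 255.0, qv / 255.0)
```

Quantizing to 8 bits costs under half a pixel-fraction step of precision per channel, which the decode test below tolerates.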
  • Publication number: 20190238854
    Abstract: Examples described herein generally relate to performing frame extrapolation in image frame rendering. A vertex mesh is generated as a set of vertices, and each vertex is mapped to a screen space position for defining a texture. One or more motion vectors for one or more regions in a first image frame of a stream of image frames can be determined. The screen space positions associated with at least a portion of the set of vertices within the texture can be modified based at least in part on the one or more motion vectors. A graphics processing unit (GPU) can render the first image frame into the texture, producing an extrapolated image frame. The extrapolated image frame is displayed after the first image frame and before a next image frame in the stream of image frames.
    Type: Application
    Filed: February 1, 2018
    Publication date: August 1, 2019
    Inventors: Andrew Zicheng Yeung, Michael George Boulton, Ashraf Ayman Michail, Matt Bronder, Jack Andrew Elliott, Matthew David Sandy
  • Patent number: 10237531
    Abstract: In various embodiments, methods and systems for reprojecting three-dimensional (3D) virtual scenes using discontinuity depth late stage reprojection are provided. A reconstruction point that indicates camera pose information is accessed. The reconstruction point is associated with a plurality of sample points of a three-dimensional (3D) virtual scene. One or more closest sample points, relative to the reconstruction point, are identified from the plurality of sample points. Each of the one or more closest sample points is associated with a cube map of color data and depth data. A relative convergence score is determined for each of the one or more closest sample points based on performing a depth-aware cube map late stage reprojection operation in relation to the reconstruction point. A subset of the one or more closest sample points is identified based on the relative convergence score. A reconstructed 3D virtual image is generated using the subset.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: March 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael George Boulton, Ashraf Ayman Michail
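The first step of the method above, selecting the sample points nearest the reconstruction point, can be sketched directly; in the full method each sample point carries a color/depth cube map, which this toy omits, and the cutoff k is illustrative:

```python
def closest_sample_points(reconstruction_pt, sample_pts, k):
    """Return the k sample points (3D tuples) nearest the reconstruction
    point, ordered nearest first. Squared distance avoids a sqrt."""
    def dist_sq(p):
        return sum((a - b) ** 2 for a, b in zip(p, reconstruction_pt))
    return sorted(sample_pts, key=dist_sq)[:k]
```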
  • Publication number: 20180350039
    Abstract: Encoding pixel information for pixels of an image. A method includes accessing information defining high-frequency image data correlated with pixels. The method further includes for each pixel, identifying if a vertex from the high-frequency image data is located in that pixel based on analysis of the high-frequency data correlated with the pixel. The method further includes, for one or more pixels in which a vertex is located, identifying the location of the vertex. The method further includes encoding the vertex location into image pixel data.
    Type: Application
    Filed: June 5, 2017
    Publication date: December 6, 2018
    Inventors: Ashraf Ayman Michail, Michael George Boulton
  • Patent number: 10127725
    Abstract: A two-dimensional augmentation image is rendered from a three-dimensional model from a first virtual perspective. A transformation is applied to the augmentation image to yield an updated two-dimensional augmentation image that approximates a second virtual perspective of the three-dimensional model without additional rendering from the three-dimensional model.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: November 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Kohler, Denis Demandolx, Will Guyman, Ashraf Ayman Michail, Minshik Park, Justin Nafziger, Eric Richards
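A planar homography is one common transformation that approximates a new virtual perspective of a rendered 2D image without re-rendering the 3D model; this sketch only shows applying such a 3x3 matrix to a point (the patent does not specify which transformation is used):

```python
def apply_homography(H, point):
    """Warp a 2D point by the 3x3 homography H (row-major nested lists),
    dividing by the projective coordinate w to return to 2D."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)
```

In practice the same matrix would be applied to every pixel (or to the corners of a textured quad) to produce the updated augmentation image.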
  • Patent number: 10129523
    Abstract: Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, process the depth map to obtain a blurred depth map, and, based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location into a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: November 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Georg Klein, Andrew Martin Pearson, Zsolt Mathe, Mark S. Grossman, Ning Xu
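The ray/depth-map intersection step above can be illustrated with a deliberately simplified 1D march: the real method casts a 3D ray per pixel against a blurred 2D depth map, whereas this toy walks one screen row and reports the first column where the ray reaches the stored depth:

```python
def intersect_ray_with_depth(ray_start_z, dz_per_column, blurred_depths):
    """March a simplified reprojected ray across a 1D blurred depth map,
    one screen column at a time. Returns the first column where the ray
    depth reaches the stored depth (the column whose color would be
    sampled for display), or None if the ray never hits."""
    for col, depth in enumerate(blurred_depths):
        ray_z = ray_start_z + dz_per_column * col
        if ray_z >= depth:
            return col
    return None
```

Blurring the depth map before this search smooths depth discontinuities, which reduces reprojection artifacts at object edges.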
  • Patent number: 10114454
    Abstract: In various embodiments, methods and systems for reprojecting images based on a velocity depth late stage reprojection process are provided. A reprojection engine supports reprojecting images based on an optimized late stage reprojection process performed using both depth data and velocity data. Image data and its corresponding depth and velocity data are received. An adjustment to be made to the image data is determined based on motion data, the depth data, and the velocity data. The motion data corresponds to a device associated with displaying the image data; the velocity data supports determining calculated correction distances for portions of the image data. The image data is then adjusted accordingly, integrating depth-data-based translation and velocity-data-based motion correction into a single-pass implementation.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: October 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael George Boulton, Ashraf Ayman Michail, Gerhard Albert Schneider, Yang You
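The single-pass combination of a depth-based term and a velocity-based term can be sketched per pixel; the 1/depth parallax weighting and the unit choices are illustrative assumptions, not the patent's formulas:

```python
def adjust_pixel(x, depth, velocity_x, camera_dx, dt):
    """Single-pass per-pixel correction along one axis, combining:
    - a depth-based parallax term: nearer pixels (small depth) shift
      more under a camera translation camera_dx, and
    - a velocity-based motion term: the pixel's own scene motion
      extrapolated over the latency interval dt."""
    parallax = camera_dx / depth
    motion = velocity_x * dt
    return x + parallax + motion
```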
  • Patent number: 10078367
    Abstract: Embodiments are described herein for determining a stabilization plane to reduce errors that occur when a homographic transformation is applied to a scene including 3D geometry and/or multiple non-coplanar planes. Such embodiments can be used, e.g., when displaying an image on a head mounted display (HMD) device, but are not limited thereto. In an embodiment, a rendered image is generated, a gaze location of a user is determined, and a stabilization plane, associated with a homographic transformation, is determined based on the determined gaze location. This can involve determining, based on the user's gaze location, variables of the homographic transformation that define the stabilization plane. The homographic transformation is applied to the rendered image to thereby generate an updated image, and at least a portion of the updated image is then displayed.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: September 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Roger Sebastian Kevin Sylvan, Quentin Simon Charles Miller, Alex Aben-Athar Kipman
  • Patent number: 9892565
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: February 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
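The abstract's pixel offset adjustment (correcting the pre-rendered image for the difference between the predicted and updated pose) can be reduced to a small-angle sketch; the yaw-only model and the pixels-per-radian scale are assumptions for illustration:

```python
def pixel_offset(predicted_yaw, updated_yaw, pixels_per_radian):
    """Late correction applied by display circuitry: convert the pose
    prediction error (radians of yaw) into a horizontal shift of the
    pre-rendered image, under a small-angle approximation."""
    return (updated_yaw - predicted_yaw) * pixels_per_radian
```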
  • Patent number: 9874932
    Abstract: One embodiment provides a method to display video such as computer-rendered animation or other video. The method includes assembling a sequence of video frames featuring a moving object, each video frame including a plurality of subframes sequenced for display according to a schedule. The method also includes determining a vector-valued differential velocity of the moving object relative to a head of an observer of the video. At a time scheduled for display of a first subframe of a given frame, first-subframe image content transformed by a first transform is displayed. At a time scheduled for display of the second subframe of the given frame, second-subframe image content transformed by a second transform is displayed. The first and second transforms are computed based on the vector-valued differential velocity to mitigate artifacts.
    Type: Grant
    Filed: April 9, 2015
    Date of Patent: January 23, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Calbraith Crisler, Robert Thomas Held, Stephen Latta, Ashraf Ayman Michail, Martin Shetter, Arthur Tomlin
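One reading of the per-subframe transforms above is a shift proportional to the object's differential velocity and each subframe's display time, so a moving object stays aligned across the subframes (e.g. the color fields) of one frame. A sketch under that reading, with illustrative units:

```python
def subframe_offsets(differential_velocity, subframe_times):
    """differential_velocity: (vx, vy) of the moving object relative to
    the observer's head, in pixels/second (illustrative units).
    subframe_times: seconds from frame start to each subframe's display.
    Returns the (dx, dy) shift to apply to each subframe's content."""
    vx, vy = differential_velocity
    return [(vx * t, vy * t) for t in subframe_times]
```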
  • Publication number: 20170374343
    Abstract: In various embodiments, methods and systems for reprojecting images based on a velocity depth late stage reprojection process are provided. A reprojection engine supports reprojecting images based on an optimized late stage reprojection process performed using both depth data and velocity data. Image data and its corresponding depth and velocity data are received. An adjustment to be made to the image data is determined based on motion data, the depth data, and the velocity data. The motion data corresponds to a device associated with displaying the image data; the velocity data supports determining calculated correction distances for portions of the image data. The image data is then adjusted accordingly, integrating depth-data-based translation and velocity-data-based motion correction into a single-pass implementation.
    Type: Application
    Filed: January 17, 2017
    Publication date: December 28, 2017
    Inventors: Michael George Boulton, Ashraf Ayman Michail, Gerhard Albert Schneider, Yang You
  • Publication number: 20170374344
    Abstract: In various embodiments, methods and systems for reprojecting three-dimensional (3D) virtual scenes using discontinuity depth late stage reprojection are provided. A reconstruction point that indicates camera pose information is accessed. The reconstruction point is associated with a plurality of sample points of a three-dimensional (3D) virtual scene. One or more closest sample points, relative to the reconstruction point, are identified from the plurality of sample points. Each of the one or more closest sample points is associated with a cube map of color data and depth data. A relative convergence score is determined for each of the one or more closest sample points based on performing a depth-aware cube map late stage reprojection operation in relation to the reconstruction point. A subset of the one or more closest sample points is identified based on the relative convergence score. A reconstructed 3D virtual image is generated using the subset.
    Type: Application
    Filed: January 17, 2017
    Publication date: December 28, 2017
    Inventors: Michael George Boulton, Ashraf Ayman Michail
  • Publication number: 20170374341
    Abstract: Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, process the depth map to obtain a blurred depth map, and, based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location into a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
    Type: Application
    Filed: June 22, 2016
    Publication date: December 28, 2017
    Inventors: Ashraf Ayman Michail, Georg Klein, Andrew Martin Pearson, Zsolt Mathe, Mark S. Grossman, Ning Xu