Patents by Inventor Seyedkoosha MIRHOSSEINI
Seyedkoosha MIRHOSSEINI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20250111623
Abstract: Various implementations disclosed herein include devices, systems, and methods that apply a three-dimensional (3D) effect to content for rendering. For example, a process may obtain content to render within an extended reality (XR) environment. The process may further generate, via a rendering framework, a two-dimensional (2D) rendering of the content. The rendering framework generates 3D information based on the content. The process may further generate a 3D effect for rendering the content based on the 3D information. The process may further determine a location of a display region for the content within the XR environment, and a view of the XR environment may be presented. The rendering of the content may be presented with the 3D effect at that location in the view of the XR environment.
Type: Application
Filed: September 12, 2024
Publication date: April 3, 2025
Inventors: Jason M Cahill, Brendan J Scully, Christopher J Figueroa, Earl M Olson, Courtland M Idstrom, Seyedkoosha Mirhosseini

Publication number: 20250111589
Abstract: Various implementations provide passthrough video by adjusting camera parameters based on environment modeling. An environment characteristic may be determined by modeling the physical environment based on sensor data captured via one or more sensors. For example, this may involve determining optical characteristics of environment light sources, optical characteristics of environment surfaces, a 3D mapping of the environment, user behavior, a prediction of the optical characteristics of light coming into the camera, and the like. The method may involve, based on the environment characteristic, determining a camera parameter for an image captured via the image sensor. For example, the method may determine exposure, gain, tone mapping, color balance, noise reduction, or sharpness enhancement. The method may determine the camera parameter based on user information, e.g., user preferences, user activity, etc.
Type: Application
Filed: September 6, 2024
Publication date: April 3, 2025
Inventors: Simon Fortin-Deschenes, Luke A Pillans, Anselm Grundhoefer, Christian I Moore, Seyedkoosha Mirhosseini

Publication number: 20250104580
Abstract: Various implementations disclosed herein include devices, systems, and methods that present content items (e.g., movies, TV shows, home-made videos, etc.) on electronic devices such as HMDs. Some implementations adjust what is being displayed by the electronic devices to mitigate optical module-based artifacts (e.g., ghosting). For example, in an HMD with a catadioptric lens, a mirror layer may leak some light to produce ghosting artifacts that may be mitigated by adjusting brightness, dynamic range, contrast, light-spill, color, etc. Some implementations utilize adjustments that are based on content item awareness (e.g., adjustments based on the peak brightness of the scene in a movie that is being displayed within an extended reality (XR) environment, etc.). Some implementations provide adjustments based on environment awareness (e.g., how dark the surroundings or pass-through environment are) and/or optical module modeling.
Type: Application
Filed: September 9, 2024
Publication date: March 27, 2025
Inventors: Stanley K. Melax, Seyedpooya Mirhosseini, Fuyi Yang, Seyedkoosha Mirhosseini, Dagny Fleischman, David M. Cook, Yashas Rai Kurlethimar, Xin Wang, Travis W. Brown, Ara H. Aroyan, Jin Wook Chang, Abbas Haddadi, Yang Li, Alexander G. Berardino, Mengu Sukan, Ermal Dreshaj, Kyrollos Yanny, William W. Sprague

Publication number: 20250071255
Abstract: Various examples disclosed herein maintain stereo consistency in extended reality (XR) environments when receiving content data with a content frame depicting both a left eye portion of a content item, corresponding to a first left eye viewpoint of the content item within the XR environment, and a right eye portion of the content item, corresponding to a first right eye viewpoint of the content item within the XR environment. Stereo consistency may be maintained by determining to use an adjusted version of the content frame to provide a view of the content item in which a left eye view is rendered from a second left eye viewpoint different from the first left eye viewpoint and a right eye view is rendered from a second right eye viewpoint different from the first right eye viewpoint, and presenting the left eye view and the right eye view based on the content frame and the adjustment.
Type: Application
Filed: July 16, 2024
Publication date: February 27, 2025
Inventors: Jacob Wilson, Sushant Ojal, Seyedkoosha Mirhosseini

Patent number: 12217371
Abstract: Techniques are disclosed whereby graphical information for a first image frame to be rendered is obtained at a first device, the graphical information comprising at least depth information for at least a portion of the pixels within the first image frame. Next, a regional depth value may be determined for a region of pixels in the first image frame. The region of pixels may then be coded as either a "skipped" region or a "non-skipped" region based, at least in part, on the determined regional depth value for the region of pixels. Finally, if the region of pixels is coded as a non-skipped region, a representation of the region of pixels may be rendered and composited with any other graphical content, as desired, to a display of the first device; whereas, if the region of pixels is coded as a skipped region, the first device may avoid rendering the region.
Type: Grant
Filed: September 21, 2022
Date of Patent: February 4, 2025
Assignee: Apple Inc.
Inventor: Seyedkoosha Mirhosseini
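A minimal sketch of this kind of depth-based region skipping. The tile size, the use of a minimum-depth statistic as the "regional depth value", and the far-plane threshold are all illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def classify_regions(depth_map, region_size, far_threshold):
    """Tile the frame into square regions and mark each as 'skipped'
    (entirely farther than far_threshold) or 'non-skipped'."""
    h, w = depth_map.shape
    flags = {}
    for y in range(0, h, region_size):
        for x in range(0, w, region_size):
            tile = depth_map[y:y + region_size, x:x + region_size]
            # Regional depth value: the nearest depth in the tile. If even the
            # nearest pixel lies beyond the threshold, rendering can be skipped.
            regional_depth = tile.min()
            flags[(y, x)] = "skipped" if regional_depth > far_threshold else "non-skipped"
    return flags

depth = np.full((8, 8), 100.0)   # everything far away...
depth[0:4, 0:4] = 2.0            # ...except one near region
flags = classify_regions(depth, region_size=4, far_threshold=50.0)
```

Only the tile containing near geometry is marked non-skipped; the other three tiles can be left unrendered.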

Publication number: 20240412320
Abstract: In some implementations, a device includes an environmental sensor, a display, a non-transitory memory and one or more processors coupled with the environmental sensor, the display and the non-transitory memory. In some implementations, a method includes generating, at a first time, intermediate warping data for a warping operation to be performed on an application frame. In some implementations, the method includes obtaining, at a second time that occurs after the first time, via the environmental sensor, environmental data that indicates a pose of the device within a physical environment of the device. In some implementations, the method includes generating a warped application frame by warping the application frame in accordance with the pose of the device and the intermediate warping data. In some implementations, the method includes displaying the warped application frame on the display.
Type: Application
Filed: September 8, 2022
Publication date: December 12, 2024
Inventor: Seyedkoosha Mirhosseini

Publication number: 20240406573
Abstract: Various implementations disclosed herein improve the appearance of captured video by accounting for light-based flicker and/or other factors affecting the appearance of video captured by a wearable electronic device. Some implementations are used with head-mounted devices (HMDs) that relay one or more front-facing camera feeds to display panels in front of the user's eyes. Some implementations adjust the exposure of one or more cameras of such a device based on assessing the lighting in the physical environment being captured in images/video by the cameras. Camera exposure may be adjusted (e.g., using discrete levels of exposure that are an even multiple of a light flicker rate) to reduce the appearance of flicker from one or more light sources in the physical environment. Whether and how to adjust exposure to reduce flicker may be based on environmental characteristics corresponding to visibility/objectionability of flicker from one or more light sources.
Type: Application
Filed: May 17, 2024
Publication date: December 5, 2024
Inventors: Anselm GRUNDHOEFER, Simon FORTIN-DESCHENES, Christian I. MOORE, Luke A. PILLANS, Christophe SEYVE, Seyedkoosha MIRHOSSEINI
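A hedged sketch of exposure quantization against flicker. The idea of snapping exposure time to whole flicker periods (so each exposure integrates complete brightness cycles) follows the abstract; the 120 Hz flicker figure (from 60 Hz mains lighting) and the snap-down rule are assumptions for illustration.

```python
def flicker_safe_exposure(requested_exposure_s, flicker_hz):
    """Snap a requested exposure time down to the nearest integer multiple of
    the flicker period, so each exposure integrates whole flicker cycles."""
    period = 1.0 / flicker_hz
    multiples = int(requested_exposure_s / period)
    if multiples < 1:
        return requested_exposure_s  # too short to cover one cycle; leave as-is
    return multiples * period

# Lighting on 60 Hz mains flickers at 120 Hz (period ~8.33 ms),
# so a requested 20 ms exposure snaps down to 2 periods (~16.67 ms).
exp = flicker_safe_exposure(0.020, flicker_hz=120.0)
```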

Publication number: 20240404165
Abstract: In one implementation, a method of displaying an image is performed by a device including one or more processors and non-transitory memory. The method includes obtaining gaze information. The method includes obtaining, based on the gaze information, a first resolution function and a second resolution function different from the first resolution function. The method includes rendering a first layer based on first virtual content and the first resolution function. The method includes rendering a second layer based on second virtual content and the second resolution function. The method includes compositing the first layer and the second layer into an image. The method includes displaying, on the display, the image.
Type: Application
Filed: May 23, 2024
Publication date: December 5, 2024
Inventors: Yashas Rai Kurlethimar, Jonathan Moorman, Mark L. Ma, Michael E. Buerli, Seyedkoosha Mirhosseini, Sushant Ojal

Publication number: 20240404185
Abstract: In one implementation, a method of pipelined blending an image with virtual content is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, with the image sensor, a first portion of an image of a physical environment. The method includes warping the first portion of the image of the physical environment to generate a warped first portion. The method includes blending the warped first portion with a first portion of virtual content to generate a blended first portion. The method includes displaying, on the display, the blended first portion. The method includes capturing, with the image sensor, a second portion of the image of the physical environment. The method includes warping the second portion of the image of the physical environment to generate a warped second portion. The method includes blending the warped second portion with a second portion of the virtual content to generate a blended second portion.
Type: Application
Filed: May 23, 2024
Publication date: December 5, 2024
Inventors: Christian I. Moore, Moinul H. Khan, Seyedkoosha Mirhosseini, Simon Fortin-Deschenes
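The capture → warp → blend → display flow per image portion can be sketched as a simple pipeline loop. All of the stage functions here are placeholders, and the two-portion split (e.g., slices of camera scanlines) is an assumption; the patent does not specify how the image is partitioned.

```python
def process_frame(capture, warp, blend, display, virtual_portions):
    """Process one frame portion-by-portion: each camera slice is warped and
    blended with its matching virtual-content slice, then displayed before the
    next slice is handled."""
    for i, virtual in enumerate(virtual_portions):
        part = capture(i)                     # capture one slice of the camera image
        warped = warp(part)                   # perspective-correct that slice
        composited = blend(warped, virtual)   # mix with the matching virtual slice
        display(composited)                   # scan out, then move to the next slice

shown = []
process_frame(
    capture=lambda i: f"cam{i}",
    warp=lambda p: f"warped({p})",
    blend=lambda p, v: f"{p}+{v}",
    display=shown.append,
    virtual_portions=["vfx0", "vfx1"],
)
```

The point of the pipelining is latency: early portions of the frame reach the display while later portions are still being captured.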

Publication number: 20240406362
Abstract: Electronic devices such as mixed reality devices may present virtual objects at a display and provide a virtual magnifier to alter (e.g., magnify) the virtual objects. In one or more implementations, the virtual magnifier magnifies a first virtual object and subsequently magnifies a second virtual object. The electronic device may provide one or more effects, such as initially maintaining the current size of the second virtual object and subsequently adjusting the size of the second virtual object based on the first virtual object, including the relative depth between the first virtual object and the second virtual object. In one or more implementations, a process for stabilization of a magnified object is applied in circumstances when the change in position of the electronic device or the change in the user's gaze location is at or above a threshold.
Type: Application
Filed: May 20, 2024
Publication date: December 5, 2024
Inventors: Colin D. MUNRO, Seyedkoosha MIRHOSSEINI

Publication number: 20240404230
Abstract: In one implementation, a method of displaying an image is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an object in a physical environment. The method includes obtaining, based on the image, a first predicted object pose of the object in the physical environment at a display time. The method includes rendering virtual content based on the first predicted object pose. The method includes obtaining a second predicted object pose of the object in the physical environment at the display time. The method includes warping the virtual content based on the second predicted object pose. The method includes displaying, on the display at the display time, the warped virtual content.
Type: Application
Filed: May 23, 2024
Publication date: December 5, 2024
Inventors: Conner J. Brooks, Seyedkoosha Mirhosseini
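The two-stage prediction can be illustrated with a late-stage correction transform: content is rendered at an early pose prediction, then adjusted using a refined prediction just before display. Representing poses as 4x4 matrices and the correction as `late @ inv(early)` is an assumed formulation for illustration, not the patent's specified warp.

```python
import numpy as np

def late_stage_correction(early_pose, late_pose):
    """Transform that moves content rendered at early_pose onto late_pose."""
    return late_pose @ np.linalg.inv(early_pose)

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

early = translation(1.0, 0.0, 0.0)   # object pose predicted at render time
late = translation(1.2, 0.0, 0.0)    # refined prediction at display time
corr = late_stage_correction(early, late)
```

Applying `corr` to the rendered content shifts it by the 0.2 m the object is now predicted to have moved, without re-rendering.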

Publication number: 20240402802
Abstract: Various implementations disclosed herein include devices, systems, and methods that adjust a brightness characteristic of virtual content (e.g., virtual objects) and/or real content (e.g., passthrough video) in views of an XR environment provided by a head-mounted device (HMD). The brightness characteristic may be adjusted based on determining a viewing state (e.g., a user's eye perception/adaptation state). A viewing state, such as a user's eye perception/adaptation state while viewing a view of an XR environment via an HMD, may respond to a brightness characteristic of the XR environment that the user is seeing, which is not necessarily the brightness characteristic of the physical environment upon which the view is wholly or partially based.
Type: Application
Filed: June 4, 2024
Publication date: December 5, 2024
Inventors: Travis W. BROWN, Seyedkoosha MIRHOSSEINI, John Samuel BUSHELL, Alexander G. BERARDINO, David M. COOK, Jim J. TILANDER, Ryan W. BAKER

Publication number: 20240378822
Abstract: Various implementations disclosed herein include devices, systems, and methods that adjust a tone map used to display virtual content in an extended reality (XR) environment based on a tone map of pass-through video. For example, a process may obtain virtual content associated with a virtual content tone map relating pixel luminance values to display space luminance values. The process further obtains pass-through video depicting a physical environment. The pass-through video is associated with an image signal processing (ISP) tone map relating pixel luminance values of the pass-through video signal to display space luminance values. The process further determines an adjustment to the virtual content tone map based on the ISP tone map. The process further displays a view of an XR environment. The view includes the pass-through video displayed using the ISP tone map and the virtual content displayed using the virtual content tone map with the adjustment.
Type: Application
Filed: May 8, 2024
Publication date: November 14, 2024
Inventors: Tobias Holl, Seyedkoosha Mirhosseini, David M. Cook

Patent number: 12136169
Abstract: In some implementations, a method includes obtaining a request to view an object from a target point-of-view (POV). In some implementations, the object is represented in a plurality of images captured from corresponding POVs that are different from the target POV. In some implementations, the method includes generating respective contribution scores for the corresponding POVs indicative of respective contributions of the corresponding POVs to a view frustum of the target POV. In some implementations, the method includes determining a sequence in which the plurality of images is ordered based on the respective contribution scores for the corresponding POVs. In some implementations, the method includes synthesizing a new view of the object corresponding to the target POV by performing a warping operation on the plurality of images in accordance with the sequence.
Type: Grant
Filed: January 24, 2022
Date of Patent: November 5, 2024
Assignee: Apple Inc.
Inventors: Seyedpooya Mirhosseini, Seyedkoosha Mirhosseini
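A loose sketch of ordering source views by a contribution score before warping. Using view-direction alignment with the target as the score is an invented stand-in; the patent only requires some measure of each POV's contribution to the target view frustum. Drawing the least-contributing view first (so the most-contributing view wins where views overlap) is likewise an assumption about the compositing order.

```python
def order_by_contribution(views, target_dir):
    """Sort (image, view_dir) pairs so the least-contributing view is warped
    first and the most-contributing view last."""
    def score(view):
        _, d = view
        # Dot product of unit view directions: higher means the source
        # viewpoint looks more like the target viewpoint.
        return sum(a * b for a, b in zip(d, target_dir))
    return sorted(views, key=score)

views = [
    ("imgA", (1.0, 0.0, 0.0)),   # perpendicular to the target view
    ("imgB", (0.0, 0.0, 1.0)),   # aligned with the target view
    ("imgC", (0.7, 0.0, 0.7)),   # oblique
]
ordered = [name for name, _ in order_by_contribution(views, target_dir=(0.0, 0.0, 1.0))]
```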

Publication number: 20240331661
Abstract: Prior to rendering a current frame, a device obtains a previously rendered frame. The device determines that a first portion of the previously rendered frame is associated with a particular type of content. The device renders a first portion of the current frame that corresponds to the first portion of the previously rendered frame with a first rendering characteristic while rendering a second portion of the current frame with a second rendering characteristic that is different from the first rendering characteristic.
Type: Application
Filed: March 25, 2024
Publication date: October 3, 2024
Inventors: Yashas Rai Kurlethimar, Nathaniel C. Begeman, Seyedkoosha Mirhosseini

Publication number: 20240303766
Abstract: In some implementations, a method includes: obtaining a reference image and forward flow information; identifying a neighborhood of pixels corresponding to a pixel within a target image based on the forward flow information; in accordance with a determination that a characterization vector for the neighborhood of pixels satisfies a background condition, generating a warp result for the pixel based on a first warp type; in accordance with a determination that the characterization vector satisfies a foreground condition, generating the warp result for the pixel based on a second warp type; in accordance with a determination that the characterization vector does not satisfy the foreground or background conditions, generating the warp result for the pixel based on a third warp type; and populating pixel information for the pixel within the target image based on pixel information for a reference pixel within the reference image that corresponds to the warp result.
Type: Application
Filed: May 14, 2024
Publication date: September 12, 2024
Inventor: Seyedkoosha Mirhosseini
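The three-way warp selection can be sketched as a per-pixel classifier. Reducing the patent's "characterization vector" to a single foreground-coverage score with two thresholds, and the specific warp names, are invented simplifications for illustration.

```python
def select_warp_type(neighborhood_fg_score, bg_max=0.1, fg_min=0.9):
    """Pick a warp based on whether a pixel's flow neighborhood looks like
    pure background, pure foreground, or a mixture of both.

    neighborhood_fg_score: fraction of the neighborhood covered by
    foreground motion (an assumed stand-in for the characterization vector).
    """
    if neighborhood_fg_score <= bg_max:
        return "background_warp"   # first warp type: background condition met
    if neighborhood_fg_score >= fg_min:
        return "foreground_warp"   # second warp type: foreground condition met
    return "mixed_warp"            # third warp type: neither condition met

kinds = [select_warp_type(s) for s in (0.0, 1.0, 0.5)]
```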

Publication number: 20240267503
Abstract: In one implementation, a method of generating an image is performed by a device including one or more processors and non-transitory memory. The method includes generating a first resolution function based on a formula with a set of variables having a first set of values. The method includes generating a first image based on first content and the first resolution function. The method includes detecting a resolution constraint. The method includes generating a second resolution function based on the formula with the set of variables having a second set of values, wherein the second resolution function has a summation value that satisfies the resolution constraint. The method includes generating a second image based on second content and the second resolution function.
Type: Application
Filed: February 8, 2024
Publication date: August 8, 2024
Inventors: Yashas Rai Kurlethimar, Seyedkoosha Mirhosseini, Tobias Eble

Publication number: 20240233205
Abstract: In one implementation, a method of performing perspective correction of an image is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of a physical environment. The method includes obtaining a plurality of depths respectively associated with a plurality of pixels of the image of the physical environment. The method includes generating a clamped depth map of the image of the physical environment based on the plurality of depths, wherein each element of the clamped depth map has a depth value above or equal to a depth threshold. The method includes generating a display image by transforming, using the one or more processors, the image of the physical environment based on the clamped depth map and a difference between a perspective of the image sensor and a perspective of a user. The method includes displaying, on the display, the display image.
Type: Application
Filed: March 21, 2024
Publication date: July 11, 2024
Inventors: Maxime Meilland, Duncan A. McRoberts, Julien Monat Rodier, Seyedkoosha Mirhosseini
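The clamped depth map itself is a one-liner: every element is forced up to at least the depth threshold, so very-near depths cannot drive extreme perspective-correction warps. The 0.3 m threshold below is an assumed value, not one stated in the abstract.

```python
import numpy as np

def clamp_depth_map(depth_map, depth_threshold=0.3):
    """Return a depth map where every element is >= depth_threshold."""
    return np.maximum(depth_map, depth_threshold)

depths = np.array([0.05, 0.2, 0.4, 2.0])   # meters
clamped = clamp_depth_map(depths)
```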

Patent number: 12014472
Abstract: In some implementations, a method includes: obtaining a reference image frame and forward flow information; for a respective pixel within a target image frame, obtaining a plurality of starting points within the reference image frame with different depths; generating a plurality of intermediate warp results based on the plurality of starting points and the forward flow information, wherein each of the plurality of intermediate warp results is associated with a candidate warp position and an associated depth; selecting a warp result for the respective pixel from among the plurality of intermediate warp results, wherein the warp result corresponds to the candidate warp position associated with a closest depth to a viewpoint associated with the reference image frame; and populating pixel information for the respective pixel within the target image frame based on pixel information for a reference pixel within the reference image frame that corresponds to the warp result.
Type: Grant
Filed: August 3, 2020
Date of Patent: June 18, 2024
Assignee: Apple Inc.
Inventor: Seyedkoosha Mirhosseini
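The selection step, resolving multiple candidate warp results by depth, can be sketched directly: each starting depth yields a candidate position and depth, and the candidate nearest the viewpoint (smallest depth) wins. The candidate generation itself is faked here with precomputed tuples; only the selection rule follows the abstract.

```python
def resolve_warp(candidates):
    """candidates: list of (warp_position, depth) tuples. Return the candidate
    whose depth is closest to the viewpoint, i.e., the smallest depth value,
    so nearer geometry correctly occludes farther geometry."""
    return min(candidates, key=lambda c: c[1])

# Three intermediate warp results for one target pixel, from three starting
# depths (positions and depths are illustrative values).
candidates = [((10, 12), 3.5), ((11, 12), 1.2), ((10, 13), 7.0)]
position, depth = resolve_warp(candidates)
```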

Publication number: 20240069688
Abstract: A head-mounted device is provided that includes displays configured to display an image and to simultaneously display a magnifying window that presents a magnified portion of the image. The magnifying window lies in a magnification plane that is fixed relative to a user's head. One or more processors in the head-mounted device can be used to perform a first ray cast operation to identify an input point where a detected user input intersects the magnifying window, to obtain a remapped point from the input point, to compute a directional vector based on the remapped point and a reference point associated with the user's head, to obtain a shifted point by shifting the remapped point from the magnification plane to another plane parallel to the magnification plane, and to perform a second ray cast operation using the shifted point and the directional vector.
Type: Application
Filed: August 17, 2023
Publication date: February 29, 2024
Inventors: Daniel M Golden, John M Nefulda, Joaquim Goncalo Lobo Ferreira da Silva, Anuj Bhatnagar, Mark A Ebbole, Andrew A Haas, Seyedkoosha Mirhosseini, Colin D Munro
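A loose geometric sketch of the second ray cast: the input point is remapped by undoing the magnification about the window center, a direction is computed from a head reference point through the remapped point, and the remapped point is shifted to a parallel plane. Assuming the magnification plane is perpendicular to the z axis, a 2x magnification, and uniform scaling about the window center are all illustrative simplifications.

```python
import numpy as np

def second_ray(input_point, window_center, magnification, plane_shift, head_point):
    """Return the origin and direction for the second ray cast operation.

    All points are 3D; the magnification plane is assumed perpendicular to z,
    so shifting to a parallel plane is a translation along z by plane_shift.
    """
    input_point = np.asarray(input_point, float)
    window_center = np.asarray(window_center, float)
    head_point = np.asarray(head_point, float)
    # Undo the magnification about the window center: the remapped point is
    # where the input would fall on the unmagnified image.
    remapped = window_center + (input_point - window_center) / magnification
    # Directional vector from the head reference point through the remapped point.
    direction = remapped - head_point
    direction /= np.linalg.norm(direction)
    # Shift the remapped point from the magnification plane to a parallel plane.
    shifted = remapped + np.array([0.0, 0.0, plane_shift])
    return shifted, direction

origin, direction = second_ray(
    input_point=(0.4, 0.0, 1.0),     # where the user's input hit the window
    window_center=(0.0, 0.0, 1.0),   # center of the magnifying window
    magnification=2.0,
    plane_shift=0.5,
    head_point=(0.0, 0.0, 0.0),      # reference point at the user's head
)
```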