Patents by Inventor Venu M. Duggineni
Venu M. Duggineni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240411593
Abstract: Techniques are disclosed relating to kernel task scheduling. In various embodiments, a computing device receives, at a first scheduler, a compute graph defining interrelationships for a set of tasks to be performed by the computing device. In some embodiments, the set of tasks is performed to provide an extended reality (XR) experience to a user. The first scheduler determines a schedule for implementing the set of tasks based on the interrelationships defined in the compute graph and issues instructions to cause a second scheduler of the computing device to schedule performance of the set of tasks in accordance with the determined schedule.
Type: Application
Filed: September 22, 2022
Publication date: December 12, 2024
Inventors: Arun Kannan, Venu M. Duggineni, Ranjit Desai, Rohan S. Patil
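Scheduling a set of tasks from a compute graph, as this abstract describes, is commonly done with a topological sort. The sketch below is purely illustrative (the task names and graph are hypothetical, not from the patent) and shows how a first-stage scheduler could derive an execution order that respects the graph's interrelationships:

```python
from collections import deque

def schedule_tasks(tasks, deps):
    """Order tasks so every dependency runs before its dependents
    (Kahn's topological sort over the compute graph)."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    dependents = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("compute graph contains a cycle")
    return order

# Hypothetical XR frame pipeline: tracking depends on capture, and so on.
tasks = ["capture", "track", "render", "display"]
deps = {"track": {"capture"}, "render": {"track"}, "display": {"render"}}
print(schedule_tasks(tasks, deps))  # ['capture', 'track', 'render', 'display']
```

The resulting order could then be handed to a second scheduler for actual dispatch, matching the two-scheduler split the abstract outlines.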
-
Publication number: 20240323342
Abstract: An electronic device is provided that includes at least one image sensor for acquiring a video feed and one or more displays for presenting a passthrough video feed to a user. The electronic device can include a hierarchical failure detection scheme for detecting critical failures on the device. The hierarchical failure detection scheme may include monitoring a condition of a first subsystem with a second subsystem, monitoring a condition of the second subsystem with a third subsystem, monitoring a condition of the third subsystem with a fourth subsystem, and so on. The displays can operate in a first video passthrough mode or a second video passthrough mode based on the condition of the first subsystem as monitored by the second subsystem, the condition of the second subsystem as monitored by the third subsystem, and/or the condition of the third subsystem as monitored by the fourth subsystem.
Type: Application
Filed: December 6, 2023
Publication date: September 26, 2024
Inventors: Mohamed Al Sharnouby, Arun Kannan, Venu M Duggineni, Kaushik Raghunath, Saul H Weiss, Luke Yoder, James C McIlree, Sankaravadivoo Subramanian, Mukta S Gore, Russell L Jones
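A chain of watchdogs, where each subsystem checks the heartbeat of the one below it, is one simple way to realize the hierarchical monitoring described here. This sketch assumes heartbeat timestamps and a timeout; all values and names are hypothetical:

```python
def failed_subsystems(heartbeats, now, timeout):
    """heartbeats[i] is the last heartbeat of subsystem i, observed by
    subsystem i+1 in the monitoring hierarchy; a stale heartbeat marks
    that subsystem as failed."""
    return [i for i, hb in enumerate(heartbeats) if now - hb > timeout]

def passthrough_mode(heartbeats, now, timeout=0.5):
    # Any failure detected along the chain drops the display into the
    # second (fallback) passthrough mode; otherwise stay in the first.
    return "fallback" if failed_subsystems(heartbeats, now, timeout) else "normal"

print(passthrough_mode([10.0, 10.0, 10.0], now=10.2))  # normal
print(passthrough_mode([10.0, 9.0, 10.0], now=10.2))   # fallback
```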
-
Publication number: 20240244336
Abstract: Aspects of the subject technology may provide time-synchronized image frames from multiple cameras to various system and/or application processes running on an electronic device. In one or more implementations, a frame identifier may be determined for each image frame from each camera based on a system pulse associated with the capture of the image frame. By generating frame identifiers for images from multiple cameras based on a centralized source such as the system pulses, subsequent processes can immediately identify images from multiple cameras having the same frame identifier for co-processing of those images.
Type: Application
Filed: September 12, 2023
Publication date: July 18, 2024
Inventors: Arun KANNAN, Dario A. ARANGUIZ, Mohamed AL SHARNOUBY, Rajiv KUMAR, Rohan Sanjeev PATIL, Varadharajan CHANDRAN, Venu M. DUGGINENI
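The core idea, deriving a shared frame identifier from a centralized system pulse, can be sketched as quantizing each capture timestamp to the nearest pulse index. The pulse period and timestamps below are hypothetical:

```python
def frame_id(capture_ts_ns, pulse_period_ns, epoch_ns=0):
    """Map a capture timestamp to the index of the nearest system pulse.
    Frames from different cameras captured on the same pulse get the same
    identifier, so later stages can pair them without extra handshaking."""
    return round((capture_ts_ns - epoch_ns) / pulse_period_ns)

PULSE_NS = 16_666_667  # hypothetical 60 Hz system pulse

# Two cameras triggered by the same pulse, with slightly different latencies,
# still map to the same identifier:
print(frame_id(16_600_000, PULSE_NS))  # 1
print(frame_id(16_750_000, PULSE_NS))  # 1
```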
-
Publication number: 20240201776
Abstract: An electronic device may include software and hardware subsystems that are centrally controlled by a user experience manager. The user experience manager can identify a current user context and enforce a corresponding power and performance policy for the various subsystems that is optimized for the current user context. The user experience manager can provide a set of restrictions at the launch of a user experience, can allow the various subsystems to vary their dynamic behavior based on current operating conditions as long as the dynamic adjustments do not violate the restrictions, and can perform a series of thermal mitigation operations as the internal temperature of the electronic device varies. The centralized user experience manager can also be configured to predict a user context based on monitored states of the subsystems and monitored application usage on the electronic device.
Type: Application
Filed: December 11, 2023
Publication date: June 20, 2024
Inventors: David M Jun, Arun Kannan, Kaushik Raghunath, Nikhil Sharma, Venu M Duggineni
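The "series of thermal mitigation operations" can be pictured as a ladder of progressively stronger steps keyed to temperature thresholds. The step names and threshold values here are illustrative assumptions, not figures from the patent:

```python
MITIGATION_STEPS = ("reduce_frame_rate", "dim_display",
                    "throttle_gpu", "pause_background_tasks")

def thermal_mitigations(temp_c, thresholds=(40.0, 45.0, 50.0, 55.0)):
    """Apply progressively stronger mitigations as the internal
    temperature crosses each (hypothetical) threshold."""
    return [step for step, th in zip(MITIGATION_STEPS, thresholds) if temp_c >= th]

print(thermal_mitigations(38.0))  # []
print(thermal_mitigations(47.0))  # ['reduce_frame_rate', 'dim_display']
```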
-
Publication number: 20240104693
Abstract: Generating synthesized data includes capturing one or more frames of a scene at a first frame rate by one or more cameras of a wearable device, determining body position parameters for the frames, and obtaining geometry data for the scene in accordance with the one or more frames. The frames, body position parameters, and geometry data are applied to a trained network which predicts one or more additional frames. With respect to virtual data, generating a synthesized frame includes determining current body position parameters in accordance with the one or more frames, predicting a future gaze position based on the current body position parameters, and rendering, at a first resolution, a gaze region of a frame in accordance with the future gaze position. A peripheral region is predicted for the frame at a second resolution, and the combined regions form a frame that is used to drive a display.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Vinay Palakkode, Kaushik Raghunath, Venu M. Duggineni, Vivaan Bahl
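The two-resolution split around a predicted gaze position is essentially foveated rendering. This minimal sketch (frame size, fovea radius, and scale factors are all hypothetical) shows how a frame could be partitioned into a full-resolution gaze region and a reduced-resolution periphery:

```python
def render_plan(width, height, gaze, fovea_radius):
    """Split the frame into a gaze region rendered at full resolution
    and a periphery rendered (or predicted) at reduced resolution."""
    gx, gy = gaze
    x0, y0 = max(0, gx - fovea_radius), max(0, gy - fovea_radius)
    x1, y1 = min(width, gx + fovea_radius), min(height, gy + fovea_radius)
    return {
        "gaze_region": (x0, y0, x1, y1),
        "gaze_scale": 1.0,        # first (full) resolution
        "periphery_scale": 0.25,  # second (quarter) resolution
    }

plan = render_plan(1920, 1080, gaze=(960, 540), fovea_radius=200)
print(plan["gaze_region"])  # (760, 340, 1160, 740)
```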
-
Publication number: 20240098234
Abstract: A head-mounted device is provided that includes one or more cameras configured to acquire a raw video feed and one or more displays configured to present a passthrough video feed to a user. Generation of the passthrough video feed can involve processing the raw video feed using an image signal processor and auxiliary compute blocks. One or more of the auxiliary compute blocks can be bypassed in response to detecting one or more failures associated with the auxiliary compute blocks. Configured and operated in this way, the head-mounted device can fall back to a more reliable passthrough video feed without having to power cycle the head-mounted device when a failure occurs.
Type: Application
Filed: September 6, 2023
Publication date: March 21, 2024
Inventors: Michael C Friedman, Russell L Jones, Kaushik Raghunath, Venu M Duggineni, Ranjit Desai, Manjunath M Venkatesh, Michael J Rockwell, Arun Kannan, Saul H Weiss
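Bypassing failed auxiliary blocks while keeping the mandatory ISP stage can be sketched as filtering a stage list before running the frame through it. Stage names here are hypothetical placeholders:

```python
def build_pipeline(blocks, failed):
    """Bypass failed auxiliary blocks; the base ISP stage always stays."""
    return [b for b in blocks if b == "isp" or b not in failed]

def process(frame, pipeline, stages):
    # Run the frame through whatever stages survived the failure check.
    for name in pipeline:
        frame = stages[name](frame)
    return frame

blocks = ["isp", "tone_map", "distortion_correct"]
stages = {"isp": lambda f: f + ["isp"],
          "tone_map": lambda f: f + ["tone_map"],
          "distortion_correct": lambda f: f + ["distortion_correct"]}

print(process([], build_pipeline(blocks, failed={"tone_map"}), stages))
# ['isp', 'distortion_correct']
```

Because the pipeline is rebuilt per failure report, the device can degrade gracefully instead of power cycling, as the abstract notes.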
-
Patent number: 11818419
Abstract: A mobile device includes a display, at least one sensor, and a wireless transceiver. The mobile device also includes control circuitry coupled to the display, the at least one sensor, and the wireless transceiver. The control circuitry is configured to obtain content primitives from the at least one sensor, to perform content provisioning operations to obtain content based at least in part on the content primitives, and to display the obtained content on the display, wherein at least some of the content is virtual content. In response to a bandwidth condition of the wireless communication channel being less than a threshold, the control circuitry is configured to perform adjusted content provisioning operations that involve increasing an amount of image processing operations performed by the mobile device to obtain the content.
Type: Grant
Filed: September 27, 2019
Date of Patent: November 14, 2023
Assignee: Apple Inc.
Inventors: Moinul H. Khan, Katharina Buckl, Venu M. Duggineni, Aleksandr M. Movshovich, Sreeraman Anantharaman, Phillip N. Smith
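The bandwidth-conditioned adjustment can be pictured as a simple policy switch: below a threshold, fetch lighter-weight primitives and do more image processing locally. The threshold and field names below are illustrative assumptions:

```python
def provisioning_plan(bandwidth_mbps, threshold_mbps=50.0):
    """Below the (hypothetical) bandwidth threshold, fetch lightweight
    content primitives and increase on-device image processing; above
    it, fetch richer, already-processed content."""
    if bandwidth_mbps < threshold_mbps:
        return {"fetch": "content_primitives", "local_image_processing": "increased"}
    return {"fetch": "processed_content", "local_image_processing": "baseline"}

print(provisioning_plan(12.0))   # low bandwidth: more work on-device
print(provisioning_plan(200.0))  # ample bandwidth: fetch processed content
```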
-
Patent number: 11804019
Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
Type: Grant
Filed: March 14, 2022
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Alexsandr M. Movshovich, Gurjeet S. Saund
-
Publication number: 20230300285
Abstract: A method is provided that includes determining a gaze position of a user relative to mixed-reality content displayed in a first frame, setting a binning mode for a first camera based on the determined gaze position, and capturing, using the first camera, passthrough content for a second frame at a resolution determined by the binning mode.
Type: Application
Filed: February 15, 2023
Publication date: September 21, 2023
Inventors: Kaushik RAGHUNATH, Venu M. DUGGINENI
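Selecting a camera binning mode from gaze position can be sketched as a containment test: full resolution when the gaze lands in the region a camera covers, pixel binning otherwise. The 2x2 binning choice and normalized region bounds are hypothetical:

```python
def camera_binning_mode(gaze_xy, camera_region):
    """Full-resolution capture when the gaze falls inside the region this
    camera covers; 2x2 pixel binning (quarter resolution, lower power)
    otherwise. Coordinates are normalized to [0, 1]."""
    x0, y0, x1, y1 = camera_region
    gx, gy = gaze_xy
    inside = x0 <= gx <= x1 and y0 <= gy <= y1
    return "full_resolution" if inside else "binned_2x2"

# Gaze in the right half of the view; right camera captures at full resolution.
print(camera_binning_mode((0.7, 0.5), camera_region=(0.5, 0.0, 1.0, 1.0)))
```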
-
Publication number: 20230273817
Abstract: A request to transition a computing system from a first state to a second state is received, and a respective manifest is compiled for each of a plurality of processors of the computing system. Each manifest comprises a transition identifier representing a command to transition from the first state to the second state and an action time for executing one or more operations associated with the transition identifier. The respective manifests are dispatched to the plurality of processors, and status reports are received from the plurality of processors regarding the transition from the first state to the second state.
Type: Application
Filed: February 23, 2023
Publication date: August 31, 2023
Inventors: Arun KANNAN, Manjunath M. VENKATESH, Venu M. DUGGINENI, Alhad A. PALKAR, Kaushik RAGHUNATH, David M. JUN, Alex TUKH, Yakov BEN-ZAKEN
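The manifest structure the abstract describes (a transition identifier plus an action time, one manifest per processor) can be sketched directly. Giving every manifest the same future action time is one plausible way to get lockstep execution; the processor names and lead time are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Manifest:
    transition_id: str   # command, e.g. "active->sleep"
    action_time_ns: int  # when to execute the transition's operations

def compile_manifests(processors, transition_id, now_ns, lead_time_ns):
    """Compile one manifest per processor, all carrying the same action
    time, so the processors execute the state transition together."""
    action_time = now_ns + lead_time_ns
    return {p: Manifest(transition_id, action_time) for p in processors}

m = compile_manifests(["cpu", "gpu", "isp"], "active->sleep",
                      now_ns=1_000_000, lead_time_ns=500_000)
print(m["gpu"].action_time_ns)  # 1500000
```

Each processor would then report back a status for the transition, which the dispatcher aggregates.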
-
Publication number: 20220207842
Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
Type: Application
Filed: March 14, 2022
Publication date: June 30, 2022
Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Alexsandr M. Movshovich, Gurjeet S. Saund
-
Patent number: 11308696
Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
Type: Grant
Filed: August 6, 2019
Date of Patent: April 19, 2022
Assignee: Apple Inc.
Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
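The compositor's core alignment step, pairing each rendered frame with the additional data closest to it in time, can be sketched as nearest-timestamp matching within a tolerance. Record shapes and timestamps here are hypothetical:

```python
def composite(frames, aux_records, tolerance_ns):
    """Align each rendered frame with the auxiliary record (audio,
    metadata, ...) whose timestamp is closest, producing one
    time-stamped composited entry per instant."""
    out = []
    for f in frames:
        best = min(aux_records, key=lambda a: abs(a["ts"] - f["ts"]))
        if abs(best["ts"] - f["ts"]) <= tolerance_ns:
            out.append({"ts": f["ts"], "frame": f["data"], "aux": best["data"]})
    return out

frames = [{"ts": 0, "data": "f0"}, {"ts": 16, "data": "f1"}]
aux = [{"ts": 1, "data": "a0"}, {"ts": 15, "data": "a1"}]
print(composite(frames, aux, tolerance_ns=2))
```

A real compositor would stream this incrementally rather than scanning all records per frame, but the time-stamped pairing is the essential operation.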
-
Publication number: 20200107068
Abstract: A mobile device includes a display, at least one sensor, and a wireless transceiver. The mobile device also includes control circuitry coupled to the display, the at least one sensor, and the wireless transceiver. The control circuitry is configured to obtain content primitives from the at least one sensor, to perform content provisioning operations to obtain content based at least in part on the content primitives, and to display the obtained content on the display, wherein at least some of the content is virtual content. In response to a bandwidth condition of the wireless communication channel being less than a threshold, the control circuitry is configured to perform adjusted content provisioning operations that involve increasing an amount of image processing operations performed by the mobile device to obtain the content.
Type: Application
Filed: September 27, 2019
Publication date: April 2, 2020
Inventors: Moinul H. Khan, Katharina Buckl, Venu M. Duggineni, Aleksandr M. Movshovich, Sreeraman Anantharaman, Phillip N. Smith
-
Publication number: 20200043237
Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
Type: Application
Filed: August 6, 2019
Publication date: February 6, 2020
Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
-
Patent number: 10438564
Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
Type: Grant
Filed: June 11, 2018
Date of Patent: October 8, 2019
Assignee: Apple Inc.
Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
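With the sensor behind the pixels, its reading mixes true ambient light with light leaking backward from the display itself. One plausible compensation (not taken from the patent; the leakage coefficient is a hypothetical calibration constant) is to subtract a modeled display contribution:

```python
def ambient_light(sensor_reading, display_brightness, leakage_coeff):
    """A sensor behind the pixels sees ambient light plus light leaking
    from the display; subtract the modeled display contribution.
    leakage_coeff is a hypothetical per-panel calibration constant."""
    return max(0.0, sensor_reading - leakage_coeff * display_brightness)

print(ambient_light(sensor_reading=120.0, display_brightness=200.0,
                    leakage_coeff=0.1))  # 100.0
```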
-
Publication number: 20180293958
Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
Type: Application
Filed: June 11, 2018
Publication date: October 11, 2018
Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
-
Patent number: 9997137
Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
Type: Grant
Filed: September 30, 2015
Date of Patent: June 12, 2018
Assignee: Apple Inc.
Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
-
Publication number: 20180039315
Abstract: In some implementations, a mobile device can be configured with virtual motion fences that delineate domains of motion detectable by the mobile device. In some implementations, the mobile device can be configured to invoke an application or function when the mobile device enters or exits a motion domain (by crossing a motion fence). In some implementations, entering or exiting a motion domain can cause components of the mobile device to power on or off (or awaken or sleep) in an incremental manner.
Type: Application
Filed: March 20, 2017
Publication date: February 8, 2018
Applicant: Apple Inc.
Inventors: Hung A. Pham, Parin Patel, Venu M. Duggineni
-
Publication number: 20170092228
Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
Type: Application
Filed: September 30, 2015
Publication date: March 30, 2017
Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
-
Patent number: 9600049
Abstract: In some implementations, a mobile device can be configured with virtual motion fences that delineate domains of motion detectable by the mobile device. In some implementations, the mobile device can be configured to invoke an application or function when the mobile device enters or exits a motion domain (by crossing a motion fence). In some implementations, entering or exiting a motion domain can cause components of the mobile device to power on or off (or awaken or sleep) in an incremental manner.
Type: Grant
Filed: June 7, 2013
Date of Patent: March 21, 2017
Assignee: Apple Inc.
Inventors: Hung A. Pham, Parin Patel, Venu M. Duggineni
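Motion fences behave like geofences on a motion-intensity axis: as the measured motion level moves, each fence it crosses can trigger a registered wake/sleep action. The fence values below are hypothetical thresholds, not from the patent:

```python
def crossed_fences(prev_level, new_level, fences):
    """Return the motion fences crossed when the measured motion level
    moves from prev_level to new_level; each crossing can trigger an
    incremental power-on/off of some component."""
    lo, hi = sorted((prev_level, new_level))
    return [f for f in fences if lo < f <= hi]

# Hypothetical fences separating stationary / walking / running domains.
FENCES = [0.25, 0.5, 0.9]
print(crossed_fences(0.1, 0.6, FENCES))  # [0.25, 0.5]
```

Because crossings are detected per fence, components can be powered up one at a time as activity increases, matching the "incremental manner" the abstract describes.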