Patents by Inventor Venu M. Duggineni

Venu M. Duggineni has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104693
    Abstract: Generating synthesized data includes capturing one or more frames of a scene at a first frame rate by one or more cameras of a wearable device, determining body position parameters for the frames, and obtaining geometry data for the scene in accordance with the one or more frames. The frames, body position parameters, and geometry data are applied to a trained network which predicts one or more additional frames. With respect to virtual data, generating a synthesized frame includes determining current body position parameters in accordance with the one or more frames, predicting a future gaze position based on the current body position parameters, and rendering, at a first resolution, a gaze region of a frame in accordance with the future gaze position. A peripheral region is predicted for the frame at a second resolution, and the combined regions form a frame that is used to drive a display.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 28, 2024
    Inventors: Vinay Palakkode, Kaushik Raghunath, Venu M. Duggineni, Vivaan Bahl
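
A minimal Python sketch of the frame-synthesis idea in the entry above (publication 20240104693): predict a future gaze position from body-position parameters, render the gaze region at full resolution, predict the periphery at a lower resolution, and combine the two. The frame size, gaze-box size, and every function name here are illustrative assumptions, not details taken from the application.

```python
import numpy as np

FRAME_H, FRAME_W = 480, 640      # assumed output frame size
GAZE_BOX = 128                   # assumed side length of the high-resolution gaze region

def predict_gaze(body_params):
    """Toy stand-in for a trained network: map body-position parameters
    (here just yaw and pitch) to a future gaze position (row, col)."""
    yaw, pitch = body_params
    row = int((0.5 + pitch) * (FRAME_H - 1)) % FRAME_H
    col = int((0.5 + yaw) * (FRAME_W - 1)) % FRAME_W
    return row, col

def synthesize_frame(body_params):
    """Render the gaze region at full resolution and the periphery at a
    quarter of the linear resolution, then combine them into one frame."""
    row, col = predict_gaze(body_params)

    # Low-resolution peripheral prediction, upsampled to full size.
    low = np.random.rand(FRAME_H // 4, FRAME_W // 4)
    frame = np.kron(low, np.ones((4, 4)))[:FRAME_H, :FRAME_W]

    # Full-resolution gaze region pasted over the periphery.
    r0 = max(0, min(row - GAZE_BOX // 2, FRAME_H - GAZE_BOX))
    c0 = max(0, min(col - GAZE_BOX // 2, FRAME_W - GAZE_BOX))
    frame[r0:r0 + GAZE_BOX, c0:c0 + GAZE_BOX] = np.random.rand(GAZE_BOX, GAZE_BOX)
    return frame

if __name__ == "__main__":
    out = synthesize_frame(body_params=(0.1, -0.2))
    print(out.shape)  # (480, 640)
```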
  • Publication number: 20240098234
    Abstract: A head-mounted device is provided that includes one or more cameras configured to acquire a raw video feed and one or more displays configured to present a passthrough video feed to a user. Generation of the passthrough video feed can involve processing the raw video feed using an image signal processor and auxiliary compute blocks. One or more of the auxiliary compute blocks can be bypassed in response to detecting one or more failures associated with the auxiliary compute blocks. Configured and operated in this way, the head-mounted device can fall back to a more reliable passthrough video feed without having to power cycle the head-mounted device when a failure occurs.
    Type: Application
    Filed: September 6, 2023
    Publication date: March 21, 2024
    Inventors: Michael C. Friedman, Russell L. Jones, Kaushik Raghunath, Venu M. Duggineni, Ranjit Desai, Manjunath M. Venkatesh, Michael J. Rockwell, Arun Kannan, Saul H. Weiss
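
A hedged sketch of the fallback behavior described in the entry above (publication 20240098234): the raw feed always passes through the image signal processor, while any auxiliary compute block that has reported a failure is bypassed rather than forcing a power cycle. All function and block names are invented for illustration.

```python
from typing import Callable, List

Frame = list  # stand-in type for an image frame

def isp(frame: Frame) -> Frame:
    """Baseline image-signal-processor stage; always part of the pipeline."""
    return frame

def make_pipeline(aux_blocks: List[Callable[[Frame], Frame]],
                  failed: set) -> Callable[[Frame], Frame]:
    """Build the passthrough pipeline, skipping any auxiliary block whose
    name is in the failure set so the video feed keeps running."""
    def run(frame: Frame) -> Frame:
        frame = isp(frame)
        for block in aux_blocks:
            if block.__name__ in failed:
                continue  # bypass the failed block; keep passthrough alive
            frame = block(frame)
        return frame
    return run

def tone_map(frame): return frame
def distortion_correct(frame): return frame

if __name__ == "__main__":
    pipeline = make_pipeline([tone_map, distortion_correct],
                             failed={"distortion_correct"})
    print(pipeline([0, 1, 2]))
```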
  • Patent number: 11818419
    Abstract: A mobile device includes a display, at least one sensor, and a wireless transceiver. The mobile device also includes control circuitry coupled to the display, the at least one sensor, and the wireless transceiver. The control circuitry is configured to obtain content primitives from the at least one sensor, to perform content provisioning operations to obtain content based at least in part on the content primitives, and to display the obtained content on the display, wherein at least some of the content is virtual content. In response to a bandwidth condition of the wireless communication channel being less than a threshold, the control circuitry is configured to perform adjusted content provisioning operations that involve increasing an amount of image processing operations performed by the mobile device to obtain the content.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: November 14, 2023
    Assignee: Apple Inc.
    Inventors: Moinul H. Khan, Katharina Buckl, Venu M. Duggineni, Aleksandr M. Movshovich, Sreeraman Anantharaman, Phillip N. Smith
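
The provisioning logic in the entry above (patent 11818419) hinges on a bandwidth threshold: when the wireless link degrades, more of the image processing moves onto the device. The sketch below illustrates that decision only; the threshold value and the `remote_render`/`local_render` helpers are assumptions.

```python
BANDWIDTH_THRESHOLD_MBPS = 10.0   # assumed threshold, for illustration

def remote_render(primitives):
    """Placeholder: offload heavy rendering over the wireless link."""
    return {"source": "remote", "primitives": primitives}

def local_render(primitives):
    """Placeholder: perform more image-processing work on the mobile device."""
    return {"source": "local", "primitives": primitives}

def provision_content(primitives, measured_bandwidth_mbps):
    """Choose where the image processing happens based on measured bandwidth."""
    if measured_bandwidth_mbps < BANDWIDTH_THRESHOLD_MBPS:
        # Adjusted provisioning: shift work onto the device when the link is poor.
        return local_render(primitives)
    return remote_render(primitives)

if __name__ == "__main__":
    print(provision_content(["plane", "anchor"], measured_bandwidth_mbps=4.2))
    print(provision_content(["plane", "anchor"], measured_bandwidth_mbps=42.0))
```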
  • Patent number: 11804019
    Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: October 31, 2023
    Assignee: Apple Inc.
    Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
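
The core of the compositing entry above (patent 11804019) is aligning rendered frames with additional data at the same instants in time. The minimal sketch below pairs each rendered frame with the most recent extra-data sample by timestamp; the record layout and field names are illustrative assumptions, not the patented format.

```python
from bisect import bisect_right

def composite(rendered_frames, extra_samples):
    """Pair each rendered frame with the latest extra-data sample (audio level,
    metadata, ...) whose timestamp is not later than the frame's.

    rendered_frames: list of (timestamp, frame) sorted by timestamp
    extra_samples:   list of (timestamp, data)  sorted by timestamp
    """
    extra_ts = [t for t, _ in extra_samples]
    stream = []
    for t, frame in rendered_frames:
        i = bisect_right(extra_ts, t) - 1
        data = extra_samples[i][1] if i >= 0 else None
        stream.append({"timestamp": t, "frame": frame, "extra": data})
    return stream

if __name__ == "__main__":
    frames = [(0.000, "f0"), (0.033, "f1"), (0.066, "f2")]
    audio = [(0.000, {"audio_level": 0.2}), (0.050, {"audio_level": 0.7})]
    for record in composite(frames, audio):
        print(record)
```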
  • Publication number: 20230300285
    Abstract: A method is provided that includes determining a gaze position of a user relative to mixed-reality content displayed in a first frame, setting a binning mode for a first camera based on the determined gaze position, and capturing, using the first camera, passthrough content for a second frame at a resolution determined by the binning mode.
    Type: Application
    Filed: February 15, 2023
    Publication date: September 21, 2023
    Inventors: Kaushik Raghunath, Venu M. Duggineni
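
A hedged sketch of the gaze-driven binning in the entry above (publication 20230300285): when the gaze falls on displayed virtual content, the passthrough camera can bin pixels and capture at lower resolution. The binning factors, region test, and resolution numbers are assumptions made only for illustration.

```python
def choose_binning_mode(gaze_xy, virtual_regions):
    """Return a binning factor for the next captured frame: 1 (full resolution)
    if the gaze lands outside all virtual-content regions, else 2 (2x2 binning)."""
    gx, gy = gaze_xy
    for (x0, y0, x1, y1) in virtual_regions:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return 2  # gaze is on rendered content; passthrough detail matters less
    return 1          # gaze is on passthrough; capture at full resolution

def capture_frame(binning):
    """Placeholder for a camera capture at the resolution implied by `binning`."""
    full_w, full_h = 1920, 1080
    return (full_w // binning, full_h // binning)

if __name__ == "__main__":
    regions = [(800, 400, 1100, 700)]           # one virtual window on screen
    mode = choose_binning_mode((950, 500), regions)
    print(capture_frame(mode))                  # (960, 540) under 2x2 binning
```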
  • Publication number: 20230273817
    Abstract: A request to transition a computing system from a first state to a second state is received, and a respective manifest is compiled for each of a plurality of processors of the computing system. Each manifest comprises a transition identifier representing a command to transition from the first state to the second state and an action time for executing one or more operations associated with the transition identifier. The respective manifests are dispatched to the plurality of processors, and status reports are received from the plurality of processors regarding the transition from the first state to the second state.
    Type: Application
    Filed: February 23, 2023
    Publication date: August 31, 2023
    Inventors: Arun Kannan, Manjunath M. Venkatesh, Venu M. Duggineni, Alhad A. Palkar, Kaushik Raghunath, David M. Jun, Alex Tukh, Yakov Ben-Zaken
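
The entry above (publication 20230273817) describes compiling a per-processor manifest with a transition identifier and an action time, dispatching the manifests, and collecting status reports. The sketch below mirrors only those fields; the processor names, dispatch mechanism, and report format are assumptions.

```python
import time

def compile_manifests(processors, transition_id, action_time):
    """Build one manifest per processor: the transition to perform and when."""
    return {p: {"transition_id": transition_id, "action_time": action_time}
            for p in processors}

def dispatch(manifests):
    """Stand-in for sending each manifest to its processor and collecting a
    status report on the transition."""
    reports = {}
    for proc, manifest in manifests.items():
        # A real system would hand the manifest to firmware running on `proc`.
        reports[proc] = {"transition_id": manifest["transition_id"],
                         "status": "ok"}
    return reports

if __name__ == "__main__":
    procs = ["cpu", "gpu", "isp"]
    manifests = compile_manifests(procs, transition_id="SLEEP_TO_WAKE",
                                  action_time=time.time() + 0.010)
    print(dispatch(manifests))
```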
  • Publication number: 20220207842
    Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
    Type: Application
    Filed: March 14, 2022
    Publication date: June 30, 2022
    Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
  • Patent number: 11308696
    Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: April 19, 2022
    Assignee: Apple Inc.
    Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
  • Publication number: 20200107068
    Abstract: A mobile device includes a display, at least one sensor, and a wireless transceiver. The mobile device also includes control circuitry coupled to the display, the at least one sensor, and the wireless transceiver. The control circuitry is configured to obtain content primitives from the at least one sensor, to perform content provisioning operations to obtain content based at least in part on the content primitives, and to display the obtained content on the display, wherein at least some of the content is virtual content. In response to a bandwidth condition of the wireless communication channel being less than a threshold, the control circuitry is configured to perform adjusted content provisioning operations that involve increasing an amount of image processing operations performed by the mobile device to obtain the content.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Moinul H. Khan, Katharina Buckl, Venu M. Duggineni, Aleksandr M. Movshovich, Sreeraman Anantharaman, Phillip N. Smith
  • Publication number: 20200043237
    Abstract: One implementation forms a composited stream of computer-generated reality (CGR) content using multiple data streams related to a CGR experience to facilitate recording or streaming. A media compositor obtains a first data stream of rendered frames and a second data stream of additional data. The rendered frame content (e.g., 3D models) represents real and virtual content rendered during a CGR experience at a plurality of instants in time. The additional data of the second data stream relates to the CGR experience, for example, relating to audio, audio sources, metadata identifying detected attributes of the CGR experience, image data, data from other devices involved in the CGR experience, etc. The media compositor forms a composited stream that aligns the rendered frame content with the additional data for the plurality of instants in time, for example, by forming time-stamped, n-dimensional datasets (e.g., images) corresponding to individual instants in time.
    Type: Application
    Filed: August 6, 2019
    Publication date: February 6, 2020
    Inventors: Ranjit Desai, Venu M. Duggineni, Perry A. Caro, Aleksandr M. Movshovich, Gurjeet S. Saund
  • Patent number: 10438564
    Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: October 8, 2019
    Assignee: Apple Inc.
    Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
  • Publication number: 20180293958
    Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
    Type: Application
    Filed: June 11, 2018
    Publication date: October 11, 2018
    Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
  • Patent number: 9997137
    Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: June 12, 2018
    Assignee: Apple Inc.
    Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
  • Publication number: 20180039315
    Abstract: In some implementations, a mobile device can be configured with virtual motion fences that delineate domains of motion detectable by the mobile device. In some implementations, the mobile device can be configured to invoke an application or function when the mobile device enters or exits a motion domain (by crossing a motion fence). In some implementations, entering or exiting a motion domain can cause components of the mobile device to power on or off (or awaken or sleep) in an incremental manner.
    Type: Application
    Filed: March 20, 2017
    Publication date: February 8, 2018
    Applicant: Apple Inc.
    Inventors: Hung A. Pham, Parin Patel, Venu M. Duggineni
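
A minimal sketch of the motion-fence concept in the entry above (publication 20180039315): motion magnitude is partitioned into domains by fences, and crossing a fence triggers an incremental wake or sleep action. The fence values, units, and callback are illustrative assumptions.

```python
MOTION_FENCES = [0.1, 1.0, 3.0]   # assumed boundaries between motion domains

def motion_domain(magnitude):
    """Return the index of the motion domain the magnitude falls into."""
    domain = 0
    for fence in MOTION_FENCES:
        if magnitude >= fence:
            domain += 1
    return domain

def on_domain_change(old, new):
    """Placeholder callback: incrementally wake or sleep device components."""
    action = "wake" if new > old else "sleep"
    print(f"domain {old} -> {new}: {action} components for domain {new}")

if __name__ == "__main__":
    previous = motion_domain(0.0)
    for reading in [0.05, 0.5, 2.0, 4.0, 0.2]:
        current = motion_domain(reading)
        if current != previous:            # a motion fence was crossed
            on_domain_change(previous, current)
        previous = current
```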
  • Publication number: 20170092228
    Abstract: An electronic display includes a display side and an ambient light sensor configured to measure light received through the display side. The electronic display also includes multiple pixels located between the display side and the ambient light sensor. The multiple pixels are configured to emit display light through the display side.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 30, 2017
    Inventors: Guy Cote, Mahesh B. Chappalli, Venu M. Duggineni
  • Patent number: 9600049
    Abstract: In some implementations, a mobile device can be configured with virtual motion fences that delineate domains of motion detectable by the mobile device. In some implementations, the mobile device can be configured to invoke an application or function when the mobile device enters or exits a motion domain (by crossing a motion fence). In some implementations, entering or exiting a motion domain can cause components of the mobile device to power on or off (or awaken or sleep) in an incremental manner.
    Type: Grant
    Filed: June 7, 2013
    Date of Patent: March 21, 2017
    Assignee: Apple Inc.
    Inventors: Hung A. Pham, Parin Patel, Venu M. Duggineni
  • Patent number: 9330606
    Abstract: Automatic display brightness adjustments may be made by an electronic device based on ambient light sensor data. Proximity sensor data from a light-based proximity sensor, from nearby capacitive sensor electrodes in a touch screen, or from other proximity sensing components may be used to determine whether the ambient light sensor is being shadowed by a hand or other external object. Ambient light sensor data associated with blocked sensor conditions can be suppressed. A transient event filter may be used to remove spikes from ambient light sensor data. A display brightness baseline may be adaptively adjusted. Short changes in ambient light level may result in corresponding momentary adjustments to display brightness. Longer changes in ambient light level may be associated with persistent changes in the display brightness baseline.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: May 3, 2016
    Assignee: Apple Inc.
    Inventors: Ulrich Barnhoefer, Venu M. Duggineni, Clifford E. Russell
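
The brightness entry above (patent 9330606) chains three steps: suppress ambient-light samples taken while the sensor is shadowed, filter out transient spikes, and adapt a brightness baseline only to sustained changes. The sketch below illustrates that chain under stated assumptions; the spike ratio, smoothing factor, and all names are invented for illustration.

```python
def filter_ambient_samples(samples, covered_flags, spike_ratio=3.0):
    """Suppress samples taken while the sensor is covered and samples that
    jump by more than `spike_ratio` relative to the previous kept value."""
    kept, last = [], None
    for lux, covered in zip(samples, covered_flags):
        if covered:
            continue                        # shadowed by a hand or other object
        if last is not None and lux > spike_ratio * max(last, 1.0):
            continue                        # transient spike; ignore
        kept.append(lux)
        last = lux
    return kept

def update_baseline(baseline, lux, alpha=0.05):
    """Adapt the brightness baseline slowly, so only persistent ambient-light
    changes move it while brief changes cause only momentary adjustments."""
    return (1 - alpha) * baseline + alpha * lux

if __name__ == "__main__":
    lux_samples = [120, 118, 900, 125, 10, 130]
    covered = [False, False, False, False, True, False]
    baseline = 100.0
    for lux in filter_ambient_samples(lux_samples, covered):
        baseline = update_baseline(baseline, lux)
    print(round(baseline, 1))
```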
  • Publication number: 20140365803
    Abstract: In some implementations, a mobile device can be configured with virtual motion fences that delineate domains of motion detectable by the mobile device. In some implementations, the mobile device can be configured to invoke an application or function when the mobile device enters or exits a motion domain (by crossing a motion fence). In some implementations, entering or exiting a motion domain can cause components of the mobile device to power on or off (or awaken or sleep) in an incremental manner.
    Type: Application
    Filed: June 7, 2013
    Publication date: December 11, 2014
    Inventors: Hung A. Pham, Parin Patel, Venu M. Duggineni
  • Publication number: 20130328842
    Abstract: Automatic display brightness adjustments may be made by an electronic device based on ambient light sensor data. Proximity sensor data from a light-based proximity sensor, from nearby capacitive sensor electrodes in a touch screen, or from other proximity sensing components may be used to determine whether the ambient light sensor is being shadowed by a hand or other external object. Ambient light sensor data associated with blocked sensor conditions can be suppressed. A transient event filter may be used to remove spikes from ambient light sensor data. A display brightness baseline may be adaptively adjusted. Short changes in ambient light level may result in corresponding momentary adjustments to display brightness. Longer changes in ambient light level may be associated with persistent changes in the display brightness baseline.
    Type: Application
    Filed: September 27, 2012
    Publication date: December 12, 2013
    Applicant: Apple Inc.
    Inventors: Ulrich Barnhoefer, Venu M. Duggineni, Clifford E. Russell