Patents by Inventor Dipak Mahendra Patel
Dipak Mahendra Patel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240073273
Abstract: A method and processing unit for creating and rendering synchronized content for a content rendering environment are described in the present disclosure. Initially, live content rendered to users in the content rendering environment is received. Further, inputs from a first user, and optionally from one or more second users, are collected during rendering of the live content. The inputs comprise user inputs provided in at least one of a virtual environment and user actions in a real-world environment. Inputs are collected along with at least one of a corresponding time stamp and a corresponding spatial stamp in the content rendering environment. Upon collecting the inputs, the live content is synchronized with the inputs based on at least one of the time and spatial stamps, and context mapping of the inputs with segments of the live content, to output synchronized content. By rendering such synchronized content to the user, a replication of the live rendering of the content may be experienced by the user.
Type: Application
Filed: September 7, 2023
Publication date: February 29, 2024
Applicant: ZEALITY INC.
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Patent number: 11895175
Abstract: A method and processing unit for creating and rendering synchronized content for a content rendering environment are described in the present disclosure. Initially, live content rendered to users in the content rendering environment is received. Further, inputs from a first user, and optionally from one or more second users, are collected during rendering of the live content. The inputs comprise user inputs provided in at least one of a virtual environment and user actions in a real-world environment. Inputs are collected along with at least one of a corresponding time stamp and a corresponding spatial stamp in the content rendering environment. Upon collecting the inputs, the live content is synchronized with the inputs based on at least one of the time and spatial stamps, and context mapping of the inputs with segments of the live content, to output synchronized content. By rendering such synchronized content to the user, a replication of the live rendering of the content may be experienced by the user.
Type: Grant
Filed: April 19, 2022
Date of Patent: February 6, 2024
Assignee: ZEALITY INC
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
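The abstract describes attaching time-stamped user inputs to segments of live content so a later playback can replicate the live experience. The patent text includes no implementation; the sketch below is an illustration only, with every name and data shape hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CollectedInput:
    timestamp: float   # seconds into the live content
    payload: str       # e.g. a gesture, comment, or virtual-environment action

def synchronize(segments, inputs):
    """Attach each collected input to the live-content segment whose time
    range contains the input's timestamp, producing a 'synchronized content'
    structure of (segment_label, [payloads]) pairs.

    `segments` is a list of (start, end, label) tuples.
    """
    out = []
    for start, end, label in segments:
        attached = [i.payload for i in inputs if start <= i.timestamp < end]
        out.append((label, attached))
    return out
```

A spatial stamp or context mapping, also mentioned in the abstract, could be handled the same way by matching on a region or topic instead of a time range.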
-
Patent number: 11893699
Abstract: A method and processing unit for providing content in a bandwidth-constrained environment are disclosed. Initially, content is received along with audio inputs that were provided by one or more users in the bandwidth-constrained environment during rendering of the content. Further, at least one object of interest within the content and associated with the audio inputs is identified. One or more regions of interest, including the at least one object of interest, are determined for the bandwidth-constrained environment. Upon determining the one or more regions of interest, the bitrate for rendering the content is modified based on the determined one or more regions of interest to obtain modified content for the bandwidth-constrained environment. The modified content is provided to be rendered in the bandwidth-constrained environment.
Type: Grant
Filed: March 15, 2022
Date of Patent: February 6, 2024
Assignee: Zeality Inc
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
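The core idea above is to spend a constrained bitrate budget unevenly, favoring the regions of interest. A minimal sketch of that kind of allocation, with the tile model, `roi_share` weight, and all names being assumptions rather than anything from the patent:

```python
def allocate_bitrate(tiles, roi_tiles, total_kbps, roi_share=0.7):
    """Split a total bitrate budget across video tiles, giving tiles that
    cover the region(s) of interest a larger share so they stay sharp
    while the rest of the frame is coarsened for the constrained link.

    `tiles` is a list of tile ids; `roi_tiles` is the subset covering the
    detected object(s) of interest.  Returns {tile_id: kbps}.
    """
    roi = [t for t in tiles if t in roi_tiles]
    rest = [t for t in tiles if t not in roi_tiles]
    alloc = {}
    for t in roi:
        alloc[t] = total_kbps * roi_share / len(roi)
    if rest:
        share = (1.0 - roi_share) if roi else 1.0
        for t in rest:
            alloc[t] = total_kbps * share / len(rest)
    return alloc
```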
-
Publication number: 20240040097
Abstract: A method, a processing unit, and a non-transitory computer-readable medium for controlling the viewpoint of attendees in an immersive environment are disclosed. To control the viewpoint, initially, a presenter input is received. The presenter input indicates objects selected from among a plurality of objects displayed in a 360° view of a content in the immersive environment. Further, the objects to be displayed are dynamically fixed within the viewpoint across the 360° view of the attendees. The objects are dynamically fixed within the viewpoint irrespective of inputs received from the attendees to change the objects within the viewpoint. New objects among the plurality of objects are detected based on at least one of real-time preferences of the attendees and the real-time context of the content provided by the presenter. Upon detecting the new objects, the objects are dynamically re-fixed within the viewpoint of the attendees.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Inventors: Dipak Mahendra Patel, Anmol Agarwal
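The abstract's key behavior is that presenter-pinned objects stay in the attendee's viewpoint no matter where the attendee looks. An illustration of that behavior in a deliberately simplified yaw-only viewport model (all names, angles, and the field-of-view default are hypothetical):

```python
def compose_viewpoint(attendee_yaw, pinned_objects, object_yaws, fov=90.0):
    """Return the set of objects shown in an attendee's viewport.

    Objects the presenter has fixed are always included, regardless of
    where the attendee has turned; unpinned objects appear only when
    their yaw angle falls inside the attendee's field of view.
    """
    half = fov / 2.0
    visible = set(pinned_objects)
    for obj, yaw in object_yaws.items():
        # shortest signed angular distance between object and view center
        delta = (yaw - attendee_yaw + 180.0) % 360.0 - 180.0
        if abs(delta) <= half:
            visible.add(obj)
    return visible
```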
-
Publication number: 20230333795
Abstract: A method, a processing unit, and a non-transitory computer-readable medium for providing recommendations in a content rendering environment with a presenter and attendees are disclosed. To provide the recommendations, initially, user data relating to the attendees is received for the content rendering environment. Further, recommendations are generated for the presenter based on the user data. The recommendations are used to initiate interaction in the content rendering environment between the presenter and at least one attendee selected from among the one or more attendees. The recommendations are provided to the presenter during rendering of the content to the one or more attendees. With the proposed system and method, the interaction between the presenter and the attendees may be enhanced and customized to their preferences.
Type: Application
Filed: April 19, 2022
Publication date: October 19, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
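The abstract does not say how attendee data is turned into a recommendation; one plausible shape is a ranking over engagement signals. The following sketch is purely illustrative, and the signals, weights, and names are all invented for the example:

```python
def recommend_attendees(attendee_data, top_k=1):
    """Rank attendees for the presenter to interact with, using simple
    engagement signals.

    `attendee_data` maps name -> {"questions": int, "attentive_min": float}.
    Returns the top_k names by a weighted engagement score.
    """
    def score(stats):
        # weight direct participation more heavily than passive attention
        return 2.0 * stats["questions"] + 0.1 * stats["attentive_min"]
    ranked = sorted(attendee_data, key=lambda n: score(attendee_data[n]),
                    reverse=True)
    return ranked[:top_k]
```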
-
Publication number: 20230336608
Abstract: A method and processing unit for creating and rendering synchronized content for a content rendering environment are described in the present disclosure. Initially, live content rendered to users in the content rendering environment is received. Further, inputs from a first user, and optionally from one or more second users, are collected during rendering of the live content. The inputs comprise user inputs provided in at least one of a virtual environment and user actions in a real-world environment. Inputs are collected along with at least one of a corresponding time stamp and a corresponding spatial stamp in the content rendering environment. Upon collecting the inputs, the live content is synchronized with the inputs based on at least one of the time and spatial stamps, and context mapping of the inputs with segments of the live content, to output synchronized content. By rendering such synchronized content to the user, a replication of the live rendering of the content may be experienced by the user.
Type: Application
Filed: April 19, 2022
Publication date: October 19, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20230298276
Abstract: A method and processing unit for providing content in a bandwidth-constrained environment are disclosed. Initially, content is received along with audio inputs that were provided by one or more users in the bandwidth-constrained environment during rendering of the content. Further, at least one object of interest within the content and associated with the audio inputs is identified. One or more regions of interest, including the at least one object of interest, are determined for the bandwidth-constrained environment. Upon determining the one or more regions of interest, the bitrate for rendering the content is modified based on the determined one or more regions of interest to obtain modified content for the bandwidth-constrained environment. The modified content is provided to be rendered in the bandwidth-constrained environment.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20230298221
Abstract: A method and processing unit for controlling access to a virtual environment and a real-world environment for an extended reality device are described. The method includes receiving parameters comprising at least one of content data, historic user behavior data, user movement data, and user command data, in real time, during display of a virtual environment to a user wearing an extended reality device. Further, the intent of one or more users associated with the virtual environment to access the real-world environment is identified based on the parameters. Upon identifying the intent, display of the virtual environment and a selected view of the real-world environment are enabled simultaneously on the display screen of the extended reality device, based on the intent, to control access to the virtual environment and the selected view of the real-world environment. By controlling access in this manner, the user is provided with a display of the real-world environment without interfering with the virtual environment.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
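The abstract above identifies user intent from real-time parameters and then blends a real-world view into the headset display. How the intent is inferred is not specified; the toy decision rule below only illustrates the shape of such a check, and every signal name and threshold is an assumption:

```python
def should_show_passthrough(signals):
    """Decide whether to blend a selected real-world view into the
    extended-reality display, based on real-time signals.

    Any strong cue that the user wants the real world -- an explicit
    command, fast hand movement suggesting a reach for a physical
    object, or a proximity warning -- enables the passthrough view.
    """
    if signals.get("voice_command") == "show room":
        return True
    if signals.get("hand_speed_mps", 0.0) > 1.5:
        return True
    if signals.get("proximity_alert", False):
        return True
    return False
```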
-
Patent number: 11216166
Abstract: A media system stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal range, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial location and temporal range of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Grant
Filed: October 6, 2020
Date of Patent: January 4, 2022
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
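The affordance mechanics in this family of patents are concrete: activation requires hitting both the temporal range and the spatial location, and each activation deposits a token that can later be redeemed in bulk. A minimal sketch of those mechanics, with the data layout and class name invented for illustration:

```python
class RewardAccount:
    """Toy token account: affordance activations deposit tokens, and a
    batch of tokens can later be redeemed for a single aggregated reward."""

    def __init__(self):
        self.tokens = []

    def activate(self, affordance, t, x, y):
        """Deposit a reward token only if the viewer's action at time t
        and window position (x, y) falls inside the affordance's
        temporal range and spatial region."""
        t0, t1 = affordance["time_range"]
        x0, y0, x1, y1 = affordance["region"]
        if t0 <= t <= t1 and x0 <= x <= x1 and y0 <= y <= y1:
            self.tokens.append(affordance["reward"])
            return True
        return False

    def redeem(self):
        """Redeem all accumulated tokens for one aggregated reward;
        here the aggregate is simply the token count."""
        count, self.tokens = len(self.tokens), []
        return count
```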
-
Publication number: 20210019040
Abstract: A media system stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal range, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial location and temporal range of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Application
Filed: October 6, 2020
Publication date: January 21, 2021
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Patent number: 10795557
Abstract: A social media platform stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal location, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial and temporal location of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Grant
Filed: March 5, 2019
Date of Patent: October 6, 2020
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Patent number: 10770113
Abstract: A computing device has memory, a microphone, and an image sensor. A process plays an immersive video in a user interface region, which displays a portion of the immersive video according to a user selected visibility window. While playing the immersive video, the user adjusts the window, and the process records information that identifies placement of the window within the immersive video. The process records audio provided by the user and records video of the user. The process uses the information that identifies placement of the window to form a customized video including what was displayed in the window while playing the immersive video. The customized video also includes a visual overlay in a peripheral portion of the customized video, which includes the recorded video of the user. The customized video also includes an audio overlay using the recorded audio. The process transmits the customized video to another computer.
Type: Grant
Filed: July 22, 2016
Date of Patent: September 8, 2020
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II
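The mechanism here hinges on recording where the viewer's visibility window was at each moment, so the same view can be re-rendered later with the picture-in-picture and audio overlays. A minimal sketch of that window-track recording and lookup, with the sample format and function names invented for the example:

```python
def record_window_track(samples):
    """Record where the viewer's visibility window was during playback.

    `samples` is an iterable of (time, yaw, pitch) tuples captured while
    the user adjusts the window; returns the track used later to
    re-render exactly what was displayed in the window.
    """
    return [{"t": t, "yaw": yaw, "pitch": pitch} for t, yaw, pitch in samples]

def placement_at(track, t):
    """Look up the last recorded window placement at or before time t,
    i.e. the view to render for that frame of the customized video."""
    best = None
    for sample in track:
        if sample["t"] <= t:
            best = sample
    return best
```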
-
Publication number: 20190196696
Abstract: A social media platform stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal location, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial and temporal location of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Application
Filed: March 5, 2019
Publication date: June 27, 2019
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Patent number: 10222958
Abstract: A social media platform for immersive media stores immersive videos (e.g., 360 video) and embedded affordances for the immersive videos. The platform includes a publisher user interface that enables publisher users to upload immersive videos to the database and embed affordances into the immersive videos at locations that are defined both spatially and temporally. Each affordance is discoverable by viewer users, has an interactive user interface, and has one or more corresponding rewards. The platform includes a viewer user interface that enables viewer users to select and play immersive videos and navigate to different portions of a playing immersive video using a visibility window, which displays a respective selected portion of the playing immersive video based on placement of the visibility window. In response to a first user action to activate a first affordance of the playing immersive video, the platform initiates a reward corresponding to the first affordance.
Type: Grant
Filed: November 18, 2016
Date of Patent: March 5, 2019
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Patent number: 10020025
Abstract: A computing system has memory, a microphone, and an image sensor. A process displays an immersive media customization user interface, and plays a 360 video. In response to a first user action, the process captures a freeze frame of the 360 video at a specific moment. In response to a second user action, the process starts recording a customized video of the captured freeze frame in real time according to a dynamically adjustable visibility window of the captured freeze frame. While recording the customized video, a user adjusts the visibility window. After recording, a user annotates the customized video, including one or more of: adding a visual overlay in a portion of the customized video, where the visual overlay is a recorded video of the user; adding an audio overlay (e.g., user comments); and adding text or an image. The process transmits the customized video to another computing system.
Type: Grant
Filed: October 3, 2016
Date of Patent: July 10, 2018
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II
-
Publication number: 20180025752
Abstract: A computing system has memory, a microphone, and an image sensor. A process displays an immersive media customization user interface, and plays a 360 video. In response to a first user action, the process captures a freeze frame of the 360 video at a specific moment. In response to a second user action, the process starts recording a customized video of the captured freeze frame in real time according to a dynamically adjustable visibility window of the captured freeze frame. While recording the customized video, a user adjusts the visibility window. After recording, a user annotates the customized video, including one or more of: adding a visual overlay in a portion of the customized video, where the visual overlay is a recorded video of the user; adding an audio overlay (e.g., user comments); and adding text or an image. The process transmits the customized video to another computing system.
Type: Application
Filed: October 3, 2016
Publication date: January 25, 2018
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II
-
Publication number: 20180024724
Abstract: A social media platform for immersive media stores immersive videos (e.g., 360 video) and embedded affordances for the immersive videos. The platform includes a publisher user interface that enables publisher users to upload immersive videos to the database and embed affordances into the immersive videos at locations that are defined both spatially and temporally. Each affordance is discoverable by viewer users, has an interactive user interface, and has one or more corresponding rewards. The platform includes a viewer user interface that enables viewer users to select and play immersive videos and navigate to different portions of a playing immersive video using a visibility window, which displays a respective selected portion of the playing immersive video based on placement of the visibility window. In response to a first user action to activate a first affordance of the playing immersive video, the platform initiates a reward corresponding to the first affordance.
Type: Application
Filed: November 18, 2016
Publication date: January 25, 2018
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Publication number: 20180025751
Abstract: A computing device has memory, a microphone, and an image sensor. A process plays an immersive video in a user interface region, which displays a portion of the immersive video according to a user selected visibility window. While playing the immersive video, the user adjusts the window, and the process records information that identifies placement of the window within the immersive video. The process records audio provided by the user and records video of the user. The process uses the information that identifies placement of the window to form a customized video including what was displayed in the window while playing the immersive video. The customized video also includes a visual overlay in a peripheral portion of the customized video, which includes the recorded video of the user. The customized video also includes an audio overlay using the recorded audio. The process transmits the customized video to another computer.
Type: Application
Filed: July 22, 2016
Publication date: January 25, 2018
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II