Patents by Inventor Mahendra Patel
Mahendra Patel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250131661
Abstract: Method, processing unit, and non-transitory computer-readable medium for rendering modified scenes to user in immersive environment are disclosed. Initially, contextual data associated with user in immersive environment is retrieved, during rendering of real-time scene. The real-time scene is analyzed to identify modifiable contents within real-time scene. The modifiable contents are modified based on contextual data, to obtain modified contents. The real-time scene is augmented with modified contents to output modified scene. The modified scene is dynamically rendered to user, by replacing real-time scene with modified scene.
Type: Application
Filed: October 19, 2023
Publication date: April 24, 2025
Inventor: Dipak Mahendra Patel
-
Patent number: 12108013
Abstract: A method, a processing unit, and a non-transitory computer-readable medium for controlling viewpoint of attendees in immersive environment are disclosed. For controlling viewpoint, initially, presenter input is received. Presenter input indicates objects selected amongst plurality of objects displayed in 360° view of a content in immersive environment. Further, the objects to be displayed are dynamically fixed within viewpoint across the 360° view of the attendees. The objects are dynamically fixed within viewpoint irrespective of inputs received from the attendees to change objects within viewpoint. New objects, amongst the plurality of objects, are detected based on at least one of real-time preferences of the attendees and real-time context of the content provided by the presenter. Upon detecting the new objects, the objects are dynamically re-fixed within the viewpoint of the attendees.
Type: Grant
Filed: July 27, 2022
Date of Patent: October 1, 2024
Assignee: Zeality Inc
Inventors: Dipak Mahendra Patel, Anmol Agarwal
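The viewpoint-fixing idea in the abstract above can be illustrated with a minimal sketch: keep presenter-selected objects inside an attendee's field of view while honoring the attendee's pan input as much as possible. All names and the simple 1-D angle model (no 360° wraparound) are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: pin presenter-selected objects inside an attendee's
# viewport, overriding pan inputs only as much as necessary.
def clamp_viewpoint(requested_center, pinned_angles, fov=90.0):
    """Return a view center (degrees) that keeps every pinned object within
    the field of view, staying as close as possible to the requested center."""
    half = fov / 2.0
    lo = max(pinned_angles) - half   # leftmost center that still shows all objects
    hi = min(pinned_angles) + half   # rightmost center that still shows all objects
    if lo > hi:
        # Pinned objects span more than one FOV; center on their midpoint.
        return (min(pinned_angles) + max(pinned_angles)) / 2.0
    return min(max(requested_center, lo), hi)
```

An attendee looking at 0° while objects sit at 100° and 120° would be pulled to 75°, the nearest center that keeps both objects visible.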
-
Patent number: 12106425
Abstract: The present invention discloses a method to monitor viewing parameters of users in an immersive environment. Real-time inputs are received from sensors associated with participants in the immersive environment. Inputs represent location data, gaze data, field of view (FOV) data, and movement data of the participants. Selection data indicating selected participants and selected viewing parameters is received from a user. The inputs are processed to output the viewing parameters of the selected participants, which include viewing angle, viewing range, viewing angle deviation, current FOV, relative location, and interaction status of the selected participants. A first set of pictorial representations is generated to represent the selected participants and selected viewing parameters. The first set of pictorial representations is displayed in a predefined region within the FOV of the user in the immersive environment, to enable monitoring.
Type: Grant
Filed: December 11, 2023
Date of Patent: October 1, 2024
Assignee: Zeality Inc
Inventor: Dipak Mahendra Patel
-
Patent number: 12056411
Abstract: A method, a processing unit, and a non-transitory computer-readable medium for providing recommendations in a content rendering environment with a presenter and attendees are disclosed. For providing the recommendations, initially, user data is received for the content rendering environment. The user data relates to the attendees. Further, the recommendations to the presenter are generated based on the user data. The recommendations are used to initiate interaction in the content rendering environment between the presenter and at least one selected attendee amongst the one or more attendees. The recommendations are provided to the presenter during rendering of the content to the one or more attendees. By the proposed system and method, the interaction between the presenter and the attendees may be enhanced and customized to be as per their preferences.
Type: Grant
Filed: April 19, 2022
Date of Patent: August 6, 2024
Assignee: Zeality Inc
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20240073273
Abstract: A method and processing unit for creating and rendering synchronized content for a content rendering environment are described in the present disclosure. Initially, live content rendered to users in the content rendering environment is received. Further, inputs from a first user, and optionally from second users, are collected during rendering of the live content. The inputs comprise user inputs provided in at least one of a virtual environment and user actions in a real-world environment. Inputs are collected along with at least one of a corresponding time stamp and a corresponding spatial stamp in the content rendering environment. Upon collecting the inputs, the live content is synchronized with the inputs based on at least one of the time and spatial stamps, and context mapping of the inputs with segments of the live content, to output a synchronized content. By rendering such synchronized content to the user, replication of live rendering of the content may be experienced by the user.
Type: Application
Filed: September 7, 2023
Publication date: February 29, 2024
Applicant: ZEALITY INC.
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Patent number: 11895175
Abstract: A method and processing unit for creating and rendering synchronized content for a content rendering environment are described in the present disclosure. Initially, live content rendered to users in the content rendering environment is received. Further, inputs from a first user, and optionally from one or more second users, are collected during rendering of the live content. The inputs comprise user inputs provided in at least one of a virtual environment and user actions in a real-world environment. Inputs are collected along with at least one of a corresponding time stamp and a corresponding spatial stamp in the content rendering environment. Upon collecting the inputs, the live content is synchronized with the inputs based on at least one of the time and spatial stamps, and context mapping of the inputs with segments of the live content, to output a synchronized content. By rendering such synchronized content to the user, replication of live rendering of the content may be experienced by the user.
Type: Grant
Filed: April 19, 2022
Date of Patent: February 6, 2024
Assignee: ZEALITY INC
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
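The time-stamp-based synchronization described in the abstract above can be sketched as a simple alignment step: each collected input is mapped to the live-content segment whose time range contains its timestamp. The segment model and field names are illustrative assumptions, not the patented method.

```python
import bisect

# Hypothetical sketch: align time-stamped user inputs with content segments.
def synchronize(segment_starts, inputs):
    """segment_starts: sorted list of segment start times (seconds).
    inputs: list of (timestamp, payload) tuples.
    Returns {segment_index: [payloads]} mapping inputs onto segments."""
    synced = {}
    for ts, payload in inputs:
        # Index of the last segment starting at or before this timestamp.
        idx = bisect.bisect_right(segment_starts, ts) - 1
        if idx >= 0:
            synced.setdefault(idx, []).append(payload)
    return synced
```

Replaying each segment together with its bucketed inputs is what would let a later viewer experience a replication of the live rendering.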
-
Patent number: 11893699
Abstract: A method and processing unit for providing content in a bandwidth constrained environment is disclosed. Initially, a content along with audio inputs, which is received during rendering of the content and provided to one or more users in a bandwidth constrained environment, is received. Further, at least one object of interest within the content and associated with the audio inputs is identified. One or more regions of interest, including the at least one object of interest, is determined in the bandwidth constrained environment. Upon determining the one or more regions of interest, bitrate for rendering the content is modified based on the determined one or more regions of interest, to obtain a modified content for the bandwidth constrained environment. The modified content is provided to be rendered in the bandwidth constrained environment.
Type: Grant
Filed: March 15, 2022
Date of Patent: February 6, 2024
Assignee: Zeality Inc
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
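The region-of-interest bitrate modification described above can be illustrated with a minimal budget-split sketch: tiles overlapping a region of interest get most of a constrained bit budget, background tiles share the remainder. The tile model and the 80/20 split are illustrative assumptions, not the patented scheme.

```python
# Hypothetical sketch: spend most of a constrained bit budget on tiles that
# overlap the regions of interest, and the rest on background tiles.
def allocate_bitrate(tiles, roi_tiles, total_kbps, roi_share=0.8):
    """tiles: list of tile ids; roi_tiles: subset overlapping a region of
    interest. Returns {tile_id: kbps} summing to total_kbps."""
    roi = [t for t in tiles if t in roi_tiles]
    bg = [t for t in tiles if t not in roi_tiles]
    alloc = {}
    if roi:
        for t in roi:
            alloc[t] = roi_share * total_kbps / len(roi)
    if bg:
        bg_budget = total_kbps - sum(alloc.values())
        for t in bg:
            alloc[t] = bg_budget / len(bg)
    return alloc
```

With four tiles, one of interest, and a 1000 kbps budget, the interesting tile receives 800 kbps while the other three split the remaining 200 kbps.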
-
Publication number: 20240040097
Abstract: A method, a processing unit, and a non-transitory computer-readable medium for controlling viewpoint of attendees in immersive environment are disclosed. For controlling viewpoint, initially, presenter input is received. Presenter input indicates objects selected amongst plurality of objects displayed in 360° view of a content in immersive environment. Further, the objects to be displayed are dynamically fixed within viewpoint across the 360° view of the attendees. The objects are dynamically fixed within viewpoint irrespective of inputs received from the attendees to change objects within viewpoint. New objects, amongst the plurality of objects, are detected based on at least one of real-time preferences of the attendees and real-time context of the content provided by the presenter. Upon detecting the new objects, the objects are dynamically re-fixed within the viewpoint of the attendees.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Inventors: Dipak Mahendra Patel, Anmol Agarwal
-
Publication number: 20230333795
Abstract: A method, a processing unit, and a non-transitory computer-readable medium for providing recommendations in a content rendering environment with a presenter and attendees are disclosed. For providing the recommendations, initially, user data is received for the content rendering environment. The user data relates to the attendees. Further, the recommendations to the presenter are generated based on the user data. The recommendations are used to initiate interaction in the content rendering environment between the presenter and at least one selected attendee amongst the one or more attendees. The recommendations are provided to the presenter during rendering of the content to the one or more attendees. By the proposed system and method, the interaction between the presenter and the attendees may be enhanced and customized to be as per their preferences.
Type: Application
Filed: April 19, 2022
Publication date: October 19, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20230336608
Abstract: A method and processing unit for creating and rendering synchronized content for a content rendering environment are described in the present disclosure. Initially, live content rendered to users in the content rendering environment is received. Further, inputs from a first user, and optionally from one or more second users, are collected during rendering of the live content. The inputs comprise user inputs provided in at least one of a virtual environment and user actions in a real-world environment. Inputs are collected along with at least one of a corresponding time stamp and a corresponding spatial stamp in the content rendering environment. Upon collecting the inputs, the live content is synchronized with the inputs based on at least one of the time and spatial stamps, and context mapping of the inputs with segments of the live content, to output a synchronized content. By rendering such synchronized content to the user, replication of live rendering of the content may be experienced by the user.
Type: Application
Filed: April 19, 2022
Publication date: October 19, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20230298221
Abstract: A method and processing unit for controlling access to a virtual environment and a real-world environment for an extended reality device are described. The method includes receiving parameters comprising at least one of content data, historic user behavior data, user movement data, and user commands data, in real-time, during display of the virtual environment to a user wearing the extended reality device. Further, the intent of one or more users associated with the virtual environment to access the real-world environment is identified, based on the parameters. Upon identifying the intent, display of the virtual environment and a selected view of the real-world environment is enabled simultaneously on the display screen of the extended reality device, based on the intent, to control access to the virtual environment and the selected view of the real-world environment. By controlling the access in such a manner, the user is provided with a display of the real-world environment without interfering with the virtual environment.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20230298276
Abstract: A method and processing unit for providing content in a bandwidth constrained environment is disclosed. Initially, a content along with audio inputs, which is received during rendering of the content and provided to one or more users in a bandwidth constrained environment, is received. Further, at least one object of interest within the content and associated with the audio inputs is identified. One or more regions of interest, including the at least one object of interest, is determined in the bandwidth constrained environment. Upon determining the one or more regions of interest, bitrate for rendering the content is modified based on the determined one or more regions of interest, to obtain a modified content for the bandwidth constrained environment. The modified content is provided to be rendered in the bandwidth constrained environment.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Dipak Mahendra Patel, Avram Maxwell Horowitz, Karla Celina Varela-Huezo
-
Publication number: 20220140648
Abstract: A method includes injecting a probe signal into a power system; receiving a measurement of an operational parameter of the power system responsive to injecting the probe signal into the power system; generating a transfer function model of the power system based on the measurement of the operational parameter of the power system and the probe signal; and updating at least one control parameter of a Wide Area Damping Controller (WADC) communicatively coupled to the power system based on the transfer function model.
Type: Application
Filed: October 30, 2020
Publication date: May 5, 2022
Inventors: Ibrahim Abdullah S. ALTARJAMI, Evangelos FARANTATOS, Hossein HOOSHYAR, Yilu LIU, Mahendra PATEL, Huangqing XIAO, Chengwen ZHANG, Yi ZHAO, Lin ZHU
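The identification step in the abstract above (probe signal in, measured response out, transfer function model fitted) can be sketched with a toy first-order ARX fit by least squares. The first-order model, function names, and solver choice are illustrative assumptions; the application describes the actual WADC retuning method.

```python
import numpy as np

# Hypothetical sketch: fit a discrete first-order model
#   y[k] = a*y[k-1] + b*u[k-1]
# from a probe input u and a measured response y, by least squares.
def fit_arx1(u, y):
    """Return (a, b) minimizing ||y[1:] - (a*y[:-1] + b*u[:-1])||."""
    phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return float(theta[0]), float(theta[1])
```

The identified (a, b) stand in for the transfer function model from which damping-controller parameters would then be retuned.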
-
Patent number: 11216166
Abstract: A media system stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal range, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial location and temporal range of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Grant
Filed: October 6, 2020
Date of Patent: January 4, 2022
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Publication number: 20210019040
Abstract: A media system stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal range, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial location and temporal range of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Application
Filed: October 6, 2020
Publication date: January 21, 2021
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Patent number: 10795557
Abstract: A social media platform stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal location, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial and temporal location of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Grant
Filed: March 5, 2019
Date of Patent: October 6, 2020
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Patent number: 10770113
Abstract: A computing device has memory, a microphone, and an image sensor. A process plays an immersive video in a user interface region, which displays a portion of the immersive video according to a user selected visibility window. While playing the immersive video, the user adjusts the window, and the process records information that identifies placement of the window within the immersive video. The process records audio provided by the user and records video of the user. The process uses the information that identifies placement of the window to form a customized video including what was displayed in the window while playing the immersive video. The customized video also includes a visual overlay in a peripheral portion of the customized video, which includes the recorded video of the user. The customized video also includes an audio overlay using the recorded audio. The process transmits the customized video to another computer.
Type: Grant
Filed: July 22, 2016
Date of Patent: September 8, 2020
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II
-
Publication number: 20190196696
Abstract: A social media platform stores immersive videos and embedded affordances for each immersive video. Each embedded affordance has a temporal location, a spatial location, and a corresponding reward. A viewer selects and plays a first immersive video. While playing the first stored immersive video, the viewer navigates to different spatial portions of the first immersive video using a spatial visibility window. The viewer activates a first embedded affordance of the first immersive video according to the spatial and temporal location of the first embedded affordance. In response, the platform initiates the reward corresponding to the first embedded affordance. The reward includes a reward token deposited into an account corresponding to the viewer. The viewer later uses a redemption user interface to redeem a plurality of reward tokens from the account (including the deposited reward token) for a single aggregated reward.
Type: Application
Filed: March 5, 2019
Publication date: June 27, 2019
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske
-
Publication number: 20190142755
Abstract: A stable tablet-in-tablet pharmaceutical composition having fixed-dose combinations of cyclophosphamide and capecitabine and one or more pharmaceutically acceptable excipients is disclosed, along with a process for its preparation and a method of its use in treating cancer.
Type: Application
Filed: May 2, 2017
Publication date: May 16, 2019
Applicant: INTAS PHARMACEUTICALS LTD.
Inventors: Priyank PATEL, Mayur PATEL, Mahendra PATEL, Balvir SINGH, Ashish SEHGAL
-
Patent number: 10222958
Abstract: A social media platform for immersive media stores immersive videos (e.g., 360 video) and embedded affordances for the immersive videos. The platform includes a publisher user interface that enables publisher users to upload immersive videos to the database and embed affordances into the immersive videos at locations that are defined both spatially and temporally. Each affordance is discoverable by viewer users, has an interactive user interface, and has one or more corresponding rewards. The platform includes a viewer user interface that enables viewer users to select and play immersive videos and navigate to different portions of a playing immersive video using a visibility window, which displays a respective selected portion of the playing immersive video based on placement of the visibility window. In response to a first user action to activate a first affordance of the playing immersive video, the platform initiates a reward corresponding to the first affordance.
Type: Grant
Filed: November 18, 2016
Date of Patent: March 5, 2019
Assignee: Zeality Inc.
Inventors: Dipak Mahendra Patel, Arlene Joy Ganancial Santos, Scott Riley Collins, Bryan Daniel Bor, Adam Mark Dubov, Timothy George Harrington, II, Jason Sperske