Patents by Inventor Amir Pascal Ebrahimi
Amir Pascal Ebrahimi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11068067
Abstract: A method for improving a display of a user interface element in a mixed reality environment is disclosed. A request to display the user interface element is received. The request includes display instructions, angle threshold data, distance threshold data, and velocity threshold data. Display operations are continuously performed while sensor data is continuously received from a mixed reality user interface device. The display operations include displaying the user interface element according to the display instructions, and hiding the user interface element when the sensor data indicates either that the distance between the user interface element and the mixed reality user interface device in the mixed reality environment has exceeded a distance threshold, or that the angle of view of the mixed reality user interface device with respect to the user interface element has exceeded an angle threshold.
Type: Grant
Filed: December 10, 2019
Date of Patent: July 20, 2021
Assignee: Unity IPR ApS
Inventors: Timoni West, Dylan Charles Urquidi-Maynard, Amir Pascal Ebrahimi, Matthew Taylor Schoen
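The visibility logic summarized in this abstract can be illustrated with a short sketch. The example below is purely illustrative and not the patented implementation; the class names, fields, per-frame update call, and the assumption of a unit-length gaze vector are all made up for the sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class UIElement:
    position: tuple            # world-space position of the element (x, y, z)
    distance_threshold: float  # hide when the device is farther than this
    angle_threshold: float     # hide when the element leaves this view cone (degrees)
    visible: bool = True

@dataclass
class SensorData:
    device_position: tuple     # world-space position of the mixed reality device
    gaze_direction: tuple      # unit vector the device is facing

def _angle_between(gaze, to_element):
    """Angle in degrees between the gaze direction and the direction to the element."""
    dot = sum(g * t for g, t in zip(gaze, to_element))
    norm = math.sqrt(sum(t * t for t in to_element)) or 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def update_visibility(element: UIElement, sensors: SensorData) -> None:
    """Hide the element when the distance or view-angle threshold is exceeded."""
    to_element = tuple(e - d for e, d in zip(element.position, sensors.device_position))
    distance = math.sqrt(sum(c * c for c in to_element))
    angle = _angle_between(sensors.gaze_direction, to_element)
    element.visible = (distance <= element.distance_threshold
                       and angle <= element.angle_threshold)
```

A real system would call something like update_visibility once per frame as new sensor data arrives from the device.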
-
Publication number: 20200183498
Abstract: A method for improving a display of a user interface element in a mixed reality environment is disclosed. A request to display the user interface element is received. The request includes display instructions, angle threshold data, distance threshold data, and velocity threshold data. Display operations are continuously performed while sensor data is continuously received from a mixed reality user interface device. The display operations include displaying the user interface element according to the display instructions, and hiding the user interface element when the sensor data indicates either that the distance between the user interface element and the mixed reality user interface device in the mixed reality environment has exceeded a distance threshold, or that the angle of view of the mixed reality user interface device with respect to the user interface element has exceeded an angle threshold.
Type: Application
Filed: December 10, 2019
Publication date: June 11, 2020
Inventors: Timoni West, Dylan Charles Urquidi-Maynard, Amir Pascal Ebrahimi, Matthew Taylor Schoen
-
Patent number: 10678340
Abstract: A system includes one or more hardware processors, a head mounted display (HMD) configured to display a virtual environment to a user wearing the HMD, an input device configured to allow the user to interact with virtual objects presented in the virtual environment, and a virtual mini-board module executable by the one or more hardware processors. The virtual mini-board module is configured to perform operations including providing a virtual mini-board to the user within the virtual environment, the virtual mini-board including a representation of a region of the virtual environment, detecting a scroll operation performed by the user, modifying the region of the virtual environment based on the scroll operation, and updating one or more of (1) the virtual environment and (2) the representation of the region of the virtual environment on the virtual mini-board, based on the modifying.
Type: Grant
Filed: July 31, 2017
Date of Patent: June 9, 2020
Assignee: Unity IPR ApS
Inventors: Timoni West, Amir Pascal Ebrahimi
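As a rough illustration of the scroll behavior described above, the following minimal sketch pans the region represented by a mini-board and refreshes its display. The Region and MiniBoard names and the print-based refresh are assumptions made for the example, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned region of the virtual environment shown on the mini-board."""
    x: float
    z: float
    width: float
    depth: float

class MiniBoard:
    """Minimal sketch of a mini-board that pans its represented region on scroll."""

    def __init__(self, region: Region):
        self.region = region

    def on_scroll(self, dx: float, dz: float) -> None:
        # Modify the represented region based on the scroll operation,
        # then refresh the mini-board (and, if needed, the environment view).
        self.region.x += dx
        self.region.z += dz
        self.refresh()

    def refresh(self) -> None:
        # Placeholder for re-rendering the mini-board contents.
        print(f"mini-board now shows region at ({self.region.x}, {self.region.z})")
```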
-
Publication number: 20200122038
Abstract: A method for generating behavior with a trait-based planning domain language is disclosed. A world model of a dynamic environment is created. The world model includes data defining a state for the world model. The data defining the state includes data describing objects within the environment. Input to update the state for the world model is received. The input includes data to change the state and data defining a goal for a future state. A machine-learning model is used to generate a planning state from the state for the world model. The planning state includes a plurality of planning domain objects and associated traits. Based on instructions associated with an action, one or more of the following are performed: modifying values within a trait associated with a planning domain object, adding a trait to the planning domain object, or removing a trait from the planning domain object.
Type: Application
Filed: October 18, 2019
Publication date: April 23, 2020
Inventors: Amir Pascal Ebrahimi, Nicolas Francois Xavier Meuleau, Trevor Joseph Santarra
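The trait operations described in this abstract (modify, add, remove) map naturally onto a small data structure. The sketch below is an illustrative guess at such a structure; the PlanningObject class, the instruction tuples, and the door example are invented for illustration and are not drawn from the application.

```python
class PlanningObject:
    """A planning-domain object whose state is the set of traits it carries."""

    def __init__(self, name: str):
        self.name = name
        self.traits: dict[str, dict] = {}   # trait name -> trait field values

    def add_trait(self, trait: str, **fields) -> None:
        self.traits[trait] = dict(fields)

    def remove_trait(self, trait: str) -> None:
        self.traits.pop(trait, None)

    def set_trait_value(self, trait: str, field: str, value) -> None:
        self.traits.setdefault(trait, {})[field] = value

def apply_action(obj: PlanningObject, instructions: list[tuple]) -> None:
    """Apply an action's instructions: modify, add, or remove traits on the object."""
    for op, trait, *rest in instructions:
        if op == "set":
            field, value = rest
            obj.set_trait_value(trait, field, value)
        elif op == "add":
            obj.add_trait(trait)
        elif op == "remove":
            obj.remove_trait(trait)

# Example: a door object gains an "Openable" trait and its "Locked" flag is cleared.
door = PlanningObject("door")
door.add_trait("Locked", value=True)
apply_action(door, [("set", "Locked", "value", False), ("add", "Openable")])
```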
-
Publication number: 20200122039
Abstract: A method of behavior generation is disclosed. Planning state data in a planning domain language format is received, and a state description and an associated action description are generated based on the planning state data. The state description and the associated action description are parsed into a series of tokens for a machine learning (ML) encoded state and an associated ML encoded action. The series of tokens describe the state and the action. The ML encoded state and ML encoded action are processed with a recurrent neural network (RNN) to generate an estimate of a value of the state description and the action description. Output of the RNN is taken as input into a neural network to generate a value estimate for a state-action pair. A plan that includes a plurality of sequential actions for an agent is generated. The plurality of sequential actions is chosen based on at least the value estimate.
Type: Application
Filed: October 22, 2019
Publication date: April 23, 2020
Inventors: Nicolas Francois Xavier Meuleau, Vincent-Pierre Serge Mary Berges, Amir Pascal Ebrahimi, Arthur William Juliani, Jr., Trevor Joseph Santarra
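The pipeline this abstract describes (tokenize a state and action description, encode with an RNN, score the state-action pair, choose actions by value) can be outlined as follows. This is a toy, dependency-free sketch using fixed random weights rather than the trained networks the abstract implies; every name, the hash-based embedding, and the greedy planner are assumptions made for illustration.

```python
import math
import random

random.seed(0)
HIDDEN = 16

# Fixed random parameters for the sketch; a real system would learn these.
W_in = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
W_rec = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
w_out = [random.gauss(0, 0.1) for _ in range(HIDDEN)]

def tokenize(description: str) -> list[str]:
    """Parse a state or action description into a series of tokens."""
    return description.replace("(", " ").replace(")", " ").split()

def embed(token: str) -> list[float]:
    """Deterministic hash-seeded embedding (stand-in for a learned embedding table)."""
    rng = random.Random(token)
    return [rng.gauss(0, 1) for _ in range(HIDDEN)]

def rnn_encode(tokens: list[str]) -> list[float]:
    """Run a simple tanh RNN over the token embeddings; return the final hidden state."""
    h = [0.0] * HIDDEN
    for tok in tokens:
        x = embed(tok)
        h = [math.tanh(sum(x[j] * W_in[j][i] for j in range(HIDDEN)) +
                       sum(h[j] * W_rec[j][i] for j in range(HIDDEN)))
             for i in range(HIDDEN)]
    return h

def value_estimate(state_desc: str, action_desc: str) -> float:
    """Score a state-action pair from the RNN's final hidden state."""
    h = rnn_encode(tokenize(state_desc) + tokenize(action_desc))
    return sum(h[i] * w_out[i] for i in range(HIDDEN))

def plan(state_desc: str, candidate_actions: list[str], steps: int = 3) -> list[str]:
    """Greedy plan construction: pick the highest-value candidate action at each step."""
    chosen = []
    for _ in range(steps):
        best = max(candidate_actions, key=lambda a: value_estimate(state_desc, a))
        chosen.append(best)
        state_desc = state_desc + " " + best   # naive state update for the sketch
    return chosen
```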
-
Patent number: 10521020
Abstract: A method for improving a display of a user interface element in a mixed reality environment is disclosed. A request to display the user interface element is received. The request includes display instructions, angle threshold data, distance threshold data, and velocity threshold data. Display operations are continuously performed while sensor data is continuously received from a mixed reality user interface device. The display operations include displaying the user interface element according to the display instructions, and hiding the user interface element when the sensor data indicates either that the distance between the user interface element and the mixed reality user interface device in the mixed reality environment has exceeded a distance threshold, or that the angle of view of the mixed reality user interface device with respect to the user interface element has exceeded an angle threshold.
Type: Grant
Filed: July 12, 2018
Date of Patent: December 31, 2019
Assignee: Unity IPR ApS
Inventors: Timoni West, Dylan Charles Urquidi-Maynard, Amir Pascal Ebrahimi, Matthew Taylor Schoen
-
Publication number: 20190018498
Abstract: A method for improving a display of a user interface element in a mixed reality environment is disclosed. A request to display the user interface element is received. The request includes display instructions, angle threshold data, distance threshold data, and velocity threshold data. Display operations are continuously performed while sensor data is continuously received from a mixed reality user interface device. The display operations include displaying the user interface element according to the display instructions, and hiding the user interface element when the sensor data indicates either that the distance between the user interface element and the mixed reality user interface device in the mixed reality environment has exceeded a distance threshold, or that the angle of view of the mixed reality user interface device with respect to the user interface element has exceeded an angle threshold.
Type: Application
Filed: July 12, 2018
Publication date: January 17, 2019
Inventors: Timoni West, Dylan Charles Urquidi-Maynard, Amir Pascal Ebrahimi, Matthew Taylor Schoen
-
Publication number: 20170329416
Abstract: A system includes one or more hardware processors, a head mounted display (HMD) configured to display a virtual environment to a user wearing the HMD, an input device configured to allow the user to interact with virtual objects presented in the virtual environment, and a virtual mini-board module executable by the one or more hardware processors. The virtual mini-board module is configured to perform operations including providing a virtual mini-board to the user within the virtual environment, the virtual mini-board including a representation of a region of the virtual environment, detecting a scroll operation performed by the user, modifying the region of the virtual environment based on the scroll operation, and updating one or more of (1) the virtual environment and (2) the representation of the region of the virtual environment on the virtual mini-board, based on the modifying.
Type: Application
Filed: July 31, 2017
Publication date: November 16, 2017
Inventors: Timoni West, Amir Pascal Ebrahimi
-
Patent number: 9766713
Abstract: A system includes one or more hardware processors, a head mounted display (HMD) configured to display a virtual environment to a user, an input device, and a virtual mini-board module. The mini-board module is configured to render the virtual environment for presentation to the user via the HMD, the virtual environment being rendered from a first perspective providing a field of view of the virtual environment to the user, provide a virtual mini-board to the user within the field of view, the virtual mini-board displaying a region of the virtual environment, detect an interaction event performed by the user on the virtual mini-board, identify a first object based on the interaction event performed on the virtual mini-board, and perform the interaction event on the first object within the virtual environment based on the interaction event performed on the virtual mini-board.
Type: Grant
Filed: September 8, 2016
Date of Patent: September 19, 2017
Assignee: Unity IPR ApS
Inventors: Timoni West, Amir Pascal Ebrahimi
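To illustrate how an interaction on the mini-board might be mapped back to an object in the full environment, here is a minimal sketch. The coordinate scaling, the nearest-object lookup, and all names are assumptions made for the example and are not drawn from the patent.

```python
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    x: float
    z: float

class MiniBoardMapper:
    """Maps an interaction on the mini-board surface to the corresponding world object."""

    def __init__(self, region_origin, region_size, board_size, objects):
        self.region_origin = region_origin   # (x, z) corner of the displayed region
        self.region_size = region_size       # (width, depth) of the displayed region
        self.board_size = board_size         # (width, depth) of the mini-board surface
        self.objects = objects               # world objects available for interaction

    def board_to_world(self, bx: float, bz: float) -> tuple:
        """Scale a point on the mini-board surface into world coordinates."""
        wx = self.region_origin[0] + bx / self.board_size[0] * self.region_size[0]
        wz = self.region_origin[1] + bz / self.board_size[1] * self.region_size[1]
        return wx, wz

    def handle_interaction(self, bx: float, bz: float, event: str) -> None:
        # Identify the object nearest the interaction point, then apply the
        # same interaction to that object in the full virtual environment.
        wx, wz = self.board_to_world(bx, bz)
        target = min(self.objects, key=lambda o: (o.x - wx) ** 2 + (o.z - wz) ** 2)
        print(f"performing '{event}' on {target.name} at ({target.x}, {target.z})")
```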
-
Publication number: 20170068323
Abstract: A system includes one or more hardware processors, a head mounted display (HMD) configured to display a virtual environment to a user, an input device, and a virtual mini-board module. The mini-board module is configured to render the virtual environment for presentation to the user via the HMD, the virtual environment being rendered from a first perspective providing a field of view of the virtual environment to the user, provide a virtual mini-board to the user within the field of view, the virtual mini-board displaying a region of the virtual environment, detect an interaction event performed by the user on the virtual mini-board, identify a first object based on the interaction event performed on the virtual mini-board, and perform the interaction event on the first object within the virtual environment based on the interaction event performed on the virtual mini-board.
Type: Application
Filed: September 8, 2016
Publication date: March 9, 2017
Inventors: Timoni West, Amir Pascal Ebrahimi