Patents by Inventor Robert Ellis
Robert Ellis has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210090331
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Application
Filed: September 20, 2019
Publication date: March 25, 2021
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
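The abstract's projection-based selection of distant objects can be illustrated with generic raycasting: cast a ray from the hand and pick the nearest object it intersects. This is a hypothetical sketch; the math and all names are standard ray-sphere intersection, not the method claimed in the patent.

```python
# Illustrative ray-projection selection: the nearest object whose
# bounding sphere the hand ray intersects is selected.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to a sphere, or None on a miss."""
    oc = tuple(c - o for o, c in zip(origin, center))
    t = sum(d * c for d, c in zip(direction, oc))   # closest approach
    closest = tuple(o + d * t for o, d in zip(origin, direction))
    miss2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return t if t > 0 and miss2 <= radius ** 2 else None

def select_object(hand_pos, hand_dir, objects):
    """objects: {name: (center, radius)}; the nearest hit wins."""
    d = normalize(hand_dir)
    hits = {}
    for name, (center, radius) in objects.items():
        t = ray_hits_sphere(hand_pos, d, center, radius)
        if t is not None:
            hits[name] = t
    return min(hits, key=hits.get) if hits else None
```

Picking the nearest hit is one way to disambiguate when the ray crosses several objects; the patent family describes richer disambiguation techniques.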
-
Publication number: 20210090337
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Application
Filed: September 25, 2019
Publication date: March 25, 2021
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
-
Publication number: 20210090333
Abstract: A progressive display system can compute a virtual distance between a user and virtual objects. The virtual distance can be based on: a distance between the user and an object, a viewing angle of the object, and/or a footprint of the object in a field of view. The progressive display system can determine where the virtual distance falls in a sequence of distance ranges that correspond to levels of detail. Using a mapping between content sets for the object and levels of detail that correspond to distance ranges, the progressive display system can select content sets to display in relation to the object. As the user moves, the virtual distance will move across thresholds bounding the distance ranges. This causes the progressive display system to select and display other content sets for the distance range in which the current virtual distance falls.
Type: Application
Filed: September 20, 2019
Publication date: March 25, 2021
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Varga, Jasper Stevens, Robert Ellis, Jonah Jones
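The mechanism this abstract describes, combining distance, viewing angle, and field-of-view footprint into one virtual distance and bucketing it into level-of-detail ranges, can be sketched as follows. The weighting formula, thresholds, and content-set names are all illustrative assumptions, not values from the patent.

```python
from bisect import bisect_right

# Hypothetical boundaries (in meters) between detail levels, and the
# content set shown at each level; both are illustrative only.
LOD_THRESHOLDS = [2.0, 5.0, 10.0]
CONTENT_SETS = ["full", "summary", "label", "hidden"]

def virtual_distance(distance, view_angle_deg, footprint_frac):
    """Combine physical distance, viewing angle, and the fraction of
    the field of view the object covers (weights are assumptions)."""
    angle_penalty = 1.0 + view_angle_deg / 90.0   # oblique views count as farther
    footprint = max(footprint_frac, 0.01)          # large objects count as nearer
    return distance * angle_penalty / footprint ** 0.5

def content_for(distance, view_angle_deg, footprint_frac):
    vd = virtual_distance(distance, view_angle_deg, footprint_frac)
    # bisect_right finds which distance range the virtual distance falls in
    return CONTENT_SETS[bisect_right(LOD_THRESHOLDS, vd)]
```

As the user moves, `virtual_distance` crosses a threshold and `content_for` returns a different content set, which matches the progressive-display behavior the abstract outlines.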
-
Publication number: 20210090341
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Application
Filed: September 9, 2020
Publication date: March 25, 2021
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
-
Publication number: 20210085231
Abstract: A biological fluid collection device that allows a blood sample to be collected anaerobically is disclosed.
Type: Application
Filed: December 4, 2020
Publication date: March 25, 2021
Inventors: Adam Edelhauser, Anthony V. Torris, Robert Ellis, Bradley M. Wilkinson, Joseph Nathan Pratt, Bartosz Marek Korec
-
Publication number: 20210090332
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Application
Filed: September 20, 2019
Publication date: March 25, 2021
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
-
Patent number: 10955929
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment and outputs the artificial reality content. The artificial reality system identifies, from the image data, a gesture comprising a motion of a first digit of a hand and a second digit of the hand to form a pinching configuration a particular number of times within a threshold amount of time. The artificial reality system assigns one or more input characters to one or more of a plurality of digits of the hand and processes a selection of a first input character of the one or more input characters assigned to the second digit of the hand in response to the identified gesture.
Type: Grant
Filed: June 7, 2019
Date of Patent: March 23, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
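The text-entry scheme in this abstract, where each digit carries several characters and the pinch count within a time window selects among them, resembles multi-tap input. The sketch below is a hypothetical illustration of that idea; the character assignments and the 0.75-second window are assumptions, not the patent's values.

```python
# Hypothetical mapping of input characters to each non-thumb digit.
DIGIT_CHARS = {
    "index": ["a", "b", "c"],
    "middle": ["d", "e", "f"],
    "ring": ["g", "h", "i"],
    "pinky": ["j", "k", "l"],
}
PINCH_WINDOW = 0.75  # seconds; the threshold amount of time (assumed)

def select_character(digit, pinch_times):
    """Count the pinches that fall within the window ending at the
    last pinch, and use that count to index the digit's characters."""
    if not pinch_times:
        return None
    last = pinch_times[-1]
    count = sum(1 for t in pinch_times if last - t <= PINCH_WINDOW)
    chars = DIGIT_CHARS[digit]
    return chars[(count - 1) % len(chars)]
```

For example, three quick thumb-to-middle-finger pinches would select the third character assigned to the middle finger, while pinches spaced farther apart than the window restart the count.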
-
Patent number: 10921879
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device captures image data representative of a physical environment. The HMD outputs artificial reality content, the artificial reality content including an assistant element. The gesture detector identifies, from the image data, a gesture that includes a gripping motion of two or more digits of a hand to form a gripping configuration at a location that corresponds to the assistant element, and subsequent to the gripping motion, a throwing motion of the hand with respect to the assistant element. The UI engine generates a UI element in response to identifying the gesture.
Type: Grant
Filed: June 7, 2019
Date of Patent: February 16, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Patent number: 10919984
Abstract: An injection molded article comprising a thin-walled body portion formed from a polymer-based resin derived from cellulose, wherein the thin-walled body portion comprises: i. a gate position; ii. a last fill position; iii. a flow length to wall thickness ratio greater than or equal to 100, wherein the flow length is measured from the gate position to the last fill position; and iv. a wall thickness less than or equal to about 2 mm; and wherein the polymer-based resin has an HDT of at least 95° C., a bio-derived content of at least 20 wt %, and a spiral flow length of at least 3.0 cm, when the polymer-based resin is molded with a spiral flow mold with the conditions of a barrel temperature of 238° C., a melt temperature of 246° C., a molding pressure of 13.8 MPa, a mold thickness of 0.8 mm, and a mold width of 12.7 mm.
Type: Grant
Filed: November 9, 2017
Date of Patent: February 16, 2021
Assignee: Eastman Chemical Company
Inventors: Wenlai Feng, Haining An, Michael Eugene Donelson, Thomas Joseph Pecorini, Robert Ellis McCrary, Douglas Weldon Carico, Spencer Allen Gilliam
-
Patent number: 10908986
Abstract: Read operations are performed in a memory device which efficiently provide baseline read data and recovery read data. In one aspect, on-die circuitry, which is on a die with an array of memory cells, obtains recovery read data before it is requested or needed by an off-die controller. In another aspect, data from multiple reads is obtained and made available in a set of output latches for retrieval by the off-die controller. Read data relative to multiple read thresholds is obtained and transferred from latches associated with the sense circuits to the set of output latches. The read data relative to multiple read thresholds can be stored and held concurrently in the set of output latches for retrieval by the off-die controller.
Type: Grant
Filed: April 2, 2018
Date of Patent: February 2, 2021
Assignee: SanDisk Technologies LLC
Inventors: Robert Ellis, Daniel Helmick
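The multi-threshold read the abstract describes, sensing the same cells against several read voltages and holding every result concurrently in output latches, can be modeled in a few lines. The threshold voltages, cell values, and function names below are illustrative assumptions, not the patent's circuitry.

```python
# Hypothetical sketch: sense a page of cells at several read thresholds
# and hold all results in output latches at once, so the off-die
# controller can fetch baseline and recovery data without extra reads.
THRESHOLDS = [1.0, 2.5, 4.0]  # read voltages (assumed, in volts)

def sense_page(cell_voltages, threshold):
    """A cell reads as 1 if its stored voltage is below the threshold."""
    return [1 if v < threshold else 0 for v in cell_voltages]

def read_multi_threshold(cell_voltages):
    """Fill one set of output latches per threshold; all results are
    held concurrently for later retrieval by the controller."""
    output_latches = {}
    for vt in THRESHOLDS:
        output_latches[vt] = sense_page(cell_voltages, vt)
    return output_latches

latches = read_multi_threshold([0.5, 3.0, 4.5, 2.0])
# baseline read at the nominal threshold; the neighboring thresholds
# serve as recovery data if the baseline fails error correction
baseline = latches[2.5]
```

Pre-sensing the neighboring thresholds is what lets the controller recover marginal data without issuing a second round-trip read, which is the latency saving the abstract points to.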
-
Patent number: 10896724
Abstract: A memory system comprises a plurality of memory dies and a controller (or other control circuit) connected to the memory dies. To reduce the time it takes for the memory system to program data and make that programmed data available for reading by a host (or other entity), as well as persistently store the data in a compact manner that efficiently uses space in the memory system, the data is concurrently programmed as single bit per memory cell (fast programming) and multiple bits per memory cell (compact storage). To accomplish this programming strategy, the controller concurrently transfers data to be programmed to a first memory die and a second memory die. The transferred data is programmed in the first memory die at a single bit per memory cell and in the second memory die at multiple bits per memory cell.
Type: Grant
Filed: December 18, 2018
Date of Patent: January 19, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Jacob Schmier, Todd Lindberg, Robert Ellis
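The dual-programming strategy in this abstract, sending the same data to two dies so it is programmed fast at one bit per cell on one and compactly at multiple bits per cell on the other, can be sketched as follows. The class and function names are assumptions for illustration, not the patent's design.

```python
# Illustrative model: the same data bits are transferred to two dies
# and packed at different densities.
class Die:
    def __init__(self, bits_per_cell):
        self.bits_per_cell = bits_per_cell
        self.cells = []

    def program(self, bits):
        # pack `bits_per_cell` bits into each memory cell
        n = self.bits_per_cell
        self.cells = [bits[i:i + n] for i in range(0, len(bits), n)]

def program_concurrently(data_bits, fast_die, compact_die):
    """Transfer the same data to both dies: the single-bit die becomes
    readable quickly; the multi-bit die stores it compactly."""
    fast_die.program(data_bits)     # 1 bit/cell: fast, available soon
    compact_die.program(data_bits)  # e.g. 3 bits/cell: compact storage

slc, tlc = Die(1), Die(3)
program_concurrently([1, 0, 1, 1, 0, 1], slc, tlc)
# slc uses 6 cells; tlc stores the same data in 2 cells
```

Once the compact copy is verified, the fast copy's space can be reclaimed, which is how this scheme trades short-term capacity for both low programming latency and long-term density.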
-
Patent number: 10890983
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system can include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system can cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand can be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while horizontal positioning of the UI menu is held constant. Motion of the hand orthogonal to the menu sliding gesture (e.g., non-horizontal motion) can cause the menu to be repositioned. The implementation of the artificial reality system does not require use of both hands or use of other input devices in order to interact with the artificial reality system.
Type: Grant
Filed: June 7, 2019
Date of Patent: January 12, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Patent number: 10880284
Abstract: Disclosed are various embodiments for repurposing limited-functionality networked devices as authentication factors. In one embodiment, an authentication service identifies a limited-functionality networked device associated with an account and communicatively coupled to the network. The limited-functionality networked device is configured to perform a first function upon a predefined user interaction. The service configures the limited-functionality networked device to perform a second function based at least in part on the predefined user interaction. The service determines that the predefined user interaction has been performed by a user with respect to the limited-functionality networked device. The service authenticates the user at a client device for access to the account based at least in part on the predefined user interaction having been performed.
Type: Grant
Filed: August 19, 2016
Date of Patent: December 29, 2020
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Daniel Wade Hitchcock, Bharath Kumar Bhimanaik, Robert Ellis Lee
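The flow this abstract outlines, temporarily rebinding a simple device's one interaction (say, a single button press) to an authentication step, can be sketched as below. Every class, method, and the "reorder" action are hypothetical; the patent does not specify an API.

```python
# Hypothetical flow for using a single-button networked device as an
# authentication factor.
class ButtonDevice:
    """Limited-functionality device: one button, one normal action."""
    def __init__(self):
        self.handler = self.normal_action
        self.log = []

    def normal_action(self):
        self.log.append("reorder placed")   # the device's first function

    def press(self):
        self.handler()

class AuthService:
    def begin_challenge(self, device):
        # temporarily repurpose the button press as a second function
        self.verified = False
        device.handler = self.confirm

    def confirm(self):
        self.verified = True                # the auth factor is satisfied

svc, dev = AuthService(), ButtonDevice()
svc.begin_challenge(dev)
dev.press()   # the user performs the predefined interaction
# svc.verified is now True, so the login at the client device can proceed
```

The point of the scheme is that the device proves physical presence: during the challenge the press authenticates the user instead of triggering its normal action.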
-
Publication number: 20200387286
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device is configured to capture image data representative of a physical environment. The HMD outputs artificial reality content. The gesture detector is configured to identify, from the image data, a gesture including a configuration of a wrist that is substantially stationary for at least a threshold period of time and positioned such that a normal from the wrist is facing the HMD. The UI engine is configured to generate a UI element in response to the identified gesture. The rendering engine renders the UI element overlaid on an image of the wrist.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Publication number: 20200387213
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device captures image data representative of a physical environment. The HMD outputs artificial reality content, the artificial reality content including an assistant element. The gesture detector identifies, from the image data, a gesture that includes a gripping motion of two or more digits of a hand to form a gripping configuration at a location that corresponds to the assistant element, and subsequent to the gripping motion, a throwing motion of the hand with respect to the assistant element. The UI engine generates a UI element in response to identifying the gesture.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Publication number: 20200387228
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system can include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system can cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand can be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while horizontal positioning of the UI menu is held constant. Motion of the hand orthogonal to the menu sliding gesture (e.g., non-horizontal motion) can cause the menu to be repositioned.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Publication number: 20200387214
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment, renders artificial reality content and a virtual keyboard with a plurality of virtual keys as an overlay to the artificial reality content, and outputs the artificial reality content and the virtual keyboard. The artificial reality system identifies, from the image data, a gesture comprising a first digit of a hand being brought in contact with a second digit of the hand, wherein a point of the contact corresponds to a location of a first virtual key of the plurality of virtual keys of the virtual keyboard. The artificial reality system processes a selection of the first virtual key in response to the identified gesture.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Publication number: 20200388247
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device captures image data representative of a physical environment. The HMD outputs artificial reality content. The gesture detector identifies, from the image data, a gesture including a configuration of a hand that is substantially stationary for at least a threshold period of time and positioned such that an index finger and a thumb of the hand form approximately a right angle. The UI engine generates a UI element in response to the identified gesture. The rendering engine renders the UI element as an overlay to the artificial reality content.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Publication number: 20200387287
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. In one example, an artificial reality system comprises an image capture device configured to capture image data representative of a physical environment; a head-mounted display (HMD) configured to output artificial reality content; a gesture detector configured to identify, from the image data, a gesture comprising a motion of two fingers from a hand to form a pinching configuration and a subsequent pulling motion while in the pinching configuration; a user interface (UI) engine configured to generate a UI input element in response to identifying the gesture; and a rendering engine configured to render the UI input element as an overlay to at least some of the artificial reality content.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
-
Publication number: 20200387229
Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment and outputs the artificial reality content. The artificial reality system identifies, from the image data, a gesture comprising a motion of a first digit of a hand and a second digit of the hand to form a pinching configuration a particular number of times within a threshold amount of time. The artificial reality system assigns one or more input characters to one or more of a plurality of digits of the hand and processes a selection of a first input character of the one or more input characters assigned to the second digit of the hand in response to the identified gesture.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox