Patents by Inventor Carl S. Marshall

Carl S. Marshall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190102047
    Abstract: Systems, apparatuses and methods for technology that provides smart work spaces in ubiquitous computing environments. The technology may determine a task to be performed in a smart work space and perform task modeling, wherein the task modeling includes determining one or more user interfaces involved with the task. One or more placements may be determined for the one or more user interfaces based on one or more ergonomic conditions, an incidence of an interaction, and a length of time of interaction. The technology may position the one or more user interfaces into the smart work space in accordance with the determined one or more placements.
    Type: Application
    Filed: September 30, 2017
    Publication date: April 4, 2019
    Inventors: Glen J. Anderson, Giuseppe Raffa, Sangita Sharma, Carl S. Marshall, Meng Shi, Selvakumar Panneer
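
The placement step described in the abstract above can be illustrated as a simple scoring and assignment problem. The following Python sketch is purely illustrative and not the claimed method: the Interface and Placement fields, the demand formula, and the greedy assignment are assumptions chosen to mirror the three factors the abstract names (ergonomic conditions, incidence of interaction, and length of interaction).

```python
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    interactions_per_hour: float    # incidence of interaction (assumed metric)
    seconds_per_interaction: float  # length of time of interaction (assumed metric)

@dataclass
class Placement:
    name: str
    ergonomic_score: float  # 0..1, e.g. from a reach/posture analysis (assumed)

def assign_placements(interfaces, placements):
    """Give the most heavily used interfaces the most ergonomic placements."""
    # Demand combines how often and how long a user interacts with the UI.
    demand = lambda ui: ui.interactions_per_hour * ui.seconds_per_interaction
    free = sorted(placements, key=lambda p: p.ergonomic_score, reverse=True)
    result = {}
    for ui in sorted(interfaces, key=demand, reverse=True):
        result[ui.name] = free.pop(0).name if free else None
    return result

if __name__ == "__main__":
    uis = [Interface("document editor", 30, 40), Interface("status panel", 6, 3)]
    spots = [Placement("wall, eye level", 0.9), Placement("desk edge", 0.6)]
    print(assign_placements(uis, spots))
    # {'document editor': 'wall, eye level', 'status panel': 'desk edge'}
```
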
  • Publication number: 20190051194
    Abstract: Methods and apparatus for drone collision avoidance. Processing circuitry of a drone extracts information from an encoded image captured by a detection device with a field of view overlapping the encoded image. Based on the extracted information, a determination is made whether a collision with an external source will occur on the flight trajectory of the drone. The flight trajectory of the drone is then altered to avoid a collision.
    Type: Application
    Filed: March 30, 2018
    Publication date: February 14, 2019
    Inventors: Leobardo Campos Macias, Carl S. Marshall, David Arditti Ilitzky, David Gomez Gutierrez, Jose Parra Vilchis, Julio Zamora Esquivel, Rafael De La Guardia Gonzalez, Rodrigo Aldana Lopez
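
To make the collision test in the abstract above concrete, the sketch below uses a standard closest-point-of-approach check between two constant-velocity trajectories; this is a generic technique shown for illustration, not the patented algorithm, and the encoded-image decoding step is assumed to have already produced the obstacle's position and velocity.

```python
import numpy as np

def time_of_closest_approach(p_drone, v_drone, p_obj, v_obj):
    """Time at which two constant-velocity trajectories are closest."""
    dp, dv = p_obj - p_drone, v_obj - v_drone
    denom = float(dv @ dv)
    if denom == 0.0:          # same velocity: the separation never changes
        return 0.0
    return max(0.0, -float(dp @ dv) / denom)

def collision_predicted(p_drone, v_drone, p_obj, v_obj, safe_radius=2.0):
    """True if the predicted minimum separation falls below safe_radius."""
    t = time_of_closest_approach(p_drone, v_drone, p_obj, v_obj)
    sep = (p_obj + v_obj * t) - (p_drone + v_drone * t)
    return float(np.linalg.norm(sep)) < safe_radius

def avoid(v_drone):
    """Toy avoidance manoeuvre: climb while keeping the horizontal course."""
    return v_drone + np.array([0.0, 0.0, 1.0])

if __name__ == "__main__":
    p_d, v_d = np.array([0.0, 0.0, 10.0]), np.array([5.0, 0.0, 0.0])
    p_o, v_o = np.array([50.0, 0.0, 10.0]), np.array([-5.0, 0.0, 0.0])
    if collision_predicted(p_d, v_d, p_o, v_o):
        v_d = avoid(v_d)
    print(v_d)  # -> [5. 0. 1.]
```
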
  • Publication number: 20190051224
    Abstract: The disclosed embodiments generally relate to methods, systems and apparatuses to provide ad hoc digital signage for public or private display. In certain embodiments, the disclosure provides dynamically formed digital signage. In one application, one or more drones are used to project the desired signage. In another application, one or more drones are used to form a background to receive the projected image. In still another application, sensors are used to detect audience movement, line of sight, or engagement level. The sensor information is then used to arrange the projecting drones or the surface-image drones to further the signage presentation.
    Type: Application
    Filed: December 28, 2017
    Publication date: February 14, 2019
    Applicant: Intel Corporation
    Inventors: Carl S. Marshall, John Sherry, Giuseppe Raffa, Glen J. Anderson, Selvakumar Panneer, Daniel Pohl
  • Publication number: 20190028683
    Abstract: Methods and apparatus for controlled shadow casting to increase the perceptual quality of projected content are disclosed. In some examples, an apparatus is to increase a perceptual quality of content projected onto a projection surface. In some examples, the apparatus includes a processor and memory. In some examples, the memory includes computer readable instructions. In some examples, the instructions, when executed, cause the processor to determine a target shutter position for a shutter based on a location of a light source and a location of a projection surface. In some examples, the instructions, when executed, further cause the processor to move the shutter to the target shutter position to cast a shadow onto the projection surface around a portion of content projected onto the projection surface.
    Type: Application
    Filed: September 24, 2018
    Publication date: January 24, 2019
    Inventors: Srenivas Varadarajan, Selvakumar Panneer, Carl S. Marshall
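
For a point light source, the geometry behind the abstract above reduces to placing the shutter on the line from the light to the surface point that should fall inside the shadow. The sketch below is a minimal illustration under that point-light assumption; the `fraction` parameter and the similar-triangles sizing rule are illustrative choices, not taken from the application.

```python
import numpy as np

def target_shutter_position(light_pos, surface_point, fraction=0.5):
    """
    Place the shutter on the straight line from the light source to the point
    on the projection surface to be shadowed. `fraction` is how far along that
    line the shutter sits (0 = at the light, 1 = at the surface).
    """
    light_pos = np.asarray(light_pos, float)
    surface_point = np.asarray(surface_point, float)
    return light_pos + fraction * (surface_point - light_pos)

def required_shutter_radius(shadow_radius, fraction):
    """By similar triangles (point light), the occluder is smaller than its shadow."""
    return shadow_radius * fraction

if __name__ == "__main__":
    pos = target_shutter_position(light_pos=[0, 0, 3], surface_point=[2, 0, 0], fraction=0.4)
    print(pos, required_shutter_radius(shadow_radius=0.5, fraction=0.4))
```
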
  • Publication number: 20190005673
    Abstract: The systems and methods disclosed herein provide determination of an orientation of a feature towards a reference target. As a non-limiting example, a system consistent with the present disclosure may include a processor, a memory, and a single camera affixed to the ceiling of a room occupied by a person. The system may analyze images from the camera to identify any objects in the room and their locations. Once the system has identified an object and its location, the system may prompt the person to look directly at the object. The camera may then record an image of the user looking at the object. The processor may analyze the image to determine the location of the user's head and, combined with the known location of the object and the known location of the camera, determine the direction that the user is facing. This direction may be treated as a reference value, or “ground truth.” The captured image may be associated with the direction, and the combination may be used as training input into an application.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Applicant: Intel Corporation
    Inventors: Glenn J. Anderson, Giuseppe Raffa, Carl S. Marshall, Meng Shi
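
The ground-truth derivation described in the abstract above amounts to taking the vector from the user's head to the known object and pairing it with the captured image. The sketch below assumes the head and object positions are already expressed in a common room coordinate frame; the sample format is hypothetical.

```python
import numpy as np

def facing_direction(head_pos, target_pos):
    """Unit vector from the user's head toward the object they were asked to look at."""
    head_pos = np.asarray(head_pos, float)
    target_pos = np.asarray(target_pos, float)
    d = target_pos - head_pos
    return d / np.linalg.norm(d)

def make_training_sample(image, head_pos, target_pos):
    """Pair the captured image with the derived direction as a labelled example."""
    return {"image": image, "label": facing_direction(head_pos, target_pos)}

if __name__ == "__main__":
    # Head and object positions in room coordinates (metres), as the ceiling
    # camera might estimate them; the image itself is stubbed out here.
    sample = make_training_sample(image="frame_0042.png",
                                  head_pos=[1.0, 2.0, 1.7],
                                  target_pos=[3.0, 2.0, 0.9])
    print(sample["label"])
```
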
  • Publication number: 20180314887
    Abstract: Methods, apparatus, and systems to enable and implement interaction between a computer device and a person (or people), such as via images and objects identified in such images. The interaction may make possible rapid and convenient machine learning with respect to such objects.
    Type: Application
    Filed: April 28, 2017
    Publication date: November 1, 2018
    Inventors: Carl S. Marshall, Ravishankar Iyer, Sejun Kim, Doye C. Emelue
  • Publication number: 20180285649
    Abstract: Systems and techniques for computer vision and sensor assisted contamination tracking are described herein. It may be identified that a food item has moved to a monitored area using computer vision. Sensor readings may be obtained from a sensor array. A contamination of the food item may be determined using the sensor readings. The contamination of the food item may be associated with a contamination area in the monitored area using the computer vision. A notification may be output for display in the contamination area indicating the contamination.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Inventors: Meng Shi, Carl S. Marshall, Glen J. Anderson, Selvakumar Panneer, Anthony G. Lamarca, Mark J. Abel, Giuseppe Raffa
  • Publication number: 20180285741
    Abstract: An embodiment of an electronic processing apparatus may include a user interface to receive an input from a user, an assistant interface to communicate with at least two electronic personal assistants, and a coordinator communicatively coupled to the user interface and the assistant interface. The coordinator may be configured to send a request to one or more of the at least two electronic personal assistants based on the input from the user, collect one or more assistant responses from the one or more electronic personal assistants, and provide a response to the user based on the collected one or more assistant responses. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Inventors: Carl S. Marshall, Selvakumar Panneer, Ravishankar Iyer
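
As a rough illustration of the coordinator described above, the sketch below fans a request out to stand-in assistants, collects their answers, and picks one by confidence. The callable-based assistant interface and the confidence-based selection rule are assumptions for illustration; the embodiments may combine responses differently.

```python
def coordinate(request, assistants,
               pick=lambda responses: max(responses, key=lambda r: r["confidence"])):
    """
    Forward one user request to several electronic personal assistants,
    collect their answers, and return a single response to the user.
    """
    responses = []
    for name, ask in assistants.items():
        try:
            answer, confidence = ask(request)
            responses.append({"assistant": name, "answer": answer, "confidence": confidence})
        except Exception:
            continue  # an unreachable assistant should not block the others
    return pick(responses) if responses else None

if __name__ == "__main__":
    # Two stand-in assistants; real ones would be network or SDK calls.
    assistants = {
        "assistant_a": lambda q: ("It is 21 degrees and sunny.", 0.9),
        "assistant_b": lambda q: ("Sunny, around 70 F.", 0.7),
    }
    print(coordinate("What's the weather?", assistants))
```
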
  • Publication number: 20180288354
    Abstract: Various systems and methods for presenting mixed reality presentations are described herein. A head-mounted display system for presenting mixed reality presentations includes a processor subsystem to implement and interface with: a context engine to determine a user context of a user of a head-mounted display (HMD); a picture-in-picture (PIP) coordinator engine to determine picture-in-picture (PIP) content for display in a PIP view of the HMD; and a graphics driver to simultaneously display alternate reality content in a main view of the HMD and the PIP content in the PIP view of the HMD.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Inventors: Glen J. Anderson, Carl S. Marshall
  • Publication number: 20180288380
    Abstract: In some embodiments, the disclosed subject matter involves a system for mapping projection of content to surfaces in an environment. Groups of users in the environment are identified and surfaces in the environment are selected/assigned for projection and/or touch input based on user preferences, ranking of surfaces for projectability or touchability, content to be displayed, proximity of user groups to one another and surfaces, and user feedback and control. Other embodiments are described and claimed.
    Type: Application
    Filed: March 29, 2017
    Publication date: October 4, 2018
    Inventors: Giuseppe Raffa, Carl S. Marshall, Selvakumar Panneer, Glen J. Anderson, Meng Shi, Sangita Ravi Sharma
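
The grouping-and-assignment flow in the abstract above can be sketched as follows; the proximity radius, the projectability scores, and the distance penalty are hypothetical values used only to show how user groups might be matched to ranked surfaces.

```python
import math

def group_users(user_positions, radius=1.5):
    """Greedy proximity grouping: users within `radius` of a group seed share a group."""
    groups = []
    for name, pos in user_positions.items():
        for g in groups:
            if math.dist(pos, g["center"]) <= radius:
                g["members"].append(name)
                break
        else:
            groups.append({"center": pos, "members": [name]})
    return groups

def assign_surfaces(groups, surfaces):
    """Give each group the best projectable surface, preferring nearby ones."""
    assignments, used = {}, set()
    for g in groups:
        best = max(
            (s for s in surfaces if s["name"] not in used),
            key=lambda s: s["projectability"] - 0.1 * math.dist(s["position"], g["center"]),
            default=None,
        )
        if best:
            used.add(best["name"])
            assignments[tuple(g["members"])] = best["name"]
    return assignments

if __name__ == "__main__":
    users = {"ann": (0.0, 0.0), "bo": (1.0, 0.0), "cai": (6.0, 5.0)}
    surfaces = [
        {"name": "north wall", "position": (0.5, 3.0), "projectability": 0.9},
        {"name": "table top", "position": (6.0, 4.0), "projectability": 0.7},
    ]
    print(assign_surfaces(group_users(users), surfaces))
```
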
  • Publication number: 20180286080
    Abstract: Various systems and methods for virtual reality transitions are described herein.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Inventors: Carl S. Marshall, Glen J. Anderson, Selvakumar Panneer
  • Publication number: 20180286008
    Abstract: Techniques to patch a shader program after the shader has been compiled and/or while the shader is in an execution pipeline are described. The shader may be patched based on references to global constants in a global constant buffer. For example, a reference to the global constant buffer may be patched with the value of the global constant, or conditional statements based on references to the global constant buffer may be replaced with unconditional statements based on the value of the global constant, in order to optimize the shader or increase its computational efficiency.
    Type: Application
    Filed: May 21, 2018
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Selvakumar Panneer, Carl S. Marshall
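
The constant-folding idea in the abstract above can be shown with a toy patcher. Real implementations would operate on compiled shader code in the driver; the sketch below works on shader source text purely for readability, and the `globals.` naming convention and regex-based branch folding are illustrative assumptions.

```python
import re

def patch_shader(source, constants):
    """
    Substitute known global-constant-buffer reads with literal values and
    fold the simple `if (LITERAL) {...} else {...}` branches that result.
    """
    # 1. Replace references such as `globals.enable_fog` with their values.
    for name, value in constants.items():
        source = re.sub(rf"\bglobals\.{name}\b", str(value).lower(), source)

    # 2. Fold branches whose condition is now a literal true/false.
    pattern = re.compile(
        r"if \((true|false)\) \{(?P<then>[^{}]*)\} else \{(?P<else>[^{}]*)\}",
        re.DOTALL,
    )
    def fold(match):
        return match.group("then") if match.group(1) == "true" else match.group("else")
    return pattern.sub(fold, source)

if __name__ == "__main__":
    shader = """
    if (globals.enable_fog) { color = mix(color, fog_color, fog_amount); } else { /* no fog */ }
    color *= globals.exposure;
    """
    print(patch_shader(shader, {"enable_fog": False, "exposure": 1.5}))
```
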
  • Publication number: 20180284269
    Abstract: Systems, apparatuses and methods may provide for visually or audibly indicating to users what areas are being covered or monitored by cameras, microphones, motion sensors, capacitive surfaces, or other sensors. Indicators such as projectors, audio output devices, ambient lighting, haptic feedback devices, and augmented reality may indicate the coverage areas based on a query from a user.
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Glen J. Anderson, Giuseppe Raffa, Sangita Sharma, Carl S. Marshall, Selvakumar Panneer, Meng Shi
  • Patent number: 10084996
    Abstract: Methods and apparatus for controlled shadow casting to increase the perceptual quality of projected content are disclosed. In some examples, an apparatus is to increase a perceptual quality of content projected onto a projection surface. In some examples, the apparatus includes a shutter position determiner to determine a target shutter position for a shutter based on a location of a light source and a location of the projection surface. In some disclosed examples, the apparatus further includes a shutter controller to move the shutter to the target shutter position to cast a shadow onto the projection surface around a portion of the content projected onto the projection surface.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: September 25, 2018
    Assignee: Intel Corporation
    Inventors: Srenivas Varadarajan, Selvakumar Panneer, Carl S. Marshall
  • Publication number: 20180196576
    Abstract: In one example, a projection device includes a first light source to provide visible optical radiation. Additionally, the projection device includes a second light source to provide invisible optical radiation. Further, the projection device includes a projection mechanism. Also, the projection device includes a depth receiver. The projection device further includes a processor to cause the projection mechanism to project each of a group of pixels in a frame of an image using optical radiation provided by both the first light source and the second light source.
    Type: Application
    Filed: January 12, 2017
    Publication date: July 12, 2018
    Inventors: Selvakumar Panneer, Carl S. Marshall
  • Publication number: 20180188893
    Abstract: Apparatus and methods may provide for an interactive display projection with surface interactivity analysis. An interactive display projector is provided along with one or more of a camera or an electromagnetic radiation source to scan plural surfaces within a projection range of the interactive display projector. Logic, implemented at least partly in configurable or fixed-functionality hardware, may process reflected electromagnetic radiation to determine one or more of the size, distance, texture, reflectivity, or angle of the scanned plural surfaces with respect to the interactive display projector, and determine, based on the processing, the interactivity of one or more of the plural surfaces for an interactive display.
    Type: Application
    Filed: December 30, 2016
    Publication date: July 5, 2018
    Inventors: Carl S. Marshall, Selvakumar Panneer, Glen J. Anderson, Meng Shi, Giuseppe Raffa
  • Patent number: 10013731
    Abstract: Methods and systems may include a computing system having a graphics processor with a three-dimensional (3D) pipeline, one or more processing units, and compute kernel logic to process a two-dimensional (2D) command. A graphics processing unit (GPU) scheduler may dispatch the 2D command directly to the one or more processing units. In one example, the 2D command includes at least one of a render target clear command, a depth-stencil clear command, a resource resolving command and a resource copy command.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: July 3, 2018
    Assignee: Intel Corporation
    Inventors: Selvakumar Panneer, Carl S. Marshall
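
The scheduling split described in the abstract above can be illustrated by sorting a command stream into work that bypasses the 3D pipeline and work that does not. The command names and dictionary representation below are assumptions for illustration, not driver data structures.

```python
# Command types the abstract lists as 2D: render-target clear, depth-stencil
# clear, resource resolve, and resource copy. Everything else goes through
# the full 3D pipeline. This is only a scheduling illustration, not driver code.
TWO_D_COMMANDS = {"clear_render_target", "clear_depth_stencil",
                  "resolve_resource", "copy_resource"}

def schedule(commands):
    """Split a command stream into direct compute-unit work and 3D-pipeline work."""
    direct, pipeline = [], []
    for cmd in commands:
        (direct if cmd["op"] in TWO_D_COMMANDS else pipeline).append(cmd)
    return direct, pipeline

if __name__ == "__main__":
    stream = [
        {"op": "clear_render_target", "target": "rt0"},
        {"op": "draw_indexed", "vertices": 36},
        {"op": "copy_resource", "src": "tex0", "dst": "tex1"},
    ]
    direct, pipeline = schedule(stream)
    print([c["op"] for c in direct], [c["op"] for c in pipeline])
```
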
  • Patent number: 10007965
    Abstract: Techniques to patch a shader program after the shader has been compiled and/or while the shader is in an execution pipeline are described. The shader may be patched based on references to global constants in a global constant buffer. For example, a reference to the global constant buffer may be patched with the value of the global constant, or conditional statements based on references to the global constant buffer may be replaced with unconditional statements based on the value of the global constant, in order to optimize the shader or increase its computational efficiency.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: June 26, 2018
    Assignee: Intel Corporation
    Inventors: Selvakumar Panneer, Carl S. Marshall
  • Publication number: 20170286426
    Abstract: Apparatuses, methods, and storage medium associated with a browser for prioritized display of videos and/or photographs are disclosed herein. In embodiments, an apparatus may include one or more processors, devices, and/or circuitry to operate a browser to present a plurality of photos and/or videos for viewing. A subset of the plurality of the photos and/or videos may be selected based on the results of an analysis of sensor data collected by a plurality of wearable sensors. The subset of the plurality of the photos and/or videos may be prioritized over other photos and/or videos from the plurality in terms of the presentation space allocated to them.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Glen J. Anderson, Carl S. Marshall, Jeffrey R. Jackson, Selvakumar Panneer, Andrea E. Johnson
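
The prioritization described in the abstract above can be sketched as weighting media by wearable-sensor scores and converting the weights into shares of presentation space. The score source and the `top_k` and `boost` values below are hypothetical.

```python
def prioritize_media(items, sensor_scores, top_k=2, boost=3.0):
    """
    Give media that coincides with high wearable-sensor activity more
    presentation space: the top_k highest-scoring items get `boost` times
    the space of the rest, and everything is normalised to fractions of 1.
    """
    ranked = sorted(items, key=lambda m: sensor_scores.get(m, 0.0), reverse=True)
    weights = {m: (boost if i < top_k else 1.0) for i, m in enumerate(ranked)}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

if __name__ == "__main__":
    photos = ["beach.jpg", "finish_line.jpg", "lunch.jpg"]
    # Hypothetical excitement scores derived from heart-rate / motion sensors.
    scores = {"finish_line.jpg": 0.95, "beach.jpg": 0.4, "lunch.jpg": 0.1}
    print(prioritize_media(photos, scores))
```
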
  • Publication number: 20170090688
    Abstract: Technologies for computing context replay include a computing device having a persistent memory and a volatile memory. The computing device creates multiple snapshots that are each indicative of a user's computing context at a corresponding sync point. The snapshots may include metadata created in response to system events, memory snapshots stored in a virtual machine, and/or video data corresponding to the computing context. At least a part of the snapshots are stored in the persistent memory. The computing device presents a timeline user interface based on the snapshots. The timeline includes multiple elements that are associated with corresponding sync points. The timeline elements may visually indicate a salience value that has been determined for each corresponding sync point. In response to a user selection of a sync point, the computing device activates a computing context corresponding to the snapshot for the selected sync point. Other embodiments are described and claimed.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 30, 2017
    Inventors: Glen J. Anderson, Jose K. Sia, Jr., Dawn Nafus, Carl S. Marshall, Jeffrey R. Jackson, Heather Patterson, John W. Sherry, Daniel S. Lake
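
The snapshot-and-timeline mechanism in the abstract above can be illustrated with a small in-memory model. The `Snapshot` fields, the salience ordering, and the nearest-sync-point restore rule below are assumptions for illustration; the actual embodiments persist snapshots and may restore full virtual-machine state.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    sync_point: float   # timestamp of the sync point
    salience: float     # how notable this moment was judged to be
    metadata: dict = field(default_factory=dict)  # e.g. open documents, window layout

class ContextTimeline:
    """Keep snapshots of the user's computing context and restore one on demand."""

    def __init__(self):
        self.snapshots = []

    def capture(self, salience, **metadata):
        self.snapshots.append(Snapshot(time.time(), salience, metadata))

    def timeline(self):
        """Elements for a timeline UI, most salient moments first."""
        return sorted(self.snapshots, key=lambda s: s.salience, reverse=True)

    def restore(self, sync_point):
        """Return the snapshot whose sync point is closest to the selection."""
        return min(self.snapshots, key=lambda s: abs(s.sync_point - sync_point))

if __name__ == "__main__":
    tl = ContextTimeline()
    tl.capture(0.8, app="editor", file="report.docx")
    tl.capture(0.2, app="browser", url="news")
    print(tl.restore(time.time()).metadata)
```
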