Patents by Inventor John C. Calef, III

John C. Calef, III has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12147934
    Abstract: A request is determined for a mobile platform to move from a first location to a second location to receive an item, the request specifying a transport vehicle to move the mobile platform from the first location to the second location. The mobile platform is actuated to attach to the transport vehicle.
    Type: Grant
    Filed: September 14, 2023
    Date of Patent: November 19, 2024
    Assignee: DISH Network L.L.C.
    Inventors: Nicholas Brandon Newell, Prakash Subramanian, John C. Calef, III, Allyson Lotz, Zachary Pierucci
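    A minimal Python sketch of the request handling described in the abstract above, assuming hypothetical names (MoveRequest, MobilePlatform, handle) that are not drawn from the patent: a request names an item, two locations, and a transport vehicle, and the platform actuates an attachment to that vehicle.
        # Illustrative sketch only -- names and structure are assumptions,
        # not the patented implementation.
        from dataclasses import dataclass

        @dataclass
        class MoveRequest:
            item_id: str            # item to be received at the destination
            origin: str             # first location
            destination: str        # second location
            transport_vehicle: str  # vehicle specified to move the platform

        class MobilePlatform:
            def __init__(self, location: str):
                self.location = location
                self.attached_to = None

            def handle(self, request: MoveRequest) -> None:
                # Actuate an attachment to the specified transport vehicle,
                # which then moves the platform to the second location.
                self.attached_to = request.transport_vehicle
                self.location = request.destination

        platform = MobilePlatform(location="dock A")
        platform.handle(MoveRequest("pkg-42", "dock A", "dock B", "truck-7"))
        print(platform.attached_to, platform.location)  # truck-7 dock B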
  • Patent number: 11985578
    Abstract: Systems and methods are directed towards the interpretation of driver intent relative to other vehicles. A computing device within a vehicle includes at least one camera, an output device, and circuitry. The computing device captures images of an area outside of the vehicle. The computing device identifies another vehicle relative to the vehicle. The computing device determines a driving intent of the driver, such as based on an analysis of images or audio of the driver. The computing device determines whether the vehicle is moving within a threshold time after determining the driving intent. If the vehicle is moving within the threshold time, then the driver is identified as engaging in aggressive driving towards the other vehicle. The computing device may also provide information to the other vehicle indicating the driving intent of the driver.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: May 14, 2024
    Assignee: DISH Network L.L.C.
    Inventors: Prakash Subramanian, Nicholas B. Newell, John C. Calef, III
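    A minimal Python sketch of the threshold-time check described in the abstract above; the threshold value, function name, and inputs are illustrative assumptions rather than the patented implementation.
        # Illustrative sketch only -- the threshold and inputs are assumptions.
        from typing import Optional

        AGGRESSION_THRESHOLD_S = 2.0  # assumed window after intent detection

        def classify_driving(intent_time_s: float,
                             motion_start_time_s: Optional[float]) -> str:
            """Flag aggressive driving if the vehicle starts moving within the
            threshold window after a driving intent toward another vehicle is
            detected."""
            if motion_start_time_s is None:
                return "no motion observed"
            if 0.0 <= motion_start_time_s - intent_time_s <= AGGRESSION_THRESHOLD_S:
                return "aggressive driving toward other vehicle"
            return "normal driving"

        print(classify_driving(10.0, 11.2))  # aggressive driving toward other vehicle
        print(classify_driving(10.0, 15.0))  # normal driving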
  • Publication number: 20240013134
    Abstract: A request is determined for a mobile platform to move from a first location to a second location to receive an item, the request specifying a transport vehicle to move the mobile platform from the first location to the second location. The mobile platform is actuated to attach to the transport vehicle.
    Type: Application
    Filed: September 14, 2023
    Publication date: January 11, 2024
    Inventors: Nicholas Brandon Newell, Prakash Subramanian, John C. Calef, III, Allyson Lotz, Zachary Pierucci
  • Publication number: 20230342107
    Abstract: Systems, methods, and devices may generate speech files that reflect emotion of text-based content. An example process includes selecting a first text from a first source of text content and selecting a second text from a second source of text content. The first text and the second text are aggregated into an aggregated text, and the aggregated text includes a first emotion associated with content of the first text. The aggregated text also includes a second emotion associated with content of the second text. The aggregated text is converted into a speech stored in an audio file. The speech replicates human expression of the first emotion and of the second emotion.
    Type: Application
    Filed: April 13, 2023
    Publication date: October 26, 2023
    Applicant: DISH Technologies L.L.C.
    Inventor: John C. Calef, III
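    A minimal Python sketch of the aggregation step described in the abstract above: texts from two sources, each carrying an associated emotion, are combined and handed to a stand-in synthesize() call. The data layout and function names are assumptions for illustration only.
        # Illustrative sketch only -- data layout and the stand-in TTS call
        # are assumptions, not the patented method.
        from typing import List, Tuple

        def aggregate(texts: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
            """Combine (text, emotion) pairs from multiple sources into one
            aggregated sequence that keeps each segment's emotion."""
            return [(text.strip(), emotion) for text, emotion in texts]

        def synthesize(aggregated: List[Tuple[str, str]]) -> bytes:
            """Stand-in for a TTS engine that would replicate each segment's
            emotion in the generated speech and store it in an audio file."""
            rendered = " ".join(f"[{emotion}] {text}" for text, emotion in aggregated)
            return rendered.encode("utf-8")  # placeholder for real audio bytes

        audio = synthesize(aggregate([
            ("Local team wins the championship!", "excited"),
            ("Severe storms expected overnight.", "concerned"),
        ]))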
  • Patent number: 11769106
    Abstract: A request is determined for a mobile platform to move from a first location to a second location to receive an item, the request specifying a transport vehicle to move the mobile platform from the first location to the second location. The mobile platform is actuated to attach to the transport vehicle.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 26, 2023
    Assignee: DISH Network L.L.C.
    Inventors: Nicholas Brandon Newell, Prakash Subramanian, John C. Calef, III, Allyson Lotz, Zachary Pierucci
  • Patent number: 11656840
    Abstract: Methods and devices produce an audio representation of aggregated content by selecting preferred content from a number of emotion-tagged sources, converting the emotion-tagged preferred content into audio files, and generating a set of audio files corresponding to the converted preferred content. The preferred content is individually converted into the audio files, and the generated set comprises non-aggregated content.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: May 23, 2023
    Assignee: DISH Technologies L.L.C.
    Inventor: John C. Calef, III
  • Publication number: 20220351529
    Abstract: Systems and methods are directed towards the interpretation of driver intent relative to other vehicles. A computing device within a vehicle includes at least one camera, an output device, and circuitry. The computing device captures images of an area outside of the vehicle. The computing device identifies another vehicle relative to the vehicle. The computing device determines a driving intent of the driver, such as based on an analysis of images or audio of the driver. The computing device determines whether the vehicle is moving within a threshold time after determining the driving intent. If the vehicle is moving within the threshold time, then the driver is identified as engaging in aggressive driving towards the other vehicle. The computing device may also provide information to the other vehicle indicating the driving intent of the driver.
    Type: Application
    Filed: July 18, 2022
    Publication date: November 3, 2022
    Inventors: Prakash Subramanian, Nicholas B. Newell, John C. Calef, III
  • Patent number: 11423672
    Abstract: Embodiments are directed towards the interpretation of driver intent and communication of that intent with autonomous vehicles at a traffic intersection. A computing device that sits on the dashboard of a vehicle includes at least one camera, an output device, and circuitry. The computing device captures first images of the driver in the vehicle and second images of the traffic intersection. The computing device identifies another vehicle at or approaching the intersection based on an analysis of the second images. The computing device determines an attention direction and hand movement of the driver based on an analysis of the first images to determine a driving intent of the driver. The computing device provides information to the other vehicle indicating the driving intent of the driver. The computing device may also obtain the driving intent of the other vehicle and provide information to the driver indicating the other vehicle's intent.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: August 23, 2022
    Assignee: DISH Network L.L.C.
    Inventors: Prakash Subramanian, Nicholas B. Newell, John C. Calef, III
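    A minimal Python sketch of the intent message the abstract above says could be shared with the other vehicle; the field names and JSON encoding are assumptions, and the intent itself is taken as already inferred from attention direction and hand movement.
        # Illustrative sketch only -- message fields and encoding are assumptions.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class IntentMessage:
            vehicle_id: str
            intersection_id: str
            intent: str               # e.g. "yielding", "proceeding", "turning_left"
            attention_direction: str  # where the driver is looking
            hand_signal: str          # detected hand movement, if any

        def encode(msg: IntentMessage) -> bytes:
            """Serialize the intent message for broadcast to a nearby vehicle."""
            return json.dumps(asdict(msg)).encode("utf-8")

        def decode(payload: bytes) -> IntentMessage:
            """Reconstruct an intent message received from another vehicle."""
            return IntentMessage(**json.loads(payload.decode("utf-8")))

        msg = IntentMessage("car-12", "5th-and-main", "yielding", "left", "wave")
        print(decode(encode(msg)).intent)  # yielding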
  • Publication number: 20210279034
    Abstract: A method for producing an audio representation of aggregated content includes selecting preferred content from a number of sources, wherein the sources are emotion-tagged, aggregating the emotion-tagged preferred content sources, and creating an audio representation of the emotion-tagged aggregated content.
    Type: Application
    Filed: May 12, 2021
    Publication date: September 9, 2021
    Applicant: DISH Technologies L.L.C.
    Inventor: John C. Calef, III
  • Patent number: 11016719
    Abstract: A method for producing an audio representation of aggregated content includes selecting preferred content from a number of sources, wherein the sources are emotion-tagged, aggregating the emotion-tagged preferred content sources, and creating an audio representation of the emotion-tagged aggregated content. The aggregation of emotion-tagged content sources and/or the creation of the audio representation may be performed by a mobile device. The emotion-tagged content includes text with HTML tags that specify how text-to-speech conversion should be performed.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 25, 2021
    Assignee: DISH Technologies L.L.C.
    Inventor: John C. Calef, III
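    A minimal Python sketch of emotion-tagged text of the kind the abstract above mentions, using an SSML-style <emotion> tag as a stand-in for the HTML tags that would tell a text-to-speech engine how to render each segment; the tag vocabulary and the parser are illustrative assumptions.
        # Illustrative sketch only -- the tag names and parsing approach are
        # assumptions; the patent record does not specify this markup.
        import re

        TAGGED = (
            '<emotion name="excited">Local team wins the championship!</emotion>'
            '<emotion name="calm">Here is the weather for tomorrow.</emotion>'
        )

        def parse_emotion_tags(markup: str):
            """Return (emotion, text) pairs that a TTS engine could use to pick
            prosody settings for each segment."""
            pattern = re.compile(r'<emotion name="([^"]+)">(.*?)</emotion>', re.S)
            return pattern.findall(markup)

        for emotion, text in parse_emotion_tags(TAGGED):
            print(f"{emotion}: {text}")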
  • Publication number: 20210034889
    Abstract: Embodiments are directed towards the interpretation of driver intent and communication of that intent with autonomous vehicles at a traffic intersection. A computing device that sits on the dashboard of a vehicle includes at least one camera, an output device, and circuitry. The computing device captures first images of the driver in the vehicle and second images of the traffic intersection. The computing device identifies another vehicle at or approaching the intersection based on an analysis of the second images. The computing device determines an attention direction and hand movement of the driver based on an analysis of the first images to determine a driving intent of the driver. The computing device provides information to the other vehicle indicating the driving intent of the driver. The computing device may also obtain the driving intent of the other vehicle and provide information to the driver indicating the other vehicle's intent.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 4, 2021
    Inventors: Prakash Subramanian, Nicholas B. Newell, John C. Calef, III
  • Publication number: 20200202293
    Abstract: A request is determined for a mobile platform to move from a first location to a second location to receive an item, the request specifying a transport vehicle to move the mobile platform from the first location to the second location. The mobile platform is actuated to attach to the transport vehicle.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Nicholas Brandon Newell, Prakash Subramanian, John C. Calef, III, Allyson Lotz, Zachary Pierucci
  • Publication number: 20180190263
    Abstract: A method for producing an audio representation of aggregated content includes selecting preferred content from a number of sources, wherein the sources are emotion-tagged, aggregating the emotion-tagged preferred content sources, and creating an audio representation of the emotion-tagged aggregated content. The aggregation of emotion-tagged content sources and/or the creation of the audio representation may be performed by a mobile device. The emotion-tagged content includes text with HTML tags that specify how text-to-speech conversion should be performed.
    Type: Application
    Filed: February 28, 2017
    Publication date: July 5, 2018
    Applicant: EchoStar Technologies L.L.C.
    Inventor: John C. Calef, III