Abstract: Embodiments provide functionality for identifying joints and limbs in images. An embodiment extracts features from an image to generate feature maps and, in turn, processes the feature maps using a single convolutional neural network trained based on a target model that includes joints and limbs. The processing generates both a directionless joint confidence map indicating confidence with which pixels in the image depict one or more joints and a directionless limb confidence map indicating confidence with which the pixels in the image depict one or more limbs between adjacent joints of the one or more joints, wherein adjacency of joints is provided by the target model. Indications of the one or more joints and the one or more limbs in the image are then generated using the directionless joint confidence map, the directionless limb confidence map, and the target model. Embodiments can be deployed on mobile and embedded systems.
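As a non-authoritative illustration of the pipeline described above, the sketch below shows a single convolutional network whose one head emits both a joint confidence map and a limb confidence map from shared feature maps. The layer sizes, the sigmoid activation, and the names (JointLimbNet, backbone, head) are assumptions for illustration, not the patented architecture.

import torch
import torch.nn as nn

class JointLimbNet(nn.Module):
    # Hypothetical sketch: one CNN produces both directionless maps.
    def __init__(self, num_joints, num_limbs):
        super().__init__()
        # Feature extractor: turns the input image into feature maps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # A single head emits num_joints + num_limbs confidence channels.
        self.head = nn.Conv2d(128, num_joints + num_limbs, 1)
        self.num_joints = num_joints

    def forward(self, image):
        feats = self.backbone(image)
        maps = torch.sigmoid(self.head(feats))
        joint_conf = maps[:, :self.num_joints]   # per-pixel joint confidence
        limb_conf = maps[:, self.num_joints:]    # per-pixel limb confidence
        return joint_conf, limb_conf

Joint and limb indications would then be read off these maps together with the target model's adjacency, for example by locating confidence peaks and keeping limbs whose endpoints are adjacent joints in the model.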
Abstract: A vehicle unlocking device, including: a memory; a processor connected to the memory; and a reception section configured to receive identification information assigned to respective vehicles, from a terminal carried by a person present at a periphery of a specific vehicle that is one of the respective vehicles, the processor being configured to: acquire an image captured by an imaging section provided at the specific vehicle, and unlock a lock unit of the specific vehicle in a case in which the identification information that has been received is identification information corresponding to the specific vehicle and an image of the person has been acquired.
Abstract: Example vehicle occupancy management systems and methods are described. In one implementation, a method receives a current vehicle image representing a current interior of a vehicle. An occupancy management system detects at least one passenger in the vehicle based on the current vehicle image and determines a seating location of the passenger. A seat map is generated that identifies the seating location of the passenger in the vehicle.
Type:
Grant
Filed:
July 31, 2017
Date of Patent:
October 25, 2022
Assignee:
Ford Global Technologies, LLC
Inventors:
Bruno Sielly Jales Costa, Madeline J. Goh
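For the occupancy-management abstract above, a minimal sketch of how a seat map could be derived from an interior image; the person detector and the per-seat image zones are placeholders, not the patented method.

def build_seat_map(image, detect_people, seat_zones):
    # detect_people(image) -> list of (x, y) passenger centers in the image.
    # seat_zones: {seat_name: (x_min, y_min, x_max, y_max)} cabin layout.
    seat_map = {name: "empty" for name in seat_zones}
    for x, y in detect_people(image):
        for name, (x0, y0, x1, y1) in seat_zones.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                seat_map[name] = "occupied"
    return seat_map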
Abstract: A method for matching a sensed signal to a concept structure, the method may include: receiving a sensed signal; generating a signature of the input image; comparing the signature of the input image to signatures of a concept structure; determining whether the signature of the input image matches any of the signatures of the concept structure based on signature matching criteria, wherein each signature of the concept structure is associated with a signature matching criterion that is determined based on an object detection parameter of the signature; and concluding that the input image comprises an object associated with the concept structure based on an outcome of the determining.
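A hedged sketch of the matching step described above: each signature of the concept structure carries its own matching criterion (here an assumed form, a per-signature similarity threshold), and the input is declared to contain the concept's object if any reference signature matches.

def matches_concept(input_signature, concept_signatures, similarity):
    # concept_signatures: list of (reference_signature, threshold) pairs,
    # where each threshold was derived from an object-detection parameter.
    for reference, threshold in concept_signatures:
        if similarity(input_signature, reference) >= threshold:
            return True   # conclude the input depicts the associated object
    return False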
Abstract: A looking-away determination device includes: a looking-away determination unit that determines that a driver is in a looking-away state based on the proportion of images, among a plurality of images obtained by imaging the driver, in which a face is shown with a degree of certainty equal to or greater than a threshold value, the degree of certainty serving as an index of face-likeness; and an updating unit that updates the threshold value based on the degree of certainty of a face shown in an image obtained by imaging the driver.
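An illustrative sketch only: the direction of the proportion test (looking away when confidently detected faces are rare) and the exponential form of the threshold update are assumptions, not the patented rules.

def is_looking_away(certainties, threshold, min_face_ratio=0.5):
    # certainties: per-frame face certainty over a window of driver images.
    confident_frames = sum(1 for c in certainties if c >= threshold)
    return confident_frames / len(certainties) < min_face_ratio

def update_threshold(threshold, observed_certainty, rate=0.05):
    # Nudge the threshold toward the certainty observed in a new image.
    return (1 - rate) * threshold + rate * observed_certainty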
Abstract: A multi-modal emotion recognition system is disclosed. The system includes a data input unit for receiving video data and voice data of a user, a data pre-processing unit including a voice pre-processing unit for generating voice feature data from the voice data and a video pre-processing unit for generating one or more face feature data from the video data, a preliminary inference unit for generating situation determination data as to whether or not the user's situation changes according to a temporal sequence based on the video data. The system further comprises a main inference unit for generating at least one sub feature map based on the voice feature data or the face feature data, and inferring the user's emotion state based on the sub feature map and the situation determination data.
Abstract: An uplink transmission method includes: determining, by a terminal device according to an expected sending power of each uplink among a plurality of uplinks, an actual sending power of each uplink; and sending, by the terminal device, a signal on each uplink using the actual sending power of that uplink.
Type:
Grant
Filed:
January 21, 2021
Date of Patent:
October 4, 2022
Assignee:
GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Abstract: According to one embodiment, a calculation system includes a detector, a combiner, and a duration calculator. The detector refers to a plurality of images of an object imaged from mutually-different angles, detects the object in each of the images, and calculates a provisional position of the object in a coordinate system for each of the images. The coordinate system is prescribed. The combiner calculates a combined position of the object in the coordinate system by using the calculated plurality of provisional positions. The duration calculator refers to process information and a plurality of execution regions and calculates a duration of at least a portion of a plurality of processes based on the combined position and the execution regions. The process information includes information relating to the processes. The execution regions are represented using the coordinate system. The processes are executed in the execution regions.
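A sketch under stated assumptions for the abstract above: provisional positions from the different camera angles are simply averaged into the combined position, execution regions are axis-aligned boxes in the shared coordinate system, and a process accumulates duration while the combined position lies inside its region. None of these specifics are prescribed by the abstract.

import numpy as np

def combine_position(provisional_positions):
    # provisional_positions: list of (x, y) or (x, y, z), one per camera view.
    return np.mean(np.asarray(provisional_positions, dtype=float), axis=0)

def accumulate_durations(durations, execution_regions, position, dt):
    # execution_regions: {process_name: (lower_corner, upper_corner)}.
    for name, (lo, hi) in execution_regions.items():
        if np.all(position >= np.asarray(lo)) and np.all(position <= np.asarray(hi)):
            durations[name] = durations.get(name, 0.0) + dt
    return durations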
Abstract: An information processing system includes a vehicle and a server that is able to communicate with the vehicle. The server stores weather information indicating the weather. The vehicle receives the weather information from the server and acquires an image in which a scene outside the vehicle is captured when an operating state of an onboard device is not a prescribed operating state corresponding to the weather indicated by the weather information. The vehicle or the server detects the weather from the image. The server updates the weather information to indicate the weather detected from the image when the weather detected from the image and the weather indicated by the weather information do not match and provides information to a client using the updated weather information.
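A minimal sketch of the decision logic described above, assuming the onboard device is something like a wiper whose prescribed operating state depends on the weather; detect_weather and expected_state_for are placeholder callables.

def vehicle_should_capture(onboard_state, server_weather, expected_state_for):
    # Capture an outside image only when the device is not in the state
    # prescribed for the weather currently reported by the server.
    return onboard_state != expected_state_for(server_weather)

def update_server_weather(server_weather, image, detect_weather):
    observed = detect_weather(image)        # e.g. "rain", "clear"
    return observed if observed != server_weather else server_weather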
Abstract: A face recognition method and apparatus, and a storage medium are provided. The method includes: performing attribute feature extraction on an image to be processed including a target object to obtain N attribute features of the target object, N being an integer greater than 1; performing attention feature extraction on the image to be processed based on an attention mechanism to obtain N attention features of the target object; clustering the N attention features to obtain M clustered attention features, M being a positive integer and M<N; and determining the face recognition result of the target object according to the N attribute features and the M clustered attention features.
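As an illustration of the clustering step, the sketch below uses k-means in place of the unspecified clustering and simple concatenation in place of the unspecified fusion; both choices are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def fuse_features(attribute_feats, attention_feats, m):
    # attribute_feats, attention_feats: arrays of shape (N, D), with m < N.
    clustered = KMeans(n_clusters=m, n_init=10).fit(attention_feats).cluster_centers_
    # Recognition would then be performed on the combined representation.
    return np.concatenate([attribute_feats.reshape(-1), clustered.reshape(-1)])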
Abstract: There is provided an information processing apparatus and an information processing method capable of performing position estimation of a controller corresponding to a wearable device of an inside-out type. The information processing apparatus includes a user-position estimating unit that recognizes, based on a first captured image that is captured by a first image capturing unit provided to a wearable device mounted on a user, peripheral environment information of the user to execute position estimation of the user in a real space, and a controller-position estimating unit that executes, based on a second captured image captured by a second image capturing unit that is provided to a controller being operated by the user, position estimation of the controller with reference to the recognized environment information.
Abstract: An information processing apparatus includes a display device, a detection unit that detects a terminal apparatus located around the information processing apparatus and carried by a user, and a display control unit that causes the display device to display a terminal image, which is an image corresponding to the terminal apparatus detected by the detection unit, and an information image, which is an image corresponding to information to be provided for the terminal apparatus.
Abstract: A trailer detection system for a vehicle includes a trailer sensor, an imaging system, and a controller. The controller implements a trailer angle indication process including receiving and processing image data to identify a rotation point of a trailer coupled with the vehicle about a fifth-wheel hitch, left and right lateral edges of the trailer, and a distance between a rear portion of a cab of the vehicle and a front face of the trailer. Trailer yaw rate data is received and processed to determine an angle of the trailer about the rotation point, and an expected position of the front face of the trailer and at least one of the left and right lateral edges of the trailer with respect to the cab of the vehicle is calculated.
Type:
Grant
Filed:
October 29, 2020
Date of Patent:
September 6, 2022
Assignee:
Ford Global Technologies, LLC
Inventors:
Luke Niewiadomski, Phillip Curtiss Couture, Li Xu, Alexander Lee Hunton
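For the trailer-detection abstract above, a geometric sketch: the trailer angle is integrated from the yaw-rate data, and the expected position of a front corner of the trailer (front face meeting a lateral edge) is computed in the cab frame from the fifth-wheel rotation point. The pivot-to-front-face offset and the trailer half-width are hypothetical parameters, not values from the patent.

import math

def update_trailer_angle(angle, yaw_rate, dt):
    return angle + yaw_rate * dt

def expected_front_corner(rotation_point, angle, front_offset, half_width):
    # Point offset front_offset along the trailer axis and half_width
    # perpendicular to it, rotated by the current trailer angle.
    rx, ry = rotation_point
    x = rx + front_offset * math.cos(angle) - half_width * math.sin(angle)
    y = ry + front_offset * math.sin(angle) + half_width * math.cos(angle)
    return x, y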
Abstract: In accordance with an aspect of the present disclosure, there is provided a method of calibrating a camera for a vehicle, comprising: obtaining attitude angle information of the vehicle by using a traveling direction of the vehicle obtained based on a satellite signal, and a vertical direction from ground obtained based on a high definition map; obtaining attitude angle information of the camera mounted on the vehicle by matching an image captured by the camera to the high definition map; and obtaining coordinate system transformation information between the vehicle and the camera by using the attitude angle information of the vehicle and the attitude angle information of the camera.
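A sketch of the final step, assuming both attitude angles are expressed as roll-pitch-yaw in the same map frame and that each rotation maps body-frame vectors into that frame; scipy's Rotation is used purely for illustration and is not prescribed by the abstract.

from scipy.spatial.transform import Rotation as R

def vehicle_to_camera_rotation(vehicle_rpy, camera_rpy):
    # vehicle_rpy, camera_rpy: (roll, pitch, yaw) in radians, world frame.
    world_from_vehicle = R.from_euler("xyz", vehicle_rpy)
    world_from_camera = R.from_euler("xyz", camera_rpy)
    # Vehicle frame -> world frame -> camera frame.
    return world_from_camera.inv() * world_from_vehicle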
Abstract: Embodiments herein generally describe a protector for a device such as a phone or tablet that includes a magnet and allows for the transfer of power and data. The protector is generally flat and affixes itself to the back of a device using the magnet. To prevent unintentional rotation and/or sliding off with respect to the device, rather than surround the device itself, the protector includes an opening that surrounds a protrusion on the device, which may include the device's camera.
Abstract: A method and system are disclosed and include, in response to a user being located within a vehicle associated with a vehicle-sharing request, obtaining, using a processor configured to execute instructions stored in a nontransitory computer-readable medium, image data corresponding to the user from a camera. The method also includes determining, using the processor, whether the image data corresponds to an image associated with a vehicle-sharing account of the user. The method also includes, in response to determining the image data corresponds to the image, enabling, using the processor, the user to activate the vehicle.
Type:
Grant
Filed:
November 7, 2019
Date of Patent:
July 12, 2022
Assignees:
DENSO International America, Inc., DENSO CORPORATION
Inventors:
Robert Wunsche, III, Mustafa Mahmoud, Cameron Beyer
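For the vehicle-sharing abstract above, a minimal decision sketch; the face matcher comparing camera image data with the account image is a placeholder.

def may_activate_vehicle(user_in_vehicle, camera_image, account_image, face_match):
    return bool(user_in_vehicle and face_match(camera_image, account_image))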
Abstract: A method of processing image data in a connectionist network includes: determining a plurality of offsets, each offset representing an individual location shift of an underlying one of the plurality of output picture elements; determining, from the plurality of offsets, a grid for sampling from the plurality of input picture elements, wherein the grid comprises a plurality of sampling locations, each sampling location being defined by means of a respective pair of one of the plurality of offsets and the underlying one of the plurality of output picture elements; sampling from the plurality of input picture elements in accordance with the grid; and transmitting, as output data for at least a subsequent one of the plurality of units of the connectionist network, a plurality of sampled picture elements resulting from the sampling, wherein the plurality of sampled picture elements form the plurality of output picture elements.
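The sketch below illustrates the sampling step in the spirit of deformable sampling: per-output-element offsets shift a regular grid, and the input picture elements are sampled bilinearly at the shifted locations. The use of grid_sample and the normalization convention are assumptions for illustration, not the patented procedure.

import torch
import torch.nn.functional as F

def sample_with_offsets(inputs, offsets):
    # inputs: (N, C, H, W); offsets: (N, H, W, 2) per-pixel (dx, dy) shifts.
    n, c, h, w = inputs.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float()       # regular grid (H, W, 2)
    grid = base.unsqueeze(0) + offsets                 # shifted sampling locations
    # Normalize to [-1, 1] as expected by grid_sample.
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(inputs, grid, align_corners=True)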
Abstract: There are provided a communication apparatus and a method. The communication apparatus comprises: circuitry operative to determine respective priorities of a plurality of target receivers to which data is transmitted, and assign power for transmitting the data at least according to a power loss feature related to a particular target receiver of which the priority is the highest or is higher than a power threshold; and a transmitter, operative to transmit, to the plurality of target receivers, the data with the assigned power.
Type:
Grant
Filed:
July 10, 2017
Date of Patent:
July 5, 2022
Assignee:
Panasonic Intellectual Property Corporation of America
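For the power-assignment abstract above, a hypothetical allocation sketch: receivers are served in priority order from a fixed power budget, with each receiver's required power standing in for its power-loss feature. The greedy policy is an assumption, not the patented scheme.

def assign_power(receivers, total_power):
    # receivers: list of (receiver_id, priority, required_power).
    allocation, remaining = {}, total_power
    for rid, _priority, required in sorted(receivers, key=lambda r: r[1], reverse=True):
        allocation[rid] = min(required, remaining)
        remaining -= allocation[rid]
    return allocation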
Abstract: Methods and apparatus for using beacon signals are described. One or more sectorized base stations are used in some embodiments to transmit beacon signals into zones, e.g., each zone being at least partially covered by one or more beacon signals. Use of sectorized base stations allows a single base station, e.g., a Bluetooth or other base station capable of transmitting beacon signals, to cover a number of different zones avoiding the need for multiple different beacon transmitters at different locations to establish different beacon coverage areas. Sectorization of a Bluetooth base station and the ability to remotely or locally configure the base station allows for great flexibility to use beacon signals in stores or other locations without the need for numerous individual battery powered beacon transmitters at floor or display level.
Type:
Grant
Filed:
September 10, 2020
Date of Patent:
June 21, 2022
Assignee:
Juniper Networks, Inc.
Inventors:
Robert J. Friday, Neal Dante Castagnoli, Randall Wayne Frei
Abstract: A method and an apparatus for identity verification, an electronic device, a computer program, and a storage medium include: obtaining a first image of a document, where the first image contains a first face image; obtaining a second image containing a face of a to-be-verified person; performing face comparison on the first image and the second image to obtain a first comparison result; and obtaining an identity verification result according to the first comparison result.