Patents Issued on July 30, 2019
  • Patent number: 10366259
    Abstract: According to one embodiment, a reading device is configured to read information from an RFID tag attached to a product. The reading device includes a housing, an antenna, an opening-and-closing lid, a detector, a reading unit, and an alert unit. The housing includes an opening portion and, therein, a space for accommodating the product. The antenna is provided within the space. The opening-and-closing lid opens and closes the opening portion. The detector is configured to detect an open or closed state of the opening-and-closing lid. The reading unit is configured to read the information from the RFID tag. The alert unit is configured to issue an alert on an operation method in accordance with the open or closed state of the opening-and-closing lid and an operation state of the reading unit.
    Type: Grant
    Filed: June 14, 2017
    Date of Patent: July 30, 2019
    Assignees: TOSHIBA TEC KABUSHIKI KAISHA, FAST RETAILING CO., LTD.
    Inventors: Shigeaki Suzuki, Dai Namiki, Takahiro Tambara
  • Patent number: 10366260
    Abstract: A key locker for a vehicle includes a key locker body, a key locker door, and an access actuator. The key locker body is sized to store a vehicle key. The access actuator is configured to move the key locker door in response to an access signal.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: July 30, 2019
    Assignee: Firstech, LLC
    Inventors: Jason Lee, Jason Henry Kaminski
  • Patent number: 10366261
    Abstract: In some embodiments, systems, apparatuses, and methods are provided herein useful to monitor a shopping facility. The shopping facility can include an array of radio frequency identification (RFID) readers distributed throughout the facility to thereby receive and read signals generated from RFID tags within the facility. RFID tags can advantageously be coupled to and associated with products within the facility so that readings of the tags can be used to monitor the status of the products. A control circuit can be coupled to the RFID readers to thereby analyze the readings and compile readings over time. With this, the control circuit can monitor the shopping facility to identify scenarios requiring follow up. Upon identification of one of the scenarios, the control circuit can instruct an automated ground vehicle (AGV) to inspect an identified product at a location within the facility. The AGV can operate a sensor thereof to determine a status of the identified product.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: July 30, 2019
    Assignee: Walmart Apollo, LLC
    Inventors: Nicholaus A. Jones, Jeremy R. Tingler, Alvin S. Taulbee, Todd D. Mattingly
  • Patent number: 10366262
    Abstract: An information code reading system includes an information code terminal and a server communicably connected to the terminal. In the terminal, an information code carrying first and second information is imaged, and information indicating the information code is transmitted to the server. The server decodes the information indicating the information code received from the terminal and, when the second information is provided via the decoding process, memorizes information showing that the information code has become an object being read. From the server, either the first information or information related to the first information provided via the decoding process is transmitted to the terminal. Hence, in the terminal, a process is performed with the first information received from the server.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: July 30, 2019
    Assignee: DENSO WAVE INCORPORATED
    Inventors: Atsushi Tano, Takao Ushijima
  • Patent number: 10366263
    Abstract: This document describes systems, methods, devices, and other techniques for video camera self-calibration based on video information received from the video camera. In some implementations, a computing device receives video information characterizing a video showing a scene from a field of view of a video camera; detects an object that appears in the scene of the video; identifies a visual marking that appears on the detected object; determines a particular visual marking among a plurality of pre-defined visual markings that matches the visual marking that appears on the detected object; identifies one or more object characteristics associated with the particular visual marking; evaluates one or more features of the video with respect to the one or more object characteristics; and based on a result of evaluating the one or more features of the video with respect to the one or more object characteristics, sets a parameter of the video camera.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: July 30, 2019
    Assignee: Accenture Global Solutions Limited
    Inventors: Cyrille Bataller, Anders Astrom
  • Patent number: 10366264
    Abstract: A system and method for transferring content among multiple devices are disclosed. Herein, the system for transferring content may include a coupling controller configured to identify a user equipment in accordance with a content transfer request and to perform coupling with the identified user equipment, and a content transfer unit configured to transmit content to the user equipment or to receive content from the user equipment, when coupling is completed.
    Type: Grant
    Filed: August 6, 2014
    Date of Patent: July 30, 2019
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Sang Sik Kim, Joon Yeong Park, Sung Kwan Jung, Jun Seok Park, Yong Chul Shin, Yong Rok Kim, Hyo Ju Park
  • Patent number: 10366265
    Abstract: Methods and systems for monitoring process equipment such as field devices. A QR code can be associated with a field device, wherein the QR code contains data that identifies the field device, and also includes process data regarding the field device, the location of the field device, and maintenance information, installation information and fault information associated with the field device. The QR code can then be scanned and decoded in order to retrieve the data for use in monitoring and maintaining field devices in the context of a connected plant.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: July 30, 2019
    Assignee: Honeywell International Inc.
    Inventors: Shubham Agarwal, Sharath Babu Malve, Amol Gandhi, Anant Vitthal Vidwans
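    The abstract above describes packing device identity, location, process data, and maintenance information into a QR code. A minimal sketch of how such a payload might be encoded and decoded, using JSON as the carrier format (the field names and sample values are assumptions, not from the patent; a real system would pass the resulting string to a QR encoder/scanner):

```python
import json

def encode_field_device_payload(device_id, location, process_data,
                                maintenance_info, fault_info):
    """Pack field-device metadata into a JSON string suitable for a QR code."""
    return json.dumps({
        "id": device_id,
        "loc": location,
        "process": process_data,
        "maintenance": maintenance_info,
        "faults": fault_info,
    })

def decode_field_device_payload(payload):
    """Recover the field-device record from a scanned QR payload."""
    return json.loads(payload)

# Hypothetical usage: round-trip a record for an assumed device "FT-1042".
payload = encode_field_device_payload(
    "FT-1042", "Unit 3, Rack B", {"flow_gpm": 12.5},
    "Last serviced 2019-05-01", [])
record = decode_field_device_payload(payload)
</imports>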
  • Patent number: 10366266
    Abstract: A fingerprint sensing device, an electronic device, and a calibration method for a fingerprint sensor are provided. The calibration method includes the following steps: obtaining an initial environment value while the fingerprint sensor performs initial environmental calibration, and determining whether the initial environment value is in a default environment range or not; determining whether the initial environment value is in one of a plurality of statistical ranges when the initial environment value is not in the default environment range, wherein each of the statistical ranges is obtained statistically by a plurality of fingerprint data of one of a plurality of categories; and, when the initial environment value is in a target statistical range, calibrating the fingerprint sensor according to a target value and an environment default value, wherein the target value corresponds to the target statistical range, and the environment default value corresponds to the default environment range.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: July 30, 2019
    Assignee: Acer Incorporated
    Inventors: Po-Chun Tsao, Hsu-Hsiang Tseng, Chih-Chiang Chen
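    The calibration steps above reduce to a range-lookup decision. A minimal sketch of that control flow, where all numeric ranges and target values are invented placeholders (the patent does not disclose concrete numbers):

```python
DEFAULT_RANGE = (90, 110)   # assumed default environment range
ENV_DEFAULT = 100           # assumed environment default value
# Each category's statistical range paired with its target value (assumed).
STATISTICAL_RANGES = [
    ((60, 90), 75),     # e.g. a "cold environment" category
    ((110, 140), 125),  # e.g. a "hot environment" category
]

def calibration_offset(initial_env_value):
    """Return a correction offset for the sensor, or None if nothing matched."""
    lo, hi = DEFAULT_RANGE
    if lo <= initial_env_value <= hi:
        return 0  # already inside the default range: no correction needed
    for (rlo, rhi), target in STATISTICAL_RANGES:
        if rlo <= initial_env_value <= rhi:
            # Calibrate according to the target value and the environment default.
            return ENV_DEFAULT - target
    return None  # outside all known ranges; caller must recalibrate fully
```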
  • Patent number: 10366267
    Abstract: Light is emitted on one side of a paper sheet 100, which is being transported on a transport path, from a first light source 11, and light is emitted on the other side of the paper sheet 100 from a second light source 21 and a fourth light source 22. A first light receiving sensor 14 receives a first reflected light, which is the light emitted by the first light source 11 and reflected from the one side of the paper sheet 100. A second light receiving sensor 24 receives a second reflected light, which is the light emitted by the second light source 21 and the fourth light source 22 and reflected from the other side of the paper sheet 100, and receives a transmitted light that is the light emitted by the first light source 11 and that has passed through the paper sheet 100. With this, satisfactory reflection and transmission images of the paper sheet can be acquired while realizing the downsizing of the device.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: July 30, 2019
    Assignee: GLORY LTD.
    Inventors: Akira Bogaki, Takahiro Yanagiuchi, Takaaki Morimoto, Satoru Oshima
  • Patent number: 10366268
    Abstract: Systems and methods for optical imaging are disclosed. The systems and methods include a display for imaging an input object. The display includes a sensing surface; a plurality of display pixels; a plurality of detector pixels; and a processing system. The processing system is configured to determine a location of the input object relative to the sensing surface; illuminate one or more display pixels of the plurality of display pixels according to a pattern depending on the location of the input object; and acquire image data of the input object from one or more detector pixels of the plurality of detector pixels, wherein the image data corresponds to light from the one or more display pixels that is reflected at the sensing surface.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: July 30, 2019
    Assignee: Synaptics Incorporated
    Inventors: Bob Lee Mackey, Arash Akhavan Fomani, Francis Lau
  • Patent number: 10366269
    Abstract: An apparatus may include an ultrasonic sensor array, a light source system and a control system. Some implementations may include an ultrasonic transmitter. The control system may be operatively configured to control the light source system to emit light that induces acoustic wave emissions inside a target object. The control system may be operatively configured to select a first acquisition time delay for the reception of acoustic wave emissions primarily from a first depth inside the target object. The control system may be operatively configured to acquire first ultrasonic image data from the acoustic wave emissions received by the ultrasonic sensor array during a first acquisition time window. The first acquisition time window may be initiated at an end time of the first acquisition time delay.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: July 30, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Yipeng Lu, David William Burns
  • Patent number: 10366270
    Abstract: Embodiments of the present disclosure provide a capacitive fingerprint sensor. The capacitive fingerprint sensor includes: a first electrode plate layer, a second electrode plate layer and a third electrode plate layer that are sequentially arranged. The first electrode plate layer forms a fingerprint capacitor with a finger, at least one fourth electrode plate layer is arranged between the first electrode plate layer and the second electrode plate layer, a first parasitic capacitor is formed between the first electrode plate layer and the fourth electrode plate layer, and a second parasitic capacitor is formed between the second electrode plate layer and the fourth electrode plate layer; and the capacitive fingerprint sensor further comprises an integrator having an integrating capacitor, and the integrating capacitor is formed between the second electrode plate layer and the third electrode plate layer.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: July 30, 2019
    Assignee: Shenzhen Goodix Technology Co., Ltd.
    Inventors: Mengwen Zhang, Chang Zhan, Tao Pi, Birong Lin
  • Patent number: 10366271
    Abstract: Fingerprint sensing technology for a fingerprint sensor that authenticates whether a fingerprint of a subject is forged or falsified by using a waveform reflected from the subject, such as an ultrasonic wave. The fingerprint authentication apparatus includes a fingerprint sensor configured to apply a wave signal to a subject and receive a wave signal reflected from the subject, a local waveform detector configured to detect local waveforms by dividing the received wave signal by a reception time, and a forgery detection unit configured to count the number of local waveforms and detect whether a fingerprint provided from the subject is forged or not based on the counted number of local waveforms.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: July 30, 2019
    Assignee: SHIN SUNG C&T CO., LTD.
    Inventors: Jae Hyun Ahn, Ci Moo Song, Keun Jung Youn, Yong Kook Kim
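    The forgery test above amounts to windowing the received signal by reception time and counting windows that contain a local waveform. A minimal sketch under assumed parameters (window size, peak threshold, and the expected count band are all invented for illustration; the patent does not specify them):

```python
def count_local_waveforms(signal, window, threshold):
    """Split the received signal into reception-time windows and count the
    windows that contain a local waveform (a peak at or above threshold)."""
    count = 0
    for i in range(0, len(signal), window):
        if max(signal[i:i + window], default=0) >= threshold:
            count += 1
    return count

def is_forged(signal, window=4, threshold=0.5, expected=(2, 5)):
    """A real fingertip reflects from several tissue layers, so too few or
    too many counted local waveforms suggests a fake (band is an assumption)."""
    n = count_local_waveforms(signal, window, threshold)
    return not (expected[0] <= n <= expected[1])
```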
  • Patent number: 10366272
    Abstract: An electronic device and method of operating an electronic device is provided. The electronic device includes a display in which a fingerprint recognition area is formed in at least one portion thereof; a fingerprint sensor disposed under the display on which a screen is displayed, wherein the fingerprint sensor is adapted to acquire image information related to authentication of a fingerprint corresponding to an object that approaches a fingerprint recognition area at least partially based on light radiated from at least one pixel of the display and reflected by the object; and a processor adapted to control at least one function of the fingerprint sensor in association with the operation of acquiring the image information.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: July 30, 2019
    Assignee: Samsung Electronics Co. Ltd
    Inventors: Kyung Hoon Song, Kwang Sub Lee, Gyu Sang Cho, Yun Jang Jin, Se Young Jang, Chi Hyun Cho
  • Patent number: 10366273
    Abstract: A device for contact-based capture of human autopodial prints using disturbed total internal reflection, comprising a protective body with a contact surface, a sensor layer comprising light-sensor elements in an array for detecting light of a predefined wavelength range, and a light guide. Passband areas transparent for light of the predefined range are between the sensor elements. The light guide is transparent to light in the range and includes parallel lower and upper faces. The faces define a coupling-in surface for light emitted from a light source in a limited angular range around a preferred direction. Due to the directed angle of incidence, light entering the light guide is totally internally reflected at the faces. A mirror layer between the sensors and the guide reflects some light back into the light guide and transmits other light. Light exiting the guide is homogenized dependent upon a distance to the light source.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: July 30, 2019
    Assignee: JENETRIC GmbH
    Inventors: Joerg Reinhold, Dirk Morgeneier, Daniel Krenzer, Juergen Hillmann, Philipp Riehl, Undine Richter
  • Patent number: 10366274
    Abstract: In the fingerprint identification system according to the present disclosure, the fingerprint sensor collects multiple frames of fingerprint images sliding-inputted by a user, the judging unit determines whether, among the multiple frames of fingerprint images, there is a first overlap region between a current frame of fingerprint images and a previous frame of fingerprint images; if yes, the judging unit removes the first overlap region from the current frame of fingerprint images and superposes the previous frame of fingerprint images with the current frame of fingerprint images to form a superposed fingerprint image; the judging unit completes judgment of all the multiple frames of fingerprint images to obtain a template fingerprint image; the processing unit saves characteristic points of the complete template fingerprint image.
    Type: Grant
    Filed: November 3, 2015
    Date of Patent: July 30, 2019
    Assignee: BYD COMPANY LIMITED
    Inventors: Guilan Chen, Qiyong Wen, Tiejun Cai, Bangjun He, Yun Yang
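    The stitching loop above (detect the overlap between consecutive sliding frames, drop it, and superpose the remainder) can be sketched in a few lines. Here strings stand in for one-dimensional image strips purely for illustration; a real implementation would correlate 2-D pixel blocks:

```python
def find_overlap(prev, cur, min_overlap=1):
    """Length of the longest suffix of `prev` that is a prefix of `cur`."""
    for n in range(min(len(prev), len(cur)), min_overlap - 1, -1):
        if prev[-n:] == cur[:n]:
            return n
    return 0

def stitch_frames(frames):
    """Superpose sliding-input frames into one template, removing each
    current frame's overlap region with the image built so far."""
    template = frames[0]
    for cur in frames[1:]:
        n = find_overlap(template, cur)
        template += cur[n:]  # append only the non-overlapping part
    return template
```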
  • Patent number: 10366275
    Abstract: A method and a device for improving a fingerprint template, and a terminal device are proposed. The method includes: extracting first feature information of a recorded fingerprint image based on preset fingerprint feature types; determining a degree of matching between the first feature information and second feature information based on the second feature information corresponding to the fingerprint feature types in a registered fingerprint template; determining whether the degree of matching is higher than or equal to a preset compensation threshold value; and acquiring a compensation image having no intersection with the fingerprint template from the fingerprint image if the degree of matching is higher than or equal to the compensation threshold value, and adding the compensation image to the fingerprint template.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: July 30, 2019
    Assignee: Guangdong Oppo Mobile Telecommunications Corp., Ltd.
    Inventors: Haiping Zhang, Yibao Zhou
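    The compensation logic above is a threshold gate followed by a set difference. A minimal sketch where fingerprint images are modeled as sets of minutiae points and the threshold value is an invented placeholder:

```python
COMPENSATION_THRESHOLD = 0.75  # assumed preset compensation threshold

def maybe_augment_template(template, fingerprint_image, match_degree):
    """If the recorded image matches the registered template well enough,
    add the part of it with no intersection with the template (the
    compensation image) to the template. Sets stand in for images here."""
    if match_degree < COMPENSATION_THRESHOLD:
        return template  # not similar enough to trust as the same finger
    compensation = fingerprint_image - template  # region not yet covered
    return template | compensation
```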
  • Patent number: 10366276
    Abstract: An information processing device which processes information regarding a 3D model corresponding to a target object includes a template creator that creates a template in which feature information and 3D locations are associated with each other, the feature information representing a plurality of 2D locations included in a contour obtained through a projection of the prepared 3D model onto a virtual plane based on a viewpoint, and the 3D locations corresponding to the 2D locations and being represented in a 3D coordinate system, the template being correlated with the viewpoint.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: July 30, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Alex Levinshtein, Guoyi Fu
  • Patent number: 10366277
    Abstract: The present document is directed to methods and systems that identify and characterize face tracks in one or more videos that include frames that contain images of one or more human faces. In certain implementations, values for attributes, such as age, ethnicity, and gender, are assigned to face-containing subimages identified in frames of the video. The occurrence or presence of a face in a sequence of frames is identified, by comparing attributes and location and dimension parameters assigned to each occurrence of the face in a face-containing subimage within a frame, as a face track that represents a four-dimensional tube or cylinder in space time. Attributes are assigned to each face track based on attributes assigned to the occurrences of subimages of the face in frames within the face track.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: July 30, 2019
    Assignee: Imagesleuth, Inc.
    Inventor: Noah S. Friedland
  • Patent number: 10366278
    Abstract: A method for processing data includes receiving a depth map of a scene containing at least a humanoid head, the depth map comprising a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values of at least some of the pixels in the matrix. The curvature values are processed in order to identify a face in the scene.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: July 30, 2019
    Assignee: APPLE INC.
    Inventor: Yaron Eshet
  • Patent number: 10366279
    Abstract: Embodiments of the present invention provide a system for executing multiple events in response to receiving an image and extracting identity and contact information from that image. As such, a facial recognition and image hashing process is applied to an image of multiple individuals associated with the multiple events to extract image hashes for each individual. These image hashes are then compared to known, stored image hashes to determine an identity and contact information for each individual. Once this information is collected, the system executes the multiple events based on the determined information about each individual.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: July 30, 2019
    Assignee: Bank of America Corporation
    Inventors: Udaya Kumar Raju Ratnakaram, Nagasubramanya Lakshminarayana, Pinak Chakraborty
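    The lookup at the heart of the abstract above (compare extracted image hashes against known, stored hashes to recover identity and contact information) can be sketched as a dictionary probe. All hashes, names, and addresses below are invented placeholders:

```python
# Assumed store of previously enrolled hash -> identity/contact records.
KNOWN_HASHES = {
    "a1b2": {"name": "Alice", "email": "alice@example.com"},
    "c3d4": {"name": "Bob", "email": "bob@example.com"},
}

def resolve_faces(image_hashes):
    """Map each face hash extracted from the image to its stored identity
    and contact info; unknown hashes yield None so the caller can skip
    executing an event for that individual."""
    return [KNOWN_HASHES.get(h) for h in image_hashes]
```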
  • Patent number: 10366280
    Abstract: A method for measuring periodic motion of an object includes the steps of: after receiving an axial acceleration and a radial acceleration, calculating a first included angle between a composite acceleration, which is a sum of the axial acceleration and the radial acceleration, and one of an axial direction and a radial direction, and a second included angle between the composite acceleration and the other one of the axial direction and the radial direction; and based on a magnitude relation between the second included angle and the first included angle, controlling a periodic motion counter to increment a motion counter value which is associated with a number of times the periodic motion of the object has occurred.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: July 30, 2019
    Assignee: Mitac International Corp.
    Inventor: Hsiang-Yu Hsieh
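    The counting rule above (compare the two included angles of the composite acceleration and advance the counter on the basis of their magnitude relation) can be sketched as follows. Interpreting a flip in that relation as a half-cycle crossing is an assumption made for illustration; the patent claims only the comparison itself:

```python
import math

def count_periodic_motion(samples):
    """Count motion cycles from (axial, radial) acceleration samples.
    Each time the relation between the composite acceleration's two
    included angles flips, half a cycle is assumed to have elapsed."""
    count = 0
    prev_relation = None
    for axial, radial in samples:
        first = math.atan2(radial, axial)   # angle to the axial direction
        second = math.pi / 2 - first        # angle to the radial direction
        relation = second > first
        if prev_relation is not None and relation != prev_relation:
            count += 1  # crossing detected: advance the motion counter
        prev_relation = relation
    return count // 2  # two crossings per full cycle (assumed)
```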
  • Patent number: 10366281
    Abstract: A method for gesture identification with natural images includes generating a series of variant images by using each two or more successive ones of the natural images, extracting an image feature from each of the variant images, and comparing the varying pattern of the image feature with a gesture definition to identify a gesture. The method is inherently insensitive to indistinctness of images, and supports the motion estimation in axes X, Y, and Z without requiring the detected object to maintain a fixed gesture.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: July 30, 2019
    Assignee: PIXART IMAGING INC.
    Inventor: Shu-Sian Yang
  • Patent number: 10366282
    Abstract: A human detection apparatus and method using a low-resolution two-dimensional (2D) light detection and ranging (LIDAR) sensor are provided. The human detection method may include receiving LIDAR data generated by reflecting a laser signal that continues to be transmitted to a search region from a plurality of objects in the search region, clustering a plurality of points included in the received LIDAR data by the same objects based on a correlation between the plurality of points, deriving a characteristic function used to identify a shape of a human, based on the clustered points, and determining whether each of the objects is a human based on the derived characteristic function.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: July 30, 2019
    Assignee: Daegu Gyeongbuk Institute of Science and Technology
    Inventors: Jong Hun Lee, Seong Kyung Kwon, Sang Hyuk Son, Eugin Hyun, Jin Hee Lee
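    The clustering step above groups LIDAR points belonging to the same object. A simplified single-pass sketch using proximity as a stand-in for the correlation test (the distance threshold is an assumption, and a production clusterer would also merge clusters that a later point bridges):

```python
def cluster_points(points, max_gap=1.0):
    """Greedy clustering of 2-D LIDAR points: a point joins a cluster if it
    lies within max_gap of any point already in that cluster."""
    clusters = []
    for p in points:
        for cluster in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_gap ** 2
                   for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])  # no nearby cluster: start a new object
    return clusters
```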
  • Patent number: 10366283
    Abstract: A method for processing change-of-address (COA) forms. The method includes capturing a first image of a first COA form. The method includes assigning a unique identifier to the first COA form and associating the unique identifier with the first image data. The method includes transmitting the first image data and the unique identifier to a cloud computing system. The cloud computing system performs an optical-character-recognition process on the first image data to produce name and address data including both an old address and a new address, validates the name and address data, and stores the name and address data in a change of address database.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: July 30, 2019
    Assignee: SIEMENS INDUSTRY, INC.
    Inventors: Abdul Hamid Salemizadeh, Hongjian Li
  • Patent number: 10366284
    Abstract: Image recognition and parsing techniques are provided herein. In the described examples, an input image, such as an image of a document (e.g., a scanned document), can be received. Scan mark candidates in the input image can be identified that correspond to blueprint scan marks for a stored set of form blueprints. The blueprint scan marks can indicate form entry areas or other features of a form associated with the form blueprint. Identified scan mark candidates can be compared with the corresponding blueprint scan marks. Based on the comparing, it can be determined that at least some of the scan mark candidates are confirmed scan marks. Based on the confirmed scan marks, one form blueprint can be identified that corresponds to the input image. Information can be extracted from the input image, for example by optical character recognition, based on the form blueprint to which the input image corresponds.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: July 30, 2019
    Assignee: David Prulhiere
    Inventor: David Prulhiere
  • Patent number: 10366285
    Abstract: Various embodiments of an apparatus and method for determining the operation of a vehicle safety system are disclosed. In one embodiment, the controller for a safety system comprises a sensor input for receiving a signal from a safety system sensor; a camera input for receiving a signal from a camera; and a processor having control logic. The control logic is capable of receiving the sensor signal indicating an absence of detected objects in a field of view of the safety system sensor; receiving the camera signal indicating at least one non-vehicle object identified in the field of view of the camera; and maintaining the active vehicle safety system as active in response to the sensor signal indicating the absence of detected objects and the camera signal indicating the identification of at least one visual non-vehicle object.
    Type: Grant
    Filed: April 9, 2015
    Date of Patent: July 30, 2019
    Assignee: Bendix Commercial Vehicle Systems LLC
    Inventors: Robert J Custer, William P Amato
  • Patent number: 10366286
    Abstract: Systems and methods of detecting traffic light signal changes are disclosed. For instance, it can be determined that a user is stopped at an intersection having a traffic light. A plurality of images can be captured in response to detecting that the user is stopped at the intersection having a traffic light. The plurality of images do not depict the traffic light. A tonal shift in one or more color values associated with at least one image of the plurality of images can be detected. The tonal shift is indicative of a change in signal provided by the traffic light. A notification indicative of the change in signal provided by the traffic light can be provided to the user.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: July 30, 2019
    Assignee: Google LLC
    Inventors: Seth Glickman, Emil John Feig
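    The detection step above hinges on comparing average color values between images that do not depict the light itself. A minimal sketch, with images modeled as flat lists of (R, G, B) tuples and an invented per-channel threshold:

```python
def mean_color(image):
    """Average (R, G, B) over all pixels of an image given as a flat list."""
    n = len(image)
    return tuple(sum(px[c] for px in image) / n for c in range(3))

def detect_tonal_shift(prev_image, cur_image, threshold=20):
    """Flag a tonal shift when the scene's average color moves by more than
    `threshold` on any channel, e.g. a red glow fading as the light turns
    green. The threshold value is an assumption for illustration."""
    prev = mean_color(prev_image)
    cur = mean_color(cur_image)
    return any(abs(cur[c] - prev[c]) > threshold for c in range(3))
```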
  • Patent number: 10366287
    Abstract: An unmanned aerial vehicle (UAV) solar irradiation assessment system may automate several design parameters of solar panel design, cost and payoff estimations, and installation. The system determines the irradiance at various locations on a roof during various time periods. The system accounts for the effects of various existing or potential obstacles on the roof of a structure and/or proximate the structure. In some embodiments, a visual model (e.g., two-dimensional or three-dimensional) of the roof may be shown with a heatmap of irradiance values and/or graphical placement of solar panels. In other embodiments, the data may be analyzed and reported without visual presentation.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: July 30, 2019
    Assignee: Loveland Innovations, LLC
    Inventors: Jim Loveland, Leif Larson, Dan Christiansen, Tad Christiansen
  • Patent number: 10366288
    Abstract: Disclosed systems and methods relate to remote sensing, deep learning, and object detection. Some embodiments relate to machine learning for object detection, which includes, for example, identifying a class of pixel in a target image and generating a label image based on a parameter set. Other embodiments relate to machine learning for geometry extraction, which includes, for example, determining heights of one or more regions in a target image and determining a geometric object property in a target image. Yet other embodiments relate to machine learning for alignment, which includes, for example, aligning images via direct or indirect estimation of transformation parameters.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: July 30, 2019
    Assignee: CAPE ANALYTICS, INC.
    Inventors: Ryan Kottenstette, Peter Lorenzen, Suat Gedikli
  • Patent number: 10366289
    Abstract: Systems and methods for providing vehicle cognition through localization and semantic mapping are provided. Localization may involve in vehicle calculation of voxel signatures, such as by hashing weighted voxel data (S900, S910) obtained from a machine vision system (110), and comparison of calculated signatures to cached data within a signature localization table (630) containing previously known voxel signatures and associated geospatial positions. Signature localization tables (630) may be developed by swarms of agents (1000) calculating signatures while traversing an environment and reporting calculated signatures and associated geospatial positions to a central server (1240). Once vehicles are localized, they may engage in semantic mapping. A swarm of vehicles (1400, 1402) may characterize assets encountered while traversing a local environment. Asset characterizations may be compared to known assets within the locally cached semantic map.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: July 30, 2019
    Assignee: Solfice Research, Inc.
    Inventors: Shanmukha Sravan Puttagunta, Fabien Chraim, Scott Harvey
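    The localization scheme above (hash weighted voxel data into signatures, then look them up in a table of previously reported signatures and positions) can be sketched as a hash-keyed dictionary. The hashing recipe and truncation length are assumptions, not the patented method:

```python
import hashlib

def voxel_signature(voxel_weights):
    """Hash a tuple of weighted voxel values into a short, stable signature."""
    data = ",".join(f"{w:.3f}" for w in voxel_weights).encode()
    return hashlib.sha256(data).hexdigest()[:16]

class SignatureLocalizationTable:
    """Maps previously observed voxel signatures to geospatial positions,
    as swarm agents would report them to a central server."""
    def __init__(self):
        self.table = {}

    def report(self, voxel_weights, position):
        self.table[voxel_signature(voxel_weights)] = position

    def localize(self, voxel_weights):
        return self.table.get(voxel_signature(voxel_weights))
```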
  • Patent number: 10366290
    Abstract: In one embodiment, a server receives a request from a first autonomous vehicle for content delivery. In response to the request, a vision analysis is performed on an image obtained from the request to determine three-dimensional (3D) positioning information of the image. A list of content items are identified based on current vehicle information of the first autonomous vehicle in view of a user profile of a user riding the first autonomous vehicle. A first content item selected from the list of content items is augmented onto the image based on the 3D positioning information of the image, generating an augmented image. The augmented image is transmitted to the first autonomous vehicle, where the augmented image is to be displayed on a display device within the autonomous vehicle in a virtual reality manner.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: July 30, 2019
    Assignee: BAIDU USA LLC
    Inventors: Quan Wang, Biao Ma, Shaoshan Liu, James Peng
  • Patent number: 10366291
    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
    Type: Grant
    Filed: September 9, 2017
    Date of Patent: July 30, 2019
    Assignee: GOOGLE LLC
    Inventors: Marcin Nowak-Przygodzki, Gökhan Bakir
  • Patent number: 10366292
    Abstract: A system is provided for video captioning. The system includes a processor. The processor is configured to apply a three-dimensional Convolutional Neural Network (C3D) to image frames of a video sequence to obtain, for the video sequence, (i) intermediate feature representations across L convolutional layers and (ii) top-layer features. The processor is further configured to produce a first word of an output caption for the video sequence by applying the top-layer features to a Long Short Term Memory (LSTM). The processor is further configured to produce subsequent words of the output caption by (i) dynamically performing spatiotemporal attention and layer attention using the intermediate feature representations to form a context vector, and (ii) applying the LSTM to the context vector, a previous word of the output caption, and a hidden state of the LSTM. The system further includes a display device for displaying the output caption to a user.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: July 30, 2019
    Assignee: NEC Corporation
    Inventors: Renqiang Min, Yunchen Pu
  • Patent number: 10366293
    Abstract: In an example, a computing device comprises at least one processor, a memory, and a non-transitory computer-readable storage medium storing instructions thereon that, when executed, cause the at least one processor to perform functions comprising: performing an initial security screening on an object based on a first set of security-related data associated with the object and a first set of security screening parameters, and performing a supplemental security screening on the object based on a second set of security-related data associated with the object and a second set of security screening parameters. The first set of security-related data may be different from the second set of security-related data, and the first set of security screening parameters may be different from the second set of security screening parameters.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: July 30, 2019
    Assignee: Synapse Technology Corporation
    Inventors: Bruno Brasil Ferrari Faviero, Simanta Gautam, Ian Cinnamon
  • Patent number: 10366294
    Abstract: An object classification system for an automated vehicle includes a lidar and/or a camera, and a controller. The controller determines a lidar-outline and/or a camera-outline of an object. Using the lidar, the controller determines a transparency-characteristic of the object based on instances of spot-distances from within the lidar-outline of the object that correspond to a backdrop-distance. Using the camera, the controller determines a transparency-characteristic of the object based on instances of pixel-color within the camera-outline that correspond to a backdrop-color. The transparency-characteristic may also be determined based on a combination of information from the lidar and the camera. The controller operates the host-vehicle to avoid the object when the transparency-characteristic is less than a transparency-threshold.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: July 30, 2019
    Assignee: Aptiv Technologies Limited
    Inventors: Junqing Wei, Wenda Xu
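As a rough illustration of the transparency test described in the abstract above (not Aptiv's implementation; the function names, tolerance, and threshold values are assumptions), the transparency-characteristic can be modeled as the fraction of lidar spot-distances inside the object's outline that pass through to the backdrop:

```python
def transparency_characteristic(spot_distances, backdrop_distance, tol=0.5):
    """Fraction of spot-distances within the object outline that match the
    backdrop distance, i.e. lidar returns that passed through the object."""
    if not spot_distances:
        return 0.0
    hits = sum(1 for d in spot_distances if abs(d - backdrop_distance) <= tol)
    return hits / len(spot_distances)

def should_avoid(spot_distances, backdrop_distance, transparency_threshold=0.5):
    # Per the abstract: operate the host-vehicle to avoid the object when its
    # transparency-characteristic is less than the transparency-threshold.
    return transparency_characteristic(spot_distances, backdrop_distance) < transparency_threshold
```

A mostly opaque object (few backdrop-distance returns inside its outline) scores low and triggers avoidance; a mostly transparent one, such as exhaust or spray, scores high and is ignored.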
  • Patent number: 10366295
    Abstract: An object recognition apparatus learns an axis displacement amount of a reference axis of first object detecting means, and combines and integrates, as information belonging to a same object, a plurality of pieces of information present within a first combining area and a second combining area, when a positional relationship between the first combining area and the second combining area meets a predetermined combinable condition. The first combining area is set as an area in which pieces of information related to the object acquired by the first object detecting means are combined. The second combining area is set as an area in which pieces of information related to the object acquired by second object detecting means are combined. The object recognition apparatus variably sets sizes of the first combining area and the second combining area based on a learning state of the axis displacement amount of the reference axis.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: July 30, 2019
    Assignee: DENSO CORPORATION
    Inventor: Yusuke Matsumoto
  • Patent number: 10366296
    Abstract: Exemplary embodiments are directed to biometric enrollment systems including a camera and an image analysis module. The camera is configured to capture a probe image of a subject, the probe image including an iris of the subject. The image analysis module is configured to determine an iris characteristic of the iris in the probe image. The image analysis module is configured to analyze the probe image relative to a first enrollment image to determine if a match exists based on the iris characteristic. If the match exists, the image analysis module is configured to electronically store the matched probe image as an accepted image. The image analysis module is configured to select and establish the accepted image as a second enrollment image if the accepted image meets enrollment image criteria.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: July 30, 2019
    Assignee: Princeton Identity, Inc.
    Inventors: Barry E. Mapen, David Alan Ackerman, James Russell Bergen, Steven N. Perna
  • Patent number: 10366297
    Abstract: The technology disclosed relates to coordinating motion-capture of a hand by a network of motion-capture sensors having overlapping fields of view. In particular, it relates to designating a first sensor among three or more motion-capture sensors as having a master frame of reference, observing motion of a hand as it passes through overlapping fields of view of the respective motion-capture sensors, synchronizing capture of images of the hand within the overlapping fields of view by pairs of the motion-capture devices, and using the pairs of the hand images captured by the synchronized motion-capture devices to automatically calibrate the motion-capture sensors to the master frame of reference.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 30, 2019
    Assignee: Leap Motion, Inc.
    Inventor: David S. Holz
  • Patent number: 10366298
    Abstract: Disclosed is a computer implemented method for identifying an object in a plurality of images. The method may include a step of receiving, through an input device, a delineation of the object in at least one image of the plurality of images. Further, the method may include a step of identifying, using the processor, an image region corresponding to the object in the at least one image based on the delineation. Furthermore, the method may include a step of tracking, using the processor, the image region across the plurality of images.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: July 30, 2019
    Inventor: Shoou Jiah Yiu
  • Patent number: 10366299
    Abstract: A scanning camera upgrade adaptor system provides backwards compatibility when an existing scanning camera subsystem is replaced or upgraded in automated sorting equipment with a newer camera having a different data format. The adaptor system allows sorting equipment such as mail sorting equipment to be upgraded or repaired with a new camera while providing compatibility and optional fallback to a previous mode of operation of the existing equipment. The upgrade system enables legacy equipment and newly added sorting/processing equipment to be utilized in conjunction, while reducing cost of upgrade and necessity for completely new equipment as desirable features are added.
    Type: Grant
    Filed: October 12, 2012
    Date of Patent: July 30, 2019
    Assignee: BULL HN INFORMATION SYSTEMS, INC.
    Inventors: David Lowell Bowne, Shahrom Kiani, Carlos Macia, Russell W. Guenthner
  • Patent number: 10366300
    Abstract: Systems and methods are described for generating an enhanced prediction from a 2D and 3D image-based ensemble model. In various embodiments, a computing device can be configured to obtain one or more sets of 2D and 3D images and to standardize each of the 2D and 3D images to allow for comparison and interoperability. Corresponding 2D3D image pairs can be determined from the standardized 2D and 3D images, where the 2D and 3D images of a pair correspond based on a common attribute, such as a similar timestamp or time value. The enhanced prediction can use separate underlying 2D and 3D prediction models, where the 2D and 3D images of a 2D3D pair are each input to the respective underlying 2D and 3D prediction models to generate respective 2D and 3D predictions.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: July 30, 2019
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Elizabeth Flowers, Puneit Dua, Eric Balota, Shanna L. Phillips
  • Patent number: 10366301
    Abstract: A method of object or feature detection. The method includes the steps of (A) receiving an array of scores and (B) applying a block based non-maximum suppression technique to the array of scores.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: July 30, 2019
    Assignee: Ambarella, Inc.
    Inventors: Elliot N. Linzer, Guy Rapaport, Leslie D. Kohn, Yu Wang
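One way to read "block based non-maximum suppression" over an array of scores is: partition the score array into tiles and keep only the per-tile maximum. A minimal sketch under that assumption (the block size and threshold are illustrative, not from the patent):

```python
import numpy as np

def block_nms(scores, block=4, thresh=0.0):
    """Block-based non-maximum suppression: within each block x block tile of
    the score array, keep only the maximal score (if above thresh)."""
    out = np.zeros_like(scores)
    h, w = scores.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = scores[y:y + block, x:x + block]
            iy, ix = np.unravel_index(np.argmax(tile), tile.shape)
            if tile[iy, ix] > thresh:
                out[y + iy, x + ix] = tile[iy, ix]
    return out
```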
  • Patent number: 10366302
    Abstract: A CNN-based integrated circuit is configured with a set of pre-trained filter coefficients or weights as a feature extractor of input data. Multiple fully-connected networks (FCNs) are trained for use in a hierarchical category classification scheme. Each FCN is capable of classifying the input data via the extracted features in a specific level of the hierarchical category classification scheme. First, a root level FCN is used for classifying the input data among a set of top level categories. Then, a relevant next level FCN is used in conjunction with the same extracted features for further classifying the input data among a set of subcategories to the most probable category identified using the previous level FCN. The hierarchical category classification scheme continues for further detailed subcategories if desired.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: July 30, 2019
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Lin Yang, Patrick Z. Dong, Baohua Sun
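The routing logic of the hierarchical scheme above can be sketched as follows (a simplification with assumed names; each FCN is stood in for by any callable returning class scores over the same extracted features):

```python
import numpy as np

def classify_hierarchical(features, root_fcn, sub_fcns):
    """Route one extracted feature vector through the root-level classifier,
    then reuse the same features in the sub-classifier belonging to the
    winning top-level category."""
    top = int(np.argmax(root_fcn(features)))       # top-level category
    sub = int(np.argmax(sub_fcns[top](features)))  # subcategory within it
    return top, sub
```

The key point matching the abstract is that the feature extraction runs once; only the small fully-connected heads differ per level.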
  • Patent number: 10366303
    Abstract: A polarization image acquisition unit (11) acquires polarization images of three or more polarization directions. A feature quantity computation unit (15) computes image feature quantities on the basis of the acquired polarization images. For example, the luminance of each polarization image is normalized for each pixel, and the normalized luminance of the polarization image is used as the image feature quantity. The luminance of the polarization image changes according to the surface shape of an object. Thus, the image feature quantities computed on the basis of the polarization images are feature quantities corresponding to the surface shape of the object. Image processing, for example, image recognition, feature point detection, feature point matching, or the like, can be performed on the basis of the surface shape of the object using such image feature quantities.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: July 30, 2019
    Assignee: SONY CORPORATION
    Inventor: Yuhi Kondo
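The per-pixel luminance normalization described in the abstract can be sketched as below (an assumption-laden illustration, not Sony's implementation; it normalizes each pixel's luminances across the polarization directions so they sum to one):

```python
import numpy as np

def polarization_feature(pol_images):
    """Per-pixel luminance normalization across K >= 3 polarization images.
    pol_images: array of shape (K, H, W). Each pixel's K normalized
    luminances sum to 1, so the feature tracks the polarization modulation
    (related to surface shape) rather than absolute brightness."""
    stack = np.asarray(pol_images, dtype=float)
    total = stack.sum(axis=0, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero in fully dark pixels
    return stack / total
```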
  • Patent number: 10366304
    Abstract: A method comprising: obtaining a three-dimensional (3D) point cloud about an object; obtaining binary feature descriptors for feature points in a 2D image about the object; assigning a plurality of index values for each feature point as multiple bits of the corresponding binary feature descriptor; storing the binary feature descriptor in a table entry of a plurality of hash key tables of a database image; obtaining query binary feature descriptors for feature points in a query image; matching the query binary feature descriptors to the binary feature descriptors of the database image; reselecting one bit of the hash key of the matched database image; and re-indexing the feature points in the table entries of the hash key table of the database image.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: July 30, 2019
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Lixin Fan, Youji Feng, Yihong Wu
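The indexing-and-query half of the scheme above (binary descriptors bucketed into multiple hash-key tables by selected bits, candidates verified by Hamming distance) can be sketched as follows; the bit-reselection and re-indexing steps of the patent are omitted, and all names and widths are assumptions:

```python
from collections import defaultdict

def hamming(a, b):
    # Hamming distance between two binary descriptors held as ints.
    return bin(a ^ b).count("1")

def build_tables(descriptors, bit_masks):
    """Index binary descriptors into one hash table per mask; the masked
    bits of a descriptor serve as its key (its 'index value')."""
    tables = [defaultdict(list) for _ in bit_masks]
    for i, d in enumerate(descriptors):
        for t, mask in zip(tables, bit_masks):
            t[d & mask].append(i)
    return tables

def query(desc, tables, bit_masks, descriptors, max_dist=4):
    """Gather candidates from every table, then keep the Hamming-nearest
    database descriptor if it is close enough."""
    cands = set()
    for t, mask in zip(tables, bit_masks):
        cands.update(t.get(desc & mask, []))
    best = min(cands, key=lambda i: hamming(desc, descriptors[i]), default=None)
    if best is not None and hamming(desc, descriptors[best]) <= max_dist:
        return best
    return None
```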
  • Patent number: 10366305
    Abstract: To precisely extract a static feature value from consecutive images taken in a dynamic environment crowded with many people, a feature value extraction apparatus includes: a consecutive-image acquisition unit configured to acquire consecutive images that are consecutively taken; a local feature value extraction unit configured to extract a local feature value at each feature point from the consecutive images; a feature value matching unit configured to perform matching between the consecutive input images for the local feature value extracted by the local feature value extraction unit; and an invariant feature value calculation unit configured to acquire, from the local feature values for which matching between a predetermined number of consecutive images has been obtained by the feature value matching unit, an average of the local feature values whose position changes between the consecutive images are equal to or less than a predetermined threshold value as an invariant feature value.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: July 30, 2019
    Assignee: SOINN INC.
    Inventors: Toru Kayanuma, Osamu Hasegawa, Takahiro Terashima
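The invariant-feature computation in the abstract (keep only feature tracks whose position barely changes between consecutive frames, then average their descriptors) can be sketched like this; the data layout and threshold are assumptions for illustration:

```python
import numpy as np

def invariant_features(tracks, pos_thresh=2.0):
    """tracks: one list per tracked feature point, each a sequence of
    (position, descriptor) pairs over consecutive frames. A track whose
    position change between every consecutive frame pair stays within
    pos_thresh is treated as static, and its descriptors are averaged
    into one invariant feature value."""
    invariants = []
    for track in tracks:
        if len(track) < 2:
            continue
        positions = np.array([p for p, _ in track], dtype=float)
        moves = np.linalg.norm(np.diff(positions, axis=0), axis=1)
        if np.all(moves <= pos_thresh):
            invariants.append(np.mean([d for _, d in track], axis=0))
    return invariants
```

Features attached to moving people fail the displacement test and drop out; features on the static scene survive and are averaged.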
  • Patent number: 10366306
    Abstract: This disclosure describes a system for automatically identifying an item from among a variation of items of a same type. For example, an image may be processed and resulting item image information compared with stored item image information to determine a type of item represented in the image. If the matching stored item image information is part of a cluster, the item image information may then be compared with distinctive features associated with stored item image information of the cluster to determine the variation of the item represented in the received image.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: July 30, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Sudarshan Narasimha Raghavan, Xiaofeng Ren, Michel Leonard Goldstein, Ohil K. Manyam
  • Patent number: 10366307
    Abstract: The coarse-to-fine search method includes: a first search step of detecting an object from a first image by means of template matching; and a second search step of setting, as a search range, an area comprising n×m pixels within a second image having resolutions of horizontal n times and vertical m times as compared with the first image, corresponding to a position detected in the first search step, and detecting the object from the second image by means of template matching. During the coarse-to-fine search, data for the second image are rearranged on a work memory prior to the second search step such that data of the n×m pixels collated with the same components of a template are stored in contiguous memory addresses, and the n×m collation operations for the n×m pixels are executed in fewer than n×m calculations by SIMD commands in the second search step.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: July 30, 2019
    Assignee: OMRON Corporation
    Inventor: Yoshinori Konishi
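The two-stage search itself (ignoring the patent's SIMD memory-layout optimization) can be sketched as below; the sum-of-squared-differences score and all names are assumptions:

```python
import numpy as np

def match_score(img, tmpl, y, x):
    # Sum of squared differences between the template and the image patch
    # at (y, x); lower is better.
    patch = img[y:y + tmpl.shape[0], x:x + tmpl.shape[1]]
    return float(((patch - tmpl) ** 2).sum())

def coarse_to_fine(img_hi, tmpl_hi, img_lo, tmpl_lo, n=2, m=2):
    """First search: exhaustive template matching on the coarse image.
    Second search: re-match only inside the n x m high-resolution window
    corresponding to the coarse detection."""
    H, W = img_lo.shape
    h, w = tmpl_lo.shape
    coarse = min(((y, x) for y in range(H - h + 1) for x in range(W - w + 1)),
                 key=lambda p: match_score(img_lo, tmpl_lo, *p))
    y0, x0 = coarse[0] * m, coarse[1] * n
    window = [(y0 + dy, x0 + dx) for dy in range(m) for dx in range(n)]
    return min(window, key=lambda p: match_score(img_hi, tmpl_hi, *p))
```

The patent's contribution sits in the second stage: rearranging those n×m high-resolution candidates contiguously in memory so SIMD instructions can collate them in fewer than n×m scalar operations.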
  • Patent number: 10366308
    Abstract: Enhanced contrast between an object of interest and background surfaces visible in an image is provided using controlled lighting directed at the object. Exploiting the falloff of light intensity with distance, a light source (or multiple light sources), such as an infrared light source, can be positioned near one or more cameras to shine light onto the object while the camera(s) capture images. The captured images can be analyzed to distinguish object pixels from background pixels.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: July 30, 2019
    Assignee: Leap Motion, Inc.
    Inventors: David S. Holz, Hua Yang
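The pixel classification step above can be sketched with a simple brightness threshold (an illustrative reduction; the ratio and the use of the median are assumptions, not Leap Motion's method):

```python
import numpy as np

def object_mask(image, ratio=4.0):
    """Exploit 1/r^2 light falloff: with the light source near the camera,
    a close object reflects far more light than the distant background.
    Label as object every pixel brighter than `ratio` times the median."""
    img = np.asarray(image, dtype=float)
    return img > ratio * np.median(img)
```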