Patents Issued on July 14, 2020
  • Patent number: 10713451
    Abstract: One example of an optical jumper includes an optical cable, a first connector, a second connector, and a tag. The first connector is optically coupled to a first end of the optical cable. The second connector is optically coupled to a second end of the optical cable. The tag is coupled to the first connector and stores data identifying the optical cable, the first connector, and the second connector. The tag is readable by a system with the first connector optically coupled to the system.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: July 14, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kevin Leigh, Paul Rosenberg, John Norton
  • Patent number: 10713452
    Abstract: A method and system can include: a station including a station communication unit, a station control unit, and a station storage unit; receiving transmissions of signals containing messages from beacons; detecting IDs from the messages; detecting a received strength of the signals; adding the IDs to a list; identifying one of the IDs as corresponding to an active user based on the list only having a single one of the IDs or based on a probability of the IDs being above a threshold, the active user being a user interfacing with the station; and disambiguating the IDs on the list based on the probability of the IDs being below the threshold.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: July 14, 2020
    Assignee: WashSense, Inc.
    Inventor: Andrew Felch
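    A minimal Python sketch of the ID-disambiguation idea in the abstract above (US 10,713,452); the RSSI-to-probability weighting, the 0.8 threshold, and the sample readings are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of the beacon-ID disambiguation described in US 10,713,452.
# The signal-strength-to-probability mapping and the 0.8 threshold are assumptions.

from collections import defaultdict

def identify_active_user(readings, threshold=0.8):
    """readings: list of (beacon_id, rssi_dbm) tuples received at the station."""
    strength = defaultdict(float)
    for beacon_id, rssi in readings:
        # Convert dBm to a positive linear weight so stronger signals dominate.
        strength[beacon_id] += 10 ** (rssi / 10.0)

    ids = list(strength)
    if len(ids) == 1:                      # list only has a single ID
        return ids[0]

    total = sum(strength.values())
    probabilities = {i: s / total for i, s in strength.items()}
    best_id, best_p = max(probabilities.items(), key=lambda kv: kv[1])
    if best_p >= threshold:                # probability above the threshold
        return best_id
    return None                            # ambiguous: keep disambiguating

print(identify_active_user([("tag-A", -45), ("tag-B", -78), ("tag-A", -44)]))
```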
  • Patent number: 10713453
    Abstract: A Radio Frequency Identification (RFID) system including an RFID reader and a reader proxy authenticates itself to a verification authority. The proxy receives a proxy challenge from a verification authority and determines a proxy response based on the proxy challenge and a proxy key known to the proxy. The proxy response is then sent to the verification authority along with an identifier for the reader. The reader then authenticates an RFID tag by sending a tag response to the verification authority, which determines whether the reader is authentic based on the authenticity of the proxy response.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: July 14, 2020
    Assignee: Impinj, Inc.
    Inventors: Christopher J. Diorio, Scott A. Cooper, Matthew Robshaw
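    A hedged sketch of the proxy challenge/response flow described above (US 10,713,453); HMAC-SHA256 stands in for the patent's unspecified keyed function, and the key registry and reader ID are made-up placeholders.

```python
# Illustrative challenge/response flow loosely modeled on the abstract of
# US 10,713,453. HMAC-SHA256, the key registry, and the reader ID are stand-ins.

import hashlib
import hmac
import os

# Assumed registry mapping reader identifiers to the proxy key the verifier holds.
PROXY_KEYS = {"reader-0042": b"proxy-secret-known-to-verifier"}

def proxy_response(proxy_key: bytes, challenge: bytes) -> bytes:
    """Proxy derives its response from the verifier's challenge and its proxy key."""
    return hmac.new(proxy_key, challenge, hashlib.sha256).digest()

def verifier_check(reader_id: str, challenge: bytes, response: bytes) -> bool:
    """Verification authority recomputes the expected response for the proxy
    associated with this reader and compares it in constant time."""
    key = PROXY_KEYS.get(reader_id)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                               # proxy challenge from the verifier
resp = proxy_response(PROXY_KEYS["reader-0042"], challenge)
print(verifier_check("reader-0042", challenge, resp))    # True -> reader deemed authentic
```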
  • Patent number: 10713454
    Abstract: A system for monitoring the state of a screen basket of a screen for treating a fibrous suspension includes a cable-free identification unit assigned to the screen basket and disposed in a housing of the screen. An external, in particular mobile, reading unit is provided for the non-contact reading of technical data relating to the screen basket from the identification unit and for producing a connection to a further external database containing data relating to the screen.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: July 14, 2020
    Assignee: Voith Patent GmbH
    Inventors: Christian Gommeringer, Samee Faraji
  • Patent number: 10713455
    Abstract: Embodiments include a method and associated point-of-sale (POS) terminals. The method comprises receiving a receptacle into a stationary position within a scan zone arranged proximately to a surface. The receptacle contains one or more items. The method further comprises acquiring, using a first visual sensor having a predefined disposition relative to the surface, first image information that includes a first view and at least a second view of the scan zone. The second view is provided via a first mirror of one or more mirrors disposed near the surface and arranged around the scan zone. The second view includes a view of the one or more items relative to a surface of the receptacle. The method further comprises identifying, using image analysis of the first image information, the one or more items.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: July 14, 2020
    Assignee: Toshiba Global Commerce Solutions
    Inventors: Chih-Huang Wang, Yi-Sheng Lee, Wei-Yi Hsuan, Te-Chia Tsai
  • Patent number: 10713456
    Abstract: Signal detection and recognition employs coordinated illumination and capture of images to facilitate extraction of a signal of interest. Pulsed illumination of different colors facilitates extraction of signals from color channels, as well as an improved signal-to-noise ratio by combining signals of different color channels. The successive pulsing of different color illumination appears white to the user, yet facilitates signal detection, even for lower-cost monochrome sensors, as in barcode scanning and other automatic identification equipment.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: July 14, 2020
    Assignee: Digimarc Corporation
    Inventors: Jacob L. Boles, Alastair M. Reed, John D. Lord
  • Patent number: 10713457
    Abstract: Techniques for generating and processing two-dimensional barcodes are described. One example method includes identifying original content to be encoded in a two-dimensional (2D) barcode structure; and generating a 2D barcode associated with the original content based on at least the 2D barcode structure and the original content, wherein the 2D barcode structure includes at least an identification field and a data field, and the identification field indicates one or more data elements in the data field.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: July 14, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Xi Sun, Hongwei Luo
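    A rough Python illustration of the two-field payload layout suggested by the abstract above (US 10,713,457); the field identifiers, byte layout, and JSON data encoding are invented for the sketch and do not come from the patent.

```python
# Sketch of the "identification field + data field" barcode payload structure
# suggested by US 10,713,457. The byte layout and field IDs here are invented.

import json

FIELD_IDS = {"url": 0x01, "merchant_id": 0x02, "amount_cents": 0x03}

def build_payload(elements: dict) -> bytes:
    ident = bytes(FIELD_IDS[name] for name in elements)           # identification field
    data = json.dumps(elements, separators=(",", ":")).encode()   # data field
    return bytes([len(ident)]) + ident + data

def parse_payload(payload: bytes) -> dict:
    n = payload[0]
    declared = {fid for fid in payload[1:1 + n]}
    elements = json.loads(payload[1 + n:])
    assert {FIELD_IDS[k] for k in elements} == declared, "fields don't match header"
    return elements

blob = build_payload({"url": "https://example.com/pay", "amount_cents": 1250})
print(parse_payload(blob))
```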
  • Patent number: 10713458
    Abstract: A bio-sensor device, integrated with a display portion, includes a surface for touching by a body part, such as a finger. A light source, such as an array of LEDs, emits light through the surface so as to be reflected and partially absorbed by the body part. An array of photodetectors detects light reflected back by the body part and generates signals corresponding to an image of the light reflection, which corresponds to the light absorption pattern in the body part. The light absorption pattern may correlate to a fingerprint, a blood vessel pattern, blood movement within the blood vessels, combinations thereof, or other biometric feature. A processor receives the signals from the photodetectors and analyzes the signals to determine a characteristic of the body part. The characteristic may be used to authenticate the user of the bio-sensor device by comparing the detected characteristic to a stored characteristic.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: July 14, 2020
    Assignee: InSyte Systems
    Inventors: Jerome Chandra Bhat, Richard Ian Olsen
  • Patent number: 10713459
    Abstract: Disclosed is a display device comprising an electroluminescence display panel including a display area configured to output sound and to recognize a fingerprint, an ultrasonic fingerprint sensor disposed at a first area of the electroluminescence display panel, and a film-type speaker disposed at a second area of the electroluminescence display panel.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: July 14, 2020
    Assignee: LG Display Co., Ltd.
    Inventors: NamYong Gong, JinYeol Kim, SungPil Choi, YoungSoo Lee
  • Patent number: 10713460
    Abstract: A display panel, a method for manufacturing the display panel and a display device are provided. The display panel includes a substrate; a driving circuit on the substrate; an encapsulation film covering the driving circuit and the substrate; and a fingerprint recognition structure and a detection circuit on the encapsulation film, wherein the fingerprint recognition structure includes scan lines in a row direction, detection lines extending in a column direction, and fingerprint recognition circuits, and the scan lines intersect the detection lines to define fingerprint recognition regions; wherein each fingerprint recognition circuit is in one fingerprint recognition region; the driving circuit is connected to the scan lines; the detection circuit is connected to the detection lines and is configured to recognize fingerprints according to electrical signals from the detection lines.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: July 14, 2020
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., ORDOS YUANSHENG OPTOELECTRONICS CO., LTD.
    Inventors: Hao Zhang, Yanliu Sun
  • Patent number: 10713461
    Abstract: A sensor assembly includes a flexible substrate with conductive traces formed on opposed sides of the substrate and oriented transversely to each other. The substrate is wrapped around a core so that the traces formed on opposed sides of a first part of the substrate form a first sensor surface on one surface of the core, and the traces formed on opposed sides of a second part of the substrate form a second sensor surface on an opposed surface of the core. The core may comprise an encapsulant overmolded onto the conductive traces on a surface of the first part of the substrate, and the second part of the substrate is folded over the encapsulant. The sensor assembly may include an integrated circuit disposed on the flexible substrate, wherein one or more of the conductive traces are electrically connected to each integrated circuit.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: July 14, 2020
    Assignee: IDEX Biometrics ASA
    Inventors: Fred G. Benkley, III, David N. Light
  • Patent number: 10713462
    Abstract: A fingerprint detection device includes: a substrate having a first surface and a second surface on an opposite side of the first surface, the first surface serving as a detection surface configured to detect unevenness of an object in contact or in proximity; a detection electrode provided on the second surface side of the substrate and configured to detect unevenness of a finger in contact or in proximity on the basis of an electrostatic capacitance change; and a drive circuit provided on the second surface side of the substrate and configured to supply a drive signal to the detection electrode.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: July 14, 2020
    Assignee: Japan Display Inc.
    Inventors: Hayato Kurasawa, Toshinori Uehara, Hiroshi Mizuhashi
  • Patent number: 10713463
    Abstract: A display method of user interface and an electronic apparatus using the same are provided. The display method of user interface is applied to fingerprint registration, and includes: sensing an object and obtaining a swiping image of the object; analyzing the swiping image to obtain a plurality of feature points of the swiping image; generating a pre-registration dataset according to the feature points, and analyzing the pre-registration dataset to obtain an image adjusting parameter; and displaying a user interface, and adjusting a range of a filled region of a reference image on the user interface according to the image adjusting parameter. Therefore, the user learns real-time information about the fingerprint registration progress when the user performs fingerprint registration in a swiping manner.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: July 14, 2020
    Assignee: Egis Technology Inc.
    Inventors: Yuan-Lin Chiang, Jun-Chao Lu, Hsien-Jen Hsu
  • Patent number: 10713464
    Abstract: A fingerprint identification substrate and a fabrication method thereof, a display panel and a display apparatus are provided. The fingerprint identification substrate includes a base substrate and a plurality of fingerprint identification units on the base substrate. Each fingerprint identification unit includes a photosensitive sense electrode and a thin film transistor which are positioned on the base substrate, the photosensitive sense electrode is positioned between the base substrate and the thin film transistor, and the photosensitive sense electrode is electrically connected with a source electrode or a drain electrode of the thin film transistor.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: July 14, 2020
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yanling Han, Xue Dong, Jing LV, Haisheng Wang, Chun Wei Wu, Xiaoliang Ding, Yingming Liu, Pengpeng Wang, Wei Liu, Xueyou Cao, Yanan Jia, Lijun Zhao, Changfeng Li, Rui Xu, Yuzhen Guo
  • Patent number: 10713465
    Abstract: An image capture apparatus including a light guide element, an image capture device and a light emitting device. The light guide element has a first side, a second side and a light emitting portion located at the second side. The light emitting portion includes a plurality of enhanced transmission microstructures. The image capture device is disposed on the second side of the light guide element corresponding to the position of the enhanced transmission microstructures. A light beam, which is generated by the light emitting device and transmitted at least by the light guide element, is totally reflected to form a signal light beam. Thereafter, the signal light beam passes through the enhanced transmission microstructures and then is received by the image capture device.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: July 14, 2020
    Assignee: Gingy Technology Inc.
    Inventor: Cheng-Jyun Huang
  • Patent number: 10713466
    Abstract: A fingerprint recognition method adapted to an electronic device is provided. The electronic device includes a processing unit and a fingerprint sensor. The fingerprint recognition method includes steps of: obtaining a plurality of swiping frames; extracting a plurality of feature points respectively from the plurality of swiping frames to generate a plurality of pre-registered fingerprint datasets accordingly; merging the plurality of pre-registered fingerprint datasets; generating a registration template according to the merged pre-registered fingerprint datasets; obtaining a pressing frame; extracting a plurality of feature points from the pressing frame to generate a verifying fingerprint dataset; and comparing the verifying fingerprint dataset with the registration template, so as to determine whether the verifying fingerprint dataset matches the registration template. The above electronic device is also provided.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: July 14, 2020
    Assignee: Egis Technology Inc.
    Inventors: Yuan-Lin Chiang, Jun-Chao Lu, Yu-Chun Cheng
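    A deliberately simplified sketch of the enroll-then-verify flow described above (US 10,713,466); feature points are plain (x, y) tuples and the tolerance/ratio matching rule is a placeholder, not the patent's method.

```python
# Highly simplified sketch of the swipe-enrollment and press-verification flow
# in US 10,713,466. "Feature points" are plain (x, y) tuples; the matching rule
# (fraction of points within 3 px) is an invented placeholder.

def merge_pre_registered(datasets):
    """Merge feature points from several swiping frames into one registration template."""
    template = set()
    for points in datasets:
        template.update(points)
    return template

def matches(template, verify_points, tol=3, min_ratio=0.6):
    def near(p):
        return any(abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol for q in template)
    hits = sum(1 for p in verify_points if near(p))
    return hits / max(len(verify_points), 1) >= min_ratio

swipes = [{(10, 12), (40, 55)}, {(41, 55), (70, 90)}]          # enrollment (swiping) frames
registration_template = merge_pre_registered(swipes)
pressing_frame_points = {(11, 13), (40, 54), (69, 91)}          # verification (pressing) frame
print(matches(registration_template, pressing_frame_points))    # True
```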
  • Patent number: 10713467
    Abstract: Embodiments of the present disclosure provide an optical fingerprint verification method and a mobile terminal. The method may include: controlling the optical sensor to detect an external ambient light intensity when the mobile terminal acquires a fingerprint collecting instruction; controlling the optical fingerprint identification component to collect fingerprint data; and determining, via the AP, whether the fingerprint data matches a set of target fingerprint template data corresponding to the external ambient light intensity, and when the fingerprint data matches the set of target fingerprint template data, determining that a fingerprint verification is passed.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: July 14, 2020
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Yibao Zhou
  • Patent number: 10713468
    Abstract: A method, system, and computer program product is disclosed for checking credentials, using a drone. The drone detects a line of people, and the drone can communicate with the base station. The drone can request information from a person on the line for checking credentials of the people in the line with respect to a purpose for forming the line. The method and system captures information about the person, in response to the drone requesting information from the person, and the person providing the requested information. The method and system checks the information with data stored at the base station to verify the person's information with respect to required credentials being related to the purpose of the line. A message is communicated, using the drone, to the person on the line, in response to the checking of the information of the person.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: July 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Jeremy R. Fox, Gregory J. Boss, Christian B. Compton, Andrew R. Jones, John E. Moore, Jr.
  • Patent number: 10713469
    Abstract: Disclosed in some examples are methods, systems, computing devices, and machine readable mediums that provide for cropping systems that automatically crop digital images using one or more smart cropping techniques. Smart cropping techniques may include: cropping an image based upon emotion detection, cropping based upon facial recognition and matching, and cropping based upon landmark matching. In some examples, a single smart cropping technique may be utilized. In other examples, a combination of the smart cropping techniques may be utilized.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Benjamin Lee, Erez Kikin Gil
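    One of the listed techniques, face-based cropping, is sketched below with OpenCV's stock Haar cascade as a stand-in detector (US 10,713,469); the padding ratio and the crop-to-union-of-faces policy are assumptions.

```python
# Sketch of facial-recognition-based smart cropping in the spirit of
# US 10,713,469, using OpenCV's stock Haar cascade as a stand-in detector.
# The 20% padding and "crop to the union of faces" policy are assumptions.

import cv2

def smart_crop_faces(image_path: str, pad: float = 0.2):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return img                              # nothing detected: keep the original

    # Bounding box around all detected faces, expanded by `pad` on each side.
    x0 = min(x for x, y, w, h in faces)
    y0 = min(y for x, y, w, h in faces)
    x1 = max(x + w for x, y, w, h in faces)
    y1 = max(y + h for x, y, w, h in faces)
    dx, dy = int((x1 - x0) * pad), int((y1 - y0) * pad)
    h_img, w_img = img.shape[:2]
    return img[max(0, y0 - dy):min(h_img, y1 + dy),
               max(0, x0 - dx):min(w_img, x1 + dx)]

# cv2.imwrite("cropped.jpg", smart_crop_faces("photo.jpg"))
```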
  • Patent number: 10713470
    Abstract: The present disclosure provides a method and an apparatus of determining an image background, a device and a medium. The method includes: recognizing a face region in an image, and obtaining a face distance based on the face region; obtaining a face distance parameter of each pixel in the image based on the face distance; processing the face distance parameter and corresponding color parameter of each pixel in the image by applying a pre-trained image region segmentation model to determine an image region type corresponding to each pixel; determining a background region of the image based on the image region type corresponding to each pixel and performing preset background image processing on the background region.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: July 14, 2020
    Assignee: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE CO., LTD.
    Inventor: Hanwen Chang
  • Patent number: 10713471
    Abstract: A system and method for simulating facial expression of a virtual facial model are provided. The system stores a plurality of three-dimensional facial models corresponding to a plurality of preset sentiments one-to-one. The system identifies a present sentiment according to an acoustic signal and selects a selected model from the three-dimensional facial models according to the present sentiment, wherein the preset sentiment corresponding to the selected model is same as the present sentiment. The system predicts an upper half face image according to a lower half face image, combines the lower half face image and the upper half face image to form a whole face image, and generates a plurality of feature relationships by matching the facial features of the whole face image with the facial features of the selected model so that a virtual facial model can simulate an expression based on the feature relationships.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: July 14, 2020
    Assignee: Institute For Information Industry
    Inventors: Rong-Sheng Wang, Wan-Chi Ho, Hsiao-Chen Chang
  • Patent number: 10713472
    Abstract: A first face region within a first image is determined. The first face region includes a location of a face within the first image. Based on the determined first face region within the first image, a predicted face region within a second image is determined. A first region of similarity within the predicted face region is determined. The first region of similarity has at least a predetermined degree of similarity to the first face region within the first image. Whether a second face region is present within the second image is determined. The location of the face within the second image is determined based on the first region of similarity, the determination of whether the second face region is present within the second image, and a face region selection rule.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: July 14, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Nan Wang, Zhijun Du, Yu Zhang
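    A sketch of the frame-to-frame face localization described above (US 10,713,472), using OpenCV template matching and a fixed score cutoff as stand-ins for the patent's similarity measure and face region selection rule.

```python
# Sketch of the frame-to-frame face localization in US 10,713,472: search a
# predicted region of the second image for the patch most similar to the first
# face region. Template matching and the 0.7 cutoff are stand-in choices.

import cv2

def locate_face_in_next_frame(frame1, frame2, face_box, margin=40, min_score=0.7):
    x, y, w, h = face_box                       # first face region in frame1
    template = frame1[y:y + h, x:x + w]

    # Predicted face region: the same box grown by `margin` pixels in frame2.
    H, W = frame2.shape[:2]
    px0, py0 = max(0, x - margin), max(0, y - margin)
    px1, py1 = min(W, x + w + margin), min(H, y + h + margin)
    search = frame2[py0:py1, px0:px1]

    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (bx, by) = cv2.minMaxLoc(scores)
    if best < min_score:                        # selection rule: reject weak matches
        return None
    return (px0 + bx, py0 + by, w, h)           # face location within the second image
```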
  • Patent number: 10713474
    Abstract: Methods, systems, and devices are described for wireless communications. An apparatus may identify a living person by recording a heat image of a person's facial area and detecting a local heat inhomogeneity in a predetermined detection range of the heat image in order to identify the living person. Identifying the living person may include detecting a heat pattern in a predetermined detection range and comparing the detected heat pattern to a heat reference sample. The predetermined detection range may be detected based on a heat image geometry, using pattern matching, by comparing the heat image to a white light image of a living person, or by masking the heat image.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: July 14, 2020
    Assignee: Bundesdruckerei GmbH
    Inventors: Andreas Wolf, Manfred Paeschke
  • Patent number: 10713475
    Abstract: A single network encodes and decodes an image captured using a camera on a device. The single network detects if a face is in the image. If a face is detected in the image, the single network determines properties of the face in the image and outputs the properties along with the face detection output. Properties of the face may be determined by sharing the task for face detection. Properties of the face that are output along with the face detection output include the location of the face, the pose of the face, and/or the distance of the face from the camera.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: July 14, 2020
    Assignee: Apple Inc.
    Inventors: Thorsten Gernoth, Atulit Kumar, Ian R. Fasel, Haitao Guo, Onur C. Hamsici
  • Patent number: 10713476
    Abstract: The present invention provides for high throughput passenger identification in portal security. A method for high throughput passenger identification includes receiving in memory of a host computing system from an image acquisition device a contemporaneously acquired image of a group of individuals approaching a portal passageway and identifying a set of faces of the group. The method yet further includes querying a database of faces with each identified face in the set and for each face assigning a confidence value of having matched the face to a record of a known person in the database. Finally, the method includes visually decorating each face in the contemporaneously acquired image with an initial visual characteristic on condition that a correspondingly assigned confidence value falls short of a threshold, but otherwise with a different visual characteristic, and displaying the contemporaneously acquired image in a display of the host computing system.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: July 14, 2020
    Assignee: Royal Caribbean Cruises Ltd.
    Inventors: Richard Fain, Jay Schneider, Joey Hasty, David Smith, Joshua T. Nakaya, Laura Barnes
  • Patent number: 10713477
    Abstract: A determination result is easily obtained even in expression determination on a face image that is not a front view. A robot includes a camera, a face detector, a face angle estimator, and an expression determiner. The camera acquires image data. The face detector detects a face of a person from the image data acquired by the camera. The face angle estimator estimates an angle of the face detected by the face detector. The expression determiner determines an expression of the face based on the angle estimated by the face angle estimator.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: July 14, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventors: Kouichi Nakagome, Keisuke Shimada
  • Patent number: 10713478
    Abstract: The present invention includes: a three-dimensional moving image generation module for generating a moving image of an experiment target person; an object extraction module for extracting the experiment target person, separately from a background, from the moving image; an object definition module for defining an object by measuring a length, a size and a weight center of the object and extracting a depth image of the object; a behaviour pattern definition module for defining a basic behaviour pattern of the object by cumulatively analyzing a movement speed and movement time of the weight center of the corresponding object, and changes in the extracted depth data when the object defined by the object definition module is extracted by the object extraction module; and a behaviour pattern analysis module for analyzing and identifying a lasting time and a frequency of the basic behaviour pattern with respect to the object.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: July 14, 2020
    Assignees: KOREA INSTITUTE OF INDUSTRIAL TECHNOLOGY, KOREA RESEARCH INSTITUTE OF BIOSCIENCE AND BIOTECHNOLOGY
    Inventors: Sang Kuy Han, Keyoung Jin Chun, Kyu Tae Chang, Young Jeon Lee, Yeung Bae Jin, Kang jin Jeong, Phil Yong Kang, Jung Joo Hong, Sang Rae Lee
  • Patent number: 10713479
    Abstract: The present invention relates to a motion recognition method and a motion recognition device, and provides a method for recognizing an actual motion of a user by acquiring information about the user's motion and performing dynamic time-warping between that information and preset comparison target information. Accordingly, the motion of the user can be accurately and rapidly recognized.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: July 14, 2020
    Assignee: STARSHIP VENDING-MACHINE CORP.
    Inventor: Ji-yong Kwon
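    A minimal dynamic-time-warping sketch matching the approach named in the abstract above (US 10,713,479); one-dimensional toy sequences stand in for real motion data.

```python
# Minimal dynamic-time-warping sketch for the motion matching described in
# US 10,713,479. Motions are 1-D toy sequences here (e.g. a joint angle over
# time); a real device would use multi-dimensional pose data.

def dtw_distance(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def recognize(observed, templates):
    """Return the preset motion whose template warps most cheaply onto the input."""
    return min(templates, key=lambda name: dtw_distance(observed, templates[name]))

templates = {"wave": [0, 1, 2, 1, 0, 1, 2, 1, 0], "push": [0, 1, 2, 3, 4, 4, 4]}
print(recognize([0, 1, 1, 2, 1, 0, 1, 2, 1, 0], templates))   # -> "wave"
```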
  • Patent number: 10713480
    Abstract: According to an illustrative embodiment, an information processing device is provided. The information processing device includes an image acquisition unit configured to receive an image; a recognition unit configured to acquire a recognition result of a user based on the received image, wherein the recognition result includes a position of the user, the user being associated with a display terminal; an image determination unit configured to determine an object based on the recognition result; and a display control unit configured to control display of the object on the display terminal.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: July 14, 2020
    Assignee: SONY CORPORATION
    Inventors: Ryo Fukazawa, Shunichi Kasahara, Osamu Shigeta, Seiji Suzuki, Maki Mori
  • Patent number: 10713481
    Abstract: A system for giving meaning to data in a non-standardized digital document. In some embodiments, the system includes a web portal, a recognition server and an extraction system. The web portal is accessible via a network for receiving a non-standardized digital source document. The recognition server is configured to perform optical character recognition analysis on the non-standardized digital source document and generates document recognition data including positional locations of a plurality of characters in the non-standardized digital source document. The extraction system is configured to identify labels and corresponding values represented in the non-standardized digital source document and automatically maps the labels to a plurality of predetermined variables in an external software system to which the values from the non-standardized digital source document are to be imported.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: July 14, 2020
    Assignee: CROWE HORWATH LLP
    Inventor: Jeffrey R. Schmidt
  • Patent number: 10713482
    Abstract: A computer-implemented method and system for identifying terms in a document in electronic form which includes: obtaining a document having title cased terms and defined terms; determining the location of each title cased term; accessing a library; comparing the title cased terms to the library of predetermined terms, wherein the title cased term is classified by a first predetermined identifier if the title cased term is not in the library, and wherein the title cased term is not classified by the first predetermined identifier if the title cased term is in the library; and determining each title cased term which is a defined term and a location and frequency of each defined term, wherein each defined term having a frequency value greater than one is reclassified by a second predetermined identifier and wherein each defined term having a frequency value of one is reclassified by a third predetermined identifier.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: July 14, 2020
    Assignee: Celant Innovations, LLC
    Inventor: Jason Yoon-Ho Lee
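    A toy Python sketch of the title-cased/defined-term classification described above (US 10,713,482); the regex, the tiny library, and the identifier names are illustrative, and treating library membership as "defined" is a simplification of the patent's method.

```python
# Toy sketch of the classification in US 10,713,482. The regex, the library
# contents, and the UNKNOWN/RECURRING/SINGLE_USE labels are illustrative only;
# treating library membership as "defined" is a simplification.

import re
from collections import Counter

LIBRARY = {"Agreement", "Company", "Vendor"}    # library of predetermined terms

def classify_terms(document: str):
    terms = re.findall(r"\b[A-Z][a-z]+\b", document)   # crude title-cased-term finder
    counts = Counter(terms)
    result = {}
    for term, freq in counts.items():
        if term not in LIBRARY:
            result[term] = "UNKNOWN"            # first predetermined identifier
        elif freq > 1:
            result[term] = "RECURRING"          # second identifier: defined, frequency > 1
        else:
            result[term] = "SINGLE_USE"         # third identifier: defined, frequency == 1
    return result

text = ("The Purchaser shall pay the Company under this Agreement. "
        "The Agreement binds the Company and the Vendor.")
print(classify_terms(text))
```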
  • Patent number: 10713483
    Abstract: A digital imaging system processes digital images of a subject's fundus and/or pupils to determine a pupil edge. Two regions of a digital image are evaluated to determine a threshold value. Typically, the two regions are selected such that each region would usually not include artifacts. The threshold value can then be used to identify a pupil-iris threshold in the digital image. Based on the pupil-iris threshold, pupil edges are identified.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: July 14, 2020
    Assignee: Welch Allyn, Inc.
    Inventors: William Niall Creedon, Eric Joseph Laurin, Richard Allen Mowrey
  • Patent number: 10713484
    Abstract: A farming machine including a number of treatment mechanisms treats plants according to a treatment plan as the farming machine moves through the field. The control system of the farming machine executes a plant identification model configured to identify plants in the field for treatment. The control system generates a treatment map identifying which treatment mechanisms to actuate to treat the plants in the field. To generate a treatment map, the farming machine captures an image of plants, processes the image to identify plants, and generates a treatment map. The plant identification model can be a convolutional neural network having an input layer, an identification layer, and an output layer. The input layer has the dimensionality of the image, the identification layer has a greatly reduced dimensionality, and the output layer has the dimensionality of the treatment mechanisms.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: July 14, 2020
    Assignee: Blue River Technology Inc.
    Inventors: Andrei Polzounov, James Patrick Ostrowski, Lee Kamp Redden, Olgert Denas, Chia-Chun Fu, Chris Padwick
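    A small sketch of turning a plant-identification result into a treatment map, as outlined above (US 10,713,484); a boolean mask stands in for the network's output, and the eight-nozzle layout is assumed.

```python
# Sketch of reducing a plant-identification result to a treatment map in the
# spirit of US 10,713,484. A boolean plant mask stands in for the network's
# output; the eight treatment mechanisms (spray nozzles) are assumed.

import numpy as np

def treatment_map(plant_mask: np.ndarray, n_mechanisms: int = 8) -> np.ndarray:
    """Reduce an (H, W) boolean mask to one on/off flag per treatment mechanism.

    Each mechanism covers a vertical strip of the image and is actuated if any
    plant pixel falls inside its strip.
    """
    strips = np.array_split(plant_mask, n_mechanisms, axis=1)
    return np.array([strip.any() for strip in strips])

mask = np.zeros((64, 128), dtype=bool)
mask[30:40, 5:12] = True          # a plant on the far left
mask[10:20, 100:110] = True       # a plant near the right edge
print(treatment_map(mask).astype(int))   # -> [1 0 0 0 0 0 1 0]
```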
  • Patent number: 10713485
    Abstract: One embodiment provides a method, including: capturing at least one image of an object that is of interest to a user; identifying and capturing an environmental context of the object, wherein the environmental context (i) identifies a plurality of features of the environment surrounding the object, and (ii) comprises context captured from different modalities; storing the at least one image and the environmental context of the object, wherein the storing comprises indexing the object within the remote storage location using the identified features of the environment; receiving a request for the at least one image of the object; accessing the remote storage location and retrieving the at least one image of the object, wherein the retrieving comprises (i) searching for the at least one of the plurality of features and (ii) retrieving the at least one image of an object; and displaying the at least one image.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: July 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vijay Ekambaram, Shivkumar Kalyanaraman, Anirban Laha
  • Patent number: 10713486
    Abstract: A failure diagnosis support system includes first image acquisition means mounted on a robot for acquiring an image of the robot; and control means for controlling position and orientation of the first image acquisition means. The control means controls the position and orientation of the first image acquisition means at a predetermined timing so that the first image acquisition means faces a predetermined part of the robot. The first image acquisition means acquires an image of the predetermined part at the position and orientation controlled by the control means.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: July 14, 2020
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kazuto Murase, Yuka Hashiguchi
  • Patent number: 10713487
    Abstract: Disclosed is an object determining system comprising an optical sensor, a kind determining circuit and an element analyzing circuit. The optical sensor comprises a kind determining region and an element analyzing region, wherein the optical sensor captures at least one object image of an object via the kind determining region, and acquires element analyzing optical data via the element analyzing region. The kind determining circuit is configured to determine an object kind of the object according to the object image. The element analyzing circuit is configured to analyze element of the object according to the element analyzing optical data and the object kind. An object determining system applying two-stage object sensing steps to determine an object kind is also disclosed.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: July 14, 2020
    Assignee: PixArt Imaging Inc.
    Inventor: Guo-Zhen Wang
  • Patent number: 10713488
    Abstract: An image acquisition unit (2020) acquires a captured image containing an inspection target instrument. An inspection information acquisition unit (2040) acquires inspection information regarding the instrument contained in the captured image. The inspection information is information indicating an inspection item of the instrument. A first display control unit (2060) displays an indication representing an inspection spot corresponding to the inspection item indicated by the inspection information on the display device (10). For example, the first display control unit (2060) displays the indication representing the inspection spot so that the indication is superimposed on the inspection spot on a display device (10). For example, the first display control unit (2060) displays the indication in the inspection spot on the display device (10) or near the instrument.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: July 14, 2020
    Assignee: NEC CORPORATION
    Inventors: Yoshinori Saida, Shin Norieda, Makoto Yoshimoto, Kota Iwamoto, Takami Sato, Ruihan Bao
  • Patent number: 10713489
    Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for identifying and presenting information of a target user are presented. A live video stream comprising a target user may be displayed on a display of an augmented reality computing device. Data associated with one or more images of the target user may be sent to a facial recognition service, which may determine that a social network account matches the target user based on facial feature recognition. Information associated with the matched social network account may be received, and the live video stream on the augmented reality computing device may be augmented with a display of the received information associated with the matched social network account. In some examples, the augmented reality computing device may be augmented with one or more social network actions that are executable based on matching a target user to a social network account.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sandeep Ravi, Jorge Erick Santoyo Garduno, Sreevani Tippana
  • Patent number: 10713490
    Abstract: A system and method for monitoring vehicle traffic and collecting data indicative of pedestrian right of way violations by vehicles is provided. The system comprises memory and logic for monitoring traffic intersections and recording evidence indicating that vehicles have violated pedestrian right of way. Two sensor modalities collecting video data and radar data of the intersection under observation are employed in one embodiment of the system. The violation evidence can be accessed remotely by a traffic official for issuing of traffic citations.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: July 14, 2020
    Assignee: Polaris Sensor Technologies, Inc.
    Inventors: Richard P. Edmondson, Jonathan B. Hanks
  • Patent number: 10713491
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing object detection. In one aspect, a method includes receiving multiple video frames. The video frames are sequentially processed using an object detection neural network to generate an object detection output for each video frame. The object detection neural network includes a convolutional neural network layer and a recurrent neural network layer. For each video frame after an initial video frame, processing the video frame using the object detection neural network includes generating a spatial feature map for the video frame using the convolutional neural network layer and generating a spatio-temporal feature map for the video frame using the recurrent neural network layer.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: Menglong Zhu, Mason Liu
  • Patent number: 10713492
    Abstract: A device may receive one or more images captured by an image capture system. The one or more images may depict one or more objects. The device may process the one or more images using one or more image processing techniques. The device may identify the one or more objects based on processing the one or more images. The device may identify a context of the one or more images based on the one or more objects depicted in the one or more images. The device may determine whether the one or more objects contribute to a value of one or more metrics associated with the context. The device may perform an action based on the value of the one or more metrics.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: July 14, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Uvaraj Balasundaram, Kamal Mannar, Andrew K. Musselman, Devanandan Subbarayalu
  • Patent number: 10713493
    Abstract: This disclosure includes technologies for video recognition in general. The disclosed system can automatically detect various types of actions in a video, including reportable actions that cause shrinkage in a practical application for loss prevention in the retail industry. The temporal evolution of spatio-temporal features in the video are used for action recognition. Such features may be learned via a 4D convolutional operation, which is adapted to model low-level features based on a residual 4D block. Further, appropriate responses may be invoked if a reportable action is recognized.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: July 14, 2020
    Assignee: SHENZHEN MALONG TECHNOLOGIES CO., LTD.
    Inventors: Weilin Huang, Shiwen Zhang, Sheng Guo, Limin Wang, Matthew Robert Scott
  • Patent number: 10713494
    Abstract: In various embodiments, a Data Processing System for Generating Interactive User Interfaces and Interactive Game Systems Based on Spatiotemporal Analysis of Video Content may be configured to: (1) enable a user to select one or more players participating in a substantially live (e.g., live) sporting or other event; (2) determine scoring data for each of the one or more selected players during the sporting or other event; (3) track the determined scoring data; (4) generate a custom (e.g., to the user) user interface that includes the scoring data; and (5) display the custom user interface over at least a portion of a display screen (e.g., on a mobile computing device) displaying one or more video feeds of the sporting or other event. In this way, the system may be configured to convert a video feed of a sporting event into an interactive game.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: July 14, 2020
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
  • Patent number: 10713495
    Abstract: Techniques are disclosed for identifying a video using a video signature generated using image features derived from a portion of the video. In some examples, a method may include determining image features derived from a portion of a video, determining a video frame sequence of the video, and generating the video signature of the video based on the image features and the video frame sequence. The method may further include deriving a curve for the video based on the image features and the video frame sequence, and comparing the derived curve with one or more curves corresponding to respective one or more reference videos.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: July 14, 2020
    Assignee: Adobe Inc.
    Inventors: Kevin Gary Smith, William Brandon George
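    A sketch of the curve-style video signature described above (US 10,713,495); mean luminance per frame and normalized correlation are stand-ins for the patent's unspecified image features and comparison method.

```python
# Sketch of a curve-based video signature in the spirit of US 10,713,495:
# derive one scalar feature per frame (mean luminance, as a stand-in), treat
# the sequence as a curve, and compare curves by normalized correlation.

import numpy as np

def signature_curve(frames):
    """frames: iterable of (H, W) or (H, W, 3) numpy arrays."""
    return np.array([float(np.mean(f)) for f in frames])

def curve_similarity(curve_a, curve_b):
    n = min(len(curve_a), len(curve_b))
    a = curve_a[:n] - curve_a[:n].mean()
    b = curve_b[:n] - curve_b[:n].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

rng = np.random.default_rng(0)
video = [rng.integers(0, 255, (4, 4)) + 10 * t for t in range(30)]   # toy "video"
reference = [f + rng.normal(0, 1, f.shape) for f in video]           # noisy copy of it
print(curve_similarity(signature_curve(video), signature_curve(reference)))  # ~1.0
```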
  • Patent number: 10713496
    Abstract: The present disclosure provides a computer-implemented method and system for hardware, channel, language and ad length agnostic detection of multi-lingual televised advertisements. The detection is performed across live streams of media content of one or more broadcasted channels. The method includes selection of a set of frames per second from a pre-defined set of frames. The method includes extraction of a pre-defined number of keypoints from each selected frame and derivation of a pre-defined number of binary descriptors from the extracted keypoints. The method includes creation of a spatial pyramid of the binary descriptors and accessing a second vocabulary of binary descriptors. The method includes comparison of each spatially identifiable binary descriptor from the first vocabulary with spatially identifiable binary descriptors in clusters of the second vocabulary. The method includes progressively scoring each selected frame and detection of the first ad in the live streams of the media content.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: July 14, 2020
    Assignee: Silveredge Technologies Pvt. Ltd.
    Inventors: Debasish Mitra, Hitesh Chawla
  • Patent number: 10713497
    Abstract: An evidence ecosystem that includes a capture system that detects physical properties in the environment around the capture system and captures data related to the physical properties. The capture system analyzes the captured data in accordance with patterns to detect characteristics and patterns in the captured data. Upon detecting a characteristic or a pattern, the capture system records the identified data and alignment data that identifies the location of the identified data in the captured data. The capture system sends the captured data, identified data, and alignment data to an evidence management system for use in generating reports and producing redacted copies of the captured data for distribution or presentation.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: July 14, 2020
    Assignee: AXON ENTERPRISE, INC.
    Inventors: Marcus William Lee Womack, James Norton Reitz, Nache D. Shekarri, Daniel J. Wagner, Mark A. Hanchett
  • Patent number: 10713498
    Abstract: A plurality of pairs of video cameras and interrogation devices may be placed in a public place along various paths that a person-of-interest might be expected to move. The person-of-interest is then located in multiple images acquired, collectively, by multiple video cameras. From each of the interrogation devices that are paired with these video cameras, a subset of the captured identifiers is obtained. Candidate identifiers are then restricted to those identifiers that are included in each of the subsets. A given identifier may be rejected as a candidate identifier. To automatically locate the person-of-interest in the images acquired by the “paired” video cameras, a processor may utilize video-tracking techniques to automatically track the person-of-interest, such that the person-of-interest is not “lost.” By virtue of utilizing such tracking techniques, the person-of-interest may be repeatedly located automatically, and with minimal chance of a false detection.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: July 14, 2020
    Assignee: VERINT SYSTEMS LTD.
    Inventors: Eithan Goldfarb, Boaz Dudovich
  • Patent number: 10713499
    Abstract: A method and system for real-time video triggering for traffic surveillance and photo enforcement comprises receiving a streaming video feed, performing a spatial uniformity correction on each frame of the streaming video feed, and resampling the video feed to a lower spatial resolution. Motion blobs are then detected. Next, a three-layered approach is used to identify candidate motion blobs, which can be output to a triggering module to trigger a video collection action.
    Type: Grant
    Filed: April 23, 2012
    Date of Patent: July 14, 2020
    Assignee: Conduent Business Services, LLC
    Inventor: Wencheng Wu
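    A sketch of the motion-blob trigger described above (US 10,713,499), using OpenCV's MOG2 background subtractor in place of the patent's three-layered candidate analysis; the spatial uniformity correction is omitted, and the scale factor, blob-area threshold, and video source are placeholders.

```python
# Sketch of a motion-blob video trigger in the spirit of US 10,713,499.
# MOG2 background subtraction replaces the patent's three-layered analysis,
# and the spatial uniformity correction step is omitted. Parameter values
# and the video source are placeholders.

import cv2

def run_trigger(source=0, scale=0.25, min_blob_area=500):
    cap = cv2.VideoCapture(source)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resample to a lower spatial resolution before blob detection.
        small = cv2.resize(frame, None, fx=scale, fy=scale)
        mask = subtractor.apply(small)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) >= min_blob_area]
        if blobs:
            trigger_video_collection(frame)     # hand off to the collection module

def trigger_video_collection(frame):
    print("trigger: candidate motion blob detected")

# run_trigger("traffic.mp4")
```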
  • Patent number: 10713500
    Abstract: A practical method and system for transportation agencies (federal, state, and local) to monitor and assess the safety of their roadway networks in real time based on traffic conflict events, such that corrective actions can be proactively undertaken to keep their roadway systems safe for the travelling public. The method and system also provide a tool for evaluating the performance of autonomous vehicle/self-driving car technologies with respect to safety and efficiency.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: July 14, 2020
    Assignee: Kennesaw State University Research and Service Foundation, Inc.
    Inventors: Jidong J. Yang, Ying Wang, Chih-Cheng Hung
  • Patent number: 10713501
    Abstract: A vehicle includes an interface that displays an image of objects in a vicinity of the vehicle, and a controller that alters a depth of field of the image based on a focal point associated with a direction of driver eye gaze relative to the image to alter blurriness of the image away from the focal point.
    Type: Grant
    Filed: August 13, 2015
    Date of Patent: July 14, 2020
    Assignee: Ford Global Technologies, LLC
    Inventor: Anthony Mark Phillips