Patent Applications Published on November 2, 2017
-
Publication number: 20170316231 Abstract: A controller in an interrogation device performs, for each RF tag passing through an interrogation zone that is defined near an interrogation unit, an integration process of integrating a strength of a reception signal from the RF tag received by the interrogation unit. The integration process includes weighting of an integral value of the strength of the reception signal in a manner to cause an integral value calculated for each reception signal to be larger than an integral value calculated for a preceding reception signal. When an RF tag moves to a predetermined position in the interrogation zone, the controller transmits, to a host device, an identifier of an RF tag having a maximum integral value selected from the integral values calculated for each RF tag. Type: Application Filed: February 23, 2017 Publication date: November 2, 2017 Applicant: OMRON Corporation Inventors: Hidekatsu NOGAMI, Yoshimitsu NAKANO, Tomohiro NISHIMURA
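The weighted-integration selection this abstract describes can be illustrated with a short sketch. This is a minimal, hypothetical rendering, not the application's actual algorithm: the exponential growth factor, the class and method names, and the sample RSSI values are all assumptions.

```python
# Minimal sketch of the weighted RSSI integration described above.
# The exponential weighting scheme, names, and sample values are
# illustrative assumptions, not taken from the application itself.
from collections import defaultdict

class TagIntegrator:
    def __init__(self, growth=1.1):
        # growth > 1 makes each new reading weigh more than the previous one,
        # so tags still strongly received late (near the predetermined
        # position) dominate the integral.
        self.growth = growth
        self.integrals = defaultdict(float)
        self.weights = defaultdict(lambda: 1.0)

    def on_reception(self, tag_id: str, rssi: float) -> None:
        # Accumulate the weighted signal strength for this tag.
        self.integrals[tag_id] += self.weights[tag_id] * rssi
        self.weights[tag_id] *= self.growth

    def select_tag(self) -> str:
        # When a tag reaches the predetermined position, report the tag
        # with the largest integral value to the host device.
        return max(self.integrals, key=self.integrals.get)

integrator = TagIntegrator()
for tag_id, rssi in [("A", 40.0), ("B", 55.0), ("A", 62.0), ("A", 70.0), ("B", 30.0)]:
    integrator.on_reception(tag_id, rssi)
print(integrator.select_tag())  # "A" accumulates the largest weighted integral
```

Because later readings carry more weight, a tag that is still strongly received as it reaches the predetermined position tends to accumulate the largest integral, which is the behaviour the abstract is after.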
-
Publication number: 20170316232Abstract: An interrogation device includes an interrogation unit that interrogates an RF tag in a contactless manner, and a controller that controls the interrogation unit. The controller obtains a signal strength value and an interrogation success rate from a reception signal received by the interrogation unit when controlling the interrogation unit to transmit a signal with transmission power that is being changed in stages, determines changed transmission power corresponding to a signal strength value and an interrogation success rate that are not less than a threshold selectively from a signal strength value and an interrogation success rate obtained in each stage of the transmission power, and outputs information about the determined transmission power.Type: ApplicationFiled: February 22, 2017Publication date: November 2, 2017Applicant: OMRON CorporationInventors: Yoshimitsu NAKANO, Hidekatsu NOGAMI
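A rough sketch of the staged power selection follows. The measure callback, the power steps, and both thresholds are illustrative assumptions rather than anything specified in the application, and "lowest qualifying power" is just one plausible selection policy.

```python
# Illustrative sketch of choosing a transmission power from staged measurements.
# measure() is a hypothetical callback returning (signal_strength, success_rate)
# for a given power level; the power range and thresholds are assumptions.
def determine_power(measure, power_steps, min_strength, min_success):
    candidates = []
    for power in power_steps:
        strength, success = measure(power)
        # Keep only stages whose signal strength and interrogation success
        # rate are both at or above their thresholds.
        if strength >= min_strength and success >= min_success:
            candidates.append((power, strength, success))
    if not candidates:
        return None
    # One reasonable policy: the lowest power that still meets both thresholds.
    return min(candidates, key=lambda c: c[0])

# Canned measurements standing in for the interrogation unit.
readings = {10: (-70.0, 0.4), 15: (-62.0, 0.8), 20: (-58.0, 0.95), 25: (-55.0, 0.97)}
chosen = determine_power(lambda p: readings[p], [10, 15, 20, 25], -65.0, 0.9)
print(chosen)  # (20, -58.0, 0.95): the lowest power level meeting both thresholds
```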
-
Publication number: 20170316233Abstract: Analog heterogeneous tags and methods and systems to configure the tags are described. The present invention relates to the field of electronic devices and more particularly to electronic tag devices using analog technology. Embodiments herein disclose a tag that can work in at least one of transmit only, receive only or transmit/receive modes and can transmit/receive using a plurality of communication technologies without the complex stack functionality with minimal hardware and memory requirements, wherein the tag uses I/Q samples corresponding to each technology pre-stored on the tag. Embodiments herein also disclose methods and systems for configuring the electronic tags using a configuration device, wherein the configuration device provides the I/Q samples of each technology to the tag.Type: ApplicationFiled: April 28, 2017Publication date: November 2, 2017Inventors: Arzad Alam Kherani, Ashutosh Deepak Gore, Anand Sudhakar Chiddarwar, Ok-Seon Lee, Sin-Seok Seo, Yong-Seok Park
-
Publication number: 20170316234 Abstract: A sensory totem badge capable of transmitting individualized information includes: a totem badge body attached or sewed onto an object surface; an e-tag, installed on the totem badge body, and including an NFC chip and an NFC coil; and totem individualized information, stored in the e-tag or a cloud server; such that when a mobile sensing device is near the e-tag of the totem badge body, the implication represented by a totem on the totem badge body, the story behind it, or private words can be read. Therefore, the totem badge body is given intangible specificity and commemoration, achieving higher value and a sense of technology. Type: Application Filed: November 30, 2016 Publication date: November 2, 2017 Inventors: LIN-HENG CHANG, KUANG-HUNG CHENG, CHIH-KAE GUAN
-
Publication number: 20170316235Abstract: Disclosed herein is an operating method of a reader in a radio frequency identification (RFID) system. In the method, a reader operates in a tag information collection mode in which tag information is collected from a plurality of tags and a tag recognition mode in which a frame of a predetermined size calculated according to the number of tags operating in each frame is allocated based on the tag information and at least some of the plurality of tags are recognized.Type: ApplicationFiled: April 27, 2017Publication date: November 2, 2017Applicant: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITYInventors: Tae-Jin LEE, Yunmin KIM
-
Publication number: 20170316236Abstract: A control and processing system for use with an interrogator and an interrogation system employing the same. In one embodiment, the control and processing system includes a correlation subsystem having a correlator that correlates a reference code with a reply code from a radio frequency identification (RFID) tag and provides a correlation signal therefrom. The control and processing system also includes a decision subsystem that verifies a presence of the RFID tag as a function of the correlation signal.Type: ApplicationFiled: July 11, 2017Publication date: November 2, 2017Applicant: Medical IP Holdings, LPInventors: John P. Volpi, Logan Scott, Eric McMurry
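The correlate-and-verify flow can be sketched roughly as below. The normalized-correlation formulation and the threshold value are assumptions for illustration, not the application's actual correlation and decision subsystems.

```python
# Simple sketch of the correlate-then-decide flow: correlate a stored reference
# code against the demodulated reply and declare the tag present when the
# correlation peak clears a threshold. The threshold value is an assumption.
import numpy as np

def tag_present(reply: np.ndarray, reference: np.ndarray, threshold: float) -> bool:
    # Normalized correlation of the mean-removed reply against the reference.
    reply_c = reply - reply.mean()
    ref_c = reference - reference.mean()
    corr = np.correlate(reply_c, ref_c, mode="valid")
    norm = np.linalg.norm(reply_c) * np.linalg.norm(ref_c)
    peak = np.max(np.abs(corr)) / norm if norm else 0.0
    return peak >= threshold

reference = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
reply = reference + 0.2 * np.random.randn(reference.size)  # noisy echo of the code
print(tag_present(reply, reference, threshold=0.7))
```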
-
Publication number: 20170316237Abstract: This invention is for tracking at least one plant. A method of this invention comprises: putting at least one seed or at least one stem in a corresponding at least one pot; positioning a corresponding at least one RFID tag with respect to the seed or stem in a manner, wherein the RFID tag comprises a strap; packaging a harvested material into a packaged product and attaching the RFID tag from the potted plant, or a product RFID tag that is associated with the plurality of tags to the packaged product; confirming a request for authorization by a RFID buy card; and after confirming ID information, transferring at least one product.Type: ApplicationFiled: July 20, 2017Publication date: November 2, 2017Inventors: DILEK DAGDELEN UYSAL, JEFFREY LANE WELLS
-
Publication number: 20170316238Abstract: An imaging sensor of an imaging reader senses return light from a target to be read by image capture along an imaging axis over a field of view that extends along mutually orthogonal, horizontal and vertical axes. Two aiming light assemblies are offset from the sensor and are spaced apart along the horizontal axis at opposite sides of the sensor, and direct two aiming light marks, each having a predetermined brightness, at the target. The aiming marks are collinear along the horizontal axis and have inner end regions that overlap on the target to form a bright aiming mark having a brightness greater than the predetermined brightness to visually indicate a center zone of the field of view, as well as outer end regions that visually indicate approximate end limits of the field of view, over a range of working distances.Type: ApplicationFiled: July 17, 2017Publication date: November 2, 2017Inventors: Mark E. Drzymala, Edward D. Barkan, Darran M. Handshaw
-
Publication number: 20170316239Abstract: An indicia reader system includes: an indicia reader for reading symbol indicia and producing a symbol signal representative of the symbol indicia, the indicia reader capable of transferring and receiving data formatted in a plurality of protocols; a controller capable of transferring and receiving data formatted in a plurality of protocols; and, a translation interface for translating data from the controller which is in a first protocol to a second protocol for receipt by the indicia reader.Type: ApplicationFiled: May 11, 2017Publication date: November 2, 2017Inventors: Gregory Pasik, James S. Ledwith, Joseph Walczyk, Barry H. Keys, Dennis H. Cudzillo
-
Publication number: 20170316240Abstract: A method for controlling the output of contextual information to assist a user to perform a sequence of activities using a computing device, the computing device comprising or coupled to at least one wearable sensor and at least one output device for providing contextual information, the method comprising: identifying, using sensor information from the at least one sensor, an activity being performed by the user; selecting and controlling the output of contextual information based on the activity being performed by the user, the contextual information being output from the at least one output device to a user to assist the user in performing the identified activity.Type: ApplicationFiled: October 20, 2015Publication date: November 2, 2017Inventors: LUCA TIBERI, PAUL ANTHONY SHRUBSOLE, MAURICE HERMAN JOHAN DRAAIJER, RALF GERTRUDA HUBERTUS VONCKEN
-
Publication number: 20170316241Abstract: A recognition apparatus includes one or more processors, a memory to store a plurality of instructions which, when executed by the processors, cause the processors to extract an optically-readable symbol from an optically captured image including an image of the optically-readable symbol, the optically-readable symbol including a first cell line having a plurality of first cells, and one or more second cell lines each having one or more second cells, each of the one or more second cell lines connected to respective ones of the plurality of first cells of the first cell line, recognize first information expressed by the first cell line included in the extracted optically-readable symbol, recognize second information expressed by the one or more second cell lines included in the extracted optically-readable symbol, and acquire identification information included in the optically-readable symbol based at least in part on the first information and the second information.Type: ApplicationFiled: April 26, 2017Publication date: November 2, 2017Applicant: Ricoh Company, Ltd.Inventors: Hiroshi Katayama, Koichi Kudo, Toshihiro Okamoto, Tamon Sadasue, Ken Oikawa, Hanako Bando, Daisuke Maeda
-
IMAGE RECOGNITION APPARATUS, COMMODITY INFORMATION PROCESSING APPARATUS AND IMAGE RECOGNITION METHOD
Publication number: 20170316242 Abstract: An image recognition apparatus includes an acquisition unit and a controller. The acquisition unit acquires an image that captures, by photography, a pattern indicative of an object. The controller is configured to specify a pattern area from a first image which the acquisition unit acquires, to recognize a pattern which the specified pattern area includes, to acquire a second image from the acquisition unit, to determine whether a disposition of the object of the first image and a disposition of the object of the second image coincide, and to specify a pattern area from the second image, if determining that the disposition of the object of the first image and the disposition of the object of the second image are non-coincident, and to recognize a pattern which the specified pattern area includes. Type: Application Filed: July 19, 2017 Publication date: November 2, 2017 Inventors: Tetsuya NOBUOKA, Masaaki YASUNAGA
-
Publication number: 20170316243 Abstract: There is provided a capacitive fingerprint sensing device for sensing a fingerprint pattern of a finger, the capacitive fingerprint sensing device comprising: a protective dielectric top layer having an outer surface forming a sensing surface to be touched by the finger; at least one electrically conductive sensing structure arranged underneath the top layer; readout circuitry coupled to the at least one electrically conductive sensing structure to receive a sensing signal indicative of a distance between the finger and the sensing structure; and a plurality of individually controllable electroacoustic transducers arranged underneath the top layer and configured to generate a focused ultrasonic beam, and to transmit the ultrasonic beam through the protective dielectric top layer towards the sensing surface to induce an ultrasonic vibration potential in a ridge of a finger placed in contact with the sensing surface at the location of the ultrasonic beam. Type: Application Filed: January 19, 2017 Publication date: November 2, 2017 Inventor: Farzan Ghavanini
-
Publication number: 20170316244 Abstract: The invention relates to a device for acquiring digital fingerprints which includes an image matrix sensor (1), said sensor being configured so as to acquire at least one image of the digital fingerprints of a finger (2) when said finger (2) is presented to said sensor in the acquisition field thereof, wherein the matrix sensor includes a body made of a semiconducting material (3) in which a matrix of active pixels (4) is formed, the pixels of said matrix of active pixels each including at least one photodiode (5) and being configured so as to operate in solar cell mode. Type: Application Filed: October 22, 2015 Publication date: November 2, 2017 Inventor: Ni YANG
-
ELECTRODE STRUCTURE, FINGERPRINT RECOGNITION MODULE AND MANUFACTURING METHOD THEREOF, DISPLAY DEVICE
Publication number: 20170316245 Abstract: Disclosed is an electrode structure including an electrode body, a composite layer disposed on the electrode body; a surface of the composite layer away from the electrode body being set to be a finger contact surface in a case of fingerprint recognition, wherein the composite layer is made from composite materials formed by a cured main body glue and one-dimensional nano-conductor materials distributed in the main body glue; and an end of each of the one-dimensional nano-conductor materials is exposed from the finger contact surface of the composite layer, and the other end of each of the one-dimensional nano-conductor materials makes contact with the electrode body. A fingerprint recognition module including the electrode structure and a manufacturing method thereof are also disclosed. Type: Application Filed: October 24, 2016 Publication date: November 2, 2017 Inventor: Yingyi LI
-
Publication number: 20170316246Abstract: The invention provides a system and method for rapid validation of identity from tissue using registered two dimensional and optical coherence tomography (OCT) scan images. The preferred embodiment provides, for a human fingerprint, validation that the surface fingerprint matches the primary fingerprint. An alternate embodiment provides validation of “aliveness” by ascertaining blood flow. Various embodiments are taught.Type: ApplicationFiled: June 26, 2017Publication date: November 2, 2017Inventor: Joshua Noel Hogan
-
Publication number: 20170316247Abstract: The invention relates to a fingerprint sensing device comprising a sensing chip comprising an array of capacitive sensing elements. The sensing device comprises a coating material arranged in a layer on top of the array of sensing elements and comprising a plurality of cavities filled with a dielectric material. The dielectric material comprises reduced graphene oxide. Locations of the cavities correspond to locations of the sensing elements such that a cross-section area of a cavity covers at least a portion of an area of a corresponding sensing element. A dielectric constant of the dielectric material is higher than a dielectric constant of the coating material. The invention also relates to a sensing device where the dielectric coating layer containing reduced graphene oxide comprises trenches corresponding to areas between the sensing pixels filled with a fill material, where the dielectric coating layer has a higher dielectric constant than the fill material.Type: ApplicationFiled: July 20, 2017Publication date: November 2, 2017Applicant: Fingerprint Cards ABInventors: Karl LUNDAHL, Hanna NILSSON
-
Publication number: 20170316248 Abstract: A device is provided to include a display panel and an optical sensor module. The optical fingerprint sensor can detect a contact input, generate a signal indicative of an image of the fingerprint, and generate a signal indicative of a biometric marker different from the fingerprint. The generated sensor signal includes the signal indicative of the image of the fingerprint and the signal indicative of the biometric marker different from the fingerprint. The optical sensor module can capture different fingerprint patterns at different times to monitor time-domain evolution of the fingerprint ridge pattern deformation that indicates time-domain evolution of a press force from the contact input. The sensing circuitry can process the generated sensor signal to determine whether the contact input associated with the fingerprint belongs to a finger of a live person. Type: Application Filed: July 18, 2017 Publication date: November 2, 2017 Inventors: Yi He, Bo Pi
-
Publication number: 20170316249Abstract: Provided is an electrical device including a display configured to display an image; a first transparent cover arranged on the display; a second transparent cover comprising a touch surface operable to be touched by a finger of a user; and a sensor disposed between the first transparent cover and the second transparent cover, the sensor being configured to receive a fingerprint of the finger.Type: ApplicationFiled: December 12, 2016Publication date: November 2, 2017Applicant: SAMSUNG ELECTRONICS CO., LTD.Inventors: Byungkyu LEE, Seokwhan CHUNG, Daekun YOON
-
Publication number: 20170316250Abstract: An electronic device is provided. The electronic device includes a touchscreen display, a pressure sensor positioned to sense external pressure against the display, a fingerprint sensor positioned to detect a fingerprint on at least a portion of the display, a processor electrically coupled to the display, the pressure sensor, and the fingerprint sensor, and a memory electrically coupled to the processor, in which the memory stores at least one registered fingerprint. The processor is configured to sense pressure of a user's finger against the display using the pressure sensor, upon sensing of the pressure, activate the fingerprint sensor, detect a fingerprint of the finger using the fingerprint sensor, determine whether the detected fingerprint is matched with any of the at least one registered fingerprint, and perform a preselected function without further requiring authentication, when the detected fingerprint is matched with any of the at least one registered fingerprint.Type: ApplicationFiled: April 7, 2017Publication date: November 2, 2017Inventors: Wan Ho ROH, So Young KIM, Dae Kwang JUNG
-
Publication number: 20170316251Abstract: A fingerprint identification module and a manufacturing method thereof, and a display device are provided. The fingerprint identification module includes: a first electrode layer and a second electrode layer opposed to each other, and a transparent insulating layer interposed between the first electrode layer and the second electrode layer; the first electrode layer includes a plurality of first electrode wires spaced apart from each other; the second electrode layer includes a plurality of second electrode wires spaced apart from each other; the plurality of second electrode wires and the plurality of first electrode wires are intersected with each other; the first electrode layer is formed of a transparent conductive material, and the second electrode layer is a wire grid polarizer.Type: ApplicationFiled: September 23, 2016Publication date: November 2, 2017Applicant: BOE TECHNOLOGY GROUP CO., LTD.Inventor: YINGYI LI
-
Publication number: 20170316252Abstract: A fingerprint identification apparatus and a mobile terminal are disclosed. The fingerprint identification apparatus includes a fingerprint sensor and an optical module electrically connected to the fingerprint sensor. The optical module comprises an optical emitter, an optical circuit module and a photoelectric converter; wherein the optical circuit module is electrically connected to the optical emitter and the photoelectric converter respectively; the optical emitter is configured to emit an optical signal having a specific wavelength; the photoelectric converter is configured to receive the optical signal emitted by the optical emitter and subjected to a touch object, and convert the optical signal into an electric signal; and the optical circuit module is configured to drive and control the optical emitter, and analyze the electric signal.Type: ApplicationFiled: July 21, 2017Publication date: November 2, 2017Inventors: Wangwang YANG, Yi HE, Yudong WANG
-
Publication number: 20170316253Abstract: An image of a physical environment is acquired that comprises a plurality of pixels, each pixel including a two-dimensional pixel location in the image plane and a depth value corresponding to a distance between a region of the physical environment and the image plane. For each pixel, the two dimensional pixel location and the depth value is converted into a corresponding three-dimensional point in the physical environment defined by three coordinate components, each of which has a value in physical units of measurement. A set of edge points is determined within the plurality of three-dimensional points based, at least in part, on the z coordinate component of the plurality of points and a distance map is generated comprising a matrix of cells. For each cell of the distance map, a distance value is assigned representing a distance between the cell and the closest edge point to that cell.Type: ApplicationFiled: April 27, 2016Publication date: November 2, 2017Inventors: Sian Phillips, Shantanu Padhye
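A compact sketch of the pipeline this abstract describes (back-projection of pixels with depth into 3D points, depth-edge detection, and a distance map to the nearest edge) might look as follows. The pinhole intrinsics, the edge threshold, and the use of SciPy's Euclidean distance transform are assumptions for illustration, not details from the application.

```python
# Rough sketch: back-project pixels with depth into 3D points, find depth
# edges from the z component, then build a distance map to the nearest edge.
import numpy as np
from scipy.ndimage import distance_transform_edt

def backproject(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # physical units (e.g. metres)
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])    # per-pixel 3D point (x, y, z)

def edge_distance_map(depth, z_jump=0.05):
    # Mark pixels whose z value jumps sharply against a neighbour as edge points.
    dz = np.maximum(np.abs(np.diff(depth, axis=0, prepend=depth[:1])),
                    np.abs(np.diff(depth, axis=1, prepend=depth[:, :1])))
    edges = dz > z_jump
    # distance_transform_edt measures distance to the nearest zero entry,
    # so invert the mask: zeros at edge points, ones elsewhere.
    return distance_transform_edt(~edges)

depth = np.full((120, 160), 2.0)
depth[40:80, 50:110] = 1.2            # a box 0.8 m closer than the background
points = backproject(depth, fx=200.0, fy=200.0, cx=80.0, cy=60.0)
dist_map = edge_distance_map(depth)
print(points.shape, dist_map.min(), dist_map.max())
```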
-
Publication number: 20170316254Abstract: A vehicle configured to operate in a remote access mode is disclosed. In some examples, a camera at the exterior of the vehicle can capture one or more images of its surroundings, including the face of a person attempting to access the vehicle. A primary operator (e.g., owner or lessee) of the vehicle can receive the one or more images at a mobile device and send an input, via a user interface of the mobile device, to grant or deny access to the vehicle. In response to wirelessly receiving the input to allow access, the vehicle can be unlocked and started in the remote access mode. In some examples, the remote access mode can have a set of permissions and/or restrictions associated therewith.Type: ApplicationFiled: April 29, 2017Publication date: November 2, 2017Inventors: Mohamad Mwaffak Hariri, Adam Michael Kibit
-
Publication number: 20170316255Abstract: An identification device includes an inputter which receives image information of a person photographed by a camera, and a controller which identifies the person and detects parts, which are at least a head and hands, of the person based on the image information, thereby identifying a motion of the person based on the identified person, the detected parts, and a motion model in which a motion of a person is registered for every person, and outputs the identified motion of the person.Type: ApplicationFiled: April 10, 2017Publication date: November 2, 2017Inventor: KOJI ARATA
-
Publication number: 20170316256 Abstract: A computer-implemented method includes identifying interesting moments from a video. The video is received and includes image frames. Continual motion of one or more objects in the video is identified based on identifying foreground motion in the image frames. Video segments from the video that include the continual motion are generated. A segment score for each of the video segments is generated based on animation criteria. Responsive to one or more of the segment scores exceeding a threshold animation score, one or more corresponding video segments are selected. An animation is generated based on the one or more corresponding video segments. Type: Application Filed: April 29, 2016 Publication date: November 2, 2017 Applicant: Google Inc. Inventors: Eunyoung KIM, Ronald Frank WOTZLAW
-
Publication number: 20170316257Abstract: A monitoring device to monitor and to count people in a certain area includes an extracting module to extract images of persons from a first signal; and a computing module to process the images of persons from the extracting module. The extracting module removes background of the first signal and extracts images of persons for the computing module. The computing module obtains coordinates of a center of each image of persons and a value of hue of each image of persons. The computing module can match images of persons to persons to be monitored and can constantly determine the instant number of persons being monitored.Type: ApplicationFiled: April 29, 2016Publication date: November 2, 2017Inventor: WEI-CHUN CHEN
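The centroid-plus-hue matching step could be sketched roughly as below. The cost function, its weights, and the matching threshold are illustrative assumptions, not the patent's actual matching rule.

```python
# Toy sketch of the matching step: each extracted person image is reduced to a
# centroid and a mean hue, and new detections are matched to tracked persons
# by a combined distance. Unmatched detections count as newly seen persons.
import math

def match(detections, tracked, max_cost=60.0, hue_weight=0.5):
    """detections/tracked: lists of dicts with 'center' (x, y) and 'hue' (0-179)."""
    assignments, used = [], set()
    for i, det in enumerate(detections):
        best_j, best_cost = None, max_cost
        for j, per in enumerate(tracked):
            if j in used:
                continue
            dx = det["center"][0] - per["center"][0]
            dy = det["center"][1] - per["center"][1]
            # Hue is circular, so take the shorter way around the hue wheel.
            dh = abs(det["hue"] - per["hue"])
            dh = min(dh, 180 - dh)
            cost = math.hypot(dx, dy) + hue_weight * dh
            if cost < best_cost:
                best_j, best_cost = j, cost
        if best_j is not None:
            assignments.append((i, best_j))
            used.add(best_j)
    matched = {i for i, _ in assignments}
    new_people = [i for i in range(len(detections)) if i not in matched]
    return assignments, new_people

tracked = [{"center": (100, 80), "hue": 30}, {"center": (220, 90), "hue": 120}]
detections = [{"center": (104, 83), "hue": 32}, {"center": (40, 60), "hue": 90}]
print(match(detections, tracked))  # first detection matches person 0; second is new
```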
-
Publication number: 20170316258Abstract: A method, apparatus and computer program product for improving differentiation in a gesture based security system is described. An image based feed from a camera is received by a gesture based security system. The camera views a secured area. The system recognizes a gesture within the feed. Non-gesture metadata is associated with the recognized gesture. The system determines whether the recognized gesture is an approved gesture within the secured area according to the non-gesture metadata associated with the recognized gesture.Type: ApplicationFiled: April 29, 2016Publication date: November 2, 2017Inventors: Jeffrey Robert Hoy, Sreekanth Ramakrishna Iyer, Kaushal Kiran Kapadia, Ravi Krishnan Muthukrishnan, Nataraj Nagaratnam
-
Publication number: 20170316259Abstract: A method, apparatus and computer program product for improving differentiation in a gesture based security system is described. An image based feed from a camera is received by the gesture based security system. The camera has a view of a first secured area. A first gesture within the feed is recognized, producing a first recognized gesture. The first recognized gesture is determined to be an unclassified gesture for the first secured area. Non-gesture metadata is associated with the first recognized gesture. The first recognized gesture and the associated non-gesture metadata are transmitted together for classification of the first recognized gesture. The first recognized gesture is classified as one of the following: an approved gesture within the first secured area, an unapproved gesture within the first secured area or a suspicious gesture within the first secured area.Type: ApplicationFiled: April 29, 2016Publication date: November 2, 2017Inventors: Jeffrey Robert Hoy, Sreekanth Ramakrishna Iyer, Kaushal Kiran Kapadia, Ravi Krishnan Muthukrishnan, Nataraj Nagaratnam
-
Publication number: 20170316260Abstract: Using mobile devices in a gesture based security system is described. An image based feed is received from a camera incorporated in a first mobile device. The first mobile device is in communication with the gesture based security system. The camera has a view of one of a plurality of secured areas monitored by the gesture based security system. A gesture is recognized within the feed. Non-gesture metadata from the mobile device is associated with the recognized gesture. The non-gesture metadata is used to determine that the image based feed is a view of a first secured area of the plurality of secured areas. The determination whether the recognized gesture is an approved gesture within the first secured area is made according to non-gesture metadata associated with the recognized gesture.Type: ApplicationFiled: April 29, 2016Publication date: November 2, 2017Inventors: Jeffrey Robert Hoy, Sreekanth Ramakrishna Iyer, Kaushal Kiran Kapadia, Ravi Krishnan Muthukrishnan, Nataraj Nagaratnam
-
Publication number: 20170316261 Abstract: Disclosed methods include a method of controlling a computing device that includes the steps of detecting a gesture made by a human user, identifying the gesture, and executing a computer command. The gesture may comprise a change in depth of a body part of the human user relative to the 2D camera. The gesture may be detected via a 2D camera in electronic communication with the computing device. Disclosed systems include a 2D camera and a computing device in electronic communication therewith. The 2D camera is configured to capture at least a first and second image of a body part of a human user. The computing device is configured to recognize at least a first object in the first image and a second object in the second image, identify a change in depth, and execute a command in response to the change in depth. Type: Application Filed: July 11, 2017 Publication date: November 2, 2017 Inventors: Ryan Fink, Ryan Phelps, Gary Peck
-
Publication number: 20170316262 Abstract: Systems and methods are directed toward occupancy detection in a predefined space including one or more sub-zones. Occupancy of the predefined space can be determined by determining the occupancy of an outer zone comprising the predefined space, and determining the occupancy of the individual sub-zones of the outer zone. Occupancy values of the individual sub-zones can be rescaled such that the sum of the individual sub-zone occupancies equals the already determined outer zone occupancy. The occupancy of the predefined space then may be determined using the rescaled occupancies of the various sub-zones included in the predefined space. Such processes can be repeated for individual sub-zones, each of which may be treated as a separate outer zone including one or more sub-zones. Type: Application Filed: April 28, 2017 Publication date: November 2, 2017 Inventors: Stuart Andrew Holliday, Neil Johnson
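The rescaling step is simple enough to show directly. The function below is a minimal sketch under the assumption that sub-zone occupancies are scaled proportionally so they sum to the independently estimated outer-zone occupancy; the names are illustrative, not from the application.

```python
# Minimal sketch of the rescaling step: estimated sub-zone occupancies are
# scaled so they sum to the independently estimated outer-zone occupancy.
def rescale_subzones(outer_occupancy: float, subzone_occupancies: list) -> list:
    total = sum(subzone_occupancies)
    if total == 0:
        # Nothing detected in any sub-zone; nothing to distribute.
        return [0.0] * len(subzone_occupancies)
    scale = outer_occupancy / total
    return [occ * scale for occ in subzone_occupancies]

# Outer zone estimate says 9 occupants, sub-zone estimates sum to 10.
print(rescale_subzones(9.0, [4.0, 3.5, 2.5]))  # [3.6, 3.15, 2.25], summing to 9.0
```

As the abstract notes, the same step can then be applied recursively, treating any sub-zone that has its own children as a new outer zone.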
-
Publication number: 20170316263Abstract: Systems and methods are provided for processing and extracting content from an image captured using a mobile device. In one embodiment, an image is captured by a mobile device and corrected to improve the quality of the image. The corrected image is then further processed by adjusting the image, identifying the format and layout of the DL, binarizing the image and extracting the content using optical character recognition (OCR). Multiple methods of image adjusting may be implemented to accurately assess features of the DL, and a secondary layout identification process may be performed to ensure that the content being extracted is properly classified.Type: ApplicationFiled: July 17, 2017Publication date: November 2, 2017Inventors: Grigori NEPOMNIACHTCHI, Mike STRANGE
-
Publication number: 20170316264Abstract: A system for determining a gaze direction of a user of a wearable device is disclosed. The system may include a primary lens, an illuminator, an image sensor, and an interface. The illuminator may include a light guide, may be disposed at least partially on or in the primary lens, and may be configured to illuminate at least one eye of a user. The image sensor may be disposed on or in the primary lens, and may be configured to detect light reflected by the at least one eye of the user. The interface may be configured to provide data from the image sensor to a processor for determining a gaze direction of the user based at least in part on light detected by the image sensor.Type: ApplicationFiled: July 17, 2017Publication date: November 2, 2017Applicant: Tobii ABInventors: Simon Gustafsson, Anders Kingbäck, Peter Blixt, Richard Hainzl, Mårten Skogö
-
Publication number: 20170316265Abstract: Described is a system for feature selection for formal concept analysis (FCA). A set of data points having features is separated into object classes. For each object class, the data points are convolved with a Gaussian function, resulting in a class distribution curve for each known object class. For each class distribution curve, a binary array is generated having ones on intervals of data values on which the class distribution curve is maximum with respect to all other class distribution curves, and zeroes elsewhere. For each object class, a binary class curve indicating for which interval a performance of the known object class exceeds all other known object classes is generated. The intervals are ranked with respect to a predetermined confidence threshold value. The ranking of the intervals is used to select which features to extract from the set of data points in FCA lattice construction.Type: ApplicationFiled: May 10, 2016Publication date: November 2, 2017Inventors: Michael J. O'Brien, Kang-Yu Ni, James Benvenuto, Rajan Bhattacharyya
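The curve-building and dominance-marking steps can be sketched as below. The Gaussian bandwidth, the value grid, and the KDE-style formulation are assumptions, and the ranking of intervals against a confidence threshold is omitted.

```python
# Condensed sketch: per-class data values are smoothed with a Gaussian to form
# class distribution curves, and each class gets a binary array marking the
# grid intervals on which its curve exceeds all other class curves.
import numpy as np

def class_binary_curves(values_by_class, grid, sigma=0.5):
    curves = []
    for values in values_by_class:
        # Sum of Gaussians centred on the class's data values (a KDE-style curve).
        diffs = grid[None, :] - np.asarray(values, dtype=float)[:, None]
        curves.append(np.exp(-0.5 * (diffs / sigma) ** 2).sum(axis=0))
    curves = np.vstack(curves)                  # shape: (n_classes, n_grid)
    winners = curves.argmax(axis=0)             # class with the maximum curve per grid point
    return [(winners == k).astype(int) for k in range(curves.shape[0])]

grid = np.linspace(0.0, 10.0, 101)
binaries = class_binary_curves([[1.0, 1.5, 2.0], [6.0, 6.5, 7.5]], grid)
print(binaries[0].sum(), binaries[1].sum())  # grid points where each class dominates
```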
-
Publication number: 20170316266Abstract: The image processing method includes a luminance value information obtaining step of obtaining effective radiance values from a subject, and an image generating step of generating a picture image as a set of unit regions each of which has a luminance value obtained by at least partially removing a regular reflection light component on a surface of the subject from the effective radiance values.Type: ApplicationFiled: April 24, 2017Publication date: November 2, 2017Inventors: Suguru KAWABATA, Takashi NAKANO, Kazuhiro NATSUAKI, Takahiro TAKIMOTO, Shinobu YAMAZAKI, Daisuke HONDA, Yukio TAMAI
-
Publication number: 20170316267Abstract: Representative implementations of devices and techniques provide adjustable parameters for imaging devices and systems. Dynamic adjustments to one or more parameters of an imaging component may be performed based on changes to the relative velocity of the imaging component or to the proximity of an object to the imaging component.Type: ApplicationFiled: July 17, 2017Publication date: November 2, 2017Inventors: Markus DIELACHER, Josef PRAINSACK, Martin FLATSCHER, Michael MARK, Robert LOBNIK
-
Publication number: 20170316268Abstract: Disclosed herein are a video interpretation apparatus and method. The video interpretation apparatus includes an object information generation unit for generating object information based on objects in an input video, a relation generation unit for generating a dynamic spatial relation between the objects based on the object information, a general event information generation unit for generating general event information based on the dynamic spatial relation, a video information generation unit for generating video information including any one of a sentence and an event description based on the object information and the general event information, and a video descriptor storage unit for storing the object information, the general event information, and the video information.Type: ApplicationFiled: January 5, 2017Publication date: November 2, 2017Inventors: Jin-Young MOON, Kyu-Chang KANG, Yong-Jin KWON, Kyoung PARK, Jong-Youl PARK, Jeun-Woo LEE
-
Publication number: 20170316269 Abstract: A compact image sequence descriptor (101), used for describing an image sequence, comprises a segment global descriptor (113) for at least one segment within the sequence, which includes global descriptor information for respective images, relating to interest points within the video content of the images. The segment global descriptor (113) includes a base descriptor (121), which is a global descriptor associated with a representative frame (120) of the image sequence, and a number of relative descriptors (125). The relative descriptors contain information of a respective global descriptor relative to the base descriptor, allowing reconstruction of an exact or approximated global descriptor associated with a respective image of the image sequence. The image sequence descriptor (101) may further include a segment local descriptor (114) for a segment, comprising a set of encoded local feature descriptors. Type: Application Filed: April 27, 2017 Publication date: November 2, 2017 Applicant: Joanneum Research Forschungsgesellschaft mbH Inventors: Werner Bailer, Stefanie Wechtitsch
-
Publication number: 20170316270Abstract: A method, system, and device for processing video shooting are described. The method includes: shooting a subject in a background sample, the background sample, and a target background respectively, to generate a first video recording the subject and the background sample, a second video recording the background sample, and a third video recording the target background respectively, the shooting time lengths of the first video and the second video being smaller than the shooting time length of the third video; and comparing the first video and the second video, extracting images of the subject from the first video, generating a subject image frame sequence with a transparent background, sequentially superimposing each subject image frame in the subject image frame sequence to the third video, and generating a video file recording the subject and the target background.Type: ApplicationFiled: March 2, 2015Publication date: November 2, 2017Applicant: ZTE CORPORATIONInventors: Jianjiang CHEN, Yuanyuan XU
-
Publication number: 20170316271Abstract: According to one embodiment, an image capturing unit that captures images of a first region positioned at an entrance, a second region in which a customer himself/herself performs accounting of a commodity, and a third region positioned at an exit in a checkout region relating to registration and accounting of the commodity, an extraction unit that extracts feature information indicating features of the customer, from a captured image obtained in each of the regions, a tracking unit that tracks a movement path until the same customer reaches the third region from the first region, based on similarity of the feature information extracted from the captured image of the first region to the feature information extracted from the captured images of the second region and the third region, and a reporting unit that performs reporting when the movement path indicates that the customer reaches the third region from the first region without passing through the second region are provided.Type: ApplicationFiled: April 19, 2017Publication date: November 2, 2017Inventors: Takahiro Saitou, Daisuke Miyagi
-
Publication number: 20170316272Abstract: A driving assistance system includes at least one receiving module designed to receive perception data from a driving environment, a control module designed to control an on-board system, a conversion module designed to generate, on the basis of the perception data, a plurality of instances of classes of an ontology stored by the driving assistance system and defining relations between classes, and a reasoning tool designed to deduce, on the basis of the ontology, at least one property of an instance of the plurality. The control module is designed to control the on-board system on the basis of the deduced property.Type: ApplicationFiled: July 23, 2015Publication date: November 2, 2017Applicant: RENAULT s.a.sInventors: Alexandre ARMAND, Javier IBANEZ-GUZMAN, David FILLIAT
-
Publication number: 20170316273Abstract: Methods and devices for using a relationship between activities of different traffic signals in a network to improve traffic signal state estimation are disclosed. An example method includes determining that a vehicle is approaching an upcoming traffic signal. The method may further include determining a state of one or more traffic signals other than the upcoming traffic signal. Additionally, the method may also include determining an estimate of a state of the upcoming traffic signal based on a relationship between the state of the one or more traffic signals other than the upcoming traffic signal and the state of the upcoming traffic signal.Type: ApplicationFiled: July 19, 2017Publication date: November 2, 2017Inventors: David I. Ferguson, Bradley Templeton
-
Publication number: 20170316274 Abstract: A determination apparatus includes: an inputter that receives image information captured by a camera; and a controller that detects a face direction angle of a person while detecting a position of a predetermined body part of the person based on the image information, and determines a looking-back motion when the face direction angle is larger than a predetermined angle and the predetermined body part is present in a predetermined position. Type: Application Filed: April 11, 2017 Publication date: November 2, 2017 Inventors: SHUZO NORIDOMI, KOJI ARATA
-
Publication number: 20170316275Abstract: A detected quadrilateral area is displayed and no group of candidate lines is displayed in a normal state. While a user is selecting a side that the user desires to change, a group of candidate lines corresponding to the selected side is displayed. Then, whether to replace a position of the selected side with a position of a candidate line is determined based on a movement destination position of the selected side.Type: ApplicationFiled: July 19, 2017Publication date: November 2, 2017Inventor: Takashi Miyauchi
-
Publication number: 20170316276Abstract: New intra planar modes are introduced for predicting digital video data. As part of the new intra planar modes, various methods are offered for predicting a first sample within a prediction unit, where the first sample is needed for referencing to when processing the new intra planar modes. And once the first sample is successfully predicted, the new intra planar modes are able to predict a sample of video data within the prediction unit by processing a bi-linear interpolation of four previously reconstructed reference samples.Type: ApplicationFiled: July 14, 2017Publication date: November 2, 2017Inventors: Jaehyun LIM, Byeongmoon JEON, Seungwook PARK, Jaewon SUNG, Jungsun KIM, Yongjoon JEON, Joonyoung PARK, Younghee CHOI
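For orientation, a simplified bilinear interpolation over a prediction block is sketched below. It interpolates from the four block corners rather than from reconstructed neighbouring rows and columns, so it is only a loose illustration of planar-style prediction under that simplifying assumption, not the modes defined in this application.

```python
# Simplified sketch: every sample in the block is a bi-linear interpolation of
# four reference samples placed at the block corners. Treating the corners as
# the references is an illustrative assumption, not the claimed planar modes.
def planar_predict(top_left, top_right, bottom_left, bottom_right, width, height):
    block = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            wx = (x + 1) / (width + 1)      # horizontal interpolation weight
            wy = (y + 1) / (height + 1)     # vertical interpolation weight
            top = (1 - wx) * top_left + wx * top_right
            bottom = (1 - wx) * bottom_left + wx * bottom_right
            block[y][x] = round((1 - wy) * top + wy * bottom)
    return block

for row in planar_predict(top_left=100, top_right=140, bottom_left=60, bottom_right=200,
                          width=4, height=4):
    print(row)  # smooth gradient spanning the four reference samples
```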
-
Publication number: 20170316277Abstract: According to one embodiment, an article recognition apparatus includes an image acquisition unit, a recognition unit, a region detection unit, a storage unit, and a determination unit. The recognition unit recognizes each of the articles. The region detection unit determines article region information. The storage unit stores article information including a reference value for the article region information. The determination unit determines that an unrecognized article exists, if the reference value for the article region information of each article which the recognition unit recognized does not match with the article region information.Type: ApplicationFiled: July 18, 2017Publication date: November 2, 2017Inventors: Tetsuya NOBUOKA, Masaaki YASUNAGA
-
Publication number: 20170316278Abstract: An enhanced object detecting method and apparatus is presented. A plurality of successive frames is captured by a monocular camera and the image data of the captured frames are transformed with respect to a predetermined point of view. For instance, the images may be transformed in order to obtain a top-down view. Particular features such as lines are extracted from the transformed image data, and corresponding features of successive frames are matched. An angular change of corresponding features is determined and boundaries of an object are identified based on the angular change of the features.Type: ApplicationFiled: July 18, 2017Publication date: November 2, 2017Applicant: Applications Solutions (Electronic and Vision) LtdInventors: Rui Guerreiro, Alexander Sibiryakov
-
Publication number: 20170316279Abstract: A change degree deriving apparatus includes a receiving unit and a deriving unit. The receiving unit is configured to receive first image data of an object including an achromatic color and a first color and reference image data of the object. The first image data relates to the first color. The reference image data serves as a reference. The deriving unit is configured to derive a change degree of the object from a first difference based on the first image data and the reference image data received by the receiving unit. The first difference is a difference between the first image data and the reference image data, which occurs at a chromatic color portion when a portion corresponding to the achromatic color is set as a reference.Type: ApplicationFiled: November 7, 2016Publication date: November 2, 2017Applicant: FUJI XEROX CO., LTD.Inventors: Shinji SASAHARA, Hitoshi OGATSU, Junichi MATSUNOSHITA, Ken OGINO
-
Publication number: 20170316280 Abstract: A system for generating a predictive virtual personification includes a wearable data capture device, a data store, and a saliency recognition engine, wherein the wearable data acquisition device is configured to transmit one or more physio-emotional or neuro-cognitive data sets and a graphical representation of a donor subject to the saliency recognition engine, and the saliency recognition engine is configured to receive the one or more physio-emotional or neuro-cognitive data sets, the graphical representation, and one or more identified trigger stimulus events, locate a set of saliency regions of interest (SROI) within the graphical representation of the donor subject, generate a set of SROI specific saliency maps and store, in the data store, a set of correlated SROI specific saliency maps generated by correlating each SROI specific saliency map with a corresponding trigger event. Type: Application Filed: June 30, 2017 Publication date: November 2, 2017 Inventor: JAMES MICHAEL CALDWELL