Patents Issued on April 11, 2017
  • Patent number: 9619692
    Abstract: A system and method for identifying objects being carried by an operator who is approaching an instrument. The system includes image-, motion-, and depth-capturing sensors that are in communication with the instrument. The captured image, motion, and depth data are compared to data stored in a database and the objects are identified. Once the objects have been identified, an action that corresponds to the identified objects is initiated in the instrument.
    Type: Grant
    Filed: September 18, 2013
    Date of Patent: April 11, 2017
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventor: Alexander Gelbman
  • Patent number: 9619693
    Abstract: In an imaging device in a display system, a control unit recognizes human traits from an image of a human being that is an object to be displayed, which is obtained by photography. The image of the human being is correlated to information indicating the recognized traits and then transmitted to a digital signage device by a communication unit. In the digital signage device, the display area on an image display unit is determined in accordance with this information indicating the human traits that have been correlated to the image of the human being that is the object to be displayed, and the image of the human being that is the object to be displayed is modified so as to be displayed in the determined display area, after which the modified image is displayed on the determined display area of the image display unit.
    Type: Grant
    Filed: November 5, 2014
    Date of Patent: April 11, 2017
    Assignee: CASIO COMPUTER CO., LTD.
    Inventors: Taiga Murayama, Taichi Honjo
  • Patent number: 9619694
    Abstract: In particular embodiments, one or more images associated with a primary user are received. The image(s) may comprise single images, a series of related images, or video frames. In each image, one or more faces are detected and/or tracked. For each face, a set of one or more candidates are selected who may be identified with the face. The primary user has a computed measure of affinity for candidates in the set through a social network, or the candidate in the set is otherwise known to the primary user. A facial recognition score is calculated for each candidate. A subset of candidates is selected, wherein each candidate in the subset has a facial recognition score above a predetermined threshold. A candidate score is calculated for each candidate based on the facial recognition score and the computed measure of affinity. A winning candidate is selected based on the candidate scores.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: April 11, 2017
    Assignee: Facebook, Inc.
    Inventors: David Harry Garcia, Luke St. Clair, Jenny Yuen
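    Illustrative sketch: the candidate-scoring step described in the abstract above, as a minimal Python function. The threshold, the blending weight, and the function and argument names are assumptions for illustration, not Facebook's implementation.
      def pick_winning_candidate(candidates, face_scores, affinities,
                                 face_threshold=0.6, face_weight=0.7):
          """candidates: list of ids; face_scores/affinities: dicts id -> value in [0, 1]."""
          # Keep only candidates whose facial recognition score clears the threshold.
          subset = [c for c in candidates if face_scores[c] >= face_threshold]
          if not subset:
              return None
          # Candidate score blends recognition confidence and social affinity
          # (the blend weights are assumed, not specified by the patent).
          def candidate_score(c):
              return face_weight * face_scores[c] + (1.0 - face_weight) * affinities[c]
          return max(subset, key=candidate_score)

      # Example: pick_winning_candidate(["alice", "bob"], {"alice": 0.9, "bob": 0.7},
      #                                 {"alice": 0.2, "bob": 0.8}) -> "bob"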
  • Patent number: 9619695
    Abstract: A system for tracking a gaze of a driver of a vehicle includes a tracking device, a processor, a memory, and a display. The tracking device is configured to track a gaze of a driver of a vehicle. The processor is in electronic communication with the tracking device. The memory is in electronic communication with the processor. The memory includes programming code configured to be executed by the processor. The programming code is configured to determine in real-time a duration of the gaze of the driver of the vehicle tracked by the tracking device. The display is in electronic communication with the processor. The display is configured to display a symbol showing the determined duration, or a portion of the determined duration, of the gaze of the driver of the vehicle as determined by the processor.
    Type: Grant
    Filed: June 6, 2013
    Date of Patent: April 11, 2017
    Assignee: Visteon Global Technologies, Inc.
    Inventors: Michael Dean Tschirhart, Dale O. Cramer, Anthony Joseph Ciatti
  • Patent number: 9619696
    Abstract: In one embodiment, a method for detecting faces in video image frames is implemented on a computing device including: comparing current image frames to previously processed image frames to determine similarity, if a current image frame and a previously processed image frame are dissimilar, comparing a location within the current image frame for at least one detected facial image to a location within an associated image frame for at least one most recently stored facial image stored in a most recently used (MRU) cache, if the compared locations are dissimilar, comparing the at least one detected facial image to the at least one most recently stored facial image stored in the MRU cache to determine similarity, and storing the at least one detected facial image in the MRU cache if the at least one detected facial image and the at least one most recently stored facial image are not dissimilar.
    Type: Grant
    Filed: April 15, 2015
    Date of Patent: April 11, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Aviva Vaknin, Gal Moshitch
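    Illustrative sketch: a most recently used (MRU) cache of detected faces, the data structure the abstract relies on; the capacity and the class and method names are assumed for illustration and are not Cisco's implementation.
      from collections import OrderedDict

      class MRUFaceCache:
          def __init__(self, capacity=32):
              self.capacity = capacity
              self._items = OrderedDict()          # face_id -> (location, face_image)

          def most_recent(self):
              # Return the most recently stored (location, face_image), or None if empty.
              if not self._items:
                  return None
              return self._items[next(reversed(self._items))]

          def store(self, face_id, location, face_image):
              self._items[face_id] = (location, face_image)
              self._items.move_to_end(face_id)     # mark as most recently used
              if len(self._items) > self.capacity:
                  self._items.popitem(last=False)  # evict the oldest entry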
  • Patent number: 9619697
    Abstract: A system and method for identity authentication, including: registering an identity card issued by an authoritative entity to an individual with an identity authentication platform including generating a reference face recognition template for the individual in response to the identity card presented by the individual during registration; obtaining a real-time photograph of the individual in response to an assertion of an identity made by the individual; and authenticating the assertion made by the individual by generating a target face recognition template in response to the real-time photograph and matching the target face recognition template to the reference face recognition template.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: April 11, 2017
    Assignee: HOTCOAL INC.
    Inventor: Ramesh Pabbichetty
  • Patent number: 9619698
    Abstract: Methods and apparatuses for athletic performance monitoring with body synchronization analysis are disclosed. In one example, a first sensor output associated with a movement of a user foot and a second sensor output associated with a movement of a user arm are received, the user arm being on the body side opposite the user foot. The first sensor output and the second sensor output are analyzed to identify a degree of synchronization between the movement of the user foot and the movement of the user arm.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: April 11, 2017
    Inventor: Thomas C. Chuang
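    Illustrative sketch: one plausible way to quantify the "degree of synchronization" between the opposite foot and arm signals is the peak of their normalized cross-correlation; this metric and the names are assumptions, since the abstract does not fix a formula.
      import numpy as np

      def synchronization_degree(foot_signal, arm_signal):
          # Normalize both sensor signals, then take the peak of their cross-correlation.
          foot = np.asarray(foot_signal, dtype=float)
          arm = np.asarray(arm_signal, dtype=float)
          foot = (foot - foot.mean()) / (foot.std() + 1e-9)
          arm = (arm - arm.mean()) / (arm.std() + 1e-9)
          corr = np.correlate(foot, arm, mode="full") / len(foot)
          return float(corr.max())                 # near 1.0 means well synchronized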
  • Patent number: 9619699
    Abstract: The present invention discloses a method and a system for enhancing the accuracy of human counting, in real time, in at least one frame of an image captured in a predefined area. The present invention detects humans in one or more frames by using at least one human detection modality to obtain a characteristic result for the captured image. The invention further calculates an activity probability associated with each human detection modality. The characteristic results and the activity probabilities are selectively integrated by using a fusion technique to enhance the accuracy of the human count and to select the most accurate human detection modality. Human counting is then performed based on the selection of the most accurate human detection modality.
    Type: Grant
    Filed: November 7, 2012
    Date of Patent: April 11, 2017
    Assignee: Tata Consultancy Services Limited
    Inventors: Rohit Gupta, Aniruddha Sinha, Arpan Pal, Aritra Chakravorty
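    Illustrative sketch: a probability-weighted fusion of per-modality head counts, one simple reading of the fusion step above; the weighting scheme and the names are assumptions, not Tata's patented technique.
      def fuse_counts(counts, activity_probabilities):
          """counts, activity_probabilities: dicts mapping modality name -> value."""
          total_p = sum(activity_probabilities.values())
          fused = sum(counts[m] * activity_probabilities[m] for m in counts) / total_p
          # The most accurate modality is taken to be the one with the highest activity probability.
          best_modality = max(activity_probabilities, key=activity_probabilities.get)
          return round(fused), best_modality

      # fuse_counts({"camera": 5, "thermal": 4}, {"camera": 0.9, "thermal": 0.4}) -> (5, "camera")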
  • Patent number: 9619700
    Abstract: Provided is an image processing device capable of specifying a character area included in an image even if a variable-density difference in an area included in the image other than the character area is large. A feature point specifying unit specifies corners of edges in a target image as feature points. An area obtaining unit obtains, based on a specified result of the feature point specifying unit, an area including a plurality of feature points aligned in a substantially straight line. A character area specifying unit specifies a character area in the target image based on the area obtained by the area obtaining unit.
    Type: Grant
    Filed: May 30, 2013
    Date of Patent: April 11, 2017
    Assignee: RAKUTEN, INC.
    Inventors: Hiromi Hirano, Makoto Okabe
  • Patent number: 9619701
    Abstract: Systems and methods include an application operating on a device. The application causes the graphic user interface of the device to display an initial instruction to obtain a full-view image that positions all of an item within a field of view of a camera on the device. The application automatically recognizes identified features of the full-view image, by using a processor in communication with the camera. After displaying the initial instruction, the application causes the graphic user interface to display a subsequent instruction to obtain a zoom-in image that positions only a portion of the item within the field of view of the camera. The application also automatically recognizes patterns from the zoom-in image, using the processor. Furthermore, the application performs an authentication process using the identified features and the patterns to determine whether the item is valid, using the processor.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: April 11, 2017
    Assignee: Xerox Corporation
    Inventors: Francois Ragnet, Damien Cramet
  • Patent number: 9619702
    Abstract: A handwriting recognition system converts word images on documents, such as document images of historical records, into computer searchable text. Word images (snippets) on the document are located, and have multiple word features identified. For each word image, a word feature vector is created representing multiple word features. Based on the similarity of word features (e.g., the distance between feature vectors), similar words are grouped together in clusters, and a centroid that has features most representative of words in the cluster is selected. A digitized text word is selected for each cluster based on review of a centroid in the cluster, and is assigned to all words in that cluster and is used as computer searchable text for those word images where they appear in documents. An analyst may review clusters to permit refinement of the parameters used for grouping words in clusters, including the adjustment of weights and other factors used for determining the distance between feature vectors.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: April 11, 2017
    Assignee: Ancestry.com Operations Inc.
    Inventors: Jack Reese, Michael Murdock, Shawn Reid, Laryn Brown
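    Illustrative sketch: grouping word-image feature vectors and picking a centroid per cluster, here with plain k-means; the clustering method, cluster count, and names are assumptions, since the abstract only requires grouping by feature-vector distance.
      import numpy as np

      def cluster_word_images(feature_vectors, n_clusters=10, n_iters=20, seed=0):
          X = np.asarray(feature_vectors, dtype=float)       # one row per word image
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), n_clusters, replace=False)]
          for _ in range(n_iters):
              labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
              for k in range(n_clusters):
                  if np.any(labels == k):
                      centers[k] = X[labels == k].mean(axis=0)
          # The cluster "centroid" word image is the member closest to its cluster center.
          centroid_indices = []
          for k in range(n_clusters):
              members = np.flatnonzero(labels == k)
              if len(members):
                  dists = np.linalg.norm(X[members] - centers[k], axis=1)
                  centroid_indices.append(int(members[np.argmin(dists)]))
          return labels, centroid_indices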
  • Patent number: 9619703
    Abstract: A method and system is provided for geo-demographic classification of a geographical region. The present application discloses an unsupervised learning method and system for analyzing satellite imagery and multimodal sensory data in fusion for geo-demographic clustering. The present application also discloses an inexpensive and faster method and system for geo-demographic classification of a geographical region.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: April 11, 2017
    Assignee: Tata Consultancy Services Limited
    Inventors: Monika Sharma, Hiranmay Ghosh, Kiran Francis
  • Patent number: 9619704
    Abstract: The present technology relates to a computer-implemented method for tracking an object in a sequence of multi-view input video images comprising the steps of acquiring a model of the object, tracking the object in the multi-view input video image sequence, and using the model.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: April 11, 2017
    Assignee: MAX-PLANCK GESELLSCHAFT ZUR FÖRDERUNG DER WISSENSCHAFTEN E.V.
    Inventors: Nils Hasler, Carsten Stoll, Christian Theobalt, Juergen Gall, Hans-Peter Seidel
  • Patent number: 9619705
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining the identity of an object in an image where the object in the image is in a disassembled state. In one aspect, a method includes accessing previous interactive sessions, each of the interactive sessions including images of a reference object in one or more disassembled states and each of the interactive sessions specifying an identity of the reference object in an assembled state; processing an image of a first object to identify characteristics of the first object, the first object being in a disassembled state in the image; comparing the image of the first object in the disassembled state to images of reference objects in disassembled states; and determining an identity of the first object based on the comparison and the identities of the reference objects in assembled states specified in the interactive sessions.
    Type: Grant
    Filed: May 8, 2015
    Date of Patent: April 11, 2017
    Assignee: Google Inc.
    Inventor: Paul G. Nordstrom
  • Patent number: 9619706
    Abstract: Systems and methods use an origin pattern to verify the authenticity of a host object. The origin pattern includes a text-based serial number and a computer-graphics based surface-texture component. The serial number includes a public manufacturer identifier and an object identifier sequence that is based on a hash of a private manufacturer identifier and a private object identifier. The surface-texture component includes a two dimensional texture mapped onto a three dimensional surface, the generation of the surface being based on the hash. In response to an authentication request for an origin pair, the system may verify the serial number component exists in a data store and generate a challenge surface-texture component based on data in the data store. If the challenge surface-texture component matches, the system may use a time-location stamp to determine an authenticity probability and provide an indication of authenticity in response to the verification request.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: April 11, 2017
    Assignee: Enceladus IP Holdings LLC
    Inventor: Wallace Penn Scott
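    Illustrative sketch: building and checking the serial-number component only; SHA-256, the string format, and the names are assumptions rather than the patented scheme, and the surface-texture component is omitted.
      import hashlib

      def make_serial(public_mfr_id, private_mfr_id, private_obj_id):
          # Object identifier sequence derived from a hash of the private identifiers.
          digest = hashlib.sha256(f"{private_mfr_id}:{private_obj_id}".encode()).hexdigest()
          return f"{public_mfr_id}-{digest[:16]}"

      def serial_exists(serial, data_store):
          # Authentication first verifies that the serial number exists in the data store.
          return serial in data_store

      # s = make_serial("ACME", "mfr-secret", "obj-0001"); serial_exists(s, {s}) -> True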
  • Patent number: 9619707
    Abstract: A photographing unit photographs a face of the user who is looking at a screen displayed on a display unit. An area detecting unit detects, from the photographed image of the photographing unit, an eye area of the user and at least one of a face area of the user or a predetermined part area of the user other than the user's eyes. An areal size/position information obtaining unit obtains areal size information and position information of the eye area, and areal size information and position information of the at least one of the face area or the predetermined part area. A gaze position estimation unit estimates a position in the screen that the user is gazing at, based on the areal size information and the position information.
    Type: Grant
    Filed: July 30, 2012
    Date of Patent: April 11, 2017
    Assignee: RAKUTEN, INC.
    Inventor: Ryuji Sakamaki
  • Patent number: 9619708
    Abstract: A method for detecting a main subject in an image comprises the steps of: (i) computing a plurality of saliency features from the image (14) with a control system (20); (ii) generating a spatial weight map for each of a plurality of image segments of the image (14) with the control system (20); (iii) adjusting the plurality of saliency features via the spatial weight map to generate a plurality of adjusted saliency features; (iv) combining at least two of the plurality of adjusted saliency features to generate a saliency map of the image (14); and (v) extracting the main subject from the saliency map of the image (14). Additionally, the plurality of saliency features can include at least two of a sharp/blur saliency feature, a spectral residual saliency feature, a color spatial distribution saliency feature, and one or more color contrast saliency features.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: April 11, 2017
    Assignee: NIKON CORPORATION
    Inventor: Li Hong
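    Illustrative sketch: steps (iii)-(iv) above, i.e. scaling each saliency feature by per-segment spatial weights and averaging the adjusted features into one saliency map; the averaging rule and the names are assumptions.
      import numpy as np

      def combine_saliency(feature_maps, segment_labels, segment_weights):
          """feature_maps: list of HxW arrays; segment_labels: HxW int array;
          segment_weights: dict segment id -> per-feature weight sequence."""
          weight_map = np.zeros(segment_labels.shape + (len(feature_maps),))
          for seg_id, weights in segment_weights.items():
              weight_map[segment_labels == seg_id] = weights
          adjusted = [fm * weight_map[..., i] for i, fm in enumerate(feature_maps)]
          return np.mean(adjusted, axis=0)        # combined saliency map of the image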
  • Patent number: 9619709
    Abstract: A method for locating markers in an image captured by a mobile device moving about an operating space. The method includes preprocessing an image to generate a set of image data, locating fixed features of markers by tracing edges of the fixed features, and extracting variable data payloads of each of the markers associated with the located fixed features. The fixed features of each of the markers may include a pair of parallel lines extending along opposite sides of a data area containing the variable data payload, and each of the lines extends a distance beyond each exposed end of the data area to avoid missing data when markers are not arranged orthogonally to the scan direction. The preprocessing involves rotating or skewing the image to provide rotated or skewed versions of the image to facilitate locating markers regardless of their angular orientation in the image.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: April 11, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: James Alexander Stark, Clifford Wong
  • Patent number: 9619710
    Abstract: A system for analysis of remotely sensed image data of parking facilities, storage lots, or road regions, for determining patterns over time and determining time-based information such as facility capacities or vehicle movement.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: April 11, 2017
    Assignee: DigitalGlobe, Inc.
    Inventor: Mark Tabb
  • Patent number: 9619711
    Abstract: Automatic characterization or categorization of portions of an input multispectral image based on a selected reference multispectral image. Sets (e.g., vectors) of radiometric descriptors of pixels of each component of a hierarchical representation of the input multispectral image can be collectively manipulated to obtain a set of radiometric descriptors for the component. Each component can be labeled as a (e.g., relatively) positive or negative instance of at least one reference multispectral image (e.g., mining materials, crops, etc.) through a comparison of the set of radiometric descriptors of the component and a set of radiometric descriptors for the reference multispectral image. Pixels may be labeled (e.g., via color, pattern, etc.) as positive or negative instances of the land use or type of the reference multispectral image in a resultant image based on components within which the pixels are found.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: April 11, 2017
    Assignee: DigitalGlobe, Inc.
    Inventor: Georgios Ouzounis
  • Patent number: 9619712
    Abstract: A head mounted device (HMD) includes a transparent display, sensors to generate sensor data, and a processor. The processor identifies a threat condition based on a threat pattern and the sensor data, and generates a warning notification in response to the identified threat condition. The threat pattern includes preconfigured thresholds for the sensor data. The HMD displays AR content comprising the warning notification in the transparent display.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: April 11, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait
  • Patent number: 9619713
    Abstract: Techniques for grouping images are disclosed. In some situations, the techniques include identifying at least one event-based image group among a plurality of images based on an event that is associated with each identified image, receiving a selection of one or more objects in a first image of the identified event-based image group, identifying other images in the identified event-based image group that each include at least one of the selected one or more objects, and associating the identified images with the first image. In one instance, the selected objects include individuals captured in the image.
    Type: Grant
    Filed: December 31, 2015
    Date of Patent: April 11, 2017
    Assignees: A9.com, Inc., Amazon Technologies, Inc.
    Inventors: Matthew W. Amacker, Joel D. Tesler, Piragash Velummylum
  • Patent number: 9619714
    Abstract: Various aspects of a method and device for video generation are disclosed herein. The method includes determination of direction and location information of the device in motion for a plurality of captured video frames. Based on the determined location information, a path of the device in motion is generated. For a captured video frame from the plurality of captured video frames, an angle between a first vector and a second vector is calculated. The first vector corresponds to the determined direction information associated with the captured video frame. The second vector corresponds to the generated path. The method further includes selection of the captured video frame, for the generation of the video, based on at least the calculated angle.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: April 11, 2017
    Assignee: SONY CORPORATION
    Inventors: William Schupp, Hiroki Takakura
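    Illustrative sketch: the frame-selection test above, keeping a frame when the angle between the capture direction and the local direction of the device path is small; the angular threshold and the names are assumed parameters.
      import numpy as np

      def keep_frame(direction_vec, path_point, next_path_point, max_angle_deg=30.0):
          d = np.asarray(direction_vec, dtype=float)
          p = np.asarray(next_path_point, dtype=float) - np.asarray(path_point, dtype=float)
          cos_angle = np.dot(d, p) / (np.linalg.norm(d) * np.linalg.norm(p) + 1e-9)
          angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
          return angle <= max_angle_deg           # select this frame for the generated video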
  • Patent number: 9619715
    Abstract: Alerts to object behaviors are prioritized for adjudication as a function of relative values of abandonment, foregroundness and staticness attributes. The attributes are determined from feature data extracted from video frame image data. The abandonment attribute indicates a level of likelihood of abandonment of an object. The foregroundness attribute quantifies a level of separation of foreground image data of the object from a background model of the image scene. The staticness attribute quantifies a level of stability of dimensions of a bounding box of the object over time. Alerts are also prioritized according to an importance or relevance value that is learned and generated from the relative abandonment, foregroundness and staticness attribute strengths.
    Type: Grant
    Filed: March 3, 2016
    Date of Patent: April 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Quanfu Fan, Sharathchandra U. Pankanti
  • Patent number: 9619716
    Abstract: A vision system of a vehicle includes a camera disposed at a vehicle and having a field of view exterior of the vehicle. The camera includes an imaging array having a plurality of photosensing elements arranged in a two dimensional array of rows and columns. The imaging array includes a plurality of sub-arrays comprising respective groupings of neighboring photosensing elements. An image processor is operable to perform a discrete cosine transformation of captured image data, and a Markov model compares at least one sub-array with a neighboring sub-array. The image processor is operable to adjust a classification of a sub-array responsive at least in part to the discrete cosine transformation and the Markov model.
    Type: Grant
    Filed: August 11, 2014
    Date of Patent: April 11, 2017
    Assignee: MAGNA ELECTRONICS INC.
    Inventor: Goerg Pflug
  • Patent number: 9619717
    Abstract: An apparatus for recognizing lane lines including a broken line. An image capture unit is configured to acquire an image of the surroundings including a roadway ahead of a subject vehicle. An edge-point extractor is configured to extract edge points in the image. A first-edge-point detector is configured to detect first edge points facing at least one missing section of a broken line in the edge points extracted by the edge-point extractor. A lane-line recognizer is configured to recognize a lane line using the edge points extracted by the edge-point extractor other than all or some of the first edge points.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: April 11, 2017
    Assignee: DENSO CORPORATION
    Inventors: Syunya Kumano, Naoki Kawasaki, Shunsuke Suzuki, Tetsuya Takafuji
  • Patent number: 9619718
    Abstract: A vehicle may include an object detection system and a control system. The object detection system may be configured to detect a presence of an object near the vehicle. In some implementations, the object detection system may include a sensor module and/or a camera system. The control system may be communicatively coupled to the object detection system and may be configured to determine an initial state of an area near the vehicle using the object detection system. The control system may be configured to activate a response if the object detection system detects the presence of an object near the vehicle based, at least in part, on the determined initial state. In some implementations, the response may include an activation of the camera system.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: April 11, 2017
    Inventors: Mark Michmerhuizen, Troy Mulder
  • Patent number: 9619719
    Abstract: Systems and methods are provided for detecting traffic signs. In one implementation, a traffic sign detection system for a vehicle include at least one image capture device configured to acquire at least one image of a scene including a traffic sign ahead of the vehicle. The traffic sign detection system also includes a data interface and at least one processing device programmed to receive the at least one image via the data interface, transform the at least one image, sample the transformed at least one image to generate a plurality of images having different sizes, convolve each of the plurality of images with a template image, compare each pixel value of each convolved image to a predetermined threshold, and select local maxima of pixel values within local regions of each convolved image as attention candidates, the local maxima being greater than the predetermined threshold.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: April 11, 2017
    Assignee: MOBILEYE VISION TECHNOLOGIES LTD.
    Inventors: Yair Kapach, Yoav Taieb, Yoel Krupnik
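    Illustrative sketch: the attention-candidate step, i.e. convolving resized copies of the image with a template and keeping above-threshold local maxima; the neighborhood size and the names are assumptions, not Mobileye's implementation.
      import numpy as np
      from scipy.ndimage import maximum_filter
      from scipy.signal import fftconvolve

      def attention_candidates(images, template, threshold):
          """images: list of 2-D arrays at different sizes; template: 2-D array."""
          candidates = []
          for scale, img in enumerate(images):
              # Flipping the template turns convolution into template matching (correlation).
              response = fftconvolve(img, template[::-1, ::-1], mode="same")
              local_max = response == maximum_filter(response, size=15)   # assumed 15x15 region
              ys, xs = np.nonzero(local_max & (response > threshold))
              candidates.extend((scale, int(y), int(x)) for y, x in zip(ys, xs))
          return candidates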
  • Patent number: 9619720
    Abstract: An imaging system for a vehicle is provided for distinguishing between tail lights of another vehicle and a flashing red stop light. The system includes an imager configured to image a forward external scene and to generate image data corresponding to the acquired images; and a processor configured to receive and analyze the image data to identify red light sources and to further analyze each red light source to determine if the red light source is detected for a predetermined time period. If the red light source ceases to be detected within a predetermined time period after it is first detected, the processor determines that the red light source is a flashing red stop light. Otherwise, if the red light source is detected for the predetermined time period, the processor determines that the red light source may be a tail light of another vehicle.
    Type: Grant
    Filed: August 19, 2014
    Date of Patent: April 11, 2017
    Assignee: GENTEX CORPORATION
    Inventors: Peter A Liken, Phillip R Pierce
  • Patent number: 9619721
    Abstract: To monitor a degree of attentiveness for a driver of a vehicle, a period is determined on the basis of a speed of the vehicle. It is determined whether an eyelid closed time and/or an eyelid closing time of the driver exceeds the period.
    Type: Grant
    Filed: October 14, 2015
    Date of Patent: April 11, 2017
    Assignees: Volkswagen AG, Audi AG
    Inventors: Nico Bogner, Gordon Großkopf, Andreas Zachmayer, Robert Büthorn, Stefan Brosig, Hendrik Franke, Asem Eltaher
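    Illustrative sketch: a speed-dependent eyelid-closure check; the linear speed-to-period mapping and the names are assumptions, since the abstract only states that the period is determined on the basis of vehicle speed.
      def attentiveness_alert(speed_kmh, eyelid_closed_s, eyelid_closing_s):
          # Allowed period shrinks as speed rises (mapping assumed for illustration).
          period_s = max(0.1, 0.5 - 0.003 * speed_kmh)
          return eyelid_closed_s > period_s or eyelid_closing_s > period_s

      # attentiveness_alert(speed_kmh=120, eyelid_closed_s=0.4, eyelid_closing_s=0.2) -> True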
  • Patent number: 9619722
    Abstract: A gaze direction detection device according to the present technology includes a detector for detecting a gaze of a driver over a predetermined period of time, a determiner for outputting second gaze information indicating that the driver is gazing, from first gaze information detected by the detector, a generator for generating a gaze distribution from the second gaze information output by the determiner, and a corrector for correcting the first gaze information detected by the detector, where the corrector calculates a center of a reference distribution that is set in advance and a center of the gaze distribution generated by the generator, and causes the center of the reference distribution and the center of the gaze distribution to overlap each other, and then calculates a correction parameter based on a difference between the reference distribution and the gaze distribution, and corrects the first gaze information with the correction parameter.
    Type: Grant
    Filed: October 19, 2015
    Date of Patent: April 11, 2017
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Hidetoshi Takeda, Masayuki Kimura
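    Illustrative sketch: the correction step, translating gaze samples so that the center of the observed gaze distribution overlaps the center of the preset reference distribution; a translation-only correction parameter and the names are assumptions.
      import numpy as np

      def correct_gaze(gaze_points, reference_points):
          gaze = np.asarray(gaze_points, dtype=float)        # N x 2 first gaze information
          ref = np.asarray(reference_points, dtype=float)    # M x 2 reference distribution
          correction = ref.mean(axis=0) - gaze.mean(axis=0)  # difference between the centers
          return gaze + correction, correction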
  • Patent number: 9619723
    Abstract: The present invention employs a first step of stationary face recognition, followed by a facial expression test, a continuous movement tracking test, and a 3D perspective check to identify and authenticate a subject, prevent photo spoofing and facemask spoofing, and determine whether the subject is a living person. The method requires a subject to present her face before a camera, which can be the built-in or peripheral camera of a mobile communication device. The method also requires displaying to the subject certain instructions and the real-time video feedback of the subject face on a display screen, which can be the built-in or peripheral display screen of the mobile communication device or mobile computing device. The 3D perspective check uses a single camera to take two images of the subject's face for calculating the stereoscopic view data of the subject's face.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: April 11, 2017
    Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited
    Inventors: Felix Chow, Arvin Wai Kai Tang
  • Patent number: 9619724
    Abstract: An information interchange unit, a storage unit, and a display controller are configured such that, after an image selection unit selects a first image and a second image, the information interchange unit interchanges, automatically, first image information of the first image with second image information of the second image, or interchanges, automatically, first position information of the first image with second position information of the second image, the storage unit stores and correlates the first image information and the second position information, and stores and correlates the second image information and the first position information, and the display controller controls, automatically, a display to display the one image based on the first image information and the second position information, and the other image based on the second image information and the first position information.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: April 11, 2017
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventors: Takahiko Watari, Tatsuya Sato
  • Patent number: 9619725
    Abstract: Embodiments include a system configured to process location information for objects in a site comprising an imaging device configured to take a picture of an object, the picture containing a unique identifier of the object; a global positioning system (GPS) component associated with the imaging device and configured to tag the image of the object with GPS location information of the object to generate a tagged image; a communications interface configured to transmit the tagged image to a server computer remote from the imaging device over an Internet Protocol (IP) network; and a processor of the server configured to perform Optical Character Recognition (OCR) on the picture and to create an indicator code corresponding to the identifier of the object, wherein the processor is further configured to create a processed result containing the indicator code and the location to locate the object within the site.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: April 11, 2017
    Assignee: HKI Systems and Service LLC
    Inventor: Henry S King
  • Patent number: 9619726
    Abstract: An information input device includes a first detection device, a second detection device, a coupling member, a connecting line, and a protective member. The coupling member includes a fixing portion, a holding portion, and a folding portion. The connecting line includes an intermediate portion connecting the first detection device and the second detection device that detect a position of the writing tool. At least a part of the intermediate portion extends between the first detection device and the second detection device and is arranged to face the folding portion. The protective member includes an arrangement portion whose length is longer than a length of the intermediate portion. An end portion of the protective member is fixed to at least one of the fixing portion and the holding portion. The arrangement portion is disposed between the folding portion and the intermediate portion of the connecting line.
    Type: Grant
    Filed: July 24, 2015
    Date of Patent: April 11, 2017
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventors: Takehiko Inaba, Atsushi Kasugai
  • Patent number: 9619727
    Abstract: An inspection device that performs pattern matching on a searched image performs matching between a template image of an inspection object and the searched image by using: a feature region extraction process unit that extracts a feature quantity from the template image acquired for learning; a feature quantity extraction process unit that extracts a feature quantity from the searched image acquired for learning; a mutual feature quantity calculation process unit that calculates a mutual feature quantity of the template image and the searched image from the feature quantity extracted from the template image and the feature quantity extracted from the searched image; a learning process unit that calculates, using a plurality of the mutual feature quantities, a discrimination boundary surface that determines matching success or failure; a process unit that calculates a plurality of the mutual feature quantities from an image acquired from the inspection object; and the plurality of mutual feature quantities and
    Type: Grant
    Filed: July 16, 2013
    Date of Patent: April 11, 2017
    Assignee: Hitachi High-Technologies Corporation
    Inventors: Wataru Nagatomo, Yuichi Abe, Hiroyuki Ushiba
  • Patent number: 9619728
    Abstract: Multiple reference fiducials are formed on a sample for charged particle beam processing of the sample. As one fiducial is degraded by the charged particle beam, a second fiducial is used to create one or more additional fiducials.
    Type: Grant
    Filed: May 31, 2015
    Date of Patent: April 11, 2017
    Assignee: FEI Company
    Inventor: Reinier Louis Warschauer
  • Patent number: 9619729
    Abstract: According to an embodiment, a density measuring device includes a first calculator, a second calculator, and a first generator. The first calculator calculates, from an image including objects of a plurality of classes classified according to a predetermined rule, for each of a plurality of regions formed by dividing the image, density of the objects captured in the region. The second calculator calculates, from the density of the objects captured in each of the regions, likelihood of each object class captured in each of the regions. The first generator generates density data, in which position corresponding to each of the regions in the image is assigned with the density of the object class having at least the higher likelihood than the lowest likelihood from among likelihoods calculated for object classes captured in the corresponding region.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: April 11, 2017
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Quoc Viet Pham
  • Patent number: 9619730
    Abstract: Multi-energy imaging is afforded with a single detector array by generating a first data set, indicative of a first radiation energy spectrum, using a first set of cells of the array, and by generating a second data set, indicative of a second radiation energy spectrum, using a second set of cells of the array (e.g., where substantially more cells are in the first set than the second). The first data set is comprised of measured data from the first set of cells and estimated data that would have been yielded from the second set of cells had the second set been configured to detect the first energy spectrum. The second data set is comprised of measured data yielded from the second set of cells and estimated data that would have been yielded from the first set of cells had the first set been configured to detect the second energy spectrum.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: April 11, 2017
    Assignee: Analogic Corporation
    Inventors: Julia Pavlovich, Ram Naidu
  • Patent number: 9619731
    Abstract: Methods for dip determination from an image obtained by a down-hole imaging tool. For each pixel forming the image, a probability that a symmetry axis coincides with the pixel is determined. A probability map is then generated, depicting the determined probability of each pixel coinciding with the symmetry axis. The probability map and the image are then superposed to generate a mapped image. The symmetry axis is then estimated based on the mapped image. Image pixels coinciding with a boundary of the geologic feature are then selected in multiple depth zones, and a segment of a sinusoid is fitted to the selected image pixels within each depth zone. Dip within each of the depth zones is then determined based on the fitted sinusoid segments therein.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: April 11, 2017
    Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
    Inventors: Taketo Akama, Josselin Kherroubi, Arnaud Etchecopar
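    Illustrative sketch: least-squares fitting of a sinusoid segment to boundary pixels in one depth zone and converting its amplitude to a dip angle; the parameterization and the names are assumptions, not Schlumberger's exact method.
      import numpy as np

      def fit_sinusoid(azimuths_rad, depths):
          # Fit depth(az) = a*sin(az) + b*cos(az) + c by linear least squares.
          az = np.asarray(azimuths_rad, dtype=float)
          A = np.column_stack([np.sin(az), np.cos(az), np.ones_like(az)])
          (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(depths, dtype=float), rcond=None)
          return a, b, c, float(np.hypot(a, b))              # last value is the amplitude

      def dip_angle_deg(amplitude, borehole_diameter):
          # Peak-to-peak excursion (2*amplitude) over the borehole diameter gives tan(dip).
          return float(np.degrees(np.arctan2(2.0 * amplitude, borehole_diameter)))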
  • Patent number: 9619732
    Abstract: Implementations generally relate to generating compositional media content. In some implementations, a method includes receiving a plurality of photos from a user, and determining one or more composition types from the photos. The method also includes generating compositions from the selected photos based on the one or more determined composition types. The method also includes providing the one or more generated compositions to the user.
    Type: Grant
    Filed: July 6, 2015
    Date of Patent: April 11, 2017
    Assignee: Google Inc.
    Inventors: Erik Murphy-Chutorian, Matthew Steiner, Vivek Kwatra, Shengyang Dai, John Spiegel, Nicholas Butko, Falk Sticken, Florian Kriener, Tom Binder, John Flynn, Troy Chinen, Steven Vandebogart, Nikolaos Trogkanis, Ingo Wehmeyer, Matthias Grundmann
  • Patent number: 9619733
    Abstract: Disclosed are a method of generating a hierarchical structured pattern based descriptor and a method and a device for recognizing an object in an image using the same. The method of generating a hierarchical structured pattern based descriptor may include generating a hierarchical structured pattern by defining a parent node based on a patch region for a feature point of an input image to be analyzed and defining a child node obtained by dividing the parent node to a predetermined depth, calculating a master direction vector of the patch region based on position coordinates and representative pixel values of the parent node and the child node, and calculating a rotation angle of the patch region based on the master direction vector and rotating the hierarchical structured pattern by the rotation angle.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: April 11, 2017
    Assignee: POSTECH ACADEMY—INDUSTRY FOUNDATION
    Inventors: In Su Kim, Dai Jin Kim
  • Patent number: 9619734
    Abstract: Land classification based on analysis of image data. Feature extraction techniques may be used to generate a feature stack corresponding to the image data to be classified. A user may identify training data from the image data from which a classification model may be generated using one or more machine learning techniques applied to one or more features of the image. In this regard, the classification model may in turn be used to classify pixels from the image data other than the training data. Additionally, quantifiable metrics regarding the accuracy and/or precision of the models may be provided for model evaluation and/or comparison. Further, the generation of models may be performed in a distributed system such that model creation and/or application may be distributed in a multi-user environment for collaborative and/or iterative approaches.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: April 11, 2017
    Assignee: DigitalGlobe, Inc.
    Inventors: Giovanni B. Marchisio, Carsten Tusk, Krzysztof Koperski, Mark D. Tabb, Jeffrey D. Shafer
  • Patent number: 9619735
    Abstract: An approach is provided in which a knowledge manager processes an image using a convolutional neural network. The knowledge manager generates a pixel-level heat map of the image that includes multiple decision points corresponding to multiple pixels of the image. The knowledge manager analyzes the pixel-level heat map and detects sets of decision points that correspond to target objects. In turn, the knowledge manager marks regions of the heat map corresponding to the detected sets of per-pixel decision points, each of the regions indicating a location of the target objects.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: April 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Nicholas A. Lineback, Michael S. Ranzinger
  • Patent number: 9619736
    Abstract: The image forming apparatus includes a non-volatile memory that stores data; a volatile memory into which the stored data is read for editing; a reading unit that reads the data from the non-volatile memory into the volatile memory; an editing unit that edits the read data; and a writing unit that writes the edited data into the non-volatile memory. The non-volatile memory further includes a writing type that is associated with the data and provides a rule of how to write at the time of writing the data into the non-volatile memory after editing the data on the volatile memory. The reading unit reads the writing type at the time of reading the data. The writing unit writes, based on the writing type associated with the edited data, the data into the non-volatile memory.
    Type: Grant
    Filed: January 22, 2015
    Date of Patent: April 11, 2017
    Assignee: Kyocera Document Solutions Inc.
    Inventor: Takashi Toyoda
  • Patent number: 9619737
    Abstract: A display apparatus that switches and displays a plurality of operation screens each including an operation object selectable by an operator and having a layered structure includes a storage unit which stores information on guidance of prompting an operator to select a predetermined operation object, a recognizing unit which recognizes that an operation object is added or deleted, and an update unit which updates information on guidance in response to a relation between an operation screen to/from which the operation object is added or deleted and an operation screen positioned higher or lower than the operation screen such that content of the guidance provided on the basis of the information reflects configurations of the plurality of operation screens after addition or deletion of the operation object, when the operation object is added or deleted.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: April 11, 2017
    Assignee: KONICA MINOLTA, INC.
    Inventor: Yoshifumi Wagatsuma
  • Patent number: 9619738
    Abstract: A method and system render rasterized data by receiving non-rasterized page description language data and a corresponding transformation matrix representing transformation operations to be performed. The non-rasterized page description language data is rasterized to create rasterized data. The corresponding transformation matrix is decomposed into a plurality of individual transformation operation matrices, and a discrete transformation operation value, from each corresponding individual transformation operation matrix, is generated for each transformation operation to be performed upon the rasterized data. The transformation operations are performed upon the rasterized data based upon the generated discrete transformation operation values.
    Type: Grant
    Filed: December 11, 2009
    Date of Patent: April 11, 2017
    Assignee: Xerox Corporation
    Inventor: Paul Roberts Conlon
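    Illustrative sketch: decomposing a 2x3 affine matrix into discrete translate / rotate / scale / shear values, one common factorization; the convention and the names are assumptions, since the abstract does not prescribe a particular decomposition.
      import math

      def decompose_affine(a, b, c, d, tx, ty):
          # Linear part treated as [[a, b], [c, d]] acting on column vectors (assumed non-degenerate).
          scale_x = math.hypot(a, c)
          rotation_deg = math.degrees(math.atan2(c, a))
          det = a * d - b * c
          scale_y = det / scale_x
          shear = (a * b + c * d) / det            # tangent of the shear angle
          return {"translate": (tx, ty), "rotate_deg": rotation_deg,
                  "scale": (scale_x, scale_y), "shear": shear}

      # decompose_affine(0, -1, 1, 0, 5, 7) -> 90 degree rotation, unit scale, no shear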
  • Patent number: 9619739
    Abstract: Determination is made as to whether print setting information is designated for received print data, the print setting information being for designating a physical sheet size to be used for printing. When the print setting information is not designated and when specific character string information in the print data is set, a logical sheet size is set as the physical sheet size, and when specific character string information in the print data is not set, a default sheet size is set as the physical sheet size.
    Type: Grant
    Filed: June 5, 2015
    Date of Patent: April 11, 2017
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Norihiko Kobayashi
  • Patent number: 9619740
    Abstract: One or more image processing apparatuses, one or more image processing methods and one or more storage mediums are provided herein. In at least one embodiment, an image processing apparatus includes an acquisition unit configured to acquire one or more objects, a check unit configured to perform check processing for checking whether to combine an object with another object, for each of a plurality of objects acquired by the acquisition unit, and a determining unit configured to determine, based on a result of the check processing for each of the plurality of objects, whether to perform the check processing for checking whether to combine an object with another object, for an object acquired subsequently to the plurality of objects by the acquisition unit.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: April 11, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hiroyuki Nakane
  • Patent number: 9619741
    Abstract: A processor of a card may detect variations (e.g., position, velocity, acceleration and direction) of a read head in relation to the card. Based on certain parameters (e.g., card length, initially detected read head position, and read head velocity) a processor of a card may adjust synchronization bit patterns that may synchronize communications between the card and a read head of a magnetic stripe reader. A processor of a card may generate a number of leading synchronization bits that is different than a number of trailing synchronization bits.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: April 11, 2017
    Assignee: DYNAMICS INC.
    Inventor: Christopher J. Rigatti