Patents by Inventor Edgar A. Bernal

Edgar A. Bernal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9525860
    Abstract: What is disclosed is a system and method for enhancing the spatio-temporal resolution of a depth data stream. In one embodiment, time-sequential reflectance frames and time-sequential depth frames of a scene are received. If the temporal resolution of the reflectance frames is greater than that of the depth frames, a new depth frame is generated based on correlations determined between motion patterns in the sequence of reflectance frames and the sequence of depth frames. The new depth frame is inserted into the sequence of depth frames at a selected time point. If the spatial resolution of the reflectance frames is greater than that of the depth frames, the spatial resolution of a selected depth frame is enhanced by generating new pixel depth values which are added to the selected depth frame. The spatially enhanced depth frame is then inserted back into the sequence of depth frames.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: December 20, 2016
    Assignee: Xerox Corporation
    Inventors: Wencheng Wu, Edgar A. Bernal, Himanshu J. Madhu, Michael C. Mongeon
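The spatial-enhancement step above uses the higher-resolution reflectance frames to guide the creation of new depth values. Below is a minimal sketch of one way such reflectance-guided upsampling can be done, using joint bilateral upsampling; the function name, the parameters (sigma_s, sigma_r, radius), and the assumption of reflectance values normalized to [0, 1] are illustrative and not drawn from the patent.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, reflect_hr, scale, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample a low-resolution depth frame to the resolution of a reflectance
    frame, letting reflectance edges guide where new depth values are created.
    reflect_hr is assumed grayscale with values in [0, 1]; scale = H / h."""
    reflect = reflect_hr.astype(float)
    H, W = reflect.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale              # position on the low-res grid
            y0, y1 = max(0, round(yl) - radius), min(h - 1, round(yl) + radius)
            x0, x1 = max(0, round(xl) - radius), min(w - 1, round(xl) + radius)
            acc = norm = 0.0
            for qy in range(y0, y1 + 1):
                for qx in range(x0, x1 + 1):
                    # Spatial closeness measured on the low-res grid ...
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_s ** 2))
                    # ... and reflectance similarity measured on the high-res grid.
                    ry, rx = min(H - 1, int(qy * scale)), min(W - 1, int(qx * scale))
                    wr = np.exp(-(reflect[y, x] - reflect[ry, rx]) ** 2 / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lr[qy, qx]
                    norm += ws * wr
            out[y, x] = acc / max(norm, 1e-12)
    return out
```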
  • Patent number: 9514537
    Abstract: What is disclosed is a system and method for adaptively reconstructing a depth map of a scene. In one embodiment, upon receiving a mask identifying a region of interest (ROI), a processor changes either a spatial attribute of the pattern of source light projected onto the scene by a light modulator, which projects an undistorted pattern of light with known spatio-temporal attributes onto the scene, or an operative resolution of a depth map reconstruction module. A sensing device detects the reflected pattern of light. A depth map of the scene is generated by the depth map reconstruction module by establishing correspondences between spatial attributes in the detected pattern and spatial attributes of the projected undistorted pattern and triangulating the correspondences to characterize differences therebetween. The resulting depth map has a higher spatial resolution in the ROI than at locations outside the ROI.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: December 6, 2016
    Assignee: Xerox Corporation
    Inventors: Edgar A. Bernal, Wencheng Wu, Lalit Keshav Mestha
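The depth map above is obtained by triangulating correspondences between the projected and detected patterns. A minimal sketch of the underlying triangulation for a rectified projector-camera pair follows, assuming a pinhole model with the focal length in pixels and a known baseline; the variable names and example values are illustrative.

```python
import numpy as np

def depth_from_correspondences(x_detected, x_projected, focal_px, baseline_m):
    """Triangulate depth from correspondences between a detected structured-light
    pattern and the known projected pattern (rectified projector-camera pair).
    The disparity is the horizontal shift, in pixels, between where a pattern
    feature was projected and where it was observed."""
    disparity = np.asarray(x_projected, float) - np.asarray(x_detected, float)
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)  # avoid divide-by-zero
    return focal_px * baseline_m / disparity   # depth in metres

# Example: a feature observed 40 px from where it was projected, with a 600 px
# focal length and a 7.5 cm projector-camera baseline -> about 1.125 m.
print(depth_from_correspondences([410.0], [450.0], focal_px=600.0, baseline_m=0.075))
```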
  • Publication number: 20160337555
    Abstract: Methods and systems for automatically synchronizing videos acquired via two or more cameras with overlapping views in a multi-camera network. Reference lines within the overlapping field of view of the two (or more) cameras are determined, where the reference lines connect two or more pairs of corresponding points. Spatiotemporal maps of the reference lines are then obtained, and an optimal alignment between the video segments obtained from the cameras is determined based on registration of the spatiotemporal maps.
    Type: Application
    Filed: May 14, 2015
    Publication date: November 17, 2016
    Inventors: Qun Li, Edgar A. Bernal
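A minimal sketch of the spatiotemporal-map idea described above: each camera's map stacks the intensity profile sampled along a reference line over time, and the temporal offset is estimated by registering the two maps' activity signals. The sampling scheme, activity signal, and correlation search below are illustrative simplifications, not the patented registration procedure.

```python
import numpy as np

def spatiotemporal_map(frames, line_pts):
    """Stack the intensity profile sampled along a reference line over time.
    frames: iterable of 2-D grayscale frames; line_pts: list of (row, col) samples."""
    rows, cols = zip(*line_pts)
    return np.stack([f[rows, cols] for f in frames])   # shape: (num_frames, num_points)

def estimate_offset(map_a, map_b, max_lag=50):
    """Find the frame offset (positive means camera A lags camera B) that best
    aligns two spatiotemporal maps, by correlating their temporal activity."""
    sig_a = np.abs(np.diff(map_a.astype(float), axis=0)).sum(axis=1)
    sig_b = np.abs(np.diff(map_b.astype(float), axis=0)).sum(axis=1)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a, b = (sig_a[lag:], sig_b[:len(sig_b) - lag]) if lag >= 0 else (sig_a[:lag], sig_b[-lag:])
        n = min(len(a), len(b))
        if n < 2:
            continue
        score = np.corrcoef(a[:n], b[:n])[0, 1]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```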
  • Patent number: 9483838
    Abstract: This disclosure provides a method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification. According to an exemplary embodiment, a computer-implemented method of automated sequencing of vehicles in a side-by-side drive-thru comprises: a) capturing, with an image capturing device, video of a merge-point area associated with multiple merging lanes of traffic; b) detecting in the video a vehicle as it traverses the merge-point area; c) classifying the detected vehicle as coming from one of the merging lanes; and d) aggregating the vehicle classifications performed in step c) to generate a merge sequence of detected vehicles.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: November 1, 2016
    Assignee: Xerox Corporation
    Inventors: Orhan Bulan, Edgar A. Bernal, Aaron M. Burry, Yusuf Oguzhan Artan
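A minimal sketch of appearance-based merge-point classification in the spirit of the abstract above, using a nearest-centroid classifier over crudely downsampled, fixed-size merge-region crops; the feature choice and classifier are illustrative stand-ins, not the patented embodiment.

```python
import numpy as np

def train_lane_centroids(labeled_crops):
    """Average downsampled, fixed-size merge-region crops per lane label.
    labeled_crops: list of (crop_2d, lane_label) training pairs."""
    feats = {}
    for crop, lane in labeled_crops:
        feats.setdefault(lane, []).append(crop[::8, ::8].astype(float).ravel())
    return {lane: np.mean(v, axis=0) for lane, v in feats.items()}

def classify_and_sequence(detected_crops, centroids):
    """Classify each vehicle detected at the merge point by nearest lane centroid,
    then aggregate the per-vehicle decisions into the merge sequence."""
    seq = []
    for crop in detected_crops:
        f = crop[::8, ::8].astype(float).ravel()
        seq.append(min(centroids, key=lambda lane: np.linalg.norm(f - centroids[lane])))
    return seq
```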
  • Patent number: 9477892
    Abstract: A method for training a vehicle detection system used in street occupancy estimation of stationary vehicles. The method includes defining first and second areas on an image plane of an image capture device associated with monitoring for detection of vehicles. The method includes receiving video data from a sequence of frames captured by the image capture device. The method includes determining candidate frames that include objects relevant to a classification task in the second area. The method includes extracting the objects from the candidate frames, extracting features of each extracted object, and assigning labels to each extracted object. The method includes training at least one classifier using the labels and extracted features. The method includes using the at least one trained classifier to classify a stationary vehicle detected in the first area.
    Type: Grant
    Filed: March 26, 2014
    Date of Patent: October 25, 2016
    Assignee: Xerox Corporation
    Inventors: Wencheng Wu, Edgar A. Bernal, Yao Rong Wang, Robert P. Loce, Orhan Bulan
  • Patent number: 9471889
    Abstract: A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at or near a region of interest (ROI) of defined start points in the video data; tracking the movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining whether the location of a tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence of where the tracked subject places among other subjects approaching an end-event point; and updating a sequence of end-events to match the observed sequence of subjects in the single queue lane.
    Type: Grant
    Filed: April 24, 2014
    Date of Patent: October 18, 2016
    Assignee: Xerox Corporation
    Inventors: Aaron M. Burry, Peter Paul, Edgar A. Bernal, Orhan Bulan
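A minimal sketch of the sequence-update step described above: once tracked subjects are observed crossing the merge point, pending end-events are reordered to match the observed order. The event dictionaries and field names are hypothetical.

```python
def update_event_sequence(pending_events, observed_merge_order):
    """Reorder pending end-events (e.g., order-fulfillment events keyed by a
    hypothetical 'subject_id') to match the order in which tracked subjects
    crossed the merge point."""
    by_subject = {e["subject_id"]: e for e in pending_events}
    reordered = [by_subject[sid] for sid in observed_merge_order if sid in by_subject]
    # Keep any events whose subjects were not (yet) observed at the merge point.
    tail = [e for e in pending_events if e["subject_id"] not in set(observed_merge_order)]
    return reordered + tail

# Example: end-events were queued as A, B, C, but the subjects merged as B, A, C.
pending = [{"subject_id": "A"}, {"subject_id": "B"}, {"subject_id": "C"}]
print(update_event_sequence(pending, ["B", "A", "C"]))
```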
  • Patent number: 9442176
    Abstract: This disclosure provides methods and systems for forming a trajectory of a moving vehicle captured with an image capturing device. According to one exemplary embodiment, a method forms a trajectory of a moving vehicle and determines whether the vehicle is moving in a permitted or an unpermitted manner relative to applicable motor vehicle lane restriction laws and/or regulations.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: September 13, 2016
    Assignee: Xerox Corporation
    Inventors: Orhan Bulan, Edgar A. Bernal, Robert P. Loce
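A minimal sketch of how a formed trajectory (a list of per-frame centroids) might be tested against a lane restriction, here reduced to checking whether the trajectory crosses a restricted boundary line in the image plane; the geometry and example values are illustrative, not the patented test.

```python
import numpy as np

def side_of_line(pt, a, b):
    """Signed side of point pt relative to the directed line a -> b (image coords)."""
    return np.sign((b[0] - a[0]) * (pt[1] - a[1]) - (b[1] - a[1]) * (pt[0] - a[0]))

def crosses_restricted_line(trajectory, line_a, line_b):
    """A trajectory (list of (x, y) centroids over frames) violates the restriction
    if it switches sides of the restricted lane boundary."""
    sides = [s for p in trajectory if (s := side_of_line(p, line_a, line_b)) != 0]
    return any(s1 != s2 for s1, s2 in zip(sides, sides[1:]))

# Example: a vehicle drifting across a boundary running vertically at x = 100.
traj = [(80, 10), (90, 30), (105, 50), (120, 70)]
print(crosses_restricted_line(traj, (100, 0), (100, 200)))   # True
```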
  • Patent number: 9436277
    Abstract: A method for computing output using a non-contact (invisible) input signal includes acquiring depth data of a scene captured by a depth-capable sensor. The method includes generating a temporal series of depth maps corresponding to the depth data. The method includes generating at least one volumetric attribute from the depth data. The method includes generating an output based on the volumetric attribute to control actions.
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: September 6, 2016
    Assignee: Xerox Corporation
    Inventors: Michael R. Furst, Edgar A. Bernal, Robert P. Loce, Lalit K. Mestha
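A minimal sketch of computing a volumetric attribute from a temporal series of depth maps and turning it into a control output, assuming a static reference depth map and a roughly constant per-pixel footprint; the threshold-based output rule is an illustrative simplification.

```python
import numpy as np

def volumetric_attribute(depth_ref, depth_cur, pixel_area_m2):
    """Approximate the volume displaced in front of a reference surface:
    per-pixel depth displacement times an (assumed constant) pixel footprint."""
    displacement = np.clip(depth_ref - depth_cur, 0.0, None)   # metres
    return float(displacement.sum() * pixel_area_m2)           # cubic metres

def control_signal(depth_frames, depth_ref, pixel_area_m2, threshold_m3):
    """Emit a binary control output per depth frame based on the volumetric attribute."""
    return [volumetric_attribute(depth_ref, d, pixel_area_m2) > threshold_m3
            for d in depth_frames]
```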
  • Patent number: 9424747
    Abstract: This disclosure provides methods and systems for recording a predetermined event associated with a moving object, where the predetermined event is captured with an image capturing unit and one or more of the associated frames are compressed, producing one or more motion vectors. According to one exemplary embodiment, vehicle counting is performed based on the motion vectors produced during the data compression process, either inline or offline.
    Type: Grant
    Filed: April 23, 2012
    Date of Patent: August 23, 2016
    Assignee: Xerox Corporation
    Inventors: Edgar A. Bernal, Robert P. Loce, Orhan Bulan
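A minimal sketch of counting vehicles from the block motion vectors a video encoder already produces, by detecting bursts of motion activity inside a virtual detection zone; the zone layout and threshold are illustrative.

```python
import numpy as np

def count_vehicles(mv_frames, zone, thresh=5.0):
    """Count passing vehicles from block motion vectors produced during video
    compression. mv_frames: list of (H, W, 2) motion-vector fields (pixels/frame);
    zone: (row0, row1, col0, col1) detection region in block coordinates."""
    r0, r1, c0, c1 = zone
    active_prev, count = False, 0
    for mv in mv_frames:
        mag = np.linalg.norm(mv[r0:r1, c0:c1], axis=-1).mean()
        active = mag > thresh
        if active_prev and not active:   # burst of motion ended -> one vehicle passed
            count += 1
        active_prev = active
    return count
```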
  • Patent number: 9418426
    Abstract: A camera outputs video as a sequence of video frames having pixel values in a first (e.g., relatively low dimensional) color space, where the first color space has a first number of channels. An image-processing device maps the video frames to a second (e.g., relatively higher dimensional) color representation of video frames. The mapping causes the second color representation of video frames to have a greater number of channels relative to the first number of channels. The image-processing device extracts a second color representation of a background frame of the scene. The image-processing device can then detect foreground objects in a current frame of the second color representation of video frames by comparing the current frame with the second color representation of a background frame. The image-processing device then outputs an identification of the foreground objects in the current frame of the video.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: August 16, 2016
    Assignee: Xerox Corporation
    Inventors: Edgar A. Bernal, Qun Li
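A minimal sketch of the idea above: lift each RGB frame into a higher-dimensional color representation and detect foreground by distance to a background model maintained in that lifted space. The particular lifting (normalized chromaticity plus opponent channels) and the threshold are illustrative choices, not the mapping claimed in the patent.

```python
import numpy as np

def lift_color(frame_rgb):
    """Map an RGB frame (H, W, 3) to a higher-dimensional color representation:
    RGB plus normalized chromaticity and two opponent channels (8 channels)."""
    rgb = frame_rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6
    chroma = rgb / s                                    # 3 channels
    opp = np.stack([rgb[..., 0] - rgb[..., 1],
                    0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]], axis=-1)
    return np.concatenate([rgb, chroma, opp], axis=-1)  # (H, W, 8)

def foreground_mask(frame_rgb, background_lifted, thresh=40.0):
    """Detect foreground pixels by distance to the lifted background model
    (e.g., the lifted representation of a temporally averaged background frame)."""
    diff = lift_color(frame_rgb) - background_lifted
    return np.linalg.norm(diff, axis=-1) > thresh
```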
  • Publication number: 20160231411
    Abstract: A mobile electronic device processes a sequence of images to identify and re-identify an object of interest in the sequence. An image sensor of the device receives a sequence of images. The device detects an object in a first image, along with the positional parameters of the device that correspond to the object in the first image, and determines a range of positional parameters within which the object may appear in the field of view of the device. When the device detects that the object of interest has exited the field of view and subsequently determines from motion sensor data that the object has likely re-entered the field of view, it analyzes the current frame to confirm that the object of interest has re-entered the field of view.
    Type: Application
    Filed: February 11, 2015
    Publication date: August 11, 2016
    Inventors: Jayant Kumar, Qun Li, Edgar A. Bernal, Raja Bala
  • Publication number: 20160234464
    Abstract: A computer-vision based method for validating an activity workflow of a human performer includes identifying a target activity. The method includes determining an expected sequence of actions associated with the target activity. The method includes receiving a video stream from an image capture device monitoring an activity performed by an associated human performer. The method includes detecting an external cue in the video stream. The method includes designating the frame capturing the external cue as the first frame in a key frame sequence. The method includes determining an action being performed by the associated human performer in the key frame sequence. In response to determining that the action in the key frame sequence matches an expected action in the target activity, the method verifies the action as having been performed in the monitored activity.
    Type: Application
    Filed: April 16, 2015
    Publication date: August 11, 2016
    Inventors: Robert P. Loce, Beilei Xu, Edgar A. Bernal, Saurabh Prabhat, Wencheng Wu
  • Patent number: 9412185
    Abstract: A method for reconstructing an image of a scene captured using a compressed sensing device. A mask is received which identifies at least one region of interest in an image of the scene. Measurements of the scene are then obtained using a compressed sensing device comprising, at least in part, a spatial light modulator configuring a plurality of spatial patterns according to a set of basis functions, each having a different spatial resolution. The spatial resolution is adaptively modified according to the mask. Each pattern focuses incoming light from the scene onto a detector which samples sequential measurements of light. These measurements comprise a sequence of projection coefficients corresponding to the scene. Thereafter, an appearance of the scene is reconstructed utilizing a compressed sensing framework which reconstructs the image from the sequence of projection coefficients.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: August 9, 2016
    Assignee: Xerox Corporation
    Inventors: Edgar A. Bernal, Beilei Xu, Lalit Keshav Mestha
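A minimal sketch of reconstructing a scene from single-pixel (compressed sensing) measurements, where the rows of the measurement matrix play the role of the spatial-light-modulator patterns; the ROI-adaptive pattern resolution described above is not modeled, and the ISTA solver and demo sizes are illustrative assumptions.

```python
import numpy as np

def ista_reconstruct(phi, y, lam=0.05, iters=200):
    """Recover a sparse image vector x from measurements y = phi @ x via
    iterative shrinkage-thresholding (ISTA). phi: (M, N) pattern matrix whose
    rows are the modulator patterns; y: (M,) detector samples."""
    x = np.zeros(phi.shape[1])
    step = 1.0 / np.linalg.norm(phi, 2) ** 2               # 1 / Lipschitz constant
    for _ in range(iters):
        grad = phi.T @ (phi @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.clip(np.abs(z) - lam * step, 0, None)   # soft threshold
    return x

# Tiny demo: a 64-pixel "image" with 4 nonzero pixels, measured with 24 random patterns.
rng = np.random.default_rng(0)
x_true = np.zeros(64)
x_true[rng.choice(64, 4, replace=False)] = 1.0
phi = rng.standard_normal((24, 64))
print(np.round(ista_reconstruct(phi, phi @ x_true), 2).reshape(8, 8))
```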
  • Patent number: 9405974
    Abstract: A system and method for optimizing video-based tracking of an object of interest are provided. A video of a regularized motion environment comprising multiple video frames is acquired, and an initial instance of an object of interest is detected in one of the frames. The expected size and orientation of the object of interest as a function of the object's location are then determined. The location of the object of interest in the next frame is then determined using the expected size and orientation of the object of interest.
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: August 2, 2016
    Assignee: Xerox Corporation
    Inventors: Edgar A. Bernal, Howard A. Mizes, Robert P. Loce
  • Publication number: 20160217575
    Abstract: A camera outputs video as a sequence of video frames having pixel values in a first (e.g., relatively low dimensional) color space, where the first color space has a first number of channels. An image-processing device maps the video frames to a second (e.g., relatively higher dimensional) color representation of video frames. The mapping causes the second color representation of video frames to have a greater number of channels relative to the first number of channels. The image-processing device extracts a second color representation of a background frame of the scene. The image-processing device can then detect foreground objects in a current frame of the second color representation of video frames by comparing the current frame with the second color representation of a background frame. The image-processing device then outputs an identification of the foreground objects in the current frame of the video.
    Type: Application
    Filed: January 27, 2015
    Publication date: July 28, 2016
    Inventors: Edgar A. Bernal, Qun Li
  • Patent number: 9390328
    Abstract: This disclosure provides a static occlusion handling method and system for use with appearance-based video tracking algorithms where static occlusions are present. The method and system assume that the objects to be tracked move according to structured motion patterns within a scene, such as vehicles moving along a roadway. A primary concept is to replicate pixels associated with the tracked object from previous frames into current or future frames when the tracked object coincides with a static occlusion, with the predicted motion of the tracked object serving as the basis for replicating the pixels.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: July 12, 2016
    Assignee: Xerox Corporation
    Inventors: Matthew Adam Shreve, Qun Li, Edgar A. Bernal, Robert P. Loce
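A minimal sketch of the pixel-replication idea above: when the tracked object overlaps a known static occlusion, occluded pixels in the current frame are filled with pixels from the previous frame shifted by the object's predicted motion. The integer-shift model (np.roll, ignoring wrap-around at the image border) is an illustrative simplification.

```python
import numpy as np

def replicate_occluded_pixels(prev_frame, cur_frame, occlusion_mask, motion_dx, motion_dy):
    """Fill pixels of the current frame that fall inside a static occlusion with
    pixels from the previous frame shifted by the object's predicted motion, so an
    appearance-based tracker still 'sees' the object behind the occluder.
    occlusion_mask: boolean (H, W) map of the static occlusion; motion in pixels."""
    shifted = np.roll(np.roll(prev_frame, motion_dy, axis=0), motion_dx, axis=1)
    filled = cur_frame.copy()
    filled[occlusion_mask] = shifted[occlusion_mask]
    return filled
```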
  • Patent number: 9390329
    Abstract: This disclosure provides a method and system to locate/detect static occlusions in a captured scene that includes a tracked object. According to an exemplary method, static occlusions are automatically located by monitoring the motion of one or more objects in the scene over time, with the use of an associated accumulator array.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: July 12, 2016
    Assignee: Xerox Corporation
    Inventors: Matthew Adam Shreve, Edgar A. Bernal, Qun Li, Robert P. Loce
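One plausible reading of the accumulator-array approach above, sketched minimally: accumulate per-pixel foreground detections over time and flag pixels that are detected far less often than their well-traveled surroundings. The box-filter neighborhood and the thresholds are illustrative assumptions, not the patented criterion.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def occlusion_map(foreground_masks, neighborhood=15, ratio=0.3):
    """Locate candidate static occlusions from an accumulator array of per-pixel
    foreground detections: pixels detected far less often than their local
    neighborhood, yet lying on well-traveled paths, are flagged as occluded."""
    acc = np.zeros(foreground_masks[0].shape, dtype=float)
    for m in foreground_masks:
        acc += m.astype(float)
    local = uniform_filter(acc, size=neighborhood)      # local mean of the accumulator
    traveled = local > 0.2 * acc.max()                  # only consider busy regions
    return traveled & (acc < ratio * local)
```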
  • Patent number: 9384554
    Abstract: What is disclosed is a system and method for contemporaneously reconstructing images of a scene illuminated with unstructured and structured illumination sources. In one embodiment, the method comprises capturing a first 2D image containing energy reflected from a scene being illuminated by a structured illumination source and a second 2D image containing energy reflected from the scene being illuminated by an unstructured illumination source. A controller effectuates a manipulation of the structured and unstructured illumination sources during capture of the video. A processor is configured to execute machine readable program instructions enabling the controller to manipulate the illumination sources and effectuating the contemporaneous reconstruction of a 2D intensity map of the scene using the second 2D image and of a 3D surface map of the scene using the first 2D image. The reconstruction is effectuated by manipulating the illumination sources.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: July 5, 2016
    Assignee: Xerox Corporation
    Inventors: Beilei Xu, Lalit Keshav Mestha, Edgar A. Bernal
  • Patent number: 9377294
    Abstract: What is disclosed is a wireless cellular device capable of determining a volume of an object in an image captured by a camera of that device. In one embodiment, the wireless cellular device comprises an illuminator for projecting a pattern of structured light with known spatial characteristics, and a camera for capturing images of an object for which a volume is to be estimated. The camera is sensitive to the wavelength range of the projected pattern of structured light. A spatial distortion is introduced by the reflection of the projected pattern off a surface of the object. A processor executes machine readable program instructions for performing the method of: receiving an image of the object from the camera; processing the image to generate a depth map; and estimating the volume of the object from the depth map. A method for using the present wireless cellular device is also provided.
    Type: Grant
    Filed: June 18, 2013
    Date of Patent: June 28, 2016
    Assignee: Xerox Corporation
    Inventors: Wencheng Wu, Edgar A. Bernal, Lalit Keshav Mestha, Paul R. Austin
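A minimal sketch of estimating an object's volume from a depth map, assuming the depth of the supporting plane is known and using a pinhole model to convert each pixel into a metric footprint; the variable names and the planar-support assumption are illustrative, not the patented procedure.

```python
import numpy as np

def estimate_volume(depth_map, plane_depth_m, focal_px, object_mask):
    """Estimate an object's volume from a depth map: the height of the object
    surface above a known supporting plane, times each pixel's metric footprint
    at the object's depth (pinhole model: footprint = (Z / f)^2 per pixel)."""
    Z = depth_map[object_mask]                          # metres to the object surface
    height = np.clip(plane_depth_m - Z, 0.0, None)      # metres above the plane
    footprint = (Z / focal_px) ** 2                     # square metres per pixel
    return float((height * footprint).sum())            # cubic metres
```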
  • Publication number: 20160173771
    Abstract: A method and system for reconstructing an image of a scene comprises configuring a digital light modulator according to a spatially varying pattern. Light energy associated with the scene and incident on the spatially varying pattern is collected and optically focused onto at least two photodetectors. Data indicative of the intensity of the focused light energy from each of the photodetectors is collected, and the data from the photodetectors is then combined to reconstruct an image of the scene.
    Type: Application
    Filed: December 11, 2014
    Publication date: June 16, 2016
    Inventors: Edgar A. Bernal, Xuejin Wen, Qun Li, Raja Bala