Patents by Inventor Raja Bala

Raja Bala has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220076434
    Abstract: One embodiment can provide a system for detecting occlusion at an orifice of a three-dimensional (3D) printer nozzle while the printer nozzle is jetting liquid droplets. During operation, the system uses one or more cameras to capture an image of the orifice of the printer nozzle while the 3D printer nozzle is jetting liquid droplets. The system performs an image-analysis operation on the captured image to identify occluded regions within the orifice of the 3D printer nozzle, compute an occlusion fraction based on the determined occluded regions, and generate an output based on the computed occlusion fraction, thereby facilitating effective maintenance of the 3D printer.
    Type: Application
    Filed: November 17, 2021
    Publication date: March 10, 2022
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Vijay Kumar Baikampady Gopalkrishna, Raja Bala
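
The occlusion-fraction step described in this abstract reduces to simple image arithmetic. The sketch below is only an illustration, not the patented method: the circular orifice mask, the brightness threshold, and the maintenance alert level are all assumptions introduced here.

```python
import numpy as np

def occlusion_fraction(gray: np.ndarray, center: tuple, radius: float,
                       threshold: float = 0.5) -> float:
    """Fraction of orifice pixels whose brightness suggests an obstruction."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    orifice = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    occluded = (gray > threshold) & orifice   # bright debris inside the normally dark orifice
    return float(occluded.sum()) / float(orifice.sum())

frame = np.random.rand(480, 640)              # stand-in for a captured nozzle image in [0, 1]
fraction = occlusion_fraction(frame, center=(320, 240), radius=40)
if fraction > 0.1:                            # hypothetical maintenance trigger level
    print(f"Occlusion fraction {fraction:.2f}: schedule nozzle maintenance")
```
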
  • Patent number: 11270123
    Abstract: Embodiments described herein provide a system for localized contextual video annotation. During operation, the system can segment a video into a plurality of segments based on a segmentation unit and parse a respective segment to generate multiple input modalities for the segment. A respective input modality can indicate a form of content in the segment. The system can then classify the segment into a set of semantic classes based on the input modalities and determine an annotation for the segment based on the set of semantic classes.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: March 8, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Karunakaran Sureshkumar, Raja Bala
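
As a rough illustration of the segment / parse / classify / annotate flow in this abstract, the skeleton below uses a fixed 10-second segmentation unit, a single transcript modality, and a keyword lookup as a stand-in classifier; none of these choices come from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Segment:
    start: float
    end: float
    modalities: Dict[str, str] = field(default_factory=dict)   # e.g. transcript, on-screen text
    classes: List[str] = field(default_factory=list)

def segment_video(duration: float, unit: float = 10.0) -> List[Segment]:
    starts = [i * unit for i in range(int(duration // unit) + 1)]
    return [Segment(s, min(s + unit, duration)) for s in starts if s < duration]

def classify(segment: Segment) -> List[str]:
    # Placeholder classifier: tag a segment by keywords found in its transcript modality.
    text = segment.modalities.get("transcript", "").lower()
    labels = [cls for cls, kw in {"sports": "goal", "news": "breaking"}.items() if kw in text]
    return labels or ["other"]

def annotate(segment: Segment) -> str:
    return f"[{segment.start:.0f}-{segment.end:.0f}s] " + ", ".join(segment.classes)

segments = segment_video(duration=25.0)
segments[0].modalities["transcript"] = "what a goal by the home side"
for seg in segments:
    seg.classes = classify(seg)
    print(annotate(seg))
```
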
  • Patent number: 11227400
    Abstract: One embodiment can provide a system for detecting occlusion at an orifice of a three-dimensional (3D) printer nozzle while the printer nozzle is jetting liquid droplets. During operation, the system uses one or more cameras to capture an image of the orifice of the printer nozzle while the 3D printer nozzle is jetting liquid droplets. The system performs an image-analysis operation on the captured image to identify occluded regions within the orifice of the 3D printer nozzle, compute an occlusion fraction based on the determined occluded regions, and generate an output based on the computed occlusion fraction, thereby facilitating effective maintenance of the 3D printer.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: January 18, 2022
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Vijay Kumar Baikampady Gopalkrishna, Raja Bala
  • Patent number: 11134848
    Abstract: A mobile hyperspectral camera system is described. The mobile hyperspectral camera system comprises a mobile host device comprising a processor and a display; a plurality of cameras, coupled to the processor, configured to capture images in distinct spectral bands; and a hyperspectral flash array, coupled to the processor, configured to provide illumination in the distinct spectral bands. A method of implementing a mobile hyperspectral camera system is also described.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: October 5, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Raja Bala, Sourabh Ravindran, Hamid Rahim Sheikh, Youngjun Yoo, Michael Oliver Polley
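
The patent covers the camera-and-flash hardware; purely as an illustration of how per-band captures might be assembled downstream, the sketch below stacks one frame per band into a hyperspectral cube. The five wavelengths and the synthetic frames are assumptions introduced here.

```python
import numpy as np

BANDS_NM = [450, 550, 650, 850, 950]      # hypothetical spectral bands

def capture_band(wavelength_nm: int) -> np.ndarray:
    """Stand-in for firing the flash element and camera associated with one band."""
    return np.random.rand(480, 640).astype(np.float32)

def build_cube(bands=BANDS_NM) -> np.ndarray:
    # Stack one (already registered) frame per band into an H x W x B cube.
    return np.stack([capture_band(b) for b in bands], axis=-1)

cube = build_cube()
print(cube.shape)   # (480, 640, 5)
```
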
  • Publication number: 20210250492
    Abstract: One embodiment can include a system for providing an image-capturing recommendation. During operation, the system receives, from a mobile computing device, one or more images. The one or more images are captured by one or more cameras associated with the mobile computing device. The system analyzes the received images to obtain image-capturing conditions for capturing images of a target within a physical space; determines, based on the obtained image-capturing conditions and a predetermined image-quality requirement, one or more image-capturing settings; and recommends the determined one or more image-capturing settings to a user.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Matthew A. Shreve, Raja Bala, Jeyasri Subramanian
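
A minimal sketch of an image-capturing recommendation, assuming a simple mean-brightness analysis of a preview frame; the thresholds and the recommended settings are illustrative and not taken from the application.

```python
import numpy as np

def recommend_settings(preview: np.ndarray) -> dict:
    """Return a capture-setting suggestion from the mean brightness of a [0, 1] preview."""
    mean_luma = float(preview.mean())
    if mean_luma < 0.25:
        return {"flash": True, "exposure_compensation": +1.0}
    if mean_luma > 0.80:
        return {"flash": False, "exposure_compensation": -1.0}
    return {"flash": False, "exposure_compensation": 0.0}

dark_preview = np.full((480, 640), 0.15)
print(recommend_settings(dark_preview))   # dark scene -> suggest enabling the flash
```
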
  • Patent number: 11068746
    Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent the perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: July 20, 2021
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
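
A minimal sketch of a pixel-level realism predictor along the lines described above: a small fully convolutional network trained against per-pixel realism maps. The architecture, the binary cross-entropy loss, and the random training batch are assumptions, not the patented design (PyTorch assumed available).

```python
import torch
import torch.nn as nn

class RealismMapNet(nn.Module):
    """Tiny fully-convolutional net producing a per-pixel realism score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = RealismMapNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(4, 3, 64, 64)        # stand-in for partially computer-generated images
target_maps = torch.rand(4, 1, 64, 64)   # stand-in per-pixel realism ground truth
optimizer.zero_grad()
loss = nn.functional.binary_cross_entropy(model(images), target_maps)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```
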
  • Publication number: 20210217188
    Abstract: One embodiment can provide a system for detecting occlusion at an orifice of a three-dimensional (3D) printer nozzle while the printer nozzle is jetting liquid droplets. During operation, the system uses one or more cameras to capture an image of the orifice of the printer nozzle while the 3D printer nozzle is jetting liquid droplets. The system performs an image-analysis operation on the captured image to identify occluded regions within the orifice of the 3D printer nozzle, compute an occlusion fraction based on the determined occluded regions, and generate an output based on the computed occlusion fraction, thereby facilitating effective maintenance of the 3D printer.
    Type: Application
    Filed: January 10, 2020
    Publication date: July 15, 2021
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Vijay Kumar Baikampady Gopalkrishna, Raja Bala
  • Publication number: 20210209464
    Abstract: Embodiments described herein provide a system for generating synthetic images with localized editing. During operation, the system obtains a source image and a target image for image synthesis and selects a semantic element from the source image. The semantic element indicates a semantically meaningful part of an object depicted in the source image. The system then determines the style information associated with the source and target images. Subsequently, the system generates a synthetic image by transferring the style of the semantic element from the source image to the target image based on the feature representations. In this way, the system can facilitate localized editing of the target image.
    Type: Application
    Filed: January 8, 2020
    Publication date: July 8, 2021
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Robert R. Price, Edo Collins
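
The patent relies on learned feature representations; as a deliberately crude stand-in that still shows the idea of localized editing, the sketch below transfers per-channel color statistics (Reinhard-style) from a source image to a target image inside a binary mask of the chosen semantic element. The mask and images are placeholders, not the patented technique.

```python
import numpy as np

def local_color_transfer(source: np.ndarray, target: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
    """Match the target's per-channel mean/std to the source's inside the mask (uint8 RGB)."""
    out = target.astype(np.float32)
    for c in range(3):
        src_vals = source[..., c][mask].astype(np.float32)
        tgt_vals = target[..., c][mask].astype(np.float32)
        normalized = (tgt_vals - tgt_vals.mean()) / (tgt_vals.std() + 1e-6)
        out[..., c][mask] = normalized * src_vals.std() + src_vals.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

source = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
element_mask = np.zeros((64, 64), dtype=bool)
element_mask[:20] = True                  # hypothetical semantic element (e.g. hair region)
print(local_color_transfer(source, target, element_mask).shape)
```
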
  • Publication number: 20210117685
    Abstract: Embodiments described herein provide a system for localized contextual video annotation. During operation, the system can segment a video into a plurality of segments based on a segmentation unit and parse a respective segment to generate multiple input modalities for the segment. A respective input modality can indicate a form of content in the segment. The system can then classify the segment into a set of semantic classes based on the input modalities and determine an annotation for the segment based on the set of semantic classes.
    Type: Application
    Filed: September 21, 2020
    Publication date: April 22, 2021
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Karunakaran Sureshkumar, Raja Bala
  • Patent number: 10972672
    Abstract: A method of generating an image from multiple cameras having different focal lengths is described. The method comprises receiving a wide image and a tele image; aligning the wide image and the tele image to overlap a common field of view; correcting for photometric differences between the wide image and the tele image; selecting a stitching seam for the wide image and the tele image; and joining the wide image and the tele image to generate a composite image, wherein a first portion of the composite image on one side of the stitching seam is from the wide image and a second portion of the composite image on the other side of the stitching seam is from the tele image. An electronic device for generating an image is also described.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: April 6, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ruiwen Zhen, John W. Glotzbach, Raja Bala, Hamid Rahim Sheikh
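
A minimal sketch of the align / photometric-correct / seam / composite sequence, using OpenCV feature matching, a single global gain factor, and a fixed vertical seam; these are simplifications introduced here, not the claimed method.

```python
import cv2
import numpy as np

def stitch_wide_tele(wide: np.ndarray, tele: np.ndarray) -> np.ndarray:
    """Grayscale uint8 images assumed; returns a composite on the wide image's grid."""
    # 1. Align: estimate a homography from tele to wide using ORB feature matches.
    orb = cv2.ORB_create(1000)
    kw, dw = orb.detectAndCompute(wide, None)
    kt, dt = orb.detectAndCompute(tele, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dt, dw)
    src = np.float32([kt[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kw[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    tele_warped = cv2.warpPerspective(tele, H, (wide.shape[1], wide.shape[0]))

    # 2. Photometric correction: match the warped tele brightness to the wide image.
    overlap = tele_warped > 0
    gain = wide[overlap].mean() / max(tele_warped[overlap].mean(), 1e-6)
    tele_warped = np.clip(tele_warped.astype(np.float32) * gain, 0, 255).astype(wide.dtype)

    # 3. Seam + composite: wide image left of a fixed vertical seam, warped tele right of it.
    seam_x = wide.shape[1] // 2
    composite = wide.copy()
    composite[:, seam_x:] = tele_warped[:, seam_x:]
    return composite

# Usage (hypothetical files): stitch_wide_tele(cv2.imread("wide.jpg", 0), cv2.imread("tele.jpg", 0))
```
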
  • Publication number: 20210096537
    Abstract: A method operates a three-dimensional (3D) metal object manufacturing system to compensate for displacement errors that occur during object formation. In the method, image data of a metal object being formed by the 3D metal object manufacturing system is generated prior to completion of the metal object and compared to original 3D object design data of the object to identify one or more displacement errors. For the displacement errors outside a predetermined difference range, the method modifies machine-ready instructions for forming metal object layers not yet formed to compensate for the identified displacement errors and operates the 3D metal object manufacturing system using the modified machine-ready instructions.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 1, 2021
    Inventors: David A. Mantell, Christopher T. Chungbin, Daniel R. Cormier, Scott J. Vader, Zachary S. Vader, Viktor Sukhotskiy, Raja Bala, Walter Hsiao
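
As an illustration of the compare-and-compensate loop described above, the sketch below models a displacement error as a layer centroid offset and shifts the remaining machine moves by the opposite amount. The error model and the 0.2 mm tolerance are assumptions, not the patented compensation scheme.

```python
import numpy as np

TOLERANCE_MM = 0.2   # hypothetical acceptable displacement

def centroid(points: np.ndarray) -> np.ndarray:
    return points.mean(axis=0)

def compensate(design_layer: np.ndarray, measured_layer: np.ndarray,
               remaining_moves: list) -> list:
    """Shift the not-yet-printed moves to cancel the measured layer drift."""
    error = centroid(measured_layer) - centroid(design_layer)   # (dx, dy) in mm
    if np.linalg.norm(error) <= TOLERANCE_MM:
        return remaining_moves                                  # within tolerance, no change
    return [(x - error[0], y - error[1]) for (x, y) in remaining_moves]

design = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
measured = design + np.array([0.5, -0.3])                       # drifted layer outline
print(compensate(design, measured, [(5.0, 5.0), (6.0, 5.0)]))
```
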
  • Patent number: 10943352
    Abstract: One embodiment can provide a system for detecting outlines of objects in images. During operation, the system receives an image that includes at least one object, generates a random noise signal, and provides the received image and the random noise signal to a shape-regressor module, which applies a shape-regression model to predict a shape outline of an object within the received image.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: March 9, 2021
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Jin Sun, Sricharan Kallur Palli Kumar, Raja Bala
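
A minimal sketch of a shape regressor that takes an image plus a random noise vector and outputs a fixed-length outline, in the spirit of the abstract; the CNN layout, the 32-dimensional noise, and the 16-point contour are assumptions (PyTorch assumed available).

```python
import torch
import torch.nn as nn

class ShapeRegressor(nn.Module):
    """Predicts a fixed-length outline from an image plus a random noise vector."""
    def __init__(self, noise_dim: int = 32, num_points: int = 16):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + noise_dim, num_points * 2)   # (x, y) per outline point

    def forward(self, image, noise):
        features = self.encoder(image)
        out = self.head(torch.cat([features, noise], dim=1))
        return out.view(-1, self.num_points, 2)

model = ShapeRegressor()
outline = model(torch.rand(1, 3, 64, 64), torch.randn(1, 32))
print(outline.shape)   # torch.Size([1, 16, 2])
```
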
  • Patent number: 10885635
    Abstract: A method for curvilinear object segmentation includes receiving at least one input image comprising curvilinear features. The at least one image is mapped, using a processor, to output segmentation maps using a deep network having a representation module and a task module. The mapping includes transforming the input image in the representation module using learnable filters trained to suppress noise in one or more of a domain and a task of the at least one input image. The segmentation maps are produced using the transformed input image in the task module.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: January 5, 2021
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Venkateswararao Cherukuri, Vijay Kumar B G
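
A minimal sketch of the representation-module / task-module split: a learnable filter bank transforms the input image, and a small task head maps the transformed image to a segmentation map. Layer sizes and the single-channel input are assumptions, and the noise-suppression training objective is not modeled here.

```python
import torch
import torch.nn as nn

class RepresentationModule(nn.Module):
    """Learnable filter bank that transforms the raw input image."""
    def __init__(self):
        super().__init__()
        self.filters = nn.Conv2d(1, 8, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.relu(self.filters(x))

class TaskModule(nn.Module):
    """Maps the transformed image to a per-pixel curvilinear-structure probability."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.head(x)

net = nn.Sequential(RepresentationModule(), TaskModule())
seg_map = net(torch.rand(1, 1, 128, 128))   # e.g. a grayscale vessel or road image patch
print(seg_map.shape)                        # torch.Size([1, 1, 128, 128])
```
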
  • Patent number: 10832413
    Abstract: A method for curvilinear object segmentation includes receiving at least one input image comprising curvilinear features. The at least one input image is mapped to segmentation maps of the curvilinear features using a deep network having a representation module and a task module. The mapping includes transforming the input image in the representation module using learnable filters configured to balance recognition of curvilinear geometry with reduction of training error. The segmentation maps are produced using the transformed input image in the task module.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: November 10, 2020
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Venkateswararao Cherukuri, Vijay Kumar B G
  • Patent number: 10719927
    Abstract: An electronic device, method, and computer readable medium for multi-frame image processing using semantic saliency are provided. The electronic device includes a camera, a display, and a processor. The processor is coupled to the camera and the display. The processor receives a plurality of frames captured by the camera during a capture event; identifies a salient region in each of the plurality of frames; determines a reference frame from the plurality of frames based on the identified salient regions; fuses non-reference frames with the determined reference frame into a completed image output.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: July 21, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Raja Bala, Hamid R. Sheikh, John Glotzbach
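
A minimal sketch of saliency-guided reference selection and fusion, assuming a fixed central window as the salient region, variance-of-Laplacian as the sharpness score, and biased averaging as the fusion rule; the patented method identifies salient regions semantically rather than by position.

```python
import cv2
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian as a simple focus/sharpness measure."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def fuse_burst(frames: list) -> np.ndarray:
    h, w = frames[0].shape
    # Stand-in "salient region": the central window of each frame.
    roi = (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))
    # Reference frame: the one whose salient region is sharpest.
    ref_idx = int(np.argmax([sharpness(f[roi]) for f in frames]))
    stack = np.stack(frames).astype(np.float32)
    # Fuse non-reference frames with the reference by biased averaging.
    fused = 0.5 * stack[ref_idx] + 0.5 * stack.mean(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)

burst = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(5)]
print(fuse_burst(burst).shape)   # (480, 640)
```
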
  • Publication number: 20200210770
    Abstract: A method for predicting the realism of an object within an image includes generating a training image set for a predetermined object type. The training image set comprises one or more training images at least partially generated using a computer. A pixel-level training spatial realism map is generated for each training image of the one or more training images. Each training spatial realism map is configured to represent the perceptual realism of the corresponding training image. A predictor is trained using the training image set and the corresponding training spatial realism maps. An image of the predetermined object is received. A spatial realism map of the received image is produced using the trained predictor.
    Type: Application
    Filed: December 28, 2018
    Publication date: July 2, 2020
    Inventors: Raja Bala, Matthew Shreve, Jeyasri Subramanian, Pei Li
  • Publication number: 20200210540
    Abstract: A system is provided for generating a custom article to fit a target surface. During operation, the system compares an input dataset with a number of template cut meshes. A respective template cut mesh includes one or more cutting paths that correspond to a boundary of the mesh. Next, the system identifies a template cut mesh that produces a closest match with the input dataset, and applies global geometric transformations to the identified template cut mesh to warp the template cut mesh to conform to the input dataset. The system further refines and projects a set of boundary and landmark points from the template cut mesh to the input dataset to define cutting paths for the input dataset. Next, the system applies the cutting paths to the input dataset to produce a cut-and-trimmed mesh.
    Type: Application
    Filed: December 31, 2018
    Publication date: July 2, 2020
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Raja Bala, Vijay Kumar Baikampady Gopalkrishna, Chaman Singh Verma, Scott K. Stanley, Andrew P. Rapach
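
A minimal sketch of the template-matching and global-warp steps, simplified to 2-D point sets with known correspondences: pick the template closest to the scanned input and fit a similarity (Procrustes) transform that warps it onto the input. The point-set representation and the RMS matching criterion are assumptions; the boundary refinement and cutting-path projection steps are not modeled.

```python
import numpy as np

def fit_similarity(src: np.ndarray, dst: np.ndarray):
    """Scale s, rotation R, translation t minimizing ||s*R*src_i + t - dst_i|| (no reflections)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt
    s = S.sum() / (A ** 2).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t

def closest_template(templates: dict, scan: np.ndarray) -> str:
    # Assumes every template has points in correspondence with the scan; use RMS distance.
    return min(templates, key=lambda name: np.sqrt(((templates[name] - scan) ** 2).mean()))

templates = {"small": np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float),
             "large": np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)}
scan = np.array([[0.1, 0.0], [1.1, 0.1], [1.0, 1.1], [0.0, 1.0]])
name = closest_template(templates, scan)
s, R, t = fit_similarity(templates[name], scan)
warped = (s * (R @ templates[name].T)).T + t      # template warped onto the scanned surface
print(name, np.round(warped, 2))
```
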
  • Publication number: 20200193607
    Abstract: One embodiment can provide a system for detecting outlines of objects in images. During operation, the system receives an image that includes at least one object, generates a random noise signal, and provides the received image and the random noise signal to a shape-regressor module, which applies a shape-regression model to predict a shape outline of an object within the received image.
    Type: Application
    Filed: December 17, 2018
    Publication date: June 18, 2020
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Jin Sun, Sricharan Kallur Palli Kumar, Raja Bala
  • Publication number: 20200193610
    Abstract: A method for curvilinear object segmentation includes receiving at least one input image comprising curvilinear features. The at least one image is mapped, using a processor, to output segmentation maps using a deep network having a representation module and a task module. The mapping includes transforming the input image in the representation module using learnable filters trained to suppress noise in one or more of a domain and a task of the at least one input image. The segmentation maps are produced using the transformed input image in the task module.
    Type: Application
    Filed: December 18, 2018
    Publication date: June 18, 2020
    Inventors: Raja Bala, Venkateswararao Cherukuri, Vijay Kumar B G
  • Publication number: 20200193605
    Abstract: A method for curvilinear object segmentation includes receiving at least one input image comprising curvilinear features. The at least one input image is mapped to segmentation maps of the curvilinear features using a deep network having a representation module and a task module. The mapping includes transforming the input image in the representation module using learnable filters configured to balance recognition of curvilinear geometry with reduction of training error. The segmentation maps are produced using the transformed input image in the task module.
    Type: Application
    Filed: December 18, 2018
    Publication date: June 18, 2020
    Inventors: Raja Bala, Venkateswararao Cherukuri, Vijay Kumar B G