Patents by Inventor Rajanish Calisa
Rajanish Calisa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10885346
Abstract: A method of selecting frames of a video sequence. Image data for a plurality of consecutive frames of the video sequence is captured using a camera. Frames from the plurality of consecutive frames where the camera is moving are identified using the captured image data, wherein each of the identified frames comprises a subject. A size of the subject captured in each of the identified frames is determined. The identified frames are selected by detecting that the camera is moving towards and with the subject based on the size of the subject within each of a plurality of the identified frames.
Type: Grant
Filed: October 5, 2018
Date of Patent: January 5, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Mark Ronald Tainsh, Rajanish Calisa, Sammy Chan
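The frame-selection idea above can be sketched as a two-step filter: keep frames where the camera is moving, then keep those where the subject is not shrinking (the camera closing in on, or travelling with, the subject). This is a minimal sketch; the `camera_moving` and `subject_size` field names and the non-decreasing-size test are assumptions, not the patent's actual criteria.

```python
def select_frames(frames):
    """Select frames where the camera appears to move towards and with
    the subject, inferred from the subject not shrinking across frames.

    `frames` is a list of dicts with hypothetical keys `camera_moving`
    (bool) and `subject_size` (float, e.g. bounding-box area).
    """
    # Step 1: keep only frames where the camera is moving.
    moving = [f for f in frames if f["camera_moving"]]
    # Step 2: keep frames whose subject is at least as large as in the
    # previous moving frame, i.e. the camera is closing in or tracking.
    selected = []
    for prev, cur in zip(moving, moving[1:]):
        if cur["subject_size"] >= prev["subject_size"]:
            selected.append(cur)
    return selected
```

A real implementation would estimate camera motion and subject size from the image data itself; here both are taken as given per-frame measurements.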
-
Publication number: 20190147226
Abstract: A method of matching a first person in a first image to a person in a second image. At least one companion in the first image is determined, the companion being different from the first person. A contextual confidence for the first person, the companion and each of a plurality of people in the second image is determined, the contextual confidence being a measure of prediction accuracy of a match to the first person. An appearance score is determined between each person in the first image and each of the plurality of people in the second image, the appearance score measuring similarity of appearance. The method selects from the plurality of people in the second image a match for the first person and, based on the match for the first person, a match for the companion, each of the matches being determined according to the contextual confidence and appearance score.
Type: Application
Filed: November 13, 2017
Publication date: May 16, 2019
Inventors: Ben Yip, Geoffrey Richard Taylor, Rajanish Calisa
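One way to read the matching step above: combine contextual confidence and appearance score per candidate, pick the best candidate for the first person, then pick the companion's match conditioned on that choice. The product combination and the exclude-the-taken-candidate constraint below are assumptions of this sketch, not the patent's actual scoring rule.

```python
def match_people(conf_first, conf_comp, app_first, app_comp):
    """Pick matches in the second image for a person and their companion.

    `conf_*[j]` is the contextual confidence that candidate j matches;
    `app_*[j]` is the appearance similarity. Scores are combined by a
    simple product (an assumption; the abstract does not specify how).
    """
    def best(conf, app, exclude=None):
        scores = {j: c * a for j, (c, a) in enumerate(zip(conf, app))
                  if j != exclude}
        return max(scores, key=scores.get)

    first = best(conf_first, app_first)
    # Condition the companion's match on the first match: the companion
    # cannot be assigned to the same candidate.
    companion = best(conf_comp, app_comp, exclude=first)
    return first, companion
```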
-
Publication number: 20190108402
Abstract: A method of selecting frames of a video sequence. Image data for a plurality of consecutive frames of the video sequence is captured using a camera. Frames from the plurality of consecutive frames where the camera is moving are identified using the captured image data, wherein each of the identified frames comprises a subject. A size of the subject captured in each of the identified frames is determined. The identified frames are selected by detecting that the camera is moving towards and with the subject based on the size of the subject within each of a plurality of the identified frames.
Type: Application
Filed: October 5, 2018
Publication date: April 11, 2019
Inventors: Mark Ronald Tainsh, Rajanish Calisa, Sammy Chan
-
Patent number: 9918057
Abstract: A method, apparatus and system of projecting text characters onto a textured surface are described. The method comprises determining, from a captured image of the textured surface, a measure of the texture on the surface for a region of the textured surface over which the text characters are to be projected; selecting, based on a function of the determined measure, a glyph set, each glyph in the glyph set having visually contrasting inner and outer portions, the outer portion being sized proportionally to the inner portion according to the determined measure; and projecting the text characters onto the region of the textured surface using the selected glyph set.
Type: Grant
Filed: September 29, 2016
Date of Patent: March 13, 2018
Assignee: Canon Kabushiki Kaisha
Inventors: David Robert James Monaghan, Belinda Margaret Yee, Rajanish Calisa
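The glyph-selection step can be illustrated as picking, from a family of glyph sets with different outline thicknesses, the one whose outer-to-inner ratio best suits the measured texture: rougher surfaces get a thicker contrasting outline. The linear measure-to-ratio mapping and the dict-of-ratios structure below are assumptions for this sketch only.

```python
def select_glyph_set(texture_measure, glyph_sets):
    """Choose a glyph set suited to the measured surface texture.

    `texture_measure` is assumed normalised to [0, 1]; `glyph_sets`
    maps an outer-to-inner size ratio to a glyph-set identifier
    (hypothetical structure). We pick the set whose ratio is closest
    to a ratio that grows linearly with the texture measure.
    """
    # Assumed mapping: an outline ratio of 1.0 (no outline growth) on a
    # smooth surface, up to 2.0 on a maximally rough one.
    target_ratio = 1.0 + texture_measure
    best = min(glyph_sets, key=lambda r: abs(r - target_ratio))
    return glyph_sets[best]
```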
-
Patent number: 9633479
Abstract: A method of displaying virtual content on an augmented reality device (101) is disclosed. The virtual content is associated with a scene. An image of a scene captured using the augmented reality device (101) is received. A viewing time of the scene is determined, according to a relative motion between the augmented reality device and the scene. Virtual content is selected, from a predetermined range of virtual content, based on the determined viewing time. The virtual content is displayed on the augmented reality device (101) together with the image of the scene.
Type: Grant
Filed: December 22, 2014
Date of Patent: April 25, 2017
Assignee: Canon Kabushiki Kaisha
Inventors: Matthew John Grasso, Belinda Margaret Yee, David Robert James Monaghan, Oscar Alejandro De Lellis, Rajanish Calisa
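The selection logic above can be sketched as: estimate how long the viewer will dwell on the scene from the relative motion, then pick the richest content that fits in that time. The distance model, the `(min_viewing_seconds, content)` pairs, and the inverse-speed estimate are all assumptions of this sketch.

```python
def select_virtual_content(relative_speed, content_by_min_time, distance=1.0):
    """Pick virtual content matching the estimated viewing time of a
    scene, derived from relative motion between device and scene.

    `content_by_min_time` is a list of (min_viewing_seconds, content)
    pairs sorted ascending; a fast-moving viewer gets the briefest
    content, a stationary viewer the richest.
    """
    # Estimated viewing time: time to traverse the scene at this speed.
    viewing_time = (float("inf") if relative_speed == 0
                    else distance / relative_speed)
    chosen = content_by_min_time[0][1]
    for min_time, content in content_by_min_time:
        if viewing_time >= min_time:
            chosen = content  # richest content the viewing time allows
    return chosen
```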
-
Publication number: 20170094235
Abstract: A method, apparatus and system of projecting text characters onto a textured surface are described. The method comprises determining, from a captured image of the textured surface, a measure of the texture on the surface for a region of the textured surface over which the text characters are to be projected; selecting, based on a function of the determined measure, a glyph set, each glyph in the glyph set having visually contrasting inner and outer portions, the outer portion being sized proportionally to the inner portion according to the determined measure; and projecting the text characters onto the region of the textured surface using the selected glyph set.
Type: Application
Filed: September 29, 2016
Publication date: March 30, 2017
Inventors: David Robert James Monaghan, Belinda Margaret Yee, Rajanish Calisa
-
Patent number: 9191630
Abstract: Methods of displaying video data are disclosed. The methods generate a plurality of queries for determining from which of a plurality of video data sources video data is to be displayed and store each of the queries. One of the queries is selected for display and the selected query is matched with metadata from one or more of the plurality of video data sources. The video data from the video data sources that match the query is displayed.
Type: Grant
Filed: November 27, 2007
Date of Patent: November 17, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Hayden Graham Fleming, Rajanish Calisa, Rupert William Galloway Reeve, Andrew James Lo
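The query-matching step above amounts to filtering the sources by their metadata. A minimal sketch, where a query is a dict of required key/value pairs and each source carries a metadata dict (both structures are assumptions for illustration):

```python
def matching_sources(query, sources):
    """Return the video data sources whose metadata satisfies every
    key/value pair of the selected query."""
    return [s for s in sources
            if all(s["metadata"].get(k) == v for k, v in query.items())]
```

The display step would then show footage only from the sources this returns.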
-
Publication number: 20150206353Abstract: A method of displaying virtual content on an augmented reality device (101) is disclosed. The virtual content is associated with a scene. An image of a scene captured using the augmented reality device (101) is received. A viewing time of the scene is determined, according to a relative motion between the augmented reality device and the scene. Virtual content is selected, from a predetermined range of virtual content, based on the determined viewing time. The virtual content is displayed on the augmented reality device (101) together with the image of the scene.Type: ApplicationFiled: December 22, 2014Publication date: July 23, 2015Inventors: Matthew John Grasso, Belinda Margaret Yee, David Robert James Monaghan, Oscar Alejandro De Lellis, Rajanish Calisa
-
Patent number: 8843972
Abstract: A method of requesting video data distributed across a plurality of video servers (210-21N) connected to a communications network (120) is disclosed. A request (e.g., 1000) is transmitted to one of the video servers (e.g., 210). The request (1000) includes at least time information about at least a first portion of the video data. The first portion of video data is received from the video server (210). A redirection message (e.g., 1008) is also received from the video server (210). The redirection message (1008) specifies a next one of the video servers (210) containing a temporally adjacent further portion of the video data.
Type: Grant
Filed: September 27, 2010
Date of Patent: September 23, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Rajanish Calisa, Philip Cox
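The redirection protocol above can be sketched as a client loop: request a portion by time, receive it together with a pointer to the server holding the temporally adjacent portion, and repeat until no redirection is returned. Modelling each server as a callable returning `(portion, next_server)` is a stand-in for the network protocol, purely an assumption of this sketch.

```python
def fetch_video(start_server, start_time, servers):
    """Follow redirection messages to collect temporally adjacent
    portions of a recording spread across several servers.

    `servers` maps a server name to a function that, given a request
    time, returns (portion, next_server_or_None).
    """
    portions, server, t = [], start_server, start_time
    while server is not None:
        portion, server = servers[server](t)
        portions.append(portion)
        t = portion["end"]  # next request starts where this portion ended
    return portions
```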
-
Patent number: 8155503
Abstract: A method (400) of displaying video data using a video recording system (100). The method (400) records a first stream of video data captured by a first camera (e.g., 103) and a first event associated with the first camera (103). The method records a second stream of video data captured by a second camera (e.g., 104) and a second event associated with the second camera (104). A playback speed is determined based at least on a difference between a current play time position and a time position of a nearest one of the first event and the second event. The first stream and the second stream of video data are displayed in a synchronized manner. The first stream of video data and the second stream of video data are displayed at the playback speed.
Type: Grant
Filed: October 10, 2008
Date of Patent: April 10, 2012
Assignee: Canon Kabushiki Kaisha
Inventors: Rajanish Calisa, Xin Yu Liu
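The playback-speed rule above can be sketched as: speed up far from any recorded event and slow to real time as playback approaches one. The linear ramp and the speed cap below are assumptions; the abstract only says the speed depends on the distance to the nearest event.

```python
def playback_speed(current_time, events, max_speed=8.0):
    """Determine playback speed from the distance (in seconds) between
    the current play position and the nearest event across all
    recorded streams."""
    distance = min(abs(current_time - e) for e in events)
    # Assumed speed curve: real time within 1 s of an event, then a
    # linear ramp with distance, capped at max_speed.
    return min(max_speed, max(1.0, distance))
```

Both streams would then be advanced together at this speed, keeping them synchronized.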
-
Publication number: 20110078751
Abstract: A method of requesting video data distributed across a plurality of video servers (210-21N) connected to a communications network (120) is disclosed. A request (e.g., 1000) is transmitted to one of the video servers (e.g., 210). The request (1000) includes at least time information about at least a first portion of the video data. The first portion of video data is received from the video server (210). A redirection message (e.g., 1008) is also received from the video server (210). The redirection message (1008) specifies a next one of the video servers (210) containing a temporally adjacent further portion of the video data.
Type: Application
Filed: September 27, 2010
Publication date: March 31, 2011
Applicant: Canon Kabushiki Kaisha
Inventors: Rajanish Calisa, Philip Cox
-
Publication number: 20100027965
Abstract: Methods of displaying video data are disclosed. The methods generate a plurality of queries for determining from which of a plurality of video data sources video data is to be displayed and store each of the queries. One of the queries is selected for display and the selected query is matched with metadata from one or more of the plurality of video data sources. The video data from the video data sources that match the query is displayed.
Type: Application
Filed: November 27, 2007
Publication date: February 4, 2010
Inventors: Hayden Graham Fleming, Rajanish Calisa, Rupert William Galloway Reeve, Andrew James Lo
-
Patent number: 7595833
Abstract: Disclosed is an arrangement (100) for displaying video footage captured by a controllable camera (103), the arrangement comprising a memory (107) storing the captured footage, means for constructing a representation (505) of a field of view accessible by the camera (103), means for retrieving, from the memory (107), the stored footage (503) and parameters characterising the control state of the camera (103) when the footage was captured, and means for displaying the footage (503), the representation (505) of the field of view, and an indicator (502) on the representation (505) dependent upon the parameters.
Type: Grant
Filed: February 27, 2006
Date of Patent: September 29, 2009
Assignee: Canon Kabushiki Kaisha
Inventor: Rajanish Calisa
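The indicator placement above can be illustrated by mapping the camera's control-state parameters (here, pan and tilt angles) into the representation of the accessible field of view. Treating the representation as a normalised rectangle, and the parameter names below, are assumptions of this sketch.

```python
def footage_indicator(pan, tilt, fov_range):
    """Place an indicator on a field-of-view representation showing
    where a pan/tilt camera was pointing when footage was captured.

    `fov_range` is (pan_min, pan_max, tilt_min, tilt_max) in degrees,
    describing the field of view accessible by the camera; the result
    is a normalised (x, y) position in [0, 1] on the representation.
    """
    pan_min, pan_max, tilt_min, tilt_max = fov_range
    x = (pan - pan_min) / (pan_max - pan_min)
    y = (tilt - tilt_min) / (tilt_max - tilt_min)
    return x, y
```

Displaying the footage alongside this indicator tells the viewer which part of the accessible scene the stored clip covers.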
-
Publication number: 20090136213
Abstract: A method (400) of displaying video data using a video recording system (100). The method (400) records a first stream of video data captured by a first camera (e.g., 103) and a first event associated with the first camera (103). The method records a second stream of video data captured by a second camera (e.g., 104) and a second event associated with the second camera (104). A playback speed is determined based at least on a difference between a current play time position and a time position of a nearest one of the first event and the second event. The first stream and the second stream of video data are displayed in a synchronised manner. The first stream of video data and the second stream of video data are displayed at the playback speed.
Type: Application
Filed: October 10, 2008
Publication date: May 28, 2009
Applicant: Canon Kabushiki Kaisha
Inventors: Rajanish Calisa, Xin Yu Liu
-
Publication number: 20060195876
Abstract: Disclosed is an arrangement (100) for displaying video footage captured by a controllable camera (103), the arrangement comprising a memory (107) storing the captured footage, means for constructing a representation (505) of a field of view accessible by the camera (103), means for retrieving, from the memory (107), the stored footage (503) and parameters characterising the control state of the camera (103) when the footage was captured, and means for displaying the footage (503), the representation (505) of the field of view, and an indicator (502) on the representation (505) dependent upon the parameters.
Type: Application
Filed: February 27, 2006
Publication date: August 31, 2006
Applicant: Canon Kabushiki Kaisha
Inventor: Rajanish Calisa