Patents by Inventor Matthew Bell

Matthew Bell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20100060722
    Abstract: Information from execution of a vision processing module may be used to control a 3D vision system.
    Type: Application
    Filed: March 9, 2009
    Publication date: March 11, 2010
    Inventor: Matthew Bell
  • Publication number: 20100053089
    Abstract: A method of controlling a portable electronic device having a touchscreen display includes determining a first orientation of the portable electronic device, rendering a first virtual keyboard and a first data display area on the touchscreen display based on the first orientation of the portable electronic device, automatically detecting a change from the first orientation to a second orientation of the portable electronic device, and automatically reconfiguring the touchscreen display by rendering a second virtual keyboard, a second data display area, and data previously displayed in the first data display area in the second data display area on the touchscreen display based on the second orientation of the portable electronic device.
    Type: Application
    Filed: August 27, 2008
    Publication date: March 4, 2010
    Applicant: RESEARCH IN MOTION LIMITED
    Inventors: Jordanna KWOK, Matthew BELLS, Jennifer LHOTAK
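The reconfiguration flow described in the abstract above can be sketched as a simple state update: detect the new orientation, render the matching keyboard and display area, and carry the previously displayed data over. The class, layout names, and values below are illustrative assumptions, not the patent's actual implementation.

```python
class TouchscreenUI:
    """Minimal sketch of orientation-driven keyboard/display reconfiguration."""

    # Assumed layouts: a reduced keyboard in portrait, full QWERTY in landscape.
    LAYOUTS = {
        "portrait": {"keyboard": "reduced", "display_rows": 12},
        "landscape": {"keyboard": "full_qwerty", "display_rows": 6},
    }

    def __init__(self, orientation="portrait"):
        self.orientation = orientation
        self.displayed_data = []  # contents of the current data display area

    def render(self):
        layout = self.LAYOUTS[self.orientation]
        return {"keyboard": layout["keyboard"],
                "rows": layout["display_rows"],
                "data": list(self.displayed_data)}

    def on_orientation_change(self, new_orientation):
        # Data previously displayed in the first display area is re-rendered
        # in the second display area after the orientation change.
        self.orientation = new_orientation
        return self.render()


ui = TouchscreenUI()
ui.displayed_data.append("draft message")
view = ui.on_orientation_change("landscape")
```

After the change, `view` holds the second virtual keyboard plus the carried-over data, mirroring the claim's "automatically reconfiguring" step.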
  • Publication number: 20100039500
    Abstract: A self-contained hardware and software system that allows reliable stereo vision to be performed. The vision hardware for the system, which includes a stereo camera and at least one illumination source that projects a pattern into the camera's field of view, may be contained in a single box. This box may contain mechanisms that allow it to remain securely in place on a surface such as the top of a display. The vision hardware may contain a physical mechanism that allows the box, and thus the camera's field of view, to be tilted upward or downward to ensure that the camera can see what it needs to see.
    Type: Application
    Filed: February 17, 2009
    Publication date: February 18, 2010
    Inventors: Matthew Bell, Raymond Chin, Matthew Vieta
  • Publication number: 20100026624
    Abstract: An interactive directed beam system is provided. In one implementation, the system includes a projector, a computer and a camera. The camera is configured to view and capture information in an interactive area. The captured information may take various forms, such as, an image and/or audio data. The captured information is based on actions taken by an object, such as, a person within the interactive area. Such actions include, for example, natural movements of the person and interactions between the person and an image projected by the projector. The captured information from the camera is then sent to the computer for processing. The computer performs one or more processes to extract certain information, such as, the relative location of the person within the interactive area for use in controlling the projector. Based on the results generated by the processes, the computer directs the projector to adjust the projected image accordingly.
    Type: Application
    Filed: August 17, 2009
    Publication date: February 4, 2010
    Inventor: Matthew Bell
  • Publication number: 20100010815
    Abstract: To facilitate text-to-speech conversion of a username, a first or last name of a user associated with the username may be retrieved, and a pronunciation of the username may be determined based at least in part on whether the name forms at least part of the username. To facilitate text-to-speech conversion of a domain name having a top level domain and at least one other level domain, a pronunciation for the top level domain may be determined based at least in part upon whether the top level domain is one of a predetermined set of top level domains. Each other level domain may be searched for one or more recognized words therewithin, and a pronunciation of the other level domain may be determined based at least in part on an outcome of the search. The username and domain name may form part of a network address such as an email address, URL or URI.
    Type: Application
    Filed: July 11, 2008
    Publication date: January 14, 2010
    Inventors: Matthew Bells, Jennifer Elizabeth Lhotak, Michael Angelo Nanni
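The pronunciation logic in this abstract can be illustrated with a small sketch: speak a top-level domain as a word only if it belongs to a known set, and speak a username as the user's name when the name forms part of it, spelling out what remains. The TLD set, spelling strategy, and function names here are assumptions for illustration only.

```python
KNOWN_TLDS = {"com", "net", "org", "edu", "gov"}  # assumed predetermined set

def pronounce_tld(tld):
    # Known top-level domains are spoken as words; others are spelled out.
    return tld if tld in KNOWN_TLDS else " ".join(tld)

def pronounce_username(username, first_name, last_name):
    # If the user's first or last name forms part of the username,
    # speak that part as the name and spell out the leftover characters.
    low = username.lower()
    for name in (first_name, last_name):
        if name and name.lower() in low:
            before, _, after = low.partition(name.lower())
            leftover = " ".join(before + after)
            return f"{name} {leftover}".strip()
    return " ".join(username)  # no name match: spell the whole username

spoken = pronounce_username("jsmith42", "John", "Smith")
```

For "jsmith42" with last name "Smith", the sketch yields "Smith j 4 2" rather than a letter-by-letter spelling of the full string.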
  • Publication number: 20100010816
    Abstract: To facilitate text-to-speech conversion of a username, a first or last name of a user associated with the username may be retrieved, and a pronunciation of the username may be determined based at least in part on whether the name forms at least part of the username. To facilitate text-to-speech conversion of a domain name having a top level domain and at least one other level domain, a pronunciation for the top level domain may be determined based at least in part upon whether the top level domain is one of a predetermined set of top level domains. Each other level domain may be searched for one or more recognized words therewithin, and a pronunciation of the other level domain may be determined based at least in part on an outcome of the search. The username and domain name may form part of a network address such as an email address, URL or URI.
    Type: Application
    Filed: July 11, 2008
    Publication date: January 14, 2010
    Inventors: Matthew Bells, Jennifer Elizabeth Lhotak, Michael Angelo Nanni
  • Publication number: 20090307365
    Abstract: A system and method are provided for localizing applications that are used with hand-held electronic devices.
    Type: Application
    Filed: August 18, 2009
    Publication date: December 10, 2009
    Applicant: Research In Motion Limited
    Inventors: Jon MacKay, Matthew Bells
  • Publication number: 20090267904
    Abstract: A method of determining input at a touch-sensitive input surface of a portable electronic device includes detecting a touch event at the touch-sensitive input surface, sampling touch attributes during the touch event, determining an actual touch location and determining at least one shift in touch location based on the touch attributes sampled during the touch event, and determining an input based on the actual touch location and the direction of shift of the touch location.
    Type: Application
    Filed: April 25, 2008
    Publication date: October 29, 2009
    Applicant: RESEARCH IN MOTION LIMITED
    Inventors: David MAK-FAN, Kuo-Feng TONG, Matthew BELLS, Douglas RIDER, Michael LANGLOIS, Jong-Suk LEE, Jason T. GRIFFIN, Colin HO
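The input-resolution idea in this abstract (an actual touch location plus a direction of shift sampled over the touch event) can be sketched as follows; the sampling format and direction labels are illustrative assumptions, not the patent's method.

```python
def resolve_touch(samples):
    """Return (actual_location, shift_direction) from (x, y) attribute
    samples taken over the lifetime of a single touch event."""
    x0, y0 = samples[0]          # initial contact = actual touch location
    xn, yn = samples[-1]
    dx, dy = xn - x0, yn - y0
    if dx == 0 and dy == 0:
        shift = "none"
    elif abs(dx) >= abs(dy):
        shift = "right" if dx > 0 else "left"
    else:
        shift = "down" if dy > 0 else "up"
    return (x0, y0), shift

loc, shift = resolve_touch([(10, 10), (12, 10), (15, 11)])
```

A keyboard layer could then combine `loc` and `shift` to disambiguate which key the user intended, as the claim describes.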
  • Publication number: 20090251685
    Abstract: A fragmented lens system for creating an invisible light pattern useful to computer vision systems is disclosed. Random or semi-random dot patterns generated by the present system allow a computer to uniquely identify each patch of a pattern projected by a corresponding illuminator or light source. The computer may determine the position and distance of an object by identifying the illumination pattern on the object.
    Type: Application
    Filed: November 12, 2008
    Publication date: October 8, 2009
    Inventor: Matthew Bell
  • Publication number: 20090235295
    Abstract: A method for managing an interactive video display system. A plurality of video spots are displayed on the interactive video display system. Data based on interaction with the interactive video display system corresponding to video spots of the plurality of video spots is gathered. The data is stored, wherein the data is for use in managing presentation of the video spots. By analyzing data relating to different video spots, popularity and other metrics may be determined for the video spots, providing useful information for managing the presentation of the video spots.
    Type: Application
    Filed: April 2, 2009
    Publication date: September 17, 2009
    Inventors: Matthew Bell, Russell H. Belfer
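A minimal sketch of the data-gathering and popularity analysis this abstract describes might aggregate interaction counts per video spot; the log format and the "average interactions per play" metric are assumptions, not the patent's metrics.

```python
from collections import defaultdict

# Hypothetical interaction log: (spot_id, interactions observed during one play)
log = [("spot_a", 3), ("spot_b", 7), ("spot_a", 5)]

totals = defaultdict(int)
plays = defaultdict(int)
for spot, count in log:
    totals[spot] += count
    plays[spot] += 1

# Popularity metric: average interactions per play of each spot.
popularity = {s: totals[s] / plays[s] for s in totals}
ranked = sorted(popularity, key=popularity.get, reverse=True)
```

The ranking could then drive presentation decisions, e.g. scheduling popular spots more often.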
  • Patent number: 7590748
    Abstract: Resource bundles are provided that contain localized resources that a handheld device can use to adapt an application to the current locale of the hand-held electronic device. The resource bundles can be stored in a remotely-located server and downloaded over a network to the hand-held electronic device on request. Alternatively, a hand-held device can store resource bundles for multiple locales and choose a resource bundle that is appropriate for its current locale. A resource bundle can be used to allow a hand-held device to automatically adapt an application to the current locale such as by identifying an entered character sequence that matches a predetermined sequence associated with the current locale of the hand-held device, choosing an article for use with a word that is grammatically correct for the language associated with the current locale, and automatically replacing the character sequence with the article.
    Type: Grant
    Filed: March 10, 2004
    Date of Patent: September 15, 2009
    Assignee: Research In Motion Limited
    Inventors: Jon MacKay, Matthew Bells
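The resource-bundle selection described above can be sketched as a locale-keyed lookup with a fallback; the locale codes, keys, and strings below are illustrative assumptions, not the patent's data.

```python
# Minimal sketch of per-locale resource bundles with a fallback locale.
BUNDLES = {
    "en_US": {"greeting": "Hello"},
    "fr_FR": {"greeting": "Bonjour"},
}

def localize(key, locale, bundles=BUNDLES, fallback="en_US"):
    # Choose the bundle appropriate for the device's current locale,
    # falling back to a default bundle when the locale or key is missing.
    bundle = bundles.get(locale, bundles[fallback])
    return bundle.get(key, bundles[fallback].get(key))

greeting = localize("greeting", "fr_FR")
```

A device could either download the bundle for its current locale on request or pre-store several bundles and select one this way.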
  • Publication number: 20090225196
    Abstract: A method and system for processing captured image information in an interactive video display system. In one embodiment, a special learning condition of a captured camera image is detected. The captured camera image is compared to a normal background model image and to a second background model image, wherein the second background model is learned at a faster rate than the normal background model. A vision image is generated based on the comparisons. In another embodiment, an object in the captured image information that does not move for a predetermined time period is detected. A burn-in image comprising the object is generated, wherein the burn-in image is operable to allow a vision system of the interactive video display system to classify the object as background.
    Type: Application
    Filed: May 19, 2009
    Publication date: September 10, 2009
    Applicant: Intellectual Ventures Holding 67 LLC
    Inventor: Matthew Bell
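The two-rate background model in this abstract can be sketched with exponential running averages over a toy 1-D "image": the fast model absorbs a static object quickly, letting the vision step reclassify it as background (the burn-in behavior). Rates, threshold, and pixel values are illustrative assumptions.

```python
def update(model, frame, rate):
    # Exponential running average; a higher rate learns the scene faster.
    return [(1 - rate) * m + rate * f for m, f in zip(model, frame)]

def foreground(frame, slow, fast, thresh=25):
    # A pixel counts as foreground only if it differs from BOTH models;
    # once the fast model absorbs a static object, the object "burns in".
    return [abs(f - s) > thresh and abs(f - q) > thresh
            for f, s, q in zip(frame, slow, fast)]

slow = fast = [0.0] * 5
frame = [0, 0, 200, 0, 0]        # an object appears at pixel 2 and stays

slow, fast = update(slow, frame, 0.01), update(fast, frame, 0.2)
early = foreground(frame, slow, fast)   # object is still foreground
for _ in range(19):
    slow, fast = update(slow, frame, 0.01), update(fast, frame, 0.2)
late = foreground(frame, slow, fast)    # fast model has burned it in
```

After 20 frames the fast model sits near the object's value while the slow model still lags, so the pixel no longer passes both tests and is treated as background.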
  • Publication number: 20090222482
    Abstract: A handheld electronic device, such as a GPS-enabled wireless communications device with an embedded camera, automatically geotags a set of data, such as a digital photo, video, notes, or a blog, with a textual plain-language description of the current location. When the data is generated, the current location of the device is determined, e.g. using a GPS receiver. A textual plain-language description of the current location is then generated, e.g. by reverse geocoding the GPS position coordinates or by correlating the current time with a calendar event from which language descriptive of the event can be extracted. This textual plain-language description is automatically generated and written into a tag or metadata file associated with the photo or other set of data. By automatically geotagging data with textual plain-language descriptions that go beyond mere coordinates of latitude and longitude, data can be searched and managed more efficiently.
    Type: Application
    Filed: February 28, 2008
    Publication date: September 3, 2009
    Applicant: RESEARCH IN MOTION LIMITED
    Inventors: Gerhard Dietrich Klassen, Matthew Bells
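The reverse-geocoding path of this abstract can be sketched with a stand-in lookup table in place of a real geocoding service; the place data, distance threshold, and metadata keys are assumptions for illustration.

```python
# Stand-in lookup table for a real reverse-geocoding service (assumed data).
PLACES = {(43.47, -80.54): "Waterloo, Ontario, Canada",
          (45.42, -75.70): "Ottawa, Ontario, Canada"}

def nearest_place(lat, lon, places=PLACES, max_deg=0.5):
    # Naive nearest-neighbour lookup in degrees; a real device would query
    # a geocoding service or database instead.
    best, best_d = None, max_deg
    for (plat, plon), name in places.items():
        d = ((lat - plat) ** 2 + (lon - plon) ** 2) ** 0.5
        if d < best_d:
            best, best_d = name, d
    return best

def geotag(metadata, lat, lon):
    desc = nearest_place(lat, lon)
    if desc:
        metadata["location_description"] = desc  # plain-language tag
    metadata["gps"] = (lat, lon)                 # raw coordinates kept too
    return metadata

tag = geotag({}, 43.48, -80.52)
```

The resulting tag carries a searchable plain-language description alongside the coordinates, which is the efficiency gain the abstract claims.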
  • Patent number: 7576727
    Abstract: An interactive directed beam system is provided. In one implementation, the system includes a projector, a computer and a camera. The camera is configured to view and capture information in an interactive area. The captured information may take various forms, such as, an image and/or audio data. The captured information is based on actions taken by an object, such as, a person within the interactive area. Such actions include, for example, natural movements of the person and interactions between the person and an image projected by the projector. The captured information from the camera is then sent to the computer for processing. The computer performs one or more processes to extract certain information, such as, the relative location of the person within the interactive area for use in controlling the projector. Based on the results generated by the processes, the computer directs the projector to adjust the projected image accordingly.
    Type: Grant
    Filed: December 15, 2003
    Date of Patent: August 18, 2009
    Inventor: Matthew Bell
  • Patent number: 7552142
    Abstract: A method and arrangement for effecting diagonal movement of a cursor 171 on the display screen 322 of a handheld communication device 300 having a reduced alphabetic keyboard. The method includes sensing movement at an auxiliary user input 328 of the handheld communication device 300 indicative of the user's desire to effect diagonal movement of the cursor 171 on the display screen 322 of the handheld communication device 300. X-direction signals and Y-direction signals are produced based on the sensed movement at the auxiliary user input 328. While the necessary signals are being collected and processed, the cursor 171 is held steady on the display screen 322 until a predetermined criterion is met for discriminating whether the user has indicated x-direction cursor movement, y-direction cursor movement, or diagonal cursor movement.
    Type: Grant
    Filed: June 13, 2006
    Date of Patent: June 23, 2009
    Assignee: Research in Motion Limited
    Inventors: Matthew Lee, Andrew Bocking, David Mak-Fan, Steven Fyke, Matthew Bells
  • Patent number: 7536032
    Abstract: A method and system for processing captured image information in an interactive video display system. In one embodiment, a special learning condition of a captured camera image is detected. The captured camera image is compared to a normal background model image and to a second background model image, wherein the second background model is learned at a faster rate than the normal background model. A vision image is generated based on the comparisons. In another embodiment, an object in the captured image information that does not move for a predetermined time period is detected. A burn-in image comprising the object is generated, wherein the burn-in image is operable to allow a vision system of the interactive video display system to classify the object as background.
    Type: Grant
    Filed: October 25, 2004
    Date of Patent: May 19, 2009
    Assignee: Reactrix Systems, Inc.
    Inventor: Matthew Bell
  • Publication number: 20090077504
    Abstract: Systems and methods for processing gesture-based user interactions with an interactive display are provided.
    Type: Application
    Filed: September 15, 2008
    Publication date: March 19, 2009
    Inventors: Matthew Bell, Tipatat Chennavasin, Charles H. Clanton, Michael Hulme, Eyal Ophir, Matthew Vieta
  • Publication number: 20090066650
    Abstract: A method, handheld electronic device and computer program product for inputting calendar information using a graphical user interface (GUI) of a calendar application are provided. The GUI is displayed on a display screen of the handheld electronic device which comprises a navigational input device for receiving navigational input in a first and a second direction.
    Type: Application
    Filed: September 6, 2007
    Publication date: March 12, 2009
    Inventors: Matthew Bells, Darrell May
  • Publication number: 20080252596
    Abstract: An interactive video display system allows a physical object to interact with a virtual object. A light source delivers a pattern of invisible light to a three-dimensional space occupied by the physical object. A camera detects invisible light scattered by the physical object. A computer system analyzes information generated by the camera, maps the position of the physical object in the three-dimensional space, and generates a responsive image that includes the virtual object. A display presents the responsive image.
    Type: Application
    Filed: April 10, 2008
    Publication date: October 16, 2008
    Inventors: Matthew Bell, Matthew Vieta, Raymond Chin, Malik Coates, Steven Fink
  • Publication number: 20080253757
    Abstract: A method is provided for dynamically determining a zoom-level to display to a user of a mapping application executing on a mobile device. The method comprises the following steps. The zoom-level is determined in accordance with at least one predefined parameter. The at least one predefined parameter is monitored for detecting a change. A new zoom-level corresponding with the detected change is determined. Lastly, the zoom-level of the mapping application is changed to the new zoom-level. A mobile device and computer-readable medium configured to implement the method are also provided.
    Type: Application
    Filed: April 16, 2007
    Publication date: October 16, 2008
    Inventors: Matthew Bells, Gerhard Klassen
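The zoom-level logic described above can be sketched with travel speed as the monitored parameter; the speed thresholds and zoom values are illustrative assumptions, not the patent's parameters.

```python
def zoom_for_speed(speed_kmh):
    # Illustrative mapping: faster travel -> wider view (lower zoom level).
    if speed_kmh < 6:       # walking pace
        return 17
    if speed_kmh < 40:      # city driving
        return 14
    if speed_kmh < 100:     # highway
        return 11
    return 8                # very fast travel

class MapView:
    def __init__(self):
        self.zoom = zoom_for_speed(0)

    def on_parameter_change(self, speed_kmh):
        # Monitor the parameter and switch only when the level differs.
        new_zoom = zoom_for_speed(speed_kmh)
        if new_zoom != self.zoom:
            self.zoom = new_zoom
        return self.zoom

view = MapView()
highway_zoom = view.on_parameter_change(90)
```

Any monitored parameter (battery, network, time of day) could replace speed in the same structure.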