BUSINESS INFORMATION AND INNOVATION MANAGEMENT
A system constructed using one or more of the techniques described includes a job or outcome engine for storing a job or outcome data structure in accordance with a coherent relational model, a solution engine for storing a solution data structure in accordance with a coherent relational model, and a capability computation engine for matching the job or outcome to the solution to determine the extent to which the solution meets the needs of the job or outcome. The results can then be provided to a commercial activity server for the purpose of acting on identified solutions that meet needs better than current solutions.
This application claims priority to U.S. Provisional Patent Application No. 61/209,764 filed Mar. 10, 2009, which is incorporated by reference. This application is related to co-pending U.S. patent application Ser. No. 12/563,969, filed Sep. 21, 2009, which is incorporated by reference.
BACKGROUND
Today's business enterprises require and make use of sophisticated information systems to acquire vital insights into the performance, or prospective future performance, of their business relative to goals, market metrics, competitive information, and the like. This class of information products is known in the field today as Management Information Systems (MIS) or Business Intelligence (BI) tools. In addition, businesses seek better ways to identify the right strategies and new ideas that can help them grow, and information solutions supporting these objectives are often referred to as Collaboration Technologies and Innovation Management Systems. Collectively, these information systems fall under the general category of Enterprise and Marketing Intelligence Systems and represent a critical part of today's business software and information systems marketplace.
While data management and reporting technologies have advanced to become adept at efficiently retrieving information stored in these systems and generating reports, the problem that plagues all of these systems is the lack of a unifying information framework, or ontology, that provides a stable and fundamental frame of reference that is absolute and consistently meaningful across all domains for gleaning business insights or for facilitating value creation. The lack of an ontology means that evaluations of the information gathered are highly subjective and dependent on interpretation, and that each information domain tends to exist as an island where local rules prevail, rather than as part of an integrated whole. The problems this creates for business are innumerable; consequently, MIS and BI systems today, while enabling better-informed decisions, have failed to deliver on their promise of transforming management decision making. For example, these systems can easily track the sales results and underlying demographics for a particular market, but utterly fail at providing any empirically defensible prediction, save extrapolation of past results, as to whether these results are sustainable or what impact a new idea will have. More generally, the lack of a valid, unifying, and quantifiable frame of reference for business insight and intelligence means that compromises are made in making decisions, and projections of future business impact are largely guesswork. This problem has always existed in business information analysis and decision making, and it is a root cause of many mistaken beliefs and failures in business information technology initiatives.
SUMMARY
Presented herein are techniques for facilitating commercial activity using a coherent relational model that includes jobs and outcomes, and solutions to one or more of the jobs and outcomes. Using one of the techniques, an entity can, for example, identify new product opportunities, assess the threat from market changes, quantify future economic value and development investment uncertainty, and provide information to capital markets related to asset value compared to others in its sectors.
A system constructed using one or more of the techniques can include a collective set of data structures, uniquely designed entities, information tools, and/or computational and machine methods useful to store, append, interact with, retrieve, process, and present data and information. These capabilities enable associations to be made between the entities and the jobs and outcomes that pertain to actual or potential markets of an enterprise, where the jobs and outcomes have been identified using a methodology that facilitates the creation of a coherent relational model between jobs and outcomes and actual or potential solutions to those jobs and outcomes. Through the associations, users can attain insights and explore innovations and new business strategies that would be virtually unworkable without the system.
A system constructed using one or more of the techniques described includes a job or outcome engine for storing a job or outcome data structure in accordance with a coherent relational model, a solution engine for storing a solution data structure in accordance with a coherent relational model, and a capability computation engine for matching the job or outcome to the solution to determine the extent to which the solution meets the needs of the job or outcome. The results can then be provided to a commercial activity server for the purpose of acting on identified solutions that meet needs better than current solutions.
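By way of illustration only, the following minimal sketch (in Python) shows one way the job or outcome record and the solution record described above might be represented so that a solution's capabilities can later be compared against a job's or outcome's constraints. The class and field names (JobOutcomeRecord, SolutionRecord, constraints, capabilities) are assumptions made for this sketch, not part of the described system.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class JobOutcomeRecord:
        # A job or outcome the market is trying to get done, with the
        # constraints (needs) against which candidate solutions are measured.
        record_id: str
        statement: str                      # e.g., "minimize time to restore service"
        constraints: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class SolutionRecord:
        # A current or proposed solution, with capability values keyed by the
        # same constraint names so the two record types can be compared.
        record_id: str
        name: str
        capabilities: Dict[str, float] = field(default_factory=dict)

Because both record types share constraint keys, a capability computation engine can later measure, key by key, how well the solution meets the needs of the job or outcome.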
Processes/decisions that can potentially be improved using a technique described in the detailed description can include, for example, Primary Market Research, Use of Secondary Market Research, Product Management and Marketing Strategy, Marketing Communications, R&D, New Product Development, General Business Strategy, Innovation Strategy, Innovation Collaboration, Ideation, Business Case Analysis, IP Strategy, and Mergers & Acquisition Strategy and Due Diligence. Business insights that can potentially be improved using a technique described in the detailed description can include, for example, Competitive Intelligence and Industry Benchmarking, Unmet Market Demand, Modeling of underlying market trends, Cause and Effect of Marketing Communications Results, New Technology Assessments and Scouting, and New Product/Platform or other Growth Investment Risk/Return. These improvements are intended to be examples, not limitations, and some of them may not be achieved in certain implementations of the techniques.
A computer system, as used in this paper, is intended to be construed broadly. In general, a computer system will include a processor, memory, non-volatile storage, and an interface. A typical computer system includes at least a processor, memory, and a device (e.g., a bus) coupling the memory to the processor.
The processor can be, for example, a general-purpose central processing unit (CPU), such as a microprocessor, or a special-purpose processor, such as a microcontroller.
The memory can include, by way of example but not limitation, random access memory (RAM), such as dynamic RAM (DRAM) and static RAM (SRAM). The memory can be local, remote, or distributed. As used in this paper, the term “computer-readable storage medium” is intended to include only physical media, such as memory. As used in this paper, a computer-readable medium is intended to include all mediums that are statutory (e.g., in the United States, under 35 U.S.C. 101), and to specifically exclude all mediums that are non-statutory in nature to the extent that the exclusion is necessary for a claim that includes the computer-readable medium to be valid. Known statutory computer-readable mediums include hardware (e.g., registers, random access memory (RAM), non-volatile (NV) storage, to name a few), but may or may not be limited to hardware.
The bus can also couple the processor to the non-volatile storage. The non-volatile storage is often a magnetic floppy or hard disk, a magnetic-optical disk, an optical disk, a read-only memory (ROM), such as a CD-ROM, EPROM, or EEPROM, a magnetic or optical card, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory during execution of software on the computer system. The non-volatile storage can be local, remote, or distributed. The non-volatile storage is optional because systems can be created with all applicable data available in memory.
Software is typically stored in the non-volatile storage. Indeed, for large programs, it may not even be possible to store the entire program in the memory. Nevertheless, it should be understood that for software to run, if necessary, it is moved to a computer-readable location appropriate for processing, and for illustrative purposes, that location is referred to as the memory in this paper. Even when software is moved to the memory for execution, the processor will typically make use of hardware registers to store values associated with the software, and local cache that, ideally, serves to speed up execution. As used herein, a software program is assumed to be stored at any known or convenient location (from non-volatile storage to hardware registers) when the software program is referred to as “implemented in a computer-readable storage medium.” A processor is considered to be “configured to execute a program” when at least one value associated with the program is stored in a register readable by the processor.
The bus can also couple the processor to the interface. The interface can include one or more of a modem or network interface. It will be appreciated that a modem or network interface can be considered to be part of the computer system. The interface can include an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., "direct PC"), or other interfaces for coupling a computer system to other computer systems. The interface can include one or more input and/or output (I/O) devices. The I/O devices can include, by way of example but not limitation, a keyboard, a mouse or other pointing device, disk drives, printers, a scanner, and other I/O devices, including a display device. The display device can include, by way of example but not limitation, a cathode ray tube (CRT), liquid crystal display (LCD), or some other applicable known or convenient display device.
In one example of operation, the computer system can be controlled by operating system software that includes a file management system, such as a disk operating system. One example of operating system software with associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage and causes the processor to execute the various acts required by the operating system to input and output data and to store data in the memory, including storing files on the non-volatile storage.
Some portions of the detailed description may be presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. The signals take on physical form when stored in a computer readable storage medium, such as memory or non-volatile storage, and can therefore, in operation, be referred to as physical quantities. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it should be appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The algorithms and displays presented herein are not necessarily inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs to configure the general purpose systems in a specific manner in accordance with the teachings herein, or it may prove convenient to construct specialized apparatus to perform the methods of some embodiments. The required structure for a variety of these systems will appear from the description below. Thus, a general purpose system can be specifically purposed by implementing appropriate programs. In addition, the techniques are not described with reference to any particular programming language, and various embodiments may thus be implemented using a variety of programming languages.
Functionality of the USIMS server 104 can be carried out by one or more engines. As used in this paper, an engine includes a dedicated or shared processor and hardware, firmware, or software modules that are executed by the processor. Depending upon implementation-specific or other considerations, an engine can be centralized or its functionality distributed. An engine can include special purpose hardware, firmware, or software embodied in a computer-readable medium for execution by the processor. Examples of USIMS functionality are described below.
In an example of a system where the ODI data repository 108 is implemented as a database, a database management system (DBMS) can be used to manage the ODI data repository 108. In such a case, the DBMS may be thought of as part of the ODI data repository 108 or as part of the USIMS server 104, or as a separate functional unit (not shown). A DBMS is typically implemented as an engine that controls organization, storage, management, and retrieval of data in a database. DBMSs frequently provide the ability to query, backup and replicate, enforce rules, provide security, do computation, perform change and access logging, and automate optimization. Examples of DBMSs include Alpha Five, DataEase, Oracle database, IBM DB2, Adaptive Server Enterprise, FileMaker, Firebird, Ingres, Informix, Mark Logic, Microsoft Access, InterSystems Cache, Microsoft SQL Server, Microsoft Visual FoxPro, MonetDB, MySQL, PostgreSQL, Progress, SQLite, Teradata, CSQL, OpenLink Virtuoso, Daffodil DB, and OpenOffice.org Base, to name several.
Database servers can store databases, as well as the DBMS and related engines. Any of the repositories described in this paper could presumably be implemented as database servers. It should be noted that there are two logical views of data in a database, the logical (external) view and the physical (internal) view. In this paper, the logical view is generally assumed to be data found in a report, while the physical view is the data stored in a physical storage medium and available to, typically, a specifically programmed processor. With most DBMS implementations, there is one physical view and a huge number of logical views for the same data.
A DBMS typically includes a modeling language, data structure, database query language, and transaction mechanism. The modeling language is used to define the schema of each database in the DBMS, according to the database model, which may include a hierarchical model, network model, relational model, object model, or some other applicable known or convenient organization. An optimal structure may vary depending upon application requirements (e.g., speed, reliability, maintainability, scalability, and cost). One of the more common models in use today is the ad hoc model embedded in SQL. Data structures can include fields, records, files, objects, and any other applicable known or convenient structures for storing data. A database query language can enable users to query databases, and can include report writers and security mechanisms to prevent unauthorized access. A database transaction mechanism ideally ensures data integrity, even during concurrent user accesses, with fault tolerance. DBMSs can also include a metadata repository; metadata is data that describes other data.
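As a non-limiting illustration, the sketch below lays out one possible relational layout for jobs and outcomes, solutions, and the capability/constraint differences that relate them, using an embedded database. The table and column names are assumptions chosen for this sketch rather than a prescribed schema.

    import sqlite3

    # Hypothetical, minimal schema for a coherent relational model: jobs and
    # outcomes, solutions, and the computed differences that relate them.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE jobs_outcomes (
            id           INTEGER PRIMARY KEY,
            statement    TEXT NOT NULL,
            importance   REAL,      -- market-rated importance
            satisfaction REAL       -- market-rated satisfaction
        );
        CREATE TABLE solutions (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE capability_constraint_diffs (
            job_outcome_id INTEGER REFERENCES jobs_outcomes(id),
            solution_id    INTEGER REFERENCES solutions(id),
            difference     REAL,    -- capability minus constraint
            PRIMARY KEY (job_outcome_id, solution_id)
        );
    """)
    conn.commit()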
Particularly where the USIMS server 104 functions as a business process management (BPM) server, it may be desirable to enable the USIMS server 104 to have access to mail data. BPM, as used in this paper, is a technique intended to align organizations with the needs of clients by continuously improving processes. BPM is an advantageous implementation because it tends to promote business efficacy while striving for innovation and integration with technology.
It should be noted that business process modeling and business process management are not the same, and, confusingly, share the same acronym. In this paper, business process management is given the acronym BPM, but business process modeling is not given an acronym. Business process modeling is often, though not necessarily, used in BPM. Business process modeling is a way of representing processes in systems or software. The models are typically used as tools to improve process efficiency and quality, and can use Business Process Modeling Notation (BPMN) or some other notation to model business processes.
A business process, as used in this paper, is a collection of related, structured activities or tasks that produce a service or product for a particular client. Business processes can be categorized as management processes, operational processes, and supporting processes. Management processes govern the operation of a system, and include by way of example but not limitation corporate governance, strategic management, etc. Operational processes comprise the core business processes for a company, and include by way of example but not limitation, purchasing, manufacturing, marketing, and sales. Supporting processes support the core processes and include, by way of example but not limitation, accounting, recruiting, technical support, etc.
A business process can include multiple sub-processes, which have their own attributes, but also contribute to achieving the goal of the super-process. The analysis of business processes typically includes the mapping of processes and sub-processes down to the activity level. The term business process is sometimes used to mean the integration of application software tasks, but this is narrower than the broader meaning that is frequently ascribed to the term in the relevant art, and as intended in this paper. Although the initial focus of BPM may have been on the automation of mechanistic business processes, it has since been extended to integrate human-driven processes in which human interaction takes place in series or parallel with the mechanistic processes.
The USIMS server 104 can, of course, be coupled to other external applications (not shown) either locally or through the network 102 in a known or convenient manner. The USIMS server 104 can also be coupled to other external data repositories.
The USIMS system 100 is but one example of systems with which techniques described in this paper can be used. For example, the ODI database 108 could be replaced with some other database that enables storage of a coherent relational model that includes jobs and outcomes, solutions, and other data.
FIX (Financial Information eXchange) is provided as an example in this paper because FIX is a standard electronic protocol for pre-trade communications and trade execution. Another example of a protocol is Society for Worldwide Interbank Financial Telecommunication (SWIFT).
Yet another example is FIX adapted for streaming (FAST) protocol, which is used for sending multicast market data. FAST was developed by FIX Protocol, Ltd. to optimize data representation on a network, and supports high-throughput, low latency data communications. In particular, it is a technology standard that offers significant compression capabilities for the transport of high-volume market data feeds and ultra low latency applications. Exchanges and market centers offering data feeds using the FAST protocol include: New York Stock Exchange (NYSE) Archipelago, Chicago Mercantile Exchange (CME), International Securities Exchange (ISE), to name a few.
The job or outcome engine 202, making use of a search engine, can search data streams for relevant data for tagging; identifying competitors; and populating product, marketing communications, service program, new product development (NPD), and other tables. When the various products, competitors, and the like are found, they can be integrated into the core ODI model by storing relevant data entities in the relevant repositories in a coherent relational manner.
As another example, the job or outcome engine 202 could use a process engine implemented, for example, as a BPM engine or a BPM suite (BPMS). An example of a BPMS is Bluespring's BPM Suite 4.5. However, any applicable known or convenient BPM engine could be used. Of course, the BPM engine must meet the needs of the system for which it is used, and may or may not work “off the shelf” with techniques described in this paper.
As another example, the job or outcome engine 202 could use a segmentation engine that facilitates segmenting a market. This can involve providing data manipulation tools to facilitate compiling and loading data sets into external statistical analysis packages, providing tools to interact with statistical analysis and modeling packages and import additional metadata tags into a job/outcome data schema, and/or providing utilities to enhance the visual representation and tabular reporting of the statistical data properties.
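By way of illustration, the sketch below shows the round trip such a segmentation engine might perform: flattening importance/satisfaction ratings into a file for an external statistical analysis package, then reading the segment labels produced externally back in as metadata tags. The file layout and field names are assumptions made for this sketch.

    import csv

    def export_ratings(ratings, path="ratings_for_segmentation.csv"):
        # ratings: iterable of dicts like
        # {"respondent": "r1", "outcome_id": 17, "importance": 9, "satisfaction": 3}
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["respondent", "outcome_id", "importance", "satisfaction"])
            writer.writeheader()
            writer.writerows(ratings)

    def import_segment_tags(path="segments.csv"):
        # Read the segment labels assigned by the external package and return
        # them as metadata tags keyed by respondent, ready to append to the
        # job/outcome data schema.
        with open(path, newline="") as f:
            return {row["respondent"]: row["segment"] for row in csv.DictReader(f)}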
As another example, the job or outcome engine 202 could use a metadata engine implemented as a data analysis engine that tags data records algorithmically, appends meta-data associated with business information to data records, facilitates pipeline prioritization, facilitates calculation, ranking and reporting of opportunity scores, facilitates interaction with data, and performs other functionality that makes data more useful in a BI context.
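The text above does not specify how opportunity scores are calculated; as an illustrative assumption, the sketch below uses a commonly cited outcome-driven-innovation style formulation in which an outcome's score is its importance plus the positive part of importance minus satisfaction, with both inputs on a 0-10 scale.

    def opportunity_score(importance, satisfaction):
        # Assumed formulation: importance plus unmet importance (never negative).
        return importance + max(importance - satisfaction, 0)

    def rank_outcomes(outcomes):
        # outcomes: iterable of (outcome_id, importance, satisfaction) tuples.
        scored = [(oid, opportunity_score(imp, sat)) for oid, imp, sat in outcomes]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    # An important but poorly satisfied outcome ranks first.
    print(rank_outcomes([("o1", 9.1, 3.2), ("o2", 6.0, 5.8), ("o3", 8.4, 8.0)]))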
As another example, the job or outcome engine 202 could use a strategy engine implemented as a business intelligence (BI) tool. An example of a BI tool is Microsoft Office PerformancePoint Server.
In general, a strategy engine can include tools that are useful for pulling in data from various sources so as to facilitate strategic planning, such as needs delivery enhancement strategy, needs-based IP strategy, innovation strategy, market growth strategy, consumption chain improvement strategy, etc. It is probably desirable to ensure that the tools in the strategy engine are user-friendly, since human input is often desirable for certain strategic planning.
As another example, the job or outcome engine 202 could include a reporting engine implemented as SQL Server Reporting Services (SSRS) to prepare and deliver interactive and printed reports. Crystal Reports is another implementation, and any applicable known or convenient BI tool could be used. It is frequently seen as an advantage to have reports that can be generated in a variety of formats, including Excel, PDF, CSV, XML, TIFF (and other image formats), and HTML Web Archive, all of which SSRS can do. Other report generators can offer additional output formats, and may include useful features such as geographical maps in reports.
In general, an applicable known or convenient tool that acts as a collaborative workspace and/or tool for the management or automation of business processes could be implemented. Collaborative technologies are tools that enable people to interact with other people within a group more efficiently and effectively. So even email discussion lists and teleconferencing tools could function as a collaboration engine, though sophisticated tools are likely to encompass much more. For example, it is probably desirable to enable users to have greater control in finding, creating, organizing, and collaborating in a browser-based environment. It may also be desirable to allow organization of users in accordance with their access, capabilities, role, and/or interests.
As another example, the job or outcome engine 202 could include a transaction engine that provides interaction between engines capable of writing to or reading from the jobs and outcomes repository 204. If a data stream is being provided, a transformation rules engine may or may not transform the data into an appropriate format. Similarly, if data is being provided from the jobs and outcomes repository 204 to an engine that can make no, or limited, use of the data, the transformation rules engine can transform the data to some other format. In a specific implementation, the transformation rules engine is only needed when interfacing with external devices because all internal devices can use data in a standard format.
As another example, the job or outcome engine 202 could include an ETL engine that extracts data from outside sources, transforms the data to fit operational requirements, and loads the transformed data into the jobs and outcomes repository 204. The ETL engine can store an audit trail, which may or may not have a level of granularity that would allow reproduction of the ETL's result in the absence of the ETL raw data. A typical ETL cycle can include the following steps: initialize, build reference data, extract, validate, transform, stage, audit report, publish, archive, clean up.
In operation, the ETL engine can extract data from one or more source systems, which may have different data organizations or formats. Common data source formats are relational databases and flat files, but can include any applicable known or convenient structure, such as, by way of example but not limitation, Information Management System (IMS), Virtual Storage Access Method (VSAM), Indexed Sequential Access Method (ISAM), web spidering, screen scraping, etc. Extraction can include parsing the extracted data, resulting in a check of whether the data meets an expected pattern or structure.
In operation, the ETL engine transforms the extracted data by applying rules or functions to the extracted data to derive data for loading into a target repository. Different data sources may require different amounts of manipulation of the data. Transformation types can include, by way of example but not limitation, selecting only certain columns to load, translating coded values, encoding free-form values, deriving a new calculated value, filtering, sorting, joining data from multiple sources, aggregation, generating surrogate-key values, transposing, splitting a column into multiple columns, applying any form of simple or complex data validation, etc.
In operation, the ETL engine loads the data into the target repository. In a particular implementation, the data must be loaded in a format that is usable to the system 200, perhaps using a transformation rules engine. Loading data can include overwriting existing information or adding new data in historized form. The timing and scope to replace or append are implementation- or configuration-specific.
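As a non-limiting illustration, the following sketch strings the extract, transform, and load steps together for a flat-file source. The source format, field names, and the in-memory list standing in for the jobs and outcomes repository 204 are assumptions made for this sketch.

    import csv

    def extract(path):
        # Extract: read rows from a flat-file source, keeping only rows that
        # match the expected structure.
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if "outcome" in row and "importance" in row:
                    yield row

    def transform(rows):
        # Transform: select columns, normalize values, and derive typed fields.
        for row in rows:
            yield {
                "outcome": row["outcome"].strip().lower(),
                "importance": float(row["importance"]),
                "source": row.get("src", "unknown"),
            }

    def load(rows, repository):
        # Load: append the transformed records to the target repository.
        repository.extend(rows)
        return repository

    # Example usage (the file name is hypothetical):
    # repo = load(transform(extract("survey_export.csv")), [])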
The ETL engine can make use of an established ETL framework. Some open-source ETL frameworks include Clover ETL, Enhydra Octopus, Mortgage Connectivity Hub, Pentaho Data Integration, Talend Open Studio, Scriptella, Apatar, Jitterbit 2.0. A freeware ETL framework is Benetl. Some commercial ETL frameworks include Djuggler Enterprise, Embarcadero Technologies DT/Studio, ETL Solutions Transformation Manager, Group 1 Software DataFlow, IBM Information Server, IBM DB2 Warehouse Edition, IBM Cognos Data Manager, IKAN—ETL4ALL, Informatica PowerCenter, Information Builders—Data Migrator, Microsoft SQL Server Integration Services (SSIS), Oracle Data Integrator, Oracle Warehouse Builder, Pervasive Business Integrator, SAP Business Objects—Data Integrator, SAS Data Integration Studio, to name several.
A business process management (BPM) server, such as Microsoft BizTalk Server, can also be used to exchange documents between disparate applications, within or across organizational boundaries. BizTalk provides business process automation, business process modeling, business-to-business communication, enterprise application integration, and message brokering.
An enterprise resource planning (ERP) system, used to coordinate resources, information, and activities needed to complete business processes, can also be accessed. Data derived from an ERP system is typically that which supports manufacturing, supply chain management, financials, projects, human resources, and customer relationship management from a shared data repository.
Derived data can also be Open Innovation (OI) data, which is an outside source of innovation concepts. This can include transactional data (sending a network of outside problem solvers opportunities for new ideas and receiving the ideas back) and unstructured data (a repository of ideas) for searching.
The various data entities can be integrated with the core coherent relational model around, by way of example but not limitation, products, platforms, projects, competitors, technologies/IP, campaigns, organization, resources, and performance. At least in part because the model is coherent and relational, by way of example but not limitation, opportunity data, context information, prompts for sparking creativity, and management decisions can be implemented within a systematized idea generation process. For example, in a certain context, it may be the case that a limited number of parameters become relevant, and therefore prompts associated with such a context can be used to spark creativity by addressing one or more of the limited number of parameters.
Advantageously, customer needs can be captured as the needs related to a market, goods, and services. A core functional job can have emotional jobs (e.g., personal jobs and social jobs) and other functional jobs (e.g., jobs indirectly related to core job and jobs directly related to core job), each of which can be analyzed using a uniform metric. During a concept innovation phase, a job can be broken down into multiple steps, each step potentially having multiple outcomes associated with it. Desired outcomes are the metrics customers use to measure the successful execution of a job. When the outcomes are known or predicted, the concept innovation stage passes into the devise solution stage, and then a design innovation stage where consumption chain jobs are identified, such as purchase, receive, install, set-up, learn to use, interface, transport, store, maintain, obtain support, upgrade, replace, dispose, to name several. Then it is time to design/support a solution.
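By way of illustration only, the sketch below shows one way a core functional job, its steps, the desired outcomes that measure each step, and the associated consumption chain jobs might be represented. The class and field names, and the example job, are assumptions made for this sketch.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Outcome:
        statement: str              # metric customers use to measure successful execution
        importance: float = 0.0
        satisfaction: float = 0.0

    @dataclass
    class JobStep:
        name: str
        outcomes: List[Outcome] = field(default_factory=list)

    @dataclass
    class CoreJob:
        name: str
        steps: List[JobStep] = field(default_factory=list)
        emotional_jobs: List[str] = field(default_factory=list)
        related_functional_jobs: List[str] = field(default_factory=list)
        consumption_chain_jobs: List[str] = field(default_factory=list)

    job = CoreJob(
        name="listen to music on the go",
        steps=[JobStep("select music",
                       [Outcome("minimize the time it takes to find a desired track")])],
        consumption_chain_jobs=["purchase", "set-up", "learn to use", "maintain", "dispose"],
    )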
The capability computation engine 210 can make use of various engines, such as those described with reference to the job or outcome engine 202, to obtain data (not shown) that can be used to identify how effectively a solution meets a need for a job or outcome. The capability computation engine 210 can create a capability/constraint difference record including a difference between the capability parameter for a solution and one or more constraints associated with a constraint parameter of a job or outcome. This could be triggered by a specific command, or the capability computation engine 210 could crunch through several jobs or outcomes and solutions to see what needs are unmet or are inadequately met. Regardless of when the computation occurs, the capability computation engine 210 compares a capability parameter indicative of a degree of capability of a solution in achieving a job or outcome using a constraint parameter of a job or outcome record to obtain a capability/constraint difference record, which can be stored in the capability/constraint difference repository 211.
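As an illustrative sketch only, the following shows one way the comparison described above could be carried out, producing one capability/constraint difference record per constraint. It assumes the hypothetical record layout from the earlier sketch (constraints and capabilities keyed by the same names); none of the names used here are prescribed by the described system.

    from dataclasses import dataclass

    @dataclass
    class CapabilityConstraintDifference:
        job_outcome_id: str
        solution_id: str
        constraint: str
        difference: float   # positive: need not fully met; zero or negative: met

    def compute_differences(job_outcome, solution):
        # job_outcome.constraints and solution.capabilities map the same
        # constraint keys to numeric values (see the earlier record sketch).
        records = []
        for key, required in job_outcome.constraints.items():
            capability = solution.capabilities.get(key, 0.0)
            records.append(CapabilityConstraintDifference(
                job_outcome.record_id, solution.record_id, key, required - capability))
        return records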
Depending on whether an outcome is associated with a specific field-of-use or with a general-purpose need, the system assigns an appropriate search strategy embodied within a string of external data sources that can include appropriate solutions. For outcomes associated with specific fields-of-use, the search strategy includes specific and highly qualified external data sources, which can include, for example, particular patent classification subclasses, trade or academic publications, or other applicable data.
For outcomes associated with general-purpose needs, the system systematically determines the best sources for new enabling technologies or solutions by automatically identifying and weighting those sources through a routine akin to modern textual search. The process continues by searching the records found through this method for text strings that include synonymous terms for the objects of control or action from the particular outcome or job of interest. The process completes by recording the existence of a match as a data tag appended to the external data record, identifying the outcome/job that was matched, and assigning a score value representing the closeness of the match.
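By way of illustration, the sketch below performs the kind of synonym matching and scoring described above for a single outcome or job of interest. The scoring scheme (the fraction of synonym terms found in the external record) is an assumption standing in for whatever closeness measure an implementation actually uses.

    def match_external_record(record_text, synonyms):
        # synonyms: terms treated as synonymous with the outcome's object of
        # control or action. Returns the data tag to append to the record.
        text = record_text.lower()
        hits = [term for term in synonyms if term.lower() in text]
        score = len(hits) / len(synonyms) if synonyms else 0.0
        return {"matched": bool(hits), "matched_terms": hits, "score": score}

    # Hypothetical example: matching an abstract against synonyms for "adhesion of a coating".
    print(match_external_record(
        "A primer that improves bonding and adherence of paint films to metal.",
        ["adhesion", "bonding", "adherence"]))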
Solution value added assessment is an ad hoc use having the same general purpose as a needs delivery enhancement strategy report.
Business case extracts of the database are an ad hoc use to assess the return on investment (ROI) of particular solutions. The extracts can be used by other reports, or by separate business case models, to facilitate or improve enterprise investment decision making. The data values extracted include, for example, job/outcome importance, satisfaction, and opportunity scores (both raw and in processed forms), customer data, and satisfaction improvement estimates of solutions.
Marketing and sales campaigns needs extracts are ad hoc reports to assess the market effect of new marketing and sales campaigns based on positioning a product to address unmet needs or otherwise using similar insights to design and assess new marketing and sales campaigns. Like other data in the coherent relational model, data entities can be integrated around the products and campaigns.
The flowchart 2400 can, of course, be repeated at various stages, including finding another solution that appears to be superior in some context (e.g., an originally identified first solution might have higher cost than a later identified second solution, and although the higher cost might be “worth it” in one context, the higher cost might not be “worth it” in another context; or it may be the case that a human can identify reasons why the identification failed to find the superior solution on the first attempt due to inadequate intelligence on the part of the system), or attempting to match a different job or outcome to the identified solution, or attempting to find solutions to jobs or outcomes that are part of a larger process, to name a few examples.
The flowchart 2400 can also be used in the context of selecting a growth strategy that includes organizing data around a market and optionally storing research results to improve the data (modules 2402-2408), determining under/overserved jobs or outcomes in the market and optionally determining how many under/overserved needs exist in outcome-based and job-based market segments if segment data exists (modules 2410-2412), and selecting and prioritizing which growth paths to pursue for the market and for specific outcome-based and job-based segments (module 2414). Additional actions that can be taken in association with module 2414 include gaining management agreement on pursuit of growth strategies (priority, timing, etc.), obtaining cost, timing, and boundary inputs from management for each targeted growth path, obtaining prioritized evaluation criteria from management for each targeted growth path, defining a pool of potential participants for idea generation, concept convergence, evaluation, concept testing, etc., collecting analogies/examples of creativity triggers, and signing up to get data pushed to an employee. Some of these additional activities could include refining the data and reexecuting the flowchart 2400. Similar techniques can be employed for business model idea generation and for feature idea generation.
The coherent relational model 2502 includes systems, such as described earlier in this paper, that store jobs and outcomes, solutions, and other data in a relational database. The model can include, for example, a relational ODI data environment. The model will likely include various features and engines that facilitate input, output, reorganization, and association of data.
The multidimensional data analysis and metadata engines 2504 take advantage of the organization of the coherent relational model 2502. Conceptually, the engines are "built on top of" the coherent relational model 2502. Alternatively, the engines could be considered part of, or an extension of, the coherent relational model 2502. Multidimensional data analysis, as used in this paper, is essentially impossible to accomplish in a practical, useful manner without an underlying methodology that supports association of disparate solutions to jobs and outcomes, and comparisons between other disparate records (e.g., jobs and outcomes to jobs and outcomes, solutions to solutions, and other data to other conceptually, contextually, or otherwise dissimilar data). Metadata engines facilitate the association of various records on a metadata level, possibly without higher-level "data" analysis, or can be used in conjunction with multidimensional data analysis.
The collaboration and knowledge integration platform 2506 provides the underlying data in a useful format to facilitate collaboration between humans or business entities, and to integrate new data into the existing relational model. The data derived by the collaboration and knowledge integration platform 2506 can “trickle down” to the multidimensional data analysis and metadata engines 2504 to further enhance or “tweak” the coherent relational model 2502.
The value added workflow engines 2508 are the "top level" of the platform 2500 and, in operation, provide insights, in the form of, for example, related insights data and media 2510, to an enterprise 2512. It may be noted that the related insights data and media 2510 could be connected to the collaboration and knowledge integration platform 2506 and passed through to the value added workflow engines 2508. As always, data from the coherent relational model 2502 can be passed up through the layers of the platform 2500, and other data (such as the related insights data and media 2510) can be passed down for integration into the coherent relational model 2502. The more the value added workflow engines 2508 learn about various aspects of the enterprise 2512, the better the insights will relate to what the enterprise 2512 does. This is because any data received about the enterprise 2512 is itself integrated into the coherent relational model 2502 (in this example, through the higher layers of the platform 2500). To this end, the enterprise 2512 can provide inputs in the form of, for example, activities, assets, priorities, and constraints 2514. It may be noted that the activities, assets, priorities, and constraints 2514 can be recycled back to the enterprise 2512 with the aid of the value added workflow engines 2508, and could, as always, be passed down to the coherent relational model 2502 for integration.
It is assumed that the coherent relational model 2502 will also be updated from time to time by extracting new data 2518 from markets and solvers 2520 in an automated fashion, though this would not include extracting new data in a manual fashion. The new data 2518 can be provided to the platform 2500 as innovation inputs and/or as raw data. Although the automated acquisition of the new data 2518 is believed to be desirable, it is, strictly speaking, optional, since a system could function without it after being built, at least for a time, in a “demo” build, or for some other reason.
Advantageously, by teaching the platform 2500 the activities, assets, priorities, and constraints of the enterprise 2512, the value added workflow engines 2508 can enable the enterprise 2512 to create new ideas and allocate resources (assets) toward researching and/or implementing the new ideas, as well as other ideas that might be gleaned from the coherent relational model 2502 during an innovation cycle. Since the coherent relational model 2502 provides contextualized jobs and outcomes and solutions data, the enterprise 2512 is more likely to match solutions to needs, and to allocate resources to the jobs or outcomes that will benefit the most from the allocation. That is, the enterprise 2512 can allocate resources to the jobs or outcomes that have the largest needs-gap, or identify needs that are entirely unmet. It may be noted that an unmet need is, for practical purposes, no different than a poorly met need in the sense that the needs-gap is still determined, and it may be the case that the needs-gap is greater for a poorly met need than for an unmet need. Or, stated differently, an unmet need is a job or outcome that has the solution "do nothing," which may or may not have an explicit representation as a solution in the coherent relational model 2502.
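As an illustrative sketch, the following ranks jobs or outcomes by needs-gap, treating an entirely unmet need as one whose best current solution is "do nothing" (a satisfaction of zero). The field names and the 0-10 scale are assumptions made for this sketch.

    def needs_gap(importance, best_satisfaction):
        # Gap between how important a job or outcome is and how well the best
        # available solution (possibly "do nothing") currently serves it.
        return importance - best_satisfaction

    def prioritize(jobs_or_outcomes):
        # jobs_or_outcomes: iterable of dicts like
        # {"id": "j1", "importance": 8.7, "best_satisfaction": 2.1}
        return sorted(jobs_or_outcomes,
                      key=lambda j: needs_gap(j["importance"], j["best_satisfaction"]),
                      reverse=True)

    # A poorly met but very important need can out-rank an entirely unmet one.
    print(prioritize([
        {"id": "unmet need", "importance": 8.0, "best_satisfaction": 0.0},
        {"id": "poorly met need", "importance": 9.5, "best_satisfaction": 1.0},
        {"id": "well served", "importance": 7.0, "best_satisfaction": 6.5},
    ]))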
Engines, as used in this paper, refer to computer-readable media coupled to a processor. The computer-readable media have data, including executable files, which the processor can use to transform the data and create new data. The engines transform data and create new data using implemented data structures.
The detailed description discloses examples and techniques, but it will be appreciated by those skilled in the relevant art that modifications, permutations, and equivalents thereof are within the scope of the teachings. It is therefore intended that the following appended claims include all such modifications, permutations, and equivalents. While certain aspects of the invention are presented below in certain claim forms, the applicant contemplates the various aspects of the invention in any number of claim forms.
For example, where this is a United States application, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. sec. 112, sixth paragraph, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. §112, ¶6 will begin with the words "means for", but use of the term "for" in any other context is not intended to invoke treatment under 35 U.S.C. §112, ¶6.) Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the invention.
Claims
1. A system comprising:
- a needs-based job or outcome engine, wherein, in operation, the needs-based job or outcome engine creates a needs-based job or outcome record including at least one constraint parameter associated with a needs-based job or outcome, and stores the needs-based job or outcome record in accordance with a coherent relational model;
- a solution engine, wherein, in operation, the solution engine creates a solution record including a capability parameter indicative of a degree of capability of a solution in achieving the needs-based job or outcome using the constraint parameter, and stores the solution record in accordance with the coherent relational model;
- a capability computation engine coupled to the needs-based job or outcome engine and the solution engine, wherein, in operation, the capability computation engine computes a difference between the capability parameter for the solution and one or more constraints associated with the constraint parameter of the needs-based job or outcome;
- a commercial activity server coupled to the capability computation engine, wherein, in operation, the commercial activity server facilitates management of commercial actions taken in association with the difference between the constraint parameter of the needs-based job or outcome and the capability parameter of the solution,
- wherein disparate marketing and product development information solutions are stored in the coherent relational model.
2. The system of claim 1, wherein the difference between the capability parameter and the constraint parameter is indicative of potential innovation to achieve a new solution to the needs-based job or outcome more effectively bounded by relevant constraints.
3. The system of claim 1, wherein inventory is collected as information and stored in data tables that are integrated relationally to job or outcome data.
4. The system of claim 1, wherein, in operation, the needs-based job or outcome engine finds passages in documents that relate semantically to the needs-based job or outcome that are systematically selected and related to the needs-based job or outcome record.
5. The system of claim 4, wherein, in operation, the commercial activity server uses the systematized relationships to provide information for work that can benefit from the information.
6. The system of claim 1, wherein, in operation, the solution engine finds passages in documents that relate semantically to the solution that are systematically selected and related to the solution record.
7. The system of claim 6, wherein, in operation, the commercial activity server uses the systematized relationships to provide information for work that can benefit from the information.
8. The system of claim 1, further comprising a jobs and outcomes repository coupled to the needs-based job or outcome engine, for storing the needs-based job or outcome record.
9. The system of claim 1, further comprising a solutions repository coupled to the solution engine, for storing the solution record.
10. The system of claim 1, wherein the capability computation engine creates a capability/constraint difference record, further comprising a capability/constraint difference repository coupled to the capability computation engine, for storing the capability/constraint difference record.
11. The system of claim 1, wherein, in operation, the commercial activity server identifies the needs-based job or outcome and identifies the solution in association with the needs-based job or outcome.
12. A method comprising:
- creating a job or outcome data structure including at least one constraint parameter associated with a job or outcome;
- storing the job or outcome data structure in accordance with a coherent relational model;
- creating a solution data structure including a capability parameter indicative of a capability of a solution to meet needs of the job or outcome using the constraint parameter;
- storing the solution data structure in accordance with the coherent relational model;
- computing a difference between the capability parameter for the solution and one or more constraints associated with the constraint parameter of the job or outcome;
- facilitating management of commercial actions taken in association with the difference between the constraint parameter of the job or outcome and the capability parameter of the solution.
13. The method of claim 12, wherein the difference between the capability parameter and the constraint parameter is indicative of potential innovation to achieve a new solution to the needs-based job or outcome more effectively bounded by relevant constraints.
14. The method of claim 12, further comprising finding passages in documents that relate semantically to the needs-based job or outcome that are systematically selected and related to the needs-based job or outcome record.
15. The method of claim 14, further comprising using the systematized relationships in work that can benefit from the information.
16. The method of claim 12, further comprising finding passages in documents that relate semantically to the solution that are systematically selected and related to the solution record.
17. The method of claim 12, further comprising linking disparate marketing and product development information solutions into a coherent relational model, including the capability parameter of the solution.
18. The method of claim 12, further comprising identifying the job or outcome and the solution in association with the job or outcome.
19. A system comprising:
- a means for parameterizing a needs-based job or outcome, including at least one constraint parameter associated with the job or outcome, to create a needs-based job or outcome data structure in accordance with a coherent relational model;
- a means for identifying a capability associated with a solution, wherein a capability parameter for the solution is indicative of a degree of capability of the solution in achieving the needs-based job or outcome;
- a means for parameterizing the solution in association with the needs-based job or outcome and the capability parameter of the solution, to create a solution data structure in accordance with the coherent relational model;
- a means for computing a difference between the capability parameter for the solution and constraints associated with the at least one constraint parameter of the needs-based job or outcome;
- a means for providing data associated with the difference between the capability and the constraints to a commercial activity engine, wherein the commercial activity engine identifies the needs-based job or outcome, identifies the solution in association with the needs-based job or outcome, and facilitates management of commercial actions taken in association with the difference between the constraint parameter of the needs-based job or outcome and the capability parameter of the solution.
20. The system of claim 19, further comprising a means for finding passages in documents that relate semantically to the needs-based job or outcome and systematically selecting and relating the passages to the needs-based job or outcome record.
21. The system of claim 20, further comprising a means for using the systematized relationships in work that can benefit from the information.
22. The system of claim 19, further comprising a means for finding passages in documents that relate semantically to the solution and systematically selecting and relating the passages to the solution record.
23. The system of claim 22, further comprising a means for using the systematized relationships in work that can benefit from the information.
24. A system comprising:
- a coherent relational model;
- a collaboration and knowledge integration platform coupled to the coherent relational model;
- a value added workflow engine coupled to the collaboration and knowledge integration platform,
- wherein, in operation: the value added workflow engine provides data to an enterprise and receives enterprise-specific inputs from the enterprise; the collaboration and knowledge integration platform integrates the enterprise-specific inputs into the coherent relational model; the coherent relational model provides augmented data to the enterprise, including proposed solutions to jobs or outcomes identified in the enterprise-specific inputs in accordance with activities, assets, priorities, or constraints identified in the enterprise-specific inputs;
- wherein the augmented data is useful to the enterprise in generating ideas or determining how to allocate resources to meet needs.
Type: Application
Filed: Mar 10, 2010
Publication Date: Dec 20, 2012
Applicant: STRATEGYN, INC. (Aspen, CO)
Inventors: Mark Jaster (Rosemont, PA), Anthony W. Ulwick (Aspen, CO)
Application Number: 13/319,066
International Classification: G06Q 10/00 (20120101);