HEALTH INFORMATION SYSTEM FOR SEARCHING, ANALYZING AND ANNOTATING PATIENT DATA
Disclosed herein are improved systems, methods, and machine readable media for implementing a service for enriching patient documents using natural language processing and a semantic health taxonomy, among other types of information. Enriched documents may be mined for improved diagnostic coding and health services documentation purposes, for example to identify missed and/or inaccurately coded diagnosis codes and quality gaps.
The present application is a continuation of U.S. patent application Ser. No. 15/645,965, filed on Jul. 10, 2017, which claims the priority benefit of U.S. Provisional Patent Application No. 62/372,946, filed on Aug. 10, 2016, the disclosures of which are incorporated herein by reference in their entirety.
FIELD OF THE INVENTION
The present invention relates to improved apparatuses, systems, computer readable media, and methods for the provision of services concerning semantic annotation, enrichment, and searching of patient data.
BACKGROUND
Accurate diagnoses and information about patient health can be lost in the large volume of structured and unstructured data that document a patient's health history. There is a need for improved systems for understanding the content of that volume of data and mining it for actionable information in order to improve the accuracy and efficiency of identifying patient medical acuity, treatments and health management and associated record-keeping. Disclosed herein are embodiments of an invention that address those needs.
The above and other aspects and advantages of the invention will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
Disclosed herein are systems, methods, and machine readable media for implementing a service for enriching patient-related data. In one aspect, the invention involves one or more memories configured to implement an improved application of natural language processing to enable deep-mining of the patient data. Thus, embodiments of the invention provide improvements in computer-related technology through techniques for enabling more accurate and comprehensive extraction of health-related concepts. These concepts are supported or suggested by textual or quantitative evidence embedded in one or more documents comprising a patient's medical record and associated information about the patient. For example, enriching patient data and documents using a semantic taxonomy permits improved, automated, natural language processing of documents and fields within documents containing unstructured text for more accurate and complete detection of health and patient-related concepts; the identified concepts may then be used for further application of automated techniques to improve the detection of patient health problems (e.g., automatic identification of potential complex health disorders such as irritable bowel syndrome and/or health conditions such as congestive heart failure) and accounting issues (e.g., automatic review of documentation of health services for purposes of insurance claims verification). Use of automated word-sense disambiguation and coordinate expansion corrects notorious sources of errors in natural language processing, enhancing the accuracy of the detection of health and patient-related concepts. Use of nuanced scaling for components of a document-concept ranking procedure as described herein improves the accuracy of processing and scope of the unstructured input that can be handled by the enrichment approach. The specific use of a stack or tiers of annotators described herein additionally improves accuracy and coverage of identifying and extracting instances of entities.
Embodiments of the invention further include improved techniques for annotating and searching patient data using clinical guidelines, which may involve both the application of natural language understanding to identify qualitative concepts or other entities and quantitative measurements that are present in structured and unstructured data, and the intelligent application of rules based on the clinical guidelines.
Embodiments of the invention further include automatically inferring proposed medical codes and quality care gaps supported by concepts identified in the patient data, for example in accordance with the service for enriching patient-related data. One aspect of this improved service is that the proposed codes (or predicted conditions) may incorporate evidence from multiple documents that can be associated with the patient, and thus the proposed codes may provide a more accurate assessment of a patient's condition or medical acuity compared to proposed codes that are limited to analysis of a single document at a time. A series of user interfaces for reviewing proposed codes in the context of documenting health services and conditions is also provided. In certain embodiments, the user interfaces provide proposed codes along with estimates of how confirming the codes affects an assessment of the level of risk associated with that patient (e.g., a “risk adjustment factor”, RAF), as well as the overall risk assessment for populations containing that patient. In certain embodiments, the RAF is continuously updated and provided for each patient as additional data is incorporated into the system (e.g., updates to electronic medical records, medical claims, laboratory test results, radiology reports, medical voice-to-text transcription, and the like). By continually identifying proposed codes and related RAFs, a patient's otherwise neglected health issues may be addressed closer in time to when issues arise or are identified, rather than after a retrospective analysis, when the patient may be sicker because a health issue has been neglected for a longer period of time.
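For purposes of illustration only, the effect of confirming proposed codes on a RAF can be sketched with a simplified additive model, shown below in Python. The coefficient table, category names, and numeric values are hypothetical and do not represent the actual CMS-HCC computation or any particular embodiment.
****
# Minimal sketch of projecting a RAF adjustment when proposed codes are confirmed.
# The coefficient table and the purely additive model are illustrative assumptions.
HCC_COEFFICIENTS = {
    "HCC19_diabetes_without_complication": 0.11,   # hypothetical weight
    "HCC78_parkinsons_disease": 0.61,              # hypothetical weight
}

def raf(demographic_score, confirmed_categories):
    """Sum a demographic base score and the weights of the confirmed condition categories."""
    return demographic_score + sum(HCC_COEFFICIENTS.get(c, 0.0) for c in confirmed_categories)

current = raf(0.35, ["HCC19_diabetes_without_complication"])
projected = raf(0.35, ["HCC19_diabetes_without_complication", "HCC78_parkinsons_disease"])
print(f"current RAF {current:.2f}, projected RAF {projected:.2f}, "
      f"adjustment {projected - current:+.2f}")
****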
Embodiments of the invention further include proposing health-related services based on information about the patient identified in patient data, including, for example, recommended post-hospital services, medications, physician documentation at a patient encounter, or tests, based on socioeconomic factors and identified care needs.
As used herein, a “patient” refers to an individual who may receive or have received medical services. Depending on the context, a “user” may be an administrator or health professional who is accessing or editing information about one or more patients. In certain contexts or embodiments, a “user” may also be a patient. In certain contexts or embodiments, a “user” may be a developer or curator involved with creation or maintenance of a data engine as described below.
In addition to associating constituent instances with concepts and other entities, enrichment of patient-related data may include, for example, normalization of values such as measurements, and annotation of documents using clinical rules and other types of rules. In certain embodiments, the enriched document 108 maintains a respective reference to the evidence in the original input data document 101 that supports each identified entity/concept. In certain embodiments, the enriched data 108 is the original document 102 or 104 along with additional metadata identifying linked concepts and other annotations.
In step 202, candidate concepts are extracted from preexisting ontologies, taxonomies, and validated information sources that define or manage controlled terminologies and codes. For example, concepts may be obtained from the National Cancer Institute Thesaurus (NCI-T), the Healthcare Common Procedure Coding System (HCPCS), International Classification of Diseases (e.g., ICD-9, ICD-10), the Gene Ontology (GO) project, the Systematized Nomenclature of Medicine (SNOMED), Online Mendelian Inheritance in Man (OMIM), National Drug Code (NDC), Current Procedural Terminology codes (e.g., CPT-4), Logical Observation Identifiers Names and Codes (LOINC), National Library of Medicine (NLM) Medical Subject Headings (MeSH), Diagnosis-Related Group (DRG) codes, and NLM RxNorm.
In step 204, the candidate concepts are used to define, augment, and correct concepts and relationships between concepts in a semantic taxonomy 206. This step may involve, for example, automatic identification or human curation of semantic relationships between concepts, creation of consumer friendly names, performing clinical quality control, defining synonyms, acronyms and abbreviations for concepts and attributes, creating stemming and correction lists (e.g., equating inject/injects/injecting where appropriate, and defining common misspellings of words), handling same-spelling homonyms and phrases involving negation, and defining term- and query-specific rules. In certain embodiments, step 204 may involve automatically identifying or suggesting concepts through data mining of published clinical/scientific literature, or human curation using clinical/scientific literature. In certain embodiments, step 204 may involve incorporating organization-specific terminologies into the semantic taxonomy 206. In certain embodiments, sets of concepts may constitute individual databases within the semantic taxonomy 206. Augmenting concepts and relationships may involve associating categories or labels and associated values for attributes of concepts and concept relationships.
In certain embodiments, a terminology editor provides a user interface for facilitating one or more aspects of step 204 (e.g., human curation of concepts).
In step 208, a set of clinical rules 210 may be incorporated into data engine 106. This may involve, for example, extracting clinical rules from clinical rules input documents such as existing published clinical guidelines, organization-specific clinical guidelines, clinical policy bulletins, or published scientific and clinical literature including books and journals. Concepts in the semantic taxonomy 206 that are implicated by particular clinical rules may be associated with those rules within data engine 106. Defining new clinical rules may involve creating or augmenting such concepts as in step 204. In certain embodiments, data engine 106 may additionally include predefined collections of entities that are not concepts (e.g., dictionaries of entities).
In certain embodiments, a clinical guidelines editor provides a user interface for facilitating aspects of step 208 (e.g., creating a machine-readable clinical guideline from a clinical rules input document). In certain embodiments, creation of new clinical rules from clinical rules input documents is automated.
Examples of concept attributes 304 may include semantic type, medical name, medical codes, and synonyms. Medical codes may be defined to be a specific type of code, for example, an ICD-9, ICD-10, RxNorm, or CPT-4 code.
Examples of concept relationship 306 types (e.g., concept semantic types) may include, for example, symptoms, nutritional supplements, medications (e.g., concept: Doxorubicin), complications (e.g., concept: metastatic cancer), therapies, synonyms, preventions (e.g., concepts: breast feeding, low-fat diet), risk factors, physician specialties, treatments (e.g., concepts: chemotherapy, mastectomy), diagnostic procedures (e.g., concept: mammography), and neoplastic processes.
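For illustration, concepts 302, concept attributes 304, and concept relationships 306 might be held in memory as a small graph of records, as in the following Python sketch; the class names, identifier, and example values are assumptions of the sketch rather than the actual representation used by semantic taxonomy 206.
****
from dataclasses import dataclass, field

@dataclass
class Relationship:
    rel_type: str   # e.g., "treatment", "risk_factor", "diagnostic_procedure"
    target: str     # identifier or name of the related concept

@dataclass
class Concept:
    concept_id: str
    name: str
    attributes: dict = field(default_factory=dict)       # semantic type, codes, synonyms
    relationships: list = field(default_factory=list)

# Example values are illustrative only.
breast_cancer = Concept(
    concept_id="concept-0001",
    name="breast cancer",
    attributes={
        "semantic_type": "Neoplastic Process",
        "medical_codes": {"ICD-10": "C50.9"},
        "synonyms": ["carcinoma of breast", "breast carcinoma"],
    },
    relationships=[
        Relationship("treatment", "chemotherapy"),
        Relationship("treatment", "mastectomy"),
        Relationship("diagnostic_procedure", "mammography"),
    ],
)
print(breast_cancer.name, [r.target for r in breast_cancer.relationships])
****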
Creation of a machine-readable guideline 406 may involve converting a decision tree represented visually (as in exemplary pictorial guideline 402a) by configuring one or more memories to represent the guideline using a corresponding decision tree or graph data structure (e.g., a data structure including aspects of the node listing in exemplary machine-readable guideline 406a). In certain embodiments, an input guideline 402 may be described in text rather than images or diagrams. In certain embodiments, optical character recognition may be used to automatically extract text associated with an input guideline 402. In certain embodiments, machine learning techniques may be used to automatically identify the sequence of clinical rules represented in a pictorial input guideline. In certain embodiments, aspects of this conversion process may be accomplished via a clinical guidelines editor that provides a user interface (e.g., for human curation of clinical guidelines and clinical rules). In certain embodiments, clinical guidelines may be sourced from the National Guideline Clearinghouse, www.guideline.gov, or a professional medical association for a particular medical specialty or practice area, such as the American Academy of Pediatric Dentistry or the American College of Radiology.
In certain embodiments, the result of evaluating a particular clinical rule or a guideline with regard to data such as a document or patient data 101 is that the data is enriched by associating it with a concept, tag, value, or other information indicating the result of the particular clinical rule (in the case of applying a single clinical rule), or the results of one or more constituent clinical rules (in the case of applying a guideline). In certain embodiments, the input data are associated with these additional concepts, tags, values, or other information in the same manner that entities identified in input data are linked with corresponding concepts in a semantic taxonomy. For example, in certain embodiments, a particular patient document such as a pathology report may provide evidence that a tumor has tubular histology (a result of rule 404-134), is ER-positive (a result of rule 404-135), is staged as pT2 and pN1mi (a result of rule 404-137), and is >3 cm (a result of rule 404-140). Based on this evidence, data engine 106 may infer that adjuvant endocrine therapy is recommended (a result of rule 404-147), even if the pathology report does not state or otherwise suggest that “adjuvant endocrine therapy” is recommended or prescribed. In certain embodiments, this inference regarding endocrine therapy may be associated with the patient or the pathology report document along with information regarding the basis for the inference, such as “automatic inference based on clinical guideline <402a>” and a reference to the evidence underlying clinical rules 404-134, 404-135, 404-137 and 404-140. In certain embodiments, an evaluation of the confidence in the result of any individual clinical rule or overall clinical guideline (e.g., a statistical evaluation of the quality of the evidence or the inference based on the guideline) may be associated with the patient or the pathology report document.
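For illustration, a machine-readable guideline 406 may be pictured as a small decision graph that is walked against facts extracted from a patient document, as in the following Python sketch. The node names, predicates, and recommendation label are hypothetical simplifications and do not reproduce guideline 402a or rules 404-134 through 404-147.
****
# Minimal sketch: a guideline as a dictionary of nodes, each holding a predicate
# over extracted patient facts and yes/no branches.  All names are illustrative.
GUIDELINE = {
    "histology_tubular": {
        "test": lambda facts: facts.get("histology") == "tubular",
        "yes": "er_positive", "no": "other_pathway",
    },
    "er_positive": {
        "test": lambda facts: facts.get("er_status") == "positive",
        "yes": "tumor_over_3cm", "no": "other_pathway",
    },
    "tumor_over_3cm": {
        "test": lambda facts: facts.get("tumor_size_cm", 0) > 3,
        "yes": "recommend_adjuvant_endocrine_therapy", "no": "other_pathway",
    },
}

def evaluate(guideline, start, facts):
    """Walk the decision graph; return the terminal label and the rules that fired."""
    node, fired = start, []
    while node in guideline:
        rule = guideline[node]
        branch = "yes" if rule["test"](facts) else "no"
        fired.append((node, branch))
        node = rule[branch]
    return node, fired   # the fired rules serve as the evidence for the inference

facts = {"histology": "tubular", "er_status": "positive", "tumor_size_cm": 3.5}
print(evaluate(GUIDELINE, "histology_tubular", facts))
****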
In step 504, candidate concept instances are identified in each document that correspond to concepts in a taxonomy, such as semantic taxonomy 206. For example, concept instances may be identified by searching a graphical representation of the taxonomy using the segments. For example, the semantic taxonomy may contain concepts 302 having a concept attribute 304 of type “synonyms,” which may include synonyms, abbreviations, and acronyms (or these may be separate attributes). Thus a search of the semantic taxonomy using candidate instance/segments “M.I.” or “heart attack” may result in the association of a concept “myocardial infarction.” Additionally, as exemplary concept “myocardial infarction” may be related in the taxonomy to concept “percutaneous coronary intervention” (as a “treatment” of myocardial infarction), a candidate instance/segment containing the text “percutaneous coronary intervention” may also result in a suggested concept of “myocardial infarction.” Relationships between concepts in the taxonomy may be associated with relationship scores based on how closely the concepts are related, and this relationship score may be taken into account in estimating a confidence score. Methods associated with the data engine 106 may be used to execute this searching, and to assess, normalize, and adjust confidence, “hit”, or similarity scores used to evaluate candidate instance-concept mappings. Each candidate instance may be associated with a score that denotes a confidence measure as to whether the candidate instance accurately maps to a concept. In certain embodiments, if a score is below a threshold, the candidate instance may be disregarded. In certain embodiments, entity instances are identified based on a subset of the segment types, such as sentences. In certain embodiments, a single entity instance may be evidenced in segments distributed across two or more documents associated with the same patient. In another example, documents may be scored with respect to one or more entities to assess the relevancy of the document to each of the one or more entities.
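For illustration, the instance-to-concept matching of step 504 can be sketched as a lookup of each candidate segment against synonym attributes, with a similarity-based confidence score and a threshold, as below; the synonym table, the use of a string-similarity ratio, and the threshold value are assumptions of the sketch rather than the data engine's actual matching method.
****
from difflib import SequenceMatcher

# Hypothetical synonym attribute values drawn from a semantic taxonomy.
SYNONYMS = {
    "myocardial infarction": ["myocardial infarction", "m.i.", "heart attack"],
    "percutaneous coronary intervention": ["percutaneous coronary intervention", "pci"],
}

def link_segment(segment, threshold=0.85):
    """Return (concept, confidence) pairs whose best synonym match clears the threshold."""
    hits = []
    for concept, synonyms in SYNONYMS.items():
        best = max(SequenceMatcher(None, segment.lower(), s).ratio() for s in synonyms)
        if best >= threshold:
            hits.append((concept, round(best, 2)))
    return hits

print(link_segment("heart attack"))   # exact synonym match, confidence 1.0
print(link_segment("hart attack"))    # misspelling still scores above the threshold
****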
Instances in the input data may be linked to corresponding entities using various techniques, such as using annotations, tags, or a relational database, or may be extracted and associated with a patient. In one example, the entity may be represented in the document as one or more segments in the document, e.g., a particular sentence or three words in various locations in the document. This example instance may be associated with an identifier for the corresponding entity/concept, and that entity identifier may be associated with the document text using markup language tags around the particular sentence or three words in various locations in a marked-up enriched version of the document, where the markup tags further denote the entity identifier.
In step 506, additional entity extractors may be applied to the document. In certain embodiments, segments/candidate instances may be evaluated using additional separate taxonomies or dictionaries encompassed by data engine 106. These additional entity extractors may represent, for example, non-medical concepts or terms such as names (patients and other names), geographical terms, and molecules (such as drugs in development). An additional entity extractor may correspond to organization-specific terms associated with a set of patients. In certain embodiments, input documents may be annotated or tagged using one or more additional entity extractors. In certain embodiments, entities may be extracted from the documents and associated with one or more patients.
In step 508, rule-based annotators may be applied to the documents. These rule-based annotators may be used to augment and correct the entities and other annotations associated with each input document. Rule-based annotators may operate using, for example, section-specific annotators, semantic type annotators (including clinical guidelines—e.g., a machine-readable guideline 406), and base-term-type annotators. In certain embodiments, one or more rule-based annotators may be used to select proposed medical codes based on patient documents for a particular patient, or to propose a care plan for a patient. Rule-based annotators may additionally use subject-specific knowledge bases to provide information that the annotators may use in annotating the input documents with entities and other information. Step 508 is described more specifically with respect to
In step 510, certain instances or terms in the document are evaluated to disambiguate word sense (e.g., where terms have homonyms). For example, a segment reciting “cold temperature” has a different meaning from “common cold.” In certain embodiments, such word sense disambiguation proceeds by determining whether additional words suggest the correct context for an ambiguous term. For example, the procedure may analyze segments of one or more particular granularities that contain the ambiguous term to represent the context for the ambiguous term (e.g., other words in the same sentence, other words in the same paragraph, or other words in the same document). The procedure may analyze other words within 1, 2, 3, 4, 5, 10, 15, 20, 50, or 100 words of the ambiguous term. In one example, the term “shingles” may appear within 50 words of the terms “disease,” “herpes,” or “acyclovir” in a document. In a different document, the term “shingles” may appear within 50 words of the terms “house” or “rain.” Using, for example, a knowledgebase such as semantic taxonomy 206, terms in the context for the ambiguous term may be found to be more closely associated with the disease sense than with the building construction sense, based on the value or character of one or more concept attributes or concept relationships (e.g., because the disease “shingles” is caused by the virus “herpes zoster” and may be treated using the antiviral drug “acyclovir”), by scoring the relatedness between the document or segment and the concept “shingles (disease)” versus the concept “shingles (construction material),” or by using methods of step 504. In certain embodiments, data engine 106 may maintain a list or database of ambiguous terms and associated collections of diagnostic terms that indicate one or more particular contexts for the terms (e.g., for ambiguous term A, diagnostic term collections A1, A2, and A3, where each of A1, A2, and A3 is a group of terms associated with one of three different meanings, and where the presence or absence of any term from A1, A2, or A3 may be used to disambiguate between the three competing meanings for ambiguous term A when term A occurs in a document or input data 101). In certain embodiments, the associated collections of diagnostic terms may provide positive and negative indications that an ambiguous term has a particular meaning and should be linked to a particular entity. In certain embodiments, individual diagnostic terms may be associated with weights so that terms more strongly associated or negatively correlated with a particular meaning for an ambiguous term may have a larger effect on the disambiguation decision than less predictive diagnostic terms. In certain embodiments, data engine 106 implements a method for correcting existing word sense errors in existing instance-entity mappings. In certain embodiments, data engine 106 implements a method for suggesting new instance-entity mappings based on ambiguous terms in segments. Because a significant fraction of medical coding errors results from miscoding of homonyms, incorporating word sense disambiguation methods avoids such errors and greatly improves the accuracy and usefulness of the resulting enriched data.
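For illustration, the diagnostic-term approach to word sense disambiguation can be sketched as a weighted vote over context words near the ambiguous term, as below; the term collections, weights, window size, and sense labels are illustrative assumptions rather than the contents of any actual knowledge base.
****
import re

# Hypothetical weighted diagnostic-term collections for the ambiguous term "shingles".
SENSES = {
    "shingles (disease)": {"herpes": 2.0, "acyclovir": 2.0, "rash": 1.0, "disease": 0.5},
    "shingles (construction material)": {"roof": 2.0, "house": 1.0, "rain": 0.5},
}

def disambiguate(text, term="shingles", window=50):
    """Score each sense by summing weights of diagnostic terms found within the window."""
    words = re.findall(r"[a-z']+", text.lower())
    if term not in words:
        return None
    i = words.index(term)
    context = set(words[max(0, i - window): i + window + 1])
    scores = {sense: sum(w for t, w in terms.items() if t in context)
              for sense, terms in SENSES.items()}
    return max(scores, key=scores.get), scores

print(disambiguate("Patient reports shingles; started acyclovir for herpes zoster."))
****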
In step 512, coordinate expansion is applied to the input data 101, segments created in step 502, or candidate instances identified in step 504. Coordinate expansion refers to the steps of recognizing where two or more entity instances exist in a condensed grammatical form (e.g., by identifying multiple instances linked by conjunctions such as “and” or “or”, or punctuation such as ‘/’), and accounting for the existence of all the instances. For example, the text “Diabetes Type I and II” is expanded to recite two separate instances—“Diabetes Type I” and “Diabetes Type II.” In another example, the text “lung/breast cancer” is expanded to identify “lung cancer” and “breast cancer.” In certain embodiments, data engine 106 implements a method for correcting errors using coordinate expansion in existing instance-entity mappings (e.g., where a term such as “Diabetes Type I and II” is only identified as the entity “Diabetes Type I”). In certain embodiments, data engine 106 implements a method for suggesting new instance-entity mappings based on coordinate expansion of text in segments. Because a significant fraction of medical coding errors results from failure to recognize the existence of all instances where they are expressed in a condensed grammatical form, incorporating coordinate expansion methods avoids such errors and greatly improves the accuracy and usefulness of the resulting enriched data.
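For illustration, a pattern-based coordinate expander handling the two condensed forms discussed above might look like the following sketch; the regular expressions and the limited set of handled patterns are assumptions, and a production implementation would require broader grammatical coverage.
****
import re

def coordinate_expand(text):
    """Expand simple 'X A and B' enumerations and 'A/B tail' forms into separate instances."""
    # "Diabetes Type I and II" -> ["Diabetes Type I", "Diabetes Type II"]
    m = re.fullmatch(r"(.+?)\s+(\S+)\s+and\s+(\S+)", text)
    if m:
        head, first, second = m.groups()
        return [f"{head} {first}", f"{head} {second}"]
    # "lung/breast cancer" -> ["lung cancer", "breast cancer"]
    m = re.fullmatch(r"(\S+)/(\S+)\s+(.+)", text)
    if m:
        a, b, tail = m.groups()
        return [f"{a} {tail}", f"{b} {tail}"]
    return [text]

print(coordinate_expand("Diabetes Type I and II"))
print(coordinate_expand("lung/breast cancer"))
****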
In certain embodiments, process 500 may further include methods for automatically generating document summaries by aggregating linked entities/concepts and generating a textual summary based on attributes of those entities/concepts.
In certain embodiments, process 500 may further involve generating an index for each enriched document 108 or an index for each patient. Such an index may include references to linked instances/concepts or concept attributes, or an extracted list of entities/concepts. In certain embodiments, such an index may be used to quickly search enriched documents. The index and enriched documents may be formatted as, for example, an Apache Lucene™ search index, and the associated documents may be compatible with an Apache Cassandra™ data store, and an Apache Solr™ search server.
In certain embodiments, steps 504-512 may be performed in a different order than shown
In step 604, semantic type annotators may be applied to segments. For example, a vital signs and observations annotator may be used to identify values for particular types of measurements based on patterns corresponding to measurement-value pairs, and accounting for common abbreviations—e.g., if a type of measurement such as “BMI,” “body mass index,” or “heart rate” is found, the annotator may search for a trailing colon followed by a number. In certain embodiments, the annotator may further identify the units for the measurement, and may evaluate whether the number is within the range of possibility for a measurement of that type. (E.g., an extracted value of weight=2 might be discarded as an unrealistic value for common units such as kilograms or pounds.) In certain embodiments, an annotator may associate a type of observation with a qualitative value, such as “skin condition: flushed.” A laboratory and test results annotator similarly may search in segments for the presence of test/value pairs (e.g., TSH (thyroid stimulating hormone), uric acid, or A1C/HbA1C (hemoglobin A1C, glycohemoglobin)) or panels of tests and result values (e.g., CMP (Comprehensive Metabolic Panel, comprising 14 tests), hepatitis panel, or CBC (complete blood count)), where the values may be numerical or qualitative, such as “positive” where a tested condition is present. Additional semantic type annotators may include a drug and dosage annotator, a condition annotator, and a treatment procedures annotator, any of which may be based on a clinical guideline. In certain embodiments, semantic annotators may additionally use information about the context, such as the section or type of document that includes the segment. In certain embodiments, annotators such as a condition annotator and treatment procedures annotator may apply one or more rules or clinical guidelines, such as machine-readable guideline 406a. In certain embodiments, a semantic-type annotator may initiate one or more base-term-type annotators, and the annotations of the semantic-type annotator may be dependent upon or rely upon the results or annotations of the base-term-type annotators.
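For illustration, a minimal version of the measurement-value portion of such an annotator might pair a “name: value” pattern with a plausibility range, as in the sketch below; the recognized measurement names and the numeric ranges are illustrative assumptions.
****
import re

# Hypothetical measurement types with plausible numeric ranges for sanity checks.
MEASUREMENTS = {
    "bmi": ("body mass index", 10.0, 80.0),
    "body mass index": ("body mass index", 10.0, 80.0),
    "heart rate": ("heart rate", 20.0, 300.0),
}

PATTERN = re.compile(r"(?i)\b(bmi|body mass index|heart rate)\s*:\s*(\d+(?:\.\d+)?)")

def annotate_vitals(segment):
    """Extract measurement-value pairs and discard values outside the plausible range."""
    annotations = []
    for name, value in PATTERN.findall(segment):
        canonical, low, high = MEASUREMENTS[name.lower()]
        v = float(value)
        if low <= v <= high:
            annotations.append({"measurement": canonical, "value": v})
    return annotations

print(annotate_vitals("Vitals today - BMI: 27.4, heart rate: 72"))
****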
In step 606, base-term-type annotators may be applied to segments. Base-term-type annotators may provide more specific information about an identified instance of a concept or other entity. Base-term-type annotators may include, for example, a negation annotator (e.g., determine if an instance or value is negated), an age group annotator (e.g., determine age of patient), a gender annotator (e.g., determine gender of patient), a geographic annotator, and a temporal value annotator. In certain embodiments, information from base-term-type annotators may provide the context to determine if, for example, a test result is within the normal range for the patient (e.g., where female and male patients are associated with different ranges, or expected values change based on age).
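For illustration, the negation annotator can be approximated by scanning a short window of tokens preceding an identified instance for negation cues, in the spirit of NegEx-style methods; the cue list and window size below are illustrative assumptions rather than the actual annotator.
****
NEGATION_CUES = {"no", "denies", "without", "negative", "ruled", "absent"}

def is_negated(tokens, instance_start, window=4):
    """Return True if a negation cue appears within `window` tokens before the instance."""
    preceding = tokens[max(0, instance_start - window): instance_start]
    return any(tok.lower().strip(",.;:") in NEGATION_CUES for tok in preceding)

tokens = "Patient denies chest pain ; history of hypertension".split()
print(is_negated(tokens, tokens.index("chest")))        # True: "denies" precedes the finding
print(is_negated(tokens, tokens.index("hypertension"))) # False: no cue in the window
****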
In certain embodiments, as part of any step of process 500, annotators may draw upon specific knowledgebases in order to identify additional entity instances and instance attribute values that are present in a segment or patient document. For example, specific knowledgebases may include code translations (to identify or tag, e.g., ICD-9 codes, CPT4, RxNorm), regular expression patterns (e.g., drug dosage patterns), temporal values, age values, geographic entities, semantic type concepts (e.g., Diseases, Laboratory tests, Drugs from semantic taxonomy 206), a database of document types and headings, a database of stemming, misspellings, and homonyms, clinical rules 210, and use-case-specific data, rules, and patterns.
In certain embodiments, steps 602-608 may be executed in a different order from the order shown in
In certain embodiments, for a given entity/concept, patient documents may be ranked to identify the most relevant documents to the given concept using a ranking procedure. In one example, a concept search term may be associated with a concept for use in ranking documents responsive to the search term, e.g., by matching or finding the most similar value of a representative attribute of the concept compared with the search term, such as the concept with a matching/similar name or title (such that the concept search term is equivalent to the representative attribute of the concept). In certain embodiments, one or more attributes of an entity may be used as entity search terms for the entity. Ranking may be based on (1) the occurrences of an entity/concept in the document—that is, the count and/or location of instances of an entity within particular fields of the document (e.g., ranking based on finding the entity search term at one or more locations in the title and/or the body of the document) and (2) relationship strength—that is, the strength of the relationship between the given entity and the concept instances occurring in the document. For example, a relationship may be stronger if the given entity and a document concept instance are directly connected in a concept taxonomy (having an edge count or distance of “1” in a graphical taxonomy of concepts). A relationship may be weaker if the given entity and document concept instance are indirectly connected by intervening concepts in the concept taxonomy (having edge counts or distances between concepts of 2 or more). In certain embodiments, only positive relationships are included in determining concept distances of 1 or more.
In a more specific example, one or more occurrence scores may be calculated by assessing the number of instances of an entity (e.g., measured as the number of occurrences of an entity search term) in a field of a patient document, where the field may be the item title (e.g., in the file name or in the text title within a document), the section title, the keywords field, the MESH keywords field, the abstract, or the body of the document. In certain embodiments, a higher score corresponds to a higher number of occurrences, and indicates greater relevancy to the given entity search term. For one or more fields, such as the body, the number of occurrences may be weighted according to where the instances are located within the field (e.g., higher weight earlier in the value for the field, and lower weight toward the end of the value or text).
Occurrence scores may be weighted by multiplying or adding a boost value to obtain a base score for one or more of the fields. Occurrence scores associated with the given entity for the patient document may be used to rank patient documents. A boost value is a positive or negative weighting factor. The boost values may be specific to particular fields. Base scores may thus be based on a combination of weighted occurrence scores. Base scores may be limited to a maximum base score by a threshold or cut off value. Base scores associated with the concept may be used to rank patient documents with respect to the given entity.
A relationship score for the patient document and given concept may be based on the base scores for instances of concepts/entities in the document that are related to the given entity, for example where a higher score indicates a stronger relationship. In certain embodiments, these related concepts must have a positive relationship to the given concept. For example, a positive relationship indicates that the two concepts have some positive semantic correlation, whereas in certain embodiments a negative relationship indicates that the two concepts are negatively correlated—i.e., the presence of one concept means that the second concept is less likely to be true or present. In certain embodiments, certain entities/concepts that might otherwise be related to the given entity are filtered out and not included in a relationship score, for example based on the value of an attribute or membership in a group. The relationship score may be the sum or product of the base score of related concepts (as indicated to be related by a graphical taxonomy structure) where the related concepts have an edge distance of 1, 2, or fewer than 3 edges relating the given entity to a related document concept. The relationship score may be the sum or product of a set of scores assessing the strength of the relevance of an individual document instance of a concept/entity to the given entity, where each of the set of scores is associated with an instance that is connected to the given concept in a taxonomy. In certain embodiments, relevance of an individual document instance of a concept/entity to the given entity may be based on, for example, a count of the number of instances of a query term/given entity in the document. The relationship score may be limited to a maximum value by a cutoff value, and/or re-scaled by a scaling value.
A title score may be calculated based on a count of the number of instances/occurrences of a query term/given entity in the title of the document. For purposes of the title score, the title of the document may be one or more of the file name, the title or headline appearing within the document, and section titles appearing within the document. The title score may be affected by the location of the query term/given entity within the title (i.e., where appearing earlier in the title leads to a higher score indicating greater relevancy), and the length or number of words in the title (i.e., where a greater length or larger number of words reduces the title score).
A map relevancy score may be calculated based on a combination of an occurrence score or base score, a relationship score, and a title score. Such a score may be adjusted or normalized based on the body length—for example, the score may be scaled inversely with the length of the body of the document.
In certain embodiments, documents in a set of documents or database may be ranked with respect to a query term or given entity based on one or more of an occurrence score or base score for the term/given entity, relationship score for the term/given entity, title score for the term/given entity, and/or map relevancy score for the term/given entity—for example, if a high score indicates better relevancy or a better match, the documents scoring higher than a threshold or the top 1, 2, 5, or 10 documents may be provided in response to a request for the top-ranked documents for a search term. In certain embodiments, a lower score may indicate a better match or better relevancy, and the documents scoring below a threshold may be provided in response to a request for the top-ranked documents for a search term.
In certain embodiments, occurrence scores, base scores, relationship scores, title scores, and/or map relevancy scores may be pre-calculated for a set of query terms or entities and stored in an index for a quick look-up. In certain embodiments, one or more of these scores may be calculated on an as-needed basis, for example at the time that a search term is provided by a user via a search user interface as a query term.
One specific example method for scoring documents is as follows:
****
(1) Count the number of occurrences of the concept in the title, keywords, and other fields except the body: Oti=the number of occurrences of the concept in the item title; Ots=the number of occurrences of the concept in the section title; Ok=the number of occurrences of the concept in the keywords; Okm=the number of occurrences of the concept in the MeSH keywords; Oa=the number of occurrences of the concept in the abstract.
(2) Calculate the body base score: Ob=the sum, over each occurrence of the concept in the body, of (1−position/body length).
(3) Calculate the base score for title, keywords, and body with boost values: Bt=boost value for title (default 8); Bk=boost value for keywords (default 4); Bb=boost value for body (default 1); B′=(Oti*Bt+Ots*Bt)+(Ok*Bk+Okm*Bk+Oa*Bk)+Ob*Bb.
(4) Adjust a large base score for the body and keywords: Cb=cutoff value from linear to logarithmic scaling (default 32); B=B′ if (B′≤Cb); B=Cb*(1+log(B′)−log(Cb)) if (B′>Cb).
(5) Remove concepts if part of an exceptions group.
(6) Calculate a positive relationship score: P′=the sum of the base scores of the positive concepts that have a relationship with the concept at distance 1.
(7) Adjust a large relationship score: Cp=cutoff value from linear to logarithmic scaling (default 48); P=P′ if (P′≤Cp); P=P′+(log(Cp))^2/(log(Cp)−1)*(P′/log(P′)−Cp/log(Cp)) if (P′>Cp).
(8) Scale the relationship score: P=P*0.5.
(9) Calculate the title special score (for the item title and the section title): Br=term word-count-ratio boost value (default 64); Bp=term position boost value (default 32); Wc=the number of words in the term (concept); Wt=the number of words in the title; Pc=the position of the term (concept) in the title (in characters); Lt=the length of the title (in characters); T(i|s)=(Wc/Wt)^2*Br+(1−(Pc/Lt))*Bp; T=Ti+Ts.
(10) Calculate the map relevancy value as a reference: M′=B+P+T.
(11) Adjust the map relevancy value with the body length: Wb=the number of words in the body; M=0 if (Wb≤50); M=M′*(Wb−50)/100 if (50<Wb≤100); M=M′*(0.5+(Wb−100)/800) if (100<Wb≤500); M=M′ if (500<Wb≤1500); M=M′*(1−(Wb−1500)/2500) if (1500<Wb≤2000); M=M′*0.8 if (Wb>2000).
(12) Adjust the map relevancy value for the Anatomy STY group: M=M*0.25 if the concept is in the Anatomy STY group; M=M otherwise.
(13) Store M, B, P, T, and Ob separately in the concept index and report them.
****
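For illustration, steps (1) through (13) above may be transcribed into code roughly as follows; the default boost and cutoff values follow the listing, while the use of natural logarithms, the field layout of the document record, and simple substring counting are assumptions of the sketch (and only the final map relevancy value M is returned rather than storing M, B, P, T, and Ob in an index).
****
import math

# Defaults follow the listing above; the helper functions and `doc` layout are assumptions.
BT, BK, BB = 8, 4, 1      # boost values for title, keywords, body
CB, CP = 32, 48           # cutoff values for the base and relationship scores
BR, BP = 64, 32           # title word-count-ratio and term-position boosts

def count(term, text):
    """Number of case-insensitive occurrences of the concept term in a field."""
    return text.lower().count(term.lower())

def body_base_score(term, body):
    """Ob: each occurrence of the term contributes (1 - position / body length)."""
    t, low = term.lower(), body.lower()
    ob, pos = 0.0, low.find(t)
    while pos != -1:
        ob += 1 - pos / max(len(low), 1)
        pos = low.find(t, pos + 1)
    return ob

def base_score(term, doc):
    """Steps (1)-(4): weighted occurrence counts with a logarithmic cutoff."""
    b = ((count(term, doc["item_title"]) + count(term, doc["section_title"])) * BT
         + (count(term, doc["keywords"]) + count(term, doc["mesh_keywords"])
            + count(term, doc["abstract"])) * BK
         + body_base_score(term, doc["body"]) * BB)
    return b if b <= CB else CB * (1 + math.log(b) - math.log(CB))

def relationship_score(related_base_scores):
    """Steps (6)-(8): sum base scores of distance-1 positive concepts, damp large sums, halve."""
    p = sum(related_base_scores)
    if p > CP:
        p += (math.log(CP) ** 2) / (math.log(CP) - 1) * (p / math.log(p) - CP / math.log(CP))
    return 0.5 * p

def title_score(term, title):
    """Step (9): reward terms covering more of a title and appearing earlier in it."""
    pos = title.lower().find(term.lower())
    if pos == -1:
        return 0.0
    wc, wt = len(term.split()), len(title.split())
    return (wc / wt) ** 2 * BR + (1 - pos / len(title)) * BP

def body_length_factor(wb):
    """Step (11): scale the map relevancy value by the word count of the body."""
    if wb <= 50:
        return 0.0
    if wb <= 100:
        return (wb - 50) / 100
    if wb <= 500:
        return 0.5 + (wb - 100) / 800
    if wb <= 1500:
        return 1.0
    if wb <= 2000:
        return 1 - (wb - 1500) / 2500
    return 0.8

def map_relevancy(term, doc, related_base_scores, anatomy_sty=False):
    """Steps (10)-(12): M = (B + P + T) adjusted for body length and the Anatomy STY group."""
    b = base_score(term, doc)
    p = relationship_score(related_base_scores)
    t = title_score(term, doc["item_title"]) + title_score(term, doc["section_title"])
    m = (b + p + t) * body_length_factor(len(doc["body"].split()))
    return 0.25 * m if anatomy_sty else m

# Hypothetical document record used only to exercise the sketch.
doc = {"item_title": "Myocardial infarction follow-up note", "section_title": "Assessment",
       "keywords": "myocardial infarction", "mesh_keywords": "", "abstract": "",
       "body": " ".join(["Discussed myocardial infarction recovery and medications."] * 30)}
print(round(map_relevancy("myocardial infarction", doc, [6.0, 4.5]), 2))
****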
In step 702, a query is processed to identify concept instances in the query text that correspond to concepts in semantic taxonomy 206. In certain embodiments, this identification of instances in the query text uses the same or similar methods to those described in step 504 of process 500. In certain embodiments, the enriched documents will be searched using query entities/concepts that correspond to instances that exceeded a threshold score.
In step 704, data engine 106 will identify enriched documents that are also associated with the query entities/concepts, e.g. by searching an index for each enriched document 108 with each query entity. In certain embodiments, semantic taxonomy 206 will be used to identify concepts that are related to the query entities via concept relationships 306, and the resulting universe of entities/concepts will be used to search the enriched documents 108 and identify matching documents/patients.
In step 706, the matches between the query entities and patient documents will be evaluated by calculating one or more match scores to denote the quality of the match. Such a score may be based on an evaluation of the strength of the relatedness of query and hit entities/concepts in semantic taxonomy 206, or another measure of similarity. Matches below a threshold score may be discarded. In certain embodiments, the identification of enriched documents that may be relevant to the query entities (step 704) (that is, matches or hits to the query entities) and scoring of matches (706) may be performed by searching an index of enriched documents, where each document is associated with scores for ranking each respective document in accordance with the document's relevance to one or more concepts or search terms. In certain embodiments, identification of matches to the query entities may be performed by evaluating whether the search terms are implicated in the documents on demand, in response to receiving the query (e.g., each of a set of documents will be evaluated as to whether they contain instances of the query entity by annotating or enriching the documents with respect to the query entity).
In step 708, a list of matched patient documents, or identifiers to the matched documents may be provided. In certain embodiments, a list of patients (e.g., patients associated with the matched patient documents), or a list of objects generated based on entities instantiated in the documents may be provided (e.g., information extracted from the matched documents and provided in a different form). In certain embodiments, the list of documents may be organized by patient, and/or by the strength of the match score.
In certain embodiments, the query may be converted to a test that patients associated with enriched documents must meet—for example, patients who are undiagnosed but may have chronic kidney disease based on out-of-range readings for eGFR and microalbumin tests, or patients who satisfy clinical guidelines for Type II diabetes.
In step 802, the patient documents may be enriched to identify entities including clinical measurements and their values using, for example, one or more steps of process 500.
In step 804, the patient documents may be evaluated using the clinical guidelines of interest by evaluating the patient document entities/concepts and attributes according to the rules extracted from the clinical guidelines, and for example enriching the patient documents, descriptions of particular patients, or patient records, using the results of one or more constituent rules and the overall guideline. See also the discussion of evaluating clinical rules in connection with
In step 806, for each patient, a match score may be calculated to estimate whether the patient satisfies the clinical guideline. In certain embodiments, an evaluation of the confidence in the result of any individual clinical rule or overall clinical guideline (e.g., a statistical evaluation of the quality of the evidence or the inference based on the guideline) may be associated with the patient or a patient document.
In step 808, the patient and/or patient documents may be associated with a designation, such as a concept or attribute regarding a determination based on a given guideline. Process 800 may alternatively provide one or more lists of patients falling into various categories with respect to the clinical guideline (e.g., a list of patients meeting the clinical guideline and a list of patients not meeting the clinical guideline, or having a particular classification under the guideline).
In certain embodiments, data engine 106 may be configured to generate and compare cohorts of patients having a particular annotation or set of annotations. For example, process 800 may be used to identify one or more cohorts of patients having a particular condition as defined by a clinical guideline.
For example, a clinical guideline (or similar set of rules) may be used to identify a group of patients who are likely to have a condition but have not been diagnosed with the condition. Data engine 106 may be used to identify all patients within a population who have not been diagnosed with chronic kidney disease by analyzing claims documents for the population, and excluding patients who are associated with a claim for treatment of chronic kidney disease (using, for example, process 500 to enrich the claims documents and identify patients already treated for or diagnosed with chronic kidney disease). Next, laboratory data for the patient population may be analyzed using two clinical rules associated with diagnosing chronic kidney disease: rules defining out-of-range readings for eGFR and microalbumin tests, using, for example, process 800. Data engine 106 may be configured to execute each of these processes, to compare the second group of patients associated with out-of-range readings to the first group of patients who have already been diagnosed with chronic kidney disease, and to return the patients in the second group who are not also in the first group. Such a technique may be used to identify individuals with untreated or undiagnosed health issues who might benefit from proactive efforts to notify the patient or patient's practitioner and potentially provide additional care to the patients, rather than allowing care to be delayed until the next hospital or doctor visit.
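For illustration, the chronic kidney disease example reduces to a set comparison over the outputs of the two processes, as in the sketch below; the patient identifiers, laboratory values, and rule thresholds are illustrative assumptions rather than clinical guidance.
****
# Hypothetical enrichment outputs: patients with a CKD claim, and patients whose
# laboratory documents triggered the out-of-range eGFR / microalbumin rules.
diagnosed_or_treated = {"patient-002", "patient-007", "patient-019"}

lab_results = {
    "patient-002": {"egfr": 48, "microalbumin": 45},
    "patient-013": {"egfr": 52, "microalbumin": 38},
    "patient-021": {"egfr": 95, "microalbumin": 12},
}

def out_of_range(labs, egfr_floor=60, microalbumin_ceiling=30):
    """Simplified stand-ins for the two clinical rules on eGFR and microalbumin."""
    return labs["egfr"] < egfr_floor and labs["microalbumin"] > microalbumin_ceiling

suspected = {pid for pid, labs in lab_results.items() if out_of_range(labs)}
undiagnosed = suspected - diagnosed_or_treated
print(sorted(undiagnosed))   # flagged by laboratory data but never coded for CKD
****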
In another example, a set of rules executing via data engine 106 may be used to propose optimal post-hospital discharge care for a patient. Certain socioeconomic factors are predictive of whether a patient may be readmitted to a hospital shortly after receiving care at a hospital. Frequently, a subsequent readmission suggests that the patient did not receive adequate post-discharge health services. Accordingly, a proposed post-discharge care plan designed to minimize inadequate post-discharge services may be based on socioeconomic data, such as a credit history or credit score from a credit bureau such as Equifax, TransUnion, or Experian, and non-medical factors such as whether the patient lives alone, the geographic location of the patient's residence, and the patient's income. Data engine 106 may propose a care plan by (1) identifying one or more needed post-acute-care health services based on the patient's health conditions (as evident from documents and other records of the patient's healthcare, e.g., using concept relationships in semantic taxonomy 206), and (2) proposing post-acute care resources capable of handling the needed post-acute-care health services based on the patient's socioeconomic or psychosocial data (e.g., proposing geographically appropriate post-acute care resources such as home health aides, caregivers, or medical malpractice attorneys), including contact information and names of providers. In certain embodiments, such proposed post-acute care resources may be assigned a score and prioritized using factors including, for example, estimated cost, coverage by the patient's health plan, and geographical distance from the patient's residence.
In certain embodiments, the knowledge bases and rules used by step 508 of process 500 may include a set of rules configured for selecting proposed medical codes based on patient documents for a particular patient, using data engine 106. Medical codes may be used to document health services for a patient, and to estimate the risk level of a patient and a population of patients. Automated generation of proposed codes using data engine 106 to mine structured and unstructured patient data enables coding of patient conditions that would otherwise be missed using a manual coding process. Proposed codes may be represented as attributes or instances of entities in enriched data 108. Proposed codes may be based on (1) evidence supporting a particular diagnosis that is present in the enriched patient documents using, for example, the processes described above (e.g., process 500 and 800), (2) whether a code has already been associated with the patient during the current health plan year (e.g., based on a rule that if a condition continues to exist in the patient, it may be claimed once per health plan year), and (3) given multiple applicable codes, which code most accurately describes the patient's condition.
Certain embodiments of the user interfaces described herein facilitate a user's ability to review and generate documentation for medical codes found in a patient's medical record. Certain embodiments of the user interfaces described herein facilitate a user's ability to review and generate documentation for healthcare quality measures and care gaps found in a patient's medical record. Certain embodiments of the user interfaces described herein facilitate a user's ability to review and generate documentation for clinical (e.g., conditions, treatments) and non-clinical (e.g., patient ID, addresses, provider names, dates of service) entities that may be captured from a patient's chart. Certain embodiments of the user interfaces described herein facilitate a user's ability to review and generate documentation for clinical decision support prompts, for example identifying medical conditions, medications, treatments, care gaps, and other clinical information that is surfaced (e.g., as identified entities) along with associated evidence, that can support a clinician's ability to make a more informed clinical judgment regarding the patient's care plan. These features may improve the outcomes of a patient's care plan, and help with more efficient use of health care resources.
The proposed code in sub-panel 906a corresponds to an ICD-10 code for Parkinson's Disease, G20. The evidence for the proposed code in sub-panel 906a is shown in panel 902, which displays a “Patient Plan” document for the patient (i.e., the third of six documents as indicated by document selector 903). The Patient Plan document is scrolled to the location of the strongest or most important evidence underlying the proposed code in sub-panel 906a and identified using an evidence tag 908a—in this example, the evidence is a notation of “Parkinson's Disease (332.0)” in the Patient Plan. In certain embodiments, the most important evidence is the evidence most strongly suggesting a proposed code, or the evidence making the largest contribution to RAF. In certain embodiments, the evidence tag is displayed as highlighting, underlining, bolding, other emphasis, or using an overlaid icon. For example, evidence used to support a proposed code under a particular model may be identified using a particular type of emphasis, such as highlighting in a particular color corresponding to the model (e.g., yellow highlighting of evidence corresponding to proposed codes based on the HCC model). In certain embodiments, the user may cycle through one or more evidence tags 908 (e.g., 908a and 908b) to view each item of evidence underlying a proposed code/concept using a link, button, or other user interface control, and with each new current evidence tag, the relevant document in panel 902 is scrolled to the location of the current evidence tag. Document selector 903 may indicate both which numbered document is being displayed in panel 902 and which documents include evidence that has been identified to support the current proposed code (see, e.g., an indicator—here, a horizontal bar—positioned over the numbered document containing evidence supporting the code); a second type of emphasis in document selector 903 indicates the currently displayed document (e.g. highlighting of the current document number).
Each sub-panel 906 may include action controls for marking a preliminary or final status as part of a work flow: e.g., the option to accept (i.e., confirm the proposed code), reject the code, or mark the code for further review. Sub-panel 906a shows a “comments” drop-down control that allows the user to tag a proposed code with comments such as “Does not meet the M.E.A.T. Criteria”; “Not an active condition/Historical”; “Incorrect inference/not supported”; “Already billed for the current plan year”; or “other.” Sub-panel 906a may additionally permit entry of free-text comments to be associated with the code.
The sub-panels 906 further include user interface controls for cycling through a sequence of proposed codes (see up/down arrows in subpanel 906a), and panel 904 includes an additional user interface element 910 (e.g., the vertical line with selectable dots corresponding to different sub-panels 906) for cycling through a sequence of proposed codes. The user interface may further support keyboard commands for moving to the next or previous code. Panel 904 shows selectable peeks 912 of additional proposed code subpanels 906; upon selection, peeks 912 expand to present a corresponding sub-panel 906.
In certain embodiments, a panel 902 may be used to present various types of patient documents, for example, a patient's chart. Panel 904 may be used to present sub-panels 906 (and selectable peeks 912, e.g. peek 912a and peek 912b), such that each sub-panel 906/peek 912 presents a suspected disease or condition. Predictions resulting in suspected diseases or conditions may be based on the content of the patient's chart, based on one, two, three, or more predictive models. The evidence supporting a particular suspected disease may be highlighted in the patient's chart (shown in panel 902) in a manner that corresponds to the model on which the prediction is based. (E.g., for a suspected disease X, presented in a sub-panel 906, with a prediction based on an HCC model, all supporting evidence in the chart may be highlighted with a color associated with the HCC model, such as yellow or red highlighting.) In certain embodiments, the highlighting may indicate a distinction between evidence highlighted as supporting the in-focus or current proposed code/disease in a panel 906 as distinguished from evidence that is color coded under the same prediction model but supporting a different proposed code/disease—for example, highlighting border vs. no border, stronger or more opaque highlighting vs. faded or less opaque highlighting. The sub-panel 906 may present, e.g., a code, textual label, and description of the suspected condition, as well as information such as the date of service, rendering provider, page number within the chart, an input box for receiving comments, and a display for any additional evidence that has been attached to the suspected condition (e.g., following a sequence similar to the use of window 1102 and additional evidence 1202 as explained below). In certain embodiments, separate controls may be respectively provided for scrolling through or displaying, in sequence, (1) highlighted evidence in patient documents, (2) patient documents (e.g., 903), and (3) proposed codes/predicted conditions/identified concepts (e.g., up/down arrows in sub-panel 906; element 910).
In certain embodiments, user interface 900 will clearly show the patient's name and supporting healthcare information (e.g., date of birth, gender, primary provider, patient ID). In certain embodiments, it will further provide workflow elements for optimizing processing of patient data (e.g., interactive checkboxes for marking coding as complete, and for marking quality assurance (QA) as complete (and in some examples, text boxes or other user interface controls for receiving QA-related comments), for example to track progress with processing a set of patient records). In certain embodiments, QA user interface controls are only displayed after an encounter is marked as “coding completed” (where an “encounter” is, e.g., a grouping associated with a patient document or set of documents related to a patient care event). In certain embodiments, the user interface will provide a user control for accessing/viewing an original version of the document shown in panel 902 (e.g., a scanned PDF or raw text). In certain embodiments, the patient document in panel 902 and the current sub-panel 906 are synchronized—that is, as the user cycles through the sequence of proposed codes/suspected diseases, sub-panel 906 presents the current proposed code or suspected disease, and the chart or document presented in panel 902 is scrolled to the first or most important supporting evidence for the current code/disease. The most important supporting evidence may be the information providing the largest contribution to a score associated with the associated prediction model (e.g., HCC, RxHCC, and the like).
In certain embodiments, user interface 900 may be compatible with protected health information (PHI) best practices—for example, any display of the original document in panel 902 or a pop-up window may automatically close, and the information in panel 902 may refresh if the user selects a different patient in the user interface. This may avoid patient mismatch issues where a window or panel shows one patient's information while another area of the user interface shows data for another patient. In certain embodiments, a particular patient's data may be locked so that it cannot be altered via user interface 900 until the patient is unlocked via a special interface, for example by a user of the system having specified super user or administrator privileges.
In certain embodiments, user interface 900 may include a panel displaying already coded concepts—e.g., a listing of accepted codes for a patient or for a particular period of time for the patient. In certain embodiments, user interface 900 provides controls for accessing the next or previous patient information from a list of patients. In certain embodiments, the ordering of such a list of patients is based on the ordering of patients in a different user interface, such as user interface 1700 described below.
In the embodiment of user interface 1300 shown in
In certain embodiments, evidence tags 908 are presented as in-focus when they are used to support the current proposed code shown in sub-panel 906. In certain embodiments, evidence tags 908 are presented as in-focus when they represent the current tag 908 as selected using an evidence selector 2002. As shown in
The listing of patients in panel 1704 provides an overview of each patient 1710, including the current RAF (risk adjustment factor), the projected RAF if all proposed codes are accepted, and the projected RAF adjustment 1712 (i.e., the difference between the projected and current RAF). The listing in panel 1704 may be exported to a tab-separated text file, Excel spreadsheet, or other appropriate format. The listing may be sorted by any column; for example, the listing shown in
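As a non-limiting sketch, the projected RAF adjustment 1712 and the tab-separated export could be computed as follows; the per-code RAF weights and listing fields shown are hypothetical examples, not actual model coefficients.

```python
# Illustrative sketch of the projected RAF adjustment and TSV export for the
# patient listing. Weights and field names are hypothetical.
from typing import Mapping, Sequence
import csv
import io

def projected_raf_adjustment(current_raf: float,
                             proposed_codes: Sequence[str],
                             raf_weights: Mapping[str, float]) -> float:
    """Difference between the projected RAF (all proposed codes accepted) and the current RAF."""
    projected = current_raf + sum(raf_weights.get(code, 0.0) for code in proposed_codes)
    return projected - current_raf

def export_listing_tsv(rows: Sequence[dict]) -> str:
    """Export the patient listing to a tab-separated string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

adj = projected_raf_adjustment(1.12, ["E11.9", "I50.22"], {"E11.9": 0.105, "I50.22": 0.331})
print(export_listing_tsv([{"patient": "patient-17", "current_raf": 1.12,
                           "projected_raf": round(1.12 + adj, 3),
                           "raf_adjustment": round(adj, 3)}]))
```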
In certain embodiments, one or more computing devices 2306 host a server 2308, such as an HTTP server, and an application 2312 that implements aspects of the data engine 106. Knowledgebases such as a code translation knowledgebase or other databases may be stored in data store 2314. Application 2312 may support an Application Programming Interface (API) 2310 providing external access to methods for accessing data store 2314. In certain embodiments, client applications running on user devices 2302 may access API 2310 via server 2308 using protocols such as HTTP or FTP.
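For illustration, a client application on a user device 2302 might query API 2310 over HTTP as in the following sketch; the route, parameters, and response format are assumptions for illustration and not a specification of the actual API.

```python
# Illustrative sketch of a client calling the application's HTTP API to query
# the data store. The endpoint path and response shape are hypothetical.
import json
import urllib.request

def fetch_proposed_codes(base_url: str, patient_id: str) -> list:
    """Request the proposed codes for a patient from the application server."""
    url = f"{base_url}/api/patients/{patient_id}/proposed-codes"  # hypothetical route
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (assuming a local deployment of the HTTP server):
# codes = fetch_proposed_codes("http://localhost:8080", "patient-123")
# print(codes)
```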
Below are set out hardware (e.g., machine) and software architectures that may be deployed in the systems described above, in various example embodiments.
RF module 2406 may include a cellular radio, Bluetooth radio, NFC radio, WLAN radio, GPS receiver, and antennas used by each for communicating data over various networks, such as a telecommunications network.
Audio processor 2408 may be coupled to a speaker 2410 and microphone 2412. Touch sensitive display 2416 receives touch-based input. Other input modules or devices 2418 may include, for example, a stylus, voice recognition via microphone 2412, or an external keyboard.
Accelerometer 2420 may be capable of detecting changes in orientation of the device, or movements due to the gait of a user. Optical sensor 2422 may sense ambient light conditions, and acquire still images and video.
System 2500 includes a bus 2506 or other communication mechanism for communicating information, and a processor 2504 coupled with the bus 2506 for processing information. Computer system 2500 also includes a main memory 2502, such as a random access memory or other dynamic storage device, coupled to the bus 2506 for storing information and instructions to be executed by processor 2504. Main memory 2502 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 2504.
System 2500 includes a read only memory 2508 or other static storage device coupled to the bus 2506 for storing static information and instructions for the processor 2504. A storage device 2510, which may be one or more of a hard disk, flash memory-based storage medium, magnetic tape or other magnetic storage medium, a compact disc (CD)-ROM, a digital versatile disk (DVD)-ROM, or other optical storage medium, or any other storage medium from which processor 2504 can read, is provided and coupled to the bus 2506 for storing information and instructions (e.g., operating systems, applications programs and the like).
Computer system 2500 may be coupled via the bus 2506 to a display 2512 for displaying information to a computer user. An input device such as keyboard 2514, mouse 2516, or other input devices 2518 may be coupled to the bus 2506 for communicating information and command selections to the processor 2504.
The processes referred to herein may be implemented by processor 2504 executing appropriate sequences of computer-readable instructions contained in main memory 2502. Such instructions may be read into main memory 2502 from another computer-readable medium, such as storage device 2510, and execution of the sequences of instructions contained in the main memory 2502 causes the processor 2504 to perform the associated actions. In alternative embodiments, hard-wired circuitry or firmware-controlled processing units (e.g., field programmable gate arrays) may be used in place of or in combination with processor 2504 and its associated computer software instructions to implement the invention. The computer-readable instructions may be rendered in any computer language including, without limitation, Python, Objective C, C#, C/C++, Java, Javascript, assembly language, markup languages (e.g., HTML, XML), and the like. In general, all of the aforementioned terms are meant to encompass any series of logical steps performed in a sequence to accomplish a given purpose, which is the hallmark of any computer-executable application. Unless specifically stated otherwise, it should be appreciated that throughout the description of the present invention, use of terms such as “processing”, “computing”, “calculating”, “determining”, “displaying”, “receiving”, “transmitting” or the like, refers to the action and processes of an appropriately programmed computer system, such as computer system 2500 or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within its registers and memories into other data similarly represented as physical quantities within its memories or registers or other such information storage, transmission or display devices.
The foregoing description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” and the like are used merely as labels, and are not intended to impose numerical requirements on their objects.
Claims
1. A computing device comprising a display screen, the computing device being configured to:
- display on the display screen a current code in a code panel, wherein the current code was automatically determined based on one or more of: evidence supporting a particular diagnosis that is present in the plurality of patient documents, whether a code has already been associated with the patient during the current health plan year, or, given multiple applicable codes, identification of the code that most accurately describes the patient's condition;
- update the display on the display screen of a document panel showing a first portion of one or more of the plurality of patient documents by automatically scrolling to a second portion of the one or more of the plurality of patient documents, wherein the second portion is associated with a first evidence tag demarking evidence supporting the current code;
- detect a user selection of a third portion of the one or more of the plurality of patient documents;
- associate the third portion with the current code and creating a second evidence tag based on the third portion and an evidence category;
- update the display of the document panel to present the second evidence tag overlaying the third portion in accordance with the evidence category.
2. The computing device of claim 1, further comprising: display on the display screen a plurality of code peeks, each code peek corresponding to an additional code for the patient that is not the current code;
- detect a user selection of one of the plurality of code peeks corresponding to a selected code;
- update the display on the display screen of the document panel by automatically scrolling to a fourth portion of the one or more of the plurality of patient documents, wherein the fourth portion is associated with a third evidence tag demarking evidence supporting the selected code.
3. The computing device of claim 1, wherein the document panel comprises two or more sub-panels showing two or more portions of one or more patient documents.
4. The computing device of claim 3, further comprising:
- provide a document selector on the display screen in each of the two or more sub-panels for receiving user instructions to display a particular document of the one or more patient documents.
5. The computing device of claim 1, wherein evidence tags underlying the current code are shown with emphasis, and evidence tags not underlying the current code are shown without emphasis.
6. The computing device of claim 1, further comprising:
- provide on the display screen an evidence selector for receiving user instructions to navigate through evidence that is associated with an evidence tag that corresponds to the current code.
7. The computing device of claim 1, wherein the evidence category concerns additional evidence and associating the third portion with the current code comprises presenting a control including an option to attach additional evidence and receiving a selection of the option to attach additional evidence.
8. The computing device of claim 1, further comprising:
- upon detection of a selection of a fourth portion of the one or more of the plurality of patient documents, provide on the display screen a control for adding a new code associated with the fourth portion and the patient.
9. The computing device of claim 1, further comprising:
- provide on the display screen a document selector in the document panel, wherein each document associated with the document selector contains evidence supporting at least one code associated with the patient.
10. The computing device of claim 1, wherein the current code is an ICD-9, ICD-10, RxNorm, or CPT-4 code.
11. A method for facilitating evaluation and assignment of codes based on a plurality of patient documents associated with a patient, the method comprising: displaying a current code in a code panel, wherein the current code was automatically determined based on one or more of: evidence supporting a particular diagnosis that is present in the plurality of patient documents, whether a code has already been associated with the patient during the current health plan year, or, given multiple applicable codes, identification of the code that most accurately describes the patient's condition;
- updating the display of a document panel showing a first portion of one or more of the plurality of patient documents by automatically scrolling to a second portion of the one or more of the plurality of patient documents, wherein the second portion is associated with a first evidence tag demarking evidence supporting the current code;
- detecting a user selection of a third portion of the one or more of the plurality of patient documents;
- associating the third portion with the current code and creating a second evidence tag based on the third portion and an evidence category;
- updating the display of the document panel to present the second evidence tag overlaying the third portion in accordance with the evidence category.
12. The method of claim 11, further comprising:
- displaying a plurality of code peeks, each code peek corresponding to an additional code for the patient that is not the current code;
- detecting a user selection of one of the plurality of code peeks corresponding to a selected code;
- updating the display of the document panel by automatically scrolling to a fourth portion of the one or more of the plurality of patient documents, wherein the fourth portion is associated with a third evidence tag demarking evidence supporting the selected code.
13. The method of claim 11, wherein the document panel comprises two or more sub-panels showing two or more portions of one or more patient documents.
14. The method of claim 13, further comprising:
- providing a document selector in each of the two or more sub-panels for receiving user instructions to display a particular document of the one or more patient documents.
15. The method of claim 11, wherein evidence tags underlying the current code are shown with emphasis, and evidence tags not underlying the current code are shown without emphasis.
16. The method of claim 11, further comprising:
- providing an evidence selector for receiving user instructions to navigate through evidence that is associated with an evidence tag that corresponds to the current code.
17. The method of claim 11, wherein the evidence category concerns additional evidence and associating the third portion with the current code comprises presenting a control including an option to attach additional evidence and receiving a selection of the option to attach additional evidence.
18. The method of claim 11, further comprising:
- upon detection of a selection of a fourth portion of the one or more of the plurality of patient documents, providing a control for adding a new code associated with the fourth portion and the patient.
19. The method of claim 11, further comprising:
- providing a document selector in the document panel, wherein each document associated with the document selector contains evidence supporting at least one code associated with the patient.
20. The method of claim 11, wherein the current code is an ICD-9, ICD-10, RxNorm, or CPT-4 code.
Type: Application
Filed: Apr 13, 2020
Publication Date: Jul 30, 2020
Inventors: Niraj Katwala (San Francisco, CA), Shahyan Currimbhoy (San Francisco, CA), Dean Stephens (San Francisco, CA)
Application Number: 16/847,396