Re-analysis and generation of Overstay2 model

From CCMDB Wiki
Latest revision as of 15:43, 12 August 2025

This page is about the development of the model for generating scores/colours for Project Overstay2. Because our data collection and the healthcare system have changed since the first iteration, we did a re-analysis and generation of the Overstay2 model, resulting in the Overstay2 scoring models that generate the Overstay2 colour. Also see the Overstay2 Overview.

Defining the contributing factors data

The model depends on a regression analysis of a number of possible factors in our regularly collected data. Our data structure had changed since the original project, so we cleaned up our definitions, resulting in the Data definition for factor candidates for the Overstay2 project.

Still needs:
  • considerations
  • values we considered and rejected
  • minimize duplication of Data definition for factor candidates for the Overstay2 project; things that users of the data need to know going forward need to live there, while decisions taken that don't affect ongoing process should be documented here.

Model dataset and date range

  • Dataset: We used the file 2025-2-3_13.56.31_Centralized_data.accdb as a basis for the project. A copy for future reference is at
    • \\ad.wrha.mb.ca\WRHA\HSC\shared\MED\MED_CCMED\Julie\MedProjects\Overstay_Project_2025
  • Reference Admit DtTm: We based the date range on the first medicine admit date during a Hospitalization, based on the earliest Boarding Loc dttm.
  • Dataset inclusion criteria (all/and of the following):
    • Reference Admit DtTm >=2020-11-01 and <2025-01-01
    • RecordStatus = Vetted
    • final dispo of the Hospitalization (as defined in the Data definition for factor candidates for the Overstay2 project) is to a destination outside of the hospital of the admission (can be to another hospital)
    • HOBS: include the record only if:
      • the first medicine admission during a hospitalization is on a HOBS unit, and
      • there is a Transfer_Ready_Dttm associated with that unit, and
      • the patient is discharged from that unit to a destination outside of the hospital of the admission (can be to another hospital)
  • This resulted in a dataset with the following:
    • Total hospitalizations: 42,078
Site | Data Set   | Total  | Overstay >= 10d | Overstay < 10 days
All  | All        | 42,078 | 1,741 (4.1%)    | 40,337 (95.9%)
All  | Training   | 21,054 | 859 (4.1%)      | 20,195 (95.9%)
All  | Validation | 21,024 | 882 (4.2%)      | 20,142 (95.8%)
HSC  | All        | 16,813 | 616 (3.7%)      | 16,197 (96.3%)
HSC  | Training   | 8,371  | 295 (3.5%)      | 8,076 (96.5%)
HSC  | Validation | 8,442  | 321 (3.8%)      | 8,121 (96.2%)
SBGH | All        | 13,762 | 398 (2.9%)      | 13,364 (97.1%)
SBGH | Training   | 6,905  | 204 (3.0%)      | 6,701 (97.0%)
SBGH | Validation | 6,857  | 194 (2.8%)      | 6,663 (97.2%)
GGH  | All        | 11,503 | 727 (6.3%)      | 10,776 (93.7%)
GGH  | Training   | 5,778  | 360 (6.2%)      | 5,418 (93.8%)
GGH  | Validation | 5,725  | 367 (6.4%)      | 5,358 (93.6%)
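As a plain sketch only (the actual implementation is the SAS/Access code referenced below, and the field names here are invented for illustration), the inclusion criteria above amount to a record filter like:

```python
from datetime import datetime

# Hypothetical record structure; these are NOT the actual
# Centralized_data.accdb column names.
def include_record(rec):
    """Apply the dataset inclusion criteria (all must hold)."""
    ref_admit = rec["reference_admit_dttm"]
    in_range = datetime(2020, 11, 1) <= ref_admit < datetime(2025, 1, 1)
    vetted = rec["record_status"] == "Vetted"
    # final dispo is to a destination outside the admitting hospital
    left_hospital = rec["final_dispo_outside_hospital"]
    return in_range and vetted and left_hospital

records = [
    {"reference_admit_dttm": datetime(2021, 3, 5),
     "record_status": "Vetted", "final_dispo_outside_hospital": True},
    {"reference_admit_dttm": datetime(2019, 6, 1),  # before date range
     "record_status": "Vetted", "final_dispo_outside_hospital": True},
]
dataset = [r for r in records if include_record(r)]
```

The HOBS-specific conditions would be additional clauses of the same form, checked against the unit and Transfer_Ready_Dttm fields.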

The SAS code defining this dataset can be found in S:\MED\MED_CCMED\Julie\MedProjects\Overstay_Project_2025\Data\prepdata_7Feb2025.sas

The CFE code defining this dataset still needs to be set up by Tina...

Specific decisions were discussed and made.   

JM had found n=226 Vetted cases with Last discharge DtTm (in ICU or Med) after 2024 until Feb 3, 2025. Only 13 did not leave own site, 19 expired, 194 left the site. Of the 213, some are long-stay patients admitted Aug=1, Sept=3, Oct=8, Nov=18, Dec=196. (DR agreed in the meeting with JM Feb 10.)

  • First Med Admits who were RecordStatus = incomplete but with Dispo DtTm present are excluded.
  • First Med Admits who were still in the unit are excluded (ie no Dispo DtTm)
  • First Med Admits who were RecordStatus = vetted are included.
  • Deceased should be included: I think there was talk about excluding these; I don’t think that is valid. We don’t know when they arrive that they will die, and if they die after becoming transfer ready that is still an overstay we could have avoided.
  • Discharge to or Previous Location = Hospice should be included – for the same reason we would include PCH.
  • Palliative patients should be included
    • because our definition “Palliative care” (ICD10 Z51.5) doesn’t imply death is imminent. Palliative patients were excluded before, but our definition has changed, and how this appears to be handled now has as well. Also, they may be waiting in hospital for a hospice, so again, that’s overstay.
    • Discharged to STB Palliative Care - -included (DR agreed in the meeting with JM feb10)
  • AMA – include these.
    • Initial thought was that AMA implies they were not discharge ready, but it could also include those who were sick of waiting for a PCH and walked out. They might just be someone who waited for 2 weeks while dispo ready and eventually ran away because they did not want to wait for home care etc. any longer. But can someone be transfer ready and still leave AMA? Yes, e.g. when they were transfer ready but the discharge took so long that they no longer are and can now leave AMA again.
      • JM found 3061 dispo AMA (2810 wo TR_dt, 251 w TR_Dt)
  • Dispo TCU/TCE – include, and treat as discharge from this hospital
  • Dispo HSC Lennox Bell/Institution NOS – treat as we would back-to-PCH/home
  • Dispo another ward within WPG (LAU at CON, OAKS, VIC)? – include, and treat as discharge from this hospital
  • Unknown disposition at discharge on the last admission – those transferred to another service (ICU/OR/etc. within the same hospital) are already excluded with RecordStatus = "incomplete" and by only including if (1c)
  • Dispo Transfers to different hospital ICU within Winnipeg – include
  • Transfers outside WPG – include and treat as if discharged
  • Overstay 5 to 9 days - included as normal (Rodrigo excluded these from model building)
  • A null Tr_DtTm will be allowed
  • This defines “hospitalization” as per-site, so if the patient is moved to subsequent medicine wards at a different hospital there will be a new record
  • EMIP / TR_DtTm during ED portion of visit: treat this as you would on the ward. The first TR DtTm at ER will be taken regardless of whether there is a second TR dttm when the patient is moved to a Med ward (DR agreed in the meeting with JM Feb 10)

Model development Inclusion/Exclusion of "Green" admissions

If we plan to generate overstay colours like the last time, then the one group who would not have the model applied to them would be the “greens”, since the decision tree turns them green before the model would be applied. If we were able to determine who these greens would have been, would we want to exclude them from the model?

There is no way to exclude the greens from the model, so we won’t try.

Analysis and model generation

Parameter candidates

See Data definition for factor candidates for the Overstay2 project for the definitions.

Location Grouping considerations

  • When I looked at your code that breaks out Location / living arrangement into groupings and measures it seemed to me that it was mixing up data cleaning and validation with measure definition and it might be good to keep those separate. Cleaning and validation should apply to the data in general, not just this model, no? It would make sense to document the steps taken and things found and remedies implemented on this page, but having them part of the definition seems problematic. I think I sent that as an email, but I think it would be better to track this on the wiki to have a trail for the decisions. Ttenbergen 12:03, 25 June 2025 (CDT)

reference/examples for links

  • leaving these here as examples how to link to the definitions on Data definition for factor candidates for the Overstay2 project. The currently used definition should live there, but changes and reasons should probably live here. We can change that format, talk to me if needed. Ttenbergen 11:35, 25 June 2025 (CDT)
  • Age
  • PCH/Chronic Care
  • other Location / living arrangement
  • ADL components and
    • ADL_Adlmean_NH - among those who came from PCH/CHF
    • ADL_Adlmean_age - interaction with Age
  • Glasgow Coma Scale
  • Location / living arrangement Postal Code (also see Location Grouping for Postal Code is N/A)
  • Charlson Diagnoses (Categories and Total Score)
    • MI, CHF, PVD, CVA, Pulmonary, Connective, Ulcer, Renal
    • Charlson Comorbidity Index
    • Charlson Score * NH - among those who came from PCH/CHF
  • Diagnoses that might prevent/delay meeting PCH/Home Care criteria
  • Homeless

Location Grouping for Postal Code is N/A

Analysis notes: JM found 2,759 records with postal code N/A; JM used the R_Province, Pre_inpt_Location, and Previous Location fields instead to define the 5 categories. JM also encountered codes with no match in the Postal_Code_Master List but was able to categorize them based on the first 3 characters (N=273) - list given to Pagasa to add. (DR agreed in the meeting with JM Feb 10.)
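The fallback logic in these notes could look roughly like the following sketch; the function, the field values, and the master-list entries are illustrative assumptions, not the actual Postal_Code_Master List structure:

```python
# Sketch: when Postal Code is N/A, fall back to province/location fields;
# when a code has no exact match in the master list, try matching on its
# first 3 characters (the forward sortation area).
def postal_region(postal_code, province, master_list):
    if postal_code in (None, "", "N/A"):
        # fall back to R_Province / Pre_inpt_Location style information
        return {"MB": "Rest MB", "ON": "Northwestern Ontario"}.get(province, "Rest")
    if postal_code in master_list:
        return master_list[postal_code]
    # FSA-level (first 3 characters) match, else the catch-all category
    return master_list.get(postal_code[:3], "Rest")

# Invented master list entries for illustration
master = {"R2H 0A1": "Winnipeg - R2*/R3*", "P9N": "Urban P9N (Kenora)"}
```

Usage: `postal_region("P9N 1A2", "ON", master)` falls through to the FSA match, while a missing code is categorized from the province field.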

Dataset split into training and validation data

We separated the population into two datasets based on the odd/even status of the last digit of the Chart number:

  • Even: Training set
  • Odd: Validation set
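The odd/even split above can be sketched as follows (the chart numbers are invented):

```python
# Split hospitalizations into training/validation by the parity of the
# last digit of the chart number: even -> training, odd -> validation.
def split_by_chart_number(charts):
    training = [c for c in charts if int(str(c)[-1]) % 2 == 0]
    validation = [c for c in charts if int(str(c)[-1]) % 2 == 1]
    return training, validation

train, valid = split_by_chart_number([100234, 100237, 100240, 100241])
```

Because the last digit is effectively arbitrary, this gives a roughly 50/50 split, matching the 21,054 / 21,024 counts in the table above.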

Model generation and testing

See \\ad.wrha.mb.ca\WRHA\HSC\shared\MED\MED_CCMED\Julie\MedProjects\Overstay_Project_2025 and emails between Julie, Tina and Dan Roberts ~2025-02

Decision on a model

  • For each site's training set and validation set, perform a chi-square test for independence between the variable OS (Overstay >= 10 days vs Overstay < 10 days) and each factor listed in Data definition for factor candidates for the Overstay2 project, to identify the factors that may individually affect overstay.
  • Training data set - the methodology to find the best model involves:
    • Basic plan for selecting the variables for the model:
      • Fit a logistic model with OS as the dependent variable, beginning with the independent variables suggested by the univariable analysis above.
      • Then perform a multivariable analysis using all independent variables (full model) and select via a stepwise procedure with both forward and backward selection.
      • Examine the importance of each included variable based on the p-value of its coefficient.
      • Variables not contributing to the model are eliminated and a new model is fitted. The process of deleting, refitting and verifying continues until it appears that all important variables are included.
    • Assess the adequacy of the model both in terms of the individual variables and its overall fit by the following:
      • Estimated coefficients with p-values < 0.05, or with clinical relevance and p-values higher than or close to 0.05, are included in the model.
      • The association of the predicted probabilities and observed responses is measured by the Concordance (C) index and the area under the curve (AUC) between the true positive rate (sensitivity) and false positive rate (1-specificity). A value > 0.5 implies ability to discriminate the positive and negative outcomes, while a value of 1 implies perfect classification. This quantity indicates how well the model ranks predictions.
      • The Hosmer-Lemeshow goodness-of-fit test is used to assess how well the logistic regression model fits the data. A high p-value (usually > 0.05) means the model fits well, while a low p-value (≤ 0.05) indicates poor fit of the model to the data.
  • Validation data set involves:
    • Using the candidate models from the training data set, fit the model on the validation data set.
    • From the predicted values, determine the Concordance (C) index and the AUC between the true positive rate (sensitivity) and false positive rate (1-specificity). These should yield values closer to 1.
    • Group the predicted data into deciles (10 groups) and, for each group, compare the observed number of events to the expected number of events predicted by the model. The sum over these 10 groups, a chi-square statistic with 8 degrees of freedom, must have p-value > 0.05 to denote good fit.
  • If both the training data set and the validation data set give good results in all tests, then the model is a candidate for selection. If there is more than one candidate model, the one with more clinical relevance is chosen.
  • This resulted in the Overstay2 scoring models by site.
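Two of the adequacy checks described above, the Concordance (C) index and the decile-based observed-vs-expected comparison, can be sketched in plain Python. The real analysis was done in SAS; these are illustrative implementations only, and the decile function returns the raw chi-square statistic without the p-value lookup.

```python
def c_index(labels, probs):
    """Concordance: fraction of (event, non-event) pairs in which the
    event got the higher predicted probability (ties count half)."""
    pairs = concordant = 0.0
    for li, pi in zip(labels, probs):
        for lj, pj in zip(labels, probs):
            if li == 1 and lj == 0:
                pairs += 1
                if pi > pj:
                    concordant += 1
                elif pi == pj:
                    concordant += 0.5
    return 0.5 if pairs == 0 else concordant / pairs

def hosmer_lemeshow_groups(labels, probs, groups=10):
    """Sort by predicted probability, cut into groups, and accumulate
    (observed - expected)^2 scaled by the group variance; returns the
    chi-square statistic (compare against chi-square with groups-2 df)."""
    ranked = sorted(zip(probs, labels))
    size = max(1, len(ranked) // groups)
    chi2 = 0.0
    for start in range(0, len(ranked), size):
        chunk = ranked[start:start + size]
        observed = sum(label for _, label in chunk)
        expected = sum(prob for prob, _ in chunk)
        n = len(chunk)
        p_bar = expected / n
        if 0 < p_bar < 1:
            chi2 += (observed - expected) ** 2 / (n * p_bar * (1 - p_bar))
    return chi2
```

A model that ranks every overstay above every non-overstay gives a C index of 1.0; a model no better than chance gives about 0.5.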

Decision on a probability threshold

The predictive models we established are used to stratify the patient population for different Overstay2 processes on the units to reduce discharge delay. Details about establishing a threshold for the probabilities of the Overstay2 scoring models are in

Related articles: