Requirements for Re-Platforming

* 15-20 users on laptops, sometimes working from home, sometimes not connected to the network due to lack of [[wifi]]
* About 2-2.5GB of data altogether as stored in various MS Access DBs (size may vary on other platforms)
* Uses a locally customized and optimized data entry front end with facilitation/automation; using a generic tool would slow down collection and might require additional staffing to do the same work
* Has several highly customized front-ends that facilitate efficient and low-error data entry and processing
** facilitates data entry from a daily dump received from ADT (and other intermittent dumps)
* Data we store is in [[Auto Data Dictionary]]
** it is currently stored in a relational [[CCMDB Data Structure]] - the data could be stored differently, but that would require large changes
* We have (and continuously improve) [[Data Integrity Checks]]
* Number of fields not necessarily relevant because of [[Entity–attribute–value model of the L Tmp V2 table]]
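The EAV point above is why raw field counts matter less than row counts: each data item is a row, so adding a "field" adds no columns. A minimal sketch of the idea, using an in-memory SQLite table (table and column names here are illustrative, not the actual [[CCMDB Data Structure]] or L_Tmp_V2 schema):

```python
import sqlite3

# Entity-attribute-value layout: each collected data item is one row,
# so new "fields" are just new attribute names -- no schema change needed.
# Table and column names are illustrative, not the real L_Tmp_V2 schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tmp_eav (record_id INTEGER, item TEXT, value TEXT)")
con.executemany(
    "INSERT INTO tmp_eav VALUES (?, ?, ?)",
    [(1, "apache_hr", "98"), (1, "apache_temp", "37.2"), (2, "apache_hr", "110")],
)

def items_for(record_id):
    """Return all collected items for one record as a dict."""
    rows = con.execute(
        "SELECT item, value FROM tmp_eav WHERE record_id = ?", (record_id,)
    )
    return dict(rows)
```

A new platform would only need to handle this one narrow table shape efficiently, whatever the eventual number of distinct items.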
** Allow modification of individual data items in individual records (5)
** Allow modification of multiple records via queries, programming and/or automation (5)
** Needs to work with poor or non-existent [[wifi]] (5)
*** The current tool is locally installed and allows collection without requiring network access; if a proposed new tool were cloud-based, that could be problematic for collectors who work from home or from locations where wifi is spotty.
** Team-maintainable real-time [[Data Integrity Checks]] ([[Cross Check Engine]] implementation could provide team maintainability)
{{DL |
* What priority? [[User:Ttenbergen|Ttenbergen]] 00:01, 31 March 2025 (CDT)
* Difficult to say. Right now we have wifi access everywhere; it used to be sketchy at the death desk, but we are no longer able to review charts there. SBGH does not have wifi in the basement where medical records is located. Would we be able to use the database offline, like we currently do? [[User:Lkaita|Lisa Kaita]] 20:59, 23 March 2025
** That was the point I was making. We also have people working from home on slow network connections, so a cloud-based tool could be problematic.
}}
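One common way to meet the offline requirement discussed above is a local-first store with deferred sync: entries are saved locally regardless of connectivity and pushed to the central database when a connection is available. A rough sketch of the pattern, not a design commitment (class and callback names are illustrative):

```python
# Local-first collection with deferred sync: saving always succeeds
# locally; syncing retries whenever a connection is available.
# Names here are illustrative, not an actual proposed design.
class LocalFirstStore:
    def __init__(self, send_func):
        self.pending = []           # records not yet sent centrally
        self.send_func = send_func  # callable that may raise ConnectionError

    def save(self, record):
        self.pending.append(record)  # works even with no wifi

    def sync(self):
        """Try to push pending records; keep any that fail to send."""
        still_pending = []
        for record in self.pending:
            try:
                self.send_func(record)
            except ConnectionError:
                still_pending.append(record)
        self.pending = still_pending
```

A cloud-only tool without this kind of local queue is exactly what the discussion above flags as problematic for the SBGH basement and slow home connections.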


=== Data control and possibly transfer ===
* We have a "[[sending]]" process which currently includes both the movement of data from the locally-installed database to the central one and the setting of the [[RecordStatus]] field. That field encodes whether the collector retains "control" of the record or has handed it off to [[#Data processing]]; the collector retains control of "incomplete" records until they "complete" the record, which triggers mandatory [[cross check]]s for both final and [[Minimal Data Set]] data that will prevent completion unless passed.
* A new platform would need to provide this functionality:
** Make incomplete and complete data available to the [[#Data processing]] and [[#Data analysis]] stages (5)
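The completion gate described above can be pictured as a status transition guarded by the mandatory cross checks: the [[RecordStatus]] only advances if every check passes. A minimal sketch (check names, field names and status values are illustrative, not the actual implementation):

```python
# Sketch of the "complete" transition: RecordStatus may only advance
# when every mandatory cross check passes. Each check returns an error
# message or None. Names are illustrative, not the real check set.
def complete_record(record, final_checks):
    errors = [msg for check in final_checks if (msg := check(record))]
    if errors:
        return errors  # completion refused; collector keeps control
    record["RecordStatus"] = "complete"
    return []

def check_has_dispo(record):
    """Hypothetical example check: a disposition must be recorded."""
    return None if record.get("dispo") else "missing disposition"
```

The same gate structure also describes the later vetting step, where passing the centralized checks moves the status onward again.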
* This would replace our current [[CFE]] Access front-end and would need to
** Maintain the current [[Data Processing]] functionality for the [[data processor]], who often works remotely (5)
** Maintain ability to run the integrity checks performed during [[Centralized data Vetting Process]], leading to the [[RecordStatus]] field being set to "vetted" if passed (5)
** Maintain the ability to browse, search, sort, filter and update (add, update, delete) the data interactively, including data validation (5)
** Allow modification of individual data items in individual records (5)
** Allow modification of multiple records via queries, programming and/or automation (5)
** Maintain a user interface that has the general look and feel of the current one (3)


=== Data analysis ===
* Allowing our database personnel to transfer data into and out of the database, such as an export to file or import from file, ad hoc (5)
* Retain the ability to analyze the data using other tools, including but not limited to SAS (5)
* Currently any edits to the data are delegated to [[#Data processing]]; the ability to update is not required at this stage, though it might be good to have (this limitation is not intentional but due to process limitations)
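The ad-hoc export requirement above amounts to dumping selected records to a file that other tools (SAS, Excel, etc.) can read. A minimal CSV sketch, with illustrative field names (the real export would draw on the actual data structure):

```python
import csv
import io

def export_records(records, fieldnames):
    """Write a list of record dicts to CSV text that SAS etc. can import.

    Field names are supplied by the caller; nothing here assumes the
    real CCMDB field list.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

The matching import direction is the same pattern in reverse (`csv.DictReader` over a file chosen ad hoc by database personnel).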


=== Ongoing Improvements ===
* The ability to do the following:
** add / remove / change fields and tables in the data structure (5)
** update the user interfaces to incorporate these data changes (5)
** add / remove / change data validation and [[cross check]]s (5)
* For our database personnel to do these changes independently following a reasonable change management process (4)
{{Discuss | let's talk about this... are these 5s? They are to me.}}


=== Miscellaneous items ===
* Maintaining the back end data format/structure as much as possible (3)
* Maintaining our various "Created_*" queries/generated data functionality that provides the APACHE score and individual element scores, the Charlson score, etc.
* Ability, in future, to expand the capabilities of the databases by linking the data to other data obtained automatically -- e.g. Canadian Blood Services data about blood transfusions (4)
** Ability for our team to build the ETL to manage these updates, rather than rely on other teams (4)
{{Discuss | I think we would have this covered technically with the CCMDB team's ability to edit data, queries, automation and the front end, so this item needs to cover the governance portion of our ability (permission?) to do this. How should we paraphrase it? [[User:Ttenbergen|Ttenbergen]] 14:33, 17 March 2025 (CDT)}}
 
**Technical ability, in future, to link our database with the Shared Health Datamart (or name-of-the-day) (?)
* Ability for our team to perform updates and maintenance on the data structure and interface (4)
{{Discuss |
* What would be the priority for this?
* Technical only, because governance and permissions are a separate issue we would need to address; not with SH, but with our service providers for any solution.
* Let's talk about this; this is where SH data will live, and if we can't interact with it in the future the utility of our data will be greatly reduced. [[User:Ttenbergen|Ttenbergen]] 14:33, 17 March 2025 (CDT) }}
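The "Created_*" requirement in this section is essentially derived columns computed from stored items. A sketch of one derived score in that style; the condition weights below are placeholders for illustration only, not the actual Charlson or APACHE definitions used by the database:

```python
# Derived-data sketch in the style of the "Created_*" queries: compute a
# comorbidity-style score from coded items on a record.
# WEIGHTS values are placeholders, NOT the real Charlson/APACHE weights.
WEIGHTS = {"diabetes": 1, "chf": 1, "metastatic_ca": 6}

def created_score(record_items):
    """Sum the weight of every coded condition present on the record."""
    return sum(WEIGHTS.get(item, 0) for item in record_items)
```

Whatever the new platform is, it needs an equivalent place to keep these derivations so they stay maintainable by our own team.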


== Related articles ==




[[Category:Re-platforming]]