Requirements for Re-Platforming
This is an index for content relevant to the U of M IT team in proposing a solution for hosting our database.
Repeated failure points we should address before going too far
We have had many failed attempts at re-platforming. The technical change would be tedious but doable; the sticking points have always been the following, so we should discuss these before planning much further.
- governance of any implemented system
- data ownership
- support model
- our team's continued ability to change this as needed
Relatively Hard Facts
- 15-20 users on laptops, sometimes working from home, sometimes not connected to the network due to lack of wifi
- About 2-2.5GB of data altogether as stored in various MS Access DBs (size may vary on other platforms)
- Has several highly customized front-ends that facilitate efficient and low-error data entry and processing
- facilitates data entry from a daily dump received from ADT (and other intermittent dumps)
- Data we store is in Auto Data Dictionary
- It is currently stored in the CCMDB Data Structure; the data could be stored in a different structure, but that would require large changes
- We have (and continuously improve) Data Integrity Checks
- The number of fields is not necessarily relevant because the L Tmp V2 table uses an Entity–attribute–value model (see the sketch after this list)
- Implemented as a system of intermittently linked MS Access databases with a fair bit of batch file and other automation facilitating their use and maintenance
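The Entity–attribute–value point above matters for sizing and schema discussions, so here is a minimal sketch of what that layout implies. The table and column names (L_TmpV2, record_id, item, val) are illustrative assumptions, not the actual schema, and sqlite3 stands in for whichever back end is chosen; the point is that each row stores one record/field/value triple, so adding a field is a data change rather than a schema change.

```python
import sqlite3

# Illustrative EAV layout; the real L Tmp V2 schema and names may differ.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE L_TmpV2 (record_id INTEGER, item TEXT, val TEXT)")

# Three "fields" for record 1001 are three rows, not three columns.
con.executemany(
    "INSERT INTO L_TmpV2 VALUES (?, ?, ?)",
    [(1001, "Admit_DtTm", "2025-03-01 08:15"),
     (1001, "Dx_Primary", "sepsis"),
     (1001, "APACHE_Age", "4")],
)

# Adding a new field needs no ALTER TABLE -- it is just another row.
con.execute("INSERT INTO L_TmpV2 VALUES (?, ?, ?)", (1001, "NewField", "x"))

# Pivot one record back into a field -> value mapping.
rows = con.execute(
    "SELECT item, val FROM L_TmpV2 WHERE record_id = ?", (1001,)
).fetchall()
print(dict(rows))
```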
Requirements
These requirements are also discussed in UM MedIT Re-platforming Meetings; decisions made are tracked there, and the "current master" is located here.
Here is a draft that includes, for each item, a preliminary notation of its priority on a scale of 1 = lowest to 5 = highest. Once the draft is finished, we will need to complete the MedIT Project Intake Process form, and Kiran and her team will then consider the best options for our needs.
This defines the functionality needed; the tools are up for discussion unless noted.
Data collection
- This would replace our current CCMDB.accdb Access front-end and would need to:
- Provide facilitated, partly automated input of admissions from the daily Shared Health data export (5) (see the sketch after this list)
- Maintain a user interface that has the general look and feel of the current one (3)
- Allow modification of individual data items in individual records (5)
- Allow modification of multiple records via queries, programming and/or automation (5)
- Function with poor or non-existent wifi
- The current tool is locally installed and allows for collection without requiring network access; if a newly proposed tool were cloud-based, this could be problematic for collectors who work from home or from locations where wifi is spotty.
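To make the "facilitated, partly automated input" requirement concrete, here is a minimal sketch of importing a daily dump under assumed conditions: the export arrives as a delimited file, and the file name and column names (adt_daily.csv, event, chart_no, admit_dt) are all hypothetical. The actual Shared Health export format is not described here.

```python
import csv
from pathlib import Path

# Hypothetical file name; the real export location and format may differ.
DUMP = Path("adt_daily.csv")

def load_admissions(path: Path) -> list[dict]:
    """Read the daily dump and keep only rows that look like admissions."""
    with path.open(newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("event") == "ADMIT"]

def prefill_record(adm: dict) -> dict:
    """Pre-populate a collection record so the collector only verifies/edits."""
    return {
        "chart_no": adm["chart_no"],
        "admit_dt": adm["admit_dt"],
        "RecordStatus": "incomplete",  # collector still controls the record
    }

if DUMP.exists():
    for adm in load_admissions(DUMP):
        print(prefill_record(adm))
```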
Data control and possibly transfer
- We have a "sending" process which currently includes both the movement of data from the locally installed database to the central one and the setting of the RecordStatus field. That field encodes whether the collector maintains "control" of the record or has handed control off to #Data processing. The collector maintains control of "incomplete" records until they "complete" the record, which triggers mandatory final cross checks that prevent completion unless passed. (This handoff behaves like a small state machine; see the sketch after the list below.)
- A new platform would need to provide this functionality:
- Make incomplete and complete data available to the #Data processing and #Data analysis stages (5)
- Allow data collectors to add and update the data they are working on -- up until they mark a record as "complete" (5)
- No longer allow them to update or view the record once it is set to "complete", or to any of the later statuses it is given during #Data processing (5)
- If collection happens in a separate database, move and sync updated data from the #Data collection tool (5)
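The control handoff described above can be pictured as a small one-way state machine over RecordStatus. This is a sketch only: "incomplete", "complete" and "vetted" are the statuses named in this document, but any transition rules beyond what is stated above are assumptions.

```python
# Sketch of the RecordStatus handoff as a one-way state machine.
TRANSITIONS = {
    "incomplete": {"complete"},  # collector finishes; cross checks must pass
    "complete": {"vetted"},      # set during Centralized data Vetting Process
    "vetted": set(),             # terminal as far as this sketch goes
}

def advance(status: str, new_status: str, cross_checks_ok: bool) -> str:
    """Move a record forward; completion is blocked unless checks pass."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status!r} -> {new_status!r}")
    if new_status == "complete" and not cross_checks_ok:
        raise ValueError("mandatory final cross checks failed")
    return new_status

def collector_may_edit(status: str) -> bool:
    """Collectors keep control only while the record is incomplete."""
    return status == "incomplete"

print(advance("incomplete", "complete", cross_checks_ok=True))  # complete
print(collector_may_edit("complete"))                           # False
```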
Data processing
- This would replace our current CFE Access front-end and would need to:
- Maintain the current Data Processing functionality for the data processor, who often works remotely (5)
- Maintain ability to run the integrity checks performed during Centralized data Vetting Process, leading to the RecordStatus field being set to "vetted" if passed (5) (see the sketch after this list)
- Maintain the ability to browse, search, sort, filter and modify (add, update, delete) the data interactively, including data validation (5)
- Allow modification of individual data items in individual records (5)
- Allow modification of multiple records via queries, programming and/or automation (5)
- Maintain a user interface that has the general look and feel of the current one (3)
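As one way to picture the vetting requirement, here is a sketch of an integrity-check suite that gates the "vetted" status. The individual checks and field names are illustrative assumptions; the real Data Integrity Checks are maintained by the team and are far more extensive.

```python
from datetime import datetime

# Illustrative checks only; the actual Data Integrity Checks differ.
def check_dates(rec: dict) -> list[str]:
    """Flag records whose discharge precedes their admission."""
    errs = []
    if rec["dischg_dt"] < rec["admit_dt"]:
        errs.append("discharge precedes admission")
    return errs

def check_required(rec: dict) -> list[str]:
    """Flag records missing assumed-mandatory fields."""
    return [f"missing {f}" for f in ("chart_no", "admit_dt") if not rec.get(f)]

def vet(rec: dict) -> list[str]:
    """Run all checks; set RecordStatus to 'vetted' only if every one passes."""
    errors = check_dates(rec) + check_required(rec)
    if not errors:
        rec["RecordStatus"] = "vetted"
    return errors

rec = {"chart_no": "A123", "admit_dt": datetime(2025, 3, 1),
       "dischg_dt": datetime(2025, 3, 5), "RecordStatus": "complete"}
print(vet(rec), rec["RecordStatus"])  # [] vetted
```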
Data analysis
- Allow our database personnel to transfer data into and out of the database ad hoc, such as an export to or import from a file (5) (see the sketch after this list)
- The goal is to allow analysis of this data using other tools, including but not limited to SAS, on a local PC
- Currently any edits to the data are delegated to #Data processing; the ability to update is not required at this stage (but might be good to have; this limitation is not intentional but due to process limitations)
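A sketch of the ad hoc export path described above, under assumptions: sqlite3 stands in for whichever back end is chosen, and the table, column and file names are hypothetical. The result is a plain CSV that local tools such as SAS can then import.

```python
import csv
import sqlite3

# Assumed names throughout; sqlite3 is a stand-in back end for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE l_log (chart_no TEXT, admit_dt TEXT, apache INTEGER)")
con.execute("INSERT INTO l_log VALUES ('A123', '2025-03-01', 17)")

def export_csv(con: sqlite3.Connection, query: str, out_path: str) -> None:
    """Ad hoc export: run any query and dump the result set to a CSV file."""
    cur = con.execute(query)
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow([c[0] for c in cur.description])  # header row
        w.writerows(cur)                             # data rows

export_csv(con, "SELECT * FROM l_log", "analysis_extract.csv")
```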
Ongoing Improvements
- Our database personnel need the ability to do the following:
- add / remove / change fields and tables in the data structure (?)
- update the user interfaces to incorporate these data changes (?)
- add / remove / change data validation and cross checks (?)
- They need to be able to do this without relying on other teams, development cycles or funding of individual changes (i.e. gatekeeping) (?)
Miscellaneous items
- Maintaining the back end data format/structure as much as possible (3)
- Maintaining our various "Created_*" queries / generated-data functionality that provide the APACHE score and its individual element scores, the Charlson score, etc. (see the sketch at the end of this section)
- Ability, in future, to expand the capabilities of the databases by linking the data to other data obtained automatically -- e.g. Canadian Blood Services data about blood transfusions (4)
Discussion: I think we would have this covered technically with the CCMDB team's ability to edit data, queries, automation and front end, so this one needs to cover the governance portion of our ability (permission?) to do this. How do we need to paraphrase it? Ttenbergen 14:33, 17 March 2025 (CDT)
- Ability for our team to build the ETL to manage these updates, rather than relying on other teams (4)
- Technical ability, in future, to link our database with the Shared Health Datamart (or name-of-the-day) (?)
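For the "Created_*" item above, here is a sketch of the idea: a derived value computed from stored data elements at query/report time rather than entered by the collector. Only a small illustrative subset of Charlson conditions and weights is shown; this is not the full index or the team's actual implementation.

```python
# Sketch of a "Created_*" style derived field: compute a score from stored
# comorbidity flags at query time. Illustrative subset of Charlson weights.
CHARLSON_WEIGHTS = {
    "mi": 1,                 # myocardial infarction
    "chf": 1,                # congestive heart failure
    "diabetes": 1,
    "metastatic_tumor": 6,
}

def created_charlson(record: dict) -> int:
    """Sum the weights of the comorbidities flagged on the record."""
    return sum(w for cond, w in CHARLSON_WEIGHTS.items() if record.get(cond))

print(created_charlson({"mi": True, "diabetes": True}))  # -> 2
```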