Centralized data.mdb

Data Repository containing:

Sending to ...

Version 2012-12-21 implemented sending of data to Centralized_data.mdb. This is accomplished by two queries (sketched in SQL after the list below):

  • "Send_Centralized_update" (update existing records to data currently in the collector's ccmdb_data.mdb)
  • "Send_Centralized_append" (append new records from the collector's ccmdb_data.mdb)
    • Why do we need 2 separate queries? I think one query that does both would be enough. If the DC sends their profiles from their ccmdb_data.mdb to the Centralized_data.mdb, the updating and/or appending should both occur depending on whether the profiles were previously sent or not - see below for more details. One important piece of data I need is the field that will differentiate the completed profiles from incomplete ones (i.e. outstanding patients who are still in the unit/ward). If that field already exists, I need to know its label name. JMojica 13:03, 2013 April 9 (EDT)
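
For illustration, the pair could look roughly like the following Access SQL, assuming the centralized copy of L_Log is attached as a linked table (called Centralized_L_Log here) and that records are matched on a key field D_ID; the linked-table name, key field and data fields are placeholders, not the actual CCMDB object names.

  -- Send_Centralized_update (sketch): overwrite rows that already exist in
  -- the centralized table with the collector's current values.
  UPDATE Centralized_L_Log INNER JOIN L_Log
      ON Centralized_L_Log.D_ID = L_Log.D_ID
  SET Centralized_L_Log.Dispo    = L_Log.Dispo,
      Centralized_L_Log.Complete = L_Log.Complete;

  -- Send_Centralized_append (sketch): add rows from the laptop that the
  -- centralized table does not have yet; the outer join keeps already-sent
  -- records from being duplicated.
  INSERT INTO Centralized_L_Log
  SELECT src.*
  FROM L_Log AS src LEFT JOIN Centralized_L_Log AS dst
      ON src.D_ID = dst.D_ID
  WHERE dst.D_ID IS NULL;

Access SQL has no single upsert/merge statement, which is presumably why an update/append pair is used rather than one query.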

Template:Discussion

  • If this is working as intended, then any updates Pagasa makes to fix bad data will be overwritten again with the bad data from the collection laptop every time the collector sends. Possible remedies are to make collectors delete sooner after sending, or to have Pagasa email collectors so they fix errors at the root and wait for that to update the dataset. That is assuming it works as intended. From your experience with the entries in L_Log and the sending/data-retention habits of the collectors involved, does it sound like this is how it is working? Ttenbergen 18:05, 2013 April 8 (EDT)
  • JMojica 11:24, 2013 April 9 (EDT) Here is my suggestion - populate the Centralized_data.mdb from the DC's laptop at sending time (once a week), not when the DCs back up their data on the laptop. The data will include both profiles checked as completed and incomplete ones.
    • At the next sending (normally the following week, or whatever frequency we decide), the DC must make sure that the previously sent 'completed' profiles have already been removed from their laptop. This is required to avoid overwriting data already in the Centralized_data.mdb that Pagasa may have changed to correct errors (a sketch of this clean-up follows the list below).
      • There will again be a batch of completed profiles, some new and some already included in last week's send, and another batch of incomplete profiles, some new and some already included in last week's send. Upon sending, these will populate the Centralized_data.mdb. Profiles that were previously incomplete but are now complete will be overwritten (i.e. all data will be updated). Profiles that were previously incomplete and are still incomplete in this send will also be overwritten (i.e. some data will be updated). Profiles with no previous record (complete or incomplete) will simply be appended.
    • Finding Errors - The data integrity checks will be applied to both complete and incomplete profiles.
    • Fixing Errors - Pagasa will fix errors in the Centralized_data.mdb, but only for complete profiles. If Pagasa finds errors in the data of incomplete profiles, she will report the error(s) to the corresponding DC, ask her for the correction, and the DC will make the correction(s) on the laptop herself. The corrections for the incomplete profiles will then show up in the next sending. Pagasa will not touch the incomplete profiles in the Centralized_data.mdb.
      • If I need to use data from the incomplete profiles in the Centralized_data.mdb and some of it contains errors, I will substitute the erroneous value with the corrected value (provided by the DC or Pagasa) in the SAS program in order to generate a correct report. Once the correction has been sent to the Centralized_data.mdb, I will just change the SAS program accordingly.
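
As a rough illustration of the clean-up step mentioned above, removing the previously sent completed profiles from the laptop could amount to something like the query below; the Complete and Sent flag fields are placeholders, and in practice the removal may well happen through the CCMDB front end rather than a raw query.

  -- Sketch only: remove profiles from the laptop that are marked complete
  -- and have already been sent, so the next send cannot overwrite
  -- Pagasa's corrections in Centralized_data.mdb.
  DELETE FROM L_Log
  WHERE Complete = True
    AND Sent = True;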

How to populate this

Template:Discussion
We talked today at the ICU Task Meeting about whether or not to populate this table by push at every send. Julie was concerned that the data would still not be as up to date that way as it could be. One suggestion was to pull it from the backups instead. This could be done, but would need to happen very regularly so as not to miss patients who, e.g., arrive and get discharged on Wednesday morning, get entered by the collector, sent and deleted. We might need a multi-prong approach, where this table is push-populated when collectors send, and pull-populated whenever Julie needs data. That way no patients would get missed and fresh data would be available when needed. I am planning to populate this table with a pair of queries: the first updating records that are already present, the second adding records that are new. Thoughts? Ttenbergen 17:09, 2012 December 21 (EST)
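
As a sketch of what the pull side could look like, the following query, run from within Centralized_data.mdb against a collector's backup file, would append any records the centralized table does not yet have without waiting for the next send; the backup path, key field and the idea of reading the backup file directly are all assumptions for illustration.

  -- Pull-populate (sketch): read a collector's backup directly and append
  -- records that are not yet in the centralized table.
  INSERT INTO L_Log
  SELECT bak.*
  FROM [;DATABASE=X:\backups\ccmdb_data.mdb].L_Log AS bak
      LEFT JOIN L_Log AS dst ON bak.D_ID = dst.D_ID
  WHERE dst.D_ID IS NULL;

A matching update query for records that are already present would follow the same pattern as the update sketch above.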

Dependencies

TISS28 Data.mdb pulls data from Centralized data.mdb.
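
A minimal sketch of how such a pull might look from the TISS28 side, assuming a query inside TISS28 Data.mdb; the path, table and field names are placeholders.

  -- Sketch: read completed records straight out of the centralized file.
  SELECT D_ID, Admit_DtTm, Dispo
  FROM L_Log IN 'X:\CCMDB\Centralized_data.mdb'
  WHERE Complete = True;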