Efficient archiving and rapid retrieval of relevant documents are the two core arguments for electronic documentation in healthcare. But a patient’s individual data collection can do more: especially when it is reasonably complete, it can serve as a basis for automatic analyses. Such solutions make it easier for a physician to identify patients who qualify for certain therapies.
Whether abuse is impending, a warning light will tell
Some of this is reality today: various vendors of practice software for physicians’ offices offer modules that allow the physician to identify patients suitable for, say, a disease management program or an extra-budgetary contract. Diagnostically oriented IT solutions are available as well; the simplest ones warn of unusual lab constellations. What Dr. Ben Reis of the Children’s Hospital Informatics Program at Harvard Medical School and other experts now contribute to the debate in the journal BMJ (2009; 339:b3677) goes a step further. In a study they examined whether longitudinal data in patients’ historical records, commonly available in electronic health record systems, can be used to predict a patient’s future risk of receiving a diagnosis of domestic abuse. Reis emphasizes: “Physicians normally do not have the time to study patient files intensively when they only meet the patient briefly face to face. As a result, domestic violence can remain undiscovered for years. The actual diagnosis is often hidden behind acute complaints that seem to be the apparent reason for visiting the doctor.” The thesis: an automatic algorithm could serve as a remedy, warning the physician of potential abuse when certain risk constellations occur.
Many emergency room visits as a predictor of abuse
To develop such an algorithm, the scientists examined the records of more than half a million patients with gapless medical documentation spanning four years or more. First they used ICD-9 codes to identify cases of abuse: on the one hand, codes directly linked to domestic violence; on the other, a number of related codes that may indicate domestic violence but do not necessarily do so. This second category mainly comprises codes for intentionally inflicted injuries, human bites, and attempted poisoning, as well as codes associated with neglect of children or relatives. Based on these data, software was developed that tries to predict potential cases of abuse independently of the ICD coding, using factors such as the number of hospital stays and emergency room visits or the types of diagnoses. Validation was performed against the entire database to see how reliably, and how early, risk candidates for domestic abuse can be identified. Besides the sheer number of doctor’s visits, injuries and various psychological disorders, for example, correlated with the risk of domestic violence. Alcoholism and poisonings were predictive mainly for women, while affective disorders led the way for men.
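The two-step approach described above (labeling cases via ICD-9 codes, then predicting from visit-level features such as emergency room visits and diagnosis types) can be sketched in toy form. Everything below is an assumption for illustration only: the code sets, the weights, and the scoring rule are invented, and the study’s actual code lists and model are not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical, abbreviated ICD-9 code sets -- the study's full lists are
# not given in this article, so these entries are illustrative only.
DIRECT_ABUSE_CODES = {"995.81"}           # e.g. adult physical abuse
RELATED_CODES = {"E968.7", "E962.0"}      # e.g. human bite, poisoning assault

@dataclass
class Visit:
    setting: str            # "er", "inpatient", or "office"
    icd9_codes: tuple       # codes recorded at this visit

def naive_risk_score(history):
    """Toy risk score weighting ER visits and suggestive diagnoses.

    This is NOT the published algorithm, only a sketch of the idea that
    visit counts and diagnosis types can feed a risk predictor.
    """
    er_visits = sum(1 for v in history if v.setting == "er")
    related_hits = sum(1 for v in history
                       for c in v.icd9_codes if c in RELATED_CODES)
    # Invented weights: related diagnoses count twice as much as ER visits.
    return 1.0 * er_visits + 2.0 * related_hits

# Fabricated example history: two ER visits, two suggestive codes.
history = [
    Visit("er", ("E968.7",)),
    Visit("office", ("780.4",)),
    Visit("er", ("E962.0",)),
]
print(naive_risk_score(history))  # 2 ER visits + 2 related codes -> 6.0
```

A real system would of course learn such weights from the labeled data rather than hard-code them, and would use many more features; the point here is only the pipeline shape.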
Software sometimes warns years ahead
The really interesting question was how well the algorithm actually worked. And here it must be said that the US experts did a pretty good job. For example, they achieved a sensitivity of nearly 90 percent for predicting later abuse diagnoses. With a tolerance limit of 15 percent false alarms, the sensitivity was still 80 percent. On average, the risk assignment succeeded 10 to 30 months before the actual diagnosis of abuse. That is quite good for a first attempt, and Reis is correspondingly confident: “The more data is available, the greater the potential of this method, which uses large amounts of information to identify individual risks for disorders, to make the vision of predictive medicine a reality.”
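To make a figure like “80 percent sensitivity at 15 percent false alarms” concrete: it describes an operating point on an ROC curve, the best recall achievable while keeping the false-positive rate under a ceiling. Below is a minimal sketch of how such a point can be read off predicted risk scores. The scores, labels, and function name are made up for illustration; this is not the study’s evaluation code.

```python
def sensitivity_at_fpr(scores, labels, max_fpr):
    """Return the best sensitivity achievable while the false-positive
    rate stays at or below max_fpr.

    scores: predicted risk scores, higher = more suspicious.
    labels: 1 for patients later diagnosed with abuse, 0 otherwise.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_sens = 0.0
    # Try each distinct score as a decision threshold, highest first.
    for t in sorted(set(scores), reverse=True):
        flagged = [s >= t for s in scores]
        fp = sum(f and not l for f, l in zip(flagged, labels))
        tp = sum(f and l for f, l in zip(flagged, labels))
        if fp / neg <= max_fpr:
            best_sens = max(best_sens, tp / pos)
    return best_sens

# Fabricated toy data: 3 true cases among 8 patients.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   0,   0]

print(sensitivity_at_fpr(scores, labels, 0.2))  # 1.0 on this toy data
```

Lowering the false-alarm ceiling trades sensitivity away: with `max_fpr=0.0` the same toy data yields only two of the three cases.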
On the one hand this is true. On the other hand, this quasi-industrialized prediction of domestic abuse raises ethical questions that do not arise in the same way when identifying DMP candidates or patients with an elevated cardiovascular risk. A “suspected diagnosis” of abuse stigmatizes not only the patient but also his or her family. If it is wrong in an individual case, and if this misinformation falls into the hands of a third party, for example because IT systems are poorly secured, it can cause considerable human damage. Such constellations already exist today, though. Nobel laureate Harald zur Hausen, for example, recently warned his colleagues against jumping to the conclusion of sexual abuse in young girls suffering from genital warts. Those, however, are individual cases. Software could turn general suspicion into a mass phenomenon.