Monthly Archives: July 2013

About SNOMED

This week the focus has been on getting reference terms populated into the database by uploading a SNOMED file.

Here is some information about SNOMED (taken from Wikipedia):

SNOMED CT, or SNOMED Clinical Terms, is a systematically organized, computer-processable collection of medical terms providing codes, terms, synonyms and definitions used in clinical documentation and reporting. SNOMED CT is considered to be the most comprehensive, multilingual clinical healthcare terminology in the world. The primary purpose of SNOMED CT is to encode the meanings that are used in health information and to support the effective clinical recording of data with the aim of improving patient care. SNOMED CT provides the core general terminology for electronic health records. SNOMED CT's comprehensive coverage includes: clinical findings, symptoms, diagnoses, procedures, body structures, organisms and other etiologies, substances, pharmaceuticals, devices and specimens.

SNOMED CT provides for consistent information interchange and is fundamental to an interoperable electronic health record. It allows a consistent way to index, store, retrieve, and aggregate clinical data across specialties and sites of care. It also helps in organizing the content of electronic health records systems by reducing the variability in the way data is captured, encoded and used for clinical care of patients and research. SNOMED CT can be used to record clinical details of individuals in the electronic patient records. It also provides the user with a number of linkages to clinical care pathways, shared care plans and other knowledge resources, in order to facilitate informed decision-making and support long term patient care. The availability of free automatic coding tools and services, which can return a ranked list of SNOMED CT descriptors to encode any clinical report, could help healthcare professionals to navigate the terminology.

SNOMED CT is a terminology that can cross-map to other international standards and classifications. Specific language editions are available which augment the international edition and can contain language translations, as well as additional national terms. For example, SNOMED CT-AU, released in December 2009 in Australia, is based on the international version of SNOMED CT, but encompasses words and ideas that are clinically and technically unique to Australia.

We will be using the information from the SNOMED RF2 release to populate our reference term table, in preparation for retrieving the hierarchy information later in the project.


jQuery Plugins

So this week I started out by working on some code to upload SNOMED data files and save the data in the system. I had begun creating a progress bar to show how long the upload would take, but that had to take a back seat because my mentor was away on business and was not able to help clear up my confusion about how the files map to the tables in the database. So we changed direction, and I created a Reference Term Browser instead.

While working on the Reference Term Browser, I used the DataTables jQuery plugin. Wow, this really saves a lot of time, so I would like to show how I incorporated it into my code with OpenMRS.

It is really quite easy. First, download all the files you will need for the plugin; they are usually CSS and JS files. I needed jquery.dataTables.min.js and fourButtonPagination.js.

Then import them in your gsp with:

ui.includeJavascript("yourModuleName", "jquery.dataTables.min.js");
ui.includeJavascript("yourModuleName", "fourButtonPagination.js");

Create a fragment that makes a getJSON call for the DataTable data:
<script type="text/javascript">
    jq.getJSON('${ ui.actionLink("yourModuleName", "browseTableOfReferenceTerms", "getPage") }')
        .success(function(data) {
            jQuery('#demo').html('<table cellpadding="0" cellspacing="0" border="0" id="example"></table>');
            jQuery('#example').dataTable({
                "sPaginationType": "four_button",
                "aaData": data,
                "aoColumns": [
                    { "sTitle": "source" },
                    { "sTitle": "code" },
                    { "sTitle": "name" },
                    { "sTitle": "description" }
                ]
            });
        })
        .error(function(xhr, status, err) {
            alert('Reference Term AJAX error: ' + err);
        });
</script>

Then include that fragment with:

${ ui.includeFragment("yourModuleName", "yourFragmentName") }

and I needed a div for displaying my table:
<div id="demo"></div>

You will also need a controller to get the data when you make the json call. Here are the contents of mine:

package org.openmrs.module.conceptmanagementapps.fragment.controller;

import java.util.ArrayList;
import java.util.List;

import org.openmrs.ConceptReferenceTerm;
import org.openmrs.api.context.Context;
import org.openmrs.module.appui.UiSessionContext;
import org.openmrs.module.conceptmanagementapps.api.ConceptManagementAppsService;
import org.openmrs.ui.framework.page.PageModel;

public class BrowseTableOfReferenceTermsFragmentController {

    // Called by the getJSON request; returns one String[] per reference term,
    // in the column order the DataTable expects.
    public List<String[]> getPage() throws Exception {
        ConceptManagementAppsService conceptManagementAppsService = (ConceptManagementAppsService) Context
                .getService(ConceptManagementAppsService.class);
        List<ConceptReferenceTerm> referenceTermList = conceptManagementAppsService.getReferenceTermsForAllSources(0, 200);
        List<String[]> referenceTermDataList = new ArrayList<String[]>();
        for (ConceptReferenceTerm crt : referenceTermList) {
            String[] referenceTermArray = { crt.getConceptSource().getName(), crt.getCode(), crt.getName(),
                    crt.getDescription() };
            referenceTermDataList.add(referenceTermArray);
        }
        return referenceTermDataList;
    }

    public void get(UiSessionContext sessionContext, PageModel model) throws Exception {
        // Nothing to do when the fragment itself is rendered.
    }
}

That is it. It is very straightforward and really saves time, because it gives you a very organized page that you can scroll through, search, and filter with very little code.

Here is a link to the plugin I used if you would like to see it in action: http://datatables.net/index

 

CSV files

This week I have been focusing on cleaning up my code and changing my upload and download code to use Super CSV.

Super CSV is a very good tool for reading and writing CSV files. CSV files are simply text files containing multiple rows, with each field delimited by a comma. Some of the difficult parts about working with CSV files are:

  • they may or may not contain headers
  • each field may or may not be enclosed in double quotes
  • within the header and each record, there may be one or more fields, separated by commas
  • the last record in the file may or may not have an ending line break

Super CSV is a library that was created to handle these difficulties for you.
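To see the kind of trouble quoting causes, here is a minimal sketch in plain Java (not Super CSV, just an illustration) contrasting a naive comma split with a quote-aware one:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of why quoted fields make naive comma-splitting unreliable.
public class CsvQuotingDemo {

    // Naive approach: split on every comma, ignoring quotes.
    static String[] naiveSplit(String line) {
        return line.split(",");
    }

    // Quote-aware approach: only split on commas outside double quotes.
    static List<String> quoteAwareSplit(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : line.toCharArray()) {
            if (c == '"') {
                inQuotes = !inQuotes;          // toggle quoted state, drop the quote
            } else if (c == ',' && !inQuotes) {
                fields.add(current.toString()); // field boundary
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());         // last field has no trailing comma
        return fields;
    }
}
```

Given the line `1001,"Smith, John",true`, the naive split produces four pieces (the comma inside the name splits the field), while the quote-aware split yields the intended three. Super CSV handles this, plus headers and escaped quotes, so you do not have to.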

It has four different readers to work with: CsvBeanReader, CsvDozerBeanReader, CsvListReader and CsvMapReader.

For my needs I chose the CsvMapReader, and it is very useful. First you instantiate a reader and then set up your cell processors. The cell processors are what help Super CSV effectively parse your file. The cells are then mapped to the header names and put into a Map&lt;String, Object&gt;. Here is an example of the cell processors from the Super CSV site:

private static CellProcessor[] getProcessors() {

        final String emailRegex = "[a-z0-9\\._]+@[a-z0-9\\.]+"; // just an example, not very robust!
        StrRegEx.registerMessage(emailRegex, "must be a valid email address");

        final CellProcessor[] processors = new CellProcessor[] { 
                new UniqueHashCode(), // customerNo (must be unique)
                new NotNull(), // firstName
                new NotNull(), // lastName
                new ParseDate("dd/MM/yyyy"), // birthDate
                new NotNull(), // mailingAddress
                new Optional(new ParseBool()), // married
                new Optional(new ParseInt()), // numberOfKids
                new NotNull(), // favouriteQuote
                new StrRegEx(emailRegex), // email
                new LMinMax(0L, LMinMax.MAX_LONG) // loyaltyPoints
        };

        return processors;
}

Here is how you get the cell processors: final CellProcessor[] processors = getProcessors();

and the headers: final String[] header = mapReader.getHeader(true);

Then, when you need the values of the fields, you read the rows one at a time:

Map<String, Object> exampleMap;
while ((exampleMap = mapReader.read(header, processors)) != null) {
    // use exampleMap here
}

and simply call exampleMap.get("name which matches the header"). It really saves time: there are many ways to parse a CSV file, but this library has taken the best of them and put them into one useful package.

The rest of the week has been spent trying to make the fields easy to understand and making the page look better.

 

Validations

This week I have been trying to learn how to do validations with the UIFramework. This is the last piece I need to finish step one before it is ready to send off for code review and testing.

So one of the helpful things to know is how to use fragments. If you include a fragment for the fields you wish to validate, rather than putting the fields directly in your page, it makes readability and validation much easier. For example, I have a new field I want to be required, so I created a fragment called conceptClasses.gsp under (yourmodule)/omod/src/main/webapp/fragments and a controller called ConceptClassesFragmentController.java under (yourmodule)/omod/src/main/java/org/openmrs/module/(yourmoduleid)/fragment/controller. In the gsp I have the label and input field, which looks like this (so far, but it may change as I work through the validation):

<%
    config.require("label")
    config.require("formFieldName")
%>

<p <% if (config.left) { %> <% } %> >

    <label for="${ config.id }-field">
        ${ config.label } <% if (config.classes && config.classes.contains("required")) { %><span>(${ ui.message("emr.formValidation.messages.requiredField.label") })</span><% } %>
    </label>

    <input type="text" id="${ config.id }-field" name="${ config.formFieldName }" value="${ config.initialValue ?: '' }"
        <% if (config.classes) { %>class="${ config.classes }"<% } %>
        autocomplete="off" />

    ${ ui.includeFragment("uicommons", "fieldErrors", [ fieldName: config.formFieldName ]) }
    <% if (config.optional) { %>
        ${ ui.message("emr.optional") }
    <% } %>
</p>

I can then include it by adding the following code to my page:

${ ui.includeFragment("conceptmanagementapps", "field/conceptClasses", [
    label: "concept classes",
    size: 1,
    formFieldName: "conceptClass",
    left: true,
    classes: "required"
]) }

Creating the form like this <form class="simple-form-ui" method="post"> will make required fields show up in red.

Just for clarification: I did not end up using the UIFramework's standard validation. It seems to only work when the page needs the side navigation, where one dialogue depends on the answers to another. You can see how this works by looking at the code to register a patient: there you cannot leave the person name field until it is entered, and only then can you move on to the other fields and finish answering the questions to register a patient.

I ended up just creating my own quick function to validate my two fields. However, it was important to me to understand how it should be done with the new framework, in case I need it for that kind of dialogue in the future.
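For illustration, a quick required-field check like the one I wrote could look something like this in plain Java (the field names here are hypothetical, not the actual module code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a "quick function" validating two required fields.
// The field names (conceptSource, conceptClasses) are illustrative only.
public class RequiredFieldValidator {

    /** Returns one error message per missing field; an empty list means valid. */
    public static List<String> validateRequired(String conceptSource, String conceptClasses) {
        List<String> errors = new ArrayList<>();
        if (conceptSource == null || conceptSource.trim().isEmpty()) {
            errors.add("concept source is required");
        }
        if (conceptClasses == null || conceptClasses.trim().isEmpty()) {
            errors.add("concept classes is required");
        }
        return errors;
    }
}
```

The returned messages can then be shown next to the fields, much like the uicommons fieldErrors fragment does.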

For showing errors in the upload file, we have decided to create a new spreadsheet listing only the rows that failed, with an added column that shows the reason each row failed.
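As a sketch of that idea (plain Java for brevity; the real implementation could just as well use Super CSV's writers), the error spreadsheet could be built like this:

```java
import java.util.List;

// Illustrative sketch: build a new CSV "error spreadsheet" containing only the
// failed rows, with an extra final column giving the reason each row failed.
public class ErrorReportWriter {

    // Quote a field and double any embedded quotes, per the usual CSV convention.
    static String quote(String field) {
        return "\"" + field.replace("\"", "\"\"") + "\"";
    }

    /** Returns the CSV text: the original header plus a "failure reason" column. */
    public static String toErrorCsv(String[] header, List<String[]> failedRows, List<String> reasons) {
        StringBuilder sb = new StringBuilder();
        for (String h : header) {
            sb.append(quote(h)).append(',');
        }
        sb.append(quote("failure reason")).append('\n');
        for (int i = 0; i < failedRows.size(); i++) {
            for (String field : failedRows.get(i)) {
                sb.append(quote(field)).append(',');
            }
            sb.append(quote(reasons.get(i))).append('\n');
        }
        return sb.toString();
    }
}
```

The resulting text can be written straight to a file and opened as a spreadsheet, so users see exactly which rows to fix and why.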

I may have to update this post with more info later as I am still working through the validation process.