oh boy! since you asked...
This x12 parser and all the others you have reviewed ARE compatible with Domain Driven Design. You are implying that they are not.
Before I can answer your question I must first explain that there are at least 4 levels (if not more) of x12 standardization:
- Level 1 - the message format. Each transaction set (there are over 300 of them) has a standard permissible syntax for a given version (4010 vs 5010). This allows multiple parties to split these messages apart and route them to their final destination without needing to know much about how the content will be applied.
- Level 2 - industry-published implementation guides. These conform to Level 1 standardization and add some domain knowledge, in that they specify how to interpret different identifier codes within a specific industry's application of that transaction set. For example, 837P, 837I and 837D all conform to the X12 837 standard but are three different implementations.
- Level 3 - a trading partner's implementation guide. This looks very much like level 2 but is governed by a company rather than a standards organization.
- Level 4 – a trading agreement. This is usually like Level 3 but may contain any additional specifications related to the B2B transactions.
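To make Level 1 concrete, here is a rough sketch (in Python, just for illustration) of what syntax-only awareness means: splitting a raw X12 stream into segments and elements using nothing but the delimiters, with no idea what any transaction set means. The helper name and the hard-coded delimiters are my invention; a real parser reads the delimiters from the ISA segment.

```python
# Level 1 awareness only: split on delimiters, interpret nothing.
# (Hypothetical helper, not the parser's API; real X12 declares its
# delimiters in the ISA segment instead of hard-coding them.)
SEGMENT_TERMINATOR = "~"
ELEMENT_SEPARATOR = "*"

def split_segments(raw: str) -> list[list[str]]:
    """Return each segment as a list of its elements (element 0 is the segment ID)."""
    return [
        seg.split(ELEMENT_SEPARATOR)
        for seg in raw.strip().split(SEGMENT_TERMINATOR)
        if seg.strip()
    ]

# A tiny, made-up 837-style fragment:
raw = "ST*837*0001~BHT*0019*00*244579*20061015*1023*CH~SE*3*0001~"
segments = split_segments(raw)
```

Knowing that the first element of the ST segment says "837" is still Level 1; knowing what a BHT date means *inside an 837P* is Level 2 and up.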
Because of this, it is very easy to develop one mechanism for persisting all X12 to a relational database with only Level 1 awareness, but each subsequent level requires knowledge of the implementation guide(s) for that transaction set, and there could be many across companies and versions; thus the need for domain-driven design. That doesn't make it difficult, only tedious. And there are many companies willing to charge you an arm and a leg to help you with it.
The next sequential step in the pipeline, after getting the data into a hierarchical structure (whether that be XML or the X12-aware objects), is to transform it into its domain model. The OopFactory.X12.Hipaa.dll assembly is an EXAMPLE of this that uses XSLT transformations, but I found that most enterprise developers are quite weak in XSLT. The database feature added early this year was meant to allow the transformation to occur without knowing XSLT. You can accomplish the same thing logically by using this tool to stage your data and performing the same transformation into domain-specific tables using SQL.
The reason I said "the next sequential step" in the previous paragraph is to point out the order in which the message is processed. Per domain-driven design principles, you should design your model first and only later concern yourself with how it will be hydrated from the persistence store that is provided to you (in this case an X12 file, though now you can do it from the database instead). It sounds like this is where you are currently asking for some guidance.
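That model-first order of operations can be sketched like this (all names and segment positions here are hypothetical and for illustration only; in a real 837 the CLM segment is described by the implementation guide):

```python
# Model-first: the domain model is designed up front, and hydration from
# the persistence form you were handed (staged X12 segments here) comes
# second. Hypothetical names; CLM element positions are illustrative.
from dataclasses import dataclass

@dataclass
class Claim:                      # the domain model, designed first
    patient_control_number: str
    total_charge: float

def hydrate_claim(segments: list[list[str]]) -> Claim:
    """Hydrate the domain model from staged segments (assumes a CLM is present)."""
    clm = next(seg for seg in segments if seg[0] == "CLM")
    return Claim(patient_control_number=clm[1], total_charge=float(clm[2]))

staged = [["ST", "837", "0001"], ["CLM", "26463774", "100.00"]]
claim = hydrate_claim(staged)
```

The point is that `Claim` owes nothing to X12; the hydration function is the only place the two worlds meet.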
I have actually spent years doing many implementations for clients using this pattern, where I would transform the X12 in code into a domain model that I could act upon for validation etc., but there was a fatal flaw with this approach (which the database feature mitigates). When I transformed the X12 first and saved the result to more closely represent my domain objects, there was a loss of fidelity, because I had always parsed out much less than the standard defined (in this case for an 837 health claim, which has three different 600-page documents describing it). Since I might have missed a few elements that I didn't need at the time, that information was lost: I didn't store it in the database because my current domain model didn't need it. As requirements got added and we needed to know about other segments that were being sent in the files, it was tedious to reparse the files to find those previously ignored segments (and many times we didn't do it).
With the staging database, it is much easier to query and do analysis work on the full set of information sent to you. Many clients don't really know what they want from a file until you tell them what is getting sent in it, and this is easier for you to do if you don't have to think about your domain model before getting your X12 into the database.
So, as you have stated, a transformation has to be done. But before you dismiss using the database for staging, let me give you the pros and cons of using the database versus using XSLT directly.
Transformation in XSLT:
- Requires fewer layers of abstraction (as you have stated).
- Requires less server storage for data that you will eventually save in another format.
- Very few enterprise developers are experienced and comfortable with XSLT code.
- Any data not transformed by the XSLT will be lost (unless you reparse the files).
- It is difficult to query for what is already in the files, short of loading each file individually and running your XSLT against it.
Transformation in SQL:
- Many more enterprise developers will be able to understand a SQL query that transforms one table format to another.
- The full contents of the X12 file are ALWAYS parsed.
- You can use the staging database for data mining beyond what you initially needed for your domain object.
- An extra layer of transformation has to occur (though this is written for you, so it is only a minus if you don't like having another database floating around, or you like XSLT more than you like SQL).
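To show what "transformation in SQL" looks like in spirit, here is a self-contained sketch using an in-memory SQLite database: a generic staging table holding every parsed element, and one INSERT...SELECT that reshapes it into a domain-specific table. The schema and column names are invented for illustration; the parser's actual staging schema is richer than this.

```python
# Invented staging schema for illustration only: one row per parsed
# element, then a plain SQL statement that lifts CLM elements into a
# domain-specific Claim table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Element (
        TransactionId INTEGER, SegmentId TEXT,
        Position INTEGER, Value TEXT);
    CREATE TABLE Claim (
        TransactionId INTEGER, PatientControlNumber TEXT, TotalCharge REAL);
    -- stage one CLM segment: position 1 = patient control number, 2 = charge
    INSERT INTO Element VALUES (1, 'CLM', 1, '26463774');
    INSERT INTO Element VALUES (1, 'CLM', 2, '100.00');
""")

# The transformation itself is ordinary SQL that most developers can read.
con.execute("""
    INSERT INTO Claim (TransactionId, PatientControlNumber, TotalCharge)
    SELECT pcn.TransactionId, pcn.Value, CAST(chg.Value AS REAL)
    FROM Element pcn
    JOIN Element chg ON chg.TransactionId = pcn.TransactionId
                    AND chg.SegmentId = 'CLM' AND chg.Position = 2
    WHERE pcn.SegmentId = 'CLM' AND pcn.Position = 1
""")
row = con.execute("SELECT PatientControlNumber, TotalCharge FROM Claim").fetchone()
```

Because every element is staged, the query above can be rewritten or rerun months later against data you originally ignored, which is exactly the fidelity point made earlier.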
If you don’t want your production process to have a staging database, you could use the SQL feature just as an analysis tool and use the XSLT method to do your transformations. There are plenty of examples of how to do this in the OopFactory.X12.Hipaa project in the source code.
Hopefully this gives you some ideas as to the best approach for you.