Monday, April 29, 2019

Microservices Architecture pt.2: Why do we want Microservices architecture?

After exploring what a microservices architecture actually is (see Microservices Architecture pt.1: Definition), we can ask ourselves why we want such an architecture. After all, it seems rather complex, discourages reusability, can lead to data inconsistency and, like any hype, will eventually be overtaken by something else. However, there are benefits too and most of the downsides can be mitigated. It's also not always necessary to go for the most hardcore version of an idea; some middle ground can be reached to come up with a reasonable solution.

The most important reason for microservices architecture is to get rid of dependencies. Many systems are very hard to manage and maintain, because a small change can create a massive butterfly effect that puts the entire application at risk. With microservices architecture, you isolate business modules, so a change in the insurance part of the company will not affect its marketing application and vice versa. This also makes it easier to release changes and updates, because the bounded context protects you from unexpected and undesirable side effects.

Another reason is that many organizations are disappointed with the traditional approach to SOA. In many cases, implementing SOA has not led to more flexibility, but actually less, as multiple layers now need to be tweaked to make a single change and massive enterprise metadata repositories make it virtually impossible to change things without massive consequences. If all your services are using your Person.xsd and you need to make a change to that one, you're going to be royally screwed. Besides, the Person.xsd will most likely contain everything from every context, which makes it unfocused and hard to work with. On the other hand, not using these metadata models also has downsides, as you need to make a lot of transformations. Microservices architecture can be a nice middle ground here, because you can isolate the Person's context to the business module and there are no dependencies between the different business domains. So, the context of a person is completely different for insurance than it is for marketing and guess what... that's totally okay and no longer an issue.

A third and also important reason is that microservices architecture forces organizations to really think about their business domains, leading to a sounder architecture. There will be no confusion about which service does what (if done well, of course) and development teams can be organized around a product that they will know a lot about, instead of being unfocused by doing many different things. This also ties microservices architecture to the best practices of Scrum, helping organizations to be more agile and manage things in a more logical way.

Have I written about deployment yet? With microservices architecture, it will be much easier to create releases and think about containerization, because there are far fewer dependencies. Applications gain a lot of independence this way and you can release more frequently with less risk of creating any issues. In a world where time-to-market needs to get faster and faster, this can be crucial for many businesses!

The most popular argument for microservices architecture is that microservices can be scaled independently. While I don't consider this crucially important for most organizations, which have predictable, stable loads on their systems, it can be interesting if there are big differences between different parts of the system or if you need some flexibility to quickly make changes.

Last, but not least: runtime dependencies. When a stuck thread pulls down an entire application server that runs everything, your business is on hold. If a stuck thread pulls down an application server that runs a single microservice, everything else will still function well enough. So, errors and problems get isolated, which not only increases stability, but also makes it easier to analyze what happened.

What about the downsides?

Microservices architecture is inherently complex. So, for smaller applications, it is not a recommended approach, because of the relatively large overhead. Also, when your business domains are not quite clear yet, you still have some work cut out for you before you will be ready for microservices.

The lack of reusability I actually see as an advantage, not as a disadvantage. You will be forced to take a more modular approach, but within your business domain you can reuse as much as you want, while maintaining flexibility in the interaction with the outside world. Obviously, there are some things that need to be centrally organized, like your API management, monitoring and traceability, so keep that in mind. Especially the latter is not going to be an easy ride when you're not running everything on the same platform or with the same technology.

Data inconsistency sounds really scary, doesn't it? It is and therefore it should be avoided. In microservices architectures, it is common to work with eventual consistency, which means that distributed data will eventually be consistent throughout the system. Let's say you have three microservices all managing their own data and this data overlaps. A change happens in the first service, which pushes an event that can be consumed by the others. At that moment, your data is not consistent, but a few seconds later it will be, as the event gets consumed and the data gets processed. If you have a persistent publish/subscribe system like Apache Kafka, even newly added microservices can be fed with previous data and corrupted databases can be repopulated based on the information in the events. Obviously, you should test this well!

Conclusion: while we must be aware of the risks and mitigate them properly, microservices architectures have many advantages for creating consistent, independent and well-organized systems. Combined with agile principles, it encourages greater productivity and faster time-to-market. So, in many cases it will be good to at least add some elements of microservices architecture to your systems.

Sunday, June 10, 2018

Microservices Architecture pt.1: Definition

There is a lot of talk going on about Microservices architecture these days. Since I presented on the subject at the NLOUG Tech Experience this year, I've been getting a lot of questions and comments about it, so I've decided to make a more structured breakdown through a series of blogs. To kick things off, I will explain what Microservices architecture is, which in itself is not that simple.


Definition

There is no generally accepted definition of Microservices architecture. However, there are some starting points and generally accepted properties that I consider to be valid enough to see as part of the definition. Let's just start with what Adrian Cockcroft, the architect who introduced the concept at Netflix, has to say about it:

"Microservices are loosely coupled service oriented architecture with bounded contexts"

This pretty much sums it up, but it lacks detail to get a proper understanding of Microservices architecture. So, we need to analyze this sentence and draw some conclusions from it.

First of all, Mr. Cockcroft clearly sees Microservices architecture as a form of SOA, instead of something completely different. He then goes on to put some limitations on how this SOA should be done:
  1. Loosely coupled
  2. With bounded contexts

Why are these aspects so important?

First of all, to understand why loosely coupled is so important here, one needs to understand how many SOA applications are built. They mostly rely heavily on reusable business services, a canonical data model and a shared runtime. That's not exactly what I would call loosely coupled, since there are quite a few dependencies and the impact of making changes to a service or the runtime configuration can be rather large.

Secondly, bounded context is another thing that is lacking in some SOA applications. While I've been working on projects where SOA was done quite right and each service had a single purpose, there are also many examples of services doing multiple things, having poor naming standards and rather unclear service definitions. Therefore, the context is no longer bounded and services become rather ambiguous. Any changes to them could have unexpected impact and there's a risk of overlapping functionality between services. Even if you don't consider Microservices architecture, bounded context is important for the sanity of your SOA application.
An important thing about bounded context is that the term does not imply that each Microservice can only contain one operation or only be about one object. It just has to be bound to a certain context and the granularity is up to you. In most cases, business domains are a nice way to identify Microservices, but it can be done in smaller parts, as long as the Microservice can remain loosely coupled.


Implications of "Loosely Coupled"

So, we want our Microservice to be loosely coupled. What does this mean? It means that dependencies are evil. This is a major shift from the "redundancy is evil" train of thought that we know and love from traditional database design. However, for a Microservice to be fully independent, it needs to control its own data, which inevitably means that you're going to have some data replication going on.

Since we don't want any runtime dependencies, we are no longer bound to one deployment platform. This opens the opportunity to be polyglot: for each Microservice, you can choose the appropriate technology. This can be very powerful, but be aware of the risk of too many different technologies in your organization. It would be good to set some restrictions, so on-boarding or switching developers between teams on occasion will not be a major headache.

With a separate runtime environment for each Microservice, we're also gaining the advantage of scalability. Each Microservice can be scaled to its own particular needs and a heavy load on one Microservice doesn't have the risk of slowing down another.

Ever thought of what it means for the Operations team having to manage a lot of different runtime environments with different technologies? They are going to be hard-pressed and probably overloaded. So, with Microservices architecture comes a "you build it, you run it" attitude, making the team fully responsible for the runtime application, while the Operations team will focus more on the underlying hardware.

Choreography, instead of orchestration. While in traditional SOA we are used to orchestrating services, especially in the case of BPM implementations, this introduces dependencies that we cannot have in Microservices architecture. So, we need to change our minds and think of choreography. Instead of a service telling others what to do, the service will now basically push a milestone (registered an employee, received a payment etc...) and it's up to other services to pick this up if they're interested.

Implications of "Bounded Context"

A Microservice will focus on a certain business function, for example on-boarding, supply management, shipping or billing. Since the Microservice is independent, it means that the team developing it needs all the skills to build and run it, from UI to Middleware, Database and Operations. So, you're going to want DevOps teams centered around business functionality, instead of technology. For many organizations, this is a major change in culture, while others who are further down the road of agile development can adopt this quite easily.

You need to understand your business domains really well. Microservices architecture is not just a technology fest, but it's ultimately about offering business functionality in an organized, independent manner. Therefore, you need to be able to separate your business functions really well and have a clear understanding about what each Microservice should be doing.

Summary

This blog is just a starting point to make clear what kind of thing we are talking about. I hope it helps to provide some understanding about what Microservices architecture means and how it compares to traditional SOA. In future blogs, I will expand further on what is required to be ready for Microservices, what the risks are, when it should (not) be applied, how it relates to BPM, what the relationship with DevOps culture is, how to handle data replication, how it impacts continuous delivery, how to handle events and what the Oracle Cloud has to offer in terms of Microservices. Most likely, even more will come up during the process.

Thursday, March 8, 2018

Creating REST APIs with Oracle Service Bus



When you think of Oracle Service Bus, you probably think about integration with SOAP and XML messages. However, since the introduction of REST adapters, it's also possible to offer RESTful APIs with JSON messages to your service consumers. Since RESTful APIs tend to be more lightweight than SOAP services, they have certain advantages in performance, especially for mobile usage, while also simplifying the interaction with your service. In this blog, I will show you how to create such an API based on an XSD for internal XML processing and what things to pay specific attention to. On GitHub I have provided a sample application created in version 12.2.1.2.0: https://github.com/lthijssen/MyMusic

Step 1: create your project and XSD

First of all, you will need to create a Service Bus Application in JDeveloper and a project within that. From there, create a Schemas folder and within that folder a MyMusic.xsd XML Schema with the following content:
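A minimal sketch of what such a schema can look like is shown below. The namespace and element names here are illustrative and only two of the five operations are spelled out; the exact schema is in the GitHub sample linked above:

  <?xml version="1.0" encoding="UTF-8"?>
  <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
              xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
              xmlns:tns="http://www.example.org/MyMusic"
              targetNamespace="http://www.example.org/MyMusic"
              elementFormDefault="qualified"
              nxsd:version="JSON" nxsd:encoding="UTF-8">

    <!-- A flat Album type; Albums is made unbounded directly, without a container element -->
    <xsd:complexType name="AlbumType">
      <xsd:sequence>
        <xsd:element name="id" type="xsd:string"/>
        <xsd:element name="title" type="xsd:string"/>
        <xsd:element name="artist" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>

    <!-- Request/response pair for getAlbums; the getById, create, update and
         delete elements follow the same pattern -->
    <xsd:element name="getAlbumsRequest">
      <xsd:complexType>
        <xsd:sequence/>
      </xsd:complexType>
    </xsd:element>
    <xsd:element name="getAlbumsResponse">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="Albums" type="tns:AlbumType" minOccurs="0" maxOccurs="unbounded"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>
  </xsd:schema>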


As you can see, it's quite a straightforward, simple XSD supporting the most common operations for integrating with a music database: get, getById, create, update & delete. Now, before you get overexcited and start hacking away on a WSDL, please don't! The WSDL will be automatically created later on in the process and making one manually will only hurt you. Needless to say, I found this out the hard way!
The interesting part of the XSD is that it contains some attributes in the schema declaration that you may be unfamiliar with: the nxsd parts are necessary to convert XML to JSON and vice versa later on, so they are absolutely mandatory. Also found this one out the hard way...

While normally I would create a container element Albums with an unbounded element Album in it, this does not look pretty in JSON, so I have chosen to make Albums unbounded instead. If you want to know more about how XML is converted to JSON and vice versa, check the following blog by Oracle's A-team:
http://www.ateam-oracle.com/rest-adapter-and-json-translator-in-soaosb-12-1-3-2/

Another thing that I found out the hard way: don't put any "anyType" elements in your XSD. Later on, your generated WADL will fail because of this.

Step 2: the REST adapter

Once you have completed Step 1, you can go to your composite.xml, right-click on the left swimlane and choose REST. Call the REST Binding "MyMusicAPI" and tick the checkbox in front of "Service will invoke components using WSDL interfaces". Click "Next" and you will get to the Resources screen. This is where you bind your REST methods to your (generated) WSDL operations, so this one is highly important. You need two resource paths: "/" and "/{id}". These paths come back in the endpoints that your consumers need to invoke, so keep them as simple as possible and don't put any verbs in there, as they will be supplied by your methods already. The "/" path allows methods on the root, while "/{id}" demands an id to be sent in the URL and offers methods on that. A simple example: GET on "/" will get you all albums, while GET on "/{id}" gets you a specific album, based on the id supplied.

Now, under Operation Bindings, you start binding the REST methods on your paths to WSDL operations. Click the + button to start! In the next screen, you can set up the specific details. I will call the first operation "getAlbums", which uses the "/" resource and the "GET" HTTP verb. Under the "Request" tab, in the schema URL, select the "getAlbumsRequest" element from your XSD. Any URI parameters that have been generated should be deleted, since the "/" resource doesn't have any, so they can't be bound to XML payload. In the Response tab, select "getAlbumsResponse" and leave everything else at the default setting. While it's a good practice to implement error handling, we will leave that for the next blog. Now click "OK" and you have created your first Operation Binding.

Now create "getAlbumById", which binds a GET method on resource "/{id}". Select the appropriate elements from the XSD under Request and Response and make sure to remove the second "id" parameter under URI parameters in the Request part, one is enough. The mapping to the internal XML payload is automatically created!

The next operation is "createAlbum", which is a POST method on resource "/". Remove any URI parameters in the Request and select JSON for the Response before you can select the appropriate XSD element. You can create a sample payload in the Request if you like, so you can see how your XSD content translates to JSON.

Next is "updateAlbum", which is a PUT on "/{id}". The implementation is similar to the POST we just did, but we don't remove any URI parameters here unless they're double. "deleteAlbum" can be implemented the same way.

Now you can press "Finish" and Oracle Service Bus will generate a ProxyService, a WSDL and a WADL. Look into these files and see what they contain! In your composite, you now see an errored Proxy Service, since it's not tied to anything. That's okay for now, no cause for concern. When you right-click on your Proxy Service, you can choose "Edit" and "Edit REST". The first one is the edit function that we know for Proxy Services, including Endpoint URI etc... but "Edit REST" allows you to edit the bindings and the paths exposed by the API.

Step 3: implement the Pipeline

Right-click in the middle swimlane of your composite and select "Pipeline". Call this one "MyMusicPipeline" and on the next screen select "WSDL" as service type. Here you will choose the WSDL that OSB has generated. Untick "Expose as a Proxy Service", since we already have one and we want to make a RESTful API. Now you can drag a wire between the Proxy Service and the Pipeline in the composite or edit the Proxy Service and select the Pipeline as target. In the Pipeline, you need to create an operational branch with all the operations in the WSDL. Give each operation a Pipeline Pair.

Now you can create a mock response in the Response pipelines. You can do this with a Replace activity, which gives you several XQuery and XSLT options to do so. My preference is to use XSLT resources. Why a mock response? Because we want to follow an API first pattern, which means that we deliver our API to consumers as soon as possible, so they can start working on their side and deliver feedback, while we work on the real implementation in the back end.

To create the XSLTs, I've created a Transformations folder and will make an XSL Map for each of the operations. Use the request and response elements from the XSD as sources and targets and fill in the mock response as you desire. Check the sample code in Github to see my implementation. When implementing the Replace activities, choose "body" as Location and "replace node contents" as Replace Option. For Value, you select "XSLT Resources" and select your XSL Map in the next screen, while setting Input Document Expression to $body/* and you're done.
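As an illustration, a mock for getAlbums can be as simple as the following XSLT; the mm namespace and element names are assumptions matching the sketch schema above, and the actual maps are in the GitHub sample:

  <?xml version="1.0" encoding="UTF-8"?>
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                  xmlns:mm="http://www.example.org/MyMusic">
    <!-- Match the incoming request element and return a canned response -->
    <xsl:template match="/mm:getAlbumsRequest">
      <mm:getAlbumsResponse>
        <mm:Albums>
          <mm:id>1</mm:id>
          <mm:title>Mock Album</mm:title>
          <mm:artist>Mock Artist</mm:artist>
        </mm:Albums>
      </mm:getAlbumsResponse>
    </xsl:template>
  </xsl:stylesheet>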

Step 4: wrap it up and deploy

Go back to the Proxy Service and choose 'Edit'. Under "Transport", change the endpoint to /rest/albums to make it more straightforward for the consumers. Now we're ready to deploy and test! You can deploy directly from JDeveloper, with Maven or through any continuous delivery tool, it doesn't matter to me. For this, I will just use JDeveloper and deploy to my localhost environment by right-clicking on my project, choosing deploy and following the wizard. Of course, you do need an OSB domain for this, so make sure you have one; it falls outside the scope of this blog to teach you how to create one. Now you can check http://localhost:7001/servicebus to verify that "MyMusicProject" is there and start testing the Proxy Service!

Step 5: test your REST API

You are ready to test your proxy service. You can do this directly from the OSB console or from any client that can handle REST, like SoapUI/ReadyAPI, curl or Postman. For GET methods, even your browser will work! In this case, I choose the OSB console, which can be accessed at http://localhost:7001/servicebus. Just open the project, select the MyMusicAPI Proxy Service and hit the green button saying "Launch Test Console".

Test 1, "GET" method on resource "/" should give us a list of albums and it does.
Test 2, "POST" method on resource "/" should create an album and it does.
Test 3, "GET" method on resource "/{id}" expects an id to be entered and then returns the album.
Test 4, "DELETE" method on resource "/{id}" expects an id to be entered and removes an album.
Test 5, "PUT" method on resource "/{id}" expects an id to be entered and gives us a nasty error message!

Step 6: fix the bug!

What happened here? Since the id element is in the XSD for internal usage, it's also exposed in the API. However, we've also mapped the id from the resource path to this element, so OSB doesn't know which one we want to use and takes the one from the JSON payload (even if that id is empty or removed). Following RESTful standards, we will want to use the one in the resource path, but how do we do that? Let's go back to JDeveloper and "Edit REST" on our Proxy Service. Select the "updateAlbum" binding and edit it. Edit the "id" parameter and change its mapping to "$property.id", so it's stored as a property, instead of fighting with the id in the JSON payload.

Now go to your Pipeline and do the following:
Click on the arrow button in the top left corner of your Pipeline to see "Shared Variables" and "External Services". Right-click and create a Shared Variable called albumId. Now add an Assign activity above the Replace activity. Choose "XQuery Expression" for Value and access your property.id with the following expression:
$inbound/ctx:transport/ctx:request/tp:user-metadata[@name='id']/@value
For Variable, select the "albumId" variable that you've just created.

Now create a new Mock XSL Map for Update... you can also edit the old one, but for demonstration purposes, I've created a second one (UpdateAlbumMock2) to show the differences. Create it the same way as the other one, just add an additional source (green + button) and call it "albumId". Leave all the default settings. Now you will do your mapping to id from this parameter, not from the id in the payload. Now update the XSLT Resource in the Replace activity and bind the parameter to the albumId variable. Save and redeploy. Test your PUT method again and you'll see that it works!
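For reference, such a parameterized mock can look along these lines; the namespace and element names are again illustrative, with the real files in the GitHub sample:

  <?xml version="1.0" encoding="UTF-8"?>
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                  xmlns:mm="http://www.example.org/MyMusic">
    <!-- Filled from the albumId shared variable bound in the Replace activity -->
    <xsl:param name="albumId"/>
    <xsl:template match="/mm:updateAlbumRequest">
      <mm:updateAlbumResponse>
        <!-- Take the id from the resource path, not from the JSON payload -->
        <mm:id><xsl:value-of select="$albumId"/></mm:id>
      </mm:updateAlbumResponse>
    </xsl:template>
  </xsl:stylesheet>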

Conclusion

Through this blog, you should now understand the basics of creating a RESTful API on Oracle Service Bus with JSON input and output. In my next blog, I will show you how to deal with search parameters and error handling.


Thursday, December 7, 2017

UKOUG Tech - PaaS & Development review

Introduction

This year, I went to UKOUG Tech for the first time, as I got my paper about Oracle Process Cloud Service accepted. Looking at the agenda in advance, the content of the conference looked very interesting and I can already say that I wasn't disappointed. Not being particularly interested in the vast amount of database sessions, I decided to mainly focus on the Middleware and Development tracks, to see the latest developments on Oracle's PaaS offerings and the coolest new technology trends.

Even though many visitors to the conference are still working on on-premise projects, I haven't been able to find even one session about on-premise middleware. This is only logical, since SOA Suite, BPM Suite and Oracle Service Bus are lacking spectacular new features: it's all happening in the cloud. So, what exactly is happening in the cloud?


PaaS (Platform as a Service)

The main thing is that Oracle's iPaaS (Integration Platform as a Service) portfolio keeps growing stronger. Every three months, these products get updated, rapidly maturing and expanding in, occasionally overlapping, functionality. Oracle Mobile Cloud Service impressed me as a solid back end for mobile, offering APIs, offline synchronization, authorization and tons of other features that are really useful for the challenges that come with mobile (or multi-channel) integration. Oracle API Platform is growing stronger as well and makes us re-think our way of agile development. API first is the way to go, so we get feedback from consumers early, while we are still working on the actual implementation in the back end. Another advantage here is that the back end does not impact the API design much, so we can keep things clean and smart.

Moving further down the road, we see that Integration Cloud Service is turning more and more into a full-blown SOA platform and I was happy to present the Decision Models of Process Cloud Service myself. Once Dynamic Processes (Case Management) capabilities are released, I think we can say goodbye to BPM Suite, at least for new projects. Development in Process Cloud Service has become a smooth experience and the UI has improved dramatically since the product was launched in 2014.


Open Source & Development

But PaaS is not everything. We have seen an increasing interest in open source technology recently and even Oracle is embracing those products these days, placing them at the very heart of their cloud offerings. So, I had the opportunity to learn more about Docker, which is a key element in many of Oracle's container-oriented cloud offerings, Kubernetes, for which Oracle will soon provide a managed platform, and Wercker, which can be used for continuous integration/continuous delivery of containerized microservices.

However, the star of the show was Apache Kafka. Brought to us with much grandeur by Robin Moffatt and Guido Schmutz, among others, Apache Kafka looks extremely promising not just for big data and streaming content, but basically for any event-driven style of architecture. Kafka can be used as an open source product, but you can also choose to use the Confluent Platform or Oracle's Event Hub Cloud Service. I believe that Kafka will be the cornerstone of modern integration architecture, powerfully delivering the promise that traditional SOA couldn't live up to. It's also perfect as the event hub between your microservices, so they can communicate with each other without dependencies.

All in all, I can say that it was a fantastic conference, with not just great content, but also great social activities. It was a great opportunity to catch up with my friends, meet new people, exchange ideas and attend my first Oracle ACE dinner. I hope to be back next year!

Wednesday, November 29, 2017

Running SoapUI Test Cases with Maven



So, you have developed your software and you've done the right thing by creating your tests in SoapUI and they're all running smoothly. Now it's time to take the next step: make sure that your tests can be run automatically, preferably on different environments, for example every night or after a deployment. This is a major improvement in your continuous integration and delivery efforts, but how can it be achieved? In this blog, I will show how it can be done with Maven, which can then be used in, for example, Bamboo, Jenkins or any other continuous integration tooling.

With Maven, you basically just need a POM and a command to kick things off. In this example, we are using SoapUI 5.3.0, which is the latest open source version, Maven 3.5.2 and (since we need Java) JDK 1.8.0_131, but any other recent version will do as well.

Now, before we begin, if you use any JDK 1.8.x version, you need to copy a jar file named "jfxrt.jar" from ..\jdk1.8.0_131\jre\lib\ext into the ..\jdk1.8.0_131\jre\lib folder to make things work. If you resist, you will get a nasty error message and your test will not run with Maven. This obviously also applies to your continuous integration server.

Once you're set up, you will need to create a pom.xml file like the one below. You can place it in the same folder as your actual SoapUI test, but if you prefer to put it elsewhere, that's fine too. Just make sure to adjust the path to the SoapUI test then and be aware of differences between Windows and Linux environments (forward and backward slash). This is why I prefer to put the POM in the same folder as the SoapUI project.
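A minimal POM along the following lines will do; the groupId, artifactId and the SoapUI project, TestSuite and TestCase names are placeholders for your own, and the pluginRepository points to SmartBear's repository, which hosts the plugin:

  <?xml version="1.0" encoding="UTF-8"?>
  <project xmlns="http://maven.apache.org/POM/4.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>soapui-tests</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <!-- The SoapUI Maven plugin lives in SmartBear's own repository -->
    <pluginRepositories>
      <pluginRepository>
        <id>smartbear</id>
        <url>http://www.soapui.org/repository/maven2/</url>
      </pluginRepository>
    </pluginRepositories>

    <build>
      <plugins>
        <plugin>
          <groupId>com.smartbear.soapui</groupId>
          <artifactId>soapui-maven-plugin</artifactId>
          <version>5.3.0</version>
          <configuration>
            <!-- Location plus name of your SoapUI project, relative to this POM -->
            <projectFile>MyProject-soapui-project.xml</projectFile>
            <testSuite>MyTestSuite</testSuite>
            <testCase>MyTestCase</testCase>
            <!-- Overrides the Custom Property "Env" with the -Denv value from the command line -->
            <projectProperties>
              <value>Env=${env}</value>
            </projectProperties>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </project>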



Now there are several important elements here that you need to change for your own testing:
1. projectFile: the location plus name of your SoapUI project. Don't forget the .xml extension.
2. testSuite: the TestSuite that you want to be executed. If you leave it out, all TestSuites will be run.
3. testCase: the TestCase that you want to be executed. If you leave it out, all TestCases will be run within the specified TestSuite.
4. projectProperties: here you can manipulate the Custom Properties in your SoapUI project. Very useful for environment settings, for example.

Once you have the POM in place, you can navigate to its folder with the command line, PowerShell or whatever tool you use and execute the following command:

mvn com.smartbear.soapui:soapui-maven-plugin:5.3.0:test -Denv=localhost

So, in the sample above, I have chosen to test my local environment by entering the value "localhost" into my Custom Property called "Env".

You might see some harmless error messages now, but they will not stop the test from running adequately. If you want to get rid of these, navigate to C:\Users\[Your User]\.soapuios\plugins and remove all files from there.

Once you run the test project like this, it will go through the specified TestSuite and TestCase, performing all the TestSteps in those and reporting on all the assertions. In the end, you'll get a BUILD SUCCESS or BUILD FAILURE message, depending on whether the result of the test matches the expectations set in your assertions.

Now you are ready to use the same command from your continuous integration tooling, decide when it should be run and which environment should be triggered.

Tuesday, October 31, 2017

REST API for Oracle Adaptive Case Management


For all of you who have been struggling with how to interact with your cases, there is good news. Since 12c, Oracle has created a REST API for Adaptive Case Management (ACM):
https://docs.oracle.com/middleware/1221/bpm/bpm-rest-api-ref/api-All%20Endpoints.html

Since the API is pretty much self-explanatory and fairly easy to use, I will not go into a lot of detail about it (at least not right now, maybe later). However, I think that those of you who are struggling with the Java API or something custom made will definitely find something much easier to use here, for both integration and testing. Since most blogs about the subject have been written before this REST API became available, I thought it would be good to draw people's attention to this.

Wednesday, May 10, 2017

Oracle Process Cloud Service - Decision Model Notation part 2


In my previous blog, I showed how to get started with Decision Model Notation (DMN) in Oracle Process Cloud Service and how to create a simple Decision Table. Picking up from there, we will now look into creating If-Then-Else rules, which should also be familiar to people who know Oracle's old Business Rules. We will also create a service and call it from a process.

Creating an If Then Else Decision

As Input, I have created a TotalAmount object, which is the total amount of a Sales Order. Based on this TotalAmount, we are going to calculate a Discount Price, for which I have created a DiscountPrice type to make the service interface a bit prettier than just 'output'. To create an If-Then-Else rule, just click the + button next to Decisions, enter a name and set the output type to string, number or any other type, in this case DiscountPrice.


Now, Oracle will have created a rule for you, in which you only need to fill in the "if", "then" and "else". Since you've already decided on your output object, we will not use that one in the expression, which is different from the old Oracle Business Rules. So just enter the value that you want for this object and you'll be fine. You can also create nested expressions, as shown below:
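As an illustrative example (the thresholds and percentages here are made up), a nested rule returning the DiscountPrice could read:

  if TotalAmount < 100
  then TotalAmount
  else if TotalAmount < 1000
  then TotalAmount * 0.95
  else TotalAmount * 0.90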

One thing that I don't like is that all the nesting needs to be done in the "else" part. I hope Oracle will acknowledge this and create a new "if" section (for example with indentation), where I can happily nest away in a more user-friendly manner. However, it works (use the Test feature to verify) and if you don't make things too complicated, it's mainly a minor display issue.

Calling an If Then Else Decision

Calling any Decision from a process is super easy. Just make sure to have a service created for your Decision and deploy it, so the process can find it. In Oracle Process Cloud's Process Composer, you can then select "Decision" as a system task, select the Decision Model that you want to use and the service within that Decision Model that you want to call:


From here, you can make your data associations and you're done. Obviously, a process is generally not as simple as this one, but using Decisions within processes is.

So that's the second part of this blog series. The third part will be an overview of other DMN functionality: Expression, Relation, List, Function and Context. I still think that we will mostly be using If/Then/Else and Decision Tables though, so for most use cases, the information in this blog and the previous one should provide you with a nice kickstart.