Deploy an app to Heroku using WildFly and the PostgreSQL driver

Since a Postgres database is the simplest persistent datastore on Heroku, I wanted to set up a JEE application example using this SQL database. If you have already read some of my previous blog posts, you know that I use the buildpack API to deploy applications to a dyno.

Since I already had an example working with WildFly and MySQL, it was an easy step to put together a buildpack that installs the PostgreSQL driver into the WildFly container. It is now available at https://github.com/mwiede/heroku-buildpack-wildfly-postgresql.
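
Conceptually, the buildpack does what you would otherwise do by hand with jboss-cli: add the driver jar as a WildFly module and register it as a JDBC driver. A rough sketch of that idea (not the buildpack's literal script; names and paths are illustrative):

# Illustrative only: roughly what the buildpack automates.
# Register the driver jar as a WildFly module ...
$JBOSS_HOME/bin/jboss-cli.sh --command="module add --name=org.postgresql \
  --resources=postgresql.jar --dependencies=javax.api,javax.transaction.api"
# ... and register it with the datasources subsystem of the running server.
$JBOSS_HOME/bin/jboss-cli.sh --connect --command="/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql,driver-class-name=org.postgresql.Driver)"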

I also provide an example application, which uses Maven properties to pull the database URL and credentials from system properties and uses them in persistence.xml.
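
To sketch the idea (this is illustrative, not necessarily the exact file from the repo), the filtered values can end up in persistence.xml like this:

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="primary" transaction-type="RESOURCE_LOCAL">
    <properties>
      <!-- placeholders are replaced by Maven resource filtering at build time -->
      <property name="javax.persistence.jdbc.url" value="${JDBC_DATABASE_URL}"/>
      <property name="javax.persistence.jdbc.user" value="${JDBC_DATABASE_USERNAME}"/>
      <property name="javax.persistence.jdbc.password" value="${JDBC_DATABASE_PASSWORD}"/>
    </properties>
  </persistence-unit>
</persistence>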

Here are the build and deploy steps to make it work (compare with https://github.com/mwiede/greeter#usage):

$ git clone https://github.com/mwiede/greeter.git
$ cd greeter
$ heroku create
$ heroku addons:create heroku-postgresql:hobby-dev
$ heroku buildpacks:clear
$ heroku buildpacks:add heroku/java
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-wildfly
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-wildfly-postgresql
$ git push heroku master
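
Once the push has finished, you can verify the result with the usual Heroku commands:

$ heroku open          # opens the deployed app in the browser
$ heroku logs --tail   # follows the dyno logs, e.g. the WildFly startup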

Encode special characters using maven-antrun-plugin

Recently I was deploying a Java web application on Heroku, but one of the fields, which is picked up from system properties during the Maven build to set the credentials of the datasource, contained an unescaped character, which caused the deployment to fail:

21:37:17,355 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-7) MSC000001: Failed to start service jboss.deployment.unit."ROOT.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."ROOT.war".PARSE: WFLYSRV0153: Failed to process phase PARSE of deployment "ROOT.war"
        at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:154)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: Unexpected character '<' (code 60) (expected a name start character)
 at [row,col {unknown-source}]: [25,26]
        at org.jboss.as.connector.deployers.ds.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:105)
        at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:147)
        ... 5 more
Caused by: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '<' (code 60) (expected a name start character)
 at [row,col {unknown-source}]: [25,26]
        at com.ctc.wstx.sr.StreamScanner.throwUnexpectedChar(StreamScanner.java:647)
        at com.ctc.wstx.sr.StreamScanner.parseFullName(StreamScanner.java:1933)
        at com.ctc.wstx.sr.StreamScanner.parseEntityName(StreamScanner.java:2057)
        at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1525)
        at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2748)
        at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1073)
        at com.ctc.wstx.sr.BasicStreamReader.getElementText(BasicStreamReader.java:670)
        at org.jboss.jca.common.metadata.common.AbstractParser.rawElementText(AbstractParser.java:166)
        at org.jboss.jca.common.metadata.common.AbstractParser.elementAsString(AbstractParser.java:153)
        at org.jboss.jca.common.metadata.ds.DsParser.parseDataSource(DsParser.java:1149)
        at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:177)
        at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:120)
        at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:79)
        at org.jboss.as.connector.deployers.ds.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:90)
        ... 6 more

To work around this problem, we need to encode those characters as proper XML entities. Here is how I did it:

First, introduce the system properties as Maven properties inside the properties section:

<properties>
  <db.url>${JDBC_DATABASE_URL}</db.url>
  <db.user>${JDBC_DATABASE_USERNAME}</db.user>
  <db.password>${JDBC_DATABASE_PASSWORD}</db.password>
</properties>

Second, configure the maven-antrun-plugin to run during the validate phase, which comes even before resources are copied or filtered by Maven. It is important that the flag exportAntProperties is set to true. I was not able to override an existing property, so I created new ones.

<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>validate</phase>
      <configuration>
        <exportAntProperties>true</exportAntProperties>
        <target>
          <script language="javascript">
            <![CDATA[
            if (!String.prototype.encodeHTML) {
              String.prototype.encodeHTML = function () {
                return this.replace(/&/g, '&amp;')
                  .replace(/</g, '&lt;')
                  .replace(/>/g, '&gt;')
                  .replace(/"/g, '&quot;')
                  .replace(/'/g, '&apos;');
              };
            }

            project.setProperty("enc.db.url", project.getProperty("db.url").encodeHTML());
            project.setProperty("enc.db.user", project.getProperty("db.user").encodeHTML());
            project.setProperty("enc.db.password", project.getProperty("db.password").encodeHTML());
            ]]>
          </script>
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Third, use the properties in any resource file that Maven filters:

<datasources xmlns="http://www.jboss.org/ironjacamar/schema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.jboss.org/ironjacamar/schema http://docs.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
  <!-- The datasource is bound into JNDI at this location. We reference
       this in META-INF/persistence.xml -->
  <datasource jndi-name="java:jboss/datasources/GreeterQuickstartDS"
              pool-name="greeter-quickstart" enabled="true" use-java-context="true">
    <connection-url>${enc.db.url}</connection-url>
    <driver>postgresql</driver>
    <security>
      <user-name>${enc.db.user}</user-name>
      <password>${enc.db.password}</password>
    </security>
  </datasource>
</datasources>
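
One thing to keep in mind: Maven only replaces the ${...} placeholders if filtering is enabled for that file. If the *-ds.xml descriptor lives under the webapp folder, the maven-war-plugin can filter it, for example (directory and path are illustrative; adjust to your layout):

<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <webResources>
      <resource>
        <!-- the directory containing the *-ds.xml descriptor -->
        <directory>src/main/webapp/WEB-INF</directory>
        <targetPath>WEB-INF</targetPath>
        <filtering>true</filtering>
      </resource>
    </webResources>
  </configuration>
</plugin>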

See the complete code example in my GitHub repo: https://github.com/mwiede/greeter.

Free tier hunt – how to combine Heroku and OpenShift

Background

Recently, Red Hat shut down its OpenShift platform v2. Now only OpenShift 3 is available, which is based on Kubernetes and Docker.

So why do I care?

I had some JEE projects running on OpenShift 2 and was now forced to migrate them to the new infrastructure. Basically my applications consist of a Tomcat or WildFly server running next to a MySQL database. This setup was pretty easy, and I could run it at no cost as long as I could tolerate that the cartridges would be shut down if they idled longer than 24 hours (meaning no HTTP request reached the server during that time). But ok…

So now I had to migrate my projects following the migration guide provided by Red Hat. In parallel I was interested in whether I had other options, or whether there are other PaaS providers offering a free tier for small personal projects. And yes, there are plenty of them, but as I found, all have their limitations.

AWS is free for one year, then you pay. Google Cloud… can’t remember what was holding me back… Oracle Cloud gives you a certain amount of credit, but the evaluation phase is three months max. Microsoft… really?

Going with Heroku, but…

Then I found Heroku offering a “free dyno”, which also idles after 30 minutes of inactivity, but I wanted to give it a try. Later I found that if you want to use a database with Heroku, the limits on the “free” database are something like 5 MB or 100 rows, so even though I only have a few play-around datasets, that was too small.

Then I had the idea of combining two PaaS providers: one giving me the application tier for free, the other giving me the database for free.

I ended up looking at how to connect to a database running on OpenShift from outside of the Docker environment. What I found is the same way an admin would connect to it: via a tunnel and/or port forwarding.
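
Done manually with the OpenShift client (oc) installed, this boils down to something like the following (token and pod name are placeholders):

$ oc login https://api.starter-ca-central-1.openshift.com --token=<token>
$ oc port-forward mysql-1-weuoi 3306:3306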

Oh my god, the latency between application and database will be huge!

Yes, that may happen, but as I am not proposing this setup for an enterprise application running in production, I am fine with it.

So here is my solution for connecting from an application running in a Heroku dyno to a MySQL database running on OpenShift.

Heroku provides a mechanism which allows you to pack anything your application needs to run into your dyno. If you need Java, then the buildpack “heroku/java” is for you. If you need Node, there is a buildpack for Node, and so on. The nice thing about buildpacks is that you can also create them on your own, using the buildpack API.

A buildpack can contain a shell script (profile.d) which is executed during startup of the container: the perfect way to create a tunnel and provide access to my remote database. You can find the buildpack at https://github.com/mwiede/heroku-buildpack-oc

So here is how you can create a Heroku application with access to a remote database (the full command sequence is shown after the list):

  1. install the Heroku CLI
  2. create an app
  3. add my buildpack
    heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-oc
  4. configure environment variables
    $ heroku config:set OC_LOGIN_ENDPOINT=https://api.starter-ca-central-1.openshift.com 
    $ heroku config:set OC_LOGIN_TOKEN=askdjalskdj 
    $ heroku config:set OC_POD_NAME=mysql-1-weuoi 
    $ heroku config:set OC_LOCAL_PORT=3306 
    $ heroku config:set OC_REMOTE_PORT=3306
  5. deploy the app
  6. check the logs to see whether the connection works properly
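
Putting the steps together, the full sequence looks roughly like this (token is a placeholder; heroku/java stands in for whatever buildpack runs your app):

$ heroku create
$ heroku buildpacks:add heroku/java   # or whichever buildpack runs your app
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-oc
$ heroku config:set OC_LOGIN_ENDPOINT=https://api.starter-ca-central-1.openshift.com
$ heroku config:set OC_LOGIN_TOKEN=<token>
$ heroku config:set OC_POD_NAME=mysql
$ heroku config:set OC_LOCAL_PORT=3306
$ heroku config:set OC_REMOTE_PORT=3306
$ git push heroku master
$ heroku logs --tail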

Advanced usage

The profile.d script contains a loop, so whenever the tunnel connection shuts down, it tries to open it again.

From the perspective of OpenShift, the database runs in a so-called pod, and unfortunately its name can change.

I tried to make this as robust as possible, so OC_POD_NAME only needs to contain a prefix of the pod name; for instance, “mysql” is enough to detect the right one.
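
To make the mechanics concrete, here is a stripped-down sketch of what such a profile.d script does. This is illustrative, not the buildpack’s literal code (see the repo for the real script):

# Log in once, then keep the tunnel alive in the background.
oc login "$OC_LOGIN_ENDPOINT" --token="$OC_LOGIN_TOKEN"
while true; do
  # Pod names change on redeploy, so resolve the current pod by its prefix.
  POD=$(oc get pods --no-headers | awk '{print $1}' | grep "^$OC_POD_NAME" | head -n 1)
  # Blocks while the tunnel is up; when it drops, the loop reopens it.
  oc port-forward "$POD" "$OC_LOCAL_PORT:$OC_REMOTE_PORT"
  sleep 5
done &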