Deploy an app to Heroku using Wildfly and Postgresql driver

Since a Postgres database is the simplest persistent datastore on Heroku, I tried to set up a Java EE example application using this SQL database. If you have already read some of my previous blog posts, you know that I am using the buildpack API to deploy the application to a dyno.

Since I already had an example working with Wildfly and MySQL, it was an easy step to build a buildpack that installs the PostgreSQL driver into the Wildfly container. It is now available at https://github.com/mwiede/heroku-buildpack-wildfly-postgresql.

I also provide an example application, which uses Maven properties to pick up the database URL and credentials from system properties and use them in persistence.xml.
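
For illustration, here is a minimal sketch of how such a filtered persistence.xml could look. It is not the verbatim file from the greeter repo; the ${db.*} placeholders stand for Maven properties resolved during the build:

<!-- minimal sketch, not the verbatim file from the greeter repo:
     the ${db.*} placeholders are Maven properties filled in during resource filtering -->
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="primary">
    <properties>
      <property name="javax.persistence.jdbc.url" value="${db.url}"/>
      <property name="javax.persistence.jdbc.user" value="${db.user}"/>
      <property name="javax.persistence.jdbc.password" value="${db.password}"/>
    </properties>
  </persistence-unit>
</persistence>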

Here are the build and deploy steps to make it work (compare with https://github.com/mwiede/greeter#usage):

$ git clone https://github.com/mwiede/greeter.git
$ cd greeter
$ heroku create
$ heroku addons:create heroku-postgresql:hobby-dev
$ heroku buildpacks:clear
$ heroku buildpacks:add heroku/java
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-wildfly
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-wildfly-postgresql
$ git push heroku master

Encode special characters using the maven-antrun-plugin

Recently I was deploying a Java web application on Heroku, but one of the fields picked up from system properties during the Maven build to set the datasource credentials contained an unescaped character, which caused the deployment to fail:

21:37:17,355 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-7) MSC000001: Failed to start service jboss.deployment.unit."ROOT.war".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."ROOT.war".PARSE: WFLYSRV0153: Failed to process phase PARSE of deployment "ROOT.war"
        at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:154)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: Unexpected character '<' (code 60) (expected a name start character)
 at [row,col {unknown-source}]: [25,26]
        at org.jboss.as.connector.deployers.ds.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:105)
        at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:147)
        ... 5 more
Caused by: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '<' (code 60) (expected a name start character)
 at [row,col {unknown-source}]: [25,26]
        at com.ctc.wstx.sr.StreamScanner.throwUnexpectedChar(StreamScanner.java:647)
        at com.ctc.wstx.sr.StreamScanner.parseFullName(StreamScanner.java:1933)
        at com.ctc.wstx.sr.StreamScanner.parseEntityName(StreamScanner.java:2057)
        at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1525)
        at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2748)
        at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1073)
        at com.ctc.wstx.sr.BasicStreamReader.getElementText(BasicStreamReader.java:670)
        at org.jboss.jca.common.metadata.common.AbstractParser.rawElementText(AbstractParser.java:166)
        at org.jboss.jca.common.metadata.common.AbstractParser.elementAsString(AbstractParser.java:153)
        at org.jboss.jca.common.metadata.ds.DsParser.parseDataSource(DsParser.java:1149)
        at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:177)
        at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:120)
        at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:79)
        at org.jboss.as.connector.deployers.ds.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:90)
        ... 6 more

To work around this problem, we need to encode those characters as proper XML entities. Here is how I did it:

First, introduce the system properties as Maven properties inside the properties section:

<properties>
  <db.url>${JDBC_DATABASE_URL}</db.url>
  <db.user>${JDBC_DATABASE_USERNAME}</db.user>
  <db.password>${JDBC_DATABASE_PASSWORD}</db.password>
</properties>

Second, configure the maven-antrun-plugin to run during the validate phase, which comes before resources are copied or filtered by Maven. It is important that the flag exportAntProperties is set to true. I was not able to override an existing property, therefore I created new ones.

<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>validate</phase>
      <configuration>
        <exportAntProperties>true</exportAntProperties>
        <target>
          <script language="javascript">
            <![CDATA[
            if (!String.prototype.encodeHTML) {
              String.prototype.encodeHTML = function () {
                return this.replace(/&/g, '&amp;')
                  .replace(/</g, '&lt;')
                  .replace(/>/g, '&gt;')
                  .replace(/"/g, '&quot;')
                  .replace(/'/g, '&apos;');
              };
            }

            project.setProperty("enc.db.url", project.getProperty("db.url").encodeHTML());
            project.setProperty("enc.db.user", project.getProperty("db.user").encodeHTML());
            project.setProperty("enc.db.password", project.getProperty("db.password").encodeHTML());
            ]]>
          </script>
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Third, use the properties in any resource file that Maven filters:

<datasources xmlns="http://www.jboss.org/ironjacamar/schema"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.jboss.org/ironjacamar/schema http://docs.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
  <!-- The datasource is bound into JNDI at this location. We reference
       this in META-INF/persistence.xml -->
  <datasource jndi-name="java:jboss/datasources/GreeterQuickstartDS"
              pool-name="greeter-quickstart" enabled="true" use-java-context="true">
    <connection-url>${enc.db.url}</connection-url>
    <driver>postgresql</driver>
    <security>
      <user-name>${enc.db.user}</user-name>
      <password>${enc.db.password}</password>
    </security>
  </datasource>
</datasources>
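
For the placeholders to be replaced, the -ds.xml has to go through Maven's resource filtering. Here is a minimal sketch of enabling this with the maven-war-plugin, assuming the file lives under src/main/webapp/WEB-INF (the actual configuration is in the greeter repo):

<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <webResources>
      <resource>
        <directory>src/main/webapp/WEB-INF</directory>
        <targetPath>WEB-INF</targetPath>
        <filtering>true</filtering>
      </resource>
    </webResources>
  </configuration>
</plugin>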

See the complete code example in my GitHub repo: https://github.com/mwiede/greeter.

Connecting from jconsole or VisualVM to WildFly Swarm instances

Recently I was trying to monitor a WildFly Swarm application, and I wondered why the standard Java approach of connecting via JMX RMI did not work. I was using the parameters

-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.rmi.port=9010 -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

and I was not able to connect via RMI.

But some articles (http://www.mastertheboss.com/jboss-server/wildfly-8/monitoring-wildfly-using-visualvm and https://dzone.com/articles/remote-jmx-access-wildfly-or) gave hints about the same issue with JBoss or Wildfly, and it basically works the same way. It would also have been possible to use Jolokia, which reports via an HTTP endpoint, but it was driving me nuts that I couldn't get it to work with the JDK tools. Even the WildFly Swarm documentation does not really describe it.

Here are the steps to connect.

  • Check whether you have the fractions jmx and/or management in your pom. jmx is mandatory, and management is important to know about: if you do not have it, you can connect on the application's listening port (default 8080); otherwise you can only connect via the administration port (default 9990).
  • Start your application. I realized that the connection does not succeed even when you are connecting from your local machine to an application running locally. In any other scenario, for example if your application is running on your Docker host, you have to configure Swarm to allow remote connections anyway. When the service is not configured to allow remote connections, the following appears in the log:
[org.wildfly.swarm.jmx] (main) JMX not configured for remote access

To allow remote connections, you have to start your app with the parameter

-Dswarm.jmx.remote=true

or you have to provide this setting in the YAML configuration, as sketched below.
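
In YAML, this setting should look roughly like the following sketch (the keys mirror the swarm.jmx.remote system property):

# project-defaults.yml sketch; keys mirror the swarm.jmx.remote system property
swarm:
  jmx:
    remote: true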

  • If you do not have the management fraction, you will find the following line in the logs:
[org.wildfly.swarm.jmx] (main) JMX configured for remote connector: implicitly using standard interface
  • When using jconsole or jvisualvm you need a jboss-client.jar, which contains the implementation of the vendor's protocol. As mentioned on the sites above, you can easily start jconsole with a script provided in any of the WildFly application server downloads, which already includes the required library.
$JBOSS_HOME/bin/jconsole.sh

To start jvisualvm, you need to do it like this:

jvisualvm.exe -cp:a %JBOSS_HOME%\bin\client\jboss-client.jar

If you do not want to download the complete server package, you can also use the library itself. Maven coordinates:

<dependency>
 <groupId>org.wildfly</groupId>
 <artifactId>wildfly-client-all</artifactId>
 <version>12.0.0.Final</version>
</dependency>

or download link https://repo.maven.apache.org/maven2/org/wildfly/wildfly-client-all/12.0.0.Final/wildfly-client-all-12.0.0.Final.jar

  • If you do not have the management fraction, connect with the string
service:jmx:remote+http://localhost:8080

or, if you have the management fraction included, connect with

service:jmx:remote+http://localhost:9990
  • If you are using service:jmx:http-remoting-jmx://localhost:9990 to connect, you will find a warning on standard out:
WARN: The protocol 'http-remoting-jmx' is deprecated, instead you should use 'remote+http'.

So please keep in mind that the deprecated connection strings mentioned all over the internet will not work any more in the future.
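
If you want to verify the connection programmatically rather than via the GUI tools, a minimal sketch could look like this. It assumes jboss-client.jar (or wildfly-client-all) on the classpath and the management fraction included, so port 9990 applies; the class name is illustrative:

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxProbe {
    public static void main(String[] args) throws Exception {
        // the remote+http protocol is resolved via jboss-client.jar on the classpath
        JMXServiceURL url = new JMXServiceURL("service:jmx:remote+http://localhost:9990");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("MBean count: " + connection.getMBeanCount());
        }
    }
}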

Free tier hunt – how to combine Heroku and OpenShift

Background

Recently, Red Hat shut down its OpenShift platform v2. Now only OpenShift 3 is available, and it is based on Kubernetes and Docker.

So why do I care?

I had some Java EE projects running on OpenShift 2, and now I was forced to migrate them to the new infrastructure. Basically, my applications consist of a Tomcat or Wildfly server running next to a MySQL database. This setup was pretty easy, and I could run it at no cost as long as I could tolerate that the cartridges would be shut down if they idled longer than 24 hours (meaning no HTTP request reached the server during this time). But OK…

So now I had to migrate my projects following the migration guide provided by Red Hat. But in parallel I was interested in whether I had other options, or whether any other PaaS providers offer a free tier for small personal projects. And yes, there are plenty of them, but as I found out, all have their limitations.

AWS is free for one year, then you pay. Google Cloud… I can't remember what was holding me back. Oracle Cloud gives you a certain amount of credit, but the evaluation phase is three months max. Microsoft… really?

Going with Heroku, but…

Then I found Heroku offering a “free dyno”, which also idles after 30 minutes of inactivity, but I wanted to give it a try. Later I found that if you want to use a database with Heroku, the limitations on the “free” database are something like 5 MB or 100 rows, so even though I only have a few play-around datasets, that was too small.

Then I had the idea of combining two PaaS providers: one giving me the application tier for free, the other giving me the database for free.

I ended up looking at how to connect to a database running on OpenShift from outside of the Docker environment. What I found is the same way an admin would connect to it: via a tunnel and/or port forwarding.

Oh my god, the latency between application and database will be huge!

Yes, that is possible, but since I am not proposing this setup for an enterprise application running in production, I am fine with it.

So here is my solution for connecting from an application running in a Heroku dyno to a MySQL database running on OpenShift.

Heroku provides a mechanism which allows you to pack anything your application needs to run into your dyno. If you need Java, the buildpack “heroku/java” is for you. If you need Node, there is a buildpack for Node, and so on. The nice thing about buildpacks is that you can also create them on your own, using the buildpack API.

A buildpack can contain a shell script (profile.d) which is executed during startup of the container. This is the perfect way to create a tunnel and provide access to my remote database. You can find the buildpack at https://github.com/mwiede/heroku-buildpack-oc

So here is how you can create a heroku application having access to a remote database:

  1. install the Heroku CLI
  2. create an app
  3. add my buildpack
    heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-oc
  4. configure environment variables
    $ heroku config:set OC_LOGIN_ENDPOINT=https://api.starter-ca-central-1.openshift.com 
    $ heroku config:set OC_LOGIN_TOKEN=askdjalskdj 
    $ heroku config:set OC_POD_NAME=mysql-1-weuoi 
    $ heroku config:set OC_LOCAL_PORT=3306 
    $ heroku config:set OC_REMOTE_PORT=3306
  5. deploy the app
  6. check the logs to verify that the connection works properly (see the condensed command sequence below).
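
Condensed, the whole sequence looks roughly like this (the config values are placeholders; heroku logs --tail lets you watch whether the tunnel comes up):

$ heroku create
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-oc
$ heroku config:set OC_LOGIN_ENDPOINT=... OC_LOGIN_TOKEN=... OC_POD_NAME=mysql OC_LOCAL_PORT=3306 OC_REMOTE_PORT=3306
$ git push heroku master
$ heroku logs --tail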

Advanced usage

The profile.d script contains a loop, so whenever the tunnel connection shuts down, it tries to open it again.

From the perspective of OpenShift, the database runs in a so-called pod, and unfortunately its name can change.

I tried to make this as robust as possible, so OC_POD_NAME only needs to contain a prefix of the pod's name; for instance, “mysql” is enough to detect the right one. The sketch below illustrates the idea.
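
Conceptually, the profile.d script boils down to something like the following sketch. This is a simplified illustration, not the verbatim buildpack script; the variable names match the config vars from above:

# simplified sketch, not the verbatim buildpack script
oc login "$OC_LOGIN_ENDPOINT" --token="$OC_LOGIN_TOKEN"
while true; do
  # resolve the full pod name from the configured prefix, e.g. "mysql"
  POD=$(oc get pods | grep "^$OC_POD_NAME" | awk '{print $1}' | head -n 1)
  # open the tunnel; this blocks until the connection drops, then the loop reopens it
  oc port-forward "$POD" "$OC_LOCAL_PORT:$OC_REMOTE_PORT"
  sleep 5
done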

Feign Outbound metrics 

With Dropwizard microservices, you can easily add inbound metrics to your JAX-RS HTTP resource classes via annotations:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.codahale.metrics.annotation.ExceptionMetered;
import com.codahale.metrics.annotation.Metered;
import com.codahale.metrics.annotation.Timed;

@Path("/example")
@Produces(MediaType.TEXT_PLAIN)
public class ExampleResource {

    @GET
    @Timed
    @Metered
    @ExceptionMetered
    public String show() {
        return "yay";
    }
}

The metrics can easily be reported to a Graphite database and visualized via Kibana.

WYIIWYG – What you instrument is what you get!

On the other hand, a microservice often contains client libraries to access other services via HTTP. Feign is a client library which provides a wrapper and simplifies the API for communicating with the target services.

In contrast to the inbound metrics from the example above, it is also desirable to monitor the outbound metrics of each targeted operation.

Looking at the third-party libraries at http://metrics.dropwizard.io/3.2.3/manual/third-party.html, there is already something to collect metrics at the HTTP level. If you are using okhttp as the HTTP client implementation, you can use https://github.com/raskasa/metrics-okhttp and you will receive information about request durations and connection pools. The same holds for the Apache HttpClient instrumentation.

okhttp example

MetricRegistry metricRegistry = new MetricRegistry();
final ConsoleReporter reporter = ConsoleReporter.forRegistry(metricRegistry)
    .convertRatesTo(TimeUnit.SECONDS)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .build();
GitHub github = Feign.builder()
    .invocationHandlerFactory(
        // instrumenting feign
        new FeignOutboundMetricsDecorator(new InvocationHandlerFactory.Default(), metricRegistry))
    // instrumenting ok http
    .client(new OkHttpClient(InstrumentedOkHttpClients.create(metricRegistry)))
    .decoder(new GsonDecoder())
    .target(GitHub.class, "https://api.github.com");
// execute ...
reporter.report();

Metric output:

-- Gauges ----------------------------------------------------------------------
okhttp3.OkHttpClient.connection-pool-idle-count
    value = 1
okhttp3.OkHttpClient.connection-pool-total-count
    value = 1
-- Counters --------------------------------------------------------------------
okhttp3.OkHttpClient.network-requests-running
    count = 0
-- Meters ----------------------------------------------------------------------
okhttp3.OkHttpClient.network-requests-completed
    count = 1
    mean rate = 0,84 events/second
    1-minute rate = 0,00 events/second
    5-minute rate = 0,00 events/second
    15-minute rate = 0,00 events/second
okhttp3.OkHttpClient.network-requests-submitted
    count = 1
    mean rate = 0,83 events/second
    1-minute rate = 0,00 events/second
    5-minute rate = 0,00 events/second
    15-minute rate = 0,00 events/second
-- Timers ----------------------------------------------------------------------
okhttp3.OkHttpClient.network-requests-duration
    count = 1
    mean rate = 0,84 calls/second
    1-minute rate = 0,00 calls/second
    5-minute rate = 0,00 calls/second
    15-minute rate = 0,00 calls/second
    min = 215,41 milliseconds
    max = 215,41 milliseconds
    mean = 215,41 milliseconds
    stddev = 0,00 milliseconds
    median = 215,41 milliseconds
    75% <= 215,41 milliseconds
    95% <= 215,41 milliseconds
    98% <= 215,41 milliseconds
    99% <= 215,41 milliseconds
    99.9% <= 215,41 milliseconds

httpclient example

MetricRegistry metricRegistry = new MetricRegistry();
final ConsoleReporter reporter = ConsoleReporter.forRegistry(metricRegistry)
    .convertRatesTo(TimeUnit.SECONDS)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .build();
GitHub github = Feign.builder()
    .invocationHandlerFactory(
        // instrument feign
        new FeignOutboundMetricsDecorator(new InvocationHandlerFactory.Default(), metricRegistry))
    .client(
        // setting an instrumented httpclient
        new ApacheHttpClient(InstrumentedHttpClients
            .createDefault(metricRegistry, HttpClientMetricNameStrategies.HOST_AND_METHOD)))
    .decoder(new GsonDecoder())
    .target(GitHub.class, "https://api.github.com");
// execute ...
reporter.report();

Metric output:

-- Gauges ----------------------------------------------------------------------
org.apache.http.conn.HttpClientConnectionManager.available-connections
    value = 1
org.apache.http.conn.HttpClientConnectionManager.leased-connections
    value = 0
org.apache.http.conn.HttpClientConnectionManager.max-connections
    value = 20
org.apache.http.conn.HttpClientConnectionManager.pending-connections
    value = 0
-- Meters ----------------------------------------------------------------------
-- Timers ----------------------------------------------------------------------
org.apache.http.client.HttpClient.api.github.com.get-requests
    count = 1
    mean rate = 4,19 calls/second
    1-minute rate = 0,00 calls/second
    5-minute rate = 0,00 calls/second
    15-minute rate = 0,00 calls/second
    min = 174,59 milliseconds
    max = 174,59 milliseconds
    mean = 174,59 milliseconds
    stddev = 0,00 milliseconds
    median = 174,59 milliseconds
    75% <= 174,59 milliseconds
    95% <= 174,59 milliseconds
    98% <= 174,59 milliseconds
    99% <= 174,59 milliseconds
    99.9% <= 174,59 milliseconds

As you can see, the provided metrics only give information at the HTTP level and do not really show differences between individual service endpoints. The only differentiation is available in the httpclient metrics, which are broken down by host and HTTP method.

Closing the gap

What was missing, in my eyes, was a way to collect metrics at the level of the interface which is passed to the Feign builder. In my example below I am calling the GitHub API on two different resource endpoints, contributors and repositorySearch. With the instrumentation at the HTTP level, one is not able to see and monitor them individually.

Therefore I created a library which makes it possible to collect metrics at method or interface level using annotations, just as you do in Jersey resource classes.

Using this instrumentation, you can retrieve metrics based on the interface and the methods the client is calling. For example, when you start reporting via JMX, you can see the metrics in jconsole.
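
Starting a JMX reporter for the registry is only a few lines; here is a minimal sketch using the JmxReporter shipped with metrics-core:

// register all metrics of the registry as MBeans, visible e.g. in jconsole
final JmxReporter jmxReporter = JmxReporter.forRegistry(metricRegistry).build();
jmxReporter.start();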

Usage of the library

To instrument the feign interfaces you basically have to do three things:

  1. add the maven dependency to the pom.xml of your project.
    <dependency>
      <groupId>com.github.mwiede</groupId>
      <artifactId>metrics-feign</artifactId>
      <version>1.0</version>
    </dependency>
  2. add FeignOutboundMetricsDecorator as invocationHandlerFactory in Feign.builder
  3. add the metric annotations @Timed, @Metered and @ExceptionMetered to the interface you are using with Feign:
@Timed
@Metered
@ExceptionMetered
interface GitHub {
    @RequestLine("GET /repos/{owner}/{repo}/contributors")
    List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
}

static class Contributor {
    String login;
    int contributions;
}

public static void main(String... args) {
    MetricRegistry metricRegistry = new MetricRegistry();
    final ConsoleReporter reporter = ConsoleReporter.forRegistry(metricRegistry)
        .convertRatesTo(TimeUnit.SECONDS)
        .convertDurationsTo(TimeUnit.MILLISECONDS)
        .build();
    GitHub github = Feign.builder()
        .invocationHandlerFactory(
            new FeignOutboundMetricsDecorator(new InvocationHandlerFactory.Default(), metricRegistry))
        .decoder(new GsonDecoder())
        .target(GitHub.class, "https://api.github.com");
    // Fetch and print a list of the contributors to this library.
    List<Contributor> contributors = github.contributors("mwiede", "metrics-feign");
    for (Contributor contributor : contributors) {
        System.out.println(contributor.login + " (" + contributor.contributions + ")");
    }
    reporter.report();
}

The library is available from Maven Central and the source is hosted on GitHub, so please check out https://github.com/mwiede/metrics-feign

JAX-RS + Codahale/Dropwizard metrics + CDI + Prometheus

I used to use the Dropwizard built-in metrics annotations and Graphite, but now I wanted to integrate these into my Java EE project and expose the Prometheus metrics format. The main difference between Graphite and Prometheus is the push versus pull mentality: instead of pushing the metrics data to a sink, the metrics are provided via an HTTP servlet, and the Prometheus server scrapes them from there.

There are two registry classes which we have to bring together. One is com.codahale.metrics.MetricRegistry, which holds all Codahale metrics, and the other is io.prometheus.client.CollectorRegistry, which holds all metrics published to Prometheus. In our case, we will receive all metrics from our JAX-RS resource classes annotated with com.codahale.metrics.annotation.Timed or com.codahale.metrics.annotation.ExceptionMetered.

The Prometheus library contains some default JVM metrics (DefaultExports), but I could not use them as a whole because of some Sun JDK classes that do not exist in OpenJDK, for example. Still, it is good to add these in addition to the metrics coming from our annotated JAX-RS resource classes. So this is the class:

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.dropwizard.DropwizardExports;
import io.prometheus.client.hotspot.ClassLoadingExports;
import io.prometheus.client.hotspot.GarbageCollectorExports;
import io.prometheus.client.hotspot.MemoryPoolsExports;
import io.prometheus.client.hotspot.ThreadExports;
import io.prometheus.client.hotspot.VersionInfoExports;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.Destroyed;
import javax.enterprise.context.Initialized;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

import com.codahale.metrics.MetricRegistry;

/**
 * A bean which registers the metrics exports during startup and clears the
 * collector registry when the application is destroyed.
 */
@ApplicationScoped
public class MetricsBean {

    @Inject
    private MetricRegistry registry;

    public void init(@Observes @Initialized(ApplicationScoped.class) final Object init) {
        // DefaultExports.initialize(); // not usable here, see above
        new MemoryPoolsExports().register();
        new GarbageCollectorExports().register();
        new ThreadExports().register();
        new ClassLoadingExports().register();
        new VersionInfoExports().register();
        CollectorRegistry.defaultRegistry.register(new DropwizardExports(registry));
    }

    public void destroy(@Observes @Destroyed(ApplicationScoped.class) final Object init) {
        CollectorRegistry.defaultRegistry.clear();
    }
}

and these are the dependencies for the project:

<dependency>
  <groupId>io.astefanutti.metrics.cdi</groupId>
  <artifactId>metrics-cdi</artifactId>
  <version>1.3.6</version>
</dependency>
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient</artifactId>
  <version>0.0.23</version>
</dependency>
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_servlet</artifactId>
  <version>0.0.23</version>
</dependency>
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_dropwizard</artifactId>
  <version>0.0.23</version>
</dependency>
<dependency>
  <groupId>io.prometheus</groupId>
  <artifactId>simpleclient_hotspot</artifactId>
  <version>0.0.23</version>
</dependency>

As you can see, with the help of CDI we are able to bind everything together during the startup phase of the application inside the JEE container.

Then we add the servlet to the web.xml like this, and we are done:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
  <welcome-file-list>
    <welcome-file>/index.html</welcome-file>
  </welcome-file-list>
  <servlet>
    <servlet-name>prometheusMetrics</servlet-name>
    <servlet-class>io.prometheus.client.exporter.MetricsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>prometheusMetrics</servlet-name>
    <url-pattern>/prometheusMetrics/*</url-pattern>
  </servlet-mapping>
</web-app>

In your Prometheus server, create a scrape config:

scrape_configs:
  - job_name: 'jaxrs'
    metrics_path: /prometheusMetrics/
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['www.example.com:80']
        labels:
          group: 'jaxrs'

and you should be able to see your metrics in the Prometheus expression browser or Grafana.

Exploring Feign – Retrying

Feign is a library which makes it easier to implement an HTTP client. Recently more and more people have started writing HTTP clients, because they are creating microservices which communicate via HTTP. There are all sorts of libraries supporting this task, like Jersey, RESTEasy and others, and then there is Feign.

Today I do not want to explain the basic functionality; that is all covered on the README page itself. Today I want to get into the details of a feature which is becoming more and more important: in modern distributed systems you want resilient behaviour, meaning you design your service so that it can handle unexpected situations without the user noticing. For example, the API you are calling is not reachable at the moment, the request times out, or the requested resource is not yet available. To handle this, you apply a retry pattern, which increases the chance that the service request succeeds on the first, the second or the nth attempt.

What most developers don't know: Feign has a default retryer built in.
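
If you do not configure anything, Feign.builder() falls back to Retryer.Default. Here is a sketch of configuring it explicitly with the same values the default constructor uses, reusing the GitHub interface from the examples below:

// equivalent to the built-in default: 100 ms initial period, 1 s max period, 5 attempts
final GitHub github = Feign.builder()
    .retryer(new Retryer.Default(100, TimeUnit.SECONDS.toMillis(1), 5))
    .decoder(new GsonDecoder())
    .target(GitHub.class, "https://api.github.com");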

Below are a few code examples showing what you can expect from this feature. They are JUnit tests with a client mock, so that we can stub certain errors and verify how many retries have been made.

Case 1) Success

No retry is needed.

@Test
public void testSuccess() throws IOException {
    when(clientMock.execute(any(Request.class), any(Options.class))).thenReturn(
        Response.builder().status(200)
            .headers(Collections.<String, Collection<String>>emptyMap()).build());
    final GitHub github = Feign.builder().client(clientMock).decoder(new GsonDecoder())
        .target(GitHub.class, "https://api.github.com");
    github.contributors("OpenFeign", "feign");
    verify(clientMock, times(1)).execute(any(Request.class), any(Options.class));
}

Case 2) Destination never reachable.

In this case we can see the default retryer at work: it makes 5 attempts, and finally the client invocation throws an exception.

@Test
public void testDefaultRetryerGivingUp() throws IOException {
    when(clientMock.execute(any(Request.class), any(Options.class)))
        .thenThrow(new UnknownHostException());
    final GitHub github = Feign.builder().client(clientMock).decoder(new GsonDecoder())
        .target(GitHub.class, "https://api.github.com");
    try {
        github.contributors("OpenFeign", "feign");
        fail("not failing");
    } catch (final Exception e) {
        // expected after the retryer gives up
    } finally {
        verify(clientMock, times(5)).execute(any(Request.class), any(Options.class));
    }
}

Case 3) Configure the maximum number of attempts

Taking the same error scenario as in case 2, this example shows how to configure the retryer to stop after the 3rd attempt.

@Test
public void testRetryerAttempts() throws IOException {
    when(clientMock.execute(any(Request.class), any(Options.class)))
        .thenThrow(new UnknownHostException());
    final int maxAttempts = 3;
    final GitHub github = Feign.builder().client(clientMock).decoder(new GsonDecoder())
        .retryer(new Retryer.Default(1, 100, maxAttempts))
        .target(GitHub.class, "https://api.github.com");
    try {
        github.contributors("OpenFeign", "feign");
        fail("not failing");
    } catch (final Exception e) {
        // expected after maxAttempts
    } finally {
        verify(clientMock, times(maxAttempts)).execute(any(Request.class), any(Options.class));
    }
}

Case 4) Trigger retrying via error decoding

For some (RESTful) services, HTTP status code 409 (conflict) is used to express a wrong state of the target resource, which might change after resubmitting the request. We simulate that the first retry leads to a successful response.

@Test
public void testCustomRetryConfigByErrorDecoder() throws IOException {
    when(clientMock.execute(any(Request.class), any(Options.class))).thenReturn(
        Response.builder().status(409)
            .headers(Collections.<String, Collection<String>>emptyMap()).build(),
        Response.builder().status(200)
            .headers(Collections.<String, Collection<String>>emptyMap()).build());

    class RetryOn409ConflictStatus extends ErrorDecoder.Default {
        @Override
        public Exception decode(final String methodKey, final Response response) {
            if (409 == response.status()) {
                return new RetryableException("getting conflict and retry", null);
            } else {
                return super.decode(methodKey, response);
            }
        }
    }

    final GitHub github = Feign.builder().client(clientMock).decoder(new GsonDecoder())
        .errorDecoder(new RetryOn409ConflictStatus())
        .target(GitHub.class, "https://api.github.com");
    github.contributors("OpenFeign", "feign");
    verify(clientMock, times(2)).execute(any(Request.class), any(Options.class));
}

Case 4a) Behavior without error decoder

If no error decoder is configured, no retry is executed by Feign.

@Test
public void test409Error() throws IOException {
    when(clientMock.execute(any(Request.class), any(Options.class))).thenReturn(
        Response.builder().status(409)
            .headers(Collections.<String, Collection<String>>emptyMap()).build(),
        Response.builder().status(200)
            .headers(Collections.<String, Collection<String>>emptyMap()).build());
    final GitHub github = Feign.builder().client(clientMock).decoder(new GsonDecoder())
        .target(GitHub.class, "https://api.github.com");
    try {
        github.contributors("OpenFeign", "feign");
        fail("not failing");
    } catch (final Exception e) {
        // no retry: the 409 surfaces as an exception immediately
    } finally {
        verify(clientMock, times(1)).execute(any(Request.class), any(Options.class));
    }
}

Case 5) Evaluation of Retry-After header

In contrast to cases 4 and 4a, for any response carrying a Retry-After header (a standard header defined in the HTTP protocol), the default Feign behavior is to honor it and trigger a retry at the given time.

@Test
public void test400ErrorWithRetryAfterHeader() throws IOException {
    when(clientMock.execute(any(Request.class), any(Options.class))).thenReturn(
        Response.builder().status(400)
            .headers(Collections.singletonMap(Util.RETRY_AFTER,
                (Collection<String>) Collections.singletonList("1"))).build(),
        Response.builder().status(200)
            .headers(Collections.<String, Collection<String>>emptyMap()).build());
    final GitHub github = Feign.builder().client(clientMock).decoder(new GsonDecoder())
        .target(GitHub.class, "https://api.github.com");
    github.contributors("OpenFeign", "feign");
    verify(clientMock, times(2)).execute(any(Request.class), any(Options.class));
}

You can download my example on GitHub.