the future of jsch without ssh-rsa

With the release notes of OpenSSH 8.3 it is clear that some day in the future servers will no longer accept the ssh-rsa signature algorithm. It will be disabled by default because attacks against the underlying SHA-1 hash have become affordable. Instead, rsa-sha2-256 or rsa-sha2-512 should be used, among others.

At the moment, some people are asking about the Jsch library, a popular Java SSH implementation, because its future is unclear. Unfortunately there is no answer on the SourceForge mailing list, and I also tried to reach out to Jcraft, its original author, via email, but have not received an answer yet.

Because of the popularity of Jsch, I think a lot of people will be interested in new features and additional signature algorithms being released, because without them they cannot keep using the current version 0.1.55 from November 2018. Once servers no longer accept ssh-rsa and no alternative host keys are set up, connections will simply stop working!

The question then is whether Jcraft will continue to maintain it, whether some fork will take over, or whether projects will have to switch to other, more actively maintained libraries (like sshj). But do you want to spend the time and effort to switch to a new API if you can avoid it?

Speaking of forks: when I was looking for an SSH library that supports forwarding Unix sockets, I did not find one. I settled on Jsch, because I knew the API from previous projects, and implemented the feature myself. After seeing the inactivity on the SourceForge platform, I decided to create a fork of Jsch on GitHub. And after receiving pull requests adding more supported algorithms, I realized what was going on and what it means for the future. That is actually why I am writing this post.

When I was searching for a fork to contribute to, I did not find a useful one for Jsch. There is https://github.com/vngx/vngx-jsch, which improved the Javadoc among other things and was at least released to Maven Central, albeit 8 years ago. All other projects containing the source code of Jsch (such as https://github.com/gaoxingliang/JSch or https://github.com/is/jsch) did not publish it in a form that others can embed into their projects. The question is: did they not publish it because of legal rights or the license? As far as I understand, the BSD license allows publishing it as long as you keep the original copyright notices within the artifact.

So my goal in setting up a fork was to make it useful to the community, and that means releasing it to Maven Central. Please check https://github.com/mwiede/jsch for the latest released version.

The benefits of having this setup are the following:

  • everybody can use it right away by declaring the artifact in a Maven or Gradle build (including the Kotlin DSL).
  • drop-in replacement: because the code and the artifact are forks, the package name, the inner class names and the API remain the same, as the sketch after this list shows.
  • open for contribution, because it is hosted on GitHub. Most people are used to Git nowadays, so contributing on SourceForge is not attractive.
  • upstream compatible: in case Jcraft jumps in again and continues to provide maintenance and releases, it will be easy to hand everything back to them.
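
To illustrate the drop-in aspect, here is a minimal sketch of a client built against the fork; host, user and password are placeholders, and the imports stay in the com.jcraft.jsch package, so existing code compiles unchanged:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class JschDropInExample {

    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        // same API as the original com.jcraft:jsch artifact
        Session session = jsch.getSession("user", "host.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // demo only, do not disable host key checking in production
        session.connect();
        session.disconnect();
    }
}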

So let’s see how quickly OpenSSH follows through on its announced plan to disable ssh-rsa in the near future.

Recommendation

If you are a user of Jsch, please have a look at your servers, and if in doubt, switch your Maven or Gradle coordinates to the releases of the fork I created.

Maven users, replace

<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <version>0.1.55</version>
</dependency>

with

<dependency>
  <groupId>com.github.mwiede</groupId>
  <artifactId>jsch</artifactId>
  <version>0.1.58</version>
</dependency>

Gradle users, replace

implementation 'com.jcraft:jsch:0.1.55'

with

implementation 'com.github.mwiede:jsch:0.1.58'

Happy coding!

encoding in jdk openj9 docker images

Recently I was looking into encoding problems and I was really surprised to find this:

docker run --rm adoptopenjdk/openjdk11-openj9:jre-11.0.2.9_openj9-0.12.1-alpine java -XshowSettings 2>&1 | grep encoding

file.encoding = ANSI_X3.4-1968
file.encoding.pkg = sun.io
ibm.system.encoding = ANSI_X3.4-1968
os.encoding = ANSI_X3.4-1968
sun.io.unicode.encoding = UnicodeLittle
sun.jnu.encoding = ANSI_X3.4-1968

Even adding environment variables like LC_ALL or LANG did not help 🙁

docker run --rm -e LANG=en_US.UTF-8 -e LC_ALL=en_US.UTF-8 adoptopenjdk/openjdk11-openj9:jre-11.0.2.9_openj9-0.12.1-alpine java -XshowSettings 2>&1 | grep encoding

But the issue was fixed with a more recent version of the image:

docker run --rm adoptopenjdk/openjdk11-openj9:jre-11.0.6_10_openj9-0.18.1-alpine java -XshowSettings 2>&1 | grep encoding

file.encoding = UTF-8
file.encoding.pkg = sun.io
ibm.system.encoding = UTF-8
os.encoding = UTF-8
sun.io.unicode.encoding = UnicodeLittle
sun.jnu.encoding = UTF-8

Be aware of these internals of the images you use!
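
If you want to verify the active encoding from within your own application rather than via -XshowSettings, a small sketch like the following (plain JDK, no extra dependencies) prints the relevant settings:

import java.nio.charset.Charset;

public class EncodingCheck {

    public static void main(String[] args) {
        // the charset the JVM uses by default for file I/O and byte-to-string conversion
        System.out.println("default charset:  " + Charset.defaultCharset());
        // the system properties behind it; should report UTF-8 on a properly configured image
        System.out.println("file.encoding:    " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding: " + System.getProperty("sun.jnu.encoding"));
    }
}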

Happy coding

fault tolerance for jax-ws with resilience4j

In a recent project we still have to use SOAP web services, and I wanted to apply a resilience pattern such as retry to my project.
A colleague had just presented a Java library called resilience4j, so I wanted to use that one.

Of course, there are a lot of other possibilities, like using other libraries (such as Hystrix) or applying the sidecar pattern outside of my application in a cluster.

As you can see in the documentation, resilience4j is built for a functional programming style and supports a number of functional interfaces
which can be decorated to apply the retry mechanism to a function invocation. The examples always show a simple setup that passes a supplier and decorates it for one particular method.

In my use case, I do not want to write the decoration code for each and every method (or function). I have a third-party WSDL, generated the web service interface and port via wsimport, and now I want to apply the retry mechanism to the web service client generically. My solution is not very complicated: it uses a reflection proxy and an invocation handler to directly decorate and execute each method via the Retry class.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

import io.github.resilience4j.retry.Retry;

public final class WebserviceFactory {

    @SuppressWarnings("unchecked")
    static <T> T decorateWithRetryer(final T service, Retry retry) {
        // every call on the proxy is routed through the Retry instance,
        // which re-invokes the underlying port method on failure
        InvocationHandler invocationHandler =
                (proxy, method, args) -> retry.executeCheckedSupplier(() -> method.invoke(service, args));
        return (T) Proxy.newProxyInstance(service.getClass().getClassLoader(),
                service.getClass().getInterfaces(), invocationHandler);
    }
}

This way I am able to decorate my whole service interface.
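
For illustration, usage could look roughly like the following sketch; HelloService, HelloServicePort and sayHello are hypothetical names standing in for whatever wsimport generated from your WSDL:

import java.time.Duration;

import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;

public class Client {

    public static void main(String[] args) {
        RetryConfig config = RetryConfig.custom()
                .maxAttempts(3)                       // try up to three times
                .waitDuration(Duration.ofMillis(500)) // pause between attempts
                .build();
        Retry retry = Retry.of("helloService", config);

        // hypothetical port obtained from the wsimport-generated service class
        HelloServicePort port = new HelloService().getHelloServicePort();
        HelloServicePort resilientPort = WebserviceFactory.decorateWithRetryer(port, retry);

        // every call on resilientPort is now retried transparently
        resilientPort.sayHello("world");
    }
}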

One note regarding exception handling: in case of an exception, an InvocationTargetException is thrown. If you want the original exception, you have to unwrap it.
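
A minimal sketch of how that unwrapping could look inside the invocation handler (one possible variant, not necessarily the exact code of my example project):

InvocationHandler invocationHandler = (proxy, method, args) -> {
    try {
        return retry.executeCheckedSupplier(() -> method.invoke(service, args));
    } catch (java.lang.reflect.InvocationTargetException e) {
        // rethrow the exception originally thrown by the web service port
        throw e.getCause();
    }
};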

The source code of my example is available at https://github.com/mwiede/jaxws-resilience4j.

Memory Footprints of hello-world microservices

I got curious when reading about OpenJ9, a JVM advertised as high performance with a low memory footprint. I was working in a project where the environment consisted of around 30 Java applications, a few running on Tomcat, a few on WebLogic and most on the Dropwizard microservice framework. The desirable goal for every developer was to start the complete platform on his local notebook, and therefore a VirtualBox image was built using Vagrant. Each microservice consumed around 250 MB of RAM, and with the number of services growing we already hit a 24 GB VirtualBox image size. I found the blog post https://codeburst.io/microservices-in-java-never-a7f3a2540dbb, which describes the same issue and points out that if you run those Java microservices in a cloud infrastructure, you pay even more money just because of the memory footprint.

Luckily, somebody had already done a comparison of OpenJ9 vs. the HotSpot VM in his two blog posts https://royvanrijn.com/blog/2018/05/openj9-jvm-shootout/ and https://royvanrijn.com/blog/2018/05/openj9-hotsport-specjvm2008/, showing benchmarks and results.

My own simple comparison

I tested it on my own (just out of curiosity) and came to the same result: in terms of memory consumption, there is a potential improvement with OpenJ9 as a JVM alternative.

I picked three hello-world examples using Maven archetypes from Dropwizard, Helidon and Spring Boot.

First I packaged them into Docker images using adoptopenjdk/openjdk8-openj9:alpine-slim on the one hand and adoptopenjdk/openjdk8:alpine-slim on the other.

Then I compared memory consumption using application metrics and docker stats to get a rough idea of the differences. Here is the result of the Dropwizard app:

OpenJ9 without any parameters passed to the JVM:

  • jvm.memory.heap.used = 10563328 B, jvm.memory.total.used = 37838048 B
  • docker container = 48.55MiB

HotSpot without any parameters passed to the JVM:

  • jvm.memory.heap.used = 28511520 B, jvm.memory.total.used = 61381328 B
  • docker container = 118.3MiB
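
As a side note, if you want to read comparable numbers from inside any of these applications without relying on the Dropwizard metrics, a small sketch using the standard MemoryMXBean reports similar figures:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryReport {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // roughly corresponds to the jvm.memory.heap.used metric above
        System.out.println("heap used:     " + memory.getHeapMemoryUsage().getUsed() + " B");
        System.out.println("non-heap used: " + memory.getNonHeapMemoryUsage().getUsed() + " B");
    }
}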

Conclusion

Whether your goal is to run your complete software on your developer laptop or to save money running your services in a public cloud, OpenJ9 offers a way to reduce the memory footprint of your Java applications by around 50%.

As mentioned in the other blog posts, there are also a few downsides, but if you test everything and your requirements (in terms of computation performance) are met, you should give it a try.