I have always contended that there are three things that have made Java the most successful language of our time:
Open source - Lawsuits against Microsoft and Google aside, Java’s community process, conventions, and general open-source nature have made development on this platform a largely white-box affair. If you aren’t sure why a library is not working the way you want, it is almost always possible to dig into the source code.
I’m pleased today to announce the release of the Simiacryptus data tools v1.8.0, including the first version of a new image art publishing application, named and hosted at DeepArtist.org - with examples published at the subdomain examples.deepartist.org.
What is it? DeepArtist.org is an image processing platform that uses convolutional neural networks to apply state-of-the-art image processing techniques. The software is targeted at hobbyists and digital artists, so this documentation focuses on the practical tools it provides for producing pretty pictures.
Recently the OpenAI team made news again by releasing a 345-million-parameter pre-trained natural language model. This model, built with Python and TensorFlow, can generate text conditioned on preceding text with such impressive capability that it can be used to translate and answer questions. The team actually has models several times larger, but has not yet released them due to risks of abuse.
Today I have released my small contribution to this awesome project: a deployable TensorFlow model and a Java-based reference implementation which uses only the core (i.
I have been looking at TensorFlow again after a couple of years, and I was very pleased to see that there is now an official Java release of the API, interop layer, and native binaries. These are all published to Maven and easily loaded into the process using the standard Java API. Hurray!
However, I quickly ran into limitations. The API provides a bridge to the native runtime, so you effectively have the functionality of the C++ API within Java, but the C++ API doesn’t do everything.
Our newest improvements to style transfer have pushed results even further, producing images that are both more artistically distinctive and more recognizable in content. The first improvement adds a color transformation step that removes color differences before style transfer and re-adds the difference afterwards using the inverted transformation; without it, the best results required that the style and content inputs have similar color schemes.
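The color-alignment idea can be sketched with a toy per-channel mean shift: move the content image’s colors toward the style image’s before transfer, then invert the shift afterwards. This is a minimal illustration only; the class and method names below are hypothetical, and MindsEye’s actual transformation is more sophisticated than a mean shift.

```java
// Hypothetical sketch: align per-channel means before style transfer,
// then invert the shift to re-add the original color difference.
public class ColorAlign {
    // Per-channel means of an image stored as [pixel][channel].
    static double[] channelMeans(double[][] img) {
        double[] means = new double[img[0].length];
        for (double[] px : img)
            for (int c = 0; c < px.length; c++) means[c] += px[c];
        for (int c = 0; c < means.length; c++) means[c] /= img.length;
        return means;
    }

    // Shift every channel by sign * delta[c].
    static double[][] shift(double[][] img, double[] delta, int sign) {
        double[][] out = new double[img.length][img[0].length];
        for (int i = 0; i < img.length; i++)
            for (int c = 0; c < img[i].length; c++)
                out[i][c] = img[i][c] + sign * delta[c];
        return out;
    }

    public static void main(String[] args) {
        double[][] content = {{0.2, 0.4, 0.6}, {0.4, 0.6, 0.8}};
        double[][] style   = {{0.5, 0.5, 0.5}, {0.7, 0.7, 0.7}};
        double[] cm = channelMeans(content), sm = channelMeans(style);
        double[] delta = new double[cm.length];
        for (int c = 0; c < cm.length; c++) delta[c] = sm[c] - cm[c];
        double[][] aligned = shift(content, delta, +1);  // colors now match style
        // ... style transfer would run here on `aligned` ...
        double[][] restored = shift(aligned, delta, -1); // re-add the difference
        System.out.printf("aligned mean R = %.2f%n", channelMeans(aligned)[0]);
        System.out.printf("restored mean R = %.2f%n", channelMeans(restored)[0]);
    }
}
```

Because the shift is invertible, the final image keeps the content’s original color character while the transfer itself sees color-matched inputs.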
The second improvement is much more complex, but essentially amounts to identifying regions of a given image.
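To make “identifying regions” concrete, here is a toy flood-fill segmentation that labels connected pixels of similar intensity. This is only an assumed illustration of the general idea of partitioning an image into regions; the actual post relies on a much more capable segmentation method, and none of these names come from MindsEye.

```java
import java.util.ArrayDeque;

// Toy region identification: label 4-connected pixels whose values
// differ by at most `tol`, producing a region-label map.
public class Regions {
    static int[][] label(double[][] img, double tol) {
        int h = img.length, w = img[0].length;
        int[][] labels = new int[h][w];
        int next = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (labels[y][x] == 0) {
                    next++;
                    ArrayDeque<int[]> stack = new ArrayDeque<>();
                    stack.push(new int[]{y, x});
                    labels[y][x] = next;
                    while (!stack.isEmpty()) {
                        int[] p = stack.pop();
                        int[][] nbrs = {{p[0]-1, p[1]}, {p[0]+1, p[1]},
                                        {p[0], p[1]-1}, {p[0], p[1]+1}};
                        for (int[] n : nbrs)
                            if (n[0] >= 0 && n[0] < h && n[1] >= 0 && n[1] < w
                                    && labels[n[0]][n[1]] == 0
                                    && Math.abs(img[n[0]][n[1]] - img[p[0]][p[1]]) <= tol) {
                                labels[n[0]][n[1]] = next;
                                stack.push(n);
                            }
                    }
                }
        return labels;
    }

    public static void main(String[] args) {
        double[][] img = {
            {0.1, 0.1, 0.9},
            {0.1, 0.9, 0.9}
        };
        int[][] labels = label(img, 0.2);
        // Two regions: the dark pixels and the bright pixels.
        System.out.println(labels[0][0] + " " + labels[1][1]);
    }
}
```

Once each pixel carries a region label, operations such as style transfer can be weighted or applied per region rather than globally.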
Let’s say you have a local Java application you are developing. For some reason, you want to run some code on AWS EC2 - after all, the cloud and virtual computing revolution makes all that theoretically easy. All you need is an AWS account… right?
However, if you are starting from a local Java application and just have the goal “run this code on the cloud”, there are actually quite a few problems to solve before you can do this.
One very entertaining application of deep learning is style modification and pattern enhancement, which became a popular topic on the internet after Google’s Deep Dream post and the subsequent research and publications on style transfer. Reproducing this research has long been a goal for the development of MindsEye, and now that it is achieved I’m having quite a bit of fun with this playground I built! I have collected the interesting visual results of my work in this online album.
Deep Learning has in recent years seen dramatic success in the field of computer vision. Convolutional neural networks tens of layers deep are becoming common and are among the best performers for image recognition. Additionally, these learned networks can be used to produce novel artwork, as seen in recent publications about Deep Dream and Style Transfer. Today we will explore these applications with our own neural network platform, MindsEye.
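At its core, Deep Dream is gradient ascent on the input image: pixels are nudged to amplify a chosen layer activation. The sketch below replaces the network with a single toy activation (sum of squared pixels) so it stays self-contained; the class name and learning rate are invented for illustration, and MindsEye would supply a real CNN layer in place of the toy function.

```java
// Toy Deep Dream sketch: gradient ascent on pixels to amplify an "activation".
public class DreamSketch {
    // Stand-in for a CNN layer activation: sum of squared pixel values.
    static double activation(double[] img) {
        double a = 0;
        for (double v : img) a += v * v;
        return a;
    }

    public static void main(String[] args) {
        double[] img = {0.1, -0.2, 0.3};
        double lr = 0.1;
        double before = activation(img);
        for (int step = 0; step < 50; step++)
            for (int i = 0; i < img.length; i++)
                img[i] += lr * 2 * img[i]; // d(activation)/d(pixel) = 2 * pixel
        System.out.println(activation(img) > before); // ascent increased the activation
    }
}
```

With a real network, the same loop amplifies whatever patterns the chosen layer responds to, which is what produces Deep Dream’s characteristic imagery; style transfer swaps the objective for a blend of content and style losses.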