MindsEye 2.0


Hello Everybody!

Today I’d like to announce the 2.0.0 release of MindsEye and related projects!

It’s been quite a journey already…

Neat, huh? This type of rendering is my newest addition to deepartist.org. It generates an animation from several key frames arranged in a symmetric pattern, all of which use the same style-defining network. The final images are animated by a script written in Scala.js, which is a whole other topic.

This animation isn’t the only new thing I have working. I am just wrapping up many months of reworking code at a pretty deep level (things were not even close to working!), and I’ve finally released MindsEye 2.0.0. The biggest change is a coding tool that uses automated code analysis to verify reference-counting memory management. The change touched the majority of files, methods, and lines, but was in theory non-functional: it converted previously hand-coded reference-counting code into a more rigorous pattern to which static code analysis can be applied. This will make a large code base easier to manage, since general reference management can be checked by static code analysis, and verified in integration testing by logging interception. The code analysis isn’t perfect and must be paired with runtime event-log monitoring to prevent memory leaks, but it does make the code a lot easier to work with!
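To give a flavor of the kind of pattern involved, here is a minimal sketch of a reference-counted object: each instance starts with one reference, `addRef`/`freeRef` adjust the count, and the resource is released exactly once when the count hits zero. The names and details here are illustrative assumptions, not MindsEye's actual API.

```scala
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical sketch of a reference-counting base type. A static analyzer
// can check that every addRef() is balanced by exactly one freeRef().
trait ReferenceCounted {
  private val refs = new AtomicInteger(1)
  @volatile private var freed = false

  def addRef(): this.type = {
    // Incrementing from zero means use-after-free - exactly the class of bug
    // the analysis and runtime log monitoring are meant to catch.
    require(refs.getAndIncrement() > 0, "addRef on freed object")
    this
  }

  def freeRef(): Unit =
    if (refs.decrementAndGet() == 0) { freed = true; _free() }

  def isFreed: Boolean = freed

  // Subclasses release native buffers, GPU memory, etc. here.
  protected def _free(): Unit = ()
}
```

The point of the rigid pattern is that balance between `addRef` and `freeRef` calls becomes a mechanical property a tool can verify, rather than a convention a reviewer must trace by hand.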

The 2.0 release has several other improvements as well. One of the most obvious is better HTML output for markdown rendering, driven by frontmatter. The latest version bundles the JavaScript libraries needed for several neat features, including code formatting, graphs, and collapsible sections. The test suite has also been converted to JUnit 5 and heavily restructured. Both the unit tests and the release build are now packaged as tasks that can be run on an EC2 node, with the test results scanned and reported on by a second process. The unit test executor can even isolate tests by method or class in separate JVMs.
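Per-JVM isolation can be as simple as launching a fresh `java` process per test class. The sketch below shows one plausible way to build such a command using JUnit 5's standalone console runner; the classpath handling and the exact launcher arguments are assumptions, not the executor's real implementation.

```scala
// Hypothetical sketch: build a command line that runs one test class in its
// own JVM via the JUnit Platform console launcher.
def isolatedTestCommand(classpath: String, testClass: String): Seq[String] =
  Seq(
    "java", "-cp", classpath,
    "org.junit.platform.console.ConsoleLauncher", // JUnit 5 console runner
    "--select-class", testClass
  )

// Launching it might look like:
// new ProcessBuilder(isolatedTestCommand(cp, "com.example.MyLayerTest"): _*)
//   .inheritIO().start().waitFor()
```

Running each class in its own JVM keeps native-memory state, GPU handles, and static caches from leaking between tests, which matters a lot for a library doing its own memory management.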

In general, these developments have a cloud-based theme: all tests, applications, and builds can now be run on EC2, with all important storage on S3. S3 also serves as an important data buffer, storing individual test results that are then summarized in aggregate reports. Although a cloud-centric design has many benefits, the main problem I was trying to solve was scale: I want this project to be something that isn’t prohibitively complex to 1) add features to, or 2) contribute to. That meant making it easier to develop new code, easier to validate new code for inclusion in a PR, and easier to manage tests and builds. Six months ago I knew I could not expect anybody else to make a significant contribution to MindsEye, given the reference-counting code style and special testing concerns; now it is at least a possibility.
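The aggregation step can be sketched independently of the storage layer. Below, a hypothetical `TestResult` record stands in for the per-test artifacts (which the real pipeline would fetch from S3 rather than hold in memory), and `summarize` rolls them up into a single report.

```scala
// Hypothetical per-test record; in the real pipeline each one would be read
// back from an S3 object written by the test executor.
case class TestResult(name: String, passed: Boolean, millis: Long)

// Roll individual results up into one aggregate report string.
def summarize(results: Seq[TestResult]): String = {
  val (ok, bad) = results.partition(_.passed)
  val total = results.map(_.millis).sum
  s"${ok.size} passed, ${bad.size} failed in ${total} ms" +
    bad.map(r => s"\n  FAILED: ${r.name}").mkString
}
```

Keeping the summarizer pure like this is what lets the reporting run as a second, independent process: it only depends on the stored results, not on the test run itself.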

This was a huge code change - most files and interfaces were updated, and the core patterns used in reference management were completely replaced. The project structure also changed as we moved from Data-Science-Tools to All-Projects as our umbrella repo. Fortunately, thanks to the large number of existing layer tests, we have good validation that major functionality was preserved across the vast majority of the code.

We do have some interesting new eye candy to show off since our last release, of course. I’ve improved the rotationally-symmetric renderings to produce a zooming effect, which loops to create the ad-infinitum animation linked above. These are initially previewed as animated GIFs, but are finally rendered as very smooth JavaScript animations.

Click an ant below to view in detail:

The JavaScript library powering that animation is implemented in Scala.js, which I’ve found to be pretty cool. In a fairly short time I even ported my previously GWT-powered Langton’s ants, which I’ve now published in improved form. In general it felt a lot more lightweight than GWT: I could use the Scala language and the usual platform classes, but for UI I accessed the DOM through a thin wrapper class, which felt more direct and transparent.
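The core of a Langton's ant demo is a tiny pure step function; the published Scala.js version drives a canvas, but the grid update itself can be sketched in plain Scala (this is my own minimal reconstruction of the classic rule, not the demo's actual code). The rule: on a white cell, turn right, flip the cell to black, and move forward; on a black cell, turn left, flip it to white, and move forward.

```scala
// Directions: 0 = up, 1 = right, 2 = down, 3 = left.
final case class LangtonAnt(x: Int, y: Int, dir: Int)

// One step of Langton's ant: returns the updated set of black cells and ant.
def step(black: Set[(Int, Int)], ant: LangtonAnt): (Set[(Int, Int)], LangtonAnt) = {
  val here = (ant.x, ant.y)
  val onBlack = black(here)
  // Black cell: turn left and clear it. White cell: turn right and set it.
  val dir = if (onBlack) (ant.dir + 3) % 4 else (ant.dir + 1) % 4
  val grid = if (onBlack) black - here else black + here
  val (x, y) = dir match {
    case 0 => (ant.x, ant.y + 1)
    case 1 => (ant.x + 1, ant.y)
    case 2 => (ant.x, ant.y - 1)
    case _ => (ant.x - 1, ant.y)
  }
  (grid, LangtonAnt(x, y, dir))
}

// Run n steps from an empty (all-white) grid, ant at the origin facing up.
def run(steps: Int): (Set[(Int, Int)], LangtonAnt) =
  (1 to steps).foldLeft((Set.empty[(Int, Int)], LangtonAnt(0, 0, 0))) {
    case ((g, a), _) => step(g, a)
  }
```

In the browser version, a thin DOM wrapper would simply repaint the changed cell and schedule the next `step` on an animation frame.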

Partially inspired by the recent release of OpenAI Microscope, I created a variety of notebooks which highlight the activity of a single neuron at a time in renderings. (These can be viewed at examples.deepartist.org.) The neurons can either be iterated over in arbitrary numeric order, or sorted by how closely they match a given image.
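The "sorted by how closely they match a given image" idea can be sketched as a similarity ranking. Here each neuron is represented by an activation vector and ranked by cosine similarity to the target image's activation; the representation and names are my illustrative assumptions, not the notebooks' actual code.

```scala
// Cosine similarity between two activation vectors.
def cosine(a: Array[Double], b: Array[Double]): Double = {
  val dot = a.zip(b).map { case (x, y) => x * y }.sum
  val na  = math.sqrt(a.map(x => x * x).sum)
  val nb  = math.sqrt(b.map(x => x * x).sum)
  if (na == 0 || nb == 0) 0.0 else dot / (na * nb)
}

// Rank neuron indices by how closely their activations match the target's.
def rankNeurons(activations: Map[Int, Array[Double]],
                target: Array[Double]): Seq[Int] =
  activations.toSeq.sortBy { case (_, v) => -cosine(v, target) }.map(_._1)
```

Iterating "in arbitrary numeric order" is then just `activations.keys.toSeq.sorted`; the ranked order is what surfaces the neurons most relevant to a chosen image.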

These single-neuron patterns can make an interesting base to select from when making new works, as in some of these infinitely zooming animations. I think there is a funny similarity between the thumbnails of these ants and the results of the neuron surveys - both display an interesting variety of forms, like biological samples waiting to be cataloged. I hope you have as much fun exploring them as I have had!