So, I’ve been using and learning Clojure for about six months now, and I just attended my first Clojure/conj. It’s been a lot of fun. Since I’m so new to Clojure, there was plenty that I wasn’t able to grok. But even in those cases I could see a lot of potential in whatever the speaker was sharing with us.
Now that I’m home I wanted to debrief and make note of the things I want to look at more closely. Here goes:
- data.fressian – a Clojure wrapper around Fressian, which is an extensible binary data notation (not sure what the difference is between a format and a notation), with a really simple API. Stuart Halloway presented this. It sounds really easy to use and is designed to be language-agnostic, which would help make sure that data generated from Clojure code doesn’t require Clojure to read it. I’ll have to look into this.
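As a sketch of how simple the API looks (assuming the `clojure.data.fressian` namespace and its `write`/`read` functions), a round trip might be as little as:

```clojure
(require '[clojure.data.fressian :as fress])

;; write encodes Clojure data into a java.nio.ByteBuffer of Fressian bytes
(def buf (fress/write {:conference "Clojure/conj" :year 2013}))

;; read decodes the buffer back into Clojure data,
;; round-tripping the original map
(fress/read buf)
```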
- Prismatic’s Schema – Aria Haghighi talked about this. Schema doesn’t explicitly add types, but it does allow a sort of type checking. And it can do a whole lot more than that, like compile down to Objective-C and make it easier to see what a function’s inputs actually are. It doesn’t appear to be a competitor to core.typed, but seems like it could be used as a complement, since it allows you to add much more information than just the types. It could be interesting to have some kind of documentation generated from the schemas and existing docstrings.
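A minimal sketch of that "sort of" type checking, assuming `schema.core`'s `s/Str`, `s/optional-key`, and `s/validate` (the `Talk` schema itself is made up for illustration):

```clojure
(require '[schema.core :as s])

;; A schema is plain data describing the shape of a value.
(def Talk
  {:title                   s/Str
   :speaker                 s/Str
   (s/optional-key :slides) s/Str})

;; validate returns the value unchanged when it conforms,
;; and throws a descriptive error when it doesn't.
(s/validate Talk {:title "Schema" :speaker "Aria Haghighi"})
```

Because the schema is just a map, it doubles as documentation of a function's inputs, which is part of the appeal.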
- core.logic and core.match – core.logic allows you to do logic programming in Clojure, and core.match adds pattern matching (exciting!). I don’t know much about logic programming, but after looking at the examples on the core.logic wiki I’d like to learn more. I wonder if it might be possible to use it to implement a Markov logic network. Pattern matching in Scala has been wonderful, so I’m glad to see that it’s available in Clojure as well.
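To give a flavor of both, here is a tiny sketch: a core.logic query for the values common to two lists (the classic `run*`/`membero` example from the wiki), and a core.match expression dispatching on the shape of a vector:

```clojure
(require '[clojure.core.logic :as l]
         '[clojure.core.match :refer [match]])

;; core.logic: find every q that is a member of both lists
(l/run* [q]
  (l/membero q [1 2 3])
  (l/membero q [2 3 4]))
;; => (2 3)

;; core.match: dispatch on the structure of a value
(defn shape [v]
  (match v
    [_]   :one-element
    [_ _] :two-elements
    :else :something-else))

(shape [:a :b])
;; => :two-elements
```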
- core.async – We got a walkthrough of core.async. I won’t pretend to understand it, but the examples and use cases in the talk were pretty clear. If (or as) we get into more real-time analytics at DR it could be pretty useful. We’ll have to see. I want to learn more about it, though, especially since I’m learning about web development.
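The canonical hello-world sketch, assuming `chan`, `go`, `>!`, and `<!!` from `clojure.core.async`:

```clojure
(require '[clojure.core.async :refer [chan go >! <!!]])

(def c (chan))

;; a go block is a lightweight process; >! parks until someone takes
(go (>! c "hello from a go block"))

;; <!! blocks the calling thread until a value arrives on the channel
(<!! c)
;; => "hello from a go block"
```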
- Cascalog 2.0 – Sam Ritchie gave a great talk on Cascalog 2.0 (slides), a refactoring of the codebase that lets you run Cascalog on more than just Hadoop, and for more than just “big data”. It could run on Storm, Spark (though I think you’ll have to roll your own generator?), or more. I will definitely be using this.
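For reference, a minimal sketch of Cascalog’s datalog-like query syntax over an in-memory source, which 2.0 should make runnable beyond Hadoop (`?<-` and `stdout` come from `cascalog.api`; the data is made up):

```clojure
(require '[cascalog.api :refer [?<- stdout]])

;; an in-memory generator: tuples of name and age
(def people [["alice" 28] ["bob" 33] ["carol" 41]])

;; a query: the name of everyone older than 30
(?<- (stdout)
     [?name]
     (people ?name ?age)
     (> ?age 30))
```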
So, yeah, it was pretty interesting.