Tail-recursion in Scala and Clojure

I recently read a blog post about issues with writing purely functional code in Scala. The post talked a bit about a paper addressing tail-recursion in Scala. The code example (taken from the paper) was an implementation of zipping a list with its indices, called zipIndex, that was not properly tail-recursive and would overflow the stack for relatively small inputs. A later post from the author will look at ways of addressing the problem, and I’m looking forward to reading it.

I’m assuming the next post will do something similar to what’s done for the classic tail-recursive factorial function, where an accumulator keeps the recursive call in tail position.

I’d write a zip list in much the same way:
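Roughly, a tail-recursive sketch looks like this, with an inner helper that carries an index and an accumulator so the recursive call stays in tail position:

```scala
import scala.annotation.tailrec

def zipIndex[A](as: List[A]): List[(A, Int)] = {
  // loopRecur carries the current index and an accumulator of already-zipped
  // pairs, so the recursive call is in tail position and the stack stays flat.
  @tailrec
  def loopRecur(index: Int, remaining: List[A], acc: List[(A, Int)]): List[(A, Int)] =
    remaining match {
      case Nil     => acc.reverse
      case x :: xs => loopRecur(index + 1, xs, (x, index) :: acc)
    }

  loopRecur(0, as, Nil)
}
```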

The recursion is handled with a helper function called loopRecur, whose name was inspired by Clojure’s loop and recur forms. I’ve tested the above implementation of zipIndex with inputs of up to 100,000 elements. If this were written in Clojure, the code would run much faster than the Scala version.

To test the Clojure version, assuming you have access to a Clojure REPL, you can run (zipList (range 100000)) and be amazed at how much faster it runs compared to the Scala version.


Building a Simple Neural Network

So… you want to learn about neural networks? Well, you’ve come to the right place. This post won’t focus on the theory behind how neural nets work. There are already numerous blog posts and books for that.1 This focuses on building a neural net in code. So, if you want to skip straight to the code, the repo is on Github.

What is a Neural Network?

There are many ways to answer this question, but the answer that resonates most deeply with me, and is perhaps most fundamental, is that a neural network is basically a function. It transforms its input into its output. One of my old college professors actually wrote a paper, Approximation by superpositions of a sigmoidal function, proving that neural networks can approximate any continuous function.2 This capability is what makes neural networks so powerful and exciting. And all you need to do is select the right weights (it’s not quite that simple).

The Simplest Neural Network

[Figure: a single perceptron]

We’ll start with a single perceptron, the simplest model of a neuron. Depending on the weighted sum of its inputs, the output is either 1 or -1. An output of 1 means the perceptron is on, -1 means the perceptron is off.3 For our simple example, there’s one input, x, which has weight w. To determine the output, called the activation, we first take the dot product of the input and weight vectors, \sum_{i=1}^{N}w_i \cdot x_i, then pass the result through the sign function to get the activation. You can see the code for this below.
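A minimal sketch (for our simple example, the arrays would hold just the single input x and its weight w):

```swift
// The sign function: +1 for non-negative input, -1 otherwise.
func sign(_ z: Double) -> Double {
    return z >= 0 ? 1.0 : -1.0
}

// The activation: the dot product of the weight and input vectors,
// passed through the sign function.
func activate(inputs: [Double], weights: [Double]) -> Double {
    let dot = zip(weights, inputs).reduce(0.0) { $0 + $1.0 * $1.1 }
    return sign(dot)
}
```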

You may be wondering why we have to use the sign function. This is just because we want the output of the perceptron to be 1 or -1. For other problems we might want a wider range of output values. In such cases we would replace the sign function with something else, like the sigmoid or arctangent. In general, though, the activation functions used in neural networks take real-valued input and return output that is limited to a specific range or to a set of specific values.

A Simple Problem

We’ll look at a simple binary classification problem. That is, to classify an input as belonging to one of two categories. In such a case, we can map each category to one of the two possible output values of the perceptron. Let’s consider the case where x is zero. Then it doesn’t matter what the weights are set to: the weighted sum is zero, so the activation of the perceptron is always the same. That’s not good. To combat such problems we add a fixed input to the perceptron, called the bias; the bias is generally set to 1. The addition of the bias slightly changes how we compute the output. Now we add a term to the sum we saw above, \sum_{i=1}^{N}w_i \cdot x_i + w_0 x_0, where x_0 is the bias and w_0 is the bias weight:
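In code, the only change is the extra bias term; a minimal sketch:

```swift
// Weighted sum with the bias: the bias input x0 is fixed at 1,
// so its contribution is just the bias weight w0.
func weightedSum(inputs: [Double], weights: [Double], biasWeight: Double) -> Double {
    let dot = zip(weights, inputs).reduce(0.0) { $0 + $1.0 * $1.1 }
    return dot + biasWeight
}
```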

Let’s put these pieces together:
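Something along these lines, where the exact layout of the struct is my own sketch:

```swift
// A sketch of the perceptron: weights, a bias weight, and an activation.
struct Perceptron {
    var weights: [Double]    // one weight per input
    var biasWeight: Double   // w0, the weight on the fixed bias input x0 = 1

    // The sign function: +1 for non-negative input, -1 otherwise.
    func sign(_ z: Double) -> Double {
        return z >= 0 ? 1.0 : -1.0
    }

    // Activation: the weighted sum of the inputs plus the bias term,
    // passed through the sign function.
    func activate(_ inputs: [Double]) -> Double {
        let dot = zip(weights, inputs).reduce(0.0) { $0 + $1.0 * $1.1 }
        return sign(dot + biasWeight)
    }
}
```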

But how do we pick the right weights? The answer is that we don’t. There’s an algorithm for that, backpropagation, though it seems no one really calls it backpropagation until there are multiple layers. Backpropagation is a fancy way of saying that we propagate the error in the output back to the inputs:

  1. See how far away the prediction of our network is from the expected output.
  2. Take a step in weight parameter space in the direction that minimizes the error. If you remember your calculus lessons, this is a step in the negative gradient direction. How big a step to take depends on the size of the error and on how fast we want to move in that direction. We don’t want to take too big or too small a step: in the former case we can easily shoot past the optimal weights, and in the latter we might take a long time to get there.
  3. Return to step 1 and repeat until the error is “small enough”.

We could write steps 1 and 2 in code as
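A minimal sketch against the Perceptron struct above; backProp is the name used for this function below, while the learning-rate parameter is my own addition:

```swift
extension Perceptron {
    // Propagate the output error back to the weights (a sketch).
    mutating func backProp(inputs: [Double], expected: Double, learningRate: Double = 0.01) {
        // Step 1: how far the prediction is from the expected output.
        let error = expected - activate(inputs)
        // Step 2: step each weight in the direction that reduces the error,
        // scaled by the error and the learning rate.
        for i in weights.indices {
            weights[i] += learningRate * error * inputs[i]
        }
        biasWeight += learningRate * error   // the bias input x0 is 1
    }
}
```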

Note that we could also use the squared error instead of the simple difference. The full training happens when we pass a sequence of input-output pairs to the backProp function. With each call to backProp, the weights of the perceptron are adjusted to decrease future errors. To handle the training, I’ve made a PerceptronTrainer struct, along with a struct to hold the training data.
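A sketch of what those two structs might look like (the names TrainingDatum and epochs are my guesses):

```swift
// A sketch: one labeled example, the inputs and the expected output (+1 or -1).
struct TrainingDatum {
    let inputs: [Double]
    let expected: Double
}

// Repeatedly feeds the training data to the perceptron's backProp function.
struct PerceptronTrainer {
    var perceptron: Perceptron
    let data: [TrainingDatum]

    mutating func train(epochs: Int = 100) {
        for _ in 0..<epochs {
            for example in data {
                perceptron.backProp(inputs: example.inputs, expected: example.expected)
            }
        }
    }
}
```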

Training the Perceptron

We want our perceptron to tell us if a given point is above or below a line in the xy plane. You can pick any line you want to, but I’ll take a simple one like y(x)=3x+1. We can generate training data by

  1. picking N input points at random and computing the line’s y value at each point’s x coordinate, and
  2. determining whether each point lies above or below the line.

Then create a PerceptronTrainer, pass the training data to it, and call the train function.
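Putting that together, a sketch of the data generation and training for y(x) = 3x + 1 (makeTrainingData and the sampling ranges are my own choices):

```swift
// Generate labeled points: +1 if the point lies above the line y = 3x + 1, -1 if below.
func makeTrainingData(count: Int) -> [TrainingDatum] {
    return (0..<count).map { _ in
        let x = Double.random(in: -10...10)
        let y = Double.random(in: -35...35)
        let label: Double = y > 3 * x + 1 ? 1.0 : -1.0
        return TrainingDatum(inputs: [x, y], expected: label)
    }
}

// Two input weights (for x and y) plus the bias weight.
var trainer = PerceptronTrainer(
    perceptron: Perceptron(weights: [0.0, 0.0], biasWeight: 0.0),
    data: makeTrainingData(count: 1000)
)
trainer.train()
```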

How Well Does the Perceptron Work?

Let’s pass 100 random inputs to the perceptron and see how often the predictions are correct. We’ll also create a new, untrained perceptron and see how often its predictions are correct.
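A sketch of the comparison, reusing makeTrainingData from above:

```swift
let testData = makeTrainingData(count: 100)

// An untrained perceptron with random weights, for comparison.
let untrained = Perceptron(weights: [Double.random(in: -1...1), Double.random(in: -1...1)],
                           biasWeight: Double.random(in: -1...1))

// Fraction of test points the perceptron classifies correctly.
func accuracy(of p: Perceptron, on data: [TrainingDatum]) -> Double {
    let correct = data.filter { p.activate($0.inputs) == $0.expected }.count
    return Double(correct) / Double(data.count)
}

print("trained:   \(accuracy(of: trainer.perceptron, on: testData))")
print("untrained: \(accuracy(of: untrained, on: testData))")
```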

I get anywhere from 88-100% correct for the trained perceptron and about 4-40% correct for the untrained perceptron. Not bad for a simple neural network and a simple problem.


  1. A very nice online book is Michael Nielsen’s Neural Networks and Deep Learning
  2. There were several papers written around the same time that talk about these issues. They’re behind a paywall, but you can probably get them on Sci-Hub: Multilayer feedforward networks are universal approximators; Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks; On the approximate realization of continuous mappings by neural networks; Approximation capabilities of multilayer feedforward networks.
  3. Usually, a perceptron’s output is 1 or 0. I have a specific use case in mind for which -1 is more convenient than 0. 

Probability Monad in Swift

I read about the probability monad in The Frequentist Approach to Probability a couple of years ago and thought it was pretty neat. I decided to make one in Swift, as an exercise in learning the language, after having done the same in Clojure and Java 8.

If you don’t know what a monad is, don’t worry, it’s not important for this post. Just know that it lets you do neat stuff with probability distributions, programmatically speaking. Ok… on to the code.

Protocols for Probabilities

I initially tried to do this with protocols. I had two. One that let us randomly sample values from a probability distribution
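(A hypothetical sketch; the protocol and associated-type names are placeholders, and get() matches the function mentioned later.)

```swift
// A hypothetical sketch of the sampling protocol: anything conforming to it
// can produce random values drawn from its distribution.
protocol Samplable {
    associatedtype Value
    func get() -> Value               // draw a single random value
    func sample(_ n: Int) -> [Value]  // draw n random values
}
```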

and another one that allowed for parameterization, like for the Poisson distribution.
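(Again a hypothetical sketch, with placeholder names.)

```swift
// A hypothetical sketch of the parameterized protocol: the distribution is
// built from a parameter, e.g. the rate lambda of a Poisson distribution.
protocol ParameterizedSamplable: Samplable {
    associatedtype Parameter
    init(parameter: Parameter)
}
```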

You may have noticed that there’s no protocol that allows you to map one distribution into another, which is what would make this into a monad. That’s because I had not yet figured out how to do it with structs or with classes. It’s easy to map one set of values drawn from a distribution into another set of values according to a function. But I really needed to create a new struct with a specific get() function. And then I remembered that functions were first class values in Swift!

Turns out you don’t need protocols or classes for this at all. You can do it all with a pretty simple struct!

Probability Distributions via Closures

With a single generic struct, we have everything we need for the probability monad. To convert one distribution into another, we need only pass in a function that maps elements of one distribution into elements of the other.
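A sketch of that struct: it wraps a closure that produces random values, and map builds a new distribution by transforming whatever the wrapped closure returns (the name Distribution is my choice; get matches the function mentioned earlier):

```swift
struct Distribution<A> {
    let get: () -> A   // draw one random value from the distribution

    // Draw n samples.
    func sample(_ n: Int) -> [A] {
        return (0..<n).map { _ in self.get() }
    }

    // Map this distribution into another by transforming its values.
    func map<B>(_ f: @escaping (A) -> B) -> Distribution<B> {
        return Distribution<B>(get: { f(self.get()) })
    }
}
```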

Let’s see what we can do if we start from the uniform distribution.
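For example, using drand48 (hence the Foundation import mentioned below):

```swift
import Foundation

// The uniform distribution on [0, 1).
let uniform = Distribution<Double>(get: { drand48() })
```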

For starters, we can easily generate the true-false distribution by mapping the Uniform distribution with a function that turns a double into a boolean. From there, it’s straightforward to transform the true-false distribution into the Bernoulli distribution.
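For instance:

```swift
// Map uniform doubles to booleans...
let trueFalse = uniform.map { $0 < 0.5 }

// ...and map those booleans to 1s and 0s to get a Bernoulli distribution.
let bernoulli = trueFalse.map { $0 ? 1 : 0 }
```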

By passing the appropriate transformation function or closure to the map function, one type of distribution can be converted into another.

BTW, to use the random number generators you’ll need to import Foundation. If you’re on a Mac, you could also import GameplayKit’s GKRandomSource. Or you can always use something from C.

Composing Distributions

If you want to compose distributions, our struct needs an appropriate function: flatMap.
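A sketch of flatMap, added as an extension to the struct above: each draw first samples a value, then samples from the distribution that value gives rise to.

```swift
extension Distribution {
    // Compose distributions: draw from self, feed the result to f,
    // then draw from the distribution f returns.
    func flatMap<B>(_ f: @escaping (A) -> Distribution<B>) -> Distribution<B> {
        return Distribution<B>(get: { f(self.get()).get() })
    }
}
```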

A standard example is the distribution you get from combining a pair of six-sided dice. We can start with a single die:
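Something like:

```swift
// A fair six-sided die: a uniform distribution over 1...6.
let die6 = Distribution<Int>(get: { Int(arc4random_uniform(6)) + 1 })
```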

Next, we can use the flatMap function to compose the distributions of a pair of six-sided dice by passing in a function that provides the behavior we need.
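For the pair of dice, the function we pass in rolls the second die and adds it to the first:

```swift
// The distribution of the sum of two six-sided dice.
let twoDice = die6.flatMap { first in die6.map { second in first + second } }
```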

Now that you’ve seen all the pieces, here’s the final form of the probability distribution struct:
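A sketch of the full struct, with the pieces above plus a couple of sample-based statistics (the prob and mean signatures are my own guesses):

```swift
struct Distribution<A> {
    let get: () -> A

    // Draw n samples from the distribution.
    func sample(_ n: Int) -> [A] {
        return (0..<n).map { _ in self.get() }
    }

    // Transform this distribution into another.
    func map<B>(_ f: @escaping (A) -> B) -> Distribution<B> {
        return Distribution<B>(get: { f(self.get()) })
    }

    // Compose this distribution with a distribution-producing function.
    func flatMap<B>(_ f: @escaping (A) -> Distribution<B>) -> Distribution<B> {
        return Distribution<B>(get: { f(self.get()).get() })
    }

    // Estimate the probability that a sampled value satisfies the predicate.
    func prob(samples n: Int = 10000, _ predicate: (A) -> Bool) -> Double {
        return Double(sample(n).filter(predicate).count) / Double(n)
    }
}

extension Distribution where A == Double {
    // The sample mean, as an estimate of the distribution's mean.
    func mean(samples n: Int = 10000) -> Double {
        return sample(n).reduce(0, +) / Double(n)
    }
}
```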

Computing Statistics for a Distribution

You may have noticed a few functions for summary statistics (mean, etc) and probability computation. The most important function is prob, which lets you use predicates to ask questions of the distribution. There are a few basic examples of what you can do below.
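For example, with the distributions defined earlier:

```swift
uniform.prob { $0 < 0.25 }   // ≈ 0.25
twoDice.prob { $0 == 7 }     // ≈ 6/36
twoDice.prob { $0 >= 10 }    // ≈ 6/36  (sums of 10, 11, and 12)
uniform.mean()               // ≈ 0.5
```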

If I can figure out how to properly implement the given() function, I’ll add that in a future post. I also want to be able to handle more interesting distributions, like those seen in a probabilistic graphical model.

All the code is on Github.

Symmetry

Symmetry is one of the most powerful ideas in physics. Emmy Noether, the most important woman in the history of mathematics, determined that every continuous symmetry in a physical system results in a conserved quantity. This is called Noether’s Theorem. Every beginning physics student learns that an isolated system’s total energy, momentum, and angular momentum are conserved. Noether’s Theorem explains why. Each law comes from one of our universe’s symmetries. Let’s take a look.


Conservation of Energy

If each conservation law stems from an underlying symmetry, what causes energy conservation? First of all, what does it mean for energy to be conserved? Suppose we take a box and count the total energy inside. If we come back later and count again, we should get the same answer, provided nothing entered or left the box. So, for an isolated system, whether we turn the clock forward or backward, we should see the same energy. That’s energy conservation. If you pick any system and study its “equations of motion”, which are the equations that govern its behavior, you’ll find that the equations look the same at any time t0 and at time t0 + t1. Or in other words, the laws of physics are symmetric in time. Energy is conserved because the laws of physics (or the universe) are homogeneous in time, i.e., whether we turn the clock forward or backward the equations are the same.

Conservation of Momentum

Similar to the discussion for energy, if we compute the total momentum for the box now and the total momentum at a later time, then the total momentum should be unchanged. If we look at the laws of physics, like we did for energy, we’ll find that they don’t explicitly depend on position. The laws of physics are the same no matter where you are in the universe. If we are at position x0 and then shift to position x0 + x1, the laws of physics are unchanged. This symmetry is called the homogeneity of space and it causes momentum conservation.

Conservation of Angular Momentum

You’re probably catching on now. When the laws of physics don’t explicitly depend on a variable like time or space, we get a conservation law. So what about rotational symmetry? If we rotate a system through an angle, the equations of physics are the same. So, there is no preferred direction in space. This symmetry is called the isotropy of space and it causes the conservation of angular momentum.

The Underlying Mathematics

If you look at classical (or quantum) mechanics, you’ll encounter Noether’s theorem and see that there’s a conservation law associated with every pair of canonically conjugate quantities. Each member of a pair is an “observable” quantity that we could measure. One member of the pair is what we usually think of as an independent variable, like time, and can produce transformations of the system (like translation in space or time). The other is a physical quantity like energy. So, at this level of understanding, time and energy are canonically conjugate. The same is true for space and momentum and for angular position and angular momentum.
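As a minimal sketch of the pattern in the Lagrangian picture (one generalized coordinate q; this example is my addition): the Euler-Lagrange equation is

\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q},

so if L has no explicit dependence on q (a translation symmetry), the right-hand side vanishes and the conjugate momentum p = \frac{\partial L}{\partial \dot{q}} is constant in time. Likewise, if L has no explicit dependence on t, the energy E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L obeys \frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0.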

Summary

Every conservation law stems from some underlying continuous symmetry. Energy conservation is due to the homogeneity of time, momentum conservation to the homogeneity of space, and angular momentum conservation to the isotropy of space.

To learn more about this, you can take a look at any of the following resources, listed in order of increasing sophistication:

How to see

I have been a fan and follower of Edward Tufte’s work, and data visualization in general, since I was in graduate school, when I came across The Visual Display of Quantitative Information sharing a bookshelf with the textbooks in the condensed matter theory office suite at Penn State. I even got to attend his course when I was working in the DC area. By the way, if you have the chance to go sometime, do it. It was time well spent and all-around awesome!

Anyway, I wanted to share an interesting parallel that I noticed between his current book/film project, The Thinking Eye, which is about how to see and reason about what one sees, and yoga. If this sounds like a spurious connection, bear with me for a few more lines. Your patience will pay off.

Tufte and Seeing

In 2013, Tufte was interviewed on NPR’s Talk of the Nation, where he discussed, among other things, “seeing”. Here’s some of what he had to say:

“Well, first, it’s about how to see, intensely, this bright-eyed observing curiosity. And then what follows after that is reasoning about what one sees, and asking: What’s going on here? And in that reasoning, intensely, it involves also a skepticism about one’s own understanding. The thinking eye must always ask: How do I know that? That’s probably the most powerful question of all time. How do you know that?”

“And so the seeing right then is being transformed into information, into thinking, right as that step from the retina to the brain. And the brain is really busy, and it likes to economize. And so it’s quick to be active and jump to conclusions. So if you’re told what to look for, you can’t see anything else. So one thing is to see, in a way, without words.”

Seeing and Yoga

The seeing that Tufte is talking about, when directed inward, becomes a powerful process of self-transformation and a powerful way to strengthen your mind.

In Yoga, much has been said about the importance of bringing your attentiveness to its peak, and about what happens when you can direct your attention toward something without a break. In this direction, yogi and mystic Sadhguru has said:

“The only reason why someone is a mystic and someone is not, is lack of attention. Someone is an artist, someone is not. Why? Lack of attention. Someone can shoot straight and someone cannot. Why? Lack of attention. From the simplest to the highest things, it is just lack of attention.” (Isha Blog, 2013)

“This is the basis of yoga. There is no corner in the universe that will not yield to you if you know how to pay attention to it… It is only a question of the depth of your attention.”

Last October, Sadhguru was speaking to a large group of business leaders about attention being the key to success in their endeavors. Here’s an excellent Youtube video from the conference (Insight: The DNA of Success) where this topic is covered: 

When I’m teaching Hatha Yoga classes, I often emphasize the importance of visualizing yourself getting into and out of a given posture, especially those that you aren’t able to fully get into. There are several reasons for this, one of which is that it increases your ability to pay attention. But instead of focusing on something external, your gaze is turned inward as you (try to) see yourself in each asana in as much detail as possible. Does that sound easy? Let’s do an experiment.

Hold up one of your hands. Take one minute and look at it. See it clearly. Now, close your eyes and visualize your hand in as much detail as possible. See it millimeter by millimeter. Can you see it? We’ve looked at our hands countless times in our lives, but we can’t visualize them very well. Instead, let’s start with something simpler, like a pen, or even better, a line segment drawn on a sheet of paper. Take about five minutes and look at each part of the line, from one end to the other. Look at every point. See it in as much detail as possible. When you can close your eyes and see the line clearly, you can move on to the pen. Once you can recreate the pen in your mind, you’re ready to move on to other things, like your hand. If you work at this, your ability to see will increase by leaps and bounds.

Comments on the Apple Watch

My Apple Watch arrived yesterday. I chose the 42mm stainless steel model, with the black leather loop. It looks great and feels comfortable on my wrist. The only thing that I find a bit odd is that I can sometimes hear the magnets in the band moving against each other when I move my wrist. Not a deal-breaker, just a minor oddity.

I wanted the Apple watch primarily for the fitness tracking aspects. I was thrilled when I heard it had a gyroscope, accelerometer, and heartbeat sensor. I was less thrilled when I learned that Apple hasn’t opened up sensor access in WatchKit. I hope they’ll change their minds about that in the future. I have (or had) an app in mind that needs such data. Also, it would be interesting to look at gyroscope or accelerometer data from a session of Angamardana, which is the most intense workout I’ve ever encountered.

I do wish the watch’s Workout app would allow me to rename the “Other” exercise to something of my choosing or at least rename it in the Health app on my iPhone. The ironic thing is that if you manually add workout data in the workout section of the Health app, there are a myriad of workout types. The Health app does allow data exports though, so that’s nice.