Functional Programming Fundamentals

In the past few years, React and Redux have generated a surge of interest in Functional Programming, which we often take for granted. However, many of us never got a chance to learn the fundamentals.

In this post, we’ll cover the fundamentals of Functional Programming and how they apply to modern JavaScript. We’ll also avoid unnecessary jargon like monads and functors and stick to concepts that will make our code better.

If you'd like to learn all the unnecessary jargon, check out my other posts, What The Functor and Mary Had a Little Lambda. I also have an article on map, filter, and reduce.

Note: If you wanna see this post in talk form, you can watch it here.

None of this is new

Before we dive into actual concepts, I want to lead by saying that none of this is new. Functional Programming has existed for over 60 years. Lisp, a popular FP language, was first developed in 1958.

What is new is the popularity of FP, which has surged in the past few years.

When is Functional Programming most useful?

Before we cover what functional programming is, I think it's helpful to define when we use it.

Functional Programming is most useful when we’re doing 1:1 data transformations.
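A minimal sketch of the kind of layer in question (the names and shapes here are illustrative, not the original snippet):

```javascript
// Store shape: users keyed by id, which is convenient for lookups.
const usersById = {
  u1: { id: 'u1', name: 'Hermione' },
  u2: { id: 'u2', name: 'Ron' },
};

// The functional layer: a 1:1 transformation from the store's shape
// to the array the UI wants to render.
const convertUserMapToArray = (userMap) => Object.values(userMap);

convertUserMapToArray(usersById);
// → [{ id: 'u1', name: 'Hermione' }, { id: 'u2', name: 'Ron' }]
```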

In the code snippet above, we have some types listed for a data store, a component, and a functional layer in between them.

This functional layer, convertUserMapToArray, converts the data from a format that makes sense to the store to a format that makes sense for the UI. This is what we're going to zero in on today.

What is Functional Programming?

Functional programming (often abbreviated FP) is the process of building software by composing pure functions. A pure function is a function which given the same inputs, always returns the same output, and has no side-effects. [9]


For example, we can trust that 2 + 2 is always 4 and 3 x 3 is always 9.
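In code, the same trust applies to any pure function:

```javascript
// Pure: the output depends only on the input, and nothing else changes.
const add = (a, b) => a + b;
const square = (n) => n * n;

add(2, 2);  // always 4
square(3);  // always 9
```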

Side Effects

The no side-effects bit is particularly important, because this is what allows us to trust that a function will always behave the same in any environment. This will help with future concepts.

Now side-effects aren't inherently bad, but you should isolate them to parts of your codebase where you can easily identify them.

Let's take a look at some examples of side effects.


Modifying the argument that’s passed in [1].
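A sketch of what that can look like (takeOne is a hypothetical helper):

```javascript
const arr = [1, 2, 3];

// Impure: mutates the array it was given, at its original reference.
const takeOne = (list) => list.pop();

takeOne(arr); // 3
takeOne(arr); // 2
takeOne(arr); // 1
takeOne(arr); // undefined: arr has run out of values
```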

In the example above, we're changing the value of arr at the reference where it lives. As a result, we can't predict what this function will return at any point. What happens when arr runs out of values?

Shared State

Using some form of global state [1].
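For instance (a hypothetical counter built on a global):

```javascript
let i = 0;

// Impure: both functions read and write state outside themselves.
const increment = () => ++i;
const double = () => (i = i * 2);

increment(); // 1
increment(); // 2
double();    // 4, but only because of the calls before it
// And if someone assigns i = 'oops', the next increment() returns NaN.
```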

In this example, we can't predict what these functions will return because they depend on some external value. The order of the function calls will matter.

Furthermore, what happens if someone else changes the value of i? Do you feel like googling what string++ is?

Asynchronous Code

Code that doesn't execute immediately [1].
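A small sketch:

```javascript
let user = null;

// This callback runs later, not now.
setTimeout(() => {
  user = { name: 'Harry' };
}, 0);

console.log(user); // null: the callback hasn't fired yet
```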

This one deserves an extra mention because it's a necessity. We have to do some things asynchronously. We have to hit APIs; we have to fetch data.

This brings me back to my earlier point. Side effects aren't inherently bad, but they should be properly isolated to make your code more predictable.

Example Time
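Here's a sketch of the kind of code in question (illustrative, not the original snippet):

```javascript
// Pure: returns a new object at a new reference.
const clone = (wizard) => ({ ...wizard });

// Impure: mutates its argument, but returns it, so it looks pure.
const killParents = (wizard) => {
  wizard.parents = 'dead';
  return wizard;
};

// Impure: mutates its argument and returns nothing.
const addScar = (wizard) => {
  wizard.scar = true;
};

const a = { name: 'Harry', parents: 'alive', scar: false };
const b = clone(a);
const c = killParents(b);
const d = addScar(c);
```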

Looking at the code above, we might expect each variable to end up with its own fresh copy of the object.

Unfortunately, this isn't what we get.

What happened?

Well, the first function, clone, is a pure function and works as expected. It produces a new object at a new reference.

killParents is not a pure function. It mutates the given argument and marks the parents as dead. It does, however, return the object, so it appears we're getting a new copy.

addScar really doesn't care. It mutates the original object and then returns nothing, so addScar(c) returns undefined even though it also modifies c.

As a result, a is pointing to the original reference, b and c are pointing to the cloned copy (with dead parents and a scar), and d is pointing to nothing.

Declarative and Imperative

Declarative code describes what it does.

Imperative code describes how it does it.
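A rough stand-in for the two samples (illustrative; the declarative line is React-style JSX, shown as a comment):

```javascript
// Declarative (React): describe *what* we want and trust the library.
// const Counter = ({ count }) => <p>Count: {count}</p>;

// Imperative (vanilla JS): find the DOM node and update it ourselves.
function renderCounter(node, count) {
  node.innerText = `Count: ${count}`;
}
```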

Looking at the two code samples above, we can see a stark contrast. The top block is written using React and just says "we want a counter on the page." This code trusts React, the declarative library, to get it right.

The bottom block is using vanilla JS. It explicitly finds a DOM node and updates it. While this is fine for such a simple example, it won't scale well. What happens when we want multiple counters in multiple locations? React is ready for that; with vanilla JS we have a lot of work to do.

Now it's worth noting that declarative code will always end up either compiling down to or being processed by something imperative. What do I mean by that? Well, something has to do the DOM mutation; in this case that's React. Even functional languages like Lisp or Haskell eventually get compiled to imperative machine code.
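As a concrete comparison, here is a declarative version next to its imperative equivalent (a sketch; the post itself reaches for lodash, whose _.keyBy does this job):

```javascript
// Declarative: what we want, a dictionary of files keyed by id.
const keyFilesById = (files) =>
  Object.fromEntries(files.map((file) => [file.id, file]));

// Imperative: how to build it, step by step.
function keyFilesByIdImperative(files) {
  const result = {};
  for (let i = 0; i < files.length; i++) {
    const file = files[i];
    result[file.id] = file;
  }
  return result;
}
```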




Now these two functions achieve the exact same thing. They take a list of files and return a dictionary of files where the key is each file's id.

But the imperative one is sloppier. It's 8 lines of code instead of 3. It also leaves a lot of room for error.

What happens if a file doesn't have an id? What happens if we get our exit clause wrong? And fun fact, there are faster ways to do for loops (they're ugly), but we can trust lodash to do those under the hood.

Functional Concepts

It's now time to move on to functional concepts. Let's take these in pairs, starting with separation and composition.


If you try to perform effects and logic at the same time, you may create hidden side effects which cause bugs in the logic. Keep functions small. Do one thing at a time, and do it well. [4]


Plan for composition. Write functions whose outputs will naturally work as inputs to many other functions. Keep function signatures as simple as possible. [4]

I pulled these quotes from an article called The Dao of Immutability. What we're saying here is we want small functions that easily chain together to make larger ones. Let's take a look at an example:
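A sketch of three such functions (illustrative names):

```javascript
// Sort files alphabetically by name.
const sortFiles = (files) =>
  [...files].sort((a, b) => a.name.localeCompare(b.name));

// Filter in: keep only the PDFs.
const filterPdfs = (files) =>
  files.filter((file) => file.extension === 'pdf');

// Map each file to its name, a 1:1 transformation.
const getNames = (files) => files.map((file) => file.name);
```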

These three functions are all super simple. They take one argument and do a transformation on that argument. The first function sorts the list of files.

The second function is a filter. filter is an FP term for filtering in, as opposed to filtering out. So this code returns an array of files whose extensions are PDFs.

The last function is just a map. map is another FP term, a 1:1 transformation over a list to a new version of that list. In this case we're looping over the list of files and returning the key name from each of them.

Now we can combine them together:
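With lodash this would be _.flow; a hand-rolled stand-in keeps the sketch self-contained:

```javascript
const sortFiles = (files) =>
  [...files].sort((a, b) => a.name.localeCompare(b.name));
const filterPdfs = (files) =>
  files.filter((file) => file.extension === 'pdf');
const getNames = (files) => files.map((file) => file.name);

// Minimal stand-in for lodash.flow: compose left to right.
const flow = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

// Composed...
const getPdfNames = flow(sortFiles, filterPdfs, getNames);

// ...or written out by hand.
const getPdfNamesManual = (files) => getNames(filterPdfs(sortFiles(files)));
```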

Now both of these functions are equivalent. They take a list of files, filter for pdfs, get their names, and return a sorted list of file names.

lodash.flow does have some optimizations under the hood, but the second syntax might be more readable to you. Do whatever you think is best.

Let's move on to the next set of concepts.


The true constant is change. Mutation hides change. Hidden change manifests chaos. Therefore, the wise embrace history. [4]


Memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. [5]

These pair really nicely. Immutability says we're never gonna mutate an argument, only return a new one, and memoization allows us to remember outputs. Let's look at an example combining them.
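A sketch, with a minimal stand-in for lodash.memoize (which caches by the first argument):

```javascript
// Minimal stand-in for lodash.memoize: cache results by first argument.
const memoize = (fn) => {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
};

// Immutable: returns a new wizard instead of mutating the old one.
const killSibling = (wizard) => ({
  ...wizard,
  siblings: wizard.siblings - 1,
});

const killSiblingMemoized = memoize(killSibling);

const ron = { name: 'Ron', siblings: 5 };
const sadRon = killSiblingMemoized(ron);

sadRon === ron;                      // false: a brand new object
killSiblingMemoized(ron) === sadRon; // true: the cached result
```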

What's going on here? Well, we have a function called killSibling (dark, I know) that takes a wizard. It copies over the wizard and decrements the number of siblings that wizard has. Please ignore any glaring bugs here; I wanted to keep this simple.

We then pass killSibling to lodash.memoize, which produces a new function called killSiblingMemoized. Now when we call killSiblingMemoized on ron, it returns a brand new object. If we do a strict equality check, it returns false. Of course it does; his brother died, he's a different person now.

Because we've memoized this, if we repeat this call to killSiblingMemoized we'll get the exact same version of ron we got after the first call.

We've covered a lot, now let's move onto some advanced topics.

Higher Order Functions

In mathematics and computer science, a higher-order function is a function that does at least one of the following:

  • takes one or more functions as arguments (i.e., procedural parameters),
  • returns a function as its result. [6]

This seems much more intimidating than it actually is. In fact, I bet you've written some of these before without realizing it.

Function that takes a function
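A familiar example:

```javascript
// `then` takes a function as its argument, so it's higher order.
Promise.resolve({ name: 'Harry' })
  .then((wizard) => wizard.name)
  .then((name) => console.log(name)); // logs "Harry"
```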

These are just callbacks. In this case then is the higher order function that takes an anonymous function as its argument.

Function that returns a function
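For example:

```javascript
// counterGenerator returns a function that closes over `count`.
function counterGenerator() {
  let count = 0;
  return () => ++count;
}

const counter = counterGenerator();
counter(); // 1
counter(); // 2
```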

Another thing you've probably done before, closures. In this case counterGenerator is the higher order function because it returns a function.

Function that takes a function and returns a function

Last, the final form, a function that does both. But you've already seen these.

Both memoize and flow are higher-order functions: they take a function (or several) as arguments and return a new function.


Currying is the technique of translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument. [7]

This seems a little intimidating but I think an example will help.
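Here's the sum example, with a minimal stand-in for lodash.curry:

```javascript
// Minimal stand-in for lodash.curry.
const curry = (fn) => {
  const curried = (...args) =>
    args.length >= fn.length
      ? fn(...args)
      : (...rest) => curried(...args, ...rest);
  return curried;
};

const sum = (a, b, c) => a + b + c;
const curriedSum = curry(sum);

curriedSum(1, 2, 3); // 6
curriedSum(1)(2)(3); // 6
curriedSum(2, 3);    // a new function waiting for one more argument
curriedSum(2, 3)(4); // 9
```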

We have a function sum which takes arguments a, b, and c. When we pass sum to lodash.curry (a higher order function), it becomes a new function that keeps returning functions until a, b, and c have all been filled.

If we give it the arguments 1, 2, 3, it executes immediately and returns 6. If we give it just 2, 3, it returns a new function waiting for one more argument.

This works for all permutations of arguments.

Partial Application

Partial application (or partial function application) refers to the process of fixing a number of arguments to a function, producing another function of smaller arity. [8]

Note: Arity is just a term for the number of arguments a function takes. [2]

With partial application, we can take a function and bind arguments to it. Let's take a look at an example.
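A sketch, with a minimal stand-in for lodash.partial:

```javascript
// Minimal stand-in for lodash.partial: fix some leading arguments.
const partial = (fn, ...fixed) => (...rest) => fn(...fixed, ...rest);

const learnSpell = (spell, wizard) => ({
  ...wizard,
  spells: [...wizard.spells, spell],
});

const learnExpelliarmus = partial(learnSpell, 'Expelliarmus');
const learnExpectoPatronum = partial(learnSpell, 'Expecto Patronum');

learnExpelliarmus({ name: 'Harry', spells: [] });
// → { name: 'Harry', spells: ['Expelliarmus'] }
```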

In this case we have a function called learnSpell that takes a spell and a wizard. If we pass learnSpell to lodash.partial with a spell, it will create a new function that teaches a wizard a predefined spell. This way we can generate learnExpelliarmus and learnExpectoPatronum.

We actually saw some examples of this earlier when we were talking about composition.

All of these functions are partially applying an argument to a lodash function. In fact it's functionally equivalent to this block.

Much like with flow, you should do what you're comfortable with here. Both methods work well.

What Are We Optimizing For?

  1. Make our code more readable
  2. Make our code easier to reason about
  3. Make our code easier to test
  4. Make our users happier

We're not trying to reduce lines of code.


Thanks so much for reading! If you'd like to buy me a drink you can do so here, or follow me on Twitter @MatthewGerstman.

If you'd like to learn all the scary jargon check out my other posts What The Functor, Mary Had a Little Lambda, and Map, Filter, Reduce.