Functional programming and F#: Introduction


Computer programmers sometimes mistake brevity for elegance, especially when discussing the relative merits of programming languages. Haskell partisans, for example, love to show how one can implement QuickSort in one line of inscrutable gibberish that looks like somebody had an epileptic seizure with the caps lock on. Haskell is a great language, but not because it saves you some typing. If being concise were really all that important, one might as well define a new programming language called ZipC, whose “compiler” is defined as “gunzip -c $1 | gcc”. You’ll get some really concise and “elegant” programs out of that, I’m sure! I’m usually pretty wary of academic languages like Lisp and ML, which often put computer science orthodoxy above usability. I’m just not interested in investing the time to learn a new language just so that I can save some typing and use cute tricks. The time saved is likely to be more than offset by the inefficiency of maintaining competence in multiple special-purpose languages.

F#, however, is the first new language in a long time that I’ve felt is worth taking the time to learn. Developed by Microsoft Research in England, F# is a functional language with roots in ML. Unlike many academic functional languages, however, F# is a pragmatic mix of imperative, object-oriented programming and functional constructs. Instead of forcing a paradigm on you, it simply makes several available. It can also leverage the full .NET framework, and is thus just as capable of implementing a GUI as it is an abstract recursive algorithm. It produces code that’s about as fast as C#, and it should only get faster with improvements in its compiler and in .NET’s JIT compiler. A novel aspect of F# (in the context of .NET languages) is that it can be used as an interactive scripting language from within Visual Studio, allowing for interactive data visualization.

After working with it for a month or so, I find it to be a highly productive and expressive language, especially for numerical algorithms. It manages to be concise and high-level while maintaining coherence and readability. Well-written functions in F# are easier to follow and debug, relative to C++ or C#, and there are fewer opportunities for bugs to begin with. More than just cute syntax, it’s a language that actually encourages you to think differently. Programs implemented well in F# are a closer representation of the underlying concept and thinking, and are less obscured with mundane details. Perhaps the best argument of all is empirical: the test program I wrote to compute fractal Newton basins worked the first time I compiled it, something that virtually never happens to me with C or C++.

In this post, I’ll provide a rough overview of functional programming and F#. There are much better introductions out there, but I nonetheless wanted to write this as a way to help myself learn about functional programming. I also figure F# is new enough that many of the people who read this blog might not yet have heard of it, and any small additional exposure this great language gets is worth it. This is not meant to be a tutorial that will bring you up to speed on F#, but is intended to give you enough of an idea about it that you can decide whether or not you want to learn more.

In a follow-up article, I’ll step through a short program in F# which uses some of the example functions I define below to implement a parallel computation of some pretty fractals generated by running Newton’s method for polynomial roots in the complex plane. In a third post, I’ll talk about some of the libraries available to speed up F# development. Because of the functional nature of F#, libraries written for F# can do some pretty impressive things. For example, the fractal generator detailed in the next post uses a third-party library function that handles all the details of multithreaded task parallelization through the surprisingly simple use of higher-order functions. In fact, parallelizing the program required typing exactly nine characters.

If you’d like to experiment with F#, you will first need some version of Visual Studio. I’m not sure if one of the free Express versions will work. (Someone please let me know if they have the answer to this.) The F# compiler is freely available from Microsoft Research, and integrates nicely with VS 2005. It is also reported to run under Mono on Mac OS X and Linux, though I haven’t verified this myself.

Functions as variables

A functional language is characterized by several key ideas. The core notion is that functions are fundamental objects, on par with variables. They can thus be passed to, and returned by, other functions. This enables some very efficient and conceptually satisfying ways of expressing algorithms, as we’ll see shortly. A functional language does nothing C or C# can’t do (in the hype around a new language, it’s easy to forget the old Church-Turing thesis); it’s just that it can often express things much more simply, in a way that is closer to the way we think.

One oddity in F#, and your first hint that you’re dealing with a different kind of language, is the way function calls are defined. In a sense, an F# function really only ever takes one argument. To get a function to take multiple arguments, you are essentially writing a function which takes a single argument and returns another function which takes the second argument (and so on). For example, the built-in function which adds two integers, ( + ), is listed in F# as having type (int -> int -> int). The interpretation of this odd notation is that it’s a function which takes an integer and returns a function of type (int -> int), which is itself a function which takes an integer and returns an integer. Perhaps a simpler way to think about this type is with the bracketed notation (int -> (int -> int)). When you write

z = ( + ) x y

what is happening, at least conceptually, is that the compiler greedily evaluates ( + ) x, which returns a function, and then it applies that function to y. (Don’t worry, F# also has infix syntax for binary operators so you can also just write the familiar z = x + y.)

One consequence of this is that you can do things like define a new function which increments an integer by 2 in terms of the ( + ) function with the following syntax

let incrtwo = ( + ) 2;;
let shouldbethree = incrtwo 1;;

This technique of making functions with multiple inputs from a chain of higher-order functions which each take single arguments is known as function currying. Granted, it’s pretty pointless in this static example, but consider the following interactive F# fragment:

> let makeincrementor s = ( + ) s;;
val makeincrementor : int -> (int -> int)
> let incrbyfive = makeincrementor 5;;
val incrbyfive : int -> int
> incrbyfive 6;;
val it : int = 11

The first line defines a function which takes an integer and returns a new function which increments its input by a run-time defined value s. The second applies this to define a new function which increments an integer by 5. So, function currying can be used to generate new functions dynamically. One exciting aspect of F# being a .NET language is that, in theory, this allows for a nice clean syntax for automatically generating specialized compiled functions on the fly during run time. (If anybody knows whether or not this already actually happens, please leave a comment with the answer.)

All functions in F# are inherently curried functions. Fortunately, there is syntax to write such functions without having to keep track of this fact. You can simply write

let myplus x y = x + y

and you automatically have a curried function of type (int -> int -> int). The fact that it will assume you want integer types all around is part of the automatic type inference of F#, which you will find to be both a blessing and a curse in practice.
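A quick sketch of how that plays out in practice (myplusf is just an illustrative name):

```fsharp
// With no annotations, type inference defaults to int -> int -> int.
let myplus x y = x + y

// Annotating a single argument is enough to flip the whole signature
// to float -> float -> float.
let myplusf (x : float) y = x + y

printfn "%d" (myplus 2 3)       // 5
printfn "%g" (myplusf 2.5 0.5)  // 3
```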

The use of functions as arguments to other functions allows for tremendous abstraction, and certain “design patterns” which are no more than conceptual models in imperative languages can actually be implemented as actual library routines in functional languages. One very powerful example of these so-called higher-order functions is F#’s List.mapi. It takes two arguments: a function and a list. It builds a new list by looping over the input list and calling the supplied function once per element, passing it two arguments: the element’s integer index and the element itself. A loop with an incremented local variable is one of the most common idioms in all of programming. How many times have you typed in the same for loop construct in C or C#? And how many times have you accidentally typed in the termination condition incorrectly, or missed a corner case?
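To sketch what that buys you, here is such a loop collapsed into a single call (the names are purely illustrative):

```fsharp
let xs = [10.0; 20.0; 30.0]

// Multiply each element by its index: the loop, its counter, and its
// termination condition all live inside List.mapi.
let scaled = List.mapi (fun i x -> float i * x) xs
// scaled = [0.0; 20.0; 60.0]
```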

As an example of how this is used in F#, below is a function which computes the derivative of a polynomial, passed in as a list of coefficients, ps, in ascending order:

let polyderiv (ps : float list) =
    match ps with
    | [] -> []
    | lowcoef :: pshigh ->
        List.mapi (fun k pcoef -> float (k + 1) * pcoef) pshigh
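As a quick sanity check (the definition is repeated here so the snippet stands alone), the derivative of 1 + 3x + 2x² is 3 + 4x:

```fsharp
let polyderiv (ps : float list) =
    match ps with
    | [] -> []
    | lowcoef :: pshigh ->
        List.mapi (fun k pcoef -> float (k + 1) * pcoef) pshigh

// [1.0; 3.0; 2.0] encodes 1 + 3x + 2x^2.
let deriv = polyderiv [1.0; 3.0; 2.0]
// deriv = [3.0; 4.0]
```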

Pattern Matching

This example also uses another common idiom in functional programming: pattern matching. Branching and control flow can often be handled very elegantly by matching an expression against a list of potential patterns. In the example function above, there are two possible situations. If the list is empty, we return an empty list. Otherwise, the list matches against lowcoef::pshigh, which is syntax for “take the first element as the local variable lowcoef and the remaining elements (if any) as pshigh.” This illustrates another powerful aspect of pattern matching: one simultaneously branches control while assigning the relevant local variables to be used in the corresponding branch. Not only is this pattern matching approach more powerful and conceptually intuitive than if/then or switch statements, it also has the added side benefit of doing the work of local variable creation. After all, if you are bothering to treat a given pattern as a special case, you probably need to use at least one of the elements that match that pattern.
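A tiny self-contained sketch of this branch-and-bind behavior, with purely illustrative names:

```fsharp
// Each pattern both selects a branch and binds exactly the pieces
// that branch needs.
let describe (xs : int list) =
    match xs with
    | []          -> "empty"
    | [x]         -> sprintf "one element: %d" x
    | x :: y :: _ -> sprintf "starts with %d then %d" x y

printfn "%s" (describe [])        // empty
printfn "%s" (describe [7])       // one element: 7
printfn "%s" (describe [1; 2; 3]) // starts with 1 then 2
```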

The last line uses the List.mapi higher-order function to apply an anonymous function to each coefficient (except the first), which simply multiplies the coefficient by its cardinal order in the list. Anonymous functions are to the function type what numerals are to the integer type: just as you can write 3 + 7 without having to make two variables to hold each value, you can likewise write a function without having to assign it a label. This makes for very efficient use of higher-order functions like List.iter or List.mapi, where a simple function to be passed to them as an argument can be defined on the spot. A curried function is also commonly used as an anonymous function.

Immutable Data

Another key idea, and one that is very odd at first, is the notion of immutable data structures. In most cases, when you declare a “variable” in F# you are really just creating a label for a value, and nothing more. You can bind a new value to that label, but that is shadowing, not mutation. This seems like a huge impediment, but in practice it’s surprisingly not so, especially in the typical idioms of functional programming. If this doesn’t make sense yet, don’t worry, it will when you see more example code. For now, just consider immutable data structures to be like values which can never be overwritten through a reference. Immutable data allows the compiler to do some unique optimizations and makes for efficient use of memory; if you build a new list by prepending an element to an existing list, for example, nothing needs to be copied, as the new list simply holds a reference to the old list as its tail. It also can make concurrent programming much more straightforward: provided you can express your algorithm in terms of immutable data structures, there’s no need to worry about locks, as no thread can have side effects that would affect the others. Code that is thread-safe by construction takes much of the pain out of programming concurrent algorithms. As many-core processors loom on the horizon, F# seems a rather timely language.
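Both points can be sketched in a few lines (demo is just an illustrative name):

```fsharp
let xs = [2; 3]
let ys = 1 :: xs   // ys is [1; 2; 3]; its tail is the very same xs,
                   // shared by reference rather than copied
// xs is still [2; 3] -- nothing was modified

// "Reassigning" a binding merely shadows the old label:
let demo () =
    let n = 5
    let n = n + 1  // a brand-new binding named n, not a mutation
    n              // 6
```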

Recursions

The last functional programming aspect I’ll talk about is recursion. Mathematical induction is a natural way of thinking about many algorithms, and recursion is therefore a natural way to implement them that is concise and easily verified as correct. The typical example is computing the Fibonacci sequence: if you know the last two Fibonacci numbers, you just add them to get the next one. A recursive algorithm to compute the nth number would call itself with a request for the (n-1)st and (n-2)nd numbers and then add them together, with control code in the function for handling the base cases of n = 0 and n = 1. As a concrete example, and in keeping with the polynomial theme, here is a rather ugly function which uses recursion to evaluate a polynomial at a given value:

let rec polyval_aux (ps : float list) x xpow accum =
    match ps with
    | [] -> accum
    | coef :: psrest -> polyval_aux psrest x (x * xpow) (accum + coef * xpow)

The rec keyword signifies to the compiler that this function calls itself, so it had better not try to evaluate the right-hand side too greedily. This function certainly isn’t the most computationally efficient way to do this (though it is rather memory efficient), but it’s a simple example of a recursive function, one that implements two more common idioms: tail recursion and accumulator arguments. Tail recursion is the idea that if the complete final value of the function is simply the function called again, the stack only has to keep track of one function call at a time. It’s probably not a big deal here, but in cases where the function is called recursively a great number of times, it can matter.
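To make the tail-call point concrete, here is a hypothetical pair of factorial functions; only the second is tail recursive, because its recursive call is the entire result:

```fsharp
// Not tail recursive: after the recursive call returns, there is still
// a multiplication left to do, so every call holds a stack frame alive.
let rec fact n =
    if n <= 1 then 1L else int64 n * fact (n - 1)

// Tail recursive: the recursive call is itself the final value, so one
// stack frame can be reused, much like a loop.
let rec factAux n accum =
    if n <= 1 then accum else factAux (n - 1) (int64 n * accum)

let fact' n = factAux n 1L
// fact 10 = fact' 10 = 3628800L
```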

You can also see one of the consequences of immutability: we have to keep passing the state of the computation along. The arguments of the function are: a list ps of the polynomial coefficients, the value “x” at which to evaluate them, followed by two utility arguments which carry the current state of the computation, a simple form of accumulator. The values of “x” raised to increasing powers are passed along in “xpow,” and the running sum of the polynomial is kept in accum. Pattern matching is used as before, except now we use the head of the list, computing its contribution to the polynomial before passing the remaining terms to the function again. When nothing is left, the accumulated sum is simply returned. The extra arguments are ugly, of course, so we use this as an auxiliary function, to be called from the following wrapper function:

let polyval ps x =
    polyval_aux ps x 1.0 0.0

I think the above implementation of polynomial handling routines has two advantages over a similar set of functions in C. First, it’s really hard to screw up with stupid mistakes, like misallocating arrays or botching a reference. Second, the intent of the code is more apparent, as the details of loops and counters have been hidden in the library functions. (Though I understand the intent may not be entirely transparent if you’re not used to F# yet.)
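As a quick sanity check (a self-contained sketch that repeats both functions, using plain floats), evaluating 1 + 2x + 3x² at x = 2 gives 17:

```fsharp
let rec polyval_aux (ps : float list) x xpow accum =
    match ps with
    | [] -> accum
    | coef :: psrest -> polyval_aux psrest x (x * xpow) (accum + coef * xpow)

let polyval ps x = polyval_aux ps x 1.0 0.0

// 1 + 2*2 + 3*2^2 = 17
let y = polyval [1.0; 2.0; 3.0] 2.0
// y = 17.0
```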

If you’d like to learn more about F#, a couple of good places to start are the Flying Frog Consultancy and Microsoft Research. In the next post, I’ll present a stand-alone Windows program that computes Newton basin fractals in 50 lines. Following that, I’ll finish by writing about some of the nascent resources that are starting to crop up for F#, in particular the excellent graphics and numerical libraries under development at Flying Frog.


29 responses to “Functional programming and F#: Introduction”

  1. That Haskell code is not a “clever little trick”, it’s the bog-standard way to write it and no Haskell programmer would have any trouble reading it.

    The only way the F# version would differ is to use explicit “match” declarations and lambda expressions because F# doesn’t have syntactic sugar for pattern-matching and “sections” that Haskell does.

    Since they’re a fundamental language feature and a ubiquitous idiom, everyone who knows the language knows what they mean. It might be a matter of taste which syntax you prefer, but I posit that you only prefer the F# versions because you’ve seen similar in other languages. I’m familiar with both versions and must say I prefer reading the less verbose syntax.

    Apart from syntax, you could implement it some other way (less functionally, I guess) but your code wouldn’t be as good. The OO parts of F# are a nice crutch for beginners, but you can’t really run with crutches.

  2. Greg: I do have to admit the Haskell sort is rather easy to understand, but given that it’s not even actually implementing a real quicksort and would never be used in practice, I do think it’s just a cute trick. It was a cheap joke, but I stand behind the underlying point that comparisons between languages based on brevity are pointless.

    I like Haskell syntax and F# syntax about the same, in fact. Actually, part of me likes Haskell a bit better, because it’s purer and simpler. F# is a bit of a mess and given that it’s owned by MS, it’s only going to get worse. But in the end, I think I’ll be able to do a lot more with F# than with Haskell, so I’m bothering to learn F#.

    I don’t think the OO elements of F# are a crutch, they are an option. Sometimes functional approaches are best, sometimes imperative and OO programming paradigms are called for. I imagine the best solution will often be a mix, such as an OO interface with a functional programming implementation behind the scenes. I don’t want my language to turn me into an ideologue, I want it to get out of the way and let *me* choose the approach.

    At any rate, didn’t mean to insult Haskell programmers. It wasn’t about Haskell in particular but about the general arguments used in favor of functional languages. If Microsoft research had decided to implement Haskell as a .NET language and added OO features to it, I’d be writing this article about H#.

  3. Hi,

    First, I’d like to thank you for the article – in fact it’s the first time I have come across F# language.

    This language will definitely be a good part of the .NET family – it will bring the functional paradigm to more people – just because Microsoft is the “father” of the language. It will bring choice and that’s always good, but to tell the truth I prefer Haskell syntax – the readability is on a much higher level.

    PS: There is a mistake in the polyval_aux function example – the xpow value is not incremented correctly – it shouldn’t be (xpow *x), but (xpow + 1) …

  4. Having written enough imperative code the FP (higher-order functions and monads, instance declarations separated from type definitions) way and the OO way, I just don’t see the point of OO anymore. Message-passing is good for Actors, but if you aren’t doing it for concurrency, then it’s just too blunt a tool to properly express the abstraction and polymorphism you want for any but the simplest of programs.

  5. Štěpán: Thanks very much for the comment. The xpow argument represents the current power of x that is being added to the polynomial, so it gets “incremented” by repeated multiplication with x. I know I didn’t explain this very well; thanks to your comment I’ve gone back and edited that part to make it clearer.

  6. Greg: Being relatively new to functional programming, I haven’t gotten to the point where I’m ready to give up OO entirely, but I believe you when you say you can achieve everything you need to without it. One thing I will say, however, is that it’s nice to have in F# at least so that you can interact natively with .NET frameworks. I think that’s one reason why F# might actually become a widely used language, where others have never really managed to venture too far from the ivory towers.

  7. Mmm yeah, I totally agree. I have some fear that we are going to end up with half-hearted FP entrenched as a standard – like C++ did for half-hearted OO – but it’s still a step in the right direction and the people behind it are people who I know are going to want to get it right.

  8. A few comments. First, no, brevity is not a complete measurement, but simplicity in the brevity is. The nice thing about Haskell and F# is that the base syntax is not very complicated, and everything else is defined _on top_ of that. At first two languages may look indistinguishable by brevity, but it’s the underlying rules that end up making the difference between succinct elegance and a jumble of magic.

    On “currying”: maybe it was just an example, but the definition of makeincrementor is actually useless. You could just as well do:
    > let incrbyfive = (+) 5;;
    val incrbyfive : (int -> int)
    [I understand currying to be taking something of the form (a,b) -> c and turning it into a -> b -> c. From there, we can do partial function application. Currying is more useful with .NET methods since they all come in “tupled” form.]

    As far as OO, it quickly starts dropping out of your mindset. It’s so much easier to think and work with a function when you know it reacts only based on its arguments and not if someone somewhere else had called Foo on it before or not. On top of that, in a lot of places where someone would use inheritance, it’s quite easy just to pass in a function to “override” behaviour.

    The real feat of F# is that MSRC/Don Syme got Microsoft to actually adopt it as a commercial endeavour. I’m guessing they had a “way in” due to the fact that they did generics for the CLR already.

  9. Michael: Thanks for writing. I don’t really have it out for Haskell. It just has the bad luck of people very often giving the same dumb reason for why others should use it that it was easy to link to. I meant it as a general criticism of the way functional languages are often touted by simply wowing people with brevity of implementation. Anyway, I probably should’ve just picked on Lisp. Nobody defends Lisp.

    Yes, the makeincrementor is utterly useless. I just wanted to illustrate the fact that you can have function factories that generate functions with runtime-decided parameters, so that I could then make the point that in the context of the CLR, one is essentially generating code on-the-fly. I don’t think the CLR actually takes advantage of this, but I’m intrigued by the unique possibilities of curried functions in a virtual machine environment.

  10. Michael: What I meant was that when you call a curried function without the full number of arguments, I *think* that in theory the plumbing is available in .NET to actually have the resulting function recompiled, with the given variables substituted in as constants.

    For example, consider if I created a function by writing

    let mypoly = polyval [1. 2. 3.];;

    I think the way it’s handled now, the resulting “function” mypoly is really just essentially a delayed evaluation of polyval. If I call mypoly a million times with different arguments, polyval is still fully evaluated each time.

    However, in theory I would think a smart enough CLR could recompile the original CIL code for polyval, with the constant list [1. 2. 3.] substituted in. A smart JIT compiler could then do some clever loop unrolling, essentially resulting in mypoly being compiled dynamically as if I’d written

    let mypoly x = 1. + 2.*x + 3.*x*x;;

    I think .NET’s CLR has just scratched the surface of the kind of runtime optimizations of which it’s ultimately capable, and I would think functional programming would be a paradigm where a JIT could do a lot of really interesting stuff. I predict that at some point, especially on many-core systems, .NET code will run faster than unmanaged C code.

  11. OK, I see what you mean. I’m just not sure that partial evaluation adds anything new. If the JIT could do this, it should be able to do it to current C# code even. What’s the difference?

    For instance, even with a normal “tupled” call, if you wrapped it with constants for the certain parameters, the same argument should hold. (Most overloaded functions in .NET are exactly this, in reverse order.) I guess the JIT could trace and when it sees a specific set of arguments being passed enough times, it can emit specific code for them or something like that…

    I suppose the compiler could determine constants at compile time and make a decision to inline them.

    In fact, just by poking around it seems as if the F# compiler already does this (statically) if you mark a function “inline”. For instance, create an inline wrapper around your polyval_aux and add cases for [] -> and a::b::c::[] -> a + b*x + c*x*x. Then disassemble the calling code and you’ll see it does statically handle the pattern matching.

    It also seems to do it in other, simpler, cases, even without marking it inline. I guess over time the compiler will get better and do it on more cases? (Balancing of course, code bloat…)

  12. Michael: Oh, I agree. There’s nothing special about F#, or even formally defined functions versus anonymous functions. The key is the CLR, not a particular language. I just think F# would be a natural place to implement this kind of thing. The more I think about it, there would probably have to be some sort of pragma to tell the JIT compiler that it will be worth recompiling something, or maybe the compiler could figure it out from context (i.e. if you use a curried function in a List.map, for example). For all I know, something like this happens already. Maybe I should ask this question on the F# list…

    • Manjeet,
      Sorry about the delay, this is the first time I have read this blog, hopefully you have found help. But if you haven’t here are some links to get you started:
      Download the F# CTP from here (


  13. Good News, Bad News:
    Good News: You can use Visual Studio with F#
    Bad News: Visual Studio Express versions do not support F#
    Good News: Build it yourself! Follow this link to a free download of the Visual Studio Shell, which is a quick and easy download and install (although you won’t need it if you already have VS 2008 Pro or Team System):
    http://www.microsoft.com/downloads/details.aspx?FamilyId=40646580-97FA-4698-B65F-620D4B4B1ED7&displaylang=en

    After you do the installation of the Visual Studio Shell, go to this site and download the latest version of F#, follow the simple installation instructions:

    http://www.microsoft.com/downloads/details.aspx?FamilyID=61ad6924-93ad-48dc-8c67-60f7e7803d3c&displaylang=en

  14. Hi Jonathan, I’m just wondering if there is a way to migrate C++ codes to F# codes? Because in that way I can understand better by comparing my old C++ codes to translated F# codes.

    • I’m not sure what you mean. Can you explain a bit more about what you’d like to do? C++ and F# are very different languages. The latter is functional, while the former is object-oriented.

