Thursday, July 03, 2014

Uncovering Inherent Structures in Organizations

Vladimir Levenshtein
This post should have a subtitle: "Using Team Analysis and Levenshtein Distance to Reveal Said Structure." It's the first part of that subtitle that is the secret, though: being able to correctly analyze and classify individual teams. Without that, using something clever like Levenshtein distance isn't going to be much help.

But that's coming in towards the end of the story. Let's start at the beginning.

What You're Going to See

This post is a bit long. Here are the sections I've divided it into:

  • What You're Going to See
  • Premise
  • Introducing ACME
  • Categorizing Teams
  • Category Example
  • Calculating the Levenshtein Distance of Teams
  • Sorting and Interpretation
  • Conclusion

However, you don't need to read the whole thing to obtain the main benefits. You can get the CliffsNotes version by reading the Premise, Categorizing Teams, Sorting and Interpretation, and Conclusion sections.

Premise

Companies grow. Teams expand. If you're well-placed in your industry and providing in-demand services or products, this is happening to you. Individuals and small teams tend to deal with this sort of change pretty well. At an organizational level, however, this sort of change tends to have an impact that can bring a group down, or rocket it up to the next level.

Of the many issues faced by growing companies (or rapidly growing orgs within large companies), the structuring one can be most problematic: "Our old structures, though comfortable, won't scale well with all these new teams and all the new hires joining our existing teams. How do we reorganize? Where do we put folks? Are there natural lines along which we can provide better management (and vision!) structure?"

The answer, of course, is "yes" -- but! It requires careful analysis and a deep understanding of every team in your org.

The remainder of this post will set up a scenario and then figure out how to do a re-org. I use a software engineering org as an example, but that's just because I have a long and intimate knowledge of them and understand the ways in which one can classify such teams. These same methods could be applied to a Sales group, Marketing groups, etc., as long as you know the characteristics that define the teams of which those orgs are composed.



Introducing ACME

ACME Corporation is the leading producer of some of the most innovative products of the 20th century. The CTO had previously tasked you, the VP of Software Development, with bringing this product line into the digital age -- and you did! Your great ideas for the updated suite are the new hotness that everyone is clamouring for. Subsequently, the growth of your teams has been fast and, dare we say, exponential.

More details on the scenario: your Software Development Group has several teams of engineers, all working on different products or services, each of which supports ACME Corporation in different ways. In the past 2 years, you've built up your org by an order of magnitude in size. You've started promoting and hiring more managers and directors to help organize these teams into sensible encapsulating structures. These larger groups, once identified, would comprise the whole Development Group.

Ideally, the new groups would represent some aspect of the company, software development, engineering, and product vision -- in other words, some sensible clustering of teams doing related work. How would you group the teams in the most natural way?

Simply dividing along language or platform lines may seem like the obvious answer, but is it the best choice? There are some questions that can help guide you in figuring this out:
  • How do these teams interact with other parts of the company? 
  • Who are the stakeholders in feature development? 
  • Which sorts of customers does each team primarily serve?
There are many more questions you could ask (some are implicit in the analysis data linked below), but this should give a taste.

ACME Software Development has grown the following teams, some of which focus on products, some on infrastructure, some on services, etc.:
  • Digital Anvil Product Team
  • Giant Rubber Band App Team
  • Digital Iron Carrot Team
  • Jet Propelled Unicycle Service Team
  • Jet Propelled Pogo Stick Service Team
  • Ultimatum Dispatcher API Team
  • Virtual Rocket Powered Roller Skates Team
  • Operations (release management, deployments, production maintenance)
  • QA (testing infrastructure, CI/CD)
  • Community Team (documentation, examples, community engagement, meetups, etc.)

Early SW Dev team hacking the ENIAC

Categorizing Teams

Each of those teams started with 2-4 devs hacking on small skunkworks projects. They've now blossomed to the extent that each team has significant sub-teams working on new features and prototyping for the product they support. These large teams now need to be characterized using a method that will allow them to be easily compared. We need the ability to see how closely related one team is to another, across many different variables. (In the scheme outlined below, we end up examining 50 bits of information for each team.)

Keep in mind that each category should be chosen such that it would make sense for teams categorized similarly to be grouped together. A counterexample might be "Team Size"; you don't necessarily want all large teams together in one group, and all small teams in a different group. As such, "Team Size" is probably a poor category choice.

Here are the categories which we will use for the ACME Software Development Group:
  • Language
  • Syntax
  • Platform
  • Implementation Focus
  • Supported OS
  • Deployment Type
  • Product?
  • Service?
  • License Type
  • Industry Segment
  • Stakeholders
  • Customer Type
  • Corporate Priority
Each category may be either single-valued or multi-valued. For instance, the categories ending in question marks will be booleans. In contrast, multiple languages might be used by the same team, so the "Language" category will sometimes have several entries.

Category Example

(Things are going to get a bit more technical at this point; for those who care more about the outcomes than the methods used, feel free to skip to the section at the end: Sorting and Interpretation.)

In all cases, we will encode these values as binary digits -- this allows us to very easily compare teams using Levenshtein distance, since the total of all characteristics we are filtering on can be represented as a string value. An example should illustrate this well.

(The Levenshtein distance between two words is the minimum number of single-character edits -- such as insertions, deletions or substitutions -- required to change one word into the other. It is named after Vladimir Levenshtein, who defined this "distance" in 1965 when exploring the possibility of correcting deletions, insertions, and reversals in binary codes.)

Let's say the Software Development Group supports the following languages, with each one assigned a binary value:
  • LFE - #b0000000001
  • Erlang - #b0000000010
  • Elixir - #b0000000100
  • Ruby - #b0000001000
  • Python - #b0000010000
  • Hy - #b0000100000
  • Clojure - #b0001000000
  • Java - #b0010000000
  • JavaScript - #b0100000000
  • CoffeeScript - #b1000000000
A team that used LFE, Hy, and Clojure would obtain its "Language" category value by XOR'ing the three supported languages, and would thus be #b0001100001. In LFE, that could be done by entering the following code at the REPL:
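
(Something along these lines -- a sketch using nested calls to Erlang's bxor BIF:)

    ;; XOR the "Language" values for LFE, Hy, and Clojure:
    (bxor (bxor #b0000000001    ; LFE
                #b0000100000)   ; Hy
          #b0001000000)         ; Clojure
    ;; => 97, which is #b0001100001
    (: erlang integer_to_list 97 2)
    ;; => "1100001"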


We could then compare this to a team that used just Hy and Clojure (#b0001100000), which has a Levenshtein distance of 1 with the previous language category value. A team that used Ruby and Elixir (#b0000001100) would have a Levenshtein distance of 5 with the LFE/Hy/Clojure team (which makes sense: a total of 5 languages between the two teams with no languages shared in common). 
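
(For the curious, here is a naive, unmemoized sketch of the algorithm itself, just to make the definition above concrete; the actual calculations later in the post rely on library functions rather than this toy version:)

    ;; Naive recursive Levenshtein distance over two strings (lists of
    ;; characters). Fine for short bit strings; not tuned for performance.
    (defun levenshtein
      (('() str-2) (length str-2))
      ((str-1 '()) (length str-1))
      (((cons char-1 rest-1) (cons char-2 rest-2)) (when (== char-1 char-2))
       (levenshtein rest-1 rest-2))
      (((cons char-1 rest-1) (cons char-2 rest-2))
       (+ 1 (: lists min
               (list (levenshtein rest-1 (cons char-2 rest-2))   ; deletion
                     (levenshtein (cons char-1 rest-1) rest-2)   ; insertion
                     (levenshtein rest-1 rest-2))))))            ; substitution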


Calculating the Levenshtein Distance of Teams

As a VP who is keen on deeply understanding your teams, you have put together a spreadsheet with a break-down not only of the languages used by each team, but of lots of other categories, too. For easy reference, a "legend" for the individual category binary values is at the bottom of the linked spreadsheet.

In the third table on that sheet, all of the values for each column are combined into a single binary string. This (or a slight modification of it) will be the input to your calculations. Needless to say, as a complete fan of LFE, you will be writing some Lisp code :-)

Partial view of the spreadsheet's first page.
(If you would like to try the code out yourself while reading, and you have lfetool installed, simply create a new project and start up the REPL: $ lfetool new library ld; cd ld && make-shell
That will download and compile the dependencies for you. In particular, you will then have access to the lfe-utils project -- which contains the Levenshtein distance functions we'll be using. You should be able to copy-and-paste functions, vars, etc., into the REPL from the Github gists.)

Let's create a couple of data structures that will allow us to more easily work with the data you collected about your teams in the spreadsheet:
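
(A sketch of one way to hold the data, with short made-up bit strings standing in for the real 50-bit values from the spreadsheet; the full list would have one tuple per team:)

    (set teams
      (list (tuple "Digital Iron Carrot Team" "1100110011")
            (tuple "Community Team"           "0011001100")
            ;; ... one #("Team Name" "bit string") tuple per team ...
            ))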


We can use a quick copy and paste into the LFE REPL for two of those numbers to do a sanity check on the distance between the Community Team and the Digital Iron Carrot Team:
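
(With the placeholder strings above standing in for the real values, that check might look like:)

    (levenshtein "1100110011" "0011001100")
    ;; => 4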


That result doesn't seem unreasonable, given that at a quick glance we can see both of these strings have many differences in their respective character positions.

It looks like we're on solid ground, then, so let's define some utility functions to more easily work with our data structures:
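
(Hypothetical helpers, written against the data structure sketched above:)

    ;; Look up a team's bit string by name (assumes the name is present).
    (defun team-bits (name teams)
      (let (((tuple _ bits) (: lists keyfind name 1 teams)))
        bits))

    ;; Pull out just the bit strings.
    (defun all-bits (teams)
      (: lists map (lambda (team) (element 2 team)) teams))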


Now we're ready to roll; let's try sorting the data based on a comparison with one of the teams:
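
(A sketch of what such a levenshtein-sort might look like -- the version actually used ships with lfe-utils and may differ in its details:)

    ;; Pair every bit string with its distance from the control string,
    ;; then sort; 2-tuples sort by their first element (the distance).
    (defun levenshtein-sort (control bit-strings)
      (: lists sort
        (: lists map
          (lambda (bits) (tuple (levenshtein control bits) bits))
          bit-strings)))

    ;; Compare the first team's bit string against all of the bit strings:
    (let (((tuple _ control) (car teams)))
      (levenshtein-sort control (all-bits teams)))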


It may not be obvious at first glance, but what the levenshtein-sort function did for us is compare our "control" string to every other string in our data set, providing both the distance and the string that the control was compared to. The first entry in the results is our control string, and we see what we would expect: the Levenshtein distance with itself is 0 :-)

The result above is not very easily read by most humans ... so let's define a custom sorter that will take human-readable text and then output the same, after doing a sort on the binary strings:
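
(A hypothetical version of that sorter: it takes a control team's name and returns sorted #(distance name) pairs:)

    (defun team-sort (control-name teams)
      (let ((control-bits (team-bits control-name teams)))
        (: lists sort
          (: lists map
            (lambda (team)
              (let (((tuple name bits) team))
                (tuple (levenshtein control-bits bits) name)))
            teams))))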


(If any of that doesn't make sense, please stop in and say "hello" on the LFE mail list -- ask us your questions! We're a friendly group that loves to chat about LFE and how to translate from Erlang, Common Lisp, or Clojure to LFE :-) )



Sorting and Interpretation

Before we try out our new function, we should ponder which team will be compared to all the others -- the sort results will change based on this choice. Looking at the spreadsheet, we see that the "Digital Iron Carrot Team" (DICT) has some interesting properties that make it a compelling choice:
  • it has stakeholders in Sales, Engineering, and Senior Leadership;
  • it has a "Corporate Priority" of "Business critical"; and
  • it has both internal and external customers.
Of all the products and services, it seems to be the biggest star. Let's try a sort now, using our new custom function -- inputting something that's human-readable: 
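
(Using the hypothetical team-sort helper sketched above:)

    (team-sort "Digital Iron Carrot Team" teams)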


Here we're making the request "Show me the sorted results of each team's binary string compared to the binary string of the DICT." Here are the human-readable results:


For a better visual on this, take a look at the second tab of the shared spreadsheet. The results have been applied to the collected data there, and then colored by major groupings. The first group shares these things in common:
  • Lisp- and Python-heavy
  • Middleware running on BSD boxen
  • Mostly proprietary
  • Externally facing
  • Focus on apps and APIs
It would make sense to group these three together.

A sort (and thus grouping) by comparison to critical team.
Next on the list is Operations and QA -- often a natural pairing, and this process bears out such conventional wisdom. These two are good candidates for a second group.

Things get a little trickier at the end of the list. Depending upon the number of developers in the Java-heavy Giant Rubber Band App Team, they might make up their own group. However, both that one and the next team on the list have frontend components written in Angular.js. They both are used internally and have Engineering as a stakeholder in common, so let's go ahead and group them.

The next two are cloud-deployed Finance APIs running on the Erlang VM. These make a very natural pairing.

Which leaves us with the oddball: the Community Team. The Levenshtein distance for this team is the greatest of all the teams ... but don't be misled. Because it has something in common with all teams (the Community Team supports every product with docs, example code, Sales and TAM support, evangelism for open source projects, etc.), it will have many differing bits with each team. This really should be in a group all its own so that the structure represents reality: all teams depend upon the Community Team. A good case could also probably be made for having the manager of this team report directly up to you.

The other groups should probably have directors that the team managers report to (keeping in mind that the teams have grown to anywhere from 20 to 40 people each). The director will be able to guide these teams according to your vision for the Software Group and the shared traits/common vision you have uncovered in the course of this analysis.

Let's go back to the Community Team. Perhaps in working with them, you have uncovered a hidden fact: the community interactions your devs have are seriously driving market adoption through some impressive and passionate service and open source docs+evangelism. You are curious how your teams might be grouped if sorted from the perspective of the Community Team.

Let's find out!
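
(The same sort as before, this time taking the Community Team as the control:)

    (team-sort "Community Team" teams)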


As one might expect, most of the teams remain grouped in the same way ... the notable exception being the split-up of the Anvil and Rubber Band teams. Mostly no surprises, though -- the same groupings persist in this model.

A sort (and thus grouping) by comparison to highly-connected team.
To be fair, if this is something you'd want to fully explore, you should bump the "Corporate Priority" for the Community Team much higher, recalculate its overall bits, regenerate your data structures, and then re-sort. It may not change too much in this case, but you'd be applying consistent methods, and that's definitely the right thing to do :-) You might even see the Anvil and Rubber Band teams get back together (left as an exercise for the reader).

As a last example, let's throw caution and good sense to the wind and get crazy. You know, like the many times you've seen bizarre, anti-intuitive re-orgs done: let's do a sort that compares a team of middling importance and relatively low corporate impact with the rest of the teams. What do we see then?
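
(Again with the hypothetical helper, taking the Giant Rubber Band App Team as an example of such a team:)

    (team-sort "Giant Rubber Band App Team" teams)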


This ruins everything. Well, almost everything: the only group that doesn't get split up is the middleware product line (Jet Propelled and Iron Carrot). Everything else suffers from a bad re-org.

A sort (and thus grouping) by comparison to a non-critical team.

If you were to do this because a genuine change in priority had occurred, where the Giant Rubber Band App Team was now the corporate leader/darling, then you'd need to recompute the bit values and do re-sorts. Failing that, you'd just be falling into a trap that has beguiled many before you.


Conclusion

If there's one thing that this exercise should show you, it's this: applying tools and analyses from one field to fresh data in another -- completely unrelated -- field can provide pretty amazing results that turn mystery and guesswork into science and planning.

If we can get two things from this, the other might be: knowing the parts of the system may not necessarily reveal the whole (cf. Complex Systems), but it may provide you with the data that lets you better predict emergent behaviours and identify patterns and structure where you didn't see them (or even think to look!) before.


Sunday, May 18, 2014

lfest: A Routing Party for LFE Web Apps

The Clojure community (in particular, James Reeves) has produced one of the best APIs I've seen for creating RESTful services, or any application using path dispatching (routing), really: Compojure.

I've been really missing this in LFE. Despite the fact that creating RESTful apps in LFE with YAWS is so simple it doesn't even need a framework, the routes can be a bit difficult to read, due to some of the destructuring that's done (e.g., URL paths).

Take the following route declaration from the yaws-rest-starter project:

Note that this example delegates additional routing to other functions, since all the routing in one function was pretty difficult to read. This, however, detracts from the overall readability in a different way: there's not one place to look to see what functions map to the complete URL-space for this service.

I wanted to be able to do something like what you can do in Clojure:

LFE is a Lisp with macros, so why not? Also, nox or mheise (I forget who) on the #erlang-lisp channel had previously noted all the lfe* projects (and these too), and suggested that someone create the inevitable "lfest" repo on Github.

Enter lfest. After a weekend of hacking and debugging some tiny macros, surprisingly little code now supports route definitions like the following in LFE+YAWS web apps:

If you're curious to see what this actually expands to before getting compiled in a .beam file, you can view it here.

Bonus for LFE webdevs: this project also has a module of HTTP status codes, for easy re-use in other projects.

The recent work I've been doing with macros has really helped shape the section I've been planning to write in the LFE User Guide. I haven't wanted it to be yet another dry description of macros in a Lisp-2. Instead, my hope is to help jump-start people into how to think about macros, how to start writing them immediately (without years of study first), and how to debug them.

The trick, as with all complicated subjects, is how to remove the barrier to entry and get folks that necessary hands-on experience without dumbing-down the material. It's almost ready...

Also, it's Bay to Breakers today, and Ocean Beach is bumpin'. The synchronicity is just eerie. May you party just as hard with routes in LFE.


Wednesday, May 07, 2014

LFE Design Summit

Well, maybe not summit, but certainly a meeting of minds :-)

The LFE Community is pleased to announce that we will be holding our first ever community design meeting in Stockholm, Sweden this June during the Erlang User Conference! We have set aside 1 hour for a kick-off discussion that we can continue in breakout sessions, hallways, meadhalls, in #erlang-lisp on Freenode, and on the LFE mail list.

Our goals are not only to share plans, but to discover what you want in LFE and how you want to use it. This is an opportunity for us to work together, explore interesting problems to solve, build community awareness around language goals, and identify efforts that each of us is interested in collaborating on with others over the coming months.

We've opened up a Google Docs form so you can add your ideas for session topics. Here are some discussion topics that we've come up with so far:
  • LFE Standard Library 
  • Java Interop via Erjang 
  • State of LFE/LFE Roadmap 
  • Specs, Types, Debug Info and Dialyzer: Erlang Core AST in BEAM 
  • Refining LFE docs and guides, creating a CookBook, etc.
Add your favorites!

In the next couple weeks, we'll identify the top session topics that folks have proposed and attempt to cover these during the Erlang User Conference. Robert was able to get us some meeting space & time for this; he's hoping to have something viewable on the EUC site soon.

This is a first step; let's see how this goes, and if people really enjoy it, we can have a real summit next time!

Update: Robert has announced the time and place.


Wednesday, April 02, 2014

Hash Maps in LFE: Request for Comment

As you may have heard, hash maps are coming to Erlang in R17. We're all pretty excited about this. The LFE community (yes, we have one... hey, being headquartered on Gutland keeps us lean!) has been abuzz with excitement: do we get some new syntax for Erlang maps? Or just record-like macros?

That's still an open question. There's a good chance that if we find an elegant solution, we'll get some new syntax.

In an effort to (re)start this conversation and get us thinking about the possibilities, I've drawn together some examples from various Lisps. At the end of the post, we'll review some related data structures in LFE... as a point of contrast and possible guidance.

Note that I've tried to keep the code grouped in larger gists, not split up with prose wedged between them. This should make it easier to compare and contrast whole examples at a glance.

Before we dive into the Lisps, let's take a look at maps in Erlang:

Erlang Maps

Common Lisp Hash Tables

Racket Hash Tables

Clojure Hash Maps

Shen Property Lists

OpenLisp Hash Tables

LFE Property Lists
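
(For a taste, here is a small sketch of property list basics via the stdlib proplists module, with made-up keys and values:)

    ;; A property list is just a list of #(key value) tuples (or bare atoms).
    (set options (list (tuple 'language 'lfe) (tuple 'platform 'beam)))
    (: proplists get_value 'language options)        ; => lfe
    (: proplists get_value 'license options 'none)   ; => none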

LFE orddicts
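
(And a similar sketch for orddicts, again with made-up data:)

    (set data (: orddict from_list (list (tuple 'language 'lfe) (tuple 'platform 'beam))))
    (: orddict fetch 'language data)             ; => lfe
    (: orddict store 'license 'apache2 data)     ; returns an updated orddict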

I summarized some very basic usability and aesthetic thoughts on the LFE mail list, but I'll restate them here:
  • Erlang syntax really is quite powerful; I continue to be impressed.
  • Clojure was by far the most enjoyable to work with... however, doing something similar in LFE would require quite a bit of additions for language or macro infrastructure. My concern here is that we'd end up with a Clojure clone rather than something distinctly Erlang-Lispy.
  • Racket had the fullest and most useful set of hash functions (and best docs).
  • Chicken Scheme was probably second.
  • Common Lisp was probably (I hate to say it) the most awkward of the bunch. I'm hoping we can avoid pretty much everything the way it was done there :-/
One of the things that makes Clojure such a joy to work with is the unified aspect of core functions and how one uses these to manipulate data structures of different types. Most other implementations have functions/macros that are dedicated to working with just maps. While that's clean and definitely has a strong appeal, Clojure's unified approach reflects a great deal of elegance.

That being said, I don't think today is the day to propose unifying features for LFE/Erlang data types ;-) (To be honest, though, it's certainly in the back of my mind... this is probably also true for many folks on the mail list.)

Given my positive experience with maps (hash tables) in Racket, and Robert's initial proposed functions like map-new, map-set, I'd encourage us to look to Racket for some inspiration:
Additional thoughts:
  • "map" has a specific meaning in FPs (: lists map), and there's a little bit of cognitive dissonance for me when I look at map-*
  • In my experience, applications generally don't have too many records; however, I've known apps with 100s and 1000s of instances of hash maps; as such, the idea of creating macros for each hash-map (e.g., my-map-get, my-map-set, ...) terrifies me a little. I don't believe this has been proposed, and I don't know enough about LFE's internals (much less, Erlang's) to be able to discuss this with any certainty.
  • The thought did occur that we could put all the map functions in a module e.g., (: maps new ... ), etc. I haven't actually looked at the Erlang source and don't know how maps are implemented in R17 yet (nor how that functionality is presented to the developer). Obviously, once I have, this point will be more clear for me.
With this done, I then did a thought experiment in potential syntax additions for LFE. Below is the series of gists that demonstrate this.

Looking at this Erlang syntax:

My fingers want to do something like this in LFE:

That feels pretty natural, from the LFE perspective. However, it looks like it might require hacking on the tuple-parsing logic (or splitting that into two code paths: one for regular tuple-parsing, and the other for maps...?).

The above syntax also lends itself nicely to these:

The question that arises for me is "how would we do this when calling functions?" Perhaps one of these:

Then, for Joe's other example:

We'd have this for LFE:

Before we pattern match on this, let's look at Erlang pattern matching for tuples:

Compare this with pattern matching elements of a tuple in LFE:
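
(For instance, a tuple's elements can be pulled out with a pattern in let:)

    (let (((tuple x y z) #(1 2 3)))
      (list x y z))
    ;; => (1 2 3)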

With that in our minds, we turn to Joe's matching example against a specific map element:

And we could do the same in LFE like this:

I'm really uncertain about add-pair and update-pair, both the need for them and the names. Interested to hear from others who know how map is implemented in Erlang and the best way to work with that in LFE...

Wednesday, March 19, 2014

lfetool T-Shirts?!

So, yeah... waaaaay too early for a T-shirt; the code has just gone from 0.1.x to 0.2.x. But, how could we resist: the project logo is retro like woodblock prints!

Earlier today, lfetool converted to using plugins (easier for contributors to play!), and hopefully later tonight YAWS + Bootstrap support will land. Regardless, there's tons more work to do, and what's more motivating for new work than a T-shirt? As you can see, we had no choice.

So let's get some. T-shirts, that is.

The lfetool logo is on the front, and Robert Virding's example Fibonacci lfescript code is on the back (with the lfetool command that generates the script at the top). Here's the front of the T-shirt:




And here's the back:



We've got a CustomInk sign-up sheet that you can add your name to if you want a shirt. I'll coordinate with folks individually, once we meet our minimum number (we're going to need to use paypal with payment upfront). Due to the colors of the source code on the back, the minimum order is 15. This will put the shirts at $22 apiece + $5 to send it to you. I've just ordered 2; 13 more to go!


Tuesday, March 18, 2014

lfetool 0.2 Released

This release of lfetool has a bunch of new interesting features -- too much to cover in a tweet, so it's getting a little blog post ;-)

Here are the high-level bits that should make users' lives better:
  • New yaws project type, provided for building basic web sites; includes the exemplar project as a dependency for generating HTML with S-expressions.
  • Every project now gets TravisCI support by default.
  • Unit tests, integration tests, and systems tests now each have their own dir.
  • Tests now get run in parallel for faster run times.
  • Added support for running on Linux and *BSD systems.
  • Added unit tests for lfetool itself (shUnit) (for checking skeleton-generation and tool options).
  • lfetool has a new logo :-)
Note that the version jumped from 0.1.2 to 0.2.0 due to the strong improvements in quality and feature set of the tool. Forthcoming features include:
  • Support for an e2 service skeleton project.
  • Support for a YAWS-RESTful service skeleton project.
  • Support for YAWS embedded in a skeleton OTP application.
  • Support for YAWS + Exemplar + Bootstrap projects
  • Support for slide decks with Exemplar + Reveal.js
If you've got more ideas for the lfetool wishlist, post a comment here or send an email to the LFE mailing list.


Thursday, March 13, 2014

The Future of LFE

Erlang Factory

First of all, Erlang Factory this year was just phenomenal: great talks, great energy, and none of the feared/anticipated "acquisition feeding frenzy." Instead, everyone was genuinely happy for WhatsApp and Cloudant, and with celebrations aside, they were ready to get on with the conference and dive into the tech :-)

And gosh, there was a bunch of good tech. If you don't believe me, check out the schedule. Also on that page are the speaker pics. For talks that have video or slides pushed up, the speaker pic is visually annotated and linked.

There's so much good stuff there -- I've definitely got my watching queue set up for the next few weeks ...

LFE Presentation

I gave a presentation on LFE which covered the motivational basics for using a Lisp in the 21st century, gave a taste of LFE in small chunks, and then took folks on a quick tour of creating projects in LFE. There was also some dessert of fun side/research projects that are currently in progress. The slides for the presentation are here; the slide source code is also available (related demo project). I'd also like to give a shout out to the Hoplon crew for their help in making sure I could create this presentation in a Lisp (Clojure), and not HTML ;-) (It uses a Hoplon-based Reveal.js library.)

The Good Stuff

After the presentation, several of us chatted about Lisp and Erlang for a while. Robert and I later continued along these lines after heading over to the quiet of the ever-cool Marines Memorial 11th-floor library (complete with fireplace). Here we sketched out some of the interesting areas for future development in LFE. I'm not sure if I'm remembering everything (I've also added stuff that we didn't discuss in the library, like Sean Chalmers' recent experiments with types; blog and discussion):
  • getting the REPL to the point where full dev can happen (defining functions, macros, and records in the LFE shell)
  • adding macros (maybe just one) for easier use of Mnesia in LFE 
  • discussing the possibility of an LFE stdlib
  • gathering some of the best funcs and macros in the wild for inclusion in an LFE stdlib
  • possibly supporting both (: module func ...) and (module:func ...) syntax
  • possibly getting spec and type support in LFE
  • producing an LFE 1.0 release
Additional efforts planned:
  • building out an LFE rebar plugin
  • examining erlang.mk as another possible option
  • starting work on an LFE Cookbook
  • creating demos of LFE on Erjang
  • creating demos of LFE-Clojure interop via JInterface
  • creating more involved YAWS/REST examples with LFE
  • explore the possibility of an SOA tutorial with LFE + YAWS 
  • async tasks in LFE with rpc lib 
  • monad tutorial for LFE (practical, not conceptual)
  • releasing a planner demo
  • finishing the genetic programming examples
  • LFE style guide
  • producing a stdlib contribution guideline
  • continued work on the LFE user guide
I'll dump all these into github issues so they'll be easier to track.

If this stuff is exciting to you, feel free to jump into the low-volume discussions we have on the mail list.

More soon!


Sunday, January 12, 2014

Prefix Operators in Haskell

I wanted to name this post something a little more catchy, like "McCarthy's Dream: Haskell as Lisp's M-Expressions" but I couldn't quite bring myself to do it. If s-expressions had greater support in Haskell, I would totally have gone for it, but alas, they don't.

However, there is still reason to celebrate: many Haskell operators do support prefix notation! This was a mind-blower to me, since I hadn't heard about this until last night...

At the Data Day Texas conference this past Saturday, O'Reilly/StrataConf had a fantastic booth. Among the many cool give-aways they were doing, I obtained a coupon for a free ebook and another for 50% off. Upon returning home and having made my free book decision, I was vacillating between an OCaml book and the famous Haskell one. I've done a little Haskell in the past but have never touched OCaml, so I was pretty tempted.

However, something amazing happened next. I stumbled upon a page that was comparing OCaml and Haskell, which led to another page... where Haskell prefix notation was mentioned. I know many Haskellers who might read this would shrug, or say "yeah, we know", but this was quite a delightful little discovery for me :-)

I don't remember the first page I found, but since then, I've come across a couple more resources:
That's pretty much it, though. (But please let me know if you know of or find any other resources!)

As such, I needed to do a lot more exploration. Initially, I was really excited and thought I'd be able to convert all Haskell forms to s-expressions (imports, lets, etc.), but I quickly found this was not the case. But the stuff that did work is pretty cool, and I saved it in a series of gists for your viewing pleasure :-)

Addition

The first test was pretty simple. Finding success, I thought I'd try something I do when using a Lisp/Scheme interpreter as a calculator. As you can see below, that didn't work (the full traceback is elided). Searching on Hoogλe got me to the answer I was looking for, though. Off to a good start:


More Operators

I checked some basic operators next, and a function. Everything's good so far:


Lists

Here are some basic list operations, including the ever-so-cool list difference operator, (\\). Also, I enjoyed the cons (:) so much that I made a t-shirt of it :-) (See the postscript below for more info.)


List Comprehensions

I'm not sure if one can do any more prefixing than this for list comprehensions:


Functions

Same thing for functions; nothing really exciting to see. (Btw, these examples were lifted from LYHGG.)


Lambdas

Things get a little more interesting with lambda expressions, especially when a closure is added for spice:


Function Composition

Using the compose operator in prefix notation is rather... bizarre :-) It looks much more natural as a series of compositions in a lambda. I also added a mostly-standard view of the compose operator for comparison:


Monads

I've saved the best for last, an example of the sort of thing I need when doing matrix operations in game code, graphics, etc. The first one is standard Haskell...


Wow. Such abstract. So brains

Note that to make the prefix version anywhere close to legible, I added whitespace. (If you paste this in ghci, you'll see a bunch of prompts, and then the result. If you're running in the Sublime REPL, be sure to scroll to the right.)

And that pretty much wraps up what I did on Sunday afternoon :-)

Other Resources

Post Script

If you're in the San Antonio / Austin area (or SF; I'll be there in March) and want to go in on a t-shirt order, let me know :-) We need at least six for the smallest order (tri-blend Next Level; these are a bit pricey, but they feel GREAT). I'll collect payment before I make an order.


Tuesday, December 31, 2013

Joe Armstrong's Favorite Erlang Program... in LFE

It kind of shocked me to discover recently that a new post hasn't been pushed out on this blog since April. Looking at the drafts I've got backlogged -- 30 and counting -- I also realized that none of these were going to get finished in a single day. Of those 30, probably 6 will ever see the light of day, and all of those are drafts in various states of completion for the Lambda Calculus series.

There's a great Tiny Scheme example I want to write about, some cool Clojure stuff I've played with, and a bunch of functional programming work I'd like to share, etc., etc., but again, none of these are anything I can complete in a few minutes (allowing me to do the other things that I'd like to do today!).

I'd given up when I remembered that there was something short, sweet, and a bit of ol' fun that I wanted to share! Mr. Erlang himself recently blogged about it, and I wanted to convert it to LFE: Joe Armstrong's Favorite Erlang Program.

This little puppy is quite delightful -- and Joe shares a great little story about how he deployed it on Planet Lab in his "aside" section of that post :-)

After you read his post, come back and take a look at this code:
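
(Roughly, the idea looks like this -- a sketch, not necessarily the exact code from the gist:)

    (defmodule universal
      (export (test 0)))

    ;; The universal server: wait to be told what server to become.
    (defun universal-server ()
      (receive
        ((tuple 'become server-fun)
         (funcall server-fun))))

    ;; A concrete server it can become.
    (defun factorial-server ()
      (receive
        ((tuple sender n)
         (! sender (factorial n))
         (factorial-server))))

    (defun factorial
      ((0) 1)
      ((n) (* n (factorial (- n 1)))))

    ;; Spawn a universal server, turn it into a factorial server, use it.
    (defun test ()
      (let ((pid (spawn (lambda () (universal-server)))))
        (! pid (tuple 'become (lambda () (factorial-server))))
        (! pid (tuple (self) 50))
        (receive (answer answer))))
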
That's how you do it in LFE! (Also, did you notice that Github now knows how to colorize *.lfe files? That happened here.)

Ain't it just the purdiest thing you ever saw?

This code has also been submitted for inclusion in the LFE examples (thus the "examples/" below). Let's do a quick sanity check:
And now, let's run the example!

Happy New Year, everyone :-D


Tuesday, April 23, 2013

OpenStack Developer Summit: Heat Followup

Folks are finally starting to recover from the OpenStack Developer Summit that was held in Portland, Oregon recently. All reports indicate that it was a truly phenomenal experience, record-breaking in many ways, and something that has inspired incredible enthusiasm within the community. And that's great news, since there's an enormous amount of work to be done this release ;-)

Of particular importance to many in the community is the work around maturing the autoscaling feature in OpenStack Heat. There was a fantastic session at the summit, facilitated by the bow-tied and most dapper Ken Wronkiewicz (his notes from the Summit were published on the Rackspace blog).

In preparation for the session, the following resources were created:
That one in the middle is important, as it is also where notes were taken during the actual session itself (see the section entitled "ODS Session Notes"). Devs at Rackspace have started going through the notes from the session and started planning work around this -- all of which will be carried on in the open, on the OpenStack mail list (tagged with "[Heat]"), on Freenode, and on github/gerrit.

The discussion at the Summit indicated strong interest in building a REST API for the existing autoscaling feature. Needless to say, there is a lot involved in this, touching upon significant OpenStack components like Quantum, LBaaS, and Ceilometer. Once the appropriate code is in place, a REST API will need to be created, features will need to be expanded/added, etc., and we'll be off and running :-)

Lots to do, and lots of great energy and excitement around this to keep us all chugging through this cycle.

On that note, we'd like to send out a special "thanks" to all the countless folks who worked so hard to make ODS happen. This event anchors us in a most excellent way, providing the insight and fuel that supports future development work so well!