Shedding Some Light on Dark

Kevin Hoffman
12 min read · Jan 22, 2020
Dark logo

Since the first day I heard about Dark, I’ve been pestering its creators for access to the private beta. After the first round where they selected people based on a use case they wanted hardened (like choosing the most useful humans to be the first colonists to land on “New Earth”), they sent invites to those of us who didn’t specifically want to test that scenario. They were now ready for the less useful among us to start breaking things. To say that I was excited would be quite the understatement.

Dark isn’t just a functional programming language; it’s also an online-only IDE and a holistic cloud environment. The Dark creators call it deployless (a term originally coined by Jessie Frazelle) and they’re right, though it takes a while for the true impact of what that means to sink in. You don’t hit compile or build or even git push your code. Once the little semi-visual block you’re working on is complete, it becomes live and is running before you even see an animated wait spinner. If you don’t want immediate live publication, you can use feature flags (temporarily unavailable in the current beta).

Trace-Driven Development

Trace-Driven Development has a chance to completely change our development paradigm. Like so many aspects of Dark, it takes a bit of getting used to, but once it clicked, I didn’t want to develop any other way.

In traditional development, if we want to create a data handler, we create the function first, validate the data in the code, and then supply sample data and hope it all works. If we’re really diligent, we have unit tests that will prove our code works against a set of sample data.

With Dark, you push data to something that doesn’t yet exist. Then, you can convert the non-existent thing (which is bound to a trace of the data you supplied) into a real thing. To create a new HTTP POST handler for a specific JSON payload, you craft that payload, POST it to the URL you want, and then convert the captured 404 into a real handler.

Not everything in the screenshot below will make sense (yet), but take a look at this sub-section of my Dark canvas in the live, in-browser IDE:

Screenshot showing a trace and a corresponding handler
HTTP POST handler and the current trace

On the left you can see a trace of an HTTP request that I submitted. Every single request to any of your endpoints is captured and available for replay.

This inversion of workflow — where you start with the data you want to handle and create the handler afterward — is a remarkably pain-free process. When you recall that everything you create in Dark can be versioned and controlled with feature flags, it makes sense that if you change the shape of your trace data, you’ll want that to be an instigating event in your workflow.

Going off the Rails

When we build applications, especially in functional programming languages, there are two main paths for handling errors: the first, where you assume the happy path and are okay with your code crashing in the presence of failure; and the second, where you use safe programming patterns and never assume a happy path, instead pattern matching on missing or invalid data.

In Elixir, you might label these two patterns respectively as “pure pipelining” and “with chaining”. In Rust, these paths equate to choosing between safely matching on Option and Result or calling unwrap() (it really should be called yolo()), which can cause a panic.
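To make that concrete, here’s a minimal Rust sketch of the two approaches (the parse_reading helper is mine, invented for illustration): the match handles the failure explicitly, while the unwrap() call just assumes everything went well and panics when it didn’t.

// Hypothetical parse step; returns Err on malformed input.
fn parse_reading(raw: &str) -> Result<f64, String> {
    raw.trim().parse::<f64>().map_err(|e| e.to_string())
}

fn main() {
    // Safe path: pattern match on the Result and handle the failure.
    match parse_reading("203.5") {
        Ok(value) => println!("reading: {}", value),
        Err(msg) => eprintln!("bad reading: {}", msg),
    }

    // Happy-path assumption: unwrap() panics if the input is malformed.
    let value = parse_reading("203.5").unwrap();
    println!("reading: {}", value);
}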

Dark visually separates these two forks in the road in the IDE. Your code is said to be “on the rail” if you are on the assumptive happy path (and Dark will handle errors for you upon failure). You’re “off the rail” (cue Ozzy Osbourne’s Crazy Train) if error management is now your code’s responsibility, in which case you will get option or result values and will be expected to pattern match on them appropriately.

This aspect of Dark caused me some consternation and resulted in a number of errors in my code. I didn’t know what to expect, so I found it difficult to tell whether I was on or off the rails and when I should toggle those rails.

Take a look at this screenshot from a function that is off the rails (error management is my code’s responsibility):

Screenshot showing a function off the rail

And now take a look at one that is on the rail (assumption of safety):

Screenshot showing a function on the rail

The difference between these two is subtle, but important. You probably noticed that one has code that deals with the option type and the other just returns a value…but when and where does this second one “unwrap” the value? DB::get returns an option, but the screenshot shows it returning raw data (I’d call it “unwrapped” in Rust). Where did the option go? Is this “magic”? (I dislike implicit magic in languages.)

In the second screenshot, on the right side of the box, there is a rail (think of it as a “guard rail”). Inside that rail is a filled dot. This visually indicates that your code is on the rail, and that the current trace’s execution path through that code does not result in an error. If it did result in an error, you’d see an error icon (🚫). Dark is automatically extracting the raw value from inside the option/result types when your function is on the rail.

This felt very strange to me and I admit I was ready to start ranting about it after my initial exposure. Instead, I went through the motions of building a few more samples and the workflow actually started to feel natural. I started by writing the code I wanted to execute in the happy path (which of course gave me auto-completion from the trace). Then, when I was done prototyping, I would use alt/cmd-x and the resulting menu to take the function off the rail. Once I did so, the data coming back from DB::get would be an option, and I’d wrap my query in a match (or an if or other conditional).
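As a rough Rust analogy for that refactor (db_get and the Sensor struct are hypothetical stand-ins I made up, not Dark’s API): the prototype assumes the lookup succeeds, and the hardened version pattern matches on the Option once the call is off the rail.

#[derive(Debug)]
struct Sensor {
    name: String,
    current: f64,
}

// Hypothetical stand-in for DB::get: None when the key is absent.
fn db_get(key: &str) -> Option<Sensor> {
    if key == "Probe95" {
        Some(Sensor { name: key.to_string(), current: 203.5 })
    } else {
        None
    }
}

fn main() {
    // Prototype, happy path: assume the record exists.
    let sensor = db_get("Probe95").expect("sensor should exist");
    println!("{} -> {}", sensor.name, sensor.current);

    // Hardened, off the rail: handle the missing record explicitly.
    match db_get("Probe42") {
        Some(found) => println!("{} -> {}", found.name, found.current),
        None => println!("unknown sensor, return a 404"),
    }
}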

My First* Dark Application

I put an asterisk next to first here because the first thing I wrote was a simple “counter” service, where you could create new counters and increment their values with a RESTful API. It also auto-incremented all counters in the background using a cron block in the Dark canvas. I had no idea what I was doing while writing this application, so I’ve chosen not to discuss that here.

My first non-embarrassing application was an IoT sensor log. In this application, the back-end exposes an API that lets a consumer create new sensors and record individual values for those sensors. Imagine a thermostat that sends a value every minute or a BBQ grill that sends meat and grill temps every 30 seconds (why yes, I really did build one of those for my smoker with a RasPi, spare parts, and thermistors). The handler that takes in a new sensor reading immediately returns OK but then asynchronously computes the min, max, and average for that sensor. In a real-world scenario I might have a first-class event store for this, but it was a good enough use case to give me an idea of what it’s like to tinker with Dark.

In TDD (trace-driven, not test-driven) fashion, I hit a non-existent endpoint with the payload that I wanted (if you issue the command below, you’ll wipe the history for this one test probe):

$ curl -X POST -H 'Content-Type: application/json' https://autodidaddict-sensors.builtwithdark.com/sensors -d '{"name": "Probe95", "description": "Meat thermometer"}'

This created a 404 record stored in my canvas. I was then able to hit a button to convert the trace that produced that 404 into a new handler.

Screenshot of POST handler for /sensors

Here you can see that I call DB::set to create a new record in the key-value store. What you do not see here is the infrastructure to support my database, the boilerplate to deal with database client operations, obtaining a secret for establishing a connection to the DB, the custom client code that builds retries into a client because persistent connections are unstable in the cloud, or even the code to de-serialize the body of the HTTP payload into a JSON structure or a generic map.

I can now just hit the play button (it’s a replay-style button in the image because the trace has been successfully processed at least once) and that trace is fed through the handler and my database is updated live, “in real-time” as the cable news people like to say. I’ll prove it: click on https://autodidaddict-sensors.builtwithdark.com/sensors. Unless there’s some beta-related instability going on, you’ll see the sensors I’ve added and updated while playing with this sample.

Now let’s take a look at how I can receive a new sensor reading and asynchronously update aggregate statistics without possessing any measurable skill in math:

Screenshot of sensor log handler
Sensor log HTTP POST handler

Here the code calls emit, which publishes a fire-and-forget (though I think it might be guaranteed delivery) message to a worker (event handler) called processSensorUpdate. This call is asynchronous, so the consumer is immediately handed a JSON payload indicating success.
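If you squint, the shape is a classic fire-and-forget producer/worker setup. Here’s a rough Rust sketch of that shape using a channel and a background thread — the SensorReading type and the stats math are my own illustration of the idea, not how Dark implements emit:

use std::sync::mpsc;
use std::thread;

// Hypothetical event type standing in for the JSON payload.
struct SensorReading {
    sensor: String,
    value: f64,
}

fn main() {
    let (tx, rx) = mpsc::channel::<SensorReading>();

    // Background worker playing the role of processSensorUpdate:
    // it receives events and recomputes the aggregates.
    let worker = thread::spawn(move || {
        let mut readings: Vec<f64> = Vec::new();
        for event in rx {
            readings.push(event.value);
            let min = readings.iter().cloned().fold(f64::INFINITY, f64::min);
            let max = readings.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
            let avg = readings.iter().sum::<f64>() / readings.len() as f64;
            println!("{}: min={} max={} avg={}", event.sensor, min, max, avg);
        }
    });

    // The "handler": emit the event and hand control straight back to the caller.
    tx.send(SensorReading { sensor: "Probe95".into(), value: 203.5 }).unwrap();
    tx.send(SensorReading { sensor: "Probe95".into(), value: 217.0 }).unwrap();

    drop(tx); // closing the channel lets the worker loop finish
    worker.join().unwrap();
}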

Now let’s take a look at the asynchronous, event-driven background worker:

Screenshot of sensor processing background worker code
processSensorUpdate Worker

I’ve expanded the view here so you can see just how powerful this workflow can be. At a glance, you can see that this worker uses the functions calculateMax, calculateMin, and calculateAvg, as well as the Sensors data store. Relationships between blocks on the canvas are always visualized and include an arrow showing the direction of dependency. You can also see the currently selected trace on the left. If I hit the replay button on the top right of the worker, it’ll reprocess that one event. And as mentioned earlier, you can see that this worker is “on the rail” because of the visual indicator (the filled dot) on the right.

What you do not see here is how the functions are implemented. I can’t tell how many instances of the functions there are, if they’re in-process or out of process, or even if they’re independently deployed. I don’t really care how these functions are supported by the infrastructure — Dark takes care of it for me.

Lastly, let’s take a look at the GET handlers and a REPL. The REPL lets me execute arbitrary Dark code and it will affect data (or anything else) in my canvas live, without recompilation or deployment.

Miscellaneous screenshots showing HTTP GET handlers
HTTP GET handlers and a REPL

In this handler I don’t want the individual items returned to contain the entire sensor log history of the item, just the current value and the aggregate stats.
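As a tiny Rust sketch of that projection (the record and summary types are invented just for illustration), each stored record simply gets mapped to a trimmed summary before being returned:

// Hypothetical stored record holding the full reading history.
struct SensorRecord {
    name: String,
    readings: Vec<f64>,
    min: f64,
    max: f64,
    avg: f64,
}

// Trimmed view the GET handler returns instead of the full history.
#[derive(Debug)]
struct SensorSummary {
    name: String,
    current: f64,
    min: f64,
    max: f64,
    avg: f64,
}

fn summarize(record: &SensorRecord) -> SensorSummary {
    SensorSummary {
        name: record.name.clone(),
        current: *record.readings.last().unwrap_or(&0.0),
        min: record.min,
        max: record.max,
        avg: record.avg,
    }
}

fn main() {
    let record = SensorRecord {
        name: "Probe95".into(),
        readings: vec![203.5, 217.0],
        min: 203.5,
        max: 217.0,
        avg: 210.25,
    };
    println!("{:?}", summarize(&record));
}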

It quite literally took me around 45 minutes to make this entire application. Had I actually known what I was doing, I could have done it in about a third of that time. This application has zero footprint on my laptop, required zero effort to deploy (because there was “no” deployment), and required no cognitive overhead on my part to manage boilerplate and ceremony.

Compressing the Iteration Cycle

We don’t really know what it’s like to use a new programming language until we’re sitting in front of an empty function and go through the process of typing something, watching it fail to compile, repeating that cycle (and swearing a few times) until it compiles, and slamming our heads on the desk when it doesn’t work at runtime, until we’ve finally achieved something that works.

The process of iterating over a steaming pile of sewage and refining it into a shiny glob of meaningful code is one of our core daily rituals as developers. Dark makes this cycle vanishingly small. Everything you create is a tiny, reusable, semi-visual block. You literally start with an empty canvas and drag stuff onto it to start working. Refactoring a block is easy, as is deleting the whole thing and starting over (which I did multiple times).

I’ve already mentioned the process of starting on the rail and implementing the happy code path. Once you’re satisfied with that, you can take it off the rail and wrap your option and result returning calls with matches and conditionals.

From there, I found myself spending some time iterating and refactoring nested function calls into function call pipelines. Maybe it’s my exposure to Elixir that makes me like the beauty and simplicity of this pattern, but with all this talk about Dark’s graphical and IDE components, it’s easy to forget that it’s also a real functional programming language.
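Rust has no pipe operator, so this is only a loose analogy to the pipelines Dark encourages, but the refactor is the same idea: replace inside-out nesting with stages that read in order (the helper functions here are made up for the sketch):

// Hypothetical helpers for the sketch.
fn load_readings(_sensor: &str) -> Vec<f64> {
    vec![203.5, -1.0, 217.0]
}

fn drop_invalid(readings: Vec<f64>) -> Vec<f64> {
    readings.into_iter().filter(|v| *v >= 0.0).collect()
}

fn average(readings: &[f64]) -> f64 {
    readings.iter().sum::<f64>() / readings.len() as f64
}

fn main() {
    // Nested, inside-out version.
    let nested = average(&drop_invalid(load_readings("Probe95")));

    // Staged, pipeline-style version: each step reads top to bottom,
    // the way a pipeline reads left to right.
    let readings = load_readings("Probe95");
    let valid = drop_invalid(readings);
    let staged = average(&valid);

    assert_eq!(nested, staged);
    println!("average: {}", staged);
}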

Within just a few short hours of exposure to Dark, I had already developed muscle memory to refactor code, replay a trace, watch new values from the trace appear, and then refactor again. All of that was done in-place, live.

Honestly, it feels like putting a breakpoint on an isolated execution path through cloud-native code, firing up gdb, and then changing the code on the other side of the breakpoint before resuming execution. While you’re sitting there editing code below the paused trace, other people can be hitting the previous version of that handler. What you’re doing in that Dark editor is designed to happen live, without interrupting users, and it’s fantastic.

I’m not going so far as to say it was magical, but it was an absolute breath of fresh air to be so seamlessly unburdened from the usual boilerplate and needless developer friction that plagues even the most productive teams. Speaking of teams, did I mention that the Dark canvas is multi-user aware and you can see people’s avatars as they’re navigating through your canvas? I can only imagine how cool that experience (deployless pair programming!) will get as the product approaches 1.0.

Why Dark Matters

Dark… matters… see what I did there? No, I’m not even remotely sorry. Dark matters because most of us are sick and tired. We’re tired of cargo-culting boilerplate from one project to another and watching our original good pattern decay over time like some kind of code-rot half-life. We’re tired of having to re-invent wheels to deal with non-functional requirements and cloud native capabilities over and over again ad nauseam.

What we want is a blank canvas where we can simply describe our functional intent. This is what I’m trying to build with my WebAssembly work and this is what Dark is trying to solve for certain types of applications.

We want to tell our tools what we want to build, how it should function, and how to execute our core business logic, but not have to worry about things like deployment, live updates, building our 712th internal HTTP server, setting up and maintaining a highly available and scalable database, and the mountain of other concerns that usually consumes 90% of our development time while we’re left with just 10% to focus on features.

Dark is not a panacea — it doesn’t solve all problems for all people. But, if you’re looking to build a cloud-native back-end for an application (you can host static assets like your React apps in Dark, too!) and you’re not a company that is obligated to worry about some of the things Dark manages behind the scenes on your behalf, then it might be a match made in heaven.

I’m certainly planning on at least trying a “Dark-first” approach for my future projects: I’ll see whether Dark is a good fit for the problem at hand before moving on to more traditional (aka complicated, difficult, annoying, tedious) solution architectures.

Dark could be a sign of a bright future 😎!

