Dapr and waSCC — 2 Approaches to the Same Problem

Kevin Hoffman
Oct 17, 2019 · 8 min read

The other day, Microsoft started making a bit of a splash on the tech news and social media feeds with the announcement of their new open source project called Dapr. I was immediately intrigued when I read Dapr’s logline:

An event-driven, portable runtime for building microservices on cloud and edge.

Compare this to the elevator pitch for waSCC:

waSCC is a WebAssembly host runtime that securely binds cloud-native capabilities to portable modules.

So what is Dapr and what does it have to do with waSCC? Dapr — once you condense it down to its most basic pattern — is a sidecar. With the sidecar pattern, some set of functionality that you no longer want in your code base as raw code, or even as a library dependency, is moved to a separate, co-located process: the sidecar. The term comes from the world of motorcycles, where a sidecar is a place for a passenger to sit that attaches to the motorcycle. Imagine your code driving the motorcycle, and a bunch of useful stuff from the Dapr project sitting in the sidecar next to you, within arm’s reach.

At this point, both Dapr’s developers and I agree on a very fundamental principle: when building applications, the boilerplate for non-functional requirements does not belong in our application code.

As I often say in my waSCC presentations, today we spend 90% of our time in service of non-functional requirements (NFRs) and only 10% of our time in service of the real business logic we’re trying to ship to production. By shipping all of those NFRs (logging, messaging, infrastructure, communications, data stores, tracing, etc.) off somewhere else, our code becomes easier to test, easier to build, easier to maintain, smaller, faster, and potentially more secure.

With Dapr, this offloading of boilerplate and NFRs is done by creating a sidecar process with which your application can communicate. If you ask the sidecar to publish a message, you are now decoupled from the means by which that request is carried out. You only interact with the contract that defines that such a request will be handled. This is also a fundamental aspect of waSCC. The difference is, with waSCC, your code asks the host runtime rather than another process to publish that message, but the interaction is still through a published, well-versioned contract.
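
To make that concrete, here’s a minimal sketch of what asking the sidecar to publish a message might look like from Rust. The localhost port and the /v1.0/publish/&lt;topic&gt; path reflect the early Dapr HTTP API and are illustrative rather than authoritative, and the reqwest and serde_json crates are my choices here, not anything Dapr requires:

```rust
// A minimal sketch of publishing a message through a Dapr sidecar over HTTP.
// Assumptions: the sidecar is listening on localhost:3500 (the port commonly
// shown in Dapr's docs) and exposes a publish endpoint shaped like
// /v1.0/publish/<topic>; check the Dapr docs for the exact path in your
// version. Uses the `reqwest` (blocking, "json" feature) and `serde_json`
// crates purely for convenience.
use serde_json::json;

fn publish_order_created() -> Result<(), reqwest::Error> {
    let client = reqwest::blocking::Client::new();

    // Your code only knows the contract: "publish this payload to this topic."
    // Whether the sidecar fulfills it with Redis Streams, NATS, or something
    // else entirely is configuration, not code.
    client
        .post("http://localhost:3500/v1.0/publish/orders")
        .json(&json!({ "order_id": 42, "status": "created" }))
        .send()?
        .error_for_status()?;

    Ok(())
}
```

Swap the broker behind the sidecar and this code doesn’t change; that’s the decoupling.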

So what’s the difference between a host runtime and a sidecar? Back to the motorcycle analogy: with waSCC, you’re the rider and your motorcycle provides you everything you need. With Dapr, everything you need is neatly stuffed into the sidecar next to you.

These two approaches differ in a number of ways: some subtle and some obvious. I’ll address how they differ by talking through the four pillars on which waSCC was built: portability, enterprise suitability, performance, and productivity.

Portability

Microsoft’s Dapr comes by its claim of portability by virtue of a public API implemented in HTTP and gRPC. Your code talks to the sidecar through one or both of those protocols. As long as your code can speak HTTP, or HTTP/2 with protocol buffer encoding and decoding for the gRPC path, it should be able to talk to Dapr.

waSCC comes by its claim of portability through waPC, a binary standard for passing parameters across the WebAssembly⟷host runtime boundary. As long as your code can compile to a WebAssembly binary target (a .wasm file), it can participate in and consume all of waSCC’s features.
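
For comparison, here is roughly what the waSCC side of that same interaction looks like from a Rust module built on the wapc-guest crate. Exact function names, operation strings, and the capability/binding identifiers have shifted between versions, so treat this as a sketch of the shape of a waPC guest rather than a drop-in example:

```rust
// A rough sketch of a waPC guest module in Rust using the `wapc-guest` crate.
// The module never opens a socket; it asks the host to perform work through
// an in-process binary call across the Wasm boundary. The operation name,
// binding, and payload shape below are assumptions for illustration; the real
// waSCC actor SDKs wrap this plumbing for you.
extern crate wapc_guest as guest;

use guest::prelude::*;

#[no_mangle]
pub fn wapc_init() {
    // Register a function the host runtime can invoke on this module.
    register_function("HandleRequest", handle_request);
}

fn handle_request(payload: &[u8]) -> CallResult {
    // Ask the host runtime (not a sidecar process) to publish a message on
    // behalf of this module. "wascc:messaging" is waSCC's messaging
    // capability ID; "default" names the capability binding.
    let _res = host_call("default", "wascc:messaging", "Publish", payload)?;
    Ok(vec![])
}
```

The important part is that host_call never leaves the process: it crosses the Wasm⟷host boundary, not a network.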

Microsoft has — in relative terms — near-infinite resources, and so they’ve already created a number of language-specific SDKs to provide client libraries for accessing the Dapr sidecar. Because of limitations in the WebAssembly ecosystem these days, waSCC has only been able to support WebAssembly modules created in Rust and Zig. TinyGo should work as well; we just haven’t created an SDK for it yet (see the previous comment about our lack of near-infinite resources).

In both cases, if the contract (waPC or the Dapr public API) changes, then the client SDKs need to change, and you’ll have to recompile or potentially refactor your code.

For me, the portability claim of Dapr isn’t as deep as waSCC’s, because code you write for the Dapr ecosystem is only as portable as the language you wrote it in. You can’t create a Dapr client in a language that produces processor- and OS-specific binaries and then move that Dapr app to an incompatible host machine. Largely because waSCC stays within the Wasm specification, it is exactly as portable as WebAssembly — 100%. The .wasm modules produced are neither CPU- nor OS-bound.

Among many other things, it was WebAssembly’s portability that ultimately lured me away from building a sidecar and led me to build waSCC on WebAssembly instead. I truly believe that WebAssembly will change the way we all build software, especially for the cloud.

Enterprise Grade

The compiled output of an application that is a Dapr client is essentially the same as the compiled output of any other application written in that language. What this really means is that Dapr’s solution is mostly orthogonal to the concerns of the enterprise: it doesn’t make things better there, but it also doesn’t make them worse.

waSCC was created to solve enterprise problems. It embeds JWT tokens directly into WebAssembly modules, enabling a robust, secure capability system in which the host runtime refuses to give a module access to any capability it has not been granted. A module granted only messaging access cannot communicate with a key-value store, for example.
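
To illustrate the idea (this is a conceptual sketch, not waSCC’s actual implementation), the host can read the signed claims embedded in the module and reject any dispatch against a capability that isn’t listed. The struct, field, and function names below are hypothetical:

```rust
// A conceptual sketch (not waSCC's actual implementation) of capability
// enforcement: the host reads the signed claims embedded in a module and
// refuses to dispatch any call against a capability the module was never
// granted. Struct, field, and function names here are hypothetical.
use std::collections::HashSet;

struct ModuleClaims {
    /// Public key of the account that signed the module.
    issuer: String,
    /// The module's own public key.
    subject: String,
    /// Capability IDs the module was granted, e.g. "wascc:messaging".
    capabilities: HashSet<String>,
}

fn authorize_host_call(claims: &ModuleClaims, capability_id: &str) -> Result<(), String> {
    // Cryptographic verification of the embedded JWT happens before this point;
    // by now we trust that `claims` really is what the signer attested to.
    if claims.capabilities.contains(capability_id) {
        Ok(())
    } else {
        Err(format!(
            "module {} (issued by {}) is not authorized to use {}",
            claims.subject, claims.issuer, capability_id
        ))
    }
}

fn main() {
    let mut capabilities = HashSet::new();
    capabilities.insert("wascc:messaging".to_string());

    let claims = ModuleClaims {
        issuer: "account-public-key".to_string(),
        subject: "module-public-key".to_string(),
        capabilities,
    };

    // Granted only messaging, so a key-value call is rejected.
    assert!(authorize_host_call(&claims, "wascc:messaging").is_ok());
    assert!(authorize_host_call(&claims, "wascc:keyvalue").is_err());
}
```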

waSCC fully supports the ability to guarantee a module has not been altered, verify a chain of provenance, and integrate with Open Policy Agent, letting developers and operators create a secure environment in which their WebAssembly modules execute.

If none of that matters to you, then Dapr will neither help nor hinder your development there.

Performance

Dapr’s documentation doesn’t make any specific reference to its performance characteristics. This isn’t to say it’s slow, but only to say that performance isn’t the reason why Microsoft built it.

Getting Dapr to perform some task on behalf of your code (publishing a message on a broker, reading a value from a key-value store) involves sending an RPC-like request over gRPC — or writing JSON over HTTP — and then handling the response value.

Getting the waSCC host runtime to perform some task on behalf of your code involves asking the host runtime via a binary, in-process protocol and then handling the corresponding response value.

Both of these actions involve the use of a client SDK, so in either case it should feel idiomatic to the developer, and the developer shouldn’t be aware of how their code is communicating with the capability provider.

The performance difference between these two approaches is the same as the performance difference between an in-process function call (waSCC) and an out-of-process network call across at least one network (Dapr/sidecar).

In other words, calling Dapr incurs the overhead of making a network call while calling waSCC incurs the overhead of making a local call, giving waSCC significantly less latency. For relative latency comparisons, check out this list.
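
If you want to feel that gap on your own machine, a rough, unscientific comparison is easy to put together: time a plain in-process function call (standing in for a waPC invocation) against a loopback HTTP round trip (standing in for a sidecar call). The endpoint, port, and iteration count below are arbitrary, and reqwest is just a convenient HTTP client; only the order-of-magnitude difference is interesting, not the absolute numbers:

```rust
// Rough, unscientific comparison of an in-process call vs. a loopback HTTP
// round trip. Assumes something (e.g. a Dapr sidecar) is answering on
// localhost:3500; the endpoint path is illustrative.
use std::time::Instant;

// Stand-in for a waPC host call: an ordinary in-process function call.
fn in_process_publish(payload: &[u8]) -> usize {
    payload.len()
}

fn main() -> Result<(), reqwest::Error> {
    // Read the payload at runtime so the in-process loop can't be folded away.
    let payload = std::env::args()
        .nth(1)
        .unwrap_or_else(|| r#"{"hello":"world"}"#.to_string())
        .into_bytes();
    let iterations = 1_000;

    let start = Instant::now();
    let mut total = 0usize;
    for _ in 0..iterations {
        total += in_process_publish(&payload);
    }
    println!("in-process:    {:?} for {} calls ({} bytes total)", start.elapsed(), iterations, total);

    let client = reqwest::blocking::Client::new();
    let start = Instant::now();
    for _ in 0..iterations {
        client
            .post("http://localhost:3500/v1.0/publish/orders")
            .body(payload.clone())
            .send()?;
    }
    println!("loopback HTTP: {:?} for {} calls", start.elapsed(), iterations);
    Ok(())
}
```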

You can likely achieve identical throughput with either solution via horizontal scaling. If latency doesn’t matter, then the two architectures probably balance out evenly on performance.

Though it doesn’t always translate directly into runtime performance, it’s worth noting that a compiled WebAssembly module targeting the waSCC host runtime is usually less than 2 megabytes. Your fully-logged, trace-enabled, secure, enterprise-grade business logic compiles into something smaller than the typical Slack-embedded GIF. That small footprint can also add up to dramatic cost savings when it comes to things like managing container and process scheduling density in the cloud.

Productivity

Our hearts are in the same place here. We all want developers to be more productive. We want to be able to remove all sources of unnecessary friction from the development process. We want to be able to test, poke, prod, deploy, update, and re-deploy our code all in a way that isn’t just fast and easy, but is actually enjoyable. I want to delight developers because I am a developer and I deserve to be delighted 😊.

The essential difference between the sidecar approach and the waSCC approach is that the sidecar doesn’t actually remove friction from the development process. Sure, there’s a client SDK that provides abstractions around the capabilities your code needs…but we’ve been using such SDKs for decades now, and I’d wager that our lives are not any better for it. We still have to wrangle and herd dependencies, and we still have to figure out how to produce a binary from our language of choice that we can deploy to the cloud. We still need to be able to write tight, testable code. Most of us need secure, deployable, immutable build artifacts.

As I mentioned earlier in the article, the process of building a sidecar-based application in any language is identical to the process of building an application without that sidecar. The difference lies in which SDKs you take on as dependencies — do you have a direct dependency on Redis, or do you get your K/V support with a single call to the Dapr/sidecar SDK? It’s still a library call that tugs on a dependency chain, and that call hasn’t done anything to make your build artifacts smaller, faster, more secure, or more suitable for the enterprise and cloud.
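
Here’s that choice sketched both ways, with the Dapr side shown as a raw HTTP call to the sidecar for brevity rather than through an official SDK. The redis and reqwest crates are stand-ins, and the /v1.0/state/&lt;key&gt; path follows the early Dapr state API, so treat the details as illustrative:

```rust
// Option 1: a direct dependency on Redis via the `redis` crate.
fn get_direct(key: &str) -> redis::RedisResult<Option<String>> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut conn = client.get_connection()?;
    redis::cmd("GET").arg(key).query(&mut conn)
}

// Option 2: the same lookup routed through the Dapr sidecar's state API.
fn get_via_sidecar(key: &str) -> Result<String, reqwest::Error> {
    reqwest::blocking::get(format!("http://localhost:3500/v1.0/state/{}", key))?.text()
}
```

Either way, your application still compiles to a native, OS-specific binary and still drags a dependency chain behind that one call, which is exactly the point.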

Summary

If you’ve read some of my older blog posts, then you know that I’m a huge fan of gRPC. If I have to write code that makes point-to-point networking calls, I’m probably going to do it in gRPC. I also believe it’s a strong choice to base a public API on HTTP and HTTP/2 calls because of the ubiquity of support for those technologies in mainstream languages.

In fact, before creating waSCC, I had started down the path of building what I called the cloud-native sidecar, and such a beast would have ended up looking a lot like Dapr. It would have provided “cloud capabilities in a black box”: all you had to do was access the public API, and it would handle your requests. It was only after exploring WebAssembly and realizing its potential that I pivoted and created what would ultimately become waSCC.

I don’t think that there is an objectively “better” choice between Dapr and waSCC. I think both of these solutions are solving the same problem in different ways. I know a lot of developers who simply feel better and more comfortable accessing sidecars via gRPC, and I know others who prefer the simplicity of scheduling just a single process for their component. There is no right or wrong answer.

The creation of Dapr gives me hope for the future, because not only does it validate my notion that removing NFR boilerplate from the developer workflow is beneficial, it also hints at a future where all of us might flip the time-sink ratio and build our software components with 90% business logic and 10% boilerplate — and that sounds like a great time to be a developer.
