In Part 1, the most important sentence was:

The inevitable conclusion: the server needs to run code on your behalf and then give you the result.

The solution from Part 1 was wrong because it didn’t go far enough in this direction.

A fresh look

Here’s the scenario:

  • You have two clients. A client might be “a server”, “a browser”, “an app”, etc.
  • The first client holds a bunch of data. You could also call this data “state”, “resources”, etc.
  • The second client wants to present the first client’s data.

Start with the simplest solution: Send the data to Client 2. Present it.

[Diagram: CLIENT 1 (the raw data) → CLIENT 2 (rendered pixels)]
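As a minimal sketch of this simplest solution (the data shape and field names here are hypothetical): Client 1 serializes everything it has, and Client 2 turns it into a presentation.

```python
import json

# Client 1: hold the raw data and hand all of it over on request.
def client1_send_all(data):
    return json.dumps(data)  # the entire dataset crosses the network

# Client 2: receive the data and present it.
def client2_present(payload):
    data = json.loads(payload)
    return [f"{item['name']}: {item['value']}" for item in data]

raw_data = [{"name": "cpu", "value": 0.93}, {"name": "mem", "value": 0.41}]
lines = client2_present(client1_send_all(raw_data))
```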

If this works well, nice work! You’re done!

But sometimes you’ll discover a new constraint:

  • The data is too big to send across the network.

Here’s a solution: Send less. Filter down to the subset of the data that the presentation uses.

[Diagram: CLIENT 1 (the raw data) → filtered data → CLIENT 2 (rendered pixels)]

Client 1 is essentially a database. Client 2 requests whatever it needs.
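A sketch of this database-style arrangement (the query shape and field names are hypothetical): Client 2 names the fields its presentation uses, and Client 1 filters the data down before sending.

```python
import json

# Client 1 acts like a database: it answers queries for subsets of its data.
def client1_query(data, fields):
    return json.dumps([{f: item[f] for f in fields} for item in data])

# Client 2 asks for only what its presentation needs -- here, just the names.
def client2_present(payload):
    rows = json.loads(payload)
    return [row["name"] for row in rows]

# The bulky "history" field never crosses the network.
raw_data = [{"name": "cpu", "value": 0.93, "history": [0.1] * 10_000}]
lines = client2_present(client1_query(raw_data, ["name"]))
```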

This is the solution described in Part 1. And it’s wrong.

Flaw 1: Tedious and fragile

Client 2 will transform data into a presentation. To do this, it needs to crawl the data. You’ve just committed to fetching any data that needs crawling. This means you need to keep this data-crawling code in sync with that data-fetching code. Any time you touch the data-crawling code, you’ll have to remember to go back and update the fetched values. This is especially bad if you reuse the data-crawling code elsewhere.

If your data-crawling code is nontrivial, this will break frequently. It’s one more thing to think about. It’s complexity.
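The fragility is easy to reproduce in a toy sketch (all names hypothetical): the fetch spec and the crawling code live in different places, so an edit to one silently breaks the other.

```python
# Must mirror every field that render() crawls -- by convention only.
FETCHED_FIELDS = ["name", "value"]

def fetch(data):
    # Simulates Client 2 fetching its declared subset from Client 1.
    return [{f: item[f] for f in FETCHED_FIELDS} for item in data]

def render(rows):
    # A later edit started crawling row["unit"], but FETCHED_FIELDS
    # was never updated -- so this blows up at runtime.
    return [f"{row['name']}: {row['value']} {row['unit']}" for row in rows]

raw = [{"name": "cpu", "value": 0.93, "unit": "%"}]
try:
    render(fetch(raw))
    broken = None
except KeyError as e:
    broken = str(e)  # the field the renderer needed but never fetched
```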

Flaw 2: Might not solve the network problem

Are you confident that the presentation shows only small subsets of the data at any given time? What if it’s showing aggregates that require seeing all of the data?

Even if you know it’s small now, are you confident it will always be small?

Taking things apart…

Let’s go back to the essential:

[Diagram: CLIENT 1 (the raw data) → CLIENT 2 (rendered pixels)]

To make new solutions visible, let’s untangle one step into two: decide what to draw, and then draw it.

[Diagram: CLIENT 1 (the raw data) → CLIENT 2 (what to draw → rendered pixels)]

…so that we can put them back together

Now, just slide that line over.

[Diagram: CLIENT 1 (the raw data → what to draw) → CLIENT 2 (rendered pixels)]

The data-crawling code operates on all the data. No fetching necessary.

And the network problem? In general, you can always describe “what to draw” in a network-friendly size, because you’ll design your presentation to show human-comprehensible amounts of information. That pre-existing upper bound keeps the description size manageable. In the worst case it’s a large bitmap, but it’s often a succinct JSON description in a language that your drawing code understands.
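A sketch of that succinct description (the "ops" vocabulary here is hypothetical): Client 1 crawls all of its data locally, even computing an aggregate over the whole dataset, yet emits only a small drawing-oriented description for Client 2 to execute.

```python
import json

# Client 1: crawl all the data locally, emit a small "what to draw" description.
def client1_what_to_draw(data):
    total = sum(item["value"] for item in data)  # aggregate over everything
    ops = [{"op": "text", "x": 10, "y": 10, "s": f"total: {total}"}]
    for i, item in enumerate(data):
        ops.append({"op": "bar", "x": 10, "y": 30 + 20 * i,
                    "w": int(item["value"] * 100)})
    return json.dumps(ops)  # stays small no matter how big `data` is

# Client 2: only knows how to execute drawing ops, never sees the raw data.
def client2_draw(payload):
    return [op["op"] for op in json.loads(payload)]

data = [{"value": 0.5}, {"value": 0.9}]
```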


But that’s gross, right? Should a database really be telling you what to draw?


Client 1 is not a database. Client 1 is a webserver.

The “what to draw” description is just content. Preparing content is exactly what webservers do.

What I’m really saying: “visualizing big remote things” is not an exotic new problem. You use lots of products that solve this problem. They’re called websites. A webpage is a visualization of a big remote thing.