Facebook’s Relay Isn’t For Me Yet
Having worked with Relay more or less since it was released, on a medium-sized in-house tool, I’ve concluded that despite solving a challenging problem in an elegant way, it’s not quite what I was looking for. To explain, I want to describe the path I took to the point where Relay appeared to be exactly the solution I was looking for.
I began by using jQuery to add small enhancements to pages. This grew into using jQuery for common functionality like carousels, modals, datepickers and general use of AJAX. I tended to avoid using widget plugins where possible, because they either had too few options for customisation (inflexible) or too many (bloated).
As I started building more complex user interfaces, it became clear that jQuery was quickly becoming unwieldy. Even with significant effort to keep things tidy and concise, DOM updates and event listening were a real nightmare. Around this time I became aware of tools offering something called two-way data-binding, which sounded like the solution. I flirted with Knockout for a while, generally liking the productivity but not the lack of guidance on structuring projects. For one project I integrated it with Backbone models (my first use of Backbone), with some success, but it never felt particularly enjoyable.
Enjoying the structure of Backbone, I did most of my UI work with it for a while. Even though I enjoyed working with templates for each view, the lack of bundled support for two-way data-binding eventually became frustrating. It felt like I was basically doing the same as I was with jQuery, just with better code organisation. I also fell into the trap of building deep inheritance hierarchies of models and views, which caused no end of bugs. Whilst working with Backbone, I was constantly on the lookout for a new template layer that would give me two-way data-binding back. I experimented with a few of them, but none satisfied me. I was dimly aware of React but had an aversion to JSX. It wasn’t until I saw Pete Hunt’s Rethinking Best Practices talk that I decided to give it a try. I was mid-sprint, building a moderately complex bespoke CMS, and was getting frustrated with Backbone. I said to myself “fuck this”, installed React, spent a couple of hours learning the basics, then in a day had fully re-implemented what I’d spent the last week and a half struggling with. I was sold.
At this point I was still using Backbone for models (and possibly views, I can’t quite remember how I structured the code), and was using its utilities for two-way data-binding. I encountered all the problems that two-way data-binding is known for, resorting to increasingly convoluted hacks in my models to avoid them.
This stage is a little fuzzy; I can’t remember the exact progression. But I explored a few different ways of solving the issues with two-way data-binding, including things like a global event bus. It was just around this time that Flux was introduced, which seemed to solve that particular problem. It eerily seemed like Facebook was solving each of my problems exactly as it grew to new levels of frustration.
The website I was working on was built with Django, and we had no intention of doing a complete rewrite, so we looked for ways to use React to render some parts of the page on the server whilst continuing to use Django for everything else. The solution was to communicate with Node from the Python process and render a particular React component with data provided by Python. We opted for zerorpc because its implementation seemed the most straightforward (we never ran this at scale during my time on the project, so I can’t comment on whether it’s a practical production solution).
I was already looking ahead to rendering entire websites in Node. React-Router looked like it was going to make things much easier, and the existence of mobile apps meant I was increasingly viewing websites as just another client for backend APIs, rather than complex database-backed applications. But there was a problem: React (and especially React-Router) applications are modelled as a hierarchy of components, whilst my previous method was to fetch data via imperative operations and render a template with it. Trying to use that pattern in a React application whilst talking to REST APIs was slow and felt awkward. Naturally, Facebook announced a solution to this problem at pretty much that exact moment: GraphQL and Relay.
GraphQL provides clients (at the web server level or in the browser) with the ability to execute what was previously a sequence of API requests in a single step, by writing a query that declares all the required data in the form of a graph traversal. A backend server interprets this query and returns the data. How this data retrieval is performed is left to us, since it typically depends on how your backend works. But it’s useful to note that the reference implementation, combined with dataloader, makes it pretty trivial to produce a relatively well-optimised set of sequential and parallel operations.
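For example, what was previously a sequence of REST round-trips (a user, their posts, each post’s comments) collapses into a single declarative query. The schema here is invented purely for illustration:

```javascript
// A single GraphQL query declaring a graph traversal that would
// otherwise take several REST requests (e.g. /users/42, then
// /users/42/posts, then /posts/:id/comments for each post).
// The user/posts/comments schema is hypothetical.
const query = `
  query {
    user(id: "42") {
      name
      posts(first: 10) {
        title
        comments(first: 3) {
          author { name }
          body
        }
      }
    }
  }
`;

console.log(query.trim());
```

The server resolves the whole traversal and responds with a JSON object shaped exactly like the query.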
By itself GraphQL (and similar technologies like Falcor) is a pretty big win, and puts us in a better position than we were in before. Relay offers a much bigger leap forward by making it easy to define data-dependencies at the component level, whilst handling execution and caching for you. On the surface it looks like a big win, so what’s the problem?
For me it skips steps in the chain I was following, i.e. that of incrementally making it easier to build rich front-end apps. Remember that the problem I was looking to solve was how to handle data loading in universal (initial render on server, subsequent in the browser) websites, and in particular, multiple-page ones. This meant I had a few requirements:
Server-side rendering (I’m aware that it’s technically possible, and there’s even a hacky 3rd-party lib to support it, but it needs to be a core feature).
Small file size footprint. I’m having to become increasingly strict about which libraries I add to my stack, especially architectural ones. An architectural library is typically used on every page (entry point) of your website, which limits how much techniques like code splitting can bring down the overall file size.
Easy to reason about API. I hate to repeat it, and it’s been acknowledged by the team, but Relay’s mutations API (on the client) isn’t nice to work with, and makes a lot of limiting assumptions. It also takes a bit of experience to finally understand why you have to keep messing around with all those nodes and connections.
Clear integration points with the rest of my toolkit. At this point, I mean Redux and React-Router. For Redux (and Flux in general) I think we need to reach a community consensus on whether server-side data that’s inherently mapped to your component tree benefits from living inside a Flux store, and if it doesn’t, work out whether there should be any integration points. For example: I have an object being rendered by a component and hit an edit button; between now and the moment the server confirms acceptance of this data, is it a Flux responsibility? How do I put this data in the store (Relay only gives a component access to its own dependencies, rather than the entire tree or its children’s)? There is a very good library integrating Relay and React-Router, which I’ve used extensively, but I’m not sure the integration has been approached from the right direction.
No premature decisions around things like caching until the other patterns are in place.
Instead I think we’re going to benefit from exploring how to progressively build upon GraphQL, leveraging the tools we’re already using. You’ll notice this leads us to something that looks a little like Relay, but should hopefully see wider adoption due to its incremental nature. This list comes from a gist I posted, but I think Medium is a better environment for discussion, and I’ve changed it a bit anyway:
Mimic Relay’s ability to compose queries from fragments declared against each component. There are libraries that do this; I even made a fairly crappy attempt at it in the months before Relay was released. We just need something clean and well-tested that solves nothing more than this problem. This puts us in a land where our most immediate problem (straightforward data fetching in a universal web app) is solved.
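As a rough sketch of what such a library might do, assuming fragments are plain strings and composition is naive concatenation (all names here are illustrative, not a real API):

```javascript
// Each component declares the fields it needs as a GraphQL fragment.
// A UserCard component might declare:
const userFields = `
  fragment userFields on User {
    name
    avatarUrl
  }
`;

// A PostList component declares its own fields, spreading the
// fragments of its children:
const postFields = `
  fragment postFields on Post {
    title
    author { ...userFields }
  }
`;

// Naive composition: wrap the root selection in a query and append
// every fragment the tree declared. A real library would also
// deduplicate fragments and validate against the schema.
function composeQuery(root, fragments) {
  return `query Page { ${root} }\n` + fragments.join('\n');
}

const query = composeQuery(
  'user(id: "42") { ...userFields posts { ...postFields } }',
  [userFields, postFields]
);
```

The route-level component then fires this single composed query on the server for the initial render, or in the browser on navigation.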
Mutations. We need to be able to write data, so we need to make it easy to execute a GraphQL mutation. This is actually pretty straightforward; almost all of Relay’s complexity surrounds optimistic updates and how to efficiently re-fetch all the data that may have changed. Optimistic updates are important, and may be a potential integration point with Flux.
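Stripped of the optimistic-update machinery, a mutation really is just a POST. A minimal sketch, assuming a /graphql endpoint and an invented updatePost mutation:

```javascript
// Build the standard GraphQL-over-HTTP request body: the mutation
// document plus its variables. The endpoint and mutation name are
// assumptions for illustration.
function buildMutationPayload(mutation, variables) {
  return JSON.stringify({ query: mutation, variables });
}

const updatePost = `
  mutation UpdatePost($id: ID!, $title: String!) {
    updatePost(id: $id, title: $title) {
      id
      title
    }
  }
`;

const body = buildMutationPayload(updatePost, { id: '42', title: 'Hello' });

// In a browser (or Node with fetch available):
// fetch('/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body,
// }).then(res => res.json());
```

The selection set on the mutation result doubles as the re-fetch: you ask for whatever fields the UI needs to reflect the change.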
Navigation. If it’s possible to diff the GraphQL queries before and after navigation, then we may be able to load only the data we don’t already have. Relay solves this with caching, but we should explore whether there are other options (I’m suspicious of caching when I don’t have a proper invalidation strategy).
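To make the diffing idea concrete, here is a toy sketch that represents each page’s data requirements as flat field paths rather than real query ASTs (a real implementation would diff the ASTs themselves):

```javascript
// Given the field paths the next route needs and the paths already
// fetched for the current route, return only what is missing.
// The "patch query" for navigation would then be built from this set.
function diffPaths(needed, cached) {
  const have = new Set(cached);
  return needed.filter(path => !have.has(path));
}

// Hypothetical routes: a post list page, then a post detail page
// that additionally needs comments.
const previousPage = ['user.name', 'user.posts.title'];
const nextPage = ['user.name', 'user.posts.title', 'user.posts.comments.body'];

const missing = diffPaths(nextPage, previousPage);
// missing → ['user.posts.comments.body']
```

Only the missing paths need to be fetched on navigation; arguments and pagination make the real problem considerably harder than this sketch suggests.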
Instrumentation. If we do want integration with Flux, or caching, we need to be able to transparently add extra fields to the query (ids and __typename, for example). We’ll probably have already worked with the AST in previous steps, so this may not be too difficult.
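A toy sketch of the idea, with a plain nested object standing in for the query AST: walk every selection set and inject id and __typename so responses can be normalised into a store.

```javascript
// selection is a nested object where `true` marks a leaf field and an
// object marks a sub-selection. Inject id and __typename at every
// level. A real implementation would transform the GraphQL AST instead.
function instrument(selection) {
  const out = { id: true, __typename: true };
  for (const [field, sub] of Object.entries(selection)) {
    out[field] = sub === true ? true : instrument(sub);
  }
  return out;
}

const query = { user: { name: true, posts: { title: true } } };
const instrumented = instrument(query);
// Every level of `instrumented` now also selects id and __typename,
// which is exactly what a normalised cache needs as record keys.
```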
Essentially I believe we need some low-level APIs to handle things like query composition, query diffing (essentially building a patch query for navigation) and instrumentation. This could be implemented as a utility library. We can then explore building Relay-esque things on top of it.
Aside from continued turbulence surrounding how data flow should work in applications, the problem of handling backend data seems to be the last big one (unless you have a sense of what the next one is going to be, in which case I’d love to know). Relay may ultimately end up being the solution, but it’s not there yet, and there’s still time for other solutions in this space.
Then hopefully we can build a decent framework and actually get some work done ;)