Thoughts on Next 13 - Dynamic with some limits

I’ve been trying out the pre-release version of Next 13 (and, by association, React 18.3) for the last couple of months, specifically the parts related to the new router and server components. I suspect a lot of the more nuanced takes will be drowned out by the excitement of its release, so I want to take the opportunity to jot down my thoughts on the framework as it stands today. It bears repeating that what I’m commenting on is still beta, experimental functionality, so my comments may not age well.

Edit: Some of the sections have been updated based on things I've learned since first publishing this.

What I like

1. Nested routes / layouts

This feature has been in demand more or less since the first version of Next. Many of us were used to having the ability to define a hierarchy of stateful layout components from earlier experience with the likes of React Router. I’ve always been dissatisfied with the documented workarounds, so this is a big win for me.
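To make it concrete (the dashboard segment and its contents are my own illustration, not lifted from the docs), each folder in the new app directory can define a layout that wraps everything beneath it and keeps its state as you navigate between child routes:

```tsx
// app/dashboard/layout.tsx — wraps every route under /dashboard.
import type { ReactNode } from "react";

export default function DashboardLayout({ children }: { children: ReactNode }) {
  return (
    <section>
      {/* Mounted once; keeps its state while the child pages change. */}
      <nav>Dashboard navigation</nav>
      {children}
    </section>
  );
}
```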

2. Server components

I’m a big fan of JSX as a templating approach. I’ve used string-based templates in various programming languages for decades at this point, and I’ve never bought into many of the justifications for them (with the exception of raw performance; they can be very fast). So the idea that I can build something with JSX that runs primarily on the server, and surgically opt into client-side interactivity for specific pieces of UI, is very attractive. This has always been possible in various forms, but the key thing for me is that I don’t have to switch paradigms: we can use server components by default, and a simple module pragma opts a component into being able to run on the client.
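As a rough sketch of that boundary (the component names are mine, purely for illustration): the page below is a server component by default and ships no JavaScript of its own, while the "use client" directive opts the button’s module into the client bundle.

```tsx
// app/page.tsx — a server component by default; none of this code is sent to
// the browser as JavaScript.
import { LikeButton } from "./like-button";

export default function Page() {
  return (
    <article>
      <h1>Hello from the server</h1>
      {/* Only this island of interactivity lands in the client bundle. */}
      <LikeButton />
    </article>
  );
}
```

```tsx
"use client";

// app/like-button.tsx — the module-level pragma above opts this component (and
// anything it imports) into running on the client.
import { useState } from "react";

export function LikeButton() {
  const [likes, setLikes] = useState(0);
  return <button onClick={() => setLikes((n) => n + 1)}>Likes: {likes}</button>;
}
```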

3. Data loading

A controversial one that will definitely need more than a quick paragraph to explain properly. The combination of server components, suspense, and streaming rendering should mean that loading data directly in components doesn’t cause catastrophic data loading waterfalls. The waterfalls will exist, but because it’s all happening on the server, network latency shouldn’t be a major concern. We can also use libraries like Dataloader to batch loads should parallelisation be a concern. This won’t completely obviate the need to hoist some data loads up to parent components, or to reach for solutions like Relay, but it may give us a good compromise as a better starting point.
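For illustration (the endpoint and types are hypothetical), a server component can simply await its own data; any waterfall between nested components happens server-side, close to the data source:

```tsx
// app/posts/page.tsx — an async server component that loads its own data.
type Post = { id: string; title: string };

async function getPosts(): Promise<Post[]> {
  // This fetch runs on the server, so the latency cost of a nested "waterfall"
  // stays inside the data centre rather than on the user's connection.
  const res = await fetch("https://example.com/api/posts");
  return res.json();
}

export default async function PostsPage() {
  const posts = await getPosts();
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```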

4. Edge rendering

Being able to render pages closer to the user is a win (assuming the data is also available close to that user), and I can’t complain about the faster, near-instant warm-ups compared to traditional Node runtimes.

Areas for concern

1. Nested routes / layouts

I love the raw capabilities that’ve been added here, but I fear we may have jumped the shark in terms of how far using the file system as route configuration can (or should) go. Anecdotally, I’ve been finding it harder to “place” myself within the codebase as I’ve worked on various pages and layouts. I’ve always been somewhat cold to the file-system routing paradigm, preferring to define these things in code, and this feels like doubling down on its worst aspects.

2. Server components

The rules around how server components can be used may prove unintuitive and hard to teach. React already has a reputation for being too complicated, and I don’t think this is going to help. Dealing with client-side JS and Node runtimes in the same codebase is already a bit of a dance, but in the old paradigm there was at least only one interaction point between the two worlds (getServerSideProps/getStaticProps); now it could be at every component boundary.

Additionally, whilst I enjoy being able to drop in a hefty syntax highlighting library without worrying about its file size, I’m not yet seeing the JS payload sizes that have been touted (~60kb baseline). I don’t know if this is because I’m doing something very wrong, or because there’s still some heavy pruning to be done before it comes out of beta. In my experiments there’s a hefty JS payload in the HTML that appears to be a serialised representation of the component tree, amounting to tens of kilobytes per page, even for pages without any client components. A win, perhaps, but not an overwhelming one yet.

Update: In some cases Next was erroneously bundling multiple versions of React; a subsequent release (13.0.1) appears to have fixed this, resulting in a notable improvement in file sizes. I'm also aware of an intention to significantly reduce the size of the JS payload in the HTML, so things look promising.

3. Data loading

Still a lot of unknowns. Will server components end up having a solution for writes? It feels a bit weird to be able to perform reads directly in a component, but still have to fall back to calling API routes and a manual data refresh for writes. Libraries like tRPC and Blitz may help make traversing this network chasm feel elegant, but I want component-level writes, and I think there are ways to do it. Additionally, in the old Next paradigm it was possible to simulate Remix’s form pattern (well, not Remix’s invention; the pattern has been around for decades) by handling POSTs in getServerSideProps (I’ve been using next-runtime quite heavily at work); these solutions no longer work with the new router, which feels like a regression in some ways. I truly hope that Vercel has plans for something that feels ergonomically similar to Remix forms.
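To illustrate the status quo I’m describing (the /api/todos route and the component are hypothetical), writes currently mean a client component, a fetch to an API route, and a manual refresh of the server-rendered data:

```tsx
"use client";

// app/add-todo.tsx — the write goes through an API route, then router.refresh()
// re-renders the server components with fresh data.
import { useRouter } from "next/navigation";

export function AddTodo() {
  const router = useRouter();

  async function addTodo(formData: FormData) {
    await fetch("/api/todos", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ title: formData.get("title") }),
    });
    // Manually re-fetch the server component tree so the new item appears.
    router.refresh();
  }

  return (
    <form
      onSubmit={(event) => {
        event.preventDefault();
        void addTodo(new FormData(event.currentTarget));
      }}
    >
      <input name="title" />
      <button type="submit">Add</button>
    </form>
  );
}
```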

4. Status codes

A consequence of streaming rendering and the loss of getServerSideProps is that it’s no longer possible to serve the appropriate status codes (404, 307, etc.) based on the data. Next has some functional workarounds in the form of meta tags that mimic the end result, but I’m finding it really hard to accept the loss of what I’ve always seen as a critical part of how the web works. These fears may be proven unfounded; time will tell.
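For reference, this is the kind of workaround I mean (the data helper is hypothetical): notFound() from next/navigation renders the nearest not-found UI and emits a noindex meta tag, but once the response is already streaming the browser has received a 200.

```tsx
// app/posts/[slug]/page.tsx
import { notFound } from "next/navigation";

// Hypothetical data helper for the sake of the example.
async function getPost(slug: string): Promise<{ title: string } | null> {
  const res = await fetch(`https://example.com/api/posts/${slug}`);
  return res.ok ? res.json() : null;
}

export default async function PostPage({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  if (!post) {
    // Renders the not-found UI and a noindex meta tag; the HTTP status of an
    // already-streaming response stays 200.
    notFound();
  }
  return <h1>{post.title}</h1>;
}
```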

5. Edge rendering

How many runtimes is too many for one website? Every route segment can independently decide whether to use the edge, Node, or browser JS runtimes, and each has substantially different capabilities and APIs. Next’s middleware only lets you use the edge runtime, which means it currently doesn’t work with key services like LaunchDarkly — purportedly the very kind of functionality that middleware exists to support. Can we expect even more of this in the future? It feels like this is going to add another dimension to the concerns I raised above about server components.
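For what it’s worth, the switch itself is just a per-segment export (the value is still labelled experimental at the time of writing, so the exact name may shift between releases):

```tsx
// app/dashboard/page.tsx — each route segment picks its runtime independently.
export const runtime = "experimental-edge"; // or "nodejs"

export default function Dashboard() {
  return <p>Rendered at the edge, while a sibling segment might run on Node.</p>;
}
```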

6. Observability, request context etc

Bit of a grab bag, this one, but there’s currently no mechanism for attaching your own “context” to a request (I already tried using AsyncLocalStorage; it didn’t work). There’s a heap of use cases for this, ranging from observability tracing, to logging, to passing around client instances for databases and third parties. This is exacerbated by the fact that a single request from the client’s perspective can hit multiple servers around the world, each running a different JavaScript engine. I’m assuming there’s some kind of plan for this.

Update: React's experimental (and undocumented) cache API appears to do the trick here. It lets you define a cache that lasts for the duration of a "render". You can wrap client instances (in my case, the Sanity SDK) and any non-fetch I/O to get the desired request-level context instances. Observability looks to be a bit trickier; I'm still investigating.
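A minimal sketch of what that looks like, assuming a hypothetical per-request context rather than my actual Sanity setup:

```ts
// lib/request-context.ts
import { cache } from "react";

// Hypothetical request-scoped context; the shape is purely illustrative.
type RequestContext = { requestId: string; startedAt: number };

// cache() memoises the return value for the duration of a single server render,
// so every server component calling getRequestContext() during the same request
// shares one instance. Client factories (e.g. a Sanity client) can be wrapped
// the same way.
export const getRequestContext = cache(
  (): RequestContext => ({
    requestId: Math.random().toString(36).slice(2),
    startedAt: Date.now(),
  })
);
```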

Conclusion

This was just a quick brain dump; I’m sure there’s more to say on both the good and the bad. Overall I’m pretty excited about the potential of these new additions, but I think there’s still a lot to do before I’d consider them ready for prime-time use. Hopefully some of my concerns end up just being FUD.