Moving off the edge.
- edge computing
- Cloudflare Workers
- serverless
- web development
Introduction
I have been using Cloudflare Workers for my personal projects for a while now. I have been a big fan of the edge computing paradigm and the benefits it brings. However, I have decided to move away from the edge for my personal projects, including this blog's backend.
In this article, I will explain why I made this decision and how I made it happen. I will also share my thoughts on serverless and why it is not always the best choice.
What is edge computing?
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where they are needed (consumed). This reduces latency and bandwidth usage and enables close to real-time data processing. Cloudflare has implemented this idea in Cloudflare Workers, a technology that allows developers to run JavaScript code at the edge of the Cloudflare network. Of course, Cloudflare is not the only company doing edge computing, but it is one of the biggest players in the market, along with AWS, Azure, and Google Cloud.
Behind the scenes, Cloudflare Workers are powered by V8 Isolates, which are lightweight, isolated JavaScript environments that run on the V8 engine. This allows developers to run JavaScript code in a serverless environment without the need to manage servers or containers. As you can imagine, this is cheaper and more scalable than traditional server-based solutions (or so it seems).
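To give you an idea of the programming model, here is what a minimal Worker looks like in the module syntax. This is a generic hello-world sketch, not code from my actual backend:
export default {
  // Every Worker exports a fetch handler that receives the incoming
  // web-standard Request and must return a web-standard Response.
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    return new Response(`Hello from the edge! You asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};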
Why I decided to move off the edge
There are a few reasons why I decided to move off the edge and run my backend on an almost traditional server. I will explain what "almost traditional server" means later in this article, but for now, let's focus on the reasons why I made this decision.
1. Complexity
While Cloudflare Workers are easy to get started with, they can become complex as your project grows. You need to manage your routes, your KV stores, your secrets, and your deployments. You need to write tests, monitor your workers, and debug them when something goes wrong. You need to understand the limitations of the platform and work around them.
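To illustrate the moving parts, here is roughly what a wrangler.toml with a route and a KV binding looks like. All the names and IDs below are made up for the example:
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-05-01"
# Hypothetical route and KV namespace, just to show the shape of the config
routes = [
  { pattern = "example.com/api/*", zone_name = "example.com" }
]
kv_namespaces = [
  { binding = "MY_KV", id = "<your-kv-namespace-id>" }
]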
Although the concept is to write code against web standards like Request and Response, you still need to follow some Cloudflare-specific patterns to make your code work at the edge, such as the way .env files and environment variables are handled.
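For example, configuration does not come from process.env the way it does in Node.js; it arrives as bindings on the env parameter of your handler. A quick sketch, with API_KEY being a made-up secret name:
export default {
  async fetch(request, env) {
    // In Workers, secrets and variables are injected as bindings on env,
    // not read from a .env file or process.env at runtime.
    const apiKey = env.API_KEY;
    return new Response(apiKey ? "configured" : "missing API key");
  },
};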
2. External Dependencies
I run a few npm packages in my website's backend, and I have to bundle them with my worker code. That alone is not a big deal, but it adds complexity to the deployment process. Wrangler (the official CLI tool for Cloudflare Workers) does a good job of bundling your code, but it is not perfect, and I never felt comfortable shipping my dependencies inside the worker bundle, because you constantly have to think about its size.
The web has become so complicated that you need to think about the size of your server-side bundle, even when you are running serverless functions at the edge. Not my cup of tea.
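For what it's worth, Wrangler can bundle your worker without deploying it, which is a quick way to check what you are actually shipping (unless the flags have changed since I last used them):
# Bundle the worker locally without deploying, then inspect the output size
npx wrangler deploy --dry-run --outdir dist
du -h dist/*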
3. Debugging
Debugging Cloudflare Workers is not as straightforward as debugging a local Node.js server. You can't just console.log your way through your code. To be honest, I am not a testing expert, but either way you need Wrangler to run and test your workers locally.
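In practice, that means reaching for Wrangler commands like these instead of just running node:
# Run the worker locally in a simulated Workers runtime
npx wrangler dev
# Stream logs (including console.log output) from the deployed worker
npx wrangler tail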
How I Moved Off the Edge
My backend is now a simple Express (Node.js) server running on a Fly.io instance. When I say simple, I mean simple. It is a single file with a few routes and a few middlewares. I have a few npm packages running in my backend, and I don't have to bundle them with my code. I can use the latest version of Node.js, and I can debug my server with the tools I am familiar with.
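To give you a sense of the scale, the whole thing looks roughly like this. This is a simplified sketch, not my literal code:
// index.js: the entire backend is a handful of routes and middlewares
const express = require("express");
const app = express();
app.use(express.json());
// Hypothetical route; the real ones serve this blog's data
app.get("/api/posts", (req, res) => {
  res.json({ posts: [] });
});
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on port ${port}`));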
I have a Dockerfile that I use to build my server, and I have a GitHub Actions workflow that deploys my server to Fly.io. This might sound complicated, but it is not. It is a simple and predictable setup that works for me.
What does the Dockerfile look like?
# Adjust NODE_VERSION as desired
ARG NODE_VERSION=22.2.0
FROM node:${NODE_VERSION}-slim AS base

# Set up corepack so pnpm is available
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable

LABEL fly_launch_runtime="Node.js"

# Node.js app lives here
WORKDIR /app

# Set production environment
ENV NODE_ENV="production"

# Throw-away build stage to reduce size of final image
FROM base AS build

# Install packages needed to build node modules
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y build-essential node-gyp pkg-config python-is-python3

# Install node modules
COPY --link package.json ./
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod

# Copy application code
COPY --link . .

# Final stage for app image
FROM base

# Copy built application
COPY --from=build /app /app

# Start the server by default, this can be overwritten at runtime
EXPOSE 3000
CMD [ "node", "index.js" ]
What the GitHub Actions Workflow Looks Like
This is also very simple. I have a single job that checks out my code, sets up flyctl, and deploys my server to Fly.io. The only credential you need is a Fly.io API token, which you obtain from the Fly.io dashboard and store as a GitHub secret.
name: Deploy backend on Fly.io 🚀
on:
  workflow_dispatch:
jobs:
  deploy:
    name: Deploy app
    runs-on: ubuntu-latest
    concurrency: deploy-group
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only ./packages/backend
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
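A couple of notes on the workflow. The workflow_dispatch trigger means the deployment only runs when I start it manually from the Actions tab; nothing deploys automatically on push. And if you prefer the terminal over the dashboard, flyctl can mint the deploy token as well; if I remember correctly, something like this works:
# Create a deploy token, then save the output as the FLY_API_TOKEN secret
fly tokens create deploy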
Where is the edge and where is the server?
So when you deploy to the edge, you are deploying to the edge of the Cloudflare network, which means your code runs in a data center close to the user. When you deploy to a server, your code runs in a single data center somewhere in the world, and your customers connect to it over the internet.
In theory, the edge is closer to the user, so it should be faster. In practice, the difference is negligible for many use cases.
I am not running a global CDN, I am running a personal blog. I am not serving millions of requests per second worldwide, I am serving a few requests per minute, if I am lucky.
So I decided to move off the edge and run my backend on a humble server in Amsterdam. I am not saying that you should do the same, but you should consider the trade-offs before you decide to go serverless.
The cost of running a server on Fly.io
I am running my backend on a shared CPU instance with 1GB of memory. The runtime is Node.js, and I am using the latest version, which at this point is 22.2.0.
There is a single instance running in Amsterdam, and I am paying €0 per month for it.
Yes, you read that right. I am not paying anything for my server, because Fly.io has a free tier that allows you to run a single instance for free. To be clear, you can run more than one instance for free, but at this point I don't need more than one.
Unlike many SaaS providers that use their free tier to lock you in, Fly.io is very transparent about its pricing, and I don't believe they will change their pricing model in the future to force you to pay for something that used to be free. Even if they do, I don't think they will do anything as drastic as PlanetScale did.
Remember what PlanetScale did to their free tier? They went from free to $39 per month after their free tier got abused by some users. I don't think Fly.io will do the same, because they are not a database provider; they are a cloud provider built on lightweight virtual machines, which are cheaper to run by design.
Nevertheless, I am not an expert at predicting the future, so take my words with a grain of salt.
Conclusion
Simply put, serverless is not always the best choice. It is not a silver bullet that will solve all your problems; it is a set of trade-offs. I am not saying you should avoid Cloudflare Workers, only that you should weigh those trade-offs before going all in on serverless.
I'm thinking about creating a post about Fly.io and how I use it to run my backend, let me know if you are interested in that.
That's all for now. Thanks for reading.