879

February 24th, 2025 · #cloudflare #serverless #javascript

Fullstack Cloudflare

Overview of using Cloudflare Workers and associated services to build fullstack applications

Topic 0 00:00

Transcript

Wes Bos

Welcome to Syntax. Today, we have an episode we're gonna call Fullstack Cloudflare, and I'm a big fan of Cloudflare. We've got CJ on today. Scott's not feeling the greatest, so CJ's

Guest 1

filling in. And it turns out CJ is also a big Cloudflare fan. Right, CJ? It's true. Yeah. I mean, I actually didn't use Cloudflare much until last year. Before that, I kind of only knew them as, like, a DNS setup. But then last year, I started working with them and learned about all the stuff they have, like KV, Workers, and D1. Yeah. It's kinda crazy. Like, for those who don't know, Cloudflare is

Topic 1 00:38

Cloudflare has products competing with AWS

Wes Bos

I think everybody knows them as the, like, DDoS and DNS protector, but they're really a bit of an AWS competitor, you know? And they have so many different products, to the point where you can build an entire fullstack application just on Cloudflare products. And they're kind of unique in that all of their products are slightly different than the norms, but because of the way that they approach things, if you're willing to approach it in a Cloudflare way, you can get huge benefits in terms of cost (it's relatively cheap) and performance as well, where you can sometimes get crazy things up and running with just a couple lines of code. I did just get a message from Scott. Scott bumped his head when he was snowboarding, which is my greatest fear. Yeah.

Wes Bos

And so he's not feeling too great right now, so that's why CJ is filling in.

Guest 1

Yeah.

Wes Bos

You know what is supported with Cloudflare? Sentry.

Wes Bos

If you are going to be running code in Cloudflare, you're gonna wanna turn on the Sentry integration, so you'll get error and exception tracking.

Wes Bos

You get all the details about what's going on with your application if anything goes wrong, and you can fix it right away.

Wes Bos

Alright. So Cloudflare has bazillions of products, but we're gonna tackle explaining the ones that matter if you're a web developer who wants to build a fullstack application.

Topic 2 02:10

Overview of products for fullstack devs on Cloudflare

Wes Bos

These are the ones that you should know about.

Wes Bos

They have several, what I'll call primitives, which are sort of low level products, like a Worker for running JavaScript code, or R2 for holding static assets, or D1 for holding a MySQL, or, sorry, SQLite database. And then they also have several products that are built on top of their primitives, which are like handy little features.

Wes Bos

And that's not uncommon. AWS kinda has the same offering in terms of both low level as well as higher level things. So Workers, that's the first one. Workers is their JavaScript runtime. It's their server.

Wes Bos

They don't like it when you call it serverless or edge functions, but essentially the way that you run server code on Cloudflare is you create what's called a Worker.

Topic 3 03:06

Workers allow running serverless code on Cloudflare

Wes Bos

And the way that it works is it takes in a request that's coming in, performs some operation, and then sends the response out the other end, very similar to how a

Guest 1

serverless function would work. It sort of spins up, does its work, and then it will spin down, meaning that it's not like a typical server where it's running all the time. Yeah. And this was interesting for me to wrap my head around once I got used to it. And, really, I got used to it because of Hono. Yeah. But the whole thing is, it's literally a JavaScript module that exports a fetch function. That's all it is. And that one fetch function handles every single incoming and outgoing request, which is why something like Hono is useful, because it basically gives you the ability to create routes like you're used to in Express, instead of having to have a bunch of if statements inside of that one fetch function. Yeah. If you, like, raw dog a Cloudflare Worker,

Wes Bos

you just get the whole request object, which I'm a big fan of. It's all standardized, right, based on the Fetch API. If you've worked with a fetch request in the past, you probably already know the thing. But there's no framework there. Right? There's no router. There's nothing. It's simply: you send the data back. You gotta set all the headers yourself, and you probably don't wanna do that. You probably wanna reach for something with a little bit more helpers, and we'll talk about the different options. Hono is probably the biggest one, but there's all kinds of different ones, including a new one called Orange JS, which is pretty nifty. Yeah. And we'll talk about it, but it supports other frameworks too. Yes. So Cloudflare Workers does not run on Node. It's a JavaScript runtime environment, but they've built their own, which has V8 as an engine.
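Just to picture the bare Worker shape being described here, a minimal sketch (the route path and response bodies are made up for illustration):

```js
// A raw Cloudflare Worker: one module, one exported fetch handler.
// Every incoming request flows through this function.
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    // Without a router, you branch on the path yourself...
    if (url.pathname === "/api/hello") {
      return Response.json({ message: "Hello from a Worker" });
    }

    // ...and set headers and status by hand on the standard Response object.
    return new Response("Not found", { status: 404 });
  },
};
```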

Topic 4 04:50

Workers have Node compatibility with some exceptions

Wes Bos

So similar to how Bun or Deno is not Node, Cloudflare Workers is also not Node. They do have mostly Node compat at this point.

Wes Bos

They went for many years, same thing with Deno as well, being like, use the standards, and I'm very much about using the standards, but packages and stuff just use Node APIs at the end of the day. So they have pretty close to full Node compat, with the exception of several things like socket connections. They have their own version of that.

Wes Bos

There's no file system access in Cloudflare Workers because there is no file system. So if you're using that, you gotta think around that.

Guest 1

There's a couple other little gotchas, but not massive ones. Yeah. I've only come across them a couple of times. One of them was, like, I had some library that was trying to do a DNS lookup, but they don't have that primitive for creating those socket connections.

Wes Bos

So, yeah, I've only hit it a couple of times. You usually won't really hit that. Mhmm. It really depends on what kind of stuff you're doing. Right? Like, the socket one, the probably biggest one, is WebSockets, and they have their own sort of implementation of that. Workers are limited to three megs on the free plan.

Wes Bos

So three megs of JavaScript, which is quite a bit, and then 10 megs on the pro plan, which also is quite a bit. I'm working on my website right now, which is a Next.js website, and I'm trying to get that thing running on Cloudflare Workers.

Wes Bos

And that thing is something like 900 pages, something like 7,000 import and export statements, a pretty large website, I would say. And it's just squeaking under; I think it's about eight megs.

Wes Bos

And that eight megs is gzipped as well. It actually turns out to be about 65 megs un-gzipped. So you can fit a lot of code in Workers. And it also doesn't include assets, meaning that if you need an image in your worker, or you have any HTML or any other static asset, that's not part of your JavaScript bundle. That limit is just 10 megs of actual JavaScript.

Wes Bos

And then you can also run multiple workers, which is not the easiest thing in the world, but you can just split your code up into multiple,

Guest 1

sort of runtimes. Yeah. And that might be the architecture that some people choose. Essentially, it's like microservices: you have multiple little worker functions, and if you're hitting that size limit, just break them down. Each one can do its own thing. Yeah. And on the point of assets, and earlier Wes talked about not being able to access the file system, there are other offerings that Cloudflare gives you that kind of sort of give you that functionality, which we'll talk about in a second. Yeah. It's kinda funny. Like, with everything Cloudflare, it's

Topic 5 07:55

Using Cloudflare leads to some vendor lock-in

Wes Bos

they don't provide x, y, and z, but they have their own version of it, which at some point is great, but also I don't like coding my apps in just a Cloudflare way. Whenever I'm writing a line of code, I wanna know that I'd be able to duck out of this thing and host it anywhere. Yeah. And it's tricky. I think the one downside of all of this is vendor lock-in, which you potentially get with these other things. And so we'll say that upfront: once you choose Cloudflare, you're kind of choosing Cloudflare. Yeah. I don't know if that's true or not. Like, I'm thinking about all the stuff that I have. And I don't know that I've ever written, well, maybe the durable objects, although there is an open source project right now, I'll try to find the name of it, that is making sort of an open version of that. So anything I write, I make sure that I'm not too locked in. Even the database and whatnot, I'll use Drizzle with just a SQLite adapter.

Wes Bos

One kinda interesting thing about Workers is you pay for what's called CPU time and not wall time. Normally, when a serverless function starts up, you pay based on the number of seconds that thing is running. Right? And if you have, like, a fetch request that's going off to OpenAI and waiting for two seconds to get a response and then coming back, you're paying for those two seconds. Right? And Vercel actually last week rolled out something similar to this as well, where you can sort of use that downtime to handle multiple requests, because that's an expensive time to pay for. So with Cloudflare Workers, you're only paying for compute, meaning that when you're waiting on the network, when you're waiting for a fetch request, when you're waiting for those types of things, it's not counted towards your actual bill. They'll give you ten milliseconds of CPU time on the free plan, and then thirty milliseconds on the paid version. Ten doesn't sound like a lot, but I'm on the paid version, so I have, like, thirty, forty, something like that, and I've not hit it except for once.

Wes Bos

And that turned out to be a bug in the library I was using that was causing too much processing.

Guest 1

Yeah. And they do have Queues. So if you have a long running process, you could potentially push that off to a queue consumer if you needed to. Oh, yeah. You don't have to sit there and wait for it. Yeah. So there are several solutions if you're hitting those limits. Exactly. So Cloudflare also has Cloudflare Pages,

Wes Bos

which is sort of like a sibling to CloudFlare workers.

Wes Bos

And it was their platform-as-a-service offering, where you could have a Git commit, and it would build and deploy your HTML.

Wes Bos

Most of those features, if not all, are now in Cloudflare Workers. And the read I get from looking at the docs and from the Next.js development, because they have this thing called Next on Pages, which I've used, and now they're putting all their effort into OpenNext, which is Next.js on Workers, is that they're trying to put all the Pages features into Workers, and then eventually they'll probably just get rid of Pages entirely, because I'm assuming that's where it's going. I don't know. We don't have any inside scoop, but that's kind of the vibe I get from it. So we'll link up a compatibility matrix on the difference between the two of them. But pretty much you can do everything in a Worker.

Guest 1

That was my read too, because I only started using Cloudflare last year. I think before that, Pages, like you were saying, was the thing they were pushing the most and people were using the most. Yeah. But last year, I was really just using Workers, and I went through this matrix and I was like, yeah, I'm just gonna stick with Workers. I don't see anything too crazy that I can't just do with Workers. Yeah. And early on, the idea was that if you have server code and front end code,

Wes Bos

that's what Pages was for. But now that Workers has assets, you can serve them all up, and you can do routing and all kinds of stuff.

Wes Bos

Let's talk about durable objects.

Wes Bos

We've mentioned them briefly on the show before, but I have since had a bit more of a chance to play with them. And they're kinda interesting, because the problem with serverless compute in general is that it's short lived, meaning if somebody visits a URL, and nobody has visited that URL or that route in a long time, the function will spin up, meaning it will start. Right? That's often referred to as a cold start in the biz.

Topic 6 12:11

Durable objects persist data between workers

Wes Bos

And then that server will run for however long it needs, and often they'll stay warm for five, ten minutes or so.

Wes Bos

But once they're done, or if you're getting a ton of traffic, multiple versions of that worker or serverless function will spin up. And the problem you have in that land versus traditional server land is that they're not always running, meaning you can't expect them to always be running, looking for cron jobs, and handling incoming requests.

Wes Bos

And then they also don't share memory. So if you're storing something in memory, you can't expect that all the other processes are also going to be able to share that memory. Right? Even database connections sometimes have an issue with that. And that can be a bit of a problem if you're trying to do stuff like a real time WebSockets API, or if you're trying to hold state between requests.

Wes Bos

So Cloudflare has this relatively new product called Durable Objects, which is kinda cool. Essentially, it is just an object that will persist between requests, and you can stick stuff on that object. And the next time you come back, you know that object will have that data. And then there's a WebSocket API for it, meaning that you can stick stuff on that object, and then anyone else that's listening will also get that over the WebSocket connection, which is wild. So I've been using this library called PartyKit, because I eventually want to take one of these, which is a 16 by 16 LED matrix.

Wes Bos

And I was building an app where a user could draw on the website, and it would also show up for everybody else who has it currently open. And I just put my local server up on Twitter, and I had, like, hundreds of people visiting it. It was wild. It was working just great. But then I also want to mirror it to the LEDs as people are drawing on it. And it was amazing that it worked so well and I was able to get it up and running in a relatively small amount of code.

Wes Bos

And while all that was happening, there was no database.

Wes Bos

It's just like updating a property on an object in JavaScript.

Wes Bos

And then when that object is updated, it both stays there forever and will

Guest 1

mirror itself to everybody else who has that WebSocket currently open. That's super cool. So I haven't played with durable objects yet, but I've read about them, and that's how I think about it. If you're familiar with OOP, think about creating a class instance, but now that class instance lives in the cloud, and any serverless function can put stuff on properties or access properties, and it just lives forever. It's an interesting paradigm for how to deal with persistent data and how to share that data across workers, because it's just an object.

Wes Bos

Yeah. It's actually kinda wild when you think about it, because often one of my favorite caching techniques, or one of my favorite real time techniques, is simply just putting data in memory and then grabbing it out of memory when you need it. And that's essentially what this API is.

Wes Bos

And, like, PartyKit is built on this. I think tldraw is also built on this. I'll have a tweet with some simple example code, because there's server code, and then there's also client code for connecting to it. So if you do want to do WebSockets, this is probably the approach. And then you're a little bit locked in at this point, although, I can't find it right now, but there is an open source sort of alternative that people are working on, which I like, because then you'd still be able to deploy it to Cloudflare, but you could presumably also deploy other places if you want to.
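For a rough picture of the shape of a Durable Object, here's a minimal sketch, assuming the classic class-based API (the COUNTER binding name and the counter logic are placeholders, not the drawing app):

```js
// A minimal Durable Object: state.storage persists between requests.
export class Counter {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    let count = (await this.state.storage.get("count")) ?? 0;
    count++;
    await this.state.storage.put("count", count);
    return Response.json({ count });
  }
}

// A Worker talks to it through a binding (COUNTER is assumed to be configured
// in wrangler.toml). idFromName gives every name one globally unique instance.
export default {
  async fetch(request, env) {
    const id = env.COUNTER.idFromName("global");
    const stub = env.COUNTER.get(id);
    return stub.fetch(request);
  },
};
```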

Topic 7 16:41

Workflows allow long running multi-step apps

Guest 1

Yeah. And so they've also come out with workflows.

Guest 1

I think they put it into open beta back in October.

Guest 1

I haven't used it yet, but it essentially allows you to build multi step applications, build workflows.

Guest 1

And this is one of the ways you can kind of get around the issue of your workers hitting CPU time limits, because each one of these steps can run for however long, and then each step will pass data on to the next step. And you can specify how many times each step should retry.

Guest 1

You can have state that's shared between the steps. So for instance, one thing that you could build is, let's say you want to transcribe the audio from a Syntax podcast episode. You could build a multi step thing where the first step is download the MP3, and that could fail for whatever reason, maybe it's too big. But if that succeeds, the next step is maybe call an API that can transcribe it, or you pull in some library that can transcribe it, and that's gonna run for a while. And then maybe the next step is you pass that on to a piece that will summarize it with AI. So that'll make a call off to AI, summarize it, and then the last step is store it in a database. Typically, you could write this in a single JavaScript function, but if any one of those things fails, you have to add all kinds of error handling, and you have to be running it in a place that can run for that long. So Workflows is nice because you can basically break it down into these distinct steps, it has built in retries, and it's a pretty great way to build out those kinds of features.

Wes Bos

It's wild, because if I think about that process you've just explained, you'd have to stick it in a database or put it in a queue.

Wes Bos

And then for the queue, you'd have to explain how to retry if it fails. Yeah. You know? Send an email, but if something's down, then try again in two minutes, and then try again in five minutes. All of that is built into this. So you can simply just write the script in one file, top to bottom.

Wes Bos

The way that it works is you simply await step dot do, and you can sleep it for however long you want. So you could say, send CJ an email, wait for two years, and you're literally awaiting two years, and then the next line will run two years from now. All of your memory will be reinstated. All of your variables.
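A rough sketch of that step API, assuming the Workflows open beta interface; the step names, the transcription endpoint, and the payload shape are all placeholders, not the real Syntax pipeline:

```js
import { WorkflowEntrypoint } from "cloudflare:workers";

export class TranscribeEpisode extends WorkflowEntrypoint {
  async run(event, step) {
    const mp3Url = event.payload.mp3Url; // assumed payload shape

    // Each step is retried on failure, and its return value is persisted.
    const transcript = await step.do("transcribe", async () => {
      const res = await fetch("https://example.com/transcribe", {
        method: "POST",
        body: JSON.stringify({ url: mp3Url }),
      });
      return await res.text();
    });

    // You can literally sleep for minutes, days, or years; state comes back afterwards.
    await step.sleep("give the rate limit a break", "1 minute");

    const summary = await step.do("summarize", async () => {
      // placeholder for a call to an AI model or API with the transcript
      return transcript.slice(0, 200);
    });

    await step.do("save", async () => {
      // e.g. write the summary to D1 or KV here
    });
  }
}
```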

Wes Bos

It's pretty wild, because there were whole startups dedicated to running code like this, where you could simply sleep a function and then start it up again in a little while.

Wes Bos

And it's wild to me that it's so easy for this type of thing. Even something like rate limits, you know? Say I need to loop over this list of 10 files, and I need to do a search for each one, but I can only do one per minute. You know? You could just do a for loop, and inside the for loop, you could wait one minute.

Wes Bos

And in that case, you're not paying for the waiting. If it takes five seconds to do each one, you're only doing twenty five seconds instead of ten minutes

Guest 1

of tied up CPU time. Yeah. I'm super excited to play with this. Honestly, that flow that I just described, I kinda wanna try and implement it, because we have something in the Syntax code base that's kinda... it's just a single function, right, that has a few awaits? It's a single function, and it's all squeezed into a single request.

Wes Bos

And there's a possibility it doesn't happen very often, but there's a possibility that something could time out.

Wes Bos

I think we're on Vercel. I think we get three minutes of time with Vercel, which means we can download, send it off for transcribing, have it come back, send that off to Anthropic and come back with the data, and then save it to the database. And that's all squeezed inside of a single cron job.

Wes Bos

But you probably shouldn't be sticking three minute long stuff inside of a request. It should probably be in a queue, but I don't feel like introducing a whole queue for that type of thing. So... You should switch it over. Well, I'll give it a try. Alright.

Guest 1

Yeah. I wanted to bring up Queues. This is a product that has been around, and one that I have tried recently. Essentially, it gives you similar functionality, but it's kind of just built into Workers.

Guest 1

It's just called CloudFlare Queues. You could compare it to something like Redis with BullMQ or Apache Kafka or RabbitMQ.

Topic 8 21:04

Queues allow scaling apps with background tasks

Guest 1

Essentially, for a queue, you have producers and consumers. The way that I implemented this was when I was working with the GitHub star data. For each repo that I wanted to get data for, I needed to download all of the stargazers, and it was hundreds of requests, because I'm only getting a hundred stars at a time. And if you consider a library like shadcn/ui, it got 38,000 stars on GitHub last year. So I needed to request 38,000 bits of star data. And it takes a long time to request page after page after page.

Guest 1

And so I actually handled this inside of a queue. When you hit a Hono endpoint that says, I would like this star data, it checks to see, is there a job? And I'm storing job information in KV. If there's not a job, it will send a message to the queue that says, hey, we need this star data. So that's the producer. The producer is saying, I would like this data, please. And then the consumer is just waiting there to see when these messages come in. So I have a consumer that, when it sees a message to get star data, fires off that long running task. And this is another piece of Cloudflare that can actually run for a long time: the consumer of the queue won't time out. It'll basically just keep running and running until it's finished. And so that one consumer is what makes all those requests to the API.
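A rough sketch of that producer/consumer split in one Worker (STAR_QUEUE is an assumed queue binding; the GitHub paging and KV bookkeeping are omitted, so treat this as illustrative, not CJ's actual code):

```js
export default {
  // Producer: an HTTP handler that enqueues a job instead of doing the work inline.
  async fetch(request, env) {
    const repo = new URL(request.url).searchParams.get("repo") ?? "example/repo";
    await env.STAR_QUEUE.send({ repo }); // STAR_QUEUE is an assumed binding name
    return Response.json({ status: "queued", repo });
  },

  // Consumer: Cloudflare invokes this handler with batches of messages from the queue.
  async queue(batch, env) {
    for (const message of batch.messages) {
      // Long-running work goes here, e.g. paging through the GitHub API
      // and writing intermediate results to KV.
      message.ack();
    }
  },
};
```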

Guest 1

The way that I implemented this is it's gradually storing that data inside of KV. So if you hit an endpoint with Hono, it's just gonna give you the intermediary data, and it's like, hey, I'm still processing, but here's the data so far. And then finally, when that consumer is done getting all of the data, it updates the job status. And now if you hit that endpoint, you're getting the latest data. But it's a great way to break up tasks when you have long running things like that. Yeah. Totally. Or even things that could possibly fail. I've told the story

Wes Bos

where, what do I use? Postmark. Postmark went down for, like, twelve hours or something a couple of months ago. And when somebody buys a course from me, I send them an email with their access. Right? And I don't have that in a queue.

Wes Bos

So if that fails, then they don't get the email. Right? And it had actually never been a problem for eight years, until they went down for whatever reason. And I had to quickly jump on and write some code that re-sent those emails. But ideally, you would not just send that as part of the setup script. You would just put that in the queue as part of the setup script, and then the queue would process them. And that's why sometimes, you know, when you sign in and it's like, we emailed you a code, sometimes that code comes instantly and sometimes you have to wait, like, two minutes for that thing to show up.

Wes Bos

Often that's because they just add a "send this user a code" job to their queue, and then their email service goes through and processes the queue as fast as it can, but they may be backed up a little bit. Definitely.

Guest 1

And what I'm thinking about right now is, this is kind of how Cloudflare gives you that scalability without really having to know about all this complex architecture and stuff. Because this idea of queues and splitting up work, this is what you do if you wanna scale an app.

Guest 1

But on Cloudflare, it's super simple. You literally add the Queues product, or, yeah, add the queues to your app, and now you have a producer and a consumer.

Guest 1

So it's interesting how they've taken something that used to be super complex, and now it's pretty easy to just implement in your worker. That's why I'm so bullish on the Cloudflare stack.

Wes Bos

Like, I've got lots of qualms about it, but long term, I'm very excited about it, because they are both like a Netlify, Vercel, Fly, whatever, a platform as a service, we'll call it. Yeah. But they're also like the AWS, the low level infrastructure provider.

Wes Bos

So you kinda get the best of both worlds. It's certainly not at a Vercel or Netlify level at all yet, but I do think it's going to get there, especially when they keep rolling out these silly easy things like sleeping something for a couple hours and it comes back, or durable objects. So I'm big on it. Definitely. Alright. Let's talk about files. So Cloudflare has, or is, a CDN, meaning that if you serve a file up with Cloudflare, their cache will cache that file and distribute it around the world to all their different data centers. Right? And you can opt into it by default. You can control the knobs with a worker itself, which is kinda cool. That's one use case for workers. It's not necessarily just the end result. You can also put a worker in front of something, where it will intercept the request and either pass it along, or send something back, or tweak a couple knobs and keep the request going.

Wes Bos

But their CDN cache usually sits on top of their other asset product. So R2 is their Amazon S3 alternative.

Topic 9 26:09

Cloudflare provides CDN and object storage

Wes Bos

That's their storage. Right? So if you wanna host a bunch of images, a bunch of video parts, some MP3s, PDFs, literally anything, that's where you would stick your data: into Cloudflare R2. And we had a whole show on alternatives like this. Backblaze is another one. Bunny is a big one.
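For reference, reading and writing R2 from a Worker binding looks roughly like this (MY_BUCKET is an assumed binding name and the key is made up; as discussed next, R2 also speaks the S3 API if you'd rather use the AWS SDK from outside a Worker):

```js
export default {
  async fetch(request, env) {
    const key = "podcasts/879.mp3"; // example object key

    if (request.method === "PUT") {
      // Stream the uploaded body straight into the bucket.
      await env.MY_BUCKET.put(key, request.body);
      return new Response("stored", { status: 201 });
    }

    // Serve the object back out.
    const object = await env.MY_BUCKET.get(key);
    if (!object) return new Response("not found", { status: 404 });
    return new Response(object.body, {
      headers: { "Content-Type": "audio/mpeg" },
    });
  },
};
```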

Wes Bos

They are all S3 compatible, meaning you can just grab the Amazon S3 Node package, and you can upload files

Guest 1

directly to Cloudflare. Yeah. So this is the one piece that isn't vendor lock-in, because they're using that exact same API. You could store your stuff there, but if you wanna switch, you could switch to any

Wes Bos

other compatible one. And one cool thing about Cloudflare and many other companies is they have this thing called the Bandwidth Alliance.

Guest 1

Mhmm. I mean, let me find out who else is in this. Yeah. I think I know what you're talking about, because I've looked at Backblaze before, and they talk about free egress, where, like, if you're transferring data between these specific companies, it's completely free or super low cost. Yeah. So we talked about this on episode seven eighty. Go to syntax.fm/

Wes Bos

780, and we talk about all the different approaches and try to figure out which one is cheapest, because some of them will charge you based on bandwidth and less to hold files. So if you have a lot of people downloading a very small number of files, that will be different than if you have tons of files which are very infrequently downloaded, something like a Dropbox. You know, you might have a file in there for ten years and it will never get asked for. But Cloudflare has this thing called the Bandwidth Alliance, or they're part of the Bandwidth Alliance, which means that data egress between these vendors will not be charged.

Wes Bos

And that's huge, because if you ever wanted to move all of your data out of something like Google Cloud into Cloudflare, or out of Cloudflare into Backblaze... like, there's people that are locked in to AWS for life because it would cost them millions of dollars simply to move out of S3.

Wes Bos

So that Bandwidth Alliance is pretty exciting.

Guest 1

Definitely.

Wes Bos

On top of R2, Cloudflare also has an image pipeline, which I'm using right now. This has an API, so you can upload an image and ask for it in different resolutions and different formats; you can crop them, you can resize them. Or you can also just use the URL scoping, meaning you set up a special URL on your domain name, and any requests that come in via that URL proxy your images through, and it will resize and crop and whatnot. So there's a Next.js image component that you can use, which will just resize them on the fly as they're requested. That's really nice, because you simply need to serve up the high res version of whatever it is that you want, and you know that your app will always deliver the resized, converted, compressed version.
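The URL-based flavor looks roughly like this; the domain and source image are placeholders, and the options string is just an example set of Cloudflare's transform parameters:

```js
// Build a Cloudflare image transformation URL: /cdn-cgi/image/<options>/<source>
const source = "https://example.com/images/hero.jpg"; // placeholder origin image
const options = "width=640,quality=75,format=auto";
const resized = `https://example.com/cdn-cgi/image/${options}/${source}`;
// Using `resized` in an <img> tag serves a resized, recompressed copy on the fly.
```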

Guest 1

This is the one thing I've looked into but haven't used yet. I saw that Images has a storage option, but you could also point it at R2. When I was looking into it, it seemed like R2 would be the cheaper path, because it's cheaper to store images there and then use their conversions. Which path are you going down? I did both. So I once built an app that had, like, an upload button.

Topic 10 29:38

Image transformation pipelines

Wes Bos

Like, you wanna create a product and upload photos that are associated with that. So in that case, I used their Images product, and that just simply uploaded it to Images. And, yeah, you're right, the prices for that are a little bit more expensive, but you get the convenience of them being part of that product. Or you can just resize them and stick them in R2. I'm pretty sure the Next.js image component just uses the URL pipeline, because I don't see those images when I log in to Cloudflare and click on Images; they're not in there. Mhmm. Because that's just things that I've explicitly uploaded to Cloudflare Images.

Wes Bos

Okay. But, again, it's relatively inexpensive. I'm also using Cloudinary quite a bit for mine, but the Cloudinary jump from free to paid is, like, zero to, I think, $200 or something like that, which I think is worth it for a lot of people. But if you're on a little hobby project and you need to make that jump, like, it's getting a little less hobby-ish, that can get to be a little expensive.

Guest 1

Yeah. It's free for 5,000 transformations

Wes Bos

and $5 per 100,000 images stored. So pretty... Oh, yeah. Pretty generous free tier. One of the most hilarious things about all these pricings is, like, you've heard of girl math and boy math? There's, like, sysadmin math as well. Trying to figure out how much something is going to cost. Yeah.

Wes Bos

is hilarious and

Guest 1

a bit tricky to figure out. I will say, I was on the free tier for the longest time; I didn't switch to the paid tier until, like, a few months ago. And really, for all the things I was trying to use, like Workers and KV, I didn't hit any limits.

Guest 1

But there are a few products you have to be on the paid tier to try. I think maybe it was Queues or something like that. Definitely, durable objects is paid tier only. I think that's why I upgraded.

Wes Bos

But, yeah, I have several things running on Cloudflare, and my bill is always tiny. I've never heard somebody be shocked by a bill. I think Cloudflare makes most of their money on enterprise, and the sort of lower level stuff, the free plan, the pro plan that we're on, or the business plan, is relatively cheap. And I think once you get up into the enterprise level, that's where they have their whole Okta replacement, where you can gate access to your products. You could put a whole firewall in front of all of your products and do single sign-on with that. You know, there's so much in the enterprise world that I think will make them a lot of money when you multiply it by something like a Salesforce or whatever that has 20,000 employees that need access, you know, $5 a month per employee.

Guest 1

Definitely.

Wes Bos

So I'm happy for that, at least for now.

Wes Bos

Cloudflare has been dirt cheap for many, many years. I don't feel like they would ever do a rug pull, but I've been rug pulled before. And they're a public company already. It's not like they're a VC company that is just burning VC money

Wes Bos

and then, once it comes time to actually make the money, triples the price.

Wes Bos

So I'm not too concerned there. Then Cloudflare Stream, this is for video. We did an episode a couple months ago on how you can do your own streaming simply by using Cloudflare R2.

Wes Bos

And because bandwidth is free, it literally costs pennies to host your own video, but then you have to do your whole video ingestion pipeline.

Wes Bos

So you have to chop up your videos with FFmpeg. You gotta upload all the bits. You gotta do all that. Cloudflare Stream is simply: upload a video, and out the other end they'll give it to you. They do not support 4K, though, and they haven't for, like, five years.

Wes Bos

So that's

Guest 1

probably a deal killer for, I would say, most people. I don't know why they don't have 4K support. Yeah. This is another product I haven't tried yet, but I looked into it for hosting, because I've done live streaming on Twitch and YouTube for a while. I was like, should I try to stream myself on my site? Maybe for, like, members only or something like that. So that's definitely one of the products I was looking into. Alright. Data. So we talked about assets and files and whatnot.

Wes Bos

They also have several, like, database products.

Wes Bos

So D1 is their SQLite relational database. If you're trying to build a product where you would typically reach for MySQL or Postgres or SQLite or something like that, D1 is what you want. It follows the SQLite spec mostly, and you can use things like Drizzle, which is what I've been using to interface with it. I have three or four different projects running on D1, and

Guest 1

big fan of it. Definitely. And I think the magic there is that these workers all get their own little local instance of a SQLite database. So the reads are super fast. The writes have to be replicated across. Mhmm. But that's one of the bits of magic there: your reads from this database are super fast.
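A minimal sketch of the Drizzle-on-D1 pattern mentioned here (the table, the DB binding name, and the query are all made up for illustration):

```js
import { drizzle } from "drizzle-orm/d1";
import { sqliteTable, integer, text } from "drizzle-orm/sqlite-core";

// Example table definition; in a real project this would live in a schema file.
const episodes = sqliteTable("episodes", {
  id: integer("id").primaryKey(),
  title: text("title"),
});

export default {
  async fetch(request, env) {
    const db = drizzle(env.DB); // DB is the D1 binding configured in wrangler.toml (assumed name)
    const rows = await db.select().from(episodes).limit(10);
    return Response.json(rows);
  },
};
```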

Topic 11 35:23

Cloudflare provides database offerings

Wes Bos

Yeah. They use the phrase eventually consistent, meaning that you write to a database, and it is not guaranteed that immediately after writing to a database in Canada, reading it in Tokyo will have that result. Right? It has to propagate out. I have never had that problem in my life where things need to be so instant.

Wes Bos

Certainly, there are lots of applications that would need that, but it is relatively fast in my experience, and we're talking, like, milliseconds here, not minutes, to propagate around the world.

Wes Bos

D1 is really interesting because you can roll back a database to any second in the last thirty days, which is really cool.

Wes Bos

You could just be like, yo, I need to roll back my database. And with our existing database, the Syntax database, if you simply wanna make a copy of that thing or roll it back, that takes, like, I don't know, ten, fifteen minutes to get a new version of it running. So this is kinda neat. Definitely.

Wes Bos

Key value. KV is awesome. So if you ever have the problem of, I need to store something in memory, or I just need to put this somewhere and then be able to get it later, KV is what you want. It's kinda like the local storage API for the browser, or sorry, for servers.

Wes Bos

And key value in general is pretty nifty. Basically, there's just set and get.

Wes Bos

You can set some data into it, and then you can get the data. Alternatively, you can also get the data and tell it what the type is. So I believe there is just string, binary, or JSON.

Wes Bos

So you can pull the JSON data out of KV, which could be pretty handy. You can also set expires on it. If you've used Redis in the past, that's a kind of handy value, because I will often use KV as a kind of cheap cache.

Wes Bos

Mhmm. Meaning that I'll stick something in KV, and then when I need to get something, I'll first check if it's in KV.

Guest 1

If it's not, it's probably expired, and then I'll just go through the rest of the function that recreates it. Yeah. This is comparable to something like Redis. I've used it for similar things, and I actually combined KV with Queues when I was building out that GitHub stars API, because that's where I was storing all of the intermediate data of that background task. So it'd be getting data, storing it in KV, and it's super easy to use. I didn't come across the typed API, though. I think I was manually stringifying, because that's one of the things about KV: your values have to be stored as strings. Yeah. But it sounds like they have a helper that will JSON parse it for you. Oh, yeah. I just pulled it up.

Wes Bos

You can set the type to be text, JSON, array buffer, or stream.

Wes Bos

So, yeah, it could be any of those types, which is pretty nifty. Yeah. It's awesome. And then you can set the cache on that.
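In a Worker, that looks roughly like this (CACHE is an assumed KV binding name; the key and value are placeholders):

```js
export default {
  async fetch(request, env) {
    // Values go into KV as strings...
    await env.CACHE.put("stats", JSON.stringify({ hits: 42 }));

    // ...but you can ask for them back already parsed by passing a type.
    const stats = await env.CACHE.get("stats", { type: "json" });

    return Response.json(stats);
  },
};
```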

Wes Bos

I've used KV simply as a way to store hit counter values.

Wes Bos

If you go to fav.farm, which is this little website I made for generating favicons, basically turning emojis into favicons for a quick favicon.

Wes Bos

And it's gotten extremely popular, because people just use it whenever they're making a quick little HTML demo.

Wes Bos

Yeah. You wanna stick a favicon in there without finding one.

Wes Bos

And, this is running on Deno and is using Deno's key value.

Wes Bos

And I started counting how often each emoji is requested, and then I put the outputted values on the home page. And about halfway through the month, I get an email from Deno being like, you've exhausted the free tier of key value reads and writes, or just writes. There's, like, a limit of a hundred thousand writes or something like that per month, so I exhausted it. So the numbers aren't perfect.

Wes Bos

But once I put that up there, people started gaming it by writing little bots to hit the favicons and jack the numbers up, and I eventually had to take the country emojis and put them in their own spot, because everybody was gaming

Guest 1

the country emojis trying to make them number one. Yeah. What's that one at the top? Is that Netherlands, or what is that flag? There's a number one flag, right? France. Oh, France. Okay. Yeah. France. France was getting it.

Wes Bos

Yeah. I think that was pretty funny. But it just goes to show, that's a cheap way to build a hit counter. And if you want an example to build in Cloudflare Workers,

Guest 1

a hit counter is a very easy one to do. Definitely. I'll just mention really quick: the TTL is also really great when maybe you're scraping data, or you're pulling data from an API and you wanna do it on a regular basis. You make an API request, store it in KV with a time to live of, let's say, twenty four hours, and then every other request that comes into your worker just pulls from KV. But after twenty four hours, it's expired, and now you go off and pull that data from the API again. So that's another thing that I've used this pretty often for: just expire the data so I can always refresh it. That's exactly what I'm using it for on my updated website,

Wes Bos

because I'm pinging Twitter, LinkedIn, Bluesky, all the social platforms, and I'm pulling in stats for each of my posts.

Wes Bos

And you can only hit those APIs so often before they block you, especially because I'm, like, scraping them. So I have a TTL of forty five minutes or something on each one of them, so that I at most only hit each API every forty five minutes.
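That cache-aside pattern with a TTL, sketched out (SOCIAL_CACHE and the stats endpoint are placeholders, not the real scrapers):

```js
// Return cached stats if present, otherwise fetch fresh data and cache it for 45 minutes.
async function getStats(env, postUrl) {
  const key = `stats:${postUrl}`;

  // Serve from KV while the cached copy is still alive.
  const cached = await env.SOCIAL_CACHE.get(key, { type: "json" });
  if (cached) return cached;

  // Placeholder endpoint standing in for whatever API or scrape you do.
  const res = await fetch(`https://example.com/stats?url=${encodeURIComponent(postUrl)}`);
  const fresh = await res.json();

  // expirationTtl is in seconds; KV drops the key automatically afterwards.
  await env.SOCIAL_CACHE.put(key, JSON.stringify(fresh), { expirationTtl: 45 * 60 });
  return fresh;
}
```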

Guest 1

Hyperdrive. This one's actually kinda cool. You wanna explain that? Yeah. So I haven't used it yet, but I started looking into it, because pretty often when you start working with Workers, everything you see is, like, just use D1. Well, what if you don't wanna use SQLite? They introduced Hyperdrive, and it essentially allows you to take any Postgres database, hosted anywhere, and turn it into a globally distributed Postgres database.

Topic 12 41:35

Hyperdrive distributes Postgres globally

Guest 1

This was fascinating to me, because if you're the kind of person that maybe is self hosting some things, maybe you have Coolify and you're running a Postgres database.

Guest 1

That thing, it probably isn't gonna scale very well once you get thousands and thousands of users.

Guest 1

But if you put it behind Hyperdrive, it will essentially cache any reads going into the DB and also make it super fast for reads across the world.

Guest 1

So I don't know if it's built on top of PgBouncer, but it's very similar, in that it creates a bunch of connection pools so that the database connection can be reused. And you talked about this a little bit earlier, but this is one of those things you need when you're dealing with edge functions or serverless or, in this case, Workers, because so many of them spin up, you could potentially saturate the number of available connections to your database.

Guest 1

Mhmm. So, typically, if you're trying to connect to something like Postgres from a worker, you'd wanna put it behind Hyperdrive. That way, first of all, it can scale it globally.

Guest 1

But beyond that, it makes sure to manage all those connections from all the workers spinning up. I actually had this problem with my receipt printer,

Wes Bos

that I was working on a while ago because the receipt printer allows a TCP connection, but it only allows one.

Wes Bos

And once that one is saturated... I thought this was kind of an apt illustration of how databases work. Right? Once something is connected to it, you can't have more connections to it. Databases can sometimes have, like, five or six connections at once before they say, hey, no more connections to me. But then you need something that will pool them, right, and sit in front of it, and that's what Hyperdrive does.
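A rough sketch of connecting through Hyperdrive from a Worker, assuming the postgres.js driver (HYPERDRIVE is the binding name you'd configure; the query is made up):

```js
import postgres from "postgres";

export default {
  async fetch(request, env) {
    // Hyperdrive hands you a connection string that points at its pooled,
    // cached proxy rather than directly at your origin Postgres server.
    const sql = postgres(env.HYPERDRIVE.connectionString);

    const rows = await sql`SELECT id, title FROM episodes LIMIT 10`;
    return Response.json(rows);
  },
};
```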

Wes Bos

I'm curious if you could also use a durable object for that and have the durable object be the... but there's no TCP

Guest 1

in Cloudflare Workers. I don't think so. Curious if you would hit, like, storage limits in durable objects. You'd probably have to do some jumping around to make sure. Yeah, I'm not saying... I'm not saying

Wes Bos

use the durable object to store the data, but I'm saying use the durable object as the connection pool, and then take all the requests coming in and pass them along as you have them. So, like, I even had that with my little ESP32 here, where I built a WebSocket server on it, and the WebSocket server can be connected to a million times. And then the server itself sort of collects them all and passes it along as serial data.

Wes Bos

Nice.

Vector database allows similarity search

Wes Bos

Alright. The next one we have here is vector databases and vectorizing data. So in the AI world, when you have something like, let's say, the title of our podcast, what you can do is convert the titles of the podcast to something called an embedding. And an embedding is, like, a mathematical representation of that data. And then once something is an embedding, you can store it in a vector database, and that allows you to query based on similarity.

Wes Bos

There's different algorithms that you can use; cosine similarity is probably the big one. And that's often how these similar episode features actually work, because you can vectorize the title, you can vectorize pieces of the transcript, you can vectorize photos. That's how similar photos will often work.

Wes Bos

Basically, you take a photo and you vectorize it, or you turn it into an embedding, and then you say, show me embeddings that are mathematically close to this, and it'll bring you back photos that are similar, or podcast episodes that are similar.

Wes Bos

And in order to do those calculations, cosine similarity being one of them, you can download all the data and run it in memory, but once that gets too large, you need a database that can run vector queries.

Wes Bos

And Cloudflare has Vectorize, a vector database, as well as APIs for creating the embeddings, which is pretty nifty.
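Sketching that with Workers AI for the embedding and a Vectorize index for the query; AI and VECTOR_INDEX are binding names you'd configure yourself, and the embedding model is one of the catalog IDs, so treat the specifics as illustrative:

```js
export default {
  async fetch(request, env) {
    const question = "What does Wes think about React?";

    // Turn the text into an embedding with a Workers AI embedding model.
    const embedding = await env.AI.run("@cf/baai/bge-base-en-v1.5", {
      text: [question],
    });

    // Ask the Vectorize index for the closest stored vectors.
    const matches = await env.VECTOR_INDEX.query(embedding.data[0], { topK: 5 });

    return Response.json(matches);
  },
};
```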

Guest 1

Definitely. And this goes right in with their AI offering, because they have a way to call AI APIs. But pretty often, you would use a vector database to kinda store additional context for your queries to an AI.

Guest 1

And like Wes was mentioning, the fact that you can search through the database and find the most relevant things, that's how you're able to create context that doesn't fill up your context window for when you're calling the API, but it's just enough to have the relevant pieces so that the AI doesn't hallucinate as much and it has a little bit more, things that it can reference when it's doing that query.

Wes Bos

That's often called RAG, or retrieval augmented generation.

Wes Bos

Yeah. We actually built a little demo with all the Syntax transcripts once, where we converted all the transcripts into embeddings, and then we asked it a question like, what does Wes think about React? And what it would do is take that question, vectorize it, and then compare that sentence to the rest of the podcast episodes.

Wes Bos

Or, it wasn't the entire podcast episode; it found, like, a little snippet of where we talked about it, and then it grabbed a couple sentences before and a couple sentences after.

Wes Bos

And then we fed that to the prompt to say, given this context, now answer this question. Right. And yeah, like you said, it doesn't hallucinate as much in that case.

Wes Bos

Yeah. They have all kinds of stuff in the AI world. It's very similar to most cloud offerings right now; they are sort of trying to be the place where people go to get it. So you can run all the different models on Cloudflare, which is pretty cool, because you can simply just reference the AI model by name. You don't have to download an 80 gigabyte model; you just run a query against it and you get the data back. So it's a very quick way to get access to all these different AI models, and they're pretty good at introducing new ones. Once the big ones hit, they'll try to roll them out and get them into production as fast as possible.
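Calling a hosted model from a Worker is roughly this; the model ID is one example from the Workers AI catalog and the messages are placeholders:

```js
export default {
  async fetch(request, env) {
    // env.AI is the Workers AI binding; the model is referenced by name,
    // nothing is downloaded into your Worker.
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      messages: [
        { role: "system", content: "You are a concise assistant." },
        { role: "user", content: "Summarize what Cloudflare Workers are." },
      ],
    });

    return Response.json(result);
  },
};
```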

Guest 1

Definitely. And one of the features I like is the automated rate limiting. Let's say you're integrating with the OpenAI API, and all of a sudden you get an influx of traffic on your website. It's possible your OpenAI bill is gonna go up and up and up, because you're getting all those queries to their API. But they have an offering, and because they already do all of this DNS and bot prevention and stuff like that, they have rate limiting. They give you a URL, you just replace the OpenAI URL with their URL, and now you can configure your rate limiting settings. Yeah. So you don't even have to use their AI stuff. It's basically like a little AI proxy. Oh, so you can still hit your own... I didn't realize they had that. Yeah. Isn't it funny how

Topic 14 47:50

Cloudflare integrates latest AI models

Wes Bos

the OpenAI API has sort of become, like, the standard. It's similar to how the S3 API is the standard for all, image uploads or... what's it called?

Guest 1

File uploads, file storage, object storage? Yeah. File uploads. There we go.

Wes Bos

Oh, man. We're running up on an hour here. Let's go through frameworks, how to actually use them. I've been using the OpenNext Cloudflare adapter, still very heavily under development.

Wes Bos

They recommend you probably don't use it for production just yet, but it is very close to supporting all the Next.js features that you would expect, which is really cool. And being able to access all of your bindings, meaning if you wanna access D1 or KV or R2 or any of these things, they make it really easy, because one of the things about all of the Cloudflare offerings is you often have to access them through a worker. And if your code is not running inside of a worker, it's hard to get access to them. It's not like a regular database where you just get a connection string and you can connect to it. So they've done a great job there.

Topic 15 49:45

Multiple frameworks integrate Cloudflare

Wes Bos

Orange JS is another one that's built on React Router seven, and they're attempting to make a fullstack framework, which is kinda cool. Like, is this gonna be the Laravel for Cloudflare?

Guest 1

I can imagine something like that popping up. Basically, all of these offerings are already pretty easy to use, but to put them into a single framework with maybe a consistent API, yeah, this could be the Laravel of the JS world. Yes. And, of course, Hono, which, I mean, you've probably heard me talk about on the podcast here before. That's mainly what I've used it for: building back end APIs. And like we mentioned at the beginning of the episode, essentially, you can define all of your routes, you can have middlewares, all the stuff you're used to when building back end APIs, but they can live inside of that Cloudflare Worker. I will say that the other thing I've done is combine that with a single page application built with React. And because of their static assets offering, that single page app can live as static files right next to the worker. You might have set up a site like this before, but essentially, if a request comes into the root, it redirects back to that index dot HTML, but everything that's slash API is served up via the worker.

Guest 1

So it's like a single deploy still, though. Right? Front end, back end? Exactly. But it's not server rendered. It's API and front end, which is how I like to build most things, but it does live in the same deployment. It's a single deployment. Yeah.
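The shape of that setup, roughly; the route and the asset config comment are illustrative, not the actual project:

```js
import { Hono } from "hono";

const app = new Hono();

// API routes live under /api/* and run in the Worker.
app.get("/api/episodes/:id", (c) =>
  c.json({ id: c.req.param("id"), title: "Fullstack Cloudflare" })
);

// The built React app is served as Workers static assets (configured in
// wrangler.toml); unmatched paths fall back to index.html, so client-side
// routing in the SPA keeps working.
export default app;
```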

Wes Bos

And Hono has, like, RPC. Right? So you could import the Hono RPC client and call server functions

Guest 1

from the client? Exactly. So I did a video on monorepos, and this is one of those monorepos that I showed, where you have the React app and the Hono app living in the same monorepo and sharing those RPC types. So all the endpoints you define with Hono are fully typed, and then, Mhmm, when you use that client in your React app, it knows exactly what the types are gonna be for various responses,

Wes Bos

and also what the errors are shaped like. So, yeah, it gives you full type safety for that. Beautiful. And then, like other things, most other stuff will have an adapter for deploying to Cloudflare. You know, we talked about Nuxt last week, React Router seven. There will just be adapters that you can export to. Pretty much most new modern stuff, with the exception of Next.js, will have an adapter, and it's built in a way that you can deploy this thing anywhere, Cloudflare being one of them.

Wes Bos

Last thing I wanna talk about real quick is they have an analytics engine. Ben Vinegar, we had him on the podcast to talk about this. He's building this thing called CounterScale, which is basically, you can host your own analytics on your own Cloudflare, which is wild. And he built this whole dashboard you get.

Wes Bos

Yeah. Similar to, like, Plausible, where you can kinda see who's visiting your website, how many hits you got, what pages, what the referrers are, but self hosted, and it's dirt cheap to run on the analytics engine. So it's pretty nifty. They also have a WebRTC engine, so if you wanna build anything that has real time calling, they've got that primitive. And then they have a Puppeteer API, which is something I've been using because I do screenshots to generate the open graph images.

Wes Bos

Definitely.

Wes Bos

So anything else to add about fullstack Cloudflare? We didn't even touch... we maybe touched, like, 26%

Guest 1

of the products, but those are the big ones for me. Yeah. I guess I'll just mention, I'm sure there's gonna be somebody in the YouTube comments like, oh, this is Cloudflare shilling, it's sponsored. It's not sponsored. We like the product. We use it, and it just has a bunch of cool stuff. I would highly recommend it if you're building stuff.

Guest 1

Exactly.

Wes Bos

I actually don't own any of their stock. No. No. Me neither. Me neither.

Wes Bos

Yeah. Yep. Everyone always says that type of thing.

Wes Bos

But honestly, I'm critical of them in many ways, but they are very receptive to feedback. As well, I think once this stuff sort of gets smoothed out... they've been putting out products like crazy.

Wes Bos

I do think that this is going to be a very big option in the future when you're building apps. Definitely. Completely agree.

Wes Bos

Cool. Alright. Let's wrap it up with some sick picks and shameless plugs. You got a sick pick for me today? Alright. I have one.

Guest 1

So my MacBook, I think, only has, like, a 256 gig SSD.

Guest 1

But yeah, I struggle to transfer stuff back and forth, because I do a lot of video editing, and those files are really big.

Guest 1

But I came across an SD card reader that's smaller than usual, and it sits flush on the Mac. So I have a one terabyte micro SD. This plugs into my MacBook, and it doesn't stick out. So I have, basically, a terabyte of extra storage all the time. That's awesome. I'll link to the one that I found. There's probably other examples out there. If you just search for, like, MacBook Pro small micro SD reader, yeah, you'll probably find it. And do you edit off of that, or is that sort of, like, you offload? I offload, but you could. I guess it depends on the micro SD card, but the transfer speeds are decent enough that you could actually edit directly off of it. Okay. Yeah. I often think about that. Like, when I got my MacBook, I

Wes Bos

I was so sick of running out of space. Drives me nuts. You know, you record a couple of videos and you're out of space. So I got the two terabyte, and that was good for, like, a year or two, but now I'm running up on it. Especially, you download a couple of these AI models and before you know it, you're out of space. It drives me crazy that I had, like, a two terabyte desktop fifteen years ago, and now I still only have two terabytes. Give me a hundred terabytes. I don't wanna use Apple Drive or whatever the stupid thing is. They always try to prompt me for it, same with the phones. Drives me nuts. Definitely. Okay. I'll sick pick along the same lines.

Wes Bos

I have a Synology with, you know, 20 terabytes in it or something crazy like that.

Wes Bos

And, I use that as my like cold storage.

Wes Bos

So when I'm done with a video, I will drag it into my Synology on the network.

Wes Bos

The whole thing will transfer over there, and then I have the Synology set up so that it mirrors to Backblaze B2, so that I have a local and a remote copy of it. I've been running that for years, and I love that setup, because I don't have to worry about having access to it when I need it. I can just connect to it remotely, or I can go into Backblaze and get it, and it feels really good. And Backblaze is dirt cheap for holding stuff long term,

Guest 1

where you're not accessing it frequently. So big fan of that. That's awesome. I'm Synology curious. Like, I started building my own RAID. Yeah. I see all this, like, Synology, you pay a premium for their hardware, but, yeah, it seems like they have so much built in. I definitely gotta give it a try. I've been going the opposite way, because it only has a one gig network card. Okay. And there's

Wes Bos

there's two one gig cards, and I've been trying to, like, not bridge them, but there's a, what is it, SMB? You can multicast SMB. You can send stuff over both. Mhmm. And, like, I have a two and a half gig NIC on my MacBook.

Wes Bos

I cannot get it working.

Wes Bos

So I'm like, what I would do for a 10 gig NAS. And that's the path I'm down, just a custom build. Yeah. It'd be kinda fun to do that, but then also, like, I'll just wait an extra three minutes for the thing to transfer, and it'll be okay. Definitely. So that's good. But, yep, we should do another show on Synology, me and Scott, because we've been using ours for five, six years now, and we're both pretty heavily into it. I was running Coolify on mine for a long time, but I had to stop it because it was eating too much RAM. Sure. I only have, like, 16 gigs on there or something.

Wes Bos

Alright. That's it for today. Thank you so much to everybody tuning in. Make sure you check us out on YouTube, youtube.com/@syntaxfm.

Wes Bos

You can check out some of CJ's videos as well, specifically on the Cloudflare stuff. He's got lots of them. Thanks for tuning in. Catch you later.