
Buckle Up, There’s a New Gem Server in Town: gem.coop

Assuming you haven’t been living under a rock these past few weeks, you know the Ruby community has been embroiled in quite a bit of drama. I won’t recap it here…there are plenty of other sources to turn to (Joel Drapper, for one), and I also have my own pointed take on the matter on my personal blog. But here on Fullstack Ruby I like to maintain a positive, can-do attitude, so to that end, let’s talk about some very exciting developments!

Most Rubyists are familiar with rubygems.org, and the reason you see source "https://rubygems.org" at the top of every Gemfile is so Bundler can download and install gems from the rubygems.org server.

What I, and I suspect most of you, never considered is that source could be pointed at, well, anything. In fact, you can have multiple sources as well, and you can even write blocks to install groups of gems from different sources including git repos. (TIL 👀)
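To make that concrete, here’s a hypothetical Gemfile sketch showing multiple sources at once (the gem names and repo URL are invented purely for illustration):

```ruby
# Gemfile
source "https://rubygems.org"

# Gems declared inside this block install only from gem.coop
source "https://gem.coop" do
  gem "some_community_gem" # hypothetical
end

# And individual gems can come straight from a git repo
gem "another_gem", git: "https://github.com/example/another_gem.git" # hypothetical
```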

So Bundler is very flexible in this regard, as is the gem command (more on that in a moment). Which is why this news matters: The Gem Cooperative has announced a new community-minded gem server is now available, currently mirroring all the gems from rubygems. Martin Emde says that “all Ruby developers are welcome to switch to using this new server immediately.”

And here’s how you do it. Edit your project’s Gemfile and replace the source line at the top with this:

source "https://gem.coop"

Now you can bundle install and bundle update and support this new community effort.

Who is behind The Gem Cooperative, you may ask? Basically it’s all those folks who had previously been working on rubygems before they were unceremoniously kicked out of Ruby Central during the takeover. Ouch. From gem.coop, they are:

And Mike McQuaid of Homebrew fame is also helping out with establishing governance for the project (and providing some technical advice, from the looks of things).

Work is currently underway on an updated version of Bundler that will support namespaces, and once that happens and gem.coop is updated to support gem pushes, we could see folks decide to publish new/updated gems only to gem.coop under new namespaces. This is all in service of moving away from Ruby Central as a single (and very problematic) point of failure.

All right, so updating your Gemfile is easy enough, but what about when you need to install new gems from scratch using the gem command? You will use gem sources for that. First, you can get a list of which sources you currently use by running gem sources --list. Typically you’ll just see the rubygems server listed. To add gem.coop, run:

gem sources --add https://gem.coop

and to remove rubygems, run:

gem sources --remove https://rubygems.org/

You can verify when you install a new gem which server is used by including the -V flag, e.g.:

gem install solargraph -V

Otherwise just use gem as per usual.

With news like this and the previous round of news regarding rv, an attempt to create next-generation unified Ruby tooling (install Ruby, install dependencies, run app commands, create new gems, etc.), I think we may be on the cusp of a big leap forward for the language, both in terms of technical prowess as well as acceptable forms of community governance.



Little Content Tricks for Your Bridgetown Website

Well my Ruby friends, a new day has dawned with the release of the Ruby web framework Bridgetown 2, and that means I can start to enjoy the fruits of our labor by sharing useful code examples and architectural explanations here on Fullstack Ruby. Yay! 🎉

(BTW…how cool is this custom artwork by Adrian Valenzuela??)

Greetings from River City

Now onto today’s little batch of snippets.

On a Bridgetown client project, we wanted to be able to drop in links to the client’s many videos hosted on Vimeo. I didn’t want to have to deal with the hassle of grabbing <iframe> tags for every single video, so my first inclination was to write a helper method and use those calls in the markup where needed. But then I realized I could go a step further: just paste in the damn link and get an embed on the other side! 😂

It needs to go from this Markdown source:

Ever wonder what it's like to dance under the sea? Here's your chance to experience lights that simulate moving water. These are customizable with different color variations and ripple speeds.

https://vimeo.com/390917842

to this HTML output:

<p>Ever wonder what it’s like to dance under the sea? Here’s your chance to experience lights that simulate moving water. These are customizable with different color variations and ripple speeds.</p>

<p><iframe src="https://player.vimeo.com/video/390917842" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen loading="lazy"></iframe></p>

And using a bit of string substitution in a builder hook, the solution is straightforward indeed:

# plugins/builders/vimeo_embeds.rb
class Builders::VimeoEmbeds < SiteBuilder
  def build
    hook :resources, :post_render do |resource|
      resource.output.gsub!(
        %r!<p>https://vimeo\.com/([0-9]+)</p>!,
        %(<p><iframe src="https://player.vimeo.com/video/\\1" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen loading="lazy"></iframe></p>)
      )
    end
  end
end

In your case you might be using YouTube, or PeerTube, or some other form of video hosting, but the concept would be just the same. You could even layer up several gsub calls to handle them all.
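For instance, a YouTube pass might look something like this (the URL pattern and embed markup here are illustrative, and I’m operating on a plain string rather than resource.output so you can see the transformation on its own):

```ruby
html = "<p>https://www.youtube.com/watch?v=abc123XYZ</p>"

# Same trick as the Vimeo hook: swap a bare paragraph-wrapped link for an embed
embedded = html.gsub(%r!<p>https://www\.youtube\.com/watch\?v=([\w-]+)</p>!) do
  %(<p><iframe src="https://www.youtube.com/embed/#{Regexp.last_match(1)}" width="640" height="360" loading="lazy" allowfullscreen></iframe></p>)
end

# embedded now contains "https://www.youtube.com/embed/abc123XYZ"
```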

Lazy Images

For better frontend performance, due to the large number of images we display on some of the content pages, I wanted to ensure that images added in the Markdown would output with a loading="lazy" attribute. This tells browsers to hold off on loading the image until the reader scrolls down to that place in the document.

After making sure I had gem "nokolexbor" installed, and had added html_inspector_parser "nokolexbor" to my Bridgetown configuration in config/initializers.rb, I proceeded to write an HTML inspector plugin to do the job:

# plugins/builders/lazy_images.rb
class Builders::LazyImages < SiteBuilder
  def build
    inspect_html do |doc|
      main = doc.query_selector('main')
      next unless main

      main.query_selector_all("img").each do |img|
        next if img[:loading]

        img[:loading] = :lazy
      end
    end
  end
end

This loops through all img tags within the main layout element and sets a loading="lazy" attribute on any that don’t already have one.

Extracting an Image for Open Graph

On another project, I wanted to have some smarts where the image used for open graph previews could be pulled directly out of the content, rather than me having to set an image front matter variable by hand. I decided to solve this with a bit of regex wizardry:

# plugins/builders/image_extractions.rb
class Builders::ImageExtractions < SiteBuilder
  def build
    hook :posts, :pre_render do |resource|
      next if resource.data.image && !resource.data.image.end_with?("an-image-i-wanted-to-skip-here.png")

      md_img = resource.content.match %r!\!\[.*?\]\((.*?)\)!
      img_url, _ = md_img&.captures

      unless img_url
        html_img = resource.content.match %r!<img src="(.*?)"!
        img_url, _ = html_img&.captures
      end

      if img_url && !img_url.end_with?(".gif")
        img_url = img_url.start_with?("http") ? img_url : "#{site.config.url}#{img_url}"

        # Set the image front matter to the found URL
        resource.data.image = img_url
      end
    end
  end
end

You could simplify this if you’re only dealing with Markdown content…in my case I have a lot of old HTML-based content predating the age of modern Markdown files, so I need to support both input formats.
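If you want to see the Markdown half of that extraction on its own, here’s a minimal sketch (the sample content is invented):

```ruby
content = <<~MD
  Some intro text with a picture:

  ![A sunset over the bay](/images/sunset.jpg)
MD

# Same regex as the builder above: capture the URL inside ![alt](url)
md_img = content.match %r!\!\[.*?\]\((.*?)\)!
img_url, = md_img&.captures

# img_url is now "/images/sunset.jpg"
```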

And that’s it for today’s round of Bridgetown tips! To stay in touch for the next installment, make sure that you follow us on Mastodon and subscribe to the newsletter. What would you like to learn about next for building websites with Bridgetown? Let us know! ☺️



Sunsetting the Fullstack Ruby Podcast (and What I’m Doing Instead)

I always hate writing posts like this, which is why I rarely do it and tend to let content destinations linger on the interwebs indefinitely.

But I’m in the midst of spring (well, summer) cleaning regarding all things content creation, so I figured it’s best to be upfront about these things and give folks a heads up about what I’m currently working on.

TL;DR: I’m bidding the Fullstack Ruby podcast a bittersweet farewell and gearing up to launch a new podcast centered on current events in the software & internet technology space, because we’ve reached a crisis point and the future of the open web is more fragile than ever.


Here’s the truth. There’s a lot that’s fucked up about Big Tech and software development right now. Pardon my language, but I have struggled mightily with burnout for going on two years now; not because I don’t like writing software (oddly enough, I care as much about the actual work I do as a developer as I ever have!), but because I don’t like the software industry. ☹️

And sadly, I have been particularly disappointed with what’s going on with Ruby. I don’t want to rehash past consternation (you can read about my attempt to fully reboot Fullstack Ruby a year ago for more background, and listen to the followup podcast episode). Here’s the summary:

There are two devastating downward pressures on the software industry right now: the unholy alliance of far-right bigotry/propaganda & Big Tech, and the atomic bomb-level threat to the open web that is Generative AI. And the crazy part is, there’s actually a cultural connection between fascism and genAI so in a sense, these aren’t two separate problems. They’re the same problem.

Unfortunately, the Ruby community taken as a whole has done NOTHING to fight these problems. Certain individuals have, yes. Good for them. It’s not moving the needle though.

Ruby has already suffered in recent years from the brain-drain problem and the lack of mainstream awareness for new project development. I have always felt that problem alone is one we can surmount. But when you pile on top of that the fascism problem (personified in the founder & figurehead of Ruby on Rails, DHH) and the AI problem (which major voices in the Ruby space have not only failed to combat but are actively advocating for and encouraging), I find it increasingly difficult to remain an active cheerleader going forward.

Don’t get me wrong, I’m not saying it’s any better per se in other programming language communities. But if I have to deal with fighting off fascism and the evils of Big Tech on a daily basis, I might as well be writing JavaScript while I’m doing it. JavaScript is the lingua franca of the web. It just is. And it’s already what we use for frontend, out of technical necessity.

I still prefer writing Ruby on the backend. I do, I really do! And I’m not sure yet I’m ready to give that up, even now. But it does mean I find my enthusiasm for talking about Ruby and recommending it to others fading into the background. Damn.


I haven’t yet decided what the ultimate fate of this blog is. But I do know I’m ready to scale my ambitions way down in this particular community, so to start, I must bid the podcast farewell. 🫡

As I alluded to above, I’m actually gearing up to launch a brand new podcast! You may be interested in it, you may not. Regardless, the easiest way to stay notified about the launch is to follow me on Mastodon and subscribe to the Cycles Hyped No More newsletter (the podcast will be a companion product, if you will, to that newsletter).

It should come as no surprise by now that the purpose of the new podcast is to take the twin perils of fascism-adjacent Big Tech and genAI head on. I will be speaking about this early and often, for as long as it takes for people to wake up and realize the open web is under assault. I don’t mean to sound unnecessarily dramatic, but while we’re over here arguing about programming languages and coding patterns and architectural choices for our web apps, the very web itself is getting pillaged and dismantled brick by brick by hostile forces.

I’m not going to let that happen without a fight.

I hope you’re interested in joining me in this fight. Stay tuned.

–Jared ✌️



Finding My Happy Place with Hanami and Serbea Templates

It sure seems like the Hanami web framework has been in the news lately, most notably the announcement that Mike Perham of Sidekiq fame has provided a $10,000 grant to Hanami to keep building off the success of version 2.2. I also deeply appreciate Hanami’s commitment to fostering a welcoming and inclusive community.

Thus I figured it was high time I took Hanami for a spin, so after running gem install hanami, along with a few setup commands and a few files to edit, I had a working Ruby-powered website running with Hanami! Yay!! 🎉

But then I started to miss the familiar comforts of Serbea, a Ruby template language based on ERB but with a few extra tricks up its sleeve to make it feel more like “brace-style” template languages such as Liquid, Nunjucks, Twig, Jinja, Mustache, etc. I’ve been using Serbea on nearly all of my Bridgetown sites as well as a substantial Rails client project, so it’s second nature to write my HTML in this syntax. (Plus, y’know, Serbea is a gem I wrote. 😋)

After feeling sad for a moment, it occurred to me that I’d read that Hanami—like many Ruby frameworks—uses Tilt under the hood to load templates. Ah ha! Serbea is also built on top of Tilt, so it shouldn’t be too difficult to get things working. One small hurdle I knew I’d have to overcome is that I don’t auto-register Serbea as a handler for “.serb” files, as Serbea requires a mixin for its “pipeline” syntax support as an initial setup step. So I’d need to figure out where that registration should go and how to apply the mixin.

Turns out, there was a pretty straightforward solution. (Thanks Hanami!) I found that templates are rendered within a Scope object, and while a new Hanami application doesn’t include a dedicated “base scope” class out of the box, it’s very easy to create one. Here’s what mine looks like with the relevant Serbea setup code:

# this file is located at app/views/scope.rb

require "serbea"

Tilt.register Tilt::SerbeaTemplate, "serb"

module YayHanami
  module Views
    class Scope < Hanami::View::Scope
      include Serbea::Helpers
    end
  end
end

Just be sure to replace YayHanami with your application or slice constant. That and a bundle add serbea should be all that’s required to get Serbea up and running!

Once this was all in place, I was able to convert my .html.erb templates to .html.serb. I don’t have anything whiz-bang to show off yet, but for your edification here’s one of Hanami’s ERB examples rewritten in Serbea:

<h1>What's on the Bookshelf</h1>
<ul>
  {% books.each do |book| %}
    <li>{{ book.title }}</li>
  {% end %}
</ul>

<h2>Don't miss these best selling titles</h2>
<ul>
  {% best_sellers.each do |book| %}
    <li>{{ book.title }}</li>
  {% end %}
</ul>

This may not look super thrilling, but imagine you wanted to write a helper that automatically creates a search link for a book title and author to a service like BookWyrm. You could add a method to your Scope class like so:

def bookwyrm(input, author:)
  "<a href='https://bookwyrm.social/search?q=#{escape(input)} #{escape(author)}'>#{escape(input)}</a>".html_safe
end

and then use it filter-style in the template:

<li>{{ book.title | bookwyrm: author: book.author }}</li>

I like this much more than in ERB, where helpers are placed before the data they’re acting upon, which to me feels like a logical inversion:

<li><%= bookwyrm(book.title, author: book.author) %></li>

Hmm. 🤨

Anyway, I’m totally jazzed that I got Hanami and Serbea playing nicely together, and I can’t wait to see what I might try building next in Hanami! This will be an ongoing series here on Fullstack Ruby (loosely titled “Jared Tries to Do Unusual Things with Hanami”), so make sure that you follow us on Mastodon and subscribe to the newsletter to keep abreast of further developments.



A Casual Conversation with KOW (Karl Oscar Weber) on Camping, Open Source Politics, and More

This is a right humdinger of an episode of Fullstack Ruby! I got the chance to talk with Karl Oscar Weber all about the Camping web framework, as well as his Grilled Cheese livestream, working as a freelancer, and how to criticize by creating as a programmer in a world fraught with political upheaval. Great craic, as the Irish say.



Tired of Dealing with Arguments? Just Forward Them Anonymously!

I don’t know about you, but after a while, I just get tired of the same arguments. Wouldn’t it be great if I could simply forward them instead? Let somebody else handle those arguments!

OK I kid, I kid…but it’s definitely true that argument forwarding is an important aspect of API design in Ruby, and anonymous argument forwarding is a pretty awesome feature of recent versions of Ruby.

Let’s first step through a history of argument forwarding in Ruby.



The Era Before Keyword Arguments

In the days before Ruby 2.0, Ruby didn’t actually have a language construct for what we call keyword arguments at the method definition level. All we had were positional arguments. So to “simulate” keyword arguments, you could call a method with what looked like keyword arguments (really, it was akin to Hash syntax), and all those key/value pairs would be added to a Hash argument at the end.

def method_with_hash(a, b, c = {})
  puts a, b, c
end

method_with_hash(1, 2, hello: "world", numbers: 123)

Run in Try Ruby

Fun fact, you can still do this even today! But it’s not recommended. Instead, we were graced with true language-level keyword arguments in Ruby 2. To build on the above:

def method_with_kwargs(a, b, hello:, **kwargs)
  puts a, b, hello, kwargs
end

method_with_kwargs(1, 2, hello: "world", numbers: 123)

Run in Try Ruby

Here we’re specifying hello as a true keyword argument, but also allowing additional keyword arguments to get passed in via an argument splat.

Back to the past though. When we just had positional arguments, it was “easy” to forward arguments because there was only one type of argument:

def pass_it_along(*args)
  you_take_care_of_it(*args)
end

def pass_the_block_too(*args, &block) # if you want to forward a block
  do_all_the_things(*args, &block)
end

There is also a way to say “ignore all arguments, I don’t need them” which is handy when a subclass wants to override a superclass method and really doesn’t care about arguments for some reason:

def ignore_the_args(*)
  bye_bye!
end

The Messy Middle

Things became complicated when we first got keyword arguments, because a question arose: when you forward arguments the traditional way, do real keyword arguments get forwarded as well, or do you just get a boring Hash?

For the life of Ruby 2, this worked one way, and then we got a big change in Ruby 3 (and really it took a few iterations before a fully clean break).

In Ruby 2, forwarding positional arguments only would automatically convert keywords over to keyword arguments in the receiving method:

def pass_it_along(*args)
  you_take_care_of_it(*args)
end

def you_take_care_of_it(*args, abc: 0)
  puts "#{args} #{abc}"
end

pass_it_along("hello", abc: 123) # ["hello"] 123

However, in the Ruby of today, this works differently. There’s a special ruby2_keywords method decorator that lets you simulate how things used to be, but it’s well past its sell date. What you should do instead is forward keyword arguments separately:

def pass_it_along(*args, **kwargs)
  you_take_care_of_it(*args, **kwargs)
end

def you_take_care_of_it(*args, abc: 0)
  puts "#{args} #{abc}"
end

pass_it_along("hello", abc: 123) # ["hello"] 123

But…by the time you also add in block forwarding, this really starts to look messy. And as Rubyists, who likes messy?

def pass_it_along(*args, **kwargs, &block)
  you_take_care_of_it(*args, **kwargs, &block) # Ugh!
end

Thankfully, we have a few syntactic sugar options available to use, some rather recent. Let’s take a look!

Give Me that Sweet, Sweet Sugar

The first thing you can do is use triple-dots notation, which we’ve had since Ruby 2.7:

def pass_it_along(...)
  you_take_care_of_it(...)
end

def you_take_care_of_it(*args, **kwargs, &block)
  puts [args, kwargs, block.()]
end

pass_it_along("hello", abc: 123) { "I'm not a blockhead!" }

Run in Try Ruby

The triple dots originally didn’t allow anything extra in method definitions or invocations, but since Ruby 3.0 you can prefix them with positional arguments if you wish:

def pass_it_along(str, ...)
  you_take_care_of_it(str.upcase, ...)
end

def you_take_care_of_it(*args, **kwargs, &block)
  puts [args, kwargs, block.()]
end

pass_it_along("hello", abc: 123) { "I'm not a blockhead!" }

Run in Try Ruby

However, for more precise control over what you’re forwarding, first Ruby 3.1 gave us an “anonymous” block operator:

def block_party(&)
  lets_party(&)
end

def lets_party
  "Oh yeah, #{yield}!"
end

block_party { "baby" }

Run in Try Ruby

And then Ruby 3.2 gave us anonymous positional and keyword forwarding as well:

def pass_it_along(*, **)
  you_take_care_of_it(*, **)
end

def you_take_care_of_it(*args, abc: 0)
  puts "#{args} #{abc}"
end

pass_it_along("hello", abc: 123)

Run in Try Ruby

So at this point, you can mix ‘n’ match all of those anonymous operators however you see fit.

The reason you’d still want to use syntax like *args, **kwargs, or &block in a method definition is if you need to do something with those values before forwarding them, or in some metaprogramming cases. Otherwise, using anonymous arguments (or just a basic ...) is likely the best solution going, uh, forward. 😎
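Here’s a quick sketch of that mixing in action (the method names are invented; requires Ruby 3.2+). Positionals and the block are forwarded anonymously, while one keyword argument is handled by name:

```ruby
# Positionals and the block pass through untouched; `extra` is peeled off by name
def wrapped(*, extra: 1, &)
  inner(*, &) * extra
end

def inner(*args)
  args.sum + yield
end

wrapped(2, 3, extra: 10) { 5 } # => 100
```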



Do You Need More Advanced Delegation?

There are also higher-level constructs available in Ruby to forward, or delegate, logic to other objects:

The Forwardable module is a stdlib mixin which lets you specify one or more methods to forward using class methods def_delegator or def_delegators.

The Delegator class is part of the stdlib and lets you wrap another object and then add on some additional features.

So depending on your needs, it may make more sense to rely on those additional stdlib features rather than handle argument forwarding yourself at the syntax level.
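For a taste of the Forwardable flavor, here’s a minimal sketch (the class and method names are invented for the example):

```ruby
require "forwardable"

class Playlist
  extend Forwardable

  # Forward these calls straight to the underlying @tracks array
  def_delegators :@tracks, :size, :first, :include?

  def initialize(tracks = [])
    @tracks = tracks
  end
end

list = Playlist.new(["Intro", "Outro"])
list.size              # => 2
list.first             # => "Intro"
list.include?("Outro") # => true
```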

No matter what though, it’s clear we have many good options for defining an API where one part of the system can hand logic off to another part of the system. This isn’t perhaps as common when you’re writing application-level code, but if you’re working on a gem or a framework, it can come up quite often. It’s nice to know that what was once rather cumbersome is now more streamlined in recent releases of Ruby.



Dissecting Bridgetown 2.0’s Signalize-based Fast Refresh

As the lead maintainer of the Bridgetown web framework, I get to work on interesting (and sometimes very thorny!) Ruby problems which veer from what is typical for individual application projects.

With version 2 of Bridgetown about to drop, I’m starting a series of articles regarding intriguing aspects of the framework’s internals. This time around, we’re taking a close look at one of the marquee features: Fast Refresh.

The Feedback Loop

Bridgetown is billed as a “progressive site generator” which offers a “hybrid” architecture for application deployments. What all this jargon means is that you can have both statically-generated content which is output as final HTML and other files to a destination folder, and dynamically-served routes which offer the typical request/response cycle you see in traditional web applications.

When it comes to the common development feedback loop of save-and-reload, traditional web applications are fairly straightforward. You make a change to some bit of code or content, you reload your browser tab, which makes a new request to the application server, and BOOM! You’re refreshed.

But what about in a static site? You make a change, and suddenly the question becomes: which HTML files need to be regenerated? And what if your change isn’t in a specific page template or blog post or whatever, but some shared template or model or even another page that’s referenced by the one you’re trying to look at? Suddenly you’re talking about the possibility that your change might require regenerating only one literal .html file…or thousands. As the saying goes, it depends.

Prior to the fast refresh feature, Bridgetown regenerated an entire website on every change. You fix a typo in a single Markdown file…entire site regenerated. You update a logo URL in a site layout header…entire site regenerated. This may sound like a slow and laborious process, but on most sites of modest size, complete regeneration is only a second or two. Not that big of a deal, right?

And yet…some sites definitely grow beyond modest size. On my personal blog JaredWhite.com, the number of resources (posts, podcast episodes, photos, etc.) plus the number of generated pages (tag archives, category archives, etc.) has reached around 1,000 at this point, with no signs of stopping. What used to be measured in milliseconds is now measured in seconds on a full build—and while that’s perfectly reasonable in production when a site’s deploying, it stinks when you’re talking about that save-and-reload process in development.

Hence the need for a new approach. Some frameworks call this “incremental regeneration”, but I think “fast refresh” sounds cooler. Bridgetown has already had a live reload feature since its inception—aka, you don’t need to manually go to your browser and reload the page, the framework does it for you. But now with fast refresh enabled, your browser reloads almost instantly! It’s so fast, sometimes by the time I get back to my browser window, the change has already appeared. What a huge quality of life improvement! DX at its finest.

But how did we pull off such a feat? How do we know which .html files need to be regenerated? Is it ✨ magic ✨? The power of AI?

Nope. Just some good ol’ fashioned dependency-tracking via linked lists and closures…aka Signals. What the what? Let’s dive in.

I thought “signals” was a frontend thing. Why does Ruby need them?

I’ve talked a lot about signals before here on Fullstack Ruby so I won’t go into the whole rationale again. Suffice it to say, if you need to establish any sort of dependency graph such that when one piece of data over here changes, you need to be notified so you can update another piece of data over there, the signals paradigm is a compelling way to do it. At first glance it looks a lot like the “observables” pattern, but where observables require a manual opt-in process (you as the developer need to indicate which bit of data you’d like to observe), signals do this automatically.

When you write an “effect” closure (in JavaScript called a function, in Ruby called a proc) and access any signals-based data within that closure, a dependency is created between the effect and that signal (tracked using linked lists under the hood). Any time in the future some of that data changes, because the effect is dependent on the data, it is executed again. This automatic re-run functionality is what makes signals feel like ✨ magic ✨.

Some signals are “computed”—meaning you write a special closure to access one or more other signals, perform some calculation, and return a result. Computed signals update “lazily”—in other words, the calculation is only performed at the point the result is required. Under the hood, a computed signal is built out of an effect, which means when you write your own effects to access the values of computed signals, effects are dependent on other effects. Again, it can feel like ✨ magic ✨ until you understand how it works.
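If you’d like a feel for the core mechanism, here’s a toy sketch in plain Ruby (to be clear, this is not Signalize’s actual implementation, which uses linked lists and supports computed signals, batching, and disposal): reading a signal inside an effect registers a dependency, and writing the signal re-runs every dependent effect.

```ruby
# Toy signal: reading inside an effect subscribes that effect;
# writing re-runs all subscribed effects. (Not Signalize's real code!)
class ToySignal
  @@current_effect = nil

  def initialize(value)
    @value = value
    @subscribers = []
  end

  def value
    if @@current_effect && !@subscribers.include?(@@current_effect)
      @subscribers << @@current_effect
    end
    @value
  end

  def value=(new_value)
    @value = new_value
    @subscribers.each(&:call)
  end

  def self.effect(&block)
    @@current_effect = block
    block.call # the first run registers dependencies as signals are read
  ensure
    @@current_effect = nil
  end
end

title = ToySignal.new("Draft")
log = []
ToySignal.effect { log << title.value }
title.value = "Published" # the effect re-runs automatically
log # => ["Draft", "Published"]
```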

Now it’s true that the signals paradigm has taken off like wildfire on the frontend as a serious solution to the state -> UI update lifecycle. You need to know which specific parts of the interface need to be rerendered based on which specific state has changed.

Hmm.

Rerendering based on changes to data. Now where have I heard that one before?

Yeah, that’s it! Sounds an awful lot like the exact problem Bridgetown faces when you modify code or content in a file. We need to know how to rerender specific parts of the interface (aka which particular .html files) based on the dependency graph of how your modified code touches various pages.

Here’s how Fast Refresh solves the problem by effectively utilizing the Signalize gem. For a framework-level overview of the feature, check out this post on the Bridgetown blog.

Transformations in Effects

The first step along our journey is making sure the process of transformation (compiling Markdown down to HTML, rendering view components, placing page content inside of a layout, etc.) is wrapped in an effect. This way, if during the course of transforming Page A there’s a reference to the “title” signal of Page B, any future change to Page B’s “title” will trigger a rerun of Page A’s transformation.

However, it’s a wee bit more complicated than that. We don’t want to perform the rerender immediately when the effect is triggered for a variety of reasons (performance, avoiding infinite loops, etc.). We instead want to mark the resource which needs to be transformed, and then later on we’ll go through all of the queued resources in a single pass in order to perform transformations.

Here’s a snippet from the Bridgetown codebase of what that looks like:

# bridgetown-core/lib/bridgetown-core/resource/base.rb
def transform!
  internal_error = nil
  @transform_effect_disposal = Signalize.effect do
    if !@fast_refresh_order && @previously_transformed
      self.content = untransformed_content
      @transformer = nil
      mark_for_fast_refresh! if site.config.fast_refresh && write?
      next
    end

    transformer.process! unless collection.data?
    slots.clear
    @previously_transformed = true
  rescue StandardError, SyntaxError => e
    internal_error = e
  end

  raise internal_error if internal_error

  self
end

There are a few things going on here, so let’s walk through it:

So that’s one facet of the overall process. Here’s another one: we needed to refactor resource data (front matter + content) to use signals, otherwise our effects would be useless.

Here’s a snippet showing what happens when new data is assigned to a resource:

# bridgetown-core/lib/bridgetown-core/resource/base.rb
def data=(new_data)
  mark_for_fast_refresh! if site.config.fast_refresh && write?

  Signalize.batch do
    @content_signal.value += 1
    @data.value = @data.value.merge(new_data)
  end
  @data.peek
end

First of all, we immediately mark the resource itself as ready for fast refresh. This is to handle the first-party use case where someone has made a change to a resource and we definitely want to rerender that resource…no need for a fancy dependency graph in that case!

Next, we create a batch routine to set a couple of signals: updating the data hash itself, and incrementing the “content” signal. For legacy reasons, we don’t use a signal internally to store the body content of a resource, but we still track its usage via an incrementing integer.

All right, so we now have two core pieces of functionality in place. We can track when a resource is directly updated, and we can also track when another resource is updated that the first one is dependent on in order to rerender both of them.

(I’ll leave out all of the primary file watcher logic which matches file paths with resources or other data structures in the first place and handles all the queue processing because it’s quite complex. You can look at it here.)

Instead, let’s turn our attention to yet another use case: you’ve just updated the template of a component (say, a site-wide page header). How could Bridgetown possibly know which resources (in this instance, probably all of them!) need to be rerendered? Well, the solution is to use signal tracking when rendering components!

Here’s the method which runs when a component is rendered. If fast refresh is enabled, we create or reuse an incrementing integer signal cached using a stable id (the location of the component source file), and then “subscribe” the effect that’s in the process of executing to that signal.

# bridgetown-core/lib/bridgetown-core/component.rb
def render_in(view_context, &block)
  @view_context = view_context
  @_content_block = block

  if render?
    if helpers.site.config.fast_refresh
      signal = helpers.site.tmp_cache["comp-signal:#{self.class.source_location}"] ||=
        Signalize.signal(1)
      # subscribe so resources are attached to this component within effect
      signal.value
    end
    before_render
    template
  else
    ""
  end

  # and some other stuff…
end
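The `||=` caching trick above is worth calling out: because the cache key is a stable id (the component’s source file path), every render of the same component touches the same signal object. A minimal sketch, with a plain Hash and Struct standing in for `site.tmp_cache` and a real signal:

```ruby
# Sketch of the "stable id => cached signal" pattern.
tmp_cache = {}

signal_for = lambda do |source_location|
  # First render creates the signal; later renders reuse it.
  tmp_cache["comp-signal:#{source_location}"] ||= Struct.new(:value).new(1)
end

a = signal_for.call("src/_components/header.rb")
b = signal_for.call("src/_components/header.rb")

a.equal?(b) # => true — every render subscribes to the very same signal,
            # so a single increment later reaches all of its subscribers
```

That object identity is what lets one increment fan out to every resource that ever rendered the component.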

Later on, when it’s time to determine which type of file has just changed on disk, we loop through component paths, and if we find a match, we increment the corresponding cached signal.

# bridgetown-core/lib/bridgetown-core/concerns/site/fast_refreshable.rb
def locate_components_for_fast_refresh(path)
  comp = Bridgetown::Component.descendants.find do |item|
    item.component_template_path == path || item.source_location == path
  rescue StandardError
  end
  return unless comp

  tmp_cache["comp-signal:#{comp.source_location}"]&.value += 1

  # and some other stuff…
end

So now, any time a component changes, the resources which had previously rendered that component will get marked for fast refresh and thus rerendered. (We do a similar thing for template partials as well.)
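Pulling the two halves together, here’s a toy model (again, not the real Signalize internals) of the mechanism underneath: reading a signal’s value while an effect is running subscribes that effect, so a later write reruns it.

```ruby
# Toy model of effect auto-subscription (hypothetical, simplified).
$current_effect = nil

class TrackedSignal
  def initialize(value)
    @value = value
    @effects = []
  end

  def value
    # Reading the value inside a running effect subscribes that effect.
    if $current_effect && !@effects.include?($current_effect)
      @effects << $current_effect
    end
    @value
  end

  def value=(new_value)
    @value = new_value
    @effects.each(&:call) # writes rerun every subscribed effect
  end
end

def effect(&block)
  $current_effect = block
  block.call # first run registers dependencies
ensure
  $current_effect = nil
end

header_signal = TrackedSignal.new(1)
renders = []

effect do
  renders << "rendered header v#{header_signal.value}"
end

header_signal.value += 1 # "component changed" — the effect reruns
renders.last # => "rendered header v2"
```

This is the self-assembling dependency graph in miniature: nobody declares “this resource uses that component”; the subscription happens as a side effect of rendering.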

Create Your Own Signals

There’s so much more we could go over, but I’ll mention one other cool addition to the system. Bridgetown offers the concept of a “site-wide” data object, which you can think of as global state. Site data (accessed via site.data, naturally) can come from files in the src/_data folder (.csv, .yaml, .json, etc.), but it can also be provided by code which runs at the start of a site build via a Builder.

Bridgetown 2.0’s fast refresh made it necessary for even site data to be reactive, so that’s exactly what we did using a special feature of the Signalize gem: Signalize::Struct (with some Bridgetown-specific enhancements layered in to make it feel more Hash-like).

In a nutshell, you can now set global data with site.signals.some_value = 123 and read that later with site.signals.some_value. In any template for a resource, a component, whatever, if you read in that value you’ll make that template dependent on the signal value. So in the future, when that signal changes for any reason, your template(s) will get rerendered to display the new value.

Bridgetown uses this internally for “metadata” (aka site title, tagline, etc.) so templates can get refreshed if you update the metadata, and who knows what use cases might be unlocked by this feature in the future? For example, you could spin up a thread and poll an external API such as a CMS every few seconds, and once you detect a new changeset, update a signal and get your site fast refreshed with the API’s new content.
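Here’s a stripped-down sketch of that polling idea. Everything here is hypothetical stand-in code: the `latest_changeset` lambda plays the role of a real CMS API call, and a bare Struct plays the role of the signal you’d actually set via site.signals.

```ruby
# Hypothetical sketch: a background thread checks an external source
# and bumps a signal when it sees a new changeset.
content_signal = Struct.new(:value).new(0)
last_seen = nil
latest_changeset = -> { "abc123" } # stand-in for a real CMS API call

poller = Thread.new do
  current = latest_changeset.call
  if current != last_seen
    last_seen = current
    content_signal.value += 1 # dependent templates get fast refreshed
  end
end
poller.join

content_signal.value # => 1
```

A real poller would wrap the check in `loop do … sleep 5 … end` rather than running once; the point is simply that one signal write is all it takes to push fresh content through the fast refresh machinery.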

Fast Refresh Edge Cases

As anyone who has worked on incremental regeneration for a static site generator can tell you, the devil’s in the details. There are so many edge cases which can make it seem like the site is “broken” — aka you update a piece of data over here, and then view some page over there and wonder why nothing got updated. 🧐

Some solutions have come in the form of elaborate JavaScript frontend frameworks which require complex data pipelines and GraphQL and TypeScript and static analysis and Hot Module Reload and an ever-growing string of buzzwords…and even then, performance in other areas can suffer such as on first build or when accessing various resources for the first time.

Bridgetown will no doubt ship its v2 with a few remaining edge cases, but I’m feeling confident we’ve dealt with most of the low-hanging fruit. I’ve been using alpha and beta versions of Bridgetown 2.0 in production on my own projects, and by now I’m so accustomed to fast refresh that I’m virtually never waiting for my browser to display updated content or UI. I’ve all but forgotten the bad old days when we didn’t have this feature!

It was (and is) complicated to build, but I’m sure it would have been even harder and more byzantine if we’d needed to architect the feature from scratch. By leveraging the capabilities afforded by the Signalize gem and making it possible for dependency graphs to self-assemble based on how developers have structured their application code and site content, we now have a solid foundation for this major performance boost and can refactor bit by bit as issues and fixes arise.

Bridgetown 2.0 is currently in beta and slated for final release before the end of the year. If you’re looking to develop a new website or modest web application using Ruby, check it out!



Episode 11: Designing Your API for Their API (Yo Dawg!)

It’s tempting to take the simplistic approach of writing “to the framework” or to the external API directly in the places where you need to interface with those resources, but it’s often a much better approach to create your own abstraction layer. This layer sits between your high-level business logic or request/response handling and the low-level APIs you need to call, which means you can define an API that is clean and makes sense for your application. Then you can get messy down in the guts of the layer, or even swap out one external API for another. I explore all this and more in another rousing episode of Fullstack Ruby.
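To illustrate the idea, here’s a hypothetical sketch of such a layer. `RemoteCrm` stands in for whatever third-party client you actually use; `ContactGateway` is the app-facing abstraction, with names and return shapes chosen for your application rather than theirs.

```ruby
# Stand-in for an external API client with awkward, vendor-shaped output.
class RemoteCrm
  def fetch_person_record(opts)
    { "fullName" => opts[:q], "emailAddr" => "ada@example.com" }
  end
end

# The abstraction layer: business logic talks to this, never to RemoteCrm.
class ContactGateway
  Contact = Struct.new(:name, :email)

  def initialize(client: RemoteCrm.new)
    @client = client
  end

  # Clean, app-shaped API; the vendor's quirks stay inside this method.
  def find_contact(name)
    raw = @client.fetch_person_record(q: name)
    Contact.new(raw["fullName"], raw["emailAddr"])
  end
end

contact = ContactGateway.new.find_contact("Ada")
contact.email # => "ada@example.com"
```

Swapping providers later means writing a new client that satisfies the same `fetch`-style contract and injecting it into the gateway; the callers of `find_contact` never change.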
