
Sunsetting the Fullstack Ruby Podcast (and What I’m Doing Instead)

I always hate writing posts like this, which is why I rarely do it and tend to let content destinations linger on the interwebs indefinitely.

But I’m in the midst of some spring/summer cleaning regarding all things content creation, so I figured it’s best to be upfront about these things and give folks a heads up on what I’m currently working on.

TL;DR: I’m bidding the Fullstack Ruby podcast a bittersweet farewell and gearing up to launch a new podcast centered on current events in the software & internet technology space, because we’ve reached a crisis point and the future of the open web is more fragile than ever.


Here’s the truth. There’s a lot that’s fucked up about Big Tech and software development right now. Pardon my language, but I have struggled mightily with burnout for going on two years now; not because I don’t like writing software (oddly enough, I care as much about the actual work I do as a developer as I ever have!), but because I don’t like the software industry. ☹️

And sadly, I have been particularly disappointed with what’s going on with Ruby. I don’t want to rehash past consternation (you can read about my attempt to fully reboot Fullstack Ruby a year ago for more background, and listen to the followup podcast episode). Here’s the summary:

There are two devastating downward pressures on the software industry right now: the unholy alliance of far-right bigotry/propaganda & Big Tech, and the atomic bomb-level threat to the open web that is Generative AI. And the crazy part is, there’s actually a cultural connection between fascism and genAI so in a sense, these aren’t two separate problems. They’re the same problem.

Unfortunately, the Ruby community taken as a whole has done NOTHING to fight these problems. Certain individuals have, yes. Good for them. It’s not moving the needle though.

Ruby has already suffered in recent years from the brain-drain problem and a lack of mainstream awareness for new project development. I have always felt that problem alone is one we can surmount. But when you pile on top of that the fascism problem (personified in the founder & figurehead of Ruby on Rails, DHH) and the AI problem (which major voices in the Ruby space have not only failed to combat but are actively advocating for and encouraging genAI use), I find it increasingly difficult to remain an active cheerleader going forward.

Don’t get me wrong, I’m not saying it’s any better per se in other programming language communities. But if I have to deal with fighting off fascism and the evils of Big Tech on a daily basis, I might as well be writing JavaScript while I’m doing it. JavaScript is the lingua franca of the web. It just is. And it’s already what we use for frontend, out of technical necessity.

I still prefer writing Ruby on the backend. I do, I really do! And I’m not sure yet I’m ready to give that up, even now. But it does mean I find my enthusiasm for talking about Ruby and recommending it to others fading into the background. Damn.


I haven’t yet decided what the ultimate fate of this blog is. But I do know I’m ready to scale my ambitions way down in this particular community, so to start, I must bid the podcast farewell. 🫡

As I alluded to above, I’m actually gearing up to launch a brand new podcast! You may be interested in it, you may not. Regardless, the easiest way to stay notified on the launch is to follow me on Mastodon and subscribe to the Cycles Hyped No More newsletter (the podcast will be a companion product, if you will, to that newsletter).

It should come as no surprise by now that the purpose of the new podcast is to take the twin perils of fascism-adjacent Big Tech and genAI head on. I will be speaking about this early and often, for as long as it takes for people to wake up and realize the open web is under assault. I don’t mean to sound unnecessarily dramatic, but while we’re over here arguing about programming languages and coding patterns and architectural choices for our web apps, the very web itself is getting pillaged and dismantled brick by brick by hostile forces.

I’m not going to let that happen without a fight.

I hope you’re interested in joining me in this fight. Stay tuned.

–Jared ✌️



Finding My Happy Place with Hanami and Serbea Templates

It sure seems like the Hanami web framework has been in the news lately, most notably the announcement that Mike Perham of Sidekiq fame has provided a $10,000 grant to Hanami to keep building off the success of version 2.2. I also deeply appreciate Hanami’s commitment to fostering a welcoming and inclusive community.

Thus I figured it was high time I took Hanami for a spin, so after running gem install hanami, along with a few setup commands and a few files to edit, I had a working Ruby-powered website running with Hanami! Yay!! 🎉

But then I started to miss the familiar comforts of Serbea, a Ruby template language based on ERB but with a few extra tricks up its sleeve to make it feel more like “brace-style” template languages such as Liquid, Nunjucks, Twig, Jinja, Mustache, etc. I’ve been using Serbea on nearly all of my Bridgetown sites as well as a substantial Rails client project, so it’s second nature to write my HTML in this syntax. (Plus, y’know, Serbea is a gem I wrote. 😋)

After feeling sad for a moment, it occurred to me that I’d read that Hanami—like many Ruby frameworks—uses Tilt under the hood to load templates. Ah ha! Serbea is also built on top of Tilt, so it shouldn’t be too difficult to get things working. One small hurdle I knew I’d have to overcome is that I don’t auto-register Serbea as a handler for “.serb” files, as Serbea requires a mixin for its “pipeline” syntax support as an initial setup step. So I’d need to figure out where that registration should go and how to apply the mixin.

Turns out, there was a pretty straightforward solution. (Thanks Hanami!) I found that templates are rendered within a Scope object, and while a new Hanami application doesn’t include a dedicated “base scope” class out of the box, it’s very easy to create one. Here’s what mine looks like with the relevant Serbea setup code:

# this file is located at app/views/scope.rb

require "serbea"

Tilt.register Tilt::SerbeaTemplate, "serb"

module YayHanami
  module Views
    class Scope < Hanami::View::Scope
      include Serbea::Helpers
    end
  end
end

Just be sure to replace YayHanami with your application or slice constant. That and a bundle add serbea should be all that’s required to get Serbea up and running!

Once this was all in place, I was able to convert my .html.erb templates to .html.serb. I don’t have anything whiz-bang to show off yet, but for your edification here’s one of Hanami’s ERB examples rewritten in Serbea:

<h1>What's on the Bookshelf</h1>
<ul>
  {% books.each do |book| %}
    <li>{{ book.title }}</li>
  {% end %}
</ul>

<h2>Don't miss these best selling titles</h2>
<ul>
  {% best_sellers.each do |book| %}
    <li>{{ book.title }}</li>
  {% end %}
</ul>

This may not look super thrilling, but imagine you wanted to write a helper that automatically creates a search link for a book title and author to a service like BookWyrm. You could add a method to your Scope class like so:

def bookwyrm(input, author:)
  "<a href='https://bookwyrm.social/search?q=#{escape(input)} #{escape(author)}'>#{escape(input)}</a>".html_safe
end

and then use it filter-style in the template:

<li>{{ book.title | bookwyrm: author: book.author }}</li>

I like this much more than ERB, where helpers are placed before the data they’re acting upon, which to me feels like a logical inversion:

<li><%= bookwyrm(book.title, author: book.author) %></li>

Hmm. 🤨

Anyway, I’m totally jazzed that I got Hanami and Serbea playing nicely together, and I can’t wait to see what I might try building next in Hanami! This will be an ongoing series here on Fullstack Ruby (loosely titled “Jared Tries to Do Unusual Things with Hanami”), so make sure that you follow us on Mastodon and subscribe to the newsletter to keep abreast of further developments.



A Casual Conversation with KOW (Karl Oscar Weber) on Camping, Open Source Politics, and More

This is a right humdinger of an episode of Fullstack Ruby! I got the chance to talk with Karl Oscar Weber all about the Camping web framework, as well as his Grilled Cheese livestream, working as a freelancer, and how to criticize by creating as a programmer in a world fraught with political upheaval. Great craic, as the Irish say.



Tired of Dealing with Arguments? Just Forward Them Anonymously!

I don’t know about you, but after a while, I just get tired of the same arguments. Wouldn’t it be great if I could simply forward them instead? Let somebody else handle those arguments!

OK I kid, I kid…but it’s definitely true that argument forwarding is an important aspect of API design in Ruby, and anonymous argument forwarding is a pretty awesome feature of recent versions of Ruby.

Let’s first step through a history of argument forwarding in Ruby.



The Era Before Keyword Arguments #

In the days before Ruby 2.0, Ruby didn’t actually have a language construct for what we call keyword arguments at the method definition level. All we had were positional arguments. So to “simulate” keyword arguments, you could call a method with what looked like keyword arguments (really, it was akin to Hash syntax), and all those key/value pairs would be added to a Hash argument at the end.

def method_with_hash(a, b, c = {})
  puts a, b, c
end

method_with_hash(1, 2, hello: "world", numbers: 123)

Run in Try Ruby

Fun fact, you can still do this even today! But it’s not recommended. Instead, we were graced with true language-level keyword arguments in Ruby 2. To build on the above:

def method_with_kwargs(a, b, hello:, **kwargs)
  puts a, b, hello, kwargs
end

method_with_kwargs(1, 2, hello: "world", numbers: 123)

Run in Try Ruby

Here we’re specifying hello as a true keyword argument, but also allowing additional keyword arguments to get passed in via an argument splat.

Back to the past though. When we just had positional arguments, it was “easy” to forward arguments because there was only one type of argument:

def pass_it_along(*args)
  you_take_care_of_it(*args)
end

def pass_the_block_too(*args, &block) # if you want to forward a block
  do_all_the_things(*args, &block)
end

There is also a way to say “ignore all arguments, I don’t need them” which is handy when a subclass wants to override a superclass method and really doesn’t care about arguments for some reason:

def ignore_the_args(*)
  bye_bye!
end

The Messy Middle #

Things became complicated when we first got keyword arguments, because the question becomes: when you forward arguments the traditional way, do you get real keyword arguments forwarded as well, or do you just get a boring Hash?

For the life of Ruby 2, this worked one way, and then we got a big change in Ruby 3 (and really it took a few iterations before a fully clean break).

In Ruby 2, forwarding positional arguments only would automatically convert keywords over to keyword arguments in the receiving method:

def pass_it_along(*args)
  you_take_care_of_it(*args)
end

def you_take_care_of_it(*args, abc: 0)
  puts "#{args} #{abc}"
end

pass_it_along("hello", abc: 123) # ["hello"] 123

However, in the Ruby of today, this works differently. There’s a special ruby2_keywords method decorator that lets you simulate how things used to be, but it’s well past its sell-by date. What you should do instead is forward keyword arguments separately:

def pass_it_along(*args, **kwargs)
  you_take_care_of_it(*args, **kwargs)
end

def you_take_care_of_it(*args, abc: 0)
  puts "#{args} #{abc}"
end

pass_it_along("hello", abc: 123) # ["hello"] 123

But…by the time you also add in block forwarding, this really starts to look messy. And as Rubyists, who likes messy?

def pass_it_along(*args, **kwargs, &block)
  you_take_care_of_it(*args, **kwargs, &block) # Ugh!
end

Thankfully, we have a few syntactic sugar options available to use, some rather recent. Let’s take a look!

Give Me that Sweet, Sweet Sugar #

The first thing you can do is use triple-dots notation, which we’ve had since Ruby 2.7:

def pass_it_along(...)
  you_take_care_of_it(...)
end

def you_take_care_of_it(*args, **kwargs, &block)
  puts [args, kwargs, block.()]
end

pass_it_along("hello", abc: 123) { "I'm not a blockhead!" }

Run in Try Ruby

This did limit the ability to add anything extra in method definitions or invocations, but since Ruby 3.0 you can prefix with positional arguments if you wish:

def pass_it_along(str, ...)
  you_take_care_of_it(str.upcase, ...)
end

def you_take_care_of_it(*args, **kwargs, &block)
  puts [args, kwargs, block.()]
end

pass_it_along("hello", abc: 123) { "I'm not a blockhead!" }

Run in Try Ruby

However, for more precise control over what you’re forwarding, first Ruby 3.1 gave us an “anonymous” block operator:

def block_party(&)
  lets_party(&)
end

def lets_party
  "Oh yeah, #{yield}!"
end

block_party { "baby" }

Run in Try Ruby

And then Ruby 3.2 gave us anonymous positional and keyword forwarding as well:

def pass_it_along(*, **)
  you_take_care_of_it(*, **)
end

def you_take_care_of_it(*args, abc: 0)
  puts "#{args} #{abc}"
end

pass_it_along("hello", abc: 123)

Run in Try Ruby

So at this point, you can mix ‘n’ match all of those anonymous operators however you see fit.
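
For example, here’s a quick sketch (Ruby 3.2+) combining all three anonymous operators at once, in the same style as the examples above:

def pass_everything_along(*, **, &)
  you_take_care_of_it(*, **, &)
end

def you_take_care_of_it(*args, **kwargs, &block)
  puts [args, kwargs, block.()]
end

pass_everything_along("hello", abc: 123) { "I'm still not a blockhead!" }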

The reason you’d still want to use syntax like *args, **kwargs, or &block in a method definition is if you need to do something with those values before forwarding them, or in some metaprogramming cases. Otherwise, using anonymous arguments (or just a basic ...) is likely the best solution going, uh, forward. 😎



Do You Need More Advanced Delegation? #

There are also higher-level constructs available in Ruby to forward, or delegate, logic to other objects:

The Forwardable module is a stdlib mixin which lets you specify one or more methods to forward using class methods def_delegator or def_delegators.

The Delegator class (most often used via its concrete subclass SimpleDelegator) is part of the stdlib and lets you wrap another object and then add on some additional features.
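
Here’s a quick sketch of both in action (the Playlist and LoudTrack classes are just made up for illustration):

require "forwardable"
require "delegate"

class Playlist
  extend Forwardable
  # forward these calls straight to the underlying @tracks array
  def_delegators :@tracks, :size, :each, :<<

  def initialize
    @tracks = []
  end
end

# SimpleDelegator wraps any object and lets you layer extra behavior on top
class LoudTrack < SimpleDelegator
  def shouted_title
    "#{title.upcase}!!!"
  end
end

playlist = Playlist.new
playlist << LoudTrack.new(Struct.new(:title).new("Take Five"))
playlist.size                                       # => 1
playlist.each { |track| puts track.shouted_title }  # prints "TAKE FIVE!!!"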

So depending on your needs, it may make more sense to rely on those additional stdlib features rather than handle argument forwarding yourself at the syntax level.

No matter what though, it’s clear we have many good options for defining an API where one part of the system can hand logic off to another part of the system. This isn’t perhaps as common when you’re writing application-level code, but if you’re working on a gem or a framework, it can come up quite often. It’s nice to know that what was once rather cumbersome is now more streamlined in recent releases of Ruby.



Dissecting Bridgetown 2.0’s Signalize-based Fast Refresh

As the lead maintainer of the Bridgetown web framework, I get to work on interesting (and sometimes very thorny!) Ruby problems which veer from what is typical for individual application projects.

With version 2 of Bridgetown about to drop, I’m starting a series of articles regarding intriguing aspects of the framework’s internals. This time around, we’re taking a close look at one of the marquee features: Fast Refresh.

The Feedback Loop #

Bridgetown is billed as a “progressive site generator” which offers a “hybrid” architecture for application deployments. What all this jargon means is that you can have both statically-generated content which is output as final HTML and other files to a destination folder, and dynamically-served routes which offer the typical request/response cycle you see in traditional web applications.

When it comes to the common development feedback loop of save-and-reload, traditional web applications are fairly straightforward. You make a change to some bit of code or content, you reload your browser tab which makes a new request to the application server, and BOOM! You’re refreshed.

But what about in a static site? You make a change, and suddenly the question becomes: which HTML files need to be regenerated? And what if your change isn’t in a specific page template or blog post or whatever, but some shared template or model or even another page that’s referenced by the one you’re trying to look at? Suddenly you’re talking about the possibility that your change might require regenerating only one literal .html file…or thousands. As the saying goes, it depends.

Prior to the fast refresh feature, Bridgetown regenerated an entire website on every change. You fix a typo in a single Markdown file…entire site regenerated. You update a logo URL in a site layout header…entire site regenerated. This may sound like a slow and laborious process, but on most sites of modest size, complete regeneration is only a second or two. Not that big of a deal, right?

And yet…some sites definitely grow beyond modest size. On my personal blog, jaredwhite.com, the number of resources (posts, podcast episodes, photos, etc.) plus the number of generated pages (tag archives, category archives, etc.) has reached around 1,000 at this point, with no signs of stopping. What used to be measured in milliseconds is now measured in seconds on a full build—and while that’s perfectly reasonable in production when a site’s deploying, it stinks when you’re talking about that save-and-reload process in development.

Hence the need for a new approach. Some frameworks call this “incremental regeneration”, but I think “fast refresh” sounds cooler. Bridgetown has already had a live reload feature since its inception—aka, you don’t need to manually go to your browser and reload the page, the framework does it for you. But now with fast refresh enabled, your browser reloads almost instantly! It’s so fast, sometimes by the time I get back to my browser window, the change has already appeared. What a huge quality of life improvement! DX at its finest.

But how did we pull off such a feat? How do we know which .html files need to be regenerated? Is it ✨ magic ✨? The power of AI?

Nope. Just some good ol’ fashioned dependency-tracking via linked lists and closures…aka Signals. What the what? Let’s dive in.

I thought “signals” was a frontend thing. Why does Ruby need them? #

I’ve talked a lot about signals before here on Fullstack Ruby so I won’t go into the whole rationale again. Suffice it to say, if you need to establish any sort of dependency graph such that when one piece of data over here changes, you need to be notified so you can update another piece of data over there, the signals paradigm is a compelling way to do it. At first glance it looks a lot like the “observables” pattern, but where observables require a manual opt-in process (you as the developer need to indicate which bit of data you’d like to observe), signals do this automatically. When you write an “effect” closure (in JavaScript called a function, in Ruby called a proc) and access any signals-based data within that closure, a dependency is created between the effect and that signal (tracked using linked lists under the hood). Any time in the future some of that data changes, because the effect is dependent on the data, it is executed again. This automatic re-run functionality is what makes signals feel like ✨ magic ✨.

Some signals are “computed”—meaning you write a special closure to access one or more other signals, perform some calculation, and return a result. Computed signals update “lazily”—in other words, the calculation is only performed at the point the result is required. Under the hood, a computed signal is built out of an effect, which means when you write your own effects to access the values of computed signals, effects are dependent on other effects. Again, it can feel like ✨ magic ✨ until you understand how it works.
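
If you’ve only ever seen signals in JavaScript, here’s a minimal sketch of the three building blocks using the Signalize gem in plain Ruby:

require "signalize"

counter = Signalize.signal(0)
doubled = Signalize.computed { counter.value * 2 }

dispose = Signalize.effect do
  # reading counter.value / doubled.value here subscribes this effect to them
  puts "counter: #{counter.value}, doubled: #{doubled.value}"
end
# prints "counter: 0, doubled: 0" immediately

counter.value = 5
# the effect reruns on its own: "counter: 5, doubled: 10"

dispose.()  # tear down the effect and its subscriptions when you're done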

Now it’s true that the signals paradigm has taken off like wildfire on the frontend as a serious solution to the state -> UI update lifecycle. You need to know which specific parts of the interface need to be rerendered based on which specific state has changed.

Hmm.

Rerendering based on changes to data. Now where have I heard that one before?

Yeah, that’s it! Sounds an awful lot like the exact problem Bridgetown faces when you modify code or content in a file. We need to know how to rerender specific parts of the interface (aka which particular .html files) based on the dependency graph of how your modified code touches various pages.

Here’s how Fast Refresh solves the problem by effectively utilizing the Signalize gem. For a framework-level overview of the feature, check out this post on the Bridgetown blog.

Transformations in Effects #

The first stop along our journey is making sure the process of transformation (aka compiling Markdown down to HTML, rendering view components, placing page content inside of a layout, etc.) is wrapped in an effect. This way, if during the course of transforming Page A there’s a reference to the “title” signal of Page B, any future change to Page B’s “title” would trigger a rerun of Page A’s transformation.

However, it’s a wee bit more complicated than that. We don’t want to perform the rerender immediately when the effect is triggered for a variety of reasons (performance, avoiding infinite loops, etc.). We instead want to mark the resource which needs to be transformed, and then later on we’ll go through all of the queued resources in a single pass in order to perform transformations.

Here’s a snippet from the Bridgetown codebase of what that looks like:

# bridgetown-core/lib/bridgetown-core/resource/base.rb
def transform!
  internal_error = nil
  @transform_effect_disposal = Signalize.effect do
    if !@fast_refresh_order && @previously_transformed
      self.content = untransformed_content
      @transformer = nil
      mark_for_fast_refresh! if site.config.fast_refresh && write?
      next
    end

    transformer.process! unless collection.data?
    slots.clear
    @previously_transformed = true
  rescue StandardError, SyntaxError => e
    internal_error = e
  end

  raise internal_error if internal_error

  self
end

There are a few things going on here, so let’s walk through it:

- The whole transformation is wrapped in a Signalize.effect block, so any signal read while rendering this resource (another resource’s data, a component’s cached signal, etc.) subscribes this effect to that signal.
- If the effect reruns later because a dependency changed (we can tell because @previously_transformed is already set and no @fast_refresh_order is in play), we don’t rerender on the spot. Instead we reset content back to its untransformed state, throw away the memoized transformer, mark the resource for fast refresh (assuming fast refresh is enabled and the resource actually gets written out), and bail out with next.
- Otherwise it’s a normal first-time render: process the transformer (unless this is a pure data collection), clear out any layout slots, and record that the transformation happened.
- Any error raised inside the effect is captured and re-raised outside of it, so an exception doesn’t leave Signalize’s dependency tracking in a bad state.

So that’s one facet of the overall process. Here’s another one: we needed to refactor resource data (front matter + content) to use signals, otherwise our effects would be useless.

Here’s a snippet showing what happens when new data is assigned to a resource:

# bridgetown-core/lib/bridgetown-core/resource/base.rb
def data=(new_data)
  mark_for_fast_refresh! if site.config.fast_refresh && write?

  Signalize.batch do
    @content_signal.value += 1
    @data.value = @data.value.merge(new_data)
  end
  @data.peek
end

First of all, we immediately mark the resource itself as ready for fast refresh. This is to handle the first-party use case where someone has made a change to a resource and we definitely want to rerender that resource…no need for a fancy dependency graph in that case!

Next, we create a batch routine to set a couple of signals: updating the data hash itself, and incrementing the “content” signal. For legacy reasons, we don’t use a signal internally to store the body content of a resource, but we still track its usage via an incrementing integer.

All right, so we now have two core pieces of functionality in place. We can track when a resource is directly updated, and we can also track when another resource is updated that the first one is dependent on in order to rerender both of them.

(I’ll leave out all of the primary file watcher logic which matches file paths with resources or other data structures in the first place and handles all the queue processing because it’s quite complex. You can look at it here.)

Instead, let’s turn our attention to yet another use case: you’ve just updated the template of a component (say, a site-wide page header). How could Bridgetown possibly know which resources (in this instance, probably all of them!) need to be rerendered? Well, the solution is to use signal tracking when rendering components!

Here’s the method which runs when a component is rendered. If fast refresh is enabled, we create or reuse an incrementing integer signal cached using a stable id (the location of the component source file), and then “subscribe” the effect that’s in the process of executing to that signal.

# bridgetown-core/lib/bridgetown-core/component.rb
def render_in(view_context, &block)
  @view_context = view_context
  @_content_block = block

  if render?
    if helpers.site.config.fast_refresh
      signal = helpers.site.tmp_cache["comp-signal:#{self.class.source_location}"] ||=
        Signalize.signal(1)
      # subscribe so resources are attached to this component within effect
      signal.value
    end
    before_render
    template
  else
    ""
  end

  # and some other stuff…
end

Later on, when it’s time to determine which type of file has just changed on disk, we loop through component paths, and if we find one, increment the corresponding cached signal.

# bridgetown-core/lib/bridgetown-core/concerns/site/fast_refreshable.rb
def locate_components_for_fast_refresh(path)
  comp = Bridgetown::Component.descendants.find do |item|
    item.component_template_path == path || item.source_location == path
  rescue StandardError
  end
  return unless comp

  tmp_cache["comp-signal:#{comp.source_location}"]&.value += 1

  # and some other stuff…
end

So now, any time a component changes, the resources which had previously rendered that component will get marked for fast refresh and thus rerendered. (We do a similar thing for template partials as well.)

Create Your Own Signals #

There’s so much more we could go over, but I’ll mention one other cool addition to the system. Bridgetown offers the concept of a “site-wide” data object, which you can think of as global state. Site data (accessed via site.data naturally) can come from specific files which get read in from the src/_data folder like .csv, .yaml, or .json, but it can also be provided by code which runs at the start of a site build via a Builder.

Bridgetown 2.0’s fast refresh necessitated making even site data reactive, so that’s exactly what we did using a special feature of the Signalize gem: Signalize::Struct (with some Bridgetown-specific enhancements layered in to make it feel more Hash-like).

In a nutshell, you can now set global data with site.signals.some_value = 123 and read that later with site.signals.some_value. In any template for a resource, a component, whatever, if you read in that value you’ll make that template dependent on the signal value. So in the future, when that signal changes for any reason, your template(s) will get rerendered to display the new value.

Bridgetown uses this internally for “metadata” (aka site title, tagline, etc.) so templates can get refreshed if you update the metadata, and who knows what use cases might be unlocked by this feature in the future? For example, you could spin up a thread and poll an external API such as a CMS every few seconds, and once you detect a new changeset, update a signal and get your site fast refreshed with the API’s new content.
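
To make that concrete, here’s a rough sketch of the polling idea (fetch_latest_changeset is a hypothetical stand-in for whatever your CMS client provides, and the 5-second interval is arbitrary):

# somewhere in your site setup code
site.signals.latest_changeset = nil

Thread.new do
  loop do
    sleep 5                             # arbitrary polling interval
    changeset = fetch_latest_changeset  # hypothetical CMS API call
    # writing a new value marks every template that read this signal for fast refresh
    site.signals.latest_changeset = changeset if changeset != site.signals.latest_changeset
  end
end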

Fast Refresh Edge Cases #

As anyone who has worked on incremental regeneration for a static site generator can tell you, the devil’s in the details. There are so many edge cases which can make it seem like the site is “broken” — aka you update a piece of data over here, and then view some page over there and wonder why nothing got updated. 🧐

Some solutions have come in the form of elaborate JavaScript frontend frameworks which require complex data pipelines and GraphQL and TypeScript and static analysis and Hot Module Reload and an ever-growing string of buzzwords…and even then, performance in other areas can suffer such as on first build or when accessing various resources for the first time.

Bridgetown will no doubt ship its v2 with a few remaining edge cases, but I’m feeling confident we’ve dealt with most of the low-hanging fruit. I’ve been using alpha and beta versions of Bridgetown 2.0 in production on my own projects, and by now I’m so used to fast refresh making it so I’m virtually never waiting for my browser to display updated content or UI that I’ve forgotten the bad old days when we didn’t have this feature!

It was (and is) complicated to build, but I’m sure it would have been even harder and more byzantine if we’d needed to architect the feature from scratch. By leveraging the capabilities afforded by the Signalize gem and making it possible for dependency graphs to self-assemble based on how developers have structured their application code and site content, we now have a solid foundation for this major performance boost and can refactor bit by bit as issues and fixes arise.

Bridgetown 2.0 is currently in beta and slated for final release before the end of the year. If you’re looking to develop a new website or modest web application using Ruby, check it out!



Episode 11: Designing Your API for Their API (Yo Dawg!)

It’s tempting to want to take the simplistic approach of writing “to the framework” or to the external API directly in the places where you need to interface with those resources, but it’s sometimes a much better approach to create your own abstraction layer. Having this layer which sits between your high-level business logic or request/response handling, and the low-level APIs you need to call, means you’ll be able to define an API which is clean and makes sense for your application…and then you can get messy down in the guts of the layer or even swap out one external API for another one. I explore all this and more in another rousing episode of Fullstack Ruby.



Top 10 Most Excellent Gems to Use in Any Ruby Web Application

The ecosystem of Ruby gems is rich with libraries to enable all sorts of useful functionality you’ll need as you write your web applications. However, at times it can be a challenge when you’re working within a broader Ruby context (aka not using Rails) to find gems which integrate well into all sorts of Ruby applications.

Occasionally you’ll come across a gem which doesn’t clearly label itself as Rails-only. In other cases, the manner in which you can use the gem outside of Rails isn’t clearly documented or there are odd limitations.

But thankfully, there are plenty of gems which are quite solid to use no matter what architecture you choose, and a few you might come across may even themselves be dependencies used by Rails because they’re reliable and battle-tested.

In this article, I’ll share with you some of my favorite gems you can use in your Ruby web apps. I have personal experience with all of them, and a few I’ve used extensively in the gems and frameworks I work on. (Note: order is mostly random.)

AmazingPrint #

It’s certainly true that the Ruby console, IRB, has seen a lot of improvements over the past few years (gotta love that syntax highlighting!). But there’s always more that can be done to make it easier to visualize complex objects and datasets, and that’s where the AmazingPrint gem comes in.

It’s easy to install and integrate into your IRB sessions, and once loaded you can gain a more comprehensive idea of what’s actually inside arrays, hashes, and other types of objects as you inspect variables and method output.
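
Getting it wired into IRB is just a couple of lines; a minimal sketch:

# in ~/.irbrc (or run inside an IRB session)
require "amazing_print"
AmazingPrint.irb!   # route IRB's inspection output through AmazingPrint

# you can also call it directly on any object:
ap({ name: "Ruby", versions: ["3.2", "3.3", "3.4"] })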

AmazingPrint is loaded into the Bridgetown framework’s console automatically, and I can definitely recommend giving it a try in your projects.

Mail #

If you’ve ever written a Rails application and used Action Mailer to send email, congratulations! You’ve used the Mail gem. Mail is indeed what powers Action Mailer under the hood—and the good news is, it actually provides a very nice API all on its own.

As the readme demonstrates, you can send simple emails with a simple DSL:

mail = Mail.new do
  from     'me@test.lindsaar.net'
  to       'you@test.lindsaar.net'
  subject  'Here is the image you wanted'
  body     File.read('body.txt')
  add_file :filename => 'somefile.png', :content => File.read('/somefile.png')
end

mail.deliver!

Setting up a configuration to transport using SMTP is straightforward, and it’s also possible to send both text and HTML-formatted email parts in just a few lines of code.
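
For instance, here’s a minimal sketch of what that SMTP configuration might look like (the host and credentials are placeholders):

Mail.defaults do
  delivery_method :smtp,
    address: "smtp.example.com",
    port: 587,
    user_name: ENV["MAIL_SMTP_USERNAME"],
    password: ENV["MAIL_SMTP_PASSWORD"],
    authentication: :plain,
    enable_starttls_auto: true
end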

Apparently Mail can also read email via POP3 or IMAP protocols, but I’ve never tried that personally. I can certainly vouch for sending email though, having done so in several Bridgetown + Roda projects. Thanks Mail!

Dotenv #

Y’know, you might want to store that sensitive SMTP username & password in environment variables your application can read in…the perfect segue to our next gem, Dotenv.

Dotenv does exactly what it sounds like. It reads in .env files and provides those values as environment variables. While you typically wouldn’t need this functionality in production, in development or local testing environments having a .env file in your project root makes a lot of sense. For example:

# in .env file
MAIL_SMTP_USERNAME=emailgobrrr
MAIL_SMTP_PASSWORD=asdfzxcv987

Then after loading Dotenv, you’ll have access to ENV["MAIL_SMTP_USERNAME"] and so on.
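
Loading it is essentially a one-liner; a quick sketch:

# near the top of your boot/config file
require "dotenv/load"   # reads .env and merges it into ENV

# or, if you want explicit control over which files load (and in what order):
# require "dotenv"
# Dotenv.load(".env.local", ".env")

smtp_username = ENV["MAIL_SMTP_USERNAME"]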

At one point in the past, I’d used a gem called Figaro which could read in YAML files and populate env vars accordingly, but development on that gem stalled. Meanwhile, Dotenv is simple and proven. And I’ve integrated this gem into the Bridgetown framework so repos can make use of it right out of the box.

Zeitwerk #

Many Ruby frameworks—and Rails of course is among them—offer automatic code loading (and reloading!). It’s an expectation that once you’ve added your Ruby files in the appropriate folder structures with a naming convention that matches filenames to class names, it all Just Works™. No need to manually require various files in designated places and keep track of the changes needed if a file gets moved, renamed, or deleted.

Zeitwerk (pronounced zight-verk) was originally developed for Rails to replace the “classic” Rails autoloader, but as a standalone library it can be used by any number of frameworks and gems (and increasingly is!). I wrote about Zeitwerk before on Fullstack Ruby as well as the broader philosophy of why Ruby doesn’t have “imports” like so many other language ecosystems.
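
If you’re curious what standalone usage looks like outside of any framework, here’s a minimal sketch (the lib folder and reloading are just example choices):

require "zeitwerk"

loader = Zeitwerk::Loader.new
loader.push_dir("#{__dir__}/lib")  # lib/my_app/widget.rb maps to MyApp::Widget
loader.enable_reloading            # optional; handy in development
loader.setup

# later, e.g. from a file watcher:
# loader.reload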

And at the risk of sounding like a broken record, the Bridgetown framework uses Zeitwerk both internally and as a code loader for application developers. It’s a fantastic library and a genuine workhorse for this very important Ruby functionality.

Ice Cube #

The ice_cube gem is for those cases—and you’d be surprised how often this can come up in application development—when you need to generate a series of dates. Every day. Every Tuesday and Friday at 1pm. The next several weekends. Just recently for example, I wanted to pull some data out of the database and display values on a month-based chart…which means I needed to generate a monthly series starting from now working backwards by n months. Perfect job for ice_cube!
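
To give you a taste, here’s a quick sketch of one of the schedules described above (every Tuesday and Friday at 1pm):

require "ice_cube"

# every Tuesday and Friday at 1pm, starting from now
schedule = IceCube::Schedule.new(Time.now) do |s|
  s.add_recurrence_rule IceCube::Rule.weekly.day(:tuesday, :friday).hour_of_day(13)
end

schedule.first(4)  # => the next four occurrences as Time objects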

There are a few additional features you gain if you’ve loaded in Active Support’s time extensions, but that’s optional. And the very Rubyesque API of ice_cube is quite enjoyable to work with. If you need to do anything at all with calendaring logic, this is the gem for you!

Nokolexbor #

In the grand tradition of software projects coming up with funny-sounding names simply because they’re bits of other names smooshed together, the Nokolexbor gem is named such because it’s a portmanteau of Nokogiri (the popular XML/HTML Ruby parser) and Lexbor (an HTML engine written in C). Nokolexbor, like its underlying engine, has a goal to be very, very fast, as well as offer a high degree of HTML5 conformance.

We’ve found good use for Nokolexbor in Bridgetown to transform output HTML in various ways after the initial markup generation (a common example is to add # symbols on hover to headings so you can copy the URL with its extra fragment for deep linking). And I’d probably reach for this over Nokogiri going forward, although if you already have Nokogiri in your project I don’t know that I’d say there’s a compelling reason to switch.
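
Since Nokolexbor intentionally mirrors Nokogiri’s API, a sketch of that heading-anchor idea might look something like this (treat the exact method surface as an assumption and double-check the gem’s docs):

require "nokolexbor"

doc = Nokolexbor::HTML(<<~HTML)
  <article>
    <h2 id="getting-started">Getting Started</h2>
    <p>Some content…</p>
  </article>
HTML

doc.css("h2[id]").each do |heading|
  # append a deep-link anchor pointing at the heading's fragment
  heading.add_child(%( <a href="##{heading["id"]}">#</a>))
end

puts doc.to_html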

Still I like the fact that Ruby has this new(ish) option available, especially as I believe DOM-like transformation of HTML server-side will become more and more common in Ruby web frameworks as traditionally client-side view techniques transition back to the server.

Concurrent Ruby #

The Concurrent Ruby gem offers a huge collection of features and data structures which provide for writing Ruby code that is, well, concurrent. What does that mean? It could mean creating a thread-safe data structure for sharing between multiple threads executing in tandem. It could mean creating “promises” — aka code blocks which run in threads and return values to the main thread. These and many more use cases are difficult to write in “vanilla Ruby” all on your own (at least in a bug-free manner!), so it’s helpful to use a library like this instead.
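
Here’s a tiny sketch of both ideas (a shared thread-safe map and a promise-style future):

require "concurrent"

# a thread-safe map you can share between threads without manual locking
results = Concurrent::Map.new

# a promise-like future: run the block on a background thread pool
future = Concurrent::Promises.future do
  results[:answer] = 42
  "finished"
end

puts future.value!     # blocks until the background work completes => "finished"
puts results[:answer]  # => 42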

See also: the Async gem which lets you schedule asynchronous tasks using fibers instead of threads.

Money #

Just like ice_cube lets you work easily with date series, the Money gem lets you work easily with currency values. You can parse strings into currency values (also requires monetize), perform math between values, exchange one currency for another (as long as an exchange rate is configured), and much more.

Values are stored internally as integers in cents, avoiding errors which can arise with floating-point arithmetic. And when you need to print out the money value, the format method makes this straightforward.
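
A small sketch of the basics (string parsing requires the companion monetize gem):

require "money"
require "monetize"

price    = Monetize.parse("$19.99")  # stored internally as 1999 cents (USD)
shipping = Money.new(4_99, "USD")    # constructor takes an integer amount in cents
total    = price + shipping

total.format                         # => "$24.98"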

Phonelib #

Dates, cash, now phone numbers! The Phonelib gem is specifically designed to let you validate phone numbers. The variety of phone number formats across countries and regions makes phone numbers uniquely difficult to work with, especially when you need to ensure you have a correct number and know how to use it in order to send text messages or automated callbacks.

Phonelib makes use of Google libphonenumber under the hood to ensure robust validation and introspection features. Trust me, this is the sort of logic you do not want to attempt to cobble together on your own!
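
Basic usage looks something like this (outputs depend on libphonenumber’s metadata for the number in question):

require "phonelib"

Phonelib.default_country = "US"  # fallback when numbers arrive in national format

phone = Phonelib.parse("+1 650-253-0000")
phone.valid?    # => true
phone.country   # => "US"
phone.e164      # => "+16502530000"
phone.national  # => "(650) 253-0000"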

IP Anonymizer #

The IP Anonymizer solves a problem you may not even realize you have. One of my own pet peeves is the default manner in which many authentication frameworks and code examples just log or store IP addresses verbatim. This is information I never actually want to capture. Knowing roughly what address a user is coming from vs. any other address for debugging purposes can be helpful, but I never need to know the exact address.

IP Anonymizer to the rescue! It can mask both IPv4 and IPv6 addresses, and as long as you’re able to configure your authentication & logging subsystems to pass IP addresses through this gem, you’ll keep that PII (Personally Identifiable Information) out of your records—at least in part.
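
The API is essentially a single call; a quick sketch (exact masking behavior follows the gem’s defaults):

require "ip_anonymizer"

IpAnonymizer.mask_ip("203.0.113.42")
# => "203.0.113.0" (last octet zeroed for IPv4)

IpAnonymizer.mask_ip("2001:db8:1234:5678::1")
# IPv6 addresses keep only their network prefix

# there's also a keyed, hashed variant if you need stable pseudonyms
# (the key should be a 32-byte secret):
IpAnonymizer.hash_ip("203.0.113.42", key: ENV["IP_HASH_KEY"])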

Bonus Round: HashWithDotAccess #

I just couldn’t keep myself to ten! So here’s an eleventh option for you, and it’s one I’ve written. The HashWithDotAccess gem is used extensively by Bridgetown, and it started out life as an enhanced version of Active Support’s HashWithIndifferentAccess before a recent rewrite to remove that dependency. Now you can use HashWithDotAccess::Hash anywhere you need a hash which provides read/write access via dot notation (aka user.first_name instead of user[:first_name]).

There have been other solutions like this out there for quite a long time, most notably Hashie, and there are also times when you’d simply want to use a Struct value (or more recently a Data value) instead. But I think having a flavor of Hash which allows for interchangeable string, symbol, and dot access is hugely valuable, and this gem tries to provide that in as performant a way as possible. (I did a lot of benchmarking as I worked on the most recent refactor, so I’m pretty confident it’s reasonably speedy.)
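
Here’s roughly what that looks like in practice (a quick sketch; check the gem’s readme for the exact constructor semantics):

require "hash_with_dot_access"

user = HashWithDotAccess::Hash.new(first_name: "Yukihiro", "role" => "creator")

user.first_name     # => "Yukihiro"
user[:first_name]   # => "Yukihiro"
user["first_name"]  # => "Yukihiro"  (string, symbol, and dot access interchange)

user.role = "BDFL"  # dot-style writes work too
user.role           # => "BDFL"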


So there you have it folks: my top 10 (okay, 11) favorite gems which are useful across many variations of Ruby web applications. Which ones have you used? What are your favorites? Do you have additional suggestions of gems to cover in a follow-up? Head on over to Mastodon and let me know your thoughts!



Expressive Class Hierarchies through Dynamically-Instantiated Support Objects

When you’re designing an abstract class for the purpose of subclassing—very common when looking at the framework/app divide—it’s tempting to want to throw a whole bunch of loosely-related functionality into that one parent class. But as we all know, that’s rarely the right approach to designing the models of your system.

So we start to reach for other tools…mixins perhaps. But while I love mixins on the app side of the divide, I’m not always a huge fan of them on the framework side. I’m not saying I won’t do it—I certainly have before—but I more often tend to consider the possibility that in fact I’m working with a cluster of related classes, where one “main” class needs to talk to a few other “support” classes which are most likely nested within the main class’ namespace.

The question then becomes: once a subclass of this abstract class gets authored, what do you do about the support classes? The naïve way would be to reference the support class constant directly. Here’s an example:

class WorkingClass
  def perform_work
    config = ConfigClass.new(self)
    
    do_stuff(strategy: config.strategy)
  end

  def do_stuff(strategy:) = "it worked! #{strategy}"

  class ConfigClass
    def initialize(working)
      @working = working
    end

    def strategy
      raise NoMethodError, "you must implement 'strategy' in concrete subclass"
    end
  end
end

Now this code would work perfectly fine…if all you need is WorkingClass alone. But since that’s simply an abstract class, and the nested ConfigClass is also an abstract class, then Houston, we have a problem.

For you see, once you’ve subclassed both, you may find to your great surprise the wrong class has been instantiated!

class WorkingHarderClass < WorkingClass
  class ConfigClass < WorkingClass::ConfigClass
    def strategy
      # a new purpose emerges
      "easy as pie!"
    end
  end
end

WorkingHarderClass.new.perform_work
# ‼️ you must implement 'strategy' in concrete subclass (NoMethodError)

Oops! 😬

Thankfully, there’s a simple way to fix this problem. All you have to do is change that one line in perform_work:

class WorkingClass
  def perform_work
    config = self.class::ConfigClass.new(self) # changed
    
    do_stuff(strategy: config.strategy)
  end
end

Courtesy of the reference to self.class, now when you run WorkingHarderClass.new.perform_work, it will instantiate the correct supporting class, call that object, and return the phrase “it worked! easy as pie!”

Note: in an earlier version of this article, I used self.class.const_get(:ConfigClass), but I received feedback (thanks Ryan Davis!) that the above is an even cleaner approach. 🧹

What’s also nice about this pattern is you can easily swap out supporting classes on a whim, perhaps as part of testing (automated suite, A/B tests, etc.)

# Save a reference to the original class:
_SavedClass = WorkingHarderClass::ConfigClass

# Try a new approach:
WorkingHarderClass::ConfigClass = Class.new(WorkingClass::ConfigClass) do
  def strategy = "another strategy!"
end

WorkingHarderClass.new.perform_work # => "it worked! another strategy!"

# Restore back to the original:
WorkingHarderClass::ConfigClass = _SavedClass

WorkingHarderClass.new.perform_work # => "it worked! easy as pie!"

This almost feels like monkey-patching, but it’s really not. You’re merely tweaking a straightforward class hierarchy and the nested constants thereof. Which, when you think about it, is actually rather cool.

Note: the code examples above are written in a simplistic fashion. In production code, I’d move the setup of config into its own method and utilize the memoization pattern. Read all about it in this Fullstack Ruby article.
