Hey everybody, I’m so glad you could tune in for the debut episode of Fullstack Ruby. I’ve been on a few Ruby-themed podcasts over the past 18 months, but this is the first time I’m running a show about Ruby myself!
To kick things off, I’d like to introduce you to Ruby2JS and explain why I think this technology is a game changer.
Ruby2JS isn’t simply an attempt to write what looks like Ruby code for your website frontend. It’s really about writing JavaScript—AS IF JavaScript had Ruby’s syntax and was inspired by Ruby’s stdlib, ActiveSupport, and the like. A “RubyScript” if you will.
Three examples I cover on today’s episode:
set_timeout
tap & yield_self
implicit self method calls within a class definition
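Here’s a rough sketch of the Ruby-flavored source you might write for each (purely illustrative; the exact JavaScript output depends on which Ruby2JS filters you enable, and the class and method names here are made up):

# 1. set_timeout: with the camelCase/functions filters, snake_case calls like
#    this are meant to map onto their JS counterparts (setTimeout)
set_timeout(1000) do
  console.log("one second later")
end

# 2. tap & yield_self: familiar Ruby chaining idioms
"fullstack ruby".yield_self { |s| s.upcase }.tap { |s| console.log(s) }

# 3. implicit self method calls within a class definition
class Greeter
  def connected_callback
    greet # no explicit receiver needed, just like Ruby
  end

  def greet
    console.log("hello from a RubyScript class!")
  end
end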
Visit the Ruby2JS website for live compilation demos, documentation on the various transformations and approaches available, and a whole lot more.
Welcome back to RUBY3.dev! Only…it’s not! Rather, a very warm welcome from Fullstack Ruby. Why the name change?
Well, a couple of reasons—the first of which is that your humble author (that’s me!) is not just a “Ruby developer” but a “web developer” as well. Yes, I’ll admit it: I don’t just write Ruby because I like assembling command line tools or crafting data processors or solving algorithmic puzzles. I like building websites. And I like building tools for building websites. I’m a web developer. It’s in my DNA.
So running a blog that’s generically about Ruby couldn’t hold my attention for too long. Thus I had to narrow the focus while simultaneously expanding it to the broader web industry.
The second reason is that today, right now, right this very minute, is the absolute best time to be a fullstack Ruby/web developer. And tomorrow will be even better! Never have we had such a robust arsenal of tools at our disposal for building sites and apps that encompass both the backend and frontend in novel and exciting ways. Let us enumerate just what’s so great about the Ruby landscape at this juncture:
Turbo: in many ways a straightforward evolution of Turbolinks, Turbo—as a cornerstone of Hotwire (aka HTML-Over-the-Wire)—brings a new layer of interactivity to the frontend which leverages the backend templates and processes you already know and love. Instead of having to write two apps (a frontend app and a backend API), you just write one app, and Turbo provides the baseplate of “glue code” for composing your frontend out of backend “parts”. Whereas fullstack web development used to be primarily a “page-based” notion, it’s now fully modular. Turbo even works on static sites! Whoa.
StimulusReflex & CableReady/CableCar: StimulusReflex has taken the Rails world by storm as a launching pad for “reactive” programming which leverages WebSockets for fast two-way communications and broadcasts. It utilizes Stimulus (also part of Hotwire) as well as CableReady, a lower-level fullstack toolkit for generating and performing dynamic DOM operations. Of personal interest to me is CableCar, a feature currently in beta which lets you build and execute CableReady operations via any standard request/response. Paired with mrujs, a new swiss-army-knife library by Konnor Rogers, it makes advanced Ruby-based form handling a breeze.
Ruby2JS: what if I told you…you could write Ruby for the frontend, not just the backend? 🤯 That’s the promise of Ruby2JS. It’s not Opal—it doesn’t ship a veritable Ruby runtime to your browser. (Though Opal is very, very cool in its own right and in fact powers Ruby2JS’ pure-Node compiler implementation.) Rather, Ruby2JS allows you to write clean, modern ESM-flavored frontend code via a Ruby syntax and many Ruby idioms (enabled by configurable “filters”). And it now sports a sweet, sweet Lit component filter which I use heavily. To underscore just how real this is, I use Rubocop to lint all my Ruby2JS files. And the output? Looks 99% like concise, hand-written JavaScript with no compromises. Works with Webpack, Snowpack, Vite, and—soon—esbuild. Boom. 💥
Serbea: after literally decades of Ruby’s most popular template language, ERB, remaining entirely unchanged, Serbea is an exciting new take created by yours truly. It combines ERB’s power & flexibility with the expressiveness of handlebar-style languages like Nunjucks or Liquid, and it offers a native directive for rendering view components. I use it on all my projects these days—yes, even in Rails—and can’t imagine ever going back to plain ERB.
Bridgetown: sure, I’m extremely biased. What can I say? As lead maintainer of Bridgetown, I believe it’s the best platform upon which to build public-facing websites. By taking full advantage of the power of Ruby, and combining it with nearly all of the next-gen techniques enumerated above, you can create sites which start out as blogs, landing pages, portfolios, stores, educational resources, etc.—then grow into fullstack applications with authentication, paywalls, payment processing, headless CMS integrations with live previews, and more. We’re still in the alpha days of what I call the DREAMstack (Delightful Ruby Expressing APIs & Markup), but everything listed above is under active development. Come 2022, this dream will officially turn into reality.
So that’s the primary goal of the Fullstack Ruby blog going forward: to talk at length and in depth about all of the above futuristic technologies. And not just here on the blog, but on a new podcast as well entitled—shocker I know—Fullstack Ruby. 😅 Keep an eye out for the first teaser episode in early December.
So if that’s the primary goal, what’s the secondary goal? To help introduce backend-focused Rubyists to some of the exciting new browser developments they may not be familiar with. Advancements in CSS and JavaScript. New APIs. New client/server architectures. Something I’ve discovered in talking with various long-time Ruby developers is that some have thrown the baby out with the bathwater. By rightly eschewing the madness of JS frontend frameworks/tooling run amok, they’ve also limited their knowledge of what is genuinely cutting-edge and useful on the frontend. For example, it’s fine if you opine “gee, heavy-duty React development seems like a PITA!” But if in the process you also ignore custom elements/shadow DOM, libraries like Lit, CSS variables, animations, and other techniques for building live, reactive frontend components, you’re cutting off your nose to spite your face. Not everything can fit cleanly into a Turbo/CableReady pipeline, or even a Stimulus controller. Sometimes, you just need to embrace “vanilla” JS & CSS. It’s OK. You can do it—and maintain your sanity! 😌
Finally, our third goal here at Fullstack Ruby is to introduce JavaScript developers to Ruby. We can shout all day from the rooftops how much we love Ruby and think it’s expressive and delightful—plus MINASWAN and all that—but if a JS dev who’s written some APIs in Node Express and assembled some pages with Next.js has no idea what we’re talking about or why—or how it’s relevant to their career—the #Ruby #WebDev community won’t grow. It’s as simple as that. So let’s take a moment out of our day to respectfully showcase to our fellow JS devs what is so appealing about Ruby, about the ecosystem, and about the community. Not in a spirit of competition, but in a spirit of collaboration. We’re ultimately all in the same boat: building great websites and applications. A polyglot web is a stronger web, a better web.
As a core member of the Bridgetown project, I realize I’m biased. I think every Rubyist who works on or even near the web should take a look—especially anyone who has current or past experience using Jekyll. But today’s post isn’t about Bridgetown per se but about how the next big release, v0.21 “Broughton Beach” (currently in beta and due out in late May), provides an intriguing new environment for teaching and learning Ruby and trying out new tools in the Ruby ecosystem.
One of the new features in Broughton Beach which is germane to this discussion is the ability to write web pages in pure Ruby. Previously, you could write a webpage in a template language such as Liquid, ERB, Haml, etc., similar to other Ruby frameworks like Rails.
Wait, I hear you say. Isn’t ERB just Ruby inside the <% %> delimiters?
Sure, it is. But you usually don’t see people writing an entire Ruby script in an ERB file. It’s mainly intended for first authoring the raw text of the template and then sprinkling bits of Ruby into it.
What’s changed in v0.21 is you can now add a page, or a layout, or a data file, using nothing more than .rb. Basically you can write any Ruby code you want, and the value returned at the end of the file becomes the content of the page. So you can build up web page markup using string concatenation, fancy DSLs, transformations of incoming data, the whole nine yards. And you can add methods and inner classes and anything else you need to accomplish your objective.
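To make that concrete, here’s a minimal sketch of the idea (the filename, helper method, and markup are hypothetical, not the demo repo’s exact code):

# src/hello.rb

def shout(str)
  "#{str.upcase}!"
end

heading = shout("hello from a Ruby page")

<<~HTML
  <h1>#{heading}</h1>
  <p>This entire page was generated by plain Ruby code.</p>
HTML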
Feel free to fork the repo and take it for a spin! The only top-level files needed are the typical Gemfile/Gemfile.lock pair, and the bridgetown.config.yml file loaded by Bridgetown. Everything else goes in src. Let’s see what we have inside:
In _data/site_metadata.rb, we return a simple hash of key/value pairs we can access in any page via site.metadata. Only title is defined here but you can add any site-wide values you like.
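For instance, a sketch of what such a data file might contain:

# src/_data/site_metadata.rb
{
  title: "My Ruby-Powered Site" # add tagline:, author:, etc. as you see fit
}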
In _layouts/default.rb, we define a simple HTML wrapper that can be used for any page on the site. First we obtain the logo SVG we’ll use for our site-wide header. Next, we define a small stylesheet we’ll inject into the HTML head using a style tag. Then, we return the HTML itself using heredoc string interpolation to add in a few variables.
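A rough sketch of that structure (the paths and helper names here are illustrative; consult the demo repo for the real thing):

# src/_layouts/default.rb

logo_svg = File.read("src/images/logo.svg") # hypothetical path to the logo

style = <<~CSS
  body { max-width: 65ch; margin: 2rem auto; font-family: sans-serif; }
CSS

<<~HTML
  <!doctype html>
  <html>
    <head>
      <title>#{site.metadata.title}</title>
      <style>#{style}</style>
    </head>
    <body>
      <header>#{logo_svg}</header>
      <main>#{content}</main>
    </body>
  </html>
HTML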
index.md is where things really get interesting. First we define a block of front matter using Ruby rather than YAML. (Note, we use the ###ruby … ### formatting in order for the front matter to get extracted and parsed separately at an earlier time than the rest of the page.) It references our default layout and sets the page title. Then in the body of the page, we create a few helper methods and values and begin constructing the page content using our home-grown DSL. Finally we return the @output of the page produced by our helper methods.
Is this the right way to build a Bridgetown site? 🤔 Well I certainly wouldn’t recommend shipping it to production! 😅 The point isn’t if you should use any of these techniques to build a website—rather that you can if you want to. (Just keep in mind that meme about scientists getting so preoccupied…)
Because you can, this becomes a compelling way to teach or to learn Ruby in the guise of building a website. Try out new techniques, new syntax, new parts of the standard library, new gems…the sky’s the limit! In the past, I might write one-off Ruby scripts and execute them on the command line, or maybe fiddle around in IRB. But now, with Bridgetown 0.21, I can actually maintain an experimental website full of pages which house various tips & tricks of Ruby programming I’ve picked up. Git init a repo, deploy it in mere minutes on Render, and we’re all set!
Want to get really fancy? Add the method_source gem to your project, and then inside a Ruby page you can grab a string representation of a proc or a method in the page and use that to output the source code to the webpage itself. Mind blown! 🤯
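A sketch of what that might look like inside a Ruby page (fancy_helper is a made-up method):

require "method_source"

def fancy_helper
  [1, 2, 3].sum * 14
end

<<~HTML
  <p>The answer is #{fancy_helper}, and here's the code that produced it:</p>
  <pre><code>#{method(:fancy_helper).source}</code></pre>
HTML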
Another thing you can do (even if your pages use traditional ERB or another template language) is use the src/_data folder to drop .rb files that could load in data from filesystems or APIs (or generate data directly) and do all kinds of interesting things to it before returning either an array or a hash which is then accessible via site.your_data_file_here (tack on .rows if an array).
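For example, a hedged sketch of a data file that massages some raw JSON (the filename and JSON source are hypothetical):

# src/_data/books.rb
require "json"

raw = JSON.parse(File.read("src/_data/books.json"))

raw.map do |book|
  { title: book["title"].strip, year: book["year"].to_i }
end
# per the note above: returning an array makes the data available site-wide (tack on .rows)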
My goal in creating Bridgetown was always to consider the “Ruby-ness” of the tool a feature, not a bug. (By contrast, its progenitor, Jekyll, strangely doesn’t overtly spell out that it’s a Ruby tool built by Rubyists for Rubyists.)
I’m very excited to see what crazy, experimental projects people will build using this new version of Bridgetown. Feel free to hop on over to our Discord chat room and let us know!
As you sit down to write a new class in Ruby, you’re very likely going to be calling out to other objects (which in turn call out to other objects). Sometimes this is referred to as an object graph.
The outside objects created or required by a particular class in order for it to function broadly are called dependencies. There are various schools of thought around how best to define those dependencies. Let’s learn about the one I prefer to use the majority of the time. It takes advantage of three techniques Ruby provides for us: variable-like method calls, lazy instantiation, and memoization.
First of all, what do I mean by “variable-like method calls”? I mean that this:
thing.do_something(123)
could refer either to thing (a locally-scoped variable) or thing (a method of the current object). What’s groovy about this is when I instantiate thing, I can choose how to instantiate it. I could set it up in either of two ways:
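For instance (a minimal sketch of the two setups; Thing and some_method are stand-ins):

# Option one: a locally-scoped variable
def some_method
  thing = Thing.new(:abc)
  thing.do_something(123)
end

# Option two: a method of the current object
def some_method
  thing.do_something(123)
end

def thing
  Thing.new(:abc) # a brand-new Thing every time this method gets called
end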
The beauty of the second example is that it makes thing available from more than one method—all while using the same initialization values. The problem, however, is that if I access thing more than once, it will create a new object instance each time.
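For instance (a sketch):

def some_method
  thing.do_something(123)
  thing.finalize!
end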
Oh no! The thing of the second line will be a different object than the thing of the first line! Yikes! Thankfully, we have a technique to fix that: “memoization via instance variable”.
Memoization is a technique used to cache the result of a potentially-expensive operation. In our particular case, we’re less concerned with performance-improving caching as we are with saving a unique value for reuse. We want the thing which gets used repeatedly to always refer to the same object. So let’s rewrite our thing method this way:
def thing
  @thing ||= Thing.new(:abc)
end
This code uses Ruby’s conditional assignment operator to either (a) return the value of the @thing instance variable, or (b) assign it and then return it. Now it’s assured we’ll never receive more than a single object instance of the Thing class. Let’s put it all together:
def some_method
  thing.do_something(123) # first call instantiates @thing
  thing.finalize!         # second call uses the same @thing
end

def thing
  @thing ||= Thing.new(:abc)
end
Let’s take a look at what we might do if we weren’t using the above technique and we needed thing available across multiple methods. We might use an approach like this:
class ThingWrangler
  attr_reader :thing # create a read-only accessor method

  def initialize
    @thing = Thing.new(:abc) # create @thing when this object is created
  end

  def some_method
    thing.do_something(123)
    thing.finalize!
  end
end
Arguably this is an anti-pattern: if some_method never actually gets called, thing was instantiated for nothing—wasting memory and CPU resources. In addition, it makes swapping out the Thing class challenging in tests or subclasses, because the Thing constant is hard-coded into the initialize method.
Some might recommend that you reach for the DI (Dependency Injection) pattern instead:
class ThingWrangler
  attr_reader :thing

  def initialize(thing:)
    @thing = thing # the dependency is handed to us from the outside
  end

  def some_method
    thing.do_something(123)
    thing.finalize!
  end
end
Then you’d simply need to pass an initialized object to the new method of ThingWrangler from a higher-level:
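Something along these lines (a sketch):

wrangler = ThingWrangler.new(thing: Thing.new(:abc))
wrangler.some_method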
Honestly, I really don’t like DI. It often makes for cumbersome APIs which are harder to comprehend, and it exposes implementation details to higher levels in situations where it might not even make sense. Do I really need to know that ThingWrangler doesn’t work without a Thing to rely on? Probably not. Contrast that with our friend the “lazily-instantiated memoized dependency” solution:
class ThingWrangler
  def initialize(value)
    @important_value = value # we store useful data for future use
  end

  def some_method
    thing.do_something(123) # first call instantiates @thing
    thing.finalize!         # second call uses the same @thing
  end

  def thing
    @thing ||= Thing.new(@important_value) # aha! time to use saved data
  end
end

# This level doesn't need to know about the Thing class!
# It also doesn't cause any premature instantiation of @thing:
wrangler = ThingWrangler.new(:abc)

# NOW we call a method which in turn instantiates @thing:
wrangler.some_method
What’s great about this pattern is it affords you many opportunities for customization. For example, you can write a subclass which swaps Thing out entirely! Dig this:
class HugeThingWrangler < ThingWrangler
  def thing
    @thing ||= HugeThing.new(@important_value)
  end
end

wrangler = HugeThingWrangler.new(:abc)
wrangler.some_method # uses HugeThing under the hood
Or when testing ThingWrangler where you want Thing to be a mock object under your control, you could simply stub the thing method so it returns your mock instead of the usual Thing instance.
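For instance, assuming RSpec (a hedged sketch; the double setup is illustrative):

fake_thing = instance_double(Thing, do_something: nil, finalize!: nil)
wrangler = ThingWrangler.new(:abc)
allow(wrangler).to receive(:thing).and_return(fake_thing)

wrangler.some_method # exercises the logic without ever instantiating a real Thing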
Or if you wanted to get real wild, here’s a bit of metaprogramming to add custom functionality around the original method:
ThingWrangler.class_eval do
  alias_method :__original_thing, :thing

  def thing
    puts "ThingWrangler#thing has been called!"
    obj = __original_thing
    puts "Now returning the thing object!"
    obj
  end
end
Now every time ThingWrangler accesses thing internally, your custom code will get run. (Careful out there!)
A memoized method shouldn’t be reliant on changing data, because its job is to return a single instance of Thing that gets cached and won’t ever change. So if you had code that looks like this:
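Something like this, say, where the dependency needs fresh input on every call (current_value is a stand-in):

def thing
  Thing.new(current_value) # current_value changes between calls
end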
You can’t memoize that instantiation, because you need a new Thing instance every time. However, what you could do instead is memoize the class itself! 🤯
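A sketch of that idea:

def thing_class
  @thing_class ||= Thing
end

def thing
  thing_class.new(current_value) # a new instance every time, but the class itself is swappable
end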
This still provides many of the benefits of the techniques we’ve described in terms of allowing subclasses to alter functionality, mock objects in tests, etc. Depending on the needs of your API, you might even want to create a configuration DSL to allow that Thing constant to be officially customizable by consumers of your API. (And to reiterate, still no DI techniques required!)
One other caveat is if the original memoization method is overly complicated or reliant on internal implementation details, you could get into trouble with future subclasses.
class ParentClass
  def dependency
    @dependency ||= DependentClass.new(lots, of, input, values)
  end
end

class ChildClass < ParentClass
  def dependency
    # Hmm, what if the parent class changes internally and I don't?!
    @dependency ||= AnotherDependentClass.new(what, should, go, here)
  end
end
In fact, expensive custom logic typically isn’t compatible with the memoization technique as-is. Instead, a good pattern (when possible) is for the dependency simply to be given a reference to the calling object itself:
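In other words, something like this (a sketch):

def dependency
  @dependency ||= DependentClass.new(self) # hand the dependency a reference to ourselves
end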
That way, it’s up to the dependency to glean any relevant data from the calling object in order to perform its work when required. This technique is used frequently across the Bridgetown project which I maintain.
The Lazily-Instantiated Memoization technique is a powerful one and, when used appropriately and in a consistent fashion, it will help your objects become more modular and more easily customized and tested. Consider it whenever you need to manage dependencies within your Ruby code.
I’ve had a doozy of a time writing this article. See, here’s the thing: I’ve been a Ruby programmer for a long time (and a PHP programmer before that). My other main language exposure just before becoming a Rubyist was Objective-C. That did require putting type names before variable or method signatures, but Objective-C also featured a surprising amount of duck typing and dynamism (for better or worse…Swift tried to lock things down quite a bit more).
But then there’s JavaScript / TypeScript.
My relationship with JavaScript is…complicated, at best. I actually write quite a lot of JavaScript these days. Even more to the point, a lot of the JavaScript I write is in the form of TypeScript. I don’t hate JavaScript. The modern ESM environment is quite nice in certain ways. Certainly an improvement over jQuery spaghetti code and callback hell.
But TypeScript is simply a bridge too far for me. I use it because a project I’m on requires it, but I don’t enjoy it. At times I hate it so much I want to throw my computer across the room. However, I can’t deny its appeal in one respect: those Intellisense popups and autocompletes in VSCode are very nice, as well as the occasional boneheaded mistake it warns me about.
What does any of this have to do with Ruby? I’m getting there. Bear with me just a wee bit longer, I implore you!
One interesting trend I’ve started to see as of late (at least on Twitter) is taking what’s cool about TypeScript type checking, Intellisense, and all the rest…but applying it in such a way that you’re not actually writing TypeScript, you’re writing JavaScript. What you do is use JSDoc code comments to add type hints to your file (but not as your actual code). Then you use a special mode of TypeScript type checking which will parse the JSDoc comments and interpret them as if you’d written all your type hints inline as actual code. Here’s a fascinating article all about it.
If this is starting to sound just a wee bit familiar to you, O Rubyist, it should—because that’s exactly what it’s like using YARD + Solargraph with Ruby.
Right now, I’m in the middle of an extensive overhaul of the Bridgetown project to add YARD documentation comments to all classes and methods. With the Solargraph gem + VSCode plugin installed, I get extensive type descriptions and code completion with a minimal amount of effort. If I were to type:
resource.collection.site.config
It knows that:
resource is a Bridgetown::Resource::Base
collection is a Bridgetown::Collection
site is a Bridgetown::Site
config is a Bridgetown::Configuration
And if I were to pass some arguments into a method, it would know what those arguments should be. And if I were to assign the return value of that method to a new variable, it would know what type (class) that variable is.
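To give a flavor of the YARD side of this, here’s a hedged sketch (the class and method are hypothetical, not Bridgetown’s actual API; only the @param/@return tags are real YARD syntax that Solargraph can pick up):

class ResourceWrangler
  # @return [Bridgetown::Collection] the collection this wrangler manages
  attr_reader :collection

  # @param resource [Bridgetown::Resource::Base]
  # @return [Bridgetown::Configuration] the site config reached via the given resource
  def config_for(resource)
    resource.collection.site.config
  end
end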
Livin’ the dream, right? But the one missing component of all this is strict type checking. Now the Solargraph gem actually comes with a type checking feature. But I’ve never used it, because I feel like if I were to go to the trouble of adding type checking to my Ruby workflow, I’d want something which sits a little closer to the language itself and is a recognized standard.
Sord was originally developed to generate Sorbet type signature files from YARD comments. Sorbet is a type checking system developed by Stripe, and it does not use anything specific to Ruby 3 but is instead a custom DSL for defining types. However, Sord has recently been upgraded to support generation of RBS files (Ruby Signature). This means that instead of having to write all your Ruby 3 type signature files by hand (which are standalone—Ruby 3 doesn’t support inline typing in Ruby code itself), you can write YARD comments—just like with Solargraph—and autogenerate the signature files.
Once you have those in place, you use a tool called Steep, which is the official type checker “blessed” by the Ruby core team. Steep evaluates your code against your signature files and provides a printout of all the errors and warnings (similar to any other type checker, TypeScript and beyond).
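For context, Steep is driven by a Steepfile at the project root; a minimal one looks roughly like this (written from memory, so double-check against Steep’s README before relying on it):

# Steepfile
target :lib do
  signature "sig" # where the generated .rbs files live
  check "lib"     # which Ruby code to type check
end

# then run `steep check` to see the errors and warnings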
So here’s my grand unifying theory of Ruby 3 type checking:
You write YARD comments in your code.
You install Solargraph for the slick editor features.
You install Sord to generate .rbs files.
You install Steep to type check your Ruby code.
Nice theory, and extremely similar in overall concept to all the folks writing JavaScript yet using JSDoc to add “TypeScript” functionality in their code editors and test suites.
Unfortunately the reality is…not quite there yet. It kinda sorta works—with several asterisks. Hence the reason it took me so long to even write an article about Ruby 3 typing…and I’m not even sharing examples of how to do it but instead my thought process around why you’d want to do it and what the benefits are relative to all the hassles and headaches.
In my opinion, a type checking system for Ruby is useless unless it’s gradual. I want everything “unchecked” by default, and “opt-in” specific classes or even methods as we go along. While YARD + Solargraph alone gives you this experience, adding Sord + Steep into the mix does not. There doesn’t currently seem to be a way to say only generate type signatures for this file or that and only check this part of the class or that. At least I wasn’t able to find it.
In addition, all this setup is confusing as hell to beginners. There’s no way I can take someone’s fresh MacBook Air and install Ruby + VSCode + Solargraph + Sord + Steep (perhaps also Rubocop for linting) and get everything working perfectly with a minimum of headache and fuss. I myself have seen Solargraph and/or Rubocop support in VSCode break several times for unclear reasons, and it’s been a PITA to fix.
So here’s my crazy and wacky proposal: This should all be one tool. 🤯 I want to sit down at a computer, install Ruby + AwesomeRubyTypingTool, and it all just works. That’s the real dream here. I mean, TypeScript is TypeScript. It’s not a bunch of random JS libraries you have to manually cobble together into some kind of coherent system. TypeScript—for all its gotchas and flaws—is a known quantity. You might even say it just works—at least in VSCode. (No surprise there: both VSCode and TypeScript are Microsoft-sponsored projects.)
I have no idea what it would take for the Ruby core team and the other folks out there building these various tools to get together and hash this all out. But I really hope this story gets a hell of a lot better over the coming months. Because if not…I might just kiss Ruby 3 typing goodbye.
But not Solargraph. You’d have to pry that out of my cold dead hands. 😆
For the longest time, I’ve wanted to be able to do a very simple thing in Ruby.
I’ve wanted to be able to run a block of expensive code multiple times in parallel and see all my CPU cores light up. ✨
This was very hard to do before! While Ruby does support multi-threaded code, only one thread at a time can be actively executing instructions (due to the Global Interpreter Lock, or GIL). That’s fine for apps that are often waiting on external I/O and so forth, but it doesn’t help you much if your app is primarily concerned with internal data processing. Historically, the only way you could truly achieve async parallelism in Ruby was to fork multiple processes or schedule background jobs.
Until now.
Welcome to Ractor, a brand-new method of running async code in Ruby 3.
Ractor is an experimental new class in the Ruby corelib. With ractors, Ruby has for the first time lifted restrictions on the GIL. Now you can have multiple “RILs” if you will—aka one interpreter lock per ractor (and shared between multiple threads within a single ractor if you spawn threads).
Ractor is shorthand for “Ruby actor”. The actor concept has long been established in other languages such as Elixir to handle concurrency concerns. Essentially an actor is a unit of code that executes asynchronously and uses message passing to send and receive data from the main codepath or even other actors. For more on the history and conceptual thinking behind Ruby actors, read this Scout APM blog post by Kumar Harsh.
There are a variety of patterns at your disposal when using ractors, some of which are explained in the extensive Ractor documentation.
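Before the bigger benchmark further down, here’s about the smallest message-passing sketch I can think of:

greeter = Ractor.new do
  name = Ractor.receive # block until a message arrives
  "Hello, #{name}!"     # the block's final value becomes this ractor's outgoing message
end

greeter.send("Rosa")
puts greeter.take # => Hello, Rosa!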
I’m very impressed by how simple it is to program with ractors. I’ve tried to work with Threads or gems in the past that aid with async development, and it’s always made my brain hurt with little to show for my efforts. Using the Ractor class is about as easy as I could possibly imagine (short of a one-line async keyword).
The other thing I’m impressed by is how straightforward it is to get deterministic, ordered output from multiple ractors. In the past if I tried to use threads to process data and add the outputs to an array, the array values would be out of order. If thread 1 finished after thread 2, the final array would be in 2, 1 order. With the ractors.map(&:take) pattern, you’re guaranteed that even if one ractor takes 2 seconds to process and another takes 6, you’ll still end up with an array of values in the same order in which you started up the ractors.
I wanted to create the most basic example of ractors I could think of that would also be an interesting sort of benchmark comparing to typical, synchronous Ruby code.
Here’s a script that spins up 20 ractors which perform some intensive data processing and return an output value, and the final script output is a joined array of all the ractor outputs.
require"benchmark"ractors=[]values=[]puts"Starting Ractor processing"time_elapsed=Benchmark.measuredo20.timesdo|i|ractors<<Ractor.new(i)do|i|puts"In Ractor #{i}"5_000_000.timesdo|t|str="#{t}";str=str.upcase+str;endputs"Finished Ractor #{i}""i: #{i}"# implicit return value, or use Ractor.yieldendendvalues=ractors.map(&:take)end# avg: 22 seconds, 1.6x performance over not_ractorsputs"End of processing, time elapsed: #{time_elapsed.real}"# deterministic output. nice!putsvalues.join(", ")
As you can see, using the Ractor class can be nearly as easy as working with standard lambdas. You don’t have to spend much mental overhead working through any additional data structures, scheduling, or thread concepts like mutexes. It “just works”.
And not only that, but it’s noticeably faster than a non-Ractor-based script:
require"benchmark"values=[]puts"Starting Not-Ractor processing"time_elapsed=Benchmark.measuredo20.timesdo|i|puts"In Not-Ractor #{i}"5_000_000.timesdo|t|str="#{t}";str=str.upcase+str;endputs"Finished Not-Ractor #{i}"values<<"i: #{i}"endend# 34.5 seconds, fans spun up !!!puts"End of processing, time elapsed: #{time_elapsed.real}"putsvalues.join(", ")
After a number of runs of both scripts on my tricked-out 16” MacBook Pro, the ractors exhibited a 1.6x performance increase. I’ve heard reports of other tests where converting Ruby code to use ractors resulted in 3x performance increases.
It’s very exciting to run a Ruby script and see every CPU light up in Activity Monitor, plus I noticed the single-core script made my fans spin up whereas the multi-core script kept my fans nearly inaudible.
As cool as ractors are, you can’t just flip a switch and Ractor all the things (!). There are a number of limitations around how sharing objects and passing them back and forth via messages works—limitations that make sense considering we’re now bypassing the GIL. So it really does require a whole new level of thinking around how you structure your objects, methods, and data structures in general (particularly objects which are “global” in nature). As an example of something I’m hoping to work on soon, I recently started a rewrite of the content pipeline in Bridgetown (a static site generator). When Bridgetown is processing a site, there are a number of shared objects in memory—most notably, a site object and a series of collection objects. Typically, when a particular page/post/etc. is getting loaded, it adds itself to the necessary arrays in the site or the collection. With ractors, you can’t do that! Multiple concurrent ractors running in parallel can’t be modifying shared state directly. Instead, you’d have to separate the whole process out into multiple stages: gather the metadata required to load the page, then spin up ractors to perform all the loading logic, and then use message passing to gather up the loaded pages from the ractors and add them to the shared objects.
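Purely as a hypothetical sketch (none of these method or attribute names are real Bridgetown APIs), that staged approach might look something like:

resource_paths = gather_paths_for(collection) # stage 1: collect metadata up front, synchronously

ractors = resource_paths.map do |path|
  Ractor.new(path) { |p| load_and_transform(p) } # stage 2: heavy lifting in parallel
end

ractors.map(&:take).each do |resource| # stage 3: gather results via messages…
  collection.resources << resource     # …and mutate shared state back on the main thread
end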
That’s the theory anyway. I’ll have to report back (a) if it works, and (b) if it’s a performance improvement over the regular synchronous code. But the promise is there: by architecting your app or gem around the ractor concept, your Ruby code gains the ability to shuttle intensive operations off to all your CPU cores at once—potentially yielding monumental performance increases.
Since Ruby 3 is so new and Ractor itself is marked experimental, I think it will take some time for the Ruby ecosystem at large to evolve into this exciting new direction. And it may take a few point releases for esoteric ractor bugs or gotchas to get resolved. But I have no doubt this will happen. The rewards are too tantalizing to be left on the table for long. Finally, we can look at other languages like Elixir or Go and, instead of sighing wistfully at how easy it might be to write concurrent code, we can roll up our sleeves, fire up some ractors, and watch those CPU cores light up.
Welcome to our first article in a series all about the exciting new features in Ruby 3! Today we’re going to look at how improved pattern matching and rightward assignment make it possible to “destructure” hashes and arrays in Ruby 3—much like how you’d accomplish it in, say, JavaScript—and some of the ways it goes far beyond even what you might expect. December 2021: now updated for Ruby 3.1 — see below!
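As a quick refresher, destructuring an array with plain leftward assignment has long been possible (a reconstruction of the kind of example this post builds on):

a, b, *rest = [1, 2, 3, 4, 5]
# a == 1, b == 2, rest == [3, 4, 5]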
Now there’s a method for Hash called values_at which you could use to pluck values out of a hash by their keys and return them in an array, which you could then destructure:
a, b = {a: 1, b: 2, c: 3}.values_at(:a, :b)
But that feels kind of clunky, y’know? Not very Ruby-like.
In Ruby 3 we now have a “rightward assignment” operator. This flips the script and lets you write an expression before assigning it to a variable. So instead of x = :y, you can write :y => x. (Yay for the hashrocket resurgence!)
What’s so cool about this is the smart folks working on Ruby 3 realized that they could use the same rightward assignment operator for pattern matching as well. Pattern matching was introduced in Ruby 2.7 and lets you write conditional logic to find and extract variables from complex objects. Now we can do that in the context of assignment!
Let’s write a simple method to try this out. We’ll be bringing our A game today, so let’s call it a_game:
def a_game(hsh)
  hsh => {a:}
  puts "`a` is #{a}, of type #{a.class}"
end
Now we can pass some hashes along and see what happens!
a_game({a: 99})
# `a` is 99, of type Integer

a_game({a: "asdf"})
# `a` is asdf, of type String
But what happens when we pass a hash that doesn’t contain the “a” key?
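Something like this happens (a reconstruction; the exact exception class differs slightly between Ruby 3.0 and 3.1):

a_game({b: "bee"})
# => raises NoMatchingPatternError (NoMatchingPatternKeyError on Ruby 3.1)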
Darn, we get a runtime error. Now maybe that’s what you want if your code would break horribly with a missing hash key. But if you prefer to fail gracefully, rescue comes to the rescue. You can rescue at the method level, but more likely you’d want to rescue at the statement level. Let’s fix our method:
def a_game(hsh)
  hsh => {a:} rescue nil
  puts "`a` is #{a}, of type #{a.class}"
end
And try it again:
a_game({b: "bee"})# `a` is , of type NilClass
Now that you have a nil value, you can write defensive code to work around the missing data.
Looking back at our original array destructuring example, we were able to get an array of all the values besides the first ones we pulled out as variables. Wouldn’t it be cool if we could do that with hashes too? Well now we can!
{a: 1, b: 2, c: 3, d: 4} => {a:, b:, **rest}
# a == 1, b == 2, rest == {:c=>3, :d=>4}
But wait, there’s more! Rightward assignment and pattern matching actually works with arrays as well! We can replicate our original example like so:
[1, 2, 3, 4, 5] => [a, b, *rest]
# a == 1, b == 2, rest == [3, 4, 5]
In addition, we can do some crazy stuff like pull out array slices before and after certain values:
[-1, 0, 1, 2, 3] => [*left, 1, 2, *right]
# left == [-1, 0], right == [3]
You can use rightward assignment techniques within a pattern matching expression to pull out disparate values from an array. In other words, you can pull out everything up to a particular type, grab that type’s value, and then pull out everything after that.
You do this by specifying the type (class name) in the pattern and using => to assign anything of that type to the variable. You can also put types in without rightward assignment to “skip over” those and move on to the next match.
Take a gander at these examples:
[1,2,"ha",4,5]=>[*left,String=>ha,*right]# left == [1, 2], ha == "ha", right == [4, 5][8,"yo",12,14,16]=>[*left,String=>yo,Integer,Integer=>fourteen,*right]# left == [8], yo == "yo", fourteen == 14, right == [16]
What if you don’t want to hardcode a value in a pattern but have it come from somewhere else? After all, you can’t put existing variables in patterns directly:
int = 1
[-1, 0, 1, 2, 3] => [*left, int, *right]
# left == [], int == -1 …wait wut?!
But in fact you can! You just need to use the pin operator ^. Let’s try this again!
int = 1
[-1, 0, 1, 2, 3] => [*left, ^int, *right]
# left == [-1, 0], right == [2, 3]
You can even use ^ to match variables previously assigned in the same pattern. Yeah, it’s nuts. Check out this example from the Ruby docs:
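The example goes roughly like this (lightly paraphrased from the pattern matching docs):

case {school: "high", schools: [{id: 1, level: "middle"}, {id: 2, level: "high"}]}
in {school:, schools: [*, {id:, level: ^school}]}
  puts id # => 2
end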
In case you didn’t follow that mind-bendy syntax, it first assigns the value of school (in this case, "high"), then it finds the hash within the schools array where level matches school. The id value is then assigned from that hash, in this case, 2.
So this is all amazingly powerful stuff. Of course you can use pattern matching in conditional logic such as case which is what all the original Ruby 2.7 examples showed, but I tend to think rightward assignment is even more useful for a wide variety of scenarios.
“Restructuring” for hashes and keyword arguments in Ruby 3.1
New with the release of Ruby 3.1 is the ability to use a short-hand syntax to avoid repetition in hash literals or when calling keyword arguments.
First, let’s see this in action for hashes:
a = 1
b = 2

hsh = {a:, b:}

hsh[:a] # 1
hsh[:b] # 2
What’s going on here is that {a:} is shorthand for {a: a}. For the sake of comparison, JavaScript provides the same feature this way: const a = 1; const obj = {a}.
I like {a:} because it’s a mirror image of the hash destructuring feature we discussed above. Let’s round-trip-it!
hsh1 = {xyz: 123}
hsh1 => {xyz:}
# now local variable `xyz` equals `123`

hsh2 = {xyz:}
# hsh2 now equals `{:xyz=>123}`
Better yet, this new syntax doesn’t just work for hash literals. It also works for keyword arguments when calling methods!
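For example (say_hello here is just an illustration):

def say_hello(first_name:)
  puts "Hello, #{first_name}!"
end

first_name = "Yukihiro"
say_hello(first_name:) # Hello, Yukihiro!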
Prior to Ruby 3.1, you would have needed to write say_hello(first_name: first_name). Now you can DRY up your method calls!
Another goodie: the values you’re passing via a hash literal or keyword arguments don’t have to be merely local variables. They can be method calls themselves. It even works with method_missing!
class MissMe
  def print_message
    miss_you(dear:)
  end

  def miss_you(dear:)
    puts "I miss you, #{dear} :'("
  end

  def method_missing(*args)
    if args[0] == :dear
      "my dear"
    else
      super
    end
  end
end

MissMe.new.print_message
# I miss you, my dear :'(
What’s happening here is we’re instantiating a new MissMe object and calling print_message. That method in turn calls miss_you which actually prints out the message. But wait, where is dear actually being defined?! print_message certainly isn’t defining that before calling miss_you. Instead, what’s actually happening is the reference to dear in print_message is triggering method_missing. That in turn supplies the return value of "my dear".
Now this all may seem quite magical, but it would have worked virtually the same way in Ruby 3.0 and prior—only you would have had to write miss_you(dear: dear) inside of print_message. Is dear: dear any clearer? I don’t think so.
In summary, the new short-hand hash literals/keyword arguments in Ruby 3.1 feels like we’ve come full circle in making both those language features a lot more ergonomic and—dare I say it—modern.
While you might not be able to take advantage of all this flexibility if you’re not yet able to upgrade your codebase to v3 of Ruby, it’s one of those features I feel you’ll keenly miss after you’ve gotten a taste of it, just like keyword arguments when they were first released. I hope you enjoyed this deep dive into rightward assignment and pattern matching! Stay tuned for further examples of rightward assignment and how they improve the readability of Ruby templates.
We all know that Ruby is a great language to use for the backend of your web application, but did you know you can write Ruby code for the frontend as well?
Not only that, but there are two available options to choose from when looking to “transpile” from Ruby to Javascript. These are:
Ruby2JS
Opal
My personal favorite, Ruby2JS, was created by Sam Ruby (yep, that’s his name), and it is intended to convert Ruby-like syntax to Javascript as cleanly and “natively” as possible. This means that (most of the time) you’ll get a line-by-line, 1:1 correlation between your source code and the JS output. For example:
class MyClass
  def my_method(str)
    ret = "Nice #{str} you got there!"
    ret.upcase()
  end
end
will get converted to:
class MyClass {
  myMethod(str) {
    let ret = `Nice ${str} you got there!`;
    return ret.toUpperCase()
  }
}
There’s actually a lot going on here so let me unpack it for you:
Depending on how you configure Ruby2JS, you can convert classes to old-school JS functions/constructors, or you can use modern ES6+ classes like in the example here (which I recommend).
Ruby2JS provides “filters” which you can apply selectively to your code to enable new functionality. In this example, the camelCase filter automatically converts typical Ruby snake_case to camelCase as is common in Javascript. The functions filter automatically converts many popular Ruby methods into JS counterparts (so upcase becomes toUpperCase). And the return filter automatically adds a return to the end of a method, just like how Ruby works.
String interpolation in Ruby magically becomes valid ES6+ string interpolation, and it even works with squiggly heredocs!
How do you get started using Ruby2JS? It’s pretty simple: if you’re using a framework with Webpack support (Rails, Bridgetown), you can add the rb2js-loader plugin along with the ruby2js gem, write some frontend files with a .js.rb extension, and import those right into your JS bundle. It even supports source maps right out of the box so if you have any errors, you can see the original Ruby source code right in your browser’s dev inspector!
Full disclosure: I recently joined the Ruby2JS team and built the Webpack loader, so let me know if you run into any issues and I’ll be glad to help!
The Opal project was founded by Adam Beynon in 2012 with the ambitious goal of implementing a nearly-full-featured Ruby runtime in Javascript, and since then it has grown to support an amazing number of projects, frameworks, and use cases.
There are plenty of scenarios where you can take pretty sophisticated Ruby code, port it over to Opal as-is, and it just compiles and runs either via Node or in the browser, which is pretty impressive.
Because Opal implements a Ruby runtime in Javascript, it adds many additional methods to native JS objects (strings, integers, etc.) using a $ prefix for use within Opal code. Classes are also implemented via primitives defined within Opal’s runtime layer. All this means that the final JS output can sometimes look a little closer to bytecode than traditional JS scripts.
For instance, the above example compiled via Opal would result in:
/* Generated by Opal 1.0.3 */
(function(Opal) {
  var self = Opal.top, $nesting = [], nil = Opal.nil, $$$ = Opal.const_get_qualified, $$ = Opal.const_get_relative, $breaker = Opal.breaker, $slice = Opal.slice, $klass = Opal.klass;

  Opal.add_stubs(['$upcase']);
  return (function($base, $super, $parent_nesting) {
    var self = $klass($base, $super, 'MyClass');

    var $nesting = [self].concat($parent_nesting), $MyClass_my_method$1;

    return (Opal.def(self, '$my_method', $MyClass_my_method$1 = function $$my_method(str) {
      var self = this, ret = nil;

      ret = "" + "Nice " + (str) + " you got there!";
      return ret.$upcase();
    }, $MyClass_my_method$1.$$arity = 1), nil) && 'my_method'
  })($nesting[0], null, $nesting)
})(Opal);
Thankfully, Opal too has support for source maps so you rarely need to look at anything like the above in day-to-day development—instead, your errors and debug output will reference clean Ruby source code in the dev inspector.
One of the more well-known frameworks using Opal is Hyperstack. Built on top of both Opal and React, Hyperstack lets you write “isomorphic” code that can run on both the server and the client, and you can reason about your web app using a well-defined component architecture and Ruby DSL.
As you look at the requirements for your project, you can decide whether Ruby2JS or Opal might suit your needs.
If you use Webpack and already have a lot of JS code or libraries you need to interoperate with, Ruby2JS is a capable and lightweight solution which integrates easily into your build pipeline.
If you’re starting from scratch and want all the power of a full Ruby runtime as well as opportunities to write isomorphic Ruby code, Opal might be just what the doctor ordered.
Regardless of which you choose, it’s exciting to know that we can apply our Ruby knowledge to the frontend as well as the backend for web applications large and small. It’s a great day to be a Rubyist.